View direction is behaving in a way that doesn't make sense

My understanding of View Direction is that it should give me a vector that describes the direction from the camera to the vertex. However, I haven’t gotten it to behave in a way that makes sense to me at all.

The Problem

Here’s a really simple shader that returns the dot product of the view direction and the inverted normal, both in object space. It should be fully bright when the camera faces the surface head-on, and dark when the view is perfectly perpendicular to it.

Shader "Custom/TestViewDirection"
{
    Properties { }
    SubShader
    {
        Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" "UniversalMaterialType" = "Unlit"}

        Pass
        {
            HLSLPROGRAM

            #pragma vertex vert
            #pragma fragment frag

            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionOS : POSITION;
                float4 normalOS : NORMAL;
                float2 uv : TEXCOORD0;
            };

            struct Varyings
            {
                float4 positionCS : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 viewDirLocal : TEXCOORD1;
                float3 normalOS : TEXCOORD2;
            };
            Varyings vert(Attributes IN)
            {
                Varyings OUT;

                OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);

                OUT.uv = IN.uv;

                VertexPositionInputs vertexInput = GetVertexPositionInputs(IN.positionOS.xyz);
                OUT.viewDirLocal = TransformWorldToObject(GetWorldSpaceNormalizeViewDir(vertexInput.positionWS));
                OUT.normalOS = IN.normalOS.xyz;
    
                return OUT;
            }

            half3 frag(Varyings IN) : SV_Target
            {
                return dot(IN.viewDirLocal, -IN.normalOS);
            }

            ENDHLSL
        }
    }
}

However, the way it’s behaving seems to have very little to do with the view direction, and instead relies heavily on world-space information:

(GIF: moving around the object to test view directions)

(GIF: rotating the object itself)
So evidently, I have no idea how this really works.
I tried creating a subdivided cube mesh as well, just in case normals at corners work in a weird way, but this behaviour didn’t change at all.
What am I missing here? Am I fundamentally mistaken on what the view direction is? How would I actually get the information I’m looking for here?

Background

I’ll give some further context, since I’d also like to know what the “best” approach to my end goal here would be.
This is all so that I can finish writing a stylised iris shader.
I have some math planned out that I need to run to essentially “transform” the UVs used to sample the iris texture. These equations need 3 values as input:

  1. The angle between the surface normal and the view direction.
     This I should be able to get with the following, assuming the two vectors are in the same space:
     degrees(acos(dot(-normal, viewDir)))
  2. The X, and…
  3. …the Y of the UVs, transformed to be along the axis formed by the view direction.
    This will be hard to describe, so allow me to illustrate it.
    First, since the irises are flat, let’s flatten everything along the axis formed by the normals of the surface of the mesh. (This is also how I planned to transform the view direction vector)

    Next, take the flattened view direction vector, and create “upward” and “right” direction vectors based on it.

Together, these new vectors define a 2D space that I want to be able to translate UV coordinates to and from.

With coordinates in this form, I can plug those values into my equation to transform them, and then transform the resulting coordinates back into the original UV space to get the accurately transformed UVs.
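
To make that concrete, here is a rough sketch of the kind of projection I have in mind. This is untested, every name in it is a placeholder, and it assumes the normal and view direction are normalized and in the same space, plus a known tangent/bitangent pair that lines up with the UV axes:

// Untested sketch; all names are placeholders.

// Input 1: angle between the inverted normal and the view direction.
float ViewAngle(float3 normal, float3 viewDir)
{
    return degrees(acos(dot(-normal, viewDir)));
}

// Inputs 2 and 3: the UVs expressed in a 2D basis aligned with the
// flattened view direction.
float2 ProjectUV(float3 normal, float3 viewDir,
                 float3 tangent, float3 bitangent,
                 float2 uv, float2 uvCenter)
{
    // Flatten the view direction along the axis formed by the normal.
    float3 flatView = normalize(viewDir - normal * dot(viewDir, normal));

    // "Right" is the flattened view direction expressed in UV space;
    // "up" is its 90-degree rotation.
    float2 right = normalize(float2(dot(flatView, tangent),
                                    dot(flatView, bitangent)));
    float2 up = float2(-right.y, right.x);

    // Rotate the UV offset from the iris centre into the new basis.
    float2 offset = uv - uvCenter;
    return float2(dot(offset, right), dot(offset, up));
}

// Inverse: bring the transformed coordinates back into the original UV space.
float2 UnprojectUV(float2 coords, float2 right, float2 uvCenter)
{
    float2 up = float2(-right.y, right.x);
    return uvCenter + coords.x * right + coords.y * up;
}

Since right and up are orthonormal, the inverse is just the same rotation applied in reverse, which is all UnprojectUV does.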

Concluding…

If anybody can tell me why view direction isn’t returning the values I’m expecting, or has a better/different approach altogether for getting the values I need, please do share. All help is appreciated!

You should be renormalizing the values in the fragment stage: as they get interpolated, the vectors lose their unit length, which makes the dot product return something other than what you want, since the vectors end up being different sizes. Though I don’t know if that would cause such extreme artifacts.

Just tried this, thanks. It did reduce the crazy look of it, but it hasn’t resolved the view direction not working the way I expect.


The handle is world-space there. It seems to me as if the view direction is essentially equal to (1, 0, 0) in world space for some reason, irrespective of the orientation of the actual view.
This is Unity 6000.0.29f1, for the record.
Here’s how I modified the code; I just shoved normalize() onto basically everything:

Shader "Custom/TestViewDirection"
{
    Properties { }
    SubShader
    {
        Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" "UniversalMaterialType" = "Unlit"}

        Pass
        {
            HLSLPROGRAM

            #pragma vertex vert
            #pragma fragment frag

            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionOS : POSITION;
                float4 normalOS : NORMAL;
                float2 uv : TEXCOORD0;
            };

            struct Varyings
            {
                float4 positionCS : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 viewDirLocal : TEXCOORD1;
                float3 normalOS : TEXCOORD2;
            };
            Varyings vert(Attributes IN)
            {
                Varyings OUT;

                OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);

                OUT.uv = IN.uv;

                VertexPositionInputs vertexInput = GetVertexPositionInputs(IN.positionOS.xyz);
                OUT.viewDirLocal = normalize(TransformWorldToObject(GetWorldSpaceNormalizeViewDir(vertexInput.positionWS)));
                OUT.normalOS = normalize(IN.normalOS.xyz);
    
                return OUT;
            }

            half3 frag(Varyings IN) : SV_Target
            {
                return dot(normalize(IN.viewDirLocal), normalize(-IN.normalOS));
            }

            ENDHLSL
        }
    }
}

Edit: Figured it out. I just need to add the position of the vertex back to the world-space view direction, so that it’s relative to the object’s origin when it’s transformed back into object space, and then subtract the vertex’s object-space position after the transformation. Apparently I also don’t need to invert the normal in the fragment stage. I’d still appreciate pointers on the best means of implementing the system described in the original post, though!
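
In the vertex function, that fix looks roughly like this (a sketch of what I described above, untested as written here):

// TransformWorldToObject() applies the full 4x4 world-to-object matrix,
// translation included, so feeding it a bare direction also applies the
// translation, which is why the result depended on the object's world
// offset. Re-base the direction as a position, transform, then subtract
// the vertex's object-space position again:
float3 viewDirWS = GetWorldSpaceNormalizeViewDir(vertexInput.positionWS);
OUT.viewDirLocal = normalize(
    TransformWorldToObject(vertexInput.positionWS + viewDirWS) - IN.positionOS.xyz);

// Assuming the TransformWorldToObjectDir() helper from the pipeline's
// SpaceTransforms.hlsl, this one-liner should be equivalent, since it
// skips the translation entirely:
// OUT.viewDirLocal = TransformWorldToObjectDir(viewDirWS);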

Update: it’s not just (1, 0, 0); it’s dependent on the world offset, and also seems to depend on the view direction at extreme angles. I don’t really understand what’s happening here. The little icon in this gif is at (0, 0, 0) in world space.
(GIF: testing with the icon at the world origin)