My understanding of View Direction is that it should give me a vector that describes the direction from the camera to the vertex. However, I haven’t gotten it to behave in a way that makes sense to me at all.
The Problem
Here’s a really simple shader that returns the dot product of the view direction and the inverted normal, both meant to be in object space. That means it should be fully bright when the camera faces the surface head-on, and dark when the surface is viewed edge-on (view direction perpendicular to the normal).
Shader "Custom/TestViewDirection"
{
Properties { }
SubShader
{
Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" "UniversalMaterialType" = "Unlit"}
Pass
{
HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
struct Attributes
{
float4 positionOS : POSITION;
float4 normalOS : NORMAL;
float2 uv : TEXCOORD0;
};
struct Varyings
{
float4 positionCS : SV_POSITION;
float2 uv : TEXCOORD0;
float3 viewDirLocal : TEXCOORD1;
float3 normalOS : TEXCOORD2;
};
Varyings vert(Attributes IN)
{
Varyings OUT;
OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);
OUT.uv = IN.uv;
VertexPositionInputs vertexInput = GetVertexPositionInputs(IN.positionOS.xyz);
OUT.viewDirLocal = TransformWorldToObject(GetWorldSpaceNormalizeViewDir(vertexInput.positionWS));
OUT.normalOS = IN.normalOS.xyz;
return OUT;
}
half3 frag(Varyings IN) : SV_Target
{
return dot(IN.viewDirLocal, -IN.normalOS);
}
ENDHLSL
}
}
}
However, the way it behaves seems to have very little to do with the view direction; instead it appears to depend heavily on world-space information.
So evidently, I have no idea how this really works.
I tried creating a subdivided cube mesh as well, just in case normals at corners work in a weird way, but this behaviour didn’t change at all.
What am I missing here? Am I fundamentally mistaken on what the view direction is? How would I actually get the information I’m looking for here?
Background
I’ll give some further background, since I’d also like to know what the “best” approach to my end goal here would be.
This is all so that I can finish writing a stylised iris shader.
I have some math planned out that I need to run to essentially “transform” the UVs used to sample the iris texture. These equations need 3 values as input:
- Angle between surface normal and view direction
I should be able to get this with the following, assuming the two vectors are normalized and in the same space:
degrees(acos(dot(-normal, viewDir)))
- The X, and…
- Y of the UVs, transformed to be along the axis formed by the view direction.
This will be hard to describe, so allow me to illustrate it.
First, since the irises are flat, let’s flatten everything along the axis formed by the surface normals of the mesh. (This is also how I planned to transform the view direction vector.)
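In shader terms, I’m picturing the flattening step as something like this (just a sketch; viewDir and normal here stand for whatever view direction and surface normal I end up with, both normalized and in the same space):

// Remove the component of the view direction that lies along the surface normal,
// leaving only the part that lies in the plane of the iris.
float3 viewDirFlat = normalize(viewDir - normal * dot(viewDir, normal));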
Next, take the flattened view direction vector, and create “upward” and “right” direction vectors based on it.
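Roughly like this, continuing from the flattened vector above (again just a sketch):

// "Up" is the flattened view direction itself; "right" is perpendicular to it,
// still lying in the plane of the iris.
float3 upDir    = viewDirFlat;
float3 rightDir = normalize(cross(normal, upDir));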
These new vectors should define a new 2D space that I want to be able to translate the 2D UV coordinates to and from.
With coordinates in this form, I can plug those values into my equation to transform them, and then transform the resulting coordinates back into the original UV space to get the accurately transformed UVs.
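Put together, the round trip I have in mind looks something like this. It’s only a sketch: viewDirTS, uv and MyIrisTransform are placeholders, and I’m assuming I can get the view direction into tangent space so that its XY components line up with the UV axes, which I haven’t verified.

// 2D versions of the "up" and "right" axes from before, expressed in UV space.
float2 up2D    = normalize(viewDirTS.xy);
float2 right2D = float2(up2D.y, -up2D.x);       // perpendicular to up2D

float2 centered = uv - 0.5;                     // work around the centre of the iris

// Into the view-aligned 2D space: coordinates along the new axes.
float2 p = float2(dot(centered, right2D), dot(centered, up2D));

p = MyIrisTransform(p);                         // placeholder for my actual equations

// Back out: rebuild the UV offset from the transformed coordinates.
float2 uvOut = p.x * right2D + p.y * up2D + 0.5;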
Concluding…
If anybody can tell me why view direction isn’t returning the values I’m expecting, or has a better or different approach altogether for getting the values I need, please do share. All help is appreciated!