There are two options. Since the normals are in view space, you can either transform the “world up” vector from world space into view space, or transform the normals from view space into world space.
fixed4 frag (v2f i) : SV_Target
{
float3 normals = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
float3 world_up = mul((float3x3)UNITY_MATRIX_V, float3(0.0, 1.0, 0.0)); // transform "world up" from world to view space
float t = dot(normals, world_up);
return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
}
fixed4 frag (v2f i) : SV_Target
{
float3 normals = mul(decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv)), (float3x3)UNITY_MATRIX_V); // vector-first mul applies the transpose, transforming from view to world space
float3 world_up = float3(0.0, 1.0, 0.0);
float t = dot(normals, world_up);
return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
}
Note, there’s no built-in view to world space matrix, but the world to view space matrix is guaranteed to be an orthogonal matrix (a fancy way of saying the matrix isn’t oddly scaled or warped in any way), and a handy property of an orthogonal matrix is that its transpose and inverse are identical. If you use mul() with the vector before the matrix, it applies the transpose of that matrix, so that’s effectively the view to world matrix.
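As a quick sketch of that property (viewDir here is an assumed view space direction, not a variable from the code above):

```hlsl
// For a rotation-only (orthogonal) matrix, transpose == inverse.
// mul(v, M) treats v as a row vector, which is equivalent to mul(transpose(M), v),
// so these two lines produce the same world space direction:
float3 worldDirA = mul(viewDir, (float3x3)UNITY_MATRIX_V);
float3 worldDirB = mul(transpose((float3x3)UNITY_MATRIX_V), viewDir);
```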
Whoops, right. This is a post process, so UNITY_MATRIX_V will be an identity matrix (no rotation or translation, and a uniform scale of 1) during the blit. You want to use unity_WorldToCamera instead, like this:
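(A sketch of that corrected shader, reconstructed from the explanation below and assuming the same frag setup as above:)

```hlsl
fixed4 frag (v2f i) : SV_Target
{
    float3 normals = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
    // unity_WorldToCamera still works during a post process blit,
    // but it's +Z forward, so flip Z to match view space (see below)
    float3 world_up = mul((float3x3)unity_WorldToCamera, float3(0.0, 1.0, 0.0)) * float3(1.0, 1.0, -1.0);
    float t = dot(normals, world_up);
    return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
}
```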
To explain that * float3(1,1,-1): Unity’s view space is -Z forward, but the unity_WorldToCamera matrix is not, it’s +Z forward. The reason is that matrix comes from the Camera game object’s transform (without scale) rather than the camera’s actual view matrix. So you have to flip the Z direction to make it match view space.
And I’ve posted longer descriptions of the matrices, like I did here:
Though that image isn’t Unity specific; it’s describing the common setup for OpenGL (which Unity mostly adheres to). It also doesn’t talk about the difference between the “camera” and “view” matrices I mentioned above, nor other things like Unity’s use of a reversed Z depth. Nor the fact that if you’re not rendering using OpenGL, the NDC z range is 0.0 to 1.0, not -1.0 to 1.0 (which is unique to OpenGL, as it’s one of the first graphics APIs, and literally every graphics API after didn’t do that, because it’s dumb).