recalculate normal from object to world the simplest way

Currently I use these two instructions to recalculate normal from object to world in my shader

           float4 wnorm = mul(unity_ObjectToWorld, v.normal);
           float3 objectNorm = normalize(wnorm);

Could anyone advise me whether this is optimal, or whether it could be optimised, i.e. reduced to a single matrix operation using one of the shader’s built-in matrices like UNITY_MATRIX_MVP, UNITY_MATRIX_MV, UNITY_MATRIX_V, UNITY_MATRIX_P, or similar? I have just no idea which one would be appropriate.

Use the built-in function:
float3 worldNormal = UnityObjectToWorldNormal(v.normal.xyz);

Internally this is the same as:
float3 worldNormal = normalize(mul(v.normal.xyz, (float3x3)unity_WorldToObject));

On the surface it’s very similar to what you’re using, with the same performance, but what you’re doing is also wrong for normals, and it might not even compile on some platforms.

At the most basic level, v.normal should be defined as a float3 or half3 value in the appdata struct, but unity_ObjectToWorld is a float4x4. A directional vector only needs 3 components, not 4, and there’s no way to actually mul a float4x4 with a float3, so either the float3 needs to become a float4, or the float4x4 needs to become a float3x3.
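For reference, here is a sketch of a typical appdata struct (the struct and member names are assumptions; yours may differ) showing why the types don’t line up:

```hlsl
// A minimal appdata sketch; your struct's name and members may differ.
struct appdata
{
    float4 vertex : POSITION; // positions use 4 components (w = 1)
    float3 normal : NORMAL;   // directions only need 3 components
};

// mul(unity_ObjectToWorld, v.normal) tries to mix a float4x4 with a
// float3, which some compilers reject and others "fix" by guessing.
```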

For most desktop platforms / GPUs the shader compiler will fix this for you and guess you actually want to do one of the following, assuming v.normal is a float3:
mul(unity_ObjectToWorld, float4(v.normal.xyz, 0.0)) // returns a float4 with w = 0
or
mul((float3x3)unity_ObjectToWorld, v.normal.xyz) // returns a float3

These have identical results for the first three components, though the latter will be slightly more efficient; some shader compilers can also smartly convert the first one into effectively the second anyway. However, many mobile platforms and consoles will outright error with the code you have, or even guess the wrong “corrected” code and apply positional offsets to the directional vector.

It should be noted that the second of those corrected cases returns a float3, not a float4 like your wnorm variable. Using a float4 there might result in the w component being set to 0, or it might cause the shader compiler to guess the wrong corrected code. You’re also then normalizing that float4 when it should be a float3, which again may produce incorrect results. Generally it won’t, as the w component should be 0, but it means the normalize might be slightly slower than it needs to be, or the compiler will again be guessing that you actually want a normalized float3; either way, it’s not something you should rely on.

Fixing the code issues outlined above you should be using something like this:
float3 wnorm = mul((float3x3)unity_ObjectToWorld, v.normal); // v.normal is a float3, mul returns a float3
float3 objectNorm = normalize(wnorm);

I object to the use of “objectNorm” as the variable name here, by the way, as it is still a world space normal, but the name makes it look like an object space normal.

So that solves the issues with the shader code itself, and that code now matches the built-in UnityObjectToWorldDir() function, but the results are still going to be wrong if the object is non-uniformly scaled! When dealing with the normals of a non-uniformly scaled object you need to use the inverse transpose of the object to world matrix. Luckily this is easy to do, as Unity provides the inverse in the unity_WorldToObject matrix, and swapping the order of the vector and matrix in the mul function effectively transposes the matrix. That brings us to the code Unity’s function uses. And since it’s not really any less efficient to do it the right way all of the time, there’s no reason not to. As for why you want to use the inverse transpose, check this link:
https://paroj.github.io/gltut/Illumination/Tut09%20Normal%20Transformation.html
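Putting it all together, a minimal vertex function might look like this (a sketch only; the v2f struct and member names are assumptions based on a typical unlit shader):

```hlsl
// Sketch of a vertex function using Unity's built-in helpers.
// UnityObjectToWorldNormal handles the inverse transpose for you,
// so non-uniformly scaled objects come out correct as well.
v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Equivalent to: normalize(mul(v.normal, (float3x3)unity_WorldToObject));
    o.worldNormal = UnityObjectToWorldNormal(v.normal);
    return o;
}
```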


Thank you very much for the answer, your very thorough explanation, and for pointing out errors I was not aware of.