How does Unity handle normal maps internally?

Update:

At the time of writing, Unity uses only the alpha and green channels for most normal maps, storing the x-fraction in alpha and the y-fraction in green. See Wolfram’s answer for details.

Here is the relevant code. It is crude, but seems to produce adequate results. currentBuffer is an array of integers which acts as a heightmap.

for (int row = 1; row < pixelsY - 1; row++) {
	for (int column = 1; column < pixelsX - 1; column++) {
		int pos = (row * pixelsX) + column;
		byte x = (byte) (127 + (((currentBuffer[pos - 1]
			+ currentBuffer[pos + 1]) / 4) >> intensityShift));
		byte y = (byte) (127 + (((currentBuffer[pos - pixelsX]
			+ currentBuffer[pos + pixelsX]) / 4) >> intensityShift));
		nmTemp[pos] = new Color32(0, y, 0, x);
	}
}
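For anyone who wants to experiment with the packing outside Unity, here is a Python sketch of the same per-pixel arithmetic (not Unity API; `pack_normal_map` is a hypothetical helper name). It reproduces the snippet faithfully, including the neighbour sum, and packs x into alpha and y into green just like the `Color32(0, y, 0, x)` call above:

```python
def pack_normal_map(height, width, rows, intensity_shift):
    """height: flat row-major list of ints; returns a flat list of (r, g, b, a) bytes."""
    out = [(0, 0, 0, 0)] * (width * rows)
    for row in range(1, rows - 1):
        for col in range(1, width - 1):
            pos = row * width + col
            # Same arithmetic as the C# snippet: sum of horizontal neighbours for x,
            # vertical neighbours for y, shifted down and biased around 127.
            x = 127 + (((height[pos - 1] + height[pos + 1]) // 4) >> intensity_shift)
            y = 127 + (((height[pos - width] + height[pos + width]) // 4) >> intensity_shift)
            # Pack y into green and x into alpha, mirroring Color32(0, y, 0, x).
            out[pos] = (0, y & 0xFF, 0, x & 0xFF)
    return out

# A flat heightmap yields the "neutral" packed value 127 in green and alpha.
pixels = pack_normal_map([0] * 16, width=4, rows=4, intensity_shift=2)
print(pixels[5])  # (0, 127, 0, 127)
```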

Below is a screenshot of a scene using a normal map generated with the above code to simulate ripples in a sink. The shader is Unity’s built-in Transparent/Bumped Specular.


I am working on a procedurally generated normal map that is updated at runtime. I have been able to create the texture at runtime and update it every frame. However, the shader renders the normal map in a similar way to textures that aren’t imported specifically as normal maps in the editor.

A texture format for normal maps doesn’t seem to exist in the TextureFormat enumeration, and I am assuming that x, y, and z in tangent space don’t correspond directly to r, g, and b in the texture’s colour values. Does anyone know how Unity uses the r, g, b, and a properties of Color when dealing with normal maps?

Normal maps are handled by the shader, as far as I know. Why not just download the built-in shaders and check for yourself, modifying them according to your needs?

In %ProgramFiles%\Unity\Editor\Data\CGIncludes\UnityCG.cginc you’ll find the following:

inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(SHADER_API_GLES) && defined(SHADER_API_MOBILE)
    return packednormal.xyz * 2 - 1;
#else
    fixed3 normal;
    normal.xy = packednormal.wy * 2 - 1;
    normal.z = sqrt(1 - normal.x*normal.x - normal.y * normal.y);
    return normal;
#endif
}
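The non-mobile branch can be mirrored in plain Python to see what the math is doing (a sketch of the same arithmetic, not Unity code; `unpack_normal` is a hypothetical helper name). x is read from alpha, y from green, and z is reconstructed from the fact that the normal has unit length:

```python
import math

def unpack_normal(packed):
    """packed: (r, g, b, a) floats in [0, 1]. Mirrors the non-mobile branch above."""
    r, g, b, a = packed
    x = a * 2.0 - 1.0  # x comes from alpha (packednormal.w)
    y = g * 2.0 - 1.0  # y comes from green (packednormal.y)
    # z is reconstructed from unit length; clamp guards against rounding below zero.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# The "neutral" pixel (green = alpha = 0.5) unpacks to the straight-up normal.
print(unpack_normal((0.0, 0.5, 0.0, 0.5)))  # (0.0, 0.0, 1.0)
```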

The shaders call UnpackNormal on the normal map sample before applying the value to o.Normal.

Tseng’s answer contains the information you are looking for: for GLES and mobile shaders, the mapping is as one would expect, a direct mapping from [0…1] rgb to [-1…1] xyz.

However, in all other cases, it appears Unity uses a specialized encoding, storing the x-fraction in a and the y-fraction in g; red and blue are ignored. I guess this is done to allow encoding of 16-bit normal maps, where it would store the 16-bit x-fraction in b+a, and the y-fraction in r+g.

In these cases, the z-fraction is reconstructed automatically, which is possible since the length is always 1. However, for that to work, your x and y components need to be the actual x/y-fractions of the normal vector. For example, for a diagonal normal of (0.57735, 0.57735, 0.57735) (which has a length of 1), the packed value for x and y is (0.57735*0.5+0.5)*255 ≈ 201, which you’d then store in green and alpha.
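The worked example above can be checked numerically; this short Python sketch packs the diagonal component into an 8-bit channel, then round-trips it back and reconstructs z the way the shader does:

```python
import math

x = 0.57735  # x- and y-fraction of the diagonal unit normal from the example
packed = round((x * 0.5 + 0.5) * 255)  # value stored in green and alpha
print(packed)  # 201

# Round-trip through the 8-bit channel and reconstruct z from unit length:
xr = (packed / 255) * 2 - 1
zr = math.sqrt(1 - xr * xr - xr * xr)
print(xr, zr)  # both close to 0.57735, up to 8-bit quantization error
```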

Normal maps are difficult to write. The RGB values loosely line up with the XYZ components of the normal. It isn’t exactly one-to-one, but each channel does relate to a component of the normal. That’s part of the reason that parts of a normal map without any added detail are blue: that colour corresponds to (0, 0, 1), which is the forward vector for the normal. This whole paragraph is a little simplistic, but it gets the idea across pretty well.

In old-school shaders, you had to do some matrix multiplication with the surface normal and the normal-map value to get the actual normal. Surface shaders do that last step for you: you just pass in a normal, and they perform the matrix multiplications they need.

Creating normal maps isn’t easy. The CG page sheds some light on what they are and how to make them.