I’m working on shaders that need to combine textures and output them to a file. Everything was fine until I got to the normal maps. If shaders expect a proper normal map with a (128, 128, 255) base color as their final input, which is why you have to call UnpackNormal() on the AG normal map, then why is the result of UnpackNormal so dark?
I’ve tried gamma-correcting it, along with every multiply/subtract/add combination I could think of, and of course normalizing. And while I’ve managed to get the base color close to a traditional normal map, the precision in each channel came out very compressed, altering the curvature of the normal map.
I’m about ready to just mark my normal maps as regular textures for this so I can avoid it altogether, but it would be good to know what exactly is going on under the hood that causes this, and whether anyone knows the proper math to go from the AG normal back to RGB. Thanks.
Thanks, but I know the math for UnpackNormal. My question is why it creates a normal map that looks like what I’m showing. I plugged the unpacked normal into my Albedo so I could see how it looks, and I’m also rendering it to a RenderTexture and a file to inspect it externally… UnpackNormal doesn’t seem to be turning it back into a regular normal map completely.
Because that’s not what UnpackNormal does. UnpackNormal converts a color (RGB) value into a normal vector. If you want to see how the map looks, sample it directly without unpacking.
No, that’s not what UnpackNormal does. It doesn’t even read the blue channel; if you look at the function, it only assigns new data to it. By default, Unity converts your RGB normal maps to a more memory-efficient format on import, discarding the blue channel and moving the red channel into alpha, because the DXT5 format’s compression has higher precision in the green and alpha channels. UnpackNormal takes the green and alpha channels, recomputes the blue channel from them, and puts the alpha back into the red channel.
If you just sample the normal map directly like that, there will only be two unique texture channels: R, G, and B will all be the same as each other, and alpha will be a different one. That’s the stored format; only two channels are actually used.
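For reference, the DXT5nm branch of the unpack looks roughly like this (paraphrased from memory of UnityCG.cginc, so treat it as a sketch rather than the exact source):

```
// Sketch of Unity's DXT5nm unpack: X lives in alpha (w), Y in green (y),
// and Z is never read from the texture, only recomputed.
fixed3 UnpackNormalDXT5nm(fixed4 packednormal)
{
    fixed3 normal;
    normal.xy = packednormal.wy * 2 - 1;                      // decode X,Y from [0,1] to [-1,1]
    normal.z = sqrt(1 - saturate(dot(normal.xy, normal.xy))); // rebuild Z assuming unit length
    return normal;
}
```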
Yes, this was one of the first things I tried, but it does not give a perfect replication. The B channel has errors after reconstruction, though it’s as close as it can get, I guess. I’m just going to export the raw AG normals instead of putting them through that precision loss.
Btw, you don’t need to construct a new float3 to add to an existing float3; you can just write + 0.5 and it will be added to all three components. The same goes for any other vector or matrix type.
You can also save a bit of performance by multiplying by 0.5 instead of dividing by 2. Division generally takes more processing time than multiplication.
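For example, assuming the conversion was the usual (n + 1) / 2 repack, both points combine into something like this (hypothetical helper name, not the original code):

```
// Hypothetical helper: maps a [-1,1] normal back to a [0,1] color.
// The scalar 0.5 broadcasts across all three components, and the
// multiply replaces the divide-by-2.
fixed3 PackNormalToColor(fixed3 n)
{
    return n * 0.5 + 0.5;   // same result as (n + fixed3(1, 1, 1)) / 2
}
```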
I just pasted most of that conversion from somewhere. It’s not like you need to optimize something that will hardly ever be used.
If you wanted to save a few cycles, then I guess you could convert more directly from AG → RGB, something like the sketch below. I’m not sure in what scenarios it would matter, though.
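A rough sketch of what I mean (function name is made up):

```
// Sketch: go straight from the packed AG texel to a displayable RGB color,
// skipping the intermediate unpack/repack round trip.
fixed3 AGToDisplayRGB(fixed4 packednormal)
{
    fixed2 xy = packednormal.wy * 2 - 1;        // decode X,Y to [-1,1]
    fixed z = sqrt(1 - saturate(dot(xy, xy)));  // rebuild Z
    // Alpha (X) and green (Y) are already in [0,1], so only Z needs remapping.
    return fixed3(packednormal.w, packednormal.y, z * 0.5 + 0.5);
}
```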
Besides, unless the compiler was written as a student assignment, it should optimize all of that automatically and will most likely do a better job than hand-tweaking. But no, I wasn’t looking at the generated assembly code.