Before U3, I had made a shader for Transparent/Bumped Specular, where the diffuse map contained the specular in its alpha, and the bump map contained the transparency in its alpha. But with the new UnpackNormal() system, the w/a component of the vector gets changed internally when the texture is set to “Normal map”. And then this happens in the unpack normal function:
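For reference, the U3 UnpackNormal in UnityCG.cginc looks roughly like this (reconstructed from memory, so exact types and defines may differ slightly); note that it reads x from the alpha/w channel, which is why the alpha gets repurposed:

```cg
// Approximate sketch of U3's UnpackNormal from UnityCG.cginc
inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(SHADER_API_GLES)
    // Plain RGB normal map path
    return packednormal.xyz * 2 - 1;
#else
    // DXT5nm path: x is stored in alpha (w), y in green; z is reconstructed
    fixed3 normal;
    normal.xy = packednormal.wy * 2 - 1;
    normal.z = sqrt(1 - normal.x * normal.x - normal.y * normal.y);
    return normal;
#endif
}
```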
Now, I still want to use the approach I did before, but I’m not sure what happens if I simply DON’T set the texture as a normal map, and then just use the old approach directly on the sample:
o.Normal = tex2D(_BumpMap, IN.uv_BumpMap)*2-1;
Obviously Unity had their reasons for the change, but what were they? Why do they suddenly need to invoke a sqrt operation to obtain the z component, when OpenGL isn’t the API?
And I also constantly get warnings to fix the “normal map” setting of the texture if I don’t set it as such, which is just annoying.
So what to do… I really want to take control of the .a component for my own use (to save texture memory), but I also don’t want to do something that has consequences I don’t realize.
You can use your old system fine, no consequences; you just have to remove the references to UnpackNormal in the shader and replace them with the old n*2-1. I believe doing so will stop the alerts about the texture not being set as a normal map. At least that’s how it appears to work, as I’m having trouble going the other way: not using UnpackNormal in the shader, but still wanting Unity to flag any texture that isn’t set to normal map.
So as long as you set your normal map’s type to ‘Texture’ (or maybe ‘Advanced’ would be better), you can still use your old shader code, including using the alpha, assuming nothing else stops it from being used in U3.
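In a surface shader that would look something like this (a minimal sketch; _BumpMap, _MainTex and the Input struct UVs follow the usual surface-shader conventions, and the bump texture’s type is left as plain ‘Texture’ in the importer):

```cg
void surf (Input IN, inout SurfaceOutput o)
{
    fixed4 bump = tex2D(_BumpMap, IN.uv_BumpMap);
    // Old-style unpack: use the rgb channels directly instead of UnpackNormal()
    o.Normal = bump.rgb * 2 - 1;
    // The alpha channel is untouched, so it stays free for your transparency data
    o.Alpha = bump.a;
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
}
```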
As to why this was done: it’s to allow DXT5 compression on the normal map with minimal loss of quality. DXT5 stores 1 byte per pixel versus 3 for uncompressed RGB24, so this can make a huge saving when using many normal maps, and the quality is almost indistinguishable from the RGB24 format.
If you do a search online for DXT5nm normal map compression, you should come across the original papers/documents about the studies into how to compress normal maps and why this was the most favoured solution.
Thanks for the help. And yeah, I knew about doing the *2-1 on the .rgb alone, but I’m glad you concerned yourself with giving me details anyway. (You may want to remove the “Normal” in front of the “Alpha”, though; I’m sure it’s just a quick copy/paste mistake in your reply.) And it’s great to finally understand the reasons for the change. I’m curious, though, how this saving in memory can make up for the increased calculation on each fragment when a sqrt is used to obtain the final vector. Normally I run low on fps before video RAM when making graphics myself; I thought sqrt operations should be kept to a minimum.
Actually, it’s more expensive both computationally and memory-wise; it’s just that the result is usually substantially better in fidelity to the uncompressed texture.
That’s the thing: if you used DXT1 or DXT5 with the old method, you would get compression artifacts. I think a lot of people switched to uncompressed normal maps for this reason. So on average, the new method saves memory at the expense of fragment shader ops, because it allows everyone to use compressed normal maps.
Exactly what I had understood. So maybe farfarer was talking about the increased memory usage of DXT5 compared to DXT1, rather than the memory saved by using DXT5 instead of uncompressed.
So, what is the payoff, performance-wise? How much more does a fragment calculation containing a normal lookup cost when it invokes a square root along the way? Double? A third more? How do you describe such an increase in calculation time? Some numbers on this would be nice, to weigh the pros and cons.
Yeah, sorry. I meant it’s more expensive than DXT1, cheaper than uncompressed. I didn’t realise people actually used uncompressed textures in real games (unless they’ve got memory coming out of their ears), so I just assumed it was obvious I was talking about compressed. Guess not.
As far as I can tell, the difference for calculating the blue channel is fairly insignificant; it’s a very widely used method of normal map compression, and I’m sure if it were that inefficient it wouldn’t be used so much. You could set up a test situation to find out, though.