Hey guys. What began as an experiment in using normal maps and point lights to add a shiny bas-relief effect to an otherwise simple 2D sprite game turned into something that has me questioning the soundness of normal mapping as a technology.
Basically, it goes like this:
-Create a material with a color map and a normal map. Apply this material to a cube.
-Set up the camera to look down at the top of your cube.
-Put a point light above the cube.
-Move the point light around a bit.
Try it and you'll see: as the light moves, the flat surface of the normal map looks shiny and bumpy and stuff.
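If you'd rather see the numbers than set up the scene, here's a rough stand-in for the per-pixel shader math (plain Python, not Unity code; the texel color and light positions are made-up values just for illustration):

```python
# Rough stand-in for what the shader does per pixel. Not Unity code;
# the texel color and light positions are made-up example values.

def decode_normal(r, g, b):
    # A normal-map texel stores a direction as a color: 0..255 maps to -1..1.
    return (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, to_light):
    # Diffuse brightness: how directly this pixel's normal faces the light.
    n, l = normalize(normal), normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# One pinkish texel whose normal "leans" toward +X, on a surface facing +Z.
texel_normal = decode_normal(200, 128, 200)

# Slide the point light across the top of the surface.
for light_pos in [(-2, 0, 2), (0, 0, 2), (2, 0, 2)]:
    print(light_pos, round(lambert(texel_normal, light_pos), 3))
```

The brightness of that one pixel swings from 0 to 1 as the light slides across it, and that's the whole bas-relief effect.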
But now put the light off to one side, so that your cube is lit from one side. Then rotate the cube around its Y axis.
The lit side rotates with the object! This is most noticeable when the normal map being rotated has a big sphere baked into it.
What the heck does this mean? Well, each pixel of the normal map is just a color that represents the surface normal at that pixel. So when light hits it, the pixel shines as if it were a little facet pointing in whatever direction that color encodes.
Rotate your object, and each pixel moves around on the screen, but it still encodes that same fixed direction. Each pixel keeps catching light as if it were facing the way it did before you rotated anything. It makes sense when you think about it.
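Here's the same toy math showing the artifact next to what you'd actually want (again plain Python with made-up values; spinning a 2D sprite on screen is a rotation around the Z axis):

```python
import math

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(normal, to_light):
    n, l = normalize(normal), normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

def rotate_z(v, degrees):
    # Spinning a sprite on screen rotates it around the Z (screen-normal) axis.
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a), z)

texel_normal = (0.57, 0.0, 0.57)   # a texel whose normal leans toward +X
light_pos = (2, 0, 2)              # fixed light, off to one side

for angle in [0, 90, 180]:
    # What I'm seeing: the stored normal never rotates, so this texel stays
    # fully lit no matter how far the sprite has turned.
    stuck = lambert(texel_normal, light_pos)
    # What it should be: rotate the sampled normal along with the sprite.
    correct = lambert(rotate_z(texel_normal, angle), light_pos)
    print(angle, round(stuck, 2), round(correct, 2))
```

The "stuck" column never moves off 1.0, even at 180 degrees when that bump should be facing away from the light.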
If you have the game StarScape, you can even notice how they address this problem. Each ship apparently has a collection of 32 or 64 normal maps, one baked for each direction the ship can face. If you rotate your ship very, very slowly, you can notice the artifact: turn 2 or 3 degrees and your shadows don't change, until you pass a certain threshold and the shadows snap.
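I'm only guessing at their implementation, but the lookup could be as simple as this (the 32-map count and all the names are my assumptions, not anything dug out of StarScape itself):

```python
# Hypothetical version of what StarScape might be doing: quantize the
# ship's heading down to one of N pre-baked normal maps.
N_MAPS = 32
STEP = 360.0 / N_MAPS            # 11.25 degrees of heading per baked map

def map_index(heading_degrees):
    # Pick the pre-baked normal map nearest to the current heading.
    return int(round(heading_degrees / STEP)) % N_MAPS

# Turn a few degrees at a time: the chosen map (and so the shading)
# holds still, then snaps once you cross the halfway point (5.625 deg).
for heading in [0, 2, 4, 5, 6, 8, 11, 12]:
    print(heading, map_index(heading))
```

With 32 maps the shading is never off by more than about 5.6 degrees, which is apparently small enough that you only catch it when you go looking for it.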
But here’s the part that REALLY messes with my head:
3D games like Halo DON'T take this into account! A normal map is a normal map, period; it never gets rebaked or recolored to match the lighting situation around it!
The weird thing is, I can't understand why this doesn't ruin the immersion of a 3D game's graphics, but it's painfully obvious in a 2D game. If MasterChief walks under a light, and the top of his helmet has a little dent in it, that dent should become a spike when he turns his head!
Does this happen all the time and we just ignore it? Or is there some fundamental difference between 2D graphics and 3D graphics? From what I can tell, a cube in Unity with a normal-mapped texture applied to it should be fundamentally the same as a medium-poly character mesh with a normal map UV-mapped onto it. So why is it painfully obvious in my 2D application, while nobody complains about it in 3D games?