Abnormal Normal Maps -or- Trick of the Light

Hey guys. What began as an experiment in using normal maps and point lights to add a shiny bas-relief effect to an otherwise simple 2D sprite game has turned into something that has me questioning the soundness of normal mapping as a technology.

Basically, it goes like this:

- Create a material with a color map and a normal map. Apply this material to a cube.

- Set up the camera to look down at the top of your cube.

- Put a point light above the cube.

- Move the point light around a bit.

As you can see, the flat, normal-mapped surface looks shiny and bumpy and stuff.

But now put the light off to one side, so that your cube is lit from one side. Then rotate the cube around its Y axis.

The lit side rotates with the object! This is most noticeable when the object being rotated has a big sphere on it.

What the heck does this mean? Well, each pixel of the normal map is just a color that represents the surface normal at that pixel. So, if the light hits it from above, the light appears to bounce off in whatever direction that pixel's color dictates.

Rotate your object, and each pixel (which still represents light bouncing off in a single, fixed direction) moves around on the screen. But it's still casting light in that same direction. It makes sense when you think about it.
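To make that concrete, here's a minimal sketch of the idea in Python (all values hypothetical; a real engine does this per pixel in a shader):

```python
# Minimal sketch: decode a normal-map texel and light it (Lambert).
# All values are hypothetical; real engines do this per pixel in a shader.

def decode_normal(rgb):
    """Map color channels in [0..1] to a direction vector in [-1..1]."""
    return tuple(2.0 * c - 1.0 for c in rgb)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

texel = (0.5, 0.5, 1.0)        # the classic "flat" normal-map blue
normal = decode_normal(texel)  # -> (0.0, 0.0, 1.0), pointing straight out
light_dir = (0.0, 0.0, 1.0)    # light shining straight at the surface

brightness = max(0.0, dot(normal, light_dir))
print(brightness)  # 1.0 (fully lit). If the stored normal is never
                   # rotated with the object, this value never changes:
                   # the frozen lit side described above.
```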

If you have the game StarScape, you can even notice how they address this problem. Each ship apparently has a collection of 32 or 64 normal maps, one baked for each direction the ship can face. If you rotate your ship very, very slowly, you can notice the artifact: turn 2 or 3 degrees and your shadows don't change, until you pass a certain threshold and the shadows snap.
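That threshold behavior is exactly what you'd expect from snapping the ship's heading to the nearest pre-baked map. A guess at the idea in Python (StarScape's actual count and code are unknown to me):

```python
# Sketch: pick one of N pre-baked normal maps by heading.
N = 32                                 # assumed; could be 64

def map_index(heading_degrees):
    """Snap a heading to the nearest of N pre-rendered normal maps."""
    step = 360.0 / N                   # 11.25 degrees per map at N = 32
    return int(round(heading_degrees / step)) % N

print(map_index(0.0))  # 0
print(map_index(3.0))  # 0 -- a 3-degree turn changes nothing...
print(map_index(8.0))  # 1 -- ...until you cross the halfway threshold
```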

But here’s the part that REALLY messes with my head:

3D games like Halo DON’T take this into account! A normal map is a normal map, period, and it never rebakes or changes colors according to the lighting situation around it!

The weird thing is, I can't understand why this doesn't ruin the immersion of a 3D game's graphics, but it's painfully obvious in a 2D game. If Master Chief walks under a light and the top of his helmet has a little dent in it, that dent should become a spike if he turns his head!

Does this happen all the time and we just ignore it? Or is there some fundamental difference between 2D graphics and 3D graphics? From what I can tell, a cube in Unity with a normal-mapped texture applied to it should be fundamentally the same as a medium-poly character mesh with a normal map UV-mapped onto it. So why is it painfully obvious in my 2D application, but nobody complains about it in 3D games?

Normal maps are usually defined in tangent space or object space. In Unity we use tangent space because it's the most flexible. In practice, that means the normal map's normals are relative to the surface.

So when you rotate an object, the normal map is rotated with it, so to speak. (This is done at runtime in a pixel shader / vertex shader.)

So, bottom line, it is still relative to the rotated object. The nice thing about tangent space is that even when you have a skinned character, updating the tangents means the normal map takes the deformation into account.
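Roughly, the shader builds a basis from the surface's tangent, bitangent, and normal and runs every sampled normal through it. A sketch of the math in Python (not Unity's actual shader code):

```python
# Sketch: transform a tangent-space normal into world space via the
# tangent/bitangent/normal (TBN) basis. Shaders do this in Cg/HLSL;
# the math is the same.

# Per-vertex basis for a surface facing straight up (+Y), with a
# hypothetical tangent along +X and bitangent along +Z.
tangent   = (1.0, 0.0, 0.0)
bitangent = (0.0, 0.0, 1.0)
normal    = (0.0, 1.0, 0.0)

def tangent_to_world(n_ts):
    """world = TBN * n_ts, with the basis vectors as matrix columns."""
    return tuple(tangent[i] * n_ts[0] +
                 bitangent[i] * n_ts[1] +
                 normal[i] * n_ts[2] for i in range(3))

n_ts = (0.0, 0.0, 1.0)         # "flat" tangent-space normal
print(tangent_to_world(n_ts))  # (0.0, 1.0, 0.0): it follows the surface
# Rotate the object and the basis vectors rotate with it, so the sampled
# normal is re-expressed correctly every frame.
```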

Whoops, yeah. I just figured that out. It turns out that the problem I was experiencing was caused by Unity and NVIDIA's NormalMapTools plugin for Photoshop having different ideas about which color channel means which direction. Either that, or the default UV map for the Cube primitive flips your textures. I'm not sure which.

Anyway, I resolved the issue by simply flipping the textures along the X axis (X scale set to -1).
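(For anyone hitting the same mismatch: if the root cause really is two tools disagreeing about which way a channel points, the direct fix is to invert that channel in the map itself. A hedged sketch with Pillow; the filenames, and the assumption that green is the offending channel, are mine:)

```python
# Hedged sketch: invert one channel of a normal map with Pillow.
# Tools sometimes disagree on which way the green (Y) channel points;
# inverting the offending channel reconciles the two conventions.
# Filenames here are hypothetical.
from PIL import Image, ImageOps

img = Image.open("normal_map.png").convert("RGB")
r, g, b = img.split()
g = ImageOps.invert(g)                         # flip the Y convention
Image.merge("RGB", (r, g, b)).save("normal_map_fixed.png")
```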

This looks beautiful. :smile: Now I need to learn how to churn out copies of my cubes and script them to move around, and this will be the best game evar!

I still don’t understand what you are talking about.

Screenshot?

Sure, um…

How do you take a screenshot on the Mac?

Apple + Shift + 3 for full screen, Apple + Shift + 4 for a specific screen section.

It doesn’t make any sense to me either. I know that the issue is resolved now, but it sounds like a lot of effort went into this theory. If the theory isn’t completely destroyed, I would like to hear about it.

As far as I can tell, it was a theory that wasn't taking tangent space into account. But that approach wouldn't work for dynamic, interactive scenes.

Yeah, I figured it was something like that.

Theory is totally destroyed, yeah. :smile:

See, before Unity, I tried this in Flash, which needed a technique to fake normal maps using displacement filters. (There is no Tangent Space in a purely 2D app.)

So as soon as I noticed weird lighting results, I started trying to figure it out from an Object Space point of view. Then I thought “Hey, wait a minute, why isn’t this effect obvious in all 3D games?”

The rest is a high-viscosity sludge of conjecture and caffeine, the likes of which solidifies into sticky forum threads as soon as it hits the air. You can attempt strength checks once per round to try to break free of my nonsense.

Sorry.

  1. 3D games tend to be full of textures that have some lighting baked into them, which doesn’t actually look right when viewed from arbitrary angles.

  2. Normal maps (and bump maps) should be lit dynamically and shouldn’t exhibit the problem you’re describing. It’s lightmaps that will have the problem you’re describing.

Fundamentally, people are pretty forgiving of mismatched lighting in graphics, especially fast-moving graphics. Discrepancies tend to be more obvious in 2D, although it's still quite common to see text layouts with inconsistent text shadowing, or UI elements with inconsistent lighting.

E.g. Firefox's widgets are lit from the top left (like Mac OS Classic, but not like OS X, which is lit from above). But Firefox's toolbar icons are lit from above. So, just in one widely used app with a fairly well-liked UI, there's a lighting discrepancy.

Oh, I know. The widgets that ship with OS X are mostly consistent, but the more third-party programs I download, the weirder and weirder my toolbar looks. :stuck_out_tongue:

I wouldn't think there would be any kind of problem until you add a dynamic light source to the lightmapped scene (e.g. a flashlight). Is there any other potential problem?

Also, what is the standard way of taking care of this? I can't wrap my head around a way to mix lightmapping and normal mapping that would work. (Then again, I've only started thinking about it while posting this…)

…can you just have a normal map on the lightmapped object and blend the new light color offset with a shader?
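(For what it's worth, the blend I'm imagining would be something like this per texel; just the math, with all the names and values made up:)

```python
# Hypothetical per-texel blend of a baked lightmap with one dynamic,
# normal-mapped light. Names and values are made up for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

albedo      = (0.8, 0.6, 0.5)   # color-map texel
baked_light = (0.3, 0.3, 0.35)  # lightmap texel (static lighting)
light_color = (1.0, 0.9, 0.7)   # the dynamic light (e.g. a flashlight)

normal    = (0.2, 0.1, 0.97)    # decoded from the normal map
light_dir = (0.0, 0.0, 1.0)     # direction toward the dynamic light

ndotl = max(0.0, dot(normal, light_dir))

# Baked and dynamic terms add before the albedo multiply; clamp to 1.
final = tuple(min(1.0, a * (bl + lc * ndotl))
              for a, bl, lc in zip(albedo, baked_light, light_color))
print(final)
```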

Lightmapping is a kludge. It looks OK most of the time but if there’s any kind of dynamic lighting, it’s guaranteed to be wrong.

The long-term solution is dynamic shadows and normal/bump maps, which should be pretty much correct pretty much always*, but that's far more expensive to render.

* Well, actually not when a bumped surface is in shadow, since the bumps are another kludge.

Light mapping also lets you bake arbitrarily clever lighting effects (e.g. radiosity) into scenes, and most of the time folks won’t notice or care about inconsistencies.

It’s all a case of what looks good (for the time!), and what you can get away with.

When I first saw Ultima Underworld I was blown away. It looks pretty darn horrible by today’s standards.

Going way down the line … specular is a kludge. Most high end rendering for film these days avoids specular like the plague (it’s just a cheap approximation of reflection, right?).

There’ll always be more details to clean up.