Normal maps from images alone?

I understand how normal maps are produced by, say, 3ds Max. What I don’t understand is how they can be generated from plain images. I mean, how does Unity get direction information from an image?

It doesn’t.

It assumes that the image is a height map. All bright pixels are meant to appear “raised” above the surface, and all dark pixels to appear “sunken” into it. From there, it can compare the height of any given pixel to its neighbouring pixels and work out the slope between them, which it then encodes into a normal map.
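The neighbour-comparison idea above can be sketched in a few lines. This is a minimal illustration, not Unity’s actual implementation: it takes a 2D grid of heights in the range 0..1, measures the slope toward the right and lower neighbours, and turns that slope into a unit normal vector.

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D grayscale height map (values 0..1) to per-pixel normals.

    For each pixel, compare it to its right and lower neighbours to get
    the surface slope, then build and normalise a tangent-space normal.
    `strength` exaggerates or flattens the effect (an assumed knob,
    similar in spirit to the "bumpiness" sliders in import settings).
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Finite differences; clamp lookups at the edges.
            dx = height[y][min(x + 1, w - 1)] - height[y][x]
            dy = height[min(y + 1, h - 1)][x] - height[y][x]
            # A steeper slope tilts the normal further away from straight up.
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals
```

On a perfectly flat height map every normal comes out as (0, 0, 1), i.e. pointing straight out of the surface; a brightness gradient tilts the normals against the direction of increasing height.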

Isn’t that a bump map as opposed to a normal map? (They are different.)

Thanks

I hope I understand your question correctly.

The same way as producing it in 3D. In 3D, your low-poly mesh is the base and the high-poly mesh is the offset; that offset gets encoded into colours, which represent the normal of each pixel relative to the surface of the base object. In 2D, the greyscale value 128 is your base, and the colour information in your picture is the offset. Again, that offset gets encoded into colours representing the normal of each pixel relative to the base.

@Tiles
so 0–255 for red, 0–255 for green and 0–255 for blue are the ranges.

128 is the neutral value for each channel; values above 128 tilt the normal one way along that axis, values below 128 tilt it the other way.

Red, green and blue correspond to the x, y and z components of the normal.

The normal map is just an approximation
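The channel mapping described above can be shown concretely. A sketch of the common convention (assumed here, not quoted from Unity docs): each normal component in [-1, 1] is remapped to a byte in 0..255, so 128 sits at zero and a flat-surface normal (0, 0, 1) becomes the familiar light blue.

```python
def encode_normal(nx, ny, nz):
    """Map a unit normal's components from [-1, 1] into 0..255 RGB bytes.

    128 is the neutral value: a flat surface normal (0, 0, 1)
    encodes to the typical light-blue colour (128, 128, 255).
    """
    def to_byte(c):
        # Remap [-1, 1] -> [0, 1], scale to 255, and clamp.
        return max(0, min(255, round((c * 0.5 + 0.5) * 255)))
    return (to_byte(nx), to_byte(ny), to_byte(nz))

print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

This is also why untextured areas of a tangent-space normal map look uniformly blue: every pixel there is just the encoded straight-up normal.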

thanks

CrazyBump looks at differences in contrast between neighbouring pixels, I’d say.