object space normal mapping

I would like the “Reflective Bumped Unlit” shader to use object space normal mapping instead of tangent space as it is now. Is this in any way possible?

Thanks

This is definitely possible. You can get the source for the shader in the sticky at the top of the forum. You’ll have to remove the tangent space generation bit and just read the normal straight out of the normal map.
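
To give a rough idea, here’s a minimal sketch of what that ends up looking like: an unlit reflective shader that reads the object space normal straight out of the map, with no tangent basis at all. This is not the actual source from the sticky; the property names and the way the reflection is combined are my own assumptions, so treat it as a starting point only.

```
// Sketch only: object space normal mapping for an unlit reflective shader.
// Property names (_MainTex, _BumpMap, _Cube, _ReflectColor) are assumptions.
Shader "Custom/Reflective Bumped Unlit ObjectSpace Sketch" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _BumpMap ("Object Space Normalmap", 2D) = "gray" {}
        _Cube ("Reflection Cubemap", Cube) = "" {}
        _ReflectColor ("Reflection Color", Color) = (1,1,1,0.5)
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _BumpMap;
            samplerCUBE _Cube;
            fixed4 _ReflectColor;

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                o.worldPos = mul(_Object2World, v.vertex).xyz;
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                // Read the object space normal straight out of the map.
                // Don't use UnpackNormal here; that expects a tangent space map.
                float3 nObj = tex2D(_BumpMap, i.uv).rgb * 2.0 - 1.0;
                // Rotate into world space for the reflection lookup.
                // (Fine for rotation/uniform scale; non-uniform scale would
                // need the inverse transpose instead.)
                float3 nWorld = normalize(mul((float3x3)_Object2World, nObj));
                float3 viewDir = normalize(i.worldPos - _WorldSpaceCameraPos);
                float3 refl = reflect(viewDir, nWorld);
                return tex2D(_MainTex, i.uv) * texCUBE(_Cube, refl) * _ReflectColor;
            }
            ENDCG
        }
    }
}
```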

OK! thanks, will try that

Interesting, I did not realize this was possible.

Does object space normal mapping have the same limitation as tangent space, namely being ‘glitchy’ where UV seams or different smoothing groups are involved?

Other than that, is there any reason not to use object space; any drawbacks or visual differences compared to tangent space?

Yes. You can’t deform the mesh.

I’m pretty sure that discontinuities across UV seams are due to the tangent space basis not being continuous across the seam. Object space normal mapping should be continuous as long as the normals corresponding to the UV coordinates are continuous. That is: if you use your object space normal map as a base colour to test it, and you don’t see any sharp lines, then there will be no sharp lines when it is used as a normal map, either.
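
(For example, a quick way to do that base colour test is to temporarily make the fragment shader just output the map. A tiny sketch, where _BumpMap and the v2f input are assumed names from whatever shader you’re testing:)

```
// Debug sketch: show the object space normal map as plain colour.
// Any hard line visible here will also show up when it's used as a normal map.
fixed4 frag (v2f i) : COLOR {
    return tex2D(_BumpMap, i.uv);
}
```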

Jessy’s point is not quite correct: strictly speaking, if you use a skinned mesh and do not rotate any bones, you’ll get the same results with object space as with tangent space normal mapping. There are some cases where this will look fine (telescoping sections are all I can think of at the moment), and some where it will look bad (animating a flag by moving bones, but not rotating them).

Additionally, object space normal mapping has the following problems:

• Re-using object space normal maps is difficult. To do it properly, the normal map must be used with each texel facing in the same direction as the original. That is, you can’t rotate the normal map in object space and expect it to look right.
• Unity doesn’t generate object space normal maps, so you’ll have to find a program that does.

The benefit is that object space normal mapping is faster than tangent space normal mapping.

I’m not sure what you’re saying about the flag. If you push one bone and pull another, I don’t see how object space normal mapping would work at all, except where the verts are 100% weighted to one bone. You say it would look bad, which implies to me that you’re saying it won’t have the same results either.

Also, since it’s not dependent on geometry, it allows LOD transitions to have less effect on lighting. It can work pretty nicely for terrain LODs if you’re generating a height map from something like World Machine and using that to generate your primary normals.

Sorry, the flag example isn’t really self-evident. I was thinking of this guy’s problem. If you aren’t rotating your bones, then you aren’t rotating any tangents or normals, which means that tangent space won’t rotate, either. In this case, it will look the same as object space normal mapping, in that the normals won’t rotate. In the case of a flag, this won’t look good. In the case of a piston or something extending (where there is no rotation involved), things might look OK if the bones are just being translated.

Doesn’t Skin Normals fix that person’s problem?

(And in so doing, make it so that tangent and object space normal maps do in fact have different results?)

Thank you for the valuable info.

On a related topic, does anyone know where I can find a complete solution for generating tangents? I’ve been able to find some basic formulas, but nothing foolproof.

As I understand it, if Skin Normals is off, normals will not be updated at all. If it is on, they will be updated using the transpose of the inverse of their bone’s bone-to-object matrix hierarchy. That is, they will only change if the bones rotate. Otherwise, Unity would have to re-calculate normals every frame, which is an expensive operation and wouldn’t look good, as seams would pop in and out of existence as the mesh deformed.
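
To illustrate the transform being described (just a sketch of the idea, not Unity’s actual skinning code; in practice this happens on the CPU, and the uniform name here is made up):

```
// Sketch: skinning a normal with the transpose of the inverse of the
// bone-to-object matrix, assumed precomputed and supplied as a uniform.
float3x3 _BoneInverseTranspose; // transpose(inverse(boneToObject)), hypothetical

float3 SkinNormal (float3 normal) {
    // For a pure rotation the inverse transpose equals the rotation itself,
    // so a bone that only translates leaves its normals unchanged.
    return normalize(mul(_BoneInverseTranspose, normal));
}
```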

Unfortunately, there is nothing foolproof due to the hairy ball theorem (not even joking). I imagine that decent modelling programs should have usable solutions built in, though. I’ve heard the term “smoothing groups” thrown around, although I don’t know much about them.

Smoothing groups split normals, anyway. So imagine using a setting of 180 in Unity’s “Automatically calculate normals” box by default, but then you get to selectively choose where to use a value of zero instead. In terms of vertices rendered, it’s the same effect as putting a UV seam there. I assume this helps with calculating tangents, as the effective “continuity” of the mesh ends when you make edges sharp, but I don’t know much about working with tangents yet; my modeler, Blender, doesn’t even have a tangent visualization mode – just face and vertex normal visualization modes.

Perhaps I should have been more specific. I realize there is simply no way to make tangent space work with UV seams, but what I’m looking for is something more than just a basic formula for calculating tangents; something that would deal with common pitfalls like mirrored mapping, cylindrical mapping, etc. Basically, I want to be able to calculate tangents the same way Unity does when importing meshes.

Are you sure about this? I was under the impression that smoothing groups mean simply splitting/duplicating vertices.

How would one go about splitting normals in Unity when generating a mesh from scratch?

Same thing. A vertex can’t have more than one normal, so in order to have two normals in the same place, you need two vertices. To a person modeling something, there’s no use in thinking about it this way. You just use some tool to “make a hard edge”.

You just need to put multiple vertices in the same place, and draw triangles accordingly. You can even overwrite the result with normals that aren’t accurate to the surface, but I don’t know if there’s a use for that kind of thing.

I could be wrong, but I would assume that Unity creates tangents based on existing (or generated) normals and existing UV coordinates. This method will not fix mirrored mapping or anything. It just infers the tangents from the orientation of the UV mapping.
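
For what it’s worth, the usual per-triangle starting point looks something like this (a Cg-style sketch just to show that the tangent follows the direction of increasing U; this is not Unity’s importer code):

```
// Sketch: tangent of one triangle from its positions and UVs.
float3 TriangleTangent (float3 p0, float3 p1, float3 p2,
                        float2 uv0, float2 uv1, float2 uv2) {
    float3 e1 = p1 - p0;
    float3 e2 = p2 - p0;
    float2 d1 = uv1 - uv0;
    float2 d2 = uv2 - uv0;
    // Breaks down for degenerate or mirrored UVs, which is part of the problem.
    float r = 1.0 / (d1.x * d2.y - d2.x * d1.y);
    return normalize((e1 * d2.y - e2 * d1.y) * r);
}
// Per-vertex tangents are then averaged over the triangles sharing each vertex
// and orthonormalized against the vertex normal, which is exactly where mirrored
// or cylindrical mappings cause trouble.
```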

Splitting normals means splitting vertices, when you’re dealing with mesh data. A given vertex can only have one normal, so the entire data set (colour, position, UV, etc.) must be duplicated before the second normal can be added.

This depends on what sort of shape you are generating, but due to the above requirement, you need to split vertices for hard edges.

There is a case where using smoothing groups won’t result in the normals changing. That’s when the hard edge is in the middle of a planar surface. But using a smoothing group there is worthless and a waste of resources.

(Of course, it won’t be a waste of resources if you need a UV seam or hard edge vertex color split there, but there’s no point in having the clutter of a smoothing group edge there, either. As I was saying in my last post, you could overwrite the normals at this edge, but there’s no point to it unless you have some kind of crazy normal-based effect in mind.)

I have been trying to remove the tangent space generation bit in the “Reflective Bumped Unlit” shader, but with no success, unfortunately. I have basically no idea what I’m doing when it comes to Cg.

I would really appreciate it if someone could post the shader with tangent space generation removed.

Thanks