I spent a while getting displacement to work. There’s not much info out there that I could find, the docs are nearly non-existent (Unity - Manual: Surface Shaders with DX11 / OpenGL Core Tessellation is about all there is, and that’s just a few code snippets with no real detail), and there are a lot of gotchas. Thought I’d drop a few notes in case it helps the next person searching for it.
I tried a bit to get a vector displacement map to work, but they’re a lot more expensive and I don’t need them right now, so I stuck with regular displacement. (Maybe I’ll put together a sample scene, but I’ve spent too much time on this already–maybe later…)
Quick points (“TLDR”, read below if you want the “why”):
- Use xNormal. I got usable results out of it where I couldn’t out of Mudbox or ZBrush.
- Set your scene to linear (Project Settings → Player → Other Settings → Color Space).
- Use floating-point displacement maps. In xNormal, set “Normalization” (in “Height map” options) to “Raw FP values” and save as .EXR to get this.
- Leave displacement at 1. You don’t need a magic displacement factor when you use float maps (but see below).
- If your model has a file scale on it (or you’ve set a scale factor), you have to adjust for this in the shader. My models are at 0.01 scale since they were exported from Maya (centimeter scale), so I have to multiply displacement by 0.01 in the shader to match. If your model explodes when you turn displacement on, check this.
- Set your displacement textures to type “Default”, turn off “sRGB”, and turn off compression.
Why linear?
Unity apparently won’t let you sample floating-point textures without color conversions unless your scene is set to linear color space. (Project Settings → Player → Other Settings → Color Space) In gamma mode, float textures are converted to sRGB when you sample them–even if “sRGB (Color Texture)” is unchecked. Unity devs claim this is “by design”. Why would you design it so non-color textures have color conversions applied? If it’s by design, then it’s a design bug.
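If you want to see concretely why a color conversion wrecks displacement data, here are the standard sRGB transfer functions written out in plain Python (which direction Unity actually applies in gamma mode doesn’t really matter; either one remaps the texel nonlinearly):

```python
# Standard sRGB transfer functions (IEC 61966-2-1), written out to show
# why any color-space conversion mangles distance data.
def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

d = 0.05  # a plausible cm-scale displacement texel
print(linear_to_srgb(d))  # ~0.248, roughly 5x too big as a distance
print(srgb_to_linear(d))  # ~0.0039, more than 10x too small
```

(Note that these functions aren’t even defined for the negative values a float displacement map contains, which is another way to see that no color conversion belongs anywhere near this data.)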
xNormal notes:
- If I just exported the low-res and high-res meshes whole, xNormal couldn’t figure them out. It would map one half of the character to the opposite side and give garbage. I worked around this by lopping off the left half of the character (in both the low- and high-res mesh) before giving it to xNormal, and the problem went away. I also had some random garbage that went away when I disabled “Closest hit if ray fails”.
- Make sure you have non-overlapping UVs, at least within the side of the mesh you give to xNormal. Overlapping UVs don’t work for generating maps.
- xNormal will output an RGB EXR, which is a lot bigger than it needs to be. This can just be converted to a single-channel greyscale file, but I haven’t chased down a tool that can do that yet.
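For what it’s worth, the channel reduction itself is trivial once the pixels are in memory; the real work is the EXR I/O. The sketch below uses a synthetic numpy array as a stand-in (reading and writing the actual file would go through an EXR-capable library such as the OpenEXR Python bindings or OpenCV, which I’m only naming as possibilities, not something I’ve verified):

```python
import numpy as np

# Stand-in for xNormal's output: a float32 RGB image where all three
# channels carry the same height value. (Shape and values are made up
# for illustration; real I/O would go through an EXR-capable library.)
rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 1)).astype(np.float32).repeat(3, axis=2)

# Confirm the channels really are duplicates, then keep just one:
# same information, one third the data.
assert np.array_equal(rgb[..., 0], rgb[..., 1])
assert np.array_equal(rgb[..., 0], rgb[..., 2])
grey = rgb[..., 0]
print(grey.shape)  # (4, 4)
```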
Troubleshooting:
- Look at your displacement map after you generate it. If you’re generating a map from a low-res mesh to a high-res mesh, think about how much distance separates the two meshes. If it’s a human-scale character, it’s probably less than 1cm. If your model is in cm scale, you should be seeing numbers less than 1 (less than .1 in my case). If I see a bright white patch with numbers like 5, I know something is wrong with the map, since there’s no place where the high-res mesh is 5cm from the low-res one.
- Remember that your models may not be in the same scale as your Unity scene. I’m exporting from Maya and my characters are in cm scale, and Unity’s “Use File Scale” option scales these to meters on import.
- You might have precision issues if your models are in meter scale. At cm scale I have displacements like 0.05. In meter scale that would be 0.0005, which is only about 8× the smallest normal value a 16-bit float can represent (about 6.1e-5), so anything much finer starts losing precision to subnormals. I don’t have this problem since I’m at cm scale, so I don’t know if it’s actually an issue in practice or how to fix it if it is.
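The two checks above can be sketched in a few lines of numpy (the map here is a synthetic stand-in for a real displacement texture):

```python
import numpy as np

# Synthetic stand-in for a cm-scale displacement map: a human-scale
# character baked low-res to high-res should have gaps well under 1 cm.
rng = np.random.default_rng(0)
disp_cm = rng.uniform(-0.08, 0.08, (256, 256)).astype(np.float32)

# Sanity check: a patch of values like 5 would mean a 5 cm gap
# between the meshes, which is almost certainly a baking error.
print(disp_cm.min(), disp_cm.max())
assert np.abs(disp_cm).max() < 0.1

# Precision check: the smallest normal 16-bit float is 2**-14 (~6.1e-5).
# A 0.05 cm value rescaled to meters (0.0005) is still a normal number,
# but finer detail falls into the subnormal range, where representable
# values are a fixed 2**-24 (~6e-8) apart.
print(np.spacing(np.float16(0.05)))     # ulp near a cm-scale value
print(np.spacing(np.float16(0.00005)))  # ulp in the subnormal range
```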
Other stuff:
- Displacement maps can be stored as integer or floating-point images. If they’re grey with numbers hovering around 50% grey, it’s an integer map. You need to subtract 0.5 from these and then multiply them by a scale that you’re supposed to get when you generate the map. You have to enter this value every single time you generate a map. That’s an awful workflow. Floating-point maps are much nicer and simpler: they encode the actual object-space distance, with no displacement factors. If you load them in Photoshop, you’ll see they’re mostly black, with positive and negative numbers (which you can see in the info panel if you hover over the image).
- Texture compression (at least the compressors Unity uses) doesn’t work well with displacement maps, even on “high” quality, so leave it off. Don’t count on “Crunch compression” either: it isn’t lossless archive-level compression, it’s an extra lossy pass on top of DXT/ETC block compression.
- In my shader, I removed the displacement factor that was in the examples (float displacement doesn’t need it) and hardcoded a 0.01 factor to adjust for model scale:
// Float maps store real object-space distances, so no displacement factor;
// the 0.01 compensates for the cm-to-m file scale on the model.
float d = tex2Dlod(_DispTex, float4(v.texcoord.xy, 0, 0)).r;
v.vertex.xyz += d * v.normal * 0.01f;
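To make the integer-vs-float decode concrete, here’s a numpy sketch (the 2.0 scale factor is a made-up example of the value the baker reports, not something from a real map):

```python
import numpy as np

# Integer-style map: texels hover around 0.5 ("50% grey") and need the
# bake-time scale factor to become real distances. The 2.0 here is a
# hypothetical example of the factor the baker reports.
raw = np.array([0.5, 0.525, 0.475], dtype=np.float32)
scale = 2.0
decoded = (raw - 0.5) * scale

# Float map: the texel already is the object-space distance, so the
# shader's displacement factor can stay at 1.
float_map = np.array([0.0, 0.05, -0.05], dtype=np.float32)

assert np.allclose(decoded, float_map, atol=1e-6)
```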