How to unify precision between Depth and DepthNormals?

If you try the Ambient Obscurance image effect in Unity, you will notice a sharp difference in quality when the camera’s depthTextureMode is set to DepthTextureMode.Depth versus DepthTextureMode.DepthNormals.
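
For a quick repro, something like this sketch flips the camera between the two modes at runtime (attach it to the camera running the AO effect; the class name and hotkey are just for illustration):

```csharp
using UnityEngine;

// Toggle between the two depth texture modes with the space bar to
// compare AO quality side by side. Attach to the camera running the effect.
[RequireComponent(typeof(Camera))]
public class DepthModeToggle : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Camera cam = GetComponent<Camera>();
            cam.depthTextureMode =
                cam.depthTextureMode == DepthTextureMode.Depth
                    ? DepthTextureMode.DepthNormals
                    : DepthTextureMode.Depth;
        }
    }
}
```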

The problem: in Depth mode you get a nice 16/24-bit nonlinear depth texture, just like the real depth buffer. But in DepthNormals mode you get fixed-point depth with 1/255 precision. If your camera’s far plane is, say, 1300 meters, that works out to 5.1 meters of depth precision when calculating Ambient Obscurance. The result: massive AO shadow acne.
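
To make the arithmetic concrete (this just restates the numbers above, using the 1/255 figure):

```csharp
using UnityEngine;

public class DepthStepCheck : MonoBehaviour
{
    void Start()
    {
        // Depth quantized to 1/255 steps over [0, far] means the smallest
        // representable depth delta grows linearly with the far plane.
        float far = GetComponent<Camera>().farClipPlane; // e.g. 1300 m
        float step = far / 255f;                         // 1300 / 255 ≈ 5.1 m
        Debug.Log("DepthNormals depth step: " + step + " m");
    }
}
```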

This problem affects any post effect that needs to sample depth when the game is in DepthNormals mode.

[25197-depth+is+smoother+than+depthnormals.jpg|25197]

Hi Ben

I admire your brevity.

These are good questions.

It’s easy enough to whittle everything down to one mode for simple projects. This project, though, has to run on lots of different machines with different configurations.

For instance, on fast machines DepthNormals mode is enabled to help smooth gridlines on Unity terrain caused by ddx/ddy calculations. But if you switch from Depth to DepthNormals, you’re suddenly greeted with a ton of noisy acne on the ground. “So what?” I’m getting showstopper bug reports about this.
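
The mode selection itself is trivial; the pain is what happens after the switch. Something like this, where the shader-level check is just a stand-in for whatever “fast machine” heuristic you actually use:

```csharp
using UnityEngine;

// Quality-gated setup, roughly as described above. The shader-level check
// is a stand-in for a real "fast machine" heuristic.
[RequireComponent(typeof(Camera))]
public class DepthModeByQuality : MonoBehaviour
{
    void Start()
    {
        GetComponent<Camera>().depthTextureMode =
            SystemInfo.graphicsShaderLevel >= 30
                ? DepthTextureMode.DepthNormals // smooths terrain gridlines
                : DepthTextureMode.Depth;       // cheaper fallback
    }
}
```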

Anyway, since this is Unity Answers, I’ll propose an answer. The acne can be curbed by artificially compressing the camera dimensions in the SAO discrimination stage. As I mentioned, a far plane of 1300 meters means 5.1-meter deltas with Unity’s DepthNormals format. The nice thing is that SAO can be calculated in a sanitized space, so I just shrink the far plane in the SAO stage. (If you do this, _Radius needs to be converted, since it was probably tuned in world space; see the sketch below.)
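
Here’s a C#-side sketch of what I mean. The _FarPlane uniform name and the material field are placeholders for whatever your SAO shader actually exposes; _Radius is the SAO radius mentioned above:

```csharp
using UnityEngine;

// Sketch of the far-plane compression workaround. Assumes an
// OnRenderImage-based SAO pass whose shader linearizes the packed
// DepthNormals depth against a _FarPlane uniform (placeholder name).
[RequireComponent(typeof(Camera))]
public class SaoFarPlaneCompression : MonoBehaviour
{
    public Material saoMaterial;       // the SAO pass material
    public float worldRadius = 1.0f;   // _Radius as tuned in world space
    public float compressedFar = 100f; // artificially shrunk far plane

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Camera cam = GetComponent<Camera>();

        // Feed the SAO stage a smaller far plane than the camera's real
        // one, so reconstructed depth deltas quantize at compressedFar / 255
        // instead of cam.farClipPlane / 255 in the occlusion comparisons.
        saoMaterial.SetFloat("_FarPlane", compressedFar);

        // _Radius was tuned against real world-space distances, so scale
        // it into the compressed space to keep the occlusion footprint.
        saoMaterial.SetFloat("_Radius",
            worldRadius * (compressedFar / cam.farClipPlane));

        Graphics.Blit(src, dst, saoMaterial);
    }
}
```

The scale factor compressedFar / cam.farClipPlane keeps the sample radius proportionally the same relative to scene depth once everything is interpreted in the compressed space.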