I’m kind of new to shaders, so I’m sure this is a stupid question, but I can’t seem to get it to work right after searching everywhere and reading everything I can.
I’m trying to make a shader that renders the thickness of an object. I’ve been trying a two-pass approach: rendering the back-side distance, grabbing it with GrabPass, then rendering the front distance minus that. But everything I try ends up with the object changing color as I move toward or away from it.
I’m on Unity 5.1.1f1 and don’t really have a problem with requiring higher shader models.
Any suggestions for how to approach this would be welcome.
Do you have some screenshots? The general approach seems fine, though it does require you to render to an HDR target. An 8-bit target will either clamp or be imprecise. Also note that it only works for a single layer of convex geometry.
I would actually not use GrabPass, but just render the back-side distance to a 16- or 32-bit red-only render target. Then during front-side rendering you can retrieve those values.
Getting it to work for multiple layers is a bit more tricky.
It changes color because the depth buffer uses a nonlinear distribution. It uses a lot more precision closer to the camera, where it’s more important, and less in the distance. If you just use the raw value, the difference between the front and back depths will change because it isn’t in “linear space,” so to speak. The values get compressed (darkened) in the distance and expanded (brightened) as you get closer to the camera. You can see the precision loss: the banding issues get worse as you get farther from the camera.
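For example, if you sample the camera’s depth texture, UnityCG.cginc has helpers that undo that nonlinear mapping for you. A minimal sketch (the exact sampling setup depends on how you pass in the screen position):

```c
// Sketch: linearizing a raw depth-buffer sample using UnityCG.cginc helpers.
sampler2D _CameraDepthTexture;

float LinearDistanceAt(float4 screenPos)
{
    // Raw, nonlinear depth exactly as stored in the depth buffer.
    float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture,
                                           UNITY_PROJ_COORD(screenPos));
    // LinearEyeDepth converts it to a linear distance in world units along
    // the view axis, so the difference between two samples no longer depends
    // on how far the object is from the camera.
    return LinearEyeDepth(rawZ);
}
```

That said, if you follow the advice below and write your own linear depth into a custom render target, you never store the nonlinear value in the first place and don’t need to convert.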
As jvo3dc mentioned, you should create a custom render target with enough precision, and make sure you are packing linear values into it.
Use an “RFloat” RenderTexture if you can, which will give you a single 32-bit float per pixel. In the vertex shader calculate the depth manually (distance from the camera), and then in the fragment shader just write it into the buffer.
In your second pass, you can read that buffer in and compare that with your current depth.
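Putting the two passes together, a rough Cg sketch (assuming the back-face pass was rendered into an RFloat render texture; the `_BackFaceDist` name and the matrix names are illustrative, and `_Object2World` matches the Unity 5.x naming):

```c
// Shared vertex shader: compute the linear world-space distance to the camera.
struct v2f
{
    float4 pos       : SV_POSITION;
    float  dist      : TEXCOORD0;
    float4 screenPos : TEXCOORD1;
};

v2f vert(appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    float3 worldPos = mul(_Object2World, v.vertex).xyz;
    o.dist = distance(_WorldSpaceCameraPos, worldPos); // linear, in world units
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

// Pass 1 (Cull Front so only back faces render): write the linear distance
// into the R channel of the RFloat target.
float fragBack(v2f i) : SV_Target
{
    return i.dist;
}

// Pass 2 (Cull Back, normal rendering): thickness is the stored back-face
// distance minus the front-face distance at the same screen position.
sampler2D _BackFaceDist;

float fragFront(v2f i) : SV_Target
{
    float backDist = tex2Dproj(_BackFaceDist, UNITY_PROJ_COORD(i.screenPos)).r;
    return backDist - i.dist;
}
```

Since both values are linear distances, their difference is the actual thickness in world units and stays constant as the camera moves, which should fix the color shifting you’re seeing.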