So I’m in my post-fx shader and for fun I output color.rgb = tex2D(_MainTex, uv).aaa;
Most of the frame is white (1), while some things like the terrain and my character are black (0), so the alpha channel seems essentially arbitrary and currently unused.
Using camera.depthTextureMode instantly doubles my draw calls, which is a no-go on my target hardware.
Yet we know that somewhere in the pipeline surfaces are writing z: first the opaque geometry, then the transparent geometry gets blended on top. At some point a final z is known GPU-side. That's of course the z-buffer, which from the looks of it can't simply be handed over for GPU-side readback as cameraDepthTexture (the Unity docs hint that the native z-buffer is exposed as cameraDepthBuffer whenever some form of OpenGL is active, but that definitely doesn't seem to be the case in my -force-opengl project, nor on-device, judging by the FPS drop I'm seeing).
There are so many “loopholes” where Unity’s default behavior can be hacked (custom terrain shaders, etc.), so isn’t there any way, after final blending and before the blit into the post-fx pass, to capture a fragment’s final z and output it as its alpha component?
Definitely. Unity doesn’t seem to have any special purpose for the alpha channel other than blending, so you can store depth in it. There are two problems, though. First, unless the render target is HDR and uses half or full float precision instead of a single byte per channel, the precision simply isn’t enough in most cases. Half precision has some problems as well if the depth is stored in native/linear form, but you can easily achieve acceptable results by using a logarithmic encoding instead.
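As a rough idea of what such a logarithmic encoding could look like (a minimal sketch; the constant C and the helper names are my own, and the input is assumed to be linear eye-space depth):

// Compress linear eye-space depth into 0..1 logarithmically, so near-range
// values get most of the limited precision of an 8-bit alpha channel.
// C is an arbitrary tuning constant; larger values favor the near range more.
float EncodeLogDepth(float eyeDepth, float farPlane)
{
    const float C = 1.0;
    return log(C * eyeDepth + 1.0) / log(C * farPlane + 1.0);
}

// Inverse of the encoding above, for use in the post-fx shader.
float DecodeLogDepth(float encoded, float farPlane)
{
    const float C = 1.0;
    return (exp(encoded * log(C * farPlane + 1.0)) - 1.0) / C;
}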
The second problem is transparent objects. By default they would blend over the depth value stored behind them, weighted by the alpha they output. You would have to change the blending mode in all transparent shaders to something like Blend SrcAlpha OneMinusSrcAlpha, Zero One so that the destination alpha channel is preserved. You can’t store the depth of the transparent objects themselves this way, though, because their alpha channel is already being used for blending.
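In ShaderLab the relevant part of such a transparent shader would look roughly like this (only the Blend line comes from the suggestion above; the surrounding pass setup is just a typical alpha-blended pass I’m assuming):

SubShader
{
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
    Pass
    {
        // RGB blends as usual with SrcAlpha / OneMinusSrcAlpha,
        // alpha uses Zero / One so the depth already in the buffer is kept.
        Blend SrcAlpha OneMinusSrcAlpha, Zero One
        ZWrite Off
        // ... CGPROGRAM with the usual vertex/fragment program ...
    }
}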
True about the 8-bit imprecision, but let’s assume that for simple depth-based post-fx the lower-precision approximation would be acceptable. How would one go about it?
I’m pretty sure it won’t be, but… apart from what I said about transparent objects, you simply write the depth into the alpha channel of each opaque shader’s output. I’m not sure whether it’s possible to get the projected position in a surface shader, but if so, you can get the projected (non-linear) depth of a pixel from pos.z / pos.w, or linear depth from pos.w / farPlane.
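As a minimal sketch in a plain vertex/fragment shader rather than a surface shader (the shader name, the _FarPlane uniform and the overall structure are my own assumptions; _FarPlane would be fed from script with camera.farClipPlane, e.g. via Shader.SetGlobalFloat):

// Sketch: an opaque unlit shader that writes linear 0..1 depth into alpha,
// so the post-fx blit can read it back from the frame buffer.
Shader "Custom/DepthInAlpha"
{
    Properties { _MainTex ("Texture", 2D) = "white" {} }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _FarPlane; // assumed to be set from script to camera.farClipPlane

            struct v2f
            {
                float4 pos   : SV_POSITION;
                float2 uv    : TEXCOORD0;
                float  depth : TEXCOORD1;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                // o.pos.w is the eye-space distance after projection;
                // dividing by the far plane gives linear 0..1 depth.
                // o.pos.z / o.pos.w would give the non-linear projected depth instead.
                o.depth = o.pos.w / _FarPlane;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                col.a = i.depth; // store depth in alpha for the post-fx pass
                return col;
            }
            ENDCG
        }
    }
}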