The deferred fog, in our implementation, is currently a full-screen quad pass that blends with the skybox for nice effects. But on most pixels it actually does nothing, because they are too close to the camera for any fog to apply. I’d like to use the depth buffer to avoid executing the fragment shader on those pixels.
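For context, the setup I’m after is the classic early-z trick: render the quad with the depth test still enabled and push its depth out to where the fog starts. Roughly this pass state (a sketch, not our exact shader):

// fog quad pass state (sketch)
ZWrite Off
ZTest LEqual  // with the quad's z at the fog start, the fragment shader
              // only runs where the scene is at or beyond that depth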
For this purpose, I started modifying the vertex shader so that the quad’s Z value corresponds to the start of the fog in clip space. I thought it would be (fogStart - nearClip) / (farClip - nearClip), but it doesn’t work! I wonder if something is involved due to the perspective division, reversed Z, or something similar.
I started poking around with values just to see whether the concept would work, and it does. For a fog start of 300, a near clip of 5, a far clip of 2500, and a FOV of 45°, a Z value of 0.0063 properly culls all the pixels, just as if I had clipped in the fragment shader. But I have no idea how to derive that value from the parameters I know (near clip, far clip, fog start).
In this screenshot, you can see that the fragment shader is not executed on pixels that are too close. The green/orange areas are debug values: green shows where the fog computation is supposed to occur, and red shows how much fog to apply.
Looking at the unity_CameraProjection matrix in the frame debugger, I notice that it is not affected at all by the near and far clip values; it’s quite an unconventional projection matrix. Further digging shows that the actual projection, UNITY_MATRIX_P, is really called glstate_matrix_projection, and it’s also quite odd: the usual A and B values of the matrix look like A = near / (far - near) and B = near. What kind of perspective projection matrix is Unity using?
Any idea how to compute a proper Z value for depth testing to work for fog?
You want to convert a view space depth to a clip space depth? Mostly you just need to apply UNITY_MATRIX_P and you’re done… after correcting for the clip space w.
// view space fog start position; x & y don't matter, w must be 1
// (Unity's view space looks down -Z, so a point _FogStart in front
// of the camera sits at z = -_FogStart)
float4 fogStartView = float4(0.0, 0.0, -_FogStart, 1.0);
// apply projection transform to view space position to transform into clip space
float4 fogStartClip = mul(UNITY_MATRIX_P, fogStartView);
// do perspective divide (transform from clip to normalized device coordinates, aka NDC)
float fogStartNDCZ = fogStartClip.z / fogStartClip.w;
// calculate clip space position of vertex
o.pos = UnityObjectToClipPos(v.vertex);
// convert NDC z back to clip space using the vertex clip pos w.
o.pos.z = fogStartNDCZ * o.pos.w;
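If it helps to see the same thing without the matrix multiply, here’s a closed-form version. This is a sketch under the assumption of a reversed-Z, D3D-style projection (which, per the explanation below, is what Unity uses on most platforms today); the function name is just for illustration:

// NDC depth of a point at positive view distance d, assuming a reversed-Z
// D3D-style projection: z_clip = A * z_view + B with A = n / (f - n) and
// B = n * f / (f - n), and w_clip = -z_view = d
float LinearDistanceToReversedNDC(float d, float n, float f)
{
    // simplifies to 1 at d = n and 0 at d = f
    return n * (f - d) / (d * (f - n));
}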
A bit of explanation about why the “real” projection matrix looks the way it does. First, matrices on the C# side are defined in OpenGL form, and Unity transforms them into whatever form is needed for the given platform. That means if you’re on Windows, the matrix gets transformed into a Direct3D projection matrix, but the unity_CameraProjection matrix keeps the original OpenGL form. In addition, Unity uses a reversed Z depth when rendering with anything other than OpenGL, which requires inverting the projection matrix. Look up “reversed z depth” to find out why.
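To make that concrete, here are the z and w rows of the two forms for near plane n and far plane f, assuming Unity’s OpenGL-convention view space (camera looking down -Z). This is a sketch for reference, not something read out of Unity’s source:

// OpenGL form (unity_CameraProjection):
//   z_clip = -(f + n) / (f - n) * z_view - 2 * f * n / (f - n)
//   w_clip = -z_view                     // NDC z runs -1 (near) to +1 (far)
//
// Reversed-Z D3D form (UNITY_MATRIX_P on D3D11 and friends):
//   z_clip = n / (f - n) * z_view + f * n / (f - n)
//   w_clip = -z_view                     // NDC z runs 1 (near) to 0 (far)
//
// Since f * n / (f - n) ~= n when f >> n, this matches the observed
// A = near / (far - near) and B ≈ near from the frame debugger.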
The funny thing is that no platform except Android uses OpenGL by default anymore. Windows is D3D11, Linux is Vulkan, and macOS & iOS are both Metal. Android currently defaults to OpenGL ES, but you can use Vulkan on most new phones from the last few years. In a few more years, I suspect no platform Unity supports will still use the original OpenGL projection matrix on the GPU.
This all stems from Unity originally being an OpenGL-only engine, by the way.
Yes, that’s what I thought initially too, but surprisingly it doesn’t work. This is a post-process shader run from a CommandBuffer at BeforeImageEffectsOpaque, so that might change things a bit. Here’s my code, by the way:
Running the math manually shows the value ends up a lot greater than 0.0063, which is why I started this thread.
EDIT: actually, it’s not greater; my memory was wrong. It’s less than 0!
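A negative result is, for what it’s worth, exactly what falls out if the view-space z goes into the projection with the wrong sign. Unity’s view space looks down -Z (hence the -_FogStart in the snippet above); with a positive z, the clip space w comes out negative and flips the sign of the NDC z after the divide:

// z_view = +_FogStart  ->  w_clip = -_FogStart < 0  ->  NDC z < 0
// z_view = -_FogStart  ->  w_clip = +_FogStart > 0  ->  NDC z in [0, 1]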