I’m currently writing a custom render feature (URP) which draws some geometry in isolation using a custom MRT (multiple render targets) shader. The first target is the color texture and copies the camera target’s settings. The second target is a custom depth texture, meant to record depth values without actually writing to the depth buffer. I then let the standard fragment rules write to the “real” depth buffer (which is either the color RT’s depth buffer or a separate RT, depending on the platform).
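Roughly, the fragment output struct looks like this (simplified; the names here are paraphrased rather than copied from my code):
struct FragOutput
{
    // First target: matches the camera color target's settings.
    half4 color : SV_Target0;
    // Second target: my custom depth texture, written manually and not
    // participating in the depth test at all.
    float customDepth : SV_Target1;
    // Note there is no SV_Depth output; the "real" depth buffer is still
    // written by the ordinary ZWrite path from the rasterized depth.
};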
I don’t want to get bogged down too much in the actual effect I am trying to create; the main issue is that I need my custom depth texture to be in the exact same format as a real depth texture so that I can later use it to reconstruct world space positions in a screen blit.
Here is my attempt at calculating a depth value between 0 and 1:
// This logic is inspired by Unity's built-in UNITY_Z_0_FAR_FROM_CLIPSPACE helpers.
// Rather than needing to divide by the far plane after UNITY_Z_0_FAR_FROM_CLIPSPACE,
// these helpers produce the [0, 1] value directly, saving an unneeded multiply and divide.
#if UNITY_REVERSED_Z
// TODO: workaround. There's a bug where SHADER_API_GLCORE gets erroneously defined on Switch.
#if (defined(SHADER_API_GLCORE) && !defined(SHADER_API_SWITCH)) || defined(SHADER_API_GLES) || defined(SHADER_API_GLES3)
//GL with reversed z => z clip range is [near, -far] -> remapping to [0, 1]
#define UNITY_Z_0_1_FROM_CLIPSPACE(coord) max(((coord) - _ProjectionParams.y) / (-_ProjectionParams.z - _ProjectionParams.y), 0.0)
#define UNITY_Z_1_0_FROM_CLIPSPACE(coord) (1.0 - UNITY_Z_0_1_FROM_CLIPSPACE(coord))
#else
//D3d with reversed Z => z clip range is [near, 0] -> remapping to [0, 1]
//max is required to protect ourselves from near plane not being correct/meaningful in case of oblique matrices.
#define UNITY_Z_0_1_FROM_CLIPSPACE(coord) max(1.0 - ((coord) / _ProjectionParams.y), 0.0)
#define UNITY_Z_1_0_FROM_CLIPSPACE(coord) min((coord) / _ProjectionParams.y, 1.0)
#endif
#elif UNITY_UV_STARTS_AT_TOP
//D3d without reversed z => z clip range is [0, far] -> remapping to [0, 1]
#define UNITY_Z_0_1_FROM_CLIPSPACE(coord) ((coord) / _ProjectionParams.z)
#define UNITY_Z_1_0_FROM_CLIPSPACE(coord) (1.0 - UNITY_Z_0_1_FROM_CLIPSPACE(coord))
#else
//Opengl => z clip range is [-near, far] -> remapping to [0, 1]
#define UNITY_Z_0_1_FROM_CLIPSPACE(coord) max((((coord) + _ProjectionParams.y) / (_ProjectionParams.z + _ProjectionParams.y)), 0.0)
#define UNITY_Z_1_0_FROM_CLIPSPACE(coord) (1.0 - UNITY_Z_0_1_FROM_CLIPSPACE(coord))
#endif
The idea here is that I use UNITY_Z_1_0_FROM_CLIPSPACE(In.svPosition.z) to get the depth value for my texture. To make the write side concrete, here is a simplified sketch of the fragment function (paraphrased, not my exact code):
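FragOutput Frag(Varyings In) // Varyings contains: float4 svPosition : SV_POSITION;
{
    FragOutput output;
    output.color = half4(1, 1, 1, 1); // the actual shading is elided here
    // In.svPosition.z is the rasterizer's non-linear depth for this fragment.
    output.customDepth = UNITY_Z_1_0_FROM_CLIPSPACE(In.svPosition.z);
    return output;
}
Later, I read this texture’s depth just like you would read from _CameraDepthTexture: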
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"
// Mirror Unity's handling of raw device depth: on reversed-Z platforms the raw
// value is used directly; otherwise remap from [0, 1] to [-1, 1] (OpenGL-style NDC).
inline float SampleSceneDepthFromScreenUV(in float2 screenUV)
{
#if UNITY_REVERSED_Z
    return SampleSceneDepth(screenUV).x;
#else
    return lerp(UNITY_NEAR_CLIP_VALUE, 1, SampleSceneDepth(screenUV).x);
#endif
}
// Reconstruct a world-space position from a screen UV and a raw device depth
// using the inverse view-projection matrix.
inline float3 ComputeSceneDepthWorldSpacePosition(in float2 screenUV, in float cameraDepth)
{
    return ComputeWorldSpacePosition(
        screenUV,
        cameraDepth,
        UNITY_MATRIX_I_VP);
}
// Convenience overload that samples the depth itself.
inline float3 ComputeSceneDepthWorldSpacePosition(in float2 screenUV)
{
    return ComputeSceneDepthWorldSpacePosition(
        screenUV,
        SampleSceneDepthFromScreenUV(screenUV));
}
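In the screen blit’s fragment shader, the usage then boils down to something like this (a sketch; input.texcoord being the full-screen pass UV):
float3 positionWS = ComputeSceneDepthWorldSpacePosition(input.texcoord);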
Now, I arrived at this 1 (near plane) to 0 (far plane) solution after some sleuthing to find out that SV_POSITION is not really in clip space in the fragment shader, the way it was when building it in the vertex shader. It’s also not strictly in NDC space (as Unity defines it, anyway). From my experimentation and sleuthing combined, SV_POSITION is something like:
- xy = the pixel position of the fragment, running from (0, 0) to _ScaledScreenParams.xy
- z = the non-linear z depth (platform-dependent ranges that relate to _ProjectionParams in different ways)
- w = the w set in the vertex shader, which happens to be the camera-space depth for perspective rendering and 1.0 for orthographic rendering
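At least the xy part behaves predictably: dividing by the scaled screen size yields a regular screen UV, which I believe is the same thing Unity’s own world-position-reconstruction example does:
// Map the SV_POSITION pixel coordinate to a [0, 1] screen UV.
float2 screenUV = In.svPosition.xy / _ScaledScreenParams.xy;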
Now, when I use UNITY_Z_1_0_FROM_CLIPSPACE(In.svPosition.z), it does indeed give me a value between 1 and 0. But when I alter my shader so it should be writing the exact same fragment depths as the built-in depth buffer, I observe (in the frame debugger) a much faster trail-off to black in the depth buffer compared to my texture. This means that when I go to reconstruct the world position, the depth value is incorrect.
My first thought was that my values are linear, but I don’t understand how that could be the case: I started with In.svPosition.z (a supposedly non-linear value), and the math I’m doing couldn’t really linearize it (especially the Metal case, which is just a simple division). Why is there no documentation explaining exactly what space SV_POSITION provides in a frag shader? Whether it’s standardized or different per platform, I just need to understand how to deal with it comprehensively.
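For reference, my understanding of the standard projection math is that, with reversed-Z, a perspective camera’s device depth for an eye-space depth z (near plane n, far plane f) should be d = n(f - z) / (z(f - n)): a hyperbolic curve running from 1 at the near plane to 0 at the far plane, which would at least explain the fast trail-off to black I see in the real depth buffer.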
So the main questions are:
- How do you manually calculate a depth value equivalent to ZWrite values within a geometry frag shader?
- Is there a built-in function (despite my best efforts failing to find one) for this very basic need?
At this point I’m also willing to just pass the world-space position (or some other space) down to the frag shader if someone can point me to any way to calculate the exact same depth values the built-in ZWrite path would produce; something like the sketch below is what I have in mind. It seems like this would be a very standard thing to do, but I have put an embarrassing amount of time into this on my own at this point. Help would be greatly appreciated!
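An unverified sketch (TransformWorldToHClip is the SRP Core helper; positionWS would be interpolated from the vertex shader):
// Rebuild the clip-space position per fragment, then apply the perspective
// divide to get what should be the same device depth that ZWrite stores.
float4 positionCS = TransformWorldToHClip(positionWS);
float deviceDepth = positionCS.z / positionCS.w;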
NOTE:
Just to be clear, I don’t (necessarily) need a method to calculate the depth values stored in the camera target’s depth buffer. I need the final value to be what gets written into _CameraDepthTexture when you enable depth textures in Unity’s render settings.