Hi everyone, I've been learning about the modern render pipeline for a while, and there's a question that has been troubling me for a few days, as mentioned in the title.
I saw the following functions/variables in the URP source code:
// Z buffer to linear depth.
// Does NOT correctly handle oblique view frustums.
// Does NOT work with orthographic projection.
// zBufferParam = { (f-n)/n, 1, (f-n)/n*f, 1/f }
float LinearEyeDepth(float depth, float4 zBufferParam)
{
    return 1.0 / (zBufferParam.z * depth + zBufferParam.w);
}
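To see why this works, here is a small numeric check (a sketch in Python rather than HLSL, with arbitrary near/far values I chose for the test): for a reversed-Z, D3D-style depth buffer, plugging the commented reversed-Z `_ZBufferParams` values into the `LinearEyeDepth` formula recovers the eye-space depth exactly.

```python
# Numeric check of LinearEyeDepth() for a reversed-Z (UNITY_REVERSED_Z) buffer.
# near/far are arbitrary test values, not anything from the Unity source.
near, far = 0.3, 1000.0

# Reversed-Z params, as in the Unity comment:
# x = -1 + far/near, y = 1, z = x/far, w = 1/far
x = -1.0 + far / near
z = x / far
w = 1.0 / far

def reversed_z_buffer_value(eye_z):
    # Reversed-Z D3D-style projection: near plane -> 1, far plane -> 0
    return (near / (far - near)) * (far / eye_z - 1.0)

def linear_eye_depth(d):
    # Mirrors the HLSL: 1.0 / (zBufferParam.z * depth + zBufferParam.w)
    return 1.0 / (z * d + w)

for eye_z in (near, 1.0, 10.0, 250.0, far):
    d = reversed_z_buffer_value(eye_z)
    assert abs(linear_eye_depth(d) - eye_z) / eye_z < 1e-6
```

Algebraically, `z * d + w` collapses to `1 / eye_z`, which is why the reciprocal gives the linear eye depth.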
// Values used to linearize the Z buffer (http://www.humus.name/temp/Linearize%20depth.txt)
// x = 1-far/near
// y = far/near
// z = x/far
// w = y/far
// or in case of a reversed depth buffer (UNITY_REVERSED_Z is 1)
// x = -1+far/near
// y = 1
// z = x/far
// w = 1/far
float4 _ZBufferParams;
// From http://www.humus.name/temp/Linearize%20depth.txt
// But as depth component textures on OpenGL always return in 0..1 range (as in D3D), we have to use
// the same constants for both D3D and OpenGL here.
// OpenGL would be this:
// zc0 = (1.0 - far / near) / 2.0;
// zc1 = (1.0 + far / near) / 2.0;
// D3D is this:
float zc0 = 1.0f - far * invNear;
float zc1 = far * invNear;
Vector4 zBufferParams = new Vector4(zc0, zc1, zc0 * invFar, zc1 * invFar);
if (SystemInfo.usesReversedZBuffer)
{
    zBufferParams.y += zBufferParams.x;
    zBufferParams.x = -zBufferParams.x;
    zBufferParams.w += zBufferParams.z;
    zBufferParams.z = -zBufferParams.z;
}
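A quick way to convince yourself the swap is correct: mirror the C# construction above in a few lines of Python (near/far are assumed test values) and check that the reversed-Z branch reproduces exactly the commented values x = -1 + far/near, y = 1, z = x/far, w = 1/far.

```python
# Sketch of the C# zBufferParams construction, verifying the reversed-Z branch
# against the commented values. near/far are arbitrary test values.
near, far = 0.3, 1000.0
inv_near, inv_far = 1.0 / near, 1.0 / far

zc0 = 1.0 - far * inv_near           # x = 1 - far/near
zc1 = far * inv_near                 # y = far/near
params = [zc0, zc1, zc0 * inv_far, zc1 * inv_far]

uses_reversed_z = True               # assumption: a D3D-style target
if uses_reversed_z:
    params[1] += params[0]           # y becomes (far/near) + (1 - far/near) = 1
    params[0] = -params[0]           # x becomes -1 + far/near
    params[3] += params[2]           # w becomes (zc0 + zc1)/far = 1/far
    params[2] = -params[2]           # z becomes x/far

assert abs(params[0] - (-1.0 + far / near)) < 1e-9
assert abs(params[1] - 1.0) < 1e-9
assert abs(params[2] - params[0] / far) < 1e-9
assert abs(params[3] - 1.0 / far) < 1e-9
```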
I did some math to verify it, and it makes sense for a D3D-like render pipeline z-buffer. But the question is: how does LinearEyeDepth() deal with an OpenGL-like z-buffer, since zBufferParams's value doesn't seem to change anywhere else in the code?
Sorry for my bad English; I posted this question using translation software.
I'd appreciate it if anyone responded.
You always start with the value that you read from the depth map, which is always 0-1 in both OpenGL and D3D. And since this is only ever used for the inverse projection, the math is always the same. Even though the NDC z range is -1 to 1 in OpenGL, what gets written into the depth map is 0 to 1 for both APIs.
glDepthRange specifies how NDC coordinates are converted to depth buffer values.
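Concretely, with the default glDepthRange(0, 1), the window-space transform maps GL's [-1, 1] NDC z onto the same [0, 1] range D3D uses. A minimal sketch of that transform (the function name is mine, not a GL API):

```python
# Default window-space depth transform, assuming glDepthRange(0, 1):
# depth_buffer = ((d_far - d_near) * z_ndc + (d_far + d_near)) / 2
def window_depth(z_ndc, d_near=0.0, d_far=1.0):
    return ((d_far - d_near) * z_ndc + (d_far + d_near)) / 2.0

assert window_depth(-1.0) == 0.0  # GL near plane -> 0 in the depth buffer
assert window_depth(1.0) == 1.0   # GL far plane  -> 1 in the depth buffer
```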
Thank you for your reply.
Why isn't the depth value remapped to [-1, 1] before multiplying by the inverse projection in OpenGL?
Or rather: I think the projection matrices are different between OpenGL and D3D, so multiplying the same depth map value by the inverse projection should give a different eye depth.
Maybe I should confirm one more thing: does the projection matrix in Unity depend on the graphics API or not?
Huh, my mind is getting muddled now.
So the projection matrix that you see in C# is the same for all APIs. It gets adjusted via GL.GetGPUProjectionMatrix when it is set as a shader parameter. Usually this is done automatically. You only have to call this if you want to pass a projection matrix to your shader without telling Unity that it is a projection matrix (e.g. SetGlobalMatrix as opposed to SetProjectionMatrix).
So the value that you read from the depth buffer is always 0-1 for all APIs and the C# projection matrix is also always the same for all APIs.
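For intuition, here is a hedged numeric sketch of the kind of z-remap GL.GetGPUProjectionMatrix performs conceptually (the exact matrix layout and API behavior are my assumption; only the z remap is illustrated): starting from a GL-convention projection where NDC z runs from -1 at the near plane to +1 at the far plane, the reversed-Z D3D remap z' = (1 - z) / 2 sends near to 1 and far to 0.

```python
# Sketch: GL-convention projection z, then the reversed-Z D3D remap.
# near/far are arbitrary test values; eye-space z is negative looking forward.
near, far = 0.3, 1000.0

# GL-convention perspective z row: z_clip = A*z_eye + B, w_clip = -z_eye
A = -(far + near) / (far - near)
B = -2.0 * far * near / (far - near)

def gl_ndc_z(z_eye):
    return (A * z_eye + B) / (-z_eye)

def d3d_reversed_ndc_z(z_eye):
    # Reversed-Z remap of NDC z: z' = (1 - z) / 2
    return (1.0 - gl_ndc_z(z_eye)) / 2.0

assert abs(gl_ndc_z(-near) - (-1.0)) < 1e-9        # GL: near -> -1
assert abs(gl_ndc_z(-far) - 1.0) < 1e-9            # GL: far  -> +1
assert abs(d3d_reversed_ndc_z(-near) - 1.0) < 1e-9 # reversed Z: near -> 1
assert abs(d3d_reversed_ndc_z(-far)) < 1e-9        # reversed Z: far  -> 0
```

So the per-API differences live entirely in the projection matrix that reaches the GPU, while the depth buffer contents and the C#-side matrix stay the same everywhere.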