Hi, I have to write some shaders that work well in both OpenGL and Direct3D.
In one of the shaders, I render the depth of the scene into a RenderTexture (writing to its color buffer).
In that shader, I set:
ZWrite On
ZTest Less
And I clear the depth buffer to 0.
I know that in OpenGL the depth buffer goes from 0 (near plane) to 1 (far plane), and that in Direct3D it goes from 1 (near plane) to 0 (far plane). I was expecting Unity to hide this difference, but apparently it doesn't. I'm using i.vertex.z in the fragment shader as the depth value (i.vertex being the SV_POSITION computed by multiplying the vertex by its MVP matrix in the vertex shader). By the way, I'm sending the Model, View and Projection matrices to the shader manually by calling material.SetMatrix(name, matrix) (it needs to be that way).
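For context, the vertex shader computes i.vertex roughly like this (the matrix property names _Model, _View and _Projection are just placeholders here; they stand for whatever names I actually pass to material.SetMatrix):

// Matrices uploaded from C# via material.SetMatrix("_Model", ...), etc.
// (property names are placeholders, not my real ones)
float4x4 _Model;
float4x4 _View;
float4x4 _Projection;

struct v2f {
    float4 vertex : SV_POSITION;
};

v2f vert(float4 vertex : POSITION)
{
    v2f o;
    // clip-space position = Projection * View * Model * vertex
    o.vertex = mul(_Projection, mul(_View, mul(_Model, vertex)));
    return o;
}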
My fragment shader is as follows:
struct fragOutput {
    float4 color : SV_TARGET;
    float depth : SV_DEPTH;
};

fragOutput frag(v2f i)
{
    fragOutput o;
    float depth = i.vertex.z;
    o.color = float4(depth, depth, depth, 1);
    o.depth = depth;
    return o;
}
In OpenGL the previous code works like this:
- depth test works well (geometries close to the near plane cover geometries behind them)
- geometries close to the near plane have a dark color and geometries farther away are whiter.
Running the same shader in Direct3D, the results are:
- depth test doesn't work: geometries at the back are drawn over geometries at the front.
- geometries at the front are darker and geometries at the back are whiter.
So according to that (by inspecting the output color), i.vertex.z goes from 0 (near plane) to 1 (far plane) in both OpenGL and Direct3D. The depth written to the depth buffer in Direct3D is wrong, though.
To fix that, I added the following code:
#if defined(SHADER_API_D3D11) || defined(SHADER_API_D3D9)
    o.depth = 1 - i.vertex.z;
#else
    o.depth = i.vertex.z;
#endif
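I've also seen a UNITY_REVERSED_Z macro mentioned; if it's defined exactly on the platforms with a reversed depth buffer, I suppose the check could be written more generally as something like this (just a guess on my part):

// Assuming UNITY_REVERSED_Z is defined exactly on the platforms
// where the depth buffer runs from 1 (near) to 0 (far):
#if defined(UNITY_REVERSED_Z)
    o.depth = 1 - i.vertex.z;
#else
    o.depth = i.vertex.z;
#endif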
And with that, the colors "seem" to be the same (they are slightly different in intensity) and the depth test works.
Inverting the value written into the depth buffer seems to make it work, but I don't understand why. If i.vertex.z is 0 for vertices on the near plane, then the value I write to the depth buffer (with the Direct3D convention) should be 1, and with ZTest Less those fragments should be discarded. Am I right?
There are 2 problems with this approach. First, the intensity of the pixels is slightly different, which gives me a larger accumulated error when doing additional calculations. Second, there is a drop in FPS when running under Direct3D. I guess that's because I sometimes write a value higher than 1 to the depth buffer (I don't know why that happens), which makes the GPU run some extra code.
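If out-of-range values really are the cause of the slowdown, I guess clamping the written depth would at least let me test that theory:

// Just a test: clamp the written depth to [0, 1],
// to rule out out-of-range values causing the FPS drop
o.depth = saturate(1 - i.vertex.z);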
Please let me know if what I’m doing is the right way and if my guess about i.vertex.z is correct.
Maybe I should use the helper macros mentioned here: Unity - Manual: Cameras and depth textures. But I don't understand how they work, where I should call them (vertex or fragment shader), what the input is, and what the output is.
I would appreciate it if someone could explain how those macros are supposed to be used by giving a full example. Looking at the doc's snippet: what is "i"? Is it a float3? Is it in clip coordinates? Where do I get the output back: in the fragment shader, or by assigning the return value to a variable in the vertex shader? When I searched for that, this is the only example I found, but it doesn't match the documentation.
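For what it's worth, my best guess from the manual's snippet (I may well be wrong, hence this question) is something like the following, with the depth carried through a float2 TEXCOORD:

struct v2f {
    float4 pos : SV_POSITION;
    float2 depth : TEXCOORD0;   // consumed by the depth macros
};

v2f vert(appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    UNITY_TRANSFER_DEPTH(o.depth);  // presumably stores clip-space z/w
    return o;
}

half4 frag(v2f i) : SV_Target
{
    UNITY_OUTPUT_DEPTH(i.depth);    // presumably returns the depth as the output color
}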
Many thanks for your help!