Direct3D -> OpenGL camera depth difference

Hi!

I’m working on getting a shader effect to work on OpenGL platforms, and I’m stumped. The shader is for a fadeout plane for Infinite Depth Pits Of Death. It fades out based on the difference between the camera-to-plane distance and the camera-to-scene distance read from the depth texture. This gives a nice, fog-like effect.

It’s not giving good results on OpenGL, though - I’m using OpenGL ES 3. Here are screenshots showing the effect, and the difference between the platforms:

Screens

Here’s the shader code:

Shader code

Shader "Custom/HeightFogShader" {
    Properties {
        _Color ("Main Color", Color) = (1,1,1,1)
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}

        _DistanceMultiplier("Distance multiplier", Float) = 1

    }
    SubShader {
        Tags {"Queue"="Transparent+2" "IgnoreProjector"="True" "RenderType"="Transparent"}

        Pass{

            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            Cull Off

            CGPROGRAM
            #pragma fragment frag
            #pragma vertex vert
            #include "UnityCG.cginc"

            fixed4 _Color;
            uniform sampler2D _CameraDepthTexture; //Depth Texture
            uniform sampler2D _MainTex;
            uniform float _DistanceMultiplier;
            float4 _MainTex_ST;

            struct v2f{
                float2 uv : TEXCOORD0;
                float4 pos : SV_POSITION;
                float4 projPos : TEXCOORD1; //Screen position of pos
            };

            v2f vert(appdata_base v){
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.projPos = ComputeScreenPos(o.pos);
                o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
                return o;
            }

            half4 frag (v2f i) : SV_Target {
                float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, i.projPos));
                float projZ = i.projPos.z;

                half4 c = tex2D(_MainTex, i.uv);
                c.r *= _Color.r;
                c.g *= _Color.g;
                c.b *= _Color.b;

                float distVal = (sceneZ - projZ) * _DistanceMultiplier * 1.5;
                c.a *= _Color.a * distVal * c.a - 0.6f;

                return c;
            }
            ENDCG
        }
    }

    Fallback "Transparent/VertexLit"
}

From doing some debugging, it seems like

o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.projPos = ComputeScreenPos(o.pos);

gives different values on the platforms. So if I create a debug color based on i.projPos.z, that changes. On the other hand,

tex2Dproj(_CameraDepthTexture, i.projPos)

gives the same result on the platforms. So somehow sampling the depth texture takes the above difference into account. This causes the discrepancy between the platforms, but I can’t figure out how to counteract that discrepancy.

Can anyone explain what’s going on and how I should go about getting the same result in OpenGL?

OpenGL and DirectX use different projection matrices. ComputeScreenPos() adjusts the x and y components in a way that is consistent across platforms, but it doesn’t touch the z or w components of the float4 passed to it. If you need a depth value that matches the output of LinearEyeDepth you need to use the COMPUTE_EYEDEPTH macro.

o.projPos = ComputeScreenPos(o.pos);
COMPUTE_EYEDEPTH(o.projPos.z);

See the built in particle shaders for further reference if you want, but the above should solve your issue.

Also, in case you might think “why did Unity make them different?”: this isn’t a Unity “thing”, the two APIs chose to implement clip space differently long before Unity existed. OpenGL predates DirectX by about 4 years, and DirectX’s clip space is considered by many to be the “correct” way, so much so that OpenGL 4.5 added glClipControl to switch to it, and Vulkan uses 0-to-1 depth natively. The short version is that OpenGL’s projection space Z runs from -1 to 1, where DirectX’s runs from 0 to 1. OpenGL seems “cleaner”, since X and Y are also in -1 to 1 for both APIs, but because of floating point precision the -1 to 1 range can cause problems: the highest depth precision ends up midway between the near and far planes rather than close to the camera, where you usually want it.
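To make the two conventions concrete, here’s a tiny sketch in plain Python (the near/far plane values are made up, and nothing here is actual Unity or driver code): the same eye-space distance maps to -1..1 under an OpenGL-style projection and to 0..1 under a Direct3D-style one, and the two are related by a simple remap.

```python
# Sketch of how one eye-space distance d (in front of the camera) maps to
# NDC z under the two conventions. n and f are hypothetical near/far planes.

def ndc_z_opengl(d, n, f):
    """OpenGL-style projection: NDC z runs from -1 (near) to +1 (far)."""
    return (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)

def ndc_z_d3d(d, n, f):
    """Direct3D-style projection: NDC z runs from 0 (near) to 1 (far)."""
    return f / (f - n) - (f * n) / ((f - n) * d)

n, f = 0.3, 100.0
for d in (n, 10.0, f):
    # The D3D value is just the GL value remapped: gl = 2 * d3d - 1.
    print(f"d={d:6.1f}  gl={ndc_z_opengl(d, n, f):+.4f}  d3d={ndc_z_d3d(d, n, f):+.4f}")
```

Which is also why a constant fudge factor can sort-of hide the problem on one platform but not the other.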

Hey, thanks a lot for the help!

I had an inkling that this was down to -1 to 1 vs 0 to 1, but I had no idea how to fix it. You put me on the right track!
After adding the COMPUTE_EYEDEPTH macro, I had to remove the -0.6f from the final line of the frag function. It seems like it was there to compensate for… something relating to this.

It looks the same on both platforms now. Again, thanks a bunch!

Man, shaders are hard. Imagine if application code were like this: “Well, on consoles the Z direction in the scene is reversed, so you have to do this:”

#if UNITY_STANDALONE
transform.Translate(direction * Time.deltaTime);
#else
transform.Translate(new Vector3(direction.x, direction.y, -direction.z) * Time.deltaTime);
#endif

Nobody would accept it! But for some reason shader languages push handling platform differences onto the user. Probably for speed concerns, but there has to be a better way to do this :stuck_out_tongue:

It is like that. Unity just hides most of that weirdness behind its APIs. If you try to do stuff outside of Unity (in a native plugin) or go beyond the built-in APIs (like any kind of manual file handling), you’re going to be using a lot of #if platform switches.

Funny you should mention that, because the view depth really is reversed on consoles. Again, Unity’s code handles all of that for you so you don’t have to think about it, and for the most part Unity’s shader macros try to make it all work without you having to worry about it either. In shaders there are separate UNITY_MATRIX_V and unity_WorldToCamera matrices: one matches Unity’s scene space and the other matches what the rendering APIs actually want.
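As a toy illustration of those two conventions in plain Python (the matrices are hand-built for a camera at the world origin, not Unity’s actual internals, and I’m assuming UNITY_MATRIX_V is the -z-forward one): the two matrices differ by a sign flip on the z row, so a point in front of the camera has positive z in one space and negative z in the other.

```python
# Toy illustration of the two view-space conventions. The matrices are
# hand-built for a camera sitting at the world origin looking along +z;
# they are NOT Unity's actual values, just the sign convention.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# unity_WorldToCamera-style: camera forward stays +z, matching scene space.
world_to_camera = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

# UNITY_MATRIX_V-style: same camera, but the z row is negated because the
# rendering APIs expect a view space where the camera looks down -z.
unity_matrix_v = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, -1, 0],
    [0, 0, 0, 1],
]

p = [0, 0, 5, 1]  # a point 5 units in front of the camera
print(mat_vec(world_to_camera, p))  # z = +5: "in front" in scene terms
print(mat_vec(unity_matrix_v, p))   # z = -5: what the GPU-facing math wants
```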