Applying a shader to a manually rendered camera renders the Main Camera instead

Hi,

Currently I am running 2020.1.0b8 on Linux, as I require some of its other non-graphical features.

My scene has a disabled Camera object to which I would like to apply a shader (code at the bottom) via RenderWithShader(), since for my purposes the camera should only render on demand. In another project, pulled over from a Windows machine running 2020.1.0b6, this works as expected, allowing me to then read the result back with GetRawTextureData() etc.
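To illustrate, here is a stripped-down version of the intended workflow (the RFloat formats and the class/field names are just for this example, not necessarily what I use verbatim):

using UnityEngine;

public class DepthCapture : MonoBehaviour
{
    public Camera cam;         // the disabled, on-demand camera
    public Shader depthShader; // the "EyeDepth" shader below

    public byte[] Capture(int width, int height)
    {
        RenderTexture rt = RenderTexture.GetTemporary(width, height, 24, RenderTextureFormat.RFloat);
        cam.targetTexture = rt;
        cam.RenderWithShader(depthShader, null); // render once, on demand

        // read the result back to the CPU
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rt;
        Texture2D tex = new Texture2D(width, height, TextureFormat.RFloat, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;

        cam.targetTexture = null;
        RenderTexture.ReleaseTemporary(rt);
        return tex.GetRawTextureData();
    }
}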

In a new project created on Linux, the same code doesn't appear to work. RenderWithShader() produces no meaningful result. Render() followed by Blit() applies the shader to the Main Camera, and only while the Game window is selected in the Editor. Blit() doesn't seem to care what I pass as the source texture argument.

Is there some kind of magic project setting, carried over from Windows/0b6, that allows RenderWithShader() to function on a disabled Camera? Or is this a known issue on *nix?

Some of my attempts:
cam.RenderWithShader(depthShader, null);

This works in the project pulled over from Windows, but not in a project created fresh on Linux.

cam.Render();
Camera.main.targetTexture = null;
Graphics.Blit(cam.targetTexture, cam.targetTexture, depthMaterial);

This applies the shader to whatever is shown in the Game window, and only while the Game window is selected in the Editor. In the previous project I can go back to the Scene window and have it continue to function on another camera.
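For reference, I realize the in-place Blit above may be suspect on its own; the pattern I would normally expect to need blits into a separate temporary target, roughly:

cam.Render();
RenderTexture tmp = RenderTexture.GetTemporary(
    cam.targetTexture.width, cam.targetTexture.height, 0, cam.targetTexture.format);
Graphics.Blit(cam.targetTexture, tmp, depthMaterial); // apply the shader
Graphics.Blit(tmp, cam.targetTexture);                // copy the result back
RenderTexture.ReleaseTemporary(tmp);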

The depth shader code for completeness:

// all credit to: https://github.com/ronja-tutorials/ShaderTutorials/blob/master/Assets/017_DepthPostprocessing/DepthPostprocessing.shader

Shader "EyeDepth"{
    SubShader{
        // markers that specify that we don't need culling
        // or comparing/writing to the depth buffer
        Cull Off
        ZWrite Off
        ZTest Always

        Tags { "RenderType" = "Opaque" }

        Pass{
            CGPROGRAM
            //include useful shader functions
            #include "UnityCG.cginc"

            //define vertex and fragment shader
            #pragma vertex vert
            #pragma fragment frag

            //the depth texture
            sampler2D _CameraDepthTexture;

            //the object data that's put into the vertex shader
            struct appdata{
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            //the data that's used to generate fragments and can be read by the fragment shader
            struct v2f{
                float4 position : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            //the vertex shader
            v2f vert(appdata v){
                v2f o;
                //convert the vertex positions from object space to clip space so they can be rendered
                o.position = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            //the fragment shader
            float frag(v2f i) : SV_TARGET{
                //get depth from depth texture
                float depth = tex2D(_CameraDepthTexture, i.uv).r;
                //linear depth between camera and far clipping plane
                depth = Linear01Depth(depth) * _ProjectionParams.z;

                return depth;
            }

            ENDCG
        }
    }

    FallBack "Diffuse"
}
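One note on the shader: as far as I understand, _CameraDepthTexture is only populated if the camera is actually asked to render a depth texture, e.g.:

// make the camera render a depth texture so _CameraDepthTexture is valid
cam.depthTextureMode = DepthTextureMode.Depth;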

Update: at least for my admittedly narrow problem, it appears to be fixed. I added the following shaders under Project Settings > Graphics > Always Included Shaders:

  • Hidden/Compositing
  • Hidden/VideoComposite
  • Hidden/VideoDecode

This, together with disabling MSAA on the camera that renders depth, has resolved the issue for the time being.
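In case it helps anyone, the MSAA part can be disabled per camera from script rather than project-wide:

// opt this camera out of MSAA without touching the quality settings
cam.allowMSAA = false;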