Hello everyone,
I’m trying to use the main camera depth texture in a shader (via _CameraDepthTexture). My setup is this: I have a camera that renders the scene without a certain layer (disabled in the culling mask); then, in an image effect script’s OnRenderImage function, I call RenderWithShader (with my shader) on a second camera whose culling mask contains only that layer. My shader uses the camera depth texture to test object depth manually, so it doesn’t draw what’s hidden behind scene objects.
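For reference, here is a minimal sketch of the setup (the class, field, and layer names below are placeholders for illustration, not my actual code):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class LayerDepthEffect : MonoBehaviour
{
    // Placeholder names for illustration.
    public Camera overlayCamera;        // second camera, renders only the excluded layer
    public Shader overlayShader;        // shader doing the manual depth test
    public string overlayLayer = "Overlay";

    void OnEnable()
    {
        // Make sure the main camera generates _CameraDepthTexture.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Pass the main render through first.
        Graphics.Blit(src, dst);

        // Render the excluded layer on top with the replacement shader;
        // the shader samples _CameraDepthTexture to reject hidden pixels.
        overlayCamera.CopyFrom(GetComponent<Camera>());
        overlayCamera.cullingMask = 1 << LayerMask.NameToLayer(overlayLayer);
        overlayCamera.clearFlags = CameraClearFlags.Nothing;
        overlayCamera.targetTexture = dst;
        overlayCamera.RenderWithShader(overlayShader, "");
        overlayCamera.targetTexture = null;
    }
}
```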
This works very well when no directional light is present, or, to be more precise, when the directional light casts no shadows. But as soon as it casts shadows, my shader sees different content in _CameraDepthTexture.
I noticed that as soon as there is a directional light casting shadows, the rendering pipeline differs slightly:
This screenshot shows what appears in the frame debugger when NO directional light is casting shadows:
and this one, when a directional light is casting shadows:
UpdateDepthTexture is called in the second case, clearing the camera depth texture before each render and thus breaking my manual depth testing.
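If it’s relevant: the only workaround I can think of is snapshotting the depth texture before the secondary render, roughly like this (untested sketch; _SceneDepthCopy is a name I made up, and my shader would have to sample it instead of _CameraDepthTexture):

```csharp
// Inside OnRenderImage, before calling RenderWithShader (untested sketch):
var sceneDepth = Shader.GetGlobalTexture("_CameraDepthTexture");
var depthCopy = RenderTexture.GetTemporary(src.width, src.height, 0,
                                           RenderTextureFormat.RFloat);
Graphics.Blit(sceneDepth, depthCopy);

// _SceneDepthCopy is a hypothetical sampler name; the shader reads it
// instead of _CameraDepthTexture, so UpdateDepthTexture can't clobber it.
Shader.SetGlobalTexture("_SceneDepthCopy", depthCopy);

overlayCamera.RenderWithShader(overlayShader, "");

RenderTexture.ReleaseTemporary(depthCopy);
```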
Is this normal, or is it a bug?