Skybox, "Resolve Depth" and SAMPLE_DEPTH_TEXTURE

I have a depth-mask object that writes to the depth buffer, but nowhere else (ZWrite On, ColorMask 0).
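
For reference, the mask shader boils down to this (a minimal sketch, illustrative name):
```
Shader "Custom/DepthMask" // illustrative name
{
    SubShader
    {
        Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }
        Pass
        {
            ZWrite On     // write to the depth buffer...
            ColorMask 0   // ...but to nothing else
        }
    }
}
```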

I need to sample this object’s depth value using SAMPLE_DEPTH_TEXTURE later in the rendering process, for an image effect.
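
The effect reads the depth in the standard way; a stripped-down sketch of what it does:
```
Shader "Hidden/DepthSampleSketch" // illustrative name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D_float _CameraDepthTexture;

            fixed4 frag (v2f_img i) : SV_Target
            {
                float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                float depth01  = Linear01Depth(rawDepth); // 0 at the camera, 1 at the far plane
                return fixed4(depth01, depth01, depth01, 1); // just visualizing it here
            }
            ENDCG
        }
    }
}
```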

When I render it with the opaque geometry, before the skybox (Queue < 2500), the skybox doesn’t render behind it, because (I’m assuming) Unity depth-tests the skybox against the already-written depth buffer to avoid overdraw.

When I render it in transparent geometry after the skybox (Queue > 2500), the skybox renders correctly, and the frame debugger shows the depth texture as correct too, but SAMPLE_DEPTH_TEXTURE seems to ignore the new value. I presume this is because _CameraDepthTexture is only written to in the “Resolve Depth” step, which happens before the skybox render for the aforementioned optimizations.

I need to have the skybox properly rendered behind the invisible object, and the object’s depth written for sampling. How can I do this?

The easiest solution would be to render the skybox before opaque geometry (and therefore before depth resolution), even if it comes at the cost of some overdraw. However, I don’t know of any option that does this. Even modifying the render queue in the built-in shaders seems to do nothing (it’s likely hardcoded?). I could write my own skybox shader and apply it to custom geometry, but it’s a lot of trouble and I’d rather avoid it.

Likewise, I could fix it by forcing another depth resolution (copying the depth buffer back into _CameraDepthTexture), but I don’t know how to do that (in my CommandBuffer, trying to Blit() from Depth warns that the render texture is not found).
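
For reference, this is roughly the attempt (a sketch; the temporary RT name and the AfterSkybox event are guesses on my part). The Blit() is the call that triggers the warning:
```
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ForceDepthResolve : MonoBehaviour
{
    CommandBuffer cb;

    void OnEnable()
    {
        cb = new CommandBuffer { name = "Re-resolve depth" };
        int tempId = Shader.PropertyToID("_ReResolvedDepth"); // hypothetical name
        cb.GetTemporaryRT(tempId, -1, -1, 24, FilterMode.Point, RenderTextureFormat.Depth);
        cb.Blit(BuiltinRenderTextureType.Depth, tempId); // warns: render texture not found
        cb.SetGlobalTexture("_CameraDepthTexture", tempId);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, cb);
    }
}
```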

I also tried using _LastCameraDepthTexture, but it doesn’t seem to work either.

Any help / suggestion is welcome!

Depending on your exact needs, you could have the mask render to a separate RenderTexture instead of the depth buffer. This would cost a bit more memory, but then it won’t conflict with the skybox.
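
A sketch of what I mean, drawing the mask into its own depth RT with a command buffer (all names are illustrative, and the camera event is a guess):
```
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class MaskDepthToTexture : MonoBehaviour
{
    public Renderer maskRenderer; // the depth-mask object
    public Material maskMaterial; // its depth-writing material
    CommandBuffer cb;

    void OnEnable()
    {
        int rtId = Shader.PropertyToID("_MaskDepthTex"); // hypothetical name
        cb = new CommandBuffer { name = "Mask depth" };
        cb.GetTemporaryRT(rtId, -1, -1, 24, FilterMode.Point, RenderTextureFormat.Depth);
        cb.SetRenderTarget(rtId);
        cb.ClearRenderTarget(true, true, Color.clear);
        cb.DrawRenderer(maskRenderer, maskMaterial);
        cb.SetGlobalTexture("_MaskDepthTex", rtId); // expose it to the effects
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeImageEffects, cb);
    }
}
```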

Unfortunately, I need the depth information from every object in the scene.

Well, you could just sample both the depth buffer and the RenderTexture and combine the results.
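
Something like this inside the effect, assuming the mask depth ended up in a texture called _MaskDepthTex as in the sketch above:
```
Shader "Hidden/CombinedDepthSketch" // illustrative name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D_float _CameraDepthTexture;
            sampler2D_float _MaskDepthTex; // hypothetical RT holding the mask's depth

            fixed4 frag (v2f_img i) : SV_Target
            {
                float sceneDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                float maskDepth  = SAMPLE_DEPTH_TEXTURE(_MaskDepthTex, i.uv);
                // Keep whichever surface is closer to the camera.
            #if defined(UNITY_REVERSED_Z)
                float combined = max(sceneDepth, maskDepth); // reversed-Z: larger = closer
            #else
                float combined = min(sceneDepth, maskDepth); // standard: smaller = closer
            #endif
                float d = Linear01Depth(combined);
                return fixed4(d, d, d, 1); // visualize the combined result
            }
            ENDCG
        }
    }
}
```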

It’s a possibility. But by rendering into a separate render texture, wouldn’t I also lose the existing depth information? For instance, that object would appear foremost even if others are in front of it. Also, it would require editing every image effect that uses SAMPLE_DEPTH_TEXTURE, which is hard to maintain.

True. In those circumstances, rendering the sky earlier sounds like the best solution. It doesn’t have to render before everything, just before the depth-masked object.

Yes, but how?

Well, the custom skybox shader way, I’m afraid. I’m assuming the skybox is really just a cubemap, so it shouldn’t be too hard to sample from a shader.
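
An untested sketch of the idea; the cubemap property and the early queue value are just illustrative. Put it on an inside-out cube (or any mesh surrounding the camera):
```
Shader "Custom/EarlySkybox" // illustrative name
{
    Properties
    {
        _SkyCube ("Cubemap", CUBE) = "" {}
    }
    SubShader
    {
        Tags { "Queue" = "Geometry-100" "RenderType" = "Opaque" }
        Pass
        {
            ZWrite Off  // don't pollute the depth buffer
            Cull Front  // we're looking at the cube from the inside

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            samplerCUBE _SkyCube;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float3 dir : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // World-space direction from the camera to the vertex.
                o.dir = mul(unity_ObjectToWorld, v.vertex).xyz - _WorldSpaceCameraPos;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return texCUBE(_SkyCube, normalize(i.dir));
            }
            ENDCG
        }
    }
}
```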

There’s the double camera option. One camera renders the skybox and only the skybox, and the second camera renders everything else.

I wrote about it here:

It might not work with deferred though.

Won’t work, because setting the second camera to clear depth would clear the area behind the depth-mask object (and Don’t Clear would cause smearing).

Second camera renders everything but the skybox, which is rendered by the first camera. This is purely to get the skybox to render earlier.
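
The setup would look roughly like this (a sketch; the depth values are just illustrative):
```
using UnityEngine;

public class TwoCameraSetup : MonoBehaviour
{
    public Camera skyCamera;  // renders only the skybox
    public Camera mainCamera; // renders everything else

    void Start()
    {
        skyCamera.depth = 0;                            // draws first
        skyCamera.clearFlags = CameraClearFlags.Skybox;
        skyCamera.cullingMask = 0;                      // no geometry, skybox only

        mainCamera.depth = 1;                           // draws second, on top
        mainCamera.clearFlags = CameraClearFlags.Depth; // keep sky color, reset depth
        mainCamera.cullingMask = ~0;                    // all layers
    }
}
```
Clearing only depth on the second camera keeps the sky color from the first camera, while your geometry (including the depth mask) still depth-tests normally.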

If you use forward rendering, _CameraDepthTexture is rendered using the ShadowCaster pass of opaque objects (render queue <= 2500). So you can write depth in the ShadowCaster pass and write neither depth nor color in the forward opaque pass. (SHADOWS_DEPTH can be used to differentiate these two passes, if you use a Surface Shader.)
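
A vertex/fragment sketch of that idea (a Surface Shader checking SHADOWS_DEPTH would be the equivalent); the shader name is illustrative:
```
Shader "Custom/DepthTextureOnlyMask" // illustrative name
{
    SubShader
    {
        Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }

        // Invisible in the camera image: no color and no hardware depth,
        // so the skybox still shows behind the object.
        Pass
        {
            Tags { "LightMode" = "ForwardBase" }
            ZWrite Off
            ColorMask 0
        }

        // Used for shadows and, in forward rendering, for _CameraDepthTexture.
        Pass
        {
            Tags { "LightMode" = "ShadowCaster" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_shadowcaster
            #include "UnityCG.cginc"

            struct v2f
            {
                V2F_SHADOW_CASTER;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
                return o;
            }

            float4 frag (v2f i) : SV_Target
            {
                SHADOW_CASTER_FRAGMENT(i)
            }
            ENDCG
        }
    }
}
```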