How to access rendered depth buffer properly?

I want to save the depth buffer of a rendered image for later use. How do I do this?

I don’t get this weird convention with DepthTextureMode.Depth and _CameraDepthTexture in a shader.
Is there a way to use the actual RenderTexture.depthBuffer? I don’t see how it’s possible to do anything with RenderBuffers at all.

Anyone?

You cannot access the buffers from the CPU at all (meaning you cannot read their contents in a script), be it the color or the depth buffer.
RenderTextures exist on the GPU only (they are Unity’s wrapper around render targets, FBOs, …).

To get the data out, you would need to write a shader that visualizes the depth buffer, and then use Texture2D.ReadPixels while that render texture is the currently active one on your camera, so you can grab the data into a Texture2D for further scripting. Just be aware that texture readback from the GPU is not exactly fast, and depending on your needs it might not even make sense to go there.
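For illustration, a rough sketch of that readback (it assumes you already have a render texture, here called visualizedDepthRT, that your own depth-visualizing shader rendered into; the names are made up):

```csharp
using UnityEngine;

public class DepthReadbackSketch : MonoBehaviour
{
    // Assumed to already contain the output of your depth-visualizing shader.
    public RenderTexture visualizedDepthRT;

    Texture2D ReadDepthToTexture()
    {
        var previous = RenderTexture.active;
        RenderTexture.active = visualizedDepthRT; // ReadPixels reads from the active RT

        var tex = new Texture2D(visualizedDepthRT.width, visualizedDepthRT.height,
                                TextureFormat.RGBAFloat, false);
        // Slow GPU -> CPU copy; avoid doing this every frame if you can.
        tex.ReadPixels(new Rect(0, 0, visualizedDepthRT.width, visualizedDepthRT.height), 0, 0);
        tex.Apply();

        RenderTexture.active = previous;
        return tex;
    }
}
```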


The built-in depth texture in Unity is actually just a “shader replacement” shader. If you got lost among the built-in values and scripts, just check the built-in shader sources (they can be found in one of the top sticky posts in the Shaders section), and there you will learn how to render your very own depth texture yourself.
Be aware that deferred vs. forward rendering can complicate this: in deferred, Unity renders depth anyway, so custom-rendering it with a different shader means some unnecessary extra work.

Briefly, the rendered depth texture is already there if you just enable **Camera.depthTextureMode = DepthTextureMode.Depth;** in one of your scripts’ Awake or Start functions.

And that render texture, in RenderTextureFormat.Depth format, will be available to all your shaders, as it is automatically set as a global shader property named _CameraDepthTexture.
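In script form that is just a one-liner (minimal sketch; the component is assumed to sit on the camera itself):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class EnableDepthTexture : MonoBehaviour
{
    void Start()
    {
        // Ask Unity to render the depth texture; shaders can then
        // sample the global texture _CameraDepthTexture.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }
}
```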

I hope that was clear; sorry, English is not my first language.

Oh, by the way, if I misunderstood the question and you actually want to save the depth image of a previous frame and carry it over to the next frame(s), you need to do that in a custom way.

Thanks.
I need the depth later in a compute shader. It doesn’t have to be transferred to the CPU, just accessible on the GPU again.
I just need to copy the data from the current depth buffer into another buffer somehow.

So you want to save the current frame’s buffer and use it in a later frame, doing a kind of motion blur thing, right?
As the render texture _CameraDepthTexture is not exposed to scripts but only to shaders, you have to reproduce it yourself with a custom replacement shader.
Check the example project “Replacement Shaders” on the Unity site; there is a full depth texture shader example in it.
Doing a custom shader also gives you full control over what gets written to the buffer. You can even write transparent objects if you find a smart use for it.
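Something along these lines (only a sketch; depthReplacementShader and depthRT are placeholders for your own replacement shader and for the target you want to keep around for later frames):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CustomDepthPass : MonoBehaviour
{
    public Shader depthReplacementShader; // your own depth-writing replacement shader
    public RenderTexture depthRT;         // target you keep and reuse in later frames

    void LateUpdate()
    {
        var cam = GetComponent<Camera>();
        var previousTarget = cam.targetTexture;

        // Render the scene with the replacement shader, keyed on the
        // "RenderType" tag, into our own render texture.
        cam.targetTexture = depthRT;
        cam.RenderWithShader(depthReplacementShader, "RenderType");

        cam.targetTexture = previousTarget;
    }
}
```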

I ended up rendering the depth buffer into a texture in a separate render pass with a special shader.
It doesn’t seem optimal.

What else did you expect? You are trying to use a past frame’s depth info inside the current frame. There is no other way to do it in any other engine either.

Yeah, but the depth buffer is still a buffer on the GPU; it’s just not possible in Unity to grab it after the frame is rendered.


The depth “buffer” in Unity works quite differently from what you expect. Read the documentation about the camera depth texture in the manual.

I know. I might also try multiple render targets for that.

Hi, how did you solve this in the end?
I’ve come to the same question recently. I tried to render some particles into another RenderTexture while using the main camera’s depth buffer for correct occlusion. Here’s the code snippet:
void OnPostRender()
{
    // Temporarily disable fog for the offscreen pass.
    bool bFog = RenderSettings.fog;
    RenderSettings.fog = false;

    // offscreenCamera, main, layerMask, backColor, offscreenRT and
    // combinedShader are fields set up elsewhere on this component.
    offscreenCamera.CopyFrom(main);
    offscreenCamera.enabled = false;
    offscreenCamera.depthTextureMode = DepthTextureMode.None;
    offscreenCamera.cullingMask = layerMask;
    offscreenCamera.clearFlags = CameraClearFlags.Nothing;
    offscreenCamera.backgroundColor = backColor;
    offscreenCamera.targetTexture = offscreenRT;

    //Graphics.SetRenderTarget(offscreenRT.colorBuffer, Graphics.activeDepthBuffer);
    //GL.Clear(false, true, backColor);

    offscreenCamera.RenderWithShader(combinedShader, "RenderType");

    RenderSettings.fog = bFog;
}

I also tried the commented-out code without setting the camera’s targetTexture; it turns out that the offscreenCamera created an RT and a depth buffer of its own.

I’m wondering, is there any way to share the main camera’s depth?
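The closest thing I can think of (just a sketch, not verified: it assumes the main camera already renders into its own RenderTexture created with a depth buffer, here called mainRT) is to hand that depth buffer to the offscreen camera with Camera.SetTargetBuffers:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class SharedDepthParticles : MonoBehaviour
{
    public Camera offscreenCamera;
    public RenderTexture mainRT;      // main camera's target, created with a depth buffer
    public RenderTexture offscreenRT; // color-only target for the particles
    public Shader combinedShader;
    public LayerMask layerMask;

    void OnPostRender()
    {
        offscreenCamera.CopyFrom(GetComponent<Camera>());
        offscreenCamera.enabled = false;
        offscreenCamera.cullingMask = layerMask;
        offscreenCamera.clearFlags = CameraClearFlags.Nothing;

        // Color goes to offscreenRT, while depth testing happens against
        // the depth buffer the main camera already filled in mainRT.
        offscreenCamera.SetTargetBuffers(offscreenRT.colorBuffer, mainRT.depthBuffer);
        offscreenCamera.RenderWithShader(combinedShader, "RenderType");
    }
}
```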

I haven’t found a way. I’m releasing an off-screen particle system next week (render at 1/4 screen size, nearest-depth upsample to the screen), and currently I have to do the z-test in the particle shaders, which, while not too expensive (especially when rendering at low resolution), makes it harder for novices to write new shaders for the system.

I tried using ReadPixels to sample the values of various depth pixels in an image effect, and it was as slow as expected. If we don’t necessarily need the whole depth buffer as a texture, we could just fetch selected values and output those.

Could we get the depth buffer in a compute shader and then output data from it as arrays of values? Has anyone done something like that?
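Something like this might be a starting point (a sketch only; the compute shader, its CSMain kernel and the Result buffer are hypothetical, and I haven’t checked whether Shader.GetGlobalTexture reliably returns the camera depth texture at this point in the frame):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class DepthToComputeBuffer : MonoBehaviour
{
    public ComputeShader depthCompute;  // hypothetical compute shader with a CSMain kernel
    ComputeBuffer result;

    void Start()
    {
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
        result = new ComputeBuffer(256, sizeof(float)); // e.g. 256 sampled depth values
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // The built-in depth texture is published as a global shader texture.
        Texture depthTex = Shader.GetGlobalTexture("_CameraDepthTexture");
        if (depthTex != null)
        {
            int kernel = depthCompute.FindKernel("CSMain");
            depthCompute.SetTexture(kernel, "_CameraDepthTexture", depthTex);
            depthCompute.SetBuffer(kernel, "Result", result);
            depthCompute.Dispatch(kernel, 1, 1, 1);
            // result can now be read back with result.GetData(...) or used by other shaders.
        }
        Graphics.Blit(src, dst);
    }

    void OnDestroy()
    {
        if (result != null) result.Release();
    }
}
```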

I’m stuck on this now :frowning:

2012 - 2019: we still have no access to the original depth texture.
Maybe it will be available in HDRP v12, in 2030 or so…

Graphics.activeDepthBuffer doesn’t seem to work :slight_smile:
BuiltinRenderTextureType.Depth also ends up as “UnityDefault2D”.
This is nice

Why is a Unity tech asking a question???

Because that was 7 years ago and they weren’t a Unity employee at the time?


Just to show no-one knows the answer?
