How to write into the camera depth in a custom URP render pass?

I am trying to write into the depth texture from a custom render pass in URP (Render Graph API). In the old built-in RP I had no problems achieving this, but I cannot get it to work in URP.

I assume it’s a configuration issue with the pipeline or the pass, because I reduced the problem to rendering a single cube with the default URP Lit shader, which does not show up in the _CameraDepthTexture.

This is the render pass:

public class TestRenderPass : ScriptableRenderPass
{
	Material _material;
	int _layerMask;

	public TestRenderPass(Material material, int layerMask)
	{
		_material = material;
		_layerMask = layerMask;
	}

	public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameContext)
	{
		using (var builder = renderGraph.AddRasterRenderPass<PassData>("Test Render Pass", out var passData))
		{
			var renderingData = frameContext.Get<UniversalRenderingData>();
			var cameraData = frameContext.Get<UniversalCameraData>();
			var lightData = frameContext.Get<UniversalLightData>();
			var sortFlags = cameraData.defaultOpaqueSortFlags;
			var renderQueueRange = RenderQueueRange.opaque;
			var filterSettings = new FilteringSettings(renderQueueRange, _layerMask);

			var drawSettings = RenderingUtils.CreateDrawingSettings(new ShaderTagId("UniversalForward"), renderingData, cameraData, lightData, sortFlags);

			if (_material != null)
			{
				drawSettings.overrideMaterial = _material;
			}

			var rendererListParameters = new RendererListParams(renderingData.cullResults, drawSettings, filterSettings);

			passData.RendererListHandle = renderGraph.CreateRendererList(rendererListParameters);

			// Bind the camera's active color and depth targets for this raster pass.
			var resourceData = frameContext.Get<UniversalResourceData>();
			builder.UseRendererList(passData.RendererListHandle);
			builder.SetRenderAttachment(resourceData.activeColorTexture, 0);
			builder.SetRenderAttachmentDepth(resourceData.activeDepthTexture, AccessFlags.Write);

			builder.SetRenderFunc((PassData data, RasterGraphContext context) => ExecutePass(data, context));
		}
	}

	static void ExecutePass(PassData data, RasterGraphContext context)
	{
		context.cmd.DrawRendererList(data.RendererListHandle);
	}

	class PassData
	{
		public RendererListHandle RendererListHandle;
	}
}

And the renderer feature:

public class TestRenderFeature : ScriptableRendererFeature
{
	[SerializeField] Material material;
	[SerializeField] LayerMask layerMask;

	TestRenderPass _renderPass;

	public override void Create()
	{
		if (material == null)
		{
			Debug.LogError(nameof(TestRenderFeature) + ": " + nameof(material) + " is not set.");
			return;
		}

		_renderPass = new TestRenderPass(material, layerMask)
		{
			renderPassEvent = RenderPassEvent.AfterRenderingOpaques
		};
	}

	public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
	{
		if (_renderPass != null)
		{
			if (renderingData.cameraData.cameraType is CameraType.Game or CameraType.SceneView)
			{
				renderer.EnqueuePass(_renderPass);
			}
		}
	}
}

I added a cube and set it to a separate layer that is culled by the renderer.

[screenshot]

I then added the renderer feature and set it to render meshes on that layer with a default Lit material.

[screenshot]

If I toggle the renderer feature, the cube disappears and reappears as expected.

However, it never shows up in the _CameraDepthTexture. Only the sphere next to it, rendered normally, is visible in the depth texture:

What am I missing?

Looks like depth is handled differently in the Scene view and the Game view. It works correctly in the Game view:

In the Game view, the opaque object passes run first and draw into _CameraDepthAttachment. They are followed by a Copy Depth pass that reads _CameraDepthAttachment and writes it into _CameraDepthTexture. This makes sense, since my renderer is configured with Depth Texture Mode “After Opaques”.

However, in the Scene view, there is a Draw Depth Only pass before any opaque passes, and _CameraDepthTexture is only written in that step. This would explain why the cube (Test Render Pass) does not show up in the depth texture. It seems to be an independent render pass that iterates over the opaque meshes and renders their depth, but never reads or copies _CameraDepthAttachment. Therefore, moving my render pass before this step has no effect.


resourceData.activeDepthTexture is either the backbuffer or _CameraDepthAttachment.

resourceData.cameraDepthTexture is the depth copy (i.e. _CameraDepthTexture).
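To spell out the distinction inside a pass setup: geometry-drawing passes bind the real depth attachment for writing, while passes that sample scene depth declare the copy as a read texture. A minimal sketch of both bindings inside RecordRenderGraph (Unity 6 Render Graph API, using the same resourceData/builder names as the code above; this is an illustration, not a complete pass):

// Writing depth: bind the camera's actual depth attachment
// (backbuffer depth or _CameraDepthAttachment).
builder.SetRenderAttachmentDepth(resourceData.activeDepthTexture, AccessFlags.Write);

// Reading depth: declare the copied _CameraDepthTexture as a sampled input.
// It has a color format (e.g. R32_SFloat), so it cannot be bound as a
// depth attachment — only sampled.
builder.UseTexture(resourceData.cameraDepthTexture, AccessFlags.Read);

Mixing these up produces exactly the InvalidOperationException quoted further down in this thread.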

Indeed, that’s a good description. We were just looking into this, discussing whether we can make the scene view camera behave more like the game view to avoid this disparity.

Thank you for the quick response!

What’s the conceptual difference between the two depth textures? I tried to find any explanation or documentation but could not find anything.

If I try to set cameraDepthTexture as the depth attachment I get an error saying that it has a color format.

builder.SetRenderAttachmentDepth(resourceData.cameraDepthTexture, AccessFlags.Write);

InvalidOperationException: Trying to SetRenderAttachmentDepth on a texture that has a color format R32_SFloat. Use a texture with a depth format instead. (pass ‘Test Render Pass’ resource ‘_CameraDepthTexture’).

Am I not supposed to do that? The UniversalRenderer indeed uses

depthDescriptor.graphicsFormat = GraphicsFormat.R32_SFloat;

in some cases.

That would be nice! I guess in the meantime I can try to add an extra Copy Depth pass after opaques just for the Scene view.
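For anyone trying the same workaround: the idea could be sketched with the Render Graph blit helpers, enqueued after the custom pass for Scene view cameras only. Note the assumptions here: _copyDepthMaterial is a hypothetical material whose shader samples the depth attachment and outputs it to the R32_SFloat target, and the BlitMaterialParameters/AddBlitPass helper names come from the RenderGraphModule utility extensions and may differ between Unity versions. Treat this as a sketch, not a verified implementation:

// Scene view only: copy the depth attachment into _CameraDepthTexture
// after the custom pass has drawn, so passes that sample scene depth
// (SSAO, DoF, ...) can see the custom geometry.
if (cameraData.cameraType == CameraType.SceneView)
{
	var resourceData = frameContext.Get<UniversalResourceData>();

	// _copyDepthMaterial: hypothetical material that reads the depth
	// attachment and writes depth into the color-format copy.
	var blitParams = new RenderGraphUtils.BlitMaterialParameters(
		resourceData.activeDepthTexture,
		resourceData.cameraDepthTexture,
		_copyDepthMaterial,
		shaderPass: 0);

	renderGraph.AddBlitPass(blitParams, passName: "Copy Depth (Scene View)");
}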

By the way, can I read AND write the depth texture in a single fragment shader dispatch? I was under the impression that this was not possible, or at least not widely supported, due to GPU or graphics API limitations.

I assume this is still true, and that AccessFlags.ReadWrite just means the texture is read and written within a single pass but in separate shader dispatches, right? Or does this limitation not apply to depth textures?

I am in the process of porting a ray marching compute shader from BiRP to a frag shader in URP. To have SSAO and DoF post-processing with BiRP, the compute shader wrote depth values into a custom, separate depth+normals texture, which was then blitted into the camera depth+normals. It would be nice if I could do it without the extra full-screen blit.

(I am moving to a frag shader because URP doesn’t have the same extensibility for Lights as BiRP, which allowed me to blit into the shadow maps)

I read not long ago that it was not possible; I am researching this because I am having the same problems. Could you please post an in-depth response on how you solved this issue? I am working in 2D and I barely know HLSL, so this issue is almost completely overwhelming me. If not, I understand.