Right out of the gate: this is Unity 6000.0.7f1 with URP 17.0.3, and it relates to a ScriptableRendererFeature I wrote utilising the Render Graph (no obsolete/legacy stuff).
In my project, I have two layers whose rendering I need more control over, so I handle their rendering myself through a renderer feature. My issue stems from the depth normals part of this process.
These are the two layers:
- Character
- Overlay
Essentially, I want to render the depth normals/depth for the character pass into the cameraNormalsTexture and activeDepthTexture via my own scripting. I have gotten this to work without issues.
For the overlay, however, I want to have a copy of just the depth for later use, but otherwise also render it into the cameraNormalsTexture and activeDepthTexture just like with the character layer.
Previously, I was just rendering it all twice, and then using that. This is completely unnecessary though, since in theory, I can just make a copy when I first render them out.
So here’s how the new system I wrote works. For reference, the square is the character layer, and the sphere is the overlay layer.
My implementation of this currently “works”, but has a few issues.
- MSAA is not enabled on these textures, meaning the edges are very jagged and unappealing. This is pretty noticeable with the models in my game.
- It has to create two screen-size buffers, one of which I don’t even use, which feels extremely wasteful. I haven’t tested this on any specific hardware, so I don’t have concrete numbers, but it’s also just a more convoluted way of doing things, so I’d rather streamline the process either way.
This is the way I would prefer to implement the system.
In theory, this should also work, since the previous version was working before. We’ll come back to MSAA in a moment; here’s what happens when I try to implement this new version.

So the intermediary depth texture (green star) needs to match the cameraNormalsTexture (blue star) in setup, which in this case happens to mean MSAA. No problem - I needed MSAA anyway. But in practice…
That error floods the console every time I set the MSAA sample count for these textures. I don’t understand why; googling gave me nothing, so this is clearly a moment where my inexperience is holding me back. It doesn’t help that the Render Graph is very new, and a lot of information on how to use it properly is still missing online. I think I might’ve jumped into learning this stuff at a bit of an awkward time.
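For reference, here’s the closest I’ve got to a theory - so please treat everything in this snippet as my assumption, not an answer: the error seems to be about reading a multisampled texture through a regular sampler. RenderTextureDescriptor has a bindMS flag for exactly that, and as far as I can tell a shader can only read the raw samples of a bindMS texture through a Texture2DMS declaration. Names below are from the docs as best I remember them:

```csharp
// ASSUMPTION: if a later pass calls builder.UseTexture() on an MSAA texture, it apparently
// has to be created with bindMS = true, and the reading shader then needs a Texture2DMS
// declaration (read via .Load with an explicit sample index) instead of a sampler2D.
normalsDesc.msaaSamples = cameraData.cameraTargetDescriptor.msaaSamples;
normalsDesc.bindMS = normalsDesc.msaaSamples > 1;

depthDesc.msaaSamples = cameraData.cameraTargetDescriptor.msaaSamples;
depthDesc.bindMS = depthDesc.msaaSamples > 1;

// The alternative I can think of is to keep bindMS = false and resolve the MSAA target into
// a separate 1-sample texture before any pass samples it, e.g. with a blit pass.
```

If someone can confirm which of the two the render graph actually expects here, that alone would help a lot.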
Here’s the code for the previous version that was “working” up until I added the MSAA samples part. This is just the pass responsible for the overlay layer, not the character layer; it’s scheduled at RenderPassEvent.BeforeRenderingPrePasses.
It’s probably atrocious and commits numerous sins, so those of you who actually know what you’re doing (unlike me), please offer feedback on anything I’m doing wrong or small optimisations I could make (part of the mess is because it isn’t done yet). A lot of it is hodge-podge trial and error - I just took what I could find and threw it together. I’m also sorry about the scarcity of comments…
internal class BufferDepth : ScriptableRenderPass
{
    public LayerMask layers;

    private Material copyDepthMat;
    private Material transferDepthNormalsMat;

    public BufferDepth(string passName)
    {
        profilingSampler = new ProfilingSampler(passName);
    }

    public void SetupMembers(Shader copyDepthShader, Shader transferDepthShader)
    {
        copyDepthMat = CoreUtils.CreateEngineMaterial(copyDepthShader);
        transferDepthNormalsMat = CoreUtils.CreateEngineMaterial(transferDepthShader);
    }

    private void InitRendererLists(ContextContainer frameData, ref RenderNormalsPassData passData, RenderGraph renderGraph)
    {
        UniversalCameraData cameraData = frameData.Get<UniversalCameraData>();
        UniversalRenderingData renderingData = frameData.Get<UniversalRenderingData>();

        passData.rendererListHandle = renderGraph.CreateRendererList(
            new RendererListParams(
                renderingData.cullResults,
                RenderingUtils.CreateDrawingSettings(
                    new ShaderTagId("DepthNormals"),
                    renderingData,
                    cameraData,
                    frameData.Get<UniversalLightData>(),
                    cameraData.defaultOpaqueSortFlags),
                new FilteringSettings(RenderQueueRange.opaque, layers)));
    }

    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        UniversalCameraData cameraData = frameData.Get<UniversalCameraData>();
        UniversalResourceData resourcesData = frameData.Get<UniversalResourceData>();

        // Use ARGBHalf because I couldn't find a RenderTextureFormat equivalent for
        // R8G8B8A8_SNorm, and we need the ability to go into the negatives.
        RenderTextureDescriptor normalsDesc = new(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height, RenderTextureFormat.ARGBHalf);
        RenderTextureDescriptor depthDesc = new(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height, RenderTextureFormat.Depth, cameraData.cameraTargetDescriptor.depthBufferBits);

        // This is the part that causes that bindMS error.
        normalsDesc.msaaSamples = cameraData.cameraTargetDescriptor.msaaSamples;
        depthDesc.msaaSamples = cameraData.cameraTargetDescriptor.msaaSamples;

        TextureHandle normalsTex = UniversalRenderer.CreateRenderGraphTexture(renderGraph, normalsDesc, "Overlay Depth Normals Buffer", false);
        TextureHandle depthTex = UniversalRenderer.CreateRenderGraphTexture(renderGraph, depthDesc, "Overlay Depth Buffer", false);

        // Render the overlay layer.
        using (var builder = renderGraph.AddRasterRenderPass<RenderNormalsPassData>("Render Overlay Depth Normals", out var passData, profilingSampler))
        {
            InitRendererLists(frameData, ref passData, renderGraph);
            builder.UseRendererList(passData.rendererListHandle);
            builder.SetRenderAttachment(normalsTex, 0);
            builder.SetRenderAttachmentDepth(depthTex);
            builder.SetRenderFunc((RenderNormalsPassData data, RasterGraphContext rgContext) => ExecutePass(data, rgContext));
        }

        #region This part just copies the depth texture into a non-depth texture so other passes that use Shader Graph stuff can use it. I'm probably going to remove this part entirely in the future, though.
        OverlayDepthBuffer buffer = frameData.GetOrCreate<OverlayDepthBuffer>();
        RenderTextureDescriptor outputDesc = new(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height, RenderTextureFormat.RFloat, 0, cameraData.cameraTargetDescriptor.mipCount, RenderTextureReadWrite.Default);
        buffer.buffer = UniversalRenderer.CreateRenderGraphTexture(renderGraph, outputDesc, "Overlay Depth", false);

        using (var builder = renderGraph.AddRasterRenderPass<CopyPassData>("Copy Overlay Depth", out var passData, profilingSampler))
        {
            passData.mat = copyDepthMat;
            passData.depth = depthTex;
            builder.UseTexture(depthTex);
            builder.SetRenderAttachment(buffer.buffer, 0);
            builder.SetRenderFunc((CopyPassData data, RasterGraphContext rgContext) => ExecutePass(data, rgContext));
        }
        #endregion

        // Finally, this part uses a shader with ZWrite On, ZTest LEqual, and a fragment that looks like this:
        //
        //     float4 frag(const v2f i, out float depth : SV_Depth) : SV_Target
        //     {
        //         depth = tex2D(_SecondaryDepth, i.uv).r;
        //         return tex2D(_SecondaryDepthNormals, i.uv);
        //     }
        //
        // It uses that to copy the depth and normals back into the proper camera textures.
        using (var builder = renderGraph.AddRasterRenderPass<TransferNormalsPassData>("Transfer Depth Normals", out var passData, profilingSampler))
        {
            passData.mat = transferDepthNormalsMat;
            passData.normals = normalsTex;
            passData.depth = depthTex;
            builder.UseTexture(normalsTex);
            builder.UseTexture(depthTex);
            builder.SetRenderAttachment(resourcesData.cameraNormalsTexture, 0);
            builder.SetRenderAttachmentDepth(resourcesData.activeDepthTexture);
            builder.SetRenderFunc((TransferNormalsPassData data, RasterGraphContext rgContext) => ExecutePass(data, rgContext));
        }
    }

    static void ExecutePass(RenderNormalsPassData data, RasterGraphContext context)
    {
        context.cmd.DrawRendererList(data.rendererListHandle);
    }

    static void ExecutePass(CopyPassData data, RasterGraphContext context)
    {
        data.mat.SetTexture("_Depth", data.depth);
        // Fullscreen triangle.
        context.cmd.DrawProcedural(Matrix4x4.identity, data.mat, 0, MeshTopology.Triangles, 3, 1);
    }

    static void ExecutePass(TransferNormalsPassData data, RasterGraphContext context)
    {
        data.mat.SetTexture("_SecondaryDepthNormals", data.normals);
        data.mat.SetTexture("_SecondaryDepth", data.depth);
        context.cmd.DrawProcedural(Matrix4x4.identity, data.mat, 0, MeshTopology.Triangles, 3, 1);
    }

    public void Dispose()
    {
        CoreUtils.Destroy(copyDepthMat);
        CoreUtils.Destroy(transferDepthNormalsMat);
    }

    protected class RenderNormalsPassData
    {
        internal RendererListHandle rendererListHandle;
    }

    private class CopyPassData
    {
        internal Material mat;
        internal TextureHandle depth;
    }

    private class TransferNormalsPassData
    {
        internal Material mat;
        internal TextureHandle normals;
        internal TextureHandle depth;
    }
}
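And just to make the “preferred” version concrete, this is roughly the shape I was hoping for. The AddCopyPass helper should be the one from UnityEngine.Rendering.RenderGraphModule.Util, but I haven’t verified that it accepts a depth-format texture, so consider this a sketch rather than working code:

```csharp
// Sketch of the streamlined flow (UNVERIFIED): render the overlay straight into the camera
// targets, then snapshot only the depth for later passes to sample.
TextureHandle overlayDepthCopy = UniversalRenderer.CreateRenderGraphTexture(
    renderGraph, depthDesc, "Overlay Depth Copy", false);

// The render pass would write into the real camera textures instead of intermediates:
// builder.SetRenderAttachment(resourcesData.cameraNormalsTexture, 0);
// builder.SetRenderAttachmentDepth(resourcesData.activeDepthTexture);

// ...then copy just the depth once. If AddCopyPass rejects depth formats, a raster pass with
// a copy-depth material (like the one above) would be the fallback.
renderGraph.AddCopyPass(resourcesData.activeDepthTexture, overlayDepthCopy, passName: "Copy Overlay Depth");
```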
So, here are the main issues:
- Why can’t I enable MSAA for these textures?
- Why can’t I create a texture in the R8G8B8A8_SNorm format the depth normals map uses? I can’t find an equivalent within RenderTextureFormat, and I don’t know how to create textures using GraphicsFormat.
- To be honest, I would’ve preferred not needing an intermediate depth texture at all: just render directly into the depth texture first, copy it there, and then let the rest of the depth normals prepass occur like normal. However, RenderPassEvent doesn’t give you that level of granularity when it comes to sequencing. Are these restrictions there for good practice/parallelism/GPU reasons, or are they just arbitrary?
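On the second point above, one thing I did find while writing this up (unconfirmed whether it actually behaves for a depth normals target, so this is me guessing): RenderTextureDescriptor has overloads that take a GraphicsFormat directly, which would bypass the RenderTextureFormat enum entirely.

```csharp
using UnityEngine.Experimental.Rendering; // GraphicsFormat lives here

// ASSUMPTION on my part that this is the right route: RenderTextureDescriptor has a
// constructor (and a graphicsFormat property) that takes a GraphicsFormat directly.
RenderTextureDescriptor snormDesc = new(
    cameraData.cameraTargetDescriptor.width,
    cameraData.cameraTargetDescriptor.height,
    GraphicsFormat.R8G8B8A8_SNorm, 0);

// Probably worth guarding too, since SNorm render targets aren't supported everywhere -
// something like SystemInfo.IsFormatSupported(GraphicsFormat.R8G8B8A8_SNorm, FormatUsage.Render).
```

If that’s the wrong way to go about it, corrections are very welcome.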
Thank you very much for reading all this, any and all help is appreciated.