Introduction of Render Graph in the Universal Render Pipeline (URP)

@tatoforever, still processing, we will contact you shortly.

@Deadcow, I raised the issue internally, we should get back to you.

@FikaProductions, nice! Glad to hear that!

@alexanderameye,

How can we override the color that is used?

How do you create your texture? I’m not sure why the ClearRenderTarget() call doesn’t work in your case, but if the texture is imported, you should be able to pass the clear color in the ImportResourceParams argument of the RenderGraph.ImportTexture() overload. If it is created through Render Graph, you should be able to specify the clear color in the texture descriptor passed to RenderGraph.CreateTexture(). Something like this should work:

            var textureDesc = resourceData.cameraOpaqueTexture.GetDescriptor(renderGraph);
            textureDesc.clearBuffer = true;       // ask Render Graph to clear on first use
            textureDesc.clearColor = Color.green; // the color to clear to
            var textureHandle = renderGraph.CreateTexture(textureDesc);

            builder.SetRenderAttachment(textureHandle, 0);
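
For the imported case, a minimal sketch (assuming an existing RTHandle, here called m_ImportedRTHandle, created elsewhere):

            // Hedged sketch of the import path: ImportResourceParams carries the
            // clear color, and clearOnFirstUse asks Render Graph to do the clear.
            var importParams = new ImportResourceParams
            {
                clearOnFirstUse = true,
                clearColor = Color.green,
                discardOnLastUse = false
            };
            var importedHandle = renderGraph.ImportTexture(m_ImportedRTHandle, importParams);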

In both cases, Render Graph will do the clear for you the first time the resource is used and you won’t need to call ClearRenderTarget() in the execute node anymore.

I would want the first pass to get culled if the renderers in the renderer list are not visible on screen, and similarly the secondary pass should get culled if the first pass is culled

The RG compiler used to cull passes in this exact situation, but in most cases it turned out to be a performance loss because it created a direct dependency between the native culling process and the RG compiling step: RG had to wait for the culling results before it could start compiling. So this is not something we will implement in the near future.

Having said that, maybe you can try to implement this specific behavior in your custom pass if you deem it necessary. If so, you might want to have a look at ScriptableRenderContext.QueryRendererListStatus.
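
For reference, a rough sketch of what that check could look like (untested; it assumes a point in your feature where both a ScriptableRenderContext and the RendererList built from your culling results are available, and all names here are placeholders):

    // Hedged sketch: only enqueue the passes when the renderer list has content.
    // Note the status can also be kRendererListProcessing if async culling
    // has not finished yet.
    RendererListStatus status = context.QueryRendererListStatus(rendererList);
    if (status == RendererListStatus.kRendererListPopulated)
    {
        renderer.EnqueuePass(m_FirstPass);
        renderer.EnqueuePass(m_SecondPass); // depends on the first pass's output
    }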


Thank you. I added a bit more detail in my thread and am still researching.

I’m attempting something straightforward but keep bumping into unexpected limitations.

I’ve got a procedural skybox that’s expensive to render. I’ve also got opaque objects which fade into that skybox. To avoid dealing with transparency issues and to avoid re-rendering skybox pixels, I want to render the skybox once and have opaque shaders copy from it later. So the gist is:

  • Before opaques, render skybox shader* to a render texture
  • Assign render texture to global shader variable _SkyboxTexture
  • Render opaque geometry with shaders that make use of _SkyboxTexture

*To be clear, this is just rendering a full-screen material, not the built-in skybox environment

I’ve tried and failed to assemble these steps from samples like:
CopyFeature
OutputTextureRenderFeature
TextureRefRendererFeature
etc.

What’s the render-graph approved way to do this?


Hello,
I’ve been trying to move my project to RenderGraph and so far, so good, except when I decided to change the way I’m drawing meshes in my game: scrapping individual meshes and using RenderMeshInstanced. (I was using DrawMesh originally.)

This broke my RenderGraph PostFX. Previously, I just had to use CreateRendererList to make a pass with DrawRendererList, and my render graph pass would pick up what I had drawn earlier in the script with DrawMesh. Now, despite using the same materials, shaders, etc. with RenderMeshInstanced, no instanced meshes are going through the RenderGraph PostFX script.

I tried to use cmd.DrawMeshInstanced in a pass, but nothing happens. I’d have used an UnsafePass if it had an ExecuteCommandBuffer, but it doesn’t. I also tried to follow the DrawMeshInstanced with a DrawRendererList (same parameters that work without instancing), but that’s worse (I get a bunch of errors in Play mode).

==> Is it at all possible to create a custom RenderGraph pass that draws instanced meshes into a render texture with a custom shader?

Got it working: my unlit instanced shader was missing a LightMode tag set to an existing light mode ("LightMode" = "UniversalForward"). I don’t know why it works without the tag when not using instancing…


Try looking at how Unity creates the Camera Opaque Texture directly inside its sources; the file to look for is:
Library/PackageCache/com.unity.render-pipelines.universal/Runtime/Passes/CopyColorPass.cs

At the end there’s a ‘RenderInternal’ method that takes care of a couple of key things:

  • Sets allow-culling to false (since it’s likely that no later pass uses your texture, the pass would otherwise get culled and never rendered)
  • Sets the texture as a global AFTER the pass renders

I also take an extra step and allow ‘global modifications’; look for the builder method that allows global state changes.

Now, this is still quite confusing: depending on the type of render pass added, you might or might not have access to all the builder and command buffer properties.
It depends on whether you are executing an Unsafe Pass, a Raster Pass, or just a “Render Pass” (I’ve never used this one).
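
To make that concrete for the skybox question above, here is a minimal sketch of such a pass. This is not the actual CopyColorPass code; m_SkyboxMaterial is assumed to be a full-screen material stored on the feature, and the pass and property names are illustrative:

    class PassData
    {
        public Material skyboxMaterial;
    }

    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        var resourceData = frameData.Get<UniversalResourceData>();

        // Derive the texture description from the active color target.
        var desc = resourceData.activeColorTexture.GetDescriptor(renderGraph);
        desc.name = "_SkyboxTexture";
        desc.depthBufferBits = DepthBits.None;
        desc.msaaSamples = MSAASamples.None;
        TextureHandle skyboxTexture = renderGraph.CreateTexture(desc);

        using (var builder = renderGraph.AddRasterRenderPass<PassData>("Render Skybox To Texture", out var passData))
        {
            passData.skyboxMaterial = m_SkyboxMaterial;
            builder.SetRenderAttachment(skyboxTexture, 0);

            // Key point 1: nothing later in the graph reads this handle directly,
            // so stop the pass from being culled.
            builder.AllowPassCulling(false);

            // Key point 2: publish the result to the global _SkyboxTexture
            // AFTER the pass has rendered.
            builder.SetGlobalTextureAfterPass(skyboxTexture, Shader.PropertyToID("_SkyboxTexture"));

            builder.SetRenderFunc((PassData data, RasterGraphContext context) =>
                Blitter.BlitTexture(context.cmd, new Vector4(1, 1, 0, 0), data.skyboxMaterial, 0));
        }
    }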

Hope that helps and happens to be in part what you were missing.


Thank you, I will give this a shot.


Maybe we’re the only ones, but so far RenderGraph has been nothing but a buggy dumpster fire leading to nothing but hours spent trying to figure out why iOS builds won’t build or run and Android phone builds keep crashing (“Missing Vulkan framebuffer attachment image” errors and various others).

Also, the massive performance difference between each 6000.x version is rather disturbing (I guess the person doing the integration testing didn’t survive the last round of layoffs).

And no, I can’t be arsed to spend a day on every little crash or bug during crunch to create a clean version of our project to send with bug reports; I’d be spending more time doing Unity’s job than my own, for which I actually get paid.

Anyway, I’m sure it will get better, but so far, we haven’t been able to use RenderGraph in a single project yet.

Hey @svenneve,

Thanks for your interesting feedback. Could you elaborate a bit more on maybe one or two specific issues so we can take a look? Right now, I understand that you are unhappy with the current state of RG, but I would like to know more so we can improve it.

Also, the massive performance difference between each 6000.x version is rather disturbing

Can you clarify this as well? Which 6000.x versions are you comparing? What kind of performance regression did you notice? Is it specific to RG? It would also be helpful to specify which graphics APIs and devices were used.

Thanks!


I was previously using cameraData.cameraTargetDescriptor and UniversalRenderer.CreateRenderGraphTexture, where the descriptor is of type RenderTextureDescriptor instead of TextureDesc and doesn’t have the clearColor property. I didn’t know that cameraTargetDescriptor could be considered deprecated, but I do now!

Clearing now works as expected.

Thank you so much for the response, you really helped me out.


I don’t like the idea of setting many constant properties and textures that won’t change in the Execute function (as I understand it, it runs every frame). What can I do about that? Can I bind textures only once?

Hey @miurev, this is not possible with the Render Graph API. Were you able to do it previously with some other Unity API? I have asked around and it doesn’t seem that this is currently possible with the Unity engine, but I might be wrong. If I understand correctly, you are looking for something similar to the D3D12 bundle concept (Creating and recording command lists and bundles - Win32 apps | Microsoft Learn), correct?


Okay, thanks for the answer. Yes, you understood my idea correctly; I was looking for bundles like that.

Another question has also come up. I have the same Scriptable Render Pass for a compute shader dispatch. As a result I get a texture, but it doesn’t have any mip maps. RenderTexture has a call to generate mips, but there is no such call on context.cmd or anywhere else to do this for a TextureHandle. And for some reason the auto-generate-mips option (which I can set in the texture descriptor) also does nothing. Is there any way to make this API call, or to write mip maps somehow directly from the compute shader?

The only workaround I came up with isn’t great: I can bind different mip levels as different textures and try to write to them, but that hurts the flexibility of the compute shader a lot. If you know any way to handle this problem, please help.
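
For what it’s worth, a hedged sketch of that workaround (untested; the kernel name “Downsample”, the _Source/_Dest property names, and the m_DownsampleShader/mipMappedTexture variables are all illustrative assumptions):

    class GenerateMipsPassData
    {
        public ComputeShader shader;
        public TextureHandle texture; // created with useMipMap = true in its descriptor
        public int mipCount;
        public int width, height;
    }

    // Inside RecordRenderGraph, after creating the mipmapped texture:
    using (var builder = renderGraph.AddComputePass<GenerateMipsPassData>("Generate Mips", out var passData))
    {
        passData.shader = m_DownsampleShader;
        passData.texture = mipMappedTexture;
        passData.mipCount = mipCount;
        passData.width = width;
        passData.height = height;

        builder.UseTexture(passData.texture, AccessFlags.ReadWrite);
        builder.SetRenderFunc((GenerateMipsPassData data, ComputeGraphContext ctx) =>
        {
            int kernel = data.shader.FindKernel("Downsample");
            // Walk the mip chain: read mip N-1, write mip N as the output UAV.
            for (int mip = 1; mip < data.mipCount; mip++)
            {
                ctx.cmd.SetComputeTextureParam(data.shader, kernel, "_Source", data.texture, mip - 1);
                ctx.cmd.SetComputeTextureParam(data.shader, kernel, "_Dest", data.texture, mip);
                int w = Mathf.Max(1, data.width >> mip);
                int h = Mathf.Max(1, data.height >> mip);
                ctx.cmd.DispatchCompute(data.shader, kernel, (w + 7) / 8, (h + 7) / 8, 1);
            }
        });
    }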

About previous Unity APIs: before, when I was driving the ComputeShader from a MonoBehaviour, I could just bind a texture to the compute shader and then use that texture at any point in the game until unbinding it (I don’t know what happens under the hood; it might be that it is still bound by a command buffer every frame, but from my graphics programming knowledge it should be separate from the render pipeline invocation?). If you can clarify how that actually works, that would help a lot.
That doesn’t look like bundles in this case though, but the texture still seemed to be attached only once.

BLANK SCREEN UPON BUILD; an older working build also broke at the same time

Postprocessing Final Blit Pass/Draw UIToolkit/uGUI Overlay: Attachment 0 was created with 2 samples but 1 samples were requested

NextSubpass : Not in a renderpass
EndRenderpass : Not in a render pass

macOS, Unity 6, URP, GPU Resident Drawer with APV.

Requesting help, thank you in advance!

Edit: Workaround fix: enable MSAA in the Render Pipeline Asset.

Hi,

Which Unity version are you using?


I’ve been trying to get into writing features and passes for URP 17 but I’ve been struggling a lot!

I’m currently trying to make a render feature that renders all objects within certain bounds around the camera into a RenderTexture, writing to another texture via a UAV.

  • I want to cull all renderers whose bounds are smaller than a given size. Ideally I’d do the culling myself in a Burst job.
  • I’m rendering from a fake camera, just like how shadows are rendered.
  • I need a way to render my meshes such that all the triangles are split up individually, without requiring geometry shaders.
  • I have three options: 1. grab each mesh’s vertex and index buffers and draw the mesh with a CommandBuffer.DrawProcedural call that uses the index buffer to assemble the attributes manually (a rough sketch of this option follows this list); 2. using Burst and the mesh idea, figure out all the meshes I need to render and generate duplicated versions on the CPU; 3. same as 2, but generating the mesh on the GPU.
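
For reference, a hedged C#-side sketch of option 1 (untested; splitMaterial, cmd, and the buffer/property names are assumptions, and the vertex shader would have to fetch and assemble the attributes from the raw buffers itself using SV_VertexID):

    // Make the mesh buffers readable as raw buffers (must be set before access).
    mesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw;
    mesh.indexBufferTarget |= GraphicsBuffer.Target.Raw;

    GraphicsBuffer vertexBuffer = mesh.GetVertexBuffer(0); // caller should Dispose() these later
    GraphicsBuffer indexBuffer = mesh.GetIndexBuffer();

    var props = new MaterialPropertyBlock();
    props.SetBuffer("_VertexBuffer", vertexBuffer);
    props.SetBuffer("_IndexBuffer", indexBuffer);
    props.SetInt("_VertexStride", mesh.GetVertexBufferStride(0));

    // One non-indexed draw: the vertex shader uses SV_VertexID to look up the
    // index buffer, so every triangle gets its own three vertices.
    cmd.DrawProcedural(Matrix4x4.identity, splitMaterial, 0,
        MeshTopology.Triangles, (int)mesh.GetIndexCount(0), 1, props);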

The issues are that:

  • There doesn’t seem to be any way to access the renderers in a renderer list, or any way to get a list of all the active renderers in a scene (without requiring the user to manually add scripts to every object).
  • Even if I had a list of all the renderers, culled them, and generated a “split up” version of them, there doesn’t seem to be any way for me to batch all the draw calls of my new split-up meshes while ensuring they draw to my UAV. It looks like I’d have to issue a commandBuffer.Draw call for every single mesh.

What should I do in this case?

Unity 6000.32 first, then I upgraded to 6000.38; same problem.

It was all working before, but I was repeatedly trying to build my game to get around other errors like an APV null reference.

EDIT: I found another post that fixed it: a depth-normals pre-pass as an option, separate from SSAO; more details at the bottom.

I’m having a bit of trouble understanding the new system. I’m trying to get the normal and/or depth output and copy it to a render texture to use later for a custom billboard shader. I’ve figured out how to pass in an existing render texture and copy the default color output to it, but I can’t get a version using normals to work. I’ve also gotten depth using a color format, but not through a depth format.

None of the documentation I can find shows how to use depth or normals. I’ve also looked at some of the samples, including DepthNormalOnlyPass.cs, but what I can find in the documentation doesn’t really cover much of it. I’ve been stuck on this for an entire week and would really appreciate a pointer or two.

    //this is how I set up the pass in my ScriptableRendererFeature
    public override void SetupRenderPasses(ScriptableRenderer renderer,
                                        in RenderingData renderingData)
    {
        if (renderingData.cameraData.cameraType == CameraType.Game)
        {
            CustomNormalPass.ConfigureInput(ScriptableRenderPassInput.Normal);
            //CustomNormalPass.SetTarget(renderer.cameraColorTargetHandle);
            //I haven't noticed if this actually does anything
            CustomNormalPass.SetTarget(renderer.cameraDepthTargetHandle);
        }
    }
    //renderPassEvent is also set to RenderPassEvent.AfterRenderingPrePasses which I think is correct
    //which renders right after draw depth

    //this part is in my ScriptableRenderPass
    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        string passName = "DepthNormals To RenderTexture";

        UniversalRenderingData renderingData = frameData.Get<UniversalRenderingData>();
        UniversalCameraData cameraData = frameData.Get<UniversalCameraData>();
        UniversalLightData lightData = frameData.Get<UniversalLightData>();

        using (var builder = renderGraph.AddRasterRenderPass<PassData>(passName,
            out var passData))
        {
            UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();

            RTHandle rtHandle = RTHandles.Alloc(_outputRenderTexture);
            TextureHandle textureToWrite = renderGraph.ImportTexture(rtHandle);

            RenderTextureDescriptor desc = cameraData.cameraTargetDescriptor;

            desc.msaaSamples = 1;
            desc.depthStencilFormat = UnityEngine.Experimental.Rendering.GraphicsFormat.D32_SFloat; //needs a D* depth format; R32_SFloat is a color format
            desc.depthBufferBits = 32;

            //using activeDepthTexture gives me "multisampled texture bound to non-multisampled sampler" error
            //changing msaaSamples doesnt fix the multisampled error
            //I dont know how to use cameraNormalsTexture

            //I've only figured out how to use cameraDepthTexture
            //passData.copySourceTexture = resourceData.cameraNormalsTexture;
            //passData.copySourceTexture = resourceData.activeDepthTexture;
            passData.copySourceTexture = resourceData.cameraDepthTexture;

            builder.UseTexture(passData.copySourceTexture, AccessFlags.Read);

            //I got depth color working with this earlier but not with SetRenderAttachmentDepth
            //builder.SetRenderAttachment(textureToWrite, 0, AccessFlags.Write);
            builder.SetRenderAttachmentDepth(textureToWrite, AccessFlags.Write);

            builder.AllowPassCulling(false);
            builder.AllowGlobalStateModification(true);

            // Set the ExecutePass method as the rendering function that render graph calls
            // for the render pass. 
            builder.SetRenderFunc((PassData data, RasterGraphContext context)
                => ExecutePass(data, context));
        }
    }


    static void ExecutePass(PassData data, RasterGraphContext context)
    {
        Blitter.BlitTexture(context.cmd, data.copySourceTexture,
            new Vector4(1, 1, 0, 0), 0, false);
    }

And here is where I’m using the depth-normals texture in my shader:

              float depthValue; float3 normalValues;
              DecodeDepthNormal(tex2D(_NormalTex, i.uv), depthValue, normalValues);
              fixed4 testNormal = tex2D(_NormalTex, i.uv);

              float4 depthColors = float4(depthValue, depthValue, depthValue, 1.0f);
              float4 normalColors = float4(normalValues, 1.0f);

              //all three just return black
              return depthColors;
              return normalColors;
              return testNormal;

Here’s how I set up the render texture; I can have color or depth enabled, but not both.

I just found the solution: the problem was that the depth pass was being used by default instead of the depth-normals pass. If anyone else is having the same problem, add this code to the respective methods in your ScriptableRendererFeature:

        // Private Fields
        private EnableDepthNormalsPass m_SSAOPass = null;

        /// <inheritdoc/>
        public override void Create()
        {
            // Create the pass...
            if (m_SSAOPass == null)
            {
                m_SSAOPass = new EnableDepthNormalsPass();
            }
        }

        /// <inheritdoc/>
        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            bool shouldAdd = m_SSAOPass.Setup(renderer);
            if (shouldAdd)
            {
                renderer.EnqueuePass(m_SSAOPass);
            }
        }

        // The SSAO Pass
        private class EnableDepthNormalsPass : ScriptableRenderPass
        {
            internal bool Setup(ScriptableRenderer renderer)
            {
                ConfigureInput(ScriptableRenderPassInput.Normal);

                return true;
            }

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                // Do Nothing - we only need the DepthNormals to be configured in the Setup method
            }
        }

This fixed the normals, but now I’m getting the warning “EnableDepthNormalsPass does not have an implementation of the RecordRenderGraph method” spammed. If anyone knows how to disable this warning or bypass it, let me know.
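
One thing that may silence it (an untested sketch, based on the warning text): give EnableDepthNormalsPass an empty RecordRenderGraph override, so URP no longer falls back to the Compatibility-Mode Execute path:

            // Hedged sketch: ConfigureInput in Setup already requests the
            // depth-normals prepass, so there is nothing to record here.
            public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
            {
                // Intentionally empty.
            }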