Introduction of Render Graph in the Universal Render Pipeline (URP)

Yay more coexistence!

3 Likes

Is it possible to mix compute shader dispatches with other raster commands at the same time?
I don’t understand why there are separate “RasterCommandBuffer” and “ComputeCommandBuffer” types, and there are no examples of “ComputeCommandBuffer”.

For example, here is pseudocode for what I am currently using:

cmd.DrawProcedural //rendering shoreline mask
cmd.DispatchCompute //compute foam relative to the shoreline mask
cmd.DispatchCompute //compute blur pass
cmd.BlitTriangle //render foam to screen

To be able to merge subpasses optimally and guarantee optimal render target setup, we have Raster passes that only allow rasterization operations (Draw, etc.) and Compute passes that allow dispatches. You shouldn’t be able to do a dispatch in a raster pass, because that would break the render pass setup (talking in terms of Vulkan subpasses, where a render pass is made up of a set of subpasses). So the API is stricter and requires you to use the type of pass that matches your need.

In your case you can do what you need by scheduling RasterPass->ComputePass->RasterPass.

Pseudocode:

using (var builder = renderGraph.AddRasterRenderPass<PassData1>("DrawProceduralPass", out var passData))
{
    // initialize pass
    // ...

    builder.SetRenderFunc((PassData1 data, RasterGraphContext context) =>
    {
        context.cmd.DrawProcedural(...);
    });
}

using (var builder = renderGraph.AddComputePass<PassData2>("DispatchesPass", out var passData))
{
    // initialize pass
    // ...

    builder.SetRenderFunc((PassData2 data, ComputeGraphContext context) =>
    {
        // do stuff
        context.cmd.DispatchCompute(...);

        // do more stuff
        context.cmd.DispatchCompute(...);
    });
}

using (var builder = renderGraph.AddRasterRenderPass<PassData3>("BlitPass", out var passData))
{
    // initialize pass
    // ...

    builder.SetRenderFunc((PassData3 data, RasterGraphContext context) =>
    {
        Blitter.BlitTexture(...);
    });
}

We are also adding samples to show ComputePass usage.

3 Likes

Hi. Are there any plans to use Burst, or better yet more DOTS tech, to take Render Graph performance to the next level?

GPU performance still needs much more optimization. It is currently very slow on mobile platforms, especially Android. On an Android Mi 9T Pro (Snapdragon 855), GPU stalling is around 13 ms+. See case IN-56966 for a repro project.


No. We are adding caching of the compiled graph so the RG compiler only needs to run when you modify the graph, not every frame. This removes most of the per-frame RG CPU cost.

6 Likes

I see. How about GPU cost? Is it possible to significantly reduce the GPU cost for case IN-56966 in the previous post (#25) too?

That case seems to be specific to entities graphics. It is unrelated to RenderGraph. The benefits should be the same.

Is this in HDRP too? Seems like more or less free performance.

Yes, the compiler caching will work for both URP and HDRP. The compiler time that is removed is not that significant on high end platforms with fast CPUs though, somewhere between 0.1-0.3ms, but every bit helps of course.

1 Like

Hi everyone,
an update on our progress. The stabilization work on RenderGraph is progressing well. We decided to ship the new and improved URP version with RenderGraph in Unity 2023.3.a13. The RenderGraph checkbox is visible in the global settings from that version on. Currently, RG is still off by default, but we expect to have RG on by default around the early 2023.3 beta. Since we are still in the alpha phase, we are making some last changes to the APIs based on the feedback we have received from internal and external testing. Although we are entering the last weeks before beta, we would still like to hear your feedback if you have any. You can see the last bits landing in the graphics repo.

We are still working on the RG compiler caching. We first needed to refactor the compiler, and we are completing that now. Additionally, we’re further optimizing the main-thread CPU cost of URP to make room for RenderGraph. Next, we’ll start adding more helper functions to reduce the amount of boilerplate that you would need to write with the new lower-level API.

12 Likes

Hi,

I am trying to use the new Blitter method for full-screen rendering, and I cannot use it as a drop-in replacement for the mesh-based method. It does not seem to behave the same: I cannot write to the render texture’s alpha channel.

I need to upgrade to the new Blitter due to this warning:
warning CS0618: ‘RenderingUtils.fullscreenMesh’ is obsolete: ‘Use Blitter.BlitCameraTexture instead of CommandBuffer.DrawMesh(fullscreenMesh, …)’

I followed the notes below, but even in this simple sample, when I write the result to a render texture with transparency and change the alpha in the shader, the result is always opaque:
https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@14.0/manual/customize/blit-overview.html

I have attached a photo showing how I try to pass alpha to the render texture, and the result, with a small modification to the code suggested in the Unity documentation above.

Is the Render Graph going to resolve this issue and make everything compatible? Also, will the Blitter get an upgrade to work correctly with transparency, so that current renderer features can be converted?

Other relevant threads:
https://discussions.unity.com/t/849588
https://discussions.unity.com/t/899051
https://discussions.unity.com/t/910592
https://discussions.unity.com/t/889379
https://discussions.unity.com/t/807274

Also, if the CommandBuffer.DrawMesh(fullscreenMesh) method gets deprecated, does this mean that all our effects will stop working? The Blitter is not an option and does not fully replace the mesh-based method, as the warning falsely suggests. Is there any plan to handle that, or will it be a showstopper for using Unity?

Thanks

Currently, RG is still off by default but we expect to have RG on by default around the early 23.3 beta

Hello, what will happen when a user upgrades their project from 2023.2 to 2023.3 (where RenderGraph is enabled by default) if the project contains assets whose renderer features/passes don’t support RenderGraph?

I attempted to run a Unity 2022.3 project in Unity 2023.3 with RG turned on. It seems that if any asset’s renderer feature/pass doesn’t support RenderGraph, the pass is ignored with a warning log stating “RecordRenderGraph is not implemented; the pass ____ won’t be recorded in the current RenderGraph.”

I understand that RenderGraph is important for auto RenderTexture (RT) management, and I appreciate it. However, based on the example RenderGraph code from previous replies, it appears that all renderer features/passes must be entirely rewritten in a new manner. This could potentially take months if there are hundreds of renderer features/passes to support.

It isn’t easy for an asset developer to ensure their assets run on all Unity versions. For example:

#if UNITY_2023_3_OR_NEWER
// RG rendering... (many of us, including me, may not have a good idea of what to write here)
#elif UNITY_2022_2_OR_NEWER
// RTHandle rendering...
#else
// RenderTargetHandle rendering...
#endif

Therefore, I would appreciate some advice from Unity’s staff for asset developers:

  • Should asset developers be concerned about the release of RenderGraph in 2023.3?
  • How challenging is it to support RenderGraph, especially compared to the “RenderTargetHandle to RTHandle” transition?
  • Is there any guideline for supporting RenderGraph, similar to the document about converting from RenderTargetHandle to RTHandle?

Thank you.

4 Likes

From my understanding, the Graph is a replacement for renderer features, and we will need to redo everything in the Graph.

That is why it is described as a graph-based URP version. It will probably not be the same as the previous one at all.

The biggest question is how long these APIs will keep changing without a solid, generic base that makes the transition transparent, rather than asking us to redo everything every few months.

A project that upgrades will have RenderGraph turned off, we’ll likely call this “Compatibility mode”. This will give you time to upgrade your RenderFeatures.

There will be a compatibility mode but there will be strong incentive to upgrade your assets to the new API. Since the compatibility mode is there, you’ll have time to upgrade.

We’ve upgraded a number of assets and it can be quite straightforward. We’ll share more docs on how to upgrade efficiently soon.

We’ll have a lot of documentation and samples. We’ll also add helper functions to make it more straightforward. We expect to share more soon in December.

2 Likes

So what if some of the effects are not upgradable? Or take years of development time to upgrade, given that it is a totally different scheme?

How are we supposed to convert complex code and rendering without spending an infinite amount of time redoing everything, in a scenario that may not be convertible at all?

Also, writing code for the Graph is extremely cumbersome; it feels like it slows development down by a factor of ten, which will make it extremely hard to convert anything.

If the renderer features are not working, then there effectively is no compatibility mode; it is as good as non-existent, since anyone can turn the Graph on and see a broken project. Having this mode does not help with how badly users’ projects will break.

Has the team really thought this change through well enough, given the millions of projects that could potentially be broken forever?

Plus, everything graph-based is usually vastly slower than plain code. Has this been addressed, and is it guaranteed that a converted effect will make sense and not be much slower and useless?

Also, is there a single reason, besides not wanting to spend extra time, not to let the existing renderer features be adapted to run on the new system and work together with it?

Because frankly, I would rather work with code and not slow down my development for months trying to create in the Graph what can be done in code in minutes or hours.

Thanks

Is there a sample/example which copies the temporary buffer back over the camera’s color buffer? Arrived here from https://discussions.unity.com/t/895405

1 Like

We are very careful with the introduction of RenderGraph and are aware of the implications. We want to help users and the ecosystem with this transition and let everyone benefit from this new URP backend over time. For developers, the main difference is the adjustments to the ScriptableRenderPass API. Please have a look at the preview documentation describing the main differences.

It has nothing to do with a visual graph system like ShaderGraph, VFX Graph or Visual Scripting. It is sometimes also referred to as a Frame Graph (see some resources around that Design Pattern here).

We will provide dozens of examples of how to transition your code, and we also plan to add more utility functions to avoid unnecessary boilerplate code.

2 Likes

Every effect is upgradable. Some existing effects might make use of a hack and work on some platforms while actually relying on unsupported behavior. These might require rework to function properly with the new API, which offers more guardrails and doesn’t allow unsupported behavior.
Even complex RenderFeatures should take at most a few days to convert once all our learning materials are ready to share.

RenderGraph is a new and improved API to code RenderPasses. It’s not a visual graph. You can find the info in the document we shared above.

Yes, very much so. We discussed it with many experts and many Asset Store providers. Unfortunately, any change requires work, but we made sure that the benefits for our users and the wider community are worth it, and that our approach minimizes both the work and the complexity required to upgrade. We have also ensured that there is a compatibility mode so that you have time to upgrade your assets.

1 Like

This is an example CopyRenderFeature.cs that records a rendering command to copy, or blit, the contents of the source texture to the color render target of the render pass.

Maybe this is a helpful example for you?

using UnityEngine;
using UnityEngine.Experimental.Rendering.RenderGraphModule;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CopyRenderFeature : ScriptableRendererFeature
{
    // Render pass that copies the camera’s active color texture to a destination texture.
    // To simplify the code, this sample does not use the destination texture elsewhere in the frame. You can use the frame debugger to inspect its contents.
    class CopyRenderPass : ScriptableRenderPass
    {
        // This class stores the data that the render pass needs. The RecordRenderGraph method populates the data and the render graph passes it as a parameter to the rendering function.
        class PassData
        {
            internal TextureHandle copySourceTexture;
        }

        // Rendering function that generates the rendering commands for the render pass.
        // The RecordRenderGraph method instructs the render graph to use it with the SetRenderFunc method.
        static void ExecutePass(PassData data, RasterGraphContext context)
        {
            // Records a rendering command to copy, or blit, the contents of the source texture to the color render target of the render pass.
            // The RecordRenderGraph method sets the destination texture as the render target with the UseTextureFragment method.
            Blitter.BlitTexture(context.cmd, data.copySourceTexture, new Vector4(1, 1, 0, 0), 0, false);
        }

        // This method adds and configures one or more render passes in the render graph.
        // This process includes declaring their inputs and outputs, but does not include adding commands to command buffers.
        public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
        {
            string passName = "Copy To Debug Texture";

            // Add a raster render pass to the render graph. The PassData type parameter determines the type of the passData out variable
            using (var builder = renderGraph.AddRasterRenderPass<PassData>(passName, out var passData))
            {
                // UniversalResourceData contains all the texture handles used by URP, including the active color and depth textures of the camera

                UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();

                // Populate passData with the data needed by the rendering function of the render pass

                // Use the camera’s active color texture as the source texture for the copy
                passData.copySourceTexture = resourceData.activeColorTexture;

                // Create a destination texture for the copy based on the settings, such as dimensions, of the textures that the camera uses.
                // Set msaaSamples to 1 to get a non-multisampled destination texture.
                // Set depthBufferBits to 0 to ensure that the CreateRenderGraphTexture method creates a color texture and not a depth texture.
                UniversalCameraData cameraData = frameData.Get<UniversalCameraData>();
                RenderTextureDescriptor desc = cameraData.cameraTargetDescriptor;
                desc.msaaSamples = 1;
                desc.depthBufferBits = 0;

                // For demonstrative purposes, this sample creates a transient, or temporary, destination texture.
                // UniversalRenderer.CreateRenderGraphTexture is a helper method that calls the RenderGraph.CreateTexture method.
                // It simplifies your code when you have a RenderTextureDescriptor instance instead of a TextureDesc instance.
                TextureHandle destination =
                    UniversalRenderer.CreateRenderGraphTexture(renderGraph, desc, "CopyTexture", false);

                // Declare that this render pass uses the source texture as a read-only input
                builder.UseTexture(passData.copySourceTexture);

                // Declare that this render pass uses the temporary destination texture as its color render target.
                // This is similar to cmd.SetRenderTarget prior to the RenderGraph API.
                builder.UseTextureFragment(destination, 0);

                // RenderGraph automatically determines that it can remove this render pass because its results, which are stored in the temporary destination texture, are not used by other passes.
                // For demonstrative purposes, this sample turns off this behavior to make sure that RenderGraph executes the render pass.
                builder.AllowPassCulling(false);

                // Set the ExecutePass method as the rendering function that RenderGraph calls for the render pass.
                // This sample uses a lambda expression to avoid memory allocations.
                builder.SetRenderFunc((PassData data, RasterGraphContext context) => ExecutePass(data, context));
            }
        }
    }

    CopyRenderPass m_CopyRenderPass;

    public override void Create()
    {
        m_CopyRenderPass = new CopyRenderPass();

        // Configure the injection point in which URP runs the pass
        m_CopyRenderPass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    // URP calls this method every frame, once for each Camera. This method lets you inject ScriptableRenderPass instances into the scriptable Renderer.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_CopyRenderPass);
    }
}
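
To also address the earlier question about copying the temporary buffer back over the camera’s color buffer: you can schedule a second raster pass in the same RecordRenderGraph method that reverses the direction of the blit. Below is a minimal, untested sketch based on the same APIs as the sample above; it assumes the `destination` handle, `PassData` class, and `ExecutePass` method from that sample, and the pass name is hypothetical.

```csharp
// Hypothetical follow-up pass inside the same RecordRenderGraph call:
// blit the temporary texture back over the camera's active color target.
using (var builder = renderGraph.AddRasterRenderPass<PassData>("Copy Back To Camera", out var passData))
{
    UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();

    // The temporary texture is now the read-only source...
    passData.copySourceTexture = destination;
    builder.UseTexture(passData.copySourceTexture);

    // ...and the camera's active color texture is the color render target.
    builder.UseTextureFragment(resourceData.activeColorTexture, 0);

    // Reuse the same rendering function; it blits the source to the current render target.
    builder.SetRenderFunc((PassData data, RasterGraphContext context) => ExecutePass(data, context));
}
```

Because this pass declares a read of `destination` and a write to the camera color target, the render graph should order it after the first copy pass automatically, and no AllowPassCulling(false) call should be needed since its output is consumed by the rest of the frame.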
4 Likes