Introduction of Render Graph in the Universal Render Pipeline (URP)

I doubt it, because I imagine the goal is to strip all of that from the code base entirely.

What I recommend is making sure the features you’re missing are there.

For the layman, only SSR is missing (which is an easy fix and more a consequence of forward rendering; SSR doesn’t work in forward even in the built-in pipeline). But any internal stuff is largely unknown unless you specifically use it.


Hello, I’ve been looking through the only UnsafePass sample, but I can’t see a way to access the ScriptableRenderContext from it, which I normally used in Unity 2022 LTS to perform culling manually during my rendering routine, like this:

// Build culling parameters from the camera and run culling through the context
ScriptableCullingParameters cullingParameters;
renderingData.cameraData.camera.TryGetCullingParameters(out cullingParameters);
renderingData.cullResults = context.Cull(ref cullingParameters);

What is the designated equivalent of that during an unsafe pass in RenderGraph?

Hi,

Is there any update on rendering to a 3D texture set with SetRandomWriteTarget, using a camera with a shader-replacement renderer feature, in RenderGraph?

Specifically, the functionality below:

Graphics.SetRandomWriteTarget(1, integerVolume);
voxelCamera.targetTexture = dummyVoxelTextureAAScaled;
//voxelCamera.RenderWithShader(voxelizationShader, "");
voxelCamera.RenderSingleCamera(); // camera with a shader-replacement renderer feature
Graphics.ClearRandomWriteTargets();

I use the render-to-3D-texture path to voxelize the world with a URP replacement shader for the LUMINA real-time global illumination system. This worked in the previous compatibility mode and in Unity 2022.3. Alternatively, I voxelize by toggling to BiRP, but I would rather use native URP.

Thanks

Hi everyone,

an update on our progress.

We’ve landed dozens of small fixes and optimizations to the URP RenderGraph adoption in the next patch release. The URP passes now declare their resource usage more precisely, making sure that RenderGraph can correctly optimize resource reuse and merge passes.

An important change is that global textures that you set through the RenderGraph API (builder.SetGlobalTextureAfterPass) will be automatically unset (set to a black texture) after the RenderGraph is done executing. This was mentioned here already. The clearing was previously behind a define. This will help you avoid hard-to-discover bugs where a frame accidentally uses the global texture of a previous frame. We noticed a number of such issues in URP. Those were not discovered by our automated testing because the test scenes are static, so the resources don’t change over frames. These are now all fixed.

As a reminder, when you create a TextureHandle using RenderGraph.CreateTexture, that handle is only valid until the RenderGraph finishes executing in that frame. Every camera currently has its own RenderGraph, so it’s only valid per frame, per camera. RenderGraph will assign/allocate an RTHandle/RenderTexture automatically, and that could (by coincidence) be the same as last frame, but that’s not guaranteed. Therefore, every global texture that is set using RenderGraph needs to be cleared to avoid bugs. This is now done automatically when you set a global with the RenderGraph API. However, the behavior of the other APIs that set a global texture is unchanged (commandBuffer.SetGlobalTexture, Shader.SetGlobalTexture). You can still use these for RTHandles that you manage yourself and import into the RenderGraph.
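To illustrate the lifetime difference, here is a minimal sketch (a fragment inside RecordRenderGraph; the texture names and the persistent RTHandle field are illustrative):

UniversalCameraData cameraData = frameData.Get<UniversalCameraData>();

// A RenderGraph-managed texture: the handle is only valid while this camera's
// graph executes this frame, and the underlying RenderTexture is pooled.
TextureDesc desc = new TextureDesc(cameraData.cameraTargetDescriptor.width,
                                   cameraData.cameraTargetDescriptor.height)
{
    format = GraphicsFormat.R8G8B8A8_UNorm,
    name = "_MyTransientTexture"
};
TextureHandle transient = renderGraph.CreateTexture(desc);
// If a pass publishes this via builder.SetGlobalTextureAfterPass, the global is now
// automatically unset once the graph finishes executing.

// An RTHandle you allocate and release yourself persists across frames; import it
// every frame, and the regular global-texture APIs keep their previous behavior for it.
TextureHandle imported = renderGraph.ImportTexture(m_MyPersistentRT); // hypothetical RTHandle field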

We also made a small change to the TextureDesc that provides the parameters to create a TextureHandle. It now has a single format (either for color or for depth stencil). This avoids subtle bugs: a TextureHandle only points to a single resource, either color or depth stencil, not up to two like a RenderTexture. It is recommended to use this TextureDesc.format instead of TextureDesc.colorFormat or TextureDesc.depthBufferBits to set the format. The latter legacy properties are slightly more expensive to call, and depthBufferBits does not give you the exact format you want. For example, when copying descriptors, desc1.depthBufferBits = desc2.depthBufferBits can result in desc1 having a different depthStencilFormat than desc2. When doing desc1.format = desc2.format, you can be sure it’s exactly the same.

Additionally, we fixed a number of bugs with regard to the depth stencil format of the _CameraDepthAttachment and the _CameraDepthTexture. URP allows you to configure the format of these resources on the Renderer, so they can be different. Be careful when using cameraData.cameraTargetDescriptor to get the depth stencil format of the current depth target: those are not necessarily the same, since that descriptor is for the backbuffer. It is much safer to use resource.GetDescriptor(renderGraph) to get the correct info for a resource, for example when making a copy.
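For example, a sketch of creating a compatible copy of the camera depth texture (the texture name is illustrative):

// Inside RecordRenderGraph: read the descriptor from the resource itself rather than
// from cameraData.cameraTargetDescriptor, so the exact depth-stencil format is preserved.
UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
TextureHandle cameraDepth = resourceData.cameraDepthTexture;

TextureDesc copyDesc = cameraDepth.GetDescriptor(renderGraph);
copyDesc.name = "_CameraDepthTextureCopy";
// copyDesc.format already holds the correct depth-stencil format, so there is no need
// to touch depthBufferBits, which cannot express the exact format.
TextureHandle depthCopy = renderGraph.CreateTexture(copyDesc);
// ... then fill depthCopy from cameraDepth in a pass (or with a copy helper, see below).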

The RenderGraph samples will be updated to reflect these best practices.

Currently, we are investigating a number of issues with MSAA. We are also looking into the use case of getting new CullResults; we are likely missing API here that we will add soon.

As always, you can get a better understanding by looking at the commits in the Graphics repo. The changes will ship in 6000.0.22f1.


Hi, in which Unity version are those changes planned to land?

Thanks

The post specified: “The changes will ship in 6000.0.22f1.”


I’d absolutely love being able to render in isolation with a pre-decided list of rendering and lighting components built into a CullResults of sorts. This would be a gigantic boon for rendering certain objects without the use of render layers, which can be difficult to manage. For example, if I know I want to render only 50 or so specific objects as if they’re lit by different things, being able to construct their CullResults ahead of time and then reuse them would be great.

If you could please also provide an easier API for handling lighting configuration (e.g. deciding which lights are picked, or manually providing main light vectors), that would be incredible (e.g. addressing spherical harmonics in a clean way).

Thanks, I must have missed that last part, or it was added in subsequent edits after my question.

(I am fairly new to the render graph)
I encountered a bug, or at least something that is quite frustrating.

I was able to make it work in my custom pass by looking at the SSAO code, but the property that SSAO uses is internal (renderingModeActual), hence my concern. Is there any simpler way to make the normals buffer work in both Forward and Deferred mode?

bool isDeferred = universalRenderer != null && universalRenderer.renderingModeActual == RenderingMode.Deferred;

The code seems to work regardless of the rendering mode, but I assume the SSAO code also deals with things like DepthNormals, etc.

Code for adding normals buffer:

RecordRenderGraph:

UniversalRenderer universalRenderer = cameraData.renderer as UniversalRenderer;

//bool isDeferred = universalRenderer != null && universalRenderer.renderingModeActual == RenderingMode.Deferred;

// Request the camera normals texture and declare the read so RenderGraph keeps it alive
TextureHandle cameraNormalsTexture = resourceData.cameraNormalsTexture;

builder.UseTexture(cameraNormalsTexture, AccessFlags.Read);
passData.cameraNormalsTexture = cameraNormalsTexture;

ExecuteMainPass:

if (data.cameraNormalsTexture.IsValid())
    data.material.SetTexture(s_CameraNormalsTextureID, data.cameraNormalsTexture);
private static readonly int s_CameraNormalsTextureID = Shader.PropertyToID("_CameraNormalsTexture");

Additionally, I am not sure how to handle the difference in the normals GBuffer with ‘Accurate G-buffer normals’ enabled. Is there a way to convert the accurate normals to standard-looking normals?

Am I missing something?

Hi everyone,

We are still hard at work on URP with RenderGraph. Thank you very much for working with us, we are excited about our upcoming Unity 6 release.

I’ll get back to this thread with a few tips and tricks over the coming weeks.

Today we start with Gbuffers.

The deferred renderer generates a few temporary frame resources called GBuffers (_GBuffer0, _GBuffer1, …). You can use the Render Graph Viewer to see the lifetime of those resources.

GBuffers store the material properties that are used for lighting in a later pass. This is a lot of data for mobile GPUs to store to and load from memory. Mobile GPU vendors use a specific GPU architecture, called tiled rendering, to reduce memory bandwidth (and therefore energy consumption). To get the best performance with the deferred renderer, we need to make sure that the GBuffers are “memoryless” and never stored to memory. With RenderGraph this is now done automatically, and it is also easy to set up in your own extension passes, as sketched below.
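As a rough illustration (a sketch, not URP’s internal code): a custom raster pass declares its targets as render attachments instead of sampling them as textures, which is what lets RenderGraph merge neighbouring passes into a single native render pass so intermediate attachments can stay on-chip.

class PassData { public Material material; }

// Inside RecordRenderGraph of a ScriptableRenderPass
using (var builder = renderGraph.AddRasterRenderPass<PassData>("My Deferred Extension", out var passData))
{
    UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
    passData.material = m_Material; // hypothetical material field

    // Declare targets as attachments; no SetRenderTarget calls are allowed in the render func.
    builder.SetRenderAttachment(resourceData.activeColorTexture, 0, AccessFlags.Write);
    builder.SetRenderAttachmentDepth(resourceData.activeDepthTexture, AccessFlags.Read);

    builder.SetRenderFunc((PassData data, RasterGraphContext context) =>
    {
        // Full-screen triangle drawn with the RasterCommandBuffer.
        context.cmd.DrawProcedural(Matrix4x4.identity, data.material, 0, MeshTopology.Triangles, 3);
    });
}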

This type of optimization was already done before in URP: we had a single, hard-coded native render pass for the deferred renderer. You could turn on native render passes on the URP asset. However, there were many cases where this was automatically turned off to ensure correctness, for example when extending the render pipeline. With NRP enabled, we also did not store the GBuffers to memory.

Not storing to memory has implications, though: it means there is no texture that can be sampled in shaders. In older URP versions, when NRP was turned off, the GBuffers were bound as global textures as a way to share them between passes. In RenderGraph there is now a better way to do that without requiring global textures; I’ll share more on that in a later post. Shaders that sample these global textures will not work out of the box with URP in Unity 6.

However, you now have complete control to configure the lifetime of these resources through the RenderGraph API. We have created a sample to demonstrate how to bind the GBuffers as global textures. This automatically extends the lifetime of the textures to the passes that use the globals. You can find it in the Render Graph package samples or see the code here. When adding that Render Feature to your renderer, you’ll see that the lifetime is extended and that the resources are no longer memoryless. The textures are then available as global textures for every shader to sample. However, this comes at a GPU memory and performance cost on mobile devices.
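Roughly, the pass in that sample boils down to the following (a simplified sketch, not the verbatim sample code; check the package sample for the full version):

class PassData { }

static readonly int[] s_GBufferIDs =
{
    Shader.PropertyToID("_GBuffer0"),
    Shader.PropertyToID("_GBuffer1"),
    Shader.PropertyToID("_GBuffer2"),
    // further GBuffers exist depending on the enabled renderer features
};

public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
    UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
    TextureHandle[] gBuffer = resourceData.gBuffer;
    if (gBuffer == null)
        return; // no GBuffers this frame (e.g. forward path)

    using (var builder = renderGraph.AddRasterRenderPass<PassData>("Set Global GBuffers", out var passData))
    {
        for (int i = 0; i < s_GBufferIDs.Length; i++)
        {
            if (!gBuffer[i].IsValid())
                continue;
            builder.UseTexture(gBuffer[i], AccessFlags.Read);
            // Publishing the texture as a global extends its lifetime, so it is no longer memoryless.
            builder.SetGlobalTextureAfterPass(gBuffer[i], s_GBufferIDs[i]);
        }
        builder.AllowPassCulling(false); // the pass has no other outputs, keep it alive
        builder.SetRenderFunc((PassData data, RasterGraphContext context) => { });
    }
}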


Took another crack at RenderGraph, now that I’m getting people emailing me about implementing it in my asset store plugins…

I’m really honestly truly trying to learn this system, but I really want you guys to understand what a nightmare it is to actually port over existing RenderFeature/Renderpass stuff.

This example is a very simple “fog gradient” asset. I use the Volume system to blend curves to generate a fog gradient texture. Then I just apply it to the active color target. I do a manual blit, because it actually works with VR (MPI and SPI) and unlike the billion “blitter” APIs it actually “just works.” I’m going on a tangent though, just look at this.

BEFORE RENDERGRAPH:
Just a single command buffer. Executes when it’s done. Easy.

AFTER RENDERGRAPH
Chains of torment cast upon all asset store developers, cursing them into hundreds of hours of maintaining compatibility between 20,000 unity versions and URP configs.


(I actually am not including all of the RenderGraph code here, but anyone who’s used it understands there’s like 50 more lines of boilerplate around this)

This is a really simple plugin. I can’t imagine porting one of my more complex ones. Is any of this insane boilerplate going away anytime soon? I get it, I do. You want to consolidate resource usage, etc etc etc. The thing is though: I don’t want to consolidate anything. I don’t want anyone touching my textures. They’re mine. Leave them alone. I don’t want anything “clever” happening with them, I just want to write a render feature and run it consistently for all people who buy my assets.


Now that we have this highly flexible foundation, we can build layers on top that simplify common operations, for example the copy pass and the blit pass: two helper functions that don’t require you to write this boilerplate. I’d love to hear what other generic passes would help you build and maintain extensions more efficiently.
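For example, a simple copy of the camera color without any hand-written pass could look roughly like this (a sketch assuming the AddCopyPass helper; the destination name is illustrative):

// Inside RecordRenderGraph: copy the active color target into a new transient texture.
UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
TextureHandle source = resourceData.activeColorTexture;

TextureDesc desc = source.GetDescriptor(renderGraph);
desc.name = "_SceneColorCopy";
TextureHandle destination = renderGraph.CreateTexture(desc);

// One line instead of a hand-written pass; the helper declares the resource usage
// and the render function for you.
renderGraph.AddCopyPass(source, destination);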

You have that option, of course. But having RenderGraph manage your resources, when they don’t need to persist across frames, removes that burden from you and can reduce memory usage.

Whatever “burden” you think is being removed is being replaced with 1000x more burden with how this API is written. I’m not against RenderGraph, I just hate that I have to re-learn another new system, after Unity already promised us we wouldn’t have to. And not only that, but the unified renderer was just announced. So in another year or so I’ll need to learn ANOTHER system and maintain ALL of these for multiple years. I just want to make a bunch of asset store assets. Why do I need to be bogged down with 10,000 layers of API hell?

The API should be even simpler than the RenderFeature/RenderPass API. I just want one function to hook into that gives me a command buffer. Let me do what I want in there. Leave me alone outside of it.

I was happy when I found the RenderPassContext; most of my code would “just work.” But no, despite that API not being marked Obsolete or whatever, I get a bunch of error log spam in the console saying not to use it. You HAVE to use the more limited Raster/ComputeContext.

Hey @funkyCoty,

I was happy when I found the RenderPassContext; most of my code would “just work.” But no, despite that API not being marked Obsolete or whatever, I get a bunch of error log spam in the console saying not to use it. You HAVE to use the more limited Raster/ComputeContext.

Could you check the AddUnsafePass API (UnsafeGraphContext) and let us know if it fits your needs? It has been added to ease the transition to Render Graph: you lose the native render pass (RasterRenderPass) and async compute (ComputePass) optimizations, but you should be able to port your effects using it. It is currently used in URP for various passes.
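For illustration, a minimal unsafe pass could look roughly like this (a sketch with made-up names; it assumes CommandBufferHelpers.GetNativeCommandBuffer to get back a regular CommandBuffer):

class PassData
{
    public TextureHandle color;
}

// Inside RecordRenderGraph of a ScriptableRenderPass
using (var builder = renderGraph.AddUnsafePass<PassData>("My Legacy Effect", out var passData))
{
    UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
    passData.color = resourceData.activeColorTexture;
    builder.UseTexture(passData.color, AccessFlags.ReadWrite);

    builder.SetRenderFunc((PassData data, UnsafeGraphContext context) =>
    {
        // Convert to a regular CommandBuffer so existing code can run largely unchanged.
        CommandBuffer cmd = CommandBufferHelpers.GetNativeCommandBuffer(context.cmd);
        RTHandle colorTarget = data.color; // the handle resolves to an RTHandle during execution
        cmd.SetRenderTarget(colorTarget);
        // ... existing command buffer code goes here ...
    });
}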

The AddRenderPass API is only used internally by HDRP and is not compatible with the RenderGraph compiler used in URP that supports native render passes. I understand the confusion; we will add more explicit comments to prevent people from using it. Thanks for your feedback!

The user-facing API should never change; all changes should be on the backend, with only very minor adjustments on the front end.

This goes for all pipelines: there should never be three, just one with toggles.

I’ve also never been able to get Blitter to work; it still leaves me confused that it’s the recommended method.
I’ve heard about needing to use a specific shader for it to do anything, but enforcing the shader defeats the point of blitting as far as I’m concerned. I blit precisely because it allows me to store the output of my own materials (with my own Shader Graphs), and that’s always been an option in the past.
Thankfully CommandBuffer.Blit just works in my case; I hope Unity is planning to keep it.

Do you have any input on this, @AljoshaD ?

Does Render Graph allow full screen effects that affect UI Toolkit? I started a project a while ago, during the Unity 6 preview period, and found that the full screen render pass didn’t affect UI Toolkit. I was advised a bunch of workarounds, but considering it worked fine using UGUI, I figured I’d wait until it’s properly supported.

Just wondering if Render Graph solves this, or if it’s a UI Toolkit problem, or something else entirely. I saw the following in the release notes of one of the beta/preview releases and thought it would solve my issue, but it never did: “Graphics: Added UITK support for CustomPostProcessOrder”.

Hey @Kandy_Man

Can you elaborate a bit more? What do you mean by full screen effects that use UI Toolkit? You want to use FullScreenPassRendererFeature to apply full screen post processing effects after UI Toolkit has been rendered on the camera view?

Also, is it happening only with Render Graph, or also with Compatibility mode? And you said UGUI is working as expected?

Hey, so I want to do screen transitions between scenes/levels. Sometimes a simple fade to black, other times using greyscale textures to make it a bit more exciting.

In the past, i.e. pre Unity 6, doing this with UGUI worked fine. Currently, using the same code but with UI Toolkit instead of UGUI, the renderer feature only affects the scene; the UI drawn with UI Toolkit doesn’t get affected by the renderer feature.

I haven’t gone back to it in a while; I was just wondering whether redoing the effect using Render Graph would fix the issue, or whether it’s an issue with UI Toolkit/rendering in general, in that UI Toolkit is rendered after everything else. Ideally, the full screen pass renderer feature would include everything rather than having UI Toolkit rendered on top of it.

Blitting needs to just work. Indeed, you currently need to find the correct overload for your case, and that can be confusing.

We introduced the BlitPass (RenderGraph.AddBlitPass) as a way to remove the render graph boilerplate code. It also has a nice API to serve more cases (e.g. you can decide the texture property: _BlitTexture, _MainTexture, etc.). It uses Blitter under the hood in the execute function.
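For example, a blit through a custom material can look roughly like this (a sketch; m_FogMaterial and the texture name are illustrative, and the exact BlitMaterialParameters options may vary by version):

// Inside RecordRenderGraph: blit the camera color through a material into a temporary
// texture, letting the helper generate the pass and the resource declarations.
UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
TextureHandle source = resourceData.activeColorTexture;

TextureDesc desc = source.GetDescriptor(renderGraph);
desc.name = "_FogGradientTarget";
TextureHandle destination = renderGraph.CreateTexture(desc);

// The parameters also let you choose which texture property the source is bound to
// (_BlitTexture, _MainTexture, ...), as mentioned above.
var blitParams = new RenderGraphUtils.BlitMaterialParameters(source, destination, m_FogMaterial, 0);
renderGraph.AddBlitPass(blitParams, "Fog Gradient Blit");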

We’ll do another round of testing to make sure it works on all platforms, with/without MSAA and with/without XR. There are a few known problems that we’re fixing now. Ideally, in the near future, under the right circumstances it will use the copy pass under the hood and automatically optimize with framebuffer fetch when appropriate.