Previously, with ScriptableRenderPass, I was voxelizing the scene using three view matrices and calling ScriptableRenderContext.Cull to run a culling operation for each view. However, I can no longer find a Cull function in the new render graph API when working inside a ScriptableRenderFeature.
I am aware that RenderPipelineManager.BeginCameraRendering passes a ScriptableRenderContext, and that, as mentioned earlier, it is possible to record a render graph pass from that function. However, it would be great to have a Cull function available within ScriptableRenderFeature as well.
I’ve just run into this exact bug with MSAA + multiple cameras + a Windows build. Is there any update? I tried the latest Unity 6000.0.32f1 and it’s still happening.
We have resourceData.activeColorTexture and resourceData.activeDepthTexture. These are the regular handles into which the camera contents have already been drawn internally (forward rendering). They use MSAA x4.
We add a render pass that draws some meshes into a temporary texture, tmpColorTexture.
We add another pass that blits tmpColorTexture onto resourceData.activeColorTexture transparently (renderGraph.AddBlitPass).
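A rough sketch of this setup (pass names, PassData, and ExecutePass are placeholders, not the exact project code):

```csharp
// Create tmpColorTexture by reusing the camera color descriptor (keeps the MSAA settings).
var desc = renderGraph.GetTextureDesc(resourceData.activeColorTexture);
desc.name = "_TmpColorTexture";
TextureHandle tmpColorTexture = renderGraph.CreateTexture(desc);

// Pass 1: draw some meshes into tmpColorTexture, testing against the camera depth.
using (var builder = renderGraph.AddRasterRenderPass<PassData>("Draw Meshes", out var passData))
{
    builder.SetRenderAttachment(tmpColorTexture, 0);
    builder.SetRenderAttachmentDepth(resourceData.activeDepthTexture, AccessFlags.Read);
    builder.SetRenderFunc((PassData data, RasterGraphContext ctx) => ExecutePass(data, ctx));
}

// Pass 2: blit tmpColorTexture back onto the camera color target.
renderGraph.AddBlitPass(tmpColorTexture, resourceData.activeColorTexture, Vector2.one, Vector2.zero);
```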
On WebGL, it seems that the attached depth texture is ignored, and the rendering occurs without a depth texture. The same happens if I create my own depth texture (reusing the TextureDesc of resourceData.activeDepthTexture).
I traced some WebGL calls by adding logging to WebGL.framework.js. I think the issue is here:
var texture = gl.createTexture();
var TEXTURE_2D_MULTISAMPLE = 0x9100;
gl.bindTexture(TEXTURE_2D_MULTISAMPLE, texture);
But TEXTURE_2D_MULTISAMPLE does not exist natively on WebGL, and using 0x9100 results in “INVALID_ENUM: bindTexture: invalid target”.
Please note that MSAA rendering was working ok on WebGL before porting to RenderGraph.
CommandBuffer: temporary render texture _CameraTargetAttachmentA not found while executing (Blit source)
UnityEngine.GUIUtility:ProcessEvent (int,intptr,bool&)
How do we mark a Render Graph pass as “Post-Processing” so that it respects the ‘Post-Processing’ option in the Scene view’s “Toggle skybox, fog, and various other effects” toggle?
I have an issue with a fullscreen blit in Virtual Reality.
In my project, to achieve a visual effect, I render some objects to a separate texture, do some operations on the texture, and then merge it back into the camera color texture. It works fine in Compatibility Mode, but only renders the left eye when render graph is enabled.
I have the same issue with the URP RenderGraph Samples provided in the package (URP 17.0.3, Unity 6000.0.32f1):
@jRocket, in Unity 6000.1.0a9 we added the new CullContextData object to URP frameData, giving you access to the new Cull and CullShadowsCaster APIs; these are equivalent to the APIs with the same names that already exist on ScriptableRenderContext.
using (var builder = renderGraph.AddRasterRenderPass<PassData>(passName, out var passData, profilingSampler))
{
    // UniversalResourceData contains all the texture handles used by the renderer, including the active color and depth textures.
    // The active color and depth textures are the main color and depth buffers that the camera renders into.
    UniversalResourceData resourceData = frameData.Get<UniversalResourceData>();
    var cameraData = frameData.Get<UniversalCameraData>();

    // CullContextData contains the culling APIs.
    var cullContextData = frameData.Get<CullContextData>();

    // Retrieve the culling parameters for the camera in use.
    cameraData.camera.TryGetCullingParameters(false, out var cullingParameters);

    // Perform culling using the CullContextData API.
    var cullingResults = cullContextData.Cull(ref cullingParameters);

    // Fill the passData with the data needed by the pass.
    InitRendererLists(cullingResults, frameData, ref passData, renderGraph);

    // Make sure the renderer list is valid.
    if (!passData.rendererListHandle.IsValid())
        return;

    // Declare the RendererList we just created as an input dependency of this pass, via UseRendererList().
    builder.UseRendererList(passData.rendererListHandle);

    // Set up the render targets via SetRenderAttachment and SetRenderAttachmentDepth, which are the equivalent of the old cmd.SetRenderTarget(color, depth).
    builder.SetRenderAttachment(resourceData.activeColorTexture, 0);
    builder.SetRenderAttachmentDepth(resourceData.activeDepthTexture, AccessFlags.Write);

    // Assign the ExecutePass function to the render pass delegate, which the render graph will call when executing the pass.
    builder.SetRenderFunc((PassData data, RasterGraphContext context) => ExecutePass(data, context));
}
Hi,
Does Render Graph support ray tracing? Are there any examples?
I did some experimentation today and found that, as mentioned in the post Raytracing shaders in RenderGraph Compute, ComputeCommandBuffer has some ray tracing functions but lacks others, like SetRayTracingShaderPass.
I found that the following imperfect code can correctly DispatchRays and render:
public static void Record(PassData passData, ComputeGraphContext context)
{
    var cmd = context.cmd;
    CommandBuffer wrappedCmd = cmd.GetWrappedCommandBufferUnsafe(); // Obtained via C# reflection.
    wrappedCmd.SetRayTracingShaderPass(testRs, "DXR");
    cmd.SetRayTracingAccelerationStructure(testRs, "_RtScene", passData.accelerationStructure);
    cmd.SetRayTracingVectorParam(testRs, "_WorldSpaceCameraPos", passData.cameraPosition);
    cmd.SetRayTracingMatrixParam(testRs, "_PixelCoordToViewDirWS", passData.pixelToWorldMatrix);
    cmd.SetRayTracingTextureParam(testRs, "_output", passData.outputTexture);
    cmd.DispatchRays(testRs, "RayGen", (uint)passData.texWidth, (uint)passData.texHeight, 1, null);
}
We are doing some customization to Unity’s SSAO. I see that the RenderGraph based version of SSAO assigns to UniversalResourceData.ssaoTexture, but that property is read-only for external code. Can this be changed? Or alternatively, is there a workaround?
Edit: well, I see that SSAO rendering still works without assigning UniversalResourceData.ssaoTexture. But it is actually read by a few internal passes. DrawObjectsPass, which draws the opaque and transparent objects, is especially important to me.
This is the missing call: builder.UseTexture(resourceData.ssaoTexture, AccessFlags.Read);
But Render Graph Viewer correctly shows the _ScreenSpaceOcclusionTexture as a read dependency of ‘Draw Opaque Objects’ and ‘Draw Transparent Objects’ passes. What am I missing?
Edit 2: answering my own question. I guess the SSAO texture is still set as a global texture by the SSAO pass (builder.SetGlobalTextureAfterPass(finalTexture, s_SSAOFinalTextureID)) and referenced that way from DrawObjectsPass. Maybe the missing UseTexture would only affect culling of the SSAO pass, but it already calls builder.AllowPassCulling(false). Hence it works without a problem.
SetRayTracingShaderPass should be available in compute passes; it is already on our radar and we will add it soon.
In the meantime, if you want to avoid using C# reflection in your compute pass, you can temporarily change it into an unsafe pass and access the missing API using CommandBufferHelpers.GetNativeCommandBuffer(); all the other ray tracing APIs should be available with the unsafe API.
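Roughly like this (a sketch, reusing testRs and the PassData fields from the code above; the pass name is a placeholder):

```csharp
// Sketch: an unsafe pass exposing the native CommandBuffer for ray tracing APIs.
using (var builder = renderGraph.AddUnsafePass<PassData>("Ray Tracing", out var passData))
{
    builder.SetRenderFunc((PassData data, UnsafeGraphContext context) =>
    {
        // Get the native CommandBuffer to reach APIs not wrapped by the unsafe command buffer.
        CommandBuffer native = CommandBufferHelpers.GetNativeCommandBuffer(context.cmd);
        native.SetRayTracingShaderPass(testRs, "DXR");
        native.SetRayTracingAccelerationStructure(testRs, "_RtScene", data.accelerationStructure);
        native.SetRayTracingTextureParam(testRs, "_output", data.outputTexture);
        native.DispatchRays(testRs, "RayGen", (uint)data.texWidth, (uint)data.texHeight, 1, null);
    });
}
```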
Hey folks,
I recently started porting NGSS to Render Graph and quickly found a couple of limiting factors.
So first, I’m not adding any render pass (yet). I’m modifying URP’s AdditionalLightsShadowCasterPass. Issues I encountered:
First one: it seems I can’t bind anything that isn’t a depth format as a depth buffer. Basically, I need my depth maps to have mipmaps and two channels. I’m doing PCSS with a mipmap used for the blocker search, and the color format is for Variance/Exponential shadow map filtering.
I extended CreateRenderGraphTexture to take useMipBias as a parameter, but it seems depth formats can’t be sampled with mip bias; it must be a color format, yet color formats can’t be bound as depth buffers in RG hehe:
Second issue; I’m not sure if it’s related to Render Graph, but here it is. The array/structured buffer that I’m passing to the Lit shader with extra per-light data (like softness, filtering type, slice bounds, etc.) gets resized for some unknown reason. This is the piece of code that updates each shadow slice and my arrays in the same loop (inside the Setup method):
I basically make my array/buffer the same size as m_AdditionalLightsShadowSlices and fill it with my data (in this case bounds data, because my shadow kernels are large and I need to clamp slice UVs when filtering shadows). The problem is that m_AdditionalShadowmapBounds is getting resized and my indices get totally wonked, resulting in kernels clipping the wrong slices… sniff sniff.
Thanks again for your great work on RG!
A new Tips and Tricks: a compact upgrade guide for URP RenderGraph.
A project that is upgraded to Unity 6 will automatically turn on Compatibility Mode to simplify the upgrade. Compatibility Mode is not intended for shipping your project, so once you get your project working in Unity 6, the next step should be to turn it off and convert your ScriptableRenderPasses and RenderFeatures to RenderGraph.
Check whether you have custom ScriptableRenderPasses in your project. These could be part of an asset that you installed from a third party.
If you have no custom passes, you can switch off Compatibility Mode and everything should just work. If it doesn’t, that is likely a bug.
If you have custom passes, these need to be converted to using RenderGraph. That means implementing the RecordRenderGraph method.
Familiarize yourself with the new concepts and APIs. You can find an overview of the learning resources here.
If the passes are part of a third-party asset, you’ll need to reach out to get a new version that works with RenderGraph.
Implement RecordRenderGraph for each of your ScriptableRenderPasses.
When Compatibility Mode is off, URP will not call the ScriptableRenderPass.Setup/Execute methods. Instead, it will call RecordRenderGraph for each pass.
During conversion/upgrading, it’s recommended to move code from the Execute method into a static function so it can be shared with the RecordRenderGraph implementation. You can see an example of this in the URP package samples, which can be installed through the Package Manager. This limits the amount of new code and lets you switch back and forth between Compatibility Mode and RenderGraph to check correctness.
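A sketch of that sharing pattern (class and member names are illustrative, not the actual sample code):

```csharp
class MyRenderPass : ScriptableRenderPass
{
    class PassData { /* resources needed by the pass */ }
    PassData m_PassData = new PassData();

    // Shared between the Compatibility Mode and RenderGraph paths.
    static void ExecutePass(RasterCommandBuffer cmd, PassData data)
    {
        // ... issue draw calls via cmd ...
    }

    // Compatibility Mode path (called when Compatibility Mode is on).
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get("MyRenderPass");
        ExecutePass(CommandBufferHelpers.GetRasterCommandBuffer(cmd), m_PassData);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }

    // RenderGraph path (called when Compatibility Mode is off).
    public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
    {
        using (var builder = renderGraph.AddRasterRenderPass<PassData>("MyRenderPass", out var passData))
        {
            builder.SetRenderFunc((PassData data, RasterGraphContext ctx) => ExecutePass(ctx.cmd, data));
        }
    }
}
```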
The RenderGraph API has been designed to provide more guardrails so we can offer better performance out of the box. You’ll see that RasterPass and ComputePass limit your access to certain command buffer APIs. As a step in your upgrade process, you can use the UnsafePass, which gives you more access to the command buffer; this helps you share more code with Compatibility Mode. Once everything is running correctly again, you can consider converting some of these passes to RasterPass or ComputePass for better GPU performance.
RenderGraph code can be more verbose. The most common operation is blitting; you can use RenderGraph.AddBlitPass for this to avoid the boilerplate. We’ll add more of these helper passes soon.
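For instance, a simple copy that would otherwise need a full pass setup can become a one-liner (sketch, assuming source and destination texture handles already exist):

```csharp
// One-line blit helper instead of a hand-written raster pass (scale = 1, offset = 0).
renderGraph.AddBlitPass(source, destination, Vector2.one, Vector2.zero);
```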
The RenderGraph Viewer is a great tool that shows you exactly the read/write behavior of resources for each pass that you converted.
Small question: when building a compute shader pass and setting builder.EnableAsyncCompute(true), will Unity internally handle cases where the hardware/platform does not support async compute (e.g. WebGPU)?
The documentation reads “Enable asynchronous compute for this pass.”, which doesn’t clarify this.
There’s also SystemInfo.supportsAsyncCompute; should this be passed as the argument instead of ‘true’?
@tatoforever, I will discuss your questions with the rest of the team and get back to you.
@StaggartCreations, this is a good question. I just had a look, and it doesn’t seem that we check SystemInfo.supportsAsyncCompute at the RenderGraph level, so we probably don’t handle hardware/platforms that don’t support async compute well. I think we should; I will add it to the todo list. For now, you probably need to pass it as the argument to builder.EnableAsyncCompute().
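In other words, something like this (sketch):

```csharp
// Only request async compute where the platform actually supports it;
// on unsupported platforms the pass then runs on the graphics queue.
builder.EnableAsyncCompute(SystemInfo.supportsAsyncCompute);
```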