(URP 13.1.8) Proper RTHandle usage in a Renderer Feature

Unity 2022.1.14f1
Universal Rendering Pipeline 13.1.8

In my experiments trying to introduce myself to ScriptableRendererFeatures (and, by extension, ScriptableRenderPasses), I’ve failed to find a single up-to-date example online to use as a starting point. I’m clearly on the wrong track to some extent, since my first “successful” modification to the current render consumed more and more memory in RenderTextures until my third out-of-VRAM-induced computer restart.

So far, my best attempt combines the primary concepts from https://alexanderameye.github.io/notes/scriptable-render-passes/ with the URP “Upgrade Guide”, which offers bare-minimum suggestions for updating to RTHandle use. My research has also spanned more than a dozen other sources (most of which were lost to the aforementioned computer restarts, and all of which targeted earlier versions of URP, prior to non-RTHandle targets being deprecated), but those two are the only ones that have actually produced a visual result.

With all of that in mind, this is the current state of my script(s) (with most non-deprecation-related comments removed):
–TemplateFeature.cs–

using UnityEngine;
using UnityEngine.Rendering.Universal;

// https://alexanderameye.github.io/notes/scriptable-render-passes/
public class TemplateFeature : ScriptableRendererFeature
{
    [System.Serializable]
    public class PassSettings
    {
        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingTransparents;

        [Range(1, 4)]
        public int downsample = 1;

        [Range(0, 20)]
        public int blurStrength = 5;
    }

    TemplatePass pass;
    public PassSettings passSettings = new PassSettings();

    public override void Create()
    {
        pass = new TemplatePass(passSettings);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }
}

–TemplatePass.cs–

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// https://alexanderameye.github.io/notes/scriptable-render-passes/
public class TemplatePass : ScriptableRenderPass
{
    const string profilerTag = "Template Pass";

    TemplateFeature.PassSettings passSettings;

    //^RenderTargetIdentifier colorBuffer;
    //^RenderTargetIdentifier temporaryBuffer;
    // ^Updating deprecated RenderTargetIdentifier usage to RTHandle-based
    RTHandle colorBuffer;
    RTHandle temporaryBuffer;
    //^int temporaryBufferID = Shader.PropertyToID("_TemporaryBuffer");

    Material mat;

    static readonly int blurStrengthProperty = Shader.PropertyToID("_BlurStrength");

    public TemplatePass(TemplateFeature.PassSettings passSettings)
    {
        this.passSettings = passSettings;

        renderPassEvent = passSettings.renderPassEvent;

        if(mat == null)
        {
            mat = CoreUtils.CreateEngineMaterial("Hidden/TemplateBlur");
        }

        mat.SetInt(blurStrengthProperty, passSettings.blurStrength);
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        RenderTextureDescriptor descriptor = renderingData.cameraData.cameraTargetDescriptor;

        // Downsample the original camera target descriptor
        descriptor.width /= passSettings.downsample;
        descriptor.height /= passSettings.downsample;

        descriptor.depthBufferBits = 0; // The temporary target only needs color, not depth

        // Grab the color buffer from the renderer camera color target
        //^colorBuffer = renderingData.cameraData.renderer.cameraColorTarget;
        colorBuffer = renderingData.cameraData.renderer.cameraColorTargetHandle;

        //^cmd.GetTemporaryRT(temporaryBufferID, descriptor, FilterMode.Bilinear);
        //^temporaryBuffer = new RenderTargetIdentifier(temporaryBufferID);
        // This included variations on descriptor definitions and scaling definitions
        RenderingUtils.ReAllocateIfNeeded(ref temporaryBuffer, Vector2.one / passSettings.downsample, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: "_TemporaryBuffer");
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // NOTE: Do NOT mix ProfilingScope with named CommandBuffers i.e. CommandBufferPool.Get("name").
        // Currently there's an issue which results in mismatched markers.
        CommandBuffer cmd = CommandBufferPool.Get();
        using(new ProfilingScope(cmd, new ProfilingSampler(profilerTag)))
        {
            Blit(cmd, colorBuffer, temporaryBuffer, mat, 0); // shader pass 0
            Blit(cmd, temporaryBuffer, colorBuffer, mat, 1); // shader pass 1
        }

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }

    public override void OnCameraCleanup(CommandBuffer cmd)
    {
        if(cmd == null)
        {
            throw new System.ArgumentNullException("cmd");
        }

        //^cmd.ReleaseTemporaryRT(temporaryBufferID);
        temporaryBuffer = null;
        //colorBuffer = null; // I don't know whether it's necessary, but either way doesn't help
    }
}

… And here’s the shader, although it isn’t specifically the cause of any problems here:
–TemplateBlur.shader–

Shader "Hidden/TemplateBlur"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    HLSLINCLUDE

        #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

        struct Attributes
        {
            float4 positionOS : POSITION;
            float2 uv : TEXCOORD0;
        };

        struct Varyings
        {
            float4 positionHCS : SV_POSITION;
            float2 uv : TEXCOORD0;
        };

        TEXTURE2D(_MainTex);
        SAMPLER(sampler_MainTex);
        float4 _MainTex_TexelSize;
        float4 _MainTex_ST;

        int _BlurStrength;

        Varyings Vert(Attributes input)
        {
            Varyings output;
            output.positionHCS = TransformObjectToHClip(input.positionOS.xyz);
            output.uv = TRANSFORM_TEX(input.uv, _MainTex);
            return output;
        }

        half4 FragHorizontal(Varyings input) : SV_TARGET
        {
            float2 res = _MainTex_TexelSize.xy;
            half4 sum = 0;

            int samples = 2 * _BlurStrength + 1;

            for(float x = 0; x < samples; x++)
            {
                float2 offset = float2(x - _BlurStrength, 0);
                sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv + offset * res);
            }
            return sum / samples;
        }

        half4 FragVertical(Varyings input) : SV_TARGET
        {
            float2 res = _MainTex_TexelSize.xy;
            half4 sum = 0;

            int samples = 2 * _BlurStrength + 1;

            for(float y = 0; y < samples; y++)
            {
                float2 offset = float2(0, y - _BlurStrength);
                sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv + offset * res);
            }
            return sum / samples;
        }

        ENDHLSL

   
    SubShader
    {
        Tags
        {
            "RenderType"="Opaque"
            "RenderPipeline"="UniversalPipeline"
        }

        Pass // 0
        {
            Name "Horizontal Box Blur"

            HLSLPROGRAM

            #pragma vertex Vert
            #pragma fragment FragHorizontal

            ENDHLSL
        }
   
        Pass // 1
        {
            Name "Vertical Box Blur"

            HLSLPROGRAM

            #pragma vertex Vert
            #pragma fragment FragVertical

            ENDHLSL
        }
    }
}

To say it again: DON’T USE THIS AS-IS. You will regret it quickly.

Anyway, with all of this in mind, what exactly have I overlooked? I have been completely unable to find a functioning example of a ScriptableRendererFeature that accounts for the changes made over the course of URP 13 for the current non-beta Unity Editor build, and without prior experience using them, I’m at a loss as to what I’m missing.

For that matter, is there anything else I should also be taking into consideration when working with this to begin with? For example, is ScriptableRenderContext.DrawRenderers() a viable way to draw a specific subset of GameObjects in my scene (i.e. using LayerMask) with a Renderer Feature, rather than only using the current frame (at a given time)?


Edit: Where KeepFrameFeature.cs in the URP “Samples” package contains something like a dozen lines reporting as “deprecated”, Test107Renderer in Unity’s URP Github “Graphics” branch, while not a ScriptableRendererFeature, looks like it might get me on the right track… despite not seeming to strictly adhere to the Upgrade Guide. I can’t say I’m enthusiastic about the risk of crashing my computer again, though, so I’ll hold off on testing it while I focus on other things (unless there are any better suggestions offered here by the time I get to trying that approach - namely, incorporating CommandBuffer.GetTemporaryRT() and CommandBuffer.ReleaseTemporaryRT() when they’re specifically filtered out in the Upgrade Guide itself).


Update: I still have not found a clear approach. I’ve been digging around Unity’s “Graphics” GitHub repository (example searches turn up effectively no results) and have been unable to find a ScriptableRendererFeature-based ScriptableRenderPass that makes use of OnCameraCleanup() (as opposed to an inherited call from ScriptableRendererFeature.Dispose()) and allocates/clears RTHandle(s) in the process.

Since that seems like an expected/intended use case based on the Upgrade Guide (which includes an example of Dispose() without context for calling it, and a variable m_Handle with unclear purpose), I’m still unclear on what a properly-formed, up-to-date script is expected to look like.

Am I supposed to pass a call down from ScriptableRendererFeature.Dispose() to a general function (i.e. Dispose()) that clears RTHandles allocated within the ScriptableRenderPass (for example, ScreenCoordOverrideRenderPass.Cleanup() is called from ScreenCoordOverrideScriptableRenderFeature.Dispose()), while RTHandles inherited from already-existing data (e.g. RenderingData.cameraData.renderer.cameraDepthTargetHandle) should be detached in OnCameraCleanup()? If that is the case, though, is OnCameraCleanup() necessary at all in most cases? There are very few examples that aren’t either empty functions or out of date (prior to RTHandle usage), and those are generally part of the main rendering pipeline itself (is that even significant?).

In writing this post, it DOES seem like OnCameraCleanup() is barely used, so I guess that will be my next avenue for working through this. To note, however: it’s part of the template for a ScriptableRendererFeature file (with a self-contained ScriptableRenderPass), and it includes the pre-written comment “// Cleanup any allocated resources that were created during the execution of this render pass.” That doesn’t exactly set the expectation that you (potentially) WOULDN’T clean up allocated RTHandles there, instead doing so via a call handed down from ScriptableRendererFeature.Dispose().

Something strange in the documentation: on the page “Upgrading to version 2022.1 of the Universal Render Pipeline” (Universal RP | 13.1.9), it says:

If the target is known to not change within the lifetime of the application, then simply a RTHandles.Alloc would suffice and it will be more efficient due to not doing a check on each frame.

However, the example uses the less efficient RenderingUtils.ReAllocateIfNeeded(ref m_Handle, desc, FilterMode.Point, TextureWrapMode.Clamp, name: "_CustomPassHandle");
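For what it’s worth, the one-time-allocation pattern the guide alludes to would presumably look something like the sketch below. This is an assumption on my part, not from the guide: `m_Handle`, the color format, and the placement (allocate once in something like Create(), release in Dispose()) are all my guesses.

```csharp
// Sketch: allocate once for a target whose size/format never changes.
// Unlike ReAllocateIfNeeded, nothing is compared per frame.
if (m_Handle == null)
{
    m_Handle = RTHandles.Alloc(
        Vector2.one,                                // scale relative to screen size
        colorFormat: GraphicsFormat.R8G8B8A8_UNorm, // assumed format
        filterMode: FilterMode.Point,
        wrapMode: TextureWrapMode.Clamp,
        name: "_CustomPassHandle");
}

// ... with the matching one-time cleanup, e.g. in Dispose():
// m_Handle?.Release();
```

(GraphicsFormat lives in UnityEngine.Experimental.Rendering as of this URP version.)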

Well, I finally went and got things sorted out, but there’s still one thing that has me confused, which I haven’t been able to locate any clear/definitive information on:

When does RenderingUtils.ReAllocateIfNeeded() actually decide to do so? Or, alternatively, does it actually make a difference if the texture is larger than necessary?

When I increase the window size of the editor/game view, the temporary texture’s resolution increases to match. When I decrease the window size, the texture inherited from RenderingData.cameraData.renderer.cameraColorTargetHandle continues to match the current resolution, but the texture created from RenderingUtils.ReAllocateIfNeeded() retains its maximum dimensions (as described in the Frame Debugger).

Additionally, there seem to be no visible changes to the image overall, despite the shader using pixel-size-driven modifiers (namely _MainTex_TexelSize for pixel offsets), so accommodations appear to be made behind the scenes. That’s confusing at the same time, since the main rendering process doesn’t claim to retain higher-resolution-than-necessary textures under the same circumstances. It also seems at least a bit silly that there’s no clearer means of reducing the texture’s resolution for circumstances like a low-capability device that just dropped its render resolution for performance; why should restarting the game be necessary to reclaim memory after a window size change?

At any rate, now that I have a much better grasp on how this needed to be organized, here’s an updated state for the original “TemplateFeature” and “TemplatePass” files, for reference (the shader needed no modification):

–TemplateFeature.cs–

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// https://alexanderameye.github.io/notes/scriptable-render-passes/
public class TemplateFeature : ScriptableRendererFeature
{
    [System.Serializable]
    public class PassSettings
    {
        public Material material;

        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingTransparents;

        [Range(1, 4)]
        public int downsample = 1;

        [Range(0, 20)]
        public int blurStrength = 5;
    }

    TemplatePass pass;
    public PassSettings passSettings = new PassSettings();

    // This prevents attempted destruction of a manually-assigned material later
    bool useDynamicMaterial = false;

    public override void Create()
    {
        if(passSettings.material == null)
        {
            passSettings.material = CoreUtils.CreateEngineMaterial("Hidden/TemplateBlur");
            useDynamicMaterial = true;
        }
        pass = new TemplatePass(passSettings);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }

    protected override void Dispose(bool disposing)
    {
        if(useDynamicMaterial)
        {
            // Added this line to match convention for cleaning up materials
            // ... But only for a dynamically-generated material
            CoreUtils.Destroy(passSettings.material);
        }
        pass.Dispose();
    }
}

–TemplatePass.cs–

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// https://alexanderameye.github.io/notes/scriptable-render-passes/
public class TemplatePass : ScriptableRenderPass
{
    const string profilerTag = "Template Pass";

    TemplateFeature.PassSettings passSettings;

    RTHandle colorBuffer;
    RTHandle temporaryBuffer;

    Material mat;

    static readonly int blurStrengthProperty = Shader.PropertyToID("_BlurStrength");

    public TemplatePass(TemplateFeature.PassSettings passSettings)
    {
        this.passSettings = passSettings;
        renderPassEvent = passSettings.renderPassEvent;

        // Now that this is verified within the Renderer Feature, it's already "trusted" here
        mat = passSettings.material;

        mat.SetInt(blurStrengthProperty, passSettings.blurStrength);
    }

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        RenderTextureDescriptor descriptor = renderingData.cameraData.cameraTargetDescriptor;

        descriptor.width /= passSettings.downsample;
        descriptor.height /= passSettings.downsample;

        descriptor.depthBufferBits = 0; // The temporary target only needs color, not depth

        // Enable these if your pass requires access to the CameraDepthTexture or the CameraNormalsTexture
        // ConfigureInput(ScriptableRenderPassInput.Depth);
        // ConfigureInput(ScriptableRenderPassInput.Normal);

        colorBuffer = renderingData.cameraData.renderer.cameraColorTargetHandle;

        RenderingUtils.ReAllocateIfNeeded(ref temporaryBuffer, Vector2.one / passSettings.downsample, descriptor, FilterMode.Bilinear, TextureWrapMode.Clamp, name: "_TemporaryBuffer");
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // A reasonably common and simple safety net
        if(mat == null)
        {
            return;
        }

        // NOTE: Do NOT mix ProfilingScope with named CommandBuffers i.e. CommandBufferPool.Get("name").
        // Currently there's an issue which results in mismatched markers.
        CommandBuffer cmd = CommandBufferPool.Get();
        using(new ProfilingScope(cmd, new ProfilingSampler(profilerTag)))
        {
            Blit(cmd, colorBuffer, temporaryBuffer, mat, 0); // shader pass 0
            Blit(cmd, temporaryBuffer, colorBuffer, mat, 1); // shader pass 1
        }

        // Execute the command buffer and release it
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }

    public override void OnCameraCleanup(CommandBuffer cmd)
    {
        if(cmd == null)
        {
            throw new System.ArgumentNullException("cmd");
        }

        // Mentioned in the "Upgrade Guide" but pretty much only seen in "official" examples
        // in "DepthNormalOnlyPass"
        // https://github.com/Unity-Technologies/Graphics/blob/9ff23b60470c39020d8d474547bc0e01dde1d9e1/Packages/com.unity.render-pipelines.universal/Runtime/Passes/DepthNormalOnlyPass.cs
        colorBuffer = null;
    }

    public void Dispose()
    {
        // This seems vitally important, so why isn't it more prominently stated how it's intended to be used?
        temporaryBuffer?.Release();
    }
}

(No, this version won’t cripple your computer)


As an additional frame of reference, with relation to burningmime’s comments regarding RenderingUtils.ReAllocateIfNeeded(), there’s an example of an on-demand usage of RTHandles.Alloc() in FinalBlitPass:

if (m_CameraTargetHandle != cameraTarget)
{
    m_CameraTargetHandle?.Release();
    m_CameraTargetHandle = RTHandles.Alloc(cameraTarget);
}

This could easily be extrapolated to regenerate the handle whenever the window size/screen resolution changes, but I haven’t yet located any sane/efficient frame of reference for doing so (e.g. through an event/message). It could be done by comparing the width/height of the camera’s (color) target texture against last-known values, but constantly checking those values as part of the rendering loop seems wasteful, especially since it means checking against values that have already properly adapted themselves.
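To illustrate, the last-known-values approach would be something like this sketch (hypothetical field names such as `lastKnownWidth`; for what it’s worth, the guard is just two int comparisons per frame, which is cheap next to the allocation it prevents):

```csharp
// Sketch: release and re-allocate only when the camera target's
// dimensions have actually changed since the last frame.
RTHandle cameraTarget = renderingData.cameraData.renderer.cameraColorTargetHandle;
int width = cameraTarget.rt.width;
int height = cameraTarget.rt.height;

if (temporaryBuffer == null || width != lastKnownWidth || height != lastKnownHeight)
{
    temporaryBuffer?.Release();
    temporaryBuffer = RTHandles.Alloc(width, height, name: "_TemporaryBuffer");
    lastKnownWidth = width;
    lastKnownHeight = height;
}
```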

URP 13 is quite broken on Unity 2022, I have found it much more stable to stay on URP 12 and Unity 2021.

All things considered, I don’t think I’d necessarily describe it as “broken” or “unstable”. It may not have been an especially sound idea to introduce myself to ScriptableRendererFeatures on this latest version (granted, I didn’t know what I was getting into until I was already in the thick of it), but from what I’ve dug up, it seems to be more a problem of documentation/clarification than one of functionality.

For one, that’s why I made sure to include a template for a common yet somewhat multi-faceted effect (blur), using a combination of a base Render Texture (screen/camera color) and a temporary/surrogate one. If it gives anyone who comes across this thread some idea of how to get started, that would already make it a clearer resource than the majority of what I had to look up just to set this up correctly as a newcomer to this system myself.

For all my ranting up to this point in this thread, I’ve really been aiming to express the thought process of someone coming to URP because the Built-in Renderer was giving me trouble with a logistical nightmare of a rendering concept, and the High-Definition Render Pipeline doesn’t lend itself well enough to modification to suit my current goals. (Namely: Multiple Render Target shader output, combined with refraction, blurring, and customizable lighting and/or shadow processing per GameObject during deferred rendering. The only part I still haven’t found a suitable solution for, in combination with all of that, is also factoring in thickness (back-face depth minus front-face depth, summed per pixel for all meshes on the same layer), but I’m willing to settle for avoiding yet another rendering pass on a layer just to make that work.)

As I’ve mentioned before, however, there are still aspects that seem… unclear (namely, texture resolution and usage attributes from ReAllocateIfNeeded()), but my testing since wrapping my head around parts of this system as a whole doesn’t suggest that the URP has become dysfunctional with the most recent (non-beta) iteration.


I had to do a similar upgrade recently, so here’s a record of what I had to do, in the hope that it’s useful. (Like you, I’m not sure this is really the intended way; it’s extremely hard to find functional examples, and the API changes so frequently that as soon as you find one, it’s often already out of date.)

I’ve trimmed out duplication for this minimal example:

Defining render target handles in my Pass:

#if URP_13
        private RTHandle _PixelizationMap;
        ...
#else
        private int _PixelizationMap;
        ...
#endif

Allocating render target resources in override void Configure:

#if URP_13
            RenderingUtils.ReAllocateIfNeeded(ref _PixelizationMap, pixelizationMapDescriptor, name: "ProP_PixelizationMap");
#else
            _PixelizationMap = Shader.PropertyToID("_PixelizationMap");
            cmd.GetTemporaryRT(_PixelizationMap, pixelizationMapDescriptor);
#endif

Disposing of render target handles:

#if URP_13
        public void Dispose()
        {
            _PixelizationMap?.Release();
        }
#endif

        public override void FrameCleanup(CommandBuffer cmd)
        {
#if URP_13

#else
            cmd.ReleaseTemporaryRT(_PixelizationMap);
#endif
        }

These resources were helpful references for me:


Ah, whoops. I completely forgot to ensure that I was using an up-to-date variant of Blit(). Rather than ScriptableRenderPass.Blit(), which in turn calls CommandBuffer.Blit(), it looks like I should be making use of Blitter.BlitCameraTexture() for equivalent behavior.

On that note, however, it’s a shame that there isn’t better/clearer information provided on what some of the other Blitter functions are doing. For example, Blitter.BlitTexture() appears to just be Blitter.BlitCameraTexture() without calling SetRenderTarget() first (namely, BlitCameraTexture() calls SetRenderTarget() just before calling BlitTexture() itself). The description of BlitTexture(), however, is “Blit a RTHandle texture.”.

Having tried adjustments, however, simply replacing the Blit() calls with Blitter.BlitCameraTexture() (reusing the same arguments) in my script posted in this thread resulted in no blur occurring anymore, so I’ll try digging around to see what I’m missing.

Edit: Specifically, the first Blit resulted in a solid black image and the second Blit basically came back with the original, unblurred image, so clearly many things weren’t assigned just right.

Edit 2: Oh, wait… the overload of ScriptableRenderPass.Blit() I was using literally calls Blitter.BlitCameraTexture() itself. The material isn’t reporting as null, so it’s not taking the other branch there… so how is Blitter.BlitCameraTexture() failing when ScriptableRenderPass.Blit() is succeeding!?

Edit 3: Ugh. GitHub reverted to the newest version of URP rather than sticking with the approximately-version-relevant entry I’d previously looked at. I don’t know exactly which “2022.1” time frame it was (and listing the GitHub versioning without the corresponding URP version certainly hinders apparent relevancy), but the behavior of ScriptableRenderPass.Blit() has changed significantly since then: in that version, it called CommandBuffer.Blit() itself.


I’m trying to update some RendererFeature code to 2022.1, using the RTHandles and the new Blitter. Initially I’m converting a custom postprocessed fog pass.

I’ve got the basics functioning, after figuring out the changes needed to custom blit shaders to get UVs/position and the renamed source texture.

But I’m doing a blit from the color target to a temporary texture, with the standard blit shader and the simplest Blit function, and the UVs seem to be wrong, particularly after resizing the scene/game window. I’m just doing:

Blitter.BlitCameraTexture( cmd, renderingData.cameraData.renderer.cameraColorTargetHandle, m_tempTexture );

Both textures are the same dimensions, it should be a 1:1 blit. But I’m getting weird UVs that seem to come from the _BlitScaleBias shader parameter. Any idea what’s going on there?

Edit: Solved? It looks like BlitCameraTexture isn’t setting _BlitScaleBias, so it’s essentially undefined. I can do the blits successfully with CoreUtils.SetRenderTarget and Blitter.BlitTexture instead.
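In case it helps anyone else, the working combination described above would presumably look like the following sketch. A scale/bias of (1, 1, 0, 0) should mean a full, unscaled source; the mip level and bilinear arguments here are assumptions on my part.

```csharp
// Sketch: set the render target explicitly, then blit with an explicit
// scale/bias so the shader's _BlitScaleBias is well-defined.
CoreUtils.SetRenderTarget(cmd, m_tempTexture);
Blitter.BlitTexture(
    cmd,
    renderingData.cameraData.renderer.cameraColorTargetHandle,
    new Vector4(1f, 1f, 0f, 0f), // xy = UV scale, zw = UV offset
    0f,                          // mip level
    false);                      // bilinear filtering
```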


I agree, there needs to be a clear example. All of the official sample Renderer Features that use any custom texture currently do not work in URP 14.
We shouldn’t need to reverse-engineer this.


This is all going to change again in 2023 when render graph support arrives. If you have the luxury to do so, I’d recommend skipping the 2022 cycle for URP. Anything you learn now will be obsolete when you’re using TextureHandle instead of RTHandle.


Thanks for the heads up. It would be really useful if someone from Unity who is familiar with the plans for 2023 could weigh in on this comment.

Urgh. Will the breaking changes to URP ever slow down?

Custom RendererFeatures seemed great when I started using them, but they’ve turned into a real maintenance liability. Even worse if you’re relying on store assets where updates aren’t guaranteed.

I’d just started to try and move a project from 2021.2 to 2022.2, to move to RTHandles and the new blitter, although I’ve given up on that for now due to far-too-frequent editor crashes in 2022.2.

The upcoming render graph stuff sounds worrying if it really limits SetRenderTarget as much as has been suggested. I’ve been dealing with an object-outline-rendering system that blits back and forth between several render targets, and bloom/blur type effects need to do similar.


Just FYI, I haven’t had any editor crashes since disabling the SRP Batcher; may be worth a go.

Has anyone got this working in 2022.2? All of these API changes have me confused, and they didn’t give a proper sample of how to use them.
My first attempt also happened to be converting the template feature to 2022.2…


Same here, what a mess


This is technically incorrect. RenderGraph uses TextureHandles, yes, which are just an RTHandle wrapper.
One of the main reasons RTHandles were adopted in URP was that it was necessary preparation work for being able to use RenderGraph.
It’s all part of a long-term roadmap; we don’t add new features just to deprecate them the version after.

The main benefit of RenderGraph is that you will be able to use RTHandles which are totally managed by RG, so you won’t be responsible for allocations, lifetime management, etc., which should make your life easier. Any code based on RTHandles will be easy to port to the new API.

RenderGraph is built on RTHandles. You can verify that by looking at the 2023 public Graphics repository, where the RG code is already available (even though the feature is still disabled for users). HDRP also uses RenderGraph and RTHandles.

The other reason why RTHandles were introduced was dynamic scaling support, which was not possible with the old TempRT system.

Regarding the lack of RTHandles documentation: we are aware of it, and the docs team is also working on samples, to make sure they are available by 22 LTS.
For now, on top of the upgrade guides, which show very basic examples, you can check URP’s internal complex render features like SSAO and Decals, which are fully working with RTHandles and the Blitter API.


@ManueleB
Regarding the new Blitter: is the source texture always bound to _BlitTexture?
And is _BlitTexture the same as _CameraOpaqueTexture?

Edit #1: Okay… so it’s not; _BlitTexture is the frame buffer of the current render pass.

Calling RTHandles.Alloc() from a ScriptableRendererFeature is not supported by URP in 2021, and not best practice in 2023. Whether or not that’s “add[ing a] new feature to deprecate [it] the version after” is a matter of semantics, but if a user wants to follow best practices for a ScriptableRendererFeature that uses temp textures, they need significantly different code structures in all of 2021, 2022, and 2023.

The whole idea of allocating/managing RTHandles in a ScriptableRendererFeature only applies to 2022. In 2021, you didn’t have RTHandles, and in 2023 you won’t ever allocate RTHandles directly (unless you need to save results between frames); you’ll just register for them in the RenderGraph. “Proper RTHandle usage in a render feature” is going to look very different in 2023 than in 2022.

I know the team is pushing forward to align with HDRP. Which is awesome; I applaud you guys for that. But I’m not sure if encouraging users to move to 2022’s APIs only to turn around and tell them to port again to 2023 with its configure methods is a good idea. You’re just making more work for everybody involved.
