Enabling Post Processing removes the benefits of MSAA and Render Scale

I am using Unity 2022.3.5 and URP 14.0.8 with a 2D renderer. I recently experimented with render settings to avoid some ugly pixel flickering effects in my 2D platformer. I found out that using MSAA 2x and doubling the rendered pixel count (Render Scale 1.41, i.e. roughly √2 per axis) solves the flickering even at high camera speeds, so I was happy about it.
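For reference, both settings can also be applied from script; here is a minimal sketch of the configuration I used (the values are mine, the URP asset properties are the standard ones):

using UnityEngine.Rendering.Universal;

public static class RenderQualitySetup
{
  public static void Apply()
  {
    // The currently active URP asset (assumes URP is the active render pipeline).
    var urpAsset = UniversalRenderPipeline.asset;

    urpAsset.msaaSampleCount = 2; // MSAA 2x
    urpAsset.renderScale = 1.41f; // ~sqrt(2) per axis, roughly double the pixel count
  }
}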

But now I wanted to introduce Post Processing effects. As soon as I enable the “Post Processing” option on the main camera, even without adding any concrete Post Processing effect, the flickering is fully back. Disabling “Post Processing” makes the flickering disappear again. To me it looks like enabling “Post Processing” suppresses MSAA and Render Scale. I have read that the MSAA and Render Scale settings are compatible with Post Processing effects, but this is not what I observe in my project. Am I missing something here? Are there settings that have to be configured for all of these to work in conjunction?

I did all tests and observations in a standalone build (development build), not in Editor play mode.


I could reproduce the described effects from my original post in a very simple test project. I tried various URP/2D render settings, but there was no magical combination that got Post Processing, MSAA, and Render Scale working together in the repro project, or at least I did not find that combination. I then tried finding alternative ways. One very obvious approach was camera stacking, where only an overlay camera had Post Processing enabled while the level background was rendered by a camera without Post Processing. This did not work: as soon as Post Processing was activated on any camera in the camera stack, MSAA/Render Scale no longer worked. The behavior is the same when using two separate cameras instead of a camera stack.
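For clarity, this is roughly the stacking setup I tried; a minimal sketch assuming a base and an overlay camera already exist in the scene (component and field names are mine):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class CameraStackSetup : MonoBehaviour
{
  public Camera BackgroundCamera; // base camera, no post-processing
  public Camera EffectsCamera;    // overlay camera with post-processing

  private void Start()
  {
    var overlayData = this.EffectsCamera.GetUniversalAdditionalCameraData();
    overlayData.renderType = CameraRenderType.Overlay;
    overlayData.renderPostProcessing = true; // this alone brought the flickering back

    // Stack the overlay camera on top of the base camera.
    var baseData = this.BackgroundCamera.GetUniversalAdditionalCameraData();
    baseData.cameraStack.Add(this.EffectsCamera);
  }
}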

The next logical step was using two separate cameras: one for the level background and one for everything else including Post Processing, with the Post Processing camera rendering into a RenderTexture. The RenderTexture was then rendered to a Canvas/RawImage. This works! But it comes at a high price in my opinion. I made some performance tests in a standalone build with a bloom post-processing effect on my system. Using a single camera with bloom I got up to 1150 FPS. Using the two-camera approach with Post Processing rendered via RenderTexture onto a Canvas/RawImage dropped the FPS to 800. I have only very rudimentary knowledge of HLSL/shaders, and I implemented a shader with two passes (because of the different blending modes) to correctly apply the bloom effect from the RenderTexture to the Canvas/RawImage. The second pass alone costs 20-30 FPS. You can also imagine that the doubled pixel count due to Render Scale 1.41 already costs performance; doing the work again in a separate RenderTexture with the same doubled pixel count worsens performance further. I experimented a bit with the settings of the RenderTexture, for example a lower RenderTexture size negating the Render Scale, but this actually caused a minor FPS drop. So I kept the parameters that seemed to perform best.
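The wiring for this approach is simple; a minimal sketch, assuming the RenderTexture, the RawImage, and the blend material (the two-pass shader shown below) already exist:

using UnityEngine;
using UnityEngine.Rendering.Universal;
using UnityEngine.UI;

public class EffectsToCanvasSetup : MonoBehaviour
{
  public Camera EffectsCamera;         // the camera with post-processing enabled
  public RenderTexture EffectsTexture; // target for the post-processed image
  public RawImage TargetImage;         // RawImage on a screen-space canvas
  public Material BlendMaterial;       // material using the two-pass shader below

  private void Start()
  {
    this.EffectsCamera.GetUniversalAdditionalCameraData().renderPostProcessing = true;
    this.EffectsCamera.targetTexture = this.EffectsTexture; // render into the texture instead of the screen

    this.TargetImage.texture = this.EffectsTexture;
    this.TargetImage.material = this.BlendMaterial;
  }
}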

I also tried using the Full Screen Pass Renderer Feature to render the RenderTexture directly onto the screen without a Canvas/RawImage, to see if there are benefits regarding performance. But unfortunately there is a bug in URP 14.0.8, so this currently does not work. See ArgumentNullException: Value cannot be null.

For the sake of completeness, here are the settings of the RenderTexture and the implemented shader. To preserve the alpha values from Post Processing in the RenderTexture, I followed the steps in the following thread:

9156476--1273454--upload_2023-7-19_11-1-59.png
(Hardcoded the size to my screen size for the tests)
(EDIT: I tested with colors and a bloom threshold below 1.0, so the specified format was sufficient, but when using colors and a bloom threshold above 1.0, an HDR color format needs to be used, like R16G16B16A16_SFloat)
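Expressed in code, that format choice could look like this; a hedged sketch, where the LDR format is my assumption and the exact format from my screenshot may differ:

using UnityEngine.Experimental.Rendering;

public static class EffectsTextureFormat
{
  // HDR is only needed when bloom works on color values above 1.0.
  public static GraphicsFormat Choose(float bloomThreshold)
  {
    return bloomThreshold > 1.0f
      ? GraphicsFormat.R16G16B16A16_SFloat // 16-bit float HDR format
      : GraphicsFormat.R8G8B8A8_UNorm;     // typical 8-bit LDR format (my assumption)
  }
}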

Shader for drawing RenderTexture to Canvas/RawImage

Shader "Custom/TransparentRenderTexture"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        LOD 100

        Pass
        {
            // Pass 1: normal alpha blending of the post-processed texture onto the canvas
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            Cull Back
      
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            half4 frag (v2f i) : SV_Target
            {
                half4 col = tex2D(_MainTex, i.uv);
                return col;
            }      
            ENDCG
        }
  
        Pass
        {
            // Pass 2: additive blending; the fragment shader below remaps alpha so that
            // the bloom contribution (alpha < 1) is added on top of the first pass
            Blend SrcAlpha One
            ZWrite Off
            Cull Back
      
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            half4 frag (v2f i) : SV_Target
            {
                half4 col = tex2D(_MainTex, i.uv);

                // EDIT: That was buggy and caused losing pixels:
                //col.a = col.a == 0 ? 1.0 : 0.0;
                // EDIT: Replaced with:
                col.a = col.a < 1 ? 1.0 : 0.0;

                return col;  
            }      
            ENDCG
        }
    }
}

I would appreciate it if anyone has suggestions for improving the performance of the described approach, or an alternative approach.

EDIT: Found a better approach than the one described below; see my fifth post ( https://forum.unity.com/threads/enabling-post-processing-removes-benefits-from-msaa-and-render-scale.1461779/#post-9181940 )

So, last update from me: I implemented the two-camera approach in my real project. I decided to use a ScriptableRendererFeature for rendering the post-processed texture instead of a Canvas/RawImage. This does not make a difference in FPS, but I personally find directly rendering the Post Processing texture this way cleaner. To get this to work, two different 2D renderers are required:
9160634--1274276--upload_2023-7-20_21-9-49.png
The MainRenderer2D is used for the main player camera, while the EffectsRenderer2D is used for the effects camera, which is a child of the main player camera using the same transform position and orthographic size to ensure it always has the same scene view. The EffectsRenderer2D uses the customized UberPost shader which allows preserving the alpha, as mentioned in my last post:
9160634--1274294--upload_2023-7-20_21-16-1.png


(The inspector has to be switched to Debug mode to show the shader list above)
The SH_AlphaUberPost_Default shader can be constructed using the following thread. It should be constructed (rather than copied) because the original UberPost.shader may change between URP versions, even minor ones:

Also don’t forget to compare/update the shader when the URP version changes.
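As a side note: because the effects camera is a child of the player camera, only the orthographic size really needs syncing (the position is shared through the parent/child relationship); here is a minimal hypothetical sketch (the component name is mine):

using UnityEngine;

// Keeps the child effects camera's view identical to the parent player camera.
public class EffectsCameraSync : MonoBehaviour
{
  public Camera PlayerCamera;

  private Camera _effectsCamera;

  private void Awake()
  {
    this._effectsCamera = this.GetComponent<Camera>();
  }

  private void LateUpdate()
  {
    this._effectsCamera.orthographicSize = this.PlayerCamera.orthographicSize;
  }
}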

The MainRenderer2D has the DetachedEffectsRenderPassFeature added:
9160634--1274303--upload_2023-7-20_21-22-27.png
The material M_EffectsRenderPass uses the shader code I posted in my previous post. This is the feature code:
DetachedEffectsRenderPassFeature

using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

namespace My.Game.Scripts.Graphics
{
  /// <summary>
  /// Render pass feature for rendering the effects drawn by the effects camera into an effects render texture and compositing it back into the pipeline.
  /// </summary>
  /// <remarks>
  /// This has been implemented for rendering Post Processing effects without harming
  /// MSAA and render scale, because otherwise pixel flickering effects may re-occur.
  /// </remarks>
  public class DetachedEffectsRenderPassFeature : ScriptableRendererFeature
  {
    public Material Material;
    private bool _isEnabled;
    private bool _isReady;
    private Camera _registeredEffectsCamera;

    private EffectsRenderPass _renderPass;
    private RenderTexture _renderTexture;

    /// <inheritdoc />
    public DetachedEffectsRenderPassFeature()
    {
      this._renderPass = new()
      {
        renderPassEvent = RenderPassEvent.AfterRendering,
      };

      this.SetEnabled(false, true);
    }

    /// <inheritdoc />
    protected override void Dispose(bool disposing)
    {
      this.ReleaseEffectsTexture();
      base.Dispose(disposing);
    }

    /// <summary>
    /// Gets or sets a value indicating whether the effects render pass feature is enabled or not.
    /// </summary>
    public bool IsEnabled
    {
      get => this._isEnabled;
      set =>
        // Always force, because of how serialization and deserialization work on renderer features
        this.SetEnabled(value, true);
    }

    // Here you can inject one or multiple render passes in the renderer.
    // This method is called when setting up the renderer once per-camera.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
      if (this.IsEnabled && this._isReady)
      {
        renderer.EnqueuePass(this._renderPass);
      }
    }

    /// <inheritdoc />
    public override void Create()
    {
      if (this.Material == null)
      {
        Debug.LogWarning($"No material specified for '{nameof(DetachedEffectsRenderPassFeature)}'.");
        return;
      }

      this._renderPass.Material = this.Material;
      this._renderPass.RenderTexture = this._renderTexture;
    }

    private void CreateEffectsTexture()
    {
      if (this._renderTexture != null)
      {
        if (this._registeredEffectsCamera)
        {
          this._registeredEffectsCamera.targetTexture = null;
        }

        this._renderTexture.Release();
      }

      // EDIT: Do not increase the Render Scale, because it will be increased automatically
      //var renderScale = UniversalRenderPipeline.asset.renderScale;
      //this._renderTexture = new RenderTexture(Mathf.CeilToInt(Screen.width * renderScale), Mathf.CeilToInt(Screen.height * renderScale),    
      this._renderTexture = new RenderTexture(Screen.width, Screen.height,
        GraphicsFormat.R16G16B16A16_SFloat, GraphicsFormat.None)
      {
        anisoLevel = 0,
        filterMode = FilterMode.Point, // EDIT: Other filter modes could be used when desired
        wrapMode = TextureWrapMode.Clamp,
        dimension = TextureDimension.Tex2D,
        antiAliasing = 1, // EDIT: MSAA can be used here when desired
        useDynamicScale = false,
        useMipMap = false,
        autoGenerateMips = false,
        enableRandomWrite = false,
        depth = 0,
        memorylessMode = RenderTextureMemoryless.Depth,
      };

      // Assign textures and configure appropriately
      this._renderPass.RenderTexture = this._renderTexture;
      this.Material.mainTexture = this._renderTexture;

      // Update camera target texture
      if (this._registeredEffectsCamera)
      {
        this._registeredEffectsCamera.targetTexture = this._renderTexture;
      }

      this._isReady = true;
    }

    /// <summary>
    /// Registers a camera which renders into the effects texture.
    /// </summary>
    /// <param name="camera">The camera which renders into the effects texture.</param>
    public void RegisterTargetCamera(Camera camera)
    {
      if (this._registeredEffectsCamera != camera)
      {
        if (this._registeredEffectsCamera)
        {
          this._registeredEffectsCamera.targetTexture = null;
        }

        this._registeredEffectsCamera = camera;

        if (this._renderTexture != null)
        {
          this._registeredEffectsCamera.targetTexture = this._renderTexture;
        }
        else
        {
          this._registeredEffectsCamera.targetTexture = null;
        }
      }
    }

    private void ReleaseEffectsTexture()
    {
      if (this._isReady)
      {
        if (this._registeredEffectsCamera)
        {
          this._registeredEffectsCamera.targetTexture = null;
        }

        this._renderTexture.Release();
        this.Material.mainTexture = null;
        this._renderPass.RenderTexture = null;
        this._isReady = false;
      }
    }

    private void SetEnabled(bool isEnabled, bool forced)
    {
      if (this._isEnabled != isEnabled || forced)
      {
        if (isEnabled)
        {
          this.CreateEffectsTexture();
        }
        else
        {
          this.ReleaseEffectsTexture();
        }

        this._isEnabled = isEnabled;
      }
    }

    private class EffectsRenderPass : ScriptableRenderPass
    {
      public Material Material;

      public RenderTexture RenderTexture;

      /// <inheritdoc />
      public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
      {
      }

      // Here you can implement the rendering logic.
      // Use <c>ScriptableRenderContext</c> to issue drawing commands or execute command buffers
      // https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.html
      // You don't have to call ScriptableRenderContext.submit, the render pipeline will call it at specific points in the pipeline.
      public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
      {
        // Skip rendering if the render texture is null or not created
        if (this.RenderTexture == null || !this.RenderTexture.IsCreated() || this.Material == null)
        {
          return;
        }

        var cmd = CommandBufferPool.Get("RenderDetachedPostProcessingEffects");
        cmd.Blit(this.RenderTexture, renderingData.cameraData.renderer.cameraColorTargetHandle, this.Material);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
      }

      // Cleanup any allocated resources that were created during the execution of this render pass.
      public override void OnCameraCleanup(CommandBuffer cmd)
      {
      }

      // This method is called before executing the render pass.
      // It can be used to configure render targets and their clear state. Also to create temporary render target textures.
      // When empty this render pass will render to the active camera render target.
      // You should never call CommandBuffer.SetRenderTarget. Instead call <c>ConfigureTarget</c> and <c>ConfigureClear</c>.
      // The render pipeline will ensure target setup and clearing happens in a performant manner.
      public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
      {
      }
    }
  }
}

Last but not least, I implemented a GraphicsController MonoBehaviour for enabling/disabling the DetachedEffectsRenderPassFeature and adjusting the cameras:
GraphicsController

using System.Linq;

using Microsoft.Extensions.Logging;

using My.Game.Scripts.Dependencies;
using My.Game.Scripts.Project;

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using UnityEngine.Serialization;

using Zenject;

namespace My.Game.Scripts.Graphics
{
  public class GraphicsController : MonoBehaviour
  {
    [Tooltip("The effects renderer 2D used for rendering post processing effects only when configured.")]
    public Renderer2DData EffectsRenderer2D = null!;

    [FormerlySerializedAs("IsSeparatingEffects")]
    [Tooltip("The post processing effects are processed detached from the main camera.")]
    public bool IsDetachedPostProcessingEffects;

    [Tooltip("The main renderer 2D used for rendering all kinds of game objects.")]
    public Renderer2DData MainRenderer2D = null!;

    [Inject(Id = DependencyIdentifiers.SceneHierarchy.Cameras.Effects)]
    private Camera _effectsCamera;

    [Inject]
    private ILogger<GraphicsController> _logger;

    [Inject(Id = DependencyIdentifiers.SceneHierarchy.Cameras.Player)]
    private Camera _playerCamera;

    private UniversalRenderPipelineAsset? _urpAsset;

    private void Start()
    {
      if (this.MainRenderer2D == null)
      {
        this._logger.LogWarning("No main renderer 2D set, '{Name}' not working correctly", nameof(GraphicsController));
        return;
      }

      if (this.EffectsRenderer2D == null)
      {
        this._logger.LogWarning("No effects renderer 2D set, '{Name}' not working correctly", nameof(GraphicsController));
        return;
      }

      var urpAsset = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;

      if (urpAsset != null)
      {
        this._urpAsset = urpAsset;
        this.UpdateGraphicSettings();
      }
      else
      {
        this._logger.LogWarning("URP asset not found, '{Name}' not working correctly", nameof(GraphicsController));
      }
    }

    public void UpdateGraphicSettings()
    {
      if (this._urpAsset != null)
      {
        this.UpdateDetachedPostProcessingEffects();
      }
      else
      {
        this._logger.LogWarning("URP asset not found, graphic settings cannot be updated.");
      }
    }

    private void UpdateDetachedPostProcessingEffects()
    {
      var effectsRenderPassFeature = this.MainRenderer2D.rendererFeatures.OfType<DetachedEffectsRenderPassFeature>().Single();

      var isDetachedPostProcessingEffects = this.IsDetachedPostProcessingEffects;

      // When effect separation is enabled
      if (isDetachedPostProcessingEffects)
      {
        unchecked
        {
          this._playerCamera.cullingMask = (int)(UnityUserLayers.All & ~UnityUserLayers.Effects);
        }
      }
      else
      {
        unchecked
        {
          this._playerCamera.cullingMask = (int)UnityUserLayers.All;
        }
      }

      this._playerCamera.GetUniversalAdditionalCameraData().renderPostProcessing = !isDetachedPostProcessingEffects;

      this._logger.LogInformation("Setting enabled of effects render pass feature to '{EnabledState}'", isDetachedPostProcessingEffects);
      this._effectsCamera.gameObject.SetActive(isDetachedPostProcessingEffects);
      effectsRenderPassFeature.RegisterTargetCamera(this._effectsCamera);
      effectsRenderPassFeature.IsEnabled = isDetachedPostProcessingEffects;
    }
  }
}

I made a performance test in my real project, and the approach results in a drop from 530 to 450 FPS. That is roughly a 15% loss, which I find acceptable. Without knowing for sure, I guess the lower the frame rate, the smaller the relative impact of this approach. I made this technique a graphics quality setting in my game and assigned it to the upper quality presets, so the player can decide whether to use it or not. So in the end I am satisfied with this solution; most importantly, clean camera movement quality can be achieved with the upper quality presets.
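Hooking this into the quality presets is simple; a hypothetical sketch using the GraphicsController from above (which preset counts as "upper" is my choice):

using My.Game.Scripts.Graphics;
using UnityEngine;

public class QualityPresetHook : MonoBehaviour
{
  public GraphicsController GraphicsController;

  public void ApplyQualityLevel(int qualityLevel)
  {
    QualitySettings.SetQualityLevel(qualityLevel);

    // Assumption: quality levels 2 and above are the "upper" presets in my project.
    this.GraphicsController.IsDetachedPostProcessingEffects = qualityLevel >= 2;
    this.GraphicsController.UpdateGraphicSettings();
  }
}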

But one more post from me on this topic. I found another way of handling the pixel flickering: introducing Motion Blur. Because the flickering only appears during camera movement in my 2D platformer, it is useful to mask the pixel flickering by blurring the full screen. I used the following settings, which introduce only a subtle but noticeable blur, though this highly depends on the camera movement speeds in general:
9162497--1274771--upload_2023-7-21_16-41-23.png
It really helps, but it makes the overall image blurry and not only the sprites causing the flickering. Also, it does not work for slow camera movements. And to be honest, especially increasing the Render Scale looks way better for the overall graphics than blurring. But I consider using it for the lower graphics quality presets.
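For reference, the Motion Blur override can also be configured from script; a minimal sketch assuming a global Volume whose profile already contains a Motion Blur override (the intensity value is my assumption, the exact values from my screenshot may differ):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class MotionBlurSetup : MonoBehaviour
{
  public Volume GlobalVolume;

  private void Start()
  {
    // Assumes the volume profile already contains a Motion Blur override.
    if (this.GlobalVolume.profile.TryGet(out MotionBlur motionBlur))
    {
      motionBlur.active = true;
      motionBlur.intensity.overrideState = true;
      motionBlur.intensity.value = 0.1f; // subtle blur; my assumption, tune to your camera speeds
      motionBlur.quality.overrideState = true;
      motionBlur.quality.value = MotionBlurQuality.Low;
    }
  }
}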

One more post from me :slight_smile: Reason: I figured out a better solution than the one described in my third post:

Reasons:

  • The approach is easier to implement, because no customization of UberPost.shader is required and no custom shader is needed for blitting. Also, no shader adjustments are needed on URP updates.
  • Two cameras are still necessary, but one camera can simply draw all required layers; there is no more splitting of layer rendering. So all game objects can be rendered without causing sorting issues like in my previous approach, which rendered the effects last as overdraw.
  • Better FPS: an increase from 320 to 340 FPS in my project

So what's the approach? Using two cameras. The PlayerCamera renders all required layers as usual. It has post-processing disabled and renders to a RenderTexture instead of the display. It uses its own 2D renderer. A second DetachedPostProcessingCamera renders to the display. This camera has post-processing enabled and a higher priority than the PlayerCamera, but is configured to render nothing (via its Culling Mask). It uses its own 2D renderer, and I used the Background Type “Uninitialized”, because its content is fully overdrawn anyway, opaque-like.
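Expressed in code, the camera configuration could look roughly like this; a hedged sketch assuming both cameras and the RenderTexture already exist (as far as I can tell, Background Type “Uninitialized” corresponds to CameraClearFlags.Nothing):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class DetachedPostProcessingSetup : MonoBehaviour
{
  public Camera PlayerCamera;
  public Camera DetachedPostProcessingCamera;
  public RenderTexture IntermediateTexture;

  private void Start()
  {
    // PlayerCamera: renders the scene into the intermediate texture, no post-processing.
    this.PlayerCamera.targetTexture = this.IntermediateTexture;
    this.PlayerCamera.GetUniversalAdditionalCameraData().renderPostProcessing = false;

    // DetachedPostProcessingCamera: renders nothing itself; it only applies
    // post-processing on top of the blitted intermediate texture.
    this.DetachedPostProcessingCamera.cullingMask = 0;                       // render nothing
    this.DetachedPostProcessingCamera.depth = this.PlayerCamera.depth + 1;   // higher priority
    this.DetachedPostProcessingCamera.clearFlags = CameraClearFlags.Nothing; // Background Type "Uninitialized"
    this.DetachedPostProcessingCamera.GetUniversalAdditionalCameraData().renderPostProcessing = true;
  }
}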


The FinalRenderer2D of the DetachedPostProcessingCamera has a render feature, while the MainRenderer2D has nothing special:
9181940--1279139--upload_2023-7-30_17-54-48.png
The FinalRenderPassFeature simply renders the RenderTexture that the PlayerCamera rendered into, right before post-processing happens:
FinalRenderPassFeature

using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

namespace My.Game.Scripts.Graphics
{
  /// <summary>
  /// Render pass feature for rendering the final camera displayed texture.
  /// </summary>
  /// <remarks>
  /// This has been implemented for rendering Post Processing effects without harming
  /// MSAA and render scale, because otherwise pixel flickering effects may re-occur.
  /// </remarks>
  public class FinalRenderPassFeature : ScriptableRendererFeature
  {
    private readonly RenderIntermediateTextureRenderPass _renderIntermediateTextureRenderPass;
    private RenderTexture _intermediateTexture;
    private bool _isEnabled;
    private bool _isReady;
    private Camera _registeredBaseCamera;

    /// <inheritdoc />
    public FinalRenderPassFeature()
    {
      this._renderIntermediateTextureRenderPass = new()
      {
        renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing,
      };

      this.SetEnabled(false, true);
    }

    /// <inheritdoc />
    protected override void Dispose(bool disposing)
    {
      this.ReleaseIntermediateTexture();
      base.Dispose(disposing);
    }

    /// <summary>
    /// Gets or sets a value indicating whether the effects render pass feature is enabled or not.
    /// </summary>
    public bool IsEnabled
    {
      get => this._isEnabled;
      set =>
        // Always force, because of how serialization and deserialization work on renderer features
        this.SetEnabled(value, true);
    }

    // Here you can inject one or multiple render passes in the renderer.
    // This method is called when setting up the renderer once per-camera.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
      if (this.IsEnabled && this._isReady)
      {
        renderer.EnqueuePass(this._renderIntermediateTextureRenderPass);
      }
    }

    /// <inheritdoc />
    public override void Create()
    {
    }

    private void CreateIntermediateTexture()
    {
      if (this._intermediateTexture != null)
      {
        if (this._registeredBaseCamera)
        {
          this._registeredBaseCamera!.targetTexture = null;
        }

        this._intermediateTexture.Release();
      }

      var msaaSampleCount = UniversalRenderPipeline.asset.msaaSampleCount;

      // Render scale seems to be increased by another mechanism, so do not increase it manually
      this._intermediateTexture = new RenderTexture(Screen.width, Screen.height,
        GraphicsFormat.R16G16B16A16_SFloat, GraphicsFormat.None)
      {
        anisoLevel = 0,
        // This increases effect quality, but may be undesired when more layers are rendered into the effects texture, because they also get filtered
        filterMode = FilterMode.Point,
        wrapMode = TextureWrapMode.Clamp,
        dimension = TextureDimension.Tex2D,
        antiAliasing = msaaSampleCount,
        useDynamicScale = false,
        useMipMap = false,
        autoGenerateMips = false,
        enableRandomWrite = false,
        depth = 0,
        memorylessMode = RenderTextureMemoryless.Depth,
      };

      // Assign textures and configure appropriately
      this._renderIntermediateTextureRenderPass.RenderTexture = this._intermediateTexture;

      // Update camera target texture
      if (this._registeredBaseCamera)
      {
        this._registeredBaseCamera.targetTexture = this._intermediateTexture;
      }

      this._isReady = true;
    }

    /// <summary>
    /// Registers a camera which renders into the intermediate texture to be post-processed.
    /// </summary>
    /// <param name="camera">The camera which renders into the intermediate texture.</param>
    public void RegisterBaseCamera(Camera camera)
    {
      if (this._registeredBaseCamera != camera)
      {
        if (this._registeredBaseCamera)
        {
          this._registeredBaseCamera.targetTexture = null;
        }

        this._registeredBaseCamera = camera;

        if (this._intermediateTexture != null)
        {
          this._registeredBaseCamera.targetTexture = this._intermediateTexture;
        }
        else
        {
          this._registeredBaseCamera.targetTexture = null;
        }
      }
    }

    private void ReleaseIntermediateTexture()
    {
      if (this._isReady)
      {
        if (this._registeredBaseCamera)
        {
          this._registeredBaseCamera.targetTexture = null;
        }

        if (this._intermediateTexture != null)
        {
          this._intermediateTexture.Release();
        }

        this._renderIntermediateTextureRenderPass.RenderTexture = null;
        this._isReady = false;
      }
    }

    private void SetEnabled(bool isEnabled, bool forced)
    {
      if (this._isEnabled != isEnabled || forced)
      {
        if (isEnabled)
        {
          this.CreateIntermediateTexture();
        }
        else
        {
          this.ReleaseIntermediateTexture();
        }

        this._isEnabled = isEnabled;
      }
    }

    private class RenderIntermediateTextureRenderPass : ScriptableRenderPass
    {
      public RenderTexture RenderTexture;

      /// <inheritdoc />
      public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
      {
      }

      // Here you can implement the rendering logic.
      // Use <c>ScriptableRenderContext</c> to issue drawing commands or execute command buffers
      // https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.html
      // You don't have to call ScriptableRenderContext.submit, the render pipeline will call it at specific points in the pipeline.
      public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
      {
        // Skip rendering if the render texture is null or not created
        if (this.RenderTexture == null || !this.RenderTexture.IsCreated())
        {
          return;
        }

        var cmd = CommandBufferPool.Get("RenderDetachedPostProcessingEffects");
        cmd.Blit(this.RenderTexture, renderingData.cameraData.renderer.cameraColorTargetHandle);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
      }

      // Cleanup any allocated resources that were created during the execution of this render pass.
      public override void OnCameraCleanup(CommandBuffer cmd)
      {
      }

      // This method is called before executing the render pass.
      // It can be used to configure render targets and their clear state. Also to create temporary render target textures.
      // When empty this render pass will render to the active camera render target.
      // You should never call CommandBuffer.SetRenderTarget. Instead call <c>ConfigureTarget</c> and <c>ConfigureClear</c>.
      // The render pipeline will ensure target setup and clearing happens in a performant manner.
      public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
      {
      }
    }
  }
}

To enable/disable the detached post-processing, the following code is sufficient:
Enable/Disable detached post-processing

private void UpdatedPostProcessingEffects(in GraphicsOptions graphicsOptions)
{
  var finalRenderPassFeature = this.FinalRenderer2D.rendererFeatures.OfType<FinalRenderPassFeature>().Single();

  var isDetachedPostProcessingEffects = graphicsOptions.HasDetachedEffectsRendering;

  this._playerCamera.GetUniversalAdditionalCameraData().renderPostProcessing = !isDetachedPostProcessingEffects && graphicsOptions.HasPostProcessingEffects;

  this._logger.LogInformation("Setting enabled of effects render pass feature to '{EnabledState}'", isDetachedPostProcessingEffects);
  this._detachedPostProcessingCamera.gameObject.SetActive(isDetachedPostProcessingEffects);
  this._detachedPostProcessingCamera.GetUniversalAdditionalCameraData().renderPostProcessing = isDetachedPostProcessingEffects && graphicsOptions.HasPostProcessingEffects;
  finalRenderPassFeature.RegisterBaseCamera(this._playerCamera);
  finalRenderPassFeature.IsEnabled = isDetachedPostProcessingEffects;
}