How do I merge two RenderTargets correctly? Please let me know.

My Goal
Merge a source render target into a destination render target, and make the result the final rendered image that the user sees.

The image below shows what I want to achieve.


First image: source RenderTarget
Second image: destination RenderTarget
Last image: the rendered image that the user sees

(Sorry for the rough example; I believe you can understand what I'm trying to say.)

Problem
The source render target is not reflected in the final image that the user sees.
The final image reflects only the destination. This is my primary problem!

The image below shows my problem.

I suspect my use of the Blit method is wrong, or that context.DrawRenderers might not draw into the render target I set. I'm not sure.

I've been stuck on this issue for three days and can't find a solution. I'd gladly buy you a cup of coffee if you help me solve this problem…

Here's the full source code I wrote for testing.

using ProvisGames.CustomPass;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class MyCustomFeature : ScriptableRendererFeature
{
    [SerializeField] private RenderPassEvent _passEvent; // Inspector Value : RenderPassEvent.AfterRenderingOpaques
    [SerializeField] private LayerMask _layerMask;

    private RenderTargetHandle _temporaryRenderTarget;
    private Material _blitMtr;

    public override void Create()
    {
        _temporaryRenderTarget = new RenderTargetHandle();
        _temporaryRenderTarget.Init("_temporarySourceTexture");

        _blitMtr = CoreUtils.CreateEngineMaterial(Shader.Find("Hidden/Universal Render Pipeline/Blit"));
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        bool isOpaque = (int)_passEvent <= (int)RenderPassEvent.AfterRenderingSkybox;

        DrawOnRenderTargetPass renderPass = new DrawOnRenderTargetPass(_passEvent, isOpaque, _layerMask, _temporaryRenderTarget);
        renderer.EnqueuePass(renderPass);

        MergeRenderTargetPass mergePass = new MergeRenderTargetPass(_temporaryRenderTarget, RenderTargetHandle.CameraTarget, _blitMtr);
        mergePass.renderPassEvent = RenderPassEvent.AfterRendering;
        renderer.EnqueuePass(mergePass);
    }
}

public class DrawOnRenderTargetPass : ScriptableRenderPass
{
    private bool _isOpaque;
    private LayerMask _layerMask;
    private RenderTargetHandle _destination;

    public DrawOnRenderTargetPass(RenderPassEvent passEvent, bool isOpaque, LayerMask layerMask, RenderTargetHandle destination)
    {
        this.renderPassEvent = passEvent;

        _isOpaque = isOpaque;
        _layerMask = layerMask;
        _destination = destination;
    }

    public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
    {
        base.Configure(cmd, cameraTextureDescriptor);

        cmd.GetTemporaryRT(_destination.id, cameraTextureDescriptor);
        ConfigureTarget(_destination.Identifier());
        ConfigureClear(ClearFlag.All, Color.clear);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var properSortingCriteria = _isOpaque ? SortingCriteria.CommonOpaque : SortingCriteria.CommonTransparent;
        var properRenderQueue = _isOpaque ? RenderQueueRange.opaque : RenderQueueRange.transparent;

        var drawingSetting = CreateDrawingSettings(URPSharedData.URPDefaultShaderTags, ref renderingData, properSortingCriteria);
        var filterSetting = new FilteringSettings(properRenderQueue, _layerMask);
        context.DrawRenderers(renderingData.cullResults, ref drawingSetting, ref filterSetting);
    }

    public override void FrameCleanup(CommandBuffer cmd)
    {
        base.FrameCleanup(cmd);

        if (_destination != RenderTargetHandle.CameraTarget)
        {
            cmd.ReleaseTemporaryRT(_destination.id);
        }
    }
}

public class MergeRenderTargetPass : ScriptableRenderPass
{
    private RenderTargetHandle _source;
    private RenderTargetHandle _destination;
    private Material _blitMtr;

    public MergeRenderTargetPass(RenderTargetHandle source, RenderTargetHandle destination, Material blitMtr)
    {
        _source = source;
        _destination = destination;
        _blitMtr = blitMtr;
    }
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get();

        Blit(cmd, _source.Identifier(), _destination.Identifier(), _blitMtr, 0);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}

The helper class below is in a separate file:
using System.Collections.Generic;
using UnityEngine.Rendering;

namespace ProvisGames.CustomPass
{
    public sealed class URPSharedData
    {
        public static readonly List<ShaderTagId> URPDefaultShaderTags = new List<ShaderTagId>()
        {
            new ShaderTagId("SRPDefaultUnlit"),
            new ShaderTagId("UniversalForward"),
            new ShaderTagId("UniversalForwardOnly"),
            new ShaderTagId("LightweightForward")
        };

        public static readonly ShaderTagId DepthOnlyShaderTag = new ShaderTagId("DepthOnly");
    }
}

Blit assumes you are overwriting the target, scaling the source to fit it. That is not what I assume you want to achieve. For what you are trying to do, you should instead draw a screen-space quad using CommandBuffer.DrawMesh.

Something along the lines of:
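(The original snippet appears to have been lost from this post, so here is a rough sketch of the idea rather than the exact code. The material and mesh are assumptions: the material is expected to be an alpha-blended shader sampling "_SourceTex", and the quad is assumed to span clip space.)

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical merge pass: composites a source texture over the camera
// target by drawing a full-screen quad instead of calling Blit.
public class DrawQuadMergePass : ScriptableRenderPass
{
    private readonly RenderTargetHandle _source;
    private readonly Material _overlayMaterial; // assumed: alpha-blended, samples "_SourceTex"
    private readonly Mesh _fullscreenQuad;      // assumed: quad covering clip space (-1..1)

    public DrawQuadMergePass(RenderTargetHandle source, Material overlayMaterial, Mesh fullscreenQuad)
    {
        renderPassEvent = RenderPassEvent.AfterRendering;
        _source = source;
        _overlayMaterial = overlayMaterial;
        _fullscreenQuad = fullscreenQuad;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Merge Quad");

        // Draw over the camera target; alpha blending in the material keeps
        // the destination visible wherever the source is transparent.
        cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);
        cmd.SetGlobalTexture("_SourceTex", _source.Identifier());
        cmd.DrawMesh(_fullscreenQuad, Matrix4x4.identity, _overlayMaterial, 0, 0);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```

Because you control the mesh and the material, the blending and coverage are entirely up to you, which is the point of this approach over Blit.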

On an additional note, the example from an ancient version of LWRP uses a full-screen quad for the mesh. Since you are just drawing a mesh, you can customize its size to achieve the effect you want.

Thanks for posting your answer!

As you said, I did some research on DrawMesh, and I realized that what I want is actually closer to how the Blit function works. I want to composite the two render targets so that the objects drawn in the source are always drawn over the destination, and the overlapping result always looks alpha-blended. DrawMesh might solve my problem, but I would like to solve it using Blit.

After posting the question, I continued to investigate and found that if I blit to CameraTarget (that is, the render target of the currently rendering camera), the final result should be the same as the blit result, but it isn't.

It seems that I either made a very minor mistake, or there is something about Blit or SetRenderTarget that I'm not aware of.
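One thing I still need to check (just a guess on my part, not confirmed): in URP the camera often renders into an intermediate color texture rather than the backbuffer, so BuiltinRenderTextureType.CameraTarget may not be what actually reaches the screen. Something like this in the feature, handing the merge pass the renderer's actual color target instead:

```csharp
// Hypothetical change inside AddRenderPasses: use the renderer's active color
// target instead of RenderTargetHandle.CameraTarget. This would require the
// pass to accept a RenderTargetIdentifier for its destination, and some URP
// versions only allow reading cameraColorTarget during pass setup/execution.
RenderTargetIdentifier cameraColor = renderer.cameraColorTarget;
MergeRenderTargetPass mergePass = new MergeRenderTargetPass(_temporaryRenderTarget, cameraColor, _blitMtr);
mergePass.renderPassEvent = RenderPassEvent.AfterRendering;
renderer.EnqueuePass(mergePass);
```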

Thanks again for the reply. I will post an update when the problem is resolved!

(English is not my native language, so this may be difficult to read. Please understand.)

Blit is a combination of rescaling and copying; there is no way to separate the scaling from the copy. If you blit from 600x500 to 1200x800, you will not get an overwrite of just the 600x500 region but the result of scaling the 600x500 source up to 1200x800. So you will never be able to achieve what you want with Blit.
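To illustrate the difference: if you only want to cover part of the destination without rescaling, you restrict where you draw instead of using Blit, for example by setting a viewport before a quad draw. A minimal sketch under those assumptions (overlayMaterial, fullscreenQuad, and sourceIdentifier are placeholders):

```csharp
// Hypothetical command-buffer snippet: composite a 600x500 source into the
// corner of a larger target without any rescaling. Blit cannot do this,
// because it always stretches the source to fill the destination.
CommandBuffer cmd = CommandBufferPool.Get("Composite Region");
cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);
cmd.SetViewport(new Rect(0, 0, 600, 500));            // only this region is written
cmd.SetGlobalTexture("_SourceTex", sourceIdentifier);  // placeholder identifier
cmd.DrawMesh(fullscreenQuad, Matrix4x4.identity, overlayMaterial, 0, 0);
context.ExecuteCommandBuffer(cmd);
CommandBufferPool.Release(cmd);
```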