Feedthrough on CommandBuffer/RenderFeature. [SRP | URP | 2021.1.0b1]

Hello,

I’ve recently been trying to create an outline shader following this video:

The video creates the outline in the following steps (roughly sketched in code after the list):

  1. Render outlined objects as a solid colour.
  2. Apply a blur to the render texture.
  3. Remove the original render from the newly blurred render texture.
  4. Overlay that over the main camera’s render.
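
In command-buffer terms, steps 2–4 boil down to a few blits (step 1 is a draw of the outlined objects with a flat-colour material). This is a rough sketch only; every name here is a placeholder of mine, not the video’s actual code:

using UnityEngine;
using UnityEngine.Rendering;

static class OutlineSketch
{
    // solidColor:  the outlined objects rendered as a flat colour (step 1's output).
    // scratch:     a temporary texture for the blurred copy.
    // cameraColor: the main camera's colour target.
    public static void Compose(CommandBuffer cmd,
                               RenderTargetIdentifier solidColor,
                               RenderTargetIdentifier scratch,
                               RenderTargetIdentifier cameraColor,
                               Material blurMaterial,
                               Material subtractMaterial,
                               Material overlayMaterial)
    {
        cmd.Blit(solidColor, scratch, blurMaterial);        // 2. blur the solid-colour render
        cmd.SetGlobalTexture("_Original", solidColor);      // 3. subtractMaterial reads this...
        cmd.Blit(scratch, solidColor, subtractMaterial);    //    ...and cuts it out, leaving the halo
        cmd.Blit(solidColor, cameraColor, overlayMaterial); // 4. additively composite over the camera
    }
}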

[Image: firefox_ehvnw6VxIq.png]

That image shows the net result; the repo is here:

Since the video was made in 2017, I’ve been trying to update it to use the SRP and Unity’s newer rendering features for simpler, more streamlined code.

I used this repo as a starting point:

I then created two cameras and two renderers, with one camera set as the Base and the other as an Overlay on top of it.

The problem I’m having is that while my replacement shader works fine:

When I try to apply my blur, it blurs the whole camera output instead of just the RenderTexture that should be active within the Renderer:
[Image: Unity_4dfT8NVw9u.png]

I’m assuming my issue is that instead of renderers acting as independent camera views, all render features/renderers are simply applied to the main camera texture in order.

Meaning that while I expect the command buffer to contain this:
[Image: Unity_J6Fyg4DyL8.png]

It’s actually this:

So my question is: what is the correct way to do this?
I’ve been struggling with it for a few days without making much progress.

I’ve tried a couple different solutions:

  • Having one RenderFeature instead of two and doing the whole thing in one command buffer. (You can’t make a camera render from within a command buffer; I think maybe I was going about it wrong.)
  • Making it all into one shader with multiple passes, but it never seemed to compile.
  • Applying all the materials in order to a secondary camera with a C# script, then applying that camera’s texture to the main camera; again, the whole thing just had issues.

I think I just have some fundamental misunderstanding of the data flow and structure of how something like this should be done.

If anyone could be of any help it would be much appreciated.

Thanks,
Harry.

I have resolved this issue, so I’ll post my findings for the future forum historians.

The way the Base/Overlay system seems to work is that Base Cameras and Overlay Cameras share the same RenderTexture; they simply render to it in a specific order. This is what caused the blurring of my main texture.

To work around this, I do the following steps:

  1. Set my Overlay Camera to a Base Camera.

  2. Change the Environment Background Type to Solid Color and the Background to black with zero alpha.

  3. Send the output to a RenderTexture.
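
    If you’d rather do steps 1–3 from a script, something like this should work (an untested sketch; outlineCamera and outlineRT are placeholder references you’d assign in the Inspector):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class OutlineCameraSetup : MonoBehaviour
{
    public Camera outlineCamera;     // the former Overlay camera
    public RenderTexture outlineRT;  // the texture it renders into

    void Awake()
    {
        // Step 1: make it a Base camera.
        outlineCamera.GetUniversalAdditionalCameraData().renderType = CameraRenderType.Base;
        // Step 2: solid-colour background, black with zero alpha.
        outlineCamera.clearFlags = CameraClearFlags.SolidColor;
        outlineCamera.backgroundColor = new Color(0f, 0f, 0f, 0f);
        // Step 3: send the output to the RenderTexture.
        outlineCamera.targetTexture = outlineRT;
    }
}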

  4. Create an additive blending material with my RenderTexture.
    [Image: Unity_oEm1kECOGt.png]
    Code here:

Shader "Add" {
    Properties{
        _Combine("_Combine", 2D) = "black" {}
        _MainTex("_MainTex", 2D) = "black" {}
    }
        SubShader{
            Tags { "Queue" = "Transparent" }
            Pass {
                Blend One One
                SetTexture[_Combine]
                SetTexture[_MainTex] { combine previous, texture }
            }
    }
}
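
    The RenderTexture goes into the material’s _Combine slot (screenshot above); doing the same from a script would look roughly like this (a sketch; addMaterial and outlineRT are placeholder references):

using UnityEngine;

public class AssignCombineTexture : MonoBehaviour
{
    public Material addMaterial;     // a material using the "Add" shader above
    public RenderTexture outlineRT;  // the outline camera's target texture

    void Awake()
    {
        // Point the additive shader's _Combine sampler at the outline render.
        addMaterial.SetTexture("_Combine", outlineRT);
    }
}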
  5. Add an OverlayPass RenderFeature to my Base Camera’s Renderer.
    [Image: Unity_ROSugNSphx.png]
    Code here:
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class OverlayPass : ScriptableRendererFeature
{
    [System.Serializable]
    public class OverlaySettings
    {
        public RenderPassEvent renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
        public Material overlayMaterial; // the additive "Add" material from step 4
    }

    public OverlaySettings settings = new OverlaySettings();

    class CustomRenderPass : ScriptableRenderPass
    {
        public Material overlayMaterial;
        private string profilerTag;

        // The camera's colour target, captured each frame in AddRenderPasses.
        private RenderTargetIdentifier source { get; set; }

        public void Setup(RenderTargetIdentifier source)
        {
            this.source = source;
        }

        public CustomRenderPass(string profilerTag)
        {
            this.profilerTag = profilerTag;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get(profilerTag);
            // Blit the camera colour target onto itself through the additive material;
            // its Blend One One adds _Combine (the outline RenderTexture) on top.
            cmd.Blit(source, source, overlayMaterial);
            context.ExecuteCommandBuffer(cmd);
            cmd.Clear();
            CommandBufferPool.Release(cmd);
        }
    }

    CustomRenderPass scriptablePass;

    public override void Create()
    {
        scriptablePass = new CustomRenderPass("OverlayPass");
        scriptablePass.overlayMaterial = settings.overlayMaterial;
        scriptablePass.renderPassEvent = settings.renderPassEvent;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        // Grab the current camera colour target and enqueue the pass for this frame.
        var src = renderer.cameraColorTarget;
        scriptablePass.Setup(src);
        renderer.EnqueuePass(scriptablePass);
    }
}

(Based on the KawaseBlur code here: urp_kawase_blur/Assets/Scripts/KawaseBlur.cs at master · sebastianhein/urp_kawase_blur · GitHub)

Which then achieves this lovely effect:
[Image: jmpebLisjY.gif]

Limitations:
Because I’m rendering to a RenderTexture, I don’t seem to be able to change resolution at runtime; I’ve tried it with a script and it doesn’t seem to work.
This is quite a heavy limitation: in theory you’d have to keep a set of fixed resolutions and swap out render textures as the resolution changes at runtime to keep the effect working, instead of letting it get stretched out and distorted.
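
For reference, the swap I mean would look roughly like this (an untested sketch; as noted above, my own script attempt didn’t work):

using UnityEngine;

public class OutlineRTResizer : MonoBehaviour
{
    public Camera outlineCamera;  // the outline Base camera
    public Material addMaterial;  // the additive blending material
    RenderTexture rt;

    void Update()
    {
        // Recreate the RenderTexture whenever the screen size changes.
        if (rt == null || rt.width != Screen.width || rt.height != Screen.height)
        {
            if (rt != null) rt.Release();
            rt = new RenderTexture(Screen.width, Screen.height, 24);
            outlineCamera.targetTexture = rt;
            addMaterial.SetTexture("_Combine", rt);
        }
    }
}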

This occurs because the blending shader reads the pixel information of both textures based on UV coordinates, meaning the overlay image is warped to match the aspect ratio of the main image.

There are two workarounds I see for this:

  1. Alter the shader to scale the RenderTexture’s X or Y so it scales properly with the main camera’s render (a rough sketch follows this list).
    Limitations: Any difference in aspect ratio between the two cameras will cause outlines to be culled.

  2. Instead of rendering to a RenderTexture, simply make the Overlay camera a Base Camera with a lower rendering index, pass the camera to the OverlayPass RenderFeature, and finally apply its activeRenderTexture to the additive material.
    Limitations: In theory none, but it feels rather hacky.
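
A rough sketch of workaround 1’s scripting side (the shader would need a matching _CombineScale property to multiply _Combine’s UVs by; both names are placeholders of mine):

using UnityEngine;

public class CombineAspectFix : MonoBehaviour
{
    public Material addMaterial;     // the additive blending material
    public RenderTexture outlineRT;  // the outline camera's target texture

    void Update()
    {
        float screenAspect = (float)Screen.width / Screen.height;
        float rtAspect = (float)outlineRT.width / outlineRT.height;
        // Stretch one UV axis so the overlay keeps its own aspect ratio
        // instead of being warped to match the main camera's.
        Vector2 scale = screenAspect > rtAspect
            ? new Vector2(screenAspect / rtAspect, 1f)
            : new Vector2(1f, rtAspect / screenAspect);
        addMaterial.SetVector("_CombineScale", scale);
    }
}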

There’s also a secret third workaround that seems very obvious: “why not simply give the second Base Camera a higher Priority than the first? Surely that’ll make it render on top and avoid this issue?” In theory, yes; in practice it seems only the Base Camera with the highest Priority gets rendered to the screen. Whether this is a bug or intentional, I’m not sure; only time will tell.

Good luck to ya!
Harry.

Update: Solution 2 is not possible because it relies on references to objects within the scene hierarchy, which you cannot pass into a static render pass. It seems solution 1 is the only practical way to do this; a shame.

All the best,
Harry.