SetTargetBuffers does not render UI Canvas

Hey all,

I’m trying to implement some basic post-processing with multiple render targets, using a simple setup like this:

public class CameraPostProcessing : MonoBehaviour {

    public Material postProcessingMaterial;
    private Camera cam;

    private RenderTexture mainTex;
    private RenderTexture effectsTex;

    void Start () {
        mainTex = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
        effectsTex = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBHalf);
        cam = GetComponent<Camera>();
        RenderBuffer[] rb = new RenderBuffer[] {mainTex.colorBuffer, effectsTex.colorBuffer};
        cam.SetTargetBuffers(rb, mainTex.depthBuffer);
        postProcessingMaterial.SetTexture("_EffectsTex", effectsTex);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        Graphics.Blit(mainTex, dest, postProcessingMaterial);
    }

}

It works great except for one little detail: the UI is not rendered at all. I assumed it had something to do with me discarding src in OnRenderImage, but that doesn’t appear to be the case: that texture is pure white.

I’m using the Unity 5 standard UI Canvas with Render Mode “Screen Space - Overlay”.

I can’t really figure it out; any help is much appreciated!

Can you post a screenshot of the whole (or at least the last part) of the Frame Debugger contents?
Most likely something in your scene gets rendered after the Canvas.RenderOverlays (I don’t remember the exact call)

Firstly, thank you for introducing me to Frame Debugger :smile: (I’m sorry, I’m such a newbie).

Secondly, I don’t see anything happening after Canvas.RenderOverlays:

One thing I noticed is that if I remove the SetTargetBuffers call (which makes the UI render correctly and the post-processing not render at all), then RenderTarget in the Frame Debugger shows <No name> instead of an empty string. With the SetTargetBuffers call enabled, the Frame Debugger doesn’t display a preview in the Game tab when stepping through draw calls, and RenderTarget is an empty string, exactly as shown in the screenshot above.

I wonder if I need to restore something to its original state in OnPostRender… I already tried memorizing Graphics.activeColorBuffer and Graphics.activeDepthBuffer in OnPreRender and setting them back in OnPostRender, but then the buffers are not rendered at all :face_with_spiral_eyes:
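For reference, this is roughly what I tried (a sketch from memory; the field names are just placeholders):

    private RenderBuffer savedColorBuffer;
    private RenderBuffer savedDepthBuffer;

    void OnPreRender() {
        // remember whatever was bound before this camera renders
        savedColorBuffer = Graphics.activeColorBuffer;
        savedDepthBuffer = Graphics.activeDepthBuffer;
    }

    void OnPostRender() {
        // try to restore the previous buffers after rendering into the MRT setup
        Graphics.SetRenderTarget(savedColorBuffer, savedDepthBuffer);
    }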

What happens when you click those separate draw calls? Does anything get rendered in the game view?

Nope, with SetTargetBuffers enabled the Game view doesn’t update at all.

Is it possible to get an example project from you to investigate?

Sure, here’s a demo project which illustrates the issue.

In short: the object shader writes a mask to the COLOR1 buffer, which is then captured via SetTargetBuffers and passed to the post-processing shader to draw an outline.

If you comment out the SetTargetBuffers call and OnRenderImage, you will be able to see the UI, but no outlines.

3272981–252973–OutlineDemo.zip (1.87 MB)

Some further investigation shows that calling SetTargetBuffers affects the source and destination parameters in OnRenderImage. src unsurprisingly becomes null, but what’s rather unexpected is that dest becomes “ImageEffects Temp Buffer” instead of null.
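For the record, I checked this with a quick log along these lines (just a throwaway sketch in OnRenderImage):

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // with SetTargetBuffers active, src comes in as null and dest as the "ImageEffects Temp Buffer" texture
        Debug.Log("src: " + (src == null ? "null" : src.name) + ", dest: " + (dest == null ? "null" : dest.name));
        Graphics.Blit(mainTex, dest, postProcessingMaterial);
    }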

I tried manipulating RenderTexture.active and all sorts of Graphics.activeColorBuffer/activeDepthBuffer juggling, but it either has no effect or gives a black screen.

OK, SetTargetBuffers is a big box of surprises.

Long story short, with (incorrect) code like this:

    public Material postProcessingMaterial;
    private Camera cam;

    private RenderTexture mainTex;
    private RenderTexture effectsTex;

    void Start () {
        cam = GetComponent<Camera>();
    }

    void OnPreRender() {
        mainTex = RenderTexture.GetTemporary(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
        effectsTex = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBHalf);
        RenderBuffer[] rb = new RenderBuffer[] {mainTex.colorBuffer, effectsTex.colorBuffer};
        postProcessingMaterial.SetTexture("_EffectsTex", effectsTex);
        cam.SetTargetBuffers(rb, mainTex.depthBuffer);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst) {
        Graphics.Blit(mainTex, dst, postProcessingMaterial);
        /*
        RenderTexture.ReleaseTemporary(mainTex);
        RenderTexture.ReleaseTemporary(effectsTex);
        */
        mainTex.Release();
        effectsTex.Release();
    }

With the above, I get exactly the results I want (that is, both the effects and the UI are rendered correctly). With one “small” caveat: this code is incorrect and throws the exception “Releasing render texture whose render buffer is set as Camera’s target buffer with Camera.SetTargetBuffers!” on every frame.

However, if I replace the Release calls with RenderTexture.ReleaseTemporary, no exceptions are thrown. And no UI is rendered either :rage:

Unless there’s something I’m missing, I really think this is a bug. Can someone confirm or refute this, please?

I have just opened your example but can’t see any outline effect.


I also can’t see the UI, which, as you’ve said, is your problem. Commenting out the command buffers line just renders everything black. But I won’t be able to tell whether any of my changes fix it, because I can’t tell whether the effect is working in the first place :slight_smile: Can you provide some additional info on this? I’m using 2017.2, just like you (I’ve checked the project settings).

@Kumo-Kairo Thanks for looking into it! And sorry to hear that it doesn’t work for you; I assume I’m using something OpenGL-specific that probably doesn’t work on DirectX (I have no Windows machine to test on).

Anyway, I’ve kind of given up on the idea of using SetTargetBuffers on the main camera. Initially it seemed like a win, because all geometry is rendered exactly once and you can encode lots of useful data into a separate buffer for later post-processing stages. But in practice calling SetTargetBuffers has unwanted side effects (like buffer[0] still being set as the render target even if you reset it in OnPostRender), and this basically prevents the UI from being rendered to the screen.

I will now try using stencils (I think it’s possible to write some bits to the Z buffer from the fragment shader and use them later in post-processing). I’ll post the results here.

Thanks again for your help!

Can you show a screenshot of an effect you’re trying to achieve?

Yes, sure. I finally managed to achieve what I was going for:

3277342--253424--outline.gif

So I still used SetTargetBuffers, but this time on a separate camera, leaving the main camera with cullingMask = 0 (render nothing), and then blitting the buffers together with a post-processing shader. Initially I was reluctant to introduce another camera, because it “almost worked” with the main one. But I figured that SetTargetBuffers just isn’t designed to work with an actively rendering camera (the UI being rendered into the wrong buffer is just one of the consequences).

Actually, much to my surprise, the additional camera setup adds almost zero complexity: all we have to do is create an empty game object under the main camera, attach a Camera component to it, and then call cam.CopyFrom(Camera.main) in the controller (or even initialize the settings manually in the Editor); there’s a small setup sketch after the code below. Finally, the code isn’t too different from the one I posted in the beginning:

public class CameraCustomRenderer : MonoBehaviour {

    public Camera mrtCamera;
    public Material postProcessingMaterial;

    private RenderTexture mainTex;
    private RenderTexture effectsTex;

    void Start () {
        mainTex = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
        effectsTex = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBHalf);
        RenderBuffer[] rb = new RenderBuffer[] { mainTex.colorBuffer, effectsTex.colorBuffer };
        postProcessingMaterial.SetTexture("_EffectsTex", effectsTex);
        mrtCamera.SetTargetBuffers(rb, mainTex.depthBuffer);
        mrtCamera.enabled = false;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // render the scene into the MRT buffers, then composite to screen
        mrtCamera.Render();
        Graphics.Blit(mainTex, dest, postProcessingMaterial);
        // clear the effects buffer so it starts fresh next frame
        Graphics.SetRenderTarget(effectsTex);
        GL.Clear(false, true, Color.black);
    }

}
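For completeness, here is roughly how the CopyFrom route I mentioned could look (just a sketch; I’m assuming the script sits on the main camera and mrtCamera is its child, rather than everything being configured in the Editor):

    void Awake() {
        // copy projection, clear flags, culling mask etc. from the main camera first...
        mrtCamera.CopyFrom(Camera.main);
        // ...then make the main camera render nothing; the Screen Space - Overlay Canvas still shows up
        Camera.main.cullingMask = 0;
    }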

A few more things:

As I mentioned, my main camera renders nothing at all, not even the UI (its culling mask is 0, or “Nothing”). This is the trick I learned from my adventures: if a Canvas is set to its default render mode “Screen Space - Overlay”, it gets rendered regardless of what you specify in the camera’s culling mask. And, ironically, this is the very reason I started this thread :smile:

Also, if you have a Physics Raycaster on your main camera (I happened to have one), just move it to the additional camera that actually does the rendering. Apparently, the culling mask is also applied to raycasters (and maybe to other components as well, for optimization).
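If you’d rather do that from code than in the Inspector, something like this should work (again, just a sketch; PhysicsRaycaster lives in UnityEngine.EventSystems):

    // e.g. somewhere during setup, after the cameras are configured
    var mainRaycaster = Camera.main.GetComponent<PhysicsRaycaster>();
    if (mainRaycaster != null) {
        Destroy(mainRaycaster);
        mrtCamera.gameObject.AddComponent<PhysicsRaycaster>();
    }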
