Either there is a fundamental flaw or I am not understanding camera stacking

Recently I have been wondering how to make upscaling work on a camera stack without affecting the UI camera.

As before, we have a BaseCamera, a GameCamera (overlay) and a UI Camera (overlay).

We have recently noticed that a similar issue also happens with AA: we want AA for the game camera but not for the UI camera.

I find it strange I cannot achieve this, so I must be getting something wrong.
A solution could be to move away from camera stacking, but that only works because the camera can override the AA settings. Unfortunately, a similar per-camera override doesn't exist for upscaling, though that could make sense as they are different algorithms.
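For reference, this is the per-camera AA override I mean; a minimal sketch using URP's additional camera data:

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class PerCameraAntialiasing : MonoBehaviour
{
    void Start()
    {
        // Enable FXAA only on this (game) camera; the UI camera would keep
        // AntialiasingMode.None
        var cameraData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
        cameraData.antialiasing = AntialiasingMode.FastApproximateAntialiasing;
    }
}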

@ManueleB

Hey!

I am not sure about your project details, but in general camera stacking in its current state is not the ideal solution for anything in image space (post FX, UI, etc.); the common usage is to stack layers of scene geometry.
Please also note that each overlay camera adds a significant cost on tile based/mobile GPUs, so it might be a bad optimization choice depending on your target hardware.

We are planning an overhaul of the system after 23.2, but that is still at a very early stage with no defined roadmap. For now, if possible, you should look into alternative approaches for UI rendering, like screen-space overlay UI or render features.
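For example, a screen-space overlay canvas bypasses cameras entirely, so pipeline upscaling/AA never touches it (minimal sketch):

using UnityEngine;

public class OverlayUISetup : MonoBehaviour
{
    void Start()
    {
        // Screen Space - Overlay canvases are drawn directly to the backbuffer,
        // after all cameras, so pipeline upscaling/AA never affects them
        GetComponent<Canvas>().renderMode = RenderMode.ScreenSpaceOverlay;
    }
}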


OK @ManueleB, I moved away from camera stacking, but unfortunately the URP Asset is still affecting the final RT instead of the rendering linked to each camera.

For this reason I still don't understand how to enable the upscaling filter. What's the point of it if it affects the UI?

I also have no clue what allowing dynamic resolution means in the camera's output settings (or whether it has any meaning at all).

In general, I am still VERY confused by the camera system in URP.

I can have two base cameras, but I cannot stack them. If so, why is there a priority system at all?
I cannot use Uninitialized because it gives unexpected results, and I cannot use solid colour because it will cover the previous camera. I really don't understand why the second camera doesn't just use what is left by the first camera.

Is the use case of Uninitialized to use something that copies the RT from one camera to another?
On top of this, even if the two cameras are independent, something like upscaling will affect both cameras, and I have no way to control the upscaling for each.

The way dynamic resolution works in the SRPs in general is this:

  1. Is dynamic resolution globally enabled on the pipeline?
  2. Is dynamic resolution enabled on this camera?

For any dynamic resolution to happen for a given camera, both of these must be on.
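A minimal sketch of the two switches, assuming the built-in ScalableBufferManager path (which, as noted below, URP does not fully support yet):

using UnityEngine;

public class EnableDynamicResolution : MonoBehaviour
{
    void Start()
    {
        // Switch 2: opt this camera into dynamic resolution
        // (switch 1 is the pipeline/project-level "Allow Dynamic Resolution" setting)
        GetComponent<Camera>().allowDynamicResolution = true;

        // With both switches on, the internal buffers can be scaled at runtime
        ScalableBufferManager.ResizeBuffers(0.75f, 0.75f);
    }
}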

That being said, we are working on fixing dynamic resolution with the help of RTHandle dynamic scaling. This will bring it in line with HDRP’s dynamic resolution.


Would you be so kind as to show me how to enable FidelityFX upscaling on a render target programmatically?

I am exploring this unstacked solution I have implemented through a render feature. The UI camera is initialised with the RT of the main camera before rendering the UI.

It would be nice to render the game camera with a different RT resolution now, but it seems there is no easy way to upscale it using FidelityFX.

So this is my workaround to get unstacked cameras working. I just put this feature on the UI camera, which stays on top:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CopyColorFeature : ScriptableRendererFeature
{
    // Inner pass that copies the source camera's RT into this camera's color target
    class CopyColorPass : ScriptableRenderPass
    {
        public Material scalingSetup;
        // The camera whose target texture will be copied into this camera's color target
        public Camera sourceCamera { get; set; }
      
        public void Cleanup() { }
      
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // Get a command buffer for the copy
            CommandBuffer cmd = CommandBufferPool.Get("CopyColorPass");

            RenderTargetIdentifier dest = renderingData.cameraData.renderer.cameraColorTarget;

            // Theoretically I could enable FidelityFX upscaling here, but the
            // process is too complicated and I couldn't figure it out:
//            var fsrInputSize = new Vector2(sourceCamera.pixelWidth, sourceCamera.pixelHeight);
//            var fsrOutputSize = new Vector2(renderingData.cameraData.camera.pixelWidth, renderingData.cameraData.camera.pixelHeight);
//            FSRUtils.SetEasuConstants(cmd, fsrInputSize, fsrInputSize, fsrOutputSize);
//            Blit(cmd, sourceCamera.targetTexture, dest, scalingSetup);

            // Copy the source camera's RT into this camera's color target
            Blit(cmd, sourceCamera.targetTexture, dest);
          
            // Execute the command buffer
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    // Render pass event and material exposed in the inspector
    [SerializeField] RenderPassEvent renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
    [SerializeField] Material material;

    // Create a copy color pass
    CopyColorPass m_CopyColorPass;

    public override void Create()
    {
        // Initialize the copy color pass
        m_CopyColorPass = new CopyColorPass();
        m_CopyColorPass.renderPassEvent = renderPassEvent;
        m_CopyColorPass.scalingSetup = material;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        var sourceCamera = Camera.main;

        if (sourceCamera == null) return;
          
        // (Re)create the source camera's RT when it is missing or the screen size has changed
        if (m_CopyColorPass.sourceCamera == null || m_CopyColorPass.sourceCamera.pixelWidth != Screen.width
         || m_CopyColorPass.sourceCamera.pixelHeight != Screen.height)
        {
            m_CopyColorPass.sourceCamera = sourceCamera; // change this to the camera you want to copy from
            if (sourceCamera.targetTexture != null)
            {
                // Release the old RT before replacing it (calling DiscardContents
                // after Release would be pointless, as the GPU resource is already freed)
                sourceCamera.targetTexture.Release();
            }

            var renderTexture = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32, 1);
            renderTexture.Create();
            sourceCamera.targetTexture = renderTexture;
        }

        // Add the copy color pass to the renderer
        renderer.EnqueuePass(m_CopyColorPass);
    }
}

But I wonder if I can also enable RT upscaling during the blit.
Performance-wise, there is no difference between this trick and using overlay cameras.

Disclaimer: I am not the best person to ask about UI, so I'm just throwing some ideas out here.

Are both cameras targeting the same RT? Have you tried having one camera target an offscreen RT and then compositing in your render feature, blending the UI offscreen texture on top of the scene? That would easily allow a different resolution for the UI texture (I think you should be able to have different upscale settings per camera at that point).
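Something roughly like this (untested sketch; uiTexture and blendMaterial are placeholders, and the material would need standard alpha blending):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

class CompositeUIPass : ScriptableRenderPass
{
    public RenderTexture uiTexture;   // offscreen UI target (placeholder)
    public Material blendMaterial;    // material with Blend SrcAlpha OneMinusSrcAlpha (placeholder)

    public CompositeUIPass()
    {
        // Blend the UI after everything else has rendered
        renderPassEvent = RenderPassEvent.AfterRendering;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("CompositeUIPass");
        // Blend the UI texture over the camera color target; the material's
        // blend state keeps the scene visible where the UI alpha is zero
        Blit(cmd, uiTexture, renderingData.cameraData.renderer.cameraColorTarget, blendMaterial);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}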

I have asked some UI experts to chime in so they might be able to give better suggestions

In general, stacking is meant to be used when you want to overwrite the results of the previous camera (so it loads the previous contents).
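As a tiny sketch, stacking in code is just adding overlay cameras to a base camera's stack (assuming the cameras' Render Types are already set to Base and Overlay):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class StackSetup : MonoBehaviour
{
    public Camera baseCamera;    // Render Type = Base
    public Camera overlayCamera; // Render Type = Overlay

    void Start()
    {
        // The overlay camera loads the base camera's results and draws on top
        baseCamera.GetUniversalAdditionalCameraData().cameraStack.Add(overlayCamera);
    }
}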

Using multiple cameras is normally meant for offscreen rendering: you can assign a camera target, in which case the camera doesn't render to the main color/backbuffer. Then you can use render features or custom materials to read from the offscreen targets. Uninitialized is just an optimization that tells the camera "I know I am going to write to every single pixel, so no need to clear"; on some hardware (i.e. TBDR/mobile) this results in undefined behaviour if you don't overwrite all the pixels, since it loads "undefined" memory.
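In code, the offscreen pattern looks roughly like this (the material and the _SceneTex name are illustrative):

using UnityEngine;

public class OffscreenCameraSetup : MonoBehaviour
{
    public Camera offscreenCamera;
    public Material compositingMaterial; // placeholder material that samples _SceneTex

    void Start()
    {
        // Assigning a target texture makes the camera render offscreen
        // instead of to the main color/backbuffer
        var rt = new RenderTexture(Screen.width, Screen.height, 24);
        offscreenCamera.targetTexture = rt;

        // Other passes or materials can then read from the offscreen target
        compositingMaterial.SetTexture("_SceneTex", rt);
    }
}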

So is the priority system meant to be used with custom RTs only?

In general, it seems to me that each camera has its own RT even when one is not assigned by the user, which I find weird. Without an RT assigned, I would assume the output is the display colour buffer, shared between cameras.

The problem is not about the UI specifically; UI is just the standard use case for an overlay camera. The problem is that the URP Asset options should all be overridable per camera, since something that works for the game may not work for the UI. Right now, some options are overridable and others are not. Specifically, in my case, I don't understand how the Render Scale option fits into all of this. If it's true that each camera has its own RT even in default mode, why can't I change the Render Scale per camera?
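For what it's worth, as far as I can tell Render Scale lives on the pipeline asset rather than on the camera, which would explain why it applies to every camera at once (a sketch, assuming the public renderScale property on UniversalRenderPipelineAsset):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public static class RenderScaleHelper
{
    public static void SetGlobalRenderScale(float scale)
    {
        // renderScale is a pipeline-asset setting, so it affects all cameras
        // rendered by this asset, with no per-camera override
        if (GraphicsSettings.currentRenderPipeline is UniversalRenderPipelineAsset urpAsset)
            urpAsset.renderScale = scale;
    }
}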