How can I make character silhouette appear behind UI?

Hello there!
I’m currently working on recreating a Persona-style battle system, and right now I’m recreating Persona 3 Reload’s battle intro animation, which looks like the following:

[image: makoto-yuki-persona]

Currently I have a material and a URP Render Objects renderer feature that make the silhouette appear when the player is behind objects, which looks as follows:

And now I’m trying to see if I can get that silhouette rendered behind actual UI images/sprites, so that I can make the actual battle-intro animation: the silhouette shows through a black screen, which then pans away to reveal the actual scene in real time.

If somebody could help me figure out how to get the silhouette to appear behind UI, I’d be very grateful!

Hey.
The effect works by reading the depth buffer to find out if anything is in front. That means that the geometry obscuring the model has to be rendered before the render objects pass runs.
UI is always rendered after the camera has finished rendering, so it would never write to the depth buffer before the Render Objects pass runs. That means UI can’t really be used in this scenario.

I would recommend just putting a quad with an unlit black material in front of the camera and using that to obscure the model and trigger the blue silhouette.
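If it helps, here’s a rough sketch of that quad setup (untested; the component and field names are my own, and it assumes URP’s “Universal Render Pipeline/Unlit” shader is available in the project):

```csharp
using UnityEngine;

// Hypothetical helper: attach to the camera. Spawns a black quad just in
// front of the near plane to act as the "curtain" that triggers the silhouette.
public class BlackCurtain : MonoBehaviour
{
    public float distance = 0.5f; // how far in front of the camera to place the quad

    void Start()
    {
        GameObject curtainQuad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        curtainQuad.transform.SetParent(transform, false);
        curtainQuad.transform.localPosition = new Vector3(0f, 0f, distance);

        // Scale the quad so it covers the whole view at that distance.
        Camera cam = GetComponent<Camera>();
        float height = 2f * distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        curtainQuad.transform.localScale = new Vector3(height * cam.aspect, height, 1f);

        // URP's unlit shader; Material.color maps to the shader's main color.
        var mat = new Material(Shader.Find("Universal Render Pipeline/Unlit"));
        mat.color = Color.black;
        curtainQuad.GetComponent<Renderer>().material = mat;

        // The quad primitive comes with a collider we don't need.
        Destroy(curtainQuad.GetComponent<Collider>());
    }
}
```

From here you can animate the quad (or swap it for your own mesh) to get the pan-away effect.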

If you need to use UI for the black sprite then you would need some other way to render the blue silhouette.

wouldn’t Screen Space – Camera and playing with the Canvas’ sorting order (forgot what it’s called exactly) achieve this?
and/or just adding a material to that Image/RawImage, and then it’s just a matter of playing with the material’s params or (more probably) finding/creating a shader for it

another option, which I really like in terms of its applicability, is rendering a camera to a texture – you can use this technique in so many contexts

interesting, I wonder if this would actually work, would you happen to know where to begin?

you can learn about render texture here: https://www.youtube.com/watch?v=tRTbPGalJXk
and then, once you have that 2nd camera rendering to a RawImage on your UI, you’ll need a shader that basically says “anything that gets rendered should be blue”; everything else will be black if you set the Background to black in the camera’s Inspector.

I haven’t used URP that much, so I can’t fully evaluate this shader (generated with GPT):

Shader "Custom/URPSilhouette"
{
    Properties
    {
        _Color ("Color Tint", Color) = (0, 0, 1, 1) // Default to blue
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" "Queue"="Geometry" }
        Pass
        {
            Name "UniversalForward"
            Tags { "LightMode"="UniversalForward" }
            
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionOS : POSITION;
            };

            struct Varyings
            {
                float4 positionHCS : SV_POSITION;
            };

            // URP expects material properties in the UnityPerMaterial cbuffer
            // so the SRP Batcher can work.
            CBUFFER_START(UnityPerMaterial)
                float4 _Color;
            CBUFFER_END

            Varyings vert(Attributes input)
            {
                Varyings output;
                // TransformObjectToHClip takes a float3, so pass .xyz
                output.positionHCS = TransformObjectToHClip(input.positionOS.xyz);
                return output;
            }

            half4 frag(Varyings input) : SV_Target
            {
                return _Color; // Render all objects with the specified color
            }
            ENDHLSL
        }
    }
    FallBack "Hidden/InternalErrorShader"
}

create a URPSilhouette.shader file with the above, make a material from “Custom/URPSilhouette”, and assign it to the objects the 2nd camera renders (the RawImage itself just displays the render texture). to prevent alignment issues, just parent this second camera to the main camera and copy all of its settings (before changing them for render-texture purposes)
also, the 2nd camera’s culling mask should only target the layer your subjects are in, OR you can hack it by setting a low enough far plane (another setting in the 2nd camera’s Inspector), which will only render objects within a certain distance of that camera, if you don’t want to mess with layers.
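putting those steps together, a one-time setup script might look something like this (untested sketch; the class and field names are mine, and it assumes the 2nd camera and the RawImage already exist in the scene):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical setup helper: points the silhouette camera at a RenderTexture
// and displays that texture on the UI RawImage.
public class SilhouetteSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera silhouetteCamera;   // parented to mainCamera
    public RawImage rawImage;         // the "curtain" on the UI canvas
    public LayerMask silhouetteLayer; // the layer your subjects are on

    void Start()
    {
        silhouetteCamera.CopyFrom(mainCamera);           // match FOV, clip planes, etc.
        silhouetteCamera.cullingMask = silhouetteLayer;  // only render the subjects
        silhouetteCamera.clearFlags = CameraClearFlags.SolidColor;
        silhouetteCamera.backgroundColor = Color.black;  // everything else is black

        var rt = new RenderTexture(Screen.width, Screen.height, 24);
        silhouetteCamera.targetTexture = rt;
        rawImage.texture = rt;
    }
}
```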

the silhouettes will move up while you’re lifting the curtain.
they’re effectively “printed” on the curtain, which you can mitigate by scrolling the rendering in the opposite direction, for example by manipulating the RawImage’s uvRect in code, in sync with lifting its transform:

using UnityEngine;
using UnityEngine.UI;

public class RawImageUVFill : MonoBehaviour
{
    public RawImage rawImage; // Assign your RawImage in the Inspector
    public float fillSpeed = 0.5f; // Speed of the fill effect

    private float uvY; // Current UV offset

    void Start()
    {
        uvY = 1.0f; // Start fully filled
    }

    void Update()
    {
        // Reduce the UV Y offset to simulate the fill effect
        uvY = Mathf.Clamp01(uvY - fillSpeed * Time.deltaTime);

        // Set the UV rect with the new Y offset
        rawImage.uvRect = new Rect(0, 1 - uvY, 1, uvY); // Adjust height and offset
    }
}

if for some reason the above is too complex or you get stuck, I suggest forcing an LLM (I didn’t have enough time to explore this) to give you a simple shader you’ll just add to your RawImage: set the Canvas render mode to Screen Space – Camera, and the shader would basically be a simple transparent one (meaning in the transparent queue) saying “if any object is within x meters of me [the RawImage], render blue; otherwise, render black”. this should be ShaderLab compatible with URP. one pro here is simplicity; the other is that the silhouettes will naturally stay overlapped with the objects even when you move the curtain
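for reference, a rough sketch of that idea might look like the shader below (untested; the shader name, property names, and the 10 m default are mine; it needs Depth Texture enabled on the URP asset so `SampleSceneDepth` has data to read, and assumes a perspective camera):

```hlsl
Shader "Custom/DepthCurtain"
{
    Properties
    {
        _MaxDistance ("Silhouette Distance (m)", Float) = 10
        _SilhouetteColor ("Silhouette Color", Color) = (0, 0, 1, 1)
        _BackgroundColor ("Background Color", Color) = (0, 0, 0, 1)
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Transparent" "RenderPipeline"="UniversalPipeline" }
        Pass
        {
            ZTest Always
            ZWrite Off

            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

            CBUFFER_START(UnityPerMaterial)
                float _MaxDistance;
                float4 _SilhouetteColor;
                float4 _BackgroundColor;
            CBUFFER_END

            struct Attributes { float4 positionOS : POSITION; };
            struct Varyings  { float4 positionHCS : SV_POSITION; };

            Varyings vert(Attributes input)
            {
                Varyings output;
                output.positionHCS = TransformObjectToHClip(input.positionOS.xyz);
                return output;
            }

            half4 frag(Varyings input) : SV_Target
            {
                // Screen UV from the pixel position, then the scene depth behind this pixel.
                float2 uv = input.positionHCS.xy / _ScaledScreenParams.xy;
                float rawDepth = SampleSceneDepth(uv);
                float eyeDepth = LinearEyeDepth(rawDepth, _ZBufferParams);

                // Anything closer than _MaxDistance becomes the silhouette color.
                return eyeDepth < _MaxDistance ? _SilhouetteColor : _BackgroundColor;
            }
            ENDHLSL
        }
    }
}
```

treat it as a starting point only – UI shaders sometimes need extra Canvas plumbing (stencil properties, vertex color) depending on your setup.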