You can learn about render textures here: https://www.youtube.com/watch?v=tRTbPGalJXk
Then, once you have that second camera rendering to a RawImage on your UI, you'll need a shader that basically says "anything that gets rendered should be blue"; everything else will be black if you set the Background to black in the camera's Inspector.
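The wiring for that step can be sketched like this (a minimal sketch; the names `silhouetteCamera` and `targetImage` are my placeholders, not anything from the video):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: routes the 2nd camera's output to a RawImage
// through a RenderTexture created at runtime.
public class SilhouetteFeed : MonoBehaviour
{
    public Camera silhouetteCamera; // the 2nd camera
    public RawImage targetImage;    // the RawImage on your UI

    void Start()
    {
        // Create a render texture matching the screen size (16-bit depth buffer).
        var rt = new RenderTexture(Screen.width, Screen.height, 16);
        silhouetteCamera.targetTexture = rt;
        targetImage.texture = rt;
    }
}
```

You can also create the RenderTexture as an asset in the editor and assign it in both places by hand; the code route just keeps it sized to the screen.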
I haven't used URP that much, so I can't vouch for this shader (generated with GPT):
Shader "Custom/URPSilhouette"
{
    Properties
    {
        _Color ("Color Tint", Color) = (0, 0, 1, 1) // Default to blue
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" "Queue"="Geometry" }
        Pass
        {
            Name "UniversalForward"
            Tags { "LightMode"="UniversalForward" }
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionOS : POSITION;
            };

            struct Varyings
            {
                float4 positionHCS : SV_POSITION;
            };

            float4 _Color;

            Varyings vert(Attributes input)
            {
                Varyings output;
                // TransformObjectToHClip expects a float3, so pass .xyz
                output.positionHCS = TransformObjectToHClip(input.positionOS.xyz);
                return output;
            }

            half4 frag(Varyings input) : SV_Target
            {
                return _Color; // Render every fragment with the flat tint color
            }
            ENDHLSL
        }
    }
    FallBack "Hidden/InternalErrorShader"
}
Create a URPSilhouette.shader file, then a material using this "Custom/URPSilhouette" shader, and assign that material to every object that should show up as a silhouette. To prevent alignment issues, just parent this second camera to the main camera and copy all of its settings (before changing them for render-texture purposes).
Also, the second camera's culling mask should only target the layer your subjects are on. Alternatively, if you don't want to mess with layers, you can hack it by setting a low enough far plane (another setting in the second camera's Inspector), which will only render objects within a certain distance of that camera.
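Those two paragraphs can be done in code as well. A minimal sketch, assuming a layer called "Subjects" exists in your project (the layer name and the far-plane value are my assumptions):

```csharp
using UnityEngine;

// Sketch of the second-camera setup: parent it to the main camera,
// mirror its settings, then override the render-texture-specific ones.
public class SilhouetteCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera silhouetteCamera;

    void Start()
    {
        // Parenting with worldPositionStays = false keeps both cameras aligned.
        silhouetteCamera.transform.SetParent(mainCamera.transform, false);

        // Copy FOV, clipping planes, culling mask, etc. from the main camera.
        silhouetteCamera.CopyFrom(mainCamera);

        // Now override for silhouette rendering.
        silhouetteCamera.clearFlags = CameraClearFlags.SolidColor;
        silhouetteCamera.backgroundColor = Color.black;
        silhouetteCamera.cullingMask = 1 << LayerMask.NameToLayer("Subjects");

        // Or, the far-plane hack instead of layers (assumed value):
        // silhouetteCamera.farClipPlane = 10f;
    }
}
```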
One caveat: the silhouettes will move up while you're lifting the curtain, because they're effectively "printed" on the curtain. You can mitigate this by scrolling the rendered image in the opposite direction, for example by manipulating the RawImage's uvRect in code, in sync with lifting its transform:
using UnityEngine;
using UnityEngine.UI;

public class RawImageUVFill : MonoBehaviour
{
    public RawImage rawImage;      // Assign your RawImage in the Inspector
    public float fillSpeed = 0.5f; // Speed of the fill effect

    private float uvY; // Current visible UV height

    void Start()
    {
        uvY = 1.0f; // Start fully filled
    }

    void Update()
    {
        // Shrink the visible UV height to simulate the fill effect
        uvY = Mathf.Clamp01(uvY - fillSpeed * Time.deltaTime);

        // Keep the top of the texture anchored while the height shrinks
        rawImage.uvRect = new Rect(0, 1 - uvY, 1, uvY);
    }
}
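If you'd rather not keep a separate speed in sync by hand, a variant is to derive the UV height directly from how far the curtain (the RawImage's RectTransform) has actually been lifted, so the two can never drift apart. A sketch under that assumption; `liftHeight` is a made-up parameter for the total travel distance:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Alternative: compute the uvRect from the RawImage's own anchored position
// instead of animating it on a timer.
public class RawImageUVSync : MonoBehaviour
{
    public RawImage rawImage;
    public float liftHeight = 500f; // total lift distance in UI units (assumed)

    private float startY;

    void Start()
    {
        startY = rawImage.rectTransform.anchoredPosition.y;
    }

    void Update()
    {
        // 0 = not lifted at all, 1 = fully lifted
        float lifted = Mathf.Clamp01(
            (rawImage.rectTransform.anchoredPosition.y - startY) / liftHeight);

        float uvY = 1f - lifted;
        rawImage.uvRect = new Rect(0, 1 - uvY, 1, uvY);
    }
}
```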
If for some reason the above is too complex or you get stuck, I suggest pushing an LLM (I didn't have enough time to explore this) to give you a simple shader you'd just add to your RawImage: set the Canvas render mode to Screen Space - Camera, and the shader would basically be a simple transparent one (meaning in the transparent queue) saying "if any object is within x meters of me [the RawImage], render blue; otherwise, render black". It should be written in ShaderLab and be compatible with URP. One pro here is simplicity; another is that the silhouettes will naturally stay overlapped with the objects even when you move the curtain.