Render depth pass of the scene from a different view than the main camera

As the title states, I need to render a depth pass of the scene from a different view than my main camera, and I need to access the resulting texture from a script. The texture will be used to create a hologram projection effect, similar to what a typical Kinect depth-camera point cloud looks like.

I’m new to URP, but I know how I would do it in BiRP. I would first create a new camera and disable it. Then I would create a script that sets the camera’s targetTexture to a render texture and calls Camera.RenderWithShader() with a replacement shader that only draws depth. I would then immediately submit the point cloud for rendering, either using Graphics.DrawProcedural or by dispatching a compute shader that writes to a mesh.
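For reference, the BiRP version I have in mind looks roughly like this. It is only a sketch: the replacement shader, the point-cloud material, the shader property name, and the resolution are all placeholders.

```csharp
using UnityEngine;

public class BiRPDepthCapture : MonoBehaviour
{
    public Camera depthCamera;        // second camera, disabled in the Inspector
    public Shader depthReplacement;   // replacement shader that outputs depth
    public Material pointCloudMaterial;
    public int width = 512, height = 424;

    RenderTexture _depthRT;

    void LateUpdate()
    {
        if (_depthRT == null)
            _depthRT = new RenderTexture(width, height, 24, RenderTextureFormat.RFloat);

        // Render only depth from the second camera's viewpoint.
        depthCamera.targetTexture = _depthRT;
        depthCamera.RenderWithShader(depthReplacement, "RenderType");

        // Feed the depth texture to the point-cloud material and draw procedurally,
        // one point per depth texel.
        pointCloudMaterial.SetTexture("_DepthTex", _depthRT);
        Graphics.DrawProcedural(pointCloudMaterial,
            new Bounds(Vector3.zero, Vector3.one * 10f),
            MeshTopology.Points, width * height);
    }
}
```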

How much setup does this require in URP?

After reading the docs and looking for examples, my intuition says that I need to define a new ScriptableRendererFeature, similar to this one, except that example uses obsolete APIs. This would mean ALL cameras call AddRenderPasses for this feature, and I would somehow need to ensure that only the relevant camera enqueues the pass. As I understand it, that camera would still render the scene first, with lighting and all, even though I don’t need it. And how would I access the texture created by the renderer feature? Preferably, I would like to use the depth texture (to draw the point cloud) in the same frame it is rendered. Is this really the correct approach?

Use a separate camera with its own UniversalRendererData asset that has a render feature for DepthOnly.
In the main settings of that RendererData, set the layer masks so the regular passes render nothing.

Thanks for the hints!

It seems the resulting texture of the camera is still applied to Display 1 even though the layer mask is zero. I can see it when I switch the Environment Background Type to anything other than Uninitialized. Doesn’t this mean the camera is still iterating the scene, culling lights and all that?

The camera does fully normal rendering, just as it does in BiRP,
so if you don’t want it to output to the default render target, set it to render to a temporary render texture.
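A minimal sketch of that advice, assuming the camera is otherwise configured in the Inspector; the resolution and format here are arbitrary.

```csharp
using UnityEngine;

public class DepthCameraTarget : MonoBehaviour
{
    public Camera depthCamera;
    RenderTexture _rt;

    void OnEnable()
    {
        // Give the depth camera its own render texture so its
        // output never reaches Display 1.
        _rt = new RenderTexture(1024, 1024, 24, RenderTextureFormat.RFloat);
        depthCamera.targetTexture = _rt;
    }

    void OnDisable()
    {
        depthCamera.targetTexture = null;
        _rt.Release();
    }
}
```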

Another option is:
Add a render feature to your main camera.
Set it up to draw before opaques, with a custom projection, filtering options (a separate layer, for example), and a render target. That way your render feature will use the culling results of your main camera but render only depth, from another viewpoint, into a separate RenderTexture. This way you pay the cost of camera culling only once.
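Roughly, such a feature could look like the sketch below. It is written against the pre-RenderGraph ScriptableRenderPass API and is only an outline: the exposed fields, the fixed 60° projection, and the matrix handling (Unity view space looks down −Z, and the projection should normally go through GL.GetGPUProjectionMatrix) are all assumptions to adapt.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class OffViewDepthFeature : ScriptableRendererFeature
{
    public RenderTexture target;     // depth RT you read back from a script
    public LayerMask layerMask = ~0; // e.g. a separate layer for the captured geometry
    public Vector3 viewPosition;     // pose of the virtual depth camera
    public Vector3 viewEuler;

    class OffViewDepthPass : ScriptableRenderPass
    {
        static readonly ShaderTagId DepthOnlyTag = new ShaderTagId("DepthOnly");
        readonly OffViewDepthFeature _feature;
        FilteringSettings _filtering;

        public OffViewDepthPass(OffViewDepthFeature feature)
        {
            _feature = feature;
            renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
            _filtering = new FilteringSettings(RenderQueueRange.opaque, feature.layerMask);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("OffViewDepth");

            // Bind our own target and a custom view/projection.
            cmd.SetRenderTarget(_feature.target);
            cmd.ClearRenderTarget(true, true, Color.clear);
            var world = Matrix4x4.TRS(_feature.viewPosition,
                Quaternion.Euler(_feature.viewEuler), Vector3.one);
            // Unity view space looks down -Z, hence the scale.
            var view = Matrix4x4.Scale(new Vector3(1, 1, -1)) * world.inverse;
            var proj = Matrix4x4.Perspective(60f, 1f, 0.1f, 100f);
            cmd.SetViewProjectionMatrices(view, proj);
            context.ExecuteCommandBuffer(cmd);
            cmd.Clear();

            // Draw everything that has a DepthOnly pass, reusing the
            // main camera's culling results.
            var drawing = CreateDrawingSettings(DepthOnlyTag, ref renderingData,
                SortingCriteria.CommonOpaque);
            context.DrawRenderers(renderingData.cullResults, ref drawing, ref _filtering);

            // Restore the main camera's matrices for the passes that follow.
            var cam = renderingData.cameraData.camera;
            cmd.SetViewProjectionMatrices(cam.worldToCameraMatrix, cam.projectionMatrix);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    OffViewDepthPass _pass;

    public override void Create() => _pass = new OffViewDepthPass(this);

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        // Only enqueue for the one camera that should pay for this.
        if (target != null && renderingData.cameraData.cameraType == CameraType.Game)
            renderer.EnqueuePass(_pass);
    }
}
```

Because the feature holds a public RenderTexture, a script can read the depth result from the same asset reference in the same frame, which also answers the "how do I access the texture" part of the original question.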

You can see how to do all of this in the built-in RenderObjects feature.
