As the title states, I need to render a depth pass of the scene from a different view than my main camera, and I need to access the resulting texture from a script. The texture will be used to create a hologram projection effect, similar to what a typical Kinect depth camera point cloud looks like.
I’m new to URP, but I know how I would do it in BiRP. I would first create a new camera and disable it. Then I would create a script that sets the camera’s targetTexture to a render texture and calls _camera.RenderWithShader() with a replacement shader that only draws depth. I would then immediately submit the point cloud for rendering, either with Graphics.DrawProcedural or by dispatching a compute shader that writes to a mesh.
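For reference, a rough sketch of the BiRP version I have in mind (names like DepthCapture, _depthReplacementShader, _pointCloudMaterial and the 512×512 size are just placeholders, not from a working project):

```csharp
using UnityEngine;

// BiRP sketch: render depth from a disabled camera into a RenderTexture,
// then draw the point cloud procedurally in the same frame.
public class DepthCapture : MonoBehaviour
{
    [SerializeField] Camera _depthCamera;            // disabled in the inspector
    [SerializeField] Shader _depthReplacementShader; // writes linear depth to the target
    [SerializeField] Material _pointCloudMaterial;   // samples _DepthTex in the vertex stage

    RenderTexture _depthRT;

    void Start()
    {
        _depthRT = new RenderTexture(512, 512, 24, RenderTextureFormat.RFloat);
        _depthCamera.targetTexture = _depthRT;
        _depthCamera.enabled = false; // rendered manually below
    }

    void Update()
    {
        // Render only depth via the replacement shader ("" = replace all shaders).
        _depthCamera.RenderWithShader(_depthReplacementShader, "");

        // Draw one point per depth texel, displaced in the vertex shader.
        _pointCloudMaterial.SetTexture("_DepthTex", _depthRT);
        Graphics.DrawProcedural(
            _pointCloudMaterial,
            new Bounds(Vector3.zero, Vector3.one * 10f),
            MeshTopology.Points,
            _depthRT.width * _depthRT.height);
    }
}
```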
How much setup does this require in URP?
After reading the docs and looking for examples, my intuition says that I need to define a new ScriptableRendererFeature, similar to this one, except that it uses obsolete APIs. This would mean ALL cameras call AddRenderPasses for this feature, and I would somehow need to ensure that only the relevant camera actually enqueues the pass. As I understand it, that camera would still render the full scene first, lighting and all, even though I only need depth. And how would I access the texture created by the scriptable render feature? Preferably I would like to consume the depth texture (to draw the point cloud) in the same frame it is rendered. Is this really the correct approach?
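To make the question concrete, this is roughly what I imagine the feature would look like (a sketch only, against the pre-RenderGraph Execute-style API; the "DepthCamera" tag and the blit target are my own placeholders, and I'm not sure this is the right non-obsolete way to do it):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// URP sketch (pre-RenderGraph, Execute-based API): copy the camera depth
// texture into an externally visible RenderTexture, but only for one camera.
public class DepthCaptureFeature : ScriptableRendererFeature
{
    class DepthCapturePass : ScriptableRenderPass
    {
        public RenderTexture target; // read by the point cloud script afterwards

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("CaptureDepth");
            // _CameraDepthTexture exists once "Depth Texture" is enabled on the
            // URP asset/camera and the depth prepass (or depth copy) has run.
            cmd.Blit(new RenderTargetIdentifier("_CameraDepthTexture"), target);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public RenderTexture target;
    DepthCapturePass _pass;

    public override void Create()
    {
        _pass = new DepthCapturePass
        {
            renderPassEvent = RenderPassEvent.AfterRenderingOpaques
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        // AddRenderPasses runs for every camera; filter here so only the
        // dedicated depth camera actually enqueues the pass.
        if (renderingData.cameraData.camera.CompareTag("DepthCamera"))
        {
            _pass.target = target;
            renderer.EnqueuePass(_pass);
        }
    }
}
```

Even if something like this works, my questions above still stand: how do I stop that camera from doing the full lit render, and how do I hand the texture back to my point cloud script within the same frame?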