So I’m trying to create a dataset of depth maps for a 3D model. My approach is to set up a scene and a camera, then capture both a PNG image and a CSV depth map using DepthTextureMode.Depth. But all I’m getting is a regular image: when I open the CSV in Python, I can still see the shadows and the texture patterns. Can someone help me do this?
In URP (should work similarly in HDRP), I’ve successfully rendered scene depth to a depth-only Render Texture like this:
Create a second camera, a new Universal Renderer Data asset, and a Render Texture
Set up the Render Texture as in the screenshot below (left side; adjust the size as needed)
Add the renderer asset to the active render pipeline asset’s renderer list
In the camera component, set the Render Texture as “Output Texture” in the Output section and select the renderer asset as “Renderer” in the Rendering section
Add a “Render Objects” Renderer Feature to the renderer asset setup like this (right side):
I keep the camera component disabled and use the following code to render on demand:
// Render the (disabled) depth camera on demand into the depth-only Render Texture
var renderRequest = new UniversalRenderPipeline.SingleCameraRequest();
if (RenderPipeline.SupportsRenderRequest(depthCamera, renderRequest))
{
    renderRequest.destination = renderTexture;
    RenderPipeline.SubmitRenderRequest(depthCamera, renderRequest);
}
That leaves you with the depth-only Render Texture containing the depth values of all opaque objects. I assume you already have some way to write that info out as CSV.
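If you don’t yet have the CSV export step, here is a minimal sketch of one way to do it: read the Render Texture back to the CPU with `Texture2D.ReadPixels` and write one CSV row per pixel row. `DepthCsvWriter` is a hypothetical helper name, and it assumes your depth Render Texture uses a single-channel float color format (e.g. `R32_SFloat`); adjust the `TextureFormat` if yours differs.

```csharp
using System.IO;
using System.Text;
using UnityEngine;

public static class DepthCsvWriter
{
    // Reads a single-channel float RenderTexture back to the CPU and writes
    // its values as CSV (one line per pixel row, top row first).
    public static void Write(RenderTexture rt, string path)
    {
        var prev = RenderTexture.active;
        RenderTexture.active = rt;

        // RFloat matches a single-channel float RT; change if your RT format differs
        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RFloat, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();

        RenderTexture.active = prev;

        var pixels = tex.GetPixels(); // depth value is in the r channel
        var sb = new StringBuilder();
        for (int y = rt.height - 1; y >= 0; y--) // flip so row 0 is the top of the image
        {
            for (int x = 0; x < rt.width; x++)
            {
                sb.Append(pixels[y * rt.width + x].r);
                sb.Append(x == rt.width - 1 ? '\n' : ',');
            }
        }
        File.WriteAllText(path, sb.ToString());
        Object.Destroy(tex);
    }
}
```

Note that `ReadPixels` is a synchronous GPU readback and will stall; if you are capturing many frames for a dataset, `AsyncGPUReadback.Request` is worth looking into instead.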