Accessing the depth RenderTexture in HDRP and passing it to a compute shader

In the built-in render pipeline, we used to obtain the depth RenderTexture from a camera and pass it to a compute shader for later processing as follows:

currentRT = RenderTexture.GetTemporary(rows, columns, 32, RenderTextureFormat.Depth);
cam.targetTexture = currentRT;
cam.Render();
RenderTexture.active = currentRT;

computeCloud.computeShader.SetTexture(computeCloud.mComputeShaderKernelID, "_DepthTexture", currentRT);

Or we passed the main camera's rendered depth directly, using the following line of code:

computeCloud.computeShader.SetTextureFromGlobal(computeCloud.mComputeShaderKernelID, "_DepthTexture", "_CameraDepthTexture");
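(For context on the line above: in the built-in pipeline, the `_CameraDepthTexture` global is only populated when the camera is asked to generate a depth texture, so the snippet implicitly assumes a setup step like the sketch below, with `cam` being the camera from the first snippet. This does not carry over to HDRP.)

```csharp
// Built-in render pipeline only: request a depth texture from the camera,
// which populates the _CameraDepthTexture global referenced above.
cam.depthTextureMode |= DepthTextureMode.Depth;
```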

Neither of these approaches works in the HD Render Pipeline: the resulting depth map is all black. I guess some customization is needed to get the depth from a camera in HDRP.
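(For anyone landing here later: in more recent HDRP versions, where the Custom Pass API exists, the camera depth buffer is exposed directly to a pass, which sidesteps the manual `cam.Render()` dance entirely. The sketch below is a minimal, unverified example of that idea; the `_DepthTexture` name and the `computeShader`/`kernel` fields mirror the snippets above, and the dispatch sizes are placeholders.)

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Minimal sketch, assuming an HDRP version that ships the CustomPass API.
// The compute shader and kernel index are assumed to be assigned elsewhere.
class DepthToComputePass : CustomPass
{
    public ComputeShader computeShader;
    public int kernel;

    protected override void Execute(CustomPassContext ctx)
    {
        // ctx.cameraDepthBuffer is the camera's depth as an RTHandle;
        // bind it to the compute shader instead of rendering manually.
        ctx.cmd.SetComputeTextureParam(computeShader, kernel, "_DepthTexture", ctx.cameraDepthBuffer);
        // Placeholder thread-group counts; size these to your output.
        ctx.cmd.DispatchCompute(computeShader, kernel, 8, 8, 1);
    }
}
```

The pass would then be added to a Custom Pass Volume in the scene, injected at a point where depth has already been rendered (e.g. after depth and normals).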

I know the HD Render Pipeline is still early, but can anyone help with this?

I’m having trouble getting a RenderTexture to work at all with the HD Render Pipeline. Were you able to figure this issue out?

I opened an issue on the SRP GitHub page and got a response that some customization is needed in the Render() function in the HDRP source, but all issues have since been removed from there. My problem was that I could not get the depth RenderTexture from the second camera in the scene.

It was also mentioned that RenderTexture.GetTemporary was removed in HDRP.

@SebLagarde I still couldn’t get my head around customizing the Render() function to include depth in the camera’s RenderTexture. Your help and a real code example would be much appreciated.

I know I’m reopening an older thread, but I also haven’t figured out this problem, although RenderTexture.GetTemporary is working in HDRP for me.