How can I write the camera depth into a RenderTexture when OnRenderImage is not getting called?

This is how you should currently do it (minus the post-processing and intermediate-buffer part, though).

Off the top of my head, here are the steps you’d have to take:

SRP and Depth in a custom Shader (similar to the non-SRP workflow):

  1. Enable the depth texture on your camera, or in the pipeline settings asset if there is one (for Lightweight in particular)
  2. Set the “Queue” Tag to “Transparent”, so your object draws after the opaque pass, once the depth texture has already been filled in
  3. Add sampler2D _CameraDepthTexture to your shader
  4. Sample the camera’s depth texture and linearize the result with the LinearEyeDepth function that Desoxi used above
  5. Use the linearized depth value to do your depth-based coloring (the sketch right after this list puts these steps together)
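
To make the steps concrete, here’s a minimal sketch of what that shader could look like. It’s written against the built-in CG includes (which also work under Lightweight at the time of writing, though SRP include files and macros can differ between package versions), and the shader name, the 20-unit fade distance, and the coloring itself are all placeholder choices:

```shaderlab
Shader "Hypothetical/DepthTint"
{
    SubShader
    {
        // Step 2: draw after opaques so _CameraDepthTexture is already populated
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Step 3: the depth texture the pipeline fills in for us
            sampler2D _CameraDepthTexture;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 screenPos : TEXCOORD0;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Screen-space coordinates for sampling the depth texture
                o.screenPos = ComputeScreenPos(o.pos);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // Step 4: sample the stored depth and linearize it into eye-space units
                float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture,
                                                           UNITY_PROJ_COORD(i.screenPos));
                float eyeDepth = LinearEyeDepth(rawDepth);

                // Step 5: placeholder depth-based coloring -- fade from white to
                // black over the first 20 eye-space units (20 is an arbitrary pick)
                float t = saturate(eyeDepth / 20.0);
                return fixed4(lerp(fixed3(1, 1, 1), fixed3(0, 0, 0), t), 1);
            }
            ENDCG
        }
    }
}
```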

SRP and Depth in a Shader Graph:

  1. In your Shader Graph, add a Texture2D property via the Blackboard
  2. Set the reference to “_CameraDepthTexture”
  3. Set “Exposed” toggle to false
  4. Drag the property into your Shader Graph workspace
  5. Add a Sample Texture 2D Node and plug the _CameraDepthTexture node into the Texture2D input port
  6. You now need to sample the depth texture using screen-space UVs, so add a Screen Position node and plug it into the UV port of the Sample Texture 2D node. This gives you the screen position of the current mesh fragment
  7. The output of the Sample Texture 2D node is the stored non-linear depth value for the screen pixel where the current mesh fragment is going to be drawn. You’ll have to linearize this yourself until our Shader Graph Depth Node makes it into a package release; you can look at the Unity shader source to get the code for that function (it’s reproduced right after this list)
  8. Do your depth-based coloring with the linearized depth value
  9. Create a Material out of your Shader Graph
  10. Set the Render Queue to Transparent via the Material Inspector
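
For reference, here’s what those linearization helpers look like in the built-in UnityCG.cginc include. _ZBufferParams is a built-in vector Unity fills in from the camera’s near/far planes, so you can also rebuild either function out of a handful of math nodes in the graph:

```shaderlab
// Linearize a raw depth-buffer value z to a 0..1 value between the
// near and far clip planes
inline float Linear01Depth(float z)
{
    return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}

// Linearize a raw depth-buffer value z to eye-space units from the camera
inline float LinearEyeDepth(float z)
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}
```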

For both cases, you might also need to compute the depth of your own mesh fragment and compare it to the depth value stored in the depth buffer to get the comparisons you want; a sketch of that follows.
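
As a rough sketch of that comparison (reusing the v2f struct from the shader above; the 0.5-unit threshold and the highlight output are arbitrary placeholders):

```shaderlab
fixed4 frag(v2f i) : SV_Target
{
    // What's already stored in the depth buffer at this pixel, in eye-space units
    float sceneEyeDepth = LinearEyeDepth(
        SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));

    // The fragment's own eye-space depth: after ComputeScreenPos, the
    // w component of the screen position holds the view-space depth
    float fragEyeDepth = i.screenPos.w;

    // Brighten fragments that sit within 0.5 eye-space units of whatever
    // is already in the depth buffer (an intersection-highlight effect)
    float highlight = 1.0 - saturate((sceneEyeDepth - fragEyeDepth) / 0.5);
    return fixed4(highlight.xxx, 1);
}
```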