Custom shader not writing to depth buffer

Hey there,

I have a fairly simple unlit transparent shader that I would like to have write values to the depth buffer for later use in a post-processing shader. Right now, the color components of the shader draw to the camera just fine, but the depth buffer is empty. Objects with default materials show up in the depth buffer just fine. Any idea what I’m doing wrong?

Shader "Unlit/TextureGlow" { Properties { _MainTex ("Texture - Pastebin.com - the simple unlit/transparent shader that’s on all my scene game objects (except the sphere)
using UnityEngine; public class StarFieldEffect : MonoBehaviour { publi - Pastebin.com - the script with OnRenderImage(…) where I’m blitting the src render texture to dst, using a shader that currently just displays the depth buffer
Shader "Hidden/NewImageEffectShader" { Properties { _MainTex - Pastebin.com - the post-processing shader

Thanks ahead of time

The depth buffer and camera depth texture are not the same thing. The depth buffer is used when rendering the camera view color. The camera depth texture is rendered separately, prior to rendering the main camera view. For objects to render to the camera depth texture, two things need to be true: they need to use a shader that has a shadow caster pass, and they need to use a material with a render queue less than 2500.

A transparent material (a queue of 3000) will not render to the camera depth texture, regardless of whether it has a shadow caster pass or not.
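For the built-in pipeline, a minimal sketch of a shader that satisfies both requirements might look like this (the UsePass line borrows the shadow caster pass from the built-in VertexLit shader; treat it as an illustration, not the poster’s actual shader):

```
// Sketch: an opaque-queue shader that shows up in _CameraDepthTexture.
Shader "Unlit/DepthTextureExample"
{
    SubShader
    {
        // queue < 2500 is required for the camera depth texture
        Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }

        Pass
        {
            // ... normal color pass here ...
        }

        // Borrow a shadow caster pass so the object renders into the
        // camera depth texture (and casts shadows).
        UsePass "Legacy Shaders/VertexLit/SHADOWCASTER"
    }
}
```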

4 Likes

However, you generally do not want transparent objects in the opaque queue range (<2500), because they sort front to back and the sky will render over them unless ZWrite On is used. With ZWrite On, though, the skybox won’t render behind the object, and neither will any other normal transparent objects. It will also cause problems for directional shadows, since those use the camera depth texture as well: objects behind this transparent object won’t receive shadows, because shadows will be cast onto the transparent object’s depth.

Your best option would be to manually render your object into a depth texture after the opaque queues have been rendered to the camera color target.

1 Like

Makes sense.

… manually render your object into a depth texture after the opaque queues have been rendered to the camera color target.

Does this mean that I would need to iterate across all my objects and swap their materials each frame? Then use Camera.Render() with a RenderTexture?

Also, I can’t seem to find a way to hook into the rendering pipeline to do anything between the opaque and transparent draw calls. I’ll keep digging.

By the way, I’m using the built-in rendering pipeline. Maybe I should be looking at the scriptable / universal pipelines?

I’d recommend either looking into replacement shaders, or command buffers using DrawRenderer(): iterate over the objects you care about and draw them to your target texture.

https://docs.unity3d.com/ScriptReference/Rendering.CameraEvent.html
https://docs.unity3d.com/ScriptReference/Camera.AddCommandBuffer.html

They both have the same limitations in this specific regard, and require the same solution.

I had no idea Unity’s graphics pipeline was extensible like this. Very cool, thanks for the pointers. Here are some notes on what I ended up doing for anyone else reading this in the future. I’m still trying to debug one issue but I’ll post about that next.

Unity’s built-in pipeline supports two rendering paths: forward and deferred. Depending on which path you’re using for your project, you can bind to different CameraEvents.


credit

   private Camera cam;
   private CommandBuffer cbuf;

   ...

   void OnEnable()
   {
        this.cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, this.cbuf);
   }

The output of draw commands is written to the color buffer. If your shader writes depth information, it can be encoded into the color channels (e.g. as a half4). You can set a render target if you need to capture the depth texture for use in a later shader.

   private RenderTexture rt;

   ...

   void Update()
   {
       this.cbuf.Clear();
       this.rt = RenderTexture.GetTemporary(this.cam.pixelWidth, this.cam.pixelHeight, 16, RenderTextureFormat.Depth);
       this.cbuf.SetRenderTarget(this.rt);
       this.cbuf.ClearRenderTarget(true, true, Color.black);
       foreach (Renderer r in this.ship.GetComponentsInChildren<Renderer>())
       {
           if (r.enabled)
           {
               this.cbuf.DrawRenderer(r, this.depthMat);
           }
       }
   }
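The depthMat referenced above isn’t shown in the thread; as a sketch, a minimal depth-only shader for it could look like the following (hypothetical shader name; since the render target is a Depth-format texture, only the hardware depth write matters and the color output is discarded):

```
Shader "Hidden/DepthOnly"
{
    SubShader
    {
        Pass
        {
            ZWrite On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            half4 frag (v2f i) : SV_Target
            {
                // The depth buffer is written by the hardware; the color
                // value is irrelevant when the target has no color buffer.
                return 0;
            }
            ENDCG
        }
    }
}
```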

When allocating a RenderTexture with a depth format specified, the RenderTexture’s buffer appears to be allocated per the format specification – in the case of the snippet above, 16 bits per pixel.
The size of the depth buffer values seems to need to match the output type of the fragment shader – so half4 for 16 bit.

In my case, I want to use this depth buffer in a full-screen post-processing shader. To do that, I just bind the render texture to the full-screen shader material:

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        this.sfMat.SetTexture("_DepthTex", this.rt);
        Graphics.Blit(source, destination, this.sfMat);
        RenderTexture.ReleaseTemporary(this.rt);
    }

Lastly, use SAMPLE_DEPTH_TEXTURE to pull a float from the depth texture.

    float depth = SAMPLE_DEPTH_TEXTURE(_DepthTex, i.uv);
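If the post-processing effect needs linear distances rather than the raw non-linear hardware value, the helpers in UnityCG.cginc can convert it (this assumes the depth texture was rendered with the same camera’s projection, so _ZBufferParams matches):

```
float rawDepth = SAMPLE_DEPTH_TEXTURE(_DepthTex, i.uv);
// linear depth remapped to 0..1 (1 at the far clip plane)
float linear01 = Linear01Depth(rawDepth);
// depth in world-space units along the view direction
float eyeDepth = LinearEyeDepth(rawDepth);
```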
5 Likes

@bgolus Is it possible to blit the depth buffer to a texture? I would like to capture that and not have to use a shadow caster pass when I don’t care about shadows.

I would like the code below to work but it appears that BuiltinRenderTextureType.Depth still produces the camera depth texture, not the actual depth buffer.

    void Start()
    {
        depthTexture = new(mainCam.pixelWidth, mainCam.pixelHeight, 0, GraphicsFormat.R32_SFloat);
        depthTexture.antiAliasing = 1;
        depthTexture.filterMode = FilterMode.Point;
        depthTexture.useMipMap = false;

        Shader.SetGlobalTexture("wtf", depthTexture);

        cb = new CommandBuffer();
        cb.name = "Selection Commands";
        cb.Clear();
        cb.Blit(BuiltinRenderTextureType.Depth, depthTexture);
        mainCam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
    }

Is it possible to get a texture from an arbitrary depth buffer?

Yes.*

Is it possible for you to get a texture from an arbitrary depth buffer?

No.*

* Not all APIs and platforms allow you to grab data from arbitrary depth buffers!

To get a depth texture you can sample, you need a render texture with the Depth or ShadowMap format. This can then be used as a depth buffer and will automatically be resolved to a texture to be sampled. Though like normal render textures you can’t be both rendering to and reading from a render texture. So if you want to be able to grab a depth buffer from an arbitrary camera, you need to override the render targets before it renders rather than trying to grab it afterwards.

Here’s some basic example code.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class DepthRenderTextureTest : MonoBehaviour
{
    public bool OnlyRenderToDepth;
    public RenderTexture RTColor;
    public RenderTexture RTDepth;
    public RenderTexture RTDepthCopy;

    void Update()
    {
        Camera cam = GetComponent<Camera>();

        if (RTDepthCopy)
            RenderTexture.ReleaseTemporary(RTDepthCopy);

        RTColor = RenderTexture.GetTemporary(1024, 1024, 0, RenderTextureFormat.Default); // no depth buffer
        RTDepth = RenderTexture.GetTemporary(1024, 1024, 24, RenderTextureFormat.Depth); // only a depth buffer
        RTDepthCopy = RenderTexture.GetTemporary(1024, 1024, 0, RenderTextureFormat.RFloat); // depth buffer copy

        // note: if the objects being rendered have code in their fragment shader, that's still all running even
        // though there is no color buffer to render to. so preferably it should be using replacement shaders or
        // only see objects with depth only materials applied.
        if (OnlyRenderToDepth)
            cam.targetTexture = RTDepth;
        else
            cam.SetTargetBuffers(RTColor.colorBuffer, RTDepth.depthBuffer);
        cam.Render();
        cam.targetTexture = null;

        // at this point you can sample the depth render texture
        // can't use CopyTexture unless both RTs are Format.Depth, but that is an alternative!
        Graphics.Blit(RTDepth, RTDepthCopy);

        RenderTexture.ReleaseTemporary(RTColor);
        RenderTexture.ReleaseTemporary(RTDepth);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // proof that the copy worked
        Graphics.Blit(RTDepthCopy, dst);
    }
}
1 Like

You are the man, @bgolus! I truly appreciate your generosity in sharing knowledge that few others can. I got something that seems to work well enough for my needs. FYSA, I plan to use this for some advanced compositing logic with outline and occlusion FX. Here’s the test code based on your example:

using UnityEngine;
using UnityEngine.Rendering;

public class DepthBufferTransfer : MonoBehaviour
{
    public Camera mainCam;

    CommandBuffer cb;
    RenderTexture colorTex;
    RenderTexture depthTex;
    RenderTexture persistentDepthTex;

    Material depthCopyMat;

    void Start()
    {
        int width = mainCam.pixelWidth;
        int height = mainCam.pixelHeight;

        colorTex = new RenderTexture(width, height, 0, RenderTextureFormat.Default); // no depth buffer
        depthTex = new RenderTexture(width, height, 24, RenderTextureFormat.Depth); // only a depth buffer
        persistentDepthTex = new RenderTexture(width, height, 24, RenderTextureFormat.Depth); // only a depth buffer to persist

        // This is used in a shader on a visible quad to test that things are working
        Shader.SetGlobalTexture("wtf", persistentDepthTex);

        // This is the secret sauce: predetermine where the depth buffer will go
        mainCam.SetTargetBuffers(colorTex.colorBuffer, depthTex.depthBuffer);

        // Shader can be found at https://support.unity.com/hc/en-us/articles/115000229323-Graphics-Blit-does-not-copy-RenderTexture-depth
        depthCopyMat = new Material(Shader.Find("Hidden/DepthCopy"));
        depthCopyMat.SetTexture("_DepthTex", depthTex);

        cb = new CommandBuffer();
        cb.Clear();

        // Copy the current depth buffer to use for whatever (only needed if we can't count on the data in depthTex not changing)
        cb.Blit(depthTex, persistentDepthTex, depthCopyMat);

        // Copy color to the screen (this might not be where you want to do this but it's fine for this example)
        cb.Blit(colorTex, BuiltinRenderTextureType.CameraTarget);

        mainCam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
    }
}

And here’s an ugly image of my test visuals: Pasteboard - Uploaded Image

Just in case someone stumbles upon this and wants materials in the opaque queue to write to the depth buffer: I have a custom vert/frag shader, and the solution for me was adding a custom pass for depth (mainly shadows, SSAO, …) and depth normals (SSR, transparency, …). Here is an example of both in URP; I just copied them from the standard Lit shader and added my properties:

Pass
{
    Name "DepthOnly"
    Tags
    {
        "LightMode" = "DepthOnly"
    }

    // -------------------------------------
    // Render State Commands
    ZWrite On
    ColorMask R
    Cull[_Cull]

    HLSLPROGRAM
    #pragma target 2.0

    // -------------------------------------
    // Shader Stages
    #pragma vertex DepthOnlyVertex
    #pragma fragment DepthOnlyFragment

    // -------------------------------------
    // Material Keywords
    #pragma shader_feature_local _ALPHATEST_ON
    #pragma shader_feature_local_fragment _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

    // -------------------------------------
    // Unity defined keywords
    #pragma multi_compile _ LOD_FADE_CROSSFADE

    //--------------------------------------
    // GPU Instancing
    #pragma multi_compile_instancing
    #include_with_pragmas "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DOTS.hlsl"

    // -------------------------------------
    // Includes
    #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/Shaders/DepthOnlyPass.hlsl"
    ENDHLSL
}

// depth normals write to _CameraNormalsTexture
Pass
{
    Name "DepthNormals"
    Tags
    {
        "LightMode" = "DepthNormals"
    }

    // -------------------------------------
    // Render State Commands
    ZWrite On
    Cull[_Cull]

    HLSLPROGRAM
    #pragma target 2.0

    // -------------------------------------
    // Shader Stages
    #pragma vertex DepthNormalsVertex
    #pragma fragment DepthNormalsFragment

    // -------------------------------------
    // Material Keywords
    #pragma shader_feature_local _NORMALMAP
    #pragma shader_feature_local _PARALLAXMAP
    #pragma shader_feature_local _ _DETAIL_MULX2 _DETAIL_SCALED
    #pragma shader_feature_local _ALPHATEST_ON
    #pragma shader_feature_local_fragment _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

    // -------------------------------------
    // Unity defined keywords
    #pragma multi_compile _ LOD_FADE_CROSSFADE

    // -------------------------------------
    // Universal Pipeline keywords
    #include_with_pragmas "Packages/com.unity.render-pipelines.universal/ShaderLibrary/RenderingLayers.hlsl"

    //--------------------------------------
    // GPU Instancing
    #pragma multi_compile_instancing
    #include_with_pragmas "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DOTS.hlsl"

    // -------------------------------------
    // Includes
    #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/Shaders/LitDepthNormalsPass.hlsl"
    ENDHLSL
}