Depth information of a frame

Hi everyone!

I am stuck trying to get the depth information of a frame and really need some help. I have searched similar problems and tried to understand the concept, but lost my way, so I will go step by step to make my confusion clearer. Thanks in advance :)

Firstly, what I am trying to do is access the depth information of an arbitrary frame via the depth buffer.

  1. A render texture is needed to use the depth buffer (referring to this answer).

  2. I followed RenderDepth.js, an example script from the ShaderReplacement page; the code lines below are also used in that script.

  3. I can allocate a render texture in depth format with

renderTexture = RenderTexture.GetTemporary (camera.pixelWidth, camera.pixelHeight, 24, RenderTextureFormat.Depth);

  4. I create a second camera and set the allocated render texture as that camera’s target texture with

cam.targetTexture = renderTexture;

  5. I render the camera with RenderDepth.shader (the shader is assigned to the script, by the way) with the call below (the full setup is sketched just after this list)

cam.RenderWithShader (depthShader, "RenderType");

RenderDepth.shader can be found here.
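
Putting steps 3-5 together, my setup looks roughly like this (a sketch, not my full script; cam is the second camera and depthShader is the public field the shader is assigned to):

using UnityEngine;

public class DepthCapture : MonoBehaviour
{
    public Shader depthShader;           // RenderDepth.shader, assigned in the inspector
    private Camera cam;                  // the second camera
    private RenderTexture renderTexture;

    void Start()
    {
        cam = GetComponent<Camera>();
        // step 3: temporary render texture in depth format
        renderTexture = RenderTexture.GetTemporary(
            cam.pixelWidth, cam.pixelHeight, 24, RenderTextureFormat.Depth);
        // step 4: make it the camera's render target
        cam.targetTexture = renderTexture;
        // step 5: render once with the replacement shader
        cam.RenderWithShader(depthShader, "RenderType");
    }

    void OnDestroy()
    {
        RenderTexture.ReleaseTemporary(renderTexture);
    }
}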

What I expect is that, after rendering the camera with the shader, the camera’s target texture will hold the depth buffer of the visible scene. Then I can save this target texture into a Texture2D and see a standard depth buffer (white-grey-black shades for far and close objects).

However, I could not figure out which part holds the depth buffer or how to access the information in it. What I am trying to reach are the pixel values, which range from 0 to 1 with a nonlinear distribution.
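
For reference, the readback I have in mind is something like the code below (a sketch of what I expect to work, not something I have gotten meaningful values out of yet):

RenderTexture oldActive = RenderTexture.active;
RenderTexture.active = renderTexture;
Texture2D tex = new Texture2D(renderTexture.width, renderTexture.height);
tex.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
tex.Apply();
// expected: a 0..1 nonlinear depth value at the screen centre
Color c = tex.GetPixel(tex.width / 2, tex.height / 2);
RenderTexture.active = oldActive;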

Any help would be greatly appreciated, thanks!

Hi, thanks for sharing your experience. I finally rendered the depth texture using the first method (RenderWithShader). @kat0r is right: the depth can get rounded down to 0.
You have probably figured it out or given up by now, but here it is for the next person.

This is the way I do it:

a. Add this script to the camera whose depth I want to render. In my case it is not the main camera; if you want to use it on the main camera, just set camera.targetTexture back to null after calling RenderWithShader (otherwise nothing will reach the screen).

using UnityEngine;
using System.Collections;
using System.IO;

[ExecuteInEditMode]
public class ReplaceShader : MonoBehaviour
{
    public Shader shader;
    public RenderTexture m_RTTemp = null;
    public RenderTexture m_Output = null;
    private Texture2D m_Output2D = null;

    // create objects
    void Awake()
    {
        if (m_RTTemp == null)
        {
            m_RTTemp = new RenderTexture(512, 512, 16, RenderTextureFormat.RGB565);
        }
        if (m_Output == null)
        {
            m_Output = new RenderTexture(512, 512, 16, RenderTextureFormat.ARGBFloat);
        }
        if (m_Output2D == null)
        {
            m_Output2D = new Texture2D(m_Output.width, m_Output.height);
        }
        camera.targetTexture = m_RTTemp;
    }
	
    // put the texture into scene
    void OnGUI()
    {
        GUI.DrawTexture(new Rect(0.3f * Screen.width, 0.5f * Screen.height, 0.5f * Screen.width, 0.5f * Screen.height),
            m_Output);
        
    }

    // use RenderWithShader to change shader
    void Update()
    {
        Debug.Log("Update: " + camera.name);
        camera.targetTexture = m_Output;      // render the replacement shader into m_Output
        camera.RenderWithShader(shader, "");
        camera.targetTexture = m_RTTemp;      // restore the off-screen target
    }

    // when destroy, save rendertexture to png file
    void OnDestroy()
    {
        RenderTexture oldActive = RenderTexture.active;
        RenderTexture.active = m_Output;
        m_Output2D.ReadPixels(new Rect(0, 0, m_Output.width, m_Output.height), 0, 0);
        m_Output2D.Apply();
        byte[] pngBytes = m_Output2D.EncodeToPNG();
        File.WriteAllBytes(Application.dataPath + "/replace_shader.png", pngBytes); // note the "/", so the file lands inside Assets
        RenderTexture.active = oldActive;
    }

}
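
Two render textures are used on purpose here: m_RTTemp only keeps the camera rendering off-screen between frames, while every RenderWithShader call writes into m_Output. m_Output is ARGBFloat so that small depth values do not get rounded down to 0 (the problem @kat0r mentioned), and the PNG is written once, when the object is destroyed.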

b. Use this shader and assign it to the public shader field of the script above:

Shader "Render Depth Official" {
SubShader {
    Tags { "RenderType"="Opaque" }
    Pass {
        Fog { Mode Off }
		CGPROGRAM
		#pragma vertex vert
		#pragma fragment frag
		#include "UnityCG.cginc"

		struct v2f {
		    float4 pos : SV_POSITION;
		    float2 depth : TEXCOORD0;
		};

		v2f vert (appdata_base v) {
		    v2f o;
		    o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
		    UNITY_TRANSFER_DEPTH(o.depth);
		    return o;
		}

		half4 frag(v2f i) : COLOR {
		    //UNITY_OUTPUT_DEPTH(i.depth);
		    half d = i.depth.x/i.depth.y;
		    return half4(d*10, d*10, d*10, 1);
		}
		ENDCG
	}
}
}

In the shader’s frag function, I have to multiply the depth by 10 to get a visible image (this scale factor depends on your scene; you need to adjust it). The depth image will show up in the game view.
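
If you want linear distances instead of the raw nonlinear 0-1 values, you can invert the projection on the CPU side after reading a pixel back. A minimal sketch, assuming a standard (non-reversed) perspective projection and the camera’s near/far clip planes; the helper name is mine, not a Unity API:

// Convert a nonlinear 0..1 depth value d back to linear eye-space distance.
// Assumes a conventional perspective projection (d = 0 at near, d = 1 at far);
// this matches what the shader-side LinearEyeDepth gives for such a buffer.
static float NonlinearToEyeDepth(float d, float near, float far)
{
    return (near * far) / (far - d * (far - near));
}

For example, NonlinearToEyeDepth(0, near, far) returns near and NonlinearToEyeDepth(1, near, far) returns far; in between, most of the 0-1 range is spent near the camera, which is the nonlinear distribution mentioned in the question.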

Sorry about my poor English; please feel free to point out anything.