I have a Texture2D in R16 format (depth image from Kinect feed). How do I use this texture as the initial depth buffer of a camera before any rendering?
The goal is an AR scene where virtual objects (rendered by Unity) are occluded by real-world objects (which are present in the Kinect feed). I’m working in DX11.
The easiest is probably to just create a special shader that takes the input Texture2D and renders it to the z-buffer. A typical shader sends a color as output from the pixel shader, but you can also supply the depth.
The value to output to depth is not a linear one, it’s z divided by w of the clip space position. (The position output from the vertex shader.)
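Roughly like this (just the mechanism; converting the Kinect value into an actual z-buffer value is the part discussed below):

sampler2D _MainTex; // the Kinect depth texture

struct v2f
{
    float4 pos : SV_POSITION;
    float2 texcoord : TEXCOORD0;
};

v2f vert (appdata_img v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.texcoord = v.texcoord;
    return o;
}

float frag (v2f i) : SV_Depth
{
    float raw = tex2D(_MainTex, i.texcoord).r;
    // TODO: convert the linear Kinect value into z/w of the clip space position
    return raw;
}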
@jvo3dc That’s true - I copy-pasted from an old shader without thinking.
However, turning ZWrite on didn’t actually solve the issue of not seeing anything. I tried a bunch of things and ended up using a CommandBuffer instead, which seems to work.
Also, it doesn’t seem to be possible to modify the depth buffer unless the camera is rendering into a RenderTexture. Not a big issue, but man it was hard to debug without knowing it.
Also also, camera clear flags don’t contain an option to clear the color but leave the depth as is, so this has to be done manually as well. Here’s what I ended up with:
// Render into an explicit RenderTexture (16-bit depth buffer); modifying the
// depth buffer only worked for me when the camera targets a RenderTexture.
renderTexture = new RenderTexture(dimensionSrc.width, dimensionSrc.height, 16);
Camera cam = GetComponent<Camera>();
cam.targetTexture = renderTexture;
cam.clearFlags = CameraClearFlags.Nothing; // clearing is handled in the command buffer instead

CommandBuffer cb = new CommandBuffer();
cb.SetRenderTarget(renderTexture);
cb.ClearRenderTarget(true, true, Color.black); // clear depth and color manually
// depthConversionMat runs the shader that converts the Kinect values into z-buffer values
cb.Blit(KinectManager.Instance.depthTexture, renderTexture, depthConversionMat);
cam.AddCommandBuffer(CameraEvent.BeforeDepthTexture, cb);
Getting back to the shader bit, I realize I had another issue with the depth value. You said it was z divided by w of the clip space position, but that’s only meaningful when reading the Z buffer value in an object shader. This shader is being used to write the Z value, and the object is a screen quad, so the position has no meaning, right?
So, how do I convert a linear depth value into a Z buffer value?
@faren Can you explain what your code is supposed to do? It doesn’t work, and I’m afraid I don’t understand it enough to try to debug.
@jvo3dc Getting somewhere… I think. I’m getting results but it looks like it’s the wrong way around… are Unity’s depth buffer pixel values inverted or something?
float frag (v2f i) : SV_Depth
{
    float4 depthTex = tex2D(_MainTex, i.texcoord);
    float depthMeters = depthTex.r * 65.535; // pixel value is millimeters as a 16-bit unsigned int
    float4 pos_world = float4(0.0, 0.0, depthMeters, 1.0);
    float4 pos_clip = mul(UNITY_MATRIX_VP, pos_world);
    float depth = pos_clip.z / pos_clip.w;
    return depth;
}
As for the RenderTexture thing, I only know that when I tried using BuiltinRenderTextureType.Depth as the CommandBuffer.Blit() target, nothing happened. Assigning a RenderTexture to the camera and then using that as the target solved the issue. In my scenario that’s actually a good thing, because I’ll probably need to render the final image into a UI texture anyway.
Got it. UNITY_MATRIX_VP was the culprit. Replacing it with UNITY_MATRIX_MVP made Unity upgrade it to UnityObjectToClipPos() which gives correct results. Here’s the shader code that works:
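In essence, the only change from the snippet above is the transform:

float frag (v2f i) : SV_Depth
{
    float4 depthTex = tex2D(_MainTex, i.texcoord);
    float depthMeters = depthTex.r * 65.535; // pixel value is millimeters as a 16-bit unsigned int
    // full MVP via UnityObjectToClipPos instead of mul(UNITY_MATRIX_VP, ...)
    float4 pos_clip = UnityObjectToClipPos(float3(0.0, 0.0, depthMeters));
    return pos_clip.z / pos_clip.w;
}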
Oh yeah, @jvo3dc , I didn’t understand until now what you were talking about with material shaders. I can create a screen-sized quad mesh and render into that using a regular material shader. Set it to “Queue”=“Background-1” and it’ll write into the depth buffer before anything else gets rendered. Getting it pixel perfect takes a bit of doing of course.
I was still thinking about using the texture itself as the depth buffer so I didn’t get that…
Well… yes and no. We encountered other technical issues regarding the color/depth correlation. But what I posted above is a viable solution: just plop in a quad in the scene, size it so that it covers the camera view, then assign a material to it. By setting the queue to Background-1 and ZTest Always, you get a shader that fills up the depth buffer before any other rendering happens. There doesn’t appear to be a way to set the depth buffer itself, so you just have to render into it when it’s empty.
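As a rough skeleton (shader and property names are placeholders, and the depth conversion is the one from the posts above):

Shader "Custom/KinectDepthFill"
{
    Properties
    {
        _MainTex ("Kinect Depth (R16)", 2D) = "black" {}
    }
    SubShader
    {
        Tags { "Queue" = "Background-1" "RenderType" = "Opaque" }
        Pass
        {
            ZWrite On
            ZTest Always
            ColorMask 0 // depth only; drop this if you also want to output color (see below)

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            float frag (v2f_img i) : SV_Depth
            {
                // same conversion as in the earlier posts
                float depthMeters = tex2D(_MainTex, i.uv).r * 65.535;
                float4 pos_clip = UnityObjectToClipPos(float3(0.0, 0.0, depthMeters));
                return pos_clip.z / pos_clip.w;
            }
            ENDCG
        }
    }
}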
Depending on the situation, it may also be a good idea to output both color and depth. In this case, you can define the output as:
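(Struct and texture names below are just placeholders.)

sampler2D _MainTex;  // Kinect depth (R16)
sampler2D _ColorTex; // Kinect color image, registered to the depth image

struct FragOutput
{
    float4 color : SV_Target; // regular color output
    float  depth : SV_Depth;  // value written into the depth buffer
};

FragOutput frag (v2f_img i)
{
    FragOutput o;
    o.color = tex2D(_ColorTex, i.uv);
    float depthMeters = tex2D(_MainTex, i.uv).r * 65.535;
    float4 pos_clip = UnityObjectToClipPos(float3(0.0, 0.0, depthMeters));
    o.depth = pos_clip.z / pos_clip.w;
    return o;
}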
This will also render the regular color image alongside the depth buffer. Even though 3D objects are rendered afterwards, they will be occluded by the depth buffer that’s already there, letting the color image show through in those pixels.