Render the ARKit depth texture into the scene depth buffer.
Render scene.
Fin.
Requires using a shader that writes to SV_Depth. You can use ColorMask 0 to skip writing to the color values. My understanding is that ARKit's depth texture is in linear meters, so you need to convert that into the non-linear depth of the current camera. Here's the kind of function you'd use to do that.
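A minimal sketch of that conversion, which inverts Unity's built-in LinearEyeDepth() using _ZBufferParams (the function name is illustrative; reversed-Z platforms may need this adjusted):

```hlsl
// Convert linear eye depth (in meters) back into the non-linear value the
// depth buffer stores. This inverts Unity's LinearEyeDepth():
//   LinearEyeDepth(z) = 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w)
float LinearDepthToNonLinear(float linearDepth)
{
    return (1.0 - linearDepth * _ZBufferParams.w) / (linearDepth * _ZBufferParams.z);
}
```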
Render scene with shaders that sample the ARKit depth texture and clip() when further away.
Have your shader pass the screen position and linear depth from the vertex to the fragment, and compare against the ARKit depth texture. Look at the built-in particle shaders for an example, though those do soft fading with the result, and need to convert the camera depth texture into linear depth, which you shouldn't need to do.
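A rough sketch of that setup, assuming the ARKit depth texture holds linear meters and is bound as _DepthTex (a placeholder name), inside a CGPROGRAM that includes UnityCG.cginc:

```hlsl
sampler2D _DepthTex; // ARKit depth texture in linear meters (placeholder name)

struct v2f
{
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD0; // for sampling the depth texture per pixel
    float eyeDepth : TEXCOORD1;   // linear eye depth of this vertex, in meters
};

v2f vert (appdata_full v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
    o.eyeDepth = -UnityObjectToViewPos(v.vertex).z;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Discard fragments that are further away than the real-world depth.
    float worldDepth = tex2Dproj(_DepthTex, UNITY_PROJ_COORD(i.screenPos)).r;
    clip(worldDepth - i.eyeDepth);
    return fixed4(1, 1, 1, 1);
}
```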
This sounds like a good route to take. I read the documentation about SV_Depth and making the frag shader output to the depth buffer for me, but how do I pass in the linearDepth parameter as in your example? I shouldn't use tex2D because that would wrap the texture to conform to the mesh, correct? Or maybe I should use a quad covering the screen so I won't have that problem.
You absolutely should be using tex2D to sample the ARKit depth texture; that's the only (sane) way to get per-pixel values. You'll want to use a command buffer to call [Blit()](https://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.Blit.html) on [CameraEvent.BeforeForwardOpaque](https://docs.unity3d.com/Manual/GraphicsCommandBuffers.html). Blit() draws a full-screen quad with UVs matching the screen.
The linear depth is the value in the depth texture you get from ARKit. You'll need to create a material in script that uses your custom shader, and assign the depth texture (and maybe a stencil texture to clip() against) to the material before using it in the Blit().
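Something along these lines, where "Custom/DepthWrite", _DepthTex, _StencilTex, and the two texture variables are all placeholder names for whatever your own shader and session code declare:

```csharp
// Material created from the custom depth-write shader; assign the ARKit
// textures to it before handing it to the Blit().
Material myDepthWriteMaterial = new Material(Shader.Find("Custom/DepthWrite"));
myDepthWriteMaterial.SetTexture("_DepthTex", arkitDepthTexture);     // linear depth in meters
myDepthWriteMaterial.SetTexture("_StencilTex", arkitStencilTexture); // optional mask to clip() against
```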
You’ll want your blit to look like this:
```csharp
// Run the depth-write material as a full-screen blit before opaque rendering.
CommandBuffer myCommandBuffer = new CommandBuffer();
myCommandBuffer.Blit(null, BuiltinRenderTextureType.CurrentActive, myDepthWriteMaterial);
myCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, myCommandBuffer);
```
Wonderful! I’m almost there. I’m using a dummy greyscale texture of a hand in the shader you instructed me to write (shown below). The hand silhouette blocks objects that are far away from being rendered, and an object is properly rendered when it gets closer to the camera.
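Roughly, the shader looks like this (a sketch of the approach described above; the shader name and _DepthTex property are illustrative):

```shaderlab
Shader "Custom/DepthWrite" // illustrative name
{
    Properties
    {
        _DepthTex ("Depth Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            Cull Off
            ZTest Always
            ZWrite On
            ColorMask 0 // write depth only, leave color alone

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _DepthTex;

            float frag (v2f_img i) : SV_Depth
            {
                // With the dummy texture, the greyscale value stands in for
                // linear depth in meters (clamped to avoid divide-by-zero).
                float linearDepth = max(tex2D(_DepthTex, i.uv).r, 0.0001);
                // Invert LinearEyeDepth() to get the non-linear buffer value.
                return (1.0 - linearDepth * _ZBufferParams.w) / (linearDepth * _ZBufferParams.z);
            }
            ENDCG
        }
    }
}
```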
The only problem that remains is that everything outside the silhouette is pitch black. Could it be because I’m writing directly to SV_Depth and meddling with the depth, and I need to add the depth of the rest of the scene?
You said you had a stencil texture, though I don’t entirely understand what form the stencil texture is passed in. Is it a b&w image of the hand outline? If so, sample that texture and use: clip(stencil.r - 0.5);
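In the fragment shader that would look something like this (assuming the stencil is bound as _StencilTex, a placeholder name, and is white inside the hand):

```hlsl
// Only write depth inside the hand silhouette; discard everything else
// so the rest of the scene keeps its own depth.
fixed stencil = tex2D(_StencilTex, i.uv).r;
clip(stencil - 0.5); // hand (1.0) passes, background (0.0) is discarded
```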
As for why: if you’re using a dummy texture rather than the real one, you have a black and white texture with a range of 0.0 to 1.0; presumably the hand is black and the background is white. That’s going to write to the depth as if there’s a wall 1 meter away. The real ARKit depth texture should be a floating point texture with a much larger range, so presumably the “white” areas are actually values much larger than 1.0, but honestly I have no idea since I’ve never used ARKit.
The other problem I don’t have an answer to is that I don’t know how the ARKit camera view is rendered into the scene. Ideally it gets rendered into the camera first and ignores the depth buffer.
Yes, you’re right! I forgot about adding the stencil texture. The frag shader no longer sets the depth outside the hand, so everything else renders just fine.
This was done in a test environment; the only thing left to do now is to try it in the real environment. Fingers crossed!