Alternatives to Scene Depth node in URP Shader Graph for decals

I have a URP Shader Graph that I use to add decals on surfaces in normal Unity projects. It’s based on this nice tutorial from Daniel Ilett.

I’m interested in adding decals to 3D objects in a visionOS mixed reality experience.

The problem is that my shader graph uses the Scene Depth node, which doesn't appear to be supported on visionOS. This page says the following about the Scene Depth node:

“Platform doesn’t allow have access to the depth buffer, this is just the camera distance in either clip or view space.”

That is a bit unclear to me. Could someone clarify it?

I understand that Apple probably doesn’t want developers to access depth maps for privacy reasons, but I just want a depth map of the 3D elements in my Unity project, not of the user’s room.

Are there any workarounds to get depth maps of 3D elements on visionOS? Maybe an approach based on a RenderTexture? Thanks for any advice.

Basically, there’s no point in using the Scene Depth node on visionOS. It should simply be reported as Unsupported; we just haven’t gotten around to removing a legacy implementation. If that implementation works at all, it returns the depth of the current geometry being rendered, which is unlikely to be useful.

We target RealityKit for MR, and RealityKit doesn’t offer any kind of access to the depth map. It is possible to render to a RenderTexture and copy the depth for use in a shader graph, but you’ll need to set the Camera parameters to match the RealityKit scene view, which makes this very tricky (and probably impossible in bounded mode, where you can’t even get the device transform). There’s some discussion of that approach in this thread.

Another possibility, if the mesh isn’t that big, would be to render the mesh again with the decal and use the VisionOS Sorting Group component to ensure that it renders on top of the base.
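Untested, but the duplicate-mesh part would look something like this rough sketch (I’d configure the VisionOS Sorting Group itself in the Inspector, adding both the base and overlay renderers to it):

```csharp
using UnityEngine;

// Rough sketch: render the same mesh a second time with the decal material,
// then use a VisionOS Sorting Group (e.g. set up in the Inspector) so the
// overlay draws on top of the base mesh.
public class DecalOverlay : MonoBehaviour
{
    public Material decalMaterial;

    void Start()
    {
        // Assumes the base object uses a MeshFilter + MeshRenderer pair.
        var overlay = new GameObject(name + " Decal Overlay");
        overlay.transform.SetParent(transform, false); // same pose as the base mesh

        // Reuse the base mesh so the overlay matches it exactly.
        overlay.AddComponent<MeshFilter>().sharedMesh =
            GetComponent<MeshFilter>().sharedMesh;
        overlay.AddComponent<MeshRenderer>().sharedMaterial = decalMaterial;
    }
}
```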


Thanks kapolka, that thread is very interesting. There’s one thing I tried earlier today that I wanted to ask you about:

  • I create a 2nd camera in my scene. I make it a child of the Main Camera.
  • I create a RenderTexture with the R8G8B8A8_UNORM Color Format and D32_SFLOAT Depth Stencil Format.
  • I make the output of the 2nd camera that RenderTexture.
  • I add the BatchModeUpdateRenderer script to my 2nd camera.
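For reference, here’s roughly what that setup looks like in code (a sketch of what I did; the resolution is arbitrary, and BatchModeUpdateRenderer is the script from that other thread, so I’ve left that line commented out):

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

public class DepthCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public RenderTexture targetRT;

    void Start()
    {
        // 2nd camera as a child of the Main Camera so it follows its pose.
        var go = new GameObject("Depth Camera");
        go.transform.SetParent(mainCamera.transform, false);
        var cam = go.AddComponent<Camera>();
        cam.CopyFrom(mainCamera); // match FOV, clipping planes, etc.

        // RenderTexture with R8G8B8A8_UNORM color and D32_SFLOAT depth.
        targetRT = new RenderTexture(new RenderTextureDescriptor(
            1024, 1024, GraphicsFormat.R8G8B8A8_UNorm, GraphicsFormat.D32_SFloat));
        cam.targetTexture = targetRT;

        // go.AddComponent<BatchModeUpdateRenderer>(); // script from the other thread
    }
}
```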

If I put the RenderTexture on a plane in my scene, it acts like a TV: it shows what the 2nd camera is rendering, even in the visionOS simulator, which confirms that the RenderTexture is being updated correctly.

Now my question is: if I pass that RenderTexture to my decal URP Shader Graph, how can I sample its depth? A Sample Texture 2D node doesn’t seem to let me do that. If I could get the depth from it, that would be a perfect workaround for decals.

Thanks for any information.

As far as I know, you have to blit the depth texture to another RenderTexture with a different, supported format (such as RGBAHalf). Once you do that, you can just sample it as a normal texture and take the R component. That’s what nathanael-omnivor describes in that thread, and it matches what we’ve done in our internal experiments.
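Untested, but the blit itself is just something like this (assuming `depthRT` is a depth-format RenderTexture, e.g. RenderTextureFormat.Depth, that your second camera renders depth into, and `_DecalDepth` is whatever texture property you expose on your graph; both names are placeholders):

```csharp
using UnityEngine;

public class DepthBlit : MonoBehaviour
{
    public RenderTexture depthRT;   // depth-format source (e.g. D32_SFLOAT)
    RenderTexture depthAsColor;     // RGBAHalf copy the shader graph can sample

    void Start()
    {
        depthAsColor = new RenderTexture(depthRT.width, depthRT.height, 0,
            RenderTextureFormat.ARGBHalf);
    }

    void LateUpdate()
    {
        // Sampling a depth texture yields depth in the R channel, so a plain
        // blit writes it into a color format that can be sampled normally.
        Graphics.Blit(depthRT, depthAsColor);
        // Placeholder property name exposed on the decal shader graph.
        Shader.SetGlobalTexture("_DecalDepth", depthAsColor);
    }
}
```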

Thanks kapolka! I was just re-reading that thread and finally understood that on this 2nd read. I’ll give that a shot and report back.
