Scene depth support in visionOS MR shader graph

I would like to implement the fog effect from the link in Unity's visionOS MR environment.
However, I found out that the shader graph does not support scene depth.
I think I saw a forum post saying you can achieve something similar with a displacement map shader… but I'm not sure how to go about it.

I got information about displacement maps from this thread.

Any help would be greatly appreciated…!!!

That’s right; Apple doesn’t currently provide a way to sample the depth buffer in their MaterialX implementation, so we can’t support the Scene Depth node. If you had the exact camera parameters that visionOS uses (camera position, rotation, field of view), then you could theoretically render the scene in Unity, transfer the depth map to a RenderTexture, and then sample that in a shader graph. However, that would only work in unbounded mode (you can’t get the head position/rotation in bounded mode), and would be tricky to get exactly right because Apple doesn’t expose things like the field of view or IPD (so you’d have to estimate them).

That approach also wouldn’t work with real-world objects, as in the example you linked. It’s probably worth noting that the page you linked doesn’t list support for visionOS, and neither do the pages for the APIs it uses. If visionOS did support them (or if it eventually adds support, and AR Foundation were updated accordingly), you might be able to use AR Foundation’s Environment Depth Image feature to get the real-world depth map for this purpose.

AR Foundation does support meshing, so one option might be to render those meshes in Unity and, again, copy the depth buffer to a RenderTexture and use that. Again, though, I wouldn’t expect pixel-perfect accuracy, since you’re going to have to estimate some of the camera parameters.
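To make the idea concrete, here's a rough sketch of that workflow, assuming a secondary camera that renders only the AR meshes into a `RenderTexture`, which a shader graph then samples through a texture property. The layer name `"ARMesh"`, the property name `_SceneDepthTex`, the texture size, and the 60° field of view are all assumptions for illustration; you'd need a depth-writing shader on the mesh material (or a replacement shader) to put linear depth into the texture's color channel.

```csharp
using UnityEngine;

// Sketch: render the AR meshes' depth from a secondary camera into a
// RenderTexture, and expose it to a shader graph as a texture property.
public class MeshDepthToTexture : MonoBehaviour
{
    public Camera depthCamera;   // secondary camera, kept aligned with the head pose
    public Material fogMaterial; // material using a shader graph with a _SceneDepthTex property (assumed name)

    RenderTexture depthRT;

    void Start()
    {
        // Single-channel float color buffer plus a 24-bit depth buffer;
        // the mesh material is expected to write linear depth into the color channel.
        depthRT = new RenderTexture(1024, 1024, 24, RenderTextureFormat.RFloat);
        depthCamera.targetTexture = depthRT;
        depthCamera.cullingMask = LayerMask.GetMask("ARMesh"); // render only the meshing layer (assumed layer name)
        fogMaterial.SetTexture("_SceneDepthTex", depthRT);
    }

    void LateUpdate()
    {
        // Follow the main (head) camera each frame. The field of view is an
        // estimate, since visionOS doesn't expose the real device FOV.
        depthCamera.transform.SetPositionAndRotation(
            Camera.main.transform.position, Camera.main.transform.rotation);
        depthCamera.fieldOfView = 60f; // estimated, not the true device value
    }
}
```

Because the FOV and IPD are estimated rather than queried from the device, expect the sampled depth to line up only approximately with what the user sees, as noted above.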

Thank you!!! I’ll try it! I always get a lot of help here. Thank you so much!
