We are trying to create a mixed reality water plane for visionOS. For that to look good, we need to cut out objects that are in front of the water plane from the mixed reality point of view, effectively creating a kind of occlusion.
Our current approach is to get the mesh of the scene geometry, as demonstrated in this video at 12:53, and use that geometry to create the occlusion.
It has been difficult to find out how to access the LiDAR-scanned room and its geometry. We would appreciate any pointers; our research has been inconclusive so far.
Hello! Have you tried taking a look at Unity’s ARFoundation samples? ARFoundation is how you interface with the Apple frameworks linked in the video you shared. There appears to be an example for scene meshing there. It’s not PolySpatial-specific, but I believe it would work in MR with an unbounded volume camera. Edit: Actually, I think this will only work in the Metal Rendering app mode, but I’ll double-check with my colleagues today.
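For reference, scene meshing in ARFoundation is driven by the `ARMeshManager` component. A minimal sketch of how you would observe the meshes it produces, assuming an `ARMeshManager` sits on a child of the XR Origin with a mesh prefab assigned (as in the ARFoundation samples); the `MeshLogger` class name and the serialized field are my own choices:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: logs scene-mesh updates coming from ARFoundation's ARMeshManager.
public class MeshLogger : MonoBehaviour
{
    // Assign the ARMeshManager from the scene in the Inspector.
    [SerializeField] ARMeshManager meshManager;

    void OnEnable()  => meshManager.meshesChanged += OnMeshesChanged;
    void OnDisable() => meshManager.meshesChanged -= OnMeshesChanged;

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        // Each entry is a MeshFilter holding a chunk of the scanned room geometry.
        Debug.Log($"meshes added: {args.added.Count}, " +
                  $"updated: {args.updated.Count}, removed: {args.removed.Count}");
    }
}
```

If the log stays silent, meshing is not running at all; if it fires but nothing is visible, the issue is on the rendering side (e.g. the mesh prefab's material).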
So we have used the AR Foundation samples, but without success. Our steps so far have been:
Build the “OcclusionMeshes” scene with the sample project unchanged. Result: an empty app on the Vision Pro.
Install PolySpatial and drop a Volume Camera into the scene, without any other changes. Result: another empty scene.
Put a bunch of cubes into the scene. Result: we can see that some occlusion is happening, but still no meshes or scene geometry are displayed; see this video: Unique Download Link | WeTransfer
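One way to check whether the meshes exist but are simply rendered invisibly (which would explain seeing occlusion without seeing geometry) is to swap a visible debug material onto each spawned mesh. A sketch, assuming the same `ARMeshManager` setup as in the samples; the `MeshDebugView` class and the `debugMaterial` field are my own names, not part of the ARFoundation API:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: makes spawned scene meshes visible by assigning a debug material,
// to verify that meshing runs even when the default material is invisible.
public class MeshDebugView : MonoBehaviour
{
    [SerializeField] ARMeshManager meshManager;
    // Assign any unlit, visible material in the Inspector.
    [SerializeField] Material debugMaterial;

    void OnEnable()  => meshManager.meshesChanged += OnMeshesChanged;
    void OnDisable() => meshManager.meshesChanged -= OnMeshesChanged;

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        foreach (var meshFilter in args.added)
        {
            if (meshFilter.TryGetComponent<MeshRenderer>(out var meshRenderer))
                meshRenderer.material = debugMaterial;
        }
    }
}
```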
That’s correct. We have an example of scene meshing in the PolySpatial Samples (the “Meshing” scene), and that might be an easier example to get working.