Help with attaching 3D models to the mesh generated by ARMeshManager

Hi,

I’m using the Apple Vision Pro and have run the example scene included with PolySpatial that ties into ARMeshManager to create a mesh of the room around me. How can I best place 3D models on the ground automatically as the mesh is being generated around me? I have looked into ARMeshesChangedEventArgs (Struct ARMeshesChangedEventArgs | AR Foundation | 4.0.12), but I don’t fully understand how the generated mesh is handled, so I’m not sure if this is the right place to do my testing.
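For reference, this is roughly what I was imagining for hooking into the event, based on the docs page above (`MeshSpawner` is just a placeholder name of mine, and I don’t know if this is the right approach):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class MeshSpawner : MonoBehaviour
{
    [SerializeField] ARMeshManager meshManager;

    void OnEnable()  => meshManager.meshesChanged += OnMeshesChanged;
    void OnDisable() => meshManager.meshesChanged -= OnMeshesChanged;

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        // added/updated/removed are lists of MeshFilter, which makes me
        // think there can be more than one mesh at a time?
        foreach (MeshFilter mf in args.added)
            Debug.Log($"New mesh: {mf.sharedMesh.vertexCount} vertices");
    }
}
```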

Is only a single mesh being generated, or multiple meshes? If it’s a single mesh, then being able to read vertex data from it should be sufficient for my purposes. Ideally, I want to get points along the mesh and their orientation so that I can attach models to the ground in a way that looks natural, e.g. a barrel or a box. Something like the sketch below is what I had in mind.
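Here’s the kind of thing I’d want to do with the vertex data, assuming the mesh provider actually populates per-vertex normals (`TryGetGroundPose` and the 0.9 threshold are just my own placeholder name/value):

```csharp
using UnityEngine;

public static class GroundPointSampler
{
    // Scans a mesh chunk for a roughly upward-facing vertex (likely floor)
    // and returns a world-space pose a prop could be placed at.
    public static bool TryGetGroundPose(
        MeshFilter mf, out Vector3 position, out Quaternion rotation,
        float upDotThreshold = 0.9f)
    {
        position = default;
        rotation = Quaternion.identity;

        Mesh mesh = mf.sharedMesh;
        Vector3[] vertices = mesh.vertices;
        Vector3[] normals = mesh.normals;
        if (normals.Length != vertices.Length)
            return false; // no per-vertex normals to test against

        Transform t = mf.transform;
        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 worldNormal = t.TransformDirection(normals[i]);
            // Treat near-vertical normals as "ground".
            if (Vector3.Dot(worldNormal, Vector3.up) >= upDotThreshold)
            {
                position = t.TransformPoint(vertices[i]);
                // Align the prop's up axis with the surface normal.
                rotation = Quaternion.FromToRotation(Vector3.up, worldNormal);
                return true;
            }
        }
        return false;
    }
}
```

The idea would be to call this on each added MeshFilter from the meshesChanged callback and Instantiate the barrel/box prefab at that pose, but I don’t know whether the generated meshes even carry reliable normals, or whether there’s a better way to find the floor.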

I tried looking into the AR Foundation samples, but they require a beta version of Unity, and I was worried that updating Unity would cause compatibility issues with certain PolySpatial libraries. Does anyone have ideas on how best to solve this?

Thanks!