I’m observing a major stutter in framerate every time ARMeshManager brings in new meshes. In addition to my project, I can observe it happening in the PolySpatial MixedReality Sample as well. This results in a pretty choppy experience every 2 seconds or so when the meshes are updated.
Has anyone else seen this? And if so, can anyone recommend any ideal configurations or workarounds to prevent it from happening?
I’m using the RealityKit app mode with PolySpatial 2.0.4.
AR Foundation 6.0.3.
Unity version 6000.0.23f1.
And another interesting one. Here, Skinned Mesh Renderers are mentioned, which is really weird because neither AR Planes nor AR Meshes use Skinned Mesh Renderers:
Bumping this because it still is happening for me in:
Unity 6000.0.43
PolySpatial 2.2.4
AR Foundation 6.1.0
The device is also running visionOS 2.3.2
Is this something that is being investigated currently, or is it just generally not recommended to enable spatial mesh updating for the majority of your app’s experience?
The PolySpatial framerate hitch has not gotten better in the five months since I first posted about it, and it is now a top priority for us. I’d like to raise the issue once again, this time with an example project and video.
The jump from 90 to 80 may not sound that bad, but keep in mind it’s an average over a half-second. What’s really happening is that the app drops whole frames and then recovers very quickly. That wouldn’t be a big deal in a few isolated cases, but when it happens every few seconds in XR, it’s jarring.
The example project was created using Unity’s own PolySpatial Samples, and it uses the very latest PolySpatial and AR Foundation. My demo contains a modified version of the Mixed Reality scene from the PolySpatial Samples to showcase the framerate hitch.
This is really impacting our project, which relies heavily on realtime mesh and plane updating throughout the experience. Every couple of seconds, all physics, animations, etc. seem to “glitch out”, distracting the user and breaking immersion.
Repro steps:
Download the project onto your computer and prepare to build to Apple Vision Pro
Requirements:
Unity 6000.0.44f1 + visionOS module
Tested using:
visionOS 2.4
Xcode 16.3
Make a build and deploy it to the device. The app will be called “Spatial Update Bug”.
Run the app. Make sure you’re looking at an empty space in front of you while the UI settles into place. This UI and scene are based upon the “Mixed Reality” demo in the PolySpatial Samples. I’ve modified the UI so that users can more easily observe the framerate hitch, using a framerate counter and indicators in the UI. You can use these to see that there is a hitch every time the spatial meshes or planes update.
If meshes/planes are not being updated, try moving your head around or walking around your space.
Expected behavior:
Framerate should remain at a smooth 90fps, regardless of whether spatial meshes or planes are being updated.
So far, it’s looking like this is related to the way we process the AR meshes. I created a non-Unity RealityKit project that acquires and renders the meshes roughly the same way Unity does: using ARKitSession/SceneReconstructionProvider to get the stream of MeshAnchor updates, extracting the MeshAnchor geometry via the contents functions of the buffer accessors, and using it to create a LowLevelMesh (notably, processing the vertices on the CPU, which may be the bottleneck). It also extracts the vertex data back from the mesh to create a static ShapeResource, since the example populates a non-convex MeshCollider and that’s what we turn those into in RealityKit. I ended up getting a slowdown similar to your example: on frames where we get new mesh data, we often see inter-frame times of 20–32ms, versus the ~11ms we would expect at 90fps.
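For anyone who wants to follow along, the CPU-side path in that repro looks roughly like this. This is a sketch, not the actual plugin code; the packed-float3 buffer layout and the function names here are my assumptions:

```swift
import ARKit
import RealityKit

// Slow path (sketch): pull MeshAnchor geometry out of ARKit's Metal
// buffers on the CPU for every update.
func extractPositions(from anchor: MeshAnchor) -> [SIMD3<Float>] {
    let source = anchor.geometry.vertices  // GeometrySource
    let base = source.buffer.contents().advanced(by: source.offset)
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(source.count)
    for i in 0..<source.count {
        // Assumes tightly packed float3 vertices; use the source's
        // stride rather than MemoryLayout<SIMD3<Float>>.stride.
        let v = base.advanced(by: i * source.stride)
            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        positions.append(SIMD3(v.0, v.1, v.2))
    }
    return positions
}

func observeMeshes() async throws {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    try await session.run([provider])
    for await update in provider.anchorUpdates {
        // Per-update CPU work: extract vertices, fill a LowLevelMesh,
        // then build a MeshResource and a static ShapeResource from it.
        let positions = extractPositions(from: update.anchor)
        _ = positions  // ...feed into LowLevelMesh / collider generation
    }
}
```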
I did note that I didn’t get the slowdown if I used the RealityKit-specific pipeline. That is, in RealityKit, you can turn a MeshAnchor directly into a MeshResource or a ShapeResource, without the intermediate step of extracting the geometry (indices, positions, normals). Depending on whether or how you need to process the mesh data, that may suggest a workaround that you could use: get the mesh data directly from ARKit and apply it directly to RealityKit, integrating it with your Unity scene as per the SwiftUI integration examples (which demonstrate how to manipulate RealityKit objects from Unity).
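A minimal sketch of that direct RealityKit pipeline, for reference. The exact initializers are my reading of the visionOS 2 API (MeshResource and ShapeResource can both be built straight from a MeshAnchor), so verify against the current docs:

```swift
import ARKit
import RealityKit

// Fast path (sketch): let RealityKit consume the MeshAnchor directly,
// skipping the CPU-side geometry extraction entirely.
func makeEntity(for anchor: MeshAnchor) async throws -> ModelEntity {
    // visionOS 2: build the render mesh straight from the anchor.
    let mesh = try await MeshResource(from: anchor)
    // Non-convex static collision shape, also straight from the anchor.
    let shape = try await ShapeResource.generateStaticMesh(from: anchor)

    let entity = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
    entity.collision = CollisionComponent(shapes: [shape])
    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
    return entity
}
```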
Let me try to come up with a minimal example of how that would work.
My app does a lot of processing of the meshes and planes, including raycasting and box-casting against surfaces. It also relies on them for physics, so that virtual objects can collide with and bounce off real-world geometry.
We’re also processing the mesh to create a nav grid for pathfinding (not Unity’s Nav Mesh system, but something similar). The process requires a mesh collider.
Since you need Unity MeshColliders, it sounds like you won’t be able to use the RealityKit-specific pipeline. I’ll try experimenting a little more with some other options.
Do you need the MeshRenderer for the AR meshes? So far, it’s looking like the bottleneck is creating a new MeshResource for the LowLevelMesh via the synchronous path. If I reuse the LowLevelMesh or create the MeshResource from it asynchronously, I don’t get the slowdown. That gives me some ideas as to how we can work around the issue, but in the meantime, if you just need the Unity MeshCollider, it seems like you could use a prefab without the MeshRenderer.
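For anyone experimenting outside Unity, the difference comes down to which MeshResource initializer you use on the LowLevelMesh. A sketch (assumes an already-populated LowLevelMesh; both variants exist in the visionOS 2 API, but confirm against the docs):

```swift
import RealityKit

// Given an already-populated LowLevelMesh:
func makeResource(from lowLevel: LowLevelMesh) async throws -> MeshResource {
    // Synchronous path -- appears to block long enough to drop frames:
    // let resource = try MeshResource(from: lowLevel)

    // Asynchronous path -- no observed hitch in my testing:
    let resource = try await MeshResource(from: lowLevel)
    return resource
}
```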
OK! Well, it sounds like the way around this is to pool MeshResources/LowLevelMeshes and/or use the asynchronous path (perhaps with a settings option or other way to configure the behavior, so that we can avoid synchronization issues when they matter, like separate meshes not appearing on the same frame). I’m sure we can get something like that into the next release.