If I build the sample scene and turn off meshing, plane detection, and skybox, I generally see things running at a solid 90FPS. I can confirm that, by default, the average frame rate dips down to about 75FPS, depending on what room I’m in (bigger rooms result in more planes and AR mesh triangles). Maybe we shouldn’t set that scene up to run all of the AR features by default, but we decided to prioritize demonstrating all of the features over keeping a smooth frame rate. In a real app, I would design the flow to only engage AR features one at a time, and only as needed. For example, the sample is constantly running image tracking even though most apps probably won’t need to. In most cases, I would expect an app to run AR features at startup with lightweight visuals, then turn those features off and enable/load the more expensive graphical elements once the room has been scanned, planes have been detected, and so on.
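To make that staging concrete, here’s a rough sketch (not taken from the sample) of how you might gate AR features using AR Foundation manager components. The manager types are real AR Foundation classes, but the field names, the fixed scan duration, and the overall flow are just illustrative placeholders:

```csharp
// Illustrative sketch only: stage AR features so they aren't all running at once.
// Assumes the AR Foundation manager components are assigned in the Inspector;
// the scan duration and "expensiveVisuals" object are placeholders.
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class StagedARFeatures : MonoBehaviour
{
    [SerializeField] ARMeshManager meshManager;
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] ARTrackedImageManager imageManager;
    [SerializeField] GameObject expensiveVisuals; // heavier content, enabled later
    [SerializeField] float scanDuration = 10f;    // placeholder scan window

    IEnumerator Start()
    {
        // Leave image tracking off unless the app actually needs it.
        imageManager.enabled = false;

        // Run meshing + plane detection with lightweight visuals at startup.
        meshManager.enabled = true;
        planeManager.enabled = true;
        yield return new WaitForSeconds(scanDuration);

        // Once the room has been scanned, stop the AR features and bring in
        // the more expensive graphical elements.
        meshManager.enabled = false;
        planeManager.enabled = false;
        expensiveVisuals.SetActive(true);
    }
}
```

The point is just that each manager can be toggled independently, so nothing needs to stay enabled once you have the data you need.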
Although the scene doesn’t look like much, the meshing and plane tracking end up creating a lot of transparent fragments, which are expensive to render. I was also kind of surprised at how slow the default skybox shader is on visionOS. It’s not great that such a simple scene runs into performance issues, but based on my analysis it really is taking up the full 11ms each frame between CPU/GPU work, and there isn’t any wasted “dead time” or thread sync issue… this is just the best you can do with that GPU on that massive (high resolution) framebuffer. And of course, there’s a little bit of “slop” in the timing due to thread scheduling, so you’re really working with a window of ~10ms for each frame. Even if you miss the deadline by <1ms, you miss the deadline.
Suffice it to say… we did have some sync issues in the past, but I spent a lot of time earlier this year resolving them. What we have now is the best we can do from the perspective of thread scheduling and GPU sync points. The issues I’m seeing now come down to how specific shaders are implemented and what Unity spends its time doing on the GPU during that 11ms window. Things like shadows and the default skybox are surprisingly expensive on this platform, and you may need to investigate alternatives to our default solutions and settings for common render features. In the sample scene, I ended up switching all the materials to the SimpleLit shader and creating a more lightweight custom transparent shader for the room mesh, since the default Lit shader was so expensive on such a detailed mesh. If you’re using an older version of the sample scene, you may be missing these optimizations.
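If you want to try the same material swap in your own project, here’s a minimal sketch of one way to do it at runtime, assuming URP (the shader path is URP’s standard name for Simple Lit; everything else is just an example, not how the sample itself does it):

```csharp
// Minimal sketch, assuming URP: switch every child renderer's material over to
// Simple Lit to avoid the cost of the default Lit shader on detailed meshes.
using UnityEngine;

public class UseSimpleLit : MonoBehaviour
{
    void Awake()
    {
        // Shader.Find only works at runtime if the shader is included in the build
        // (e.g. via Always Included Shaders or a material that references it).
        var simpleLit = Shader.Find("Universal Render Pipeline/Simple Lit");
        foreach (var r in GetComponentsInChildren<Renderer>())
            r.material.shader = simpleLit; // .material creates per-renderer instances; use .sharedMaterial to edit the asset
    }
}
```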
Something else I discovered during this process is that it is impossible to avoid frame drops while using Screen Mirroring. I mentioned this in the post I linked above, but it bears repeating. Now that we’re allowed to keep the screen mirror window open in immersive spaces (as of visionOS 2), I tend to work with the screen mirror open. The GPU needs to do some work to update that window, which causes a stall while Unity is trying to render, and Unity misses the deadline. Even with the simple Xcode Swift template set up for Metal, if I make a cube oscillate back and forth I see glitches as long as I have the mirror window open. I lost a lot of time trying to fix phantom frame pacing issues related to this before I figured it out. Once I started testing without screen mirroring running, I was able to get that same test scene in Unity (just an oscillating cube on a black background) running smoothly without glitches. You may also have other apps running in the background that cause GPU stalls or interrupt Unity’s CPU threads, causing it to miss the deadline.
This is based on the design of Apple’s CompositorServices API. The OS will block your render thread until it is ready to receive the next frame, and that window only comes around every 11ms. If you miss the deadline, you can’t submit the next frame for another 11ms. The only way to give the app more time is to call cp_layer_renderer_set_minimum_frame_repeat_count with a whole number >1. This will double, triple, etc. the time between frames to 22ms, 33ms, and so on. Thus, you can only target a frame rate of 90Hz, 45Hz, 30Hz, etc. Even if you don’t set this explicitly, you can’t actually render at a frame rate between those values. You might end up with inconsistent frame timings that average out to a frame rate between 90 and 45Hz (for example, 11ms, 11ms, 22ms, 11ms…), but there’s just no way to submit a frame that took 15ms to render. We end up being blocked until the next deadline, resulting in a delta time of 22ms.
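Here’s a tiny worked example of how that quantization plays out (plain arithmetic, not an actual API call), assuming a 90Hz cadence of roughly 11.1ms between submission windows:

```csharp
// Illustrative arithmetic only: CompositorServices' fixed submission windows
// snap the effective frame time up to the next multiple of the frame interval.
// The 90 Hz cadence and the render times below are just example numbers.
using System;

public static class FramePacingExample
{
    const double FrameInterval = 1000.0 / 90.0; // ~11.1 ms between submission windows

    // A frame that misses its window waits for the next one, so the effective
    // delta time is always a whole multiple of the frame interval.
    static double EffectiveDeltaTime(double renderTimeMs) =>
        Math.Ceiling(renderTimeMs / FrameInterval) * FrameInterval;

    public static void Main()
    {
        foreach (var renderMs in new[] { 9.0, 11.0, 15.0, 23.0 })
            Console.WriteLine($"{renderMs} ms of work -> {EffectiveDeltaTime(renderMs):F1} ms delta time");
        // 9 ms  -> 11.1 ms (90 fps)
        // 15 ms -> 22.2 ms (45 fps)
        // 23 ms -> 33.3 ms (30 fps)
    }
}
```

In other words, a frame that takes 15ms of work costs you a full 22ms of delta time, which is why the achievable rates cluster at 90, 45, 30Hz and so on.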
Bear in mind that even when you limit Unity rendering to 45FPS, the device is still updating the head pose and re-projecting the current frame at 90FPS. Head motion remains smooth, but you may notice animations (like the moving particles) updating less frequently. This is the case regardless of target frame rate.
Please do. There may be a project setting or package interaction that we’re not seeing in our test projects, and even if the project you upload turns out to be exactly the same as our test project, that at least rules out something like that being the cause of the issue. As I said, I see <90FPS on the sample scene in its default configuration, so that is expected. Try trimming it down to just the features you are interested in using, and see if that can maintain 90FPS. Make sure you turn off Screen Mirroring. If you are still having issues, then please do submit a bug report with your modified sample scene (or, even better, your actual project) so we can help identify any potential bugs on our end and advise on how to improve performance.
Thanks for reaching out, and good luck!