visionOS - Unstable framerate in Metal with passthrough

I’m sure you have noticed that the framerate is unstable even in the sample Metal app of the visionOS 2 template.

It’s easy to see: just open up Metal Sample - URP and wave your hand around. You will notice that the tracking points have “hiccups”; quite often they lag behind, only to abruptly catch up the next frame.

At first I thought this was just a hand-tracking issue, but that’s not the case: just press play on the particles UI and you will see the balls are choppy too.

The same thing happens with my own project, and it’s rather bad. Is there any workaround, or any info on when this may be fixed?

Using visionOS 2 (non-beta), visionOS plugin 2.0.4; the sample is 2.0.0-pre.11.

Thanks!

Hey there! Sorry to hear that you’re having trouble. What version of Unity are you using? We made a bunch of fixes for frame pacing that went into the visionOS XR package and core Unity this year. If you’re using 2.0.4, which sets a minimum version of 6000.0.22f1, you should be on a version with those fixes, but it’s worth checking. The hiccups you are seeing are likely a missed frame deadline, which can happen for a variety of reasons. It’s unfortunate that frame drops cause such an obvious glitch, but the only thing you can do about it is get your per-frame time down below 11ms, or set an Initial Target Frame Rate below 90Hz. Changing the target frame rate lets the system know that you don’t expect to hit 90Hz, and it holds each frame in view for two (or more) refresh intervals, hopefully reducing glitches.

As for why you’re seeing hiccups, are you using AR features like plane detection and meshing? Are you able to enable/disable those features on demand rather than having them constantly running? In the latest version of the samples, you should see an FPS counter and some extra buttons to let you turn AR features on and off. Disabling the skybox also claws back a surprising amount of GPU time, so you might consider a simpler skybox shader, etc. As with any performance problem, the first step is to open the profiler and see where we’re spending most of our time: is the app CPU or GPU bound, etc.
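If it helps, toggling on demand can be as simple as disabling the ARFoundation managers. Here’s a minimal sketch (the class name and wiring are mine; adapt to however your scene references its managers):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Turns expensive AR features on only when they're needed, instead of
// paying for plane detection and meshing on every frame.
public class ARFeatureToggle : MonoBehaviour
{
    [SerializeField] ARPlaneManager m_PlaneManager;
    [SerializeField] ARMeshManager m_MeshManager;

    public void SetARFeaturesEnabled(bool featuresEnabled)
    {
        // Disabling the managers stops the underlying subsystems.
        m_PlaneManager.enabled = featuresEnabled;
        m_MeshManager.enabled = featuresEnabled;

        // Existing plane visualizers still cost transparent fill rate,
        // so hide those trackables too while the features are off.
        m_PlaneManager.SetTrackablesActive(featuresEnabled);
    }
}
```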

Are you seeing these issues for just the normal sample, or do you have a particular scene/project that you would like to get down below 11ms? If you can report a bug with a specific project attached, we might be able to help you improve frame pacing.

For more information, you may want to check out this thread where I was working through similar issues with other users. Thanks again for reaching out, and good luck!

I’m on Unity 6.0.23f1

The thing is, this happens with the sample app, which is already barebones. Turning mapping and everything else off still makes no difference. I seriously doubt the problem is that it’s taking longer than 11ms.

Regarding changing the target frame rate… I’m not sure why your team even offers that option. It goes from 90 straight to 45, and no XR app should run at 45; it would be seriously bad.

Do you really need me to upload the project as a bug report? I mean, it’s the template. Don’t you see this issue? Do the balls from the particles flow smoothly for you?

If I build the sample scene and turn off meshing, plane detection, and skybox, I generally see things running at a solid 90FPS. I can confirm that, by default, the average frame rate dips down to about 75FPS, depending on what room I’m in (bigger rooms result in more planes and AR mesh triangles). Maybe we shouldn’t set that scene up to run all of the AR features by default, but we decided to prioritize demonstrating all of the features above keeping a smooth frame rate. In a real app, I would design the flow to only engage AR features one at a time and only as needed. For example, the sample is constantly running image tracking even though most apps probably won’t need to. In most cases, I would expect an app to run AR features at startup with lightweight visuals, turn off the AR features, and then enable/load the more expensive graphical elements once the room has been scanned, or planes detected, etc.
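To sketch what I mean by that flow (hypothetical: ScanPhaseController and the fixed scan window are my own stand-ins, not part of the sample):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical startup flow: scan first with lightweight visuals,
// then stop the AR features before enabling the expensive content.
public class ScanPhaseController : MonoBehaviour
{
    [SerializeField] ARPlaneManager m_PlaneManager;
    [SerializeField] GameObject m_ExpensiveContent;
    [SerializeField] float m_ScanDuration = 10f; // assumption: a fixed scan window

    IEnumerator Start()
    {
        // Phase 1: detect planes while the scene is still cheap to render.
        m_PlaneManager.enabled = true;
        yield return new WaitForSeconds(m_ScanDuration);

        // Phase 2: stop paying the per-frame cost of plane detection.
        m_PlaneManager.enabled = false;

        // Phase 3: spend the reclaimed frame time on the real visuals.
        m_ExpensiveContent.SetActive(true);
    }
}
```

In a real app you’d probably gate phase 2 on some scan-quality condition rather than a timer, but the shape of the flow is the same.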

Although the scene doesn’t look like much, the meshing and plane tracking end up creating a lot of transparent fragments, which are expensive to render. I was also kind of surprised at how slow the default skybox shader is on visionOS. It’s not great that such a simple scene runs into performance issues, but based on my analysis it really is taking up the full 11ms each frame between CPU/GPU work, and there isn’t any wasted “dead time” or thread sync issue… this is just the best you can do with that GPU on that massive (high resolution) framebuffer. And of course, there’s a little bit of “slop” in the timing due to thread scheduling, so you’re really working with a window of ~10ms for each frame. Even if you miss the deadline by <1ms, you miss the deadline.

Suffice it to say… we did have some sync issues in the past, but I spent a lot of time earlier this year resolving them. What we have now is the best we can do from the perspective of thread scheduling and GPU sync points. The issues I’m seeing now come down to how specific shaders are implemented and what Unity spends its time doing on the GPU during that 11ms window. Things like shadows and the default skybox are surprisingly expensive on this platform, and you may need to investigate alternatives to our default solutions and settings for common render features. In the sample scene, I ended up switching all the materials to the Simple Lit shader and creating a more lightweight custom transparent shader for the room mesh, since the default Lit shader was surprisingly expensive on such a detailed mesh. If you’re using an older version of the sample scene, you may be missing these optimizations.
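If you’re doing a similar materials pass on your own project, the swap itself is mechanical. Here’s a rough one-off sketch (“Universal Render Pipeline/Simple Lit” is the URP shader path; spot-check every material afterwards, since properties only carry over where names match):

```csharp
using UnityEngine;

// One-off helper: point every renderer's materials at URP Simple Lit,
// which skips the heavier PBR paths of the default Lit shader.
public class SimpleLitSwapper : MonoBehaviour
{
    void Start()
    {
        var simpleLit = Shader.Find("Universal Render Pipeline/Simple Lit");
        foreach (var sceneRenderer in FindObjectsByType<Renderer>(FindObjectsSortMode.None))
        {
            foreach (var material in sceneRenderer.materials)
                material.shader = simpleLit;
        }
    }
}
```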

Something else I discovered during this process is that it is impossible to avoid frame drops while using Screen Mirroring. I mentioned this in the post I linked above, but it bears repeating. Now that we’re allowed to keep the screen mirror window open in immersive spaces (as of visionOS 2), I tend to work with the screen mirror open. The GPU needs to do some work to update that window, which causes a stall while Unity is trying to render, and Unity misses the deadline. Even for the simple Xcode Swift template set up for Metal, if I set up a cube to oscillate back and forth, I see glitches as long as I have the mirror window open. I lost a lot of time trying to fix phantom frame pacing issues related to this before I figured it out. Once I started testing without screen mirroring running, I was able to get that same test scene in Unity (just an oscillating cube on a black background) running smoothly without glitches. You may also have other apps running in the background causing GPU stalls or interrupting Unity’s CPU threads, causing it to miss the deadline.

This is based on the design of Apple’s CompositorServices API. The OS will block your render thread until it is ready to receive the next frame, and that window only comes around every 11ms. If you miss the deadline, you can’t submit the next frame for another 11ms. The only way to give the app more time is to call cp_layer_renderer_set_minimum_frame_repeat_count with a whole number >1. This will double, triple, etc. the time between frames to 22ms, 33ms, etc. Thus, you can only set a target frame rate of 90Hz, 45Hz, etc. Even if you don’t set this explicitly, you can’t actually render at a frame rate between those values. You might end up with inconsistent frame timings that average to a frame rate between 90 and 45Hz (for example, 11ms, 11ms, 22ms, 11ms…), but there’s just no way to submit a frame that took 15ms to render. We end up being blocked until the next deadline, resulting in a delta time of 22ms.
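A toy model of that quantization, just to make the arithmetic concrete (my own illustration, not an actual API):

```csharp
using System;

static class FramePacing
{
    // The display refreshes every ~11.1ms at 90Hz.
    const double RefreshIntervalMs = 1000.0 / 90.0;

    // A frame can only be presented on a refresh boundary, so its effective
    // duration is the render time rounded *up* to the next refresh interval.
    static double EffectiveFrameTimeMs(double renderTimeMs) =>
        Math.Ceiling(renderTimeMs / RefreshIntervalMs) * RefreshIntervalMs;

    static void Main()
    {
        Console.WriteLine(EffectiveFrameTimeMs(10.0)); // ~11.1: made the deadline
        Console.WriteLine(EffectiveFrameTimeMs(15.0)); // ~22.2: held for two refreshes
    }
}
```

Even though the 15ms frame only missed the deadline by about 4ms, the user experiences a full 22ms delta; that’s why missing by even a hair is so visible.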

Bear in mind that even when you limit Unity rendering to 45FPS, the device is still updating the head pose and re-projecting the current frame at 90FPS. Head motion remains smooth, but you may notice animations (like the moving particles) updating less frequently. This is the case regardless of target frame rate.

Please do. It may be the case that there’s a project setting or package interaction that we’re not seeing in our test projects, and even if the project you upload is exactly the same as our test project, it rules out anything like that being the cause of the issue. As I said, I see < 90FPS on the sample scene in its default configuration, so that is expected. Try trimming it down to just the features you are interested in using, and see if that can maintain 90FPS. Make sure you turn off Screen Mirroring. If you are still having issues, then please do submit a bug report with your modified sample scene (or, even better, your actual project) so we can help identify any potential bugs on our end, and advise on how to improve performance.

Thanks for reaching out, and good luck!

OK I uploaded a bug report: IN-89468

As mentioned earlier, I’m turning everything off (planes, mesh, environment probe, etc.), but it really makes no difference. It still stutters quite often, and it’s very visible when you turn on the particles or wave your hand around and watch the tracking points.

The problem is that native visionOS apps don’t have this issue; it’s just Unity Metal apps, and that’s going to be a bit of an issue for our customers.

Thanks as always!

Thanks for the report! I’m AFK right now but I did want to confirm something… You mentioned that you’re using the sample imported from 2.0.0-pre.11. I can’t remember if that has my rendering optimizations (simpler shaders, mostly). Have you tried the sample from 2.0.4?

Anyway I’ll check out your project when I’m back in action and let you know what I find.

Cheers!

I updated the sample to 2.0.4. I do notice that it changed, but the performance issue is still the same. You can try updating it in the project I attached to the bug report as well, if you’d like.


Right out of the gate, I noticed you’re hitting that MSAA issue from our other thread. Disabling RenderGraph to work around that issue improved things significantly. I assume the workaround I shared will have a similar effect; one way or another, we want to kill that warning.

If I do that and disable everything at runtime (meshes, planes, shiny sphere, and skybox), I still see glitches every second or two, but it’s much smoother. If I also disable MSAA and shadows, I no longer see any glitches. Also, remember to disable Metal Frame Capture and API validation when you’re looking at this stuff (as well as Screen Mirroring and any other apps that may be running in the background), since they will interfere with the results. Disabling foveated rendering also (ironically) boosts performance, because it forces us to render to a much smaller frame buffer, which means less work in the fragment stage, where we are currently seeing the bottleneck. Of course, this results in visibly low-res output, so you’ll probably want to crank up the MSAA, which gets us right back where we started: bottlenecked on fill rate.
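For A/B testing those two from script rather than from project settings, something like this works (a sketch assuming your active pipeline asset is a UniversalRenderPipelineAsset; both properties can be set at runtime):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Quick A/B toggle for the two biggest fill-rate costs in this scene.
public class FillRateToggle : MonoBehaviour
{
    public void SetExpensiveFeatures(bool featuresEnabled)
    {
        if (GraphicsSettings.currentRenderPipeline is UniversalRenderPipelineAsset urpAsset)
        {
            // MSAA multiplies fragment output on an already huge framebuffer.
            urpAsset.msaaSampleCount = featuresEnabled ? 4 : 1;

            // A shadow distance of zero effectively disables shadow rendering.
            urpAsset.shadowDistance = featuresEnabled ? 50f : 0f;
        }
    }
}
```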

Here’s an Instruments trace of the sample scene with all of the runtime-toggleable things (planes, meshes, etc.) turned off, but MSAA and shadows still enabled. As you can see, we’re fitting it all in just within 11ms, but we still see frames in the display instrument reported as “Old.” They’re not all old (some of the ones that aren’t filtered here are “Good”), but these are frames where we missed the deadline.

And here’s a trace of the same scene, runtime stuff turned off, but also with MSAA and shadows disabled. As you can see, there’s a healthy buffer between when we finish rendering one frame and when we start rendering the next one. It may be a little hard to read this graph, but observe how the long bars down in the fragment stage are much shorter compared to the duration of a single frame in the display instrument. We’re doing less work (because no shadows) and we’re doing it on much smaller buffers (because no MSAA). This kind of optimization is the only way to ensure things look smooth on device: either reduce the amount of GPU work needed for each frame, or double the allowed time per frame to 22ms and render at 45FPS.

Aside from just generally trying to improve GPU performance in URP, there’s nothing more I can do to help you here. I assume that you want to use MSAA, shadows, and other nice rendering features, but as it stands you can’t maintain 90FPS on Vision Pro with all of those features enabled at their default settings, even for a simple scene with just a cube and the default skybox. How you get from where you are right now to 90FPS, and what tradeoffs you should make, will depend on your use case. I’ll refer you back to my wall of text on the frame pacing thread for tips and tricks on diagnosing GPU bottlenecks. If you’re comfortable sharing your actual project, I can try to help you decide where we can trim off extra milliseconds here and there, but I’d just be using the same profiling tools available to you. Maybe I have a deeper understanding of our render pipeline, but to be honest I’m not a deep expert on that stuff; I just work here! :laughing:

We’re always working to improve performance, but visionOS is a particularly challenging environment. Quest is very similar (a mobile SoC that needs to render mixed reality at 90FPS) and has similar drawbacks. The default URP settings on Quest don’t quite render at a consistent 90FPS either, but its compositor does a much better job of failing gracefully when you drop frames, so you don’t notice until it’s really bad. It’s possible that we might see improvements from features like GPU Resident Drawer, or from customizing the render pipeline to skip passes we don’t need, but all of this is general performance optimization work. I can’t find any “smoking gun” pointing to something specific to our visionOS player that is outright broken or incorrect (aside from the stuff I fixed back in June/July). If any of those issues had regressed, we would still be seeing dropped frames in the “optimized” build with MSAA and shadows disabled.

I still haven’t gotten a good answer for why moving objects appear to reverse direction or “stop dead” when we drop frames on visionOS, which isn’t a problem on other XR platforms. That behavior (a moving object visibly juddering when you drop a frame) is the same for a barebones Swift Metal app, so it’s something that would need to be fixed by Apple. You can easily see this if you do what I told you not to do earlier and open Screen Mirroring over any app that uses CompositorServices. My test was a simple cube oscillating back and forth (linked in that thread if you’re curious) and when I run that simple app, which couldn’t possibly have a CPU or GPU bottleneck, I can still trigger dropped frames by running Screen Mirroring, and it looks the same as the particles in the sample scene. You see a “glitch in the matrix” and the cube appears to have moved backwards.
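For reference, the moving-object test boils down to something like this (a trivial sketch; the actual test project is linked in that thread):

```csharp
using UnityEngine;

// Minimal repro: a cube oscillating along X. A dropped frame shows up as
// the cube visibly snapping backwards for a single refresh.
public class Oscillator : MonoBehaviour
{
    [SerializeField] float m_Amplitude = 0.5f;
    [SerializeField] float m_Frequency = 1f;

    Vector3 m_StartPosition;

    void Start() => m_StartPosition = transform.position;

    void Update()
    {
        var offset = m_Amplitude * Mathf.Sin(2f * Mathf.PI * m_Frequency * Time.time);
        transform.position = m_StartPosition + Vector3.right * offset;
    }
}
```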

My hunch is that it’s an issue with double/triple buffering. I think you’re seeing an old buffer that the system expects to be a new frame, and it’s already been presented by the time the system knows you missed the deadline, so you see it briefly before a fresh frame shows up again. Anyway, all of this is to say that I don’t think there’s any way to 100% prevent this issue. Apps will inevitably drop frames from time to time, and unfortunately due to how that looks on visionOS, you notice it a lot more. The best we can do is optimize the hell out of our GPU work to try to avoid dropping frames. As much as it feels like accepting defeat, dropping the target frame rate down to 45 might end up resulting in the best experience.

Hey @mtschoen, I appreciate the effort you put in here; I hope your boss notices how great you are! If you’re curious what project I’m working on, it’s called Figmin XR:
https://x.com/FigminXR

I’m just going to have to live with the hiccups; hopefully someday they’ll get better. I wish we could set the framerate to something more reasonable like 70, but oh well.

Thanks again!

Haha, thanks! I’ll keep this one in my back pocket for the next quarterly review. :wink:

That’s awesome! Glad to see you’re helping carry the torch from Tilt Brush. And it’s cool to see that you’re able to tick all the boxes for XR platform support. I’ll check it out on Quest :sunglasses:

Yeah, that’s what I’d do as well. I agree, it would be nice to have more fine-grained control over the frame rate. I can see why Apple set things up the way they did: they want to maintain a steady 90FPS for passthrough, so they don’t let the app control the refresh rate the way you can on iOS with ProMotion (or FreeSync/G-Sync on desktop). If the compositor needs to keep refreshing the screen every 11ms, your only option is to wait for the next refresh, hence “double or nothing” on the frame times.

With that said, they can clearly adjust the display refresh rate for the 96Hz mode (and I think it can also go to 100Hz in PAL regions to sync with 50Hz displays?). They could probably give you the option to drop to 72Hz like on Quest. I’d encourage you to send feedback to Apple through the Feedback Assistant requesting more fine-grained control over refresh rate. For now, we’re stuck making tradeoffs within the constraints we’ve been given.

We’re still exploring ways to improve performance for URP on Vision Pro. As I said, upcoming features like RenderGraph may yield some improvements, and I might be able to come back with better advice about how to dial in the settings for shadows and other render features to reduce the time spent on fragment shaders. I’ll be sure to share anything I find here.

Thanks again for the bug report, and for coming along on this journey with us. Cheers!