Using the same build of our app, we saw a massive performance drop on visionOS 2.0, to the point that the app is no longer usable.
We are using Unbounded mixed reality mode with a high-polygon-count model (700k). That is more than the recommendation, but it worked great on visionOS 1.3: it rendered at 45Hz, but it was smooth.
We did a profiling comparison and found that the big difference is the call to “3D render encoding”, which takes almost twice as long.
We did some optimization to try to get back on our feet. We were able to get back to 45Hz by optimizing the 3D model, disabling transparency, etc., but it still feels much worse than 1.3.
When moving the head around near a 3D object, we see a huge border of pixels around the 3D model that does not get updated. This was present in 1.3, but it was much less severe.
I know you’ll say the issue is on Apple’s side, and we already have a ticket open with them, but we were wondering whether Unity is aware of this issue and whether you know of a workaround.
When you say “same build”, are you talking about literally the same build out of Xcode running on devices with one OS version or the other? Or are you building the same project using different SDKs?
Performance regressions like this could be caused by any number of things. If it is literally the same build, as you say, then it couldn’t be anything that we changed on the Unity side. That doesn’t mean we can’t do anything to mitigate the issue, though.
Hm… This is a separate issue, I think. Can you explain what you mean by “smooth with RealityKit and jittery with Metal”? Are you using Hybrid mode to switch between RealityKit and Metal, or are you building the same scene once with RealityKit app mode and once with Metal app mode? Was this a regression with visionOS 2.0, or are you seeing this on both 1.x and 2.x versions?
In both of these cases, we’ll need a project to reproduce the issue. Please submit a bug report (Help > Report a Bug...) and attach your project so that we can build it and replicate the issue on our end. If you are unable to share your whole project, it might be possible to replicate the issue in a fresh project that includes just the minimal set of assets, or a stress-test scene that reproduces it. If the bug reporter is too slow, you can use a secure file-sharing service like Google Drive and either paste a link in this thread or send it to me in a DM. For submitting projects outside of the bug reporter, I highly recommend you delete or temporarily remove the Library folder from your project, as it is usually quite large and can be regenerated on my end.
Yes, I’m talking about the same binary, installed through TestFlight on both devices.
If we significantly reduce the number of objects in the scene, it fixes the issue (going from 700k polygons to 250k).
But this is not a solution. The thing is, on visionOS 1.3 it works great with 700k, so why does it suddenly not work in 2.0…
Also, we can’t share the project; I just wanted to know whether anyone at Unity is aware of this regression.
I’ll see if I can create a new project with a dummy model just to load the same quantity of meshes and objects. If I can reproduce the issue with that, I’ll share it with you. Thx
No, I’m using either only RealityKit or only Metal mode. But our move to Unity 6 is delayed because Cesium is broken in the latest versions, so we’ll get back to this once Cesium is fixed.
As you mentioned, since this is apparently a regression on Apple’s side, our first response would be to request that you submit feedback to them about it (as you have done). However, I do think there are things we could potentially do to mitigate the issue. The first thing that comes to mind is pure (GPU) memory usage. For example, we know there’s a 256MB limit on mesh data that we’ve encountered elsewhere, and perhaps the slowdown is related to that. We plan to make heavier use of the LowLevelMesh API in the future, which provides more control over how mesh data is laid out in memory. It’s possible that we could use that to reduce the memory size by omitting attributes or changing their formats.
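In the meantime, one thing you could experiment with on the Unity side is shrinking the per-vertex memory footprint yourself. To be clear, this is not the LowLevelMesh API (that lives on the RealityKit side); it’s just a rough sketch using Unity’s `Mesh.SetVertexBufferParams` to store secondary attributes at half precision. Whether PolySpatial preserves these formats when it hands meshes over to RealityKit is an assumption here, and changing the layout does not convert the existing vertex data, so this is easiest to apply at import time:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class VertexMemoryUtil
{
    // Rough sketch: declare a more compact vertex layout so normals, tangents,
    // and UVs are stored at half precision while positions stay full precision.
    // Caveat: SetVertexBufferParams changes the layout without converting the
    // existing vertex data, so you must rewrite it afterwards (for example via
    // Mesh.SetVertexBufferData), or apply this in an AssetPostprocessor.
    public static void UseCompactLayout(Mesh mesh)
    {
        mesh.SetVertexBufferParams(mesh.vertexCount,
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
            // Float16 attributes must have a dimension of 2 or 4, hence the padding.
            new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float16, 4),
            new VertexAttributeDescriptor(VertexAttribute.Tangent, VertexAttributeFormat.Float16, 4),
            new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float16, 2));
    }
}
```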
Just curious, though: have you tried splitting the 700k mesh up into separate meshes yourself? I wonder how that would affect performance, though since we’ve also seen performance issues with high numbers of entities, it’s not clear that it would be an overall win.
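If you want to try the split in a script, here’s a minimal sketch, assuming the source mesh is readable (Read/Write enabled) and has normals and one UV set. Vertices are duplicated per chunk for simplicity; a production version would remap shared vertices and carry over every attribute:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class MeshSplitter
{
    // Splits one large mesh into chunks of at most maxTriangles triangles each.
    // Ignores submesh/material boundaries (Mesh.triangles concatenates all submeshes).
    public static List<Mesh> Split(Mesh source, int maxTriangles = 65000)
    {
        var verts = source.vertices;
        var normals = source.normals;
        var uvs = source.uv;
        var tris = source.triangles;
        var chunks = new List<Mesh>();

        for (int start = 0; start < tris.Length; start += maxTriangles * 3)
        {
            int count = Mathf.Min(maxTriangles * 3, tris.Length - start);
            var chunkVerts = new Vector3[count];
            var chunkNormals = new Vector3[count];
            var chunkUvs = new Vector2[count];
            var chunkTris = new int[count];

            // Copy each referenced vertex; shared vertices are duplicated for simplicity.
            for (int i = 0; i < count; i++)
            {
                int v = tris[start + i];
                chunkVerts[i] = verts[v];
                chunkNormals[i] = normals[v];
                chunkUvs[i] = uvs[v];
                chunkTris[i] = i;
            }

            // Chunks can exceed 65535 vertices because of the duplication above.
            var chunk = new Mesh { indexFormat = IndexFormat.UInt32 };
            chunk.vertices = chunkVerts;
            chunk.normals = chunkNormals;
            chunk.uv = chunkUvs;
            chunk.triangles = chunkTris;
            chunks.Add(chunk);
        }
        return chunks;
    }
}
```

Each chunk can then go on its own MeshFilter/MeshRenderer, keeping in mind the entity-count caveat above.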