Mixed Reality Performance, settings and recommendations

Hey guys,

Unity 2022.3.32
PolySpatial 1.2.3
Application.targetFrameRate set to 90
VSync count = 0
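
In code, those two settings amount to (set from a startup script):

```csharp
// Standard Unity APIs; PolySpatial also defaults targetFrameRate to 90.
Application.targetFrameRate = 90;
QualitySettings.vSyncCount = 0;
```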

We are coming to the end of porting a game to visionOS and are struggling to get the performance at least up to the frame rate of the other platforms, e.g. Quest and PCVR (70-80 fps).

These other platforms do have static batching, but our meshes have already been through an optimization/merging pass.

We added a simple FPS counter. Our simple menu scenes and intro gameplay scenes do achieve 90 fps, but the main gameplay scenes, which contain environments (walls, floors, props, etc.), never go above 45 fps (unless we delete the entire environment :slight_smile: )
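
(For anyone wanting the same readout, the counter can be as simple as this sketch; the class name and smoothing factor are ours, not from the project:)

```csharp
using UnityEngine;

// Minimal smoothed FPS counter (hypothetical sketch, not the project's actual counter).
public class FpsCounter : MonoBehaviour
{
    float _smoothedDelta = 1f / 90f;

    void Update()
    {
        // Exponential moving average keeps the readout from flickering.
        _smoothedDelta = Mathf.Lerp(_smoothedDelta, Time.unscaledDeltaTime, 0.1f);
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 200, 30), $"{1f / _smoothedDelta:0.} fps");
    }
}
```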

  1. How many (and which) of the quality and graphics settings have any effect on the game at all, given that rendering is handed off to RealityKit? For example, changing the antialiasing quality does not seem to have any effect.

  2. Is it useful to refer to the RealityKit performance docs and follow those recommendations (e.g. flattening entity hierarchies), or does Unity do any scene optimization under the hood? Does a Unity GameObject get converted to a RealityKit entity?

  3. Does the next PolySpatial update contain any performance improvements, and if so, is there an ETA on this update? (We do have particles set to Bake to Mesh, and have had to implement a workaround to avoid the "entity not found" crash until the next update.)

  4. Are there any Unity recommendations on scene setup, particularly with regard to PolySpatial's tracking of objects? We have profiled via Instruments and the Unity Profiler, and in the struggling scenes the CPU spends twice as much time in "Render Systems" as in everything else combined, which I believe to be the tracking/updating of mesh objects?

NOTE to other developers: a few of our levels didn't load at all until we implemented delayed activation of GameObject hierarchies in the scene. The device would reset, rather than crash in Xcode, with a vague "apply fence fx failed" message. It seems that the device could not process all of these objects at once.
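
(For anyone trying the same workaround, the delayed activation amounts to something like this coroutine; the names and one-root-per-frame pacing are ours, so tune to taste:)

```csharp
using System.Collections;
using UnityEngine;

// Sketch of staggered activation: enable heavy hierarchies a few per frame
// so PolySpatial/RealityKit isn't asked to create every entity at once.
public class DelayedActivator : MonoBehaviour
{
    [SerializeField] GameObject[] hierarchyRoots; // roots left inactive in the scene
    [SerializeField] int rootsPerFrame = 1;       // pacing is an example value

    IEnumerator Start()
    {
        for (int i = 0; i < hierarchyRoots.Length; i++)
        {
            hierarchyRoots[i].SetActive(true);
            if ((i + 1) % rootsPerFrame == 0)
                yield return null; // spread the work across frames
        }
    }
}
```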

I know there is a lot here to digest and hopefully answer, but any advice or nuggets of wisdom would be greatly appreciated.



Thanks for the feedback!

Right. Generally speaking, graphics settings will not have any effect, because all rendering is performed by RealityKit (which doesn't really expose any global rendering options in MR mode). The only exception that comes to mind is Application.targetFrameRate, which PolySpatial sets to 90 by default. Otherwise, the options relevant to PolySpatial are generally contained in the PolySpatial player settings.

Yes, each Unity GameObject becomes a RealityKit entity. Unity doesn’t do any scene optimization under the hood, and the guidelines for RealityKit definitely apply (they’re pretty general, IIRC, and basically apply to any 3D engine). One slight exception to this is that, in the 2.X version, we do support a limited form of static batching (either with respect to the scene root, or to a shared ancestor): objects flagged as static will be merged into combined meshes, which can improve performance significantly for certain kinds of scenes. That support didn’t quite make it into 1.2.3, but will be backported in the next release. However, you can also do this kind of static batching yourself, manually, by merging objects in the scene either during the build process or using an editor tool.
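
(If you want to try the manual route in the meantime, Unity's built-in StaticBatchingUtility can do the merge at scene load; this is a generic Unity API rather than anything PolySpatial-specific, so whether the combined meshes transfer as expected is worth verifying on device:)

```csharp
using UnityEngine;

// Sketch: attach to an environment root to merge its child meshes
// into combined batches at load time. StaticBatchingUtility.Combine
// groups children that share materials; behavior under PolySpatial
// should be verified on device.
public class CombineOnLoad : MonoBehaviour
{
    void Start()
    {
        StaticBatchingUtility.Combine(gameObject);
    }
}
```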

No specific ETA, but we are hard at work on it now and expect to release soon. There are some performance improvements (such as the static batching backport), and we are looking at improving performance more using new features in visionOS 2.0 (such as LowLevelMesh and LowLevelTexture, which should allow us to, for example, improve the performance of bake-to-mesh particles).

I don’t know that we have any specific recommendations at this time. However, I would definitely encourage you to submit a repro case (or multiple cases) as a bug report (and let us know the incident number: IN-#####) so that we can investigate your performance issues directly. It’s always helpful for us to have more real-world examples to test and iterate on.

Thank you for the detailed response @kapolka!

For the ‘Rendering’ section of the Player settings, is it just the Color Space setting that will have any effect on Apple Vision Pro?

Thank you

Color Space will have an effect; we only support the Linear color space at the moment.

Static Batching will have an effect on 2.X (and the next version of 1.X, since we’re backporting that functionality): if you have it enabled, we merge objects with the same materials into combined meshes if they’re flagged as static for batching.

Normal Map Encoding, Lightmap Encoding, and HDR Cubemap Encoding will all have an effect on the textures that we send through PolySpatial. AFAIK, either Normal Map Encoding setting should work. Lightmap Encoding and HDR Cubemap Encoding are relevant (only) if you’re using the PolySpatial Lighting node; they should both be Low Quality (dLDR encoding).

Those are the only Rendering settings I’m aware of that have any effect.

Thanks again @kapolka !
