Performance and crash issues on CAD models with medium/large transform hierarchies

Hi PolySpatial Team,

Over the last few weeks we have observed that “smaller” CAD models with a medium triangle count (2 million triangles) but around 1,800 transforms crash the PolySpatial app.
Other models with a higher triangle count (e.g. 8 million) but fewer transforms (~1,000) do not crash, but lag heavily when the model is moved via its root transform.

Based on our observations, we assume the transform synchronization from Unity to RealityKit is the bottleneck here,
since models with 7 million triangles but just 100 transforms render much more smoothly (though still not great).
Draw calls are not the problem here; most transforms contain only metadata, with no MeshRenderer or colliders. All of the models we tested render easily at 60 FPS on low-end Android and iOS mobile devices.

When we move the root transform, we know Unity recalculates the whole transform hierarchy (localPosition). Are these recalculations of the child transforms transferred to the RealityKit instance, or is only the transform that is actually moved synced to the RealityKit transform graph?

Can you tell us how we can diagnose or improve transform synchronization?

For testing, we tried merging meshes to reduce the transform count. However, we get a crash because the resulting mesh is too large.
Is there a limit to how large Unity meshes can be in PolySpatial?

IOSurface creation failed: e00002bd parentID: 00000000 properties: {
    IOSurfaceAllocSize = 2931490816;
    IOSurfaceName = CoreRE;
    kIOSurfaceName = REIOSurfaceMeshPayload;
}
makeIOSurfaceMeshBuffer: IOSurfaceCreate failed
assertion failure: 'success' (replaceContentsWithMeshResourceDefinition:line 344) Unable to replace MeshAsset contents with a model definition
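For context, a minimal sketch of the kind of mesh merging we attempted. This is not our exact code; it assumes a hypothetical `MeshMerger` helper and uses Unity's `Mesh.CombineMeshes` with 32-bit indices so the combined mesh can exceed the default 65k-vertex limit (which may be where a platform-side allocation limit then kicks in):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper: merges all child MeshFilters under a root into one mesh.
// Splitting the result into several combined meshes (e.g. per material, or per
// N million vertices) would be one way to stay under any buffer-size limit.
public static class MeshMerger
{
    public static Mesh CombineChildren(Transform root)
    {
        var filters = root.GetComponentsInChildren<MeshFilter>();
        var combine = new CombineInstance[filters.Length];
        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each child's transform into the vertices, relative to the root.
            combine[i].transform =
                root.worldToLocalMatrix * filters[i].transform.localToWorldMatrix;
        }

        var merged = new Mesh();
        merged.indexFormat = IndexFormat.UInt32; // allow > 65k vertices
        merged.CombineMeshes(combine, mergeSubMeshes: true, useMatrices: true);
        return merged;
    }
}
```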

Best regards
David Schrott


I’m not aware of any hard limit. Merging the meshes is definitely the first thing I would think of to improve performance, so it would be good to know if there’s a limit there.

If it’s possible, it would be great if you could submit this as a bug report and let us know the incident number (IN-#####). It sounds like this example is different from the scenes we’ve typically been using to test, and it would be very helpful to have it as a target for optimization.

Since most of the affected models are customer CAD files, we aren't allowed to share them with any third party. I will check whether we can find a model that exhibits the issue and can be shared with your team.

By the way, we were informed by our Unity PRM that dynamic batching is not supported with PolySpatial. Is this something Apple just hasn't provided yet in RealityKit?

Without dynamic batching, draw calls will also be an issue here. Unfortunately, Xcode Instruments is currently crashing our app, so I can't give a definitive answer on that. Nevertheless, coming back to my main question:

When we move the root transform, we know Unity recalculates the whole transform hierarchy (localPosition). Are these recalculations of the child transforms transferred to the RealityKit instance, or is only the transform that is actually moved synced to the RealityKit transform graph?

So if we move the root node, is only one transform change synchronized, or are 1,800 transform changes (1 root, 1,799 child nodes) sent to the RealityKit instance?
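To make the question concrete, here is a small illustrative Unity sketch (not from the project) showing what moving only the root does to the hierarchy: each child's *world* position changes, but its *local* transform relative to its parent does not. A sync based on local transforms would therefore only need to send the root's change:

```csharp
using UnityEngine;

// Illustrative probe: move only the root and compare the child's
// local vs. world position afterwards.
public class RootMoveProbe : MonoBehaviour
{
    void Start()
    {
        var root = new GameObject("root").transform;
        var child = new GameObject("child").transform;
        child.SetParent(root);
        child.localPosition = new Vector3(1, 0, 0);

        root.position += new Vector3(0, 5, 0); // move only the root

        Debug.Log(child.localPosition); // still (1, 0, 0) — unchanged
        Debug.Log(child.position);      // now (1, 5, 0) in world space
    }
}
```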

I guess it’s more accurate to say that any dynamic batching in RealityKit isn’t under our control, but I would be surprised if they did any dynamic batching, since there are no options to hint that it should take place. I can ask them, though. Even if they don’t do it, it might be possible for us to do some amount of batching automatically, but we would likely need some way of specifying that certain groups of renderers will always stay in the same position with respect to each other.

It should be only one transform synchronized. We send the relative transforms and the parentage, rather than the absolute transforms. However, it’s entirely possible that some process other than synchronization is slowing things down in this case. One that comes to mind is the light and/or reflection probe data, which is resent when the absolute transform changes. You might try disabling light probes and reflection probes on all the MeshRenderers and see if that makes a difference (you’d probably want to do this in script via lightProbeUsage and reflectionProbeUsage, but technically you can set them in the Inspector by enabling debug mode and setting them both to zero).
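A minimal sketch of the script-based suggestion above, assuming a hypothetical `ProbeDisabler` helper, using the `lightProbeUsage` and `reflectionProbeUsage` properties mentioned:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Disable light- and reflection-probe usage on every MeshRenderer under a
// root, so the probe data no longer has to be resent when the absolute
// transform changes.
public static class ProbeDisabler
{
    public static void DisableProbes(Transform root)
    {
        // includeInactive: true also covers currently disabled renderers.
        foreach (var renderer in root.GetComponentsInChildren<MeshRenderer>(true))
        {
            renderer.lightProbeUsage = LightProbeUsage.Off;
            renderer.reflectionProbeUsage = ReflectionProbeUsage.Off;
        }
    }
}
```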


We did identify that these are a major performance hit, and will be disabling them by default in the next bugfix release (either with a switch in the settings to turn them back on, or turning them on automatically only if needed by shader graphs with the PolySpatial Lighting node).

Great to hear, but in our case we didn’t see much performance improvement when we tried it. However, Xcode Instruments with RealityKit Trace is still not giving us appropriate results for frame-time details. Maybe a bug in the new Xcode 15.1 beta 3.

We have observed that moving the CAD model with the volume handle/bar is very smooth and we don’t see any lag. However, moving the model with a pinch, which uses Unity to transform the root node, still lags heavily. So the missing batched draw calls in RealityKit might not be the cause here.

Yes, that makes sense. I suspect the issue is with the Unity update thread, rather than the visionOS rendering. Our next release will include a fix that should help substantially (more than just disabling the light/reflection probes in MeshRenderers with the current version, since that doesn’t remove all the overhead).


Great, looking forward to testing it.

Can you provide a timeline for when we can expect the next release? Is it more likely to be days or weeks?


I can’t give you an exact time frame, but it should be soon.


Just coming back to this: we tried the new settings flag right after the package became available. In conjunction with removing shadows, it improved frame times by over 100 ms, but we still see 100-200 ms frame times.

Does PolySpatial use RealityKit’s MeshInstanceCollection?

In Xcode Instruments, under RealityKit Metrics, we see “3D Render Encoding” marked as the bottleneck, along with 800 draw calls, which seems to be too much for RealityKit.

But under Render Loops we also see the Unity thread varying from 150 ms to 400 ms, so we don’t have a clear picture of where the bottleneck is.

We don’t at the moment, though that’s something we can look into in the future.


Yes, it would be awesome if you could look into this.

Yes, that would be amazing. I’ve managed to get some performance improvements by optimizing some of the Obj-C code in the Xcode project, and other things like fog in PolySpatial.

But as it stands, we are going to be stuck in fully immersive mode, mainly due to performance (and praying that ARKit passthrough eventually happens via AR Foundation when Apple decides to listen to the development community’s needs :upside_down_face:).

Some kind of batching/optimization for lots of meshes, or the ability to use instanced meshes, is a really necessary feature.


Any update here on support for MeshInstanceCollection?
