Hello everyone,
I’m the technical lead on a project at my company and we are discussing a series of technology options. We have investigated several, but to keep it short and relevant to this forum (MR apps), the shortlist is as follows:
- Going native: Swift, SwiftUI, and direct integration with the OS APIs. The app would be a mix of mixed and full immersion.
- Using Unity with PolySpatial: C#, keeping the app logic in Unity and using Unity APIs such as the XR Interaction Toolkit, particle systems, etc. Same MR app, possibly with full immersion (using a dome or sphere).
We would like to use some relatively advanced geometry modifications, such as bending a shape or an “exploding” effect (parts of a model fly outward from its center).
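To give an idea of what we have in mind on the native side, here is a minimal sketch of the exploded-view effect in RealityKit. It assumes the model’s parts are separate child entities; `explode`, the default distance, and the duration are purely illustrative, not something we have built:

```swift
import Foundation
import RealityKit
import simd

/// Exploded-view sketch: pushes each direct child of `model` outward from the
/// model's visual center. Assumes each part is its own child entity.
func explode(_ model: Entity, by distance: Float = 0.2, duration: TimeInterval = 0.6) {
    let center = model.visualBounds(relativeTo: model).center
    for part in model.children {
        var direction = part.visualBounds(relativeTo: model).center - center
        // Parts sitting exactly at the center get an arbitrary direction.
        if simd_length(direction) < 1e-5 { direction = [0, 1, 0] }
        direction = simd_normalize(direction)

        var target = part.transform
        target.translation += direction * distance
        part.move(to: target, relativeTo: model, duration: duration, timingFunction: .easeInOut)
    }
}
```

Bending a shape would be harder, since it needs vertex-level deformation rather than just moving whole entities around.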
We also want to play geometry-cached animations (baked results of simulations we can’t compute in real time).
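Since RealityKit does not seem to have a geometry-cache player, the workaround we have sketched is to bake each simulation offline into one mesh per frame and swap the mesh at runtime. This is only a sketch under that assumption; `BakedMeshPlayer` and the per-frame mesh format are hypothetical:

```swift
import Foundation
import RealityKit

/// Sketch of baked-simulation playback: swaps a pre-baked MeshResource onto
/// the target entity every tick, looping over the frames.
final class BakedMeshPlayer {
    private let frames: [MeshResource]   // one mesh per baked frame (assumption)
    private let frameRate: Double
    private var elapsed: TimeInterval = 0

    init(frames: [MeshResource], frameRate: Double = 30) {
        self.frames = frames
        self.frameRate = frameRate
    }

    /// Call from a per-frame update, e.g. a SceneEvents.Update subscription.
    func update(deltaTime: TimeInterval, target: ModelEntity) {
        guard !frames.isEmpty else { return }
        elapsed += deltaTime
        let index = Int(elapsed * frameRate) % frames.count
        target.model?.mesh = frames[index]
    }
}
```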
For materials, we want to change them in code (mostly colors or textures).
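The material changes we need are roughly at this level (standard RealityKit materials; the texture asset name is a hypothetical placeholder):

```swift
import RealityKit
import UIKit

/// Sketch: swap a solid tint color onto a model at runtime.
func recolor(_ entity: ModelEntity, to color: UIColor) {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: color)
    entity.model?.materials = [material]
}

/// Sketch: swap a texture from the app bundle onto a model at runtime.
func retexture(_ entity: ModelEntity, with assetName: String) throws {
    let texture = try TextureResource.load(named: assetName)  // assetName is hypothetical
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(texture: .init(texture))
    entity.model?.materials = [material]
}
```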
Our physics requirements are modest, but we would like objects to “float” in space.
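By “float” we mean something like a dynamic body with gravity turned off and some damping, roughly like this sketch (the damping values are arbitrary):

```swift
import RealityKit

/// Sketch: make an entity drift freely instead of falling.
func makeFloating(_ entity: ModelEntity) {
    // A collision shape is required for the body to take part in the simulation.
    entity.generateCollisionShapes(recursive: true)

    var body = PhysicsBodyComponent(massProperties: .default, material: .default, mode: .dynamic)
    body.isAffectedByGravity = false   // no falling
    body.linearDamping = 2.0           // arbitrary: slow any drift
    body.angularDamping = 2.0
    entity.components.set(body)
}
```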
In-house we have considerably more experience with native development than with Unity, so this is an important factor for us.
My main question is: what is the appeal of PolySpatial in this case? One important limitation of RealityKit is that complex animations are not possible; it is a limited renderer (no blend shapes, a limited particle system, and playing geometry caches does not seem possible). I expected Unity to help with these limitations, but as far as I can tell PolySpatial would not, because it ultimately renders through RealityKit and therefore inherits many of its limitations.
I can see the benefits if you are already familiar with Unity/C#, want to use P2D, or are porting an existing Unity project, among other cases. Outside of those, though, the appeal seems limited, because you have to move into Unity while still hitting RealityKit’s limitations anyway.
Am I understanding this correctly? Can I still benefit from PolySpatial for better animation support, such as geometry caching or geometry modifiers?