Deciding between PolySpatial and a "pure" native app

Hello everyone,

I’m the technical lead on a project at my company, and we are discussing technological options. We have investigated several, but to keep it short and related to this forum topic (MR apps), the shortlist is as follows:

  • Going native: Swift language, SwiftUI, integration with OS APIs. App will be a mix of mixed and full immersion.
  • Using Unity with Polyspatial: C# language, keep app logic in Unity, use Unity API like XR toolkit, particle systems, etc. MR app, maybe with full immersion (using a dome or sphere).

We would like to use some relatively advanced geometry modifications, like bending a shape or an “exploding” effect (parts of a model explode outward from its center).
Also, we want to use geometry-cached animations (for simulations we can’t process in real time).

For materials, we want to change them in code (mostly colors or textures).

Our physics requirements are not high, but we are considering having objects “float” in space.

In-house we have much more experience with native development than with Unity, so this aspect is quite important for us.

My main question is: what is the appeal of PolySpatial in this case? One important limitation of RealityKit is that complex animations are not possible; it is a limited renderer (no blendshapes, the particle engine is limited, and playing a geometry cache does not seem possible). I expected Unity to help with these limitations, but I found that PolySpatial would not, because it ultimately depends on RealityKit and so inherits many of its limitations.

I can see the benefits if you are already familiar with Unity/C#, or want to use Play to Device (P2D) or port an existing Unity project, among others. But outside of that, the appeal seems limited, because you have to move into Unity while hitting the limitations of RealityKit anyway.
Am I understanding it correctly? Can I still benefit from Polyspatial for things like better animations with features like caching or geometry modifiers?

As you note, there’s no native API for visionOS that PolySpatial has access to that you can’t access without Unity. In particular, I don’t think there’s any benefit regarding caching (you can just as easily load meshes/materials/textures in a native app as USDZ/PNG/etc., and I doubt Unity’s transferring them in a raw format confers much benefit) or geometry modifiers (we transfer raw geometry data and turn it into a MeshResource–you can do the same in a native app).

Off the top of my head, the main advantages of using Unity/PolySpatial are:

  • Cross-platform support. This is probably the biggest draw of Unity in general, as it often requires substantial resources to create/maintain multiple platform-specific implementations. If you know that you only want to target visionOS, though (or only Apple platforms: you can definitely share most of your code between those), then that isn’t an issue.
  • The Unity editor, and its associated ecosystem of packages, extensions, the asset store, etc. Lots of developers are already familiar with these tools, and they’ve been around long enough to include a big feature set (of which, of course, only a subset will work with PolySpatial–but it’s a pretty substantial subset). The well-tested pipeline for assets (particularly things like meshes and animations) is related to this; it’s easier to find artists who can readily create assets for Unity as opposed to RealityKit. The equivalent tools for visionOS, like Reality Composer Pro, are a long way from being as full-featured as Unity.
  • Iteration time for testing, if you’re using Play to Device.

Other, perhaps lesser advantages:

  • For particles, you can use the Bake to Mesh option to get Unity’s (much more full-featured) particle systems instead of RealityKit’s (at the expense of some performance, since it requires creating/updating meshes on the CPU).
  • The support for Unity shader graphs includes some higher-level functionality (such as procedural shapes and limited HLSL support) that the raw MaterialX support lacks, and provides some additional lighting options to complement visionOS’s image based lighting.
  • Unity physics and AI/NavMesh support.

I would say that if you have more experience with Swift, SwiftUI, and RealityKit, only plan to target Apple platforms, and don’t need the Unity toolset/asset pipeline (and aren’t likely to hire anyone used to that ecosystem), then you’re probably better off doing native development.


You can actually write your own custom blendshape animation support for RealityKit – sadly, the same applies to PolySpatial (although Unity has blendshape animations covered on its other platforms).

If your team has more experience with native than Unity, I recommend sticking to native. Unity PolySpatial is (as I understand it, anyway) converting Unity to RealityKit/native – and this often comes with broken pipelines, “things work in this sub-version then break the next,” matrix explosions, and more.

On the plus side, Unity probably has the largest and longest-standing XR community of devs. On the down side, many of these people eventually quit XR and join the dark side (non-XR).

RealityKit and SwiftUI seem like late players that have learned from the mistakes of these other platforms (Unity is effectively cobbled together from many different acquisitions that are integrated in a very ad hoc way – Unity UI came from the NGUI acquisition, Mecanim was another, etc.). SwiftUI is very logical (though cumbersome in some instances).


Many thanks for your thoughtful responses!

Just to clear something up: by cached geometry animation I mean having a complex animation pre-computed and baked into meshes, typically one mesh per frame of the animation. Unreal provides geometry cache animations imported from other software, or processed inside Unreal itself when using its own Chaos Cloth simulator.

Our idea was to use a clothing animation rendered in different software and save it as a cached geometry animation. AVP would just play it back. Unfortunately, I could not find a way to do this in RealityKit. Unity does have the functionality via the Alembic importer or the MegaCache plugin, but I have not yet confirmed whether these are converted correctly to RealityKit (does anyone know?).
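For what it’s worth, the playback side of this is simple enough to do by hand in native code: keep one pre-baked mesh per frame and swap the displayed mesh based on elapsed time. A minimal sketch in plain Swift (names like `GeometryCache` are illustrative, not any real API):

```swift
// Minimal sketch of geometry-cache playback: one pre-baked mesh per frame,
// selected by elapsed time. `FrameMesh` stands in for whatever mesh type
// you actually load (e.g. a MeshResource built from each baked frame).
struct GeometryCache<FrameMesh> {
    let frames: [FrameMesh]   // one baked mesh per animation frame
    let frameRate: Double     // frames per second of the bake

    /// Returns the frame to display at `time` seconds, looping the animation.
    func frame(at time: Double) -> FrameMesh {
        let index = Int(time * frameRate) % frames.count
        return frames[index]
    }
}

// Example with placeholder "meshes" (just labels here):
let cache = GeometryCache(frames: ["frame0", "frame1", "frame2", "frame3"],
                          frameRate: 24)
print(cache.frame(at: 0.0))    // frame0
print(cache.frame(at: 0.05))   // 0.05 * 24 = 1.2 -> frame1
print(cache.frame(at: 0.2))    // 4.8 -> index 4 % 4 = 0 -> frame0
```

In a real app the swap would happen in the update loop, replacing the entity’s mesh each tick.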


Cross-platform support.

It can be an advantage in terms of code and resources (scenes, models, audio). But we have realized that our use cases are differentiated to the point that a mere cross-platform project is not enough (or is even detrimental). We will probably build an AVP-exclusive native app with the benefits of OS integration, and a more high-end visual experience using a Metal renderer (we considered Unreal for this, but its AVP support is experimental and hard to deal with).

The Unity editor, and its associated ecosystem of packages, extensions, the asset store, etc.

This is attractive, but our models will be sourced outside of the marketplace. We will work mainly with FBX and USD formats (at least at the exchange level). I’m the main developer, and I am more experienced in Xcode than Unity, so this is tilting the balance toward a native app.

The equivalent tools for visionOS, like Reality Composer Pro, are a long way from being as full-featured as Unity.

I agree with this. The editor can only manage simple scenes, and interactions are rather basic. However, I have experience combining authored scenes with programmatic changes to the scene. Picking the right flow and UX can work (as long as you don’t shoot for the moon).

Iteration time for testing, if you’re using Play to Device.

Currently testing.

Particles/shaders/physics and nav

I would prefer to avoid baking particles; our project is not heavy on particles anyway. I will try to keep it simple and control it mostly in code. As for shaders, I have some experience with both PBR and MaterialX, as well as Metal shaders. I think I can manage the basics this way. Navigation is not a requirement for us.


You can actually write your own custom blendshape animation support for RealityKit – sadly, the same applies to PolySpatial (although Unity has blendshape animations covered on its other platforms).

Could you elaborate on implementing the custom blendshape part? I could not find any custom implementation by googling or on GitHub.

I think the limitation comes from RealityKit: the USD importer does not support it. It was requested from Apple years ago…

Unity PolySpatial is (as I understand it, anyway) converting Unity to RealityKit/native – and this often comes with broken pipelines, “things work in this sub-version then break the next,” matrix explosions, and more.

I don’t have experience with it myself, but it is a risk in case some obscure bug appears due to the conversion into RealityKit.

RealityKit and SwiftUI seems a late player that has learned from the mistakes of these other platforms

Can’t assess that, because I have limited experience outside of Apple APIs. It seems the worst part of AVP is that it is hard to get out of the box and do more complex stuff beyond simple “show and tell” type apps. That can be frustrating, because I think it has much more potential. Also, there is not much 3rd-party support (I mean things like physics or animation libraries) for RealityKit.

I haven’t tested them, but it will depend on what APIs they use. If they just create standard Meshes (on the CPU), then they will likely work, but if they use something fancier like immediate mode rendering or building buffers for meshes using compute shaders, then they won’t.

I should probably clarify that PolySpatial’s “baked” particles are baked in the sense that they’re converted to meshes on the fly (every frame), not baked in the sense that they’re converted to a mesh animation as part of the build process (so that every instance would look exactly the same). But if you’re not using particles for much, then it probably doesn’t matter.

You can supply dynamically created Meshes (MeshResources, for RealityKit) each frame, which means that technically you can create any kind of procedural geometry (with the caveat that the performance is limited by the fact that the Mesh/MeshResource is supplied as CPU data rather than GPU buffers). As procedural geometry goes, blend shapes aren’t too hard to implement: for each vertex, you start with a base and then add deltas scaled by the blend shape coefficients. It’s possible to do this in a compute shader and then read the results back to the CPU to create/update a MeshResource. For hints as to how a blend shape compute shader might work, it’s helpful to look at how Unity lays out blend shape data. It’s also possible to render the blended vertices to a texture (again, likely with a compute shader) and sample that texture in the vertex stage of a shader graph.
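As a concrete illustration of the per-vertex formula above, here is a minimal CPU-side sketch in plain Swift (flat xyz arrays; this layout is an assumption for illustration, and a compute shader version would do the same arithmetic in parallel):

```swift
// Blend shape evaluation as described above: final vertex = base vertex
// plus each shape's delta scaled by that shape's coefficient.
// Vertices are flat [x, y, z, x, y, z, ...] arrays for simplicity.
func blendVertices(base: [Float],
                   shapeDeltas: [[Float]],    // one delta array per blend shape
                   coefficients: [Float]) -> [Float] {
    precondition(shapeDeltas.count == coefficients.count)
    var result = base
    for (deltas, weight) in zip(shapeDeltas, coefficients) where weight != 0 {
        precondition(deltas.count == base.count)  // shapes must match topology
        for i in base.indices {
            result[i] += deltas[i] * weight
        }
    }
    return result
}

// Two vertices, one shape at half weight:
let blended = blendVertices(base: [0, 0, 0,  1, 0, 0],
                            shapeDeltas: [[2, 0, 0,  0, 2, 0]],
                            coefficients: [0.5])
// blended == [1, 0, 0,  1, 1, 0]
```

The output array would then feed whatever mesh-update path you use (a rebuilt MeshResource, or a texture sampled in the vertex stage).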

I tested them in PolySpatial. My setup was importing a few Alembic files and placing them in the scene (I used the SwiftUI template from the PolySpatial package just because I was already testing some other things). There are a few ways to control the animations, but I used these two methods: an animation clip, and Alembic on a Timeline.
Playing in the editor works, and the animations behaved as expected. Then I built for the simulator and the device and tested again: there it fails.
I also tried simply adding the mesh (not the whole Alembic file, just the mesh) to the scene, and this shows up in all tests as a static mesh with a material.
I don’t know how this is implemented internally, but it seems that a static mesh has no issues, while a baked animation is not translated into RealityKit at all. I could not see any errors about this in Unity while building, or in Xcode while testing; it is simply ignored.

Thanks for clarifying. If we use some particles, the native way will probably be enough.

Thanks for explaining. I think I would be at some disadvantage if I were to implement this in Unity using C#. By chance, I’m implementing something quite similar natively in Swift using ModelIO. I found MeshResource to be rather limited when accessing the low-level details of a mesh (I wasn’t able to access the vertex buffer or the mesh directly), so I had to rewrite my model loader using MDLMeshBuffer, which allows me to stride through the vertices as a memory buffer (I believe this is CPU-bound). Finally, I reconstruct it as a mesh and replace it in the scene. Bit of a round-trip… we shall see.
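Striding through an interleaved vertex buffer as raw memory (the same idea as walking the bytes you get from mapping an MDLMeshBuffer) can be sketched in plain Swift without any ModelIO dependency; the layout below (position + normal, 24-byte stride) is just an assumed example:

```swift
// Interleaved layout assumed for this sketch: position (3 floats) then
// normal (3 floats), i.e. a 24-byte stride per vertex. With ModelIO you'd
// read the stride/offsets from the MDLVertexDescriptor instead.
let vertexStride = 6 * MemoryLayout<Float>.size
let vertexData: [Float] = [
//  px, py, pz,   nx, ny, nz
    0,  0,  0,    0,  1,  0,
    1,  0,  0,    0,  1,  0,
]

var positions: [(x: Float, y: Float, z: Float)] = []
vertexData.withUnsafeBytes { raw in
    let vertexCount = raw.count / vertexStride
    for v in 0..<vertexCount {
        let offset = v * vertexStride   // jump to the start of vertex v
        positions.append((
            x: raw.load(fromByteOffset: offset, as: Float.self),
            y: raw.load(fromByteOffset: offset + 4, as: Float.self),
            z: raw.load(fromByteOffset: offset + 8, as: Float.self)
        ))
    }
}
// positions == [(0, 0, 0), (1, 0, 0)]
```

With a mapped MDLMeshBuffer the loop is the same, just over the mapped bytes rather than a Swift array.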

Yes, this is basically what I am doing: base mesh, end mesh, and a weight coefficient that can be animated (not working yet, but the principle is the same).
Our biggest challenge seems to be how to procure good mesh models with the same topology and number of vertices. Maybe there are better implementations that can work with different topologies, but that seems harder to pull off.

Thanks. This is interesting. I think this is better implemented in Unity, because there you actually get a blend buffer, while in RealityKit I think this does not exist (I looked in the ModelIO API). Still, moving into Unity development has its own cost that outweighs these perks (in our case, IMHO).
I think what I’m doing with ModelIO is computed on the CPU; maybe there is a way of offloading the computation to a GPU shader.

Interesting, how would that look? I have used shaders for transforming geometry, but that was only for height maps (which can only be used in certain cases).

I’m sorry I can’t post implementation details, because it is under construction and in a private repo.

We tried some basic Alembic mesh animations and I saw them working in a very basic sense over Play to Device, but it sounds like there are some platform-specific checks that are currently excluding visionOS. We’re looking into a fix for that, though there may be other limitations. As for how it’s implemented internally, it’s actually open-source, so it’s pretty transparent.

There are different ways you can do it, but it’s the same basic principle: offset or replace the vertex positions in the vertex stage of the shader graph based on the contents of the texture. For example, you might use a compute shader to populate the texture each frame (in a format such as .rgba16Float) with the final blended vertices, using the vertex ID to find the position within the texture. You could also bake multiple frames of animation into a texture, as someone did in this thread.
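The addressing math for that kind of vertex animation texture is straightforward. A sketch in plain Swift of one possible layout (row-major, one block of rows per baked frame – an assumed scheme, not necessarily what the linked thread uses):

```swift
// Maps a vertex ID (and optionally a baked frame index) to a texel in a
// vertex animation texture: vertices fill rows left to right, and each
// baked frame occupies its own block of rows. The shader-side lookup in
// the vertex stage would do the same arithmetic from the vertex ID.
func texelCoordinate(vertexID: Int, frame: Int,
                     vertexCount: Int, textureWidth: Int) -> (x: Int, y: Int) {
    let rowsPerFrame = (vertexCount + textureWidth - 1) / textureWidth  // ceil
    return (x: vertexID % textureWidth,
            y: frame * rowsPerFrame + vertexID / textureWidth)
}

// 10 vertices in a 4-texel-wide texture -> 3 rows per frame.
let texel = texelCoordinate(vertexID: 5, frame: 2, vertexCount: 10, textureWidth: 4)
// texel == (x: 1, y: 7)
```

Each texel would then store the blended position in a float format such as .rgba16Float, as mentioned above.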


Thank you. I was not able to render any Alembic on the device. As I understand it, Alembic files can have animations based on vertex interpolation or baked geometry. I tried both types of files and neither worked.

I checked the link, but I could not see anything specific to visionOS. What I meant was in regard to the Unity engine itself: I’m thinking they are not translated to RealityKit entities, even though you can use them in the editor and the inspector shows the properties. The importer does not seem to be the problem.

Many thanks for the hint on baking vertex animation into textures (VAT). I did not know about this technique; my background is not in game creation. It seems doable even natively, though a bit above my current level.


Good news: RealityKit in visionOS 2.0 finally gets blend shape support in USD.
I wonder if PolySpatial engineers will make use of this addition.

The new blend shape support in RealityKit is only usable for USD, at least presently; there’s an API for setting the blend shape coefficients, but no API for providing the blend shape data as part of the runtime-created mesh. However, we plan to use the new LowLevelMesh API along with compute shaders to provide our own blend shape implementation.


That’s great. So if I have a regular rigged FBX blendshape file that works in Unity on Windows/Mac/iOS/Android, it should work fine in this upcoming PolySpatial release?


A main reason for a native focus is first access to new features. I’m jealous that native devs are already playing with visionOS 2 features while those aren’t coming to PolySpatial for “several weeks”.

And also not needing to hire out for native feature integrations or Unity plugins covering features Unity hasn’t implemented.


Thanks for leading me to LowLevelMesh. I’m catching up on the WWDC24 news…
This is important news, and I think it will open the way for better animation, procedural meshes, and mesh deformations over time. It is similar to what I was trying to achieve manually by replacing the mesh, but better.

Do you have an ETA for PolySpatial? As I understand it, this requires 2.0 to be released first, and then Unity will release an update.

I would like to know too. My PolySpatial test uses Alembic files in a Timeline. Current PolySpatial with visionOS 1.2 does not render them at all. I wonder if it will work in the future.

Yes, as far as I know. It will require that you use the visionOS 2.0 beta, Unity 6000, and PolySpatial 2.X.

No ETA yet, but we’re working hard on it at the moment, and should be able to release soon. The visionOS 2.0 beta (/Xcode 16.0 beta) with LowLevelMesh support was released this week; you can download them from here.


I saw the announcement during WWDC24. I plan on testing it this week.