Use Unity as a library in visionOS apps

Due to the current limits of UI development in Unity / visionOS / Polyspatial, we decided to go native for the time being, implementing most of our app in SwiftUI. However, we’d love to incorporate Unity’s game engine features when the user switches to a fully immersive view (VR).

Is it possible somehow to include Unity as a library in an existing visionOS SwiftUI app, so one can use the best of both worlds?

Highly unlikely, since Unity is an entire game engine and not a single library.

Hi @waldgeist

If you haven’t done so already, please cast your vote on our public roadmap in favor of “SwiftUI support” and for bringing “Unity as a Library” (UaaL) support to visionOS.

We actually use Unity as a Library to support VR apps on visionOS today. If you build a VR app for visionOS today, you can see this implemented in UnityMain.swift. We do not provide official support for modifying/extending this implementation, but there shouldn’t be anything blocking you from playing around with SwiftUI on top of a VR application.
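For illustration, here is a minimal, hedged sketch of what layering a SwiftUI panel over a Unity VR app might look like. The view and the bridge call are made up for this example; the real entry point lives in the generated UnityMain.swift, and this is not an officially supported pattern:

```swift
import SwiftUI

// Hypothetical sketch only: a SwiftUI panel presented alongside the
// Unity-driven immersive content. The generated UnityMain.swift owns
// the actual app entry point; names here are illustrative.
struct ControlPanel: View {
    var body: some View {
        VStack(spacing: 12) {
            Text("Running on top of Unity")
            Button("Recenter") {
                // Forward the action to Unity through your own bridge,
                // e.g. a C function exported from the Unity player
                // (exact mechanism depends on your integration).
            }
        }
        .padding()
        .glassBackgroundEffect() // standard visionOS window styling
    }
}
```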

I’m also curious to hear more about specifically which aspects of UI development in Unity seem limiting to you. Is it that you find UI development in Unity cumbersome or insufficiently expressive, that you’re trying to match the OS look-and-feel exactly, etc.? Feedback of this kind would help guide our planning and prioritization for these features.

For context, one reason we’ve hesitated to add SwiftUI support is that it would inevitably fragment Unity development, particularly when considering cross-platform development and deployment. Cross-platform development is a major reason users choose Unity in the first place, and platform-specific UI development runs counter to it. We’ve thus been wary of encouraging users to pursue this route lest they trap themselves on one platform without meaning to.

But understanding more specifics about your case – whether we could make Unity UI effective and expressive enough to meet your needs, as well as insight into how developers such as yourself approach cross-platform UI – would help inform our own planning and roadmap.

Thanks in advance!

1 Like

Hi @timc-unity, thanks for sharing these insights. Yes, I already voted for these two planning items. Regarding the UI: the biggest issue is that UIs created with Unity do not really feel “natural” and snappy. This was already the case with our iOS app, but on visionOS, the delta is even more obvious. Also, until recently, not even keyboard input was supported. This was a blocker for us, as we could not even implement a login screen. The UI capabilities of Unity might be fine if you’re mainly building a game and only need minimal UI, but if you have a quite UI-heavy AR application, it’s just not enough. We also want to take advantage of the many UI controls visionOS supports natively, like maps. Last but not least, the new pricing model didn’t make things better, but that’s unrelated to the UI question.

1 Like

Is there a tutorial on how to do this, like the one that was available for iOS? The process has to be different, because the way Unity integrates with the core system is quite different.

To add weight to the comments from @waldgeist: many companies in the industrial sector already rely heavily on UaaL to support integration with native UI (or any other cross-platform UI library, or native app logic).

Motivations for a native UI/UaaL mechanism:

  • existing UaaL developers can re-use a mechanism they already know rather than learning something new,
  • the possibility to share existing iOS UI code (SwiftUI or UIKit) with the visionOS platform,
  • an existing app with 2D native UI and the requirement to keep visual and behavioral consistency for the embedded Unity player,
  • an existing app with 2D native UI and the requirement of re-usability of those 2D UI components,
  • the large feature set of native 2D UI components vs. Unity,
  • native UI stability and performance vs. Unity 2D UI: current fragmentation (UGUI, UI Toolkit), a lacking feature set (UI Toolkit doesn’t support 3D UI, MRTK is privileged over UGUI for 3D UI by most, UI Toolkit doesn’t yet support runtime loading of UXML, etc.), and performance,
  • skilled platform developers on native UI (vs. Unity 2D UI).

I think everyone here certainly embraces the cross-platform capabilities of Unity for anything regarding real-time 3D and 3D interaction (the best!).
However, on the 2D UI side, all the arguments developed above – largely oriented around the fact that developers want to embed Unity in an existing industrial app – strongly motivate the requirement for UaaL support (and therefore a bridge to support any sort of 2D UI integration, native or otherwise).

It would be extremely helpful if you could address the IN-65644 issue created by @timc-unity, as at least then there is a way for us to use UaaL, and it would unblock many of your industrial customers.

I would be more than happy to jump on a call with you to discuss 2D UI in an industrial context.

cross-referencing: Unity as a library on VisionOS - #36 by mfuad


For us - we prefer to use the react-native/JS ecosystem to build our UIs and interact with our backend because it’s simply more mature, stable, fully featured, and has a plethora of off-the-shelf solutions for anything we would ever want to do. TBH I would expect this to be the main path of any developers entering the ecosystem for the first time.

Unity is a fantastic 3D engine - but y’all have limited resources - and by definition can’t build an ecosystem as rich as what’s been built by the broader internet over the past 20+ years. I’ll upvote UaaL and SwiftUI Support but I feel like that undersells how important this feature is. Realize that with Vision Pro you’ll start seeing developers who are creating the application layer vs the entertainment layer and the demands on UI will be much higher.


I agree. I would also love to use React Native in the first place, since our server is React-based anyway. May I ask how far you’ve gotten with that approach? I have seen this React Native extension:

which is being pretty actively maintained. But I hit some walls when I tried to integrate other stuff, so I eventually came back to native development.

VisionOS + RN = not very far at all. I’m now in the early exploratory stages of figuring out what it would take to bring our current app to the Vision Pro.

My experience thus far has been embedding Unity inside RN on an iPhone. Overall it works quite well.

How easy is it to create your custom native components and embed them in the RN app? We had to do this a lot in Unity to serve specific needs. For me, the combination of RN and custom native components would be the ideal setup. If it worked smoothly. We never really got far with the UI Unity provides. It’s great for games, but not for “serious” apps. Way too sluggish.

Hi Tim, I’d like to try out how far I can get with using Unity as a library for VR inside a SwiftUI app. Is there any information available on which code parts have to be copied into an Xcode project to make this work? If you export a Unity project to visionOS, there’s a whole lotta stuff that I assume is not really needed for this.

My use case would be: Using SwiftUI as the main user interface of the app, and let the user dive into a VR experience on the press of a button. Pretty much like Apple shows it in the solar system part of the Hello World sample.
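That flow maps fairly directly onto visionOS’s standard scene APIs. A minimal sketch using a plain SwiftUI window plus an `ImmersiveSpace` opened on a button press – the ids and view names are made up, and with UaaL the space’s content would come from Unity rather than the placeholder shown here:

```swift
import SwiftUI

@main
struct HybridApp: App {
    var body: some Scene {
        // The regular 2D SwiftUI window that serves as the main UI.
        WindowGroup {
            LaunchView()
        }

        // The fully immersive scene the user dives into.
        ImmersiveSpace(id: "vr") {
            // Placeholder: with Unity as a Library, Unity's renderer
            // would drive this space instead.
            EmptyView()
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter VR") {
            Task { await openImmersiveSpace(id: "vr") }
        }
    }
}
```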

1 Like

This is exactly what we are trying to accomplish too. So far our biggest blocker is not being able to pinch select SwiftUI buttons (yet the element responds to gaze.)

1 Like

Did you manage to use Unity as a library for the AR part? Would still be interested in doing so. Doing 3D in SwiftUI / RealityKit is pretty low-level, and I’m missing what can be done with Unity.

Kind of: we are doing a RealityKit/SwiftUI experience that launches a Unity Metal VR experience, and then displaying SwiftUI over that Metal experience.
The bridge I wrote has apparently caught the attention of the Sr. Director of Technology Evangelism at Apple, and we might open source it at some point.

We use the new dynamic function swizzling introduced in Swift 5.7 to hook into UaaL, and then use a custom native Swift visionOS plugin with a custom build post-processor to hook it all up.
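For readers unfamiliar with the mechanism: Swift’s native dynamic replacement lets a module override a `dynamic` function at load time, which is presumably what the “swizzling” above refers to. A hedged sketch – `UnityEntryPoint` and its methods are stand-ins invented for this example, not the actual generated Unity API:

```swift
import Foundation

// The declaration being hooked must be marked `dynamic`.
class UnityEntryPoint {
    dynamic func start() {
        print("original Unity startup path")
    }
}

extension UnityEntryPoint {
    // Replaces UnityEntryPoint.start() at load time.
    @_dynamicReplacement(for: start())
    func hookedStart() {
        print("bridge code runs first")
        start() // inside the replacement, this calls the original implementation
    }
}
```

Note that `@_dynamicReplacement` is an underscored (unofficial) attribute, so it can change between toolchain versions.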

One oddity we needed to work around: we had to duplicate our custom native framework plugin and reconnect the GUIDs to Xcode in a post-process script.

There are a lot of bugs, but some of them related to state seem to be fixed in VisionOS 1.1 beta 3.

Main things are:

  1. When you close a SwiftUI window, it also closes the Unity renderer.

  2. Unity will take over the ability to interact with SwiftUI buttons… sometimes. SwiftUI ornaments seem to work, but traditional SwiftUI buttons do not.

  3. After the app is “closed” and opened again, only the SwiftUI window we added will show.

Issue 3 may be fixed in the next unity.xr.visionos release via LayerRenderer State

I’m actively uploading a repro bug report to Unity, but it is a huge 17.1 GB file.

Super interesting. Would love to see this open sourced!

1 Like

Likewise – these early tools can help the community.

Hi folks –

Based on community interest, visionOS v.1.1.4 includes a SwiftUI sample scene and documentation that demonstrates how to integrate Unity and SwiftUI within a single project. From the release notes:

  • Added a SwiftUI sample scene to the PolySpatial package samples. This lets you create a standalone SwiftUI window that can interact with Unity content.

We’re aware that this doesn’t address all needs for UaaL support, but as a large number of UaaL requests have centered on SwiftUI integration, we hope this sample will be helpful for many of you. Please take a look and let us know whether it meets your needs!



Awesome. Will try it out.

SwiftUI sample scene does not work on Play To Device, right?

That’s correct; the protocol used by PolySpatial/Play to Device doesn’t include a way to recreate SwiftUI interfaces.