Unity as a library on VisionOS

Has anyone succeeded in implementing Unity as a Library (UAAL) on Vision Pro?
We are making a visionOS app with Unity, but we would like to implement the menu UI in Swift.

I basically followed the same steps as for UAAL in an iOS app, as explained in this blog.

The build succeeded, but the Unity scene didn’t appear in the Swift app. About 10 seconds after launch, an error occurred.

If anyone has tried UAAL with visionOS, I’d like to hear about your results with Swift & Unity integration.


We’re trying to achieve the same.

I was successful when using Virtual Reality mode (Full Immersive Space) by copying the Swift files from Libraries/ARM64/Packages/com.unity.xr.visionos/Runtime/Plugins/visionos/ (UnityLibrary, UnityMain) and adjusting them for use inside the SwiftUI app.

We’re currently stuck doing the same for Mixed Reality mode. The PolySpatial parts seem to require a different approach. When integrating the Swift files from that build (AppDelegate, iPhoneApp, OSSettings) we get the following error: No such module ‘PolySpatialRealityKit’. After comparing the UnityiPhone target to our native SwiftUI target, we’ve added the libPolySpatial_xrsimulator.a library to our target, but that does not solve the issue. Did anyone find a way to fix that error?


Short update: We were able to include PolySpatial builds in another project by copying the values for Header Search Paths, Library Search Paths and Import Paths from the Xcode build settings.

I’m currently working on bringing the content into the Swift wrapper application in a convenient way, following the “copy lots of content from the generated Swift files” approach I already used for UaaL with Full Immersive mode. The goal is to have reusable SwiftUI controls (e.g. ImmersiveSpaceContent) which can be used inside the Swift app.

Will there be documentation for that approach/the complete UaaL concept on visionOS?


Any update on this UaaL topic?
There hasn’t been any answer on this topic at all since the start of the thread (Sep 29).

Is there no intention to support UaaL with PolySpatial?

We’re currently facing issues with scene management/handling in our visionOS UaaL approach. Will there be any kind of documentation? Anything on how PolySpatial handles scenes using SceneDelegates would be very helpful! We’re currently only able to launch an immersive space once. After that, Unity is still running in the background but no content is rendered.

Hi!
Can someone from the PolySpatial team please reply to this thread? This issue is currently a stop-ship blocker for us, and we need to know whether what we are attempting here is actually intended to be supported or not.
We are trying to achieve a deeper integration between Unity and native SwiftUI, so any help or advice is highly appreciated. Currently there is no documentation on this topic, and all we can do is guess and experiment with the public interface of the Unity framework exposed to Swift.
Feel free to send us a direct mail if needed.

Best regards … Ralf Heindoerfer - SAP


Hi there!

I’m sorry to hear that folks are having trouble, and that it’s taken so long for us to respond. This is indeed a challenging issue! In short, we haven’t had a chance to try this yet, but in principle it should be possible to add custom SwiftUI controls to a Unity/PolySpatial app.

For the crash reported by @from2001vr, please report a bug with a repro project if you haven’t already. If you aren’t able to share your full project, please try to create a small sample that replicates the issue.

Otherwise, I think folks here are on the right track. You’ll want to customize UnityPolySpatialApp by adding your own WindowGroup and/or ImmersiveSpace inside the following property, after mainScene:

var body: some Scene {
    mainScene
    // Your new WindowGroup and/or ImmersiveSpace goes here
}

You can add your own SwiftUI elements or a RealityView for your own RealityKit entities inside of this window. The best way to make these changes is to “embed” the com.unity.polyspatial.visionos package by copying it from <Unity_project>/Library/PackageCache/com.unity.polyspatial.visionos@0.4.3 to <Unity_project>/Packages/com.unity.polyspatial.visionos. That way you can freely edit files within the package while still building out of Unity. You’ll definitely want the whole build pipeline to do its thing, rather than trying to assemble an Xcode project with Unity and PolySpatial by hand.
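To make the shape of that change concrete, here is a minimal sketch of a customized app. Only UnityPolySpatialApp and mainScene come from the generated code; the window id and the placeholder content are invented for illustration:

```swift
import SwiftUI

// Sketch only: UnityPolySpatialApp and mainScene are from the generated
// project; "native-menu" and the Text placeholder are hypothetical.
struct UnityPolySpatialApp: App {
    var body: some Scene {
        mainScene  // the PolySpatial-generated scene comes first

        // Your own SwiftUI window living alongside the Unity content.
        WindowGroup(id: "native-menu") {
            Text("Native menu goes here")
        }
    }
}
```

The same pattern applies to an additional ImmersiveSpace with your own RealityView inside.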

With that said, it will probably be easier to iterate directly in the generated Xcode project so you can have code completion and quickly build/run to test your changes without building out of Unity every time. Just make sure to copy your changes in Xcode back to the package source so they don’t get lost.

Could folks in this thread share a bit more about their intended use case? For example, are you just looking to create some buttons and other native UI that operate entirely in the Swift layer? Or do you want this SwiftUI to control stuff within Unity? Do you need access to the RealityKit entities that are created/controlled by PolySpatial? The latter functionality may require us to expose/change some things within PolySpatial, so it would be good to know what kind of end-result is desired.


Hi @mtschoen

Thank you for the input!

We did basically the same thing. As we have a quite complex Swift app wrapper around our Unity application, we changed the type of UnityPolySpatialApp from App to Scene and managed to integrate the scene content (in this case mainScene, as described by you) into our main Swift application. As we have a native entry point for our application, we launch the Unity Mixed ImmersiveSpace only on user action at a later point in time. The user has the option to close the ImmersiveSpace to go back to our Swift application UIs.
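Roughly, the setup looks like the following sketch. MyApp and HomeView are placeholder names; UnityPolySpatialApp is the generated type, with its conformance changed from App to Scene as described:

```swift
import SwiftUI

// Sketch only: MyApp and HomeView are placeholders. UnityPolySpatialApp
// is the generated type, changed from `App` to `Scene` so it can be
// composed into a native entry point.
@main
struct MyApp: App {
    var body: some Scene {
        // Native entry point shown at launch.
        WindowGroup(id: "home") {
            HomeView()
        }
        // The Unity/PolySpatial scene content, contributed as a child Scene.
        UnityPolySpatialApp()
    }
}

struct HomeView: View {
    var body: some View {
        Text("Native Swift UI")
    }
}
```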

We’re currently facing a problem: after the ImmersiveSpace has been opened once, it cannot be opened a second time. From the console output we can see that Unity is still running. When launching the ImmersiveSpace for the first time, we are able to see the scene content (a simple cube in our case, for testing) and receive the following console output:

Matching windows and volumes -- 1 orphan volumes, 1 free windows, 1 total windows
   ... matching window 432D2436-F1AA-4AA9-B913-41BDAE6D01E7 Unbounded to 11470
Volume 11470: window assigned, uuid: 03DC32FA-CF9D-4A8B-B590-0405FEB78D45 hostDimensions: SIMD3<Float>(1.0, 1.0, 1.0)

When launching the ImmersiveSpace for the second time we’re receiving the following output and no content is shown:

Window added: uuid FA28C92A-AE14-4AEC-98ED-3763736360E6 as Unbounded
Unloading 5 Unused Serialized files (Serialized files now loaded: 0)
[Diagnostics] Warning: Non shader graph shader 'xxx' not supported or MaterialX encoding missing
Matching windows and volumes -- 0 orphan volumes, 1 free windows, 1 total windows
   ... requesting dismiss of unused window: uuid FA28C92A-AE14-4AEC-98ED-3763736360E6 Unbounded

Can you provide any tips on how to solve this? To me it seems to be related to how PolySpatial manages scenes internally. We’ve already experimented with using specific UUIDs when launching the ImmersiveSpace, but that does not have any effect on the content. Please let me know if you need further details.

The second problem we’re currently facing is related to the usage of libPolySpatial_xros.a / PolySpatialRealityKit.swiftmodule. As this library is added to the App target itself in the exported project, and we’re not using that target at all, we have to add the library to our main application target. That’s of course possible by copying the binaries from the Unity export into the main app, but our plan is to do this in an automated way, using an XCFramework or any other approach that does not require editing import paths in the Xcode build settings. Would it be possible to provide those libraries for device and simulator as an XCFramework on your end? Of course we’re open to any other solution. It would be great if you could help us with that, too.

Thank you and regards


The issue that you’re running into is that windows end up being attached to volumes – when the window gets closed, the volume is also “closed”. We don’t have any way of surfacing that back up to the app yet, or any way to allow you to handle those events or “re-open” a volume.

However, there is a workaround – if you load a new scene on the Unity side (or reload your existing scene), that will cause the volume to get re-created. So instead of launching the immersive space yourself, call into some C# code that will unload/reload the scene that has the immersive space volume. That will re-create the volume and also open the immersive space. Let me know if that works for you.

For the XCFramework – switching to an XCFramework is on the list, but it will take a bit (it requires some surgery in our build processes). But to make sure I understand: is the issue that your own app is a separate Xcode project (or is it part of the same one as the Unity-generated project?) and you can’t easily reference the .a file from Unity’s export inside your own project? Or is the main issue the device vs. simulator differences in the .a file?

Thank you for the explanation! That is exactly what we were assuming and we’ll try to implement the workaround. I will keep you updated on that.

Regarding the XCFramework: Great to know that it’s on the list, first of all! The main reason for us is that we have a separate Xcode project which consumes the UnityFramework, built as an XCFramework, through Swift Package Manager. If the static library were replaced with an XCFramework, we would be able to include that XCFramework as an additional binary target in our SPM package. I’m assuming we could build the XCFramework from the provided .a files for device and simulator on our end if we had access to the header files. Is that something you could provide sooner/more easily than switching to an XCFramework by default?

Great, glad to hear that the scene-loading workaround sounds usable for you.

You should be able to build the XCFramework yourself, yeah – you shouldn’t need any header files (the -headers arg to -create-xcframework is optional), and there aren’t any to give, as it’s pure Swift code. I’m not 100% sure how to tell xcodebuild to include the swiftmodule, though, and maybe it doesn’t actually need to be included in the framework (so you’d end up with both the .swiftmodule and the .xcframework).

We were already able to build the XCFramework, but have been struggling with including the swiftmodule. Can you confirm that the static libraries are built with the BUILD_LIBRARIES_FOR_DISTRIBUTION=YES build setting? As far as I understand, this option should ensure the required .swiftinterface files are included in the build of the library itself, so the additional swiftmodule folder would no longer be required. But I’m not 100% sure whether that’s also the case for static libraries (.a) or only for dynamic frameworks (.framework). Due to the project setup, we’re not able to manually set the import path for that Swift package to include the swiftmodule. A workaround is copying the swiftmodule folder to BUILT_PRODUCTS_DIR using a pre-actions script inside the scheme.

A new error occurs after upgrading from Unity 2022.3.9 to 2022.3.12 (it was happening with .11, too):

Undefined symbol: _OBJC_CLASS_$_UnityFramework

It only occurs when using the UnityFramework.framework (wrapped inside an XCFramework) in a separate project/workspace. When using it inside the same Xcode workspace where it is built, there are no issues. I can see there were some changes in the project structure when comparing .9 and .12. Is there anything that could be related to that (linking) issue? Linking with .9 worked completely fine.


Short update: We were able to fix that issue on our end. It was related to the way the bundle id of the framework was used to initialize the UnityFramework for UaaL. Replacing it with the framework’s bundle id as a string solved the issue.

Any updates on the XCFramework topic?


I am also interested in running Unity content in 3D, mixed with native visionOS windows.

Is this solved, or does it need more investigation?

What is the idea behind building the XCFramework? (I didn’t get it.)


Our status is that using Unity content inside a window is not possible at the moment, but we were able to overlay Unity content in a volume with native SwiftUI controls, using UaaL and a simple ZStack, by creating our own custom version of UnityVisionOSSettings.swift.

XCFrameworks combine binaries/frameworks for multiple platforms in a single file. With that approach it’s possible to have one XCFramework wrapping builds for device (xros, arm64) and simulator (xrsimulator, arm64+x86_64). For PolySpatialRealityKit there are static libraries (.a files) for xros and xrsimulator. We wrap both in the XCFramework to ensure all SDKs are supported with the same build.
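As a concrete sketch, wrapping the two static libraries could look like this (the input paths are illustrative and depend on where your Unity export puts the .a files):

```shell
# Sketch only: wrap the device and simulator static libraries from the
# Unity export into a single XCFramework. Adjust paths to your export.
xcodebuild -create-xcframework \
    -library Build/xros/libPolySpatial_xros.a \
    -library Build/xrsimulator/libPolySpatial_xrsimulator.a \
    -output PolySpatialRealityKit.xcframework
```

Note that `-create-xcframework` refuses inputs built for the same platform, which is why exactly one library per SDK (device and simulator) goes in.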


Would love to get some clear steps on how you made this work – or actual updates from Unity here. I realize everyone has lots of priorities, but I’m sure many of us are in the same boat of trying to figure out how to offer immersive and PolySpatial content. Unity as a Library is a definite way forward if we can make it work.


I’ll try my best to write down some steps, but as you mentioned, we have quite a lot of other work that has to be prioritized. In general, following the Unity as a Library documentation for iOS is a very good starting point.

Once you have done the general project setup inside the workspace, check out the Unity-VisionOS target of the Unity-VisionOS project. On the “Build Phases” tab you’ll see a build phase called “Compile Sources”. All Swift files included here should be copied to your native Swift wrapper project, which will host UaaL. That’s basically it. You can adjust those Swift files to your needs, e.g. to include custom SwiftUI.

In the same tab you’ll find a build phase called “Link Binary With Libraries”. This is actually the only bigger difference from UaaL on iOS. In this phase, libPolySpatial_xrsimulator.a is linked into the app target. This link has to be added to the wrapper target too, as the Swift files rely on PolySpatialRealityKit, which is part of the mentioned static library. You should be able to locate that file by right-clicking the item and selecting “Reveal in Project Navigator”. Dragging the file into “Frameworks, Libraries and Embedded Content” should add a reference to your wrapper project.

After those steps you should be able to run your wrapper target. Without any adjustments it will look like the Unity standalone app, but from here you can start working with native code inside the copied Swift files.

I hope those steps help.


Awesome, I’ll give that a try!

Hi @v_vuk, we were able to partly fix the volume-closing issue by reloading the scene using

SceneManager.LoadScene(0, LoadSceneMode.Single);

After that, the PolySpatial window is mapped to the SwiftUI volume and the content is displayed correctly.

We also tried closing that volume completely on the Swift side (using dismissWindow) and reopening it when we want the Unity content to be visible again. That leads to the following error message:

Scene became active: <UIWindowScene: 0x15aa49ad0; role: UIWindowSceneSessionRoleVolumetricApplication; persistentIdentifier: com.sap.mobile.productviewer.debug:SFBSystemService-F8262F7A-C2DA-4B1A-9A34-D7FE54531707; activationState: UISceneActivationStateForegroundActive>
Window added: uuid 01D21633-0C17-418F-A179-7106C298BBEB as Bounded-1.000x1.000x1.000
Matching windows and volumes -- 0 orphan volumes, 1 free windows, 1 total windows
... requesting dismiss of unused window: uuid 01D21633-0C17-418F-A179-7106C298BBEB Bounded-1.000x1.000x1.000

Scene disconnected: <UIWindowScene: 0x15aa49ad0; role: UIWindowSceneSessionRoleVolumetricApplication; persistentIdentifier: com.sap.mobile.productviewer.debug:SFBSystemService-F8262F7A-C2DA-4B1A-9A34-D7FE54531707; activationState: UISceneActivationStateUnattached>
Window removed: uuid 01D21633-0C17-418F-A179-7106C298BBEB
<FBSWorkspaceScenesClient:0x600002c0ec80 com.apple.frontboard.systemappservices> scene request failed to return scene with error response : <NSError: 0x600000f86c40; domain: FBSWorkspaceErrorDomain; code: 1 ("InvalidScene"); "scene invalidated before create completion">
Scene session activation failed with error: Error Domain=FBSWorkspaceErrorDomain Code=1 "scene invalidated before create completion" UserInfo={BSErrorCodeDescription=InvalidScene, NSLocalizedFailureReason=scene invalidated before create completion}
Unable to present a volumetric scene for id 'Bounded-1.000x1.000x1.000': Error Domain=FBSWorkspaceErrorDomain Code=1 "scene invalidated before create completion" UserInfo={BSErrorCodeDescription=InvalidScene, NSLocalizedFailureReason=scene invalidated before create completion}

It seems to be basically the same error message as before. Is there an option to recreate the SwiftUI volume from the Unity side? We tried opening it using the standard openWindow action, but that does not help. We even added a delay before reloading the Unity scene to ensure the SwiftUI volume is completely initialized, but that doesn’t help either.

Any solution to that would be appreciated!


For a simple sample project, I’m able to call openWindow and dismissWindow on a Unity volume multiple times.
To do so, I had to change requestDismissWindow to no longer call requestSceneSessionDestruction (by toggling the #if to false).
A second thing I had to do was alter the mainScene windows and immersive spaces to no longer use for: UUID.self. Otherwise it would open the window, but without content (just the close bar). Not sure if this breaks something else, but it looks good so far.
This will probably be more complicated for complex apps.
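To illustrate that second change, here is roughly what I mean. The id and the content view are placeholders, not the exact generated code:

```swift
import SwiftUI

// Sketch only: "Bounded" and UnityVolumeContentView are placeholder
// names, not the exact identifiers from the generated mainScene.

// Before (roughly as generated), the window took a UUID value:
//
//     WindowGroup(id: "Bounded", for: UUID.self) { uuid in
//         UnityVolumeContentView()
//     }

// After dropping `for: UUID.self`, re-opened windows showed their
// content again in my test:
WindowGroup(id: "Bounded") {
    UnityVolumeContentView()
}
```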

A different (big) issue that I’m now hitting is a crash whenever I want to close a (Swift) window (either by pressing ‘x’ or by calling dismissWindow):

#1	0x00000001052d4b28 in PolySpatialSceneDelegate.sceneDidDisconnect(_:) at /Users/bokken/build/output/unity/quantum/Packages/com.unity.polyspatial.visionos/Source~/PolySpatialRealityKit/PolySpatialRealityKit/PolySpatialSceneDelegate.swift:40


I’d like to register our interest as well. We want to create a focused app, or whatever that’s called: a single app with multiple windows that only show our app.

To get where we want to be, we ALSO want to display a number of Unity/PolySpatial items WITHIN this UX.

So, will this be possible?

I can imagine that PolySpatial has mostly been geared towards a SINGLE app, like a VR app. But in our case, we want to be able to spawn multiple PolySpatial assets within our visionOS app.
Not being able to do this will mean PolySpatial is dead in the water for our use case.