📌 Official Support for visionOS - Release Notes 1.1.4

The latest release of Unity’s support for visionOS is now available.

Installation

  • Please note that your packages will not automatically upgrade to this release unless explicitly requested. This is a bug that will be resolved in a future editor release. You must upgrade manually using one of the following options:
    • Remove and re-add the packages using the package manager.
    • Edit the project manifest directly (Packages/manifest.json) and change the version number for the packages to the exact versions listed at the end of these release notes.
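
For example, after the edit, the relevant entries in Packages/manifest.json would look something like this (a sketch showing only the visionOS packages, using the exact versions listed at the end of these notes; your manifest will contain other dependencies as well):

    "dependencies": {
        "com.unity.polyspatial": "1.1.4",
        "com.unity.polyspatial.visionos": "1.1.4",
        "com.unity.polyspatial.xr": "1.1.4",
        "com.unity.xr.visionos": "1.1.4"
    }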

Supported Versions

  • Unity 2022 LTS - 2022.3.19f1 or newer, Apple Silicon version only.
  • Xcode 15.2 - Xcode beta versions are not currently supported.
  • visionOS 1.0.3 (21N333) SDK - We currently do not support beta versions of visionOS.
  • Apple Silicon Mac and the Apple Silicon macOS build of the Unity editor. The Intel version of the Unity editor is not supported.

Major Features & Fixes

General Highlights

  • Improved performance of scene validation on large projects.
  • Renamed validation profiles to reflect the current app mode.
  • Added device rotation value to the XRTouchSpaceInteractor for the XR Interaction Toolkit.

Mixed Reality (Immersive) Mode

  • Added a SwiftUI sample scene to the PolySpatial package samples. This lets you create a standalone SwiftUI window that can interact with Unity content.
  • Various Play to Device improvements:
    • Fixed a range of connectivity issues when using Play to Device on the Apple Vision Pro device.
    • Increased Play to Device connection stability.
    • Added a progress bar during connection to indicate that Play to Device is active.
    • Play to Device now retains its window position between editor play sessions.
    • Scaled down Play to Device UI for easier workflow iteration.
    • Shader Graphs can be modified in real-time and will be updated live over Play to Device.
    • Significant performance improvements for compressed textures to improve initial load time.
  • Shader/Material improvements
    • Custom function node now supports reassignment operators (++, +=, etc.).
    • Sprites can now use custom materials & Shader Graphs.
    • MaskingShader (for sprites and UI) now supports vertex colors.
  • Skinned meshes now respect grounding shadows, image-based lighting, and sorting groups.
  • Built-in Render Pipeline unlit particles are now supported.

For an exhaustive list of fixes and performance improvements, please refer to the changelogs and package documentation.

Known Issues

Play to Device

  • Pausing Play Mode during a Play to Device session disrupts active connections.
  • Connecting while the device is locked will cause both the current connection and all future connections to fail, as will locking a device while a Play to Device host app is running.
    • To work around either issue, force quit and restart the Play to Device host app.
  • If ‘Connect on Play’ is set to ‘Enabled’ in the editor Play to Device window, entering Play Mode may still connect to a connection entry if the Play to Device host app is running, even with no Available Connections selected.
    • Set ‘Connect on Play’ to ‘Disabled’ to disable connections.
  • You will currently see duplicate GameObjects in the Play window when running Play to Device.
  • Running a scene with a rotating VolumeCamera also rotates the Play to Device window, and the rotation persists after stopping Play Mode. This may leave the Play to Device window facing away from you, showing only the blank back of the window.
  • When your scene has an unbounded volume camera, the Play to Device app will remain in an unbounded volume after you stop Play Mode in the editor, and its window can no longer be moved around.
    • You can work around this by connecting any scene to Play to Device, or by force quitting and restarting the host app.
  • If an XR simulation environment is enabled, it will show up in the Play to Device session when connecting to the Play to Device app on the device or in the simulator. Note that this XR simulation environment is an editor-only object and will not appear when the app is built and deployed to a visionOS device or the simulator.

Other Information

For additional information please visit the general release notes. These release notes apply to the suite of packages released as part of Unity’s support for visionOS:

  • com.unity.polyspatial (1.1.4)
  • com.unity.xr.visionos (1.1.4)
  • com.unity.polyspatial.visionos (1.1.4)
  • com.unity.polyspatial.xr (1.1.4)
  • com.unity.ext.flatsharp (1.1.1)

Thanks for the update.

Not working with beta versions of the OS is a major setback though, especially because once a device is updated it can’t be (easily) reverted. I suspect most devs will be using beta versions of the OS.


Great to see quick updates for the visionOS plugin. For us, Play to Device only works the first time.

Any subsequent entry into Play Mode spills out the following warnings (for different component types):

ObjectDispatcher hasn't collected changes for type Mesh for more than 64 frames. The Type tracking will be disabled. 
Use 'maxDispatchHistoryFramesCount' to increase maximum number of frames of the dispatch history.
This might also happen if you forgot to dispose ObjectDispatcher or forgot to disable the Type tracking.

With a final exception of:

ObjectDisposedException: Cannot access a disposed object.
Object name: 'The NativeArray has been disposed, it is not allowed to access it'.
Unity.Collections.NativeParallelMultiHashMap`2[TKey,TValue].Dispose () (at ./Library/PackageCache/com.unity.collections@2.2.1/Unity.Collections/NativeParallelMultiHashMap.cs:278)
Unity.PolySpatial.Internals.UnitySceneGraphAssetManager.Dispose () (at /Users/bokken/build/output/unity/quantum/Packages/com.unity.polyspatial/Runtime/Platforms/Unity/UnitySceneGraphAssetManager.cs:52)
Unity.PolySpatial.Internals.PolySpatialUnityBackend.Dispose () (at /Users/bokken/build/output/unity/quantum/Packages/com.unity.polyspatial/Runtime/Platforms/Unity/PolySpatialUnityBackend.cs:138)
Unity.PolySpatial.Internals.PolySpatialCore.Dispose () (at /Users/bokken/build/output/unity/quantum/Packages/com.unity.polyspatial/Runtime/PolySpatialCore.cs:262)
Unity.PolySpatial.Internals.PolySpatialCore.OnPlayModeStateChanged (UnityEditor.PlayModeStateChange newState) (at /Users/bokken/build/output/unity/quantum/Packages/com.unity.polyspatial/Runtime/PolySpatialCore.cs:498)
UnityEditor.EditorApplication.Internal_PlayModeStateChanged (UnityEditor.PlayModeStateChange state) (at /Users/bokken/build/output/unity/unity/Editor/Mono/EditorApplication.cs:463)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr, Boolean&) (at /Users/bokken/build/output/unity/unity/Modules/IMGUI/GUIUtility.cs:190)

Is that a known issue?

I’ve just built (using Unity 2022.3.20, Xcode 15.2, and the new 1.1.4 packages) and played our game on the device with visionOS 1.1 beta 3, which works fine. I haven’t tried visionOS 1.1 beta 4 yet, though, which was just released.


We’ve done some testing internally and didn’t find any issues with the beta, but we officially plan to support only the release versions of the OS for now.


Good to know, though this does make things a bit more complicated for developers based outside the US: TestFlight does not come pre-installed on the non-beta (release/consumer) visionOS, and the App Store (needed to get TestFlight) is not accessible for non-US App Store accounts (until Apple opens it up to all countries).

Also, developers would not be able to test new features introduced in beta versions of visionOS and would have to wait until the release version comes out. At that point, if the game does not work, consumers would notice immediately, and it would take a few days before an update could be released.

Hi. We tried the 1.1.4 version, but some samples don’t seem to be working. For example, the Input Debug sample gives very wrong touch positions. Also, there are no change notes or documentation for 1.1.4. Is this an experimental release? In the Package Manager it tells us to downgrade to the “recommended” 1.0.3.

Were you using Play to Device? I saw this behavior when using Play to Device, but on a standalone build the input lined up properly.

There are currently some limitations of Play to Device related to volume camera sizing/dimensions. Right now the input sample scene has dimensions of (1.75, 1.75, 1.75), but the Play to Device volume dimensions are (1, 1, 1). If you change the input sample scene dimensions back to (1, 1, 1), input should align properly through Play to Device.

We observed the same.

It can be worked around by dividing the reported positions by the volume camera’s dimensions.

In our case we have to scale the volume camera to a 50x50x50 cube; dividing the reported positions by 50 then gives the correct point.
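
A minimal sketch of that workaround, assuming the reported positions arrive scaled by the volume camera’s dimensions (the helper and its names are illustrative, not part of the PolySpatial API):

using UnityEngine;

public static class VolumeInputCorrection
{
    // Hypothetical helper: divide a reported position component-wise by the
    // volume camera's dimensions (e.g. (50, 50, 50) in the case above) to
    // recover the correct local point.
    public static Vector3 CorrectPosition(Vector3 reported, Vector3 volumeDimensions)
    {
        return new Vector3(
            reported.x / volumeDimensions.x,
            reported.y / volumeDimensions.y,
            reported.z / volumeDimensions.z);
    }
}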


Removing the Library folder seems to have fixed the issue completely.

So far this release is a lot more stable and allows for much faster testing and iteration!

Also, a hint if it doesn’t work or the volume appears empty:

Uncheck Project Settings → Enter Play Mode Settings → Enter Play Mode Options

Having this checked prevented Play to Device from working properly.


Yes, that seemed to have been the issue. It would be great to have this fixed though.

Also some other notes:

You have stopped providing changelogs; it would be great if they returned.
The Character Walker sample has a lot of bugs. For example, when the volume camera is offset, clicks don’t register at the correct location.

Our application crashes a lot more on beta OS versions when loading new scenes/asset bundles. Just a data point here. On the release OS version it barely crashes (still not great, but not as bad as on the beta).


Thanks for the feedback! Regarding the changelog, we had to do a few internal re-publishes, so the changelog content ended up stacked within the 1.1.1 sections.

We’re working to improve this experience, but in the meantime, the changelog items from 1.1.1 through 1.1.4 all apply to this latest release.

For those on the beta visionOS - I am wondering why - are there major breakthrough features that are not supported otherwise?

Great update! Thank you for your effort!

I have a question about how to add a custom Swift scene.
I looked at “UnityVisionOSSettings.swift” to check how to add a custom scene. There is a mainScenePart0 scene builder, and I saw a sample scene named “SwiftUISampleInjectedScene” there.

My question is how to add my custom Swift scene to the “mainScenePart0” property. I’m guessing I have to add some settings somewhere, but I couldn’t find out where.

How do I add them?

[Edit]
I checked its documentation, but the target page was missing.
https://docs.unity3d.com/Packages/com.unity.polyspatial.visionos@1.1/manual/InteropWithSwift.md

I found out!

I looked at “VisionOSBuildProcesser.cs” and found the code that injects Swift scenes.
It shows how we have to name our Swift files for them to be injected and/or included.

// Capture .swift files that we need to move around later on
var allPlugImporters = PluginImporter.GetAllImporters();
foreach (var importer in allPlugImporters)
{
    // Skip plugins that are not enabled for the visionOS platform build
    if (!importer.GetCompatibleWithPlatform(BuildTarget.VisionOS) || !importer.ShouldIncludeInBuild())
        continue;

    // Swift files whose names end in "InjectedScene.swift" are injected as scenes
    if (importer.assetPath.EndsWith("InjectedScene.swift"))
    {
        m_InjectedScenePaths.Add(importer.assetPath);
    }

    // Swift files under a "/SwiftAppSupport/" directory are included as supporting code
    if (importer.assetPath.Contains("/SwiftAppSupport/"))
    {
        m_swiftAppSupportPaths.Add(importer.assetPath);
    }
}
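
So, as far as I can tell from this code: for a custom SwiftUI scene to be injected, the Swift file’s name must end in InjectedScene.swift (for example, a hypothetical MyCustomInjectedScene.swift), and its plugin importer must be enabled for the visionOS build target; any supporting Swift files should be placed under a SwiftAppSupport directory.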

Thanks for the info. I totally get that.

I am using Play to Device on the real device, and most samples work, but the “Mixed Reality” samples don’t seem to. Is that by design? If so, you should add it to the docs, like the existing note about these samples not working in the simulator.

Yes, currently ARKit features are not supported through Play to Device, but it’s something we’re working on. Thanks for the heads-up about the documentation; we’ll try to get that called out in the next release.

Thanks!

I checked out your image tracking sample. I noticed that it doesn’t support moving images (this is a dealbreaker for what we wanted to use it for). Is this something visionOS doesn’t support at the moment, or is it something that PolySpatial doesn’t support at the moment?

This is a limitation of visionOS. My understanding is that while it does support dynamic images, they only update at ~1 frame per second; it’s built more for static images.

You can submit feedback to Apple about this via the Feedback Assistant.
