visionOS Beta Release Notes (0.3)

We have published a new release (0.3.2) that provides support for the Xcode 15 beta 8 and visionOS beta 3 (21N5233f) SDKs. If you are testing on devices at Apple’s developer labs or via a developer kit, use only the following configuration (also outlined in the “Requirements” section):

  • Apple Silicon Mac for development
  • Unity 2022 LTS (2022.3.9f1 only)
    • The update to Xcode 15 beta 8 only works in the above version.
  • Xcode 15 beta 8 (linked to download)
    • The Xcode 15 Release Candidate will not work
  • visionOS beta 3 (21N5233f) SDK

To learn more about Unity’s visionOS beta program, please refer to this post. The following release notes apply to the suite of packages released as part of Unity’s visionOS beta program:

  • com.unity.polyspatial (0.3.2)
  • com.unity.xr.visionos (0.3.1)
  • com.unity.polyspatial.visionos (0.3.2)
  • com.unity.polyspatial.xr (0.3.2)

Please note that there is a known issue with content not appearing when the Splash screen is enabled. Disable the Splash screen in Player Settings > Splash Image.

Requirements

  • Apple Silicon Mac for development
  • Unity 2022 LTS (2022.3.9f1 only)
    • The update to Xcode 15 beta 8 only works in the above version.
  • Xcode 15 beta 8 (linked to download)
    • The Xcode 15 Release Candidate will not work
  • visionOS beta 3 (21N5233f) SDK

Package Installation

  1. Redeem your coupon code to unlock the Apple VisionOS Beta subscription. This is a prepaid subscription used to manage access to our packages during the beta program. No payment is required to participate in the program.
  2. Launch the Unity Hub, or restart the Unity Hub if it is already open.
  3. Create a new Unity project using Unity 2022.3.9f1, and make sure visionOS Build Support (experimental) and iOS Build Support are installed.
    • Please note that visionOS Build Support (experimental) is only supported on Mac devices.
  4. Open the Package Manager (Window > Package Manager) and make sure Packages: Unity Registry is selected.
  5. Open the “+” dropdown and select “Add package by name…”
  6. Add the following packages:
    • com.unity.polyspatial.visionos
    • com.unity.polyspatial.xr
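
If you prefer editing the project manifest directly, the same two dependencies can be added by hand. A minimal sketch of the relevant entries in Packages/manifest.json (the version numbers are those listed in these notes; all other entries in your manifest are omitted here):

```json
{
  "dependencies": {
    "com.unity.polyspatial.visionos": "0.3.2",
    "com.unity.polyspatial.xr": "0.3.2"
  }
}
```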

Experimental packages

As a reminder, experimental releases are not supported for production, but they provide early access for those who want to start testing our solutions for the visionOS platform. This also helps us make progress on development through your feedback.

Template and Samples

Template Installation

  1. Download the visionOS Project Template here.
  2. Unzip the file to your desired project location.
  3. Open the project with Unity 2022.3.9f1 via Unity Hub > Open Project.
  4. Further information is provided by the In-Editor Tutorials in the template and in the template documentation.

Unity PolySpatial Samples

The PolySpatial Samples are built with the Universal Render Pipeline. If you are not starting with the template project referenced above, please create a new project using the Universal Render Pipeline, or manually add and configure URP to your project. Without this, materials from the samples will appear pink in the editor.

  1. Once you have installed our Unity packages, go to the Package Manager (Window > Package Manager) and select the Unity PolySpatial package
  2. Under the Samples tab, select Import on the Unity PolySpatial Samples
  3. Once the samples are installed, open the Scenes folder and add all the scenes to your build settings (File > Build Settings…). Make sure the ProjectLauncher scene is the first one.
  4. Make sure PolySpatial is enabled in the Player Settings > PolySpatial and make sure Apple visionOS is checked in XR Plug-in Management.
  5. Build your project for the visionOS (experimental) platform.
  6. Open the Xcode project and target the Apple Vision Pro simulator to launch the project launcher scene. From here, you can use the arrows to select and click Play to load a scene.

General Notes
  • A new Spatial Pointer Device has been added to the Input System package for the spatial tap gesture. Previous input APIs are being deprecated.
  • The samples have been updated to use the new Spatial Pointer Device.
  • There is a known issue with content not appearing when the Splash screen is enabled. Disable the Splash screen in Player Settings > Splash Image.
  • Currently, an app must be set to either bounded or unbounded. The volume camera settings in the scene must match the Apple visionOS settings in XR Plug-in Management. When building the unbounded scenes in the Samples (Mixed Reality, Image tracking), do not include any scene with a bounded volume camera (such as Project Launcher).
  • The SampleScene is the only supported scene in the template project. The Expand View button in SampleScene is not functional.
  • When using the visionOS Simulator, you may experience visual issues with any object that has upward facing normals.
  • Interacting with the Slider component in the SpatialUI scene of the samples will crash the app.
  • There is a known issue in the Samples Object Manipulation scene when dropping an object below the platform. You can exit and replay the scene to fix it.
  • Please note that when running apps in the visionOS Simulator, results may differ from the Vision Pro headset. Check out Apple’s guide on running your app in the simulator to learn more.
  • On build, the resulting project is still called “Unity-iPhone.xcodeproj”. (Fix in progress.)
  • Only Apple Silicon machines are supported. Support for the x86-64 simulator is coming in a future release.
  • When running in the Simulator, you may need to disable Metal API Validation via Xcode’s scheme menu (Edit Scheme > Run > Diagnostics > uncheck Metal API Validation). (Fix in progress.)
  • Some visionOS/iOS player settings are ignored or improperly applied. (Fix in progress.)
  • The PolySpatial project validations are under the Standalone, Android and iOS tabs in the Project Validation window (Edit > Project Settings > XR Plug-in Management). (Fix in progress.)

Windowed Apps / Virtual Reality
  • Crashes may be seen when some pixel formats are used. (Fix in progress.)
  • To receive controller input, you’ll need to use a prerelease version of the Input System package. In your project’s Packages/manifest.json, include:

"com.unity.inputsystem": "https://github.com/Unity-Technologies/InputSystem.git?path=/Packages/com.unity.inputsystem"

visionOS Shared Space / PolySpatial
  • Objects may flicker in and out of visibility, especially when using a bounded volume. (Platform issue.)
  • An application cannot have more than one volume at this time. (Platform issue.)
  • Volume camera dimensions cannot be changed dynamically. (Platform issue.)
  • Canvas / UGUI can consume lots of CPU and be very slow. (Fix in progress.)
  • Unity Particle System effects may not function or look correct. (Fix in progress.)
  • Input sometimes does not work, due to input colliders not being created on the RealityKit side. (Fix in progress – enable collision shape visualization in Xcode to see if this is your issue.)
  • When using an Unbounded Volume Scene, an empty 2D window appears at app launch. (Fix in progress.)

Frequently asked questions

Q: My input code is not working, and I’m not getting the correct touch phases.

A: Make sure you are using the Input System’s TouchPhase rather than UnityEngine.TouchPhase:

using TouchPhase = UnityEngine.InputSystem.TouchPhase;
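
For context, a minimal polling loop using that alias might look like the following. This is a sketch assuming the Input System package with Enhanced Touch enabled (as in the samples in this thread); the class name is illustrative:

```csharp
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
// Alias the Input System's TouchPhase so it is not confused with UnityEngine.TouchPhase
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

public class TouchPhaseCheck : MonoBehaviour
{
    void OnEnable()
    {
        // Enhanced Touch must be enabled before polling Touch.activeTouches
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            if (touch.phase == TouchPhase.Began)
                Debug.Log("Touch began");
        }
    }
}
```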

Q: Every time I launch my app, the home menu icons appear in front of it.

A: This is a known issue. (Platform issue.)

Q: Hover component shows up differently depending on the object geometry.

A: This is a known issue. (Platform issue.)

Q: My app is not responsive or updating slowly.

A: Check the performance tab in Xcode to see whether the usage percentages are near or over 100%. If your app is missing its performance targets, the simulator and passthrough video will not slow down or stutter, but the objects in your app might.

Tips for getting started
  • Use Xcode visualizations to see information about objects in your app.

  • Make sure you have the leftmost icon selected in the simulator for interacting with content. Other icons are for navigating the simulator and do not register input.


Unity PolySpatial Input

This package release adds a new input device, SpatialPointerDevice, which developers can use with Input System action maps and APIs.

You can leverage the Enhanced Touch API for polling touch phases.

  • The SpatialPointerState phase is not valid when polled directly from the device state. Instead, use the EnhancedTouch API to get the active touches and query the phase there.
  • When filtering input on the SpatialPointerKind enum for Kind.DirectPinch, it’s recommended not to check it during the Began touch phase: a Kind.Touch is usually reported before a direct pinch is detected, and the kind only switches to DirectPinch in a later touch phase.
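
Combining the two notes above, a sketch of filtering for a direct pinch that skips the Began phase (this assumes the EnhancedTouch Touch type and the TouchPhase alias shown elsewhere in this post):

```csharp
var activeTouches = Touch.activeTouches;

if (activeTouches.Count > 0)
{
    var touch = activeTouches[0];
    SpatialPointerState state = EnhancedSpatialPointerSupport.GetPointerState(touch);

    // Skip the Began phase: the kind usually reports Touch first and only
    // switches to DirectPinch in a later touch phase.
    if (touch.phase != TouchPhase.Began && state.Kind == SpatialPointerKind.DirectPinch)
    {
        // direct pinch input logic here
    }
}
```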

Spatial Pointer Device Data

The SpatialPointerDevice mirrors the SpatialEventCollection data from SwiftUI and is the primary way to interact with content on visionOS. The input data is limited depending on your app’s mode (bounded vs unbounded) and when you are requesting the data. Some data is only valid for the first frame in which the input was performed.

The input is registered when the user looks at an object and performs a pinch gesture with their index finger and thumb. This type of input registers as an Indirect Pinch. Input can also be performed with a direct poke (Touch) or direct pinch (Direct Pinch) on an object.

  • Up to two inputs can be registered at the same time.
  • The object a user is looking at must have a collider on it to capture input.
  • The devicePosition and deviceRotation properties describe the pose of the input device controlling the interaction. Typically, this is based on the user’s pinch (a point between their finger and thumb).
  • By default, the Interaction Ray is based on the user’s eye gaze.

The SpatialPointerDevice provides the following properties:

  • .interactionPosition: the position of the interaction in world space. This value will be updated while the input is maintained and will always start on a collider.
  • .deltaInteractionPosition: the difference between the starting interaction position and the current interaction position.
  • .startInteractionPosition: the starting position of the interaction. This will always be on a collider and will only be set when the input occurs.
  • .startInteractionRayOrigin: the ray origin based on the user’s eye gaze. This is only available when an app is in unbounded mode and will only be available when the input occurs.
  • .startInteractionRayDirection: the ray direction based on the user’s eye gaze. This is only available when an app is in unbounded mode and will only be available when the input occurs.
  • .devicePosition: the position of the user’s pinch (between the user’s thumb and index finger). This value will be updated while the input is maintained.
  • .deviceRotation: the rotation of the user’s pinch (between the user’s thumb and index finger). This value will be updated while the input is maintained.
  • .kind: the interaction kind: Touch (poke), Indirect Pinch, Direct Pinch, Pointer, or Stylus.
  • .targetId: the instance ID of the object being interacted with.
  • .phase: the spatial pointer touch phase of the current interaction.
  • .modifierKeys: any modifier keys that are active while the interaction is happening.
  • .targetObject: a direct reference to the game object the interaction targets.

Use a spatial pointer in an Action Map

The Spatial Pointer Device is listed in the Other section of Input Action maps. There is a Primary Spatial Pointer for detecting the primary interaction and a Spatial Pointer #0 and Spatial Pointer #1 for the first and second interactions respectively.

Use a spatial pointer in direct code

The following script surfaces SpatialPointerState data based on active touch input:

using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using UnityEngine.InputSystem.LowLevel;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

public class InputScript : MonoBehaviour
{
    void OnEnable()
    {
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        var activeTouches = Touch.activeTouches;

        // You can determine the number of active inputs by checking the count of activeTouches
        if (activeTouches.Count > 0)
        {
            // To access PolySpatial (visionOS) specific data, pass an active touch into EnhancedSpatialPointerSupport.GetPointerState()
            SpatialPointerState primaryTouchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

            SpatialPointerKind interactionKind = primaryTouchData.Kind;
            GameObject objectBeingInteractedWith = primaryTouchData.targetObject;
            Vector3 interactionPosition = primaryTouchData.interactionPosition;
        }
    }
}

Limit interaction by kind

You can check the spatial pointer device data to limit which kinds of interactions are permitted. The following example only executes its input logic for indirect pinch and poke interactions:

var activeTouches = Touch.activeTouches;

if (activeTouches.Count > 0)
{
    SpatialPointerState primaryTouchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

    if (primaryTouchData.Kind == SpatialPointerKind.IndirectPinch || primaryTouchData.Kind == SpatialPointerKind.Touch)
    {
        // do input logic here
    }
}

Select, move, and rotate an object

You can update the position and rotation of an object based on the interaction position and device rotation. The following example shows how to select an object, translate and rotate it, and unselect it based on touch phases:

using UnityEngine;
using Unity.PolySpatial.InputDevices;
using UnityEngine.InputSystem.LowLevel;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;


public class InputScript : MonoBehaviour
{
    GameObject m_SelectedObject;

    void Update()
    {
        var activeTouches = Touch.activeTouches;

        if (activeTouches.Count > 0)
        {
            SpatialPointerState primaryTouchData = EnhancedSpatialPointerSupport.GetPointerState(activeTouches[0]);

            if (activeTouches[0].phase == TouchPhase.Began)
            {
                // targetObject is the game object the interaction targets (null if there is none)
                m_SelectedObject = primaryTouchData.targetObject;
            }

            if (activeTouches[0].phase == TouchPhase.Moved)
            {
                if (m_SelectedObject != null)
                {
                    m_SelectedObject.transform.SetPositionAndRotation(primaryTouchData.interactionPosition, primaryTouchData.deviceRotation);
                }
            }

            if (activeTouches[0].phase == TouchPhase.Ended || activeTouches[0].phase == TouchPhase.Canceled)
            {
                m_SelectedObject = null;
            }
        }
    }
}

Is the project template going to be updated as well, or will 0.2.2 work?

The project template has now been uploaded: visionOS_Template0.3.2.zip. It’s been updated to use the new Spatial Pointer Device, but functionally it’s still the same as the 0.2.2 release.


I think there’s a typo in the package version numbers listed above (at least, I was able to install the packages by making these changes):

  • com.unity.xr.visionos should be 0.3.1, not 0.3.2
  • com.unity.polyspatial.visionos should be 0.3.2, not 0.3.1

If it helps anyone else: I needed to sign out of Unity Hub and then sign back in to get the packages to install correctly.


Hello! Thanks for the new update! Just in time for my visit to the Apple Developer Center for a test on the device.

I just ran into an issue that popped up with that release:

  • On both simulator and device: Pressing Play on Xcode will successfully launch the app, but nothing will be rendered.
  • Solution: Stop the app running on Xcode and try relaunching the app directly from the visionOS home menu.

Thanks for the report, we’ve seen some issues like this on our end as well. Can you confirm you’ve disabled the splash screen?

  • There is a known issue with content not appearing when the Splash screen is enabled. Disable the Splash screen in Player Settings > Splash Image.

Ah, I did not catch that. Yes, it is indeed enabled in my case. :+1:


I’m also getting no rendering, and my splash screen is off. I’m simply using the visionOSTemplate-0.3.2 project and changing Volume Mode in settings to Unbounded. It works if I set it to Bounded. Is there anything else that I need to do to get Unbounded mode to work correctly?

Are you building SampleSceneUnbounded or SampleScene? Currently, most unbounded scenes don’t show much content (if any) in the simulator, since ARKit features are not enabled there.

To show the bounded SampleScene in an unbounded app, make sure the volume camera in the scene and the Mixed Reality Volume Mode setting are both set to unbounded.

Hey Dan! Great job with this document, onboarding was very smooth.
Trying to run the PolySpatial samples and - while they work - TMP text shows up fine in the editor, but shows up bork/pink when hitting Play in game mode (which was surprising!) - as well as in the simulator when building.
(This is all after properly importing TMP ofc)

Any thoughts? I don’t know that I’ve ever seen TMP break only in play mode - super interesting.

Thanks!



Try locating the TextSDFSmoothstep in the PolySpatial package and reimporting it.

Also are there any logs in the editor console window?