I have successfully built and deployed many of the example scenes from visionOSTemplate-1.2.3.zip, and I am especially interested in the MixedReality sample.
In an effort to understand the project setup, I then tried to create a project from scratch that just implements hand tracking the way the MixedReality example does. I followed the “Create a Project from Scratch/Mixed Reality” instructions, then added the pieces I thought were relevant from the MixedReality sample.
Here’s what I did. (I’m on a 2020 MacBook Air M1 running macOS Sonoma 14.4.1.)
- Create a new project in the Unity Hub using the Universal 3D template and Unity 2022.3.26f1
- Go to Project Settings > XR Plug-in Management and click “Install XR Plug-in Management”
- Enable the “Apple visionOS” plug-in provider in the visionOS tab and wait for it to be installed
- When I get the popup warning “This project uses the new input system package, but the native platform backends for the new input system are not enabled in the player settings… Do you want to enable the backends?”, I click “Yes”
- In the “Apple visionOS” section of “XR Plug-in Management”, set the App Mode to “Mixed Reality - Volume or Immersive Space”, and when I am prompted to “Install PolySpatial”, I click “Yes”
- Add a “Hand Tracking Usage Description” and a “World Sensing Usage Description” in the Apple visionOS settings
- Under “Project Validation”, I choose “visionOS MR - Volume” and then “Fix All”. This adds an AR Session to the default SampleScene and disables the Splash Screen
- While I’m still in Project Settings, go to the “Player” section and change the Company Name so that the project will build properly
- Go to Build Settings and change the Build Target to visionOS
[NOTE: this is where the “Create a visionOS Project from Scratch” instructions end, so from here on I am guessing or copying from visionOSTemplate-1.2.3.zip]
- I noticed that the “Input Settings Package” in visionOSTemplate-1.2.3.zip looks different from mine, so I copied “InputSystem.inputsettings.asset” over from the template
- Double-check that the following packages have been installed: com.unity.polyspatial, com.unity.polyspatial.visionos, com.unity.polyspatial.xr, com.unity.xr.hands
- I add a VolumeCamera to the scene and create a Volume Camera Window Configuration (in the Resources folder), and assign it to the VolumeCamera
- I replicate the “XR Origin” setup from the Mixed Reality sample:
  - Empty GameObject called “XR Origin”
  - Main Camera becomes a child of XR Origin. Reset its Transform and change the clipping planes to 0.1/20
  - Add a “Tracked Pose Driver (Input System)” to the Main Camera and replicate the sample’s Position Input and Rotation Input bindings (“centerEyePosition”/“devicePosition” and their rotation equivalents; see the code sketch after this list). [Side note: is this even necessary for hand tracking on visionOS?]
  - Also add the “HandManager” object as a child of XR Origin
  - Copy over the “Hand Visualizer” script and add it to HandManager (a minimal subsystem check I could use to debug this is sketched further below)
  - Copy over the Joint Prefab and assign it to Hand Visualizer > Joint Prefab
- Add a cube to the scene, just to make sure it’s actually rendering something
- Build and deploy to Vision Pro
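For reference, here is roughly what I think that Tracked Pose Driver setup amounts to in code. This is just my own sketch to show what I reproduced in the Inspector: the CameraPoseSetup class name is made up, and the <XRHMD> binding paths are the ones I copied out of the sample, so please correct me if they are wrong.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.XR;

// My own sketch (not from the sample): adds a Tracked Pose Driver to the
// Main Camera and binds it to the HMD pose, mirroring what I set up by hand.
public class CameraPoseSetup : MonoBehaviour
{
    void Awake()
    {
        var driver = gameObject.AddComponent<TrackedPoseDriver>();

        // Position bindings copied from the sample's Tracked Pose Driver.
        var position = new InputAction(binding: "<XRHMD>/centerEyePosition");
        position.AddBinding("<XRHMD>/devicePosition");

        // Rotation equivalents of the same bindings.
        var rotation = new InputAction(binding: "<XRHMD>/centerEyeRotation");
        rotation.AddBinding("<XRHMD>/deviceRotation");

        // Enable explicitly, since these actions were created at runtime
        // rather than coming from an input actions asset.
        position.Enable();
        rotation.Enable();

        driver.positionInput = new InputActionProperty(position);
        driver.rotationInput = new InputActionProperty(rotation);
    }
}
```

If the Tracked Pose Driver is only needed for driving the camera pose and not for hand tracking itself, I’d be happy to drop it.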
The app builds and deploys, but all I can see is a red cube in my room, and my hands aren’t being tracked like they are in the MixedReality sample. What am I doing wrong?
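To narrow this down, here is a minimal check I could drop into the scene to confirm the hand subsystem is even starting on device. Again, this is my own sketch against com.unity.xr.hands (HandTrackingDebug is a made-up name), not something from the template. Does this look like a sensible way to verify it?

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Made-up diagnostic component: attach to any GameObject in the scene and
// watch the Xcode console after deploying to device.
public class HandTrackingDebug : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem != null)
            return;

        // Look for a running hand subsystem; on visionOS it should come from
        // com.unity.xr.hands once the Apple visionOS loader has initialized.
        var subsystems = new List<XRHandSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);
        foreach (var subsystem in subsystems)
        {
            if (!subsystem.running)
                continue;

            m_Subsystem = subsystem;
            m_Subsystem.updatedHands += OnUpdatedHands;
            Debug.Log("XRHandSubsystem found and running.");
            break;
        }
    }

    void OnDestroy()
    {
        if (m_Subsystem != null)
            m_Subsystem.updatedHands -= OnUpdatedHands;
    }

    void OnUpdatedHands(XRHandSubsystem subsystem,
        XRHandSubsystem.UpdateSuccessFlags flags,
        XRHandSubsystem.UpdateType updateType)
    {
        // Very chatty, but enough to prove joint data is arriving.
        LogHand(subsystem.leftHand, "Left");
        LogHand(subsystem.rightHand, "Right");
    }

    static void LogHand(XRHand hand, string label)
    {
        if (!hand.isTracked)
            return;

        var palm = hand.GetJoint(XRHandJointID.Palm);
        if (palm.TryGetPose(out var pose))
            Debug.Log($"{label} palm at {pose.position}");
    }
}
```

If nothing at all gets logged, I assume that would mean the subsystem never started and the problem is in my project settings rather than in the XR Origin/HandManager setup.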
Could I just start from the visionOSTemplate instead? Of course. But I’m worried that whatever I’m missing will come back to haunt me later, so I want to understand how to create the project from scratch.
Thanks in advance