VR Sample -- Vision Pro Build

When setting up a build for AVP / Fully Immersive mode using Unity’s VR sample, as shown in this video:

My hand interactions don’t work. I have added all the listed packages from this documentation:

  • [visionOS plug-in]
  • [XR Interaction Toolkit]
  • [XR Core Utilities]
  • [Input System]
  • [VR project template]
  • [Hand tracking]

What am I missing in this setup to get this working?

Hey there! Sorry to hear you’re having trouble.

What exactly are you trying to do with hand interactions? Are you using XR Interaction Toolkit components? UI? Skeletal tracking with individual joints? The template in the section of the video that you linked is the standard VR template, which isn’t actually configured to work with visionOS out of the box. I recommend you try out the visionOS Template which is set up specifically for visionOS.

If you are still interested in using the VR template, here are a few things to check to confirm we’ve covered the basics:

  • Apple visionOS XR loader is checked in XR Plugin Management
  • Hand tracking usage description set in visionOS Settings (if you are using skeletal tracking)
  • No errors or warnings in Project validation (under XR Plugin Management in Project Settings)
  • AR Session and HandVisualizer components in the scene (if you’re using skeletal tracking)
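
If you want a quick way to confirm that skeletal tracking data is actually coming through, independent of any interactors, something like the sketch below works. This is just a sanity check I’d write, assuming the com.unity.xr.hands package is installed; the component itself isn’t part of the sample.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Sanity check: logs the right index-tip pose while skeletal hand
// tracking is running. Assumes com.unity.xr.hands is installed.
public class HandTrackingSanityCheck : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            // Grab the first available hand subsystem, if any.
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            m_Subsystem = subsystems.Count > 0 ? subsystems[0] : null;
            if (m_Subsystem == null)
                return;
        }

        var rightHand = m_Subsystem.rightHand;
        if (!rightHand.isTracked)
            return;

        var indexTip = rightHand.GetJoint(XRHandJointID.IndexTip);
        if (indexTip.TryGetPose(out var pose))
            Debug.Log($"Right index tip: {pose.position}");
    }
}
```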

Have you tried the package samples in com.unity.xr.visionos? Does the sample scene work as expected in this project?

Let us know if any of these suggestions worked for you. Good luck!

Thanks for the quick reply. I’ve been getting acclimated to all the examples, and I’m finding them all very helpful.

I was attempting to get the VR Template working with visionOS because I had seen the above video, and the dev made it seem that it would more or less work if I included the right packages and built for visionOS. I can verify that I’ve done everything you outlined above – thanks, that was helpful. I needed to fix the camera offset in the project so my hand meshes don’t float above my head, and now the hand models appear as expected and move with my hands, but none of the hand interactions work within the environment: buttons, interactables, menus. I’m guessing I’m missing something with regard to how the inputs are being handled.

The visionOS XR Plugin Samples I found in the package were somewhat helpful, although none of the grabbing interactions work for me. I can pinch to set a transform from a raycast of my gaze on anything the raycast collides with, but I can’t interact with the two blue cubes or the green sphere in the scene, and it appears that I should be able to manipulate them in this sample. The menu buttons highlight on gaze, but I can’t interact with the slider.

I have dug into the visionOS Template, which has some great stuff in it, but I’m finding it a little confusing how interactions are handled in these scenes vs. the PolySpatial Samples that I’ve also been looking at. These seem to be two different approaches.

In the visionOS Template, the bounded interactions with the objects are handled differently than in the unbounded scene. I’m confused why in the bounded scene I’m able to rotate objects when I interact with them, but in the unbounded scene, even with Track Rotation enabled on the XR Grab Interactable component, the objects only follow position (and not rotation). Is this a bug, or is this intended? The bounded version of the scene doesn’t use an XR Grab Interactable component, and I’m guessing it gets its movement from the Bounded Object Behavior script? Using that version I’m able to rotate objects, though.

Looking further at the PolySpatial samples (specifically the Manipulation scene), there is an entirely different Manipulation Manager script that looks like it handles interactions with the objects. This interaction allows rotation.

The XRIDebug Scene again uses the XR Grab Interactable, but doesn’t track rotations.

I know there is a lot to unpack here, but I’m trying to find a straightforward example of the best way to handle interactions. I’m guessing the recommended way is the new visionOS Template (bounded and unbounded examples), but I’m struggling to understand why I’m unable to rotate objects in the unbounded scene.

I’m giving up on the VR Template for now… I’m guessing at some point someone will explain clearly the process for converting inputs from existing VR/XRI setups to PolySpatial (visionOS) inputs. :slight_smile:

Yep. I get where you’re coming from. We’re still working hard on our samples, and this is great feedback. You can probably tell that the VR sample is much more “programmer art” than the more polished PolySpatial Samples and templates, but I think the VR sample is honestly the best place to start if you’re targeting VR mode. There are some subtle differences in how input works between the two modes, which will just be confusing as you are getting started.

You should definitely be able to interact with the cubes. When you gaze at a cube and pinch your fingers, you should be able to grab it and then move/rotate your hand to move it around. Is that not working for you? The green sphere is not interactable; it is a visual-only indicator of where the XRRayInteractor’s transform is, and it should track the location of your pinched fingers.
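
For reference, that grab behavior mostly comes down to a few flags on the XR Grab Interactable. Here’s a rough sketch of the setup in code form, just to show which properties matter; this isn’t how the sample itself does it, only an illustration of the relevant XRI properties.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustration only: the XRGrabInteractable flags that control whether a
// grabbed object follows the interactor's position and rotation.
public class GrabbableCubeSetup : MonoBehaviour
{
    void Awake()
    {
        // XRGrabInteractable requires a Rigidbody; AddComponent will add one
        // automatically if the GameObject doesn't already have it.
        var grab = gameObject.AddComponent<XRGrabInteractable>();
        grab.trackPosition = true;
        grab.trackRotation = true;   // needed for "rotate your hand to rotate the object"
        grab.movementType = XRBaseInteractable.MovementType.Instantaneous;

        // If you also want two-handed grabbing (e.g. with XRGeneralGrabTransformer),
        // the interactable has to allow more than one interactor to select it.
        grab.selectMode = InteractableSelectMode.Multiple;
    }
}
```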

That’s surprising, actually. Are you sure this is a VR build? You don’t get gaze input before you pinch your fingers. You do trigger a highlight state when you click the button, but you shouldn’t see a hover state prior to the pinch…

The slider should work, but it’s a little finicky. It’s kind of just a limitation of our UI framework, but you need to keep the drag “on top” of the slider element or the nib doesn’t move. The way the slider works is that your gaze is used at first to select the nib, and then you move your hand side to side to “deflect” that original gaze ray along the length of the slider. There’s a red ray in the sample scene (actually there are two, but I’m talking about the one that moves :upside_down_face:) that shows this input vector. This user shared a more useful way to show this ray to users, which you might want to try. Either way, you need to make sure you keep the moving red ray in the sample on the slider as you interact with it.
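
If it helps to see what that input vector is doing in your own build, you can draw the ray yourself. This is just a sketch of one way to do it with a LineRenderer on the same GameObject as the XRRayInteractor; it’s not how the sample draws its red rays.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch: draw the ray interactor's current ray as a line so you can see
// where the "deflected" input vector is pointing while you drag the slider.
[RequireComponent(typeof(XRRayInteractor))]
[RequireComponent(typeof(LineRenderer))]
public class InputRayDebugLine : MonoBehaviour
{
    public float length = 2f;

    XRRayInteractor m_Interactor;
    LineRenderer m_Line;

    void Awake()
    {
        m_Interactor = GetComponent<XRRayInteractor>();
        m_Line = GetComponent<LineRenderer>();
        m_Line.positionCount = 2;
        m_Line.startWidth = m_Line.endWidth = 0.005f;
    }

    void LateUpdate()
    {
        // rayOriginTransform is where XRRayInteractor casts from; fall back
        // to this transform if it hasn't been assigned.
        var origin = m_Interactor.rayOriginTransform != null
            ? m_Interactor.rayOriginTransform
            : transform;

        m_Line.SetPosition(0, origin.position);
        m_Line.SetPosition(1, origin.position + origin.forward * length);
    }
}
```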

Thank you! This was very helpful and I appreciate your response!

I realized that clearing out the project validation warnings was breaking the interactions. Specifically, when I chose Fix All, it disabled the XR Controller component, which the validation complains about because controller inputs aren’t supported… which makes sense in the visionOS context. Fix All also removed the text materials, so I didn’t even notice that those menus had text until I just left them as is. :wink:

Menu interactions and the slider work now, and I understand what the green sphere represents with regard to pinch… and I can pick up (and rotate!) the blue cubes… BUT only after I placed them above the world origin. They come into the project placed below the ground plane, and my ray won’t hit them, I’m guessing because of the mesh that is being generated.

With regard to rotating interactables, what is the difference between this setup in the visionOS VR sample and the visionOS “unbounded” Template… which doesn’t seem to let me rotate interactables – specifically the ‘Stack Cube’ prefab. I can see that there is an XR General Grab Transformer on the interactable in the visionOS VR sample. This looks like it handles single/multiple grabbing and scaling? I’m really confused why the unbounded interactables don’t rotate for me even with Track Rotation enabled.

I’m unable to resize either of the two blue cubes in this sample scene in the Simulator with Allow Two Handed Scaling enabled for the XR General Grab Transformer script component.
Is additional configuration/scripting required to enable two-handed scaling for XR Grab Interactables using XR Ray Interactors?

Can anyone confirm if I should be able to rotate an interactable using the XR Grab Interactable script with track rotation enabled? Am I missing something? I seem to only be able to translate a grabbed object.

The documentation on the PolySpatial Template is terrible.

You should be able to rotate an object using the XR Grab Interactable, but there is something about the way the PolySpatial Template is set up that keeps this from functioning as you’d expect.

The reason for this (as far as I can tell) is that the object being used for the interaction is not your hand, as you would expect. It is instead the XRTouchSpaceInteractor component. That component does not rotate, so your object will not rotate. If you take the XR Touch Space Interactor script and instead apply it to your Hands prefab, you will get rotation. While this doesn’t completely answer your question, I hope it helps you as you try to figure it out.
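
To make that concrete, here’s one workaround sketch (the component and field names here are mine, not from the template): keep the interactor where it is, but copy a rotation onto its attach transform each frame from something that does rotate with your hand. Since XR Grab Interactable follows the interactor’s attach transform, Track Rotation then has a rotating source to work with.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Workaround sketch: copy the rotation of a tracked transform (a hand joint,
// the Hands prefab root, etc.) onto an interactor's attach transform so that
// XR Grab Interactable's Track Rotation has a rotating source to follow.
public class CopyRotationToInteractor : MonoBehaviour
{
    [Tooltip("Any transform that actually rotates with your hand.")]
    public Transform rotationSource;

    [Tooltip("The interactor whose attach transform should pick up the rotation.")]
    public XRBaseInteractor interactor;

    void LateUpdate()
    {
        if (rotationSource == null || interactor == null || interactor.attachTransform == null)
            return;

        interactor.attachTransform.rotation = rotationSource.rotation;
    }
}
```

You’d point rotationSource at the hand joint or Hands prefab transform mentioned above, and interactor at whichever interactor the scene is actually using.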

We are trying to understand why the PolySpatial Template was set up the way it was, and why there is no explanation of why things were done in certain ways. This thread has been a little helpful, but we are still struggling to figure out how to get the functionality we want just by exploring the template scene, especially since it came with no explanation of the thought process behind the decisions to do things so radically differently than other XRI interaction example scenes.

+1 here since I’m trying to make these things work

I can use the XR Grab Interactable to move an object, but the multiple selection behaviour isn’t working, and neither is the two-handed general grab transformer.

Since I cannot rotate the object, as you describe, where can I put the XRTouchSpaceInteractor component to make this work? Where is the Hands prefab you’re mentioning?