See my cheeky meme above for whether or not mode switching works now… it does not.
Edit: nevermind, I think I see what you’re asking now… Does gaze/pinch input work in VR? Yes!
But I think what you’re saying in the next sentence is that when you switch your app mode to VR in Project Settings, input stops working. That is because, as with rendering, pinch/gaze input works differently in VR and MR, so you will need to change how you implement input depending on the app mode.
In MR, we get a targeted entity for a given pinch gesture, which is especially useful in bounded mode, when you don’t have access to the gaze vector. Without this, you would need to do a sphere cast around the interaction position to find what object was being interacted with, and in some cases that would give you the wrong result.
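For illustration, that fallback would look something like the sketch below: an overlap check around the pinch’s interaction position. This is not the recommended path, just a rough sketch of what you’d be stuck doing without the targeted entity — the radius and the closest-collider heuristic are arbitrary choices on my part.

```csharp
using UnityEngine;

public static class PinchFallback
{
    // Rough sketch: find the collider nearest to the pinch's interaction position.
    // The 0.1f radius is an arbitrary guess, and ambiguity between overlapping
    // colliders is exactly the "wrong result" problem mentioned above.
    public static Collider FindPinchTarget(Vector3 interactionPosition, float hitRadius = 0.1f)
    {
        Collider closest = null;
        var closestDistance = float.MaxValue;
        foreach (var collider in Physics.OverlapSphere(interactionPosition, hitRadius))
        {
            var distance = Vector3.Distance(interactionPosition, collider.ClosestPoint(interactionPosition));
            if (distance < closestDistance)
            {
                closestDistance = distance;
                closest = collider;
            }
        }

        return closest;
    }
}
```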
In VR, however, there are no RealityKit entities, so this isn’t an option. Thankfully, because you’re always in an immersive space, you always get a gaze vector on the first frame of the pinch, which means you can do a regular old raycast in Unity. Furthermore, the machinery for finding the GameObject backing a given RealityKit entity all lives in the PolySpatial package, so VR input needs to function independently.
This is why we have a bit of a fragmented input story for visionOS. For Mixed Reality, you need to use SpatialPointerDevice, which provides the data about pinch gestures, including the targeted entity.
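Here’s roughly what reading the targeted object looks like, going from memory of the pattern in the PolySpatial samples — double check the package docs, since EnhancedSpatialPointerSupport and the targetObject field are how I remember the API, not something I verified just now:

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

public class PinchTargetReader : MonoBehaviour
{
    void OnEnable()
    {
        // SpatialPointerDevice surfaces pinches through the enhanced touch API
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            // GetPointerState returns the spatial data backing the touch, including
            // the targeted RealityKit entity mapped back to its GameObject
            // (member names as I recall them from the samples)
            var pointerState = EnhancedSpatialPointerSupport.GetPointerState(touch);
            if (touch.phase == TouchPhase.Began && pointerState.targetObject != null)
                Debug.Log($"Pinch began on {pointerState.targetObject.name}");
        }
    }
}
```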
For Virtual Reality, you need to use VisionOSSpatialPointerDevice, which provides the same pinch gesture data, minus the targeted entity. As @dariony points out, the VR samples in com.unity.xr.visionos (the Apple visionOS XR Plugin package) show you how to use VisionOSSpatialPointerDevice with XRI, Unity UI, and regular C# code. When pivoting to VR, you’ll need to update your input setup, but it’s not that different, and the samples should be your guide.
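For reference, a VR-side sketch might look something like this. I’m writing it from memory, so treat the control path, the VisionOSSpatialPointerState field names, and the phase enum as assumptions, and lean on the package samples for the real thing:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.VisionOS;
using UnityEngine.XR.VisionOS.InputDevices;

public class GazePinchRaycaster : MonoBehaviour
{
    // Bind this action to the primary spatial pointer on VisionOSSpatialPointerDevice
    // (or reuse the action asset that ships with the package samples)
    [SerializeField]
    InputActionProperty m_PointerAction;

    void OnEnable() => m_PointerAction.action.Enable();
    void OnDisable() => m_PointerAction.action.Disable();

    void Update()
    {
        var state = m_PointerAction.action.ReadValue<VisionOSSpatialPointerState>();
        if (state.phase != VisionOSSpatialPointerPhase.Began)
            return;

        // The gaze ray is only provided on the frame the pinch begins, so raycast
        // immediately and cache the hit if you need it for the rest of the gesture
        var gazeRay = new Ray(state.startRayOrigin, state.startRayDirection);
        if (Physics.Raycast(gazeRay, out var hit))
            Debug.Log($"Pinch began over {hit.collider.name}");
    }
}
```

The key difference from the MR version is that you resolve the target yourself with a raycast instead of reading it off the pointer state.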
Thanks for reaching out, and good luck!