I’m currently adapting the UI of an existing VR application to work with XRI and the Vision Pro. Our previous interactions were built using Poke Interactions. I attempted to use XRI to implement a simple button-press interface, but the Gaze + Pinch interaction (clicking in the simulator with the left-most option selected) doesn’t seem to actually select buttons.
While exploring this, I was able to get the “HandsDemoScene” working on the simulator (by adding an AR Session to the scene), but currently there’s no way to test hand interaction in the simulator. What’s the recommended best practice for building UIs if we don’t have access to a device? Trying to prep for a Developer Lab session, but nervous we won’t have a UI ready without some set of best practices.
I would like to know this too. I haven’t been able to get any input to register in the simulator. I’ve tried some of the XRI samples, and while I can navigate around the scene and control the camera, clicks in the simulator don’t seem to do anything.
If I’m not mistaken, in the simulator you can “Click” and “Pinch/Hold” with the left mouse button. Not sure if that is what you need, but I think that’s the extent of what the simulator provides.
The visionOS Template shows both actions and how they work in the simulator.
You’re saying input works for you with App Mode set to Virtual Reality - Fully Immersive? I’ve run the sample scenes from visionOSTemplate 0.4.3 in Fully Immersive and can’t get left mouse button input to do anything. With App Mode set to Mixed Reality, I can grab objects, and canvas UI elements highlight on hover, but clicking doesn’t do anything.
Input in VR is not functional in 0.4.x and below. It will be available in our next release.
As for using XRI, we provide a sample scene called XRIDebug that shows how to set up our PolySpatial-specific XRTouchSpaceInteractor. Unfortunately, the input action map is missing its PrimaryWorldTouch action, and the script is still set up to use the now-obsolete WorldTouchState struct. This shouldn’t be a huge issue, since you can still set things up manually. Now that the gaze ray comes through the Input System, it may also be possible to use a RayInteractor, but I can’t confirm that it works at the moment.
We may not be able to get the samples fixed for the next release, but we should be able to push a point release shortly afterward that cleans all of this up. Hopefully the slightly broken XRIDebug scene is enough to get you started.
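In the meantime, here’s a rough sketch of the kind of manual setup described above, assuming you create your own input action in place of the missing PrimaryWorldTouch and bind it to whatever world-space touch position control your PolySpatial version exposes. The class, field names, and event below are placeholders for illustration, not the actual XRTouchSpaceInteractor code:

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.InputSystem;

// Sketch: reads a world-space touch/pinch position from an Input System action
// and runs an overlap test on the frame the input arrives.
// The action bound here stands in for the missing PrimaryWorldTouch action;
// point it at the spatial-pointer position control your PolySpatial version exposes.
public class ManualWorldTouch : MonoBehaviour
{
    [SerializeField] InputActionProperty touchPosition;  // should resolve to a world-space Vector3 control
    [SerializeField] float touchRadius = 0.05f;          // how generous the overlap test is, in meters
    [SerializeField] LayerMask interactableLayers = ~0;  // restrict the overlap test if needed
    public UnityEvent<Collider> touched = new UnityEvent<Collider>(); // hook your press/selection logic up here

    void OnEnable()
    {
        touchPosition.action.performed += OnTouch;
        touchPosition.action.Enable();
    }

    void OnDisable()
    {
        touchPosition.action.performed -= OnTouch;
        touchPosition.action.Disable();
    }

    void OnTouch(InputAction.CallbackContext ctx)
    {
        // On visionOS the position only arrives together with the pinch/poke,
        // so the overlap test has to happen right here rather than in Update().
        Vector3 worldPos = ctx.ReadValue<Vector3>();
        foreach (var hit in Physics.OverlapSphere(worldPos, touchRadius, interactableLayers))
            touched.Invoke(hit);
    }
}
```

Whatever that event drives (pressing a button, selecting an interactable) is up to you; the point is just that the overlap test runs on the same frame the input arrives.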
This is expected. The template is set up assuming you’re building for MR, so the “expand view” button is for transitioning from the shared space to fully immersive MR. In VR mode you’re already in a fully immersive space, and the VolumeCamera component isn’t used for anything there, so that button won’t have any effect.
I think you mean XR Direct Interactor for that first one? There is no Grab Interactor.
We had some trouble with Direct Interactor due to the way input works on visionOS. The Direct Interactor expects that you have a continuously tracked controller or hand pose which can update its position ahead of the interaction. So when your select/activate input comes along, your interactor is already overlapping with the interactable. In the case of visionOS, device position is only provided on the same frame that you pinch/poke, so the overlap test can be inconsistent.
We recommend that you use the XRTouchSpaceInteractor. Some of this will be cleaned up in our next release, but we won’t have a 100% working sample. Stay tuned… I’ll update this thread when everything is ready.
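If you want to see the timing behaviour described above for yourself, a small diagnostic like this can help. The two actions are placeholders for whatever position and pinch/select bindings your rig uses:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Diagnostic sketch: logs the frame on which the pointer position and the
// select/pinch input arrive, to show whether the pose is tracked continuously
// (controller-style) or only delivered together with the pinch (visionOS-style).
public class InputTimingProbe : MonoBehaviour
{
    [SerializeField] InputActionProperty pointerPosition; // e.g. device/hand position (a Vector3 control)
    [SerializeField] InputActionProperty select;           // e.g. pinch / trigger

    void OnEnable()
    {
        pointerPosition.action.performed += OnPosition;
        select.action.performed += OnSelect;
        pointerPosition.action.Enable();
        select.action.Enable();
    }

    void OnDisable()
    {
        pointerPosition.action.performed -= OnPosition;
        select.action.performed -= OnSelect;
        pointerPosition.action.Disable();
        select.action.Disable();
    }

    void OnPosition(InputAction.CallbackContext ctx) =>
        Debug.Log($"position updated on frame {Time.frameCount}: {ctx.ReadValue<Vector3>()}");

    void OnSelect(InputAction.CallbackContext ctx) =>
        Debug.Log($"select fired on frame {Time.frameCount}");
}
```

With a tracked controller you should see the position log on every frame it moves; on visionOS you’ll typically only see it on the same frame the select fires, which is why the Direct Interactor’s ahead-of-time overlap test has nothing up to date to work with.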