Some confusion about XRI in Vision Pro

Hello everyone in the Unity community!
Recently, I’ve been using PolySpatial to develop applications for visionOS, while using the XR Interaction Toolkit (XRI) to meet the input requirements of different devices. For Vision Pro, I referred to the “DebugUI” example in the PolySpatial samples, and in my own scene I added several components such as XRInteractionManager, InputActionManager, and XRTouchSpaceInteractor.


(Screenshots attached: 2024-04-01 17.10.03, 17.10.09, 17.10.14)
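
For context, here is a minimal sketch (not my actual project code) of how I check which loaded scene actually owns the XRI pieces I mentioned; the component types are from XRI 2.x, and the class name is just illustrative:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.Interaction.Toolkit.Inputs;

// Illustrative only: logs whether the XRI components I mentioned are present
// in the loaded scenes, so I can see which scene actually owns them.
public class XriSetupCheck : MonoBehaviour
{
    void Start()
    {
        var interactionManager = FindObjectOfType<XRInteractionManager>();
        var inputActionManager = FindObjectOfType<InputActionManager>();

        Debug.Log($"XRInteractionManager found: {interactionManager != null}");
        Debug.Log($"InputActionManager found: {inputActionManager != null}");

        // InputActionManager enables its referenced action assets in OnEnable and
        // disables them in OnDisable; if it lives in a scene that gets unloaded,
        // those actions may end up disabled.
        if (inputActionManager != null)
        {
            foreach (var asset in inputActionManager.actionAssets)
                Debug.Log($"Action asset '{asset.name}' enabled: {asset.enabled}");
        }
    }
}
```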
Unexpected errors occurred during this process. First, when I placed the XRTouchSpaceInteractor in a sub-scene, entering that sub-scene from the start scene caused hand recognition to drift and then fail. Conversely, when I placed the XRTouchSpaceInteractor in the main scene, entering the sub-scene while my fingers were pinched and not yet released caused the device to be read as null. An illustrative sketch of the transition I mean follows below.
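
To make the two failing configurations concrete, here is a placeholder for how I transition into the sub-scene; the scene name and the additive load are assumptions for illustration, not my exact setup:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative placeholder for entering the sub-scene; "SubScene" is not a real
// name from my project. The question is where XRTouchSpaceInteractor should
// live so that it survives this transition.
public class SubSceneLoader : MonoBehaviour
{
    public void EnterSubScene()
    {
        // Case A: XRTouchSpaceInteractor is inside SubScene -> hand recognition
        //         drifts and fails after the load.
        // Case B: XRTouchSpaceInteractor stays in the main scene -> a pinch held
        //         across the load reads the device as null until released.
        SceneManager.LoadSceneAsync("SubScene", LoadSceneMode.Additive);
    }
}
```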

I think I have some misunderstanding of XRTouchSpaceInteractor and the other components I mentioned, which has led me to set up input reading in the wrong scene.

I also read PolySpatial’s input documentation at Input | PolySpatial visionOS | 0.7.1.

It mentions that introducing this component short-circuits other ray and collider detection. Does this mean that once XRTouchSpaceInteractor is added, the conventional XRI Toolkit input is short-circuited and PolySpatial’s input takes over, routed through XRTouchSpaceInteractor?
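
For context on what I mean by PolySpatial input “taking over”: as far as I can tell from the samples, pinch input can also be read directly through the Input System’s enhanced touch API rather than through XRI interactors. A rough sketch of that pattern as I understand it (type and member names are taken from the PolySpatial 0.7.x samples and may differ in other versions):

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

// Rough sketch of reading PolySpatial spatial pointer (pinch) input directly,
// based on the pattern in the PolySpatial samples; not my actual code.
public class PinchLogger : MonoBehaviour
{
    void OnEnable() => EnhancedTouchSupport.Enable();

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            // GetPointerState maps an enhanced-touch Touch to the spatial pointer
            // data PolySpatial fills in (pointer kind, target object, position).
            SpatialPointerState state = EnhancedSpatialPointerSupport.GetPointerState(touch);
            Debug.Log($"phase={touch.phase} kind={state.Kind} target={state.targetObject}");
        }
    }
}
```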

I’m quite confused, because when I remove the XRTouchSpaceInteractor component my app runs normally through XRI. I hope someone can clarify this for me.

Sincerely, thanks!


The root of my confusion lies in the fact that the documentation for this component is missing. :rofl:

:partying_face: Please, everyone, help me :rofl:

I’m facing the same problem, and without any documentation reference I’m just googling mindlessly right now.
