Does XRI provide XRNode.LeftHand, etc on Vision?

Whoops! I just found this in my drafts… sorry for the delayed response.

Sorry, I didn’t mean to give you the impression that XRNode is going away anytime soon. I’m referring to all of this as “legacy” because it is distinct from the “new” Input System package. This is what the Active Input Handling player setting controls: do you want Input Manager (old) (a.k.a. legacy :wink:), Input System Package (new), or Both? When I say “going away eventually” I mean really eventually, like if/when we fully replace the legacy input system with the Input System package.

Anyway, since visionOS doesn’t support 6DOF tracked controllers, the only place where XRNode.LeftHand would be relevant is the legacy Hand API. If I’m not mistaken, the OpenXR package, and Meta Quest hand tracking more broadly, doesn’t support that API either. I think HoloLens and Magic Leap were the only platforms that supported the built-in Hand API. We are trying to standardize on the com.unity.xr.hands package as the common interface for hand tracking across platforms.
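
If it helps to see the shape of that package, here’s a minimal sketch of polling joint data through com.unity.xr.hands. The class name and the thumb/index distance check are just illustrative, not something the package ships:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Minimal sketch: poll the XRHandSubsystem from com.unity.xr.hands and read joint poses.
// The class name and the 2cm "pinch-ish" threshold are arbitrary examples.
public class HandJointReader : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        // Grab the running hand subsystem created by the platform's XR loader.
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;

            m_Subsystem = subsystems[0];
        }

        var hand = m_Subsystem.rightHand;
        if (!hand.isTracked)
            return;

        // Joint poses are reported relative to the XR origin / session space.
        var indexTip = hand.GetJoint(XRHandJointID.IndexTip);
        var thumbTip = hand.GetJoint(XRHandJointID.ThumbTip);
        if (indexTip.TryGetPose(out var indexPose) && thumbTip.TryGetPose(out var thumbPose)
            && Vector3.Distance(indexPose.position, thumbPose.position) < 0.02f)
        {
            Debug.Log("Thumb and index tips are touching");
        }
    }
}
```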

These are the ways we expose input for visionOS:

  • Head pose via XRInputSubsystem, which will service the legacy input APIs you are referring to, as well as the XRHMD and ARHandheldDevice devices for the Input System package (there’s a small head-pose sketch right after this list).
  • Gaze/pinch gesture via VisionOSSpatialPointerDevice for VR and SpatialPointerDevice for MR (see the pointer sketch after this list). I went into a little more detail on this in another reply on another thread.
  • ARKit skeletal hand tracking via the API surfaced through the com.unity.xr.hands package (sketched above).
  • The system keyboard can be invoked in MR apps by focusing a text field. It is not currently possible to show the system keyboard in VR (this is an issue we need Apple to resolve).
  • Gamepad, keyboard, mouse(?), and other input peripherals should still route through both the legacy system and Input System package. This is mostly shared code with iOS, so as a general rule, if you were using a peripheral on iOS, it should function the same on visionOS.
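
For the head pose, here’s roughly what the legacy XR Module path looks like (a minimal sketch; the script name is just an example):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Minimal sketch: read the head pose through the legacy XR Module APIs.
// On visionOS, this head pose is about all those older APIs will give you.
public class HeadPoseReader : MonoBehaviour
{
    void Update()
    {
        var head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.isValid
            && head.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 position)
            && head.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
        {
            transform.SetPositionAndRotation(position, rotation);
        }
    }
}
```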

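And for the gaze/pinch path, here’s a sketch of reading it through the Input System package. I’m deliberately not hard-coding any control paths; bind the actions in the Inspector to controls on VisionOSSpatialPointerDevice (VR) or SpatialPointerDevice (MR) as shown in the package samples:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch only: the action bindings are left to the Inspector so you can point them
// at whichever pointer controls the visionOS samples use for position and selection.
public class SpatialPointerReader : MonoBehaviour
{
    [SerializeField] InputActionReference m_PointerPosition; // bind to a Vector3 control
    [SerializeField] InputActionReference m_PointerSelect;   // bind to a button-like control

    void OnEnable()
    {
        if (m_PointerPosition == null || m_PointerSelect == null)
            return;

        m_PointerPosition.action.Enable();
        m_PointerSelect.action.Enable();
    }

    void Update()
    {
        if (m_PointerSelect == null || !m_PointerSelect.action.IsPressed())
            return;

        var position = m_PointerPosition.action.ReadValue<Vector3>();
        Debug.Log($"Pinching at {position}");
    }
}
```
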
There’s another potential option, which is to use the data from com.unity.xr.hands to stand in for a 6DOF controller, or to create an “aim pose” like you get on OpenXR/Meta platforms. I can see a way where you might be able to create an input device like this and route it through the legacy input system as a controller, but it would be much easier with the new Input System package (rough sketch below). In any case, you’re in for a bit of work to implement a new input mechanism, rather than just “hooking up the pipes” to your existing solution. I’d suggest you fully explore our samples for both VR and MR to see what’s available input-wise on visionOS. But at the moment, at least, you won’t be able to get anything other than head pose from those old APIs in the XR Module.
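
To be clear, nothing below ships in any of our packages; it’s just a sketch of the custom-device pattern in the Input System package, with made-up names like HandAimDevice and placeholder values where you’d plug in poses computed from com.unity.xr.hands data:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;
using UnityEngine.InputSystem.Layouts;
using UnityEngine.InputSystem.LowLevel;
using UnityEngine.InputSystem.Utilities;

// Hypothetical state layout for a hand-driven "aim pose" device. Nothing here is a
// shipping API; it's the general pattern for custom devices in the Input System package.
public struct HandAimState : IInputStateTypeInfo
{
    public FourCC format => new FourCC('H', 'A', 'I', 'M');

    [InputControl(layout = "Vector3")]
    public Vector3 aimPosition;

    [InputControl(layout = "Quaternion")]
    public Quaternion aimRotation;

    [InputControl(layout = "Button")]
    public float pinch;
}

[InputControlLayout(stateType = typeof(HandAimState), displayName = "Hand Aim Device")]
public class HandAimDevice : InputDevice
{
    public Vector3Control aimPosition { get; private set; }
    public QuaternionControl aimRotation { get; private set; }
    public ButtonControl pinch { get; private set; }

    protected override void FinishSetup()
    {
        base.FinishSetup();
        aimPosition = GetChildControl<Vector3Control>("aimPosition");
        aimRotation = GetChildControl<QuaternionControl>("aimRotation");
        pinch = GetChildControl<ButtonControl>("pinch");
    }
}

// Feeds the device each frame. Compute the pose/pinch from XRHandSubsystem joints
// (see the com.unity.xr.hands sketch above); placeholder values are used here.
public class HandAimDeviceDriver : MonoBehaviour
{
    HandAimDevice m_Device;

    void OnEnable()
    {
        InputSystem.RegisterLayout<HandAimDevice>();
        m_Device = InputSystem.AddDevice<HandAimDevice>();
    }

    void OnDisable()
    {
        if (m_Device != null)
            InputSystem.RemoveDevice(m_Device);
    }

    void Update()
    {
        var state = new HandAimState
        {
            aimPosition = Vector3.zero,        // e.g. a point derived from the palm/wrist joints
            aimRotation = Quaternion.identity, // e.g. a rotation derived from the palm pose
            pinch = 0f                         // e.g. 1 when thumb and index tips touch
        };
        InputSystem.QueueStateEvent(m_Device, state);
    }
}
```

From there you could bind actions (XRI or your own) to the new device’s controls, e.g. an aimPosition/aimRotation pair plus a pinch button, but again, this is very much the do-it-yourself route.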

If you would like us to reconsider legacy input support for visionOS hand tracking, please submit an idea on the roadmap so we can consider it alongside other feature requests. In the meantime, you will need to use one of the above options to integrate visionOS input.