Does XRI provide XRNode.LeftHand, etc. on visionOS?

We have a lot of existing code that uses XRNode.LeftHand and XRNode.RightHand, but these nodes never show up on Vision Pro: the InputTracking.nodeAdded callback simply never receives them. (The nodes for the headset show up just fine, though.)
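For reference, here’s roughly what we’re doing, reduced to a minimal repro sketch (class and handler names are just placeholders, not our actual code). On visionOS the callback fires for the head nodes but never for LeftHand/RightHand:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Minimal repro sketch: on visionOS, nodeAdded fires for the head,
// but never for XRNode.LeftHand / XRNode.RightHand.
public class NodeListener : MonoBehaviour
{
    void OnEnable() => InputTracking.nodeAdded += OnNodeAdded;
    void OnDisable() => InputTracking.nodeAdded -= OnNodeAdded;

    void OnNodeAdded(XRNodeState state)
    {
        Debug.Log($"Node added: {state.nodeType}");

        if (state.nodeType == XRNode.LeftHand || state.nodeType == XRNode.RightHand)
        {
            if (state.TryGetPosition(out var position))
                Debug.Log($"{state.nodeType} at {position}");
        }
    }

    void Update()
    {
        // Polling shows the same thing: only the headset nodes are reported.
        var states = new List<XRNodeState>();
        InputTracking.GetNodeStates(states);
    }
}
```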

I assumed the whole “node” abstraction was there to cover both controllers and hand tracking, but I get the feeling that hands don’t show up as “nodes” because nodes only cover actual, physical devices. Does hand tracking only work with gesture-based classes like XRGrabInteractable? I guess I might be falling back to skeletal hand tracking otherwise, ugh.

No, we do not support XRNode or the “device-based” configurations for the XR Interaction Toolkit on visionOS. The legacy TrackedPoseDriver is the only exception (which is why you’re seeing XRNode.Head work); otherwise, we only expose visionOS input through the Input System and XR Hands packages. We investigated legacy support and determined that the cost did not justify the benefit.
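For reference, that exception looks something like this: a minimal sketch, assuming the XR Legacy Input Helpers package (which provides the legacy TrackedPoseDriver in UnityEngine.SpatialTracking) is installed. You would normally just add the component in the Inspector; the code form is only to make the setup explicit:

```csharp
using UnityEngine;
using UnityEngine.SpatialTracking;

// Minimal sketch of the one legacy path that works on visionOS:
// driving the main camera from head pose via the legacy TrackedPoseDriver.
public class HeadPoseSetup : MonoBehaviour
{
    void Start()
    {
        var driver = Camera.main.gameObject.AddComponent<TrackedPoseDriver>();
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                             TrackedPoseDriver.TrackedPose.Center);
        driver.trackingType = TrackedPoseDriver.TrackingType.RotationAndPosition;
        driver.updateType = TrackedPoseDriver.UpdateType.UpdateAndBeforeRender;
    }
}
```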

Is there a particular reason why you need to use the XRNode APIs? We haven’t encountered any systems that don’t support Input System and XR Hands, but there’s a lot of ground to cover, and we may have missed something!

Well… mostly old code that we are trying to avoid rewriting. We built our own VR interaction layer a few years ago to work across the SteamVR, Pico, and Quest APIs. Pico and Quest now share a UnityXR backend, but we still have the old SteamVR code. Either way, we have all our own systems for recognizing grabs, interactions, selections, teleportation, UI, gestures, etc., and replacing it all would be a big mess.

If that’s the case, I guess I’ll just have to write a backend that emulates what we need from the hand data. Not ideal, but at least I can stop looking for why it isn’t working.

Do you guys only reply to questions on VR?

I’d highly recommend biting the bullet and upgrading to the new Input System. We haven’t officially deprecated the legacy APIs, but they’re eventually going to be phased out. It should also be noted that there isn’t a direct equivalent to an XR controller on visionOS anyway. You can sort of use inputDevicePosition and inputDeviceRotation for this, but the rotation won’t be helpful for pointing. We have a project on the back burner to provide a fallback “aim pose” like you get for Meta hand tracking, but at the moment your best bet is to use the gaze/pinch interaction for visionOS-specific interactions. Of course, I’m not familiar with your app or interaction code, so take this advice with a grain of salt.
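To make that concrete, here’s a minimal sketch of reading that device pose through Input System actions. The control names come from this thread; the exact binding paths depend on the visionOS package version you have installed, so verify them against the package samples rather than taking them as gospel:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Minimal sketch: read the gaze/pinch device pose through Input System actions.
// In the Inspector, bind these actions to the spatial pointer device's
// inputDevicePosition / inputDeviceRotation controls (exact binding paths
// depend on the visionOS package version installed).
public class PinchDevicePoseReader : MonoBehaviour
{
    [SerializeField] InputActionProperty m_DevicePosition;
    [SerializeField] InputActionProperty m_DeviceRotation;

    void OnEnable()
    {
        m_DevicePosition.action.Enable();
        m_DeviceRotation.action.Enable();
    }

    void OnDisable()
    {
        m_DevicePosition.action.Disable();
        m_DeviceRotation.action.Disable();
    }

    void Update()
    {
        var position = m_DevicePosition.action.ReadValue<Vector3>();
        var rotation = m_DeviceRotation.action.ReadValue<Quaternion>();

        // As noted above, the rotation is not useful for pointing; treat this
        // as a grab/manipulation pose rather than an aim pose.
        transform.SetPositionAndRotation(position, rotation);
    }
}
```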

VR is my particular area of expertise, but we do our best to reply to everyone’s questions as quickly as we can. There’s a lot to keep up with! What are you having trouble with? If you DM me a link to your question, I can try to answer it or get it in front of the right person.

What is considered legacy here though? There is no mention in the docs anywhere that I can find. Will there not be “nodes” for controllers in the future? Is there an alternate API we should be using to get the transform and input info on the other supported platforms?

We have our own interaction system that’s been working well enough for many years now. All it really needs is a transform and some input events, like the XRNode stuff provides. Writing a hand-tracked backend to give us that info is much more feasible than replacing our whole interaction system everywhere. To be honest, part of the reason we have our own is that these APIs, whether from Unity or other parties, keep changing every few years. If you are saying that the XRNode parts of the API that give us basic user input are on the chopping block, then that’s a pretty big deal, and we definitely don’t want to rely on them even more.

Whoops! I just found this in my drafts… sorry for the delayed response.

Sorry, I didn’t mean to give you the impression that XRNode is going away anytime soon. I’m referring to all of this as “legacy” because it is distinct from the “new” Input System package. This is what the Active Input Handling player setting controls: do you want Input Manager (old) (a.k.a. legacy :wink:), Input System Package (new), or Both? When I say “going away eventually” I mean really eventually, like if/when we fully replace the legacy input system with the Input System package.

Anyway, since visionOS doesn’t support 6DOF tracked controllers, the only place where XRNode.LeftHand would be relevant would be the legacy Hand API. If I’m not mistaken, the OpenXR package, and Meta Quest hand tracking more broadly, doesn’t support this API either. I think HoloLens and Magic Leap were the only platforms that supported that built-in Hand API. We are trying to standardize around the com.unity.xr.hands package as the common interface for hand tracking across platforms.
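To give a feel for what that common interface looks like, here’s a minimal sketch of reading joint poses from com.unity.xr.hands. It assumes a hand subsystem is running (on visionOS that comes from the Apple visionOS XR plug-in); the class name is just for illustration:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Minimal sketch: read joint poses from com.unity.xr.hands, the package we
// are standardizing on as the common hand-tracking interface.
public class HandPoseReader : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            m_Subsystem = subsystems.Count > 0 ? subsystems[0] : null;
            if (m_Subsystem == null)
                return;
        }

        var leftHand = m_Subsystem.leftHand;
        if (!leftHand.isTracked)
            return;

        // Joint poses are relative to the XR origin (session space).
        var palm = leftHand.GetJoint(XRHandJointID.Palm);
        if (palm.TryGetPose(out Pose palmPose))
            Debug.Log($"Left palm at {palmPose.position}");
    }
}
```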

These are the ways we expose input for visionOS:

  • Head pose via XRInputSubsystem, which services the legacy input APIs you are referring to, as well as the XRHMD and ARHandheldDevice devices for the Input System package.
  • Gaze/pinch gesture via VisionOSSpatialPointerDevice for VR and SpatialPointerDevice for MR. I went into a little more detail on this in another reply on another thread.
  • ARKit skeletal hand tracking via the API surfaced through the com.unity.xr.hands package.
  • The system keyboard can be invoked in MR apps by focusing a text field. It is not currently possible to show the system keyboard in VR (this is an issue we need Apple to resolve).
  • Gamepad, keyboard, mouse(?), and other input peripherals should still route through both the legacy system and Input System package. This is mostly shared code with iOS, so as a general rule, if you were using a peripheral on iOS, it should function the same on visionOS.

Another potential option is to use the data from com.unity.xr.hands to stand in for a 6DOF controller, or to create an “aim pose” like you get on OpenXR/Meta platforms. You might be able to create an input device like this and route it through the legacy input system as a controller, but it would be much easier with the new Input System package. In any case, you’re in for a bit of work to implement a new input mechanism, rather than just “hooking up the pipes” to your existing solution. I’d suggest you fully explore our samples for both VR and MR to see what’s available input-wise on visionOS. But at the moment, at least, you won’t be able to get anything other than head pose from those old APIs in the XR Module.
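As a starting point for that stand-in idea, here’s a sketch (not a complete backend) that derives a controller-like pose and a select event from com.unity.xr.hands data. The joint choices, pinch threshold, and event names are all assumptions you’d want to tune for your own interaction code:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Sketch of a "stand-in controller" built from hand-tracking data: the palm
// joint drives the transform, and thumb-index pinch distance drives a
// select/trigger signal. Thresholds and joint choices are guesses to tune.
public class HandControllerProxy : MonoBehaviour
{
    public event Action selectStarted;
    public event Action selectEnded;

    const float k_PinchThreshold = 0.02f; // metres; tune for your content

    XRHandSubsystem m_Subsystem;
    bool m_Selecting;

    void Update()
    {
        if (m_Subsystem == null)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;

            m_Subsystem = subsystems[0];
        }

        var hand = m_Subsystem.rightHand;
        if (!hand.isTracked)
            return;

        // Joint poses are relative to the XR origin, so parent this object
        // under your XR rig. The palm pose stands in for the controller pose.
        if (hand.GetJoint(XRHandJointID.Palm).TryGetPose(out Pose palmPose))
        {
            transform.localPosition = palmPose.position;
            transform.localRotation = palmPose.rotation;
        }

        // Derive a select/trigger signal from thumb-index pinch distance.
        if (hand.GetJoint(XRHandJointID.ThumbTip).TryGetPose(out Pose thumb) &&
            hand.GetJoint(XRHandJointID.IndexTip).TryGetPose(out Pose index))
        {
            bool pinched = Vector3.Distance(thumb.position, index.position) < k_PinchThreshold;
            if (pinched && !m_Selecting) selectStarted?.Invoke();
            if (!pinched && m_Selecting) selectEnded?.Invoke();
            m_Selecting = pinched;
        }
    }
}
```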

If you would like us to reconsider legacy input support for visionOS hand tracking, please submit an idea on the roadmap so we can consider it alongside other feature requests. In the meantime, you will need to use one of the above options to integrate visionOS input.