We’re porting a VR title that will run in Mixed Reality mode (although we’re also considering shipping it as Fully Immersive). What’s the best way to get a left hand / right hand position? I’d like to use it as input to puppet our avatars.
I understand gestures like the pinch will give you two hand positions, but those only work while the gesture is active.
I’m also aware of the full hand tracking API, which we could potentially use to get the hand mesh and then pick a point on it.
Is there anything simpler? If not, is there a recommended method for pulling this off with the hand mesh API?
If you use an Unbounded volume camera (which uses an ImmersiveSpace in mixed reality), you can use ARKit hand tracking, which is exposed via the com.unity.xr.hands package. If you check out the MixedReality scene in the PolySpatial package samples, you can find an example of how this works. You will need to enable the visionOS XR loader in the XR Plugin Management settings and include an AR Session in your scene.
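As a quick sanity check for that setup, here's a minimal sketch (the class name is just for illustration) that looks for an XRHandSubsystem at runtime, which is a fast way to confirm the loader and AR Session are wired up:

```csharp
// Minimal sketch: confirm that an XRHandSubsystem has been created.
// Assumes com.unity.xr.hands is installed and the visionOS XR loader is enabled.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

public class HandTrackingCheck : MonoBehaviour
{
    void Start()
    {
        var subsystems = new List<XRHandSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);

        if (subsystems.Count == 0)
        {
            // Note: the subsystem can take a few frames to come up on device,
            // so polling in Update is more robust than this one-shot check.
            Debug.LogWarning("No XRHandSubsystem found; check XR Plugin Management and the AR Session in your scene.");
        }
        else
        {
            Debug.Log($"XRHandSubsystem found (running: {subsystems[0].running})");
        }
    }
}
```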
Thanks for the reply! I’m aware of the hand tracking API, but I don’t see anything in it that gives me the left and right hand positions the way the XRNode API does for XR controllers. Could you be more specific about how I would use the hand tracking API to do that? Ideally I’m looking for something other than pulling the hand mesh and grabbing a vertex off of it.
Similar to my reply on another issue, I’ll suggest you check out Assets/Samples/PolySpatial/MixedReality/Scripts/PinchSpawn.cs from the package samples. It shows how to get the left/right hands and spawn objects wherever you pinch. You can read more about these APIs in the XR Hands documentation.
To get the root pose of the left and right hands, subscribe to updatedHands and read the rootPose of the subsystem’s leftHand and rightHand properties. That gives you the wrist pose for each hand. You can refer to the PinchSpawn or HandVisualizer scripts for examples of how to use the full skeletal data.
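As a rough sketch (the class name and comments are just for illustration, not the sample code), subscribing and reading the root poses could look something like this:

```csharp
// Rough sketch: read the left/right wrist root poses to drive avatar hands.
// Assumes the visionOS XR loader, AR Session, and com.unity.xr.hands are set up as above.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

public class HandRootPoseTracker : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        // Lazily grab the subsystem once it has been created, then subscribe.
        if (m_Subsystem == null)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;

            m_Subsystem = subsystems[0];
            m_Subsystem.updatedHands += OnUpdatedHands;
        }
    }

    void OnDisable()
    {
        if (m_Subsystem != null)
            m_Subsystem.updatedHands -= OnUpdatedHands;

        m_Subsystem = null;
    }

    void OnUpdatedHands(XRHandSubsystem subsystem,
                        XRHandSubsystem.UpdateSuccessFlags updateSuccessFlags,
                        XRHandSubsystem.UpdateType updateType)
    {
        // Root poses are reported relative to the XR Origin.
        if ((updateSuccessFlags & XRHandSubsystem.UpdateSuccessFlags.LeftHandRootPose) != 0)
        {
            Pose leftWrist = subsystem.leftHand.rootPose;
            // e.g. drive the avatar's left hand from leftWrist.position / leftWrist.rotation
        }

        if ((updateSuccessFlags & XRHandSubsystem.UpdateSuccessFlags.RightHandRootPose) != 0)
        {
            Pose rightWrist = subsystem.rightHand.rootPose;
            // e.g. drive the avatar's right hand from rightWrist.position / rightWrist.rotation
        }
    }
}
```

If you later need more than the wrist, XRHand.GetJoint with an XRHandJointID (and TryGetPose on the returned joint) gives you the full skeleton, which is the data the HandVisualizer sample works with.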