Hand Gestures Accuracy Problem

I have tested the XR hand gesture sample scene on both the Apple Vision Pro (AVP) and Quest 3, and interestingly the results on Quest 3 are much better than on AVP; some gestures, such as thumbs up or thumbs down, are almost impossible to detect on AVP. Is there any extra setup needed on AVP to get better accuracy?
I am using
Unity 2022.3.20f1
Apple visionOS XR Plugin 1.1.3
XR Hands 1.4.0
and
Unity 2020.3.14f1
OpenXR 1.9.1
OpenXR Meta 1.0.1
XR Hands 1.4.0

You can see the difference here:
AVP:
[AVP (1) (1).mp4 - Google Drive]
Quest 3:
[Quest3.mp4 - Google Drive]

Hi there! This is a known issue with hand gestures. Vision Pro is a little more “aggressive” about reporting joints as not-tracking, where Quest will still give you a “best guess” about where the joints are if they are occluded or not tracking well. As a result, gestures which rely on occluded joints (like your knuckles when giving a “thumbs up”) will fail to be recognized.
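You can see this difference for yourself by logging the per-joint tracking state that each platform reports. Here is a minimal sketch using the XR Hands API (the subsystem lookup and joint iteration are standard XR Hands usage; component and log wording are my own):

```csharp
using UnityEngine;
using UnityEngine.XR.Hands;
using UnityEngine.XR.Management;

// Attach to any GameObject in the scene. Each frame, logs every joint
// on the right hand whose pose is NOT being reported as tracked.
// On Quest you will typically see few or no entries; on AVP, occluded
// joints (e.g. curled fingers during a thumbs-up) show up here.
public class JointTrackingLogger : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        // Lazily grab the hand subsystem from the active XR loader.
        m_Subsystem ??= XRGeneralSettings.Instance?.Manager?.activeLoader
            ?.GetLoadedSubsystem<XRHandSubsystem>();

        if (m_Subsystem == null || !m_Subsystem.rightHand.isTracked)
            return;

        for (var i = XRHandJointID.BeginMarker.ToIndex();
             i < XRHandJointID.EndMarker.ToIndex();
             i++)
        {
            var joint = m_Subsystem.rightHand.GetJoint(XRHandJointIDUtility.FromIndex(i));
            if ((joint.trackingState & XRHandJointTrackingState.Pose) == 0)
                Debug.Log($"Joint {joint.id} has no tracked pose this frame");
        }
    }
}
```

A gesture recognizer that requires a tracked pose for every joint it samples will fail whenever any of these log lines appear, which is why thumbs-up breaks on AVP.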

We’re working on a fix for this on the Unity side, but in the meantime you may be able to re-construct certain gestures to work with only the joints that AVP is giving you in those circumstances. I’ll ping this thread again when we have more info to share.

Thanks for the update. I'll keep testing with different gestures that work well on AVP for now.

Howdy! I ended up finding a workaround that I shared on another thread just now. If you just ignore the tracking state from ARKit, there are estimated poses hiding behind the curtain! I haven’t tested the gesture recognition stuff with this yet, but simply modifying VisionOSHandProvider to always report valid tracking for each joint gives me a pretty reasonable pose for each one, even if it is not visible. We’ll be pushing an update soon to take advantage of these poses, but in the meantime you may be able to try this as a temporary fix.
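For anyone who wants to try this before the update ships, the change is roughly the following. This is a hypothetical sketch of the kind of edit involved, not the actual plugin source: it assumes the provider builds joints with `XRHandProviderUtility.CreateJoint`, and the variable names (`arkitJoint`, `handJoints`, `jointId`, `pose`) are placeholders; the real code in `VisionOSHandProvider` will differ by plugin version.

```csharp
// Inside the joint-filling loop of VisionOSHandProvider (hypothetical
// sketch; actual member names vary by plugin version).
//
// Original behavior (roughly): derive the tracking state from ARKit's
// per-joint tracked flag, so occluded joints report no valid pose.
//
//   var trackingState = arkitJoint.isTracked
//       ? XRHandJointTrackingState.Pose
//       : XRHandJointTrackingState.None;
//
// Workaround: ignore ARKit's flag and always report the pose as valid,
// which surfaces ARKit's estimated pose for occluded joints.
var trackingState = XRHandJointTrackingState.Pose;

handJoints[index] = XRHandProviderUtility.CreateJoint(
    handedness,    // Handedness.Left or Handedness.Right
    trackingState, // always "Pose" with the workaround applied
    jointId,       // the XRHandJointID being filled in
    pose);         // pose converted from the ARKit joint transform
```

Since this is package code, copy the package out of `Library/PackageCache` into your project's `Packages` folder (or embed it) before editing, or your change will be overwritten on the next package resolve.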