How to use the new visionOS inputs? (from beta 0.5.0)

Yes, I can clearly reproduce the behavior in the sample scene (or at least my reconstruction of it).
While pointer0's phase remains “Moved”, pointer1 can initiate a touch, but it is stuck at “Moved” until pointer0 releases the gesture.


Hm… this isn’t what I’ve observed. Does your first hand get occluded at any point during this interaction? We had a bug where touches would get stuck when tracking was lost, but it sounds like this is a little different. That other user was also reporting the issue with SpatialPointerDevice in the PolySpatial package, not VisionOSSpatialPointerDevice in com.unity.xr.visionos for fully immersive/VR apps.

In fact, there’s a new post from yesterday reporting the same issue. I’ll double-check and see if I can replicate it. Here are the steps I will try:

  • Pinch with my left hand
  • Observe visual feedback for pointer #0
  • While holding my left hand pinch, pinch with my right hand
  • Observe visual feedback for pointers #0 and #1 moving
  • While holding the pinch in my right hand, release the pinch in my left hand
  • Observe visual feedback for pointer #1 moving and pointer #0 ending
  • Release the pinch in my right hand
  • Observe visual feedback for pointer #1 ending

Based on your report, I should still see visual feedback for pointer #1 in the “Moved” phase? If that’s the case, I’ll fix it! :slight_smile:
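For anyone who wants to run the same steps, here’s a minimal sketch of the kind of phase logging I’d use. The binding paths (spatialPointer0/spatialPointer1 with a phase sub-control) and the integer phase encoding are my assumptions based on the control names in this thread; check the Input Debugger against your installed package version for the exact layout.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: log the phase of both spatial pointers while running the repro
// steps above. Binding paths and the int phase encoding are assumptions.
public class SpatialPointerPhaseLogger : MonoBehaviour
{
    InputAction m_Pointer0Phase;
    InputAction m_Pointer1Phase;

    void OnEnable()
    {
        m_Pointer0Phase = new InputAction(binding: "<VisionOSSpatialPointerDevice>/spatialPointer0/phase");
        m_Pointer1Phase = new InputAction(binding: "<VisionOSSpatialPointerDevice>/spatialPointer1/phase");
        m_Pointer0Phase.Enable();
        m_Pointer1Phase.Enable();
    }

    void Update()
    {
        // Phases should advance Began -> Moved -> Ended; the bug described
        // in this thread shows up as a pointer never leaving the Moved value
        Debug.Log($"pointer #0 phase: {m_Pointer0Phase.ReadValue<int>()}, " +
                  $"pointer #1 phase: {m_Pointer1Phase.ReadValue<int>()}");
    }

    void OnDisable()
    {
        m_Pointer0Phase.Disable();
        m_Pointer1Phase.Disable();
    }
}
```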


Quick update on this. I’ve confirmed the issue, and unfortunately it’s not something we can fix on our end. Even in a fresh app based on the Xcode fully immersive Metal template, I see the same thing: you don’t get a .ended input event until both pinches are released. Hopefully Apple is able to fix this on their end before the next release, at which point it should “just work” without any fixes/changes on the Unity side. I’ll update this thread again when I hear more.


@mtschoen I had what appeared to be issues with TouchPhase.Ended not being called on device. Do you have any updates on this?

Is this just for a single-handed interaction, or the two-handed case described above? The bug where your second hand does not get a TouchPhase.Ended is still present in the latest version of the OS (beta 6). It needs to be fixed by Apple, so we have to wait for a new OS version. Unfortunately, I don’t have an ETA, but please feel free to submit feedback to Apple. I’ll update this thread when I confirm the fix.

This was with two-handed events, so seems like this was the issue.


Thanks for this lovely reference, @mtschoen!
Will the AVP input action work out of the box with a screen space canvas, or must each element (e.g., a button) be manually raycast against, similar to the method outlined above?

EDIT: Seeing that it’s a physics raycast, am I barking up the wrong tree by using something that relies on a graphics raycast? Do all elements have to be non-UI for now?

Screen space UI isn’t supported on VR platforms, so you’ll have to set up your canvas in World Space mode. However, you can use the normal EventSystem with either the Input System UI input module or the one provided by the XR Interaction Toolkit. Version 1.0.3 of com.unity.xr.visionos includes a pair of samples (one for URP and one for Built-in) that demonstrate how to set up a scene for UI canvas input. In the case of XRI, if you have a ray interactor set up to use the gaze ray and input device pose, you should be able to interact with UI. For the Input System UI input module (demonstrated in a separate scene), you can set up a custom action map to get input from VisionOSSpatialPointerDevice.
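If it helps, here’s a rough sketch of the moving parts for World Space canvas input. The package samples are the authoritative reference; this just verifies the pieces described above are present (the component types come from the Input System package, and nothing visionOS-specific is assumed here beyond the action map note).

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem.UI;

// Sketch: sanity-check the World Space canvas setup described above.
[RequireComponent(typeof(Canvas))]
public class WorldSpaceUICheck : MonoBehaviour
{
    void Start()
    {
        // Screen space isn't supported on VR platforms, so force World Space
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // The scene also needs an EventSystem with an InputSystemUIInputModule
        // whose actions are bound to VisionOSSpatialPointerDevice (see the
        // package samples for a ready-made action map)
        var eventSystem = FindObjectOfType<EventSystem>();
        if (eventSystem == null || eventSystem.GetComponent<InputSystemUIInputModule>() == null)
            Debug.LogWarning("Scene needs an EventSystem with an InputSystemUIInputModule");
    }
}
```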

Sorry, brain fog moment; I meant world space, of course!

Thanks for the samples reference, I’ll take a look!

Hello!
Is there any news from Apple about this bug? I’m still able to reproduce the touches not being released on device.
Thanks,

Hold up, does Apple seriously not allow eye tracking info for full VR apps unless you pinch? I just want to make sure I understand this so I don’t waste time trying to get something impossible to work.
I don’t plan to have pinch gestures in my game. However, I do want to use eye tracking, if possible.

I haven’t been able to confirm yet, but this might be fixed in the visionOS 1.1 beta.

Correct. You do not have access to eye tracking data except for the first frame of a pinch interaction.


Is this still accurate? I’m looking at the documentation and it uses the PolySpatial package. I’m building a fully immersive VR app and don’t want to include PolySpatial packages, so how can I get gaze without PolySpatial?


The script I linked uses VisionOSSpatialPointerDevice (from com.unity.xr.visionos), while that documentation uses SpatialPointerDevice (from com.unity.polyspatial). I’ll admit the naming is a little confusing. They both do almost exactly the same thing, but the PolySpatial flavor gives you one extra piece of data: the ID of the collider object that the gaze ray intersected. For what it’s worth, this script (and its accompanying scene) is now available in the samples for com.unity.xr.visionos 1.0.3.

For VR apps, you can use VisionOSSpatialPointerDevice with startRayOrigin and startRayDirection to get the gaze ray. We also provide interactionRayDirection, which is “deflected” by the pinch position so that you can interact with draggable UI like sliders.
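To illustrate, here’s a minimal sketch of a physics raycast driven by the gaze ray. The binding paths are assumptions built from the control names above (startRayOrigin/startRayDirection on the primary pointer); double-check them in the Input Debugger, and note that the values may need transforming into world space if your XR origin isn’t at the world origin.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: physics raycast along the gaze ray from VisionOSSpatialPointerDevice.
// Binding paths are assumptions; verify against the installed device layout.
public class GazeRaycaster : MonoBehaviour
{
    InputAction m_RayOrigin;
    InputAction m_RayDirection;

    void OnEnable()
    {
        m_RayOrigin = new InputAction(binding: "<VisionOSSpatialPointerDevice>/primarySpatialPointer/startRayOrigin");
        m_RayDirection = new InputAction(binding: "<VisionOSSpatialPointerDevice>/primarySpatialPointer/startRayDirection");
        m_RayOrigin.Enable();
        m_RayDirection.Enable();
    }

    void Update()
    {
        var origin = m_RayOrigin.ReadValue<Vector3>();
        var direction = m_RayDirection.ReadValue<Vector3>();
        if (direction == Vector3.zero)
            return; // No active pinch, so no gaze ray this frame

        // Remember: the gaze ray is only provided at the start of a pinch,
        // so this value stays fixed for the duration of the interaction
        if (Physics.Raycast(new Ray(origin, direction), out var hit))
            Debug.Log($"Gaze ray hit {hit.collider.name} at {hit.point}");
    }

    void OnDisable()
    {
        m_RayOrigin.Disable();
        m_RayDirection.Disable();
    }
}
```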

Something I should note about this script: if you want to extend it to use both hands, you should use Spatial Pointer #0 and Spatial Pointer #1. Primary Spatial Pointer always represents the first element in the list of active pointers, which is subtly different from “the first pointer”: it will be replaced by the second pointer if the first pinch is released before the second one. If that’s confusing, I’d encourage you to play around with these APIs; the difference is subtle, but understanding it will save you a ton of time debugging.
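To make that difference concrete, here’s a hedged sketch that binds to all three pointers and logs them side by side; with both hands pinched, release the first pinch and watch the primary value jump to pointer #1. The binding paths are assumptions derived from the display names above (“Spatial Pointer #0/#1”, “Primary Spatial Pointer”).

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: compare per-index pointer bindings with the primary pointer binding.
// Binding paths are assumptions; verify them in the Input Debugger.
public class TwoHandPointerBindings : MonoBehaviour
{
    InputAction m_Pointer0;
    InputAction m_Pointer1;
    InputAction m_Primary;

    void OnEnable()
    {
        // Pointer #0 is always the first pinch and #1 the second, for the
        // lifetime of each interaction
        m_Pointer0 = new InputAction(binding: "<VisionOSSpatialPointerDevice>/spatialPointer0/startRayOrigin");
        m_Pointer1 = new InputAction(binding: "<VisionOSSpatialPointerDevice>/spatialPointer1/startRayOrigin");

        // Primary tracks the first element of the active pointer list, so it
        // switches to the second pinch if the first one is released early
        m_Primary = new InputAction(binding: "<VisionOSSpatialPointerDevice>/primarySpatialPointer/startRayOrigin");

        m_Pointer0.Enable();
        m_Pointer1.Enable();
        m_Primary.Enable();
    }

    void Update()
    {
        // When only the second pinch remains held, primary's value will match
        // pointer #1 rather than pointer #0
        if (m_Primary.ReadValue<Vector3>() != Vector3.zero)
            Debug.Log($"#0: {m_Pointer0.ReadValue<Vector3>()}  #1: {m_Pointer1.ReadValue<Vector3>()}  primary: {m_Primary.ReadValue<Vector3>()}");
    }

    void OnDisable()
    {
        m_Pointer0.Disable();
        m_Pointer1.Disable();
        m_Primary.Disable();
    }
}
```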

Good news! We were able to confirm that the bug where your first pinch gets “stuck” until the second pinch has been released is now fixed in the visionOS 1.1 beta. When users update their devices, they should see the expected behavior without any need to update the app.
