Does XRI support pinch interaction and gaze interaction on Vision Pro?

Hello, everyone,
With visionOS, people will use their hands and eyes to interact with content, so I want to leverage the XR Interaction Toolkit (XRI). I learned that the XR Interaction Toolkit supports poke, pinch, and gaze interaction, but so far I have only implemented the poke interaction. Does the XR Interaction Toolkit (XRI) support pinch and gaze interaction on Vision Pro?
Could you give me some advice?
Thanks

Hi there! Thanks for reaching out. Have you looked at the package samples in com.unity.xr.visionos? The Main scene includes a basic setup with interactors and a pair of grabbable cubes, as well as UI. Things work more or less as they normally do, but you need to set up input to use VisionOSSpatialPointerDevice and add an extra transform that tracks the gaze vector, which is used to override the Ray Origin of a ray interactor. It’s not the easiest thing to describe in words, but if you look at the objects in that sample scene, it should hopefully make some sense.
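For reference, here is a rough sketch of that idea (not the actual sample code): a hypothetical `GazeRayOriginDriver` that reads the pointer’s ray origin and rotation from input actions bound to VisionOSSpatialPointerDevice, drives a dedicated gaze transform, and assigns that transform as the ray interactor’s Ray Origin. It assumes XRI 2.x and the Input System package; check the sample’s input action asset for the exact binding paths, since the control names here are not spelled out.

```csharp
// Hypothetical helper, not part of the package sample. Assumes XRI 2.x and
// the new Input System. Bind the two actions in the Inspector to the
// VisionOSSpatialPointerDevice ray origin/rotation controls (see the
// sample's input action asset for the exact paths).
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.Interaction.Toolkit;

public class GazeRayOriginDriver : MonoBehaviour
{
    [SerializeField] XRRayInteractor m_RayInteractor;      // Ray interactor whose Ray Origin we override
    [SerializeField] Transform m_GazeRayOrigin;            // Extra transform that tracks the gaze vector
    [SerializeField] InputActionProperty m_RayOrigin;      // Vector3 action (pointer ray origin)
    [SerializeField] InputActionProperty m_RayRotation;    // Quaternion action (pointer ray rotation)

    void OnEnable()
    {
        m_RayOrigin.action.Enable();
        m_RayRotation.action.Enable();

        // Point the interactor's ray casts at the gaze-driven transform
        // instead of the hand/controller transform.
        m_RayInteractor.rayOriginTransform = m_GazeRayOrigin;
    }

    void Update()
    {
        // On visionOS the gaze ray is only reported when a pinch begins,
        // so these values stay fixed until the next input starts.
        m_GazeRayOrigin.SetPositionAndRotation(
            m_RayOrigin.action.ReadValue<Vector3>(),
            m_RayRotation.action.ReadValue<Quaternion>());
    }
}
```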

Let me know if you have any more questions. Good luck!

Hi,
In this post, PolySpatialHoverEffect in VR, I learned that eye tracking (eye gaze) data is only available on the first frame of an input (like a pinch gesture), and that I have to use skeletal hand data and a visible line originating from the hands to “aim”. Also, PolySpatialHoverEffect doesn’t work in VR (fully immersive) apps.
Is there any plan to make this functionality available in the future?

No. Unfortunately it is impossible to achieve a hover effect in VR on visionOS. PolySpatialHoverEffect works by adding a HoverEffectComponent to the RealityKit entity for the decorated object. Because we do not use RealityKit for VR rendering, this functionality is not available to us. Unless Apple changes their API for pinch/gaze input, we will never be able to implement a hover effect in VR.

Hi,
I downloaded a VR application called Spatial Vision from the visionOS App Store, and I found that it has an eye-tracking effect: when you look at a button, the button is highlighted.
Do you know how it does this?

I didn’t see any app called “Spatial Vision” when I searched in the App Store. I do see one called “Space Vision” which displays a solar system in a fully immersive space with SwiftUI windows for interactions. I can’t tell if the fully immersive space is rendered with RealityKit or Metal, but there are no hover states in the fully immersive space. The buttons in the SwiftUI window highlight just like any other native UI. This is because they are implemented with Swift and the OS can control the hover state with gaze data that the app doesn’t have access to.

I’m sorry, it’s called “Space Vision”.
In Mixed Reality (immersive) mode, can we achieve hover effects or visual feedback, since the PolySpatial plugin integrates RealityKit’s components?

Yep! You can create a fully immersive experience like Space Vision by using an Unbounded volume camera in Unity with your project App Mode set to Mixed Reality.

Hi,
In the “Space Vision” app, I can move the 3D objects that simulate the solar system in a fully immersive space via pinch/eye gaze.
Do you know how that works?
And can I do the same in a visionOS VR app built with Unity?

Hi there! I provided a more thorough reply in another thread. Does this answer your question?

Hi,
I saw it, but I don’t quite understand it.
In other words, do apps like the Hello World sample and Space Vision achieve a fully immersive experience with RealityKit by just covering up the passthrough video with large, distant objects? If so, how should I do that?
Based on your thorough reply in that post, I don’t think I can port my existing Unity VR project to Vision Pro and achieve the same effect.

Hi there!
I have one more question: why can RealityKit get eye tracking (eye gaze) data and achieve a hover effect, but Metal can’t?

Because RealityKit controls the rendering of your objects, it can use continuous gaze data without exposing it to the app. The app indicates that an object should display a hover effect by adding HoverEffectComponent to the Entity, and the OS renders it with a highlight if the user is looking at it. The OS has access to the user’s gaze continuously, but does not expose this data to 3rd party apps.

If you use PolySpatial and set your AppMode to MixedReality, you can use VisionOSHoverEffect in Unity.
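For completeness, here is a minimal sketch of that, assuming the component lives in the Unity.PolySpatial namespace and that the object also needs a Collider so the system knows what surface the gaze is hitting. In practice you would usually just add the components in the Inspector; this only shows the runtime equivalent.

```csharp
// Minimal sketch (Mixed Reality app mode + PolySpatial only): adds the
// hover highlight to an object at runtime. Component/namespace names are
// taken from the PolySpatial package; verify against your installed version.
using Unity.PolySpatial;
using UnityEngine;

public class AddHoverHighlight : MonoBehaviour
{
    void Start()
    {
        // A collider is required so the OS knows which surface the gaze hits.
        if (GetComponent<Collider>() == null)
            gameObject.AddComponent<SphereCollider>();

        // PolySpatial mirrors this as a RealityKit HoverEffectComponent, so
        // the OS applies the highlight without exposing gaze data to the app.
        gameObject.AddComponent<VisionOSHoverEffect>();
    }
}
```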

Hi there,
If I use PolySpatial and set the App Mode to Mixed Reality, I need to do some work on the rendering side, like converting custom shaders to Shader Graph and adapting particle systems, UI, and so on. Is that right?
And if I set the App Mode to VR, I don’t need any of that, right?

Hi,
I added hand tracking to my VR app with XRI and implemented poke and pinch interaction.
I also wanted to add a ray interactor to the hands that interacts in a way similar to traditional VR controller rays. The rays are displayed, but they cannot interact with the UI. I set up input to use VisionOSSpatialPointerDevice, but it didn’t seem to work.

That is correct.

Yup! :slight_smile:

What you are describing sounds like an “aim pose” that you can use with hand tracking on Meta Quest or Hololens. This is not provided on visionOS, although the accessibility settings can repurpose the gaze interaction to use a wrist-based ray. Even in that case, the app is not able to track the device pose until the user pinches their fingers.

Unfortunately, due to the nature of how interactions work on visionOS, we are not able to provide a visual indicator in VR apps of where the user is aiming until they pinch their fingers. This makes VR interactions with the gaze/pinch OS gesture of limited use. The main scene in the com.unity.xr.visionos package sample demonstrates how to set up XRI, UI, and basic C# scripts to use this input mechanism. XRI needs a pretty specific setup with a separate RayOrigin transform. We are still working on providing a sample scene for XRI 3.0, which just shipped recently. So if you are using XRI 3.0, you’ll need to get a little creative, but you should be able to get things working.

Finally, it is possible to use ARKit hand tracking to implement your own aim pose ray. We have done a few experiments internally that we haven’t been able to get across the finish line, but if you want an “aim pose” ray like you get on Quest, you can write some C# code to build one using the joint poses you get from the com.unity.xr.hands package. Take a look at HandVisualizer in the xr.visionos package samples to see how to access joint data.
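If you want to experiment with that, here is a very rough sketch (not the internal experiments mentioned above) of a hand-driven aim ray built from com.unity.xr.hands joint data. The ray direction (wrist through the index knuckle) and the pinch distance threshold are arbitrary choices you would tune yourself, and joint poses are reported in XR origin (session) space, so transform them if your XR Origin is not at the world origin.

```csharp
// Rough sketch of a hand-based "aim pose" ray using com.unity.xr.hands.
// See HandVisualizer in the com.unity.xr.visionos samples for a more
// complete example of accessing joint data.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

public class HandAimRay : MonoBehaviour
{
    [SerializeField] LineRenderer m_Line;             // Visual for the aim ray
    [SerializeField] float m_PinchThreshold = 0.02f;  // Meters between thumb and index tips

    XRHandSubsystem m_Subsystem;

    void Update()
    {
        // Lazily grab the running hand subsystem.
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count > 0)
                m_Subsystem = subsystems[0];

            return;
        }

        var hand = m_Subsystem.rightHand;
        if (!hand.isTracked)
            return;

        // Build an aim ray from the wrist through the index knuckle.
        if (!hand.GetJoint(XRHandJointID.Wrist).TryGetPose(out var wrist) ||
            !hand.GetJoint(XRHandJointID.IndexProximal).TryGetPose(out var knuckle))
            return;

        var origin = knuckle.position;
        var direction = (knuckle.position - wrist.position).normalized;
        m_Line.SetPosition(0, origin);
        m_Line.SetPosition(1, origin + direction * 5f);

        // Simple pinch check: distance between thumb tip and index tip.
        if (hand.GetJoint(XRHandJointID.ThumbTip).TryGetPose(out var thumbTip) &&
            hand.GetJoint(XRHandJointID.IndexTip).TryGetPose(out var indexTip) &&
            Vector3.Distance(thumbTip.position, indexTip.position) < m_PinchThreshold)
        {
            // "Select" here: raycast along the aim ray and trigger your interaction.
            Debug.Log("Pinch detected along hand aim ray");
        }
    }
}
```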

I know this is all challenging right now, but hopefully our sample code gives you a head start. We’ll be improving this over time, as well as taking steps to make our existing XR templates and samples support visionOS alongside our other XR platforms.

Yes, I did add an aiming ray to my hand and can trigger events with a pinch or a custom gesture.
But why isn’t the button highlighted when the hand ray moves over the UI button?
Is the highlight feedback only triggered after I use the pinch gesture? Or are other settings required?

At least with the way the scene is set up in the samples, the ray interactor isn’t actually using the hand to aim at interactables. We use the eye gaze when the user pinches their fingers to aim, which means we can’t do any interactions until the user pinches their fingers.

The situation I described was hypothetical. There are no existing samples that work this way.

I have seen online that someone implemented hand ray/pinch interaction using MRTK. Maybe there’s something wrong with my settings, because I have not achieved this effect with XRI.
So, an early suggestion: I hope hand ray/pinch interaction can be implemented in future versions of the com.unity.xr.visionos package, so that we can learn from it more quickly.