I don’t clearly understand how to handle the visionOS gaze direction. Are there any code samples or something similar where I can see how to write my own handling for the view direction?
Hi there!
For privacy reasons, Apple does not expose the gaze vector to application code. The one exception is the gaze/pinch interaction: the first event fired by the system includes the gaze vector at the moment the user pinches their fingers, which is what the system used to raycast into the RealityKit scene. We expose that through the SpatialPointerDevice via the startInteractionRayOrigin and startInteractionRayDirection controls. These Vector3 controls give you the ability to cast a ray in Unity and do your own intersection test.
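For example, here is a minimal sketch of reading those controls and raycasting against Unity colliders. It assumes the Enhanced Touch workflow and the EnhancedSpatialPointerSupport helper described in the PolySpatial input documentation; exact type and member names may vary between PolySpatial versions.

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

public class GazePinchRaycaster : MonoBehaviour
{
    void OnEnable()
    {
        // SpatialPointerDevice data is surfaced through the Enhanced Touch API.
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            // Only the first (Began) event of a pinch carries the gaze ray.
            if (touch.phase != TouchPhase.Began)
                continue;

            var pointer = EnhancedSpatialPointerSupport.GetPointerState(touch);
            var origin = pointer.startInteractionRayOrigin;
            var direction = pointer.startInteractionRayDirection;

            // Do your own intersection test against Unity colliders.
            if (Physics.Raycast(origin, direction, out var hit, 100f))
                Debug.Log($"Gaze/pinch ray hit {hit.collider.name} at {hit.point}");
        }
    }
}
```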
For mixed reality apps, we recommend that you use the XR Interaction Toolkit and the XRTouchSpaceInteractor provided by the com.unity.polyspatial.xr package. This takes advantage of the targetId control, which identifies the specific RealityKit entity the user interacted with. The targetId control maps to a GameObject instance ID on the Unity side, which can be used to look up the specific GameObject and Interactable component without doing your own raycast.
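If you want to do that lookup yourself rather than going through XRTouchSpaceInteractor, a rough sketch might look like the following. It assumes targetId is an integer instance ID (as described above) and uses Unity’s Resources.InstanceIDToObject to resolve it; again, member names may differ slightly between versions.

```csharp
using Unity.PolySpatial.InputDevices;
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

public class GazePinchTargetLookup : MonoBehaviour
{
    void OnEnable()
    {
        EnhancedTouchSupport.Enable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            if (touch.phase != TouchPhase.Began)
                continue;

            var pointer = EnhancedSpatialPointerSupport.GetPointerState(touch);

            // targetId is assumed here to be the instance ID of the GameObject
            // whose collider RealityKit resolved when the user pinched.
            var target = Resources.InstanceIDToObject((int)pointer.targetId) as GameObject;
            if (target != null)
                Debug.Log($"User pinched while gazing at {target.name}");
        }
    }
}
```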
Also, please note that this ray origin/direction is not provided in the simulator, and I think it is only provided on device if you have an immersive space open (meaning an unbounded volume camera on the Unity side). Furthermore, there is a bug in the OS at the moment that reports this information incorrectly, so you won’t be able to rely on this data until that is fixed. Apple is aware of the issue and working on a fix. In the meantime, I recommend you try to work with XRTouchSpaceInteractor. You can see an example of how this works in the XRI Debug sample scene included in the PolySpatial package samples.
If you are building a VR app, the same constraint applies (you only get the gaze vector on the Began phase of the pinch). However, the data is correct and can be used to do a raycast. The sample content in the com.unity.xr.visionos package demonstrates how this can be used with XRI as well.
Good luck!
Any update in 2025? Are there any other APIs we can use? Even a somewhat inaccurate direction would still be OK.
No updates. Feel free to request this from Apple via their Feedback Assistant.
But gaze works when using a Canvas, right? World-space Canvases do detect where the eye is looking and highlight the correct buttons. What’s the difference here?
That doesn’t happen in code that we control; it’s entirely handled through Apple’s software. In RealityKit, we add the HoverEffectComponent to the buttons (along with a CollisionComponent for the collision geometry and the ModelComponent for the visual representation). Our UI shader graph then uses the Hover State node to change the button’s color when hovered (i.e., looked at).
All of this is possible to reproduce from Unity (so that, for example, you can have 3D objects that highlight when gazed at) using the following (see the sketch after the list):
- VisionOSHoverEffect
- Collider and MeshRenderer components, as specified in the docs for VisionOSHoverEffect.
- The PolySpatial Hover State shader graph node.
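As a rough illustration, here is a minimal sketch of that setup done from code (it assumes VisionOSHoverEffect lives in the Unity.PolySpatial namespace; in practice you would typically add the components in the Inspector and assign a material whose shader graph uses the Hover State node):

```csharp
using Unity.PolySpatial;
using UnityEngine;

public class GazeHighlightSetup : MonoBehaviour
{
    void Start()
    {
        // A primitive sphere already comes with the MeshRenderer and Collider
        // that VisionOSHoverEffect relies on for its visual and collision shape.
        var sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.SetParent(transform, false);

        // Asks PolySpatial to attach RealityKit's HoverEffectComponent to the
        // corresponding entity, so the system can react when the user gazes at it.
        sphere.AddComponent<VisionOSHoverEffect>();

        // The visible highlight itself comes from assigning a material whose
        // shader graph uses the PolySpatial Hover State node (not shown here).
    }
}
```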
What is not possible is getting the gaze vector in user code (C#/Swift). That’s simply a limitation that Apple has imposed for all users of their APIs.