Casting Ray From Pinch Point To Gaze Collision With XRI

I have a solution for casting a ray from the currently active pinch point to the interactable selected by the user's gaze. This ray can be used for debugging or to create laser pointers, guide lines, etc. I'm not sure this is the optimal approach, so I'd welcome feedback if there are simpler solutions.

This code was extended from the VR sample code found in the Apple VisionOS Plugin 1.0.3 package.

Much like the Input Tester GameObject in the sample, I created a Pinch Ray Tester GameObject and copied the RayIndicator and TargetIndicator children from the Input Tester to the Pinch Ray Tester. The script below was then added to the Pinch Ray Tester.

The script looks for UI and spatial collisions with the XRRayInteractor and, if one is located, orients a ray starting at the pinch point provided by primaryTouch.inputDevicePosition so that it looks at the collision.

using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.VisionOS;
#if UNITY_EDITOR || UNITY_VISIONOS
using UnityEngine.XR.VisionOS.InputDevices;
#endif

public class PinchRayTester : MonoBehaviour
{
    [SerializeField]
    Transform ray;
    
    [SerializeField]
    Transform target;
    
    [SerializeField]
    XRRayInteractor xrRayInteractor;
    
#if UNITY_EDITOR || UNITY_VISIONOS
    private PointerInput _pointerInput;

    void OnEnable()
    {
        _pointerInput ??= new PointerInput();
        _pointerInput.Enable();
    }

    void OnDisable()
    {
        _pointerInput.Disable();
    }

    void Update()
    {
        var primaryTouch = _pointerInput.Default.PrimaryPointer.ReadValue<VisionOSSpatialPointerState>();
        var phase = primaryTouch.phase;
        var began = phase == VisionOSSpatialPointerPhase.Began;
        var active = began || phase == VisionOSSpatialPointerPhase.Moved;
        
        ray.gameObject.SetActive(active);
        target.gameObject.SetActive(false);
        
        if (active)
        {
            var rayOrigin = primaryTouch.inputDevicePosition;
            ray.position = rayOrigin;
            
            if (xrRayInteractor.TryGetCurrentRaycast(
                    out var raycastHit,
                    out var raycastHitIndex,
                    out var uiRaycastHit,
                    out var uiRaycastHitIndex,
                    out var isUIHitClosest
                ))
            {
                Vector3? raycastTarget = null;
                
                // Prefer the UI hit when XRI reports it as the closest hit;
                // otherwise fall back to the physics raycast hit.
                if (uiRaycastHit.HasValue && isUIHitClosest)
                {
                    raycastTarget = uiRaycastHit.Value.worldPosition;
                }
                else if (raycastHit.HasValue)
                {
                    raycastTarget = raycastHit.Value.point;
                }
                
                if (raycastTarget != null)
                {
                    var raycastTargetV3 = raycastTarget.Value;
                    ray.LookAt(raycastTargetV3);
                    
                    target.gameObject.SetActive(true);
                    target.position = raycastTargetV3;
                }
            }
        }
    }
#endif
}

Hey there! Yep. This makes sense to me :+1:

My suggestion in the other thread was to just use vector math instead of Transform’s LookAt method. If you replace the last four lines with the following:

var direction = raycastTarget.Value - rayOrigin;
ray.rotation = Quaternion.LookRotation(direction);

target.gameObject.SetActive(true);
target.position = raycastTarget.Value;

you can get the proper ray position and direction without relying on Transform.LookAt.
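To make the vector-math suggestion concrete, here is a minimal sketch of it pulled out into a standalone helper. The class and method names (PinchRayMath, AimRay) are illustrative, not part of the original script, and it assumes UnityEngine is available. It also guards against a degenerate zero-length direction, which would otherwise make Quaternion.LookRotation log an error:

```csharp
using UnityEngine;

// Hypothetical helper illustrating the reply above: position a transform at
// the pinch origin and rotate it toward a target using vector math only.
public static class PinchRayMath
{
    public static void AimRay(Transform ray, Vector3 origin, Vector3 target)
    {
        ray.position = origin;

        var direction = target - origin;

        // Skip the rotation when the pinch point coincides with the target;
        // Quaternion.LookRotation is undefined for a zero vector.
        if (direction.sqrMagnitude > Mathf.Epsilon)
            ray.rotation = Quaternion.LookRotation(direction);
    }
}
```

In the Update method above, the last four lines would then collapse to a single call: PinchRayMath.AimRay(ray, rayOrigin, raycastTargetV3).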