Questions about Apple VisionOS XR Plugin VR Samples

I have been experimenting with the VR sample (built-in pipeline) contained in the VisionOS XR Plugin v1.0.3 samples. I have a few questions about how the sample is expected to operate:

  1. There have been conflicting solutions to fix some of the positioning problems when the CameraOffset transform beneath the XRRig is offset somewhere other than the world origin. One solution offered was to reparent the XR Interactable and XR Ray transforms from the XRI transform to the CameraOffset transform under the XRRig. Unfortunately this results in these items updating their positions to be double whatever the CameraOffset position is set to. When they are left under the XRI transform, they maintain their proper position relative to the CameraOffset. What is the correct solution?

  2. There has also been mention on some other threads of making updates to the ray position and rotation by modifying scripts in the sample to transform these items relative to the CameraOffset. Is this still necessary and if so, which scripts need to be modified?

  3. When run on device (but not on simulator), the XRHands feature activates, the HandVisualizer displays hand joints, and some cartoony AR plane rendering appears. This is very useful for testing gaze/pinch interactions against a huge set of intersecting planes but it does obscure the UI test panel behind it. Is there any way to disable the plane rendering to test just the UI test menu, HandVisualizer, and InputTester?

  4. There are two test rays generated when gaze/pinch interactions occur: one from the XR Ray transform and one from the RayIndicator of the InputTester. I assume the former is supposed to emanate from the headset (gaze) and extend to the gaze collision point, and the latter is supposed to emanate from the pinching hand to the gaze collision point? In the default setup, it seems the hand ray is also emanating from the gaze and not the hand location when run on device. When run in the Editor, the RayIndicator properly emanates from the pinching hand.

  5. How are the World Anchors supposed to behave? It seems that most of the time the previously placed anchor disappears after each new gaze/pinch, but sometimes an anchor remains. If you pinch rapidly, you can generate a whole bunch of these “sticky” anchors. What makes them “sticky,” and what is the expected proper behavior?


@mtschoen Bumping to add you to this thread since you may have insights as the author of the sample.

Howdy! Lots to digest here… sorry for the delayed response :slight_smile:

I’m pretty sure what you’re describing here is a bug in the samples. In this post I described what is going on and how to work around the issue. TL;DR: the interactors and test objects in the scene need to be children of the CameraOffset transform, and need to use localPosition or otherwise account for transformations on the XR Origin. The sample code needed some updates, as did the UnityPlayModeInput script that mocks spatial pointer input in the Editor.
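For illustration, here is a rough sketch of that workaround. This is not code from the sample; `m_CameraOffset` is just assumed to be the CameraOffset child of the XR Origin, and "session space" means the coordinates the device reports before any offset is applied:

```csharp
using UnityEngine;

// Hypothetical helper illustrating the workaround described above: positions
// reported by the device are in session space, so treat them as local to the
// CameraOffset transform instead of writing them straight into world space.
public class SessionSpaceFollower : MonoBehaviour
{
    [SerializeField]
    Transform m_CameraOffset; // the CameraOffset child of the XR Origin

    // Option 1: parent the object under CameraOffset and write localPosition.
    public void SetSessionSpacePositionAsChild(Vector3 sessionSpacePosition)
    {
        transform.SetParent(m_CameraOffset, worldPositionStays: false);
        transform.localPosition = sessionSpacePosition;
    }

    // Option 2: keep the current parent and convert session space to world space.
    public void SetSessionSpacePositionInWorld(Vector3 sessionSpacePosition)
    {
        transform.position = m_CameraOffset.TransformPoint(sessionSpacePosition);
    }
}
```

Either approach keeps the objects lined up with the tracked data when the CameraOffset is moved away from the world origin.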

There’s another change coming down the pike that will properly report the Tracking Origin Mode as "Floor," since right now it is unspecified. This may change the behavior of XROrigin’s CameraOffset, but we’re holding it back until we can properly document the edge cases and project modifications users will have to make when they upgrade.

We should be pushing out the next package version very soon. Rather than going through all of the changes here, I think it will be best to just wait for the package release and you can look at the updated sample.

Yep! Just disable or remove the ARPlaneManager component attached to XROrigin. The same goes for the MeshManager object and other AR managers in the scene. If you want to keep the visuals, but lose the collider, modify the AR Default Plane prefab to remove or disable the collider.
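If you would rather toggle this at runtime instead of editing the scene, something along these lines should work. This is just a sketch, not part of the sample, and it assumes the AR managers live on or under the XR Origin as they do in the sample scene:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical runtime toggle: hides plane visuals that already exist and
// stops new planes/meshes from being generated.
public class DisablePlaneVisuals : MonoBehaviour
{
    void Start()
    {
        var planeManager = GetComponentInChildren<ARPlaneManager>();
        if (planeManager != null)
        {
            planeManager.SetTrackablesActive(false); // hide planes already created
            planeManager.enabled = false;            // stop creating new ones
        }

        var meshManager = GetComponentInChildren<ARMeshManager>();
        if (meshManager != null)
            meshManager.enabled = false;
    }
}
```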

Not quite… RayIndicator shows the gaze ray that we get from VisionOSSpatialPointerDevice: the ray provided by the platform, which originates from between the user’s eyes and extends in the direction they were looking. Since there are no RealityKit objects to intersect with, this isn’t based on some target object, just the direction where the user is looking.

The scaled Cube that is a child of Ray Origin shows the data we use to provide an interaction ray to the XR Interaction Toolkit. This is driven by the interactionRayOrigin control on VisionOSSpatialPointerDevice, which is kind of a “hybrid” of the gaze ray and the input device position (a.k.a. where the user’s pinched fingers are). As you move your hand around, we “deflect” the gaze ray to allow interaction with things like UI sliders or other systems that require a moving “pointer.” This is not something the OS provides, and you may want to use inputDevicePosition like we do for the XRRayInteractor itself, depending on the use case.
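If you want to see the difference between the two on device, you can read those controls off the spatial pointer device directly. This is only a debug sketch; the control names (`interactionRayOrigin`, `inputDevicePosition`) are the ones mentioned above, and the lookup may need adjusting depending on the package version and how the controls are nested:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;

// Hypothetical debug helper: reads the controls discussed above off the
// spatial pointer device and draws a line between the "hybrid" ray origin
// and the pinch position so the two are easy to compare on device.
public class SpatialPointerDebug : MonoBehaviour
{
    void Update()
    {
        // Find the visionOS spatial pointer device by name.
        InputDevice pointer = null;
        foreach (var device in InputSystem.devices)
        {
            if (device.name.Contains("SpatialPointer"))
            {
                pointer = device;
                break;
            }
        }

        if (pointer == null)
            return;

        var rayOrigin = pointer.TryGetChildControl<Vector3Control>("interactionRayOrigin");
        var pinch = pointer.TryGetChildControl<Vector3Control>("inputDevicePosition");
        if (rayOrigin == null || pinch == null)
            return;

        // Note: these values are in session space; transform them by the
        // CameraOffset if your XR Origin is not at the world origin (see above).
        Debug.DrawLine(rayOrigin.ReadValue(), pinch.ReadValue(), Color.cyan);
    }
}
```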

The “sticky” anchors are a bug. The anchor API for visionOS is asynchronous, and it doesn’t allow you to place an anchor while waiting for the result of a previous one. There isn’t any feedback built into the sample at the moment to prevent you from doing this, so any anchors you add while the system is still adding a previous one will not be added successfully, and thus cannot be removed by the sample script when it tries to place the next one. As a workaround, I would suggest adding some kind of time-based delay (maybe ~1 second) before the next anchor can be placed. I’ve added a task to our backlog to improve this.
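As an illustration of that workaround, a simple cooldown gate like the one below (hypothetical, not part of the sample) could sit in front of wherever the pinch handler places anchors:

```csharp
using UnityEngine;

// Hypothetical rate limiter for anchor placement: ignore new pinch-to-place
// requests until a short cooldown has elapsed, so a new anchor is never
// requested while the previous asynchronous request may still be in flight.
public class AnchorPlacementThrottle : MonoBehaviour
{
    [SerializeField]
    float m_Cooldown = 1f; // ~1 second, per the suggestion above

    float m_LastPlacementTime = float.NegativeInfinity;

    // Call this from the pinch handler before placing an anchor.
    public bool TryBeginPlacement()
    {
        if (Time.time - m_LastPlacementTime < m_Cooldown)
            return false; // too soon; drop this request

        m_LastPlacementTime = Time.time;
        return true;
    }
}
```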

Thank you, this is all very useful information, and I look forward to the next update.

I now understand how the debug rays are being used, but I have one more related question: can I draw a ray that extends from the deflected gaze ray origin to the current pinch point? I want the ray to keep tracking the pinch point as I move my hand around.

The origin of the deflected gaze ray remains the same throughout the interaction; it is the direction that is deflected by hand movement. The ray for the Ray Origin object is showing this deflected ray. A vector from the gaze/interaction ray origin to the pinch point can be computed by subtracting the Vector3 values coming from the spatial pointer: inputDevicePosition - startRayOrigin.
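In code that is just the following (a sketch only; `startRayOrigin` and `inputDevicePosition` are the Vector3 values read from the spatial pointer, for example as in the earlier debug sketch):

```csharp
using UnityEngine;

// Sketch: draw a line from the (deflected) gaze ray origin to the current
// pinch point. Update the two positions every frame and the line follows the hand.
public static class PinchRayDrawer
{
    public static void Draw(Vector3 startRayOrigin, Vector3 inputDevicePosition)
    {
        Vector3 originToPinch = inputDevicePosition - startRayOrigin;
        Debug.DrawRay(startRayOrigin, originToPinch, Color.yellow);
        // For an in-game visual, feed the same two points to a LineRenderer instead.
    }
}
```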


Thank you for the help! Sorry, I was mistaken with my terminology. I was actually trying to render a ray from the pinch point to the current target collision point, not the gaze origin. Is there a simple mathematical way to determine the direction from the pinch point to the target collision point, based on the pinch point's offset from the gaze origin transform and the gaze ray direction?

I came up with the solution in the link below, which performs raycasts using the XRRayIndicator to (re)find the target collision point, but I expect there may be a faster way to do it with vector math.
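For anyone else looking at this, here is the sort of shortcut I had in mind (just a sketch; it assumes you can reuse the hit the XRRayInteractor already has instead of recasting, and the pinch position comes from inputDevicePosition; the XRRayInteractor namespace may differ depending on your XRI version):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Sketch of the vector-math alternative: reuse the interactor's current 3D
// raycast hit rather than performing another raycast, then build a ray from
// the pinch position toward that hit point.
public class PinchToHitRay : MonoBehaviour
{
    [SerializeField]
    XRRayInteractor m_RayInteractor;

    // pinchPosition is the inputDevicePosition value from the spatial pointer.
    public bool TryGetPinchRay(Vector3 pinchPosition, out Ray ray)
    {
        if (m_RayInteractor.TryGetCurrent3DRaycastHit(out RaycastHit hit))
        {
            ray = new Ray(pinchPosition, (hit.point - pinchPosition).normalized);
            return true;
        }

        ray = default;
        return false;
    }
}
```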