Questions about Apple VisionOS XR Plugin VR Samples

Howdy! Lots to digest here… sorry for the delayed response :slight_smile:

I’m pretty sure what you’re describing here is a bug in the samples. In this post I described what is going on and how to work around the issue. TL;DR: the interactors and test objects in the scene need to be children of the CameraOffset transform, and need to use localPosition or otherwise account for transformations applied to the XR Origin. The sample code needed some updates, as did the UnityPlayModeInput script that mocks spatial pointer input in the Editor.
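
Here’s a minimal sketch of what I mean by accounting for the XR Origin, assuming the usual XROrigin setup from Unity.XR.CoreUtils. The component name is just for illustration, not the actual sample code:

```csharp
// Minimal sketch (not the sample code): re-parent an interactor or test object
// under the Camera Offset so its authored position stays correct relative to
// the XR Origin.
using Unity.XR.CoreUtils;
using UnityEngine;

public class ParentUnderCameraOffset : MonoBehaviour
{
    void Start()
    {
        var origin = FindObjectOfType<XROrigin>();
        if (origin == null || origin.CameraFloorOffsetObject == null)
            return;

        // Preserve the position this object was authored with, but interpret it as
        // a localPosition relative to the Camera Offset instead of the scene root.
        var authoredPosition = transform.localPosition;
        transform.SetParent(origin.CameraFloorOffsetObject.transform, worldPositionStays: false);
        transform.localPosition = authoredPosition;
    }
}
```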

There’s another change coming down the pike that will properly report the Tracking Origin Mode as “Floor,” since right now it is unspecified. This may change the behavior of XROrigin’s CameraOffset, but we’re holding it back until we can properly document the edge cases and project modifications users will have to make when they upgrade.
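
If you want to see what your project currently reports, a quick sketch like this (using the standard XRInputSubsystem API, nothing visionOS-specific) will log the mode:

```csharp
// Logs the tracking origin mode reported by each active XR input subsystem.
// Today this should show Unspecified on visionOS; after the change it should show Floor.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class LogTrackingOriginMode : MonoBehaviour
{
    void Start()
    {
        var subsystems = new List<XRInputSubsystem>();
        SubsystemManager.GetSubsystems(subsystems);
        foreach (var subsystem in subsystems)
            Debug.Log($"Tracking origin mode: {subsystem.GetTrackingOriginMode()}");
    }
}
```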

We should be pushing out the next package version very soon. Rather than going through all of the changes here, I think it’s best to just wait for the package release so you can look at the updated sample.

Yep! Just disable or remove the ARPlaneManager component attached to the XROrigin. The same goes for the MeshManager object and other AR managers in the scene. If you want to keep the visuals but lose the collider, modify the AR Default Plane prefab to remove or disable the collider.
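
If you’d rather do it from code than edit the scene, something like this sketch works; it just assumes the managers live on or under the XROrigin GameObject like they do in the sample:

```csharp
// Minimal sketch: turn off plane detection (and optionally meshing) at runtime
// instead of removing the components from the scene.
using Unity.XR.CoreUtils;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class DisableARManagers : MonoBehaviour
{
    void Start()
    {
        var origin = FindObjectOfType<XROrigin>();
        if (origin == null)
            return;

        var planeManager = origin.GetComponent<ARPlaneManager>();
        if (planeManager != null)
            planeManager.enabled = false; // stop detecting/updating planes

        var meshManager = origin.GetComponentInChildren<ARMeshManager>();
        if (meshManager != null)
            meshManager.enabled = false; // stop generating scene meshes
    }
}
```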

Not quite… RayIndicator shows the gaze ray that we get from VisionOSSpatialPointerDevice. It represents the gaze ray provided by the platform, which originates between the user’s eyes and extends in the direction they are looking. Since there are no RealityKit objects to intersect with, this isn’t based on some kind of target object; it’s just the direction the user is looking.

The scaled Cube that is a child of Ray Origin shows the data we use to provide an interaction ray to the XR Interaction Toolkit. This is driven by the interactionRayOrigin control on VisionOSSpatialPointerDevice, which is kind of a “hybrid” of the gaze ray and the input device position (a.k.a. where the user’s pinched fingers are). As you move your hand around, we “deflect” the gaze ray to allow interaction with things like UI sliders or other systems that require a moving “pointer.” This is not something the OS provides, and you may want to use inputDevicePosition like we do for the XRRayInteractor itself, depending on your use case.
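
If you want to poke at this data yourself, here’s a rough sketch using the Input System. The device lookup and control names follow what I described above, but treat the exact strings as assumptions rather than the sample’s actual code:

```csharp
// Rough sketch: read the interaction ray origin and the pinch (input device) position
// from the spatial pointer device via the Input System. The layout name string and
// lookup approach are assumptions for illustration.
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Controls;

public class SpatialPointerDebug : MonoBehaviour
{
    void Update()
    {
        var device = InputSystem.GetDevice("VisionOSSpatialPointerDevice");
        if (device == null)
            return;

        var rayOrigin = device.TryGetChildControl<Vector3Control>("interactionRayOrigin");
        var pinchPosition = device.TryGetChildControl<Vector3Control>("inputDevicePosition");
        if (rayOrigin != null && pinchPosition != null)
            Debug.Log($"interaction ray origin: {rayOrigin.ReadValue()}, pinch position: {pinchPosition.ReadValue()}");
    }
}
```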

The “sticky” anchors are a bug. The anchor API for visionOS is asynchronous, and it doesn’t allow you to place an anchor while you’re still waiting for the result of a previous one. There isn’t any feedback built into the sample at the moment to prevent you from doing this, so any anchors you add while the system is still processing a previous one will not be added successfully, and thus cannot be removed by the sample script when it tries to place the next one. As a workaround, I would suggest adding some kind of time-based delay (maybe ~1 sec) before the next anchor can be placed (see the sketch below). I’ve added a task to our backlog to improve this.
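
Something like this sketch is all I mean by a time-based delay; PlaceAnchorAt is a hypothetical stand-in for whatever your scene uses to create the anchor:

```csharp
// Minimal sketch of the suggested workaround: a cooldown so a new anchor request
// can't start while the previous one is likely still pending.
using UnityEngine;

public class AnchorPlacementCooldown : MonoBehaviour
{
    const float k_CooldownSeconds = 1f;
    float m_LastPlacementTime = float.NegativeInfinity;

    public bool TryPlaceAnchor(Pose pose)
    {
        if (Time.time - m_LastPlacementTime < k_CooldownSeconds)
            return false; // the previous request may still be in flight; ignore this one

        m_LastPlacementTime = Time.time;
        PlaceAnchorAt(pose); // hypothetical stand-in for the sample's anchor creation call
        return true;
    }

    void PlaceAnchorAt(Pose pose)
    {
        // Placeholder: call into ARAnchorManager / your anchor placement code here.
    }
}
```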