Pressing the crown to recenter the view is working in native apps, but not ours. From this post, Crown Input, it sounds like we won’t get direct access to any crown input, but I’m wondering if it would be possible to get an event when this happens to allow us to recenter ourselves (for VR Fully Immersive)?
Unfortunately, there is no API provided by Apple to notify the app when the user has requested a recenter. Please submit feedback to Apple that you need a way to be notified when the user requests a recenter, and explain why.
Will do, thank you.
To clarify – are you saying that long-pressing the crown does nothing at all, or just that it does “recenter”, but the app doesn’t get notified?
When I wrote that originally, I was hearing from labs that nothing was happening at all when long pressing.
But as of now, I can confirm that crown-recenter does recenter our app on the y-axis, but not the x-axis as it does in the home menu.
Hm… This is not what I’ve been seeing. In fact, I see the opposite (at least the way I’m reading this). When I long-press the crown button, I can see content at the origin move toward me, but only in the XZ plane (forward/back, left/right). I do not see content move at all in the Y (up/down) axis (even if I crouch down). I still haven’t had the opportunity to test what happens when you move to a new floor of the building, but I’m not sure that’s what you mean when you say “recenter on the y-axis.” Note that AR content (planes and meshes) will appear to stay where it is, attached to the scanned environment in the real world.
Every now and then, I would encounter an issue where the tracking origin was wrong, and I would also see AR data in the wrong place (for example, up inside the ceiling or in the next room). In those cases, a long-press on the crown would return that incorrect tracking origin to where it should be, and in those cases it would have to move in all 3 axes to do so. But outside of that kind of glitch, I only see the recenter move content in the XZ plane.
Can you create a project or scene that replicates the issue, and report a bug? As far as I can tell, things are working as expected. The device knows where the floor is, and keeps that fixed, while it orients the X/Z plane so that x = 0, z = 0 is at my feet.
When pkenndy is talking about the x-axis, I think he means rotation about the x-axis, so one can look up toward the ceiling and recenter to that viewpoint.
This is useful for cases where the user wants to recline.
Oh! In that case, recenter is not what you want. Check out the LazyFollow script in the XR Interaction Toolkit. This kind of behavior, where menus and affordances follow head look, is certainly desirable, but you would not want the entire tracking space to rotate on the x-axis! That would put the horizon on an angle, and in the worst case can cause the virtual camera to rotate “off kilter,” which is a recipe for instant nausea.
To be clear, the long-press crown reset is entirely a platform-side feature. In Unity, we take head pose (position/rotation) data from ARKit and feed it directly through the input system. TrackedPoseDriver applies that data to the localPosition and localRotation of a Transform, which is usually the child of a transform with an XROrigin attached. When the user long-presses the crown button, Unity keeps running as if nothing happened, but the data from the platform will now be offset differently. So, for example, if ARKit was giving us (1, 1.7, 1) for the position of the user’s head, and the user long-presses the crown button, we will suddenly start seeing (0, 1.7, 0) in the head pose data, and that’s all. The same is true for rotation, but the math is a little more complicated.
The OS-level recenter action will only account for yaw (y-axis) rotation, because it is pretty much never a good idea to rotate the “horizon” of your virtual world. You are still able to do this in your Unity scene by rotating the XR Origin transform, but I highly recommend that you do not. Instead, use a LazyFollow script or some other method that uses the head pose (e.g. the Main Camera Transform, or whatever object has the TrackedPoseDriver attached) to position just the menu or virtual object that you want to make visible. This is the recommended way to let the user recline and still see/interact with virtual content in VR. Apple is surely doing something similar for the home screen: using the head pose at the moment the user pressed the button to position/rotate the home screen in a convenient location. As an example, the visionOS template uses LazyFollow to position the settings menu so that it always faces the user.
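For illustration, a simple lazy-follow behavior can be sketched like this (this is not the actual XRI LazyFollow script; the field names and tuning values are assumptions):

```csharp
using UnityEngine;

// Simple lazy-follow sketch: keep a menu at a fixed offset in front of
// the user's head, easing toward the target so content follows head look
// without rotating the tracking space itself. Works even when the user
// reclines and looks at the ceiling.
public class SimpleLazyFollow : MonoBehaviour
{
    public Transform head;          // e.g. the Main Camera transform
    public float distance = 0.75f;  // meters in front of the head
    public float smoothing = 4f;    // higher = snappier follow

    void LateUpdate()
    {
        // Target pose: in front of the head, facing back toward the user.
        Vector3 targetPos = head.position + head.forward * distance;
        Quaternion targetRot = Quaternion.LookRotation(
            targetPos - head.position, Vector3.up);

        // Frame-rate independent exponential smoothing.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, targetPos, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRot, t);
    }
}
```

Because only the menu object moves, the horizon and tracking space stay gravity-aligned.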
The only time I would ever recommend applying a full 3D rotation to the XR Origin would be in the case of a flight sim or spaceship kind of scenario. In that case, you do want the virtual horizon to rotate, but the fixed cockpit/environment around the user (a child of XROrigin) can help to mitigate sim sickness. Again, this would be a 3D rotation of the Unity XROrigin Transform, not the tracking space established by the platform. These kinds of experiences are still not for everyone, and I’ve seen plenty of people who still get nauseous without a heavy amount of “vignette” to block out the moving horizon.
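As a sketch of that cockpit scenario (names are illustrative, and the cockpit geometry is assumed to be a child of the XR Origin):

```csharp
using UnityEngine;

// Flight-sim sketch: apply the vehicle's full 3D rotation to the XR Origin
// so the virtual horizon banks with the aircraft. Because the cockpit is a
// child of the origin, it stays fixed relative to the user, which helps
// mitigate sim sickness.
public class CockpitRig : MonoBehaviour
{
    public Transform xrOrigin;   // has XROrigin attached; cockpit is a child
    public Transform aircraft;   // simulated vehicle pose

    void Update()
    {
        // Full 3D rotation of the Unity transform only; the platform's
        // tracking space remains gravity-aligned underneath.
        xrOrigin.SetPositionAndRotation(aircraft.position, aircraft.rotation);
    }
}
```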
Yes, so there’s an app use case where the horizon needs to be shifted up/down to support a user who isn’t in the standard sitting/standing positions. It’s a one-at-a-time, manually triggered shift (aka recenter).
So you are saying Vision Pro has never shifted the horizon via the crown recenter? That’s an interesting detail, as I’ve been told otherwise.
Now, per your description, it sounds like only UI may have been “recentered,” but no horizon tilt has ever happened.
Yes. The recenter at the OS level only resets X/Y/Z position and Y (yaw) rotation. X/Z rotation of the head is not taken into account.
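To illustrate, a yaw-only reset amounts to keeping just the y-axis component of the head’s rotation. This is a conceptual sketch, not the OS implementation:

```csharp
using UnityEngine;

// Sketch of a yaw-only recenter: the new origin takes the head's position
// and its yaw (y-axis) rotation, while pitch/roll (x/z) are discarded so
// the horizon never tilts.
public static class RecenterMath
{
    public static Pose YawOnlyOrigin(Vector3 headPos, Quaternion headRot)
    {
        // Project the head's forward vector onto the horizontal plane to
        // get a gravity-aligned yaw rotation.
        Vector3 forward = headRot * Vector3.forward;
        forward.y = 0f;
        Quaternion yaw = forward.sqrMagnitude > 0f
            ? Quaternion.LookRotation(forward.normalized, Vector3.up)
            : Quaternion.identity;
        return new Pose(headPos, yaw);
    }
}
```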
Hi, is it possible to ignore the recenter action, or how should I do some calculation to offset the change in Y rotation? I’m trying to achieve the effect of placing an object at a “fixed world position.”
Your best bet is a World Anchor. There’s no good way to detect/respond to the recenter, which means that anything in a fixed place in your scene will move relative to the real world when the recenter occurs. However, AR data is updated after the recenter to line back up with its position in the real world.
You can either rely on user interaction (like the com.unity.xr.visionos samples do) or automatically place an anchor relative to the main camera, and that transform should stay lined up with its original location in real-world coordinates.
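A sketch of the automatic approach using AR Foundation’s ARAnchor component (the class and field names here are assumptions; check the com.unity.xr.visionos samples for the exact pattern):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: spawn content in front of the main camera and anchor it.
// Adding an ARAnchor component asks the platform to keep this GameObject
// fixed in real-world coordinates, updating its transform after tracking
// corrections or an OS-level recenter.
public class AnchorInFrontOfUser : MonoBehaviour
{
    public GameObject contentPrefab;
    public float distance = 1.0f;   // meters in front of the camera

    public void PlaceAnchoredContent()
    {
        Transform head = Camera.main.transform;
        Vector3 pos = head.position + head.forward * distance;

        GameObject go = Instantiate(contentPrefab, pos, Quaternion.identity);
        // Registers this pose with the platform as a world anchor.
        go.AddComponent<ARAnchor>();
    }
}
```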
Please submit feedback to Apple through the Feedback Assistant if you still think you need a callback on recenter. Without them implementing it, there’s no sure-fire way to detect the event in Unity.