How to get the main camera position and rotation via script?

How do I get the main camera position and rotation from my script? Head tracking works fine while wearing the Vision Pro in MR. However, in my script, whether I'm in Update or LateUpdate, the camera transform doesn't seem to change. Even if I attach an object to the camera in the hierarchy, it stays in its original position at runtime.

Camera position is only available in an unbounded app (immersive space) since that data comes from ARKit (device position).

You should be able to query the Main Camera if it is set up with a Tracked Pose Driver (Input System) configured with the [AR handheld device] and [XR HMD] bindings. If you look at the XR rig setup in the Mixed Reality sample scene and log that Main Camera's (not the Volume Camera's) position, you should get valid data.
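As a rough sketch of what that looks like (assuming a scene with an XR Origin whose Main Camera is driven by a Tracked Pose Driver, as in the sample), a component like this would log the head pose each frame; the class name is just an illustration:

```csharp
using UnityEngine;

// Logs the tracked head pose each frame. Assumes the scene contains an
// XR Origin whose Main Camera (tagged "MainCamera") is driven by a
// Tracked Pose Driver (Input System), so its transform reflects device tracking.
public class HeadPoseLogger : MonoBehaviour
{
    Transform head;

    void Start()
    {
        // Camera.main resolves to the tracked camera under the XR Origin,
        // not the PolySpatial Volume Camera.
        head = Camera.main.transform;
    }

    void LateUpdate()
    {
        Debug.Log($"Head position: {head.position}, rotation: {head.rotation.eulerAngles}");
    }
}
```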

Do you know if this works in the simulator? I’m trying to get the HMD (or eye) position relative to the volume camera in a bounded app, but the mixed reality sample scene doesn’t seem to work in the simulator, and logging the main camera position gives 0,0,0.


Yes, camera position should be available in the simulator for unbounded apps (immersive space).

Make sure the volume camera in the scene is set to unbounded AND that the Apple visionOS settings in XR Plug-in Management are also set to unbounded.

Here I've put a world-space TextMeshPro object as a child of the Main Camera under the XR Origin, and I'm writing the position data into it in the Mixed Reality sample scene.
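A minimal sketch of that setup: a script on the TextMeshPro child that writes the camera's world position into the label each frame (the `label` field name is just an illustration, assigned in the Inspector):

```csharp
using TMPro;
using UnityEngine;

// Writes the Main Camera's world position into a world-space TextMeshPro
// label every frame. Attach to the TMP object parented under the Main Camera
// and assign the TMP_Text reference in the Inspector.
public class CameraPositionLabel : MonoBehaviour
{
    [SerializeField] TMP_Text label;

    void LateUpdate()
    {
        // "F2" formats each coordinate with two decimal places.
        label.text = Camera.main.transform.position.ToString("F2");
    }
}
```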

Camera position is NOT available in bounded apps (shared space).

Thanks Dan, I actually had the XR Rig set up properly but I was missing the AR Session script. I grabbed the prefab from the Mixed Reality sample like you suggested.


This is unfortunate; I was hoping to have an object in the volume rotate toward the camera in a bounded app. Is this something that could become available in the future?

This is a platform constraint; you can submit feedback to Apple via the Feedback Assistant.

Hello Dan,

We are trying to re-create a similar scene with a 3D object following the camera in an unbounded app. We have followed your suggestions closely, but we can't get the 3D object to follow the camera or retrieve any volume camera transform information. Would you be able to share a link to the Mixed Reality sample scene? We are using the visionOS template project here. Thank you.

If you navigate to the PolySpatial package in the Package Manager, there should be a Samples tab that lets you import the samples.

Note: there is a known issue where the sample import overwrites the XR Plug-in Management settings, so make sure to disable and re-enable Apple visionOS there.

I think the part you might be missing is that the device position is handled by the normal Unity camera setup under an XR rig, NOT the Volume Camera. The XR rig camera is set up with a Tracked Pose Driver that uses device position and rotation.

Let me know if that helps, if not I can share a more detailed breakdown of my setup!
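For reference, here is a hedged sketch of an object following the tracked head in an unbounded app, building on that XR rig camera (the distance value and class name are just examples):

```csharp
using UnityEngine;

// Keeps this object a fixed distance in front of the tracked head camera
// and facing away from it. Only meaningful in an unbounded (immersive) app,
// where the XR Origin's Main Camera receives device pose data from ARKit.
public class FollowHead : MonoBehaviour
{
    [SerializeField] float distance = 1.5f; // metres in front of the head (example value)

    void LateUpdate()
    {
        var head = Camera.main.transform;
        transform.position = head.position + head.forward * distance;
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);
    }
}
```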

Hi Dan, thanks. We have found the samples and are now referencing the Meshing sample to implement our head/camera-following object. We'll let you know if we manage that.


Hi, is there a way to get the rotation of the bounded space/camera?

I don’t believe this is possible in shared (bounded) mode. You can get the view transform in a shader graph, but not in C#/Swift code.

Where exactly can I find the "Unbounded" setting in XR Plug-in Management? I can only see an option to set it to "Mixed Reality":

I am trying to find out how to run the Mixed Reality sample scene in the Simulator, but without luck. The scene always tells me it does not support the Simulator:

I am wondering what I am missing here?

This has changed since this post: it is now managed via the Mixed Reality App Mode setting plus the Volume Camera configuration (which has an Unbounded option).

That looks correct; ARKit features not being available is a limitation of the visionOS Simulator. If you want to build content with ARKit features, you need to target a Vision Pro dev kit or use Unity Editor tools like an XR Simulation environment.

Thanks for the info. ARKit not being supported in the Simulator is a bummer, though. Our current app relies on ARKit, and there seems to be almost no way to get a device right now.

How do you put the position data into the TextMeshPro object? Could you share a screenshot of the Unity Inspector?