Polyspatial Camera Projection Matrix

I mentioned this in another thread, but I don’t believe there’s a way (for any visionOS app) to get the exact camera parameters on the CPU in MR mode. In unbounded mode, you can get the device position and orientation, but that’s about it. You can get the projection matrix in shader graphs, subject to the caveats about the relative coordinate spaces between Unity and RealityKit, but that won’t help you when you’re rendering to a RenderTexture in Unity (because any shader graphs rendered in that case will simply use the Unity camera parameters).

If you can use unbounded mode, your best bet might be to take the device position and orientation, estimate the IPD to offset a separate view matrix per eye, and reverse engineer the field of view (which I would expect to be fixed, albeit to different values, on both the simulator and the device).
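To make the idea concrete, here's a minimal sketch of that reconstruction in plain Python. Everything in it is an assumption rather than a queried value: the 63 mm IPD is just a population average, the 90° vertical FOV is a placeholder you'd replace with whatever you measure on the simulator/device, and the projection uses a symmetric frustum even though the real per-eye projections are almost certainly asymmetric.

```python
import math

IPD = 0.063                  # assumed average interpupillary distance (m), not queried
V_FOV = math.radians(90.0)   # placeholder; reverse-engineer the real value per platform
ASPECT = 1.0
NEAR, FAR = 0.1, 100.0

def perspective(v_fov, aspect, near, far):
    """Symmetric perspective projection matrix (row-major, OpenGL-style clip space).
    The real headset projections are asymmetric per eye; this is an approximation."""
    f = 1.0 / math.tan(v_fov / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def eye_positions(head_pos, head_right, ipd=IPD):
    """Offset the tracked head position along its right vector by half the IPD."""
    half = ipd / 2.0
    left = [p - half * r for p, r in zip(head_pos, head_right)]
    right = [p + half * r for p, r in zip(head_pos, head_right)]
    return left, right

# Example: head 1.6 m up at the origin, right vector along +X.
left_eye, right_eye = eye_positions([0.0, 1.6, 0.0], [1.0, 0.0, 0.0])
proj = perspective(V_FOV, ASPECT, NEAR, FAR)
```

In Unity you'd feed the resulting per-eye pose and projection into the camera driving your RenderTexture (e.g. via `Camera.projectionMatrix`), accepting that any error in the guessed FOV or IPD shows up as misregistration between your render and the passthrough.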