I’m trying to read eye positions and rotations on Oculus Quest, but the values never change and are clearly wrong, even though the device reports itself as valid.
The TPD (Tracked Pose Driver) shouldn’t be one frame delayed, as all it does is apply the latest SDK data that is available in the engine. It does this twice a frame: once at the very start of dynamic update, and again during the OnBeforeRender callback.
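If you want to sanity-check the timing yourself, something like this (a rough sketch; trackedCamera is just a placeholder for whatever transform the TPD is driving) logs the pose at both points:

```csharp
using UnityEngine;

// Rough sketch: log the driven transform's pose during dynamic update (Update)
// and again in the onBeforeRender callback, to see both points at which the
// Tracked Pose Driver has pulled the latest SDK data.
public class PoseTimingLogger : MonoBehaviour
{
    [SerializeField] Transform trackedCamera; // e.g. the camera driven by the TPD

    void OnEnable()  => Application.onBeforeRender += LogBeforeRender;
    void OnDisable() => Application.onBeforeRender -= LogBeforeRender;

    void Update()
    {
        Debug.Log($"Update:         {trackedCamera.localPosition}");
    }

    void LogBeforeRender()
    {
        Debug.Log($"OnBeforeRender: {trackedCamera.localPosition}");
    }
}
```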
Hi Matt, thanks for the reply. Been really pulling my hair out over this one.
I’m trying to create a portal effect using RenderTextures. I need to pose a stereo camera rig at the other end of the portal, and this rig’s pose relative to the far portal needs to exactly match the user’s head pose relative to the near portal.
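The relative-pose math I mean is roughly this (a simplified sketch with placeholder transform names, not my actual code):

```csharp
using UnityEngine;

// Simplified sketch of the relative-pose idea (placeholder names): the camera
// rig behind the far portal gets the same pose relative to that portal as the
// head has relative to the near portal.
public class PortalRigPoser : MonoBehaviour
{
    [SerializeField] Transform nearPortal;      // portal surface the player looks through
    [SerializeField] Transform farPortal;       // portal surface at the other end
    [SerializeField] Transform head;            // main camera / center eye anchor
    [SerializeField] Transform portalCameraRig; // parent of the portal's stereo cameras

    void LateUpdate()
    {
        // Head pose expressed in the near portal's local space...
        Matrix4x4 headInNearPortal = nearPortal.worldToLocalMatrix * head.localToWorldMatrix;
        // ...re-expressed in the far portal's space gives the rig's world pose.
        Matrix4x4 rigWorld = farPortal.localToWorldMatrix * headInNearPortal;
        portalCameraRig.SetPositionAndRotation(rigWorld.GetColumn(3), rigWorld.rotation);
    }
}
```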
I think I have a solution that generally works. If you hold still, alignment is perfect, but head motion, particularly rotational motion, causes the portal view to shift incorrectly before settling back into the correct place. I believe the problem is that I’m getting stale poses for either the main camera (the center eye anchor) or the eyes. The most recent working version of my code used the left and right eye anchors, which are on the XR Rig and are driven by TPD.
I can simulate the effect when running in the editor outside of VR by adding a one-frame delay to the camera pose. It’s a bit hard to see, but you can see an example of what it looks like here (especially at the beginning). In VR, the effect is much more pronounced. When the head stops moving, alignment settles back to perfect. If I capture a video from the Quest, the effect is no longer there and the camera position appears perfectly synced while in motion. It only happens in stereo mode (i.e., in the HMD).
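For reference, the delay simulation is just a proxy transform that lags the camera by one frame, roughly like this (simplified):

```csharp
using UnityEngine;

// Simplified version of the editor-only test: the proxy transform receives
// the camera pose captured on the previous frame, so anything that follows
// the proxy is exactly one frame behind the real camera.
public class DelayedPoseProxy : MonoBehaviour
{
    [SerializeField] Transform source; // the real camera
    [SerializeField] Transform proxy;  // what the portal rig follows in this test

    Vector3 lastPosition;
    Quaternion lastRotation;
    bool hasLast;

    void LateUpdate()
    {
        if (hasLast)
            proxy.SetPositionAndRotation(lastPosition, lastRotation);

        lastPosition = source.position;
        lastRotation = source.rotation;
        hasLast = true;
    }
}
```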
I also tried using InputDevice and reading the XRNode.LeftEye and XRNode.RightEye positions directly, but those values never update for me, and I’m not sure why. I made a thread about this in the VR forum; you can see the latest post there, in which I share the code for that.
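Roughly the kind of read I mean (simplified, not the exact code from that thread):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Simplified sketch of the direct read: get the device at each eye node and
// ask it for its pose every frame. On the Quest, the device is valid and
// values come back, but they never change for me.
public class EyePoseReader : MonoBehaviour
{
    void Update()
    {
        LogEye(XRNode.LeftEye);
        LogEye(XRNode.RightEye);
    }

    static void LogEye(XRNode node)
    {
        InputDevice device = InputDevices.GetDeviceAtXRNode(node);
        if (!device.isValid)
            return;

        if (device.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 position) &&
            device.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
        {
            Debug.Log($"{node}: pos {position} rot {rotation.eulerAngles}");
        }
    }
}
```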