I am having some difficulty with an Oculus Quest project that consists of three scenes, each with its own OVRCameraRig whose position and rotation are set in the Editor.
However, when running on the Quest, if the player moves a few metres from the centre of the tracking space and rotates a bit before the next scene loads, they do not start that scene at the intended position or facing the intended direction.
I have done a lot of Googling and made some discoveries:
Superhot VR had to implement a game mechanic to recenter and reorient the player before loading a new level (not an option for me).
The default TrackingSpaceType for Quest is Stationary, so I set it to RoomScale in Awake.
OVRManager.display.RecenterPose has been disabled by Oculus for Quest projects, so calling it does nothing.
I tried parenting all the children/anchors of OVRCameraRig > TrackingSpace under an empty GameObject and then rotating/positioning that, but changing the hierarchy this way seems to confuse the OVRCameraRig, which re-instantiates the anchor objects all over again.
I have also tried enabling/disabling the ResetTrackerOnLoad and ReorientHMDOnControllerRecentre options on OVRManager, but that doesn't seem to do the job either.
So far I have found no definitive answer for reorienting the OVRCameraRig on level load. I would like to ensure that, regardless of where the player is in their tracking space, they start each scene at a set position, facing a set direction.
I’m having similar problems getting a consistent floor height on Quest. At this point I think Unity’s implementation isn’t well tested - devs tell me they haven’t tried it on Quest, which is surprising considering the Quest is outselling the Rift 10:1.
I am sure there is a more efficient way to do this, but for now I have got this working for me. Just thought I would throw it up here in case anyone else is having a similar issue (and up against a deadline xD):
using System.Collections;
using UnityEngine;
using UnityEngine.XR;
using UnityEngine.SceneManagement;

[System.Serializable]
public class SceneLoadCameraReset
{
    public int sceneIndex;
    public Vector3 startPosition;
    public float startYRotation;
}

public class SetTrackingType : MonoBehaviour
{
    [SerializeField] SceneLoadCameraReset[] sceneLoadOptions;

    Transform _OVRCameraRig;
    Transform _centreEyeAnchor;

    private void OnEnable()
    {
        SceneManager.sceneLoaded += ResetCameraOnSceneLoad;
    }

    private void OnDisable()
    {
        SceneManager.sceneLoaded -= ResetCameraOnSceneLoad;
    }

    private void Awake()
    {
        // Quest defaults to Stationary, so force room-scale tracking
        XRDevice.SetTrackingSpaceType(TrackingSpaceType.RoomScale);
    }

    // Helper function to find the correct instances of OVRCameraRig and CentreEyeAnchor
    void FindOVRCameraRig()
    {
        OVRCameraRig ovr = FindObjectOfType<OVRCameraRig>();
        if (ovr)
        {
            _OVRCameraRig = ovr.transform;
            _centreEyeAnchor = ovr.centerEyeAnchor;
        }
        else
        {
            Debug.Log("No OVRCameraRig object found");
        }
    }

    // Calls ResetCamera based on the scene that was just loaded
    void ResetCameraOnSceneLoad(Scene scene, LoadSceneMode mode)
    {
        FindOVRCameraRig();
        for (int i = 0; i < sceneLoadOptions.Length; i++)
        {
            if (scene.buildIndex == sceneLoadOptions[i].sceneIndex)
            {
                StartCoroutine(ResetCamera(sceneLoadOptions[i].startPosition, sceneLoadOptions[i].startYRotation));
            }
        }
    }

    // Offsets the OVRCameraRig's position and Y-axis rotation so the player's
    // starting position and view line up with the target parameters
    IEnumerator ResetCamera(Vector3 targetPosition, float targetYRotation)
    {
        // Wait a frame so the HMD pose has been applied before we measure it
        yield return new WaitForEndOfFrame();

        // Rotate the rig so the centre eye anchor faces the target Y rotation
        float currentRotY = _centreEyeAnchor.eulerAngles.y;
        float difference = targetYRotation - currentRotY;
        _OVRCameraRig.Rotate(0, difference, 0);

        // Shift the rig so the (post-rotation) centre eye anchor lands on the target XZ position
        Vector3 newPos = new Vector3(targetPosition.x - _centreEyeAnchor.position.x, 0, targetPosition.z - _centreEyeAnchor.position.z);
        _OVRCameraRig.position += newPos;
    }
}
It just adjusts the OVRCameraRig’s position and rotation to compensate for the centreEyeAnchor (the player) having moved or facing the wrong way.
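To make the order of operations concrete: if targetYRotation is 90 and the centreEyeAnchor reads 250 when the scene loads, the rig is rotated by 90 - 250 = -160 degrees, which swings the anchor round to face 90. Only after that rotation is the XZ offset measured from the anchor's new world position, so the final translation puts the player exactly on targetPosition.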
OVRManager.display.RecenterPose works with TrackingOriginType → FloorLevel.
With TrackingOriginType → EyeLevel, it only recenters the Y rotation and keeps the position within the tracking area.
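For reference, the tracking origin can also be set from script; a minimal sketch, assuming OVR Utilities' OVRManager is present in the scene (not something the posts above show):

void Start()
{
    // Sketch: pick the tracking origin at runtime via OVR Utilities.
    // FloorLevel puts y = 0 at the real floor; EyeLevel puts it at the headset's last-reset height.
    OVRManager.instance.trackingOriginType = OVRManager.TrackingOrigin.FloorLevel;
}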
My real problem was that OVRManager.display.RecenterPose works over Oculus Link and Oculus Air Link, but not in Virtual Desktop. @Jimbo_Slice's solution solves this problem - it behaves like a teleport.
My function (I’m not using XR Management):
private Transform m_CameraRig;
private Transform m_CentreEyeAnchor;
public OVRCameraRig m_OVRCameraRig;

void Start()
{
    m_CentreEyeAnchor = m_OVRCameraRig.centerEyeAnchor;
    m_CameraRig = m_OVRCameraRig.transform;
}

// Does the same as OVRManager.display.RecenterPose(), but works in Virtual Desktop and with eye-level tracking
private void ResetVRPosition(Transform teleportPoint)
{
    // Cancel out the headset's current Y rotation
    float currentRotY = m_CentreEyeAnchor.eulerAngles.y;
    float targetYRotation = 0.0f;
    float difference = targetYRotation - currentRotY;
    m_CameraRig.Rotate(0, difference, 0);

    // Shift the rig so the centre eye anchor sits over the teleport point on the XZ plane
    Vector3 newPos = new Vector3(teleportPoint.position.x - m_CentreEyeAnchor.position.x, 0, teleportPoint.position.z - m_CentreEyeAnchor.position.z);
    m_CameraRig.position += newPos;
}
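A hedged usage sketch (spawnPoint is a hypothetical Transform assigned in the Inspector, and the button mapping is just an example):

public Transform spawnPoint;

void Update()
{
    // Hypothetical trigger: recentre onto spawnPoint when the A button is pressed
    if (OVRInput.GetDown(OVRInput.Button.One))
    {
        ResetVRPosition(spawnPoint);
    }
}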
According to this, the headset overwrites the main camera position, which is why we can't modify the camera position directly. There's a link to the docs, but I haven't been able to locate the information in there yet.
Since all the examples I have seen so far point to modifying a parent transform, I guess this is the way.
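A minimal sketch of that parent-transform pattern, with hypothetical names (PlayerRoot, hmdCamera) - the tracked camera sits under the root, and only the root is ever moved:

using UnityEngine;

// The HMD overwrites the camera's local pose every frame, so reposition
// an ancestor transform instead of the camera itself.
public class PlayerRoot : MonoBehaviour
{
    public Transform hmdCamera; // tracked camera, a child of this object

    public void MoveTo(Vector3 target)
    {
        // Shift the root so the tracked camera lands over the target (XZ only)
        Vector3 offset = target - hmdCamera.position;
        offset.y = 0f;
        transform.position += offset;
    }
}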
I’m still having some problems implementing the above examples.