How to create stereo RenderTextures and cameras?

I’d like to implement a portal effect in VR. To do this, I’ve duplicated the XR Rig’s CenterEye Anchor camera to observe the other side of the portal and render to a RenderTexture whose dimensions are set programmatically to Screen.width and Screen.height. This works great in the Editor because there is no stereo rendering going on and the center camera is exactly what is used.

However, this obviously does not work when deployed to my Quest. I’m stumped as to how to proceed. I set Multiview as my rendering mode in the Oculus XR settings, which I believe is equivalent to Single Pass Stereo.

But how do I create cameras that duplicate the stereo view? How do I create the RenderTexture and have each eye render to the appropriate side? How do I even size that texture?

I can’t find any working examples on the forum.

EDIT:

Here’s what I tried just now:

  • Modify my portal shader to accept two textures, left and right, in single-pass stereo mode.
  • Modify my portal script to disable the single camera and instead create two cameras and two render textures when stereo is enabled.

On device, it just renders black.

Shader

Shader "Custom/Portal"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _LeftEyeTexture ("Texture", 2D) = "white" {}
        _RightEyeTexture("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog
            #include "UnityCG.cginc"
            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };
            struct v2f
            {
                float4 screenPos : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };
            sampler2D _MainTex;
            float4 _MainTex_ST;
            sampler2D _LeftEyeTexture;
            sampler2D _RightEyeTexture;
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.vertex); // use the screen position coordinates of the portal to sample the render texture (which is our screen)
                UNITY_TRANSFER_FOG(o,o.vertex);
                return o;
            }
            fixed4 frag(v2f i) : SV_Target
            {
                float2 uv = i.screenPos.xy / i.screenPos.w; // clip space -> normalized texture (?)
                uv = UnityStereoTransformScreenSpaceTex(uv);
                // sample the texture
#if UNITY_SINGLE_PASS_STEREO
                fixed4 col = tex2D(_LeftEyeTexture, uv);
#else
                fixed4 col = tex2D(_MainTex, uv);
#endif
                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}

Portal Script

using UnityEngine;
public class Portal : MonoBehaviour
{
  [Tooltip("Camera observing the other side of the portal.")]
  [SerializeField]
  private Camera m_otherCamera;
  [Tooltip("The other portal transform, which must be the equivalent transform to this portal's.")]
  [SerializeField]
  private Transform m_otherPortal;
  private MeshRenderer m_ourPortalRenderer;
  private void Update()
  {
    Vector3 userOffsetFromPortal = Camera.main.transform.position - transform.position;
    m_otherCamera.transform.position = m_otherPortal.transform.position + userOffsetFromPortal;
    float angularDifferenceBetweenPortalRotations = Quaternion.Angle(transform.rotation, m_otherPortal.rotation);
    Quaternion portalRotationDelta = Quaternion.AngleAxis(angularDifferenceBetweenPortalRotations, Vector3.up);
    Vector3 newCameraDirection = portalRotationDelta * Camera.main.transform.forward;
    m_otherCamera.transform.rotation = Quaternion.LookRotation(newCameraDirection, Vector3.up);
  }
  private void Start()
  {
    if (m_otherCamera.targetTexture != null)
    {
      m_otherCamera.targetTexture.Release();
    }
    Debug.LogFormat("Stereo={0}", Camera.main.stereoEnabled);
    if (!Camera.main.stereoEnabled)
    {
      m_otherCamera.targetTexture = new RenderTexture(Camera.main.pixelWidth, Camera.main.pixelHeight, 24);
      m_ourPortalRenderer.material.mainTexture = m_otherCamera.targetTexture;
    }
    else
    {
      // Disable the camera and attach stereo cameras
      m_otherCamera.enabled = false;
      GameObject left = new GameObject("LeftEye");
      left.transform.parent = m_otherCamera.transform;
      left.tag = m_otherCamera.gameObject.tag;
      //left.transform.localPosition = -Vector3.right * Camera.main.stereoSeparation;
      GameObject right = new GameObject("RightEye");
      right.transform.parent = m_otherCamera.transform;
      right.tag = m_otherCamera.gameObject.tag;
      //right.transform.localPosition = Vector3.right * Camera.main.stereoSeparation;
      Camera leftCamera = left.AddComponent<Camera>();
      Camera rightCamera = right.AddComponent<Camera>();
      leftCamera.CopyFrom(m_otherCamera);
      rightCamera.CopyFrom(m_otherCamera);
    
      leftCamera.projectionMatrix = Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left);
      rightCamera.projectionMatrix = Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Right);
    
      leftCamera.targetTexture = new RenderTexture(leftCamera.pixelWidth, leftCamera.pixelHeight, 24);
      rightCamera.targetTexture = new RenderTexture(rightCamera.pixelWidth, rightCamera.pixelHeight, 24);
      leftCamera.enabled = true;
      rightCamera.enabled = true;
      m_ourPortalRenderer.material.SetTexture("_LeftEyeTexture", leftCamera.targetTexture);
      m_ourPortalRenderer.material.SetTexture("_RightEyeTexture", rightCamera.targetTexture);
    }
  }
  private void Awake()
  {
    m_ourPortalRenderer = GetComponentInChildren<MeshRenderer>();
    Debug.Assert(m_otherCamera != null);
    Debug.Assert(m_otherPortal != null);
    Debug.Assert(m_ourPortalRenderer != null);
  }
}

Funny, I’m actually trying to do the same thing with the same technique (camera + RenderTexture), but using multi-pass… no success yet. I get some results in VR mode, but the image is blurred (as if both eyes are rendered into the texture; it seems the texture rendering is not done with the pass matching the correct eye).

I may be wrong, but I thought that multi-pass rendering + a RenderTexture with vrUsage set + stereo-enabled cameras would have been enough… but apparently not. I may be missing something (maybe my shader, which crops the image rendered by the portal camera to fit the portal mesh).

I’ve seen techniques using stencils, but that would have too much impact on my level design, so it’s not acceptable in my case.

So, if I make any progress I will post here. Please do so as well :wink:

BTW, in your case (single pass), did you check the stereo shader macros needed to adapt your shader?

bricefr: vrUsage is a flag used when creating a RenderTexture but how do you enable stereo on a camera?

I did some more investigation and found that UNITY_SINGLE_PASS_STEREO is evidently not defined in the shader when I build. unity_StereoEyeIndex is available when I build for Quest. So I tried using that to render from the appropriate texture but the results are bizarre. Also, the cameras do not appear to track my head rotation (only translation).

EDIT: Okay, so despite unity_StereoEyeIndex being defined, it is not working. The value is always 0.

I figured out a few things. Firstly, Multiview stereo on Quest does not appear to be treated as a single-pass stereo mode by Unity. So I’m back to the default and inefficient Multi Pass mode, which renders the scene twice, once for each eye, one after the other. But it does make unity_StereoEyeIndex available, and now I have a stereo portal.

The problem now is that the cameras don’t replicate the stereo characteristics of the actual VR camera and I’m not sure why. I assume Camera.main.transform tracks the center point between eyes – is this not the case?

Attempting to manually offset the two virtual cameras I create by the stereo separation does not work. I’m also skeptical that the separation value I’m getting is what is actually being used to render.
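Something like this (an untested debug sketch; the class name is just illustrative) can log what the device actually reports for the eye poses, for comparison against Camera.main.stereoSeparation:

using UnityEngine;
using UnityEngine.XR;

// Untested debug sketch: read the per-eye positions from the XR input subsystem
// and log the measured separation next to Camera.main.stereoSeparation.
// Whether these usages are reported depends on the device/plugin.
public class EyeSeparationLogger : MonoBehaviour
{
    private void Update()
    {
        InputDevice leftEye = InputDevices.GetDeviceAtXRNode(XRNode.LeftEye);
        InputDevice rightEye = InputDevices.GetDeviceAtXRNode(XRNode.RightEye);

        if (leftEye.isValid && rightEye.isValid &&
            leftEye.TryGetFeatureValue(CommonUsages.leftEyePosition, out Vector3 leftPos) &&
            rightEye.TryGetFeatureValue(CommonUsages.rightEyePosition, out Vector3 rightPos))
        {
            Debug.LogFormat("Measured eye separation={0}, stereoSeparation={1}",
                Vector3.Distance(leftPos, rightPos), Camera.main.stereoSeparation);
        }
    }
}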

Here is how it looks in mono (running on PC without stereo) – perfect:

Now, in VR, clearly wrong. Note the misalignment between the blue map and the orange map (which is on the other side of the portal):

And here is my code for setting up the cameras:

  private void Start()
  {
    if (m_otherCamera.targetTexture != null)
    {
      m_otherCamera.targetTexture.Release();
    }

    Debug.LogFormat("Stereo={0}", Camera.main.stereoEnabled);
    Debug.LogFormat("Separation={0}", Camera.main.stereoSeparation);
    Debug.LogFormat("Convergence={0}", Camera.main.stereoConvergence);

    if (!Camera.main.stereoEnabled)
    {
      m_otherCamera.targetTexture = new RenderTexture(Camera.main.pixelWidth, Camera.main.pixelHeight, 24);
      m_ourPortalRenderer.material.SetTexture("_LeftEyeTexture", m_otherCamera.targetTexture);
    }
    else
    {
      // Disable the camera and attach stereo cameras
      m_otherCamera.enabled = false;

      //float separation = 0.5f * Camera.main.stereoSeparation;
      //float convergenceAngle = 90f - Mathf.Atan2(Camera.main.stereoConvergence, separation) * Mathf.Rad2Deg;

      GameObject left = new GameObject("LeftEye");
      left.tag = m_otherCamera.gameObject.tag;
      left.transform.parent = m_otherCamera.transform;
      GameObject right = new GameObject("RightEye");
      right.tag = m_otherCamera.gameObject.tag;
      right.transform.parent = m_otherCamera.transform;

      Camera leftCamera = left.AddComponent<Camera>();
      Camera rightCamera = right.AddComponent<Camera>();
      leftCamera.CopyFrom(m_otherCamera);
      rightCamera.CopyFrom(m_otherCamera);

      leftCamera.fieldOfView = Camera.main.fieldOfView;
      rightCamera.fieldOfView = Camera.main.fieldOfView;
      leftCamera.aspect = Camera.main.aspect;
      rightCamera.aspect = Camera.main.aspect;

      leftCamera.projectionMatrix = Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left);
      rightCamera.projectionMatrix = Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Right);

      Debug.LogFormat("aspect={0}, {1}", Camera.main.aspect, m_otherCamera.aspect);
      Debug.LogFormat("type={0}, {1}", Camera.main.cameraType, m_otherCamera.cameraType);
      Debug.LogFormat("aspect={0}, {1}, {2}", Camera.main.aspect, m_otherCamera.aspect, leftCamera.aspect);
      Debug.LogFormat("fov={0}, {1}, {2}", Camera.main.fieldOfView, m_otherCamera.fieldOfView, leftCamera.fieldOfView);
      Debug.LogFormat("focalLen={0}, {1}", Camera.main.focalLength, m_otherCamera.focalLength);
      Debug.LogFormat("lensShift={0}, {1}", Camera.main.lensShift, m_otherCamera.lensShift);
      Debug.LogFormat("rect={0}, {1}", Camera.main.rect, m_otherCamera.rect);
      Debug.LogFormat("left={0}", Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left));
      Debug.LogFormat("right={0}", Camera.main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Right));

      leftCamera.targetTexture = new RenderTexture(Camera.main.pixelWidth, Camera.main.pixelHeight, 24);
      rightCamera.targetTexture = new RenderTexture(Camera.main.pixelWidth, Camera.main.pixelHeight, 24);

      leftCamera.enabled = true;
      rightCamera.enabled = true;


      left.transform.localPosition = Vector3.zero;
      right.transform.localPosition = Vector3.zero;
      left.transform.localRotation = Quaternion.identity;
      right.transform.localRotation = Quaternion.identity;

      //left.transform.localPosition = -Vector3.right * separation;
      //left.transform.localRotation = Quaternion.AngleAxis(convergenceAngle, Vector3.up);
      //right.transform.localPosition = Vector3.right * separation;
      //right.transform.localRotation = Quaternion.AngleAxis(-convergenceAngle, Vector3.up);

      m_ourPortalRenderer.material.SetTexture("_LeftEyeTexture", leftCamera.targetTexture);
      m_ourPortalRenderer.material.SetTexture("_RightEyeTexture", rightCamera.targetTexture);
    }
  }

Seems like a problem with your RenderTexture sizes, doesn’t it? Have you checked the RenderTexture(XRSettings.eyeTextureDesc) constructor?
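For reference, a minimal sketch of that suggestion (assuming UnityEngine.XR is available; names are illustrative): size the portal texture from XRSettings.eyeTextureDesc when an XR device is active, otherwise fall back to the screen size. XRSettings.eyeTextureWidth / eyeTextureHeight expose the same dimensions individually.

using UnityEngine;
using UnityEngine.XR;

// Sketch: the descriptor already carries the per-eye width, height, format and
// vrUsage that the device requested.
public static class PortalTextureFactory
{
    public static RenderTexture Create()
    {
        if (XRSettings.enabled)
        {
            return new RenderTexture(XRSettings.eyeTextureDesc);
        }
        return new RenderTexture(Screen.width, Screen.height, 24);
    }
}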

Regarding unity_StereoEyeIndex, have you declared UNITY_VERTEX_OUTPUT_STEREO in your v2f struct, and UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o) in your vert() shader func? And also UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i) in your frag() shader func? After all those, I guess UnityStereoScreenSpaceUVAdjust should work… I guess…

On my side, using multi-pass single camera portal effect, good results in non-VR mode but not in VR:

Man, the devil is in the details like they say… :smile:

I have not tried these and will take a look. But keep in mind that UNITY_SINGLE_PASS_STEREO isn’t even defined, so I’m skeptical this solution will work. For now, I am stuck using multi-pass stereo, for which these functions do not work (everything is simply rendered twice).

I don’t think the render texture size is wrong. Each render texture is the size of the main camera render target. If the size is wrong, it mostly just affects the fidelity of the portal, not really the alignment.

It appears to me that the stereo cameras I create are not calibrated properly. It might be the stereo separation, the convergence, or something else about the projection matrices. I wonder whether Unity is giving me the wrong projection parameters.

Is it possible that Camera.main.transform.position is not in the exact center of the left and right eye cameras? Because my math assumes that it is. But I don’t see any discussion about how Camera.main behaves in a VR stereo system.
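One way to check that assumption (untested sketch): the last column of the inverse of each stereo view matrix is that eye’s world position, so the midpoint of the two can be compared against Camera.main.transform.position.

using UnityEngine;

// Untested sketch: recover each eye's world position from the stereo view
// matrices and compare their midpoint with the main camera's transform.
public class EyeMidpointCheck : MonoBehaviour
{
    private void Update()
    {
        Camera cam = Camera.main;
        if (cam == null || !cam.stereoEnabled)
            return;

        Vector3 leftPos = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left).inverse.GetColumn(3);
        Vector3 rightPos = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Right).inverse.GetColumn(3);
        Vector3 midpoint = 0.5f * (leftPos + rightPos);

        Debug.LogFormat("Eye midpoint={0}, Camera.main position={1}, delta={2}",
            midpoint, cam.transform.position, midpoint - cam.transform.position);
    }
}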

bricefr: In your Oculus XR plugin settings, are you using Multi Pass or Multi View rendering?

Looks like we both have an offset issue. Could you share your code for creating the render texture as well as the portal camera (I’m still not clear on how stereo is enabled on virtual cameras)?

I have been using multi-pass since the start. I tried switching to Multiview (which is single-pass, from my understanding), but I got the same problem, and it would have too much impact on my shaders (I would also have to create dual cameras for my portals and adapt my shaders to sample the texture matching each eye, like you’re trying to do). My prototype isn’t very CPU-intensive, so I guess multi-pass is acceptable, even for the Quest… for the moment… :wink: And I really want to keep the VR implementation as close as possible to the classical one. Just to mention: I am using the pure Unity XR package, no Oculus package or any other VR extension whatsoever.

I’m actually running some experiments on the camera settings (which I now instantiate from the main VR camera to be sure, instead of from scratch), the RenderTexture parameters (with and without XRSettings.eyeTextureDesc), and the cutout shader. But it also seems I will need to adapt the stereo projection/view matrices on my camera for this to work, using SetStereoProjectionMatrix / SetStereoViewMatrix.

Another strange thing: if I completely disable the rotation of the portal camera (meaning only the player’s position should be reflected in the portal plane), the left eye still rotates with the player camera (as if the projection/view matrix of the left eye were still affected by the main camera… but the right eye is still fine!). Too many things I don’t fully understand yet… but I will share my code today, working or not :slight_smile:

Regarding your stereo settings, they are parameters on the Camera; why don’t you try getting them from the main camera: stereoConvergence, stereoSeparation, …? I also saw that you already set up the projection matrices of your cameras, which should include the convergence and separation, I guess… What about the view matrices? Shouldn’t they be adapted as well? Just guessing here, sorry :stuck_out_tongue:
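For what it’s worth, an untested sketch of what adapting the view matrices could look like with a single stereo portal camera (stereoTargetEye = Both): feed it the HMD’s own stereo matrices offset by the portal-pair transform, i.e. V_portal(eye) = V_hmd(eye) * M^-1, where M maps this portal’s side of the world onto the other portal’s side (the same “equivalent transform” convention used earlier in the thread). It assumes the portal camera actually keeps rendering in stereo with a targetTexture assigned.

using UnityEngine;

// Untested sketch: drive one stereo portal camera via SetStereoViewMatrix /
// SetStereoProjectionMatrix instead of creating two mono cameras.
// Attach to the entry portal.
public class PortalStereoMatrices : MonoBehaviour
{
    [SerializeField] private Camera m_portalCamera;   // camera observing the other side
    [SerializeField] private Transform m_otherPortal; // exit portal transform

    private void LateUpdate()
    {
        Camera main = Camera.main;
        if (main == null || !main.stereoEnabled)
            return;

        // World-space mapping from this portal's frame to the other portal's frame.
        Matrix4x4 portalToOther = m_otherPortal.localToWorldMatrix * transform.worldToLocalMatrix;
        Matrix4x4 otherToPortal = portalToOther.inverse;

        m_portalCamera.SetStereoProjectionMatrix(Camera.StereoscopicEye.Left, main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left));
        m_portalCamera.SetStereoProjectionMatrix(Camera.StereoscopicEye.Right, main.GetStereoProjectionMatrix(Camera.StereoscopicEye.Right));
        m_portalCamera.SetStereoViewMatrix(Camera.StereoscopicEye.Left, main.GetStereoViewMatrix(Camera.StereoscopicEye.Left) * otherToPortal);
        m_portalCamera.SetStereoViewMatrix(Camera.StereoscopicEye.Right, main.GetStereoViewMatrix(Camera.StereoscopicEye.Right) * otherToPortal);
    }
}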

Check this out:

:sunglasses:

Perfect alignment. But there is still a problem: there’s a lot of jitter with even the slightest head motion. It is not rigidly locked to my head motion and it is extremely noticeable. I can post a video if you’d like.

How I solved it:

In the XR Rig, you have a CenterEyeAnchor, right? Duplicate it twice and rename the duplicates to LeftEyeAnchor and RightEyeAnchor. Remove the Camera component from both; a TrackedPoseDriver remains on each. Set one to Left Eye and the other to Right Eye. Now you can compute each eye’s exact translation and rotation relative to CenterEyeAnchor (or Camera.main.transform)!
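For anyone trying to reproduce this, here is roughly what those duplicated anchors amount to in code (an untested sketch, assuming the legacy TrackedPoseDriver from UnityEngine.SpatialTracking that the XR Rig uses; names are illustrative):

using UnityEngine;
using UnityEngine.SpatialTracking;

// Untested sketch: create an eye anchor next to CenterEyeAnchor and point a
// TrackedPoseDriver at the Left Eye or Right Eye pose of the HMD.
public static class EyeAnchorSetup
{
    public static Transform CreateEyeAnchor(Transform cameraOffset, string name, TrackedPoseDriver.TrackedPose pose)
    {
        GameObject anchor = new GameObject(name);
        anchor.transform.SetParent(cameraOffset, false); // same parent as CenterEyeAnchor

        TrackedPoseDriver driver = anchor.AddComponent<TrackedPoseDriver>();
        driver.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice, pose);
        driver.trackingType = TrackedPoseDriver.TrackingType.RotationAndPosition;
        driver.updateType = TrackedPoseDriver.UpdateType.UpdateAndBeforeRender;
        return anchor.transform;
    }
}

// Usage, where cameraOffset is CenterEyeAnchor's parent:
// m_leftEye  = EyeAnchorSetup.CreateEyeAnchor(cameraOffset, "LeftEyeAnchor",  TrackedPoseDriver.TrackedPose.LeftEye);
// m_rightEye = EyeAnchorSetup.CreateEyeAnchor(cameraOffset, "RightEyeAnchor", TrackedPoseDriver.TrackedPose.RightEye);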

For example, my LateUpdate looks like this now:

  private void LateUpdate()
  {
    // Reposition center anchor point at the other side of the portal based on relative position of our head
    // to the portal entrance
    Transform cameraTransform = Camera.main.transform;
    Vector3 userOffsetFromPortal = cameraTransform.position - transform.position;
    m_otherCamera.transform.position = m_otherPortal.transform.position + userOffsetFromPortal;
    m_otherCamera.transform.rotation = m_otherPortal.rotation * Quaternion.Inverse(transform.rotation) * cameraTransform.rotation;

    // Ensure the left and right eye cameras are offset from the center anchor correctly
    if (Camera.main.stereoEnabled)
    {
      m_left.position = m_otherCamera.transform.position + m_leftEye.position - Camera.main.transform.position;
      m_right.position = m_otherCamera.transform.position + m_rightEye.position - Camera.main.transform.position;
      m_left.rotation = m_otherCamera.transform.rotation * Quaternion.Inverse(Camera.main.transform.rotation) * m_leftEye.rotation;
      m_right.rotation = m_otherCamera.transform.rotation * Quaternion.Inverse(Camera.main.transform.rotation) * m_rightEye.rotation;
    }
  }

A bit messy. m_left is the transform of the virtual left-eye camera observing the portal (I create this object myself and add a Camera to it), and m_leftEye is set via the inspector to the LeftEyeAnchor driven by the TrackedPoseDriver. I’m just computing the position and rotation of my virtual cameras from how the left and right eyes are offset from the head.
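An equivalent (untested) way to write that, assuming unit scale throughout the rig: express each eye’s pose in the head’s local frame once, then re-apply it in the portal camera’s frame. Unlike the world-space version above, this also rotates the positional offset, which matters if the two portals are not oriented identically.

// Untested alternative for the stereo branch of LateUpdate above.
private void PlaceEyeCamera(Transform eyeCamera, Transform eyeAnchor)
{
    Transform head = Camera.main.transform;
    Transform portalCam = m_otherCamera.transform;

    // Eye pose expressed in the head's local frame.
    Vector3 localOffset = Quaternion.Inverse(head.rotation) * (eyeAnchor.position - head.position);
    Quaternion localRotation = Quaternion.Inverse(head.rotation) * eyeAnchor.rotation;

    // Re-apply that local pose in the portal camera's frame.
    eyeCamera.position = portalCam.position + portalCam.rotation * localOffset;
    eyeCamera.rotation = portalCam.rotation * localRotation;
}

// In LateUpdate:
//   PlaceEyeCamera(m_left, m_leftEye);
//   PlaceEyeCamera(m_right, m_rightEye);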

I have not yet had time to capture a log dump of the separation and convergence values but my suspicion is that they will differ from the values in Camera.main. Will confirm later tonight.


Now I am really confused. You need the Oculus plugin for the Unity XR package, right? (Not necessarily the Oculus Integration asset, just the Oculus plugin). When I go to Project Settings → XR Plug-in Management, there is an Oculus drop-down.

Now I’m really confused as to how you are creating a single render texture and rendering to that. If you can share your code, it would be super helpful to understand what you are doing. Feel free to reach out privately at bart.trzy at gmail dot com, also.

See my post above. I think these values are actually incorrect! I also thought the projection matrices included these values, but now I’m confused, because it seems that the fix for me (minus the horrible jitter) was to offset the cameras manually.


Nice! Well done, I understand. And this is using the initial cutout shader you posted? Nothing specific to single-pass rendering in this shader?

On my side, the code is pretty simple, because I have a single camera… and I still want to keep one camera :smile:

[Header("General")] [SerializeField, Required]
    private GameObject portalOuput;
  
    [Header("Setup")]
    [SerializeField, Required] private new Renderer renderer;
    [SerializeField, AssetsOnly, Required] private Material material;
    [SerializeField, Required] private Collider activationArea;
    [SerializeField, Required] private Collider teleportationArea;

    private Camera _camera;
    private Material _defaultMaterial;
    private Material _mirrorMaterial;
    private RenderTexture _renderTexture;

    private bool _active;
    private bool _tracking;
    private Transform _trackedObject;

    private void Start() {
      
        // instantiate a camera and parent it to the portal output (to follow it if its moving)
        _camera = Instantiate(PlayerController.instance.head.camera, portalOuput.transform);
        _camera.gameObject.AddComponent<UniversalAdditionalCameraData>();
      
        _camera.transform.localPosition = Vector3.zero;
        _camera.transform.localRotation = Quaternion.identity;
        _camera.forceIntoRenderTexture = true;
      
        _camera.stereoTargetEye = StereoTargetEyeMask.Both;
        _camera.depth -= 1;

        // duplicate render texture and material
        _mirrorMaterial = Instantiate(material);

        // create the render texture
        _renderTexture = XRSettings.enabled
            ? new RenderTexture(XRSettings.eyeTextureDesc)
            : new RenderTexture(Screen.width, Screen.height, 24);
      
        _renderTexture.antiAliasing = 2;
        _renderTexture.vrUsage = VRTextureUsage.TwoEyes; // default seems to be DeviceSpecific

        // link render texture to material
        _mirrorMaterial.mainTexture = _renderTexture;
      
        // associate camera to render texture
        _camera.targetTexture = _renderTexture;
      
        // retrieve default material
        _defaultMaterial = renderer.material;
      
        // default, no mirroring
        SetupTracking();

    }

    private void LateUpdate() {

        if (!_active)
            return;
      
        // mirroring through the portal
        if (_tracking && _trackedObject != null) {
            _camera.transform.position = portalOuput.transform.TransformPoint(transform.InverseTransformPoint(_trackedObject.transform.position));
            _camera.transform.localRotation = _trackedObject.transform.localRotation;
        }
    }

And the shader I use is the same as yours… thanks to Brackeys :slight_smile:

And I don’t use the TrackedPoseDriver, but I have my own implementation, which basically does the same thing…

using System.Collections.Generic;
using Sirenix.OdinInspector;
using UnityEngine;
using UnityEngine.XR;

[DefaultExecutionOrder(-30000)]
public class RoomScaleTracker : MonoBehaviour {

    [Required] public XRNode node;

    /// <summary>
    /// Should the position be tracked?
    /// </summary>
    public bool trackPosition = true;

    /// <summary>
    /// Should the rotation be tracked?
    /// </summary>
    public bool trackRotation = true;

    /// <summary>
    /// Last known position (local space).
    /// </summary>
    public Vector3 lastLocalPosition { get; private set; }

    /// <summary>
    /// Last known rotation (local space).
    /// </summary>
    public Quaternion lastLocalRotation { get; private set; }

#if ENABLE_VR

    private static bool _initialized;

    private void Awake() {
        if (enabled && !_initialized) {
            _initialized = true;
           
            var subsystems = new List<XRInputSubsystem>();
            SubsystemManager.GetInstances(subsystems);
            foreach (var t in subsystems) {
                t.TrySetTrackingOriginMode(TrackingOriginModeFlags.Floor);
            }
        }
    }

    private void Update() {
        Track();
    }
   
    private void Track() {
        if (!enabled)
            return;
       
        var device = InputDevices.GetDeviceAtXRNode(node);
        if (device.isValid) {
            if (device.TryGetFeatureValue(CommonUsages.deviceRotation, out var rotation)) {
                lastLocalRotation = rotation;
               
                if (trackRotation)
                    transform.localRotation = rotation;
            }

            if (device.TryGetFeatureValue(CommonUsages.devicePosition, out var position)) {
                lastLocalPosition = position;
               
                if (trackPosition)
                    transform.localPosition = position;
            }
        }

    }

#endif

}

My fragment shader:

            fixed4 frag(v2f i) : SV_Target
            {
                float2 uv = i.screenPos.xy / i.screenPos.w; // clip space -> normalized texture

                // sample the texture
                fixed4 col = unity_StereoEyeIndex == 0 ? tex2D(_LeftEyeTexture, uv) : tex2D(_RightEyeTexture, uv);

                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }

I still don’t understand why the projection matrices aren’t handling the convergence and separation. I’ve confirmed that they do differ (albeit by only one element). I need to sit down and review projection matrices. Off the top of my head it does seem like one value would be insufficient to handle both phenomena…

Here are the matrices taken from the Quest log:

left =
 0.91729    0.00000   -0.17407    0.00000
 0.00000    0.83354   -0.10614    0.00000
 0.00000    0.00000   -1.00060   -0.60018
 0.00000    0.00000   -1.00000    0.00000

right =
 0.91729    0.00000    0.17407    0.00000
 0.00000    0.83354   -0.10614    0.00000
 0.00000    0.00000   -1.00060   -0.60018
 0.00000    0.00000   -1.00000    0.00000

Only m02 differs (positive in one, negative in the other).
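That single differing element is consistent with the standard off-axis (asymmetric-frustum) projection. With near-plane extents $l, r, b, t$ at near distance $n$ and far distance $f$, the OpenGL-style projection is

$$
P = \begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
$$

so $m_{02} = \frac{r+l}{r-l}$ is just the horizontal skew of the frustum: zero for a symmetric frustum, and opposite in sign for the two eyes, whose frusta are skewed toward each other. (The logged third row is consistent with roughly $n = 0.3$ and $f \approx 1000$.) The eye separation itself never appears in the projection matrix; it lives in the per-eye view matrices (GetStereoViewMatrix), which would explain why the cameras still have to be offset manually.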

And the jitter… hmm…

Have you seen this? https://docs.unity3d.com/ScriptReference/Camera.CopyStereoDeviceProjectionMatrixToNonJittered.html

OK, I’m starting to understand the problem… the second camera I create at runtime isn’t in stereoscopic mode… and I don’t seem to be able to force that. The question is: can we set up more than one stereoscopic camera in the same scene in Unity?

At runtime on the Quest:

And I did CopyFrom the initial camera…

// camera setup
_camera = _clone.AddComponent<Camera>();
_camera.CopyFrom(PlayerController.instance.head.camera);

_camera.transform.localPosition = Vector3.zero;
_camera.transform.localRotation = Quaternion.identity;
_camera.forceIntoRenderTexture = true;

_camera.stereoTargetEye = XRSettings.enabled ? StereoTargetEyeMask.Both : StereoTargetEyeMask.None;

// URP-specifics
_clone.AddComponent<UniversalAdditionalCameraData>();

OK, apparently setting the targetTexture property on a Camera disables its stereo mode… what the…?! I don’t see the point of having vrUsage on the RenderTexture, then…

I’m screwed; I guess I have to switch to Multiview and build a fake stereoscopic system with two cameras, like you did. Sad that such a small missing piece prevents a reliable and easy solution for VR-enabled portals or mirrors in Unity…


OK, I ended up doing it like you, and it’s working.

For the eye positioning, I implemented the script below. Attach it to your portal cameras, which are themselves attached to the portal’s remote origin. Since my transformations are all local (relative between the player and the portal), it’s pretty simple to compute the required offset to get the alignment. Strangely, I have to override the projectionMatrix every frame. It can surely be optimized, but it works, and the portals (at least mine) are only enabled for short periods of time.

@trzy Thanks for the discussion, it helped us both find a way to solve this.

The eye tracker (local space):

using UnityEngine;
using UnityEngine.XR;

public class LocalEyeTracker : MonoBehaviour {

    [SerializeField] private Camera.StereoscopicEye eye;

    private void Update() {
        var node = eye == Camera.StereoscopicEye.Left ? XRNode.LeftEye : XRNode.RightEye;
       
        // update relative eye position
        var device = InputDevices.GetDeviceAtXRNode(node);
        if (device.isValid) {
            if (device.TryGetFeatureValue(eye == Camera.StereoscopicEye.Left ? CommonUsages.leftEyeRotation : CommonUsages.rightEyeRotation, out var rotation))
                transform.localRotation = rotation;
           
            if (device.TryGetFeatureValue(eye == Camera.StereoscopicEye.Left ? CommonUsages.leftEyePosition : CommonUsages.rightEyePosition, out var position))
                transform.localPosition = position;
        }
       
        // update projection matrix
        GetComponent<Camera>().projectionMatrix = PlayerController.instance.head.camera.GetStereoProjectionMatrix(eye);
    }
}

Initialization of the cameras:

// instantiate the clone and parent it to the portal output (to follow it if it's moving)
_clone = new GameObject("Portal Clone");
_clone.transform.parent = portalOutput.transform;
_clone.transform.localPosition = Vector3.zero;
_clone.transform.localRotation = Quaternion.identity;

// retrieve default material
_defaultMaterial = renderer.material;

// setup render to texture
if (XRSettings.enabled) { // stereographic
    
    portalStereoSystem.SetActive(true);

    portalStereoSystem.transform.parent = _clone.transform;
    portalStereoSystem.transform.localPosition = Vector3.zero;
    portalStereoSystem.transform.localRotation = Quaternion.identity;
    
    // setup cameras
    portalStereoLeftCamera.CopyFrom(PlayerController.instance.head.camera);
    portalStereoRightCamera.CopyFrom(PlayerController.instance.head.camera);

    // render textures
    portalStereoLeftCamera.targetTexture = new RenderTexture(PlayerController.instance.head.camera.pixelWidth, PlayerController.instance.head.camera.pixelHeight, 24);
    portalStereoRightCamera.targetTexture = new RenderTexture(PlayerController.instance.head.camera.pixelWidth, PlayerController.instance.head.camera.pixelHeight, 24);

    // render texture material
    _mirrorMaterial = Instantiate(stereographicMaterial);
    _mirrorMaterial.SetTexture(LeftEyeTexture, portalStereoLeftCamera.targetTexture);
    _mirrorMaterial.SetTexture(RightEyeTexture, portalStereoRightCamera.targetTexture);

} else { // monographic
    
    portalStereoSystem.SetActive(false);

    // camera setup
    _camera = _clone.AddComponent<Camera>();
    _camera.CopyFrom(PlayerController.instance.head.camera);

    _camera.transform.localPosition = Vector3.zero;
    _camera.transform.localRotation = Quaternion.identity;
    _camera.forceIntoRenderTexture = true;

    // URP-specifics
    _clone.AddComponent<UniversalAdditionalCameraData>();
    
    // create the render texture
    _camera.targetTexture = new RenderTexture(Screen.width, Screen.height, 24) {antiAliasing = 2 };

    // render texture material
    _mirrorMaterial = Instantiate(monographicMaterial);
    _mirrorMaterial.mainTexture = _camera.targetTexture;
}

And the portal LateUpdate:

_clone.transform.position = portalOutput.transform.TransformPoint(transform.InverseTransformPoint(_trackedObject.transform.position));
_clone.transform.localRotation = _trackedObject.transform.localRotation;

My PortalStereoSystem is just a node with two children: one holds a camera with a LocalEyeTracker set to the left eye, and the other holds a camera with one set to the right eye.

OK, if by “jitter” you mean the rendered image rotating too fast relative to the VR camera, I get the same phenomenon. Also, the portal camera’s rotation around its forward axis doesn’t behave the same as the VR camera’s… is there some kind of “adaptation” made by the XR management package on the camera after updates, just before rendering, to correct/smooth things out?

In fact there is. All VR and AR HMDs require the rendered image to be adjusted just before it hits the display to account for head motion that occurred after the scene was submitted to the GPU for rendering. There are various techniques and terms for this: reprojection, time warp, space warp, etc.

I’ve never actually seen a system where this is disabled to compare the effect, so I’m not sure whether it’s the cause of what we’re seeing, although it is certainly possible. The portal is rendered from the same head pose as the rest of the scene, so I don’t quite understand why it would be. From my reading, Oculus Quest’s Asynchronous TimeWarp does not use depth information and operates purely on the final textures the renderer outputs each frame. Therefore, the fact that the portal is itself a texture with its depth information discarded should not be an issue.

I wonder if this is just latency, and the head pose during LateUpdate is simply not the same as the one the frame is actually rendered with? Maybe it is a full frame behind?

BTW, thanks for your other posts. I’ll have to find some time to study them further.

Out of curiosity I got the true convergence and separation values:

Convergence angle = 0
Separation = 0.06792906

This looks better. 68 mm is a reasonable and realistic human IPD. There is no convergence angle (the left and right eye cameras are parallel), so as not to create uncomfortable vertical parallax, and presumably the difference in the left/right projection matrices creates the asymmetric, off-axis projection for each eye, as below. At some point, for my own edification, I’ll derive the projection matrices myself.

[Attachment: projection.gif]

Now, as for the camera jitter: I have one idea that I’ll try later. There is an OnPreRender() callback that fires on scripts attached to the cameras themselves. This would require reworking the scripts to live on the cameras, though. Maybe the HMD pose is correct at that stage?
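One possible variant of that idea (an untested sketch): subscribe to Application.onBeforeRender, which fires just before rendering, after the XR system has updated the head pose, instead of moving the scripts onto the cameras for OnPreRender(). UpdateEyePoses() below is hypothetical; it stands for the pose math currently in the portal’s LateUpdate, factored into a method that can be called again here.

using UnityEngine;

// Untested sketch: re-run the portal eye positioning as late as possible.
public class PortalLateRepose : MonoBehaviour
{
    [SerializeField] private Portal m_portal;

    private void OnEnable()  { Application.onBeforeRender += Repose; }
    private void OnDisable() { Application.onBeforeRender -= Repose; }

    private void Repose()
    {
        m_portal.UpdateEyePoses(); // hypothetical method; see note above
    }
}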