Rendering into and reading from a stereo render texture

I'm working on an AR project for the Magic Leap 2 (which runs on an Android OS) using Unity (with single-pass stereo instanced rendering) and the MRTK.

I want to render the depth of an object into a render texture that I can read in shaders later, when rendering the normal view to the screen.

For that, I have created a second camera as a child of the main camera that renders only that object (via a layer mask). When I render that second camera to the screen, I can distinguish between the eyes in the shader using unity_StereoEyeIndex. But when I set the camera's render target to a render texture, I cannot get the information from both eyes into that texture, even when the camera's Target Eye is set to "Both".
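For reference, the setup looks roughly like this (a minimal sketch; the component, the "DepthObject" layer, and all other names are placeholders, not my actual code):

```csharp
using UnityEngine;

public class DepthCameraSetup : MonoBehaviour
{
    public Camera mainCamera;          // the XR rig's main camera
    public RenderTexture depthTarget;  // the texture the depth should end up in

    void Start()
    {
        var go = new GameObject("DepthCamera");
        go.transform.SetParent(mainCamera.transform, false); // child of the main camera

        var cam = go.AddComponent<Camera>();
        cam.CopyFrom(mainCamera);                           // match FOV, clip planes, etc.
        cam.cullingMask = LayerMask.GetMask("DepthObject"); // render only the one object
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.clear;
        cam.stereoTargetEye = StereoTargetEyeMask.Both;     // "Both", as set in the inspector
        cam.targetTexture = depthTarget;                    // once this is set, only one eye arrives
    }
}
```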

It is important to note that in my tests the rendering order appears to be

  • 1) left eye cam 1, right eye cam 1
  • 2) left eye cam 2, right eye cam 2

so I need to write the depth value for each eye into the texture and later read back the value for the corresponding eye.

I tried:

  • Using a 2D RGFloat render texture with the same dimensions as the ML2 screen and writing the depth into color.r if unity_StereoEyeIndex == 0, or into color.g otherwise (see the shader sketch after this list). But when I read from the render texture while rendering with the main camera, I only ever get a value from sampledColor.r; sampledColor.g is always 0. I also tried doubling the render texture's width to see if the two eyes might get rendered next to each other. For testing purposes I also displayed the render texture on a quad with an Unlit/Texture shader, so I could see whether something was simply rendering into the wrong coordinates, but it too shows only red shapes, no green.

  • Using a 2DArray render texture with 2 slices, declaring it with UNITY_DECLARE_TEX2DARRAY and sampling it with UNITY_SAMPLE_TEX2DARRAY, where uv.z = unity_StereoEyeIndex (also sketched below), but with that approach I just see black.
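For clarity, here is roughly what the two attempts look like in shader code. This is a sketch: _ObjectDepthTex and the depth computation are placeholders, and the vertex/fragment boilerplate follows the stereo macro pattern shown in the PS at the end of this post.

```hlsl
// Attempt 1 - fragment shader of the depth camera's material: split the
// eyes across the channels of the RGFloat target.
float4 frag(v2f i) : SV_Target
{
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i); // see the PS below
    float depth = i.pos.z; // placeholder for however the depth is computed
    return unity_StereoEyeIndex == 0
        ? float4(depth, 0, 0, 1)  // left eye -> red channel
        : float4(0, depth, 0, 1); // right eye -> green channel (stays 0 in practice)
}

// Attempt 2 - reading a 2-slice Tex2DArray in the main-view shader:
UNITY_DECLARE_TEX2DARRAY(_ObjectDepthTex);

float SampleObjectDepth(float2 uv)
{
    return UNITY_SAMPLE_TEX2DARRAY(_ObjectDepthTex, float3(uv, unity_StereoEyeIndex)).r;
}
```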

Maybe using a RenderFeature is the better approach? Although I'm not really sure how those work with stereo instancing. It would also be okay if I could only get this to work with multi-pass rendering, but so far I have had no luck with that either, as unity_StereoEyeIndex is not set in that mode and so I have no way of distinguishing between the eyes in the shader.

MRTK is not supported on Magic Leap 2. See the list of supported devices here: Mixed Reality Toolkit 3 Developer Documentation - MRTK3 | Microsoft Learn

I think that is only the case for MRTK3; version 2.8 works like a charm.
(following Set Up MRTK for Magic Leap 2 | MagicLeap Developer Documentation)

My issue is that a camera whose render target is a render texture does not render both eyes, even if Target Eye = Both is set on the camera. I don't think this has anything to do with MRTK; I only listed it for the sake of completeness.

  • If I create a render texture with exactly the same settings as the one used by a camera rendering to the screen (an R11G11B10F Tex2DArray with the correct width/height, obtained by copying the cameraColorTarget descriptor), the second array slice never gets written to.
  • If I create a plain (non-array) render texture and try to write to different color channels using color masks based on unity_StereoEyeIndex, I only ever get the output of the unity_StereoEyeIndex == 0 path.
  • It also does not render the eyes side by side (I tried a render texture with double width).

My workaround for now is to render my second camera to the screen as well and then use a ScriptableRendererFeature to copy the cameraColorTarget (which now correctly has content in both slices of the Tex2DArray) into another render texture that I can read from later. A sketch of that feature follows below.
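Roughly like this; a minimal sketch assuming a URP version where renderer.cameraColorTarget still exists (it was replaced by cameraColorTargetHandle in later URP releases). All class, pass, and texture names here are placeholders, and the MSAA caveat in the comments applies:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class StereoColorCopyFeature : ScriptableRendererFeature
{
    class StereoColorCopyPass : ScriptableRenderPass
    {
        RenderTexture copy;

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // Under single-pass instanced rendering this descriptor already
            // describes a Tex2DArray with 2 slices (one per eye).
            var desc = renderingData.cameraData.cameraTargetDescriptor;
            desc.depthBufferBits = 0;

            if (copy == null || copy.width != desc.width || copy.height != desc.height)
            {
                if (copy != null) copy.Release();
                copy = new RenderTexture(desc);
                copy.Create();
                Shader.SetGlobalTexture("_StereoColorCopy", copy); // hypothetical global name
            }

            var cmd = CommandBufferPool.Get("StereoColorCopy");
            // CopyTexture copies every slice, so both eyes end up in the copy.
            // Note: this assumes the camera target is not MSAA; with MSAA a
            // per-slice resolve/Blit would be needed instead.
            cmd.CopyTexture(renderingData.cameraData.renderer.cameraColorTarget, copy);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    StereoColorCopyPass pass;

    public override void Create()
    {
        pass = new StereoColorCopyPass
        {
            // run after the second camera's objects have been drawn
            renderPassEvent = RenderPassEvent.AfterRenderingTransparents
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }
}
```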

PS: For a reason I don't know, unity_StereoEyeIndex only seems to be set in the vertex shader, and I have to pass it to the fragment shader myself. I don't know if this is intended behavior; I just wanted to mention it.
(See https://forum.unity.com/threads/unity_stereoeyeindex-is-always-0.544620/ )
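For what it's worth, this matches Unity's documented pattern for Single Pass Instanced rendering: UNITY_VERTEX_OUTPUT_STEREO carries the eye index out of the vertex stage, and UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX makes unity_StereoEyeIndex valid in the fragment stage. A minimal built-in-RP debug shader (colors the left eye red and the right eye green):

```hlsl
Shader "Unlit/EyeIndexDebug"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                UNITY_VERTEX_OUTPUT_STEREO // carries the eye index across stages
            };

            v2f vert(appdata v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // Without this macro, unity_StereoEyeIndex reads as 0 here.
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                return unity_StereoEyeIndex == 0 ? fixed4(1, 0, 0, 1)  // left eye
                                                 : fixed4(0, 1, 0, 1); // right eye
            }
            ENDCG
        }
    }
}
```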