I am attempting to write an image effect that combines the left and right eye views into an anaglyph 3D image (red + blue). Because I want this to work in single pass stereo mode, I use a single camera, chose the VR SDK ‘Split Stereo Display (non head-mounted)’, and set ‘Stereo Rendering Method’ to ‘Single Pass’.
EDIT:
My main issue is that when I assign a RenderTexture to cam.targetTexture, cam.stereoEnabled flips to false. If stereo cameras supported a targetTexture, I could render at double width and then Blit the texture to screen, sampling from the left and right halves. So at present it seems I am limited to rendering stereo at half width (if I want single pass to work).
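A minimal sketch of the behaviour I'm seeing (component and field names are my own, for illustration):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class StereoTargetRepro : MonoBehaviour
{
    public RenderTexture doubleWide; // e.g. 3840x1080

    void Start()
    {
        var cam = GetComponent<Camera>();
        Debug.Log(cam.stereoEnabled);  // true while rendering to the display

        // As soon as a targetTexture is assigned, the camera appears
        // to drop out of stereo rendering:
        cam.targetTexture = doubleWide;
        Debug.Log(cam.stereoEnabled);  // now false
    }
}
```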
Findings:
– The VR SDK ‘Stereo Display (non head-mounted)’ only supports outputting directly to displays, and only in an exported (built) app.
– The VR SDK ‘Split Stereo Display (non head-mounted)’ renders to two textures (RTEyeTextureLeft0 and RTEyeTextureRight0) if the system does not support single pass, and to a single texture (RTEyeTextureDoubleWide0) if it does. But Unity’s API does not provide a runtime indicator that tells us which of the two it is doing.
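The best workaround I can think of is a heuristic rather than an official flag (this is my assumption, not documented behaviour): in single pass the texture arriving in OnRenderImage should be double-wide, so comparing its width against XRSettings.eyeTextureWidth might hint at the active mode.

```csharp
using UnityEngine;
using UnityEngine.XR;

[RequireComponent(typeof(Camera))]
public class StereoModeProbe : MonoBehaviour
{
    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Heuristic: a double-wide source suggests single pass is active;
        // a per-eye-sized source suggests multi pass (called once per eye).
        bool looksSinglePass = src.width >= XRSettings.eyeTextureWidth * 2;
        Debug.Log(looksSinglePass ? "single pass (double-wide)" : "multi pass (per eye)");

        Graphics.Blit(src, dst);
    }
}
```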
How can I render a single pass stereo image to one RenderTexture at a specific size (3840x1080), and then Blit that texture to screen at another size (1920x1080), reading from the left and right halves to combine them?
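For reference, this is roughly the combine step I'm after, assuming a hypothetical ‘Hidden/AnaglyphCombine’ shader that samples the left half (uv.x * 0.5) for the red channel and the right half (uv.x * 0.5 + 0.5) for the blue channel:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class AnaglyphCombine : MonoBehaviour
{
    // Material using the hypothetical ‘Hidden/AnaglyphCombine’ shader above.
    public Material combineMat;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // src would ideally be the 3840x1080 double-wide stereo texture;
        // dst is the 1920x1080 back buffer. The shader remaps each output
        // pixel to the corresponding left/right half of src.
        Graphics.Blit(src, dst, combineMat);
    }
}
```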
This is easy if you use two cameras. But as I understand the docs, you need to use a single camera for single pass stereo to work.