Hello,
I have posted this question on Unity Answers, but I’ll summarize here as well.
I’m working on an effect that requires an object to be drawn to an off-screen RenderTexture during OnPreCull of the Main Camera, using a replacement shader. Then, during the Main Camera’s Forward Base pass, that same object is drawn with a Material that samples that RenderTexture.
With a single Camera, or even an explicit dual-Camera setup such as Google Cardboard, this approach works as expected; however, OVR seems to render the buffer from only one eye’s perspective.
What I assume is happening (or rather, what I know happens in the Cardboard workflow) is this:
1. During OnPreCull of the Main Camera (left-eye perspective), the off-screen Camera is told to render the object into its RenderTexture.
2. The object is rendered by the Main Camera (left-eye perspective) with the RenderTexture as a property of its Material.
3. During OnPreCull of the Main Camera (right-eye perspective), the off-screen Camera is told to render the object into its RenderTexture.
4. The object is rendered by the Main Camera (right-eye perspective) with the RenderTexture as a property of its Material.
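For reference, here is a minimal sketch of the setup described above, attached to the Main Camera. The field names (`offscreenCamera`, `replacementShader`, `targetRenderer`, `_OffscreenTex`) are illustrative assumptions, not my exact code:

```csharp
using UnityEngine;

// Sketch only: drives an off-screen render from the Main Camera's OnPreCull,
// then feeds the result to the material of the visible object.
public class OffscreenEffect : MonoBehaviour
{
    public Camera offscreenCamera;    // disabled Camera, rendered manually
    public Shader replacementShader;  // replacement shader for the off-screen pass
    public Renderer targetRenderer;   // object whose material samples the RT
    public RenderTexture renderTexture;

    void OnPreCull()
    {
        // Keep the off-screen camera aligned with the eye currently being culled.
        offscreenCamera.transform.position = transform.position;
        offscreenCamera.transform.rotation = transform.rotation;

        // Render the object into the off-screen texture with the replacement shader.
        offscreenCamera.targetTexture = renderTexture;
        offscreenCamera.RenderWithShader(replacementShader, "RenderType");

        // Feed the freshly rendered texture to the visible material.
        targetRenderer.material.SetTexture("_OffscreenTex", renderTexture);
    }
}
```

With Cardboard this runs (and the RenderTexture is refreshed) once per eye; with OVR it appears to run only once per frame, or the second run's result never reaches the second eye.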
It seems that in OVR, step #3 (or possibly #1) above is not actually happening: both eyes appear to sample the same state of the RenderTexture, which causes the resulting image to appear as “double vision”.
Is there something I’m misunderstanding with the way the framebuffer is put together when it renders in the OVR setup?
Any information would be greatly appreciated.
Thanks in advance.