I am attempting to use a RenderTexture that is drawn using an object's screen-space coordinates in VR.
I have a separate Camera, with a target RenderTexture and a replacement Shader, that is told to render during the OnPreCull event of the Main Camera.
The object that gets rendered into the RenderTexture with the replacement Shader then uses that RenderTexture when it is rendered during the Forward Base pass of the Main Camera.
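Roughly, the relevant part of my setup looks like the following sketch. The component, field, and property names here (OffscreenCapture, _CapturedTex, the "RenderType" replacement tag) are placeholders for illustration rather than my exact code:

```csharp
using UnityEngine;

// Attached to the Main Camera. The off-screen Camera is disabled and only
// renders when told to here. "_CapturedTex" and the "RenderType" tag are
// placeholder names used for illustration.
[RequireComponent(typeof(Camera))]
public class OffscreenCapture : MonoBehaviour
{
    public Camera offscreenCamera;    // disabled Camera with a target RenderTexture assigned
    public Shader replacementShader;  // replacement Shader for the off-screen pass
    public Material objectMaterial;   // Material on the object the Main Camera renders

    void OnPreCull()
    {
        // Keep the off-screen Camera aligned with the Main Camera for this eye.
        offscreenCamera.transform.position = transform.position;
        offscreenCamera.transform.rotation = transform.rotation;

        // Render into the off-screen Camera's RenderTexture with the replacement Shader.
        offscreenCamera.RenderWithShader(replacementShader, "RenderType");

        // The Forward Base pass of the Main Camera then samples this texture.
        objectMaterial.SetTexture("_CapturedTex", offscreenCamera.targetTexture);
    }
}
```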
In doing this I am assuming that the Main Camera will render for the left eye, the RenderTexture will receive the results from the left eye's perspective, the object will then be rendered with that RenderTexture, and the process will repeat for the right eye.
To reiterate, the process occurs like this (a sketch of the per-eye step follows the list):
1. OnPreCull of the Main Camera (from the left eye's perspective): the off-screen Camera is told to render an object into its RenderTexture.
2. The object is rendered by the Main Camera (from the left eye's perspective) with the RenderTexture as a property of its Material.
3. OnPreCull of the Main Camera (from the right eye's perspective): the off-screen Camera is told to render an object into its RenderTexture.
4. The object is rendered by the Main Camera (from the right eye's perspective) with the RenderTexture as a property of its Material.
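My assumption is that the off-screen Camera picks up the correct eye's view each time it is told to render. If that has to be done explicitly, I would expect it to look something like this sketch, assuming stereoActiveEye and the stereo matrix getters already reflect the eye being prepared when OnPreCull fires (which is exactly the part I'm unsure about):

```csharp
using UnityEngine;

// Sketch of what I would expect the per-eye step to involve if the eye's
// matrices have to be copied over explicitly. Assumes stereoActiveEye and
// the stereo matrix getters already reflect the eye currently being culled
// when OnPreCull fires, which is the behaviour I'm unsure about with OVR.
[RequireComponent(typeof(Camera))]
public class PerEyeCapture : MonoBehaviour
{
    public Camera offscreenCamera;
    public Shader replacementShader;

    void OnPreCull()
    {
        Camera mainCam = GetComponent<Camera>();
        Camera.MonoOrStereoscopicEye eye = mainCam.stereoActiveEye;

        if (eye != Camera.MonoOrStereoscopicEye.Mono)
        {
            Camera.StereoscopicEye stereoEye = (eye == Camera.MonoOrStereoscopicEye.Left)
                ? Camera.StereoscopicEye.Left
                : Camera.StereoscopicEye.Right;

            // Copy this eye's view and projection so the RenderTexture matches it.
            offscreenCamera.worldToCameraMatrix = mainCam.GetStereoViewMatrix(stereoEye);
            offscreenCamera.projectionMatrix    = mainCam.GetStereoProjectionMatrix(stereoEye);
        }

        offscreenCamera.RenderWithShader(replacementShader, "RenderType");
    }
}
```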
With other APIs such as Google Cardboard, where there are actually two Cameras, this works as expected; the perspective of each eye is correctly used for the RenderTexture, and the buffer can be shared. However, it seems that with OVR, step 3 (or possibly step 1) above is not actually happening: both eyes appear to sample the same RenderTexture, causing the resulting image to appear as “double vision”.
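To try to confirm whether OnPreCull really fires once per eye with OVR, I have been thinking of logging each call along these lines (again just a sketch; I'm not certain stereoActiveEye is populated at that point):

```csharp
using UnityEngine;

// Attached to the Main Camera purely to check how many times OnPreCull fires
// per frame and for which eye. If it only fires once per frame on OVR, that
// would explain why both eyes end up sampling the same RenderTexture.
[RequireComponent(typeof(Camera))]
public class EyePassLogger : MonoBehaviour
{
    int preCullsThisFrame;

    void Update()
    {
        preCullsThisFrame = 0;  // Update runs before culling each frame
    }

    void OnPreCull()
    {
        preCullsThisFrame++;
        Camera cam = GetComponent<Camera>();
        Debug.LogFormat("Frame {0}: OnPreCull #{1}, eye = {2}",
            Time.frameCount, preCullsThisFrame, cam.stereoActiveEye);
    }
}
```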
I can only speculate that my assumptions about how the Camera works when rendering to the Oculus headset are incorrect, but I can't find any information that explains what the Camera is actually doing.
If anyone has any ideas it would be greatly appreciated.
Thanks in advance.