I’ve recently set up my project to render the scene to a render texture, which is then used as a texture on a plane that an orthographic camera draws to the screen. I’ve set up the camera, plane, and texture so the aspect ratios are correct (the scene looks the same regardless of whether it’s rendered via the flat plane or directly from the main camera).
My issue is with Camera.ScreenPointToRay(), which is used for all of the user input to tap on objects in the scene. The rays are generated using the same perspective camera that renders the scene, but it produces different rays depending on whether the orthographic rendering setup is active. The pictures below show the different results.
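For reference, the input handling is essentially like this (a minimal sketch, not my exact script; `sceneCamera` and the `OnTapped` message are placeholder names):

```csharp
using UnityEngine;

public class TapInput : MonoBehaviour
{
    // The perspective camera that renders the scene (into the
    // RenderTexture when the orthographic setup is active).
    public Camera sceneCamera;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Same screen position and same camera in both modes,
            // yet the resulting ray differs when the RenderTexture
            // setup is active.
            Ray ray = sceneCamera.ScreenPointToRay(Input.mousePosition);
            Debug.DrawRay(ray.origin, ray.direction * 100f, Color.yellow, 2f);

            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // Notify whatever was tapped (placeholder message name).
                hit.collider.SendMessage("OnTapped",
                    SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```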
First is the view from the editor when rendering through the camera normally. The red dot is where I clicked to generate the ray, which is drawn in yellow.
Now for the same scene, rendered to a RenderTexture and drawn on a plane by the orthographic camera. Again the red dot marks the input position, and you can clearly see a huge difference in the ray that is generated.
I’ve logged the screen coordinates used for the raycast, as well as the aspect ratio of the camera. With and without the orthographic setup these values are exactly the same, so I’m confused as to why the ray results differ, unless there is some other value I haven’t thought of that would affect the ray generation.
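The debug output amounts to something like this (again a sketch, with `sceneCamera` as the placeholder name for the perspective camera); both values print identically in the two modes:

```csharp
// Logged each click to compare the two rendering modes.
Debug.Log("Screen point: " + Input.mousePosition
        + "  camera aspect: " + sceneCamera.aspect);
```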
A Unity package with a test scene (slightly modified from the one in the screenshots to make testing on a device easier) is available here.
The test scene has 4 spheres that should cycle colours when clicked or tapped, and a button to swap between modes. It uses render textures, so it will only run on Unity Pro.