WorldToScreenPoint / Rays with stereoscopic 3D

In a stereoscopic scene you need two different cameras (one for the left eye and one for the right). Hence you get two differing points when using WorldToScreenPoint (at least as far as I understand).

Will camera.WorldToScreenPoint(transform.position) still work, or do I have to use another command to get the result? Or can Unity cope with a stereoscopic setup and still give a correct result?

Further: What is the effect on Rays?

Even further: might the result differ when using MiddleVR?

WorldToScreenPoint would still work… in a way. Since it's a method of a camera instance, the result simply depends on which of the two cameras you call it on.

Depending on the separation distance between the cameras, it may be perfectly acceptable to just pick one of them and go with it. But if accuracy matters, you may be better off calling the function on both cameras and using both results to work out where on the screen the point is (i.e. the midpoint between the two results), as in the sketch below.
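
A minimal sketch of that midpoint idea, assuming you already hold references to the two eye cameras (the field names leftEyeCamera and rightEyeCamera are placeholders I made up, not anything Unity or MiddleVR provides):

using UnityEngine;

public class StereoScreenPoint : MonoBehaviour
{
    // Placeholder references to the two eye cameras; assign them in the Inspector.
    public Camera leftEyeCamera;
    public Camera rightEyeCamera;

    // Project a world-space point through each eye camera and return the
    // midpoint of the two screen-space results.
    public Vector3 WorldToScreenPointStereo(Vector3 worldPosition)
    {
        Vector3 left = leftEyeCamera.WorldToScreenPoint(worldPosition);
        Vector3 right = rightEyeCamera.WorldToScreenPoint(worldPosition);
        return (left + right) * 0.5f;
    }
}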

I've never used or looked at MiddleVR though, so I can't offer any input there.

After enough trial and testing I came up with a quite simple but effective solution. I think this solution only works with MiddleVR, but I don't know for sure since I could not test any other setup.

In the case of MiddleVR, two cameras are needed to render the stereoscopic image. Nevertheless, you can place a third camera between the cameras for the left and right eye. This third camera can be used as the ray-performing and WorldToScreenPoint-performing camera; it does not render any image. Hence one does not need to rewrite existing code to fit a game for stereoscopic 3D.
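
A rough sketch of that setup in plain Unity terms (it does not use any MiddleVR API; the camera references and the way the center camera is kept between the eyes are assumptions about your rig):

using UnityEngine;

public class CenterCalculationCamera : MonoBehaviour
{
    public Camera leftEyeCamera;   // the two stereo eye cameras (assumed references)
    public Camera rightEyeCamera;
    public Camera centerCamera;    // third camera used only for calculations

    void Start()
    {
        // The center camera never renders; it only supplies the view/projection
        // used by WorldToScreenPoint and ScreenPointToRay.
        centerCamera.enabled = false;
    }

    void LateUpdate()
    {
        // Keep the center camera halfway between the two eyes every frame.
        centerCamera.transform.position =
            (leftEyeCamera.transform.position + rightEyeCamera.transform.position) * 0.5f;
        centerCamera.transform.rotation = leftEyeCamera.transform.rotation;
    }

    // Existing game code can keep calling these instead of touching the eye cameras.
    public Vector3 WorldToScreen(Vector3 worldPosition)
    {
        return centerCamera.WorldToScreenPoint(worldPosition);
    }

    public Ray RayFromScreen(Vector3 screenPosition)
    {
        return centerCamera.ScreenPointToRay(screenPosition);
    }
}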