WorldToViewportPoint and WorldToScreenPoint give wrong positions when VR is enabled

In my application, I want to use eye tracking to determine whether a user is looking at a certain GameObject. From the eye tracker I get positions in "normalized screen space", which is comparable to viewport positions in Unity: a vector in the range [0, 0] to [1, 1].

To determine the position of the GameObject on the screen, I use Camera.WorldToViewportPoint. This works fine as long as VR is disabled. As soon as I enable VR, the calculated viewport position is wrong (except when the GameObject is in the center); it looks like a nonlinear distortion.

Do I need to apply another conversion after WorldToViewportPoint?

Here is an example of the issue: it displays the viewport position and overlays a GUI element at the calculated screen position. I modified the code from this question and attached it to the cube:

    void OnGUI()
    {
        // Project the cube's world position into viewport space ([0,1] range)
        // and into screen space (pixels).
        Vector3 viewportPos = Camera.main.WorldToViewportPoint(transform.position);
        Vector3 screenPos = Camera.main.WorldToScreenPoint(transform.position);

        GUI.Label(new Rect(30, 30, 100, 30), viewportPos.ToString());

        // GUI coordinates have their origin at the top left, screen
        // coordinates at the bottom left, so flip the y axis.
        GUI.Box(new Rect(screenPos.x - 10, Screen.height - (screenPos.y + 10), 20, 20), "X");
    }

The correct result (VR disabled):
[screenshot: 88267-normal.jpg]

The incorrect result (VR enabled):
[screenshot: 88268-vr.jpg]

As you can see, no GUI element is displayed (the calculated screen position is out of bounds), and the calculated viewport position is wrong as well. I understand that the screen positions can be off if the pixel values are calculated for the HMD's screen and then used to position GUI objects on the secondary monitor, where the window has different dimensions than the HMD.
But the viewport positions should still work correctly, or am I missing something?

Many thanks!

For VR, I used both the standard Unity Camera and the HTC Vive Camera Prefab; same effect with both.

@ottolutz
Did you figure this out? I'm having trouble with this too.

Thank you

Hi Otto,

You have 2 questions here so I am going to attempt to answer them individually.

First the GUI: VR is a special beast. Because framerate is extremely important in VR, certain parts of the rendering stack are not drawn at all. (I believe this is related to deferred rendering; everything in VR needs to be forward rendered, or something like that.)

Anyway, the important bit is that the GUI drawing phase of rendering is skipped by design in VR. It would also be disorienting: you don't actually want to use the legacy GUI in VR because it's 2D in a 3D space, and having GUI elements pasted to your face would mess with your eyes.

What I recommend is to child a GUI canvas to the headset object and set its Z to a reasonable distance in front of the HMD.

If you still want to use the GUI elements, I think you can do it by switching your canvas render mode to World Space. Now that it's a world-space object, you should be able to mount it on the HMD by childing it to the HMD object, as in the sketch below.
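
Something like this, as a minimal sketch. The class name, the hudCanvas field, the 2 m distance, and the scale factor are all placeholders, and it assumes your HMD camera is the one tagged MainCamera:

    using UnityEngine;

    // Minimal sketch: mount an existing Canvas in front of the HMD camera.
    public class AttachHudToHmd : MonoBehaviour
    {
        public Canvas hudCanvas;     // assign in the Inspector
        public float distance = 2f;  // metres in front of the HMD

        void Start()
        {
            Camera hmd = Camera.main;

            // World-space canvases are rendered as ordinary 3D geometry,
            // so they show up in VR (screen-space overlays do not).
            hudCanvas.renderMode = RenderMode.WorldSpace;
            hudCanvas.worldCamera = hmd;

            // Child the canvas to the HMD and push it out along local Z.
            hudCanvas.transform.SetParent(hmd.transform, false);
            hudCanvas.transform.localPosition = new Vector3(0f, 0f, distance);
            hudCanvas.transform.localRotation = Quaternion.identity;

            // A world-space canvas is sized in world units, so scale it
            // down; the right factor depends on your canvas dimensions.
            hudCanvas.transform.localScale = Vector3.one * 0.001f;
        }
    }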

For your second question: I suspect Unity is correct and is treating the HMD itself (one of the eyes, anyway) as the camera.

You could simply print out the world coordinates of each camera and see whether the starting positions differ. I bet they do, and I bet if you move your HMD you'll see the values change in the VR case.
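
For example, something like the sketch below, attached to the cube from the question. It assumes the HMD camera is Camera.main; the explicit-eye overload of WorldToViewportPoint only exists in newer Unity versions, so drop those lines if yours doesn't have it:

    using UnityEngine;

    // Debug sketch: log where Unity thinks the camera is, and compare the
    // default projection against the explicit Mono / per-eye variants.
    public class ViewportDebug : MonoBehaviour
    {
        void Update()
        {
            Camera cam = Camera.main;

            // With VR enabled this tracks the HMD, so the value should
            // change as you move your head.
            Debug.Log("Camera world pos: " + cam.transform.position);

            // Default call: in VR this uses the stereo (per-eye) projection.
            Debug.Log("Default: " + cam.WorldToViewportPoint(transform.position));

            // The Mono variant ignores the stereo offset and should match
            // the non-VR result.
            Debug.Log("Mono:  " + cam.WorldToViewportPoint(
                transform.position, Camera.MonoOrStereoscopicEye.Mono));
            Debug.Log("Left:  " + cam.WorldToViewportPoint(
                transform.position, Camera.MonoOrStereoscopicEye.Left));
            Debug.Log("Right: " + cam.WorldToViewportPoint(
                transform.position, Camera.MonoOrStereoscopicEye.Right));
        }
    }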

Hi!

Did you find a solution? I have the same problem and I don't know how to fix it.

Thanks!