In Unity 2018.2, the Stereo Rendering Method is Multi Pass.
I displayed the attached image (modo_uv_checker.jpg) on a Plane with the shader code below, but the results seen by the left eye and the right eye in VR are different, as shown in the attached image (uvbug.PNG).
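In essence the shader just samples the texture by screen position, something like this (a simplified sketch; the shader name and the _MainTex property are placeholders):

Shader "Unlit/ScreenSpaceChecker"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f
            {
                float4 vertex    : SV_POSITION;
                float4 screenPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                // 0-1 screen-space coordinates (before perspective divide).
                o.screenPos = ComputeScreenPos(o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Perspective divide gives 0-1 screen UVs.
                float2 uv = i.screenPos.xy / i.screenPos.w;
                return tex2D(_MainTex, uv);
            }
            ENDCG
        }
    }
}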
This seems to indicate that ComputeScreenPos is handled differently for the left eye and the right eye. That was not the case in Unity 2017.
There was no change in the implementation of ComputeScreenPos between Unity 2017 and 2018.2.
You can download the built-in shader code for both versions here and compare.
ComputeScreenPos is designed to take a clip-space position and convert it to a screen-space-friendly 0 to 1 range, which is useful for various fragment shader operations.
In a stereo context, ComputeScreenPos behaves differently if and only if you are using the single-pass (double-wide) stereo rendering method. This is because single-pass double-wide mode renders both eyes into a single texture that is twice the width of a single eye texture, so calculating the screen position has to account for the width and offset of each eye.
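For reference, this is approximately how ComputeScreenPos is defined in UnityCG.cginc (trimmed for clarity); note that the per-eye scale/offset is applied only when single-pass stereo is active:

inline float4 ComputeNonStereoScreenPos(float4 pos)
{
    float4 o = pos * 0.5f;
    o.xy = float2(o.x, o.y * _ProjectionParams.x) + o.w;
    o.zw = pos.zw;
    return o;
}

#if defined(UNITY_SINGLE_PASS_STEREO)
float2 TransformStereoScreenSpaceTex(float2 uv, float w)
{
    // unity_StereoScaleOffset holds a scale/offset pair per eye.
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    return uv.xy * scaleOffset.xy + scaleOffset.zw * w;
}
#endif

inline float4 ComputeScreenPos(float4 pos)
{
    float4 o = ComputeNonStereoScreenPos(pos);
#if defined(UNITY_SINGLE_PASS_STEREO)
    // Remap into the current eye's half of the double-wide texture.
    o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
#endif
    return o;
}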
However, you said you were using multi-pass. When using multi-pass, ComputeScreenPos performs in the same manner as it would if you were not rendering in stereo. The input to ComputeScreenPos, though, depends on the output of UnityObjectToClipPos, so I think the offset is coming from the UnityObjectToClipPos function. UnityObjectToClipPos transforms the object-space position into a view-space position and finally into a clip-space position. The view matrix is slightly different per eye to account for the interocular distance between the two eye positions; that is why game objects appear in slightly different locations from the perspective of each eye’s rendering. The same per-eye view matrix that applies a horizontal offset to the game objects in your scene is also applied when you use the output of UnityObjectToClipPos as the input to ComputeScreenPos. Even if you don’t use ComputeScreenPos and instead attempt to calculate the screen position yourself, like this:
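// A sketch of the usual manual calculation (struct and sampler names
// are illustrative): pass the raw clip-space position through and do
// the perspective divide in the fragment shader.
struct v2f
{
    float4 vertex    : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

sampler2D _MainTex;

v2f vert (appdata_base v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.screenPos = o.vertex; // raw clip-space position, no ComputeScreenPos
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Perspective divide, then remap NDC [-1, 1] to [0, 1].
    // (Platform y-flip via _ProjectionParams.x ignored for brevity.)
    float2 uv = i.screenPos.xy / i.screenPos.w * 0.5 + 0.5;
    return tex2D(_MainTex, uv);
}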
You will still notice an offset in your texture coordinates. However, if you move your quad close to the camera so that it fills the screen, the offset doesn’t seem to exist. This is because the game object itself is offset per eye by the view matrix, as mentioned earlier. You can see that the distance between the left edge of the eye texture and the left edge of the quad is different for each eye.
I’m having a similar issue, based on a tutorial I found for doing Portal-style effects using render textures. I have little shader knowledge, but it looks like it’s using UnityObjectToClipPos just before ComputeScreenPos, similar to your examples.
My question is: is there any way to make this work in multi-pass without duplicating my cameras and render textures? The OP mentions it used to work, but your answer sounds to me like it simply won’t work this way.