Problem with shader in multi-pass VR: how can I fix the shader to position correctly for both eyes?

Hi, I have a problem. I set up VR for my project, which contains volumetric clouds created inside a container's boundaries using raymarching. When VR rendering is enabled, the clouds are heavily offset from the edges of the container, differently for each eye. For clarity, I placed a cube inside the container to show the offset.


Hello. Doesn’t it look like the left and right views are just swapped for the clouds?

No, they are offset from the container, although they should fit inside it. In normal (non-VR) rendering (qwe1.PNG), they are clearly inscribed in the container.

I still think it looks like the left and right views are flipped for the clouds compared to the box.


Oh, really, thanks! Do you have any ideas why this might be happening?

I guess it may have something to do with how the shader generates the clouds, or how they are rendered in the scene. Is it, for example, a skybox or a dome mesh?

It’s a skybox; the container coordinates are passed to the shader:

        // Container bounds: assumes an axis-aligned, unrotated container
        // (localScale also ignores any scaling from parent transforms)
        material.SetVector ("boundsMin", cloudsContainer.position - cloudsContainer.localScale / 2);
        material.SetVector ("boundsMax", cloudsContainer.position + cloudsContainer.localScale / 2);

After that, the shader calculates the distance to the container and the distance travelled inside it (a standard ray/AABB slab test):

            float2 distanceToContainer(float3 boundsMin, float3 boundsMax, float3 rayOrigin, float3 invRaydir) {

                // Slab test: intersect the ray with each pair of axis-aligned planes
                float3 t0 = (boundsMin - rayOrigin) * invRaydir;
                float3 t1 = (boundsMax - rayOrigin) * invRaydir;
                float3 tmin = min(t0, t1);
                float3 tmax = max(t0, t1);

                // Latest entry and earliest exit across the three axes
                float dstA = max(max(tmin.x, tmin.y), tmin.z);
                float dstB = min(tmax.x, min(tmax.y, tmax.z));

                // dstA < 0 means the ray starts inside the box; dstB < dstA means a miss
                float dstToContainer = max(0, dstA);
                float dstInsideContainer = max(0, dstB - dstToContainer);
                return float2(dstToContainer, dstInsideContainer);
            }

Since it’s correct in mono, perhaps you should focus on how the stereo separation is done?

Can you suggest where I should start in this case?

How about in the shader?

I’m trying, but I can’t find where the mixing happens. Maybe I need more stereo-specific built-in functions for VR?

            struct appdata {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 viewVector : TEXCOORD1;
            };
           
            v2f vert (appdata v) {
                v2f output;
                output.pos = UnityWorldToClipPos(v.vertex);
                output.uv = v.uv;

                // Reconstruct the view ray from the camera matrices
                float3 viewVector = mul(unity_CameraInvProjection, float4(v.uv * 2 - 1, 0, -1));
                output.viewVector = mul(unity_CameraToWorld, float4(viewVector, 0));
                // Manual per-eye offset I added while experimenting
                output.viewVector.x -= unity_StereoEyeIndex * 0.5;
                return output;
            }
You are probably on to something. If the shader is rendering the skybox in mono instead of stereo for VR, the image will also be off between the left and right eyes for nearby objects.
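The likely culprit is that `unity_CameraInvProjection` and `unity_CameraToWorld` are the camera’s mono matrices, so both eyes reconstruct the same ray. Here’s an untested sketch of the vertex shader using the per-eye matrix arrays that Unity exposes in UnityShaderVariables.cginc when stereo matrices are available (`USING_STEREO_MATRICES`), indexed by `unity_StereoEyeIndex`; the manual `x` offset hack is removed. Treat this as a starting point, not a confirmed fix:

```hlsl
v2f vert (appdata v) {
    v2f output;
    output.pos = UnityWorldToClipPos(v.vertex);
    output.uv = v.uv;

#if defined(USING_STEREO_MATRICES)
    // Per-eye inverse projection and camera-to-world, indexed by eye
    float3 viewVector = mul(unity_StereoCameraInvProjection[unity_StereoEyeIndex],
                            float4(v.uv * 2 - 1, 0, -1)).xyz;
    output.viewVector = mul(unity_StereoCameraToWorld[unity_StereoEyeIndex],
                            float4(viewVector, 0)).xyz;
#else
    // Mono fallback: same reconstruction as before
    float3 viewVector = mul(unity_CameraInvProjection, float4(v.uv * 2 - 1, 0, -1)).xyz;
    output.viewVector = mul(unity_CameraToWorld, float4(viewVector, 0)).xyz;
#endif
    return output;
}
```

Note that these arrays are populated for single-pass stereo; since your title says multi-pass, they may not be available, in which case one option is to pass the per-eye matrices in from C# yourself (e.g. via `Camera.GetStereoViewMatrix` and `Camera.GetStereoProjectionMatrix`) as material properties and use those for the reconstruction.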