Issue viewing a raymarching shader I wrote for VR.

Hello everyone,

I am currently writing a raymarching shader. So far, the shader intersects correctly with scene geometry when rendered from a single (monoscopic) camera position. Here are some screenshots to show what's up:

I am able to make a variety of raymarched shapes that properly intersect with vertex-based geometry in-world, and every monoscopic camera can look at this raymarched geometry without issue.

However, once I put on the Index headset, the viewport shows a strange mismatch: there's a vertical bar down the middle of everything, and the raymarcher's depth detection is offset from where my avatar's hands actually are.

Right now, I have some rather messy code in the vertex shader for transforming the vertices from object space to clip space:

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);

                // hand-rolled grab-pass UV: flip Y on platforms where the
                // UV origin is at the top, then remap clip space to [0, w]
                #if UNITY_UV_STARTS_AT_TOP
                    float scale = -1.0;
                #else
                    float scale = 1.0;
                #endif
                o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y * scale) + o.vertex.w) * 0.5;
                o.uvgrab.zw = o.vertex.zw;

                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                // ray origin: camera position transformed into object space
                o.ro = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1));
                o.hitPos = v.vertex;
                return o;
            }

I think this is where the problem might lie, but I'm not sure. I know the functions listed in Unity - Manual: Stereo rendering sort of explain that a stereoscopic camera operates in a fundamentally different way and that the coordinates need to be mapped differently, but it's not at all clear to me what the inner workings of those black boxes are.
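For reference, the boilerplate that page tells you to add for Single Pass Instanced rendering looks like this (sketched from the manual's example; everything outside the UNITY_* macros is placeholder), though I don't understand what these macros actually do internally:

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        UNITY_VERTEX_INPUT_INSTANCE_ID // instance ID used to pick the eye
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        UNITY_VERTEX_OUTPUT_STEREO // carries the eye index to the fragment stage
    };

    v2f vert (appdata v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v); // selects the correct per-eye matrices
        UNITY_INITIALIZE_OUTPUT(v2f, o);
        UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o); // forwards the eye index
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.uv;
        return o;
    }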

How should I transform these coordinates so that the data I pass to the fragment shader is properly mapped to the VR headset's screens?

It looks like the raymarched shapes themselves stay in their proper spots, so my assumption is that the fragment shader handles the raymarch correctly. However, I have a feeling that _CameraDepthTexture is the culprit: it isn't being mapped to the viewport properly, so the depth test against the surrounding vertex geometry lands in the wrong place. Right now, to get the camera depth data into the fragment shader, I use:

fixed4 col2 = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.uvgrab)));

I think i.uvgrab is probably what needs to be rescaled.


It intersects just fine when viewed with a non-VR camera.

Can anyone help with this one? What function will properly map the depth texture to the VR viewport? The depth texture is what I've been using to determine draw order.

You should be using the built-in function for calculating the screen UVs rather than trying to calculate them yourself; the built-in function correctly handles the (several) ways screen UVs differ between non-VR and VR for screen-space textures like the depth texture.

Specifically, you want to use the ComputeScreenPos() function. See the built-in particle shaders for an example of how to use it:
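The soft-particle pattern is roughly this (a sketch; screenPos is just whatever name you give the interpolator):

    // vertex shader: let Unity build the screen-space UV
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.vertex); // needs a float4 interpolator

    // fragment shader: projective sample of the depth texture
    float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));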

Hello and thanks for this information! I ended up going a slightly different route.

I changed frag's return type from a float4 to a struct that writes to both SV_Target and SV_Depth.

Specifically:

struct f2s
{
    float4 color : SV_Target;
    float depth : SV_Depth;
};

I was unaware of SV_Depth until talking with some people about it and seeing how they got it to work.

Within the fragment shader, I computed the clip position via

float4 clipPos = mul(UNITY_MATRIX_VP, float4(worldHitPos, 1.0)); // worldHitPos = where the ray hits, in world space

Then, in an instance of that struct named ret, I assigned clipPos.z / clipPos.w to ret.depth.
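Put together, the relevant part of the fragment shader looks roughly like this (a sketch: RaymarchWorldHit() is a stand-in for my actual marching loop, and the shading is omitted):

    f2s frag (v2f i)
    {
        f2s ret;

        // stand-in for the real marching loop, which returns
        // the hit point in world space
        float3 worldHitPos = RaymarchWorldHit(i);

        // project the world-space hit point to clip space, then
        // perspective-divide to get the hardware depth value
        float4 clipPos = mul(UNITY_MATRIX_VP, float4(worldHitPos, 1.0));
        ret.depth = clipPos.z / clipPos.w;

        ret.color = float4(1, 1, 1, 1); // shading omitted
        return ret;
    }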

That allowed all of the depth to be properly sorted, except when the shape literally touched the near clipping plane; then it appeared behind everything else. I'm not sure what's going on in that case, so I just threw in a junky "if" statement that returns 1 if the depth somehow drops below 0.
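For anyone curious, that guard is just this (assuming a reversed-Z platform like D3D11, where a depth of 1.0 sits at the near plane):

    float depth = clipPos.z / clipPos.w;
    // crude guard: when the hit point crosses the near plane the projected
    // depth can come out negative, so snap it to the near-plane value
    if (depth < 0.0)
        depth = 1.0;
    ret.depth = depth;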