Hello everyone,
I am currently writing a raymarching shader. So far, I've gotten the shader to work just fine, intersection-wise, from a single camera position. Here are some screenshots to show what's up:
I am able to make a variety of raymarched shapes that properly intersect with vertex-based geometry in-world, and any single-viewpoint (non-VR) camera can look at this raymarched geometry without issue.
However, once I put on the Index headset, the viewport shows a strange mismatch: there's a vertical bar down the middle of everything, and the raymarcher's depth detection is offset from where my avatar's hands actually are.
Right now, I have some rather messy code in the vertex shader for transforming the vertices from object space to clip space:
v2f vert (appdata v)
{
    v2f o;

    // Object space -> clip space.
    o.vertex = UnityObjectToClipPos(v.vertex);

    // Manual screen-space UV for sampling the depth texture later with
    // tex2Dproj. This is the same math as UnityCG's ComputeGrabScreenPos,
    // minus its single-pass-stereo remapping.
    #if UNITY_UV_STARTS_AT_TOP
    float scale = -1.0;
    #else
    float scale = 1.0;
    #endif
    o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y * scale) + o.vertex.w) * 0.5;
    o.uvgrab.zw = o.vertex.zw;

    o.uv = TRANSFORM_TEX(v.uv, _MainTex);

    // Ray origin: the camera position brought into object space.
    o.ro = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1));

    // The ray enters the volume at the object-space vertex position.
    o.hitPos = v.vertex;

    return o;
}
I think this is where the problems might lie, but I'm not sure. I know the functions described in Unity - Manual: Stereo rendering sort of explain how a stereoscopic camera operates in a fundamentally different way, and that the coordinates need to be mapped differently, but it is absolutely not clear to me what the inner workings of those black boxes are.
How should I better transform these coordinates so I can appropriately pass data to the fragment shader in a manner that’s properly mapped to the VR headset screens?
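Here's my best guess so far from reading that manual page and poking around UnityCG.cginc. It's completely untested, and I may be misreading the macros: the idea is to let ComputeGrabScreenPos do the per-eye remap instead of my manual math, add the single-pass/instanced stereo macros, and take the ray origin from the per-eye camera position instead of _WorldSpaceCameraPos.

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID // needed for single-pass instanced stereo
};

struct v2f
{
    float4 vertex : SV_POSITION;
    float4 uvgrab : TEXCOORD0;
    float2 uv : TEXCOORD1;
    float3 ro : TEXCOORD2;
    float4 hitPos : TEXCOORD3;
    UNITY_VERTEX_OUTPUT_STEREO // carries the eye index to the fragment shader
};

v2f vert (appdata v)
{
    v2f o;
    UNITY_SETUP_INSTANCE_ID(v);
    UNITY_INITIALIZE_OUTPUT(v2f, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);

    o.vertex = UnityObjectToClipPos(v.vertex);

    // Same flip/offset math as my manual version, but with the per-eye
    // scale/offset applied when UNITY_SINGLE_PASS_STEREO is defined.
    o.uvgrab = ComputeGrabScreenPos(o.vertex);

    o.uv = TRANSFORM_TEX(v.uv, _MainTex);

    // As far as I can tell, _WorldSpaceCameraPos is a single combined
    // position in stereo; the per-eye positions live in
    // unity_StereoWorldSpaceCameraPos.
    #if defined(USING_STEREO_MATRICES)
    float3 camPos = unity_StereoWorldSpaceCameraPos[unity_StereoEyeIndex];
    #else
    float3 camPos = _WorldSpaceCameraPos;
    #endif
    o.ro = mul(unity_WorldToObject, float4(camPos, 1)).xyz;

    o.hitPos = v.vertex;
    return o;
}

Does that look like the right direction, or am I misunderstanding how the per-eye textures are laid out?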
It looks like the raymarched shapes still remain in their proper spots, so my assumption is that the fragment shader is handling that part alright. However, I have a feeling the culprit is _CameraDepthTexture not being properly mapped to the viewport, which would throw off the depth detection against the surrounding vertex geometry. Right now, to get the camera depth data into the fragment shader, I use:
float sceneDepth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.uvgrab)).r);
I think i.uvgrab is probably the thing that needs to be rescaled, since everything intersects just fine when viewed with a non-VR camera.
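In case it helps narrow things down, here's my guess at the fragment side if I keep the manual uvgrab math from my original vertex shader and do the rescale at sample time instead. (If the vertex shader switches to ComputeGrabScreenPos as in the sketch above, I believe this would apply the per-eye offset twice, so it would have to be one or the other.) Again untested; the macros are just what I found in HLSLSupport.cginc and UnityCG.cginc:

// Declared with the macro so it also works as a Texture2DArray under
// single-pass instanced stereo.
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

fixed4 frag (v2f i) : SV_Target
{
    // Makes unity_StereoEyeIndex valid under instanced stereo; needs
    // UNITY_VERTEX_OUTPUT_STEREO in v2f as in the sketch above.
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);

    // Perspective divide first, then remap the [0,1] screen UV into this
    // eye's half of the double-wide target. UnityStereoTransformScreenSpaceTex
    // is a passthrough outside of single-pass (double-wide) stereo.
    float2 screenUV = UnityStereoTransformScreenSpaceTex(i.uvgrab.xy / i.uvgrab.w);

    // Linear eye depth of the vertex geometry already in the depth buffer.
    float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV));

    // ...raymarch as before and compare the march distance against sceneDepth...
    return fixed4(sceneDepth.xxx, 1); // placeholder output
}

My understanding is that UNITY_DECLARE_DEPTH_TEXTURE / SAMPLE_DEPTH_TEXTURE also cover the Texture2DArray case under instanced stereo, but I'd appreciate a sanity check on that.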