Infinite projection in VR

The case is fairly simple, but it’s really hard to figure out how single pass stereo actually works in Unity: which variables differ between the two eyes and which are shared.

I have a sphere centered on the camera whose contents should appear infinitely far away. Re-centering it on the camera in OnPreCull works fine for normal rendering, but in VR the actual size of the sphere becomes visible. So I changed the world space position calculation to:

pos_world = float4(mul((float3x3)unity_ObjectToWorld, input.pos_object) + _WorldSpaceCameraPos, 1.0);
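In context, that line sits in the vertex function roughly like this (a minimal sketch; the struct and semantic names are placeholders, not the actual shader):

struct appdata { float3 pos_object : POSITION; };
struct v2f { float4 pos_clip : SV_POSITION; };

v2f vert(appdata input)
{
    v2f output;
    // Rotate/scale into world space, then add the camera position so the
    // sphere never translates relative to the viewer.
    float4 pos_world = float4(mul((float3x3)unity_ObjectToWorld, input.pos_object) + _WorldSpaceCameraPos, 1.0);
    output.pos_clip = mul(UNITY_MATRIX_VP, pos_world);
    return output;
}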

That should center it on _WorldSpaceCameraPos for each eye, resulting in an infinite projection. It doesn’t, however, which in my opinion means that _WorldSpaceCameraPos is always set to the “center eye” rather than to each eye’s actual position.

At first I couldn’t find the inverse view matrix, but it seems to be unity_CameraToWorld. So:

// Per-eye camera position: the translation column of the inverse view matrix.
float3 _ActualWorldSpaceCameraPos = float3(unity_CameraToWorld[0][3], unity_CameraToWorld[1][3], unity_CameraToWorld[2][3]);
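With that, the recentring line becomes (same sketch as above, just swapping in the per-eye position):

// Re-center on the per-eye camera position instead of _WorldSpaceCameraPos:
float4 pos_world = float4(mul((float3x3)unity_ObjectToWorld, input.pos_object) + _ActualWorldSpaceCameraPos, 1.0);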

Maybe update this page with things like the existence of unity_CameraToWorld? Maybe with a hint that _WorldSpaceCameraPos does not change between eyes in single pass stereo? Maybe change _Object2World to unity_ObjectToWorld? (That was changed about a year ago.)

Sorry for the rant, but this turns a 5 minute change into a 1 hour search.

Anyway, for others, that’s how you calculate a single pass stereo compliant camera position.
Edit: Read below, this isn’t the solution.


I’ve noticed this differs between Unity versions and the VR device used. Sometimes _WorldSpaceCameraPos is the per-eye projection position, sometimes it’s the in-game camera position. This is definitely a bug; it’s changed back and forth multiple times for some platforms. :(

Look at the UnityShaderVariables.cginc file in the shader source. This is where all of these are defined.

#if defined(UNITY_SINGLE_PASS_STEREO) || defined(UNITY_STEREO_INSTANCING_ENABLED) || defined(UNITY_STEREO_MULTIVIEW_ENABLED)
#define USING_STEREO_MATRICES
#endif

#if defined(USING_STEREO_MATRICES)
    #define glstate_matrix_projection unity_StereoMatrixP[unity_StereoEyeIndex]
    #define unity_MatrixV unity_StereoMatrixV[unity_StereoEyeIndex]
    #define unity_MatrixInvV unity_StereoMatrixInvV[unity_StereoEyeIndex]
    #define unity_MatrixVP unity_StereoMatrixVP[unity_StereoEyeIndex]

    #define unity_CameraProjection unity_StereoCameraProjection[unity_StereoEyeIndex]
    #define unity_CameraInvProjection unity_StereoCameraInvProjection[unity_StereoEyeIndex]
    #define unity_WorldToCamera unity_StereoWorldToCamera[unity_StereoEyeIndex]
    #define unity_CameraToWorld unity_StereoCameraToWorld[unity_StereoEyeIndex]
    #define _WorldSpaceCameraPos unity_StereoWorldSpaceCameraPos[unity_StereoEyeIndex]
#endif

#define UNITY_MATRIX_P glstate_matrix_projection
#define UNITY_MATRIX_V unity_MatrixV
#define UNITY_MATRIX_I_V unity_MatrixInvV
#define UNITY_MATRIX_VP unity_MatrixVP
#define UNITY_MATRIX_M unity_ObjectToWorld

#define UNITY_MATRIX_MVP mul(unity_MatrixVP, unity_ObjectToWorld)
#define UNITY_MATRIX_MV mul(unity_MatrixV, unity_ObjectToWorld)
#define UNITY_MATRIX_T_MV transpose(UNITY_MATRIX_MV)
#define UNITY_MATRIX_IT_MV transpose(mul(unity_WorldToObject, unity_MatrixInvV))

When doing single pass stereo these do indeed have separate values for each eye index, though that doesn’t necessarily mean the data they hold is actually unique.
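A quick way to check what actually differs per eye is to tint each eye with unity_StereoEyeIndex (a debugging sketch; the v2f struct is assumed from whatever shader you’re testing):

// Debug: render the left eye (index 0) red and the right eye (index 1)
// green to confirm the stereo variables are active.
fixed4 frag(v2f i) : SV_Target
{
#if defined(USING_STEREO_MATRICES)
    return unity_StereoEyeIndex == 0 ? fixed4(1, 0, 0, 1) : fixed4(0, 1, 0, 1);
#else
    return fixed4(1, 1, 1, 1); // non-stereo fallback
#endif
}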

Also note that UNITY_MATRIX_V and unity_WorldToCamera are different, intentionally so. unity_WorldToCamera matches the in-editor camera position, whereas UNITY_MATRIX_V (and unity_MatrixV) transform world space to render view space, which, while similar, isn’t guaranteed to match.
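For example, you can pull the per-eye render position straight out of the translation column of UNITY_MATRIX_I_V (a sketch; under USING_STEREO_MATRICES this resolves to the per-eye unity_StereoMatrixInvV):

// Eye position in world space = translation column of the inverse view matrix.
float3 eyePosWS = float3(UNITY_MATRIX_I_V[0][3], UNITY_MATRIX_I_V[1][3], UNITY_MATRIX_I_V[2][3]);
// Or, equivalently, with an HLSL matrix swizzle:
float3 eyePosWS2 = UNITY_MATRIX_I_V._m03_m13_m23;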

And yeah, I agree with this. A lot of the shader documentation about built-in variables and functions is out of date and occasionally simply wrong.


Yes, we all understand that documentation is hard to maintain, but debugging shaders is that much harder than debugging scripts, especially when we’re talking about a 3 cm difference in camera position.

So, actually, I still get the feeling my sphere is not at infinite distance when using unity_CameraToWorld, but it’s difficult to be sure. I’m now thinking this might be related to the projection matrices being skewed in VR, so my straight mapping approach would not put things at infinite distance.

The whole thing is for a sky setup consisting of a number of components. The high clouds are currently mapped onto a virtual sphere using a ray/sphere intersection, and they feel like they’re at the right distance. That’s a pretty much bulletproof way of doing things, but I can’t apply it to everything. Plus, intersecting a view ray with a sphere at infinite distance is usually fairly pointless, since it boils down to just the view direction, unless your projection matrix is skewed.
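For reference, the cloud mapping boils down to something like this (a sketch of a standard ray/sphere intersection with the camera inside the sphere; the function name and parameters are illustrative, not the actual shader):

// Intersect a normalized view ray with a sphere and return the far hit.
// Assumes the ray origin is inside the sphere, so the far hit always exists.
float3 CloudSphereHit(float3 rayOriginWS, float3 rayDirWS, float3 sphereCenterWS, float radius)
{
    float3 oc = rayOriginWS - sphereCenterWS;
    float b = dot(oc, rayDirWS);
    float c = dot(oc, oc) - radius * radius;
    float t = -b + sqrt(max(b * b - c, 0.0)); // far root of t^2 + 2bt + c = 0
    return rayOriginWS + t * rayDirWS;        // world-space point on the sphere
}

As the radius grows toward infinity, the hit direction converges to the view direction itself, which is why the intersection is only worth doing for a sphere at a finite distance (or with a skewed projection).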

So, this might turn into an actual topic instead of just a rant on the documentation. ;) I’ll work out some code and post it here.

Edit: Just figured out that the skewed projection matrices shouldn’t matter much. Still, it doesn’t feel like it’s at infinite distance, so I’m having a look at UNITY_MATRIX_I_V.

Well, after some further testing it seems my mind is just playing tricks on me, or the interocular distance is simply not set up properly. The parts projected to infinity visually read as further away than 100, 200, or 400 meters, while the original sphere is only 100 meters in radius. So the math does seem correct on the Unity side.

I’ve found that when your play space is relatively small (say 20-ish meters across), if you make the sky sphere 150-ish meters across, your brain just gives up and you can get away with it.