The case is fairly simple, but it's really hard to figure out how single pass stereo actually works in Unity: which variables differ between the two eyes and which are shared.
I have a sphere centered around the camera, and its contents should appear infinitely far away. Centering it around the camera in OnPreCull works fine for normal rendering, but in VR the actual size of the sphere is visible. So I changed the world space position calculation to:
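Something along these lines in the vertex shader (a sketch of the idea rather than my exact code):

```hlsl
#include "UnityCG.cginc"

struct v2f
{
    float4 pos : SV_POSITION;
};

// Sketch, not the literal code from the post: re-center the sphere on the camera
// position per vertex so it follows the camera and should read as infinitely far away.
v2f vert (appdata_base v)
{
    v2f o;
    // Apply only the object's rotation/scale, then add the camera position,
    // assuming _WorldSpaceCameraPos holds the per-eye position.
    float3 worldPos = _WorldSpaceCameraPos + mul((float3x3)unity_ObjectToWorld, v.vertex.xyz);
    o.pos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
    return o;
}
```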
That should center it on _WorldSpaceCameraPos for each eye, resulting in an infinite projection. It doesn't, however, which in my opinion means that _WorldSpaceCameraPos is always set to the "center eye" position.
At first I couldn’t find the inverse view matrix, but it seems it’s unity_CameraToWorld. So:
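As a sketch, that change looks like this (the per-eye position is read from the translation column of unity_CameraToWorld):

```hlsl
// Sketch: read the camera position from the inverse view matrix instead of
// _WorldSpaceCameraPos, in the hope that it is set per eye in single pass stereo.
float3 eyePos   = unity_CameraToWorld._m03_m13_m23;   // translation column = camera position
float3 worldPos = eyePos + mul((float3x3)unity_ObjectToWorld, v.vertex.xyz);
o.pos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
```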
Maybe update this page with things like the existence of unity_CameraToWorld? Maybe with a hint that _WorldSpaceCameraPos does not change between eyes in single pass stereo? Maybe change _Object2World to unity_ObjectToWorld? (That was changed about a year ago.)
Sorry for the rant, but this turns a 5-minute change into a 1-hour search.
Anyway, for others, that’s how you calculate a single pass stereo compliant camera position.
Edit: Read below, this isn’t the solution.
I've noticed this differs between Unity versions and the VR device used. Sometimes _WorldSpaceCameraPos is the per-eye projection position, sometimes it's the in-game camera position. This is definitely a bug; it's changed back and forth multiple times for some platforms.
Look at the UnityShaderVariables.cginc file in the shader source. This is where all of these are defined.
When doing single pass stereo it does indeed have separate values for each eye index, though it doesn’t necessarily mean the data they hold is actually unique.
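From memory the relevant part looks roughly like this (paraphrased, so check the cginc shipped with your Unity version):

```hlsl
// Paraphrased from UnityShaderVariables.cginc; exact contents differ per Unity version.
#if defined(USING_STEREO_MATRICES)
    float4x4 unity_StereoMatrixV[2];
    float4x4 unity_StereoMatrixInvV[2];
    float4x4 unity_StereoCameraToWorld[2];
    float3   unity_StereoWorldSpaceCameraPos[2];

    // The familiar names are remapped to the current eye's entry:
    #define UNITY_MATRIX_V       unity_StereoMatrixV[unity_StereoEyeIndex]
    #define UNITY_MATRIX_I_V     unity_StereoMatrixInvV[unity_StereoEyeIndex]
    #define unity_CameraToWorld  unity_StereoCameraToWorld[unity_StereoEyeIndex]
    #define _WorldSpaceCameraPos unity_StereoWorldSpaceCameraPos[unity_StereoEyeIndex]
#endif
```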
Also note that UNITY_MATRIX_V and unity_WorldToCamera are different, intentionally so. unity_WorldToCamera matches the in-editor camera position, whereas UNITY_MATRIX_V (and unity_MatrixV) transform from world space to the render view space, which, while similar, isn't guaranteed to match.
And yeah, I agree with this. A lot of the shader documentation about built-in variables and functions is out of date and occasionally simply wrong.
Yes, we all understand that documentation is hard to maintain, but the whole debugging process in shaders is that much harder compared to scripts. Especially if we're talking about a 3 cm difference in camera position.
So, actually, I still get the feeling my sphere is not at infinite distance when using unity_CameraToWorld, but it's difficult to be sure. I'm now thinking this might be related to the fact that the projection matrices are skewed in VR, so my straight mapping approach won't put things at infinite distance.
The whole thing is for a sky setup consisting of a number of components. The high clouds are currently mapped onto a virtual sphere using a ray/sphere intersection, and they feel like they're at the right distance. That's a pretty much bulletproof way of doing things, but I can't apply it to everything. Plus, doing a view ray intersection with a sphere at infinite distance is usually fairly pointless, since it boils down to just the view direction, unless your projection matrix is skewed. The cloud mapping itself is a standard ray/sphere intersection, as in the sketch below.
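For reference, something like this (a sketch; sphereCenter and sphereRadius are my own names for the virtual sphere's parameters):

```hlsl
// Sketch: intersect the per-pixel view ray with a large virtual sphere and use the
// hit point to sample the cloud layer. Assumes the ray starts inside the sphere.
float3 IntersectCloudSphere(float3 rayOrigin, float3 rayDir, float3 sphereCenter, float sphereRadius)
{
    float3 toCenter = sphereCenter - rayOrigin;
    float  b = dot(rayDir, toCenter);                         // rayDir is assumed normalized
    float  c = dot(toCenter, toCenter) - sphereRadius * sphereRadius;
    float  t = b + sqrt(max(b * b - c, 0.0));                 // far intersection along the ray
    return rayOrigin + rayDir * t;
}
```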
So, this might turn into an actual topic instead of just a rant on the documentation. I'll work out some code and post it here.
Edit: Just figured out that the skewed projection matrices shouldn't matter much. Still, I feel it's not at infinite distance, so I'm having a look at UNITY_MATRIX_I_V.
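As a sketch, that would be:

```hlsl
// Sketch: per-eye camera position taken from the inverse of the actual render view matrix.
float3 eyePos   = UNITY_MATRIX_I_V._m03_m13_m23;
float3 worldPos = eyePos + mul((float3x3)unity_ObjectToWorld, v.vertex.xyz);
o.pos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
```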
Well, after some further testing it does seem that my mind is just playing tricks on me, or the interocular distance is just not set up properly. The parts projected to infinity visually appear further away than 100 or 200 or 400 meters, while the original sphere is only 100 meters in radius. So the math does seem correct on the Unity side.
I've found that when your play space is relatively small, say 20-ish meters across, if you make the sky sphere 150-ish meters across, your brain just gives up and you can get away with it.