…because after lots of pain and suffering, I finally went and measured everything in the shader pipeline. The Unity docs say this about sampling the camera depth textures (Unity - Manual: Cameras and depth textures):
“LinearEyeDepth(i): given high precision value from depth texture i, returns corresponding eye space depth.”
Coming from many years of OpenGL, I read “eye space” and assume it means “view space”.
However, as far as I can tell deductively, they actually return a value relative to the near-plane of the current camera. This makes very little difference at distances of meters (Unity’s default near-plane is 0.3m), but a huge difference when you have objects close to the viewer - e.g. anything right in front of the player, occupying a large amount of the screen.
(For a long time, I’d been wondering why my depth values looked almost-but-not-quite right, triple-checking and quadruple-checking all my math. I’d rewritten the projection, sampling, and distance calculations over and over, using different Unity magic functions and features, and just kept getting the exact same “almost, but not quite” correct data.)
LinearEyeDepth(depthTexture) is absolutely the same as view space depth, within floating point precision error at least. You can test with this shader, which samples the depth texture and compares it against the surface’s own view space depth passed from the vertex shader (which is what COMPUTE_EYEDEPTH calculates).
Eye vs View Depth
To see anything but black, you’ll have to change the Depth Difference Scale to >100000.
Shader "Unlit/EyeVsViewDepth"
{
Properties
{
[PowerSlider(2.0)] _DiffScale("Depth Difference Scale", Range(1,100000)) = 1
}
SubShader
{
Tags { "Queue"="Geometry" }
LOD 100
Pass
{
Tags { "LightMode" = "ForwardBase" }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
};
struct v2f
{
float4 vertex : SV_POSITION;
float4 projPos : TEXCOORD0;
};
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
float _DiffScale;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.projPos = ComputeScreenPos (o.vertex);
COMPUTE_EYEDEPTH(o.projPos.z);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// raw depth from the depth texture
float depthZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos));
// linear eye depth recovered from the depth texture
float sceneZ = LinearEyeDepth(depthZ);
// linear eye depth from the vertex shader
float fragZ = i.projPos.z;
// difference between sceneZ and fragZ
float diff = sceneZ - fragZ;
return float4(
saturate(-diff * _DiffScale), // red if fragZ is closer than sceneZ
saturate( diff * _DiffScale), // green if sceneZ is closer than fragZ
0.0, 1.0);
}
ENDCG
}
}
FallBack "VertexLit"
}
Unity uses standard OpenGL view space, which means -Z is forward, hence the negative sign in the macro. And the macro just calls the function to convert from object space vertex position to world space, and then to view space.
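For reference, the relevant bits of UnityCG.cginc look roughly like this (paraphrased from memory; check the include files shipped with your Unity version for the exact definitions):

// view space is -Z forward, hence the negation
#define COMPUTE_EYEDEPTH(o) o = -UnityObjectToViewPos( v.vertex ).z

// object space -> world space -> view space
inline float3 UnityObjectToViewPos( in float3 pos )
{
    return mul(UNITY_MATRIX_V, mul(unity_ObjectToWorld, float4(pos, 1.0))).xyz;
}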
Even the COMPUTE_DEPTH_01 macro and Linear01Depth(depthTexture) function should match nearly perfectly. The only situation I know of where Unity’s code doesn’t account for the near plane is the UNITY_Z_0_FAR_FROM_CLIPSPACE macro used for fog, and even then only in the specific situation of OpenGL using a reversed Z depth … which AFAIK it never does … so it’s never actually a problem.
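And for completeness, the conversion functions themselves are just a remap through _ZBufferParams; there’s no near-plane offset added anywhere. Again paraphrased from UnityCG.cginc, with the _ZBufferParams comment showing the non-reversed-Z layout:

// _ZBufferParams: x = 1 - far/near, y = far/near, z = x/far, w = y/far
// Z buffer value to linear 0..1 depth (equals eye depth divided by the far plane)
inline float Linear01Depth( float z )
{
    return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}
// Z buffer value to linear eye depth, in view space units
inline float LinearEyeDepth( float z )
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}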
If you’re having a problem with the values not matching, something else must be off.
That’s what I expected - and when I went digging in the code I could only find exactly what you pasted above - but, measurably, when I added the Camera’s near-plane distance the calculations were exactly correct, and without it they weren’t. I got this down to circa 5 lines of code at one point while I thought I was going insane :).
I haven’t touched that code in > 6 months - it worked with the above adjustment, so I just shrugged and moved on. I couldn’t find any documentation from Unity explicitly defining these terms, so I figured it was a dead-end.
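A rough sketch of the kind of adjustment I mean (reconstructed from memory rather than the original code; _ProjectionParams.y holds the camera’s near-plane distance):

// linear eye depth as Unity computes it
float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
// the empirical fix: treat the value as if it were measured from the near plane
float adjustedZ = sceneZ + _ProjectionParams.y; // near-plane distance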
If @wwaero is seeing similar offsets in the data, maybe we can narrow down what we’ve done differently that’s causing this?
I was only testing on Windows - I’m pretty sure (95%) on D3D11 - with Unity 2018 and 2019. It really surprised me, and I spent many days narrowing it down to this one discrepancy. I even created new projects and copy/pasted complete examples from other people’s depth buffer tutorials on the web, and theirs had the exact same problem.
So I ended up thinking it was either a driver bug or a “by design” feature of Unity. The latter seemed more likely (nothing exotic hardware-wise; I was working with NVIDIA GTX 10xx cards).
…it’s still possible that it was something hilariously simple, like me having some code somewhere that munged the buffer, but … the fact that I could show the same problem with 3rd party examples eventually convinced me otherwise.
(I’m not currently working on that project, and it would take me a lot of time to dig back into it, otherwise I’d go back and try to rebuild the shortest example I made before)
Just for anybody looking into this: here’s what I found after some testing with Shader Graph, given the lack of specific documentation anywhere in the Unity manual…
(Warning: this can change for each render pipeline, so the following applies only to HDRP - specifically because HDRP does camera-relative rendering, i.e. it represents everything as seen from the camera to avoid precision errors.)
View space (for example, using a Position node and setting the space to “View”) will return the position of that interpolated vertex in meters from the camera position (not taking the near plane into account). But be careful: this uses OpenGL conventions, so -Z is forward. (more on this)
Eye space (for example, when you get the scene depth and set it to Eye sampling) will return the depth in meters of the opaque rendered objects in the scene, and it is expressed exactly the same as View space (I think they should use the same name instead of confusing people with “eye” and “view”…). The near plane has no effect here; see the sketch further down this post.
BUT if you switch the Scene Depth node’s sampling from Eye to Linear01… then you get the depth normalized from 0 to 1, going from the CAMERA position to the far plane. Confusingly, I’d have expected 0 to be at the near plane… Changing the near plane changes nothing, but changing the far plane does.
Camera space is, in general, expressed normalized from 0 to 1 inside the frustum: 0 at the near plane and 1 at the far plane, with x and y going from 0 to 1 across the width and height of the frustum. But I haven’t tested this in HDRP, so I’m not 100% sure about it.
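A quick way to see the View/Eye equivalence described above is a Custom Function node; here’s a minimal sketch (the function name and inputs are just illustrative):

// Inputs: ViewPos from a Position node set to View space,
//         EyeDepth from a Scene Depth node set to Eye sampling.
void CompareEyeAndView_float(float3 ViewPos, float EyeDepth, out float Diff)
{
    // View space is -Z forward, so the fragment's own eye depth is -ViewPos.z.
    // On opaque geometry that is present in the depth texture, Diff should be ~0.
    Diff = EyeDepth - (-ViewPos.z);
}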
In Unity terminology, Camera space vs View space differ by Z axis convention and that’s it. View space being -Z forward as you noted above, and Camera space being +Z forward.
In HDRP, “camera relative space” is simply called World Space, which is why there’s also Absolute World Space, which corresponds to the Unity scene world position.
The whole “Eye Space” thing seems to come from legacy OpenGL naming conventions, circa early 2000s, and continues to live on within Unity’s shader code even into the HDRP.
So if HDRP is relative to the camera, what is depth relative to for URP with eye space? I assumed eye space was also relative to the camera, so it returns the distance from the camera to the opaque objects in the scene?
That depends on which depth you’re reading about. Raw Z depth is something else entirely: it’s not strictly in any of the spaces listed above, but rather in normalized clip space. For perspective camera views it’s a non-linear value running from 1.0 at the near plane to 0.0 at the far plane, at least on anything not using OpenGL. OpenGL’s raw depth isn’t even in clip space; it’s kind of its own thing. It closely matches the raw depth of the other APIs, but it doesn’t match OpenGL clip space, which is different from every other API.
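In shader code (built-in render pipeline macros shown here, for illustration) that platform difference is hidden behind UNITY_REVERSED_Z and the linearisation helpers; a minimal sketch, with uv assumed to be a screen-space UV:

// raw depth straight from the depth texture; its meaning depends on the platform
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);

#if defined(UNITY_REVERSED_Z)
    // D3D11/12, Metal, Vulkan, consoles: 1.0 at the near plane, 0.0 at the far plane
#else
    // OpenGL / OpenGL ES: 0.0 at the near plane, 1.0 at the far plane
#endif

// _ZBufferParams is set up per platform, so the linearised result is
// camera-relative view depth either way
float eyeDepth = LinearEyeDepth(rawDepth);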
LinearEyeDepth and Linear01Depth are indeed camera-relative depth though, because URP and the built-in rendering path, along with basically all real-time rendering, ultimately transform everything to be camera view relative.
The difference is that HDRP sends positions to the GPU already relative to the camera position, whereas most others (including URP and BIRP) send the positions in world space. The benefit is higher precision: the further away from 0,0,0 you get, the less precision floating point numbers have, and the more artifacts you end up with in vertex positions.
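Conceptually (not actual HDRP source), the difference looks like this, with objectPos standing in for a vertex position:

// World space path (built-in / URP): large world coordinates go through the matrix math,
// losing float precision when objects are far from the world origin.
float3 worldPos = mul(unity_ObjectToWorld, float4(objectPos, 1.0)).xyz;

// Camera-relative path (HDRP, conceptually): subtract the camera position first,
// so the values the GPU works with stay small and keep their precision.
float3 cameraRelativePos = worldPos - _WorldSpaceCameraPos;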
So in Shader Graph with the depth node, if you choose eye space units (I presume eye means camera space), is this always a positive value, or is it platform dependent, where sometimes it might use negative Z? Or does Shader Graph return consistent results on our behalf without us having to worry about the platform?
The Scene Depth node set to “Eye” (or “Linear 01”) will always be a positive value, regardless of the platform or rendering pipeline you’ve chosen. 0.0 will always be at the camera, and also never visible, since anything closer than the camera’s near plane will be clipped. Values for the “Eye” depth will also always be in world space units; just remember that depth and distance are not the same thing.
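To illustrate the depth vs distance point: eye depth is measured along the camera’s forward axis, so the straight-line distance to a surface is larger for pixels away from the screen centre. A small sketch, with viewDir assumed to be the normalised view space direction through the pixel and eyeDepth the sampled scene depth:

// Reconstruct the view space position from eye depth, then measure its length.
// viewDir.z is negative (view space is -Z forward), so dividing by -viewDir.z
// scales the ray until its depth component matches eyeDepth.
float3 viewPos = viewDir * (eyeDepth / -viewDir.z);
float distanceToSurface = length(viewPos); // >= eyeDepth; equal only along the view axis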
Technically the “Raw” option will also always be a positive value between 0.0 and 1.0, but the platform determines whether 0.0 is near or far.