I’ve been trying to stop my camera’s world position from having an impact on my depth effect, but I’m very confused about how to do this. I opted to compare the fragment distances between my object and the depth texture in world space, but it makes no difference how deep the object is below the plane.
This is what I am currently using for my vertex and fragment shaders:
v2f vert (appdata_base v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex); //local to clip
o.screenPos = ComputeScreenPos(o.pos); // clip to screen pos
o.worldSpacePos = mul(unity_ObjectToWorld, v.vertex); //world position of vertex
return o;
}
half4 frag(v2f i) : SV_Target {
//sample depth texture
float depth = tex2Dproj(_CameraDepthTexture, i.screenPos).r;
// from perspective to linear distribution
depth = LinearEyeDepth(depth);
//far clip minus near clip
float cameraRange = _ProjectionParams.z - _ProjectionParams.y;
//depth texture to world height
float textureWorldDepth = depth * cameraRange;
// how deep the depth texture fragment is relative to this object's fragment
float worldDepth = i.worldSpacePos.y - textureWorldDepth;
// limit the opacity fade to x units [todo: move to property]
float maxDepthTranslucency = 10;
// gradient colour for depth from 0 to max, e.g. 10 units
depth = clamp(worldDepth / maxDepthTranslucency, 0, 1);
return float4(depth,depth,depth,.5);
}
Here you can see it in action; the colour should change the deeper it goes, up to a 10 unit difference, but nothing happens:
So far so good. You now have the linear depth. Next all you need is to …
Wait, no, that’s not … oh no. The linear depth is already in world space units, there’s no need to scale it. Easy thing to fix, just don’t do that. But, what’s this comment about world height? Height isn’t a thing here yet …
Ah ha. I think I understand what you’re trying to do, and where the misunderstanding is coming from.
Let’s step back and talk about what the depth texture is. I suspect you’re thinking about it as a world space height value, something like the depth below a world space water plane.
It is not.
The depth texture is a representation of the screen space depth of each pixel, depth being the distance along the camera’s forward view axis. In other words it is intrinsically linked to the camera; there’s no separating it. The depth texture itself stores the depth value in a 1.0 to 0.0 range that is non-linear, for various reasons related to perspective projection matrices that aren’t important right now. The LinearEyeDepth function converts that non-linear 1.0 to 0.0 value into world space units. That is to say, if you put a screen facing quad as a child of an unscaled camera game object, the linear depth value for that quad across its entire surface would be the same as the local z value you see on the quad’s transform. This is different than distance btw, see the below image representing the difference between depth and distance for a point in the camera’s view:
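In shader terms, the difference looks roughly like this (a minimal sketch; the variable names are just for illustration):
float3 viewPos = UnityObjectToViewPos(v.vertex); // view (eye) space position of the vertex
float eyeDepth = -viewPos.z; // "depth": distance along the camera's forward axis
float eyeDistance = length(viewPos); // "distance": straight line distance from the camera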
The most common way the depth texture is used is to test it against the current mesh’s depth at that pixel. For that you pass the linear view depth (or eye depth as it’s called in Unity’s shader functions) of the vertices to the fragment shader, and compare that against the linear depth you get from the depth texture. You can look at the built in particle shaders, here’s someone who did a quick breakdown of that: https://www.jordanstevenstechart.com/particle-shading
You don’t need to encapsulate your stuff in the #ifdef SOFT_PARTICLES_ON, that’s just something controlled by quality settings.
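For reference, this is roughly what the soft particle part of the built-in alpha blended particle shader does (paraphrased from memory, so check the downloaded built-in shader source for the exact code):
// in the vertex shader
o.projPos = ComputeScreenPos(o.vertex);
COMPUTE_EYEDEPTH(o.projPos.z); // this vertex's linear eye depth, stored in the unused z
// in the fragment shader
float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
float partZ = i.projPos.z;
float fade = saturate(_InvFade * (sceneZ - partZ)); // 0 where the particle touches the scene geometry
i.color.a *= fade;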
Thank you for explaining rather than just giving code, I much prefer that - I was not aware it was already in world space. I wasn’t really familiar with what eye space actually meant. So that means world coordinates relative to the camera, basically?
This is what I have now:
v2f vert (appdata_base v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex); //local to clip
o.screenPos = ComputeScreenPos(o.pos); // clip to screen pos
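// store this vertex's linear eye depth in the otherwise unused z of screenPos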
COMPUTE_EYEDEPTH(o.screenPos.z);
return o;
}
half4 frag(v2f i) : SV_Target {
float depth = tex2Dproj(_CameraDepthTexture, i.screenPos).r; //sample depth texture
depth = LinearEyeDepth(depth); // from perspective to linear distribution
float fade = saturate (_FadeFactor * (depth-i.screenPos.z) + _MinimumFade);
return float4(1, 1, 1, fade) * _Colour;
}
Don’t know if this is how water fade is usually done, but it works now. I don’t suppose you know of any resources on making water visuals for shaders, at least theory wise? I can only find theory on the vertex functions for waves, but not the visuals of them.
World space scale, but not world coordinates. It’s relative to the camera’s position and orientation. A view space position of (0,0,0) is the camera’s position, and a position of (0,0,-1) is one unit in front of the camera along its forward vector, regardless of what position or orientation the camera has (the in-shader view space z is inverted compared to Unity’s coordinate system).
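As a rough sketch of what that looks like in a shader (names here are just for illustration):
float3 viewPos = mul(UNITY_MATRIX_V, mul(unity_ObjectToWorld, v.vertex)).xyz; // world to view space
float eyeDepth = -viewPos.z; // positive world space units in front of the camera
This should be the same value that COMPUTE_EYEDEPTH(o.screenPos.z) writes out for you.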