Hey, I need some help.
I am writing a Post Processing shader for LWRP.
The post-processing shader is for the LWRP post-process volume.

To create a custom shader for this I've been following: Writing Custom Effects | Package Manager UI website

and I want to make a fog effect shader, but it has to be spherical or range-based.

I've looked online and found some images that demonstrate what I mean:

Consider this plane-based (linear) fog: when the camera turns, something that wasn't in the fog can now be in the fog, and so on.
Unity's fog already does this, and it's the undesired result.

Consider this range-based (non-linear) fog: when the camera turns, the fog stays fixed in world space.

Is it possible to make use of the depth texture to produce this result?

In theory, yes - there are many examples of sampling the depth texture both ways online. The first case is faster and uses the fact that you can take raw values from the texture, so it's the one people generally use by preference.

You take the view direction down the center-line (center pixel on screen), which is correct by definition, and then as you go left/right/up/down in screen space, instead of taking the raw depth (which looks like your first images), you use basic trig to calculate the length of the ray through that pixel.
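As a sketch of what that looks like in a PostProcessing-v2 style effect (the template from the "Writing Custom Effects" page linked above): `_CameraDepthTexture`, `LinearEyeDepth` and the texture macros are Unity's standard ones, but the fog property names and the projection-matrix trick for the field of view are my own assumptions, so treat this as illustrative rather than definitive.

```hlsl
TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
float _FogStart;   // assumed property: distance where fog begins
float _FogEnd;     // assumed property: distance where fog is fully opaque
float4 _FogColor;  // assumed property

float4 Frag(VaryingsDefault i) : SV_Target
{
    float4 col = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, i.texcoord);

    // Raw depth -> linear eye depth (distance along the view axis,
    // i.e. the "plane-based" value).
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, i.texcoord);
    float eyeDepth = LinearEyeDepth(rawDepth);

    // Offset of this pixel from the screen centre, in [-1, 1].
    float2 ndc = i.texcoord * 2.0 - 1.0;

    // unity_CameraProjection._m11 is 1/tan(verticalFov/2).
    float tanHalfFovY = 1.0 / unity_CameraProjection._m11;
    float tanHalfFovX = tanHalfFovY * (_ScreenParams.x / _ScreenParams.y);

    // Similar triangles: the sideways offset grows linearly with depth,
    // and the true distance is the hypotenuse (Pythagoras).
    float wx = ndc.x * tanHalfFovX * eyeDepth;
    float wy = ndc.y * tanHalfFovY * eyeDepth;
    float rayLength = sqrt(wx * wx + wy * wy + eyeDepth * eyeDepth);

    // Range-based fog factor from the true (spherical) distance.
    float fog = 1.0 - saturate((_FogEnd - rayLength) / (_FogEnd - _FogStart));
    col.rgb = lerp(col.rgb, _FogColor.rgb, fog);
    return col;
}
```

Because `rayLength` is a true distance from the camera position, the fog boundary is a sphere and doesn't shift when the camera rotates.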

(PS: your first image is hugely incorrect, not that it matters).

In practice - last time I tried to do this, the results didn't quite work visually, and I couldn't figure out what was going wrong in Unity. The math was correct (checked it in a different OpenGL engine, and checked my hand-written version against other people's versions online), but in Unity it kept displaying slightly wrong curvature that should NOT have changed with view angle, but did (slightly). I suspected something was going wrong in my conversion out of NDC / clip space (1:1:1) and into final screen space (that was the most likely thing that would explain the slight curvature I saw), but couldn't seem to get it right.


Off the top of my head:

1. Dnp = distance to the near plane
2. Wnp = left/right offset across the screen (measured on the near plane) of the pixel you're rendering
3. Dx = depth of the pixel you're rendering along the center-line direction (note: this is the "linear" depth as you described it)
4. Wx = left/right offset of that same point at depth Dx

=> Wx = Wnp * (Dx / Dnp) (similar triangles: the offset scales with depth)
=> Lx = true length along the ray = sqrt(Wx*Wx + Dx*Dx)

i.e. you're using two geometry rules. Firstly, similar triangles: two triangles that share an angle, where one is just an extrapolation of the other, have their corresponding sides in the same ratio, so the crosswise offset grows in proportion to the depth.

Secondly, Pythagoras' theorem: if you know 2 sides of a right-angled triangle, then you get the third by squaring them, adding them, then square-rooting.
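Put together, the two rules boil down to a tiny helper, sketched here in HLSL (the parameter names follow the quantities defined above; this is illustrative, not a drop-in function):

```hlsl
// Sketch of the two rules above as one helper.
// wnp: left/right offset of the pixel on the near plane
// dnp: distance to the near plane
// dx:  linear depth of the pixel (distance along the view axis)
float TrueRayLength(float wnp, float dnp, float dx)
{
    // Rule 1 (similar triangles): sideways offset at depth dx.
    float wx = wnp * (dx / dnp);
    // Rule 2 (Pythagoras): the hypotenuse is the true length along the ray.
    return sqrt(wx * wx + dx * dx);
}
```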


Hey, thanks for your proposed solution. I will try to implement this within the next few weeks, and I'll post back the full source code for anyone who is subscribed to this.

There are probably some examples of people implementing it before on the Unity forums (I'd search with Google and "site:forum.unity.com"), if you search for linear depth, linear-vs-actual depth, ray depth, ray length from depth texture - things like that.

I have a solution, but it's not for post-processing - it goes in an ordinary fragment shader.

```
// True world-space distance to the camera (spherical by construction).
float fogDepth = distance(IN.worldPos, _WorldSpaceCameraPos);
float start = 330;  // distance where fog begins
float end = 1250;   // distance where fog is fully opaque
fogDepth = 1 - saturate((end - fogDepth) / (end - start));
```

and you could use a lerp to apply it:

`col.rgb = lerp(col.rgb, fogDayColor, fogDepth);`
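For completeness, here's how that snippet and the lerp might sit together in a fragment shader. This is a sketch: `_FogStart`, `_FogEnd` and `_FogDayColor` as material properties, and the `v2f` fields (`uv`, `worldPos` filled in by the vertex stage), are my assumptions, not part of the original snippet.

```hlsl
sampler2D _MainTex;
float _FogStart;      // assumed property, e.g. 330
float _FogEnd;        // assumed property, e.g. 1250
fixed4 _FogDayColor;  // assumed property

// v2f is assumed to carry uv and a world-space position from the vertex stage.
fixed4 frag(v2f IN) : SV_Target
{
    fixed4 col = tex2D(_MainTex, IN.uv);

    // True world-space distance to the camera: the fog boundary is a
    // sphere around the camera, so it stays fixed as the camera turns.
    float fogDepth = distance(IN.worldPos, _WorldSpaceCameraPos);
    float fog = 1 - saturate((_FogEnd - fogDepth) / (_FogEnd - _FogStart));

    col.rgb = lerp(col.rgb, _FogDayColor.rgb, fog);
    return col;
}
```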