Volumetric Fog Shader - Camera Issue

I am trying to build an infinite fog shader. The fog is applied to a 3D plane.

For the moment I have a Z-depth fog, and I’m running into some issues.

As you can see in the screenshot, there are two views.

The green color is my 3D plane. The problem is the red line: it seems that this line depends on my camera, which is not good, because when I rotate my camera the line is affected by the camera’s position and rotation.

I don’t know where this comes from or how to make my fog limit independent of the camera position.

Shader

        Pass {
            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                uniform float4      _FogColor;
                uniform sampler2D   _CameraDepthTexture;
                float               _Depth;
                float               _DepthScale;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float4 projection : TEXCOORD0;      // projective coords for sampling the depth texture
                    float4 screenPosition : TEXCOORD1;  // w holds the eye-space depth of the fog plane
                };

                v2f vert(appdata_base v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.projection = ComputeGrabScreenPos(o.pos);
                    o.screenPosition = ComputeScreenPos(o.pos);
                    return o;
                }

                sampler2D _GrabTexture;

                float4 frag(v2f IN) : COLOR {
                    // eye-space depth of whatever the camera sees behind the fog plane
                    float4 uv = UNITY_PROJ_COORD(IN.projection);
                    float depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
                    depth = LinearEyeDepth(depth);
                    // fog factor = (scene depth - plane depth + offset) * scale, output as grayscale
                    return saturate((depth - IN.screenPosition.w + _Depth) * _DepthScale);
                }
            ENDCG
        }

Next I want to rotate my fog to get a Y-depth fog, but I don’t know how to achieve this effect.

Because you use _CameraDepthTexture, which is based on your current camera
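
For reference, here is the comparison from the fragment shader above with each term annotated; both quantities are eye-space distances measured along the current camera’s view direction, which is why the fog boundary moves with the camera:

    // depth of the scene behind the fog plane, along the current camera's view axis
    float sceneDepth = LinearEyeDepth(UNITY_SAMPLE_DEPTH(
                           tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projection))));
    // depth of the fog plane itself, also along the camera's view axis (clip-space w)
    float planeDepth = IN.screenPosition.w;
    // both terms are camera-relative, so the resulting fog thickness rotates with the camera
    float fog = saturate((sceneDepth - planeDepth + _Depth) * _DepthScale);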

Ok! That’s why the depth is calculated from the camera position.
But I don’t know how to correct it… It seems there is no way to get the depth from another point. Any idea?

Here is another example. In green you can “see” the object, and the blue line is the fog as it should be.


Any idea? How can I get the depth buffer from another camera and use it to calculate the depth for the current camera? Is that a good approach, or should I use a matrix trick or something else?
I am a bit lost with this… Any help is welcome.

Render depth from a different camera?

Yes, but how do I apply it in my shader with the current camera? I tried different things but I only get strange behaviour.

Any help on how to apply a depth shader from one camera to another camera’s point of view, as explained above?

There’s already a built-in fog and a Global Fog effect in Unity. What are you actually trying to achieve?

There is an explanation of what I am trying to achieve at the beginning of the thread.
I am trying to apply a Z-depth fog to a surface (a plane), as explained above. It’s very close to Global Fog, but it’s “projected” onto a plane.
But the depth is calculated from the current camera, so I don’t always have a perpendicular depth (see the screenshots above).

Okay, hmm, this is kinda tricky, since the depth texture is in camera/screen-space coordinates, and I’m not sure it’s possible to render depth textures from multiple cameras to get different depth buffers, since the Unity depth buffer is shared between cameras (correct me if I’m wrong). Which render path do you use, Forward or Deferred?

For the moment I use deferred, but it would be cool if it worked with both.
That was one solution I had in mind, but if there is another way to obtain a fog like the blue line, I will accept any solution!

Have you tried using WorldPos and CamPos to calculate depth rather than using the DepthTexture?

For the moment, I haven’t found a way to calculate depth manually…

http://forum.unity3d.com/threads/140917-best-way-to-calculate-the-distance-between-camera-and-vertex-position
From that thread I stole the following piece of code, which might be able to solve your problem:
float dist = distance(_WorldSpaceCameraPos, mul(_Object2World, v.vertex));

The following link also seems quite clear:
http://gamedev.stackexchange.com/questions/43442/calculating-distance-from-viewer-to-object-in-a-shader
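
To make that concrete, here is a minimal sketch in the same style as the shader above (not a drop-in replacement): the world-space position is computed in the vertex shader and the camera-to-surface distance per fragment, with no depth texture involved. _Object2World is the built-in object-to-world matrix of that Unity version:

    struct v2f {
        float4 pos      : SV_POSITION;
        float3 worldPos : TEXCOORD0;   // world-space position on the fog plane
    };

    v2f vert(appdata_base v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.worldPos = mul(_Object2World, v.vertex).xyz;
        return o;
    }

    float4 frag(v2f IN) : COLOR {
        // straight-line distance from the camera to this point on the plane,
        // computed without touching _CameraDepthTexture
        float dist = distance(_WorldSpaceCameraPos, IN.worldPos);
        return saturate(dist * _DepthScale);   // crude remap just to visualise the distance
    }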

Can you explain what you mean? Thanks a lot!

Yep, it’s just like what Annihlator said. By taking the vertex depth you’ll get an object-space depth rather than the camera-space depth buffer.

Ok, it seems I misunderstood what Annihlator explained. But what can I do once I have the distance between my plane’s vertex and my camera in my vert function? Should I project my matrix?

Well, then you basically know the amount of air volume, so I’d suggest using that value as the basis for your fog transparency.

For example:
We can expect the returned value to be in the 0–inf range; by remapping it with a formula you’ll be able to determine how much fog should be drawn.

Say, in the case of exponential fog:
float fogFactor = inverselerp(0, 100, dist * dist) should give you something to get started :wink:
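
Note that inverselerp is not a built-in Cg function, so read that line as pseudocode. A rough sketch of the idea, keeping 0 and 100 as the placeholder range from the post above:

    // inverse lerp: where x falls between a and b, clamped to the 0-1 range
    float InverseLerp(float a, float b, float x) {
        return saturate((x - a) / (b - a));
    }

    float4 frag(v2f IN) : COLOR {
        float dist = distance(_WorldSpaceCameraPos, IN.worldPos);
        // squaring the distance gives a faster-than-linear falloff;
        // 0 and 100 are just placeholder values to tune
        float fogFactor = InverseLerp(0.0, 100.0, dist * dist);
        return fogFactor * _FogColor;   // tint by the fog colour declared in the original shader
    }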

I am really sorry, but I don’t get it. :frowning:
I will have the distance between my camera and my plane’s vertex, not between my plane and other objects… How am I supposed to calculate fog from this? Should I use the camera depth?