Linearising the inverted camera depth effect

I have the following code, slightly modified from the original in Unity’s documentation on depth. My aim is to create a Kinect / PrimeSense style effect, where objects appear whiter the closer they are to the camera and darker the further away they are.
With the code below I get a reasonably good effect. However, any object between roughly 0 and 2 units from the camera renders as full white, and looking at the gradient I don’t think the falloff is linear.

How can I display the (inverted) depth falloff linearly?
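
To be explicit, by “linear” I mean something like this straight remap of eye-space distance (0.4 and 8.0 are just the values I’m currently passing to smoothstep):

    brightness = 1 - saturate((eyeDistance - 0.4) / (8.0 - 0.4))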

My code:

Shader "Custom/Render Depth2" {


SubShader {

    Tags { "RenderType"="Opaque" }
    Pass {
	    	Fog { Mode Off }
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#include "UnityCG.cginc"
	
			struct v2f {
				float4 pos : SV_POSITION;
			    float2 depth : TEXCOORD0;
	
			};
	
		v2f vert (appdata_base v) {
	    	v2f o;
	    	o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
	    	UNITY_TRANSFER_DEPTH(o.depth);
	    	
	    	return o;
		}
	
	 
	
		half4 frag(v2f i) : COLOR {
			// this is where the depth inversion take place
			UNITY_OUTPUT_DEPTH(1.0f - (smoothstep(0.4f, 8.0f, i.depth)));
		}
	
		ENDCG
	    }
	}
}

My screenshot:

As you can see from the screenshot, I can’t tell the two cubes apart: their closest points, at 0.5 and 1.0 units from the camera, are both rendered fully white. I’m hoping someone with better knowledge of ShaderLab can help me linearise the (inverted) depth falloff.

I believe you need to use the built-in (though not obviously documented) LinearEyeDepth() function, which will give you distance from the eye in world units. There’s an example in the following blog post: Fun with Shaders and the Depth Buffer | Chris Flynn's Blog and Such
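
For illustration, here’s a rough sketch of what that could look like in the fragment shader above. It assumes i.depth still carries the clip-space z and w written by UNITY_TRANSFER_DEPTH, and that z/w matches the 0–1 value stored in the depth buffer (true on Direct3D; on OpenGL you may need a 0.5 * z/w + 0.5 remap first). The 0.4 / 8.0 near and far values are just the ones from the question’s smoothstep call:

    half4 frag (v2f i) : COLOR {
        // non-linear, depth-buffer-style value (clip-space z divided by w)
        float rawDepth = i.depth.x / i.depth.y;
        // LinearEyeDepth (from UnityCG.cginc) converts that back to linear
        // distance from the camera, in world units
        float eyeDepth = LinearEyeDepth(rawDepth);
        // inverted linear remap: 1 (white) at 0.4 units, 0 (black) at 8 units
        half b = 1.0 - saturate((eyeDepth - 0.4) / (8.0 - 0.4));
        return half4(b, b, b, 1);
    }

Alternatively, you can skip the clip-space round trip entirely: compute the eye-space depth in the vertex shader (for example with COMPUTE_EYEDEPTH, or -mul(UNITY_MATRIX_MV, v.vertex).z), pass it through the v2f struct, and apply the same linear remap in the fragment shader.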