Inverting an AOV Depth Image Sequence back to actual depth

I started using the AOV Depth Image Sequence Recorder to record depth maps from a single camera.
My question is: how can I invert the [0,1] range I currently have in the .png back to the actual distance from the camera? Which parameters can I use for such an operation, and is there a camera intrinsic I can use for this?

Here is my depth map:

Unity’s shader function for converting raw depth buffer values to linear view-space depth is as follows:

// Where:
//  z - raw depth
//  _ZBufferParams - x is (1-far/near), y is (far/near), z is (x/far) and w is (y/far)
inline float LinearEyeDepth( float z )
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}

Using that information, we can simplify the above into a function usable in C#:

float LinearEyeDepth (Camera camera, float depth)
{
    float n = 1f / camera.nearClipPlane;
    float f = (1f / camera.farClipPlane) - n;
    return 1f / (f * depth + n);
}
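To sanity-check the formula, here is a minimal Python sketch of the same math (assuming the .png stores the raw [0,1] depth value, and using Unity's default clip planes of near = 0.3 and far = 1000 purely as example numbers):

```python
def linear_eye_depth(depth, near, far):
    # Mirror of the C# LinearEyeDepth above:
    # n = 1/near, f = 1/far - 1/near, result = 1 / (f * depth + n)
    n = 1.0 / near
    f = (1.0 / far) - n
    return 1.0 / (f * depth + n)

# A raw value of 0 should map to the near plane, and 1 to the far plane:
print(linear_eye_depth(0.0, 0.3, 1000.0))  # ~0.3
print(linear_eye_depth(1.0, 0.3, 1000.0))  # ~1000.0
```

If your recorder writes values where 0 lands on the far plane instead (reversed-Z platforms store depth that way), flip the value with `1 - depth` before converting.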

Keep in mind that this is still view-space depth (distance along the camera's forward axis); you'll need some additional calculations to translate that into the actual 3D distance from the camera, since off-centre pixels view the scene at an angle.
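One way to do that last step, sketched in Python under the assumption of an ideal pinhole camera (vertical field of view and aspect ratio as you'd read them from the Unity camera; the parameter names here are illustrative, not from the recorder's API):

```python
import math

def eye_depth_to_distance(eye_depth, u, v, vfov_deg, aspect):
    # u, v: normalised pixel coordinates in [0, 1], (0.5, 0.5) = image centre.
    # Build the view-space ray direction per unit of forward depth,
    # then scale the depth by the ray length.
    t = math.tan(math.radians(vfov_deg) * 0.5)
    x = (2.0 * u - 1.0) * t * aspect   # horizontal offset per unit depth
    y = (2.0 * v - 1.0) * t            # vertical offset per unit depth
    return eye_depth * math.sqrt(x * x + y * y + 1.0)

# The centre pixel looks straight ahead, so there depth equals distance:
print(eye_depth_to_distance(5.0, 0.5, 0.5, 60.0, 16 / 9))  # 5.0
```

For any pixel away from the centre the distance comes out larger than the view-space depth, which is exactly the correction this step exists to make.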