Dynamic outlines by accessing the depth buffer during rendering?

I have a point cloud shader, and I’m attempting to add dynamic outlines to clusters of points so that it is easier for a user to visually parse the depth and shape of a dense point cloud, but I’m having trouble implementing an approach that feels like it should be straightforward.

This is a simplified mockup with two overlapping points, where one is closer than the other.
Right now they render like this, each with its own individual outline:

What I want, however, is for the opacity of the outline to depend on the depth difference between what's already been drawn and what's currently being drawn (the points are drawn back to front).

The shader is written in GLSL. Right now the main function of the fragment shader looks like this:

void main()
{
    vec2 coord = gl_PointCoord - vec2(0.5);
    float dist = coord.x * coord.x + coord.y * coord.y; // Squared distance from the point's center

    if (dist > 0.25)
    {
        discard; // Trim the square point sprite down to a circle (0.25 = 0.5^2)
    }

    gl_FragColor = Color;
    gl_FragColor.rgb *= 1.0 - dist; // Pillow shading

    if (dist > outlineThreshold)
    {
        gl_FragColor.rgb *= 0.3; // Darken fragments past a distance threshold to form the outline
    }
}

I'd ideally like to just vary how much the outline darkens based on the difference between the depth buffer and the depth value of the current point, but I can't seem to find a way to read from the depth buffer. My understanding is that _CameraDepthTexture is only copied to after rendering and is only really useful for later passes or post-processing, but I need the values that are currently in the depth buffer before overwriting them with new ones.

I'm trying to avoid doing this with a post-processing effect for a few reasons. The main one is that we are targeting low-end machines and deploying via WebGL, so performance is a big consideration. Another is that I have concerns about artifacts muddying how the points are represented. Most of the points are rendered at around 5-10 pixels wide much of the time, and accuracy and precision in how the data is represented are important.

Any help is greatly appreciated (:

There shouldn't be a problem with reading the depth buffer in an object shader; many shaders use a depth difference for things like water murkiness.

It can't be done in exactly the way you describe, as "reading what's already been drawn", but you can definitely get an accurate depth difference.

This is described here; although it's for water, the code may still help you:
Transparency, refraction and depth – Real-time Water Shader in Unity (wordpress.com)

Don't forget to enable the depth texture in C# on your camera! And test with the game running, as the depth texture isn't created until the main camera starts rendering.
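To make the suggestion above concrete, here is a minimal sketch of how the original fragment shader could fade the outline by depth difference. It assumes Unity's built-in pipeline exposes `_CameraDepthTexture`, `_ZBufferParams`, and `_ScreenParams` to a GLSL program, that `Color` and `outlineThreshold` are the uniforms from the original shader, and that the 0.5-unit fade distance is an arbitrary tunable. The `linearEyeDepth` helper mirrors Unity's `LinearEyeDepth`. As noted above, `_CameraDepthTexture` holds the depth of the opaque pass, not of points drawn earlier in the same pass, so the outline fades against other scene geometry rather than against "what was just drawn".

```glsl
uniform sampler2D _CameraDepthTexture; // requires camera.depthTextureMode = DepthTextureMode.Depth; in C#
uniform vec4 _ZBufferParams;           // Unity's depth-linearization constants
uniform vec4 _ScreenParams;            // x = screen width, y = screen height

// Convert a raw [0,1] depth-buffer value to eye-space distance
// (equivalent to Unity's LinearEyeDepth)
float linearEyeDepth(float rawDepth)
{
    return 1.0 / (_ZBufferParams.z * rawDepth + _ZBufferParams.w);
}

void main()
{
    vec2 coord = gl_PointCoord - vec2(0.5);
    float dist = coord.x * coord.x + coord.y * coord.y;

    if (dist > 0.25)
    {
        discard; // Trim down to a circle
    }

    gl_FragColor = Color;
    gl_FragColor.rgb *= 1.0 - dist; // Pillow shading

    if (dist > outlineThreshold)
    {
        // Depth of whatever the scene drew at this pixel vs. this fragment
        vec2 screenUV = gl_FragCoord.xy / _ScreenParams.xy;
        float sceneDepth = linearEyeDepth(texture2D(_CameraDepthTexture, screenUV).r);
        float fragDepth  = linearEyeDepth(gl_FragCoord.z);

        // Fade the outline in over ~0.5 eye-space units of separation (tune to taste)
        float fade = clamp((sceneDepth - fragDepth) / 0.5, 0.0, 1.0);
        gl_FragColor.rgb *= mix(1.0, 0.3, fade);
    }
}
```

Since you're targeting WebGL, note that `texture2D` and `gl_FragCoord` as used here are GLSL ES 1.0 compatible, and the single extra texture fetch per outline fragment should be cheap even on low-end hardware.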