The short answer is that what you’re trying to do is not possible.
The longer answer is that the Scene Depth node samples the camera depth texture. The camera depth texture is a screen-space texture holding the depth of the closest opaque surface at each pixel. Textures like this are produced in one of two ways: either all of the opaque objects are rendered in a separate depth-only pass before everything else, or the texture is copied out of the main camera’s depth buffer after the opaques have rendered. Either way, by the time the texture exists to be sampled, it is the depth texture for opaque objects only. So an object can either be opaque and be rendered into the depth texture, or it can be transparent and read from the depth texture; it can’t (usefully) do both. It also means two objects that both use the Scene Depth node can never “see” each other, because neither of them exists in the depth texture for that node to read.
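For a concrete picture, this is roughly what that kind of graph boils down to if you hand-wrote it as a Custom Function node. It’s a minimal sketch, not anything Shader Graph generates verbatim: the function name, the fadeDistance input, and the assumption that you’re on URP with a perspective camera are mine. Note that the only thing it can ever read is the opaque-only _CameraDepthTexture.

```hlsl
// Minimal sketch of a depth-intersection fade as a URP Custom Function node.
// Assumptions: URP, perspective camera, inputs wired from the Screen Position
// node (Default mode for screenUV, Raw mode's W component for fragEyeDepth).
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

void IntersectionFade_float(float2 screenUV, float fragEyeDepth, float fadeDistance, out float fade)
{
    // Depth of the closest *opaque* surface at this pixel — transparent
    // objects (including the one running this shader) are never in here.
    float rawDepth      = SampleSceneDepth(screenUV);
    float sceneEyeDepth = LinearEyeDepth(rawDepth, _ZBufferParams);

    // 1 where an opaque surface sits just behind this fragment, fading to 0
    // over fadeDistance world units.
    fade = 1.0 - saturate((sceneEyeDepth - fragEyeDepth) / fadeDistance);
}
```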
The “solution” to this problem is … don’t try to have two objects using this kind of shader show intersections with each other. Yeah, really. Short of restructuring how you render, there isn’t another option.
If showing complex intersections like this is something you absolutely need, then you’d have to drastically change how you’re doing things: either render a custom depth texture for each object manually, with that object itself excluded, or keep a version of the scene in SDF (signed distance field) form that each object’s shader can iterate over (sketched below). Neither of these is a small change, and Unity has nothing built in to help you do either, especially from Shader Graph.
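To give a sense of what the SDF route involves, here is a very rough sketch as another Custom Function node, assuming every other object is approximated by a sphere and the list is uploaded from a script. The property names, the sphere-only representation, and the 16-entry cap are all my assumptions, not anything Unity provides.

```hlsl
// Hypothetical sketch: treat the "scene" as a small list of analytic sphere
// SDFs so an object can test proximity to the others without needing them in
// the depth texture. The array would have to be filled from C# every frame,
// e.g. via Shader.SetGlobalVectorArray("_SdfSpheres", ...).
#define MAX_SDF_SPHERES 16
float4 _SdfSpheres[MAX_SDF_SPHERES]; // xyz = world-space centre, w = radius
int    _SdfSphereCount;

void SceneSDF_float(float3 positionWS, float fadeDistance, out float fade)
{
    float nearest = 1e6;
    for (int i = 0; i < _SdfSphereCount; i++)
    {
        // Signed distance from this fragment to sphere i's surface.
        float d = length(positionWS - _SdfSpheres[i].xyz) - _SdfSpheres[i].w;
        nearest = min(nearest, d);
    }
    // 1 where this fragment is at or inside another object's surface, fading
    // out over fadeDistance world units.
    fade = 1.0 - saturate(nearest / fadeDistance);
}
```

You’d still have to keep that list in sync from a script every frame, and anything that isn’t well approximated by a few analytic shapes gets out of hand quickly.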
Maybe you can put a slightly smaller cube without the shader inside the object that has the shader; since that inner cube is opaque it shows up in the scene depth texture, so other objects can see it and the intersections become visible.