Does the vertex "uv" store the position of a vertex on a texture, or the position of a vertex on the screen?

I wrote a shader that draws the scene as a depth map. It works, but I am having a hard time wrapping my head around why I can call SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv) like this:

struct v2f {
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
};

v2f vert (appdata v) {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}

// ... more code

fixed4 frag (v2f i) : SV_Target {
    // ...
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    // ... return ...
}

and have the shader color each fragment based on its position on the screen.
My assumption would be that the UV coordinates are in “texture space,” and that _CameraDepthTexture is rendered onto the screen?

Yes, UV (texture) coordinates are in texture space: they define the vertex position within the normalized [0, 1] texture space. _CameraDepthTexture is a texture the size of the screen which contains the depth value of each fragment of the last rendered frame.
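As an aside (not part of the original question): when rendering ordinary geometry rather than a fullscreen quad, the mesh UVs generally do not correspond to screen positions, so the depth texture is usually sampled with screen-space coordinates derived from the clip-space position. A rough sketch using Unity's built-in helpers from UnityCG.cginc (ComputeScreenPos, SAMPLE_DEPTH_TEXTURE_PROJ, LinearEyeDepth); the visualization range of 50 units is an arbitrary assumption:

```hlsl
// Sketch: sampling _CameraDepthTexture from an ordinary object shader.
// Assumes the camera renders a depth texture
// (camera.depthTextureMode |= DepthTextureMode.Depth).
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

struct v2f {
    float4 vertex    : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert (appdata_base v) {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    // Screen-space position for the projective depth lookup.
    o.screenPos = ComputeScreenPos(o.vertex);
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    // Projective sample: perspective-divides screenPos for us.
    float raw = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture,
                                          UNITY_PROJ_COORD(i.screenPos));
    float eyeDepth = LinearEyeDepth(raw); // distance from camera in world units
    return fixed4(eyeDepth.xxx / 50.0, 1); // crude visualization over ~50 units
}
```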

What the UVs look like inside a shader depends entirely on what geometry you are actually rendering, which is something you haven't mentioned at all. On what object, or in which cases, do you use your shader? Post-processing shaders, for example, simply draw a fullscreen quad across the whole screen, so only in that case do the UVs have a 1:1 relation to the actual screen pixels / fragments.
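To illustrate that 1:1 mapping, here is a minimal sketch of what the fragment stage of such a post-processing (image effect) shader might look like, assuming the vertex stage passes the fullscreen quad's UVs through unchanged (only the Unity built-ins are guaranteed names here):

```hlsl
sampler2D_float _CameraDepthTexture; // set globally by Unity

fixed4 frag (v2f i) : SV_Target {
    // i.uv spans the fullscreen quad, so it doubles as a screen coordinate.
    float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    // Convert non-linear device depth to a linear 0..1 value
    // (near 0 at the camera, 1 at the far clip plane).
    float linear01 = Linear01Depth(raw);
    return fixed4(linear01, linear01, linear01, 1);
}
```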

Since that was all you were asking, I'm not sure what else to say ^^