What happens to the vertex position input in a shader?

I am trying to understand the following code. The result is a flat yellow color, i.e. float4(1,1,0,1), for all pixels. This looks like "position" is the same for all vertices, which I do not understand because I apply the shader to a sphere.

This looks like UnityObjectToClipPos() is mapping everything to yellow, but it shouldn't, should it?

SubShader {
    Pass {
        CGPROGRAM
        
        #pragma vertex MyVertexProgram
        #pragma fragment MyFragmentProgram          
        
        #include "UnityCG.cginc"
        
        float4 _Tint;
        
        float4 MyVertexProgram(float4 position : POSITION,
        out float3 localPosition : TEXCOORD0) : SV_POSITION {
            localPosition = position.xyz;
            // UnityObjectToClipPos transforms from object space to clip space (applies the model-view-projection matrix)
            return UnityObjectToClipPos(position);
        }
       
        float4 MyFragmentProgram(
        float4 position : SV_POSITION,
        float3 localPosition : TEXCOORD0
        ) : SV_TARGET {
            //return float4(localPosition,1);
            return position;
        }
        
        ENDCG
    }
}

"is the same for all vertices"

Actually it's not, because you have a general misconception here. The output of the vertex shader is not fed directly into the fragment shader, and the SV_POSITION semantic has a different meaning in those two stages. The vertex shader output is in clip space, while the fragment shader input is in pixel coordinates. In between, the GPU performs the perspective divide and the viewport transform into window coordinates. So the coordinates you get in the fragment shader are simply the pixel positions, relative to the bottom left corner of the screen.

Since any color channel greater than 1 simply saturates to 1, everything looks yellow except for the very left column and the very bottom row of pixels (where x or y is still below 1). That single row / column is probably not even noticeable unless your sphere actually reaches that far down / left on the screen.
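If you want to convince yourself that SV_POSITION really does vary per pixel, a quick debug sketch like this (the 100 pixel tile size is arbitrary) wraps the pixel coordinates back into the 0..1 range, so you see bands instead of a saturated flat color:

float4 MyFragmentProgram(
    float4 position : SV_POSITION,
    float3 localPosition : TEXCOORD0
) : SV_TARGET {
    // position.xy are pixel coordinates here; frac() wraps them into 0..1
    // every 100 pixels, so the output no longer saturates to plain yellow
    return float4(frac(position.xy / 100.0), 0, 1);
}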

Try this instead:

position.xy /= _ScreenParams.xy;
return position;

This converts the incoming position into viewport coordinates, so they go from 0 (left screen edge) to 1 (right screen edge) horizontally, and from 0 (bottom screen edge) to 1 (top screen edge) vertically.
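In the context of the shader from the question, the whole fragment program would then look roughly like this (a sketch, not tested):

float4 MyFragmentProgram(
    float4 position : SV_POSITION,
    float3 localPosition : TEXCOORD0
) : SV_TARGET {
    // _ScreenParams.xy holds the screen resolution in pixels,
    // so this maps the pixel coordinates into the 0..1 viewport range
    position.xy /= _ScreenParams.xy;
    return position;
}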

Note that the depth / z value is usually in the range 0 (far clip plane) to 1 (near clip plane), with the usual non-linear mapping.
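If you want to see that for yourself, a minimal tweak (assuming a platform where Unity uses a reversed Z buffer, so nearby geometry shows up bright) is to output z as a grayscale value instead:

// inside the fragment program, instead of returning the position:
return float4(position.z, position.z, position.z, 1);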

If you want NDC coordinates, you have to multiply by 2 and subtract 1:

position.xy /= _ScreenParams.xy; // pixel coordinates -> viewport coordinates (0..1)
position.xy *= 2;                // 0..1 -> 0..2
position.xy -= 1;                // 0..2 -> -1..1
return position;

Or in one line:

position.xy = position.xy * 2 / _ScreenParams.xy - 1;
return position;

Now we have NDC coordinates, which are the result of the clip space coordinates being divided by w. This cannot simply be reverted, since the homogeneous divide turns w back into 1 (w / w == 1).

If you actually need the clip space coordinates before the homogeneous divide, you have to pass them along from the vertex shader in a separate variable (for example in TEXCOORD1).
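A sketch of how that could look with the shader from the question (the clipPos interpolator and the visualization at the end are just for illustration):

float4 MyVertexProgram(float4 position : POSITION,
    out float3 localPosition : TEXCOORD0,
    out float4 clipPos : TEXCOORD1) : SV_POSITION {
    localPosition = position.xyz;
    clipPos = UnityObjectToClipPos(position); // clip space, before the homogeneous divide
    return clipPos;
}

float4 MyFragmentProgram(
    float4 position : SV_POSITION,
    float3 localPosition : TEXCOORD0,
    float4 clipPos : TEXCOORD1) : SV_TARGET {
    // clipPos.w still contains the original w, so the perspective divide
    // can be done (or inspected) manually here
    float3 ndc = clipPos.xyz / clipPos.w;
    return float4(ndc * 0.5 + 0.5, 1); // remap -1..1 to 0..1 just to visualize it
}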

I would highly recommend that you actually use input and output structs. Using out parameters mixed with separate input parameters makes the shader extremely hard to read and follow. When you create a new unlit shader in Unity, it generates a basic stub that is usually a good starting point.
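For reference, a rough sketch of the same shader with structs, along the lines of what the unlit template generates (names are mine, not tested):

struct appdata {
    float4 vertex : POSITION;
};

struct v2f {
    float4 pos : SV_POSITION;
    float3 localPos : TEXCOORD0;
};

v2f MyVertexProgram(appdata v) {
    v2f o;
    o.localPos = v.vertex.xyz;              // object (local) space position
    o.pos = UnityObjectToClipPos(v.vertex); // clip space position
    return o;
}

float4 MyFragmentProgram(v2f i) : SV_TARGET {
    return float4(i.localPos, 1);           // visualize the local position as a color
}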