In URP shader code, Unity uses short suffixes to denote what coordinate system a position is in, which is super helpful:
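The common ones are OS (object space), WS (world space), VS (view space) and CS (clip space), so a typical vertex shader looks roughly like this (my own simplified sketch, not code from the URP package; it assumes Core.hlsl is included):

// #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

struct Attributes
{
    float3 positionOS : POSITION;    // object space, straight from the mesh
};

struct Varyings
{
    float4 positionCS : SV_POSITION; // clip space output of the vertex shader
    float3 positionWS : TEXCOORD0;   // world space, e.g. for lighting
};

Varyings vert(Attributes input)
{
    Varyings output;
    output.positionWS = TransformObjectToWorld(input.positionOS); // OS -> WS
    output.positionCS = TransformWorldToHClip(output.positionWS); // WS -> CS
    return output;
}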
However, I am somewhat confused by the suffixes used in some built-in functions.
For example:
float2 GetNormalizedScreenSpaceUV(float2 positionCS)
{
    float2 normalizedScreenSpaceUV = positionCS.xy * rcp(GetScaledScreenParams().xy);
    TransformNormalizedScreenUV(normalizedScreenSpaceUV);
    return normalizedScreenSpaceUV;
}
Judging by its parameter name, this function expects a clip space position. But the implementation treats it as screen space (pixel) coordinates: it divides the position by the screen resolution (via rcp) to get UV coordinates in [0, 1], e.g. for sampling a depth texture. Clip space coordinates, however, are homogeneous and range from -w to w, if I am not mistaken.
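For example, the way I have seen it used is to pass in the SV_POSITION value inside the fragment shader and sample the camera depth texture with the result (again a rough sketch of mine, assuming DeclareDepthTexture.hlsl is included and the Varyings struct from above):

// #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

half4 frag(Varyings input) : SV_Target
{
    // input.positionCS is the SV_POSITION varying written by the vertex shader
    float2 uv = GetNormalizedScreenSpaceUV(input.positionCS.xy);
    float rawDepth = SampleSceneDepth(uv); // only makes sense if uv is in [0, 1]
    return half4(rawDepth.xxx, 1);
}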
My assumption is that this comes from the fact that SV_POSITION varyings/interpolators are a special case: even though the vertex shader is expected to output clip space coordinates, the fragment shader receives them as interpolated screen space (pixel) coordinates.
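If that is the case, I would expect a quick test like this (just a sketch, mirroring the built-in function) to output a smooth 0-to-1 gradient across the screen rather than values in the clip space range:

half4 debugFrag(Varyings input) : SV_Target
{
    // If SV_POSITION were still in clip space ([-w, w]), this would not give a
    // simple screen-wide gradient; if it is already in pixels, it should.
    float2 normalized = input.positionCS.xy * rcp(GetScaledScreenParams().xy);
    return half4(normalized, 0, 1);
}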
Is this correct or am I confusing something? Are these suffixes incorrect?