Vertex to Frag mapping

Hi guys, I'm a newbie to shader programming and have a quick question that I'd like explained.

We know the vertex shader does its work on each vertex, and the output is then fed to the pixel shader. But there are usually far fewer vertices than there are fragments (pixels). How is the mapping done here? Does it involve some kind of interpolation?

Or am I just confused? I could really use some help here.

There are not necessarily fewer vertices than fragments. If your geometry is dense enough, or far enough away, you can have a lot of triangles whose vertices are processed but which never produce a single fragment.

However, you are right that there is no one-to-one mapping between vertices and fragments, so a single polygon (usually a triangle with three vertices) can cover many fragments. Your guess as to how this is handled is correct: the rasterizer interpolates the varying outputs of the vertex program, such as colours, UVs, and positions, across the triangle, typically with perspective correction. That means values which are correct at the vertices, such as UV coordinates, end up correct for each individual fragment as well. This happens automatically; the fragment program only ever sees the interpolated value, never the original per-vertex values.
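To make that concrete, here is a minimal sketch of a Unity-style Cg/HLSL shader. The shader name, struct names, and the use of vertex colours are just illustrative choices, not something from this thread. The vertex program writes a UV and a colour once per vertex; by the time frag runs, those fields have already been interpolated for the specific pixel being shaded.

```
Shader "Unlit/InterpolationSketch" // hypothetical name, for illustration only
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;  // object-space position
                float2 uv     : TEXCOORD0; // per-vertex UV
                float4 color  : COLOR;     // per-vertex colour
            };

            struct v2f
            {
                float4 pos   : SV_POSITION; // clip-space position for the rasterizer
                float2 uv    : TEXCOORD0;   // interpolated across the triangle
                float4 color : COLOR0;      // interpolated across the triangle
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos   = UnityObjectToClipPos(v.vertex); // object -> clip space
                o.uv    = v.uv;    // written once per vertex
                o.color = v.color; // written once per vertex
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // i.uv and i.color are already the interpolated, per-pixel values;
                // frag never sees the three original vertex values.
                return i.color;
            }
            ENDCG
        }
    }
}
```

If you paint each vertex of a triangle a different colour, you can see the interpolation directly: the interior comes out as a smooth gradient even though only three colours were ever written.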

Thanks! Makes perfect sense.
Still some newbie questions:

Sometimes I see the vert function outputting POSITION, and sometimes SV_POSITION. I know it must output at least one position. What is the difference?

Are TEXCOORD[N], POSITION[N], and COLOR[N] just registers in the graphics pipeline? Do they vary by hardware?

Can I output the position from vert to any of the POSITION[N] or COLOR[N] semantics, or does the choice still matter?

POSITION is the only one I know of that matters for the output of the vertex program. I don't actually know that much about semantics, though.
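From what I understand (treat this as a rough sketch rather than gospel), SV_POSITION is the Direct3D 10+ "system value" semantic for the clip-space output position, POSITION is the older Direct3D 9 name for the same slot, and TEXCOORDn/COLORn are labels that match vertex outputs to fragment inputs rather than hardware registers you assign by hand. Illustrative layout below:

```
// Rough sketch (Unity-style Cg/HLSL), illustrative names only.
struct v2f
{
    // The clip-space position has to carry the position semantic so the
    // rasterizer knows which field to use. SV_POSITION is the D3D10+
    // system-value name; POSITION is the older D3D9-era spelling.
    float4 pos : SV_POSITION;

    // TEXCOORDn / COLORn are general-purpose interpolator semantics:
    // they say which vertex output feeds which fragment input; they are
    // not registers you pick per piece of hardware.
    float2 uv   : TEXCOORD0;
    float4 tint : COLOR0;
};
```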