A Cg shader semantic problem

Hi guys, I'm a beginner learning Unity shaders, and I have a question about Cg shader semantics:

When defining the input and output structures of a shader, why do different people use different semantics for a field that carries the same meaning?

for example:

struct vertIn {
	float4 vertex : POSITION; 
	float3 normal : NORMAL; // semantic: NORMAL
};

struct vertOut {
	float4 pos : SV_POSITION;
	float3 worldNormal : NORMAL; // the same data as vertIn.normal, but some people use TEXCOORD instead
	float3 color : COLOR;
};

How do I know when to use NORMAL and when to use TEXCOORD?

I think you’re having an issue with concepts rather than semantics. A normal is a vector perpendicular to the intended surface that describes how the vertex should react to light, while texture coordinates are 2-dimensional values typically ranging from (0,0) to (1,1), giving a U and a V coordinate rather than X, Y, Z.

The semantics attached to the shader inputs and outputs are there to show intent to the GPU, to tell it what you mean to use those registers for. It’s very important that you do not confuse the intent of a normal with the intent of a texture coordinate.

For some build targets, and/or years ago, “NORMAL” semantics were not valid for vertex-to-fragment interpolators, though they were valid for vertex input. Since normal vectors were being interpolated anyway, people used “TEXCOORD” semantics for normals.
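
For example, a vertex-to-fragment struct from that era would carry the normal in a TEXCOORD slot. Here is a minimal sketch (UnityObjectToClipPos and UnityObjectToWorldNormal are standard UnityCG.cginc helpers; the struct and function names are just illustrative):

#include "UnityCG.cginc"

struct appdata {
	float4 vertex : POSITION;
	float3 normal : NORMAL; // NORMAL is fine as a vertex input semantic
};

struct v2f {
	float4 pos : SV_POSITION;
	float3 worldNormal : TEXCOORD0; // a normal riding in a TEXCOORD interpolator
};

v2f vert(appdata v) {
	v2f o;
	o.pos = UnityObjectToClipPos(v.vertex);
	o.worldNormal = UnityObjectToWorldNormal(v.normal);
	return o;
}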

By informing the render pipeline of your intent for interpolators, you give it an opportunity to optimize. For example, pure color interpolation might tolerate less precision than an actual texture coordinate, since texture coordinates in the 0 to 1 range can address thousands of completely different texels, so color might be “easier” for the pipeline.
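
As a concrete illustration, Unity's Cg lets you pair a semantic with an explicit precision type; a common convention (a sketch, not something the pipeline enforces) is low precision for colors and full float precision for texture coordinates:

struct v2f {
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0; // full precision: may address many distinct texels
	fixed4 tint : COLOR0;  // low precision is usually enough for a 0-1 color
};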

This does not guarantee that semantic hints will be used to optimize. However, per the answer by @waller_g, if your shader model and targets accept the “NORMAL” semantics for a true normal vector, then it could be optimized for speed and/or (domain) precision. For example, hypothetically, a “NORMAL” interpolator could prioritize maintaining directionality over maintaining scale of the vector.
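
A related, concrete consequence: linear interpolation across a triangle shortens unit vectors, so fragment shaders typically renormalize the incoming normal whatever semantic it traveled under. A sketch, reusing the earlier hypothetical v2f that carries worldNormal, with a made-up hard-coded light direction:

fixed4 frag(v2f i) : SV_Target {
	// Interpolation does not preserve length, so restore the unit vector.
	float3 n = normalize(i.worldNormal);
	// Hypothetical diffuse term against an up-facing light direction.
	float ndotl = saturate(dot(n, float3(0, 1, 0)));
	return fixed4(ndotl.xxx, 1);
}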

So, if all your desired shader variants are compatible with “NORMAL” semantics for a normal vector, I suspect that’s what you should use. (Some variants might not be, though.)