custom vertex format

Hello,

I’m trying to use a custom vertex format with a secondary UV set.

That works fine with the fixed function passes. But when I define a new vertex format like this:

struct v2f {
	V2F_POS_FOG;
	LIGHTING_COORDS
	float2	uv;
	float2	uv2;
	float3	normal;
	float3	lightDir;
};
struct appdata_base2uvs {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 texcoordsec : TEXCOORD1;
};

v2f vert (appdata_base2uvs v)
{
	v2f o;
	PositionFog( v.vertex, o.pos, o.fog );
	o.normal = v.normal;
	o.uv = v.texcoord.xy;
	o.uv2 = v.texcoordsec.xy;
	o.lightDir = ObjSpaceLightDir( v.vertex );
	TRANSFER_VERTEX_TO_FRAGMENT(o);
	return o;
}

Cg complains about the vertex program: Vertex program ‘vert’: unknown input channel ‘texcoordsec’.

I’m pretty sure the vertex program is OK; it simply passes the secondary UV set through to the output structure.

Is it possible that Unity generates shader programs for shadow rendering from my vertex program, but uses a different vertex declaration that doesn’t have a secondary TEXCOORD set?

Unity expects vertex inputs to be named as documented. Basically, name the 2nd UV set “texcoord1”. You don’t have to use semantics in the vertex input structure (you can, but it’s not necessary).
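
For example, a vertex input structure following that naming might look like this (just a sketch; the struct name is arbitrary):

struct appdata_2uv {
	float4 vertex;      // position
	float3 normal;      // normal
	float4 texcoord;    // primary UV set
	float4 texcoord1;   // secondary UV set (the name Unity expects)
};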

Shadow rendering? If you mean shadow mapping, then those shaders do not use the 2nd UV set from the models (they don’t need it). If you mean lightmapping, then yes, those do use the 2nd UV set, but it’s named as above. See the built-in shaders for the source code of the lightmapped shaders.
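
As a simplified sketch (using the example input structure above, not the actual built-in code – check the built-in shader source for the real thing), a lightmapped vertex program would pass the 2nd UV set along roughly like this:

struct v2f_lmap {
	V2F_POS_FOG;
	float2	uv;       // main texture UV, from texcoord
	float2	uv2;      // lightmap UV, from texcoord1
};

v2f_lmap vert (appdata_2uv v)
{
	v2f_lmap o;
	PositionFog( v.vertex, o.pos, o.fog );
	o.uv  = v.texcoord.xy;
	o.uv2 = v.texcoord1.xy;   // compiles once the input is named texcoord1
	return o;
}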

Thanks a lot, I hadn’t seen that part of the docs.

(I meant filling the depth texture in the shadow mapping algorithm.)

That does not need the 2nd UV set from the mesh. Yes, the vertex shader generates more than one texture coordinate for the pixel shader, but they are computed from the vertex position and some matrices; they do not come from the mesh data.
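
As a rough illustration only (_ShadowMatrix is a made-up uniform here, not what Unity actually uses internally):

float4x4 _ShadowMatrix;   // hypothetical object-space to shadow-map-space matrix

struct v2f_shadow {
	V2F_POS_FOG;
	float4	shadowUV;     // projective shadow coordinate
};

v2f_shadow vert_shadow (appdata_base v)
{
	v2f_shadow o;
	PositionFog( v.vertex, o.pos, o.fog );
	// the shadow coordinate is derived from the vertex position and a matrix,
	// not from any TEXCOORD channel in the mesh
	o.shadowUV = mul( _ShadowMatrix, v.vertex );
	return o;
}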