[SOLVED] Getting pre-skinned vertex positions in a surface shader

So, basically what the title says.
Is there a way to do this? The local vertex positions in the shader are already skinned; I need them to stay static no matter what rotations the bones are in. I was thinking there should be some inverse skinning matrix that I could multiply the skinned vertices by to get the pre-skinned ones, but I can't figure out what that matrix is.

The vertex position in the shader is always the final position after the bone transform. You could use a custom script to store the original vertex positions in the mesh's color channel and access them in the shader.
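
A minimal sketch of that idea (the component name is hypothetical, and note the caveat in the comment):

using UnityEngine;

// Hypothetical helper: copies each vertex's rest-pose position into the
// mesh's color channel, readable in a surface shader via the COLOR semantic.
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class StoreRestPoseInColors : MonoBehaviour
{
    void Start()
    {
        var mesh = GetComponent<SkinnedMeshRenderer>().sharedMesh;
        var verts = mesh.vertices;
        var colors = new Color[verts.Length];
        for (int i = 0; i < verts.Length; i++)
        {
            // Vertex colors are quantized to 0..1, so positions would have
            // to be remapped into that range first (and precision suffers).
            colors[i] = new Color(verts[i].x, verts[i].y, verts[i].z, 1f);
        }
        mesh.colors = colors;
    }
}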

I guess I didn't convey it properly: I know the vertex positions come after the bone transform; I just want the unskinned positions separately from those. And I don't think storing positions as colors would work well, because of precision problems and the 0…1 range.

I've also just found out that as of Unity 5.3, UV channels can store Vector3 and Vector4 values if you use mesh.SetUVs. I'll try to use those instead, I guess.
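
Something like this should do the storing side, I assume (the component name is made up; channel 2 is an arbitrary choice, as long as the shader reads the same one):

using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: writes each vertex's rest-pose position into
// UV channel 2 as a full-precision Vector3 (Unity 5.3+ mesh.SetUVs).
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class StoreRestPoseInUVs : MonoBehaviour
{
    void Start()
    {
        var mesh = GetComponent<SkinnedMeshRenderer>().sharedMesh;
        mesh.SetUVs(2, new List<Vector3>(mesh.vertices)); // shader sees texcoord2
    }
}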

Yes, you can use UVs instead.

Welp, now the problem is that surface shaders apparently only allow two UV channels and don't let you use float3 or float4 for them; it says “cannot implicitly convert from ‘const float2’ to ‘float4’”. Any thoughts on what I can do here? I know I could just take the CG source of the Standard shader and modify it, and it wouldn't have problems with the UVs, but that's unwanted for several reasons.

Don't rely on Surface Shaders' auto-magic handling of UVs via float2 uv_textureName; or float2 uv2_textureName;. Instead, use an explicit float3 myCustomData : TEXCOORD2;. You'll probably also want to disable lightmap support by adding nolightmap to the #pragma surface line, since presumably this shader will only be used on non-static skinned meshes; it'll stop Unity from trying to do funny things with TEXCOORD1 and TEXCOORD2 (which are used for static lightmaps and precomputed realtime GI).

I tried that; unfortunately it didn't work, and the shader still doesn't see that data (even with nolightmap). I tried outputting it as Emission, but it didn't do anything. I also tried both TEXCOORD2 and TEXCOORD3. It should work, though, because I just tried the same thing with a copy of the Standard shader's CG source, accessing the same UV texcoord and outputting it as Emission, and it works fine there. But as I said, I don't want to use copies of the source shaders and fiddle with multiple cginc files, in case they change something vital in future updates.

Recently it seems like surface shaders have gotten a lot more aggressive about “optimizing” away stuff put into the Input struct, but you can work around this using custom data and a vertex function.

See the section of the Surface Shaders documentation titled “Custom data computed per-vertex”.

Basically, put float3 myCustomData; in the Input struct, add vertex:vert to the #pragma surface line, and have a vertex function that does:
void vert (inout appdata_full v, out Input o) {
    UNITY_INITIALIZE_OUTPUT(Input, o);
    // texcoord2 is UV channel 2, i.e. what mesh.SetUVs(2, ...) wrote
    o.myCustomData = v.texcoord2.xyz;
}

Note that if you're familiar with vertex/fragment shaders, the vertex function in surface shaders isn't quite the same: it's just another function that gets called so you can inject or modify data inside the real, generated vertex function, which is why that vert function looks like it isn't doing enough to work.

Yes, huge thanks! It works now. I didn't know I could still access the UV channels through appdata.texcoordX. That was dumb of me.

I'm not really sure how to do this myself and would love to know how. I set normals directly in object space in my shader and would like to be able to transform those normals, as they come out incorrect the moment the bones are moved at all from their original positions.

Unity's skinned meshes are transformed either on the CPU or on the GPU with a “stream out” shader, meaning the GPU outputs a pre-deformed mesh that the shaders we write then use. Either way, custom shaders only have access to the post-deformed mesh and normals. So if you're outputting object-space normals for a skinned mesh, expect them to be wrong; you'll need another solution.

Storing the mesh's original normal and tangent in two extra UV sets (or maybe a single quaternion) would give you the info needed to transform from rest-pose object space into the deformed orientation, but that's super expensive. If plausible, you might try to think about how to achieve your goal in tangent space instead.
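
The storage side of that would look something like this (hypothetical component; the rotation from the stored basis to the deformed one still has to be rebuilt per-vertex in the shader, which is where the cost is):

using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: stashes the rest-pose normal (UV2) and tangent (UV3)
// so a shader can reconstruct a rest-pose-to-deformed rotation per vertex.
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class StoreRestPoseBasis : MonoBehaviour
{
    void Start()
    {
        var mesh = GetComponent<SkinnedMeshRenderer>().sharedMesh;
        mesh.SetUVs(2, new List<Vector3>(mesh.normals));  // float3 per vertex
        mesh.SetUVs(3, new List<Vector4>(mesh.tangents)); // float4, w = handedness
    }
}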

I'm trying to use object space because I have the mesh split into interchangeable parts that are supposed to join together seamlessly, giving the illusion of being one big piece.

Honestly, that doesn’t really explain the need for using object space normals at all.

Well, the thing is that I have old assets that were already generated with object-space normal mapping in mind, for which I don't have the sources, and the assets were designed for an engine that does skinning on the GPU directly in the shader itself, which I don't think is really possible in Unity.

You can do skinning in the per-object shader in Unity. It just requires completely replacing Unity's skeletal animation system with your own custom one. People do that relatively often when they need to render a large number of skinned characters.
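
Very roughly, the C# side of that looks something like this (a hedged sketch; all names here are made up, and the matching vertex shader that reads _SkinMatrices and blends with the mesh's bone weights and indices is left out):

using UnityEngine;

// Hypothetical sketch of custom GPU skinning: upload one skinning matrix
// per bone each frame; a custom vertex shader blends them per vertex
// using bone weights/indices stored in extra vertex streams.
public class CustomGpuSkinning : MonoBehaviour
{
    public SkinnedMeshRenderer source; // used only for its bones and bindposes
    ComputeBuffer skinBuffer;
    Matrix4x4[] skin;

    void Start()
    {
        int n = source.bones.Length;
        skin = new Matrix4x4[n];
        skinBuffer = new ComputeBuffer(n, 64); // 16 floats per matrix
        GetComponent<Renderer>().material.SetBuffer("_SkinMatrices", skinBuffer);
    }

    void LateUpdate()
    {
        var bones = source.bones;
        var bindposes = source.sharedMesh.bindposes;
        for (int i = 0; i < skin.Length; i++)
            skin[i] = bones[i].localToWorldMatrix * bindposes[i]; // rest pose -> world
        skinBuffer.SetData(skin);
    }

    void OnDestroy() { skinBuffer.Release(); }
}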