Inaccuracies importing animation

I’m experimenting with importing keyframe animation. I loaded the Unity-chan FBX into Maya, made some animation with HumanIK, and baked it out and exported it to FBX with the game exporter. After a couple false starts, it seemed to work and the animation looks mostly correct.

However, the import wasn’t accurate. Things were okay up through the spine and shoulders, but the upper arm rotation was slightly off, and errors cascaded down from there, leading to the hands not being quite where I left them. The finger positions were a little off, too.

After a lot of poking around, I found that this was being influenced by humanoid muscle settings, and if I changed the skeleton to generic, the error went away.

It’s expected to have some fuzziness in animation when you’re retargeting, or when a heuristic system like IK or ragdoll is active. But if you’re creating and fine-tuning an animation for a specific hero character without any of those things active, you should see exactly what you animated. Is this a problem with Unity’s humanoid system, or did I get something wrong somewhere along the line while creating the animation?

The Unity-chan model doesn’t match the humanoid rig exactly, so the error you’re seeing is because the imported animation doesn’t have the proper bones to match the humanoid rig; it only retargets the humanoid bones.
You can still create an avatar that excludes certain bones so that Unity-chan WILL match the humanoid rig; the animation you created will then also have to use that same avatar to retarget onto the same character. (There’s a rough scripted sketch of this kind of bone mapping below.)
You will most likely lose some animation data when animating on a rig that doesn’t match the humanoid layout, because the humanoid rig requires an exact setup for certain bones. For instance, the humanoid rig only supports two spine bones, because most mocap only has two spine bones. So if a character has more than two spine bones, the artist needs to choose which two to map to the humanoid rig. The animation will be interpolated to match the original as closely as possible, but it might be very slightly off.
I haven’t muddled with Unity-chan in a while, but I think she has a non-conforming clavicle/shoulder setup and extra bones in other areas as well. These will need to be masked out from the avatar of the original character, and if you remember which ones you masked out, you can create an animation in your 3D package and the retarget will match 1:1, at least until you start setting up transitions, blends, and things like that.
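For what it’s worth, here’s a minimal sketch of what that bone mapping amounts to if you build the avatar in script rather than in the Configure Avatar UI. This is only an illustration: the bone names are hypothetical, and it assumes the character is currently posed in its T-pose.

```csharp
using System.Linq;
using UnityEngine;

// Illustration only: build a humanoid Avatar that maps just the bones the
// humanoid definition understands. Extra joints are simply not listed,
// which is the scripted equivalent of excluding them in the Avatar editor.
public static class AvatarFromScript
{
    public static Avatar Build(GameObject characterRoot)
    {
        // One entry per humanoid bone you want mapped; bone names are made up.
        var human = new[]
        {
            Map("Hips",  "Character_Hips"),
            Map("Spine", "Character_Spine"),
            Map("Chest", "Character_Spine1"),
            // ... and so on for the rest of the humanoid bones.
        };

        // The skeleton array records the T-pose; this assumes the character
        // is currently standing in it.
        var skeleton = characterRoot.GetComponentsInChildren<Transform>()
            .Select(t => new SkeletonBone
            {
                name = t.name,
                position = t.localPosition,
                rotation = t.localRotation,
                scale = t.localScale
            })
            .ToArray();

        var description = new HumanDescription { human = human, skeleton = skeleton };
        return AvatarBuilder.BuildHumanAvatar(characterRoot, description);
    }

    static HumanBone Map(string humanName, string boneName)
    {
        var bone = new HumanBone { humanName = humanName, boneName = boneName };
        bone.limit = new HumanLimit { useDefaultValues = true };
        return bone;
    }
}
```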

The rig is the key: if you can match your character to the base humanoid rig in Unity, there shouldn’t be any issues.

Avatar masks and layer masks are really useful, and they allow more complex characters to be used with the humanoid rig, but this is a little more advanced and takes some research and hands-on experience.
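For instance, a humanoid avatar mask can also be created in script rather than through the inspector. A minimal sketch (the menu path and asset path are just examples):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-folder sketch: create a humanoid AvatarMask asset that disables the
// left arm and fingers, e.g. for an animator layer that shouldn't touch them.
public static class ExampleMaskCreator
{
    [MenuItem("Tools/Create Example Avatar Mask")]
    static void Create()
    {
        var mask = new AvatarMask();
        mask.SetHumanoidBodyPartActive(AvatarMaskBodyPart.LeftArm, false);
        mask.SetHumanoidBodyPartActive(AvatarMaskBodyPart.LeftFingers, false);
        AssetDatabase.CreateAsset(mask, "Assets/ExampleMask.mask");
    }
}
```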

Also, generic rigs can be retargeted, but that requires a scripting/transform-match process I’m not that familiar with, and it mostly requires the rigs to match transforms/rotations 100% (the rough idea is sketched below). TonyLi has explained this process before; if you look back through his post history (a couple of years), you can find the info about this.
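The gist, as a sketch only (not TonyLi’s actual method), is copying local rotations between two rigs whose hierarchies and bone names line up:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of transform-match "retargeting" between two generic rigs whose
// hierarchies and bone names match exactly: copy each bone's local rotation
// from the animated source rig onto the target rig every frame.
public class GenericPoseCopy : MonoBehaviour
{
    public Transform sourceRoot; // rig driven by an Animator
    public Transform targetRoot; // rig to pose

    readonly Dictionary<string, Transform> targetBones =
        new Dictionary<string, Transform>();

    void Start()
    {
        // Assumes bone names are unique within the rig.
        foreach (var t in targetRoot.GetComponentsInChildren<Transform>())
            targetBones[t.name] = t;
    }

    void LateUpdate()
    {
        foreach (var s in sourceRoot.GetComponentsInChildren<Transform>())
        {
            Transform target;
            if (targetBones.TryGetValue(s.name, out target))
                target.localRotation = s.localRotation;
        }
    }
}
```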

I did exclude the extra joints that the humanoid avatar doesn’t support. Unity warns you about this when you import an animation, and I only used joints that were part of the humanoid definition. Once I did that, the warnings went away and the animation almost matched what I exported (the bigger errors went away), but it wasn’t 100% exact, and there was enough error to be visibly incorrect (e.g. hands placed exactly together would end up slightly interpenetrating or apart).

It looked like it was still acting as if it were a retarget. For example, if I expand the muscle ranges, I’d expect that to have no effect on the import as long as the avatar on the character and on the animation file are the same (the muscle ranges match, so nothing should need remapping), yet it had a huge effect on the imported animation. I did set the animation to use the same avatar as the character rather than generating a new one, of course.

Generic skeletons make it accurate, but then of course you lose all of the humanoid avatar features too.

Is animation compression on, by chance?
And from memory Unity Chan has a good T-pose, so that shouldn’t be an issue, though others have missed that step. Just confirm the imported animation has a good enforced T-pose.

This may be a silly question, but you didn’t happen to add any bones to the rig in Maya, did you?
From mecanim-dev:
If the joints they add are leaf joints you won’t have any issues, but if they add them in the middle of the rig then you will start to have problems:
For a generic rig, all the animation for transforms that are children of those added transforms won’t work anymore. The mapping between an animation curve and a transform is based on the transform path, and adding a transform in the middle of the rig obviously changes the paths of all of its children and grandchildren.
For a humanoid rig it’s a little bit different, since the animation format is not an FK pose but a muscle pose. It may look like it works, but you will have some discrepancy between the animation played in Unity and in your original authoring tool (Max, Maya, Blender, etc.); for example, if they added rotation animation on the inserted transform, the engine won’t retarget the animation correctly.

For this specific instance you could just use generic, although that’s not really a fix. The only things you lose with generic are retargeting (which can still be done if the transforms/bones/names match), animation mirroring, and the built-in IK (there are Asset Store solutions for generic).

A couple of other links, though none are specifically related to your experience:
https://forum.unity3d.com/threads/what-is-required-to-retarget-generic-animations.387088/#post-2518352
https://forum.unity3d.com/threads/still-no-way-to-get-fine-grained-masking-with-humanoid-animations.385964/

I wonder whether Unity Chan is the problem, or whether it’s the combination of steps required to get the retargeted animation exact: avatar/transition masks, included/excluded bones, T-pose matching, animation compression, offsets.
If you simplified the variables in the experiment, performing the same test with an existing free Asset Store character that conforms to the humanoid rig (or has 1-2 extra bones at most), what would the results be?

If I have an extra hour this evening I’ll throw Unity Chan into Max and see what type of results I come up with.

I’ve been turning animation compression off; it causes problems even with generic avatars. (The default tolerances are way too high, and I’ve started using an AssetPostprocessor to lower them significantly; a rough sketch of that is below.) I didn’t change the skeleton at all in Maya; I just made sure to bind to HumanIK only the joints that Unity understands, leaving the others in their bind pose.
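Something along these lines; the exact tolerance values are just what I’ve been experimenting with, not recommendations:

```csharp
using UnityEditor;

// Place in an Editor folder. Tightens Unity's animation-compression
// tolerances on every model import; the defaults (0.5) are far too lossy
// for close hand poses.
public class AnimationCompressionPostprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        var importer = (ModelImporter)assetImporter;

        // Keyframe reduction only; use ModelImporterAnimationCompression.Off
        // to disable compression entirely.
        importer.animationCompression = ModelImporterAnimationCompression.KeyframeReduction;

        importer.animationRotationError = 0.05f; // degrees
        importer.animationPositionError = 0.05f; // percent deviation
        importer.animationScaleError    = 0.05f;
    }
}
```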

The model is in a clean T-pose. The pose is slightly different in the avatar than in the FBX (modified in the avatar editor), but only down in the fingers.

One thing that I think hints at the problem, though, is that if you change the muscle ranges, the imported animation changes. If I take Spine Front-Back and change it from [-40,40] to [-180,180], the animation breaks. Now, if I were retargeting an animation from another character, I’d obviously expect that to break everything: it would be remapping the much smaller muscle range from the animation onto a 360-degree range, and movements would be wildly amplified. But when I’m importing an animation for this same avatar, I’d expect it to have no effect: the [-180,180] range is being mapped to the same [-180,180] range.

That’s obviously not how it actually works, but I haven’t been able to get a clear picture from the docs of why. The takeaway here is that the muscle ranges have an influence on the imported animation. If [-180,180] has a huge effect on it, it makes sense that a [-40,40] range could still have an effect, even if it’s a very small one. It’s not giving a 1:1 import with the big range, so it makes sense that it doesn’t give a 1:1 import with the smaller range either.
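To make the amplification intuition concrete, here’s a hypothetical sketch. This is not Unity’s actual internals; it just assumes muscle curves store values normalized to the muscle range they were baked against, decoded against whatever range is active at playback:

```csharp
using System;

// Hypothetical illustration of muscle-range remapping, not Unity's real code.
static class MuscleRangeDemo
{
    // Decode a normalized muscle value ([-1, 1]) against a range in degrees.
    static float MuscleToDegrees(float muscle, float min, float max)
    {
        float centre = (max + min) * 0.5f;
        float half = (max - min) * 0.5f;
        return centre + muscle * half;
    }

    static void Main()
    {
        // A 20-degree spine bend baked against [-40, 40] is stored as 0.5.
        const float stored = 0.5f;
        Console.WriteLine(MuscleToDegrees(stored, -40f, 40f));   // 20: round-trips
        Console.WriteLine(MuscleToDegrees(stored, -180f, 180f)); // 90: wildly amplified
    }
}
```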

Note that the errors are small enough that people animating punches and sword swings would probably never notice (maybe 1-2cm in the fingers), but it’s a ton of error if a character is twining his fingers together. My first concern was whether that amount of error is actually expected in the humanoid system or not, so I don’t spend too much time banging my head on it if it’s just an expected artifact. I do need to re-test with a simpler skeleton and compare, I just haven’t gotten back to this yet. (I should probably write a script to export the world-space positions of the joints to make comparing between Maya and Unity easier, since the coordinate systems are different; something like the sketch below.)
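On the Unity side, a quick-and-dirty version might look like this (the output path and component name are just placeholders):

```csharp
using System.IO;
using System.Text;
using UnityEngine;

// Dumps every joint's world-space position under this object to a CSV so
// the Unity pose can be diffed against the same frame exported from Maya.
public class JointDumper : MonoBehaviour
{
    public string outputPath = "joints_unity.csv";

    [ContextMenu("Dump Joint Positions")]
    void Dump()
    {
        var sb = new StringBuilder("joint,x,y,z\n");
        foreach (var t in GetComponentsInChildren<Transform>())
        {
            Vector3 p = t.position;
            // Unity is left-handed and Maya is right-handed, so one axis
            // (typically X) needs to be negated when comparing the dumps.
            sb.AppendFormat("{0},{1},{2},{3}\n", t.name, p.x, p.y, p.z);
        }
        File.WriteAllText(outputPath, sb.ToString());
    }
}
```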

Hey, did you ever figure out a solution? I’m having the same slight inaccuracies with a Humanoid rig, which is a real pain for hand poses like holding a gun. I don’t want to give up on the Humanoid rig, since I need the mirroring and masking.