I’m converting skinned mesh renderers into vertex animations. In order to do this I need to transform the parent object manually. I’ve successfully recorded the root motion. I tried doing the same for the head, but the motion does not match up. I’m preprocessing the animation to apply these transforms at runtime.
My workflow for recording root motion is as follows.
- Animations are set with no baking.
- The animator is set to “Apply Root Motion”.
- I record the position before and after my predefined timestep, subtract the two, and offset my vertex animation parent by the change in position. For rotation I simply record the quaternion and apply it.
I do this for every frame in my animation and record it to an indexed data structure.
// Sample the root before stepping the animation.
Vector3 rootStartPosition = Root.transform.position;
...
// Sample again after the timestep and store the per-frame delta
// plus the absolute end rotation.
Vector3 rootEndPosition = Root.transform.position;
Vector3 rootMovementVector = rootEndPosition - rootStartPosition;
Quaternion rootEndRotation = Root.transform.rotation;
Transforms[frame].RootPosition[i] = rootMovementVector;
Transforms[frame].RootRotation[i] = rootEndRotation;
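For completeness, the playback side looks roughly like this. This is only a sketch: `vertexAnimParent` is a stand-in name for my vertex animation parent object, and `Transforms`, `frame`, and `i` are the same structures as in the recording code above.

```csharp
// Playback sketch (assumed names; vertexAnimParent is the
// vertex animation parent object being driven manually).
void ApplyFrame(int frame, int i)
{
    // Offset by the recorded per-frame position delta...
    vertexAnimParent.transform.position += Transforms[frame].RootPosition[i];
    // ...and overwrite with the recorded absolute rotation.
    vertexAnimParent.transform.rotation = Transforms[frame].RootRotation[i];
}
```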
My question is: why does this same process not work for my head location? The head position follows reasonably closely but drifts away over time, and the rotation is noticeably off when the rotation happens around the X axis.
Is there something I’m missing?