I’m trying to pose an avatar at run time from motion-capture data live-streamed from another piece of software. The incoming data is an XML document listing bones with their rotations and positions. I built a system that applies these values directly to the bone Transforms, but this distorts the model, because directly editing the transforms ignores the model’s proportions and joint limits.
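For context, this is roughly what my current system does (a simplified sketch; the `ParsedBone` type and the bone names stand in for my XML parser's output and are only illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative container for one bone entry parsed from the XML stream.
public struct ParsedBone
{
    public string name;
    public Vector3 position;
    public Quaternion rotation;
}

public class DirectBonePoser : MonoBehaviour
{
    // Cache of bone name -> Transform, built once from the model's hierarchy.
    private readonly Dictionary<string, Transform> _bones = new Dictionary<string, Transform>();

    void Awake()
    {
        foreach (var t in GetComponentsInChildren<Transform>())
            _bones[t.name] = t;
    }

    public void ApplyPose(List<ParsedBone> parsed)
    {
        foreach (var bone in parsed)
        {
            if (!_bones.TryGetValue(bone.name, out var t)) continue;
            // Overwriting both rotation AND position is what causes the
            // distortion: the streamed positions come from a skeleton with
            // different proportions, so bone lengths get stretched/squashed.
            t.localRotation = bone.rotation;
            t.localPosition = bone.position;
        }
    }
}
```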
Is there any way to pose the avatar (possibly by generating an animation) so that a pose can be retargeted onto any rig correctly, the same way humanoid animations are retargeted between models?
I’ve looked into the AvatarBuilder class, but I don’t think I need to create a new Avatar; the model already has one. I just need to access its bones.
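To be concrete about what I mean by “accessing its bones”: since the model’s Animator already has a humanoid Avatar, I can query the mapped Transforms like this (a minimal sketch, assuming the Animator component sits on the same GameObject):

```csharp
using UnityEngine;

public class BoneAccessExample : MonoBehaviour
{
    void Start()
    {
        var animator = GetComponent<Animator>();
        if (animator != null && animator.isHuman)
        {
            // The Animator's humanoid mapping resolves a standard bone
            // enum to the actual Transform in this particular rig.
            Transform leftUpperArm = animator.GetBoneTransform(HumanBodyBones.LeftUpperArm);
            if (leftUpperArm != null)
                Debug.Log("Left upper arm bone: " + leftUpperArm.name);
        }
    }
}
```

What I haven’t found is how to drive those mapped bones in a way that respects the avatar’s proportions the way Mecanim retargeting does.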