Can I use Inverse Kinematics just to move parts of the model to certain points in the scene (without having animations)?

I’m feeding the game with coordinate information from a motion tracking camera, and translating it into Unity coordinates. I would want to move hands, head and shoulders to their points that the camera sent, but have the rest of the body move with them as well.

I’ve read a bit about kinematics in Unity, but it uses Animators and animations, and I’m not sure if that would accomplish what I want.

Edit: For clarity, THIS is what I have, and I want to connect the objects (currently represented by cubes) to other objects, or use an avatar model that moves its hands to where the coordinates received from the camera are (e.g. right hand (x: 5, y: 3), left hand (x: 3, y: 2.5)).

You're not sure because you probably realize that animations would limit the potential postures of the model, and you want any reasonable posture.

I think you mean inverse kinematics, a subject that extends beyond Unity, which (oversimplified) means you position a part of the model that is connected to other parts, probably through joints or something related to joints, which then move appropriately to accommodate that position.

A personalized, illustrative example is our own hand. From infancy we learn how to position our hand to grasp an object we've found through vision. In reality the only control we have over our arm is to set the angles of the shoulder, elbow and wrist. Our brain is built to solve the implied math automatically, without formalized measurements. The method is simply to project an imaginary line from the shoulder to the target. This implies a distance from the shoulder, and because there's one elbow and two links (upper arm and forearm), there is an imaginary triangle formed from shoulder to wrist that includes the elbow. For any given distance to the target there is only one angle the elbow can assume to exactly match the target distance. That is the first calculation required to control the arm, and it happens to be the first calculation required to solve the inverse kinematics puzzle: that of controlling the arm from the target backwards to the shoulder.

The second step, for the infant or practiced adult, is to adjust the shoulder joint's angle, as this "sweeps" the "system" created in the first step to align with the target. With these two angles the wrist is positioned at the target, the hand may use the wrist for small error adjustment, and the target is reached. Both calculations are done with trigonometry (in the case of arms), specifically the law of cosines, a bit of the law of sines and maybe some simpler right-angle trig.

That is inverse kinematics in a nutshell. That process can cascade through a number of joints and positions, but the case you're describing brings a lot of conditions to mind. If, to return to my personalized example, the target is too far away, the elbow can straighten out (for maximum reach) and still not touch the target. The toddler gets up to walk as a result.

That last decision hints that for your application you'll need to consider either a rule-based "expert system" that can be hard-coded (but limited), an AI that can learn what to do (a bit advanced and difficult), or some balance between these extremes: a smart bit of code that can respond to a variety of situations with workable solutions, like walking toward a target - hopefully not with an outstretched arm like a toddler walking toward an objective (parents spend two or three years of their parenting career trying to guard against that).

Since I can't see the input from the tracking data you're getting, I must assume that you can identify the data indicating the shoulder position, as that is basically the body's position. If you're ignoring the slight twists and bends of the spine, that positions the character (otherwise you probably have similar data from the waist, and the two combine to give the body position). The head is a single relative attitude (an angle in 3D, really) relative to the shoulder (the math is much simpler, overall, for that). The "tough" part, if trig is tough, is the arms as given by the position of the hands, but if you keep your wits about you and use trig to work backwards from the hand's position to the shoulder, you can calculate the solution. There is a caveat - there are usually two matching solutions, one that's right, and one that makes it look like the elbow joint is broken (bent in the wrong direction). You just have to "sense" which solution puts the elbow below the imaginary line from shoulder to hand, as viewed from the body's local coordinate system (just in case the person is upside down for some reason).

I'll give you a short start on the trig. Look up the law of cosines. Match the upper arm and forearm to two sides of the triangle in the diagram documenting the trig; the imaginary line is the third side. Now calculate, from the formula given on that page, the angle opposite the imaginary line. That's the elbow angle required.

The second objective is to position the upper arm. Relative to the body's local coordinate system, find the angle of the imaginary line (in 'pure' trig, this would be the line's world-coordinate-oriented angle or slope). Now, using the law of cosines, calculate the angle at the shoulder. This second calculation gives the interior angle of the implied triangle. Add the 'world' (or body-relative) angle of the imaginary line to the shoulder angle, and you get the angle of the upper arm (in 'world' or body-relative format). That's the angle to position the shoulder. When you set the shoulder joint and elbow to these two angles, the arm is about where it should be.
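The two steps above can be sketched in code. This is a minimal planar (2D) two-link solver following the law-of-cosines reasoning; the names and the API shape are illustrative, not from any particular library, and the target is assumed to be expressed in the shoulder's local frame.

```csharp
using System;

// Minimal two-link planar IK sketch: law of cosines for the elbow,
// then interior shoulder angle plus the slope of the imaginary line.
public static class TwoLinkIK
{
    // upper/fore are the upper-arm and forearm lengths; (tx, ty) is the
    // target in the shoulder's local frame. Returns angles in radians,
    // or null if the target is out of reach (time to get up and walk).
    public static (double shoulder, double elbow)? Solve(
        double upper, double fore, double tx, double ty)
    {
        double dist = Math.Sqrt(tx * tx + ty * ty);
        if (dist > upper + fore || dist < Math.Abs(upper - fore))
            return null; // unreachable even with the elbow straightened

        // Law of cosines: interior angle opposite the imaginary line.
        double cosElbow = (upper * upper + fore * fore - dist * dist)
                          / (2 * upper * fore);
        // Bend of the elbow joint (0 = fully straight arm).
        double elbow = Math.PI - Math.Acos(cosElbow);

        // Interior angle at the shoulder, combined with the slope of
        // the imaginary shoulder-to-target line.
        double cosShoulder = (upper * upper + dist * dist - fore * fore)
                             / (2 * upper * dist);
        double lineAngle = Math.Atan2(ty, tx);
        // Subtracting picks the "elbow below the line" solution;
        // adding would give the mirror ("broken elbow") solution.
        double shoulder = lineAngle - Math.Acos(cosShoulder);

        return (shoulder, elbow);
    }
}
```

For example, with both links of length 1 and the target at (2, 0), the solver returns a straight arm (both angles 0); move the target inside the reach and the elbow bend grows accordingly.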

The next tricks I'll leave to you, but that system, the shoulder/elbow combination, can rotate on a human (you can lift your elbow 'out' to the point that the system sweeps from hanging at your side to sticking out sideways, parallel to the ground). If you don't have tracking data on the elbow itself, you'll have to improvise, because that rotation doesn't actually change where the hand is - the system can 'spin' about 90 to 100 degrees on most people, from where the elbow is tucked into one's side, to where the elbow is rotated upward and outward. All of that is then compounded by the fact that the shoulder joint can rotate freely (unlike the elbow) in two coordinates (like a joystick). It is a calculation very much like that described so far for the upper arm/shoulder joint angle, but instead of being an XY-relative adjustment, it's XZ. To be clear, the calculation I described was envisioned to provide vertical alignment to the target after the elbow angle is known, but the shoulder joint on a human can do the same for horizontal alignment to the target without rotating the entire body - both sweep in circles, one vertically oriented, the other horizontally oriented.
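One way to handle that extra rotation in 3D is to keep the planar solution and choose the plane the elbow swings in. Without elbow tracking data, a fixed body-relative "hint" direction is a common improvisation. This is a hedged Unity sketch under that assumption; all names here are illustrative:

```csharp
using UnityEngine;

// Sketch: orient the 2D solution plane in 3D. The plane contains the
// shoulder-to-hand line and a guessed "elbow side" direction; the planar
// law-of-cosines solution can then be applied inside this frame.
public class ArmPlane : MonoBehaviour
{
    public Transform shoulder;               // shoulder joint
    public Transform hand;                   // tracked hand position
    public Vector3 elbowHint = Vector3.down; // body-relative guess for the elbow side

    public Quaternion SolutionPlane()
    {
        Vector3 toHand = hand.position - shoulder.position; // the imaginary line
        // Normal of the plane spanned by the line and the hint.
        // (Degenerate if the hand lies exactly along the hint direction,
        // which a real implementation would need to guard against.)
        Vector3 planeNormal = Vector3.Cross(toHand, elbowHint).normalized;
        return Quaternion.LookRotation(planeNormal, -elbowHint);
    }
}
```

Rotating the hint (or deriving it from torso orientation) reproduces that 90-to-100-degree 'spin' of the elbow without moving the hand.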

Okay, I managed to achieve my goal, having found a workaround.

So I used Mecanim IK to animate a character's body (which I previously downloaded from Mixamo), and the thing I did was download its Idle animation, so the character didn't really move, which is what I wanted (having IK "without an actual animation").

The result is here:

I had to set its IK goals to the vectors representing the GameObjects I was moving with the coordinates given by my camera/application. Added some interpolation and voilà.

Code for the controller:

    // Smoothly pull the tracked hand objects toward the latest camera coordinates.
    leftHand.transform.position = Vector3.Lerp(leftHand.transform.position, gameobjectVectors[2], Time.deltaTime * 5f);
    rightHand.transform.position = Vector3.Lerp(rightHand.transform.position, gameobjectVectors[3], Time.deltaTime * 5f);
    middleBody.transform.position = gameobjectVectors[5];
    // Move the character to where the spine is on the x axis.
    if (gameobjectVectors[5].x != 0f)
    {
        characterVector.x = gameobjectVectors[5].x;
        playerCharacter.position = Vector3.Lerp(playerCharacter.position, characterVector, Time.deltaTime * 5f);
    }

And IK controller:

using UnityEngine;

public class IKControlScript : MonoBehaviour
{
    private RealsenseController RSController;
    protected Animator animator;
    public Transform rightHandMiddleFinger;
    public Transform leftHandMiddleFinger;
    private Transform rightHandCoordinate;
    private Transform leftHandCoordinate;
    private Transform myRightHand, myLeftHand;

    void Start()
    {
        animator = GetComponent<Animator>();
        rightHandCoordinate = GameObject.Find("lijevaSaka").transform;
        leftHandCoordinate = GameObject.Find("desnaSaka").transform;
        myRightHand = rightHandMiddleFinger.parent;
        myLeftHand = leftHandMiddleFinger.parent;
    }

    // A callback for calculating IK (requires "IK Pass" enabled on the Animator layer).
    void OnAnimatorIK(int layerIndex)
    {
        if (animator)
        {
            animator.SetIKPosition(AvatarIKGoal.RightHand, rightHandCoordinate.position);
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1);
            animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHandCoordinate.position);
            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1);
        }
    }
}