Motion Matching (MxM) Animation System for Unity (Beta Released)

Check it out on the Asset Store

Download the Standalone Demo

Motion Matching for Unity (MxM) is an alternative character animation system to Mecanim. With motion matching you can achieve advanced, organic and fluid animation without the need for an animation state machine. You don't have to define transitions or set up conditions. Provided you have decent animations with enough coverage, you can create a fully working character in about 15 minutes, complete with starts, stops, plants and turns. With a little more effort and coding ability, it's not hard to create very complex animation such as parkour, fighting and sports.

How Does It Work?
Motion matching is a relatively new animation technique that is not constrained by the concept of animation clips, states, pre-defined blends or transitions. Motion matching allows animation to flow freely through your entire animation set, jumping to any pose at any time. Animation poses are chosen based on both the current pose and the desired gameplay input*. Whichever pose best matches the current pose and your desired input gets chosen and is rapidly blended in. By balancing the pose against the desired input, motion matching achieves high-fidelity animation while still maintaining good input response**.
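
To make that concrete, here is a minimal sketch of a brute-force motion matching search in C#. This is illustrative only, not MxM's actual implementation or API; every type and function name here is invented, and a production system would pre-process features and optimise the search.

```csharp
using UnityEngine;

// Illustrative brute-force motion matching search (not MxM's actual code).
// Every frame, each candidate pose in the database is scored against the
// character's current pose and the trajectory requested by gameplay, and
// the cheapest candidate is blended in.
public struct PoseEntry
{
    public int ClipId;                 // which source clip the pose came from
    public float Time;                 // time of the pose within that clip
    public Vector3[] JointPositions;   // key joint positions (character space)
    public Vector3[] JointVelocities;  // key joint velocities
    public Vector3[] FutureTrajectory; // future root positions baked from the clip
}

public static class MotionMatcher
{
    // Weights balance continuity (pose) against responsiveness (trajectory).
    public static int FindBestPose(PoseEntry[] database, PoseEntry current,
                                   Vector3[] desiredTrajectory,
                                   float poseWeight = 1f, float trajectoryWeight = 1f)
    {
        int bestIndex = -1;
        float bestCost = float.MaxValue;

        // Brute-force linear search over every pose in the animation set.
        for (int i = 0; i < database.Length; i++)
        {
            float cost = poseWeight * PoseCost(database[i], current)
                       + trajectoryWeight * TrajectoryCost(database[i], desiredTrajectory);
            if (cost < bestCost)
            {
                bestCost = cost;
                bestIndex = i;
            }
        }
        return bestIndex; // the winner is then rapidly blended in
    }

    static float PoseCost(PoseEntry candidate, PoseEntry current)
    {
        float cost = 0f;
        for (int j = 0; j < current.JointPositions.Length; j++)
        {
            cost += (candidate.JointPositions[j] - current.JointPositions[j]).sqrMagnitude;
            cost += (candidate.JointVelocities[j] - current.JointVelocities[j]).sqrMagnitude;
        }
        return cost;
    }

    static float TrajectoryCost(PoseEntry candidate, Vector3[] desired)
    {
        float cost = 0f;
        for (int p = 0; p < desired.Length; p++)
            cost += (candidate.FutureTrajectory[p] - desired[p]).sqrMagnitude;
        return cost;
    }
}
```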

Industry Use:
Motion matching as a technique was originally developed about two years ago by Ubisoft for the game 'For Honor'. It has since been used in several EA and Ubisoft games and is even being used in 'The Last of Us Part II' by Naughty Dog. As you can see, the technique of motion matching is not novel and has significant AAA application. (I am not affiliated with any of the above-mentioned companies or games.)

Trailer

Introduction & Overview

Motion Matching with Raw Mocap (alpha footage)

Note: This is raw mocap placed on a generic rig. The weird hands and arms are a retargeting issue and have nothing to do with MxM.

Performance:
Motion Matching for Unity uses cutting-edge Unity technology to achieve solid performance while remaining stable. It uses Unity's Job System, SIMD mathematics library and Burst compiler to achieve lightning-fast, multi-threaded performance.

This video, which is already out of date, shows a performance benchmark on a 4-core i5 system running motion-matched characters above 60fps without LODs or infrequent updates.
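
For readers curious what "jobified" pose matching can look like in practice, here is a small sketch of a Burst-compiled parallel cost pass. It is illustrative only: the job, its fields and the feature layout are invented here, not MxM's internal code.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Sketch of spreading a pose-cost evaluation across cores with the Job
// System and Burst (illustrative; MxM's internal jobs differ). Each pose's
// feature vector is compared against the query in parallel, and the
// cheapest result is picked on the main thread afterwards.
[BurstCompile]
public struct PoseCostJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> PoseFeatures; // flattened: FeaturesPerPose entries per pose
    [ReadOnly] public NativeArray<float3> QueryFeatures;
    public NativeArray<float> Costs;                    // one cost per pose
    public int FeaturesPerPose;

    public void Execute(int poseIndex)
    {
        float cost = 0f;
        int start = poseIndex * FeaturesPerPose;
        for (int f = 0; f < FeaturesPerPose; f++)
            cost += math.distancesq(PoseFeatures[start + f], QueryFeatures[f]);
        Costs[poseIndex] = cost;
    }
}
```

Scheduling is then a one-liner, e.g. `new PoseCostJob { /* fill fields */ }.Schedule(poseCount, 64).Complete();`, with the cheapest entry in `Costs` picked on the main thread.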

Features:
Motion Matching for Unity is not just a locomotion system. It supports a number of features that allow the user to create almost any kind of animation. Features include:

  • No animation state machine
  • Fluid and responsive animation output**
  • Support for cut clips as well as un-cut mocap
  • Powerful event system for dynamic actions (e.g. vaults, sword attacks, parkour)
  • Powerful tagging system to allow control over animations (e.g. stances etc.)
  • Motion timing editor (change animation timing to match gameplay)
  • Animation warping for events (precise environmental contacts)
  • Layer system
  • Use alongside Mecanim (switch at any time)
  • Compatible with custom animation playable graphs
  • Compatible with IK systems

Useful Links
- Asset Store Page
- Unity Connect
- Project Roadmap
- Discord (for support)
- User Manual
- Quick Start Guide
- FAQ
- Tutorial Videos

*MxM needs gameplay code to tell it what to do, and you have to write that gameplay code yourself. MxM is an animation system, not a gameplay system.

**Responsiveness and fidelity of the resulting animation depend on the quality, responsiveness and coverage of your source animations. Watch this video for more detail.


Looks awesome! I’d love to use this some time. Do you have a target release date yet, or is it too early in development still?

Quite interested to see where this goes.

Thanks. It's moving along quite quickly now, but it is very experimental in nature, so it's hard to say when it will be done.

Some significant improvements in quality over the last week.

Huge progress over the last week. Probably the most difficult part is out of the way now, i.e. getting the primary motion matching algorithm correct and error-free. Most discrepancies are now down to animation quality.

As well as improving the stability of the current system, there are a few things I need to do before the first release, which will be a beta. These features include:

  • Markup System (for stances etc.)
  • Event System
  • Idle Animation Smoothing (tricks to make idle animations work)
  • Direct Control Matching Policy (for things like interactions, e.g. pulling a lever)

Since it seems Kinematica will take ages to release, I'm interested in this too. What are your plans for the asset? And how much does it cost on the CPU right now?

Very interested in this; I also wouldn't mind testing the alpha version. While this is probably far from your intended use case, do you think this technique could be useful in a scenario where you have lots of mocap takes, the pose at the end of one take does not match the pose at the start of the next, and you need a convincing transition animation? Could a similar technique be used to pull bits from a large mocap database and create that transition?

Hi Razzraziel,

The plan is to release it when it’s ready. That sounds like a huge cop-out but the whole thing is rather experimental so I don’t really know when that will be. That being said, I’m hoping that won’t be too long as I plan to release it as a beta on the asset store at a reduced price. For the beta I want to have enough features to allow different stances, combat and better idle animations as a minimum.

Regarding performance: with ~3,000 poses (5 minutes of animation in total), the update on the MxMAnimator takes about 3.6 - 3.8ms in the editor. Note that the current approach is a completely brute-force linear search; my focus has been on getting it working first, and optimisation comes later. MxM actually has two different matching techniques; the pose culling technique performs better, at 1.5 - 2.2ms un-optimised. However, the caveat is that this technique seems to require better / more animation data.

Following the initial beta release there will be a big focus on jobifying the search and improving the data structures (kd-tree / voxel tree) to improve performance, probably by 100x or better. There will also be some other minor performance improvements before the beta release, which should all just drop in nicely.

I'd say the matching algorithm itself is pretty well optimised; a lot of the data is pre-processed so there is less impact at run-time. It's the search, which currently goes through every single pose, that needs better optimisation from this point.

It's possible. Motion matching just picks whichever animation best matches both the pose and the trajectory. So as long as you have animations somewhere within your mocap data that could bridge the gap, it could work. But yes, as you said, it's not the intended use case and would likely need some customisation to achieve.

I’m not exactly sure what this is doing, but will the smooth running movement also link up with jumping and navigation over small obstacles and slopes? Just curious, because I’m looking at the running on a plane and thinking … it works in a perfect world … :wink:

Short answer… yes

But it's a bit more complex than that. MxM will match whatever trajectory you give it, provided that you have the animations you need and you code your gameplay / trajectory model to detect such things. The trajectory model is not a part of MxM, as it is tied too closely to your gameplay, which will be unique for every game.

The trajectory model that comes with MxM is for simple use cases, though it will be a bit more complex by the time of release. MxM provides hooks to easily make your own trajectory model and integrate it into the system. If you wanted your character to, say, jump over small obstacles (maybe 0.3 - 0.5m high), you would have to code that behavior into your gameplay and feed that prediction into MxM. I'll probably do an example of this later, but it's not really a priority at the moment.
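
As a rough sketch of what such a gameplay-side trajectory model can look like (hedged: this is not MxM's actual hook API; the class and method names are invented, and you'd wire the result into MxM through whichever extension point the manual describes):

```csharp
using UnityEngine;

// Illustrative trajectory prediction (not MxM's actual API). Gameplay calls
// Predict() each frame and hands the resulting points to the animation
// system as its desired future trajectory.
public class SimpleTrajectoryModel : MonoBehaviour
{
    public int sampleCount = 4;        // predicted points spread over the horizon
    public float timeHorizon = 1f;     // predict up to ~1 second ahead
    public float responsiveness = 5f;  // how quickly velocity steers toward input

    // Assumed to be kept up to date by your movement code each frame.
    public Vector3 currentVelocity;

    public Vector3[] Predict(Vector3 desiredDirection, float maxSpeed)
    {
        var points = new Vector3[sampleCount];
        Vector3 simPos = transform.position;
        Vector3 simVel = currentVelocity;
        float dt = timeHorizon / sampleCount;

        for (int i = 0; i < sampleCount; i++)
        {
            // Steer the simulated velocity toward the input direction and
            // integrate: a cheap stand-in for stepping a real controller.
            simVel = Vector3.Lerp(simVel, desiredDirection * maxSpeed, responsiveness * dt);
            simPos += simVel * dt;
            points[i] = simPos;
        }
        return points;
    }
}
```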

Also note that there's nothing stopping you from adding your own procedural animation (e.g. foot IK) on top of the animation produced by MxM. Things like that are handled the same way as in traditional systems.
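
For example, a very simple grounding pass layered after the animation update might look like this (a generic Unity sketch, nothing MxM-specific; a real solution would run two-bone IK on the legs rather than just clamping the ankles):

```csharp
using UnityEngine;

// Generic sketch of layering procedural foot placement on top of whatever
// pose the animation system produced this frame. Runs in LateUpdate so it
// executes after animation has been applied.
public class SimpleFootPlacement : MonoBehaviour
{
    public Transform leftFoot;
    public Transform rightFoot;
    public LayerMask groundMask;
    public float footHeight = 0.08f; // sole-to-ankle offset (tune per rig)

    void LateUpdate()
    {
        PlaceFoot(leftFoot);
        PlaceFoot(rightFoot);
    }

    void PlaceFoot(Transform foot)
    {
        // Cast from above the animated foot down to the ground and clamp
        // the ankle so it never sinks below the surface.
        Vector3 origin = foot.position + Vector3.up * 0.5f;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 1f, groundMask))
        {
            float groundY = hit.point.y + footHeight;
            if (foot.position.y < groundY)
                foot.position = new Vector3(foot.position.x, groundY, foot.position.z);
        }
    }
}
```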

@CptKen

Great, waiting for the first optimized beta release then.

Also, this might help (maybe you already know of it).

I really want a performant & fluid next-gen controller approach, but since I'm a solo dev I can't afford the time needed to develop something like this.

Cheers. Deep learning animation selection is a whole different beast, but the concept that you can jump to any pose is the same. Motion matching's beauty is in its (relative) simplicity. Last time I checked, deep learning animation systems are really hard on performance.

Okay, so this could possibly be part of a solution for smooth parkour, with different forms of wall running, climbing, vaulting, leaping, etc., but we'd need to approach it a bit like the Motion Controller asset does: make a bit of code that tests the criteria for a change in motion (like slope, obstacle height, etc.) and then implements the movement mode change.

I am guessing this is a stand-alone system and cannot be integrated into Opsive's Ultimate Character Controller?

I'm not completely familiar with how the Motion Controller asset works, but you're somewhat on the right track. MxM simply jumps to whatever animation frame best matches your current pose and the future trajectory desired by gameplay. I suppose you could say that the current pose and the desired future are the criteria for animation selection.

A character controller of some kind is needed to make that future prediction, by simulating future movements within a single frame and recording the results. If the character controller is capable of vaulting over objects, then motion matching should be able to choose an animation frame that animates the character appropriately… provided you have appropriate vaulting animations in your library.

I'm not familiar with how Opsive's Ultimate Character Controller works. However, it's important to note that MxM is not a character controller; it is an animation synthesis system. The controller that is in MxM right now is only for testing purposes.

The system is designed to have custom gameplay controllers (not just movement controllers) hook into it and provide it with a future prediction. These predictions are usually made by simulating some kind of character controller code for a number of timesteps within a single frame, up to about one second into the future. If it's possible to do this with the Ultimate Character Controller, then it's possible to integrate it with MxM.

Once the asset is released, I fully plan to investigate integrations with other character controller assets and hopefully get in contact with some of their developers for collaboration. However, for now I'm just focused on MxM itself.

Oh, okay. So … it is a bit like IK, then, in that it’s a modifier. It lies in between the Actor (character) Controller and the Motion Controller (animation selector) in the ootii system, I guess.

Not quite, it’s more like a replacement for Mecanim.

This is correct: MxM completely replaces Mecanim, though I do plan to allow smooth transitions between the two at run-time for those who want that flexibility.

To understand motion matching you have to abandon all knowledge of state-machine based animation, and even the concept of animation clips. MxM takes raw mocap data as its input, pre-processes it based on a few settings (no manual cutting of clips) and then uses that data to synthesise animation at runtime. There is no state machine; the system continuously jumps to any animation at any time to achieve the best result.

Here is the general workflow at the moment:

  • Import your mocap animations (literally drag and drop)
  • Automatically pre-process them using MxM's PreProcessor asset (this generates a new animData asset) ← (this is old-school machine learning, not deep learning)
  • Add the MxMAnimator component to your character and slot in the animData asset you just created
  • Tweak a few settings (calibration) ← I didn't even do this for the video I last showed
  • Play!

This is all I did to get the animation shown in the video I posted.

There is of course a lot more you 'can' do to tweak, finesse and control the animation to your liking. However, this gets you 80% of the way there really fast. It also doesn't include the process of building the character controller, which is up to you, as it's tied too closely to your gameplay for me to make any reasonable assumptions.

There really is no state machine. Doing the above few steps takes maybe five minutes, and it's enough to get locomotion with stops, turns, jukes, starts, etc. That usually takes a long time to do with a state machine.

I might make a video showing the process.

I'll keep you posted on performance. In preparation for jobification, I refactored my data structures last night. This alone, without jobification, reduced the processing time on average by about 0.5ms (an 11% improvement) for the pose costing technique, and by 66% for the pose culling technique (which went from 1.5ms down to 0.5ms of process time).

This is without fast search algorithms or jobification, so I'm pretty confident it will get down to a reasonable level.

I appreciate the extended description; it helps. But I think I still see it the same way. I'll be following the progress here to try to get a better idea of what this is. It sounds like something I would love to use, like the next step in animation evolution after the Playables API.

I'm thinking it still needs a character controller, plus a state manager of some sort. It may replace the animation state machine, but we still have to list internally what movement options are available and manage our control modes or states of action … like whether we're in a stealth mode (crouching), in combat, jumping or falling, or switching from one dance move to another … right?
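
A minimal sketch of the kind of thin gameplay state manager described above (illustrative only: the MoveState enum and RequireTag call are invented here; the idea of restricting the pose search by tag reflects MxM's described tagging system, but its real API may differ):

```csharp
using UnityEngine;

// Sketch of a thin gameplay state manager layered over the animation system.
// Gameplay still decides which movement options are available and simply
// tells the pose search which tagged animations are currently valid.
public enum MoveState { Normal, Stealth, Combat, Airborne }

public class CharacterStateManager : MonoBehaviour
{
    MoveState state;

    public void SetState(MoveState newState)
    {
        if (state == newState) return;
        state = newState;

        switch (state)
        {
            case MoveState.Stealth:  RequireTag("Crouch");  break;
            case MoveState.Combat:   RequireTag("Combat");  break;
            case MoveState.Airborne: RequireTag("Falling"); break;
            default:                 RequireTag("");        break;
        }
    }

    void RequireTag(string tag)
    {
        // Placeholder: forward the tag to the animation system so the pose
        // search is restricted to animations marked with it.
        Debug.Log($"Required animation tag: '{tag}'");
    }
}
```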