Making of Time Ghost: Cloth deformation with Unity Sentis

By Christian Kardach and Plamen Tamnev

In September 2024, Unity’s Demo team unveiled a new real-time cinematic demo, Time Ghost. It was created with the Unity 6 engine, which includes a number of features and technological capabilities that, when used together, enabled us to achieve a higher level of visual quality and complexity than ever before.

During the production of Time Ghost, one of the questions we knew we would have to answer was how to author believable characters performing intense animations while dressed in realistic costumes, i.e., clothes made of fabric that would deform and wrinkle correctly according to the movement.

This is not an easy problem to solve, as we know that cloth is generally difficult to get right in 3D, and can so often end up being one of the main culprits responsible for breaking the immersion and the perception of realism. To avoid the known limitations of the industry-standard approaches for creating cloth in games, we needed to take a different path.

In the meantime, we knew that our colleagues in Unity’s engineering organization had been developing a real-time inference engine that unlocks the potential of using machine learning models within Unity, at runtime, on any platform.

In this post, we will share our process and explain how we combined traditional animation workflows with cutting-edge ML to elevate the quality and realism achievable for real-time character animation and specifically cloth deformation.

Standard character setup

Our character modeling and rigging were done according to industry standards, allowing us to apply the usual motion capture data and perform keyframe animations on our rigged and skinned character model. Nevertheless, for the animation of our main character’s outfit, we sought a more realistic-looking solution beyond just a skinned mesh with additional blendshapes.

Machine learning-driven cloth dynamics

Sentis, Unity’s runtime neural network inference engine built around the Open Neural Network Exchange (ONNX) format, is the core pillar of our character pipeline. With Sentis, we were able to train a machine learning model on our own dataset of high-quality, realistic offline cloth simulations and deploy that model at runtime.

Authoring the dataset

First, for each of the character’s performances, we created 70 poses of our character model, each blending from a neutral pose to the final extreme pose over 30 frames. In Marvelous Designer, we simulated the pattern-based cloth and captured the cloth deformation behavior from these 70 movements.

Data extraction

The next step was to extract the delta values, i.e., the difference between the skinned mesh and simulated mesh. This calculation can be done in Maya or any other DCC where you have access to model vertex data. The process involves reversing the skin deformation while maintaining the simulated deformation data.
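To illustrate the idea, here is a minimal NumPy sketch of delta extraction, assuming standard linear blend skinning; the function and parameter names are hypothetical, not the actual Maya implementation:

```python
import numpy as np

def extract_deltas(simulated, rest, skin_matrices, weights, joint_ids):
    """For each vertex, undo linear blend skinning on the simulated
    position and subtract the rest position, leaving a delta in
    rest-pose space. Hypothetical helper for illustration only."""
    deltas = np.zeros_like(rest)
    for v in range(len(rest)):
        # Blend the 4x4 joint skinning matrices by skin weight.
        M = sum(w * skin_matrices[j] for w, j in zip(weights[v], joint_ids[v]))
        p = np.append(simulated[v], 1.0)          # homogeneous coordinates
        unskinned = (np.linalg.inv(M) @ p)[:3]    # reverse the skin deformation
        deltas[v] = unskinned - rest[v]           # what the simulation added
    return deltas
```

Because the deltas live in rest-pose space, they remain valid when the character is skinned into any pose at runtime.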

Design and training of the AI model

To train a runtime model, any machine learning framework can be used, as long as the trained model can be converted to the ONNX format (Open Neural Network Exchange). The most common frameworks are TensorFlow and PyTorch; for the Time Ghost project, we used TensorFlow to design and train a custom model on the extracted data.

The extracted data is used to train a feedforward neural network (FNN) that takes the character’s joint orientations as input and outputs the corresponding vertex position deltas.
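For a feel of the shape of such a network, here is a NumPy sketch of the forward pass with untrained random weights; the layer sizes are purely illustrative, and the real Time Ghost model and its dimensions are not described in this post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: 50 joints as quaternions, 1,000 vertices.
n_joints, n_verts, hidden = 50, 1000, 256
W1 = rng.normal(0.0, 0.02, (n_joints * 4, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.02, (hidden, n_verts * 3))
b2 = np.zeros(n_verts * 3)

def predict_deltas(joint_quats):
    """joint_quats: (n_joints, 4) joint orientations as quaternions.
    Returns (n_verts, 3) vertex position deltas."""
    x = joint_quats.reshape(-1)             # flatten into one input vector
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    return (h @ W2 + b2).reshape(n_verts, 3)

deltas = predict_deltas(rng.normal(size=(n_joints, 4)))
```

Training amounts to fitting `W1`, `b1`, `W2`, `b2` so that the predicted deltas match the extracted ones for each pose; in practice that is done in TensorFlow or PyTorch and exported to ONNX.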

Efficient data management and real-time deformations

In Unity, the deformation data is applied on top of our skinned character mesh and processed in patches to maintain a small runtime model.

All in all, it’s possible to go from existing cloth simulations prepared in a DCC (in our case, Marvelous Designer) to real-time deformations in Unity within a couple of hours. In the case of Time Ghost, we were able to reduce 2.5 GB of offline deformation data to a single 47 MB model. Because Sentis runs the model locally on the GPU, where the skinning data already exists, we are able to deform 120K vertices in 0.8 ms on the GPU.
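Conceptually, the runtime side is the inverse of the extraction step: add the predicted rest-space deltas to the rest positions, then skin the result. A minimal NumPy sketch, assuming linear blend skinning (the demo does this on the GPU where the skinning data lives; the helper below is hypothetical):

```python
import numpy as np

def apply_deltas(rest, deltas, skin_matrices, weights, joint_ids):
    """Add predicted rest-space deltas to the rest positions, then
    apply linear blend skinning. CPU sketch of what the demo's GPU
    pass does conceptually; not the actual implementation."""
    out = np.zeros_like(rest)
    for v in range(len(rest)):
        # Blend the 4x4 joint skinning matrices by skin weight.
        M = sum(w * skin_matrices[j] for w, j in zip(weights[v], joint_ids[v]))
        p = np.append(rest[v] + deltas[v], 1.0)  # corrected rest position
        out[v] = (M @ p)[:3]                     # skin into the current pose
    return out
```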

What is next?

The integration of Sentis not only improves our visual fidelity, achieving highly realistic and dynamic deformations, but also provides a workflow that is efficient and adaptive to the needs of high-quality game development. We believe that similar machine learning-based pipelines will prove suitable for a wide range of production problems that have hit the limits of the tradeoff between the amount of data needed for high-quality results and the need to fit within real-time budgets. We will continue to experiment with this pipeline and apply it to other artistic areas in the future.

In addition, the performance results we are getting with Sentis are so promising that we have started to look into whether the same pipeline can be applied to mobile development as well. Unity’s engineers have already made some initial validation steps and confirmed that Sentis can run our character, with realistically deforming cloth, even on a mobile phone. And while high fidelity is not very typical for mobile game development, being able to use the machine learning-based approach on mobile target platforms means that deformations in stylized animations can also become far more detailed and beautiful than what has been possible so far.

Download a sample

As we promised, we are releasing two Unity projects from the Time Ghost demo – one with an Environment scene, and one with the Character.

To see the results of the pipeline described above for yourself, you can download the Time Ghost: Character project from the Unity Asset Store.

It includes a Unity 6 sample scene with the character and the vertex-based deformation model. We have included documentation describing the model and process of training, so you can try training with your own character simulation data and see whether this approach could be useful for your own projects.

Please note that the character’s face is provided only for educational purposes, i.e., you should not use the face in your project. Everything else in the sample is okay to use commercially, including the results of your training of the model with your own data.

You will notice that the Character sample also includes our hair setup, done with the Hair System we have been developing through the last three Unity Originals demos (The Heretic, Enemies, and Time Ghost). We will explain in more detail the latest developments in the hair system and how the setup works in a separate blog post, coming soon.


This is an extremely impressive demo, especially with the intelligent use of Marvelous Designer to get the wrinkles in the clothes. Right now, for actual in-game scenarios, I would have to use a real-time cloth sim (such as Obi Cloth) with enough density to get some wrinkles, followed by vertex blending.

I’m also interested in a Unity approach for facial animation, but it needs to be user-friendly enough for artists like myself who don’t have high-end tools for capture (other than using a GoPro).


I agree, having a model to achieve facial animation would be great. There was a talk at Unite about using Sentis for lip sync, but only very vague details were given about how to achieve it. A demo for this would be great.


And it would have to work with all kinds of facial rigs and setups, especially joint-based rigs with corrective blendshapes.


It’s basically like UE’s muscle deformer demo, but for cloth. It would be nice if this were a proper tool like ZIVA :stuck_out_tongue:


The difference is that the ML Deformer in UE is usable by “normal” users, as you can train it inside the engine by feeding it some Alembic data. It works relatively easily for such a complex feature. I think it trains blendshapes, which are later coupled to bone movements. Of course, creating the data still needs external tools, but some already have plugins for this exact purpose.


Does this method of cloth deformation work with Mecanim blending?


Hi!
Yes, it will work, since the model reads the joint orientations as input; as long as you have access to those values, the deformer will behave accordingly!

Regards,
Christian Kardach


Hey! The Ghost demo is really impressive and I really want to try the tech myself, but I can’t get through some issues.

  • I managed to create a set of FBX-pose/ABC-cache pairs similar to the example, but I can’t get past the data extraction step. Whatever I do, I just get a `some (103140) target vertices were far from reference (max delta: 0.00257204)` kind of warning. More than that, even the demo data from the asset gives me the same results; I can’t extract data from the original files either.
  • What are “patches”? I can’t find any mention of them in the provided help/docs, and it’s really hard to understand what they are from the Maya scripts alone. Apparently my dataset is missing the “patches” data, which may be crucial for the system to work. I assume it’s some data written to the vertices, but that’s just a wild guess.

Hi SammmZ!
Thank you :slight_smile:
I will try and clarify the patches in the documentation in the next update, I might have missed that part so thank you for pointing this out!
The system needs one or more vertex color patches on the mesh (they only need to exist on the render mesh). Select a group of vertices and assign them a random vertex color; typically, every vertex should have a vertex color assigned. This is only necessary for training data extraction.
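To show the idea in code, here is a hypothetical NumPy sketch (not the shipped Maya script) that groups vertex indices into patches by their vertex color:

```python
import numpy as np

def patches_from_vertex_colors(colors):
    """Group vertex indices by their (r, g, b) vertex color.
    Each distinct color marks one patch. Illustrative helper only."""
    patches = {}
    for i, c in enumerate(colors):
        # Round to tolerate small float differences in stored colors.
        patches.setdefault(tuple(np.round(c, 4)), []).append(i)
    return list(patches.values())
```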

The tolerance error you get means the distance between the render mesh and the Alembic mesh is too big; it could be that the two meshes are slightly offset in the scene, or that the topology is completely different.

I will also clarify this section in the tutorial a bit better.

Kind regards,
Christian Kardach


Thanks for clarifying, it looks amazing.

Can it also react to physics forces? From colliders or Rigidbody.AddForce?


Oh, thank you so much for following this topic and for the clarification! To be honest, I’ve spent too many days with the asset already, and I can assure you there are definitely some problems with SentisDeformerDataSampler.cs. Let me show step by step how to replicate the issues.

  1. Do a clean install of the Ghost 1.1 asset (I’m using Unity 6.0.23f) with all the necessary imports.
  2. Open the ExportClothData scene.
  3. Press the ExtractVertexMapping button on the export component.
  4. You will get an error: `DirectoryNotFoundException: Could not find a part of the path "...\Assets\training_data_interactive\training_data_output\patch_data\patch_0000.txt"`.
  5. Manually create the patch_data folder and press ExtractVertexMapping again. Now you will get 3 patch files in that folder without issues.
  6. Now press ExtractData.
  7. You will get an error: `DirectoryNotFoundException: Could not find a part of the path "...\Assets\training_data_interactive\training_data_output\x\pose_0000.bin"`.
  8. Manually create the x folder and press ExtractData again.
  9. Now you will get the warning I mentioned before, `some (103140) target vertices were far from reference (max delta: 0.00257204)`, and no data will be exported.

To clarify once again: this is a clean setup of the original asset and original files, so it’s 100% the original topology and object placement. Still, you can’t extract data from it in the expected way.

BUT! If you manually create all the x, y, and patch_data folders, spend some time reloading the scene, reimporting the asset, restarting the project, and performing all the usual magic rituals, at some point you can get the fully exported data without warnings or errors. So I assume something happens during the export process that breaks the intended flow in the scene. Unfortunately, this makes it really difficult to bring my own data into this setup, because it’s barely possible to tell whether the issue is with my dataset or with SentisDeformerDataSampler.cs :frowning:

Nevertheless, the tech is amazing and I’m looking forward to implementing it in my projects ASAP! I’ve been waiting so long for something like this to be released <3

I’m not Christian, but if I may respond: no, it’s not supposed to react to forces. It’s not a “cloth simulation” asset but a “deformer”, i.e., an asset that modifies certain vertices of a skinned mesh in response to the joint transforms of that very same mesh. It was indeed used to deform the cloth in the example, but in a nutshell it’s really a very smart and efficient corrective blendshape driver :smiley:

Hi!
Thank you for the detailed breakdown of the issues you encountered, and I’m really sorry it’s not as smooth as I was aiming for; I will definitely address all of the above!
I’m happy you made some progress yourself, and I’m thinking it might be a good idea to add a pre-check that prints some more information to the console before extraction starts.

Kind regards,
Christian Kardach

Thanks, that makes sense. Even though it doesn’t work with physics, it’s still really exciting to hear that it’s basically real-time blendshapes, something I’ve been wanting forever. So there’s still tons of potential there.

For what it’s worth, I think it’d be theoretically possible to incorporate certain kinds of physics forces into this kind of technique, as long as you can figure out how to express them in a constant-sized input vector. “Global” forces like gravity ought to be relatively easy to express: you’re just adding 3 extra inputs (for the x/y/z values of the force) and expanding the training data to include poses under different gravity conditions.

I’m also reminded of that technique in single-pass forward rendering where you pick the N “most important” lights for an object and pass them as shader inputs, so that you can calculate their contributions without needing to do any looping/branching. Perhaps there is some similarly applicable concept here of picking the N “most important” colliders that are near the character, and representing them to the model as spheres or planes?
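To sketch the gravity part of that idea in code (hypothetical names, and just the input construction, with no trained model attached):

```python
import numpy as np

def build_input(joint_quats, gravity):
    """Concatenate flattened joint orientations with a world-space
    force vector, extending the model's input as suggested above."""
    return np.concatenate([np.asarray(joint_quats).reshape(-1),
                           np.asarray(gravity, dtype=float)])

# 50 joints * 4 quaternion components + 3 force components = 203 inputs.
x = build_input(np.zeros((50, 4)), [0.0, -9.81, 0.0])
```

The network would then need training poses simulated under several gravity directions so it learns what those extra three inputs mean.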

Hey! By any chance, is there any update on the constant `target vertices were far from reference` issue? I still haven’t managed to get past it, and I can’t find anyone who has succeeded in using the tech.

Hi SammmZ,
There is an update that should drop any day now (or even today) and I’ll post here once it’s published on the Asset Store!

Kind Regards,
Chris


Sounds awesome! Thanks a lot!