Making of Time Ghost: Cloth deformation with Unity Sentis

By Christian Kardach and Plamen Tamnev

In September 2024, Unity’s Demo team unveiled a new real-time cinematic demo, Time Ghost. It was created with the Unity 6 engine, which includes a number of features and technological capabilities that, when used together, enabled us to achieve a higher level of visual quality and complexity than ever before.

During the production of Time Ghost, one of the questions we knew we would have to answer was how to author believable characters performing intense animations while dressed in realistic costumes, i.e., clothes made of fabric that would deform and wrinkle correctly according to the movement.

This is not an easy problem to solve, as we know that cloth is generally difficult to get right in 3D, and can so often end up being one of the main culprits responsible for breaking the immersion and the perception of realism. To avoid the known limitations of the industry-standard approaches for creating cloth in games, we needed to take a different path.

In the meantime, we knew that our colleagues at Unity’s Engineering organization had been developing a real-time inference engine that unlocks the potential of using machine learning models within Unity, at runtime, on any platform.

In this post, we will share our process and explain how we combined traditional animation workflows with cutting-edge ML to elevate the quality and realism achievable for real-time character animation and specifically cloth deformation.

Standard character setup

Our character modeling and rigging were done according to industry standards, allowing us to apply the usual motion capture data and perform keyframe animations on our rigged and skinned character model. Nevertheless, for the animation of our main character’s outfit, we sought a more realistic-looking solution beyond just a skinned mesh with additional blendshapes.

Machine learning-driven cloth dynamics

Sentis, Unity’s runtime neural network inference engine, is the core pillar of our character pipeline. It runs machine learning models in the ONNX (Open Neural Network Exchange) format directly in Unity. By integrating Sentis, we were able to train a machine learning model on our own dataset of high-quality, realistic offline cloth simulations and deploy that model at runtime.

Authoring the dataset

First, we created 70 poses of our character model for each of the character’s performances, blending from a neutral pose to the final extreme pose over 30 frames. In Marvelous Designer, we then simulated the pattern-based cloth and captured its deformation behavior across these 70 movements.
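
The blending itself is authored in the animation and simulation tools, but conceptually each training clip is just a short interpolation from the neutral pose to one extreme pose. The Python sketch below only illustrates that layout; the function names, clip constants, and the simple linear blend of per-joint rotation values are assumptions made for clarity, not our exact production setup.

```python
# Illustrative sketch only: 70 clips, each blending from the neutral pose to
# one extreme pose over 30 frames. In production this blending is authored in
# the animation tool; the linear blend of rotation values here is a
# simplification for the sake of the example.
import numpy as np

NUM_POSES = 70
FRAMES_PER_POSE = 30

def build_clip(neutral_pose, extreme_pose):
    """Return a (FRAMES_PER_POSE, ...) array of interpolated poses.

    neutral_pose, extreme_pose: arrays of per-joint rotation values.
    """
    frames = []
    for f in range(FRAMES_PER_POSE):
        t = f / (FRAMES_PER_POSE - 1)  # 0.0 -> 1.0 over the 30 frames
        frames.append((1.0 - t) * neutral_pose + t * extreme_pose)
    return np.stack(frames)

def build_dataset(neutral_pose, extreme_poses):
    """Stack all 70 clips into one (NUM_POSES, FRAMES_PER_POSE, ...) array."""
    assert len(extreme_poses) == NUM_POSES
    return np.stack([build_clip(neutral_pose, p) for p in extreme_poses])
```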

Data extraction

The next step was to extract the delta values, i.e., the difference between the skinned mesh and the simulated mesh. This calculation can be done in Maya or any other DCC tool that gives you access to the model’s vertex data. The process involves reversing the skin deformation while preserving the simulated deformation data.
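
As a rough illustration, assuming a standard linear blend skinning setup, the extraction boils down to inverting the blended skinning matrix per vertex and comparing the result against the bind pose. The NumPy sketch below is a simplified stand-in for what we do in Maya; the names and the linear-blend-skinning assumption are illustrative only.

```python
# Hypothetical sketch of the delta extraction, assuming linear blend skinning.
# Names (skin_matrices, weights, etc.) are illustrative; in Maya the same data
# is available through the skinCluster node.
import numpy as np

def extract_deltas(rest_positions, simulated_positions, skin_matrices, weights):
    """Remove the skinning deformation from the simulated mesh and return
    per-vertex deltas relative to the rest (bind) pose.

    rest_positions:      (V, 3) bind-pose vertex positions
    simulated_positions: (V, 3) cloth-simulated vertex positions for one frame
    skin_matrices:       (J, 4, 4) joint skinning matrices for that frame
    weights:             (V, J) skinning weights per vertex
    """
    num_verts = rest_positions.shape[0]
    deltas = np.zeros((num_verts, 3))
    for v in range(num_verts):
        # Blended skinning matrix for this vertex (linear blend skinning).
        m = np.einsum("j,jab->ab", weights[v], skin_matrices)
        # Reverse the skin deformation: bring the simulated position back
        # into the pre-skinning (rest) space.
        sim_h = np.append(simulated_positions[v], 1.0)
        unskinned = (np.linalg.inv(m) @ sim_h)[:3]
        # The remaining difference to the rest pose is the cloth deformation
        # the network will learn to predict.
        deltas[v] = unskinned - rest_positions[v]
    return deltas
```

These per-vertex, pre-skinning deltas are the values the network is later trained to predict.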

Design and training of the AI model

To train a runtime model, any machine learning framework can be used, as long as the trained model can be exported to the ONNX format. The most common frameworks are TensorFlow and PyTorch. For the Time Ghost project, we used TensorFlow to design and train a custom model based on the extracted data.

The extracted data is used to train a feedforward neural network (FNN) that takes the character’s joint orientations as input and outputs the corresponding per-vertex position deltas.
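
As a rough sketch of what such a network can look like in TensorFlow (the layer sizes, activation choices, and input/output dimensions below are illustrative assumptions, not the exact architecture we shipped):

```python
# Minimal illustrative example of a feedforward network mapping joint
# orientations to per-vertex deltas. All sizes are assumptions for the sketch.
import tensorflow as tf

NUM_JOINTS = 52          # assumed number of joints driving the cloth
NUM_PATCH_VERTS = 4096   # assumed number of vertices in one mesh patch

model = tf.keras.Sequential([
    # Input: per-joint orientations, e.g. flattened quaternions (4 values per joint).
    tf.keras.layers.Dense(512, activation="relu", input_shape=(NUM_JOINTS * 4,)),
    tf.keras.layers.Dense(512, activation="relu"),
    # Output: one (x, y, z) delta per vertex in the patch.
    tf.keras.layers.Dense(NUM_PATCH_VERTS * 3),
])

model.compile(optimizer="adam", loss="mse")

# joint_inputs:  (num_samples, NUM_JOINTS * 4) joint orientations per frame
# vertex_deltas: (num_samples, NUM_PATCH_VERTS * 3) extracted deltas per frame
# model.fit(joint_inputs, vertex_deltas, epochs=200, batch_size=32)
```

The trained model can then be exported to ONNX (for example, with the tf2onnx converter) and imported into Unity as a Sentis model asset.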

Efficient data management and real-time deformations

In Unity, the deformation data is applied on top of our skinned character mesh and processed in patches to keep the runtime model small.
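
Conceptually, the runtime step is the inverse of the extraction step: the predicted deltas are added in pre-skinning space and then carried along by the regular skinning. The NumPy sketch below illustrates the idea only; in the actual project this happens on the GPU through Sentis and the skinning pipeline.

```python
# Conceptual sketch (NumPy) of combining predicted deltas with skinning.
# Illustrative only; the real implementation runs on the GPU in Unity.
import numpy as np

def apply_deltas(rest_positions, deltas, skin_matrices, weights):
    """Add the predicted deltas in pre-skinning space, then apply linear
    blend skinning, so the cloth deformation follows the character's motion."""
    deformed_rest = rest_positions + deltas
    out = np.zeros_like(rest_positions)
    for v in range(deformed_rest.shape[0]):
        m = np.einsum("j,jab->ab", weights[v], skin_matrices)
        out[v] = (m @ np.append(deformed_rest[v], 1.0))[:3]
    return out
```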

All in all, it’s possible to go from existing cloth simulations prepared in a DCC tool (in our case, Marvelous Designer) to real-time deformations in Unity within a couple of hours. In the case of Time Ghost, we were able to reduce 2.5 GB of offline deformation data to a single 47 MB model. Because Sentis runs the model locally on the GPU, where the skinning data already exists, we are able to deform 120K vertices in 0.8 ms.

What is next?

The integration of Sentis not only improves our visual fidelity, achieving highly realistic and dynamic deformations, but also provides a workflow that is efficient and adaptable to the needs of high-quality game development. We believe that similar machine learning-based pipelines will prove suitable for a wide range of production problems that have hit the limits of the difficult tradeoff between the amount of data needed for high-quality results and the need to fit within real-time budgets. We will continue to experiment with this pipeline and apply it to other artistic areas in the future.

In addition, the performance results we are getting with Sentis are so promising that we have started to investigate whether the same pipeline can be applied to mobile development as well. Unity’s engineers have already taken some initial validation steps and confirmed that Sentis can run our character, with realistically deforming cloth, even on a mobile phone. And while this level of fidelity is not typical for mobile game development, being able to use the machine learning-based approach on mobile target platforms means that deformations in stylized animations can also become far more detailed and beautiful than what has been possible so far.

Download a sample

As promised, we are releasing two Unity projects from the Time Ghost demo: one with an Environment scene, and one with the Character.

To see the results of the pipeline described above for yourself, you can download the Time Ghost: Character project from the Unity Asset Store.

It includes a Unity 6 sample scene with the character and the vertex-based deformation model. We have also included documentation describing the model and the training process, so you can try training with your own character simulation data and see whether this approach could be useful for your own projects.

Please note that the character’s face is provided only for educational purposes, i.e., you should not use the face in your project. Everything else in the sample is okay to use commercially, including the results of your training of the model with your own data.

You will notice that the Character sample also includes our hair setup, done with the Hair System we have been developing over the last three Unity Originals demos (The Heretic, Enemies, and Time Ghost). We will explain the latest developments in the Hair System and how the setup works in more detail in a separate blog post, coming soon.

14 Likes

This is an extremely impressive demo, especially with the intelligent use of Marvelous Designer to get the wrinkles in the clothes. Now, for actual in-game scenarios, I would have to use a real-time cloth sim (such as Obi Cloth) with some density to get some wrinkles, followed by vertex blending.

I’m also interested in a Unity approach for facial animation, but it needs to be user-friendly enough for artists like myself who don’t have high-end tools for capture (other than using a GoPro).

2 Likes

I agree that having a model for facial animation would be great. There was a talk at Unite about using Sentis for lip sync, but only very vague details were given about how to achieve this. A demo for this would be great.

2 Likes

And it would have to work with all kinds of facial rigs and setups, especially joint-based rigs with corrective blendshapes.

1 Like

It’s basically like UE’s muscle demo, but for cloth. It would be nice if this were a proper tool like ZIVA :stuck_out_tongue:

1 Like

The difference is that the ML deformer in UE is usable for “normal” users, as you can train it inside the engine by feeding it some Alembic data. It works relatively easily for such a complex feature. I think it trains blendshapes, which are later coupled to bone movements. Of course, creating the data still needs external tools, but some already have plugins for this exact purpose.

1 Like

Does this method of cloth deformation work with Mecanim blending?