Hi everyone, I’m Bill Cullen, a Principal Product Manager within Unity’s AI team. I am thrilled to announce the release of Sentis 2.1, coinciding with the launch of Unity 6. This marks a significant milestone in our journey to empower developers with cutting-edge AI capabilities directly in the Unity runtime.
My team is focused on fusing advances in AI with real-time games and apps, enabling fast, local AI model inference that can happen inside the game loop at high frame rates. Sentis can import, optimize, and run AI models. The use cases for runtime AI are not LLMs or asset generation; those generally involve very bulky models and are much better suited for cloud inference, where unbounded compute and longer inference times are acceptable. Runtime AI is focused on unlocking specific player features like smart, interactive NPCs, object detection, or real-time speech synthesis.
To deliver these features, nuanced AI models are built and optimized to run on resource-constrained end-user hardware. My hope is that the Sentis framework can enable some new features in your game or app, all while being relatively easy to integrate and maintaining your ability to deploy to any Unity runtime-supported platform.
In this post, I want to cover the 3 most common questions I get about Sentis:
- What can I do with it?
- Where can I get AI models?
- What are the future trends in Runtime AI?
Let’s go!
What can I do with it?
When we started this journey to enable AI models in the runtime almost five years ago, the relevant use cases were sparse. You could run basic reinforcement learning models for game agents and image-to-image models for novel game effects. But a unique convergence happened in the last few years that enabled a wave of new use cases:
1. Volume of AI models: A huge number of innovative and open-source AI models are now available and can be converted to standard file formats
2. Powerful native compute: Nearly ubiquitous, powerful “native” compute (the chips that power our phones, PCs, VR headsets, and game consoles) can run these AI models performantly at real-time speeds (30+ FPS)
As this convergence happened we rearchitected Sentis to ensure it could accommodate nearly any AI use case, and it’s been amazing to see what developers have done with it. Here’s a look at some of my favorite projects in a sizzle reel:
From the video, you saw that Sentis has 3 big categories of capabilities that are all super exciting to me.
1. Real-world interactions
2. Smarter gameplay
3. Game effects
It was a fast sizzle, so let’s break them down into more detail!
1. Real-world interactions
Real-world inputs like the camera, microphone, and motion sensors can now drive new player interactions. The example below uses VR device motion sensor data to generate complementary character animations. This category opens up so many new real-world interactions that once required special hardware and sensors, or were previously impossible. It is also my personal favorite because of the fun and unique gameplay it is unlocking.
2. Smarter gameplay
You can now use AI to easily build nuanced in-game mechanics, like automated game opponents and game outcome predictions. The example below evaluates poker game moves given the player’s current card hand. This category has been around for a long time, as developers have been using reinforcement learning and agents since before AI was hot. The big advancement is that many more freely available models are now instantly compatible with Sentis. It’s also fairly easy to train your own reinforcement learning model using something like Unity ML-Agents.
3. Game effects
You can also enhance player experiences with new effects, animations, and rendering techniques. The example below guides and controls a satellite docking maneuver. The practical Time Ghost cloth-deformation demo from the recent Unite keynote, which users were very excited about, also exemplifies this category. Developers are just getting started with AI-based game effects, and I expect to see many more of them emerge in the next few years.
Where can I get AI models?
Every Sentis implementation starts with a unique gameplay concept or a problem to solve. But the concept only comes to life if the developer can create, or find, an AI model that meets their needs. This is why we always showcase Sentis demonstrations with a first-principles “problem” and “solution” framing, to make the search for the right model as straightforward as possible. Here is how we think about it:
Problem framing
- Consider a discrete gameplay concept or a problem to solve in your game
- What “input” from your game could you provide to a neural network to approximate (solve) this problem?
- How would you integrate the “outputs” from the neural network back into your game experience?
- Do you need it to run in real time with your game at a high frame rate? If yes, then Sentis could be your solution.
Solution framing
- Are there models available for this type of solution? If not, you will have to train one yourself (with a tool like Google Colab or PyTorch) and acquire some training data.
- Is the model’s memory size small enough to ship with your runtime build? Obviously, you will not ship a multi-GB model with your mobile game, so most models used with Sentis are a few MB.
- Can the model run performantly, or should it be optimized and tuned for your game? After optimizing, is it now within your frame performance budget?
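To make this framing concrete, here is a minimal sketch of what a Sentis integration can look like in C#. The class, field, and method names (`MoveEvaluator`, `Evaluate`) are illustrative, and the calls reflect the Sentis 2.x API (`ModelLoader.Load`, `Worker`, `Tensor<float>`) as documented; treat this as a sketch rather than production code.

```csharp
using Unity.Sentis;
using UnityEngine;

// Illustrative sketch: feed a feature vector from gameplay into a small
// neural network ("input"), read a score back ("output"), once per query.
public class MoveEvaluator : MonoBehaviour
{
    public ModelAsset modelAsset; // an imported .onnx/.sentis asset, assigned in the Inspector
    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // BackendType.CPU also works; GPUCompute runs inference on the GPU
        worker = new Worker(model, BackendType.GPUCompute);
    }

    public float Evaluate(float[] features)
    {
        using var input = new Tensor<float>(new TensorShape(1, features.Length), features);
        worker.Schedule(input);
        // Blocking readback for simplicity; prefer async readback inside a real game loop
        using var output = (worker.PeekOutput() as Tensor<float>).ReadbackAndClone();
        return output[0];
    }

    void OnDestroy() => worker.Dispose();
}
```

The key design point is that the worker is created once and reused every query; creating a `Worker` per inference would thrash memory and stall the frame.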
OK, now that you know how to think about these solutions, let’s get to the models! Here are my top 6 model repos:
- Hugging Face (Start here! We have pre-converted .sentis models along with C# wrappers already integrated with Unity)
Note: Models in the ONNX format can be imported directly into Unity, but models in PyTorch or TensorFlow formats need to be converted to ONNX first.
What are the future trends in Runtime AI?
This is where I get to speculate with my crystal ball. Before you get too excited, I want to be very explicit that the below is NOT a roadmap of features for Sentis. Rather, they are two trends that I am observing and want to align with.
Larger models
The answer to “What can I do with Sentis?” is ever-expanding. I predict that within about 2-3 years, most models we can’t run today (LLMs, generative transformer models, etc.) will be able to run on many (but not all) devices using native compute. Combined, the 3 factors below will dramatically expand the model suite that Sentis can handle:
- Model performance: Models are becoming more powerful relative to their size and compute requirements. Each model generation surpasses the last in quality while shrinking in size and cost.
- Device compute: Devices are getting more powerful. Multi-core GPUs are still improving dramatically, and NPUs (chips designed explicitly for AI model inference) are showing up in everyday consumer hardware. NPUs offer the dual promise of offloading inference from the GPU while enabling faster model inference: a Goldilocks scenario.
- Optimization tricks: Sentis already enables techniques like model weight quantization, which can reduce model size by 75%, and this will only improve. We also enable tricks like layer fusing, which merges inefficient math ops, and slicing, which spreads inference over multiple frames for nuanced real-time tuning. What other tricks will we come up with? I’ll leave it to the math guys to sort out, but you can be sure we are working on a bunch!
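As an illustration of two of these tricks, the sketch below quantizes a model’s weights to 8-bit and spreads inference across frames with a coroutine. It assumes the Sentis 2.x entry points `ModelQuantizer.QuantizeWeights` and `Worker.ScheduleIterable` as I understand them from the docs; names like `BudgetedInference` and `layersPerFrame` are my own.

```csharp
using System.Collections;
using Unity.Sentis;
using UnityEngine;

public class BudgetedInference : MonoBehaviour
{
    public ModelAsset modelAsset;
    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // Weight quantization: store weights as 8-bit integers (roughly 75% smaller)
        ModelQuantizer.QuantizeWeights(QuantizationType.Uint8, ref model);
        worker = new Worker(model, BackendType.GPUCompute);
    }

    // Slicing: schedule a few layers per frame so a heavy model
    // never blows a single frame's performance budget
    public IEnumerator RunSliced(Tensor<float> input, int layersPerFrame = 5)
    {
        var schedule = worker.ScheduleIterable(input);
        int layer = 0;
        while (schedule.MoveNext())
        {
            if (++layer % layersPerFrame == 0)
                yield return null; // resume on the next frame
        }
    }

    void OnDestroy() => worker.Dispose();
}
```

Tuning `layersPerFrame` is the real-time knob: fewer layers per frame means lower per-frame cost but higher end-to-end latency for each inference.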
AI Realism
The term “AI” is very hot right now, and while it’s amazing, at Unity we are focused on finding tangible value for users today. AI is just another tool in your toolbox for building a great player experience. So, what I mean by “AI Realism” is that developers will become very pragmatic about using runtime AI: it can clear big technical hurdles for you, but it’s not magic, and it’s not always the best tool for your needs. I already see it being used widely in the ways I’ve outlined above, purely because it’s the best solution to certain problems, and I believe developers and players will come to see it that way. For example, if you just want a cross-platform speech synthesis solution, then an AI model in the Unity runtime is simply the best option for performance, cost, and simplicity reasons.
Get started now
I encourage you to check out Sentis further, read our docs, and consider whether it may be useful for your game or app. Below are all the getting-started resources you need. If you want to collaborate more closely with us, DM me on discussions or contact us through this form.
You can also get an excellent overview of the product with many real-world customer examples and technical explanations in our recent Unite 2024 breakout session:
We can’t wait to see the incredible AI-driven experiences you’ll create with Sentis 2.1 and Unity 6. Happy developing!