Runtime optimization of Unity Sentis

Unity Sentis is an inference engine that can run on the CPU and GPU. I have a suggestion that would likely improve its runtime performance considerably.

For years now, phones have shipped on-device AI: speech recognition powering Apple's Siri and Google's Assistant, object recognition to enhance photos and videos, text-to-speech engines, and so on. These workloads pushed companies like Qualcomm and Apple to build a dedicated hardware Neural Engine to accelerate AI tasks, and when Apple announced its ARM transition in 2020, we saw Neural Engines on laptops and desktops for commercial use for the first time (since the M-series architecture is based on the A-series Bionic chips, they essentially took the A14's Neural Engine and put it on the M1).

With that in mind, let's talk about games. When you profile a game at runtime, the GPU is usually maxed out, so any additional overhead there would almost certainly harm performance. That leaves the lesser option of running the inference engine on the CPU. The hit there might be smaller, since CPUs tend to have multiple cores and Unity games are still mostly single-threaded by default, but CPU architectures don't allow for fast AI processing, so the quality of the output would likely need to be lowered compared to the GPU.

That's where my suggestion comes in: make Unity Sentis run on the Neural Engine, which is by design the best accelerator for AI tasks. Doing so would let us developers avoid compromising on output quality the way we would on the CPU, and it wouldn't hurt the game's FPS, since it adds no overhead to a GPU already saturated with rendering.

For reference, starting with the A14 and M1, the Neural Engine could run roughly 11 trillion operations per second; later generations went up to 15.8 trillion, and then roughly 22 and 31.6 trillion on the M1 Ultra and M2 Ultra respectively. This power could be leveraged to run Unity Sentis like a charm on iPhones, iPads, and Macs. On the PC side, I believe AMD just shipped its own Neural Engine in its latest laptop CPUs, and I believe Intel will follow the trend with its upcoming 14th gen.

Long story short: make use of the Neural Engine, which is already built to outclass CPUs and GPUs at AI tasks, without compromising performance. A rough sketch of where this could slot into the current Sentis API follows the list below.
Every Apple Silicon chip (mobile and desktop) comes with a Neural Engine.
AMD's latest laptop chips also come with a Neural Engine.
Snapdragon, Exynos, and Tensor G chips come with Neural Engines as well.
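
For context, here is roughly what backend selection looks like in Sentis 1.x today, and where a Neural Engine backend could slot in. This is only a sketch: the `BackendType.NPU` value in the comments is hypothetical, not a real API.

```csharp
using Unity.Sentis;
using UnityEngine;

public class NpuBackendSketch : MonoBehaviour
{
    public ModelAsset modelAsset; // assign your model in the Inspector

    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // Today the choice is between CPU and GPU backends:
        //   BackendType.CPU, BackendType.GPUCompute, BackendType.GPUPixel
        // A Neural Engine integration could appear as one more enum value,
        // e.g. a hypothetical BackendType.NPU, leaving the rest of the
        // worker API unchanged.
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    void OnDestroy() => worker?.Dispose();
}
```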




Hi, thanks for the thoughtful topic. We are actively investigating integrations with the various neural-specific chip architectures. We'll post when an update is available.


Nice information, thanks.
By the way, I have just tried and inspected the demo project on an M1 Mac. There seem to be a number of performance issues, such as low FPS.
It's a good first attempt, but I think there are some glitches specific to the M1 chip architecture. The performance side needs substantial improvement.
Thanks for this framework.

Hey there - yes, we are aware of the issues with the sample project. The model in the sample is actually quite computationally heavy, which is the root of the problem. We are working on a few new samples with smaller models that will be far more performant; they should land soon.

I would love to try the new samples, but please consider using the Neural Engine instead of lowering the quality of the models. It's way faster than the GPU for this kind of work, consumes very little power since it's a dedicated hardware accelerator, and is free far more often than the CPU or GPU.
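
If you want to quantify that trade-off today, here is a rough way to time a single inference per backend with the current Sentis 1.x API (the model asset and the input shape below are placeholders; match them to your own model):

```csharp
using System.Diagnostics;
using Unity.Sentis;
using UnityEngine;

public class BackendBenchmark : MonoBehaviour
{
    public ModelAsset modelAsset; // placeholder: assign your model in the Inspector

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        foreach (var backend in new[] { BackendType.CPU, BackendType.GPUCompute })
        {
            using var worker = WorkerFactory.CreateWorker(backend, model);
            // Placeholder input shape; adjust it to your model.
            using var input = new TensorFloat(new TensorShape(1, 3, 224, 224),
                                              new float[1 * 3 * 224 * 224]);
            var sw = Stopwatch.StartNew();
            worker.Execute(input);
            // Reading the output back forces any pending GPU work to finish,
            // so the measurement covers the full inference.
            var output = worker.PeekOutput() as TensorFloat;
            output.MakeReadable();
            sw.Stop();
            UnityEngine.Debug.Log($"{backend}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

In practice you would run a warm-up inference first, since the first GPU execution includes shader compilation and skews the numbers.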


I think running on neural engines would definitely improve performance a lot, but it also needs fixed-point support, similar to architectures like Glow from Facebook.
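
To make the fixed-point point concrete, here is a minimal sketch of symmetric int8 quantization with a per-tensor scale. This is my own illustration of the representation NPUs typically execute natively, not anything Sentis or Glow ships:

```csharp
using UnityEngine;

public static class Int8Quant
{
    // Pick a scale so the largest magnitude in the tensor maps to 127.
    public static float ScaleFor(float[] values)
    {
        float max = 0f;
        foreach (var v in values) max = Mathf.Max(max, Mathf.Abs(v));
        return max > 0f ? max / 127f : 1f;
    }

    // Round to the nearest representable step and clamp to the int8 range.
    public static sbyte Quantize(float x, float scale) =>
        (sbyte)Mathf.Clamp(Mathf.RoundToInt(x / scale), -128, 127);

    // Recover an approximation of the original float.
    public static float Dequantize(sbyte q, float scale) => q * scale;
}
```

Each float is stored as one signed byte plus a shared scale, so the accelerator can do the bulk of the math in integer arithmetic.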


Hi, we are still working on neural engine support. In the meantime, you can check out the three newly added samples, which are better than the original sample (it was a bit too complicated and bloated).
