How about SIMD?

SIMD compatibility in browsers is relatively good now; we have tested both Chrome and Safari. Could this technology be used in some engine functions, such as animation and mathematical calculations, to improve the current CPU-side performance of WebGL? That could improve fluency and reduce power consumption to a certain extent (especially on iOS).

It’s been only six months since Safari finally added WebAssembly SIMD support, and Unity moves at a glacial pace.

If you consider that in relation to how web technologies evolve in general, I’d say Apple is the glacier in the room. :wink:

I think SIMD support should not be as unpredictable as multi-threading. Could adding support for it be considered a priority?

You could also post it here, in the roadmap’s “Web” tab:

There is no SIMD on the Unity Web roadmap, but I think this technology would be very cost-effective.

The suggestion was that you post it there.

I’ve posted this suggestion. Power consumption on mobile is too high. Using Apple’s powerlog diagnostic tool, we found that on the WebGL platform the main bottleneck is the CPU:
CPU > DRAM >> GPU

I authored the SSE support in the compiler toolchain: Pull requests · emscripten-core/emscripten · GitHub. You can find the documentation at Using SIMD with WebAssembly — Emscripten 3.1.65-git (dev) documentation.

Since 2021.2, if you’re the author of a native C++ plugin for Unity’s web builds, you can target SIMD by enabling the relevant Emscripten compiler and linker flags (one of -msse*, -mneon*, -msimd128, and -sSIMD in general).
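As an illustration, here is a minimal sketch of what such a plugin function could look like when built with -msimd128. The function name and its use of Emscripten’s wasm_simd128.h intrinsics are just an example, not part of Unity’s plugin API:

```cpp
// Minimal sketch of a native C++ plugin function using Emscripten's Wasm SIMD
// intrinsics. Hypothetical example: ScaleFloats is not a Unity API.
// Assumes the file is compiled with e.g. -O2 -msimd128.
#include <wasm_simd128.h>
#include <stddef.h>

extern "C" void ScaleFloats(float* data, size_t count, float factor)
{
    v128_t f = wasm_f32x4_splat(factor);    // broadcast factor into all 4 lanes
    size_t i = 0;
    for (; i + 4 <= count; i += 4)
    {
        v128_t v = wasm_v128_load(data + i);               // load 4 floats
        wasm_v128_store(data + i, wasm_f32x4_mul(v, f));   // multiply and store
    }
    for (; i < count; ++i)                   // scalar tail for leftover elements
        data[i] *= factor;
}
```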

You’re probably aware that the performance benefit of SIMD in “general purpose” code yields diminishing returns; it works best in very specific, arithmetically heavy computational tasks.

In the Unity codebase, I see three places where SIMD could benefit web builds:

  • Burst jobs
  • CPU skinning
  • Texture format conversion

Today Burst jobs are supported in web builds in a limited fashion, and CPU skinning is actually being tackled in a better way by WebGPU, by moving the skinning over to the GPU’s compute shaders, like all other Unity platforms currently do. (Web has been the odd one out with software skinning, because WebGL does not support compute shaders.) Lastly, texture format conversion is something that should generally be avoided by choosing appropriate compressed texture formats that do not need conversion.

In game engines there are a lot of places where SIMD could help, but the situation with Unity is that there has been a massive shift to migrating compiled code from C++ to C#, and C# code has fewer opportunities to utilize SIMD. (Well, this has recently been changing; see SIMD-accelerated types in .NET - .NET | Microsoft Learn.)

We might enable LLVM’s autovectorization across the codebase at some point, although that can be a bit hit or miss. If you’d like, you can already try it by setting a global environment variable EMCC_CFLAGS=-msimd128 -sSIMD, then restarting Unity and doing a full rebuild of the project.
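For reference, this is a sketch of the kind of loop LLVM’s autovectorizer can usually lower to Wasm SIMD (f32x4 operations) when -msimd128 is in effect, for example via the EMCC_CFLAGS setting above. The function and names are purely illustrative, not part of Unity’s code:

```cpp
// Illustrative only: a dependence-free element-wise loop of the shape that
// clang's autovectorizer typically turns into Wasm SIMD at -O2/-O3 with
// -msimd128 enabled.
#include <cstddef>

void AddArrays(const float* __restrict a,
               const float* __restrict b,
               float* __restrict out,
               std::size_t count)
{
    // Each iteration is independent, so the compiler is free to process
    // four floats per SIMD instruction and emit the scalar tail itself.
    for (std::size_t i = 0; i < count; ++i)
        out[i] = a[i] + b[i];
}
```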

I’ll argue that, on the Web in particular, the situation is more nuanced. We have to do quite a bit of texture conversion, since we use a CMS to populate the content shown by the Unity WebGL Player. This content is uploaded by clients, so it can be difficult to enforce specific formats.

I am willing to bet that similar use cases among more professional uses of the Unity Web platform are not rare, since user-uploaded content often does not match WebGL-optimized formats. And to my knowledge, the web is the only platform Unity supports where leveraging user-uploaded content is a common core driver of the platform.

That is a very good point, and currently true: the GPU hardware base on the web is much more fragmented than on other platforms. That is something we have plans to address, though, with multi-deployment of compressed texture formats as the primary solution.