I’ve made a benchmark program that performs various operations, mostly running Mini Micro code (which is implemented as plain old C# code). Running this in the WebGL build, on the very same MacBook Pro as the desktop build, everything takes substantially longer:
And this doesn’t seem to have anything to do with graphics per se; some of the worst offenders (Semiprime and Race Sim) generate no output at all while they’re cranking away. It’s all just math and looping.
Any idea why the same code running on the same hardware should be 2-4 times slower in a WebGL build?