My recommendation is to treat the initialMemorySize as strictly the maximum size, i.e. assume it can never grow. (I'm not sure whether there is a way to prevent Unity from trying to grow memory, but if there is, I'd enable it and add a check so that any memory growth beyond the initial size is caught as early as possible.)
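Something like this editor sketch is what I have in mind, assuming you're on Unity 2021.2 or newer where PlayerSettings.WebGL exposes initialMemorySize, maximumMemorySize and memoryGrowthMode (double-check your version's docs, older versions only expose memorySize); the 512 MB value is just a placeholder budget:

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Editor-only sketch: pin the WebGL heap so it cannot grow past the initial size.
// Assumes Unity 2021.2+ where these PlayerSettings.WebGL properties exist.
public static class WebGLMemoryConfig
{
    const int HeapSizeMB = 512; // placeholder budget, tune per project

    [MenuItem("Build/Pin WebGL Heap Size")]
    public static void PinHeapSize()
    {
        PlayerSettings.WebGL.initialMemorySize = HeapSizeMB;
        PlayerSettings.WebGL.maximumMemorySize = HeapSizeMB;
        // Disallow growth attempts entirely so overshoots surface immediately.
        PlayerSettings.WebGL.memoryGrowthMode = WebGLMemoryGrowthMode.None;
    }
}
#endif
```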
Ideally, determine a decent minimum size that works across all targeted devices and browsers, and stick to it by tuning your game assets and code to stay strictly within that limit, and comfortably so (i.e. don't use more than, say, 80% of that memory).
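On the runtime side, a rough watchdog sketch for that 80% comfort margin could look like the following. I'm assuming Profiler.GetTotalReservedMemoryLong() is a good-enough proxy for heap usage here; it may not match the WASM ArrayBuffer size exactly, so verify against your own profiler captures:

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Sketch of a runtime memory watchdog for a fixed WebGL heap budget.
// Assumes Profiler.GetTotalReservedMemoryLong() is close enough to actual
// heap usage to serve as an early warning.
public class MemoryBudgetWatchdog : MonoBehaviour
{
    [SerializeField] long heapBudgetBytes = 512L * 1024 * 1024; // match your initialMemorySize
    [SerializeField] float warnFraction = 0.8f;                 // the "80%" comfort margin
    [SerializeField] float checkIntervalSeconds = 5f;

    void Start()
    {
        InvokeRepeating(nameof(CheckUsage), checkIntervalSeconds, checkIntervalSeconds);
    }

    void CheckUsage()
    {
        long reserved = Profiler.GetTotalReservedMemoryLong();
        if (reserved > heapBudgetBytes * warnFraction)
        {
            Debug.LogWarning($"Memory usage {reserved / (1024 * 1024)} MB exceeds " +
                             $"{warnFraction:P0} of the {heapBudgetBytes / (1024 * 1024)} MB budget.");
        }
    }
}
```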
Growing memory in WebGL builds can fail for a number of reasons, seemingly at random or only in specific scenarios or on specific devices, even when trying to grow by just a couple of extra bytes. This is mainly because the heap can only grow as one contiguous block of memory, while the browser's memory can be heavily fragmented, so there is simply no guarantee that any growth succeeds even if enough free memory is available overall.
I still have a question: do the textures count against the memory in the ArrayBuffer?
All our textures have "Read/Write" unchecked. According to the documentation, Unity will upload the texture to the GPU and release the texture memory on the CPU side. However, according to the Profiler, most of the main memory is taken up by textures, and the ArrayBuffer size is almost equal to the memory size the Profiler tracks.
I have to make an assumption on this one. Let's just say I would not be surprised if textures need to remain in main memory in WebGL, because I believe there is no texture streaming and no guarantee that a texture in GPU memory stays there for the lifetime of the app. But someone with more WebGL expertise on that matter should confirm this. It could also simply be a profiler quirk when analyzing WebGL apps.