We keep getting higher than expected memory usage in our WebGL builds, which seems to be caused primarily by very high reserved memory from Mono. Here’s a profiler example to illustrate:
It’s been a while but I would still like to find a good solution to this problem.
My current approach is to selectively disable features and run profile builds to narrow down where potential optimizations might be. While this sort of works, it is both imprecise and very time-consuming, so I would be very grateful if there is a better way of profiling what causes the reserved memory to rise.
Thanks, I hadn’t tried that one yet. It looks great and I’ll definitely use it more in the future. However, like with the old profiler I don’t see any way to find out what is causing the reserved memory to rise. Should that be possible?
My guess is that during level loading some scripts require a large amount of memory and release it before the end of the frame. Combined with the WebGL limitation that garbage collection only happens once per frame, this could cause the rise, even though the memory profiler (whether it’s the old or new one) shows far lower actively used memory in the frames around the level load.
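If that guess is right, the problematic pattern would look something like the sketch below (hypothetical code, not our actual loader): everything runs inside a single frame, so even though the temporaries become garbage almost immediately, the Mono heap has to grow to hold the peak working set before the end-of-frame GC can run.

```csharp
using UnityEngine;

public class LevelLoader : MonoBehaviour
{
    void Start()
    {
        // Every iteration allocates fresh managed arrays. On WebGL none of
        // this can be collected until the end-of-frame GC pass, so the heap
        // must expand to fit the sum of all temporaries from this frame.
        foreach (MeshFilter filter in FindObjectsOfType<MeshFilter>())
        {
            Mesh mesh = filter.mesh;
            Vector3[] vertices = mesh.vertices; // Mesh.vertices returns a new copy each call
            Color[] colors = new Color[vertices.Length];
            // ... compute per-vertex colors ...
            mesh.colors = colors;
        }
    }
}
```

Since the WebGL heap doesn’t shrink once it has grown, a one-frame allocation spike like this would stay visible as high reserved memory for the rest of the session.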
I managed to find one offending script, which confirms that this is at least part of the problem. The script looped through a large number of meshes to do some vertex color adjustments on load. Changing it to process only a few meshes per frame, spreading the allocations over multiple frames, dropped the total reserved memory by almost 100 MB. That still leaves a very big gap between reserved and actually used memory, though, so any tricks or tools that could help narrow down where to look next would be welcome.
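For anyone hitting the same issue, the fix was essentially to turn the loop into a coroutine. A minimal sketch of the idea (names like `AdjustVertexColors` and the batch size are made up, not our real code):

```csharp
using System.Collections;
using UnityEngine;

public class VertexColorAdjuster : MonoBehaviour
{
    // How many meshes to process per frame; tune against your frame budget.
    const int MeshesPerFrame = 4;

    IEnumerator Start()
    {
        MeshFilter[] filters = FindObjectsOfType<MeshFilter>();
        for (int i = 0; i < filters.Length; i++)
        {
            AdjustVertexColors(filters[i].mesh);

            // Yielding lets the end-of-frame GC reclaim this batch's
            // temporaries before the next batch allocates, so the peak
            // heap size (and thus reserved memory) stays much lower.
            if ((i + 1) % MeshesPerFrame == 0)
                yield return null;
        }
    }

    void AdjustVertexColors(Mesh mesh)
    {
        Vector3[] vertices = mesh.vertices; // returns a copy
        Color[] colors = new Color[vertices.Length];
        for (int v = 0; v < vertices.Length; v++)
            colors[v] = Color.Lerp(Color.black, Color.white, vertices[v].y);
        mesh.colors = colors;
    }
}
```

The trade-off is that the adjustment now takes several frames to finish, so it may need to run behind a loading screen.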