I have a Mono memory leak (and it’s a big one): you can comfortably reach 5GB of RAM within 10 minutes, and it just keeps on going.
Within the Profiler, only the ‘Mono’ number increases, and the Memory Profiler doesn’t record this leak (it just stays at about 900MB).
This leak is present both in the Editor and in standalone PC builds.
My game is a voxel game and the leak only seems to occur when loading new chunks.
All the meshes share the same material (and Texture2D) via MeshRenderer.sharedMaterial.
All Chunks are pooled.
All Lists used for creating the Chunk’s mesh (vertices, triangles, uvs) are reused using List.Clear().
Blocks are stored in jagged arrays within the Chunk.
Async-await and Task.Run() are used pretty heavily for multithreaded chunk generation/loading.
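For context, the meshing code follows roughly this pattern (a simplified sketch with placeholder names, not my actual code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the meshing pattern (placeholder names): the lists
// live on the Chunk and are cleared and refilled on every rebuild instead
// of being re-allocated.
[RequireComponent(typeof(MeshFilter))]
public class Chunk : MonoBehaviour
{
    // Reused across rebuilds via Clear() rather than new allocations.
    readonly List<Vector3> vertices = new List<Vector3>();
    readonly List<int> triangles = new List<int>();
    readonly List<Vector2> uvs = new List<Vector2>();

    Mesh mesh;

    public void BuildMesh()
    {
        vertices.Clear();
        triangles.Clear();
        uvs.Clear();

        // ... fill the lists from the block data ...

        if (mesh == null)
            mesh = new Mesh();
        mesh.Clear();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.SetUVs(0, uvs);
        GetComponent<MeshFilter>().sharedMesh = mesh;
    }
}
```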
Unity 2019.4.5
IL2CPP
URP
Does anyone have any idea what might cause such a large memory leak, or how I can track down what it is? Unity isn’t telling me where it’s coming from. Thank you.
Sounds like a reasonable way of doing it. And yeah… 2240 lines is quite long… maybe you could try commenting out some of your code, figuring out which part causes the leak, and then posting it here. But at that stage you’ll probably have figured out the problem yourself anyway.
Could it be that you have some async tasks that never complete and so you end up with millions of them?
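E.g. something along these lines (a hypothetical sketch, not your code) would keep every captured chunk alive indefinitely if the condition never flips:

```csharp
using System.Threading.Tasks;

// Hypothetical illustration: a task that never completes keeps everything it
// captured reachable forever, so thousands of these can quietly pile up.
class ChunkLoadJob
{
    public volatile bool neighboursReady;      // if nothing ever sets this...
    public byte[] blocks = new byte[65536];    // ...this array stays alive too

    public async Task WaitThenGenerateAsync()
    {
        while (!neighboursReady)
            await Task.Delay(100);             // polls forever, never finishes
        await Task.Run(() => { /* build the mesh from 'blocks' */ });
    }
}
```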
Which Memory Profiler are we talking about here?
I assume that by the Mono number, you mean the one shown in the Memory Profiler Module that is part of the Profiler Window. However, it is relevant to know whether the Mono number that is rising is in the “Used” or the “Reserved” row, or if both are steadily growing (Reserved maybe growing a bit less steadily and just jumping up every now and then).
But which Memory Profiler do you mean that doesn’t show the memory for it? Are those 900MB shown when switching the Memory Profiler Module to Detailed Mode and taking a snapshot, or is this information you found via the Memory Profiler package?
Not sure why the Memory Profiler wouldn’t show you this, unless it is maybe down to heap fragmentation. However, the Memory Profiler package contains a Memory Map, which should help in figuring out whether fragmentation is at play. It also gives you the ability to compare snapshots against each other, so you can take two snapshots, one before and one after loading new chunks, and diff them. You can also see this diff in the Memory Map.
FYI, this post of mine might help further in understanding that view, especially relating to fragmentation (still gotta wrap that into the Manual). Also, somewhere around 2019.x the colors for that view got messed up a bit. I’m currently working on a fix…
Within the Memory Profiler’s Tree Map, the only noticeable differences are:

With the originally loaded 230 chunks:
- 146.3MB (787) allocated to Vector3[]
- 333.6MB (542) for Mesh
- 750MB total for Mono

After travelling some distance and loading/unloading chunks, with 245 chunks loaded:
- 186.2MB (787) allocated to Vector3[]
- 291.5MB (542) for Mesh
- 2400MB total for Mono
I’m not entirely sure what I’m looking for when it comes to the Memory Map.
I have a good amount of blocks that look like this, just with slightly different names:
As well as one very large but empty one with 1.5GB called ALLOC_GFX_MAIN.
Some of them near the top are a lot more green. Most of them have no objects in the Objects list.
Examples:
Then there are many blue regions that are all very similar: a lot of Vector3[] and Vector2[] averaging 400KB each, along with ChunkData classes (72B each; each contains a 3D jagged array of BlockData, which is a ushort and a byte). ChunkDatas are not pooled; maybe they should be?
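For reference, the data layout is roughly this (simplified sketch; the real names and fields differ):

```csharp
// Rough shape of the data described above (simplified).
public struct BlockData
{
    public ushort id;
    public byte meta;
}

public class ChunkData
{
    // The ChunkData object itself is tiny (~72B), but the jagged arrays it
    // references hold the bulk of the block data.
    public BlockData[][][] blocks;

    public ChunkData(int size)
    {
        blocks = new BlockData[size][][];
        for (int x = 0; x < size; x++)
        {
            blocks[x] = new BlockData[size][];
            for (int y = 0; y < size; y++)
                blocks[x][y] = new BlockData[size];
        }
    }
}
```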
Deep profiling with the Profiler doesn’t work. The Editor basically freezes on the first chunk and after waiting five minutes, it’s still frozen.
Yeah, I’ve been doing some of that.
I made some changes for testing purposes. When a ‘Region’ (16x16 Chunks) is unloaded, I removed it from a certain ‘transition period’ and instead destroy it instantly using DestroyImmediate.
It appears one cause of the memory leak was that pooled Chunks (that weren’t actively used) were still referencing their previous Region (which was supposed to be destroyed) and their ChunkData (which contains info about the blocks and such). I’m not entirely sure how this led to such a large leak, unless a RegionData (which holds the references to all 16x16 ChunkDatas) was caught up in it all.
With the current design of my game, there are pretty much always surplus pooled Chunks, about 5-10. I guess these surplus Chunks were still referencing old data.
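The fix was essentially to null out those references when a Chunk is returned to the pool. A simplified sketch, assuming the Chunk exposes its Region and ChunkData references (placeholder names):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the fix: nulling the stale references when a Chunk is returned
// to the pool lets the old Region and its ChunkDatas be garbage collected,
// instead of being kept alive by the 5-10 idle pooled Chunks.
public class ChunkPool
{
    readonly Stack<Chunk> pool = new Stack<Chunk>();

    public void Return(Chunk chunk)
    {
        chunk.Region = null;               // was still pointing at the destroyed Region
        chunk.Data = null;                 // was still pointing at the old ChunkData
        chunk.gameObject.SetActive(false);
        pool.Push(chunk);
    }
}
```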
Although the Memory Profiler does show these RegionDatas and ChunkDatas being referenced by some pooled Chunks, it didn’t really show their actual size. It appears the Memory Profiler was not picking up the 3D jagged arrays that contain the blocks (which would account for quite a bit of data).
Evidence:
After quite a bit of testing, my Mono usage now usually stays in the range of 1.8-4GB, instead of increasing constantly and virtually without end (with some rare random decreases).
FYI, I only just found the References button for the Memory Profiler - very useful.
I was thinking that could have led somewhere, as with my current code it does seem like that could happen if I were to unload a Chunk before it actually finished generating, but I didn’t notice anything.
Since your current issue is with Mono Memory, I’ll focus on the blue bits here.
I’m getting more and more confident that this memory growth is down to fragmentation. Pooling and reusing the ChunkDatas might be one way to avoid this. But those big sections of dark blue (Mono memory) with only a tiny bit of managed object memory in them (which indicates the dark blue parts are definitely part of the managed heap and not just Mono-internal allocations for reflection or other type metadata) are something to investigate further.
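E.g. a trivial pool along these lines (a hypothetical sketch, reusing the ChunkData shape from above) would avoid re-allocating the arrays on every load/unload cycle:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of ChunkData pooling (assuming all chunks are the same
// size): reusing the same jagged arrays keeps the allocations stable instead
// of punching new holes into the managed heap on every load/unload cycle.
// NOTE: not thread-safe; lock or use ConcurrentStack<T> if renting off the
// main thread.
public static class ChunkDataPool
{
    static readonly Stack<ChunkData> pool = new Stack<ChunkData>();

    public static ChunkData Rent(int size)
    {
        return pool.Count > 0 ? pool.Pop() : new ChunkData(size);
    }

    public static void Return(ChunkData data)
    {
        // Optionally wipe the block arrays here before reuse.
        pool.Push(data);
    }
}
```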
What are the tiny bits of memory left in them? If they weren’t around, the entire heap section might eventually get unloaded. If you compare the Memory Maps of the before and after snapshots, can you see what memory that occupied such spaces gets unloaded? You can do so through the diffing feature, as well as by manually switching back and forth between the views for the two snapshots.
Update: the above was written before your latest post.
Not sure I can entirely follow what references what, but it sounds like you might’ve found the tiny bits that were left behind, so the heap sections can now be unloaded again. I’d guess that the empty space left by your 3D arrays or Region data contributed to their size and the big gaps left behind. That sounds somewhat more likely than the Memory Profiler not capturing the array data, mostly because the managed heap memory is just dumped/streamed into the snapshot file as-is, so it should be in the data. There could still be a bug in the crawler, such that this data might not have been found as referenced (and therefore as used memory) by the package’s code, but that’s the less likely explanation. If you have more indications that this is the case, you could file a bug report, ideally with something showing that the memory is actually still around and usable from within the game but not shown in the snapshot.
Huh… with that evidence added in the edit, and the last message… yeah, it sounds more likely that something is up in the Memory Profiler code there. Could you please file a bug report and ping me the issue ID?
Yes, perfect! Thank you very much for this! Kinda embarrassing that it fails with such a straightforward repro… I guess we should add some checks for multidimensional collections. We’ll get that fixed.
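For anyone following along, the repro essentially boils down to nested arrays like this (a simplified sketch of the idea, not the exact report project):

```csharp
using UnityEngine;

// Rough shape of the repro (simplified): the inner arrays of a jagged array
// hold almost all of the memory, but the snapshot crawler wasn't attributing
// them, so they showed up neither in the object's size nor as references.
public class JaggedArrayRepro : MonoBehaviour
{
    ushort[][][] blocks;   // kept alive by this component

    void Start()
    {
        const int size = 64;                    // 64^3 ushorts ≈ 512KB total
        blocks = new ushort[size][][];
        for (int x = 0; x < size; x++)
        {
            blocks[x] = new ushort[size][];
            for (int y = 0; y < size; y++)
                blocks[x][y] = new ushort[size];
        }
        // Take a snapshot with the Memory Profiler package while this scene
        // runs, then inspect this object's size and references.
    }
}
```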