What is IL2CPPMemoryAllocator?

When using a memory profiler and comparing two snapshots, I noticed a considerable increase in memory usage within Native/Unity Subsystems/Managers/IL2CPPMemoryAllocator.

I’m curious about the role of ‘IL2CPPMemoryAllocator’ and whether there are any ways to prevent this memory growth.

Thanks.

Hello,
The IL2CPPMemoryAllocator is what we use, starting in Unity 2022.2, to allocate Virtual Machine memory for the IL2CPP managed VM via our Memory Manager. This memory was previously untracked and is the equivalent of what the Mono VM would allocate, i.e. memory needed to run the managed portion of the runtime, mostly type metadata for managed types. Its amount is primarily determined by how many types you use, which can be disproportionately affected by heavy use of generics. Using Reflection can inflate far more types than you’d normally use.

Then again, just taking a memory snapshot forces us to initialize and inflate types that are sort of in memory but not yet used, e.g. because their constructors haven’t been called yet: you may have created an array of the class type FooBar[] but never called new FooBar(). Without us initializing FooBar, the snapshot would capture the array object with incomplete type data, so we initialize it. I believe the memory growth related to that would only show up in the following snapshot.
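For illustration, here is a minimal C# sketch of the pattern described above (FooBar is the hypothetical type from the example; exactly when the runtime inflates its metadata is an internal detail, so this only shows the usage pattern):

```csharp
using System;

class FooBar
{
    // A static constructor only runs once the type is actually used
    // (an instance is created, a static member is accessed, etc.).
    static FooBar() => Console.WriteLine("FooBar initialized");

    public int Value;
}

class Demo
{
    static void Main()
    {
        // Creating an array of FooBar does NOT construct any FooBar instance,
        // so the type may still be only partially set up at this point.
        var items = new FooBar[8];

        // The first real use of the type triggers full initialization.
        items[0] = new FooBar();
    }
}
```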

So it might be an idea to take two snapshots in quick succession just to get that initial inflation out of the way. It could of course be that more inflated types enter memory before you take your next snapshot, but if you use the second of those initial ones as a base, any growth is more likely to be just the general increase in types being used or inflated via Reflection.

Btw, the difference with Reflection is that some Reflection calls inflate ALL the types in an assembly, even the unused ones.
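As a hedged example, a call like Assembly.GetTypes() enumerates every type defined in an assembly, which is the kind of Reflection usage that can touch types nothing else in the program uses:

```csharp
using System;
using System.Reflection;

class ReflectionScanDemo
{
    static void Main()
    {
        // GetTypes() returns every type defined in the assembly. Under IL2CPP,
        // touching types this way can inflate their VM metadata even if they
        // are never used anywhere else in the program.
        Assembly asm = typeof(ReflectionScanDemo).Assembly;
        foreach (Type t in asm.GetTypes())
        {
            Console.WriteLine(t.FullName);
        }
    }
}
```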


Wow, it was a great help for me. Thank you so much for your kind response 🙂


Could you clarify which Reflection calls can bloat runtime memory?

All reflection calls that cause the VM to touch and thereby inflate types that would otherwise not have been used.


So, for instance, if you iterate the types in an assembly, everything gets pulled into a reflection metadata table? But if those types actually get used at runtime, you’d pay that cost anyway? Or is there just additional overhead when accessing Reflection?

I asked this in a private support request as well, but is it possible to filter reflection type information out of the build completely? If you could, would that even have a meaningful effect?

Alright, I’ll try to structure this a bit more:

  • If a type is stripped, it will never be in the build and Reflection will never find it

  • If a type is included in the build (either because it isn’t stripped out aggressively, is referred to in some way that prevents stripping, is a potential generic variant of a used type, or is referred to but never actively used, with no methods ever called on it, in a way the stripping process can’t predict), it’s basically just a number/pointer until its first usage.

  • Generics are even less than that: only the base generic type exists, and every new type combination generates a new compound type based on the byte code

  • Once a type is actually used, the info from the binaries is used to load and “inflate” it into the VM’s native memory, so that all the info necessary to work with it is in RAM, i.e. its fields with type and name info and its function table

  • If a type is used, accessing it via Reflection will create some managed info objects like the TypeInfo or MethodInfo based on the info already loaded into the VM’s native memory.

  • The managed bits will be collected whenever you stop using them

  • The native bits will never be unloaded, but they were already there to begin with because you used the type.

  • If the type was never used before asking Reflection to fetch you some info on it, the type is now inflated until the runtime is terminated, even if nothing else will ever use it.
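A minimal sketch of the last two bullets (NeverUsed is a hypothetical type that only Reflection ever touches; whether and when the metadata inflation happens is a runtime internal that this snippet can only hint at):

```csharp
using System;
using System.Reflection;

class NeverUsed
{
    public void DoWork() { }
}

class InflationDemo
{
    static void Main()
    {
        // No NeverUsed instance is ever created. Asking Reflection about it
        // still forces the VM to load ("inflate") its native type metadata,
        // which then stays resident until the runtime terminates.
        Type t = typeof(NeverUsed);

        // MethodInfo objects are managed and will be garbage collected once
        // unreferenced; the native metadata behind them is what persists.
        foreach (MethodInfo m in t.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine(m.Name);
        }
    }
}
```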


In Mono, there are cases where you might, e.g., be using a generic collection or an array of a certain type without ever actually using any type-specific functions (e.g. always effectively treating it as just object, or initializing the collection but never populating it with any concrete instance for it to hold). In these instances the type used as a generic type argument might never be inflated.

In IL2CPP, that can’t happen and the generic type will get inflated. A high amount of VM memory usage under IL2CPP almost always comes from

  • A high usage of generics
  • Usage of Reflection
  • Or worst case: both
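A small sketch contrasting the two usage patterns described above (Payload is a hypothetical type; the Mono-vs-IL2CPP inflation difference is per the explanation above, not something this snippet can demonstrate by itself):

```csharp
using System.Collections.Generic;

class Payload { public int Id; }

class GenericsDemo
{
    static void Main()
    {
        // Pattern 1: the collection is created but only ever treated as
        // 'object'. Under Mono, Payload itself may never need to be inflated.
        object opaque = new List<Payload>();

        // Pattern 2: type-specific use. Both List<Payload> and Payload must
        // be fully inflated. Under IL2CPP, even pattern 1 inflates the
        // generic instantiation, which is why heavy generics usage shows up
        // under IL2CPPMemoryAllocator.
        var list = new List<Payload>();
        list.Add(new Payload { Id = 1 });
    }
}
```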

There are currently no easy ways to get more clarity on anything specific in that regard, outside of using native profiling tools like Xcode Instruments to see where such allocations might be coming from.


Hi, @MartinTilo

Here are a few more questions regarding IL2CPPMemoryAllocator:

Based on my understanding, when using Reflection and generics, memory needs to be allocated for the metadata of the types in use, and this allocation is categorized under IL2CPPMemoryAllocator in the Unity Memory Profiler.

In that case, I’m wondering whether the memory allocated by functions like Il2CppMemoryWrapper::AllocateXXX, as shown in the attached image (Xcode Instruments) below, is indeed being categorized under IL2CPPMemoryAllocator in the Unity Memory Profiler.

If that’s correct, then my second question is: when I checked a specific section of the game in development using Xcode Instruments Allocations, the total size of memory allocated by Il2CppMemoryWrapper functions came out to about 10 MB. However, when I took snapshots at the beginning and end of that same section using the Unity Memory Profiler, the memory increase under IL2CPPMemoryAllocator was around 37 MB. What could be the reason for this discrepancy between the 10 MB in Xcode Allocations and the 37 MB in the Unity Memory Profiler? (Is it perhaps because Xcode is showing Resident Memory (dirty), while the Unity Memory Profiler is showing Allocated Memory?)

Yes.

I’m not familiar enough with Xcode to know, and I don’t know exactly where you took that number from. If it is from all the calls that went through a callstack similar to the one you posted, then I’d assume that number is very likely the Allocated amount as well.

Hello,
In my test, comparing the 2021.3 and 2022.3 versions of the Unity engine, there is 40 MB more memory usage in 2022 than in 2021 on iOS, which is due to the il2cpp memory allocation changing to the Unity Allocator, and there are a lot more 80 KB mallocs. Hopefully you can give me a solution.
Thanks

And how much is Untracked on 2021? (Though I guess the reliability of Untracked on 2021 is very platform dependent, and I can’t remember how reliable it was on iOS…) I would assume that this is not growth but attribution, i.e. previously that memory usage wasn’t tracked, and now it is. If so, it’s a non-issue.

I got the data directly from the Xcode test, not the Memory Profiler, and I tested on an empty project.

The difference between the two versions is that 2021 mallocs 16 KB of memory about 500 times, for a total of 8 MB; these malloc stacks end with ‘MemoryPool::Addregion’. In 2022 the allocation size is 80 KB, 156 times, for a total of 12 MB, and the stack shows ‘Il2cppMemoryWrapper’ after ‘MemoryPool::Addregion’, so Unity allocates the memory. That’s the main difference.

In addition, I found that between the two versions the il2cpp source code changed the size of ‘const size_t kDefaultRegionSize’ from 16 * 1024 to 64 * 1024. Could that be the main reason? And why does Unity allocate 80 KB instead of 64 KB?

One last question: how do I edit the il2cpp source code on 2022? The Xcode project seems to use a precompiled static library, libil2cpp.a.

Thanks

Probably because there are profiling/memory-management overheads to consider in Debug builds, and iOS system pages are 16 KB aligned these days (they used to be 4 KB). I think this has been addressed though; are you on the latest 2022.3 version?
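As a rough illustration, under the assumption that each region request carries some allocator/profiling overhead, a 64 KB region can land on 80 KB once rounded up to 16 KB iOS pages (the overhead size below is made up for the example):

```csharp
using System;

class PageAlignDemo
{
    // iOS system pages are 16 KB aligned these days.
    const long PageSize = 16 * 1024;

    public static long RoundUpToPageSize(long bytes) =>
        (bytes + PageSize - 1) / PageSize * PageSize;

    static void Main()
    {
        long regionSize = 64 * 1024; // kDefaultRegionSize in 2022
        long overhead   = 256;       // hypothetical allocator/profiling header

        // 64 KB is exactly 4 pages; any extra byte of overhead pushes the
        // allocation onto a 5th page, i.e. 80 KB.
        Console.WriteLine(RoundUpToPageSize(regionSize + overhead)); // 81920
    }
}
```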

Thanks
How do I edit il2cpp source code on 2022?

I don’t think that was ever possible or supported unless you’re a source code customer? Mono’s source is available here, but IL2CPP source is afaik not publicly available? If it was previously possible for you, please reach out to your support contact.