So one of my scripts is causing a very large GC allocation in the Unity Profiler. I have narrowed the cause down to one of my generic Lists. Every frame, List.Add() is called along with List.RemoveAt(0). When I comment out the List.RemoveAt(0), the memory allocation stays at 100 bytes, but when I un-comment it the allocation jumps to 211 kilobytes! What is happening!? If anyone has any idea, any help would be greatly appreciated! Thanks.
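Here's a stripped-down version of the pattern (real class and method names omitted; the cap of 64 is arbitrary, just for illustration):

```csharp
using System;
using System.Collections.Generic;

// Simulate 200 frames' worth of Update() calls:
var buffer = new SampleBuffer();
for (int i = 0; i < 200; i++)
    buffer.Push(i);

Console.WriteLine(buffer.Count); // stays capped at 64

class SampleBuffer
{
    private readonly List<float> samples = new List<float>();
    public int Count => samples.Count;

    // Called once per frame:
    public void Push(float value)
    {
        samples.Add(value);
        if (samples.Count > 64)   // arbitrary cap for the example
            samples.RemoveAt(0);  // the line that shows up in the Profiler
    }
}
```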
If you plan to remove the first element every frame, a standard List is not the best fit. Use some form of linked list instead, where removing the first element is a simple O(1) operation — no shifting of the remaining elements required.
One caveat, though: while the removal is commented out, you are still adding new elements every frame, so the list will grow continually. That steadily increasing memory is expected.
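A minimal sketch of the linked-list version, using plain .NET `LinkedList<T>` (nothing Unity-specific here):

```csharp
using System;
using System.Collections.Generic;

var buffer = new LinkedList<int>();

// Add to the back each frame...
for (int frame = 0; frame < 5; frame++)
    buffer.AddLast(frame);

// ...and drop the front in O(1): no shifting of the other elements.
buffer.RemoveFirst();

Console.WriteLine(string.Join(", ", buffer)); // 1, 2, 3, 4
```

One design note: `LinkedList<T>` allocates a node object per `AddLast`, which itself shows up as GC pressure in Unity. `Queue<T>` (a circular buffer over an array) also gives O(1) removal from the front, and stops allocating once it reaches a steady size, so it may be the better fit for a fixed-size per-frame buffer.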
A List data type is actually a resizing array under the hood, as you can see in the reference source now that the C# standard library has been open-sourced.
What this means is that the list is backed by an array with a set capacity — 4 by default once you add the first element. If you try to add a 5th element, a new array of size 8 is allocated, the first 4 elements are copied over, and the 5th is added. Add a 9th element and a new array of size 16 is allocated, the first 8 are copied, and the 9th is added.
This amortizes the cost of adding an element pretty nicely: the capacity doubles each time, so the copy only happens when you exceed the current capacity, and each Add is O(1) on average.
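You can watch the doubling happen through the `Capacity` property. A small sketch (the 0 → 4 → 8 → 16 → 32 sequence is an implementation detail of current .NET runtimes, not a documented guarantee):

```csharp
using System;
using System.Collections.Generic;

var list = new List<int>();
Console.WriteLine(list.Capacity); // 0: no backing array allocated yet

// Record the capacity each time it changes over 20 Adds:
var capacities = new List<int>();
for (int i = 0; i < 20; i++)
{
    list.Add(i);
    if (capacities.Count == 0 || list.Capacity != capacities[^1])
        capacities.Add(list.Capacity);
}

Console.WriteLine(string.Join(", ", capacities)); // 4, 8, 16, 32
```

If you know the maximum size up front, `new List<int>(1000)` pre-allocates the backing array once and avoids all of these intermediate copies.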
On the flip side, this means that removing an element from anywhere in the array is really, really costly. Unless you remove it from the end!
If you have a list with 1000 items and you remove at index 0, the remaining 999 items all have to be copied one slot to the left. If you remove at index 999, nothing is copied: the list just decrements its length and says element 999 doesn’t exist anymore.
So if you can find a way to reverse the order of your list and remove from the end rather than the start, you’ll eliminate nearly all of that per-frame copying.
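To make the difference concrete, here's a small sketch of both removals, plus the "swap-remove" trick — a standard technique (not from the original post) that makes removing *any* index O(1) if you don't care about element order:

```csharp
using System;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3, 4, 5 };

// Removing the last element: no copying, the list just shrinks by one.
list.RemoveAt(list.Count - 1);      // { 1, 2, 3, 4 }

// Removing the first element: every remaining item is shifted left by one.
list.RemoveAt(0);                   // { 2, 3, 4 }

// Swap-remove: overwrite the victim with the last element, then trim the end.
int index = 0;
list[index] = list[list.Count - 1]; // { 4, 3, 4 }
list.RemoveAt(list.Count - 1);      // { 4, 3 }

Console.WriteLine(string.Join(", ", list)); // 4, 3
```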
And I disagree with @AlwaysSunny: 0.2 MB is a lot relative to the size of this operation. But maybe that’s just my perspective, coming from the embedded-systems world where 200 KB of program memory is a lot, let alone 200 KB of RAM.