The UnityEvent reference source code includes this function in InvokableCallList, which is executed every time we call RemoveListener() on a UnityEvent:
```csharp
public void RemoveListener(object targetObj, MethodInfo method)
{
    var toRemove = new List<BaseInvokableCall>();
    for (int index = 0; index < m_RuntimeCalls.Count; index++)
    {
        if (m_RuntimeCalls[index].Find(targetObj, method))
            toRemove.Add(m_RuntimeCalls[index]);
    }
    m_RuntimeCalls.RemoveAll(toRemove.Contains);

    // removals are done synchronously to avoid leaks
    var newExecutingCalls = new List<BaseInvokableCall>(m_PersistentCalls.Count + m_RuntimeCalls.Count);
    newExecutingCalls.AddRange(m_PersistentCalls);
    newExecutingCalls.AddRange(m_RuntimeCalls);
    m_ExecutingCalls = newExecutingCalls;
    m_NeedsUpdate = false;
}
```
This seems to unnecessarily allocate two new Lists (which then internally allocate two new arrays). Calling RemoveAll() with Contains as the predicate is also very inefficient: Contains does a linear search for every element, making the removal quadratic in the list size.
The function could be rewritten like this to avoid any new allocations or inefficient use of RemoveAll:
```csharp
public void RemoveListener(object targetObj, MethodInfo method)
{
    // iterate backwards so removing an item from an index doesn't disrupt iteration
    for (int index = m_RuntimeCalls.Count - 1; index >= 0; index--)
    {
        if (m_RuntimeCalls[index].Find(targetObj, method))
            m_RuntimeCalls.RemoveAt(index);
    }

    // removals are done synchronously to avoid leaks. we reuse the existing list
    m_ExecutingCalls.Clear();
    m_ExecutingCalls.AddRange(m_PersistentCalls);
    m_ExecutingCalls.AddRange(m_RuntimeCalls);
    m_NeedsUpdate = false;
}
```
There are other places in the file where m_ExecutingCalls is recreated as a new List instead of simply cleared, each with the same comment "removals are done synchronously to avoid leaks". Perhaps I'm misunderstanding the original programmer's meaning/intent, but list.Clear() is still synchronous, and if necessary the [Capacity](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.capacity?view=net-8.0) property or [TrimExcess](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.trimexcess?view=net-8.0) method can be used to change the capacity of the list.
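The Clear() vs. new-list distinction is easy to demonstrate in isolation; this is a minimal standalone sketch (not Unity code) showing that Clear() keeps the backing array's capacity for reuse, while TrimExcess() is the call that actually gives the memory back:

```csharp
using System;
using System.Collections.Generic;

class ClearVsCapacity
{
    static void Main()
    {
        var list = new List<int>(capacity: 1000);
        for (int i = 0; i < 1000; i++)
            list.Add(i);

        // Clear() resets Count to 0 but keeps the internal array,
        // so a subsequent AddRange can reuse it without reallocating.
        list.Clear();
        Console.WriteLine($"Count={list.Count}, Capacity={list.Capacity}"); // Count=0, Capacity=1000

        // TrimExcess() reallocates a smaller internal array, for when
        // the memory held by the old one actually needs to be released.
        list.TrimExcess();
        Console.WriteLine($"Count={list.Count}, Capacity={list.Capacity}"); // capacity shrinks to fit Count
    }
}
```

This is why Clear() + AddRange is the lower-garbage option when the list is refilled to a similar size right away, as it is here.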
I am not highly experienced in the field, but it appears that Clear() clears the references to all the objects and sets the list count to 0. It does not free up the backing array itself, however, which keeps its old capacity until it is overwritten.

Using a new list releases the memory used by the previous contents of that list, freeing those resources. Otherwise, at a bottleneck, those resources may not be considered free.
But, hey, I will write this comment even though I am no pro here! So, if you know better than I do about it, and can measure a notable increase in performance or a reduction in GC, then ignore this and keep doing what you're doing.
If the contents of the previous list were very large, the next use of the list keeps the memory cost of the previous contents if you cleared it rather than "new"-ed it.

You are still reducing some percentage of garbage, though, because any capacity that is retained and simply overwritten never becomes garbage for the collector to dispose of (and TrimExcess can release the excess when needed).
Your proposed change would cause issues in a multithreaded environment, especially when the removal happens from inside one of the callbacks. The comment specifically states they do this so the change is synchronous. You kept the comment, but it's not correct anymore: you change the list three times in a row (Clear and two AddRange calls, which may themselves make several changes in a row). Replacing the entire list can be done in a single atomic step, so it would be thread safe.
If you worry about garbage, you should not change the Capacity manually in between, as any capacity change will allocate a new internal array. The point of using a list is to keep a larger internal array around, so you don't need to re-allocate the array all the time.

Events shouldn't be added/removed too frequently. If something happens very often, it's not really an event.
The comment in the code says "synchronously"; it doesn't mention thread safety or concurrency. Methods like Clear() and AddRange() are synchronous (in the sense that they aren't asynchronous functions or coroutines), but you're correct that being synchronous doesn't mean that they're thread safe. The original code makes more sense in the context of thread safety. Thanks for clarifying.
Not only thread safety. Have a look at how the event is invoked: PrepareInvoke returns m_ExecutingCalls, and Invoke iterates through that list when invoking all subscribed callbacks. As I said, when you remove a listener from inside a subscribed callback, the iteration would break, because your changed code directly mutates the list being iterated. The original code creates a new list and replaces the old one, which means the iteration inside Invoke continues with the old list that is stored in a local variable.
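The reentrancy point can be sketched outside of Unity. The MiniEvent type below is made up for illustration, but it follows the same replace-don't-mutate shape: Invoke pins the current list in a local, so a callback that removes a listener swaps in a new list without breaking the foreach already in flight (Clear() + AddRange on the same instance would throw InvalidOperationException mid-iteration instead):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical mini event type, for illustration only.
class MiniEvent
{
    List<Action> m_ExecutingCalls = new List<Action>();

    public void Add(Action a)
    {
        // Replace, don't mutate: any in-flight Invoke keeps its old snapshot.
        var copy = new List<Action>(m_ExecutingCalls) { a };
        m_ExecutingCalls = copy;
    }

    public void Remove(Action a)
    {
        var copy = new List<Action>(m_ExecutingCalls);
        copy.Remove(a);
        m_ExecutingCalls = copy;
    }

    public void Invoke()
    {
        // The local variable pins the current list; callbacks may
        // Add/Remove freely without disturbing this foreach.
        var snapshot = m_ExecutingCalls;
        foreach (var call in snapshot)
            call();
    }
}

class Demo
{
    static void Main()
    {
        var evt = new MiniEvent();
        Action b = () => Console.WriteLine("B");
        evt.Add(() => { Console.WriteLine("A"); evt.Remove(b); });
        evt.Add(b);
        evt.Invoke(); // A then B: b still runs from the old snapshot
        evt.Invoke(); // only A: the replacement list no longer contains b
    }
}
```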
I agree this should be addressed. Unity APIs should minimise or remove as much unnecessary garbage allocations as possible.
At the very least, they could test whether the method is actually in the list before allocating a new list.

It's common for programmers to just try to remove a method when they are unsure whether it is already registered to the event or not.
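That early-out can be sketched in a standalone form (the listener list here is a plain List<string> stand-in, not the Unity types): a cheap Exists scan first, so the remove-when-not-registered case pays for no mutation and no fresh allocation at all.

```csharp
using System;
using System.Collections.Generic;

class EarlyOutDemo
{
    static List<string> s_Calls = new List<string> { "a", "b", "c" };
    static List<string> s_Executing = new List<string>(s_Calls);

    static void RemoveListener(string name)
    {
        // Early out: if nothing matches, skip the removal path entirely,
        // including the allocation of a replacement executing list.
        if (!s_Calls.Exists(c => c == name))
            return;

        s_Calls.RemoveAll(c => c == name);
        // Rebuild the executing snapshot only when something actually changed.
        s_Executing = new List<string>(s_Calls);
    }

    static void Main()
    {
        RemoveListener("nope"); // no-op, allocates nothing new
        RemoveListener("b");
        Console.WriteLine(string.Join(",", s_Executing)); // a,c
    }
}
```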
You mentioned two reasons why Unity allocates the unnecessary list, but both are incorrect. Here's why:

Thread safety: allocating a new list can't guarantee thread safety on its own; I don't see any synchronization mechanism here, like a lock or a concurrent container.

Logical correctness: PrepareInvoke allocates a new list just to prevent a callback death spiral, meaning a UnityAction being added to the callback list while the callbacks are being invoked.

The Unity developers are just being lazy; they don't eat their own dogfood.
The removal case can be handled by simply copying the list to a stack-allocated array before executing, or by reversing the list, executing the reversed list in reverse, then reversing it back.

The latter is likely less performant and is definitely more error-prone.

The former handles most scenarios pretty well, I think, especially if paired with a lock while the copy operation is occurring. One could even use the built-in sync-lock object in the List<T> class (only accessible via an interface, I forget which at the moment).

Either way, you're trading some CPU cycles for garbage/memory allocation. I think the trade-off is worth it since it's so minimal, but Unity may feel differently.
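A sketch of that copy-before-execute idea, using ArrayPool<T> rather than true stack allocation (a managed List<Action> can't be copied into stackalloc memory); the lock, field names, and event shape here are all invented for illustration:

```csharp
using System;
using System.Buffers;
using System.Collections.Generic;

class PooledInvokeDemo
{
    static readonly object s_Lock = new object();
    static readonly List<Action> s_Calls = new List<Action>();

    public static void Invoke()
    {
        Action[] snapshot;
        int count;
        lock (s_Lock)
        {
            // Rent a scratch array and copy under the lock, so a
            // concurrent Add/Remove can't tear the copy.
            count = s_Calls.Count;
            snapshot = ArrayPool<Action>.Shared.Rent(count);
            s_Calls.CopyTo(snapshot);
        }
        try
        {
            // Iterate the snapshot; callbacks may mutate s_Calls freely.
            for (int i = 0; i < count; i++)
                snapshot[i]();
        }
        finally
        {
            // Clear the rented slots so the pool doesn't pin the delegates.
            ArrayPool<Action>.Shared.Return(snapshot, clearArray: true);
        }
    }

    static void Main()
    {
        s_Calls.Add(() => Console.WriteLine("hello"));
        Invoke(); // runs the callback with no per-invoke List allocation
    }
}
```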
The m_RuntimeCalls part could easily be optimized. There's no need for it to allocate a temporary list, nor to use List<T>.RemoveAll(Predicate<T>). It's a pretty peculiar implementation.

The m_ExecutingCalls part is much tougher. One solution could be to keep track of whether or not an Invoke execution is in progress, and only mutate the list directly if not. This would slightly increase the cost of every Invoke execution, though.
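That invoke-in-progress idea might look something like the sketch below. The flag name and event shape are invented, and it deliberately ignores nested Invoke and cross-thread access; the point is just that removal only falls back to copy-on-write (and thus allocates) while an iteration is live:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: single-threaded, no nested Invoke support.
class CopyOnWriteEvent
{
    List<Action> m_Calls = new List<Action>();
    bool m_Invoking;

    public void Add(Action a) => m_Calls.Add(a);

    public void Remove(Action a)
    {
        if (m_Invoking)
        {
            // An Invoke is walking m_Calls right now: copy-on-write so
            // the in-flight iteration keeps its old snapshot.
            var copy = new List<Action>(m_Calls);
            copy.Remove(a);
            m_Calls = copy;
        }
        else
        {
            // Safe to mutate in place; no allocation on the common path.
            m_Calls.Remove(a);
        }
    }

    public void Invoke()
    {
        var snapshot = m_Calls; // local pins the current list
        m_Invoking = true;      // the extra bookkeeping every Invoke pays for
        try
        {
            foreach (var call in snapshot)
                call();
        }
        finally
        {
            m_Invoking = false;
        }
    }
}
```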