UnityEvent RemoveListener pointlessly allocates two Lists

The UnityEvent reference source code includes this function in InvokableCallList, which is executed every time we call RemoveListener() on a UnityEvent:

        public void RemoveListener(object targetObj, MethodInfo method)
        {
            var toRemove = new List<BaseInvokableCall>();
            for (int index = 0; index < m_RuntimeCalls.Count; index++)
            {
                if (m_RuntimeCalls[index].Find(targetObj, method))
                    toRemove.Add(m_RuntimeCalls[index]);
            }
            m_RuntimeCalls.RemoveAll(toRemove.Contains);

            // removals are done synchronously to avoid leaks
            var newExecutingCalls = new List<BaseInvokableCall>(m_PersistentCalls.Count + m_RuntimeCalls.Count);
            newExecutingCalls.AddRange(m_PersistentCalls);
            newExecutingCalls.AddRange(m_RuntimeCalls);
            m_ExecutingCalls = newExecutingCalls;
            m_NeedsUpdate = false;
        }

This seems to be unnecessarily allocating two new Lists (which then internally allocate two new arrays). Calling RemoveAll() with toRemove.Contains as the predicate is also very inefficient: it scans toRemove once per element of m_RuntimeCalls, making the removal quadratic in the worst case.

The function could be rewritten like this to avoid any new allocations or inefficient use of RemoveAll:

        public void RemoveListener(object targetObj, MethodInfo method)
        {
            //iterate backwards so removing an item from an index doesn't disrupt iteration
            for (int index = m_RuntimeCalls.Count - 1; index >= 0; index--)
            {
                if (m_RuntimeCalls[index].Find(targetObj, method))
                    m_RuntimeCalls.RemoveAt(index);
            }
 
            // removals are done synchronously to avoid leaks. we reuse the existing list
            m_ExecutingCalls.Clear();
            m_ExecutingCalls.AddRange(m_PersistentCalls);
            m_ExecutingCalls.AddRange(m_RuntimeCalls);
            m_NeedsUpdate = false;
        }

There are other places in the file where m_ExecutingCalls is recreated as a new List instead of simply cleared, each with the same comment “removals are done synchronously to avoid leaks”. Perhaps I’m misunderstanding the original programmer’s meaning/intent, but list.Clear() is still synchronous, and if necessary the [Capacity](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.capacity?view=net-8.0) property or [TrimExcess](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.trimexcess?view=net-8.0) function can be used to change the capacity of the list.
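To illustrate that point with plain .NET List&lt;T&gt; behaviour (nothing Unity-specific; the element type is just for context):

        var list = new List<BaseInvokableCall>(1000);
        // ...fill the list...

        list.Clear();      // Count becomes 0 and element references are released,
                           // but Capacity stays at 1000, so the internal array is kept
        list.TrimExcess(); // shrinks the internal array down to Count (here, 0)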

Hi;

I am not highly experienced in this area, but it appears that Clear() releases the references to all objects and sets the list's count to 0. It does not free the underlying storage, though: the internal array keeps its old capacity until it is overwritten or replaced.

Using a new list discards the previous list's internal storage entirely, freeing those resources for collection. Otherwise, at a bottleneck, that memory may not be considered free.

But hey, I'll write this comment even though I am no pro here! So if you know better than I do, and can measure a notable increase in performance or a reduction in GC, then ignore this and keep doing what you're doing.

Sorry, to rephrase that:

If the contents of the previous list were very large, a cleared list keeps the memory cost of that previous list, whereas a freshly allocated ('newed') one does not.

As I said, you can avoid that simply by setting the Capacity property or calling the TrimExcess() function.


You’re right. Keep doing it: I believe in you.

You are still reducing some percentage of garbage! With TrimExcess, any retained capacity that simply gets overwritten by the new contents never becomes garbage in the first place.

Very clever stuff!
I will certainly use it!

Your proposed change would cause issues in a multithreaded environment, especially when the removal happens from inside one of the callbacks. The comment specifically states they make this change synchronously; you kept the comment, but it's no longer accurate. Your version mutates the list three times in a row (one Clear and two AddRange calls, each of which may itself perform several internal changes). Replacing the entire list reference can be done in a single atomic step, so it is thread safe.

If you worry about garbage, you should not change the Capacity manually in between, as any capacity change will allocate a new internal array. The point of using a list is to keep a larger internal array, so you don’t need to re-allocate the array all the time.

Events shouldn’t be added / removed too frequently. If something happens very often, it’s not really an event.


Well, that’s why I said

The comment in the code says “synchronously”; it doesn’t mention thread safety or concurrency. Methods like “Clear()” and “AddRange()” are synchronous (in the sense that they aren’t asynchronous functions or coroutines), but you’re correct that being synchronous doesn’t mean that they’re thread safe. The original code makes more sense in the context of thread safety. Thanks for clarifying.

Not only thread safety. Have a look at how the event is invoked. “PrepareInvoke” returns “m_ExecutingCalls” and it iterates through the list when invoking all subscribed events. As I said, when you remove an event from inside a subscribed event, the iteration would break as your changed code would directly manipulate the used list. The original code would create a new list and replace the old one. That means the iteration inside Invoke would continue with the old list that is stored in a local variable.
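To make that concrete, here is a simplified sketch (not Unity's actual code) of why replacing the list reference keeps an in-progress invocation safe:

        // Invoke captures a local reference to m_ExecutingCalls, so if a callback
        // calls RemoveListener, which assigns a *new* list to m_ExecutingCalls,
        // the loop below keeps iterating over the old, untouched list.
        public void Invoke()
        {
            List<BaseInvokableCall> calls = PrepareInvoke(); // local snapshot reference
            for (int i = 0; i < calls.Count; i++)
                calls[i].Invoke(); // may reassign m_ExecutingCalls; 'calls' is unaffected
        }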


Yep, I see what you mean.

I agree this should be addressed. Unity APIs should minimise or eliminate unnecessary garbage allocations wherever possible.

At the very least, they could test whether the method is actually in the list before allocating a new list.
It’s common for programmers to remove a method defensively when they are unsure whether it is already registered to the event:

        event.RemoveListener(MyMethod);
        event.AddListener(MyMethod);
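One way to do that would be an early-out before any allocation happens. A hypothetical patch (keeping the rest of the original method unchanged):

        public void RemoveListener(object targetObj, MethodInfo method)
        {
            // Hypothetical early-out: if no runtime call matches, there is
            // nothing to remove, so skip the list allocations entirely.
            bool anyMatch = false;
            for (int index = 0; index < m_RuntimeCalls.Count; index++)
            {
                if (m_RuntimeCalls[index].Find(targetObj, method))
                {
                    anyMatch = true;
                    break;
                }
            }
            if (!anyMatch)
                return;

            // ...original removal and m_ExecutingCalls rebuild here...
        }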

You mentioned Unity allocates the list unnecessarily for two reasons, but neither is quite correct. Here’s why:

  1. Thread safety. Allocating a new list can’t guarantee thread safety by itself; I don’t see any synchronization mechanism here, like a lock or a concurrent container.
  2. Logic correctness. “PrepareInvoke” allocates a new list just to prevent a callback death spiral, i.e. adding a UnityAction to the callback list while the callbacks are being invoked.

The Unity developers are just being lazy; they don’t eat their own dogfood.

The removal case can be handled by simply copying the list to a stack-allocated array before executing, or by reversing the list, executing the reversed list in reverse, then reversing it back.

The latter is likely less performant and is definitely more error-prone.

The former handles most scenarios pretty well, I think, especially if paired with a lock while the copy operation occurs. One could even use the built-in sync object in the List<T> class (only accessible via an interface; I forget which at the moment).

Either way, you’re trading some CPU cycles for garbage/memory allocation. I think the trade-off is worth it since it’s so minimal, but Unity may feel differently.
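Reference types can’t live in a stackalloc buffer in C#, but a pooled array gets the same effect without per-invoke garbage. A sketch of the copy-before-invoke idea (the lock object and field names are illustrative, not Unity’s):

        using System.Buffers;

        public void Invoke()
        {
            BaseInvokableCall[] snapshot;
            int count;
            lock (m_SyncRoot) // hypothetical lock guarding the call list
            {
                count = m_ExecutingCalls.Count;
                snapshot = ArrayPool<BaseInvokableCall>.Shared.Rent(count);
                m_ExecutingCalls.CopyTo(snapshot, 0);
            }
            try
            {
                // Iterate the snapshot; callbacks may freely mutate the live list
                for (int i = 0; i < count; i++)
                    snapshot[i].Invoke();
            }
            finally
            {
                ArrayPool<BaseInvokableCall>.Shared.Return(snapshot, clearArray: true);
            }
        }

Note that Rent may return an array larger than requested, which is why the loop bounds use the captured count rather than snapshot.Length.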

The m_RuntimeCalls part could easily be optimized. There’s no need for it to allocate a temporary list, nor to use List<T>.RemoveAll(Predicate<T>). It’s a pretty peculiar implementation.

The m_ExecutingCalls part is much tougher. One solution could be to keep track of whether an Invoke execution is in progress, and mutate the list directly only when it is not. This would slightly increase the cost of every Invoke execution, though.
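That idea could be sketched like this (the m_IsInvoking flag and the helper method are hypothetical, not part of Unity’s source):

        private bool m_IsInvoking; // hypothetical flag, set for the duration of Invoke

        private void RebuildExecutingCalls()
        {
            if (m_IsInvoking)
            {
                // An Invoke is iterating m_ExecutingCalls: keep the original
                // behaviour and swap in a fresh list, leaving the old one intact.
                var calls = new List<BaseInvokableCall>(m_PersistentCalls.Count + m_RuntimeCalls.Count);
                calls.AddRange(m_PersistentCalls);
                calls.AddRange(m_RuntimeCalls);
                m_ExecutingCalls = calls;
            }
            else
            {
                // No iteration in flight: mutate in place and allocate nothing.
                m_ExecutingCalls.Clear();
                m_ExecutingCalls.AddRange(m_PersistentCalls);
                m_ExecutingCalls.AddRange(m_RuntimeCalls);
            }
        }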