OnDisable doesn't follow execution order

I am making an ECS wrapper. Awake/OnDestroy create/destroy an empty Entity, and OnEnable/OnDisable add/remove a Component on that Entity. Unloading a scene breaks this logic in a random way. How can I avoid this?
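Roughly, the setup being described is something like this minimal sketch (assuming the Entities package; the class name and the EnabledTag component are purely illustrative, not the actual code):

using Unity.Entities;
using UnityEngine;

public struct EnabledTag : IComponentData { }

public class EntityWrapper : MonoBehaviour
{
    Entity entity;
    EntityManager Manager => World.DefaultGameObjectInjectionWorld.EntityManager;

    void Awake()     => entity = Manager.CreateEntity();            // create an empty Entity
    void OnDestroy() { if (Manager.Exists(entity)) Manager.DestroyEntity(entity); }

    void OnEnable()  => Manager.AddComponent<EnabledTag>(entity);   // add a Component
    void OnDisable()
    {
        // During scene unload these callbacks can run in an unpredictable order,
        // which is the problem being described.
        if (Manager.Exists(entity)) Manager.RemoveComponent<EnabledTag>(entity);
    }
}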

Ensuring the order A → B → C is important because, without this sequence, the desired functionality cannot be achieved when designing Codeless components.

For example, I created a component called ‘AnimationRewinder’ which rewinds and samples all Legacy Animations it holds during OnEnable, and fast-forwards and samples them during OnDisable.
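A minimal sketch of what such a component could look like (my illustration based on the description, assuming the legacy Animation component):

using UnityEngine;

public class AnimationRewinder : MonoBehaviour
{
    [SerializeField] Animation[] animations;

    void OnEnable()  => SampleAll(atEnd: false);   // rewind and sample
    void OnDisable() => SampleAll(atEnd: true);    // fast-forward and sample

    void SampleAll(bool atEnd)
    {
        foreach (var anim in animations)
        {
            foreach (AnimationState state in anim)
            {
                state.enabled = true;
                state.weight = 1f;
                state.time = atEnd ? state.length : 0f;
            }
            anim.Sample();   // write the pose for that time immediately
        }
    }
}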

Additionally, I created a component called ‘SpriteColorGroup’, which forces RGBA color values onto all registered SpriteRenderers during OnEnable, OnDisable, and every Update. (The method for registering SpriteRenderers can be simplified by iterating over child components.)
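Sketched out, the helper behaves roughly like this (names and details are illustrative):

using UnityEngine;

public class SpriteColorGroup : MonoBehaviour
{
    public Color color = Color.white;   // a legacy Animation clip can animate color.a

    SpriteRenderer[] renderers;

    void OnEnable()
    {
        renderers = GetComponentsInChildren<SpriteRenderer>(true);   // register children
        Apply();
    }

    void Update()    => Apply();
    void OnDisable() => Apply();   // this is the call that depends on OnDisable ordering

    void Apply()
    {
        if (renderers == null) return;
        foreach (var r in renderers)
            if (r != null) r.color = color;
    }
}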

Then, I made a Legacy Animation Clip that lerps the alpha value of ‘SpriteColorGroup’ from 0.0 to 1.0 in a Codeless way. (If there were no intermediary helper like ‘SpriteColorGroup’, animating the colors of all sprites using a Codeless method would have required an overwhelming number of animation curves, and it would have been impossible to register dynamically created or deleted SpriteRenderers.)

Now, when a GameObject with these three components is activated in the editor, the color of all sprites should be applied in the order ‘AnimationRewinder → Animation → SpriteColorGroup’, starting from the initial value (0.0), all in a Codeless way. Conversely, when the GameObject is deactivated, the final color value (1.0) should be applied in the same order.

This is why the order is crucial. Without maintaining this sequence, such an approach is infeasible. In fact, this method couldn’t be used because OnDisable did not maintain the correct order. As a result, I had no choice but to manually control all the SpriteRenderers through code, which hindered the development of reusable components and led to project-specific or content-dependent code, drastically reducing productivity.

Instead of using the method mentioned above for ‘SpriteColorGroup’, you could control the properties of a shared Material. However, this still requires writing logic to manage the Materials, and explaining the usage to a Technical Artist (TA) increases complexity. In the end, we had no choice but to use code. If Unity guaranteed the execution order of lifecycle events the way the Android SDK does, I could have created far more elegant and reusable components.

There are better ways to achieve this, though. Usually I just use interfaces, typically called IOnXXXX or IXXXXReceiver, to let other objects easily listen for something having been actioned on the same or parent game objects. Then it’s just a matter of a GetComponentsInChildren<T> call to find them all and broadcast the message, and you don’t need to rely on Unity’s order of execution (which you generally shouldn’t be relying on anyway).
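A minimal sketch of that pattern, with hypothetical names:

using UnityEngine;

// Anything on this object or its children that wants to react to the
// "disabled" moment implements this interface.
public interface IDisableReceiver
{
    void OnParentDisabled();
}

public class DisableBroadcaster : MonoBehaviour
{
    void OnDisable()
    {
        // Explicit broadcast: the caller controls the order, so Unity's own
        // OnDisable ordering no longer matters for the receivers.
        foreach (var receiver in GetComponentsInChildren<IDisableReceiver>(true))
            receiver.OnParentDisabled();
    }
}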


In the past, I attached custom scripts to manage every object, even down to particles and sub-objects. However, this approach increased the complexity of the project, and years later, when maintaining released projects, the components tangled up with the in-game code were terrifying just to look at in the Inspector.

Since then, my perspective has changed.

I now believe that SpriteRenderer should only handle sprite rendering functionality. It shouldn’t need to know about special features or dependencies on specific objects.

Keeping Unity’s core components as pure as possible has many advantages, especially in terms of collaboration and maintenance, which I realized too late.

In other words, attaching additional components to particles or sub-objects only increases complexity and dependencies, which has become an undeniable fact.

Alternatively, we could use Event Listeners, but even this requires the management of registering and deregistering them for each dynamic particle. This leads to a conclusion similar to attaching an I~Receiver interface.

In this context, GetComponentsInChildren is certainly costly but highly attractive. As long as the hierarchy structure is maintained, everything can be encapsulated. Each class doesn’t need to know the fine details.

And SpriteColorGroup is a feature that uses GetComponentsInChildren. However, my plan was blocked by the randomness of OnDisable.

In fact, the easiest solution is to abandon Codeless. You can simply create a Super Mother Class on the parent object and control all the sub-objects through code.

But this goes completely against the principles of CBSE (Component-Based Software Engineering, also known as CBD).

It’s nothing like an event listener pattern.

I don’t really feel like you understood my suggestion at all.

If I want to add an I~Receiver, what do I need to do? I would need to attach a custom component that implements that interface to each particle or sprite.

If I want to register/unregister with a Listener, what do I need to do? Again, I would need to attach a custom component that handles that registration to each object.

Unity’s SendMessage Receiver and Event Listener are fundamentally similar in that they both rely on a broadcasting structure. The issue I am concerned with is that having sub-objects hold custom components is problematic.

In fact, since Unity is also somewhat moving away from CBSE (Component-Based Software Engineering) and transitioning towards DOTS (Data-Oriented Technology Stack), discussing ways to make CBSE more codeless here might just be a conversation about over-engineering.

No it’s not. It’s Unity 101.

My suggestion was a means to get around order of execution. Everything else you’re going on about is completely beside the point of this thread.

I already mentioned above that the best way to bypass execution order issues is simply to control everything through a supermother class instead of using a codeless approach. This is also a perfect solution performance-wise, as there are no GC spikes.

But what was the initial agenda of this thread? It’s that OnDisable doesn’t maintain execution order.

Why is OnDisable needed? Because if you want a codeless workflow using Unity’s standard functionality instead of custom components, this is an excellent approach (assuming execution order isn’t relevant).

Now, why do some say execution order is necessary in OnDisable? You can refer to my previous statement. Everything is related.

You might be someone who doesn’t fully understand this passage. In that case, you should develop in the way you believe is correct. As for me, I will focus on reducing the number of components attached to objects or creating universal components that can be used across multiple projects.

Not sure why you keep saying “codeless”, as everything we do requires code. We’re not talking about visual scripting, after all.

Hardly. It sounds incredibly rigid, inflexible, and against Unity’s general modular approach. Particularly since you’ll be lumping huge amounts of code into one class, making debugging and changes to the code difficult.

Of course the right approach for every situation is contextual. Sometimes a puppeteer game object is fine. Other times you need to disseminate functionality across multiple components (single responsibility).

If you were having trouble with too many components, then maybe that wasn’t the right approach for that given situation. But it doesn’t disqualify it from all situations.

Accepting there is only one way to approach something blinds you from the alternatives.


Yes, that’s correct. Let me explain why I’ve been using the term “Codeless.”

Unity’s Animation system sometimes provides functionality that can replace the need for code.

For example, you can toggle the Active state of certain objects or change the values of SerializedProperties.

This allows you to handle some visual effects or On/Off functionalities without writing code.

By managing the start and end of these animations through events like OnEnable or OnDisable, you can achieve a lot.

Using a combination of a few shared components and animations, you can create an endless variety of results. (This is almost similar to the results you’d get with Visual Scripting.)

Additionally, by teaching this to roles like Technical Artists (TAs), you could expect them to handle some of the screen sequence work on their own.

This is why I focus on Unity events like OnEnable, OnDisable, Awake, Start, and OnDestroy. These can link Animation → SetActive → Call Function.
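For instance, a parameterless public method like the one below can be called by name from an Animation Event on a clip, which is the kind of Animation → SetActive → Call Function link I mean (the names are illustrative):

using UnityEngine;

public class AnimationEventRelay : MonoBehaviour
{
    // Referenced by name from an Animation Event on the clip.
    public void OnSequenceFinished()
    {
        gameObject.SetActive(false);   // deactivating then triggers OnDisable on sibling components
    }
}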

This clearly opens up the potential for a Codeless approach. However, since OnDisable cannot be controlled via DefaultExecutionOrder, this method has potential bugs, which is why it remained a limited idea.
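For reference, this is the attribute in question; it sets a script’s place in the Script Execution Order for callbacks such as Update, but, as noted above, it doesn’t let me pin down OnDisable:

using UnityEngine;

// Lower values are ordered earlier relative to other scripts.
[DefaultExecutionOrder(-100)]
public class RunsEarly : MonoBehaviour
{
    void Update() { /* ordered before default-order scripts' Update */ }
}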

Of course, what you’re saying is reasonable in most cases. The Supermother class was simply an extreme example, considered purely from the perspective of raw execution speed. In reality, as developers, we have to deliver the given targets within the development period. To achieve the best development efficiency and maintainability, we should avoid approaches like the Supermother class and aim for a certain degree of functional separation.

However, the possibility of developing specific features in a more elegant way has been blocked, and in my experience, attaching too many components has caused issues in the past. Therefore, the use of custom components should be balanced and not overdone. (Especially when a feature is used across multiple projects, I’ve found that sticking to Unity’s standards as much as possible is advantageous.) Specifically, the fewer dependencies there are, the more reusable the object becomes.

The Codeless solution has been revolutionary in reducing custom content code and separating much of the visual-related work into view resources, which is why I’ve always focused on that approach. OnDisable was supposed to be the key to realizing this, but due to the well-known execution order issues, I had to abandon that method and, up until now, have adopted dynamic manual management approaches similar to the one you mentioned.

So you mean “designer friendly”, not codeless. And you don’t need to bold the word and others every time you use it.

It’s part of our role as coders to make things designer friendly. That’s pretty standard fare (hardly revolutionary, sorry).

In any case, if you have issues with order of execution, that points to an architectural issue on your end. That’s always been the case. One that can easily be resolved without going to such dire lengths.

If you compare 10,000 lines of code to 5,000 lines, 5,000 lines is clearly less.
Compare 5,000 lines to 1,000 lines, and again, 1,000 lines is even less.
And if you compare 1,000 lines to 100 lines, 100 lines is still much less.

In this process, many classes, functions, and other code blocks are removed.

As you continue removing things, eventually, only a few configuration or flow control elements remain, while most of the functional parts disappear.

What remains are a small number of code blocks, which, as long as they meet the standard specifications required by the foundation, can be used across completely different projects, drastically increasing reusability. (This is what I mean by adhering to Unity’s standards, and another example would be how Unity’s Animation and Sprite Renderer can be used in different contexts.)

This is the path I’m advocating for when I talk about “codeless,” and it’s entirely different from just making things designer-friendly. In fact, my focus is on expanding reusability, which is why I’m cautious about introducing custom functions.

Of course, some parts will still require engineering (for example, geometric formulas, calculations, or parts where performance optimization is critical—especially in areas related to gameplay).

Yes, this might seem trivial to some, but if certain areas can be built into an excellent view without a single line of code, and this can be applied to other projects as well, then at least for some people, this would be a highly attractive feature.

I hope you can understand the context I’m presenting. But I have to say, as Unity is gradually moving away from this CBSE development method and shifting towards DOTS, the discussion that started with OnDisable might now feel like over-engineering.

No, programming always relies on execution order.

int* p = 0;   // p starts as a null pointer
int a = 1;
// p = &a;    // the assignment that should run first is commented out
*p = 10;      // undefined behavior: writes through a null pointer
p = &a;       // assigning p here is too late

C/C++ pointers require a strict execution order. Does that make it a flawed language? In my view, it’s perfectly normal; what’s actually wrong is the commented-out line, because skipping it breaks the required order.

Of course, you can write functionality without worrying about execution order. You can write parallel code or create independent code that doesn’t depend on execution order.

Yes, if everything were independent, that would be possible. We’d be very happy if we could write such code all the time. But imagine you are the developer of Unity’s UI.Image, ParticleSystem, or NavMesh components. Would you really be able to write these without considering execution order internally?

No, you absolutely cannot complete them without caring about execution order. In fact, most core functionalities heavily depend on the order of interactions with shared resources.

That’s because code written without considering execution order is usually far from optimized in terms of performance. This issue becomes obvious in asynchronous implementations of shared resources. Why else do we worry about memory visibility and many other factors to guarantee execution order?

To sum up, in many common cases, execution order might not matter. But when dealing with access to shared resources, execution order is crucial. Calling such code a flawed architecture is, in my opinion, an overly hasty generalization.

I’m talking about Unity’s order of execution, not execution order in general.

But nice strawman.

It sounds like you’re suggesting that even in Unity, the ScriptExecutionOrder GUI or the [DefaultExecutionOrder] attribute is completely unnecessary. Do you agree with that?

And based on your logic, components like TMP (TextMeshPro) or other similar components that utilize these features would seem to be poorly written. Do you agree with that?

It’s only necessary for a quick hack, or where there are no other options available. Or you’re being lazy, of course.

I’ve never needed it, personally. When order of execution was of concern, I’ve been able to work around it very easily.


Yes, I haven’t exactly counted, but in over 99% of cases (in Unity), execution order hasn’t been an issue.

In most content-focused implementations, it’s especially unnecessary.

In my experience, issues usually arise when trying to do something unconventional, like using Animation + Active to create codeless tricks, where multiple components are tied to an object and can affect each other.

(Additionally, when developing Editor-compatible code with [ExecuteInEditMode] (for the Scene View and Inspector), execution order has been an issue. For example, when creating functionality that plays UI or sprite animations and modifies properties in edit mode, execution order has also played a significant role.)
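As a rough illustration of that edit-mode case (the class and field are hypothetical):

using UnityEngine;

// [ExecuteInEditMode] (or [ExecuteAlways] on newer Unity versions) makes the
// lifecycle run in the editor too, so ordering between such components matters there as well.
[ExecuteInEditMode]
public class EditModePreview : MonoBehaviour
{
    [SerializeField] float normalizedTime;   // hypothetical preview property edited in the Inspector

    void Update()
    {
        // Runs in edit mode alongside other [ExecuteInEditMode] components on the
        // same object, e.g. to sample a preview animation at normalizedTime.
    }
}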

Or, it becomes problematic when dealing with something core or interacting with third-party asset plugins.

Additionally, there can be issues when working with native code. Especially when using unsafe code, marshalling, or P/Invoke, execution order becomes crucial, for example when handling native thread ↔ managed thread communication, such as implementing HTTP/2, which Unity doesn’t provide natively.
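To give a flavour of what I mean (every native name here is hypothetical, purely to illustrate the native-thread to managed-thread hand-off):

using System;
using System.Runtime.InteropServices;
using AOT;
using UnityEngine;

public class NativeHttpBridge : MonoBehaviour
{
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate void ResponseCallback(IntPtr data, int length);

    [DllImport("nativehttp2")]                       // hypothetical native plugin
    static extern void nh2_request(string url, ResponseCallback onResponse);

    // Keep a reference so the delegate isn't garbage collected while native code holds it.
    static readonly ResponseCallback keepAlive = OnResponse;

    [MonoPInvokeCallback(typeof(ResponseCallback))]
    static void OnResponse(IntPtr data, int length)
    {
        // Invoked on a native thread: only queue the result here and consume it
        // on the main thread, where Unity's ordering guarantees apply.
    }

    void Start() => nh2_request("https://example.com", keepAlive);
}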

In most other cases, where these advanced approaches aren’t used, execution order typically doesn’t pose a problem.