Clearly the system does not implement a sleep() for the coroutine, so something is still being checked. Memory allocation aside (for repeated new WaitForSeconds, if you went that route), it feels intuitive that a simple yield return null would be less impactful than monitoring and checking a timeout value, since it HAS to check something each frame…?
Unless I’m misunderstanding the internals of Coroutines? (Why I’m asking :))
And clearly, if you need/want an easy way to specify a delay, WaitForSeconds is the way to go. I’m asking from a standpoint of waiting for something else to happen… like a coroutine waiting for a document of unknown size to be downloaded. It needs to check periodically.
WaitForSeconds checks every frame to see if the specified amount of time has passed.
Every yield instruction in Unity essentially checks some condition, and it does that check once every frame to decide whether or not to continue executing the coroutine.
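To make that concrete, here's a minimal sketch of the two patterns from the question (method names are illustrative); conceptually, both resume once per frame:

```csharp
using System.Collections;
using UnityEngine;

public class WaitExamples : MonoBehaviour
{
    // Both coroutines are resumed once per frame; WaitForSeconds just hides the check.
    IEnumerator WithWaitForSeconds()
    {
        yield return new WaitForSeconds(2f); // engine checks elapsed time each frame
        Debug.Log("Done");
    }

    IEnumerator WithManualTimer()
    {
        float elapsed = 0f;
        while (elapsed < 2f)
        {
            elapsed += Time.deltaTime; // we check the condition ourselves each frame
            yield return null;
        }
        Debug.Log("Done");
    }
}
```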
Yep, if you look at the example for, e.g. UnityWebRequest.Get:
You can see there’s a coroutine that is yielding on the web request. Under the hood, the implementation of that is pretty much “Check if the request is done, every frame”.
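The docs example is roughly along these lines (paraphrased from memory, using the Result enum from recent Unity versions, so treat it as a sketch):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class WebRequestExample : MonoBehaviour
{
    IEnumerator GetText(string uri)
    {
        using (UnityWebRequest request = UnityWebRequest.Get(uri))
        {
            // Yielding here suspends the coroutine; internally Unity resumes it
            // only once "is the request done?" evaluates to true.
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                Debug.Log(request.downloadHandler.text);
            else
                Debug.LogError(request.error);
        }
    }
}
```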
So the short answer to the question in your title is “No”.
Cool, thanks for confirming what I thought. I come from a background with some Arduino/mcu programming, and if you’re looping, you’re wasting power and speed. It’s habit for me to lean towards wait/delay/sleep functions. Also glad to know that how I thought through it was right. Look at that, learning more every day.
Actually, that's just an assumption we can't really prove. The statement is true for WaitForSecondsRealtime, since it's a CustomYieldInstruction implemented in managed code, and CustomYieldInstructions do get polled once every frame to check their condition. However, we don't know how Unity implemented WaitForSeconds on the native side. First of all, because the scheduling generally happens on the native side, there may already be some benefit: Unity doesn't have to step into your managed state machine at all while the wait is pending. It still has to check the timeout on the native side every frame, but there could be other improvements as well.
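For illustration, a CustomYieldInstruction along the lines of WaitForSecondsRealtime could look like this (a sketch, not Unity's actual source):

```csharp
using UnityEngine;

// keepWaiting is polled by the coroutine scheduler once per frame;
// the coroutine resumes on the first frame where it returns false.
public class MyWaitForSecondsRealtime : CustomYieldInstruction
{
    private readonly float endTime;

    public MyWaitForSecondsRealtime(float seconds)
    {
        endTime = Time.realtimeSinceStartup + seconds;
    }

    public override bool keepWaiting
    {
        get { return Time.realtimeSinceStartup < endTime; }
    }
}
```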
For example, if I were to implement a scheduler for things like WaitForSeconds, I would probably use a sorted queue, so the coroutine with the shortest timeout is first in the queue. That way, even if 20 coroutines are waiting, we only have to check the first one. Of course, that means when we add a coroutine to the queue (i.e. when it yields a WaitForSeconds object), we have to insert the new entry at the right position. However, that's a one-time action with a worst case of O(n). The efficiency of this approach depends on the number of coroutines running at the same time and how short the wait times are. You could even split the management into two categories: relatively short wait times that get evaluated every frame, and a separate list of coroutines with long wait times.
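To illustrate the idea, here's a hypothetical sketch (TimedScheduler, Schedule and Tick are made-up names, this is NOT how Unity actually does it, and real coroutine resumption is more involved than a bare MoveNext()):

```csharp
using System.Collections;
using System.Collections.Generic;

// Coroutines waiting on a timeout are kept sorted by wake-up time, so each
// frame we only look at the front of the queue instead of checking everyone.
public class TimedScheduler
{
    private struct Entry { public float WakeTime; public IEnumerator Routine; }

    private static readonly Comparer<Entry> byWakeTime =
        Comparer<Entry>.Create((a, b) => a.WakeTime.CompareTo(b.WakeTime));

    private readonly List<Entry> queue = new List<Entry>();

    public void Schedule(IEnumerator routine, float now, float delay)
    {
        var entry = new Entry { WakeTime = now + delay, Routine = routine };
        // One-time O(n) insertion keeps the list sorted by wake-up time.
        int i = queue.BinarySearch(entry, byWakeTime);
        queue.Insert(i < 0 ? ~i : i, entry);
    }

    public void Tick(float now)
    {
        // Only the front entries can be due; stop at the first one still waiting.
        while (queue.Count > 0 && queue[0].WakeTime <= now)
        {
            var entry = queue[0];
            queue.RemoveAt(0);
            entry.Routine.MoveNext(); // resume the coroutine (simplified)
        }
    }
}
```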
A similar thing is most likely true for pending web requests. Web requests are usually carried out on a separate thread, and Unity most likely uses an internal callback when the request is done. That callback could directly reschedule the coroutine, which means Unity would not have to check every pending web request every frame.
Though, as I said, we can't really know this unless you have a source-code license for Unity or you dig deep into the native code with a debugger / disassembler.
What we do know is that every native-to-managed transition costs performance. That's why 10k GameObjects, each with their own Update method, are way slower than a single manager object with one Update method that calls a custom method on each object (with the MonoBehaviour instances stored in a list or array). See this blog post for reference.
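A minimal sketch of that manager pattern (UpdateManager, ManagedBehaviour and ManagedUpdate are illustrative names, not Unity API):

```csharp
using System.Collections.Generic;
using UnityEngine;

// One Update call crosses the native-to-managed boundary once per frame,
// then plain C# method calls fan out to all registered behaviours.
public class UpdateManager : MonoBehaviour
{
    private readonly List<ManagedBehaviour> behaviours = new List<ManagedBehaviour>();

    public void Register(ManagedBehaviour b)   { behaviours.Add(b); }
    public void Unregister(ManagedBehaviour b) { behaviours.Remove(b); }

    void Update()
    {
        // Single native->managed transition for the whole frame.
        for (int i = 0; i < behaviours.Count; i++)
            behaviours[i].ManagedUpdate();
    }
}

public abstract class ManagedBehaviour : MonoBehaviour
{
    public abstract void ManagedUpdate(); // replaces the per-object Update()
}
```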
To sum up: whether it's faster or not is hard, almost impossible, to figure out for sure. As a general rule, just avoid overusing anything you don't know exactly how it works ^^. For example, if I were creating an RTS game with a lot of different units and buildings, each with its own business, I wouldn't create a coroutine in each instance. I wouldn't even use Update here. In most cases you want to limit the overhead and settle on a relatively low, constant tick rate, say 20 ticks per second (Minecraft, I hear you calling ^^). This tick would be generated by a single manager and distributed manually to the units / buildings. In some cases there might be other optimisations possible (yeah, I know we have DOTS, but that's another topic).
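A single manager generating such a tick could look roughly like this (TickManager and OnTick are made-up names, just a sketch of the idea):

```csharp
using UnityEngine;

// Units/buildings subscribe to OnTick, which fires 20 times per second
// regardless of the frame rate.
public class TickManager : MonoBehaviour
{
    public const float TickRate = 20f;            // ticks per second
    private const float TickInterval = 1f / TickRate;
    private float accumulator;

    public event System.Action OnTick;

    void Update()
    {
        accumulator += Time.deltaTime;
        while (accumulator >= TickInterval)        // catch up if the frame was long
        {
            accumulator -= TickInterval;
            OnTick?.Invoke();
        }
    }
}
```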
Just in case: every yield return new WaitForSeconds(…) is an extra alloc.
If wait values aren't known ahead of time, it's better to use a timer variable and a loop with yield return null.
Otherwise, make a single static utility class that holds those waits to reduce GC pressure (see the sketch below).
Personal preference: a condition in a loop + yield return null.
In the end, I'm pretty sure it's the same under the hood.
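A minimal sketch of such a utility class (the Waits name is made up):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Caches WaitForSeconds instances per duration, so repeated
// "yield return new WaitForSeconds(...)" calls stop allocating.
public static class Waits
{
    private static readonly Dictionary<float, WaitForSeconds> cache =
        new Dictionary<float, WaitForSeconds>();

    public static WaitForSeconds Seconds(float duration)
    {
        if (!cache.TryGetValue(duration, out var wait))
        {
            wait = new WaitForSeconds(duration); // allocated once, reused forever
            cache[duration] = wait;
        }
        return wait;
    }
}

// Usage inside a coroutine:
//     yield return Waits.Seconds(0.5f);
```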
Offtopic:
But to be honest, the whole coroutine idea is bad. In reality you'd want to avoid all allocations, whereas starting a coroutine will result in at least one IEnumerator alloc.
I've ended up writing a custom runner that starts / stops an extra Update / FixedUpdate / LateUpdate / TickUpdate via an interface on the MonoBehaviour, and it has 0 allocs when starting / stopping updates. These methods can be started / stopped at any point in code execution, and there's no need to write or use nasty allocating yield instructions.
So it's basically a coroutine with update logic, not tied to the "enabled" property of the MonoBehaviour, which allows these updatables to be managed separately from each other, e.g. running Update each frame while not running LateUpdate.
And as a bonus, it has a single native entry point, so the "10k updates" overhead is not a problem anymore.
This also helped me properly align logic execution order with hybrid DOTS (DOTS logic is simulated first, then MonoBehaviours run, without interrupting each other).
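A rough sketch of what such a runner could look like (all names are illustrative; the actual implementation described above may differ, and a real one would need to handle removal during iteration):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Objects register for exactly the update phases they need; starting and
// stopping is a plain list add/remove, so there are no per-start allocations.
public interface IUpdatable     { void OnUpdate(); }
public interface ILateUpdatable { void OnLateUpdate(); }

public class UpdateRunner : MonoBehaviour
{
    private readonly List<IUpdatable> updatables = new List<IUpdatable>();
    private readonly List<ILateUpdatable> lateUpdatables = new List<ILateUpdatable>();

    public void StartUpdate(IUpdatable u)         { updatables.Add(u); }
    public void StopUpdate(IUpdatable u)          { updatables.Remove(u); }
    public void StartLateUpdate(ILateUpdatable u) { lateUpdatables.Add(u); }
    public void StopLateUpdate(ILateUpdatable u)  { lateUpdatables.Remove(u); }

    // Single native entry point per phase; everything else is managed calls.
    void Update()
    {
        for (int i = 0; i < updatables.Count; i++)
            updatables[i].OnUpdate();
    }

    void LateUpdate()
    {
        for (int i = 0; i < lateUpdatables.Count; i++)
            lateUpdatables[i].OnLateUpdate();
    }
}
```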
That's exactly the point. If you have sufficiently many instances, this approach will always be better. However, when you have a relatively small number of script instances, it doesn't really matter.
Yes, coroutines in general shouldn't be started / stopped frequently; the best coroutine is a never-ending coroutine. Coroutines are useful when you have many different events, actions and wait times / wait conditions interleaved within a sequence. Using a coroutine in such a case is much simpler, as all the state machine logic is handled by the compiler for you. One example that came to my mind is waiting for UI buttons in a coroutine sequence. Yes, I used a CustomYieldInstruction here, so the coroutine checks every frame whether a button has been pressed, but the overhead is constant no matter how many buttons you're waiting for. This was just a neat workaround to avoid having boolean flags all around your class and callback methods for all the buttons involved. It also manages the subscription of the button actions automatically.
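For illustration, a button-waiting CustomYieldInstruction along those lines could look like this (a sketch; WaitForButton is a made-up name):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Waits for a button click and handles the listener subscription itself,
// so no boolean flags or callback methods are needed in the calling class.
public class WaitForButton : CustomYieldInstruction
{
    private readonly Button button;
    private bool pressed;

    public WaitForButton(Button button)
    {
        this.button = button;
        button.onClick.AddListener(OnClick);
    }

    private void OnClick()
    {
        pressed = true;
        button.onClick.RemoveListener(OnClick); // unsubscribe automatically
    }

    // Polled once per frame by the coroutine scheduler.
    public override bool keepWaiting
    {
        get { return !pressed; }
    }
}

// Usage inside a coroutine:
//     yield return new WaitForButton(confirmButton);
//     Debug.Log("Confirmed!");
```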
I was thinking about creating some more custom yield instructions, like logical OR and AND, to perform "parallel tasks" or wait for the completion of several tasks. This is of course all just for convenience and not meant to be particularly GC friendly. However, things like UI interactions by nature don't happen too often ^^. Being able to construct some sort of expression tree (so a data-driven approach) is often quite useful. Pure performance is not always the No. 1 goal, especially when the performance differences are on a microscopic scale. Yes, it becomes more important when you're dealing with many similar objects at the same time. Though that's the key of software engineering: choosing a suitable tool / solution for the given problem. What's suitable depends on the specific requirements. The requirements can change during development, so a paradigm shift and some refactoring may be necessary. That's why it's important to think about some key aspects and limits ahead of time. That's software design, not just blind programming.
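As a sketch of what such combinators could look like (WaitForAll / WaitForAny are hypothetical names; convenience over GC-friendliness, as noted above):

```csharp
using UnityEngine;

// Completes when ALL children have completed (logical AND of completion).
public class WaitForAll : CustomYieldInstruction
{
    private readonly CustomYieldInstruction[] children;

    public WaitForAll(params CustomYieldInstruction[] children)
    {
        this.children = children;
    }

    public override bool keepWaiting
    {
        get
        {
            // Keep waiting while any child is still waiting.
            foreach (var c in children)
                if (c.keepWaiting) return true;
            return false;
        }
    }
}

// Completes as soon as ANY child has completed (logical OR of completion).
public class WaitForAny : CustomYieldInstruction
{
    private readonly CustomYieldInstruction[] children;

    public WaitForAny(params CustomYieldInstruction[] children)
    {
        this.children = children;
    }

    public override bool keepWaiting
    {
        get
        {
            // Stop waiting as soon as one child is done.
            foreach (var c in children)
                if (!c.keepWaiting) return false;
            return true;
        }
    }
}
```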