Hi everyone.
Since the Unity API is not thread-safe (I believe that’s because most things in Unity have a managed-code component and a native-code component), I’ve been thinking about coroutines.
Now, one of the big problems with coroutines is balancing how often they yield. Yield too often and the coroutine takes forever to complete: a million calculations, which finish almost instantly in a regular method, take four and a half hours(!) if you yield after every calculation at 60FPS (1,000,000 frames / 60 frames per second ≈ 16,667 seconds). Yield too infrequently and the coroutine damages your performance, since it holds up moving to the next frame.
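For contrast, here is a minimal sketch (just for illustration, not from my project) of that naive one-yield-per-calculation pattern:

IEnumerator NaiveCoroutine() {
    // One calculation per frame: at 60FPS, a million iterations
    // means a million frames, i.e. roughly 4.6 hours of wall time.
    for (int i = 0; i < 1000000; i++) {
        float result = Mathf.Log(i, 2);
        yield return null; // yield after every single calculation
    }
}

The usual workaround is to yield every N iterations instead, but picking N by hand is exactly the balancing act described above.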
Since, in my heart of hearts, I really just want threads, I’ve been playing with some code that allocates a time slice to a coroutine. The idea is to figure out how long the frame takes (or rather, assume this frame will take about as long as the last frame and use Time.deltaTime), and then allow the coroutine to take some specified portion of that (e.g. 10%).
In the code below this pretty much works, because although I set startTime in Update(), Unity’s order of execution resumes coroutines immediately after Update(). However, this example has only one Update() and only one coroutine. With more Updates and more coroutines, some time may elapse between setting startTime and resuming the coroutine, so the slice would be measured from the wrong point.
using UnityEngine;
using System.Collections;

public class CoroutineTest : MonoBehaviour {

    public float timeSlice = 0.1f;      // How much of a frame do we let the Coroutine have?
    public float targetFramerate = 60f; // What is our target framerate?

    private float startTime;   // When was Update() called?
    private float allowedTime; // How much time are we letting our Coroutine run?
    private float targetTime;  // How long does one frame at the target framerate last?

    void Start() {
        print("Start");
        /* We need time differences WITHIN a frame, so realtimeSinceStartup is our only
         * option among Time's members. Per the documentation it may not work well on
         * all platforms(?) */
        startTime = Time.realtimeSinceStartup;
        allowedTime = timeSlice / targetFramerate;
        targetTime = 1f / targetFramerate;
        StartCoroutine(MyCoroutine());
    }

    void Update() {
        print("New frame: " + Time.frameCount);
        /* This bit takes some explanation. If we only wanted the Coroutine to take 10% of
         * each frame, we would just do frameTime * timeSlice. But we actually want the
         * Coroutine to take more of the frame if our framerate is above our target
         * framerate (i.e. we have time to spare), and less of the frame if our framerate
         * is below our target framerate (i.e. performance is at a premium). So
         * (targetTime / frameTime) will be 1 at our target framerate, >1 at higher
         * framerates, and <1 at lower framerates, and acts as a multiplier for
         * timeSlice. (Note that frameTime cancels out, so this is just
         * timeSlice * targetTime, the same constant computed in Start().) */
        float frameTime = Time.deltaTime;
        allowedTime = frameTime * timeSlice * (targetTime / frameTime);
        // What time is it now, just before our Coroutine resumes?
        startTime = Time.realtimeSinceStartup;
    }

    IEnumerator MyCoroutine() {
        int passes = 0;
        int target = 1000000;
        while (passes < target) {
            /* Do processing here. I've chosen Mathf.Log since it's a simple yet expensive
             * call that, under normal circumstances, you don't want to do too often. Just
             * for demonstration ;) */
            float result = Mathf.Log(passes, 2);
            print("Log base 2 of " + passes + " = " + result);
            passes++;
            // Yield if the Coroutine has used up its allowance for this frame.
            if (Time.realtimeSinceStartup - startTime > allowedTime) {
                yield return null;
            }
        }
    }
}
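One variation I’ve been toying with (just a sketch, untested) addresses that last point: instead of relying on Update() running immediately before the coroutine resumes, the coroutine stamps its own start time right after each yield. MyCoroutineSelfTimed() is a hypothetical drop-in replacement for MyCoroutine() in the same class, still reading the allowedTime field; sliceStart is a local I’ve introduced for illustration:

IEnumerator MyCoroutineSelfTimed() {
    float sliceStart = Time.realtimeSinceStartup;
    for (int passes = 0; passes < 1000000; passes++) {
        float result = Mathf.Log(passes, 2);
        // (print omitted here; logging a million lines would swamp the timing anyway)
        if (Time.realtimeSinceStartup - sliceStart > allowedTime) {
            yield return null;
            // We just resumed: restart the clock from this exact moment,
            // independent of when any Update() ran.
            sliceStart = Time.realtimeSinceStartup;
        }
    }
}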
So, does anyone have any ideas for improving this?