Currently I use
float done = Time.time + delay;
while(Time.time < done) yield return 0;
as a replacement for
yield return new WaitForSeconds(delay);
because it works with different time scales.
Is it more expensive? Is there more overhead? If so, how much, relatively speaking? Is it just the same, just without constructing an extra class?
Please do not respond if you're just guessing.
If you look into .NET assemblies you'll see that each "yield"ing method is compiled into a generated class: your local variables become instance fields, your method body is injected into its MoveNext() method, and the control flow is rewritten with IL-level gotos. That generated object is what the IEnumerator handle is actually pointing at.
So the overhead is negligible in this case — you're not creating a thread or some other expensive system resource, just another vanilla method call each frame.
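To make the above concrete, here is a rough, hand-written sketch of the kind of state machine the compiler generates for the asker's loop. All names here are illustrative — the real generated class is compiler-internal and unnameable — but the shape (state field, hoisted locals, switch in MoveNext()) matches what you see when decompiling:

```csharp
using System;
using System.Collections;
using UnityEngine;

// Hypothetical equivalent of what the compiler emits for:
//   IEnumerator Wait(float delay) {
//       float done = Time.time + delay;
//       while (Time.time < done) yield return 0;
//   }
class Wait_StateMachine : IEnumerator
{
    int state;           // which resume point MoveNext() jumps to
    public float delay;  // captured parameter, hoisted to a field
    float done;          // local variable, hoisted to a field

    public object Current { get; private set; }

    public bool MoveNext()
    {
        switch (state)
        {
            case 0: // first call: run code before the loop
                done = Time.time + delay;
                goto case 1;
            case 1: // each subsequent frame: re-check the condition
                if (Time.time < done)
                {
                    Current = 0;   // the yielded value (boxed int)
                    state = 1;     // resume at the same check next time
                    return true;   // "not done yet"
                }
                return false;      // coroutine finished
        }
        return false;
    }

    public void Reset() { throw new NotSupportedException(); }
}
```

Each frame, Unity's scheduler just calls MoveNext() on this object — a plain virtual method call plus a float comparison, which is why the per-frame cost is so small.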
When you use WaitForSeconds, Unity checks your coroutine's timer at the end of each frame and sees it isn't time to resume it yet. When you use your own loop to check the time, Unity instead has to execute a few instructions of your script each frame. There isn't much of a difference unless you use it on many objects (on iPhone, say). The WaitForSeconds class is so lightweight that constructing one doesn't take much time; the manual loop might end up slower by something on the order of 0.05 seconds. You can test things like this by attaching the two different scripts to more than 5000 GameObjects and taking a look at CPU usage and FPS.
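The benchmark suggested above could be sketched like this — two components, one per variant, attached to a few thousand GameObjects each (class names are made up; compare the two setups in the Profiler or via the FPS counter):

```csharp
using System.Collections;
using UnityEngine;

// Variant A: let Unity's scheduler do the waiting.
public class WaitWithClass : MonoBehaviour
{
    IEnumerator Start()
    {
        while (true)
            yield return new WaitForSeconds(1f); // one small allocation per wait
    }
}

// Variant B: re-check the clock ourselves every frame.
public class WaitWithLoop : MonoBehaviour
{
    IEnumerator Start()
    {
        while (true)
        {
            float done = Time.time + 1f;
            while (Time.time < done)
                yield return null; // MoveNext() runs every frame during the wait
        }
    }
}
```

Spawn ~5000 objects with variant A, measure, then repeat with variant B; any difference between the two readings is the overhead in question.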
It depends entirely on "delay". If "delay" is 10, then the first code will be creating 500 yields (at 50 Hz) during those 10 seconds, instead of one for the latter code. If "delay" is 0.02, they'll be equivalent. As to whether it's negligible with a large "delay", that will depend on how many of these coroutines you have running (for 20 or so it's certainly negligible, since it's only 20 resumptions per frame). But your question is a good answer to "Does WaitForSeconds work for different timescales?"... which is what I was looking for — thanks!
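The per-frame accounting above can be folded into a small reusable helper; a minimal sketch (the `WaitScaled` name and the static class are my own, not a Unity API):

```csharp
using System.Collections;
using UnityEngine;

public static class CoroutineWaits
{
    // Yields once per frame until `delay` seconds of (scaled) game time
    // have passed. At 50 Hz, WaitScaled(10f) resumes roughly 500 times,
    // while WaitScaled(0.02f) resumes about once — this is the entire
    // extra cost compared to a single WaitForSeconds.
    public static IEnumerator WaitScaled(float delay)
    {
        float done = Time.time + delay;
        while (Time.time < done)
            yield return null;
    }
}

// Usage inside a MonoBehaviour coroutine:
//   yield return CoroutineWaits.WaitScaled(2f);
```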