# What is the fastest way to work a coroutine before yielding to maintain 60 FPS?

I am procedurally generating levels at runtime and I’m making a lot of ray casts to check for empty space and whatnot. Is there any clever way to throttle the coroutine to give it as much speed as possible without compromising framerate?

I’ve gotten it down to either:
A) Freeze the game for a few seconds to do the work
or
B) Do the work x iterations at a time, which takes far longer than it needs to, because I don’t know how many iterations I can do without affecting framerate. It is just a guess.

I could take a guess at B, but it feels wrong. I don’t want to guess. I want to push it to the limit dynamically. Any ideas? I’ve tried messing around with Time and deltaTime and I wasn’t able to solve the issue today. Maybe tomorrow plus some helpful advice from the community.
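To make option B concrete, here is the shape of the loop as a minimal sketch (a Python generator standing in for the Unity coroutine; the chunk size of 50 and `do_one_iteration` are made-up placeholders):

```python
def do_one_iteration(item):
    # Placeholder for the real per-item work, e.g. a ray cast
    # checking for empty space.
    pass

def generate_level(work_items, iterations_per_frame=50):
    # Option B: do a fixed number of iterations, then yield.
    # In Unity this would be a C# coroutine doing `yield return null`
    # each frame; a generator plays the same role here. The chunk
    # size (50) is exactly the kind of blind guess I want to avoid.
    done = 0
    for item in work_items:
        do_one_iteration(item)
        done += 1
        if done % iterations_per_frame == 0:
            yield  # hand control back; resume next frame
```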

I will state another way for clarity: How do I determine programmatically how many iterations I can get away with before yielding, maintaining about 60 frames per second?

This is an interesting problem I have been thinking about, so here are my ramblings. I would be keen to see what other folks think.

> Is there any clever way to throttle the coroutine to give it as much speed as possible without compromising framerate?

Now I don’t know, but it should be solvable in an abstract way and would make a great library. But:

• if it’s a performance-sensitive issue, the amount of overhead might shoot the idea in the foot, or
• if it’s not performance-sensitive, it doesn’t really matter, so a super-conservative best guess will do

In terms of implementation, we never know how heavily loaded a machine may be at any given point in time. But we can find out the application’s FPS over a set time interval.
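Sampling the FPS over a set interval is just counting frames against a wall clock. A minimal sketch (the class name and one-second window are my own invention; in Unity you would call `tick()` once per frame from `Update()`):

```python
import time

class FpsSampler:
    """Counts frames over a fixed wall-clock window and reports the average."""

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.start = time.perf_counter()
        self.frames = 0
        self.fps = 0.0

    def tick(self):
        # Call once per frame. When the window elapses, publish the
        # average FPS for that window and start counting afresh.
        self.frames += 1
        elapsed = time.perf_counter() - self.start
        if elapsed >= self.window:
            self.fps = self.frames / elapsed
            self.frames = 0
            self.start = time.perf_counter()
```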

So, “in the lab” we can deduce a “happy medium”: given our test FPS of X we can tweak the work-to-do-per-frame Y until we get something performant enough. Then, “in the wild”, we can sample the FPS and extrapolate a new Y for whatever framerate we encounter.
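That extrapolation could be as crude as linear scaling: if the wild machine runs our baseline at half the lab FPS, give it half the lab’s per-frame workload. A minimal sketch (the lab FPS of 60 and lab workload of 200 iterations are placeholder numbers you would measure yourself):

```python
def work_per_frame(observed_fps, lab_fps=60.0, lab_work=200, floor=1):
    """Scale the calibrated per-frame workload by how the observed
    framerate compares to the lab framerate. Crude and linear: a
    machine running at half the lab FPS gets half the iterations.
    The floor guarantees we always make some progress.
    """
    scale = observed_fps / lab_fps
    return max(floor, int(lab_work * scale))
```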

Issues I see are: