What is the most work a coroutine can do before yielding while maintaining 60 FPS?

I am procedurally generating levels at runtime and I’m making a lot of ray casts to check for empty space and whatnot. Is there any clever way to throttle the coroutine to give it as much speed as possible without compromising framerate?

I’ve gotten it down to either:
A) Freeze the game for a few seconds to do the work
or
B) Do the work x iterations at a time, which takes far longer than it needs to because I don’t know how many iterations I can do without affecting framerate. It is just a guess.

I could take a guess at B, but it feels wrong. I don’t want to guess; I want to push it to the limit dynamically. Any ideas? I’ve tried messing around with Time and deltaTime and I wasn’t able to solve the issue today. Maybe tomorrow, with some helpful advice from the community.

I will state it another way for clarity: how do I determine programmatically how many iterations I can get away with before yielding, while maintaining about 60 frames per second?

This is an interesting problem I have been thinking about, so here are my ramblings. I would be keen to see what other folks think.

“Is there any clever way to throttle the coroutine to give it as much speed as possible without compromising framerate?”

Now I don’t know of one, but it should be solvable in an abstract way and would make a great library. But:

  • if it’s a performance-sensitive issue, the amount of overhead might shoot the idea in the foot, or
  • if it’s not performance-sensitive, it doesn’t really matter, so just do a super-conservative best guess

In terms of implementation, we never know how heavily loaded a machine may be at any given point in time. But we can find out the application’s FPS over a set time interval.

So, “in the lab” we can deduce a “happy medium”: given our test FPS of X, we can tweak the work-to-do-per-frame Y until we get something performant enough. Then “in the wild” we can sample the FPS and extrapolate a new Y for whatever framerate we encounter.
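Very roughly, the extrapolation could look like the sketch below. The numbers and names (referenceFps, referenceBatchSize, the clamp range) are made up; they stand in for whatever values you calibrate yourself in the lab.

using UnityEngine;

// Sketch only: scale a lab-calibrated batch size by the framerate we actually see.
// referenceFps / referenceBatchSize are hypothetical values from offline tuning.
public static class BatchSizer
{
    const float referenceFps = 60f;      // the FPS we tested at
    const int referenceBatchSize = 200;  // iterations per frame that felt OK at that FPS

    public static int CurrentBatchSize(float smoothedFps)
    {
        float scale = smoothedFps / referenceFps;             // crude linear scaling
        int size = Mathf.RoundToInt(referenceBatchSize * scale);
        return Mathf.Clamp(size, 1, referenceBatchSize * 4);  // never stall, never run away
    }
}

The linear scaling is the shakiest assumption here, which is one of the issues listed below.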

Issues I see are:

  • this all adds overhead
  • FPS is constantly moving, so we need a sampling window; a smaller window tracks changes more closely but costs more overhead, so it’s never going to be perfect (see the sampler sketch after this list)
  • the work-to-do per iteration may also be constantly changing, which may or may not be a problem
  • I don’t think the relationship stays linear as the FPS approaches 0
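For the sampling window, something as simple as an exponential moving average of unscaled frame time is cheap enough. The smoothing value below is a placeholder standing in for the window-size trade-off, not a recommendation.

using UnityEngine;

// Sketch of a cheap FPS sampler: an exponential moving average of frame time.
// The smoothing factor is the window-size trade-off mentioned above.
public class FpsSampler : MonoBehaviour
{
    [Range(0.01f, 1f)] public float smoothing = 0.1f; // higher = shorter effective window

    float smoothedDelta = 1f / 60f;

    public float SmoothedFps => 1f / smoothedDelta;

    void Update()
    {
        // unscaledDeltaTime so time-scale changes don't skew the estimate
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.unscaledDeltaTime, smoothing);
    }
}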

I am procedurally generating thousands of meshes with LoD variants, so I am hitting something very like this problem. I amortize/batch the work over several frames and just best-guess the batch size. As it’s setup work, not in-game, it’s more of a “nice to have” than a critical feature for me.
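For what it’s worth, the batching I mean is just the usual pattern below. The names are placeholders and GenerateOnePiece stands in for the real work.

using System.Collections;
using UnityEngine;

// Sketch of the amortized/batched pattern: do a guessed number of iterations,
// then yield a frame. GenerateOnePiece() stands in for the real work.
public class LevelBuilder : MonoBehaviour
{
    public int batchSize = 200;          // the "best guess" (or feed in an extrapolated value)
    public int totalIterations = 100000; // however much work you actually have

    IEnumerator BuildLevel()
    {
        for (int i = 0; i < totalIterations; i++)
        {
            GenerateOnePiece(i);

            // Hand control back to Unity every batchSize iterations.
            if (i % batchSize == batchSize - 1)
                yield return null;
        }
    }

    void GenerateOnePiece(int i)
    {
        // placeholder for the actual raycasts / mesh generation
    }
}

An alternative that avoids guessing the count at all is to time the loop itself, e.g. check Time.realtimeSinceStartup or a System.Diagnostics.Stopwatch inside the loop and yield once a per-frame millisecond budget is spent. That answers the original question more directly, at the cost of a little timing overhead.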

As to your specific use case: as you are generating the geometry, can you not use geometry to find your spaces rather than physics, rays, etc.? i.e. finding whether areas of geometry share boundaries with or clip other areas (there are libraries out there), or abstracting it out into a grid or similar model?

I’m making thousands of buildings from real-world city map data, so I have to use geometry to find overlapping buildings etc.; physics would take literally hours to do the same work.
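To give a flavour of the kind of geometry test I mean (the Building struct is made up, and axis-aligned bounds only give a coarse first pass before a proper boundary/clipping check):

using System.Collections.Generic;
using UnityEngine;

// Coarse overlap pass using axis-aligned bounds; a real boundary/clipping test
// would follow for the pairs that survive. Building is a made-up stand-in type.
public struct Building
{
    public Bounds bounds;
}

public static class OverlapFinder
{
    public static List<(int, int)> FindOverlappingPairs(IList<Building> buildings)
    {
        var pairs = new List<(int, int)>();
        for (int a = 0; a < buildings.Count; a++)
            for (int b = a + 1; b < buildings.Count; b++)
                if (buildings[a].bounds.Intersects(buildings[b].bounds))
                    pairs.Add((a, b));
        return pairs;
    }
}

For thousands of buildings you would bucket the bounds into a grid or spatial hash first so you are not testing every pair, which is the “grid or similar model” idea above.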