Physics processing makes the build run very slowly (~5 FPS) but it runs fine (50-60 FPS) in the editor.

The title pretty much says it all. This is a brand-new issue in a project I’ve been working on (without problems, and with several perfectly fine builds) for the last few months.

Basically, the game runs around 50-60 FPS in the editor, but about 10x slower in builds. The culprit (according to the profiler) is a massive difference in the cost of the “physics processing” category between running in the editor and running in a build. It spikes from 5-10ms when playing in the editor to 100-200ms per frame when playing in a build. What’s the explanation for this?

Profiler output is here - you can easily spot in the timeline when I switched over from running in the editor to running the debug build!

Edit: Reuploading the profiler window screenshot:

How much is in YOUR actual FixedUpdate()? I don’t think you showed that above. Maybe keep scrolling down in the profiler hierarchy?

It’s super-easy to do “too much” in a FixedUpdate() and horribly bog the game down to 5 fps. I did this in my Jetpack Kurt game: on certain devices it would just SLAM into a wall, drop to 5 fps and stay stuck there, while other devices ran fine. It turned out I was spending 100+ms in my FixedUpdate(), and once I streamlined that, it was all fine.
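If it helps, here’s a minimal sketch (the component name and the warning threshold are made up, not code from either project) of how to measure that directly instead of guessing, by timing the body of FixedUpdate() against the physics timestep budget:

```csharp
using System.Diagnostics;
using UnityEngine;

// Hypothetical diagnostic component: times the work done in FixedUpdate()
// and warns when it exceeds the physics timestep budget.
public class FixedUpdateTimer : MonoBehaviour
{
    readonly Stopwatch stopwatch = new Stopwatch();

    void FixedUpdate()
    {
        stopwatch.Restart();

        // ... your actual per-physics-step work goes here ...

        stopwatch.Stop();
        float budgetMs = Time.fixedDeltaTime * 1000f; // 20 ms at the default 0.02 timestep
        if (stopwatch.Elapsed.TotalMilliseconds > budgetMs)
        {
            UnityEngine.Debug.LogWarning(
                $"FixedUpdate took {stopwatch.Elapsed.TotalMilliseconds:F1} ms (budget {budgetMs:F1} ms)");
        }
    }
}
```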

ALSO: beware that editor profiling is not really representative of how a build will ultimately behave.


I think you’re asking about the FixedUpdate coming from my scripts, correct (FixedUpdate.ScriptRunBehaviourFixedUpdate in the hierarchy)? If so, here is the screencap of that from playing in the editor - typically it is 9-12% at most.

It is also in the 9-12% range in the build.

Fair point; however, the editor profiling almost identically matched the performance of my last 7-8 builds over the past few months. There is now a divergence between the builds and the editor: the editor runs at a solid 50-60 fps as it always has, but the builds are performing much worse. That Physics.Processing category now costing MUCH more in builds (where it didn’t previously) seems to be the culprit - if I can figure out why, that should unravel the mystery.

It’s not so much that it costs much more as that you’ve reached the point where Unity can’t get it done in 1/nth of a second (the physics step rate), so it just backs up, clogs, and comes to a stop, like a freeway at rush hour.

For instance, if you have 50fps physics updates and begin to take a substantial part of that 50th of a second to do your FixedUpdate() stuff, it will suddenly SLAM into a wall as Unity spends basically all its time pumping FixedUpdate() and can hardly squeeze any Update() calls in.
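You can actually watch that backlog happen. Here’s a tiny sketch (a hypothetical component, not from either of our projects) that counts how many FixedUpdate() calls run before each rendered frame - once that count climbs well above 1 and stays there, you’re in the spiral:

```csharp
using UnityEngine;

// Hypothetical diagnostic: counts physics steps per rendered frame.
// At the default 0.02 fixed timestep and ~50-60 fps rendering you expect roughly 1;
// sustained counts of 3, 5, 10+ mean physics can't keep up with real time.
public class PhysicsBacklogMonitor : MonoBehaviour
{
    int fixedStepsThisFrame;

    void FixedUpdate()
    {
        fixedStepsThisFrame++;
    }

    void Update()
    {
        // In Unity's execution order, all pending FixedUpdate calls run before Update.
        if (fixedStepsThisFrame > 1)
        {
            Debug.Log($"{fixedStepsThisFrame} physics steps ran before this frame");
        }
        fixedStepsThisFrame = 0;
    }
}
```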

I’m going to guess that if you were to just comment out a few of your biggest FixedUpdate() contenders, even ones that seem to take only a tiny fraction, you would suddenly unlogjam it and get back full performance. It’s extremely non-linear.

If you did comment a few things out and prove this, you would know that’s the mechanism in play, and be able to make further reasoned decisions about unpacking things from FixedUpdate() that don’t really need to be in there.
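Something like the sketch below (the flag names are made up purely for illustration) makes that kind of bisection painless - flip the toggles in the Inspector, re-run or rebuild, and watch what the frame rate does:

```csharp
using UnityEngine;

// Hypothetical example: gate suspect FixedUpdate() work behind toggles so you
// can switch pieces off one test run at a time and see which ones matter.
public class SuspectPhysicsWork : MonoBehaviour
{
    [SerializeField] bool runExpensiveQueries = true;   // made-up flags,
    [SerializeField] bool runProjectileChecks = true;   // for illustration only

    void FixedUpdate()
    {
        if (runExpensiveQueries)
        {
            // e.g. overlap/raycast batches, pathfinding, per-object scans...
        }

        if (runProjectileChecks)
        {
            // e.g. per-projectile physics queries...
        }
    }
}
```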

I see, very interesting! Unbelievably, I got about 30 fps back by following this: Gfx.waitForPresent causing massive lag spikes in editor - Questions & Answers - Unity Discussions

Just to clarify, specifically this step:

"If you are on MacOS disabling Metal might help with this. My Mac mini with integrated graphics was chugging along on simple scenes between 45-60fps. After disabling Metal my FPS is in the hundreds!

Project settings > Player > Other settings > Rendering > Auto Graphics API For Mac (uncheck). Then drag OpenGLCore above Metal in the list."
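If you try this, it’s worth confirming which API the build actually ended up using. A quick sketch (the component name is hypothetical) that logs the active graphics API at startup:

```csharp
using UnityEngine;

// Hypothetical startup check: logs which graphics API the player actually picked,
// so you can confirm the build really switched from Metal to OpenGLCore.
public class GraphicsApiLogger : MonoBehaviour
{
    void Start()
    {
        Debug.Log($"Graphics API: {SystemInfo.graphicsDeviceType}, " +
                  $"device: {SystemInfo.graphicsDeviceName}");
    }
}
```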

The crazy physics cost is gone now and has been replaced almost 1:1 with a very high rendering cost. The build still performs terribly, though: I’m pulling 33 FPS in the macOS build at the lowest quality settings versus 51 FPS in the editor, according to both the Stats panel and my FPS checker script.

Vsync is off. I’ve cut the scene down to 4 models and 250k triangles. Each one has a single material using a 32x32 texture. I have a single real-time directional light and that’s it. I should be getting a much higher framerate with these settings. Most of the time in the hierarchy (40-50ms per frame) is burned in Semaphore.WaitForSignal, which suggests we’re waiting on the GPU. What’s going on here?
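For reference, here’s roughly the kind of startup logging I’m using to rule out a vsync or frame-cap mismatch between the editor and the build (a sketch with a made-up name, not my exact FPS checker), since a main thread stuck in Semaphore.WaitForSignal is generally waiting on the GPU or the display sync:

```csharp
using UnityEngine;

// Hypothetical startup check: logs the settings that commonly explain a
// Semaphore.WaitForSignal stall in a build (vsync, frame cap, resolution, quality).
public class BuildSettingsLogger : MonoBehaviour
{
    void Start()
    {
        Debug.Log($"vSyncCount: {QualitySettings.vSyncCount}, " +
                  $"targetFrameRate: {Application.targetFrameRate}, " +
                  $"resolution: {Screen.currentResolution}, " +
                  $"quality level: {QualitySettings.GetQualityLevel()}");
    }
}
```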