Upgrading PC hardware to 32 cores decreased FPS from 130 to 50

For a while I was developing on an i7-9700K, and most of my levels would run at around 130-180 FPS. I recently upgraded to an AMD Threadripper 3970X (32 cores, running at 4.3 GHz) and went from a hybrid HDD to a Samsung 970 EVO Plus. My graphics card remained the same, a GTX 1080 Ti. On the new system I now get around 50-60 FPS with the same levels. The CPU runs at 8% load across all cores and any single core is around 25%, while the GPU runs at 23-25% capacity. The RAM was also upgraded from 32 GB at 3200 MHz to 64 GB at 3200 MHz.

I also went from the free version to the Pro version.
I am wondering what might be causing this much of a performance difference. Are there any settings that would throttle the hardware to only use a certain amount? I'm also confused because I don't seem to see any bottlenecks in the CPU or GPU. What might be going on here?

Is your code not optimized to be multithreaded? If so, only a single core of the 32 is being used to run your game.

And is that single core slower at single-threaded work than your last CPU?

Is this related to the Unity beta? Have you tested whether performance is fine in non-beta releases?

If your game doesn’t use jobs or threads, all your scripting will run on a single thread, and that i7 has faster per-core performance than the Threadripper. And while Unity itself will use multiple threads for many of its underlying systems, it won’t scale anywhere close to using 32 cores.
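If you want to see what moving work onto the job system looks like, here's a rough IJobParallelFor sketch (the job, class name, and data are made up purely for illustration; this is not your code):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Hypothetical example: squares an array of values on Unity's worker threads
// instead of looping over it on the main thread in Update().
public class ParallelSquareExample : MonoBehaviour
{
    struct SquareJob : IJobParallelFor
    {
        public NativeArray<float> values;

        public void Execute(int index)
        {
            values[index] = values[index] * values[index];
        }
    }

    void Update()
    {
        var data = new NativeArray<float>(100000, Allocator.TempJob);
        var job = new SquareJob { values = data };

        // Schedule in batches of 64; Unity distributes the batches across worker threads.
        JobHandle handle = job.Schedule(data.Length, 64);
        handle.Complete(); // wait for the workers before reading the results

        data.Dispose();
    }
}
```

Plain MonoBehaviour Update/FixedUpdate code never gets this treatment automatically, which is why core count alone doesn't buy you frame rate.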

I have not tested single-thread performance between the two myself, but from what I see on benchmark result websites they are very similar, and at https://www.cpubenchmark.net/singleThread.html they even give the Threadripper an advantage. Even if the i7 did have faster single-thread performance (which I would actually expect), I don’t think it would be drastically different considering the clock speeds of each. Certainly not to the extent of explaining a drop from roughly 145 FPS average down to 50. If it were solely due to CPU performance I would expect a marginal difference, which leads me to think something else is causing such a large gap.

Also, I am a little confused about why the processor isn't even running at half load.

In addition to that, the profiler is showing higher numbers as well: https://jake-bigrookgames.tinytake.com/tt/Mzk2NDk4NV8xMjE3NzI5NA
but the FPS in the Stats window shows much, much lower.
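To double-check the numbers independently of both the Stats window and the Profiler, something like this quick logger (script name is mine, just a sketch) based on Time.unscaledDeltaTime can be dropped into the scene:

```csharp
using UnityEngine;

// Hypothetical sanity-check script: averages frame time over roughly one second
// and logs the FPS, independent of the Stats overlay or the Profiler.
public class FpsLogger : MonoBehaviour
{
    float accumulated;
    int frames;

    void Update()
    {
        accumulated += Time.unscaledDeltaTime;
        frames++;

        if (accumulated >= 1f)
        {
            Debug.Log($"Average FPS over last second: {frames / accumulated:F1}");
            accumulated = 0f;
            frames = 0;
        }
    }
}
```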

It does sound like a bug in Unity, then. I know you can wreck performance on these new AMD CPUs if you mess around with thread priority settings and the like. It’s also possible Unity is doing some basic “if Intel” checks and behaving differently, since the CPU is not being taxed.

It’s probably just Unity vsync. Turn it off (and yes, Unity will report 50-70 FPS with vsync on in the editor due to its poor frame timings; in reality it’s way higher). Make a build with no vsync and time it, or disable vsync in the editor Game window (YMMV).
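If you'd rather rule vsync out from script than through the Quality settings UI, something like this does it (class name is made up, just a sketch):

```csharp
using UnityEngine;

public class UncapFrameRate : MonoBehaviour
{
    void Awake()
    {
        // 0 = don't wait for vertical sync on the current quality level.
        QualitySettings.vSyncCount = 0;

        // -1 = run as fast as possible (only meaningful when vsync is off).
        Application.targetFrameRate = -1;
    }
}
```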

You changed your entire computer, which means you reinstalled Unity with completely default options…

Most of the issues like this that I see come from people just eyeballing the Stats window (which is broken garbage) with various random vsync settings, in an editor that tries its hardest not to update in order to save laptop battery.

The GPU is pretty good, so I can't imagine it being slower, but if it turns out that it is after investigating, please post a bug report case number in the thread so Unity can test for that CPU :)

Didn’t see a 2019.3 beta related issue so moved to general support.


After a little more digging I found that the time is coming from RenderTexture on the GPU, which oddly enough is the only piece I didn't change in the upgrade.

So I am troubleshooting further, but disabling all of the meshes in the scene doesn't seem to change the RenderTexture time at all.

In a mostly blank scene it is taking 36 ms.
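To narrow down where that RenderTexture work is coming from, one thing I'm trying is dumping every RenderTexture that's loaded, plus any camera that renders into a target texture (rough sketch, the script name is mine; note that FindObjectsOfTypeAll also returns loaded assets, not just scene objects):

```csharp
using UnityEngine;

// Quick diagnostic sketch: logs every RenderTexture currently loaded and any
// camera with a target texture, to spot unexpected offscreen rendering.
public class RenderTextureAudit : MonoBehaviour
{
    void Start()
    {
        foreach (var rt in Resources.FindObjectsOfTypeAll<RenderTexture>())
        {
            Debug.Log($"RenderTexture: {rt.name} {rt.width}x{rt.height} format={rt.format}");
        }

        foreach (var cam in Resources.FindObjectsOfTypeAll<Camera>())
        {
            if (cam.targetTexture != null)
            {
                Debug.Log($"Camera '{cam.name}' renders into '{cam.targetTexture.name}'");
            }
        }
    }
}
```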