I ended up NOT upgrading to the 3950X, so I still have the system in its basic form. I realized I'm okay with it: this CPU's performance is plenty for my use case, and it runs everything without a glitch.
I usually don't "test" my computers; I don't care about synthetic performance, only real-world performance. Nothing has made it sweat so far. It got a bit hot here in Northern California for a couple of days, but the cooling on the system is plenty and the semi-open case helps a lot, I guess. I also ended up not installing one of the Noctuas because there's already plenty of airflow, and I'm using that space for the cables. (The case is a little tight for this config, but everything ultimately fits. I did have to bend the power supply holder a bit to install the video card, but once everything is in there you can bend it back.)
I love my 3950X but wouldn't buy it now; I would wait for Zen 3. What's nice is that AMD will let us upgrade on X570, so I will probably replace my 3950X when Zen 3 releases. I don't think we should hope for more than 16 cores on AM4 though: it's only dual channel, and a 24-core part would compete with their own 3960X.
It's also just fun seeing all those 32 threads at work.
I quickly tested an i7-2600, an i7-6700k and a Threadripper 3960X for a few common tasks. (All running at stock speeds with the fastest supported RAM from their respective product pages)
Wall of text ahead
Tests were made in Unity 2020.2.0a15 with a small 2D URP project. (Windows 10 Pro, OpenGL 4.5, a few scripts, some externally compiled C# class libraries as imported DLLs and a few different simple sprite shaders)
Things that benefited from multiple cores in my tests:
- Shader compilation
- The CPU lightmapper
- Texture compression (especially crunching; see the import-setting sketch after this list)
- Build time (IL2CPP)
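As a side note, crunching is just a texture import setting. Here's a minimal sketch of forcing it project-wide with an AssetPostprocessor (hypothetical class name and quality value; it needs to live in an Editor folder):

```csharp
using UnityEditor;

// Hypothetical example: force crunch compression for every imported texture.
// Must be placed in an Editor folder so it compiles as an editor-only script.
public class CrunchAllTextures : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        var importer = (TextureImporter)assetImporter;
        importer.crunchedCompression = true; // crunching is the multi-core heavy part
        importer.compressionQuality = 50;    // 0-100; higher = better quality, slower import
    }
}
```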
Most things didn’t really scale with more than 4 cores however:
- Asset import time (except texture compression)
- Script compilation time
- External DLL import/reload time
- Build time (Mono)
If you are compiling many shaders really often, more cores are a good idea. An initial build with shader compilation took the i7-2600 107 seconds, the TR 3960X only 43 seconds. Even the 3960X spiked to almost 90% CPU usage for a while.
However, once the shaders were cached after the first build, the difference was a lot smaller: 20 seconds for the i7-2600, 15 for the i7-6700k and 13 for the TR 3960X. (Mono Build on Windows)
(Screenshot: CPU usage of the i7-2600 during a Mono build with cached shaders)
For IL2CPP builds (with already cached shaders & low code stripping) things are a bit different. The i7s used all 4 cores/8 threads at 100% for the initial build, while the 3960X was at 25% total usage at most. The i7-2600 took 207 seconds, the i7-6700k 109 seconds and the 3960X only 47 seconds. (This one almost looks like it was cached already)
After the first IL2CPP build was done, subsequent builds were faster again: 76 seconds for the i7-2600, 58 seconds for the i7-6700k and 44 seconds for the TR 3960X. (Tested 3x and took the average; ±2 seconds max difference for all of them)
The CPU usage was really similar to Mono builds, hitting 100% for a few seconds in the beginning and going down to 15% on all cores except one. (This one core was at around 80% instead of 15% like the others)
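If you want to repeat these build timings, something like the following editor script is roughly what an IL2CPP build with low stripping looks like when kicked off from code (a sketch only; the scene list and output path are placeholders, and the class has to live in an Editor folder):

```csharp
using System.Diagnostics;
using UnityEditor;
using UnityEditor.Build.Reporting;

// Hypothetical editor script; adjust scenes and output path for your project.
public static class TimedIl2CppBuild
{
    [MenuItem("Tools/Timed IL2CPP Build")]
    public static void Run()
    {
        // IL2CPP backend with low managed code stripping, as in the tests above.
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.Standalone,
            ScriptingImplementation.IL2CPP);
        PlayerSettings.SetManagedStrippingLevel(BuildTargetGroup.Standalone,
            ManagedStrippingLevel.Low);

        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/SampleScene.unity" }, // placeholder
            locationPathName = "Builds/Win64/TestBuild.exe",      // placeholder
            target = BuildTarget.StandaloneWindows64,
            options = BuildOptions.None
        };

        var sw = Stopwatch.StartNew();
        BuildReport report = BuildPipeline.BuildPlayer(options);
        sw.Stop();

        UnityEngine.Debug.Log($"{report.summary.result} in {sw.Elapsed.TotalSeconds:F0} s");
    }
}
```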
The i7s were pretty close to their expected single-threaded performance difference in most cases. In plain benchmarks the TR 3960X is about 10-15% faster than the i7-6700k in single- and dual-core workloads, while the i7-6700k is 35-40% faster than the i7-2600. I noticed about the same difference between them in Unity for DLL import times.
Reimporting/overwriting an external .Net 4.5 class library (as DLL) took 15 seconds with the i7-2600, 9 seconds with the i7-6700k (40% faster!) and 7-8 seconds (~15% faster) with the TR 3960X.
According to the Windows 10 Task Manager, DLL imports are using only 1-2 threads most of the time. (1 second 100% spike at the end) So this task does not benefit from multiple cores at all.
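If you want to reproduce this without overwriting the file by hand, a forced reimport from an editor script gives roughly the same workload (a sketch; the DLL path is a placeholder, and the stopwatch only covers the synchronous import call, not the domain reload the editor does afterwards):

```csharp
using System.Diagnostics;
using UnityEditor;

// Hypothetical helper (Editor folder); the DLL path below is a placeholder.
public static class DllReimportTimer
{
    [MenuItem("Tools/Force DLL Reimport")]
    public static void Run()
    {
        const string dllPath = "Assets/Plugins/MyLibrary.dll";

        var sw = Stopwatch.StartNew();
        AssetDatabase.ImportAsset(dllPath, ImportAssetOptions.ForceUpdate);
        sw.Stop();

        UnityEngine.Debug.Log($"Import call took {sw.Elapsed.TotalSeconds:F1} s");
    }
}
```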
Similar result for script changes. Adding a simple Debug.Log line in a script took the editor 5 seconds to refresh with the i7-2600, 3 seconds with the i7-6700k and 2 seconds with the TR 3960X.
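Concretely, the "change" I timed was nothing more than a one-liner like this (hypothetical script name), then tabbing back into the editor to trigger the recompile:

```csharp
using UnityEngine;

// Hypothetical test script; adding the Debug.Log line and refocusing the
// editor is what triggers the script compilation being measured.
public class RecompileTest : MonoBehaviour
{
    void Start()
    {
        Debug.Log("recompile trigger");
    }
}
```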
Of course there are more things to test. How well do Shader Graph or VFX graph work with more cores for example? What about DOTS?
But sadly I don’t know that much about these tools yet
In conclusion I’d say that more cores are pretty useless in most common, basic cases.
Yes, IL2CPP benefits from it a bit. But usually you will be using Mono builds anyway for prototyping.
Yes, the CPU lightmapper can take full advantage of a Threadripper, but the GPU lightmapper is even faster and hopefully replaces the CPU lightmapper soon.
Yes, texture crunching takes full advantage of all cores as well, but are you really importing that many high-res textures daily?
If you are using multiple Editor instances at once, or if you are an artist constantly making changes to textures or shaders, a 3950X or even a Threadripper might be worth the money.
If you are a solo developer mostly focused on a single, smaller project however, I’d say you should stick to something less expensive.
At the end of the day all that really matters is your use case, I guess.
Personally I’m working on 3 networked projects at once, with a Linux VM for web development running in the background, so a TR 3960X with quad channel memory was a good choice.
If I were still working on smaller singleplayer games, however, I'd definitely choose an i9-10900K today for its better single- and dual-core performance.
Has Unity even said that GPU light baking will ever work with bigger-than-demo-sized scenes?
There is always Bakery, of course.
So I'm building to WebGL several times a day and it takes a bloody age (especially when I'm debugging and need to make development builds). I also need to build for Windows, so I'm doing regular platform switches (AssetDatabase v2 is a bloody hero!). In addition, I'm building about 2 GB of addressable asset bundles (around 300-400 individual bundles) at least once every few days, sometimes several times in one day. Lastly, it's not uncommon for me to have three or four separate instances of Unity (and just as many instances of Rider) open at the same time.
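For context, my batch flow is roughly something like the sketch below (scene list and output path are placeholders, and it assumes the Addressables package is installed; the script belongs in an Editor folder):

```csharp
using UnityEditor;
using UnityEditor.AddressableAssets.Settings;

// Hypothetical batch script: rebuild addressable bundles, then make a
// WebGL development build. Adjust scenes and output path for your project.
public static class WebGlBatchBuild
{
    [MenuItem("Tools/Build Addressables + WebGL Dev Build")]
    public static void Run()
    {
        // Rebuild the addressable asset bundles first.
        AddressableAssetSettings.BuildPlayerContent();

        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // placeholder
            locationPathName = "Builds/WebGL",             // placeholder
            target = BuildTarget.WebGL,
            options = BuildOptions.Development // dev builds for debugging take even longer
        };
        BuildPipeline.BuildPlayer(options);
    }
}
```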
I'm trying to decide between a 3900X, a 3950X, or a Threadripper… (of course I'm probably going to wait for Zen 3 regardless) and I'm wondering if the extra cores will benefit my workload. It seems like, because WebGL has to be built with IL2CPP, there may be a big benefit to more cores… (also, more cores are yummy!)