Today I ran the performance tests of my game with Unity 2019.3.0b1, 2019.2.0f1 and 2018.1.9f2. I present the results below, just as I did during earlier beta cycles.
Older Reports
According to my tests, 2019.3.0b1 is slower than 2019.2 and significantly slower than 2018.1. I profiled all three builds listed in the graphs below during the same session.
You can run the test using the project attached to the following bug-report:
(Case 1108597) Project to find performance regressions and bugs
How does the test work?
The (first-person) game I’m working on features an automated performance test. It works like this:
Camera is placed at a defined position in the scene
Camera slowly makes a full 360° rotation around the Y-axis over 20 seconds
Every second the test captures the average CPU frame time of the last second
The game runs just like it normally would, but the camera/player does not move and the AI is unable to see the player.
It measures the “base-line” of the game, so to speak. When an actual player plays the game, more is going on than the test covers, such as visual and sound effects when firing weapons, additional AI hunting the player, and more navigation pathing.
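In code, the test loop is roughly along these lines (a minimal sketch of the idea, not the actual project's script; all names are mine):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Sketch of the automated test: rotate the camera 360° over
// 20 seconds and record the average CPU frame time once per second.
public class AutomatedPerfTest : MonoBehaviour
{
    const float Duration = 20f; // one full rotation
    readonly List<float> samples = new List<float>();

    IEnumerator Start()
    {
        float elapsed = 0f, windowTime = 0f;
        int windowFrames = 0;

        while (elapsed < Duration)
        {
            float dt = Time.unscaledDeltaTime;
            elapsed += dt;
            windowTime += dt;
            windowFrames++;

            // 360° / 20 s = 18°/s around the world Y-axis.
            transform.Rotate(0f, 360f / Duration * dt, 0f, Space.World);

            if (windowTime >= 1f)
            {
                // Average CPU frame time of the last second, in milliseconds.
                samples.Add(windowTime / windowFrames * 1000f);
                windowTime = 0f;
                windowFrames = 0;
            }
            yield return null;
        }

        Debug.Log("Frame time samples (ms): " + string.Join(", ", samples));
    }
}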
I run this test in a Standalone Windows 64-bit (Mono) Player:
2018 = .NET 3.5
2019 = .NET 4.x
I’ve also enabled “Incremental GC” on both 2019 versions.
The following Physics settings are applied:
Physics.autoSyncTransforms=false
Physics.autoSimulation=true
Physics2D.autoSyncTransforms=false
Physics2D.autoSimulation=false
The resolution is set to 320x240 (D3D11) to make sure the bottleneck isn't the GPU.
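Applying these settings (and the low test resolution) from a script might look like this (my own sketch; the class name is hypothetical, the properties are the standard Unity APIs):

using UnityEngine;

// Hypothetical bootstrap: applies the physics settings and the low
// resolution once at startup.
public class TestBootstrap : MonoBehaviour
{
    void Awake()
    {
        Physics.autoSyncTransforms = false;
        Physics.autoSimulation = true;
        Physics2D.autoSyncTransforms = false;
        Physics2D.autoSimulation = false;

        // Render tiny so the GPU can't be the bottleneck.
        Screen.SetResolution(320, 240, true);
    }
}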
The y-axis shows the average CPU time in milliseconds, i.e. how long one frame took. The x-axis shows the sample points while the camera makes its 360° rotation. Each test runs for 20 seconds with one sample per second, thus 20 samples per test. Fewer milliseconds (vertical axis) indicate better performance.
These issues are probably not reproducible if you’re running the game on high-end hardware. Please use a rig similar to the hardware I used to submit the bug-report.
EDIT: The 2020.1 performance overview can be found at:
Personally, I have far more problems with GPU performance. Granted, I use a GTX 770M, but even with a simple shader on an HD Lit master node (or even PBR), I end up with nearly 2,000 lines of fragment shader code. I can spend a lot of time optimizing my shaders (and I do, through custom nodes), but it's a drop in the ocean next to what HDRP generates in the end.
It's much easier to optimize the CPU side (no GameObjects except when absolutely necessary, instancing, threading, etc.). I have lots of behaviour-tree AIs running at the same time, some weather calculations, minimaps, and so on, but I think that even with twice the load I have now, I'd still be GPU bound.
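For what it's worth, the "no GameObjects + instancing" idea looks roughly like this (a minimal sketch under my own assumptions, not code from my project):

using System.Collections.Generic;
using UnityEngine;

// Sketch: draw many mesh copies per frame without one GameObject per copy.
public class InstancedDrawer : MonoBehaviour
{
    public Mesh mesh;
    public Material material; // needs "Enable GPU Instancing" ticked

    readonly List<Matrix4x4> matrices = new List<Matrix4x4>();

    void Start()
    {
        for (int i = 0; i < 500; i++)
            matrices.Add(Matrix4x4.TRS(Random.insideUnitSphere * 50f,
                                       Quaternion.identity, Vector3.one));
    }

    void Update()
    {
        // Accepts up to 1023 matrices per call.
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices);
    }
}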
Yeah, I'm also noticing a big performance drop in my mobile tests. My Android device used to run at 30 fps no problem, and now it's usually around 24 fps. That's a 20% performance drop from 2019.1, and while I can't test it right now, I think 2018 was even faster.
I’ve been following your performance test results in the Betas for quite a few versions now, always interesting stuff. There’s something I don’t really understand. I can understand the Unity Editor itself getting slower with each new major version. (And I think that’s somewhat reasonable.) But I really don’t understand why an optimized build of a game would have worse performance in each version of Unity. If anything, I would expect build performance to improve, not worsen.
Has there ever been an explanation of why build performance decays? It would be one thing if you were using new features of a newer version, and having to accept a performance hit to enjoy the new functionality. But it seems that’s not the case with your performance tests. So, do we know why builds are slower in newer versions?
I imagine that, depending on the Unity version, it's just different fixes/improvements that cause the build to run slower. A regression here, a regression there.
That's correct. The project does not use any of the new features, except for things Unity Technologies created to improve performance. For example, starting with 2019.2 (if I remember correctly), the test uses the incremental GC. But otherwise it's really just a two-year-old copy of my actual project that I haven't changed since then.
Wait… they had a fix that actually improved performance in 2019.2, and now the improvement is gone again… I don't get it. What did I miss? How come 2019.3 is worse than 2019.2, after everything they found to be faulty?
I ran some tests on my game project, comparing the same project on 2018.3 (the version we're currently working with) and 2019.3 (the long-awaited version in which all the bugs we reported are supposedly fixed…). I found that, both in the editor and in a build, we have a roughly 30% performance drop.
My game is a sprite-based pixel-art tactics game, so nothing hardcore. Seeing performance drop version after version is a bit worrying, especially when we hear quite a lot about performance improvements from Unity.
I never understand why Unity announces performance improvements with almost every single release, yet all I see, release after release, on all my projects, is performance drops.
I have no idea which performance they improve, but it's definitely not that of regular projects. In the beginning the drop was small, but now it's very noticeable. A game that ran just fine on an iPhone 5 when compiled with Unity 4 (I don't even know how many years ago that was) runs at about the same speed, unchanged, on an iPhone 7 using Unity 2019.3. The performance drop is gigantic.
Can you keep reporting issues for this? We're definitely open to fixing performance problems when we see them; it's just that they tend to be super project-specific. The fastest way to get them addressed is to figure out what got slower and then report it to us.
It's quite hard to pinpoint what exactly goes wrong.
I tried to see why my project runs slower in 2019.3 but couldn't find anything specifically at fault.
I reported a bug (Case 1186691)
I made a simple 2D project.
2 sprites, 1 tile and 1 script (a simple fps counter).
In the main scene there are around 50 animated GameObjects and a pretty big tilemap (all prefabs).
It represents an average scene in my game project.
Can you profile the build instead of the editor? Profiling the editor is not going to be an accurate representation of what frame rate your game will have.
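For example, you can make a development build that auto-connects the profiler with a small editor script (a sketch; the scene path and class name are placeholders, not from this thread):

using UnityEditor;

// Editor script (put it in an Editor/ folder): builds a development
// player with the profiler auto-connecting on launch.
public static class ProfiledBuild
{
    [MenuItem("Tools/Build Profiled Player")]
    public static void Build()
    {
        BuildPipeline.BuildPlayer(
            new[] { "Assets/Scenes/Main.unity" }, // placeholder scene path
            "Builds/Player.exe",
            BuildTarget.StandaloneWindows64,
            BuildOptions.Development | BuildOptions.ConnectWithProfiler);
    }
}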
Also, your screenshot shows a 0.1 ms difference on the CPU. That's too small a difference to say for sure that 2019.3 is slower.
Lastly, your frame-rate counter doesn't appear to work correctly: the editor's numbers don't match yours.
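For comparison, a counter that averages over a window of frames instead of inverting a single frame's deltaTime usually tracks the profiler more closely; a minimal sketch (my own, not the script from the bug report):

using UnityEngine;

// Sketch of a smoothed FPS counter: averages frame time over a fixed
// window of frames rather than using one frame's deltaTime.
public class SmoothedFpsCounter : MonoBehaviour
{
    const int WindowSize = 60; // average over the last 60 frames
    float accumulated;
    int frames;
    float fps;

    void Update()
    {
        accumulated += Time.unscaledDeltaTime;
        frames++;
        if (frames >= WindowSize)
        {
            fps = frames / accumulated;
            accumulated = 0f;
            frames = 0;
        }
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10f, 10f, 200f, 20f), "FPS: " + fps.ToString("F1"));
    }
}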
That video was uploaded in 2015, built with the latest Unity version at the time. It ran on Android and iOS at 30 fps (or more) on an iPhone 5 and on one of those crappy Android phones (running Android 4.4; I'd have to look up the exact model).
It included: Unity terrain + over 10k trees + water + full multiplayer running + one directional light + clouds + real-time weather (you can see rain, thunderstorms and such in the video).
Today, the same game compiled with Unity 2019.2 or 2019.3 cannot run at more than 10 fps on those phones. So currently:
I had to remove that high-quality water and put in a crappy one, until a much-improved version of the community ocean water was released; the original one was way slower on the latest versions of Unity.
A mesh terrain is used instead of Unity terrain.
I've removed real-time weather: it's always sunny, no rain, no heavy clouds.
Everything else I was able to keep. But instead of adding things thanks to performance improvements in Unity, I keep removing things until the older phones get so old that people upgrade, and I can finally put back what I had years before.