Your GPU overheating isn’t a problem with Unity; it’s a hardware problem with your computer.
When running your Unity game, by default it will run at whatever frame rate your computer hardware will allow. If that causes your GPU to overheat, then you likely have a problem with the fan(s) on your video card. Making your scene simpler will just result in a higher frame rate, so it may not actually mean less work for your GPU. You could try working around your issue for now by forcing a lower frame rate; see the link below.
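The workaround being referred to boils down to capping the frame rate with Unity’s `Application.targetFrameRate`. A minimal sketch (the class name `FrameRateCap` and the 30 FPS value are my own choices, not from the linked page), attached to any object in the first scene:

```csharp
using UnityEngine;

// Minimal sketch: cap the frame rate so the hardware isn't driven flat-out.
// Note: Application.targetFrameRate is ignored while vSyncCount is non-zero,
// so vsync is disabled first.
public class FrameRateCap : MonoBehaviour
{
    [SerializeField] private int targetFps = 30; // pick a cap that keeps temperatures reasonable

    void Awake()
    {
        QualitySettings.vSyncCount = 0;          // let targetFrameRate take effect
        Application.targetFrameRate = targetFps; // Unity idles between frames to hold this cap
    }
}
```

Alternatively, leaving `vSyncCount` at 1 locks the game to the display’s refresh rate, which also stops it from running uncapped.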
Damn it, I made a mistake — it’s my CPU that’s heating up (my GPU is at a constant temperature around 40°C).
I’m on a new laptop, two months old, so I think there is no problem with my fan ^^.
I am going to study your link, which seems interesting.
But why does my PC heat up more in a build than in the editor?
Who is the manufacturer and what’s the model number? While it’s definitely a hot temperature, it’s worth noting that modern hardware will throttle if temperatures become too high. If it runs for extended periods at that temperature, it’s entirely possible that’s a normal temp for that laptop under heavy load.
Everything I mentioned would apply to an overheating CPU the same as an overheating GPU, since each frame creates work for both.
Absolutely correct. Depending on the CPU, this may be a completely normal temperature under high load. With a modern laptop you should expect the fan to kick up to maximum under high temperatures, and the CPU to throttle to a slower speed as the thermal controls deem necessary. (Well, unless you’ve disabled that, which is often possible through a BIOS setting or a setting in the control panel.)
Yeah, it all depends on the CPU. I used to work in QA on the platform team of a network appliance company. One of the fun tests I had to perform was testing our software’s overheat detection and the hardware’s auto-shutoff functionality. Lots of fun maxing out the CPU with a repetitive task while disabling the fans and blasting the CPU heat sink with a heat gun, all while monitoring our various software warnings, the CPU temperature, how many degrees away from its maximum the CPU reported itself to be (the PECI temp value), and what happened when I pushed it past that.
I found it very interesting that not only did different CPU models from the same manufacturer often have very different temperature tolerances, but even different revisions of the same CPU model could differ by 10+ °C in what temperature they could handle.
Interesting article, with some well-conducted comparison tests.
I do wonder, however, how an i9-9900K would be affected by long-term exposure to cycles of heating (85+ °C) and cooling (20 °C).
Customers with deep pockets would perhaps care less, but I personally like long-lived hardware.
Either way, continuous rapid heating and cooling over a long time leads to micro-cracking in the metallic structures of chips, which may eventually break connections and cause data errors. While chips are quite robust in general and can tolerate a fair range of errors, there is definitely some limit. Chip performance also changes with temperature.
While you may be completely OK playing at 90 °C today, that may not necessarily hold true in 6 or 12 months’ time if you keep abusing the device.