I wanted to make some low-end friendly shaders.
I found out Unity uses 30% of my GTX 1060 drawing absolutely nothing.
I even limited the FPS to 60.
The usage goes up when I fullscreen the Game window.
It drops to 5% in a build (still really high).
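For reference, a cap like the one mentioned above is typically set through `Application.targetFrameRate`; here's a minimal sketch (the class name is just illustrative; attach it to any active GameObject):

```csharp
using UnityEngine;

// Minimal sketch of a 60 fps cap.
public class FrameRateCap : MonoBehaviour
{
    void Awake()
    {
        // targetFrameRate is ignored while vsync is active, so disable vsync first.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 60;
    }
}
```

Editor overhead also differs from a build, which is one reason the numbers differ between the two.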
Task Manager cannot be used to reliably assess GPU performance, in particular because modern GPUs dynamically adjust their clocks to load and heat. When not under load, the GPU downclocks, which can actually make the usage % go up. Some games show 100% GPU usage in Task Manager while the fans don't even spin up, because the GPU is balancing the load perfectly.
Yes, but that will happen even for an empty scene in a build. Many games exhibit this behaviour: the fans spin up particularly badly in the menu system (which is 2D only), whereas the same game struggling to reach 60 fps in actual gameplay is quieter, because there the GPU does a great variety of things (often having to wait on data, too) rather than a single, highly efficient thing that generates a lot of heat on a very tiny spot of the chip.
In fact, the easiest way to destroy a chip (before there were safeguards against it) was to run very simple code in a loop that exercised only a small fraction of the chip known to generate a lot of heat. If you let a chip do all kinds of diverse things, the heat is spread more evenly and dissipated more easily.
Prime95 and FurMark are used to test overclocked systems for exactly this reason: they can be configured to stress a system for maximum heat output, Prime95 by running code entirely within the CPU caches and FurMark by hammering the "hot paths" of the GPU.
No, that's not what I'm saying. I'm just trying to convey that heat, fan noise, and Task Manager % usage are … relative.
As others said above: if you really want to know how your shader performs, you need to profile it.
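One way to get hard numbers from inside Unity (the Profiler or RenderDoc are other options) is the `FrameTimingManager` API. A sketch, assuming a recent Unity version with "Frame Timing Stats" enabled in Player Settings (the class name is illustrative):

```csharp
using UnityEngine;

// Rough sketch: read back GPU and CPU frame times via FrameTimingManager.
// On platforms where frame timing stats aren't available,
// GetLatestTimings simply returns 0 entries.
public class GpuFrameTimeReadout : MonoBehaviour
{
    readonly FrameTiming[] timings = new FrameTiming[1];

    void Update()
    {
        FrameTimingManager.CaptureFrameTimings();
        if (FrameTimingManager.GetLatestTimings(1, timings) > 0)
        {
            // Both values are in milliseconds. Logging every frame floods
            // the console; throttle this in real use.
            Debug.Log($"GPU: {timings[0].gpuFrameTime:F2} ms, CPU: {timings[0].cpuFrameTime:F2} ms");
        }
    }
}
```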
And/or test the extreme cases. Imagine how many objects you could have with that shader, or just pick a number, say 10,000. Then add that many objects with your shader to the scene. The next time you run a build, it may only hit 5 fps. Now you can optimize the shader and measure improvements simply by watching the fps counter: a tiny change that makes the build jump to 13 fps would already be a HUGE improvement.
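A sketch of that stress test (the material field and the count of 10,000 are placeholders for your own setup):

```csharp
using UnityEngine;

// Spawn N objects using the shader under test, then watch the fps counter
// in a build while you tweak the shader.
public class ShaderStressTest : MonoBehaviour
{
    public Material shaderUnderTest; // assign a material that uses your shader
    public int count = 10000;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            var go = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            // sharedMaterial avoids creating 10,000 material copies.
            go.GetComponent<Renderer>().sharedMaterial = shaderUnderTest;
            // Scatter the objects; make sure the camera can see them,
            // since frustum-culled objects don't stress the GPU.
            go.transform.position = Random.insideUnitSphere * 50f;
        }
    }
}
```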
But keep in mind: performance can only be assessed on the target device, with a release (optimized) build. And by target I mean: low end. You're testing on a 1060, so any optimizations are only proven for your GPU; the shader's performance (or even its visuals!) may differ on a truly low-end GPU such as an Intel HD Graphics 4000. You need to actually test on low-end devices to confirm both that your optimizations hold on the target system and that the shader still produces the desired output.