It would be useful to have a command that returns the number of GPU cores and their clock speed, so we could estimate the power of the user's GPU (like the existing SystemInfo.processorFrequency and SystemInfo.processorCount properties do for the CPU).
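For context, here's a minimal sketch of what SystemInfo exposes today: core count and frequency for the CPU, but only a name string and memory size for the GPU.

```csharp
using UnityEngine;

public class HardwareReport : MonoBehaviour
{
    void Start()
    {
        // CPU: both core count and frequency are exposed.
        Debug.Log($"CPU cores: {SystemInfo.processorCount}");
        Debug.Log($"CPU frequency (MHz): {SystemInfo.processorFrequency}");

        // GPU: only a device name and memory size are available;
        // there is no core-count or clock-speed equivalent.
        Debug.Log($"GPU: {SystemInfo.graphicsDeviceName}");
        Debug.Log($"GPU memory (MB): {SystemInfo.graphicsMemorySize}");
    }
}
```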
You're better advised to build a small but representative scene and measure the actual framerate by running it for a few seconds.
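A minimal sketch of that approach, assuming a coroutine-based timer; the 5-second window and warm-up delay are placeholder values you'd tune to your own scene:

```csharp
using System.Collections;
using UnityEngine;

public class FrameRateProbe : MonoBehaviour
{
    // Placeholder sample window; tune to your representative scene.
    const float SampleSeconds = 5f;

    IEnumerator Start()
    {
        int frames = 0;
        float elapsed = 0f;

        // Skip the first second so load spikes don't skew the average.
        yield return new WaitForSeconds(1f);

        while (elapsed < SampleSeconds)
        {
            elapsed += Time.unscaledDeltaTime;
            frames++;
            yield return null;
        }

        float avgFps = frames / elapsed;
        Debug.Log($"Average FPS over {SampleSeconds}s: {avgFps:F1}");
        // React to the measured number, e.g. call
        // QualitySettings.DecreaseLevel() below some threshold.
    }
}
```

Running a probe like this once at startup (or behind an "auto-detect" button in the options menu) lets you pick default quality settings from measured performance instead of hardware specs.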
Core counts and clock speeds tell you little about achievable performance; you'd arguably learn more by parsing the deviceName string for known device identifiers such as "RTX 4070" or "RX 7900".
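If you do go the name-parsing route, a crude sketch might look like this; the tier table is purely illustrative and would need constant updating as new GPUs ship:

```csharp
using UnityEngine;

public static class GpuTierGuess
{
    // Illustrative substring-to-tier table; a real one would need
    // far more entries and regular maintenance.
    static readonly (string match, int tier)[] KnownGpus =
    {
        ("RTX 4070", 3),
        ("RX 7900", 3),
        ("GTX 1060", 2),
        ("Intel(R) UHD", 1),
    };

    public static int Guess()
    {
        string name = SystemInfo.graphicsDeviceName;
        foreach (var (match, tier) in KnownGpus)
        {
            if (name.Contains(match))
                return tier;
        }
        return 0; // Unknown device: fall back to a runtime benchmark.
    }
}
```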
You can have a notebook model with the same number of cores as the desktop variant, or even more, yet the desktop version will be significantly faster. This goes for both CPUs and GPUs. On top of that, anything about the system itself can significantly affect performance, such as the quality-vs-performance options in graphics drivers.
Looking at the number of cores and clock frequencies is a bit like looking at the number of cylinders and the RPM of an engine to determine how fast a car is moving: the reality is far more complex than those two numbers. In a car, gears divide and multiply the rotations, the road may run uphill or downhill, and so on.
You can't even accurately predict the performance of a CPU or GPU within its own generation based on those two numbers, because more factors are at play: cache sizes, memory bus bandwidth, whether instruction sets like AVX are supported, and so on. And that's before considering anything else running on the system alongside the game.
Sorry for the late response, and thank you for the idea.