In my Quality Settings menu I would like to suggest certain Graphics settings to the Player based on the CPU & GPU they have. So if they have a pretty weak CPU/GPU I’d suggest modifying certain graphical elements.
Is there a way to query the CPU & GPU to get some kind of stats on them? I don’t want to maintain a massive CPU/GPU list as a lookup table BTW.
Do you know of any games that do this? I don’t, but then I don’t try out that many new games. On the other hand, I play enough that if this were a common approach, I’d probably have seen it at least once.
What games sometimes do is classify your system into a certain tier by running a quick, realtime performance test where they cycle through graphics settings, and then suggest something that ensures smooth play.
I believe the main challenge with this sort of thing is that if the GPU reports support for, say, “Feature Level 12.1”, it might only technically support it: either a) it doesn’t fully implement the feature set, leading to weaker quality (or crashes), or b) there’s simply no guarantee as to what that means for performance.
For instance, consider that the GPU may be nearly identical to its slightly more expensive variant, except for slower memory chips (say, 40% less memory bandwidth), which may hurt some features a lot more than others. But there’s no way of telling unless you maintain exactly the kind of precise list which, as you noted, nobody likes to maintain.
Frequency
I’ve seen it dozens of times, mostly in AA/AAA titles. If I had to guess, it just looks at the GPU name and picks appropriate settings based on that. Deadlock, for instance, just reminded me that my drivers were out of date.
My expectation is that there may be a (paid) service providing such metrics that big productions can afford to leverage.
At the lowest level it’s probably something really simple. Windows used to have a “score” system you could use to classify a machine. Or they use something like PassMark (provided there is an API), sending the vendor strings of CPU and GPU and getting a score back.
I seriously doubt it’s anything more elaborate than that, since past that point the effort quickly starts to outweigh the benefits. Not to mention the issues caused by incorrectly classifying a system.
Thinking about it more, one kind of funny thing I’ve noticed is that I’m always recommended my current resolution (4K) but because my GPU is older the recommendation is always like “ok, keep the 4K but cannibalize all other quality.” I can’t recall ever being recommended a lower resolution to preserve higher quality.
Now that you mention it, I also recall GTA which primarily focuses on modifying upscaling/downsampling ratio.
I bet no game wants to risk suggesting a resolution that would be troublesome for the user. Thus it’s always going to be the desktop resolution. One of the issues with switching resolutions is terrible Alt+Tab behaviour: it’s bad on monitors already, but on a projector a resolution change can take upwards of 10 seconds or more until you get a picture back!
Another issue is that not every monitor supports every resolution and frequency reported by the drivers. E.g. you could get flickering if you set 59.999 Hz rather than 60 Hz. Or quite simply a different image quality: say the user has optimized their monitor settings (brightness, contrast, colors, image post-processing) for 1920x1080@60 Hz, but when the monitor switches to a different resolution or frequency those settings may revert to defaults, because the monitor remembers them only on a per-resolution/frequency basis. I know my projector does that, which is super annoying.
So after working with ChatGPT yesterday, here’s what it came up with.
using UnityEngine;

public class PerformanceEvaluator : MonoBehaviour
{
    // Importance multipliers for each category (adjust as needed)
    private const float CPU_CORE_MULTIPLIER = 0.2f;
    private const float CPU_FREQ_MULTIPLIER = 0.15f;
    private const float GPU_VRAM_MULTIPLIER = 0.25f;
    private const float RAM_MULTIPLIER = 0.1f;
    private const float RESOLUTION_MULTIPLIER = 0.05f;
    private const float SHADER_LEVEL_MULTIPLIER = 0.05f;
    private const float COMPUTE_SHADER_MULTIPLIER = 0.05f;
    private const float TEXTURE_SIZE_MULTIPLIER = 0.05f;
    private const float FILL_RATE_MULTIPLIER = 0.05f;
    private const float MULTITHREADED_RENDERING_MULTIPLIER = 0.05f;

    // System thresholds for scoring (can be adjusted)
    private const int MIN_CPU_CORES = 2;
    private const int MAX_CPU_CORES = 16;
    private const int MIN_CPU_FREQ = 1500;           // 1.5 GHz, in MHz
    private const int MAX_CPU_FREQ = 5000;           // 5 GHz, in MHz
    private const int MIN_VRAM = 1000;               // ~1 GB, in MB
    private const int MAX_VRAM = 16000;              // ~16 GB, in MB
    private const int MIN_RAM = 4096;                // 4 GB, in MB
    private const int MAX_RAM = 32000;               // ~32 GB, in MB
    private const int MIN_RESOLUTION = 1280 * 720;   // HD
    private const int MAX_RESOLUTION = 3840 * 2160;  // 4K
    private const int MIN_SHADER_LEVEL = 20;         // Shader Model 2.0
    private const int MAX_SHADER_LEVEL = 50;         // Shader Model 5.0
    private const int MIN_TEXTURE_SIZE = 1024;
    private const int MAX_TEXTURE_SIZE = 16384;      // 16K textures (max on high-end GPUs)
    private const int MIN_FILL_RATE = 1000;          // Approximation, in megapixels/sec
    private const int MAX_FILL_RATE = 16000;

    void Start()
    {
        float finalScore = CalculatePerformanceScore();
        Debug.Log("Performance Score: " + finalScore);
    }

    private float CalculatePerformanceScore()
    {
        // CPU score
        int cpuCores = Mathf.Clamp(SystemInfo.processorCount, MIN_CPU_CORES, MAX_CPU_CORES);
        float cpuFreq = Mathf.Clamp(SystemInfo.processorFrequency, MIN_CPU_FREQ, MAX_CPU_FREQ);
        float cpuScore = (cpuCores / (float)MAX_CPU_CORES) * CPU_CORE_MULTIPLIER +
                         (cpuFreq / MAX_CPU_FREQ) * CPU_FREQ_MULTIPLIER;

        // GPU VRAM score
        int gpuMemory = Mathf.Clamp(SystemInfo.graphicsMemorySize, MIN_VRAM, MAX_VRAM);
        float gpuScore = (gpuMemory / (float)MAX_VRAM) * GPU_VRAM_MULTIPLIER;

        // RAM score
        int totalMemory = Mathf.Clamp(SystemInfo.systemMemorySize, MIN_RAM, MAX_RAM);
        float ramScore = (totalMemory / (float)MAX_RAM) * RAM_MULTIPLIER;

        // Screen resolution score
        int resolution = Screen.currentResolution.width * Screen.currentResolution.height;
        resolution = Mathf.Clamp(resolution, MIN_RESOLUTION, MAX_RESOLUTION);
        float resolutionScore = (resolution / (float)MAX_RESOLUTION) * RESOLUTION_MULTIPLIER;

        // Shader level score
        int shaderLevel = Mathf.Clamp(SystemInfo.graphicsShaderLevel, MIN_SHADER_LEVEL, MAX_SHADER_LEVEL);
        float shaderScore = (shaderLevel / (float)MAX_SHADER_LEVEL) * SHADER_LEVEL_MULTIPLIER;

        // Compute shader support score
        bool supportsComputeShaders = SystemInfo.supportsComputeShaders;
        float computeShaderScore = supportsComputeShaders ? COMPUTE_SHADER_MULTIPLIER : 0f;

        // Max texture size score
        int maxTextureSize = Mathf.Clamp(SystemInfo.maxTextureSize, MIN_TEXTURE_SIZE, MAX_TEXTURE_SIZE);
        float textureSizeScore = (maxTextureSize / (float)MAX_TEXTURE_SIZE) * TEXTURE_SIZE_MULTIPLIER;

        // Fill rate score (approximation, inferred from GPU performance)
        int approximateFillRate = Mathf.Clamp(GetApproximateFillRate(), MIN_FILL_RATE, MAX_FILL_RATE);
        float fillRateScore = (approximateFillRate / (float)MAX_FILL_RATE) * FILL_RATE_MULTIPLIER;

        // Multi-threaded rendering support score
        bool isMultiThreaded = SystemInfo.graphicsMultiThreaded;
        float multiThreadedScore = isMultiThreaded ? MULTITHREADED_RENDERING_MULTIPLIER : 0f;

        // Calculate final score
        float finalScore = cpuScore + gpuScore + ramScore + resolutionScore + shaderScore +
                           computeShaderScore + textureSizeScore + fillRateScore + multiThreadedScore;
        return finalScore * 100f; // Scale to 100 for easier interpretation
    }

    // Approximates the pixel fill rate based on known GPU performance levels.
    // As a placeholder it returns an arbitrary fill rate based on the GPU name;
    // for real-world use, look up known fill rates for specific GPUs.
    private int GetApproximateFillRate()
    {
        if (SystemInfo.graphicsDeviceName.Contains("GTX"))
            return 8000;  // Approx for GTX cards
        if (SystemInfo.graphicsDeviceName.Contains("RTX"))
            return 12000; // Approx for RTX cards
        return 4000;      // Default for lower-end GPUs
    }

    // [ OR USE the Better FillRateTester below ]
    private int GetBetterApproximateFillRate()
    {
        // Start measuring time
        float startTime = Time.realtimeSinceStartup;

        // Render the fullscreen quad
        FillRateTester.RenderQuad();

        // End measuring time
        float endTime = Time.realtimeSinceStartup;

        // Calculate render time (in seconds)
        float renderTime = endTime - startTime;

        // Calculate pixel count based on screen resolution
        int pixelCount = Screen.width * Screen.height;

        // Calculate fill rate in megapixels per second (MP/s)
        return (int)(pixelCount / renderTime / 1_000_000f);
    }
}
************************************** Better Approximate Fill Rate tester *****************************
using UnityEngine;

public static class FillRateTester
{
    private static Mesh quadMesh;
    private static Material quadMaterial;

    // Renders a fullscreen quad once. Note: this creates a new mesh and
    // material on every call, which is itself measurable overhead.
    public static void RenderQuad()
    {
        // Create a fullscreen quad mesh
        quadMesh = CreateFullscreenQuad();

        // Create a basic unlit material for rendering
        quadMaterial = new Material(Shader.Find("Unlit/Color"));
        quadMaterial.color = Color.white;

        // Set the material pass to 0 (basic rendering pass)
        quadMaterial.SetPass(0);

        // Render the fullscreen quad using the Graphics API
        Graphics.DrawMeshNow(quadMesh, Matrix4x4.identity);
    }

    // Helper function to create a fullscreen quad
    private static Mesh CreateFullscreenQuad()
    {
        Mesh mesh = new Mesh();

        // Vertices of a fullscreen quad
        Vector3[] vertices = {
            new Vector3(-1, -1, 0), // Bottom left
            new Vector3( 1, -1, 0), // Bottom right
            new Vector3( 1,  1, 0), // Top right
            new Vector3(-1,  1, 0)  // Top left
        };

        // UV coordinates (optional, but standard for textures)
        Vector2[] uv = {
            new Vector2(0, 0), // Bottom left
            new Vector2(1, 0), // Bottom right
            new Vector2(1, 1), // Top right
            new Vector2(0, 1)  // Top left
        };

        // Two triangles form the quad
        int[] triangles = {
            0, 1, 2, // First triangle
            2, 3, 0  // Second triangle
        };

        // Assign the vertices, UVs, and triangles to the mesh
        mesh.vertices = vertices;
        mesh.uv = uv;
        mesh.triangles = triangles;

        // Recalculate normals (optional, not needed for a basic unlit material)
        mesh.RecalculateNormals();

        return mesh;
    }
}
I have 4 quality levels and some graphics options. So I can take this final score to select a quality level, and use some of the individual values to determine the other settings.
Again, these would only be initial suggestions for the Player. Obviously they could change it.
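For the mapping itself, a minimal sketch could look like this; the thresholds (25/50/75) are arbitrary assumptions to be tuned against real hardware, not values suggested by the script above:

```csharp
using UnityEngine;

public class QualitySuggester : MonoBehaviour
{
    // Hypothetical cut-offs for four quality levels; tune against real hardware.
    private static readonly float[] tierThresholds = { 25f, 50f, 75f };

    // Maps a 0-100 performance score to a quality level index (0 = lowest).
    public static int SuggestQualityLevel(float score)
    {
        int level = 0;
        foreach (float threshold in tierThresholds)
        {
            if (score >= threshold) level++;
        }
        return level;
    }

    void Start()
    {
        // Example: apply the suggestion, but let the player override it later.
        int suggested = SuggestQualityLevel(62f); // -> 2 with the thresholds above
        QualitySettings.SetQualityLevel(suggested, applyExpensiveChanges: true);
    }
}
```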
It suggested iterating over GetBetterApproximateFillRate() a few times to get a better value.
That’s slightly better than stabbing in the dark, but the issues lurking in that script make me believe it’s going to hurt the user’s experience more than support it, by incorrectly assessing the system’s performance.
Plus there’s the time it takes to test and confirm these values. Do you have all the graphics cards and CPUs at hand to verify that the code correctly assesses performance, and that the quality settings it applies actually lead to the desired constant 60 fps? For instance, the scoring scales upward with core count, but if your game doesn’t benefit from more than 2 cores, then players with a 16-core CPU and a lower-end GPU might suffer.
Using that script is, plain and simple, a technical gamble for very little benefit. If anything, you could pick one or two of the metrics, say the shader model and perhaps the GTX/RTX check (plus something similar for AMD and Intel HD Graphics).
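A tier guess along those lines could be as small as the sketch below; the vendor substrings and the shader-level cut-off are assumptions, not a maintained GPU database:

```csharp
using UnityEngine;

public static class SimpleGpuTier
{
    // Returns 0 (low), 1 (medium) or 2 (high). The substrings cover only a
    // few common product lines and will misclassify anything else.
    public static int Estimate()
    {
        // An old shader model means a low tier regardless of the card name.
        if (SystemInfo.graphicsShaderLevel < 40)
            return 0;

        string gpu = SystemInfo.graphicsDeviceName;
        if (gpu.Contains("RTX"))
            return 2;
        if (gpu.Contains("GTX") || gpu.Contains("Radeon RX"))
            return 1;
        if (gpu.Contains("Intel")) // likely integrated graphics
            return 0;

        return 1; // Unknown card: default to the middle tier.
    }
}
```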
The fill-rate measurement is totally bonkers. It also measures creating a mesh, finding a shader, and all the other things that aren’t fill-rate related at all.
A simpler approach may be to just do a little render test when the game starts to get the FPS. You can then use that to gauge performance.
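Such a startup probe could look roughly like this; the warm-up/sample durations and the fps thresholds are assumptions, and vSync is disabled during the measurement so the result isn’t capped at the refresh rate:

```csharp
using System.Collections;
using UnityEngine;

public class StartupFpsProbe : MonoBehaviour
{
    private const float WarmupSeconds = 1f; // let loading/shader compilation settle
    private const float SampleSeconds = 3f;

    IEnumerator Start()
    {
        // Disable vSync so the measurement isn't capped at the refresh rate.
        int previousVSync = QualitySettings.vSyncCount;
        QualitySettings.vSyncCount = 0;

        yield return new WaitForSeconds(WarmupSeconds);

        int frames = 0;
        float elapsed = 0f;
        while (elapsed < SampleSeconds)
        {
            yield return null;
            elapsed += Time.unscaledDeltaTime;
            frames++;
        }

        QualitySettings.vSyncCount = previousVSync;

        float fps = frames / elapsed;
        Debug.Log($"Measured {fps:F1} fps");

        // Pick an initial quality level from the result (thresholds are guesses).
        if (fps < 30f) QualitySettings.SetQualityLevel(0);
        else if (fps < 60f) QualitySettings.SetQualityLevel(1);
    }
}
```

Averaging several runs, or probing while a representative scene is visible, would give a more trustworthy number than a single sample on an empty startup screen.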