I have a web player that requires users to have a decent graphics card to run smoothly.
I’ve looked at the SystemInfo class, which provides a lot of useful information (graphicsMemorySize, graphicsDeviceName, graphicsDeviceVersion, etc.).
I had this idea that I would test graphicsMemorySize and warn users with less than 64MB that the web player would not run very smoothly. However, in my testing I found that laptops with integrated graphics report 128MB. Those machines can’t really handle the presentation, so I’d like to include them in the warning.
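Roughly what I had in mind, as a minimal sketch (the 64MB cutoff and the warning text are placeholders, not final values):

```csharp
using UnityEngine;

// Minimal sketch of the graphicsMemorySize check described above.
// The 64 MB cutoff and the warning text are placeholders.
public class GraphicsMemoryCheck : MonoBehaviour
{
    const int minMemoryMB = 64;

    void Start()
    {
        // SystemInfo.graphicsMemorySize is reported in megabytes.
        if (SystemInfo.graphicsMemorySize < minMemoryMB)
        {
            Debug.LogWarning("Graphics device " + SystemInfo.graphicsDeviceName +
                " reports only " + SystemInfo.graphicsMemorySize +
                " MB; the web player may not run smoothly.");
        }
    }
}
```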
Is there anything I can test to distinguish integrated graphics cards from dedicated ones? In theory I could warn based on the graphics device name, but there are so many devices out there that that doesn’t seem like a feasible solution.
There’s no functionality in the OS/OpenGL/Direct3D to distinguish between “integrated” and “not integrated” cards. What’s more, it’s sometimes hard to say whether a card even is integrated (a standalone card might use shared system memory, and an integrated one might have its own dedicated memory).
I think the most practical way is to create a scene that is quite graphics-heavy (even something simple like tons of semitransparent planes might work), run it for a bit, check the framerate or elapsed time, and then warn the user if the performance was too low.
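As a rough illustration of that approach, something like the following might work; the plane count, test duration, and FPS threshold are made-up numbers that would need tuning against your real content:

```csharp
using UnityEngine;

// Rough fill-rate benchmark sketch: stack semitransparent planes in front of
// the default camera, run for a few seconds, and compare the average FPS to a
// threshold. All the numbers here are guesses that need tuning.
public class GraphicsBenchmark : MonoBehaviour
{
    const int planeCount = 200;
    const float testDuration = 3.0f;      // seconds to measure
    const float minAcceptableFps = 25.0f;

    int frames;
    float elapsed;

    void Start()
    {
        for (int i = 0; i < planeCount; i++)
        {
            GameObject plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
            // Face the default camera (which looks down +Z) and stack the
            // planes so each one overdraws all the others.
            plane.transform.rotation = Quaternion.Euler(-90.0f, 0.0f, 0.0f);
            plane.transform.position = new Vector3(0.0f, 0.0f, 5.0f + i * 0.01f);
            Renderer r = plane.GetComponent<Renderer>();
            r.material.shader = Shader.Find("Transparent/Diffuse");
            r.material.color = new Color(1.0f, 1.0f, 1.0f, 0.1f);
        }
    }

    void Update()
    {
        frames++;
        elapsed += Time.deltaTime;
        if (elapsed >= testDuration)
        {
            float fps = frames / elapsed;
            Debug.Log("Benchmark averaged " + fps + " FPS");
            if (fps < minAcceptableFps)
                Debug.LogWarning("Performance too low; warn the user here.");
            enabled = false; // stop measuring
        }
    }
}
```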
The name-based detection script is on the Unify Wiki, but the video card names in there don’t cover all the cases (and new slow cards always pop up).
We had a similar idea: we dynamically add primitives to the scene until we reach roughly the same number of objects/polygons/vertices as our actual project (a sketch of the approach is below). Right now we’re measuring the time elapsed to create the scene, since we found that gives more consistent results than the frame rate.
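A stripped-down sketch of what we’re doing (the object count and the time cutoff here are illustrative, not our real numbers):

```csharp
using UnityEngine;

// Sketch of the timing approach: build a scene of roughly the same size as
// the real project and time how long construction takes. The object count
// and the cutoff below are illustrative only.
public class SceneBuildTimer : MonoBehaviour
{
    const int objectCount = 500;           // tune toward the real project's size
    const float maxAcceptableSeconds = 2.0f;

    void Start()
    {
        float startTime = Time.realtimeSinceStartup;

        for (int i = 0; i < objectCount; i++)
        {
            GameObject obj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            obj.transform.position = Random.insideUnitSphere * 20.0f;
        }

        float buildTime = Time.realtimeSinceStartup - startTime;
        Debug.Log("Scene construction took " + buildTime + " seconds");
        if (buildTime > maxAcceptableSeconds)
            Debug.LogWarning("This machine looks too slow for the full presentation.");
    }
}
```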
Anyhow, most of the machines we’ve tested seem fine, but we have one G5 with the ATI Radeon 9800 OpenGL Engine and 256 MB of video memory that runs our actual presentation at about the same speed as two other G5s with the ATI Radeon 9600 OpenGL Engine and 128 MB. Those results seem reasonable (the machine with the 9800 is a little bit faster), but when we run our little test the machine with the 9800 is about 20% slower than the machines with the 9600.
Do you have any thoughts on why that might be the case? The test is a web player, and on both machines the only application running was the browser (Firefox 3) with the test presentation loaded.