I bumped into a page on using a shader to display a tilemap, and I have already seen examples of “word processors” running in Shadertoy.
So it got me thinking: how much of the UI could we move onto the GPU as shaders?
I asked the question on the Graphics forum but got tumbleweeds, so I thought I would ask here on General to get people’s opinions, or to see if anyone else has seen examples of UIs running in shaders or on GPUs.
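For context, the tilemap-on-GPU idea boils down to a per-pixel lookup in a fragment shader. A Shadertoy-style sketch might look like this (the channel assignments, tile size, and atlas layout are my own assumptions, not from any particular page):

```glsl
// Hypothetical Shadertoy-style tilemap shader (sketch, not a tested example).
// Assumes iChannel0 holds a tile-index map (one texel per tile, index in .r)
// and iChannel1 holds a square tile atlas.
const float TILE_SIZE   = 16.0; // assumed on-screen tile size in pixels
const float ATLAS_TILES = 16.0; // assumed atlas width/height in tiles

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 tileCoord = floor(fragCoord / TILE_SIZE); // which tile this pixel is in
    vec2 innerUV   = fract(fragCoord / TILE_SIZE); // position inside that tile

    // Fetch the tile index (stored as an 8-bit normalized value here).
    float tileIndex = texelFetch(iChannel0, ivec2(tileCoord), 0).r * 255.0;

    // Convert the index into atlas coordinates and sample the tile art.
    vec2 atlasTile = vec2(mod(tileIndex, ATLAS_TILES),
                          floor(tileIndex / ATLAS_TILES));
    fragColor = texture(iChannel1, (atlasTile + innerUV) / ATLAS_TILES);
}
```

Every pixel independently figures out which tile it belongs to and where to sample, which is why this maps so neatly onto a single fragment shader.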
It’s already rendered by the GPU, isn’t it? Like, it’s just a big OpenGL/DirectX window? And all the buttons and stuff are already rendered with shaders, right?
It would more than likely be a performance loss. The tilemap-on-GPU approach is very limiting, and can’t realistically expand into a comprehensive UI system.
A complex UI running entirely GPU-side would require compute shaders, or would in essence be a kind of skinned mesh. There’s no benefit to a custom shader just to render an entire UI; it’s much slower and very limiting.
The word-processor Shadertoy isn’t the most performant way to render text, either.
The GPU operates by doing the same task millions of times: “do this to every vertex,” “do that to every pixel,” and so on.
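That “do that to every pixel” model is exactly what a fragment shader is; the GPU runs one small function once per pixel, massively in parallel. A minimal Shadertoy-style sketch of the idea (my own illustrative example):

```glsl
// One program, run independently for every pixel on screen;
// only the input coordinate differs between invocations.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;  // normalize to [0, 1]
    fragColor = vec4(uv, 0.5, 1.0);        // color derived from pixel position
}
```

The catch for UI work is that this model has no cheap way to do the branchy, stateful, event-driven logic a widget toolkit needs.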
The UI rendering is already handled by the GPU. I’m not sure moving the ton of custom work you do to generate the UI onto the GPU would help at all, especially if your game is already GPU-bound.
Didn’t OS X and Windows both switch to compositing back-ends that run on top of the graphics card APIs (DX/GL)? Like Quartz, and Aero/DWM in Windows Vista or whenever? So everything graphical is already hardware-accelerated?