Which platforms use the full GPU and which don't?

With frame rates unlocked, UWP builds can use up to 100% GPU and Windows builds about 50% on my machine. The player in Unity itself only uses about 30% GPU, and WebGL can use 25-50% depending on the build and what's in it. So what on earth is going on, Unity? Can anyone shed some light on this? All three Windows platforms (UWP, native and WebGL) should be able to utilise 90%+ GPU with a single running instance.

Thanks

That really depends on the game and how efficient your code is. I have built some really crazy DOTS demos where my CPU was only 50% utilized and my GPU was 100% utilized. That was with a Windows build on a high-end PC.

2 Likes

The percentage of GPU usage depends on the work you’re asking of the GPU, the performance capabilities of your specific GPU, and whether your CPU is capable of delivering data to the GPU faster than the GPU can process it.
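If you want to see which of those is the limit in a given scene, something like the rough sketch below, using Unity's FrameTimingManager, can tell you per frame whether the CPU or the GPU took longer. The class name is just a placeholder, and it assumes "Frame Timing Stats" is enabled in Player Settings and that the platform actually reports GPU timings:

```csharp
using UnityEngine;

// Rough sketch: compare CPU and GPU frame times to tell whether a frame is
// CPU-bound or GPU-bound. Assumes "Frame Timing Stats" is enabled and the
// platform/graphics API reports GPU timings at all.
public class FrameBoundCheck : MonoBehaviour
{
    readonly FrameTiming[] timings = new FrameTiming[1];

    void Update()
    {
        FrameTimingManager.CaptureFrameTimings();
        if (FrameTimingManager.GetLatestTimings(1, timings) > 0)
        {
            double cpuMs = timings[0].cpuFrameTime;
            double gpuMs = timings[0].gpuFrameTime;
            Debug.Log(cpuMs > gpuMs
                ? $"CPU-bound: CPU {cpuMs:F2} ms vs GPU {gpuMs:F2} ms"
                : $"GPU-bound: GPU {gpuMs:F2} ms vs CPU {cpuMs:F2} ms");
        }
    }
}
```

A GPU sitting at 30% usage while the CPU frame time dominates is the classic sign of the third case: the CPU simply isn't feeding the GPU fast enough.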

1 Like

Hi, no, it's nothing to do with that. Everything is running under 60fps at high complexity and still only hits 40% GPU max on WebGL, and WebGL usage is so inconsistent; sometimes the most complex scenes with barely any CPU load still only use 20-25% GPU and lag like crazy. If they used 90%+ GPU like the UWP build, which gets 50fps, they would be OK and the difference wouldn't be that big, but because WebGL only uses less than half the GPU for exactly the same thing, even though it is way below 60fps, of course what I'm getting is a 20fps frame rate. This isn't rocket science, Unity devs. Is this a WebGL limitation like the 4GB RAM per tab? Please respond. Basically, I don't believe WebGL is slower; it's just throttled to heck. Unity devs, answer please!

Just differences between the platforms and their optimizations. WebGL, for example, is very dependent on the browser and often doesn’t have the latest features of the hardware, and Microsoft has basically deprecated UWP.

https://www.thurrott.com/dev/206351/microsoft-confirms-uwp-is-not-the-future-of-windows-apps

Unity uses Emscripten to compile code into WebAssembly, which is then executed in a sandbox. That sandboxing imposes a performance penalty that is, on average, around 10% compared to native code.

https://docs.unity3d.com/Manual/webgl-performance.html
https://www.usenix.org/conference/atc19/presentation/jangda

WebGL isn't immune to this performance penalty either, meaning you now have two sources of overhead when using both at the same time. I wasn't able to find an official statement or benchmarks for WebGL specifically, but I'd expect at least another 10%, and the responses from people on Stack Overflow suggest that may be very optimistic.

https://stackoverflow.com/questions/17516187/performance-of-webgl-and-opengl

So, yes, WebGL is slower and not just because it’s missing features but because it’s sandboxed within a browser.

1 Like

It is not reasonable to expect WebGL to perform as well as a native build. WebGL is running inside a web browser, and that creates bottlenecks.

UWP is basically not a relevant option anymore. It was always a strange thing for Microsoft to push, and it makes sense for them to back away from it.

1 Like

Browsers also perform validations and apply some limits to WebGL content for security purposes.

You're all just not getting it. If you were gamers you'd know the one thing they all try to do is max out the GPU; why do you think they overclock it? As this article puts it:

“Low GPU usage in games is one of the most common problems that trouble many gamers worldwide. Low GPU usage directly translates to low performance or low FPS in games, because GPU is not operating at its maximum capacity as it is not fully utilized.”

Running at a max of 30% GPU isn't going to maximise performance; it's going to kill it stone dead! Inventing made-up reasons for why the browser runs slower is nonsense.

To prove that, I bought the Embedded Browser | GUI Tools | Unity Asset Store package and built to a Windows EXE and UWP with exactly the same Angular GUI overlaid on a 3D scene, frame rates unlocked, as I had in my browser WebGL build. The Unity3D builds for Windows EXE and UWP run at exactly the same frame rates and GPU usage, 70-90%, on Windows and UWP, with and without the in-game browser Angular GUI. So if a guy by himself, using Chromium (exactly the same code base that Chrome and Edge now use), can embed a browser in Unity3D with no performance cost, then there is absolutely no performance cost to running in the browser, period! It could only be WebGL, but it isn't; it is WebGL being throttled, whether by Unity3D, the browser or the WebGL standard, and I can't get to the bottom of it!

What's going on, Unity devs? Answer please!!!

Riddle me this, Browser Slow believers :eyes:
All shots at the same resolution:

Windows EXE build with in-game Chromium browser and Angular GUI overlaid, frame rate on the right: 172 FPS.


Windows EXE build with nothing else: 172 FPS

WebGL with Angular in a real Chrome browser: 91 FPS

Exactly the same performance: the Chromium browser cost nothing, and both native builds ran with the same GPU usage of about 70%, whereas the WebGL build runs at about 30% GPU and at half the speed!

What's going on with WebGL builds, Unity devs? Answer please!!!

GPU Usage 75% with Win EXE builds
GPU usage 35% with the WebGL build. Of course it's slower; nothing to do with WebGL or the browser, just less GPU usage.
What's going on with WebGL builds, Unity devs? Answer please!!!

Did you read the articles Ryiah linked to?

2 Likes

What is your CPU usage while running that? It's very likely the browser is far more CPU-bound than a native build since everything was converted to JavaScript; it cannot utilize SIMD and is probably single-threaded (I'm not up to date on how widespread multithreaded JavaScript support is in browsers and whether Unity leverages it).

2 Likes

Thanks for the good question; it shows someone is thinking about what I'm saying now. But that's not the reason…
WebGL CPU Usage at 13% when running the example…
WinEXE CPU Usage at 11% when running the example…
WinEXE CPU Usage at 11% when running the example with in game Chromium browser overlay…

So there's no real difference, and certainly not the CPU maxing out as the bottleneck.

I think "WebGL in the browser is slow" is a myth, and throttling is what's causing it, like the 4GB-per-tab memory limit; it's artificial.

Thanks for the good question; it shows someone is thinking about what I'm saying now.

Of course he didn’t. He’s too convinced his own thoughts are the only accurate source of information in spite of the fact that there is a plethora of information out there that says he’s full of it. He’s the poster child for the Dunning-Kruger effect.

https://en.wikipedia.org/wiki/Dunning–Kruger_effect

Specifically, for anyone actually curious and wanting to learn, it's the instruction itself that has the bottleneck, because it has to be passed through multiple layers to reach the GPU. Once it has, though, it's irrelevant whether the instruction operates on a single asset or on multiple assets at once.

With proper batching you can largely bypass the limitation, which is why it's an "average of 10%" and not a "flat 10%"; on the other hand, if you implement your app poorly you will see more than just a 10% penalty. I don't know what the upper limit is, but I'm willing to bet it's around the amount the OP is getting.
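As a rough illustration of what batching looks like in practice (not the OP's project, just a sketch with placeholder mesh and material fields, and the material needs GPU Instancing enabled), one instanced draw call submits up to 1023 copies of a mesh, so far fewer graphics calls have to cross those translation layers:

```csharp
using UnityEngine;

// Sketch of batching via GPU instancing: one DrawMeshInstanced call per frame
// instead of up to 1023 individual draw calls. instanceMesh and
// instancedMaterial are placeholder assets assigned in the Inspector.
public class InstancedBatchExample : MonoBehaviour
{
    public Mesh instanceMesh;
    public Material instancedMaterial; // must have "Enable GPU Instancing" ticked

    readonly Matrix4x4[] matrices = new Matrix4x4[1023];

    void Start()
    {
        // Lay the instances out in a simple grid.
        for (int i = 0; i < matrices.Length; i++)
        {
            var pos = new Vector3((i % 32) * 2f, 0f, (i / 32) * 2f);
            matrices[i] = Matrix4x4.TRS(pos, Quaternion.identity, Vector3.one);
        }
    }

    void Update()
    {
        // A single call that renders all 1023 instances.
        Graphics.DrawMeshInstanced(instanceMesh, 0, instancedMaterial, matrices);
    }
}
```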

SIMD is functional within Firefox but it’s completely absent from Chrome.

https://blog.mozilla.org/javascript/2015/03/10/state-of-simd-js-performance-in-firefox/
https://bugs.chromium.org/p/v8/issues/detail?id=4124

There are ways to multi-thread your code in JavaScript, and Unity has had support for it in experimental form since 2019.1. That said, some browsers have the required features disabled by default due to security exploits like Spectre.

https://discussions.unity.com/t/774290/4
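For anyone curious what that looks like on the C# side, multithreaded work in Unity generally goes through the C# Job System; a minimal sketch is below (the job and field names are placeholders, and on WebGL this path is experimental, so depending on the Unity version and browser the job may still run on the main thread):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Minimal C# Job System sketch: double an array of floats across worker threads.
public class JobExample : MonoBehaviour
{
    struct DoubleJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<float> input;
        public NativeArray<float> output;

        public void Execute(int index)
        {
            output[index] = input[index] * 2f;
        }
    }

    void Start()
    {
        var input = new NativeArray<float>(1024, Allocator.TempJob);
        var output = new NativeArray<float>(1024, Allocator.TempJob);
        for (int i = 0; i < input.Length; i++) input[i] = i;

        var job = new DoubleJob { input = input, output = output };
        JobHandle handle = job.Schedule(input.Length, 64); // 64 items per batch
        handle.Complete();

        Debug.Log($"output[10] = {output[10]}"); // prints 20

        input.Dispose();
        output.Dispose();
    }
}
```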

5 Likes

Oh, here we go: the typical inexperienced developer who has never done anything real on the web. Come back when you've built some real browser systems instead of just quoting "sandbox" as the reason, as if you even know how that works.

Show me your great examples with only 10% performance loss.

From Unity themselves, which I agree with…
"What kind of performance can you expect on WebGL?
In general, you should get performance close to native apps on the GPU, because the WebGL graphics API uses your GPU for hardware-accelerated rendering."
https://docs.unity3d.com/Manual/webgl-performance.html

It should be "close", not using less than half as much GPU and running less than half as fast.

So unless you have something real to add other than a few quotes that mean nothing, like some real data from an actual study on exactly what percentage of performance is lost, rather than just quoting things and saying that's it, you're right and I'm wrong, then shut the … up b…

1 Like

From the very same page, the sections that you chose to ignore because they agree with my statement…

2 Likes

Show me your great examples with only 10% performance loss.

I provided my evidence backed up by professionals. Feel free to peruse the links I provided.

Cop out much? "Close", not half or less, Mr Troll. Your inexperience is obvious and you hijacked an important thread. If I ignored you, that was up to me; what you said was rubbish and not based on your abilities or experience.

Obviously this is the kind of thing you want to post on an “important thread”, along with lots of exclamation marks and shouting. That will get you an answer to your question.

May I ask how a WebGL build and a native build with an integrated web browser are comparable? In the latter case, the heavy lifting (rendering, skinning) is being handled by multithreaded native code (as opposed to the single-threaded WebAssembly of the first example). There is a good chance that this is where the bottlenecking is.

Have you tried other engines that have a WebGL option vs native (e.g. Godot)? That might help reveal if it’s a bottleneck on the Unity side or the WebGL side of things.

2 Likes