I can’t figure out why I am getting such low fps in my mobile (Android) game using URP. Could textures be causing this? I read somewhere that having a lot of textures with transparency can cause issues. I have a couple in my UI (the health bar display), and the shader in the background also uses transparency.
I’m not looking for ridiculously fast fps… anything at or over 30 would be acceptable. But right now, less than 15 is insane. I downloaded Call of Duty Mobile this afternoon just to test it for comparison and it ran smooth as butter. This? Well… the renderer is super slow… I mean like 30+ ms every frame. I can’t figure out why, if this is a bug, or if it’s something I’m doing. I’ve tried everything. I’m using object pooling. I’ve traded out a lot of update loop logic for More Effective Coroutines (supposed to be way faster). The app builds in less than a couple of minutes… what could be causing this renderer slowdown?
Gfx.WaitForPresentOnGfxThread is how long the CPU spends waiting for the GPU to finish rendering the previous frame. In this case that’s showing ~50 ms, which means you’re GPU bound. There’s nothing the Unity profiler is going to be able to do to help you here, because it only shows how long your CPU is taking to do stuff.
Lots of transparency is indeed very bad for mobile, but the only way to know for sure is to use a GPU profiler, and you’ll have to find the one specific to the GPU in your mobile device.
A couple of quick optimizations in the URP asset without changing code: reduce Shadow Resolution (and shadow distance, since any given resolution is spread out over it), and uncheck the Opaque and Depth textures if you don’t need them for anything. Look over lighting - the types and number of shadow casters, the per-object light limit - and reduce whatever will have no impact or a negligible one. Also think about lightmapping and baking (which I don’t know enough about yet to suggest anything specific with confidence).
In build settings make sure Blit Type is set to Auto instead of Always, and tinker with other settings to see if any have a positive effect - it’s hit or miss, but a few extra frames per second here and there add up. You can also override the default frame rate with Application.targetFrameRate - just mind the temperature / throttling; you’re probably better off leaving it at the default unless for some reason it’s less than 30 on the device you’re testing with. Keep batch counts low however possible, combine distant scenery within reason (a complex topic I won’t touch on here), keep texture resolutions realistic, and be careful with Post Processing. As for transparency, try to use alpha clip with opaque shaders instead wherever it won’t make much of a visual difference. Keep shaders simple wherever possible, and keep poly counts low / put some effort into mesh reduction if you haven’t.
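To make the frame rate override concrete, here’s a minimal sketch (the class name is just made up, and 60 is illustrative - pick whatever your thermals can sustain):

```csharp
// Minimal sketch of overriding the default mobile frame rate cap.
// Mobile builds have historically defaulted to a 30 fps target, so an
// explicit value is needed if you want the device to try for 60.
using UnityEngine;

public class FrameRateCap : MonoBehaviour
{
    void Awake()
    {
        // Higher targets mean more heat, battery drain and throttling,
        // so only raise this if the game actually holds the frame time.
        Application.targetFrameRate = 60;
    }
}
```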
Also… note that comparing your game to a AAA title isn’t realistic. Not to discourage you - just saying, neither you nor I have unlimited resources. Anyway, I hope you’ll find some of what I mentioned useful.
This is with all shadows disabled, no post processing, and roughly 15k tris in scene. I disabled the directional light and have one point light on avatar, so down to a single real time light. I guess my next move is to make a very basic starter scene with like a cube and see what the renderer is doing there.
And yeah, I get that Call of Duty is AAA and my game is not. But this seems to be beyond poor optimization… this is unplayable even at the most basic level.
Tried removing Vulkan and running with OpenGL ES 3, disabling multithreaded rendering, among various other settings, with the same results… still waiting on the GPU to render frames.
Been looking around for a GPU profiler, can’t seem to find one. There is 2019 Unity documentation that refers to one existing, but I can’t find it and assume it’s outdated. I’m not sure if it’s something that would be specific to my phone’s GPU (Adreno 506) or if there is some GPU profiler that people are generally using. Can’t find any reliable info on this topic.
@djweaver I do get the frustration - I’ve been through the wringer with Android myself, I use an S5 for benchmarking and that is nowhere near a gaming device by any stretch (in fact a quick search says it’s discontinued). That has an Adreno 330 GPU, which I’m assuming is older than your 506 - and it’s capped at 30fps unless I set Application.targetFrameRate explicitly higher. It might be worth a quick test to see if that’s the case with your device as well if you haven’t tried already.
@adamgolden Well, at least I’m on the right track now. I dug out my old Galaxy S8 and am gonna try that as soon as I get it updated. If it’s just an issue with my BlackBerry (I’m hoping it’s some throttling, or something related to security or some such that is specific to BlackBerry), then I’ll move on with development and just count that device out.
I tried setting Application.targetFrameRate to 30 fps when I was testing my actual game and not the 3d mobile template. The problem with the game is that it doesn’t get anywhere close to 30fps to begin with, so I doubt setting the cap higher would help? I don’t know, maybe I will try it after I test with the S8.
The last thing worth mentioning for now, if you’re not already familiar with Adaptive Performance, I’d suggest reading up on it. This isn’t something applicable to our phones given it’s only supported on Android 10+ iirc, but looking ahead…
Understand that mobile devices are always vsynced. That means they always wait for the screen refresh before starting to render the next frame. If the example above is taking just longer than 16.6 ms to render, or the target frame rate is set to 30, it’ll wait until 33.3 ms have passed before starting to render again. If it takes just longer than 33.3 ms, it’ll wait until 66.6 ms have passed.
Again, unless you use a proper GPU profiling tool, you’ll have no idea what’s going on.
Another thing about Call of Duty Mobile: by default it runs at a 720-pixel vertical resolution, which is probably far lower than your screen’s. I suspect URP defaults to running at full resolution.
Thank you for this link. I was clueless as to where to get said profiler for the GPU. I’m awaiting approval for registration as we speak.
I ran the same test with the 3d mobile template on the S8, with and without Application.targetFrameRate = 120, with the same results. @bgolus if I’m understanding you correctly, the vsync would be the reason for being GPU bound at around the 30 fps mark. However, I expected that setting targetFrameRate to 120 would change that. It did not. Also, you are correct: my original app was running at the max resolution of the device, which was 1080x1620. I will definitely be changing this once I figure out how.
Here’s the Galaxy S8 (Adreno 540) with targetFrameRate set to 120
I’m running the Snapdragon profiler. Getting green on device and connection status, but no data coming into realtime, trace or snapshot modes when running the app.
Regardless of whether you’ve set it to 120, if it can’t make the frame time it’ll wait until the next screen refresh. It doesn’t matter that the target frame rate is 120 if it’s taking >16 ms to render a frame. We know it is, because the wait for present / present and sync times are above that.
Also, if the device can’t do 120 Hz (which the S8 can’t, it only has a 60 Hz display) it’ll still be limited to a max of 60 Hz. And if it can’t complete the frame in the 60 Hz ~16.6 ms window, it’ll display at 30 Hz, or 20 Hz, or 15 Hz… because 16.6 ms * 2 = 33.3 ms (30 Hz), 16.6 ms * 3 = 50 ms (20 Hz), 16.6 ms * 4 = 66.6 ms (15 Hz). It’s just waiting until the next 60 Hz screen refresh.
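To put numbers on that quantization, here’s a tiny sketch of the arithmetic (not a Unity API, just the rounding described above; the example 18 ms frame time is made up):

```csharp
// Vsync rounds a frame's cost up to the next whole refresh interval:
// miss the 16.6 ms window on a 60 Hz panel and you present at 33.3 ms (30 fps),
// miss that and you present at 50 ms (20 fps), and so on.
using UnityEngine;

public static class VsyncMath
{
    public static float DisplayedFps(float gpuFrameTimeMs, float refreshHz = 60f)
    {
        float refreshIntervalMs = 1000f / refreshHz;                          // ~16.6 ms at 60 Hz
        int refreshesWaited = Mathf.CeilToInt(gpuFrameTimeMs / refreshIntervalMs);
        return 1000f / (refreshesWaited * refreshIntervalMs);                 // e.g. 18 ms -> ~30 fps
    }
}
```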
The target frame rate is mainly used to make the game run at a lower frame rate than the device is capable of, for battery savings.
I’m not sure how to set specific resolutions on Android without creating render texture targets and assigning them to the camera. But under the Player Settings you can set Resolution Scaling Mode to Fixed DPI and set a Target DPI, and then there’s an additional scale factor you can adjust from the Quality Settings or from script.
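For the “scale setting you can use from script” part, here’s a minimal sketch, assuming Resolution Scaling Mode is set to Fixed DPI in Player Settings (the class name and the 0.75 factor are just examples to tune per device):

```csharp
// Sketch of scaling the rendered resolution down on a high-DPI phone.
// resolutionScalingFixedDPIFactor multiplies the Target DPI set in Player Settings;
// values below 1 render fewer pixels and let the OS upscale to the screen.
using UnityEngine;

public class MobileResolutionScale : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.resolutionScalingFixedDPIFactor = 0.75f; // example value, not a recommendation
    }
}
```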
@bgolus I was hoping that I could just set the reference resolution on the camera (set to screen) to the desired resolution and it would upscale or downscale everything according to that. But I have no idea what I’m talking about. I don’t even know if when an application is set to run at a particular resolution if it ever really does run at that exact resolution or if that figure is just a means to calculate and scale everything to target devices resolution, and beyond any of that, how much this all correlates to gpu usage.
Question:
When you profile your GPU using the Snapdragon Profiler, what exactly does your workflow look like? The way I connected mine: I had to put the adb directory in my PATH env variable (Windows) and set the adb and NDK directories in the Snapdragon Profiler settings. Then I connected my device, ran the ‘adb devices’ command in a command prompt to make sure it was connected, then ran the profiler and it connected automatically after a few seconds. Green on the device icon, green on the connection icon. I’ve tried real-time profiling, trace and snapshot, each of which has a section that allows me to launch the app. I choose the app (in my case “com.DefaultCompany.TestApp”) from the list, it launches on my device with a small dialog telling me to wait for the debugger to attach, then it attaches and the app continues to run… but I get no data from the profiler.
That’s not entirely true; I do Oculus Quest dev, but my Android dev environment or my very old engineering-sample Quest has issues where I can’t connect any profiler, even Unity’s or our own internal tools. I can only issue adb and shell commands from the command line.
Lol, lovely. Welp. This is day four of me trying to figure out why my very very basic scrolling shooter game gets 12 fps with no custom shaders, no shadows, 1 light, object pooling, and less than 10 textures. When I add my 3 custom shaders into the mix, my game runs slower than dried dog turds rolling upwards against the wind to the peak of mount everest.
So far I have:
- Two devices that look virtually the same on paper (Unity profiler)
- A GPU profiler that I can’t get working, so I can’t even begin to understand why I’m having this issue
- No idea what settings I could change once I determine what said issue is, given that I’ve tried just about every graphics-related setting I’ve thought of and read about
But give up I won’t! There has to be some reasonable explanation here… it’s gotta be something so simple… so easily overlooked.
I think this is the ticket. I’ve been experimenting with setting the DPI. I set it to 72 and my fps wasn’t dipping much lower than 30 (seems like due to the vsync, 30 fps is the best I can hope for). Obviously it looked like trash, but it allowed me to bring my custom shaders back in with relatively decent performance.
I noticed when playing in the editor that my batches could reach as high as 40ish… does this seem high? It appears to correlate with the amount of projectiles I am firing, which I don’t really understand. My projectiles are spheres with a solid color, yet they are emissive. I have some transparent UI elements, as well as sprite gameobjects (item drops) that cascade down the screen, but if I disable all of that, my projectiles can still push the batch number well into the 30’s. Could this be a problem? Would using a sprite mesh in any of the situations where I’m using transparency with sprite images (either in UI or in world) help me at all? Or is this likely some negligible thing that probably has little to do with the performance drop I’m facing?
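Not a diagnosis, but one common reason batch counts scale with projectile count is that each projectile quietly ends up with its own material copy. A hedged sketch of keeping them shareable (the component name is hypothetical):

```csharp
// Reading renderer.material clones the material per object, so every projectile
// becomes its own batch; renderer.sharedMaterial keeps them all on one material
// that can be batched or GPU-instanced together.
using UnityEngine;

public class ProjectileVisual : MonoBehaviour
{
    void Awake()
    {
        var rend = GetComponent<Renderer>();

        // Avoid this - it silently creates a unique material instance per projectile:
        // rend.material.SetColor("_BaseColor", Color.cyan);

        // This keeps all projectiles on the shared material asset:
        rend.sharedMaterial.enableInstancing = true; // same as ticking "Enable GPU Instancing" on the material
    }
}
```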
We do a lot of mobile development, albeit on iOS. The first thing you should do if you are GPU bound on mobile in a simple scene is to drop the resolution.
Android phones tend to have crazy high resolution screens, which means a lot of pixels to push around, often without an adequate GPU to match. You can modify the Render Scale from the URP asset.
Start from here and then optimize.
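A minimal sketch of the Render Scale suggestion: you can set it on the URP asset in the inspector, or (assuming URP is the active pipeline) from script like this, with 0.7 as an arbitrary starting point and a made-up class name:

```csharp
// Renders the 3D scene at a fraction of the native screen resolution.
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class UrpRenderScale : MonoBehaviour
{
    [Range(0.25f, 1f)] public float scale = 0.7f; // tune per device

    void Awake()
    {
        var urp = UniversalRenderPipeline.asset;  // the currently active URP asset, if any
        if (urp != null)
            urp.renderScale = scale;
    }
}
```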
I think you’re right about the resolution. The biggest noticeable performance increase I got was when I disabled the VSync Count setting and increased Application.targetFrameRate to 60. (When I increased it to 120, nothing changed… I’m assuming 120 was so high it triggered some default reset to 30 or something.)
As things are now, I’m GPU bound at 60 fps with frame times anywhere from 13-18 ms, which is a big improvement, but this came from disabling the VSync Count, explicitly setting targetFrameRate, and lowering the output DPI in Player Settings.