Game Running on Multiplay Hosting Has Unreasonably Low Refresh Rate

I have a 2022.3.23 Linux dedicated server build using Netcode for Entities and running on Multiplay Hosting.

In the Lobby scene the refresh rate of the game is upwards of 2000, but when one player joins and the actual gameplay scene is loaded, the refresh rate drops to just about 5. When the player disconnects and the server is still in the gameplay scene, the refresh rate only recovers to about 9.

However, when I host the game on a Samsung Galaxy A23, which is a fairly budget phone, the refresh rate has no problem staying at 30 fps with one or even two players in the game. This is why I think the refresh rate is unreasonably low. Any suggestion on what may cause this, or what could solve it, would be much appreciated.

I calculate the refresh rate like this:

using UnityEngine;

public class FPSLoggerController : MonoBehaviour
{
    void Update()
    {
        // Instantaneous frame rate, logged every frame.
        float fps = 1.0f / Time.deltaTime;
        Debug.Log($"FPS: {fps}");
    }
}
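For what it's worth, a slightly more robust variant (a sketch, not the code actually running on the server) would average over a one-second window and use unscaledDeltaTime, so time scaling can't skew the number and the logging overhead itself stays small:

```
using UnityEngine;

public class SmoothedFPSLogger : MonoBehaviour
{
    float accumulatedTime;
    int frameCount;

    void Update()
    {
        accumulatedTime += Time.unscaledDeltaTime;
        frameCount++;

        // Log the average once per second instead of every frame.
        if (accumulatedTime >= 1.0f)
        {
            Debug.Log($"FPS (1s avg): {frameCount / accumulatedTime:F1}");
            accumulatedTime = 0f;
            frameCount = 0;
        }
    }
}
```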

Generic advice:

Any errors? Logging is slow, so logging every frame can itself cost performance.

If nothing comes up, try recording a profile capture to see if there’s anything specific slowing you down.

I wonder if this measurement isn’t flawed anyway, due to the server not rendering anything. Check how often network ticks run per second instead; this should equal the defined tick rate, i.e. 30 Hz.
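One way to count ticks (a sketch assuming Netcode for Entities; the system name and log text are made up for illustration) is a system in the SimulationSystemGroup that counts its OnUpdate calls against a wall-clock stopwatch, since the server world's own elapsed time advances in fixed steps:

```
using System.Diagnostics;
using Unity.Entities;

// On the server, SimulationSystemGroup updates once per network tick,
// so OnUpdate calls per wall-clock second should match the tick rate.
[UpdateInGroup(typeof(SimulationSystemGroup))]
public partial class TickRateLoggerSystem : SystemBase
{
    readonly Stopwatch clock = Stopwatch.StartNew();
    int tickCount;

    protected override void OnUpdate()
    {
        tickCount++;
        if (clock.Elapsed.TotalSeconds >= 1.0)
        {
            UnityEngine.Debug.Log($"Ticks/sec: {tickCount}");
            clock.Restart();
            tickCount = 0;
        }
    }
}
```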

There’s also the possibility that the server is overtaxed with something and can’t process more than 5-10 frames per second. After all, the Multiplay hosting servers are dual-core machines with just 8 GB of RAM.

Try making an empty project’s dedicated server build and log its update rate for comparison.


No errors logged on the server but on the client I get these:

If nothing comes up, try recording a profile capture to see if there’s anything specific slowing you down.

Thanks for the suggestion, I will try this!

I wonder if this measurement isn’t flawed anyway, due to the server not rendering anything. Check how often network ticks run per second instead; this should equal the defined tick rate, i.e. 30 Hz.

I have done this and the tick rate is 60 Hz, but according to ChatGPT the low call rate of the Update function is a problem.

Is Low Update() Rate (5-10 Hz) a Problem for a 60 Hz Server Tick Rate in a Dedicated Server?

If your server tick rate is 60 Hz, but the Update() function only runs at 5-10 Hz, this is a serious problem that will likely cause latency, desync, or dropped packets for connected clients.


Why is this a Problem?

In Unity Netcode for Entities, the server tick is the authoritative simulation rate. However, if Update() runs too slowly, it means:

  1. The server is not processing and sending updates for each tick.
  • Clients may not receive timely updates, leading to laggy movement, jitter, or desynchronization.
  • Input and snapshot processing could be delayed.
  2. The server is skipping multiple ticks between updates.
  • A 60 Hz tick rate means the server should be processing 60 ticks per second.
  • If Update() only runs at 5-10 Hz, that means it misses 50+ ticks, causing delayed responses to client input.
  • Physics, movement, and state updates will feel choppy.
  3. Packets to clients will be delayed.
  • Since the server isn’t updating frequently, it won’t send enough state updates (snapshots) or acknowledgments.
  • Clients may receive outdated data, causing rubberbanding or input lag.

TL;DR: ChatGPT claims that if the Update function is called far less frequently than the tick rate, then the server will not process and send data to clients each tick.

There’s also the possibility that the server is overtaxed with something and can’t process more than 5-10 frames per second. After all, the Multiplay hosting servers are dual-core machines with just 8 GB of RAM.

This might actually be the problem, namely that the CPU of the Multiplay Hosting server is not performant enough. I sort of took for granted that it would outperform the CPU of a budget phone, but I guess that might not be the case. Since it only has 2 cores while the CPU of the phone has 8, the multithreading of ECS can’t be utilized as effectively on the server CPU. Quite disappointing.

Try making an empty project’s dedicated server build and log its update rate for comparison.

The Lobby scene that I mentioned I believe to be comparable to an “empty project”, and the update rate there was, like I said, upwards of 2000.

If it would be of interest this is the comparison of the two CPUs made by our friend ChatGPT.

I suppose the major downside with just two cores is that this may effectively leave only one core for parallel processing, as the main thread is loaded with all kinds of other work, including scheduling jobs.

Personally I’m also baffled why Unity does not at least provide two or three tiers of Multiplay servers with corresponding price tags for the average user. Or at least a quad-core; the rest of the specs are okay.

If the server runs that slowly (5 ticks per second), there is indeed something really wrongly configured. I saw some issues in the past with Linux and job scheduling, effectively stalling the server forever.

First and foremost: can you please confirm what happens if you host the Linux server locally on your machine and forcibly set it to use only 2 worker threads (you can use a Docker container for that, or if you have Windows you can also use WSL)?

Sure, there are just 2 cores, but at a tick rate of 30 Hz, unless you have a very complex simulation or tons of clients, the server should still be running quite ok.


First and foremost: can you please confirm what happens if you host the Linux server locally on your machine and forcibly set it to use only 2 worker threads (you can use a Docker container for that, or if you have Windows you can also use WSL)?

Great idea! I did just this and could confirm in the profiler in the editor that the worker threads were limited. Then I built the Linux build again and ran it in a Docker container, and confirmed that the threads were limited in the container as well, as you can see in the following images, where JobWorkerCount was set to 2 in the first and to 1 in the second. But the fps that I log is still steady at 60. At least now I know that my game can run well as a Linux build with 2 cores, but it would be a shame if I had to host it somewhere else instead of Multiplay because of how it runs there.

JobWorkerCount = 2


JobWorkerCount = 1
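For anyone else trying this, capping the worker threads from code looks roughly like this (a sketch using Unity’s JobsUtility API; the class name and the value of 2 are just illustrative):

```
using Unity.Jobs.LowLevel.Unsafe;
using UnityEngine;

public static class WorkerThreadLimiter
{
    // Cap the job system's worker pool early, before heavy work is
    // scheduled, to roughly mimic a 2-core host.
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.AfterSceneLoad)]
    static void LimitWorkers()
    {
        JobsUtility.JobWorkerCount = 2; // or 1
        Debug.Log($"JobWorkerCount set to {JobsUtility.JobWorkerCount}");
    }
}
```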

Have you tried enabling profiling on the multiplay server (see Unity - Scripting API: Profiling.Profiler.logFile ) and getting that profiler dump (see Manage servers )? Anything happening there?
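Enabling that from code would look roughly like this (a sketch based on the Profiler.logFile scripting API; the file path is just an example):

```
using UnityEngine;
using UnityEngine.Profiling;

public class ServerProfilerDump : MonoBehaviour
{
    void Start()
    {
        // Write a binary capture that can be loaded into the Profiler
        // window later; Unity appends .raw to the file name.
        Profiler.logFile = "/tmp/server-profile";
        Profiler.enableBinaryLog = true;
        Profiler.enabled = true;
    }

    void OnDestroy()
    {
        Profiler.enabled = false;
        Profiler.logFile = "";
    }
}
```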

It could be as simple as some error logs spamming and slowing everything down. If you open your server logs, see anything in there?

Those servers should support “empty project” level of load, it’d be really bad if they didn’t.

There’s also log streaming if you deploy to multiplay using play-mode scenarios

1 Like

I used a direct connection from the editor to connect to the server’s profiler. There I can see that it is SimulationSystemGroup and PhysicsCreateBodyPairsGroup that take up the lion’s share of the frame. I want to clarify that this is not an “empty project” level of load as you said. There are also no error logs spamming; the FPS is logged every frame, but the time this takes is negligible from what I can read in the profiler.

I have now deployed the same build (excluding the serverQueryHandler code) in a container in an AKS cluster on Azure. That is running on a Standard_A2_v2 VM that has 2 cores, 4 GiB RAM and 2.4 GHz. There the server has no problem reaching 60 FPS, even though the machine seems to be less powerful than the one on Unity’s cloud.

I am really confused why this drop in performance occurs. I guess I could try running the container on Unity’s cloud to see if it runs better then, but I can’t see why it should have to run in a container to run without problems.

And just to double check, what server density are you using Multiplay side?

Just one server per machine ☝

Just to give you an idea, megacity metro runs on Multiplay and uses Netcode for Entities. We did some internal playtests with quite a few players and it was running just fine.

I’m mostly focusing on Netcode, but I’ve shared this thread with UGS folks.
In the meantime, do you have some minimal project or steps to repro so we can try reproing this on our side?


I’ll let them ask for more details, but someone suggested checking the impact of the -native-leak-detection EnabledWithStackTrace and -diag-job-temp-memory-leak-validation launch parameters on some of your build configurations.

If you could provide a project ID / environment ID / fleet ID too that’d be good.

I have tried those launch parameters but did not get any wiser about the issue.

Gladly,

Project ID: 0e7b9d8a-b1ed-4962-a944-c8c8b878535b
Environment ID: 1e352a0c-a40a-4eaf-88b9-fbd0e4674d0d
Fleet ID: 36f700bd-acce-4db9-b3ff-e08bc061e3cb

I really appreciate your efforts!

Got this from our UGS friends.

From what I could see they weren’t resource starved so whatever the issue was it wasn’t something VM related

So just to do a sanity check:

  • you removed those parameters if they were present, correct? Leak detection adds overhead to pretty much all memory allocations; it’ll have a non-trivial impact on your performance.
  • Any other difference in launch parameters between your two servers?
  • you’re building with the dedicated server build target, right? (Not just Linux, but Dedicated Server Linux.) It’s a separate target that strips graphics, audio and other non-server things.
  • when you look at your profiler dump, how many counts of SimulationSystemGroup do you see? How many counts of the prediction group do you see? How long do they take (mind sharing a screenshot of your profiler run please?)
  • What’s your TargetFrameRateMode? Auto? BusyWait? Sleep? Also, do you mind using unscaledDeltaTime for your FPS calculation? If you use Sleep mode, the server world’s rate manager will change your update rate to reach your target rate, and I want to see if there’s some bug there.
  • Do you mind sharing your ClientServerTickRate settings please?
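For context, those settings live on the ClientServerTickRate singleton, set up somewhere on the server before the game runs, something like this (a sketch; the system name and all values are examples only, not a recommendation):

```
using Unity.Entities;
using Unity.NetCode;

// Creates the ClientServerTickRate singleton in the server world.
public partial class TickRateSetupSystem : SystemBase
{
    protected override void OnCreate()
    {
        EntityManager.CreateEntity(typeof(ClientServerTickRate));
        SetSingleton(new ClientServerTickRate
        {
            SimulationTickRate = 60,
            NetworkTickRate = 60,
            TargetFrameRateMode = ClientServerTickRate.FrameRateMode.BusyWait,
            MaxSimulationStepsPerFrame = 4
        });
        Enabled = false; // one-time setup, no per-frame work needed
    }

    protected override void OnUpdate() { }
}
```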

Did you find a solution to the problem? We encountered the same thing, but we use Photon Fusion for multiplayer.

Unfortunately not mate, my idea was to go with Azure instead since it ran fine there. Good luck to you!

In our case, the problem was solved. To speed up server builds, we had set the C++ compiler configuration to Debug. After building with the Master configuration, the load on the server became about 5%. I hope this can help someone.
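For anyone hitting the same thing: that setting is the IL2CPP C++ Compiler Configuration in Player Settings. Switching it from an editor build script would look roughly like this (a sketch; the class and method name here are made up, the PlayerSettings API is real):

```
using UnityEditor;
using UnityEditor.Build;

public static class ServerBuildConfig
{
    // Use the Master IL2CPP configuration for release server builds;
    // Debug builds of the generated C++ can be dramatically slower
    // at runtime.
    public static void UseMasterConfiguration()
    {
        PlayerSettings.SetIl2CppCompilerConfiguration(
            NamedBuildTarget.Server,
            Il2CppCompilerConfiguration.Master);
    }
}
```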