Proxy Server Load Balancing / Scaling

Hi all,

We’re investigating using the Proxy Server provided under the Unity Networking Servers to relay game traffic for clients who cannot connect directly to one another. We’ve gotten it to work in testing via the Network.useProxy setting, but I have some questions geared more towards the maintenance and support of this solution.

  1. Have any of you used (or heard of someone who has) this Proxy Server on a live title?
  2. The code shows a default connection limit of 1000 - has anyone load/stress tested this to see whether that’s a hard limit in practice? How many concurrent connections have you been able to support?
  3. What is the expected latency for connections routing through the Proxy?
  4. Most importantly, can this Proxy be load balanced if you have more connections than a single node can support? Has anyone gotten this to work? What kind of load balancing scheme would you use?

I appreciate any assistance or feedback. We’re running tests on the software right now, but any firsthand experience would sure speed things up.



I’m currently doing some research for the Unity ProxyServer myself, and was wondering about the same stuff.
According to Google, not a lot of people out there are using it?

I’m having a lot of trouble with the default ProxyServer, as it constantly gives me errors about rejected messages, etc.

Somebody suggested I have a look at Photon Cloud, but that’s a typical client-server setup and not really what I was aiming for.

What are your insights after testing?

As far as I can tell, no one has used this in production, and I’d be happy to hear otherwise if someone has.

We’ve gotten it working on our side and tested it via the Networking Example Third-Person app, running on a CentOS 5 VM. We’ve not had any difficulty with the default settings yet, but we plan to tweak them and experiment to meet our needs. For example, the 1000-connection limit and the 10-server-port limit need to be raised, and we’ll add some app-level metrics to get a better understanding of how this performs under load. I suspect we may need to tune the sleep times as well, since they could add unnecessary latency.
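For the app-level metrics mentioned above, something like this sketch is what I have in mind: record per-message round-trip times on the client and report simple percentiles. All the names and the millisecond units here are my own assumptions for illustration, not anything the Proxy Server provides.

```python
import statistics


class LatencyStats:
    """Collect round-trip samples (in ms) and report simple percentiles."""

    def __init__(self):
        self.samples = []

    def record(self, rtt_ms: float) -> None:
        self.samples.append(rtt_ms)

    def percentile(self, p: float) -> float:
        # Nearest-rank percentile over the recorded samples.
        ordered = sorted(self.samples)
        rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
        return ordered[rank]

    def summary(self) -> dict:
        return {
            "count": len(self.samples),
            "mean": statistics.mean(self.samples),
            "p50": self.percentile(50),
            "p95": self.percentile(95),
        }
```

Comparing these numbers with and without the proxy in the path should show how much latency the relay (and its sleep intervals) actually adds.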

We’re still working on formulating a better load test to see what kind of stress the system can take. It won’t scale beyond a single node unless balancing is done client-side, since both the server (host) and the client have to hit the same proxy instance. If it’s not CPU/memory-bound, we may try running multiple processes on the same VM.
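One client-side scheme for the balancing problem above (a sketch of my own, not something Unity ships) is to have every peer derive the proxy instance deterministically from a shared game/session ID, so the host and all clients land on the same node with no coordination. The proxy list and session-ID format here are hypothetical.

```python
import hashlib

# Hypothetical list of proxy instances; every peer must use the same list
# (e.g. baked into the build or fetched from the same master server).
PROXIES = [
    ("proxy1.example.com", 10746),
    ("proxy2.example.com", 10746),
    ("proxy3.example.com", 10746),
]


def pick_proxy(session_id: str) -> tuple:
    """Deterministically map a game/session ID to one proxy instance.

    The host and its clients all call this with the same session_id,
    so they always route through the same proxy node.
    """
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(PROXIES)
    return PROXIES[index]


# Every peer joining game "room-42" resolves to the same instance:
host, port = pick_proxy("room-42")
```

One caveat: adding or removing a proxy node reshuffles most session-to-node mappings. Consistent hashing would soften that, but since a mapping only needs to live as long as one game session, simple modulo may be good enough.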

Sorry that’s not terribly helpful yet :)