So what exactly are the (current) issues with Unity networking?

When I first looked into Unity networking a long time ago (version 2.something), there were quite a few major issues, so I ended up going straight to the middleware solutions rather than spending time with Unity networking. Nowadays I really like both Photon and uLink, but I’m wondering if anything has changed?

What’s the current situation with the base Unity networking? Are there still major problems or have most of them been fixed? Any major weaknesses I should know about?

I know Unity networking isn’t made to scale to MMOs, but I’m not making an MMO. It’s a room-based game (let’s say ~10-12 max players per room) on an authoritative server, with custom matchmaking code… are there any key problems that I will have with this sort of game?

This might sound stupid, but I know Unity networking is fairly similar to uLink. Since I never took the time to learn it (I knew there were problems with Unity networking), I’m not quite sure what I would be losing aside from scalability.

To be 100% honest, the biggest reason I’m asking is that if Unity networking is functional enough, thousands of dollars on middleware will be saved. But I want to make sure I’m aware of any potential pitfalls… and I’m worried that the prevailing atmosphere still seems to be that you can’t do anything serious without devoting yourself to a middleware solution… so any advice would be appreciated.

How many rooms per server are you planning to host? Or, in other words: How many players per server? If there’s not much simulation code going on on the server; in particular no physics stuff, things might scale up quite well (I’ve never tested pure Unity networking without physics stuff). Also, if you’re only planning to have 50 or maybe even 100 players, things shouldn’t be too much of a problem - but that obviously depends a lot on the kind of game you’re creating.

Most important thing in multiplayer (and unfortunately also the most difficult): test early, test often. When I did some load-testing with Traces of Illumination, probably two or even three years ago by now, I could go up to around 60 players and everything was totally smooth. Then I went to 80, and suddenly it was a completely unusable mess. (Traces of Illumination does use quite a bit of collision detection, though, so that might have been the issue; I have since done a lot of optimizations but haven’t set up a larger-scale load test.)

Well… I had a big project at the beginning of the summer using a lot of iPads in a network environment.
Everything went well up to 20 iPads. From there on, RPC calls wouldn’t arrive at some clients, etc. (I’ve tried up to 40 iPads, hired for a big event.)
Could be the networking hardware, of course… but I wish there was a built-in way to be sure every RPC message arrived.
So I’m writing that right now :)
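For what it’s worth, a manual acknowledgment layer on top of Unity RPCs can be sketched roughly like this (all the names here are hypothetical, and a real version would also need a resend timer for entries that stay in `pending` too long):

```csharp
using UnityEngine;
using System.Collections.Generic;

// Hypothetical sketch: tag each RPC with a sequence number and keep it
// in a pending table until the receiver explicitly acknowledges it.
public class TrackedRpc : MonoBehaviour
{
    int nextSeq;
    // seq -> message still awaiting acknowledgment (resend candidates)
    readonly Dictionary<int, string> pending = new Dictionary<int, string>();

    public void SendTracked(NetworkPlayer target, string message)
    {
        int seq = nextSeq++;
        pending[seq] = message;
        networkView.RPC("ReceiveTracked", target, seq, message);
    }

    [RPC]
    void ReceiveTracked(int seq, string message, NetworkMessageInfo info)
    {
        // Acknowledge receipt so the sender can stop tracking this RPC.
        networkView.RPC("AckRpc", info.sender, seq);
        // ...handle the message...
    }

    [RPC]
    void AckRpc(int seq)
    {
        pending.Remove(seq); // delivered; a timer could resend leftovers
    }
}
```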

Grtz,
kaaJ

As I understand it, Unity wasn’t built with multithreading in mind; the server runs on only one thread. Multithreading would have helped in a room-based multiplayer game.

Another problem is that physics in one room will affect players in another room, unless you separate the rooms by layer - and the problem with that is there is a maximum of only 32 layers.

My suggestion is for a “room” server to handle only one room, and to create a new instance of it when a new room is requested. This is what I’ve had in mind for a long time, but we don’t really have the money to purchase even a Windows server machine to pursue it.

There is physics, but the amount of players per server depends on what you mean by “server”… Do you mean server machine? Or do you mean each instance of the server application?

The plan was to run separate instances of the server application for separate rooms, to minimize possible bandwidth issues. The only one that will need to support more players is the “master” server, which will only transmit simple RPCs and no state sync/physics or anything like that.

Also, this way the applications can be run on separate CPU cores to avoid the problems discussed by one of the other posters.
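A per-room launcher along those lines might look something like this (a sketch: the executable path and the `-roomPort` flag are assumptions; `-batchmode`/`-nographics` are the standard Unity headless flags):

```csharp
using System.Diagnostics;

// Hypothetical launcher: one headless Unity server process per room,
// each pinned to a single CPU core so rooms don't fight over one core.
class RoomLauncher
{
    static Process LaunchRoom(int port, int cpuCore)
    {
        Process p = Process.Start(new ProcessStartInfo
        {
            FileName = @"C:\GameServer\RoomServer.exe",      // assumed path
            Arguments = "-batchmode -nographics -roomPort " + port
        });
        // Affinity mask with a single bit set selects a single core.
        p.ProcessorAffinity = (System.IntPtr)(1 << cpuCore);
        return p;
    }
}
```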

Already tested a uLink server architecture similar to that and that’s the plan if uLink ends up being the choice. Already have a Windows server ready for it.

Does this mean there is some sort of bug in Unity networking’s reliable transfer that is losing packets? Because with the middleware options, sending as reliable ensures that you don’t lose packets.

I think I may not have explained my original question well in the OP, though. The architecture has already been tested in uLink, and I’m aware of the potential pitfalls with bandwidth/resources/having to split server applications between cores/etc.

But I’m asking more whether there are any major bugs, or major differences between uLink and Unity, that would make the server impossible in Unity - an example being the reliable RPC issue listed above.

That’s the only (possible) bug I’m aware of. Aside from that, I’ve noticed that there are fewer callbacks, there doesn’t seem to be any built-in encryption, there are fewer security/password functions, and there’s no P2P or redirection functionality. But that’s all I’m aware of, and I’m not so sure those things are worth $10,000 USD. I’m not aware of how much bandwidth difference there will be with smaller rooms, and probably won’t know for sure without testing.

But the thing is, if there are any Unity networking issues or bugs that will prevent it from being possible at all, I won’t even take the time to rewrite the architecture in Unity networking, so I’m trying to do some research before starting. So if anyone has tried Unity networking and run into issues, it would be appreciated if you could let me know =)

Both uLink and Unity networking have a slight usability issue when it comes to launching one process per room (and, say, 50 rooms per machine) - but that, of course, could be dismissed as me complaining.

There’s no out of the box solution (that I’m aware of) that would allow starting/stopping/restarting/monitoring/updating/managing many rooms (when I say room I mean unity process) on several machines (or maybe even ‘on demand’ launching a process if it is needed).

→ This applies to ANY networking solution if you want to do collision checking in Unity processes.

I’ve started to write such a thing for my own needs, but I would be extremely happy if any middleware provider started providing this kind of tighter integration with Unity :)

Or maybe there’s already such cool system existing somewhere and I’m just unaware of it?

I have only slight experience in creating online games (and I might be wrong in some of my assumptions),
but the things that are ‘not good’ in Unity’s networking, from my point of view, are:

  1. RPCs are only ordered and reliable (if I want to send an unreliable RPC to launch a particle system, there’s no way to do it).
  2. It is not possible to send a byte[] over state sync (you have to do some nasty packing/unpacking of your data into the supported types).
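The packing mentioned in point 2 usually means squeezing your bytes into one of the types BitStream does support. A rough sketch, assuming you serialize inside `OnSerializeNetworkView` (the `payload` field and its size handling are illustrative):

```csharp
using UnityEngine;

// Sketch of packing a byte[] into ints so it fits through Unity's
// BitStream, which only supports a handful of primitive types.
public class ByteSync : MonoBehaviour
{
    public byte[] payload = new byte[8];

    void OnSerializeNetworkView(BitStream stream, NetworkMessageInfo info)
    {
        int count = payload.Length;
        stream.Serialize(ref count);
        if (!stream.isWriting && payload.Length != count)
            payload = new byte[count];

        // Pack/unpack four bytes per int; the last int may be partly padding.
        for (int i = 0; i < count; i += 4)
        {
            int packed = 0;
            if (stream.isWriting)
                for (int j = 0; j < 4 && i + j < count; j++)
                    packed |= payload[i + j] << (8 * j);
            stream.Serialize(ref packed);
            if (!stream.isWriting)
                for (int j = 0; j < 4 && i + j < count; j++)
                    payload[i + j] = (byte)(packed >> (8 * j));
        }
    }
}
```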

As I said, I haven’t used Unity’s networking in a released product, so my experience is limited!

Here’s my list. I’m starting to work with Lidgren now, but it’s a background project.

  1. Network instantiation is unreliable, so you end up needing a really horrible hack combining RPCs for spawning objects with network view synchronization for updates. The NetworkView synchronization here is extremely ugly, requiring things like destroying all network views and recreating them whenever a new player connects (otherwise you’ll get errors where the network views never connect, because initial data was sent before your client had the object).

  2. No way to limit bandwidth per player. Meaning, I’d like to send fewer updates the less important an object is, based on distance or some other metric. Unity’s networking scales horribly right now - everyone sends out full data (reliable compressed, etc.) if they are in scope.

  3. Some way to visualize where bandwidth is being used, i.e. which RPCs, what percentage goes to which synchronizing components, etc.

  4. Networking has fairly horrible performance if you scale up to 30-50 networked objects - I assume it’s because we’re using reliable compressed and it’s trying to diff all the data 15 times a second. If that’s the reason the profiler is indicating a high percentage, a dirty-flag system would optimize a lot of that. But the profiler tells you no info beyond Network.update().

  5. Use some sort of unreliable acked packets for network updates instead of having everything go through a reliable UDP system, to allow unimportant data to be dropped/resent as needed instead of backing up the whole payload.

  6. Total bandwidth limit per connection, so that it sends fewer NetworkView syncs if too much data has been sent, prioritizes any RPCs over NetworkView syncs, and then fills the remainder based on the priorities in #2.
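The usual workaround for point 1 is RPC-based spawning with manually allocated view IDs - the sender reserves a NetworkViewID and tells everyone to create the object locally. A sketch (not production code; the prefab reference and buffering choice are up to you):

```csharp
using UnityEngine;

// Sketch of RPC-based spawning instead of Network.Instantiate:
// allocate a NetworkViewID up front and broadcast a buffered RPC so
// late joiners also create the object with the same view ID.
public class RpcSpawner : MonoBehaviour
{
    public GameObject playerPrefab;

    public void Spawn(Vector3 position)
    {
        NetworkViewID viewID = Network.AllocateViewID();
        networkView.RPC("SpawnPlayer", RPCMode.AllBuffered, viewID, position);
    }

    [RPC]
    void SpawnPlayer(NetworkViewID viewID, Vector3 position)
    {
        GameObject go = (GameObject)Instantiate(
            playerPrefab, position, Quaternion.identity);
        // Assign the pre-allocated ID so all peers agree on this object.
        go.GetComponent<NetworkView>().viewID = viewID;
    }
}
```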

Thanks for the responses!

So it seems the biggest issues are related to RPCs being forced to reliable, which harms performance and causes RPCs to be dropped with many networked objects - in addition to the network instantiation issues.

My workaround for that was to have the rooms already “open”, sorted by which core Unity is running on (with the command line you can choose a core). That way players are sent to whichever core has the most open rooms.

For monitoring/updating these rooms, I haven’t tested that yet, but it should theoretically be possible in a few ways in uLink. You can of course just save the important data to a database. Or uLink’s P2P implementation can be used for communication between servers. Or you could use a Master Server and restrict access to it through a single “main” server that’s used only for matchmaking and is in control of the important data.

I’m curious to know, under what circumstances is it unreliable? I’ve never encountered any issues with it, but I’ve only done a few tests with 4 players outside of LAN.

Actually, you can use Physics.IgnoreCollision(Collider colliderA, Collider colliderB, bool ignore) to ignore collisions between objects of room A and room B if they have the same “location” and geometry but different people. That’s the approach I’m using in Traces of Illumination and it works quite smoothly. I have considered moving over to layer based collisions (I had my approach implemented before those were available) but the 32 layers limitation makes this somewhat complex to implement (and as I’m using multiple layers already I’d probably introduce a limitation of 16 or less “groups” per level).
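In case it helps, the per-room ignore boils down to a pairwise loop like the following (a sketch; the collider arrays are assumed to come from your own room bookkeeping):

```csharp
using UnityEngine;

// Sketch: make two co-located "rooms" physically invisible to each
// other by disabling collisions between every cross-room collider pair.
public static class RoomPhysics
{
    public static void Separate(Collider[] roomA, Collider[] roomB)
    {
        foreach (Collider a in roomA)
            foreach (Collider b in roomB)
                Physics.IgnoreCollision(a, b, true);
    }
}
```

Note that this has to be re-applied for colliders spawned after the call, since IgnoreCollision only affects pairs that exist at that moment.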

I think this is where you will hit a hard wall with Unity’s built-in networking - it’s IMHO the greatest limitation there: in Unity’s built-in networking, a client can only connect to a single server. So you have to choose: either you put everything into the “room servers”, or you use some different networking layer (e.g. Photon, Lidgren, SmartFox, whatever) for your “master” server… or you stay with uLink or move over to Photon Unity Networking (which has a similar API to Unity’s networking and uLink). Keep in mind that there are also indie licenses of uLink available every once in a while, so if that would be an option for you, the price would be much easier to handle. With Photon, depending on how many CCUs you need, you might also get away with paying much less (or much more if you need a hell of a lot of servers ;) ).

Whether Photon would really cost a hell of a lot more is questionable, as Unity is so sub-optimized for headless server usage (batchmode is not that old a feature, and without Unity Pro you can skip it completely) that you need one CPU core (assuming a 2.4GHz+ Core i generation) per 16-64 users in an action-oriented environment (depending on whether you use physics or just collisions), which gives you massive costs on the server side just due to the sheer calculation power requirements.

The issues I had with Photon were getting the headless physics to run optimally. There’s no way to configure it so that Photon doesn’t treat the physics server like a “regular” client (in terms of bandwidth), so the round trip from client, to server, to physics server, back to server ended up being a nightmare for getting the physics to look/act correctly with prediction. I’m not sure if Photon Unity Networking would help with these issues; reading the thread, it just sounds like the actual client API is better integrated now, but the server does not seem to have changed - please correct me if I’m wrong.

The other issue was that Photon charges per server and uLink charges per game. That being the case, Photon would be more affordable in the short term due to the scaling costs, but if multiple servers are going to be needed for physics simulation, uLink would probably be cheaper in the long run, even at full price. If an indie license becomes available, the choice is much easier.