Will the job system improve networking and multiplayer speeds?

So the job system is highly optimised to allow a lot of units to move and to let Unity do more work across more threads.

Could networking benefit from the job system, so that updates to and from other players are less dependent on the main thread? Could they even run on separate threads/jobs?


You could receive the data, then pass it off to be processed in jobs. So I don’t see why you couldn’t.
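For illustration, here is a minimal sketch of that pattern (all names are made up, not an official API): copy the received bytes into a NativeArray on the main thread, then let a job do the parsing on a worker thread.

```csharp
using Unity.Collections;
using Unity.Jobs;

// Toy job that "parses" a packet off the main thread.
struct ParsePacketJob : IJob
{
    [ReadOnly] public NativeArray<byte> Packet;
    public NativeArray<int> Result; // length-1 buffer for the output

    public void Execute()
    {
        // Read a little-endian int from the first four bytes.
        Result[0] = Packet[0] | (Packet[1] << 8) | (Packet[2] << 16) | (Packet[3] << 24);
    }
}

static class PacketProcessor
{
    public static int Process(byte[] received)
    {
        using (var packet = new NativeArray<byte>(received, Allocator.TempJob))
        using (var result = new NativeArray<int>(1, Allocator.TempJob))
        {
            new ParsePacketJob { Packet = packet, Result = result }
                .Schedule()
                .Complete(); // in real code you'd schedule now and complete later in the frame
            return result[0];
        }
    }
}
```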

That being said, the issue with “a lot of units” is that you still need to pass the same amount of data with or without threading, and that comes down to network latency/packet size. If you have a lot of units, you would need to make your physics deterministic so you don’t pass a ton of data and instead send just mouse clicks/movements…
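As a rough illustration of that “send inputs, not state” idea (a hypothetical message layout, not anything UNET-specific): with a deterministic simulation, every peer only needs the per-tick commands, so packet size is independent of the unit count.

```csharp
// Hypothetical per-tick input command for a deterministic lockstep model.
// A handful of bytes per player per tick, no matter how many units exist.
public struct InputCommand
{
    public uint Tick;      // simulation step this input applies to
    public byte PlayerId;
    public ushort ClickX;  // quantized cursor position
    public ushort ClickY;
    public byte Buttons;   // bitmask of pressed buttons
}
```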

I think UNET NetworkTransport is already jobified/multithreaded internally for sending/receiving messages (@aabramychev might be able to confirm).

But with the job system, you can jobify the gathering and applying of game states for networking, so there’s definitely a speedup to be gained here.
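A hedged sketch of what “jobifying the gathering” could look like (illustrative names, assuming positions already live in a NativeArray): an IJobParallelFor copies the simulation state into a snapshot buffer that the serializer can then consume.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Copies per-unit state into a flat snapshot buffer, in parallel.
struct GatherStateJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<Vector3> Positions; // source simulation state
    public NativeArray<Vector3> Snapshot;             // buffer handed to the serializer

    public void Execute(int index)
    {
        Snapshot[index] = Positions[index];
    }
}

// Usage: schedule in batches of 64, then Complete() before serializing.
// var handle = new GatherStateJob { Positions = positions, Snapshot = snapshot }
//                  .Schedule(positions.Length, 64);
```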

Perhaps restating what PhilSA said above, but the job system and ECS could allow (context-specific) packet optimization and compression/decompression algorithms to be developed that wouldn’t have been feasible before. So maybe less data will be passed after all.

Networking and serialization are just largely separate from what you do with the data once it’s deserialized.

Efficient serialization/compression and networking won’t change at all. The best approaches there are well known already and concurrency is completely orthogonal to the core problems there.

What you should really be focused on for networking/serialization is GC. Zero GC is possible there, and if you are going to spend time optimizing, that is by far the biggest bang-for-the-buck area.


FYI you can get zero-GC networking/serialization in 2018 via DotNetty along with protobuf-net combined with ArrayPool. Zero GC as in no per-message GC; you still have to allocate the pools up front.
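For concreteness, here is the ArrayPool part of that recipe (a minimal sketch; the transport and serializer calls are placeholders for DotNetty/protobuf-net): rent a buffer per message and return it when done, so steady-state traffic produces no per-message garbage.

```csharp
using System;
using System.Buffers;
using System.IO;

static class PooledSender
{
    // serialize is a placeholder for e.g. protobuf-net writing into the buffer.
    public static void Send(Stream transport, int payloadSize, Action<byte[]> serialize)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(payloadSize); // may return a larger array
        try
        {
            serialize(buffer);
            transport.Write(buffer, 0, payloadSize);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer); // back to the pool, no GC pressure
        }
    }
}
```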

I agree with @snacktime. GC pressure is the biggest problem, and you must focus on efficient buffer management and make the code less allocatey while working on networking stuff. Splitting logic across multiple threads/tasks/jobs will not save your application when the GC gets angry and kills performance.

Wasn’t there some mention of the new compiler technology working outside of the GC in the original video I think they mentioned something about the GC and the new system??

Found it…

From here: ETA on C# job system and new ECS?

More info here:

From my point of view, this job system and friends is reinventing the wheel to make it easier for Unity users to write code that runs in parallel. Personally, I don’t see a reason why the job system is better than the TPL, except that it works outside of Mono, which should be ditched anyway in favor of .NET Core. There’s no quantum mechanics here: with this job system, Unity simply gives you basic control over memory allocations, along with a bunch of headaches you will encounter during development. At the moment this system doesn’t offer the flexibility you have with the TPL, and judging by the reports I’m reading, it also introduces many new issues and bugs. Whether or not you write GC-friendly code is up to you, and I believe you don’t need such systems for that.

It’s not .Net …

It’s compiled to C++.

See post above yours.

I know that this system is written in C++. Can you be more specific about where I said that it’s a part of the .NET runtime? Please read my post above again.

Then you understand that compiled, vectorised, batched C++ code is significantly faster and more performant than .NET.

It’s just sounded like you were comparing the new job system with Tasks in .Net, or by native Tasks were you referring to another system?

To begin with, a language can’t be slow or fast; what can is the runtime, the JIT compiler, the interpreter, and so on. The performance of a system depends on the implementation of the technology. Using C++ instead of C# doesn’t magically make your code faster and more performant. I can write C# programs/libraries that outperform similar solutions written in C++, because I know how to utilize the power of the .NET platform using modern solutions (Roslyn is one of them).

Yes, I’m comparing it with the .NET TPL (not the Mono crap).

Unity games/applications will still work on Mono/IL2CPP platforms and Boehm GC has not gone anywhere. In the right hands, the job system will solve only some of the performance issues. It’s not a magic wand and it doesn’t solve the core problems. Maybe Burst will change something, time will tell.

Right, but it’s not .NET, so there’s no point comparing an apple to a giraffe?

As you wish.

If someone else (like a certain woodpecker above, from my ignore list) thinks that comparing them is like comparing an apple to a giraffe:

The overhead of using C# vs. compiled C++ is variable, but for most non-trivial examples C++ is faster (Benchmarks C# vs C++).
Also, aren’t they designed to do different jobs? Task is a multi-threaded asynchronous process system, while the C# Job System in Unity is designed to run vectorised, data-driven, batch-based operations quickly (like transform operations or raycasting).

One is general multi-threading; the other is game-oriented multi-threading.
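To make that contrast concrete, here is a toy side-by-side (both work bodies are placeholders): a TPL Task is ad-hoc work on the .NET thread pool, while a Unity job is a batch over flat data with scheduling and safety checks handled by the engine.

```csharp
using System.Threading.Tasks;
using Unity.Collections;
using Unity.Jobs;

// Job System: batched, data-oriented work on Unity's worker threads.
struct ScaleJob : IJobParallelFor
{
    public NativeArray<float> Values;
    public float Factor;

    public void Execute(int i)
    {
        Values[i] *= Factor;
    }
}

static class Comparison
{
    public static void Run(NativeArray<float> values)
    {
        // TPL: general-purpose asynchronous work, fire-and-wait.
        Task tplWork = Task.Run(() => System.Threading.Thread.Sleep(1));

        // Jobs: a batch over the whole array, split into chunks of 64.
        JobHandle handle = new ScaleJob { Values = values, Factor = 2f }
            .Schedule(values.Length, 64);

        handle.Complete();
        tplWork.Wait();
    }
}
```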

Actually, yes and no. I agree it’s not a magic wand; the performance will still depend on how you write your code, obviously. And on top of that, multi-threading is not good for everything, and neither is data-oriented design.

But don’t forget that if you stay on the managed side and you don’t open up the memory allocation on a transparent way to the C++ side (which Unity is written in) and to the C# side (which obviously your scripts are), you can’t really save the context switching. Which usually involves a bunch of unsafe code and boxing as well.

I see their point: they’re aiming at a lot of targets at once. They’re trying to give us a safe method for writing multi-threaded code, make it relatively easy to manage unmanaged memory, and compile our code to native code so it can run closer to the Unity core. And of course there is the enforcement of data-oriented design thinking.

Doing this much stuff at once, I think, is not bad. But of course we’ll see how they pull it off.


I think Joachim’s response to many of these complaints in another thread is worth reading, if you hadn’t already. Here’s one quote from it:


Synthetic benchmarks are one thing, and real-world applications are another. Benchmarks are good for gap-closing, but they don’t give a complete vision of what is happening under the hood of your game/application.

@ That’s right, thanks.

@IsaiahKelly This quote only tells me that the developers have made some progress on concurrency so far. As for race conditions, I’m not afraid of them, because I have full control over the source code in my projects and a powerful enough code analyzer that surfaces many other potential concurrency problems. Race conditions are just the tip of the iceberg in multi-threaded/asynchronous/parallel code.