Jobs *Any* way to access static data safely?

So, I just dug into the C# Jobs manuals and learned some things. Now I clearly see that this layer on top of sockets is not even close to true parallelism; it's an imitation. Beyond the fact that the programmer who wrote the C underlayer doesn't seem to know how the kernel works, whoever designed it hasn't made Unity's tech truly efficient either. It's just tons of pointless overhead that won't benefit the end user in any way. The whole implementation is overcomplicated and slows itself down, introducing latency into the whole flow without any sane reason.

On top of this, I see at least two security vulnerabilities that open the door even to a junior hacker who knows how to manipulate packets.

3 Likes

Rough roadmap:

  • Define protocol standards; any noticeable change means a new major version.

  • Investigate security; prevent tampering and replay attacks at the very least.

  • Reliability should sit at the low level for low latency; consider KCP.

  • Remove hardcoded scatter/gather from the transmission functions; pass it as a parameter, add context, and make the functions more agnostic.

  • Implement fibers; use fcontext_t for green concurrency. If Unity's jobs are already implemented in a similar way, then half of the work is already done.

  • Remove any dependency between the game loop, I/O, and worker threads. No shared data, no shared state, no scheduling from the hot path. Everything should work independently of everything else: a stopless flow/conveyor. Make it truly parallel.

  • Avoid heap allocations everywhere; use scalable concurrent pools with thread-local storage.

  • Try to avoid or minimize interoperability/cross-language overhead across layers and levels.

  • Organize a clear code structure, eliminate spaghetti.

  • Add a pipeline-like API for injecting custom functionality into the data flow; extend modularity and flexibility.

  • Implement high-level abstractions as modules for things related to synchronization, various game mechanics, and so on.

  • Use Span<T> for contiguous memory access and buffer manipulation at the high level. I don't know about JIT optimizations in Mono, but the latest version should support fast Span at least.

  • Stop working in the shadows; nobody wants to contribute to a repository that is two months out of date with a single branch. Organize a collaborative workflow and commit in real time.

  • Provide a code of conduct and contribution guides for external open-source developers.

  • Introduce two concepts, like the scriptable render pipelines but for networking: one lightweight for mobile platforms/small games and one advanced for desktop/servers.

  • Investigate sockets-related kernel implementations across platforms.

  • Integrate scalable event interfaces, epoll and kqueue for readiness-oriented I/O on Linux/UNIX/POSIX, and IOCP for completion-oriented I/O on Windows.

  • Implement a semi-real-world testbed using multiple machines and heavy-load/high-parallelism simulations. Use synthetic benchmarking for basic measurements along the way.

  • Stabilization, fixes, and gap-closing with continuous updates.

  • Maximize performance and optimize resource usage.
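To make the readiness-oriented I/O bullet concrete, here is a minimal sketch of the epoll side (Linux-only; kqueue and IOCP would need their own backends). The `wait_readable` helper is my own illustrative name, not anything from Unity's transport:

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Wait until `fd` is readable or `timeout_ms` elapses.
 * Returns 1 if readable, 0 on timeout, -1 on error.
 * A real event loop would keep one epoll instance alive and register
 * many sockets on it; this one-shot form just shows the three calls. */
int wait_readable(int fd, int timeout_ms) {
    int ep = epoll_create1(0);
    if (ep < 0) return -1;

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;   /* readiness for reading */
    ev.data.fd = fd;
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) {
        close(ep);
        return -1;
    }

    struct epoll_event out;
    int n = epoll_wait(ep, &out, 1, timeout_ms);
    close(ep);
    return n < 0 ? -1 : n;
}
```

The same loop shape (register interest, block, then read with non-blocking sockets) is what a single networking thread would run; IOCP inverts it by completing already-posted reads.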

14 Likes

If your business is not ready to make networking a first-class citizen in Unity, don’t try to create a parody.

3 Likes

Let me start off by saying that the overall plan for the Network Transport is to become a true citizen of the DOTS (Unity Data-Oriented Tech Stack) family. In order for us to do so we also need to experiment with code and flows. That means that not everything that is released will be in its final form the first time you see it. We chose a way forward that might not always be the most efficient way to do networking at the moment, but it gives us leverage to see and think about how the API should grow and thrive in the future. I agree that our transparency has been lacking over the last months, and I really hope we can do a better job at that: sharing what specific features we are working on and how we can move forward together.

That being said, I can say that a lot of the things mentioned as missing are coming. And I will make it my personal mission to make sure you are kept up to date with our progress, beginning next year.

Finally, to give you an idea of how we work: we first try to define the problem we want to solve. In the case of the Transport, our challenge is to create a networking library that supports the finite set of platforms we have in an efficient and effective manner while maintaining compatibility with both GameObjects and ECS. To do so, we identified that in order to get the most out of each platform we will most likely need a platform-specific solution in place. So we decided early on not to tackle the specific implementation for each platform before we fully understood that platform's characteristics. This way we can reach more platforms early, gather data for each one, see in what ways we can improve the flow for that specific platform, and get a feel for how the API should be laid out in order to support the different platform-specific implementations and quirks.

Our focus this iteration has been to make sure the flows around send/recv make sense and that the API itself feels fluent; this includes pipelines and moving work off the main thread (currently a work in progress). I hope we will be able to share this progress with you soon.

7 Likes

@MichalBUnity Thank you, and sorry for the aggressive tone of my text… Joe's post triggered me a bit ( @Joachim_Ante_1 , I love you and your hair, don't get me wrong). I understand that work is still in progress, and I would like to help/contribute, but a lot is still unclear. The repository seems abandoned, issues are just ignored, and there's no activity. Open up the development process to us; we are programmers too.

6 Likes

There are not enough skilled senior programmers on the market to solve network transport data systems well. And the ones who could don't want to join Unity's team, unfortunately, because of short-lived hype seasons, which is not good for long-term, secure employment. The solution you mentioned takes about 2.5-3 man-years to solve in an appropriate way. So most dev teams either start over with their own solution or enhance the existing Unity system.

You can be assured that if you're able to enhance the current Unity NTDS in a useful and correct way, the Unity net dev team will collaborate on it. Be friendly and courageous.

Where does that data come from? We aren't talking about solving everything at this early stage. However, do not claim that the repository is super efficient just because it utilizes "Jobs". Efficient networking is not just about jobifying. In fact, it's not efficient at all, and the whole network layer is simply marketing. The repository is two months out of date, and it's unclear what Unity is trying to solve when there are tons of efficient transport layers already.

My only requirement is not revisiting the HLAPI fiasco. I would rather have LLAPI only, plus code modules I can work with that are relevant to most games, so you would implement a proper clean approach using your only API (i.e. it's all one API, not LLAPI and HLAPI) and tell us clearly how to use it.

If it's not clean, direct, and one message from Unity (nobody needs 100 ways to skin a fish), then I'll probably just be safer going with Photon no matter what. It's simple with Photon: one way of doing it, and do it well. Works for all games.

UNet is basically best forgotten. Looking forward to the future, but try to err on the side of fewer options and more solid performance. I don't care that every programmer wants a different API. I care that, with networking, it works and my customers don't give my game one star because it screwed up, or because Unity mixed the API up so much that I had no choice but to screw it up myself.

It matters more with networking than with any other feature in Unity, because with networking at least 50% of the traffic is not on your own computer.

That's why simple, with less optional stuff, is better - ESPECIALLY for an API that is still in the design phase. Be authoritative, listen less to customers, and lead better. If you want an example of what customers already know and understand, use Photon for a few months. Otherwise, know that we need to be led firmly and clearly with networking.

Approaching networking development is not like any other area in Unity. It is the area with the most chance to ruin a project… or win it.

Unity needs to be assertive in this area and not be pulled in multiple directions. For example, the HDRP render pipeline is led well: it knows best, knows what specifically has worked in the field, and what continues to work in the field.

I want that leadership from Unity because Unity lost my trust with UNet.

7 Likes

I fully agree with you on this one. But I think it's more about ownership than leadership. I don't think there's anyone who cares enough about Unity itself. It's like everyone is saying whatever they want, and they don't seem to communicate with each other.

Marketing is always at full steam selling stuff, but the developers always miss the target (I'm not sure if they have any real roadmaps), and the users are always full of hope.
Seriously, I don't want to hear another marketing person start another "you are covered" campaign. Unity really needs to sit down and figure out how to make Unity a solid product at the core, I'm talking about as a game engine, instead of lurking into other markets. I think you know what I mean.

I think in the case of the network stack, I'd rather have Unity just copy the UE network stack's concepts and figure out how to implement them better using ECS and jobs.

I really wish Unity could do better than UE, but I have very low confidence that it can, so just borrow the UE concepts at a high level. I'm pretty certain that would be a safer strategy.

UE has the best network stack by far, and it has still improved a lot lately. If it can support ~100 players with low latency and high throughput, I'm sure it can handle many other types of games, except of course MMORPGs.

Network programming is probably the most difficult task, and one side benefit of borrowing the UE concepts is that it will make them easier to learn, as there are plenty of examples and documentation on how UE networking works. (I can share my experiences and point you to some materials.) It will also entice new UE developers over to Unity more easily.

Anyway, Unity needs to be more transparent about what's going on right now instead of saying "soon". "Soon" in Unity can mean years, and "early next year" can mean summer. Please don't let us down this time.
Thanks.

1 Like

I agree with you regarding networking; Unity has never done networking well so far. But in other areas I put it to you that Unity is leading. HDRP, VFX, ECS, Burst, Jobs - these are so good that (I can't name names) a pretty big AA studio is moving over to Unity because of ECS. They could have rolled their own C++ solution, but it's actually a bit too expensive for them to do so, and their staff is already familiar with Unity from personal projects. And since they can ask me about obscure info, it wasn't a big risk.

HDRP+VFX was the sweetener they needed to make the final decision, but ECS performance really is an industry talking point, especially as it's so easy to get to (relative to other solutions).

So I understand why Unity wants to make sure networking is ECS based. You also have deterministic mathematics on the way as well…

If they don't screw this up, Unreal's networking isn't going to beat this. However, it's a long way away I would imagine, and I'm sure nobody at Unity blames me for adopting a wait-and-see attitude towards the new networking.

3 Likes

The game industry generally has never done well when it comes to client/server architectures. Years back, when I was still working on game machines, I ran into the Improbable team, and we had some interesting conversations about how the game industry was literally a decade behind in this stuff.

So it’s not as easy for Unity to just bring in top people. It’s a niche area and on top of that some of the best people are outside the game industry. Even larger studios often don’t have real domain experts in this area. It really is a rather scarce resource in this industry.

With their purchase of that hosting company whose details I can't remember, I would think that long term they definitely plan on putting together an A team for this. At least it makes sense to me that they would.

I was a long-time Unreal user, so I'm biased; however, if there is one thing I want to steal from, it's their network stack. Theirs is light years ahead of everyone's, and it's battle proven. I wish Unity would really sit down and carefully study it before starting another unproven experimental project. I'm sure theirs is not without fault and there is room for improvement, and I also think ECS can grasp such opportunities.

When I looked at Unity ~10 years ago, I was so disappointed at how rudimentary its Editor was compared to UE back then. It still is, but what surprises me is that it's basically the same editor as 10 years ago. UE4 was built from scratch in the past few years, and it's already light years ahead of Unity in many areas. I'm so disheartened that simple yet fundamental workflow issues have not been fixed at all. That's why I think there is a lack of ownership (leadership).

The reason I came back was that I'm a big fan of C#, and I was so excited when I watched Unite Berlin to learn about ECS and their plan for the "Best Networking System". Rendering-wise, it's almost there, and I'm not worried about it. I thought the important missing piece (the network stack) would finally be there, so I convinced my partner to try Unity for their next project. I feel deeply responsible, and to my surprise I'm having so much trouble right now dealing with large assets. I never expected that to be a problem. Nothing else matters right now unless I know I can work with a large project. I hated UE so much for its long compile times, but right now I hate Unity so much for how long everything else takes. It loses all the advantage of having a faster compiler.

The Unity Editor is getting slower and slower each day. This is something I never experienced with Unreal. In UE, only the compile time gets longer with increasing project size, and that is normal. Clicking "Play" in UE is almost instant. I just found out that clicking "Play" in Unity reloads all assemblies every time, even when they are not used and haven't changed at all. I'm dumbfounded that it has been working that way all this time. It takes about 15 seconds to load a simple scene, and about 9 seconds to load an empty scene in my project. Yes, there are problems with asset initialization during the reload because some third-party assets do that, but why is it reloading all the assemblies each time? It shouldn't do that to begin with. Does Unity know about this problem? They probably do, but it's probably not an issue for them, because it adds only a fraction of a second when you haven't done any large project yourself; otherwise, it's impossible that they would have left it that way.

Before Unity talks about "Performance by Default", Unity really needs to make the editor "Performant by Default" first, so that it saves countless hours of our lives. Will they ever listen and do something about it? I don't know. It's so hard for them to admit the problems, and even when they do, it sits there for years and years. I remember Unity saying "Why do you need another GUI? There is no problem with IMGUI, you can do everything and anything with it." It took years for them to admit that there was a need for a new GUI, and it took about 5 years to deliver it. (As a side note, there is still a big problem with the Inspector, where it tries to redraw every time you scroll. Thanks to IMGUI again. There are custom assets causing a big slowdown that affects the general usability of the Editor. It took me a while to figure out why it's so slow, but there is very little I can do about it unless I rewrite someone else's asset. I hear it will support UIElements, but I think it's 10 years too late, and it will take years for everyone else to adopt it.)

Instead, they seem to promote stuff that's not so critical, such as dockable windows, consolidated preset menus, font types, and button backgrounds in the keynotes. To me it's laughable and not worthy of a keynote, just a one-liner in the patch notes. Sorry, but I had to be that guy and point it out.

What they should start promoting is a "Remove the Pain Points" or "Making Unity Easy to Use" campaign. There are literally many simple, easy, yet fundamental fixes lying around. Unity should gather them all in one basket (Trello?) and show users that they are removing them one by one, in real time. It would show that Unity cares about its users, and it would put me at ease if they did. If they had such a mentality, Unity would have been so much better by now.

Anyway, I'll stop my rant, but I can't help saying it because I care.

Wish the very best.

Cheers!

5 Likes

That's not true. I started with Quake 3 a long time ago, back in 1999, and its net code was very efficient from the start and was optimized step by step. That net code is still state of the art, and all game companies learned from Quake 3. The ability to run a negative timenudge thanks to deterministic prediction is outstanding.

There is only one guy I know personally who is able to solve problems in this area, but he's working at Dimension Data Austria. https://www.linkedin.com/in/david-hamann-a33437a8/

The game industry has solved some things well, things that were very specific to games, and most of it not really at the system-design level but in implementations of very specific, narrow problems. But if you look at the bigger picture, like their abstractions around concurrency and messaging, or even more basic principles like separation of concerns, they were at times literally a decade behind modern approaches.

It was the financial industry that was leading the way when it came to low-latency, CPU-cache-friendly designs. LMAX open-sourced the Disruptor. Aeron is a modern low-latency messaging framework, although it's fairly recent. Frameworks like Mina and later Netty iterated on designs that worked well for networking pipelines. The Scala team had a huge impact on how concurrency was handled, borrowing from the actor model and pushing reactive design via Akka.

There is so much good stuff the industry could have borrowed from. Recently they have been doing a better job, but for many years they failed miserably, and client/server design is an area where it really stands out.

2 Likes

Well, the Aeron project/library looks really interesting! In terms of "performance by default" it would be a beautiful match!

I've started to watch some videos by Martin Thompson:

C/Java project:

A C# port by a third-party company:

One of the biggest problems I encountered while working with high-throughput UDP systems is that the kernel, across platforms, is not really suited for high-performance multiplayer games. When you just start multiple workers to read/write datagrams in the traditional way, lock contention, memory allocations, inefficient queue management, route lookups, and tons of system calls in the kernel will make any well-designed system on top inefficient by default, relative to the number of concurrent connections. You don't really need a heavy multi-threaded environment to build a small and efficient networking system using the traditional ways of wrapping UDP sockets, because you can't scale in userspace while the underlayer, including the kernel, is not scalable. For small multiplayer games/small server instances that use traditional UDP approaches, a single networking thread for everything plus a few non-blocking queues, such as ring buffers, to deliver data to the main thread is more than enough. Everything else is just workers for high-level abstractions and sub-systems.
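To illustrate the "single networking thread plus non-blocking queues" setup, here is a minimal single-producer/single-consumer ring buffer sketch. The names and layout are mine, purely illustrative, not code from Unity or any particular library:

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_CAP 1024  /* must be a power of two for the index mask */

typedef struct {
    void *slots[RING_CAP];
    _Atomic uint32_t head;  /* advanced by the consumer (main thread) */
    _Atomic uint32_t tail;  /* advanced by the producer (net thread)  */
} spsc_ring;

/* Producer side (networking thread): returns 0 if the ring is full. */
int ring_push(spsc_ring *r, void *pkt) {
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_CAP) return 0;          /* full */
    r->slots[tail & (RING_CAP - 1)] = pkt;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}

/* Consumer side (main thread): returns NULL if the ring is empty. */
void *ring_pop(spsc_ring *r) {
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail) return 0;                     /* empty */
    void *pkt = r->slots[head & (RING_CAP - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return pkt;
}
```

With exactly one producer and one consumer, the acquire/release pairs are enough: no locks, no syscalls, and the indices only ever grow, so unsigned wraparound keeps `tail - head` correct.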

The only ways to bypass the kernel overhead and unleash the potential of a system that bets on high parallelism and low latency, using UDP for extremely high throughput, are:

  • Utilize the latest sockets-related I/O technologies built by the networking engineers of a particular platform.

  • Build a custom I/O framework using a direct network API (as a kernel module if you want).

Only then will you be able to scale in userspace:
[attached plot: plot-tx-clock-201109.jpg]
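As a concrete example of what the first option can mean in practice, Linux's recvmmsg() drains several datagrams in one system call, amortizing the per-call cost the paragraph above complains about. A sketch (Linux-specific; the batch size of 64 and the 1500-byte buffers are arbitrary choices of mine):

```c
#define _GNU_SOURCE        /* for recvmmsg() and struct mmsghdr */
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Read up to `max` datagrams from `fd` in a single system call.
 * Each payload lands in bufs[i]; returns the number of datagrams read,
 * or -1 (EAGAIN) if the socket is non-blocking and nothing is queued. */
int recv_batch(int fd, char bufs[][1500], int max) {
    struct mmsghdr msgs[64];
    struct iovec iovs[64];
    if (max > 64) max = 64;
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < max; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len  = 1500;
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    return recvmmsg(fd, msgs, max, MSG_DONTWAIT, NULL);
}
```

This only shrinks the syscall count; the queue management and route-lookup costs inside the kernel remain, which is why the bullet list above reaches for platform-specific I/O stacks or full kernel bypass.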

1 Like

I just looked into the source code of Aeron (the C implementation), and it's the same traditional UDP with vectored scatter/gather that you can find in Unity's repository, but Aeron's implementation is more correct, more agnostic, and way more complex. It should be great for IPC.
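For reference, the vectored scatter/gather in question boils down to sendmsg() with an iovec array: a header and a payload live in separate buffers, and the kernel gathers them into one datagram without an intermediate copy. A minimal sketch; the `pkt_header` layout and `send_framed` name are made up for illustration, not taken from Aeron or Unity:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <stdint.h>

/* Hypothetical wire header, purely for illustration. */
typedef struct { uint16_t type; uint16_t length; } pkt_header;

/* Gather header + payload into a single datagram on a connected socket. */
ssize_t send_framed(int fd, uint16_t type, const void *payload, uint16_t len) {
    pkt_header hdr = { type, len };
    struct iovec iov[2] = {
        { &hdr, sizeof hdr },       /* first chunk: the header   */
        { (void *)payload, len },   /* second chunk: the payload */
    };
    struct msghdr msg = {0};
    msg.msg_iov = iov;
    msg.msg_iovlen = 2;
    return sendmsg(fd, &msg, 0);
}
```

The "hardcoded scatter/gather" complaint earlier in the thread is about baking a fixed iovec layout like this into the transmit path instead of letting the caller supply the vector.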

Yeah, Aeron has a number of good ideas to borrow from, but it was obviously designed for server-to-server communication. And for games we almost always have some type of locality, or an easy way to partition things, so we don't need to handle the kind of volume Aeron was designed for on a single server.

As for Unity's networking, I don't really care if it scales. It's always going to be a small number of studios that work at scale (though there will be far more now, with realtime moving to mobile in a big way). But the job system and ECS are solving the really hard problems and give us a foundation we can do something with.

Right now even large AAA studios don't know how to scale realtime at mobile scale. The hard problems are really more around the instance/match-per-process model that has been common, and scaling database queries around that. It has huge inefficiencies, as studios like Epic are discovering. And I think it's just natural that the best solutions are going to grow out of studios who have the actual problem in front of them. Unity has that on the client side, but they are kind of a fish out of water when it comes to what the server side should look like.

On the OP topic:

Now that injection is going away, System Injection is no longer available as a workaround for accessing "static" data.

What other options do we have to access the same Native Collection in multiple systems (and their jobs)?

Specifically, I'm looking to use a NativeMultiHashMap as a lookup table. It's needed in at least two jobs.

Thanks!

I don't understand your question.