Jobs *Any* way to access static data safely?

Way back when this thread was about accessing static data in Jobs, recursive wrote this:

That would prevent you from needing to access static data, and would let you dispose of your Native containers in the System’s OnDestroyManager() function.

But that way also uses [Inject], which is being deprecated. Since it won’t be an option soon, what other solutions do we have for accessing a single native container across multiple System updates?

…Something tells me I’m missing an obvious, face-palm-worthy solution. But I’m not seeing it yet. Anyway! That was my question. Any clearer? Thank you for any help.

1 Like

I think he means injecting systems like this: ECS - The “Correct” way to handle complex shared data between systems . I’ve also been wondering if there is a currently accepted way to do system injection or use EntityCommandBuffer without [Inject]. I’d like to eliminate Inject, but it doesn’t seem possible to do so completely yet. I have a NativeHashMap that needs to be created from a list on a MonoBehaviour, and I’m not sure whether I should build it on the MonoBehaviour itself or create a system just for that and inject it. Neither option feels very clean to me.
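One pattern that avoids [Inject] entirely (a sketch only — the system name, the key/value types, and the source of the data are hypothetical, and depending on your Entities version OnCreateManager may take a capacity argument) is to build the container when the system is created and dispose it when the system is destroyed:

```csharp
using Unity.Collections;
using Unity.Entities;

// Sketch: owns a NativeHashMap for its whole lifetime, so other systems can
// reach it via World.GetOrCreateManager<LookupSystem>() instead of [Inject].
public class LookupSystem : ComponentSystem
{
    public NativeHashMap<int, float> Lookup;

    protected override void OnCreateManager()
    {
        Lookup = new NativeHashMap<int, float>(64, Allocator.Persistent);
        // Copy data in from your MonoBehaviour's list here, e.g.:
        // foreach (var entry in config.Entries) Lookup.TryAdd(entry.Id, entry.Value);
    }

    protected override void OnDestroyManager()
    {
        if (Lookup.IsCreated)
            Lookup.Dispose();
    }

    protected override void OnUpdate() { }
}
```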

To “inject” a system… you can also use:

MySystem mySystem = World.GetOrCreateManager<MySystem>();

The only thing we can’t access without injection is ComponentDataFromEntity / BufferFromEntity, as far as I know,

because those methods are internal to the EntityManager class…

2 Likes

Not true :slight_smile: There are GetComponentDataFromEntity/GetBufferFromEntity methods on the base system class. The versions in the EntityManager are not for public use and don’t register the correct reader/writer dependencies. You must use GetComponentDataFromEntity/GetBufferFromEntity from your ComponentSystem.
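In practice that looks something like this (a sketch under assumptions: “Health” is a hypothetical IComponentData, and the entity being targeted is left as a placeholder):

```csharp
using Unity.Entities;
using Unity.Jobs;

// Hypothetical component for illustration.
public struct Health : IComponentData
{
    public int Value;
}

public class DamageSystem : JobComponentSystem
{
    struct ApplyDamageJob : IJob
    {
        public ComponentDataFromEntity<Health> HealthFromEntity;
        public Entity Target;

        public void Execute()
        {
            var health = HealthFromEntity[Target];
            health.Value -= 1;
            HealthFromEntity[Target] = health;
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        var job = new ApplyDamageJob
        {
            // Fetched from the system (not the EntityManager), so the safety
            // system sets up the correct reader/writer dependency.
            HealthFromEntity = GetComponentDataFromEntity<Health>(false),
            Target = default // placeholder: some entity obtained elsewhere
        };
        return job.Schedule(inputDeps);
    }
}
```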

1 Like

I would guess the new singleton approach could give a nice workaround for that :slight_smile:

https://discussions.unity.com/t/704391

1 Like

That would be great! However, Unity’s ECS components must be blittable, so they can’t contain managed collections or native containers. So there are a few restrictions there. Luckily, Systems don’t have those.

In Unity’s Twin Stick Shooter sample project, some EntityArchetypes and Component types are stored as static members of the bootstrap class:

https://github.com/Unity-Technologies/EntityComponentSystemSamples/blob/master/Samples/Assets/TwoStickShooter/Pure/Scripts/TwoStickBootstrap.cs

public sealed class TwoStickBootstrap
{
    public static EntityArchetype PlayerArchetype;
    public static EntityArchetype BasicEnemyArchetype;
    public static EntityArchetype ShotSpawnArchetype;

    public static MeshInstanceRenderer PlayerLook;
    public static MeshInstanceRenderer PlayerShotLook;
    public static MeshInstanceRenderer EnemyShotLook;
    public static MeshInstanceRenderer EnemyLook;

    // continues...
}

Later, Systems access these archetypes to create new entities, add components, etc.

What should be made of this? I’ve been taking the warning of “don’t access static data from Jobs” quite literally, and looking for ways to avoid it in all cases - even the reading of static readonly data.

Have I been following this too strictly? What should be understood from the Twin Stick Shooter example?

Thanks for any advice.

Feb now, and GDC on the way. The beginning is already over.

You should pass an archetype to your job as a field.
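For example, instead of reading TwoStickBootstrap.ShotSpawnArchetype inside Execute, copy it onto the job on the main thread (a sketch — treat the exact EntityCommandBuffer usage as an assumption for this API era):

```csharp
using Unity.Entities;
using Unity.Jobs;

// Sketch: the archetype is a plain value-type field on the job,
// copied in during scheduling rather than read from a static.
struct SpawnShotJob : IJob
{
    public EntityCommandBuffer CommandBuffer;
    public EntityArchetype ShotArchetype;

    public void Execute()
    {
        CommandBuffer.CreateEntity(ShotArchetype);
    }
}

// At schedule time, on the main thread:
// var job = new SpawnShotJob
// {
//     CommandBuffer = barrier.CreateCommandBuffer(),
//     ShotArchetype = TwoStickBootstrap.ShotSpawnArchetype // static read happens here, not in the job
// };
```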

Even if it’s not December, it’s still “the beginning of the year” in Unity terms. ^^
They need to get their act together: not just eat their own dog food, but also eat their own words.

Seriously, GDC means they are preoccupied with the event, preparing the demo and PT, for a month before, and need a month to recoup after.
We have 5+ Unite events this year, and I’m really worried whether they will get anything done this year.

1 Like

This might interest you. https://discussions.unity.com/t/690838

I don’t understand the point of recreating everything from scratch once again. Look at SCTP, QUIC, or ENet: it takes well-organized teams years of shaping to produce mature message-oriented protocols encapsulated in UDP. On top of this, you need a good I/O model and several complex high-level abstractions to cover various game mechanics, which means general-purpose serialization, synchronization and security layers, plus interoperability with DOTS; the further you go, the deeper you enter the swamp. Be clear with people and just say openly “we need N years for…”, or be smart and grab a mature library like Gaffer’s yojimbo, rework it slightly, fix a couple of things, and voilà: you’ve got the transport you were building from the ground up. Focus on what your customers really need instead of doing redundant work that will lead you to the same well-known result.

3 Likes

Hijacking this back to the initial thread subject: I see there were replies telling me I went the wrong route using native C# threading. In my case, as of 6 months ago, I had no choice:

  1. Implementation took 25 hours less and was WAY cleaner; it removed hundreds of lines of ugly code dealing with native arrays and job packaging.
  2. Unity was still constantly emitting warnings that jobs cannot live longer than 4 frames. Still is?
  3. Main-thread performance was IMPROVED. (No need to copy the huge datasets that the job produced.)

So, like I said, Unity Jobs have their place, but in my case of:

  1. A huge data set, already thread-safe, with no reason to convert to NativeArrays.
  2. Singular, sequential, huge jobs 10+ seconds long.
  3. Needing to retain tons of data created in the thread/job.
  4. No Unity-specific requirements; this was all just pure data.

So you can say I may not be doing what Unity recommends, but in my case it was definitely the “right” way.

And if someone had kept me from learning the benefits and proper usage of native C# threads, I would be in a much sorrier state. All I said is that it’s a VALID OPTION for some narrow cases. And I stand by that.
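The shape of that approach, as I read it, is roughly this (a minimal sketch in plain C#, with a placeholder for the real long-running computation):

```csharp
using System.Threading;

// Sketch: a plain managed background thread for a single long (10s+) pure-data
// task, keeping the result as ordinary managed data with no NativeArray round-trip.
public class LongTaskRunner
{
    private Thread _worker;
    private volatile bool _done;

    public int[] Result; // stays managed; no copy into/out of native containers

    public void Start(int[] input)
    {
        _worker = new Thread(() =>
        {
            Result = Process(input); // long-running, Unity-API-free work
            _done = true;
        });
        _worker.IsBackground = true; // don't keep the process alive on quit
        _worker.Start();
    }

    public bool IsDone => _done;   // poll this from the main thread

    public void Join() => _worker.Join();

    private static int[] Process(int[] input)
    {
        // placeholder for the real 10+ second computation
        var output = new int[input.Length];
        for (int i = 0; i < input.Length; i++)
            output[i] = input[i] * 2;
        return output;
    }
}
```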

I’m not using C# jobs either, but what this system gives you over C# threads is the ability to safely and efficiently parallelize computations and formulate your code in terms of logical tasks, with out-of-the-box matching of parallelism to available resources, load balancing, and higher-level thinking. It is a perfect fit for fine-grained parallelism.

In terms of granularity, C# jobs have some limitations: for example, they are not suited for the continuous long-running I/O we use in coarse-grained systems like a network transport (see the “Networking: feedback and questions” thread, post #4313275), which shuttles packets/datagrams without any pauses, scheduling, or high-level input. The logic there is independent and self-driven, following the rules of sockets programming and the underlying socket implementation.

1 Like

As of a few versions ago, Burst supports reading static readonly managed arrays directly from your job. There are currently two main constraints:

  • These arrays must be declared static readonly and use plain primitive types (int, short, etc.) or structs with a simple constructor (a simple constructor: one that doesn’t throw any exceptions and simply sets its field members).
  • You should not write to these arrays from any part of your C# code that is not compiled by Burst, as Burst makes a read-only copy of these arrays at compile time.

When safety checks are on, the bounds checks are still emitted; when they are disabled, it is a direct memory access.

public struct MyJob : IJob
{
    private static readonly int[] MyConstants = new int[] { 1, 2, 3 };

    public void Execute()
    {
        // ...

        // read access with a constant index
        var value = MyConstants[1];

        // read access with a dynamic index
        int sum = 0;
        for (int i = 0; i < MyConstants.Length; i++)
        {
            sum += MyConstants[i];
        }

        // write access is forbidden:
        // MyConstants[0] = 5; // will fail to compile

        // ...
    }
}

This static readonly array is placed in a read-only section of your program and has a fixed address known at compile time. Since the array is tagged as entirely constant, accesses with constant indices should also be resolved to pure constants.

5 Likes

@xoofx good day. Sorry for the off-topic, but are there any plans to support Span<T>/ReadOnlySpan<T> with Burst?

Thank you.

3 Likes

I think the more common use case in real games of any complexity is shared logic using native containers, abstracted out separately from any specific ECS system/job. We have quite a bit of this in a complex multiplayer game, and it forces a lot of extra indirection when we want to use Burst.

This doesn’t really apply to ParallelFor-type jobs, of course; it’s IJob where grabbing an instance at the head of Execute would work well. In real games not every problem is embarrassingly parallel, and IJob is just the best practical approach.

Yes, although sharing data in a job system brings concurrency problems that can’t be solved at the Burst level (alone, at least). But in any case, static mutable data is a plague for software architecture design.

The challenge for us has mostly been spatial datasets. They are all read-only but used by multiple features. Statics aren’t really even an issue; I wasn’t implying statics were the right solution. The more difficult problem with shared data is sorting out what the job dependency chains should look like. The automatic handling when using Entities with a job component system is really nice. But the way the system is designed, you almost need that, because once you throw in something the job component systems can’t resolve automatically, it starts getting difficult to reason about.

Hey, I’ve been wondering about that. The way the job system computes dependencies could really be more decoupled from the ECS itself. I assume you are talking about something like an automatic dependency resolver? This is possible to do with some reflection magic. It’s just a quality-of-life kind of thing, but it is certainly possible. It may be a little costly, because an automatic system is going to call CombineDependencies() much more than hand-tuned code, but if your jobs are sufficiently “fat”, that will offset the cost. I even have an inspector for this; it’s quite nice:

I didn’t have time to make an actual job graph yet; that would be much better for visualisation of actual dependencies (this just shows which job writes/reads which container).
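The core of such a resolver can be sketched like this (names are hypothetical; this just tracks the last JobHandle that touched each container and combines handles before scheduling the next job that uses them):

```csharp
using System.Collections.Generic;
using Unity.Jobs;

// Sketch: per-container dependency bookkeeping, the kind of thing the
// JobComponentSystem does automatically for component types.
public class ContainerDependencyTracker
{
    private readonly Dictionary<object, JobHandle> _lastHandle =
        new Dictionary<object, JobHandle>();

    // Combine the handles of every container the new job will touch.
    public JobHandle GetDependency(params object[] containers)
    {
        JobHandle combined = default;
        foreach (var container in containers)
        {
            if (_lastHandle.TryGetValue(container, out var handle))
                combined = JobHandle.CombineDependencies(combined, handle);
        }
        return combined;
    }

    // After scheduling, record the new handle as the dependency for those containers.
    public void Register(JobHandle handle, params object[] containers)
    {
        foreach (var container in containers)
            _lastHandle[container] = handle;
    }
}
```

Reflection could fill in the `containers` lists automatically by scanning job fields, which is where the extra CombineDependencies() calls (and the cost mentioned above) come from.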

3 Likes