How can I find out, limit, or plan the number of parallel IJob* batches?

Hello.

  1. Do I understand correctly that “IJobParallelFor.Schedule ( 1000, 100, inputDeps );” will always be split up and executed as 10 jobs?

  2. And how can I write to a component marked [WriteOnly] if I first need to read it? Example:

Manual iteration

struct ParallelFor : IJobParallelFor {

    [DeallocateOnJobCompletion] [NativeDisableParallelForRestrictionAttribute]
    public NativeArray<ArchetypeChunk> Chunks;

    [WriteOnly]
    public ArchetypeChunkComponentType<Translation> TypeTranslation;

    public void Execute ( int i ) {

        var chunk = Chunks [ i ];
        var chunkTranslation = chunk.GetNativeArray ( TypeTranslation );

        for ( var j = 0 ; j < chunk.Count; j++ ) {

            //Read, modify, write. How can I write without reading first?
            var tempStruct = chunkTranslation [ j ];
            tempStruct.Value = newValue;
            chunkTranslation [ j ] = tempStruct;
        }
    }
}
  1. 10 batches, not necessarily 10 jobs, but I think you get the idea.
  2. You don’t need the WriteOnly attribute here. In your case you only read from the value you then write back to, which makes what you are doing thread-safe.

That was just an example; the real code is more complex. Funny that “Manual iteration” turned out to be the easiest for me. :)

Thanks.

Is it possible to add the same component to one entity an unlimited number of times?

        entityManager.AddComponentData ( instance, new MyComponentData{ Value = 0f } );
        entityManager.AddComponentData ( instance, new MyComponentData{ Value = 0f } );

You should create an array or list.
Too many components will make the GameObject look very messy.

You can’t have more than one component on an Entity. Use Dynamic Buffers instead.
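A minimal sketch of the dynamic-buffer approach, assuming the Unity.Entities API used elsewhere in this thread; `MyBufferElement` is an illustrative name, not from the original code:

```csharp
// Illustrative element type; any struct implementing IBufferElementData works.
public struct MyBufferElement : IBufferElementData {
    public float Value;
}

// Instead of adding MyComponentData twice, add one buffer and append values:
DynamicBuffer<MyBufferElement> buffer = entityManager.AddBuffer<MyBufferElement> ( instance );
buffer.Add ( new MyBufferElement { Value = 0f } );
buffer.Add ( new MyBufferElement { Value = 0f } ); // repeat as many times as needed
```

Each buffer entry then plays the role of one copy of the duplicated component.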


Valid point. I forgot, I am on the DODS forum :stuck_out_tongue:

  1. When iterating sequentially over chunks (manual iteration or IJobChunk), if I also work with a separately created NativeArray, will the CPU have to jump to a different area of memory than the one the chunk iteration is walking through (i.e., break the sequential access pattern over the chunks)?
  2. Is there a way to reorganize the chunks a query returns into specific jobs, so as not to jump around in memory, or does it not matter at all? For example: 10 chunks of 2000 entities with a heap of components in use, plus a created array of 40000 elements. How do they interact in memory while the job runs?

Side note: there is IJobChunk for iterating over chunks, instead of using IJobParallelFor.

Also, you can pass a query (instead of a system) to IJobForEach and avoid having to use ArchetypeChunkComponentType.

Basically you should only need IJobParallelFor for specialized algorithms. Most types of general entity manipulation can be done in other job types for simplicity.
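For comparison, here is the earlier Translation example rewritten as an IJobChunk sketch, assuming the Entities 0.x API used in this thread; `NewValue` is an illustrative field, not from the original code:

```csharp
struct WriteTranslationJob : IJobChunk {
    public ArchetypeChunkComponentType<Translation> TypeTranslation;
    public float NewValue; // illustrative value to write

    public void Execute ( ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex ) {
        // The chunk is handed to you directly; no NativeArray<ArchetypeChunk> needed.
        var translations = chunk.GetNativeArray ( TypeTranslation );
        for ( var j = 0 ; j < chunk.Count; j++ ) {
            translations [ j ] = new Translation { Value = NewValue };
        }
    }
}
```

It would be scheduled with `job.Schedule ( query, inputDeps )`, where the query selects the entities that have Translation.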


I use “[ReadOnly] public NativeArray<ArchetypeChunk> Chunks;” to iterate over all components in the inner loop. Will this NativeArray of chunks add extra overhead in IJobChunk?
An example to illustrate:

//Standard representation
    for ( int i = 0 ; i < iMax; i++ ) {
       for ( int j = 0 ; j < jMax; j++ ) {
        }
    }

//Manual iteration
[ReadOnly] public NativeArray<ArchetypeChunk> Chunks;
Execute ( int i ) {                                                          //index chunk
    for ( int ii = 0 ; ii < Chunks [ i ].Count; ii++ ) {           //elements in chunk

       for ( int j = 0 ; j < Chunks.Length; j++ ) {               //index chunk
           for ( int jj = 0 ; jj < Chunks [ j ].Count; jj++ ) {    //elements in chunk

            }
        }

    }
}

//With IJobChunk, will "[ReadOnly] public NativeArray<ArchetypeChunk> Chunks;" add extra overhead? (I may have written this incorrectly.)
[ReadOnly] public NativeArray<ArchetypeChunk> Chunks;
Execute ( ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex ) {
    for ( int ii = 0 ; ii < chunk.Count; ii++ ) {                                  //elements in chunk

       for ( int j = 0 ; j < Chunks.Length; j++ ) {                                           //index chunk
           for ( int jj = 0 ; jj < Chunks [ j ].Count; jj++ ) {                                //elements in chunk

            }

        }

    }
}

That works out to roughly 1000000 - 4000000 iterations,
so I am concerned about memory-access performance.

Is access over chunks, with their well-laid-out elements, the fastest option?
Will writing to the NativeArray in the inner loop spoil everything?

CPUs have multiple data streams and can cache many different locations in memory at once. Modern CPUs have very sophisticated data-fetching units along with 4-way or 8-way set-associative caches, so you can have multiple independent arrays running at the same time without performance issues. The best thing to do is profile a couple of different approaches.

Personally, I would use To/FromComponentDataArray for your use case as it avoids issues iterating over sparse chunks and makes the code simpler.
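A sketch of the To/FromComponentDataArray round trip suggested above; the query creation and the particular modification are assumed for illustration:

```csharp
// Pull the component values of all matching entities into one flat array.
var query = entityManager.CreateEntityQuery ( typeof ( Translation ) );
var translations = query.ToComponentDataArray<Translation> ( Allocator.TempJob );

// Process the flat array (here on the main thread; a job works the same way).
for ( var i = 0 ; i < translations.Length; i++ ) {
    var t = translations [ i ];
    t.Value.y += 1f; // illustrative modification
    translations [ i ] = t;
}

// Copy the modified values back to the entities the query matches, then clean up.
query.CopyFromComponentDataArray ( translations );
translations.Dispose ();
```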

Many thanks to you.

  1. Forgive the silly question.
    var nativeArray = entityQuery.ToComponentDataArray<MyComponentData> ( Allocator.TempJob );
    How is FromComponentDataArray applied?
  2. Approximately how many such arrays can I have?
    (It turns out I will already have eight.)

1b) dataQuery.CopyFromComponentDataArray(dataArray, inputDeps);
2) Profile it on your target device. Performance is not something easily generalizable.