Hello. Forgive me for a silly question.
I tried to do it with the chunk system, but could not figure out how.
What is the best way to do it (using the parallelism of the Job System and ECS)?
If either the first or the second is read-only, you make the outer loop the one that can be written to and dispatch that as an IJobParallelFor, IJobForEach, or IJobChunk. If both require writing, you need an intermediary data structure and a write-collapse mechanism to make it thread-safe.
No offense, Dreaming, as your answer is correct (I think), but if I didn't already have 6+ months of experience twisting my code to work with Burst, I would not have a clue what a single word of that explanation means.
In much simpler terms:
Alexandr, you can have a for loop inside a for loop, except only one of those loops can be “jobified” using IJobParallelFor/ForEach/Chunk. A job cannot be scheduled (started) from within another job, but a for loop can either live inside a job struct (in its Execute method) or be used to schedule multiple jobs (highly not recommended).
A job version of your two for loops would be:
var jobHandle = new ParallelJob
{
    JobMax = iMax
}.Schedule(iMax, 1);
[BurstCompile] // Never forget!
public struct ParallelJob : IJobParallelFor
{
    // [ReadOnly] or [WriteOnly] public NativeArray<Blittable> here!
    // Jobs can take plain blittable fields as [ReadOnly] input, but output must be
    // "wrapped" in a NativeArray<int> (for example) for [WriteOnly], even if it's only of length 1.
    [ReadOnly] public int JobMax;

    public void Execute(int index)
    {
        for (var iSecond = 0; iSecond < JobMax; iSecond++)
        {
            // Do stuff
        }
    }
}
This also doesn't do anything, as there is no NativeArray being read or written by the job, but it is a literal job translation of your two for loops.
Is there a way for the second loop to continue the search from the position established by the first loop?
Perhaps:
public void Execute(int index)
{
    for (var iSecond = index; iSecond < JobMax; iSecond++)
    {
        // Do stuff
    }
}
What other options are there for making these loops work efficiently?
4) Is it possible to start jobs (for example IJobParallelFor) from some position among all the available components rather than from the very beginning? (Or does that already happen with the parallel threads?)
Can that only be done by means of a NativeArray?
And how do I find all the components and pass them along through a NativeArray?
Or how do I pass along all existing components of some type, for example Translation?
If first and second are the same dataset, that is a valid strategy.
Knock knock?
O(n log n) algorithms? Further optimizations are going to be problem-specific.
You can safely access any index in an array if it is [ReadOnly]. You can also schedule an IJobParallelFor with any count. But writing results has stricter rules.
You can only write to containers that provide parallel safety. NativeArrays provide that. There are a few others.
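For illustration, a minimal untested sketch of that rule with made-up names: the [ReadOnly] input can be read at any index, but by default each Execute(index) may only write to its own slot of the output array.
// using Unity.Burst; using Unity.Collections; using Unity.Jobs;
[BurstCompile]
public struct SafeWriteExampleJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float> Input; // free-ranging reads are allowed on [ReadOnly] data
    public NativeArray<float> Results;          // writes are restricted to the current index

    public void Execute(int index)
    {
        var sum = 0f;
        for (var i = 0; i < Input.Length; i++)  // reading any index is fine
            sum += Input[i];
        Results[index] = sum - Input[index];    // single write to this iteration's own slot
    }
}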
ToComponentDataArray
FromComponentDataArray? I don’t think I understand the question.
What is the difference between ToComponentDataArray and EntityQuery?
How do I get a NativeArray by means of an EntityQuery? The documentation says: “After creating an EntityQuery, you can get a NativeArray containing all the selected entities.”
Translation was only an example of a component.
But the general gist is right: compare everything with everything, and make changes where necessary.
It seems to me that somewhere in the depths of Unity there is something similar for physics. It would be great to have a way to hook into that loop (add my code to it instead of re-creating my own search over the elements…).
Ok, that makes it clearer. As usual, there are many solutions to one problem. Depending on where you want to write the results of that comparison to, I suggest:
put the things you want to compare to each other into an array via .ToComponentDataArray
if you write to the same entities:
IJobForEach (outer loop) → to directly write to the outer-loop entity
use the array for the inner loop (i.e. comparing against the rest; you can shortcut the loop as you suggested by adjusting the start index based on the outer loop, i.e. no double comparison) – a rough sketch follows below
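Something like this might be the shape of it (untested sketch in the Entities 0.x JobComponentSystem style; CompareTranslationsJob is an invented name, and it assumes the query iteration order matches the ToComponentDataArray order):
// using Unity.Burst; using Unity.Collections; using Unity.Entities; using Unity.Jobs; using Unity.Transforms;
[BurstCompile]
struct CompareTranslationsJob : IJobForEachWithEntity<Translation>
{
    // Snapshot of every Translation in the query, for the inner loop.
    [ReadOnly, DeallocateOnJobCompletion] public NativeArray<Translation> AllTranslations;

    public void Execute(Entity entity, int index, ref Translation translation)
    {
        // Start at index + 1 to avoid comparing the same pair twice (the shortcut discussed above).
        for (var i = index + 1; i < AllTranslations.Length; i++)
        {
            // compare translation.Value with AllTranslations[i].Value and
            // write the result back into 'translation' if needed
        }
    }
}

// In a JobComponentSystem:
protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    var query = GetEntityQuery(ComponentType.ReadOnly<Translation>());
    return new CompareTranslationsJob
    {
        AllTranslations = query.ToComponentDataArray<Translation>(Allocator.TempJob)
    }.Schedule(this, inputDeps);
}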
It is difficult for me to understand from words alone exactly how it needs to be written, without concrete examples.
If it is not too difficult, could you give both examples with the Translation component? That would make it much clearer. Thanks a lot! Sorry for the inconvenience.
What you ask for is really basic, and if you spend a moment trying to figure it out, I am sure you will.
I attach something for you, which is different from what you asked, but somewhat similar. It's an example of two groups of players (spheres), say Group Red and Group Blue, each targeting the closest enemy of the opposing group. You can ignore the visualization (debug lines via line mesh).
IJobForEach, IJobChunk work in parallel (per chunk) unless you schedule them with ScheduleSingle
IJobParallelFor works in parallel obviously, at a level you control
IJob runs on a single thread (you can schedule a few of those to run in parallel though, if they do not depend on each other – that's pretty much what the other job types do, I think) – a small scheduling sketch follows below
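For example (untested sketch, invented MoveJob, JobComponentSystem style), the same IJobForEach can be dispatched wide or on a single worker just by which schedule call you pick:
// using Unity.Burst; using Unity.Entities; using Unity.Jobs; using Unity.Transforms;
[BurstCompile]
struct MoveJob : IJobForEach<Translation>
{
    public float DeltaTime;

    public void Execute(ref Translation translation)
    {
        translation.Value.y += DeltaTime; // trivial per-entity work
    }
}

protected override JobHandle OnUpdate(JobHandle inputDeps)
{
    var job = new MoveJob { DeltaTime = UnityEngine.Time.deltaTime };
    return job.Schedule(this, inputDeps);          // parallel, per chunk
    // return job.ScheduleSingle(this, inputDeps); // same job, one worker thread
}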
I hope this is explained in the manual, check it out…
No idea if it is relevant to your problem. Need to know more about your problem.
ToComponentDataArray is a member method of EntityQuery.
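In other words (allocator choice just for illustration), the query selects the entities, and ToComponentDataArray copies one component type out of that selection:
// using Unity.Collections; using Unity.Entities; using Unity.Transforms;
// Inside a system:
var query = GetEntityQuery(ComponentType.ReadOnly<Translation>());
NativeArray<Entity> entities = query.ToEntityArray(Allocator.TempJob);                               // the selected entities themselves
NativeArray<Translation> translations = query.ToComponentDataArray<Translation>(Allocator.TempJob); // one component per selected entity
// Pass these to a job as [ReadOnly] fields; dispose them afterwards
// (or mark the job fields [DeallocateOnJobCompletion]).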
That’s a race condition. Multiple threads could be writing to the same outNativeArray[iSecond] with the same iSecond value at the same time. You’ll need either islanding or a multi-producer single-consumer buffer with order-independent result processing. Not a trivial problem to solve.
ComponentDataFromEntity is probably what you want.
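Roughly like this (untested; the job, field, and array names are invented), a job can look up a component on any Entity it holds a reference to:
// using Unity.Burst; using Unity.Collections; using Unity.Entities; using Unity.Jobs;
// using Unity.Mathematics; using Unity.Transforms;
[BurstCompile]
public struct ReadOtherTranslationsJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<Entity> Targets;                                // which entities to look at
    [ReadOnly] public ComponentDataFromEntity<Translation> TranslationFromEntity; // random access by Entity

    public void Execute(int index)
    {
        float3 otherPos = TranslationFromEntity[Targets[index]].Value;
        // ... use otherPos
    }
}
// On the main thread: TranslationFromEntity = GetComponentDataFromEntity<Translation>(true)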
Impossible to know without knowing your desired inputs and outputs.
See 3 and 5 for why I am hesitant to answer this.
sngdan got it right. There’s a few more useful ones like IJobParallelForDeferred, IJobParallelForBatch, IJobParallelForFilter, and IJobForEachWithEntity to name a few more.
Data is structured in ECS. Whether or not your job takes advantage of it is a different story.
Depends on what you actually need. In Data Oriented Design, you typically design for a very specific problem and not a general use case. What is best depends on your data, what you want to do with it, when, and any other bottlenecks you are trying to work around.
This is kind of related to a question I asked before; I hope people don't mind if I post it here. I'm curious what people here would think and how they would make this into a job.
foreach (GameObject gobject in BusinessObjects)
{
    currentBusinesscript = gobject.GetComponent<BusinessInfo>();
    GameObject[] people = currentBusinesscript.EmployeesInd;
    foreach (GameObject dodobject in people)
    {
        currentBusinesscript.PaperMoneyHeld -= GlobalWages;
        dodobject.GetComponent<PersonInfo>().PaperMoneyheld += GlobalWages;
    }
}
I have around 100 entities, each with roughly 0 to 100 entities in an array/dynamic buffer. I did wonder if you could do the whole lot in a single job. I thought about putting the business entities into one NativeArray and all of the employee entities (about 2000) into another, and then iterating through them with IJobParallelFor or something like that, but I don't think they would match up.
Try to make it an IJob or an IJobForEach using ScheduleSingle first. Given your numbers, I suspect that the performance boost from just that might be enough for you. Once you get to that point, if you still want to make it run wide, come back with the single-threaded version and we can give you much better opinions and suggestions.
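For what it's worth, here is a rough untested sketch of that single-threaded version, assuming the businesses and people are already entities; BusinessMoney, PersonMoney, and the Employee buffer element are invented stand-ins for your components:
// using Unity.Burst; using Unity.Collections; using Unity.Entities; using Unity.Jobs;
// Assumed (invented) data layout:
//   struct BusinessMoney : IComponentData     { public float Value; }
//   struct PersonMoney   : IComponentData     { public float Value; }
//   struct Employee      : IBufferElementData { public Entity Person; }
[BurstCompile]
struct PayWagesJob : IJob
{
    public float GlobalWages;
    [ReadOnly] public NativeArray<Entity> Businesses;                 // all business entities (e.g. from ToEntityArray)
    [ReadOnly] public BufferFromEntity<Employee> EmployeesFromEntity; // DynamicBuffer<Employee> per business
    public ComponentDataFromEntity<BusinessMoney> BusinessMoneyFromEntity;
    public ComponentDataFromEntity<PersonMoney> PersonMoneyFromEntity;

    public void Execute()
    {
        for (var b = 0; b < Businesses.Length; b++)
        {
            var business = Businesses[b];
            var employees = EmployeesFromEntity[business];
            var businessMoney = BusinessMoneyFromEntity[business];
            for (var e = 0; e < employees.Length; e++)
            {
                businessMoney.Value -= GlobalWages;
                var personMoney = PersonMoneyFromEntity[employees[e].Person];
                personMoney.Value += GlobalWages;
                PersonMoneyFromEntity[employees[e].Person] = personMoney;
            }
            BusinessMoneyFromEntity[business] = businessMoney;
        }
    }
}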
Many thanks, a lot of things became clearer. Especially the presence of parallelism everywhere (unless single-threading is specified), except for IJob, or when one IJob depends on another and therefore only one runs at a time.
Sorry, I meant the syntax. (The principle, not the solution to my specific problem.)
And how is this problem solved in the physics engine? (There, too, everything has to be compared against everything else to detect collisions.)
I.
MonoBehaviour
for (iFirst = 0; iFirst < iMax; iFirst++) {
    for (iSecond = 0; iSecond < iMax; iSecond++) {
    }
}
II.
IJob
for (iFirst = 0; iFirst < iMax; iFirst++) {
    for (iSecond = 0; iSecond < iMax; iSecond++) {
    }
}
III.
IJobForEach
for (iSecond = 0; iSecond < iMax; iSecond++) {
}
Is “I” slower than “II” and “III”?
Are “II” and “III” equivalent, or will “III” be quicker?
Unity provides an API to create custom job types. ECS uses this API to create job types that can iterate over Entities. You can use IJob and IJobParallelFor and others to do work on other data structures, or to do extremely manual chunk iteration if you need something that custom.
Yes. It is the main thread. The job equivalent is IJobForEach.
Syntax is how to type the characters to use a tool, but I don’t even know which tool you can use to solve your problem since right now your problem cannot be made parallel without an algorithm redesign or added constraints.
You want collision detection? For how many elements? If that number is large you might want to look up a few different Broadphase algorithms. Fixed Grid, Multibox Pruning, and BVH are the three main ones. Maybe Unity.Physics solves your problem? Otherwise I would forget about writing the algorithm in parallel and use an IJob first to familiarize yourself with the job system. You can use an IJobForEachWithEntity to write your inputs into a NativeArray and an IJob with a ComponentDataFromEntity to write your results back to the Entities.
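As a very rough, untested illustration of that two-step shape (names invented; the gather arrays need [NativeDisableParallelForRestriction] because each entity writes a unique index that the safety system cannot prove on its own):
// using Unity.Burst; using Unity.Collections; using Unity.Entities; using Unity.Jobs;
// using Unity.Mathematics; using Unity.Transforms;

// Step 1: gather the inputs (entity + position) into plain arrays.
[BurstCompile]
struct GatherPositionsJob : IJobForEachWithEntity<Translation>
{
    [NativeDisableParallelForRestriction] public NativeArray<Entity> Entities;  // each entity writes its own unique index
    [NativeDisableParallelForRestriction] public NativeArray<float3> Positions;

    public void Execute(Entity entity, int index, [ReadOnly] ref Translation translation)
    {
        Entities[index] = entity;
        Positions[index] = translation.Value;
    }
}

// Step 2: single-threaded pair comparison, writing results back by Entity.
[BurstCompile]
struct CompareAndWriteBackJob : IJob
{
    [ReadOnly, DeallocateOnJobCompletion] public NativeArray<Entity> Entities;
    [ReadOnly, DeallocateOnJobCompletion] public NativeArray<float3> Positions;
    public ComponentDataFromEntity<Translation> TranslationFromEntity;

    public void Execute()
    {
        for (var i = 0; i < Positions.Length; i++)
            for (var j = i + 1; j < Positions.Length; j++)
            {
                // compare Positions[i] and Positions[j]; when something needs to change:
                // TranslationFromEntity[Entities[i]] = new Translation { Value = ... };
            }
    }
}
// Allocate the two arrays with Allocator.TempJob using the query's entity count,
// schedule GatherPositionsJob, then schedule CompareAndWriteBackJob with the gather handle as its dependency.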
Usually it is slower, though there are edge cases. But if you can BurstCompile II or III, then I is definitely slower.
III can be quicker, but it really depends on what other work you can do in the frame. If there are other jobs that can be scheduled to run at the same time, the overall performance difference will be negligible, and II is significantly easier to get working. So I would start with II and get it working with Burst, since that will give you the biggest speedup. The rest can be optimized later if you need it.
This just came off the top of my head. It is just the algorithm; I didn't try it.
The idea is to merge the arrays into one array and then use it in jobs. You didn't mention anything about the array you are traversing, so I didn't know what the initialization should look like.
// Initialize.
// NOTE: You should use a NativeArray. I'm using a normal array for simplicity.
// These lines should run before scheduling the job.
int[] array = new int[iMax * 2];
for (int i = 0; i < iMax; i++)
{
    array[i] = array[i + iMax] = originalArray[i];
}
// This `for` should be an IJobParallelFor job.
// You need to run it iMax * iMax (iMax squared) times to cover all cases.
for (int i = 0; i < iMax * iMax; i++)
{
    // You can put these lines in one job
    int firstIdx = i / iMax;
    int secondIdx = iMax + i % iMax;
    // Then you can do operations on the array
    // Example:
    if (array[firstIdx] == array[secondIdx])
    {
        // Do something
    }
}
Encoding the algorithm’s iterators into a single iterator with the combination number does allow for better work distribution across threads, but it does not necessarily make the algorithm thread safe. In fact, now both firstIdx and secondIdx must index [ReadOnly] arrays or else you have a race condition.