Can such mechanics (herbivores eat plants) be reliably run in parallel?

Imagine you have plants and herbivore animals.

Let's consider a group of 5 animals close to each other.
Each searches for plants in reach.
It can happen that multiple animals find the same plant at the same time.
When an animal eats a plant, the plant's available resources decrease.
So, for example, there may be enough food resources for 2 animals, but not for 3 or more.

This all plays nicely when I run the mechanic in a single threaded job. Which is fine.

Now let's say I run the animals in an IJobChunk.
My understanding is that, when chunks run in parallel, an animal from each chunk may pick the same plant at the same time. That of course is the problem: I can run into a race condition where more animals eat than the plant can supply.

The way I can think of is using a parallel multi hash map to store which animal tried to eat which plant. At this point no actual resource consumption would occur, just checks and pairing. Note that not every animal will eat a plant in the same frame. I am not sure if that is the right application for parallel multi hash maps.

And even then, I would need to iterate through the multi hash map elements in a following single threaded job, to check and consume resources if available.

Another problem is that, as far as I am aware, clearing hash maps is not very performant, or else I need to allocate memory for the hash map. And I would need to do that every time I want to run the system.

Any other suggestions?

For this kind of thing I tend to use an event system.
In your job you create an event entity with a ComponentData like

public struct AnimalEatFoodEvent : IComponentData
{
    public Entity Animal;
    public Entity Plant;
}

Then in a simple ComponentSystem that runs on the main thread at the end of the frame, you collect all the events and do your logic, like removing resources from the plant, or giving them to the animal. Then you just destroy the event entity.

The nice thing with that is also that you don't modify any archetypes on the plants or animals. You just create one-frame entities.

I'm using the awesome GitHub - tertle/com.bovinelabs.entities: A collection of extensions, systems and jobs for Unity ECS. library to do that, which bypasses the slow command buffer and actually batches all the events at the end of the frame.


Thanks @Ziboo. This gives me something to think about.
I am checking the link as well.

Edit:
Just linking for reference: it is the same forum thread about events as in the GitHub repo.

It depends on what this system will be for. If you are working on an aux AI, you don't need to keep your data 100% accurate in every frame.
If I were making this (and didn't need every-frame precision), I would do the following:

  • The animals select a plant to eat regardless of each other; store a reference to the plant on the animal and references to the animals on the plant (the animals will prepare to move to the plant and eat in the subsequent frames)
  • a sorting job (animals have a strength or power or whatever attribute) runs on the plants selected by any animal; the job sorts the animals referenced on each plant by this power attribute DESC and stores which animals will be able to feed on it (if there is room for two animals and four selected this plant, the two most powerful will win)
  • the animal plant-picking job runs tests periodically and checks whether each animal is still in the allowed pool; otherwise it resets that animal's plant selection, and for those animals the selection process starts again

This has some very nice side effects: every animal which needs to feed will start to act immediately; in the meantime you have time to check whether everything is valid, and if not, the less powerful animals will be forced to alter their plans.

The con: it does not work for decisions within the same frame (fast simulation).
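The "most powerful wins" resolution step described above can be sketched in plain C#. Names like `Candidate`, `Power`, and `capacity` are illustrative placeholders, not from this thread; the ECS version would run this per plant over the animals referenced on it:

```csharp
using System.Collections.Generic;
using System.Linq;

public struct Candidate
{
    public int AnimalId;
    public float Power; // the "strength or power or whatever" attribute
}

public static class PlantClaimResolver
{
    // Returns the animals allowed to feed: the 'capacity' most powerful ones.
    public static List<int> Resolve(List<Candidate> candidates, int capacity)
    {
        return candidates
            .OrderByDescending(c => c.Power) // sort by power DESC
            .Take(capacity)                  // only as many as the plant can feed
            .Select(c => c.AnimalId)
            .ToList();
    }
}
```

Animals whose id is not in the returned list would have their plant selection reset, as in the third bullet above.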

Fortunately I don't strictly need to execute this behaviour in the same frame, so I can spread it out a bit.
While your concept sounds reasonable, my only concern is the following:

My trouble is: if I run animals in chunks, how do you propose to store the group of animals on the corresponding plant without a race condition?

Yeah, that's a problem.
I think NativeList allows parallel writing, so you would end up with something like a hand-made ECB. That means after the animals schedule their plants, you need to sync this back to the plants before you run the check jobs, and that part cannot be parallelized. So you need to do it at a sync point (end of frame?).

Yeah, I’ll think about this more at home. At work, my parallel brain processes are busy with work too. :smile:

No stressing out :slight_smile:

I keep coming back and looking into the event system which was proposed earlier. It apparently works somewhat similarly to an ECB, but allows working in Burst as well.

However, I haven’t yet fully wrapped my brain around the concept.


As far as I know, you can only do this reliably with some type of event system. The way I do it is, inside a bursted parallelized job, I can request an ‘eat’ event and put it into a hashmap. I then pass this hashmap to another parallelized but not bursted job which will pass along the event to a List or Queue of the relevant system, in your case it would be EatSystem. That system will then schedule a new and final job with a new hashmap containing all the requested events, which can be multiple per frame. This job will make sure that the plant you’re eating has enough ‘hp’ to be consumed and if not, simply ignore the event and move on. If it could be consumed, it would then request a new event to your animal system to receive whatever boost it should get from eating the plant.

Almost everything here is bursted and everything is parallelized. The only ‘catch’ is that this can be executed over more than one frame. This however doesn’t cause any race issues and everything works as it logically should, meaning that plants can’t be overconsumed and animals wouldn’t get more food than they should.

@Radu392 Since hash maps have now been mentioned a few times in this thread, I am thinking: since we now have GetKeyArray and GetValueArray on the native hash map, we can do something like the following.

Pseudo code …

Before passing the hash map into the job:

// Prepare the maximum size of the NativeHashMap.
var nhm_animalsTryingEat = new NativeHashMap<Entity, Entity>(totalNumberOfAnimals, Allocator.TempJob);

Assign animal-plant pairs in a parallel job with Burst.
Food check job for each animal:

// Here my parallel job ...
// Check if animal can potentially eat plant.
// If so, assign animal-plant pair to hash map.

After the parallel job is done, move the animal keys into a native array:

NativeArray<Entity> na_animalsTryingEat = nhm_animalsTryingEat.GetKeyArray(Allocator.TempJob);

Run this in a single threaded job with Burst.
Each filtered animal eats food if there is enough:

for (int i = 0; i < na_animalsTryingEat.Length; i++)
{
    Entity animalEntity = na_animalsTryingEat[i];
    Entity plantEntity;
    nhm_animalsTryingEat.TryGetValue(animalEntity, out plantEntity);
}

One thing: since
nhm_animalsTryingEat.ToConcurrent()
is deprecated, the correct way would be to use
nhm_animalsTryingEat.AsParallelWriter()
instead?

Does that sound at all reasonable?
Or would you still advise going with some event system?

Edit:
To avoid the need for sync points, possibly allocate the hash map with Allocator.Persistent.
Then dispose and reallocate, or just clear it, before each new search-food job is executed.

Then the eating-food job can be executed in the next frame.
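A minimal sketch of that persistent-allocation idea, reusing the names from the pseudocode above (assuming the Entities 0.x era API used in this thread; `maxAnimals` is a placeholder capacity):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public class SearchFoodSystem : JobComponentSystem
{
    private NativeHashMap<Entity, Entity> nhm_animalsTryingEat;
    private const int maxAnimals = 10000; // placeholder capacity

    protected override void OnCreate()
    {
        // Allocate once and reuse across frames.
        nhm_animalsTryingEat = new NativeHashMap<Entity, Entity>(maxAnimals, Allocator.Persistent);
    }

    protected override void OnDestroy()
    {
        if (nhm_animalsTryingEat.IsCreated)
            nhm_animalsTryingEat.Dispose();
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        // Clear instead of dispose + reallocate every frame.
        nhm_animalsTryingEat.Clear();
        // ... schedule the search-food job, writing through
        // nhm_animalsTryingEat.AsParallelWriter() ...
        return inputDeps;
    }
}
```

Whether Clear() is cheaper than reallocating is worth profiling; it at least avoids per-frame allocations.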

Here is an example:

// Event Component
public struct AnimalEatPlantEvent : IComponentData
{
    public Entity Animal;
    public Entity Plant;
}


//Job System
public class AnimalAiSystem : JobComponentSystem
{
     protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            //Your code ....
            var animalAIJob = new AnimalAiJob
            {
                EventQueue = World.Active.GetOrCreateSystem<EntityEventSystem>().CreateEventQueue<AnimalEatPlantEvent>().AsParallelWriter()
            };

            return animalAIJob.Schedule(this,inputDeps);
        }

}

//Job
[BurstCompile]
private struct AnimalAiJob : IJobForEachWithEntity<Animal>
{
    public NativeQueue<AnimalEatPlantEvent>.ParallelWriter EventQueue;

    public void Execute(Entity entity, int index, ref Animal animal)
    {
      
        //Your Ai that check if animal near a plant and wants to eat it...
        //if yes then Enqueue an Event
      
        this.EventQueue.Enqueue(new AnimalEatPlantEvent()
        {
            Animal = entity,
            Plant = myPlant
        });
      
    }
}

//Event Logic
public class AnimalEatPlantEventSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        //Here you are on the main thread. These events only live for one frame, because the queue is cleared by the system automatically at the end of the frame
      
        this.Entities.ForEach((Entity entity, ref AnimalEatPlantEvent animalEatPlantEvent) =>
        {
            var plant = animalEatPlantEvent.Plant;
            var animal = animalEatPlantEvent.Animal;
            if (this.EntityManager.Exists(plant) && this.EntityManager.Exists(animal))
            {
                var plantHealth = this.EntityManager.GetComponentData<Health>(plant);

                if (plantHealth.Value > 0)
                {
                    plantHealth.Value -= 1;
                    this.EntityManager.SetComponentData(plant, plantHealth);

                    var animalHealth = this.EntityManager.GetComponentData<Health>(animal);
                    animalHealth.Value += 1;
                    this.EntityManager.SetComponentData(animal, animalHealth);
                }
            }
        });
    }
}

@Ziboo this looks really nice and simple. Thanks a lot for sharing.

I just got a thought here.
When you run

this.Entities.ForEach((Entity entity, ref AnimalEatPlantEvent animalEatPlantEvent) =>
{

Does it have to use this.EntityManager inside?
I am asking because if ComponentDataFromEntity could be used instead, then Burst would be applicable to the job?

I never tried in a job… But I guess it’s doable.

I was doing it in a simple ComponentSystem because you're sure it's on the main thread, so you don't have any race condition.

Also, keep in mind that you can make this system run only when such events exist, using a query and RequireForUpdate.
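For reference, that could look something like this sketch (same era of the API, with `AnimalEatPlantEvent` as defined earlier in the thread):

```csharp
using Unity.Entities;

public class AnimalEatPlantEventSystem : ComponentSystem
{
    protected override void OnCreate()
    {
        // Skip OnUpdate entirely on frames where no event entities exist.
        RequireForUpdate(GetEntityQuery(typeof(AnimalEatPlantEvent)));
    }

    protected override void OnUpdate()
    {
        // ... consume the events as shown earlier ...
    }
}
```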

Also, we are talking about a one-frame system that executes on some entities.
Try it, but I'm pretty sure it will not blow up your framerate ^^

I know that with DOTS we all want to super-optimize the code, but remember: profile first, and optimize only if needed.

Maybe you could try ScheduleSingle to run the job on a single thread.
But I don't know if Burst will be useful at all in that case. Never tried.
If you do, let me know.

Yep. There is also IJob, which I think is sensible, to allow Burst.
My question was rather whether the event somehow forces EntityManager to be there.
If not, I am just pointing out a nice optimization possibility. Especially since there is such a massive difference between a bursted and a non-bursted job :slight_smile:

However, if this event job is burstable I will definitely give it a go. It looks more optimal than using a hash map.
Until now, I have been collecting all possible options for the given problem.

Of course, still open for other propositions :slight_smile:

I will play with the event system as soon as I can, and I will post the results.

Nope, the event doesn't force the EntityManager.

The system only stores events in a queue, batches them, and creates entities with this component. Then it destroys the entities at the end of the frame and clears the queue.

So my event system that Ziboo talked about above does work quite well here, especially if you want other systems to also act when a plant is eaten (play a sound, record stats, etc.).

However, if you don't intend to build your application modularly like this, and all of that would be handled in just a single system, there is a much more general pattern you can use (and one I use regularly) which will be more performant, as you don't need to create any entities. Just do the work over 2 jobs.

It ends up being pretty much exactly the same but instead of creating entities, you just pass what was the event queue directly between the 2 jobs, with the first one executing in parallel.

    public struct AnimalEatPlantEvent
    {
         public Entity Animal;
         public Entity Plant;
    }

    protected override JobHandle OnUpdate(JobHandle handle)
    {
        handle = new AnimalAiJob
            {
                EventQueue = this.queue.AsParallelWriter()
            }
            .Schedule(this,handle);
      
        handle = new EatPlantsJob
            {
                EventQueue = this.queue,
                Health = this.GetComponentDataFromEntity<Health>(),
            }
            .Schedule(handle);

        return handle;
    }
    [BurstCompile]
    private struct AnimalAiJob : IJobForEachWithEntity<Animal>
    {
        public NativeQueue<AnimalEatPlantEvent>.ParallelWriter EventQueue;
   
        public void Execute(Entity entity, int index, ref Animal animal)
        {
            //Your Ai that check if animal near a plant and wants to eat it...
            //if yes then Enqueue an Event
       
            this.EventQueue.Enqueue(new AnimalEatPlantEvent()
            {
                Animal = entity,
                Plant = myPlant
            });
       
        }
    }

    [BurstCompile]
    private struct EatPlantsJob : IJob
    {
        public NativeQueue<AnimalEatPlantEvent> EventQueue;

        // pass in component data
        public ComponentDataFromEntity<Health> Health;
      
        public void Execute()
        {
            while (EventQueue.TryDequeue(out var e))
            {
                var plantHealth = this.Health[e.Plant];
                // do the work here
            }
        }
    }

I would still consider using events with this, but instead I'd emit them from the second EatPlants job, for when plants are confirmed eaten. Then you can react to these events in, for example, an AudioSystem to play a sound or a StatSystem to record plants eaten, types, analytics etc., so you don't have to crowd your EatPlants job and AI system with all sorts of external systems.

It really just depends how you want to design your application.

I apologize if I come off as that guy who shows up late to the party and tells everyone they are wrong, but I do feel like you all are misunderstanding the ECS way of doing things for this particular problem.

Let's suppose you have a bunch of animals that go around searching for food. When they "see" food, they target it and move towards it. So the animals discover food, and move towards food. Those are straightforward ECS problems most of us have figured out. Now the next step is to make the animals devour the food, while also alerting everything about which food was devoured, and to try to do all of that in parallel. The problem is that you are trying to make the animals do all the work, while the food just disappears passively. That's an actor model.

So instead, let's look at our data from a different perspective. To help paint a mental picture, let's assume there is a goddess overseeing the entire food consumption. She is our "systems". When multiple animals try to consume the same food, she decides which animal earned the food, essentially writing the deserving animal's name on the food.

In practice what you would do is have a component or buffer on the food entities named “Claimed” or something and have an Entity field. Instead of looping over each animal, you loop over each food entity and collect the animals close enough to the food to eat it. Then you filter out for only the animals targeting that specific food entity. With those results, if you still have animal entities, you assign the entity references to the Claimed components. Afterwards, you run a loop over each animal and check the targeted food entity. The food’s Claimed component will either match the animal’s in which case the animal may eat it, or it may remain unclaimed (null entity) and the animal will continue to pursue it, or it may be claimed by another animal. In that final case, the current animal knows to give up, or as a bonus mechanic, it can see which animal “stole” its food and go attack that other animal.

This approach does not require any event system nor sync point, and can be completely parallelized. All I did was look at the problem from a different perspective and turned a write dependency into a read dependency.
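A rough sketch of the claim idea in the era's API. The component names `Claimed` and `Target` are mine, not from a library, and the claim-writing pass is only described in comments. The key point: each plant writes only its own `Claimed` component, and the animals then only read claims, so both passes can run in parallel:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;

public struct Claimed : IComponentData
{
    public Entity By; // Entity.Null means unclaimed
}

public struct Target : IComponentData
{
    public Entity Plant; // the plant this animal is pursuing
}

// Pass 1 (not shown): iterate the plants, gather the nearby animals
// targeting each plant, pick the winner, and write it into that plant's
// own Claimed.By. No write race: each plant writes only to itself.

// Pass 2: each animal reads the claim on its target. Read-only access,
// so this job can safely run fully in parallel.
[BurstCompile]
struct ReactToClaimJob : IJobForEachWithEntity<Target>
{
    [ReadOnly] public ComponentDataFromEntity<Claimed> Claims;

    public void Execute(Entity entity, int index, ref Target target)
    {
        var claim = Claims[target.Plant];
        if (claim.By == entity)
        {
            // We won the claim: eat the plant.
        }
        else if (claim.By == Entity.Null)
        {
            // Still unclaimed: keep pursuing.
        }
        else
        {
            // Another animal claimed it: give up, or go after claim.By.
        }
    }
}
```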


Btw @tertle thanks for your repo! Could you update the readme though? It seems outdated:

NativeQueue<T> EntityEventSystem.CreateEventQueue<T>(JobComponentSystem componentSystem)

CreateEventQueue() doesn’t take any parameter.

I’m stuck on something and can’t find the right solution.

Thanks

Thanks for the other point of view.
Will that modify archetypes, though?

That's mostly what I'm trying to avoid, since that is what slows ECS down the most.

This algorithm does not require any sync points, if that’s what you are asking.

You obviously need to make sure that the food has the Claimed components for the algorithm to work, but they don't need to be added and removed at runtime; their life-cycle is simply the life-cycle of the food. For recharging food sources (if that's what you need), you can have some "Recharge" value or use a bool in the Claimed component. Or if you go with dynamic buffers, a buffer with a length of zero could mean that there is no food left, and when the food replenishes, the buffer fills.