I just started updating my game toward the latest version of the ECS and Jobs systems.
So far, I was still on the preview 18 version, and I am currently updating to the preview 26 version (one step at a time; this should correspond to version 0.0.23 of the samples). This means getting rid of injection, which I used a lot in my game (since it was the old, easy way of doing things).
Also, until now, I did not use the Job system but wish to convert most of my game to it.
In my game, I have several events. When an event occurs, it is created as a separate entity with an event component attached to it to store some data about the event, especially the target entity (the entity that should be affected by the event).
For instance, a DamageEvent could look like this:
public struct DamageEvent : IComponentData
{
    public Entity Target;
    public int Damage;
}
Then I have a system that processes the DamageEvent components and applies damage to each Target entity that has a HealthData component. Note that the same entity can be affected by multiple DamageEvents in the same frame, and that a DamageEvent can target an entity with no HealthData component (in that case, nothing should happen). The ApplyDamageSystem looks like this:
public class ApplyDamageSystem : ComponentSystem
{
    public struct DamageGroup
    {
        public readonly int Length;
        [ReadOnly] public ComponentDataArray<DamageEvent> Damages;
    }

    [Inject] private DamageGroup group;
    [Inject] private ComponentDataFromEntity<HealthData> healths;

    protected override void OnUpdate()
    {
        for (int i = 0; i < group.Length; i++)
        {
            Entity entity = group.Damages[i].Target;
            int damage = group.Damages[i].Damage;
            if (healths.Exists(entity))
            {
                HealthData health = healths[entity];
                // Apply damage, clamping life at zero
                health.Life -= damage;
                if (health.Life < 0)
                    health.Life = 0;
                healths[entity] = health;
            }
        }
    }
}
As you can see, I made use of ComponentDataFromEntity to apply the damage to the target entity.
Could someone help me translate this system into an efficient jobified one?
One important thing here is that the same entity can be affected by several Damage events.
inputDeps = new DamageJob
{
}.Schedule(damagedQuery, inputDeps);
Make sure you're using JobComponentSystem, pass the inputDeps parameter of OnUpdate into the Schedule call, and return the resulting handle once you're done scheduling.
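Filling in that skeleton, a minimal sketch of the suggested shape might look like the following. Note the assumptions: the names (DamageJob, damagedQuery) are made up, and this shape only works if DamageEvent and HealthData live on the same entity, i.e. the query matches both components.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public class ApplyDamageSystem : JobComponentSystem
{
    private EntityQuery damagedQuery;

    protected override void OnCreate()
    {
        // Assumes damage and health sit on the same entity
        damagedQuery = GetEntityQuery(
            ComponentType.ReadOnly<DamageEvent>(),
            ComponentType.ReadWrite<HealthData>());
    }

    struct DamageJob : IJobForEach<DamageEvent, HealthData>
    {
        public void Execute([ReadOnly] ref DamageEvent damage, ref HealthData health)
        {
            health.Life -= damage.Damage;
            if (health.Life < 0)
                health.Life = 0;
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        inputDeps = new DamageJob().Schedule(damagedQuery, inputDeps);
        return inputDeps; // hand the dependency chain back to the system
    }
}
```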
Thank you for your fast reply, however this does not work in my case.
Indeed, the DamageEvent is not attached to its target entity (since multiple events can be targeting the same entity), so I cannot use a single archetype to get both the Health and the Damage components. This is why I used ComponentDataFromEntity before.
You could try to map each entity with a Health component to multiple DamageEvent components using NativeMultiHashMap<Entity,DamageEvent>. Something like
for each DamageEvent:
    map.Add(damageEvent.Target, damageEvent)
You would use an EntityQuery fetching all entities with the DamageEvent component.
Next, inside of an IJobForEachWithEntity, for each Execute(entity, index, health), iterate through the map using the entity as the key. For each value found (possibly none), apply the damage.
To see performance gains, use EntityQuery.ToComponentDataArray(Allocator, out JobHandle). This returns a NativeArray whose contents are only safe to touch once that JobHandle has completed, so have the map-filling job depend on that handle; that job will add each DamageEvent to the map. The IJobForEachWithEntity job will in turn depend on the map-filling job.
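Putting the pieces together, a sketch of the whole pipeline could look like this (names such as FillMapJob and damageMap are assumptions, and the exact API names, e.g. CalculateLength vs. later CalculateEntityCount, depend on your package version):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public class ApplyDamageSystem : JobComponentSystem
{
    private EntityQuery damageQuery;
    private NativeMultiHashMap<Entity, DamageEvent> damageMap;

    protected override void OnCreate()
    {
        damageQuery = GetEntityQuery(ComponentType.ReadOnly<DamageEvent>());
        damageMap = new NativeMultiHashMap<Entity, DamageEvent>(1024, Allocator.Persistent);
    }

    protected override void OnDestroy()
    {
        damageMap.Dispose();
    }

    // Single-threaded job copying every event into the map, keyed by target
    struct FillMapJob : IJob
    {
        [ReadOnly, DeallocateOnJobCompletion] public NativeArray<DamageEvent> Damages;
        public NativeMultiHashMap<Entity, DamageEvent> Map;

        public void Execute()
        {
            for (int i = 0; i < Damages.Length; i++)
                Map.Add(Damages[i].Target, Damages[i]);
        }
    }

    // Runs once per entity with a HealthData; drains all events keyed to it
    struct ApplyDamageJob : IJobForEachWithEntity<HealthData>
    {
        [ReadOnly] public NativeMultiHashMap<Entity, DamageEvent> Map;

        public void Execute(Entity entity, int index, ref HealthData health)
        {
            if (!Map.TryGetFirstValue(entity, out DamageEvent damage, out var it))
                return; // no event targets this entity
            do
            {
                health.Life -= damage.Damage;
                if (health.Life < 0)
                    health.Life = 0;
            } while (Map.TryGetNextValue(out damage, ref it));
        }
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        damageMap.Clear();
        int count = damageQuery.CalculateLength();
        if (damageMap.Capacity < count)
            damageMap.Capacity = count;

        // Async copy of the events; copyHandle guards the array's contents
        NativeArray<DamageEvent> damages =
            damageQuery.ToComponentDataArray<DamageEvent>(Allocator.TempJob, out JobHandle copyHandle);

        JobHandle fillHandle = new FillMapJob { Damages = damages, Map = damageMap }
            .Schedule(JobHandle.CombineDependencies(inputDeps, copyHandle));

        return new ApplyDamageJob { Map = damageMap }.Schedule(this, fillHandle);
    }
}
```

Keeping the map persistent and clearing it each frame avoids a per-frame allocation; the trade-off is the Capacity check on the main thread before scheduling.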
You're doing everything right in regards to using the ECS API. As for the algorithm, I'm really tempted to do a time complexity analysis. Your approach, for c cores, h HealthData and d DamageEvent, is O(hd/c). My approach, using the same input, is O(d) [IJobForEach filling up the NativeMultiHashMap] + O((h + d)/c) [IJobForEachWithEntity reading from the NativeMultiHashMap], or O((h + d + cd)/c). Ignoring parallelism, it's O(hd) against O(h + d + cd) [here c is a "penalty" for forgoing parallelism]. If h is really small and d is really big (or vice versa), and if they're both overall small, then your approach is more performant. If h and d are pretty close to each other, and are both overall large, then my approach is faster. In addition, the more cores your target audience has, the crappier my solution gets. Think about how many HealthData and DamageEvent you'll have at a time. Please tell me if something is off with my analysis.
EDIT: You can replace ScheduleSingle with Schedule
Thank you very much for these insights!
You are right that doing the time complexity analysis is the best way to settle things here.
The number of HealthData components is usually between 100 and 500, and I would say the number of DamageEvents usually ranges from 20 to 100. I am targeting PCs for my first audience, so I guess the number of cores should be around 4 in general. I might also target Xbox / PS4 and have no idea about their core counts.
According to your analysis, my solution would be hd = 50000 vs h + d + cd = 1000 in the worst-case scenario, and 2000 (mine) vs 200 in the common scenario. Of course this highly depends on the constant factors and other hidden costs (such as initialization or job scheduling), but yours is linear and might thus scale better. So I will definitely give it a try if this becomes limiting (for now, it really has a very small footprint).