I’m thinking about implementing GOAP (Goal-Oriented Action Planning) in ECS.
This is an approach to AI where NPC behaviors are split into small independent actions (like WalkTo(x), Target(y), Equip(z)). An action planner system generates sequences of those actions (a plan) depending on the current goals, and another system takes that plan and executes the actions one by one.
So the result may look like this: Goal: RestoreHP >>> Plan: GoTo(fridge) > Grab(raw meat) > GoTo(cooker) > Cook(steak) > Eat(steak)
In DOD, an action can be a pair of a component and a system: if an NPC has that component, the corresponding system takes control of that NPC and changes its state.
A goal can be a component attached to the NPC.
The planner system reacts to the Goal component and generates a plan (a list of pending action components); that plan has to be stored somehow, either on the NPC or on some kind of Plan entity associated with the NPC.
A plan execution system should take steps from that plan and move them onto the NPC one by one. Each action system reacts to its action component, and so on until the goal is accomplished.
All looks pretty straightforward.
The only problem is that I don’t know how to store an ordered list of arbitrary components on an entity.
I would go with your own format in a DynamicBuffer and emit Action components when the previous one has finished executing. So you would have some sort of PlanExecuter system or similar.
I think right now you do not have a good alternative to your solution with dynamic buffers. The only other thing I can think of is to have some kind of cache of actions on the system, but that would be harder to justify.
You do not make a cache of actions; you store an action map. An enum is blittable as far as I know. Then you take the next entry from the top, resolve it to a component, attach it, and in the next iteration the corresponding system will pick it up. So basically you store the plan as an array of enum values or whatever.
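Something like this is what I mean — a rough sketch with made-up names (ActionType, PlanStep, GoToAction, …), with a Target entity next to the enum just so there is some data to copy into the action component; not working plan-execution code:

using Unity.Entities;

// Hypothetical types: the plan is a buffer of blittable enum values,
// resolved to real action components one step at a time.
public enum ActionType : byte { GoTo, Grab, Cook, Eat }

[InternalBufferCapacity(8)]
public struct PlanStep : IBufferElementData
{
    public ActionType Action;
    public Entity Target; // fridge, cooker, piece of food, ...
}

public struct GoToAction : IComponentData { public Entity Target; }
public struct GrabAction : IComponentData { public Entity Target; }

public static class PlanExecution
{
    // Takes the next pending step and attaches the matching action component
    // via a command buffer; the per-action systems then react to it.
    public static void StartNextStep(Entity npc, DynamicBuffer<PlanStep> plan, EntityCommandBuffer ecb)
    {
        if (plan.Length == 0)
            return;

        PlanStep next = plan[0]; // first pending step
        plan.RemoveAt(0);

        switch (next.Action)
        {
            case ActionType.GoTo: ecb.AddComponent(npc, new GoToAction { Target = next.Target }); break;
            case ActionType.Grab: ecb.AddComponent(npc, new GrabAction { Target = next.Target }); break;
            // ... Cook, Eat, etc.
        }
    }
}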
Try to forget what you have learned in OOP. You do not tie your data to objects anymore. You should aim for a design where, at the point you put the actual Action component on the entity, the data needed to execute it is already there.
In other words: you should design data flow instead of command flow. You should design data-crunching systems instead of objects.
If you’re familiar with math, you probably know automata. You feed some data in and it spits out some other data, and you’re only interested in what’s inside when you’re working on that individual one; otherwise it’s a black box. ECS (and DOD in general) is similar: you can think of your systems as individual automata. Design the data flow between them with components (adding or removing them), so you can feed your data into the systems you need.
You can also avoid storing the whole plan’s data on the entity, and just calculate the data needed for each action when that action is the next to execute. Then a dynamic buffer of actions on the entity is all you need.
While not GOAP, I wrote (about 5 months ago now) a very complex Utility AI solution in pure ECS/jobs that used really cool arbitrary generic jobs. It would build a different job tree every frame depending on the state of the entities, automatically handing dependencies to each stage until the entire AI had been resolved each frame.
It worked and I was surprised how well I could make ECS/jobs generic with some crazy patterns… but I scrapped it a month later.
I realized I was trying too hard to fit an existing solution to a problem it wasn’t suitable for, so I ended up rewriting the whole thing. I based it on the general idea of Utility AI, but I broke it down into lots of different systems and jobs that each have a very specific purpose. It flows a lot nicer and is much easier to maintain.
The downside is that it might take 2, 3, 4, even 5 frames to update, depending on the order of systems and the complexity of the tree the AI has to process. However, I found that this doesn’t matter most of the time.
AI isn’t usually updated every frame anyway, and most AI solutions are rate limited so that lots of updates don’t cause delays. The AI usually continues with its previous decision while recalculating, so the small delay is mostly hidden.
Just something to think about. While I’m not saying that’s what is happening here, it is a common mistake to try to make a solution fit a problem (look at all the incorrect usages of design patterns).
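(If it helps anyone: the usual way to rate limit in ECS is just a per-entity cooldown that gates re-evaluation. A tiny sketch with made-up names, not code from my solution:)

using Unity.Entities;

// Hypothetical component: the AI only re-evaluates when the cooldown has expired.
public struct DecisionCooldown : IComponentData
{
    public float TimeLeft;
}

public static class AiRateLimit
{
    // Called for each AI entity every frame; returns true only when it is time to re-evaluate.
    public static bool ShouldReplan(ref DecisionCooldown cooldown, float deltaTime, float interval)
    {
        cooldown.TimeLeft -= deltaTime;
        if (cooldown.TimeLeft > 0f)
            return false;             // keep executing the previous decision in the meantime
        cooldown.TimeLeft = interval; // e.g. 0.25f caps it at roughly 4 re-plans per second
        return true;
    }
}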
This is true: when you have a hammer, everything looks like a nail )
But so far GOAP looks to me like a natural approach for data-oriented AI.
The whole point of GOAP is planning and then executing the pre-generated plan: the user/programmer defines only a goal for the NPC, and the system figures out how to achieve that goal automatically, based on the NPC’s set of abilities and the world state. This planning is not a cheap operation (building an “all possibilities” graph plus A* pathfinding over it), so it can’t be executed every frame.
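Just to illustrate why it’s expensive, here is a very rough sketch of the kind of search the planner runs (plain C#, hypothetical WorldState flags and action definitions, breadth-first search instead of A* to keep it short):

using System.Collections.Generic;

[System.Flags]
public enum WorldState : uint
{
    None          = 0,
    HasRawMeat    = 1 << 0,
    AtCooker      = 1 << 1,
    HasCookedFood = 1 << 2,
    HpRestored    = 1 << 3,
}

public struct PlannedAction
{
    public string Name;
    public WorldState Preconditions; // flags that must already be set for the action to apply
    public WorldState Effects;       // flags the action sets when it completes
    public float Cost;               // unused here; A* would rank partial plans by accumulated cost
}

public static class GoapPlanner
{
    // Searches the graph of reachable world states for one that satisfies the goal.
    // The branching factor is "every applicable action in every reached state",
    // which is exactly why you do not want to run this every frame.
    public static List<PlannedAction> Plan(WorldState start, WorldState goal, PlannedAction[] actions)
    {
        var frontier = new Queue<(WorldState state, List<PlannedAction> plan)>();
        var visited = new HashSet<WorldState> { start };
        frontier.Enqueue((start, new List<PlannedAction>()));

        while (frontier.Count > 0)
        {
            var (state, plan) = frontier.Dequeue();
            if ((state & goal) == goal)
                return plan; // all goal flags satisfied

            foreach (var action in actions)
            {
                if ((state & action.Preconditions) != action.Preconditions)
                    continue; // preconditions not met in this state

                WorldState next = state | action.Effects;
                if (!visited.Add(next))
                    continue; // already reached this world state by a shorter plan

                frontier.Enqueue((next, new List<PlannedAction>(plan) { action }));
            }
        }
        return null; // the goal is unreachable with the given action set
    }
}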
I have made a task/states queue system whereby every specific state, like an activity, moving or thinking state, is described by a generic IBufferElementData. This way I can store all these tasks in a DynamicBuffer on each AI entity.
My IBufferElementData
[InternalBufferCapacity(0)]
public struct StatesQueue : IBufferElementData
{
    public int stateID;   // What state this buffer element describes
    public Entity target; // Target of that state

    // Additional information
    public float Value1;
    public float Value2;
    public float Value3;
}
Value1, Value2 and Value3 can contain additional information about the state: for example, how long an entity should wait in its WaitingState (Value1 = 10f), or for which field in the villager struct a specific entity has to be found in the SeekEntityState (Value1 = ID of the field in the villager struct, Value2 = tree entity ID).
This approach does require the programmer to know what each value field should contain for a specific state.
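For example, building a queue could look roughly like this (simplified; WaitingStateID/SeekEntityStateID and the villager/tree variables are stand-ins for my real constants and entities):

DynamicBuffer<StatesQueue> queue = EntityManager.GetBuffer<StatesQueue>(villagerEntity);

// Wait for 10 seconds: Value1 holds the duration.
queue.Add(new StatesQueue { stateID = WaitingStateID, Value1 = 10f });

// Find an entity for a specific field of the villager struct:
// Value1 = ID of the field in the villager struct, Value2 = tree entity ID.
queue.Add(new StatesQueue { stateID = SeekEntityStateID, Value1 = woodFieldID, Value2 = treeEntityID });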
Once the queue of tasks is constructed, the TransitionStateSystem, which operates on all entities that are currently in no state, examines the first described state in the queue and adds that state, with its additional parameters, to the entity. An excerpt from the TransitionStateSystem:
TransitionStateSystem
NativeArray<Entity> Entities = chunks[i].GetNativeArray(entityChunkType);
BufferAccessor<StatesQueue> StatesQueue = chunks[i].GetBufferAccessor(StatesQueueChunkType);

for (int j = 0; j < chunks[i].Count; j++)
{
    var currQueue = StatesQueue[j];
    int currQueueSize = currQueue.Length;

    if (currQueueSize == 0) // The queue is empty! Create a new queue in ThinkingState
    {
        CommandBuffer.AddComponent(Entities[j], new ThinkingState { });
        continue;
    }

    int topOfQueue = currQueueSize - 1;
    int nextStateID = currQueue[topOfQueue].stateID;
    Entity target = currQueue[topOfQueue].target;
    float Value1 = currQueue[topOfQueue].Value1;
    float Value2 = currQueue[topOfQueue].Value2;
    float Value3 = currQueue[topOfQueue].Value3;
    currQueue.RemoveAt(topOfQueue); // POP

    if (nextStateID == WaitingStateID)
    {
        CommandBuffer.AddComponent(Entities[j], new WaitingState
        {
            TimeLeft = Value1,
        });
        continue;
    }
    else if (nextStateID == IdleStateID)
    {
        // ... and so on for the remaining state IDs
    }
}
Note: I add the states to execute in reverse order, so MovingState is added after ActivityState, which makes MovingState the first state to be “popped” from the dynamic buffer.
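So pushing a “walk somewhere, then do the activity there” pair looks roughly like this (simplified, assuming MovingStateID/ActivityStateID constants like the other state IDs):

// Added in reverse: the element pushed last sits at the top and is popped first.
queue.Add(new StatesQueue { stateID = ActivityStateID, target = treeEntity }); // executes second
queue.Add(new StatesQueue { stateID = MovingStateID, target = treeEntity });   // executes first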
Thought I would share my solution to storing an ordered set of actions on an entity. Any criticism or suggestions are welcome.
How about storing the needed action data during precondition resolution? For example, say you’re resolving a HasTarget precondition, and assume there are no actions whose effect is HasTarget = true. That means we try to resolve HasTarget procedurally, and during this resolution the target would be stored in some component.
If, say, the precondition is resolved by an action instead, that action would store the target in the same location it would have gone to had it been resolved procedurally.
The action plan then only has to store the types of the actions needed (the action components). The systems that execute the actions can still run, since the data is already stored somewhere.
This is indeed an awesome idea. I have been thinking in this direction as well since last night.
A GoTo action may have a precondition that the NPC entity has a Destination component on it, and its effect may be the addition of an ArrivedToDestination component. So, in this case, the GoTo action doesn’t need any parameters at all and can be stored as an enum value.
So, for example, a “get some wood” goal can be accomplished with the following plan:
FindTree > Walk > Chop
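A rough sketch of what that could look like (made-up component names, just to show that the plan stays a plain buffer of enum values while the data lives in normal components):

using Unity.Entities;
using Unity.Mathematics;

// Data resolved during planning/execution lives in ordinary components on the NPC...
public struct Destination : IComponentData { public float3 Position; }

// ...so the action components themselves can be empty tags.
public struct GoToAction : IComponentData { }
public struct ArrivedToDestination : IComponentData { }

// The plan is then just an ordered buffer of enum values.
public enum ActionType : byte { FindTree, Walk, Chop }

[InternalBufferCapacity(8)]
public struct PlanStep : IBufferElementData
{
    public ActionType Action;
}

Filling it for the “get some wood” goal would then be something like:

// FindTree > Walk > Chop, stored purely as enum values.
var plan = EntityManager.GetBuffer<PlanStep>(npc);
plan.Add(new PlanStep { Action = ActionType.FindTree }); // finds a tree, writes Destination
plan.Add(new PlanStep { Action = ActionType.Walk });     // reads Destination, adds ArrivedToDestination on arrival
plan.Add(new PlanStep { Action = ActionType.Chop });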