BT vs. GOAP for game AI

I’m experimenting with AI and generally want to explore different models for programming agent behavior. While neural networks and the like are cool, they are trained rather than authored, which makes it hard to set explicit rules, so I assume they aren’t a good fit for defining an agent’s actual behavior.
I’m not particularly experienced with AI, but it seems to me that every established option for game AI is fairly limiting.
For example, for my test project, I wanted to make a functional village simulation in which agents keep the village’s needs satisfied without the player’s intervention. This seems like a perfect fit for a GOAP or HTN model, because each agent can be given clear goals:

[Fish for food] = Move to nearest fishing spot -> Fishing activity -> Move to village storage -> Unload inventory
[Repair building damage] = Move to village storage -> Load inventory -> Move to damaged building -> Repair activity
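The two goals above could be sketched as plain data, each goal mapping to an ordered action sequence. All names here are illustrative, a minimal sketch rather than any particular GOAP library’s API:

```python
# Goals as ordered action sequences (a sketch; names are hypothetical).
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str

GOALS = {
    "fish_for_food": [
        Action("move_to_nearest_fishing_spot"),
        Action("fishing_activity"),
        Action("move_to_village_storage"),
        Action("unload_inventory"),
    ],
    "repair_building_damage": [
        Action("move_to_village_storage"),
        Action("load_inventory"),
        Action("move_to_damaged_building"),
        Action("repair_activity"),
    ],
}

def plan_for(goal: str) -> list[str]:
    """Return the action names an agent would execute for a goal."""
    return [a.name for a in GOALS[goal]]
```

In a real planner the sequence would be derived from preconditions and effects rather than hand-written, but this is the shape of the output.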

And it works well on a macro scale, but I also want agents to respond to ongoing changes in the world: for example, reacting to the player approaching, whether they’re holding a weapon or not, and so on.
Now, being “reactive” is a clear indicator that behavior trees might be the answer, but I’d like to retain the action planning as well.
I understand that the “action planning” described above can also be implemented in a behavior tree, but keep in mind that I gave a very simplified example. In order to gather wood, for example, an agent might need to cross a river to reach the forest, and for that it will need to find a way across, whether by boat, bridge, swimming, or anything else.
I’d appreciate any insights from devs familiar with AI design. Should I combine BTs with a form of action planner? Use something else entirely?

You could add goals that account for these things and weight their priority based on the world/player state. That’s how I’m handling it. Action planners can still be reactive.
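One way to sketch this: each goal gets a scoring function over the world/player state, and the planner simply pursues whichever goal scores highest right now. Goal names and scoring rules below are hypothetical:

```python
# Weighting goal priority from world state (a sketch; rules are illustrative).

def pick_goal(world: dict) -> str:
    """Return the highest-priority goal given the current world state."""
    scores = {
        "flee":  10.0 if world.get("player_armed") and world.get("player_near") else 0.0,
        "greet":  5.0 if world.get("player_near") else 0.0,
        "fish_for_food": 1.0 + world.get("food_shortage", 0.0),  # scales with need
    }
    return max(scores, key=scores.get)
```

Re-evaluating this each tick (or on relevant world events) is what makes the planner reactive: an armed player approaching preempts the routine goal without any behaviour-tree machinery.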

Edit: This also doesn’t have to be an all-or-nothing approach. You can use behaviour trees in segments, embedding them into the actions themselves. Maybe you need a different action for a different context while still adhering to a goal-based system. Rather than switching wholesale to a behaviour tree, which can quickly spiral in complexity, you could use smaller, contextual behaviour trees for goal decisions.
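A contextual tree inside a single action could look something like this: the top level stays goal-based, but the “cross river” action runs a tiny selector over context-specific behaviours. Node helpers and leaf names are illustrative, not any engine’s API:

```python
# A small behaviour tree embedded inside one action (a sketch).

def selector(*children):
    """Node that succeeds on the first child that succeeds."""
    def run(ctx):
        return any(child(ctx) for child in children)
    return run

def sequence(*children):
    """Node that succeeds only if every child succeeds in order."""
    def run(ctx):
        return all(child(ctx) for child in children)
    return run

# Leaf behaviours (hypothetical): succeed when that option works in this context.
def board_boat(ctx):  return ctx.get("boat_available", False)
def take_bridge(ctx): return ctx.get("bridge_intact", False)
def swim(ctx):        return ctx.get("can_swim", False)

# The goal system only sees one action; the tree handles the context.
cross_river = selector(board_boat, take_bridge, swim)
```

The goal/planner layer just sees one `cross_river` action succeed or fail; the how stays local to the tree.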


I think this describes an HTN. HTN literature uses different terms than BT literature, but in practice they are like BTs that get pre-evaluated: like a BT that avoids branches that are ultimately going to fail and, when there are multiple branches that can succeed, in some implementations can choose the most optimal branch of the bunch.

I’d suggest avoiding planners, though. It can feel like they are ideal to handle complex combinations of circumstances, but the thing is they add a lot of hidden complexity. They can make it very hard to understand why a character is doing one thing instead of another. A planner can make a very unorthodox plan that makes sense with the rules you gave it, but it’s not fun and it breaks the game. A planner can have you tweaking lots of parameters and adding lots of exceptions to the rules, days before release, just to fix a problem that can’t be fixed directly.

HTNs are less cruel than GOAPs in this respect, but they are still too much IMO. Even BTs can be too much sometimes. And I love them; I love graph stuff like BTs and FSMs because they can make things clearer, but even those can get unnecessarily complex if one is not careful. Anyway… I think you could be better off with a simple, smallish BT, maybe with some kind of utility selector node.
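A utility selector node, in sketch form: children are (utility function, behaviour) pairs, and the node runs whichever child scores highest on the current context. The children and their scoring rules below are made up for illustration:

```python
# A "utility selector" BT node (a sketch; children are hypothetical).

def utility_selector(children):
    """Run the child whose utility function scores highest right now."""
    def run(ctx):
        utility_fn, behaviour = max(children, key=lambda c: c[0](ctx))
        return behaviour(ctx)
    return run

children = [
    (lambda ctx: ctx.get("hunger", 0.0),  lambda ctx: "eat"),
    (lambda ctx: ctx.get("fatigue", 0.0), lambda ctx: "sleep"),
    (lambda ctx: 0.1,                     lambda ctx: "wander"),  # low-utility fallback
]
choose = utility_selector(children)
```

This keeps the tree small while still letting behaviour shift smoothly with the world state, without a planner’s hidden search.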

Bobby Angelov has an interesting take on this: (Youtube Link at 1:30:24). He talks about how devs have a much bigger view of the game than the player, so it’s easy to forget that the player won’t notice all the reasoning behind AI behaviors.

So, for example, a planner can find the optimal way to satisfy the food needs of a village, but then it can get very repetitive, maybe even to the point it’s uncanny, because real villages are not optimal at all. Or it can get very difficult for players to predict and understand what the villagers will do to the point it’s not fun. You could try to compensate with a lot of parameters and tweaks. Or you could make a Behavior Tree that just tries to feed villagers from time to time, in a predictable and easy to understand way, but with enough randomness for variety. With a pure BT option, sometimes a village wouldn’t do what’s optimal, but the player doesn’t know what’s optimal in these cases either.

If you do your job well and the game is fun, players will blame themselves when the AI does something they don’t like, no matter if you use a planner or a BT. They’ll think: “How can I change the design of the village to avoid this happening”; just like a city designer thinks “how can I design the city to reduce traffic jams” instead of thinking “these dumb citizens aren’t coordinating themselves well”. The difference here is, when you want to adjust the village’s AI to make things more fun, it’ll probably be easier without a planner.


I’ve felt similarly when working out different ways to manage some AI. In the end I prefer to simply “manage” them, meaning pretty much just a standard state machine, rather than trying to give agents a complex mind of their own.

The main thing the player would be doing in my particular game is observing the time and place of the AI, so the only data I really care about is where and when the AI is going to be somewhere, and how frequently it moves to a new spot.

Of course, interaction with the player simply overrides whatever the standard routine is. After a cooldown, the standard routine picks back up if there’s no more interaction with the player.

This is a bare-bones state machine, and at this point it has enough apparent complexity that it would take most players many, many hours to pick up on the routine, if they ever did at all. All I have to do as a designer is change when, where, and how frequently the AI migrates, and the context of the locations they move to gives all the info the player is interested in. So the AI doesn’t really have to make any decisions.

That was just for an animal AI in a hunting game, but I don’t see why it couldn’t be expanded to something like citizens in a city builder. You could devise complex rules where each citizen has various needs and responds to them in a prioritized way… or you could just say when, where, and how frequently they travel, with weighted chances, so that it seems like intricate decisions are being made. And of course, if some major event happens, that can override the routine.

The main thing is, if an AI agent is doing something weird, I don’t want to have to untangle a web to figure out why. It should either be because I made a data entry mistake, or it should be quick to hunt down in a basic state machine.
