What do you want from game AI?

So I’ve been working on a behaviour tree tool for my game (and the space combat kit I’m working on). Right now it’s not meant to be an all-encompassing solution for AI or behaviour trees in general, rather just a basic solution to get people started with AI in that kit, and to make my life easier as I make my game. But it got me wondering about how devs in general see AI tools and AI development in their games.

From a non-technical perspective, what would be the ideal workflow for making game AI? I was thinking about animation and movement as well, and it occurred to me that in the same way you have a ‘humanoid rig’ for animation, it might be efficient to have a ‘humanoid rig’ for AI behaviour. Or even a more abstract ‘organism rig’ covering at least some groups of behaviours centred around survival, reducing threats, and even cooperation with other organisms: accessing information about their needs/desires and maximising them in a way that signals intent to the other organism.

Obviously it’s very easy to talk about it like this, and the devil is in the details, but I think maybe by discussing it we can get a better understanding of a ‘first principles’ viewpoint on the problem of game AI in general.

So maybe a few points worth thinking about:

  • What sort of behaviours are ‘universal’ or nearly so when it comes to game AI;

  • What sort of interactions between AI are universal or nearly so (e.g. conflict, cooperation, maybe even social interactions or some abstract measure thereof);

  • What sort of realistic behaviours do we not want in games (e.g. behaviours that are too boring or that get in the way of the gameplay);

  • Is there some simplified model of human or organism behaviour that might drive a high-level AI system?

EDIT: @Not_Sure I think you deleted your post? Anyway this was in response to a comment that doesn’t seem to be around any more.


That’s a good point. It’s especially apparent in one of my favourite games, Splinter Cell. When you’re hiding in the shadows waiting, you tend to have a good view of the AI and its reactions, and it was very annoying to see how quickly the AIs reverted to normal behaviour (showing no apparent stress or change in alertness even after having recently found one of their fellow guards dead, or after coming across a ninja in a black suit who disappeared straight after).

There seemed to me to be an alert level that, at some point, became a permanent state if too many triggers were tripped, but apart from that binary state there didn’t seem to be much else, and before that state was reached, the AI would simply forget after some period of time.

This is definitely one of those cases where gameplay trumped realism, but I’ve always felt that the integrity of the situation should be upheld in at least an abstract way. So even if the guards basically went back to normal for gameplay purposes, they could at least have:

  • Commented on the event in conversations with other guards;

  • Shown increased stress, such as whipping out their weapons every now and then and aiming at some dark corner;

  • Activated something that changed the gameplay, such as extra lights or different patrol routes.

Also there could have been, like you said, different reactions for different experience levels or types of character.

Something I learned from AI development in Unity3D is to include sound clips in the programming design. I never found a common AI element between my different game objects, except for movement.

Another part of the AI design is to include playing animations. It should be designed so that specific points in selected animations can trigger certain AI elements, and maybe the sound clips too.

Good point. It would be very good to be able to create, set and read animation triggers, and assign sound effects, inside the AI tool itself. Not sure if any of the solutions out there do this already.

On a more abstract note, I was thinking about how a generic AI ‘rig’ might work.

First of all, for in-game actions there’s not going to be an easy way to generalize what every game will need. If you have a knight that needs to throw down some mead before each swing, there’s not really a way to arrive at that behaviour (or not arrive at it!) analytically.

So if we consider this in terms of a behaviour tree, it would be quite hard to develop a generic solution to the leaf nodes (the concrete actions taken inside the game world).

But just a little further up the tree, I think there’s a lot of potential for generalisation. If there were some way to categorise each concrete behaviour in terms of maximising a specific reward, then the way the AI structures its behaviour when choosing which of these actions to carry out (e.g. balancing different types of individual reward, short-term reactions vs long-term planning, optimising the overall balance of the game world) might be abstracted in a way that removes a lot of manual work from the developer.

So perhaps the developer would create only the leaf nodes - a lot of different actions that the AI can take - and the AI might then determine analytically, at runtime, how to structure the behaviour tree to achieve some goal using those actions.

This goal would of course take many things into account, most of all the player’s fun factor, so it wouldn’t just be some optimum solution for the AI itself. But that doesn’t mean it cannot be achieved procedurally according to certain abstract constraints.
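To make that concrete, here’s a minimal sketch of the idea (Python purely for illustration; all the names like `Eat` and `auto_selector` are hypothetical): the developer authors only the leaf actions, each one annotated with a reward estimate, and the structure above them is derived at runtime rather than hand-wired.

```python
# Hypothetical sketch: leaf actions carry a reward estimate; the selector
# above them is generated at runtime instead of being hand-built.

class Action:
    def reward(self, world) -> float:
        """Estimated reward of running this action in the current world state."""
        raise NotImplementedError

    def run(self, world):
        raise NotImplementedError

class Eat(Action):
    def reward(self, world):
        return world["hunger"]          # hungrier -> more reward for eating

    def run(self, world):
        world["hunger"] = 0.0

class GuardCheckpoint(Action):
    def reward(self, world):
        return world["intruder_risk"]   # perceived risk of someone slipping through

    def run(self, world):
        world["intruder_risk"] *= 0.5

def auto_selector(actions, world):
    """Stand-in for the hand-authored upper tree: pick the highest-reward leaf."""
    return max(actions, key=lambda a: a.reward(world))

world = {"hunger": 0.3, "intruder_risk": 0.8}
auto_selector([Eat(), GuardCheckpoint()], world).run(world)
# Guards the checkpoint, because the perceived risk outweighs the hunger.
```

A real system would obviously build deeper structure than a single argmax, but the division of labour is the point: concrete actions by hand, ordering by machine.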

There are also a lot of generalized AI assets on the store already. I didn’t look, but is there even one specific AI solution for 6DOF space games? If I were you I wouldn’t try to make this general; I’d try to make it the best it can be for your specific game and game kit, and then see if it happens to be useful for something else too. If not, that’s fine.

The greatest challenge from a big-picture perspective that I saw in my own AI use case was that what all the enemies do “as a whole” matters much more than what they do “as individuals”. The smarter AI with great pathfinding and awareness of positions that offer line of sight or cover etc. might look good if you only have a handful of units that don’t cross paths, but if you have 400 of them and they are all “the same kind of smart”, they will drive the same route in single file, or freak out while repathing around others, and it looks really bad. AI that is a lot dumber as an individual, but creates more believable swarm behaviour and big-picture impressions, seems to work a lot better in my case, and looks smarter too. I don’t know how many enemies you’ll have at most, but I think it’s something worth considering, especially if you want to sell it as an asset. Most existing solutions won’t scale well to a great number of units. If you wanna offer that, your system might need to make some compromises to allow for it.


Behavior Tree

There are some pretty good ones, like Behavior Designer and Apex (and several others), that seem to do a lot already. It would be hard to compete with those. I even saw some free behavior tree tools on the forum that seemed decent.

The implementation of the “reactions” (the result of the behavior tree) should be left to the dev, but you could create some basic stuff to get the user started (like moving towards a target); it should just be separate components. Maybe you could have a template system to start with a “zombie AI”, or ways for people to share those templates.

IMO the important parts of the AI are the sensors/inputs (visual, audio…), which are often just triggers and/or raycasts, and the “behavior tree” triggered by the sensors. The result of the tree could change the state of the “Character”, like “Shooting at target”, “Moving towards target”, etc.
You can have multiple inputs/variables that influence the decision tree; for example, you can move toward an area while being “alert”. But you know all that already, so I won’t go further into it XD
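As a rough sketch of that split (hypothetical names, Python just to show the shape): sensors reduce to trigger/raycast-style checks that produce percepts, and the tree’s only job is to map percepts onto a character state.

```python
# Hypothetical sketch of the sensor -> tree -> character-state split.

class VisionSensor:
    def sense(self, agent, world):
        # In Unity this would typically be a raycast or overlap check;
        # here it's faked with a 1D distance test.
        dist = abs(world["player_pos"] - agent["pos"])
        return {"sees_player": dist < agent["view_distance"]}

class HearingSensor:
    def sense(self, agent, world):
        return {"heard_noise": world["noise_level"] > 0.5}

def decide_state(percepts, agent):
    """Toy 'tree': percepts in, character state out."""
    if percepts["sees_player"]:
        return "Shooting at target"
    if percepts["heard_noise"] or agent["alert"]:
        agent["alert"] = True                  # stays 'alert' and investigates
        return "Moving towards target"
    return "Patrolling"

agent = {"pos": 0.0, "view_distance": 10.0, "alert": False}
world = {"player_pos": 25.0, "noise_level": 0.8}

percepts = {}
for sensor in (VisionSensor(), HearingSensor()):
    percepts.update(sensor.sense(agent, world))
print(decide_state(percepts, agent))           # -> "Moving towards target"
```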

I’d like to add a few things that were in UDK. For AI we could use state machines integrated into UnrealScript; I don’t remember any sort of behavior tree at the time, so we would use a “Think” function. Anyway, what was nice was that the states could inherit from each other (they weren’t classes though). So you could have a “root” state where your AI has a normal view distance and moves at normal speed; then you could put it in an “alert” state that reuses all the stuff from the “root” state but changes its view distance and overrides the thinking part; then you could go into a “hunt” state that inherits from “alert”, so you keep the larger view distance and whatever other stuff, add more move speed, and probably override the thinking again. I thought it was interesting to stack the changes like that; it is much like class inheritance, but for state machines.
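That stacking maps nicely onto plain class inheritance. A minimal sketch (Python for illustration; UnrealScript states weren’t classes, as said, but the effect is similar, and `ai` here is a hypothetical agent object):

```python
# Sketch of UDK-style state stacking via ordinary class inheritance.
# `ai` is a hypothetical agent with patrol/investigate/chase methods.

class RootState:
    view_distance = 10.0
    move_speed = 2.0

    def think(self, ai):
        ai.patrol()

class AlertState(RootState):
    view_distance = 20.0        # wider view; everything else is inherited

    def think(self, ai):        # override the thinking part
        ai.investigate_last_noise()

class HuntState(AlertState):
    move_speed = 4.0            # keeps AlertState's 20.0 view distance

    def think(self, ai):
        ai.chase_target()
```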

You should have a try at the other existing solutions first; I think Apex has a free version that you can try.

Utility AI

Recently utility AI is being used more and more; it has many advantages over behavior trees. I think there is already one asset on the Unity store that makes use of this. I suggest you have a look at it, because it can allow for some pretty clever and unpredictable (in a good way) AI.
Here is an article about it:

Since I didn’t see that many in the store that use this, it might be a better investment of your time :slight_smile:


I think ‘generalized’ isn’t the term I was looking for; maybe ‘automation’. The idea is that the deeper-level structure of a behaviour tree could be generated in some automated way, based on an understanding of the relative benefits of the different concrete actions.

So as a simple example, let’s say you were making a behaviour tree for a guard. The guard might have basically two objectives: to eat, and to prevent anyone from getting through a checkpoint.

Now, let’s say for now that the concrete actions, once a decision has been made, are manually created in the normal way by the developer using leaf nodes or subtrees, and leave it at that.

Higher up the tree, however, if the tree is being created in the usual way, the developer might implement a relatively crude system, such as simply choosing to leave its post and eat when some hunger threshold is reached. This would be OK in this simple context, but what if you want the decision to be affected by other factors, such as the probability that the player is still in the area, or whether there are other guards nearby who are likely to spot intruders? Suddenly, designing the tree becomes rather more complex.

What if, instead of having the developer design these parts of the tree, some procedural system evaluated the concrete actions (leaf nodes) and arrived at a decision using an abstract understanding of what each behaviour would result in (e.g., maximising some variable)? This would make adding influencing variables easily scalable: you could add a hundred different variables to the guard’s situation, which would be very difficult to deal with in a normally designed tree, and it would be no more complex for the developer to implement.
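A sketch of why this scales (hypothetical names and numbers, Python for illustration): if every influencing factor is just another consideration function, the hundredth variable costs the same one line as the third.

```python
# Hypothetical sketch: every influence on the 'leave post and eat' decision
# is a separate scoring function, so the hundredth variable is one more line.

def hunger(ctx):            return ctx["hunger"]                 # 0..1
def player_gone(ctx):       return 1.0 - ctx["p_player_nearby"]  # safe to leave?
def covered_by_others(ctx): return min(1.0, ctx["guards_nearby"] / 2.0)

EAT_CONSIDERATIONS = [hunger, player_gone, covered_by_others]

def score_eat(ctx):
    # Multiplying means any near-zero consideration vetoes the action,
    # a common utility-AI convention (not the only option).
    score = 1.0
    for consideration in EAT_CONSIDERATIONS:
        score *= consideration(ctx)
    return score

ctx = {"hunger": 0.9, "p_player_nearby": 0.7, "guards_nearby": 1}
print(score_eat(ctx))  # low-ish: hungry, but the player is probably still around
```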

Great point about crowd behaviour, there definitely needs to be a collective component, and not just similar parallel individual decision-making. Some kind of ‘flocking’ aspect where the decisions were simplified according to what others were doing.

The thing is though, whether these were spaceships, or deer in a forest, it wouldn’t be incredibly different except at the point of the leaf nodes, i.e., concrete actions.

I saw that article, and it was one of the reasons why I initially programmed my ship AI with a scoring system. However, I think it is too difficult to manually adjust the scoring when situations and possibilities become very complex.

That’s why I’m talking about an automated ‘base brain’ which deals in the abstract, and perhaps the developer can still construct concrete actions or groups of actions using a normal behaviour tree, for some kind of artistic control.

So this ‘base brain’ would certainly not use traditional behaviour tree architecture, but might rather use scoring/fuzzy logic or even some kind of regression calculation (although I think learning-based architectures are probably overrated for most runtime game uses). It would arrive at a decision about which behaviour or group of behaviours to implement, at which point it would call the node or subtree that the developer has created for that decision.
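Something like this, as a very rough sketch (all names and formulas hypothetical): the scoring layer only picks an abstract goal, and the hand-authored subtree for that goal does the concrete work.

```python
# Sketch of the 'base brain': an abstract scoring layer picks a goal, then
# hands control to the behaviour (sub)tree the developer authored for it.

def score_goals(ctx):
    return {
        "attack":  ctx["target_visible"] * ctx["confidence"],
        "flee":    ctx["damage"] * (1.0 - ctx["confidence"]),
        "regroup": ctx["allies_nearby"] * ctx["damage"],
    }

# Developer-authored subtrees, keyed by the abstract goal they implement.
SUBTREES = {
    "attack":  lambda: print("run the hand-built attack subtree"),
    "flee":    lambda: print("run the hand-built flee subtree"),
    "regroup": lambda: print("run the hand-built regroup subtree"),
}

def base_brain(ctx):
    scores = score_goals(ctx)
    goal = max(scores, key=scores.get)   # fuzzy scoring, not tree traversal
    SUBTREES[goal]()                     # artistic control stays with the dev

base_brain({"target_visible": 1.0, "confidence": 0.3,
            "damage": 0.8, "allies_nearby": 0.9})   # -> the regroup subtree
```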

Sounds like a QA nightmare, and afaik players like predictable AI, because predictable AI allows reliable decision-making, and thus “feeling clever” for having outsmarted the AI by exploiting its predictable behaviour. I think you might be overthinking/overengineering this whole thing.

I’m not really convinced that’s a good argument for making it a general solution. Aren’t the concrete rules and actions the important and hard stuff for AI? I see the appeal of writing a general solution with a nice visual interface. It probably allows you to write plenty of feel-good code and makes for cool Asset Store screenshots. But wouldn’t it be much more useful, from a “plug & play” and “actually making a game” perspective, to have a really specific and well-designed space-game AI that is controlled via inspector parameters and can otherwise be treated as a black box? Imho it doesn’t really matter how you implement it, as long as performance and behaviour are as expected; right tool for the job and all that.
Why don’t you run a poll among the people who bought your other assets? Those might make up most of your sales for the AI kit. You have a market-research opportunity here that many don’t, I’d say make use of it. Ultimately it’s meaningless what I think would be most useful.

I agree that complex AI is a difficult thing to do in a way that doesn’t break the fun factor, but I really think it’s more a question of constraints than anything else. A human being has constraints which an AI algorithm must also have. But realism is not always bad - you don’t have any less fun at paintball or something just because it’s not a bunch of AI-driven bots that let you get away with anything.

Besides, the AI of Killzone, which is referenced in the articles Elzean linked to, has come up very positively in several discussions by gamers that I’ve read. From what I can tell, it uses a much more complex and sophisticated (and difficult to beat) AI than most games.

Not at all! Once an AI has decided that it is going to shoot the enemy or something, it’s pretty straightforward and can easily be carried out by hooking up a small behaviour tree or something.

But what about the higher-level decision-making, before that decision was reached:

  • Coordinating with team-mates (covering fire, surrounding the enemy, cutting off escape routes);

  • Searching for and moving to a better firing position even if the target is within firing range (planning);

  • What if the AI is wounded and needs a medic? Can they move fast enough out of cover? Is it better to try to eliminate the threat and then seek help, or vice-versa?

All of these are very easy for us to grasp conceptually, but very, very difficult to approach even with a behaviour tree.

But with something like the fuzzy logic/scoring system that article talked about, it’s very easy to add these variables.

One thing that might be difficult is tweaking and balancing the scoring system. It’s possible, however, that some offline regression model could ‘learn’ a good balance.

So behaviour trees are actually very clunky sometimes. That’s why I’m interested to know what would be the ideal workflow (from a non-technical perspective) for developers creating AI in their games.

Another point (as described in that article Elzean linked) is that fuzzy logic is very much a part of human thinking. We score things based on fuzzy adjectives: if I say “How did your day go?” and you say “Pretty good, not fantastic but not too bad either”, there’s a lot of fuzziness there, but it actually does tell me a lot, because I am able to turn my understanding of those terms, and the way you say them, into something approaching an ‘out of 10’ scoring system.
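Just to illustrate the idea (a toy sketch, not a serious model of language; all terms and numbers invented): treat each adjective as a fuzzy term on a 0-10 axis, activate the terms the speaker used, and defuzzify into a single score.

```python
# Toy sketch of 'fuzzy adjectives -> /10 score'. Each term is a triangular
# fuzzy set on a 0-10 axis; the utterance activates terms with different
# strengths and we defuzzify with a (simplified) weighted centroid.

TERMS = {                    # (left foot, peak, right foot) on the 0-10 axis
    "not too bad": (3.0, 5.0, 7.0),
    "pretty good": (5.0, 7.0, 9.0),
    "fantastic":   (8.0, 10.0, 10.0),
}

def defuzzify(activations):
    """activations: {term: strength 0..1}. Weighted average of term peaks;
    real fuzzy systems integrate the full membership shapes instead."""
    num = sum(TERMS[term][1] * w for term, w in activations.items())
    return num / sum(activations.values())

# "Pretty good, not fantastic but not too bad either":
# strong 'pretty good', mild 'not too bad', 'fantastic' explicitly negated.
print(defuzzify({"pretty good": 0.8, "not too bad": 0.4}))   # ~6.3/10
```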

So it might turn out that it’s a much easier workflow for developers.

That’s all well and good, but if there’s one thing that I’ve learned with my radar package, it’s that the more you tailor something for a specific job, the less flexible it is. I dodged a few major overhauls by making it extremely generic to begin with.

I think it’s good to step back from specific implementations and just ask “what’s the ideal AI development environment, without regard to technical attributes?” Even something as outlandish as “talking to my computer and telling it what I want” has a lot to say about how AI could be improved, even if it doesn’t actually make a lot of sense realistically.

That all still falls under the stuff that I call “specific”, because it doesn’t also apply to your deer-in-the-forest example, and a proper implementation requires some degree of knowledge of your game’s mechanics and the interfaces accessible to the AI. E.g. the maneuverability of ships could be a huge factor in what can or should be done by the AI, or it might be completely irrelevant based on your design choices.

All I remember from Killzone 2 is that I found it pretty boring. Iirc I didn’t finish it. But since I just wanted to play something and couldn’t decide what, I’ll give it another go and see if I see the AI doing something fun. :slight_smile:

This might be a matter of personal preference and interest, or my lack of knowledge in certain areas, but I don’t really care “how I talk to the computer”. Whether I write 500 lines of if/else statements, or use a node graph, or dictate in plain text exactly how I want the ruleset for the AI to look, the daunting task for me would always be coming up with the right ruleset (and math). I don’t really have an opinion on which implementation paradigms are used to implement it.

Actually it does, because a deer is just the equivalent of an unarmed human. If the AI were a tiger, then it would be the same as an armed soldier, except that the weapon has a range of only a metre or so.

The concrete actions are different, but what I’m proposing occurs in the abstract before this point is reached.

Good point, but that is part of the settings of the fuzzy algorithm.

If you’re a ‘dreadnought ship’ with a turning time of 30 seconds, then you would simply add the angle between your forward vector and the ‘to-target’ direction as a parameter to the decision making.

Note that the situation you described is much more difficult for a behaviour tree, unless you manually implemented a binary threshold somewhere, which wouldn’t be half as good.

One thing that also interests me is that, as described in the article from Elzean’s post, curves can be used to score variables. So let’s say you want to avoid a collision: rather than using a linear function based on the distance to the obstacle (which might mean that you start avoiding it slightly even when it is not a threat), you can use spline curves to describe the relationship between distance and threat. That way there is a rapid but smooth transition into the avoidance behaviour, rather than it growing very slowly or kicking in suddenly when some threshold is crossed.
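A quick sketch of the difference (smoothstep standing in for an editable spline, which is what the actual tools expose; all numbers made up):

```python
# Sketch of curve-based threat scoring. A linear ramp starts 'avoiding a
# little' far too early; a smoothstep-style curve stays near zero until the
# obstacle matters, then rises quickly but smoothly. Real tools typically
# expose an editable spline instead of a fixed formula like this.

def linear_threat(distance, max_range=50.0):
    return max(0.0, 1.0 - distance / max_range)

def curved_threat(distance, near=5.0, far=20.0):
    # Normalize: threat 1 at `near`, 0 at `far`, smoothstepped in between.
    t = min(1.0, max(0.0, (far - distance) / (far - near)))
    return t * t * (3.0 - 2.0 * t)     # smooth transition, no hard threshold

for d in (40.0, 18.0, 10.0, 6.0):
    print(d, round(linear_threat(d), 2), round(curved_threat(d), 2))
# At 40m the linear score already 'worries' (0.2) while the curve stays at 0;
# by 6m the curve has risen almost all the way (0.99) without any sudden jump.
```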

Would be great to read about your experience; I might buy it just to evaluate the AI.

The problem is that the ruleset is still the hard part of any AI implementation. Building a behaviour tree is still arduous if any kind of subtle reaction is required.

The difference lies in the fact that fuzzy logic describes a non-binary set of parameters, which not only helps prevent turning the AI into a complex set of if/else statements, but opens up the possibility of analytical or semi-analytical solutions to problems with a very large set of parameters, without adding complexity (and maybe even greatly reducing it) for the developer.

I’m still very much at the thinking-about-it stage as far as AI goes. One thing I keep coming back to is modelling the why of decisions. Even things like pedestrians could get subtle improvements from factoring things like a character’s size into decisions, i.e. a small pedestrian would be more likely to move aside for a large one. Taking that a step further, stat-driven AI seems a good thing to have: taking a couple of scales like terrified-heroic and defensive-aggressive into account could give a lot of variance, and could even influence animation blending for body language. A heroic, aggressive guard would be more likely to give pursuit; a terrified, defensive one more likely to flee; a terrified, aggressive one might be nervously aiming into shadows, and so on…
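A sketch of how little machinery that needs (all names, formulas and numbers made up): the two scales just bias the scores of a handful of responses.

```python
# Sketch of stat-driven variance: two personality axes bias action scores,
# so the same threat level produces pursuit, flight or nervous aiming.

def score_actions(bravery, aggression, threat):
    # bravery: 0 = terrified .. 1 = heroic; aggression: 0 = defensive .. 1 = aggressive
    return {
        "pursue":      bravery * aggression * threat,
        "flee":        (1.0 - bravery) * (1.0 - aggression) * threat,
        "nervous_aim": (1.0 - bravery) * aggression * threat,
        "hold_post":   bravery * (1.0 - aggression),
    }

personalities = {
    "heroic + aggressive":    (0.9, 0.9),   # -> pursue
    "terrified + defensive":  (0.1, 0.1),   # -> flee
    "terrified + aggressive": (0.1, 0.9),   # -> nervous_aim
}
for name, (brave, aggro) in personalities.items():
    scores = score_actions(brave, aggro, threat=0.8)
    print(name, "->", max(scores, key=scores.get))
```

The same scores could plausibly drive animation blending weights for the body-language side of it.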

I’ve continued the single-player campaign where I left off, and I still find it boring in gameplay and cringy as fvck in the dialogue. God, how much I hate it when a game feels the need to verbally pat me on the back for every little kill I make. Iirc in Killzone 3 that’s even worse, but I only played the demo of that.
During normal campaign gameplay, at the point where I’m at in the game, the majority of combat is sitting in cover and shooting at enemies that sit in cover and occasionally pop out to shoot at you or run out towards you. I don’t care how sophisticated the methods are that make the AI arrive at the conclusion to sit in cover and pop out occasionally; it’s boring gameplay without mechanics designed around making this fun. In a game where you control multiple units, covering fire is an important mechanic, and areas are open enough for flanking maneuvers, this could be made fun and interesting. But this is basically a corridor shooter, and waiting in cover for the enemy to pop out of cover is kinda boring imho. For comparison, Enemy Front has much lower production values, is far less “epic”, and the AI is probably implemented in a much more straightforward way and is much less intelligent, but as a player I can actually do more things. I can sneak up on a two-man guard patrol, silently take the guy in the back hostage with a knife to his throat, shoot the other one in the back of the head with my pistol and slit the remaining nazi’s throat. Or I could mow both down with a silenced Sterling SMG, or kill ’em with a grenade, or throw a stone somewhere to distract them and sneak past them, or try to separate them and take both out silently with a knife. The game gives me options, which leads to decision-making and more engaging gameplay. That’s the prime thing I care about regarding how much fun I’m having.

If you wanna look at the Killzone 2 AI, maybe look for videos of the skirmish mode. You can play multiplayer matches with bots, and I’d imagine that’s where the AI really shines, because it doesn’t need to “hold back on being smart” for the sake of supporting the player’s power fantasy in the solo campaign. I had never played the multiplayer mode before, so I got utterly wrecked by the AI, but that could mainly be an issue of how much better it aims. Also, the skirmish mode made me feel intensely motion sick. It’s probably because in that mode you actually get to move around, instead of crouching behind cover most of the time. The campaign gameplay is much more static, and thus I didn’t get motion sick from it.

When it’s the right tool for the job it’s the right tool for the job. I have no objections against using it.

What you describe sounds like it’s well-suited to a more analog type of AI. Have you tried Apex AI? It seems to be a great tool once people figure it out, but going by the reviews it’s not the easiest to pick up. This is a bit surprising because I think there’s potential for an AI guided by fuzzy logic/valuation to be more intuitive compared to FSMs or behaviour trees.

It certainly sounds like the Killzone games aren’t necessarily the best in terms of giving the player options. Thanks for the rundown.

Btw, although Killzone is relevant to this topic, since it apparently uses some kind of ‘fuzzy logic’ AI and was mentioned by that article, the game I meant when I mentioned the AI that gamers seemed to like a lot was F.E.A.R. Having watched a couple of videos, it certainly seems pretty fun.

https://www.youtube.com/watch?v=cX3mkJcbjrQ

I haven’t quite got a handle on how it works yet, but it certainly seems to be a benchmark.

Having a humanoid rig for AI behavior would make me immediately classify an AI framework as not useful. I.e. it is an auto-fail.

The reason it is an auto-fail is that you specialize the framework in one specific behavior, making it less useful for anything that does not fit your expectations. For example, if in my game humans can fly and your framework expects them to walk, or if I have root motion and your framework has the audacity to expect to drive the character directly. If something like this happens, the framework goes into the trash bin, immediately, because rather than developing a workaround, I’ll just do the stuff myself.

Actually this is the reason why I haven’t spent much time working with the RAIN framework: last time I looked into it (at least a year ago), I quickly discovered that it wants to drive my character movement, root motion examples are non-existent, and the documentation for a “custom motor” is spotty and requires digging through the source code.

So, rather than something like that, here’s what I’d like to have from behavior tree:

  1. Separate scrollable/zoomable window for ai.
  2. Nodes can be created via keyboard.
  3. The window visualizes currently active nodes.
  4. … while the game is running.
  5. Base classes I can quickly extend to make my own nodes, similar to MonoBehaviours.
  6. Absolutely no attempts to drive my character movement. This causes issues when root motion is involved. And if you provide root motion support but expect me to give you access to the animation controller, this ain’t happening either.
  7. Basic sensor packages would be cool, as long as the setup isn’t insanely convoluted.
  8. Communication support. For example, in that “AI CTF” thread I don’t recall anyone having the idea that combatants could possibly talk to each other.

I’ll elaborate on #6:
Basically, if the AI framework provides pathfinding support, rather than “I’ll set your position myself!”, it should work in the fashion of: “This is your position, this is your target position, and this is the direction you want to go in to reach the target.” The “direction I want to go” can be an unnormalized vector towards the next path point.
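So the contract would look something like this (a sketch with hypothetical names, not any real framework’s API): the framework returns advice, and the game’s own motor, root motion included, decides what to do with it.

```python
# Sketch of the 'advise, don't drive' pathfinding contract: the framework
# computes a suggestion, and the game's own motor (root motion included)
# decides how to actually move.

from dataclasses import dataclass

@dataclass
class PathAdvice:
    position: tuple           # where the framework believes the agent is
    target: tuple             # where it ultimately wants the agent to be
    desired_direction: tuple  # unnormalized vector towards the next path point

def advise(position, target, next_path_point):
    dx = next_path_point[0] - position[0]
    dy = next_path_point[1] - position[1]
    return PathAdvice(position, target, (dx, dy))

advice = advise(position=(0.0, 0.0), target=(10.0, 4.0), next_path_point=(2.0, 1.0))
# The framework stops here. Feeding advice.desired_direction into a root
# motion controller (or physics motor, or anything else) is the game's job.
print(advice.desired_direction)   # (2.0, 1.0)
```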

So, in general, I’d be interested in a “general purpose behaviour tree” solution concentrated on higher order logic.

If you’re providing an edge case scenario - “human combatant” or “space fighter combatant”, I would be able to write those myself.

In general, I do not want “use this script and you’ll have a generic combatant that can do one thing”.
Instead I’d like a system where my custom logic can make decisions based on what the object sees, hears and knows.

http://alumni.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf

It simply selects animations with a planner; the Transformers games on PS3 use the same method. Since games have short planning chains, it’s doable in real time.

AiGameDev is a good industry resource, along with the GDC Vault:
http://aigamedev.com/
The free content is good; the paid content is top tier.
Follow their Twitter https://twitter.com/aigamedev?lang=en and check out their free AI lessons, which will put you on the same page as everyone in the industry.

On automating behavior trees:
I have seen some research on baking BTs using a planner, like the F.E.A.R. AI. You should look for it.

On AI in games:
You don’t actually need AI; it’s all about optimizing a task as well as possible. AI in games is actually spectacle: not in the narrow show-offy sense of dramatic lighting, explosions and hero shots, but in the acting sense. Halo got remarked on for its AI because they added barks that clearly spell out the state of the AI, idling behaviors that give it some consistency, and some funny behaviors like the grunts committing suicide or fleeing. The new Zelda is a great example of “acting”. The AI should be good at showing feedback, reacting to the player; what the player doesn’t see isn’t perceived as intelligence, obviously.

On AI in the industry:
The wisdom is that there is no silver bullet at all. The joke term they’ve created now is the “AI sandwich”, i.e. an AI system that mixes all the known tricks depending on what you’re trying to achieve.

On AI sandwich:
Utility trees (UT) are good at appraisal; they go beyond simple utility by not selecting actions first, but by computing intermediate representations like “threat level” (computed from the number of enemies and their HP, equipment, etc.), which feed into more abstract representations like “survivability”, which takes as input the threat level, the allies’ levels, environmental opportunity (maybe computed using an influence map) and the current level. While a UT can drive actions directly by scoring them, it’s better to store the concepts and their weighted activations in a blackboard (aka the memory), which is managed by its own specific logic (i.e. choosing what to forget or not, separating short term and long term; it’s not necessarily complex).

Concepts in the blackboard are then used (instead of the direct input of the sensors) to make decisions, which may be handled by a high-level hierarchical state machine (i.e. fewer transitions to track) that contains BTs (or a mix of state machines and high-level BTs; you can sandwich ad nauseam here, encapsulating a state machine into a BT leaf or having a BT as part of an FSM node, recursively). BTs are good at showing and selecting behavior but are generally bad at transitions and interruptions; also, people should resist implementing actions (low-level behavior) inside the tree, as those are generally defined by script.

A squad or group can have a shared blackboard where members write data that is further appraised by a group AI logic, which activates or inhibits behavior in members by affecting “role” selection in their decision logic, to create coordination. Squad blackboards can be duplicated or merged when groups logically or physically separate or join.

Basically the idea is to separate things into easily readable and manageable steps that help AI design. Separating appraisal logic from decision (goal selection), from action and from memory makes life easier. Many things are hard to debug basically because you mix stuff together and it ends up being too subtle. For example, the blackboard can help separate verb and adverb: the GOAL is encapsulated in the decision system as “attack” (the verb), but a high-level state is passed as a parameter, like “worried” or “angry” (adverbs, which are just sets of movement speed, animation and sight length) and handled through a script (because it’s not discrete) to create the final behavior.
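A skeletal sketch of that separation (hypothetical names, thresholds and formulas, Python for illustration): sensors feed an appraisal step, appraised concepts live in a blackboard with its own forgetting logic, and the decision layer reads only concepts, outputting a verb plus an adverb.

```python
# Skeletal sketch of the appraisal -> blackboard -> decision separation.
# All thresholds and formulas are invented for illustration.

class Blackboard:
    """Memory layer: stores appraised concepts (not raw sensor data)
    and applies its own forgetting logic."""
    def __init__(self):
        self.concepts = {}

    def write(self, name, value):
        self.concepts[name] = value

    def decay(self, rate=0.9):
        for name in self.concepts:
            self.concepts[name] *= rate        # crude short-term forgetting

def appraise(sensors, bb):
    # Raw counts -> intermediate 'threat level' -> more abstract 'survivability'.
    threat = min(1.0, sensors["enemies"] * sensors["avg_enemy_hp"] / 5.0)
    bb.write("threat_level", threat)
    bb.write("survivability", (1.0 - threat) * 0.5 + sensors["allies"] / 10.0)

def decide(bb):
    # Decision layer reads concepts only, and returns verb + adverb.
    verb = "attack" if bb.concepts["survivability"] > 0.5 else "retreat"
    adverb = "worried" if bb.concepts["threat_level"] > 0.6 else "confident"
    return verb, adverb

bb = Blackboard()
appraise({"enemies": 3, "avg_enemy_hp": 1.0, "allies": 4}, bb)
print(decide(bb))   # -> ('attack', 'confident') with these example numbers
bb.decay()          # memory fades between updates
```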


So basically, you’re looking for a framework that offers higher level functionality, such as sensors and communication, without taking control of whether you use it or not?

Visualizing the tree is an interesting concept, though it doesn’t seem to me like realtime visualization would always be useful when the traversal might be very fast. Maybe a breakpoint and rewind/playback would be better? Not of the game world, just the tree itself.

I’m a believer in the future of physics-driven character movement, and I would never personally use root motion anyway (even if it meant a complex trigger setup in the animation for correct foot placement/movement correlation). Animation-driven root motion seems to me to be a very restrictive way of working - maybe it would be needed for a Disney character or something where programming the stylistic movement would be too difficult, but not for a human character imo.

What sort of sensor/communication stuff would you like to see available out of the box?

I can understand that, although it might be useful as a preset of some kind. But the difference between flying/walking is exactly what I think can’t be automated - i.e., that’s the concrete action, which has limited influence on higher-level decision-making. What I think can perhaps be automated is goals and abstract planning, i.e., choosing between satisfying different kinds of rewards based on many different influences of perceived risk and benefit. I think this is where even behaviour trees get very clunky, because they tend to partition decision-making along a very consistent line, which is very difficult to move around in the ‘decision space’ according to changing influences.