AI Questions

I’ve been working on an AI engine as a side project lately. Without going into too much detail, it rates its environment with three factors/emotions: Fear (proximity and power of enemies), Confidence (proximity and power of allies), and Ambition (goals). Fear and Confidence cancel each other out, and if Fear is greater than Ambition, the bot will move to a place where it feels more confident (i.e. towards teammates or away from the enemy). If Ambition is greater, it will move towards its objective.
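In code, the reactive decision looks roughly like this (a minimal sketch, not my actual engine; the names are just for illustration):

```python
class Agent:
    def __init__(self, fear, confidence, ambition):
        self.fear = fear              # proximity and power of enemies
        self.confidence = confidence  # proximity and power of allies
        self.ambition = ambition      # pull of the current goal

    def decide(self):
        # Fear and Confidence cancel each other out.
        net_fear = max(0.0, self.fear - self.confidence)
        if net_fear > self.ambition:
            return "move_toward_allies_or_away_from_enemy"
        return "move_toward_objective"


bot = Agent(fear=0.8, confidence=0.3, ambition=0.4)
print(bot.decide())  # -> move_toward_allies_or_away_from_enemy
```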

The problem with this system is that it can’t plan ahead, it can only react. Given that I’d like to eventually make an RTS, I kind of need an AI that can plan ahead and at least appear to strategize.

What’s the best way to make an AI that can plan ahead?

Interesting concept. Because you’re trying to gear this towards an RTS, have you thought about a dual-layered AI? The AI you describe would work at a unit level… units in an RTS could, and likely should, be reactionary… whereas if you put this AI on the NPC units, and then have an overarching AI that simulates an actual player (aka the commander of the military, or whatever units you have), that layer can do the actual planning and issuing of orders, which then trickle down to the units themselves.
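Roughly something like this (the class names and the order logic are made up just to show the shape of the idea):

```python
class Unit:
    def __init__(self, name):
        self.name = name
        self.order = "idle"

    def receive_order(self, order):
        self.order = order

    def update(self, local_threat):
        # Unit-level AI stays reactionary: it can override its order
        # when the local situation demands it.
        if local_threat > 0.7:
            return f"{self.name}: retreating"
        return f"{self.name}: executing '{self.order}'"


class Commander:
    def plan(self, units, intel):
        # The overarching AI does the planning and trickles orders down.
        order = "attack_enemy_base" if intel["enemy_army_size"] < len(units) else "defend_base"
        for unit in units:
            unit.receive_order(order)


squad = [Unit("marine_1"), Unit("marine_2")]
Commander().plan(squad, intel={"enemy_army_size": 1})
for unit in squad:
    print(unit.update(local_threat=0.2))  # both execute 'attack_enemy_base'
```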

If you are trying to make a more general AI framework (not specific to RTS), you could still use this second AI layer concept… and perhaps separate your AI into two major thought streams: Goal Achievement vs. Instincts.

Goal Achievement would be geared towards accomplishing complex goals… but it would have to be balanced against the needs of Instincts (which you could further break down into your Fear/Ambition/Confidence). As Ambition builds, the Goal Achievement brain becomes more prominent… you could also add in a layer of control for the programmers at this point…

Some AIs could be more emotional/reactionary, so their Ambition score isn’t as effective at phasing in Goal Achievement, versus other AIs which could be far slower to “react” but are very goal oriented.
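As a loose sketch of how that personality knob might blend the two brains (the formula here is purely an assumption):

```python
def choose_brain(ambition, fear, confidence, goal_focus):
    """goal_focus in [0, 1]: 0 = purely emotional/reactionary, 1 = very goal oriented."""
    instinct_pressure = max(0.0, fear - confidence)
    goal_pressure = ambition * goal_focus
    return "goal_achievement" if goal_pressure >= instinct_pressure else "instincts"


# Same situation, different personalities:
print(choose_brain(ambition=0.6, fear=0.5, confidence=0.1, goal_focus=0.9))  # goal_achievement
print(choose_brain(ambition=0.6, fear=0.5, confidence=0.1, goal_focus=0.3))  # instincts
```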

It’s not very far off from your original concept, but a second layer of AI that deals with planning ahead to generate new short term/long term goals might be the way to go.

Would love to see some YouTube videos when you manage to get video-worthy demos fired off. (I love AI :smile:)

It depends on how you implement goals. Typically an RTS uses a higher-level AI (as suggested by DavidB) with relatively simple unit AI (an FSM, or something else that’s very fast). RTS units are usually very short-lived and rarely get to demonstrate real ingenuity (as I recall, the Warhammer 40,000 games have been pushing this area).

Based on your description, you could expand your use of goals to include default goals (what to do when not being ordered around) and directed goals (with priorities). You still need to create a hierarchy of some kind for the computer-directed team(s). It is theoretically possible to duplicate a normal military structure, giving key decision processes to various levels of authority (chain of command), but in a practical implementation the volume of AI processing would likely be too much for the game to maintain a reasonable framerate (unless you went turn-based, so everyone has time to think… real-time strategy… well, most platforms are a little off that mark just yet).
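A rough sketch of default vs. directed goals with priorities might look like this (the structure and numbers are just illustrative):

```python
import heapq
import itertools

class GoalQueue:
    def __init__(self, default_goal):
        self.default_goal = default_goal
        self._heap = []                    # entries: (-priority, tie_breaker, goal)
        self._counter = itertools.count()  # keeps equal-priority goals in issue order

    def issue(self, goal, priority):
        heapq.heappush(self._heap, (-priority, next(self._counter), goal))

    def current(self):
        # Fall back to the default goal when no directed orders are pending.
        return self._heap[0][2] if self._heap else self.default_goal


q = GoalQueue(default_goal="patrol_home_base")
print(q.current())                        # patrol_home_base
q.issue("escort_convoy", priority=2)
q.issue("defend_main_base", priority=5)
print(q.current())                        # defend_main_base
```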

You could look at any of the usual suspects for a planning AI. I’ve been tinkering with a needs-based AI myself, married to a multi-axial dynamic-threshold fuzzy decision algorithm (straight from the AI Game Programming Wisdom 4 book - although there are a couple of places where the math was wrong, and I’m mixing it up some), then stuffing the AI into an IoC service framework for performance reasons.
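For what it’s worth, the general shape of a needs-based, threshold-driven decision looks something like this (a generic sketch, not the book’s actual algorithm):

```python
def pick_action(needs, thresholds, actions):
    """needs/thresholds: need name -> level; actions: need name -> action."""
    # Act on the most urgent need that has crossed its threshold.
    urgent = [(level - thresholds[name], name)
              for name, level in needs.items() if level >= thresholds[name]]
    if not urgent:
        return "pursue_current_goal"
    _, most_urgent = max(urgent)
    return actions[most_urgent]


needs      = {"safety": 0.9, "ammo": 0.4, "objective": 0.6}
thresholds = {"safety": 0.7, "ammo": 0.5, "objective": 0.5}
actions    = {"safety": "fall_back", "ammo": "resupply", "objective": "push_objective"}
print(pick_action(needs, thresholds, actions))  # fall_back
```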

Look forward to hearing how your AI turns out.

Cheers,

Galen

I’m kind of an AI noob :slight_smile: I really like the idea of two-level AI, and it’s something I’ve experimented with a bit in a flight sim game. Each fighter had an AI to chase a target and shoot, and they were commanded by a high-level AI that chose targets for them.

But that kind of goes back to the problem of making an AI plan ahead instead of making immediate choices based on the current situation. I’ve heard a lot about behavior trees, but I’m not quite sure how that would work for something like this.

Disclaimer: Take this with a grain of salt as I am by no means an AI expert (but I have been studying the concepts lately!.. and I stayed at a Holiday Inn last night…)

The “planning ahead” will be rather specific to your game… but the way I see it happening is that your underling AI will collect data and pass it up to the main brain to process, while the brain considers its goal and “plans” ahead. As a concrete example… let’s use Starcraft 2… (love SC2 :smile:)

Let’s say that I am up against a Zerg AI opponent. For the purpose of simplicity, we’ll say that I stay on one base in this game, and that I am a terran who will only build marines.

The Zerg AI will start out knowing nothing about my base (unless you give your AI “cheater” info… in which case you can collect data that way instead of from underlings), so the current Zerg goal is to defeat me. The AI knows nothing, so to accomplish its goal of “defeating me” it will want to build up an economy and scout my base to see what I’m up to. The AI will send its drones to the mineral line to start mining, and let’s say it sends one drone immediately to scout me (it travels to every start point until it discovers me). When the drone gets to my base, it will encounter whatever I’ve built there… so let’s say it sees two barracks and 4 marines. The AI now knows that I have an offensive force of 4 units… and it knows that all of these units are marines. It also knows that I have two unit-producing structures, both barracks.

At this point the AI can start to “plan” ahead. A barracks can create Marines/Marauders/Reapers/Ghosts, so the AI can calculate the probability that I can get any of these units… Marauders/Reapers/Ghosts can be eliminated right away because I had no attached tech lab. Therefore my current capacity is marines only, and it has already observed these marines. The baneling is a great counter to marines, so the Zerg AI will immediately build a gas extractor, collect enough gas and minerals, and then throw down a baneling nest and begin producing banelings and zerglings (banelings to deal with my marines, zerglings to mop up my base and finish me off).
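In code, that “plan from scouted intel” step might look roughly like this (the tech tree snippet and counter table are simplified stand-ins, not real SC2 data):

```python
TECH_TREE = {
    # structure -> units it can produce (very simplified)
    "barracks": {"marine"},
    "barracks+tech_lab": {"marine", "marauder", "reaper", "ghost"},
}

COUNTERS = {
    "marine": ["baneling", "zergling"],
}

def plan_counter(scout_report):
    possible_units = set()
    for structure in scout_report["structures"]:
        possible_units |= TECH_TREE.get(structure, set())
    # Build counters for everything the opponent could plausibly field.
    build_order = []
    for unit in sorted(possible_units):
        build_order += COUNTERS.get(unit, [])
    return build_order


report = {"structures": ["barracks", "barracks"], "army": {"marine": 4}}
print(plan_counter(report))  # ['baneling', 'zergling']
```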

Let’s say the AI scouted me at 5 minutes into the game… and took an additional 5 minutes to get banelings and zerglings. The AI will assume it took me 5 minutes to get 4 marines, and after 5 more minutes it might assume that I can get another 10 marines. So let’s say it generates ~20 zerglings and 10 banelings (the math would have to be worked out). Once it accomplishes the goal of creating “an effective offensive counter”, it then attacks my base, attempts to destroy my army and then my base, and thus accomplishes its primary objective to “defeat” me.
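The back-of-the-envelope extrapolation could be as simple as this (the assumed build rate is made up for illustration):

```python
scouted_marines = 4
barracks_seen = 2
assumed_rate_per_barracks = 1.0       # marines per minute once production ramps up (a guess)
minutes_until_counter_ready = 5.0

projected_marines = scouted_marines + barracks_seen * assumed_rate_per_barracks * minutes_until_counter_ready
banelings_needed = round(projected_marines)   # roughly one baneling per marine
zerglings_needed = 2 * banelings_needed       # zerglings to mop up afterwards
print(projected_marines, banelings_needed, zerglings_needed)  # 14.0 14 28
```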

Bah… the example went longer than I thought… but I hope I got my point across. You’ll want your underling units to send information back up to the “Brain” AI for it to process. The Brain will start with a simplistic goal such as “Win”, and will then break that goal down into smaller goals, such as “Build an economy” and “Scout opponent”. After data is collected, the short-term goals get re-evaluated, and so on.
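So the Brain’s goal breakdown might look something like this (all names illustrative):

```python
def expand_goal(goal, intel):
    # "Win" decomposes into sub-goals, which get re-evaluated as intel arrives.
    if goal == "win":
        subgoals = ["build_economy", "scout_opponent"]
        if intel.get("enemy_army"):
            subgoals.append("build_counter_force")
        if intel.get("counter_force_ready"):
            subgoals.append("attack_enemy_base")
        return subgoals
    return [goal]   # leaf goals go down to the unit layer as orders


print(expand_goal("win", intel={}))
print(expand_goal("win", intel={"enemy_army": {"marine": 4}}))
```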

Anyways hope this helps!

Starcraft 2 is overrated. Supreme Commander 2 is where it’s at! :slight_smile:

Anyways, I think I see. From what you described, a behavior tree would probably work best here, given the concept of a simple task (Win) being broken up into multiple parts.
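From what I’ve read, a tiny behavior tree is basically just Sequence and Selector nodes over tasks, something like this (not any particular library’s API, just the idea):

```python
def sequence(*children):
    def run(state):
        return all(child(state) for child in children)   # fails at the first failed child
    return run

def selector(*children):
    def run(state):
        return any(child(state) for child in children)   # first success wins
    return run

# Leaves return True on success; a real tree would also support a "running" state.
have_economy  = lambda s: s["minerals"] > 400
build_economy = lambda s: bool(s.update(minerals=s["minerals"] + 400)) or True
army_ready    = lambda s: s["army"] >= 20
attack        = lambda s: print("attacking!") or True

win = sequence(
    selector(have_economy, build_economy),   # get an economy one way or another
    sequence(army_ready, attack),            # then attack once the army is ready
)

state = {"minerals": 100, "army": 25}
win(state)   # builds up the economy, then attacks
```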

The thing I still don’t get is when to make the AI interrupt its plan and go into panic mode, if you will :slight_smile: Using the Starcraft example again, let’s say that the AI sends every unit it has to your base, but you actually slip half your army around the other side of the map and start attacking its base… and it has no defenders. (Ideally, the AI would be smart enough to leave defenders, but ignore that for now.)

A human would probably swear loudly, put his plan on hold, and run back to his base to wipe out the attackers. Somehow the AI would have to detect things that are “panic worthy” and put the plan on hold. And depending on the situation, some things may not be panic worthy… for example, in the scenario above, the AI would probably be able to win the game simply by ignoring the attack: half the enemy’s army in my base means only half their army is defending their own base.
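Maybe the “panic worthy” check could be something like this, weighing the threat at home against what finishing the plan is worth (the scoring is invented):

```python
def should_interrupt(plan_value, threat_to_base, base_value, panic_threshold=0.5):
    """Interrupt only if the expected loss at home outweighs finishing the plan."""
    if threat_to_base < panic_threshold:
        return False                                  # small harassment: ignore it
    return threat_to_base * base_value > plan_value


# Half the enemy army is in my base, but my own attack will likely end the game first:
print(should_interrupt(plan_value=100, threat_to_base=0.6, base_value=120))  # False -> keep pushing
# Weak attack of my own, valuable base: pull back and defend.
print(should_interrupt(plan_value=40, threat_to_base=0.8, base_value=120))   # True -> go home
```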

Sorry for thinking out loud and cluttering up the thread :slight_smile:

This is where the difficulty comes in… making an AI realistic, versus min/maxing goal achievement decisions.

You could have the AI brain NEVER want to lose its base… or give it some sliding-scale probability that it will let its base go for a “fast win”. So if your AI is defined to value maintaining its base at all times, the AI would withdraw its troops and build more troops at its base to deal with the counterattack.

If the AI was super aggressive/careless, it would drill hard at the enemy’s base and try to end the game ASAP… and maybe collect the needed minerals and make an expansion closer to where its armies are.
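That sliding scale could literally be a probability the Brain rolls against when its base comes under attack (just a sketch of the idea):

```python
import random

def respond_to_counter_attack(base_attachment, rng=random):
    """base_attachment in [0, 1]: 1.0 never sacrifices the base, 0.0 goes all-in."""
    if rng.random() < base_attachment:
        return "withdraw_and_defend"
    return "keep_attacking_and_expand_forward"


print(respond_to_counter_attack(base_attachment=0.9))   # usually withdraws
print(respond_to_counter_attack(base_attachment=0.1))   # usually keeps attacking
```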

The actual decision priorities will have to be defined by you… or at least the framework for coming up with these priorities must be defined by you. Hence the “easy”, “medium”, “hard”, and “insane” AIs… easy means it builds one troop and it’s happy. Medium will build a weak economy and attack you… hard will try to expand, build a sizeable army, scout, and attack. Insane gets cheater minerals and info, has lower build times, and does the same thing hard does, etc.
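One way to express those tiers is as plain data the Brain reads at startup (the fields and values here are placeholders):

```python
DIFFICULTY = {
    "easy":   {"expand": False, "scout": False, "army_cap": 1,  "cheat_info": False, "build_time_mult": 1.0},
    "medium": {"expand": False, "scout": False, "army_cap": 20, "cheat_info": False, "build_time_mult": 1.0},
    "hard":   {"expand": True,  "scout": True,  "army_cap": 60, "cheat_info": False, "build_time_mult": 1.0},
    "insane": {"expand": True,  "scout": True,  "army_cap": 60, "cheat_info": True,  "build_time_mult": 0.8},
}

settings = DIFFICULTY["hard"]
print(settings["expand"], settings["army_cap"])   # True 60
```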

In short, you’ll have to design this sort of decision priority into your AI Brain.