AI thoughts

Ok, so I’ll walk through how I arrived at these thoughts. First off, today is my birthday, and I was thinking about how that really doesn’t mean as much as you get older. That, in turn, got me thinking about how much faster time goes when you get older… but that it makes sense because each unit of time becomes a smaller percentage of the overall time you have been alive.

That got me thinking about the perception of numbers – how the difference between 1 and 2 is 1, and the difference between 1001 and 1002 is also 1, yet we perceive something very different between those two. That is something built into how humans view things that isn’t built into how computers view things. Which got me thinking about AI and the concept of intentional flaws in reasoning to make more “human” components.

This is in the same spirit of not letting the AI “cheat” by limiting access to information that a human player would also have (like not giving it a full resource map) and doing things like building in some degree of inaccuracy to aim, but it goes beyond that…

I think that a big part of making an AI believable would be to have it view numbers as non-linear when it is evaluating things to make decisions. This would introduce a very human flaw in assessing things.

Other human flaws would be interesting too. For example, people seem to have a tendency to think of themselves as being exceptional. What if AI marched into battle with too small a force because it underestimated the player? Or if it delayed in sending reinforcements because it overestimated its own abilities?

Combine some of those things together and you would get scenarios like:

AI encounters player’s settlement. Underestimates strength of forces and sends too small a group to wipe it out.

AI is losing the battle, but early on it isn’t motivated to send reinforcements. In gauging overall loss of units, non-linear number thinking means that it doesn’t have an accurate feel for how bad things are really going.

On an individual level, AI units are quicker to see how poorly things are going (they are looking at personal damage versus group losses) and lines begin to break. Because the AI didn’t send reinforcements, these units leaving start to dramatically increase the overall perception of loss.

The AI recalculates losses and notices a big increase (increase was actually linear, but perception of numbers plays a factor) so it orders retreat. Forces regather, but based on this defeat, the AI now acts as if the player is a much more powerful force than it really is and devotes a disproportionately high amount of resources to a counter attack.

These things are all very human behaviors, but they all manifest from something like mapping a linear number scale to a logarithmic one and on having some initial bias towards underestimating an enemy or overestimating the AI’s strength.
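That linear-to-logarithmic mapping is easy to prototype. Here’s a minimal sketch in Python (the function name and the numbers are just illustrative) that scores losses by log ratio instead of absolute difference:

```python
import math

def perceived_loss(before: float, after: float) -> float:
    """Score a change as a log ratio (Weber-Fechner style) rather
    than an absolute difference. Illustrative sketch only."""
    if after <= 0:
        return float("inf")  # total annihilation feels catastrophic
    return math.log(before / after)

# Losing 10 units out of 20 "feels" far worse than losing 10 out of
# 1000, even though the absolute loss is identical:
print(perceived_loss(20, 10))     # log(2) ≈ 0.693
print(perceived_loss(1000, 990))  # log(1000/990) ≈ 0.010
```

An AI gauging its losses through a curve like this would, as described above, badly misjudge small attrition on a large force until it suddenly looks dramatic.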

Ok, I’ll end this now.

Hi Charles,

Happy Birthday!!! :wink:

And: Very interesting thoughts :wink:

Sunny regards from this rainy place,
Jashan

In my opinion, the key to making games fun is Artificial Stupidity.

There’s an ancient arcade game named Elevator Action that first got me thinking this. The bad guys in the game had a small number of possible actions – emerge from a door, disappear into a door, look around, stand around doing nothing, walk in some direction, crouch, lie down, shoot – and they would perform these actions more-or-less rationally if they could see the player or knew where the player was (having seen the player go there), and more-or-less randomly the rest of the time.

In action games it’s easy to make enemies almost perfect – they can have better pathfinding, perfect reaction speed, perfect aim, know how much ammo they have without checking their display etc. The thing that makes enemies seem human is their imperfect behavior – shooting in the wrong direction, panicking, getting lost, and so on.

This insight hasn’t escaped other game designers – just look at the behavior of guards in THIEF (not only are their conversations hilariously stupid, they engage in various stupid behaviors – like panicking when they see a fellow guard’s corpse).

Oh, and happy birthday :slight_smile:

Happy birthday Charles …

AI in general has always interested me in a number of ways, ranging from gaming to object recognition to context-sensitivity in interpreting the written word.

Lately though, I’ve been tossing around ideas toward creating an AI system that can learn new information relative to information already known. As new information is added, the AI becomes better able to anticipate the user’s requests.

For example, the AI would first start out with a baseline set of criteria known to it. This could include the data the system might need in order to display some object, along with the characteristics of the environment the object is presented in. Such criteria could include things like the object’s bounding size, the object’s position on screen, its color (in RGB values) and whether or not the object itself exists within the environment.

Starting out, a simple object could be added to the environment by the user. At this point the AI only knows the most basic information (size, position, RGB color and the fact that it exists). Next, the user would be asked to describe the object while linking the description to a piece of existing criteria.

To begin, the AI would inquire about the “kind” of object it’s currently being presented with… knowing only that it’s an “object” with added criteria lacking context.

A good starting point might be to describe the size of our object to the AI. So for the first “kind” inquiry the user might respond that the object is “a large object” and direct the AI’s attention to the bounding size data. From this, the AI now knows that the keyword “large” has something to do with the object’s bounding size data, but still doesn’t completely understand what “large” is.

Next, we introduce another object, only smaller. Again, we define this to the AI as “a small object”, linking it to the bounding size data. Now the AI has multiple descriptions linked to the bounding size data of our objects that it can work with to get an idea of what’s implied by “large” and “small”. Depending on how its deductive abilities are programmed, this could be enough information for the AI to conclude that “large” objects have a bounding size value greater than that of objects carrying the “small” description. (And that “small” objects have a bounding size value less than that of objects carrying the “large” description.)
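A minimal sketch of that learning step (assuming a simple store of labeled examples; the names and sizes here are made up):

```python
# Labeled bounding-size examples: the AI links each keyword the
# user supplies to the bounding-size data it already holds.
labeled_sizes: dict[str, list[float]] = {}

def describe(label: str, bounding_size: float) -> None:
    labeled_sizes.setdefault(label, []).append(bounding_size)

def interpret(bounding_size: float) -> list[str]:
    """Return every label this size is consistent with."""
    labels = []
    # "large" means bigger than every known "small" object...
    if "small" in labeled_sizes and bounding_size > max(labeled_sizes["small"]):
        labels.append("large")
    # ...and "small" means smaller than every known "large" object.
    if "large" in labeled_sizes and bounding_size < min(labeled_sizes["large"]):
        labels.append("small")
    return labels

describe("large", 10.0)  # user: "a large object"
describe("small", 1.0)   # user: "a small object"
print(interpret(5.0))    # ['large', 'small'] – a mid-sized object fits both
```

Note that a mid-sized object qualifies as both, which matches exactly the ambiguity the third object introduces below.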

Now that we’ve established a baseline of “small” and “large” objects, we can step things up a bit and introduce context sensitivity toward articles (a/an/the) in user requests to the AI.

At the moment, asking the AI to select “the large object” and “a large object” would give us the same result of the larger object being selected. However, if the user simply requested to select “an object”, either could be chosen as both items are “objects”. Yet, if the user requests the AI to select “the object”, the AI would need to request more specific criteria (in this case, we currently only have “large” and “small” to choose from) before proceeding to select either object.

Moving on, we now introduce a new, third object with no current description, whose bounding size is between that of our large and small objects.

Among these three objects we now have a chance to test our AI’s interpretation of “small” and “large”, by making a few selection requests.

Again, depending on how the AI is designed, it may have reached the conclusion that “small” objects have a smaller bounding size than “large” objects, as well as the conclusion that “large” objects have a larger bounding size than “small” objects. This means, based upon our request, the newest object could be interpreted as “large” or “small”.

Now, if we request the AI to select a “small” object, it might choose either object smaller than our “large” object. Yet, if we request it to select a “large” object, it might choose either of the objects that are larger than our “small” object.

Next, if we request the AI to select the “large” object, it should choose the largest object by default (and vice-versa), depending on its interpretation of the “large” description.

Finally, if we request the AI to select the “large” objects (plural), it should select everything, but the “small” object. (And again for the “small” objects…)
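A rough sketch of the article handling walked through above (singular requests only; the object names and the request format are hypothetical):

```python
def select(request: str, objects: dict[str, float]) -> list[str]:
    """Resolve e.g. "the large object" against named object sizes.
    Hypothetical sketch; plural handling is omitted."""
    article, label = request.split()[:2]
    ranked = sorted(objects, key=objects.get)  # smallest -> largest
    if label == "large":
        candidates = ranked[1:]   # anything bigger than the smallest
    elif label == "small":
        candidates = ranked[:-1]  # anything smaller than the biggest
    else:                         # bare "object": no size criterion
        candidates = ranked
    if article == "the" and label in ("large", "small"):
        # The definite article commits to the extreme of the description.
        return [candidates[-1] if label == "large" else candidates[0]]
    return candidates             # "a"/"an": any qualifying object

objects = {"crate": 1.0, "barrel": 5.0, "house": 10.0}
print(select("the large object", objects))  # ['house']
print(select("a large object", objects))    # ['barrel', 'house']
```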

Anyway, the idea here is that if we developed AIs to be context sensitive to information they already know, as a means of generating conclusions beyond the scope of the initial code and data set, you could end up with an impressively flexible method for dealing with situations not anticipated ahead of time. If such a method was allowed to modify its own data and nest several layers of criteria checking when running, it could allow for an incredible amount of adaptability when exposed to a finite setting, such as the scene within a game.

Ideally, such a system would have the user interacting with the AI at all times, simply by playing the game. (Kind of like a god that actually pays attention to you, and adjusts your world according to your activity and progress.)

Personally, I’m really curious to see how such a setup might work in a sandbox-style game like the Grand Theft Auto series. For example, the AI in that scenario could start out as benign, letting you roam the streets as a complete unknown while observing your activity and habits, but later become increasingly malevolent toward you by letting people recognize you as a threat or a target for attack. Based upon how the situation plays out, the AI could start ramping things up as individuals band together into random groups of thugs. A bit later, the AI gets the police in on things, reacting every time they see you, first individually and eventually as massive specialized task forces. Meanwhile, the AI directs the street thugs to grow into large street gangs, eventually moving into city-wide crime syndicates, all hell-bent on destroying you.

The longer you manage to survive, the more the city organizes itself against you.

Keep in mind this is all occurring outside the scope of the AI’s initial design. The AI is learning from you on how to best destroy you.

Most likely, I’ll never actually get around to designing something this sophisticated, but I think we could have far better and more adaptive AIs in gaming than we currently have. If we could get past the idea that games need to be predictable, we could probably have far more dynamic experiences when playing. If you doubt this, just look at how well games supporting online multiplayer do within the industry. There is a huge demand for dynamic gameplay where the experience isn’t always identical to the previous session.

Thanks to everyone for the birthday well-wishes.

@Jashan - it was rainy here too. We’ve actually had a couple days of incredibly intense storms and were under flood warnings.

@podperson - I agree, “Artificial Stupidity” can be vital for a fun game. And not just for comic relief either… I think it may be that efficiency seems so cold.

@Bones3D - What you are describing reminds me a little of a project that I was doing at my job before starting Unity Developer Magazine. Basically, I was building a semantic network to describe graphic design so as to build a relational ontology for the connectedness of visual concepts. It was one of those bonus projects that you get to work on to keep sane.

Some of my initial tests involved developing a visual interface for mapping the culturally constructed linkages that describe the effect of color on human behavior and feeling within a perceptually normalized three-dimensional color space. Basically simulating color points as masses connected by springs with resting lengths based on calculations of Delta E (CIE 1994). It was a proof of concept to explore the psychological influence of a color prototype in three dimensional space. I got to present at a symposium on it, which was fun.

I started by expanding into a data set of culturally constructed metaphors and a set of factual assertions to accompany it. These aren’t real examples; the structure I chose was much more rigid:

An eagle represents freedom to an American.
An eagle is a bird.
A bird can fly.
Birds have wings.

I got somewhere like 50,000 assertions into the project before realizing the scope of what I was doing. At that point, I decided it was better to build systems that could specialize in parts (like the color example) and have each system as part of an overall network. I had modest success… in the end, I had a network able to produce work that was comparable to that of the sophomore design students that I was teaching at the time.

That said, I don’t know if that is because my system was good or because my teaching was bad.

Charles:
What you and podperson describe sounds very much like fuzzy logic.

All:
For those interested, I can recommend sifting through the articles at http://ai-depot.com/ and http://aigamedev.com/

Caution: disparagement of philosophers follows!

Fuzzy Logic is, I guess, what any AI needs to use to reduce complex data to simple internal state. I don’t think labeling it is terribly helpful though. It doesn’t appear that the proponents of fuzzy logic have a huge toolkit to hand over to us, they’re merely interested in discovering whether fuzzy logic (for some value of fuzzy logic) satisfies more or fewer or the same number of completeness and consistency results as ordinary logic. Since fuzzy logic seems to be a subset of ordinary logic*, this seems like a way for Philosophy students studying logic to get PhDs rather than a useful area of inquiry.

  • Fuzziness is just a kind of predicate. Predicate logic is just logic with syntax sugar for adjectives. OK this is handwaving but it’s pretty convincing so voila – you get Godel’s results for Fuzzy Logic for free! If that doesn’t convince you, how about this: Godel’s results apply to any consistent set of rules, and a Fuzzy Logic system which isn’t internally consistent isn’t terribly useful. Third proof: a consistent system of fuzzy logic can be programmed into a Turing machine. Turing machines are devices that can execute any consistent set of rules (Turing basically proved Godel’s theorem independently) so QED. Time to amend the Wikipedia article…

Randomness Agency

One of the fascinating aspects of MMORPGs is that random tables are perceived as having all kinds of AI behind them. E.g. in WoW there was (is!) a widely believed rumor that the class of the player who formed a raid affected loot drops in raid instances. The code to do that would be quite ridiculously complex, and the results pointless, but this was widely believed. In most MMORPGs there are similar superstitions about almost every game system, and almost every such game system is simply a (weighted) random distribution (2% chance of this, 1% chance of that … go now and invent religion).
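For the record, the machinery behind those superstitions is usually only a few lines long. A sketch (the items and weights are invented):

```python
import random

# A weighted loot table of the kind described above. Players will
# read intent into it, but it's just a distribution.
LOOT_TABLE = [("epic sword", 2), ("rare shield", 8), ("gold", 90)]

def roll_loot(rng: random.Random) -> str:
    items = [item for item, _ in LOOT_TABLE]
    weights = [weight for _, weight in LOOT_TABLE]
    return rng.choices(items, weights=weights, k=1)[0]

rng = random.Random(42)
drops = [roll_loot(rng) for _ in range(1000)]
# Over many rolls the frequencies approach the weights – regardless
# of who formed the raid.
```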

As an aside – it’s VERY easy to see how superstitions and religions arise from human beings ascribing agency to random events.

When an enemy AI is implacably efficient it ironically exhibits less “agency” than an AI which exhibits “random” behavior. When the Elevator Action guys pace up and down while not able to see the player, they look “bored” or “confused” and not “random”. But a missile that simply tracks you relentlessly doesn’t seem alive or clever.

Artificial Stupidity – Formally Defined

Give an AI a palette of individually reasonable behaviors to execute and an internal state machine reflecting “state of mind”. Each state can pick a behavior probabilistically at different time intervals. (Refine this as much as you like – simplest case is one state, random behavior selection; more complex examples might be “trying to kill player”, “confused”, “terrified of player”, “bored”, “asleep” with behaviors such as “aim and shoot”, “charge”, “flee”, “run around randomly”, “look around aimlessly”, “self-destruct” – you can see some fairly obvious mappings between state and behavior, I’m sure.)

You can easily vary the quality of enemy AIs by varying (a) the frequency with which they are allowed to update their mental state and/or select new behaviors (and the appropriateness of their mental state), (b) the quality of decision making when it comes to selecting new behaviors, (c) the quality of execution of those behaviors.

E.g. a great pilot might be (a) faster and more likely to change from “bored” to “trying to kill player” owing to better “eyesight”, more frequent updates, etc., (b) more likely to pick “charge” and “aim and shoot” appropriately, and less likely to pick “run around randomly” or “self destruct”, and (c) better able to execute “aim and shoot” or “charge”.
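A bare-bones sketch of that state/behavior split (the states, behaviors and weights here are all illustrative):

```python
import random

# Each "state of mind" maps to a weighted palette of behaviors;
# skill is tuned by shifting the weights and update parameters.
BEHAVIORS = {
    "bored":   {"look around aimlessly": 0.7, "run around randomly": 0.3},
    "hostile": {"aim and shoot": 0.6, "charge": 0.3, "run around randomly": 0.1},
}

def pick_behavior(state: str, rng: random.Random) -> str:
    palette = BEHAVIORS[state]
    return rng.choices(list(palette), weights=list(palette.values()), k=1)[0]

def update_state(state: str, sees_player: bool, alertness: float,
                 rng: random.Random) -> str:
    """A great pilot has high alertness: it flips from "bored" to
    "hostile" quickly. A poor one takes noticeable time to react."""
    if sees_player and rng.random() < alertness:
        return "hostile"
    if not sees_player:
        return "bored"
    return state  # saw the player but hasn't registered it yet
```

Varying `alertness`, the behavior weights, and how often these two functions are called gives you the (a)/(b)/(c) quality knobs described above.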

Winding up – imagine how much more “alive” an enemy AI will be who (a) takes noticeable time to react to certain stimuli (e.g. switch from being “bored” to trying to kill you) or (b) doesn’t always pick the correct behavior (e.g. runs in the wrong direction).

I might add that implacable AIs are frequently EASIER to deal with than less predictable AIs. Artificially Stupid AIs are FAR less likely to get stuck on landscape…

In what I was doing, the part where I was dealing with decisions on set inclusion as a distance from a prototype in a conceptual space was essentially making use of fuzzy logic, yeah.

It’s a bit different in my particular example, in that you aren’t really trying to toss random facts into a massive database. Instead, the AI in my concept bases its decision-making on data that has quantifiable values.

A better example of this might be a method of defining colors to the system.

So initially, I might start out with a single object with a color value of (0, 0, 255) in RGB and tell the system that the object is a “blue” object, while linking “blue” to the RGB data.

At first, this would tell the AI that blue is anything with the RGB value of (0, 0, 255). But further in, I could add in other objects of varying colors, ranging from full blue to no blue whatsoever.

The idea is that after enough definitions are added, the AI would eventually conclude that “blue” not only equals (0, 0, 255), but could also include any object where the average of the red and green color channels in the RGB data is less than the value of the blue channel. (Or simply, “blue” = any scenario where (R+G)/2 < B is true.)

This means any object with a bluish tint might qualify as “blue” if the AI is requested to select all “blue” objects… as opposed to what is simply limited to the blue channel value alone.
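That generalized rule is tiny once learned – something like this (the function name is just for illustration):

```python
def is_blue(r: int, g: int, b: int) -> bool:
    """The generalized "blue" rule described above: blue-dominant
    colors qualify, not just the exact value (0, 0, 255)."""
    return (r + g) / 2 < b

print(is_blue(0, 0, 255))      # True – the original definition
print(is_blue(100, 100, 200))  # True – a bluish tint qualifies
print(is_blue(255, 0, 0))      # False – pure red does not
```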

This sounds a bit like a Bayesian network to me (think of how a spam filter learns which words indicate spam). These are based on the statistics of how often a particular piece of “evidence” occurs for a specified concept.

I don’t know if there is any Unity-compatible Bayesian library, though.
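Even without a library, the core of the spam-filter idea is small enough to hand-roll. A rough naive-Bayes sketch (class name and training words are invented; add-one smoothing, log scores):

```python
import math
from collections import Counter

class NaiveBayes:
    """Count how often each word of "evidence" occurs per label,
    then classify by summed log-likelihoods. Illustrative sketch."""
    def __init__(self, labels: list[str]):
        self.counts = {label: Counter() for label in labels}
        self.totals = {label: 0 for label in labels}

    def train(self, label: str, words: list[str]) -> None:
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def classify(self, words: list[str]) -> str:
        def score(label: str) -> float:
            total = self.totals[label] + 1
            # Add-one smoothing so unseen words don't zero out a label.
            return sum(math.log((self.counts[label][w] + 1) / total)
                       for w in words)
        return max(self.counts, key=score)

nb = NaiveBayes(["spam", "ham"])
nb.train("spam", ["buy", "now", "cheap"])
nb.train("ham", ["meeting", "tomorrow", "notes"])
print(nb.classify(["buy", "cheap"]))  # spam
```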

Bones3D:

That particular example could be solved using a neural network.

It’s probably very close to a Bayesian setup in some ways, but I actually haven’t done any research into how Bayesian filtering works. (It does interest me quite a bit though…)

Some other areas I have worked with to some extent are a couple short-lived utilities that existed as sort of a bastard child of those old “mad lib” games and a search engine backed by context sensitivity.

You can download a utility I wrote for Haxit from here, but keep in mind that this thing is ancient and needs to be run on a PowerPC-based Mac that supports either Classic mode or Carbon.

Had it been developed further, it probably would have been marketable as a mass plagiarism utility, capable of retaining high accuracy relative to the source document while exporting a new document with enough changes in wording to beat most of the online plagiarism filters that colleges and schools tend to require these days.

In some sense, I had hoped enough people would eventually use Haxit and my utility as a means of flooding such plagiarism detection tools to the point that they’d become unusable. (Since many of these tools aggregate student documents, on top of formally published documents, into their own databases.)

My theory was that such a system could be rendered unusable once it encountered a large enough number of recursively plagiarized versions of the same document… for each original document submitted to it.

Considering that the English language has only a little over 800,000 words in its vocabulary, it would only be a matter of time before a proper combination of English words in any given sentence would indiscriminately become flagged as plagiarized across the board, making such a utility an exercise in futility for those who forced it upon students.

* update *

Weird… looking at the app today, it is eerily similar to my AI concept, only way more primitive. Probably would explain why the nature of the AI seemed so obvious to me when I first started playing around with the idea a couple months ago.

One thing with “Stupid” vs. “Perfect” AI: it is often trivial to make certain actions perfect, such as aiming or math calculations, but there are many forms of human intelligence that are nearly impossible to code into an AI. Often an AI needs to be perfect and “cheat” to be any match for a player. I think I read that in all the Civ games, the higher difficulty levels mostly just gave the computer opponent a massive bonus to industry/commerce with little change in the base AI. No matter how perfect an AI is, there are almost always ways in which it is very, very dumb.