AI in general has always interested me in a number of ways, ranging from gaming to object recognition to context-sensitivity in interpreting the written word.
Lately though, I’ve been tossing around ideas for an AI system that can learn new information relative to what it already knows. As new information is added, the AI becomes better able to anticipate what the user is asking for.
For example, the AI would start out with a baseline set of criteria known to it: the data the system needs in order to display some object, along with the characteristics of the environment the object is presented in. Such criteria could include things like the object’s bounding size, its position on screen, its color (as RGB values) and whether or not the object currently exists within the environment.
Starting out, a simple object could be added to the environment by the user. At this point the AI only knows the most basic information (size, position, RGB color and the fact that it exists). Next, the user would be asked to describe the object, linking the description to one of the existing criteria.
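As a rough sketch, the baseline criteria might look something like this. The field names and the `describe` method are hypothetical, just one way the data could be laid out, not a definitive design:

```python
# A minimal sketch of an object's baseline criteria, plus a way to
# link a user-supplied keyword ("large") to one of those criteria.
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    bounds: float            # bounding size
    position: tuple          # (x, y) position on screen
    color: tuple             # (r, g, b) color values
    exists: bool = True      # whether the object is in the environment
    descriptions: dict = field(default_factory=dict)  # keyword -> criterion name

    def describe(self, keyword, criterion):
        """Record that a description keyword relates to a known criterion."""
        self.descriptions[keyword] = criterion

obj = WorldObject(bounds=10.0, position=(50, 50), color=(255, 0, 0))
obj.describe("large", "bounds")   # "a large object" -> bounding size data
```

At this point the AI only knows that “large” has *something* to do with `bounds`; it has no idea what the word actually means yet.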
To begin, the AI would inquire about the “kind” of object it’s currently being presented with… knowing only that it’s an “object” with attached criteria that lack context.
A good starting point might be to describe the size of our object to the AI. So for the first “kind” inquiry the user might respond that the object is “a large object” and direct the AI’s attention to the bounding size data. From this, the AI now knows that the keyword “large” has something to do with the object’s bounding size data, but still doesn’t completely understand what “large” is.
Next, we introduce another object, only smaller. Again, we define this to the AI as “a small object”, linking it to the bounding size data. Now the AI has multiple descriptions linked to the bounding size data of our objects, which it can use to get an idea of what’s implied by “large” and “small”. Depending on how its deductive abilities are programmed, this could be enough information for the AI to conclude that “large” objects have a bounding size greater than that of objects carrying the “small” description (and that “small” objects have a bounding size less than that of objects carrying the “large” description).
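That deduction step could be sketched as follows, assuming the AI simply records the range of bounding sizes it has seen for each keyword and compares the ranges. The function name and data layout are my own, for illustration only:

```python
# Sketch of the deduction: given labelled examples, track the range of
# bounding sizes seen per keyword, so "large" vs "small" can be compared.

def infer_ranges(examples):
    """examples: list of (keyword, bounding_size) pairs.
    Returns {keyword: (min_size_seen, max_size_seen)}."""
    ranges = {}
    for keyword, size in examples:
        lo, hi = ranges.get(keyword, (size, size))
        ranges[keyword] = (min(lo, size), max(hi, size))
    return ranges

# One "large" object and one "small" object, as in the example above.
ranges = infer_ranges([("large", 10.0), ("small", 2.0)])

# The AI can now conclude every "large" size seen exceeds every "small" one.
assert ranges["large"][0] > ranges["small"][1]
```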
Now that we’ve established a baseline of “small” and “large” objects, we can step things up a bit and introduce context sensitivity toward articles (a/an/the) in user requests to the AI.
At the moment, asking the AI to select “the large object” and “a large object” would give us the same result of the larger object being selected. However, if the user simply requested to select “an object”, either could be chosen as both items are “objects”. Yet, if the user requests the AI to select “the object”, the AI would need to request more specific criteria (in this case, we currently only have “large” and “small” to choose from) before proceeding to select either object.
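One way to sketch that article sensitivity: “a”/“an” accepts any matching object, while “the” demands exactly one match, otherwise the AI has to come back and ask for more specific criteria. The helper below is hypothetical (including the simple size threshold that stands in for the AI’s learned notion of “large”):

```python
# Sketch of article-sensitive selection: "a"/"an" returns any match,
# "the" requires a unique match (None here means "ask for more criteria").

def select(article, keyword, objects, threshold):
    """objects: dict of name -> bounding size. keyword: 'large', 'small' or None."""
    if keyword == "large":
        matches = [n for n, s in objects.items() if s > threshold]
    elif keyword == "small":
        matches = [n for n, s in objects.items() if s <= threshold]
    else:
        matches = list(objects)              # bare "object": everything qualifies
    if article == "the":
        if len(matches) == 1:
            return matches[0]
        return None                          # ambiguous: request more criteria
    return matches[0] if matches else None   # "a"/"an": any match will do

objs = {"big": 10.0, "little": 2.0}
select("the", "large", objs, 5.0)   # unique match: "big"
select("the", None, objs, 5.0)      # ambiguous "the object": needs more criteria
```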
Moving on, we now introduce a new, third object with no current description, whose bounding size is between that of our large and small objects.
Among these three objects we now have a chance to test our AI’s interpretation of “small” and “large”, by making a few selection requests.
Again, depending on how the AI is designed, it may have reached the conclusion that “small” objects have a smaller bounding size than “large” objects (and vice-versa). This means that, based upon our request, the newest object could be interpreted as either “large” or “small”.
Now, if we request the AI to select a “small” object, it might choose either object smaller than our “large” object. Yet, if we request it to select a “large” object, it might choose either of the objects that are larger than our “small” object.
Next, if we request the AI to select the “large” object, it should choose the largest object by default (and vice-versa for “small”), depending on its interpretation of the “large” description.
Finally, if we request the AI to select the “large” objects (plural), it should select everything but the “small” object. (And likewise for the “small” objects…)
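The three-object behaviour above can be sketched as one candidate-selection helper. Everything here is an assumption about how the comparisons might be encoded: “small” means smaller than the known “large” object, “large” means larger than the known “small” object, and the definite article defaults to the extreme:

```python
# Sketch of selection among three objects: a known "small", a known
# "large", and an unlabelled middle object.

def candidates(keyword, objects, small_size, large_size):
    """Objects matching a comparative keyword, given the sizes of the
    known 'small' and 'large' reference objects."""
    if keyword == "large":
        return [n for n, s in objects.items() if s > small_size]
    return [n for n, s in objects.items() if s < large_size]

objects = {"small": 2.0, "middle": 5.0, "large": 10.0}

# "a small object": either object below the known "large" size.
candidates("small", objects, 2.0, 10.0)                      # small or middle
# "the large object": default to the single largest candidate.
max(candidates("large", objects, 2.0, 10.0), key=objects.get)
# "the large objects" (plural): everything but the "small" object.
candidates("large", objects, 2.0, 10.0)                      # middle and large
```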
Anyway, the idea here is that if we developed AIs to be context-sensitive to information they already know, as a means of generating conclusions beyond the scope of their initial code and data set, we could end up with an impressively flexible method for dealing with situations that weren’t anticipated ahead of time. If such a method were allowed to modify its own data and nest several layers of criteria checking while running, it could allow for an incredible amount of adaptability within a finite setting, such as the scene within a game.
Ideally, such a system would have the user interacting with the AI at all times, simply by playing the game. (Kind of like a god that actually pays attention to you, and adjusts your world according to your activity and progress.)
Personally, I’m really curious to see how such a setup might work in a sand-box style game like the Grand Theft Auto series. For example, the AI in that scenario could start out as benign, letting you roam the streets as a complete unknown while observing your activity and habits, but later becoming increasingly malevolent toward you by letting people recognize you as a threat or target for attack. Based upon how the situation plays out, the AI could start ramping things up as individuals band together into random groups of thugs. A bit later, the AI gets the police in on things, reacting every time they see you, first individually and then eventually as massive specialized task forces. Meanwhile, the AI directs the street thugs to form large street gangs, eventually growing into city-wide crime syndicates, all hell-bent on destroying you.
The longer you manage to survive, the more the city organizes itself against you.
Keep in mind this is all occurring outside the scope of the AI’s initial design. The AI is learning from you on how to best destroy you.
Most likely, I’ll never actually get around to designing something this sophisticated, but I think we could have far better and more adaptive AIs in gaming than we currently have. If we could get past the idea that games need to be predictable, we could have far more dynamic experiences when playing. If you doubt this, just look at how well games supporting online multiplayer do within the industry. There is a huge demand for dynamic gameplay where the experience isn’t always identical to the previous session.