Are we getting to the point in game AI where players will eventually just sign up to an AI faction?

If you have been watching the headlines over the last decade, AI has been taking down top human players in more and more complex games. From Chess, Jeopardy, and Go to FPS and RTS games, AIs have taken over.

Recently an AI faction was released into the universe of EVE Online, with devastating consequences for the player-driven universe.

Now that we know AIs can outplay people, will we end up with AI-driven game factions that players can join?

If we try to limit games to just human players, someone is bound to introduce an AI player that can then dominate that game.

Perhaps in the beginning. Later the AI will evolve even further, call human players n00bs, and vote to kick them from the game, and then we've come full circle.

4 Likes

For almost all games, that’s already been possible ever since computers were invented. Until there’s a compelling reason why players would want AI to dominate games (which I suspect won’t come), AI players will simply remain entertainment value.

Think about the example: EVE introduced a highly threatening AI faction, and the first thing people did was stop fighting each other to focus on destroying it. I didn’t see anything about anyone joining it. Unless the AI suddenly develops a personality that’s more interesting than the average human being’s, it is likely to continue to have no real value other than as a target.

Even if people joined an AI faction because it was more efficient at winning the game, that would destroy EVE faster than anything else you could imagine. All the players would be sidelined watching AIs fight (because they are better at it) until they got bored and went to play something else.

You have to realize that no one really cares about the AI or its abilities. They care about themselves and how they relate to other human beings. Until an AI develops a compelling personality, it is pretty meaningless.

6 Likes

AI “superpowers” are nothing new; they’ve been around for decades.
AI has been deliberately handicapped and kept “dumb”/simple in many, many games, especially strategy games.

1 Like

So apparently it was all planned by design, not purely the AI acting on its own out of nowhere.

Even Kurt Russell couldn’t beat the “cheating bitch” chess AI.

1 Like

It will be interesting when we move from today’s finite state machines to machine-learning AI. In the near future, AI will feel less dumb and scripted.
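For context, that “scripted” feel comes from exactly this kind of hand-written state logic. A minimal finite-state-machine guard might look like the sketch below; the states and distance thresholds are made up for illustration:

```python
# Minimal finite-state-machine (FSM) guard AI, the "scripted" style
# discussed above. States and thresholds are hypothetical.

class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, player_dist):
        # Hand-written transitions: this is why FSM AI feels predictable.
        if self.state == "patrol" and player_dist < 10:
            self.state = "chase"
        elif self.state == "chase" and player_dist < 2:
            self.state = "attack"
        elif self.state in ("chase", "attack") and player_dist > 15:
            self.state = "patrol"
        return self.state

guard = GuardFSM()
print(guard.update(player_dist=50))  # patrol: player far away
print(guard.update(player_dist=8))   # chase: player spotted
print(guard.update(player_dist=1))   # attack: player in range
print(guard.update(player_dist=30))  # patrol: player escaped
```

Once a player learns the thresholds, the behavior is fully exploitable, which is the “dumb” part ML is supposed to fix.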

I expect this will play out much like desktop vs. VR: VR, like ML, will stay in a niche.
For most games, simple condition-based behavior trees (BTs), or even utility AI, are more than sufficient.
ML, if not supervised correctly, can easily make a game unplayable, most likely by making it too hard, which isn’t fun.

I can imagine a 2D Mario game where the mushrooms and turtles/ducks evade Mario’s jumps and bullets :smile:
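The condition-based alternatives mentioned above are easy to sketch. Here is a toy utility-AI action picker; every action name, weight, and scoring curve is hypothetical:

```python
# Toy utility AI: each action gets a score from hand-tuned curves and
# the agent picks the highest-scoring one. All names and weights are
# hypothetical, just to show the shape of the technique.

def score_attack(hp, enemy_dist):
    # Prefer attacking when healthy and the enemy is close.
    return (hp / 100.0) * max(0.0, 1.0 - enemy_dist / 50.0)

def score_flee(hp, enemy_dist):
    # Prefer fleeing when badly hurt and the enemy is close.
    return (1.0 - hp / 100.0) * max(0.0, 1.0 - enemy_dist / 50.0)

def score_idle(hp, enemy_dist):
    # Weak constant baseline so the agent idles when nothing is urgent.
    return 0.1

def choose_action(hp, enemy_dist):
    actions = {
        "attack": score_attack(hp, enemy_dist),
        "flee": score_flee(hp, enemy_dist),
        "idle": score_idle(hp, enemy_dist),
    }
    return max(actions, key=actions.get)

print(choose_action(hp=90, enemy_dist=10))   # attack: healthy, enemy close
print(choose_action(hp=15, enemy_dist=10))   # flee: badly hurt
print(choose_action(hp=90, enemy_dist=200))  # idle: nothing nearby
```

Tuning means editing a curve or a weight, not retraining a model, which is the “easy to tweak” property the classical approach gives you.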

VR will completely dominate in a few years.
Like VR, ML will also take over ground where traditional finite state machines have been used, for example in AI-human conversation in games, but also in strategy elements, to make the AI take more natural strategic choices.

Dominate what?
You think that after a long day of work, the first thing people will want to do is put stuff on their head? :slight_smile:
Some maybe, but not most.
Call me in five years’ time and tell me I was wrong :smile:

ML is difficult to tune: if you change something in your design, you need to retrain. That is a major downside.
Now you will be adding AI training (“baking”) to the game, along with everything else, and the complexity grows exponentially. :slight_smile:
Sure, some people will do it. I like ML, and I see more of it coming for sure, with more opportunities. But I see challenges with it as well.
The classical approach is very easy to tweak, and especially suitable for classic-style games.

Let’s consider a racing car that drives the track perfectly, learned by ML. That is actually a good use of ML: it’s cool and functional. But the car needs to crash from time to time, so it needs some handicap, and you start dumbing it down because it’s too good.

Now you change the gravity, the tires, or the brakes. You need to re-teach the AI for the new conditions, and then dumb it down again.
Sure, you can train with all these conditions up front, but that adds complexity. All is fine until you add something that wasn’t in the original design; then you need to re-teach the AI again, and dumb it down again. :slight_smile:

Probably there are clever ways of doing this, but you see my point.
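The “dumb it down” step described above could be sketched as a handicap wrapper around a trained policy, so the model itself never needs retraining just to get weaker. Everything here is hypothetical; `trained_policy` stands in for whatever the ML model outputs:

```python
# Handicap wrapper sketch: blend a trained policy's steering output with
# random noise. Names are hypothetical; the policy is a stand-in.
import random

def handicapped_steering(trained_policy, observation, skill=0.7):
    """Blend the policy's steering (in [-1, 1]) with uniform noise.

    skill=1.0 plays the policy as trained; lower values add error, so
    the car occasionally misses the racing line and can crash.
    """
    ideal = trained_policy(observation)
    noise = random.uniform(-1.0, 1.0)
    blended = skill * ideal + (1.0 - skill) * noise
    return max(-1.0, min(1.0, blended))       # clamp to valid range

# Toy "perfect" policy: always steer straight.
perfect = lambda obs: 0.0
print(handicapped_steering(perfect, None, skill=0.9))
```

The upside is that `skill` is a designer-facing knob: change gravity or tires, retrain the policy once, and the difficulty tuning survives unchanged.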

It’s hard to train an AI to be ‘fun’.

5 Likes

What you’re saying is that they would have to get the game right the first time? Well there goes that idea. We haven’t had a publisher release a game in years that didn’t immediately need a patch. :stuck_out_tongue:

Joking aside, I wonder if you couldn’t base the training mechanism on the replay system that most competitive games ship with today. If the AI could analyze the way a player handles their match, then you could just pick replays that are good or interesting.
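The replay-based idea amounts to imitation learning: turning recorded frames into (state, action) pairs for supervised training. A rough sketch, with an entirely made-up replay format and feature names:

```python
# Imitation-learning data prep sketch: flatten a replay (list of frames)
# into (features, action) pairs. The replay schema is hypothetical.

def replay_to_dataset(replay):
    """Extract (features, player_action) pairs from recorded frames."""
    dataset = []
    for frame in replay:
        # Only use information the player could actually see at the time.
        features = (
            frame["minerals"],
            frame["army_size"],
            frame["scouted_enemy_units"],
        )
        dataset.append((features, frame["player_action"]))
    return dataset

replay = [
    {"minerals": 50,  "army_size": 0, "scouted_enemy_units": 0,
     "player_action": "build_worker"},
    {"minerals": 150, "army_size": 0, "scouted_enemy_units": 2,
     "player_action": "build_barracks"},
]
print(replay_to_dataset(replay))
```

Curating “good or interesting” replays then just means filtering which games feed this function.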

2 Likes

Look at the RTX GPUs and how they use AI to give us ray tracing; we will see many more similar applications, for example ones that boost the feel of game AI.

Lazy Yanks; the rest of the world loves to get some exercise.
I think I trust myself and Valve more than you when it comes to predicting markets.

I know Zero-K (an RTS) does that, but I don’t know the exact details of the training process.

You are right. The thing with training is TIME. If it takes only a few minutes, that’s fine. But as you mentioned, if you want to train based on already-played games, i.e. on replays, that means lots of TIME. The question is whether developers/publishers have the time to retrain every time a minor change is made.

In the case of Zero-K, it has existed for many years, with tons of data already collected. That makes such an automatic training process easier. It’s not something most devs will have at release/beta.

Now if you consider game replays, you need to make a game that records all inputs accordingly and is deterministic. That adds complexity to the average game product :slight_smile:
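Input-based deterministic replay means storing only the RNG seed and the per-tick commands, then re-running the same simulation. A toy sketch (the “simulation” here is hypothetical, just a seeded accumulator):

```python
# Deterministic replay sketch: same seed + same commands => same result,
# so a replay file only needs to store the seed and the command stream.
import random

def simulate(seed, commands):
    """Toy deterministic sim: position moves by command plus seeded noise."""
    rng = random.Random(seed)     # fixed seed => reproducible noise
    pos = 0
    for cmd in commands:
        pos += cmd + rng.randint(-1, 1)
    return pos

commands = [1, 2, -1, 3]
live = simulate(seed=42, commands=commands)
replay = simulate(seed=42, commands=commands)   # re-run from stored inputs
print(live == replay)  # True: identical outcome
```

The catch is exactly the complexity you mention: every source of randomness, timing, and floating-point behavior in a real engine has to be pinned down, or live and replay diverge.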

I know. But no worries: since you target “most” active people, that won’t be an issue for you :wink:

I guess it depends on whether you need the replays to play back at normal speed. If a replay can be analyzed in a very short period of time, then the actual time investment is just finding the replays you want to analyze.

We would be teaching it which units the player created and how they used them, based on the limited information the player has about their opponents. The only thing that would need to be deterministic, in my opinion, is how the units affect each other. Unless I’m misunderstanding what you mean by “deterministic”.

I like you. And your username.

1 Like

So you ban them for cheating.

1 Like

Sure, that may work for your case, and then it would be the most suitable approach.

The good thing about determinism is replayability, whether for review or for training under particular conditions.
It also lets you keep the amount of stored data to a minimum while having thousands of units on the battlefield: you don’t need to store the position of every unit for every time frame, mainly just the commands from the player/AI and sync points.
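The sync points mentioned here are typically periodic checksums over game state, so a replay (or a networked peer) can verify it hasn’t diverged from the original run. A minimal sketch with a made-up state layout:

```python
# Sync-point sketch: hash the unit state every N ticks; if a replay's
# checksum differs from the recorded one, the simulation has desynced.
# The (id, x, y, hp) state layout is hypothetical.
import hashlib

def state_checksum(units):
    """Stable hash over unit state, sorted so iteration order is irrelevant."""
    blob = ",".join(f"{uid}:{x}:{y}:{hp}" for uid, x, y, hp in sorted(units))
    return hashlib.sha256(blob.encode()).hexdigest()[:8]

live   = [(1, 10, 20, 100), (2, 5, 5, 80)]
replay = [(2, 5, 5, 80), (1, 10, 20, 100)]   # same state, different order
print(state_checksum(live) == state_checksum(replay))  # True: in sync
```

Comparing a few bytes per sync point is what makes command-only replays safe to trust.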

You may want to replay a particular scenario and ensure that AI iteration B is better than its predecessor A, and not just by luck.
Or you may want to review what happened during a particular training run: why xyz happened when you least expected it.

Still, you can accelerate training, but it will probably run closer to real time than to CPU clock speed :slight_smile:
Hence yes, teaching may potentially be much slower.