I KNEW IT!
Actually, AI for games is pretty much never equivalent to AI for non-games. The end goal is different: a game AI deliberately plays sub-optimally, so that winning is challenging but not impossible.
If you're making a game, this is probably at least a useful example to look at, even if I don't agree with some of the decisions they made (uninterruptible moves, for instance)
When I play a game I want the AI to be as close to another human as possible. That's why many AIs are laughed at for being tricked by really simple tactics (circling around a pillar so the AI follows you, putting something between you and the AI so they can't see you, like the buckets in Skyrim, etc.).
AIs in games often cheat, and they're also dumbed down so that they don't beat humans too fast, but the main difference from "regular" AI comes from the difference in goals and context. AI in most games has to make sufficiently smart decisions in a dynamic environment within a very limited time budget (milliseconds, usually). In that respect it's pretty similar to algorithms used in robotics, and therefore entirely unlike the stuff regular AI does to suggest better ads to you.
It's a complete misconception that "real AI" needs to be super hard. Give it realistic constraints like slow reaction times or noisy input. You can handicap it in many ways to control the difficulty. Modern chess engines can easily beat even the best players in the world. But by limiting the number of moves they search, you can set one up that new players can beat.
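To make the depth-limiting idea concrete, here's a minimal sketch: a depth-capped negamax search on a toy Nim game (take 1 or 2 stones, taking the last stone wins). The `Nim` class and all names are invented for illustration; a real chess engine obviously does far more, but the handicap knob works the same way:

```python
# Sketch: controlling AI difficulty by capping search depth,
# demonstrated on toy Nim (take 1 or 2 stones; taking the last wins).
# The Game interface here is illustrative, not any real engine's API.

class Nim:
    def __init__(self, stones):
        self.stones = stones

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def apply(self, move):
        return Nim(self.stones - move)

    def evaluate(self):
        # From the side-to-move's perspective: no stones left means
        # the previous player took the last one, so we already lost.
        return -1 if self.stones == 0 else 0

def negamax(game, depth):
    """Return (score, best_move), searching at most `depth` plies."""
    moves = game.legal_moves()
    if depth == 0 or not moves:
        return game.evaluate(), None   # static evaluation at the horizon
    best_score, best_move = float("-inf"), None
    for move in moves:
        score = -negamax(game.apply(move), depth - 1)[0]
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```

With 4 stones and enough depth, the search finds the winning move (take 1, leaving a multiple of 3); capped at 1 ply, it can no longer tell the moves apart, which is exactly the handicap being described.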
My favorite game AI is from Age of Empires 2. Most other RTSes just let the AI cheat like crazy to provide a challenge. For AoE2 they put a lot of work into designing an expert system and a custom scripting language for it. Tons of features were implemented to make it easy to write relatively sophisticated AI strategies. And they documented it well and made it easy for modders to write even better AI scripts.
As a result, the AI on hardest can beat all but skilled competitive players without cheating at all (at least the current AI shipped with the Steam version). It's actually fun to play against and isn't a terrible substitute for a real human player.
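As I understand it, the expert-system approach boils down to a list of condition → action rules evaluated continuously against the game state. A toy sketch in Python (the state fields and actions are invented for illustration, not the actual AoE2 scripting API):

```python
# Sketch of an expert-system game AI: a rule fires whenever its
# condition holds against the current game state. All state fields
# and action names are made up for this example.

def make_rules():
    return [
        # (condition, action) pairs, checked in order each tick
        (lambda s: s["food"] >= 50 and s["villagers"] < 30,
         lambda s: s["orders"].append("train-villager")),
        (lambda s: s["enemy_sighted"] and s["soldiers"] >= 10,
         lambda s: s["orders"].append("attack-now")),
    ]

def tick(state, rules):
    """Run one pass over the rule list, firing every matching rule."""
    for condition, action in rules:
        if condition(state):
            action(state)
    return state["orders"]
```

The appeal for modders is that each rule is readable in isolation, so tuning a strategy means editing a declarative list rather than touching control flow.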
Allow me to introduce you to a developer named https://en.wikipedia.org/wiki/FromSoftware ...
Reach a point where all core problems are solved, and then create your own problems (and try to solve them).
It's just the DF community that claimed it as their mantra and made it the game's primary marketing. But all sim games (d)evolve to this.
Kinda like Lucky Strike's "it's toasted!" slogan
Although I understand Souls doesn't have permadeath, therefore it's probably only half the fun of Nethack.
But Original DnD is where my inspiration-knowledge stops
I agree that if you have no way to measure that progress/improvement then the fun is lost.
I bet this was done to decrease the complexity of the script flow. Otherwise, you'd have to provide for the interruption cases.
- Downloaded: when you've identified your opponent's patterns and know what they're going to do next. Ex. they always jump after they jab.
- Conditioning: when you've trained your opponent to react the same way consistently to something you do. Ex. they always jump when you throw a fireball because you've let them do it successfully.
- "time to guess": situations where, if executed correctly, your opponent must choose one defense from many defenses randomly. Ex. your jump trajectory is such that whether you end up on the left or right side of your opponent is determined by a pixel.
A phrase that novice players say a lot is "HOW DID HE KNOW?" because they're in shock that their opponent is "guessing" right every time when really their opponent just knows what the novice is going to do in every situation. What's impressive is that "every situation" is really the more skilled player abstracting over similar situations. Ex. they recognize kick->jump = punch->jump for their particular opponent. I don't think an AI will be able to make complicated abstractions like that on an offline game console for a while.
Also cool to think about would be AI-vs-AI street fighter competitions. Or SF AI that learns from live matches currently being played online.
Kinda. Some of the AI strategies are based on the behaviour of human players. For example, in the later versions when Ryu dizzies the opponent, he will whiff several jabs. This is widely regarded as a tribute to a combovideo maker known as TZW who often did this in his videos. With regards to actual Fighting Game strategy, the AI in Super Street Fighter II absolutely uses tactics employed by real players.
These are a few off the top of my head:
- Tick throws
- Whiff aerial attacks into command grab
- Footsie spacing into whiff punish
- Fake footsie spacing into grab
- Low/Overhead attack mixup
- Left/Right attack mixup
Using Lua scripting, a compatible emulator, and a rudimentary neural network, it's entirely possible to build a "better" AI. I believe a more advanced one was recently created for Super Smash Bros.
http://www.saltybet.com/ does just that.
I doubt there is reinforcement learning going on though, I think the AIs of the mugen characters are hard coded.
That delay time is what well-made fighting games are designed and balanced around. If you forced the AI to create inputs that would get processed however many milliseconds later they would actually have to start predicting their opponents, seeking hit confirms, etc.
If you don't enforce any sort of delay, making an unbeatable AI should be trivial.
One could just alter this process and insert a timed delay before the move becomes visible to the AI and it chooses a response.
You could also increase this delay as AI skill goes down, although at e.g. 500ms it will begin to seem to a human as if the AI is dumb, reacting to things 250ms in the past. At this point you'd probably get a more playable and better-looking AI by
1. adding more variance per delay, based on the skill level
2. having it fail a vision by inserting random moves into its perceived list (for example, it thinks you're going to jump kick when you're only jumping)
3. by occasionally randomly deleting vision items, which will make it miss things that the player is doing
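All three tweaks can be sketched as a perception buffer the AI reads from instead of the raw game state: events only become visible after a jittered delay, are sometimes misread, and are sometimes dropped entirely. All names and numbers here are illustrative:

```python
import random
from collections import deque

class DelayedVision:
    """Feed of opponent moves the AI sees only after a reaction delay.
    Lower skill -> longer delay, more jitter, more misreads and misses."""

    def __init__(self, delay_ms, jitter_ms, misread_p, miss_p, rng=None):
        self.delay_ms = delay_ms
        self.jitter_ms = jitter_ms
        self.misread_p = misread_p    # chance a move is seen as something else
        self.miss_p = miss_p          # chance a move is never seen at all
        self.rng = rng or random.Random()
        self.pending = deque()        # (visible_at_ms, move)

    def observe(self, now_ms, move):
        """The opponent performed `move` at time `now_ms`."""
        if self.rng.random() < self.miss_p:
            return                    # 3. occasionally miss it entirely
        if self.rng.random() < self.misread_p:
            move = "jump"             # 2. misread, e.g. a jump kick as a jump
        delay = self.delay_ms + self.rng.uniform(-self.jitter_ms, self.jitter_ms)
        self.pending.append((now_ms + max(0.0, delay), move))  # 1. jittered delay

    def visible(self, now_ms):
        """Moves whose delay has elapsed, i.e. what the AI may react to."""
        out = []
        while self.pending and self.pending[0][0] <= now_ms:
            out.append(self.pending.popleft()[1])
        return out
```

An easy AI might get something like delay_ms=500 with heavy jitter and a high miss rate; a hard one delay_ms=200 and nearly clean vision.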
That's a fun idea. Provide an editor for the AI script files so you can program your own AI, then inject that into the ROM and pit AIs against each other.
In terms of FGs, you would store the relevant game state as inputs and build a reward function around health, time, damage done etc. A relatively simple (and useful!) SFII AI would be one that tries to win by only using crouching medium kick, sweep and throw.
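A reward function along those lines might look like the following; the state fields and weights are guesses for illustration, not taken from any actual trainer:

```python
# Sketch of a shaped reward for a fighting-game agent. Field names
# and weights are illustrative assumptions.

def reward(prev, curr, round_over):
    """Score the transition from frame state `prev` to `curr`.
    States are dicts with my_health and opp_health (0-100)."""
    r = 0.0
    r += (prev["opp_health"] - curr["opp_health"]) * 1.0   # damage dealt
    r -= (prev["my_health"] - curr["my_health"]) * 1.0     # damage taken
    r -= 0.01                                              # small time penalty
    if round_over:
        r += 100.0 if curr["my_health"] > curr["opp_health"] else -100.0
    return r
```

The small per-frame time penalty nudges the agent toward actually finishing rounds rather than stalling out the clock.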
Here's a demo of one that reads the distance between characters and either dashes in and does a low attack, or attempts to throw you. The deciding branch:

    if (distanceBetweenChars() < THRESHOLD) then
It's pretty simple to have this script run for Player 1 and Player 2.
In a competitive environment (ie. when you have AI vs AI) the players involved would need to agree on certain limitations, otherwise it would be pretty simple to build an AI that reacts "instantly" to stimuli.
And yet, it was really effective and fun. This is like a magician revealing how a trick was done, it's always a "letdown".
That's a pretty big cheat, to be honest.
A sort of neat trick: during a jump you can be holding down to charge your "blade kick"*, and immediately when you hit the ground, press up and kick to execute the move, which gives the appearance of performing the move without charging. Sometimes it gets a surprised look from your opponent.
* we called it a flash kick, but whatever.
(I kind of think that it was also possible even back in the 90s, but never implemented; what would have been the point?)
In fact, the only way to beat the computer opponent was to take advantage of weaknesses in the AI script, the biggest one of which is jumping backwards when there's a specific distance between you. The computer would jump towards you, leaving them open to you jumping forwards with a kick. Every time. Just don't get caught in the corner.
Yeah, the MK2 AI isn't much fun. It's designed to eat quarters, not to provide a fair fight. :-/
I've figured out that if Blanka jumps straight up and does a hard kick when Zangief gets close, there is literally no way for Zangief to hit Blanka. You can do this on the hardest difficulty and beat Zangief perfectly.
This presents a problem for a perfect AI. The perfect AI would need to know not to get so close to Blanka as Zangief, and at best could only let the time run out during that matchup.
Actually with a machine learning approach it's not immediately obvious to me how to incorporate it. I suppose an RNN or reinforcement learning could be used, in principle, to learn good reactions to a given sequence from the opponent, but it would take so many failures to train it, which would have to be generated more or less manually. I don't see how it could be done easily, and certainly not in an "online" way without delivering a pretty terrible play experience.
Are there any good examples of computer-controlled players using an online (or offline) machine learning-based approach to player control, that doesn't suffer from these problems?
The image that comes to mind for me is a computer player that gets better and better at beating you, but the problem is that with reinforcement learning there is no guarantee that this happens reliably and in an amount of time and gradient that would provide a good playing experience -- and then you'd be faced with the problem of it getting "too good", equally not a particularly good play experience.
Perhaps something in between? A machine learning approach that helps select and maybe parameterize these "scripts"?
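A minimal version of that in-between idea: treat each hand-authored script as an arm of a multi-armed bandit and let an epsilon-greedy learner decide which script to run each round, updating on the round's outcome. The script names are placeholders:

```python
import random

class ScriptBandit:
    """Epsilon-greedy selection over hand-authored AI scripts."""

    def __init__(self, scripts, epsilon=0.1, rng=None):
        self.scripts = list(scripts)
        self.epsilon = epsilon
        self.rng = rng or random.Random()
        self.counts = {s: 0 for s in self.scripts}
        self.values = {s: 0.0 for s in self.scripts}  # mean reward so far

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.scripts)                # explore
        return max(self.scripts, key=lambda s: self.values[s])  # exploit

    def update(self, script, reward):
        """Incremental mean update after the round ends."""
        self.counts[script] += 1
        n = self.counts[script]
        self.values[script] += (reward - self.values[script]) / n
```

This keeps the moment-to-moment behaviour authored and predictable while still adapting between rounds, and the epsilon knob bounds how erratic the adaptation can look to the player.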
The main difficulty with combat AI is that players enjoy mastering a game and beating it. And if the game keeps getting better and beating them instead, they will end up unhappy.
I really suspect that ML will find its place in helping design machines that are then static, but ML that adapts as we use it will be frustrating to work with. At least until we develop, as a society, some kind of "theory of mind" of AI, where we learn how to expect a machine to adapt as we use it. But we are not there yet.
For now, machines are still better when they act like machines. That is why I think, for example, society adapting to the presence of things like ubiquity of self-driving cars is going to be a long and scary road.
As for video games, it seems like a computer-controlled character should be somewhat predictable, even if complicated. When the ML algorithms adapt, it should present a new character to fight, not adapt an existing one that the player has already "theorized" (i.e. figured out its pattern.)
In this modern day and age grabbing telemetry from multiplayer matches might be a good way of getting training data. Particularly for games that already measure player skill. For example creating AI for Hearthstone that played like each of the skill tiers. Periodic retraining on up to date data could then keep up with things like changes in the metagame.
It probably uses a hybrid of AI and scripting techniques.
As for things like human reaction limits etc., those can be built into the AI as part of the parameters it needs to operate within.
Not making an AI that has perfect response, but a "human-like" one.
It would make for a formidable opponent since you can still defeat it (with a good dose of luck).
I think you could probably generate and hand-tune a number of situational scripts and let a learning system decide how best to chain them for particular circumstances (e.g. if the opponent is Character A, the computer is Character B, the computer's relative health is much lower, and Character A has a full super combo gauge, then it learns a particular sequence of scripts that proves effective in replays).
I think that's the most rage I've ever experienced in my life....just repeatedly getting beat by Vega.
The only thing that comes close is getting hit by the blue shell in Mario Kart.
Thanks for the info. I'm currently building my own SF2-style game and was wondering how they made their AI, so many thanks for sharing.
Nobody really cares.