Street Fighter II's AI Engine (sf2platinum.wordpress.com)
443 points by FroshKiller 68 days ago | 86 comments



"When reacting to an attack the scripts are chosen based on something called a yoke. Each frame of animation for both avatars and projectiles contains a value for the yoke in the metadata, which the AI peeks at to select a script suitable for responding to that attack. The computer sees the yoke of your move as soon as you have input it, before the first animation frame has even displayed. As such it gets one more frame of advantage on top of your reaction time."

I KNEW IT!
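For the curious, the yoke dispatch described in that quote can be sketched in a few lines of Python. The yoke values, script names, and table below are all invented for illustration; in the real game the yoke lives in each animation frame's metadata and the scripts are bytecode.

```python
import random

# Hypothetical yoke -> reaction-script table. All names here are invented;
# the real game stores a yoke value in frame metadata and reacts with
# bytecode scripts, not Python strings.
YOKE_SCRIPTS = {
    "fireball":  ["jump_over", "block", "dragon_punch"],
    "jump_kick": ["anti_air", "walk_back"],
    "sweep":     ["block_low", "jump_back"],
}

def select_reaction(yoke):
    """Pick a response the moment the player's input is read, before the
    move's first animation frame is even displayed."""
    return random.choice(YOKE_SCRIPTS.get(yoke, ["idle"]))
```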


> These days, when most people talk about AI they’re talking about machine learning. There’s not any of that in SF2.

Actually, AI for games is pretty much never equivalent to AI for non-games. The end goal is different: games deliberately provide a non-optimal opponent, so that it's challenging but not impossible to win.

If you're making a game, this is probably at least a useful example to look at, even if I don't agree with some of the decisions they made (uninterruptible moves, for instance)


I think the problem here is that AI academics think of AI as predicting from factors. Most other people think of an AI as a program that can effectively emulate human intelligence. The end goal of AI (or at least game AI) is to create as perfect a facsimile as possible of the way an average person thinks. In reality there are no "best decisions", let alone in a simulated reality governed by different rules and inhabited by AIs.

When I play a game I want the AI to be as close to another human as possible. That's why many AIs are laughed at for being tricked by really simple tactics (circling around a pillar so the AI follows you, putting something between you and the AI so they can't see you, like Skyrim's buckets, etc.).


I wouldn't go that far.

AIs in games often cheat, and they're also dumbed down so that they don't beat humans too fast, but the main difference from "regular" AI comes from the difference in goals and context. AI in most games has to make sufficiently smart decisions in a dynamic environment within a very limited timespan (milliseconds, usually). In that respect, it is pretty similar to the algorithms used in robotics, and therefore entirely unlike the stuff regular AI does to suggest you better ads.


Completely agree. Games are typically only "fun" when you feel challenged. Most people do not find "fun" in losing.


The thing is most simple game AI isn't challenging. Or it's challenging in boring ways. Like having 100% perfect aim or long health bars. It's awesome to have an AI that is challenging by actually being good at the game. Having some actual strategy and intelligence.

It's a complete misconception that "real AI" needs to be super hard. Give it realistic constraints like slow reaction times or noisy input. You can handicap it in many ways to control the difficulty. Modern chess engines can easily beat even the best players in the world. But by limiting the number of moves they search, you can set one up that new players can beat.
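The depth-limiting knob generalizes beyond chess. Here's a hedged sketch using the toy game of Nim instead of a chess engine (take 1-3 stones; whoever takes the last stone wins): capping the minimax search depth directly caps the AI's strength.

```python
import random

# Toy depth-limited minimax for Nim (take 1-3 stones; the player who takes
# the last stone wins). Not chess, but the same idea: lowering the search
# depth weakens the AI in a controllable way.
def best_move(stones, depth, rng=random):
    def value(s, d, maximizing):
        if s == 0:
            # The previous mover took the last stone and won.
            return -1 if maximizing else 1
        if d == 0:
            return 0  # search horizon reached: outcome unknown to the AI
        vals = [value(s - t, d - 1, not maximizing) for t in (1, 2, 3) if t <= s]
        return max(vals) if maximizing else min(vals)

    moves = [t for t in (1, 2, 3) if t <= stones]
    scored = [(value(stones - t, depth - 1, False), t) for t in moves]
    best = max(v for v, _ in scored)
    return rng.choice([t for v, t in scored if v == best])
```

With a deep search it plays perfectly (always leaving a multiple of 4 stones); at depth 1 it can't see any outcomes and just picks at random.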

My favorite game AI is from Age of Empires 2. All other RTSes just let the AI cheat like crazy to provide challenge. For AoE2 they went to a lot of trouble to design an expert system and a custom scripting language for it. Tons of features were implemented to make it easy to write relatively sophisticated AI strategies. And they documented it well and made it easy for modders to write even better AI scripts.

As a result the AI on hardest can beat all but skilled competitive players without cheating at all (at least the current AI shipped with the steam version.) It's actually fun to play against and isn't a terrible substitute for a real human player.


Similarly, there exist AIs for StarCraft: Brood War that are capable of providing a decent challenge to novice and intermediate-level human players. You can watch some examples at play on http://sscaitournament.com/ and also develop your own AI in one of the more popular programming languages (C++, Java, etc.). Disclaimer: I am the author of one of those Brood War AIs.


> Most people do not find "fun" in losing.

Allow me to introduce you to a developer named https://en.wikipedia.org/wiki/FromSoftware ...


And if that's not good enough, there's the one whose game basically opens with 'Losing is fun!': http://dwarffortresswiki.org/index.php/DF2014:Losing


Kind of off topic, but the thing I like about the "Losing is fun!" mantra of DF is that "losing" is not the end of the game. It's just another point in history. So "losing" really just advances the story plot -- which is why it's fun. Conversely, "winning" means that you don't have anything left to do and it's "boring". I don't know any other game with this point of view.


This is just the general rule for simulation games, in free play mode.

Reach a point where all core problems are solved, and then create your own problems (and try to solve them).

It's just the DF community that claimed it as their mantra and made it their primary marketing of the game. But all sim games (d)evolve to this.

Kinda like Lucky Strike's "it's toasted!" slogan


Both of which were influenced in their attitude towards losing by a little game known as Nethack [1].

Although I understand Souls doesn't have permadeath, therefore it's probably only half the fun of Nethack.

__________

[1] https://en.wikipedia.org/wiki/NetHack


Which in turn took from Rogue. Which in turn came from D&D first edition / advanced edition, when D&D was more about navigating dungeon traps than fighting monsters (and not at all similar to modern D&D, which resembles Bethesda RPGs much more than roguelikes).

But original D&D is where my inspiration-knowledge stops.


D&D is descended from wargaming.



https://alt.org/nethack/ is probably the best way to play it (or watch others do so, which is a pretty cool feature)


It depends. With a well-balanced fighting game I don't mind losing to a good human opponent, because it means that my game improves. Playing against someone who's good is a great way to find out which of the moves/patterns you thought were safe actually aren't.

I agree that if you have no way to measure that progress/improvement then the fun is lost.


That actually would be an interesting game mechanic; make sure you lose, but you get more points for how spectacularly you lose...


Now I want a machine-learned fighting game AI, though. Maybe it would even be a challenge for the enthusiasts?

https://youtube.com/watch?v=xSGW7CwD5GM


Someone made exactly that for Super Smash Bros.:

https://www.engadget.com/2017/02/26/super-smash-bros-ai-comp...


Machine learning would be interesting to apply to something like saltybet[0], there's already a paper on the idea[1], dunno if somebody actually did manage to do it.

[0]http://www.saltybet.com/

[1]http://webcache.googleusercontent.com/search?q=cache:VXw25h2...


> (uninterruptible moves, for instance)

I bet this was done to decrease the complexity of the script flow. Otherwise, you'd have to provide for the interruption cases.


I asked the blog's author and he said hits do interrupt scripts. Aside from that, more complex scripts intended for higher difficulty have explicit conditional branches after moves to allow for blocking and counters. The triple fireball script example is for "easy Ryu", so it's intentionally dumb/non-reactive.


What do you mean by non-interruptible moves?


It's funny how much SF slang hints at you and your opponent being lackluster AI scripts:

- Downloaded: when you've identified your opponent's patterns and know what they're going to do next. Ex. they always jump after they jab.

- Conditioning: when you've trained your opponent to react the same way consistently to something you do. Ex. they always jump when you throw a fireball because you've let them do it successfully.

- "time to guess": situations where, if executed correctly, your opponent must choose one defense from many defenses randomly. Ex. your jump trajectory is such that whether you end up on the left or right side of your opponent is determined by a pixel.

A phrase that novice players say a lot is "HOW DID HE KNOW?" because they're in shock that their opponent is "guessing" right every time when really their opponent just knows what the novice is going to do in every situation. What's impressive is that "every situation" is really the more skilled player abstracting over similar situations. Ex. they recognize kick->jump = punch->jump for their particular opponent. I don't think an AI will be able to make complicated abstractions like that on an offline game console for a while.
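At its simplest, "downloading" is just frequency counting over situations: tally what the opponent does in each spot and counter their most common choice. A toy sketch (the situations, responses, and counter table are all invented):

```python
from collections import Counter

# Hypothetical move -> counter table; names are invented for illustration.
COUNTERS = {"jump": "anti_air", "fireball": "jump_over", "throw": "tech_throw"}

class Downloader:
    def __init__(self):
        self.history = {}  # situation -> Counter of observed responses

    def observe(self, situation, response):
        self.history.setdefault(situation, Counter())[response] += 1

    def predict(self, situation):
        seen = self.history.get(situation)
        return seen.most_common(1)[0][0] if seen else None

    def counter(self, situation):
        # Fall back to blocking when we have no read on the opponent.
        return COUNTERS.get(self.predict(situation), "block")
```

The hard part the comment points at, abstracting over *similar* situations rather than identical ones, is exactly what this naive table can't do.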


Yeah, Sirlin covers that really well in his Layers of Yomi on Playing to Win:

http://www.sirlin.net/ptw-book/7-spies-of-the-mind


I wonder if knowing the different strategies would make pro-humans better at playing the game. I feel like they might already have internalized the different strategies the computer uses, but still pretty surprising to see how simple the source code is.

Also cool to think about would be AI-vs-AI street fighter competitions. Or SF AI that learns from live matches currently being played online.


> I wonder if knowing the different strategies would make pro-humans better at playing the game.

Kinda. Some of the AI strategies are based on the behaviour of human players. For example, in the later versions, when Ryu dizzies the opponent he will whiff several jabs. This is widely regarded as a tribute to a combo video maker known as TZW[1], who often did this in his videos. With regards to actual fighting game strategy, the AI in Super Street Fighter II absolutely uses tactics employed by real players.

These are a few off the top of my head:

- Tick throws

- Whiff aerial attacks into command grab

- Footsie spacing into whiff punish

- Fake footsie spacing into grab

- Low/Overhead attack mixup

- Left/Right attack mixup

Using Lua scripting, a compatible emulator and a rudimentary neural network it's completely possible to build a "better" AI. I believe a more advanced one was recently created for Super Smash Bros [2].

1. https://www.youtube.com/watch?v=M0CidJaVS0Q#t=4m48s

2. https://arxiv.org/abs/1702.06230


> AI-vs-AI street fighter competitions

http://www.saltybet.com/ does just that.


nice... somewhat glitchy, but still pretty good.

I doubt there is reinforcement learning going on though, I think the AIs of the mugen characters are hard coded.


That's awesome. Next-gen betting - on AIs! :)


This is news to me. Interesting.


The key to making a fighting game AI interesting is enforcing the same reaction delays humans have, from decision-making to input, which is roughly 250-300ms, if I remember correctly (your nervous system from brain to fingers has pretty high latency).

That delay time is what well-made fighting games are designed and balanced around. If you forced the AI to create inputs that would get processed however many milliseconds later they would actually have to start predicting their opponents, seeking hit confirms, etc.

If you don't enforce any sort of delay, making an unbeatable AI should be trivial.


In this case I think it is an easy thing to implement. Currently the system seems to pick up any moves you do immediately, e.g. a fireball, and then chooses a script based on that + the extra factors.

One could just alter this process and insert a timed delay before this move becomes visible to the AI and it chooses a new yoke response.

You could also increase this time as AI skill goes down, although at e.g. 500ms it will begin to seem to a human as if the AI is dumb, reacting to things 250ms in the past. At this point you'd probably get a more playable and good looking AI by

1. adding more variance per delay, based on the skill level

2. having its vision fail by inserting random moves into its perceived list (for example, it thinks you're going to jump kick when you're only jumping)

3. by occasionally randomly deleting vision items, which will make it miss things that the player is doing
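All three tweaks fit naturally into one "delayed vision" queue that sits between the game state and the AI. A sketch, with illustrative parameter values (none of this is from the actual SF2 code):

```python
import random
from collections import deque

class DelayedVision:
    """Sketch of the three tweaks above: the AI only 'sees' a move after a
    (jittered) reaction delay, sometimes misreads a move, and sometimes
    misses one entirely. All parameter values are illustrative."""

    def __init__(self, delay_frames=15, jitter=5, miss_rate=0.1,
                 noise_rate=0.05, seed=None):
        self.delay = delay_frames
        self.jitter = jitter          # extra random delay, scaled by skill level
        self.miss_rate = miss_rate    # chance a vision item is silently dropped
        self.noise_rate = noise_rate  # chance a move is misread as another move
        self.queue = deque()          # (frame_it_becomes_visible, move)
        self.rng = random.Random(seed)

    def observe(self, frame, move):
        if self.rng.random() < self.miss_rate:
            return  # the AI never sees this move at all
        if self.rng.random() < self.noise_rate:
            move = "jump_kick"  # e.g. mistakes a plain jump for a jump kick
        self.queue.append((frame + self.delay + self.rng.randint(0, self.jitter), move))

    def visible_moves(self, frame):
        out = []
        while self.queue and self.queue[0][0] <= frame:
            out.append(self.queue.popleft()[1])
        return out
```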


I was just contemplating a fighting game AI that actually had to use computer vision to process the game framebuffer and work off of what it can see as well.


Hasn't this been done already? IIRC there have been some machine learning AIs that use raw pixels as observations.


> Also cool to think about would be AI-vs-AI street fighter competitions.

That's a fun idea. Provide an editor for the AI script files so you can program your own AI, then inject that into the ROM and pit AIs against each other.


You can do it using the techniques employed here:

https://github.com/aikorea/strikersii_ai/tree/master/genetic

Demo:

https://www.youtube.com/watch?v=k6Ir8yd9iOk

In terms of FGs, you would store the relevant game state as inputs and build a reward function around health, time, damage done etc. A relatively simple (and useful!) SFII AI would be one that tries to win by only using crouching medium kick, sweep and throw.
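For example, the reward function might just score game-state deltas from frame to frame. The field names and the time penalty below are invented, not from any real SFII API:

```python
# Hedged sketch of a per-frame reward over game-state deltas; field names
# and weights are invented for illustration.
def reward(prev, cur, time_penalty=0.01):
    """prev/cur: state snapshots like {'my_hp': 100, 'their_hp': 100}."""
    damage_done = prev["their_hp"] - cur["their_hp"]
    damage_taken = prev["my_hp"] - cur["my_hp"]
    # Reward dealing damage, punish taking it, and nudge the AI to act
    # rather than stall the clock.
    return damage_done - damage_taken - time_penalty
```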


I just meant straight up using the scripting language described in the article to hand-code an AI.


Aah in that case if you're willing to forego the limitation of using the original AI instruction scheme, you can use the technique I recently used to build a training dummy. It's just a state machine that reacts differently depending on the state of various game variables (the same idea used by the original SFII code). The scripting language is Lua, and it's basically lines of code that go:

    if (distanceBetweenChars() < THRESHOLD) then
        chooseAppropriateMove(distanceBetweenChars())
    end
etc...

Here's a demo of one that reads the distance between characters and either dashes in and does a low attack, or attempts to throw you.

https://www.youtube.com/watch?v=R9THKU63viM

It's pretty simple to have this script run for Player 1 and Player 2.

In a competitive environment (ie. when you have AI vs AI) the players involved would need to agree on certain limitations, otherwise it would be pretty simple to build an AI that reacts "instantly" to stimuli.



> Anyone looking for some insight into how to write an AI engine for a game today will be disappointed.

And yet, it was really effective and fun. This is like a magician revealing how a trick was done, it's always a "letdown".


Well you are in luck because in a major sense, deep learning AI cannot reveal how the trick was done unless you are willing to speak in terms of a model with trillions of variables and feedback loops.


"Charge moves such as blade kicks are simply executed as instructions, so they cannot fail. Guile can do a bladekick from a standing position simply because that’s what’s in the script."

That's a pretty big cheat, to be honest.


Welcome to the frustrations of my childhood.

A sort of neat trick: during a jump you can be holding down to charge your "blade kick"*, and immediately when you hit the ground, press up and kick to execute the move; it gives the appearance of performing the move without charging. Sometimes it gets a surprised look from your opponent.

* we called it a flash kick, but whatever.


I wonder if one can create an invincible AI for Street Fighter II. One that obviously makes the right choice always and can counter every possible human attack.

(kind of think that it was also possible even back in the 90s, but never implemented; what would have been the point?)


Mortal Kombat II effectively did this with really shitty AI. If you jumped towards an opponent, they would jump straight up and hit you with a projectile with perfect accuracy. Every time. It was impossible to throw a computer opponent because the AI had better timing than you. Every time.

In fact, the only way to beat the computer opponent was to take advantage of weaknesses in the AI script, the biggest one of which is jumping backwards when there's a specific distance between you. The computer would jump towards you, leaving them open to you jumping forwards with a kick. Every time. Just don't get caught in the corner.


There's a "bug" in the MK2 script I've never quite understood nor seen explained. Sometimes when jumping at the computer from a certain distance (and perhaps on a certain difficulty level) it will move back, and will keep moving back, trying to separate itself from you, as long as you press no button. You can walk it into the edge and it will stay there, forever trying to move back. Then you could move back yourself, wait a second, and it would throw a projectile, letting you jump in over the projectile for a corner combo.

Yeah, the MK2 AI isn't much fun. It's designed to eat quarters, not to provide a fair fight. :-/


Exactly. That's why I've never enjoyed playing MK with a computer. Game AI should be about fun and enjoyment for the player.


They also would let you leg sweep a lot more than a normal player would. I'd win whole rounds with just leg sweeps. Not the ideal kind of "fun" when playing a video game, but neither is getting perfectly, precisely owned every single time you jump.


Absolutely. All fighting games are basically a mashup of rock-paper-scissors with a bit of real-time chess mixed in. If you have instant reaction time and perfect understanding of what's happening on screen (no mistaking the early animation of one move for another) an unbeatable AI should be trivial, because you'll always win the rock-paper-scissors part.
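That "always win the rock-paper-scissors part" claim is easy to see in miniature: give the AI the opponent's committed choice before it has to answer, and it never loses that layer. The labels below are generic, not actual SF2 moves.

```python
# Generic RPS labels, not actual SF2 moves: an AI that reads the opponent's
# committed choice before answering wins that layer every single time.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def cheating_ai(opponent_choice):
    """React 'instantly': counter whatever the opponent just committed to."""
    return BEATS[opponent_choice]

def judge(a, b):
    """+1 if a beats b, -1 if b beats a, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[b] == a else -1
```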


One interesting thing about Street Fighter 2 is that all of the characters' basic moves are different. I've learned from playing that one matchup has a sort of bug: Zangief vs. Blanka. Zangief needs to be close up to hit someone. Blanka's hard kick in the air has a long reach.

I've figured out that if Blanka jumps straight up and does a hard kick when Zangief gets close, there is literally no way for Zangief to hit Blanka. You can do this on the hardest difficulty and beat Zangief perfectly.

This presents a problem for a perfect AI. The perfect AI would need to know not to get so close to Blanka as Zangief, and at best could only let the time run out during that matchup.


You can beat him with any character. Jump away from him and kick constantly and you will win.


Depends on what the input to it is. There is an invincible AI for StreetFighter V that reads the other player's inputs from the network stream and simply chooses the perfect attack with which to counter. Much more interesting to me would be machine vision that can recognize what's happening on screen. I'd be fine with it getting frames via direct HDMI connection rather than with a camera and similarly inputting moves by sending USB packets down the wire, rather than physically pressing buttons.


People have kind of tried this with Mugen, I think.


Yep. SpriteClub.tv, a mugen stream, showcases a bunch of these kinds of characters with hilarious results.


That's pretty fun, thanks for sharing ;)

Actually with a machine learning approach it's not immediately obvious to me how to incorporate it. I suppose an RNN or reinforcement learning could be used, in principle, to learn good reactions to a given sequence from the opponent, but it would take so many failures to train it, which would have to be generated more or less manually. I don't see how it could be done easily, and certainly not in an "online" way without delivering a pretty terrible play experience.

Are there any good examples of computer-controlled players using an online (or offline) machine learning-based approach to player control, that doesn't suffer from these problems?

The image that comes to mind for me is a computer player that gets better and better at beating you, but the problem is that with reinforcement learning there is no guarantee that this happens reliably and in an amount of time and gradient that would provide a good playing experience -- and then you'd be faced with the problem of it getting "too good", equally not a particularly good play experience.

Perhaps something in between? A machine learning approach that helps select and maybe parameterize these "scripts"?


There have been some attempts to apply ML to NPC control: sometimes to learn how to beat the player, sometimes how to mimic them (Forza), sometimes to learn what player wants done and assist them (B&W). But none of them have crossed the chasm from novelty to killer app.

The main difficulty with combat AI is that players enjoy mastering a game and beating it. And if the game keeps getting better and beating them instead, they will end up unhappy.


Yeah I think that's a very interesting aspect. If you want, I think this speaks to a more general point about AI that I have thought about sometimes. I suspect that we enjoy interacting with machines and using machines when we understand them, and that this understanding of how a machine operates, its capacity and its behaviour, is similar to the "theory of mind" idea that we employ a model of another person when interacting with them. That is why it is so frustrating to work with a machine that changes as you use it -- it's like trying to shoot a moving target.

I really suspect that ML will find its place in helping design machines that are then static, but ML that adapts as we use it will be frustrating to work with. At least until we develop, as a society, some kind of "theory of mind" of AI, where we learn how to expect a machine to adapt as we use it. But we are not there yet.

For now, machines are still better when they act like machines. That is why I think, for example, society adapting to the presence of things like ubiquity of self-driving cars is going to be a long and scary road.

As for video games, it seems like a computer-controlled character should be somewhat predictable, even if complicated. When the ML algorithms adapt, it should present a new character to fight, not adapt an existing one that the player has already "theorized" (i.e. figured out its pattern.)


Colin McRae Rally 2.0 was released back in 2000 and used offline learning to train a driving model. Here's an interview with the engineer that made it: http://www.ai-junkie.com/misc/hannan/hannan.html

In this modern day and age grabbing telemetry from multiplayer matches might be a good way of getting training data. Particularly for games that already measure player skill. For example creating AI for Hearthstone that played like each of the skill tiers. Periodic retraining on up to date data could then keep up with things like changes in the metagame.


This is how Drivatar works in Forza Motorsport. It was first trained and tuned offline internally at Turn 10 to give opponents personality traits and driving style (Forza 2, ah, that damn M. Rossi!), then trained from the player as well and used offline (IIRC Forza 3 had AI progress vanish after the race while 4 persisted it across races. Anyway I loved how the opponents would ostensibly start to use my own tricks and lines against me, making each lap harder than the previous one), and finally starting with Forza 5 and 6 you could have your own AI that would play as you online with your own driving style while you were away, and if it was winning, you'd get some cash.


They're fighting games. Why do we need to incorporate machine learning at all? If the computer wanted to, it could always play perfectly, because it can react to any input perfectly.


AIs are useful as training dummies. I believe Killer Instinct has quite a good AI called "Shadow Mode" that learns from your behaviour.

http://www.polygon.com/2015/5/30/8691219/killer-instinct-tea...

It probably uses a hybrid of AI and scripting techniques.

As for things like human reaction limits etc., those can be built into the AI as part of the parameters it needs to operate within.


Well, that's the challenge, right?

Not making an AI that has perfect response, but a "human-like" one.


'learning to fight against you, sort of like a human would' is a lot more fun than 'literally cheating and perfectly countering you'


Since humans are not infallible, let's leave it at 'learning to fight against you, sort of like a superhuman would'.

It would make for a formidable opponent since you can still defeat it (with a good dose of luck).


If you forced the AI to work with a substantial input lag, it could be quite complex yet still possible to defeat.


This is just me bullshitting, but I imagine you'd want to continue shipping a simple interpreter and some kind of script for the computer opponents and use a hypothetical learning system to generate the scripts separate from the game you ship. The AI code in the game would be simpler, the scripts could be easily patched, users could generate their own scripts, etc.

I think you could probably generate and hand-tune a number of situational scripts and let a learning system decide how best to chain them for particular circumstances (e.g. if the opponent is Character A, the computer is Character B, the computer's relative health is much lower, and Character A has a full super combo gauge, then it learns a particular sequence of scripts that proves effective in replays).
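A minimal version of that "learn which script to chain" idea is just a per-situation win-rate table over hand-authored scripts, with a bit of exploration. A sketch (the script names and situations are invented):

```python
import random
from collections import defaultdict

# Sketch of a learner that only chooses among hand-authored scripts per
# situation; script names and situations are invented for illustration.
class ScriptSelector:
    def __init__(self, scripts, epsilon=0.1, seed=None):
        self.scripts = scripts
        self.wins = defaultdict(lambda: defaultdict(int))   # situation -> script -> wins
        self.plays = defaultdict(lambda: defaultdict(int))  # situation -> script -> plays
        self.epsilon = epsilon  # exploration rate
        self.rng = random.Random(seed)

    def choose(self, situation):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.scripts)  # occasionally try something new
        def win_rate(s):
            n = self.plays[situation][s]
            return self.wins[situation][s] / n if n else 0.5  # optimistic prior
        return max(self.scripts, key=win_rate)

    def record(self, situation, script, won):
        self.plays[situation][script] += 1
        if won:
            self.wins[situation][script] += 1
```

The learning happens entirely outside the shipped interpreter: you'd mine `record` calls from replays and ship only the resulting script choices.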


I'm going to find Vega's script and replace it with NOOPs. I hated playing Vega. He would climb that wall like a spider and then jump with a shrill yell and then body-slam me.

I think that's the most rage I've ever experienced in my life....just repeatedly getting beat by Vega.

The only thing that comes close is getting hit by the blue shell in Mario Kart.


lol, Vega is easy to beat, come on! There are a lot of tricks in SF2, with almost every character.

Thanks for the info. I'm currently building my own SF2 game and was wondering how they made their AI, so thanks for sharing.


When he climbs on the fence, get as far away from him as possible. When he jumps off, jump away and hit roundhouse, this usually beats him.


Losing is not fun but almost winning IS fun. Progress through refinement, strategy and speed is fun.


Gaming AI must be "beatable". "Good" AI doesn't make for fun games.


Why aren't there longer bytecode instructions?


1. Because it's not necessary: there are a lot of short scripts instead, so it runs through one, picks another at random; lather, rinse, repeat. 2. Longer scripts would be recognisable and easy to beat.
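That loop (run a short script, pick another at random, repeat) is tiny to sketch. The script contents below are invented:

```python
import random

# Toy version of the loop described above: run one short script to
# completion, then pick another at random. Script contents are invented.
SCRIPTS = [["walk_fwd", "jab"], ["jump", "kick"], ["block", "sweep"]]

def ai_actions(n_frames, seed=None):
    """Return n_frames of AI actions by chaining randomly chosen scripts."""
    rng = random.Random(seed)
    out = []
    while len(out) < n_frames:
        out.extend(rng.choice(SCRIPTS))  # each script runs to completion
    return out[:n_frames]
```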




AI != IF STATEMENT + RANDOM


You're confusing GAI and AI. AI covers anything that's making a system seem intelligent, essentially. Expert systems and game AIs are both very condition-based.


Sure it is. AI is a computer reacting to its environment to solve a problem. It doesn't necessitate machine learning, neural networks, or what-have-you.


This doesn't have a scoring function or objective function, hence it is not "solving" a problem. It's just mimicking a behaviour. There is quite a difference.


Sorry but I don't see any intelligence in this.


It's still AI though.


I always thought that the I in AI stood for Intelligence


AI always seems to be that elusive magical self-conscious computer algorithm, which when revealed to run on actual binary logic is claimed to be "not AI after all"


Are you familiar with decision tree learning? [1]

[1] https://en.wikipedia.org/wiki/Decision_tree_learning


You're not the only one saying that, so I made a quick blog post about whether SF2 really is true "AI". It goes like this:

Nobody really cares.



