Facebook Quietly Enters StarCraft War for AI Bots (wired.com)
230 points by sus_007 44 days ago | 121 comments



Dave Churchill, who is heavily involved in the StarCraft AI scene, posted this in the StarCraft AI Facebook group:

"Hey everyone, I just wanted to make a quick statement about this article that was posted in WIRED. If Tom had told me that his intention for this article was to focus on the loss of ANY bot, Facebook or not, I would not have given him the information that I did to write this article. I am normally a huge fan of Tom's AI articles, but I think that this is a shameful clickbait title that focuses on the wrong side of the story.

It turns out that there may actually be a bug with how Facebook's serialization library interacted with our tournament environment, causing their strategy selection to not work properly, causing a huge drop in performance for their bot. We will be investigating this over the next few days to see if it was my mistake in setting things up that caused this, or if maybe something as simple as a Windows firewall setting is at fault. Unfortunately with competitions like this, competitors do not have access to our tournament environment / machines to fully 100% test all of their code on, and sometimes issues like this can arise. I appreciate Facebook's effort in this competition and applaud them for actually having the balls to submit an open source bot, knowing the scrutiny they would face if it wasn't already superhuman. It's a real shame that this is what media chooses to focus on. Tom also asked me what time the results would officially be made public so that the article could be released at the same time, but instead the article ended up being published at 7am the next morning so everyone could wake up to the headline :( Also the word "quietly" implies some sort of sneaking around on the part of Facebook, when their registration has been publicly visible for months.

Note: There is nothing factually incorrect in this article, I just believe its focus is in poor taste, and hurts the SCAI community as a whole, and unfairly singles out a single competitor."


Fail early, fail often. The article was lame. I encourage the FB team to keep going. No one cares if you fail - we just care if you win. Go Gadget Go!


Link to facebook group? A search for 'Starcraft AI' doesn't bring up any results on facebook for me.


Not sure if it's the same one I followed for some time, but maybe it's this one: https://sscaitournament.com/



"The contest Facebook entered, like most AI research in the area, used an older version of StarCraft, which is considered equally difficult for software to master."

I'm surprised this one hasn't been called out yet, but StarCraft: Brood War and StarCraft II are completely different games, not just different versions.

And apart from what was already mentioned regarding information on the scene, this link is pretty useful: http://sscaitournament.com/index.php?action=tutorial

An older post on this topic: http://www.teamliquid.net/blogs/485544-intro-to-scbw-ai-deve...

Facebook: facebook.com/groups/bwapi

As for SSCAIT in particular: http://wiki.teamliquid.net/starcraft/SSCAIT


Hmm, are they truly completely different games?

From a high level, it seems like the elements for an AI to learn are mostly the same: Resource gathering and optimization, map/location battle optimizations, building/technology decisions...

The difference between StarCraft version X and StarCraft version Y is nothing like the way StarCraft and Hearthstone are different games.


I agree with the sentiment that for a human the difference is much larger than for a machine. However, considering that a huge amount of the AI in SC:BW has to handle micro (i.e., in-battle) execution, which is for the most part handled by the in-game AI in SC2, I feel the games are different enough for AI development. Some basics are definitely shared, but beyond a certain level the two games go in different directions.

One thing to add is the OpenBW project (http://www.openbw.com). It is truly amazing: the guys there managed to reverse engineer the full SC:BW engine. I thought that was nearly impossible, as the whole flow depends heavily on implementation details; some of those details are even considered bugs, yet they turned out to be just the right implementation to balance the game perfectly...


It happens that SC2 implements certain micro AI functions for the player; however, that doesn't stop a player from micro-ing all the units themselves, just like in SC1.

The number of units in SC2 makes it much harder for humans to micro, hence the game provides help. But the human player experience shouldn't define the AI player behavior, especially for one that is to learn the game on its own.

Breaking down the goals, components, and constraints of the games, I'd actually say SC1 and SC2 are pretty similar games, especially to an AI on both micro and macro levels.


Extra waypoint actions, better pathing, and the propensity to form nicer circles around an opponent targeted by an attack command aren't what I'd call micro AI. Most of that choppy feeling you get after switching to classic comes from how tiles and stacking are handled in SC2, where units step on each other's toes and slowly ease away, and tiles are much smaller.


Honestly from an AI perspective SC:BW and SC2 are probably more similar than from a human's perspective. 12+ unit selection, multi building selection & intelligent worker rallies aren't huge improvements for AI.


SC2 ticks way faster than SC1, though. You don't notice it much as a human, but you might change your strategy when designing the AI components.


Also I'd imagine the amount of time you spend making sure that units will path efficiently is way less!


But doesn't that just make the search space of possible moves per unit time larger? That would require more processing power, but it doesn't seem like it would necessarily change the fundamentals of how the AI works.


I'd say about as similar as football and soccer. Both involve getting the ball thingy into the opponent's end-of-field thingy. Indeed lots of common skills.


AI has a significant edge in micro and mechanics in BW that is much less significant in SC2.


I haven't played either, but I know for a fact that they use different pathfinding methods (BW uses a grid, SC2 uses a navmesh and adds swarm/flocking behavior).


They are truly completely different games.


So which one isn't an RTS?


"Apples and bananas are basically the same thing because they're both fruit."


I didn't say they are basically the same.

However, you are saying they share nothing in common.


Up next: if a dog fucks a cat do they birth kittens or puppies?


Are StarCraft and Red Alert the same game?


From creating an AI perspective - they are actually pretty similar games! All the elements are there, the end goal is mostly the same, the constraints on the systems are not all too different...


No, but neither did I claim it.

However, that wasn't my argument. My argument was that they are not completely different: there are similarities between the two.

Do StarCraft and Red Alert share absolutely nothing in common?

If they do share something, then you agree with me!


Are your parents truly completely different people?


Here's a link if you want to see CherryPi vs. PurpleWave (two bots mentioned).

https://youtube.com/watch?v=rTf_aL0hrgo


That's surprisingly frustrating to watch. Protoss throwing away its army of zealots with terrible positioning, zerg wasting time attacking buildings instead of sealing the deal, probes walking by enemy forces that ignore them as they mine minerals...

Which is not to say that there weren't impressive parts. I don't think a human could stutter step that many dragoons that well. But there's clearly a lot left to do in terms of tactics.


Hi Strilanc. Author of PurpleWave here.

That particular scene at the sunken colonies was equally frustrating to watch. I fixed the most likely cause of that behavior shortly after the tournament deadline.

But for sure, you're correct. There's a lot left to do on all fronts. There's noticeable progress every year -- none of last year's entries finished much over 50% -- and the growing research community should help accelerate that even more.


See my root-level comment in this thread: https://news.ycombinator.com/item?id=15437177


Hi! Author of PurpleWave here. A bit late to the party but happy to answer questions.


Both Iron Bot and krasi0 have one helluva Terran mech (tank/Goliath/Vulture) push. Do you think a Protoss AI can defeat it?


Sure. I had a winning record vs. Iron in the tournament. Both are very strong but within grasp. The hard part is that any success is a bit fleeting; both have very capable and driven authors who will shore up weaknesses very quickly!


How far away are the AIs from consistently beating humans? What are the biggest problems you are facing right now?


The best AIs right now would beat above average folks from the public. But as you start approaching anyone who resembles a professional, bots rapidly turn into practice dummies.

The biggest problems I'm personally facing at the moment are midgame macro with imperfect information -- how many Gateways do I need to be producing from before I can safely take a third base or transition to endgame tech? -- and engaging Terran mech armies. It's very hard to know when or how to engage a Terran mech army, and you usually only get one chance to get it right. Dithering results in bleeding units you can't afford to lose.

The biggest problem in general is that rules-based approaches are asymptotically approaching the limit of their capabilities. Both of the problems I've described are big undertakings but only scratch the surface of what PurpleWave needs to advance. Machine learning is the way forward. But so far real-time strategy games have proved remarkably impenetrable.

Consider that professional players often place their initial buildings to provide Scarab-proof escape routes for their workers against Reaver drops that arrive ten minutes later. Such placements are only effective if you know to react to Reaver drops by funneling your workers through them to glitch the Scarabs. How are you going to let your bot learn that on its own? There's a lot of work to do.
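
For a flavor of the kind of rules-based macro heuristic behind the Gateway-count question above, here's a minimal sketch. The constants and function names are illustrative assumptions, not PurpleWave's actual code:

    ZEALOT_COST = 100       # minerals
    ZEALOT_BUILD_TIME = 40  # in-game seconds, approximate

    def gateways_supported(mineral_income_per_minute):
        """How many Gateways can this income keep producing Zealots nonstop?"""
        spend_per_gateway = ZEALOT_COST * (60.0 / ZEALOT_BUILD_TIME)  # minerals/minute
        return int(mineral_income_per_minute // spend_per_gateway)

    def safe_to_take_third(income_per_minute, gateway_count):
        # Once production capacity exceeds what our income can feed,
        # spare minerals are better spent on an expansion than on more Gateways.
        return gateway_count >= gateways_supported(income_per_minute)

The hard part the parent describes is that under imperfect information you don't know the opponent's army size, so a static threshold like this is exactly the kind of rule that breaks down.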


Those minimalistic maps that you or somebody else in the #BWAPI channel on Freenode once showed me should be able to help, at least partially?

Or are there no people that produce such scenario-like maps anymore?


Can rule-based approaches be augmented/assisted by machine learning? If so, what aspects do you think machine-learning could do a good job at?


Thanks for the link, really cool to see the AIs at work.


Only slightly related, but does anyone remember the code puzzles that FB had as a filter for the interview process? Specifically one (jurassic island?) where you would connect to a shared server and take control of a dinosaur on the island. Eventually when you gathered enough energy to lay an egg, you could spawn another connection to control that dino.

I had no hope of having a competitive score on the leaderboard, but I loved playing around with that little world.

Edit: Found more info on dinosaur island: https://github.com/robertdimarco/puzzles/tree/master/faceboo...


Fun story, there was a pretty serious vulnerability in there [1].

Lessons learned: don't run CTFs "too close" to your real infrastructure.

[1] http://www.telegraph.co.uk/technology/facebook/8708392/Stude...


Ah! I created a very similar game as a school project a few years ago! It was a 2nd-year project, pardon the messy commits: https://github.com/fiahil/Zappy


Misleading title. FB seems to be taking a self-learning approach, whereas hobbyists are defining bot rules themselves. Hobbyists are winning at the moment with an approach that doesn't scale. Self-learning has a better chance of dominating the field.

Edit: HN mods updated title to be less click-baity (thanks). Earlier it said "FACEBOOK QUIETLY ENTERS STARCRAFT WAR FOR AI BOTS, AND LOSES". The "and loses" at the end was misleading.


As someone who has implemented quite a few reinforcement learning techniques and seen their limitations, I would be surprised if RL could overcome handcrafting for SC any day soon.


The main way the AI bots have problems is with timing. The neural networks used have no way of encoding time-dependent actions in a reasonable way. (As opposed to, say, a fuzzy decision tree with explicit time input.) And if you try to explicitly include it, the curse of dimensionality strikes back hard.

Both absolute and relative timing have to be handled, where "relative" means time since some specific salient action...

Plus the real reward is very sparse. Say, crippling mineral production early may or may not snowball. Likewise being a unit or two up...


What that tells me is that they haven't yet come up with the right featurization - that is, the function that maps input data into the actual neural network node values. The appropriate featurization would include the time information but reduce its dimensionality by hard-coding some basic assumptions, of the kind that humans presumably make when processing the same data.
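
As a toy example of what such a hand-coded time featurization could look like (entirely illustrative; the scales and feature choices are my assumptions, not anything from FAIR's work):

    import math

    def time_features(frame, frames_since_last_attack, fps=24):
        """Compress absolute and relative time into a few low-dimensional
        inputs instead of feeding the network raw frame counts."""
        t = frame / fps  # game time in seconds
        return [
            min(t / 1200.0, 1.0),              # normalized game clock, capped at 20 minutes
            math.sin(2 * math.pi * t / 60.0),  # cyclic phase within the minute
            math.cos(2 * math.pi * t / 60.0),
            math.exp(-frames_since_last_attack / (fps * 30.0)),  # decaying "how recently were we attacked"
        ]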


I think these guys (and most people using deep models) try to avoid hand-crafted features as much as possible.


Gabriel (as well as the others on the team) has definitely looked at these areas. If things were left out or not "featurized", it was likely justified via an ablation test, or showed improvement over benchmarks, or was maybe just to set a baseline, as he is quoted in the main article. I don't know what techniques they used here, but I am excited to find out!

On the specific issue of encoding time-dependent behaviors in models, I think it is related to a broader issue that shows up in many application areas. To me the critical factor is that these models are ruthlessly good at exploiting local dependencies and totally forgetting long-term global dependencies or respecting required structure in control/generation.

This basically means it is very difficult to train long-term, time dependent behavior without tricks (early/mid/late game models, extensive handcrafting of the inputs, or using high level "macro actions"). Indeed, FAIR's recent mini-RTS engine ELF directly gives macro actions, in part to look closer at how well global strategies are really handled and remove one factor of complexity [0].

Gabriel's PhD thesis was entirely on Bayesian models for RTS AI, applied to SC:BW [1], so I am sure he is well aware of the "classic/rules based" approaches for this.

[0] https://code.facebook.com/posts/132985767285406/introducing-...

[1] http://emotion.inrialpes.fr/people/synnaeve/phdthesis/phdthe...
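
To make the "macro actions" idea above concrete, here is a hedged sketch; the action names and the game interface are hypothetical, not ELF's actual API:

    # Each high-level macro action expands into a scripted sequence of
    # low-level commands, shrinking the action space the learner must explore.
    MACRO_ACTIONS = {
        "build_worker": ["select_base", "train_worker"],
        "expand":       ["select_builder", "move_to_expansion", "build_base"],
        "attack":       ["select_army", "move_to_enemy_base", "attack_move"],
        "defend":       ["select_army", "move_to_own_base", "hold_position"],
    }

    def execute_macro(action_name, game):
        for step in MACRO_ACTIONS[action_name]:
            game.issue(step)  # hypothetical game interface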


Alphago used several hand-crafted features as of the Nature paper, so DeepMind at least is not above a little feature engineering.


I suspect you might be able to do surprisingly well with just a few simple features, e.g. what did I last see at each position and how long ago was that, how many of each enemy unit have I seen simultaneously and at what time, etc.

As to the sparsity of reward, I'm not sure this is such a big problem. Once the AI learns that e.g. 'resources are good', it can then learn how to optimize resource production. You could even give the process a head start by learning a function of time+various resources+assorted features to win rate from human games to use as the reward function.
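
A sketch of that "last seen" memory, assuming a simple tile grid (names and shapes are illustrative):

    import numpy as np

    class LastSeenMap:
        """Fog-of-war memory: per tile, what we last saw there and when."""
        def __init__(self, width, height):
            self.last_unit = np.zeros((width, height), dtype=np.int32)    # unit-type id, 0 = empty
            self.last_frame = np.full((width, height), -1, dtype=np.int32)

        def observe(self, frame, visible_tiles, units_by_tile):
            for (x, y) in visible_tiles:
                self.last_unit[x, y] = units_by_tile.get((x, y), 0)
                self.last_frame[x, y] = frame

        def staleness(self, frame):
            # Feature plane: how long ago each tile was last seen.
            return np.where(self.last_frame < 0, np.inf, frame - self.last_frame)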


"Resources are good" doesn't really mean anything.

Yes resources are good, but how do you know when to expand?

Judging from your opponent's movements, you can tell if they're turtling, going for some cheese strat, or doing some build where they may not be able to respond to an aggressive expansion.

Of course, if you choose wrong, you lose the game.


Why do you say that? The Dota 2 bot OpenAI showed earlier this year seemed pretty convincing, and the problem seems similar...


Starcraft is a much larger, more complex, more freeform game than Dota 2. It's like Go compared to chess.


I disagree with this (I used to play Warcraft 3, and currently play Dota 2), but that's beside the point. The Dota 2 OpenAI is only set up for one mirror matchup (impossible in real games) involving one hero on each side, in one lane, and only for the first 10-ish minutes. This is maybe 1% of a real Dota game.


I think you are both correct. Starcraft has a far larger space of verbs at any given moment, and many of them can impact each other, giving it one form of complexity, while Dota 2 clearly has a much more complex set of units and abilities, leading to more possibilities in total, even if the number of possible actions moment to moment is more limited. But yeah, the bot was a teeny little bit of the game, impressive as it was.


If Dota is anything like League, I'm not sure I agree completely. I think in League there's more 'future prediction' needed, i.e. the current state is less immediate than in StarCraft. In StarCraft you can quickly see who is winning, but in League there are things such as pushing lanes to consider, and knock-on effects from later back timings (I know Dota doesn't have backing, but it has couriers?).

While all that can be extrapolated from the current state, I think in StarCraft it's much easier to go for immediate gains by destroying more supply/resource value of units and extrapolating from there.


Starcraft strategy has a lot more weird nuance to it.

I noticed this building in this position at this time and I haven't been attacked by X unit yet, so he's probably doing strategy Y. I better skip some unrelated building I was going to make, so I can have an extra unit Z in case he's doing that strategy. Then I'll place the units at a particular spot to try to trap him because that unit will be vulnerable in this other spot so he's unlikely to move through that spot.


An SC:BW bot can be perfectly aware of every unit in vision at all times.

It wouldn't be a surprise if some research team could put out a bot achieving superhuman victories purely by out-microing an opponent with minimal strategic choices.


>It wouldn't be a surprise if some research team could put out a bot achieving superhuman victories purely by out-microing an opponent with minimal strategic choices.

Yeah they did pretty much that. But the problem is it's a very brute-force approach and violates some rules of the game.

They jam thousands of commands per second into the game, and give each unit its own rudimentary AI. The units basically just dance at maximum range, magically dodge hits, etc.

If they limit it to 600 actions per minute (10 keystrokes per second, still conceivable for the human mind but beyond human fingers), it becomes a much harder AI problem.
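
Enforcing a cap like that is simple in principle; here's a minimal rolling-window sketch (illustrative only, not how any tournament actually polices APM):

    import collections

    class ApmLimiter:
        """Allow at most max_actions commands per rolling 60-second window."""
        def __init__(self, max_actions=600, fps=24):
            self.max_actions = max_actions
            self.window = 60 * fps  # window length in frames
            self.issued = collections.deque()

        def try_issue(self, frame, command):
            while self.issued and frame - self.issued[0] >= self.window:
                self.issued.popleft()  # drop actions that fell out of the window
            if len(self.issued) >= self.max_actions:
                return False  # over budget: the bot must prioritize its orders
            self.issued.append(frame)
            command()  # hypothetical callable that sends the order to the game
            return True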


Yeah, for people unfamiliar with starcraft bw: Whereas in other strategy games you may be able to improve the effectiveness of a unit 2-3x by micromanaging your units perfectly, in bw microing certain units perfectly can improve their effectiveness by something like 100x.

In the case of certain unit matchups, say zergling versus vulture, the vulture should be able to kill an infinite number of zerglings given that it is microed correctly. However, despite the zergling being useless against a vulture on paper, in a human game you just don't have enough time to babysit your vultures with everything else going on, so you end up seeing zerglings being used against vultures somewhat cost-effectively even at professional levels.
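
That kind of perfect kiting reduces to a tiny loop; a hedged sketch against a hypothetical BWAPI-like interface (none of these method names are real BWAPI calls):

    def micro_vulture(vulture, zerglings):
        """Stutter-step: fire the instant the weapon is ready, flee while it reloads."""
        target = min(zerglings, key=vulture.distance_to)
        if vulture.weapon_cooldown() == 0 and vulture.in_range(target):
            vulture.attack(target)
        else:
            # Vultures outrun zerglings, so executed perfectly this never takes a hit.
            vulture.move(vulture.flee_position_from(target))  # hypothetical helper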


>The units basically just dance at maximum range, magically dodge hits, etc.

While it certainly isn't fair to play against, it does have a certain elegance[1].

There's also the problem that even if it's AI vs AI, the races and units are balanced around reaction times of humans.

[1] https://www.youtube.com/watch?v=IKVFZ28ybQs


> It wouldn't be a surprise if some research team could put out a bot achieving superhuman victories purely by out-microing an opponent with minimal strategic choices.

The chess equivalent would be letting Deep Blue take 10 years to evaluate each move; it's not a very interesting system anymore since it isn't playing under normal rules (roughly 90 minutes per player for the whole game).

Any "real" SC AI will have limitations on input, say 300 actions per minute. It'd be pretty interesting to see how few actions per minute an AI could use to defeat the top human players.


>The chess equivalent would be letting Deep Blue take 10 years to evaluate each move;

Even worse and less interesting - it's a bit like allowing the computer to move two pawns in each turn.


And then the mind games begin.


OpenAI was playing 1v1


A lot of complexity in Dota comes from the interactions between 10 players. Make it 10 AIs having to communicate using chat. Make them pick and ban. Make all items available. And then you'll have real complexity.

With hundreds of hero and item combinations, jungling, Roshan, cooldowns, and pick + ban, you can actually get to SC's level of complexity.


It's 10 player-controlled heroes vs. ~200 units per player.

So, SC is still a much more complex space. Dota has non-player bots (creeps), but they are similar to SC buildings and follow very simple rules.


You can't compare the complexity of a Dota hero with an SC unit. SC units are very basic: they don't have XP, don't have a skill tree, don't have hundreds of possible items, and their skills don't vary so much with context.


This matters far less than you might think. Go has simpler pieces without movement, and it's still much more complex than chess.


This is more of an artifact of the size of the game board than anything, I think. 9x9 go is decidedly simpler/easier than chess, and I expect chess on a 19x19 board with the number of pieces scaled proportionally (each player starting with, say, around 90 pieces) would be a lot more difficult to play/analyze than a standard go game.


Well sure, but that's the exact same issue as you see in SC2 vs DoTA. In SC you are simply dealing with a vastly more complex state.


Not in Dota.

In Dota, combinations open the road for brand new moves. Some items teleport, some regenerate, some cancel buffs, some crit, some cleave, some cut trees, some slow, some stun, some give vision, etc.

Now in SC, you have 3 or 4 main builds for a given matchup. You see the building, and you know where this is going.

In Dota, depending on the 10 heroes, current money, item combinations, and player skill, you may expect one build or another.

Also, one zergling or 10 zerglings are pretty much the same to consider from a behavior point of view. The number doesn't matter that much, only the intensity of the effect. And a zergling will always do the same things: move, attack, burrow.

The same unit in Dota can have a completely different role depending on the context.

My guess is that an AI would give you a much bigger advantage in SC because it can sustain more APM than a human, strat or not, while in Dota at a high level, strategy matters more in the long run.


SC2 has hundreds of viable strategies per race as openers, but like chess openings they are just that: openings. As the game evolves you get complex iterations and risk/reward situations. Bluffing is very much a part of high-level play, as min-maxing requires you to force the other player to waste resources in any way possible.

For example, larvae are one of Zerg's most valuable resources, and there are several ways of attacking that resource: by killing units or simply forcing them to go more defensive.


Didn't they restrict the game to a single hero and a subset of all items to make it tractable for AI? And didn't people find a way to break that bot?


It was restricted to one very artificial game mode (1 vs 1 mid, Shadow Fiend mirror matchup) that is not representative of real games, but is good for practicing one aspect of Dota. Some items in this game mode are banned in order to make it less safe for both sides, and so that there is a better chance of one side dying before ten minutes. People have beaten the bot by both "breaking" it and by just outplaying it (but the latter very, very rarely).


> And didn't people find a way to break that bot?

According to the player interviews and Reddit discussion threads, the "break" you are talking about was more like being really unpredictable, thereby finding a play style that the AI had never encountered.

The players were flailing to find a way to defend against an AI that is learning quicker over time than they are.


Misleading? How? It's the title of the actual article, and it's what happened. Just because they'll probably win in the future because their approach is more sound, doesn't mean they didn't lose now.


Makes it seem like they lost in an AI bot battle. But they are in fact creating AI bots that compete against rule-based bots. Big difference.


AI is not just neural networks / deep learning. Those rule-based bots are AI too, just a different approach (expert systems are a subcategory of AI as well).


They did lose in an AI bot battle. A rule based bot is still an AI.


Things can be accurate but misleading. This is a good example of misleading and accurate.


Automated decision making does not an intelligence make on its own.

Call again when it can actually learn the game from limited inputs available to human players.


Factually correct title.


6/28 is a very respectable finish, though I was surprised that the top 3 finishers were just individuals, not even teams.


They're not learned AI; they're built on classical hard-coded logic perfected through the years, with lots of domain-specific knowledge from their authors. That's a much easier way to produce something that does okay than to innovate with a new AI scheme, but importantly it also has no chance of ever beating a player at the professional level of play, let alone a top one.


> A Microsoft research paper on machine learning this year said that improving predictions of when a user will click on an ad by just 0.1 percent would yield hundreds of millions of dollars in new revenue.

Do you really need a 'research paper on machine learning' to understand that?


They need a research paper to show they're not pulling numbers from their ass.


Do people really click on ads on purpose?


Perhaps that's the trick -- if you make an ad look less like an ad, it's 0.1% more likely to convert.

And Google still hasn't found a business that beats its search advertising network, so I assume that people click on ads, even if I'm not in that demographic.

You should really watch a child play a game that has interstitial ads. It's quite obvious that they often click on ads because they want to learn more (maybe not fully convert, but intentionally click).


Yes...?


Does anyone have the link to this paper?

I am curious if that millions in revenue is for Microsoft (not surprising) or advertisers (more interesting) - I would love to read through their thought process either way.


One of the bot authors participating in SSCAIT (https://twitch.tv/sscait) once sent me this link:

http://wiki.teamliquid.net/starcraft/Micro_Training_Maps

They are supposed to be mini maps where your bot can train separate aspects of the game on a much smaller scale.

Extremely useful if you're an aspiring bot author.


omg what a stupid title from wired. wired showing again that they are a sensationalist piece of internet garbage


Yeah, I'm disappointed their articles keep popping up on HN.


If the title has an anti corporate slant, especially FB/Google, it will continue to be upvoted on HN regardless of actual content


Did somebody say anti-corporate slant?!?


They came in sixth. That's what happened. Exactly what is so sensationalist here?


Weak article, but I'm still surprised there's no mention of OpenAI's foray into Dota 2: https://blog.openai.com/dota-2/


I'm actually glad no one mentioned this because this event in and of itself was overhyped publicity. The Open AI bot had hard coded behaviors and was defeated handily within hours of being available to the public. It's since been iterated upon and there's still a handful of pros who are regularly beating it WITHOUT exploiting the holes in the logic. This is effectively just as weak as the article regarding facebook.


> and was defeated handily within hours of being available to the public.

The public exploited tricks to beat it. They did not beat it 'handily'.

Afterwards, the pros who do beat it only manage to do so ~2-3 times for every 100 games played. I believe they have been playing the same version that was shown at The International and not an iterated version.


(I work at OpenAI.)

Correct, we've been playing a number of pros using the same bot played at TI. We do have a stronger version which is just two days more of training (gets a 70% win rate vs the one at TI), but haven't seen a need to test it out. We'll likely do a blog post in upcoming weeks with more stats and commentary from the pros; would be curious what people would like to know!

Incidentally, the various exploits that people used are all similar to how we actually develop the bot. We try to find areas of the strategy space it hasn't explored, and then make a small tweak to encourage it to explore that area. Lots of progress comes from removing hardcoded restrictions, which are nice to get started. So the fact there exist exploits wasn't surprising to us — what would be surprising would be exploits we couldn't fix.

1v1 has always been a proof-of-concept for us. The fun part is really 5v5, which is what we're working on now (and hiring for! ping me if you're interested: gdb@openai.com).
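
For readers curious what "encourage it to explore that area" can mean mechanically, one generic trick is a count-based novelty bonus; this sketch is an assumption for illustration, not OpenAI's actual method:

    import collections
    import math

    class ExplorationBonus:
        """Add reward for rarely-visited states, nudging the policy toward
        under-explored parts of the strategy space."""
        def __init__(self, scale=0.1):
            self.counts = collections.Counter()
            self.scale = scale

        def bonus(self, state_key):
            self.counts[state_key] += 1
            return self.scale / math.sqrt(self.counts[state_key])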


> 1v1 has always been a proof-of-concept for us.

I understand this is a perfectly reasonable early test, but there are so many complaints about "it was just a restricted subset of the game and 1v1".

This is like complaining that Google doesn't release first-pass code (with minimal unit tests and no stress testing) to their production sites across the world. Everything that loops starts with the first iteration.

Also, keep up the good work, OpenAI! And please remember Asimov's 3 rules.


Interesting... I was not aware of this, thanks


Where can I watch other Starcraft Bots play each other?

Anyone have links to the matches?


Twitch is where I would be digging around. https://twitch.tv/sscait is one such with commentary.


There is the StarCraft Artificial Intelligence Tournament on YouTube: https://www.youtube.com/user/certicky/


If anyone is interested in similar RTS AI play but for Age of Empires 2:

https://youtu.be/ePgEUSazsos


"Facebook Quietly..." headline on Wired... um. nothing after that could be true.


Unless there are in-depth APM limitations on the AI, there's no contest for AI vs. A human.


Umm, they have human vs AI challenges in starcraft and the AIs are vastly outclassed by humans. No APM limits needed.

AIs have a long way to go to beat good human starcraft players.


What you say is true right now, but in the long term, the gp is correct unless we hit an AI asymptote which prevents AI from approaching human StarCraft capabilities.

Just from a pure economy standpoint, any computer process has quite an advantage from just optimizing action queues and keeping idle workers working.
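
The idle-worker point is the canonical example: it's a trivial loop for a bot but constant overhead for a human. A minimal sketch, with a hypothetical bot API:

    def keep_workers_busy(workers, mineral_patches):
        """Re-task any idle worker to the nearest mineral patch, every frame."""
        for worker in workers:
            if worker.is_idle():
                patch = min(mineral_patches, key=worker.distance_to)
                worker.gather(patch)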


Your last comment is, frankly, complete bullshit.

People have already created TAS programs that can take out infinite numbers of enemies with minimal stock (e.g., medivac/tank vs. infinite ultralisks), or have zerglings split so perfectly that each siege blast hits only one zergling.

As far as micro, AI already has proven capability to absolutely dominate human micro, like not even close.

As far as build paths and macro decisions, AI isn't there yet, but all it takes is one player and one programmer to come up with an in-the-middle, well-rounded build path that doesn't lose to any cheese (sacrifice some economy to just have an army at all times), and the AI will micro-dominate the rest in extreme, humanly impossible army trades (I mean winning a 40-supply vs. 200-supply army battle).

Honestly, just imagine having ONE mutalisk microed perfectly all game, never taking lethal damage, outputting as much damage as possible from every angle. And you could have all 20-140 of your army supply doing this at all points in the game.

No contest. Just hasn't had time devoted to it yet.


AIs have already been doing perfect micro cheese strategies (even with mutas!).

And EVEN THOUGH they have access to ridiculous APM and the ability to do BS cheese strategies like that, they STILL suck.

Seriously, go watch some of these games. The AIs are freaking terrible, despite the fact that they basically cheat at the game.

Also, the AIs' "perfect" micro frankly only applies at the individual unit level. I.e., they can kite like no tomorrow with a single marine, but as soon as you have anything more complicated than that, such as "fight with 10 marines", you learn that the AIs can't so much as form a concave.

Yeah, those 10 marines are all INDIVIDUALLY stutter stepping, but it turns out that perfect stutter stepping doesn't matter much when your army is cut in half due to it being split up.

Controlling more than a couple units "perfectly" (with regards to each others actions) seems to be out of reach of any AI out there.


Brood War competitions have been going on for seven years, with individual bots getting years of development. If beating professionals was easy it'd have been done by now.

It turns out that being really good at narrow micromanagement situations doesn't add up to winning complete games. StarCraft is messy and difficult.


Anyone have good resources on how to get into this? I would love to learn more about self-learning with regard to StarCraft.


The easier way: watch the course videos here: Berkeley CS294 Deep Reinforcement Learning, Fall 2017: http://www.youtube.com/playlist?list=PLkFD6_40KJIznC9CDbVTjA...

The harder, more complete way: read "Deep Reinforcement Learning: An Overview" (https://arxiv.org/abs/1701.07274), progressively implementing subparts in Python and OpenAI Gym (https://gym.openai.com/read-only.html).
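
If it helps to see the moving parts first, the core Gym loop is tiny (this uses the classic pre-0.26 Gym step API; swap in a real policy for the random action):

    import gym

    env = gym.make("CartPole-v1")
    for episode in range(5):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = env.action_space.sample()  # replace with your policy
            obs, reward, done, info = env.step(action)
            total += reward
        print("episode %d: return %.0f" % (episode, total))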


IRC: Freenode, channel #BWAPI. Most of the authors of the bots competing 24/7 on https://twitch.tv/sscait hang around at various times of the day and night. They can provide you with plenty of entry material.

A few links:

- https://sscaitournament.com/

- https://github.com/dgant/purplewave


Blizzard has a GitHub account with a couple of Python libraries for SC2 and other games.

https://github.com/blizzard


starcraftai.com ?


Is it fair to flag all articles that bait / mislead readers?


Quietly means paying for an article to be written in Wired now?


If AI weren't a religion, people would realize it is a joke by now.



