
AlphaStar: Mastering the Real-Time Strategy Game StarCraft II - zawerf
https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/
======
rcheu
This is really impressive. I didn't expect StarCraft to be played this well
by a machine-learning-based AI. I'm excited to read the paper when it comes
out!

That said, I'm not sure I agree that it was winning mainly due to better
decision making. For context, I've been ranked in the top 0.1% of players and
beaten pros in Starcraft 2, and also work as a machine learning engineer.

The stalker micro in particular looked to be above what's physically possible,
especially in the game against Mana where they were fighting in many places at
once on the map. Human players have attempted the mass stalker strategy
against immortals before, but haven't been able to make it work. The decisions
in these fights aren't "interesting"--human players know what they're
supposed to do, but can't physically make the actions to do it.

While it has similar APM to SC2 pros, its actions are probably far more
efficient and accurate, so I don't think that metric alone is enough. For
example, human players
have difficulty macroing while they attack because it takes valuable time to
switch context, but the AI didn't appear to suffer from that and was extremely
aggressive in many games.

~~~
gamegoblin
In the mass stalker battles, the AI's APM exceeded 1000 a few times, and no
doubt most of that was precisely targeted, whereas a human doing 500 APM
micro is obviously going to be far more imprecise.

I think a far more interesting limitation would be to cap APM at 150 or so, or
to artificially limit action precision with some sort of virtual mouse that
reduced accuracy as APM increased.
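For illustration, the virtual-mouse idea could be modeled as click jitter that grows with recent APM. A minimal sketch; all parameter names and constants here are made up, not anything DeepMind used:

```python
import random

def jitter_sigma(recent_apm, base_sigma=2.0, apm_scale=300.0):
    """Pixel-level click noise that grows with the agent's recent APM."""
    return base_sigma * (1.0 + recent_apm / apm_scale)

def click_with_noise(target_x, target_y, recent_apm):
    """Hypothetical 'virtual mouse': the intended click lands with Gaussian
    jitter, so bursting to very high APM costs the agent accuracy."""
    sigma = jitter_sigma(recent_apm)
    return (random.gauss(target_x, sigma), random.gauss(target_y, sigma))
```

With these invented constants, the jitter standard deviation is about 2.7 px at 100 APM but grows to about 8.7 px at 1000 APM, so precise split-targeting during a burst becomes unreliable.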

~~~
pmontra
I understand the spirit of the proposal, but that would be like limiting a
computer to adding at most two numbers per second. It's OK if we want an
interesting contest against humans, but it wouldn't be a fair estimate of a
computer's math capability. It's also not the point of using computers to do
math instead of a room full of accountants. I'm OK with the AI going as fast
as it can and playing superhuman strategies because it can be that fast.
After all, we won't limit an AI's output rate when we let it manage a
country's power grid.

~~~
sl1ck731
The purpose of limiting speed isn't to make an interesting contest, it is to
accurately compare the "math" instead of the speed the math is done at.

It isn't surprising that it's fast; the surprising part is that it can make
human-like decisions. The only way to tell whether its thinking is human-
like is to restrain it from "brute forcing" the contest through speed.

The model has likely learned that the faster it does things the better the
outcome. What it needs to be measured on is strategy.

~~~
freeflight
But isn't the competency of a StarCraft player also measured by his/her
speed?

In that context, you can't really measure strategy without accounting for
timing/speed because a lot of tactics and strategies only become viable once
the player has the required speed to actually realize them aka "micro".

~~~
ajuc
That would be right if AI and human player had the same opportunities for
micro.

They don't, because the AI doesn't use physical objects to move stuff in the
game. The AI just "thinks" that this stalker should blink and it blinks. A
human player has to deal with the inertia of his hand and of the mouse.

If you want a fair competition of micro, make a robot that watches the
screen through its camera, moves the mouse, and presses keys to play
StarCraft.

Then the bandwidth of the interface is the same for both players, and we can
compare their micro.

~~~
aurelwu
You don't really need a real robot; just assign some "time cost" to various
actions, depending on spatial distance, the type of action, and whether it
is a different action than the previous one. Humans are really fast when,
for example, splitting a group of units, but performing multiple different
actions on different areas of the screen, or even multiple screens, takes a
lot longer. You don't need to fully emulate human behaviour, but getting
somewhat close would really show how strong the AI is tactically and
strategically without superhuman micromanagement.
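The time-cost idea could look something like this: a Fitts's-law-flavored cost per action, with a penalty for switching action types. All constants are invented for illustration:

```python
import math

def action_cost(prev, cur, base=0.05, dist_coef=0.10, switch_penalty=0.15):
    """Hypothetical time cost (in seconds) charged to the agent per action:
    a fixed base, a term that grows with on-screen distance (roughly Fitts's
    law), and a penalty when the action type differs from the previous one.
    `prev` and `cur` are (x, y, action_type) tuples."""
    px, py, ptype = prev
    cx, cy, ctype = cur
    dist = math.hypot(cx - px, cy - py)
    cost = base + dist_coef * math.log2(1.0 + dist)
    if ctype != ptype:
        cost += switch_penalty
    return cost
```

Repeating the same action nearby stays cheap (matching how fast humans split a group of units), while jumping across the screen and changing action type is charged much more.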

------
modeless
_AlphaStar interacted with the StarCraft game engine directly via its raw
interface, meaning that it could observe the attributes of its own and its
opponent’s visible units on the map directly, without having to move the
camera - effectively playing with a zoomed out view of the game_

 _Additionally, and subsequent to the matches, we developed a second version
of AlphaStar. Like human players, this version of AlphaStar chooses when and
where to move the camera, its perception is restricted to on-screen
information, and action locations are restricted to its viewable region._

I was really curious whether they would attempt moving the camera like a
human. Sounds like it's still a work in progress, but very exciting! Even this
isn't enough to make it fully like a human player, as I believe it is still
getting numerical values for unit properties rather than having to infer them
from the pixels on the screen. But it seems possible to fix that, likely at
the cost of drastically increasing the training time.

The benefit of using pixels, of course, would be that the agent would become
fully general. It would probably immediately work on Command & Conquer, for
instance, while the current version would require deep integration with the
game engine first. But I think the training time would be impractically long.

~~~
sleepydog
I'm curious, would the AI be able to see cloaked units? In SC1 you could see
them (I think SC2 is the same), but it was very difficult. How does the
'raw' interface expose that subtlety?

~~~
ionforce
This is actually a great question. Like what does it mean for a unit to be
cloaked?

If humans can, under ideal circumstances, see cloaked units... maybe the
only mechanic that shows up (for bots or an API) is the inability to be
targeted with an attack command (i.e. you can still be hit with splash
damage from ground targeting).

------
potatofarmer45
APM is a really, really misleading metric to use here. Most StarCraft pros
spam keys to keep themselves loose and ready for when the time comes. Even
at the start of a game, you'd often see players with 500 APM warming up
their fingers.

Here, there is the laughable graph of the computer's APM over time. The key
point here is that during the mass battles that won the game, the APM spiked
to >1000. And if you look closely in slow motion, there was perfect split
targeting. A human player wouldn't be able to perfectly select the exact
number of stalkers to hit the enemy without wasting a surplus shot. They can
with small groups, but not when it's mass stalkers. This efficiency is just
beyond humans. The APM here indicates much more effective use of each action
than a typical human's.

This is super impressive as an achievement, but this is clearly not a
smarter AI; it's more like the video a while ago where zerglings could
perfectly micro against siege tanks to avoid splash damage. It is clearly
better than humans in certain ways, but not smarter.

------
owens99
Interested to hear what others here think.

The APM metric includes all clicks a player makes. AlphaStar's APM is lower
than a typical pro's, but that does not mean it is making fewer actions. All
pros try to keep a high "tempo" by constantly clicking the screen even when
they are not making any actions. e.g. Instead of sending a unit to a specific
location with one click, they will click 5-6 times while dragging the mouse to
that spot. The theory is that keeping a high tempo allows you to make more
useful actions overall.
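One way to make the comparison fairer would be to score "effective" APM after filtering out tempo clicks. A toy sketch; the 0.3 s spam window is an arbitrary assumption:

```python
def effective_apm(actions, game_seconds, spam_window=0.3):
    """Count actions per minute after dropping rapid repeats of the same
    command (the tempo-keeping clicks described above). `actions` is a list
    of (timestamp_seconds, command) tuples in chronological order."""
    kept = []
    for t, cmd in actions:
        if kept and cmd == kept[-1][1] and t - kept[-1][0] < spam_window:
            continue  # same command fired almost immediately: likely spam
        kept.append((t, cmd))
    return 60.0 * len(kept) / game_seconds
```

Under a filter like this, a burst of drag-clicks toward the same spot collapses to roughly one effective action, while precisely targeted distinct commands all survive.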

Unless AlphaStar's average and maximum APM are 3-4x lower than a pro's,
rather than just a 2x lower average and the same maximum, I do NOT believe
that this is a fair test of the AI's strategic decision-making ability.

~~~
owens99
I also wanted to note that the millisecond-scale response times of AlphaStar
also seem like an unfair test.

While the average was around 350ms, this number is skewed by a significant
long tail extending up to 1300ms for some actions. The most common response is
sub-200ms and the third most common response is 65ms-100ms. The best pros can
consistently hit close to 200ms but I am not sure if they can even hit below
100ms, let alone 65ms.
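The shape described here is easy to reproduce: a long tail drags the mean far above the typical reaction. The numbers below are invented to mimic the reported distribution, not AlphaStar's actual data:

```python
def mean_and_median(delays_ms):
    """Compare the mean (tail-sensitive) with the median (tail-robust)."""
    s = sorted(delays_ms)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return sum(s) / n, median

# Eight fast reactions plus two very slow outliers (made-up values)
delays = [80, 90, 100, 120, 150, 160, 180, 200, 1100, 1300]
mean, median = mean_and_median(delays)
```

Here the mean comes out near 350 ms even though the typical (median) reaction is around 155 ms, so a ~350 ms average can coexist with frequent sub-100 ms responses.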

------
celeritascelery
Here is an example of what you do without artificial limitations on actions.
[https://youtu.be/3PLplRDSgpo](https://youtu.be/3PLplRDSgpo) Just goes to show
how important those restrictions are for competitive gameplay against AI.

~~~
owens99
This is amazing. Thanks for sharing.

------
rhlala
Trying to solve imperfect-information games is interesting.

But a realtime game is not, as computers have an edge in boring fields (APM,
etc.).

I feel the real challenge for AI will be imperfect-information, turn-based,
cooperative (teamplay-based) games.

Unfortunately there are currently no games AFAIK with these
characteristics...

A 2D turn-based Counter-Strike or Dota/LoL would be a really good format of
game to solve.

~~~
sandov
Civilization?

~~~
kerbalspacepro
AlphaCiv would be interesting as hell, but I fear that there isn't historical
data for Alpha to consume. I would expect it would have a similar result to
AlphaStar though-- it beats the human player before the end game.

~~~
worldsayshi
Perhaps it should have other targets than simply beating the opponent. Like
getting good statistics.

------
hughzhang
This is really impressive. Even though the camera hack and uncapped burst
APM (only the average was capped) made this version slightly unfair, since
an AI that can micro insanely well with stalkers is basically unbeatable, I
feel confident based on this performance that DeepMind will release a
superhuman AI with lower EPM than humans very soon.

~~~
Nanocurrency
I think it's "a lot" unfair. Waiting anxiously for the next version which will
carry more limitations.

------
FlorianRappl
While this is super impressive, we should not forget: it was just a single
matchup (1 of 9, each requiring a lot of adaptation, especially across the
three races) on a single map (1 of N; granted, in SC2 the maps are not as
diverse as in the original StarCraft), using a single (outdated) patch.
Humans are still masters of knowledge transfer, playing an unknown map or a
lesser-known matchup with greater precision. Once an AI reaches a level
where knowledge is transferred more efficiently than we are capable of, I
will start getting worried - not beforehand.

------
jgrowl
I wonder how useful AI will be in balancing games in the future. Games with
more than one race and multiple upgrade paths seem like a nightmare to keep
even for players who are equally skilled.

It would be interesting to have a game that would auto-balance itself,
especially if you wanted to add extra content without having to worry about
throwing everything off.

~~~
forax
That's a cool idea but many games already struggle to find a fair balance
point at all ranges of human skill, so expanding that range to include AI
could just make things more difficult. Maybe there's a way to force the AI to
replicate bad players, but I'm not sure what learning objective you would give
it to achieve that.

~~~
skwb
I don't follow game balancing too closely, but I (naively) assume there's
some reasonable analytical solution for tuning. I would imagine that if
you're Blizzard and have logs of all the data, you could just regress race
attack/defense/movement/etc. stats to find a potential equivalence point.

I suppose there are lots of interactions, but on finding that "Zerg beats
Protoss X% of the time", you could balance by messing slightly with
resources or a blanket buff.
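The tuning idea can be sketched as a one-knob auto-balancer: if you can estimate a winrate as a function of some stat (from logs or simulations), bisect the knob until the matchup is even. Everything here is hypothetical; `winrate_fn` stands in for whatever regression or simulator Blizzard might actually have:

```python
def balance_knob(winrate_fn, lo, hi, target=0.5, tol=1e-4):
    """Bisect a single tuning parameter (e.g. a resource or damage
    multiplier) until the estimated winrate crosses `target`.
    Assumes winrate_fn is monotonically increasing in the knob."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if winrate_fn(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With a toy winrate model like `lambda k: k / 2`, the balancer settles near `k = 1.0`, the point where the matchup is 50/50. The interactions mentioned above are exactly what breaks this single-knob picture in practice.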

~~~
glun
Starcraft is generally balanced by adjusting the map pool rather than the
races.

------
nopinsight
Fun observation on the possible reason why they chose Protoss from the
publicity standpoint:

Zerg — Too icky for some in the general public

Terran — Too human-like. Might clearly trigger the imagery of human-robot
wars.

~~~
fernandotakai
imho, the AI had a better time playing as a Protoss because of blink stalkers.

you can probably perfectly micro marines + marauders + medivacs with stim,
but it's a lot harder than blink.

------
ChuckMcM
Because that is what we need to teach AI to do, build bases, extract
resources, and build units to go out and kill everything else on the map :-)

It is an impressive result, it seems pretty clear to me that as a force
multiplier for developing decision tree software this technique works faster
and more effectively than the waterfall techniques, and it gets better post
release. But beyond the game theoretic applications I am still looking for an
application where it reliably creates a better back end code generator for a
new architecture faster than a person can.

~~~
pdimitar
I think the first ever applications of a more generic AI will be in corporate
and military anyway, so yeah, the AI will build bases, extract resources and
send security units to kill everything else on the map indeed.

------
ForrestN
Can anyone who has a full context compare this to OpenAI's work on Dota 2?
Which is more impressive, both in how far along it is and the relative game
difficulty?

~~~
b_tterc_p
They’re probably comparable and at similar levels of success. Talking about
which game offers more entropy isn’t a good metric yet because neither AI
seems to be trying to utilize the depth available to them.

E.g. the Dota game had 5 AI working as a team. That feels like it should
demonstrate additional competence, but it’s not really clear that they worked
as a team.

The Dota game allowed greater variation in starting conditions, but it's not
apparent that the AI adapts its strategy to this well (e.g. hero choice).

Both of them are capable of creating a basic strat and excelling at micro.
Neither appears to have great depth of strategy.

------
minimaxir
How did they make those data visualizations? (specifically, the visually-
skewed ridge plots). That's a nice approach for those types of plots since
they can get cluttered without the perceptual skew.

~~~
b_tterc_p
[https://seaborn.pydata.org/examples/kde_ridgeplot.html](https://seaborn.pydata.org/examples/kde_ridgeplot.html)

I would try with a Seaborn facet grid? I think they’ve got something custom,
but this should get close (be aware this specific example is a kde so it will
normalize total area)

On a second glance this won’t get the 3D horizontal offset.
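If you want the diagonal offset in matplotlib directly, something like this works; the offsets and styling are guesses at the blog's look, not how DeepMind actually made the figure:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; remove if running interactively
import matplotlib.pyplot as plt
import numpy as np

def skewed_ridge_plot(curves, x, dx=0.4, dy=0.6):
    """Ridge plot where each successive row is shifted right by dx and up by
    dy, giving the pseudo-3D 'perceptual skew' of the blog's figure.
    `curves` is a list of y-arrays over the shared x-axis, front row first."""
    fig, ax = plt.subplots()
    for k in range(len(curves) - 1, -1, -1):  # draw back rows first
        y = curves[k]
        ax.fill_between(x + k * dx, k * dy, y + k * dy, color="C0", alpha=0.9)
        ax.plot(x + k * dx, y + k * dy, color="black", linewidth=0.8)
    ax.set_yticks([])
    return fig, ax
```

Drawing back rows first lets each nearer row occlude the ones behind it, which is what sells the 3D effect.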

~~~
minimaxir
A lot of the viz styles seem close to ggplot2, which is why I thought it was
made using ggridges
([https://cran.r-project.org/web/packages/ggridges/vignettes/i...](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html)),
but I'm not sure if you can do a perceptual skew in ggplot2.

~~~
jon-wood
It wouldn't surprise me if they generated graphs and then handed them off to a
design team to trace and make pretty.

------
jonahx
Does anyone know if this opens the door to using these techniques for poker
(given that they've now shown success on games of imperfect information)?

Thus far the solutions to poker have involved solving the game tree through
raw computational power and clever methods of information collapsing:

[http://science.sciencemag.org/content/347/6218/145](http://science.sciencemag.org/content/347/6218/145)

But it seems the techniques used here might be both far more efficient, as
well as able to better exploit human players (as opposed to playing a pure
game theoretically optimal strategy, which is what they do now).

~~~
tialaramex
So, your link is about Cepheus. It's important to understand that Cepheus
isn't AI, it's an (asymptotically close to) optimal strategy for Limit Heads
Up ("Limit" means you don't need to choose bet sizes, they are fixed, which
makes the problem much simpler), and since poker is a game of probabilities
the strategy is probabilistic too.

ie this is like when somebody explains Tic-Tac-Toe, there isn't anything
interesting going on inside the machine, the insight was purely mathematical,
that there are one or more optimal ways to play this game, and this is one of
them.

You can literally look at the strategy right now, you can do it while playing
against a program that plays by this strategy. But you won't beat it by doing
that, that's the point of this optimal strategy, the best you can do is play
this or some other equivalent optimal strategy back, in a home game you'll
just pass chips back and forth forever.

In contrast, poker AI is a thing, and at _No Limit_ Heads Up the state of
the art, named Libratus, is clearly better than human. It is nothing like
Cepheus; there is no fixed strategy.

For "Full Ring", the game of Poker as you've probably seen it played, which
has more than half a dozen independent players, AI would be very challenging,
not least because if humans realise they're at a disadvantage it would be
essentially impossible to prevent them from colluding with other humans to get
an advantage, even to some extent unconsciously.

~~~
jonahx
Yes, I understood that.

I was essentially asking if the techniques used by DeepMind could be
leveraged to create a much more powerful version of AIs like Libratus, or to
become very strong at games too big to solve for GTO solutions, such as full
ring.

> For "Full Ring", the game of Poker as you've probably seen it played, which
> has more than half a dozen independent players, AI would be very challenging

This has already been done. There was an AI called Sonia which played both HU
and full ring at expert human level or beyond, and was not based on GTO
solutions. It created a model of every player which updated in real time, and
exploited them the way expert humans do.

I'm just curious if DeepMind would be able to achieve similar or better
results.

------
kingbirdy
Very exciting results. However, I'm a bit confused by this graph [0], could
anyone explain how I'm supposed to interpret it?

0: [https://deepmind.com/blog/alphastar-mastering-real-time-stra...](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#image-34385)

~~~
dfan
I was confused too; it has a weird format that I think hurts comprehension
rather than aids it by making you think you're not looking straight at the
data.

Ignore the fact that the "Training Days" axis is drawn diagonally. The system
is creating about 40 agents per day; by the end of day 14 it's made 610 or so.
The graph shows, for any given time of training (vertical axis, going down),
what is the distribution of trained agents that it's chosen in order to be
unexploitable (you wouldn't want to choose rock all the time in rock-paper-
scissors, for example). So, for example, at the end of day 14, it's using a
selection of agents with numbers 595 through 610 or so, which means they've
all been created within the last day.
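The rock-paper-scissors point can be made concrete: a mixture of agents is "unexploitable" when a best-responding opponent gains nothing by knowing it. A toy sketch with plain RPS (this is just the analogy, not AlphaStar's actual league algorithm):

```python
# Row player's payoff: +1 win, -1 loss, 0 tie.
PAYOFF = {
    ("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
    ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
    ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0,
}

def exploitability(mix):
    """Expected winnings of an adversary who knows the mixture `mix`
    (a dict move -> probability) and plays its best pure response."""
    return max(
        sum(p * PAYOFF[(reply, move)] for move, p in mix.items())
        for reply in "RPS"
    )
```

Always playing rock is exploited for a full point per game, while the uniform mixture is exploited for nothing; the league's agent distribution plays the role of that mixture.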

~~~
ionforce
I think it's to help illustrate the time dimension as going forward rather
than something that goes up and down. And also to not measure the hills
against some global X-axis. It is confusing.

------
ValleyOfTheMtns
Impressive, but there's a couple of things I'd like to see them try one day.

Plug AlphaStar into a robot that physically interacts with a keyboard and
mouse to control the game. This "robot" should only have what's relevant for
playing the game and emulates a human i.e. a camera that looks at a screen
(this is the only knowledge it has of the game), and two arms & hands with
five digits that control the mouse and keyboard. Then limit its APM to the
best a human can realistically do.

The other thing I want to see them address is the virtual training time. 200
years of StarCraft is insane. LiquidMana has been playing for ~20 years, and
of course he hasn't played the game 24/7. Let's pretend he has played
StarCraft like it's a full-time job since he was 5: 8 hours a day, 5 days a
week, for 20 years. That's ~42,000 hours of playing StarCraft.

Develop an A.I. that is only trained for that many hours of virtual game time.

If they can create an A.I. with those requirements, that can defeat top-level
players, I will be completely blown away.

~~~
gpm
Human pro gamers have several advantages that mean they should need slightly
(but probably not that much) less game time.

Transfer learning from the rest of life, games are designed to be
understandable to humans with familiar concepts, that AIs don't start knowing.

Discussion with other players: Mana benefits immensely from everyone else's
20k hours of SC2 as well.

Selection bias: there are many, many people who try SC2, and only the people
who are naturally good at it succeed. So in some sense we need to be
counting the rejects' training hours as well.

I would like to see advances on training AI using less data. I just wanted to
comment that the comparison in number of hours isn't quite fair.

~~~
ValleyOfTheMtns
Those are really good points.

------
ggggtez
There seems to be an impulse to deny the AI progress that people see in front
of them. When IBM Watson won on Jeopardy, everyone claimed it was cheating
because, after all, "everyone knows the answers, so Jeopardy is really only
about who presses the button first". But what about the fact that a computer
could know the answers at all, and so quickly? Many people didn't think it was
possible, and as soon as it happens they attack the speed of the button press
as if knowing the answer wasn't the hard part.

Anyone with this point of view does not seem to understand what is being shown
here. This is not just "this ai beat this one player in this one match". It's
an entire system of techniques for machine learning. The AI was not hard coded
to learn those micro steps, like all other AIs have been. To ignore those
things is to miss the point entirely.

~~~
bo1024
I see your point and partially agree. But I think part of the problem with
these kinds of stunts (for lack of a better word) is that it's hard to tell
the difference between actual scientific advancement and the benefits of
throwing hundreds of years of GPU time at it.

------
ascar
It's amazing what they achieved so far, but I wonder how it would stack up
against Serral, who draws a lot of his wins from supremely judging
engagements and continuously gaining small advantages over the course of a
game. This is a skill even most other professional players have only at a
much lower level.

~~~
cjbprime
Yes, a best of 9 (?) match against Serral or Maru needs to happen.

~~~
sabas123
But only if they get material to prepare with

------
RockofStrength
If they don't cap the apm at 300 or so, this means nothing. I am not impressed
with super speed, only decision making matters. Deepmind, you can do better.

~~~
a_imho
AlphaZero was set up against Stockfish similarly on unequal footing.

------
Layvier
I think there's a lot more to limit than average APMs and reaction time, in
order to have human-like capabilities. For instance:

- the context-switching cost of multitasking

- the precision/speed tradeoff: pro players adjust the mouse speed according
to what they want. Selecting exactly the units wanted is very difficult;
AlphaStar seemed to do it perfectly.

- the timing precision of an action

With restrictions such as these, I'm honestly confident that AlphaStar won't
beat human players anytime soon (once the human players adopt the interesting
findings of the AI).

I'm mostly interested in how it will change the meta. Players are often
biased by their experience, and the meta regularly shifts entirely without
any significant balance update, simply by players finding original
strategies.
------
infogulch
Many comments here are about how the AI's information advantage (seeing the
whole map at once, sans fog of war, in all but the last game; seeing exact
unit stats like health, etc.) leads to higher APM value, whether because APM
itself is higher or because each action is more meaningful, and discuss
different ways to nerf it to bring it down to a human level.

I'm more interested in the limits an AI could be pushed to vs humans, and if
humans can't match the AI's APM, just add more humans until they can. E.g.
1v7 would allow the humans to manage multiple disparate flanks at once just
like an AI, and still leave someone free to manage the macro play that
suffers when a human focuses on micro.

------
Questioneer279
200 years of blink stalker meta... Overcoming the local minimum trap is still
a thing.

~~~
javier2
funnily enough, human players were also obsessed with blink stalkers for years
;)

------
integricho
I didn't catch whether they mentioned this in the interview, but what would
have happened if they had let AlphaStar play against other races, not just
Protoss? Would it be completely lost, unable to achieve anything?

~~~
modeless
The current version would flounder because it has never seen the other races.
But that is not a fundamental limitation. All it would take to fix it is more
training time.

~~~
dmurray
Or they trained all the combinations but it was best at Protoss vs Protoss, so
they publicized those results. But given the reaction to the first AlphaZero
announcement and the later follow-up paper (summary: they cheated a bit but
it's still incredibly strong) I would give them the benefit of the doubt here.

------
akeck
"Next we'll demonstrate AlphaStar controlling real military units in Syria..."
:-/

------
eecsninja
How I would summarize the development of AlphaStar and Mana's strategy over
the series:

1. AS studies games played by humans to learn what they do.

2. AS takes advantage of high-APM, high-precision blink stalker micro to
defeat immortals (something no human can cognitively/mechanically
accomplish).

3. Mana realizes he cannot play vs AS as if he were playing vs a human.

4. Mana discovers an AI exploit, using the warp prism + immortals to force
AS's army back, keeping his own base safe. This is a specific counter-AI
strategy, not something that would have worked vs a human player. AS does
not know how to properly react because it has not seen any replays of humans
going up against an AI by exploiting it.

5. Mana gets enough breathing room to build up a large enough force to win
the game.

In short, Mana won because he "solved the problem" of how to exploit this
particular AI.

This is actually not a new strategy -- several years back, the stock SC2 AI
would do the same thing: pull back when you attacked its base. I could win vs
AI using the exact same trick that Mana used. Blizzard has since updated the
AI not to fall for this trick.

The real test of DeepMind's learning abilities is thus: if AlphaStar had seen
replays of that exploit vs AI, along with all other replays of humans vs AI
over the years, would Mana still have been able to win?

------
EGreg
I wonder how far AI can progress. Can for example five robots, built to have
insane reaction times and agility, outperform and safely subdue an entire army
of 1 million humans equipped with the latest melee weapons?

Or how about a tiny flying robot with millisecond reaction times in a room
with a crowd of people trying to catch it, being able to eg buzz and tag
everyone without being so much as touched?

How about one robot which builds replicas or otherwise organizes some massive
“real time strategy growth” across a city, versus the entire city and its
police and army being called in? With the robot swarms coordinating
distributed superhuman strategies and timing? I feel that, whatever we are
discovering now, aliens who make it to Earth will have had hundreds of years
advantage on that. It seems Monte Carlo Tree Search is the best we’ve got
against the aliens, for now.

Think Ender’s game but for real. It will require something like that.

Anyone also remember the narration in Edge of Tomorrow? There the aliens had
the additional advantage of resetting the clock to the beginning thus
nullifying any win, but really with trillions of games being played by a
league against itself, that rare “restart” seems quaint versus what we have
here.

Or over eons perhaps long term strategy with mining asteroids with Von Neumann
probes competing.

------
tomc1985
These developments sound wonderful, but all I can think about is how this
advanced AI is going to be used to try to control me (mostly my spending and
consumption).

------
En_gr_Student
And now they are superhuman with regards to "Fog of war"!

A truly crushing next step would be to make "World in Flames" into a
computer game, and have Alpha-Star-Zero, which is coming, become the best it
can at it.

[http://www.matrixgames.com/products/296/details/World.In.Fla...](http://www.matrixgames.com/products/296/details/World.In.Flames)

If they do that, every single military on the planet should crap themselves,
including the US, NATO, Russia, China, and the EU. It would mean 20 years or
less until robots can operate at every level in an army, from soldier to
general, and comprehensively defeat the best humans at every level. The
first nation to the battlefield with this wins the world. One ironic part is
that an excellent application is non-battlefield warfare. You don't have to
drive a tank to conquer the world; perhaps you can buy a factory or make a
deal. The corporate interface here should also be compelling.

Perhaps Alphabet will finally be able to use this to make non-advertising
profits.

~~~
erikpukinskis
You are comparing formal systems, like Starcraft, where the entire game world
is quantized, with the real world where the games are not quantized.

Not only is the game of war not “written down” in digital form, there is far
too much data to ever write down. Military strategists have been trying to
model war forever. But no one can even agree on what the “game pieces” are,
let alone list all of the Deus Ex Machina that might show up. An AI that is
better than any human commander at tank warfare would have been just as dead
as any human army when the opponent shows up with an H-bomb.

But being good at tank warfare also isn’t anything like being good at a tank
game. To be good at tank warfare you need to be good at building factories.
Pouring concrete in swamps. Convincing grannies to buy fewer cans of beans
and eat more squash. Figuring out when your workers really need to go home
and sleep. Part of the game is just realizing those things are even things
you might want to strategize around.

And the nature of competition in real life is that as players gain advantages,
opponents copy their skills and those advantages cease to work. Then you have
to find new advantages in some aspect of the game that has never been
documented before.

In a sense, high level real world competition is more like MAKING games than
playing them. It’s about designing a competitive landscape where your opponent
won’t be able to design their way out of it. Something AIs haven’t, to my
knowledge, even begun to be able to think about.

------
grogers
When DeepMind announced they were working on SC2, I expected it to be good,
but not this good. Compare this to the top bots in SSCAIT (a BW AI
tournament) and it's already so far ahead of anything that's existed up to
this point. I'm eagerly looking forward to them playing all maps and
matchups, which I suspect should be relatively easy to extend to. They've
still got a long road ahead, but this makes me think they can do it.

However, they keep harping on the APM being similar to a top human and it's
just not. Maybe the average for a whole game is, but during fights it was
bursting over 1500 APM with perfect execution. This wasn't just spam clicking
corrosive bile or something like a human might to get that high but truly
coordinated targeting and unit movement. This has led it to use pathological
unit compositions that wouldn't be effective in human play (like pure
stalkers beating immortals). If they set a hard max APM to something like 600
I think it would develop more useful strategies.
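
A hard APM cap like the one proposed could be implemented as a sliding-window rate limiter on the agent's action stream. The sketch below is purely illustrative (the 600-APM cap, the one-minute window, and the `APMLimiter` name are assumptions for this comment, not anything DeepMind has described):

```python
from collections import deque

class APMLimiter:
    """Reject actions once a sliding one-minute window exceeds the cap.

    Illustrative sketch only; AlphaStar's real action interface differs.
    """
    def __init__(self, max_apm=600, window_s=60.0):
        self.max_apm = max_apm
        self.window_s = window_s
        self.timestamps = deque()  # times of recently allowed actions

    def try_act(self, now):
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False  # over the cap: action is dropped or delayed
        self.timestamps.append(now)
        return True

limiter = APMLimiter(max_apm=600)
# 600 actions inside one second are still allowed (a burst); the 601st is not:
allowed = [limiter.try_act(t / 600.0) for t in range(601)]
```

Note that a pure per-minute window still permits exactly the sub-second bursts people are complaining about; addressing burst APM would also need a short sub-window cap (say, per second).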

------
dpflan
[https://www.wired.co.uk/article/deepmind-starcraft-
results-a...](https://www.wired.co.uk/article/deepmind-starcraft-results-
alphastar)

"In a final match against Komincz, streamed live on Twitch, the DeepMind team
used a new agent that, unlike those in the other matches, could only see by
moving the focus of the in-game camera. As a result, the AI was forced to move
its units in a more focussed, human-like way. Despite dominating the game
early on, AlphaStar lost. In all the games contested to date, the score is:
AlphaStar 10 - Humans 1."

I'm not quite sure I follow the test then. How was DeepMind making decisions
before? In a less-human-like way?

~~~
Tistel
If I understood correctly: in the first 10 matches AlphaStar could see the
whole map in detail (not just the minimap) at the same time, while the humans
could only see their camera's view. In the 11th match AlphaStar could only see
via a camera view as well; it had to move the camera the way a human would to
see other parts of the map in detail. The 11th game was a fairer test.

This, along with the documentary of the human go players vs the deep mind AI,
seems to indicate we are entering an interesting time. The humans start out
cocky, then gradually concede the high ground of whatever skill is supposed to
be our exclusive domain. I think we will make it through to some new way of
living with very capable machines. But, to transition there we are going to
have some rough years. Just watching the StarCraft players' fingers flying at
the end of the movie makes my carpal-tunnel wrists ache. It reminds me of the
folklore about John Henry trying to keep up with the rail-laying machine.
TL/DR: in the story he beats it for a while, then falls over dead from the
Herculean effort required to do so.

~~~
dpflan
Allowing the machine to have a far-more informed view of the game than a human
seems like these tests are not realistic battles. The human should have the
same information about the game as the machine. I would certainly hope a
machine with more information than the human could beat a human. Perhaps I am
not understanding the situation?

------
alasdair_
What are the games that, so far, still look like they will be too difficult
for ML to play at the highest level? I know Go was held to be this kind of
game for a long time and is now close to being dominated by AI. Magic: The
Gathering, perhaps?

~~~
anabis
Non-zero-sum games seem like a logical next step.

~~~
jabl
Any (reasonably) popular games that are non zero sum?

------
stared
Now, waiting for AlphaStar with Has (the cheesiest professional Protoss
player, see:
[https://www.youtube.com/watch?v=AgxOV1L3Jl4](https://www.youtube.com/watch?v=AgxOV1L3Jl4)).

------
ece
I think Deepmind definitively showed the agents are learning high-level play,
which was great to see. I didn't really pay attention to AlphaGo, but did skim
the AlphaZero paper, and I'm not really left with any doubts about how good
RL/LSTMs can get against other AI or humans given enough time to train.

That said, given all the constraints of the last live match and what was
mentioned during the talk (that newer strategies keep getting discovered by
humans and agents), it's an open question whether humans could even win 50% of
the time against an agent.

------
karolist
I'd love this to be released for Company of Heroes (Relic). The game has
terrain that affects engagements and vehicle movement, multi-directional
cover, individual model health in squad units, territory-control-based
resource income, a tick-down win condition, and much more that makes it feel
more tactical than SC overall.

It's got a bit of randomness, where engagements are not just pure
deterministic math on inputs, so I guess that's why it never gained pro-league
attention, but it would be a hell of an interesting challenge for AI because
of it.

------
bronz
i dont mean to be negative but ML and even conventional computing are starting
to make me tired. im always wondering what it will be next that ML can do
better than humans. what is next up for automation? will this be the one to
send a shock-wave through an industry / the economy? i feel like i need to
constantly watch and keep track of the progress thats being made. and im
starting to get tired from having to re-think life again and again.

for example, google has published voice synthesis samples, voices generated
from text, that are indistinguishable from real human speech. it hasnt been
perfected yet, but i think most people would agree that we basically now live
in a world where voice recordings cant be automatically trusted the way they
used to be. it completely changes the way you think about and navigate the
world. it will open up a universe of new schemes, methods of fraud, etc etc
that we will have to adapt to.

then there are deepfakes. there are limitations, and the results arent
perfect, but its very early days. again i would say that the consensus among
us is that we now live in a world where video evidence is basically no longer
intrinsically trust-worthy in the way that it used to be.

i practically grew up inside a computer. but i am now sensing that as ML fills
in, its going to be a very uncomfortable ride for me personally -- and i dont
understand how it couldnt be for anyone else. and what about when AGI comes?
just curious to see if anyone else shares my experience with this.

------
a_imho
This is from 2011
[https://www.youtube.com/watch?v=IKVFZ28ybQs](https://www.youtube.com/watch?v=IKVFZ28ybQs)

I will take this with a grain of salt.

------
kriro
First Go, now Starcraft 2...I fully expect archery or women's golf to be
conquered next. Someone at deepmind must have an axe to grind with Korea ;)

~~~
erikb
This is more about historical AI challenges. When chess was beaten, people
realized, painfully and happily, that they couldn't beat Go with the same
approach. Therefore Go became the new Mount Everest.

And StarCraft is a competition target because SC1 was a game that was easily
adaptable for AI hobby coders and competitions, a little like the RoboCup
world championship for robot builders. It's part of the domain culture, I
guess.

------
confiscate
How does outcome prediction work?

It sounded like that was based partly on supply (army size). But the AI only
knows its own army size, not the human's.

So how can the AI's outcome prediction be so accurate? In game 3 against Mana,
the AI's outcome prediction jumped from 60% win to 99% win before the AI
decided to go up the ramp. It had no way to know whether the human had more
army up that ramp.

~~~
clickok
In imperfect information games (like SC2) the outcome prediction implicitly
takes into account the unknown. Given what it has observed and what it has
_not_ observed, it is essentially comparing the present state to similar
situations from the past.

You can see this in the replay-- at seemingly random times, its outcome
prediction jumps up, even when it hasn't had any interaction with its
opponent. But that's precisely why it's going up-- it notices that its
opponent has not executed a faster rush or cheese that it's unprepared for,
hasn't expanded early, and the scout has not been destroyed.

Similarly, after having won a fight, the worst thing that could happen is that
a bigger force emerges from your opponent's base, destroying your army and
giving them a chance to rebuild. When that does not happen, you know that your
opponent probably doesn't have such an army (because otherwise they are
falling behind in resources due to being bottled up). Either way, after a
certain point it's worthwhile to press on to cause more damage, because you're
now far enough ahead in resources that you will win even if they manage to
repel that particular attack.
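
The kind of conditioning on "what hasn't been observed" can be illustrated with a toy value estimate. Everything below (the features, the weights, the `win_probability` name) is invented for illustration; AlphaStar's actual value function is learned end-to-end from self-play and looks nothing like this hand-built logistic model:

```python
import math

def win_probability(own_army, enemy_army_seen, minutes_scout_alive,
                    no_rush_observed):
    """Toy value estimate mapping a few observations to P(win).

    Made-up weights for illustration only. The key point: the *absence*
    of a rush, and a scout surviving, push the estimate up even with no
    direct interaction with the opponent.
    """
    z = (0.004 * (own_army - enemy_army_seen)  # visible army advantage
         + 0.05 * minutes_scout_alive          # scout alive = no hidden threat seen
         + 0.8 * no_rush_observed)             # survived the early-rush window
    return 1.0 / (1.0 + math.exp(-z))

# Seeing no rush materialize raises the estimate without any fight happening:
p_before = win_probability(100, 80, 0, no_rush_observed=0)
p_after = win_probability(100, 80, 5, no_rush_observed=1)
```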

~~~
confiscate
Yes, but it seems impossible to manually assign a score to all those different
situations. There are too many of them.

Is the outcome prediction score, itself, also produced by AI training?

It has to be, right? Because it's clearly not just calculating the outcome
from the army sizes of the AI and the human. There must be some indirect way
it's calculating the outcome prediction.

I mean, the hard part is accurate outcome prediction. Once you have that, it's
easy to train an AI by just throwing CPUs at the problem and making the AI
play a crazy number of games.

------
olliej
The gameplay was really interesting. I wonder if we'll start seeing the over-
saturation of the main prior to first expand in pro games?

~~~
ionforce
There are three things this seems to do. And what's changed is the perceived
value of these.

1) Increased mining rate. Although there's a prescribed "maximum", as I
understand it, adding more workers, up to some limit, does yield more minerals
beyond the prescribed rate.

2) Buffer against harassment. If you are over-saturated and lose two probes,
your rate of income isn't affected.

3) Bootstrapping an expansion. All of the excess probes can be moved over to
the new expo.

~~~
olliej
Right, I get that AlphaStar has decided this is sensible; I just wonder if
we'll start seeing pros adopt something similar?

I'm also more generally curious about the cutoff point at which adding an
additional probe ceases to have _any_ value.

~~~
cjbprime
My vague recollection is that the 17th/18th/19th workers each still harvest
more than 50% of what one of the first 16 does, the 20th through 24th are more
like 25%, and the 25th worker onwards contributes nothing.

------
ddebernardy
This is really sweet. Any odds someone at DeepMind might be working on Paradox
Interactive titles like Europa Universalis?

~~~
skwb
Do you know if there's a good API for Paradox games? Off the top of my head,
I've seen extensible mods for their games, but I assumed they were all built
on the predefined Clausewitz Engine.

That said, HOI4 could really use some AI love. Right now, I think the AI has a
lot of hard-coded values (e.g., when an invasion should happen or not). One of
the potentially beautiful things about modern AI is that if you can clearly
define your objectives, it's possible to find a parsimonious analytical
solution to the problem. I have immense hope that these gaming AI systems will
transform the way we game in the future.

------
etiam
"Along with the original title, it is among the biggest and most successful
games of all time, with players competing in esports tournaments for more than
20 years."

...

Dear Mister Language Person: I am curious about the expression, "Part of this
complete breakfast." The way it comes up is, my 5-year-old will be watching TV
cartoon shows in the morning, and they'll show a commercial for a children's
compressed breakfast compound such as "Froot Loops" or "Lucky Charms, " and
they always show it sitting on a table next to some actual food such as
eggs, and the announcer always says: "Part of this complete breakfast." Don't
they really mean, "Adjacent to this complete breakfast, " or "On the same
table as this complete breakfast"? And couldn't they make essentially the same
claim if, instead of Froot Loops, they put a can of shaving cream there, or a
dead bat?

Answer: Yes.

------
dev_dull
Here’s the question I have. Will it _consistently_ beat the top player over
and over?

I see so much brittleness in AI such as this. Humans are much less prone to
“bugs”. In an evolutionary adversarial environment, the human brain invariably
comes out on top.

~~~
etiam
In the very long run it's hard to see how we could compete with arbitrary
scalability and lightning-fast operations. In present and imminent situations,
I think Louis Rosenberg had an exceedingly important point in

"Artificial Intelligence Isn’t Just About Intelligence, but Manipulating
Humanity: AlphaGo's true accomplishment isn't learning to play Go, but
learning to play (and manipulate) us." ([https://futurism.com/artificial-
intelligence-isnt-intelligen...](https://futurism.com/artificial-intelligence-
isnt-intelligence-manipulating-humanity))

, around the last AlphaFoo.

"Imagine a flying saucer lands in Time Square and an alien steps out carrying
the game of Go. He walks up the nearest person and says the classic line –
“Take me to your best player.” Now, let’s assume that the alien spent years
studying how humans play Go, watching replays of every major match.

If that was the situation, it would seem Humanity was being set up for an
unfair challenge.

After all, the alien had the opportunity to thoroughly prepare for playing
humans, while the humans had no opportunity to prepare for playing aliens. The
humans would likely lose. And that’s exactly what happened last month when an
“alien intelligence” named AlphaGo played the human Go master, Lee Sedol. The
human lost in 4 out of 5 games. But, if we look at the big picture, it wasn’t
a fair match."

------
jaredtn
I'd be interested to see its hierarchical strategy and planning, especially
across such a long timespan. Does anyone have any good references for similar
hierarchical planning work (Feudal Networks, etc.) to look at?

~~~
gwern
It doesn't use hierarchical approaches at all, apparently. Just flat IMPALA
with LSTMs and CNNs at every tick.

------
knicholes
You can download the replays here: [https://deepmind.com/research/alphastar-
resources/](https://deepmind.com/research/alphastar-resources/)

------
ctoth
Every year or so we get another huge advance... Well, more accurately,
something comes along to benchmark the state of AI research against a human
activity.

Then come the HN comments.

For Alpha Go:

Oh this is impressive but can't generalize. Wake me up when it doesn't have to
have information precoded/doesn't learn from human players

For alpha go 0:

So this is cool but not amazing because they're all perfect information games.

And now here we have this, and people who haven't even watched the
presentation are yammering about stuff like the APM, or how this isn't
impressive because of ... Well, something?

If I believed in a terminator scenario, I would point out that a robot, too,
will presumably have higher APM than all of us desk warriors.

It really feels like there are just some people who are religiously attached
to being the only known intelligent entities on the planet, to the point
where, when presented with evidence that hey, this is actually a thing, they
will stick their fingers in their ears and shout about how unreal/unfair it
all is.

I invite you to have a look back at the other announcement threads from
Deepmind and OpenAI. I decided not to directly quote people as I don't want
this to get personal, but I couldn't just sit here and watch the same old
story play out again without at least mentioning that these same people have
been wrong, and wrong, and wrong, and will presumably continue to be wrong.

Exponential curves are unintuitive.

~~~
scarmig
What does it matter, at this point?

The proof is in the pudding, and we are making more and more pudding every day.
Instead of caring about naysayers, we need to be working under the assumption
AI is here to stay and rapidly expanding in scope, and we need to build the
social and political structures to be able to handle it.

~~~
chronial
The saying is actually "The proof of the pudding is in the eating". I don't
quite understand what "the proof is in the pudding" is supposed to mean and
how that relates to making a lot of pudding ^^.

------
m0zg
Man, I can imagine militaries the world over are rubbing their hands in
anticipation watching this. Imagine a perfectly orchestrated swarm of UAVs
dealing damage like this (potentially to another swarm).

------
arnaudsm
Today, 1/1 games without the camera hack were won by the human.

This is disappointing, but their PR is a success. Mainstream media is already
saying "AI has beaten StarCraft". Another lie for the AI bubble I guess.

~~~
ehsankia
It's not "games", as it was a single game, and it came with the big caveat
that they didn't have much time to prepare and train this agent. The one that
won against Mana was trained for two weeks, versus this one, which only had
one week of training.

While the AI did have an advantage in terms of micro/camera view, it still
showed decent decision-making and independently came up with a bunch of
interesting strategies. At the end of the day, that's really the goal of the
research, not whether or not it uses the right number of APM or uses the
camera properly. Those are just artificial restrictions put in place to make
it look fair and entertaining.

~~~
arnaudsm
I would love DeepMind to put that AI on the ladder as they discussed during
the last Blizzcon.

The APM and camera restriction is not for entertainment, it's to develop
intelligence rather than 1500+ APM. The interesting part of StarCraft II is
decision making and the meta of the opponent, and we didn't see that today.

Remember the Dota 2 bot that was beaten by a lot of players after a single
day. I want to see if AlphaStar can really adapt to the real world, and
therefore the meta, the real challenge of StarCraft.

~~~
schwurb
I honestly do not think meta or strategy will be interesting in SC2.
(Obviously as a player it will be, but not from an AI standpoint.) AlphaGo
already showed us that it can handle strategy well; a good AI in SC2 will
simply scout the minimal amount needed to prepare the perfect responses.

------
AmericanBlarney
It's certainly interesting, but reminds me of DeepBlue playing Jeopardy
against humans while having the questions fed to it electronically. Half the
challenge of the game is buzzing in first: for humans, it requires
reading/listening, making a judgement that you'll be able to answer the
question, and buzzing in before even hearing the whole thing. Same thing for
StarCraft. If I could map out my moves in advance and feed them to the API
with precise timing, I think I could likely beat a lot of pros -- all the
strategies mentioned in the article are well known. The dexterity and timing
are a huge part of the challenge.

~~~
doktrin
> reminds me of DeepBlue playing Jeopardy

Apologies for the nitpicky correction, but that was Watson.

------
ENOTTY
I feel like this was a lost opportunity to play it centaur style, letting a
human choose and play an overall economic strategy (macro) and letting the AI
do the combat (micro).

------
SubiculumCode
It would be interesting to see a match of [human controlling macro while
computer handled micro] vs [computer controlling macro and micro]. I haven't
played the game.

------
donmatito
Ok now are we getting DeepMind vs OpenAI competition ?

------
crb002
I'm curious to see a StarCraft II mod where you can assign advanced micro
behaviors to groups of units.

------
alextooter
I am an SC1-only player; I hope someday there will be an SC1 AI player.

------
abakus
Hey, but isn't TLO a ZERG player?

~~~
FlorianRappl
Warmup opponent :)

They did the same with AlphaGo back then. First get some "good" player in
there to see if they are on the right track. Then get a better player in
there. Finally, prepare for a real showdown. In this case it would be
ShowTime, Neeb or even some Korean pro (e.g., Stats). Maybe it's time to
switch matchups and make it PvZ -- get Serral and then let's see if the
current SC2 champion is good enough to beat this AI.

------
miguelrochefort
The first artificial Korean.

------
lettergram
So... I’m curious how long it will be before we can apply this to real life?

At this point it seems like it’d be fairly complicated, but you could build a
solid simulator for battles, then direct humans and/or robots around the
battlefield as necessary to win.

Upload a virtual map using point clouds, estimate densities, start by
estimating enemy combatants, add some scoring metrics negatively weighting
civilian deaths and probably a lot of other stuff. Run real-life scenarios in
training environments, and bam.

The premise seems to be there.

~~~
civilian
There are a lot more variables that need to be added for an "AI general".

\- There needs to be a system for simulating real-world battles (since we need
to iterate on the AI, after all). In WW2 the WATU was a good simulation of
German submarine vs. Allied convoy battles, but I imagine that ground battles
are messier. Link for background:
[https://www.youtube.com/watch?v=fVet82IUAqQ](https://www.youtube.com/watch?v=fVet82IUAqQ)

\- Autonomous directions. If a unit loses contact with the AI, what orders
should they follow?

\- The AI needs to react quickly to changes in the effectiveness of weapons.
If army B has a surface-to-air missile with an 80% hit rate rather than the
estimated 40%, the AI needs to adapt.

\- Different armies have different tolerances for casualties, both military
and civilian.

I suspect you're getting downvoted because people don't like the idea of
military-general AI. I don't really love it either, but it's going to happen.
Hopefully we can encourage its programmers to include the Geneva Conventions'
rules of war.

------
cellis
I would really like to see this for Age of Empires II. I think AOE has far
more races and is a far more complex game ( although I'm biased because I
haven't played SC2 as much as AOEII ).

~~~
justfor1comment
I have played both games and am a fan of both. Starcraft is definitely more
complex than AoE for AI development, which is probably why the researchers
chose it. The complexity of the AI depends on how many potential decisions you
can make at any point in time. Here are a few reasons why:

1) Starcraft races have completely different build trees and different
advantages, which gives early decisions in the game a large cascading effect.

2) Starcraft has much more micro potential than AoE. There are many units with
sort-of superpowers: Stalkers can teleport and infestors can take control of
enemy units. In AoE, you can only issue attack commands to most units.

3) Variety of units. Since Starcraft also has air units which may or may not
be able to attack ground units, you have more options within a race to create
a unique army/air-force combination.

4) The Starcraft map terrain is hierarchical whereas AoE happens on a flat
map. There are interesting locations on the map where a smaller army may be
able to defeat a larger army based on positioning.

~~~
olalonde
AoE maps are not flat and units have an attack bonus when uphill (and defense
penalty when downhill). Top players will place castles on hills for example
and micro their units so that they are more elevated than their opponents.

AoE has monks which can take control of enemy units.

The fact that Starcraft has more micro potential should make AI development
easier, not harder. Micro management is a relatively mechanic task that is
time consuming for humans but which a computer should excel at.

~~~
keerthiko
Micro is often written off as a mechanical task, but that's only true of
_expert_ systems (like microbots).

There is a lot of tactical nuance and understanding involved in micro; for a
neural network/learning algorithm to understand and optimize it is just as
impressive as macro.

Knowing it's optimal to blink when your shield runs out, before taking hull
damage? To dance your weakened stalker back to bait your opponent into
overextending? To focus fire but not overkill? To wait for a projectile to be
mid-air before blinking or picking up with a transport? There's so much more
to micro than precision and APM.

To your other points: SC maps have lots of ramps, cliffs, and high-ground/low-
ground impact (no damage bonuses, but vision constraints, battle surface area,
and choke points).

Micro potential doesn't make AI development easier, because it means the AI
needs to be prepared for a wider range of effectiveness for any given unit or
set of units; they won't always scale linearly in impact, or identically
across different players' playstyles.

------
z_open
They really should have used Brood War for DeepMind. SC2 is way too volatile,
with way more gimmicks available that will make it difficult for an AI to come
close to a human player.

~~~
kirrent
As much as I love Brood War, great micro over a large number of units with
wide awareness (things which appear to be easy for AlphaStar) would be
ridiculously overpowered, even with completely average strategy. Brood War is
an awesome game because it's constantly asking too much of its players at all
times. I can't imagine games against an AI which has far more attention and
micro to give would be too interesting.

~~~
z_open
They are planning on limiting the APM of the AI anyway. With BW, you can focus
on an AI learning basic strategy with incomplete information. With SC2, you
mix in a whole assortment of crazy abilities like force fields and recall to
figure out. It's going to take much longer for an AI to anticipate when its
opponent is going to force field the ramp and warp into the main.

~~~
Quekid5
> They are planning on limiting the APM of the AI anyway.

They did that in these games. (Or at least, it didn't _abuse_ absurdly high
APM.)

It still had insane micro a) because it had a FoV which basically extended to
the combined FoV of all of its units[1] rather than having to move a screen-
sized FoV, and b) when it micros it never really "misclicks" like a human
would do under pressure.

(This was most obvious in how it could micro against MaNa's army on 3 fronts
in game 4 and how it was able to basically perfectly drain the Immortal
barriers in the game where MaNa actually should have been able to defend
against a human mass stalker build ~100% of the time.)

One thing they weren't clear on was how it could tell how much health, etc.
each enemy unit had -- did it have to spend an "action" (like a player would
have to click) to do that? If so, then that's even more insane in terms of
micro ability.

Anyway, disagree about BW. Perfect micro in BW is possibly even more
devastating than in SC2, IMO, because there are all these weird glitches that
you can do -- the best players can do them _some_ of the time, but no players
can do it perfectly _all_ of the time.

[1] This was not the case for the final game which it lost.

~~~
z_open
>It still had insane micro a) because it had a FoV which basically extended to
the combined FoV of all of its units[1] rather than having to move a screen-
sized FoV, and b) when it micros it never really "misclicks" like a human
would do under pressure.

This is an oversight that I imagine they will eventually fix as well. Doesn't
make sense to allow the AI to do this, because the focus is on the AI
understanding the game.

>Anyway, disagree about BW. Perfect micro in BW is possibly even more
devastating than in SC2, IMO, because there are all these weird glitches that
you can do -- the best players can do them some of the time, but no players
can do it perfectly all of the time.

I didn't mention micro specifically, I mentioned abilities. The perfect micro
issue can again be solved by limiting the AI's APM. They may be able to
execute micro tricks, but not constantly.

~~~
tialaramex
Once machines exceed normal human play, further research does not focus on
trying to make the machines play badly so that it's fun to face humans again.
What ever would be the point of that?

Instead we just watch the machines versus other machines.

This is why we didn't see "AlphaZero plays chess versus grandmaster" games --
they'd be dull. A0 wipes the floor with grandmasters because it's an AI and
grandmasters aren't. Boring.

But lc0 and similar have been entering computer chess competitions with (a
clone of) the Google Alpha Zero design. It does pretty well.

Just as with TAS in speed running, you get a synergy. On the one hand, the
machines play a distinctly different game, perfect on its own terms, a TAS run
never succeeds in a frame perfect trick on the second or third try, always the
first - the AI will never mis-blink a stalker to a pointless death. But human
play continues, not against the machine but parallel to it, and learning from
it. GoldenEye speedrunning was hugely influenced by TAS findings. Modern
human chess is influenced by machine chess play styles.

------
Thaxll
The problem I see is that the AI is not better than the human, it's just 1000x
faster at APM/macro. It could have a "shit" build order and still win because
it's so much faster than a human. It's not really interesting, tbh.

~~~
aerovistae
Wrong. Did you watch the presentation? Its APM was less than half that of its
human opponent. If you aren't going to watch the presentation first, don't try
to get involved in the discussion in such a derisive and dismissive way. "It's
not really that interesting tbh"....wow. It's something that's never been
algorithmically done before, no matter who tried. How can you write that off
as "not interesting" as if it's totally insignificant?

~~~
saberience
Its APM maxed out at 1500apm which is far more than any normal human.

~~~
aerovistae
That's instantaneous APM, not general APM over the course of the game. The
human player's instantaneous APM went well past 1000 at certain moments during
the course of the matches, too. All that means is a few clicks as fast as
possible.

~~~
peterarmstrong
Yes, but individual perfect micro of 50 blink stalkers is beyond the
capability of any pro, whereas I can spam-create Zerg units by holding down a
key and have higher APM for a split-second, and I'm just a garbage diamond
player.

