
DeepMind and Blizzard Open StarCraft II as an AI Research Environment - nijynot
https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/
======
qub1t
A lot of people here seem to be underestimating the difficulty of this
problem. There are several incorrect comments claiming that in SC1, AIs have
already been able to beat professionals - right now they are nowhere near that
level.

Go is a discrete game where the game state is 100% known at all times.
Starcraft is a continuous game and the game state is not 100% known at any
given time.

This alone makes it a much harder problem than Go. Not to mention that the
game itself is more complex: Go, despite being a very hard game for humans to
master, is composed of a few very simple and well-defined rules. Starcraft is
much more open-ended and has many more rules, and as a result it's much harder
to build a representation of game state that is conducive to effective deep
learning.

I do think that eventually we will get an AI that can beat humans, but it will
be a non-trivial problem to solve, and it may take some time to get there. I
think a big component is not really machine learning but how to represent
state at any given time, which will necessarily involve a lot of human
tweaking to distill down which factors really influence winning.

~~~
strgrd
I wonder if we will see any advanced cheese strats come out of this. I'm
assuming some implementations will eventually develop micro control that is
far beyond any human player's capabilities, which would make things like
all-in probe rushing much more viable. Instead of playing the normal meta in a
computer-vs-human match, I imagine an advanced AI would simply send all of its
workers off the mineral line as soon as the game starts and attempt to
out-micro the human opponent before they can build an army-producing building.

~~~
ionforce
I know this isn't the exact same as the article, but when genetic algorithms
were introduced to solve for build orders, the "seven roach rush" was in
vogue, something that was unexpected at the time and "discovered" using GA.

I think there is space for finding strategies that have more leeway in
execution and are thus more suitable for humans to pilot, rather than
requiring machine-level micro.

~~~
Tanner
I love the story of the Seven Roach Rush. To quote the linked article, "The
most interesting part of this build, however, is how counter-intuitive it is.
It violates several well-known (and well-adhered-to) heuristics used by
Starcraft players when creating builds."

I'm fairly certain that this application of machine learning will present some
surprising strategies.

[http://lbrandy.com/blog/2010/11/using-genetic-algorithms-
to-...](http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-
starcraft-2-build-orders/)

------
dpflan
_Related_: Today I learned that a group of AI researchers has released a
paper called _STARDATA: A StarCraft AI Research Dataset_. According to one of
the authors: "We're releasing a dataset of 65k StarCraft: Brood War games,
1.5b frames, 500m actions, 400GB of data. Check it out!"

> Article:
> [https://arxiv.org/abs/1708.02139](https://arxiv.org/abs/1708.02139)

> Github:
> [https://github.com/TorchCraft/StarData](https://github.com/TorchCraft/StarData)

~~~
tgb
The great thing about this is that it includes the game state throughout the
game. It's been pretty easy to find lots of Starcraft replays, but the replays
only include enough information to recreate the game (basically just the
player actions). If you wanted to know what was happening in the game at the
time the player made an action, you had to load up Starcraft and simulate out
the game until that point. This dataset has already run the game for you and
provided the data!

~~~
wfunction
Is it that much computation to simulate an entire game? You obviously don't
need to render the graphics or anything, it should just be a list of events
that occur, which doesn't seem all that slow to process.

~~~
gwern
Until today's release of the headless Linux client, you still had to run the
full StarCraft program, which gets expensive fast. And it massively
complicates the workflow to have to play through every game serially to
recreate the state rather than simply reading random rows of data from a 300GB
dataframe on disk.

~~~
wfunction
Oh I see, thanks, I didn't know. But man, 300 GB per game sounds completely
nuts!

~~~
jzymbaluk
I believe the 400GB is the total amount for the 65000 different game replays

~~~
wfunction
Oh but how does that work? That's ~6 MB per game which sounds like just a list
of actions rather than precomputed data per frame. Is it compressed somehow?

~~~
sooper
"The full dataset after compression is 365 GB, 1535 million frames, and 496
million player actions." - Yes
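The arithmetic in the thread above checks out; a quick sanity check of the quoted figures:

```python
# Sanity-check the quoted STARDATA figures.
total_bytes = 365 * 1024**3      # ~365 GB compressed
games = 65_000
frames = 1_535_000_000
actions = 496_000_000

mb_per_game = (total_bytes / games) / 1024**2
print(f"~{mb_per_game:.1f} MB per game")        # a bit under 6 MB each
print(f"~{frames // games:,} frames per game")
```

So ~6 MB per compressed game is entirely plausible for full per-frame state, not just an action list.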

------
siegecraft
The API Blizzard is exposing is really nice. Sadly most of the advantages AI
had in SC1 were just due to the fact that an automated process could micro-
manage the tasks the game didn't automate for you (a lot of boring, repetitive
work). SC2 got rid of a lot of that while still allowing room for innovative
and overpowered tactics to be discovered (MarineKing's insane marine micro,
SlayerS killing everyone with blue flame hellions, some more recent stuff I'm
sure from the newest expansions). Hopefully the API lets AIs converge on
optimal resource management and get to exploring new and innovative timings,
transitions, army makeups, etc.

~~~
captainmuon
> SC2 got rid of a lot of that

You think so? My impression is that SC2 had a lot more repetitive tasks you
had to do, e.g. wall off the ramp, send a worker scouting, ... and you have to
perform certain actions every X seconds (like using chrono boost). A lot of
mastering the game is rote learning and polishing a build order. Another big
part is constantly scouting and reacting to what the enemy is doing.

For those reasons I found SC2 a bit tedious (it was still fun, it just felt
more like work than SC1). Granted, this is maybe because I played SC1 more on
LAN, where there wasn't all the metagame going on. But I think SC2 really does
focus on "grinding" and rote learning to get better; this was probably chosen
to make it more "eSports"-like.

If I got to design a SC2.5 or SC3, I would remove all the rote work - the
actions you always have to perform - and give the player the opportunity to
trade off more between macro and micro.

Actually, it would be cool if you could "research" certain AI features in game
for a cost. For example, have one upgrade that micros your marines like a pro,
or positions your units in sensible formations. Another player could counter
this with a "radio jam" ability that would make your units in an area take
bad formations, or be controlled by a very simple AI. And if you are good at
micro, you could skip that upgrade, or invest instead in one that makes macro
simpler. And so on; I think there is a lot one could explore there.
Maybe opening SC2 to AI exploration can lead to such gameplay innovations.

~~~
CuriouslyC
I really like the idea of a helper AI that takes high level commands and
handles the manual dexterity aspect of the game. That would make RTS games a
lot more attractive for someone like me who prefers simple, thoughtful games
rather than complex games with a steep learning curve.

~~~
pm90
There are actually 2 parts to the complexity of a modern RTS game like
starcraft:

1. Memorizing certain well-known strategies and counters very well and
recalling them immediately.

2. Having decent speed with the mouse/keyboard to actually execute those
strategies within a very short period of time.

I think what you're talking about is automating 2)... which I completely agree
with. How to do it... that is more complex though...

------
hitekker
This seems all in good fun but I wonder if it's come too late.

Starcraft 2 is at its twilight.

The biggest leagues of South Korea have disbanded. [1] The prolific progamers
who transitioned to Starcraft 2 have gone back to Broodwar. [2]

Blizzard itself has scrubbed all references to Starcraft 2 from the very home
page of Starcraft. [3] Except for the Twitter embed, it has only one "2"
character... in the copyright statement.

My take is that the future of the Starcraft franchise will run through
Remastered and any expansion packs that follow it.

Starcraft 2 had a good run but, with the entire RTS genre stagnating [4], I
don't think Blizzard wants to bet on anything less than the top horse.

[1] [https://www.kotaku.com.au/2016/10/the-end-of-an-era-for-
star...](https://www.kotaku.com.au/2016/10/the-end-of-an-era-for-starcraft-
and-south-korea/)

[2] [http://www.espn.com/esports/story/_/id/18935988/starcraft-
br...](http://www.espn.com/esports/story/_/id/18935988/starcraft-brood-war-
glory-days-jaedong-best-bisu-talk-starcraft)

[3] [http://starcraft.com](http://starcraft.com)

[4][http://www.pcgamer.com/the-decline-evolution-and-future-
of-t...](http://www.pcgamer.com/the-decline-evolution-and-future-of-the-rts/)
(Aside from MOBAs)

~~~
cjbprime
I don't quite agree, FWIW.

SC2 does seem to be at its twilight _in Korea_, and I agree progamers and
fans there are super interested in Remastered.

But I don't think Remastered will be very popular outside KR. The SC2 "war
chest" promo appears to have made more money than expected, as measured by
hitting its funding ceiling within a few days.

So I don't think it's "Remastered replaces SC2", I think it's a divergence
into KR playing Remastered and non-KR playing SC2, and the number of progamers
and players doesn't have to be zero-sum: it could enlarge the population
playing either game, too.

~~~
hitekker
I agree that Starcraft 2 won't suddenly drop dead. People do play it and FWIW,
I liked it! I played all the expansions, online, and even the arcade mode. It
was a good game.

But I disagree that Blizzard has faith in Starcraft 2 for America or any other
country.

The removal of Starcraft 2 from Starcraft's English-language homepage is one
sign of finality. In-universe, Blizzard has also ended the main dramatic arc
of Starcraft 2's story, leaving room only for half-hearted spin-offs.

Numbers-wise, we're seeing 50% drop-offs in user activity in the last 2 years
alone. Even with the release of "Legacy of the Void", the number of daily 1v1
games played has gone from 321,000 in 2015 to 138,000. The new,
much-advertised, much-worked-upon Archon Mode has gone from 11,000 games a
day to a measly 1,000 [1]. This isn't just Korean disinterest; we're seeing
players leave across the board in all countries.

In 6 years, Starcraft 2 went from millions of players concurrently to an
average of 20k a day.

Compare with the lifespans of League of Legends, Dota, Counter-Strike, and
even the original Brood War, and the reason for Remastered becomes more obvious.

Blizzard knows Starcraft 2 won't lead to the resurgence of the RTS genre, so
they're trying another route.

[1]
[http://www.rankedftw.com/stats/population/1v1/#v=2&r=-2&sy=c...](http://www.rankedftw.com/stats/population/1v1/#v=2&r=-2&sy=c&sx=a)

------
SiempreZeus
It's a bit too bad they're having to move towards supervised learning and
imitation learning.

I totally understand why they need to do that given the insane decision trees,
but I was really hoping to see what the AI would learn to do without any human
example, simply because it would be inhuman and interesting.

I'm really interested in particular if an unsupervised AI would use very
strange building placements and permanently moving ungrouped units.

One thing that struck me in the video was the really weird mining techniques
in one clip, and then another clip where it blocked its own mineral line with
3 raised depots...

~~~
dontreact
They can always fine-tune using RL later. Supervised training was the first
step in making AlphaGo work.

------
arcanus
I'd also like to see the algorithm win on unorthodox maps: perhaps a map it
has never seen before, or one that is the same as before but with the
resources moved.

Don't tell the player or the algorithm, and see how both react and adapt.
That would tell us a great deal about the resiliency of their abilities.

~~~
jmcmahon443
I am considering building a random map generator for just this reason.

------
ktRolster
When Watson won at Jeopardy, one of its prime advantages was the faster
reaction time at pushing the buzzer. The fairness of that has already been
hashed out elsewhere, but.....

We already know that computers can have superior micro and beat humans at
Starcraft through that (1). Is DeepMind going to win by giving themselves a
micro advantage that is beyond what reasonable humans can do?

(1)[https://www.youtube.com/watch?v=IKVFZ28ybQs](https://www.youtube.com/watch?v=IKVFZ28ybQs)
as one example

~~~
obastani
My understanding is that in a full match, AIs still have no hope against
humans: even though they can crush humans at micro, their macro is still
abysmal [1]. I'm not aware of a match where any AI has beaten a pro human
player at Starcraft -- I'd be interested to learn otherwise!

[1] [http://spectrum.ieee.org/automaton/robotics/artificial-
intel...](http://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/custom-ai-programs-take-on-top-ranked-humans-in-starcraft)

~~~
flamedoge
would you love to be proven wrong?

~~~
sidusknight
Of course.

------
daemonk
Blizzard should put in an AI-assisted play mode where players are limited to X
lines of code that can be launched with keyboard commands.

~~~
ajkjk
I know that, as a player, the high mechanical demands of Starcraft are part
of why it's such a difficult, high-skill-ceiling game. But... I've tried to
enjoy watching SC2 on Twitch, and while it's _kinda_ fun, it's just so
disappointing when a complicated strategic game is thrown away because a
player doesn't react fast enough to workers being sniped or a drop being shot
down.

I wish the individual units had some automatic behavior -- for example,
marines could run in spread-out formations near tanks or banelings; workers
would flee from hazards; flying units would avoid turrets unless specifically
directed to fly over them. It would require a lot of rebalancing, of course,
but it would make the game so much more tactical and strategic and (imo)
enjoyable to watch.
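A per-unit reflex like "workers flee from hazards" is simple to sketch in isolation; a toy 2D example (the function and coordinates below are hypothetical, not part of any game API):

```python
import math

def flee_vector(unit_pos, threat_pos, speed=1.0):
    """Return a step of length `speed` pointing directly away from the threat."""
    dx = unit_pos[0] - threat_pos[0]
    dy = unit_pos[1] - threat_pos[1]
    dist = math.hypot(dx, dy) or 1.0  # guard against a zero-length vector
    return (dx / dist * speed, dy / dist * speed)

# A worker at (10, 10) with a threat at (13, 14) runs away from it:
step = flee_vector((10, 10), (13, 14))
print(step)  # (-0.6, -0.8): directly opposite the 3-4-5 threat direction
```

The rebalancing question is real, though: a reflex this cheap, applied to every worker for free, would change harass timings across the whole game.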

~~~
daemonk
Yeah I can even imagine a thriving "marketplace" for specialty code that top
players would keep secret.

And it doesn't have to just be for micro. For people who are bad at macro,
maybe code can be written to consistently maintain X workers at all bases.

The difficult part here would be how to balance the AI assistance. Is lines of
code (or number of characters) a good proxy for complexity? What's the
characters-to-benefit ratio?

I guess that's ultimately determined by the individual player's strengths and
weaknesses. If a player sucks at macro, then the macro script is worth the
number of characters.

------
arnioxux
Are there any known arbitrary code injection exploits for Starcraft? Like how
you can use a regular controller to reprogram Super Mario World to play Pong?

[https://www.reddit.com/r/programming/comments/1v5mqg/using_b...](https://www.reddit.com/r/programming/comments/1v5mqg/using_bugs_in_super_mario_world_to_inject_new/)

[https://bulbapedia.bulbagarden.net/wiki/Arbitrary_code_execu...](https://bulbapedia.bulbagarden.net/wiki/Arbitrary_code_execution)

Is this how we are going to accidentally let AGI loose into the world!? /s

On a more realistic note, I think this will degenerate into a game of who can
fuzz-test for the best game-breaking glitch. Think of all the programming bugs
that turned into game mechanics in BW that we haven't discovered for SC2 yet:
[http://www.codeofhonor.com/blog/the-starcraft-path-
finding-h...](http://www.codeofhonor.com/blog/the-starcraft-path-finding-hack)

------
krasi0
The StarCraft 1 Brood War AI scene has been thriving for a few years now:
[https://sscaitournament.com/](https://sscaitournament.com/)

You can watch 24/7 live AI vs AI games on Twitch at
[https://www.twitch.tv/sscait](https://www.twitch.tv/sscait). Support for
voting on who plays next, and even a betting system, are in place too.

For those who wish to get their feet wet with BW AI development, here are the
Java / C++ tutorials:
[https://sscaitournament.com/index.php?action=tutorial](https://sscaitournament.com/index.php?action=tutorial)

~~~
krasi0
Some thoughts and analysis on why Starcraft AI matters, by Dan, one of the
active AI developers: [https://dangant.com/2017/08/09/why-starcraft-
ai/](https://dangant.com/2017/08/09/why-starcraft-ai/)

------
siliconc0w
The SCAI bots I've seen are more hardcoded tactics engines than machine
learning models. They're still impressive, but their logic isn't 'learned',
it's hand-coded, which is a crucial difference.

------
Havoc
That's surprising. I thought Bliz didn't want anyone near sc2 but approved of
sc1 being used for this purpose.

~~~
yflu
SC1 really doesn't make sense for this, 80% of the skill is just keeping on
top of the mindless but mechanically intensive stuff, which is trivial beyond
trivial for an AI.

SC2 has automated away most of this (pretty much everything but production
cycles), which makes it a better measure of AI vs human.

~~~
TulliusCicero
> SC1 really doesn't make sense for this, 80% of the skill is just keeping on
> top of the mindless but mechanically intensive stuff, which is trivial
> beyond trivial for an AI.

If that were true, then AIs would be dominant in BW instead of still bad at
the game.

------
convefefe
I thought this was already happening; right after AlphaGo beat Lee, I remember
hearing about it. Did they give up on having their AI play SC2? I wondered if
that would work: since AlphaGo seemed to take turns in Go at the same speed as
a normal player, I wondered if it was computing the most likely winning move
each turn along with the late-game implications of those moves, and how it
would deal with the speed if it tried that in a fast-paced game. It would
obviously need to develop a set of pre-baked strategies to win. Would it play
the same build every round, or would it realize that changing things up each
match wins it more games?

------
hacker_9
There's something funny about a company that is actively developing
bleeding-edge AI technology but can't design a webpage that works on mobile
without crashing.

~~~
chii
Just goes to show how complicated web tech is; even AI researchers can't get
it right!

------
JabavuAdams
When I used to play a lot of StarCraft, and later Total Annihilation, I
wished for the ability to customize the AI.

So then BWAPI came along ... and ... AI is hard. The best SCBW bots are still
pretty pathetic compared to a human player, never mind an expert human player.

------
Ntrails
I'd be really interested in how differently tiered data sets (ladder rank)
would work as sources for teaching.

Is it possible that training on diamond players is less effective than
training on, say, silver? Is that actually even an interesting thing to look
at?

------
ipnon
Any predictions for how long it will take for an AI to win against the world's
best player?

~~~
aerovistae
A while. This just isn't like Go or Chess. The gap from perfect information
to imperfect information is quite a chasm, and the gap from turn-based to
real-time is even wider.

I play _Age of Empires 2_ semi-competitively, and I just can't imagine the
research progress that would have to be made for a pro to lose to an
APM-limited AI agent. So much of the game comes down to intuiting what your
opponent is planning without being able to see what they're doing, and more
importantly _intuiting what your opponent isn't ready for._

The biggest difference, though, is the "RT" in "RTS" -- real time. This isn't
turn-based anymore, where at a given moment you have a single choice to make,
a single piece to move as in Chess and Go, and can then wait for the singular
and visible reaction your opponent makes before making your next choice.

My understanding is that the moves a program like AlphaGo makes are not
interconnected -- it picks each move individually as an ideal move for that
board state. It could take over for someone else halfway through the game and
would make the same move it would have made at that point if it had been in
control the whole time and arrived at that board state on its own.
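That "each move chosen from the current board state alone" property can be shown in a few lines; a toy illustration (not AlphaGo's actual architecture, just a memoryless policy over a made-up board encoding):

```python
def policy(board_state):
    """A memoryless policy: the same position always yields the same move,
    no matter which sequence of moves produced that position."""
    # Toy rule: claim the first empty cell in sorted order.
    empties = [cell for cell, owner in sorted(board_state.items()) if owner is None]
    return empties[0] if empties else None

# Two different game histories that arrive at the same position
# get exactly the same move out of the policy:
position = {"a1": "x", "a2": None, "b1": None}
print(policy(position))  # a2
```

Because `policy` is a pure function of the position, swapping it in mid-game changes nothing, which is exactly the hand-off scenario described above.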

But that doesn't work in a real-time game, since you and your opponent are now
moving simultaneously and the "board" is never static. Your moves must be
cohesive and planned and flow continuously without time to ponder, each
connected to the last. There is no "one" move for a given state.

Another facet of real-time play is the idea of _distraction_. It's very
important in RTS's to keep your opponent distracted, to disrupt their plans
and their focus, by coming from unexpected directions at unexpected times,
sometimes concurrently with other operations against them. This can't happen
in Chess or Go, where the demands on your focus are far less urgent and two
things can't happen at once in a literal sense. Can an AI agent learn to
appreciate the power of distraction? Can it learn to intuit what will be most
disruptive to a human, and what won't be disruptive at all? How can you teach
a computer to _learn_ to be annoying?

I will say, of course, that nobody saw AlphaGo coming. And I hope it's the
same with RTS's. That would be _so_ exciting. I would love to see an AI blow
us away with previously unthought-of strategies. That would be the coolest
thing ever. So I hope it happens. But I'd be astonished. RTS is just such a
whole new level of thinking for AIs.

~~~
cjbprime
On the other hand, bots are starting to beat professionals at (thousands of
hands of repeated) Poker, so I don't think we can say that imperfect
information is especially intractable for machine learning algorithms.

[http://spectrum.ieee.org/automaton/robotics/artificial-
intel...](http://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/texas-holdem-ai-bot-taps-deep-learning-to-demolish-humans)

~~~
Synaesthesia
Yeah but compare the search space of Poker vs Starcraft.

~~~
aerovistae
Exactly. And poker is, again, turn-based with a single move to be made. It
may not be "perfect information," but it is again a game of static board
state, where there is probably a statistically optimal move for a given
state, provided you remember the other players' previous moves so far in the
round.

------
naveen99
> even strong baseline agents, such as A3C, cannot win a single game against
> even the easiest built-in AI.

Then why not release the code for the built-in AI and improve on it? Or is
the built-in AI cheating?

~~~
Synaesthesia
The built-in AI is scripted, whereas they're trying to teach agents to learn
the game from scratch with some sort of reward-based machine learning
approach.

~~~
naveen99
Still, why not use machine learning to tweak the script instead of starting
from scratch?

------
captn3m0
Someone needs to link this to FB's ELF platform (an Extensive, Lightweight
and Flexible platform for game research). It was made specifically for RTS
games like SC.

------
toisanji
Great that they opened it up. I'm sure reinforcement learning / deep learning
will solve this. It has been a tough problem before, but honestly it doesn't
seem that tough compared to all the harder AI problems.

~~~
Synaesthesia
Such as?

------
DefNotARogueAI
This gives me great ideas

------
onorton
I think I know what my final year project will be.

------
blobbers
YESSSSSSS!!!!!!!!

--why are there not more fanboy comments?!

~~~
hughes
Probably because they don't contribute much to the conversation.

------
JefeChulo
"so agents must interact with the game within limits of human dexterity in
terms of “Actions Per Minute”."

I am really glad they are limiting APM because otherwise things just get
stupid.

~~~
ktwo
IMO there should also be a precision limit. The timing of actions should
include human-typical jitter, and the wrong action should sometimes be
activated to simulate misclicks/fat-fingered keypresses — e.g., messing up a
control group by assigning a unit to the wrong number key. The bot must also
not be able to act faster than human reaction time (~250 ms); this could be
enforced by adding a fixed delay to the observations.
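A handicap layer like that could sit between the agent and the environment. A rough sketch of the idea (the `agent.step`/`noop_action` interface is hypothetical, not the actual PySC2 API; 6 frames ≈ 250 ms at roughly 22-24 frames per second is an assumption):

```python
import random
from collections import deque

class HumanizedAgent:
    """Wrap an agent so its actions reach the game with human-like limits:
    a fixed reaction delay, timing jitter, and occasional 'misclicks'."""

    def __init__(self, agent, delay_frames=6, jitter_frames=2, misclick_p=0.01):
        self.agent = agent                  # anything with a .step(obs) -> action
        self.delay_frames = delay_frames    # ~250 ms of reaction time
        self.jitter_frames = jitter_frames  # extra random frames of timing noise
        self.misclick_p = misclick_p        # chance an action is fat-fingered
        self.pending = deque()              # queue of (release_frame, action)

    def step(self, obs, frame, noop_action=None):
        # The agent decides now, but the action is only released to the game
        # after the reaction delay plus jitter has elapsed.
        action = self.agent.step(obs)
        if random.random() < self.misclick_p:
            action = noop_action            # stand-in for "the wrong action"
        release = frame + self.delay_frames + random.randint(0, self.jitter_frames)
        self.pending.append((release, action))

        # Release the oldest action whose delay has elapsed, else do nothing.
        if self.pending[0][0] <= frame:
            return self.pending.popleft()[1]
        return noop_action
```

Queueing decisions rather than dropping them keeps the agent's effective APM intact while guaranteeing no action lands faster than the configured reaction time.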

I wouldn't be surprised if human Starcraft II play isn't so much limited by
decision-making as by the translation of decisions into mechanical actions,
which in turn dilutes the attention devoted to actual decision making.

~~~
branja
ideally they'd train it on real keypresses rather than actions

~~~
mulmen
Why would that be ideal? Wouldn't that just make ML at the strategy layer
harder without doing anything to make the discoveries more valuable?

~~~
branja
to emulate human handicaps at the interface layer. I didn't say it would be
free

~~~
mulmen
But why is that desirable? Why would we want to emulate the human physical
handicaps in our quest to advance AI at a strategy level?

~~~
branja
For the same reason the APM are limited: to ensure that what we are doing is
really focusing on advancing strategy rather than brute mechanical skill. If I
played against an AI using nothing but the rendered frames and sound of a game
as input, I might not even make the stipulation on reflexes. I'd be humbled if
I lost.

As it stands now, most of the games I like have bad AI. Sure, it can be fun
to play a hack-and-slash against lots of little, dumb minions, but FPS and
RTS AIs these days still don't cut it as savvy opponents. Often they have
inhuman perception, direct knowledge of game state, or higher starting
resources, yet they still make abysmal decisions.

Yes, I realize these are unlikely, expensive goals and incremental progress is
how things are done. I just want to know if it's possible or desirable to
emulate actual human reaction time.

Do you disagree this would in principle help separate strategy from godlike
reflexes?

~~~
mulmen
This is not the same AI you normally face in a game. Most (all?) of those AI
opponents use rules written by the game developers to make decisions and some
of them simply cheat to be competitive ( _cough_ Mario Kart 64 _cough_ ).

This blog post is about creating AIs that interact with the game the same way
humans do: the computer plays by the same rules and has no special access to
the game state beyond what a player would have. With these constraints there
are no existing bots for StarCraft or StarCraft 2 that can even beat the
built-in rule-based AI. They aren't even close to beating professional
players.

If the strategy abilities are so weak today that we can't even beat the
tutorial AI then why introduce further arbitrary handicaps on the bots? How do
those handicaps advance the state of the strategy layer? The AI has many
potential advantages over the human player beyond just reaction time. Should
we also limit the amount of data the bot considers to emulate the amount of
inputs a human player can process? What about emulating human memory, can a
human really learn from 60,000+ games? What about 1.5 million?

I do not think it is desirable to emulate human limitations in AI unless you
are trying to create an artificial human. I think the advantage of creating
an AI is to do something people _can't_ already do, so why should we impose
our physical constraints on them?

I do not think it is important to separate reflex from strategy. Since every
player has a different APM ceiling, some strategies are more viable than
others for each individual. If I do not have the reflexes of a professional
player, there are strategies I cannot employ. As long as StarCraft does not
impose APM limits on human players to maintain competitiveness, the bots
should not have a limit either.

~~~
branja
Okay, thanks. I appreciate the counterpoint. I guess I'd like to see it both
ways: bots limited to human speed and bots not.

------
Lambent
It's not like this is going to create fantastic AI.

Keep in mind there's been an amateur AI project for Brood War for almost 7
years now. Even after such a long learning period, the games are very
primitive, and the AIs still couldn't pose a threat to even a beginner human
player. Sometimes the games take hours. Building strategy and decision
making into an AI is incredibly complicated. There have been teams working at
SSCAIT for many years now, and the product is still fairly primitive.

So what CA did instead was write up a simpler AI that mimics strategy and
decision making. We all know it's not great, but I'd be really skeptical that
3rd parties will magically create an AI that can think strategically.

------
Outrageous
Novice here: I really want to try this Starcraft API but I don't know how to
start. I believe this uses mostly reinforcement learning and agent-based
models (which, honestly, I am not familiar with yet). What are good papers to
get started on this?

