
Humanity Has Hope: Pro-Gamers Win 1 of 11 StarCraft Matches Against DeepMind AI - lawrenceyan
http://fortune.com/2019/01/24/starcraft-2-deepmind/
======
slenk
Teach an AI how to take and keep control of a battlefield. What could go
wrong?

~~~
candiodari
That's easy: not doing it.

------
tylerjwilk00
Does the AI take video input, or is it still operating through a direct API
into the game? If the latter, it's a bit like cheating. Pure data access with
no visual noise to filter through is a huge advantage for it, but it's still
impressive.

~~~
quux
I believe the AI still has a direct API to the game and in the 10 matches it
won it was able to effectively "see" and issue commands to the entire map all
the time.

In the last match they used a newer interface where the AI had the same
restricted viewport as a human and had to decide where to move its camera both
to see what's happening and to take actions. The player and commentators both
noticed that this resulted in more human-like play, and the AI seemed less
able to micro multiple armies stretched beyond the viewport, as it had in
earlier matches.

The DeepMind people said the restricted viewport did slow down training, but
in the end it was able to reach the same skill level as the previous agents.

~~~
darepublic
If the AI plays equally well with a restricted viewport, why were the majority
of the games played without one? It seems reasonable to assume that DeepMind
will eventually best humans at SC2, but it seems like Google wanted a nice
headline and rushed it, imo.

~~~
the8472
Because they only added it recently. About a week before the livestream. The
other games were recorded last year.

------
sabujp
They would have won more if more games had been played in single-view mode,
instead of letting the AI play off the entire map, where it was able to
completely surround and micro a bunch of Stalkers and defeat the Immortals.
Also, they should have gotten sOs to play against AlphaStar instead of MaNa.

~~~
pyrophane
Can you provide an explanation of what you said for those of us who don't know
much about Starcraft or competitive RTS games?

I find what you said interesting but I lack the context to understand it.

~~~
jimrandomh
AlphaStar played 11 games. In 10 of those games, it played in a mode where it
didn't have to manage the camera, so it didn't have to context-switch between
things going on in different parts of the map, the way a human would. It won
all 10 of those games, using a strategy which a human would not have been able
to pull off. In the 11th game, it played with the same camera restrictions a
human would have. It lost that game.

~~~
deburo
MaNa seemed to win his game primarily because he discovered an exploitable
maneuver against the AI.

~~~
cjbprime
Yes, AlphaStar was already up 15 workers -- "build a Phoenix" would have
completely shut down the prism and is apparently something that particular
agent never learned how to do properly.

Camera management isn't part of that equation.

Though it is fair to conjecture that Mana may have won the earlier Blink
Stalker vs Immortal game, if camera management were required.

~~~
jimrandomh
Depends. If camera management soaked up a large share of the network's
training capacity, that might explain why it failed to learn how to handle
drop harassment.

------
buboard
It is also time to move on to real world environments. Games are cool, but the
network is essentially learning some thousand lines of C++. Reinforcement
learning needs to prove itself in real-world tasks.

~~~
kevinflo
I'm a layman, but to my knowledge navigating a digital environment and a real
one are the same minus some steps of the process. A self-driving car recreates
a digital reality via sensors with as little delay and fidelity loss as
possible, then navigates a digital car within that rapidly constructed virtual
reality. It then signals back to the real car to drive it exactly as it would
drive the digital one, given its immediate digital environment. Since many of
the hard problems lie in navigating the digital space, removing the
sensors-to-digital part by training with video games still yields a lot of
valuable learning that can be brought right back into real-world applications.
Also, a fully digital environment allows this part of the learning to be done
without the spatial/time constraints of reality.
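The pipeline described above can be sketched as a sense → plan → act loop.
Everything here is illustrative, not a real robotics or game API: the point is
only that `plan()` is identical whether `sense()` reads state straight out of a
simulator or reconstructs it from real sensors, which is why training in games
transfers.

```python
def sense(world):
    """Build a digital snapshot of the environment. In a game this reads
    engine state directly; on a real car it would be reconstructed from
    camera/lidar data instead, as fast and faithfully as possible."""
    return {"position": world["position"],
            "obstacle_ahead": world["obstacle_ahead"]}

def plan(state):
    """The navigation policy. This is the hard, learnable part, and it is
    the same function regardless of where the state snapshot came from."""
    return "brake" if state["obstacle_ahead"] else "forward"

def act(world, action):
    """Apply the chosen action: update the simulated world, or on real
    hardware, send the same command to the actuators."""
    if action == "forward":
        world["position"] += 1
    return world

# Run the loop on a toy simulated world with a clear road ahead.
world = {"position": 0, "obstacle_ahead": False}
for _ in range(3):
    world = act(world, plan(sense(world)))
```

Training in simulation optimizes `plan()`; deploying to the real world swaps
out only the `sense()` and `act()` implementations around it.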

~~~
buboard
Games are simulations of idealized processes, generally using a small number
of equations. A sufficiently large neural network should be able to learn
them, but the real world has orders of magnitude more complexity. Something
similar happens with robots trained in physics engines.

------
Madmallard
This kind of garbage needs to be downvoted...

This was a failure on DeepMind's part, not a success. The bot is not given
remotely the same context as a person. It will lose in that context.

You can make a fighting game AI without deep mind that will parry every attack
and punish with a perfect combo and always beat humans because of 0 reaction
time and 0 execution flaws. Big whoop.

This is irritating clickbait.

~~~
new_guy
It really is. It's stupid people building stupider machines and trying to pass
it off as 'intelligence' so they can make $$$.

------
jahaja
These things are starting to get annoying. What's the plan here? Make a game
AI MVP, make sure it wins against pros in a PR event, make lofty statements
about how promising the new AI revolution is, draw vague connections to AGI
and to the human brain soon being obsolete, and finally wait for the AI hype
to pour funding at you for some bullshit project whose primary purpose is to
make some corp/exec look modern?

~~~
partingshots
With all due respect, I think your perspective is both limited and
narrow-minded. Games are how we as humans develop in adolescence. They are
incredibly powerful tools in helping us through the process of _learning how
to learn_.

All the model architectures and machine learning techniques that have been
developed through the playing of these games, exponentially increasing in
difficulty from Go to Starcraft, can be directly applied to real life tasks.

What is life anyway if not a superset of sets of games to be played?
Optimizing the output of a protein structure designed for chemical catalysis,
teaching a car how to get from point A to point B while navigating a
rule-based environment full of obstacles, etc. All _merely simple_ games, and
yet filled with pretty much infinite possibility.

Underestimate the power and potential of machines learning to play at your own
peril.

~~~
jahaja
That entire reply is a great example of "vague connections to AGI".

I hope this AI hype cycle ends soon and funding gets diverted from tech to
actual science in the medical field. But I guess that's unlikely. I see no
obvious evidence that AI/ML is any less of a speculative hype than the already
deflating self-driving car hype.

I don't doubt that ML will have its practical applications, given the amount
of effort currently being put into it, but I do think it is wildly inflated.
Not to mention that the spectrum of software that counts as AI seems to get
wider every week.

That the AI hype is more or less entirely confined within an already hyped
tech industry doesn't seem to me like a coincidence. An AI revolution hellbent
on figuring out how intelligence works would surely include other fields than
just tech?

~~~
lawrenceyan
[https://deepmind.com/blog/alphafold/](https://deepmind.com/blog/alphafold/)

[https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp...](https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/#s1.4)

------
pingec
Is there a video of the game where AI lost?

~~~
rococode
It was live streamed, you can watch it here:
[https://www.youtube.com/watch?v=cUTMhmVh1qs&t=2h31m22s](https://www.youtube.com/watch?v=cUTMhmVh1qs&t=2h31m22s)

The first two and a half hours go over the other 10 matches, which weren't
played live on stream.

