
DeepMind’s work in 2016: a round-up - jonbaer
https://deepmind.com/blog/deepmind-round-up-2016/
======
rattray
I can't believe this one passed under my radar:

[https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/](https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/)

> Our partnership with Google’s data centre team used AlphaGo-like techniques
> to discover creative new methods of managing cooling, leading to a
> remarkable 15% improvement in the buildings’ energy efficiency.

Which itself is an understatement of the achievement:

> Our machine learning system was able to consistently achieve a 40 percent
> reduction in the amount of energy used for cooling, which equates to a 15
> percent reduction in overall PUE overhead after accounting for electrical
> losses and other non-cooling inefficiencies. It also produced the lowest PUE
> the site had ever seen.
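
For reference, PUE is total facility energy divided by IT equipment energy, so the "overhead" is PUE minus 1. A toy calculation (the numbers here are illustrative assumptions, not Google's actual figures) showing how a 40% cut in cooling energy can work out to a ~15% cut in PUE overhead when cooling is only part of the overhead:

```python
# PUE = total facility energy / IT equipment energy; overhead = PUE - 1.
# All numbers are illustrative, not DeepMind's actual figures.
it_energy = 100.0         # IT load (arbitrary units)
cooling = 4.5             # energy spent on cooling
other_overhead = 7.5      # electrical losses and other non-cooling inefficiencies

pue_before = (it_energy + cooling + other_overhead) / it_energy
overhead_before = pue_before - 1

cooling_after = cooling * (1 - 0.40)  # 40% reduction in cooling energy
pue_after = (it_energy + cooling_after + other_overhead) / it_energy
overhead_after = pue_after - 1

reduction = 1 - overhead_after / overhead_before
print(f"overhead reduction: {reduction:.0%}")  # 15% with these numbers
```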

Basically, they built an AI that was able to tune the "large industrial
equipment such as pumps, chillers and cooling towers" to react to the dynamic,
nonlinear interactions that vary within and between datacenters (weather,
utilization, etc).
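
The blog post described training neural networks on historical sensor data to predict PUE and then using the model to recommend settings. A minimal sketch of that general idea, with made-up variable names and a simple least-squares fit standing in for the deep net (everything here is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: (chiller setpoint, outside temp) -> observed PUE.
# The quadratic ground truth below is invented purely for the demo.
setpoints = rng.uniform(5, 15, size=200)   # chilled-water setpoint (deg C)
weather = rng.uniform(10, 35, size=200)    # outside air temp (deg C)
pue = (1.1 + 0.004 * (setpoints - 9) ** 2 + 0.002 * weather
       + rng.normal(0, 0.005, size=200))

# Fit a quadratic-in-setpoint PUE predictor (stand-in for the neural net).
X = np.column_stack([np.ones_like(setpoints), setpoints, setpoints ** 2, weather])
coef, *_ = np.linalg.lstsq(X, pue, rcond=None)

def predicted_pue(sp, temp):
    return float(coef @ np.array([1.0, sp, sp ** 2, temp]))

# Given today's conditions, pick the setpoint the model predicts is best.
candidates = np.linspace(5, 15, 101)
today = 25.0
best = min(candidates, key=lambda sp: predicted_pue(sp, today))
print(f"recommended setpoint: {best:.1f} C")
```

The real system presumably handles many coupled control variables and safety constraints at once; this only shows the predict-then-optimize loop.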

They also describe this AI as "General":

> Because the algorithm is a general-purpose framework to understand complex
> dynamics, we plan to apply this to other challenges in the data centre
> environment and beyond in the coming months.

They seem to imply that this technique could make almost any industrial
process more efficient with minimal oversight/training/customization.

~~~
forgotmyhnacc
I remember reading the blog post. Did they ever follow up with a research
paper explaining what they actually did?

~~~
gwern
Sadly, not yet. Might be _too_ secret sauce even for Google.

~~~
conistonwater
Nonlinear control with neural networks has existed for a long time, though,
it's not exactly new. Is there reason to think they did something profoundly
new in that specific instance?

~~~
mooneater
The use of a DNC (Differentiable Neural Computer) for an industrial
application is probably what's novel.

~~~
gwern
Yes. When this first came up, someone on /r/machinelearning scoffed that using
NNs to control fans/coolers/power systems to optimize was trivial and obvious
and had been done for decades. When I challenged him for a ref, he produced a
PDF which listed dozens of industrial applications for various nonlinear NN
algorithms... none of which were actually autonomously controlling anything
and most were predictive.

------
conistonwater
> _In its ability to identify and share new insights about one of the most
> contemplated games of all time, AlphaGo offers a promising sign of the value
> AI may one day provide, and we're looking forward to playing more games in
> 2017._

What I find fascinating is how different AlphaGo's impact was from the impact
of early chess engines. Once chess engines became merely decent (even before
Deep Blue), they identified tons of inaccuracies in published chess
literature: missed tactics and hard-to-see moves that required only
relatively shallow calculation but a computer's precision. These inaccuracies
turned up even in classical annotations of well-known games, as well as in
standard books about openings. I believe John Nunn was known for this kind of
work.

AlphaGo hasn't achieved nearly the same impact. Have they even tried to
identify the same types of inaccuracies in classical Go books? Can you
imagine how absolutely cool it would be for a Go engine to find errors in
_Invincible_? Maybe they tried but didn't find any inaccuracies, and now that
negative result is sitting in one of their file drawers. I really wish they
were more active with this sort of thing.

~~~
nilkn
Are there any open source or widely available Go engines that are even
remotely as strong as AlphaGo (which I think we can safely assume plays beyond
the 9p level, i.e., at superhuman strength)? If the only seriously strong
engine is completely tied down behind the closed doors of a company that has
largely already moved on to other challenges, I think that's the issue right
there -- or at least a large part of it.

~~~
gort
I think the strongest commercial engines are Zen and CrazyStone. Zen has
certainly become pro-strength on fairly modest hardware (it had an exciting
best-of-three against a Japanese 9-dan, which it lost 1-2), but I think the
strong version isn't available to the public yet.

Interestingly, for the past three or four days a mystery Go bot calling
itself "Master" has been playing on two servers, and it's currently about
50-0 (!) against professional opponents, many at the very highest level.
Nobody knows its identity yet. It's stronger than the commercial engines, and
its presence must be some sort of stunt or test by someone.

[http://lifein19x19.com/forum/viewtopic.php?f=10&t=13913](http://lifein19x19.com/forum/viewtopic.php?f=10&t=13913)

~~~
gort
Now confirmed to be AlphaGo.

[https://twitter.com/demishassabis/status/816660463282954240](https://twitter.com/demishassabis/status/816660463282954240)

------
gallerdude
I'm excited for WaveNet to get faster/more accessible. I'm not very
technically inclined when it comes to downloading things off GitHub and
hacking them together.

My goal is audio books - I'd love to hear them read by my favorite movie
characters.

~~~
bobajeff
I can't wait to have Peter (Geoffrey Francis) Jones read me all Wikipedia
articles.

------
merqurio
They had a big presence at EWRL; it's really amazing how much effort goes
into their research. Really inspiring.

------
doobwa
> We’re still a young company early in our mission...

What? Why don't they consider themselves part of Google?

~~~
sanxiyn
I don't know why, but it's very clear they consider themselves somewhat
independent of Google.

The clearest sign I've seen is that the Partnership on AI site displays both
a Google logo and a DeepMind logo.

[https://www.partnershiponai.org/](https://www.partnershiponai.org/)

------
WhitneyLand
What indeterminate games can AI beat humans at?

Backgammon - AI usually beats the best humans

Ms. Pac Man - AI loses to almost any human

~~~
john_reel
What makes Ms. Pac Man so hard for AI?

~~~
tim333
I'm not sure it's hard in general, but DeepMind's program didn't do well
because it couldn't plan ahead:
[https://www.technologyreview.com/s/535446/googles-ai-masters-space-invaders-but-it-still-stinks-at-pac-man/](https://www.technologyreview.com/s/535446/googles-ai-masters-space-invaders-but-it-still-stinks-at-pac-man/)

~~~
varelse
It's also inferring the game state from looking at the screen rather than
being spoonfed the data.

One could conceivably perform a depth-limited search on the actual game state
if it were available and then use an AlphaGo-like DNN to predict what a deeper
search would find, no?
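
A minimal sketch of that idea: depth-limited minimax over a known game state, with an evaluator at the leaves standing in for what a deeper search (or a trained value network) would return. The toy game and all function names here are hypothetical:

```python
# Depth-limited search on an explicit game state, with a "learned"
# evaluator at the leaves. Everything here is a toy stand-in.

def moves(state):
    # Toy game: state is a running score; each move adds -1, 0, or +2.
    return [state - 1, state, state + 2]

def value_estimate(state):
    # Stand-in for a value network trained to predict deep-search results.
    return float(state)

def search(state, depth, maximizing=True):
    """Depth-limited minimax: recurse to `depth`, then trust the evaluator."""
    if depth == 0:
        return value_estimate(state)
    children = [search(s, depth - 1, not maximizing) for s in moves(state)]
    return max(children) if maximizing else min(children)

# Pick our move assuming the opponent replies to minimize our score.
best_move = max(moves(0), key=lambda s: search(s, 2, maximizing=False))
print(best_move)  # -> 2
```

With a real game you'd replace `moves` with the engine's legal-move generator and `value_estimate` with the network, which is roughly the AlphaGo value-network setup applied to a smaller search.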

~~~
tim333
Dunno really. A human would watch the ghosts' behaviour and guess their likely
future behaviours based on that. I'm not sure if the software gets that, as it
were, or how you'd tweak it to do so.

------
malcolmgreaves
Does anyone know of any other companies that are similar to DeepMind, but
based in SF?

~~~
sherjilozair
OpenAI ([https://openai.com/](https://openai.com/)). OpenAI's research vision
is at a similarly ambitious scale to DeepMind's. They're also relatively more
open in their research, known for releasing papers, code, and models quickly.
They're not as big yet, though; more like DeepMind was immediately after
publishing its first Atari work in 2013. Give them time, and I expect them to
become a formidable competitor to DeepMind.

~~~
deepnotderp
As a bonus, OpenAI is a nonprofit that always allows publishing. They also
have some of the top names in DL: Ian Goodfellow, Ilya Sutskever, and
Zaremba. In DRL (DeepMind's arena), they have Abbeel and Schulman, both
absolute powerhouses.

