
Google's Intelligence Designer - finisterre
http://www.technologyreview.com/news/532876/googles-intelligence-designer/
======
mturmon
These MIT Tech Review articles, alas, emphasize hype: "No one had ever
demonstrated software that could learn to master such a complex task from
scratch," and "But until DeepMind’s Atari demo, no one had built a system
capable of learning anything nearly as complex as how to play a computer game,
says Hassabis."

I think the article must have overlooked significant activity in training
learning systems to play games well. The glaring omission for me was
Neurogammon (1987), later TD-Gammon (1992), developed by Gerry Tesauro and
colleagues ([http://en.wikipedia.org/wiki/TD-Gammon](http://en.wikipedia.org/wiki/TD-Gammon)).
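
The core trick in TD-Gammon was temporal-difference learning: after each move, nudge the value estimate of the current position toward the estimate of the position that follows. A toy sketch of the TD(0) update on a five-state random walk (my own illustrative example, nothing like Tesauro's actual backgammon net):

```python
import random

# Toy TD(0) sketch of the idea behind TD-Gammon: after each move, the value
# estimate of a state is nudged toward the estimate of its successor state.
# The five-state random walk below is a stand-in illustration, not
# Tesauro's backgammon setup.
N = 5                  # non-terminal states 0..4
V = [0.5] * N          # value estimates, initialised to "no idea"
alpha = 0.1            # learning rate

def episode():
    s = N // 2
    while True:
        s2 = s + random.choice([-1, 1])
        if s2 < 0:                           # left terminal: reward 0
            V[s] += alpha * (0.0 - V[s])
            return
        if s2 >= N:                          # right terminal: reward 1
            V[s] += alpha * (1.0 - V[s])
            return
        V[s] += alpha * (V[s2] - V[s])       # the TD(0) update
        s = s2

random.seed(0)
for _ in range(5000):
    episode()
```

After training, V climbs roughly monotonically toward the rewarding end (the true values for this walk are 1/6 … 5/6). TD-Gammon did the same thing, but with a neural network as the value function and backgammon positions as states.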

Neurogammon was, at the time, a sensation at the same conference the article
coyly refers to as "a leading research conference on machine learning." The
paper has almost 1000 citations. A curious omission.

~~~
cryptoz
Aren't these quite different tasks, though? There's a big difference in
'learning to play a specific game well' vs. 'learning to play arbitrary
games'; such a big difference that I think they're entirely different
disciplines. Correct me if I'm wrong, but the software in the research you
reference was given the ruleset to the game, right? And DeepMind's software is
not given that information, I think. I doubt they intentionally omitted that
work, I think it's more likely they didn't consider it relevant enough.

~~~
mturmon
Thanks for the correction. The "arbitrary" qualifier is not in TFA, but, as
you said, that's indeed the point of the demo, e.g.:
[https://www.youtube.com/watch?v=EfGD2qveGdQ](https://www.youtube.com/watch?v=EfGD2qveGdQ)
Note that they're using just the video signal from the game as input.
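
For anyone curious what "learning from just the video signal and the score" means mechanically: underneath, it's reinforcement learning. Here is a toy tabular Q-learning loop on a made-up 1-D "walk to the goal" task (DeepMind's contribution was making this kind of loop work with a deep network reading raw pixels, which this sketch does not attempt):

```python
import random

# Toy tabular Q-learning sketch of the loop DeepMind scaled up: the agent
# sees only an observation and a score, and improves by trial and error.
# A 1-D "walk to the goal" stands in for an Atari screen; the states,
# rewards, and constants are made up for illustration.
GOAL = 6
ACTIONS = (-1, 1)
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(1)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), GOAL)
        r = 1.0 if s2 == GOAL else 0.0       # the "score" signal
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

After training, the greedy policy walks right toward the goal from every state, learned purely from the reward signal, with no ruleset given.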

It's really a sad comment on the state of reporting at _MIT Tech Review_ that
you learn more about the tech from a YouTube video than from the article.

(My complaint is not with the DeepMind people, it's with the article, which
should put the work in context.)

~~~
gumby
> It's really a sad comment on the state of reporting at _MIT Tech Review_
> that...

I feel compelled to point out that the only connection between the "MIT" tech
review and MIT is that the magazine licenses the name from the alumni
association. It's how the alumni association funds itself, and every MIT grad
gets a lifetime subscription to a version of the magazine with the alumni
notes bound into the back. I doubt many of us read it. I don't know how many
people _other_ than MIT grads read it, but I would imagine vanishingly few.

A friend of mine calls it "the magazine of things that will never happen"
which I think is dead on. It's a shame because the editor, Jason Pontin, is
actually a good guy, so it's surprising that the magazine continued to suck
after he took it over.

There are many reasons to criticize MIT (don't I know it!) but you can't judge
the institute by this magazine.

~~~
ghaff
I'm going to disagree a bit here. Tech Review does tend to focus on the
possibilities of technology and to highlight potentially exciting research.
Almost by definition, a lot of this stuff is never going to amount to anything
commercially interesting. I suppose that TR could insert more implicit or
explicit disclaimers to that effect but I find it a good source for insights
into what's going on in the labs.

Personally, I think that Jason has brought a lot of positive changes to a
magazine that, for a long time, tended toward a technology policy wonkish
orientation.

So I think it's fair that a lot of what's written about "will never happen."
But I'm not sure that's really avoidable if you cover cutting-edge research.

------
higherpurpose
FYI, this is also the guy who made Elon Musk fear strong AI. Elon Musk
invested in DeepMind in the early days just to see where AI was going.

------
xnxn
For a brief, horrifying moment, I thought this was the name of a product.

What a time to be alive.

------
drewda
Machine learning folks don't know the history of CS or AI, so they've
reinvented neural networks as "deep learning"?

Or, industry types are looking for the next big thing, after "big data," and
have rebranded neural networks as "deep learning"?

I don't mean to be too cynical, but I still don't understand whether "deep
learning" represents any meaningful advance beyond the ML and EE communities
discovering the benefits of a certain amount of structure, which is already
well established in other lines of research.

~~~
eivarv
Deep learning is not just neural networks, but rather the application of these
in deep (i.e. many-layered) architectures, broadly speaking.

This enables hierarchical learning of increasingly complex concepts – building
new concepts on top of simpler ones from previous layers. Deep architectures
are thus able to learn high-level abstractions, as in [1], for instance.
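
To make "many-layered" concrete, here's a bare-bones sketch: the same fully connected layer composed three times, so each layer consumes the previous layer's output. The weights are random placeholders and no training is shown – it only illustrates the composition that makes an architecture "deep":

```python
import math
import random

# Minimal sketch of a deep architecture: one layer type, stacked, so each
# layer computes features of the previous layer's features. Sizes and
# weights are arbitrary placeholders; training is deliberately omitted.
random.seed(0)

def layer(inputs, weights, biases):
    # One fully connected layer with a sigmoid nonlinearity.
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

sizes = [8, 6, 4, 2]     # three stacked layers -> a (very small) deep net
params = [make_layer(a, b) for a, b in zip(sizes, sizes[1:])]

def forward(x):
    for weights, biases in params:
        x = layer(x, weights, biases)   # hierarchical: features of features
    return x

out = forward([0.5] * 8)   # two outputs, each in (0, 1)
```

Training such a stack end to end is what was hard for a long time; the 2000s-era work (layer-wise pre-training, better initialisation, GPUs, more data) is what made deep stacks practical.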

If you have not yet done so, I would strongly urge you to read some papers on
the subject from the last decade (e.g. Hinton, Bengio or LeCun), or even just
skim through the Wikipedia entry [2].

[1] [http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/](http://www.technologyreview.com/view/532886/how-google-translates-pictures-into-words-using-vector-space-mathematics/)

[2] [http://en.wikipedia.org/wiki/Deep_learning](http://en.wikipedia.org/wiki/Deep_learning)

~~~
return0
Deep learning is a large-scale application of Restricted Boltzmann Machines,
of which Hinton (among others) was a pioneer. But that work dates to the 80s,
not the 2000s.

[http://en.wikipedia.org/wiki/Restricted_Boltzmann_machine](http://en.wikipedia.org/wiki/Restricted_Boltzmann_machine)

~~~
eivarv
I don't believe the term "deep learning" is restricted to RBMs only – at least
that's not the way I've seen the term used in literature (e.g. Deep
Convolutional Neural Networks, various deep Autoencoders, etc.).

~~~
return0
Convolutional networks were also developed in the 80s, as were backpropagation
algorithms (autoencoders). The way I see it used, "deep" usually means many
layers, indicating a difference in quantity, not in quality.

Point is, the science has been there since the 80s, and not much has changed.

~~~
eivarv
Sure, but these types of deep architectures haven't really been practical
until relatively recently.

Well, then we're in agreement about the meaning of the term: Deep Learning
would be Machine Learning using any of these deep architectures – be they
Restricted Boltzmann Machines or otherwise.

------
piratebroadcast
This guy vs Shingy, AOL's "Digital Prophet".

