
Deep Learning – Review by LeCun, Bengio, and Hinton - chriskanan
http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html
======
thisisdave
Has LeCun changed his position on open access? He'd previously "pledged to no
longer do any volunteer work, including reviewing, for non-open access
publications." [1] A major part of his reasoning was, "Why should people pay
$31 to read individual papers when they could get them for free if they were
published by JMLR?"

I had assumed that meant he wouldn't write for them either (and thus wouldn't
enlist other people to volunteer as reviewers when the final product would
cost $32 to read).

[1]
[https://plus.google.com/+YannLeCunPhD/posts/WStiQ38Hioy](https://plus.google.com/+YannLeCunPhD/posts/WStiQ38Hioy)

~~~
paulsutter
The publishers aren't the problem, and the authors aren't the problem. We the
readers are the problem. Seriously. Let me explain.

I have to admit, when I saw "LeCun, Hinton, in Nature" I thought "That must be
an important article, I need to read it". I haven't read every single paper by
LeCun or Hinton. The Nature name affected me. That's why it's rational to
publish there.

There's still no effective alternative to the journal system for identifying
which papers are important to read. There have been attempts, Google Scholar
and CiteSeer for example.

A voting system like Hacker News wouldn't work, because Geoff Hinton's vote
should count for a lot more than mine. PageRank solved that problem for web
pages (a link from Yahoo indicates more value than a link from my blog). How
can scientific publication move to such a system?
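
To make the idea concrete, here's a toy sketch of what a PageRank-style score
over a citation/endorsement graph could look like, so that an endorsement from
a highly ranked researcher moves a paper more than one from me. The papers and
citation links below are invented for illustration:

    # Toy PageRank over a citation graph (papers and links are made up).
    import numpy as np

    papers = ["hinton2006", "lecun1998", "myblogpost"]
    # cites[i][j] = 1 means paper i cites (endorses) paper j
    cites = np.array([
        [0, 1, 0],
        [0, 0, 0],
        [1, 1, 0],
    ], dtype=float)

    d = 0.85                     # damping factor, as in classic PageRank
    n = len(papers)
    # Row-normalize; dangling nodes (no outgoing citations) spread evenly
    out = cites.sum(axis=1, keepdims=True)
    M = np.where(out > 0, cites / np.maximum(out, 1), 1.0 / n)

    rank = np.full(n, 1.0 / n)
    for _ in range(50):          # power iteration until convergence
        rank = (1 - d) / n + d * M.T @ rank

    for p, r in sorted(zip(papers, rank), key=lambda t: -t[1]):
        print(f"{p}: {r:.3f}")

The point is just that rank flows along endorsements, so a vote is weighted by
the standing of whoever cast it.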

~~~
hackuser
> How can scientific publication move to such a system?

Complete amateur speculation: Scientists' professional societies could create
elite, free, online journals, with a limited number of articles per month (to
ensure only the best are published there), openly stating that they intend
these to be the new elite journals in their respective fields.

~~~
aheilbut
PLoS has tried to do that, with journals like PLoS Biology.

Hypothetically, AAAS is in the best position to pull something like this off,
but as the publishers of Science, they're sadly very committed to preserving
the status quo...

~~~
joelthelion
PLoS is actually a pretty nice success. It's more than an experiment at this
point.

------
rfrey
$32 seems steep for one article. Does anyone know if Nature allows authors to
post their papers on their own websites? Nothing so far on Hinton's, LeCun's,
or Bengio's pages.

~~~
gwern
[https://www.dropbox.com/s/fmc3e4ackcf74lo/2015-lecun.pdf](https://www.dropbox.com/s/fmc3e4ackcf74lo/2015-lecun.pdf)
/ [http://sci-hub.org/downloads/d397/lecun2015.pdf](http://sci-hub.org/downloads/d397/lecun2015.pdf)

~~~
dmazin
gwern saves the day once again.

------
cowpig
Hurray for science behind paywalls!

~~~
IanCal
Here's a link where you can read it:
[http://www.nature.com/articles/nature14539.epdf?referrer_acc...](http://www.nature.com/articles/nature14539.epdf?referrer_access_token=K4awZz78b5Yn2_AoPV_4Y9RgN0jAjWel9jnR3ZoTv0PU8PImtLRceRBJ32CtadUBVOwHuxbf2QgphMCsA6eTOw64kccq9ihWSKdxZpGPn2fn3B_8bxaYh0svGFqgRLgaiyW6CBFAb3Fpm6GbL8a_TtQQDWKuhD1XKh_wxLReRpGbR_NdccoaiKP5xvzbV-x7b_7Y64ZSpqG6kmfwS6Q1rw%3D%3D&tracking_referrer=www.nature.com)

------
yzh
There is a whole section on machine intelligence in this issue. Just curious:
is there more CS-related research in Nature now than before?

~~~
chriskanan
Nature and Science both generally have only a few CS-focused papers per year.
This issue has a special theme (Machine learning and robots), so it breaks
that general pattern. I regularly read Nature and Science for general science
news, commentary, etc.

~~~
yzh
Thanks.

------
paulsutter
If you only have time to read one paper on deep learning, read this one.

A few quotes:

"This rather naive way of performing machine translation has quickly become
competitive with the state-of-the-art, and raises serious doubts about whether
understanding a sentence requires anything like the internal symbolic
expressions that are manipulated by using inference rules. It is more
compatible with the view that everyday reasoning involves many simultaneous
analogies that each contribute plausibility to a conclusion"
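
For context, the "rather naive way" the paper refers to is the encoder-decoder
setup: squeeze the whole source sentence into one vector, then decode the
translation token by token. A minimal, untrained PyTorch sketch with toy
vocabulary and layer sizes (shapes only, not a working translator):

    import torch
    import torch.nn as nn

    vocab_src, vocab_tgt, emb, hid = 1000, 1000, 32, 64

    class TinySeq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.src_emb = nn.Embedding(vocab_src, emb)
            self.tgt_emb = nn.Embedding(vocab_tgt, emb)
            self.encoder = nn.GRU(emb, hid, batch_first=True)
            self.decoder = nn.GRU(emb, hid, batch_first=True)
            self.out = nn.Linear(hid, vocab_tgt)

        def forward(self, src, tgt):
            # Encode the whole source sentence into a single state vector
            _, state = self.encoder(self.src_emb(src))
            # Decode target tokens conditioned on that vector
            dec, _ = self.decoder(self.tgt_emb(tgt), state)
            return self.out(dec)   # next-token logits per target position

    model = TinySeq2Seq()
    src = torch.randint(0, vocab_src, (1, 7))   # a 7-token "sentence"
    tgt = torch.randint(0, vocab_tgt, (1, 5))
    print(model(src, tgt).shape)                # torch.Size([1, 5, 1000])

No parse trees, no inference rules: the sentence's meaning lives entirely in
that intermediate activity vector.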

"The issue of representation lies at the heart of the debate between the
logic-inspired and the neural-network-inspired paradigms for cognition. In the
logic-inspired paradigm, an instance of a symbol is something for which the
only property is that it is either identical or non-identical to other symbol
instances. It has no internal structure that is relevant to its use; and to
reason with symbols, they must be bound to the variables in judiciously chosen
rules of inference. By contrast, neural networks just use big activity
vectors, big weight matrices and scalar non-linearities to perform the type of
fast ‘intuitive’ inference that underpins effortless commonsense reasoning."
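
That "big activity vectors, big weight matrices and scalar non-linearities"
description is literal. A minimal numpy sketch, with arbitrary sizes and
random weights standing in for trained ones:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(512)            # activity vector (the input)
    W1 = rng.standard_normal((1024, 512))   # weight matrix, layer 1
    W2 = rng.standard_normal((10, 1024))    # weight matrix, layer 2

    relu = lambda v: np.maximum(v, 0)       # scalar non-linearity, elementwise

    h = relu(W1 @ x)                        # hidden activity vector
    y = W2 @ h                              # output scores: one fast pass,
    print(y.shape)                          # no symbol binding, no rules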

"Problems such as image and speech recognition require the input–output
function to be insensitive to irrelevant variations of the input, such as
variations in position, orientation or illumination of an object, or
variations in the pitch or accent of speech, while being very sensitive to
particular minute variations (for example, the difference between a white wolf
and a breed of wolf-like white dog called a Samoyed). At the pixel level,
images of two Samoyeds in different poses and in different environments may be
very different from each other, whereas two images of a Samoyed and a wolf in
the same position and on similar backgrounds may be very similar to each
other. A linear classifier, or any other ‘shallow’ classifier operating on raw
pixels could not possibly distinguish the latter two, while putting the former
two in the same category.... The conventional option is to hand design good
feature extractors, which requires a considerable amount of engineering skill
and domain expertise. But this can all be avoided if good features can be
learned automatically using a general-purpose learning procedure. This is the
key advantage of deep learning."
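
A hedged illustration of that last point (not from the paper): on data where
class membership depends on an interaction between raw inputs, a stand-in for
the Samoyed-vs-wolf case, a linear classifier on the raw values does no better
than chance, while a small network that learns its own features succeeds. A
scikit-learn sketch:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))    # raw "pixels"
    y = (X[:, 0] * X[:, 1] > 0).astype(int)   # class is an interaction effect

    linear = LogisticRegression().fit(X, y)
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, y)

    print("linear accuracy:", linear.score(X, y))         # ~0.5, chance level
    print("learned-feature accuracy:", mlp.score(X, y))   # close to 1.0

The hidden layer learns the features a human engineer would otherwise have had
to hand-design.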

"Deep neural networks exploit the property that many natural signals are
compositional hierarchies, in which higher-level features are obtained by
composing lower-level ones. In images, local combinations of edges form
motifs, motifs assemble into parts, and parts form objects. Similar
hierarchies exist in speech and text from sounds to phones, phonemes,
syllables, words and sentences. The pooling allows representations to vary
very little when elements in the previous layer vary in position and
appearance"

~~~
deepnet
Don't miss Chris Olah's blog, which Figure 1 was copied from:

[http://colah.github.io/](http://colah.github.io/)

It's very visually insightful on the nature of neural nets, convnets, deep
nets...

------
evc123
Nature should just use targeted advertising to make their journals free,
similar to the way Google and Facebook make their services free using targeted
ads.

~~~
grayclhn
I think you're overestimating the number of people that read Nature.

------
jphilip147
Very helpful review.

------
itistoday2
Why do these articles on RNNs and "Deep Learning" never mention Hierarchical
Temporal Memory?

[https://en.wikipedia.org/wiki/Hierarchical_temporal_memory#D...](https://en.wikipedia.org/wiki/Hierarchical_temporal_memory#Deep_Learning)

~~~
akyu
It seems like there's a kind of taboo against HTM in the ML community. I guess
it stems from their lack of impressive benchmarks, but I think that's kind of
missing the point when it comes to HTM. Maybe HTM isn't the right solution,
but I think there is a lot to be learned from neocortex-inspired models, and
HTM is at least a solid step in that direction. And the work Numenta has
contributed on sparse coding shouldn't be overlooked.

~~~
davmre
I don't think most ML people are actively hostile to HTM, just indifferent
until they show some results. For example, Yann LeCun in his Reddit AMA:
[http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_...](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/chisjsc)

