
What's wrong with deep learning? (2015) [pdf] - thedoctor
http://www.pamitc.org/cvpr15/files/lecun-20150610-cvpr-keynote.pdf
======
vonnik
This was a long presentation. Yann gets to the issue in the title of the post
about one third of the way through. The first third should probably be called
"What's right with deep learning, or how DL works"...

For each problem, he explores some salient ideas or ways to address the issue.

TLDR:

* Theory: We don't always have good explanations for why it works.

* Reasoning: Stick a CRF on top of a deep net.

* Memory: We need a "hippocampus": memory networks, neural embeddings.

* Unsupervised learning: How do we speed up inference in a generative model? Sparse autoencoders, sparse models...
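To make the last bullet concrete, here's a minimal numpy sketch of a sparse autoencoder: an overcomplete hidden layer trained to reconstruct its input, with an L1 penalty pushing most hidden activations to zero. This is not from the slides; the data, architecture, and hyperparameters are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for image patches: 200 samples, 20 dims.
X = rng.normal(size=(200, 20))

n_in, n_hid = 20, 40                      # overcomplete code, as in sparse coding
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))
b2 = np.zeros(n_in)

lr, l1 = 0.05, 0.01                       # assumed hyperparameters

def recon_loss():
    H = np.maximum(0.0, X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - X) ** 2))

loss_before = recon_loss()
for _ in range(500):
    H = np.maximum(0.0, X @ W1 + b1)      # ReLU encoder
    X_hat = H @ W2 + b2                   # linear decoder
    err = X_hat - X
    # Gradient of the reconstruction error plus an L1 sparsity
    # penalty that pushes hidden activations toward zero.
    dH = err @ W2.T + l1 * np.sign(H)
    dH[H <= 0] = 0.0                      # ReLU derivative
    W2 -= lr * H.T @ err / len(X)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X)
    b1 -= lr * dH.mean(axis=0)
loss_after = recon_loss()

print(f"reconstruction error: {loss_before:.3f} -> {loss_after:.3f}")
```

The point of the L1 term is that each input ends up explained by a few active units rather than all of them, which is what makes inference in the learned model cheap.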

For those who could use an overview of neural nets and how some of them work,
this may be useful: [http://deeplearning4j.org/neuralnet-overview.html](http://deeplearning4j.org/neuralnet-overview.html)

~~~
mrfusion
What's a CRF?

~~~
argonaut
Conditional random field. It's a model that comes from the
probabilistic/statistical side of ML, which was the "hot" ML area before deep
learning.
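In the "CRF on top of a deep net" setup, the net emits a score for each label at each time step, and the CRF adds transition scores between neighboring labels; decoding then picks the jointly best label sequence instead of classifying each step independently. A minimal numpy sketch of Viterbi decoding, with random numbers standing in for the network's scores (sizes and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-step class scores a deep net would emit:
# T = 5 time steps, K = 3 labels.
T, K = 5, 3
emissions = rng.normal(size=(T, K))
transitions = rng.normal(size=(K, K))   # transitions[i, j]: score of label i -> j

# Viterbi: maximize sum_t emissions[t, y_t] + sum_t transitions[y_{t-1}, y_t].
score = emissions[0].copy()             # best score ending in each label
back = np.zeros((T, K), dtype=int)      # backpointers
for t in range(1, T):
    # cand[i, j] = best score ending in i at t-1, then moving to j.
    cand = score[:, None] + transitions + emissions[t]
    back[t] = cand.argmax(axis=0)
    score = cand.max(axis=0)

# Follow backpointers from the best final label.
path = [int(score.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
path.reverse()
print("best label sequence:", path)
```

The same dynamic program (with max replaced by logsumexp) gives the partition function needed to train the whole thing end to end.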

------
andrewtbham
Here is the video.

[http://techtalks.tv/talks/whats-wrong-with-deep-learning/61639/](http://techtalks.tv/talks/whats-wrong-with-deep-learning/61639/)

~~~
epberry
Really excellent talk, thanks for posting the video! That was a dense
presentation, jam-packed with information, new papers, and ideas. Does anybody
know of a 2016 version of this? I know CVPR and ICML just happened and I'm not
sure their talks and presentations are up yet, but this field is moving at
lightning speed and I'd be interested to see updates on the techniques
presented here.

