Hacker News
What's wrong with deep learning? (2015) [pdf] (pamitc.org)
114 points by thedoctor on July 13, 2016 | 7 comments



This was a long presentation. Yann gets to the issue in the title of the post about one third of the way through. The first third should probably be called "What's right with deep learning, or how DL works"...

For each problem, he explores some salient ideas or ways to address the issue.

TLDR:

* Theory: We don't always have good explanations for why it works.

* Reasoning: Stick a CRF on top of a deep net.

* Memory: We need a "hippocampus". Memory networks, neural embeddings.

* Unsupervised Learning: How do we speed up inference in a generative model? Sparse autoencoders, sparse models...

For those who could use an overview of neural nets and how some of them work, this may be useful: http://deeplearning4j.org/neuralnet-overview.html
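To make the "sparse autoencoder" bullet above concrete, here is a minimal numpy sketch (not from the talk; sizes, the L1 weight, and the tied-weight choice are illustrative assumptions): an encoder/decoder with a reconstruction loss plus an L1 penalty that pushes hidden codes toward zero, trained with a crude finite-difference gradient just to show the loss going down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features (illustrative sizes, not from the talk).
X = rng.normal(size=(100, 20))

# Tied-weight autoencoder: encode with W, decode with W.T.
W = rng.normal(scale=0.1, size=(8, 20))   # 8 hidden units

def encode(W, X):
    # ReLU encoder; ReLU produces exact zeros, which helps sparsity.
    return np.maximum(0.0, X @ W.T)

def loss(W, X, l1=0.1):
    H = encode(W, X)
    X_hat = H @ W                      # tied decoder
    recon = np.mean((X - X_hat) ** 2)  # reconstruction error
    sparsity = np.mean(np.abs(H))      # L1 penalty -> sparse codes
    return recon + l1 * sparsity

# Finite-difference gradient: slow but dependency-free for a sketch.
def grad(W, X, eps=1e-5):
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            G[i, j] = (loss(Wp, X) - loss(Wm, X)) / (2 * eps)
    return G

before = loss(W, X)
for _ in range(40):
    W -= 0.05 * grad(W, X)
after = loss(W, X)
```

A real implementation would use backprop in a framework rather than finite differences; the point is only the loss shape: reconstruction plus sparsity.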


A lot of 2015-2016 work can be seen as addressing the latter two. For memory, all the work on neural programming, soft and hard attention, content-addressable memory and 'memory networks'. For unsupervised learning, adversarial networks have spawned a whole bunch of papers with OpenAI's latest batch being quite exciting.
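The "content-addressable memory" idea behind soft attention and memory networks is compact enough to sketch. This is a hand-rolled illustration with made-up keys and values, not any particular paper's architecture: a read weights each memory slot by its similarity to a query, so the whole lookup stays differentiable.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_read(query, keys, values):
    """Soft, content-based read: weight each memory slot by its
    similarity to the query, then return the weighted-average value."""
    scores = keys @ query        # dot-product similarity per slot
    weights = softmax(scores)    # a differentiable "address"
    return weights @ values, weights

# Three memory slots with 4-dim keys and 2-dim values (illustrative).
keys = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 1., 0.]])
values = np.array([[10., 0.],
                   [0., 10.],
                   [5., 5.]])

# A query close to the first key mostly retrieves the first value.
query = np.array([5., 0., 0., 0.])
read, weights = attention_read(query, keys, values)
```

"Hard" attention replaces the softmax average with a sample from those weights, which is why it needs different (non-differentiable) training tricks.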


What's a CRF?


Conditional random field. It's a model that comes from the probabilistic/statistical side of ML, which was the "hot" ML area before deep learning.
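For the linear-chain case (the one usually stacked on a deep net for sequence labeling), the key machinery is the forward algorithm for the normalizer Z. A minimal sketch, with random scores standing in for what a deep net would emit; the brute-force version is only there to check it:

```python
import numpy as np
from itertools import product

def logsumexp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def log_partition_forward(unary, trans):
    """log Z over all label sequences in O(T * K^2).
    unary: (T, K) per-position label scores (e.g. a deep net's outputs)
    trans: (K, K) transition scores between adjacent labels."""
    T, K = unary.shape
    alpha = unary[0].copy()
    for t in range(1, T):
        alpha = np.array([unary[t, j] + logsumexp(alpha + trans[:, j])
                          for j in range(K)])
    return logsumexp(alpha)

def log_partition_brute(unary, trans):
    """Same quantity by enumerating all K^T sequences (check only)."""
    T, K = unary.shape
    scores = []
    for labels in product(range(K), repeat=T):
        s = sum(unary[t, labels[t]] for t in range(T))
        s += sum(trans[labels[t - 1], labels[t]] for t in range(1, T))
        scores.append(s)
    return logsumexp(np.array(scores))

rng = np.random.default_rng(0)
unary = rng.normal(size=(4, 3))   # 4 positions, 3 labels (illustrative)
trans = rng.normal(size=(3, 3))
lz = log_partition_forward(unary, trans)
```

"CRF on top of a deep net" then just means the unary scores come from the network and everything, transitions included, is trained end to end against the CRF's sequence-level likelihood.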



Really excellent talk, thanks for posting the video! That was a dense presentation, jam-packed with information and new papers and ideas. Anybody know of a 2016 version of this? I know CVPR and ICML just happened, and I'm not sure their talks and presentations are up yet, but this field is moving at absolutely lightning speed and I'd be interested to see updates on the techniques presented here.


Video is not available.




