
Obstacles on the Path to AI - evc123
https://drive.google.com/file/d/0BxKBnD5y2M8NbWN6XzM5UXkwNDA/view?pli=1
======
vonnik
"There is no way in hell that you can learn billions of parameters with RL." I
really love LeCun's provocative stances, but I get suspicious when people talk
about impossibilities. RL is making huge strides. People adopted the same tone
with neural nets years ago, and LeCun proved them wrong...

~~~
paulsutter
He's saying you won't learn billions of parameters with RL /alone/ because
"one scalar reward per trial isn't going to cut it".

I find that convincing and a key insight. RL is going to be fundamental to
AGI, but he's saying curiosity / unsupervised learning will be necessary. And
I say this as a big believer in the need for more work on RL.

~~~
vonnik
Yes, I should have added the context about the scalar reward, but I'm not sure
why that changes anything.

This may seem like a naive question, but it's sincere: What makes a scalar
reward less effective at modifying a Q function than a scalar error that's
used in backprop and assigned to a neural network's coefficients?

~~~
karpathy
supervised learning: (after millions of operations you had just performed) out
of 1000 you predicted label #136, but actually the true result for this input
was label #25.

reinforcement learning: (after millions of operations you had just performed)
out of 1000 you predicted label #136. That's not right, but I won't tell you
what you should have done. Also, it could have been right, but maybe you had
screwed up something in one of your last 100 predictions and I won't tell you
what it was. Good luck.
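karpathy's contrast can be sketched numerically. A toy sketch (not from the slides; the class counts just echo his example): with supervised learning, the cross-entropy gradient gives every one of the 1000 outputs its own signal, while a REINFORCE-style policy gradient extracts only one scalar's worth of information from the trial.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 1000
logits = rng.normal(size=n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(logits)

# Supervised: the true label (#25) is revealed, so cross-entropy gives
# each of the 1000 logits its own gradient: dL/dlogits = p - onehot(y).
true_label = 25
grad_supervised = probs.copy()
grad_supervised[true_label] -= 1.0

# RL (REINFORCE): we sampled action #136 and got a scalar reward of -1
# ("that's not right"). The policy-gradient estimate is
# reward * d(log pi(a))/dlogits = reward * (onehot(a) - p).
# It only says "do #136 a bit less often" -- nothing about what was right.
sampled_action = 136
reward = -1.0
onehot = np.zeros(n_classes)
onehot[sampled_action] = 1.0
grad_rl = reward * (onehot - probs)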

~~~
paulsutter
To move the analogies into general intelligence, start with the example of
Elon Musk (a top notch example of GI). What drives Elon Musk? In interviews
he's said he wants to be useful. He's achieving that goal in a big way.

Take the moment he decided to build a rocket. It was after he had a difficult
meeting to buy a Russian missile. He quickly roughed out, from first
principles, that he should be able to build one himself. The /drive/ to make
that analysis was in pursuit of his goal (and required reinforcement
learning).

But what lifelong process filled his brain with all the complex and nuanced
information needed to rapidly draw that conclusion? His curiosity, aka
unsupervised learning, which led him to learn so many things over the years
that culminated in that moment.

For this reason, I might make the analogy that RL is the "conductor of the
symphony", rather than calling it the "cherry on top" as LeCun does here.

------
tomp
Every time I read something like this, I get sad that I don't understand most
of it.

But then I get happy because at least I understand a little bit :)

~~~
pzone
Reading some random slides from a specialist talk is not exactly the easiest
way to learn this stuff.

~~~
draven
Most often the slides alone are useless; they are only meant to support the
talk. Talk recordings (or transcripts) are more useful, since from the slides
you can only get a high-level idea of what the talk is about.

It would be awesome to have a platform that recommends what to learn, read,
or take as an online course in order to understand a given talk.

~~~
mkorfmann
Yes, absolutely! A platform where not only answers are praised, but curious
questions too.

------
thangalin
The talk:

[http://techtalks.tv/talks/whats-wrong-with-deep-learning/616...](http://techtalks.tv/talks/whats-wrong-with-deep-learning/61639/)

~~~
_0ffh
Thanks for the link, but goodness gracious, 6 GB for one hour of low quality
video? What are these people thinking, that internet bandwidth is bestowed on
all of us from on high?

------
grondilu
That's only the slides. Is there a video of the talk? (Assuming there is a
talk, that is.)

~~~
haddr
I'd love to see the video too... LeCun is probably the most interesting
person to hear on this topic...

~~~
crazypyro
In case you didn't see the comment originally (I didn't either), someone else
posted a link.

[http://techtalks.tv/talks/whats-wrong-with-deep-learning/616...](http://techtalks.tv/talks/whats-wrong-with-deep-learning/61639/)

------
vkhuc
Although I'm working on deep neural nets, this material is too advanced for
me. Looks like deep nets + Bayesian reasoning is the next big thing.

~~~
imh
Pick up this book. It's fantastic.

[https://mitpress.mit.edu/books/probabilistic-graphical-model...](https://mitpress.mit.edu/books/probabilistic-graphical-models)

~~~
draven
The title sounds familiar; it's also a course on Coursera:

[https://www.coursera.org/course/pgm](https://www.coursera.org/course/pgm)

Last session was in 2013 though.

------
ankurdhama
The real obstacle on the path to AI is that we don't even have the right
questions to ask; people are looking for answers to the wrong questions and
announcing them as the next big thing in AI.

------
KasianFranks
I like the focus on vector space.

