
Andrew Ng on What's Next in Deep Learning [video] - sherjilozair
https://www.youtube.com/watch?v=qP9TOX8T-kI
======
sherjilozair
Well, what I really wanted to post was his quote on "Fear of AI", not the
whole video. The quote starts here:
[https://youtu.be/qP9TOX8T-kI?t=3836](https://youtu.be/qP9TOX8T-kI?t=3836)

~~~
amsilprotag
His position seems to be similar to MIT engineer David Mindell's, who recently
wrote _Our Robots, Ourselves_ [1]. It seems to me that the more hours one
spends solving real problems with AI, the less likely they are to believe an
apocalypse is imminent. Conversely, the more dependent a prognosticator is on
book or film sales, the more likely they are to predict or depict a near-term
AI apocalypse.

[1] Google Talk:
[https://www.youtube.com/watch?v=4nDdqGUMdAY](https://www.youtube.com/watch?v=4nDdqGUMdAY)

~~~
IshKebab
Probably because they are the only ones that see the hundreds of hours of work
that go into making task-specific machine learning tools.

We are still very far from a system that can _really_ learn in a general way,
but I guess it might not look like that to people who only see the results.

------
richardkeller
Seeing this type of research really inspires me to get involved in deep
learning, at least to the point where I can accomplish some sort of basic task
with it. As somebody with an undergraduate computer science degree, but only a
very basic theoretical understanding of AI, what resources are available for
me to understand deep learning and build something useful?

~~~
yodsanklai
Andrew Ng's course on Coursera?

~~~
davidbarker
I signed up for this last night. Really looking forward to it.

[https://www.coursera.org/learn/machine-learning](https://www.coursera.org/learn/machine-learning)

~~~
unsignedint
His machine learning course is amazing! I really enjoy the lectures and feel
like I'm learning a lot. I already understand some statistical concepts far
better than I ever have in the past, and I'm only three weeks into the course!

------
IshKebab
That was really interesting, especially their audio->letters approach to
speech recognition.
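To make the "audio->letters" idea concrete, here is a minimal, hypothetical sketch of the greedy decoding step used in CTC-style end-to-end models (the general approach behind character-level speech recognition; the symbol choices and helper name are my own, not from the talk). The network emits one character or a special "blank" per audio frame, and the decoder collapses that frame sequence into text with no phoneme dictionary involved:

```python
# Hypothetical sketch: CTC-style greedy collapse for an "audio -> letters"
# model. Assumes the network has already produced one label per frame,
# where "_" is the CTC blank symbol (a common but not universal convention).

BLANK = "_"

def ctc_greedy_collapse(frame_labels):
    """Merge consecutive duplicate labels, then drop blanks.

    e.g. '__hhe_ll_lo_' -> 'hello'
    (the blank between the two l's is what keeps them distinct).
    """
    out = []
    prev = None
    for ch in frame_labels:
        if ch != prev:          # merge runs of the same label
            if ch != BLANK:     # discard the blank separator
                out.append(ch)
        prev = ch
    return "".join(out)

print(ctc_greedy_collapse("__hhe_ll_lo_"))  # -> hello
```

In a real system the per-frame labels come from taking the argmax of the network's output distribution at each frame; the collapse rule is what lets the model output letters directly instead of phonemes.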

I also agree with him that phonemes aren't really a real thing - in the same
way that species aren't. It definitely makes more sense to have the neural
network learn its own representation of sounds, rather than prescribe a
representation made up by linguists.

I mean, the phoneme model is obviously pretty close to reality - Google use it
- but neural networks can clearly learn a closer representation.

Anyway, really impressive results!

~~~
nshm
It is not necessary to use phonemes in the model, but dropping them greatly
restricts the model's ability to learn, in particular to handle many special
cases like rare foreign words. You need much more data to train the model, and
you essentially lose the ability to quickly teach the system a new word by
specifying its pronunciation. It might sound great as advertising for
computing-power factories like Baidu, but speech recognition experts are not
that enthusiastic about this approach.
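The "teach the system a new word" point can be illustrated with a toy lexicon (entirely hypothetical; the dictionary layout and ARPAbet-style phone symbols are my own illustration, not any particular recognizer's format). In a phoneme-based system, adding a word is a one-line dictionary edit, whereas an end-to-end letter model would need new training audio:

```python
# Toy illustration of a pronunciation lexicon: word -> phone sequence.
# Phone symbols loosely follow ARPAbet; exact pronunciations are assumptions.
lexicon = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def add_word(lexicon, word, phones):
    """Register a new word by spelling out its pronunciation explicitly."""
    lexicon[word] = list(phones)

# A rare foreign word with a non-obvious spelling-to-sound mapping can be
# added instantly, with zero retraining:
add_word(lexicon, "gnocchi", ["N", "Y", "OW", "K", "IY"])
print(sorted(lexicon))  # -> ['gnocchi', 'hello', 'world']
```

This is the flexibility the comment says is lost when the model maps audio straight to letters.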

It is pretty disappointing that modern machine learning "experts" advertise
their improved results with questionable comparisons and do not care about
machine learning theory; in particular, they never consider a model's ability
to generalize, its robustness to noise, and related properties, which should
be the primary subjects of research.

