Andrew Ng on What's Next in Deep Learning [video] (youtube.com)
76 points by sherjilozair on Dec 12, 2015 | 11 comments

Well, what I really wanted to post was his quote on "Fear of AI" rather than the whole video: https://youtu.be/qP9TOX8T-kI?t=3836

His position seems to be similar to MIT engineer David Mindell's, who recently wrote Our Robots, Ourselves[1]. It seems to me that the more hours one spends solving real problems with AI, the less likely they are to believe an apocalypse is imminent. Conversely, the more dependent a prognosticator is on book or film sales, the more likely they are to predict or depict a near-term AI apocalypse.

[1] Google Talk: https://www.youtube.com/watch?v=4nDdqGUMdAY

Probably because they are the only ones that see the hundreds of hours of work that go into making task-specific machine learning tools.

We are still very far from a system that can really learn in a general way, but I guess it might not look like that to people who only see the results.

I think the concern is less that an AI apocalypse is imminent and more that if it does happen, we won't see it coming and it will happen so quickly we won't be able to do anything to stop it. So it's important to try to avoid it ahead of time.

He may not have meant to imply this, but I think he's wrong on at least one of the points.

He says there's no way to work productively on the problem. Perhaps from one perspective he's right: there is no imminent threat of overpopulation on Mars, so probably no one is going to pay him a salary to work on it. But in general I think preventative maintenance is more important than dealing with fires once they actually break out.

If we know "overpopulation" is a problem in modern civilization, why not work on defining measures to prevent it in the future instead of letting history repeat itself?

And as far as an AI apocalypse is concerned, I'm not concerned from one perspective but somewhat concerned from another. I don't think we need to worry that AI will become sentient in the near future. I do think we need to be concerned about humans putting too much trust in AI to solve or automate critical problems for us. As a perfect example, I wouldn't recommend putting the decision to launch a nuclear weapon in the hands of some AI system. The AI wouldn't launch nuclear bombs out of evil intent, but the inadequacy of its ability to judge when a launch would be appropriate would certainly cause a lot of problems the world would need to deal with in the aftermath (if anyone were to survive).

Seeing this type of research really inspires me to get involved in deep learning, at least to the point where I can accomplish some sort of basic task with it. As somebody with an undergraduate computer science degree, but only a very basic theoretical understanding of AI, what resources are available for me to understand deep learning and build something useful?

Andrew Ng's course on Coursera?

I signed up for this last night. Really looking forward to it.


His machine learning course is amazing! I really enjoy the lectures and feel like I'm learning a lot. So far I understand some statistical concepts better than I ever have in the past (and I'm only three weeks into the course!).

That was really interesting, especially their audio->letters approach to speech recognition.

I also agree with him that phonemes aren't really a real thing - in the same way that species aren't. It definitely makes more sense to have the neural network learn its own representation of sounds, rather than prescribe a representation made up by linguists.

I mean, the phoneme model is obviously pretty close to reality - Google uses it - but neural networks can clearly learn a closer representation.

Anyway, really impressive results!
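
For anyone curious what "audio->letters" means in practice: end-to-end systems like the one described typically emit one character (or a blank symbol) per audio frame and then collapse that frame-level output into text, e.g. with CTC-style greedy decoding. A minimal sketch of just the collapse step (the input string is a made-up example, not real model output):

```python
def ctc_greedy_decode(frame_labels, blank="_"):
    """Collapse per-frame character predictions into a transcript:
    merge consecutive repeats, then drop the blank symbol."""
    out = []
    prev = None
    for ch in frame_labels:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

# Frame-level output "hh_e_ll_llo" collapses to "hello":
# repeated chars merge unless separated by a blank ("ll_ll" -> "ll").
print(ctc_greedy_decode("hh_e_ll_llo"))
```

The blank symbol is what lets the model output the same letter twice in a row ("ll") by inserting a separator between the two runs.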

It is not necessary to use phonemes in the model; however, dropping them greatly restricts the model's ability to learn, in particular to learn the many special cases like rare foreign words. You need much more data to train the model, and you basically lose the ability to quickly teach the system a new word by specifying its pronunciation. It might sound great as advertising for computing-power factories like Baidu, but speech recognition experts are not that enthusiastic about this approach.
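
The "quickly teach the system a new word" point refers to the pronunciation lexicon that phoneme-based recognizers use. A toy sketch of the idea (the words and ARPAbet-style symbols here are illustrative, not from any real system): adding a word is just a dictionary update, whereas a grapheme-level end-to-end model would need new training data.

```python
# A phoneme-based recognizer maps words to phoneme sequences
# via a pronunciation lexicon (ARPAbet-style symbols shown).
lexicon = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def add_word(lexicon, word, phonemes):
    """Teach the system a new word by specifying its pronunciation --
    no retraining of the acoustic model required."""
    lexicon[word] = phonemes

# A rare foreign word absent from the training data:
add_word(lexicon, "baidu", ["B", "AY", "D", "UW"])
print(lexicon["baidu"])
```

An end-to-end letters-only model has no such hook: its spelling knowledge lives in the network weights.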

It is pretty disappointing that modern machine learning "experts" advertise their improved results with questionable comparisons and do not care about machine learning theory, in particular never considering a model's ability to generalize, its robustness to noise, and related properties that ought to be primary subjects of research.
