Google Talk: https://www.youtube.com/watch?v=4nDdqGUMdAY
We are still very far from a system that can really learn in a general way, but I suppose it might not look that way to people who only see the results.
He says there's no way to work productively on the problem. And perhaps from one perspective he's right: there is no imminent threat of overpopulation on Mars, so probably no one is going to pay him a salary to work on it. But in general I think preventative maintenance is more important than fighting fires once they've actually broken out.
If we know "overpopulation" is a problem in modern civilization, why not work on defining measures to prevent it in the future instead of letting history repeat itself?
And as far as an AI apocalypse is concerned, I'm not worried from one perspective but somewhat worried from another. I don't think we need to worry about AI becoming sentient in the near future. I do think we need to worry about humans putting too much trust in AI to solve or automate critical problems for us. As an obvious example, I wouldn't recommend putting the decision to launch a nuclear weapon in the hands of an AI system. The AI wouldn't be malicious if it launched inadvertently; rather, its inadequate understanding of when a launch would actually be appropriate would create enormous problems for the world to deal with in the aftermath (if anyone were to survive).
I also agree with him that phonemes aren't really real - in the same way that species aren't. It definitely makes more sense to let the neural network learn its own representation of sounds rather than prescribe a representation invented by linguists.
I mean, the phoneme model is obviously a pretty close approximation of reality - Google uses it - but neural networks can clearly learn a closer representation.
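To make that concrete, here's a minimal sketch (in PyTorch, purely as an illustration - an assumption on my part, not the actual Google pipeline) of the end-to-end idea: map spectrogram frames straight to characters with a CTC loss, so the network learns its own intermediate representation of sound instead of being trained against linguist-defined phoneme labels. All the names and shapes below are hypothetical.

    import torch
    import torch.nn as nn

    class EndToEndASR(nn.Module):
        # Hypothetical model: spectrogram frames in, character logits out.
        def __init__(self, n_mels=80, hidden=256, n_chars=29):
            # 29 classes: 26 letters + space + apostrophe + the CTC blank.
            super().__init__()
            self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                                   batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden, n_chars)

        def forward(self, mel):                  # (batch, time, n_mels)
            feats, _ = self.encoder(mel)         # learned representation of sound
            return self.classifier(feats)        # per-frame character logits

    model = EndToEndASR()
    ctc = nn.CTCLoss(blank=0)

    # Fake batch: 100 spectrogram frames, a 12-character transcript.
    mel = torch.randn(1, 100, 80)
    target = torch.randint(1, 29, (1, 12))       # character ids; 0 is reserved for blank
    log_probs = model(mel).log_softmax(-1).transpose(0, 1)  # CTC wants (time, batch, classes)
    loss = ctc(log_probs, target, torch.tensor([100]), torch.tensor([12]))
    loss.backward()                              # no phoneme labels anywhere in the training signal

The point is just that the supervision is text, not phonemes; whatever the encoder discovers about sounds is its own representation.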
Anyway, really impressive results!
It is pretty disappointing that modern machine learning "experts" advertise improved results based on questionable comparisons and show little interest in machine learning theory; in particular, they rarely consider a model's ability to generalize, its robustness to noise, and related questions that ought to be primary subjects of research.
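For what it's worth, the simplest version of the robustness check being asked for is cheap to run. A minimal sketch (hypothetical names, PyTorch only for illustration): evaluate the same trained model on the same test set under increasing Gaussian input noise and report the whole degradation curve rather than a single headline number.

    import torch

    def accuracy_under_noise(model, inputs, labels, sigmas=(0.0, 0.1, 0.3, 1.0)):
        # Re-score the same test set at several noise levels.
        model.eval()
        results = {}
        with torch.no_grad():
            for sigma in sigmas:
                noisy = inputs + sigma * torch.randn_like(inputs)
                preds = model(noisy).argmax(dim=-1)
                results[sigma] = (preds == labels).float().mean().item()
        return results  # a robustness curve, e.g. {0.0: 0.97, 0.1: 0.94, ...}

    # Toy demo with a stand-in linear classifier on random data.
    torch.manual_seed(0)
    model = torch.nn.Linear(20, 5)
    x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
    print(accuracy_under_noise(model, x, y))

Reporting a curve like that alongside the benchmark score would go a long way toward fixing the questionable comparisons.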