> Only at such a high level of abstraction as to be meaningless.
I'm not sure what this means, or why those abstractions would be meaningless. From Gabor filters up to concepts like "dog", the abstractions are quite meaningful (in the sense that they function well), even if they aren't interpretable to us.
> They are not. Hundreds of man years worth of engineering time go into each of those systems, and none of those systems generalizes to anything other than the task it was created for. That's nothing like human intelligence.
This isn't strictly true if we look at the ability to generalize as a sliding scale. The level of generalization has actually increased significantly from expert systems to machine learning to deep learning. We have not reached human levels of generalization, but we are approaching them.
Consider that DL can identify objects, people, and animals in unique photos it has never seen before, and that more generally the success of modern machine learning is its ability to generalize from training time to test time rather than relying on hand engineering for each new case. Newer work can even learn from just a few examples[0] and then generalize beyond that. Or take the Atari work from DeepMind, which generalizes across dozens of games; none of those networks were created specifically for Breakout or Pong.
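To make the train/test distinction concrete, here is a minimal sketch (using scikit-learn's bundled digits dataset, not any of the systems discussed above): the model is fit on one split of the data and then scored on examples it never saw during training, with no per-example hand engineering.

```python
# Minimal sketch of train-to-test generalization: fit on one split,
# evaluate on held-out examples the model has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Trained from random initialization ("from scratch"), no hand-built rules.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("accuracy on unseen examples:", clf.score(X_test, y_test))
```

The point of the sketch is only that the same learned parameters handle novel inputs at test time; nothing in it is specific to any one test image.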
It's also not entirely fair to hold the hundreds of man-years of engineering against these systems, considering most of them are trained from scratch (random initialization). Humans, by contrast, benefit from the preceding evolution, whose time scale far exceeds any human engineering effort. :)
[0] https://arxiv.org/abs/1605.06065