> And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.
If you have an evolutionary learning algorithm, you don't expect every branch to be equally capable; that doesn't mean we didn't get here along the same path.
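To make the "same algorithm, unequal branches" point concrete, here's a toy sketch (purely illustrative, nothing from the thread; every name in it is made up): one selection-and-mutation loop applied to a population still leaves surviving lineages with noticeably different fitness, even though all of them came out of the same process.

    import random

    def fitness(genome):
        # Toy fitness: how close the genome's sum is to an arbitrary target.
        return -abs(sum(genome) - 10.0)

    def evolve(pop_size=50, genome_len=5, generations=100, mutation_sd=0.3):
        # Every lineage starts from the same random-initialization procedure.
        population = [[random.gauss(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, refill the population with mutated copies.
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            children = [[g + random.gauss(0, mutation_sd) for g in parent]
                        for parent in survivors]
            population = survivors + children
        return population

    if __name__ == "__main__":
        scores = sorted(fitness(g) for g in evolve())
        # Same algorithm, same path, yet the branches it leaves behind are not
        # equally capable: the best and worst survivors can differ noticeably.
        print("worst branch:", round(scores[0], 3),
              "best branch:", round(scores[-1], 3))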
I basically agree that humans have some "innate" ability, but this innate ability exists as the result of an evolutionary process.
> Those abilities aren't learned, they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pacman, and easily understand what the objective of the game is almost right away.
Focusing on the single experience with a single Atari game sort of misses the point, which is all the time we spent learning up to that moment and all the evolution that came before it.
It also sort of misses the point that Atari games are explicitly designed to be easily understandable by humans; they're not something that just appeared and that we happened to be good at.
> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.
But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.
Maybe evolution provided us with something additional that is lacking in the current generation of DL. And there are AI researchers who think that machines need ontologies to understand the world, and that it's not reasonable to expect a machine to learn everything from scratch, because the world is too complex for that.
It's not reasonable to expect AlphaGo to replay evolution in order to gain the ability to do abstract reasoning.
> But if the argument is that AlphaGo is the right approach to creating an AGI, then we should at some point expect it to learn how to recognize the goal of various tasks without a huge amount of training.
I never said anything about AlphaGo or AGI. I said that humans are not as good at generalizing from few examples as people would like to believe.