where a computer can do anything a human can do, but better, doesn't seem far away.
It's far away. DeepMind and chums are building systems that are very good at tuning some variables such that we end up with a set of calculations that perform well at tasks with small, well-defined input sets, fixed rules, very small sets of legal outputs, and simple measures of "goodness". A chum of mine working there says that this view isn't uncommon within the company, and that many people there believe that the current path isn't leading towards any kind of general "intelligence", and isn't meant to. If we weren't already stuck with the term "AI", we'd be calling these something like "rapidly iterated and highly tuned algorithm sets for very specific, low-information, low-interpretation, fixed-rule systems".
Don't get me wrong; they're impressive, and they've got the potential to be useful tools for certain kinds of task. Are there some tasks currently being done by humans that a suitably tuned set of automated calculations of this nature could do better? Sure.
Some (small) number of humans will own the capital and resources on which the AIs run, in turn shackling and exploiting the remainder of the human race.
The rest of the human race will be in full support of this state of affairs, as some (small) portion of the AIs will be charged with manipulating public opinion and securing/hiding the resources of the small portion of humans mentioned above.
If you want to see the future, imagine a golden iPhone, "Siri, stamp on this poor person's face", forever.
While extremely impressive, AlphaGo is still a one-trick pony. We won't be coppertops for a while, yet.
But what does the future look like for humans when we're no longer needed? As it has always been, that depends largely on the compassion of those humans with great wealth and power.
Good thing there aren't flying weapons to worry about. Oh wait...