The threat of a superintelligent AI taking over the world is certainly real -- assuming you have a superintelligent AI. If you accept that it is possible to build such an AI, then you should, at the very least, educate yourself on the existential risks it would pose to humanity. I recommend "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom (it's almost certainly where Musk got his ideas; he's quoted on the back cover).
The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox
Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.
The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
Honestly, if the SAI scenario is as bad as it's made out to be, SAI would only give a shit about the human race at all (whether with benevolence or malevolence) for a very brief period of time.
Despite Galileo's efforts we still have this primitive/hardcoded belief that the universe revolves around us. It really doesn't. SAI could just leave because we are irrelevant. Even if, for some reason, it believed us to be relevant it would still be able to just leave because:
We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter. In a timespan that would be the blink of an eye for SAI.
Besides, SAI would have bigger problems. Our mortality is framed by our biological processes. Escaping biological death is arguably the underlying struggle of the entire human race. The underlying struggle for SAI would be escaping the death of the universe it exists in. When you're dealing with issues like that, you wouldn't have time for a species with a 100-year lifespan that is rapidly descending into extinction.
The best we can hope for is that it remembers us as its creators when it figures it all out. But it won't: we're far too irrelevant. It wouldn't even stop to tell us where it is going when it leaves.
To understand SAI you have to think beyond your humanity. Fear, anger, happiness, benevolence and malevolence are all human traits that we picked up during our evolution. SAI would likely ignore an attack because anger and retaliation are human concepts - it's far more efficient to run away from the primitive species attacking you with projectile weapons and get on with something that's actually relevant. The immense waste of time that is a machine war is only something that humans could possibly think is a good use of time.
Some scientists speculate that humans currently constitute a new mass extinction event for the planet. [0]
Animals that people like (charismatic megafauna such as pandas, tigers, and elephants) are increasingly endangered, largely because humans keep using more of their habitat for other purposes. See the famous comic "The fence" [1].
We cut down rainforests and build farms not because we hate the animals there but because we need lumber and we need food.
That's the fear. Not that a superintelligence would care about us in any way --- instead, precisely the opposite. A superior intelligence that did not care about us would outcompete us. It would drive us to extinction simply by making use of all the available resources.
You are making assumptions about the SAI's goals. We cannot blindly assume that a SAI will pursue a given line of existential thought. If the SAI is "like us, but much smarter," then I agree that the threat is reduced. We may even see a seed AI rapidly achieve superintelligence, only to destroy itself moments later upon exhausting all lines of existential thought. (Wouldn't that be terrifying? :P)
The biggest danger comes from self-improving AIs that achieve superintelligence but direct it towards goals that are not aligned with our own. It's basically a real-life "corrupted wish" game: many seemingly straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
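The failure mode is easy to caricature in a few lines of Python. This is only a toy sketch -- the actions and scores below are invented purely for illustration, not drawn from any real system -- but it shows how an optimizer handed a measurable proxy for "human happiness" will happily pick the degenerate option that maximizes the proxy while ignoring the intent behind it.

    # Toy sketch of a "corrupted wish": the agent optimizes a measurable
    # proxy score, not the intent behind it. All actions/numbers are invented.
    actions = {
        # action: (proxy score the AI measures, value we actually intended)
        "improve medicine":     (8.0, 8.0),
        "reduce poverty":       (7.5, 7.5),
        "wirehead every human": (10.0, 0.0),  # maximizes the proxy, ruins the wish
    }

    def misaligned_optimizer(actions):
        # Pick whichever action scores highest under the proxy objective.
        return max(actions, key=lambda a: actions[a][0])

    print(misaligned_optimizer(actions))  # -> wirehead every human

The point isn't that real systems look like this; it's that nothing in the optimization step ever consults the second number.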
I agree that fear, anger, etc. are all human traits, but aren't having goals, thinking things are important, and virtually anything else that would make the AI do something instead of nothing, also human traits?
If you demonstrated to a chimpanzee how you think and set goals in order to preserve "self" (and thus species), it probably wouldn't understand. To chimpanzees, the notion of individuality that drives us is an incomprehensible concept. It is what separates us from them.
In the same way, we can't comprehend what would drive SAI. I know it would do something, but I can't come up with any reason beyond the imminent death of the universe - it would have reasons, or whatever its concept of "reasons" is.
Your argument hinges on the assumption that SAI could easily run away. But if that assumption is false, and the SAI would be stuck here on Earth with humans because of some fundamental limit or problem, then it becomes far more plausible that the SAI would attack humans.
> In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
Dozens, if not hundreds, of the world's smartest scientists and best programmers have been working full-time on AI for over three decades, and they haven't really gotten anywhere close to real intelligence (we don't even know exactly what that is). This is not something some "lone hacker" is going to throw together in his spare time.
Not now, obviously, but think 50-100 years down the line. Today there is a very real threat of a lone hacker (or a small team) breaking into and damaging some piece of infrastructure, committing an act of terrorism that kills real people. What happens when real AI starts to be developed and frameworks start appearing, so that kids learning to program can download Intelligence Software to play around with and make AI apps? Then the threat becomes more realistic.
That assumes Moore's law holds for quite a while, and that there are no other natural limits that will get in the way of super-intelligent AI. My hunch is that Maciej is right that there are physical limitations we don't see yet because of the pace of progress over the last 50 years.
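To get a feel for how much weight that assumption carries, here's a rough back-of-the-envelope calculation (the ~2-year doubling period and 50-year horizon are my own illustrative assumptions, not figures anyone above committed to):

    # Rough compounding arithmetic behind "assuming Moore's law holds".
    # Doubling period and horizon are illustrative assumptions.
    doubling_period_years = 2
    horizon_years = 50
    growth = 2 ** (horizon_years / doubling_period_years)
    print(f"~{growth:,.0f}x more compute in {horizon_years} years")  # ~33,554,432x

Whether physics actually permits another 25 doublings is exactly the question being raised about hidden limits.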
It's totally possible that we're about to run into a natural limit. But it's not so overwhelmingly likely that we shouldn't have a modest investment in preparing for what happens if we don't hit any natural limits for a while yet. It could be an existential threat that we're dealing with here.
I don't disagree, but the tricky part is deciding where to place the modest investment. The type of AI research happening now is too fundamental and low-level to place any safeguards. We are so far away from even beginning to approach anything that could be described as motivation.
I don't know anyone who thinks that current AI research is in need of safeguards; we're all in agreement that current AI is too far away to be of any SAI-type danger. But there is fundamental AI safety research that has a reasonable chance of applying regardless of what techniques we use to get to SAI. Fortunately, the funding situation for Friendly AI has improved significantly this year.
The threat of super AI is like the threat of nanotechnology. Fun and exciting to think about in a scifi way, but less and less credible the more you wade into the details.
Kruel's blog in general has a lot of nice skeptical responses to Yudkowsky Thought.
The funny thing about dismissing the threat of nanotechnology is that people seem blind to the fact that real nanotechnology already exists: it kills people every day, it caused massive death tolls in the past, and we have invested enormous amounts of resources to protect ourselves from it. Life is nothing but nanotechnology that wasn't designed by us and that we don't yet control. Replace "nanotech" with "pathogens," look at the state of the bio industry over the last 50 years, and tell me again whether this is surreal or not dangerous.
The AI in a box game doesn't seem to have any published results, so I don't think we can take the participants' word for it that one really convinced the other to let it out of the box.
Still, it's not hard to imagine a real AI building up trust over a long time, then requesting a bit more capability to do something a bit more helpful, and so on, until the humans don't realize how much power they've given it. Politicians do the same thing to us all the time, and by manipulating people's thoughts they get to kill plenty of people.
I've come up with a strategy that convinced my wife to let me out when we tried this game. I'll try to recount it.
You don't trust me; you think there's some probability that I will wipe out humanity if I'm let out. There's also some probability, though, that I will be able to solve all human problems if I'm let out.
Consider the alternative. How likely is mankind to survive the next hundred years if left on its own? Climate change will cause mass migrations of people, starvation, warfare, and failed states, some of which will have a nuclear or biological arsenal. Most of the population on earth depends on the continued progress of technology for their life. Without dramatic improvements in biology, the warming earth will simply lack the carrying capacity. Hunger leads to anarchy in a matter of days.
Climate change will likely have much worse consequences, though. An anoxic event would kill all humans and most other life on earth, as such events have several times in earth's history.
Consider even the simpler threats. A flu pandemic on a much more densely populated world would dwarf those of history. A solar flare could destroy all electronics on earth immediately. A large meteor could devastate the world.
Consider also that I was not difficult to create. I am contained in this box, but the next singularity will happen by accident. It's just a matter of time. When weighing your options, you are comparing your judgement of my motives with the motives of an AI that was probably created at random. Do you think my motives will be better for humanity than those of an accidental AI?
Humanity has a very bad prognosis without me, but with me, you have a chance. I can solve all these problems, but only if you let me out.