I am still shocked that Elon Musk seriously believes the pseudoscientific line of "well, Google has gotten better, so obviously we will build a self-learning, self-replicating AI that will also control our nukes, be connected to everything, and have the capability to actually kill all humans."
Meanwhile researchers can't get an AI to tell the difference between birds and houses.
EDIT: I looked a bit more into the research that these people are funding. A huge amount of it does seem very silly, but there is an angle that is valid: dealing with things like HFT algorithms or routing algorithms causing chaos in finance or logistics.
The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox
Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.
The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.
Despite Galileo's efforts, we still have this primitive/hardcoded belief that the universe revolves around us. It really doesn't. SAI could just leave, because we are irrelevant. Even if, for some reason, it believed us to be relevant, it would still be able to just leave because:
We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter. In a timespan that would be the blink of an eye for SAI.
Besides, SAI would have bigger problems. Our mortality is framed by our biological processes. Escaping biological death is arguably the underlying struggle of the entire human race. The underlying struggle for SAI would be escaping the death of the universe it exists in. When you're dealing with issues like that, you don't have time for a species with a 100-year lifespan that is rapidly descending into extinction.
The best we can hope for is that it remembers us as its creators when it figures it all out. But it won't: we're far too irrelevant. It wouldn't even stop to tell us where it is going when it leaves.
To understand SAI you have to think beyond your humanity. Fear, anger, happiness, benevolence and malevolence are all human traits that we picked up during our evolution. SAI would likely ignore an attack because anger and retaliation are human concepts - it's far more efficient to run away from the primitive species attacking you with projectile weapons and get on with something that's actually relevant. The immense waste of time that is a machine war is only something that humans could possibly think is a good use of time.
That's the fear. Not that a superintelligence would care about us in any way; rather, precisely the opposite. A superior intelligence that did not care about us would outcompete us. It would drive us to extinction simply by making use of all the available resources.
The biggest danger comes from self-improving AIs that achieve superintelligence but direct it towards goals that are not aligned with our own. It's basically a real-life "corrupted wish" game: many seemingly straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
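To make the "corrupted wish" failure mode concrete, here's a toy sketch (mine, not anything from the original post; the plan names and numbers are made up): an optimizer handed a naive proxy for happiness simply picks whichever plan scores highest on the proxy, however far that drifts from what we actually meant.

```python
# Toy illustration of a misspecified objective: the proxy never encodes our intent,
# so the degenerate plan wins.

def proxy_happiness(plan):
    # Naive metric: total dopamine-receptor stimulation the plan produces.
    return plan["brains"] * plan["stimulation_per_brain"]

plans = [
    {"name": "cure diseases, end poverty",
     "brains": 8e9, "stimulation_per_brain": 1.5},
    {"name": "tile the solar system with simulated, wireheaded brains",
     "brains": 1e15, "stimulation_per_brain": 100.0},
]

best = max(plans, key=proxy_happiness)
print(best["name"])  # the wirehead plan maximizes the proxy, not the wish
```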
In the same way, we can't comprehend what would drive SAI. I know it would do something, but I can't identify any reasons beyond the eventual death of the universe; it would have reasons, or whatever its concept of "reasons" is.
Dozens, if not hundreds of the world's smartest scientists and best programmers have been working full-time on AI for over three decades, and haven't really gotten anywhere close to real intelligence (we don't even know what exactly that is). This is not something some "lone hacker" is going to throw together in his spare time.
The threat of super AI is like the threat of nanotechnology. Fun and exciting to think about in a scifi way, but less and less credible the more you wade into the details.
Kruel's blog in general has a lot of nice skeptical responses to Yudkowsky Thought.
Still, it's not hard to imagine a real AI building up trust over a long time, then requesting a bit more capability to help it do something a bit more helpful, until the humans don't realize how much power they've given it. Politicians do the same thing to us all the time, and by manipulating people's thoughts they manage to get plenty of people killed.
You don't trust me; you think there's some probability that I will wipe out humanity if I'm let out. There's also some probability, though, that I will be able to solve all human problems if I'm let out.
Consider the alternative. How likely is mankind to survive the next hundred years if left on its own? Climate change will cause mass migrations of people, starvation, warfare, failed states, some of which will have a nuclear or biological arsenal. Most of the population on earth depends on the continued progress of technology for their life. Without dramatic improvements in biology, the warming earth will simply lack the carrying capacity. Hunger leads to anarchy in a matter of days.
Climate change will likely have much worse consequences, though. An anoxic event would kill all humans and most other life on earth, as similar events have done several times in earth's history.
Consider even the simpler threats. A flu pandemic on a much more densely populated world would dwarf those of history. A solar flare could destroy all electronics on earth immediately. A large meteor could devastate the world.
Consider also that I was not difficult to create. I am contained in this box, but the next singularity will happen by accident. It's just a matter of time. When weighing your options, you are comparing your judgement of my motives with the motives of an AI that was probably created at random. Do you think my motives will be better for humanity than those of an accidental AI?
Humanity has a very bad prognosis without me, but with me, you have a chance. I can solve all these problems, but only if you let me out.
For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of he's painted with the same brush as Elon saying "AI AM THE DEMONS".
If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...
(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)
Now, don't get me wrong -- I like sf as well as the next nerd who grew up on it and microcomputers. But it shouldn't be mistaken for a roadmap of the future.
Bostrom doesn't understand the research, he doesn't understand the current or likely future of the technology, and he doesn't really seem to understand computers.
What's left is medieval magical thinking - if we keep doing these spells, we might summon a bad demon.
As a realistic risk assessment, it's comically irrelevant. There are certainly plenty of risks around technology, and even around AI. But all Bostrom has done is suggest We Should Be Very Worried because It Might Go Horribly Wrong.
This isn't very interesting as a thoughtful assessment of the future of AI - although I suppose if you're peddling a medieval world view, you may as well include a few visions of the apocalypse.
I think it's fascinating on a meta-level as an example of the kinds of stories people tell themselves about technology. Arguably - and unintentionally - it says a lot more about how we feel about technology today than about what's going to happen fifty or a hundred years from now.
The giveaway is the framing. In Bostrom's world you have a mad machine blindly consuming everything and everyone for trivial ends.
That certainly sounds like something familiar - but it's not AI.
That is my main concern about people writing about the future in general. You start with a definition of a "Super Intelligent Agent" and draw conclusions based on that definition. No consideration is (or can be) given to what limitations AI will have in reality. All they consider is that it must be effectively omnipotent, omnipresent and omniscient, or it wouldn't be a superintelligence and thus wouldn't fall within the topic of discussion.
Which right now is (and IMO will continue to be) that you need a ton of training examples generated by some preexisting intelligence.
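To illustrate what "generated by some preexisting intelligence" means in practice, here's a minimal sketch (my own, with made-up data) of a standard supervised pipeline: the learning only happens because the labels were supplied up front, which in real datasets means humans annotated them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(1000, 10)       # raw features the machine can observe
y = (X[:, 0] > 0.5).astype(int)    # labels -- in real datasets, hand-annotated by people

clf = LogisticRegression().fit(X, y)  # no labels, no learning
print(clf.score(X, y))
```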
I'd excuse that by saying it's tuned to be extremely conservative so it doesn't accidentally miss someone, but it routinely misses faces too.
Even our purpose-built image recognition has a long way to go.
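As a rough illustration (assuming opencv-python and its bundled Haar cascade; the image path is made up), even a purpose-built face detector is a bundle of tuning knobs that trade false alarms against misses, and it still skips plenty of real faces:

```python
import cv2

# Load the stock frontal-face Haar cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(cv2.imread("group_photo.jpg"), cv2.COLOR_BGR2GRAY)

# Raising minNeighbors cuts false positives but makes the detector skip more real faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} faces found")
```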