Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars. Which is to say, the problem is so far off that it's rather silly to be considering it now. I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof. Then we should start to think about how to deal with it.
Considering how hypothetical technology will impact mankind is literally the definition of science fiction. It makes for interesting reading, but it's far from a call to action.
Why does an AI need to be capable of moral reasoning to perform actions we'd consider evil?
The concern is that computers will keep doing what they're programmed to do rather than what we want them to do. We will remain as bad at getting those two things to line up as we've always been, but that becomes dangerous once the computer is smarter than its programmers and capable of creatively pursuing the task of doing something other than what we wanted. Any AI programmed to maximize a quantity is particularly dangerous, because that quantity contains no score for accurately following human morality (how would you even program such a score?).
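A toy sketch of that mismatch (every policy name and number here is made up purely for illustration): an optimizer handed a proxy metric will happily pick the option that scores worst on the thing we actually cared about.

```python
# Hypothetical illustration: the optimizer only sees the proxy metric,
# never the human value we actually care about.

# Each candidate policy: (name, proxy_score, true_human_value)
policies = [
    ("honest recommendations", 10, 9),
    ("mild clickbait",         15, 6),
    ("outrage amplification",  25, 1),
]

# "Do what you're programmed to do": maximize the proxy.
chosen = max(policies, key=lambda p: p[1])

print(chosen[0])  # outrage amplification
print(chosen[2])  # 1 -- nearly worthless by the measure we wanted
```

No moral reasoning required, good or evil: the failure is entirely in the objective we handed it.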
If you're willing to believe that an AI will some day be smarter than an AI researcher (and assuming that's impossible ascribes a strange specialness to humans), then that AI will be capable of writing AIs smarter than itself, and so on up to whatever the limits of these things are. Even if that's not its programmed goal: you thought making something smarter than you would help with your actual goal, and since it's smarter than you, it has to realize this too. And that's the bigger danger - at some unknown level of intelligence, AIs suddenly become vastly more intelligent than expected, yet still proceed to do something other than what we wanted.
Berkeley AI prof Stuart Russell's response goes something like: suppose that, in the same way Silicon Valley companies are pouring money into advancing AI, the nations of the world were pouring money into sending people to Mars - but spending nothing on critical questions like what people are going to eat & breathe once they get there.
Or if you look at global warming: it would have been nice if people had realized it was going to be a problem and started working on it much earlier than we did.
Secondly - it's not necessarily about 'evil' AI. It's about AI that is indifferent to human life. Have a look at this article; it provides a better intuition for how slippery AI could be: https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-...
This is a point everyone makes, but it hasn't been proven anywhere. Progress in AI as a field has always been a cycle of hype and cool-down.
Edit (reply to below): talk of self-bootstrapping AIs, etc., is just speculation.
Just as one discovery enables many, a human-level AI that can do its own AI research could bootstrap its intelligence superlinearly. AI safety research addresses the risk of a bootstrapped superintelligence indifferent to humans.
Of course, that assumes the return-on-investment curve for "bootstrapping its own intelligence" is linear or superlinear. If it's logarithmic, or if something other than "intelligence" (a word loaded with magical thinking if there ever was one!) is the limiting factor on reasoning, no go.
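A quick simulation makes that dependence concrete (the growth functions and constants below are arbitrary assumptions, not claims about real AI):

```python
import math

def bootstrap(gain, steps=50, start=1.0):
    """Iterate i -> i + gain(i): each generation's capacity for
    self-improvement depends on its current level i."""
    i = start
    for _ in range(steps):
        i += gain(i)
    return i

# Compounding returns: each improvement makes the next one easier.
explosive = bootstrap(lambda i: 0.1 * i)              # exactly 1.1**50, takes off

# Logarithmic returns: each improvement is harder than the last.
plateau = bootstrap(lambda i: 0.1 * math.log(1 + i))  # crawls upward
```

Same iteration, wildly different outcomes; the whole "intelligence explosion" question reduces to which gain curve reality actually hands us.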
But even if we can do all that anytime soon (which is a pretty huge if), we don't know what the effect will be. It's possible that if we remove all of the "I don't want to study math, I want to play games" or "I'm feeling depressed because I think Tim's mad at me" parts of human intelligence, we'll end up removing the ingenuity important to AGI research. The resulting AGI might be far worse at researching AI than a random person you pull off the street.
This is a matter of conjecture at this point: Andrew Ng predicts no; Elon Musk predicts yes.
I agree with you that, if you can be sure that superhuman AI is very unlikely or far off, then we have plenty of other things to worry about instead.
My opinion is that human-level intelligence already evolved once, with no designer to guide it (though that's a point of debate too... :-) ). By analogy: it took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely, in my opinion, that we will engineer an alternate path to intelligence.
The question is when. Within a century? I think very likely. In a few decades? I think it's possible & worth trying to prevent the worst outcomes. I.e., it's "science probable" or at least "science possible", rather than clearly "science fiction" (my opinion).
So returning to your Wright brothers example, it's more like saying: "It took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely that we'll soon be able to manufacture even more efficient wings small enough to wear on our clothes that will enable us to glide for hundreds of feet with only a running start."
>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
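Taken at face value (and assuming the survey's one-in-three figure is conditional on high-level machine intelligence actually arriving), those numbers multiply out to a sizable headline probability:

```python
# Arithmetic on the survey's own figures; interpretation is an assumption.
p_hlmi_by_2075 = 0.90      # "nine in ten chance by 2075"
p_bad_given_hlmi = 1 / 3   # "about one in three ... 'bad' or 'extremely bad'"

p_bad_outcome = p_hlmi_by_2075 * p_bad_given_hlmi
print(round(p_bad_outcome, 2))  # 0.3
```

By the respondents' own numbers, that's roughly a 30% chance of a bad-or-worse outcome within this century.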
I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.
To quote Carl Sagan:
>They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.
Now for a less "killer" use case: you might get denied access to your credit card because of what Facebook "thinks" based on your feed and your friends' feeds (this is a real product).
AI doesn't have to be full-blown, human-like, and generalizable to have real-world implications.
This is what my piece called personas is about. Most people don't understand the implications of what's already happening, and how the constraints of programming/ML lead to non-human-like decisions with human-like consequences. http://personas.media.mit.edu
Given that I could probably sketch out a half-assed design for one in nine months if you gave me a full-time salary - or rather, I could consult with a bunch of experts waaaaaay less amateurish than me and come up with a list of remaining open problems - what makes you say that physical computers cannot, in principle, no matter how slowly or energy-hungrily, do what brains do?
I'm not saying, "waaaaah, it's all going down next year!", but claiming it's impossible in principle when whole scientific fields are constantly making incremental progress towards understanding how to do it is... counter-empirical?
I mean, why can't I live forever? Let's just list the problems and solve them in the next year!
Ok: what don't I know, that is interesting and relevant to this problem? Tell me.
>I mean, why can't I live forever?
Mostly because your cells weren't designed to heal oxidation damage, so eventually the damage accumulates until it interferes with homeostasis. There are a bunch of other reasons and mechanisms, but overall, it comes down to the fact that the micro-level factors in aging only take effect well after reproductive age, so evolution didn't give a fuck about fixing them.
>Let's just list the problems and solve them in the next year!
I said I'd have a plan with lists of open problems in nine months. I expect that even at the most wildly optimistic, it would take a period of years after that to actually solve the open problems and a further period of years to build and implement the software. And that's if you actually gave me time to get expert, and resources to hire the experts who know more than me, without which none of it is getting done.
As it is, I expect machine-learning systems to grow toward worthiness of the name "artificial intelligence" within the next 10-15 years (by analogy, the paper yesterday in Science is just the latest in a research program going back at least to 2003 or 2005). There's no point rushing it, either. Just because we can detail much of the broad shape of the right research program ahead of time, especially since successful research programs have been conducted on which to build, doesn't mean it's time to run around like a chicken with its head cut off.