As a side note, Quora has done some really excellent AMAs with a bunch of machine learning researchers, including Ng, Bengio, researchers at Google, and many more.
I read Andrew Ng's answer on Quora, but I'm not sure it really addresses the stronger arguments or concerns suggesting 'AI may be an existential threat to humanity' so much as a straw man of the 'evil super-intelligence'. With all respect, it's hard to call this an interesting or involved answer.
For example, I think a very serious concern is surrendering human decision-making to machine learning in areas where big datasets are widely applicable (crime, insurance, employment, immigration, finance). Losing control of our own systems, perhaps even to provably better decision making, is not at all a fantastical possibility. But if the response to this is that a malicious super-intelligence won't send a robot with automatic weapons to your house in the next hundred years, am I supposed to feel reassured?
You started by wondering why Ng didn't address the concerns. It's because experts don't tend to be concerned. The letter was your evidence to the contrary, and it is not convincing.
While not everyone who signed is necessarily concerned about 'existential dangers to humanity' or is an expert specifically in AI, for some signers both are definitely true. For example, you can cross-reference: https://en.wikipedia.org/wiki/Existential_risk_from_artifici...
One example I've seen: imagine we invented strong AI and gave it the goal 'eradicate cancer in humans'. The most efficient way to achieve that goal would probably be to vaporize every living human, thus eradicating cancer in humans for all time.
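To make that perverse-instantiation point concrete, here's a toy Python sketch; the objective function, cancer rate, and candidate "policies" are all invented for illustration:

    def cancer_cases(num_humans, cancer_rate=0.004):
        # Expected cancer cases for a given population size (made-up rate).
        return num_humans * cancer_rate

    # Candidate "policies", expressed as how many humans are left alive.
    candidates = [8_000_000_000, 1_000_000, 0]

    # Naively minimizing the literal objective picks the degenerate policy.
    best = min(candidates, key=cancer_cases)
    print(best)  # -> 0: zero humans means zero cancer, objective "achieved"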
>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
1) Bostrom surveyed participants in conferences, not people presenting papers. Anyone can attend a conference if they buy a ticket, expert or not.
2) Bostrom decided to survey the conferences "Philosophy and Theory of AI," "AGI," and "AGI-Impacts," all of them organized by himself or his colleague. He also emailed the mailing list of "Members of the Greek Association for Artificial Intelligence." None of these conferences is well known in ML (or technical, for that matter), and I have never heard of that association. Just about the only legitimate part of the methodology was emailing the top 100 AI researchers according to Microsoft Academic Search (only 29 responded).
It would carry much more weight with me if it were, say, a survey of people presenting papers at NIPS and ICML (the top two machine learning conferences). Maybe throw in ICLR and CVPR (the top deep learning conference and the top CV conference with an ML focus). Because, like it or not, the only reason people fear super-intelligence is the advances being made in machine learning, and by extension deep learning, not the (few) advances in classic AI.
And then they took the most popular guess.
I'm sorry, I don't believe that gives us any information about the future whatsoever. That methodology is nonsense.
The best way to do it would be to assume each expert is equally likely to be correct, and then estimate the probability from there. If half the experts say there is at least a 50% chance it will be developed by 2045, then 50% by 2045 is a reasonable pooled estimate. In fact, if anything it's an underestimate, since the other half of the experts still assign some nonzero probability to it happening before 2045, which pushes the pooled odds up.
If you want me to, I can do a simulation with the data given and find out what the exact numbers should be. But the numbers in that abstract are unlikely to be too far off, and if anything are an underestimate.
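For reference, here's roughly what such a simulation could look like in Python; this is a minimal sketch of the "each expert equally credible" pooling, with made-up per-expert probabilities standing in for the actual survey responses:

    import random

    # Hypothetical per-expert probabilities that HLMI arrives by 2045.
    expert_probs = [0.50, 0.50, 0.50, 0.50, 0.30, 0.10, 0.80, 0.20]

    # Linear opinion pool: average the experts' probabilities.
    pooled = sum(expert_probs) / len(expert_probs)
    print(f"Pooled P(HLMI by 2045) ~= {pooled:.2f}")

    # Equivalent Monte Carlo view: pick a random expert, then sample an
    # outcome from that expert's stated belief.
    trials = 100_000
    hits = sum(random.random() < random.choice(expert_probs) for _ in range(trials))
    print(f"Monte Carlo estimate   ~= {hits / trials:.2f}")

With these invented numbers both prints come out around 0.42, and dropping the most skeptical experts only pushes the pooled estimate up, which is the "if anything an underestimate" point above.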
As for whether it's valid to take the opinions of experts, that's a different issue. But 1) the predictions of experts are more likely to be correct than those of random HN commenters; 2) the predictions of a group of experts are more likely to be correct than those of a single expert, which is what this thread is about; and 3) uncertainty doesn't mean you can default to the hypothesis that it won't happen. If nothing else, a survey of predictions provides a good prior.
I will point out that there is no representative of the other side of that discussion there, namely the view that AI is a threat we should be devoting resources to solving.
If a self-improving AI could exist, its self-improvement would be exponential. So, just like a supercritical nuclear reactor, it could conceivably go from an undetectable level of activity to supreme intelligence within a very short span of time.
It would be like the nuclear engineer at Chernobyl saying "is only 100MW comrade, is of not problem".
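As a rough Python sketch of why that criticality analogy bites (all numbers invented): a constant multiplicative improvement per generation sits below the noise floor for most of its run, then blows past any fixed threshold within a few generations.

    # Made-up numbers: capability compounds by a constant factor per
    # self-improvement "generation".
    capability = 1e-6      # starts at an undetectable level (assumed)
    factor = 2.0           # doubling per generation (assumed)
    for generation in range(41):
        if generation % 10 == 0:
            print(f"gen {generation:2d}: capability = {capability:.3g}")
        capability *= factor
    # gen 0: 1e-06, gen 10: ~0.001, gen 20: ~1, gen 30: ~1e3, gen 40: ~1e6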
Why do you think that is?
If you want to learn machine learning, check out the course.
EDIT: And so are the problem sets: https://see.stanford.edu/Course/CS229
The Coursera class is closest to CS 229a at Stanford.
So, if you want to do ML in industry, applied to common problems, the Coursera course will get you there. If you want to become an ML researcher, the Coursera course will fall way short of that, while CS229 might be at least a first step (followed by CS229T/STATS231).
Lecture notes online, too: https://web.stanford.edu/class/cs229t/
I realize it's a bit of a silly question, but strong AI is a serious issue in my opinion, and I'd like to understand how one of the leading researchers in deep learning addresses this concern. I think Andrew's answer could have been much more interesting.
If it's his opinion that it's too far out to practically worry about, he should just say something like: "We're decades away from this being a serious concern, but someday society will have to decide how to deal with it."
And I'm 100% with him on that one. Even on the most aggressive estimate (Ray Kurzweil's), we're 13 years away from it; in reality, probably 30-50. So worrying about it now kind of feels like a waste of time, doesn't it?
Edit: also, from a study by Nick Bostrom and others, I've learned that only a small share of AI researchers worry about AI. This sense of fear seems mostly associated with people whose jobs are not really focused on AI (like Bill Gates, Stephen Hawking and Elon Musk).
Here's one instance of a full quote from Ng where he uses the "overpopulation on Mars" phrasing:
"I think that hundreds of years from now if people invent a technology that we haven’t heard of yet, maybe a computer could turn evil. But the future is so uncertain. I don’t know what’s going to happen five years from now. The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this?"
More on that point...
I think that kind of depends on how bad (or good) having AI will be.
Assuming the absolute worst-case scenario (an AI that wipes out humanity overnight), not worrying about it because it's only 40 years away seems ridiculous to me, never mind 13 years away. Personally, I think 13 and even 40 years are pretty "optimistic" timelines, but even if it's 200 years away, we should devote at least some resources to worrying about this problem now.
I'm not going to lose any sleep over it, but I hope someone's thinking about it. And it makes a lot more sense that some of the most powerful men in the world (Gates, Hawking, Musk) are worrying about it since it's on the decades-out timeframe as opposed to months or years.
AI researchers are probably more like me, wondering what they're going to get for lunch tomorrow and how to optimize the next algorithm.