Q&A: Andrew Ng (mercurynews.com)
101 points by jonbaer on March 5, 2016 | 39 comments



For those interested in the "AI threat" discussion, here's an excellent thread on Quora about it with answers from Andrew Ng, Yoshua Bengio, and Pedro Domingos (along with other random opinions): https://www.quora.com/Is-AI-an-existential-threat-to-humanit...

As a side note, Quora has done some really excellent AMAs with a bunch of machine learning researchers, including Ng, Bengio, researchers at Google, and many more.


> For those interested in the "AI threat" discussion, here's an excellent thread on Quora about it with answers from Andrew Ng, Yoshua Bengio, and Pedro Domingos (along with other random opinions): https://www.quora.com/Is-AI-an-existential-threat-to-humanit....

I read Andrew Ng's answer on Quora, but I'm not sure it really addresses the stronger arguments or concerns suggesting 'AI may be an existential threat to humanity' so much as a straw man of the 'evil super-intelligence'. With all respect, it's hard to call this an interesting or involved answer.


Why would he address those concerns? He was pretty clear that he believes it's silly to even think about the "problem", since it's so far from being a threat. In fact, I would guess that fear of the "AI threat" roughly correlates with ignorance of the state of AI. And it might correlate with the number of science fiction movies watched.


You think that the numerous experts who signed this letter expressing concerns: http://futureoflife.org/ai-open-letter/ have these concerns from watching too many sci-fi movies about 'evil super-intelligence'?


Have you ever actually read the letter? It's not a doomsday letter. It doesn't talk about impending doom. It says, basically, "there is potential for great power, let's wield it responsibly." This is not controversial. I would sign this letter, and I don't think AI poses an existential threat.


That was rather the point of my first post? That those who have these concerns are mostly not worried about the fantastical version of 'evil super-intelligence' (although they are worried that AI could be an existential threat). Of course you can address the most extreme position presented and call it silly, but I don't think that's adding much to my understanding of the problem.

For example, I think a very serious concern is surrendering human decision making to machine learning in the areas where big data sets can be very widely applicable (crime, insurance, employment, immigration, finance). Losing control of our own systems, to perhaps even provably better decision making, is not at all a fantastical possibility. But if the response to this is that a malicious super-intelligence won't send a robot with automatic weapons to your house in the next hundred years, I'm supposed to feel reassured?


You're missing what I'm saying. There's no reason to think that most people who signed that letter have any concerns. The letter is written such that signing it is basically an affirmation that we should use AI to help society.

You started by wondering why Ng didn't address the concerns. It's because experts don't tend to be concerned. The letter was your evidence to the contrary, and it is not convincing.


> You started by wondering why Ng didn't address the concerns. It's because experts don't tend to be concerned.

While not everyone who signed is necessarily concerned about 'existential dangers to humanity' or is an expert specifically in AI, for some signers both are definitely true. For example, you can cross-reference: https://en.wikipedia.org/wiki/Existential_risk_from_artifici...


That link leads to a 404. That aside, it would be disappointing if it's addressing the 'evil killer robots' straw-man rather than the 'increasingly efficient optimiser that kills everyone as a by-product of its optimisation process' issue. Even if we give strong AI seemingly benign 'goals' (optimisation targets), there's a non-trivial chance it will result in the extinction of the human race.

One example I've seen: imagine if we invented strong AI and gave it the goal 'eradicate cancer in humans'. The most efficient way to achieve this goal would probably be to vaporize all currently alive humans, thus eradicating cancer in humans for all time.


Here's a survey of AI experts: http://www.nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


I see this survey cited a lot. I'd rather not get into a "my experts are better than your experts" argument (going on in the other thread here), but I really don't like Bostrom's survey. There are two really glaring issues:

1) Bostrom surveyed participants in conferences, not people presenting papers. Anyone can attend a conference if they buy a ticket, expert or not.

2) Bostrom decided to survey the conferences "Philosophy and Theory of AI," "AGI," and "AGI-Impacts," all of them organized by himself or his colleague. He also emailed the mailing list of "Members of the Greek Association for Artificial Intelligence." None of these conferences is well known in ML (or in technical circles, for that matter), and I have never heard of this association. Just about the only legitimate methodology was emailing the top 100 AI researchers according to Microsoft Academic Search (only 29 responded).

It would carry much more weight with me if it were, say, a survey of people presenting papers at NIPS and ICML (the top 2 machine learning conferences). Maybe throw in ICLR and CVPR (the top deep learning conference, and the top CV conference with an ML focus). Because, like it or not, the only reason people fear super-intelligence is because of the advances being made in machine learning, and by extension, deep learning. Not because of advances (there are few) in classic AI.


Asking about technology in 2075 is ridiculous. If you had asked people in 1916 about 1975, nobody would have gotten anything right.


Perhaps, but that's what the comment I am replying to was doing. If we are going to be making predictions about the future, we might as well take a survey of experts instead of trusting the predictions of a single one.


So they asked a group of people who don't know how to build a model of human intelligence to model the trajectory of their modeling of intelligence and when, once they succeed, the model will model something even more intelligent than that?

And then they took the most popular guess.

I'm sorry, I don't believe that gives us any information about the future whatsoever. That methodology is nonsense.


No, that's not their methodology at all. They did a survey of AI experts and asked them for their predictions, and then took the median of those predictions. That's perfectly fair, and in fact far more likely to be correct than taking a random expert's prediction. See the wisdom of crowds.

The best way to do it would be to assume each expert is equally likely to be correct, and then estimate what the probability is from there. 50% of experts say there is a 50% chance it will be developed by 2045. Therefore 50% by 2045 is a good estimate. In fact if anything it's an underestimate, since many of the other 50% of experts will still assign some probability to it happening before 2045, increasing the odds.

If you want me to, I can do a simulation with the data given and find out what the exact numbers should be. But the numbers in that abstract are unlikely to be too far off, and if anything are an underestimate.

As for whether it's valid to take the opinions of experts, that's a different issue. But 1) the predictions of experts are more likely to be correct than random HN commentators. 2) the predictions of a group of experts are more likely to be correct than a single expert, which is what this thread is about. And 3) uncertainty doesn't mean you can default to the hypothesis that it won't happen. If nothing else a survey of predictions provides a good prior.
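
For concreteness, here's a minimal sketch of the pooling idea above: a simple linear opinion pool over hypothetical per-expert probabilities for "HLMI by 2045" (the numbers are invented for illustration, not the survey's actual responses):

    import random

    random.seed(42)
    n = 100

    # Hypothetical responses: half the experts put P(HLMI by 2045) at or
    # above 50%, half below it, so the *median* expert says roughly 50%,
    # matching the survey's headline figure.
    optimists = [random.uniform(0.5, 0.9) for _ in range(n // 2)]
    skeptics  = [random.uniform(0.0, 0.5) for _ in range(n // 2)]
    estimates = optimists + skeptics

    pooled = sum(estimates) / len(estimates)   # linear opinion pool (mean)
    median = sorted(estimates)[n // 2]

    print("median expert estimate: %.2f" % median)  # ~0.50 by construction
    print("pooled (mean) estimate: %.2f" % pooled)  # depends on both halves

Whether the pooled figure lands above or below the median depends on how far each half of the distribution sits from 50%, which is exactly what running this over the real response data would settle.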


It's a very interesting thread.

I will point out that there is no representative there of the other side of that discussion, namely that AI is a threat we should be devoting resources to solving.


Well, a bunch of AI researchers are not gonna say AI is gonna kill us in 5 years. If they did, they would lose their jobs.

If a self-improving AI could exist, it would be exponential in its self-improvement. So just like a supercritical nuclear reactor, it could conceivably go from undetectable level of activity to supreme intelligence within a very short span of time.

It would be like the nuclear engineer at Chernobyl saying "is only 100MW comrade, is of not problem"


> it would be exponential in its self-improvement

Why do you think that is?


Because its ability to improve its intelligence is based on its intelligence, i.e. exponential growth.


In addition to umanwizard's comment, you're assuming that there are no external constraints involved. There are plenty of non-exponential growth curves out there, like https://en.wikipedia.org/wiki/Logistic_function and https://en.wikipedia.org/wiki/Gompertz_function , which you can imagine as being at least as probable as an exponential rate.
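
To make the shapes concrete, here's a small sketch comparing a pure exponential with a capacity-limited logistic curve; the growth rate and capacity are arbitrary values chosen purely for illustration:

    import math

    R = 0.5     # arbitrary growth rate
    K = 100.0   # arbitrary carrying capacity (the external constraint)

    def exponential(t):
        return math.exp(R * t)

    def logistic(t):
        # Logistic growth starting at 1: tracks the exponential early on,
        # then flattens as it approaches the constraint K.
        return K / (1 + (K - 1) * math.exp(-R * t))

    for t in range(0, 21, 4):
        print("t=%2d  exp=%9.1f  logistic=%5.1f"
              % (t, exponential(t), logistic(t)))

Both curves look identical while the system is far from its limits, so early evidence of rapid self-improvement wouldn't by itself tell you which one you're on.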


That assumes the marginal difficulty of further improvement stays constant as intelligence increases.


By that same logic, Nick Bostrom has an incentive to exaggerate the risk because it sells him more books.


Andrew Ng has a machine learning course online if you are interested: https://www.coursera.org/learn/machine-learning


I've been taking this course recently. Due to time constraints, I haven't kept up with the rest of the course, but the content is phenomenal.

If you want to learn machine learning, check out the course.


Probably one of the best online courses available. He's very engaging and makes some pretty tough concepts seem straightforward.


I took the in-person course at Stanford. I recall the first problem set in CS229 was about as hard as the entirety of the Coursera course combined...


Well, everyone can judge by themselves, the actual CS229 lectures are also available online: https://www.youtube.com/view_play_list?p=A89DCFA6ADACE599

EDIT: And so are the problem sets: https://see.stanford.edu/Course/CS229


There is nothing to indicate the Coursera class and CS 229 are the same. CS 229 is a grad-level machine learning class that assumes heavy math prerequisites; the syllabus is completely different.

The Coursera class is closest to CS 229a at Stanford.


Fair enough, I just wanted to note that people can also watch the CS229 lectures online.


(Disclaimer: I have only seen part of the material, for both courses) This seems true in terms of difficulty, mostly because CS229 assumes a much stronger math background and provides more of the theory for a lot of the ML techniques being used. However, if you want to know the "technique space" of usual ML algorithms and when/how to apply them, the Coursera course seems to be 70-90% of the way there in terms of breadth of the material (with a bias towards the most commonly used methods, so perhaps 70% of the techniques, applicable to more than 90% of the use cases).

So, if you want to do ML in industry, applied to common problems, the Coursera course will get you there. If you want to become an ML researcher, the Coursera course will fall way short of that, while CS229 might be at least a first step (followed by CS229T/STATS231 [1]).

[1] Lecture notes online, too: https://web.stanford.edu/class/cs229t/


It's notable that there are actually no enforced prerequisites for 229, besides basically being OK at math, which meant that I knew an education major who sat down, tried to take it, and sort of crashed and burned in the third week.


Are there any enforced requirements at Stanford? I don't think I ever had any formal review of the classes I signed up for making sure I actually met the requirements -- there must've been at least a handful of classes I had signed up for that I didn't have the official prereqs for.


> But I hear fears of the robots taking over. What do you tell people who fear that?

I realize it's a bit of a silly question, but strong AI is a serious issue in my opinion, and I'd like to understand how one of the leading researchers in deep learning addresses this concern. I think Andrew's answer could have been much more interesting.


Yeah, I was very disappointed in his answer to that. I don't think it's a silly question, and someone in his position should take it seriously.

If it's his opinion that it's too far out to practically worry about, he should just say something like: "We're decades away from this being a serious concern, but someday society will have to decide how to deal with it."


I'm on my phone, so I can't find the exact quote, but I remember him saying a couple of years ago something like he "doesn't fear AI for the same reason he doesn't fear the overpopulation of Mars".

And I'm 100% with him on that one. By the most optimistic estimate (the one by Ray Kurzweil), we're 13 years away from it. In reality, probably 30-50. So, worrying about it now kind of feels like a waste of time, doesn't it?

Edit: also, in a study by Nick Bostrom and others, I've learned that only a small share of AI researchers worry about AI. It seems like this sense of fear is mostly associated with those whose jobs are not really focused on AI (like Bill Gates, Stephen Hawking and Elon Musk).


> I'm on my phone, so I can't find the exact quote, but I remember him saying a couple of years ago something like he "doesn't fear AI for the same reason he doesn't fear the overpopulation of Mars".

Here's one instance of a full quote from Ng where he uses the "overpopulation on Mars" phrasing [1].

    I think that hundreds of years from now if people invent 
    a technology that we haven’t heard of yet, maybe a 
    computer could turn evil. But the future is so 
    uncertain. I don’t know what’s going to happen five 
    years from now. The reason I say that I don’t worry 
    about AI turning evil is the same reason I don’t worry 
    about overpopulation on Mars. Hundreds of years from now
    I hope we’ve colonized Mars. But we’ve never set foot on 
    the planet so how can we productively worry about this 
    problem now?


[1] http://www.wired.com/brandlab/2015/05/andrew-ng-deep-learnin...

More on that point...

https://www.reddit.com/r/Futurology/comments/2zpki0/deep_lea...

http://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/


"At the very best estimate (the one by Ray Kurzweil), we're 13 years away from it. In reality, probably 30-50. So, worrying about it now kind of feels like a waste of time, doesn't it?"

I think that kind of depends on how bad (or good) having AI will be.

Assuming the absolute worst-case scenario (an AI that wipes out humanity overnight), then not worrying about it because it's only 40 years away seems, in my mind, ridiculous. Not to mention 13 years away. Personally, I think 13 years and even 40 years is pretty "optimistic" on the timeline, but even if it's 200 years away, we should devote at least some resources to worrying about this problem now.


If the chance of the worst case scenario is an infinitesimally small number, then there is no threat. One in a quadrillion, for example.


It's a waste of time only in the sense that worrying about the end of fossil fuels, or the overcrowding of the Earth, or the risk of having our entire species on a single rock is a waste of time.

I'm not going to lose any sleep over it, but I hope someone's thinking about it. And it makes a lot more sense that some of the most powerful men in the world (Gates, Hawking, Musk) are worrying about it since it's on the decades-out timeframe as opposed to months or years.

AI researchers are probably more like me, wondering what they're going to get for lunch tomorrow and how to optimize the next algorithm.



