
Q&A: Andrew Ng - jonbaer
http://www.mercurynews.com/business/ci_29596077/q-andrew-ng-chief-scientist-chinese-search-giant
======
brianchu
For those interested in the "AI threat" discussion, here's an excellent thread
on Quora about it with answers from Andrew Ng, Yoshua Bengio, and Pedro
Domingos (along with other random opinions): [https://www.quora.com/Is-AI-an-existential-threat-to-humanit...](https://www.quora.com/Is-AI-an-existential-threat-to-humanity)

As a side note, Quora has done some really excellent AMAs with a bunch of
machine learning researchers, including Ng, Bengio, researchers at Google, and
many more.

~~~
Houshalter
Here's a survey of AI experts:
[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

>We thus designed a brief questionnaire and distributed it to four groups of
experts in 2012/2013. The median estimate of respondents was for a one in two
chance that high-level machine intelligence will be developed around
2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems
will move on to superintelligence in less than 30 years thereafter. They
estimate the chance is about one in three that this development turns out to
be ‘bad’ or ‘extremely bad’ for humanity.

~~~
narrator
Asking about technology in 2075 is ridiculous. If you had asked people in 1916
about 1975, nobody would have gotten anything right.

~~~
Houshalter
Perhaps, but that's what the comment I am replying to was doing. If we are
going to be making predictions about the future, we might as well take a
survey of experts instead of trusting the predictions of a single one.

------
godzillabrennus
Andrew Ng has a machine learning course online if you are interested:
[https://www.coursera.org/learn/machine-learning](https://www.coursera.org/learn/machine-learning)

~~~
Gratsby
Probably one of the best online courses available. He's very engaging and
makes some pretty tough concepts seem straightforward.

~~~
curuinor
I took the in-person course at Stanford. I recall the first problem set in
CS229 was about as hard as the entirety of the Coursera course combined...

~~~
icebraining
Well, everyone can judge for themselves; the actual CS229 lectures are also
available online:
[https://www.youtube.com/view_play_list?p=A89DCFA6ADACE599](https://www.youtube.com/view_play_list?p=A89DCFA6ADACE599)

EDIT: And so are the problem sets:
[https://see.stanford.edu/Course/CS229](https://see.stanford.edu/Course/CS229)

~~~
argonaut
There is nothing to indicate the Coursera class and CS 229 are the same. CS
229 is a grad-level machine learning class that assumes heavy math
prerequisites; the syllabus is completely different.

The Coursera class is closest to CS 229a at Stanford.

~~~
icebraining
Fair enough, I just wanted to note that people can also watch the CS229
lectures online.

------
trienism
> But I hear fears of the robots taking over. What do you tell people who fear
> that?

I realize it's a bit of a silly question, but strong AI is a serious issue in
my opinion, and I'd like to understand how one of the leading researchers in
deep learning addresses this concern. I think Andrew's answer could have been
much more interesting.

~~~
arcticfox
Yeah, I was very disappointed in his answer to that. I don't think it's a
silly question, and someone in his position should take it seriously.

If it's his opinion that it's too far out to practically worry about, he
should just say something like: "We're decades away from this being a serious
concern, but someday society will have to decide how to deal with it."

~~~
r3bl
I'm on my phone, so I can't find the exact quote, but I remember him saying a
couple of years ago something like he "doesn't fear AI for the same reason he
doesn't fear the overpopulation of Mars".

And I'm 100% with him on that one. By the most optimistic estimate (Ray
Kurzweil's), we're 13 years away from it. In reality, probably 30-50 years.
So, worrying about it now kind of feels like a waste of time, doesn't it?

Edit: also, from a survey by Nick Bostrom and others, I've learned that only a
small fraction of AI researchers worry about AI. This sense of fear seems to
be mostly associated with people whose work is not actually focused on AI
(like Bill Gates, Stephen Hawking, and Elon Musk).

~~~
edanm
"At the very best estimate (the one by Ray Kurzweil), we're 13 years away from
it. In reality, probably 30-50. So, worrying about it now kind of feels like a
waste of time, doesn't it?"

I think that kind of depends on how bad (or good) having AI will be.

Assuming the absolute worst-case scenario (an AI that wipes out humanity
overnight), not worrying about it because it's _only_ 40 years away seems, to
my mind, ridiculous. Not to mention 13 years away. Personally, I think 13
years and even 40 years is pretty "optimistic" as timelines go, but even if
it's 200 years away, we should devote at least _some_ resources to worrying
about this problem now.
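
To make the tradeoff concrete, here is a back-of-the-envelope expected-loss
calculation. It's a minimal sketch, and every number in it is a made-up
assumption for illustration, not an estimate anyone has published:

    # Expected-loss sketch -- all numbers are hypothetical assumptions.
    p_catastrophe = 1e-4    # assumed chance the worst case ever happens
    lives_at_stake = 7e9    # roughly everyone alive today
    expected_loss = p_catastrophe * lives_at_stake
    print(expected_loss)    # 700000.0 -- a tiny probability times a huge
                            # cost still leaves a large expected loss

Under those made-up numbers the expected loss is still hundreds of thousands
of lives, which is why "it's far away" alone doesn't settle how much effort
the problem deserves.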

~~~
argonaut
If the chance of the worst-case scenario is an infinitesimally small number,
then there is no threat. One in a quadrillion, for example.

