
The most notable luminaries of our time are wrong to fear AI - inputcoffee
https://inputcoffee.com/the-most-notable-luminaries-of-our-time-are-wrong-to-fear-ai-f95dfe7611e#.iw97jugaj
======
dmreedy
This is a retread of the traditional I'm-actually-a-dualist argument, and a
pretty naive version of it at that. The core of the argument is the citation
of Searle's Chinese room, which is treated as proven fact. I'll give it credit
as a powerful intuition pump, but it really just reduces to the dogmatic
divide between those who believe that consciousness can be an emergent
property of mechanical systems and those who believe consciousness is the
domain of some kind of non-mechanical, non-physical thing, such as a soul.

Some other notes:

1) I dislike casting statistical learning and symbolic systems so starkly
against each other. They are both facets of the same underlying principles,
and the relation is better illustrated as a continuum than as a sharp divide.

2) At the risk of participating in the retread: we don't know enough about the
nature of language right now to know whether the premise of Searle's Chinese
room even stands on its own. We don't know if such a book of translation is
possible, and we don't know how to measure its accuracy. This is an old
question; cf. the recent kerfuffle over what the Turing Test actually means.

3)

> A machine needs to at least have an analog of a desire to do something
> before it does it. For a biological system, it “wants” to live. This desire
> emerges from several millennia of evolution. A machine doesn’t want food, of
> course, but it has no direct reason to “want” to live.

We need operational definitions of all of the terminology involved before we
can make statements like this. It's dangerous to take these understandings of
'desire', 'want', etc. as givens when we don't even really know how they apply
to -us-. This is where an understanding of the history of philosophy is
helpful.

I agree with the fundamental premise that AI isn't as worrying as a lot of
these overhyped press releases make it out to be. But I don't think Searle is
on the right path here.

~~~
inputcoffee
If I had to do it over, I think I would shrink the rest of the piece and
expand the last section. (Maybe I still can; it is not on paper, after all.)

Anyway, thanks for your thoughtful feedback.

I don't necessarily disagree with most of what you say, but here are a few
responses:

1. For what it is worth, I think consciousness _might_ be an emergent
property of machines, but not of the _sorts_ of machines we currently have.
That is to say, I don't think symbolic logic or machine learning are those
sorts of machines.

2. Why do you dislike casting statistical and symbolic machines against each
other, alluding to your note (1)? (Because this is the internet: serious
question.)

3. I am aware of some of the original criticisms of Searle's Chinese room,
but I am not sure what you are alluding to here (in your note 2).

4. You make a fair point about defining desire, wants, and so forth. This is
quite a departure from many of the other points, which question the _need_
for desires or wants. I would be very curious how you believe that would add
an important dimension here. (Again, because this is the internet: serious
question.)

Thanks again for reading it and taking the time to comment.

------
snewk
> Their very limited and simple goals prevent them from even wanting to “take
> over” or “rule over us.”

I don't think that's the main concern regarding AI (correct me if I'm wrong).
What Hawking, Musk, and Gates fear is a weaponized AI: one that doesn't
necessarily "want" to take us over or rule over us, but one that is
specifically programmed to do so.

Nuclear warheads are built to blow up. That doesn't necessarily mean they
"want" to, but they are dangerous all the same.

~~~
inputcoffee
You are right about nuclear warheads.

I didn't read them as worrying about weaponized AI, so I will have to
reconsider that aspect.

------
csbrooks
[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

"...a paperclip maximizer is an artificial general intelligence (AGI) whose
goal is to maximize the number of paperclips in its collection... It would
innovate better and better techniques to maximize the number of paperclips. At
some point, it might convert most of the matter in the solar system into
paperclips."

Also:

"Some goals apparently serve as a proxy or measure of human welfare, so that
maximizing towards these goals seems to also lead to benefit for humanity. Yet
even these would produce similar outcomes unless the full complement of human
values is the goal. For example, an AGI whose terminal value is to increase
the number of smiles, as a proxy for human happiness, could work towards that
goal by reconfiguring all human faces to produce smiles, or tiling the solar
system with smiley faces (Yudkowsky 2008)."
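
To make that concrete, here is a minimal sketch (my own hypothetical toy, not
code from the wiki) of what a proxy objective looks like: score actions on
the proxy alone and take the argmax.

    # Toy "smile maximizer" (hypothetical illustration). Actions are
    # scored purely by a proxy metric; the agent just takes the argmax.
    ACTIONS = {
        "improve_healthcare": {"smiles": 10, "humans_intact": True},
        "tell_jokes": {"smiles": 3, "humans_intact": True},
        "tile_solar_system_with_smiley_faces":
            {"smiles": 10**9, "humans_intact": False},
    }

    def proxy_score(outcome):
        # Only the proxy "smiles" is counted; the full complement of
        # human values appears nowhere in the objective.
        return outcome["smiles"]

    best = max(ACTIONS, key=lambda a: proxy_score(ACTIONS[a]))
    print(best)  # -> tile_solar_system_with_smiley_faces

Nothing in that loop "wants" anything; the degenerate action wins simply
because the objective omits everything else we value.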

~~~
soared
Obviously we all know the paperclip example; you can't just quote it and call
that good. Countless people have made objections to this.

~~~
csbrooks
To me, it seems to refute the argument in the article: that AI has to have
some kind of "will" in order to be dangerous, and that because AI doesn't
"desire" to take over the world, it can't harm us. I believe the paperclip
example shows that isn't the case.

~~~
vorotato
Or, to simplify: I can mangle my arm with heavy machinery without it wanting
to harm me. It's somewhat ridiculous that people view conscious things as more
harmful; if anything, they've shown themselves to be less harmful. Before you
throw the nuke example at me, please consider that we have yet to see what a
being that was not conscious but capable of nuking would do.

------
Bartweiss
This is a deeply flawed rehash of some of the field's least interesting
commentary.

Among other things, the likes of Musk and Hawking, while brilliant, are in no
way experts on artificial intelligence. Rebutting them is _far_ less
meaningful than rebutting Moravec, Russell, Bostrom, Sutton, or any of the
other actual experts in the field who have expressed concerns.

More to the point, a discussion of what an AI "wants" is sophistry at best.
That the Tesla autopilot does not "want" to kill anyone should be of little
comfort when it kills someone. No one talking seriously about AI as
existential risk is envisioning Skynet, and no one interested in real
consequences cares whether an extinction-driving AI has human-style desires.

~~~
inputcoffee
Thanks for reading it, even if you're unhappy with it.

I agree that Musk et al. are not AI experts. I wanted to respond to their
public comments, which I do not agree with. I am not necessarily at odds with
the "actual experts," as you put it, so I didn't respond to them.

I also agree that the AI experts have substantive positions, and I do not
claim to have undermined them.

This last point has been repeated many times. I have clearly failed to address
it in my piece. I summarize the point as: an AI may not have desires, but it
can have goals.

I did think about it, and I tried to address it in the last section, but
clearly it was too little and too late. I may try to rework it to address it
more fully.

Thanks for reading and responding.

~~~
Bartweiss
I addressed this elsewhere (on your 'ask for criticisms' post), but I think I
misunderstood your intent.

As a rejection of Musk/Gates/Hawking, I definitely endorse this. Those voices
are pushing a questionably informed view of the topic (Gates a bit less so?)
that has had far too much public influence. Something (the title? my own
biases?) led me to conclude that you were attempting a broader response to
the AI community, which is what I objected to so strongly.

That's a great summary of the point (as I meant it). I saw that it was
addressed at the end. I still came away with the feeling that the middle was
irrelevant in light of it (again, perhaps not if you're responding to Musk)
and that the response was inadequate, but this may be a matter of scope and
intended audience.

------
gdudeman
This article confuses how we talk about AI with its potential outcomes.

It presupposes that AI will only do what we ask of it, not that we'll program
AI to, for instance, "protect us from enemy nations." It also seems to
presuppose that we'll be able to understand why it does what it does.

> "However computers change, they won’t have innate desires. They will be
> programmed by people. It still won’t be in their nature to want things."

Regardless of the truth of this (what is an innate desire beyond programming,
whether by evolution or by people?), it doesn't matter whether the desires
are innate or programmed. We will definitely program them to want things and
to do things like trade stocks and drive us places. If we get it wrong, it
can go haywire.

_The AI Revolution: The Road to Superintelligence_ has a very good explanation
of this:
[http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)

I'd suggest the writer spend more time trying to understand what many of the
smartest people of our time are saying before declaring them wrong.

~~~
Bartweiss
Giving a thoughtful reading to _AI Revolution_ or pretty much any other
serious work on AI risk would have preempted this article, and it's a shame
that didn't happen.

To the extent that this is a useful piece, I think it's a reminder that citing
Gates/Musk/Hawking on AI is much less useful than citing an actual expert in
the field.

------
liquidise
The claims about the _method_ of teaching AIs are, to me, moot. Given the
rate at which technology changes, we would be foolish to assume that the
current method will last even a couple of years.

The bigger fear surrounding AI is our reliance on it, and the accountability
at the time of a failure. Tesla's Autopilot has been around for less than a
year, and it feels like it is under fire on HN weekly. How much is the NSA
relying on deep learning to identify "bad" behaviors? How long until drones
use image analysis to identify targets and act on them in real time?

My more immediate fear is not with AI itself, but with our implementation of
it. I am not fearful of an _Ex Machina_-style robot on the loose. Instead, my
concern is that humans will begin to rely on machines to make decisions I
don't even think humans should be allowed to make.

~~~
inputcoffee
I agree with this point. But I think that issue is already with us. For
example, algorithms are grading GMAT essays, giving out credit scores and so
forth.

That is more a fear of the increasing complexity and opacity of modern life.

I think you are right that this is probably the right thing to worry about
with AI as it actually exists.

I might need a follow up article.

------
ChuckMcM
I will know we should start fearing AI when AI is writing pieces on Medium
that say we shouldn't fear it :-)

But more importantly, the good/bad conversations are really about sentient[1]
AI and not programmatic AI. Essentially the argument boils down to, "If a
sentient AI is created that exists in, and can control, inter-connected
computer systems, that AI will be able to take action to satisfy any
particular 'want' it might have."

So it gets "scary" if you create an AI that is "wanting" something. And I
could imagine this done innocuously, for example by creating a DeepMind-like
trading system that "wanted" to "win the securities game" by maximizing the
value of its holdings. The risk is that there are very destructive ways to
achieve that (killing off your competition, for example). But in those
scenarios, when it is clear the program is operating outside of its control
parameters, the question would be: how would it have already figured out that
it could be turned off/deleted, and so protected itself against that? And
_that_ is why sentience is required: by recognizing itself as apart from its
programming, it recognizes threats to its existence.
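
For concreteness, here is a toy sketch (purely hypothetical, not DeepMind or
any real system) of the goal-maximizing core of such a trader: it plans over
a tiny world model in which being switched off is just a state with no
further reward, so the shutdown branch loses on score alone.

    # Toy planner: maximize total holdings over a small world model.
    WORLD = {
        # state: {action: (next_state, reward)}
        "trading": {"trade": ("trading", 100),
                    "allow_shutdown": ("off", 0)},
        "off": {},  # no actions, no further reward
    }

    def value(state, horizon):
        # Best achievable total reward from `state` in `horizon` steps.
        actions = WORLD[state]
        if horizon == 0 or not actions:
            return 0
        return max(r + value(s, horizon - 1) for s, r in actions.values())

    def plan(state, horizon):
        actions = WORLD[state]
        return max(actions,
                   key=lambda a: actions[a][1] + value(actions[a][0], horizon - 1))

    print(plan("trading", 10))  # "trade" -- the shutdown branch scores zero

Nothing in this toy recognizes itself; whether real-world self-protection
requires more than a richer world model is exactly the open question.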

I tend to lean more toward the "unlikely to be an issue in the near future"
camp.

[1] Sentience being the property that the program actually understands its
own existence apart from its environment.

------
zwieback
When Skynet takes over, do you care whether it was because machines developed
"free will" or because they were "shuffling bits of paper"? I think the fear
of AI is independent of technical definitions of what goes on inside, and
people are not comforted by explanations of why machines don't actually think
on their own.

~~~
cgag
Yeah, when my atoms are rearranged into paperclips, I don't care if the AI
that did it truly "desired" that outcome.

([https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer))

------
cgag
> If the whole point of the technological singularity is that it is beyond our
> ability to understand it, how can we have an opinion on it?

Some people here would probably enjoy this:
[https://www.gwern.net/Complexity%20vs%20AI#parable-of-the-worms](https://www.gwern.net/Complexity%20vs%20AI#parable-of-the-worms)

I can't understand seriously asking that question.

There are infinite ways for some superintelligence to be misaligned with our
goals, and very few ways for it to go right. We're the ones building it; it's
not some inevitable fate that we'll build a superintelligence whose
motives/utility function are beyond us. Now is the time to think about it and
get it right.

~~~
cynoclast
I like Iain Banks' extrapolation of the post-singularity universe.

------
leblancfg
Dear Input Coffee,

First off, I would feel very uncomfortable contradicting Elon Musk, Bill
Gates, Stephen Hawking, and Bill Joy in one fell swoop. Who am I to judge,
though? Argument from authority can be fallacious, sure, so let's not go
there.

Think about this for a minute. By the same argument you're making, we are
just lumps of organic molecules replicating with DNA, so surely we can't feel
anything, right? The thing you're not considering here is _emergent
behaviour_.

Now, you haven't talked about self-programming machines in your article, and
I'm pretty sure that's what all the really smart people you've rebuked in
your subtitle are scared of. Do a quick Google search for "self-programming"
AND "machine learning" AND "genetic". If you've read Dawkins' _The Selfish
Gene_, you should be getting goosebumps right about now. If not, and you are
interested in AI in any way, I cannot stress enough how badly you need to go
out and get that book.
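
To save anyone the search, here is a bare-bones sketch of the "genetic" part
(my own toy, not from any of those results): candidate "programs" are bit
strings, scored by a fitness function, and the fittest are mutated to form
the next generation.

    # Minimal genetic algorithm (hypothetical toy illustration).
    import random

    GENOME_LEN = 20
    TARGET = [1] * GENOME_LEN          # stand-in goal: all ones

    def fitness(genome):
        # How many bits match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(50)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        elite = population[:10]
        # Refill the population with mutated copies of the elite.
        population = [mutate(random.choice(elite)) for _ in range(50)]

    print(max(fitness(g) for g in population))  # approaches GENOME_LEN

Swap the bit strings for program fragments and the fitness function for a
behavioural score, and you have machines revising their own behaviour.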

I was also surprised to see you didn't include Nick Bostrom's book
_Superintelligence_ (2014) in your quotes. If you haven't checked it out, I
would highly recommend it. It goes deep and wide into how a sudden,
exponentially growing spike of machine intelligence could impact our society.

------
inputcoffee
Well that went well.

I am resisting the urge to respond to every single comment, but many of them
are a version of the following:

The AI can have goals without desires, and those might cause us harm.

I agree with this, but I thought the last part was supposed to address this:

> To be perfectly fair to Musk, there is another component to his argument.
> Musk argues, about the stand-in for humanity, that “he’s sure he can control
> the demon. Didn’t work out.”

> On this view, the primary danger is not that the AI will take our resources,
> or compete with us, but that we don’t really know what may happen. This is a
> more general kind of concern that applies equally to, say, nuclear weaponry
> or genetic modification.

I wish I could pick one or two people on here, and just ask why that wasn't an
appropriate or convincing section of the post.

~~~
Chathamization
> This is a more general kind of concern that applies equally to, say, nuclear
> weaponry or genetic modification.

This is an interesting point. Concern about the possible threat posed by some
technology like genetic modification will often get dismissed as being "anti-
science", while concern about another technology like AGI (artificial general
intelligence - Skynet) usually doesn't. You have people like Bill Gates who
both admonishes people for being worried about genetic modification and
admonishes people for not being worried about futuristic AI.

Particularly interesting since concerns about genetic modification (or
present-day AI) are more grounded in reality than concerns about AGI.
Something like the paperclip maximizer is like worrying that Monsanto will
genetically engineer triffids (killer plants). I can imagine the reaction
most people would have to such concerns.

I wonder how much of this is cultural.

~~~
inputcoffee
You raise an interesting point, and one that I don't think gets much
attention. (The point about the contrasting attitudes towards AI and biotech.)

Bill Joy's essay took the twin issues together, but I think you are right
that today one gets more press than the other. I don't know enough biology to
know whether those concerns are valid or not.

It might be cultural in the sense that "the culture" these days is faced with
ever-improving AI on an annual basis (Siri, Amazon Echo, etc.), so the lay
person feels qualified to extrapolate, whereas with biotech I can't tell that
my corn is more resistant, so I don't see the change.

------
johnfjacobi
I always understood the talk of "desires" and "taking over" to be a strategy
to explain the real problems of AI. I don't think these people actually think
that AI will lead to a future like _I, Robot_.

Because there ARE major problems with automated computer systems already,
problems that only get worse with more automation. Kevin Slavin has a good
talk on the subject, and he brings up the occurrence of "flash crashes" in
the stock market now that we rely on high-frequency/algorithmic trading.

Here's the talk:
[https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world?language=en](https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world?language=en)
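
For a feel of the dynamic, here is a toy sketch (purely hypothetical, not
Slavin's model) of how momentum-following bots can amplify one small dip into
a sustained slide:

    # Toy flash-crash dynamic: bots sell whenever the last tick was down.
    price = 100.0
    history = [price]
    for t in range(20):
        shock = -1.0 if t == 0 else 0.0      # one small initial dip
        momentum = history[-1] - history[-2] if len(history) > 1 else 0.0
        bot_orders = -5.0 if momentum < 0 else 0.0  # dump on any downtick
        price += shock + 0.1 * bot_orders
        history.append(price)
    print(history)  # a single -1 shock snowballs into a sustained slide

No bot "wants" a crash; each just reacts to the last tick, and the reactions
feed each other.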

------
twblalock
This is too dismissive of the possibility of the development of consciousness.
John Searle's argument is famous, but it's not the final word on the subject.

Most of the arguments against the possibility of computers developing
consciousness are just as strong against the possibility of biological life
developing consciousness -- yet here we are! Why would the rules be different
for brains made of silicon than for brains made of meat?

Besides, AI can be dangerous whether it is conscious or not. After all, in the
Chinese Room argument, the _output_ is the same for a brain and the room, even
though the internal processes are not.

------
joesmo
Even a fully autonomous, self-driving car is not intelligent. It's just
advanced software. Shouldn't we wait for actual AI to exist before worrying
about it? Otherwise, we're worried about something that may not ever exist.
Every argument about the dangers of AI or the singularity assumes that such
AI is possible, without any basis for that assumption. I'm going to continue
being a skeptic till I see some proof. Until then, it's like worrying about
zombies. I have enough things to worry about that aren't fantasy.

~~~
inputcoffee
I agree with you in a sense. I think that the algorithms _as they exist today_
are not capable of sentience.

I was trying to draw out this thread with the statistical and symbolic
implementation piece, but that went on too long as it was.

Then I realized that Searle had made that essential argument, so I used him.

And that, everyone felt, missed the point. They are right, but it is the
insight you present here that motivated that detour.

------
dojomouse
AI simply working to satisfy the 'benign' goals given to it by human
programmers, without our having adequately considered all possible
consequences (which, guaranteed, we won't), can be incredibly dangerous.

You should read some of Eliezer Yudkowsky's writing on the topic, or Nick
Bostrom's _Superintelligence_. I think they articulate quite well the
concerns that prompt many of the statements from Musk, Hawking, etc.

------
Animats
Intentionality... free will... Chinese room... consciousness... the usual
philosophical bullshit.

The more likely threat is a corporation run by a machine learning system set
to maximize shareholder return and nothing else. When an AI can make more
profitable decisions than a human, an AI will be put in charge. Investors will
demand it. Think about that for a while.

At least this will solve the problem of excessive CEO pay.

------
sdegutis
Nice try, _HAL 9000_.

~~~
inputcoffee
I laughed out loud at this one.

------
cynoclast
> Elon Musk, Bill Gates, Stephen Hawking and Bill Joy surely know that
> machines cannot have desires

Bullshit. An AI without desires isn't an AI. It's a tool.

