
Why AI is a dangerous dream - interview with Noel Sharkey - limist
http://www.newscientist.com/article/mg20327231.100-why-ai-is-a-dangerous-dream.html?full=true&print=true
======
gizmo
Silly. So very silly.

First of all: he doesn't use his definition of intelligence consistently.
According to his usage of the word, one being can be truly intelligent while
another merely _appears_ perfectly intelligent but is in reality a dumb
machine that simulates intelligence. That's his chess example. But he
_defines_ intelligence as the science of making machines do things that lead
us to believe they are intelligent. A chess computer that outsmarts me most
definitely fits that definition of intelligence.

He continually makes the claim that "Technological artifacts do not have a
will or a desire". There is no reason to assume that you need some primal
force to get a will or desire (perhaps any sufficiently complex system will
have a will or desire as a side-effect), and there is no reason to assume a
will or desire can't be perfectly emulated.

He claims that there are no signs that computer processing speed will
eventually overtake that of the human brain. I say that computer processing
speed is undeniably improving rapidly, and that the number of tasks that
computers can do is growing just as fast. Unless there is a hard limit
somewhere, the default assumption should be that we will eventually be
overtaken.

Computer AIs can play Chess, Checkers and Super Mario. They can create art,
and compose music. They can drive some vehicles, and land planes at night. How
about science? Some proofs are made and verified almost entirely by
computer. Some proofs are so complex they can only be checked by computer. In
many research fields a single human is almost guaranteed to contribute
nothing. A computer, on the other hand, can probably brute-force its way to
many new discoveries.

He again makes the claim that computers are innately unable to feel compassion
or empathy. Only to finish with the dumbest remark of the article: "I don't
think they will be very good at faking fouls", which is clearly a matter of
basic game theory. And I think that robots will completely crush humans at
soccer, even if they lack strategy. The moment robots are good enough to take
the ball away from a pro human player, they will be able to do so
consistently. Even if they run slower and shoot only semi-accurately, they
will never make big mistakes. And history shows that in any game where
computers can compete the humans have to play (almost) perfectly to even stand
a chance (see: Chess / Checkers / Poker). We meat bags with our 100ms+
response times will never be in the same league as robots. Either we will be
far superior to the robots, or the robots will run circles around us.

~~~
alexfarran
> But he _defines_ intelligence as the science of making machines do things
> that lead us to believe they are intelligent. A chess computer that
> outsmarts me most definitely fits that definition of intelligence.

No he doesn't. He defines AI that way, and that's the point. AI is an illusion
of intelligence. He sees a danger in mistaking it for the real thing - in
effect mistaking machines for people.

~~~
Tichy
What's "the real thing"? Of course you could always define AI as "not real
intelligence", then by definition AI will never be "really intelligent". It
would not be a very interesting claim, though.

~~~
jshen
If it's brute force, i.e. chess programs, it's not real intelligence.
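
For what it's worth, "brute force" can be made concrete. Here is a minimal sketch (everything in it is invented for illustration) of exhaustive minimax search, using the toy game of Nim instead of chess to keep it tiny:

```python
# "Brute force" game play in miniature: exhaustive minimax over the toy game
# of Nim (players alternately take 1-3 stones; whoever takes the last stone
# wins). The same idea, scaled up with pruning and heuristics, is roughly how
# chess engines search.

def best_move(stones, memo=None):
    """Return (score, take): score is +1 if the player to move can force a
    win from `stones` stones, -1 otherwise; `take` is a best move (1-3)."""
    if memo is None:
        memo = {}
    if stones in memo:
        return memo[stones]
    best = (-1, 1)  # pessimistic default: assume a forced loss
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            best = (1, take)  # taking the last stone wins outright
            break
        opp_score, _ = best_move(stones - take, memo)
        if -opp_score > best[0]:  # our score is the negation of the opponent's
            best = (-opp_score, take)
    memo[stones] = best
    return best
```

From 21 stones `best_move(21)` reports a forced win, so the program will "outsmart" any casual human player while doing nothing but enumerating positions - which is exactly the distinction being argued about.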

If it is a statistical model that produces a result given an input it's seen
before then it's not real intelligence.

To me, intelligence is the ability to learn by applying patterns/metaphors
across domains. Computers aren't anywhere close to this.

~~~
jacoblyles
Applying a statistical model trained on previously observed input is a
reasonable description of much of human behavior.
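
To make that concrete, here is a deliberately tiny sketch of such a model - a bigram word predictor that returns the most frequently observed next word. The corpus and function names are all made up for illustration:

```python
# A "statistical model that produces a result given an input it's seen
# before": count, for each word, how often each follower was observed, then
# predict the most common one.
from collections import Counter, defaultdict

def train(corpus):
    """Build follower counts from a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    """Most frequently observed follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train(["the cat sat", "the cat ran", "the dog sat"])
```

Note that it can only answer for inputs it has seen before; the objection upthread is precisely that nothing in it can carry what it learned into a new domain.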

~~~
jshen
A chess program trained on previously observed input is not able to then learn
how to paint, or drive a car, or play basketball. This is the difference.

------
Leon
His sticking point is that the brain or mind could be a physical system that
is not computable or equivalent to a Turing machine. If we take the assumption
that the mind is a physical system, then it would still only be a matter of
time before we found the reason for the difference in computational ability
(maybe it would be like Penrose's quantum arguments for the mind), but those
systems would still be within reach of implementation within a computer
system. The brain is still a combination of proteins, amino acids, and a few
other chemicals. We would just have to move to using chemical computers (which
are already being researched).

Even following his arguments, it would only be a matter of time before
sentient machines are created, equivalent to our own minds.

His arguments against AI are mostly 'just because' and carry little proof.

~~~
jacquesm
No, his arguments are not 'just because', this is a pretty smart guy and from
what I get out of it his reasoning goes like this:

If the mind is a physical system then we can attempt to recreate it by
computer but this is no guarantee that such an attempt will be successful.

For an example of such a situation: imagine that it would take all the
resources of a planet to create a single AI. Then it might be theoretically
possible, but in practice the attempt would fail anyway.

And that is assuming that it _is_ possible, for which there is no proof until
it is done. If the mind turns out to depend on mechanisms that we can not
model accurately enough to get the desired effect then the whole deal is off.

There can be no 'proof' that AI is impossible, just as much as there can be no
'proof' that green elephants do not exist.

The only proof that would be acceptable is for someone to produce an AI.

Nobody needs to present proof that it is not possible.

~~~
Locke1689
Well, that's not entirely true. What is true is that if the mind is more
powerful than a Universal Turing Machine, then we have no idea how to approach
building one. However, we know that the resources required to build one are
small -- humans do it all the time in a relatively cheap process called "sex."
That is, the lower bound for creating AI is very small. Of course, the danger
is that our brains have certain limitations for a reason -- these could be the
property of this super-Universal Turing Machine. Therefore, a true AI may look
nothing like our modern computer. The GP is correct in that we should at least
be able to build a simulacrum of our own mind -- even if that is simply an
artificially constructed duplicate using biology, genetics, and biochemistry.
I would say that this may be fairly far away, because we have yet to truly
understand many of the processes and structures in the human brain. This is
why I believe AI is much farther off than most people think: to understand AI
we must first understand ourselves, and that is much more complex than it
seems at first.

Now, this entire hypothesis rests on the idea that our brain is super-
Universal. That is, it cannot be phrased as a computable function (by the
Church-Turing thesis). This is a fairly unpopular view in the AI community,
but valid until we prove otherwise. Up till now there has been no evidence
that such a Turing machine exists in theory, much less in practice, which is a
minor blow. In addition, many of our psychological structures and chemical
activities seem to suggest a Universal Turing machine.

Edit: I just realized that I implied that a super-Universal Turing machine is
a Turing machine. This is obviously untrue, by definition. Consider my
terminology to be a replacement for whatever name we would give to this
noncomputable function solver (since we have no category for that at the
moment).

Edit2: Not that we don't have a category for noncomputable functions, only
that we don't know which category they would fall into. I guess the catch-all
"hypercomputable" is suitable. Thanks, naveensundar

~~~
naveensundar
There is a category for super Turing computation - hypercomputation.
<http://en.wikipedia.org/wiki/Hypercomputation>. There is a whole hierarchy of
such problems/machines <http://en.wikipedia.org/wiki/Arithmetical_hierarchy>

~~~
Locke1689
True. My impression was that the machines were not well defined though. That
is, hypercomputation is mostly defined as things not computable by a Turing
machine. In addition, I guess I was saying more that we don't know what
category our mind would fall into if it is indeed hypercomputable, not that we
don't have mathematical categories for noncomputable (not general recursive)
problems. I edited the parent post for clarity, as well.

------
rms
You should probably just read Eliezer Yudkowsky on this topic.
<http://yudkowsky.net/singularity/ai-risk>

------
mynameishere
_Soccer robots can move quickly, punch the ball hard and get it accurately
into the net, but they cannot look at the pattern of the game and guess where
the ball is going to end up._

I'm pretty sure that computers already play sports (minus the robots)
acceptably well.

~~~
coderdude
If you're talking about sports video games, the game knows of and keeps track
of everything going on in the game world.

------
growingconcern
"But accepting mind as a physical entity does not tell us what kind of
physical entity it is. It could be a physical system that cannot be recreated
by a computer." Child Please. Why is anybody even listening to this kook?

~~~
rortian
You are displaying very misplaced arrogance. Simulations of all sorts of
physical phenomena are intractable. An example I am familiar with is
attempting to simulate a large number of sand particles being vibrated. If you
think this is trivial, then you seriously underestimate how computationally
difficult simulation can be for physics problems with relatively
straightforward mechanics. The mechanics of the brain are not yet seen as
straightforward.

The person you refer to is not a child and displayed much more sophistication
on these issues than any 'rapture of the nerds' advocates I have ever seen.

------
hopeless
Just as an aside, Noel Sharkey taught a neural networks course I took but
everything I learnt about them came from implementing a back-prop net in Occam
for another class!

------
darien
In this interview he is suggesting that people stop a self-fulfilling prophecy
before it is too late. I think his fear is that if people believe superior AI
is inevitable, researchers will work from those assumptions as if they were
given facts - eventually creating systems which might pose threats or
general harms to humanity. Other than that, only time will tell who is more
correct.

------
endtime
>I'm an empirical kind of guy, and there is just no evidence of an artificial
toehold in sentience. It is often forgotten that the idea of mind or brain as
computational is merely an assumption, not a truth. When I point this out to
"believers" in the computational theory of mind, some of their arguments are
almost religious. They say, "What else could there be? Do you think mind is
supernatural?" But accepting mind as a physical entity does not tell us what
kind of physical entity it is. It could be a physical system that cannot be
recreated by a computer.

This sounds like the same kind of argument as "intelligent design" people use.
"What else could there be?" is not a religious argument, it's a perfectly
reasonable question, and to suggest that there are physical systems that
cannot be recreated by computers shows a lack of fundamental theoretical
understanding of computation.

------
tybris
Seems like he went from underestimating to overestimating human intelligence.

------
jongraehl
While I acknowledge the possibility that "real" intelligence may be inherently
non-computational, it's ridiculous to claim that it "equally might be" so
(which I take to mean roughly 50% probable). I claim 99.9% certainty that a
computational process will demonstrate general human intelligence in the next
hundred years, given that civilization isn't decimated before then. We don't
even have to be clever enough to organize superpowerful computers into
rudimentary general intelligence; we just need to understand our own brains at
a fine enough level to emulate them computationally.

And, of course, you don't need anything close to general intelligence to do
well in sports.

------
billswift
This was posted 5 days ago; 3 days before this reposting. At that time I left
the comment: I thought this might be something like Eliezer's arguments
against developing a GAI until it could be made provably Friendly AI, instead
I just got an argument exactly like the ones in 1903 that said
heavier-than-air flight by men was impossible - go back and read some of them, some of the
arguments were almost identical. Some of the arguments are currently true, but
some of them amount to "I can't do it, and no one else has done it, therefore
there must be some fundamental reason it can't be done".

------
limist
Some very interesting points and discussion here; what surprises me is no
mention yet of the major thought experiment addressing many of the
AI/intelligence arguments made here - the 29-year-old Chinese Room paper by
John Searle:

<http://en.wikipedia.org/wiki/Chinese_room>

In sum:

(A1) "Programs are formal (syntactic)."

(A2) "Minds have mental contents (semantics)."

(A3) "Syntax by itself is neither constitutive of nor sufficient for
semantics."

(C1) Programs are neither constitutive of nor sufficient for minds.

~~~
req2
<http://lesswrong.com/lw/15a/a_note_on_hypotheticals/>

~~~
limist
Thanks for the link. While I don't disagree with the general point of the
linked article, the writer PhilGoetz misstates Searle's conclusion. Also,
Goetz's counter-argument - that the locus of consciousness need not be the man
in the room - is addressed in the Wikipedia article too:

[http://en.wikipedia.org/wiki/Chinese_room#System_and_virtual...](http://en.wikipedia.org/wiki/Chinese_room#System_and_virtual_mind_replies:_finding_the_mind)

------
omouse
This was already posted, wasn't it?

~~~
astine
It's the first time _I've_ seen it, so I appreciate it even if it was re-posted.

