
Why there will never be human-equivalent AI - blasdel
http://qntm.org/ai
======
boredguy8
Searle's 'experiment' is silly. Searle is trying to make readers empathize
with the 'person in the box'. Let's concede, for the moment, that the person
in the box doesn't understand Chinese in any meaningful way.

That says nothing about whether or not the SYSTEM as a WHOLE understands
Chinese. Given the constraints of Searle's example (that every input produces
a 'correct' output, but does so via this labour-intensive process), it seems
there IS intelligence going on there at the SYSTEM level.

Ultimately, the point of that story is that 'the cpu isn't smart'. Fine,
that's not the argument behind AI/PI. And the whole 'there's nothing going on
that isn't in the person already' also misses the point: what if, instead of a
slow human translating, we had a fast human doing the lookups at the magnitude
of a trillion lookup operations per second? They'd be able to do substantially
more 'translating' work than the author of the software. So now, say that we
program 'learning' instead of 'translation'. Sweet, now the computer can't
learn any BETTER than the human, but it can learn FASTER, and there may be
some substantial benefit to that.

His own argument is self-refuting: "in fact it was doing exactly what I had
told it to do and nothing which I couldn't have done myself longhand given the
time." Right. But you're limited by death, and a computer isn't limited in the
same way. So say AI never makes it to 'smarter than human given enough time'.
Fine. If it does in 30 seconds what it would take me 17,000 years to complete,
I'm pretty happy.

The FAR more interesting question (to me) and probably one that can't be
solved until we 'get there' is this: will computer intelligence be subject to
the same foibles and hangups as a human? So, for instance, could "Google AI"
respond to my search request by saying, "Wow, what a boring question, stop
bothering me."

~~~
extension
Since human motivations and preferences are caused by evolutionary forces, I
don't see how an AI would acquire such things, aside from what we build into
it. And if we wanted to build motivation into it, it shouldn't be difficult,
once we have an artificial mind capable of understanding the necessary
concepts.

~~~
boredguy8
1) A keyboard is not an evolutionary influence condom. Those influences affect
code decisions that we make. So imagine we create AI that 'desires' the
continuation of human life at some level, and assume this is a manifestation
of our evolutionary drive to survive. Such a beast may answer a query such as,
"How do I build an atomic bomb?" with feigned boredom (to hide an
unwillingness to answer my question).

2) A lot of AI research is premised upon modeling selection between choices.
It seems entirely possible that the questions I have would be incredibly
boring compared to whatever other thoughts there are to be had. That is:
what's useful to us might not be what's interesting to an AI, and limited,
domain-specific pseudo-intelligence may be all that's really necessary. But
any time there are external constraints on the exploration space (for
instance, requiring a response within a certain time), we don't know what
we're missing out on.

------
jey
I love how he completely undermines his own point in the last line of the
post:

    
    
      Lesser intelligences cannot create greater
      intelligences... dumb luck notwithstanding.
    

If "dumb luck" can do it, why can't "smart luck" do it too? You and I are
already an existence proof that it's possible to build "intelligent" systems
out of matter. So what's stopping us from actually
_understanding_ what it means to be "intelligent" on a theoretical level then
solving the resulting engineering problems to implement it in practice?

In other words, since "dumb luck" can optimize a fitness function that
results in intelligence, why can't that same (or some similar) fitness
function be optimized a little less dumbly, like using human intelligence to
navigate through the solution space?

------
lionheart
Here's why I disagree: smart human beings evolved from not-very-smart
primates.

That proves there is a natural process by which greater intelligence can be
created.

Therefore there is no reason that an even greater intelligence cannot be
created with help from man. And for singularity's sake super-human
intelligence doesn't even have to be an AI. Genetically-engineered super-
intelligent primates would do the trick as well. The idea is that once you've
created something smarter than yourself, no matter how you did it, it will
then be able to figure out how to make something even smarter. And so on.
That's the singularity.

~~~
FlorinAndrei
What if natural evolution has become quick enough now (and it does seem to
accelerate all by itself, if you put the evolution of species on a time scale)
so that the "next step" (the superhuman beings) will emerge without any
conscious and/or voluntary input from us?

I'm not saying this is certainly what's going to happen, it's just a (literal)
what-if question.

------
baddox
Everything hinges on his belief: "I believe that it is impossible for human
beings to create a human-equivalent artificial intelligence."

I agree that it's difficult to give formal specifications for a "human-
equivalent artificial intelligence," but assuming the human brain is an
entirely physical object, why shouldn't we be able to at least theoretically
create or simulate a computer system with as much complexity and the same
functionality?

~~~
nkassis
I agree with you. Also, I don't believe humans are free of the issues that
machines have. The halting problem comes to mind; nothing shows that humans
aren't bound by it too. I believe (another belief ;p ) that machines can
theoretically be as intelligent as humans, given that we're both bound by the
same limits. But I'm no AI expert, as my post will probably prove. IANAAIE
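
The halting-problem limit mentioned above can be made concrete. Here is a minimal sketch of Turing's diagonalization argument (the names `make_counterexample` and `never_halts` are my own, for illustration): no proposed `halts` oracle can be right about every program, because we can always build a program that does the opposite of whatever the oracle predicts about it.

```python
def make_counterexample(halts):
    """Given any claimed halting oracle `halts(f) -> bool`, build a
    function g that the oracle must misjudge (Turing's diagonal trick)."""
    def g():
        if halts(g):
            while True:  # oracle said "g halts", so g loops forever
                pass
        # oracle said "g loops forever", so g halts immediately
    return g

# Try the oracle that always answers "does not halt":
def never_halts(f):
    return False

g = make_counterexample(never_halts)
g()  # returns immediately, so never_halts was wrong about g

# The oracle that always answers "halts" is wrong too, but we can't
# demonstrate that by running its g, since that g would loop forever.
```

Whether human intuition escapes this kind of limit is exactly the question nkassis raises; the code only shows the limit applies to any mechanical `halts`.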

~~~
baddox
Are you suggesting that the human brain may be capable of non-Turing computing
(hypercomputation)? I certainly don't agree with that.

~~~
nkassis
Not sure about hypercomputation, but Gödel did try to prove that humans have
intuition, something that would allow them to somehow go beyond the
limitations of his incompleteness theorems. Of course, he went crazy thinking
about all this and never did finish his proof about intuition. I personally
don't believe it's possible. Humans are as limited as machines.

------
onlyafly
The whole article is an argument from personal incredulity.
[http://www.theskepticsguide.org/resources/logicalfallacies.a...](http://www.theskepticsguide.org/resources/logicalfallacies.aspx)

------
btilly
Someone clearly didn't read _Godel, Escher, Bach_.

I've long pointed out that human brains are far more complex than computers.
However given Moore's law it seems likely that somewhere in the 2015-2030
range (depending on how one defines the complexity of a neuron), computers
will be a match for human brains in complexity. At some point after that
happens I expect human level AI to appear.

The growing power of statistical techniques for previously impossible tasks
like translation and answering Jeopardy questions (see the computer IBM is
putting together) suggests that once the requisite CPU power is available, it
won't take long for computers to match humans on a wide variety of cognitive
tasks. Once you hook together the right selection of abilities, I expect
something that looks suspiciously like original thought to emerge.

~~~
FlorinAndrei
Complexity is easy. Just wait long enough.

It's consciousness that's harder to contemplate.

~~~
raimondious
Right — it's impossible to perfectly simulate something we don't even
understand in physical terms (we can only simulate what we think it's doing).
Though it's debatable whether artificial consciousness is needed for human-
level artificial intelligence. Intelligence is just one aspect of the brain.

------
houseabsolute
I've never understood how one could be so sure about this. It seems to me all
we need to program a human in a box is:

1\. An atomic-level scan of a single human being.

2\. A computer with a program powerful enough to simulate all the atoms in a
single human being and their interactions.

If you can get this, you have a computer as smart as a human (modulo quantum
effects, if those do indeed play any part in cognition).

This would be a horribly inefficient way of doing it, but I'd be very
surprised if it were not possible to get a computer this fast in real life,
or to conduct an approximate atomic-level scan of a human being. It may not
happen soon, but it would be truly shocking to me if it doesn't happen
eventually, provided we do not destroy ourselves before reaching that
technological level.

------
ck2
In 100 years we will have gobs of massively parallel computing power in the
size of today's cellphones.

Even badly written code will be able to execute at speeds we can't fathom
today. Every dot in an image will be able to be analyzed in milliseconds,
matched to terabytes of stored patterns. Every sound wave in a voice analyzed
by the same massive amount of power and patterns.

We may not have "real" AI in 100 years like in the movies but it will be a
spooky close simulation.

I just don't know what will happen with power storage; we'd need a huge leap,
and the simple evolutions in battery chemistry so far are just not enough.

~~~
stcredzero
_Even badly written code will be able to execute at speeds we can't fathom
today._

Exponential complexity is still going to swamp available computing power for
large enough N. N will be larger, though, and that will open up new uses.
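
A quick back-of-the-envelope sketch of that point (the machine speeds are made-up figures for illustration): for a brute-force algorithm that takes 2^N steps, a machine 1000x faster only pushes the largest feasible N up by about log2(1000), roughly 10.

```python
import math

def max_feasible_n(ops_per_second, seconds):
    # Largest N for which a 2^N-step brute force fits in the budget.
    return int(math.log2(ops_per_second * seconds))

# A hypothetical machine, and one a thousand times faster.
today = max_feasible_n(10**9, 3600)    # 1e9 ops/s for one hour -> N = 41
future = max_feasible_n(10**12, 3600)  # 1000x the speed, same hour -> N = 51

print(today, future)  # 1000x the power buys only ~10 extra units of N
```

That is the sense in which exponential complexity "swamps" hardware gains: the feasible N grows, but only logarithmically in the available compute.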

------
spot
Uninformed dribblings at best.

------
mark_l_watson
Never say never.

'Real AI' may require new hardware like quantum computers, and may take a very
long time, but even 400 years ago, did anyone expect us to be sending small
robotic vehicles around and out of the solar system?

I do believe that real AI will happen, but it may seem very alien to us.

------
stcredzero
From the article:

 _"Clearly intelligence is already an infinite-dimensional beast"_

Pure fluff. This author seems to have no more than a layman's appreciation of
AI. He's also either prone to hyperbole or unaware of the illogic/fluffiness
of such statements, or both.

------
growingconcern
What a ridiculous argument (something of a given intelligence can not possibly
create something more intelligent)! It's kind of like saying no one who can
only run at 15 miles per hour could ever create something that can go faster
than 15 miles per hour.

~~~
thefool
The "given enough time" part is the logical flaw, I think.

It's kind of like saying that a man who can only run 15 miles per hour will
eventually be able to run 30 miles, and thus even if he creates a machine to
do it faster, he was still capable of doing it.

------
megaman821
For practical purposes what does it matter? He is basically saying if humans
cannot comprehend something like the fifth dimension then we couldn't build an
AI that could either. So what.

I do believe humans could build a fusion reactor so a computer with expert
level knowledge in all areas related to fusion reactors could build one
thousands of times faster than it would take a human to do. So anything you
think a human could do a high-level AI could do dramatically faster. So even
if the AI could never construct something smarter than itself, it could make
something faster, and in turn that AI could make one even faster. That is the
singularity.

------
OldHippie
"What if I convince them that I'm the messiah? Would that make me the
messiah?"

Yes.

~~~
onlyafly
Similarly, he gets the Turing Test wrong. He seems to think that all you have
to do to pass the Turing Test is to convince a single person that you are a
person.

------
johngalt
Guide to create General AI:

1\. Figure out what the brain does chemically to create thought.

2\. Replicate that function electronically.

Responses to his objections about complexity greater than ourselves: if we
can't create something smarter than ourselves, then how did amoebas do it? If
you can figure out step #1, then you can also diff a smart brain vs. a dumb
one. Take whatever makes a brain smarter and double it in step #2.

IMHO hardware's not the problem. It's reverse engineering the software our
brains use. I expect that when we figure this out, we'll be surprised how
simple we really are.

------
totalc
The blogger closed discussion before I could reply, so I'll reply on here for
what good it does me.

Whether a technological singularity is desirable is debatable. Whether
development of human-equivalent AI would lead to technological singularity is
debatable. On the other hand, human-equivalent AI seems inevitable to me.
The author asserts some things without much in the way of proof:

"You could take a precise reading of the structure of the human's brain and
simulate that brain inside a computer. But taking this initial reading is
impossible in practice right now, and may remain so indefinitely, and
computers need to be, conservatively, ten orders of magnitude more powerful
before the simulation step becomes possible."

Why would this remain impossible in practice indefinitely? It's only a matter
of time and effort until the human brain's goings-on can be reproduced outside
a human brain, no matter how many orders of magnitude "more powerful" the
equipment needs to be. Unless there is truly some human-inaccessible spiritual
process that goes on alongside physical processes in the human brain to
produce human intelligence, there's no reason purely physical processes can't
be reproduced outside a human brain.

------
borisk
Useless nonsense.

------
klodolph
This essay asserts that, for example, a human could never write a chess
program that plays chess better than the human does, because the human could
always just run the chess program by hand, given time.

This is like saying that no computer is more powerful than any other computer,
given sufficient storage and time. But limits on storage and time are THE
limits. This is a very poor way to define "power".

------
fhars
What irks me most about the AI debates is that a machine that thinks like a
human would actually be artificially delusional, not artificially intelligent.
For true, strong AI all the SETI arguments apply: would such an intelligence
resemble our own, or could it be so different that we wouldn't even recognize
it?

------
Shorel
The fact that no single human will be able to understand how such an AI works
down to the smallest detail means nothing about the way such an AI can be
created.

For example: evolutionary programming, iterative design, Gödel machines. All
of them are possible approaches.

------
hippich
He is right. Something he described as AI will be impossible for a human
being to create. But I really doubt "real AI" will be what everyone currently
imagines.

------
dododo
i don't understand why "intelligence"/"smart" is a good thing to optimise/aim
for:

1\. it's not a well defined quantity.

2\. under any definition, it doesn't seem to be anything useful in itself.

can some entity create an entity better at a task than the creator? obviously
yes. c.f., compilers, factories, cranes, &c.

------
83457
1\. Design and build an AI brain.

2\. The new AI brain designs and builds a more complex AI brain.

3\. Repeat step 2.

