

Scott Aaronson on Penrose's argument (why machines can't think) - lkozma
http://www.scottaaronson.com/democritus/lec10.5.html

======
erikstarck
For those not familiar with the Penrose argument, here it is, slightly
simplified:

\- Consciousness is weird and we can't explain it.

\- Quantum mechanics is weird and we can't explain it.

\- Therefore, consciousness and human intelligence must come from quantum
mechanics.

Yeah, yeah, I know, that's obviously not how he frames it, but it's close. I
read his book a few years ago, expecting some great insight but seriously,
that's all there is to it.

He actually claims that mathematicians have some magical ability to "see"
into a parallel universe of mathematical answers, an ability that computers
lack. How can this argument even be taken seriously?

~~~
xtacy
Quantum mechanics is weird, alright. But we can still predict what's going to
happen, as we have theories. Do we have such a formal theory for
consciousness?

~~~
wlievens
Quantum mechanics is objectively observable; consciousness is not. It's a very
different kind of beast, therefore.

------
ars
An interesting argument, but the lookup table argument falls flat.

The number of inputs is pretty much infinite, not finite like he says.

I emailed him this, I'll post here if he replies:

    
    
      I think you are incorrect about the lookup table. Just
      walking around a city will provide more input permutations
      than any lookup table could possibly encode.
    
      You assumed that responses are always deterministic, but
      actually people's behavior is influenced by what they have
      seen recently. Or even what they have been asked recently!
    
      Every lookup table will also have to include the result
      of choosing any question from the lookup table - and in
      what order.
    
      The possible number of orderings of that grows far faster
      than any finite lookup table could encode, because simply
      asking a question in the lookup table changes the answer
      to all further questions.

~~~
AndrewDucker
I think you're confusing "finite" and "small enough that I can imagine
something that size".

A billion operations per billionth of a second for a billion years is still
finite, it's just big.
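
To put a rough number on that claim (the figures below are just the ones in
the comment, worked out):

```python
# "A billion operations per billionth of a second for a billion years"
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 seconds
ops_per_second = 1e9 / 1e-9                  # 1e18 operations per second
total_ops = ops_per_second * 1e9 * SECONDS_PER_YEAR
print(f"{total_ops:.2e}")  # ~3.16e+34 -- astronomically big, still finite
```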

~~~
ars
I am not confusing them.

My argument is that each time you add a question to the table, you need to
encode an answer that depends on all the other questions asked.

Meaning the number of answers is exponential (actually factorial) in the
number of questions in it. Such a table is essentially infinite.

i.e. if you have 10 questions you have about 3.6 million possible answers. But
with a million questions you have about 8 x 10^5565708 answers. I hope you
realize how big this number is.
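
A quick Python sanity check of those figures (10! directly, and the order of
magnitude of 1,000,000! via `lgamma`, since the full number has millions of
digits):

```python
import math

# Orderings of n questions grow as n! (factorial).
print(math.factorial(10))  # 3,628,800 -- the "3.6 million" above

# 1,000,000! is far too large to print in full; get its order of
# magnitude instead: log10(n!) = lgamma(n + 1) / ln(10)
digits = math.lgamma(1_000_001) / math.log(10)
print(f"1,000,000! ~ 10^{digits:.0f}")  # ~10^5565709, i.e. ~8 x 10^5565708
```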

~~~
almost
The difference between infinite and "essentially infinite" is that one is
infinite and one finite. That is a large number, but it's not infinitely
large.

~~~
ars
You are speaking about math, I am speaking about complexity theory.

~~~
almost
As far as I'm aware complexity theory doesn't have its own special definition
of "infinite".

Just to be clear, I'm totally in agreement with you that a lookup table based
AI would not actually be possible in the real world, but I don't think any
sane person would claim that.

~~~
ars
I didn't say infinite! I said "essentially infinite".

I would have used the exact term from complexity theory, except I don't know
what it is for n! - the highest they have is EXPSPACE which is 2^n (which is
smaller - much smaller).

~~~
hugh3
I thought you said "pretty much infinite". Either way, neither "essentially
infinite" nor "pretty much infinite" is a meaningful concept (I can just
picture infinity, off at the nonexistent end of the number line, laughing at
the puny numbers you describe as "essentially infinite"), so it'd be better to
just rephrase it as "really really big", with as many "really"s as you feel is
appropriate.

------
kanak
I took a theory of computation class with Prof. Aaronson and this brought back
some fond memories. I like the fact that he's formal when he needs to be, but
doesn't shy away from using simple "real-life" examples when he can. Think
about how many professors would use a line like:

> can the human mind somehow peer into the Platonic heavens, in order to
> directly perceive (let's say) the consistency of ZF set theory? If the
> answer is no -- if we can only approach mathematical truth with the same
> unreliable, savannah-optimized tools that we use for doing the laundry,
> ordering Chinese takeout, etc. -- then it seems we ought to grant computers
> the same liberty of being fallible. But in that case, the claimed
> distinction between humans and machines would seem to evaporate.

Wonderfully written. I may not understand Gödel's theorems or ZF, but I did
gain an understanding of some of the faults in Penrose's argument.

------
narag
How old is the book he mentions? Is the argument still taken seriously, other
than as a mental exercise?

If the answer is yes, I'd be puzzled. It's naive to think that computers in
the 23rd century will be anything like current ones. _We_ are, after all,
machines, and I see no reason why any complexity (quantum, midichlorians or
whatever) that we have couldn't be built into future computers.

That's like a 19th-century scientist saying that a microscope will never be
able to see atoms. Surely he could have demonstrated it with good math.

Rephrased: the argument should be "computers as they are built now can't
think", which is obvious simply because they lack the power.

A reasonable simulation of consciousness that tries to resemble that of a
living organism would need at the very least senses, pain/pleasure, and
feedback (if I want to move, I can), all thrown into an integration layer
that "feels" everything at once.

First we should see if we're able to simulate an ant, then a mathematician
brain :-)

Current simulations of living beings or colonies have value for statistical
studies, but they're nothing on which such impossibility assertions can be
based.

~~~
hugh3
It may still be viewed seriously by people who really _want_ to believe that
there's something essentially different about human consciousness, and that we
are not "just" fancy meat-computers.

I don't know if it has ever been taken seriously by anyone who doesn't
approach the problem from that point of view, though.

------
naveensundar
Encapsulated, this is Penrose's argument in crude fashion:

1\. Humans can perform _Turing-uncomputable_ operations (mathematicians
mainly)

2\. _All_ of present-day physics is computable

3\. So there should be some uncomputable physics yet to be discovered

4\. Penrose feels this new physics should be in quantum mechanics

A short video of Penrose's argument:
<http://www.youtube.com/watch?v=9Gl6aThSs1s>

On one hand, the very notion that a computing device is conscious is patently
ridiculous. Why? Because everything computes something. Even my fan computes a
function, and so does the engine of a car [1], but are those artifacts
conscious? The only honest answer is to say that this is a difficult problem.

This is a very good interview with David Chalmers (the philosopher who coined
the term "hard problem of consciousness") that I found on HN some time back:

<http://www.youtube.com/watch?v=NK1Yo6VbRoo>

Obligatory XKCD reference: Can a bunch of rocks be conscious?
<http://xkcd.com/505/>

[1] Very trivially they compute the function which describes their dynamics.

------
terra_t
Penrose's argument has always been full of it.

Humans aren't Turing-complete; they can certainly emulate a small, finite
Turing machine, but they can't emulate an arbitrarily large one.

An individual human is not complete in any way; look at how many people have
failed to prove Fermat's Last Theorem.

Finally, people aren't consistent; Roger Penrose can say "Penrose will never
utter this statement" and the roof doesn't fall in.

------
kanak
> why not go further, and conjecture that the brain can solve problems that
> are uncomputable even given an oracle for the halting problem

Can anyone give me an example of such problems?

~~~
ihodes
The halting problem, for instance: whether or not a e.g. Turing machined will
halt on a given input or loop endlessly.

~~~
kanak
Sorry if I wasn't being clear. I mean problems that remain undecidable even
when you have an oracle for the Halting Problem.

~~~
robinhouston
I don't know whether you're going to like it, but the most obvious example of
a problem that is undecidable even given an oracle for the halting problem is
the next halting problem up: i.e. given a Turing machine that has an oracle
for the halting problem, does it halt?
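
The proof is the same diagonal trick, one level up. A minimal sketch in
Python, where `decider` stands in for any claimed (oracle-equipped) halting
decider; it's a hypothetical stand-in, not a real API:

```python
def diagonalize(decider):
    """Build a program that behaves opposite to whatever the
    claimed halting decider predicts about it."""
    def d(prog):
        if decider(d, prog):   # decider says "d halts on prog"...
            while True:        # ...so loop forever instead
                pass
        return "halted"        # decider says "d loops", so halt
    return d

# Any concrete decider is wrong somewhere; e.g. one that claims
# everything loops is refuted by running d on itself:
claims_all_loop = lambda machine, inp: False
d = diagonalize(claims_all_loop)
print(d(d))  # prints "halted", contradicting the decider's verdict
```

The same construction goes through whether or not the decider consults a
halting-problem oracle, which is why the hierarchy never stops.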

~~~
wlievens
Turtles all the way down.

