
The Lucas-Penrose Argument about Gödel's Theorem (2012) - lainon
http://www.iep.utm.edu/lp-argue/
======
mannykannot
Part of this discussion illustrates something about philosophy that once
struck me as odd. Lucas carefully constructs a somewhat formal argument for
his position, that would, if correct, rule out the computational theory of
mind. One counterargument takes off from the point that Lucas' argument
assumes that any computational model of a mind must be consistent, and depends
on a mind being able to establish its own consistency. In response, we are
told "Lucas thinks it is unlikely that an inconsistent machine could be an
adequate representation of a mind." So now the careful, structured argument is
reduced to a personal opinion? (The article includes some more specific
variations on the theme by Lucas, but they all seem to me to be appeals to
plausibility.)

To me, this seems to leave Lucas with the burden of proof, and until he or
someone else can counter this objection with something more than an opinion,
the argument is dead in the water. I used to wonder why philosophy would hang
an elaborate argument on personal opinions, but I have come to appreciate that
whenever you speculate about matters where there is insufficient evidence to
settle the issue, it always comes down to opinions, though this is often
overlooked in the discussion.

~~~
Gormisdomai
You are doing good philosophy here. Though I'd be careful about moving from
the fact that Lucas' argument comes down to a personal opinion to the claim
that all philosophy comes down to personal opinions.

I assume you already know about the concept of "commitment" in Philosophy, but
I'll explain it briefly here in case anyone else comes across this comment.

Sometimes, while you can't show that a claim is categorically false (given the
available evidence), you can show that people who believe it are committed to
believing other things. In this case Lucas' own argument has committed him to
the "personal opinion" that an inconsistent machine cannot be an adequate
representation of mind. But his belief isn't entirely unfalsifiable. If new
evidence makes him give up this belief (e.g. the rise of paraconsistent
logics), he'll have to give up his original belief along with it.

~~~
TheOtherHobbes
It's not about personal opinions. Or rather it is, but not in an obvious way.

Human minds are clearly not consistent. Human minds are a loose collection of
cobbled-together heuristics.

Logic is one of those heuristics. It has limits because ultimately "logic"
ends up being not a personal opinion, but a personal experience of the truth -
or otherwise - of a proposition.

Developed and trained minds have generated a heuristic of independent
consensus about truths and experience, and strategies for increasing the
correlation between the truth experience and the reality experience.

Less developed untrained minds have a much more naive and subjective
experience of "truth."

But both work on the same foundation - a nebulous and indefinable subjective
experience that is _associated_ with truth, and is typically mistaken as the
definition of it.

It's impossible to "prove" truth, because the experience of deciding whether
or not something is true is absolutely subjective.

Any attempt to make an objective definition will always be filtered through
that subjectivity and reduced to either having that experience or not having
it.

We're back to qualia, as usual. No one knows what "having an experience"
really means. Until that question is answered, there are no tools for dealing
with the rest.

You can argue "But what about the scientific method, and mathematical proofs?"
Same answer - they're practices that generate the truth experience in trained
minds. They have a better predictive record and are better at finding
consistent patterns than the disjointed heuristics of untrained minds. But
they still don't - and can't - prove "truth" in any kind of ultimate
definitive guaranteed-to-be-objective way.

They certainly can't guarantee that the truth experience is never biased,
contingent, limited, naive, circumstantial, or possibly even just plain wrong.

The implications for AI are obvious. If a machine is better at finding
consistent patterns, it will appear smarter than a human. It will still be
smarter even if it makes errors that are obvious to a human, because at the
same time it _won't_ be making errors that humans do make, and which are
similarly "obviously wrong" to the AI.

It's the hit rate that matters, not idealised philosophical perfection.

------
enugu
The Penrose argument, while it can be challenged, is much more careful than
the dismissals here suggest.

For instance, to those saying 'humans are not consistent': we are talking
about effort directed at a well-specified formal domain, after long-term
reflection. Of course people make mistakes in mathematics all the time, but
the claim concerns consistency after intense review over a long period of
time. We write a program and can't find the bug even if we look at it for a
googol years.

As for the claim that 'the Gödel statement is not interesting relative to
other statements': the goal is to demonstrate the non-algorithmic nature of
_some_ human activity, which would itself be a big deal if true.

A more qualified version of the LP claim can be expressed as follows: one of
the following is true. 1) The activity of humans in some formal domain (like
mathematics) is inconsistent in a deep way (not routine mistakes). 2) Humans
are reflectively consistent and algorithmic, but will never have access to the
source code (a claim about the limits of brain research). 3) We are
reflectively consistent and can access the source code, but won't be able to
see why the source itself is consistent.

------
serhei
Both the Lucas-Penrose argument and the counterarguments seem to assume that
consciousness is a phenomenon within the laws of physics which arises from
more fundamental laws (e.g. chemistry or quantum mechanics), which is (to me)
a completely non-obvious assumption.

For example, I perceive my own thoughts rather than other people's. This
implies that my own thoughts would have an additional physical property, that
of being perceived by me, which the thoughts of other people lack. However,
according to all up-to-date theories of physics and neurology, there is no
difference between my own thoughts and other people's that would explain the
presence of this additional property.

~~~
lucozade
For you to perceive your thoughts doesn't require properties different from
other people's thoughts. Just different values of common properties.

It's not clear why you would expect this to be inherently different from any
other neurological response. For example, if I stub my toe, are you suggesting
that science doesn't have an answer to why you don't feel it?

~~~
serhei
I am suggesting that the fact that I (the perceiving subject) am myself (the
biological entity) rather than you (a different biological entity, but
ostensibly subject to the same laws) is (a) an objective fact (b) not within
the domain of science. I do not expect this fact to map to any observable
physical properties.

~~~
lucozade
It's because the thing generating the thoughts (you) is coupled to the thing
perceiving the thoughts (you) via pathways that can carry the information. I,
however, am not connected to that thing (you) via equivalent pathways.

I am, as luck would have it, connected in a more or less identical fashion to
the thing creating my thoughts (me).

Physics is ok with all of that.

------
lisper
Refuting Lucas is very simple: Godel sentences can only be constructed for
circumscribed systems, i.e. a system where you have written down _all_ of the
axioms and inference rules. So the only way to circumscribe a human brain is
to isolate it from its environment. As soon as you let it communicate with
(say) another human brain, you can no longer Godelize it because that brain is
not the system any more, it's the two communicating brains working together.
_That_ system might have a Godel sentence of its own, but it's not the same as
the Godel sentence of individual brains.

So if I can put you in an isolation tank and model your brain, then I can in
fact (at least in principle) construct your Godel sentence and know for
certain that you will never come to the realization that this outrageously
complicated mess I've just constructed (the one that says of itself that you
will never come to see its truth) is true, and I can feel smug about the fact
that I can see that this is true and you never will. But here's the kicker: I
can never gloat about my superiority to you, because as soon as I do you are
no longer in isolation. You are communicating with me, and the sentence I've
taken great pains to construct is no longer your Godel sentence. It is the
Godel sentence of you in isolation, but as soon as I tell you your Godel
sentence -- indeed as soon as I (or anyone else) say _anything_ to you, my
construction falls apart and I have to start over.

Ironically, far from proving that we are not machines, Lucas's argument
actually goes a long way towards demonstrating that we _are_ , because coming
up with examples of things that we interacting brains know to be true that a
particular isolated brain will (with extremely high probability) never come to
realize is not difficult. The four-color theorem, for example, is probably
beyond any single isolated human brain. Indeed, the very argument you are
reading now is probably beyond any single human brain because the brain that's
writing it would never have come up with it had Godel and Lucas (and Turing)
not paved the way.

~~~
ifdefdebug
Does your argument hold if you consider that input to the brain is limited to
the reception of electromagnetic/mechanical waves through a set of sensors,
the fact that one can transmit speech (and Godel sentences) through those
sensors being just a higher-level abstraction?

Put in another way: Couldn't I in principle define such a circumscribed system
with all axioms and inference rules by including all of those sensors and all
of their possible states into the system?

~~~
lisper
Yes, of course you can. But if _you_ are _part_ of that system, then _you_
can't Godelize it. In order to Godelize a system, not only does the system
need to be circumscribed, but the thing doing the Godelizing has to be
_outside_ of the boundaries of the circumscription.

Let's recall the original argument:

"a mechanist formulates a particular mechanistic thesis by claiming, for
example, that the human mind is a Turing machine with a given formal
specification S. Lucas then refutes this thesis by producing S’s Gödel
sentence, which we can see is true, but the Turing machine cannot. Then, a
mechanist puts forth a different thesis by claiming, for example, that the
human mind is a Turing machine with formal specification S’. But then Lucas
produces the Gödel sentence for S’, and so on, until, presumably, the
mechanist simply gives up."

When you are presented with S there are two possibilities: either S includes
you, or it doesn't. If S includes you, then you can't Godelize it. If S
doesn't include you, then you might be able to Godelize it, but if you present
your result to the thing that S is supposed to be a model of -- indeed as soon
as you interact with that thing in any way -- then S is no longer an accurate
model of that system because now it is interacting with something that, by
assumption, was not part of the model.

------
manyoso
The most interesting part was the suggestion that Gödel himself agreed with
Penrose's argument against Strong AI. I think this will be highly contentious.

~~~
manyoso
Here is the quote from Gödel supposedly supporting this:

"So the following disjunctive conclusion is inevitable: Either mathematics is
incompletable in this sense, that its evident axioms can never be comprised in
a finite rule, that is to say, the human mind (even within the realm of pure
mathematics) infinitely surpasses the powers of any finite machine, or else
there exist absolutely unsolvable diophantine problems of the type specified"
. . . (Gödel 1995: 310).

------
alberto_ol
Solomon Feferman on Penrose’s Gödelian argument

[http://math.stanford.edu/~feferman/papers/penrose.pdf](http://math.stanford.edu/~feferman/papers/penrose.pdf)
(PDF)

------
mcguire
1\. If Lucas and Penrose are right, there are statements which are
unequivocally true, but which cannot be proved in any known formal system. It
should be possible to exhibit a few. (Extra credit if they avoid theology.)

If the only such statements are artificial Gödelian sentences, I'm afraid
I'm...uninterested, in the same way Lucas is uninterested in inconsistent
humans.

2\. Lucas and Penrose assert that some entities are significantly more
powerful than others. To my knowledge, Lucas describes no mechanism for this.
(To his credit, Penrose does, but, to my knowledge, the Bell inequality among
other things means quantum mechanics is fundamentally random. That's not
helpful.)

What could be the nature of such a mechanism? If it's material, it should be
possible to include it in future machines. If it's not, we are in the realm of
theology again.

3\. So humans are intelligent and intelligent-seeming machines only simulate
intelligence. If the difference is not common, obvious, and readily
meaningful, then where is the distinction? ("So what? I can't write an opera
either.")

~~~
lisper
> there are statements which are unequivocally true, but which cannot be
> proved in any known formal system

No. You are misunderstanding Godel's theorem. For any given formal system
there exists a Godel sentence which the system cannot prove. But the Godel
sentence for every system is _different_. Moreover, there is no _algorithm_
for constructing a Godel sentence. If there were, then a TM could follow that
algorithm to construct its own Godel sentence.

So every formal system has a (different) Godel sentence, but there does not
exist a universal Godel sentence that is unprovable by all formal systems.

~~~
enugu
Constructing the sentence is algorithmic. The key part is asserting that the
sentence is true. For instance, ZFC will prove A: (ZFC can't prove
GodelStatement(ZFC)) OR (ZFC is inconsistent). GodelStatement is an
algorithmic function which takes a list of axioms in some formal framework.

Now, let's make ZFC', which is ZFC + 'ZFC is consistent'. ZFC' can prove A (as
it contains ZFC) and uses the consistency of ZFC to prove A': ZFC can't prove
Godel(ZFC).

Similarly, a program that decides a set of problems (like 'does this Turing
machine halt' or 'does this diophantine equation have solutions') with access
to its own source code will be able to algorithmically generate a problem
instance on which it will not halt. You can modify the program to create a new
program which will use this information, but it will be a new program.
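
The diagonal construction in the last paragraph can be sketched in a few lines
of Python. This is a toy model under simplifying assumptions: deciders are
modeled as Python callables rather than source code, and `make_diagonal` /
`pessimist` are made-up illustrative names, not anything from the literature:

```python
def make_diagonal(decider):
    """Given a claimed halting-decider `decider(f) -> bool`
    (True meaning "f() halts"), build a function the decider is
    guaranteed to misjudge: it does the opposite of the prediction."""
    def diagonal():
        if decider(diagonal):
            while True:      # predicted to halt -> loop forever
                pass
        return "halted"      # predicted to loop -> halt immediately
    return diagonal

# A toy decider that claims nothing halts:
pessimist = lambda f: False

d = make_diagonal(pessimist)
print(d())           # prints "halted" -- pessimist was wrong about d
print(pessimist(d))  # False, despite d() demonstrably halting
```

Whatever fixed decider you plug in, `make_diagonal` mechanically produces a
counterexample to it; that is the sense in which the construction is
algorithmic. Asserting a truth about the counterexample is the step that
requires standing outside the decider.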

~~~
lisper
> Constructing the sentence is algorithmic.

No, it isn't. But it's a non-trivial point.

> Similarly, a program that decides a set of problems (like 'does this Turing
> machine halt' or 'does this diophantine equation have solutions') with
> access to its own source code will be able to algorithmically generate a
> problem instance on which it will not halt. You can modify the program to
> create a new program which will use this information, but it will be a new
> program.

So this all depends on what you consider a "new program". Suppose we write a
program that actually does this: analyzes its own code, algorithmically
produces the Godel sentence for that code, then modifies its code to include
that as an axiom, forever. _That_ program is a program, and _that_ program
will _also_ have a Godel sentence, but one which it cannot construct using
_that_ algorithm. You need a _different_ algorithm to construct Godel
sentences for programs that construct Godel sentences for programs that don't
construct Godel sentences.

The result is a hierarchy of Godel-sentence constructors. At any level in the
hierarchy the procedure is algorithmic, but going from one level to the next
is not. It's similar to the process of constructing the hierarchy of ordinals,
which is also provably non-algorithmic.
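
The tower of extensions being described can be written out explicitly; this is
the standard sequence of consistency extensions (the notation below is mine,
not from the thread):

```latex
T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n), \qquad
T_\omega = \bigcup_{n} T_n, \;\ldots
```

Each single step is mechanical: $T_{n+1}$ proves $\mathrm{Con}(T_n)$ by fiat,
yet, if consistent, it still cannot prove $\mathrm{Con}(T_{n+1})$. Continuing
the tower through limit stages like $T_\omega$ and beyond is where the ordinal
bookkeeping enters.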

~~~
enugu
There is a program, G, which takes as input the source code of a program P (in
some fixed language) and outputs a Turing machine T which doesn't halt, such
that either P will fail to say that T doesn't halt, or P has mistakes (claims
halting for some Turing machines when they don't halt). Some programs P can
algorithmically produce P', P'', etc., but as long as that process is part of
the source code of P, G will work.

~~~
lisper
Sure. So? How does that in any way refute what I said?

~~~
enugu
The point of my first post was that G is algorithmic. What you are referring
to is the repeated extension of G, which gets into issues of ordinals.

~~~
lisper
Yes, but I'm the one who made the original claim that "there is no algorithm
for constructing a Godel sentence". So I'm the one who gets to say what I
meant by that :-)

------
Udo
_> Now suppose that we construct the Gödel sentence for this formal system.
Since the Gödel sentence cannot be proven in the system, the machine will be
unable to produce this sentence as a truth of arithmetic. However, a human can
look and see that the Gödel sentence is true. In other words, there is at
least one thing that a human mind can do that no machine can. Therefore, “a
machine cannot be a complete and adequate model of the mind” (Lucas 1961:
113). In short, the human mind is not a machine._

That is not a difference between human and machine. It's a rhetorical trick at
best, and an unreasonable appeal to mysticism at worst.

A human mind looking at the Gödel sentence while strictly adhering to the same
formal structure could _also_ not see that the Gödel sentence is true.
Conversely, a machine that is allowed to operate outside of this postulated
formalism could perfectly well make an inference that the Gödel sentence is
true.

The Lucas-Penrose argument boils down to a questionable attempt at a logic
bomb. By first tying the machine's hands, they can argue that the machine's
hands are indeed tied, and therefore humans - who in this example have been
given free rein - can do something that machines were explicitly disallowed
from doing.

~~~
scandox
What formalism, if any, is the human mind subject to? Isn't our current
conception of a machine that it will be bound to some formalism in terms of
reasoning? Are we so bound?

~~~
Udo
That's a loaded argument.

Human biology, and by extension human minds, follow a myriad of rules that are
a result of biological processes being played out. Likewise "machines" also
follow the rules laid out by their physical design. There is no fundamental
difference in this regard, there is no new set of information-theoretical
rules that comes into play when you switch to a carbon-based machinery.

> What formalism, if any, is the human mind subject to?

In the context of this argument, the mind of the human mathematician would be
subject to the rules of the formal system. Since Gödel we know that such a
formal system cannot prove its own consistency, nor its own Gödel sentence.
However, an entity that views the system from the _outside_ can potentially
see that the sentence is true. There is no fundamental difference between a
human mind and a hypothetical machine mind as far as the capability to come to
the same conclusion is concerned. That's why Lucas took special care to
specify upfront that the machine mind is prohibited from coming to that
conclusion, by confining it to live within the formal system only. Yes, their
argument is really that circular.

> Isn't our current conception of a machine that it will be bound to some
> formalism in terms of reasoning? Are we so bound?

We are not, and neither would a generally intelligent machine. The burden of
proof is on Lucas/Penrose to show that a generally intelligent machine would
still be bound to that formalism, at which point it would cease to be
considered generally intelligent. They seem to argue that their postulate is
true because the _individual components_ of such a machine are bound by those
rules, but then again so are the individual components of our brains.

~~~
manyoso
So your claim is that the human mind is a physical machine and that computers
are also physical machines, and thus both are bound by physical laws... and
this negates Penrose how?

Do you think Penrose disagrees that the human mind is bound by physical laws?

~~~
Udo
I'm not making a claim, I argue based on what we already know. Penrose is the
one claiming something here.

 _> Do you think Penrose disagrees that the Human mind is bound by physical
laws??_

Yes. Having read his work in the past, I _know_ that he disagrees that the
human mind is bound by physical laws. This is the whole reason behind the
argument they're putting forth to begin with: they postulate that humans are
capable of making an inference that could not possibly be made by a machine
intelligence.

However, it has never been shown that we are _not_ a machine intelligence.

~~~
manyoso
Then I am sorry to say you have no idea what you are talking about. Penrose
_nowhere_ claims the human mind is not bound by physical laws.

Cite one place where he says anything like this or go home.

~~~
simonh
He argues that quantum magic enables human minds to do things that formal
computer systems cannot do. I'm not sure what his position is on computers
that incorporate quantum magic though.

~~~
Udo
I suspect the argument _manyoso_ is setting up here is based on sophistry
around the concept of "physical laws". His stance is most likely that quantum
woo is part of the physical laws, in the same way a fundamentalist religious
person would assert that the content of a holy book is part of the physical
laws.

What I - and I hope most people - mean by the phrase is different though: the
body of actual physical laws that have been established scientifically.

> I'm not sure what his position is on computers that incorporate quantum
> magic though.

Interesting thought experiment. I suspect he would be fine with those in
theory; however, since he already declared this special connection to the
supernatural for humans, it'd be interesting to see what computer components
he would consider equivalent. But the beauty of his position is that, in the
event of a
generally intelligent computer emerging, he could always claim that any given
nanoscale computing component is the silicon analogue of "quantum
microtubules" (which was his woo-related organelle of choice in neurons). It's
not like he ever has to prove his claims in order to be taken seriously.

~~~
dang
> _I suspect the argument manyoso is setting up here is based on sophistry
> around the concept of "physical laws". His stance is most likely that
> quantum woo is part of the physical laws, in the same way a fundamentalist
> religious person would assert that the content of a holy book is part of the
> physical laws._

That falls far below the standard of discussion we ask users to abide by here.
Please make your points directly and without such cheap insinuations.

~~~
Udo
It's not an _incorrect_ summary of his position. I agree it may not look like
a totally objective characterization on my part, _but you're saying this in a
discussion where I was literally told to "go home"_, by the same user you're
protecting here.

If you think my comments in their entirety detracted from the discussion, I'd
like you to tell me. Otherwise I'm asking you to consider this statement in
the context of both manyoso's and my comments as a whole.

This is a discussion about quantum quackery, and you're stepping in on behalf
of the pro-quackery side that has no problem telling me personally that I
"have no idea what" I'm "talking about", and other dismissive discourse.

~~~
dang
I chided manyoso elsewhere. The problem with your comment was not that it
incorrectly summarized anything, it's that it broke the HN guidelines by
calling names ("quackery", "sophistry", "woo", "fundamentalist"). That matters
far more to HN than whether Penrose is wrong about Gödel.

Commenting here requires being scrupulously civil, even when others are not.
Any lower standard would lead to an unstable system.

~~~
Udo
In a discussion where one side freely makes stuff up as they go along and
attempts to put the burden of proof on everybody else, I stand by these
characterizations.

When someone asserts in an authoritative tone that quantum gravity is the key
to unlocking consciousness at the heart of the universe, that is utter hogwash
and deserves to be called out as such. You may personally not like my pointing
this out, but the entire "field" of quantum consciousness is a religious
movement that makes unsubstantiated claims about the nature of reality.

It actually _is_ quackery. Redefining the term 'physical laws' to include
magic in order to shield an opinion from criticism _is_ actually sophistry.
Insisting that spiritual convictions are factual statements about the universe
is _actual_ fundamentalism. These aren't bad words wielded to hurt someone,
they are supposed to convey meaning.

I never meant to denigrate anyone personally. I strongly feel that opinions
must be fair game though when it comes to criticism. In doing so I suppose I
come across more harshly than I intended, or maybe I'm just exceptionally bad
at articulating myself. Whatever my intent, it's clearly not working out all
that well. It's probably a good idea for me to take a break from HN for a
while.

------
yongjik
Well, I don't think our brains work by applying "logic" when we think about
Gödel's theorem (or anything else). We evolved to do pattern matching, so each
of us spends decades learning more and more sophisticated pattern matching
techniques, starting from mom and dad's faces to counting things to
multiplication tables to, finally, understanding Gödel's theorem (for some
people).

In light of this, we can reinterpret "However, a human can look and see that
the Gödel sentence is true" as:

 _When you take the best pattern matching machine nature has produced, train
it with distilled essentials of centuries of mathematical geniuses, and
present it with the Gödel sentence, the pattern matcher strongly signals "This
looks true!"_

When viewed this way, it looks less like an inherently human-specific
achievement and more like exactly how ML algorithms work. The only difference
is that ML cannot quite pull it off, so far.

------
lainon
[http://www.cs.bham.ac.uk/~mmk/papers/05-KI.html](http://www.cs.bham.ac.uk/~mmk/papers/05-KI.html)
(Why is the Lucas-Penrose Argument Invalid?)

~~~
fiatjaf
> The matter is confused by the prima facie paradoxical fact that Gödel proved
> the truth of the sentence that "This sentence is not provable."

Gödel didn't prove that. He saw that it was true.

------
empath75
What if the human mind is computational, but isn't one unified system? I think
most neurologists today believe the mind is composed of many loosely coupled
systems communicating with each other.

~~~
serhei
A collection of loosely coupled computational systems still forms a
computational system, not some fundamentally different entity.

------
Chronos
Ugh, this again.

1\. Let's suppose for sake of argument that humans really can see the inherent
truth of "Peano Arithmetic is consistent". That doesn't mean humans violate
Gödel's Incompleteness Theorem: it could just mean that humans use axioms
stronger than PA.

2\. Gödel's Incompleteness Theorem only applies to systems that are perfectly
logically consistent. Not sure how Penrose didn't notice, but humans...
aren't.

3\. When scientists proposed Quantum Mechanics as a replacement for Classical
Mechanics, it was on them to explain how Quantum Mechanics simplified to
Classical Mechanics in the common case. "Penrose Mechanics" is an even more
radical departure — especially from a physics of computation standpoint, as
Penrose Mechanics by definition would allow solving at least some of the
problems in (ALL - R) in ~polynomial time. Penrose needs to explain how
Penrose Mechanics reduces to Quantum Mechanics in the common case.

4\. Penrose proposes that (a) there exist new physics, (b) that evolution has
learned to computationally exploit the new physics via microtubules, and yet
(c) that humans are the only lineage to make use of this feature of
microtubules, even though microtubules are found in all eukaryotic cells (from
mushrooms to amoebae). From a predator-prey standpoint alone, it would
seemingly be a huge evolutionary advantage to be able to compute NP or R
functions in polynomial time. (That ability is not _strictly_ implied by
Penrose Mechanics, but it's a very likely consequence.) Penrose needs to
explain why only humans are taking advantage of the computational power of
microtubules, when microtubules have existed for billions of years and across
millions of species. (TL;DR: It's the pineal gland all over again.)

~~~
manyoso
#1 Please show how to program a Turing machine to use "axioms stronger than
PA".

#2 Please show how to program a Turing machine so that it is not logically
consistent at a fundamental level.

#3 Are you questioning the viability of objective collapse theories in
general? To date we have no experimental evidence of said theories, but I
don't think anyone has suggested they are inconsistent with QM. Are you? If
so, please show how...

#4 Show where Penrose has said humans are the only species to make use of
microtubules. And your claims regarding Penrose's theory of objective collapse
giving staggering new computational speedups are ... without any evidence.
Please provide some.

The tenor of your comment suggests you find Penrose to be an idiot and missing
obvious problems. Consider that perhaps you are the one misunderstanding what
he is proposing and that he is not, in fact, an idiot.

~~~
Chronos
1\. Easy. Program a computer with the axioms of ZFC. Not powerful enough?
Program it with ZFC+Con(ZFC). Repeat as necessary.

2\. The Turing machine itself is logically consistent. The semantic
interpretation of its data need not be logically consistent: it's possible to
write a computer program that prints "2+2=5" to the screen. Likewise, a human
brain may be made of physics and physics may be logically consistent, but the
semantics of the data in the brain (the "beliefs") need not be consistent.

3\. No, no need to reject collapse itself (though I'm more of an Everett guy).
Penrose postulates the existence of a new level of physics that is _not_
Quantum Mechanics. People who aren't careful call it an extension of QM, but
QM's computing model is limited to solving a small subset of NP problems (BQP)
in polynomial time, and BQP is theorized to be a proper superset of P.
However, Penrose's proposed physics would allow solving problems outside of NP
in polynomial time. Hence, Penrose's physics IS NOT quantum physics.

4\. It is physically impossible for computers (as we understand them today) to
solve problems outside of the set we call Turing-computable. Penrose claims
that there's at least one problem outside that set that the human brain can
nonetheless solve: checking the consistency of formal systems. If Penrose's
assumption were true, computers (as we understand them today) would
consequently be strictly less powerful than the human brain. The crux of
Penrose's position is accepting this consequence as true, then asking how
that's possible. I reject that hypercomputation of problems beyond Turing
computability is what's happening; I think humans simply hold unjustified
beliefs about the answers to such problems.

I do not doubt that Penrose is an intelligent and educated person. However,
his expertise is General Relativity. It's bad enough when he tries dipping his
toes into Quantum Physics, but when he wades into Physics of Computation, he
is no longer acting as an expert, but as an interested layman. It's only
because of the authority he holds as a GR expert that his argument is treated
seriously, but that authority is not actually relevant to his argument.

~~~
manyoso
#1 Not easy. Prove it. Show a Turing machine programmed with ZFC that cannot
be modeled by a Turing machine programmed with PE. You cannot do so, because
Turing machines are universal. The fact that you claim this is easy tells me
you do not understand what a Turing machine is.

#2 See fundamental level. Penrose claims that what sets us apart from Turing
machines is fundamental. In any case, you've undercut whatever point you
wanted to make by saying humans are not logically consistent. Your snark
about Penrose not understanding is as empty as whatever point you were trying
to make.

#3 Cite a paper showing that objective collapse gives these supposed
algorithmic speedups or go home. NOTE: objective collapse theories are an
active area of research and are not limited to Penrose by any means. Your
claim that they are inconsistent with QM begs for evidence. Cite some or stop
spreading nonsense.

#4 Your answer here is completely devoid of any context to the question: where
Penrose says that microtubules are limited to humans. I take it you concede
that he did not say any such thing?

You treat Penrose as an idiot missing obvious problems. I think it far more
likely that you've misunderstood. Pity your lack of humility might make it
impossible for you to understand what he actually argues rather than your
strawmen.

~~~
Chronos
Also: before I bother building a Turing machine that implements a proof-
generator for statements in ZFC, you should do me the courtesy of showing your
investment by building me a Turing machine that multiplies two integers.

Turing machines _suck_. Building a Turing machine that implements ZFC proof-
generation is a project appropriate to a graduate-level paper, not something
to toss off in an Internet pissing contest.
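The tedium is easy to demonstrate. Here is a toy Turing machine simulator plus a machine for mere *unary addition* (my own encoding, not anything from the thread); note how much transition-table bookkeeping even that takes:

```python
def run_tm(tape: str, transitions: dict, state: str = "scan", blank: str = "_") -> str:
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape, pos = list(tape), 0
    while state != "halt":
        if pos == len(tape):          # extend the tape on demand (rightward)
            tape.append(blank)
        write, move, state = transitions[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Unary addition: "111+11" should become "11111".
ADD = {
    ("scan", "1"):     ("1", "R", "scan"),      # walk over the first block
    ("scan", "+"):     ("1", "R", "seek_end"),  # fuse the two blocks
    ("seek_end", "1"): ("1", "R", "seek_end"),  # walk to the right end
    ("seek_end", "_"): ("_", "L", "erase"),     # step back off the blank
    ("erase", "1"):    ("_", "L", "halt"),      # drop the extra mark
}

print(run_tm("111+11", ADD))  # -> 11111
```

Multiplication needs several more states and a nested copy loop, which is exactly why "build me a Turing machine that multiplies" is a fair challenge.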

~~~
manyoso
You were the one who used the phrase "easy" with regard to Turing machines and
the feasibility of exhibiting them.

~~~
Chronos
Fine. "Straightforward, if menial and tedious".

------
ccvannorman
Site is down. Anyone have a mirror?

------
Crito
Frankly, it's irrelevant. Penrose's insistence that the human mind isn't
algorithmic in the Turing sense is born from his human insecurities. But if
the human mind isn't bound in such a way, then any man-made computer need not
be bound either. Penrose's argument falls short of being a proper dualist
argument. Furthermore, Roger Penrose has not proven his claims, and they are
not supported by mainstream neuroscientists. He appeals to quantum woo that he
pulled out of his ass.

~~~
dang
> _born from his human insecurities_

> _woo that he pulled out of his ass_

It's not ok to argue like this on HN, so please don't. It lowers the quality
of discussion in its own right and sets off a downward spiral from others.

Edit: Actually, we've banned your account. Your comment history shows that
you've broken HN's civility rule so egregiously, and so repeatedly, that we
should have banned it a long time ago. If you don't want to be banned on HN,
you're welcome to email hn@ycombinator.com and promise to follow the rules
scrupulously in the future.

We detached this subthread from
[https://news.ycombinator.com/item?id=14446064](https://news.ycombinator.com/item?id=14446064)
and marked it off-topic.

------
falsedan
> _When we notice an inconsistency within ourselves, we generally “eschew” it,
> whereas “if we really were inconsistent machines, we should remain content
> with our inconsistencies, and would happily affirm both halves of a
> contradiction”_

I feel that only philosophers & logicians are this black-and-white. There are
many people content to occupy a midpoint on the spectrum of "This fact is
true"..."This fact is false"; some even will accept a range!

One fact I think we can all agree on is that philosophers are: old; white;
dead; men; and wrong.

~~~
falsedan
Love too get downvotes with no comments/feedback. Is this about the dead white
men? I take it back. I didn't mean it. I changed my mind. I never meant for it
to be taken this way. of all the possible interpretations of my words, I
intended you to adopt the one where they were both witty and erudite. It's
your fault you picked the wrong one. I stand by my words. I am right. Only I
am right. I have seen into the heart of Man and I know its true meaning.

