
Minds, Machines and Gödel (1961) - prismatic
http://users.ox.ac.uk/~jrlucas/Godel/mmg.html
======
Udo
Cells are machines, (our) minds are running on cells, therefore minds run on
machines. It doesn't _have_ to be more complicated than that. Unless you are
making an argument for supernatural influences. It's debatable whether minds
_are_ machines, at least in a sense that satisfies us, but that's another
subject.

 _> "Therefore the machine cannot produce the corresponding formula as being
true. But we can *see* that the Gödelian formula is true."_

Essentially, the other side's argument boils down to the idea that minds can
reason in a way that machines cannot - they can intuitively look behind the
Gödelian curtain so to speak.

However, this contest between "mind" and "machine" only works as long as you
tie the machine's hands behind its back while letting the mind roam freely: a
mind forced to operate within the same formal system would come to the same
conclusion as the machine, and conversely there is no reason to assume that a
machine allowed to apply intuitive reasoning would not come to the same
realization as the mind.

This entire argument is a philosophical parlor trick that only works as long
as you don't pay too close attention to the unequal restrictions imposed on
the two participants.

~~~
sgt101
You are confusing brains and minds. Our minds extend beyond our brains into
our bodies: people reason and act because of hunger and thirst. Our minds
extend beyond our bodies: we act on the basis of our friends and society.

~~~
Udo
 _> You are confusing brains and minds._

That's a baseless assertion. In fact, I made a point of distinguishing
between minds and the hardware they run on. By necessity, though, we need to
talk about the "interface" between the two.

 _> Our minds extend beyond our bodies, we act on the basis of our friends and
society._

How does that counter the argument that minds (already) run on machines?

~~~
phorkyas82
An "interface" like Descartes' pineal gland? Or would you prefer something
RESTful? Technical metaphors applied to the good ol' philosophical mind-body
problem may look modern, but they are also totally inept. There is no
architecture or VM to port your consciousness to, nor can you upload it to AWS.

To get more accurate metaphors for consciousness, maybe it's better to start
with the biological basis, like Plessner did:
[https://en.wikipedia.org/wiki/Helmuth_Plessner](https://en.wikipedia.org/wiki/Helmuth_Plessner)

------
zajio1am
I would say that the article rests on the flawed premise that the constraints
of Gödel's theorems do not apply to the human mind.

The fact that some theorems undecidable in a formal system can be 'seen true'
or informally proven is explainable by the simple fact that formal systems are
a 'step behind' intuition.

We have an intuitive grasp of mathematical structures and inference, and we
created neat and comprehensible formal systems that try to reflect that, but
they lag behind in some cases. We can iteratively improve them by adding more
axioms and more general inference methods, but that is somewhat futile, as the
first Gödel incompleteness theorem shows it will never make them complete.
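That iterative step can be made precise. A standard sketch (the symbols T and G_T are the usual textbook notation, not from the article):

```latex
% For any consistent, recursively axiomatizable theory T extending arithmetic,
% the first incompleteness theorem yields a sentence G_T that T neither proves
% nor refutes:
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
% Adding the missing sentence as a new axiom merely restarts the process:
T_0 = T, \qquad T_{n+1} = T_n + G_{T_n}
% Every T_n is again consistent and recursively axiomatizable, so it has its
% own undecidable sentence G_{T_n}; no finite or even computable sequence of
% such steps yields a complete theory.
```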

But there is no reason to assume that intuition is complete in the same sense.
It seems likely that if we repeated that step enough times, we would end up in
a situation where we couldn't progress, as intuition would fail on undecidable
theorems (either by being silent, or by being inconsistent between people, or
by being incomprehensible).

~~~
simonh
I love this bit:

>Nor could we make its inconsistency a reproach to it---are not men
inconsistent too? Certainly women are, and politicians; and {53} even male
non-politicians (121) contradict themselves sometimes, and a single
inconsistency is enough to make a system inconsistent.

Not 'love' in an agreeing sense. Boy are we in different times now.

Agreed, I see no justification for assuming that human minds are consistent;
in fact, that seems self-evidently not to be the case. Also, just because a
system is inconsistent doesn't mean it's incapable of generating or
'simulating' consistent systems. So we can conceive of a consistent system,
apply it to problems, and gain all the advantages of consistency within some
limited domain, while at the higher level being ourselves inconsistent. I see
no reason why an automated system could not do the same. This seems to me to
break the argument completely, though I confess I'm no great philosopher and
it's highly likely I'm missing something.

------
finm
A classic! Seems obviously wrong (see Udo's comment), but nobody can quite
seem to agree exactly what goes wrong. Probably Lewis' responses are best –
[https://philpapers.org/rec/LEWLAM-2](https://philpapers.org/rec/LEWLAM-2)

------
fancyfredbot
I didn't realise people were making these arguments back in the sixties. I
came across this in Penrose's much more recent books "The Emperor's New Mind"
and, to a lesser extent, "Shadows of the Mind", which make similar arguments
to this paper, appealing to incompleteness (and, in the second book, quantum
mechanics) to argue against strong AI. Both books are well worth reading, but
since I understand Turing machines are able to simulate both quantum and
classical machines, I found it hard to find them truly convincing.
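The claim that a classical machine can simulate a quantum one can be sketched in a few lines: a quantum state is just a vector of complex amplitudes, and gates are matrices. The gate and variable names below are illustrative, not taken from the paper or the books.

```python
import math

# A single qubit as a 2-entry complex state vector: |0> = [1, 0].
state = [complex(1), complex(0)]

# The Hadamard gate, a standard quantum gate, as a plain 2x2 matrix.
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]

def apply(gate, psi):
    """Classically simulate a gate: an ordinary matrix-vector product."""
    return [sum(gate[r][c] * psi[c] for c in range(len(psi)))
            for r in range(len(gate))]

state = apply(H, state)

# Measurement probabilities are the squared amplitudes (Born rule).
probs = [abs(a) ** 2 for a in state]
print(probs)  # an equal superposition, ~[0.5, 0.5], computed classically
```

The catch is cost, not possibility: simulating n qubits needs a 2^n-entry state vector, so the simulation is possible in principle but not efficient.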

~~~
deepnet
It is a question of efficiency: our brains run on about 15 watts.

Quantum computers offer many orders of magnitude of efficiency through a
(handwavy) 'searching the whole solution space' sort of ultimate parallelism.

The new field of quantum biology [1] shows that many biological processes
exploit quantum effects, e.g. enzymes exploiting quantum tunnelling to unknot
proteins, the robin's direction sense, and photon pathfinding to the reaction
centre in photosynthesis.

It is therefore not entirely unlikely that brains exploit various quantum
effects to make thinking much more efficient.

[1]
[https://www.youtube.com/watch?v=_qgSz1UmcBM](https://www.youtube.com/watch?v=_qgSz1UmcBM)

[https://www.youtube.com/watch?v=wwgQVZju1ZM](https://www.youtube.com/watch?v=wwgQVZju1ZM)

~~~
fancyfredbot
Yes, "Can a computer think?", and "Can a computer think efficiently?" are two
closely related questions you'd reason about in very different ways.

I feel like the paper and the books are really asking the first question and
not the second.

The second question is also very interesting! If quantum effects are involved
then a classical computer might never be able to "think" efficiently.

------
caleb-allen
Very very excited to read this after work! Thanks for posting

------
peter303
Even though Gödel proved that no sufficiently powerful formal system can be
both consistent and complete, you can probably get close enough for the
practical purposes of being intelligent.

------
danbmil99
I think John Searle wants his Chinese room back

~~~
wool_gather
The similarity is well-spotted, but this predates the Chinese Room by two
decades. Lucas's essay was actually cited in Searle's:
[http://cogprints.org/7150/1/10.1.1.83.5248.pdf](http://cogprints.org/7150/1/10.1.1.83.5248.pdf)

~~~
danbmil99
Good point. In any case, those arguments suffer from the same weakness.

Intelligence is not about formal systems proving theorems; it's about
navigating the real world as an organism, attempting to survive, thrive, and
procreate.

