
Gödel's Incompleteness Theorem explained - simonbrown
http://www.miskatonic.org/godel.html
======
lotharbot
It should be noted that Godel gave two Incompleteness Theorems.

The first says that, in any consistent system based on a list of axioms (one
rich enough to express basic arithmetic), there will always be true statements
that aren't provable within it. (Remember Mr. Spock confusing a computer by
telling it "I am lying"? "This statement can't be proved" behaves the same way
-- it's true, but you can't prove it, because if you could prove it, it would
be false, and the system would have proved a falsehood.) In other words,
_there is always truth the system can't account for_.
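
Roughly, in symbols (a sketch of the construction, not the full proof): the
theorem builds a sentence G that the theory T itself proves equivalent to "G
is not provable in T":

    G \;\leftrightarrow\; \neg\mathrm{Prov}_T(\ulcorner G \urcorner)

If T is consistent it can't prove G (and, with a bit more work, can't prove
not-G either), yet G is true.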

The second theorem builds on the first. It says such a system, if consistent,
can't prove its own consistency, and that if a system includes the claim "this
system is consistent" then it is necessarily inconsistent. (In essence, you
can construct a more elaborate version of the "I am lying" statement in any
system that includes the claim "I am telling the truth".) In other words, the
only way to prove something consistent is from outside of it.
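
In the same notation (again just a sketch), "this system is consistent" can
itself be written as an arithmetic sentence, and the second theorem says a
consistent T never proves it:

    \mathrm{Con}(T) \;:=\; \neg\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
    \qquad T \nvdash \mathrm{Con}(T)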

~~~
dmfdmf
> "This statement can't be proved"

This statement (and Godel's mathematical equivalent) does not assert anything
to be proved; it is contentless, which is why logical, axiomatic systems choke
on it, essentially due to recursion or self-reference. Logically there are
only two fundamental ways to err: by contradiction and by circular reasoning.

> In other words, the only way to prove something consistent is from outside
> of it.

This is what I take from Godel's theorems, but it is a poor formulation of the
idea. A better way to say it is that proof _presupposes_ consistency, and more
specifically the law of identity, which is a metaphysical law that has to be
_validated_, not proved (since proof depends on it).

~~~
gburt
A "strange loop", perhaps.

------
icandoitbetter
Its relevance in philosophy/epistemology/science/computing has been inflated
out of proportion by the confused arguments of Hofstadter, Penrose, etc. Just
because it's difficult to understand doesn't mean that it will tell you the
meaning of life. Of course it's a mathematical theorem of profound importance,
but it has absolutely nothing to do with the limits of rational thought.

~~~
Dn_Ab
I agree with your sentiment. Gödel's work applies only to formal axiomatic
systems of a certain strength. There are formal systems weak enough that they
can prove their own consistency, and others in which every statement is
decidable (every true statement can be proved true). And if your theory makes
no claim to an axiomatic basis then Gödel really does not have jurisdiction.

But if a person accepts the Church-Turing Thesis, then the human brain is not
doing any computation that can't be modeled by a Turing Machine. In fact, the
brain is at least a Turing Machine. Gödel's limitations will apply to it.

If we accept the Church-Turing Thesis and replace the brain with a Turing
machine, I argue that the mind is a program that runs on that Turing machine.
The program embodies a particular formal system sufficient to encode
mathematical statements and leverage itself to prove statements. From this
argument one can infer that there are some mathematical truths that some minds
will not find. And that if each human mind represents a different formal
system, it is in their best interest to work together.

 _Disclaimer - The above is me thinking out loud not something from a peer
reviewed paper_

~~~
rcthompson
It's a fallacy to think of a human brain as a distinct entity. If you are
viewing the human brain as "just" a bunch of matter interacting in interesting
ways to produce what we call consciousness (as opposed to viewing
consciousness as something separate from the physical processes of the brain),
then there is nothing really distinguishing your brain from the rest of you,
and indeed from the rest of the universe. (Practically speaking, a good bit of
decision-making happens in neurons outside of your brain.) So if you want to
argue that the mind is equivalent to (or can be simulated by) a Turing
machine, then you really want to argue that the whole universe could be
simulated on a Turing machine.

Which is not necessarily a bad position to take. <http://xkcd.com/505/>

~~~
Udo
I have the vague hope that this insight about the nature of the brain being a
type of computing device might be kind of obvious to us as IT and information
science people. It is, however, sadly _not obvious_ to mathematicians and
physicists as the article itself quotes:

    Gödel's Theorem has been used to argue that a computer can never be as smart as
    a human being because the extent of its knowledge is limited by a fixed set of
    axioms, whereas people can discover unexpected truths

...which to me is an utterly surprising non sequitur.

~~~
grepherder
Well, that paragraph you quoted is indeed a non sequitur: clearly you can
program computers to discover unexpected truths using their 'axioms', code
their 'axioms' to be as flexible and fragile as ours, and so on. But the
assertion that the brain is a computing device in the sense we "information
science people" mean is far from obvious. Obviously the brain computes stuff,
but we tend to be rather specific and think of computers when we say
"computing device", and those might or might not be able to simulate human
intelligence, hence the strong vs. weak AI debate.

Trivially you can, with perfect information and sufficient technology, model
it neuron by neuron and create an electronic brain. The catch is, you would
only prove that humanOS runs on silicon as well as it does on carbon.

Having a powerful enough computing device does not imply it can compute
anything a different type of computing device can. We have
Turing-completeness, which says this does hold for a subset of our current
programming languages and hardware, but Turing-completeness does not
trivially apply to our brain.

As an analogy, a ruler is a drawing device, but you can't draw a circle with
it the way you can with a compass. You can somewhat simulate drawing a circle
by taking it very slowly with a clever algorithm, but it will never be
perfect, or else it will require infinite (or virtually infinite) time and
space to do so.

------
rcthompson
Think of it kind of like the Sieve of Eratosthenes, but for provability
instead of primeness. You start with one axiom and prove everything you can
based on it. Then you pick one of the statements you couldn't prove and add it
as a second axiom (thereby expanding your axiomatic system), then prove
everything you can with those two axioms. Then pick another unproven statement
as your third axiom and repeat. Godel's incompleteness theorem says, in
effect, that you will never run out of candidate axioms: you will be able to
continue this process indefinitely (just as there are infinitely many primes).

It's not an exact comparison, but it gives you the idea.
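
Here is a toy, runnable Python sketch of that loop (my own illustration, and
only an analogy: it uses propositional logic, where everything is decidable,
so it shows the process of adopting unprovable statements as new axioms rather
than incompleteness itself):

    # "Provable" below just means "entailed by the current axioms",
    # checked by brute force over all truth assignments.
    from itertools import product
    
    VARS = ["a", "b", "c"]
    STATEMENTS = {
        "a": lambda v: v["a"],
        "b": lambda v: v["b"],
        "c": lambda v: v["c"],
        "a or b": lambda v: v["a"] or v["b"],
        "a and c": lambda v: v["a"] and v["c"],
    }
    
    def entailed(axioms, stmt):
        """True if stmt holds in every assignment satisfying all axioms."""
        for bits in product([False, True], repeat=len(VARS)):
            v = dict(zip(VARS, bits))
            if all(ax(v) for ax in axioms) and not stmt(v):
                return False
        return True
    
    axioms = []
    for name, stmt in STATEMENTS.items():
        if not entailed([STATEMENTS[n] for n in axioms], stmt):
            axioms.append(name)  # adopt an unprovable statement as a new axiom
            print("added axiom:", name)
    
    print("final axioms:", axioms)  # ['a', 'b', 'c']; the rest now follow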

~~~
xyzzyz
I think your comparison misses an important point. If at every step you pick a
sentence that is neither provable nor refutable from your current axioms (and
you always pick the true one), then after infinitely many steps[1] you will
have settled all of them, and you get a system called True Arithmetic, in
which every true sentence is provable, essentially by definition. The reason
Goedel's incompleteness theorem does not apply here is that this set of axioms
is not recursively enumerable, and recursive enumerability is a necessary
assumption of GIT.

[1] - If "after infinitely many steps" sounds fishy to anyone, please keep in
mind that you start with an infinite list of axioms anyway -- first-order
Peano arithmetic has an induction axiom schema Ind that basically says that
for every formula f with one free variable, Ind(f) (the sentence you get by
plugging f into the induction template) is an axiom of PA.
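
Spelled out in the usual notation (my addition, not the parent's): for each
formula f with one free variable, the corresponding induction axiom is

    \bigl( f(0) \;\land\; \forall n\,( f(n) \rightarrow f(n+1) ) \bigr)
    \;\rightarrow\; \forall n\, f(n)

so there is one axiom per formula -- infinitely many in total, but the list is
still recursively enumerable.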

~~~
rcthompson
The "infinitely many steps" does sound fishy to me, because I wonder whether
the number of axioms in True Arithmetic is countably infinite or uncountably
so, and whether it makes a difference (I think it does).

But yes, the comparison is an analogy, certainly not correct at every level.

~~~
xyzzyz
The set of all statements is easily seen to be countable, so the subset of
true statements, being infinite, is countably infinite as well.

It does not make any difference, though, since what matters here is, as I
said, recursive enumerability of axioms. Anyway, transfinite induction lets
one use "pick next element" arguments even on uncountable sets.
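
A quick way to see the countability claim (an illustration of mine, not the
parent's): statements are finite strings over a finite or countable alphabet,
so they can be listed in length-then-alphabetical order, and every one of them
shows up at some finite position.

    # Enumerate all finite strings over a small stand-in alphabet in
    # length-lexicographic order; any particular string appears after
    # finitely many steps, which is what countability means here.
    from itertools import count, product
    
    ALPHABET = "01+*=()x"  # illustrative; any finite alphabet works
    
    def all_strings():
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)
    
    gen = all_strings()
    print([next(gen) for _ in range(10)])  # the first ten strings in the list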

------
hammock
_Gödel showed that provability is a weaker notion than truth_

This, to me, is a great summation and one of the most important conclusions we
can draw. When you reflect on it, it's easy to see how Godel's theorems reach
beyond computer science and into philosophy, ethics and religion.

~~~
erichocean
Not that easy, because it only applies to truths represented in a system of
symbols. Very abstract stuff, and in fact, trivial to overcome – use two sets
of symbols.

The theorem was important at the time because it ended one pursuit (one
single-level system to rule them all), but its actual impact on anything of
importance in the wider world of, well, anything, is vastly overrated.

------
mappu
If you use a UTM as a metaphor for the axioms themselves, then the UTM must
run itself, essentially describing an infinite loop over the finite program.
That raises another point to consider: within any axiomatic system, is it
possible to describe a UTM which you can guarantee halts in finite time?
Probably not, in general.

It seems to be a solid notion that if a paradox can be constructed, then the
space is unverifiable, but in this case it feels like the paradox results only
from a poorly specified question (the correct answer exists, and is an
acknowledgement that the input is meaningless).

The important part to take away is that Gödel himself exists in a logical
space where he is able to understand and deal with the absurdity of his
paradox, a space which is outside the scope of the UTM. Every logical system
containing a paradox is a subset of a logical system wherein the absurdity of
the paradox is understood. So, perhaps a logical system which explicitly
recognises paradoxes as absurd, can be complete. Like NaN in IEEE 754, or
int|null in dynamic languages.
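
A tiny concrete illustration of the NaN analogy (mine, not the parent's):
IEEE 754 reserves a value that explicitly marks "no meaningful answer", and
that marker propagates through further arithmetic instead of being forced
into true or false.

    import math
    
    nan = float("inf") - float("inf")  # an undefined operation yields NaN
    print(math.isnan(nan))             # True
    print(nan == nan)                  # False: NaN is not even equal to itself
    print(math.isnan(nan + 1.0))       # True: the "absurd" marker propagates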

The result clearly only applies to logical spaces into which the paradox is
translatable, so I'd be interested to see whether a version exists for only
basic arithmetic.

A similarly interesting related discussion is the total, abject, and permanent
unavailability of a "quadratic formula" for polynomials of degree 5 or
greater.

~~~
kmm
> the UTM must run itself, essentially describing an infinite loop over the
> finite program,

Exactly. I came up with this counterargument not so long ago. It's why neither
Gödel's theorems nor the Halting problem ever really impressed me. I don't
doubt their validity but the conclusion that no system can be complete is only
true for very stringent definitions of "complete". I still believe the Halting
problem can be solved in some sense of the word solve. But perhaps I'm being
too pragmatic.

> So, perhaps a logical system which explicitly recognises paradoxes as
> absurd, can be complete.

Gödel explicitly counters this with his second incompleteness theorem, which
says that no consistent system (of sufficient strength) can provide a proof of
its own consistency. In other (roughly equivalent) words, I might be confident
that my mind works correctly, but there's technically no way to be completely
sure. In fact, if I were sure of it, my mind wouldn't be working fine.

~~~
keeperofdakeys
> I still believe the Halting problem can be solved in some sense of the word
> solve. But perhaps I'm being too pragmatic.

While it's true that some programs can be shown either to halt or not to halt,
others can't. For example, ask the user for a boolean and loop while that
value is true. You can't say whether this will halt or not, though admittedly
it contains an unspecified input.
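
A minimal version of that example (just a sketch): whether this halts depends
entirely on what the user types, so "does it halt?" has no answer until the
input is fixed.

    keep_going = input("loop forever? (y/n) ") == "y"
    while keep_going:
        pass  # spins forever if the user typed "y", otherwise never runs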

Then consider the Collatz Conjecture. We can't prove that an arbitrary
starting number actually reaches 1 except by simulating the process. Since we
don't even know whether the sequence will ever reach its destination, we can't
say anything in advance about when it will halt. If we can't even decide that
for such a simple function, then I don't think we can 'solve' the halting
problem, even for a loose sense of 'solve'.
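
For reference, the Collatz iteration itself is only a few lines (a sketch);
nobody has proved that this loop terminates for every starting n, so the only
way we know to get the step count is to run it.

    def collatz_steps(n):
        """Count the steps the Collatz process takes to reach 1 from a positive n."""
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps
    
    print(collatz_steps(27))  # 27 does reach 1, but no proof covers all n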

~~~
kmm
> For example, ask the user for a boolean and loop while that value is true.
> You can't say whether this will halt or not, though admittedly it contains
> an unspecified input.

I don't think that's an issue. The Halting Problem asks whether a program will
halt when supplied with a given input. Nothing is stopping us from providing a
separate proof, for every single program and every possible input, of whether
it'll stop.

~~~
grepherder
Either I'm misunderstanding your ideas or you've misunderstood the nature of
the problem.

If you're being pragmatic, rest assured that for any machine/program we can
construct in this universe, we can decide the halting problem, as it is bound
to be finite-state, i.e. it has a bounded number of states it can be in, so it
must eventually halt or repeat a state. Less technically, there is practically
no infinite search space, even if theoretically there is: the memory of a
physical computer can only count up to a certain number, even if a very large
one.
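
A sketch of that point (my illustration, with made-up toy machines): if the
machine's entire state fits in bounded memory, simulate it and remember every
state seen; either it halts, or some state repeats and we know it loops
forever.

    def halts(step, state, is_halted):
        """Decide halting for a deterministic machine with finitely many states."""
        seen = set()
        while not is_halted(state):
            if state in seen:
                return False  # a state repeated, so it will loop forever
            seen.add(state)
            state = step(state)
        return True
    
    # Two toy 4-bit counters that halt when they reach 0:
    print(halts(lambda s: (s + 1) % 16, 5, lambda s: s == 0))  # True
    print(halts(lambda s: (s + 4) % 16, 5, lambda s: s == 0))  # False: cycles 5, 9, 13, 1, ...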

The halting problem, or undecidability in general, is the application of a
very basic intuition. If I give you an infinite space and ask you to find me a
certain something in that space, there is no way I can be sure whether or not
you will succeed. Diophantine equations, for example: deciding whether one has
a solution can require searching through infinitely many integers, and that is
analogous to the halting problem.

Now in which sense of the word 'solve' could you possibly believe these can be
solved? Yes - our problems and machines are finite, so in that sense maybe.

~~~
kmm
What I mean is that while the proof for the Halting Problem is completely
valid, I find it rather "weak", as it relies on self-reference. That doesn't
tell you anything about how things would work for other programs. The only
thing you know is that no single procedure can decide halting for _all_
programs and inputs. What would happen if we were to restrict a program's
input to anything that doesn't include the program's own source code?

It's quite similar to Cantor's diagonal argument. Both proofs are completely
correct and use very similar reasoning. The difference is that I'm not
interested in almost enumerating the real numbers (I can do that with the
rational numbers anyway), but I would be interested in almost solving the
Halting Problem.

------
petercooper
The BBC had a 45-minute radio panel with Marcus du Sautoy and some other
mathematicians going through Godel's theorems:
<http://www.bbc.co.uk/programmes/b00dshx3> (I believe people outside of the UK
can listen to it... if not, sorry! For some reason the 'listen' image does not
appear for me, but it is to the left of '(45 minutes)').

(And if you like it, the whole set of math and science episodes of In Our Time
is similarly excellent.)

------
buu700
Relevant:
[http://www.reddit.com/r/explainlikeimfive/comments/j8yq3/can...](http://www.reddit.com/r/explainlikeimfive/comments/j8yq3/can_anyone_eli5_or_12_the_implications_of_g%C3%B6dels/c2a5pp4)

------
bjornsing
I of course have no proof, but I sometimes get a feeling that Gödel does have
something to do with the limits of rational thought.

To me it seems humans have found a way to cope with Gödel's incompleteness: we
accept that some of our axiomatic systems are inconsistent and adapt by simply
discounting the value of proofs.

It's difficult to explain, but let me give an example. Among engineers you can
often reason along very long logical chains and have your conclusions
accepted. Among (some) business people you can't. They seem to refuse the
validity of logic itself, being nervous about trusting logical conclusions. I
think that's because they know their axiomatic systems are self-inconsistent.
:)

By observing business people I have come to the conclusion that the best you
can do in a self-inconsistent axiomatic system is to look for "nuggets of
truth" and never wander too far from them. You can hear people say stuff like
"limiting tweets to 140 characters brings out creativity - people are less
afraid to create when they are constrained" and similar. If you accept that as
truth then you can wander a short distance from it and reason about business
opportunities, but you can't combine that nugget with some other nugget and be
sure of coming to a correct conclusion.

Perhaps the likelihood of "proving" an untrue statement in a self-inconsistent
system increases with the number of axioms you are basing your proof on.
That's perhaps why many people are very skeptical of "proofs" that seem to
involve a lot of axioms? :)

~~~
mjw
Surely a simpler explanation for that is that business people don't usually
work with crisp, binary data. Instead they're (implicitly) doing some kind of
statistical inference on noisy data.

Propositional logic might be an acceptable approximation to this kind of
inference in certain narrow situations, but the approximation breaks down more
quickly as the chain of inference gets longer.

~~~
bjornsing
That could certainly be a simpler (and therefore better) explanation. :) When
your "axioms" are fuzzy, so to speak, and not real axioms, it's of course
dangerous to draw conclusions from several of them (because the risk of one of
them being false increases exponentially).

But I still have a feeling there's something there... :) For example, I
remember reading here on Hacker News about a CS professor who had found a way
to predict performance in entry-level programming courses:
<http://www.eis.mdx.ac.uk/research/PhDArea/saeed/>. Basically, they just
tested their students' ability to form a self-consistent model of how
programming works. If they could, they did well in the course. If they
couldn't, they did poorly, and it was very difficult to help them.

That experiment leads me to believe that about 50% of the population is in the
habit of constructing self-inconsistent systems of hypotheses (provisional
axioms, you could say). :) If that's true, I'm not sure yours is a simpler
explanation... ;)

------
powertower
> The implication is that all logical system of any complexity are, by
> definition, incomplete; each of them contains, at any given time, more true
> statements than it can possibly prove according to its own defining set of
> rules.

Here is an even simpler explanation:

A _description_ of a _thing_ is not the thing itself; it's just a description
that allows you to interact with that thing and place it into a framework.

And an even more _accurate description_ of a _thing_ is still not the thing
itself; it's just a more accurate description.

Add more layers, and you still have just a description, and never the thing
itself.

It's like an onion, and each layer gets more distant from the core.

Then those layers begin to interact with other descriptions of other things.
So more layers are added to explain those interactions.

It goes on and on until you've _simulated_ the universe.

The problem is that it's exponential, and even if it were not, you're still
stuck with just a description, and not the thing itself.

Some people will claim otherwise here, so just ask them if a description of a
thing is the same as the thing itself and go back to the start of all this.

~~~
derrida
If I'm not mistaken, one way of seeing GIT is just as a reductio ad absurdum
on the idea that truth is just that which is proved.

Perhaps you should consider that talking about ontology muddies the water
here; this explanation might have some analogical value, but it isn't
'simpler'.

------
cop359
Seems like a silly paradox. Why is it important?

Also it seems very similar to this famous paradox: (from 600BC btw.)
<http://en.wikipedia.org/wiki/Epimenides_paradox>

~~~
rcthompson
Constructing the paradox is just the means of proving the theorem. The paradox
itself is not particularly important. Also, the paradox in Godel's theorem is
"This statement is unprovable" not "This statement is false".

------
tumes
While it might not be the most technically accurate, the ways in which David
Foster Wallace touches on Gödel in Everything and More are worth a look for
interested parties.

------
ropz
The referenced page doesn't deliver on the hype of the HN headline; it merely
offers:

"... some selections that will help you start to understand it."

------
flurie
The greatest explanation I've ever seen for the Incompleteness Theorems comes
from Palle Yourgrau in his book A World Without Time. He gives an excellent
introduction to his explanation: "To appreciate Godel's theorem is your
birthright; let no one, including the mathematical police, deprive you of what
you have a right to enjoy."

~~~
derleth
Who in the world are the 'mathematical police'?

~~~
keithpeter
A rhetorical and shadowy group who discourage ordinary people from reading
about or becoming interested in mathematics. A lot of popular science authors
invoke this underground organisation, so it must exist.

------
batista
With regard to the "Rucker, Infinity and the Mind" excerpt that says:

 _> The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky,
that it is almost embarassing to relate. His basic procedure is as follows:
(...)_

Actually, Godel's theorem is not that simple at all. It involves lots of hard
math. And it's not about some hazy "Universal Truth Machine"; it's about a
specified axiomatic mathematical system with certain specific properties.

The "simple" thing that RI&TM describes is a variation of the Liar's paradox,
which is somewhat like what Godel used. But he did not use it in that
simplistic form, not at that level of coarseness, and it is surely not
"embarrassingly simple to relate".

