
Gödel's Completeness Theorem - adamnemecek
https://en.wikipedia.org/wiki/Gödel%27s_completeness_theorem
======
bulatb
This theorem seems to be like monads: Someone posts about it on the internet,
someone doesn't understand and asks for help, someone comes along and writes a
simple explanation that seems to make sense, then someone says that
explanation is completely wrong (but here's this totally inscrutable "correct"
one... which someone comes along and says is even worse.)

And everyone is left confused.

~~~
anaphor
Just read this book, IMO [https://www.amazon.com/G%C3%B6dels-Proof-Ernest-
Nagel/dp/081...](https://www.amazon.com/G%C3%B6dels-Proof-Ernest-
Nagel/dp/0814758371)

It's only 160 pages and gives what seems to be a good explanation of the
basics.

~~~
novalis78
Currently reading Morris Kline’s “Loss of certainty” - a beautiful very
readable work that elucidates the relationship of Mathematics with concepts of
‘truth’ and ‘reality’ throughout mathematical history.

[https://www.amazon.com/dp/0195030850/ref=cm_sw_r_cp_api_i_aJ...](https://www.amazon.com/dp/0195030850/ref=cm_sw_r_cp_api_i_aJ.BDb2XFHVCA)

------
theWheez
This theorem actually changed my life. Once it clicked for me it shifted my
understanding of reality significantly.

~~~
meowface
I've tried to make this click for me for a long time, to no avail. Do you have
any tips?

~~~
solinent
Essentially, all logically valid formulae have proofs in first-order logic.

Remember, a logically valid formula is one that remains true under every
possible interpretation (every assignment of boolean values to the
variables).

So you could, in principle, prove every valid formula by enumerating all
possible proofs using the given deduction rules.
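
That enumeration idea for the propositional case can be sketched in a few
lines of Python (the helper name `is_valid` and the lambda encoding of
formulas are made up for illustration):

```python
from itertools import product

def is_valid(formula, variables):
    """Check propositional validity by brute force: the formula must
    come out true under every assignment of booleans to its variables."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# "p or not p" is valid: true under every interpretation.
print(is_valid(lambda v: v["p"] or not v["p"], ["p"]))    # True
# "p and q" is satisfiable but not valid.
print(is_valid(lambda v: v["p"] and v["q"], ["p", "q"]))  # False
```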

~~~
xamuel
This is 1st-order logic so it's a bit more complicated, though you're on the
right track (what you stated is the completeness theorem for _propositional_
logic, not _first-order_ logic).

Rather than boolean values for the variables, you need to enumerate all
possible...

* Ambient universes where the language is interpreted

* Values (taken from the ambient universe) for the constant symbols

* Sets-of-tuples-of-values (from the ambient universe) for the predicate symbols

* Functions-from-tuples-of-values-to-values (from the ambient universe) for the function symbols

~~~
solinent
Yup, my definition of interpretation was limited for pedagogical purposes, but
I should have been clearer. The propositional logic version is much simpler to
understand, but the implications of the theorem are much more interesting in
first-order logic, so introducing all of this nomenclature may be too much for
someone who isn't motivated yet.

------
aerovistae
Godel's work has always been incomprehensible to me. No matter how I attempt
to understand his theorems, I find them impenetrable. Apparently they are of
great consequence, so I'm very interested, but to no avail.

~~~
rwill128
I can relate. My understanding of it thus far leads me to think I can
summarize it fairly well though, and I would welcome other people's input or
critique on this.

It seems like it's so consequential because he demonstrated that no matter
what kind of mathematical system you're using -- and no matter how much
mathematics generally speaking develops -- there will be objectively true
mathematical statements within that system that can't be proven.

If that part of my understanding is correct, the part that's really
interesting to me is whether we can know these true statements to be true,
despite them not having proofs. This is where I could be misunderstanding
things I suppose, but it suggests there's a disconnect between what's knowable
and what's provable, and furthermore, that we can know more than we can prove.

To actual seasoned mathematicians: is this a really naive interpretation of
what I've read, or not?

~~~
xamuel
You need to be a bit more specific: no matter what kind of _true, computable_
axiom-set you're using (this has nothing to do with 'how much mathematics
generally speaking develops'), there will be objectively true mathematical
statements that can't be proven _by that axiom-set_.

>a disconnect between what's knowable and what's provable

"what's provable [from a given axiom-set]" is a concrete, technical,
unambiguously defined set of things. "what's knowable" is a vague
philosophical set of things. Goedel's Incompleteness Theorem is a technical
result about the former, and it's a common mistake to assume it says anything
about the latter, except very tangentially.

For those who are interested in the misty area where the two things do
overlap, I will shamelessly plug this 2-page paper of mine, "Mathematical
shortcomings in a simulated universe":
[https://philpapers.org/archive/ALEMSI.pdf](https://philpapers.org/archive/ALEMSI.pdf)

~~~
guerrilla
I think they meant what is true, which is also concrete, technical and
unambiguously defined in this setting. But you are very correct to make the
distinction between truth and knowledge. Pointing out that many philosophers
have believed that knowledge is justified true belief might elucidate the
relationship a bit.

------
adamnemecek
The theorem reminds me of William Lawvere's "Adjointness in foundations".
Formulas/programs are in an adjoint relationship with their executions. The
fact that they are adjoints is a big deal.

~~~
theWheez
Is that related to the Curry-Howard isomorphism?

------
wrp
Godel's theorems are a popular subject among technophiles, but the popular
conception of their implications is seriously off base. Torkel Franzen's
book[1] is a quick read and will guide you away from the most common errors.

[1] [https://www.amazon.com/G%C3%B6dels-Theorem-Torkel-
Franz%C3%A...](https://www.amazon.com/G%C3%B6dels-Theorem-Torkel-
Franz%C3%A9n/dp/1568812388/)

~~~
adamnemecek
Most people talk about the incompleteness theorems, not completeness. The
book does briefly mention the completeness theorem.

------
kkylin
Anyone interested in this topic should be sure to follow the various links on
the page. One of my personal favorites is the Compactness Theorem:
[https://en.wikipedia.org/wiki/Compactness_theorem](https://en.wikipedia.org/wiki/Compactness_theorem).

------
120bits
Oh my! This reminds me of a class I took in my masters, where I had to learn
the Halting Problem. Godel's work is just astonishing and way beyond my
caliber. I managed to pass the class, but the memories are still with me.

~~~
jandrese
I still think most CS courses teach the Halting Problem in just about the
worst way possible.

They always give you that one highly contrived counterexample where you're
feeding the algorithm with the output from itself which doesn't even begin to
touch on what the Halting Problem actually is or why it is so important. And
if the student asks "well, what if we redefine it so it only works on OTHER
algorithms to avoid this one weird edge case" the answer is basically "that's
not covered."
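
(For reference, that contrived counterexample is roughly the following, with
a hypothetical `halts` oracle standing in for the impossible solver:)

```python
def halts(program, arg):
    """Hypothetical halting oracle: returns True iff program(arg) halts.
    The construction below shows no correct implementation can exist."""
    raise NotImplementedError("no such oracle")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about
    the program run on its own source."""
    if halts(program, program):
        while True:  # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# Feed diagonal to itself: if halts(diagonal, diagonal) returned True,
# diagonal(diagonal) would loop forever; if it returned False,
# diagonal(diagonal) would halt. Either answer is wrong, so a correct
# halts cannot exist.
```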

But really, the Halting Problem is asking whether we can solve all of
mathematics, and indeed all of philosophy, with a fancy enough computer
program. Can we build a machine God? And the standard textbook answer is
roughly "If we built a machine God it could create a burrito so big that even
it could not eat it, therefore machine God can not exist."

IMHO, it would make more sense to start students down the right path by
asking them what the halting computer would do when fed a program that
calculates all of the digits of Pi, or one that halts when it computes the
answer to life, the universe, and everything. The halting problem may have
been mathematically disproven by finding one highly convoluted
counterexample, but its more significant failing is that it is infinite and
thus cannot be implemented on a machine with finite limits.

~~~
xamuel
>And if the student asks "well, what if we redefine it so it only works on
OTHER algorithms to avoid this one weird edge case" the answer is basically
"that's not covered."

Only if the teacher doesn't know what they're talking about. If they know what
they're talking about, the answer is "Since every function can be implemented
by infinitely many different algorithms, in order to blacklist our halting-
solver from the list of things we can input into our halting-solver, we would
need to blacklist ALL the algorithms that implement it, but in order to do
that, we'd need an algorithm for determining whether or not a given algorithm
is a halting-solver, and it can be shown that that itself is just as
impossible as solving the halting problem."

>digits of Pi

You seem to be deeply mistaken about something, but it's hard to point out
exactly what you're mistaken about because all the terms are so poorly
defined. If, for example, you meant, "What the halting computer would do when
fed a program P that takes an input n and outputs the nth digit of Pi, if we
asked whether or not P halts on a particular input n", then the answer would
be "Yes". If you meant "What the halting computer would do when fed a program
Q that takes no input and runs forever, listing all the digits of Pi, when
asked whether or not Q halts on no input", then the answer would be "No".
Either way, there's absolutely nothing deep or profound about the digits of Pi
here.

>life, the universe, and everything

Please, don't make things even _more_ complicated for the poor students.

~~~
jandrese
You only need to blacklist the possibility that the program under review has
access to the output of the Halting Detection program for itself.

The Digits of Pi example is to illustrate that the Halting Program needs to
understand the high-level math necessary to prove that Pi's decimal
expansion never terminates. And then you lead them on to discovering that it
needs to be able to solve every problem in mathematics, even ones that have
not yet been discovered, and then you realize that it has to be omnipotent
and infinite.

The purpose is to pull students away from the kind of empirical solutions that
immediately pop into your head when presented with the halting problem. "Well,
if we look at the loops and what the exit conditions are and start iterating
over all possible inputs..." which is not at all what the Halting Problem is
about, despite what it looks like on the surface.

~~~
xamuel
Oh, I see what you're getting at: "If you had a halting-solver, you could
use it like an oracle to answer arbitrary mathematical questions."
Unfortunately, this isn't true: you couldn't use it to answer _arbitrary_
mathematical questions, just certain questions.

For example, even if Pi happens to be rational, and the algorithm to list its
digits eventually starts listing a periodic repetition (possibly the all-0
repetition even), that still doesn't mean it halts. So the halting-solver
doesn't directly help you determine whether or not Pi is rational. You would
instead need a "determine-if-given-function-eventually-has-periodic-output"
solver, which is stronger than a halting solver.

I'm not sure if there are any really slam-dunk examples of what you seem to be
looking for, that don't involve proofs in some guise or other. An example
which does involve proofs might look like this: "Let x be a program which
attempts, by brute force, to find a proof that P<>NP, and immediately halts
when it finds such a proof, if ever. If you had a halting detector, you could
plug x into it and based on its output, you would know whether or not there is
a proof of P<>NP." [Which is subtly different than whether or not P<>NP is
true. That would require something stronger than a halting solver to obtain.]
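
That brute-force searcher x might be sketched like this (the alphabet and
the proof checker are stand-ins; verifying a given formal proof is
mechanical and decidable, even though searching for one may never
terminate):

```python
from itertools import count, product

ALPHABET = "()~&|>=Pxyz "  # stand-in symbol set for some formal system

def is_valid_proof_of(candidate, statement):
    """Hypothetical checker: does the string `candidate` constitute a
    formal proof of `statement` in the chosen system?"""
    raise NotImplementedError("stand-in for a real, decidable proof checker")

def search_for_proof(statement):
    """Enumerate every finite string over the alphabet, shortest first;
    halt exactly when a valid proof of the statement turns up."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof_of(candidate, statement):
                return candidate

# A halting oracle applied to search_for_proof("P != NP") would tell you
# whether the chosen formal system proves P != NP, which is subtly
# different from whether P != NP is true.
```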

~~~
jandrese
Wouldn't you simply define your sample program to halt upon discovering a
periodic repetition in its output?

~~~
tlb
Finding any finite number of repeating digits does not prove that there's an
infinite number to follow. You might find 1000 3s in a row, followed by not a
3.

~~~
jandrese
If you know the entire state of your algorithm then it's possible to compare
the state vs. its state in a previous step to determine if you are in a
repeating loop. If all of the parameters and internal state are identical,
then the algorithm cannot produce a different result.

~~~
xamuel
A machine can print the same thing over and over, and yet never repeat its own
internal state. For example, "let x=0; while(true) { print("0"); x=x+1;}"

~~~
jandrese
A halting problem solver would necessarily have to be smart enough to
distinguish state that is relevant to the completion of the program from
state that is not. Plus, even in this case, on a real-world machine x can
only store as much state as the underlying type allows. So if it's a 32-bit
int then you can absolutely prove the algorithm does not halt after only 4
billion iterations, but even before that you can simply note the lack of
exit conditions from the loop to show that it never terminates.

However clever you are, the halting problem program has to be even more clever
by definition. But we don't have an upper bound on cleverness, so the halting
problem program has to be infinitely clever, hence my previous point about it
being able to solve all of mathematics.

------
phab
Douglas Hofstadter's book, "Gödel, Escher, Bach" is very highly recommended
reading for anyone who finds this (and the theory of computability more
generally) interesting.

I've lost entire days of my life to that book!

[https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach)

~~~
wellpast
Here's a good adjacent read that I thought was a much clearer and relatively
accessible tour through the Proof - [https://www.amazon.com/G%C3%B6dels-Proof-
Ernest-Nagel/dp/081...](https://www.amazon.com/G%C3%B6dels-Proof-Ernest-
Nagel/dp/0814758371) - it's more of a pamphlet than a book, really.

~~~
jerf
GEB is _fun_, but it's definitely a bad introduction to the idea itself. It's
a lot more fun if you already understand the subject matter and can enjoy the
tone of the book.

------
dwohnitmok
This is _not_ Godel's _in_ completeness theorem(s). This deals with a
completely different notion of completeness.

Godel's completeness theorem is the theoretical justification for why first-
order logic occupies such an important position in mathematical logic. It
basically establishes that two different methods of proving a logical
statement happen to coincide for first-order logic.

Let's say that I come up with a long list of axioms that define what it means
for an object to be a "cow" and what it means to have a "spot." Then I state
"all cows have spots."

There are two ways of proving this. One way, the "semantic" way, is to take
every example of cow and prove that it has a spot. This is hard; I not only
have to examine every cow that exists, I have to examine every possible
imaginary entity that satisfies the definition of "cow" and prove that it has
a spot.

The second way, the "syntactic" way, is to play the string manipulation game
that people often associate with the notion of a "logical proof." I start out
with the axioms of a "cow," and I'm given some rules for how I can manipulate
these axioms. Examples include: the string "A and B" allows me to
immediately substitute "A" instead; given "A implies C" I can substitute all
instances of "A" with "C"; and so on and so forth. Hopefully I can work out a
series of string manipulation steps that eventually end with the statement
"All cows have spots."

It is not obvious that these two methods should coincide in what they are able
to prove. The rule set that I get to manipulate my strings by could be a
really wacky set of rules. Maybe it says something like if you see "A implies
B," you get to substitute any arbitrary string "C." It's also not apparent how
you might choose to interpret the axioms that define a "cow." One of the
axioms might say "a cow has four legs" and you might say, in my world, I've
decided the string "four" corresponds to the natural number 5, and I'm going
to look at all entities with 5 legs.

Godel's completeness theorem states that for first-order logic these two proof
techniques coincide: every statement you can prove or disprove the first way
can also be proved or disproved the second way. More specifically, if you
stipulate that rules of string manipulation are exactly the rules of deduction
in first-order logic, and that you interpret your axioms when constructing
semantic entities according to the standard semantics of first-order logic
(e.g. you don't get to arbitrarily stipulate that the string "and" actually
means logical disjunction, i.e. or, in your world), then any property that is
true of every entity you can conjure up that satisfies your axioms has a
"string manipulation" proof that proves that property. Conversely, every
statement you can prove with your string manipulation game also holds true of
every entity you can conjure up that satisfies your axioms (this isn't
strictly speaking part of the completeness theorem, but is also true and
usually stated hand-in-hand with it).
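
As a toy illustration of the "semantic" side (unrealistically restricted to
tiny finite universes, and with a made-up second axiom so the entailment
actually holds; real semantic entailment quantifies over all models,
including infinite ones):

```python
from itertools import product

def subsets(universe):
    """All interpretations of a unary predicate over a finite universe."""
    return [frozenset(x for x, keep in zip(universe, bits) if keep)
            for bits in product([False, True], repeat=len(universe))]

def semantically_entailed(size=3):
    """Axioms: every Cow is FourLegged; every FourLegged thing is Spotted.
    Conclusion: every Cow is Spotted. Check every interpretation of the
    three predicates over a small finite universe."""
    universe = list(range(size))
    for cow, four, spot in product(subsets(universe), repeat=3):
        axioms_hold = cow <= four and four <= spot  # <= is subset
        if axioms_hold and not cow <= spot:
            return False  # found a (finite) countermodel
    return True

print(semantically_entailed())  # True: no countermodel exists
```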

------
tabtab
What are _practical_ consequences and examples of such in typical debates
about politics, programming paradigms, etc?

~~~
lacker
One practical consequence is that the halting problem is undecidable, so if
your boss asks you to build a program to check if another program will ever do
X, you can tell them "hey that's an impossible task!"

~~~
jmcqk6
In practical matters, this is not quite true. If you ask 'will this program
launch a toaster to orbit Mars', you can usually answer that question with
high confidence.

The halting problem describes the situations where you can't decide one way or
another, but it does not say you can't decide anything. This is a very
important distinction.

We write code to analyze other code all the time. It's a core part of
developer tools. If it were impossible, life would really suck.

------
lacker
IMHO if you understand the history it makes sense why the theorem is both
important and confusing.

In 1921, Hilbert had this idea, that mathematicians could create an algorithm
that would automatically prove every true statement and disprove every false
statement. Wouldn't that be neat? It's important that this idea _predates_
computers. When Hilbert was thinking about an algorithm, it wasn't an
algorithm as we would think about it in computer science today, because the
modern idea of a computer hadn't been formalized.

In 1931, Godel proved that it was impossible for an algorithm to automatically
prove or disprove true mathematical statements. Take that, Hilbert. The modern
idea of a computer _still_ hadn't been formalized. So a lot of stuff that
nowadays we think of as simple, Godel did in a really weird and confusing way.

For example, as part of his proof Godel needed a way to algorithmically encode
a bunch of numbers into a single number. Nowadays, that's pretty intuitive -
any software program can be saved into a file, any file can be interpreted as
a sequence of binary bits, which can be interpreted as a number. Back then,
Godel used a really crazy encoding - encoding (a, b, c, d, ...) as 2^a * 3^b *
5^c etc.
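
The prime-power trick is easy to play with. A sketch (with a small tweak,
exponent a+1 instead of a, so that trailing zeros in the sequence are not
silently dropped; Godel's actual scheme handled this differently):

```python
def primes(n):
    """First n primes by trial division (fine for a sketch)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(seq):
    """Encode (a, b, c, ...) as 2^(a+1) * 3^(b+1) * 5^(c+1) * ..."""
    n = 1
    for p, a in zip(primes(len(seq)), seq):
        n *= p ** (a + 1)
    return n

def godel_decode(n):
    """Invert the encoding by counting each prime's exponent."""
    seq, i = [], 0
    while n > 1:
        p = primes(i + 1)[-1]
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        seq.append(exponent - 1)
        i += 1
    return seq

print(godel_encode([2, 0, 1]))  # 2^3 * 3^1 * 5^2 = 600
print(godel_decode(600))        # [2, 0, 1]
```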

So if you go back and try to understand Godel's proof, it is really just
needlessly complicated. I loved the book Godel, Escher, Bach when I first read
it, but once I understood the mathematics more deeply I started thinking that
it really wasn't the best approach to actually understand the mathematics.

In 1936, Turing defined a Turing machine and also proved that the halting
problem was undecidable. This is essentially equivalent to Godel's first
incompleteness theorem, and the proof is about 100 times more intuitive,
_especially_ for modern computer programmers, who tend to understand computers and
computer programs pretty well, far better than they understand prime
factorization.

So, personally I think if you are learning this stuff, you are better off
starting by learning about the halting problem. Sometimes the historical
sequence of discovery isn't the best way to learn math, just like we don't
learn about the Greek method of exhaustion before we learn modern methods of
calculus.

~~~
teilo
You are talking about his _Incompleteness_ theorem. This article is about a
different theorem, with a different definition of "completeness".

~~~
lacker
Whoops. I thought about deleting the comment but hey maybe someone will find
it useful.

~~~
defective
I did! Thank you!

