What Made Gödel’s Incompleteness Theorem Hard to Prove (algorithmsoup.wordpress.com)
126 points by williamkuszmaul on Oct 1, 2018 | hide | past | favorite | 99 comments

From the article:

> Here’s another example (known as the Berry paradox):

> Define x to be the smallest positive integer that cannot be defined in under 100 words.

> This might look like a valid mathematical definition. But again it’s nonsensical. And, importantly for the sanity of mathematics, no analogous statement can be made mathematically formal.

I thought it was worth mentioning that the Berry paradox does show up in a mathematically formal way, and it was fascinating to me. It can be used to prove that Kolmogorov complexity is uncomputable: no general algorithm can compute the Kolmogorov complexity of an arbitrary string.
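The shape of that argument is worth sketching. If Kolmogorov complexity K were computable, the short program below (plus N, about log N bits) would itself be a description of "the first string with K(s) > N" in far fewer than N bits, contradicting K(s) > N. A minimal Python sketch; the K passed in is necessarily a stand-in, since the real K is exactly the thing being proven uncomputable:

```python
from itertools import count, product

def first_complex_string(K, N):
    # Search binary strings in length-then-lexicographic order and return
    # the first one whose "complexity" exceeds N. If K were the real
    # Kolmogorov complexity, this short program would describe that string
    # in far fewer than N bits -- the Berry-style contradiction.
    for length in count(1):
        for chars in product("01", repeat=length):
            s = "".join(chars)
            if K(s) > N:
                return s

# Demo with a stand-in complexity measure (string length), purely to
# illustrate the shape of the search:
assert first_complex_string(len, 3) == "0000"
```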


I felt like the article was just getting going, then it finished.

I was ready to roll up my sleeves, expecting "and here's a walkthrough"... and that's how I felt.

article had no teeth whatsoever.

A layperson here, I always wanted to ask this question: Godel proves that not ALL statements that are true can be proven, because here is this one example. I get it, it's a valid mathematical proof. But does it really say anything about other statements, except this one? Does it say anything about statements that are not self-referential? Is it possible that the incompleteness theorem really applies only to self-referential statements and says nothing about non-self-referential statements?

The proof of the incompleteness theorem only shows you that recursively-enumerable arithmetic systems are always missing something, because the construction always holds. It doesn't make any grand statements about the structure or nature of things that are true and without proof.

The second incompleteness theorem gives an example of something undecidable that's probably more interesting than the uselessly paradoxical statement in the first: it says that no consistent system (of the relevant kind) can prove its own consistency.

That's a common technique in math proofs, to disprove a statement by providing a counterexample. But there probably are unprovable statements that are not self-referential, e.g. certain statements about infinite sets.

But the grander point, that Hofstadter pushes a lot in GEB, is that any sufficiently powerful symbolic system will be able to self-reference. For example, consider a programming language that is not Turing complete. You will basically need to disallow loops, because with those you can implement a virtual machine to break all your self-reference-banning rules.
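To make the loop point concrete, here's a hedged sketch of a deliberately non-Turing-complete language: straight-line programs with no loops or jumps, so every program trivially halts. (The program format and names here are made up for the illustration.) The moment you allow unbounded loops, you can write an interpreter for the language inside itself, and the self-reference bans collapse:

```python
# Toy loop-free language: a program is a finite list of assignments with
# no jumps or loops, so the interpreter visits each instruction exactly
# once and is guaranteed to halt.
def run(program, env=None):
    env = dict(env or {})
    for target, op, a, b in program:
        x = env.get(a, a)   # operands are variable names or literal ints
        y = env.get(b, b)
        env[target] = x + y if op == "+" else x * y
    return env

prog = [("x", "+", 2, 3), ("y", "*", "x", "x")]
assert run(prog) == {"x": 5, "y": 25}
```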

What made the theorem hard is not the details, which are simple; it's the conceptual leap to thinking about mathematical statements as mathematical objects, something that is much easier for us than for mathematicians of Godel's time because we are familiar with programs and computers. Much of the text is taken up with proving things that were surprising to mathematicians of the era but are now commonplace: e.g. that we can treat numbers as encoding mathematical propositions, so that for every proposition P expressible in some formal logic L there is a number #P which encodes P in some unambiguous way, as long as L is expressive enough to formalize some basic arithmetic. Then we can start making trouble by asking whether we can construct a proposition Q so that Q(#P) is true iff the proposition P that #P encodes is provable in our system. If so, we should be able to prove Q(#P) or NOT Q(#P) if L is complete, but things don't work out well because it's possible to encode paradoxical statements. In short.
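The numbering step can be made concrete with a toy encoding. This is only a sketch (Gödel's actual scheme used prime factorizations, and the alphabet here is made up), but it shows the key property: every proposition gets a unique number, and the number can be decoded back:

```python
SYMBOLS = "0S+*=()~A"   # hypothetical toy alphabet; any fixed finite one works

def godel_number(p: str) -> int:
    # Read the symbol string as a number in base len(SYMBOLS)+1, using
    # digits 1..len(SYMBOLS) so no symbol maps to 0 (keeps decoding unique).
    n = 0
    for ch in p:
        n = n * (len(SYMBOLS) + 1) + SYMBOLS.index(ch) + 1
    return n

def decode(n: int) -> str:
    base = len(SYMBOLS) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(SYMBOLS[d - 1])
    return "".join(reversed(out))

p = "S0+S0=SS0"   # "1 + 1 = 2" written in successor notation
assert decode(godel_number(p)) == p
```

Once statements are numbers, statements *about* statements become statements about numbers, which is where the trouble starts.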

> thinking about mathematical statements as mathematical objects...

Lock yourself away in a room with nothing but pen and paper (no internet, no books) and try to reproduce Goedel's original proof. Then post the attempt here :)

When I tried this a few years ago I made a mistake in the definition of numbering, got through half the proof, ran into trouble, and gave up before I figured out exactly where the mistake was. And I had already committed the overall outline and the tactical steps for each part to memory...

Even if you remember the entire outline, it'll be really hard to get the details right....

And, more importantly, the outline itself is actually not obvious.

>... much easier for us than for mathematicians of Godel's time because we are familiar with programs and computers

Again, it seems unlikely that this was the sticking point.

E.g. Hilbert understood mathematics as a symbolic game, thought about exactly this question, and didn't come up with Goedel's proof. And Hilbert was definitely no dullard.

It's very hard to believe Godel didn't face the problems you faced. He probably made mistakes too, and went back periodically to make sure his proofs were sane. He was probably stuck too, many times even, and solved those problems the way other mathematicians solve theirs. After all, it took him many years to complete his proof, and even more years until the theorems stabilized into today's form (i.e. Rosser's trick etc.). People had been practicing formal mathematics long before Godel, and this cycle of being stuck and solving problems is pretty common in mathematics. His coworkers would have had the same sort of issue. This seems to indicate to me that the GP comment is right that the inciting incident was understanding the correspondence between mathematics and its encoding. Which brings us to:

> E.g. Hilbert understood mathematics as a symbolic game

Hilbert had a radically different understanding of the nature of mathematics compared to Godel. Moreover, Hilbert was radical enough that he tried to use his fame to silence people with views differing from his (e.g. Brouwer). Hilbert hardly believed Godel when he published his results, because it was easy to see that they would shake the foundations of "Hilbert's Program", something Hilbert spent half his life on (and logicians still publish papers about the relationship between Godel's incompleteness theorems and Hilbert's Program). This probably made Hilbert orders of magnitude less motivated to think about this issue. On the other hand, Godel had various non-mathematical reasons to prove incompleteness, one being his desire to block materialism; see Godel's Gibbs Lecture.

> People have been practicing formal mathematics long before Godel

Right, this was pretty much my original point.

> This seems to indicate to me that the GP comment is right that the inciting incident was understanding the correspondence between mathematics and its encoding.

Doesn't this sentence contradict the previously quoted one? Or, at the very least, the point is far more subtle than the GP's "...it's the conceptual leap to thinking about mathematical statements as mathematical objects..."

I don't see anything in Hilbert's mathematics that indicates he did not "think about mathematical statements as mathematical objects". Very much the opposite, actually.

I'll agree that the key insight was basically about mathematics and its encoding. And that this insight was extremely non-obvious.

But Hilbert's program really was literally all about GP's "[thinking of] mathematical statements as mathematical objects".

Or, to restate this contentious agreement we're having another way: things like the idea to do numbering in the first place are the details, and do not obviously follow from treating mathematical objects as objects of mathematics itself.

Has anyone tried to find a transcript of the Gibbs Lecture? Any luck? It apparently appears in Vol III of Godel's Collected Works, and nowhere else I can see. Thanks!

Two things: first, if you can find this text by Feferman, it is very helpful: "Are There Absolutely Unsolvable Problems? Gödel's Dichotomy" https://academic.oup.com/philmat/article-abstract/14/2/134/1... It's super simple and clarifies a lot of things about Godel.

Second, I took a philosophy of mathematics course when I was an undergrad studying computer science, and I have a copy of Godel (1951) (I think it's transcribed from his Gibbs Lecture) that was given to me as class material. I tried search engines and couldn't find it anywhere, for the love of god. It's such a shame because it's a very important text imho. Since it's probably copyrighted or something, I don't know where to put it. If you're interested, PM me and I can send you a pdf. Its title is "Some basic theorems on the foundations of mathematics and their implications". It's mostly Godel explaining the philosophical implications of his theorems and why he thinks they're important. If someone can find a pdf, it'd be very helpful.

Could you send it to jacob at-sign solidangle.ca? I would greatly appreciate it.

I couldn't find your contact information. It might be good to add it to your profile for other people.

Sorry, my bad, I never realized HN doesn't have PMs (or does it? Couldn't find an inbox). Sending the pdf to your email.

The Gibbs lecture can indeed be found in Kurt Gödel, Collected Works, Volume III, Unpublished essays and lectures, edited by Feferman et al., 1995.


Thanks for the pointer to Feferman's (very interesting) article. As for Godel, my university's library turns out to have a set of the Collected Works. So, I'll just check that out. Thanks!

It's super easy to prove if you assume the Halting Problem is undecidable. Basically,

1. Assume that either T or ~T is provable for every sentence T, and that our logic is consistent. (The opposite of the incompleteness theorem.)

2. For every Turing machine, there is a proof that it halts or a proof that it doesn't halt.

3. A machine that enumerates and validates proofs will find one of those proofs, so the Halting Problem is decidable -- a contradiction.

Easy peasy. (The proof of the Halting Problem undecidability from the Incompleteness Theorem is similarly straightforward.)

Godel did his work first though, so he didn't have that machinery (hmm) available.
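Step 3 can be sketched as a dovetailing search through candidate proofs. The proof checker below is a made-up stub standing in for a real verifier of formal derivations; the point is only the shape of the argument: under the completeness assumption, one of the two claims eventually has a valid proof, so the loop terminates and decides halting.

```python
from itertools import count

def decide_halting(machine, checks_proof):
    # Enumerate all finite strings as candidate proofs; if every halting
    # claim or its negation has a proof, this search always terminates.
    for n in count():
        candidate = format(n, "b")
        if checks_proof(candidate, f"{machine} halts"):
            return True
        if checks_proof(candidate, f"{machine} does not halt"):
            return False

# Toy stand-in for a real proof verifier: it "accepts" proof number 5
# (binary "101") of the halting claim.
toy = lambda proof, claim: proof == "101" and claim == "M halts"
assert decide_halting("M", toy) is True
```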

This proof using Turing machines is indeed very slick! Its elegance is balanced by its nonconstructiveness: Gödel's proof yields an explicit example for an unprovable statement (namely a formal rendering of "this statement is not provable"), while the Turing machine proof doesn't.

Also let me add a remark on a certain subtlety: Step 3 only works if we assume that, if a claim about the halting behaviour of Turing machines has a proof in the studied formal system, then it's actually true.

This assumption is believed to be warranted for the standard formal systems of mathematics such as Peano Arithmetic (PA) or Zermelo–Fraenkel set theory (ZF), but Gödel's proof also applies to formal systems for which this assumption doesn't hold, or for which the assumption does hold but cannot be proven in a weak metatheory.

These systems are typically not very useful as vehicles to carry out formalized mathematics, but they are nevertheless very interesting and fundamental to the study of logic. An example for such an anti-real formal system is PA adjoined by an axiom expressing that PA is inconsistent.

The Turing machine proof does yield an explicit unprovable statement when you combine it with the (constructive) proof of the undecidability of the Halting problem. Let H be the Turing machine that, when given a Turing machine M, searches for a proof (in the formal system under consideration) that M halts or a proof that M doesn’t halt. Build the machine Q that quines itself to apply H to the code of Q itself, then either loops forever or halts according to the opposite of what H purportedly proved. Then the true statement “Q doesn’t halt” is unprovable in the formal system.

It’s more work to formalize all the details here than in Gödel’s proof, because now you have to explain how the formal system can reason about Turing machines reasoning about Turing machines rather than just how the formal system can reason about itself, but it works just as well.

Wow! Didn't notice that. I stand corrected. Your rendition of the proof even dispenses with the assumption that the formal system correctly judges the halting behaviour of Turing machines.

Thank you for this. Else I'd have perpetuated that myth further!

As Wittgenstein correctly pointed out, Gödel's proposition "I'm unprovable" is not a proposition in Russell's system because it leads to inconsistency. Consequently, Gödel's incompleteness results are not valid for Russell's system. The proposition "I'm unprovable" cannot be constructed as a fixed point of P |-> ~|-P because ~|-P has order one greater than the order of P.

Could you give a reference for this claim of Wittgenstein’s, please?

A reference is:

Ludwig Wittgenstein, Remarks on the Foundations of Mathematics (1956), Revised Edition, Basil Blackwell, 1978.

There is a discussion here:


That seems entirely circular.

Gödel's proposition "I'mUnprovable" cannot be constructed as a fixed point of P |-> ~|-P because ~|-P has order one greater than the order of P.

If you actually want to make the proof above as rigorous as Goedel's original proof, you'll find that it will be exactly as technical as the original. The technical difficulty in the proof above is hidden in step 2: what you need to show is that every statement of the form "the Turing machine T = {..., ..., ...} halts on input ..." is expressible in the language of first-order arithmetic. To do that, you need to do exactly the same work Goedel did, with all the finite-string encoding etc.

> It's super easy to prove if you assume the Halting Problem is undecidable.

Well, sure, because proving the Halting Problem is undecidable is about as hard as proving Godel's Theorem without having the undecidability of the Halting Problem to start from.

The proof of the Halting Problem's undecidability is a lot less technically challenging. Here it is, off the top of my head:

Assume you have a program HALT(n) that will return 1 if the Turing machine described by string n halts when fed itself as input and returns 0 otherwise. Construct the program BOOM(n) that runs forever if HALT(n) returns 1 and returns 0 if HALT(n) returns 1. Now consider HALT(BOOM). If HALT(BOOM) is 1, then BOOM(BOOM) halts by definition of HALT, but BOOM runs forever in that case. If it returned 0, then it runs forever, but BOOM(BOOM) in fact halted. Something must give, and the only thing that has room to give is that HALT itself can be constructed (since the encoding of Turing machines to numbers is well-defined, and the construction of BOOM is well-defined and possible iff HALT is well-defined and possible).
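The HALT/BOOM construction above can be played out in a few lines of Python. Since a demo obviously can't run a real infinite loop, "runs forever" is modeled symbolically as a return value; the point is just that whichever answer a candidate HALT gives about BOOM(BOOM), the actual behavior contradicts it:

```python
# Model "runs forever" as the string "loops" so the demo terminates;
# a real BOOM would genuinely loop.
def make_boom(halt):
    def boom(n):
        if halt(n):            # halt claims n(n) halts -> do the opposite
            return "loops"
        return "halts"         # halt claims n(n) loops -> halt immediately
    return boom

# Whatever verdict a candidate decider gives about BOOM(BOOM), it's wrong:
for verdict in (True, False):
    halt = lambda n, v=verdict: v   # decider that answers `verdict` on BOOM
    boom = make_boom(halt)
    assert (boom(boom) == "halts") != verdict
```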

> Construct the program BOOM(n) that runs forever if HALT(n) returns 1 and returns 0 if HALT(n) returns 1.

I assume you mean "returns 0 if HALT(n) returns 0"?

That is correct. I did say this was on the fly ;-)

It isn't, actually. The proof of the undecidability of the halting problem is rather straightforward. You just assume you have a program that determines whether or not its input halts or loops. Then you write a program that uses your assumption as a helper and does the opposite:

If the input halts, then loop

If the input loops, then halt

You then pass the program itself as input to itself and you get a contradiction.

Here's my favourite version of this proof, in the style of a poem by Dr. Seuss: http://www.lel.ed.ac.uk/~gpullum/loopsnoop.html

I’m personally a fan of using Chaitin’s constant (Ω). It’s easy to show that, for some finite n, the nth digit of Ω is uncomputable. It follows that statements of the form “the nth binary digit of Ω is 0” can neither be proven nor disproven.

cough Hypercomputation cough

Hypercomputation is a fascinating concept; I particularly like infinite-time Turing machines, which can continue computing "at the end of time" (day ω), at the day after that (ω + 1), the day after that (ω + 2), at the second end of time (day ω ⋅ 2), the day after that (day ω ⋅ 2 + 1), and so on.

Infinite-time Turing machines can readily solve the halting problem for ordinary Turing machines, but (by the same proof) not for themselves.

The theory of infinite-time Turing machines is in some regards parallel to the theory of ordinary Turing machines and in others quite different.

I particularly enjoy the Lost Melody Theorem: There is a specific infinitely long 0/1 sequence for which

(a) it's possible to devise an infinite-time Turing machine which will correctly detect whether that 0/1 sequence is written on the input tape, but

(b) there is no infinite-time Turing machine which will, starting on an empty tape, write out that 0/1 sequence to the output tape.

It's like when you would recognize a song but cannot sing it. Some details are here: https://rawgit.com/iblech/mathezirkel-kurs/master/superturin...

> Lock yourself away in a room with nothing but pen and paper (no internet, not books) and try to reproduce Goedel's original proof. Then post the attempt here :)

We can all agree that would be hard, but your claim that the details are the central thing that makes GP hard is much, much stronger than that.

The relevant question would be: How many people alive right now could succeed at your challenge? My guess would be thousands, maybe tens of thousands.

And compare that to:

The number of people alive who, without any knowledge of GP, could come up with the key insight you describe in your 1 paragraph summary of the proof, and the insight of using godel numbers to label mathematical statements to make that argument rigorous.

The answer to that is "somewhere around 0."

I get that you're impressed by the proof's details -- they're impressive -- but it seems like an odd take to rate those as more difficult than the incredible creative leaps that produced the high-level ideas behind the proof.

>Again, it seems unlikely that this was the sticking point.

Look at the proof - page after page of work showing how to encode and decode formal propositions using primitive recursive functions.

>E.g. Hilbert understood mathematics as a symbolic game, thought about this exactly question, and didn't come up with Goedel's proof. And Hilbert was definitely no dullard.

Goedel's theorem is exactly that mathematics is not a symbolic game - that Hilbert's program is not possible. The proof relies on reducing logical deduction to arithmetic.

> Look at the proof - page after page of work showing how to encode and decode formal propositions using primitive recursive functions.

Look at the game. Move after move of rooks moving in columns and rows and of bishops moving on the diagonals. :)

> Goedel's theorem is exactly that mathematics is not a symbolic game - that Hilbert's program is not possible. The proof relies on reducing logical deduction to arithmetic.

The point was that thinking of mathematics as a game in the first place requires "...the conceptual leap to thinking about mathematical statements as mathematical objects..."

Goedel's proof didn't come out of nowhere, and he was NOT the one to first propose thinking of mathematical statements as mathematical objects. In many ways, Goedel's proof was actually a response to/rejection of "mathematical statements as mathematical objects".

Goedel's big leap was... the proof itself. Was it the numberings? Well, I guess, but really it was what he did with the numberings. And then how he related that to arithmetic. It's hard to pick out a single piece of the derivation and say "ah ha! That's the key insight!". So I think "the point is in the details" is pretty accurate.

You're missing my point.

Whether you have a thing memorized is fairly unrelated to how complex the thing is.

Godel's incompleteness theorem and Turing's halting problem are two faces of the same coin, I intuited. It is deeper than I thought, especially since I forgot that there are two of Godel's incompleteness theorems (not just one).

Scott Aaronson "popularizes" Kleene's textbook proof of Godel's theorems using Turing machines on his blog:


I would say that's a pretty inaccurate analogy. The incompleteness theorem can be considered a consequence of the halting problem, but not the other way around. The incompleteness theorem depends heavily on the properties of Peano arithmetic, as defined by the presence of both addition and multiplication. If you take an arithmetic with only one operation defined (Presburger arithmetic, which has only addition), the incompleteness theorem does not hold and all statements can be confirmed or disconfirmed by an algorithm (albeit not efficiently). The halting problem, by contrast, is much more general, and recurs in many settings where self-reference is possible.

Godel's incompleteness theorem rests on a system's ability to represent addition and multiplication, not on whether they are part of its definition. Basically, it recurs wherever reasoning about addition and multiplication is possible within the internal theory. This is why it applies to e.g. ZF set theory, even though there is no mention of addition, multiplication, or even numbers in ZF's definition.

The role of Peano arithmetic is somewhat analogous to the role of the particular definition of a Turing machine in the proof of the halting problem: you can easily swap it out with something commensurate and get the same result.

If you can represent something, it's defined. There's no real distinction there. In weaker systems, this is not possible. ZF proves all of the Peano axioms, for example.

(Stop pretending to explain things to me that I already know.)

Then what do you mean by "depends heavily on the properties of Peano arithmetic"? There are lots of axiomatizations of arithmetic that can represent addition and multiplication that are not equiconsistent with Peano arithmetic, just like there are lots of definitions of computability that are equivalent to Turing machines (e.g. the partial recursive functions).

Also when talking about a formal theory distinguishing the definition (i.e. the axioms) and the theorems is pretty important.

EDIT: I realize now that a specific example would be helpful. Goedel's theorem can be applied to primitive recursive arithmetic[1], which is neither weaker nor stronger than Peano arithmetic. Interestingly enough, PRA with a small addition (a broader transfinite induction) can actually prove the consistency of Peano arithmetic[2].

[1]: https://en.wikipedia.org/wiki/Primitive_recursive_arithmetic

[2]: https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof

I mean this:

>represent addition and multiplication

It's the simplest possible explanation. Stop being a pretentious jerk. Yes, I know about primitive recursive arithmetic. I first learned about it about ten years ago.

EDIT: Wait a second. Look, I said this:

>depends heavily on the properties of Peano arithmetic, as defined by addition and multiplication

I said it right there. How could you not know this? When I said "the properties of Peano arithmetic", I meant the presence of addition and multiplication.

Let me explain it another way: Goedel's theorem is a theorem about math. Math as it was understood in the 18th century. That's what makes it interesting. The halting problem is proven with an abstract machine that was invented, mostly, to use as a basis to form analogies with other machines. So it's general by construction. (If the integers were not already interesting, Goedel's theorem would be like proving a variant of the halting problem for some weird computational structure that has no relevance to anything and is absurdly cumbersome to prove equivalent to other systems. But for the integers, the analogy is why it's interesting. Nobody expected the Goedel numbering.)

Yes, Goedel's theorem applies to all interesting versions of the integers, but it does not apply to all interesting mathematical systems. IIRC people usually cite the theory of "real closed fields" or something like that.

Just because I mention something doesn't mean I assume you don't know it. This is a public forum and although it's Hacker News I still doubt everyone has heard of primitive recursive arithmetic. You're the only one making personal statements here. Also I don't know what sort of code I'm supposed to use to make my arguments without offending you, if PRA is the example I think illuminates my point I kind of have to write its name whether you know of it or not.

> I said it right there. How could you not know this? When I said "the properties of Peano arithmetic", I meant the presence of addition and multiplication.

That's a curious definition of "heavily dependent" on the properties of Peano arithmetic (when the condition is in fact "any integer arithmetic with addition and multiplication"), and I'd argue it would be hazardously confusing to the uninitiated, but that's just semantics. Back to the original point (that the analogy is poor) I don't see how that breaks the analogy at all. Just like there are weaker arithmetics that escape Goedel's theorems (I will not mention their names since that seems to just cause issues), there are weaker models of computation that dodge the halting problem (again I'm sure you know of them, the one I'm thinking of off the top of my head rhymes with schmimative shmecursive shmunctions).

> Let me explain it another way: Goedel's theorem is a theorem about math. Math as it was understood in the 18th century. That's what makes it interesting.

I think what makes it interesting has nothing to do with the particulars of what people thought math was in the 18th century. I doubt there are any mathematicians, from ancient Greece to a thousand years from now, who would seriously consider using a logical system incapable of modeling addition and multiplication as a ground-level theory for the majority of mathematics. They probably would be fine with one that only killed off Peano arithmetic, just like we'd be fine with a form of computation that wasn't as limiting as PRFs but didn't happen to be susceptible to Turing's reasoning. It seems just as general by construction, and I doubt it is an accident that Goedel used something so basic.

Basically what I'm saying is that just like the Church-Turing thesis has held up and all interesting forms of computation are equivalent to Turing machines, there is an implicit Addition-And-Multiplication-Are-Basic thesis that claims that systems unable to model addition and multiplication in some form are fun to visit but not to live in when you're doing mathematics.

A system so weak it can't represent arithmetic is a notable but minor edge case that doesn't really deserve to be called a logic system.

(Stop pretending people can read your mind.)

It's not an analogy. I thought so too at first, but the connection is material: the blog post gives a proof of Godel's theorems using the Halting Problem.

While the Berry paradox isn't strictly speaking mathematically sensible, there are similar ideas that lead to new and less well-known impossibility results. https://arxiv.org/pdf/chao-dyn/9406002.pdf has a transcript of an overview of the idea and main results by the researcher who came up with them, as well as a fairly extensive bibliography.

There's also a version of Russell's paradox lurking in the standard proof of Cantor's theorem (https://en.wikipedia.org/wiki/Cantor%27s_theorem).

I couldn't make it through GEB, but I read a good concise explanation in a book called Godel's Proof: https://www.amazon.com/G%C3%B6dels-Proof-Ernest-Nagel/dp/081...

See the following for history and limitations of Godel's results:


I just read "I Am a Strange Loop" and the author does a pretty nice job of explaining this. Unfortunately, I could never get through GEB, so I do not know if the explanation is in there.

It's there, about the middle of the book.

For those interested, Hofstadter's Godel, Escher, Bach gives a great introduction to formal systems and Godel's incompleteness theorem. Highly recommended.

I read it cover to cover out of a sense of obligation but came away very disappointed. The Achilles-Tortoise-Random Other Creature allegories got very tiresome, and the content took a very long time to get to the point. For anyone interested in the proof, I recommend Gödel’s Proof by Nagel instead. It covers the same material in about a tenth the space.

(GEB has other philosophical discourses that are interesting, if only as an example of how people can very reasonably mispredict the future. In Hofstadter's case, he posited that chess would only be played at a human level by an AGI.)

Absolutely agree. GEB constantly tries to prove the cleverness of the author, complete with winks and nudges. It stacks so many unintuitive analogies on top of each other that I spent most of the time trying to recall which real-world concepts they all mapped to.

Also, the book is about 98% Gödel and 2% Escher and Bach. Any time it steps outside its wheelhouse of math and into the realm of philosophy or art, you get the distinct sense that it has no idea what it's talking about.

Do yourself a favor and just take discrete math 1 and read a few Wikipedia articles instead.

> Do yourself a favor and just take discrete math 1 and read a few Wikipedia articles instead.

+100, exactly. I have some familiarity with mathematical logic, and with Bach (though admittedly I'm less familiar with Escher) and I found GEB to be unintuitive at best, and misleading at worst.

To be honest, there may not be anything really magic about Goedel's incompleteness theorem once you grasp the core concept, which is: If you encode all proofs into a formal language, you can make a statement that is true but unprovable. (Which is, effectively: "This statement has no proof"). Most of Goedel's work on the incompleteness theorem was on the systematization of proofs, which can be tedious to work through.

I found at the time Godel's completeness theorem to be more interesting. Though I never got into model theory or more advanced mathematical logic so, grain of salt.

I agree that it's way over-long and rambling, but it was my introduction to the topic. Where it really shines is his interesting and controversial thesis that connects Godel's proof with the Church-Turing thesis and other things to form a theory of consciousness as pattern and make the claim that a sufficiently powerful AI is as sentient and meaningful a consciousness as we are.

I read Godel's Proof after though, and I agree that it is a far clearer introduction to Godel's theorems.

Chaitin [2007] presented the following analysis:

Godel's proof of inferential undecidability [incompleteness] was too superficial. It didn't get at the real heart of what was going on. It was more tantalizing than anything else. It was not a good reason for something so devastating and fundamental. It was too clever by half. It was too superficial. [It was based on the clever construction] I'm unprovable. So what? This doesn't give any insight how serious the problem is.

If you still have the energy or interest, try I Am a Strange Loop, which he wrote a number of years after; it seemed more approachable.

Thank you... I just bought it via Amazon.

The New Turing Omnibus has a good chapter on the Incompleteness Theorem; it covers the basics in 5 or 6 pages, if memory serves.

I found 'Gödel’s Proof' fascinating—surprisingly more so than GEB, which I'd anticipated loving for years before dipping into it (I didn't finish, but the primary reason was loss of interest). It was one of the first things I came across that gave me the sense of something like 'mathematical system architecture'.


And GEB's explanation of it is more complete/exact than the one in the article.

It is also amazingly mind-twisting. Not the explanation per se, but the theorem.

And I think its consequences are underappreciated: no (sufficiently complex) logical system is free from self-contradictions.

I believe that's a misinterpretation of Goedel; it's an incompleteness theorem, not an unsoundness theorem.

Incompleteness: there exist true statements that lack proofs

Unsoundness: there exist false statements with proofs

If I understand all this correctly, what Goedel demonstrated was that a sufficiently complex logical system could self-reference, and from self-reference unprovable (not contradictory) true statements fall out.

By reductio ad absurdum any self-contradictory logical system is useless as a logic because everything is provable in it, so one wonders if such systems should even be considered "logical systems".
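The "everything is provable" point is the principle of explosion (ex falso quodlibet). As a sketch, it is a one-line theorem in Lean 4 (the theorem name is mine):

```lean
-- Principle of explosion: from a contradiction (P and ¬P), any Q follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

So in an inconsistent system, where some P and ¬P are both provable, every proposition Q has a proof.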

The inconsistency part is the 2nd incompleteness theorem.

You don't RAA it because you can't prove the system is consistent (because you can't prove the counterexample)

"Gödel's second incompleteness theorem states no consistent axiomatic system which includes Peano arithmetic can prove its own consistency. Stated more colloquially, any formal system that is interesting enough to formulate its own consistency can prove its own consistency iff it is inconsistent." [1]

I believe your position is that "sufficiently complex" == "able to formulate and prove its own consistency"; my position is that such a system would be worthless as a logic, because in such a system you can prove anything.

Your second paragraph is inscrutable to me. Are you trying to claim that you can't prove everything with a known inconsistent logic? Are you referring to paraconsistent logics?

[1] http://mathworld.wolfram.com/GoedelsSecondIncompletenessTheo...

> "sufficiently complex"

Meaning, not "trivially simple". The description in your link is good enough; the description on Wikipedia is more detailed.

> any formal system that is interesting enough to formulate its own consistency can prove its own consistency iff it is inconsistent.

That's what "sufficiently complex" means and why you can't Reductio Ad Absurdum it.

> And I think its consequences are under appreciated: no (sufficiently complex) logical system is free from self-contradictions.

That doesn't seem right? Any logical system that has a self-contradiction (which I take to mean can prove a statement and its inverse) will be able to prove everything, which is obviously a pretty big flaw.

What Godel's 2nd incompleteness theorem says is that if a system can prove its own consistency, then it follows that it is inconsistent.

The Dedekind/Peano higher order theory of the natural numbers proves its own formal consistency.

In this theory, proof checking is computationally decidable even though the theorems are not computationally enumerable by a total procedure.

> The Dedekind/Peano higher order theory of the natural numbers prove its own formal consistency.

No it doesn't - that would contradict Goedel's well-established theorem. The formal consistency of Peano arithmetic has been proven, but not by Peano arithmetic:

> Gentzen's theory obtained by adding quantifier-free transfinite induction to primitive recursive arithmetic proves the consistency of first-order Peano arithmetic (PA) but does not contain PA [...] Gentzen's theory is not contained in PA, either, however, since it can prove a number-theoretical fact—the consistency of PA—that PA cannot. [1]

[1] https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof#...

Godel's results are for the 1st order variant of the Dedekind/Peano theory of the natural numbers. Consequently, there is no contradiction with the inferential completeness (every proposition is provable or disprovable) or self-provability of formal consistency in the higher order theory.

Also, Godel's proposition "I'mUnprovable" does not exist in the higher order theory because it doesn't type check.

See https://hal.archives-ouvertes.fr/hal-01566393

What's being proposed is an analogue of Peano arithmetic that isn't effectively axiomatized (i.e. the axioms are not recursively enumerable), and therefore not subject to Goedel's theorems. That isn't usually palatable in a formal theory, and therefore not what most mathematicians would think "Peano arithmetic is consistent" means.

Of course, [Dedekind 1888] famously proved that the higher order theory characterized the natural numbers up to a unique isomorphism, which is impossible in the first order variant of this theory, which was developed later. Proof checking is computationally decidable in the higher order theory, which means that it is "effectively axiomatized." The fact that the theorems of the higher order theory cannot be computationally enumerated by a provably total procedure is irrelevant because, in practice, it does no good to enumerate the theorems.

That is not the definition of effectively axiomatizable as required for Godel's theorem to apply, which is that the axioms be recursively enumerable (not the theorems, which I agree is of little use aside from metatheory).

I also don't understand how proof checking can be decidable if the axioms are not recursively enumerable. If proof checking is decidable, there must be a way of determining if a use of an axiom refers to a valid axiom. But if you can do that, as long as you can put the syntactic formulas of the system in bijection with the natural numbers (which I presume is the case if this is aimed at computing, and would hope in general that you can write them on paper using a finite string of symbols) you can produce an enumerating program by just running the decision procedure on every syntactic formula and only emitting ones that pass. More succinctly, all recursive sets are recursively enumerable.
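That last step ("all recursive sets are recursively enumerable") can be sketched in Python: given any total decision procedure on strings, you get an enumerator by checking every string in order and emitting the ones that pass. Function names here are illustrative, not from any particular formal system:

```python
from itertools import count, product

def enumerate_accepted(decide, alphabet="01"):
    """Given a total decision procedure `decide` on strings, yield exactly
    the strings it accepts, in length-lexicographic order. This turns any
    recursive (decidable) set into a recursively enumerable one."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if decide(s):
                yield s

# Toy decidable set: bit-strings with an even number of 1s.
def even_ones(s):
    return s.count("1") % 2 == 0

gen = enumerate_accepted(even_ones)
first_five = [next(gen) for _ in range(5)]  # "", "0", "00", "11", "000"
```

The same shape of argument applies to axioms: if "is this formula an axiom?" is decidable and formulas can be encoded as finite strings, this loop enumerates the axioms.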

The axioms of the higher order theory of the natural numbers are not countable and consequently not computationally enumerable. For example, there are uncountable many instances of the higher order induction axiom. Consequently, there are axioms that are not expressible as the abstractions of finite strings, just as there are real numbers that are not expressible as the abstractions of finite strings.

But proof checking is still computationally decidable by a provably total higher order procedure. See the following: https://hal.archives-ouvertes.fr/hal-01566393

Would you mind explaining how the system avoids proofs being a recursive set implying the axioms are recursive? I'm afraid I don't understand how you can invoke axioms that are not expressible via finite strings in a proof that is expressible in such a manner.

Instances of the induction axiom are uncountable for the higher order theory of the natural numbers, which obviously means that they cannot be enumerated using the natural numbers. However, it is very easy to use the induction axiom in proofs that are expressible using finite strings. Please see the article above published in HAL Archives.

The 1st order variant of the Dedekind/Peano theory of natural numbers is not suitable for computer science because it allows infinite integers and other monsters.

See the following: https://cacm.acm.org/blogs/blog-cacm/231495-what-turing-and-...

sound NAND consistent

obligatory "I am a Strange Loop" reference here for anyone who wants a more polished introduction into the topic from many years later. He wrote GEB when still very young and had time to refine his points in this later book.

Unfortunately his style sticks.

I feel like the further you go in maths, the less attention is given to details, which are "left as exercise to the reader".

This depends a lot on the branch of maths: e.g. topology has much more of a reputation for being wishy-washy than analysis.

It also depends what the context is. A paper is not meant to prove things sufficiently for a layperson but for the author’s peers and so an omitted proof means one of “proven elsewhere;” “follows in a straightforward way from the definitions;” “follows in a straightforward way from an earlier proposition;” or “follows from a well-known pattern in the subfield.” And here “follows in a straightforward way” typically means something between “just look at;” “at each step there is only one reasonable thing to do so just do that;” and “apply all the standard tricks from the subfield and see what sticks.”

It is not true that advanced mathematics is less rigorous than more elementary maths, it’s just that materials for advanced topics require more domain knowledge, and provide less hand-holding.

"you must understand when I say it's trivial I just mean that I can do it "

And I feel like this is where better hyperlinking of papers could be used. Like not just citations, but the ability to click through to the "proven elsewhere" perhaps with a short paragraph of how the given citation proves it.

Hyperlinking wouldn't provide much help. It's easy to prove this because academics have been hyperlinking for hundreds of years. A lay person finding the connection between [21,45,48,60] and the "trivial" step of theorem 15 is the problem.

So, hyperlinking is not what is needed for outsiders. What's needed is lots of extra exposition and explicit connection-building/hand-holding/narration.

The result of the hard work of doing so is typically called "a book". Which, hopefully, makes it clear why no author does this exposition and connection-building work for everything they write...

Paper writing is hard. Adding more material - even supplementary material just slows down the work of good people.

On the other hand, I do feel like there should be websites where papers can be annotated by readers, who can provide missing details etc. Though this does require a lot more papers to be open access.

On the other hand almost all papers have a short introductory section where they present basic concepts in a concise way. I find them great for learning and reinforcing concepts. They should be collected together in a wiki. We would have dozens of variations for each concept and great coverage.

See: https://fermatslibrary.com/journal_club

Primary problem is interest, really. No one's going to annotate "The Last Ditch Journal Of Applied Theorem"'s non-headliner papers.

Basically something like rapgenius but for papers?

Wikipedia hyperlinks definitions, but if you don't have a background in the field, following all hyperlinks off hyperlinks produces something of an exponential explosion of things you'd have to understand before you can understand a particular article.

True... how many really know Graph Theory at the level we use it to analyze code object coupling, RTTI, etc.?

I know some authors who write out all such proofs and then remove or shorten them for publication. What may seem incredibly hard for you may be just tedious to read for someone else. A math paper is more than just proofs. It’s also a story.

I feel small things like that, or professors not understanding their students' perspectives — apparently this is a known cognitive bias called "curse of knowledge" — unnecessarily restrict access to the field.
