Gödel's Completeness Theorem
55 points by adamnemecek 74 days ago | 71 comments

This theorem seems to be like monads: someone posts about it on the internet, someone doesn't understand and asks for help, someone comes along and writes a simple explanation that seems to make sense, then someone says that explanation is completely wrong (but here's this totally inscrutable "correct" one... which someone comes along and says is even worse). And everyone is left confused. Like monads, I think the issue comes from "frames" of thinking. You cannot understand monads in terms of functions as understood in most languages. You also can't understand them in terms of their common uses like I/O. Yes, they can be applied in those domains, but it is like explaining the internet with electricity.

Just read this book, IMO: https://www.amazon.com/G%C3%B6dels-Proof-Ernest-Nagel/dp/081... It's only 160 pages and gives what seems to be a good explanation of the basics.

Currently reading Morris Kline's "Loss of Certainty" - a beautiful, very readable work that elucidates the relationship of mathematics with concepts of 'truth' and 'reality' throughout mathematical history.

Funny that you mention this: you can construct monads with this in some sense. GCT talks about adjointness (it's analogous to William Lawvere's work "Adjointness in Foundations"). One can build monads from adjoint functors.

I was mistakenly thinking of the incompleteness theorems when I wrote this, but I guess it kind of works for this one too.

This theorem actually changed my life. Once it clicked for me it shifted my understanding of reality significantly.

I read most of Gödel, Escher, Bach years ago and I remember it clicking back then... but now I've lost it again. :(

Can you expand on this? I am quite curious.
This, alongside the second incompleteness theorem, showed me that a formal system cannot prove its own consistency, and any system deemed "consistent" can only be considered as such from outside that system. A great way that I visualize it (thanks to Hofstadter) is like Escher's never-ending staircase painting. If you look at each step, it looks just fine. In fact, you can say that each step on its own is "consistent". But it is only when you take a step back, outside the system of each isolated step, that you see the paradox. Similarly, the theorem applies this to mathematics and, by extension, any formal system. Just as we can see each "step" going from one to the next, in mathematics we must hold an axiom as truth without proof, so that subsequent theorems may use it as the next "step". It is, in a sense, a paradox which we agree to use. In practical terms, I may extrapolate that further and more philosophically than was intended (or than I have commonly seen), but the subsequent Halting Problem combined with the Curry-Howard isomorphism shows me that Godel's theorem has major real-world effects. The idea that there can be something that is true but not provable rings true to me in many regards. I, like many, spend nearly every moment of the day unconsciously putting faith into the "truth" that I am not alone in the universe, thereby fending off solipsism. I cannot prove "I" exist. But it is true. What are the other truths that cannot be proven? That question guides my thinking.

The first incompleteness theorem is a statement about finitary number theory. Even Godel, who was an ardent platonist, wouldn't try to extrapolate it to philosophy.

Yes, I understand that, and am well aware of Godel's position on its application. Hence my statement that I extrapolate it further than intended. My extrapolation is influenced by the Curry-Howard isomorphism, equating computer programs and mathematical proofs.
Extrapolation to "reality" and philosophy, to me, comes from my belief that computing is universal and that consciousness is not so separate from computing.

That's kind of ridiculous, though. You completely lose the sense of the theorem once you expand the axioms beyond those explicitly defined. It would be like saying we don't know whether a boulder will stop rolling down a hill or not... because of the Halting Problem. That's just not how mathematics works.

I'm not expanding the axioms, I'm comparing the paradoxical nature of mathematics to other systems. I'm not saying that reality is encompassed in the theorem because "mathematics"; I'm positing that if mathematics, a man-made formal system, is self-consistent but requires a true, unprovable statement, then perhaps other man-made systems might reveal similar paradoxes when put under scrutiny. Mathematics is unique in that it is systematic and, as Godel discovered, able to be self-referential and explicitly outline this paradox. Yet one can recreate the sense of the theorem in English with the phrase "this sentence is false". This doesn't invalidate mathematics, nor English, nor any system. It simply demonstrates that what we may consider the "sound" logic of mathematics is paradoxical, and I believe that this paradox, this strange loop, is not unique to Godel's mathematical theory alone.

Considering the Godel sentence is a purely syntactical construction, I think even Godel himself would have a very difficult time even imagining what an analog would be in some higher system (much less actually constructing one). This is why he was so critical of attempts by others (e.g. Wittgenstein) to knock down the theorem by discussing its philosophical implications.

Thankfully we're not constrained to only consider it the way Godel did, but are free to extrapolate it to philosophy...
We are also free to point out the obvious nonsense that is extrapolating the axioms of arithmetic, on which the theorem depends, to whatever we want them to be.

Philosophical insights can start from all kinds of empirical facts and theoretical proofs -- this includes a mathematical theorem like Godel's. Philosophers don't (or don't all) use Godel's theorem to _prove_ something beyond it in an axiomatic way. They don't extrapolate; they use it as raw information/material for further thinking. In that role, the limited conditions under which the proof holds don't really matter. Only the fact that under such limited conditions this (Godel's) result can occur matters -- which is philosophically far from evident; Hilbert, Russell, etc. thought they could axiomatize math without such contradictions.

I've tried to make this click for me for a long time, to no avail. Do you have any tips?

Essentially, all logically valid formulae have proofs in first-order logic. Remember, a logically valid formula means that if you enumerate all possible interpretations (boolean values for the variables), the formula remains true. So you could prove everything that ever exists by creating a set of all possible proofs using the given deduction rules.

This is first-order logic, so it's a bit more complicated, though you're on the right track (what you stated is the completeness theorem for propositional logic, not first-order logic). Rather than boolean values for the variables, you need to enumerate all possible...

* Ambient universes where the language is interpreted
* Values (taken from the ambient universe) for the constant symbols
* Sets-of-tuples-of-values (from the ambient universe) for the predicate symbols
* Functions-from-tuples-of-values-to-values (from the ambient universe) for the function symbols

Yup, my definition of interpretation was limited for pedagogical purposes, but I should have been clearer.
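For the propositional case described above, "logically valid" can be checked directly by brute-force enumeration of boolean assignments -- a minimal sketch (the formula representation via Python lambdas is an assumption for illustration):

```python
from itertools import product

def is_valid(formula, variables):
    """Check propositional validity by truth-table enumeration.

    `formula` is any function from a dict of variable -> bool to a bool.
    A formula is logically valid iff it is true under every interpretation.
    """
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p -> q) or (q -> p) is valid: true under all four assignments.
print(is_valid(lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]), ["p", "q"]))  # True

# p or q is satisfiable but not valid (false when both are false).
print(is_valid(lambda v: v["p"] or v["q"], ["p", "q"]))  # False
```

The propositional completeness theorem then says exactly that the formulas passing this semantic check are the ones derivable from the deduction rules; for first-order logic the enumeration would have to range over the richer interpretations listed above, which is no longer a finite check.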
The propositional logic version is much simpler to understand, but the implications of the theorem are much more interesting in first-order logic, so introducing all of this nomenclature may be too much for someone who isn't motivated yet.

I think going through The Little Schemer by Daniel Friedman and Matthias Felleisen helped me a lot. It builds up your knowledge in a conversational way to achieve understanding of quite complex ideas in simple terms. In that case, it builds up to an understanding of the halting problem, and then to the Y Combinator. The click felt to me similar to understanding recursion for the first time--it doesn't fit into your head naturally. Once it does "click", even then you have to re-remember it later, and trace your logic on how you "realized" recursion. Godel's Theorem brings that "feeling" of recursion and takes it to the extreme, in that its subject matter is "meta"-mathematical. So in the same way that recursion can't be grokked in terms of only loops, Godel's Theorem can't be grokked only in terms of equations. I don't know if that made sense; I've also seen some links in this thread which I think may help as well.

Von Neumann said: "In mathematics you don't understand things. You just get used to them." For the practical purpose of getting the completeness theorem to click, my advice would be to stop thinking about it and instead just use it in a bunch of proofs. The way you use it in a proof is you say something like "...and since, as we've just shown, this sentence phi is true in all structures for the language, it follows that there is a proof P of phi, and therefore..." If writing such proofs isn't the sort of thing you want to do, then you almost certainly don't actually need to know Goedel's Completeness Theorem, which is a highly technical theorem whose philosophical, epistemological, metaphysical, mystical, and magical applications are all completely overblown by people who want to sell books.
I took an entire college course on it and it never really clicked for me. Still very interesting though.

Godel's work has always been incomprehensible to me. No matter how I attempt to understand his theorems, I find them impenetrable. Apparently they are of great consequence, so I'm very interested, but to no avail.

I can relate. My understanding of it thus far leads me to think I can summarize it fairly well though, and I would welcome other people's input or critique on this. It seems like it's so consequential because he demonstrated that no matter what kind of mathematical system you're using -- and no matter how much mathematics generally speaking develops -- there will be objectively true mathematical statements within that system that can't be proven. If that part of my understanding is correct, the part that's really interesting to me is whether we can know these true statements to be true, despite them not having proofs. This is where I could be misunderstanding things I suppose, but it suggests there's a disconnect between what's knowable and what's provable, and furthermore, that we can know more than we can prove. To actual seasoned mathematicians: is this a really naive interpretation of what I've read, or not?

You need to be a bit more specific: no matter what kind of true, computable axiom-set you're using (this has nothing to do with 'how much mathematics generally speaking develops'), there will be objectively true mathematical statements that can't be proven by that axiom-set.

> a disconnect between what's knowable and what's provable

"What's provable [from a given axiom-set]" is a concrete, technical, unambiguously defined set of things. "What's knowable" is a vague philosophical set of things.
Goedel's Incompleteness Theorem is a technical result about the former, and it's a common mistake to assume it says anything about the latter, except very tangentially. For those who are interested in the misty area where the two things do overlap, I will shamelessly plug this 2-page paper of mine, "Mathematical shortcomings in a simulated universe": https://philpapers.org/archive/ALEMSI.pdf

I think they meant what is true, which is also concrete, technical and unambiguously defined in this setting. But you are very correct to make the distinction between truth and knowledge. Pointing out that many philosophers have believed that knowledge is justified true belief might elucidate the relationship a bit.

> If that part of my understanding is correct, the part that's really interesting to me is whether we can know these true statements to be true, despite them not having proofs.

There may be proofs, just using different fundamental axioms. The finding is that there is no initial set of axioms that by themselves allow proving all provable truths.

You're thinking of his incompleteness theorems, not this completeness theorem for first-order logic.

My response to the parent covers both.

I wonder if a slightly higher-level explanation might help. If any real mathematicians are around, please correct me if I'm inaccurate. An alphabet is a set of symbols. Symbols can be combined by the operation of concatenation to form strings. Strings either satisfy the property of being syntactically valid or not. Let those strings that satisfy the property of syntactic validity be called sentences.
The set of all sentences that are syntactically valid is called a language. I think the trick to understanding this particular theorem of Gödel is to know that in such formal systems there is a difference between semantic consequence (which is about truth) and syntactic consequence (which is about proof). A sentence is provable if it is derivable from other sentences of the language by applying certain syntactic transformation rules (inference rules) to them. On the other hand, a sentence of the language is logically valid if it is true under all possible interpretations (sorting out what this means is what model theory does for you). If every sentence that is provable is logically valid, then the system has the property of soundness. If every sentence that is logically valid is provable, then the system has the property of completeness. The systems are designed so that we only need to do this syntactic manipulation, much like algebra, to produce proofs, but we always have a map giving us the meaning of any given sentence. This is in fact what "formal" means in this context: that in proofs we're only looking at the form (syntax) of the sentences and not the interpretation/meaning (semantics) of any of the sentences. To reiterate: Something can be true but not provable, e.g. in an incomplete system. Something can be provable but not true, e.g. in an unsound system. If everything that can be proven is true, then the system is sound. If everything that is true can be proven, then the system is complete. First-order logic is sound and complete. Now when a system is extended with arithmetic, then due to Gödel numbering, it is suddenly capable of a form of self-reference. This allows the system to state metatheorems about itself. If it can do this, then it is necessarily incomplete. In particular, it cannot prove its own consistency.
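The two properties described above are often written compactly; for a set of sentences Γ and a sentence φ, with ⊢ meaning syntactic derivability and ⊨ meaning semantic consequence:

```latex
\text{Soundness:}\quad \Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \models \varphi
\qquad
\text{Completeness:}\quad \Gamma \models \varphi \;\Longrightarrow\; \Gamma \vdash \varphi
```

Gödel's completeness theorem is the second implication for first-order logic: whatever is true in every model of Γ already has a finite syntactic derivation from Γ.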
I was the same until I watched this:

Do you understand Turing's proof that the halting problem is undecidable? Godel's proof of the incompleteness of mathematics is really the same thing. He just did his proofs before computers were invented, so a lot of his explanations are needlessly complicated by modern talking-about-algorithms standards.

It is pretty incomprehensible. It relates to the nature of words like "proof" and, without a doubt, those are tricky concepts. There's also a lot of implied background. It sounds simple to say what's going on in informal language and hope that it makes sense. That's not true, however. The statement of the theorem relies on the machinery of formal proof. That machinery and, more importantly, its relation to day-to-day reasoning is big and complicated.

---

A formal theorem is a statement in a language about the world. Typically speaking, these statements are not interesting without "quantification", the use of terms like "for all" or "there exists". Quantification makes formal theorems powerful by letting them speak about an entire universe of things all at once. The meaning of quantification is difficult. A sentence like "For all x, E(x)" is a discussion of the nature of the "universe" of x's. Sometimes we even refer to that explicitly when we say "For all x in X, E(x)", and upper-case X is that universe itself. For the theorem to be true, we need to not only pick an explicit universe X and then show that E(x) is true for any choice of x in X, but also we often have to show that this whole process continues to work no matter which universe X is chosen. This is the power of universality: making such an incredibly wide-reaching claim. (From a programming perspective this is the difference between types and tests. Tests show E(x) for all kinds of interesting x's. Types show "for all x in X, E(x)".
There's some magic to this.)

So one kind of way to show the validity of a theorem is to enumerate all universes (at least all of those that are considered "compatible" with the theorem) and then show that all choices of elements from that universe support the statement of the theorem. This is considered to be "semantic" meaning in that it's how theorems are practically useful. We only care about theorems when they've been instantiated in a practical way. Their value comes out of how freely we can choose to instantiate them.

But there's another tricky way. Formal proofs are (usually) data structures constructed cleverly so that their shape relates to the syntactic structure of a theorem. They exploit the compositional nature of the theorem language and afford means to manipulate theorems (and their corresponding proofs) in a purely mechanical way. For instance, if I have theorems A and B as well as formal proofs of those theorems, a and b respectively, then I can have a proof of the composed theorem (A & B) by composing the sub-proofs together into a tuple like (a, b). This is a totally mechanical exploitation of the structure of (formal) proofs and theorems.

The benefit of formal proofs is that syntactic manipulation can be verified very quickly. Given a theorem T and a (potential) proof P, I can tell you whether P actually proves T very quickly. It's very difficult to find a proof P given only a theorem T. It's very difficult to write a theorem T which matches what you're interested in. But if you've got them all, then I can verify your proof quickly. On the other hand, actually checking every universe where your theorem might hold and proving that it does is expensive no matter how you cut it. We'd like to be able to skip over that work.

And that's what Gödel Completeness does. Gödel showed that when a theorem holds in all possible universes (it has semantic meaning) then there must be a formal proof of it (it has syntactic meaning).
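The "proofs as data structures" idea above -- a pair (a, b) mechanically proving A & B, checkable quickly -- can be sketched with a toy verifier. The datatypes here are invented for illustration, not any real proof assistant's format:

```python
# A toy fragment: theorems are ("atom", name) or ("and", A, B);
# proofs are ("axiom", name) or ("pair", a, b), mirroring the theorem's shape.

def checks(proof, theorem, axioms):
    """Verify mechanically (and quickly) that `proof` proves `theorem`."""
    if proof[0] == "axiom" and theorem[0] == "atom":
        # an axiom proves exactly the atomic sentence it names
        return proof[1] == theorem[1] and proof[1] in axioms
    if proof[0] == "pair" and theorem[0] == "and":
        # a pair (a, b) proves A & B iff a proves A and b proves B
        return checks(proof[1], theorem[1], axioms) and checks(proof[2], theorem[2], axioms)
    return False

axioms = {"A", "B"}
A, B = ("atom", "A"), ("atom", "B")
print(checks(("pair", ("axiom", "A"), ("axiom", "B")), ("and", A, B), axioms))  # True
print(checks(("axiom", "A"), ("and", A, B), axioms))  # False
```

Verification is a single structural walk over the proof, which is exactly why checking a given proof is cheap even when finding one is hard.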
Since we're generally interested in semantic meaning (e.g. the applicability of a theorem to whichever universe(s) we happen to care about), we're very excited to learn that syntactic meaning will mirror that semantic meaning. Without Completeness, syntactic proofs would still be valuable. A syntactic proof still implies semantic validity. What wouldn't be true is the applicability of syntactic reasoning. Maybe there are tons of important theorems out there which are true but can only be shown to be true through exhaustive enumeration? Suddenly mathematics as an act of human communication is fraught---many interesting things can only be explained exhaustively. There's often no way to "compress" meaning down into a pithy proof. But, no, completeness holds and formal proofs hold exactly the position of meaning we'd hope that they would. Well. At least for first-order theories.

The theorem reminds me of William Lawvere's "Adjointness in Foundations". Formulas/programs are in an adjoint relationship with their executions. The fact that they are adjoints is a big deal.

Is that related to the Curry-Howard isomorphism?

Godel's theorems are a popular subject among technophiles, but the popular conception of their implications is seriously off base. Torkel Franzen's book is a quick read and will guide you away from the most common errors.

Most people talk about the incompleteness theorems, not completeness. The book does briefly mention the completeness theorem.

Anyone interested in this topic should be sure to follow the various links on the page. One of my personal favorites is the Compactness Theorem: https://en.wikipedia.org/wiki/Compactness_theorem

Oh my! This reminds me of a class I took in my masters. Had to learn the "Halting Problem". Godel's work is just astonishing and way beyond my caliber. I managed to pass the class but the memories are still with me.
I still think most CS courses teach the Halting Problem in just about the worst way possible. They always give you that one highly contrived counterexample where you're feeding the algorithm with the output from itself, which doesn't even begin to touch on what the Halting Problem actually is or why it is so important. And if the student asks "well, what if we redefine it so it only works on OTHER algorithms to avoid this one weird edge case", the answer is basically "that's not covered." But really, the Halting Problem is asking the question of whether we can solve all of mathematics, and indeed all of philosophy, with a fancy enough computer program. Can we build a machine God? And the standard textbook answer is roughly "If we built a machine God it could create a burrito so big that even it could not eat it, therefore machine God cannot exist." IMHO, it would make more sense to get students down the right path to ask them what the halting computer would do when fed a program that calculates all of the digits of Pi. Or one that halts when it computes the answer to life, the universe, and everything. The halting problem may have been mathematically disproven by finding one highly convoluted counterexample, but its more significant failing is that it is infinite and thus cannot be implemented on a machine with finite limits.

> And if the student asks "well, what if we redefine it so it only works on OTHER algorithms to avoid this one weird edge case" the answer is basically "that's not covered."

Only if the teacher doesn't know what they're talking about.
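The "contrived counterexample" mentioned above is the classic self-reference trick, which can be written out in a few lines -- a sketch, with `make_contradiction` and the stub deciders invented here for illustration:

```python
def make_contradiction(halts):
    """Given any claimed halting-decider `halts(program) -> bool`,
    build a program that does the opposite of what `halts` predicts about it."""
    def g():
        if halts(g):
            while True:   # loop forever, contradicting halts(g) == True
                pass
        # otherwise halt immediately, contradicting halts(g) == False
    return g

# Whatever concrete answer a decider gives about g, it is wrong:
g_yes = make_contradiction(lambda p: True)   # decider says "halts" -> g_yes() would loop forever
g_no = make_contradiction(lambda p: False)   # decider says "loops" -> g_no() halts at once
print(g_no())  # None (it returned, i.e. it halted, refuting the decider)
```

Since `halts` was arbitrary, no decider survives this construction; the thread's complaint is that this argument, while airtight, doesn't by itself convey why the problem matters.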
If they know what they're talking about, the answer is "Since every function can be implemented by infinitely many different algorithms, in order to blacklist our halting-solver from the list of things we can input into our halting-solver, we would need to blacklist ALL the algorithms that implement it, but in order to do that, we'd need an algorithm for determining whether or not a given algorithm is a halting-solver, and it can be shown that that itself is just as impossible as solving the halting problem."

> digits of Pi

You seem to be deeply mistaken about something, but it's hard to point out exactly what you're mistaken about because all the terms are so poorly defined. If, for example, you meant "what the halting computer would do when fed a program P that takes an input n and outputs the nth digit of Pi, if we asked whether or not P halts on a particular input n", then the answer would be "Yes". If you meant "what the halting computer would do when fed a program Q that takes no input and runs forever, listing all the digits of Pi, when asked whether or not Q halts on no input", then the answer would be "No". Either way, there's absolutely nothing deep or profound about the digits of Pi here.

> life, the universe, and everything

Please, don't make things even more complicated for the poor students.

You only need to blacklist the possibility that the program under review has access to the output of the Halting Detection program for itself. The digits-of-Pi example is to illustrate that the Halting Program needs to understand the high-level math necessary to prove that the digits of Pi never terminate. And then you lead them on to discovering that it needs to be able to solve every problem in mathematics, even ones that have not yet been discovered, and then you realize that it has to be omnipotent and infinite. The purpose is to pull students away from the kind of empirical solutions that immediately pop into your head when presented with the halting problem.
"Well, if we look at the loops and what the exit conditions are and start iterating over all possible inputs..." which is not at all what the Halting Problem is about, despite what it looks like on the surface. Oh I see what you're getting at. I think what you're trying to get at is, "If you had a halting-solver, you could use that like an oracle to answer arbitrary mathematical questions." Unfortunately, this isn't true, you couldn't use it to answer arbitrary mathematical questions, just certain questions.For example, even if Pi happens to be rational, and the algorithm to list its digits eventually starts listing a periodic repetition (possibly the all-0 repetition even), that still doesn't mean it halts. So the halting-solver doesn't directly help you determine whether or not Pi is rational. You would instead need a "determine-if-given-function-eventually-has-periodic-output" solver, which is stronger than a halting solver.I'm not sure if there are any really slam-dunk examples of what you seem to be looking for, that don't involve proofs in some guise or other. An example which does involve proofs might look like this: "Let x be a program which attempts, by brute force, to find a proof that P<>NP, and immediately halts when it finds such a proof, if ever. If you had a halting detector, you could plug x into it and based on its output, you would know whether or not there is a proof of P<>NP." [Which is subtly different than whether or not P<>NP is true. That would require something stronger than a halting solver to obtain.] Wouldn't you simply define your sample program to halt upon discovering a periodic repetition in its output? Finding any finite number of repeating digits does not prove that there's an infinite number to follow. You might find 1000 3s in a row, followed by not a 3. If you know the entire state of your algorithm then it's possible to compare the state vs. it's state in a previous step to determine if you are in a repeating loop. 
If all of the parameters and internals are identical, then the algorithm cannot produce a different result.

A machine can print the same thing over and over, and yet never repeat its own internal state. For example, "let x=0; while(true) { print("0"); x=x+1;}" A halting problem solver would necessarily have to be smart enough to detect state that is relevant to the completion of the program vs. not.

Plus, even in this case, on a real-world machine x can only store as much state as the underlying type. So if it's a 32-bit int then you can absolutely prove the algorithm does not halt after only 4 billion iterations, but even before that you can simply note the lack of exit conditions from the loop to show that it never terminates. However clever you are, the halting problem program has to be even more clever by definition. But we don't have an upper bound on cleverness, so the halting problem program has to be infinitely clever, hence my previous point about it being able to solve all of mathematics.

This isn't exactly what you're talking about, but I found this blog post to be an interesting exploration of some related ideas: https://eklitzke.org/the-halting-problem

That is exactly what I'm talking about. The Halting Problem has to be able to solve every problem in mathematics (and by extension everything else in the world), even ones that we don't know the solutions to. Even ones that we have not yet discovered. The formal counterexample is completely correct, but absolutely useless in enlightening students as to what the halting problem is actually asking. If anything it drives them further down the wrong line of thinking.
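For genuinely finite-state programs, the repeated-state idea discussed above does decide halting -- a sketch, where the state encoding and the `step` function are assumptions for illustration (this breaks down exactly when the state space is unbounded, as in the counter example above):

```python
def runs_forever(state, step):
    """Decide halting for a deterministic finite-state machine.

    `step(state)` returns the next state, or None when the machine halts.
    If any complete state ever recurs, the machine must loop forever,
    because determinism means it will retrace the same cycle.
    """
    seen = set()
    while state is not None:
        if state in seen:        # exact repeat of the entire state: provably non-halting
            return True
        seen.add(state)
        state = step(state)
    return False                 # reached the halt condition

# A counter modulo 4 never escapes its cycle: 0 -> 1 -> 2 -> 3 -> 0 -> ...
print(runs_forever(0, lambda s: (s + 1) % 4))                 # True
# A counter that counts up to 10 and stops:
print(runs_forever(0, lambda s: s + 1 if s < 10 else None))   # False
```

This matches the 32-bit observation above: with bounded state the halting problem is decidable in principle, just astronomically expensive; undecidability only bites for idealized machines with unbounded state.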
> The halting problem may have been mathematically disproven by finding one highly convoluted counterexample, but its more significant failing is that it is infinite and thus cannot be implemented on a machine with finite limits.

I agree that would be a better way to teach it, but Turing machines have infinitely-long tapes -- the really unexpected result is that you couldn't solve the Halting Problem even if you had (countably) infinite tape and infinite time to do it. The same probably holds for uncountably-infinite tapes too (the "burrito" counterexample still breaks it), though you can't really address a tape that long for obvious reasons.

Douglas Hofstadter's book "Gödel, Escher, Bach" is very highly recommended reading for anyone who finds this (and the theory of computability more generally) interesting. I've lost entire days of my life to that book!

Here's a good adjacent read that I thought was a much clearer and relatively accessible tour through the proof - https://www.amazon.com/G%C3%B6dels-Proof-Ernest-Nagel/dp/081... - it's more of a pamphlet than a book, really.

GEB is fun, but it's definitely a bad introduction to the idea itself. It's a lot more fun if you already understand the subject matter and can enjoy the tone of the book.

One more related recommendation would be James Gleick's Chaos book: https://en.wikipedia.org/wiki/Chaos:_Making_a_New_Science It is a good introduction to Chaos Theory and complexity science and a nice read alongside GEB.

I'm 100 or so pages into this now. What a book! I'm sure this will be a long-time classic for me.

This is not Godel's incompleteness theorem(s). This deals with a completely different notion of completeness. Godel's completeness theorem is the theoretical justification for why first-order logic occupies such an important position in mathematical logic.
It basically establishes that two different methods of proving a logical statement happen to coincide for first-order logic.

Let's say that I come up with a long list of axioms that define what it means for an object to be a "cow" and what it means to have a "spot." Then I state "all cows have spots."

There are two ways of proving this. One way, the "semantic" way, is to take every example of a cow and prove that it has a spot. This is hard; I not only have to examine every cow that exists, I have to examine every possible imaginary entity that satisfies the definition of "cow" and prove that it has a spot.

The second way, the "syntactic" way, is to play the string manipulation game that people often associate with the notion of a "logical proof." I start out with the axioms of a "cow," and I'm given some rules for how I can manipulate these axioms. Examples of this include: the string "A and B" allows me to immediately substitute "A" instead; given "A implies C" I can substitute all instances of "A" with "C"; and so on and so forth. Hopefully I can work out a series of string manipulation steps that eventually end with the statement "All cows have spots."

It is not obvious that these two methods should coincide in what they are able to prove. The rule set that I get to manipulate my strings by could be a really wacky set of rules. Maybe it says something like: if you see "A implies B," you get to substitute any arbitrary string "C." It's also not apparent how you might choose to interpret the axioms that define a "cow." One of the axioms might say "a cow has four legs" and you might say: in my world, I've decided the string "four" corresponds to the natural number 5, and I'm going to look at all entities with 5 legs.

Godel's completeness theorem states that for first-order logic these two proof techniques coincide: every statement you can prove or disprove the first way can also be proved or disproved the second way.
More specifically, if you stipulate that the rules of string manipulation are exactly the rules of deduction in first-order logic, and that you interpret your axioms when constructing semantic entities according to the standard semantics of first-order logic (e.g. you don't get to arbitrarily stipulate that the string "and" actually means logical disjunction, i.e. or, in your world), then any property that is true of every entity you can conjure up that satisfies your axioms has a "string manipulation" proof that proves that property. Conversely, every statement you can prove with your string manipulation game also holds true of every entity you can conjure up that satisfies your axioms (this isn't strictly speaking part of the completeness theorem, but is also true and usually stated hand-in-hand with it).

What are practical consequences and examples of such in typical debates about politics, programming paradigms, etc.?

Regarding programming paradigms, at least one very important consequence of the completeness theorem is that first-order logic can be used as a formal basis for computation. This is because you can model what a Turing machine does via predicates in first-order logic, describing a relation between states of the machine. The completeness theorem tells us that every semantic consequence of a first-order theory (with a well-orderable language) can be derived syntactically. This means that to derive results of the computation, we only have to systematically search for proofs. Due to the completeness theorem, we know that we won't miss anything (i.e., a result of the computation) that is actually the case. An immediate application is an important programming paradigm called logic programming. See for example Prolog, Mercury, and other languages that are based on this idea.
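The "systematically search for derivations" idea behind logic programming can be sketched with a tiny forward-chaining prover over rules of the form premises -> conclusion. This is a drastic simplification of what Prolog actually does (which is backward-chaining with unification); the rule encoding here is an assumption for illustration:

```python
def derive(facts, rules):
    """Forward-chain: repeatedly apply rules (premises, conclusion)
    until no new facts appear, yielding every derivable consequence."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# "All cows have spots", instantiated for one individual:
rules = [({"cow(bessie)"}, "has_spots(bessie)")]
print(sorted(derive({"cow(bessie)"}, rules)))
# ['cow(bessie)', 'has_spots(bessie)']
```

Completeness is what licenses this style of computation: if a fact really is a semantic consequence of the program, a systematic proof search is guaranteed to find a derivation of it eventually.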
One practical consequence is that the halting problem is undecidable, so if your boss asks you to build a program to check if another program will ever do X, you can tell them, "hey, that's an impossible task!"

In practical matters, this is not quite true. If you ask "will this program launch a toaster to orbit Mars," you can usually answer that question with high confidence. The halting problem describes the situations where you can't decide one way or another, but it does not say you can't decide anything. This is a very important distinction. We write code to analyze other code all the time; it's a core part of developer tools. If it were impossible, life would really suck.

There aren't any. These theorems apply to specific systems in finitary number theory with certain axioms. They're important, but they're important in mathematical logic, not in rhetoric or anything like that.

IMHO, if you understand the history, it makes sense why the theorem is both important and confusing.

In 1921, Hilbert had this idea that mathematicians could create an algorithm that would automatically prove every true statement and disprove every false statement. Wouldn't that be neat? It's important that this idea predates computers. When Hilbert was thinking about an algorithm, it wasn't an algorithm as we would think about it in computer science today, because the modern idea of a computer hadn't been formalized.

In 1931, Gödel proved that it was impossible for an algorithm to automatically prove or disprove every true mathematical statement. Take that, Hilbert. The modern idea of a computer still hadn't been formalized, so a lot of stuff that nowadays we think of as simple, Gödel did in a really weird and confusing way. For example, as part of his proof Gödel needed a way to algorithmically encode a bunch of numbers into a single number.
Nowadays, that's pretty intuitive: any software program can be saved into a file, any file can be interpreted as a sequence of binary bits, and any sequence of bits can be interpreted as a number. Back then, Gödel used a really crazy encoding: encoding (a, b, c, d, ...) as 2^a * 3^b * 5^c etc. So if you go back and try to understand Gödel's proof, it is really just needlessly complicated. I loved the book Gödel, Escher, Bach when I first read it, but once I understood the mathematics more deeply I started thinking that it really wasn't the best approach to actually understanding the mathematics.

In 1936, Turing defined the Turing machine and also proved that the halting problem was undecidable. This is an equivalent statement to Gödel's incompleteness theorem, and the proof is about 100 times more intuitive, especially for modern computer programmers, who tend to understand computers and computer programs pretty well, far better than they understand prime factorization.

So, personally, I think if you are learning this stuff, you are better off starting by learning about the halting problem. Sometimes the historical sequence of discovery isn't the best way to learn math, just as we don't learn the Greek method of exhaustion before we learn modern methods of calculus.

You are talking about his incompleteness theorem. This article is about a different theorem, with a different definition of "completeness."

Whoops. I thought about deleting the comment, but hey, maybe someone will find it useful.

I did! Thank you!

Most of the use of mathematical theorems comes not from their results, but from the insights obtained on the way to reaching those results. Accepting the halting problem as a proof of the incompleteness theorem misses out on all of them. Suppose that I hacked the Turing machine to always respond `true` to the halting problem. Now nothing is undecidable. What is wrong with this action?

We detached this subthread from https://news.ycombinator.com/item?id=20879490.

Oh, thanks.
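The prime-power encoding mentioned above is easy to reproduce. A minimal sketch: by unique factorization, the exponents can be read back off unambiguously (in Gödel's actual construction the entries are positive codes for symbols, which also keeps the sequence length recoverable).

```python
# Encode a sequence (a, b, c, ...) as 2^a * 3^b * 5^c * ...,
# then decode by counting how often each prime divides the result.
PRIMES = [2, 3, 5, 7, 11, 13]

def godel_encode(seq):
    n = 1
    for p, a in zip(PRIMES, seq):
        n *= p ** a
    return n

def godel_decode(n, length):
    seq = []
    for p in PRIMES[:length]:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        seq.append(a)
    return seq

print(godel_encode([3, 1, 2]))  # 2**3 * 3**1 * 5**2 = 600
print(godel_decode(600, 3))     # [3, 1, 2]
```

The modern "program file as one big binary number" encoding does the same job with far less number theory, which is part of why Turing's presentation is easier on programmers.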
I was sad to see it missing.