It seems strange to say that Gödel didn't know it at the time -- Gödel's work was part of the effort to mechanize logic, Hilbert's program, and the axiomatization effort started in response to contradictory theorems in the calculus, following Frege, Russell, Whitehead, et al.
Or rather, Gödel's work showed that Russell's effort to create a consistent and complete foundation for mathematics was fundamentally insufficient. Turing extended this work, by providing an explicit model of a calculating machine, to show that no algorithm could effectively determine the truth of arbitrary statements -- even without being required to produce a proof -- locking the door on Hilbert's program that Gödel had slammed shut.
Regardless, the entire point of Gödel's work was exploring the relationship between our ability to "reason" and our ability to "calculate" or "perform rote tasks".
(Missing details of the narrative aside -- it was a really good read.)
In particular, IP doesn't really involve the multiplicative structure of the integers, which is crucial for how Gödel proved his results.
Also, I'd point out that while there's significant overlap between integer programming and discrete optimization, they're not the same thing. Integer programming is one tool used in discrete optimization among many.
I'd like to mention that this article is currently slated for publication in the Bulletin of the European Association for Theoretical Computer Science. A revised draft can be found at: https://github.com/SOberhoff/incompleteness_ex_machina/relea...
I haven't handed it over yet, so we'll have to see about further changes.
One thing that seemed a bit surprising to me, especially in light of the paper's strong focus on computation, is that the assumption that the axiomatization is effective is only mentioned in a footnote. And even there it is not explained; we are simply told not to worry about it if we don't know what that means.
Yet, the code pieces in the paper make critical use of this assumption in that they assume that we can decide (with an algorithm) whether or not a string is a proof. This only works if the axioms are recursive.
So, maybe you could consider adding a short explanation about this to the paper?
Though, I admit that I have never spent much thought on non-effective axiomatizations. So I'm open to being educated.
When you say "either/or" in the paper, you make case distinctions that are intuitionistically non-contentious, i.e., you say "either F ⊢ G, then contradiction, so F ⊬ G, or F ⊬ G, then contradiction etc.", but the point is that G is explicitly constructed, so this is intuitionistically acceptable.
In contrast, the other assumption I mentioned is used in the paper and in fact essential for its proofs.
In any case, that was merely an example. The central point I was making doesn't depend on this.
However, as I mentioned, in the remainder of the paper, it appears to me that you assume that the theorems are not only recursively enumerable, but in fact recursive. For example, in the proof of Theorem 4 on page 5, the lines stating "if s is a proof ..." seem to assume that this check always terminates, requiring assumptions that the original proof does not and therefore weakening the obtained result.
In my opinion, adding a short explanation about how this can be salvaged (it can!), or at least precisely specifying what is assumed, could improve the presentation.
I'm only assuming that checking whether "does s prove S?" is a recursive property. That's not the same as demanding "is S provable?" to be recursive.
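The distinction can be made concrete with a toy sketch (the proof system and rule here are made up purely for illustration): checking "does s prove S?" only inspects the finitely many lines of s, so it always terminates, whereas "is S provable?" would require searching an infinite space of candidate proofs.

```python
# Toy proof system (hypothetical, for illustration): a "proof" is a list
# of strings, where each line is either an axiom or follows from two
# earlier lines by a single made-up inference rule.

AXIOMS = {"A", "B"}

def follows(premise1, premise2, conclusion):
    # Made-up rule: from X and Y we may conclude "(X&Y)".
    return conclusion == f"({premise1}&{premise2})"

def checks(proof, goal):
    """Decide 'does this proof prove the goal?' -- always terminates,
    because it only inspects the finitely many lines of the proof."""
    for i, line in enumerate(proof):
        ok = line in AXIOMS or any(
            follows(proof[j], proof[k], line)
            for j in range(i) for k in range(i)
        )
        if not ok:
            return False
    return bool(proof) and proof[-1] == goal

# By contrast, 'is the goal provable at all?' would mean searching the
# infinite space of all candidate proofs, which need not terminate.
```

The check is decidable precisely because every question it asks ("is this line an axiom?", "does it follow from two earlier lines?") is itself decidable and there are only finitely many of them.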
Upon reflection I agree that effective axiomatization isn't actually that difficult to explain. So I've changed the footnote to say: "For the purpose of this discussion every formal system is effectively axiomatized by definition. This basically just boils down to the fact that proofs are computer checkable."
I was initially worried that this might raise the question what non-effective axiomatizations are all about. That's not a can of worms I want to open. But I think this should be fine.
In my opinion, if the assumption is that "proofs are computer checkable" (i.e., "does s prove S" is a recursive property, which is what is required in the paper), then it would be good to either state that as the key assumption, or state for example that this follows via Craig's theorem from the assumption that the system is effectively axiomatized:
Interestingly, from this theorem it follows that such a system can be re-axiomatized so that "S is a theorem" is even a primitive recursive property. In fact, when Gödel wrote "rekursiv" around 1930, he meant the class of functions that we now call primitive recursive. Because of Craig's theorem, we know that primitive recursive, recursive, and recursively enumerable can be used interchangeably in the definition of effective axiomatization, but without such a theorem, it would be a leap to conclude one from the other.
Also, the Wikipedia article doesn't say that Craig's Theorem proves recursively enumerable axiomatizations equivalent to (primitive) recursive axiomatizations. I would find it very surprising if this was a consequence. It only says that a recursively enumerable set of formulas (e.g. the provable sentences in Peano arithmetic, not merely its axioms) can be given a (primitive) recursive axiomatization. That's a very much weaker claim.
"effectively axiomatized" means that the theorems are recursively enumerable. It does not immediately follow that the axioms are a recursive set. However, by Craig's theorem, we can then re-axiomatize so that the axioms are a recursive set, even a primitive recursive set!
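Craig's trick can be sketched concretely (toy enumerator, for illustration): given any program that enumerates the theorems S_1, S_2, ..., take as the n-th new axiom the n-th theorem conjoined with itself n times. Whether a formula is such an axiom is then decidable, because the formula's own shape tells you how many enumeration steps to run.

```python
# Sketch of Craig's trick. `enumerate_theorems()` is a stand-in for any
# program listing the theorems of an effectively axiomatized system.

def enumerate_theorems():
    # Toy stand-in for a real theorem enumerator.
    yield "P"
    yield "Q"
    yield "R"

def is_craig_axiom(formula):
    """Decide whether `formula` is an axiom of the Craig
    re-axiomatization: the n-th axiom is the n-th enumerated theorem
    conjoined with itself n times."""
    parts = formula.split(" & ")
    n = len(parts)
    if any(p != parts[0] for p in parts):
        return False
    # Run the enumerator for exactly n steps -- this always terminates.
    for i, theorem in enumerate(enumerate_theorems(), start=1):
        if i == n:
            return theorem == parts[0]
    return False
```

The new axiom set is deductively equivalent to the old theorems (each axiom is just a theorem repeated), yet membership in it is decidable, which is exactly the re-axiomatization claim.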
You are of course right about the consequences: In the post above, I meant to say that due to this theorem, it makes no difference whether we require the axioms to be recursively enumerable, recursive, or primitive recursive. But still, without this theorem, it would be a leap to conclude from "the system is effectively axiomatized" that "s is a proof of S" is recursive.
"If T is effectively axiomatized, in the sense that there is an algorithm for deciding whether or not a given sentence is an axiom of T,..."
In "Computability: Turing, Gödel, Church and Beyond", Martin Davis even makes the definition "T is axiomatizable if it has an axiom set that is computable" on page 42.
And that's also the definition Shoenfield gives on page 125 of Mathematical Logic.
Besides, "effective" has always been a synonym for "computable", "recursive", or "decidable" whenever I've encountered it in this context.
Also, I must insist that "it makes no difference whether we require the axioms to be recursively enumerable, recursive, or primitive recursive." is not what Craig's Theorem says. This is dangerously close to claiming that recursively enumerable sets are recursive which is plainly untrue. If recursively enumerable axioms (not just recursively enumerable theorems) are indeed interchangeable with recursive axioms, then I'd like to hear more about that.
The key point is that I would find it a worthwhile addition to the paper to somehow make clear that "s proves S" is assumed to be decidable. Since you have done this: Thank you!
It's nice to know that you don't need it; it's just a sideshow.
When a formal system says: "this computation halts after some number of steps", then under the default interpretation that means that after say 10000 steps the computation really halts. But in the "similar but weirdly different" reality where transfinite numbers exist the above claim can still be considered true if it runs indefinitely. One simply has to entertain the idea that "some number of steps" might mean a transfinite number of steps.
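The bounded version of the claim is mechanically checkable; only the unbounded "after *some* number of steps" leaves room for the nonstandard reading. A toy sketch (the step-counting interpreter and `countdown` program are made up for illustration):

```python
def halts_within(program, initial_state, max_steps):
    """Decidable: just run the program for at most max_steps steps.
    Here 'program' is a toy state-transition function returning either
    ('halt', state) or ('continue', new_state)."""
    state = initial_state
    for _ in range(max_steps):
        status, state = program(state)
        if status == "halt":
            return True
    return False

# Example program: a countdown that halts once it reaches zero.
def countdown(n):
    if n <= 0:
        return ("halt", n)
    return ("continue", n - 1)

# "Halts within 10000 steps" is a concrete, checkable fact. But "halts
# after SOME number of steps" quantifies over all numbers, and in a
# nonstandard model "some number" may be transfinite.
```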
In other words, yes, we can say that the formal system lies provided we accept that what is and what isn't a lie depends on the viewpoint.
It's a bit odd to think that nearly all "natural" numbers are so large that we can never calculate them, even in principle (because it would take more bits than exist in the universe). Even constructive proofs can describe calculations that could never actually be carried out. The boundary between what I might call "practical" numbers and the larger natural numbers is fuzzy (since it depends on technology), but maybe admitting transfinite numbers exist among the very large naturals would be a way of dealing with it? A way of saying "induction takes us beyond anything we can really know; here be dragons".
And similarly, there are programs that in practice would never halt (because not enough time in the universe), even though theoretically they do.
I don't suppose that's very useful, though, so nice to know it can be avoided.
Throughout the discussion I'm making the tacit assumption that there is one standard viewpoint to which we adhere. That's the "normal" world. Numbers are finite and halting programs halt after finitely many steps. It is from this fixed viewpoint that I'm declaring certain claims to be lies. They are not lies in some grand universal sense.
I realize that the word "lie" usually also entails an accusation of deliberate deception. But that's immaterial here. Formal systems don't have intentions. They simply make claims.
The mere execution of basic arithmetic does not require the capability of manipulating propositions of basic arithmetic.
Doing basic arithmetic:
12 * ( 5 + 8 ) --> 12 * 13 --> 156
Talking about basic arithmetic:
a * ( b + c ) == a * b + a * c
The formal language needed to describe basic arithmetic is much more powerful than a simple execution engine, such as a stack engine, that can merely carry out basic arithmetic.
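The contrast can be made concrete with toy code (purely illustrative): an evaluator that merely *does* arithmetic on particular numbers, versus a proposition that *talks about* all numbers at once, which the evaluator cannot even represent.

```python
# "Doing" arithmetic: a tiny evaluator that can only compute values.
def evaluate(expr):
    # expr is a nested tuple like ("*", 12, ("+", 5, 8))
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    return a * b if op == "*" else a + b

print(evaluate(("*", 12, ("+", 5, 8))))  # 156

# "Talking about" arithmetic: the distributive law quantifies over ALL
# numbers. The evaluator above cannot even state this claim, let alone
# prove it; outside a proof system we can only spot-check instances.
def distributive_instance(a, b, c):
    return a * (b + c) == a * b + a * c

assert all(distributive_instance(a, b, c)
           for a in range(5) for b in range(5) for c in range(5))
```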
For example, I must be able to express that for all natural numbers x and y, if x = y, then y = x. The result could look like this:
∀ x,y ∈ N: x=y ⇔ y=x
Hence, this language must be able to express all of the Dedekind-Peano axioms as well as every proposition provable from these axioms, along with their formal proofs.
In that sense, the Gödel numbering system is the "bytecode" of quite a serious programming (or at least, specification) language. (https://en.wikipedia.org/wiki/G%C3%B6del_numbering)
Induction isn't just reasoning about computation (i.e., simple equations). Instead it is reasoning about reasoning about equations (i.e., reasoning about all equations at once).
Specifically we have the induction schema: (φ(0) ∧ ∀k (φ(k) → φ(k+1))) → ∀n φ(n)
For example, Robinson Arithmetic is a finitely axiomatized theory whose axioms contain only one existential quantifier, but just like full (Peano) arithmetic, it is subject to Gödel's incompleteness theorems.
I'm imagining the difference between a system that can show "P(n)" for any n, rather than a system that can show "P(n) for any n."
It seems like the former must come with a proof about the system. The quantifiers "for any n" have to come somewhere. If they aren't embedded within the system, do we still end up with a system that must be able to express "This sentence is not provable?"
This idea is called Gödel numbering, and Wikipedia says “In formal number theory a Gödel numbering is a function which assigns to each symbol and formula of some formal language a unique natural number called a Gödel number (GN). ... A Gödel numbering can be interpreted as an encoding where a number is assigned to each symbol of a mathematical notation, and a stream of natural numbers can then represent some form or function.” That seems to sort of gel with what I heard, although it implies that maybe arithmetic operations like +, or -, aren’t actually involved. It might be more like 1 is a statement, and 2 is a statement, such that 12 is a combination of those two statements.
The point is that you then encode “this statement is not provable” into numbers and show that you get a numerical contradiction (like 1=3 or something).
1. Encode the propositions of your formal system in ascii (or whatever encoding you like)
2. Observe that the resulting bit patterns can be interpreted as numbers, and so the rules of inference of your formal system can be expressed as mathematical operations on those numbers
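Steps 1 and 2 can be sketched directly (toy example; as noted below, Gödel's actual encoding looked nothing like ASCII, and the "inference rule" here is made up):

```python
# Step 1: encode a proposition as a number via its ASCII bytes.
def encode(proposition):
    return int.from_bytes(proposition.encode("ascii"), "big")

def decode(number):
    length = (number.bit_length() + 7) // 8
    return number.to_bytes(length, "big").decode("ascii")

# Step 2: a rule of inference becomes an arithmetic operation on numbers.
# Made-up rule: from the (encoded) statements X and Y, derive "(X&Y)".
def conjunction_rule(n, m):
    return encode("(" + decode(n) + "&" + decode(m) + ")")

n = encode("A=B")
m = encode("B=C")
# The derivation happens entirely in the realm of numbers:
print(decode(conjunction_rule(n, m)))  # (A=B&B=C)
```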
Goedel had to invent all of that from scratch. On top of that, he had to describe how to actually carry out step 2 without the benefit of a programming language. All he had to work with was raw math. So his encoding was very different from ascii, or anything you are familiar with, because it was "optimized to run on raw math" rather than a digital computer.