In the first paragraph, he shows that if you allow proofs by contradiction, you can use one, for any theory claimed to be inconsistent, to show that it is in fact consistent. This is so because proof by contradiction means inferring "not A" when both "B" and "not B" follow from "A". So, if from "mathematics is inconsistent" both "theorem A is true" and "theorem A is false" follow, it follows that "mathematics is consistent". He claims this shows that consistency isn't rigorously proved in classical mathematics but is simply assumed, since classical mathematics allows proofs by contradiction.
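The inference pattern described above can be written down as a one-line derivation; a minimal sketch in Lean 4 (the theorem name is mine, not the OP's):

```lean
-- Proof by contradiction as described above: if both "B" and "not B"
-- follow from "A", infer "not A".
theorem reductio {A B : Prop} (h1 : A → B) (h2 : A → ¬B) : ¬A :=
  fun a => (h2 a) (h1 a)
```

Instantiating A as "mathematics is inconsistent" is exactly the move described: deriving both B and ¬B from A licenses inferring ¬A, i.e. consistency.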
He then shows that from this consistency proof there paradoxically follows the conclusion that mathematics is inconsistent, because the proof contradicts Gödel's theorem that "if mathematics is consistent, then it cannot infer its own consistency". He resolves the contradiction between Gödel's proof and his own by restricting the logical system in which they are carried out so as to exclude self-referential sentences.
Then, and I think this is really the gist of this work of his, he goes on to describe an approach that tries to somehow reduce the bad impact of including inconsistencies in the logical system, instead of trying to eliminate inconsistencies completely, as has traditionally been the goal.
So the first paragraph is about showing that if one allows statements about mathematics (e.g., "Mathematics is consistent") and statements in mathematics (e.g., "proposition A is true", for some statement A which can be expressed entirely "within" mathematics) to freely mix together on the same footing, then things become inconsistent? But this has been well known for over a century; see, e.g., Russell's Paradox. And (if I understand it correctly), one doesn't need Proof by Contradiction for this inconsistency to occur---it suffices to allow meta-statements and "plain" statements to mix together; see Curry's Paradox. And ways to work with metamathematical statements while avoiding this paradox were also worked out a long time back. That the first paragraph of the OP mentions none of this seems ... surprising, to say the least.
This guy spent a few decades working in mathematical logic. He might overestimate the importance of some things, but I would be careful assuming he suddenly stopped understanding simple technical arguments.
What his theorem boils down to is:
1. Let A be a formal system capable of describing the natural numbers.
2. Assume A is inconsistent. It follows that there exists a Φ such that A ⊢ Φ and A ⊢ ¬Φ.
3. Φ ∧ ¬Φ is a contradiction, so the assumption in 2. is refuted.
4. A is consistent!
At step 3 he appears to conflate A ⊢ Φ ∧ A ⊢ ¬Φ with Φ ∧ ¬Φ and to use it to refute the assumption in 2. But Φ ∧ ¬Φ does not generally hold outside the context of A, and therefore the refutation does not follow.
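The conflation is easier to see in a toy model that keeps the two levels separate (`theorems_of_A` is a hypothetical inconsistent system; all names here are mine):

```python
# Toy model of the object/meta distinction: derivability in A is a
# fact *about* A, not a contradiction at the meta-level.
theorems_of_A = {"Φ", "¬Φ"}  # a hypothetical inconsistent system

derives_phi = "Φ" in theorems_of_A       # A ⊢ Φ
derives_not_phi = "¬Φ" in theorems_of_A  # A ⊢ ¬Φ

# The meta-level conjunction is an ordinary true statement, not Φ ∧ ¬Φ:
meta_claim = derives_phi and derives_not_phi
print(meta_claim)  # True
```

The meta-statement "A derives Φ and A derives ¬Φ" is simply true of an inconsistent A; nothing at the meta-level is contradicted.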
This is really trivial in the end. Inconsistency is "Φ ∧ ¬Φ", the law of excluded middle is "Φ ∨ ¬Φ", so the assumption of consistency is hidden inside the assumption of the law of excluded middle. That's what he means by
The above theorem means that the assumption of consistency is deeply embedded in the structure of classical mathematics
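Both schemas mentioned above can be checked mechanically over the two truth values; a throwaway sketch:

```python
# Truth-table check of the two schemas, over both truth values.
for phi in (True, False):
    assert phi or not phi         # Φ ∨ ¬Φ (excluded middle): a tautology
    assert not (phi and not phi)  # Φ ∧ ¬Φ (inconsistency): unsatisfiable
print("both schemas check out")
```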
Edit: I agree with you in the end: the proof is valid, but the conclusion holds not in A but one "level" above, so it does not contradict Gödel's theorem. Am I reading this right?
Indeed, most examples illustrating the Incompleteness Theorem involve self-reference. But, it seems hard to prove anything interesting in a system that precludes self-reference.
BTW, the Y Combinator is a way of adding self-reference to a non-self-referential formalism. And we're debating his argument here. Deep.
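The claim about the Y combinator is easy to demonstrate; a sketch in Python using only anonymous lambdas that never mention themselves by name (strictly speaking this is the Z combinator, the call-by-value variant, since Python evaluates eagerly):

```python
# Z combinator: builds recursion out of non-self-referential lambdas.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without any self-reference:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```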
Indeed, isn't that one of the important features of Gödel's proof: that any system that can express basic arithmetic is capable of self-reference?
If you assume "proof by contradiction" to be a valid form, then you also assume that mathematics is consistent. Nothing to see here... Move along...
Edit: In case some people missed it... In our logical framework, an inconsistent theory can prove both Φ and ¬Φ. Notice that you are using our logical framework; you are assuming it to be consistent. Yes. Assuming the framework to be consistent, you can prove that the framework is consistent. Shocking.
You're right too, though: most of the value of that post is in the first paragraph, which is a very concise presentation of an interesting (apparent) paradox.
This is NOT a self-proof - it is a meta-proof. Taking arithmetic as an example, a self-proof of consistency would be a derivation of the consistency sentence (i.e., "I am consistent", or "¬◻(0=1)") from the arithmetic axioms. That is not what we have here.
The existence of a proof of the consistency of a theory does not put it at risk from Gödel's 2nd - for that we would require that the theory prove its own consistency.
As far as I can tell, Hewitt begins with a proof that consistency and the validity of proof by contradiction are equivalent, and then proceeds on the grounds that consistency is proven internally to the theory - which it is not.
Isn't the inference that mathematics is inconsistent?
A proposition is a statement which is either true or false (and not both, and not neither).
An axiom is a proposition which we assume to be true, "for free".
A logical deduction is a way to combine or modify true statements to get other true statements.
A proof of a statement is a finite sequence of propositions. The sequence starts with an axiom and ends with the target statement; its other elements may be axioms or not. Each proposition in the sequence which is not an axiom must be the result of applying a logical deduction to one or more propositions which appear before it in the sequence.
A theorem is a proposition for which there exists a proof.
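The definitions above are concrete enough to code up; a toy sketch in Python, with a single deduction rule (modus ponens over strings) and a hypothetical axiom set of my choosing:

```python
# Toy proof checker following the definitions above: propositions are
# strings, the only deduction rule is modus ponens, and a proof is a
# finite sequence starting with an axiom.
AXIOMS = {"A", "A -> B"}  # hypothetical axioms, for illustration only

def modus_ponens(known: set[str], prop: str) -> bool:
    """prop follows if some P and 'P -> prop' are already established."""
    return any(f"{p} -> {prop}" in known for p in known)

def is_proof(seq: list[str]) -> bool:
    if not seq or seq[0] not in AXIOMS:
        return False  # a proof must start with an axiom
    known: set[str] = set()
    for prop in seq:
        if prop not in AXIOMS and not modus_ponens(known, prop):
            return False  # non-axioms must follow from earlier lines
        known.add(prop)
    return True

print(is_proof(["A", "A -> B", "B"]))  # True: B is a theorem
print(is_proof(["B"]))                 # False: no derivation of B
```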
If we can prove two theorems which contradict one another (like Φ and ¬Φ in the quote above), then we say that our system for finding proofs is inconsistent. Since the OP starts out assuming that mathematics is inconsistent and ends up producing two contradictory theorems, he does not seem to get anywhere. And his inference---that mathematics is consistent---seems to be incorrect.
What am I missing here?
From that assumption, they derived a contradiction, therefore the assumption can't be true, and math must be consistent.
This is what I don't see. To have derived a contradiction, they should have shown: "mathematics is consistent". Instead they ended up showing (because Φ and ¬Φ): "mathematics is inconsistent". Which is what they started out with. Where is the contradiction?
Hmm.. while writing this out, I think I am starting to get the source of the confusion. "Mathematics is inconsistent" is a meta statement about mathematics, and "Φ and ¬Φ" is a "plain" statement within the said mathematics. My intuitive feeling is that the latter contradiction does not "affect" the meta statement. I need to think further to make this more rigorous.
OT: Why the downvote(s)? Does my comment take away from the discussion?
I elucidate it in another comment:
But what fascinated me was not Russell's set but its inverse, the set of all sets that contain themselves; let's call it S.
For the statement "S is a member of S" (that the set of all sets that contain themselves really contains itself) yields True if and only if you suppose it is True, and it yields False if and only if you suppose it is False.
S is therefore always consistent, no matter the value. Both values are correct, but never at the same time.
I don't know what it means, particularly, but the fact that you can do this makes me wonder whether this super-consistency is applied anywhere else in the philosophy of science, where nobody has ever realized the equivalent inverse, and therefore never demonstrated the inconsistency.
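The difference between Russell's set and its inverse can be phrased as a fixed-point check over the two truth values; a quick sketch (the framing as x = f(x) is mine):

```python
# Self-membership as a fixed-point problem x = f(x) over truth values.
russell = lambda x: not x  # R ∈ R  iff  R ∉ R
inverse = lambda x: x      # S ∈ S  iff  S ∈ S

def fixed_points(f):
    return [x for x in (True, False) if f(x) == x]

print(fixed_points(russell))  # []            -> paradox: no consistent value
print(fixed_points(inverse))  # [True, False] -> either value is consistent
```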
If we assume that S does contain itself, then I do not see a way at arriving at a contradiction.
1. It seems to me that if you assume that mathematics is inconsistent, you shouldn't then consider an inconsistency in the argument sufficient grounds to throw away the assumption. I suppose that means that I don't accept proof by contradiction in the case that the assumption is the inconsistency of mathematics.
2. Sanity Check: We can create known inconsistent systems, could the same logic prove that such inconsistent systems are consistent too? It seems to me that it could, am I wrong?
3. Supposing that mathematics did prove its own consistency, the fact that inconsistent systems can also prove their own consistency means that you can't take a proof of consistency to be strong evidence that the system is consistent.
4. Gödel didn't say that mathematics couldn't prove its own consistency, he just said that if it could it was unsound, which is a bit of a concern too.
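The sanity check in point 2 can be stated as a one-liner: in a classically inconsistent system, explosion (ex falso quodlibet) makes every sentence derivable, including the consistency sentence. A trivial sketch:

```python
# Toy model for point 2: an inconsistent classical system derives
# everything, so "deriving Con" carries no evidence of consistency.
def derivable_in_inconsistent_system(sentence: str) -> bool:
    return True  # explosion: every sentence is derivable

print(derivable_in_inconsistent_system("Con"))   # True
print(derivable_in_inconsistent_system("¬Con"))  # True as well
```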
My understanding is that it says math is either inconsistent or incomplete. So if you construct a mathematical system that is consistent (allowing proofs by contradiction), there will be some set of statements that are true, but cannot be proved true by that mathematics.
Kind of reminds me of the following proof I have that something in this world is indeed absolute.
1) "Nothing is absolute" is self contradictory.
2) Therefore, something is absolute.
By the way, one needs to elaborate a little about the above proof. The reason (2) follows from (1) is that the negation of (1), "something is absolute", is not self-contradictory. However, if both (1) and (2) were shown to be self-contradictory, then you wouldn't have the excluded middle.
Whether or not Gödel is correct soon becomes irrelevant, as he is promptly ejected from the Church. As he walks away he hears the sound of the hymn "I Cannot Be Sung in the Church" being belted out at full volume by the congregation.
weak theory? not strong enough for computer science, eh? strange ways to talk about mathematical theorems that are either true or not. let's see him define what he means by any of these terms lol
For example, imagine there's a true and "important" statement S about Turing machines for which we could prove "it is impossible to prove S using provability logic." In that sense PL would be "too weak" to do computer science.
Also, I remember you - you did adonomics back in the day for facebook apps :)
That's not really the point, though. The point is that "strong" and "weak" are well-defined terms in mathematical logic and he was almost certainly using them in that sense. :)
He's saying something like, "You can axiomatize Peano Arithmetic in PL and explore the idea of provability therein because that's what PL was invented for. At first glance we might try to use something like PL to do what we're setting out to do, but it's so specialized that it winds up not being worthwhile."
This makes more sense in the context of his paper, anyhow. Gödel's Second Incompleteness Theorem basically says that Peano Arithmetic can't prove its own consistency unless it's inconsistent.
So, we could model PA inside another logical system (e.g., PL) which has notation for certain meta-mathematical statements like "P(X): it is provable that X". You'd then need to demonstrate that a version of PA axiomatized with this logic is both complete and sound WRT PA, etc. etc.