Carl Hewitt isn't just some random crank: he is the author of PLANNER, a breakthrough programming language from the early days of AI that later gave rise to Prolog. This is a pretty interesting paper, and it doesn't claim any big breakthrough in the sense of invalidating some great mathematical result. I can try to explain what it is about, at least the beginning of the quoted fragment:
In the first paragraph, he shows that if you allow proofs by contradiction, you can use them on every theory claimed to be inconsistent to show it is in fact consistent. This is because proof by contradiction means inferring "not A" whenever both "B" and "not B" follow from "A". So, if from "mathematics is inconsistent" it follows that "theorem A is true" and "theorem A is false", then "mathematics is consistent" follows. He claims this shows that consistency isn't rigorously proved in classical mathematics but merely assumed, since classical mathematics allows proofs by contradiction.
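Schematically (my notation, not the paper's), the argument runs:

```latex
\begin{align*}
&\text{Assume: mathematics is inconsistent.}\\
&\text{Then for some } \Phi:\ \vdash \Phi \ \text{ and } \ \vdash \neg\Phi.\\
&\text{That is a contradiction, so by reductio the assumption is rejected:}\\
&\text{mathematics is consistent.}
\end{align*}
```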
He then shows that from this consistency proof the conclusion paradoxically follows that mathematics is inconsistent, because the proof contradicts Gödel's theorem: "if mathematics is consistent, then it cannot infer its own consistency". He resolves the contradiction between Gödel's proof and his own by restricting the logical system in which both are carried out so as to exclude self-referential sentences.
Then, and I think this is really the gist of this work of his, he goes on to describe an approach that tries to contain the damage done by including inconsistencies in the logical system, instead of trying to eliminate inconsistencies completely, as has traditionally been the goal.
> In the first paragraph, he shows that if you allow proofs by contradiction, you can use them on every theory claimed to be inconsistent to show it is in fact consistent. This is because proof by contradiction means inferring "not A" whenever both "B" and "not B" follow from "A". So, if from "mathematics is inconsistent" it follows that "theorem A is true" and "theorem A is false", then "mathematics is consistent" follows. He claims this shows that consistency isn't rigorously proved in classical mathematics but merely assumed, since classical mathematics allows proofs by contradiction.
So the first paragraph is about showing that if one allows statements about mathematics (e.g. "mathematics is consistent") and statements in mathematics (e.g. "proposition A is true", for some statement A which can be expressed entirely "within" mathematics) to freely mix together on the same footing, then things become inconsistent? But this has been well known for over a century; see, e.g., Russell's Paradox [1]. And (if I understand it correctly), one doesn't need proof by contradiction for this inconsistency to occur---it suffices to allow meta-statements and "plain" statements to mix together; see Curry's Paradox [2]. And ways to work with metamathematical statements while avoiding this paradox were also worked out a long time ago [3]. That the first paragraph of the OP mentions none of this seems ... surprising, to say the least.
This is just an abstract. I think the novelty he perceives is combining all this into a proof of consistency of mathematics. Page 6 of his paper mentions everything you have just written about:
This guy spent a few decades working in mathematical logic. He might overestimate the importance of some things, but I would be careful assuming he suddenly stopped understanding simple technical arguments.
He mentions those in relation to his own pet theories but in no way does he address GP's argument.
What his theorem boils down to is:
1. Let A be a formal system capable of describing the natural numbers.
2. Assume A is inconsistent. It follows that there exists a Φ such that A ⊢ Φ and A ⊢ ¬Φ.
3. ?????
4. A is consistent!
At step 3, he appears to conflate A ⊢ Φ ∧ A ⊢ ¬Φ with Φ ∧ ¬Φ and to use it to refute the assumption in step 2. But Φ ∧ ¬Φ is not generally true outside the context of A, and therefore the refutation does not follow.
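The conflation can be spelled out in standard provability notation (my rendering, not Hewitt's):

```latex
\begin{align*}
\text{meta-level:}\quad & A \vdash \Phi \ \text{ and } \ A \vdash \neg\Phi
  && \text{(two facts \emph{about} } A)\\
\text{object-level:}\quad & A \vdash \Phi \wedge \neg\Phi
  && \text{(by $\wedge$-introduction \emph{inside} } A)\\
\text{needed for the reductio:}\quad & \Phi \wedge \neg\Phi
  && \text{(a contradiction in the meta-theory itself)}
\end{align*}
```

The first two lines are fine; the last does not follow from them unless the meta-theory already trusts A.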
My understanding is that if proof by contradiction is allowed in A, then Φ ∨ ¬Φ is a true sentence in A. So if you try to include the sentence "A is inconsistent" in A, it follows that A ⊢ Φ and A ⊢ ¬Φ; therefore Φ ∧ ¬Φ is true in A, but this contradicts Φ ∨ ¬Φ, therefore A is consistent. In other words, it is impossible to show the inconsistency of any system that includes the law of the excluded middle.
This is really trivial in the end. Inconsistency is "Φ ∧ ¬Φ", the law of the excluded middle is "Φ ∨ ¬Φ", so hidden in the assumption of the law of the excluded middle is the assumption of consistency. That's what he means by
The above theorem means that the assumption of consistency is deeply embedded in the structure of classical mathematics
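Classically, non-contradiction and excluded middle are De Morgan duals of one another; a two-row truth table makes this explicit (a trivial check, just to illustrate the point above):

```python
# Enumerate both truth values of a proposition phi and check that the
# law of excluded middle (phi or not phi) agrees with non-contradiction
# (not (phi and not phi)) in classical two-valued logic.
rows = [(phi, phi or not phi, not (phi and not phi)) for phi in (True, False)]
assert all(lem == noncontra for _, lem, noncontra in rows)
print(rows)  # [(True, True, True), (False, True, True)]
```

Both columns come out True in every row, i.e. both principles are classical tautologies, which is exactly why assuming one smuggles in the other.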
Edit: I agree with you in the end: the proof is valid, but the conclusion holds not in A but one "level" above, so it does not contradict Gödel's theorem. Am I reading this right?
Carl Hewitt is an interesting character [1] and I'm not really sure what to make of it. There are signs of a bright mind gone completely bonkers, and there are signs of someone being rejected for purely political (and not academic) reasons. Both have happened numerous times in the past which is why it's hard to tell.
If someone more enlightened on the topic cared to comment about the nature of things, that'd be greatly appreciated :)
I know Carl a little from the late 90s. I figured there was a 50% chance he would come up with something historically important, and a 60% chance he was crazy. (The two options are not quite mutually exclusive.)
Indeed, most examples illustrating the Incompleteness Theorem involve self-reference. But, it seems hard to prove anything interesting in a system that precludes self-reference.
BTW, the Y Combinator is a way of adding self-reference to a non-self-referential formalism. And we're debating his argument here. Deep.
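For reference, the classic construction looks like this in Python (using the eager Z variant of the Y combinator, since Python is strictly evaluated):

```python
# Z combinator: builds recursion out of pure self-application,
# with no named recursion anywhere in its definition.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written without ever referring to itself by name.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

The point being that even a formalism with no self-referential definitions admits self-reference through self-application.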
If you assume "proof by contradiction" to be a valid form, then you also assume that mathematics is consistent. Nothing to see here... Move along...
Edit: In case some people missed it... In our logical framework, an inconsistent theory can prove both Φ and ¬Φ. Notice that you are using our logical framework; you are assuming it to be consistent. Yes. Assuming the framework to be consistent, you can prove that the framework is consistent. Shocking.
Did you see the first line? "That mathematics is thought to be consistent justifies the use of Proof by Contradiction." So the original author agrees with you.
You're right too, though: most of the value of that post is in the first paragraph, which is a very concise presentation of an interesting (apparent) paradox.
I stopped reading after the first paragraph, so I don't know what the rest of it says. You're right, he does agree with me. But he presents it with a shroud of mystery. I was turned off by his attempt at demonstrating a paradox (à la the alarming title "contra Gödel et al.") and using word play to trick the reader into believing there is one. But there is no paradox.
The claimed paradox is that this proof contradicts Gödel's result "if mathematics is consistent, then it cannot infer its own consistency". This is not what the paper is about anyway, though.
The proof offered in the abstract demonstrates a simple link between consistency and the validity of proof by contradiction. It shows that if mathematics is consistent (i.e. ⊢Φ and ⊢¬Φ together are impossible) then mathematics is consistent.
This is NOT a self-proof - it is a meta-proof. Taking arithmetic as an example, a self-proof of consistency would be a derivation of the consistency sentence (i.e. "I am consistent", or "¬◻(0=1)") from the arithmetic axioms. That is not what we have here.
The existence of a proof of the consistency of a theory does not put it at risk from Gödel's 2nd - for that we require that the theory prove its own consistency.
As far as I can tell, Hewitt begins with a proof that consistency and the validity of proof by contradiction are equivalent, and then proceeds on the grounds that consistency is proven internally to the theory - which it is not.
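The distinction can be written in standard notation (mine, not Hewitt's):

```latex
\begin{align*}
\text{self-proof (ruled out by G\"odel's 2nd unless $T$ is inconsistent):}\quad
  & T \vdash \mathrm{Con}(T)\\
\text{meta-proof (unproblematic):}\quad
  & S \vdash \mathrm{Con}(T) \ \text{ for a stronger meta-theory } S,\\
  & \text{e.g. } \mathrm{ZF} \vdash \mathrm{Con}(\mathrm{PA}).
\end{align*}
```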
> both Φ and ¬Φ are theorems that can be used in the proof to produce an immediate contradiction. Therefore mathematics is consistent.
Isn't the inference that mathematics is inconsistent?
A proposition is a statement which is either true or false (and not both, and not neither).
An axiom is a proposition which we assume to be true, "for free".
A logical deduction is a way to combine or modify true statements to get other true statements.
A proof of a statement is a finite sequence of propositions. The sequence starts with an axiom, and ends with the target statement. Its other elements are propositions---these can be axioms or not. Each proposition which (i) is in the sequence and (ii) is not an axiom, must be the result of applying a logical deduction to one or more propositions which appear before it in the sequence.
A theorem is a proposition for which there exists a proof.
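The definitions above can be turned into a toy proof checker; here is a minimal Python sketch (the string encoding and the single `modus_ponens` rule are my own illustration, not from the thread):

```python
def modus_ponens(proved):
    """From 'P' and 'P->Q' both being proved, derive 'Q'."""
    out = set()
    for s in proved:
        if "->" in s:
            p, q = s.split("->", 1)
            if p in proved:
                out.add(q)
    return out

def check_proof(steps, axioms, rules):
    """Accept iff every step is an axiom or follows from earlier steps by some rule."""
    proved = set()
    for prop in steps:
        if prop not in axioms and not any(prop in rule(proved) for rule in rules):
            return False
        proved.add(prop)
    return True

print(check_proof(["A", "A->B", "B"], {"A", "A->B"}, [modus_ponens]))  # True
print(check_proof(["B"], {"A", "A->B"}, [modus_ponens]))               # False
```

In this toy setting, inconsistency would mean the checker accepts proofs of both some sentence and its negation from the same axioms.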
If we can prove two theorems which contradict one another (like Φ and ¬Φ in the quote above), then we say that our system for finding proofs is inconsistent. Since the OP starts out assuming that mathematics is inconsistent and ends up producing two contradictory theorems, he does not seem to get very far. And his inference---that mathematics is consistent---seems to be incorrect.
> From that assumption, they derived a contradiction
This is what I don't see. To have derived a contradiction, they should have shown: "mathematics is consistent". Instead they ended up showing (because Φ and ¬Φ): "mathematics is inconsistent". Which is what they started out with. Where is the contradiction?
Hmm.. while writing this out, I think I am starting to get the source of the confusion. "Mathematics is inconsistent" is a meta statement about mathematics, and "Φ and ¬Φ" is a "plain" statement within the said mathematics. My intuitive feeling is that the latter contradiction does not "affect" the meta statement. I need to think further to make this more rigorous.
OT: Why the downvote(s)? Does my comment take away from the discussion?
I've always been fascinated with naïve set theory, and how it was refuted after its apparent inconsistency: the famous Russell's paradox of the set of all sets that don't contain themselves, which contains itself only if it doesn't, etc. So supposing it's false yields true, and supposing it's true yields false.
But what fascinated me was not Russell's set, but its inverse, the set of all sets that contain themselves; let's call it S.
For the statement "S is a member of S"---that the set of all sets that contain themselves really contains itself---yields true if and only if you suppose it's true, and it yields false if and only if you suppose it's false.
S is therefore always consistent no matter the value. Both values are correct, but never at the same time.
I don't know particularly what it means, but the fact that you can do this makes me wonder whether this super-consistency appears anywhere else in the philosophy of science, where nobody has ever realized the equivalent inverse and therefore never demonstrated the inconsistency.
S doesn't seem that interesting. Assume that S does not contain itself. Consider the set P, which can be constructed by adding P as an element to S. Note that P contains itself. Therefore, P is in S. Also note that every other element in P is also in S, and that every element in S is in P. Therefore P and S are the same set. Therefore S contains itself, contradicting our assumption that S does not contain itself.
If we assume that S does contain itself, then I do not see a way at arriving at a contradiction.
Hmm, you are correct. S really isn't that interesting. And my scepticism towards justification through non-contradiction because of possible super-consistency is therefore based on false premises.
From my naive point of view I'm confused about how this argument can possibly work. Perhaps someone can explain it?
1. It seems to me that if you assume that mathematics is inconsistent, you shouldn't then consider an inconsistency in the argument sufficient grounds to throw away the assumption. I suppose that means that I don't accept proof by contradiction in the case that the assumption is the inconsistency of mathematics.
2. Sanity check: we can create known inconsistent systems; could the same logic prove that such inconsistent systems are consistent too? It seems to me that it could; am I wrong?
3. Supposing that mathematics did prove its own consistency, the fact that inconsistent systems can also prove their own consistency means that you can't take a proof of consistency to be strong evidence that the system is consistent.
4. Gödel didn't say that mathematics couldn't prove its own consistency; he just said that if it could, it was unsound, which is a bit of a concern too.
I don't think that Gödel's theorem says that math is necessarily inconsistent.
My understanding is that it says math is either inconsistent or incomplete. So if you construct a mathematical system that is consistent (allowing proofs by contradiction), there will be some set of statements that are true, but cannot be proved true by that mathematics.
This is all kind of meta, but I think that, to prove anything, he would need to fix a system in which he could operate, and prove it in that.
Kind of reminds me of the following proof I have that something in this world is indeed absolute.
1) "Nothing is absolute" is self-contradictory.
2) Therefore, something is absolute.
Neat, huh?
By the way, one needs to elaborate a little bit about the above proof. The reason (2) follows from (1) is because the negation of 1, "something is absolute", is not self-contradictory. However, if both (1) and (2) were shown to be self-contradictory, then you wouldn't have the excluded middle.
Gödel walks into the Church and proclaims, "You cannot prove that you are good and therefore you cannot prove that there is a good God but you need not fear for I have developed a system whereby you can still be ethical despite knowing not of your goodness."
Whether or not Gödel is correct soon becomes irrelevant as he is promptly ejected from the Church. As he walks away he hears the sound of the hymn "I Cannot Be Sung in the Church" being belted out at full volume by the congregation.
"(There is a very weak theory called Provability Logic that has been used for self-referential propositions coded as integers, but it is not strong enough for the purposes of computer science.)"
weak theory? not strong enough for computer science, eh? strange ways to talk about mathematical theorems that are either true or not. let's see him define what he means by any of these terms lol
"Strong" and "weak" have well-understood meanings in mathematical logic and model theory.
For example, imagine there's a true and "important" statement S about Turing machines for which we could prove "it is impossible to prove S using provability logic." In that sense PL would be "too weak" to do computer science.
He's just speaking colloquially and I don't know anything about the system of logic he's describing. He means something like "weak relative to all the other ones we'd consider on a day-to-day basis" or "so weak we'd never seriously consider it except as a specialized tool."
That's not really the point, though. The point is that "strong" and "weak" are well-defined terms in mathematical logic and he was almost certainly using them in that sense. :)
No, he's almost certainly not. Keep in mind, I know nothing about PL. I'm solely relying on my background as a mathematician to think of the most reasonable interpretation for what he's saying.
He's saying something like, "You can axiomatize Peano Arithmetic in PL and explore the idea of provability therein because that's what PL was invented for. At first glance we might try to use something like PL to do what we're setting out to do, but it's so specialized that it winds up not being worthwhile."
This makes more sense in the context of his paper, anyhow. Gödel's second incompleteness theorem basically says that Peano Arithmetic can't prove its own consistency unless it's inconsistent.
So, we could model PA inside another logical system (e.g., PL) which has notation for certain meta-mathematical statements like "P(X): it is provable that X". You'd then need to demonstrate that a version of PA axiomatized with this logic is both complete and sound WRT PA, etc. etc.