There's another "solution" to this paradox: if we assert something we are guided by a set of "conversational principles". For example, asserting "X implies Y" if we know that X is false is inappropriate. If X is false, "not-X" would be the appropriate assertion.
According to this theory, there's nothing wrong with the truth-functional meaning of "X implies Y". We just need to take into account what is implied by asserting "X implies Y", rather than e.g. "not-X", or "X and Y".
Same with disjunction: "X or Y" is true if we know that X is true. However, if we assert "X or Y", it is implied that we're not certain that X is true; otherwise we would have used "X", which is the simplest way to convey that fact.
This is known as Grice's Pragmatic Defence of Truth-Functionality.
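To make the truth-functional side of this concrete, here is a minimal sketch of material implication as a truth function. The point Grice's defence relies on is that "X implies Y" is false only in one of the four cases; when X is false the conditional is true, and the oddness of asserting it is pragmatic, not semantic.

```python
def implies(x: bool, y: bool) -> bool:
    # Material implication: false only when the antecedent is true
    # and the consequent is false.
    return (not x) or y

# Enumerate the full truth table. Note the two rows with a false
# antecedent: both come out true, which is exactly the behavior
# the "paradoxes" complain about and Grice explains pragmatically.
for x in (True, False):
    for y in (True, False):
        print(x, y, implies(x, y))
```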
Sure, but then you're doing violence to the structure of the original sentence. (How did the negation get into the consequent?) If you're allowed free rein to paraphrase, then you can always get the right result.
"No student will succeed if he goofs off" is a standard example discussed in the semantics literature, by the way. What you're pointing out, in effect, is that this sentence seems to mean "No student who goofs off will succeed". The problem is that it's unclear how to get to that interpretation given the actual syntactic structure of the sentence. In other words, you can't get there just by following a direction to interpret if...then... as material implication.
I agree that you cannot just mindlessly parse "if ... then ..." as implication, especially in the context of other things going on in the sentence and hope it will automagically work out as a correct interpretation of the sentence meaning.
Right, but that means that material implication can't be used as an analysis of the meaning of "if...then..." in English. If you have to make ad-hoc adjustments for different kinds of sentence, then you don't have an actual theory of the interpretation of "if..then..." -- you just have a toolbox of techniques for paraphrasing it. Note that it is possible to do much better than this, so it's not an unreasonable goal to have in mind. See e.g. https://dspace.mit.edu/handle/1721.1/95781 for an overview of modern approaches to the semantics of conditionals in natural languages.
On the preceding point, virtually everyone agrees. I was questioning even the weaker assertion that analyzing "if...then..." as material implication always preserves validity.
Yeah, I understand your point and agree with it; it just wasn't clear to me at first what your intention was and where you were going with the argument.
I like that sentence btw, very short and nice example!
Trying to map natural language onto logic is a mug's game, although the converse - mapping logic into natural language - is possible.
Leibniz was perhaps the first to understand that the solution to this was to abandon natural language and replace it with an artificial, perfect language, where connectives and grammar had one and only one clear meaning: the calculus ratiocinator and the characteristica universalis. Although Leibniz didn't succeed in his lifetime, he inspired Frege to write the Begriffsschrift[3], which was an early and very complete presentation of what today we would call predicate calculus.
One of Frege's insights was that quantification ("there exists x such that..." and "for all x...") needed to be explicit and could only be made unambiguous if the exact order and name of each quantification was used consistently - hence the idea of "bound variables." Without explicit quantification, it is impossible to determine the meaning of a statement such as:
"All mice fear some cat."
Does this mean that for every mouse, there is some nearby cat which that mouse fears? Or does it mean that there is some kind of King Cat, feared by every mouse in the world? Natural language is ambiguous on this point. (Note also that this particular example is one which appears to be a syllogism, but which cannot be fully analyzed using Aristotelian logic.) However, if we use explicit quantification, we write either:
∃x (Cx ∧ ∀y (My → yFx)) (1)
∀y (My → ∃x (Cx ∧ yFx)) (2)
Or, in the stilted yet precise jargon of mathematicians:
There exists a cat x such that for all mice y, y fears x. (1)
For all mice y there exists a cat x such that y fears x. (2)
The only difference between (1) and (2) is the order of quantification, and this is not something English or other natural languages are careful about tracking. This is why you feel like you have to butcher sentences to rewrite them in this form, and also why this re-writing cannot be done by rote but requires human judgement and understanding: because this critical information is in fact missing from the original natural language sentence!
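The scope difference between the two readings can be checked mechanically over a small model. This is a sketch with an invented toy model (the mice, cats, and fear relation are made up for illustration); the nesting of `any` inside `all`, or vice versa, mirrors the order of the quantifiers.

```python
# A tiny invented model: two mice, two cats, each mouse fears
# its own cat but no single cat is feared by every mouse.
mice = {"m1", "m2"}
cats = {"c1", "c2"}
fears = {("m1", "c1"), ("m2", "c2")}

# Reading (2): for every mouse y there exists a cat x that y fears.
reading_2 = all(any((y, x) in fears for x in cats) for y in mice)

# Reading (1): there exists a "King Cat" x feared by every mouse y.
reading_1 = any(all((y, x) in fears for y in mice) for x in cats)

print(reading_2)  # True in this model
print(reading_1)  # False: no single cat is feared by all mice
```

Swapping the order of `any` and `all` is exactly the swap of quantifier order between (1) and (2), and the two readings come apart in this model.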
These are not particularly contrived or unusual examples, by the way. One of the fundamental notions in real analysis is that of the limit, which is defined as follows:
the limit of the sequence a_n as n goes to infinity is C if and only if for every epsilon > 0, there exists N such that |a_n - C| < epsilon for all n > N.
Such a thought cannot even be precisely articulated unless one has the necessary language to talk precisely about statements involving multiple quantifiers and bound variables. Which is why early presentations of calculus (Leibniz, Newton) relied on unsatisfactory notions of "fluxions" and "infinitesimals"[4], while later mathematicians (Cauchy, Weierstrass), armed with a more sophisticated mathematical language, were finally able to give a satisfactory foundation to calculus.[5]
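The nested quantifiers in the limit definition ("for every epsilon there exists N such that for all n > N...") can be exercised numerically. A sketch, using the sequence a_n = 1/n with limit C = 0; `witness_N` is an invented helper name, and it only checks a finite window of terms past N, which suffices here because 1/n is decreasing:

```python
def a(n):
    # The sequence a_n = 1/n, which converges to 0.
    return 1.0 / n

def witness_N(eps, C=0.0, search_up_to=10**6):
    # Find some N such that |a_n - C| < eps for the next block of
    # terms after N. For a decreasing sequence like 1/n, checking a
    # finite window is enough; roughly, any N >= 1/eps works.
    for N in range(1, search_up_to):
        if all(abs(a(n) - C) < eps for n in range(N + 1, N + 1000)):
            return N
    return None

# Smaller epsilon demands a larger N -- the "for every epsilon,
# there exists N" structure in action.
for eps in (0.1, 0.01, 0.001):
    print(eps, witness_N(eps))
```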
We see the same category of problem when we try to interpret "if... then..." as the material implication of formal logic. Not only do we have the problem of conversational implicature[6], but we have the so-called paradoxes described in the original article. Similar problems exist for common words like "or", which is often taken to mean "exclusive or" rather than the inclusive "or" favored by logicians, and even simple words like "is" don't necessarily map purely onto the Law of Identity[7] the way logicians and philosophers would like them to.
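The "or" ambiguity is easy to pin down in code. A sketch contrasting the logicians' inclusive "or" with the exclusive reading everyday speech often suggests; the two connectives agree everywhere except when both disjuncts are true:

```python
def inclusive_or(x: bool, y: bool) -> bool:
    # True if at least one disjunct is true (the logician's "or").
    return x or y

def exclusive_or(x: bool, y: bool) -> bool:
    # True if exactly one disjunct is true.
    return x != y

# The single row where they disagree:
print(inclusive_or(True, True))  # True
print(exclusive_or(True, True))  # False
```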
The way I see it, the fault lies fully on the side of natural language, which is too squishy and imprecise and overloaded to be useful to convey precise formal arguments. But that doesn't mean you have to learn an artificial language like Lojban[8]. Mathematicians do quite well by speaking in a kind of restricted subset of English (or whatever native language they're used to) simply by giving exact and precise meanings to certain words and formulations like "if and only if" or "implies."[9] When a mathematician says "implies" in a paper or lecture, you can be quite sure he or she is speaking of material implication.
But as for pinning down the meaning of natural language in the wild, as it is actually spoken... well, that's a rather more difficult problem, don't you think?
>Trying to map natural language onto logic is a mug's game
Not really. Modern linguistic semantics has done a pretty good job. Check out the link to the overview article that I posted in the grandparent, or the von Fintel & Heim textbook here: http://web.mit.edu/fintel/fintel-heim-intensional.pdf (sections 4.3-5 in particular). It's possible to give precise logical analyses of natural language conditionals to a pretty significant extent. It's just the material implication analysis that doesn't work.
> because this critical information [about quantifier scope] is in fact missing from the original natural language sentence!
You're moving a bit too fast there. The information can't be recovered from the sequence of words, but that doesn't mean that it isn't present in the structures that underlie interpretation. A precise logical analysis can be given for each of the possible interpretations of an ambiguous sentence. Semanticists treat quantifier scope ambiguities using such mechanisms as quantifying in [1], quantifier raising [2], type shifting [3], or even continuations [4].
No-one is suggesting, by the way, that formal logical analyses of the meanings of English sentences are useful for the purposes of doing math or logic. But the "paradoxes" in the original article relate to the use of material implication to gloss the meaning of "if...then..." in English. This naturally raises the question of whether there might be better analyses available.
The "paradoxes" discussed in the linked page are a result of properties of English (and most/all other natural languages). Of course, there is no inherent problem with material implication as a logical connective.
About the example given with the number 3, I prefer this even more absurd form: if the number 3 is not the number 3, then the number 3 is the number 3. It's true!
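The quip checks out mechanically: under material implication, a conditional with a false antecedent is vacuously true, whatever the consequent. A one-line sketch:

```python
antecedent = (3 != 3)  # False
consequent = (3 == 3)  # True
# Material implication: "not antecedent, or consequent".
# With a false antecedent, this is true no matter what.
print((not antecedent) or consequent)  # True
```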
It's amusing in itself that the intellectual descendants of Aristotle still find the limitations of his system amusing.
I hope to live to see the day when the philosophical advances of the 20th century (or the rediscovery of 2,500-year-old Indian logic, if you like), formalized in accessible forms, gain widespread acceptance. That could leave plenty of people, whose job it is to juggle limited abstractions, with the need to pick more useful jobs.
No pun intended: these are terribly useful abstractions we've built our world on, but they barely hold up against a thorough reality check and leave a lot out as "paradoxes".
Please add some substance, or I will make your argument for you.
Yes! We all need to get on board with constructivist mathematics [0][1] already. Construction is very similar to computation, and it is not inconsistent to take "all reals are computable" or "all functions are continuous", the same rules Turing discovered, as axioms if we like. We can therefore move computer science fully onto a foundation that is more rigorous than typical maths.
Keep in mind that Aristotelian logic did not stop with Aristotle. It kept developing through the Middle Ages and even into today. Now, it's called term logic. Fred Sommers made tremendous advances in expanding syllogistic logic into something more versatile than what Aristotle worked on.
Indeed. Yet it is still based on the True/False pair, which reflects neither reality nor human experience in most cases. Where it is applicable, it works perfectly. But the scope is limited.
1. "Quite a while"? That's less than a hundred years since Gödel, and in math at that. Compare that to 2000+ years of dominance of Aristotelian logic in the hard sciences, just because the Romans inherited most of their scientific views from the Greeks, not from the Indians or Chinese.
2. It depends on the domain of applicability, if you think about it.
In pure CS and math? Yes, the visible value is limited, because most problems we choose to solve can be solved with the mathematical apparatus we're already armed with. The value I know of is mostly limited to optimizing problems that have poor solutions under binary logic.
In practical engineering? ATPG, to my understanding, requires multi-valued logic. Analysis of large phenomena and automated decision making become an order of magnitude simpler, with better efficiency over a chosen metric: temperature controllers, decisions based on photo-metering (autofocus, exposure adjustment), etc. Somehow, even with the lack of readily available building blocks and tooling, it turns out there are problems people are motivated to solve from scratch, and MVL/FL comes in handy.
That's only stuff I've overheard through a life spent among bright engineers.
3. Moreover, the biggest impact is not in CS (and neither is the originally presented paradox); it's on human judgment, decision-making, and general assessment of reality, where "neither true nor false" ("I don't know") is the first stepping stone toward making the world a much easier place to live in.