
Paradoxes of Material Implication (1997) - olooney
https://legacy.earlham.edu/~peters/courses/log/mat-imp.htm
======
tr352
Let me copy/paste my reply from the same discussion yesterday
([https://news.ycombinator.com/item?id=18531650](https://news.ycombinator.com/item?id=18531650)):

There's another "solution" to this paradox: if we assert something we are
guided by a set of "conversational principles". For example, asserting "X
implies Y" if we know that X is false is inappropriate. If X is false, "not-X"
would be the appropriate assertion.

According to this theory, there's nothing wrong with the truth-functional
meaning of "X implies Y". We just need to take into account what is implied by
asserting "X implies Y", rather than e.g. "not-X", or "X and Y".

Same with disjunction: "X or Y" is true if we know that X is true. However, if
we assert "X or Y", it is implied that we're not certain that X is true;
otherwise we would have used "X", which is the simplest way to convey that
fact.

This is known as Grice's Pragmatic Defence of Truth-Functionality.
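
Material implication is purely truth-functional: "X implies Y" is defined as "(not X) or Y". A minimal truth-table sketch in Python; the rows where X is false are exactly the "paradoxical" ones that the pragmatic story addresses:

```python
# Material implication (X -> Y), defined truth-functionally as (not X) or Y.
# When X is false, the conditional is true regardless of Y -- which is where
# Grice's pragmatic account kicks in: asserting it would be misleading.
def implies(x, y):
    return (not x) or y

for x in (True, False):
    for y in (True, False):
        print(f"X={x!s:5} Y={y!s:5}  X->Y = {implies(x, y)}")
```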

------
foldr
The claim that the material implication analysis preserves the validity of
valid arguments is pretty questionable. Consider the following argument:

    
    
        No student will succeed if he goofs off
        Every student will succeed
        |- No student will goof off
    

Analyzing the 'if' in the first premise of the argument above as material
implication, we get:

    
    
        For every student x, it's not the case that [x won't goof off or x will succeed].
        = For every student x, x will goof off and x won't succeed.
    

The following argument is valid only trivially, as the premises contradict
each other (assuming the existence of at least one student):

    
    
        For every student x, x will goof off and x won't succeed.
        Every student will succeed
        |- No student will goof off
    

I suspect it's quite easy to construct other similar examples involving
quantifiers where validity is not even trivially preserved.
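
The contradiction is easy to verify mechanically. A small sketch (predicate names are mine, not from the original) that brute-forces every assignment of goofing-off and succeeding to a single student and checks whether both premises can hold at once:

```python
from itertools import product

# Premise 1, on the naive material-implication reading of "No student will
# succeed if he goofs off": for every student x, x goofs off and x won't succeed.
# Premise 2: every student will succeed.
def premises_hold(students):
    p1 = all(g and not s for g, s in students)   # goofs off and doesn't succeed
    p2 = all(s for g, s in students)             # succeeds
    return p1 and p2

# With at least one student, no (goofs_off, succeeds) assignment satisfies both.
satisfiable = any(premises_hold([(g, s)])
                  for g, s in product([True, False], repeat=2))
print(satisfiable)  # False: the premises are jointly unsatisfiable
```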

(It's easier to find examples of the material implication analysis failing to
preserve the invalidity of invalid arguments.)

~~~
a-nikolaev

      No student will succeed if he goofs off
       =
      Forall x: (GoofOff(x) -> not Succeed(x))
       =
      not Exist x : (GoofOff(x) and Succeed(x))
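
(The two quantified forms agree by De Morgan; the underlying propositional equivalence can be checked by truth table for a single individual:)

```python
from itertools import product

# For one individual x: (GoofOff(x) -> not Succeed(x)) vs
# not (GoofOff(x) and Succeed(x)) -- equivalent on every assignment.
for g, s in product([True, False], repeat=2):
    lhs = (not g) or (not s)   # GoofOff(x) -> not Succeed(x)
    rhs = not (g and s)        # not (GoofOff(x) and Succeed(x))
    assert lhs == rhs
print("equivalent on all assignments")
```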

~~~
foldr
Sure, but then you're doing violence to the structure of the original
sentence. (How did the negation get into the consequent?) If you're allowed
free rein to paraphrase, then you can always get the right result.

"No student will succeed if he goofs off" is a standard example discussed in
the semantics literature, by the way. What you're pointing out, in effect, is
that this sentence seems to mean "No student who goofs off will succeed". The
problem is that it's unclear how to get to that interpretation given the
actual syntactic structure of the sentence. In other words, you can't get
there just by following a direction to interpret if...then... as material
implication.

~~~
a-nikolaev
I agree that you cannot just mindlessly parse "if ... then ..." as
implication, especially in the context of other things going on in the
sentence, and hope it will automagically work out as a correct interpretation
of the sentence's meaning.

~~~
foldr
Right, but that means that material implication can't be used as an analysis
of the meaning of "if...then..." in English. If you have to make ad-hoc
adjustments for different kinds of sentence, then you don't have an actual
theory of the interpretation of "if...then..." -- you just have a toolbox of
techniques for paraphrasing it. Note that it is possible to do much better
than this, so it's not an unreasonable goal to have in mind. See e.g.
[https://dspace.mit.edu/handle/1721.1/95781](https://dspace.mit.edu/handle/1721.1/95781)
for an overview of modern approaches to the semantics of conditionals in
natural languages.

On the preceding point, virtually everyone agrees. I was questioning even the
weaker assertion that analyzing "if...then..." as material implication always
preserves validity.

~~~
olooney
Trying to map natural language onto logic is a mug's game, although the
converse - mapping logic _into_ natural language - is possible.

Leibniz was perhaps the first to understand that the solution to this was to
abandon natural language and replace it with an artificial, perfect language,
where connectives and grammar had one and only one clear meaning: the calculus
ratiocinator[2] and the characteristica universalis[1]. Although Leibniz didn't
succeed in his lifetime, he inspired Frege to write the Begriffsschrift[3],
which was an early and remarkably complete presentation of what today we would
call the predicate calculus.

One of Frege's insights was that quantification ("there exists x such that..."
and "for all x...") needed to be explicit and could only be made unambiguous
if the exact order and name of each quantification was used consistently -
hence the idea of "bound variables." Without explicit quantification, it is
impossible to determine the meaning of a statement such as:

    
    
        "All mice fear some cat."
    

Does this mean that for every mouse, there is some nearby cat which that mouse
fears? Or does it mean that there is some kind of King Cat, feared by every
mouse in the world? Natural language is ambiguous on this point. (Note also
that this particular example is one which _appears_ to be a syllogism, but
which cannot be fully analyzed using Aristotelian logic.) However, if we use
explicit quantification, we write either:

    
    
        ∃x (Cx ∧ ∀y (My → yFx))    (1)
        ∀y (My → ∃x (Cx ∧ yFx))    (2)
    

Or, in the stilted yet precise jargon of mathematicians:

    
    
        There exists a cat x such that for all mice y, y fears x.  (1)
        For all mice y there exists a cat x such that y fears x.   (2)
    

The _only_ difference between (1) and (2) is the order of quantification,
and this is not something English or other natural languages are careful about
tracking. This is why you feel like you have to butcher sentences to rewrite
them in this form, and also why this re-writing cannot be done by rote but
requires human judgement and understanding: because this critical information
is in fact missing from the original natural language sentence!
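
The scope difference is exactly the difference between the nesting order of two loops. A sketch with a toy "fears" relation (the individuals and the relation are made up for illustration), in which each mouse fears a different cat, so reading (2) holds while reading (1) fails:

```python
mice = ["m1", "m2"]
cats = ["c1", "c2"]
# Each mouse fears a *different* cat: no single "King Cat" exists.
fears = {("m1", "c1"), ("m2", "c2")}

# (1) There exists a cat x such that for all mice y, y fears x.
exists_forall = any(all((y, x) in fears for y in mice) for x in cats)

# (2) For all mice y there exists a cat x such that y fears x.
forall_exists = all(any((y, x) in fears for x in cats) for y in mice)

print(exists_forall, forall_exists)  # False True
```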

These are not particularly contrived or unusual examples, by the way. One of
the fundamental notions in real analysis is that of the limit, which is
defined as follows:

    
    
      the limit of the sequence a_n as n goes to infinity is C if and only if for every epsilon > 0, there exists N such that |a_n - C| < epsilon for all n > N.
    

Such a thought cannot even be precisely _articulated_ unless one has the
necessary language to talk precisely about statements involving multiple
quantifiers and bound variables. Which is why early presentations of calculus
(Newton, Leibniz) relied on unsatisfactory notions of "fluxions" and
"infinitesimals"[4], while later mathematicians (Cauchy, Weierstrass), armed
with a more sophisticated mathematical language, were finally able to give a
satisfactory foundation to calculus.[5]
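
The nested quantifiers in the definition translate directly into code: "for every epsilon there exists N" becomes a function producing a witness N from epsilon, and the "for all n > N" part can be spot-checked over a finite range. A sketch for the sequence a_n = 1/n converging to C = 0:

```python
import math

def a(n):
    return 1.0 / n

# For a_n = 1/n and C = 0, any N >= 1/epsilon works as a witness,
# since n > N implies |1/n - 0| < 1/N <= epsilon.
def witness_N(eps):
    return math.ceil(1.0 / eps)

C = 0.0
for eps in (0.1, 0.01, 0.001):
    N = witness_N(eps)
    # Spot-check the universally quantified clause on a finite stretch of n > N.
    assert all(abs(a(n) - C) < eps for n in range(N + 1, N + 1000))
print("witnesses check out")
```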

We see the same category of problem when we try to interpret "if... then..."
as the material implication of formal logic. Not only do we have the problem
of conversational implicature[6] but we have the so called paradoxes described
in the original article. Similar problems exist for common words like "or",
which in everyday speech is often taken to mean "exclusive or" rather than the
inclusive "or" favored by logicians, and even simple words like "is" don't
necessarily map purely onto the Law of Identity[7] the way logicians and
philosophers would like them to.

The way I see it, the fault lies fully on the side of natural language, which
is too squishy and imprecise and overloaded to be useful to convey precise
formal arguments. But that doesn't mean you have to learn an artificial
language like Lojban[8]. Mathematicians do quite well by speaking in a kind of
restricted subset of English (or whatever native language they're used to)
simply by giving exact and precise meanings to certain words and formulations
like "if and only if" or "implies."[9] When a mathematician says "implies" in
a paper or lecture, you can be quite sure he or she is speaking of material
implication.

But as for pinning down the meaning of natural language in the wild, as it is
actually spoken... well, that's a rather more difficult problem, don't you
think?

[1]
[https://en.wikipedia.org/wiki/Characteristica_universalis](https://en.wikipedia.org/wiki/Characteristica_universalis)

[2]
[https://en.wikipedia.org/wiki/Calculus_ratiocinator](https://en.wikipedia.org/wiki/Calculus_ratiocinator)

[3]
[https://en.wikipedia.org/wiki/Begriffsschrift](https://en.wikipedia.org/wiki/Begriffsschrift)

[4]
[https://en.wikipedia.org/wiki/History_of_calculus#Newton_and...](https://en.wikipedia.org/wiki/History_of_calculus#Newton_and_Leibniz)

[5]
[https://en.wikipedia.org/wiki/Limit_(mathematics)](https://en.wikipedia.org/wiki/Limit_\(mathematics\))

[6]
[https://plato.stanford.edu/entries/implicature/#GriThe](https://plato.stanford.edu/entries/implicature/#GriThe)

[7]
[https://en.wikipedia.org/wiki/Law_of_identity](https://en.wikipedia.org/wiki/Law_of_identity)

[8]
[https://en.wikipedia.org/wiki/Lojban](https://en.wikipedia.org/wiki/Lojban)

[9]
[https://en.wikipedia.org/wiki/List_of_mathematical_jargon](https://en.wikipedia.org/wiki/List_of_mathematical_jargon)

~~~
foldr
>Trying to map natural language onto logic is a mug's game

Not really. Modern linguistic semantics has done a pretty good job. Check out
the link to the overview article that I posted in the grandparent, or the von
Fintel & Heim textbook here: [http://web.mit.edu/fintel/fintel-heim-
intensional.pdf](http://web.mit.edu/fintel/fintel-heim-intensional.pdf)
(sections 4.3-5 in particular). It's possible to give precise logical analyses
of natural language conditionals to a pretty significant extent. It's just the
material implication analysis that doesn't work.

> because this critical information [about quantifier scope] is in fact
> missing from the original natural language sentence!

You're moving a bit too fast there. The information can't be recovered from
the sequence of words, but that doesn't mean that it isn't present in the
structures that underlie interpretation. A precise logical analysis can be
given for each of the possible interpretations of an ambiguous sentence.
Semanticists treat quantifier scope ambiguities using such mechanisms as
quantifying-in [1], quantifier raising [2], type shifting [3], or even
continuations [4].

No-one is suggesting, by the way, that formal logical analyses of the meanings
of English sentences are useful _for the purposes of doing math or logic_. But
the "paradoxes" in the original article relate to the use of material
implication to gloss the meaning of "if...then..." in English. This naturally
raises the question of whether there might be better analyses available.

[1] [http://www.coli.uni-
saarland.de/projects/milca/courses/comse...](http://www.coli.uni-
saarland.de/projects/milca/courses/comsem/html/node96.html#sec_clls-scope.qi)

[2]
[https://dspace.mit.edu/handle/1721.1/16287](https://dspace.mit.edu/handle/1721.1/16287)

[3] [http://lecomte.al.free.fr/ressources/PARIS8_LSL/Hendriks-
TCS...](http://lecomte.al.free.fr/ressources/PARIS8_LSL/Hendriks-TCS.pdf)

[4] [http://www.nyu.edu/projects/barker/barker-
continuations.pdf](http://www.nyu.edu/projects/barker/barker-
continuations.pdf)

------
dwheeler
I created the "allsome" quantifier to reduce the risk of some of these
confusions. Details here:

[https://dwheeler.com/essays/allsome.html](https://dwheeler.com/essays/allsome.html)

------
leoc
Graham Priest's book is great:
[https://www.cambridge.org/ie/academic/subjects/philosophy/ph...](https://www.cambridge.org/ie/academic/subjects/philosophy/philosophy-
science/introduction-non-classical-logic-if-2nd-edition)

------
gus_massa
Original title: " _Paradoxes of Material Implication_ "

~~~
pierrebai
About the example given with the number 3, I prefer this even more absurd
form: if the number 3 is not the number 3, then the number 3 is the number 3.
It's true!

------
ninegunpi
That descendants of Aristotle still find the limitations of the system
amusing is itself amusing.

I hope to live to see the day when the philosophical advancements of the 20th
century (or the re-discovery of 2500-year-old Indian logic, if you like),
formalized in accessible forms, gain widespread acceptance, and leave the
plenty of people whose job it is to juggle limited abstractions with the need
to pick more useful jobs.

No pun intended, these are terribly useful abstractions we've built our world
on, but they barely hold up against a thorough reality check and leave out a
lot as 'paradoxes'.

~~~
myWindoonn
Please add some substance, or I will make your argument for you.

Yes! We all need to get on board with constructivist mathematics [0][1]
already. Construction is very similar to computation, and it is not
inconsistent to take "all reals are computable" or "all functions are
continuous", the same rules Turing discovered, as axioms if we like. We can
therefore move computer science fully onto a foundation that is _more_
rigorous than typical maths.

[0] [https://plato.stanford.edu/entries/mathematics-
constructive/](https://plato.stanford.edu/entries/mathematics-constructive/)

[1]
[https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016...](https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/S0273-0979-2016-01556-4.pdf)

~~~
ninegunpi
You've made a far better one than me below. Hats off.

