
What is a proof, really? - CarolineW
http://profkeithdevlin.org/2014/11/24/what-is-a-proof-really/
======
bumbledraven
_The usual maneuver by which mathematicians leverage that formal notion to
capture the arguments they, and all their colleagues, regard as proofs is to
say a proof is a finite sequence of assertions that could be filled in to
become one of those formal structures. It’s not a bad approach if the goal is
to give someone a general idea of what a proof is. The trouble is, no one has
ever carried out that filling-in process. It’s purely hypothetical._

False. What is "hypothetical" about this formal proof that the reals are
uncountable?
[http://us.metamath.org/mpegif/ruc.html](http://us.metamath.org/mpegif/ruc.html)

Each step is clickable, and just a few clicks take you back to the axioms. For
more, see chapter 1 of the Metamath Book
([http://us.metamath.org/downloads/metamath.pdf](http://us.metamath.org/downloads/metamath.pdf))
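
For a flavor of the notation, here is a tiny fragment in Metamath's language
(a simplified sketch following set.mm's conventions):

  $c wff |- ( ) -> $.                   $( declare constant symbols $)
  $v ph ps $.                           $( declare variables $)
  wph $f wff ph $.                      $( ph is a wff $)
  wps $f wff ps $.                      $( ps is a wff $)
  wi $a wff ( ph -> ps ) $.             $( implication forms a wff $)
  ax-1 $a |- ( ph -> ( ps -> ph ) ) $.  $( an axiom of propositional logic $)

Proved statements ($p) carry explicit proofs that the verifier checks step by
step against axioms like these - nothing is left "hypothetical".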

How is it that Devlin is unaware of logical systems like Metamath? It's at
least 10 years old. [Edit: Maybe 20 years?
[http://us.metamath.org/copyright.html](http://us.metamath.org/copyright.html)
says "The name "Metamath" has been used publicly by Norman Megill since 1994
to refer to a computer language and related software."]

~~~
CHY872
Well, the author's point is probably still generally correct - in almost all
fields proofs are not presented from axioms, and you could easily go your
whole life as a mathematician without encountering such a proof, except
perhaps by accident. The example you gave is the sort of proof presented to
bright high school students - I'm sure you can do more interesting things with
Metamath, but I feel confident in saying that if you're a research
mathematician, you'll lose a lot of time learning it and using it.

Where proof assistants are used, they're frequently totally counter to the
style of thinking traditionally taught in mathematics and computer science -
I've heard them described by experts in the field as 'proof obstructions'.

I'm currently writing my undergraduate project in a proof assistant, and you
end up with tonnes of unreadable garbage like

  ASM_SIMP_TAC (srw_ss ()) [DETERMINACY_LEMMA, simp_expand_def]
  THEN1 METIS_TAC []
  THEN RW_TAC (srw_ss ()) [ss_ecases]
  THEN FULL_SIMP_TAC (srw_ss ()) []

which bears no real relation to any normal thought. Proving the determinacy
of a relation took days of careful thinking (as a newbie to the package),
even though I could prove it in a couple of lines of handwritten prose.

I can imagine that this would be a massive turn-off for almost any research
student.

I also think that his line

 _(it almost certainly is not, but more pertinent, how could you ever be sure
it is?)_

is still a good point - you're trusting your correctness to a computer
program and all of the libraries built on top of it. The chance that one of
these contains a bug (by which I mean a bug in the rules of inference of the
theory) that could lead the prover to output garbage is probably quite high.

~~~
madez
> I'm sure you can do more interesting things with metamath, but I feel
> confident in saying that if you're a research mathematician, you'll lose a
> lot of time learning it and using it.

And then save time because you can be sure that the proofs you write are
correct.

> I'm currently writing my undergraduate project in a proof assistant, and you
> end up with tonnes of unreadable garbage like ASM_SIMP_TAC (srw_ss ())
> [DETERMINACY_LEMMA, simp_expand_def] THEN1 METIS_TAC [] THEN RW_TAC (srw_ss
> ()) [ss_ecases] THEN FULL_SIMP_TAC (srw_ss ()) [].

> which has no real relation to any normal thought. Proving the determinacy of
> a relation took a matter of days of careful thinking (as a newbie to the
> package) - despite being able to prove it in a couple of lines of
> handwritten prose.

Did you try Isar?
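
A structured Isar proof reads almost like prose. A minimal sketch (the
standard conjunction-swapping exercise):

  lemma "A ∧ B ⟹ B ∧ A"
  proof -
    assume ab: "A ∧ B"
    from ab have b: "B" by (rule conjunct2)
    from ab have a: "A" by (rule conjunct1)
    from b a show "B ∧ A" by (rule conjI)
  qed

Each step names the facts it uses, so the script stays close to the
handwritten argument.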

> I also think that his line

> (it almost certainly is not, but more pertinent, how could you ever be sure
> it is?)

> is still a good point - you're trusting your correctness to a computer
> program, and all of the libraries built on top of it. The chances that one
> of these has a bug in (by this I mean a bug in the rules of inference of the
> theory) that could lead to the theory outputting garbage is probably quite
> high.

It's not at all a good point. There are theorem provers that rely on the
correctness of just a very small kernel. Everything on top (libraries) is then
guaranteed to follow the rules imposed by the kernel.
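
As a sketch of that LCF-style design (in HOL-flavoured ML; ASSUME, MP and
concl mirror actual HOL4 kernel functions): the theorem type is abstract, so
every theorem a library produces must have been built by the kernel's rules.

  (* 'thm' is abstract: its only constructors are inference rules *)
  signature KERNEL =
  sig
    type term
    type thm
    val ASSUME : term -> thm          (* A |- A *)
    val MP     : thm -> thm -> thm    (* from |- A ==> B and |- A, infer |- B *)
    val concl  : thm -> term          (* theorems can be inspected, not forged *)
  end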

~~~
CHY872
> And then save time because you can be sure that the proofs you write are
> correct.

For better or worse, we both know that's not required to be a successful
mathematician. I remember reading a presentation by a guy advocating
mechanised proof who realised that almost every important proof in his field
had later been shown to be false.

> Did you try isar?

I had a go at using Isabelle (in general) and truthfully never really got into
Isar - I discovered that both my supervisors have written large theories in
HOL4 (JIT compilers etc.), but have no recent experience with Isabelle, so
HOL4 is what I'm using. I've seen examples of Isar which look nice, but I'm
skeptical about its scalability. You make a good point, though.

> It's not at all a good point. There are theorem provers that rely on the
> correctness of just a very small kernel. Everything on top (libraries) is
> then guaranteed to follow the rules imposed by the kernel.

Yes, of course - but the richness of a theorem prover comes from the logics
you define on top of it.

If the library implementation of a particularly complex structure (defined
externally to the logic, with no easy way to co-verify the definition)
contains a mistake that is not caught, any entailed inconsistencies are
transmitted to the theorems that rely on that data structure.

Sure, your theorems are correct in the logic - but they might not be useful. I
obviously picked the more engineering-based example, but that's merely due to
my being ignorant of most pure mathematics - there are probably analogues.

This is a growing problem for companies like Intel, who use a software model
to test their firmware before the hardware is available [1] - they want to
verify that the operations are identical.

[1] - [http://www.cs.utexas.edu/users/hunt/FMCAD/FMCAD13/papers/44-Formal-Covalidation-Low-Level-Interfaces.pdf](http://www.cs.utexas.edu/users/hunt/FMCAD/FMCAD13/papers/44-Formal-Covalidation-Low-Level-Interfaces.pdf)

~~~
madez
> For better or worse, we both know that's not required to be a successful
> mathematician. I remember reading a presentation from a guy advocating
> mechanised proof who realised that almost every important proof in his field
> had been later shown to be false.

Some professors I know have told me something along those lines about various
topics in mathematics. The more advanced a topic gets, the fewer people can
cope with the complexity. In the end it becomes so specialized that only a few
people in the world can, and are willing to, follow the progress. That
situation, combined with notorious human fallibility, puts mathematical
research in a very bad position.

It makes me sad - and, I have to admit, angry - that the professors I've met
don't recognize computer assistance as the solution. Besides that, I
personally had the most pleasure doing mathematics while being guarded by a
computer against my own stupidity[+].

> Yes, of course - but the richness of a theorem prover comes from the logics
> you define on top of it.

> If the library implementation of a particularly complex structure defined
> externally to the logic with no easy to coverify definition had a mistake in
> its definition which is not caught, any entailed inconsistencies are
> transmitted to the theorems that rely on this data structure.

> Sure, your theorems are correct in the logic - but they might not be useful.
> I obviously pick the more engineering based

[+] You raise a good point. The computer can't do all the work. Nevertheless,
it's a huge advance to only have to verify the definitions and theorem
statements instead of all of that combined with the proofs. It's like having
to remember the summary of a book instead of the whole book. Someday we will
hit a wall even when using computers, but that shouldn't stop us now.

> This is a growing problem for companies like Intel, who use a software model
> to test their firmwares before the hardware is available [1] - they want to
> verify that the operations are identical.

That's awesome! A nice problem to work on. Stories like this keep alive my
dream that there are interesting workplaces in this world.

------
mattxxx
This is a pretty sensational piece, but there's a fundamental truth in it:
what a proof is depends on an individual's ability to understand it. This is
just how all ideas work.

In the end, you can easily take a proof and make it empirical, and this
(while not being anything like a proof) can be more convincing, because it's
tested not by your understanding but by your ability to observe... this is
also fundamental to proof by contradiction, which convinces you simply by
exhibiting a situation that doesn't work (and thus disproves the assumption).

I think there's a lot of criticism floating around here, but there's something
fundamentally right about a proof being about communication. The truth the
proof reveals is pretty much objective though.

------
wyager
Absolutely false. We have a large number of machine-checked proofs that are
100% "filled in".

As a trivial example that a fair number of CS people might have been exposed
to: every time someone compiles an Agda program, the compiler does a complete
formal proof that the program is total and terminating.
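
For instance, a definition like the following is only accepted because the
termination checker can see that the recursion is structural (a minimal
sketch):

  data Nat : Set where
    zero : Nat
    suc  : Nat → Nat

  -- accepted: the recursive call is on a structurally smaller argument
  _+_ : Nat → Nat → Nat
  zero  + n = n
  suc m + n = suc (m + n)

A definition whose recursion is not evidently decreasing is simply rejected.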

------
bitwarrior
"What kind of a proof? It's a proof. A proof is a proof. And when you have a
good proof, it's because it's proven." - Jean Chretien

[https://www.youtube.com/watch?v=aX6XMIldkRU](https://www.youtube.com/watch?v=aX6XMIldkRU)

------
debacle
It seems like the author is conflating the accuracy of the language of a proof
with the accuracy of the proof itself, which may be the same thing to some
extent, but overall to try and refute something like this:

> In particular, they believe proofs are fundamentally and exclusively about
> truth, and that they are either right or wrong.

...is just semantics. Arguing that a Euclidean proof is wrong because it makes
contextual assumptions about the reader's frame of reference is just pedantry.
Mathematics is much more rigorous in 2014 because the time of the average
mathematician is worth much less.

------
hyp0
Professional mathematics is more like English Literature than computer
programming: you need extensive background in the area to follow it; and you
can't just run it to see if it works.

~~~
madez
I don't agree. Did you ever try Isabelle/HOL? It lets you "run it" to see if
it works.

~~~
anonymousDan
How many professional mathematicians use Isabelle/HOL? Honest question!

~~~
yummyfajitas
Close to none. Part of that is a social problem. People who do computer
assisted proofs are not particularly well regarded, in my experience.

Another big issue with those tools is that they don't work as well for
analysis as they do for algebraic geometry and graph theory. This might be in
part due to the fact that most of the folks constructing them are logicians,
not analysts.

------
mjklin
> with a once comforting illusion of crisp, clean certainty rapidly giving way
> to a panicked feeling of sinking into shifting quicksand.

Sounds like he is describing the intellectual development of Bertrand Russell,
who spent ten years proving that 1+1=2. Legend has it that his huge book on
the subject (Principia Mathematica) has never been read cover to cover.

With such precedents as Russell and Wittgenstein it's ridiculous for any
modern person to expect "crisp, clean certainty" in anything involving
language.

~~~
yummyfajitas
He didn't spend 10 years proving 1+1=2. He spent 10 years constructing an
axiomatic system in which arithmetic and everything else could be proved.

~~~
mjklin
Which then got the rug pulled out from under it by Gödel.

------
avmich
You think you know when you learn, are more sure when you can write, even more
when you can teach, but certain when you can program. -- Alan Perlis

As they say, that is, until you do machine learning.

