
In Mathematics, Mistakes Aren’t What They Used to Be - tosh
http://www.nautil.us/issue/24/error/in-mathematics-mistakes-arent-what-they-used-to-be
======
a3_nm
I do not find that the article insists strongly enough on the fact that
computer-aided _verification_ of proofs is not, in any way, a threat to human
creativity in mathematics.

You can just understand proof assistants as an effort to design a language for
mathematical proofs, and a procedure to verify them mechanically. (Spelled out
like this, it sounds a lot like Hilbert's program -- we have just understood
since then that you cannot hope to have a "perfect" proof assistant.) The
availability of computers means that you can actually implement the
verification process and run it, but if you do not want to involve computers
in the process (because you do not "trust" them, or whatever), you could
always check such proofs by hand, in principle.

In a sense, a proof designed for a computer assistant is one that can be
verified without any intelligence required from the reader.
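
As a tiny illustration of what such a mechanically checkable proof looks like (a sketch in Lean 4 syntax, purely for concreteness; the thread below mostly discusses Coq):

    -- Each step is justified by a rule that the proof checker validates
    -- mechanically; no insight is needed to *verify* the proof, only to
    -- *find* it. `Nat.add_comm` is a lemma from Lean's standard library.
    example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b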

~~~
nomorejulia
The recent financial crisis shows that even those who claim to understand the
math do not, let alone make progress on probability (hmm, what are the odds
all this could go wrong? Nah).

~~~
AngrySkillzz
More accurately, you might say that the people who understand the math are
somewhat segregated from the people who make the decisions.

------
weinzierl
I wonder which proof assistant he is using. His presentation[1] mentions Coq,
but Wikipedia[2] lists several others. Is Coq kind of a standard tool, or are
there others that are widely used?

[1]
[https://www.math.ias.edu/vladimir/sites/math.ias.edu.vladimi...](https://www.math.ias.edu/vladimir/sites/math.ias.edu.vladimir/files/2014_IAS.pdf)

[2]
[https://en.wikipedia.org/wiki/Proof_assistant](https://en.wikipedia.org/wiki/Proof_assistant)

~~~
vilhelm_s
Yes, Voevodsky works in Coq. Some other people in the "univalent foundations"
project that Voevodsky started have also done some developments in Agda.

Apart from Voevodsky's work, there are several other big math formalization
projects. Perhaps the biggest ones are the formalization of the Feit-Thompson
theorem (done by Georges Gonthier and his collaborators, in Coq), and the
proof of the Kepler conjecture (done by Thomas Hales and his collaborators, in
Isabelle/HOL and HOL Light).

~~~
platz
Does using Coq mean that the math he is seeking to prove is based on /
formulated in a constructivist style
([http://en.wikipedia.org/wiki/Constructivism_%28mathematics%2...](http://en.wikipedia.org/wiki/Constructivism_%28mathematics%29)),
or only that the verification program itself requires such constructivist
properties?

~~~
vilhelm_s
I don't think Voevodsky himself has any constructivist leanings. The
univalent foundations developments make use of various axioms that have no
known constructive meaning (e.g. the law of the excluded middle, and the
famous univalence axiom itself), and which are motivated because there are
known models in which they hold.

On the other hand, many of his collaborators are interested in the project
precisely because it uses Coq and they like constructive mathematics. So the
HoTT book only uses excluded middle when necessary (and carefully tracks
where), and developing a computational interpretation of univalence is
considered to be an important task.

Edit to add: So why would someone who doesn't care about constructivism want
to use Coq? Because it's based around typed objects. Apparently, homotopy
theorists had for a long time been aware that instead of basing mathematics on
sets, and then constructing equivalence classes on top of those, you could
take homotopy types as primitive -- something like basing mathematics on blobs
rather than points. This was all considered theoretical and impractical--until
someone noticed that the logic provided by Intensional Type Theory (as
implemented in e.g. Coq) can be considered to do exactly this!
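
For a concrete taste of that reading (a minimal sketch in Lean 4 syntax, not
Voevodsky's actual Coq development): in intensional type theory the identity
type `a = b` is itself a type, and its inhabitants compose and invert like
paths.

    -- A proof of equality can be inverted, like reversing a path...
    example {A : Type} {a b : A} (p : a = b) : b = a :=
      p.symm
    -- ...and two proofs can be composed, like concatenating paths.
    example {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
      p.trans q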

------
chubot
I'm surprised they didn't mention Leslie Lamport. He apparently started
writing "structured proofs" decades ago, and then developed a system, TLA+, to
do the same thing on a computer.

I am hazy on the details, but I think TLA+ proofs are just checked by brute
force, and not with an actual theorem prover? It's for distributed algorithms
with a large but finite number of states.

So it's not exactly the same as what's discussed in the article, but it's
definitely related. I would appreciate if anyone has any clarification on this
point.

[https://scholar.google.com/scholar?cluster=14553320809011812...](https://scholar.google.com/scholar?cluster=14553320809011812377&hl=en&as_sdt=0,5&sciodt=0,5)

"A method of writing proofs is described that makes it harder to _prove things
that are not true_. The method, based on hierarchical structuring, is simple
and practical. The author's twenty years of experience writing such proofs is
discussed."

[http://research.microsoft.com/en-us/um/people/lamport/pubs/p...](http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#wired)

~~~
nmrm2
TLA+ is better described as a verification tool than as a proof assistant, and
is better suited to analysis of software systems than to analysis of the sort
of mathematics in which Voevodsky is involved. The most striking
characteristics that make Coq and other tools based on dependent type theories
interesting to Voevodsky are absent from systems like TLA+.

TLA+ is sufficiently different from Coq -- and in ways essential to
Voevodsky's intentions -- that discussion of TLA+ in this article would be out
of place.

In addition, there is a huge community of people who have worked with theorem
provers for the past 20 years or more; it's not clear to me why the author
would single out TLA+/Lamport over any of these other systems/researchers.

~~~
chubot
Sure, but the motivation is the same: to avoid proving things that aren't
true. Lamport and Voevodsky are in different fields, and use different tools,
but that just means that comparing their journeys is all the more interesting.

If it were a technical journal, maybe they wouldn't be related. But they're
definitely related for a popular article.

~~~
nmrm2
_> If it were a technical journal, maybe they wouldn't be related. But they're
definitely related for a popular article._

Hence my last paragraph -- they're only related in the sense that the entire
formal methods/verification/theorem proving community is related. Given the
hundreds of systems/researchers that would make for interesting additions to
the story, it's not surprising that TLA+ isn't mentioned.

(Incidentally, the author's chosen representative for "other people are doing
this too" -- Automath -- is probably just as good a choice as TLA+.)

------
mathgenius
It's interesting that computer science also struggles with a similar issue:
program verification. At least with a program, though, you can run it and have
it check itself to some extent (e.g. with invariants). Voevodsky is talking
about a whole other level of difficulty, akin to writing a massive program but
not being able to run it.

~~~
tel
You can run it, it's just that the operation of the program is not
interesting. Voevodsky's system is proof-relevant so, at least, the programs
themselves are interesting, but other models exist where the programs aren't
even interesting. Their mere existence is all that matters.

~~~
mathgenius
Yes, once you formalize (in computer code) a bit of mathematics you can run
it, but I'm pointing out that you don't need to formalize (the correctness of)
a program in order to run it. I guess it is not much of a distinction, because
a (non-formalized) mathematical argument can also carry "invariants" (like
loop invariants in a for-loop) that somehow cross-check its correctness.
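
As a small sketch of that kind of runtime cross-checking (plain Haskell, a
hypothetical example): the invariant is asserted while the program runs,
without any formal proof that it always holds.

    import Control.Exception (assert)

    -- Invariant checked at each step: the accumulator equals the sum of
    -- the elements consumed so far. Running the program exercises the
    -- invariant on concrete inputs; it does not prove it in general.
    sumChecked :: [Int] -> Int
    sumChecked = go 0 []
      where
        go acc seen [] = assert (acc == sum seen) acc
        go acc seen (x : xs) =
          assert (acc == sum seen) (go (acc + x) (x : seen) xs)

    main :: IO ()
    main = print (sumChecked [1, 2, 3])  -- prints 6, invariant checked per step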

~~~
nmrm2
You also don't have to prove a theorem in order to use it.

To give an elementary example, you can conjecture that a given mapping is a
something-morphism between two groups and use that morphism to carry out a
proof about one group in terms of a known result about the other group.

In fact, "conjecture lemma; verify main result is true using the conjectured
lemma; go back an prove lemma" is a bog standard problem solving technique in
Mathematics...

 _edit: Oh, I see what you mean is a bit more nuanced than I understood at
first. But this is still true -- often you can probably check that the mapping
you've written down is a something-morphism locally, for the elements you're
working with atm. I'm stretching the example now, but you get my point? I
imagine this is probably not unheard of in research-grade mathematics -- e.g.
we don't have a general proof of "really cool conjecture", so we check it in
special cases wherever we think it might be useful to have... idk._

------
tosh
> Broadly speaking, the argument against the use of computers is a lament for
> the loss of the human element: intuition and understanding. Acknowledging
> something as true because the computer says so is not the same as knowing
> why it is true. One might reckon it’s analogous to relying on an Internet
> mash-up of reviews about the mysteries of Venice, rather than going there
> and splurging on the water taxi and experiencing the magic for oneself. But
> then again, the same conundrum arises in building upon previous results
> rather than working from scratch.

------
vitriol83
The article seems to be based on a presentation by Voevodsky himself:

[http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundation...](http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf)

~~~
igravious
This is the same Voevodsky that came up with the univalence axiom[1] that is
foundational in HoTT[2] (homotopy type theory)?

What's not to love about:

    The univalence axiom states:

        (A = B) ≃ (A ≃ B)

    "In other words, identity is equivalent to equivalence. In particular, one may say that 'equivalent types are identical'."

[1]
[https://en.wikipedia.org/wiki/Homotopy_type_theory#Univalenc...](https://en.wikipedia.org/wiki/Homotopy_type_theory#Univalence_axiom)

[2] [http://homotopytypetheory.org/book/](http://homotopytypetheory.org/book/)
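
Stated a bit more carefully (this is the standard formulation, not a quote
from the article or the book): there is a canonical map turning an identity
between types into an equivalence, and univalence says that this map is itself
an equivalence:

    % idtoeqv sends a proof of A = B to the equivalence it induces;
    % the univalence axiom ua asserts that idtoeqv is itself an equivalence.
    \[ \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B),
       \qquad \mathsf{ua} : \mathsf{isequiv}(\mathsf{idtoeqv}) \]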

~~~
mathgenius
Isn't this just like memoization? So somehow the axiom is saying that
memoization is built into the language. (Not sure if I'm making any sense...)

~~~
mathgenius
Well, it was a serious question. In Python, for instance, you can create a
string a = "foo" and another b = "foo" such that a == b, but it is not
guaranteed that a is b (object identity is an implementation detail). It seems
like this univalence axiom is saying that this does not happen, and so a is b.

I wish I knew enough about functional languages to ask the same question
there.
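
For what it's worth, here is the same experiment sketched in Haskell (a
hypothetical example): ordinary values have no observable identity, so only
the structural `==` question can even be asked.

    main :: IO ()
    main = do
      let a = "foo"
          b = "foo"
      -- (==) compares values structurally. Haskell provides no analogue
      -- of Python's `is` for ordinary values, so "a is b" is not a
      -- question the language lets you ask.
      print (a == b)  -- True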

~~~
thoughtpolice
Well, it's saying more than that. It states that the notions of 'equality'
and 'isomorphism' are basically the same thing in this new language (NB: I'm
not really a mathematician).

Equality is already a very overloaded term in mathematics, but roughly means
"these are the same thing" -- X^2 is 'equal to' X * X, for example, or in most
cases we think simply of the identity relation on some set or whatever.
Isomorphism states that two things can be 'transformed' into each other
through the existence of a function, along with its inverse.

For example, the sets {A, B, C} and {1, 2, 3} are not equal, but isomorphic
because you can define a bijection between them: A = 1, B = 2, C = 3. In some
sense, equality is 'more rigid' than an isomorphism, obviously. Also, you have
to choose the bijection explicitly here because in this case more than one
valid bijection exists, which is another aspect of an isomorphism's 'identity'.

The univalence axiom states that in this new constructive mathematical
framework, 'equality' and 'isomorphism' are exactly the same thing.

I suppose in some sense you can view concepts like "object identity" and
"object state" in some languages like Java in the same bucket, because while
two Java objects may be "equal in terms of state" (members all the same) they
might not be "equal in terms of identity" (because they point to different
heap objects).

But this kind of distinction mostly doesn't make sense in functional
programming languages because you often throw the notion of 'value identity'
out the window, so 'equality' in these languages is defined solely as
mathematical equality - that we can 'normalize' two expressions to the same
final form. 'Isomorphism' then normally just has an interpretation like a
function `a -> b` together with its inverse `b -> a`, so even in these languages, we
aren't quite talking about the same things.

HoTT is much more radical than all of this when you look at it together of
course - because it's more like saying if you have two proofs that x = y, you
can think of 'x' and 'y' as points in space, and proof of equality is a path
from x to y in this space - so two proofs of equality are just two distinct
paths.

The 'space' on which these points exist is actually not a set but a 'type',
which is like a set, but also includes propositions. So now the set 'X'
and propositions about X exist together. Proving some proposition requires
'constructing' a value of that proposition's type, using the elements and
propositions already existing. In a language like Haskell, to implement a
function of type 'f :: A -> B' requires being able to 'construct' a B given a
value of type A - to do this, in effect, is a proof that 'A -> B' does in fact
exist, because you built it.
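
A minimal sketch of that propositions-as-types reading in plain Haskell
(illustrative only; the names are made up):

    -- Implementing the type *is* constructing the proof.
    -- "A implies A": the identity function is the proof.
    aImpliesA :: a -> a
    aImpliesA x = x

    -- "(A and B) implies A": projecting out of the pair is the proof.
    andElimLeft :: (a, b) -> a
    andElimLeft (x, _) = x

    main :: IO ()
    main = print (andElimLeft (1 :: Int, "unused"))  -- prints 1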

Also, under this interpretation of 'space', which is topological, equivalent
paths give rise to notions of 'homotopy' (saying a path A can be 'morphed
into' a path B), so if you have two 'paths' representing the proofs of x = y,
these 'paths' admit a homotopy between them. And furthermore you can have
notions of homotopies between homotopies, etc etc. So things get very 'higher
order'. Types can also be viewed as an 'infinity groupoid', which is a thing
that not only has elements and isomorphisms between elements, but isomorphisms
between isomorphisms, and isomorphisms between _those_, infinitely. So if you
squint you can see how this infinity-groupoid notion of 'higher order'
isomorphisms between other isomorphisms, and higher order homotopies, are all
very closely linked. It's all very strange and unifying and delightful.

That's an extremely handwavy explanation that might be pretty fluffy, but it
might help.

~~~
harryjo
[http://www.math.harvard.edu/~mazur/preprints/when_is_one.pdf](http://www.math.harvard.edu/~mazur/preprints/when_is_one.pdf)

"When is one thing equal to some other thing", by Mazur

------
deanmen
The Oxford dictionary defines "intuition" as "immediate apprehension by the
mind, without the need for reasoning".

When people emphasise intuition in mathematics it suggests that

(i) that they have never tried to teach mathematics formally and explicitly
without appeal to intuition —if they had, it would have been a most refreshing
experience for them and for those of their students that were sufficiently
well-educated to appreciate simplification

(ii) that they have never taken the trouble to learn how to let the rules of
the formal game guide their “writing of papers” —if they had, they would have
discovered how to do mathematics way beyond their former powers.

EW Dijkstra

~~~
Confusion
If formally precise reasoning were all there is to it, computers could do
all math for us. However, even in that formal-if-anything-is area, they fail.
Computers can't solve instances of the Halting problem or the Tiling problem
whose solutions are immediately _intuitively_ clear to a human. Those solutions
can also be immediately verified by fellow humans, via the informal
communication method of 'speech'. The reasoning involved can't be verified by
computers.
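
To give a concrete flavor (a hypothetical instance, not taken from the linked
overview): a human sees at a glance that the Haskell search below never halts,
since no natural number satisfies n + 1 <= n, whereas a mechanical termination
checker first has to prove that fact about the predicate.

    -- Compiles and runs, but never terminates: the list comprehension
    -- searches forever for a counterexample that cannot exist.
    searchCounterexample :: Integer
    searchCounterexample = head [n | n <- [0 ..], n + 1 <= n]

    main :: IO ()
    main = print searchCounterexample  -- never prints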

A nice overview was given by Peter of Conscious Entities recently:
[http://www.consciousentities.com/?p=1918](http://www.consciousentities.com/?p=1918).

~~~
dons
Actually computers can solve many instances of the halting problem.
[http://en.wikipedia.org/wiki/Microsoft_Terminator](http://en.wikipedia.org/wiki/Microsoft_Terminator)

~~~
Confusion
Yes. The point is that there are many instances that humans can solve but
computers can't. My sentence was not meant to imply that computers cannot
solve any instances of the halting problem at all.

------
mathgenius
Conway is listed as being negative about computer-verified proofs, but he is
well known for trashing set theory (or at least strongly protesting the
shackles of set theory). So I wonder what he thinks of this new univalent
foundation.

~~~
auggierose
I found a mistake in one of his proofs in his book "On Numbers and Games"
while verifying it with a computer. It was only a mistake in the proof, that
is, the proof was easy to correct and the result was still true; but it shows
that no one is above mistakes. So using a computer can certainly drastically
reduce the probability of an error.

~~~
bumbledraven
Good job. What proof, may I ask? What was the mistake, and what tool did you
use? Is there a paper on this I can read?

~~~
auggierose
I was using Isabelle. There is indeed a paper about this:
[http://link.springer.com/chapter/10.1007%2F11921240_19](http://link.springer.com/chapter/10.1007%2F11921240_19),
and the flaw is described on page 284.

------
mekaj
I wonder if more computer-aided proofs will lead mathematicians to discover
serious conflicts in their ontologies. Critical parts of mathematics, and even
the way we talk about it, may eventually need to be reworked.

~~~
xamuel
Very doubtful.

Within the formalized version of mathematics, some sort of namespace mechanism
will probably be necessary to deal with terminology collisions.

And certain things commonly thought of as one notion might need to be split up
into multiple notions (example: in some contexts, "function" means "set of
pairs satisfying the vertical line test"; in other contexts, "function" means
"set of pairs satisfying the vertical line test, along with a designated
'codomain'").

But at most, this amounts to changing the wallpaper. The structure itself is
sound unless something really shocking happens, like a proof that PA is
inconsistent. Such a shocking event needn't hinge on computer-aided proofs
(although computer-aided proofs would surely help to convince mathematicians
who would otherwise assume such a startling result must be mistaken).

~~~
madez
I also doubt it.

The critical parts of mathematics are the foundations and basics. These have
been studied over and over again by many smart people. I think that if there
were something wrong, it would have been found by now. If you are familiar
with mathematics you will probably agree with this assessment.

And, in the unlikely case that there is an issue, it will be worked around, so
no harm is done to the rest of mathematics. Luckily, in mathematics, we make
the world (axioms and definitions) as we wish it to be. There is a famous
quote I don't recall right now about mathematics now being in the Garden of
Eden, which it shall never be forced to leave again.

However, non-critical parts of mathematics, like advanced research, are often
plagued by mistakes.

Computer-verified proofs are the necessary future of mathematics.

One little remark: people often assign more attributes to a function than
just the mapping tuples themselves and, optionally, the codomain. Often the
arithmetic formulation of the mapping is considered part of the function,
because we want to differentiate based on the way the function is written.

