
Will Computers Redefine the Roots of Math? - jonbaer
https://www.quantamagazine.org/20150519-will-computers-redefine-the-roots-of-math/
======
MichaelBurge
I've played around with Coq before, and it's really hard to work with
mathematics in it. If you start with a system of axioms in first-order logic
(like groups), it's pretty easy to prove individual statements. But once you
start wanting to quantify over all subgroups, you're in trouble.

I remember seeing a presentation where someone challenged the viewer to write
a simple proof of the infinitude of the primes. I recommend you try it. I
studied math in college, have an interest in formal proofs, and have written
Coq code, so surely I could bang out a simple proof like that. I think I got
stuck when I wanted to claim, "Given a natural number n, x <= n implies x = 0
or x = 1 or ... or x = n". It's possible that I have more experience now and
could complete that proof if I tried again, but this is really the kind of
thing they teach on the first day of an introduction to proofs course. I can't
see mathematicians using theorem provers until they're easier to work with.
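
For a flavor of the gap, here is a hedged sketch (using the modern `lia`
tactic from the standard library) of the closest directly statable version:
the n-ary disjunction "x = 0 or x = 1 or ... or x = n" can't even be written
down as a single proposition for a variable n, so you fall back to a two-case
bound and recurse:

    Require Import Lia.
    
    (* The n-ary disjunction can't be stated directly for a variable n;
       the usual workaround is a two-case bound, which linear arithmetic
       dispatches automatically: *)
    Lemma le_cases (n x : nat) : x <= n -> x < n \/ x = n.
    Proof. intros. lia. Qed.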

So I sympathize with and am rooting for Voevodsky here: There's no reason why
theorem provers have to be a pain to use - we just need better foundations,
better tools, and more experience using them.

~~~
yaantc
The typical way to prove that there are infinitely many primes is a reductio
ad absurdum: you assume there is a finite number of primes, call the biggest
one P, and then construct a number (such as the product of all the primes
plus one) that must itself be prime, or have a prime factor, bigger than P.
This is a contradiction, so you can conclude that there is no highest prime
and hence that there are infinitely many of them.

The thing here is that this uses the law of the excluded middle: a
proposition is either true or false, with no other possibility. That's the
basis of a proof by reductio ad absurdum. But that's not part of Coq (or of
any proof assistant based on constructive logic) by default. If one is not
aware of this, such a proof will indeed seem impossible in Coq. That said,
Coq can work with the law of the excluded middle (and so support classical
logic), but you have to explicitly assert it and use it.
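
For instance (a sketch, untested here): the standard library ships the axiom,
and you opt in by importing it:

    (* Classical reasoning is opt-in; this import asserts the law of the
       excluded middle as an axiom: *)
    Require Import Classical.
    
    Check classic.  (* classic : forall P : Prop, P \/ ~ P *)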

This being said, I haven't tried this proof in Coq ;) I just see this point
as something to take into account, and one that may trip up people who are
new to Coq and not aware of it.

~~~
Chinjut
Proving NOT P by showing a contradiction follows from P is perfectly fine in
intuitionistic logic (the sort used in Coq), and not dependent upon the law of
the excluded middle in that sense. What's not available to you is proving P by
showing that a contradiction follows from NOT P (rather, you end up with NOT
NOT P, which is not assumed to entail P simpliciter).
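
A sketch of the asymmetry in Coq (hedged, from memory):

    (* NOT P literally unfolds to (P -> False), so "assume P, derive a
       contradiction, conclude NOT P" is just an introduction rule: *)
    Lemma not_intro (P : Prop) : (P -> False) -> ~ P.
    Proof. intro H. exact H. Qed.
    
    (* The other direction, double-negation elimination,
         forall P : Prop, ~ ~ P -> P
       is exactly what intuitionistic logic withholds; it is not
       provable without a classical axiom. *)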

~~~
digama
I believe that in the context of the infinitude of primes, this is the
difference between "NOT (finitely many primes)" and "infinitely many primes",
where "infinitely many" means that the primes are in bijection with the
natural numbers (and of course finitely many means in bijection with some
natural number). A proof by contradiction will only give the weaker first
statement in intuitionistic logic.

One explanation for why many people say that Euclid's proof is not a proof by
contradiction is that you can actually get the second statement with only a
little more care (i.e., any prime divisor of n! + 1 must be greater than n).
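
In Coq-style notation (a sketch; `prime` stands for whatever primality
predicate is in play), the stronger statement is the one that names a witness
above every bound:

    (* "Infinitely many primes", stated constructively: every bound n is
       exceeded by some prime, e.g. a prime divisor of n! + 1. *)
    Definition infinitely_many_primes : Prop :=
      forall n : nat, exists p : nat, prime p /\ n < p.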

------
kmicklas
It's worth noting that Univalent Foundations (and type theory in general)
really has nothing to do with computers. You can do it on paper, like
traditional set-theory-based math, and you can formalize set theory on a
computer.

The reason type theory is primarily done using proof assistants is that it is
a constructive theory; in other words, you can actually compute with the
functions you define. This is something you can't do with set theory, which
is why doing type theory on a computer lets the computer do far more for you
than it easily could with set theory.
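
For example (a minimal Coq sketch), a function defined in type theory simply
runs, with no separate interpreter:

    (* Definitions compute by reduction inside the theory itself: *)
    Fixpoint double (n : nat) : nat :=
      match n with
      | O => O
      | S m => S (S (double m))
      end.
    
    Compute double 3.  (* = 6 : nat *)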

------
MaysonL
See [https://github.com/UniMath/UniMath](https://github.com/UniMath/UniMath)
for more!

------
agumonkey
I have no idea how important HoTT is, but I feel that FP and abstract algebra
are merging into one.

ps: I was just watching this lambdajam talk, a tiny introduction to HoTT
[https://www.youtube.com/watch?v=OupcXmLER7I](https://www.youtube.com/watch?v=OupcXmLER7I)

------
AnimalMuppet
Two comments. First, it's interesting to me that they're saying that category
theory really isn't adequate. Given that Haskell is based on category theory,
is there somewhere further to go in programming languages?

Second, about proof-by-Coq: If there's ever a bug in Coq, they're going to
have to re-run all the proofs that have been done this way, and see which (if
any) of them are actually invalid. (It still may be better than proof by
convincing humans, though...)

~~~
eli_gottlieb
> Two comments. First, it's interesting to me that they're saying that
> category theory really isn't adequate. Given that Haskell is based on
> category theory, is there somewhere further to go in programming languages?

Haskell isn't really "based on" category theory. There are just a lot of
typeclasses that implement category-theoretic machinery, which gives you a
neat way to quantify over types and over what you can do with the resulting
data.

> Second, about proof-by-Coq: If there's ever a bug in Coq, they're going to
> have to re-run all the proofs that have been done this way, and see which
> (if any) of them are actually invalid. (It still may be better than proof
> by convincing humans, though...)

It's been done before, when, IIRC, a VM bug appeared in Coq that allowed
certain ill-formed programs to prove False.

~~~
tjradcliffe
Someone in another comment also mentioned the possibility of "proof
libraries" that could be used to assemble more complex proofs from simpler
ones. My first thought was, "I wonder what'll happen when someone finds a
subtle bug in one of the most common proof libraries?" It's bound to happen,
after all. Some false theorems will have been "proven" true, and vice versa.
A few people 90% of the way through their PhD will have years of their lives
swept away.

~~~
yaantc
Check out Dedukti:
[http://dedukti.gforge.inria.fr/](http://dedukti.gforge.inria.fr/)

It's a different proof assistant that can convert proofs from Coq (and other
proof assistants) into its own language and check them. It's not a guarantee,
of course; both Coq and Dedukti could have a similar bug. But it certainly
helps to have independent implementations able to check the same proofs.

~~~
axorb
We're talking about proofs here, though, and you aren't allowed to prove
something by "throwing enough input at it". Isn't that like saying that unit
tests are proofs of the correctness of a program?

~~~
darkmighty
Well, perhaps you could actually prove the correctness of _Coq_ itself.

In any case, it's just a matter of reliability in the traditional sense:
we're going to end up with a small set of statements that have to be verified
by humans, upon which we can rest everything else. Those statements will need
enough people looking at them to reach a level of certainty we're comfortable
with.

I believe the risk of a major flaw going unnoticed is astronomically low, and
the good thing is this certainty _propagates_ all the way to modern proofs if
we take care to go through formal verification.

------
ColinDabritz
The article is delightful, but regarding the title "Will Computers Redefine
the Roots of Math?"

No.

Math is about human understanding, elegance and beauty.

Computers may be useful tools, and perhaps they will be of even greater
assistance in mathematics moving forward, but Alan Turing and others have
shown us they can't do it all. Math is a fundamentally human endeavor. We will
have to transcend modern computers with something like strong AI before that
can change.

~~~
wamatt
This line of reasoning feels similar to that which says AGI will never be
possible.

It's also a bit unclear to me how one can justifiably hold this position with
such confidence.

~~~
ColinDabritz
Sorry, I didn't mean to imply that strong AI isn't possible; I believe it is
very possible. I was reacting strongly to the headline specifically, as it
was invoking a "magic computers" image that I dislike.

I only meant that the computers of today are tools we use to explore math, but
that the work today is very human. The "computer" that transcends that will
look very different. What Alan Turing showed us was that you cannot turn all
of "mathematics" into an algorithm. The beauty and elegance and human
experience of the exploration of mathematics are important. Some day we will
probably be able to teach a machine those traits.

The title reads more like someone wrote an app that solved math, which is
silly. The article itself is fortunately much better.

~~~
kefka
I think an interesting question here is:

How do we humans identify and solve the halting problem?

~~~
ColinDabritz
In the general case, we can't of course, as Turing proved. How we untangle
special cases and categories of algorithms around this is part of the wonder
of mathematics for me.

~~~
eli_gottlieb
> In the general case, we can't of course, as Turing proved.

Not quite. Turing proved that no single algorithm can solve the halting
problem for all machines; it has always been trivial to answer the halting
question for many machines we care about, often because they are specifically
designed to always halt.
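
Coq itself is an everyday example (a hedged sketch): its termination checker
only admits recursion it can see decreasing, so every accepted program halts
by construction:

    (* Structural recursion on n is accepted, and guaranteed to halt: *)
    Fixpoint sum_to (n : nat) : nat :=
      match n with
      | O => 0
      | S m => n + sum_to m
      end.
    
    (* Something like "Fixpoint f (n : nat) := f (n + 1)" is rejected at
       definition time, which keeps the halting question trivially
       answerable for this whole class of machines. *)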

Chaitin elaborated this into a detailed theory of quantified incomputability,
showing (roughly) that any N-bit axiomatic system could decide the halting
problem only for machines composed of strictly less than N bits of information
(Kolmogorov complexity).

So the question is either, depending on your philosophy of mathematics, how we
locate ontologically special cases where halting proofs are possible, or how
we obtain the information we put into our axioms that allows us to write
halting proofs for increasingly complex programs.

~~~
kefka
I'm sorry that my understanding of set and computation theory is lacking.
Please excuse my ignorance while I study up on the topic further.

But I do believe the original commenter misunderstood my question. Why is it
that we humans, armed with knowledge of how a computer works, can often tell
whether a program halts? Why do we not get stuck in a loop when studying a
potentially non-halting program?

What allows us to make the intuitive leap past that? And to follow up, what
are the implications for the "brains are 10^10 200 Hz computers in parallel"
hypothesis?

~~~
eli_gottlieb
> But I do believe the original commenter misunderstood my question. Why is
> it that we humans, armed with knowledge of how a computer works, can often
> tell whether a program halts? Why do we not get stuck in a loop when
> studying a potentially non-halting program?

Well, given my current understanding, I'd say it's because human beings reason
inductively. We pick up new axioms by observing our environment, which
includes things like equations and programs. Since we reason inductively,
we're only bound by incompleteness theorems for phenomena too random/complex
_for our current understanding_. We "catch up" eventually to be able to solve
computability problems, at least to some degree.

------
dzdt
The big change will come more from natural language processing than from
theorem proving. There is already a huge corpus of carefully formalized
mathematics that human mathematicians have been developing for hundreds of
years. The only issue is that "carefully formalized" here means by human
standards and in natural human languages. Sometime in the not-too-distant
future someone will make a natural-language AI sophisticated enough to use
this corpus as scaffolding for "assisted" theorem proving, where the prover
being assisted is a program generating completely formalized proofs and the
assistance comes from the natural-language AI parsing textbooks and journal
articles. Once such a thing exists and is good enough to parse and formally
verify standard graduate textbooks, there will quickly be a shift toward
expecting all research mathematics to be written in a way that is acceptable
to humans AND verifiable by such provers. Mathematicians would love the
verified correctness, but the main thrust will remain that mathematics is by
humans and for a human audience.

~~~
ende
This assumes that advanced mathematics is currently written by humans for
humans. Many proofs can be a trudge for even highly trained humans. The field
should really be working on making formalisms more accessible. It would be
fantastic if what you describe came about by first making maths more
accessible/intuitive through more interactive and graphical representations.
Maybe a lot of these 'algorithm visualizations' that get posted to HN are the
first step.

------
whitten
Does anyone have experience with both Coq and Mizar
([http://wiki.mizar.org/bin/view/Mizar](http://wiki.mizar.org/bin/view/Mizar))
and could answer a question about the foundations of each?

If Coq can be used with Vladimir Voevodsky's theory of univalent foundations,
can Mizar be based on them as well?

~~~
digama
I don't believe you could vary Mizar to the extent that it mimics HoTT. Many
of the basic properties that follow from the axioms Mizar uses (roughly ZFC
plus the Tarski-Grothendieck axiom) are built into the internals of the Mizar
prover in order to increase its proving capability, at the expense of tying
it to its axiom set.

By contrast I can tell you that Metamath
([http://us.metamath.org/mpegif/mmset.html](http://us.metamath.org/mpegif/mmset.html)),
whose main library is also built on ZFC, does not at the most basic level
assume any axioms at all (not even first order logic), so it is possible to do
type theory or HoTT with no change to the verifier.
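
For comparison, Coq also can't have its core logic swapped out freely, but it
does track extra axioms explicitly; a hedged sketch:

    (* Print Assumptions reports exactly which axioms a proof leans on: *)
    Require Import Classical.
    
    Lemma dne (P : Prop) : ~ ~ P -> P.
    Proof. intro H. destruct (classic P); tauto. Qed.
    
    Print Assumptions dne.
    (* Axioms: classic : forall P : Prop, P \/ ~ P *)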

------
brudgers
I think so, because the fact that mathematicians are using Coq and other
tools means that it has already begun. But the big, important changes aren't
what happens at the PhD level. It's what happens down at the ordinary level
of arithmetic and the sorts of notation that are useful for doing ordinary
computations. Public schools should be using J [1] both as a way of expanding
notation and as a practical way of leveraging the computer as a mathematical
instrument.

Infix operations and order of precedence make sense when we're working with
pencils... well, OK, order of precedence beyond nesting with parentheses
doesn't make sense then either... but why constrain a computer notation by
rules that happen to have grown up around manual processes?

[1] [http://jsoftware.com](http://jsoftware.com)

~~~
tempodox
I took a quick look at J and found it rather obscure. I can only guess that
the computation model is something procedural. A language so far off the
beaten path should have better introductory documentation. Can it do more
than shuffle character arrays?

~~~
brudgers
This is a good resource for onboarding with J:

[http://www.jsoftware.com/jwiki/Books#Math_for_the_Layman](http://www.jsoftware.com/jwiki/Books#Math_for_the_Layman)

J is as easy to pick up as using an ordinary calculator. Reading J can be as
hard as reading mathematical notation. It is obscure in the same way as any
other mathematical notation. What does:

    x

signify in a third grade classroom? J is off the beaten path of programming
languages because it is designed to be immediately useful for mathematics,
and not just the higher-level PhD stuff, but ordinary tasks that 3rd graders
perform.

It looks unusual because it starts from the point of view of notation and is
compact in exactly the same way that

    *

is compact in C, Python, VBA, etc. It is directly applicable in a third grade
classroom because:

    3 * 4
    3 x 4
    3 times 4

are isomorphic notations. What J does is extend the set of notations that are
accessible from the Latin character set to include compact notations for
counting, summing, and all sorts of other things. But it still provides the
sort of operations that are useful to young students and which they commonly
perform on a calculator.

~~~
tempodox
Your answer told me more about J than the web pages I've seen. If you put it
that way, J might even whet my curiosity.

~~~
brudgers
Ken Iverson won a Turing Award largely on the back of APL. After 30 years of
field use, he developed J in the late 1980s, in part to address APL's
shortcomings. Then for two decades he sold J as a commercial product. More
recently it was GPL'd.

 _Math for the Layman_ is only the tip of the iceberg for what lives on
jsoftware.com. And it all tends to be about that level of quality.

