
The Future of Mathematics? [pdf] - mathgenius
http://wwwf.imperial.ac.uk/~buzzard/one_off_lectures/msr.pdf
======
ocfnash
I've been following Buzzard's evangelism of proof assistants for some time.

A few months ago I decided to try formalising some old mathematics olympiad
problems in Coq. Part of my motivation was to get a sense for how much work
would be involved in proving something slightly non-trivial but still very
elementary. I managed it, but it was _a lot_ more work than expected (results
here: [https://github.com/ocfnash/imo-coq](https://github.com/ocfnash/imo-
coq)).

Partly based on this exercise, I think that while mathematics may well go the
way of chess, so to speak, that day is a lot more distant than the ten years
mentioned in this article.

I strongly support, very much hope, and even expect that proof assistants
will become mainstream in mathematics within a generation or so, but I think
it is impossible to accurately guess the exact role they will play.

~~~
meuk
I like to think that I'm good at this kind of combinatorics question, but I
can't wrap my head around this one. I can follow the construction of the
matrix, but I don't understand how you came up with the formula.

~~~
ocfnash
I presume you've been looking at my presentation here
[http://olivernash.org/2019/07/06/coq-
imo/index.html](http://olivernash.org/2019/07/06/coq-imo/index.html) which I
confess is quite concise; apologies if it is too terse.

Reusing the notation of my post, here's how the formula arises.

The argument based on counting cells that are either row/column good/bad
requires the following of the parameter k:

    
    
      1. k-1 < r/m
      2. k-1 < c/n
      3. k-1 < cr/(c(m-1) + r(n-1))
    

Since k is an integer, these are equivalent to:

    
    
      1. k <= ceil r m
      2. k <= ceil c n
      3. k <= ceil cr (c(m-1) + r(n-1))
    

And so it is sufficient to have:

    
    
      k <= min (ceil r m) (ceil c n) (ceil cr (c(m-1) + r(n-1)))
    

We can then rearrange this to the headline formula by noting:

    
    
      * ceil r m = ceil (cr) (cm)
      * ceil c n = ceil (cr) (nr)
    

And then simplifying by repeatedly using the fact that for any x, y, z:

    
    
      * min (ceil x y) (ceil x z) = ceil x (max y z)
    
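
Putting these together gives the headline formula (my paraphrase of the final
step; the min/max swap is just the fact above applied twice):

    
    
      min (ceil (cr) (cm)) (ceil (cr) (nr)) (ceil (cr) (c(m-1) + r(n-1)))
        = ceil (cr) (max (cm) (nr) (c(m-1) + r(n-1)))
    

so the sufficient condition collapses to a single ceiling.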

All this is of course in the Coq code. E.g., the last fact mentioned above is
here: [https://github.com/ocfnash/imo-
coq/blob/f6d2e8337fadf00583fd...](https://github.com/ocfnash/imo-
coq/blob/f6d2e8337fadf00583fd09f86faf9cba62a25677/q3_2001/inequalities.v#L50)

~~~
krapht
Related: [https://imo-grand-challenge.github.io/](https://imo-grand-
challenge.github.io/)

~~~
ocfnash
Cool!

------
ivan_ah
Could someone with knowledge of both Coq and Lean provide more
info/comparison? Are they based on the same foundation or are there
differences? Might it be possible to automatically convert proofs between the
two? What about "strategies" ?

I was able to find a basic example of the natural numbers and the operation
`plus` in both languages, and it seems like the structure is quite similar.

Coq:

    
    
       Inductive nat : Type :=
         | O : nat
         | S : nat → nat.        (* S stands for successor *)
    
       Fixpoint plus (n : nat) (m : nat) : nat :=
         match n with
           | O ⇒ m
           | S n' ⇒ S (plus n' m)
         end.
    

via [https://softwarefoundations.cis.upenn.edu/lf-
current/Basics....](https://softwarefoundations.cis.upenn.edu/lf-
current/Basics.html#lab30)

Lean:

    
    
       inductive nat : Type
       | zero : nat
       | succ : nat → nat
    
       def plus : nat → nat → nat
       | m nat.zero     := m
       | m (nat.succ n) := nat.succ (plus m n)
    
    

via
[https://leanprover.github.io/introduction_to_lean/](https://leanprover.github.io/introduction_to_lean/)

~~~
krapht
There are subtle differences. While ostensibly they are both based on the
calculus of inductive constructions, there are incompatible extensions to the
logic in each kernel.

The majority is convertible, though. If I had to make a programming analogy,
it would be like converting between C++ and D, or different Lisp dialects. The
difference is bigger than Python 3 vs Python 2, but less than F# vs OCaml.
Clearly possible, in a sense, and if you can read one language, you can read
the other, but automatic conversion is just out of reach due to edge cases.

------
solinent
The biggest problem in mathematics has never been proving theorems.
Mathematics was never focused on proof until the formalization of math became
popular approximately 100 years ago. Mathematics is used as a tool to explain
the world and to create the least-complex model which predicts the data.

So computers can aid us in verifying our work--but ultimately we aren't
interested in determining all the possible logical theorems. Instead, we're
only interested in finding the logically valid theorems which serve the above
purpose of explaining the patterns in the world concisely.

It's a subtle difference, but for computers to solve that problem requires for
them to have a certain understanding of humans and our goals with science.

I think that's more than 10 years out.

~~~
ncmncm
Mathematicians choose what to keep according to aesthetics. Mathematics can
only be defined as what mathematicians like.

For a long time, they didn't like irrationals, so anything involving those
didn't count as mathematics. Zero took a long time to be accepted, starting in
India. Negative solutions of quadratics were illegitimate until astonishingly
recent days. Complex numbers were accepted even more recently. Greek geometers
knew they lived on a sphere, but spherical geometry was too unpleasant to
contemplate until quite recently, when it turned out interesting theorems
could live there.

People worked outside these boundaries all along, but what they wrote didn't
catch on. Laplace's transforms were ignored and forgotten until they were
needed to shore up Heaviside's extremely practical D operator. Complex numbers
turned out to be needed for electromagnetics. Once people got deeply into
the topic, they discovered beauty and then mathematics accepted them.

Mathematics is the world's largest and longest-running effort to produce a
collective work of sublime beauty. What is beautiful goes in, what isn't dies
with its creator. New forms come to be seen as beautiful as they are shown to
open new vistas to explore, but very slowly.

~~~
jacobolus
> _For a long time, they didn 't like irrationals, so anything involving those
> didn't count as mathematics._

Note that this is more or less a myth.

> _Greek geometers knew they lived on a sphere, but spherical geometry was too
> unpleasant to contemplate until quite recently_

Astronomers did a huge amount of sophisticated spherical geometry, from
Mesopotamians through e.g. Hipparchus and later Ptolemy, then Arabs/Persians,
Indians, medieval Europeans, right down to the present.

~~~
ncmncm
Why then, was violating Euclid's fifth axiom explored so recently?

I assume you have no problem with the rest.

~~~
Rerarom
Because they considered the sphere as embedded in 3D space.

------
chris5745
[https://leanprover.github.io/about/](https://leanprover.github.io/about/)

~~~
joe_the_user
The entire article makes it sound like Lean is a significant step in usability
and power over other systems. That seems like an important thing and makes me
interested to download it and play with it.

Microsoft Research seems to have done some exciting things with provers over
the years - Z3 is another significant program. All under the direction of
Leonardo de Moura, notably.

~~~
0815test
Power, perhaps. But I'm a bit skeptical about usability. Lean doesn't even use
one of the most obvious things that make interactive proof systems far more
usable - a declarative mode instead of the usual tactics-based scripts. (Yes,
you can kinda sorta fake the former with "structuring" tactics, except not
really - declarative proofs are really their own kind of thing.) There even
used to be systems that automatically rendered input definitions and
declarative proofs in natural language (given that the basic terms and symbols
were previously defined of course) which does enable even an average
mathematician to easily figure out what the system is up to. You just can't do
this properly if all you have is a list of "tactics" fiddling with the prover
state.

~~~
krapht
"Lean doesn't even use one of the most obvious things that make interactive
proof systems far more usable - a declarative mode instead of the usual
tactics-based scripts."

Citation needed. I can make a perfectly reasonable Isar-style declarative
proof in Lean. Just because most users of Lean choose not to do this doesn't
mean it can't be done. I should mention that the reason users are more willing
to write imperative proof code instead of declarative code in Lean is that 1)
the interactive debugger is responsive and easy to use, and 2) writing a
nicely structured declarative proof is more work than an imperative one.
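
For readers unfamiliar with the distinction, here is a toy illustration (my
own example, not from any of the linked material) of the two styles in Lean 3,
proving the same statement first as a term and then with a tactic script:

    
    
       -- term-mode proof: the proof is a single expression
       example (p q : Prop) (hp : p) (hq : q) : p ∧ q := ⟨hp, hq⟩
    
       -- tactic-mode proof: an imperative script building the same term
       example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
       begin
         split,
         exact hp,
         exact hq
       end
    

An Isar-style declarative proof sits in between: a structured chain of
intermediate statements rather than raw term construction or proof-state
manipulation.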

~~~
ratmice
Indeed, most of the book
[https://leanprover.github.io/logic_and_proof/](https://leanprover.github.io/logic_and_proof/)
is entirely term mode proofs, with tactics largely absent.

------
adamnemecek
I agree with this wholeheartedly. The downside is that it's going to take
forever to get everyone on board. Mathematicians are very hostile to outside
influences.

Constructive mathematics should be more of a thing too.

~~~
jewelry
I don't think mathematicians themselves disliked this idea. In fact Hilbert
tried it
([https://en.wikipedia.org/wiki/Hilbert%27s_program](https://en.wikipedia.org/wiki/Hilbert%27s_program))
one hundred years ago, and it failed: it proved to be impossible. Well, maybe
the Hilbert program was too aggressive, and this Lean thing is much more
moderate. But at the end of the day, the boundary of what a program solver can
do is already known, hence no one is paying too much attention.

~~~
adamnemecek
Things get nicer when you accept constructive mathematics.

I (I'm not the only one) think that Hilbert was wrong.

------
inflatableDodo
_" So if proper mathematicians aren’t interested in a proof of the odd order
theorem, what are they interested in?

Example: Perfectoid spaces.

(Topic) - Proof of odd order theorem - Perfectoid spaces

Got author a Fields Medal? - Yes (1970) - Yes (2018)

High level mathematics? - No - Yes

Lots of PhD students and post-docs working in the area? - No - Yes

Talks happening about these things all over the world? - No - Yes

Mathematicans interested in 2019? - No - Yes

Earlier this year, Patrick Massot, Johan Commelin and myself formalised the
definition of a perfectoid space in Lean. I am getting invitations from across
the EU to speak in mathematics departments about the work. Serious piece of
research, or elaborate PR stunt? Maybe both."_

They may be learning more than programming from their contacts in the computer
department. It is always much easier to get something out there when it is
fully buzzword-compatible.

------
Myrmornis
> Possible: tools such as Lean will begin to do research semi-autonomously,
> perhaps uncover problems in the literature. Maybe these tools will replace
> research mathematicians.

> In April, Christian Szegedy from Google told me that he believes that
> computers will be beating humans at math within ten years.

I wonder whether automated mathematics software would really be able to
compete with humans at explaining how areas of mathematics fit together and
what roles different mathematical objects play.

I'm not a mathematician so sorry for the simplistic example, but for example,
would automated mathematics software be able to do something like (a) invent
the complex numbers (i.e. as an object in algebra -- a "field extension" of
the reals), and (b) also make a statement like "these are useful for modeling
cyclical/oscillatory behavior"?
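
For reference, the construction in (a) is standard algebra (not from the
article): the complex numbers arise as the quotient ring

    
    
       C = R[x] / (x^2 + 1)
    

with the coset of x playing the role of i. Part (b), attaching an
interpretation such as "useful for oscillatory behaviour", is precisely the
kind of step with no obvious formal counterpart.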

~~~
a_imho
_In April, Christian Szegedy from Google told me that he believes that
computers will be beating humans at math within ten years._

From my experience the track record of Outlandish Claims Made by People Having
Vested Interest in Casting Them is rather poor. In fact, I can not recall a
single one that was accurate.

~~~
williamstein
I was at Buzzard's talk at Microsoft last week. Kevin spent a lot of time in
the live talk explaining what he thinks Szegedy actually meant (there should
eventually be video on youtube of this). Kevin clarified that in his opinion
what Szegedy means by "math" isn't at all what Kevin means by "math", since it
takes so long to get to the frontiers of research math. In particular, Kevin
speculated that for Szegedy "math" is some specific research questions in
combinatorics, which isn't at all what Kevin's view of "math" is. So Kevin's
conclusion was that Szegedy really means that computers will beat humans at
"something specific involving combinatorics" in the next ten years. Kevin
wasn't optimistic about the same claim for number theory, unless 10 years is
replaced by "some day".

Kevin also explained live in his talk that his main motivation for his recent
interest in interactive theorem provers is a very bad experience he had a few
years ago refereeing an important research paper by some famous
mathematicians. That experience very much shook his faith in the ability of
humans to do correct deep mathematics... I suppose from that perspective
"computers will be beating humans" might just mean that humans are convinced
of a (false!) result, but computers can't be convinced of that same false
result (in the sense that nobody can formalize it).

(Incidentally, some of the people at Kevin's talk were at Richard Stallman's
talk in the same place the day before. Evidently RMS didn't allow any pictures
or video, so not much will appear from that.)

------
joker3
No one is going to care until an important proof can't be verified and a
counterexample to some step is found. At that point, you'll see a sea change.
This could take twenty years, or it could happen next year.

------
xvilka
There are way more libraries and plugins for Coq, though. Using Lean would
require writing them yourself.

