
Pure Maths in Crisis? - lelf
https://plus.maths.org/content/pure-maths-crisis
======
klyrs
As a computational mathematician... I should clarify; I write programs to
construct and verify proofs in discrete mathematics, and also I'm a hacker
obsessed with high-performance code... I agree that mathematics is in crisis.
Why? Because I'm an oddball rarity. My first supervisor was a hacker too, but
my PhD supervisor thought I was a wizard. Professors, by and large, can't
hack. Those who could, abandoned the field for CS. Students are rarely hackers
- often the celebrated kids are groomed through Olympiad, then Putnam, and
jump straight into grad school with nary a glance towards a computer -- until
they are broken against the only language seen as practical: LaTeX.

Mathematics is innately about computation; every "proof" is an algorithm in
disguise, and yet the field fails to attract talent because computation itself
is treated with such disdain. Math has its own reproducibility crisis -- we
don't write code that performs the tedious operations "left to the reader",
and even when we do, journals won't post it.
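
To make "every proof is an algorithm" concrete: under the Curry-Howard
correspondence, a proof literally is a program. A minimal sketch in Lean 4
(my pick of system here, purely for illustration):

    -- The proof term below is simultaneously a proof of the proposition
    -- and a function: it consumes evidence for p and evidence for q and
    -- constructs evidence for the conjunction p ∧ q.
    theorem and_intro (p q : Prop) : p → q → p ∧ q :=
      fun hp hq => ⟨hp, hq⟩

    #check and_intro  -- and_intro : ∀ (p q : Prop), p → q → p ∧ q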

And I don't see a ton of progress. My old department has hired a few young
professors since I graduated... none can hack. It's sad.

~~~
stabbles
I have no idea what you mean by "hacking" or "hacker" in this context.

~~~
klyrs
This is literally Hacker News; I assumed that would be obvious. There's more
to the culture than programming. There are two types of programmers: those
who learn the grammar, and those who learn the interpreter / compiler.

~~~
ruang
Best description I’ve seen! Can sometimes see it in the bimodality of the
types of comments too.

------
dannykwells
I think articles like this are hilarious because they act like written-down
proofs are a gold standard of unassailable logic. Of course they aren't - what
constitutes proof to one person (or in one age) may or may not constitute
proof to others. Look at Mochizuki's work - someone so convinced they are
right, and yet no one else can understand it:

[https://www.nature.com/news/the-biggest-mystery-in-mathematics-shinichi-mochizuki-and-the-impenetrable-proof-1.18509](https://www.nature.com/news/the-biggest-mystery-in-mathematics-shinichi-mochizuki-and-the-impenetrable-proof-1.18509)

Another good example: Euler's proofs, as he wrote them in his day, would
fail even a basic real analysis class now.

Having known many, many pure mathematicians, I believe that they don't want
to use computers for the simple reason that many of them don't know how to,
and that denying the legitimacy of computer-generated (or assisted) proofs
protects their livelihood/careers.

~~~
tacomonstrous
>denying the legitimacy of computer-generated proofs protects their
livelihood/careers.

While this might be an issue, the bigger problem is that no interesting new
theorems have been obtained in this way yet.

What is true is that there is still serious prejudice against theorems
proved by reducing them to a finite (yet large) number of concrete cases,
which are then verified by an algorithm. This is the case even after the
whole brouhaha over, and resolution of, the 4 colors theorem by Hales.
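
To make "verified by an algorithm" concrete, here's a toy sketch in Lean 4
(assuming Mathlib's decidability instances; obviously nothing like the scale
of the real four-color computation). The `decide` tactic just enumerates the
finitely many cases and checks each one:

    import Mathlib.Tactic

    -- A claim with finitely many concrete cases, settled by brute force:
    -- `decide` checks every n below the bound, one case at a time.
    example : ∀ n < 100, n * n % 4 = 0 ∨ n * n % 4 = 1 := by decide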

~~~
auggierose
Hales didn't resolve the 4 colors theorem, Gonthier did that. Hales proved the
Kepler Conjecture and afterwards facilitated its formal verification via the
Flyspeck project.

~~~
wolfgke
> Hales didn't resolve the 4 colors theorem, Gonthier did that.

Gonthier didn't resolve the 4 colors theorem, Appel and Haken did that.
Gonthier just created the first computer-checked proof for the Four-color
theorem.

------
romwell
Mathematics has always been like that.

The Fundamental Theorem of Algebra went through many "proofs" over decades
before settling on one we accept as rigorous.

And rigor itself? A relatively modern requirement. Calculus was put on a
rigorous basis only in the 19th century. It existed, and was applied, for
centuries before that.

The real truth about mathematics is that it's far more about intuition than
rigor: statements that most mathematicians believe to be true get treated as
such. It's human nature, and mathematics is a human endeavor.

Moreover, math itself resists rigor in a fundamental way, as Gödel showed us.
Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms.
Will the computer _believe_ in the axiom of choice?

And stuff that's too complex to be understood except by a select few? So be
it. It means nobody really cares about that stuff. Maybe someone will discover
a better proof later. Maybe someone will stumble upon a counterexample. Maybe
nobody will touch that for a thousand years. Math will go on.

Not that all math is accessible, but all _good_ math is _not_ limited to in-
groups.

For an extreme example, Grigori Perelman's famous work is surely not
accessible, and is famously incomplete. It didn't matter. The ideas were clear
to enough people around the world that it didn't take long for them to spread,
and for others to step up and close the gaps.

I've always said that mathematics is a form of art, perhaps with a smaller
audience than most. The only criterion in mathematics is _beauty_: whether
the work is _interesting_ or not. Some works are lacking in that respect, as
happens in all the arts, and will not be appreciated by future generations. And
it's fine.

Disclaimer: I do math, sometimes.

~~~
dwheeler
> And rigor itself? A relatively modern requirement.

Perhaps. I would argue that it is _LONG_ overdue.

> Moreover, math itself resists rigor in a fundamental way, as Gödel showed
> us.

This merely shows that some things cannot be proved, whether by human or
machine. That's not the issue. The issue is whether or not we should be
moving towards machine-verified proofs, since "proofs" verified only by
humans often turn out to be wrong.

> Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms.
> Will the computer believe in the axiom of choice?

A computer will "believe" the axiom of choice if it is told that it is an
axiom. I agree that the humans get to decide what is an axiom (at least in
their system), but it should be immediately obvious exactly what axioms are
being accepted, and then rigorously shown that _only_ those accepted axioms
are being used.
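
As a sketch of what that auditing looks like in practice (here in Lean 4,
though this is just one system among several):

    -- Excluded middle is derived from the choice axiom in Lean's library
    -- (Diaconescu's argument), and the kernel records that dependency.
    theorem em_demo (p : Prop) : p ∨ ¬p := Classical.em p

    -- Ask the kernel exactly which axioms the proof rests on:
    #print axioms em_demo
    -- 'em_demo' depends on axioms: [propext, Classical.choice, Quot.sound]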

I can even point to a demonstration. One of the mathematical formalization
tools available today is Metamath (
[http://us.metamath.org/](http://us.metamath.org/) ). In Metamath you can
state the axioms you wish to accept, and then generate proofs that are
automatically checked; every step must lead back to a previously-proved
theorem or an axiom.

There are several existing Metamath databases. The "set.mm" database (Metamath
Proof Explorer) is based on classical logic, ZFC, and the axiom of choice.
Since the axiom of choice is included as an axiom, you can use it. See here:
[http://us.metamath.org/mpeuni/mmset.html](http://us.metamath.org/mpeuni/mmset.html)

In contrast, the "iset.mm" database (Intuitionistic Logic Explorer) is based
on intuitionistic logic and does _not_ include the axiom of choice. Since the
axiom of choice is not included, you cannot verify a proof that tries to use
it - the verifier will simply complain that the step is invalid if you try to
do it. See here:
[http://us.metamath.org/ileuni/mmil.html](http://us.metamath.org/ileuni/mmil.html)

------
fspeech
For people not familiar with the subject (or Buzzard and his cohort), I
recommend checking out the Zulip archive:
[https://leanprover-community.github.io/archive/116395maths/index.html](https://leanprover-community.github.io/archive/116395maths/index.html)
Some of the (esp. earlier) threads are very illuminating on the process of
constructing proofs using ITPs (interactive theorem provers).

My personal experience is that formally modeling a suitable domain of math
can be a very good pedagogical exercise. There are many fine points that are
easy to ignore when doing math informally, but a formal tool will not let
you ignore them. This is particularly helpful when working on foundational
subjects. When successful, you are rewarded with the feeling of possessing
an understanding of the subject deeper than everyone else's (most likely
untrue, but certainly a typical expository paper or book could not cover the
subject as thoroughly).
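
A tiny example of such a fine point, sketched in Lean 4 (lemma name from the
standard library): natural-number subtraction truncates at zero, so an
"obvious" identity is false as stated, and the tool forces you to notice the
hypothesis the informal argument hides.

    -- Informally "(a - b) + b = a" looks obvious; over ℕ it is not,
    -- because subtraction truncates: 2 - 3 = 0, so (2 - 3) + 3 = 3.
    example : (2 - 3) + 3 = 3 := by decide

    -- The honest statement carries the hidden hypothesis explicitly:
    example (a b : Nat) (h : b ≤ a) : (a - b) + b = a :=
      Nat.sub_add_cancel h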

That said, note my use of the word "modeling". In a formalization attempt
you may have in your head an isomorphism between the code and the
mathematical objects under study, but that isomorphism is not necessarily
apparent or universally recognized. Beyond the most quotidian matters, other
people's modeling code (think of it as an API) may or may not fit your
conceptual models. Just as there are different programming languages and
libraries, there is no universally accepted approach to encoding
mathematical constructs. Your claim of correctness is only as good as your
axioms and definitions. Your theorem, with all definitions completely
unfolded, will be an unreadable tangle of first-order logic formulas that
anyone would be hard pressed to recognize as universally meaningful --
though it is extremely likely to be logically sound. Even if no one
questions the soundness of your logic, there is no tool that can certify
that your formal expression actually means what you say it means in a
mathematical domain.

------
0815test
There's not much that's new here - Freek Wiedijk estimated the cost of
formalizing the undergrad math curriculum in its entirety at 140 man-years,
for a rough monetary cost well above the millions-of-dollars range - link:
[https://www.cs.ru.nl/~freek/notes/index.html](https://www.cs.ru.nl/~freek/notes/index.html)
Expensive to be sure, but not excessively so for a mostly one-time effort.
Part of the problem though is that we are still far from having a reasonably-
standard platform for formal math that could be expected to fully unlock its
benefits. Most of the effort in formalization so far seems to be occurring in
domains of "synthetic mathematics", which seem to be mostly self-contained and
don't take much formalization effort to reach close to the research frontier.

~~~
philipkglass
Billions of dollars? If it only takes 140 man-years that implies more than $14
million per man-year.

~~~
0815test
Yup, I suppose that it's closer to the hundreds-of-millions range for the
_undergrad curriculum only_, in a _single_ system - maybe even a bit less
than that. But that then leaves open the question of how to go the rest of the
way to the actual research frontier in whatever domains of math you care
about, plus the inherent uncertainty as to whether these efforts will in fact
be useful, since these systems are in such an early state. You're right though
that the way I phrased that was a bit confusing, so thanks!

------
mathgenius
> Proof is the essence of mathematics.

Proof may be essential, but it is not the essence of mathematics. Like many
things, the essence of mathematics is beauty.

In this mad world, of which academia is a part, we only seek to measure
things. And how can you measure beauty? You cannot, or, at best, it is
subjective. So, in place of beauty we have big complicated proofs that
no one understands.

John Conway, later on in life, decided to quit being a "real mathematician"
and focus instead on trying to _understand_ mathematics. His student Borcherds
came up with a proof of Conway & Norton's moonshine conjecture, but Conway
himself never accepted it. Why? Because it was too complicated; it couldn't
possibly be the right explanation for moonshine. What a dude.

------
mcguire
As an aside, "The Endoscopic Classification of Representations: Orthogonal and
Symplectic Groups" is a _damn_ good title. I'll assume it means something to
someone, but all I can say is that I expect it to make a _big_ hole in the
ground.

------
F-0X
I cannot understand this "solution" of turning to machine-verified proofs.
Coq, Isabelle, etc. invariably contain bugs. Even if they didn't, you can
still run into hardware bugs. Computers are not a source of perfect
computation and software is certainly not a source of perfect logic.

~~~
moomin
They’re not, but they’re spectacularly good at spotting dumb errors that
humans are spectacularly bad at spotting.

Me, I still haven’t got my head around how to prove something on a computer,
but the principle is sound. Theoretically one could build the proof as some
form of literate program.

~~~
chess93
A computer just does symbolic manipulation according to a list of rules
(axioms). The human/programmer specifies which sequence of symbolic
manipulations to apply, and the computer simply states whether or not the
specified manipulations transform the theorem into the truth symbol.

[http://us.metamath.org/mpegif/mmcomplex.html](http://us.metamath.org/mpegif/mmcomplex.html)
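
Roughly what that looks like in Lean, as a sketch (one system among many):

    -- The human supplies the sequence of manipulations (the tactics);
    -- the kernel only checks that each step is a legal application of
    -- the rules, and accepts or rejects the resulting proof.
    example (p q : Prop) (hp : p) (hpq : p → q) : q := by
      apply hpq   -- reduces the goal q to the goal p, via hpq : p → q
      exact hp    -- closes the goal p with the hypothesis hp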

(Warning: I've never actually done much of this before so some details might
be wrong.)

~~~
dwheeler
I _have_ done some, thanks for pointing to that page!

Here's a prettier version for most people (the "mpegif" version uses GIFs
for math symbols, which works everywhere but doesn't look as nice):

[http://us.metamath.org/mpeuni/mmcomplex.html](http://us.metamath.org/mpeuni/mmcomplex.html)

