
The Future of Mathematics? [video] - dgellow
https://www.youtube.com/watch?v=Dp-mQ3HxgDE
======
auggierose
He got a lot of things right in that video despite having been involved with
mechanised theorem proving for just two years. I thought it was a fantastic
and inspiring talk.

For example, when he said at the end that it is obvious to him that this will
change how mathematics is done in the future: yeah, I thought the same when I
encountered this technology, but I was just a 2nd-year math student back then,
and Isabelle was being developed in the same building. Still, obvious.

When he said that math is more about structure, not so much about induction:
yeah, I said the same at some ITP more than a decade ago. I got a dismissive
question from Andrew Appel when I said that, and Tobias Nipkow seemed to agree
with the dismissal. I never understood the question, though, and didn't get a
reply when trying to investigate further on stage :-)

Of course set theory can be automated. I would argue it can be automated even
better than dependent types: just embed the set theory, including Grothendieck
universes, in simple type theory. The problem there is how to keep the
notation sane, as, for example, + could be defined only once.
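For contrast, this is roughly what typeclasses buy you in a type-theory system; a hedged Lean 3 sketch (`my_pair` is a made-up type, not from any library):

```lean
-- `+` is resolved through the `has_add` typeclass, so each type can
-- carry its own addition rather than `+` being defined only once.
structure my_pair := (fst snd : ℕ)

-- componentwise addition for our made-up type
instance : has_add my_pair :=
⟨λ a b, ⟨a.fst + b.fst, a.snd + b.snd⟩⟩

#check (2 + 2 : ℕ)                    -- uses nat's instance
#check ((⟨1, 2⟩ : my_pair) + ⟨3, 4⟩)  -- uses the instance above
```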

Tactic-style theorem proving is sooooo old. It is how interactive theorem
proving was done from the very beginning. Tactic-style proofs are not readable
at all, except maybe by stepping through them interactively. Declarative
proofs are much more readable. Writing declarative proofs with the machine
bridging the intermediate steps is the obvious future.
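To make the contrast concrete, a toy example in Lean 3 (assuming core's `nat.add_comm`; Isabelle's Isar is the canonical declarative style):

```lean
-- tactic style: the script is meaningful only when replayed step by step
example (a b : ℕ) : a + b = b + a :=
begin
  rw nat.add_comm,
end

-- term/declarative style: the proof text itself names the reasoning
example (a b : ℕ) : a + b = b + a :=
nat.add_comm a b
```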

I was always astonished how people can learn stuff. When taking Latin classes,
I thought, sure, I can kind of learn to read it, but speak it in real time? No
way! But of course, at some point in time people have. Without understanding
how.

AlphaGo Zero kind of explains how people can do something really difficult,
even strategy based, without exactly knowing what they are doing. This is not
much different from how mathematics is done. I think we will have a deep
mathematician solving at least one of the remaining millennium problems within
the next 20 years. The path starts exactly with how Buzzard and Hales envision
it: Bring enough mathematics into one system, so that the millennium problems
can actually be stated in the system. Machine learn that stuff. Give feedback
to users, they will find it easier to use the system interactively. Create
more mathematics. Machine learn that stuff. Rinse repeat.

This can be done. Certainly with 100 million dollars in funding.

~~~
empath75
Encoding ‘all of math so far’ in some theorem prover seems like an ideal open
source project where lots of people can make small contributions which don’t
require a vast amount of mathematical knowledge, since you’d mostly be simply
encoding other people’s results. Is there such an effort now?

~~~
auggierose
I tried to do something like that
([http://proofpeer.net](http://proofpeer.net)), but got lost in the details.
Learnt a lot from it, but implementing your own cloud versioning system is
probably more appropriate for a project with 100 million dollars funding, not
600000 pounds, half of which goes to university bureaucracy ;-)

I think formal abstracts
([https://formalabstracts.github.io](https://formalabstracts.github.io)) is
promising in principle (it doesn't focus on proofs though).

The Archive of Formal Proofs
([https://www.isa-afp.org](https://www.isa-afp.org)) is the biggest effort I
know, but the logic (Isabelle/HOL) is not powerful enough for doing advanced
mathematics comfortably, and the process of contributing is quite arcane.

------
raphlinus
I found this really worth watching, as much for the sociological commentary
about the way modern mathematics is done as for the presentation of the
program (and the latter resonates strongly for me).

One of the more interesting bits for me was in the Q&A, where Prof Buzzard was
asked about alternatives to Lean. Lean is the most evolved of the dependently
typed calculus-of-constructions provers, but two other approaches might work.
One is univalence, which is sexy, but Prof Buzzard observed that they haven't
actually got much done in their system.

The other is set theory, which is more familiar and approachable to working
mathematicians, but the systems out there lack automation. He didn't mention
Metamath by name, but that's currently the most developed such system, and
they _have_ managed to get a lot done, either in spite of or because of the
lack of type theory sexiness of the foundations.

So the question I'd love to see answered is whether automation is inherently
easier in type theory, or whether it might be possible to build automation for
a set theoretical approach. John Harrison gave a talk last year at AITP on the
topic, but I haven't heard much more of this.

~~~
Miltnoid
No way is Lean more evolved than Coq.

~~~
raphlinus
Maybe that wasn't the best way to phrase it, but the question was asked and
Prof Buzzard replied basically that in Lean it's straightforward to express
quotients, while in Coq you get "setoid hell". This specific statement is at
1:04:20 in the video. The question is at 1:00:02, and is probably useful for
context.

~~~
zozbot234
Aren't quotients non-constructive? Univalence is probably also relevant to
this issue, given that it generalizes the treatment of 'equality' in a broadly
similar direction to the one needed for quotients. Groupoids are after all a
fairly natural generalization of setoids.

~~~
raphlinus
Quotients are not purely constructive, but they are present in Lean as an
extension (this is covered in Chapter 1 of the book[1]).

[1]:
[https://leanprover.github.io/theorem_proving_in_lean/theorem...](https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf)
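For reference, the primitives involved are small; a minimal Lean 3 sketch of how `quot.sound` (the added constant) identifies related elements:

```lean
-- `quot.mk r` injects a value into the quotient of α by r;
-- the postulated constant `quot.sound` equates related elements.
example {α : Type} {r : α → α → Prop} (a b : α) (h : r a b) :
  quot.mk r a = quot.mk r b :=
quot.sound h
```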

~~~
uryga
could you point me towards an explanation of why quotients aren't
constructive?

~~~
ratmice
I can't; perhaps Lean's definition of quotient isn't constructive (I hadn't
looked or noticed that). But there is at least this construction of quotients
in NuPRL.

[http://www.nuprl.org/documents/Nogin/QuotientTypes_02.pdf](http://www.nuprl.org/documents/Nogin/QuotientTypes_02.pdf)

I think quotients are not typical of constructive objects in that if you
construct some object and then project it into the quotient, then you have the
quotient, but you cannot project that object back out. You get subtly less:
only that the resulting quotient satisfies some equivalence relation.

Under a specific set of assumptions you may very well be able to construct the
quotient... but under a different set of assumptions (including the assumption
of the quotient itself), perhaps you have the quotient while lacking the
assumptions necessary to construct it yourself.

Anyhow, the argument that they are non-constructive is, in a Humpty Dumpty
sense, that you cannot put it together -> take it apart -> put it back
together again. Though it seems reasonable to consider the "put it together"
phase as not entirely incompatible with constructivity.

~~~
uryga
thanks for the reply! i'm gonna need some time to mull it over...

would you be able to say how "normalizing" fits into this? by "normalizing" i
mean applying some function that takes each equivalence class to a
representative element (think simplifying fractions). using something like
that, we could round-trip a value (object -> quotient -> object), though at
the end we might end up with a different value than what we put in. so yeah, a
normalizing function seems like a useful thing when looking at quotients
constructively. sorry for the handwavyness, i hope i managed to get my point
across!
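Your point comes through; here is a toy Lean 3 sketch of it (mod-2 residues standing in for "simplifying fractions"; `normalize` is a made-up name, not Lean's quotient machinery):

```lean
-- pick a canonical representative of each equivalence class mod 2
def normalize (n : ℕ) : ℕ := n % 2

-- round-tripping lands on the representative: equivalent to the
-- input, but possibly not equal to it
example : normalize 5 = normalize 3 := rfl  -- both classes have rep 1
```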

~~~
ratmice
I don't know, normalizing at least seems reasonable constructively, sorry
about the handwavyness in my reply as well!

In lean specifically there is a bit more information here:
[https://leanprover.github.io/theorem_proving_in_lean/axioms_...](https://leanprover.github.io/theorem_proving_in_lean/axioms_and_computation.html)

"and quotient construction which in turn implies the principle of function
extensionality"

There is also, Michael Beeson's "Extensionality and choice in constructive
mathematics"
[https://projecteuclid.org/euclid.pjm/1102779710](https://projecteuclid.org/euclid.pjm/1102779710)

So it seems that the definition of quotients in Lean is classical and
justified by axioms, while NuPRL's at least seems constructive. That leads me
to believe quotients as defined in Lean aren't constructive, but I'm not sure
we can take the step to "quotients aren't constructive" in general.

Anyhow it's an interesting question, and one which I too wish I had more
clarity on.

------
dwheeler
I don't know whether or not Lean is the "best" (no doubt that depends on what
aspects you think are important). But I completely agree with the speaker that
there is a need to formalize mathematics with computer verification. The
current "council of elders" model of verification is simply _not_ able to
verify proofs with any serious level of confidence today, given the explosion
of complexity in modern math. This _will_ change how math will be done in the
future; we will look back on current math "proofs" in the way we look back on
medicine based on the humors (woefully inadequate).

There are, of course, people working on fixing this. I just created a Gource
visualization of the Metamath set.mm project, which has been working to
formalize mathematics with absolute rigor. Over the years there has been
increased activity; there have now been 48 contributors to set.mm.

In the end, _that_ is what is necessary to formalize mathematics: efforts by
many people working together to do it.

See:
[https://www.youtube.com/watch?v=XC1g8FmFcUU](https://www.youtube.com/watch?v=XC1g8FmFcUU)

~~~
MayeulC
How is interoperability going between those various formal systems? Couldn't
we devise some common intermediate representation, or API?

Some languages (and their approaches) are probably better suited to certain
mathematical objects, just as some proofs are easier to carry out
algebraically than geometrically, for instance.

~~~
raphlinus
This turns out to be quite hard, because there are irreconcilable differences
in the fundamentals of the different proof systems. That said, there's the
Dedukti project, which is showing promising interop results between (at least)
Coq and HOL style logics, and the work Mario Carneiro is doing trying to
bridge between Metamath and more type-theory approaches.

------
mherrmann
I did my undergrad at Imperial. My first lecture was supposed to be with Prof.
Liebeck. Kevin came in and wrote on the blackboard: "Lemma 1 - I am not
professor Liebeck". He was wearing his (I think typical) trousers. I never
interacted with him personally but I did look at his Wikipedia page once. He
was the top student in his undergraduate class (Mathematics) at Cambridge.
Very impressive to a mere mortal like me.

I once played with Lean but found the tutorial not very approachable. It took
an hour of reading through abstract explanations until it finally explained
the idea of how it works. Essentially, true statements are expressions that
pass the "type check" of a compiler. A function taking type A as a parameter
and returning type B is an implication A->B. To prove this implication, you
need to find a function implementation that passes the corresponding type
checks. This is what I would have wanted the tutorial to say at the start.
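For instance, in Lean 3 that reads almost literally:

```lean
-- a proof of A → B is a function from proofs of A to proofs of B,
-- so applying it to a proof of A yields a proof of B
example (A B : Prop) (h : A → B) (a : A) : B := h a

-- chaining implications is just function composition
example (A B C : Prop) : (A → B) → (B → C) → (A → C) :=
λ hab hbc a, hbc (hab a)
```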

~~~
foooobar
Which tutorial did you use? As someone coming from computer science, I found
[https://leanprover.github.io/theorem_proving_in_lean/](https://leanprover.github.io/theorem_proving_in_lean/)
very approachable as an introduction to ITP in general.

~~~
mherrmann
That's the one I meant, I think. It is nice, but as I said, the impression it
left me with was "you could have told me how the approach generally works
sooner".

~~~
lonelappde
That book wants to build the foundation (dependent types) before making the
big claim in chapter 3.

It's a bit weird because, as Haskell shows, you don't need dependent types for
basic theorem proving (but dependent types do give a lot of useful power).

~~~
dwohnitmok
Well, if you just have Haskell 2010 types, you're talking about really, really
basic theorem proving, since all you have is propositional logic. The most
interesting thing I can think of to prove with Haskell 2010 types is (the
constructive version of) de Morgan's laws. Almost all other interesting
mathematical statements are out of reach.
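For concreteness, the constructive de Morgan directions written out in Lean 3 (the same terms can be written as Haskell functions, with `Void` standing in for false):

```lean
-- ¬(A ∨ B) → ¬A ∧ ¬B: provable with no classical axioms
example (A B : Prop) : ¬(A ∨ B) → ¬A ∧ ¬B :=
λ h, ⟨λ a, h (or.inl a), λ b, h (or.inr b)⟩

-- its converse is also constructive
example (A B : Prop) : ¬A ∧ ¬B → ¬(A ∨ B) :=
λ ⟨ha, hb⟩ h, or.elim h ha hb
```

(The remaining direction, ¬(A ∧ B) → ¬A ∨ ¬B, is the one that needs classical logic.)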

------
MayeulC
A few links (the chat is mentioned multiple times during the talk):

Lean: [https://leanprover.github.io/](https://leanprover.github.io/)

Repo:
[https://github.com/leanprover/lean/](https://github.com/leanprover/lean/)

Chat: [https://leanprover.zulipchat.com/](https://leanprover.zulipchat.com/)

The maths course (in French) that can be seen during the presentation:
[https://www.math.u-psud.fr/~pmassot/enseignement/math114/](https://www.math.u-psud.fr/~pmassot/enseignement/math114/)

License: Apache 2.0

I was afraid there would be a CLA, as is customary with Microsoft's projects
(and the main reason I didn't contribute to any), but I couldn't find one.
Good call.

> Lean 4

> We are currently developing Lean 4 in a new (private) repository. The source
> code will be moved here when Lean 4 is ready. Users should not expect Lean 4
> will be backward compatible with Lean 3. [Committed one year ago]

Really, really not a fan of this. This basically prevents anyone from
attempting to add new features or fixes, as they might be obsolete by the time
the new version comes out (incompatible or already fixed).

~~~
jonathanstrange
Can I ask how Mizar compares to Lean, Coq, and Isabelle?

I've been wondering about that for quite a while because I knew someone who
was involved in the Mizar project, but never had the time to get into
automated theorem proving myself. I was impressed by the semi-natural-language
proofs.

~~~
dwheeler
Comparing these different approaches is not trivial, of course.

One view is to look at "Formalizing 100 Theorems" by Freek Wiedijk, which
lists 100 mathematical theorems and the various systems that have formalized a
nontrivial number of them. It's basically a "challenge list" for these kinds
of systems:

[http://www.cs.ru.nl/%7Efreek/100/](http://www.cs.ru.nl/%7Efreek/100/)

This list is discussed in "Formal Proof - Getting Started" (Freek Wiedijk,
Notices of the AMS, Volume 55, Number 11, December 2008).

That list is absolutely not the only way to compare different tools. Still, it
gives you a sense of how far along each one has come in actually making
proofs. Here's the current status:

    
    
        HOL Light      86
        Isabelle       83
        Metamath       71
        Coq            69
        Mizar          69
        ProofPower     43
        Lean           29
        nqthm/ACL2     18
        PVS            16
        NuPRL/MetaPRL   8
    

As you can see, the top ones today are HOL Light, Isabelle, Metamath, Coq, and
Mizar. Lean has far fewer, but to be fair it's also much newer.

------
est31
Prior discussion:
[https://news.ycombinator.com/item?id=20909404](https://news.ycombinator.com/item?id=20909404)

Back then I told myself to check out Lean one day... it still hasn't happened
yet :/

~~~
dang
We missed that earlier link, and this talk is so incredibly good that I'm
going to pretend I didn't see it here and so couldn't mark it as a duplicate.

~~~
est31
FWIW, my comment wasn't a demand to mark this as duplicate. I only included
the discussion link for further reference. The thread here had very valuable
comments like the one by raphlinus.

------
ocfnash
I strongly recommend reading
[https://github.com/coq/coq/issues/10871#issuecomment-5404526...](https://github.com/coq/coq/issues/10871#issuecomment-540452626)

which was written a few hours ago, broadly in the vein of Lean vs. Coq, and
more precisely on the issue of how Lean handles quotients.

~~~
auggierose
Very symptomatic of the current state of mechanised theorem proving. And it
has been going on like that for decades. It's mostly because computer
scientists are driving development, not mathematicians, I think.

After reading this, are you really any wiser now on the topic of quotients in
Lean? I didn't learn much from it except that some Coq developers are fed up
with the current popularity of Lean among top mathematicians.

~~~
ocfnash
> After reading this, are you really any wiser now on the topic of quotients
> in Lean?

Only a little but I'm hoping that some weekend reading up on Canonicity and
Subject Reduction (now that I know these are the issues at play) will shed
some light.

I'm interested in both Lean and Coq but what I'm most excited about is the
(Lean-based) Mathlib project.

I can believe that Lean may have made the wrong call on quotients (I guess
with `quot.sound`), though I am not qualified to decide. If so, I imagine it
will manifest as a limit on how far a project like Mathlib can push its
borders (maybe the point where the complexity of formalising what we want to
say dominates the inherent complexity of the statements themselves).

However from what I've seen, Mathlib is by far the most successful
formalisation project in terms of what seems to matter most sociologically
right now: attractiveness to mathematicians. Whatever its fate, I think it
will help make formalisation of mathematics much more mainstream, and will
teach us a lot. I still think that a univalent type theory looks like the most
promising candidate, but we'll have to wait and see.

------
devicetray0
Near the end he talks about mathematicians with published papers that say "you
shouldn't believe any of my papers, they're just telling stories" and don't
care if their theories are wrong. Wow

~~~
vanderZwan
I suspect that nobody who has been in academia long enough to get at least a
master's degree would be surprised at this statement, no matter what their
background. Every field has people like this.

Don't forget how essential storytelling is to the human condition: culture,
accumulated knowledge, and everything we as a species have achieved are built
on top of it. With that in mind, it's much easier to see why some people might
want to play the role of storyteller at all costs.

------
mikorym
I agree that Lean or Coq or something else is the future of mechanised
mathematical argument, but my opinion is that this has been the expectation
for maybe a hundred years already, and it was right up there in the time of
Turing and Church (and Boole and Heyting and Curry and Kolmogorov).

But compare all of mathematics to just linear algebra, and specifically neural
network implementations. You have a lot of people working on AI who sometimes
grossly overstate the capabilities of their systems and fail to understand
those systems when they do succeed. I would venture that the issue is not
solving problems as much as understanding things to the level of mastery. It
is always worth it to understand something to a continually exhaustive level
of detail.

I think this is what artisans are. If you can make incredible handmade books
[1], then surely you have underlying skills and abilities that transfer as
well? If you are a grandmaster chess player then you may be dismayed that
computers will always beat you [2], until you use a computer yourself to beat
another computer (or at the very least until you become resentful towards IBM
for misleading you in 1997). [3]

[1]
[https://www.reddit.com/user/iostopan](https://www.reddit.com/user/iostopan)
[2]
[https://en.wikipedia.org/wiki/Human–computer_chess_matches](https://en.wikipedia.org/wiki/Human–computer_chess_matches)
[3]
[https://en.wikipedia.org/wiki/Solving_chess](https://en.wikipedia.org/wiki/Solving_chess)

------
outlace
I watched the whole thing and it was fantastic. Professor Buzzard is an
excellent presenter. I downloaded Lean and started playing around with it
after this lecture.

------
delhanty
Mainly for my own reference when I return to this story with more time, links
to Professor Kevin Buzzard's project Xena [0][1]

>Basically, Lean can understand mathematics, and can check that it doesn't
have any mistakes in. Most of the files here are Lean verifications of various
pieces of undergraduate level mathematics.

>Some of the lean files are in a library called Xena. One could imagine Xena
as currently studying mathematics at Imperial College London.

[0] [https://github.com/kbuzzard/xena](https://github.com/kbuzzard/xena)

[1] [https://xenaproject.wordpress.com/](https://xenaproject.wordpress.com/)

------
ttctciyf
> So in the end it wasn't Gödel, it wasn't Turing, and it wasn't my results
> that are making mathematics go into an experimental mathematics direction,
> in a quasi-empirical direction. The reason why mathematicians are changing
> their working habits is the computer. I think that this is an excellent
> joke!

\- Gregory Chaitin, 2007[1]

1:
[https://books.google.com/books?id=DS7AOrIw8bkC&pg=PA97&lpg=P...](https://books.google.com/books?id=DS7AOrIw8bkC&pg=PA97&lpg=PA97)

------
MaysonL
A few links found by following up:

[https://xenaproject.wordpress.com](https://xenaproject.wordpress.com)

[http://wwwf.imperial.ac.uk/~buzzard/one_off_lectures/msr.pdf](http://wwwf.imperial.ac.uk/~buzzard/one_off_lectures/msr.pdf)

[http://aitp-conference.org/2019/slides/KB.pdf](http://aitp-conference.org/2019/slides/KB.pdf)

[https://galois.com/blog/2018/07/the-lean-theorem-prover-past-present-and-future/](https://galois.com/blog/2018/07/the-lean-theorem-prover-past-present-and-future/)

[https://florisvandoorn.com](https://florisvandoorn.com)

[https://florisvandoorn.com/talks/JMM2019formalabstracts.pdf](https://florisvandoorn.com/talks/JMM2019formalabstracts.pdf)

[https://sites.google.com/site/thalespitt/](https://sites.google.com/site/thalespitt/)

[https://jiggerwit.wordpress.com/2014/05/21/27-formal-proof-projects/](https://jiggerwit.wordpress.com/2014/05/21/27-formal-proof-projects/)

[https://jiggerwit.wordpress.com/2016/10/18/elliptic-curve-addition-without-tears/](https://jiggerwit.wordpress.com/2016/10/18/elliptic-curve-addition-without-tears/)

[https://arxiv.org/abs/1610.05278](https://arxiv.org/abs/1610.05278)

------
kkwteh
Looking at the code frequency, it looks like development of Lean 3 all but
stopped around January 2018. The Lean 4 repo shows lots of activity since
January 2019, but isn't in a usable form.

What are the future plans for the project? How will this be distributed in a
form that mathematicians can use and contribute to?

------
vanderZwan
This might be a very naive question but I was wondering: are there
mathematical proofs that fundamentally cannot be computed? And I don't mean
that in the Ackermann function[0] sense of the words, I mean is there
mathematics that is inherently _beyond_ computation?

(still watching the video)

EDIT: follow-up, since the replies helpfully point out that yes, there is:
does this limit this kind of software, or can theorem proving software get
around it with human intervention?

[0]
[https://en.wikipedia.org/wiki/Ackermann_function](https://en.wikipedia.org/wiki/Ackermann_function)

~~~
317070
Yes, we have proofs (which probably can be proven formally!) that there are
things which cannot be proven. That is essentially the incompleteness theorem
[0].

The smallest 'practical' example I know is a Turing machine with a proof that
we cannot prove (within a specific set of axioms) whether it will halt or
not.[1] It's not even very big: 1919 states.

[0]
[https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)

[1]
[https://news.ycombinator.com/item?id=16988612](https://news.ycombinator.com/item?id=16988612)

~~~
vanderZwan
> _Yes, we have proofs (which probably can be proven formally!) that there are
> things which cannot be proven._

That is a very subtly different question from the one I was trying to ask.
Or.. well.. I think it is. What I was trying to ask was whether there are
_formally provable theorems_ that cannot be expressed in a way that would let
a computer "compute" the proof.

I _think_ that this is not the same as proving that there are unprovable
statements.

EDIT: I just noticed my phrasing was off in the original question.. gah, this
reminds me of conversations with my mathematician friends IRL, where I would
always trip over my own sloppy use of human language. What I mean by
"computing" is _verification_, not _execution_.

~~~
dooglius
What you're talking about can be formalized via
[https://en.wikipedia.org/wiki/Arithmetical_hierarchy#The_ari...](https://en.wikipedia.org/wiki/Arithmetical_hierarchy#The_arithmetical_hierarchy_of_formulas)
where your notion of "computable" likely corresponds to some set between
(Sigma_1 intersect Pi_1) and (Sigma_1 union Pi_1) depending on some nuance. In
any case, you can show by diagonalization that there are statements strictly
outside these sets--that is, first order propositions that cannot be verified
or refuted by computer.

~~~
lonelappde
The question was whether there are statements provable by humans but not
computers, which is either trivially impossible or guaranteed, depending on
whether you allow the computer to use the same axioms as the people.

~~~
dooglius
It depends on how you interpret it, I interpreted it as asking if there are
provable statements that can't be "computed" directly.

~~~
vanderZwan
Uh... well... can I claim the _"sorry, not a mathematician, so my reasoning
about these matters is quite sloppy, relatively speaking"_ defense?

(The responses here have been very nice despite that though, thank you
everyone!)

------
olooney
Greg Egan's Diaspora has a rather good illustration of what a fully formalized
mathematical system might look like. He called it the Mines[1]: a
representation of the full acyclic digraph of all the theorems proved so far
by a community of mathematicians. A student can examine any proposition, see
if it's yet been proven (and if so, how, and from what antecedents) as well as
what deeper results might depend on it. Critically, a student might select a
research topic simply by travelling to the ends of the Mines and starting to
dig, or might select some not-yet-proven goal as an end point and start
working towards it... Today, each mathematician must build their own mental
model of the mines by exhaustively reading all related papers - a time
consuming and somewhat fallible effort. Such a comprehensive map would be one
of the main benefits of formalization.

The video mentioned Formal Abstracts[2], which is still getting off the
ground... a similar project that has been around for a while is Metamath[3].
Metamath uses an SRS (string rewriting system[4]) to formalize much
foundational mathematics. It's proof explorer is conceptually similar to
Egan's Mines: a database full of theorems and connections starting from
fundamental axioms and definitions. Today, Metamath only includes fairly basic
results, and has not yet reached a level where it can capture the state-of-
the-art in modern research mathematics the way Formal Abstracts and other
systems aspire to. To be fair, _no_ system today is really at that level yet.
Coq has a good set of packages and I think is probably in the lead today. I
don't know where Lean is in comparison.

The aspect of formalization where it somehow lends additional credence to
proofs seems less important. Famously, when Hilbert went to fully formalize
Euclid's geometry, he found a missing axiom[5]. (Euclid had assumed that two
circles with centers closer than their radii would intersect somewhere...
which is not true over the field of rational numbers! Therefore this requires
an explicit axiom, which had been overlooked for 2,000 years!) This seems to
be the exception rather than the rule... as far as I know, no important
theorem has been "overturned" as a result of formalization efforts.

[1]:
[http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumbe...](http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf16)

[2]: [https://formalabstracts.github.io/](https://formalabstracts.github.io/)

[3]:
[http://us.metamath.org/mpeuni/mmset.html](http://us.metamath.org/mpeuni/mmset.html)

[4]:
[https://en.wikipedia.org/wiki/Rewriting#String_rewriting_sys...](https://en.wikipedia.org/wiki/Rewriting#String_rewriting_systems)

[5]: [https://math.stackexchange.com/questions/2074781/can-we-really-intersect-circles/2076491](https://math.stackexchange.com/questions/2074781/can-we-really-intersect-circles/2076491)

~~~
lidHanteyk
I politely disagree; Metamath doesn't really have any ceiling on its
complexity and can do anything that fancier, slower solvers can do. Did you
have a specific example of something that Metamath cannot capture? Keep in
mind that, in terms of formal power, there ought not to be a higher level of
power than the level that Metamath accesses.

~~~
olooney
Oh, it _can_ capture everything. I believe that. It's just that it relies on
volunteers to actually do the work of formalizing advanced theories. It has
some pretty darn advanced stuff already, like complex numbers. But there's
also a huge amount of research-level mathematics which nobody's gotten around
to coding in the Metamath formalism yet. This is the same issue discussed in
the video: with millions of dollars and a full-time team, you could get
something like the proof of Fermat's Last Theorem in there, but it's not there
yet.

~~~
sanxiyn
Metamath has a proof of the Prime Number Theorem. Its coverage of advanced
mathematics is pretty much as good as any.

------
vackosar
The people working on homotopy type theory are trying to do something like this.
[https://en.wikipedia.org/wiki/Homotopy_type_theory](https://en.wikipedia.org/wiki/Homotopy_type_theory)

------
thekhatribharat
This kind of reminds me of that _Avengers: Endgame_ scene where _Tony Stark
(cf. Kevin Buzzard)_ was working with _F.R.I.D.A.Y. (cf. Lean Theorem Prover)_
to interactively look for time travel solutions. :D

------
dgellow
Also, Formal Abstracts, the project by Thomas Hales he mentions in the video:
[https://formalabstracts.github.io](https://formalabstracts.github.io)

------
adamnemecek
Mathematics really needs to embrace constructivism.

~~~
techwizrd
Are you referring to constructivism in terms of education? Or in terms of
proving the existence of an object in mathematics? Traditionally, it is not
necessary to construct an object to prove its existence. One can simply assume
the object does not exist and prove a contradiction. Requiring constructive
proofs does not seem to provide tangible benefits, but it does unnecessarily
hinder mathematical thinking.

~~~
adamnemecek
Mathematics, of course. Theorem provers work more nicely with constructive
mathematics.

~~~
krapht
You won't convince anyone of this until they start working heavily in a
theorem prover. With constructive proofs you can introduce certain automation
that is not possible otherwise; but as long as we are working with pen and
paper, constructivism is just a limitation on your proof techniques.

But you are aware of this already; I'm just writing it out.

~~~
c-cube
I'm surprised by this statement. Most of the research in automatic theorem
proving (including for first- and higher-order logics) is based on classical
logic, because it's much easier to reduce to a search for false than to try to
prove an arbitrary formula. The automatic provers able to do intuitionistic
proofs generally do it by encoding the intuitionistic logic into a classical
logic first.

Look at these provers, they're almost all based on classical logic, and even
on proofs by contradiction:
[http://www.tptp.org/CASC/27/SystemDescriptions.html](http://www.tptp.org/CASC/27/SystemDescriptions.html)

Even Isabelle/HOL, which is quite user friendly and has a lot of automation
(like Sledgehammer, which can call to the automatic provers mentioned above)
is based on classical logic with choice.
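A small Lean 3 illustration of the divide (using `classical.by_contradiction` from the core library):

```lean
-- constructively fine: A implies ¬¬A
example (A : Prop) : A → ¬¬A := λ a na, na a

-- the converse, double-negation elimination, needs the classical axioms
example (A : Prop) : ¬¬A → A :=
λ h, classical.by_contradiction h
```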

~~~
krapht
Hmm, I was thinking more about proof translation across isomorphisms. I'm not
speaking from my own experience here; I've just seen people grumble about it.

[https://leanprover-community.github.io/archive/113488general...](https://leanprover-community.github.io/archive/113488general/20549invalidoccurrenceofrecursivearg10ofrvecparamvcons.html)

------
username90
I'd bet that we will invent a general AI which can invent math faster than we
can make theorem provers do something useful.

What separates math from pure logic is that math has a set of very intuitive
axioms. So mathematics is first and foremost about stretching, and fixing
inconsistencies in, our intuition. I personally see no value in trying to
refactor the foundations of mathematics, like constructing integers based on
sets etc.; taking integers and their operations as axiomatic hurts nobody.
Theorem provers seem to be like that as well: people don't prove new things in
them, they just refactor old things to fit in, kinda like people rewriting
their services in languages with more street cred like Haskell or Rust.

~~~
logicchains
>What separates math from pure logic is that math has a set of very intuitive
axioms.

Taking certain sets of axioms can lead to very unintuitive conclusions;
that's why some people care about building solid foundations. A classic
example is
example is
[https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox](https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox)
, which states "Given a solid ball in 3‑dimensional space, there exists a
decomposition of the ball into a finite number of disjoint subsets, which can
then be put back together in a different way to yield two identical copies of
the original ball." If we change the foundations to remove the axiom of
choice, this paradox (and many others like it) is destroyed.

