Titans of Mathematics Clash Over Epic Proof of ABC Conjecture (quantamagazine.org)
373 points by digital55 on Sept 20, 2018 | 261 comments



I have an MSc in mathematics so I am by no means an expert in mathematical proofs, but I get the gist of them. To me a proof is literally a logical argument that you can follow to "believe" that a theorem is true. I do worry how many people actually understand or verify mathematical proofs. How many people have actually read and verified Perelman's or Wiles' proofs? (I'm only singling them out because they're famous, not because they're particularly sketchy.) We seem to think that once a proof is published in a reputable journal then it is definitely true, but really we should only think "a small number of qualified people have read it and think it is fine". The foundations actually seem a bit shakier to me than people appreciate. Perhaps machine-verified proofs will be the answer one day.


The really interesting case here is Voevodsky, who found an error in one of his own proofs many years later and then got interested in foundations of Mathematics and automated proofs. Homotopy type theory is an interesting development, particularly if you are into programming languages.


I don't really understand why all of known mathematics hasn't been automatically proven yet. We, as people, can understand very formal proofs. Mathematics is a very strict science, with axioms and theorems that follow from them. It should be a perfect application for computing. I'm not talking about the computer proving theorems by itself, but a mathematician should write a proof in some formal language and a computer should be able to follow that proof.


If you've ever tried encoding proofs in a proof assistant such as Coq (which is what the INRIA folks used to encode the four color theorem and the Feit-Thompson theorem), you'll realise just how painful it is --- I speak as someone who's done this for fun (and now for research. [my report is available here](https://github.com/bollu/dependence-analysis-coq/blob/master...)


If you've ever tried encoding proofs in a proof assistant such as Coq... you'll realise just how painful it is

Is that a result of how painful the tool is to use? (And by “tool” I mean any such tool - coq or whatever.)

What would need to be improved in the state-of-the-art of automated proof validation for the process to be less painful?


> What would need to be improved in the state-of-the-art of automated proof validation for the process to be less painful?

I think there are two parts:

1) Improve the syntax of theorem-proving languages so it's more intuitive, and build up a wealth of "libraries" so you don't have to start from the basic axioms (of arithmetic or the reals, for example) every time you want to prove something.

2) Provide some sort of computational intelligence to fill in the natural gaps in proofs. Humans tend to leave a large number of more or less trivial gaps even in rigorous mathematical proofs, and better syntax alone won't remove them. For this, some kind of AI system that tries to derive the intermediate results and complete the proof automatically would probably be ideal (a small sketch of what this looks like follows below).
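
As a toy illustration of what filling a trivial gap looks like, here is a minimal Lean 4 sketch (Lean is just one choice of assistant, and the omega tactic is assumed to be available, as it is in recent Lean/Std versions): a step a paper proof would wave through as "obvious" is discharged by an automation tactic instead of being spelled out lemma by lemma.

    -- A step a paper proof would call "obvious": linear arithmetic over Nat.
    -- Spelled out by hand this is a chain of add_comm/add_assoc/le lemmas;
    -- the `omega` decision procedure closes it in one line.
    example (a b c : Nat) (h : a ≤ b) : a + c ≤ b + c + 1 := by
      omega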


The Isabelle semi-automatic theorem prover tries to help with (2) if you use the sledgehammer strategy and print out the simplified proof with the isar_proofs argument, which you can more easily read and reintegrate into the main document if needed.

You can help its heuristics by providing it a set of potential proof facts explicitly.

The libraries available for it are already quite huge... sometimes in a few encodings as well. (valued, typed, functional etc.)

https://www.isa-afp.org/


I think Coq has some kind of ML software. I saw it mentioned in the Proof General UI. I'm not sure how good it is.


Possibly this? http://www.macs.hw.ac.uk/~ek19/ML4PG/ It's mostly in maintenance mode (at least, I seem to have inherited its maintenance; and it's such a horrible mess of Emacs Lisp + Matlab + shell scripts that I don't dare touch it unless asked ;) ).

It's used for clustering statements and proofs, the idea being that if you're stuck trying to prove something, you might be able to find something "similar" in an existing library to use as inspiration. It's mostly to help with manual searches, rather than automatically solving anything (although there is some related work on doing that for Coq called SEPIA https://arxiv.org/abs/1505.07987 and ACL2 http://www.macs.hw.ac.uk/~ek19/ACL2ml/index.html ).


Imagine convincing someone that a web service could be written, vs getting it written and deployed to production.

And none of its dependencies actually exist either. Like someone just said... relational database? Yeah we could write one of those...


The bottleneck is not the tool itself, but the theory it builds upon: the Calculus of Constructions in Coq's case. CoC and other calculi oriented towards proofs have limitations.

To better understand current challenges with provers I can recommend this episode of the Type Theory podcast with Dan Licata as a guest: http://typetheorypodcast.com/2015/01/episode-3-dan-licata-on...

It's quite dense, but there are some bits that I, as a programmer, could understand.


The vast majority of formal proofs don't run into any limitations with the CiC. They run into more practical limitations with the tooling and available libraries and tactics.


https://github.com/bollu/dependence-analysis-coq/blob/master...

Put a space before the final ), else it becomes part of the URL. Thanks for the paper, by the way.


The fact that encoding Math proofs in a formal language is currently painful is well acknowledged.

However, it shouldn't deter us from the goal.

People used to code in assembly and Fortran. It was and still is quite painful.

But programming languages have improved, and resorting to those original languages (which is what Coq is) is rarely necessary these days.


What is the painful part? The level of detail required for it to check?


Current proof assistants work via the Curry-Howard correspondence [1]; the gist of it is that statements of first-order logic can be encoded as types. All one must do is write an expression that type-checks against the statement one is trying to prove. That turns out to be fairly hard for reasonably complex proofs.

[1]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


To be only a little more precise, every proposition becomes a type of its own. The AND operator forms product types, the OR operator forms sum types, IMPLIES forms a function type... The proof of a theorem turns into an inhabitant of the theorem's type.
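
To make that concrete, here is a minimal sketch in Lean 4 (any assistant built on the correspondence would look similar): the proposition is written as a type, and the proof is literally a program inhabiting that type.

    -- "A and B implies B and A": the proof is a function that swaps a pair.
    example (A B : Prop) : A ∧ B → B ∧ A :=
      fun ⟨a, b⟩ => ⟨b, a⟩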


The painful part is that you're using tools which are trying to bridge between wildly different logical foundations. Just because a computer beeps and says "proof correct" doesn't mean you've proven what you thought you have, so we need to first translate the logic in which your assumptions are derived into the logic of the machine. That is a complex and subtle project.

In other words, it's actually very easy to prove something with a computer, but it's very difficult to prove the thing you actually set out to prove. It's not any easier than writing a bug-free program.


> The painful part is that you're using tools which are trying to bridge between wildly different logical foundations.

If you're referring to the fact that many proof assistants (e.g. Coq) are based on constructive/intuitionistic type theory, whilst most mathematics is formulated in classical set theory, then yes there is some translation involved. However, I would point out that we can use classical logic pretty easily by adding excluded middle, double negation elimination, etc. to an intuitionistic system (this is available in the Coq standard library https://coq.inria.fr/library/Coq.Logic.ClassicalFacts.html ). There are also systems which can use set theory (directly or indirectly), like Metamath and Isabelle.
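
For illustration, this is roughly what that looks like in Lean 4, where Classical.em plays the role Coq's classical axioms play (a minimal sketch, not the only way to do it): double negation elimination derived from excluded middle.

    -- Double negation elimination, derived from the excluded-middle axiom.
    example (P : Prop) : ¬¬P → P :=
      fun h => (Classical.em P).elim id (fun hnp => absurd hnp h)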

Personally, I much prefer type theory (which I self-learned, before pursuing a related PhD) to set theory (which I was taught as an undergraduate and enjoyed, but not enough to do it as a hobby). However, I'm well aware that my computer scientist background is nothing like that of a trained mathematician.


Oh, come on. "It's not any easier than writing a bug-free program" is not even remotely true. Getting the spec right is way easier than getting the implementation right, in almost all cases. And the logic isn't that different from set theory--there's even a set theoretic model (which you get when you use Prop, to which you can add classical axioms).


If getting the spec right is way easier than getting the implementation right, then strict waterfall would be the most effective way to write programs, and the holy grail of software engineering would be to automate the process of generating implementations from formal specs (there are some people who think exactly that, but that has been the case for about five decades, with effectively no impact on how software is actually made.)

The reality is that we often overlook issues with the spec until we try to implement it, or try using what we have implemented, even when we are trying to get the spec right first.


I have no idea where you're going with this, but getting the spec right is very different from actually finding a proof for it. I never said actually finding the proof was always easy. But I will say this for sure: getting the spec right and finding a proof for it are jointly much easier than writing a nontrivial bug-free program without a theorem prover. The success or lack of success of waterfall is not relevant here, as that has very little to do with formal verification; additionally, the success of most software methodologies is not measured solely on whether the end product was correct.


If programmers sat around writing <10K line pure programs with no user interaction, they would have just as lovely a time as mathematicians. If theorem proving in mathematics is easy for you, it is because you aren't trying to prove hard things.


Getting a spec right is not the same as proving a theorem. It is the same as stating a theorem. And I absolutely disagree with you that making small pure programs magically makes them bugfree.


Would it be possible to simplify the system by making default assumptions about a consensus/common-sense logic? Are topologists interested in different logical foundations?


You're right (and the GP isn't). A proof of your topological theorem in any one of Coq/Isabelle/Lean is a pretty good indication that it's correct, in spite of their various foundations.


... assuming you correctly wrote your theorem in those frameworks.


I think you are exaggerating the difficulty of theorem statement compared to proof. Here is one (admittedly extreme) example in Lean, from the formal abstracts website.

theorem Wiles_Taylor : ∀ (x y z n : nat), x > 0 → y > 0 → n > 2 → x ^ n + y ^ n ≠ z ^ n := more years than I have to spare


A lot of proofs at masters level and beyond omit a lot of details because they can easily be checked, but would be tedious to write out explicitly. This means that a 1-page proof would probably correspond to 2-3 pages (or more!) if written out with all the details (e.g. with standard but tedious arguments fleshed out properly, and all cases dealt with explicitly rather than resorting to an 'and the other cases follow similarly' remark). When written in a proof assistant this would become even longer, because natural language allows us to brush a lot of details under the carpet when they are understood from context.


A great example of how tedious it is to list out all the steps of a proof is the Principia Mathematica https://en.wikipedia.org/wiki/Principia_Mathematica

"Proof assistants" like Coq and Isabelle can generate some of the more "obvious" steps using metaprogramming (known as "tactics").


> Mathematics is a very strict science, with axioms and theorems that follow from them.

Mathematics textbooks would be intractably longer if they spelled out every step in explicit, formal detail. And then the proofs wouldn't make sense to people because the core ideas would be obscured by the formality! A typical mathematics proof is intended to be read by a thinking human assumed to have some level of mathematical sophistication and knowledge. These assumptions make proofs far more implicit than proofs in a formal language, which essentially only assumes that the "reader", i.e. the computer, can push symbols around and compare them.


This is a common objection from mathematicians, and a very understandable one. I think both Lamport and Voevodsky have made important contributions here [1], namely in presenting ideas from software that we use to manage _exactly_ your complaint.

The way in which mathematicians structure proofs has fallen behind the ways in which software engineers structure programs (which are isomorphic to proofs). The question is then to develop languages which encompass these modern methods.

This is really the goal of the homotopy type theory project: to provide a language which is both machine-checkable and easy to use for informal reasoning.

[1]https://www.microsoft.com/en-us/research/publication/write-2...

https://www.youtube.com/watch?v=kOY-XE29_d8


It's also important to note that this situation is _precisely_ what Voevodsky wanted to prevent:

For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error. [1]

Most mathematicians still think of machine checking as something for dealing with a large _number_ of cases; but Homotopy Type Theory's goal is to manage abstract complexity, not just size.

[1]https://www.quantamagazine.org/univalent-foundations-redefin...


I own a copy of the HoTT book :)

> Namely the way in which mathematicians structure proofs has fallen behind the ways in which software engineers structure programs (which are isomorphic to proofs).

The problem is that the required structure for writing sophisticated mathematical proofs is much more demanding than even a piece of software as complex as the Linux kernel. Part of that is because software as such, when interpreted as a proof, doesn't verify properties nearly as strong as a real theorem. The difference between a proof of `Integer -> Integer` and `Integer -> Odd Integer` is already a gulf the vast majority of software does not bridge.
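
To illustrate that gulf, here is a minimal Lean 4 sketch (the odd-ness predicate is defined locally just to keep it self-contained): the first function has the weak `Integer -> Integer` type that most software lives at, while the second must also deliver a proof about its output.

    -- A locally defined "odd" predicate, to keep the example self-contained.
    abbrev MyOdd (m : Int) : Prop := ∃ k, m = 2 * k + 1

    -- "Integer -> Integer": the type says nothing about the result.
    def weak (n : Int) : Int := 2 * n + 1

    -- "Integer -> Odd Integer": the result carries a proof that it is odd.
    def strong (n : Int) : { m : Int // MyOdd m } :=
      ⟨2 * n + 1, ⟨n, rfl⟩⟩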

We're just not there yet. Voevodsky's program is a great first step but just that: A first step. No one really knows what the next thousand steps will be. It's not straightforward at all.


I agree completely that the tooling is not there today.

However:

"The problem is that the required structure for writing sophisticated mathematical proofs is much more demanding than even a piece of software as complex as the Linux kernel. "

misunderstands why and how we use the Linux kernel. Linux isn't that difficult to understand, really (both in LOC and in the abstractions it presents). It's simply a base which can be relied on and which provides abstraction points we can interface with (both as software _and_ hardware).

The important part here is that not only does Linux give us a reliable "library" of behavior, but it is one that is common and reusable.

This "library" of behavior is something we've begun to see present itself in category theory. This is why so many modern papers start with "we show that there exists an isomorphism...".

The biggest thing _modern_ computing and libraries focus on is composition of behavior, a.k.a. functions that take functions. If a computational interpretation of univalence is found we will have similar capabilities in math (e.g. making it possible to simplify the process of moving between isomorphic structures). I believe this would be a major stride towards broader use of machine verification.


> The way in which mathematicians structure proofs has fallen behind the ways in which software engineers structure programs (which are isomorphic to proofs).

If we accept that a software program is isomorphic to a proof, is there anything that isn't isomorphic with a proof?

I can prove Pythagoras' theorem with a diagram, so by extension anything in the physical world is a manifestation of mathematical theory and, since it exists, is isomorphic with a trivial proof for something.


I'm inclined to agree with you; though the key word here is "trivial".

Types are propositions, and programs are proofs of their types. Unfortunately most programs are overly-complicated proofs of trivial propositions (when looked at from the mathematical side of curry-howard; of course what's actually the case is that we tend to only use types to specify a tiny part of what our programs are meant to do).

In an untyped language every expression has the same type (which we might call `Thing`, and contains integers, strings, functions, etc.) so in the curry-howard sense every expression is proof of the same trivial proposition.

I'm inclined to think of the laws of physics (whatever they are) as a programming language, and physical systems as programs running in that language. On first impression, it doesn't seem like physics is typed; hence every physical system is a proof of a trivial proposition (essentially: physical things can exist). It would be interesting to hear a counterargument though.

Note that in these cases it's the types/propositions which are trivial, not necessarily the programs/proofs. Consider that we could, if we wanted, prove a pretty trivial proposition like `((A -> B) AND A) -> B` in a roundabout way that involved applications of Fermat's last theorem. I tend to call such things "(over)complicated", rather than "complex", since their complexity isn't inherent or required.
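
For what it's worth, that trivial proposition takes one line when proved directly (a Lean 4 sketch):

    -- ((A -> B) AND A) -> B, proved directly rather than via any detour.
    example (A B : Prop) : ((A → B) ∧ A) → B :=
      fun ⟨f, a⟩ => f a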


>I can prove Pythagoras' theorem with a diagram

I don't think I agree with this statement. As a matter of fact, I've always had a deep unease with so-called "geometric proofs", where a succession of visual transformations of a diagram are used to prove a theorem.

You can certainly explain the intuition behind the proof of Pythagoras' theorem with a diagram, and there's huge pedagogic value in doing that.

But to me it isn't a formal proof until it has been codified in a language that a computer can walk from start to end making sure each step is valid.


scratches head I'm pretty sure I can come up with a codified-language proof that you will accept, then define a mapping between the language and transformations of a diagram. And while I'm not an expert on isomorphism, I'm pretty sure that makes the visual proof isomorphic with the codified-language one and hence valid (and, indeed, computable).

Basically, although code and proofs are in a sense isomorphic, I think everything and proofs are isomorphic in the same sense. So claiming that proofs should be structured to the same standards as code because code is isomorphic to proof is a bit of a non-sequitur. Why not show the layout of my rock garden is isomorphic to code and hence proving something (my rock garden can be Turing complete) and then claim my rock garden should be structured like a program?

Basically, the idea that the structure mathematicians favour for proofs has fallen behind the way programmers lay out code is not well substantiated, and the idea that 'proofs are isomorphic to code' is not useful because it is far too broad.


It's not so much that programs are isomorphic to proofs; it's that mathematics and programming are the same activity. It's essentially a historical accident that the people who practice this activity have ended up in different groups, and that those groups use different terminology for the same things ("program" vs "proof", "type" vs "proposition", "execute" vs "simplify", "tuple" vs "conjunction", "union" vs "disjunction", "function" vs "implication", "dependent function" vs "universal quantification", "interface" vs "existential", "programming language" vs "logical system", etc.).

All of the relationships between these are the same too (e.g. a program `p` has type `T`, iff the proof `p` proves the proposition `T`; the proof `w` proves that the existence of `x` implies the proposition `y AND z` iff the program `w` is a function which generates a tuple of `(y, z)` when given an argument implementing the interface `x`; and so on).
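
A tiny Lean 4 example of one of those dictionary entries, "dependent function" vs "universal quantification": a proof of a ∀ statement is literally a function sending each n to a proof of the instance at n.

    -- ∀ n, n + 0 = n  is proved by the function returning `rfl` for each n
    -- (n + 0 reduces to n by the definition of addition).
    example : ∀ n : Nat, n + 0 = n :=
      fun _ => rfl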

Mathematicians and programmers do care about different things. Mathematicians are more concerned with informally proving previously unknown results; which corresponds to something like learning that a type has a value by sketching some pseudocode. Programmers care deeply about how many steps their proofs (programs) take to simplify; how easy it is to modify a proof to prove a slightly different proposition, etc. Yet there are so many good ideas on each side that are immediately applicable (e.g. dependently typed languages take rigorous logical systems and view them as programming languages; proof engineering takes software engineering ideas and applies them to building and maintaining proofs).

I'm struggling to think of an activity involving rock gardens that corresponds precisely to, say, universal quantification.


Note that there are formal, machine-checkable languages to describe geometric proofs: http://lambda-the-ultimate.org/node/3899

Of course, that still leaves the same problem of explicit proofs being tedious, i.e. being able to prove Pythagoras's theorem "with a diagram" does not mean being able to prove it "within an explicit, formalised diagram language".


Speaking of Lamport's "How to write a 21st-century proof" paper, here's a video of a great presentation he gave on this topic last year:

https://www.heidelberg-laureate-forum.org/blog/video/lecture...


Why can't textbooks use code-folding style display to hide unwanted detail? Paper books are basically obsolete except as a master copy for archival.


I participated in a discussion on /r/math about this a few months ago, and there was surprising resistance to ideas like this. Most of it fell into two categories. The first was people with a paper fetish who refuse to believe that there are now computer display technologies that provide adequate readability and resolution to be a full-time substitute for books. The second was people who can't imagine a usable way to typeset proofs that embed metadata about which lines are the key steps and which could be folded. This is probably influenced by the fact that LaTeX doesn't support this kind of semantic markup and it would be awkward to add.


Except... it's not a fetish. Paper books have many advantages. It is easily annotable. (none of the current ebook types are easy). It provides tactile feedback, which can be used for organization - folded corners give you two types of indices (top/bottom outer corner). The entire structure of the book as a stack of pages gives you immediate visual feedback where you are, and it's effectively random-addressable. ("Yeah, it's roughly a third in, check there"). Heck, you can color page edges to make it even easier, and that gives you three spatial indices on top of the two tactile ones. It's not as unwieldy as screens. It is flexible, which makes it nicer to handle. It doesn't need to be charged. It doesn't break when you drop it. (Not to mention you can kill a fly with a paper book, ebooks don't work)

Yes, e-books have other advantages - but calling it a paper fetish basically means you're making an uninformed call, because you're not even aware of the advantages.

The second, you could have easily addressed by proposing a markup if it were easy to do so. The fact that you haven't, and aren't even aware why people think it's hard, means you're again coming from an uninformed position.

And you thought the resistance was surprising?


The reddit thread in question was surprisingly lacking in discussion of the usability aspects you're focusing on. I think most of those issues make for a much more productive conversation, because there's real room for improvement in those areas—paper and current computer/tablet technology each have their own usability pros and cons. I'm not at all uninformed about those usability challenges. I was just surprised and disappointed at how many people assumed that paper had an unbeatable advantage on the purely visual aspects of presenting information, as if computers were still limited to 100dpi with really poor black levels.

> The second, you could have easily addressed by proposing a markup if it were easy to do so. The fact that you haven't, and aren't even aware why people think it's hard, means you're again coming from an uninformed position.

I didn't propose any specific markup because the specific syntax isn't the problem. The problem is that the predominant markup language for mathematical documents is LaTeX, which is a bit too presentation-oriented rather than semantic/structure-oriented for it to be really easy to use to produce documents that can be interactively folded (especially when the output file format is almost always PDF). Something that used e.g. MathML and an HTML/XML-based overall document structure could trivially encode all the information necessary for rich interactive reading, and proposing specific tag or class names to indicate what lines of a proof should be collapsed by default only invites bikeshedding.

I think you're quite unjustified in accusing me of being so thoroughly uninformed.


>I was just surprised and disappointed at how many people assumed that paper had an unbeatable advantage on the purely visual aspects of presenting information, as if computers were still limited to 100dpi with really poor black levels.

I'm not surprised. Virtually all "designers" think low contrast text is better, despite high contrast being preferred on paper. They deliberately reduce contrast by setting the text color to gray. Even Firefox's Reader View, which otherwise fixes most mistakes of designers, sets the text to gray unless you override it in userContent.css. Desktop GUIs often have low contrast text too, which is also difficult to fix. Unless somebody knows how to fix this, their expensive monitor will be wasted.


Actually, LaTeX would be quite sufficient; basically what you are proposing is an appendix where you list additional details.

The real reason people don't do this is simple: it would require an inhuman amount of work and detail, which would likely be wrong in places and would for sure be too boring to be checked. Most science is wrong in some aspects; sometimes it is better to have a meaningful intuition that can be understood by other experts (and maybe rejected).


Paper math books also do not serve up distracting ads.


Math ebooks shouldn't either lol


people with a paper fetish who refuse to believe

Maybe there was surprising resistance because you spoke like that?


Nah. I'm only using language that harsh now because of how disappointingly irrational and weak the objections on reddit were. It's odd to see mathematicians resistant to quantifying what they believe to be inferior about computer displays, and improperly generalizing what seemed to be their experience with $500 laptops to also argue against high-quality eInk, OLED and IPS LCDs.


Two points:

1) Mathematicians write all over the papers and books they read. Electronic versions of this exist (e.g., Xournal, written by a well known mathematician), but they tend not to be as convenient as simply scribbling on paper.

2) Mathematicians digest papers nonlinearly. Digital presentations don't usually lend themselves to flipping back and forth between pages. At times, I suspect mathematicians use papers and books as memory mansions, organizing concepts by relating them to their location in the physical copy.


> Mathematicians digest papers nonlinearly.

That's a fairly weak argument: you can browse an electronic document in a non-linear fashion way easier than a book.

And also: books don't have CTRL-F


> That's a fairly weak argument: you can browse an electronic document in a non-linear fashion way easier than a book.

How do I make handwritten side notes? How can I create a bookmark for a specific page? How can I view multiple non-contiguous pages next to each other? How can I reference a specific part in an electronic text (in non-electronic text "3rd paragraph on page 11") and send it to a collaborator?


I was specifically addressing the "non-linear" part, and none of the things you list have anything to do with that.

However, to nevertheless address your points:

>How do I make handwritten side notes

There are a number of applications for this (e.g. Xournal), and when combined with a touchscreen on a decent hi-res modern device, we're getting close to what paper can do. I'll grant you: this is still the weak point of computers as compared to pen and paper (especially for scribbling diagrams), but it won't be for long.

>How can I create a bookmark for a specific page

I'm not sure if you're trolling here, all e-book readers I've used have this feature.

>How can I view multiple non-contiguous pages next to each other?

Multiple windows ? CTRL+N ?

>How can I reference a specific part in an electronic text (in non-electronic text "3rd paragraph on page 11") and send it to a collaborator?

Cut and paste?

Or ... sending him an email with ""3rd paragraph on page 11"" in the body?


I think if you reframe the problem it can be much easier (if not trivial) to find solutions.

If you present a 100 page book as a folder of 100 image files instead, most of these become image manipulation problems:

>How do I make handwritten side notes?

The same way you'd edit an image.

> How can I create a bookmark for a specific page?

Create a subdirectory with the page or a link to the page.

> How can I reference a specific part in an electronic text (in non-electronic text "3rd paragraph on page 11")

Cut and paste the excerpt into a new image file.

> and send it to a collaborator?

File sharing is a solved problem. Hell, you can even share it on Instagram if you so choose.


It's odd to see mathematicians resistant to quantifying what they believe to be inferior about computer displays

They're mathematicians, not display interface experts. This is like demanding that someone explain to you why they don't like a particular kind of food -- people are allowed to dislike things without having a complete internal axiomatic system justifying it.


Dislike, sure. Personal preferences are acceptable. But claiming that it's impossible for a computer/tablet display to be as readable as paper and that all electronic displays cause more eyestrain is unreasonable. Especially when the person making that claim has already demonstrated that they are familiar with terms like resolution, contrast ratio and form factor, but they fail to even propose a hypothesis for what might still be inferior about the best computer display technologies.


Mathematicians are happy to use computers for typesetting papers, but not for the initial creative phase:

1) computer displays require you to scroll, which interrupts your mental flow

2) paper and pencil or pen is free-form

3) in advanced mathematics, results are rarely calculated in numerical form, so the computer isn't helpful

4) notebooks are permanent. No backups needed.

I imagine the same is true for most cartoon artists.


> improperly generalizing what seemed to be their experience with $500 laptops to also argue against high-quality eInk, OLED and IPS LCDs.

And what grad student can afford those at [Insert State University]? I was a PhD student in math not long ago, and vastly preferred paper books, and even printed out articles. It is way, way easier on your eyes than something actively blasting light. Maybe in 20 years high-quality e-ink will be cheap enough, but that has been promised since forever.


> It is way way easier on your eyes than something actively blasting light.

See, this is the kind of bad argument that really irks me. There are good arguments about how computers aren't ready to replace pen and paper for actively doing math, and even a few shortcomings for computers replacing paper textbooks. But "actively blasting light" doesn't mean anything. Photons are photons. If you think a backlit display is somehow harder on your eyes than an indirectly-lit piece of paper, then you should be trying to figure out whether you simply have the backlight set too bright for your surroundings, or if your screen's contrast ratio at reasonable brightness levels is inadequate. Your LCD's default settings are probably optimized for movie-watching more than reading, but that's easily fixed and definitely not an inherent limitation of all backlit displays everywhere.


>not an inherent limitation of all backlit displays everywhere.

This may be true, but in my experience(not very wealthy, typically sub $400 devices) the presets are set such that 0% backlight in the OS is glaringly bright. I cannot turn it down without rooting the device.


Maybe for tech devices we should have two separate tracks: one for the wealthy and people who can modify devices, and one for the poor and people who cannot. Even my Thinkpad laptop under Ubuntu has a setting for 10% brightness, which is far too high for a truly dark room. The next lowest setting is 0%, which is absolute dark. How do you use your laptop under such conditions?


You could use xbacklight to have a finer control of the display brightness. You can also rebind the brightness keys to shell scripts that use "xbacklight -inc <percent>" or -dec, with a finer resolution.


I don't see how backlight settings for a truly dark room are relevant here. I've been annoyed by several of my devices being unable to dim far enough, but never in a situation where I would expect to be able to easily read words printed on paper. An office or classroom setting that has adequate light for pen and paper work could be too dim for an LCD at 100% brightness to be comfortable, but almost any device can be dimmed enough for that environment.

Color temperature can also be a problem, but there's a lot of awareness of that issue nowadays. Most operating systems now support automatic adjustment of color temperature based on time of day, plus manual adjustment. Devices that adjust their color temperature to account for ambient lighting are starting to catch on, and I expect they'll be pretty common in a few years.


a lot of mathematicians prefer chalkboards over whiteboards too, because 85% of all known whiteboard markers are nearly empty.

love of math and love of technology aren't as closely linked as you believe.


Trying to learn math with a computer is frustrating. The computer eats up your desk space, you have to constantly move it around as you work on different things, and a mouse and keyboard eat up even more space. You have to have power, and you have to drop your pen every other time you need something.

Maybe you don't see the problem because you aren't doing any math?


I’ve been studying undergrad maths for the last year or two, and I had a bit of a breakthrough when I realized how much more effective it was for me to do all my work (notes, exercises) in LaTeX rather than with pen and paper. I can refactor at will, improving proofs, and the consistent tidy typesetting makes me think more systematically about the problem I’m working on.
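
In case it helps anyone, a minimal skeleton of the kind of file I mean is below (the package and environment choices are just one common option, not a prescription): amsthm gives theorem/proof environments, and each exercise becomes a small block I can refactor.

    \documentclass{article}
    \usepackage{amsmath,amssymb,amsthm}
    \newtheorem{theorem}{Theorem}

    \begin{document}

    \begin{theorem}
      For every natural number $n$, $n^2 \ge n$.
    \end{theorem}

    \begin{proof}
      If $n = 0$ the claim is immediate. Otherwise $n \ge 1$, so
      $n^2 = n \cdot n \ge n \cdot 1 = n$.
    \end{proof}

    \end{document}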


I've started doing everything in LaTeX - diary, everything I'm learning about, books I'm writing, maths, documenting my programming etc - the last few months too; it's going great. It took a few months to learn about LaTeX packages, basic TeX etc. (And every time I use TikZ I've forgotten it all again..) It's especially useful mathematically in avoiding the errors that come from doing page-long calculations with pen and paper - copying lines instead of writing them out, and the whole thing looking so lovely and neat, eliminates 95% of the errors I made before, and it's faster. I still do ideas, sketches etc on paper, but anything I'm likely to want to keep goes straight into LaTeX.

But also..all my favourite books have markings on each page, margin comments, turned-over corners etc. Any pdf book I read more than once or twice, I'll get a paper copy, I think.


What tool(s) do you use for working with LaTeX?


A Makefile to encapsulate all the details of how to generate the PDFs.

Text editor customizations to trigger `make <current-doc>.pdf` on save. (Emacs in my case)

A PDF viewer that reloads the PDF when it changes on disk (Skim in my case, which does auto-reload, but I also have some Applescript in the Makefile to poke Skim so that it notices the file change faster).

A good text editor LaTeX editing mode (Emacs in my case).

Random text editor customizations for working on LaTeX.

My main tips are

1. don't be shy of perfecting your work environment

2. use git to track both your work environment customizations and your LaTeX work.


In all of my papers, I have attempted to include an indent-based hierarchy of the argument. Adding code-folding to this would allow for so much more detail here.

I don't think we should have papers being nothing but folds, but there is definite value in presenting proofs in this way.

One potential downside is that there is no longer an order that guides the reader through the paper, because it isn't quite clear how deep the author expected you to unfold an argument.


Maybe a default path through the document could be specified by the author or by convention


I think the idea is that you go deeper only if you are stuck or unsure about something at your current level.


Why can they? You can't assume fully explicit proofs are practical just because they're plausible. The justification of practicality will be highly nontrivial. If it's impractical to write such a textbook then code folding gets you nothing.


If it is impractical now, that simply means the software needs improvement. If mathematicians stop being so accepting of informal proofs then there is a very strong incentive to improve the software.


> that simply means

The point is that it's not simple. It will require significant advances in computer science.


Is code-folding necessary? Instead you could put the demonstration in an appendix, and the authors would be free to decide what goes in the core of the article and what goes into the appendix. This could be done automatically, where chunks of the article would move back and forth between the core and the appendix depending on what the reader chose, and in each case a corresponding pdf could be generated.


You could say the same thing about code, yet we're able to build upon our abstractions to make something both high level and able to be broken down to the foundations.


Can't you make the formal proof software "smart" enough to prove a theorem given sufficient intermediate steps and hints about what other theorems/defs to use? i.e. basically what a book does? This would make writing formal proofs much easier, wouldn't it?


There is a big difference between explaining / teaching a proof and verifying its correctness. We're talking about the latter here.


> Mathematics textbooks would be intractably longer if they spelled out every step in explicit, formal detail.

Just as an example, Russell & Whitehead's Principia Mathematica famously requires some 400+ pages to prove that 1+1=2 (well, they prove some other stuff, too, I suppose.)

See the image and caption here to get a flavour: https://en.wikipedia.org/wiki/Principia_Mathematica


Someone already linked metamath, it also gives a flavor of just how long some proofs can be when you fully formalize them and start from ZFC axioms. From the trivia page at http://us.metamath.org/mpeuni/mmset.html#trivia "The complete proof of [(2+0i) + (2+0i) = (4+0i)] involves 2,863 subtheorems including the 189 above. These have a total of 27,426 steps—this is how many steps you would have to examine if you wanted to verify the proof by hand in complete detail all the way back to the axioms of ZFC set theory."


Depends on your starting point. You can prove 1+1=2 quite easily from the Peano axioms:

1+1=1+S(0) [definition of 1]

1+S(0)=S(1+0) [definition of addition, a+S(b)=S(a+b)]

1+1=S(1+0) [transitivity of equality]

S(1+0)=S(1) [definition of addition, a+0=a]

1+1=S(1) [transitivity of equality]

S(1)=2 [definition of 2]

1+1=2 [transitivity of equality]
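
For comparison, in a proof assistant most of those definitional steps happen automatically; a minimal Lean 4 version is:

    -- 1 + 1 reduces to 2 by computation, so reflexivity closes the goal.
    example : 1 + 1 = 2 := rfl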


It's not so much to prove that 1+1=2; rather to set up the definitions and background entirely from scratch. If you had to define the grammar and vocabulary of English before you wrote a sonnet, your sonnets too would be exceedingly long.


Exactly right. Most of those background steps do things like establish the basics of complex numbers, and you can reuse them for anything else involving complex numbers.

The actual proof in metamath that two plus two is equal to 4 is at http://us.metamath.org/mpeuni/2p2e4.html

It is only 10 steps long. Yes, it is longer than an informal proof (such as one printed in a math journal), but it is not hard to follow. In this case this proof is rigorously verified by four different independent computer verifiers all the way back to the axioms of logic and set theory.

It is already possible to have proofs that are completely computer verified. Up to this point, mathematicians often do not use them because it is simpler not to. But in the long-term, I think we should change our expectations to start requiring proofs to be verified by computer if we really want to believe in them. Humans make mistakes, and reviewers often miss them. Computer verification, especially when there are multiple independent verifiers, provide much greater confidence that the claimed proof is actually correct. The number of so-called proofs that are actually not proofs is very large.

If you are curious about metamath, I have a video on YouTube that you might find interesting: https://m.youtube.com/watch?v=8WH4Rd4UKGE#


It's like saying that there are thousands of machine instructions behind being able to write printf("%.2f", x). It means that we should use the power of abstraction.
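
A small Lean 4 sketch of what that abstraction looks like on the proof side (the lemma name and the use of the omega tactic are just illustrative assumptions): prove a fact once, then reuse it by instantiation, just like calling printf instead of writing the instructions by hand.

    -- Prove a small fact once...
    theorem twoMulEq (n : Nat) : 2 * n = n + n := by
      omega

    -- ...and reuse it wherever it is needed, without re-deriving it.
    example (k : Nat) : 2 * (k + 3) = (k + 3) + (k + 3) :=
      twoMulEq (k + 3)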


That was exactly the point: if you abstract, then you are handwaving details (unless you abstract inside a formal system that itself defines methods for abstraction, and then you are back to square one).


You don't understand because you haven't looked into it very much :) You'll find that automated theorem proving and proof verification is far from trivial. State of the art tools are still very cumbersome and verbose to use and are infeasible to use in those problems where you need them the most.


Formal proof assistants and interactive theorem prover for mathematics already exist. So do proof libraries. See: http://metamath.org/

They are not used because it's tedious to type and prove every step in the process. Mathematicians don't write all steps down. You also need to build a library from the ground up.

We could do the same for computer programs, but we don't want to do that either.


I also want to add syntax: mathematical syntax is incredibly rich, complex, ambiguous and effective; so much so that in some papers it is the hardest part to understand, not always because of bad writing.

At least we are better off than theoretical physicists :-)


It is actually surprisingly fussy to set up a formal proof for anything complex enough to be interesting to a mathematician (for computer assisted verification). People take big leaps that are often quite awful to actually prove. I think you are right though, this area will definitely be very important and make big strides.


There’s this curious thing in mathematics where most things are either obviously true or obviously false and those that aren’t are mostly inconsequential. Those that remain are certainly more interesting but also typically their proofs are a few obvious steps.

Obvious in this case means something like “clear to a mathematician familiar with the field” or “otherwise many things will fall apart”

This leads to the situation where if a mistake is found in a theorem then it is likely still true even if the proof is wrong.


Some fun large counterexamples to conjectures: https://math.stackexchange.com/questions/514/conjectures-tha... So maybe Fermat's Last Theorem etc isn't always obviously true. But I do get your point.


A lot of areas like real/stochastic analysis are really difficult to map to any formal proving mechanism. Really there are only some areas in abstract algebra where it can be done, and in there you still need some proofs that link a set of real problems to a computable problem and finish with the computation (Groebner basis proofs are one example). But that's rare.


Well, first, use a logic programming language and report back. Then understand that math is almost wholly non-empirical. The things discovered or made up in math are not reflective of the real world, most of the time, regardless of how closely they resemble it.

Take, for instance, Euclidean geometry. In the world of equations, everything works. But a point or a line is not a thing that exists out in the world, much less their relations, which, when drawn on the surface of the earth, are probably not actually Euclidean at all.

Number theory in particular is almost never about math, but about the patterns mathematicians believe they recognize. Somebody along the line goes "oh hey, check out this flower. I bet this is the only flower of this type in the world" and then spends hundreds of years trying to prove it.


Because most mathematics is not constructive, and most automated theorem proving tools are based on constructive logic.


There are a number of proof verification systems that do not require constructive logic. So if you require classical logic, just use a tool that supports it.


That's irrelevant, all mainstream proof assistants are compatible with classical logic.


That doesn't sound right at all. The law of excluded middle is very easy to compute with; it's a college homework exercise.


Well, yes, because you can embed classical logic in constructive logic in many ways, but then you lose a lot of convenience.

Then if you have extensionality, things get really weird.


I don't really understand why all of known mathematics hasn't been automatically proven yet.

I think the problem is that this would be a significant amount of programming and data entry work, and there's no incentive for anyone to put in this work. There are many important math papers that are written at a level that is enough for a human math professor to understand, but not for a symbolic mathematics program to understand.

Since math research is generally prioritized by other math professors, and the current system of publications is optimized for math professors to read, they have no incentive to convert to a "formal language proofs are what matters" model. And there is no real financial incentive for computer programmers to implement something like this if math professors don't want it.

At the end of the day, what is the benefit of formally proving mathematical theorems via computer? If the only people consuming math proofs are other mathematicians, the only point is to check that you haven't made a mistake, which is useful but perhaps not worth much investment because the current system already does okay at catching mistakes.

If you could actually develop new mathematical research more easily with computer assistance, then I think these formal methods would be quite valuable, but I don't see how that would happen.


Formal proofs increase trust in mathematics. E.g. Fermat's Last Theorem is supposedly true, but I can't understand the proof. I have to trust the mathematicians who say the proof is correct, and I know that in other fields (most notably with the replication crisis in psychology) people accepted things as true when they weren't. If I had a formal proof I could run it through a theorem checker and see that it was correct, and I could use multiple independent implementations of the theorem checker for more confidence. If I'm using human mathematicians as black-box theorem checkers, I don't have the option of using multiple independent implementations, because all mathematicians work with other mathematicians, so it's possible that they're just agreeing with each other's mistakes. With the software I also have the option of examining how it works, all the way down to transistor level if I want to.


But who is demanding this sort of increased trust in abstract mathematics? Mathematicians don't seem to be demanding it, as this sort of work/research is not in high demand for publications. Non-mathematicians don't seem to be demanding it, because this sort of mathematics doesn't yet have much application outside of mathematics.


Some mathematicians are concerned that the possibility of an error in a computer program or a run-time error in its calculations calls the validity of such computer-assisted proofs into question.

(This might be one reason, from https://en.wikipedia.org/wiki/Mathematical_proof#Computer-as... )


That's silly, because humans have been repeatedly proven to be full of errors.


As dweller says below, I'll rephrase it here. Gödel's incompleteness theorem proves that in any consistent axiomatic system capable of expressing arithmetic there are statements which are true but not provable within the system. Gödel, Escher, Bach is a beautiful read for this.


I don't understand why the entire known software corpus is not automatically proven yet. Computation is a very strict process. I'm not talking about a computer writing code itself, but a programmer should write programs using some formal language and the computer should be able to follow the proof. I.e. no bugs.

/s


Who proves that the automated proof is correct? Who proves that... etc.

All this is tied up in Gödel's incompleteness theorem.


Verifying proofs is a much simpler job than creating a proof. Also, you can have multiple independently implemented verifiers, which act in many ways like multiple human reviewers. That is what the metamath community does, they have 4 independently implemented verifiers that check every proof. That is very strong evidence that a proof is correct, far stronger evidence than is typically accepted now.


Yes, but these will only accept proofs from within the mathematical framework of the multiple verifiers. I'm not saying that having 4 automatic independent verifications of a proof is useless, far from it! Just that the notion you can "prove" all of mathematics is almost meaningless, or demonstrably false.


I'm not sure what your point is here. If the point is that the provers require a particular format, and not just English sentences, then that is true but not really a problem. Writing programs also requires that you use a particular format, AKA programming language, and that happens all the time.

It's true that some things cannot be proven, due to Goedel's incompleteness theorems... but that is a fundamental limitation of mathematics and has nothing to do with the limitations of computer tools.

You can write proof verifiers that are very generic. The metamath proof verifiers, for example, can work with arbitrary axiom systems. Most people use metamath to prove statements using classical logic with ZFC, but there are a number of other systems that are supported. Quine's New Foundations set theory axioms and intuitionistic logic are also supported.


I was responding directly to:

"I don't really understand why all of known mathematics hasn't been automatically proven yet."

Which I believe is going to be related to English (or whatever natural language) if we really want to talk about what is meant by "all of known mathematics".

edit: I mean it's a crazy proposal, so you get a crazy response. I think somewhat intuitively we know you can't prove everything.


Gödel's incompleteness theorem shows the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic. The incompleteness theorem does not show that nothing can be proven.


Who said anything about nothing ;)


> Mathematics is a very strict science, with axioms and theorems that follow from them. It should be a perfect application for computing.

The short answer is that mathematics is not actually a very good application for computation. It's a rather poor application for computation. Herein lies the rub and the reason why it can't be done as easily as you'd suppose. Computationally-oriented questions and theorems are a subset of all available mathematics research in the same way that computer science is only a subset of all available applied mathematics. Stated another way, imagine the massive difference of effort involved between developing an application that works and formally proving that your application works for all possible inputs. It's extremely difficult to take the set of all possible test cases and reduce them to a set of equivalence classes which can be individually proved. This is the difference between constructive and non-constructive proofs.

Most mathematics is not actually constructive, which intrinsically presents a difficulty for computational theorem proving. Furthermore a lot of mathematics (particularly in the various flavors of analysis) is continuous in nature, which is inherently difficult for a computer to correctly model. For example, computers don't actually deal with real numbers, the floating points are just a good enough practical approximation of them.

If you look at the noteworthy cases where computer-assisted theorem proving has done very well, they generally fall into one of two categories:

1. Cases where the program mechanizes a well-established theorem with a vast amount of scaffolding theory already existing around it. In this way it's relatively easier to automatically prove much of undergraduate mathematics, the bulk of which has been reduced to very succinct proofs over the past few centuries (undergraduate mathematics doesn't really make it past the late 19th century in terms of novelty).

2. Cases where a vast problem can be provably reduced to a large but finite number of special cases, each of which can then be individually proven. The four color theorem is an example of this variant of automated theorem proving (a toy sketch of the pattern follows below).
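
A toy Lean 4 sketch of that second pattern (nothing like the four color theorem in scale, and the decidability instance for bounded quantifiers is assumed from the standard library): the statement is reduced to finitely many cases, which the kernel then checks by brute-force computation.

    -- A bounded claim checked by exhaustively evaluating every case below the bound.
    example : ∀ n < 10, n * n ≤ 10 * n := by
      decide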

To give you an idea of how monumental the gap between research mathematics theorems and computation is: in order to make as much progress as we have, we've had to develop an entirely new theory of foundational mathematics (type theory). This allows us to bridge the gap between theorems and automated proofs in a way that set theory doesn't really allow. But that's a huge undertaking and a very quickly growing field of math.

When it comes down to it, mathematicians do not tend to do research in, nor does their intuition naturally map to, a computationally rigid language. Essentially no current mathematician was taught or trained to do their research in a computational manner unless that was explicitly their field of research. Furthermore, while mathematical terminology looks formal to an outsider, there is a significant amount of symbolic overloading and notational interpretation in modern math. That doesn't mean most proofs are wrong, but it does mean that most mathematicians are informal; to the extent they write down proofs of their theorems, they've only ever needed to do so enough to get other mathematicians to say, "Yes, I see what you mean, this does logically follow from what we already both agree on."


> That doesn't mean most proofs are wrong, but it does mean that most mathematicians are informal; to the extent they write down proofs of their theorems, they've only ever needed to do so enough to get other mathematicians to say, "Yes, I see what you mean, this does logically follow from what we already both agree on."

THIS is the real reason more mathematics isn't computer-checked. You massively buried the lede.

> Most mathematics is not actually constructive, which intrinsically presents a difficulty for computational theorem proving

How so? There are plenty of theorem provers for classical logics.

> Furthermore a lot of mathematics (particularly in the various flavors of analysis) is continuous in nature, which is inherently difficult for a computer to correctly model.

You can do continuous mathematics in a theorem prover. E.g. much of the classical theory of ODEs has been formalized in various theorem provers. Proofs about infinite structures are still a finite sequence of axiom applications.

Furthermore, outside of theorem proving, numerical analysis and simulation of ODEs/PDEs is probably THE killer example of computers revolutionizing a field of mathematics...

> For example, computers don't actually deal with real numbers, the floating points are just a good enough practical approximation of them.

When you drill down into the model theory, you come to the amazing surprise that exactly the opposite is true.

Integers aren't decidable: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

Reals are decidable: https://en.wikipedia.org/wiki/Real_closed_field#Model_theory...

> To give you an idea of how monumental the gap between research mathematics theorems and computation is: in order to make as much progress as we have, we've had to develop an entirely new theory of foundational mathematics (type theory).

Again, there's nothing stopping us from doing classical mathematics in a theorem prover.


> When you drill down into the model theory, you come to the amazing surprise that exactly the opposite is true...Integers aren't decidable...Reals are decidable

I'm not talking about decidability. I'm talking about computability. Real closed fields are decidable, in that a Turing machine can determine in a finite number of steps whether or not a field of real numbers is algebraically closed. But that has no bearing on the fitness of real numbers for computation. Most real numbers are not computable, which is why floating point numbers are only a good enough approximation as I stated.


> I'm not talking about decidability.

If you are talking about theorem proving then you are most definitely talking about decidability! And especially if you are talking about wide-spread use of theorem proving (or lack thereof), then you are definitely talking about decision procedures!

Decidable theories are easier to build formal proofs for than undecidable theories because the former requires only pressing a button and grabbing a coffee/lunch, while the latter requires hard work manually encoding bespoke deductions.

> Real closed fields are decidable, in that a Turing machine can determine in a finite number of steps whether or not a field of real numbers is algebraically closed

No.

Real closed fields are decidable, in that an implementable algorithm can in finite time and without a single iota of effort on the part of the user prove any arbitrary theorem stated in first-order logic over the reals.

If you want to prove some arbitrary thing about Peano integer arithmetic, odds are fairly good that you're going to have to carefully program out the proof by hand.

If I want to prove something arbitrary about real closed fields, I can press a button and go for a run.

Computers are much better at proving things about real closed fields than they are at proving things about integers!
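To make the "press a button" point concrete, here is a rough sketch using the z3 SMT solver (an assumption on my part; pip install z3-solver). It is not Tarski's full quantifier-elimination procedure for real closed fields, but first-order statements over the reals like these are discharged with zero user effort:

    from z3 import Real, ForAll, Exists, Implies, And, prove

    x, y, z = Real('x'), Real('y'), Real('z')

    # Squares of reals are nonnegative.
    prove(ForAll([x], x * x >= 0))

    # The reals are dense: between any two reals lies a third.
    prove(ForAll([x, y], Implies(x < y, Exists([z], And(x < z, z < y)))))

Whether the default tactics handle a given quantified formula without hints can vary, but the contrast with Peano arithmetic stands: over the integers with multiplication, no such complete push-button procedure can exist.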

> I'm talking about computability... Most real numbers are not computable, which is why floating point numbers are only a good enough approximation as I stated.

One does not need to use a single floating-point number in order to axiomatize the real numbers and prove things using that set of axioms.

This has been done so many times that there are even survey papers about all the various approaches: https://hal.inria.fr/hal-00806920v1/document (Floating points are a nice optimization and therefore many people care about them in practice so that they can take fewer coffee breaks during their theorem proving sessions, but again, they are not necessary.)


Edit: never mind, it seems I confused deriving a proof with verifying it!


vbezhenar is talking about verifying proofs, not finding proofs.

Computers can easily verify that a proof is correct once the proof is written down using formal semantics. Proof checkers already exist.
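As a toy illustration, this is roughly what checking looks like; a minimal sketch in Lean 4 (Coq, Agda, Mizar, etc. would look analogous):

    -- The checker accepts a proof only if it type-checks against the stated theorem.
    example : 2 + 2 = 4 := rfl

    -- A statement about all naturals, closed with a lemma from the core library.
    example (n : Nat) : n + 0 = n := Nat.add_zero n

The hard part is writing the proof down at that level of detail in the first place; the checking itself is mechanical.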


Great point, thank you!


Wouldn't it bump into the same kind of problems that Bertrand Russell's attempts bumped into with the Principia Mathematica? Namely, crashing into Gödel's incompleteness theorem?

Disclaimer: What I don't know about maths would fill volumes, in both senses of the word

Edit: From the down-votes, apparently not.


Human mathematicians are bound by Gödel's theorem in exactly the same way computer software is. If an informal human-based "proof" does something that would be impossible with a formal software-based proof, then it's incorrect. There is no theoretical advantage to running the calculations on a human brain.


Sorry, I didn't intend to suggest that running a proof on wetware had advantages. Simply that complete, from-first-principles proofs were doomed to failure


How to Write a 21st Century Proof: https://lamport.azurewebsites.net/pubs/proof.pdf

From page 6 with figure 1, to page 10 with figure 3, it's clear that even just changing to structured proofs over prose proofs can be helpful in catching errors. Of course the rest of the paper goes into detail about going even further... I don't know if Lamport's ideas have gained any more acceptance in the mathematics communities, but I'm doubtful. At least if memory serves the ABC papers were classic prose-style...


Here's a video of an excellent presentation Lamport gave on "21st-century proofs" last year:

https://www.heidelberg-laureate-forum.org/blog/video/lecture...


> We seem to think that once a proof is published in a reputable journal then it is definitely true, but really we should only think "a small number of qualified people have read it and think it is fine". The foundations actually seem a bit shakier to me than people seem to appreciate.

I think you are understating the certainty of fundamental proofs like this.

Once such a fundamental proof is considered "true", it almost always has applications beyond what the original proof was used for. It is often used to try to prove things that we already know are "true" in other ways.

So, any "true" proof generally gets tested from multiple directions.

Wiles' proof is a good example of this. It doesn't just prove Fermat's Last Theorem. Quoting Wikipedia: "Wiles' path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves, established powerful modularity lifting techniques and opened up entire new approaches to numerous other problems."


True, but even then, how large do you think the field is? And how many people verify these proofs?

I have a PhD in an esoteric field myself. There are literally <50 people worldwide who write/verify proofs in it. I've spotted errors in my own work or others' work that got past peer review. And I don't think I'm particularly special. There just aren't enough eyeballs, time, and incentives sometimes.


> True, but even then, how large do you think the field is? And how many people verify these proofs?

It only takes one to break it. And, for something so fundamental, it's a big deal if you are the one.

This isn't physics, where you have to rule out all manner of confounding data before you can trust your conclusion.


The point you are missing is that studying a proof can't prove it correct, only fail to disprove it.


That's incorrect.

For example, given the axioms of Euclidean geometry, I can prove that the sum of the angles of a triangle add up to 180 degrees.

That statement is true. Full stop.

Now the starting axioms may not be correct (the parallel postulate, for example, is not required to be true). However, given the starting axioms, proofs are true. Period.

In addition, there will be things in mathematics that you cannot prove--Gödel's Incompleteness Theorem sits in this section.

And, science has the problem that a hypothesis can only be disproven.

Mathematics does not suffer from the same issue, though.


I just have a Master in Physics, but here is my take:

The thing is that mathematical proofs work on different levels of "zoom", ranging from just stating "it follows trivially" to fully formal proofs. And this is a normal process; that's how mathematicians boil down long proofs to simple lines of reasoning.

I think what happens here is that the author of the original proof is arguing on a high "it's obvious" level and seems to be unwilling to go into detail.


Lamport suggests hierarchical proofs as a possible remedy. They're somewhere in between computer verification and the current prevalent style.

Lamport acknowledges it would make writing proofs harder, while reading them easier. The style doesn't help to communicate the intuition behind a proof particularly well; it's more useful for verification. Mochizuki, for one, would have to work harder to write his proof (?) in that style.


I once tried proving the very first theorem in Cannas da Silva's book of symplectic geometry in this style. Screw that.

The proof that I eventually used in my dissertation is even wordier and with less display-mode equations than the one in the book (I became very influenced by the prose in old topology books).

Most of the time, the point of proofs isn't to establish something is true, but to communicate something about the internal structure of a piece of mathematics.

The problem with Mochizuki is that he did his work isolated from the rest of the world, so the internal structure of the kind of maths he invented is opaque. Accordingly, what mathematicians are really trying to do is to examine what makes this new maths tick; the books that make this accessible to mathematicians worldwide will not be fully-verifiable proofs of a statement, but webs of separate propositions whose statements are illuminating and whose proofs are easy to understand. If they're really successful, some may even be left as an exercise to the reader.


If you are right then calling them "proofs" is misleading. The results aren't proven, just made convincing. So I guess they are "arguments" -- only detailed enough to convince others who already know enough about the entities being discussed.

Automated proofs by contrast really are proofs but as many of these comments show are not arguments.

So maybe the problem to be solved is to appropriately connect mathematical arguments (what mathematicians call proofs) to proofs, while recognizing that they are actually two entirely separate families -- as different as plants and animals.


Again, the goal is not to convince, but to understand. Thus the ubiquitous "proof is left as an exercise for the reader".

To the extent that mathematics is a science, the whole point is to understand. Of course, people time and again have succumbed to the feeling that (e.g.) numbers are divine, but there's nothing mathematical about that.

---

It's not like it has been established as a mathematical theorem, based on simpler ontological facts, that capital-T Truth exists. And everything in our daily experience points to the contrary.

The fact that computers and programming languages have evolved from XOR and NAND semiconductors says a lot about the power of a particular branch of mathematics; it says nothing about the nature of mathematics. That idea is like claiming that the existence of probability theory implies that mathematics is about reasoning with uncertainties.


Which topology books contain prose, in your opinion?


General Topology by Munkres is a good example because it has a great deal of prerequisites covered in the first chapters so you can compare how he explains eg set theory, functions, equivalence classes with what something like Wikipedia gives.


To take one of your examples that I happen to be familiar with: Wiles' proof of FLT. I can assure you that in the >20 years that this proof has been out, thousands of professors, postdocs and grad students have read and reread every line of the two Annals papers.

They have been the foundation of hundreds of PhD projects, and the scientific merit and public fame to be gained by finding a gap in the argument are clear.

Actually I think breakthrough papers are least problematic when it comes to unverified claims in math.


> We seem to think that once a proof is published in a reputable journal then it is definitely true, but really we should only think "a small number of qualified people have read it and think it is fine"

The same could be said of many other professions, medicine and psychology to start. It seems a related manifestation of Gell-Mann Amnesia.


Math is different in that one feels there should be something close to "absolute truth." That is, given a set of axioms, it is possible to verify with complete certainty whether a proof follows from those axioms or not. Though when proofs become so long and abstruse that only a handful of people can even read them, perhaps that's no longer true in practice. In other fields, it's clearer that nothing is known with 100% certainty: differences of opinion or experimental error could always lead to uncertainty.


Unfortunately, math doesn't really permit this type of truth: if your axioms are strong enough to prove general statements about arithmetic, there is no effective procedure to determine whether an arbitrary proof follows from those axioms.


Did you mean to write “there is no effective procedure to determine whether an arbitrary formula follows from those axioms?”

A proof is exactly how we demonstrate that a formula follows from the axioms.


Still, given a set of axioms, statements will fall into one of three categories: 1) provably true, 2) provably false, 3) undecidable (independent of the axioms).

Claims that a statement is in category 1 are fully verifiable (by providing the proof). The same goes with claims that a statement is in category 2.


Yup, not picking on mathematics at all :-)


Agree with this. I don't think there are too many mathematicians who can really argue for or against a really important theorem. My prof has come up with one for the Riemann hypothesis [1], but he is not finding anyone willing to engage in a discussion. Most mathematicians seem to believe that the Riemann hypothesis is unsolvable, and I don't think it helps that my prof is currently not associated with any reputable university. It's as if proofs can come only from reputable universities and reputable mathematicians.

[1]https://medium.com/@nutanc/the-final-and-exhaustive-proof-of...


It appears your professor's work has gotten the attention of at least one other researcher -- a Matthew Watkin at the University of Exeter. He seems to believe that it can be disproven by a student of analytic number theory [0].

[0]: https://empslocal.ex.ac.uk/people/staff/mrwatkin//zeta/RHpro...


Unfortunately way more often than not attempted proofs of hard problems end up flawed, especially if there are certain warning signs: https://www.scottaaronson.com/blog/?p=304

That said, I’d be happy to go over this as an analytic number theorist rfurman@alumni.stanford.edu


Thanks. Will share the details with my prof.


The most definitive version of a proof one can imagine is when the proof has been expressed via a formal language and a proof assistant checks that each step of the proof complies with axioms or already proven theorems from a library (which, in this case, are simple rewrite rules in the formal grammar). The [Mizar System](https://en.wikipedia.org/wiki/Mizar_system) is an example of that.

The question of formally proving Wiles' theorem is [discussed here](http://www.cs.rug.nl/~wim/fermat/wilesEnglish.html). Currently there exists no tool powerful enough to formalize such a proof. It's considered a challenging problem in computer science.


Wasn't this very issue what inspired Voevodsky to work on univalent foundations of mathematics, resulting in the HoTT Book (in collaboration with others)?


There is an alternative way (but not a replacement) to go about it as well. Rather than re-verifying proofs (I mean ones that have already been proved in the literature), one could study their central features. And whenever a feature breaks, one can stop to think about whether the theorems themselves hold water or not.


The term "proof" seems to be a lot of the problem, because it implies something definitive.


Excellent point. What mathematicians write to each other are arguments not proofs. Machine checkable proofs are really proofs.

This should not be embarrassing to mathematicians. It opens up potential for a lot of progress in several directions. Arguments can become better ways to communicate with humans. Proofs can be developed where needed to resolve questions about arguments. The relationships between arguments and proofs can be improved. Tools for each can be developed without having to support both.


What is the philosophical difference between an 'argument' that human minds following the laws of logic agree is correct, and a machine built by those minds flashing a "correct" light?

IMO the only advantage to be gained by introducing physical computers into the picture is lightning-fast bookkeeping.

Why would you trust an Intel CPU, a Samsung SSD and a linux fs driver running some verifying framework more than the human brains designing, writing and executing said framework?


Usually grad-level courses are about reading research papers. And many times the exercise is to read the paper, try to predict what the next paragraph or page will be about, and even try to find faults, even though there might be none. I assume math departments do the same.


Here's another "not sure if the proof is correct" case: the Jordan curve theorem!

A seemingly "evident" result, yet extremely difficult to prove in some cases (e.g. for curves given by continuous, nowhere differentiable functions, etc.)


You have an MSc in mathematics but don't consider yourself an expert?


Where high-school mathematics was analogous to learning to form letters with a pencil, undergraduate BSc mathematics is analogous to learning to form words and short sentences.

By the end of your BSc in the analogy, you can look at a sentence and recognise at least what sort of genre it might be part of, and you can write down many of the most common words as well as a number of rehearsed useful sentences.

An MSc is like reading some examples of long sentences and short paragraphs, and being shown some long paragraphs or even chapters of books (which you have no hope of understanding, but you try, and the experience is salutary). If you're lucky, you did a research project in which you perhaps rewrote a certain very specific short paragraph in your own words.

The job of a research mathematician is analogously then to read and write books.


> If you're lucky, you did a research project in which you perhaps rewrote a certain very specific short paragraph in your own words.

Well stated. This pretty accurately characterises my experience of undergraduate studies & an honours year in mathematics


This is beautiful. Thank you.


I've also got an MSc in mathematics, and I feel that it was barely enough time to just start scratching the surface.


Trust me, an MS in math doesn't even get you to the fun part. :)


Haha not at all. My MSc project was about splitting graphs into other types of graphs and some generalizations of graphs into hypergraphs. I don't even consider myself an expert on graphs, let alone number theory.


Mathematics is this beautiful and terrifying field where, much like swimming in the Ocean, the further into it you get, the deeper and infinitely more vast it becomes. Now consider that most people can only really swim in one direction at once and this Ocean goes off in all directions.


[flagged]


This is such a shallow comment, very surprising it's up there. (Did I miss a joke?)

Mathematics is written in a language that is not only precise enough to make it easier to follow logical arguments, but also optimized for human readability. MySQL is written in C, which was optimized in the early days of computing for portability between machines, a goal irrelevant to provability. C does not give enough guarantees to prove anything worthwhile, whereas the language of mathematics has been evolving since the 19th century to give as many guarantees as possible. If MySQL were written in Agda, proving it correct would be far easier than formalizing Wiles' proof of Fermat's Last Theorem, or whatever. As far as programming languages go, C is probably the worst language in which to prove anything, let alone MySQL.

The OP's comment wasn't implying proving is supposed to be easy. Mathematics is not written for machines, it's written for humans. Ultimately, the act of writing mathematical papers is not that much different from the act of writing sociology papers. A mathematical proof is written so that it convinces a human. Once you start writing those proofs in Coq or Agda or whatever, you start seeing how much handwaving goes on in mathematics. This is because being experienced in mathematics gives one the ability and confidence to operate on higher levels of abstraction, which don't need as much precision/rigor as a programming language, since context signals the tactic, argument, etc.


I don't see the parent poster making any claims about the straightforwardness of proving worthwhile things.


I remember an instructor explaining during the math olympiad training program: "A proof doesn't have to be in any particular format. A proof is an argument that can convince other mathematicians."

At this point, it seems clear that we do not have a proof of the ABC conjecture. Perhaps someone will be able to improve this to make a real proof - after all, there were some errors in the first Wiles proof of Fermat's Last Theorem, but they ended up being minor errors that were fixed up once they were found. But what we have is not a proof.


It's interesting that this article doesn't touch on this aspect of the situation at all, but when reading about Mochizuki's proof in the past it was presented as clear that proving the ABC conjecture was one application of a new general theory—not that all of Mochizuki's innovations are tied specifically to that problem.

I was given to understand that his work was more like Grothendieck's, where he's developing this entirely new, super general system. But you can imagine if Grothendieck for example had done all of his work for years and years in isolation and then presented Schemes, Topoi, and Motives to everyone all at once, using them in combination to prove something familiar, and expecting other mathematicians to just learn and understand them all in order to verify a single proof.

I think in that case, folks would not have been immediately so interested in the new constructions Grothendieck had come up with, viewing them initially as mountains of unnecessarily alien concepts built up just to give other mathematicians a hard time ;)

(Not to identify the two mathematicians overly—I'm not sure how appropriate that really is—but I remember that being my impression when first digging into Mochizuki's work, and I haven't seen it mentioned here yet.)


This is where Terry Tao's comment from December seems relevant: https://galoisrepresentations.wordpress.com/2017/12/17/the-a...


I believe the article explicitly steers away from this because nearly all the discourse on this proof has been precisely about this (cf. Scholze's remark that 'The whole discussion surrounding the proof had gotten “too sociological,”').


I think you're exactly right. Mochizuki's work may or may not turn out to be sound, but in the mean time he's failed to communicate it to other experts, even when they've met with him in person for the specific goal of understanding his reasoning.

It's not sufficient to have a great idea, or even to be right about it. You have to be able to explain it thoroughly enough that other qualified peers can understand your logic enough to weigh in on it. That seems not to have happened yet here.


It's sufficient for Mochizuki. Everyone else can decide for themselves what is sufficient for them.


> It's sufficient for Mochizuki. Everyone else can decide for themselves

This sounds like a variant of the Turing test.

Mochizuki and your pet dog both claim to have proven the ABC conjecture. The ABC conjecture has been proven when you can understand one proof but not the other.


I love your idea in principle, but it would be very problematic, and very sensitive to the tester's prior experience. I think that many people would (un)naturally be more trustful of mathematically-looking squiggles than a dog's barks, while others, especially if they got burned by damn lies before, would inherently distrust mathematics. In any case, people are very prone to trying to complete gaps in their understanding of a speaker based on prior conceptions.


I can't edit my post, but I should have phrased it more pithily:

Mochizuki and your pet dog both claim to have proven the ABC conjecture. The ABC conjecture has been proven when you can tell the difference between the two proofs.


Presumably he’d like recognition for having proven it. Unless he can explain and communicate his theorem to others, his work won’t be accepted. That doesn’t seem sufficient.


And it's unlikely that there will be anything following from it if the article is right that it's basically "Trivialities to confuse the reader + an esoteric leap of faith"


FYI, the proof has convinced other mathematicians.


> When he told colleagues the nature of Scholze and Stix’s objections, he wrote, his descriptions “were met with a remarkably unanimous response of utter astonishment and even disbelief (at times accompanied by bouts of laughter!) that such manifestly erroneous misunderstandings could have occurred.”

Is this normal in advanced Math or is this guy kind of a jerk?


He is being a jerk. Also, not responding to the actual argument, as far as I can tell (which is technical, but quite clear). There are some other damning statements in the rebuttal, too--for example, they say that all the new machinery developed for anabelian geometry lies unused in the case to which Corollary 3.12 applies (meaning most of the effort mathematicians have been spending in order to try to understand Mochizuki's new mathematics has been wasted). They also mention other incidental complexities that serve to obfuscate the argument but don't seem to buy anything (for example, not choosing a specific isomorphism at one point in the proof, but instead asking the reader to consider all isomorphisms with particular properties, for reasons he has been unable to justify). Someone in his position should be aiming to clean up his proofs to be as straightforward as possible, not making them deliberately harder to verify.


This is typical behavior in Japan when you question someone's authority. It doesn't go over well; it's a direct attack on their honor. You really need to go deeper with him (over dinner or so), but he refuses, so most likely he is wrong and he knows it.


If nothing else, this new criticism can be used to clarify and improve the original proof. This response is .. not encouraging.


Math academic community is actually very healthy overall, in my experience.


see: monty hall problem

But anyway, mathematics is not any more unhealthy than any other field, though it does have toxicity like all others do.


My understanding is that the Monty Hall problem was just a cute puzzle published in a pop-sci magazine that confused a few professors who perhaps ought not to have been confused by it. I don't think it says anything at all about the health of the mathematical community.


It was also ambiguously worded, which is the bane of probability problems posed by people who think they are clever (like vos Savant).


I will repeat what I wrote about this point a couple of weeks ago: According to vos Savant, in most of the cases where the correspondent actually had a correction to offer for her alleged error, they were assuming the intended interpretation of the puzzle.


Your understanding of the history of the problem is lacking and you've caused me to get downvoted as a result, thanks for that. The professors were misogynistic towards her as they could not believe a woman could solve this problem. She even addressed this.


Given the anecdotal correlation between math and autism, and the effects of autism on sociability, one might expect mathematics to have different toxicity.

I never went deep enough into mathematical academia to judge, let alone getting into other fields to allow for a comparison.


Well, let's lay aside the fact that there clearly are plenty of jerks in academia. The entire article seems to lay out a string of unusual and problematic situations. Whether it's people who hit upon the problem with the proof and chose not to speak out for fear of causing trouble, or the journal publishing the proof being edited by the author, or the large number of 'delegates' who seem to be pushing the proof without being able to explain it, it seems this is a mix of problems arising from an unhealthy situation.


If the objections really were founded on a basic misunderstanding, then one would expect a straightforward clarification to be forthcoming (where 'straightforward' should be read in the context of the formidable mathematical abilities of the parties to this dispute.) Its absence suggests that this is bluster (though that interpretation is made through the filter of my western/English-speaking cultural background.)


https://galoisrepresentations.wordpress.com/2017/12/17/the-a...

Terence Tao and Peter Scholze (and others) join the discussion in the comments section.


Tao's commentary is especially interesting for me as a non-mathematician:

>It seems bizarre to me that there would be an entire self-contained theory whose only external application is to prove the abc conjecture after 300+ pages of set up, with no smaller fragment of this setup having any non-trivial external consequence whatsoever.

Likely such is the understatement-ish way of a mathematician to say "This is clearly wrong, but I have not the time to dig in and find where exactly"


I think Tao describes an important metaheuristic.

An important conjecture like abc expresses some fundamental feature of the field. It's not just an arbitrary brainteaser. Solving this one problem should give insights into many related problems.

Thought experiment:

Imagine a modern mathematician/time traveller who wants to impress Carl Friedrich Gauss by solving problems CFG can't solve, while at the same time not wanting to give away any clues to modern mathematics. The traveller would need to devise complex and convoluted ways to prove things. The proofs would probably piss off CFG more than impress him.


I thought he was more suggesting that the proponents could bolster their case by using their weird mathematical machinery to prove some other, less daunting, results or otherwise show that it has more general uses. (Not asserting that this is impossible, just complaining that it hasn’t been done.)


I think it might be more akin to a programmer complaining about a monolithic/black-box chunk of code that can't easily be split into understandable and/or unit-testable functions.


For the analogy, I'd replace 'unit-testable' with reusable


I'd argue for something stronger: both reusable and useful (outside of the initial application) and novel.

It's possible to write structurally reusable components that are useless, i.e. they do a thing that no one wants or needs.


I read it more as "This is probably wrong, because it differs in this important way from every other true groundbreaking proof."


Tao has always been a proponent of the idea that proving a theorem establishes powerful tools that might help mathematicians work on other problems. I think his comment is that this proof does not do that.


> Likely such is the understatement-ish way of a mathematician to say "This is clearly wrong, but I have not the time to dig in and find where exactly"

This is a statement of "If this thing has no other application, it isn't worth my time to dig through what is likely an impenetrable pile of garbage."

Basically, proof of the abc conjecture should have a bunch of follow on consequences. If those aren't materializing, that makes taking the time to understand the proof quite a bit less enticing (ie. the Bayesian prior on "true" goes down a lot).


To me it sounds more like: I don't like the idea of this theory working, but I don't have a refutation.

Honestly I am surprised to hear such an argument from one of the world's best mathematicians.


Well, that's not what he said, and he still is one of the world's best mathematicians. So one wonders why you would take that surprise as indicative of a mistake on his part rather than on yours?


Well, in science there's a simple rule: you substantiate your claims.

A plausibility argument like Tao's is simply not an argument.

To show that Mochizuki is wrong you need to point out which equation of his work is wrong, which by the way is Peter Scholze's plan.

Tao, by his own admission, does not have the background to evaluate Mochizuki's theory and still gives this worthless plausibility argument, appealing to his knowledge of other unrelated proofs. I'm astonished.


Except that he doesn't say it is incorrect; he says that "it seems bizarre" to him that Mochizuki's paper seemingly doesn't have any uses beyond proving the abc conjecture. The rest of his comment is just explaining how it's usually not like this.


When the world's most brilliant mathematician (or at least one of the most brilliant) says "it would be rather bizarre if your argument worked", the implications are absolutely clear.

As a scientist I would never make such a statement, even more so for a theory I admittedly don't understand.

Tangentially, it's not Tao's case, but I have seen a lot of bullying in academia along these lines... "I am not saying it's wrong, just very bizarre", "I am not saying it's wrong, but my Ph.D. student worked on it for two years and couldn't solve it", "I am not saying it's wrong, actually I don't even understand the details, but please recheck everything..." etc...


Is it not bizarre?


Euler's identity looks very bizarre if you don't understand the proof. Why should exponentiation, the imaginary unit and pi be related?

In mathematics there's absolutely no place for judging the bizarreness of statements; either they are right or wrong, and of course such a judgement cannot be made by someone who admittedly doesn't have the necessary background.

As I already said, often in academia saying that a result is bizarre is a not-so-subtle way of implying something is wrong by appealing to authority.


That discussion is from before Scholze & Stix wrote their criticism.



> “The abc conjecture is a very elementary statement about multiplication and addition,” said Minhyong Kim of the University of Oxford. It’s the kind of statement, he said, where “you feel like you’re revealing some kind of very fundamental structure about number systems in general that you hadn’t seen before.”

I can kind of see that, but there's something in it that gives me the feeling of arbitrariness.

It's looking at the relationship between the prime factors of A, B, and C in A + B = C, where no prime factors may be shared among A, B, and C. The specific relationship in question is between the magnitude of C and the product of the unique prime factors of A, B, and C.

To take an example from the article, 5 + 16 = 21 meets the basic requirements, since the prime decomposition looks like (5) + (2 * 2 * 2 * 2) = (7 * 3): no factors are shared. But the quantity we're supposed to relate to C is (5 * 2 * 7 * 3), since it is the product of only the unique primes.

Does that not feel arbitrary, to drop the repetitions? It ends up being that the sought-after smaller products arise because the numbers had many repeated prime factors, so we removed more when forming the product.

But I suppose there are probably just some deep number theory mysteries behind it which give justification, so it only feels arbitrary to someone like myself who is basically ignorant on the subject :)
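For what it's worth, here is the bookkeeping as a rough Python sketch (rad is my name for the product of the distinct prime factors, the "radical" in the usual terminology):

    def rad(n):
        # Product of the distinct prime factors of n (repetitions dropped).
        r, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                r *= p
                while n % p == 0:
                    n //= p
            p += 1
        return r * (n if n > 1 else 1)

    a, b, c = 5, 16, 21
    print(rad(a * b * c), c)   # 210 vs 21: the radical dwarfs C, the typical case

    a, b, c = 1, 8, 9
    print(rad(a * b * c), c)   # 6 vs 9: rad(abc) < c, one of the rare exceptions

So for 5 + 16 = 21, dropping the repetitions still leaves a product far bigger than C; the conjecture is about how rarely and how badly that can fail.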


Another way to think about it is to consider what needs to happen for rad(abc) to be much less than c. This can only happen if the "dropping" you mention drops a whole lot of factors. In other words, it can only happen if a, b, and c all contain mostly large powers of primes.

The prime factorization of a number is essentially "random". That makes numbers that are large powers of primes rare; it's like rolling a die and continuously getting the same face. Now if you add two numbers, the prime factorization of the sum does not appear to be related to the prime factorization of the summands, so adding two very rare numbers and getting yet another very rare number seems unlikely. And that seems to be roughly what the abc conjecture says: it is not likely that all of a, b, and c are rare.
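To see that rarity concretely, here is a rough brute-force sketch in Python (the names and the bound are my own arbitrary choices):

    from math import gcd

    def rad(n):
        # Product of the distinct prime factors of n.
        r, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                r *= p
                while n % p == 0:
                    n //= p
            p += 1
        return r * (n if n > 1 else 1)

    hits, total = [], 0
    for c in range(3, 1001):
        for a in range(1, c // 2 + 1):
            b = c - a
            if gcd(a, b) == 1:
                total += 1
                if rad(a * b * c) < c:
                    hits.append((a, b, c))

    print(len(hits), "of", total)   # the exceptional triples are a tiny fraction
    print(hits[:3])                 # begins (1, 8, 9), (5, 27, 32), (1, 48, 49)

Such triples keep turning up as c grows, but as a fraction of all coprime triples they are vanishingly rare; the abc conjecture is a precise statement about just how extreme rad(abc) can be relative to c.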


Also, I didn't notice this before, but when you say:

> And that seems to be roughly what the abc conjecture says: it is not likely that all of a, b, and c are rare.

It almost sounds like a tautology. Is your view that it is somewhat vacuous after all? Or maybe the interest comes from the fact that while it seems obviously true, we need a proof in order to use it as a theorem for other things where it would be a useful tool...


My intention was to show why you might think it's likely to be true. Maybe I did too good of a job? Heh :)

The justification I gave depends on an intuition that addition "scrambles" prime factorizations, ie. except for common factors, which obviously pass through to the sum, the prime factorization looks random. Understanding how prime factorizations behave under addition is certainly not an easy problem. And then, even if it is unlikely that a, b, and c are all rare, perhaps it is not unlikely in precisely the quantitative way that the abc conjecture supposes.

On the topic of things where it is a useful tool: I found http://www.ams.org/notices/200210/fea-granville.pdf which discusses its relation to other famous results but which also has a much better argument for why it should be true. They approach it as the integer analogue of this result for polynomials

> If a(t), b(t), c(t) ∈ C[t] do not have any common roots and provide a genuine polynomial solution to a(t) + b(t) = c(t), then the maximum of the degrees of a(t), b(t), c(t) is less than the number of distinct roots of a(t) b(t) c(t) = 0.


Great explanation—thanks. I think I got the upshot, though I am unclear on what function rad() is.


rad is the radical of a number [1]. In short rad(x) is the product of all unique prime factors of x.

So rad(8) = 2, rad(24) = 6, rad(75) = 15

[1] https://en.wikipedia.org/wiki/Radical_of_an_integer


rad is the arbitrary function you asked about ;) rad(x) is the product of the distinct prime factors of x, ie. it drops repeated prime factors, rad(2^n)=2, etc.


> Does that not feel arbitrary, to drop the repetitions?

If the conjecture turns out to be true, then that apparently arbitrary step turns out to be meaningful, because it uncovers a hidden relationship between the numbers A, B and C and their factors.


Slightly OT: it was very nice to read an article that made such wonderful use of links. Just about every case where material could be directly linked, it was done.

I particularly enjoyed the link to Dr. Calegari's blog post, as it was interesting to read the comments in 'real time' and compare that with the author's synthesis. Very good article!


Even further OT: I was good friends with Frank Calegari's older brother (also a mathematician) at school, and used to play bridge with Frank at university. I'm not a mathematician but I do sometimes read articles when they're posted on HN, so it was a surprise to see someone I know mentioned. :p


I think this would count as esoteric if you ignore the drama portion of it. Just to get to the point of understanding what they're talking about would probably take months of dedicated uninterrupted work. Neat. I have no idea where to even start.

https://en.wikipedia.org/wiki/Anabelian_geometry


I think what makes these number theory arguments so interesting to me is how simple the theorems are, but how impenetrable the proofs are. For example, Fermat's last theorem is so easy to understand, but there's no chance in hell I'll ever understand Wiles' paper proving it.


You might not understand all the details of the paper, but you can definitely follow the thrust of the argument and see the elegance in it without advanced math skills.

The really hard part (and the main part of Wiles' contribution to the proof) was proving something called the Taniyama–Shimura–Weil conjecture. However, if you skip that bit and just accept that the conjecture is true, the rest of the steps of the proof are both elegant and relatively easy to follow (if you gloss over some of the details) for anybody with a decent grasp of basic math.

The book Fermat's Last Theorem by Simon Singh is a pretty great read and does a good job of outlining the basic structure of the proof for anybody with a decent grasp of high school math.


I don't profess to understand the mathematics behind these papers, but I have been following the debate for a couple of years now. It is weird for such eminent mathematicians to not be able to reach a conclusion as to whether Mochizuki's arguments are true or false. I think this is the first time a really eminent mathematician like Scholze has put in his two cents that the proof might not be right. At least it looks like work is being done at a serious level to prove or disprove these papers.


Years even. And really, that's kind of the issue at hand. This stuff is so difficult that very few mathematicians really understand it, and Mochizuki's work is so much more esoteric than that, that basically no one understands it, even recent Fields Medallists.


Coincidentally I recently finished a MOOC about prime numbers where the ABC conjecture was introduced, I believe in order to show how mysterious prime numbers still were. The MOOC is here if someone is interested: https://courses.edx.org/courses/course-v1:KyotoUx+011x+2T201...

Apart from that, the article makes me curious about the state of automated mathematical proof. I remember having read an article posted here about a mathematician (I think he was a Fields medalist) who was claiming that automated proofs were the future of mathematics, as they would enable mathematicians to collaborate much more easily, removing problems of trust in others' proofs.


Peter Scholze is too valuable to be lost to the hall of mirrors that is Mochizuki's proof. :P He is young and at the peak of his powers. Even I've heard of him.


So Mochizuki has had years to revise and restate things right? Knowing that it hasn't been well received wouldn't one want to clarify things? Did he move on to a different project, or is he so arrogant that once he submits a paper he walks away from it?


A few years ago I asked Mochizuki via email if he would be interested in organising with me an effort to mechanise his proof. At the time I was developing a collaborative interactive theorem prover, for which this would be an ideal application. He politely declined, and I am kind of happy about this now, because I need a few more years anyway before my theorem prover could be used for something like this without being a pain for its users :-D


I do have to wonder: if the proof is so complex, and yet doesn't offer insights into any other proofs or theories... does it matter?

Older threads for reference:

    https://news.ycombinator.com/item?id=15971802
    https://news.ycombinator.com/item?id=4502856
    https://mathoverflow.net/questions/106560/philosophy-behind-mochizukis-work-on-the-abc-conjecture


To me, this article highlights two things: the sore need for a way of automatically verifying proofs, and the sore need for a formal, agreed-upon language in which math proofs can be written.

The language of math has always been much mushier than mathematicians are willing to concede, and the chickens are coming home to roost: 21st century math has become so complex and sophisticated that very few people can actually even read the content of proofs, much less understand them.

Mochizuki's case is an extreme example of that: after six years, even the other experts in the field aren't sure of what he's saying.

There is a clear need for formalizing the language of mathematics in a way that allows machines to verify the validity of a proof.


I wonder if language is a factor in this at all? I know mathematics is mostly written in English, but discussions most likely still happen in local languages. Could part of the proof be lost in translation?


I think Mochizuki won't have any problems with English, since he spent his school days, up through his PhD, in the US.

https://en.wikipedia.org/wiki/Shinichi_Mochizuki

About Hoshi, I have seen him in a conference video before, and I have the impression that he isn't good at English, which can make discussion a bit difficult.

I don't know about Scholze and Stix.


Scholze has a strong accent, but speaks flawless, sophisticated English.


It sounds like what is lost in translation is somebody's face and/or reputation, purely judging the presented situation socially. That is all we can do as non-mathematicians.


The math community can't accept a new result is true without a good proof. As part of good, the proof has to be readable by peer reviewers. Net, if the writing is tough to read by some of the best qualified peers, then the writer has more work to do, in particular, to learn to write better.

We have lots of examples of good writing in math from Bourbaki, W. Rudin, P. Halmos, H. Royden, and more.

Sorry 'bout that: No one wants math to miss out on a great, new result, but the writer has to do their job first, in particular, do good math writing.


Mathematicians should do more computer verifiable proofs to avoid discussions like this.

> Definitions went on for pages, followed by theorems whose statements were similarly long, but whose proofs only said, essentially, “this follows immediately from the definitions.”

This sounds perfect for machine checked proofs, but I guess the proofs are actually a lot more involved than they are presented.


Per Peter Scholze (from https://galoisrepresentations.wordpress.com/2017/12/17/the-a...):

>One final point: I get very annoyed by all references to computer-verification (that came up not on this blog, but elsewhere on the internet in discussions of Mochizuki’s work). The computer will not be able to make sense of this step either. The comparison to the Kepler conjecture, say, is entirely misguided: In that case, the general strategy was clear, but it was unclear whether every single case had been taken care of. Here, there is no case at all, just the claim “And now the result follows”.


Well that's sort of the point, isn't it? If you can't make the computer make sense of it, you have no proof.


> The computer will not be able to make sense of this step either.

What does that mean, then? That a leap is being made that depends on the reader's fuzzy intuitions, rather than the established axioms? If that's the case, it's not a formal proof at all, no?

Or am I way off here?


I find his explanation of the reverse quite clear:

> Any proof that can be spelled out at a level of detail sufficient to be analyzed by a computer is necessarily going to consist entirely of steps that are each completely comprehensible to a mathematician.

This means there must either be a lot of steps or a lot of different paths to follow in order for a computer to be of use. Neither is the case here.


Maybe he says that the computer won't help understand the proof since the proof is missing a step.

He may be saying the equivalent of "A computer won't help you to prove that 1+1=3“


> He may be saying the equivalent of "A computer won't help you to prove that 1+1=3“

But it can. You need to have the computer 'comprehend' the relevant axioms, of course.

If your 'proof' is in fact just arguing the case for new axioms, that isn't a proof at all, it's a misunderstanding of what 'axiom' means. (They're definitions, not profound universal truths.)


Exactly, this is how I read it as well.


For the Kepler Conjecture, human mathematicians first found a way to show that it could be proved by checking a finite (but long) list of computer-verifiable claims.

Here Scholze is saying that part of the proof simply hasn't been written, so there's nothing to verify. I think he's missing that computer verification proponents are also aware of that and are either

1. trying to motivate the camp that claims to understand Mochizuki's work to formalize and computer-verify it

and/or

2. suggesting that some of the gaps can be filled in by automatic theorem provers


It took like a decade to produce a computer verified proof for the Kepler Conjecture. And that was just for a relatively well-defined exhaustive search. That sounds like a molehill compared to Mochizuki’s Everest of a proof.


Methods have improved a lot since then.

The problem is that these papers are vague. The translation to computerized version would be a very different thing, it'd require a lot of creative writing on behalf of the translator.



Thanks for the link. As that seems very relevant to the discussion, let me add some context: This is Mochizuki's answer to the recent criticism titled "Report On Discussions, Held During The Period March 15 – 20, 2018, Concerning Inter-Universal Teichmuller Theory (IUTC)"

For TL;DR see around page 40 in that PDF:

> Indeed, at numerous points in the March discussions, I was often tempted to issue a response of the following form to various assertions of SS (but typically refrained from doing so!): Yes! Yes! Of course, I completely agree that the theory that you are discussing is completely absurd and meaningless, but that theory is completely different from IUTch!

> Nevertheless, the March discussions were productive in the sense that they yielded a valuable first glimpse at the mathematical content of the misunderstandings that underlie criticism of IUTch (cf. the discussion of § 3). In the present report, we considered various possible causes for these misunderstandings, namely:

> (PCM1) lack of sufficient time to reflect deeply on the mathematics under discussion (cf. the discussion in the final portions of § 2, § 10);

> (PCM2) communication issues and related procedural irregularities (cf. (T6), (T7), (T8));

> (PCM3) a deep sense of discomfort, or unfamiliarity, with new ways of thinking about familiar mathematical objects (cf. the discussion of § 16; [Rpt2014], (T2); [Fsk], § 3.3).

> On the other hand, the March discussions were, unfortunately, by no means sufficient to yield a complete elucidation of the logical structure of the causes underlying the misunderstandings summarized in § 17.


So what would this imply for the larger body of Mochizuki's work? As far as I have understood the situation, the abc proof is more of a side product of him developing this completely new system(?) for maths rather than the main thrust of the work. Is it all falling down like a house of cards, or are there still workable and useful bits there?


Somewhere in this comment section someone quoted the following line by Terence Tao:

>It seems bizarre to me that there would be an entire self-contained theory whose only external application is to prove the abc conjecture after 300+ pages of set up, with no smaller fragment of this setup having any non-trivial external consequence whatsoever.

so it's not so much that the main result is called into question but some useful and workable bits are left over; rather, the main result is called into question precisely because there are no other useful and workable bits.


Will others fork his proof to fill in the gaps? Is there a GitHub for maths proofs?


We need Yasantha on this


My God! This Titans' website needs urgent attention: http://www.kurims.kyoto-u.ac.jp/~motizuki/top-english.html


Why? It might be worse than a <p> and a <ul> with links to papers, but it loads fine, and it's basically a point of pride amongst academics to have an ancient or amateurish-looking handmade website. Perhaps it is a signal that time is spent on academic work, or that the site was made by the academic (and so has useful materials for academics) and not by some publicist who does not provide useful academic materials.


No no no! His website is a treasure! I think the world needs many more garish, lurid, irreverent websites like this.


Why? At least it's not hot pink or otherwise eye-blinding. I still find it charming that many Japanese sites seem stuck in 90s web design. But recently I've been running into more Japanese sites that seem to have advanced to mid-2000s web design, with proper CSS and designs that use curves and angles and don't force everything into a box shape (visible or invisible)! That's my favorite era so I hope it's a while before they join the rest of us in bloated SPA hell.


Why? It actually performs well.



