Number theorist fears many proofs widely considered to be true are wrong (vice.com)
87 points by lelf 17 days ago | 100 comments



Headline is waaaay overblown, but not all of it is total hyperbole -- check out arXiv and try to read the abstract of just about any paper.

It's basically totally impenetrable. I'm almost done with an undergraduate degree in math and basically have no idea what ~90% of the research is about at anything more than a surface level. This is fine, it's written for specialists in the field (hopefully), but damn, for most fields, it's not a whole lot of people.

Compared to physics or chemistry, math gets very specialized, very quickly. Depending on the field, there are no easy real-world isomorphisms either, making it even more difficult.


I did some undergraduate research and ended up getting published. My initial drafts were written in prose so that I (and hopefully any novice) could understand them. However, my professor wasn’t happy with that, so I got some help from one of his grad students to rewrite it. By the end of it I could barely understand my own paper. IMO the final paper had too much technical jargon, which obscured some simple concepts. Kinda made me disenchanted with the academic world. Still have to give credit to my prof and the grad student; without their edits it probably wouldn’t have been published.


This is one of the reasons why I pursued engineering instead of a hard science. Mathematics in particular feels like an ivory tower with a bunch of gatekeeping.

Better language and grammar around core concepts could improve accessibility, but few in the field that I've encountered seem to care about that.

Biology and chemistry are a bit better, but there are still improvements that could be made. You can't easily go from zero to biochem or ochem, and I don't think that has to be the case. Gen chem and gen bio just suck at presenting themes and concepts and instead focus on being a smorgasbord of facts (that you later have to amend or unlearn).

Computer science and electrical engineering are a walk in the park by comparison. We have all sorts of interactive tools and visualizations of algorithms and analysis. We keep refining and making the onboarding process easier and more accessible.


So I find this interesting, because it seems to confirm a suspicion I've held. I "only" did some undergrad math during my computer science degree (but due to a lack of specialization at my university back then it was "real math", i.e. we took the same courses as the math students).

My impression often was that a lot of this stuff isn't so hard if you "get to it", but it's clouded in a lot of complicated language that makes it sound harder than it really is. I never dared to postulate that this is a more general problem of math, but your words sound like that's exactly what is happening.

Do you think there's a way forward to make math more accessible by using simpler language without sacrificing correctness?


The problem with math is that the only strategy we’ve discovered (with any success) for making a concept easier to learn is to connect it with a concrete, real-world analogy. This works because people find it easier to learn something if they can connect it to something else they already know. The Achilles heel of this approach is that it discards the abstract form of the concept such that the full generality is lost on the student.

One example I see frequently around here is the much-maligned monad tutorial. Writers keep trying to connect it by analogy to concepts with which imperative programmers are already familiar. As a math student, I would much prefer to be given a basic statement of the definition of a monad, a list of properties, and maybe a theorem to prove that some structure is a monad instead of having to fall back to the definition. From there, I would just work through a few exercises and start to understand what it is and how it works. Most people don’t work this way, however. People seem to be extremely resistant to learning by experimentation. Instead, they prefer to be given all the answers.
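
To make that concrete, here is roughly the kind of bare statement I mean, sketched in Haskell (the primed names are only there to avoid clashing with the Prelude; the laws are the standard monad laws, written as comments rather than compiler-checked code):

    -- A sketch of the "definition plus laws" presentation: a Monad
    -- interface with its laws, plus one concrete structure to check
    -- against the definition.
    class Monad' m where
      return' :: a -> m a
      bind    :: m a -> (a -> m b) -> m b

    -- Laws every instance must satisfy:
    --   left identity:  bind (return' a) f == f a
    --   right identity: bind m return'     == m
    --   associativity:  bind (bind m f) g  == bind m (\x -> bind (f x) g)

    -- A candidate structure: verify the laws above hold for it.
    data Maybe' a = Nothing' | Just' a

    instance Monad' Maybe' where
      return' = Just'
      bind Nothing'  _ = Nothing'
      bind (Just' x) f = f x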

I suppose this isn’t really a problem that’s unique to math. We’ve also discussed how a lot of computer users fail to truly understand how the software works and instead prefer to follow steps by rote in order to carry out one particular task.


People struggle with monads because it's a two-step process to learn them: Learn their mathematical properties, then learn how to apply them as a pattern. That's not a quick process you can do in half an hour. It doesn't help that the concept is alien to everyday processes. So yeah, analogy doesn't get you far with monads.


I think most monad tutorials skip over the math and try to teach people the applications and I think this is why people don’t get very far. But then people are probably not ready for the math if they haven’t gone beyond high school algebra. Very few high school students learn about abstract algebra topics such as group theory unless they’re studying on their own.

Aside: as with monads, group theory has many disparate applications, from solving Rubik’s cubes [1] to particle physics [2] to crystallography [3]. Trying to teach someone group theory using only a Rubik’s cube may not be the best idea, in light of the abstract nature of the structure.

[1] https://en.wikipedia.org/wiki/Rubik%27s_Cube_group

[2] https://en.wikipedia.org/wiki/Particle_physics_and_represent...

[3] https://en.wikipedia.org/wiki/Crystallographic_point_group


No, there is no magic formula to make hard things easy. This is the same as the classic "monads are burritos" thing. A programmer spends a week trying to understand monads, and finally it clicks when he makes an analogy with wrapping burritos. So that must mean that burritos are the real way to explain monads, and everybody who didn't use the burrito analogy was being obfuscatory on purpose, right?

Of course not! If that programmer then goes out and tells somebody else that "monads are burritos", they'll have no idea what that vague, nearly nonsensical statement means. You need the context of grappling with the idea over time for an analogy to actually help.

Good textbooks are already extremely optimized for ease of learning; the authors of these books are universally people who started in your position, thinking that everything would be so much easier to learn if it were just written in sensible language, usually the language they happened to be using when the idea finally clicked for them. And the result still ends up hard to understand, because the concept genuinely is hard to understand.


Absolutely.

It's a problem with academia in general. Academics don't get rewarded for explaining things in an easy-to-understand way. Moreover, it is often a huge amount of work to distill results down like this. And if you succeed, the response you may get is "well, that is obvious".

Anyway, this is my counterargument to the "formalization of mathematics": why don't we incentivize people to bring some clarity to the exposition? If mathematics is suffering from faulty proofs, then I have little sympathy. Make it simpler, organize the concepts. Muscular calculations only go so far.


There's the idea that pure math papers should not have a "Conclusion" section, where the author summarizes what has been done and says a few informal words. This convention doesn't seem to help the exposition but is nonetheless prevalent: if you haven't paid attention to the proofs and derivations, the authors aren't going to retell what happened in plain language.

e.g. look at this advice from Academia Stack Exchange, to a young aspiring mathematician:

"If you're relatively young and inexperienced and hoping for best results on the rapid publication of your work in strong journals, I would stick pretty mercilessly to the format: (i) strong introduction motivating your work and explaining clearly the value added both in the results themselves and the techniques of proof and (ii) the rest of the paper contains careful proofs of all the results, in a very clear, linear, easy to follow fashion, e.g. "Section A.B: Proof of Lemma C".


In other fields, writing survey articles is (or was, recently) an appreciated activity.

Survey articles are explicitly supposed to bring people up on the important results and current state of a subject.


> My impression often was that a lot of this stuff isn't so hard if you "get to it", but it's clouded in a lot of complicated language that makes it sound harder than it really is. I never dared to postulate that this is a more general problem of math, but your words sound like that's exactly what is happening.

I "only" have a masters degree in Math, but this sounds really wrong to me. Most advanced math concepts are complex and can take months to understand for someone with an unrelated math background (e.g., say, Lebesgue measure theory or Galois theory). I wouldn't even know how to start teaching something like that to someone who doesn't have a formal mathematical background.


I think the GP is trying to say that, once you get past all of the definitions and proofs, the underlying idea is almost always relatively simple.

I do agree that it's pretty hard to be rigorous without setting up all of this edifice, but I've also found that it's very easy (at least, as an applied mathematician) to take an idea and continue generalizing it until the original application becomes a very small corollary of the new statement—one that appears usually unmotivated and "magical."

In some sense, this is a wonderful thing: you can usually prove a much stronger statement using the same tools you used to prove the weaker one, but in another sense it makes many things unbelievably difficult to understand as a first- or even second-time reader.

To give an example, I was reading a book on solving dissipative PDEs as optimization problems. The book's main chapter starts with a theorem that is so general and so far removed from any one specific application that, even as someone who works in several related fields, I struggled to understand its significance. The book then spends the next several chapters showing how existence/uniqueness of solutions to a bunch of PDEs are all corollaries (with some work) of this theorem.

The problem I have is that the presentation would've been much clearer going the other way: start with the easy-to-prove existence theorems for the PDEs you care about (which mostly rely on simple tools and can be proven in a few lines) and build up the generalization to the "main theorem." That way, the reader doesn't have to guess the meaning (and/or marvel at the "ingenuity") of the original statement, since the motivation is perfectly natural and clear, and they certainly don't have to struggle through a proof without a clear ending. Plus, you usually get the nice side benefit that, by the time you present the most general statement, the proof is almost always immediate, modulo some small tricks.

While I do think that math requires a large number of definitions to work and to put everything on a rigorous footing, I also think there's this kind of weird way of writing where definitions and proofs are presented as "magical" and only shown to be right or useful later, rather than the other way around. This is fine for a few things, but I've found that most presentations that work don't follow this template (yet most mathematical papers I've read sadly do).


People will complain about that too. The introduction is "redundant" or "trivial". The book is a long, bloated slog. (Since what you're proposing will unavoidably increase the page count.) The author intentionally confuses students by alluding to some "main theorem" but waiting a hundred pages to say what it is, instead of putting it in front where it belongs.

For every complaint about a textbook, there is an equal and opposite complaint if it had done the opposite thing. At some point we have to accept that the underlying concepts inherently take time and effort to assimilate, that the easiest style to learn from is different for different people, and that the authors really did think about all these issues when they were writing. They clearly care about exposition, otherwise they would not have written a book in the first place, a thankless and financially useless task.


I mean, perhaps; though I've found it does quite the opposite. The idea is proposed front and center, sure (this approach does not preclude that), but far less time is spent explaining the motivation and doing mathematical yoga to clarify steps in a proof than is spent providing clear examples and sensible constructions.[1]

I am not, of course, saying that all textbooks (or papers) would be better served by this structure, but I think a surprisingly large number would. This may also be a consequence of the field I'm in, where most statements I've made have several natural generalizations which are all relatively straightforward to prove, but where presenting them all under one large umbrella with a very general construction makes the end result much harder to understand.

Either way, both approaches have their merits, but I (along with several of my peers and editors) have found one much harder to work with and read than the other. Though, I will again mention that this may be due to the specific fields I work in.

--

[1] I want to be clear that I'm also not advocating for the "big reveal finish", which many papers (and some terrible textbooks) fail miserably at. Neither of these mediums is a novel. But I am saying that an approach that motivates the construction and builds the picture in a natural way (instead of "here's the theorem; let's explain why everything is a trivial corollary of it") lends itself to clearer exposition.

Perhaps either presentation can be made clear, but I've found it's very, very difficult to make the latter work in a way that readers can easily understand, while the former falls almost naturally from most research notes I've written and keeps people more engaged.


> People will complain about that too. The introduction is "redundant" or "trivial". The book is a long, bloated slog.

Some people will, but they probably aren't the target audience for the book, and most people won't.

You don't teach kids who are learning to count about 0 by explaining group theory and identity elements.


I promise you that explaining a technical advanced math subject is less obvious than this, and that even students that come in with the same background will strongly disagree over which books are easiest to read.

Maybe it's obvious that sketching only a little formalism is the best. But then some of your statements end up vague, and students stumble on practice problems where they really need to understand the details of the formalism.

Maybe it's obvious that stating things as simply as possible is the best, even at the cost of generality. But then your statements won't be powerful enough when people try to go beyond the few simple examples you give, and people will feel betrayed.

Maybe it's obvious that a historical approach is the best, for motivation. But historical papers are extremely confusing to read. They're full of redundant checks of things we know are obviously true today, and archaic notation and jargon. People will get annoyed at slogging through ten pages of text to confirm something that is obvious in a modern formalism.

Maybe it's obvious that a modern, clean formulation is best. But then the definitions are unmotivated, seem to come out of nowhere, there's not enough connection to the historical development, and your book won't be compatible with older books.

Maybe it's obvious that no formalism at all is best; you just show how to solve problems concretely. But this turns what could have been a few simple principles into an enormous bag of ad hoc tricks people need to memorize. Textbooks that do this are derisively referred to as "cookbooks".

It is not obvious how to write a book.


> It is not obvious how to write a book.

I agree, and I also agree that readers will differ in which books they find easiest to read. Different people learn in different ways.

However, in some respects, it is obvious how not to write a book. Mathematics is not magically exempt from effective technical writing skills, nor from all we have discovered about cognitive psychology and how people learn, nor from anything else that goes into creating a generally good presentation.


> Most advanced math concepts are complex and can take months to understand for someone with an unrelated math background (e.g., say, Lebesgue measure theory or Galois theory).

Certainly some advanced math concepts require a lot of foundations and therefore take time to build up to them, but usually the individual steps along the way should be quite accessible if well presented.

To follow up on one of your own examples, suppose we have someone who understands the basic ideas of abstract algebra, say someone who has done a first course in topics like groups and rings as part of their undergraduate studies. (I note in passing that even a keen school student considering studying maths at university might be able to pick up these concepts with good presentation in a few days.) Then suppose that person is curious about why there is a quadratic formula, but people claim that not only do we not know of any corresponding quintic formula, in fact we can prove that no such formula is possible.
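
For concreteness, the two statements being contrasted are the familiar quadratic formula and the Abel–Ruffini theorem; in LaTeX:

    % The quadratic case: every ax^2 + bx + c = 0 (with a \neq 0) is solved by
    \[
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
    \]
    % The quintic case (Abel-Ruffini): there is no analogous formula, built from
    % the coefficients using +, -, \times, \div and k-th roots, that solves the
    % general equation
    \[
      x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0.
    \]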

Explaining the concepts and key results from Galois theory needed to get to that proof shouldn't take months. There are lots of technical terms like solvability by radicals and field extensions and quotient rings and so on, but if these concepts are properly explained, none of them would in itself be more than a typical undergrad student could understand after a few minutes. Building up the argument would of course take longer than understanding any given step, but it's not prohibitively difficult or complicated.

Unfortunately, in many university lectures and textbooks, those concepts are introduced using only a mixing pot of other technical terms and notation, without any sort of motivation or examples to put them in context. There's no intuition behind them. There's no frame of reference. So then of course the final proof of the unsolvability by radicals of a general quintic just feels like word and symbol soup that you have to learn by rote, because how could anyone possibly have derived it in any sort of deliberate or systematic way? And worse, not only does this result become "magic", the student lacks the ability to use the mathematical tools that support the result for any other purpose either.


The thing is that actually doing mathematics is a process where you take your intuition and find a way to formalize it into definitions and proofs and computations.

A toddler could understand the difference between a smooth curve and a spiky curve, but to actually get a handle on them computationally, the best tool that we have is Calculus -- and even after decades of people iterating on how to teach it, Calculus is still a mystery to most undergrads who muddle through it.

Similarly I could give you some intuition about manifolds by telling you that they're kinda like surfaces that can have higher dimensionality, but until you're able to actually understand and work with the definition of "locally homeomorphic to R^n", you're not going to be able to do anything with them.
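
The definition itself is short to state; here it is in LaTeX (glossing over the Hausdorff and second-countability conditions that many books add):

    % A topological n-manifold: a space that locally looks like R^n.
    % (Many texts additionally require M to be Hausdorff and second countable.)
    A topological space $M$ is an \emph{$n$-manifold} if every point
    $p \in M$ has an open neighbourhood $U \subseteq M$ together with a
    homeomorphism
    \[
      \varphi : U \to V, \qquad V \subseteq \mathbb{R}^n \text{ open},
    \]
    i.e.\ $M$ is locally homeomorphic to $\mathbb{R}^n$.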

I think it's bollocks, this idea that mathematicians are obfuscating easy concepts with complex language. The language doesn't exist to get research grants and gatekeep; it's there because you need some succinct way to describe objects and concepts without running through a full definition every time.

Obviously you can always improve communication, and from my experience most mathematicians try pretty hard to do that, especially in the classroom, but it's not easy.


> IMO the final paper had too much technical jargon, which obscured some simple concepts.

This is symptomatic of IMHO the biggest single problem with the world of mathematics today. This discipline should be about developing ideas based on rigorous foundations and logic, which is a useful and important purpose. Once you have understood those ideas, even "advanced" results often seem quite simple and intuitive, and we can take that understanding and apply it to other work if it helps us to make useful progress. However, the amount of needlessly obscure terminology, poor notation and just plain bad writing in formal papers makes the whole field absurdly inaccessible, even to many who might have no trouble understanding the underlying concepts and important results.

Just imagine what would happen if we tried to write engineering specs or software the way postgraduate mathematics research is done. We'd be trying to debug source code where every identifier was a single character, taken from one of three or four different alphabets, with several of them looking similar enough to mistake one for another, with a bunch of combining accent modifiers on top that were used to fundamentally change the semantics of the program, interspersed with comments by someone who needs a remedial class in basic writing skills, full of technical terminology that is defined in terms of other technical terminology from other software packages, except that sometimes the meaning is subtly different in this context but that isn't noted anywhere on the screen at the time, resulting in needing to spend half an hour doing a depth-first-search of all the definitions only to find that a function whose only identifier is the name of the developer who wrote the second version (because the first person to work on it already has another function bearing their name) is actually equivalent to what a programmer would write as

    const DAYS_IN_WEEK := 7
I write this as someone who studied mathematics to a high level and has touched on the field many times since in connection with heavily mathematical software development. It's the worst sort of closed-world gate-keeping, and we could do so much better, but sadly inertia and vested interests are not our friends in this matter.


Are you sure, though, that your extra stuff was actually correct and on point? AFAIK in math you have the primitive notions, the axioms, and then all the previous definitions and proofs. Maybe I am imagining something different from what actually happened, though; an example would be good, but it may not be possible for you to remember or express it in a comment here.


Software is very much in crisis because of this exact problem.

Most of us build on top of a mountain of existing work; sometimes just setting a pebble on top and we have a new thing. But no human alive has seen, could remember, or could understand each and every behavior down in that mountain, from pixels on the screen to pulses on the PHY all the way to gates inside their processors.

So you read about the APIs you need; you set your pebble down with a little glue or duct tape; you hope you've connected it correctly; you hope the rest of the mountain works how you want; you hope you didn't miss any use cases for interfacing to the mountain; and you hope it doesn't have any undocumented surprises. Abelson and Sussman called this engineering by poking (1).

Of course, if you want to make something work fast with minimal work, this is great, it's the component future we dreamed about. But it also means we are clueless.

1. http://lambda-the-ultimate.org/node/5335


In this case, it likely applies to every aspect of modern-day life. Man is no longer a generalist, and we are forced to build upon the foundations others have laid, and hope they work.


No one human knows how to make a pencil: https://fee.org/resources/i-pencil/


Beautifully put, with an afterword from Milton Friedman putting it into context.


> I'm almost done with an undergraduate degree in math and basically have no idea what ~90% of the research is about

Find a grad student who is just about done with their studies in, say, analysis, and the odds are they understand jack squat about anything in current research in number theory either.

This post is in support of your point :)


That doesn’t mean that it’s wrong, it means that you need to read more. In a lot of fields, undergraduate material doesn’t qualify you to read papers, a PhD does.

If all papers actually were written to undergraduate level, many would be 500 pages long. Who has time to write a book-length exposition of the basics of their field every time they publish? I mean, have you ever published a technical work? If so, did you go out of your way to make it completely accessible to high schoolers? If you didn’t, you get why professors don’t do the same for undergrads.


A B.A. in math gets you to the forefront of mathematical research as it was circa 1912. (Source: https://slate.com/human-interest/2003/05/is-math-a-young-man...) This is purely my impression as somebody who switched away from pure math early in their career, but: amazingly, in pure math, unless you are really advanced, you are not even that close to engaging independently in cutting-edge research until well after your Ph.D. Math has been continuously researched by all the smartest people for hundreds of years (thousands, depending on how you view the dark ages). The low hanging fruit got gobbled up centuries ago. The same could be said for physics and various other "classical" scientific fields.


> The same could be said for physics and various other "classical" scientific fields.

I think mathematics is in a different class than physics or other scientific endeavors because there is meaningful mathematics going back to ancient Greece whereas I am not aware of any physics that was known thousands of years ago that is still relevant.


Suppose a field so narrow that only two human beings can understand it. How could society judge the progress of the field or the merit of a new project? I don't think it is possible to avoid collusion or groupthink in such a field, and it would be difficult to differentiate the two.

So does that mean we should curb some superhuman thinking and ask those people to explain it better to other human beings? I'd think so. It is all about balance or trade-offs, but at some point a field becomes too esoteric for society, and it might need to slow down short-term progress to ensure consistent long-term progress.

It seems the math field as a whole (or more specifically the number theorists?) needs more good communicators who can link the cutting edge to the general public. Math academia could give incentives for such activities in addition to pushing the boundaries.


Scientists and mathematicians already spend an extreme amount of energy trying to think of the shortest, cleanest possible route for people to learn their fields. The resulting documents take years to create, and are called textbooks. (At least, the good textbooks are like this.) Often people will work for thousands of hours on their books, essentially for free. Students go from outsiders to participants by reading them, all the time.

The problem is that things genuinely are hard, and there's only so much compression you can do before you're reduced to meaningless handwaving. Textbooks are already compressed by a factor of 100 relative to the original research literature, sometimes even oversimplified. Demanding something a factor of 100 better than a textbook is going to get you something like one of those useless "learn C++ in 24 hours" paperbacks.


That assumes there is some link to the general public, to the real world. Lots of math has absolutely no practical use, yet. Why bother explaining the cutting edge when even those who understand it cannot fathom how it would ever be used?


The above statement is true under the assumption that the topic being discussed is truly above the level of understanding of undergraduates. For the majority of papers, that assumption is not true, though. Papers being hard to read has more to do with the idea that if a paper isn't filled with jargon, its chance of getting published is greatly reduced.

Papers are not rewarded for being easy to understand. People and organizations get rewarded for publishing "complicated" papers. That misalignment is the issue.


Depends on the field. Many CS papers can be understood by undergrads because CS is a very young and very broad field; the path to the frontier in any direction is short. Theoretical physics and math are not like that. They have been building cumulatively for centuries.


> Theoretical physics and math are not like that. They have been building cumulatively for centuries.

For state-of-the-art research in obscure specialisms, of course you're right.

However, numerous papers and books cover more mundane subjects, and there is no good reason they could not be readily accessible to anyone with an undergraduate-level background, or at least to someone with a master's who has delved a bit deeper into the field of interest. Often the problem isn't dumbing down the material; it's simply poor communication skills.

Put another way, while our cumulative understanding of mathematics now covers a very broad range of areas, that doesn't necessarily mean the depth to reach a good understanding of any particular specialism has increased at the same rate. On the contrary, given that even the most specialised of theories must be something that an individual can come to understand and build upon within a single career, there is an inherent limit to how deep our understanding of the subject can go. That limit fundamentally depends on how efficiently we can build the layers of more elementary understanding on which the state of the art must rest.


The problem is really that a PhD doesn't get you to reading level either. It might get you there in only one specialized subfield. And there might only be a few dozen other people who are also at reading level for that area. How much do you trust that small group to catch every single subtle, abstract error in papers published in the area?


I don't, I'm just saying that trying to make every paper accessible to undergrads (or high schoolers, or elementary schoolers, or whatever) is going to bloat their length by so much that errors become harder to check, not easier.


Not trying to imply it's wrong -- if anything, research should be written to a professional level. I also don't think the background should take you from high school calc to topology or differential geometry in one fell swoop.


I feel the same way when I look at many of these papers.

I get that I don't have a post doctorate in super math or whatever the bleeding edge academic ivory tower game is these days, but at the same time I feel that if 90% of the research cannot convey any practical description of real-world implications, or at least how it relates to other abstract things that DO have real-world implications, maybe there is some BS house-of-cards fantasy game going on.

From what I understand, there are some financial benefits to cranking out research papers. Perhaps researchers are incentivized to build layer upon layer of abstraction so they can spin any tale they desire in order to turn a quick buck? What better field than math? The purest and most non-tangible of fields.

Another perspective I have on this - Virtually all computer technology or software papers I read are typically understandable to some degree, despite my not having the highest credentials in my field. It would seem the constraint of "if this is in a compsci paper, it needs some reference implementation pseudocode" seems to help keep people honest. If the computer can run it, at least you can view things in a more concrete (albeit still somewhat abstract) manner. Especially imperative code examples. These quickly take very abstract algorithmic concepts and transform them into a step-by-step understanding. If the paper is total bunk, you can usually tell pretty quickly from the results on the computer.

Automated proofs, where feasible, seem like a very reasonable requirement for published math papers. My ignorance at the higher levels of this field fails to inform me if all math papers could be proven automatically, or if there is some more 'complex' realm of reasoning unsuitable for classical/quantum/etc verification. Perhaps this should be a good constraint regardless - If you can't implement your algorithm/proof on a computer, where are you headed with it anyways? Surely piling another 500 papers worth of layers on top isn't going to move you any closer to a practical outcome.


Sounds like a sales pitch for Lean (Microsoft's theorem prover), even though Coq [1] is the standard tool for the UniMath [2] project.

[1] https://en.wikipedia.org/wiki/Coq

[2] https://github.com/UniMath/UniMath


Well, having used both, Lean really is a lot more ergonomic. UniMath is mostly category theory; other parts of mathematics are woefully underdeveloped.


That's fair.

I actually find Coq rather painful to use. It gives me the distinct impression that it has been designed/implemented by somebody who has never written human-computer interfaces before.

When it comes to adapting humans to Mathematics, or adapting Mathematics to humans - I prefer the latter.


"I believe that no human, alive or dead, knows all the details of the proof of Fermat’s Last Theorem. But the community accept the proof nonetheless," Buzzard wrote in a slide presentation[...]

That's 100% clickbait. People know the proof, and the proof helped to create entire new fields. By the way, there are new and shorter proofs of the theorem.


There’s a great discussion of this on Buzzard’s blog.

https://xenaproject.wordpress.com/2019/09/27/does-anyone-kno...

Read the comments too.


One of the commenters wonders why we can trust a proof assistant more than a human:

>> if one needs a program to check all the proofs, who’s gonna check that program? Another program-checking program? And a program-checking-program-checking program, etc.?

To which Buzzard's reply is that, well, computer scientists have done a thorough job proving the correctness of proof assistants:

>> To check that Lean has correctly verified a proof, we don’t need to check that Lean has no bugs, we just need to check that it did its job correctly on that one occasion, and this is what the typecheckers can do. This is a technical issue and you’d be better off talking to a computer scientist about this, but they have thought about this extremely carefully.

Reading this as a computer scientist whose field of study is built on automated theorem proving and logic programming (not what proof assistants do, exactly, but close, and some proof assistants even use resolution theorem proving, I understand), this is placing way too much trust in computer scientists. Every bit of theory on automated theorem proving that has ever been published is like maths: theorems are proved by hand, on paper, by humans. And some proofs can be quite convoluted. There's nothing approaching the complexity of the mathematical proofs that make Kevin Buzzard worry, but still, the proofs are not trivial and there is plenty of scope for error.

So I'm afraid that even with computer-aided proofs, we're still building castles on sand.

Which is quite shocking if you think about it. We think that, maybe we can't trust our minds to know anything with any certainty, but we can trust computers to be flawless in their computations. But how do we know that with any certainty, if we can't know anything with any certainty?


Luckily, this is a solvable problem. A theorem prover is software, so you can apply formal methods to it, and prove that the software performs its stated function. The CakeML project does this, and I'm working on a bootstrapping theorem prover (Metamath Zero) to do this in a few hundred lines of code. So it's not as hopeless as it seems. You just have to have a really efficient and simple checking algorithm and run it over a big computer generated proof of correctness, and it all scales very well. Then you can take that verified theorem prover and apply it to all the regular math we care about.


But formal methods are also supported by theory where any theorems have proofs done by humans, which may contain mistakes. It's humans all the way down, see.


Also, that is quite a remarkable and provocative statement given that Andrew Wiles is still very much alive.


Sure, Wiles obviously understands the main thrust of his proof. But one could argue that Wiles' result depends on lots of other results, which in turn depend on other results, and so on through decades and decades of work, ultimately going back to the foundations of mathematics. Neither Wiles nor anybody else can claim to rigorously understand all of it. You can imagine this as a tree, with Wiles' work as root, and his dependencies as ancestors, and so on. An error at a lower level of the tree could, in theory, invalidate the root node.

I do agree with Buzzard that it's hard to be sure. I've definitely read papers where a critical argument isn't well written or what is written seems wrong. However, if there are low-level errors, I suspect that with some work things could be patched up.


Right, speaking as a lapsed mathematician, I definitely see errors or gaps in published work. Wiles’ original FLT proof had one. And yes these can generally be patched up. I’m not quite as alarmed as the author is, because generally a major false result would have all sorts of alarming ripple effects and implications which would be pretty easy to spot. FLT is an extreme example where literally anyone with a calculator could in theory disprove it. The fact that no one has suggests to me that it’s likely true.


You don't need to understand everything. People spend their whole lives creating new proofs. Each one is a new perspective on a subject. And if the first one was made of millions of nodes, another one can be made of only two. Take the index theorem: there are proofs that have nothing to do with each other; some are very long, some aren't.

And finally, from the beginning of math, people have misunderstood their own theorems; that doesn't mean their students won't do better.


I think the intended implication is that Wiles has not read or understood all of the proofs he built on.

If someone said, "nobody dead or alive understands all the details of Windows" I doubt anyone here would find it controversial. It's too big to fit in one person's head.


Buzzard asked Wiles himself, who stated "that he would not want to sit an exam on the proof of Langlands–Tunnell".


This short pdf which the article cites is more useful than anything the article itself contains:

http://wwwf.imperial.ac.uk/~buzzard/one_off_lectures/msr.pdf



Summary: Motherboard interviewed an attendee at the Interactive Theorem Proving conference who says that he thinks Interactive Theorem Proving is important. Also, professors specializing in proof theory think their tooling may become important but "the technology still needs improvement".


I'm currently working on the formalization of Catalan's conjecture (or Mihăilescu's theorem) in Isabelle/HOL/Isar.

My personal impression is that more formal verification is necessary in math, and it should be taught to every student of mathematics in introductory courses, because it gives an unforgiving but also rewarding learning experience with formal proofs.

Feel free to ask questions.


How much research is being done on making these tools more user friendly? I spent a little bit of time using Coq about a decade ago, and while I agree that it's very important to make things like that widely used, there was a pretty steep learning curve.


I can't answer how much research or work is done to make the tools more user-friendly, as I'm rather exclusively on the user side. There exists extensive documentation in the form of multiple PDFs, but they are of little help to me as they lack useful examples. Sadly, Google thinks every possible question about Isabelle/HOL is answered by these PDFs, as they are often the only relevant search result.

However, Isabelle/HOL/Isar allows proofs to be written in a very human-readable way, so the "documentation" I use is the existing library and how things are done there. There is a convenient Ctrl+left click action on terms to jump to their definition. That is very helpful for navigating unknown libraries. In comparison, I couldn't read and make sense of the proofs I've seen in Coq.

There is a learning curve with Isabelle/HOL/Isar. It can be extremely frustrating to not know how to express "simple" arguments or definitions. But for someone familiar with manual "formal" proofs in mathematics, most of it can be learned in a few hours under guidance, like in a weekend seminar with a couple of hours on each day.

I'm learning rather autodidactically, by trial and error. I think there would be value in a series of blog posts like "This is how you define a group" or "This is how you reason in nonclassical logic" or "This is how to prove the fundamental theorem of algebra", all applied to Isabelle/HOL/Isar. There are often multiple ways to define things, and to be honest, I don't know the specific advantages and disadvantages. For example, I think you can introduce rings as type classes or as locales or using axiomatizations, and possibly more. This is one example of stuff I don't know how to manage in the system, but there is already a lot of structure defined, and I can reuse that.
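
As a rough, untested sketch of the locale flavour (my own names; the HOL-Algebra library in the Isabelle distribution actually does this differently, with explicit carrier sets), "this is how you define a group" might start out like:

    (* A locale-style sketch of a group; free variables in the assumptions
       are implicitly universally quantified. *)
    locale my_group =
      fixes mult :: "'a => 'a => 'a"
        and neutral :: "'a"
        and invert :: "'a => 'a"
      assumes assoc: "mult (mult x y) z = mult x (mult y z)"
        and left_neutral: "mult neutral x = x"
        and left_invert: "mult (invert x) x = neutral"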


To mathematicians,

Welcome to our world!

https://xenaproject.wordpress.com/2019/09/27/does-anyone-kno...

> Actually working through some of [the details], when formalising them, made me understand that some of these issues were far bigger and more complex than I had realised.

> In the perfectoid project there is still an issue regarding the topology on an adic space, which I had assumed would be all over within about a page or two of pdf, but whose actual proof goes on for much longer

Yeah, that's how software developers feel, all day, every day. That's why we all moan about being asked to give estimates for things. You don't really understand a thing until you've formalised it in code, and often things that look simple and easy turn out to be unexpectedly deep.

Mathematics is a field that's been doing the equivalent of writing pseudo-code for thousands of years. There are endless reams of pseudo-code that looks like it might work, but as any programmer who's ever written pseudo-code knows, whether it actually runs when you try it out for real is anyone's guess.

So now mathematics is starting to experience the pain of multiple incompatible programming languages/ecosystems. There's Lean, Coq, Isabelle/HOL and some new JetBrains language too. Soon they'll start to discover the value of 'foreign language interop', rigorous code review, comments, style guides, design documents, package management, automatic optimisation etc.

Have you ever looked at a proof written in one of these languages? All code in these languages is a hot mess judged by the standards of professional software developers: tons of abbreviations, single letter variables, uncommented, ad-hoc module structure etc. I'm sure we'll eventually see mathematicians having raging flamewars about what clean style is and tabs vs spaces.

Luckily for mathematicians though they work at their own pace and are under no deadline pressure. They'll figure it out.


Vice really does a disservice to the author. "All" math is not wrong, but I sure wish theorem provers were the preferred method of formalizing mathematics. But theorem provers really aren't quite ready for prime time either. But it sure feels like a lot of verification of mathematics is somewhat repetitive.

I think that programming/CS is to mathematics what the Neolithic revolution was to human culture. It allows for specialization in the stack. People can obsessively optimize different parts of the stack to everyone's benefit.


Suggestion to rely on AI for proof verification is just laughable. Neuron weights instead of formal definitions. So reliable.


Proof assistants have nothing to do with neural nets or optimisation. They are entirely deterministic and return an absolute "true" or "false" result, not an approximation.

https://en.wikipedia.org/wiki/Proof_assistant

This is Good Old-Fashioned AI, through and through.


It's not really like that. NNs are only used when searching for new proofs or for steps which can be used to fill in details; the verification of a finished proof is done by a classical program via a series of substitutions which are very straightforward and spelled out in detail.

Here is an example.

https://arxiv.org/pdf/1608.02644.pdf
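
To make the "classical program doing substitutions" point concrete, here is a toy sketch of my own (not the system from the linked paper, with all the real bookkeeping stripped away): a checker where every proof line is either an assumed formula or follows from two earlier lines by modus ponens, and "checking" is nothing but comparing formulas for syntactic equality.

    -- A toy proof checker, purely for illustration.
    data Formula = Var String | Imp Formula Formula
      deriving (Eq, Show)

    data Step = Assume Formula        -- an axiom or hypothesis
              | ModusPonens Int Int   -- 0-based indices of two earlier lines
      deriving Show

    -- Returns all established formulas in order, or Nothing at the first bad step.
    check :: [Step] -> Maybe [Formula]
    check = go []
      where
        go proved []                = Just (reverse proved)
        go proved (Assume f : rest) = go (f : proved) rest
        go proved (ModusPonens i j : rest) =
          case (line i, line j) of
            (Just (Imp a b), Just a') | a == a' -> go (b : proved) rest
            _                                   -> Nothing
          where
            done   = reverse proved
            line k = if k >= 0 && k < length done then Just (done !! k) else Nothing

    -- Example: from "P -> Q" and "P", conclude "Q".
    example :: Maybe [Formula]
    example = check [ Assume (Imp (Var "P") (Var "Q"))
                    , Assume (Var "P")
                    , ModusPonens 0 1 ]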


You seem to think "AI" means machine learning. In this case it does not. Theorem prover systems don't use neural nets.


Okay but they are formal systems and don't have anything to do with AI either, do they?


That's not the suggestion. Proof verification is easy (for a proof written in a formal system). Proof search is hard. Many (most/all?) proof assistants heavily use heuristics to guide the proof search. Machine learning on a database of proofs could lead to superior heuristics and make proving stuff in the formal systems easier.

Proving stuff in these systems is still harder than natural language proofs, so making them easier could increase adoption, which would make mathematicians' results more robust/rigorous.


Note that this is editorial license on the part of the writer; Buzzard is proposing the use of interactive theorem provers, which only use a small amount of 80's style AI (backtracking search and higher order matching). No one in ITP is seriously using modern neural net based AI in real theorem provers yet, although there are several research teams working on it.


The headline is certainly overblown, but there are probably errors out there. The interesting question is where we should be looking for them. All of the major results in an undergraduate or introductory graduate textbook have been studied often and thoroughly enough that we can be reasonably confident there are no errors in their proofs. There are errors in very new or very obscure material, but those tend to get corrected as more people start paying attention to them. It therefore seems like the search should be concentrated in recent results that are becoming more useful but still aren't standard.

On the other hand, we do need to build a standard library of results that can be used to prove those newer results. You can't do much in analysis without basic results on metric spaces, for instance.


There is a big grey area between "not 100% correct" and "wrong and harmful". Like a faulty computer program that produces correct results in 99.999% of cases, a wrong mathematical statement can still yield useful results in practice. You just don't want to run into those pesky edge cases where it indeed is wrong.

Having a database of all mathematical proofs where all of them are checked for validity, however, is a very useful tool to actually find those edge cases where a proof is wrong and give future mathematicians a solid foundation.

Using AI as an extension of human capabilities is a given to me. Like with heavy machinery, we don't want to do the heavy lifting. We want to steer it properly to gain most benefit.


> a wrong mathematical statement can still yield useful results in practice

Yes and no. For some basic conjectures this might be the case, but one of the greatest utilities of certain theorems is how they can be applied to completely new fields. If a theorem used in this manner is not actually true, then the proof is invalid -- and it might be the case that (several proofs down the line) the conjecture you've ended up proving is almost entirely invalid.

There's also the slippery slope of "Principle of Explosion"-type problems with accepting "mostly right" proofs -- it means you will be able to prove any given statement (even ones which are outright false).
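
(For reference, the principle of explosion is the observation that from a single contradiction every statement follows; a standard derivation, in LaTeX:)

    % Ex falso quodlibet: from a contradiction P and not-P, any Q follows.
    \begin{align*}
      &1.\; P          && \text{(from the contradiction)}\\
      &2.\; \neg P     && \text{(from the contradiction)}\\
      &3.\; P \lor Q   && \text{(disjunction introduction on 1)}\\
      &4.\; Q          && \text{(disjunctive syllogism on 2 and 3)}
    \end{align*}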


Soo... Unit testing / CI for math proofs? If it can be done in software, great.


To my knowledge the hard part is writing the input for the software. While in text you have a bit of leeway to gloss over some details, with a computer program you must, indeed, write every last relevant detail down, and you must get it right. This is very heavy work.


But it's once-off work. Thereafter the computers can take over.


It's actually type checking. Per the Curry–Howard correspondence, proof checking is equivalent to type checking, just for a much more sophisticated type system than in an ordinary programming language.
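
A tiny illustration, assuming we read types as propositions and programs as proofs (ordinary Haskell, nothing exotic):

    -- Reading the type as the proposition "A and B implies B and A",
    -- this total function is a proof of it; the type checker accepting
    -- the definition is the proof check.
    andComm :: (a, b) -> (b, a)
    andComm (x, y) = (y, x)

    -- Modus ponens is just function application:
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x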


I was thinking it's a lot like property-based testing


I had these fears about my dissertation work, but then realized that the “elders’” papers were highly cited because they contained important machinery which could be modified and refined to prove other results... or the paper contained an important idea which went on to be proved in a variety of ways. Maybe it’s different outside of PDEs, but I doubt it.


I just can't help feeling that either the "true" should be "correct", or the "wrong" should be "false". But I guess English is not a strongly typed language.


This just seems like a native advertisement for Lean...


I am not that perturbed by whatever framework they use; tools to translate the bulk of proofs from Lean to Metamath to [...] and back will be made.

The more important thing is that the more humans are aware of the existence of this tech, and the more who actually use it, the better off society will be in the long run.

The following may seem totally impossible today, but it seems a lot less impossible to people who have been thinking about this for a long time: once sufficient mathematics has been translated, people can start adding physics postulates. Once most physics has been inserted, we can start formally verifying (up to the validity of postulates on top of axioms) not just the proper operation of digital circuits etc. but also power plants and so on. And once interacting with proof systems becomes as widespread as alphabetic literacy, people might one day be able to formalize and vote on their requirements of society and policy, see predicted perverse incentives, or decide that policy can only be enacted when facets of its consequences can be proven. We might one day not vote on promises of strategy, nor on strategy itself, but rather on outcomes!

I know it sounds absurd.


I’m sure this will help with proving papers correct. There will still be the problem of proving that the software proves what it is intended to prove.


Headline is of course absurd hyperbole that does not belong here.

> "I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."


Yes, the title ought to be changed. At least, "all" to "some".

But he does note:

> This overreliance on the elders leads to a brittleness in the understanding of truth. A proof of Fermat's Last Theorem, proposed in 1637 and once considered by the Guinness Book of World Records to be the world's "most difficult math problem," was published in the 1990s. Buzzard posits that no one actually completely understands it, or knows whether it's true.


Well, they also quote him as saying:

"I’m suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I’ve seen them wrong before,”



Extreme clickbait headline. The subheading is '"I think there is a non-zero chance that some of our great castles are built on sand," he said, arguing that we must begin to rely on AI to verify proofs.'


But, more consistent with the alarmist headline:

> “I’m suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I’ve seen them wrong before,” Buzzard told Motherboard while he was attending the 10th Interactive Theorem Proving conference in Portland, Oregon, where he gave the opening talk.


Someone who says that is, ironically, not concerned with the details. All of published math is wrong? No, a lot of it is understandable and verifiable by advanced undergraduates.

Most of the fundamental theorems in each field are understandable by humans. As you get more and more abstract, and get into more and more obscure areas, you start to see non-understandable proofs built on top of non-understandable proofs. There's a danger that those are wrong, but when they collapse it just means that one professor has wasted their life, not that all of mathematics will suddenly collapse.


The headline is clickbait. A more measured debate about the issues is here https://xenaproject.wordpress.com/2019/09/27/does-anyone-kno...


We've replaced that with a more measured sentence from the article, in accordance with the HN guidelines ("Please use the original title, unless it is misleading or linkbait")

https://news.ycombinator.com/newsguidelines.html


For anyone interested in this topic: I have a prediction that the time is right for a new math, one that starts from just 1 and 0 and builds every symbol up in a rigorous way.

We're working on the infrastructure to make such a thing easier, but the project does not currently have a leader: https://github.com/treenotation/jtree/issues/83


Disregard cranks, acquire Metamath: http://us.metamath.org/downloads/metamath.pdf


Metamath is awesome if you're willing to slum it :)

https://jiggerwit.wordpress.com/2018/04/14/the-architecture-...


How long does it take your preferred proof assistant to prove things? Metamath's standard set.mm takes only about 10s to prove everything.


Actually if you use the smm verifier that's been cut down to about 800ms. :) However, most of the theorem provers of today have been built on the philosophy that performance doesn't matter, or at least is secondary to ease of use, mathematical cleanliness etc, from the functional programming community. It turns out that once you make this decision it's difficult to get that raw speed back, even if you start worrying about performance later, and the HOL family provers make it worse by defining correctness in terms of the running of an ML program, which bakes the runtime of the ML system into the proof checking time.

When you combine this with the fact that these ML programs are not proofs but proof scripts, that perform a lot of "unnecessary" work like searching for a proof rather than just going straight for the answer, it suddenly begins to make sense why these systems take on the order of hours to days to check their whole libraries.

Coq and Lean are somewhere in the middle, because they have proof terms, but the logic itself still requires some unbounded computations. Checking a proof here is often fast, unless you make too much use of computation in the logic. But people often don't care about proof terms, and still store the proof scripts, which are as slow as ever.

Metamath is in this setting somewhat unique in eschewing proof scripts altogether, or more accurately, inlining proof scripts immediately on the author's machine. The resulting proofs are often comparatively long and verbose, but I would argue this is only a display matter, since all the other provers are doing the same thing, they just aren't showing it.


My preferred proof assistant is definitely Metamath, I think it just needs a big company to come along and build an awesome front end for it and it will become the standard.


Very interesting, thanks! This is definitely along the lines of what I’m thinking. It uses EBNF; I think if we switched it to tree notation it would be much simpler, tighter, and more self-contained.


Forgive me, but why is the TreeNotation/jtree library the right place to craft a new "rigorous" mathematics language?


It’s a good question. This paper hints at the reason: https://github.com/treenotation/jtree/blob/master/papers/pap...

First, jtree is a bit orthogonal as that’s just an implementation. The github issue is there for convenience, it’s arbitrary.

The reason for tree notation is that it’s a level up from binary. Binary notation does not give you the ability to define new symbols. Tree notation is a 2d binary. You can define new abstractions. It is a bridge between human and machine languages.

Take any concept in mathematics, such as “derivative”. How would you define such a concept, starting only from 1s and 0s? Tree notation gives you a method to do that, in a way that’s not only efficient (noiseless), but also gives you practical tools, like complexity counting.



