
That wasn't really Erdős's issue with the proof. Unsolved problems in mathematics are often important not merely because other problems depend on them; look at the Riemann hypothesis to see how much work has already been done just assuming its truth. We're interested in these problems because we hope the proof will teach us new tools, grant new insights, and possibly spark further problems and generalizations.

A proof that requires a computer to understand it is not helpful for these goals. The proof's veracity is not in question.

I was responding to the "I assume the proof is true". There's really no reason to just assume that.

Of course he's right that the proof was lacking in insight for human mathematicians. That's particularly important in combinatorics, which more than other areas of math grows by the accumulation of techniques rather than the accumulation of results.

And yet you assume that the problem was correctly stated in Coq, that the authors were using the software correctly, that they ran it more than once to account for random bit flips, and that the authors are not simply lying... All of those assumptions are reasonable, but so was Erdős's.

Proving the theorem in a theorem proving engine adds credibility to the result, just like reproducing an experiment in science adds credibility to the experimental outcome.

The random bit flip comment is of course silly, since anyone can mechanically verify the Coq proof; it doesn't have to be those who created the proof. What has to be assumed is the correctness of the Coq kernel, but all uses of Coq contribute to that, not just this use.

I have no idea what you are even arguing anymore. I made the "bit flip" comment specifically to show that even with the most credible proof, there are still always at least some small (and possibly completely trivial) implicit assumptions about its actual correctness. One of those trivial assumptions is that the authors ran their code more than once (most likely just as part of writing it and getting it ready for publication). Or that anyone else did, as you say.

And so I simply don't get why you're so hung up on Erdős's assumption of credible mathematicians making a credible proof. Are you seriously insinuating that Paul Erdős, of all people, was not aware of everything you're saying here?

> even with the most credible proof, there are still always at least some little (and possibly completely trivial) implicit assumptions made about the actual correctness of the proof

This is true of any proof, not only computerised proofs. Sometimes maths journals publish incorrect proofs because a person made a mistake. I doubt an incorrect proof has ever been published on account of a hardware bit-flip.

I'd be more worried about the kind of deterministic failure that manifests in every run, such as a bug in the CPU (the famous Intel floating-point division bug, say), or in the operating system, etc. That could be mitigated by running on a variety of platforms, I suppose.

And the silly hardware bit flip complaint is easily addressed: run the program N times, and reduce the chance of the error by a factor that's exponential in N.
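To make the arithmetic concrete (the per-run error rate here is purely illustrative, not a measured figure): if each verification run can independently suffer a spurious failure with some small probability p, then all N runs agreeing on a wrong answer has probability p^N.

```python
# Illustrative numbers only: assume each independent verification run has
# some tiny probability p of a hardware fault corrupting the result.
p = 1e-9           # assumed per-run error rate (made up for illustration)
N = 5              # number of independent runs, ideally on different machines

# For every run to be corrupted the same way, all N independent faults
# must occur, so the residual error probability shrinks exponentially in N.
residual = p ** N
print(residual)    # vanishingly small even for modest N
```

This is why re-running the checker a handful of times, preferably on different hardware, makes the random-bit-flip worry moot.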

Large randomly generated primes are produced by a randomized algorithm like this, with a test that a composite number passes with probability at most a constant p < 1 per round. The chance of error there can likewise be made exponentially small.
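A standard instance of this pattern is the Miller–Rabin primality test: a composite number survives one random round with probability at most 1/4, so k rounds bound the error by 4^-k. A minimal sketch:

```python
import random

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test. A composite n passes a single round with
    probability at most 1/4, so the error after `rounds` independent
    rounds is at most 4**-rounds -- exponentially small in `rounds`."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # `a` witnesses that n is definitely composite
    return True  # probably prime; error probability <= 4**-rounds
```

Note the asymmetry: a "composite" answer is certain, while a "prime" answer carries the exponentially small error bound, which is exactly the sense in which the parent's proof-checking argument works.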

As you say, deterministic, or at least highly correlated, errors would dominate the chance that the proof was wrong. But that's true of human proofs too.

> that's true of human proofs too.

I'm not sure that's right. The peer review process works well (e.g. [0]) precisely because human mathematicians are likely to make (at least somewhat) different mistakes from each other.

[0] https://en.wikipedia.org/w/index.php?title=Wiles%27s_proof_o...
