
Why is writing mathematical proofs more fault-proof than writing code? (2011) - dgellow
https://cs.stackexchange.com/questions/85327/why-is-writing-down-mathematical-proofs-more-fault-proof-than-writing-computer-c
======
yaseer
I think the line of reasoning is incorrect.

Most mathematical arguments are actually not formal, in the way a program is a
formal language.

In many instances, there are not only ambiguities but errors in the
mathematical expression. They only become formalised to an equivalent degree
in a system like Isabelle or Coq.

Writing proofs in Isabelle or Coq is essentially the same process as writing
highly functional code, and is just as arduous. It's hard because it's a
formal language running on a rigid machine.

Programming would be easy if you specified everything in semi-formal
pseudocode, designed to be interpreted by a human mind that can fill in the
blanks. This is what most mathematics is.

~~~
vidarh
I often point out that, exactly because of this, it's a red flag for me when I
see a CS paper that relies on maths. It often indicates handwaving and missing
details that would be much more obvious in code, to the extent that I've
learned to expect that if a paper is full of maths, chances are it will be a
long, arduous process to reconstruct what the paper is actually doing. Often
this extends to leaving out essential details, such as the values of important
constants and the like.

~~~
lgas
Can you link me an example or two of good CS papers that don't rely on math?

~~~
lgas
To answer my own question, I came across this paper today, which I found quite
insightful and which has little to no math, especially for a PLT paper:

Elementary Strong Functional Programming by D.A. Turner

[https://pdfs.semanticscholar.org/b60b/1c2e49ec6f574f220f162c...](https://pdfs.semanticscholar.org/b60b/1c2e49ec6f574f220f162c8fdc81b2831830.pdf)

------
dnautics
Because in mathematical proofs the preconditions are usually explicit and well
understood. A program usually has reality-bound constraints that are not
explicit, e.g. "does not work unless the target system has at least 3.5 MB of
memory", "assumes that opcode X never fails", or "assumes memory is
consistent and never corrupted".

~~~
Peaker
Programs could be explicit about those constraints, encode more of them
statically and check them statically.
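One way to make such a constraint explicit, sketched here in Python (the names and the 3.5 MB limit are invented for illustration): a distinct type whose only constructor validates the precondition, so a static checker such as mypy can track where the guarantee holds.

```python
from typing import NewType, Optional

# Hypothetical example: encode "fits in 3.5 MB" as its own type.
SmallBuffer = NewType("SmallBuffer", bytes)

MAX_BYTES = int(3.5 * 1024 * 1024)

def as_small_buffer(data: bytes) -> Optional[SmallBuffer]:
    """The only way to obtain a SmallBuffer: the constraint is checked once."""
    if len(data) <= MAX_BYTES:
        return SmallBuffer(data)
    return None

def process(buf: SmallBuffer) -> int:
    # Callers must go through as_small_buffer, so the memory
    # precondition is explicit in the signature.
    return len(buf)
```

A checker like mypy will then reject passing a raw `bytes` where a `SmallBuffer` is required, pushing the constraint from a comment into the program itself.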

~~~
traviscj
Everything has a failure rate. This introduces another set of heuristics (and
another false-{positive,negative} rate), another set of potential
implementation bugs in those heuristics. What if being able to run the program
on a particularly memory-limited computer could save the universe, even more
slowly than the project manager originally wanted it to run?

Statically encoding something like a memory constraint sounds very hard for
all but the simplest algorithms. What about a randomized algorithm that
sometimes uses more heap? How do you get around not knowing how much it's
going to use without running it?

~~~
Peaker
Allocate for the worst-case use in the BSS. The program's static size then
indicates its memory requirements explicitly.

------
CJefferson
I consider myself a bit of an expert in this area; I work on lots of
mathematical algorithms.

My experience is that this isn't true. The reason it seems to be true is that
mathematicians skip over writing out "boring" edge cases, which, while
hypothetically simple, are often where the bugs occur in code.

~~~
wbhart
There are many, many errors in the published mathematical literature, and the
older you get, the more of them you see. What I find even more amusing is when
people don't withdraw their papers after you convince them their paper is
fundamentally wrong, and they agree the argument can't be salvaged. I used to
think of mathematical arguments that I couldn't follow as being "advanced" and
written by "elite" people. I now consider them "heuristic" or worse still,
"the spam of confused minds".

At the top of various fields, you basically have a bunch of experts writing
for each other, using heuristic arguments, and writing only to a level that
will convince one another that the result is "probably true". Important
results of great public interest are picked to pieces. But this is not true of
less prominent work (of which the majority of the literature consists); each
paper is probably read an average of 0.6 times, including by the authors and
referees.

~~~
dsacco
_> There are many, many errors in the published mathematical literature, and
the older you get, the more of them you see._

Can you provide any examples? I know this is true, but I'm interested in
examples you have off the top of your head.

~~~
wbhart
For the most part, I'd prefer not to give explicit examples, since it amounts
to public shaming of (often) still living authors.

In broad generality, I work on algorithms for number theory and fast
arithmetic. If you look at published, peer reviewed, conference proceedings in
this area, there are many, many well known cases of incorrect algorithms.

Sometimes in the mathematical literature there are multiple definitions given
for similar concepts, and later authors assume without checking that they are
the same thing. I'm aware of papers that are entirely wrong because of this.

Another area where errors are common is in stating identities, which the
author didn't try to check with a computer.

But, I am also aware of examples in less computational, pure mathematics too.

The phenomenon also extends to famous textbooks and undergraduate notes.
Simply look at the errata section of any sufficiently well-read textbook.
Sometimes incorrect or inadequate assumptions are carried right through the
book without explicitly mentioning them.

A famous cryptographer used to have online a rant about half-gcd algorithms.
As the story goes, the original publication was correct. One famous
improvement is only correct for polynomials. A later attempt by another author
is not correct for polynomials or integers. And a famous textbook in the field
took until their third edition to get the proof/algorithm correct.

I've even seen an incorrect definition given for something as fundamental as a
field, even by the most famous of mathematicians, now two or three times I
think (admittedly once by someone who wasn't an algebraist).

You can find many examples easily by Googling, "proof was unfortunately
incorrect", although it is usually stated more generously than that, e.g. "we
have been unable to reconstruct their argument", "we supply some missing
details", "it is not clear exactly what the authors intended", "where we
correct an obvious typographical error", etc.

And this is to say nothing of the almost infinite supply of papers that are
missing critical elements of the proof, or where folklore theorems are relied
upon without proof or citation. Simply search for the ubiquitous phrases, "the
well-known result" and "it is well-known that".

------
graycat
Nice observation.

What really matters is the actual thinking about the actual subject. In nearly
all math proofs, the writing is _closer_ to that thinking than in coding. Or,
each sentence in a math proof -- and well written proofs actually are written
in sentences -- is closer to the thinking than for the programming language
statements in the coding. Or, look at the statements in the coding -- without
mnemonic identifier naming, they are close to just gibberish.

To get the coding closer to the thinking, that's the documentation in the
coding. However the documentation is usually not as precise as the sentences
in a math proof or the statements in code.

So, maybe we need to do something to get programming closer to the real
thinking: For this, maybe try to have what we use in programming to be easier
to describe. That is, in a math proof, we have numbers, sets, vectors,
functions, etc. and at each point in the proof see clearly what these mean. In
programming we have variables, arrays, structures, IF statements, loops,
subroutines, and without documentation or careful reading don't see clearly
what these mean.

Once I tried it: I wrote some subroutines for sorting, permutations, etc. and
tried to have clear properties for the _operations_ the code did, properties
that one could reason about. Then, for some uses of these subroutines, one
could do some logical reasoning much like in algebra in math. I didn't go very
far with that idea!

~~~
nerdponx
Why the hell are you being downvoted? I really appreciated this line of
thinking.

~~~
zzzcpan
Because he argues under the same incorrect assumption that writing
mathematical proofs is more fault-proof than writing code. Which is what the
answers on stackexchange are partly about.

~~~
posterboy
I liked this part of one answer especially

> Still, I hope that this sheds some light on the question that is implied:
> "What really is the difference between proving some theorem and writing a
> program?" (To which a careless observer of the Curry-Howard correspondence
> might say: "Nothing at all!")

------
alangpierce
> I've never heard of someone who wrote a big computer program with no
> mistakes in it in one go, and had full confidence that it would be bugless.

I've actually done this a number of times before, but only in the ACM ICPC
programming contest, so maybe a difference here is the environment in which
people write programs vs proofs. In the contest, you have a team of three
people and only one computer, and if you submit an incorrect solution, the
only response you get back is "Wrong answer" with no detail about what test
case was wrong. The typical way that teams approach this is to write code out
completely on paper, then transcribe it from paper to the computer, run the
provided test cases, and submit. Computer time is valuable, so if you run into
a mistake that you can't figure out, it's common to print your code and look
through the code on paper rather than trying to debug it on the computer.

When you train enough in an environment like that, you learn to be very
careful about all of the details of the program you're writing. A single
mistake can dramatically increase the time it takes to solve a problem, and
there's pretty much no opportunity to "try it and see" like there is in normal
programming. Also, the simple act of writing code down on paper and then
copying it into the computer (double-checking as you go) gives you much more
time to notice mistakes.

Mistakes are still common, of course, but I've had a number of times where I
write out a program on paper, type it into the computer, compile it with no
errors, run the sample test cases and have them all pass, and then submit it
and see that it's correct. I've never experienced something like that in
any other programming environment, including other programming contests, which
typically let you write your code on the computer.

Proofs tend to be the same type of environment: you write it out, with no
compiler to help you, and you have plenty of time to read through it over and
over before you have someone else check it for correctness. Maybe that sort of
environment forces a level of care that you don't see as much in programming.

------
auggierose
The answer is simple.

In Mathematics you operate on a much higher level of concepts, and these
concepts don't just fall out of thin air, but were sometimes decades or
centuries in the making.

In programming, though, your concepts are made up very much on the fly (if you
bother developing concepts at all; usually you just use your framework of
choice and hope it has enough concepts baked in so you can survive), and they
are therefore often not very good or coherent. That means you cannot rely on
them for any kind of correctness assessment.

~~~
JohnStrange
It's not just that, people spend way more time going through mathematical
proofs, finding, and formulating them. If you give programmers time to muse
about every function for weeks and write no more than 50-100 pages of program
text per year or even less, then their programs will be almost bug free. You
get what you pay for.

Note: I'm not claiming that programmers should be treated like mathematicians
or vice versa, I'm merely pointing out the differences of the activities.

 _Edit: There is of course also a more trivial reason. On average, research
mathematicians are probably more intelligent, skillful and diligent than
typical programmers._

~~~
peoplewindow
Programmers are less intelligent, skillful and diligent on average than
research mathematicians? How do you figure that? This statement appears to
rely on a circular definition of intelligence - maths is "hard" therefore
people who do it must be "intelligent".

Given the examples in the linked discussion of cases where not only did
mathematicians write buggy proofs but it took years to figure out a mistake
existed at all, let alone _what it was_, I find that hard to believe. To the
extent it may appear superficially true, it's likely the result of the
academic setting, in which there is no connection to real-world relevance and
the only pressure to get things done comes from peers and the need to publish
papers; but if those papers achieve very little and your peers' papers also
achieve little, that's totally OK.

Compare this to the world of the working programmer, who is judged _not_ by
how fast he writes code but by the real-world positive impact of that code,
and it is easy to see how the mathematician may come across as more diligent
or intelligent. But it's just an artifact of the pressure-free environment.

I was also surprised to learn that the idea of checking mathematical proofs
using Coq is considered just as exotic or impractical as checking programs. I
had thought that Coq and other proof assistants were in wide usage for this
use case, as surely pure mathematics is simpler to reason about formally than
entire programs ... but apparently not. The fact that maths proofs are still
mostly checked by hand, whereas machine-checked proofs (like static type
systems) are widely used by even novice programmers, is hardly reassuring.

~~~
nerdponx
Research mathematicians are certainly more highly trained than non-research
programmers, at least for the first few years of their career. A PhD, at least
in the USA, is really intensive.

~~~
dfee
What’s the connection between being highly trained and being “intelligent,
skillful and diligent”? Circus animals are definitely highly trained after
all.

~~~
TheOtherHobbes
All math PhDs can become programmers.

Some might be too bored to be good programmers, and some might have issues
with corporate nonsense and workplace politics. But purely in terms of the
ability to manipulate symbolic systems to useful effect, the base level is
more than high enough for most programming jobs.

Evidence: for a long time, math BSc/PhD quals were highly valued by software
houses. This continues to be true to an extent, especially at the high end
with FP/ML.

The percentage of programmers who can become math PhDs is... lower.

~~~
pas
> All math PhDs can become programmers.

First of all what does "can become" mean? If it means with sufficient training
and supervision they can learn to be programmers, then that's also true for
the inverse.

Furthermore, we need only one counterexample to make it false, and I happen to
know a few math PhDs that are not that great at programming even though one of
them actually works as a programmer.

~~~
SamReidHughes
> If it means with sufficient training and supervision they can learn to be
> programmers, then that's also true for the inverse.

No way. Most programmers are genetically unfit to do research mathematics.
Many programmers can't take in an idea and expel it back out without
corruption because they've got some cosmic ray simulation device in their
brain stem. Or they just don't have the creativity. For example, think of all
the people that complain about interview warm-up questions or think they're
something you'd memorize.

~~~
pas
This largely ignores the fact that research is 90% reading papers (or doing
lab work in less pure fields) and trying to come up with something. Fighting
for money, writing papers, producing graphs/charts, etc.

Pure maths research is undeniably simpler, but not that much. Look at HoTT
(homotopy type theory), or reverse maths
([https://github.com/ericastor/rmzoo/](https://github.com/ericastor/rmzoo/)).
These are sufficiently close to programming - because they are largely
composed of programming tasks.

Furthermore, researchers usually don't do work alone, they are usually
enrolled in some kind of a program, with a supervisor, mentor, guide, or at
least a program/faculty chair. And even if they are totally on their own, they
can start doing work on unsolved problems. Usually people new to research
start by doing a survey paper for a certain field, to get an overview of
recent and past progress and problems, solutions and techniques.

Oh, and this also applies:
[https://78.media.tumblr.com/41b40230404ccfd7af8a0146ea6689d3...](https://78.media.tumblr.com/41b40230404ccfd7af8a0146ea6689d3/tumblr_p1kuln9dzV1wolmxbo1_500.jpg)

Yes, 99.9% of programmers would never become the next Tao, cranking out blog
posts, books, polymath papers, lectures and otherwise results every few
days/months, but that doesn't mean they couldn't do pure maths research. But
luckily they don't have to. Because it's a very different realm than
programming. (Or even protocol design, IETF work, low level microcode work, or
run of the mill mobile apps.)

------
Maro
I think it's not an apples to apples comparison.

In mathematics, there are abstract concepts like integers. For any integer i,
i+1 is also an integer.

In programming, there is a MIN_INT and a MAX_INT. If you write i/2, it's
different from i/2.0. If you store a long string in a database, the column may
be cut off at 32 chars because of the column type. Running string algorithms
that sound perfectly reasonable on ASCII may produce weird outcomes on
Unicode. Most of these things also depend on the programming language, the
compiler (MSVC, GCC, etc.), the compiler version, the OS, libraries, and
sometimes even the locale! A provably good crypto algorithm may be vulnerable
to side-channel attacks. A mathematician may be happy to give an existential
proof, but a programmer needs an implementation, and [usually] one that is
polynomial. And so on...
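A few of these gaps between mathematical and machine integers/strings can be demonstrated directly (the int32 wraparound below is simulated via ctypes, since Python's own integers are unbounded):

```python
import ctypes
import unicodedata

# Machine integers wrap around; mathematical integers don't.
MAX_INT32 = 2**31 - 1
wrapped = ctypes.c_int32(MAX_INT32 + 1).value
assert wrapped == -(2**31)        # i + 1 is not always greater than i

# i/2 vs i/2.0: Python 3 spells integer and float division differently.
assert 7 // 2 == 3                # integer division
assert 7 / 2 == 3.5               # float division

# Reasonable-on-ASCII, weird-on-Unicode: the "same" string can have
# different lengths depending on normalization.
composed = "\u00e9"                                   # 'é', one code point
decomposed = unicodedata.normalize("NFD", composed)   # 'e' + combining accent
assert len(composed) == 1 and len(decomposed) == 2
```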

The tldr is, the real world is more complicated, limited and nuanced than the
abstract concepts of mathematics.

------
eximius
Compare writing a proof in Coq to writing a normal program. That is far more
equal in terms of what you're doing. Writing a proof for people implicitly
assumes a mountain of background information and interpolation between
statements that doesn't exist in programming.

------
bor0
As the top-voted answer says, proofs are at a very high level compared to
programming.

Now imagine writing proofs at a very low level (see Metamath), where you work
with wffs and rewrite rules. It gets as tricky as programming.

------
hyperpallium
I feel less confident of my proofs than my programs, because I can't run them
to see what happens. Yes, there are proof assistants like Coq, but
mathematicians look down on them.

Proofs are usually about some very small thing, and require a tremendous
amount of work. Then again, there is a huge literature for guidance and
reusable elements (though it's a _lot_ of work to use it), and peer review to
check the result (also labour intensive).

~~~
nerdponx
I do not understand why mathematicians look down on proof assistants. I've
heard this claim multiple times and it continues to make no sense to me.

Are there any mathematicians here who can weigh in?

~~~
kxyvr
I'm a professional mathematician and, candidly, most people in my subfield
don't really know that they exist. I do, and would love to use them, but I
find them very difficult to use and don't have good examples of proofs
relevant to my work.

To be specific, I've used Coq in the past to help verify properties of
programs such as a sorted list really being sorted. That said, I primarily
work in applied math areas such as optimization, numerical linear algebra, and
numerical PDEs and I have no idea where to begin with fundamental proofs such
as Newton's method converging quadratically within a certain neighborhood of
the solution. Alternatively, I'd love to see formalized proofs of the theorems
that we'd learn in a course in real analysis such as one based on Rudin's
Principles of Mathematical Analysis. Really, if anyone has examples of these,
please let me know.

There's an amazing number of errors in published work. Most of the time, the
work is still mostly right, but that's not OK. In my opinion, we'd be better
off with formalized proofs.

~~~
EugeneAZ
> I primarily work in applied math areas such as optimization

It may be rather off-topic, but I suggest you read my funny paper:
[https://docs.google.com/document/d/10pTRJFgwEGnUM5RF1CHX2OO5...](https://docs.google.com/document/d/10pTRJFgwEGnUM5RF1CHX2OO5xZTwzyITHd4cjpZTsqw/edit)

------
solomatov
I have experience both writing programs and writing proofs, including proofs
in formally verified languages, e.g. Coq and Agda.

First of all, I think that the author underestimates the complexity of writing
correct programs. What if integer overflow occurs? What if some exception
happens somewhere? Programmers rarely consider all the pathological cases,
while mathematicians must consider all of them. If a program works most of the
time, that's good enough for a programmer but not good enough for a
mathematician.

Second, thanks to the Curry-Howard correspondence, every formal proof has a
corresponding program which 'constructs' the result of the theorem from the
preconditions, so the proof is basically a program (and vice versa in a typed
programming language).
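As a toy illustration of the correspondence: reading a type as a proposition, a program of that type is a proof. Modus ponens (from A→B and A, conclude B) is simply function application:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Proof of: if A implies B and A holds, then B holds.
# The "proof" is the program that constructs a B from the premises.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# A concrete instance: a function int -> str witnesses "int implies str".
print(modus_ponens(str, 42))  # prints 42
```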

------
fpoling
In mathematical theorems the initial conditions are rather uniform, with few
if any special cases. Thus mathematical proofs typically need much less
control flow to cover the initial conditions than computer programs do. In
fact, many proofs are just straight deduction, without a single "if" to cover
corner cases. That kind of "code" is simple to check: one just checks the
logic of the deduction, not the coverage of all initial conditions.

------
dtx1
Adding to the arguments already present here, it's also often easier to work
with mathematical formulas and concepts than with programming concepts. For
example:

pi. Easy in "math", effing hard to do in code. You don't have to worry about
IEEE floats in "math". You don't have to worry about computational
restrictions like RAM, CPU time, null pointers, etc.
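The float point is easy to see in practice; a small demonstration of how IEEE 754 departs from real arithmetic:

```python
import math

# In math, pi is exact and 0.1 + 0.2 == 0.3; in IEEE 754 doubles, neither holds.
print(math.pi)                 # 3.141592653589793 -- an approximation
print(0.1 + 0.2 == 0.3)        # False
print(0.1 + 0.2)               # 0.30000000000000004

# The usual workaround is comparison within a tolerance:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```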

------
tzahola
Because in a program you can't have functions having their implementation left
as an exercise for the computer. :^)

------
sorokod
Both theorem proving and programming operate on an abstracted view of reallity
( in case of mathematics this reallity is often determined by some choice of
axioms). By some metric the abstracted reallity of programming tends to be
more complicated and noisy than that of mathematics.

------
ejz
One extra element I'd add is that what counts as an error in a proof is
different from what counts as a bug. In pure math, an error is a strict
failure of logic that breaks the proof. In programming, a bug doesn't refer
only to something that stops the code from working; I've also heard it used
for something that stops the code from working properly or efficiently, for
instance. Mathematicians don't care about that, because they have a different
goal than a programmer: to prove the truth of something (though some are
motivated to produce more "elegant" proofs).

------
shrimpx
I don't agree that mathematical proofs are more fault-proof than code.

Mathematics is based more in the "spirit" of the idea and proof, rather than
pedantically driving things down into the ultimate nuts and bolts. Many paper
proofs are likely full of bugs deep in the details.

As intuitionism and machine formalism develop, and the standard of rigor
rises, we might see entire branches of mathematics being relegated to the
dustbin of historically interesting human endeavors but totally useless as
foundations/formal assumptions for future work.

------
gerbilly
Two words: iteration and conditional execution.

Most math lacks these features, which are common in programming.

With iteration you blend the results at time t with the results at time t+1.

Because of conditionals (if statements), a programming mistake at time t may
not become evident till some much later step in the iteration, because the
path through the logic is not guaranteed to be consistent.

Typically the data of all intervening steps has been lost by the time you
encounter the error.
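A minimal sketch of this failure mode (the numbers and the bug are invented for illustration): a rarely taken branch corrupts the running state, and only a per-step trace, which real programs usually discard, reveals where the divergence began.

```python
def run(n):
    total = 0
    trace = []                # real programs usually don't keep this
    for t in range(n):
        if t == 3:            # rarely taken branch containing the bug:
            total -= 1        # should have been `total += t`
        else:
            total += t
        trace.append(total)
    return total, trace

final, trace = run(10)
# `final` is silently wrong; only the trace shows the divergence at t == 3.
```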

------
lainon
It was posted dec 11 2017, not dec 2011 :)

------
willtim
Most programming languages are unsound logic systems which allow us to prove
true==false rather too easily!

------
wccrawford
If I spent as much time on a few lines of code as I spent in high school on a
few lines of a proof, they'd be just as bug-proof.

Nobody is going to pay me to spend that kind of time on that little amount of
code. Heck, I'm not even going to spend that kind of time on my own projects.

------
yakitori
Because one is dealing with abstract theory with specific rules vs engineering
in the real world.

One exists in the world of abstraction while the other lives in the real
world.

Might as well ask why we can have a perfect circle in math while a perfect
circle doesn't exist in the real world.

------
bsznjyewgd
Because the interpreter for mathematical proofs is far more forgiving than the
interpreter for code.

------
0x445442
x = x + 1

I remember the first time I saw that line and said, that's false. And it's
been a never-ending divergence from pure math for the past 30 years.

~~~
goatlover
It can also be:

x <- x + 1

x := x + 1

But I guess since assignment is more frequent than equality checks, most PL
designers opt for using equals. I do like R's left-arrow assignment operator.
If only keyboards had a default arrow character key, it would completely get
rid of a class of errors based on forgetting the second equals sign.
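One language-design response to this error class: make assignment a statement rather than an expression, as Python does, so a forgotten equals sign in a condition fails loudly instead of silently assigning:

```python
x = 1          # assignment: a statement, not an expression

# if x = 2:    # SyntaxError in Python -- the C-style `if (x = 2)` bug
#     pass     # cannot compile, let alone run

if x == 2:     # comparison must be spelled ==
    x = 0

# `x = x + 1` is rebinding, not an equation; writing the mathematical
# version with explicit time indices makes that visible:
xs = [0]
xs.append(xs[-1] + 1)   # x_{t+1} = x_t + 1; both values coexist
print(xs)               # [0, 1]
```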

~~~
user5994461
A sane compiler always gives a warning when you do a comparison outside a
condition or an assignment within a condition.

------
ttty
Not sure if it's really related, but I couldn't follow the math at school.
For example, in algebra the rules were complicated and required you to know
things that were not explicitly stated. Because of that, I wrote my own
functions to do what my teacher said. Then, since in the exams we were
allowed to bring formulas, I brought my own JavaScript functions and I was
the compiler :)

The teacher's steps would fit in 5 sentences, but the function would be
longer and include exceptions and weird cases. No room for mistakes with my
JavaScript. No wonder I had the highest score in the classroom.

~~~
tuyy
What level algebra and school are you talking about? I don't really see how
any teacher would allow this. Mind explaining more?

~~~
ttty
University.

------
vorotato
I think the resounding answer over the years has been that written proofs
aren't more fault-proof; they're just better at hiding the faults because
they lack explicitness. There has been a recent push for formal verification
because it makes it easier to actually find issues with a proof.

------
imranq
Just peruse any upper level undergraduate math text. Flip to any proof and
you’ll find “it follows naturally that...”

The bane of my existence as a math undergrad.

------
setra
These days, programming languages actually are used for expressing proofs,
and they can automatically check them too. For example, the Coq theorem
proving language.

[https://en.wikipedia.org/wiki/Coq](https://en.wikipedia.org/wiki/Coq)

Some would argue that writing code IS more fault proof than writing proofs the
traditional way.
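For a flavor of what this looks like, here is a tiny machine-checked proof, written in Lean 4 rather than Coq; the proof term reads much like a small program:

```lean
-- A concrete equality, checked by computation:
example : 2 + 3 = 3 + 2 := rfl

-- The general fact, proved by invoking a library lemma:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```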

~~~
contravariant
I would still trust a mathematical argument I can understand over the output
of some automated theorem prover.

Of course these aren't mutually exclusive, provided the theorem prover is
simple enough to understand.

------
ychen306
Because it's not true :-)

Writing mathematical proofs is as fault-proof as writing pseudo-code.

Writing machine-checked proofs is as fault-proof as writing code.

------
hyperpallium
I like the Lamport answer, implying that proofs are just as faulty as
programs; they just don't get tested:
[https://cs.stackexchange.com/a/85362/23759](https://cs.stackexchange.com/a/85362/23759)

One might say the amazing thing is that many (not all!) such theorems are
correct, despite the errors. Somehow, mathematical insight sees the truth,
despite formalism failure...

But, by this standard, most programs are "correct", they just have some
inconsequential errors.

I think this goes to a systemic problem in mathematics, that proofs are not
very rigorous. I've never found one convincing. Instead, math is slightly like
_English Literature_ , where you learn "theories" that are held to be true by
the community. You get inducted, and off you go. It's not purely objective.

Of course, this doesn't explain the usefulness of mathematics when applied to
reality, so I only claim it's "slightly like" an Arts subject. Perhaps some
survivor bias, and engineers fix any problems.

~~~
umanwizard
> I think this goes to a systemic problem in mathematics, that proofs are not
> very rigorous. I've never found one convincing. Instead, math is slightly
> like English Literature, where you learn "theories" that are held to be true
> by the community. You get inducted, and off you go. It's not purely
> objective.

This is an outrageous exaggeration.

------
ajarmst
Because we generally only accept mathematical proofs from highly-trained
experts, and no one would hire the kid next door, who taught himself math and
seems to spend a lot of time fooling around with calculators to do
professional work in mathematics?

------
aknoob
It depends on how you define "fault" here. Is it mere logical correctness of
the code, or any error that might occur when the code executes? If we follow
the former definition of fault, then the two are similar in being
"fault-proof". If we follow the latter definition, then the execution
environment of the program comes into play, and it depends on so many
variables and their complex interplay that it becomes much harder to provide
any formal guarantees of fault tolerance.

However, there are execution environments (read: RTOSes) that try to be
fault tolerant. Also, various virtual machines like the JVM and CLR try to
provide fault tolerance to varying degrees.

------
jondubois
With maths, you're merely discovering something which already exists.
Programming is more like you're creating something new.

------
api
You know all the preconditions and constraints ahead of time.

With real world applied software that is very rarely the case.

------
totemizer
Math resides in the land of abstractions (fantasy land), while actual code
has to run on actual physical machines in the real world. In the real world
you can't just state an axiom to fix bugs, the way math people have solved
all their problems for the past 150 years.

~~~
majos
What? If this is true then there are a whole lot of misguided mathematicians
out there working on "solved" problems. If they could just add whatever axiom
seemed useful math research would be meaningless.

If you're saying that math/theoretical computer science makes unrealistic
assumptions, that's a different (and more reasonable) claim.

~~~
totemizer
That is not what I said. What I said is that math ultimately doesn't need to
and doesn't want to deal with reality. The "proofs" they create do not even
have to be consistent with each other, you can pick whatever you need...

------
mcguire
Posted 11 Dec, 2017, not 2011.

------
sunstone
Very few real time interrupts in math proofs.

------
gpmcadam
[Meta] Why do people respond in the HN comments with answers to this question?
Do people not realise that this is a post to a thread, wherein the value (and
the answers) are in the original link, on stackexchange?

~~~
soroso
Not really, because the StackExchange thought police really does not let
people freely answer the questions. In practice it means that if you write an
answer not in alignment with the approved thought, you get scolded and
penalized.

~~~
hungerstrike
It's just like here on HN where you MUST agree with the crowd or else your
comment will be disappeared.

IMO, we need better commenting systems in general.

~~~
MikkoFinell
The problem is that content providers have a financial incentive to create an
environment that penalizes anything controversial and rewards unchallenging
posts that regurgitate whatever the lowest common denominator of that
community happens to be. Reddit is by far the worst example of this, but it
feels like HN is becoming more like that as time goes by.

~~~
zzzcpan
Aren't there always, everywhere, more people interested in a subject than
there are experts, so that groupthink with wrong and poorly informed opinions
becomes inevitable? At least on HN you are free to attempt to influence the
groupthink.

------
EGreg
Because the proof checker is a smaller piece of software to find bugs in, and
used by more people so more people look. Simple.

