Three specific complaints he has involve a belief in infinity (and limits), which he asserts do not exist in the real world; a delay in the publication of a pair of papers on which he worked, due to a variety of circumstances; and a "pernicious" influence of axiomatic mathematics which leads to "stupid" questions such as Hilbert's Second Problem.
The failed publication of one paper is particularly notable: according to Dr. Zeilberger's "Opinion 123" on his Rutgers website, it claimed a counterexample to Fermat's Last Theorem and was recognized by Andrew Wiles himself as one of three possible counterexamples. The reason given for this oversight was, in fact, the acceptance of the decidedly non-rigorous statement "it is easily seen ..." (ibid). This particular instance seems to contradict the main thrust of Dr. Zeilberger's rant against a mathematics overburdened by rules.
Unfortunately, little indication of what mathematics SHOULD look like is offered beyond a statement that 'obvious things should be treated as such' and an assertion that infinities and continuities do not occur in real life/nature/the universe. While such a statement may be understandable coming from a respected (and clearly accomplished) combinatorialist such as Dr. Zeilberger, physics has yet to demonstrate conclusively that space and time are discrete: undisproven interpretations of quantum mechanics exist which allow for continuities, and 'the size of the universe' is defined as that which we can see (i.e., the observable universe), so that actual infinities are ruled out only because of our inability to perceive them.
Perhaps Dr. Zeilberger needs to think outside his own discrete box.
If we cannot interact with arbitrary precision and can only ever observe a finite universe, does it make sense even to ask whether it is any other way? Should we figure out how to interact with the world through some new fundamental mechanism, other than a wave bound by c and h, that would reopen the question.
We are used to making discrete approximations to continuous problems, but the other way around works too. For example, evaluating the sum
1 + 1/4 + ... + 1/n^2
exactly is quite hard. On the other hand, we may approximate it by the integral of 1/x^2, which, remarkably, has an easy antiderivative.
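The approximation the comment describes can be made concrete. A minimal sketch (the choice of n is arbitrary): the partial sums creep toward Euler's closed form pi^2/6, and the easy integral of 1/x^2 squeezes the tail between 1/(n+1) and 1/n.

```python
from math import pi

def partial_sum(n):
    """1 + 1/4 + 1/9 + ... + 1/n^2, computed term by term."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

n = 1000
s = partial_sum(n)

# The tail sum_{k>n} 1/k^2 is squeezed between the integrals of 1/x^2
# from n+1 to infinity and from n to infinity, i.e. 1/(n+1) and 1/n.
tail = pi**2 / 6 - s
print(s, tail)
assert 1 / (n + 1) < tail < 1 / n
```

So the "easy formula" from the continuous side tells you, almost for free, how far the hard discrete sum is from its limit.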
Continuous and discrete models complement each other; they are not mutually exclusive.
There are limitations to how much we can know, but not limits to what can be.
EDIT: also cf. https://link.springer.com/book/10.1007%2F978-3-319-41285-6
But to be charitable, I think it comes from a place of pushing with tremendous force a pendulum of thought back from the axiomatic nature towards more exploratory and relevant mathematics. It reminds me of Howard Zinn's intro to A People's History of the US in which he says that he purposefully did not water down his arguments to be more neutral because of the overwhelming nature of the opposition.
The infinity issue is quite interesting to ponder. Obviously with our current understanding of the universe and humans, the number of numbers ever encountered by all humans over all history would seem to be finite regardless of the fundamental nature of the universe itself. There is a very strong notion of our world being discrete in that way.
And it is very interesting to pursue these ideas. For example, the standard proof that there are infinitely many integers is to assume there is a largest and add 1. But what if the largest is unknown, as is the case much of the time? This is true, for example, with computers: the largest integer can be known for the native format, but that can obviously be extended in programming in an arbitrary way.
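The parenthetical about native formats is easy to demonstrate. A minimal sketch, assuming the usual 64-bit signed native width:

```python
# The largest value of a native 64-bit signed integer is fixed and known.
native_max = 2**63 - 1  # 9223372036854775807, i.e. C's INT64_MAX

# In a fixed-width format this addition would overflow; Python's int
# is arbitrary-precision, so the value simply grows by another bit.
bigger = native_max + 1
print(bigger, bigger.bit_length())  # 9223372036854775808 64
```

The "largest integer" exists for the hardware format but not for the software extension of it, which is exactly the ambiguity the comment is pointing at.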
It is a practical question to ask about existence results that require an axiom (often the axiom of choice) which gives no constructive way of producing the object. To what extent is that actually useful? And if we had an approximate construction of something without the axiom, but the thing itself may or may not exist depending on the axiom chosen, what would that mean?
It is also interesting to think about doing away with infinity, and how annoying it would be not to be able to talk about pi or e or even sqrt(2). There would be no irrational numbers, presumably, in this framework.
One can then see the wonders of embracing infinity as a crutch for dealing with the horrendously messy world of the finite and discrete.
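For what it's worth, a finitist can still get arbitrarily close to sqrt(2) without ever naming an irrational number. A sketch using exact rational arithmetic (the starting guess 3/2 is arbitrary):

```python
from fractions import Fraction

def sqrt2_approx(steps):
    """Newton's iteration x -> (x + 2/x) / 2 for x^2 = 2,
    carried out entirely in exact rational arithmetic."""
    x = Fraction(3, 2)
    for _ in range(steps):
        x = (x + 2 / x) / 2
    return x

x = sqrt2_approx(5)
# Every iterate is a plain fraction; no irrational number is ever
# needed, yet x*x overshoots 2 by an astonishingly small amount.
print(x.limit_denominator(10**6), float(x * x - 2))
```

In such a framework "sqrt(2)" survives only as the rule producing ever-better rationals rather than as a completed object, which is roughly the trade being weighed here.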
Discussing these issues when learning mathematics strikes me as elevating what would otherwise be a great deal of bland rule absorption, and it gives students power over the tools of mathematics. So this debate is useful in my opinion, though it would be absurd to try to draw a definitive conclusion.
One last note about his discussion of the lack of rigor in QFT and that it was good. Recent work (at Rutgers) suggests that at least some of the infinities and problems come from not appreciating the proper physical picture of what is actually happening. Nonetheless, the standard procedures worked to give us needed answers before such insight was discovered.
I think the basic answer should be that formal and exploratory mathematics are both useful and it is important to avoid dogma from either side.
I'm by no means a mathematician, but I do sympathize somewhat with this. To put it a bit controversially, the axiom of choice feels to me almost like the string theory of mathematics.
There's a lot of really cool stuff you can do with it, but we've left the realm of even potentially useful, and are off approaching a form of rigorous philosophy.
I was looking into Banach–Tarski recently. Is it actually that useful to discuss something which almost certainly can never physically exist in our universe?
Now of course you could play devil's advocate and say that at some point people claimed you could never in the universe have a negative amount of something, so negative integers aren't useful, or that a square root of a negative isn't useful, so no imaginary numbers. Things are created all the time in math that we don't find practical applications for until decades or centuries later.
Yes, that's true, but it's not a question of absolutes; it's a matter of degree. Has math swung too far into the world of non-relevant "play" at the expense of other discoveries?
Recently we've come across one of the most incredible tools of the past 1000 years: computers! As a programmer aware of Hindley-Milner, it boggles my mind how much programming people have been doing for centuries without the benefit of computers to run it on. Now that we have them, we can course-correct. Previously we were shooting in the dark, and were surprisingly accurate. Now that we have computers, we should utilize them to focus on and prioritize the amazing things that come from powerful computability.
Let's explore the systems that can be constructed using these amazing new tools. Infinity in itself is OK as a model for common real-world applications (pi, e, sqrt 2).
Getting out of my knowledge area here, but what tools would we discover if we were focusing on a math without the Axiom of Choice? What proofs would be reformulated, and what tools would we develop to do this work?
Again, everyone should be free to think about whatever they like, but as a field it should also be cautious about how it spends its resources, particularly its most valuable resource: intellectual capital.
- A theory with non-measurable sets is much simpler/more expressive than the alternatives and thus is still a useful tool.
- There is a simpler theory without this defect, which is at least as expressive when reasoning about real world phenomena.
In this case, the latter possibility turns out to be true. It is just unbelievably difficult to convince mathematicians to change the rules of the game, even if you can point to concrete gains.
I think when people mention the "reals" they are usually referring to the uncountable reals, which you can't construct, compute, name, or know:
...(maybe skip to chapter 5 to get to the meat of it). So constructivists don't believe in those types of numbers, and so get "choice" as a theorem instead of an axiom.
In some constructive versions of math you get weird things like there being an injection from R to N but there not being a bijection due to diagonalization.
I think you are confusing computable/nameable with constructive as that term is used by most mathematicians.
Note that Chaitin's constant is not computable.
The common criticism of AC is Banach-Tarski. So if you don't agree with Banach-Tarski and want every subset of R to be measurable then you have to conclude an equally bizarre result.
As an algebraist I accepted AC. Instead of constantly saying "let V be a vector space over k with a basis," it's easier to just assume all vector spaces have a basis. I think analysts need AC more than other branches do; the Intermediate Value Theorem isn't provable without AC, I think.
The case of measure theory is particularly interesting, though, since you don't even need to change your underlying logic to get a better model. For instance, if you base your "measure theory" on valuations on locales instead of measures on sigma algebras then the theory itself becomes simpler and the Banach-Tarski "paradox" goes away.
Briefly, in locale theory, your "measure" is defined on sublocales instead of subsets. While there are more sublocales than subsets, the condition for when two sublocales are disjoint is stronger. This is what breaks the Banach-Tarski construction. The orbit subsets used in Banach-Tarski still exist, but while they are disjoint as sets, they are not disjoint as sublocales and thus don't decompose the volume of the sphere.
Here is a reference to a MathOverflow comment.
This is really interesting to know, and was exactly the type of thing I was wondering about. And I assume the author felt similarly, although he didn't draw identical conclusions.
Out of curiosity could you describe or link me to this other theory?
"The Mathematical Experience"
by Philip J. Davis and Reuben Hersh
...which is a great book, composed of a bunch of different essays comparing and contrasting three philosophies of mathematics (Platonism, Formalism, Intuitionism).
>• Stupid Question 2: Trisect an arbitrary angle only using straight-edge and compass.
>Many, very smart people, tried in vain, to solve this problem, until it turned out, in the 19th century, to be impossible
>• Stupid Question 3: Double the cube.
>Ditto, 2^(1/3) is a cubic-algebraic number.
>• Stupid Question 4: Square the Circle.
>Many, very smart people, tried (and some still do!) in vain, to solve this problem, until it turned out [...] to be impossible.
>Today’s Mathematics Is a Religion
>Its central dogma is thou should prove everything rigorously.
To me, that's the entire point. You prove things rigorously so that (among many other reasons) when someone considers trying to square the circle, you can point them to the proof that the circle cannot be squared.
>To me, that's the entire point. You prove things rigorously so that (among many other reasons) when someone considers trying to square the circle, you can point them to the proof that the circle cannot be squared.
This. Those defending mathematical rigour are a minority almost everywhere except in a school of mathematics. There is no shortage of people trying all the slapdash ways of doing things.
An example in computer science is the whole P = NP nonsense. We have never found any evidence that P = NP, nor has anyone discovered a compelling reason why it should be true. Yet the first sentence of its Wikipedia page is:
> The P versus NP problem is a major unsolved problem in computer science.
"Unsolved?" Only in the perspective of the "rigorous proof" crowd which you imply are an unimportant minority. The truth is that P = NP is just another, as Dr. Zeilberger calls it, "stupid question."
We have no reason to believe that P = NP, but on the other hand, without proof, we can't know for certain that it doesn't. We're not infinitely smart, and we just might not have yet seen the way in which P does equal NP.
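For readers outside CS: the question is whether every problem whose solutions are quick to *check* is also quick to *find*. A standard illustration is subset sum, one of the classic NP-complete problems (the specific numbers below are arbitrary):

```python
from collections import Counter
from itertools import combinations

def verify(nums, target, certificate):
    """Checking a proposed solution is fast: linear in the input size."""
    is_submultiset = not (Counter(certificate) - Counter(nums))
    return is_submultiset and sum(certificate) == target

def search(nums, target):
    """Finding a solution by brute force tries up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))           # [4, 5]
print(verify(nums, 9, [4, 5]))   # True
```

P = NP would mean the exponential `search` can always be replaced by something polynomial. Nobody has shown this either way, which is exactly why the problem counts as open rather than settled by intuition.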
Surely we live in a world filled with uncertainty, and we have to be able to deal with that. But in instances in which we can be certain about certain things, it certainly benefits us.
Also, I don't understand the argument that learning to think with the rigor of mathematics is somehow detrimental. There are many different ways of reasoning, and a well-educated person should strive to learn a good amount of these useful mental tools. Learning to think with mathematical rigor is a good tool to have in one's mental toolbox.
This is conjectured, but can you prove it? If the degree of the polynomial bounding the minimal running time, or its coefficients, were large, one can imagine a world where e.g. asymmetric cryptography, as we understand it today, is still possible.
Are you able to expand on this point?
In my experience I have not heard of children learning rigorous mathematics. Grade-school kids learn arithmetic. High-school kids learn some calculus and probability (at most). This curriculum is practical for those going to uni or a technical trade (although the way it's taught is critical for it to be of any use; if I were a science teacher, my class would just build Arduino projects all day!).
Your reference to magical thinking seems to be the opposite of what I would consider rigour... I would associate magical thinking with an economist or engineer building a sprawling quantitative risk model and then justifying billion dollar decisions with it. Later realising a small logical error has invalidated their conclusion or that they have overfit their data etc. (Just for example...)
(X^3) * 2 -- Done.
r is known. X is not.
pi * r^2 = X^2
sqrt(pi * r^2) = X , for positive X
That's not dogma. It's a proof, because anyone, no matter whom, no matter when or where in the universe, can duplicate these results and show they are logically true. Or they can show the results are logically false, no matter the inputs given.
It's not "dogma," as in some high edict by a Pope or something. A rank amateur could further the field by proving a new theorem, because the person doesn't matter; the soundness of the logic does.
It is not even only that. Rigorous proofs show the limits of your knowledge. Modern math is a huge edifice that we would be completely unable to build if we based it on intuitive, semi-rigorous foundations.
Yes, proving things is boring, and won't add anything to your immediate problem. No, we still need it, like we need many other kinds of investment.
I hope you're not being serious. Just in case you are, your algebra is wrong. I'm quite certain you didn't look up what "doubling the cube" means, since the (faux) algebraic solution is y = cube_root(2 * x^3). It undercuts the rest of your comment.
If it really meant doubling the volume of a cube of X unit size, then absolutely it's (2 * x^3)^(1/3)
You're also selling the problem short. Doubling the cube is about producing a finite algorithm (given a limited set of operations) that realizes the value of (2 * x^3)^(1/3) concretely. An algebraic solution does not do this, because it stops at the inability to realize, say, the cube root of 2 explicitly.
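The impossibility argument behind this can be sketched cheaply. By the rational root theorem, any rational root p/q (in lowest terms) of x^3 - 2 must have p dividing 2 and q dividing 1, which leaves only four candidates:

```python
from fractions import Fraction

# Rational root theorem: the only candidate rational roots of x^3 - 2
# are +-1 and +-2 (p divides the constant 2, q divides the leading 1).
candidates = [Fraction(p) for p in (1, -1, 2, -2)]
rational_roots = [r for r in candidates if r**3 == 2]
print(rational_roots)  # [] -- the cubic has no rational root
```

A cubic with no rational root is irreducible over Q, so Q(2^(1/3)) has degree 3 over Q, while every length constructible by straightedge and compass lives in a field extension whose degree is a power of 2. Hence no finite sequence of the allowed operations realizes 2^(1/3), which is the precise sense in which the problem is impossible.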
Here are formal proofs of the impossibility of trisection and of doubling the cube, using the Isabelle proof assistant.
Wasn't that the status quo before the 20th century? It's strange to suggest that working with infinite (or rather, ideal) objects is stupid. The sheer amount of progress in 20th century mathematics provides incontrovertible evidence that ideal objects are a useful reasoning tool. This is even true in combinatorics: working with generating functions is working with an algebraic structure on infinite streams...
This essay is written to incite... there is so much in it that invites comment from everyone who has ever spent five minutes thinking about these things. At the same time, it is lacking in examples of ways in which a mathematical world without rigor would be better than what we have today. So instead of getting worked up about the essay itself, does anybody here have concrete examples where a lack of rigor led to faster progress?
> But I do believe that it is time to make [mathematics] a true science.
Surely any 'true science' has a dichotomy between conjecture or hypothesis, and theorem or accepted fact?
Going back to his example, how does it help if we all collectively agree that, yes, there must be infinitely many twin primes because of computational evidence X, Y and Z? Nothing more than agreement has been achieved in such a case, and we are none the wiser in the way of insight or explanation beyond the computations we already had...
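To make "computational evidence X, Y and Z" concrete for twin primes, here is the kind of tally one can produce (a simple sieve; the bounds are arbitrary). The pairs keep coming, yet no finite count settles the conjecture, which is the point:

```python
def primes_below(n):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def twin_pairs_below(n):
    """Consecutive primes differing by 2, e.g. (11, 13)."""
    ps = primes_below(n)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]

for bound in (10**2, 10**3, 10**4):
    print(bound, len(twin_pairs_below(bound)))  # 8, 35, 205 pairs
```

The counts grow steadily, and heuristics predict they always will, but that is evidence in the scientific sense, not a proof in the mathematical one.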
In a scientific or engineering context mathematics evolves naturally whenever the requirement for precision exceeds the capability of the available language and you have the right folks around to develop it.
I also wonder what is the motivation of the other factions, since I was never able to understand them.
The situations that we mathematicians mostly consider are axiomatic constructs, because it's easier to then unambiguously establish necessity.
That this art, its methods, techniques and tools, happens to be so useful to other sciences (and in fact most of human knowledge) is an interesting phenomenon...
Also, your definition applies to physics as well.
Um.... I think that a lot of what maths does is shared with physics, but I think (I could be wrong) that it's mostly because physics has adopted the language of mathematics.
That having been said, there is an aspect in which they are definitively different. Physics is motivated by understanding of the physical world: a physics theory gets dropped when we discover that we mis-observed whatever it was meant to explain. This doesn't happen in maths. The theories in maths (here defined as axiomatic structures) do not need to align with the natural world and get studied for reasons other than increasing our understanding of the natural world. Such understanding may eventually happen, but was not the motivating factor.
What do you think ? :)
- an extension of pure logic
- not a science (since it cannot be falsified)
- only indirectly concerned with observations (because we don't know the universe)
- used to predict, but also to reason about the past (big bang theories)
- a tool
- a game (puzzle)
Physics is maths with the addition of experimentally verified axioms.
Chemistry is physics with the addition of heuristics that explain phenomena which are too complex to express using causal logic.
Every step after chemistry just adds additional heuristics.
And, of course, the knowledge that solved them also become essential to other areas of Math, so I also don't like his framing.
Science is all about gathering evidence and the scientific method focuses very heavily on observation and testing. Basically, it is impossible to conduct science without data.
Maths is data-independent. More data or less data doesn't influence what maths is. Maths links the axiom and the result: once a result has been proven to follow from an axiom, the data is irrelevant. No amount of observation or testing will change the value of Pi.
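One way to dramatize this: observation can estimate pi, but different observations give different estimates, while the value they all approach is fixed by proof alone. A throwaway Monte Carlo sketch (sample size and seeds are arbitrary):

```python
import random

def estimate_pi(samples, seed):
    """Fraction of random points in the unit square that land inside
    the quarter-circle x^2 + y^2 <= 1, scaled by 4."""
    rng = random.Random(seed)
    hits = sum(rng.random()**2 + rng.random()**2 <= 1.0
               for _ in range(samples))
    return 4 * hits / samples

# Two data sets, two slightly different answers -- and more data only
# narrows in on a value that no experiment gets to choose.
print(estimate_pi(100_000, seed=1))
print(estimate_pi(100_000, seed=2))
```

The experiment converges on pi; it does not decide it. That asymmetry is the data-independence being claimed.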
Sure, there is data that suggests it is probably true. The scientific community can say there is enough data to be almost certain it is true and move on. The mathematical community cannot say for certain it is true because it is not yet proven. Which is why they are still working on it.
Some mathematicians start with data, and no doubt about it, data is effective. But I'm specifically saying that that is employing a tool of the scientific community (see ) as a starting point before proceeding to do some actual mathematical work. This distinction is why maths is often classified in the Arts rather than the sciences.
But if you can point to a maths textbook that teaches someone "general theories" by printing 10,000 data points followed by a QED then I'd suggest it is a pretty extraordinary theory.
EDIT I'll throw in an example; a software consultant might be involved in invoicing for a project. The invoicing is still accounting work, even if it is being done for a software project.
Consider making the observation (shown as a table):
| n | sum(1 to n) |
| 1 | 1 |
| 2 | 3 |
| 3 | 6 |
| 4 | 10 |
| 5 | 15 |
| 6 | 21 |
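The pattern in the table suggests the closed form n(n+1)/2 (the triangular numbers). A quick check far beyond the six observed rows strengthens the conjecture, but only a proof, e.g. by induction, actually settles it:

```python
def running_sums(n):
    """(k, 1 + 2 + ... + k) for k = 1 .. n, built exactly like the table."""
    total, rows = 0, []
    for k in range(1, n + 1):
        total += k
        rows.append((k, total))
    return rows

print(running_sums(6))  # [(1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 21)]

# Conjectured closed form, checked well past the observed data:
assert all(s == k * (k + 1) // 2 for k, s in running_sums(10_000))
```

Ten thousand confirmations are still data, not proof; the inductive argument is what promotes the pattern to a theorem.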
This is how many other conjectures begin, on less trivial examples. The four color theorem, for instance, was notoriously hard to prove (and there was a lot of controversy over its method of proof), but it was still just a conjecture until the proof was laid out, even though no one had ever found a counterexample.
Data for mathematicians rarely looks like tables of numbers. More often it looks like a list of simple manifolds where we can do computations by hand, or topological spaces that don't have the usual properties, or fields of characteristic different from what you're most comfortable with, or continuous functions whose derivative is zero almost everywhere but are not constant. The first thing my advisor asks me when I say "I might be able to prove X" is whether it's true in the simplest examples.
"An Aristotelian Realist Philosophy of Mathematics: Mathematics as the Science of Quantity and Structure" by James Franklin
"German Science" by Pierre Duhem
Some call it "contemplating their navels"; others call it curiosity. I'm reminded of this other essay  that was discussed on HN a week ago:
>[..] throughout the whole history of science most of the really great discoveries which had ultimately proved to be beneficial to mankind had been made by men and women who were driven not by the desire to be useful but merely the desire to satisfy their curiosity.
Regarding the rest of Zeilberger's essay ... while it is intuitively obvious that there is something odd going on with the assumptions in problems like Zeno's paradox (that's why we call them paradoxes), it is certainly not intuitively obvious what that something is, because it took a good amount of principled application of curiosity (and quite a while) to see why and how. What would you call a mind that does not yearn to know the details but instead yells and lavishly drops "that's stupid" in boldface and with an exclamation point? I'm not sure, but "uninteresting" comes to mind.
In general, it's easy to see that the ancient Greeks were mistaken on some issues or that their ideas on some others were downright silly, now that we have spent over two millennia improving on them and developing ideas and tools such as "experimental science".
 Abraham Flexner, "The Usefulness of Useless Knowledge", Harper's 179, 1939, https://library.ias.edu/files/UsefulnessHarpers.pdf HN discussion: https://news.ycombinator.com/item?id=14558775
The "refuse to acknowledge the existence" line is also nonsense (ignoring for the moment that "existence" is not a useful term for an abstraction), because Euler made "i" widespread by including it in his famous textbook, the complex plane was invented as early as the 1600s, and Gauss used it for some of his most impressive discoveries (and he lived at the same time as Fourier, who, let's be real, was also a mathematician). Their reservations were more along the lines of "guarding against its misuse," which still occurs today in algebra classes all around the world, and having doubts about its "metaphysics."
But I understand the desire to glorify engineers. It does often happen that practical concerns drive great mathematical discoveries. Just... be humble and do your research :)
I think more probability and statistics courses would be easily justified. Regarding computational thinking, math at the K-12 level is computational more than it is based on proof construction. Computational thinking should be added as a subset of the math and science courses. I think it helped many of my college classmates who weren't in CS to understand computational thinking when we ran simulations of experiments using MATLAB, or in reverse used it to process our results.
Also strange that he cites Hersh, who, if I remember correctly, became relevant during the post-modernism vs. science wars -- unless this is from that particular conflict, in which case I wasted my time reading this at all.
I'm really worried that people with a background in computer science don't get that from a false statement you can prove ANYTHING. And yes, literally ANYTHING. Should we accept models of computation that can compute faster than the speed of light...?