What Is Mathematics and What Should It Be? [pdf] (rutgers.edu)
107 points by santaclaus on June 22, 2017 | 91 comments



Dr. Zeilberger writes a rather pleasant rant. He appears frustrated with the lack of imagination that arises when mathematicians focus too strongly on the axiomatic nature of mathematics, and he seems to wish that we take a step back from the powerful axiomatic tools we have developed, so that we may search for what is 'really true about the world' rather than 'playing a mathematical game/believing in a mathematical religion'.

Three specific complaints he has involve a belief in infinity (and limits), which he asserts do not exist in the real world; a delay in the publication of a pair of papers on which he worked, due to a variety of circumstances; and a "pernicious" influence of axiomatic mathematics which leads to "stupid" questions such as Hilbert's Second Problem.

The failed publication of one paper is particularly notable, as it claimed a counterexample to Fermat's Last Theorem (according to Dr. Zeilberger's "Opinion 123", on his Rutgers website) and (ibid) was recognized by Andrew Wiles himself as one of three possible counterexamples; the reason given for this oversight was in fact the acceptance of the decidedly non-rigorous statement "it is easily seen ..." (ibid). This particular instance seems to contradict the main thrust of Dr. Zeilberger's rant against a mathematics overburdened by rules.

Unfortunately, little example of what mathematics SHOULD look like is offered, beyond a statement that 'obvious things should be treated as such' and an assertion that infinities and continuities do not occur in real life/nature/the universe. While such a statement may be understandable coming from a respected (and clearly accomplished) combinatorialist such as Dr. Zeilberger, physics has yet to demonstrate conclusively that space and time are discrete: undisproven interpretations of quantum mechanics exist which allow for continuities, and 'the size of the universe' is defined as that which we can see (i.e., within ~13.8 billion light-years of Earth/Sol), so actual infinities are ruled out only because of our inability to perceive them.

Perhaps Dr. Zeilberger needs to think outside his own discrete box.


Re: infinities and continuities, to my understanding it has been proved that interaction with the world is inherently discrete, determined by Planck's constant - I am not a physicist, so may be wrong.

If we cannot interact with arbitrary precision and can only ever observe a finite universe, does it make sense to even ask whether it is any other way? Should we ever figure out how to interact with the world using some new fundamental mechanism, other than a wave bound by c and h, that would reopen the question.


Even if the world is fundamentally discrete, that does not mean that continuous models are not valuable. They are.

We are used to doing discrete approximations to continuous problems, but the other way around works too. For example, evaluating the sum

1 + 1/4 + ... + 1/n^2

is quite hard. On the other hand, we may approximate it by the integral of 1/x^2, which remarkably has an easy antiderivative.
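To make this concrete, here's a minimal sketch (Python) comparing the partial sum against the easy integral bounds; the sandwiching inequalities are the standard integral-comparison bounds, an assumption not spelled out above:

  # The partial sum of 1/k^2 has no elementary closed form, but the integral
  # of 1/x^2 has the easy antiderivative -1/x, which sandwiches the sum:
  #   1 + (1/2 - 1/(n+1))  <=  sum_{k=1..n} 1/k^2  <=  1 + (1 - 1/n)
  def partial_sum(n):
      return sum(1.0 / k**2 for k in range(1, n + 1))

  n = 1000
  lower = 1 + (0.5 - 1.0 / (n + 1))  # 1 + integral of 1/x^2 from 2 to n+1
  upper = 1 + (1.0 - 1.0 / n)        # 1 + integral of 1/x^2 from 1 to n
  print(lower, partial_sum(n), upper)  # ~1.499 <= ~1.6439 <= ~1.999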

Continuous and discrete models complement each other; they are not mutually exclusive.


I wasn't trying to say that continuous models are not valuable, only that the question "is the world continuous?" does not make sense if we can only interact with it in a discrete way.


That is simply a rewording of the question: is it useful to use continuity to model the world?


This is getting quite philosophical now. It depends on the interpretation of "world". If we mean the universe, then no: to our current understanding it is not useful to model its entirety using continuity, if we can only measure it discretely. If by world we mean a subset of the universe, then it makes perfect sense, because the world might be on the macro level, where this "discreteness" is far smaller than we ever want to go.


I think you should concentrate on "model" in my sentence rather than "world". I have something I want to analyse. How is it useful to think about it, and what techniques do I want to use?


We don't interact with complex numbers, but "Are quantum 'probabilities' complex?" is still a meaningful question.


Planck's constant arises in continuous theories of quantum mechanics, such as Bohmian mechanics and its QFT extensions.

There are limitations to how much we can know, but not limits to what can be.


I haven't studied Bohmian mechanics to any reasonable degree, so I cannot comment on it. But preliminary reading has me sceptical because it relies on infinity. Time will tell I suppose.


Then read "The Cellular Automaton Interpretation of Quantum Mechanics" by Gerard 't Hooft (https://en.wikipedia.org/wiki/Gerard_%27t_Hooft), which tries to formulate a superdeterministic version of quantum mechanics based on cellular automata (i.e. much more discrete):

> https://arxiv.org/abs/1405.1548

EDIT: also cf. https://link.springer.com/book/10.1007%2F978-3-319-41285-6


That's some light bedtime reading for a decade or two... Thanks for the link; I actually look forward to diving into the topic.


A slight aside: due to the expansion of space, we can see objects that are currently further away than the light-travel time alone would suggest. GN-z11, one of the most distant objects imaged, has a light-travel distance of 13.4 billion light-years, but a proper distance of slightly more than double that, about 32 billion light-years.

https://en.wikipedia.org/wiki/GN-z11


I remember Dr. Zeilberger from my graduate days at Rutgers. He clearly had a lot of interesting things to say, but he packages them in a sensational and emotional form. I found it off-putting then, as I do now.

But to be charitable, I think it comes from a place of pushing with tremendous force a pendulum of thought back from the axiomatic nature towards more exploratory and relevant mathematics. It reminds me of Howard Zinn's intro to A People's History of the US in which he says that he purposefully did not water down his arguments to be more neutral because of the overwhelming nature of the opposition.

The infinity issue is quite interesting to ponder. Obviously with our current understanding of the universe and humans, the number of numbers ever encountered by all humans over all history would seem to be finite regardless of the fundamental nature of the universe itself. There is a very strong notion of our world being discrete in that way.

And it is very interesting to pursue these ideas. For example, the proof that there are infinitely many integers is to assume there is a largest one and add 1. But what if the largest is unknown, as is the case much of the time? This is true, for example, with computers (it can be known for the native format, but that can obviously be extended in programming in an arbitrary way).
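As a small illustration (a Python sketch; the 64-bit wrap-around is simulated with a mask, purely for demonstration), a "largest integer" exists for a fixed native format but not for arbitrary-precision integers:

  # In a fixed 64-bit format there is a known largest value, and adding 1
  # wraps around; Python's arbitrary-precision int just keeps growing.
  MASK = (1 << 64) - 1

  def add_u64(a, b):
      # Unsigned 64-bit addition: results wrap modulo 2**64.
      return (a + b) & MASK

  largest = MASK              # largest representable unsigned 64-bit value
  print(add_u64(largest, 1))  # 0 -- the native format's "largest + 1" wraps
  print(largest + 1)          # 18446744073709551616 -- bignum just grows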

It is a practical question to ask about existence questions that require an axiom (axiom of choice often) which gives no constructive way of doing something. To what extent is that actually useful? And if we had an approximate construction of something without the axiom, but the thing itself may or may not exist depending on the axiom chosen, what would that mean?

It is also interesting to think about doing away with infinity and thinking about how annoying it would be to not be able to talk about pi or e or even sqrt(2). There would be no irrational numbers, presumably, in this framework.

One can then see the wonders that embracing infinity can lead to as a crutch for dealing with the horrendously messy world of the finite and discrete.

Discussing these issues when learning mathematics strikes me as something that elevates what is otherwise a great deal of bland rule absorption, and gives students power over the tools of mathematics. So this debate is useful in my opinion, though it would be absurd to try to draw a definitive conclusion.

One last note about his discussion of the lack of rigor in QFT, and his view that this was a good thing. Recent work (at Rutgers [0]) suggests that at least some of the infinities and problems come from not appreciating the proper physical picture of what is actually happening. Nonetheless, the standard procedures worked to give us needed answers before such insight was discovered.

I think the basic answer should be that formal and exploratory mathematics are both useful and it is important to avoid dogma from either side.

[0]: https://arxiv.org/pdf/1703.04476.pdf


> It is a practical question to ask about existence questions that require an axiom (axiom of choice often) which gives no constructive way of doing something. To what extent is that actually useful?

I'm by no means a mathematician, but I do sympathize somewhat with this. To put it a bit controversially, the axiom of choice feels to me almost like the string theory of mathematics.

There's a lot of really cool stuff you can do with it, but we've left the realm of the even potentially useful, and are off approaching a form of rigorous philosophy.

I was looking into Banach–Tarski recently. Is it actually that useful to discuss something which almost certainly can never physically exist in our universe?

Now of course you could play devil's advocate and say that at some point people claimed you could never in the universe have a negative amount of something, so integers aren't useful, or that a square root of a negative isn't useful, so no imaginary numbers. Things are created all the time in math that we don't find practical applications for for decades or centuries.

Yes, that's true, but it's not a question of absolutes; it's a matter of degree. Has math swung too far into the world of non-relevant "play" at the expense of other discoveries?

Recently we've come across one of the most incredible tools of the past 1000 years: computers! As a programmer aware of Hindley-Milner, it boggles my mind how much programming people were doing for centuries without the benefit of computers to run it on. Now that we have them, we can course-correct. Previously we were shooting in the dark, and were surprisingly accurate. Now that we have computers, we should utilize them to focus on and prioritize the amazing things that come from powerful computability.

Let's explore the systems that can be constructed using these amazing new tools. Infinity in itself is OK as a model for common real-world objects (pi, e, sqrt 2).

Getting out of my knowledge area here, but what tools would we discover if we were focusing on a math without the Axiom of Choice? What proofs would be reformulated, and what tools would we develop to do this work?

Again, everyone should be free to think about whatever they like, but as a field it should also be cautious about how it spends its resources, particularly its most valuable resource: intellectual capital.


One thing which the author of this essay gets right is that contemporary mathematics is a bit dogmatic when it comes to the logical foundations. We should be looking at the Banach-Tarski theorem as evidence that a particular logical framework for reasoning about volumes/probability is more complicated than strictly necessary. Presumably no non-measurable sets exist in the "real world". At this point there are two possibilities:

- A theory with non-measurable sets is much simpler/more expressive than the alternatives and thus is still a useful tool.

- There is a simpler theory without this defect, which is at least as expressive when reasoning about real world phenomena.

In this case, the latter possibility turns out to be true. It is just unbelievably difficult to convince mathematicians to change the rules of the game, even if you can point to concrete gains.


Without the axiom of choice you get similarly bizarre results. For instance, without choice there can exist a surjection from the reals to a set of greater cardinality.


That's because the reals don't exist, which is Doron's point. We need the Axiom of Choice because we invented the reals.


The existence of the reals is independent of the axiom of choice. The collection of mathematicians that don't believe infinite sets exist has maybe a few members. Constructive mathematicians believe the reals exists. They can be constructed.


>Constructive mathematicians believe the reals exists.

I think when people mention the "reals" they are usually referring to the uncountable reals, which you can't construct, compute, name, or know:

https://arxiv.org/abs/math/0404335

...(maybe skip to chapter 5 to get to the meat of it). So constructivists don't believe in those types of numbers, and so get "choice" as a theorem instead of an axiom.


Constructive mathematicians definitely do believe the reals exist and are uncountable. Not all reals are computable or definable. The set of computable reals is countable. There are different varieties of constructive mathematics. A very large majority of constructive mathematicians believe the reals are uncountable. In intuitionistic mathematics the reals are uncountable.

In some constructive versions of math you get weird things like there being an injection from R to N but there not being a bijection due to diagonalization.
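For what it's worth, here is a small Python sketch of the (classical) diagonalization mentioned above; the enumeration is made up purely for illustration:

  # Cantor's diagonal construction: given any enumeration of infinite binary
  # sequences (enum(i) is the i-th sequence, as a function of position),
  # flipping the n-th digit of the n-th sequence yields a sequence that
  # differs from every enum(i) at position i.
  def diagonal(enum):
      return lambda n: 1 - enum(n)(n)

  # A made-up enumeration: the i-th sequence is 1 where (i+1) divides (n+1).
  enum = lambda i: (lambda n: 1 if (n + 1) % (i + 1) == 0 else 0)

  d = diagonal(enum)
  print([d(n) for n in range(10)])                    # the diagonal sequence
  print(all(d(i) != enum(i)(i) for i in range(100)))  # True: differs from each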

I think you are confusing computable/nameable with constructive as that term is used by most mathematicians.

Note that Chaitin's constant is not computable.


This link may be of interest to you:

https://mathoverflow.net/a/30694


What do you mean? If ZF without choice shows that a statement P holds then certainly ZFC also proves the same statement. Do you mean that there is a consistent extension of ZF with the statement "There exists a surjection from R to P(R)"?


There is a theorem, due to Sierpinski, that states that either there is a non-measurable subset of the reals or there is a surjection of the reals onto a larger set. This is way out of my area, but I'm guessing ZF is not strong enough to show either that all sets are measurable or that a non-measurable set exists.

The common criticism of AC is Banach-Tarski. So if you don't agree with Banach-Tarski and want every subset of R to be measurable then you have to conclude an equally bizarre result.

As an algebraist I accepted AC. Instead of constantly saying "let V be a vector space over k with a basis", it's easier to just assume all vector spaces have a basis. I think analysts need AC more than other branches; the Intermediate Value Theorem isn't provable without AC, I think.


What I found after Googling is that there is a theorem by Sierpinski which shows that if we make an additional assumption about the cardinality of [R]^\omega, which seems to be approximately the same as the boolean prime ideal axiom, then there is a non-measurable set. Thus the existence of non-measurable sets is implied by principles weaker than full choice, but that's not too surprising.

The case of measure theory is particularly interesting, though, since you don't even need to change your underlying logic to get a better model. For instance, if you base your "measure theory" on valuations on locales instead of measures on sigma algebras then the theory itself becomes simpler and the Banach-Tarski "paradox" goes away.

Briefly, in locale theory, your "measure" is defined on sublocales instead of subsets. While there are more sublocales than subsets, the condition for when two sublocales are disjoint is stronger. This is what breaks the Banach-Tarski construction. The orbit subsets used in Banach-Tarski still exist, but while they are disjoint as sets, they are not disjoint as sublocales and thus don't decompose the volume of the sphere.


I'm not familiar with locale theory. Thanks for the reference. I'm guessing there will be some non-intuitive results. My non-expert impression is that whatever one chooses in terms of logic and set theory, there will be bizarre results when dealing with sets of cardinality of the continuum, and not dealing with the continuum leaves out too much.

Here is a reference to a MathOverflow comment.

https://mathoverflow.net/a/22935


> There is a simpler theory without this defect, which is at least as expressive when reasoning about real world phenomena.

This is really interesting to know, and is exactly the type of thing that I was wondering about. And I assume the author felt similarly, although he didn't draw identical conclusions.

Out of curiosity could you describe or link me to this other theory?


I was thinking about categorical probability theory or measure theory based on locales. Unfortunately there are (to the best of my knowledge) no textbooks or good writeups available for either, merely long lines of research papers. As I said, it's a bit of a niche area, since most mathematicians don't want to think about reworking foundations.


Are there any papers you would recommend as good starting points?


There is a whole body of mathematics which takes these things seriously (constructive math, intuitionistic foundations), and it's, if anything, even more heavily formalized. I think the author has some technical points, but his desire to "make math more like science" is outmoded in many areas. Mathematicians went toward formalism for good reasons. They might not have made all the right decisions, but that doesn't mean that upending the table and starting fresh makes sense.



Better yet, read the book Zeilberger mentions at the beginning of his essay:

"The Mathematical Experience" by Phillip J. Davis, and Reuben Hersh

http://isbn.nu/9780395929681

...which is a great book, composed of a bunch of different essays comparing and contrasting three philosophies of mathematics (Platonism, Formalism, Intuitionism).


Zeilberger's Opinion 123 is an April Fool's Day joke.


I am truly confused by this screed.

>• Stupid Question 2: Trisect an arbitrary angle only using straight-edge and compass.

>Many, very smart people, tried in vain, to solve this problem, until it turned out, in the 19th century, to be impossible

>• Stupid Question 3: Double the cube.

>Ditto, 2^(1/3) is a cubic-algebraic number.

>• Stupid Question 4: Square the Circle.

>Many, very smart people, tried (and some still do!) in vain, to solve this problem, until it turned out [...] to be impossible.

>Today’s Mathematics Is a Religion

>Its central dogma is thou should prove everything rigorously.

To me, that's the entire point. You prove things rigorously so that (among many other reasons) when someone considers trying to square the circle, you can point them to the proof that the circle cannot be squared.


>Its central dogma is thou should prove everything rigorously.

>To me, that's the entire point. You prove things rigorously so that (among many other reasons) when someone considers trying to square the circle, you can point them to the proof that the circle cannot be squared.

This. Those defending mathematical rigour are a minority almost everywhere except in a school of mathematics. There is no shortage of people trying all the slapdash ways of doing things.


Then why do I frequently get confronted by the religion of "inherent truth in math", and why do we keep teaching our children math made overly convoluted and unintuitive by the "rigorous" crowd? Pretending that this is only a minority is not helpful in quashing the damage done by magical thinking in mathematics.

An example in computer science is the whole P = NP nonsense. We have never found any evidence that P = NP, nor has anyone discovered a compelling reason why it should be. Yet, the first sentence on its Wikipedia page is:

> The P versus NP problem is a major unsolved problem in computer science.

"Unsolved?" Only in the perspective of the "rigorous proof" crowd which you imply are an unimportant minority. The truth is that P = NP is just another, as Dr. Zeilberger calls it, "stupid question."


I don't get your P = NP example. If P = NP, it has serious consequences. E.g., as I understand it, a lot of cryptography will ultimately fail if we discover that P = NP.

We have no reason to believe that P = NP, but on the other hand, without proof, we can't know for certain that it doesn't. We're not infinitely smart, and we just might not have yet seen the way in which P does equal NP.

Surely we live in a world filled with uncertainty, and we have to be able to deal with that. But in the instances where we can be certain about things, that certainty benefits us.

Also, I don't understand the argument that learning to think with the rigor of mathematics is somehow detrimental. There are many different ways of reasoning, and a well-educated person should strive to learn a good amount of these useful mental tools. Learning to think with mathematical rigor is a good tool to have in one's mental toolbox.


> a lot of cryptography will ultimately fail if we discover that P = NP.

This is conjectured, but can you prove it? When the degree of the polynomial of the minimal running time, or its coefficients, gets large, one can imagine a world where e.g. asymmetric cryptography, as we understand it today, is still possible.


> Then why do I frequently get confronted by the religion of "inherent truth in math", and why do we keep teaching our children math made overly convoluted and unintuitive by the "rigorous" crowd? Pretending that this is only a minority is not helpful in quashing the damage done by magical thinking in mathematics.

Are you able to expand on this point?

In my experience I have not heard of children learning rigorous mathematics. Grade school kids learn arithmetic. High school kids learn some calculus and probability (at most). This curriculum is practical for those going to uni or a technical trade (although the way it's taught is critical for it to be of any use; if I were a science teacher, my class would just build Arduino projects all day!).

Your reference to magical thinking seems to be the opposite of what I would consider rigour... I would associate magical thinking with an economist or engineer building a sprawling quantitative risk model and then justifying billion dollar decisions with it. Later realising a small logical error has invalidated their conclusion or that they have overfit their data etc. (Just for example...)


This problem IS actually unsolved. Maybe you found something very smart to tell us that nobody knows...?


It depends what you mean by "solved." In a rigorous, pure math sense, no. But so what? Math is only about itself and its own esoteric rules, not about practical reality. In a scientific sense, it is proven in the same way we know there are no unicorns: there isn't a scrap of evidence they exist, nor is there any reason to believe they should.


We use math as a framework for reasoning about physical reality. When doing this we assume that the logic of our math holds for physical reality and that we can use it to make useful predictions. If a mathematical theory has held up so far, then that is evidence that it will continue to. It's not a guarantee, but it is evidence.


P = NP is interesting precisely because of what you're saying: it feels trivially obvious that they aren't equal but no one can actually prove it... which sort of implies that it's not trivially obvious.


>• Stupid Question 3: Double the cube.

     (X^3) * 2  -- Done.
>• Stupid Question 4: Square the Circle.

     r is known. X is not.
     pi * r^2 = X^2
     sqrt(pi * r^2) = X , for positive X
oh, geometrically? No. Algebraically works cleaner, and for any arbitrary positive solutions for r.

>Its central dogma is thou should prove everything rigorously.

That's not a dogma. It's a proof: anyone, no matter who, no matter when or where in the universe, can duplicate these results and show they are logically true. Or they can show the results are logically false, no matter the inputs given.

It's not "dogma", as some high edict by a Pope or something. A rank amateur could further the field by proving a new theorem - because the person doesn't matter. The soundness of logic does.


> That's not a dogma. It's a proof: anyone, no matter who, no matter when or where in the universe, can duplicate these results and show they are logically true. Or they can show the results are logically false, no matter the inputs given.

It is not even only that. Rigorous proofs show the limits of your knowledge. Modern math is a huge edifice that we would be completely unable to build if we based it on intuitive, semi-rigorous foundations.

Yes, proving things is boring, and won't add anything to your immediate problem. No, we still need it, like we need many other kinds of investment.


> oh, geometrically? No. Algebraically works cleaner, and for any arbitrary positive solutions for r.

I hope you're not being serious. Just in case you are, your algebra is wrong. I'm quite certain you didn't look up what "doubling the cube" means, since the (faux) algebraic solution is y = cube_root(2 * x^3). It undercuts the rest of your comment.


The doubling-the-cube answer was deliberately ridiculous, to make a point about the poor language used to state the problem.

If it really meant doubling the volume of a cube of side x, then absolutely it's (2 * x^3)^(1/3).


I don't think the author was trying to give a precise description of the problem. "Doubling the cube" is a term of art. It's like if he used the word "derivative" and you thought it meant a cheap copy of something, and then went further to prove how silly calculus was because of your misunderstanding of the term.

You're also selling the problem short. Doubling the cube is about producing a finite algorithm (given a limited set of operations) that realizes the value of (2 * x^3)^(1/3) concretely. An algebraic solution does not do this, because it stops at the inability to realize, say, the cube root of 2 explicitly.


There's an interesting proof of the impossibility of trisecting an arbitrary angle with only straight-edge and compass on Terry Tao's blog [1]. It's interesting because it is mostly a geometric proof rather than the usual Galois theory approach. The only major non-geometric part is that it depends on the fact that 3 does not divide any integral power of 2.

Here are formal proofs of the impossibility of trisection and of doubling the cube, using the Isabelle proof assistant [2].

[1] https://terrytao.wordpress.com/2011/08/10/a-geometric-proof-...

[2] https://www.isa-afp.org/entries/Impossible_Geometry.shtml


Robert Yates wrote an interesting book, which I liked, about a plethora of non-compass-and-straight-edge trisection methods: The Trisection Problem.

https://www.google.com/#q=the+trisection+problem+yates


To be extremely rigorous, one would have to write formal proofs that include all steps. This is the type of proof that could be verified by a computer, and it may be tedious and very hard for a human to understand. Mathematicians tend instead to prove things informally, in a way that is easy for others to read and understand. It's almost akin to writing programs in machine code vs. writing programs in a high-level programming language.
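For a taste of what the "machine code" end looks like, here is a minimal sketch of a fully formal, machine-checked proof (Lean 4, using only the core lemma Nat.add_succ) of the humble fact that 0 + n = n, which an informal proof would simply assert:

  -- Every step is explicit and machine-checked: induction on n, then
  -- rewriting with the core lemma Nat.add_succ and the hypothesis ih.
  theorem zero_add' (n : Nat) : 0 + n = n := by
    induction n with
    | zero => rfl
    | succ k ih => rw [Nat.add_succ, ih]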


> In particular, one should abandon the dichotomy between conjecture and theorem.

Wasn't that the status quo before the 20th century? It's strange to suggest that working with infinite (or rather, ideal) objects is stupid. The sheer amount of progress in 20th century mathematics provides incontrovertible evidence that ideal objects are a useful reasoning tool. This is even true in combinatorics: working with generating functions is working with an algebraic structure on infinite streams...
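To illustrate that last point, here is a small Python sketch (truncating the infinite coefficient streams to N terms) in which multiplying generating functions is just convolving their coefficient streams:

  # Generating functions as streams of coefficients; the product is the
  # Cauchy convolution. 1/(1-x) has coefficients 1, 1, 1, ..., and its
  # square should have coefficients 1, 2, 3, ...
  N = 10

  def mul(a, b):
      return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

  geometric = [1] * N               # 1/(1-x), truncated to N terms
  print(mul(geometric, geometric))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]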

This essay is written to incite... there is so much in there that invites comment from everyone who has ever spent five minutes thinking about these things. At the same time it is lacking in examples of ways in which a mathematical world without rigor would be better than what we have today. So instead of getting worked up about the essay itself, does anybody here have concrete examples where a lack of rigor led to faster progress?


It seems strange to me for the author to write that, and then to end with:

> But I do believe that it is time to make [mathematics] a true science.

Surely any 'true science' has a dichotomy between conjecture or hypothesis, and theorem or accepted fact?


Nothing is preventing you from using computers cleverly to gain a better understanding of the mathematical problems you are facing. But abandoning or discounting the importance of rigor is downright stupid. The great achievement of mathematics is the vast repository of irrefutable statements which will stand for eternity. And this comes from rigor, which helps and does not hinder.

Going back to his example, how does it help if we all collectively agree that, yes, there must be infinitely many twin primes because of computational evidence X, Y and Z? Nothing more than agreement has been achieved in such a case, and we are none the wiser in the way of insight or explanation beyond the computations we already had...


> But abandoning or discounting the importance of rigor is downright stupid.

https://en.wikipedia.org/wiki/Italian_school_of_algebraic_ge...


From an engineering perspective... mathematics looks like an add-on to natural language, designed to express precise concepts in the simplest possible way. (Simple != easy, of course.)

In a scientific or engineering context mathematics evolves naturally whenever the requirement for precision exceeds the capability of the available language and you have the right folks around to develop it.


As evidence, look at how math looked before we formalized notation: for example, the algebra book by al-Khwarizmi (the guy algorithms are named after). Wikipedia has a commented excerpt: https://en.wikipedia.org/wiki/Muhammad_ibn_Musa_al-Khwarizmi...


Very nice description. This is how I feel about mathematics as well. This is also why I have no problem with infinity: in a lot of situations it simply describes problem and solution in the most succinct way.


I wonder if all engineers are algebraists.

I also wonder what the motivation of the other factions is, since I was never able to understand them.


Mathematics is the art of highlighting the necessary consequences of a situation. That's all.

The situations that we mathematicians mostly consider are axiomatic constructs, because it's easier to then unambiguously establish necessity.

That this art, its methods, techniques and tools, happens to be so useful to other sciences (and in fact most of human knowledge) is an interesting phenomenon...


Mathematics is also used for purely abstract entertainment.

Also, your definition applies to physics as well.


> Also, your definition applies to physics as well.

Um... I think it looks like a lot of what maths does is shared with physics, but I think (I could be wrong) that it's mostly because physics has adopted the language of mathematics.

That having been said, there is an aspect where they are definitively different. Physics is motivated by the understanding of the physical world: for instance, a physics theory gets dropped when we discover that we mis-observed whatever it was meant to explain. This doesn't happen in maths. The theories in maths (here defined as axiomatic structures) do not need to align with the natural world, and they get studied for reasons other than increasing our understanding of the natural world. Such understanding may eventually happen, but it was not the motivating factor.

What do you think ? :)


In my opinion, math is:

- an extension of pure logic

- not a science (since it cannot be falsified)

- only indirectly concerned with observations (because we don't know the universe)

- used to predict, but also to reason about the past (big bang theories)

- a tool

- a game (puzzle)


Maths is logic with axioms that are intuitively true.

Physics is maths with the addition of experimentally verified axioms.

Chemistry is Physics with the addition of heuristics that explain phenomena which are too complex to express using causal logic.

Every step after chemistry just adds further heuristics.


Reading the word "pseudo-problems" sets me off to a rough start. But calling problems stupid merely because they're unsolvable is a mortal sin, especially if there's a non-trivial reason why we cannot mathematically go from this to that. That the impossibility of squaring the circle is due to its equivalence with the transcendence of π is in itself an interesting piece of mathematics.


He didn't call them stupid because they were unsolvable. He called them stupid because they don't arise in any practice.

And, of course, the knowledge that solved them also became essential to other areas of math, so I also don't like his framing.


This gentleman needs to watch Feynman talking about the difference between maths and physics.

Science is all about gathering evidence and the scientific method focuses very heavily on observation and testing. Basically, it is impossible to conduct science without data.

Maths is data independent. More data or less data doesn't influence what maths is. Maths links the axiom and the result: once a result has been proven to follow from an axiom, the data is irrelevant. No amount of observation or testing will change the value of Pi.

https://www.youtube.com/watch?v=obCjODeoLVw


I wouldn't say it's data independent. Look at Goldbach's conjecture. While there is no rigorous proof, it seems almost impossible that it is not true. A lot of conjectures like that are based on data gathered first.


You are referring to an as-yet-unsolved problem. The maths is as yet un-done!

Sure, there is data that suggests it is probably true. The scientific community can say there is enough data to be almost certain it is true and move on. The mathematical community cannot say for certain it is true, because it is not yet proven. Which is why they are still working on it.


I would suggest that formulating conjectures based on data is part of the mathematical process, would you not?


I would suggest that formulating conjectures based on data is specifically not maths.

Some mathematicians start with data, and no doubt about it data is effective. But I'm specifically saying that that is employing a tool of the scientific community (see [1]) as a starting point before then proceeding to do some actual mathematical work. This distinction is why maths is often classified in the Arts rather than the sciences.

But if you can point to a maths textbook that teaches someone "general theories" by printing 10,000 data points followed by a QED then I'd suggest it is a pretty extraordinary theory.

[1] https://en.wikipedia.org/wiki/Scientific_method

EDIT: I'll throw in an example. A software consultant might be involved in invoicing for a project. The invoicing is still accounting work, even if it is being done for a software project.


Formulating a conjecture and proving a conjecture are two different activities. A conjecture is based on incomplete information (data gathered so far), and then we try to prove or disprove it using the mathematical tools available (and sometimes developing new tools).

Consider making the observation (shown as a table):

  +-----+-------------+
  |  n  | sum(1 to n) |
  +-----+-------------+
  |  1  |      1      |
  +-----+-------------+
  |  2  |      3      |
  +-----+-------------+
  |  3  |      6      |
  +-----+-------------+
  |  4  |      10     |
  +-----+-------------+
  |  5  |      15     |
  +-----+-------------+
  |  6  |      21     |
  +-----+-------------+
We can come up with the formula sum(1 to n) = n(n+1)/2 by several methods, but from the data given it's only a conjecture. Depending on how we came up with that formula, we may already have proven the conjecture. Or, if we constructed it from the data only, we can prove it via induction or other methods.
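As a quick Python sketch of the data-vs-proof distinction: checking the formula against finitely many cases only gathers evidence for the conjecture; the induction argument is what proves it for all n.

  # Evidence, not proof: the conjectured formula agrees with every case we
  # bother to check, but no finite check covers all n.
  def sum_to(n):
      return sum(range(1, n + 1))

  for n in range(1, 1000):
      assert sum_to(n) == n * (n + 1) // 2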

Many other conjectures begin the same way, with less trivial examples. The four color theorem, for instance, was notably hard to prove (and there was a lot of controversy over its method of proof). But it was still just a conjecture until the proof was laid out, even though no one had ever found a counterexample.


I don't think any mathematician will agree with you on that (I'm a PhD student in math). Examples (i.e. data) are foundational to mathematical intuition, which is a foundation of writing proofs. Drawing a line between the two would be ridiculous.

Data for mathematicians rarely looks like tables of numbers. More often it looks like a list of simple manifolds where we can do computations by hand, or topological spaces that don't have the usual properties, or fields of characteristic different from what you're most comfortable with, or continuous functions whose derivative is zero almost everywhere but are not constant. The first thing my advisor asks me when I say "I might be able to prove X" is whether it's true in the simplest examples.


Mathematics is a formal science, not an empirical science. In other words, both Zeilberger and those he criticizes (as he characterizes them) are wrong. Some interesting books on the subject...

"An Aristotelian Realist Philosophy of Mathematics: Mathematics as the Science of Quantity and Structure" by James Franklin

"German Science" by Pierre Duhem


It is unclear to me why this rant was necessary. The field of applied mathematics and other mathematical-modeling subfields seem to be what the author means when he talks of mathematics as science and of mathematical truths. His criticism would have been much more salient if applied mathematics weren't thriving today more than ever: with the explosion of computational power, every field now seeks to make ever more complex mathematical models of the problems it studies and solves.


That was very unpleasant to read, but what he basically proposes is to change "don't know if true until proven true" to "true until proven false", which is ridiculous.


I can't say I like the essentially manufactured grievance against Hardy and his "young man's game" quote, taken completely out of context. It is a cheap shot and has nothing to do with his argument.


>[..] Since slaves did all the manual labor, the rich folks had plenty of time to contemplate their navels, and to ponder about the meaning of life. Hence Western Philosophy, with its many pseudo-questions was developed in the hands of Plato, Aristotle, and their buddies, and ‘Modern’ pure mathematics was inaugurated in the hands of the gang of Euclid et. al.

Some call it "contemplating their navels"; others call it curiosity. I'm reminded of this other essay [1] that was discussed on HN a week ago:

>[..] throughout the whole history of science most of the really great discoveries which had ultimately proved to be beneficial to mankind had been made by men and women who were driven not by the desire to be useful but merely the desire to satisfy their curiosity.

Regarding the rest of Zeilberger's essay... while it is intuitively obvious that there is something odd going on with the assumptions in problems like Zeno's paradox (that's why we call them paradoxes), it is certainly not intuitively obvious what that something exactly is, because it took us a good amount of principled application of curiosity (and quite a while) to see why and how. What would you call a mind that does not yearn to know the details but instead yells, and lavishly prefers to drop "that's stupid" in boldface and with an exclamation point? I'm not sure, but "uninteresting" comes to mind.

In general, it's easy to see that the ancient Greeks were mistaken on some issues, or that their ideas on some others were downright silly, now that we have spent over two millennia improving on them and developing ideas and tools such as "experimental science".

[1] Abraham Flexner, "Usefulness of Useless Knowledge", Harpers 179, 1939, https://library.ias.edu/files/UsefulnessHarpers.pdf HN discussion: https://news.ycombinator.com/item?id=14558775


My formulation would be something like this: mathematics is how you discover further truths about worlds that have objective truths. We don't know if the world "we live in" is such a world, but it contains a lot of those other worlds =)


Brilliant!


Just consider that academic math refused for over 300 years to acknowledge the existence of complex numbers, leaving the problem to the engineers. No scientist could afford this substitution of dogma for reality.


This is a straw man and does a huge disservice to the complicated history behind complex numbers [1]. In particular, mathematicians came up with them first (the Italians, Cardano et al., if you don't count the Greeks, who did something fuzzily approaching complex numbers), but it doesn't matter anyway, because back then there were no clear lines between the sciences. Mathematicians were physicists, engineers, philosophers, astronomers, and whatever else.

The "refuse to acknowledge the existence" line is also nonsense (ignoring for the moment that "existence" is not a useful term for an abstraction), because Euler made "i" widespread by including it in his famous textbook, the complex plane was invented as early as the 1600s, Gauss used it for some of his most impressive discoveries (and he lived at the same time as Fourier, who, let's be real, was also a mathematician), and their reservations were more along the lines of "guarding against its misuse, "which still occurs today in algebra classes all around the world, and having doubts about its "metaphysics."

But I understand the desire to glorify engineers. It does often happen that practical concerns drive great mathematical discoveries. Just... be humble and do your research :)

[1]: https://en.wikipedia.org/wiki/Complex_number#History


I don't have a problem with the pure math crowd doing their own thing. They absolutely discover some amazing things using their techniques. What I have a problem with is that these people write the math textbooks. Why are we bamboozling our poor children with this impractical, esoteric, overcomplicated, pseudo-religious, gobbledygook? Primary and secondary school mathematics need to change focus to applied and computational math.


Since I haven't been in a K-12 math classroom in about 10 years (contemplated becoming a teacher and visited some friends' classes), what's the gobbledygook we're teaching kids these days? I don't recall any gobbledygook from my years in K-12 education (excepting an awful, awful long-term substitute teacher). Arithmetic and algebra (K-8, basically), geometry, trig, and calc were all practical and applied (the latter two in advanced science courses and work rather than daily life).

I think more probability and statistics courses would be easily justified. Regarding computational thinking, math at the K-12 level is computational thinking more than it is based on proof construction. Computational thinking should be added as a subset of the math and science courses. I think it helped many of my college classmates who weren't in CS to understand computational thinking when we ran simulations of experiments using MATLAB, or, in reverse, used it to process our results.


Do you have a citation for that? Wikipedia's history of complex numbers makes no mention of this. What uses for complex numbers did engineers have prior to Gauss that weren't anticipated by mathematicians? https://en.m.wikipedia.org/wiki/Complex_number#History


I hate combinatorics. Number theory and algebra are so beautiful and profound and I feel combinatorics just sucks out all the soul.

Also strange that he cites Hersh, who, if I remember correctly, became relevant during the postmodernism-vs-science wars, unless this is from that particular conflict, in which case I wasted my time reading this at all.


"Its central dogma is thou should prove everything rigorously"

I'm really worried that people who have knowledge in computer science don't get that from a false statement you can prove ANYTHING. And yes, literally ANYTHING. Should we accept models of computers that can compute faster than the speed of light...?


I'm not convinced that you're really worried about that. You seem fairly dramatic, to be honest.


That's some undergraduate logic, and it was covered in my first weeks in college, so yes, I'm concerned about what people get from an expensive computer science degree.


We are already suffering greatly in computing because of a reduction of rigor. People who can't handle rigor in mathematics, the tool of every freaking scientist in STEM, should seek a different, more enjoyable field.


offer some proof of this?



