Hacker News
What Makes for 'Good' Mathematics? (quantamagazine.org)
46 points by digital55 10 months ago | 59 comments



I have sort of a contrarian take on this, at least by traditional academic standards. I think applicability is extremely undervalued in modern mathematics. Tao references algebraic geometry in the interview, but the people who do algebraic geometry in applied settings ("applied" is also being quite generous here, in my opinion) are doing math that is basically unrelated to algebraic geometry as practiced by theoretical mathematicians. Even in connections to computer science, the people studying algebraic circuit complexity, which a priori should benefit a lot from algebraic geometry, basically can't use any modern algebraic geometry theory because (to my understanding) the theoreticians choose to work in a setting that is so elegant it rules out interesting algebraic circuits. And circuit complexity is still quite far removed from applications as I think of them.

I think the best and most beautiful mathematics is driven directly by applications, and made better for the pressure to apply to a real problem (not a problem that just mathematicians care about).

A quote from Milind Tambe's 2018 AAAI keynote talk sums up why:

> Deploying the algorithms in the field often gives us new insights into what might be wrong with our models and provides us new research directions.

If you've never heard of Tambe's work, you should really check it out. He literally sends his grad students to do field work to study how their algorithms are performing.

Too many mathematicians take "applications" to mean "can be used in other, purely theoretical academic papers," and based on my research for my next book (pmfpbook.org) I find that large swaths of what is claimed to be applied math are actually not useful for solving the real-world problems they claim to address. A math thing will be tried in production, to much acclaim and fanfare, and then it will be quietly discarded in favor of simpler, more holistic, or less complicated solutions that end up performing better. But the academic publications praising it persist for decades, and nobody seems to get the message that it's not used.

One thing Tao gets right is that the best math is interdisciplinary. Connecting ideas across different theories is one great way to demonstrate the power of a theory. I would add that, to be great, a theory needs to be _grounded_ in a practical setting, where it's not just mathematicians judging the math for its beauty but an external reality pushing back. For my aesthetic, a pretty theory is just not enough.


The idea that academic work should be grounded in "applicability" is a lazy canard. Nobody was much convinced that decades of work in number theory was worth much of anything until applications in cryptography were discovered. We just don't know what we'll find.

Academic research in every field is undertaken for its own sake. If you want to work on "applicability", go get a job in industry. Finance is still hiring quants, for example.


People are welcome to do whatever they like. I'm expressing an opinion, my value judgement, that the math done for its own sake is just not as beautiful to me as math that has applications.

You call it laziness, and I call it a refined view built up over decades of study and repeatedly hearing mathematicians bullshit and be misinformed about how applicable their work is.


>that the math done for its own sake is just not as beautiful to me as math that has applications.

The point the person you're replying to is making is that you don't know the distinction between maths done for its own sake and maths that has applications until potentially hundreds of years later.


If you want to read a more thorough defence of the value of pure mathematics research, I recommend that you read A Mathematician's Apology by G.H. Hardy. Hardy witnessed first-hand the beginnings of the applications of his field, number theory, to cryptography. At that time, the only use for cryptography was in warfare so as a committed pacifist you can well imagine how much this upset him.

> misinformed about how applicable their work is

I think I know exactly what you're talking about, and it's unfair. I've also seen it happen to scientists across the spectrum: a question from a journalist or other layperson about the applicability of their research, designed to catch them off guard, put them on the spot, humble them, and make the questioner feel superior. You never hear this sort of question levelled at painters, sculptors, musicians, writers, game designers, or any other artist!


It's different. Painters, sculptors, musicians, etc. are all doing something which is ultimately in service of making others happy and fulfilled. A mathematician or scientist working on something terribly narrow and abstruse is arguably only working on something in service of making themselves happy and fulfilled. Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate. The idea that we can't question the legitimacy or applicability of research because we can't predict how useful that research will be in 300 years is dumb.

If you want to be taken seriously, it's better to spend some time dwelling on whether you are responsibly fulfilling your obligation to society. For many professors, I think this could easily be addressed by shifting the focus away from research back to where it belongs: teaching.


On the other hand, if there are potential commercial applications 10-20 years in the future, it may not be the best idea for the government to fund the research. The topic is already concrete enough for the market, and it could be a waste of tax money to fund it. There is a lot more money in commercial research than publicly funded research. The government should focus on funding topics where the applications are too uncertain or too far in the future to make sense in the market.


> Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate.

That's not the deal they signed up for. They were told that they were going into a purely theoretical field. To turn around and demand applications is called a bait-and-switch.

> For many professors, I think this could easily be addressed by shifting the focus away from research back to where it belongs: teaching.

Same as above. The universities hired these professors on the basis of their research. If they want teaching staff they should hire teachers. The researchers will seek greener pastures elsewhere.

As for taxpayers demanding applications? Vote for it! Give the money to DARPA or Johns Hopkins instead of Princeton IAS. But to demand that theoreticians come up with applications on the spot is disingenuous.


Most people don't have the luxury of tenure. Frankly, most people don't have the luxury of a single career that's well paying throughout their professional life. If funding dried up tomorrow and every number theory professor was out on their ear, that wouldn't be a bait and switch. That would be a uniquely nice and pleasant situation for them coming to an end.

It's probably also good to contextualize this historically. The NSF has only existed since 1950. The high degree of funding for pure math, let alone the rest of science and engineering, could easily be a historical blip.

I don't really care what happens to people who are currently employed doing some kind of useless research in academia. Indeed, they got hired for it. I'm simply pointing out that it's totally fair and reasonable for anyone to ask whether it's a good idea to employ people this way. Your earlier comment seems to suggest that you don't think it's OK for people to call this into question.

I'm also not particularly worried about it. I'm not sure how much money IAS gets, but my guess is that DARPA and the DOE get a hell of a lot more, which (at least in my opinion ;-) ) is good.


It may be fair but it's definitely not reasonable. Progress in mathematics has foreshadowed practical applicability sometimes by centuries. It's short-sighted to consider applications outside of mathematics in the hiring of mathematicians.

In that context, it's worth noting that research mathematicians almost always mean "applications to other fields of mathematics" when they talk about the applicability of their research.


I don't agree. You're basically sticking your head in the sand saying that because there have been unanticipated benefits sometimes hundreds of years later, we can't even attempt to decide what research is valuable.

If the public is paying for research, it is both fair and reasonable to ask questions about its utility. It's as simple as that.


> it is both fair and reasonable to ask questions about its utility.

What questions, though? How do we know that these questions are well-calibrated for the long-tail cases that end up being valuable centuries later?


It’s totally fine if you want to say “defund pure mathematics!” I was complaining about the bait-and-switch people do when they take someone who was hired to do pure mathematics and then demand applications from them. It’s disingenuous and unfair. Likewise for the superstar researcher who is demanded to teach!


I never said we should defund pure math.

How many pure mathematicians have gotten tenure in math departments and were subsequently forced to work on "applications"? Do you have any actual examples?

How many superstar researchers have been forced to teach against their will? If they're working as a regular, tenure-track professor, then teaching is part of their job. The word doctor comes from the Latin word for teacher!


You didn’t say “defund pure mathematics” but you implied it with your original statement:

> A mathematician or scientist working on something terribly narrow and abstruse is arguably only working on something in service of making themselves happy and fulfilled. Since they are frequently supported by taxpayer money, asking whether what they're working on is worth that expense is totally legitimate

To me this reads as “pure mathematics is a taxpayer-funded hobby that provides no value to society.” Is that an unfair reading? I don’t know how else to parse the statement that mathematicians are ”arguably only working on something in service of making themselves happy and fulfilled.” If that is what they’re doing then why shouldn’t they be defunded?

As for teaching, plenty of pure mathematicians do teach graduate students. Asking a world-leading number theorist to teach a first-year algebra class is silly, though. Universities hire dedicated teaching faculty for those courses. Many of the expert researchers are terrible at teaching.

I also want to add that the reason universities covet superstar researchers (in any field) is because they bring in grant money (paying for themselves and then some) and because research elevates the profile of the school in the world rankings. The best schools have the best researchers but that doesn’t mean they provide the best education. Trying to fix this is an enormous challenge because you’ve got to tackle the issue of schools competing against each other for students. The losing schools are going bankrupt [1].

[1] https://www.cbc.ca/news/canada/ottawa/queens-university-budg...


I'm not saying we should defund math. I'm saying that it's perfectly fair and reasonable to think through how much sense it makes to allocate research funds for pure math. More specifically, I'm pushing back on the idea that it isn't okay to discuss this because past math successes have been useful in other areas. People really seem to cling to this argument and it doesn't make any sense to me. Maybe, at the end of the day, after thinking about it, it will make sense to continue funding pure math or even fund it more.

I agree that having a top number theorist teach Algebra I is a waste, but it's not a waste having them teaching undergraduate abstract algebra or number theory. The bigger issue is that in many places, there is very little expectation of teaching quality. I'm not sure this is connected to the issue of competition over students. The average math undergrad isn't picking a school based on how many Fields medalists it has.

I should say that I took number theory in undergrad from a top number theorist, and he was a phenomenally good teacher. I could have taken abstract algebra from him, too, if I had timed it right. Taking the craft of teaching seriously and being a good researcher aren't mutually exclusive!


> catch them off guard and put them on the spot

No, it's the researchers who first claim their work is applicable, in their papers, in lectures, etc. I have read entire books with titles like "Applications of X" and later discover all the claimed applications were unrealistic.

Hardy was a prickish snob who claimed in that book that any math that can be applied is dull.


> that the math done for its own sake is just not as beautiful to me as math that has applications.

I think the reverse. In general, the math done for its own sake will be more beautiful.

Think of it this way: mathematicians doing math to get somewhere will accept a long path that’s ugly and/or extremely dull, as long as it gets them there; mathematicians doing math for math’s sake will prefer to walk nicer paths.

But of course, beauty is in the eye of the beholder; those who define ‘beautiful’ as ‘has direct application’ will disagree.


This is the reason I prefer the intuitionist view of mathematics vs the more common Platonic view. Something as simple as the Pythagorean theorem is true because it has found so many ways to solve practical problems; the Sumerians and Egyptians basically worshiped that formula.


I really don't think that intuitionism has that much to do with practical application; rather, it's a view about what it means for something to be "true" in the first place. That is to say, whether an intuitionist accepts a piece of mathematics is not contingent on whether it has any practical applications.

I suppose one could argue that something proven in an intuitionistic manner would have a higher probability of being practically useful (since it could e.g. be extracted into an algorithm), but IMO any such correlation is pretty weak.


This is simply wrong. There is loads of academic research born out of industrial collaborations which is absolutely focused on how useful it is. Applied research exists in academia.

On the flip side, if you think the only place you can do something "applicable" is by working as a quant... again, wrong!

If anything, the idea that research is free to disregard applicability (read: "relevance to anyone outside academia") is what's a lazy canard. :-) People who are wrestling with whether their research is applicable are struggling to figure out how to connect it to the rest of society, which is valuable. It's responsible of them to do so.


You didn't engage with the argument of the comment above you at all; you engaged with imagined arguments: that there is no academic research focused on applications, and that the only place to do something applicable is finance. The above comment said nothing of the sort. It only claimed that research did not have to be motivated by applications, and provided evidence: such research in the past has been incredibly applicable.


'If you want to work on "applicability", go get a job in industry.'


> decades of work in number theory

Centuries of work. Back to Galois, and at least all the way back to the Chinese Remainder Theorem, and to Euclid as well! Mathematicians have been fascinated by this stuff for a very, very long time.


The Chinese Remainder Theorem had explicit applications to astronomical cycles and calendrical calculations right from the start. See https://people.math.harvard.edu/~knill/crt/lib/Kangsheng.pdf

The "Euclidean" algorithm was also directly relevant to both plane geometry and astronomical calculations. Many of the relevant ideas were worked on by Plato and his contemporaries at the Athenian Academy, and then in the next generation by Eudoxus and his followers. (Unfortunately most of the precursors to the Elements have been lost.)


Fun fact, sort of, about the Chinese Remainder Theorem:

It appears to have been named for its appearance in a medieval Chinese mathematical treatise. But it always reminds me that in old Chinese texts it is frequently necessary to determine a number given its remainders modulo 10 and 12.

Why? Because that is the formal way to record dates! It has been for more than 3000 years. There is a Chinese cycle of 10 sequence markers, the "heavenly stems", and a separate cycle of 12, the "earthly branches". Because Chinese characters have no real ordering (there are ordering systems, but only specialists would bother learning or using them), the stems and branches are traditionally used when you need to give arbitrary names to things. (Examples of this survive in modern Chinese -- the two parties to a contract are 甲方 and 乙方, after the first two celestial stems 甲 and 乙 -- but modern Chinese are likely to use Western letters like A and B for their arbitrary names.)

The stems and branches together form the cycle of sixty, from 甲子 (1-1, meaning one) up to 癸亥 (10-12, meaning sixty). The system is so old that, instead of incrementing one place value at a time (which would give a cycle of 120 whose values could easily be calculated in your head), incrementing bumps both place values, so the value following 甲子 is 乙丑, 2-2, meaning two. What does 乙卯 refer to? Well, 乙 is 2 (of 10) and 卯 is 4 (of 12). A little work with the Chinese Remainder Theorem will get you there!
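(If you want to check that, here is a minimal brute-force sketch in Python; the function name and the 1-indexed stem/branch numbering are my own, following the 1-1/2-2 convention above. Note 10 and 12 are not coprime, so the textbook CRT needs its general form here and only stem/branch pairs of matching parity occur:)

    def sexagenary_position(stem, branch):
        # Position (1..60) in the cycle, given stem (1..10) and branch (1..12).
        # gcd(10, 12) = 2, so a solution exists only when stem and branch
        # agree mod 2, hence 60 valid pairs rather than 120.
        for x in range(1, 61):
            if (x - 1) % 10 == stem - 1 and (x - 1) % 12 == branch - 1:
                return x
        return None  # parities differ; no such pair occurs in the cycle

    print(sexagenary_position(2, 4))  # 乙卯 = (2, 4) -> 52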


Astronomy had no applications for centuries (unless you count astrology but do you really want to go there?). Saying that mathematics applies to it is disingenuous in the context of a discussion around the applicability of mathematics to the daily lives of an ordinary taxpayer.

Same goes for much of the theoretical work in geometry. Very little of it is applicable to daily life, with the notable exception of trigonometry and its uses in surveying and engineering.

But the results I was alluding to by mentioning Euclid were those from his work in number theory, including his namesake lemma and his proof of the infinitude of the primes, and the fundamental theorem of arithmetic. Those results had no application to daily life until the advent of modern cryptography.


It seems like you are unfamiliar with technology before the past few centuries. Beyond astrology (which was important!), people used these theoretical tools from geometry and number theory for time keeping, calendars, cartography, navigation, surveying (you mentioned), city planning, architecture, construction and analysis of machines, hydrology, water distribution, crafts such as metalworking and carpentry, accounting, optics, music, any kind of engineering involving measurements, ....

Obviously mathematicians were also excited about numbers and shapes for their own sake, and some of their activities (e.g. trying to square the circle or trisect an angle using compass/straightedge) were more like puzzles or games than engineering. But that doesn't mean the broad bulk of mathematics (even deductive proof-based mathematics) was practically worthless. Some of the most famous mathematicians of antiquity were also engineers.

Fluency with prime numbers per se (and prime factorization) is pretty useful when you are trying to do substantial amounts of (especially mental) arithmetic.

I'm not sure which "taxpayers" you are thinking about, but establishing an effective tax system and doing tax assessment and accounting is one example where having a more robust understanding of elementary arithmetic and number theory is pretty valuable. The development of the number and metrological systems and work on number theory in ancient Mesopotamia had quite a lot to do with tax accounting.


Who do you think was making the calendars around whose cycles agricultural societies were organized?

And while we may think that astrology has no value, that certainly wasn't the opinion of most premodern societies!

(Though it all tended to be a bit lumped together, as the term "natural philosopher" indicates.)


> Nobody was much convinced that decades of work in number theory was worth much of anything until applications in cryptography were discovered. We just don't know what we'll find.

There are plenty of examples of this; Bézier curves are one such.


"Bézier curves" were separately invented by de Casteljau at Citroën (1959) and Bézier at Renault (early 60s) and were explicitly developed for their application to computer-aided car modeling right from the start.


That's not the history as I understand it, but I also don't claim to be an expert.


You can read about it here, §3 https://web.archive.org/web/20170830031522id_/http://www.far...

Of course both de Casteljau and Bézier were building on other tools developed previously: most importantly, Bernstein polynomials, which were developed by Bernstein in service of a constructive proof of the theorem that every continuous function can be approximated arbitrarily well by a polynomial. You can read about that history here https://doi.org/10.1016/j.cagd.2012.03.001 https://www.researchgate.net/profile/James-Peters-3/post/Sin...
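(For the curious, a minimal sketch of de Casteljau's algorithm in Python, my own illustration rather than anything from the cited papers: it evaluates a Bézier curve by repeated linear interpolation of the control points, and the weights it implicitly builds up are exactly the Bernstein polynomials:)

    def de_casteljau(points, t):
        # Evaluate a Bezier curve at parameter t by repeatedly
        # interpolating adjacent control points until one point remains.
        pts = list(points)
        while len(pts) > 1:
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]

    # a cubic from (0,0) to (3,0) with two interior control points
    print(de_casteljau([(0, 0), (1, 2), (2, 2), (3, 0)], 0.5))  # (1.5, 1.5)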


From memory, I recall reading that there was work done by a French mathematician that didn't seem to have much applicability until the advent of 3D modeling.

I'm probably wrong here; it's mostly from memory, some 20 years ago. But having said that, I think the overarching point is still valid: it's often the case that someone does something to explore, and an application for it is discovered, sometimes posthumously.


De Casteljau and Bézier were both French mathematicians paid by different French car companies to work on systems for doing computer-aided design for cars, because of advantages vs. building physical scale models.

Bernstein was a Jewish Ukrainian mathematician who studied in Germany and later worked in Moscow, and his main research was about partial differential equations, probability, and function approximation, three "applied" topics. (Of course all of these topics also have theoretical aspects; such is the nature of mathematics.)


This assumes that the parts of number theory that ended up being useful could not have been developed after people realized you could do public key cryptography with primes.

If the work is undertaken for its own sake, there should not be a need to argue about how it will be useful in the future.


No, we could not have waited to start doing number theory until after we discovered that it's useful for cryptography. It took thousands of years to get to the point where we understood it well enough to use it. Its use would never have occurred to us if we had not discovered it beforehand. That's completely the wrong causal direction.

What completely undiscovered branch of mathematics do you think we should explore based on an immediate need that we have right now? Not so easy, is it?


What specific, useful things in cryptography would never have happened if we had not been studying number theory for thousands of years? Even if there are some examples, would we be significantly behind in useful capability if we didn't have those specific results?

It's more efficient to work backwards from the problems you have and build out the math. That's what they did with a lot of linear algebra and functional analysis when quantum mechanics came about. I am not saying discovery-based exploration would never work; I am saying it's inefficient if the goal is technological progress.


It just feels like asking which bits of a human wouldn't have been possible without having evolved for billions of years. It's an interconnected body of work that made cryptography possible at all. So ... all of it? I know it sounds like I'm copping out of the question, and maybe I am, because it's a really complicated question you're asking. I just don't know how you're imagining humanity came up with the ideas for:

- Diffie-Hellman key exchange without a deep understanding of quotient groups (and their properties, and proofs of their properties), large (co)prime numbers, and computability (a toy sketch of the exchange follows this list)

- Quotient groups and their applicability to this problem without a deep understanding of group theory, equivalence classes, isomorphisms, etc.

- Large (co)prime numbers without work by Euler, calculations of GCD, proofs of infinite primes, understanding their densities and occurrence on the number line, etc.

- Computability without tons of work by Turing, von Neumann, Church, Babbage, Gödel, etc., relying on ideas about recursion, set theory, etc.

- Ideas on recursion and set theory without work on the fundamental axioms of mathematics, Peano arithmetic, etc.

- Group theory without modular arithmetic, polynomials and their roots, combinatorics, etc.

- Polynomials and their roots without a huge body of work going back to 2000 BC

- Calculations of GCD without work by Euclid
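(To make the first item concrete, a minimal toy sketch of the exchange in Python, with a deliberately tiny prime and generator of my own choosing; real deployments use moduli of thousands of bits:)

    import random

    p, g = 23, 5                     # public parameters: a small prime and a generator mod p

    a = random.randrange(2, p - 1)   # Alice's secret exponent
    b = random.randrange(2, p - 1)   # Bob's secret exponent

    A = pow(g, a, p)                 # Alice publishes g^a mod p
    B = pow(g, b, p)                 # Bob publishes g^b mod p

    # Both sides derive the same shared secret g^(ab) mod p,
    # while an eavesdropper sees only p, g, A, and B.
    assert pow(B, a, p) == pow(A, b, p)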

Most of these generalized abstractions came about by thinking about the more specific problems: e.g., group theory only exists at all because people were thinking about equivalence classes, roots of polynomials, the Chinese remainder theorem, modular arithmetic, etc. Nobody would have thought of the "big idea" without first struggling with the smaller ideas that it ended up encompassing.

You can't just take out half of these pillars of thought and assume the rest would have just happened anyway.


I agree that it's hard to imagine an alternate history, since things happened through a mixture of pure and application-motivated work. In each example, I can see how people arrive at these notions through an application-driven mindset (transformation groups, GCD through simplifying fractions during calculations, solving polynomial equations that come up in physics calculations). Computability and complexity, in the flavor of Turing's and subsequent work, I already see as application-driven work, as they were building computing machines at the time and wanted to understand what the machines could do.

Related to this topic, I highly recommend this speech/article by von Neumann: https://www.zhangzk.net/docs/quotation/TheMathematician.pdf


An effect similar to this is what ultimately drove me away from pursuing an advanced degree in economics.

I kept seeing what were supposedly great innovations in our understanding of economic theory, which were mathematically elegant and completely unrealistic with respect to how people actually behave. What's the value in a closed-form expression that doesn't describe any real-world phenomenon?

I came to realize later that there is plenty of "good" economics research and modeling out there, but I had no desire to enter a field where models were not expected to be useful, in the sense of Box's aphorism.


> What's the value in a closed-form expression that doesn't describe any real-world phenomenon?

It keeps smart, driven, and potentially disruptive people busy chasing around closed forms while "real" power politics are kept firmly and rigorously under the control of incumbent powers.

This may be too cynical a perspective; nonetheless, I offer it as an answer to your question. I'm still working on making this funny somehow, as it hits close to why I couldn't make it through the academic-industrialized pipeline of institutional corporations (big universities) 'manufacturing' and 'selling' graduated professionals in the labor 'marketplace'.


This example feels categorically different to me, since economics is fundamentally about modeling the real world, whereas many would argue that mathematics' ability to do so is more of a convenient byproduct. This reads to me like an indictment of economics in particular, rather than a general statement about the value of the real-world applicability of research.


As a former mathematician, I completely agree. Academic math is good stuff, but you end up making too many assumptions that are hard to reconcile with a working system.

Personally, I stopped caring much about beauty because doing work guided by some beauty heuristics didn't make me happy. Doing work that is useful to many people does make me happy; and there ends up being beauty in it somehow.


Every year I see tons of CS students captivated by the beauty of algorithms in theoretical CS. I lose track of all the bright-eyed undergraduates saying they love thinking about algorithms and would ideally like to spend their summer doing research on these sorts of problems. More often than not, I end up telling them they should focus their efforts on the systems side of things and chat with an OS or DB professor rather than a prof in TCS, but very few of them actually take this advice.


I’m curious: why would it be better to focus on the systems side of things instead of algorithms?


Because there are real problems to work on that benefit from the sort of problem solving with algorithms that students are hoping to do.


Software algorithms need to be written only once to be used by anyone (digital copies and all that).

The systems side doesn’t have this problematic quality.


> I think the best and most beautiful mathematics is driven directly by applications, and made better for the pressure to apply to a real problem (not a problem that just mathematicians care about).

That's the thing: in my opinion, when you study math, you start with applications and work your way up to solving complex applied problems. Then those problems start to become less interesting, partly because they become easy for you and partly because most of the work is just mechanical and technical. Not really interesting.

But what you like in math is not the maths themselves; it's being challenged by problems. So you start to work on abstract problems until they become easy, then you go one more step up in abstraction to find challenging problems, etc.

So basically, I find it somewhat natural to be driven always toward more abstract theory, with maybe the hope that one day it will be applied. You drift away from solving overly 'technical' problems, in the sense of 'this is just an application of XYZ; I don't want to implement it, as it will take me a lot of time, while I can find pleasure again in discovering new hard things'.

I'm also on the applied side, but I understand the attraction of the abstract side, and sometimes I allow myself to work with just a bit more abstraction.

So with enough luck, some day a mathematician with the right level of abstraction at that moment will discover an applied problem that interests you and dedicate some time to it, maybe a lot of time, maybe even his/her whole career, making it more abstract, elegant, and practical.


"The people who do algebraic geometry in applied settings are doing math that is basically unrelated to algebraic geometry as practiced by theoretical mathematicians" seems like it is just a gatekeeping statement. It is basically related: it is algebraic geometry.


There are two different things people mean when they say "mathematics". (Probably lots more, but I'll focus on two.) There's mathematics that is invented, and then there's mathematics that is discovered. The "math that is discovered" was already there. It must be true. It was true before our universe existed, and would be true in every other conceivable universe. It is very difficult to believe that any of this math that we discover could possibly not be useful. It's what reality is made out of. How could a deeper understanding of reality not be useful? And I assert that it is simply not possible to know whether a given discovery would be useful before you discover it; how could you possibly evaluate a discovery's usefulness before you even know what it is?

The "math that is invented" is attempting to describe this more fundamental math. These are the terms we come up with and write down in papers, the symbols we use, etc. Sometimes we discover that The X Theorem and The Y Theorem are actually describing the same fundamental math in different ways. Clearly this descriptive math can vary in usefulness. How simply and clearly does it describe the underlying "ideal" math? Does thinking of it this way lend itself to immediate application to make human lives better? Does it help us discover even more math?

Almost everyone who criticizes math is missing this distinction: they're criticizing the writing, the models, even the institution of math. Some of those criticisms have merit; most don't. Either way, they tend to overlook the fundamental math that was already true, already there long before our universe. You simply cannot identify whether or not a given mathematician is "wasting their time", because you cannot know what underlying fundamental math they might be about to discover, and what its uses might be.

If I were in charge of giving out grants for general-purpose math research, I would not attempt to quantify "usefulness" at all. The main metric I would try to optimize for would be breadth. I'd prioritize the math that seems to be exploring areas that haven't already been thoroughly explored, or making connections between areas of "described math" that currently seem only distantly related. In our current institutions of math, there are forces that push toward increased breadth and forces that push toward conformity (decreased breadth). It might be enough to simply work against those conformity forces -- demanding "usefulness" is a conformity force.


Good or interesting mathematics, imho, is about finding the exceptions: why something works for a particular case when otherwise it does not, or the converse. An example is why a general quintic cannot be solved with radicals, or finding non-trivial quintics that can be solved with radicals.


> An example is why a general quintic cannot be solved with radicals

A general quintic can be solved if you allow the "Bring radical", a function f(a) that gives the (unique) real root of x^5 + x + a = 0. When I first heard this, it sounded like a total cop-out, almost like saying "you can find the answer if you already know the answer". But what convinced me was the argument that the Bring radical is not more difficult to compute than the quintic root x^(1/5). So if you care about computability, then "solving" a quintic using roots + the Bring radical is equally as useful as solving it using roots alone.
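(To make the computability point concrete, a minimal bisection sketch in Python, my own illustration: since t -> t^5 + t is strictly increasing, the root is unique and bisection converges, at essentially the same cost as extracting a fifth root numerically:)

    def bring_radical(a, tol=1e-12):
        # Unique real root of t^5 + t + a = 0. The derivative 5t^4 + 1
        # is always positive, so f is increasing and the root is unique.
        f = lambda t: t**5 + t + a
        lo, hi = -1 - abs(a), 1 + abs(a)  # bracket: f(lo) < 0 < f(hi)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    print(bring_radical(1))  # ~ -0.754878, the real root of t^5 + t + 1 = 0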


Good Mathematics doesn't use the axiom of choice.


In undergrad my friends and I made a shirt:

   Pro-axiom of choice
   Because every vector space deserves a basis


Something bothers me about some of the special cases, but I can't quite describe it.


I agree; the axiom of choice disincentivizes us from striving for more elegant solutions.

That said, the axiom of choice is always available in Gödel's sandbox of "constructible sets", and by "Shoenfield absoluteness", some results can escape the sandbox.

For instance, the result that every vector space has a basis is equivalent to the axiom of choice. We have it in Gödel's sandbox and we don't have it outside, if we prefer to not use the axiom of choice in our meta theory. But every purely number-theoretic consequence of that result flows from the sandbox to the ambient universe.

In this sense, the axiom of choice can be regarded as a useful fiction. Not necessarily true in a literal sense, but true enough for many purposes.

I gave a 37c3 talk on this topic: https://www.speicherleck.de/iblech/stuff/37c3-axiom-of-choic...


The axiom of choice should be considered supernatural.


I was sad to see how little discussion there was of goodness itself. Seems like a key problem to address for any instrument.



