I appreciate the point the article is trying to make, but I think this example is shoehorned in. You can misuse math without being "seduced by the beauty" of it.
I do agree with the author's example in physics. I have seen a lot of beautiful math in physics; look at Lie algebras, monstrous moonshine and representation theory. Quite a few modern physics PhD dissertations are actually just math dissertations, and the same holds for a significant amount of new research in the field.
On the other hand, I haven't seen that in finance. Highly exotic (read: "beautiful") mathematics is extremely rarely used in financial engineering. Pricing derivatives is decidedly mundane work compared to the brain-meltingly abstract mathematics deployed in high-energy particle physics research. That's not to say it isn't difficult - it is! But that difficulty is better described by the word "complex" than "beautiful", and financial engineering is certainly complex. In that case we should be talking about how getting mired in complexity can be bad for accountability and transparency.
This is a different thesis than the one presented by the author. Being led astray because you've built extremely brittle financial products using layers of complicated math is not the same as being more preoccupied with the elegance of a grand unifying theory than its agreement with reality.
But hey, maybe I'm just being pedantic. You can misuse mathematics in a lot of ways.
Most of the evidence points to economists abusing assumptions, which is hardly a mathematics problem - most assumptions can lead to elegant math. The biggest problem in modern economics as practiced is the tacit assumption that, because practically all people would like to be able to consume more, the system should favour consumers over savers. That is a non sequitur, so it can't be pinned on mathematics.
They may as well call the modern approach to interest rates the "Global War on Savers". Anyone attempting to save without moving into stocks & other assets will be wiped out long term.
The risk from using maths is irrelevant compared to the damage done by assuming a bad value structure - and there are so many forces influencing the value structure (particularly political ones) that I don't see how mathematical beauty could be a problem for economics as a discipline.
I have difficulty pinning down what the term "economics" means, but usually my best intuition is to regard it as the modelling of economic phenomena rather than engineering an economy from theory.
Agreed. Economics is about the real world. Therefore, it has to be empirical. That means that axiomatically deriving conclusions from assumptions is not legitimate in economics. Still, in the context of empirical knowledge, we have only two usable methods: the scientific or the historical one. Economics cannot be validated by testing experimentally. Hence, economics cannot possibly rest on a sound method. Therefore, economics is fundamentally not a legitimate academic discipline.
We know that hyperinflation is a way to utterly screw over an economy. The economy can and will fail, and in the best case it's the equivalent of dissolving the currency and going bankrupt.
The most benign form of it, which may not technically count, would involve massive growth as well; the devaluation wouldn't be a pathology but a reflection that, yes, a well-honed spear, flint knives, a basket, and a few carved bone pieces of jewelry may have been respectable wealth for nomadic hunter-gatherers, but they aren't really worth anything compared to even the contents of a jalopy in the Great Depression.
Just a steel knife or pot would be a grand artifact, because it performs better than anything else they could find.
That their old currency isn't worth anything is reflective of the fact that past production has been rendered obsolete and the old goods are worth little.
No, there isn't.
> Applying absolute standards of rigor is ironically also unscientific.
The rules governing science are not determined by science itself. Science experimentally tests propositions about facts. Rules about science are propositions about other propositions. Hence, science has absolutely nothing to say about its own rules. Therefore, propositions about the scientific method are necessarily unscientific.
I haven't read the Krugman article cited but the context makes me think of all the stuff around perfect competition and efficient markets, which is beautiful imo. Unfortunately it doesn't describe reality very well.
Not sure if they stick to mundane math, but Renaissance Technologies did pretty well hiring mathematicians and theoretical physicists.
I've personally been getting a lot of satisfaction from learning Haskell and seeing how the functional programming community is taking ideas from abstract mathematics, like category theory, and is turning them into practical ways of thinking about programming.
For anyone interested in Haskell, I recommend starting with http://learnyouahaskell.com/ (very friendly, intuitive and light) then doing these exercises: https://github.com/data61/fp-course.
Also, in terms of learning Haskell and its cousins, my advice is to start building stuff right away. It's easy to read about this stuff almost endlessly and never do anything productive with any of it. After learning Haskell, I got into Elm and then later PureScript. PureScript has really opened the door for me regarding getting some of these concepts out into the real world. It's really fun and feels rewarding to actually take advantage of some of the constructs that seemed pretty alien and abstract for a long time.
I appreciate the effort to extend the story into CS, but I wonder if you have to be familiar with the particular work he's alluding to. The charge (as leveled against theoretical physics) is not that some people do pure mathematical work for the sake of beauty. The charge is that people who are supposed to be applying mathematics to reality are instead prioritizing mathematics and neglecting reality. To extend the analogy to CS, he must be talking about researchers supposedly trying to model real systems but instead just chasing beautiful math, but he isn't specific. Is it obvious to people in the know who or what he's talking about?
For practical programmers, I think the problem is the reverse of being "lost in math." Practical programmers use extremely general theoretical results because they don't want to do math, not because the math is more beautiful. If they applied the information they know about their particular problem, they could get more useful mathematical results, but since they want to stay as far away from (doing) theory as possible, they use whatever facts they remember from class, which are ironically the most purely theoretical ideas because those are the simplest and easiest to remember.
Arguing that "SAT is NP-complete and therefore useless" is not misusing complexity theory, it's misunderstanding complexity theory—not misunderstanding some deep result or non-trivial consequence of complexity theory, but misunderstanding the fundamentals that are covered in the first lecture of the first class on the topic.
We did learn the definitions but "worst case" was never highlighted as such, it was naturally assumed. The closest we got was the discussion that constant factors may matter for small problems and O analysis masks those differences.
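To make the constant-factor point concrete, here's a toy Python sketch (my own illustration, not something from the course or the article) that counts comparisons: on an already-sorted input, O(n^2) insertion sort makes fewer comparisons than O(n log n) merge sort, exactly the kind of difference that big-O analysis masks.

```python
def insertion_sort(a):
    # O(n^2) worst case, but only n - 1 comparisons on sorted input.
    a, comps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comps

def merge_sort(a):
    # O(n log n) on every input, but with a larger constant on easy inputs.
    if len(a) <= 1:
        return list(a), 0
    left, cl = merge_sort(a[: len(a) // 2])
    right, cr = merge_sort(a[len(a) // 2 :])
    merged, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:], comps + cl + cr

data = list(range(16))          # already sorted
print(insertion_sort(data)[1])  # 15 comparisons
print(merge_sort(data)[1])      # 32 comparisons
```

Both are O-optimal in their class, yet on this small, structured input the "worse" algorithm wins by a factor of two.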
I mean, if your boss is demanding a solution that's both accurate and fast on every input, you can tell your boss that showing P=NP is probably out of scope of your project. But that's the start of a discussion about how to step back from that ideal, not the end of a discussion about the potential product.
In my opinion, theoretical physics is about explaining observable phenomena in a falsifiable way (I am with Popper here). Otherwise an omnipotent god would be an equally good explanation for a given phenomenon.
Think of "problems" as abstract primitives: sorting, search, graph operations, optimization, etc. To give a concrete example, let's take a particular search problem: boolean satisfiability (SAT). The input is boolean formulas, and the output is "an assignment to the variables of this formula that make it TRUE."
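To ground the definitions, here's a minimal Python sketch (my own toy encoding, not a standard solver format): a formula as a list of clauses of signed integers, solved by brute force. Trying all assignments takes time exponential in the number of variables, which is exactly why "efficiently solvable" is the interesting question.

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    # Each clause is a list of ints: i means "x_i is true", -i means "x_i is false".
    # A clause is satisfied if any of its literals is; the formula if every clause is.
    # Tries all 2**num_vars assignments, so runtime is exponential in num_vars.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # a satisfying assignment
    return None                # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR NOT x2): satisfiable by making exactly one variable true
print(brute_force_sat([[1, 2], [-1, -2]], 2))  # → {1: False, 2: True}
```

Real SAT solvers replace the exhaustive loop with clever search and learning, which is the "works well in practice" phenomenon discussed below.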
Some problems are "NP-complete". The definition of NP isn't important for our discussion here. What is important is that NP-completeness or lack thereof is indeed a "beautiful" way to classify problems. Further, we expect NP-complete problems to be impossible to solve efficiently on every input. This is what the article means by "worst-case" or "overly pessimistic" theories.
SAT is NP-complete, so "classical complexity" would predict that SAT cannot be efficiently solved with computers. But often, SAT can be solved efficiently "in practice" using simple heuristics. So Moshe would say that classical complexity is simply wrong; it doesn't describe the "real-world" behavior of actually trying to solve SAT. While difficult-to-process formulas may exist, we "just happen" to only encounter easy-to-process formulas.
So, to summarize: the article criticizes researchers who focus only on worst-case complexity. While the theory is beautiful, we can point to many problems for which it does not accurately predict performance.
I think this criticism is a little strange, because most complexity theorists also work on "average-case" complexity: the study of when "typical" inputs are difficult vs. easy to solve. This is mentioned at the very end of the article. Any working complexity theorist would immediately list these problems with worst-case complexity, and explain that it is an imperfect and primitive theory compared to how we would really want to understand when and how these problems are difficult. There is a great deal of work trying to understand what it is about each formula that makes it difficult or easy to process, and why.
The issue is, we are nowhere near understanding even worst-case complexity. So theoretical research often toggles back and forth between the two settings, trying to make progress overall.
Computational complexity theory is a fundamental mathematical field that will only occasionally produce enough understanding to impact practice. For an example, see:
I still don't get it, because it's okay for some people to be working on purely theoretical problems, motivated by mathematical curiosity and aesthetics. Is there a lack of people working on more concrete problems, bridging the gap between theory and practice? Are there outstanding problems arising from practice that are ignored because supposed "applied" researchers don't actually care about applications?
> I think this criticism is a little strange, because most complexity theorists also work on "average-case" complexity
That sounds almost as theoretical as worst case to me. I would expect "applied" complexity theory to provide a theoretical framework for how a practitioner can add information that they know about how their problem differs from the aesthetically ideal problems that arise in theory. Like, my factory floor is not a frictionless plane, aha, here's how you measure a "coefficient of friction," and here's a new equation where you can see how the coefficient of friction affects the results. Or, my dataset isn't a uniformly random blob of bits, so is there a statistical property I can measure that lets me estimate the probability of the working memory of my algorithm exceeding 2.4 times the input size?
Finding ways to characterize hardness and easiness in terms of underlying structure to those input distributions is exactly what average-case complexity tries to accomplish. The "holy grail" is to classify problems over efficient distributions of inputs. That is, first make the fairly reasonable assumption that the family of formulas you actually run (say) SAT-solvers on came from an efficient computation ("nature", or people coming up with problems). Then, identify properties of those distributions that you can measure and "blame" intractability on.
For example, there's a huge amount of work on these types of parameters for SAT instances. See: https://people.csail.mit.edu/rrw/backdoors.pdf
This type of work could, eventually, directly inform practice.
And of course I think it is okay for some researchers to work on purely theoretical or worst-case problems. Insights from worst-case complexity are often useful in solving the more difficult problems of average-case complexity; it is useful to know where and how the theories diverge.
Reconsidering, I guess I just don't understand the article at all. Maybe he's arguing that TCS doesn't have this problem because we have a hierarchy of theories, where worst-case complexity inspires average-case complexity inspires parameterized average-case complexity which could someday be used in the real world. Over very long timescales (think, centuries) I think complexity theory will produce practical insights.
Are there any problems taken directly from practice that complexity researchers are working on and using as context to define shortcomings in existing theory? Maybe that's what he's talking about, expanding on theory that "could, eventually, directly inform practice" and calling that "applied research" instead of tackling practical problems directly.
Further from "core complexity," a lot of the work on machine learning primitives is also directly informed by instances from practice. See the "manifold hypothesis":
In the FP communities, I do sometimes see people overoptimize for TCO. Your data structure is never going to be more than 10-100 deep. Don't worry about it; just write the most legible recursive algorithm, not the most performant one.
My graph traversals are >100 deep.
My recursive calculations are >100 deep.
Dynamic programming wouldn't be relevant if non-tail recursion was always good enough in practice.
I highly disagree regarding algorithms. No, you don't need to use the most modern matrix multiplication algorithm ever, but there are lots of situations (at least at large companies, or for people working with large amounts of data) where you do need to be aware of computational complexity.
What? Try computing the 1000th fibonacci number.
Her criticism has gotten more attention than its merits justify. No one has argued that beauty takes precedence over truth.
> In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.
> Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.
> What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.
Have you read the book or any of her posts about the ideas in the book? There are in fact a lot of people who do claim that research programs and research dollars should be prioritized because of ideas like naturalness or beauty, even when decades of increasingly expensive and time-consuming work has led to no support for the natural or beautiful hypothesis.
The stuff she writes on her blog is so stupid it's beyond belief. First she criticizes theorists (who write far more reasonable and far less philosophical articles than she does) for using advanced mathematics and exploring various possibilities; a minute later she criticizes experimenters for trying to build a better particle collider. She's a person who would gladly see the resources flow to less skilled wannabe-physicists with no real knowledge, because she's one of them.
Here's Paul Dirac, writing in 1963:
"it is more important to have beauty in one’s equations than to have them fit experiment. [...] It seems that if one is working from the point of view of getting beauty in one’s equations, and if one has really a sound insight, one is on a sure line of progress."
Here's John Schwarz, one of the pioneers of string theory, explaining why he and a collaborator kept working on it in the early 1970s after quantum chromodynamics turned out to be a better way of dealing with the strong nuclear force and before it emerged as a promising approach to quantum gravity:
"We felt strongly that string theory was too beautiful a mathematical structure to be completely irrelevant to nature."
The idea that string theory is "so beautiful it must be right" is, I think, mostly a strawman -- you hear critics of string theory taking it down, rather than advocates of string theory talking it up -- but the idea that beauty is a reliable guide to truth in physics isn't so strawy.
I feel the other way around: Applied mathematicians and physicists led the pure mathematicians astray. But I say this for a different reason. I feel that physics has become convoluted with a plethora of theories where this same beauty is interpreted wildly differently by different people. In other words, people all have different ways of thinking, different notions of beauty, and ultimately, this manifests into different (competing) notions in physics. These may even be equivalent notions and they may embody the desired perspective on beauty, but instead of consolidation there is extended differentiation.
My point is that physics has become ugly exactly because of physicists' ignorance towards mathematical elegance in favour of personal beauty. I don't think physics can become consolidated without exactly a stark appreciation for elegance.
IMO this is why category theory took so long to start appearing in physics: The physicists are caught up in their own idea of beauty rather than the mathematical tradition of finding the minimal sufficient proofs and theories (which I call elegance).
Regardless, you can only become lost in math if you have bad premises. Mathematics is a relative subject, abstracting the arbitrariness of reality into axioms. And if the axioms do not hold, the theory is bunk. It will always be the case that more granularity is required in real-life situations. Mathematics is precise and sound; it's not gospel.
You assume the axioms hold. That is what an axiom is: an assumption.
> Mathematics is precise and sound; it's not gospel.
We do think that mathematics is precise. We don't really know if it is sound (unless I am missing something). For example, what is a set? It is not a "collection of things". Rather, it is an object in some mathematical setting.
The topic of whether mathematics is "correct" is something else. We can all go out and build a DIY logical machine from scratch and see for ourselves that the mathematics we use give the results that we expect. Applied mathematics in this sense concerns itself with mathematics that suitably describe real situations.
Beauty is a subjective emotion; it's not so easy to quantify. You may not find mathematics beautiful due to its clarity and insight, but many others do, precisely because of that.
And it's not just math. Beautiful paintings, architecture, legal arguments, religions, etc., are often believed to be beautiful because of the clarity and insight they provide as well.
I think Computer Science used to be more aligned with math, in the sense that the mathematics courses most people took in school were overwhelmingly symbol manipulation. Just like CS.
Now I think things are not only more data-driven; any sort of "understanding" might be prioritized away forever, unless some adversarial network requires it. :)
Ironic that intelligent mathematics types get caught up in what could be analogous to socially hurtful stereotypes.
I am going to follow the author.
Dirac did believe his equation had to be right because it was so elegant; however, it turns out we now interpret it differently.