Lost in Math? (acm.org)
143 points by ernesto95 on Feb 27, 2019 | 68 comments



> About 10 years ago, in the wake of the 2008 financial crisis, the Nobel Laureate economist Paul Krugman made the same point with respect to economics and mathematics in an influential article titled "How Did Economists Get It So Wrong?" His main answer was: mistaking mathematical beauty for truth. "As I see it," wrote Krugman, "the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth."

I appreciate the point the article is trying to make, but I think this example is shoehorned in. You can misuse math without it being because you're "seduced by the beauty" of it.

I do agree with the author's example in physics. I have seen a lot of beautiful math in physics; look at Lie algebras, monstrous moonshine and representation theory. Quite a few modern physics PhD dissertations are really just math dissertations, and the same holds for a significant amount of new research in the field.

On the other hand I haven't seen that in finance. Highly exotic (read: "beautiful") mathematics is extremely rarely used in financial engineering. Pricing derivatives is decidedly mundane work compared to the brain-meltingly abstract mathematics deployed in high energy particle physics research. That's not to say it isn't difficult - it is difficult! But that difficulty is better described by the word "complex" than "beautiful", and financial engineering is certainly complex. So we should be talking about how getting mired in complexity can be bad for accountability and transparency.

This is a different thesis than the one presented by the author. Being led astray because you've built extremely brittle financial products using layers of complicated math is not the same as being more preoccupied with the elegance of a grand unifying theory than its agreement with reality.

But hey, maybe I'm just being pedantic. You can misuse mathematics in a lot of ways.


It is possible to be too critical of a magazine article, but it can't be stated enough that the evidence that economists are abusing maths is weak.

Most of the evidence points to the economists abusing assumptions, which is hardly a mathematics problem. Most assumptions can lead to elegant math. The biggest problem in modern economics as practiced is the tacit assumption that, because practically all people would like to be able to consume more, the system should favour consumers over savers. That is a logical non sequitur, so it can't be pinned on mathematics.

They may as well call the modern approach to interest rates the "Global War on Savers". Anyone attempting to save without moving into stocks & other assets will be wiped out long term.

The risk from using maths is irrelevant compared to the damage done by assuming a bad value structure - and there are so many forces influencing the value structure (particularly political ones) that I don't see how mathematical beauty could be a problem for economics as a discipline.


Exactly. Mathematics is the study of which statements follow from which assumptions.

I have difficulty pinning down what the term "economics" means, but usually my best intuition is to regard it as the modelling of economic phenomena rather than engineering an economy from theory.


> Most of the evidence points to the economists abusing assumptions, which is hardly a mathematics problem.

Agreed. Economics is about the real world. Therefore, it has to be empirical. That means that axiomatically deriving conclusions from assumptions is not legitimate in economics. Still, in the context of empirical knowledge, we have only two usable methods: the scientific or the historical one. Economics cannot be validated by testing experimentally. Hence, economics cannot possibly rest on a sound method. Therefore, economics is fundamentally not a legitimate academic discipline.


There is a difference between non-repeatable in same state and non-scientific. Applying absolute standards of rigor is ironically also unscientific.

We know that hyperinflation is a way to screw over an economy utterly. It can and will fail and in the best case be the equivalent of dissolving the currency and going bankrupt.

The most benign form of it, one that may not technically count, would involve massive growth as well: the devaluation wouldn't be a pathology but a reflection that, yes, a well-honed spear, flint knives, a basket, and a few carved bone pieces of jewelry may have been respectable wealth for nomadic hunter-gatherers, but aren't really worth anything compared to even the contents of a jalopy in the Great Depression.

Even just a steel knife or pot would be a grand artifact, because it performs better than anything else they could find.

That their old currency isn't worth anything is reflective of the fact that past production has been rendered obsolete and the old goods are worth little.


> There is a difference between non-repeatable in same state and non-scientific.

No, there isn't.

> Applying absolute standards of rigor is ironically also unscientific.

The rules governing science are not determined by science itself. Science experimentally tests propositions about facts. Rules about science are propositions about other propositions. Hence, science has absolutely nothing to say about its own rules. Therefore, propositions about the scientific method are necessarily unscientific.


I don't know - pricing options using Black-Scholes uses assumptions (normal bell curves, I think) that aren't exactly true, and LTCM, for instance, went under showing that. I think the big difference between economics and physics is that maths is used in finance as 'credibility': the mortals just assume it's correct because the wizards say it is, and crank the dial up to 11 (sub-prime affected a lot of normal people). In physics, the wizards are just talking to other wizards and the mortals don't even enter the discussion (honestly, the LHC is cool and all, but other than a couple thousand physicists, no one would notice if it stopped working).
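
For reference (my addition, not part of the original point): the textbook Black-Scholes call price rests on exactly that assumption, namely normally distributed log-returns with constant volatility. Roughly, with S_0 the spot price, K the strike, r the risk-free rate, sigma the volatility, T the time to expiry and N the standard normal CDF:

    C = S_0 \, N(d_1) - K e^{-rT} N(d_2), \qquad
    d_1 = \frac{\ln(S_0/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad
    d_2 = d_1 - \sigma\sqrt{T}

Fat tails and volatility that changes over time are precisely what violate these assumptions in practice.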


I think Krugman was talking about the economics profession rather than the finance profession when he said that. My understanding is that the DSGE models many economics grad students use do have some sophisticated math involved.


Maths doesn't have to be complex to be beautiful.

I haven't read the Krugman article cited but the context makes me think of all the stuff around perfect competition and efficient markets, which is beautiful imo. Unfortunately it doesn't describe reality very well.


Agreed. There's a huge difference between mathematical beauty in theoretical physics and in economics, the latter not really being on the radar in modern math. The author is conflating mathematical beauty with the "fancy math" effect of being able to easily publish papers in fields that aren't very mathematical just by having equations as part of the analysis. It's not like economists have found some amazingly beautiful theory (by modern math standards), they just use math to promote/legitimize their work.


I agree totally. A lot of people tried to bury the 2008/2009 crisis in philosophy, whereas there is a simple "model" to describe what happened: bad debt.


> On the other hand I haven't seen that in finance. Highly exotic (read: "beautiful") mathematics is extremely rarely used in financial engineering. Pricing derivatives is decidedly mundane work compared to the brain-meltingly abstract mathematics deployed in high energy particle physics research.

Not sure if they stick to mundane math, but Renaissance Technologies did pretty well hiring mathematicians and theoretical physicists.


Related: Dijkstra's comments on mathematics and CS - http://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EW....

I've personally been getting a lot of satisfaction from learning Haskell and seeing how the functional programming community is taking ideas from abstract mathematics, like category theory, and is turning them into practical ways of thinking about programming.


Me too. I listened to a podcast recently where they were talking about things like "once you know what a monoid is, you start seeing them everywhere". I've tried to express these benefits to coworkers recently. Leveraging ideas from math lets you take advantage of many decades of research, and provides a structural foundation that's substantially more robust than things you might find in the Gang of Four book or other "design pattern" resources. I think there's still a lot of opportunity to bring these ideas to the masses in the way that Evan Czaplicki, the author of Elm, is trying to do. The challenge seems to be finding the right level of abstraction: maximizing the benefit while minimizing the prerequisite knowledge you require people to take on in order to participate.
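
To make "monoids are everywhere" concrete, here is a minimal Haskell sketch (my own toy example, not from the podcast): lists, numeric sums, and booleans-under-OR all share the same two operations, so one generic fold covers all of them.

    import Data.Monoid (Sum (..), Any (..))

    -- Lists, sums, and Bools-under-OR are all monoids: an associative
    -- combine (<>) plus an identity (mempty), so foldMap works uniformly.
    totalLength :: [String] -> Int
    totalLength = getSum . foldMap (Sum . length)

    anyEmpty :: [String] -> Bool
    anyEmpty = getAny . foldMap (Any . null)

    main :: IO ()
    main = do
      print (totalLength ["lost", "in", "math"])    -- 10
      print (anyEmpty ["lost", "in", "math"])       -- False
      print (mconcat [[1, 2], [3], [4, 5 :: Int]])  -- [1,2,3,4,5]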


I think it depends on what you are doing. If you are just doing API plumbing, the idea of introducing more rigor into the process may seem somewhat absurd. If you are doing real programming, however, Haskell allows you to elegantly structure and think about a problem. Certainly for parsing applications Haskell is a no-brainer; Pandoc is written in Haskell, for example. People are saying Rust is going to be the chosen one that brings it all together.

For anyone interested in Haskell, I recommend starting with http://learnyouahaskell.com/ (very friendly, intuitive and light) then doing these exercises: https://github.com/data61/fp-course.


I'm mostly in agreement with you with the caveat that I've seen lots of "throwaway code" turn into production code that folks end up depending on. These days, if something seems like it has even a remote chance of being adopted into a production system, I'll try to make sure that it's on a solid foundation.

Also, in terms of learning Haskell and its cousins, my advice is to start building stuff right away. It's easy to read about this stuff almost endlessly and never do anything productive with any of it. After learning Haskell, I got into Elm and then later PureScript. PureScript has really opened the door for me regarding getting some of these concepts out into the real world. It's really fun and feels rewarding to actually take advantage of some of the constructs that seemed pretty alien and abstract for a long time.


It is actually quite important to read up on Haskell if you don't have a background in modern FP. It isn't like other languages, where you just see what's different from what you already know. There's a lot of conceptual groundwork that helps you understand what's going on. I actually started by jumping in myself, so I just wrote programs that were quite bad in Haskell terms. I usually recommend jumping into a language right away too, but it really helps to grasp some Haskell concepts at a conceptual level first, which is hard to do from just jumping into the code. If one wants to jump right in, they can just do that FP course, which is all exercises, and skip the book, but I assure you that most noobs will be completely lost. It's like starting someone off in calculus with derivatives, completely skipping over limits and the geometrical underpinnings.


I should clarify that by "jumping in", I'm suggesting getting to a structural foundation that includes functors, applicative functors, monads, and their ilk, and then moving into writing code. I say this because I spent a few years learning this stuff without doing anything remotely practical, and I think that's too long.


Isn't the book you linked a bit dated?


LYAH is a bit dated, but since it focuses a lot more on conceptual stuff, instead of being directly pragmatic, it has aged a lot better than books like "Real World Haskell".


The main concepts are presented well, but people have complained about it being dated. What in particular do you find outdated?


I saw someone confused on haskell-cafe yesterday because it doesn't cover Applicative so their Monad instance was invalid.
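
For context: since GHC 7.10 (the Applicative-Monad Proposal), Applicative is a superclass of Monad, so LYAH-era code that defines only a Monad instance no longer compiles. A minimal sketch of the boilerplate now required (Box is just an illustrative toy type of mine):

    import Control.Monad (ap)

    newtype Box a = Box a deriving Show

    instance Functor Box where
      fmap f (Box a) = Box (f a)

    -- Mandatory since GHC 7.10: a Monad must also be Applicative (and Functor).
    instance Applicative Box where
      pure = Box
      (<*>) = ap          -- reuse the Monad instance below

    instance Monad Box where
      Box a >>= f = f a

    main :: IO ()
    main = print (Box 2 >>= \x -> Box (x * 10 :: Int))   -- Box 20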


Oof, that is bad. I admit that I only used it to learn the basic concepts, then just did exercises and read the Prelude documentation (which is fantastic). Thanks for letting me know.


What's the name of the podcast?


The Haskell Cast. This is the episode I was referring to. It's pretty good.

https://www.youtube.com/watch?v=O3EjNRgypXg


"practical" is highly debated.


> But complexity theory aims at describing the performance of A over the space of all problem instances and it does so by abstracting away from individual problem instances.

I appreciate the effort to extend the story into CS, but I wonder if you have to be familiar with the particular work he's alluding to. The charge (as leveled against theoretical physics) is not that some people do pure mathematical work for the sake of beauty. The charge is that people who are supposed to be applying mathematics to reality are instead prioritizing mathematics and neglecting reality. To extend the analogy to CS, he must be talking about researchers supposedly trying to model real systems but instead just chasing beautiful math, but he isn't specific. Is it obvious to people in the know who or what he's talking about?

For practical programmers, I think the problem is the reverse of being "lost in math." Practical programmers use extremely general theoretical results because they don't want to do math, not because the math is more beautiful. If they applied the information they know about their particular problem, they could get more useful mathematical results, but since they want to stay as far away from (doing) theory as possible, they use whatever facts they remember from class, which are ironically the most purely theoretical ideas because those are the simplest and easiest to remember.


SAT is NP-Complete. In principle, this means that SAT solvers don't scale. If we stopped here then we would never have developed symbolic execution. It turns out that SAT and SMT solvers do scale for lots of real world inputs. Cook's proof is amazingly elegant and powerful but fails to inform real development.


SAT solvers only scale sometimes. Complexity theory is a starting point for understanding why this "sometimes" is inevitable, but it never even implies that NP-complete problems are never tractable. Complexity theory gives us an understanding of how powerful and expressive SAT is which very much does inform real development.

Arguing that "SAT is NP-complete and therefore useless" is not misusing complexity theory, it's misunderstanding complexity theory—not misunderstanding some deep result or non-trivial consequence of complexity theory, but misunderstanding the fundamentals that are covered in the first lecture of the first class on the topic.


It's common. In the first lecture, my lecturer motivated why we learn about this by saying: imagine your boss proposes you compute <whatever>; then you - informed by this lecture - will recognize that it belongs to an infeasible complexity class and can tell your boss it won't be possible.

We did learn the definitions, but "worst case" was never highlighted as such; it was tacitly assumed. The closest we got was a discussion of how constant factors may matter for small problems and how big-O analysis masks those differences.


> tell your boss it won't be possible

I mean, if your boss is demanding a solution that's both accurate and fast on every input, you can tell your boss that showing P=NP is probably out of scope of your project. But that's the start of a discussion about how to step back from that ideal, not the end of a discussion about the potential product.


This has been a common point of complaint in Physics since Einstein, and probably before. The idea that you can come up with ideas about how the universe works in the absence of experimentation doesn't sit right with many scientists. There are many scientists, like Edward Witten, that are working on things that are purely mathematical at the moment, and to some it seems like an inbred mathematical fantasy. In their defense, this is why what they do is called theoretical physics.


>In their defense, this is why what they do is called theoretical physics.

In my opinion, theoretical physics is about explaining observable phenomena in a falsifiable way (I am with Popper here). Otherwise an omnipotent god would be an equally good explanation for a given phenomenon.


I consider their role to be exploring mathematical models of phenomena that may or may not exist, so that when the regular physicists find something new, there's some mathematical precedent they can use. Physics is about explaining "observable phenomena in a falsifiable way"; theoretical physics is broader than that. I think they both have their place. I don't know why people put them at odds with each other; I thought they have been shown to have a long and fruitful relationship (the experimentalists and the mathematicians). The relationship between Faraday and Maxwell should be the shining example of how the two sides of understanding nature balance each other.


"the particular work" he is referencing is computational complexity theory; a long-running attempt to mathematically characterize which problems can be efficiently solved with computers and which cannot. I work in complexity theory, and this does make the objects of his criticism obvious to me. I'll try to explain below.

Think of "problems" as abstract primitives: sorting, search, graph operations, optimization, etc. To give a concrete example, let's take a particular search problem: boolean satisfiability (SAT). The input is boolean formulas, and the output is "an assignment to the variables of this formula that make it TRUE."

Some problems are "NP-complete". The definition of NP isn't important for our discussion here. What is important is that NP-completeness or lack thereof is indeed a "beautiful" way to classify problems. Further, we expect NP-complete problems to be impossible to solve efficiently on every input. This is what the article means by "worst-case" or "overly pessimistic" theories.

SAT is NP-complete, so "classical complexity" would predict that SAT cannot be efficiently solved with computers. But often, SAT can be solved efficiently "in practice" using simple heuristics. So Moshe would say that classical complexity is simply wrong; it doesn't describe the "real-world" behavior of actually trying to solve SAT. While difficult-to-process formulas may exist, we "just happen" to only encounter easy-to-process formulas.

So, to summarize: the article criticizes researchers who focus only on worst-case complexity. While the theory is beautiful, we can point to many problems for which it does not accurately predict performance.

I think this criticism is a little strange, because most complexity theorists also work on "average-case" complexity; the study of when "typical" inputs are difficult vs easy to solve. This is mentioned at the very end of the article. Any working complexity theorist would immediately list these problems with worst-case complexity, and explain that it is an imperfect and primitive theory compared to how we would really want to understand when and how these problems are difficult. There is a great deal of work trying to understand what about each formula makes it difficult or easy to process, and why.

The issue is, we are nowhere near understanding even worst-case complexity. So theoretical research often toggles back and forth between the two settings, trying to make progress overall.

Computational complexity theory is a fundamental mathematical field that will only occasionally produce enough understanding to impact practice. For an example, see:

https://www.youtube.com/watch?v=FGtsqEwANWY&feature=youtu.be...


> So, to summarize: the article criticizes researchers who focus only on worst-case complexity. While the theory is beautiful, we can point to many problems for which it does not accurately predict performance.

I still don't get it, because it's okay for some people to be working on purely theoretical problems, motivated by mathematical curiosity and aesthetics. Is there a lack of people working on more concrete problems, bridging the gap between theory and practice? Are there outstanding problems arising from practice that are ignored because supposed "applied" researchers don't actually care about applications?

> I think this criticism is a little strange, because most complexity theorists also work on "average-case" complexity

That sounds almost as theoretical as worst case to me. I would expect "applied" complexity theory to provide a theoretical framework for how a practitioner can add information they know about how their problem differs from the aesthetically ideal problems that arise in theory. Like: my factory floor is not a frictionless plane; aha, here's how you measure a "coefficient of friction", and here's a new equation where you can see how the coefficient of friction affects the results. Or: my dataset isn't a uniformly random blob of bits, so is there a statistical property I can measure that lets me estimate the probability of the working memory of my algorithm exceeding 2.4 times the size of the input?


Let me clarify: average-case does not just mean "over the uniform distribution", though that is of course one distribution of inputs we can study (mostly with applications to cryptography). Average-case complexity can be studied over any family of distributions.

Finding ways to characterize hardness and easiness in terms of underlying structure in those input distributions is exactly what average-case complexity tries to accomplish. The "holy grail" is to classify problems over efficient distributions of inputs. That is, first make the fairly reasonable assumption that the family of formulas you actually run (say) SAT-solvers on came from an efficient computation ("nature", or people coming up with problems). Then, identify properties of those distributions that you can measure and "blame" intractability on.

For example, there's a huge amount of work on these types of parameters for SAT instances. See: https://people.csail.mit.edu/rrw/backdoors.pdf

This type of work could, eventually, directly inform practice.

And of course I think it is okay for some researchers to work on purely theoretical or worst-case problems. Insights from worst-case complexity are often useful in solving the more difficult problems of average-case complexity; it is useful to know where and how the theories diverge.

Reconsidering, I guess I just don't understand the article at all. Maybe he's arguing that TCS doesn't have this problem because we have a hierarchy of theories, where worst-case complexity inspires average-case complexity inspires parameterized average-case complexity which could someday be used in the real world. Over very long timescales (think, centuries) I think complexity theory will produce practical insights.


> This type of work could, eventually, directly inform practice.

Are there any problems taken directly from practice that complexity researchers are working on and using as context to define shortcomings in existing theory? Maybe that's what he's talking about, expanding on theory that "could, eventually, directly inform practice" and calling that "applied research" instead of tackling practical problems directly.


Yeah, a lot of the work on average-case SAT is directly informed by what industrial benchmarks look like.

Further from "core complexity," a lot of the work on machine learning primitives is also directly informed by instances from practice. See the "manifold hypothesis" :

https://deepai.org/machine-learning-glossary-and-terms/manif...


It's like you have some problems in graph theory that are NP-complete/hard, so in theory you are out of luck. But if you restrict yourself to a certain type of graph, or you change your problem from, e.g., computing the chromatic number to instead deciding whether a K-coloring exists for a given K, suddenly there is a polynomial algorithm. And practically speaking, you might be fine with just K = {2,3,4,5} and couldn't care less about other values of K for your practical case; so while in general you can't solve it, in practice you have a fast algorithm you run 4x in a row and are done.
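
A rough Haskell sketch of that "try K = 2..5 in a row" pattern (my own toy version, plain brute force over colourings, so only sensible for small graphs; the point is the shape of the approach, not its asymptotics):

    import Control.Monad (replicateM)
    import Data.List (find)
    import Data.Maybe (listToMaybe)

    type Vertex = Int
    type Edge   = (Vertex, Vertex)

    -- Is there a proper colouring of vertices [0 .. n-1] with k colours?
    kColouring :: Int -> Int -> [Edge] -> Maybe [Int]
    kColouring k n edges = find proper (replicateM n [0 .. k - 1])
      where
        proper cs = all (\(u, v) -> cs !! u /= cs !! v) edges

    -- Smallest K in {2,3,4,5} that admits a colouring, if any.
    smallColouring :: Int -> [Edge] -> Maybe (Int, [Int])
    smallColouring n edges =
      listToMaybe [ (k, cs) | k <- [2 .. 5], Just cs <- [kColouring k n edges] ]

    main :: IO ()
    main = print (smallColouring 4 [(0,1), (1,2), (2,3), (3,0), (0,2)])
      -- the 4-cycle with a chord needs 3 colours, so this reports K = 3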


I haven't been in the developer industry for too long, but excepting the Haskell community, I would say that the way CS tends to treat math is as guardrails, as in: "you can't do that because of the halting problem", "you might be butting up against computational complexity if you try doing it this way", "reconstruction of this data shard is impossible because you don't have enough points to determine the equation".

In the FP communities, I do sometimes see people overoptimize for TCO. Your datastructure is never going to be more than 10-100 deep. Don't worry about it. Just write the most legible recursive algorithm, not the most performant.


Hmm? My queues and stacks are >100 deep.

My graph traversals are >100 deep.

My recursive calculations are >100 deep.

Dynamic programming wouldn't be relevant if non-tail recursion was always good enough in practice.


I was obviously referring to a different you. Yes, there are (many) cases when tco is 100% a great idea.


The way development works, 95+% of the time, yes, math is simply guardrails. But it definitely isn’t always. I would actually say that we as the developer community have done a remarkable job of abstracting the hard math away from the average developer by bundling it so invisibly into libraries.

I highly disagree regarding algorithms. No, you don’t need to use the most modern matrix multiplication algorithm ever, but there are lots of situations (at least at large companies, or for people working with large amounts of data) where you do need to be aware of computational complexity.


> Your datastructure is never going to be more than 10-100 deep. Don't worry about it. Just write the most legible recursive algorithm, not the most performant.

What? Try computing the 1000th Fibonacci number.
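
To illustrate (a minimal sketch of my own): the most legible recursive definition is exponential and will never finish for n = 1000, while a still-readable lazy-list version answers instantly.

    -- The textbook recursion: very legible, but exponential time.
    fibNaive :: Integer -> Integer
    fibNaive 0 = 0
    fibNaive 1 = 1
    fibNaive n = fibNaive (n - 1) + fibNaive (n - 2)

    -- Still legible, linear time: the lazy list of all Fibonacci numbers.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n

    main :: IO ()
    main = print (fib 1000)   -- a 209-digit number, printed almost instantly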


I agree, as it is important to know your boundaries. But like the GP, I also prefer legible over performant code whenever it is obvious that I am not going to need more performance. In almost all cases where I encountered the necessity to optimize code, the reason was that it was I/O bound. I don't recall any instance of algorithmic performance being a problem.


TCO has little to do with timing performance and everything to do with keeping code legible in functional languages.


In an FP language that implements the actor model, for example, you will be in a world of hurt if you don't TCO your actor - that has nothing at all to do with legibility.
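
The pattern in question is presumably the Erlang/Elixir-style receive loop; here is a rough Haskell analogue (my own toy sketch, using a Chan as the mailbox): the receive loop is written as a tail call, so it runs in constant stack space no matter how many messages arrive.

    {-# LANGUAGE BangPatterns #-}
    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (forM_)

    -- A toy "actor": a mailbox plus a receive loop. The recursive call to
    -- `loop` is in tail position, so no stack builds up per message.
    counter :: Chan (Maybe Int) -> (Int -> IO ()) -> IO ()
    counter mailbox done = loop 0
      where
        loop !total = do
          msg <- readChan mailbox
          case msg of
            Just n  -> loop (total + n)   -- tail call
            Nothing -> done total

    main :: IO ()
    main = do
      mailbox <- newChan
      result  <- newEmptyMVar
      _ <- forkIO (counter mailbox (putMVar result))
      forM_ [1 .. 100000] (writeChan mailbox . Just)
      writeChan mailbox Nothing
      print =<< takeMVar result   -- 5000050000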


> But the seductive power of mathematical beauty has come under criticism lately. In Lost in Math, a book published earlier this year, the theoretical physicist Sabine Hossenfelder asserts that mathematical elegance led physics astray. Specifically, she argues that several branches of physics, including string theory and quantum gravity, have come to view mathematical beauty as a truth criterion, in the absence of experimental data to confirm or refute these theories.

Her criticism has gotten more attention than its merits justify. No one has argued that beauty takes precedence over truth.


In a recent blogpost [1], Hossenfelder responds to a review of her book. All but the first two paragraphs are basically a response to this idea. A brief excerpt:

> In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.

> Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.

[…]

> What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.

Have you read the book or any of her posts about the ideas in the book? There are in fact a lot of people who do claim that research programs and research dollars should be prioritized because of ideas like naturalness or beauty, even when decades of increasingly expensive and time-consuming work has led to no support for the natural or beautiful hypothesis.

[1] http://backreaction.blogspot.com/2019/02/a-philosopher-of-sc...


I find Hossenfelder problematic because, if you look at the rhetoric, the blog post itself relies on intuition and, in essence, a beauty argument in order to make a sociological point. I think Hossenfelder has a point about the state of professional physics, but the way it is communicated is unclear and lacks this rhetorical introspection as well. It's why her posts draw so much attention: the strong assertions, the arguing, and the way the unrigorous writing itself obscures this.


There's no need for anyone to read her book, because it's just a pile of garbage written by someone who doesn't have a clue about modern physics. Hossenfelder is an armchair physicist who failed in her discipline and started writing books for profit. The very idea of naturalness is basically the thing people hope the machines will someday do - namely think and be able to explain what is actually going on through fleshing out noise from the essential mechanisms.

The stuff she writes on her blog is so stupid it's beyond belief. First she criticizes theorists (who write far more reasonable and far less philosophical articles than she does) for using advanced mathematics and exploring various possibilities; a minute later she criticizes experimenters for trying to build a better particle collider. She's a person who would gladly see the resources flowing to the less skilled wannabe-physicists with no real knowledge, because she's one of them.


> No one has argued that beauty takes precedence over truth

Here's Paul Dirac, writing in 1963:

"it is more important to have beauty in one’s equations than to have them fit experiment. [...] It seems that if one is working from the point of view of getting beauty in one’s equations, and if one has really a sound insight, one is on a sure line of progress."

Here's John Schwarz, one of the pioneers of string theory, explaining why he and a collaborator kept working on it in the early 1970s after quantum chromodynamics turned out to be a better way of dealing with the strong nuclear force and before it emerged as a promising approach to quantum gravity:

"We felt strongly that string theory was too beautiful a mathematical structure to be completely irrelevant to nature."

The idea that string theory is "so beautiful it must be right" is, I think, mostly a strawman -- you hear critics of string theory taking it down, rather than advocates of string theory talking it up -- but the idea that beauty is a reliable guide to truth in physics isn't so strawy.


> the theoretical physicist Sabine Hossenfelder asserts that mathematical elegance led physics astray

I feel the other way around: Applied mathematicians and physicists led the pure mathematicians astray. But I say this for a different reason. I feel that physics has become convoluted with a plethora of theories where this same beauty is interpreted wildly differently by different people. In other words, people all have different ways of thinking, different notions of beauty, and ultimately, this manifests into different (competing) notions in physics. These may even be equivalent notions and they may embody the desired perspective on beauty, but instead of consolidation there is extended differentiation.

My point is that physics has become ugly exactly because of physicists' ignorance towards mathematical elegance in favour of personal beauty. I don't think physics can become consolidated without exactly a stark appreciation for elegance.

IMO this is why category theory took so long to start appearing in physics: The physicists are caught up in their own idea of beauty rather than the mathematical tradition of finding the minimal sufficient proofs and theories (which I call elegance).


Coming from a mathematics background, I personally do not care so much for the 'beauty' of mathematics, and am more interested in the clarity of proper abstraction and the insight into the resulting formal theory. I feel that physicists care more about such intuitive ideas than anybody else.

Regardless, you can only become lost in math if you have bad premises. Mathematics is a relative subject, abstracting the arbitrariness of reality into axioms. And if the axioms do not hold, the theory is bunk. It will always be the case that more granularity is required in real-life situations. Mathematics is precise and sound; it's not gospel.


> And if the axioms do not hold, the theory is bunk.

You assume the axioms hold. That is what an axiom is: an assumption.

> Mathematics is precise and sound; it's not gospel.

We do think that mathematics is precise. We don't really know if it is sound (unless I am missing something). For example, what is a set? It is not a "collection of things". Rather, it is an object in some mathematical setting.

The topic of whether mathematics is "correct" is something else. We can all go out and build a DIY logical machine from scratch and see for ourselves that the mathematics we use give the results that we expect. Applied mathematics in this sense concerns itself with mathematics that suitably describe real situations.


> We don't really know if [mathematics] is sound.

My apologies, I was hoping to be concise. I meant to say that mathematics is sound relative to the assumptions, which is exactly correct. But if the assumptions do not hold in reality (and they never quite do), then the theory as a whole is slightly off. I mean to say that mathematics is never 'correct' with reference to reality, but of course it is always 'correct' with reference to the axioms.


> Coming from a mathematics background, I personally do not care so much for the 'beauty' of mathematics, and am moreso interested in clarity and insight.

Beauty is a subjective emotion; it's not so easy to quantify. You may not find mathematics beautiful due to its clarity and insight, but many others would, precisely because of that.

And it's not just math. Beautiful paintings, architecture, legal arguments, religions, etc., are often believed to be beautiful because of the clarity and insight they provide as well.


If you count all the work in AI/ML, then the criticism has been overwhelmingly in the other direction, i.e. too much "just trying stuff to see what happens" and not enough "really understanding what is going on". It has always seemed like a weak criticism to me, honestly. You can advance theory, or you can advance through experimental insight. Neither is the right or wrong path, just whichever seems like the best way to make progress given the state of current knowledge.


Yeah, I feel ML would have been a more apt analogy than complexity theory. For some problems it's an ugly, "brute force" approach that works really really well.


I think AI/ML has changed things.

I think Computer Science used to be more aligned with math, in the sense that the mathematics courses most people took in school were overwhelmingly symbol manipulation. Just like CS.

Now I think things are not only more data driven, but any sort of "understanding" might be prioritized away forever, unless some adversarial network requires it. :)


I have long felt like "beauty" in mathematics is just oversimplification.

Ironic that intelligent mathematics types get caught up in what could be analogous to socially hurtful stereotypes.

I am going to follow the author.


Isn't a large part of the history of physics about doing away with wrong assumptions based on beauty? Circular orbits of planets, etc.


Except that you've got the point completely upside down! At first people didn't believe in heliocentrism because it used circular orbits, and instead believed that the universe was better described from the geocentric point of view, which was one ginormous mess from the mathematical point of view. Then people realized that the beautiful addition of conic sections solves the problem completely. I say 'beautiful' because clearly you don't have a sense of mathematical beauty. Mathematical beauty is not only the simplicity of the solutions, but also its robustness and the fluid way in which it fits into the rest of our knowledge. The real simplification that came from it is far deeper than any 'circular orbits' picture you imagine. Naturalness and the search for beauty can never be proved wrong. Those principles can perhaps only be applied prematurely, before enough data is collected.


Not recently.

Dirac did believe his equation had to be right because it was so elegant; however, it turns out we now interpret it differently.


Can you give an example of beauty (or assumed beauty I guess) just being oversimplification?


Although I admittedly only do mathematics through physics, I can't really think of any. If it's oversimplified, it's wrong and therefore not beautiful. Beauty is often attributed after discovery rather than on the way.


It's terrible that nobody got punished for it, I hope Bitcoin will solve all of our problems soon.



