What makes maths beautiful? (newhumanist.org.uk)
79 points by Hooke 124 days ago | 84 comments

I had a run-in with this issue once. Many years ago, when working on a program verification system, I had to show that two theorem provers (Boyer-Moore and Nelson-Oppen) were working on compatible theories. If that was botched, the system could prove false statements. So I undertook to prove, in Boyer-Moore theory, the "axioms" of arrays used in the Nelson-Oppen prover. McCarthy defined four classic axioms for arrays, and here are the first two.

    p=r ⇒ read(write(a, p, v), r) = v

    ¬(p=r) ⇒ read(write(a, p, v), r) = read(a, r)
This just says that if you store v at a[p], and then read it back, you get what you put there, and you don't change any other elements.
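These two axioms are easy to spot-check mechanically. Here is a minimal Python model (an editorial illustration, not part of the original verification system), with an array as a dictionary and a functional write:

```python
# Illustrative model of McCarthy's array axioms (not the original system):
# arrays as Python dicts, with a functional (non-mutating) write.

def write(a, p, v):
    b = dict(a)   # copy, so the original array is untouched
    b[p] = v
    return b

def read(a, r):
    return a.get(r, "UNDEFINED")

a = {0: "x", 1: "y"}
# Axiom 1: p = r  implies  read(write(a, p, v), r) = v
assert read(write(a, 1, "z"), 1) == "z"
# Axiom 2: p != r  implies  read(write(a, p, v), r) = read(a, r)
assert read(write(a, 1, "z"), 0) == read(a, 0)
```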

So I set this up in Boyer-Moore notation.

        (IMPLIES (AND (arrayp A) (NUMBERP I)) 
                 (EQUAL (selecta (storea A I V) I) V))

        (IMPLIES (AND (arrayp A) (NUMBERP I) (NUMBERP J) (NOT (EQUAL I J))) 
                 (EQUAL (selecta (storea A I V) J) (selecta A J)))
But Boyer-Moore theory is constructive; everything must have a definition that can be evaluated as a function. So I had to define arrays and the "selecta" and "storea" functions so that this would work. I could do this by defining an array as a list of (subscript, value) tuples.


    (ADD-SHELL array-shell 
        ((array-elt-value (NONE-OF) UNDEFINED) 
         (array-elt-subscript (ONE-OF NUMBERP) ZERO) 
         (array-prev (ONE-OF array-recognizer) empty-array)))
This is simply a structure definition for a list. An array-shell has an array-elt-value, an array-elt-subscript, and an array-prev, which is the rest of the elements. "array-shell" is the constructor, "empty-array" is essentially "nil" for this type, and "array-recognizer" is a predicate which is true for objects of type array-shell.

For array equality to work, the list of tuples had to be ordered by subscript, so that two arrays with the same values would always have the same representation. So selecta and storea are complicated. selecta is a recursive lookup in a list:

    (DEFN selecta (A I) 
        (IF (EQUAL (array-elt-subscript A) I) 
            (array-elt-value A) 
            (IF (EQUAL (array-prev A) (empty-array)) 
                (UNDEFINED) 
                (selecta (array-prev A) I))))

Store into array is ugly, because it's an update of an ordered list.

    (DEFN storea (A I V)
        (IF (NOT (arrayp A))
            A
            (IF (NOT (NUMBERP I))
                A
                (IF (EQUAL A (empty-array))
                    (IF (EQUAL V (UNDEFINED))
                        A
                        (array-shell V I (empty-array)))
                    (IF (EQUAL (array-elt-subscript A) I)
                        (IF (EQUAL V (UNDEFINED))
                            (array-prev A)
                            (array-shell V I (array-prev A)))
                        (IF (LESSP (array-elt-subscript A) I)
                            (IF (EQUAL V (UNDEFINED))
                                A
                                (array-shell V I A))
                            (array-shell (array-elt-value A)
                                         (array-elt-subscript A)
                                         (storea (array-prev A) I V))))))))
This recursive function is complicated enough that it needs a lemma to help with proof of termination.

    (PROVE-LEMMA storea-is-sound (REWRITE) 
        (implies (and (arrayp a) 
            (not (equal a (empty-array)))) 
            (lessp (count (array-prev a)) (count a))))
This says that the array-prev list component of a is always shorter than a. The Boyer-Moore prover must prove termination for all defined functions, which it does by examining any recursive call and proving that there is a measure defined on the arguments which is a smaller nonnegative integer. For the easy cases, the prover can do this automatically, but sometimes you have to define a lemma to help it. The lemma, of course, also has to be proved.
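For readers who don't want to parse nqthm notation, here is a rough Python transcription of the same idea (an editorial sketch, not the original code): pairs kept sorted by subscript, with stores of UNDEFINED deleting the entry, so that equal arrays always have identical representations.

```python
# Illustrative sketch of the ordered-association-list representation:
# an array is a list of (subscript, value) pairs kept sorted by subscript.

UNDEFINED = object()   # sentinel for "no value stored here"

def selecta(a, i):
    for (sub, val) in a:
        if sub == i:
            return val
    return UNDEFINED

def storea(a, i, v):
    # Drop any existing entry for i, then reinsert in sorted position.
    # Storing UNDEFINED deletes the entry, keeping the form canonical.
    rest = [(s, x) for (s, x) in a if s != i]
    if v is UNDEFINED:
        return rest
    return sorted(rest + [(i, v)])

# The payoff: representation equality is independent of write order.
a = storea(storea([], 3, "c"), 1, "a")
b = storea(storea([], 1, "a"), 3, "c")
assert a == b
assert selecta(a, 3) == "c"
```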

So, given all that, the Boyer-Moore theorem prover could be turned loose to grind out proofs of the rules for arrays. It took about 45 minutes of grinding in 1982 on a 1 MIPS VAX 11/780. You can rerun it now. Here's the prover, which requires clisp [1]. Here's the entire set of definitions and theorems we used for program verification [2]. Read the instructions at [1], and get the system up to where it's done "boot-strap". Then load the file at [2]. Long, detailed, human-readable explanations of the proof steps are displayed. The entire proof run for all those basic theorems takes about a second, and that's probably mostly scrolling.

So, having rigorously proved the "axioms" for arrays from first principles, I submitted a paper to JACM. The reviewer comments I got back were mainly about the ugliness of the approach. All that case analysis! One comment was that the reviewer was upset about having a proof that ugly at the foundations of computer science. The paper was rejected.

The reason it's ugly is that it doesn't get to use set theory. Informally, a set is a collection whose order does not matter. "Order does not matter" is not a meaningful concept in strict Boyer-Moore theory. If order matters, there are many cases to prove, stemming from all those IF statements in the definitions.

A generation later, fewer mathematicians would be bothered by this form of "ugliness". But back then, machine proofs were so rare that this was considered awful. Also, now that computers can do this in a second, complaining that it's too complicated seems whiny.

[1] https://github.com/John-Nagle/nqthm [2] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...

Just to look at, here are the actual machine proofs from running our verifier theory through the Boyer-Moore prover [1]. AI, the early years, at work. Here's one proof, for commutativity of TIMES for the natural numbers.

         Give the conjecture the name *1.
         We will appeal to induction.  Two inductions are suggested by terms in
    the conjecture, both of which are flawed.  We limit our consideration to the
    two suggested by the largest number of nonprimitive recursive functions in the
    conjecture.  Since both of these are equally likely, we will choose
    arbitrarily.  We will induct according to the following scheme:
          (AND (IMPLIES (ZEROP X) (p X Y))
               (IMPLIES (AND (NOT (ZEROP X)) (p (SUB1 X) Y))
                        (p X Y))).
    Linear arithmetic, the lemma COUNT-NUMBERP, and the definition of ZEROP inform
    us that the measure (COUNT X) decreases according to the well-founded relation
    LESSP in each induction step of the scheme.  The above induction scheme
    produces the following two new conjectures:
    Case 2. (IMPLIES (ZEROP X)
                     (EQUAL (TIMES X Y) (TIMES Y X))).
      This simplifies, applying TIMES-ZERO and TIMES-NON-NUMBERP, and opening up
      the functions ZEROP, EQUAL, and TIMES, to:
    Case 1. (IMPLIES (AND (NOT (ZEROP X))
                          (EQUAL (TIMES (SUB1 X) Y)
                                 (TIMES Y (SUB1 X))))
                     (EQUAL (TIMES X Y) (TIMES Y X))),
      which simplifies, unfolding the functions ZEROP and TIMES, to:
            (IMPLIES (AND (NOT (EQUAL X 0))
                          (NUMBERP X)
                          (EQUAL (TIMES (SUB1 X) Y)
                                 (TIMES Y (SUB1 X))))
                     (EQUAL (PLUS Y (TIMES Y (SUB1 X)))
                            (TIMES Y X))).
      Appealing to the lemma SUB1-ELIM, we now replace X by (ADD1 Z) to eliminate
      (SUB1 X).  We employ the type restriction lemma noted when SUB1 was
      introduced to constrain the new variable.  We must thus prove:
            (IMPLIES (AND (NUMBERP Z)
                          (NOT (EQUAL (ADD1 Z) 0))
                          (EQUAL (TIMES Z Y) (TIMES Y Z)))
                     (EQUAL (PLUS Y (TIMES Y Z))
                            (TIMES Y (ADD1 Z)))).
      However this further simplifies, applying the lemma TIMES-ADD1, to:
    That finishes the proof of *1. Q.E.D.
Boyer and Moore were trying really hard to make the results understandable to traditional mathematicians.

[1] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...

I love these anecdotes.

Just an aside, but I was under the impression that Set Theory (these days) is regarded as not-necessarily-all-that-well-founded or, perhaps, not ideal for "foundations"? Is that hopelessly misguided?

If it isn't already apparent, I'm pretty much a noob in formal verification, but I've read enough to hopefully have a very shallow understanding...?!?

EDIT: I just thought I'd add what "makes math beautiful" for me: The mere difficulty of defining equality of 'things' and deciding what it should mean.

Use of the phrase "not-necessarily-all-that-well-founded" is unfortunate, since standard set theory studies the properties of extensional, well-founded objects in general. Though clearly you didn't have the technical understanding of well-foundedness in mind. It just might confuse people, because there is a branch of set theory called "non-well-founded set theory" that is perfectly respectable -- i.e., "well-founded" in the sense you intend -- but not about the same objects studied in the standard context.

I believe what you have in mind regarding set theory as "not ideal for 'foundations'" (why is the word foundations in quotes, btw?) is that there is a recent push to reexamine the prospects of more intensional foundations based on type theory. The most promising and currently fashionable version of this push is Homotopy Type Theory (HoTT), which has been given press and prestige by the IAS at Princeton and interest from Fields Medalist Vladimir Voevodsky. There are plausible reasons given, though often polemical, for preferring some sort of type-theoretical or category-theoretical foundations over set-theoretical ones. Perhaps most importantly, for readers of HN, the comparatively easy formalization into computer-checkable proof and computer-assisted proof (look up, for instance, Coq).

That said, there is a long way to go before HoTT (or something like it) displaces traditional foundations and set theory. For one thing, there is no obvious axiomatization or way to confirm that HoTT is a suitable foundation for extant non-foundational mathematics. For another, standard type theory (MLTT) plus Voevodsky's univalence axiom is proof-theoretically equivalent to a subsystem of analysis. I would guess that, for instance, you can't prove that all Borel sets are measurable using MLTT plus univalence (but this is totally speculative and I would appreciate being corrected).

Last -- I suspect the only mathematicians who find set theory proofs beautiful are set theorists. Most other mathematicians don't really give af about set theory.

Ok, I'm clearly lacking in fundamentals here.

I've seen HoTT receiving quite a bit of attention, but after reading a few papers and texts it seems that it's just out of reach without additional introductory material. I got about 50% through TAPL, if that's any indication of (my lack of) sophistication in type theory. I really should pick that up again...

Any recommendations for introductory texts?

As a practical matter, what do the mathematicians who don't give AF about set theory (as you say) do? Just assume other axioms that "do enough" and then go from there?

Looking at TAPL, it seems like a good text coming at things from the theoretical computer science angle. I'm unsure how exactly to branch out from there, since my own background is more in math logic. So prob best to take my recommendations with that in mind (I think there's a ton of work in CS in the last 20 years that I'm just totally unfamiliar with).

You could check out Lambek and Scott (1970s), which is excellent. Also Girard (the citation for this one was in TAPL). I also think Martin-Löf himself is pretty good for the feel of things, but he has always seemed kind of cryptic and mystical to me. There are also some good videos on YouTube that Steve Awodey (CMU) recorded for more intro/motivation. I'd also add Voevodsky's three Bernays lectures on YouTube for the general idea of using typed lambda calculus vs. set theory as foundations.

Sociologically, my impression is that most mathematicians trust that the usual axioms of set theory are consistent and safely ignore them. So, yes -- they choose axioms in their own field that are known to be derivable in set theory. Even in intro Analysis textbooks, where the role of set theoretic axioms themselves can be pretty easily extracted from the construction of the reals, I don't think you'd see set theoretic axioms explicitly appealed to (might be wrong about this). In constructive mathematics, it ends up mattering where Choice is used I guess. And it matters how these uses can be "constructivized." But constructive analysis is primarily motivated by foundational considerations. So it is not really an example of ordinary mathematics. Some axioms, like Regularity, are almost never even implicitly used outside of set theory itself except in very specific cases.

Nearly every case where there is a question of whether a given bit of mathematics is derivable from the usual axioms of set theory arises only within set theory itself (CH, measurability of projective sets of reals, etc.), and so such questions are easy to ignore for the vast majority of mathematicians. They simply don't care that much about logic and set theory aside from "just working" under the hood, as it were.

FWIW, I've only had a pretty basic intro to math meta-theory, and I think my most "advanced" course in that direction was about Point Set Topology (+ some discrete math courses, but that's par for the CS course).

If one is a mathematician, I think it's entirely reasonable to just "trust" your basic axioms of your "sub-field", but OTOH I can really understand the worry that the rug is going to get pulled out from under you... which is a pretty remote worry given the complexity of math nowadays.

Anyway, thank you for taking the time for that response.

Whoops -- Lambek and Scott is 1986. My apologies

Equality should at least have the property that

    ∀ f, x = y ⇒ f(x) = f(y)
In a constructive theory like Boyer-Moore, all variables have a concrete representation, and functions can look into that representation. Thus, for EQUAL to have the property above, the representation must be the same.

In Boyer-Moore theory, there's no "hiding", in the sense of an object with private members. One could define a "weak equality" for sets which were lists of values, true if the same values were in both lists. Weak equality would not require that the lists have the same order. But you couldn't hide the representation. Functions could peek into the list representation, and the property above would not hold. So "weak equality" would not be very useful in proofs.
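The failure mode described here is easy to demonstrate in any language where functions can inspect a value's representation. A small Python sketch (an editorial illustration): two lists that are "weakly equal" as sets, distinguished by a function that peeks at the order.

```python
# Illustration of why "weak equality" breaks the substitution property
# when functions can inspect the representation.

def weak_equal(xs, ys):
    # True if both lists contain the same values, regardless of order.
    return sorted(xs) == sorted(ys)

def first_element(xs):
    # A function that "peeks into" the list representation.
    return xs[0]

x, y = [1, 2], [2, 1]
assert weak_equal(x, y)                       # weakly equal as sets
assert first_element(x) != first_element(y)   # yet f(x) != f(y)
```

So weak_equal cannot play the role of equality: the property "x = y implies f(x) = f(y)" fails for any f that can see the order.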

I once proposed a sort of "object oriented" extension to Boyer-Moore theory. The idea is that you create something like a set represented by lists, prove some theorems about such objects, and then "hide" the private functions that can look into the interior of the representation. Then you can define a weak equality function p. If you can prove ∀ f, p(x,y) ⇒ f(x) = f(y) for every non-hidden function, you have proven that p(x,y) ⇒ x = y. Then, p can be treated as equality. Proving this requires proving it for every function already defined. Any newly defined functions are composed from existing functions, so the equality still holds. There's trouble with newly defined types, though, because they come with some built-in functions such as the type predicate.

If you pursue this, you probably get to some theory of types, but I didn't want to go there, so I gave this up.

> Equality should at least have the property that ∀ f, x = y ⇒ f(x) = f(y)

My impression was that even that might be a bit contentious [1]. Well, maybe not contentious, but maybe... disputable? It also requires a theory of functions, or at least a syntax and derivation rules.

That was just to be absurdly nitpicky, I think I understand what you're saying here, and it's much appreciated.

[1] https://pdfs.semanticscholar.org/806a/92153c4b8d344edec45eec...

I think the type system along Haskell's lines can give you something like that? (Or go all the way to Agda, Idris, Coq etc.)

I'm having trouble finding the source of this quote, but here:

>Many mathematicians have the opposite opinion; they do not or cannot distinguish the beauty or importance of a theorem from its proof. A theorem that is first published with a long and difficult proof is highly regarded. Someone who, preferably many years later, finds a short proof is "brilliant." But if the short proof had been obtained in the beginning, the theorem might have been disparaged as an "easy result." Erdős was a genius at finding brilliantly simple proofs of deep results, but, until recently, much of his work was ignored by the mathematical establishment.

And here is an interesting study on what mathematicians consider beautiful: https://www.gwern.net/docs/math/1990-wells.pdf They found that mathematicians varied a great deal in what they considered beautiful. And that there isn't any consensus on what is considered beautiful. It also changes over time:

>There was a notable number of low scores for the high rank theorems. Le Lionnais has one explanation [7]: "Euler's formula e^(πi) = -1 establishes what appeared in its time to be a fantastic connection between the most important numbers in mathematics... It was generally considered 'the most beautiful formula of mathematics'... Today the intrinsic reason for this compatibility has become so obvious that the same formula now seems, if not insipid, at least entirely natural."

I think the interestingness or surprisingness of a mathematical idea is a temporary thing. Once you really deeply understand something, it's no longer surprising or interesting. It should just feel obvious and simple.

This goes along with Jürgen Schmidhuber's theory of art (http://people.idsia.ch/~juergen/creativity.html): that humans crave novelty and unexpectedness in all areas, and that is what we consider beautiful. But once you get very familiar with something, it loses its novelty.

Euler’s formula says (in one interpretation) that a half-turn rotation in the plane is the same as a reflection through a point, a statement which is obvious to children who think about it for a bit. It’s phrased in a cute way, but “most beautiful formula of all time” etc. is really a stretch.

The tricky part is thinking of rotations as a type of “number” with a multiplicative structure, in which case the [additive] angle measure becomes the logarithm of the rotation, letting us write our half-turn rotation using the fancy-looking exp(πi) instead of ½⥀ or rot(180°) or whatever.

If you asked someone whether ⤺² = ⥀ (or similar) was the most beautiful formula in mathematics they would probably laugh or look at you funny.

Euler's formula is "e^ix = cos(x) + i sin(x)", which arguably encodes the "trick" of thinking of e^ix as a rotation. Once you understand this relationship, the special case of x=pi becomes obvious and boring. If you arrived at that special case without first hitting the more general understanding, then I can see calling it "beautiful", because it is so suggestive of something deeper going on.
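As a sanity check, the formula and its special case can be verified numerically (an illustrative check, not a proof):

```python
# Numeric spot-check of Euler's formula e^(ix) = cos(x) + i*sin(x).
import cmath
import math

for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

# The celebrated special case x = pi: e^(i*pi) = -1, up to rounding.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```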

Of course, when we present this identity as the most "beautiful formula", it is often to people who don't have any understanding of what it means to take something to a complex power, so the correct response should be to look at you funny.

Personally I would take θ = log(x + yi) / i – where x is a scalar and yi is a bivector, with x² + y² = 1 – to be the definition of angle measure θ of the rotation x + yi, though usually I like to think of this as θi, a bivector-valued quantity.

In that formulation, the structure of rotations is already clear, and the better question might be why the (principal branch of the) natural logarithm of some bivector i is (π/2)i. That starts us on thinking about what the logarithm really means, and asking about conformally mapping a punctured plane to an infinite two-ended cylinder, etc.

The reason this formula of Euler’s seems like a mystery is that we teach angle measure to students as the canonical way to represent a rotation which makes it logically primary in their minds, when (in my opinion) it should be a derived quantity. Rotation is a naturally multiplicative kind of operation, a linear transformation of a vector space.

If we need to represent a rotation of 2-dimensional vectors by a single parameter, I generally prefer the stereographic projection yi/(1 + x) (“half-angle tangent”) instead of the angle measure (logarithm). Depends a bit on the particular use case though.

> Euler’s formula says (in one interpretation) that a half-turn rotation in the plane is the same as a reflection through a point, a statement which is obvious to children who think about it for a bit.

I guess by "reflection through a point" you mean "the composition of two reflections through orthogonal axes through the point", right? Otherwise this cannot possibly be true, since a rotation is orientation-preserving but a reflection is orientation-reversing.

If you reflect every point in the plane across a given “axis” point, you get a half-turn rotation.

Reflecting across a point is also the same as reflecting twice across two perpendicular lines which intersect at that point.

> If you reflect every point in the plane across a given “axis” point, you get a half-turn rotation.

> Reflecting across a point is also the same as reflecting twice across two perpendicular lines which intersect at that point.

My claim was rather that "reflecting across a point" has no meaning by itself—in the strict, mathematical sense according to which a reflection has a fixed hyperplane https://en.wikipedia.org/wiki/Reflection_(mathematics) —unless you use one of these two statements as the definition. (Probably you'd better use the second, because the first doesn't generalise to higher dimensions.)
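The second characterization is easy to check concretely. A quick Python verification (an editorial illustration), modeling the plane as complex numbers: composing reflections across two perpendicular lines gives the half-turn z → -z.

```python
# Reflection across a point = composition of reflections across two
# perpendicular lines through that point (here, the axes through 0).

def reflect_x(z):
    # Reflection across the real axis.
    return z.conjugate()

def reflect_y(z):
    # Reflection across the imaginary axis.
    return -z.conjugate()

for z in (1 + 2j, -3 + 0.5j, 0.25 - 4j):
    # Two perpendicular line reflections compose to the half-turn z -> -z.
    assert reflect_y(reflect_x(z)) == -z
```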

I love the way you wrote that! Why do mathematicians have a predisposition to overload symbols? I hope in the future, mathematical notation will evolve to make full use of Unicode and of names with more than one letter.

Is there anything interesting to you in the notion that cos(x) + i sin(x) is the unique analytic continuation of e^x to the complex plane?

It's not just a rotation by definition, I don't think.

If you think about it for a bit, it’s clear that this must be so.

A circle’s circumference is proportional to its radius, so if you want to have a conformal map between a rectangular grid in one space and a polar-coordinate grid in another space, the spacing of the polar-coordinate grid lines in the radial direction must necessarily be proportional to the radius so that they can match the radius-dependent spacing in the tangential direction.

Anyhow, there are many interesting features of the structures here. Both rotation and logarithms are very interesting and important both in mathematics and in science/engineering. I just don’t think Euler’s identity in particular is worthy of so much attention.

Sure, but I think most people believe that cos(x) + i sin(x) = e^(ix) because that's how it is defined, in which case e^(iπ) = -1 isn't even a result. It is at least the tiniest bit more interesting than that.

It is beautiful that this interpretation maps (complex) numbers and geometry so well---even if the formula itself doesn't look so beautiful and surprising anymore.

Personally I take “similarity transformations which fix a point in the plane” (or if you like, “amplitwists of planar vectors”) to be the definition of complex numbers, from which a Cartesian coordinate representation (i.e. breaking them into scalar + bivector parts) can easily be derived. In this view the notion of “[mapping] complex numbers to geometry” is just tautological.

Maybe better would be to say that Euler’s formula shows how classical trigonometry (angle measure &c.) is what you get when you apply logarithmic thinking to Euclidean vector geometry.

If you see Geometry as flowing from something like Euclid's axioms, and (complex) numbers as flowing from a long line of advances in essentially bookkeeping/accounting, it is quite nice that they have such a neat synthesis.

A whole lot of the original bookkeeping/accounting was about adjudicating land boundary disputes and calculating land taxes based on area, or predicting/explaining astronomical events. In fact, the origins of both trigonometry and logarithms are in astronomy, though I’ll admit that the natural logarithm per se came from Bernoulli’s study of compound interest.

But yes, the connection between geometry and arithmetic is something people have been noticing for at least 5000 years, and is certainly important and very interesting. I just think this particular expression of that connection is oversold as being uniquely insightful or beautiful.

Sure. And if anything the more general equation e^(iφ) = cos φ + i sin φ is much more useful: it allowed me to forget all the special rules we learned in school about how to manipulate the trigonometric functions and just use my knowledge of exponentials instead.
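As a concrete instance of that point, the angle-addition formulas fall straight out of the exponential law e^(i(a+b)) = e^(ia) · e^(ib), so there is nothing extra to memorize (an illustrative numeric check):

```python
# Angle-addition identities recovered from the law of exponents.
import cmath
import math

a, b = 0.7, 1.9
lhs = cmath.exp(1j * (a + b))
rhs = cmath.exp(1j * a) * cmath.exp(1j * b)
assert abs(lhs - rhs) < 1e-12

# Real and imaginary parts give cos(a+b) and sin(a+b):
assert abs(lhs.real - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))) < 1e-12
assert abs(lhs.imag - (math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))) < 1e-12
```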

Saw a lecture by Zeki (mentioned in the article) at an art college summer school last year. He showed "e to the i pi" and also a Ramanujan equation. His result, of course, was that people see the elegance of e to the i pi but don't see beauty in the apparent arbitrariness of the Ramanujan equation. I have to admit I don't remember what his sample was -- definitely mathematicians (but of what level?). This is an issue that was raised by art colleagues -- would you determine the ultimate aesthetic value of an artist's work by using an unqualified audience?

Zeki is a bit of a buffoon with a tendency to assume that he understands other people's domains well enough to talk about them. It may not be arrogance on his part but there's a definite sense that he doesn't let complexity and nuance get in the way of a good story.

I'm a mathematician, and never quite got the emphasis on "beauty" that some obsess over. Maybe I'm just too deep into the applied side. But mathematical beauty to me is unexpected terseness or conciseness in a result. Which is similar to how I would define beautiful source code I think.

I got pretty far in math (ms in cs with half of the classes being legit math classes) I never got the obsession either. in general it always felt to me like a bunch of awkward people trying to ascribe aesthetic value to something that had none. it also always felt self-aggrandizing: here look at how beautiful is this game I play where I made up the rules.

edit: down votes on hn without comments? boo

Well, most of your post involves insulting a fairly broad group of people.



>this is the dumbest math trope ever. i bet you're the undergrad.

Undergrad math major here. Please don't insult us; even we know to cringe when this identity is mentioned as some amazing beauty of math.

In pure math, the journey is the reward; in applied math, the destination is, and how you get there is unimportant.

That, IMO, is why applied math rarely is beautiful. Pure math often is because no sane person would spend time doing ugly work that doesn't even lead to useful results.

Computational proofs such as that of the four color theorem have changed that a bit. That may be the reason some mathematicians don't accept them as proofs.

Similarly, "publish or perish" may have led to more proofs in pure mathematics that are more hard work than inspiration, and of course, zillions of hours have been spent looking for counterexamples of theorems in pure mathematics.

What you said made me want to write surprisingly terse and concise code. It's what I shall try to do from here on.

Terseness and conciseness are great, but should be balanced with readability and ease of editing.

Now there are too many rules. I'll just go back to writing crap code.

"What makes math beautiful?" Having it rendered correctly would be a good start.

Maybe it's broken just for me, but displaying "a2 + b2 = h2" (instead of a² + b² = h²) in an article about math is really ungraceful degradation.

Much of complex analysis and linear algebra I think is beautiful and can be easily made accessible to the general layman.

I think you overestimate "the general layman". For example, complex analysis requires one to accept the existence of the square root of minus one and that multiplying by it is equivalent to a rotation.

In my experience, "the general layman" simply doesn't possess enough suspension of disbelief to get past such hurdles.

linear algebra, though, I feel is an excellent example.

Everyone has solved a system of linear equations in school. The concept is pretty accessible.

In 2D, I'll give you that, but in general, I would say "has struggled solving", rather than "has solved", and not even 'everyone'.

Few can, for example, reliably compute the intersection point of a line l' parallel to l through a point p with a plane P, and many would panic when given a problem in N>3-dimensional space.

I know people who memorized formulas for solving linear systems as used in macro-economics (on the order of 10 dimensions, but lots of zeroes in their matrix), and, because of that, couldn't work when given a slightly different model to solve (for example, if taxation became a constant plus a fraction of income instead of just a fraction of income, or if a capital gains tax were introduced).

What on earth are you talking about?

The first step in a linear algebra course is to codify the method that you use in middle/high school to solve systems of equations of the form a00·x0 + a01·x1 + ... = b0, and so on.

That algorithm sets the stage for everything. The pace is natural, the math is accessible, no geometric interpretation is ever required, and it's built for problems that people of even average intelligence can understand. Frankly, it's astonishing we don't teach it in high school.
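The method being described -- Gaussian elimination on the augmented matrix [A | b] -- can be sketched minimally in Python (an editorial illustration assuming nonzero pivots, with no partial pivoting):

```python
# Minimal Gaussian elimination: forward elimination, then back substitution.
# Illustrative only: assumes the pivots encountered are nonzero.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):                           # forward elimination
        pivot = M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / pivot
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n                                  # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
assert [round(v, 9) for v in solve([[2, 1], [1, 3]], [5, 10])] == [1.0, 3.0]
```

A production solver would add pivot selection for numerical stability, but the core is exactly the school procedure, codified.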

Is a beautiful poem still beautiful if it is written in a language that you don't understand?

I conjecture that the inherent beauty of mathematics is overstated. The beauty of Euler’s identity is like the sublime beauty of a Himalayan peak shrouded in thick fog. A lot of the pleasure and reward stems as much from the effort to climb above the fog to view the peak in all its majesty as from any structural formal beauty. There are only so many ways formal structures can relate to each other aesthetically.

At least that's what I think today.

> There are only so many ways formal structures can relate to each other aesthetically.

I think that a lot of the inherent beauty of mathematics (there is also the beauty of the climb, as you very nicely described it) comes precisely from the realisation that, sensible as this claim is, it doesn't seem to be true: there seem always to be more relationships out there to discover, and their hidden and surprising natures can make them all the more beautiful once uncovered.

Yes, this ^^

There is even a mathematical case to be made that there are always "more relationships out there to discover." One of the more plausible, sober interpretations of the various limitative phenomena in the foundations of mathematics discovered throughout the 20th century is that mathematics is not reducible to finitary symbol manipulation. Perhaps this is a bit loose and controversial, but the point is just that we don't necessarily need to solely lean on the observed fact that there have always been "more relationships out there to discover."

Parsimony and specificity. Each number and symbol serves a specific function/role: if rearranged or changed, the whole system/equation would cease to work. Yet the symbols themselves also encode so much information and meaning within them.

Not all math is beautiful IMHO.

The proof of the 4 color theorem "reduced the infinitude of possible maps to 1,936 reducible configurations (later reduced to 1,476) which had to be checked one by one by computer and took over a thousand hours."

Of course beauty is subjective and not well defined. Some might consider that proof beautiful.

> Not all math is beautiful IMHO.

Surely no-one claims that it is? Speaking of 'beautiful code' does not invite the rebuttal that there is also ugly code, just encourages the appreciation that ugly code isn't all there is, and that formal systems can be æsthetically pleasing.

I would argue that the first computer verified proof in some sense changed the mathematical aesthetic.

In code I define beauty as the ability to do a lot with just a little. Simplicity while at the same time providing robustness is what gives me this feeling.
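By way of illustration (my own toy example, not anything from the article), Euclid's algorithm does a lot with very little: one line of logic computes the greatest common divisor of any pair of non-negative integers, edge cases included.

```python
# A tiny function that does a lot: Euclid's algorithm for the
# greatest common divisor of two non-negative integers.
def gcd(a, b):
    # When b reaches 0, a holds the gcd; otherwise recurse on (b, a mod b).
    return a if b == 0 else gcd(b, a % b)

print(gcd(48, 36))  # 12
```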

E=mc^2 is considered the greatest equation in history for this reason: it is simple enough that you can explain to someone not well versed in math what is actually behind the symbols.

>E=mc^2 is considered the greatest equation in history because of this reason.

Only people with a "pop-culture" idea of math&physics think that.

- wholemeal view

- knowing out of ignorance (probabilities)

- compression of knowledge

Math isn't beautiful, beauty is mathematical.

formula, please.

Is it a UK thing to add the "s" to math? I see it pretty often but it never seems right. I've always used the word as an unchanging irregular plural similar to "sheep".

Well, there's a certain logic to it: The long version of the word, Mathematics, is a regular plural. So is it so strange for the shortened version Maths to also be a regular plural?

Edit: I was imprecise with my phrasing above, I really meant to say 'takes the form of (a regular plural)' instead of 'is' - I don't know if it is or is not technically a plural - the point was only that the shortened word 'Maths' follows the same form as the long version, which seems logically consistent.

Is Mathematics plural? Would you say "Mathematics is cool", or "Mathematics are cool"?

It's a special case like physics, optics, dynamics, economics, and so on.

The simple (linguistically obscure, but let's pretend) way to understand it is that there's an implied "the study of" before the word. So although you are studying multiple elements of a topic, the study itself is singular.

It isn't a regular plural. What is a mathematic? Mathematic is not a noun. Mathematics is a mass noun like software or equipment.

Numberphile has a good video on this: https://www.youtube.com/watch?v=SbZCECvoaTA

That's a great video and exactly the depth of answer I was hoping for. Thanks!

UK, Ireland, Australia, New Zealand and almost all other English speaking countries except for the US and Canada, I believe.

Same in French, eg. J'aime les maths

And in Spanish, "Matemáticas".

EDIT: probably in a lot of other European languages as well. I wonder to what extent this may be just a result of cultural exchange.

Since we are doing this, Maths is common in India as well. I'd never heard "Math' used in an Indian context.

why not say "Mathematics"?

Actually school time tables, college transcripts, etc all say Mathematics. Only everyday communication uses the shortened form.

I may be biased since I'm American, but I always felt "maths" to be silly. The argument for "maths" is that mathematics is plural. But it is plural only in the sense that it is a "mass noun" like software or equipment. You wouldn't shorten software to "softe" or equipment to "equipe". Also, "maths" sounds awkward and silly, and when spoken is pretty much indistinguishable from "math".

> Also, maths sound awkward and silly

Which, when in the UK, becomes: "Also, 'math' sounds awkward and silly"

> Which, when in the UK, becomes: "Also, 'math' sounds awkward and silly"

But "math" came from England via the Oxford dictionary.


Try this. Do you say "maths are fun" or "maths is fun"? What about physics? Do you say "physics is difficult" or "physics are difficult"?

Is it singular or plural?

It is uncountable. Nobody serious justifies 'maths' based on it being plural, it is just a different contraction of the word 'mathematics' than what you are used to.

> Nobody serious justifies 'maths' based on it being plural

That's the only justification. What other justification is there?

> it is just a different contraction of the word 'mathematics' than what you are used to.

Yes. Because the British mistakenly assumed mathematics was a "regular plural". For what reason would you keep the s at the end?

Do you shorten economics to econS or to econ? Do you take ECONS 101?

The explanation was given here.


Why are you so resistant to FACTS and HISTORY? I don't get people who argue even after given evidence.

>That's the only justification. What other justification is there?

That those who hear it can understand what the speaker is talking about -- I would say that is the only justification necessary (indeed, for any word), and is more than justified for the term "maths".

That is a justification to continue using it. Sure. But that isn't the justification for shortening mathematics to maths.

All I'm saying is that it was shortened to maths due to a misunderstanding of what mathematics is. And when British people say the "correct" way to shorten mathematics is "maths", I'm saying their JUSTIFICATION is wrong, because mathematics is not a regular plural.

Okay? I'm explaining it as simply as possible. Use maths if you want. If that is what is used in Britain, fine. I'm just saying the JUSTIFICATION for why British people say it is "correct" is wrong. Okay?

You seem amusingly obsessed over a trivial difference in the way two different varieties of English chose to shorten the word 'mathematics'. As far as I can tell, there are no official rules for shortening words in English, so I don't think that 'maths' is in any more need of a justification than 'math'. Or 'stats' vs 'stat' for statistics, come to that.

> You seem amusingly obsessed over a trivial difference in the way two different varieties of English chose to shorten the word 'mathematics'.

I'm not. I don't care. I know the world will eventually choose "math" because that's the way America says it.

The only people who are obsessed are people justifying "maths".

I don't care if you say "maths". Just don't justify it as being "logical".

> As far as I can tell, there are no official rules for shortening words in English, so I don't think that 'maths' is in any more need of a justification than 'math'.

There is. You add an "s" for REGULAR PLURALS. Like STATISTIC is a noun. Statistics is a regular plural. So statistics becomes STATS rather than stat. It's a simple rule. It's why economics is shortened to ECON rather than econs... Because economic is not a noun. Economics is not a regular plural.

> Or 'stats' vs 'stat' for statistics, come to that.

Statistic is a noun. Statistics is a regular plural. Regular plurals you shorten with s, so statistics shortening to stats makes sense. Mathematic isn't a noun. Mathematics isn't a regular plural. I already explained this to you.

The fact that you think statistics supports your argument just goes to show that you really aren't getting it. Why you are so invested in pushing a lie even after people have corrected you, I'll never know.

> The argument for "maths" is that mathematics is plural.

No, mathematics is uncountable, not plural. I've always assumed it was because “math” is a distinct noun already, for a mowing of grass (were mathematics, and hence maths, plural rather than uncountable, there'd actually be more potential for confusion between “maths”, the short form of mathematics, and “maths”, the plural of “math”; that confusion is avoided because uncountable nouns are grammatically treated as singular for most purposes, like subject/verb agreement. Conversely, there is opportunity for such confusion with “math” as the short form. Admittedly, “math” for a mowing is rare enough in current usage outside of modified forms like “aftermath” that that's not a big deal now.)

> You wouldn't shorten software to "softe" or equipment to "equipe"

Well, sure, because the long form of the latter doesn't end with an “e”, and “equipt” is already the related adjective form. Though shortening equipment as “equip’t” is something I've encountered (the apostrophe makes it unambiguous that it's a shortening and minimizes confusion with the adjective.)

Don't think there's much of an 'argument for' or 'argument against'. They are just different usages.

It's just like arguing that the German Mathematik is silly, or that the English mathematics is.

> Don't think there's much of an 'argument for' or 'argument against'.

Sure there is. The British argument for maths is that mathematics is a "regular plural", which it is not. It is a "mass noun". The British say maths IS fun, not maths ARE fun.

Economics is another such word. Do British people go around saying econs 101?

So the British argument for maths doesn't make any sense. So there are plenty of arguments AGAINST maths and no argument FOR maths; plenty of arguments FOR math and no argument against math.

The arguments were made up after the fact to 'explain' usage. Just like the 'rules' you hear to explain when to use 'I' vs 'me' or 'who' vs 'whom'.

Check out eg http://fine.me.uk/emonds

> The arguments were made up after the fact to 'explain' usage.

No. The reason why it was shortened to maths was because of the reasons stated.

The original word has a plurality to it; I think it is we who did violence to "maths".

No. Go watch the video Houshalter linked.


The sensible and correct shortening is math; that's how it was originally shortened. The Brits went off-kilter in the 1900s and added the s.

Everyone, including Brits, says mathematics IS difficult. Nobody says mathematics ARE difficult.

It's common in New Zealand as well.
