p=r ⇒ read(write(a, p, v), r) = v
¬(p=r) ⇒ read(write(a, p, v), r) = read(a, r)
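In executable terms, the two axioms say a write shadows the written subscript and leaves every other subscript alone. A minimal Python sketch (my own illustrative names, not part of the original development):

```python
# Functional arrays as closures: write returns a new array, never
# mutating the old one. Names (read/write/empty) are illustrative.

def write(a, p, v):
    """New array equal to a except that subscript p now maps to v."""
    return lambda r: v if r == p else a(r)

def read(a, r):
    """Fetch the value stored at subscript r."""
    return a(r)

empty = lambda r: None   # every subscript starts out undefined

a = write(write(empty, 3, "x"), 5, "y")
assert read(write(a, 7, "z"), 7) == "z"    # p = r case
assert read(write(a, 7, "z"), 5) == "y"    # p ≠ r case
```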
So I set this up in Boyer-Moore notation.
(PROVE-LEMMA SELECT-OF-STORE-1 (REWRITE)
(IMPLIES (AND (arrayp A) (NUMBERP I))
(EQUAL (selecta (storea A I V) I) V)))
(PROVE-LEMMA SELECT-OF-STORE-2 (REWRITE)
(IMPLIES (AND (arrayp A) (NUMBERP I) (NUMBERP J) (NOT (EQUAL I J)))
(EQUAL (selecta (storea A I V) J) (selecta A J))))
(ADD-SHELL UNDEFINED-OBJECT UNDEFINED UNDEFINEDP ())
(ADD-SHELL array-shell empty-array array-recognizer
    ((array-elt-value (NONE-OF) UNDEFINED)
     (array-elt-subscript (ONE-OF NUMBERP) ZERO)
     (array-prev (ONE-OF array-recognizer) empty-array)))
For array equality to work, the list of tuples had to be ordered by subscript, so that two arrays with the same values would always have the same representation. So selecta and storea are complicated. selecta is a recursive lookup in a list:
(DEFN selecta (A I)
    (IF (EQUAL (array-elt-subscript A) I)
        (array-elt-value A)
        (IF (EQUAL (array-prev A) (empty-array))
            (UNDEFINED)
            (selecta (array-prev A) I))))
(DEFN storea (A I V)
    (IF (NOT (arrayp A))
        A
        (IF (NOT (NUMBERP I))
            A
            (IF (EQUAL A (empty-array))
                (IF (EQUAL V (UNDEFINED))
                    A
                    (array-shell V I (empty-array)))
                (IF (EQUAL (array-elt-subscript A) I)
                    (IF (EQUAL V (UNDEFINED))
                        (array-prev A)
                        (array-shell V I (array-prev A)))
                    (IF (LESSP (array-elt-subscript A) I)
                        (IF (EQUAL V (UNDEFINED))
                            A
                            (array-shell V I A))
                        (array-shell (array-elt-value A)
                                     (array-elt-subscript A)
                                     (storea (array-prev A) I V))))))))
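The same ordered-list scheme can be paraphrased in Python (a sketch with illustrative names, not the original Lisp): an array is either empty or a (value, subscript, prev) tuple kept in decreasing subscript order, and storing UNDEFINED deletes an entry, so equal arrays always have identical representations.

```python
# Sketch of the ordered-tuple-list array. An array is either EMPTY or
# a tuple (value, subscript, prev), with subscripts strictly decreasing
# toward prev, so the representation is canonical.

EMPTY = None
UNDEFINED = object()   # the "deleted / never stored" marker

def selecta(a, i):
    while a is not EMPTY:
        v, j, prev = a
        if j == i:
            return v
        a = prev
    return UNDEFINED

def storea(a, i, v):
    if a is EMPTY:
        return EMPTY if v is UNDEFINED else (v, i, EMPTY)
    val, j, prev = a
    if j == i:                       # overwrite or delete in place
        return prev if v is UNDEFINED else (v, i, prev)
    if j < i:                        # new largest subscript goes on top
        return a if v is UNDEFINED else (v, i, a)
    return (val, j, storea(prev, i, v))   # rebuild on the way down

a = storea(storea(EMPTY, 2, "b"), 5, "a")
assert selecta(storea(a, 2, "c"), 2) == "c"
assert selecta(storea(a, 2, "c"), 5) == "a"
# canonical form: insertion order doesn't matter
assert storea(storea(EMPTY, 1, "x"), 9, "y") == \
       storea(storea(EMPTY, 9, "y"), 1, "x")
```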
(PROVE-LEMMA storea-is-sound (REWRITE)
(implies (and (arrayp a)
(not (equal a (empty-array))))
(lessp (count (array-prev a)) (count a))))
So, given all that, the Boyer-Moore theorem prover could be turned loose to grind out proofs of the rules for arrays.
It took about 45 minutes of grinding in 1982 on a 1 MIPS VAX 11/780.
You can rerun it now. Here's the prover, which requires clisp. Here's the entire set of definitions and theorems we used for program verification. Read the instructions, get the system up to where it's done "boot-strap", and then load the file of definitions. Long, detailed, human-readable explanations of the proof steps are displayed. The entire proof run for all those basic theorems takes about a second, and that's probably mostly scrolling.
So, having rigorously proved the "axioms" for arrays from first principles, I submitted a paper to JACM. The reviewer comments I got back were mainly about the ugliness of the approach. All that case analysis! One comment was that the reviewer was upset about having a proof that ugly at the foundations of computer science. The paper was rejected.
The reason it's ugly is that it doesn't get to use set theory. Informally, a set is a collection whose order does not matter. "Order does not matter" is not a meaningful concept in strict Boyer-Moore theory. If order matters, there are many cases to prove, stemming from all those IF statements in the definitions.
A generation later, fewer mathematicians would be bothered by this form of "ugliness". But back then, machine proofs were
so rare that this was considered awful. Also, now that computers can do this in a second, complaining that it's too
complicated seems whiny.
(PROVE-LEMMA COMMUTATIVITY-OF-TIMES (REWRITE) (EQUAL (TIMES X Y) (TIMES Y X)))
Give the conjecture the name *1.
We will appeal to induction. Two inductions are suggested by terms in
the conjecture, both of which are flawed. We limit our consideration to the
two suggested by the largest number of nonprimitive recursive functions in the
conjecture. Since both of these are equally likely, we will choose
arbitrarily. We will induct according to the following scheme:
(AND (IMPLIES (ZEROP X) (p X Y))
(IMPLIES (AND (NOT (ZEROP X)) (p (SUB1 X) Y))
(p X Y))).
Linear arithmetic, the lemma COUNT-NUMBERP, and the definition of ZEROP inform
us that the measure (COUNT X) decreases according to the well-founded relation
LESSP in each induction step of the scheme. The above induction scheme
produces the following two new conjectures:
Case 2. (IMPLIES (ZEROP X)
(EQUAL (TIMES X Y) (TIMES Y X))).
This simplifies, applying TIMES-ZERO and TIMES-NON-NUMBERP, and opening up
the functions ZEROP, EQUAL, and TIMES, to:

      T.
Case 1. (IMPLIES (AND (NOT (ZEROP X))
(EQUAL (TIMES (SUB1 X) Y)
(TIMES Y (SUB1 X))))
(EQUAL (TIMES X Y) (TIMES Y X))),
which simplifies, unfolding the functions ZEROP and TIMES, to:
(IMPLIES (AND (NOT (EQUAL X 0))
(EQUAL (TIMES (SUB1 X) Y)
(TIMES Y (SUB1 X))))
(EQUAL (PLUS Y (TIMES Y (SUB1 X)))
(TIMES Y X))).
Appealing to the lemma SUB1-ELIM, we now replace X by (ADD1 Z) to eliminate
(SUB1 X). We employ the type restriction lemma noted when SUB1 was
introduced to constrain the new variable. We must thus prove:
(IMPLIES (AND (NUMBERP Z)
(NOT (EQUAL (ADD1 Z) 0))
(EQUAL (TIMES Z Y) (TIMES Y Z)))
(EQUAL (PLUS Y (TIMES Y Z))
(TIMES Y (ADD1 Z)))).
However this further simplifies, applying the lemma TIMES-ADD1, to:

      T.
That finishes the proof of *1. Q.E.D.
Just an aside, but I was under the impression that Set Theory (these days) is regarded as not-necessarily-all-that-well-founded or, perhaps, not ideal for "foundations"? Is that hopelessly misguided?
If it isn't already apparent, I'm pretty much a noob in formal verification, but I've read enough to hopefully have a very shallow understanding...?!?
EDIT: I just thought I'd add what "makes math beautiful" for me: The mere difficulty of defining equality of 'things' and deciding what it should mean.
I believe what you have in mind regarding set theory as "not ideal for 'foundations'" (why is the word foundations in quotes, btw?) is that there is a recent push to reexamine the prospects of more intensional foundations based on type theory. The most promising and currently fashionable version of this push is Homotopy Type Theory (HoTT), which has been given press and prestige by the IAS at Princeton and interest from Fields Medalist Vladimir Voevodsky. There are plausible reasons given, though often polemical, for preferring some sort of type-theoretical or category-theoretical foundations over set-theoretical ones. Perhaps most importantly, for readers of HN, the comparatively easy formalization into computer-checkable proof and computer-assisted proof (look up, for instance, Coq).
That said, there is a long way to go before HoTT (or something like it) displaces traditional foundations and set theory. For one thing, there is no obvious axiomatization or way to confirm that HoTT is a suitable foundation for extant non-foundational mathematics. In particular, standard type theory (MLTT) plus Voevodsky's univalence axiom is proof-theoretically equivalent to a subsystem of analysis. I would guess that, for instance, you can't prove that all Borel sets are measurable using MLTT plus univalence (but this is totally speculative and I would appreciate being corrected).
Last -- I suspect the only mathematicians who find set theory proofs beautiful are set theorists. Most other mathematicians don't really give af about set theory.
I've seen HoTT receiving quite a bit of attention, but after reading a few papers and texts it seems that it's just out of reach without additional introductory material. I got about 50% through TAPL, if that's any indication of (my lack of) sophistication in type theory. I really should pick that up...
Any recommendations for introductory texts?
As a practical matter, what do the mathematicians who don't give AF about set theory (as you say) do? Just assume other axioms that "do enough" and then go from there?
You could check out Lambek and Scott (1986), which is excellent. Also Girard (citation for this one was in TAPL). I also think Martin-Löf himself is pretty good for the feel of things, but he has always seemed kind of cryptic and mystical to me. There are also some good videos on YouTube that Steve Awodey (CMU) recorded for more intro/motivation. I'd also add Voevodsky's three Bernays lectures on YouTube for the general idea of using typed lambda calculus vs set theory as foundations.
Sociologically, my impression is that most mathematicians trust that the usual axioms of set theory are consistent and safely ignore them. So, yes -- they choose axioms in their own field that are known to be derivable in set theory. Even in intro Analysis textbooks, where the role of set theoretic axioms themselves can be pretty easily extracted from the construction of the reals, I don't think you'd see set theoretic axioms explicitly appealed to (might be wrong about this). In constructive mathematics, it ends up mattering where Choice is used I guess. And it matters how these uses can be "constructivized." But constructive analysis is primarily motivated by foundational considerations. So it is not really an example of ordinary mathematics. Some axioms, like Regularity, are almost never even implicitly used outside of set theory itself except in very specific cases.
Most every case where there is a question of whether a given bit of mathematics is derivable from the usual axioms of set theory arise only within set theory itself (CH, measurability of projective sets of reals, etc), and so such questions are easy to ignore for the vast majority of mathematicians. They simply don't care that much about logic and set theory aside from "just working" under the hood, as it were.
If one is a mathematician, I think it's entirely reasonable to just "trust" the basic axioms of your "sub-field", but OTOH I can really understand the worry that the rug is going to get pulled out from under you... which is a pretty remote worry given the complexity of math nowadays.
Anyway, thank you for taking the time for that response.
∀ f, x = y ⇒ f(x) = f(y)
In Boyer-Moore theory, there's no "hiding", in the sense of an object with private members. One could define a "weak equality" for sets which were lists of values, true if the same values were in both lists. Weak equality would not require that the lists have the same order. But you couldn't hide the representation. Functions could peek into the list representation, and the property above would not hold. So "weak equality" would not be very useful in proofs.
I once proposed a sort of "object oriented" extension to Boyer-Moore theory. The idea is that you create something like a set represented by lists, prove some theorems about such objects, and then "hide" the private functions that can look into the interior of the representation. Then you can define a weak equality function p. If you can prove ∀ f, p(x,y) ⇒ f(x) = f(y) for every non-hidden function, you have proven that p(x,y) ⇒ x = y. Then, p can be treated as equality. Proving this requires proving it for every function already defined. Any newly defined functions are composed from existing functions, so the equality still holds. There's trouble with newly defined types, though, because they come with some built-in functions such as the type predicate.
If you pursue this, you probably get to some theory of types, but I didn't want to go there, so I gave this up.
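The failure of weak equality without hiding is easy to demonstrate concretely. In this Python sketch (illustrative names), p ignores order, member respects it, but a function that peeks at the list representation breaks the congruence ∀f, p(x,y) ⇒ f(x) = f(y):

```python
# Sets represented as lists, with a "weak equality" p that ignores
# order and multiplicity.

def p(x, y):
    """Weak equality: same members, order and duplicates ignored."""
    return set(x) == set(y)

def member(v, s):          # respects weak equality
    return v in s

def first(s):              # peeks at the representation
    return s[0]

a, b = [1, 2, 3], [3, 2, 1]
assert p(a, b)
assert member(2, a) == member(2, b)   # congruence holds for member
assert first(a) != first(b)           # ...but fails for first, so p
                                      # is not a congruence without
                                      # hiding the representation
```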
My impression was that even that might be a bit contentious. Well, maybe not contentious, but maybe... disputable? It also requires a theory of functions, or at least a syntax and derivation rules.
That was just to be absurdly nitpicky, I think I understand what you're saying here, and it's much appreciated.
>Many mathematicians have the opposite opinion; they do not or cannot distinguish the beauty or importance of a theorem from its proof. A theorem that is first published with a long and difficult proof is highly regarded. Someone who, preferably many years later, finds a short proof is "brilliant." But if the short proof had been obtained in the beginning, the theorem might have been disparaged as an "easy result." Erdős was a genius at finding brilliantly simple proofs of deep results, but, until recently, much of his work was ignored by the mathematical establishment.
And here is an interesting study on what mathematicians consider beautiful: https://www.gwern.net/docs/math/1990-wells.pdf They found that mathematicians varied a great deal in what they considered beautiful. And that there isn't any consensus on what is considered beautiful. It also changes over time:
>There was a notable number of low scores for the high rank theorems. Le Lionnais has one explanation: "Euler's formula e^(pi*i) = -1 establishes what appeared in its time to be a fantastic connection between the most important numbers in mathematics... It was generally considered 'the most beautiful formula of mathematics'... Today the intrinsic reason for this compatibility has become so obvious that the same formula now seems, if not insipid, at least entirely natural."
I think the interestingness or surprisingness of a mathematical idea is a temporary thing. Once you really deeply understand something, it's no longer surprising or interesting. It should just feel obvious and simple.
This goes along with Jürgen Schmidhuber's theory of art (http://people.idsia.ch/~juergen/creativity.html): that humans crave novelty and unexpectedness in all areas, and that is what we consider beautiful. But once you get very familiar with something, it loses its novelty.
The tricky part is thinking of rotations as a type of “number” with a multiplicative structure, in which case the [additive] angle measure becomes the logarithm of the rotation, letting us write our half-turn rotation using the fancy-looking exp(πi) instead of ½⥀ or rot(180°) or whatever.
If you asked someone whether ⤺² = ⥀ (or similar) was the most beautiful formula in mathematics they would probably laugh or look at you funny.
Of course, when we present this identity as the most "beautiful formula", it is often to people who don't have any understanding of what it means to take something to a complex power, so the correct response should be to look at you funny.
In that formulation, the structure of rotations is already clear, and the better question might be why the (principal branch of the) natural logarithm of some bivector i is (π/2)i. That starts us on thinking about what the logarithm really means, and asking about conformally mapping a punctured plane to an infinite two-ended cylinder, etc.
The reason this formula of Euler’s seems like a mystery is that we teach angle measure to students as the canonical way to represent a rotation which makes it logically primary in their minds, when (in my opinion) it should be a derived quantity. Rotation is a naturally multiplicative kind of operation, a linear transformation of a vector space.
If we need to represent a rotation of 2-dimensional vectors by a single parameter, I generally prefer the stereographic projection yi/(1 + x) (“half-angle tangent”) instead of the angle measure (logarithm). Depends a bit on the particular use case though.
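The multiplicative view of rotations is easy to check numerically. This Python sketch (using the standard cmath module) represents plane rotations as unit complex numbers, so composing rotations is multiplication and the angle measure is the imaginary part of the logarithm:

```python
import cmath

# Plane rotations as unit complex numbers: composing rotations is
# multiplication; the (additive) angle is the log of the rotation.

quarter_turn = cmath.exp(1j * cmath.pi / 2)   # the number i
half_turn = quarter_turn * quarter_turn       # a quarter turn squared

assert abs(half_turn - (-1)) < 1e-12          # Euler: exp(i*pi) = -1
assert abs(cmath.log(quarter_turn).imag - cmath.pi / 2) < 1e-12

# composing rotations = multiplying numbers = adding angles
r1, r2 = cmath.exp(0.3j), cmath.exp(0.4j)
assert abs(r1 * r2 - cmath.exp(0.7j)) < 1e-12
```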
I guess by "reflection through a point" you mean "the composition of two reflections through orthogonal axes through the point", right? Otherwise this cannot possibly be true, since a rotation is orientation-preserving but a reflection is orientation-reversing.
Reflecting across a point is also the same as reflecting twice across two perpendicular lines which intersect at that point.
> Reflecting across a point is also the same as reflecting twice across two perpendicular lines which intersect at that point.
My claim was rather that "reflecting across a point" has no meaning by itself—in the strict, mathematical sense according to which a reflection has a fixed hyperplane https://en.wikipedia.org/wiki/Reflection_(mathematics) —unless you use one of these two statements as the definition. (Probably you'd better use the second, because the first doesn't generalise to higher dimensions.)
It's not just a rotation by definition, I don't think.
A circle’s circumference is proportional to its radius, so if you want to have a conformal map between a rectangular grid in one space and a polar-coordinate grid in another space, the spacing of the polar-coordinate grid lines in the radial direction must necessarily be proportional to the radius so that they can match the radius-dependent spacing in the tangential direction.
Anyhow, there are many interesting features of the structures here. Both rotation and logarithms are very interesting and important both in mathematics and in science/engineering. I just don’t think Euler’s identity in particular is worthy of so much attention.
Maybe better would be to say that Euler’s formula shows how classical trigonometry (angle measure &c.) is what you get when you apply logarithmic thinking to Euclidean vector geometry.
But yes, the connection between geometry and arithmetic is something people have been noticing for at least 5000 years, and is certainly important and very interesting. I just think this particular expression of that connection is oversold as being uniquely insightful or beautiful.
Zeki is a bit of a buffoon with a tendency to assume that he understands other people's domains well enough to talk about them. It may not be arrogance on his part but there's a definite sense that he doesn't let complexity and nuance get in the way of a good story.
edit: down votes on hn without comments? boo
Undergrad math major here. Please don't insult us; even we know to cringe when this identity is mentioned as some amazing beauty of math.
That, IMO, is why applied math rarely is beautiful. Pure math often is because no sane person would spend time doing ugly work that doesn't even lead to useful results.
Computational proofs such as that of the four color theorem have changed that a bit. That may be the reason some mathematicians don't accept them as proofs.
Similarly, "publish or perish" may have led to more proofs in pure mathematics that are more hard work than inspiration, and of course, zillions of hours have been spent looking for counterexamples of theorems in pure mathematics.
Maybe it's just me, but displaying "a2 + b2 = h2" in an article about math is really ungraceful degradation.
In my experience, "the general layman" simply doesn't possess enough suspension of disbelief to get past such hurdles.
Everyone has solved a system of linear equations in school. The concept is pretty accessible.
Few can, for example, reliably compute the intersection point of a line l' parallel to l through a point p with a plane P, and many would panic when given a problem in N>3-dimensional space.
I know people who memorized formulas for solving linear systems as used in macro-economics (on the order of 10 dimensions, but lots of zeroes in their matrix), and, because of that, couldn't work when given a slightly different model to solve (for example, if taxation became be a constant plus a fraction of income instead of just a fraction of income, or if capital gains tax were introduced).
The first step in a linear algebra course is to codify the method that you use in middle/high school to solve systems of equations of the form a₀₀x₀ + a₀₁x₁ + … = b₀, etc.
That algorithm sets the stage for everything. The pace is natural, the math is accessible, no geometric interpretation is ever required, and it's built for problems that people of even average intelligence can understand. Frankly, it's astonishing we don't teach it in high school.
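The codified school method is Gaussian elimination. A minimal Python sketch (with partial pivoting but no other refinements; not production code):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A is a list of rows; a teaching sketch, not production code."""
    n = len(A)
    # augmented matrix, copied so the caller's data is untouched
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # pick the largest pivot in this column (avoids dividing by ~0)
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # eliminate this variable from the rows below
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# 2x + y = 5, x - y = 1  ->  x = 2, y = 1
x = solve([[2, 1], [1, -1]], [5, 1])
assert abs(x[0] - 2) < 1e-9 and abs(x[1] - 1) < 1e-9
```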
I conjecture that the inherent beauty of mathematics is overstated. The beauty of Euler’s identity is like the sublime beauty of a Himalayan peak shrouded in thick fog. A lot of the pleasure and reward stems as much from the effort to climb above the fog to view the peak in all its majesty as from any structural formal beauty. There are only so many ways formal structures can relate to each other aesthetically.
At least that's what I think today.
I think that a lot of the inherent beauty of mathematics (there is also the beauty of the climb, as you very nicely described it) comes precisely from the realisation that, sensible as this claim is, it doesn't seem to be true: there seem always to be more relationships out there to discover, and their hidden and surprising natures can make them all the more beautiful once uncovered.
There is even a mathematical case to be made that there are always "more relationships out there to discover." One of the more plausible, sober interpretations of the various limitative phenomena in the foundations of mathematics discovered throughout the 20th century is that mathematics is not reducible to finitary symbol manipulation. Perhaps this is a bit loose and controversial, but the point is just that we don't necessarily need to solely lean on the observed fact that there have always been "more relationships out there to discover."
The proof of the 4 color theorem "reduced the infinitude of possible maps to 1,936 reducible configurations (later reduced to 1,476) which had to be checked one by one by computer and took over a thousand hours."
Of course beauty is subjective and not well defined. Some might consider that proof beautiful.
Surely no-one claims that it is? Speaking of 'beautiful code' does not invite the rebuttal that there is also ugly code, just encourages the appreciation that ugly code isn't all there is, and that formal systems can be æsthetically pleasing.
E=mc^2 is considered the greatest equation in history for this reason. It makes it simple to explain to someone not well-versed in math what is actually behind the symbology.
Only people with a "pop-culture" idea of math&physics think that.
- knowing out of ignorance (probabilities)
- compression of knowledge
Edit: I was imprecise with my phrasing above, I really meant to say 'takes the form of (a regular plural)' instead of 'is' - I don't know if it is or is not technically a plural - the point was only that the shortened word 'Maths' follows the same form as the long version, which seems logically consistent.
The simple (linguistically obscure, but let's pretend) way to understand it is that there's an implied "the study of" before the word. So although you are studying multiple elements of a topic, the study itself is singular.
EDIT: also probably in a lot of European languages as well. I wonder to which extent this may be just a result of cultural exchange.
Which, when in the UK, becomes: "Also, 'math' sounds awkward and silly"
But math came from England via the Oxford dictionary.
Try this. Do you say "maths are fun" or "maths is fun"? What about physics. Do you say physics is difficult? Or physics are difficult.
Is it singular or plural?
That's the only justification. What other justification is there?
> it is just a different contraction of the word 'mathematics' than what you are used to.
Yes. Because the british mistakenly assumed mathematics was a "regular plural". For what reason would you keep the s at the end?
Do you shorten economics to econS or to econ? Do you take ECONS 101?
The explanation was given here.
Why are you so resistant to FACTS and HISTORY? I don't get people who argue even after given evidence.
That those who hear it can understand what the speaker is talking about -- I would say that is the only justification necessary (indeed, for any word), and is more than justified for the term "maths".
All I'm saying is that it was shortened to maths due to a misunderstanding of what mathematics is. And when british people say the "correct" way to shorten mathematics is "maths", I'm saying their JUSTIFICATION is wrong. Because mathematics is not a regular plural.
Okay? I'm explaining it as simply as possible. Use maths if you want. If that is what is used in britain, fine. I'm just saying the JUSTIFICATION for why british people say it is "correct" is wrong. Okay?
I'm not. I don't care. I know the world will eventually choose "math" because that's the way america says it.
The only people who are obsessed are people justifying "maths".
I don't care if you say "maths". Just don't justify it as being "logical".
> As far as I can tell, there are no official rules for shortening words in English, so I don't think that 'maths' is in any more need of a justification than 'math'.
There is. You add an "s" for REGULAR PLURALS. Like STATISTIC is a noun. Statistics is a regular plural. So statistics becomes STATS rather than stat. It's a simple rule. It's why economics is shortened to ECON rather than econs... Because economic is not a noun. Economics is not a regular plural.
> Or 'stats' vs 'stat' for statistics, come to that.
Statistic is a noun. Statistics is a regular plural. Regular plurals you shorten with s. So statistics shortening to stats makes sense. Mathematic isn't a noun. Mathematics isn't a regular plural. I already explained this to you.
The fact that you think statistics supports your argument just goes to show that you really aren't getting it. Why you are so invested in pushing a lie even after people have corrected you, I'll never know.
No, mathematics is uncountable, not plural. I've always assumed it was because “math” is a distinct noun already, for a mowing of grass (were mathematics, and hence maths, plural rather than uncountable, there'd actually be more potential for confusion between “maths”, the short form of mathematics, and “maths”, the plural of “math”, which is avoided because uncountable nouns are grammatically treated as singular for most purposes, like subject/verb agreement; conversely, there is opportunity for such confusion with “math” as the short form. Admittedly, “math” for a mowing is rare enough in current usage outside of modified forms like “aftermath” that that's not big deal now.)
> You wouldn't shorten software to "softe" or equipment to "equipe"
Well, sure, because the long form of latter doesn't end with an “e” and equipt is already the related adjective form. Though shortening equipment as “equip’t” is something I've encountered (the apostrophe makes unambiguous that it's a shortening and minimizes confusion with the adjective.)
It's just like arguing that the German Mathematik is silly, or that the English mathematics is.
Sure there is. The british argument for maths is that mathematics is "regular plural". Which it is not. It is a "mass noun". The british say maths IS fun, not maths ARE fun.
Economics is another such word. Do british people go around saying econs 101?
So the british argument for maths doesn't make any sense. So there are plenty of arguments AGAINST maths. No arguments FOR maths. Plenty of arguments for math. No arguments against math.
Check out eg http://fine.me.uk/emonds
No. The reason why it was shortened to maths was because of the reasons stated.
The sensible and correct shortening is math. That's how it was originally shortened. The brits went off-kilter in the 1900s and added the s to math.
Everyone, including brits, say mathematics IS difficult. Nobody says mathematics ARE difficult.