
Terry Tao on some desirable properties of mathematical notation - jjhawk
https://mathoverflow.net/questions/366070/what-are-the-benefits-of-writing-vector-inner-products-as-langle-u-v-rangle/366118#366118
======
Koshkin
Mathematical notation is great at facilitating formal manipulations. This is
its critical feature, and without it we would be stuck at the level of
ancient mathematics; it is the reason notation was invented a few hundred
years ago in the first place. That said, I find that notation is often abused
in texts as a mere substitute for normal human language. While this compresses
the text, it does nothing to help the reader better understand what is being
said; instead it looks like a crazy mess of characters and other marks in a
multitude of fonts, styles, and sizes whose only purpose seems to be to cause
eye strain.

~~~
cryptica
Math symbols are a minor issue for me. What confuses me the most are
descriptions of mathematical concepts.

For example, Wikipedia describes a 'field' like this:

"In mathematics, a field is a set on which addition, subtraction,
multiplication, and division are defined and behave as the corresponding
operations on rational and real numbers do."

It doesn't make sense to me. What does it mean if an operation 'is defined' on
a set? Does it mean that any 2 elements combined together using that operation
always need to output an element which is also in the same set? But if that
was the case then "behave as the corresponding operations on rational and real
numbers do" would mean that the fields would always need to be of infinite
size (have an infinite number of elements) wouldn't it? Because if the field
had a limited number of elements and you added the last two (highest) elements
together, the property which requires that the result also be present in the
same set could not be met because the result would be greater than the highest
element in that set...

The problem is that if you start with a highly abstracted math concept and you
dig through all the links and definitions of sub-concepts, they all have huge
gaps like this... So when you try to combine all the definitions together to
make sense of that original highly abstracted concept, you end up with tens or
hundreds of possible interpretations. But math should have only one
interpretation for each concept, so this is a very bad situation to be in.

I think math definitions should be more elaborate and repetitive if necessary.
They should not try to sound terse and clever. They should not assume that the
reader can fill in the gaps. The most rational readers will not be able to
fill in the gaps because rational people know the dangers of making
assumptions.

~~~
impendia
I'm a math researcher, and I'll explain why I _like_ these sorts of
definitions.

In the first place, what you quoted is not a formal, precise definition; it is
not a substitute for such a definition, nor is it intended to be one. The
Wikipedia page you mention has a precise definition further down the page.

So what, then, is the purpose of the description you quoted? Why include it at
all?

Because it's how mathematicians _conceptualize_ what a field is. It is the
peg we hang our hat on; it is what we remember. A mathematician who has seen
fields would be able to fill in the details; and if not, they would know to
look up the precise definition in a textbook.

In short, these definitions are how we keep track of the forest at the same
time as the trees.

I should note that taste differs among mathematicians, and you can find
different styles of exposition in math books. Some are very formal and
precise; whereas others are more informal and have lots of handwavy statements
along the lines of the one you quoted.

~~~
jhanschoo
I'd also like to point out that one frequently encounters equivalent but
different formal definitions for the same mathematical structures, which is
why the informal descriptions are important as well.

------
dopu
Is it just me, or does probability theory in general have fairly terrible
notation? There is ambiguity between random variables and their distributions
because they are distinguished only by upper versus lower case; likelihood
functions are written alternately as L() or p(); and p() (with different
arguments) is used to refer to different probability distributions. Perhaps
I'm just having such a difficult time grokking probability theory because it's
just difficult stuff, but I often find myself immensely frustrated with the
notation.

~~~
datastoat
Probability notation as used in ML and engineering has this problem of
overloading p(). Probability notation as used by probabilists in maths
departments is completely different: it’s more explicit, and sometimes more
clunky.

There’s a hybrid notation that I prefer, for example “Pr_X(x)” for the density
function of random variable X at point x; you drop X if the random variable is
clear from the context, and you drop x if you’re referring to the entire
distribution. Or Pr_X(x|Y=y) for a conditional density. But this notation
still has problems when you’re working with hairier conditional distributions,
or with distributions that are neither discrete nor continuous.
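
Roughly, the dropping rules in that hybrid notation look like this (a sketch
in LaTeX-style notation):

        \Pr_X(x)             % density of the random variable X at the point x
        \Pr(x)               % X dropped when it is clear from context
        \Pr_X                % x dropped: the distribution of X itself
        \Pr_X(x \mid Y = y)  % a conditional density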

(Source: I used to be a mathematical probabilist; now I work in ML.)

~~~
quietbritishjim
I used to hate the way Bayesian ML people used p(...), until I realised that
strictly speaking for a conditional variable we ought to be writing:
p_X|Y=y(x). The variable is X|Y=y so all that ought to be in the subscript.

It's definitely worthwhile for everyone to use the full notation at least once
so they can get a feel for what's really going on. I've spoken to Bayesian ML
professionals who are especially uncomfortable with that because it conditions
on a zero-probability event (if Y is continuous)... of course p(x|y) does too,
they just weren't thinking about it before! And (as I think you're getting at)
the abbreviated p(x|y) simply throws away information, e.g. there's no way to
represent the identity p_Y(x)=p_X(x) without adding back some sort of
subscript.

But on the other hand p(x|y) is obviously much visually cleaner. If you're
writing out a more complex identity and the abbreviated notation isn't
ambiguous then it generally communicates the idea much more clearly because
there's so much less visual noise.
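
For concreteness, the contrast in LaTeX-style notation (the last line is the
identity mentioned above, which the abbreviated form cannot express):

        p_{X \mid Y=y}(x)   % full: density of X given Y = y, evaluated at x
        p(x \mid y)         % abbreviated: the same object, inferred from context
        p_X(x) = p_Y(x)     % "X and Y have the same density" needs the subscripts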

------
JadeNB
I found this post a shame. (The post itself, not its being posted here; I
love seeing math posts on HN, and automatically upvote them. Bringing hackers
and mathematicians together is highly worthwhile for both.)

Usually Tao's posts are so insightful, and crystallise some idea so perfectly
that it feels like I was just on the cusp of discovering it myself—a rare
talent, and hard to cultivate since it goes against the ego. In this case,
though: I'm a professional mathematician, and as prone as anyone in my
discipline to use mathematical language to describe not strictly mathematical
things, but the pseudo-mathematisation here ("Notation^{-1}(C)", for example)
seems more like wit than clarity. Not that there's anything wrong with wit,
but in this case it seems to me that it's at the expense of, rather than a
pleasant addition to, the central point.

I'd like to hear especially from anyone who _isn't_ a professional
mathematician: did you feel that this post improved your understanding of the
purpose and function of good notation?

(EDIT: I was scared about making this post, since there's rightfully a lot of
respect and appreciation for Tao—and I hope it's clear that I concur on both
counts—and I wasn't sure how my reservations about his post would go over; but I'm
super glad I asked. Thanks so much to everyone downthread; these are wonderful
responses and I feel that it benefited me a lot to read them.)

~~~
carlob
I'm not sure I agree that notation is a pseudo-mathematisation.

For example in Mathematica there is a (mostly deprecated) package called
Notation`[0] that does just this kind of stuff. I have to admit that it's not
really used in production code anymore as MakeBoxes and MakeExpression are
more fine-grained and robust.

That said, the transformation between 2-D boxes and M-expressions is not as
foundational as what Tao is talking about; however, the whole field of
designing programming languages is, deep down, an exercise in defining
notation, and the transformations mentioned above just make this a bit more
explicit.

[0]
[http://reference.wolfram.com/language/Notation/guide/Notatio...](http://reference.wolfram.com/language/Notation/guide/NotationPackage.html)

~~~
JadeNB
> I'm not sure I agree about the fact that notation is a pseudo-
> mathematisation.

I definitely don't think that notation is pseudo-mathematisation; good
notation is inordinately powerful in enabling good mathematics (and bad
notation can make even simple mathematics hard). What I meant to describe as
pseudo-mathematisation was the _discussion_ of notation in what seemed to me
an unnecessarily formal, mathematical way.

------
moonchild
Another interesting notation is Iverson notation. See _Notation as a Tool of
Thought_ [1]. Here's the inner product (note that this is actually the
_general_ inner product):

    
    
      c ≡ u +.× v
    

1. [https://www.jsoftware.com/papers/tot.htm](https://www.jsoftware.com/papers/tot.htm)
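
What makes +.× "general" is that the dot is itself an operator: f.g combines
any reduction f with any pairwise operation g. A rough Python sketch of the
idea, restricted to vectors:

        from functools import reduce
        import operator

        # APL's u f.g v for vectors: apply g pairwise, then reduce with f.
        def inner(f, g, u, v):
            return reduce(f, map(g, u, v))

        inner(operator.add, operator.mul, [1, 2, 3], [4, 5, 6])  # +.x -> 32, the dot product
        inner(min, operator.add, [1, 2, 3], [4, 5, 6])           # min-plus -> 5, as in shortest paths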

~~~
dhash
Iverson notation evolved into APL, which was itself born out of the horror
Iverson felt when presented with "standard mathematical notation". Its use of
strange, otherwise cryptic glyphs was influenced by two main factors: it was
originally designed on a blackboard, where strange glyphs were normal, and it
sought to become unmoored from "standard mathematical notation" in order to
strengthen principle 1 of the OP - unambiguity. Casting off historical baggage
and canonicalizing mathematical notation under the principles of the OP was
APL's prime goal, and it does a damn good job of it.

I wish they taught it to math majors.

APL's a wonderful rabbit hole to fall down, and J was my eso-lang of choice
last year.

------
emmanueloga_
The discussion of mathematical notation reminds me of the talk by Guy Steele,
"It's Time for a New Old Language", discussed previously on HN [1]. That talk
focused on the math notation used in computer science papers, but I feel a
similar analysis could be extended to other areas of math.

1:
[https://news.ycombinator.com/item?id=15473199](https://news.ycombinator.com/item?id=15473199)

------
Koshkin
Difficulties, if any, perceived or real, arising in connection with notation
are usually _incomparably_ smaller than those presented by the subject itself.
(Personally, I only wish mathematical notation were better integrated with
software in general and programming languages in particular.)

~~~
rytill
Not true at all. There are several times I've attempted to read through a
textbook only to be stopped by notation, because something was referenced
before being introduced, or because notation was overloaded with multiple
meanings.

I have consistently run into "perceived or real" confusing mathematical
notation as an impediment to learning, in a way that programming languages
have never, ever caused me.

Does no one else feel this way? I can't be alone, and like the others
responding to you have said, your claim does not seem substantiated.

~~~
omaranto
You are making the huge mistake of assuming that if you hadn't given up when
facing difficulty with the notation, the rest would have been easy and you
would have had no trouble understanding the concepts!

Don't worry: you are not alone in making that mistake; plenty of people do.
People who don't give up when confused about notation usually quickly learn
that it is the concepts, and the relations between them, that require careful
thought (and also, usually, that the notation has advantages they didn't
realize when at first blush they found it confusing).

~~~
MaxBarraclough
You are in no position to assume rytill 'gives up when confused about
notation'.

~~~
omaranto
I thought that was what he or she meant by "there are several times I've
attempted to read through a textbook only to be stopped by notation". I
wouldn't have said what I said otherwise.

------
riazrizvi
_Unambiguity_ as a criterion is slippery. Mathematical notation must be
concise, because a key purpose is to provide understanding, which it achieves
by focused abstraction. So when you search for notation to model some real-
world system, you leave things out; as such, it leaves room for interpretation
when remapping back to the real world, i.e. there is ambiguity. I think this
#1 item should really be termed _Consistency_, because above all, notation
must not contradict itself.

~~~
JadeNB
> I think this #1 item should really be termed Consistency, because above all,
> notation must not contradict itself.

This is a good goal, but I'm not sure it's the primary goal; the phrase 'abuse
of notation' exists precisely to describe its breakage, with even the best
mathematicians and expositors engaging in it, and I think insisting on no
abuse of notation leads us rapidly to impenetrable Principia-style logic, or
to modern formal proofs—both of which have their place (at least the latter
…), but neither of which should govern _all_ mathematical discourse.

As with all writing, I think that part of being a good mathematical writer is
knowing the rules so that you can figure out when to break them intentionally,
rather than stumbling into it accidentally.

~~~
riazrizvi
Great point. It is bad to nitpick consistency when you are in the initial
stages of developing a model outline, and looking to capture the most
important points. What's the right term for this notational quality?
_Precedence_?

------
Darkstryder
Steal this idea: a Shazam for mathematical notation. In an app, you would
draw a mathematical symbol you don't recognize (or take a picture of it) and
get a link to the appropriate Wikipedia page.

My biggest pet peeve with mathematical symbols is the difficulty of looking
them up when you don’t know them already. If I’m reading a text on a topic I'm
unfamiliar with, I can at least google the keywords I don't know. This is
difficult with symbols.

~~~
cdu1
Great idea. I'm a fan of this app/site for finding the LaTeX command for a
particular symbol:

[https://detexify.kirelabs.org/classify.html](https://detexify.kirelabs.org/classify.html).

Then you just need to look up the LaTeX on Wikipedia:

[https://en.wikipedia.org/wiki/List_of_mathematical_symbols](https://en.wikipedia.org/wiki/List_of_mathematical_symbols)

------
btrettel
Terry Tao mentions that notation can help with error detection. Anyone here
aware of some good examples?

One that I like is that in Einstein notation you can't have 3 of the same
index, e.g., u_i u_i u_i is invalid.
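
Sketched in LaTeX, the convention's built-in check; the last line is a
deliberately malformed example, just to show how an error stands out:

        u_i v_i \equiv \sum_i u_i v_i   % a summed index appears exactly twice
        w_i = A_{ij} v_j                % the free index i matches on both sides
        w_i = A_{ij} v_i                % malformed: i appears three times and j dangles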

~~~
lmkg
From the example notations that he gives, all three Einstein notations as well
as the Penrose notation make the indices explicit in a way where a mismatch or
misalignment will stand out.

Another good example is the Leibniz notation for derivatives. Proper
application of the chain rule visually resembles how fractions cancel: dy/dz
dz/dx = dy/dx. It's very easy for the eye to follow and make sure that the
cancellation is valid. Newton's notation doesn't make that as easy.
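
In LaTeX, the contrast looks roughly like this (with prime notation standing
in for the non-Leibniz style):

        \frac{dy}{dz} \cdot \frac{dz}{dx} = \frac{dy}{dx}   % Leibniz: the dz's appear to cancel
        (f \circ g)'(x) = f'(g(x)) \, g'(x)                 % prime notation: no visual cue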

------
wavegeek
I like his point about lack of ambiguity. Nothing makes me want to punch an
author in the head (without, to be clear, any possibility I would actually do
it) like lazily creating an ambiguous notation, which is supposed to be "clear
from context", but rarely is. As for example the Einstein summation convention
which is to be ignored "when clear from context".

I would add

1. Clearly telegraphing notations: not hiding them in the middle of long
paragraphs or even, and yes I have seen this a few times, defining essential
notation in an optional exercise.

2. Having a glossary of notations, so people don't have to remember every
single notation or read every word of the book sequentially.

3. Not creating low-value notations that may be used only once and then
possibly forgotten. I have read books with more than one new notation or
definition per page, mostly forgotten thereafter but with some random subset
needed later, and you have no way of knowing which.

------
vii
Enumerating what we want from notation helps us understand how far we are
from the ideal. The whimsical introduction of Notation to talk about notation
makes it practical. Given a domain in mathematics, adding notation (e.g. for
modular arithmetic) can make complex notions pretty to express and quick to
prove things about. I used to really enjoy this and tried to redefine notation
for each exposition. It's shorter and prettier, but it just pushes complexity
into the notation :) and teaching people new notations is expensive; unless a
notation is used repeatedly, it is actually more expensive than laying out the
details in a less concise notation.

Programming languages are notations within this framework, and domain-specific
languages, while much more efficient, are unpopular because the costs of
changing notation, in terms of training people, are too high.

The cost of communicating the notation is captured in a few of the desiderata
(e.g. 1 and 7), but practically it is the most important. If we want to be
easily understood we should speak a common language!

~~~
dragonwriter
> domain specific languages, while much more efficient are unpopular

JSX seems pretty popular, and when XML was popular, similar XML embeddings
were as well. Templating languages are popular. Heck, the relative popularity
of “general purpose” programming languages is not consistent across domains,
with domain fit being a factor even for general purpose languages.

~~~
Transfinity
I would say that JSX is popular precisely because the cost of teaching it is
low, which in turn is because of its similarity to other commonly used
notation (HTML / XML).

Of course it's got its fair share of dumb gotchas, but I found it far easier
to learn than, say, any of the myriad Rails DSLs.

------
jjhawk
discussion on reddit:
[https://www.reddit.com/r/math/comments/hv6m2n/terry_tao_on_s...](https://www.reddit.com/r/math/comments/hv6m2n/terry_tao_on_some_desirable_properties_of/)

------
enriquto
It would be nice to have an equivalent post about programming languages. The
fact that different programs can perform an identical computation is
important. For example, in Python/numpy you can write

    
    
        # u and v are 1-D numpy arrays of equal length
        c = 0
        for i in range(u.size):
            c = c + u[i] * v[i]
    

or

    
    
        c = u.T @ v  # the same inner product, vectorised
    

and even if the result is identical, the computation is not: the first is
orders of magnitude slower. There is no good reason for it to be so,
unfortunately.

~~~
lordgrenville
I was with you until your last line, which I categorically disagree with.
These two alternatives _perfectly_ map to two different ways of thinking about
the vectors u and v:

- As regular arrays (for someone with Python experience but no knowledge of
linear algebra). In this case you can just loop over them as with any
iterable.

- As vectors, with the associated mathematical properties. In this case you
can operate on the entire vector, which is much faster; this happens to be
because of implementation details (using C structures instead of Python lists,
parallelisation, whatever), but it is also just highly intuitive.

I'd argue that this is exactly what Tao is saying, about how different
notations suit different contexts, and that allowing both methods in no way
violates the Zen of Python (in reference to jimhefferon's comment).

~~~
enriquto
My problem is a practical one, not philosophical. I would expect the computer
operations in both cases to be _identical_ and thus the performance exactly
the same. It is 2020; optimizing compilers exist, and even JITs. The first
code is just a notation for the second one. The fact that the first code is
extremely slow (think about iterating over all the pixels of a video
sequence) is utterly disheartening. Of course, in that particular case you can
say "just use the vectorized version", but in practice not all the
computations that you need to do can be expressed in that form. If you try to
iterate, in Python, over all the pixels of a realtime video using integer
indices, you are in for a world of pain; it is just not possible, and this is
a major limitation of the expressivity of the language.

~~~
lordgrenville
It isn't just notation, though. It's a different way of operating. For
example, the following adaptation of your code:

    
    
        c = 0
        for i in range(u.size - 1):  # stop one early so u[i + 1] stays in bounds
            if u[i] < u[i + 1]:
                c = c + u[i] * v[i]
            else:
                c = 0
    

cannot be vectorised so mechanically (the loop bound above has been adjusted
to avoid the original off-by-one at the end).

Saying that the Python interpreter should reinterpret the iterative code as if
it were vectorised isn't increasing the expressivity; it's reducing it, by
overriding the user's intentions.

~~~
enriquto
My point exactly! If I need to implement your algorithm in Python, why am I
condemned to be hopelessly slow? Or worse, condemned to find a "trick" that
vectorizes this code at the cost of making it unreadable?
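
For what it's worth, the loop above can be vectorized, and the result
illustrates the complaint: a possible numpy rewrite (a sketch, exploiting the
fact that after the last "reset" every remaining step accumulates) is fast but
far less readable than the loop:

        import numpy as np

        def loop_equivalent(u, v):
            # indices i where u[i] >= u[i + 1], i.e. where the loop resets c to 0
            resets = np.nonzero(u[:-1] >= u[1:])[0]
            start = resets[-1] + 1 if resets.size else 0
            # after the last reset, every remaining step adds u[i] * v[i]
            return np.dot(u[start:-1], v[start:-1])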

------
eternalban
"Preservation of quality, II" and "Suggestiveness, I" are likely co-manifests.

I suggest that should one strive to 'fine tune' notation N to possess the
above two qualities for a _family of objects in X_, other categories of
objects in X will become opaque and difficult to express, i.e. you end up
with a domain-specific notation.

------
peignoir
Would anyone be interested in helping to build a Google Translate for math?

------
peter303
Tao is a rare person with a 200 IQ.

~~~
fuzzfactor
Almost ten years ago I was working on an interesting Linux multiboot system,
googling to find far-from-default grub operation hints.

I think it can be agreed the present grub documentation is very broad, with
only a few undocumented features, but still not very deep even on the default
operation. Grub was also undergoing more rapid change at the time.

I came across a message where Tao had explained a concept like no other, and
after what I had seen it was clear to me he understood it like no other, so it
was purely logical. He knew more than the documentation. I didn't know he was
a widely recognized mathematician or anything, I just thought he was a very
bright computer scientist on a message board.

There were only a couple sentences that nearly applied to my system, within a
couple paragraphs on his solution to a different problem.

There was no useful answer for me yet so I moved on.

I googled to exhaustion that session with no code changes to make, but I
looked at it again and it was the only tab I kept open; even though Tao did
not show the direct way forward for me at all, everything else was actually
useless.

Next day I read it again, scrutinizing it over & over for an action I could
follow through with, wishing someone had posted equally straightforward advice
for my particular situation.

No such luck, but it inspired me to go forward in a similar fashion.

I've mostly used both grub and syslinux as separate alternatives for booting
my distributions ever since.

For years I've felt like I couldn't have done it otherwise.

And with Windows, grub or not, I ended up bringing more reliability to my
employer.

Fairly recently I found out Tao had started out as a mathematical child
prodigy.

That was incredibly helpful.

He actually communicated the unique abstract concept I needed without even
knowing the problem and without intentionally trying.

I imagine he made up his own notations quite a bit before he carefully adopted
the various professional terminologies.

------
foobar_
No one uses mathematical notation for practical purposes. This is just like
medieval music notation, which is neither practical nor what modern composers
use; modern notation is more visual in nature. In fact, modernism is a
rejection of medievalism.

I think in the future programming will force all mathematicians to code or to
give out simulations. Most mathematical notation was intended to be throwaway
by the original authors; that's why there are so many notations. Trying to
find relevance in them is a pointless exercise. Much like 80x20, tabs vs
spaces ... most of the original intent is lost and what survives is guff meant
for ceremonious purposes.

~~~
wheresmycraisin
Programming != proofs, or in general communicating abstract mathematical
ideas. Writing mathematics is nothing like writing software.

~~~
foobar_
What I am trying to convey is that writing software is better than writing
maths, just like modern music notation is better than medieval notation.
Programming is better than proving because most proofs are mere tautologies or
artificial constraints. This is why theorem provers in code rely on term
rewriting.

A triangle's angles sum to 180? Well, how about if you push the triangle
inside out? In code you can easily run a more complex simulation which gives
you all possible values of the sum ... which is why ascertaining useful facts
like 180 ad nauseam is boring at best. In fact most mathematics, if it can't
be simulated, can't exist.

~~~
wheresmycraisin
Ok, then convince me. Write 'software' for, say, the proof of the dominated
convergence theorem or something else reasonably advanced, and let's compare
it to the proof in conventional math notation.

~~~
foobar_
I'm guessing there was a physical intuition behind the theorem; if you can
simulate it, you will probably do something better than the proof. Now it's
your turn to tell me why 1 + 1 = 2.

~~~
augustt
Honestly, what are you talking about? You can simulate for 100 years without
finding a counterexample, but that doesn't make a proof. The whole point of
math is to understand why things are true, not just to be satisfied that they
seem true.

~~~
foobar_
The way I see it ... most mathematicians nowadays use Mathematica or Matlab or
even Python, proving my point. The notation is medieval ... and probably the
only reason it survives is the form factor of paper.

> Mathematics is a part of physics. Physics is an experimental science, a part
> of natural science. Mathematics is the part of physics where experiments are
> cheap.

[https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html](https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html)

I see simulating as part of the experiment. If the proof is wrong, it won't
last a second's worth of simulation. I suppose a proof is in essence a pattern
or an invariant of the system ... but most proofs have really no meat to them.
The notation is merely intimidating, like obfuscated code.

~~~
gspr
> The way I see it ... Most mathematicians nowadays use mathematica or matlab
> or even python, proving my point.

Yes. But most of us don't use those to prove anything; rather, a lot of us use
them to implement computations based on those proofs (and to do some
exploratory "could this possibly be true?" kind of work). Useful tools, for
sure, but not something that remotely proves your point. Most mathematicians
also eat bread. That does not mean that math is a baked good.

> The notation is medieval ...

It is not. Read Gauß or Euler from the 18th and 19th centuries, and the
notation is nothing like modern mathematical notation. I can't even imagine
what medieval mathematical notation looks like!

> [https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html](https://www.uni-muenster.de/Physik.TP/~munsteg/arnold.html)

That is indeed the opinion of Arnold, a giant of mathematics. An opinion that,
I daresay, does not reflect the majority opinion on mathematics.

> I see simulating as a part of the experiment.

Sure. Simulating is a valuable experimental tool to many mathematicians (where
available; of course it isn't always).

> If the proof is wrong it wouldn't last a seconds worth of simulation.

At face value this statement betrays how little you know about this matter.
There can very well be errors in proofs that cannot be uncovered without
thousands of years of simulation, if at all.

Now, even interpreting your statement in the best possible light, namely
something along the lines of "simulation can often uncover mistakes in
proofs", I would say: fine, but what about the converse?

> but most proofs have really no meat to them. The notation is merely
> intimidating like obfuscated code.

Are you insane? Take something that is patently "useful" and patently "real
world", like the fundamental theorem of calculus. Meatless?

~~~
foobar_
I'm not insane ... you are just the type of person who will defend roman
numerals. Maybe you just have OCD.

1. Socrates is mortal

2. Mortals die

3. Socrates dies

Deduction is really, like, amazing. Holy shit, we really proved something
spectacular here. I guess you would be really impressed if I used tau and
sigma and defined death with the Vietnamese alphabet.

Almost the entirety of calculus was derived from problems related to physics.
Volumes were calculated for doing engineering. Mathematics != Thinking. The
last time I checked both logic and critical thinking were branches of
philosophy.

All good mathematicians are physicists or engineers. Heck, some even learnt
maths on their own. All mediocre mathematicians write textbooks and hide
behind notations. Come to think of it, they remind me of OO programmers in
their utter arrogant mediocrity. Most abstract mathematics is like the
definition of protocols/interfaces and other platonic garbage. I suppose this
debate will never end. Plato vs Aristotle, Deduction vs Induction, Analytic vs
Synthetic ....

~~~
an_android
Don't use phrases like "Maybe you just have OCD". This is offensive and
trivializes the problems those with OCD face. OCD is a serious disorder, and
your use of that phrase illustrates your lack of mental maturity.

Further, that phrase is bigoted. What you are implying is that someone with
OCD is "lesser" or "other" as you are using the phrase to discount the person
you are talking with. Hence it is bigoted.

In fact, it is obvious that you have no idea what you are talking about.
Mathematics is not "just notation" in the same way software engineering is not
"just programming language syntax", music is not just "notes on a piece of
paper", and literature is not just "grammar rules".

If you cannot see that, I suggest you read more and expand your view of the
world. Don't hurl insults at others.

If you want a more concrete example: show that

1 + (1/2)^2 + (1/3)^2 + (1/4)^2 + ... = pi^2/6

That is, first define what it means to take a sum of an infinite number of
terms, prove that your definition is consistent with a sum of a finite number
of terms, and then show that the sum is _exactly_ pi^2/6. Showing that the
two sides agree to 100 billion decimal places is not enough. You need to show
they are exactly equal.

When you are done with that, find an _exact_ closed form for the sum:

1 + (1/2)^3 + (1/3)^3 + (1/4)^3 + ...
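
In standard notation these are values of the Riemann zeta function. The first
is Euler's famous evaluation; the second, Apéry's constant, has no known
closed form, which is presumably the point of the challenge:

        \sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6}

        \sum_{n=1}^{\infty} \frac{1}{n^3} = \zeta(3) = 1.2020569\ldots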

~~~
gspr
> If you want a more concrete example, show that the sum […]

He won't. I keep running into people like this all the time. They are hellbent
on the idea that anything they don't understand _must_ be meaningless,
useless, or the fault of others. If you get a reply at all, I suspect it will
be something like "pi is just a meaningless approximation to a real physical
concept" or "infinite series don't actually exist in real life, I'll sum the
first 1000000 terms on a computer and that's all that exists".

~~~
foobar_
Mathematicians who think infinity is real should be treated with the same
disdain as Neptune worshiping astrologers.

The internet makes it easy for pedantic losers to have a loud opinion. Hell, I
have even run into pedantic losers who have the time to create multiple new
and fake accounts and use old sock puppets to create the illusion of an
audience, because these friendless, loveless losers literally have no one to
talk to IRL.

> I keep running into people like this all the time.

Psychological attacks, amazing! I'm guessing you are one of those deeply
insecure symbol twiddlers. Let me guess: as a kid you were crap at everything,
especially sports, except symbol twiddling, so you latched onto the praise
your teachers gave you, and as an adult that is the only source of your self-
esteem. And you can't handle it when someone on the internet thinks abstract
mathematicians are full of shit.

~~~
gspr
> Mathematicians who think infinity is real should be treated with the same
> disdain as Neptune worshiping astrologers.

Mathematicians will not say anything like "infinity is real" or "infinity is
not real". We are careful creatures, and will ask what you _mean_ by
"infinity". In this subthread we've been discussing infinite series. What part
of the definition of those do you have a problem with? (Prediction: you'll
never answer this, but will instead go on ranting with no ability to focus on
the topic at hand. I can definitely see why math is hard for you; you have a
severe problem with focus.)

> The internet makes it easy for pedantic losers to have a loud opinion.

I can see that.

> Hell I have even run in to pedantic losers who have the time to create
> multiple new and fake accounts and use old sock puppets to create the
> illusion of an audience because these friendless, loveless losers literally
> have no one to talk to IRL.

That's pretty sad. It's also very sad that this is the conclusion you jump to
when someone speaks out against your insane ravings in an entirely logical and
coherent way.

> Psychological attacks, amazing!

It's a bit entertaining that you can go from what you wrote above (and what
you write below) straight into accusing me of this.

> I'm guessing you are one of those deeply insecure symbol twiddlers.

I am indeed quite insecure. I'm working on managing that. If by "symbol
twiddler" you mean mathematician, then yes – and quite proud of it too. You'd
do well to get back on track to the topic at hand, though, seeing as you're
currently coming off a bit like the people one sometimes sees yelling
incoherent nonsense on subway trains.

> Let me guess, as kid you were crap at everything, especially sports except
> symbol twiddling so you latched onto those praises your teacher gave you and
> as an adult that is the only source of your self-esteem.

Not at all. While I was quite mediocre at sports (though far from crap), I did
really well in most things. I was not a favorite of the teachers, because I
had (and probably still have) a bit of a problem with authority. Are you done
derailing the discussion now? I'll remind you: we're discussing the usefulness
of mathematics, not my childhood or athletic abilities.

> And you can't handle it when someone on the internet thinks abstract
> mathematicians are full of shit.

I can handle it just fine, primarily because what raving lunatics believe has
no influence on the very real power and usefulness of mathematics. The reason
I care to have the discussion is to set the record straight for third
parties' sake.

------
zitterbewegung
This looks like a similar approach to TLA+, but it seems more like a domain-
specific markup language.

I think he has a good idea for the most part, and if you did formalize this
notation there is a good chance that someone in the computer science domain
would eventually program something that could interpret it. Lisp comes to
mind.

------
pubby
Imagine I give you a list of words and ask you to remember them. 5 minutes
later, I ask you to give me those words in reverse order. Not too hard, right?

Now imagine if those words I gave you were in Vietnamese, or some language you
don't speak. Suddenly the task becomes much more confusing. You aren't
remembering a small handful of objects and ideas, but instead trying to juggle
the individual syllables in your head.

Math notation sucks because none of it maps to things non-mathematicians
know. Every time a new symbol is introduced, whether it be a Greek letter or
an operator, it's one more mapping your brain has to create to remember it.
And on top of this, you have to remember the English names too. Yes, I said
names - most math concepts have so many different names it's crazy. Even basic
arithmetic can't escape this. There are two names for multiplication
(multiply, product) and four common notations for representing it (*, x, ·,
and whatever you call it when two variables are next to each other).

~~~
tmpz22
I dropped out of college at 19 and attempted to return at 21. My first math
course back was discrete math, and my teacher was a grad student who very
clearly had no interest in teaching and was only there to subsidize his "real
work". Keep in mind this is a large public university charging $40k/year.
Going into his office hours was like going to another country, because his
only method of explaining math was really, really advanced math notation. He
became visibly frustrated that I didn't pick up on his notation. I wish I
could say he was my only teacher who had no business teaching.

I dropped out again and haven't looked back. That system wasn't for me and
didn't care about my success or its ability to transfer knowledge.

~~~
wtallis
In many university mathematics curriculums, discrete math is the course used
to transition students from the computation-oriented mindset (instilled in
them by high school and early undergraduate courses up through calculus) to
the more abstract and proof-oriented mindset. Thus, teaching you to use that
"really really advanced level math notation" is often one of the primary goals
of the course, even if it seems like unnecessary overkill at the time. If you
refuse to learn it, you're setting yourself up for failure in any future math
course that uses that notation as the starting point for building new concepts
and abstractions.

~~~
tmpz22
We were never taught the notations; he would just pull out dozens of glyphs
from his vast experience in mathematics - not the standard notations you might
find in a textbook. I won't deny that I could've researched these topics
deeply in my own time, enough to keep up with the graduate student, but I had
no interest in being a graduate math student just to pass the course.

