The problem with the author's proposals is this: he proposes replacing well-known, well-distinguished symbols (letters and digits) with symbols that reflect a particular geometric interpretation.
First, the geometric symbols are difficult to read, and a small error (either in printing or in reading) can mean a huge change in meaning.
Second, there is the obvious criticism that they are too radical a departure from current convention to be widely adopted.
Most salient, however, is that they offer no benefit to the development of mathematics. The mathematical enterprise is based on abstraction, generalization, and symbolic manipulation. The symbols proposed reflect only the most banal representation of the mathematical objects they discuss.
For example, he discusses representing the natural numbers with a number line. But this is an elementary schooler's notion of the natural numbers. Mathematicians would like to discuss other representations and constructions of the natural numbers: via their Peano axiomatization, as Church numerals, as an algebraic structure, as indices, as a countable set, and many more. Hence it is honestly more convenient, and less intrusive to the discussion of novel mathematical ideas and perspectives, to simply use a flavorless letter of the alphabet rather than a symbol tied to one particular geometric interpretation.
Finally, I'd like to point out that while many things have changed in mathematics since ancient times, we still follow Aristotle in using letters to denote mathematical objects, and Euclid in rigorous exposition and proof, and in fact we use these practices today with far more frequency than the ancients ever did.
Thanks for your comments -- I agree with many of them. The only part of this that I still think is at all interesting is the section on quantifiers, and to a lesser extent the octal numbers. And I don't think they're that interesting. In general, while I do think there is interesting potential for improving notation -- Feynman diagrams come to mind as a somewhat modern example of notation impacting research -- I don't think this is what it looks like.
> he proposes replacing well-known, well-distinguished symbols (alphabets and digits) with symbols that reflect a certain geometric interpretation.
That's what most of it is, although the quantifiers are a more "grammatical" change. I think the best defense I can offer for the symbol proposals is that the people most affected by low-level notational choices like symbols are probably young students learning them for the first time. It seems plausible there might be significant pedagogical gains from other symbols.
My other criticism is that the proposed notation seems less natural to read. With the standard notation, you can simply replace each symbol with English words and read the sentence aloud. While a similar transliteration of your notation is likely possible, it seems more complicated. In this sense, I would probably view this notation more as a shorthand. For pedagogical purposes, I feel that separating the quantifier from the usage of the variable is a good idea.
You could probably solve this too by adding another index to quantified quantities, but at that point are you really solving the problem of complexity? Complexity is not measured only in verbosity (i.e., the length of the expressions you produce); the ability to quickly locate information and the ease of manipulation are also important, and it seems to me that this formalism fails completely on those counts. In the "traditional" formalism you separate the "final fact" you want to assert from its "conditions", given by the quantifiers; usually, you want to process those pieces of information at different times. Also, you have rules for quantifier introduction and elimination, which seem to be a nightmare in the proposed formalism.
That said, I do encourage new proposals and experimentation with notations, as with anything else in mathematics, even from young students. There have been cases in maths in which a good idea for a new notation made an entire field much easier (I am thinking, for example, of Einstein notation for tensor calculus).
- Variable Scope. In mathematical writing, the convention is that every variable is a global variable, and in a paper you quickly run out of variable names to use. There should be a way to declare the scope of a variable: "For this chapter only, x is this vector."
- Strict Typing. You skim a paper, and you see "x + y". But what is x? A matrix, a vector, a scalar, an operator, a set, a random variable? You have no idea. The "+" operator is so overloaded that x and y could be anything. Having to hunt down the definitions for every variable is a time-consuming and frankly wasteful process. Conventions are built up around this (upper-case letters A, B at the beginning of the alphabet are matrices, but upper-case letters at the end, X, Y, are random variables), but they are mostly rather silly. There needs to be an easy way to infer the type of a variable quickly.
- Anonymous Objects. There is no way to declare a function without either giving it a name (f(x) = x^2) or at least declaring its input symbols (x |=> x^2). And even then you can't write, say, (x |=> x^2)(y). Using a symbol to represent a disposable function seems a waste of a valuable global variable (see again, point 1) and is inelegant.
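For comparison, most programming languages address the third point with anonymous (lambda) functions, which can be applied inline without consuming a name. A minimal Python sketch of the idea (the variable names here are my own, for illustration):

```python
# A named function, analogous to writing f(x) = x^2 once and for all:
def f(x):
    return x ** 2

# The anonymous equivalent, applied inline -- the (x |=> x^2)(y)
# that the comment says conventional mathematical notation lacks:
result = (lambda x: x ** 2)(3)
print(result)  # 9
```

No global name is spent on the throwaway function, which is exactly the economy the comment is asking for.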
This goes against the very nature of mathematical variables! The whole point is that the symbols are suggestive. This allows the reader to quickly get up to speed with what the symbols are all about. As a side effect, an unconventional use of symbols is usually quite grating to the experienced mathematician. Usually:
- x, y, z stand for variables
- i, j, k stand for subscripts and superscripts
- m, n stand for integers
- theta, phi stand for angles
- alpha, beta stand for real numbers of some sort
- capital letters are usually sets, transformations and such
> Strict typing
Again. Mathematics is about abstraction. Here "+" means that you can add the two objects, whatever they are. That's all there is to it. Of course, they are mentally typecast to the more general type. The point of this overloading is that, again, its suggestive nature makes it easier to read and write. The abuse is so pervasive that something like 1 + A can easily mean that, yes, 1 is the identity matrix.
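To make the "mental typecast" concrete, here is a minimal sketch (the class and its API are invented for this example, not any standard library) of how operator overloading lets "1 + A" mean "identity matrix plus A":

```python
# A toy matrix type in which an integer on either side of "+" is
# typecast to a multiple of the identity matrix, as in the 1 + A idiom.
class Matrix:
    def __init__(self, rows):
        self.rows = rows

    def __add__(self, other):
        if isinstance(other, int):  # A + 1: read 1 as the identity
            other = Matrix.identity(len(self.rows)).scale(other)
        return Matrix([[a + b for a, b in zip(r, s)]
                       for r, s in zip(self.rows, other.rows)])

    __radd__ = __add__  # 1 + A delegates to the same logic

    def scale(self, k):
        return Matrix([[k * a for a in r] for r in self.rows])

    @staticmethod
    def identity(n):
        return Matrix([[1 if i == j else 0 for j in range(n)]
                       for i in range(n)])

A = Matrix([[0, 1], [2, 3]])
print((1 + A).rows)  # [[1, 1], [2, 4]]
```

The overload is doing exactly what the mathematician does in their head: promoting the scalar to the more general type before adding.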
> Anonymous Objects
I agree. Although it's much easier to, once and for all, declare "Let f(x) = x^2" and use "f" instead of the anonymous function every time.
All that this kind of thing highlights is, IMHO, the fundamental difference between being a mathematician and being a software developer (and I've said it before if you look through my comment history):
- Mathematicians manipulate a given symbol, often thousands of times, in their heads or on paper
- Software developers read a given line of code, often thousands of times, in order to understand what the code does
These differences drive, IMHO, 99% of the difference in notation.
There's a great spoof paper that demonstrates this by giving each variable the next available letter of the alphabet. Something like "Let a be a set of points in the plane. Let b, c and d be elements of a. Let e be the angle formed by..."
It becomes unreadable very quickly. Sadly I can't find the original now - anyone know what I'm talking about?
Mathematicians handle variable scope all the time--I would argue that a variable's scope is generally the section, subsection, lemma, theorem, etc that it is defined in. If I write "let x be ..." in the proof of a lemma, I sure don't intend for it to be a global variable.
It's not unusual to have an index of symbols at the end of a paper or book.
Also, anonymous objects are fine and used all the time. "Applying z => z^2 to the region U, ..." or some such thing.
I'm not sure it would be very useful to try to coordinate some standard notation for all of this. Anyway, one nice thing about writing math is that you can make up whatever notation you need, as you need it, and no compiler can tell you not to :)
With regards to notation and mathematical writing in general, the problem is that it is never explicitly taught. Too many students of mathematics never learn to properly structure sentences in mathematics. In studying programming languages, you have no choice but to understand the structure explicitly. I do think mathematics has all the features you mention, it's just not explicit.
> Variable Scope. In mathematical writing, the convention is that every variable is a global variable - and in a paper you quicky run out of variable names to use.
Variables usually have local scope. In a mathematical statement like "for a continuous function f ...", the letter f is not defined after the sentence ends. Another common way to "initialize" variables is with statements like "Let f be a continuous function"; while it is true that such a variable is never formally cleared, it is simply reintroduced when needed in another context. In some rare cases, you might start a chapter with "In this chapter, we let $H$ denote a fixed Hilbert space...", which is the mathematical equivalent of a global variable.
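The analogy with programming-language scope can be made explicit. In this rough Python sketch (the names are invented for illustration), the chapter-wide convention behaves like a module-level binding, while the lemma's hypothesis behaves like a local one:

```python
# "In this chapter, we let H denote a fixed Hilbert space..."
H = "a fixed Hilbert space"  # chapter-wide, i.e. "global", convention

def lemma_1():
    # "Let f be a continuous function" -- scoped to this proof.
    f = "a continuous function"
    return f"For {f}, ... (working in {H})"

print(lemma_1())
# Outside the function, `f` is undefined -- just as the letter f
# is free again once the lemma's proof ends.
```

The point is that mathematical writing already follows lexical-scoping discipline informally; it just never spells out the rules.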
> Strict Typing. You skim a paper, and you see "x + y". But what is x? a matrix, a vector, a scalar, an operator, a set, a random variable, you have no idea. The "+" operator is so overloaded x and y could be anything.
I don't understand how programming languages are different here. Most modern programming languages also have operator overloading or polymorphism which has precisely this problem. Do you prefer C-style add_object_of_type_T(x, y)?
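Indeed, the same type ambiguity is visible in, say, Python, where the single expression x + y dispatches to entirely different operations depending on the operands:

```python
# The very same "+" glyph means integer addition, string
# concatenation, or list concatenation -- the expression alone
# does not reveal the types, exactly as in a mathematics paper.
print(2 + 3)      # 5
print("x" + "y")  # xy
print([1] + [2])  # [1, 2]
```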
> Having to hunt down the definitions for every variable is a time-consuming and frankly wasteful process. Conventions are built up around this (upper-case letters A, B at the beginning of the alphabet are matrices, but upper-case letters at the end, X, Y, are random variables), but they are mostly rather silly. There needs to be an easy way to infer the type of a variable quickly.
The common convention is to introduce variables just before you use them, which is exactly how I would do it in a programming language. You seem to want an IDE for mathematics, so that you could ctrl-click any variable to jump to its definition. Though I have to say, if that's such a problem, the paper is probably horribly structured.
I don't understand your problem with conventions. Conventions are suggestive; they help you remember the definition, much in the same way we choose suggestive variable names in programming languages.
> Anonymous Objects. There is no way to declare a function without either giving a name to it (f(x) = x^2) or at least declaring its input symbols (x |=> x^2). But you can't go, say (x |=> x^2)(y).
Sure you could write (x |-> x^2)(y), you're allowed to introduce any notation you like. But why wouldn't you just write that expression as y^2?
You could declare a function as "the function which ...", which doesn't give it a name. If you want a symbolic expression, I'm not sure how you would do that without declaring any input symbols.
Likewise, base-8 numerals look fun. However, one benefit of the regular numerals is that they look both pleasant and distinct from each other in the flow of regular text, both in print and in handwriting (at least if you add the extra horizontal bar to the 7).
Actually, if one were to adopt all of the proposed notation, it appears that one might get lost in a forest of slightly different combinations of arrows that mean different things.
The quantifier notation is the only thing I actually might like. (It's also backwards-compatible.)
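For anyone who wants to play with the base-8 numerals discussed above, most languages read and write them natively; in Python, for example:

```python
# 83 = 1*64 + 2*8 + 3, so its base-8 spelling is 123.
print(oct(83))        # 0o123
print(int('123', 8))  # 83
```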
"While I was doing all this trigonometry, I didn't like the symbols for sine, cosine, tangent, and so on. To me, "sin f" looked like s times i times n times f! So I invented another symbol, like a square root sign, that was a sigma with a long arm sticking out of it, and I put the f underneath. For the tangent it was a tau with the top of the tau extended, and for the cosine I made a kind of gamma, but it looked a little bit like the square root sign. Now the inverse sine was the same sigma, but left-to-right reflected so that it started with the horizontal line with the value underneath, and then the sigma. That was the inverse sine, NOT sin^-1 f -- that was crazy! They had that in books! To me, sin^-1 meant 1/sine, the reciprocal. So my symbols were better."
And some examples here: https://tex.stackexchange.com/questions/274463/feynman-trig-....
I slightly modified it to avoid subscript and keep it closer to existing notation for powers: http://i.imgur.com/hOz9MXa.jpg
A simple exercise, often used to teach children, is to start without notation. Do exercises like these (translated from the Hungarian way of saying numerals):
Write the questions the teacher speaks and give answers:
"Two tens three plus one ten six equals?" Three tens nine.
"Seven tens one plus two equals?" Seven tens three.
... Do about five of these. Children will ask if they can write less. Do a few more with comparisons ("is greater than or equal to"), simplifying equations ("is equivalent to"), lines of equations ("has the same solutions as") and children understand.
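The exercise's "long way" of naming numbers is easy to mechanize; this small Python sketch (the function and names are my own, for illustration) renders 0-99 in that verbal style:

```python
# Render a number the "long way" used in the exercise,
# e.g. 39 -> "three tens nine".
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]

def long_form(n):
    """Spell out 0-99 in the verbal-numeral style."""
    tens, ones = divmod(n, 10)
    if tens == 0:
        return ONES[ones]
    head = f"{ONES[tens]} {'ten' if tens == 1 else 'tens'}"
    return head if ones == 0 else f"{head} {ONES[ones]}"

print(long_form(23 + 16))  # three tens nine
print(long_form(71 + 2))   # seven tens three
```

Writing a few of these out by hand is exactly what makes children beg for positional notation.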
This leads to fixing notation that causes errors:
"6x3" can read like "6x times 3".
Poor handwriting confuses "6" with "8".
Pi is usually a less interesting number than 2*pi.
Kinematics uses all four corners of a letter, e.g.,
"joint x, in frame r, at time t, ..."
There is space for better notation, or at least an appreciation of how it ends up where it does.
Almost all the time, once I see the code that implements the mathematics, it is easy to understand; but making sense of all the different symbols is very hard, especially when different fields seem to use the same symbols to mean totally different things.
That is the problem: there are far too few letters and symbols to describe all the objects used in modern mathematics. You need to reuse symbols, and therefore every paper must declare the symbols it uses. Sometimes authors are even forced to use the same symbol for different things, so you need some experience in the field to understand which reading is correct each time.
The symbol declaration can be done explicitly (by stating the fact, usually at the beginning or in a dedicated section; some books also have a table of notations with references to definitions) or implicitly, when the paper is targeted at scholars who are most probably already familiar with that notation. The latter is not very friendly to newcomers, but in most cases, if you do not know what the implicitly defined objects are, you probably cannot understand the paper anyway.
In short, there are very few "short ways" into mathematics. If you want to understand things, you have to spend time and study them.
About the only way I have been able to short-circuit the process is to outsource the work to someone who can turn the paper into code -- once they do this, the math becomes easy for me to understand.
Also, as long as we are on the topic of bad math notation, I would caution that some sources use the symbols in subtly different ways, so you should be sure to check the paper itself for specific conventions.
Edit: This project looks interesting.
TL;DR: A better mathematical notation is a VS for math.