There are some big problems with mathematical notation.
The first is that it is terse--often excessively so. The clearest way this manifests is the sheer predilection for single-letter variable names. Every variable in every equation invariably gets reduced to a single letter, and sometimes it goes so far as distinguishing variables by font face, boldness, or whatever random diacritic can be thrown on top, just to keep the name down to one letter. Even when it doesn't reach that extreme, the terseness makes skimming difficult, because you now have to go back through the prose to figure out what 'H' means, and spotting the definition of a single-letter variable in prose is really easy, right? (No, no it is not.)
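To make the "what does H mean?" complaint concrete, here is a rough illustration (not taken from any particular text, just standard usage) of how many unrelated things one letter can stand for:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The same letter H, four unrelated standard meanings (illustrative, not exhaustive):
\begin{align*}
  H    &= -\textstyle\sum_i p_i \log p_i && \text{(Shannon entropy of a distribution)} \\
  H    &= T + V                          && \text{(a Hamiltonian in classical mechanics)} \\
  H    &\le G                            && \text{(a subgroup of a group $G$)} \\
  H(s) &= Y(s)/X(s)                      && \text{(a transfer function in control theory)}
\end{align*}
\end{document}
```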
Another factor that makes mathematical notation infuriating to read is the sheer overloading. Subscript notation is a good example--if you see a single letter subscripted by something else, it could be a particular entry in a sequence, an index into a vector or a matrix or a tensor (and are you getting a row or a column or what? all of the above!), or a derivative. Or maybe it's a label picking out which of the possible quantities abbreviated to that letter is actually meant (e.g., E_a for activation energy).
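As a rough sketch of the overloading, here are four standard readings of the same subscript pattern; none of these symbols come from the thread, they are just common conventions:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Four standard readings of a subscript, distinguishable only by context:
\begin{align*}
  x_i &: \text{the $i$-th term of a sequence } (x_1, x_2, \dots) \\
  v_i &: \text{the $i$-th component of a vector } v \\
  f_x &: \text{the partial derivative } \partial f / \partial x \\
  E_a &: \text{activation energy -- here $a$ is a label, not an index}
\end{align*}
\end{document}
```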
Of course, the flip side of the problem is that some concepts have multiple different notations. Multiplication is a common example: to multiply a and b, you can write a × b, a · b, or skip the symbol altogether and just write ab (so a(b) is also a way to indicate multiplication, which of course could never be confused with applying a function [1]). Differentiation is the worst; virtually every introduction to derivatives starts with "oh, there are multiple notations for this"--is it really necessary for so many to exist?
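For reference, these are the notations a typical calculus course introduces for one and the same derivative (standard attributions, nothing specific to any one textbook):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Four notations for the same operation: the derivative of f with respect to x (or t).
\begin{align*}
  \frac{\mathrm{d}f}{\mathrm{d}x} && \text{Leibniz} \\
  f'(x)                           && \text{Lagrange} \\
  \dot{f}                         && \text{Newton (usually time derivatives)} \\
  D_x f                           && \text{Euler}
\end{align*}
\end{document}
```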
[1] There was one paper I read where I literally had to spend several minutes staring at a key formula, trying to figure out whether I was looking at function application or multiplication.
> is it really necessary for so many to exist?
Yes, it is necessary - different contexts often require different notation, sometimes as an abbreviation, sometimes not.
I do not believe that mathematical notation can be improved (a poor analogy would be trying to improve, say, the English language itself); what does evolve, though, is the understanding of mathematical objects and mathematical frameworks - which often leads to simplifying things and, sometimes, the notation too.
> I do not believe that mathematical notation can be improved (a poor analogy would be trying to improve, say, the English language itself)
I strongly hope you mean that heavy-handed top-down approaches aren't likely to work. Because this reads as though you're saying we've somehow reached the optimal point on every axis for both mathematics and general-purpose communication.
The (original) purpose of mathematical notation is to facilitate (to some degree, automate) reasoning and calculations (mathematicians tend to conflate them) on paper. For the existing frameworks it’s already as good (efficient) as it gets, given the 2-dimensional nature of said paper. Would adding new characters beyond the existing set (which is already quite large), introducing new alphabets in addition to the Latin and Greek (OK, we have the aleph), adding more font styles and sizes - would any of that constitute an improvement worthy of note? We have already exhausted the freedom given to us by the 2-dimensional paper-space as a computational medium - consider, for example, machinery that heavily relies on graphs (diagram chasing; Dynkin diagrams, etc.) or tables (matrices, tensors, character tables of groups, etc.).
And that is perhaps why so many of us like programming: there is exactly one correct way to interpret code. And stylistic differences (different notations for the same thing) are usually discouraged by style guides.
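A minimal sketch of that claim, in Python (an arbitrary choice of language; the names a, b, and f are made up for illustration): the multiplication-vs-application ambiguity from upthread simply cannot arise, because the grammar forces one reading per expression.

```python
# In Python the grammar decides what an expression means, not the reader:
a = 3.0
b = 4.0

product = a * b       # unambiguously multiplication
# a(b) would unambiguously be a call -- and a TypeError here, since a float is not callable

def f(x):
    """A spelled-out function, so f(b) can only mean application."""
    return 2 * x + 1

applied = f(b)        # unambiguously function application

print(product, applied)
```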
> there is exactly one correct way to interpret code
No, there are different compilers, languages, or even language versions and compiler settings. Programming notation is way harder and more confusing to learn than the math equivalent precisely because there are so many more styles, languages, versions, etc. And we all know that learning all of those symbols and keywords isn't the main difficulty with learning programming or a new programming language.
Of course different languages have different syntax, but it is a pretty hard requirement that within a (specific version of a) language the syntax rules are both explicit and consistent.
Those are issues with sloppily written math papers, not with high school math. This repo is just high school and intro college math; there isn't much overloading or confusion there.