Hi all! I'd really like to learn "higher level than high school" math as a (long time ago) college dropout, but I find it really hard to read anything because of the math notations and zero explanation of it in the context. I didn't find any good resource on the topic on the web; do you have any advice / links? Thanks!
I think a real problem in this area is the belief that there is "one true notation" and that everything is unambiguous and clearly defined.
Yes, conventions have emerged, people tend to use the same sort of notation in a given context, but in the main, the notation should be regarded as an aide memoire, something to guide you.
You say that you're struggling because of "the math notations and zero explanation of it in the context." Can you give us some examples? Maybe getting a start on it with a careful discussion of a few examples will unblock the difficulty you're having.
> I think a real problem in this area is the belief that there is "one true notation" and that everything is unambiguous and clearly defined.
One main cause of this belief is that in programming there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.
I dislike maths notation as I find it lacks rigour.
Came here to say the same thing harshly and laced with profanity. I guess I can back off a bit from that now.
I was filled with crushing disappointment when I learned mathematical notation is "shorthand" and there isn't a formal grammar. Same goes for learning writers take "shortcuts" with the expectation the reader will "fill in the gaps". Ostensibly this is so the writer can do "less writing" and the reader can do "less reading".
There's so much "pure" and "universal" about math, but the humans who write about it are too lazy to write about it in a rigorous manner.
I can't write software w/ the expectation the computer "just knows" or that it will "fill in the gaps". Sure-- I can call libraries, write in a higher-level language to let the compiler make machine language for me, etc. I can inspect and understand the underlying implementations if I want to, though. Nothing relies on the machine "just knowing".
It feels like the same goddamn laziness that plagues every other human endeavor outside of programming. People can't be bothered to be exact about things because being exact is hard and people avoid hard work.
"We'll have a face-to-face to discuss this; there's too much here to put in an email."
You seem to be complaining that math isn't programming, that it's something different, and you've discovered that you don't like how mathematicians do math.
Math notation is the way it is because it's what mathematicians have found useful for the purpose of doing and communicating math. If you are upset and disappointed that that's how it is then there's not a lot we can do about it. If there was a better way of doing it, people would be jumping on it. If a different way of doing it would let you achieve more, people would be doing it.
It's not laziness, and I think you very much have got the wrong idea of how it works, why it works, and why it is as it is. Your anger comes across very clearly, and I'm saddened that your experience has left you feeling that way.
Maths is very much about communicating what the results are and why they are true, then giving enough guidance to let someone else work through the details should they choose. Simply giving someone absolutely all the details is not really communicating why something is true.
I'm not good at this, but let me try an analogy. A computer doesn't have to understand why a program gives the result it does, it just has to have the exact algorithm to execute. On the other hand, if I want you to understand why when n is an integer greater than 1, { n divides (n-1)!+1 } if and only if { n is prime } then I can sketch the idea and let you work through it. Giving you all and every step of a proof using Peano axioms isn't going to help you understand.
Similarly, I can express in one of the computer proof assistants the proof that when p is an odd prime, { x^2=-1 has a solution mod p } if and only if { p = 4k+1 for some k }, but that doesn't give a sense of why it's true. But I can sketch a reason why it works, and you can then work out the details, and in that way I'm letting you develop a sense of why it works that way.
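If you want to poke at those two statements from the code side, here's a minimal Python sketch that just checks them numerically over a small range. It's an illustration, not a proof, and the helper is my own throwaway function:

from math import factorial

def is_prime(n):
    # trial division; fine for the tiny range checked here
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# n divides (n-1)! + 1 exactly when n is prime (for n > 1)
for n in range(2, 50):
    assert ((factorial(n - 1) + 1) % n == 0) == is_prime(n)

# for an odd prime p, x^2 = -1 has a solution mod p exactly when p = 4k + 1
for p in range(3, 100):
    if is_prime(p):
        assert any((x * x + 1) % p == 0 for x in range(1, p)) == (p % 4 == 1)

Running the checks tells you the statements hold in that range, but it doesn't tell you why, which is the part the sketch of a proof is for.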
Math isn't computing, and complaining that the notation isn't like a computer program is expressing your disappointment (which I'm not trying to minimise, and which is probably very real), but it is missing the point.
Math isn't computing, and "Doing Math" is not "Writing Programs".
Thanks for the pingback ... I appreciate that. And thanks for acknowledging that I'm trying to help.
It might also help to think of "scope" in the computing sense. Often you have a paragraph in a math paper using symbols one way, then somewhere else the same symbols crop up with a different meaning. But the scope has changed, and when you practise, you can recognise the change of scope.
We reuse variable names in different scopes, and when something is introduced exactly here, only here, and only persists for a short time, sometimes it's not worth giving it a long, descriptive name. That's also similar to what happens in math. If I have a loop counting from 1 to 10, sometimes it's not worth doing more than:
for x in range(1, 11):
    ...  # five lines of code
If you want to know what "x" means then it's right there, and giving it a long descriptive name might very well hamper reading the code rather than making it clearer. That's a judgement call, but it brings the same issues to mind.
I hope that helps. You may still not like math, or the notation, but maybe it gives you a handle on what's going on.
PS: There are plenty of mathematicians who complain about some traditional notations too, but not generally the big stuff.
This example works against you. Scope shadowing is nearly universally considered bad practice, to the point that essentially every linter is pre-configured to warn about it, as are many languages themselves (e.g. Prolog, Erlang, C#).
To a programmer, you're saying "see, we do it just like the things you're taught to never ever do"
.
> You may still not like math, or the notation,
The notation is probably fine
What I personally don't like is mathematicians' refusal to provide easy reference material
There are lots of maths cheat sheets like that. Maths is big, like all-programming-languages big. Just like in programming, notations are re-used in different areas with different meanings, and different authors sometimes use different notation for the same meaning. A universal cheat sheet is impossible (just like a general programming cheat sheet is), but many cheat sheets or notation reference pages exist for particular contexts, one of which is "the basics", e.g. https://www.pinterest.nz/pin/734016439237543897/. Try searching or image searching for [math cheat sheet], [linear algebra cheat sheet], etc.
> mathematicians' refusal to provide easy reference material
This is an absurd claim. There is no such general refusal. On the contrary, many mathematicians provide their students with relevant easy reference material constantly. We sometimes spend entire semester-long courses providing easy reference material, and there are many books with exactly the kind of cheat-sheet you want inside the cover, or in an appendix or front matter (as well as the ones on the internet mentioned above).
That one is far too basic. It doesn't even have things like average, or absolute value. It includes things nobody needs explained, like subtract, and things that aren't math, like logical operators.
This is why I said "yes, people have tried, but nobody has succeeded."
The explicit context was the average software paper. We're talking about programmers.
.
> > mathematicians' refusal to provide easy reference material
> This is an absurd claim.
Lots of people in here seem to agree with me. YMMV.
Feel free to provide me easy reference material.
.
> On the contrary, many mathematicians provide their students with
> We sometimes spend entire semester-long courses
The explicit context is "to people who aren't mathematicians or mathematics students."
Remember, we're talking about programmers who are appealing to the mathematics community for help.
If your response to "you guys won't give programmers a short easy two page PDF at the level we need" is to remind me that you give your own students semester long courses, then you've absolutely failed to understand what's being said.
.
> there are many books with exactly the kind of cheat-sheet you want inside the cover
Every time I ask for one, I get something with symbols meant for children learning arithmetic, like the one you gave. It explains plus and percent.
If you think we need plus explained to us, and that the next step is an insular semester long lecture course that isn't offered to us, then how can you possibly be surprised that we think you failed us?
.
> as well as the ones on the internet mentioned above
You only mentioned one. The other one is something I gave, from our community, trying to explain to you the kind of thing I want.
It covers topics like monads, pattern matching, infix, operator precedence, typeclasses, infinite lists, codata, higher order functors, special folds, tuples, numerics, modules, tracing, list comprehensions, and dealing with the compiler itself.
You patted me on the head and taught me that * means times.
I continue to feel that the mathematics community refuses to understand the needs of the programming community, or provide appropriate reference material.
It's either "this is arithmetic" or "let's do linear equations in Russian"
There's no practical middle ground and you seem resistant to even understanding that such a thing exists
Programmers aren't ignorant like the other mathematicians in here have repeatedly said. You can't do our things any more than we can do yours. We've seen your code.
It's just that when you ask us for appropriately scoped reference material, we comply, and you do not even grok the ask.
You really put up a thing that explained `less-than`, as if that was what you were being asked for.
It turns out that most programmers know what the equals sign means.
There is nothing of practical value in the actual domain space being talked about, here.
Nothing here is beyond a high-school pre-calculus class. That is not the level that professional programmers need.
I don't mean to seem rude, but it feels a little bit like being talked down to, having it suggested that this is the level of help my occupation is asking for.
> I have never found one that gets me through the average undergraduate CS paper.
That cheat sheet would have to be written by CS folks, since every applied domain has its own quirks in its notation. Mathematicians can't help you there. You can't blame mathematicians for the shortcomings of CS researchers.
Many textbooks have pages that explain the mathematical notation used. Here's an example from a linear algebra textbook: http://linear.ups.edu/html/notation.html
But it doesn't make sense to put lists of notation everywhere mathematical notation is used, like in a journal, because the audience is already expected to know it. If the author does something weird or non-standard it's typically explained; sometimes it's even explained when it's pretty standard.
Different branches of math, physics, statistics, etc. will redefine the same symbols to mean different things, but that's not much different from programming languages. & in C++ has a different meaning than & in R. Just as the first step in understanding someone else's code is knowing what language you're looking at, it's important to understand the context of what you're looking at. Look at previous cites, relevant textbooks, ask around, reread the paper again. I've read some papers a dozen times, easily, before they clicked.
> This example works against you. Scope shadowing is nearly universally considered bad practice
So you never used the same variable name in two different scopes ever? Like, if a function takes argument "name", no other function you ever write again in any program can have a variable named "name" unless it is the same exact usage?
Or, as is commonly complained about in math, does every programmer in the world then use the variable "name" only for that use case and otherwise come up with a new name for it?
Having different scopes doesn't imply shadowing; it just means that you define something, use it, and then the scope ends and it no longer exists. No mathematician knows even close to every domain, so different domains of math use notation differently. It is like how different programmers program in different programming languages. It is such a waste to have so many programming languages, but people still do it for legacy reasons.
> So you never used the same variable name in two different scopes ever?
That's not what shadowing is.
.
> Having different scopes doesn't imply shadowing
I didn't say that it did.
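To be concrete, here's a throwaway Python sketch of the difference (the names are just for illustration): reusing a name in two unrelated scopes is normal; shadowing is when an inner binding hides a name that's still visible from an enclosing scope, which is what linters warn about.

# reuse across unrelated scopes: normal, nobody objects
def greet(name):
    print("hello,", name)

def label(name, value):
    print(name, "=", value)

# shadowing: the loop variable hides the module-level "count",
# which is the thing linters flag
count = 10

def countdown():
    for count in range(3, 0, -1):
        print(count)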
.
> No mathematician knows even close to every domain
this is irrelevant to a lightweight two page cheat sheet for simple mathematical symbols
part of the problem is that if we ask you for a simple thing that isn't perfect or exhaustive, you lecture us on how no document could contain every concept
that's very clearly not what's being requested of you. the same is true of haskell. that sheet doesn't contain all of haskell. i doubt anybody knows all of haskell, which of course is far smaller than mathematics.
i'd like to stay in the practical world. it was clearly stated that an exhaustive solution was a non-goal.
let's try to do one thing that doesn't have a limit at infinity. (i'm sorry, i'm a programmer, math jokes are hard)
surely you've made food, right? did you learn the recipe from a cookbook? did it contain every ingredient and recipe that any cook ever knew? did it go over the chemistry of the protein denaturing, the physics of the water boiling, the ethics of importing the burner fuel, allergy responses, cultural backgrounds, molecular weights, how to make things in a duck press?
no?
was that because the cookbook was just good enough? it was just like "use this much chicken and this much onion, two tortillas, some cilantro and lime?"
nobody wants exhaustive anything. if you managed somehow to produce that (ie by just giving the manual page to the wolfram language) it would be rejected as the exact opposite of what was being requested.
the thing you're protesting against is, i'm saying, the wrong job.
cool beans. the thing i'm actually asking for is straightforward.
someone already gave me one, but the difficulty level was aimed at children rather than the professional programmers the request was about, while also calling me ignorant. it's a shame; that one was almost it.
but it should be things like `ŷ` and `||x||` and whatever. it should include sum, integral, and product for the juniors. For `|x|` it should say "Absolute value, magnitude, length, or cardinality."
it doesn't need `||x||` because we need someone to teach us what absolute value and cardinality and so on mean. it needs `||x||` because we forgot what double-bar says, and if we have that list of four things, we can figure out which one it is just like you can.
we know what magnitude is. we just don't know what `||foo||` says.
we just need our cracker jack decoder rings. we get the ideas. we don't get your letters.
It's not explaining anything. It's just a cheat sheet. You aren't solving education. C'mon.
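to be concrete, i mean rows roughly like these (my own picks, obviously not exhaustive, and several of them mean other things in other contexts):

|x|      absolute value, magnitude, length, or cardinality
||x||    norm (the length/magnitude of a vector)
ŷ        an estimate of y, or a unit vector in the direction of y
Σ        sum over an index (an accumulating for loop)
Π        product over an index
∈        "is an element of" / "is in"
ℝ, ℤ, ℕ  the reals, the integers, the naturals
∀, ∃     "for all", "there exists"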
> does every programmer in the world then use the variable "name" only for that use case and otherwise come up with a new name for it?
this isn't terribly uncommon, primarily because our culture is to use long descriptive names, which have a far lower natural collision rate
i do get that symbols can overlap. that's okay! nobody's complaining about that. it's fine for `||x||` to mean four things. we're just as bad as you are about that. there's half a dozen meanings for stack, another five for heap, seven for map, five for vector, i don't even want to get into what a mess "array" is, et cetera.
but if i was writing a cheat sheet for you, I could write `< > generally means a generic type, a tuple, greater than less than, an HTML/XML/SGML tag, an email inclusion, or an IRC handle`
Is that exhaustive? Naw. I can name another dozen off the top of my head.
But that's generally going to be good enough.
All I want is generally good enough.
just please write down what they are?
> Math notation is the way it is because it's what mathematicians have found useful for the purpose of doing and communicating math.
That's only really a good description for the most well-trodden areas, where people have bothered to iterate. I think a more realistic statement would be:
"Math notation is the way it is because some mathematician found it sufficient to do and communicate math, and others found it tolerable enough to not bother to change."
Personally, though, my problem has always been where publications use letters and symbols to mean things that are just "known" in some subfield that isn't directly referenced. It's not a problem for direct back and forth communication during development, true, but it dramatically increases the burden on someone who wants to jump in.
That all said, it would still be quite nice if it were somehow more accessible. A lot of papers containing material that's probably actually quite standardizable remain opaque to me, and the notation invariably falls by the wayside if there's a code or plain-language description available.
Many times, math notations have been thought to be minimal, or the clearest possible, only to fall by the wayside.
Whereas this notation serves domain specialists well, it still leaves people like me somewhat confused
A cheat sheet - even to the practical norms - would go a long way
Here's a take from a mathematician-in-training, and it's biased toward research-level math, or at least math from the last hundred years:
Math is difficult, and a lot of what we have is the result of the sharpest minds doing their best to eke out whatever better understanding of something they can manage. Getting any sort of explanation for something is hard enough, but to get a clear theory with good notation takes an order of magnitude more effort and insight. This can take decades more of collective work.
Imagine complaining about cartographers from a thousand years ago having sketchy maps in "unexplored" regions. Maps are supposed to be precise, you say, there's actual earth there that the map represents! But it takes an extraordinary amount of effort to actually send people to these places to map it out -- it's hardly laziness. Mathematics can be the same way, where areas that are seemingly unrigorous are the sketches of what some explorers have seen (and they check that their accounts line up), then others hopefully come along and map it all in detail.
When reading papers, there's a fine balance of how much detail I want to see. For unfamiliar arguments and notation, it's great to have it explained right there, but I've found having too much detail frustrating sometimes, since after slogging through a page of it you realize "oh, this is the standard argument for such-and-such, I wish they had just said so." You tend to figure that something is being explained because there is some difference that's being pointed out.
I've been doing some formalization in Lean/mathlib, and it is truly an enormous amount of work to make things fully rigorous, even making it so that all notation has a formal grammar. It relies on Lean to fill in unstated details, and on figuring out ways to get it to do that properly and efficiently, since otherwise the notation gets completely unworkable.
> There's so much "pure" and "universal" about math, but the humans who write about it are too lazy to write about it in a rigorous manner.
Are you sure it's laziness? Maybe it's a result of there not actually being any universal notation (not even within subfields), or maybe the exactness you refer to really isn't necessary. This doesn't mean that unclear exposition is a good thing. Mathematical writing (as with all writing) should strive toward clarity. But clarity doesn't require some sort of minutely perfectly consistent notation which would be required by a computer, because humans are better than computers at handling exactly those kinds of situations.
> People can't be bothered to be exact about things because being exact is hard and people avoid hard work.
I think you have it wrong. People can't be bothered to be as exact because they don't need to be. People can understand things even if they are inexact. So can mathematicians. Honestly, this is a feature. If computers would just intuitively understand what I tell them to do, like a human assistant would, that would be a step up, not a step down, in human-computer interfaces.
> But clarity doesn't require some sort of minutely perfectly consistent notation which would be required by a computer
I made this point in another comment, but I think it bears repeating and elaboration: Consistency isn't required (at least outside any single paper), but explicitness would be a tremendous boon.
Software incorporates outside context all the time, but it pretty much always does it explicitly (though the explicitness may be transitive, i.e. dependencies of dependencies). Math papers often assume context that is not explicitly noted in the citations, nor in those papers' citations, and so on.
Instead, some of the context might only be found in other papers that cite the same papers you are tracking down. You sometimes need to follow citations both backward and forward from every link in the chain. And unlike following citations backward (i.e. the ones each author considered most relevant), the forward links aren't curated, and many (perhaps most) will be blind alleys (there may also be cycles in the citation graph, but these are relatively rare). But somehow you have to build up knowledge of (or at least a passing familiarity with) an encyclopedic corpus in order to recognize and place the context left implicit in any one paper, just to understand it.
I totally agree. I think that many mathematical papers aren't explained as well as they could be. My advisor was pretty adamant that papers should not be written in the proof-chasing style you describe, and that the author should clearly include the arguments they need (citing the authors they might have learned them from) unless those arguments are truly standard. No "using a method similar to [author] in Lemma 5 of [some paper]"; instead, just include it in your paper and make sure it fits in well.
That is just an example of bad exposition in my opinion. It's also not technically "unclear" in any notational sense so it's a bit of an aside from this argument. But I agree with you 100% that it is bad bad bad. This is a perfect example of why arguments like "does this proof make coq happy" totally misses the point.
> That is just an example of bad exposition in my opinion [and] a perfect example of why arguments like "does this proof make coq happy" totally miss the point.
In theory, some kind of checker could validate the semantics of a paper to just tell you whether the arguments made are complete. Not whether there is a correct formal proof, just pointing out any obscured leaps of faith[0]. A rough analogue to test suite coverage for code (which is also not any sort of guarantee of correctness, just basic reassurance that all (or most of) the code is tested and isn't broken in any obvious way, especially while making changes).
I'm trying to think of an equivalent for prose, and am coming up with examples like detecting conflicting descriptions of locations or named characters, or whether the author lost track of which character said which lines in the dialog.
> It's also not technically "unclear" in any notational sense
Perhaps not necessarily, but unfamiliar/borrowed/idiosyncratic notation is a perfect (and common) place for insufficient exposition to be hiding.
People can also understand each other through combinations of obscure slang, garbled audio, thick accents, and drunken slurring. It's still an unpleasant way to communicate.
Shall we be satisfied with the same low standards in a technical field, because it is how it is?
Hands-on users of math notation are complaining that it sucks. I'm not sure why a dismissive "works for me" is so often the default response.
> Hands-on users of math notation are complaining that it sucks. I'm not sure why a dismissive "works for me" is so often the default response.
It is really easy to complain. People also complain about every popular programming language, but it is really hard to make something that is actually better. It is easy to make something that you yourself think is better, but it is hard to make something that is better in practice.
> Hands-on users of math notation are complaining that it sucks. I'm not sure why a dismissive "works for me" is so often the default response.
Are you sure this is because the notation is unclear/imprecise or because you just don't like it? I like certain programming languages and certain programming styles and really don't like others. But in none of the cases (those I like nor those I don't) are they not 100% "clear". The code compiles and executes after all so there really isn't much of an argument that somehow it's underspecified.
The same thing exists in mathematics. There are certain fields of math whose traditional notation/style/approach/etc. are totally incomprehensible to me. There are also many mathematicians who would say the same about my preferences as well.
So my point is that all people are _different_. Some people like certain things and some people like others. How can you hope to please everyone simultaneously? In my experience, there is no field at all that is as precise as mathematics. Sure, "code" is precise, but (imo) professional programmers are nowhere near as precise in general design or conversation as mathematicians are. So I find the attack on supposedly bad mathematical notation a bit odd.
Mathematicians constantly try to come up with better methods of explaining things. They put more effort into it than basically any field in my experience. The problems are really that we as humans don't all think the same and that mathematics is just plain hard. We've improved mathematical communication immensely throughout history and we will continue to do so. But we'll never reach some sort of perfect communication style because no single such style could ever exist.
Yes, we are too lazy to be 100% formal and many times we are too lazy to be mostly formal. This is mostly because we target our writing to other mathematicians who have no need to see every small step and including every step makes the proofs long. On the other hand, I do feel that generally speaking mathematicians should show more of their work and skip fewer steps.
I find your statement "People can't be bothered to be exact about things because being exact is hard and people avoid hard work." to be very true. Being precise is difficult.
> I dislike maths notation as I find it lacks rigour.
I see this a lot from programmers, but in essence, you seem to be complaining that maths notation isn't what you want it to be, but is instead something else that mathematicians (and physicists and engineers) find useful.
As someone who's studied math and CS extensively: it's not that mathematicians don't need that rigor, it's that only certain sub-fields have a culture of this kind of notational rigor. You absolutely see little bubbles of research, 2-4 professors, get sealed off from the rest of the research community because their notational practices are so sloppy that no one wants to bother, whereas others make it easy to understand their work.
CS as a field just seems to have a higher base standard for explaining their notation and ideas. It helps in cross-collaboration by making it significantly easier to self study.
Related to this, I'd say math books have a significantly worse pedagogical culture with regard to both notation and defining prerequisites. It's very common for a math book to say "we expect readers to have taken a discrete math course" and then not define notation, despite the fact that the topics covered in discrete math vary greatly from school to school and may not overlap. Math professors frequently have to paper over these problems at uni when they realize the class does not understand some notation. CS is just better about this, and I can only explain it as part of the culture and tradition.
> CS is just better about this, and I can only explain it as part of the culture and tradition.
CS professors write just as incomprehensible math as everyone else; as you can see, many people here bring up examples of CS professors writing incomprehensible math in their papers.
Moreover, you might think that Lisp notation would improve it, but CS papers using S-expressions are just as incomprehensible, even to a seasoned Lisp programmer.
Math notations are two-dimensional and don't suffer very badly from structural ambiguities, so that actually fixes almost nothing.
The problem in unfamiliar math notations is rarely the chunking of which clump is a child of which clump.
E.g. say that some paper uses angle brackets with some deep meaning that you can learn about if you recurse three levels down into the list of references.
I'm not confused that in <Ap>, the Ap thing is a child of the angle brackets; and calling it (frob (A p)) doesn't help much in this regard.
However, at least you can search literature for the word frob more easily than for angle brackets.
>I'm not confused that in <Ap>, the Ap thing is a child of the angle brackets; and calling it (frob (A p)) doesn't help much in this regard. However, at least you can search literature for the word frob more easily than for angle brackets.
Also it's perhaps more likely to occur to the person using the frob notation that maybe they should define frob somewhere.
I think you can easily pick examples of terribly written papers from any field, but I'm attempting to describe my experience of interacting with both disciplines over many years, reading many books and papers from both.
You may not realize that in a given field, the same variable representing the same basic thing may be negated depending on which part of the world the paper is published from. This can be fine if it's your subfield and you happen to know to be careful with said variable. I don't personally dig into a lot of disparate maths across different papers very often, but this is the single biggest complaint my polyglot friend talks about. The second biggest is when he has to read and parse the math from a dozen unrelated papers in a field to find out what some random undefined variable means in the actual paper he cares about.
I graduated in physics, so I am no stranger to math notation quirks, and I think I also understand their usefulness at times (conciseness of notation, etc.). And it can be dangerous, too, as soon as the notation lures you into doing transformations that are invalid.
It doesn't help that the notation is often poorly defined, and sometimes a weird mix of notations is presented.
Overall the situation is also not pleasant for math people changing topics, or for physicists reading papers from physical chemistry professors who 'grew up' in mathematical chemistry.
I think "useful" is doing a lot of work here. A lot of math notation exists clearly to gate keep. It's often nonsensical. It's a shame because it really makes mathematicians look bad (re:annoying) to those who can see through it. It's not hard to see through it or anything, but it is obnoxious. All you need is an english explanation of the notation, and then you're good, but often all of the sources on the topic are written in the same obnoxious babble language.
This is supposed to be an algorithm implemented in code. It's essentially illegible without code examples, which it doesn't feature. Code examples tell you what the cipher signifies; at no point does the cipher provide any value to the learner. Fanciful bayes-theoretical statements and so on basically reduce to "iteratively build enlarging valid states." Given the fact that this simple statement is missing, I question if the professor has some sort of communication disorder or if they're just a troll. Similar to pomo philosophers, it's probably a mix.
Lecture PowerPoints are bad everywhere, since you are meant to listen to the lecturer speaking about them; they aren't meant to be read independently like this.
Try to understand programming from a programming lecture PowerPoint; it is usually impossible.
Edit: Also, you can't write code for what he is talking about in that lecture. Code cannot deal with infinities or continuous values. You'd get approximations, which isn't the same thing; then you'd need to prove that those approximations are good enough, which would have to be done without code anyway.
Yes. What's wrong with changing math notation? Why wouldn't you do it if you know that it would make it easier for others to approach? What's the rationale behind doing exactly nothing to make the notation more approachable for the masses?
Math notation has evolved to be what it is because it is useful for the actual doing of math, and the communication of math to those who have sufficient background. It's not deliberately designed to keep people out, and there are literally hundreds of thousands of books that introduce people to the notations used, to help on-board them.
Haskell is unreadable to one who has not trained in it or similar languages ... why don't they make the syntax more readable? Or C++ with its modern templating ... why don't they change the syntax to make it more readable?
You might be tired of wandering into someone else's area of expertise and telling them:
You must change! You must make it more accessible!
Believe me, mathematicians are tired of non-mathematicians wandering up and saying:
Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it? Do you really believe that no one has tried to make it more accessible?
Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?
> It's not deliberately designed to keep people out,
It looks that way, to many people, even in this thread.
> why don't they change the syntax to make it more readable?
They do, actually. Quite often at that. It's called releasing a new version.
> Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
No. Computer code is as far from intuitive as it can be. Nobody says otherwise. So you don't need to do anything to get there, the notation's good on that front (meaning: completely non-intuitive).
That's where the IDEs come in. And debuggers. And other tools. Lots of tools. They really help. You could use them, because the IDEs-for-math already exist. In college I had exactly one semester to familiarize myself with one of them, and it was never mentioned again until graduation.
> Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it?
> Do you really believe that no one has tried to make it more accessible?
Why did they fail? (If they didn't, where's the exponential growth in first-year mathematicians in training?)
> Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?
I'm 100% not interested in why it is like this, it's not my problem, so I really wouldn't know. Would you be interested in how at some point you had to write `class X(object):` and that it later changed to simply `class X:`? Would you go hunt on the mailing list to see who exactly came up with the idea? Or why they thought it would be better that way? Would you be interested in that if you just had to write a 10-lines of Python, to scrape some web site?
> Did you just use an example from 2600 years ago to make a point?
Yes? What's wrong with that?
I'm pointing out the most widely known example, to make a point, which is: "it is possible to design notation specifically for keeping outsiders out". I'm not saying that modern math notation is like that. I think, as a layman, that it probably evolved over a long time and so is full of idiosyncrasies that made perfect sense back when they were introduced (my GP seems to describe it in similar terms, so I hope I'm not that far removed from reality).
> It's not deliberately designed to keep people out
Surely you must realize that you're protesting this because it has this reputation, though?
And surely you must realize that it has this reputation for a reason?
When I was a teenager and took my first calculus course, I struggled with summation for three days. When I finally went to my dad he looked at me funny and said "your teacher is an idiot, isn't he? It's a for loop."
I had been writing for loops for seven years at that age. I almost cried. It was like a lightswitch.
The problem was always that nobody had ever actually explained what the symbol meant in any practical way. Every piece of terminology was explained with other terminology, when there was absolutely no reason to do so.
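That's all it would have taken: something like this (illustrative Python, obviously not how a textbook would phrase it, and "summation" here is just my own throwaway name):

# sigma notation, sum_{i=1}^{n} f(i), is operationally an accumulating for loop
def summation(f, lower, upper):
    total = 0
    for i in range(lower, upper + 1):
        total += f(i)
    return total

print(summation(lambda i: i * i, 1, 10))  # sum of i^2 for i = 1..10, prints 385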
Mathematics has the reputation for impermeability and unwelcomingness for a reason.
It's because you guys are ignoring us saying "we want to learn, please write out a cheat sheet" and saying "yes, but don't you see" instead of just building the easy on-ramp that every other field on earth has built
.
> > You might be tired of wandering into someone else's area of expertise and telling them:
>
> You must change! You must make it more accessible!
No, we generally just fix the problem. If people are saying "this isn't accessible enough," we just work on it.
I would like for you personally to be aware of Bret Victor's work. He's incredibly potent and clear on these topics.
Programmers work really really hard on learnability and understandability. This is a big deal to us. That's why we can't understand why it's not a big deal to you.
We have, in fact, mostly given up on waiting for you, and started to make our own tooling to understand your work, using obvious principles like live editors and witnessable effects.
We frequently think of our programming languages as new modes for thought. This line of discussion is particularly popular in the Lisp, Haskell, and Forth communities, though it crops up at some level everywhere.
We frequently think that the more opaque the language, the less useful it is in this way.
That's why programming languages, which are arguably 70 years old as a field, have much more powerful tools for teaching and explanation than math, which is older than written language
You guys don't even have documentation extraction going yet. We have documentation where you have a little code box and you can type things and try it. You can screw with it. You can see what happens.
This is why we care about things like Active Reading and explorable explanations.
Math hasn't grokked non-symbolic communication since Archimedes; that's why it took nearly two thousand years to catch up with him.
We are asking you to come into step with the didactic tools of the modern world. It's not the 1850s anymore. We have better stuff than blackboards.
Are these flat symbolic equations cutting it for you guys to communicate with one another? Sure.
Are they cutting it for you guys to onboard new talent, or make your wealth available to the outside? No. (Do you realize that there is an outside to you, which isn't true of most technical fields anymore?)
These problems are not unique to mathematics, of course. Formal logic is similar. Within my own field of programming, the AI field is similar, as is control theory, as tends to be database work. They don't want to open the doors. You have to spend six years earning it.
But the hard truth is there are more difficult fields than mathematics that have managed to surmount these problems, such as physics (which, no, is not applied mathematics), and I think it might be time to stop protesting and start asking yourself "am I failing the next generation of mathematicians?"
An example of someone I believe to be a genuinely good math communicator in the modern era is Three Blue One Brown.
.
> > Believe me, mathematicians are tired of non-mathematicians wandering up and saying:
>
> Look! Computer programs are easy and intuitive and everyone can understand them, even without training! Make math like that!
Then fix the problem.
It IS fixable.
.
> Do you really believe that math notation is deliberately designed to make it hard for people untrained in math to learn how to use it?
Given the way you guys push back on being asked to write simple reference material?
No, but I understand why they do.
.
> Do you really believe that no one has tried to make it more accessible?
No. Instead, I believe that nobody has succeeded.
Try to calm down a bit, won't you? People tried to explain Berkeley sockets in a simple way for 12 years before Beej showed up and succeeded. The Little Schemer was 16 years after Lisp.
Explaining is one of the very hardest things that exists.
We're not saying you didn't try! The battlefield is littered with the corpses of attempts to get past Flatland.
We're just saying "you haven't succeeded yet and this is important. Keep trying."
.
> Do you really believe you know more about why math notation is what it is than mathematicians and trained mathematics educators do?
No. The literal ask is for you to repair that. Crimeny.
> Surely you must realize that you're protesting this because it has this reputation, though?
I've never heard anyone make this accusation until I read it here on HN today. The reputation doesn't seem to be widespread.
> Programmers work really really hard on learnability and understandability. This is a big deal to us. That's why we can't understand why it's not a big deal to you.
How to better teach math is like one of the most studied topics in education, since it is so extremely important for so many outcomes. People learn programming faster since programming is simply easier, not because more effort has been put into making programming easy. There hasn't been; way more effort has been put into making math easy, and the math we have is the result of all that work.
> Given the way you guys push back on being asked to write simple reference material?
Nobody pushes back on writing simple reference manuals. There are tons of simple reference manuals for math everywhere on the internet, in most math papers, in most math books, everywhere! Yet still people fail to understand it. Many billions have been put into trying to improve math education, trying to find shortcuts, trying to do anything at all. You are simply ignorant, thinking that there are some quick-fix, super-easy-to-implement things that would magically make people understand math. There aren't. It is possible that math education could get improved, but it won't be a simple thing.
> > Surely you must realize that you're protesting this because it has this reputation, though?
>
> I've never heard anyone make this accusation until I read it here on HN today. The reputation doesn't seem to be widespread.
*blinks* Really?
It's kind of a famous thing. Teachers are taught how to cope with math phobia; other subjects (with the exception of dissection) generally do not create this effect.
.
> How to better teach math is like one of the most studied topics in education since it is so extremely
Unsuccessfully
Many other disciplines are equally important and don't have the desperation for repair that math does
It's so bad that it's gotten into popular music a dozen times; you can easily point to Tom Lehrer, Weird Al, even 2 Chainz for broaching the topic
.
> People learn programming faster since programming is simply easier
Keep telling yourself that
Remember who's cracking things like the four-color theorem.
.
> There hasn't been; way more effort has been put into making math easy, and the math we have is the result of all that work.
You seem not to be results focused.
.
> Nobody pushes back on writing simple reference manuals.
The person I was responding to was. Three different people have done so with me in this thread alone.
Notice that you've changed the topic to "reference manuals," which is *not* what was being requested
Ostensibly this is the part at which you launch into an explanation about how a simple cheat sheet that says things like " || usually means absolute value or magnitude" somehow isn't possible, even though it totally is
Next you'll explain how simple summary symbols like that are somehow harder than Haskell, which was able to produce a cheat sheet just fine
In reality, mathematicians just don't have the faintest clue how difficult other fields are, and are too busy patting themselves on the backs for their challenging intellects to realize that their work is under-used due to their inability to produce legends and keys
.
> Yet still people fail to understand it.
You're not producing what's being requested, then you're surprised that what you're actually producing isn't working.
.
> trying to find shortcuts, trying to do anything at all.
All that's being requested is a two page operator cheat sheet. If you'd stop wisely pontificating and just try it, you'd find it's really quite straightforward.
I produced one for my students. It's just that my own education in mathematics only goes so far.
It genuinely is *not* difficult
You are doing the refusing you insist nobody's doing, right now
.
> You are simply ignorant, thinking that there are some quick-fix, super-easy-to-implement things
Not really, no.
I've succeeded at this.
You are protesting against things that were not requested. Nobody asked for any "quick fixes." Nobody asked for any "magical make understand."
These are things you said, not things I said.
I said something very simple and very practical. Something that very much can be done.
I want a two page PDF which puts operators next to their names, so the load on my memory is lower.
Every time I ask that, some math fan goes on forever about how things I never asked for aren't possible, and ties that up with personal attacks.
> It is possible that math education could get improved
Please stop trying to solve every problem. Nobody asked you to take on that Herculean task.
What was actually requested was a simple two page PDF. That's a thing that a single person can do in an hour (well, maybe as a webpage, making an actual PDF is a hassle.)
The constant attempt to change the topic to some greater thing blinds you to how easy fixing this actually is.
Seriously, nobody is saying "solve math education for all people."
For `|x|` it would just say "absolute value, magnitude, length, or cardinality."
It's a quick reference chart for people who already know the material but haven't actively used it for ten years, and to help know what to google for the rare cases where you don't. It doesn't teach anything.
This shouldn't be a big deal, and people keep asking for it because it would *really* help
> What was actually requested was a simple two page PDF.
But this is impossible, because, as I learned today, "math is a small field without money for things like this".
Thanks to my discussion here I was able to shift my perspective. Instead of thinking about an abstract thing, like "improving readability of math papers to non-mathematicians", I understood that I have to focus on people! In other words, on a bunch of guys frustrated that other STEM guys earn multiples of what they earn, even though their fields are so much "easier". Why would they help those various others use their work? They worked so hard to create the papers, living in a one-bedroom apartment (shared with a philosopher), writing their revolutionary insights on napkins stolen from McDonald's with a piece of charcoal, while those others can afford to buy computers (and even paper) easily, even though they don't work nearly as hard as mathematicians!
It's just so unfair, and of course it's not like mathematicians are jealous or anything (not at all! really! honest!), it's just genuinely so much "harder" to do math, that most of those others won't be able to understand no matter what! Even if they actually listened to the masses and did as requested, the mathematicians already know that it would be useless, so there's no point in trying. Also, this is obvious, but worth noting: saying that most people are simply too dumb to ever understand the intricacies of math is not a prejudiced bias, ok, that's simply a conclusion of centuries of research into math education!
I honestly stopped expecting they'll do anything sensible ever, unless they really have no other choice. You might win against some people, but you won't ever defeat human nature.
The fact that the teaching material is so horrible is probably the biggest reason learning programming is so easy, because if the teaching material were any good then CS grads would actually know difficult things and it would be harder for me to compete. But as it is, it is trivial to learn what they did, because they don't learn much. For programming, it didn't take me long to learn well enough to get into Google and work on services with a million QPS or on distributed machine-learning training, because programming is really, really simple. Not because you are good at teaching, no; I just sat and learned on my own by writing programs, and all the programming tutorials and help were horrible.
> But this is impossible, because, as I learned today, "math is a small field without money for things like this".
Of course it isn't impossible, that was never the problem here. Mathematicians have written plenty of such cheat sheets, you can find them everywhere just by Googling. Here is one for physics:
Can you please understand that you are complaining that nobody created something that many, many people have created many times over? I found those by googling "physics notation" and "math notation". Was that really too hard for you? If you want a cheat sheet specific to CS-related papers, then that cheat sheet has to be written by someone with a CS background, since a mathematician won't know what is useful to you, nor will they know what quirky notation CS people use in their own papers, since every domain of math has its own quirks.
What you are doing here is the equivalent of a guy wanting to learn python, going to the C++ committee and complaining that the language is not well documented online, and even when the C++ committee nicely shows you that there is in fact python documentation online, you still complain and say that nobody cares about your woes because that python documentation was in fact not easy enough for you to read!
Millions of programmers that work hard and for a long time to master their trade are just dumb for not learning it instantly, like you. Poor bastards, deluding themselves they do somehow advance the field, while in reality programmers are just doing "tabs vs spaces" over and over again, and any kind of advancement is given to them by our benevolent mathematical overlords.
No. We've worked for decades to make programming this simple. You're driving an obviously overpowered car in a race, but choose to conveniently forget about its specs once you win.
> Mathematicians have written plenty of such cheat sheets, you can find them everywhere just by Googling. Here is one for physics
Sure they were created. I never said they weren't, so please stop insinuating that. I (not GGP) said specifically:
> It has literally nothing to do with explaining the syntax *close to where it's used*.
And that is not being done, so I really don't understand how the rest of the paragraph has anything to do with me.
> say that nobody cares about your woes because that python documentation was in fact not easy enough for you to read!
So? What's wrong with that? Are you telling me I'm dumb? Is it really all you can do?
Well, it's either that I'm simply an uneducated idiot, or that the documentation indeed could be better. You seem to default to the former. Well, I feel differently about that.
> So? What's wrong with that? Are you telling me I'm dumb? Is it really all you can do?
No, I am saying that people take a long time to understand this. Programmers expect things to have a simple explanation behind them, since in programming everything does have a simple explanation, since computers can only do very few things. But in math you quickly expand to concepts beyond that; even as early as calculus you add infinities and continuous quantities and how to work with those, and you can't ever program those things since those operations cannot be expressed using finite instructions. There is no "this function performs an integral on this other thing and is expressed using these steps of operations".
For example, let's say you want to sum 1/N^2 for N from 1 to infinity. How would you express that? You can't do it in a loop, since the loop never ends. You can stop at an arbitrary point, but how do you know that stopping there results in a good value? You can't, unless you do the math; calculus is a good tool for that. With it we can show that summing 1/N grows without bound, while 1/N^2 settles on a value, and it gives you a way to calculate that value with error bars.
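To make the contrast concrete, here's roughly what blindly computing partial sums in Python gets you (a sketch of the problem, not a solution to it):

# partial sums up to N = 1,000,000
harmonic = 0.0  # running sum of 1/n
basel = 0.0     # running sum of 1/n^2
for n in range(1, 1_000_001):
    harmonic += 1 / n
    basel += 1 / n ** 2

print(harmonic)  # about 14.39, and it keeps growing (roughly like ln(N))
print(basel)     # about 1.64493, but the loop alone can't tell you the limit is pi^2/6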
> you can't ever program those things since those operations cannot be expressed using finite instructions.
And now you're telling me that Mathematica, Maple and the like don't exist. Yeah, we can't express the concepts directly, but if we're very good at one thing, it's emulation.
> How would you express that?
Like this:
>>> from sympy import *
>>> i, n, m = symbols('i n m', integer=True)
>>> # For example, lets say you want to sum 1/N^2 for N from 1 to infinity.
>>> Sum(1 / n ** 2, (n, 1, oo))
  ∞
 ____
 ╲
  ╲    1
   ╲   ──
   ╱    2
  ╱    n
 ╱
 ‾‾‾‾
n = 1
It would do the same with the integral. There's even that strange elongated S and everything...
I said: emulation. You don't actually need to stretch your mind to infinity to reason about it, right? I mean, you're not a matrioshka brain the size of a star, right? (Just making sure) So if your mind can cope with infinities by using just the very finite storage accessible to you, what prevents computers from doing the same? Like this:
>>> summation(1 / n ** 2, (n, 1, oo))
 2
π
──
6
Dunno, seems legit? How come it appeared, even though my computer has only 32gb of ram, waaaay too little to hold infinity?
EDIT: obviously, I still didn't program it myself! I'm dumb, after all. But someone apparently did.
EDIT2: and that person also made sure that even the dumber coders (like me) can use, and benefit from, the tool they wrote.
A computer understands instructions. You can give it instructions to loop and sum values. Those aren't instructions telling the computer what an infinite sum is. Math is about making humans understand such concepts; we have yet to be able to program those concepts.
For example, when you write a sum in python, the computer will sum the values and do the work for you. But when you write an infinite sum like that, you the human still have to do all the math work of figuring out what to do with it, so you didn't circumvent the need to do math at all. The computer doesn't understand the sum. You can program simple rules for it to try to simplify it, but it still isn't nearly as versatile as a skilled human. For example, this problem is a basic problem from a first-year course; it makes sense that someone programmed in the human-derived solution for it, but they haven't actually programmed the logic for infinite sums into the computer.
That's obvious? Computers don't understand anything. They're just machines. What did you expect?
> you the human still have to do all the math work of figuring out what to do with it
Yeah. But with this tool, as dumb as I-the-human am, I am able to do that math even if I don't understand it, but more importantly I also have a much better chance of understanding what's happening. That's because the explanation is close to the place where it's needed (in a docstring, one key press away) and it is expressed in a way that I'm most familiar with (as code).
You're going to tell me that if I want to use math, I have to let go of every preconception I had and crawl into Plato's cave, to see the light in the darkness. I'm too dumb to do this, unfortunately, and a little bit too busy. It took me almost five years to really master C++ programming after all, while you skimmed Stroustrup's book and went on to write Qt the next day.
> but they haven't actually programmed the logic for infinite sums into the computer
They did. Just not the whole of it. Are you sure it's impossible to improve to the point where a skilled human no longer holds the definitive advantage?
Computers can't understand anything. But they are very good at blindly following the rules. The question is not what to do to make computers understand infinity, but what set of rules can emulate the real thing well enough. We're getting there. Maybe in another 100-200 years CS will swallow maths, who knows?
> That's obvious? Computers don't understand anything. They're just machines. What did you expect?
They do arithmetic perfectly.
> You're going to tell me that if I want to use math, I have to let go of every preconception I had and crawl into Plato's cave, to see the light in the darkness.
No, math is a huge field with an extreme amount of flexibility and power. You have lists of solutions like this one; you can put those in a program and then have the program look them up and spit out answers, with maybe some simple algebra testing:
But nobody will be able to just program all solutions to all math problems like that, or give you a bible of everything to look up, as nobody knows what math is useful to you specifically. This is why every field that applies math tends to create its own subfield of math, like mathematical physics or chemistry or statistics or computer science. They create their own compendiums of useful notation and results that you can use. Nobody understands even a fraction of all of that and where it is useful; instead you have to understand what discipline the paper you are reading is coming from, usually noted somewhere on the paper, and then look that up. For someone doing computer science you mostly look at the subfields statistics, probability, combinatorics and algorithmic complexity. Most of those papers will be written by people who do math mostly on the side, though, not pure mathematicians. Pure mathematicians mostly work on problems that are too abstract for programming. (Pure vs applied: probability, statistics and combinatorics are examples of applied math fields.)
Edit: So from my perspective, the main problem you have is that CS professors have yet to flesh out and formalize the subfield of computer science math. When I studied higher-level physics, all the math was taught by physics professors, since mathematicians don't understand that math; it is mostly developed by physicists. Why doesn't CS do the same? You can't expect mathematicians to do that work for your field, as there are too many fields depending on math and too few mathematicians to handle everything.
So does an abacus. Even something like this: https://www.youtube.com/watch?v=GcDshWmhF4A is capable of doing arithmetic. Does that mean the marbles somehow understand what addition is?
> look them up and spit out answers,
Wolfram Alpha does this, and let me tell you: things I couldn't understand listening to lectures and trying to work through handbooks (well, most of them were hand-me-downs from the '60s, so take that as you will), became clear in just a few days of work. I didn't cancel my subscription for 2 years after that, out of gratitude.
But, most of the papers out there don't use interactive platforms for publishing. Even though they could. But as you said, that would be too much work, so they simply don't. I accepted it, it's all good now.
> Pure mathematicians mostly work on problems that are too abstract for programming.
I was pretty sure we're not talking about "pure" math. I mean, we both agree that that particular branch is of interest to a tiny minority. Of course the pure mathematicians can stay inside their ivory tower forever, nobody cares. Notice that I repeated the word "use" multiple times. I was trying to imply having a goal that's not mathematical in nature.
EDIT:
> Why doesn't CS do the same? You can't expect mathematicians to do that work for your field, as there are too many fields depending on math and too few mathematicians to handle everything.
I... don't know. That's a very good question. Yes, teaching math in CS departments is generally outsourced to mathematicians. It was like that for me. You may be right; that could be the reason for all my problems.
So, you mean that if we want to have math that's understandable for programmers, we need to make it ourselves? Fair enough if so, I think.
> Wolfram Alpha does this, and let me tell you: things I couldn't understand listening to lectures and trying to work through handbooks (well, most of them from the '60s, so take as you will), became clear in just a few days of work. I didn't cancel my subscription for 2 years after that, out of gratitude.
Wolfram Alpha hasn't programmed in everything, but it includes most things you will see as an undergrad. In my undergrad we used a 500-page reference book containing most of the formulas and notations of STEM undergrad programs; wolfram alpha is basically that book written as a program.
> I was pretty sure we're not talking about "pure" math. I mean, we both agree that that particular branch is of interest to a tiny minority. Of course the pure mathematicians can stay inside their ivory tower forever, nobody cares. Notice that I repeated the word "use" multiple times. I was trying to imply having a goal that's not mathematical in nature.
But then your main problem is mostly with computer scientists' or statisticians' math papers, not mathematicians'. The problem with papers written by people who apply math is that usually they don't really understand the math, they just apply it, so they can't explain it well. There isn't much you can do about that. And papers written by pure mathematicians, well, they sure do understand what they do, but they use mountains of abstractions in order to make the problem possible for a human to reason about, so it won't be easy to understand either.
For example, most who study statistics view the math as a tool to solve problems. They won't put in the effort to fully understand it, and can therefore only give you the second-hand explanations they learned themselves, and the result is that those explanations will likely be missing a lot. Pure mathematicians, however, won't understand the way you want to use the math as a tool, so they might have a full explanation of the math but they won't understand how to apply it in your field, so their explanations will also be bad. What you need is someone who did the work to study the pure math side, then did a lot of work applying math, and can then explain it. Such people are extremely rare though. I went that route, I knew a few others who did, but most of those I know didn't. They either went pure or applied, not both.
> wolfram alpha is basically that book written as a program.
Exactly! Which is why I love it, and why I think it would be good to extend it to encompass more than just that one book, too.
> The problem with papers written by people who apply math is that usually they don't really understand the math, they just apply it, so they can't explain it well.
This is already a third time you said something that made me really think again about the issue...
It sounds very plausible. I think I've probably never even seen a proper maths paper. I've been struggling with math, but it didn't occur to me that part of the reason (other than me being dumb) was that the authors struggled with it too! Were I in their place, I would definitely also try to reduce the amount of explanation to the absolute minimum, so that there's less of a chance I'd be called out for it.
EDIT:
> I went that route, I knew a few others who did, but most of those I know didn't.
I'm not joking, this is very serious: is it possible to learn from you somewhere/somehow? Because you're incredible, honestly; your patience is not running out even now, this deep into the comments thread; and you really can speak in programming terms - this much I can see - and I think you're at least as competent on the other side (though I obviously can't judge it myself, so I'm deferring to (your) authority that I came to believe in). I'd be incredibly happy if I could somehow get a bit of your help.
(Seriously, I'm not sarcastic or anything, this is 100% honest. EDIT2: well, I still disagree with you on some points :P but I think if it's from you, I wouldn't have a problem with actually listening to you)
> there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.
This is such a disingenuous take. How many of the source code files you write are 100% self-contained and well defined? I'd bet not a single one of them is. You reference libraries, you depend on specific compiler/runtime/OS versions, you reference other files, etc. If you take a look at any of these scientific papers you call "badly defined", did you really go through all of the referenced papers and check whether they defined the things you didn't get? If not, then you can't be sure that the paper uses undefined notation. If you argue that it is too much work to go through that many references, well, that is what you would have to do to understand one of your program files.
One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can't work without being specified.
When you get down to the microcode of the CPU that implements the ISA you might have an issue if it's ill-specified. You might be talking about an ISA like RISC-V, though, specified at a level sufficient to go down to the gates. You might be talking about an ISA like 6502 where the gate-level implementations have been reverse-engineered.
You can take programming all the way down to boolean logic if you need to, and the tools are readily available. They don't rely on you "just knowing" something.
> One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can't work without being specified.
I doubt you actually can do that and understand it all. A computer can do it, but I doubt you, the human, can do that and get a perfect picture of any non-trivial program without making errors. Human math is a human language first and foremost; its grammar is human language, which is used to define things and symbols. This lets us write things that humans can actually read and understand the entirety of, unlike a million lines of code or CPU instructions.
Show me a program written by 10 programmers over 10 years and I doubt anyone really understands all of it. But we have mathematical fields that hundreds of mathematicians have written over centuries, and people are still able to understand all of it. It is true that a computer can easily read a computer program, but since we are arguing about teaching humans, you would need to show evidence that humans can actually read and understand complex code well.
> because programs can't work without being specified.
Someone hasn't read the C spec, with everything it explicitly leaves as undefined behavior.
Programs working on real systems is very different from those systems being formally specified. I suspect that if you only had access to the pile of documentation and no real computer system - if you were an alien trying to reconstruct it, for example - you'd hit serious problems.
Undefined behavior isn't a feature. A spec isn't an implementation, either.
All behavior in an implementation can be teased-out if given sufficient time.
> if you were an alien trying to reconstruct it, for example - you'd hit serious problems.
I can't speak to alien minds. Considering the feats of reverse-engineering I've seen in the IT world (software security, semiconductor reverse-engineering) or in cryptography (the breaking of the Japanese Purple cipher in WWII, for example), I think it's safe to say humans are really, really good at reverse-engineering other human-created systems from close to nothing. Starting with documentation would be a step up.
Yes, humans are incredible at reverse engineering. My point was about specification, and what happens if you have only a specification and no implementation. Because that's more closely analogous to the mathematical situation, where you're manipulating the specification.
You said:
> because programs can't work without being specified.
.. what I think you may have meant was "can't work without being implemented", because your subsequent comments are all about implementation.
> Undefined behavior isn't a feature
Yes it is, it's a feature of the C specification.
This is where a whole load of pain and insecurity comes from, because, as you say, the implementations must do something when encountering undefined behavior, and people learn what usually happens; then an improvement is made to the optimizer which changes the implementation.
> All behavior in an implementation can be teased-out if given sufficient time.
Can it? Given what? You would need to understand how the CPU is supposed to execute the compiled code to do that. In order to understand the CPU you would need to read the manual for its instruction set, which is written in human language and hence no better defined than math. At best you get the same level of strictness as math.
If you assume you already have a perfect knowledge of the CPU workings, then I can just assume that you already have perfect knowledge of the relevant math topic and hence don't even need to read the paper to understand the paper. Human knowledge needs to come from somewhere. If you can read a programming language manual then you can read math. Every math paper is its own DSL in this context with its own small explanations for how it does things.
> Every math paper is its own DSL in this context with its own small explanations for how it does things.
That's really the point though: not every piece of software defines its own DSL, nor does it necessarily incorporate a DSL from some library or framework (which in turn may or may not borrow from other DSLs, etc.). It is also impossible to incorporate something from other software without actually referencing it explicitly.
Math, though, is more like prose in this respect – while any given novel probably has a lot of structure, terminology, and notation in common with other works in its genre, unless it is extremely derivative it almost certainly has a few quirks and innovations specific to the author or even unique to that particular work that you can absorb while reading or puzzle out due to context, as long as you accept that the context is quite a lot of other works in the genre (this is more true of some genres/subfields than others). Unlike novels, at least in math papers (but not necessarily books) you get explicit references to the other works that the author considered most relevant, but those references are not usually sufficient on their own, nor necessarily complete, and you have to do more spelunking or happen to have done it already.
Finally, like prose, with math you have to rely on other (subsequent) sources to point out deficiencies in the work, or figure them out on your own. Math papers, once published, don't usually get bug fixes and new releases, you're expected to be aware (from the context that has grown around the paper post-publication) what the problems are. Which means reading citations forward in time as well as backward for each referenced paper. The combinatorial explosion is ridiculous.
It would be great if there were something like tour guides published that just marked out the branching garden paths of concepts and notation borrowed and adapted between publications, but textbooks tend to focus on teaching one particular garden path.
> It is also impossible to incorporate something from other software without actually referencing it explicitly.
No, some programming languages just inject symbols based on context. You'd have to compile it with the right dependencies for it to work, so from the file alone it is impossible to know what it is supposed to be.
And even if they reference some other file, that file might not even be present in the codebase; instead some framework says "fetch this file from some remote repository at this URL on the internet" and then it fetches some file from the node repository, which could be a different file tomorrow for all we know. This sort of time variance is non-existent in math, so to me math is way more readable than most code.
And you have probably seen a programming tutorial or similar which uses library functions that no longer exist in modern versions, tells you to call a function that lives in a library the tutorial forgot to mention, or does many of the other things that can go wrong.
> Well, okay, yes, not all software projects deliver reproducible builds of their software. Some software is, in fact, complete garbage.
And not all math papers are properly documented either. Some math papers are in fact complete garbage. Why are you complaining about an entire field just because some of it is garbage?
As someone trained in mathematics, I can tell you that using single character variables allows one to focus better on the concepts abstractly which is one of the goals of mathematics. That is to say, it is a practice well-suited to mathematics.
It doesn't carry over to programming where explicit variables are better suited. In mathematics one is dealing with relatively few concepts compared to a typical program so assigning a single letter (applied consistently) to each is not a problem. This is not so in programming, except for a few cases like using i and j for loop variables (back when programs had explicit loops).
As for programmers: forget about the names. Does every C source file that uses pointer arithmetic include an explanation of how it works? Nope. They just use it and assume the reader understands it or is clever enough to ask for help or read up on the language.
Mathematical writing is similar. At some point you have to assume an audience, which may be more or less mathematically literate. If you're writing for graduate students or experts in a domain, you don't include a tutorial and description of literally every term, you can assume they're familiar with the domain jargon (just like C programmers can assume that others who read their program understand pointers and other program elements). Whenever something is being used that is unique to the context, a definition is typically provided, at least if the writer is halfway decent.
If the audience is assumed to be less mathematically literate (like a Calculus course textbook audience), then more terms will be defined (chapter 1 of most Calculus books include a definition of "function"). But a paper on some Calculus topic shouldn't have to define the integral, it should be able to use it because the audience will be expected to understand Calculus.
Scientists and engineers write code with single-letter variables all the time and don’t seem to have any large amount of trouble with it. Long variable names seriously limit the ability to put complexity in one place and make it understandable.
It’s not horrible, it’s different, has different goals and different audiences. Context is king, and the bulk of professional programmers criticizing scientist code is just lack of context and a different set of priorities.
Coming from a more science-based background, I often think programmers write horrible code, as I search in vain for where anything actually happens in a sea of abstractions.
I recall my disappointment when, as an artist, I started studying different maths for use in animation. I would open a book from a university library and expect to find a page with a summary of the notation used in the book. Maps have this, I would grumble, why not math books?
I'm glad I'm not the only person like this. I've never liked traditional math notation and found it about as useful as traditional musical notation, that is, hard to read for the layman and for no other reason than "this is how people have been doing it for a long time". Maybe I'm in the minority, but when I read a CS paper I mostly ignore the maths and then go to the source code or pseudocode to see how the algorithm was implemented.
> ...for no other reason than "this is how people have been doing it for a long time".
I disagree. Math notation has evolved to be as it is because it is useful for the purpose of doing math. If there were some way of doing it better, people would be evolving to be doing so.
In some ways they are ... people are using computer algebra packages more for a lot of the grunt work, and are using proof assistants to verify some things, but there's a lot of math that's still done by sketching why something is true and letting the reader work through it. Math notation isn't about executing algorithms, it's about communicating what the result is, and why it works.
"Doing Math" is not "Writing Programs", so math notation is different.
> If there were some way of doing it better, people would be evolving to be doing so.
I don't see why it wouldn't be some kind of local maximum. Maybe there are better ways, but they are sufficiently far away from current notation that they aren't even thought about.
Edsger Dijkstra, who was a mathematician by training, wrote a wonderful little monograph on this subject called The notational conventions I adopted, and why[1]. I am particularly fond of his commented equational proof format.
> I think a real problem in this area is the belief that there is "one true notation" and that everything is unambiguous and clearly defined.
Just to back up this point: In probably every university-level math book I've read, they introduce and explain all the notation used. In the preface and/or as concepts are introduced.
There are lists at wikipedia [1] and other places, but I'm not sure how valuable it is out of context.
It's not entirely unlikely that I am remembering just the good stuff :) But I was surprised how many books would define even the most common notation, like ⊂, ∀, and ∃.
I guess if you call your book "Introduction to..." you ought to do that. And it seems that all books were called that, regardless of how narrow and advanced the rest of the title was :)
Often books assume some prerequisites, the question here is the level of those prerequisites. Some books try to include all the necessary background, others assume a pre-existing base level of knowledge.
Different authors, different books, different audiences, and different contexts.
Why are you telling OP what his problem is? Shouldn't you address his pain points, not your rationalization of them?
I wrote it many times already and am a bit tired of it, so just a quick summary:
- programmers[1] also use cryptic notation and tend to think in concepts rather than syntax
- nevertheless, programmers spend a lot of time commenting the code, documenting it, specifying it, and so on.
- why can't mathematicians emulate that? What is so wrong about attaching an additional few pages to every paper that nobody wants to do it? Pages with an explanation of the syntax used, even the common bits. And you know what they could also do? Link to external resources with explanations! But no. This is not happening. Do their PDFs have a size limit or something? Is inserting a link into a paper considered some kind of blasphemy?
I don't know the reason, but in all the discussions on this topic mathematicians almost always underestimate the importance of knowing the syntax. It's much more important for comprehension than they tend to admit. And in the end they do exactly nothing to make the syntax more approachable for newcomers. And then newcomers are out-goers in a heartbeat. It's so obvious that I can't help thinking it's premeditated...
Mathematicians do document and comment, that's what papers and textbooks are: commentary on the math. They don't throw out formulae and equations and call it a day. Attaching a full tutorial for every level of reader is tantamount to attaching Stroustrup's C++ books to every C++ program, or K&R to every C program. You wouldn't do that, you'd expect the reader to ask you for references or to seek them out themselves.
That's actually doable... ;) K&R is rather terse, what, 1/5 of Stroustrup or something like that. But I digress.
More on topic: there's also a class of programs that DO come with a book attached - or rather, multiple books, for every level; if not included outright in the distribution then at least linked to in the "learn" tab on a homepage. They're called programming languages. So, it can be done. That's all I want to say.
> What is so wrong about attaching an additional few pages to every paper that nobody wants to do it? Pages with an explanation of the syntax used, even the common bits.
Programs don't do this, why do you expect every math paper to do it?
> Link to external resources with explanations!
This is called a bibliography; every book and paper that isn't so old that it is itself the original source includes one. In many textbooks there are also appendices which cover (some of) the foundational material. And most include sections (often on the front and back covers) that show the symbols and their names, if not their definitions.
> Programs don't do this, why do you expect every math paper to do it?
Well, I don't. It was you moving the goalpost. I talked about "a few pages", and you made "a book" out of it. I simply don't agree with you here and so I have very little to add at this point, sorry.
> This is called a bibliography; every book and paper that isn't so old that it is itself the original source includes one.
No. A bibliography is like a list of libraries you depend on. It has literally nothing to do with explaining the syntax close to where it's used.
> appendices which cover (some of) the foundational material.
Ha, ha, ha. No. If it's not front and center, then it doesn't count. I'm sorry, but I'm really tired of this subject. I would be willing to compromise more if that wasn't the case, believe me.
> show the symbols and their names, if not their definitions.
Ok. Putting that on the cover is a bit strange, but ok. That's a nice, but very small, step in the right direction. Please iterate and improve upon it!
EDIT: again, because I missed it at first:
> Programs don't do this, why do you expect every math paper to do it?
Programs do come with man pages! And tutorials, interactive tours, contextual help, and more. Emacs comes with 3 books, and a tutorial. (GNU) libc has a book to it. Firefox has a whole portal (MDN) as its documentation. Visual Studio comes with MSDN and a huge amount of explanatory material. And when it comes down to code, you have auto-completion, go to definition, search for callers; you can hover over a symbol and you get a popup with documentation and types; you can also trace execution, stop the execution, rewind the execution (if you have a good debugger), experiment with various expressions evaluated at different points.
The most important difference between math and programming (or CS) is that programmers can (and do) build automated tools that help the next generation of newbies get into programming, while mathematicians can't. It's just that they don't want to admit this is a weakness, and they only fortify themselves further in their ivory towers.
TLDR: I just can't see how you can even put math papers and programs on the same scale in terms of accessibility!
> Programs do come with man pages! And tutorials, interactive tours, contextual help, and more. Emacs comes with 3 books, and a tutorial. (GNU) libc has a book to it. Firefox has a whole portal (MDN) as its documentation. Visual Studio comes with MSDN and a huge amount of explanatory material. And when it comes down to code, you have auto-completion, go to definition, search for callers; you can hover over a symbol and you get a popup with documentation and types. I just can't see how you can even put math papers and programs on the same scale in terms of accessibility!
You are comparing big teams and products to a single guy writing a paper intended for a niche audience, to be read maybe a few hundred times if he is lucky. People make mistakes and sometimes forget to document everything; they do try to document everything, as can be seen in their papers, where most things are documented well, but sometimes they miss things, and unlike with code you don't have compiler warnings telling you about it. And given how few people read those papers, it isn't worth investing in a team to go through and update all of them to properly add definitions for everything they missed.
The equivalent to those programs in math would be high school textbooks, and they are extremely well documented and easy to read in most cases.
Thanks for understanding. Math is a small field without money for things like this; there is no way anyone should expect those niche papers to be as well documented as big programming projects used by millions.
If you still think that is a problem, then start an open source organization to fix it. Nobody has done that yet, since so few people care about math papers, but since you feel so strongly about this you could do it; someone has to be the one to start it.
No, I mean, well, it's very understandable when you describe it that way. Actually, I think your post here changed my perception of the problem the most out of all discussions I had on the subject. It made me think about people who are behind the papers. I somehow missed it. Thank you.
(And, sorry for being a jerk in this thread. I said too much in a few places, exactly because I didn't think of innocent mathematicians who might read it. I'm still convinced that there is a lot that math can borrow from CS and SE, but I'm definitely going to argue this differently.)
I wrote one math paper before I went into programming. It is a lot of work, like code review but much, much longer. It isn't fun. A big reason I got into programming is that that process is so much work. Of course I, the professor who reviewed it and the professors who looked at it afterwards understood it, but I can't guarantee that someone who hasn't read a lot about research-level topology or combinatorics will easily understand much at all. However, I doubt that anyone who hasn't done those things will ever read it, since it is an uninteresting niche topic. I'd be surprised if even 10 people read it fully.
Yeah, I didn't think about it at all - I didn't realize that what I'm saying is basically demanding that people work for free (and on things that won't be useful to anyone in 99% of cases), and that's on top of the already huge effort that is writing the paper in the first place. Honestly, I was behaving like people who open tickets in an open source project just to demand that someone implements a particular feature, just for them, and right now. I dislike such behavior, and realizing that I was doing the same hit me hard :)
Anyway, lots of mathematicians work really hard to make everything as understandable as possible. Learning programming isn't the same thing as learning math. In programming you learn an instruction set, and then you use those instructions to compose programs, therefore learning that instruction set is relatively easy. Mathematics isn't like that at all; instead you just continue to learn ever more complex instruction sets. You don't compose much at all in mathematics, because the new instructions typically can't be expressed using the old ones. Therefore learning the instruction set in math is equivalent to learning math; it isn't something you just do once and are then done with. Learning the instruction set is close to the entirety of learning math.
For example, when learning programming you have to learn about arithmetic, pointers, functions and structs. And that is it; now you can program everything, as long as you have an API to program against, since everything else builds on top of those. That is equivalent to learning algebra in math. You can do a lot of things using algebra, but every new course introduces a lot of new concepts that can't be expressed in elementary algebra. Programming however becomes the art of expressing everything using these very simple instructions, while math is the art of creating instructions that can express things simply.
> Anyway, lots of mathematicians work really hard to make everything as understandable as possible
You just said they don't because it's not worth it. So which is it in the end?
> In programming you learn an instruction set, and then you use those instructions to compose programs, therefore learning that instruction set is relatively easy.
Yeah, sure. Elementary arithmetic, like 2+2 and so on, is also not that hard to learn. What you're saying barely scratches the surface of what programming is, and then you misrepresent it as the whole truth.
It's not that programming is easy. It's just that you can still earn a very good salary without ever touching the harder and more complex parts. People who knew how to do 2+2 were also well paid at some point in the past, yet Euclid still managed to write his Elements. You're like an actuary (working under a Pharaoh to count the cattle, let's say) who says that writing the Elements is as easy as his current job (you could do it yourself, but you're too busy right now, so maybe later).
> For example, when learning programming you have to learn about arithmetic, pointers, functions and structs.
And then you try to write a JIT for a dynamic (and concurrent, let's say, as Erlang succeeded at this in its latest release) language and, with just "arithmetic, pointers, functions and structs", it becomes obvious that's not nearly enough. Then you start reading books and papers and soon you realize that 5 years have just passed. And JITs are not really that complex!
> Programming however becomes the art of expressing everything using these very simple instructions.
Programming is an art of creating new instructions, both out of thin air and out of already existing instructions. The fact that you seem to think "the instruction set" (meaning concepts that have to be understood, I think?) is constant suggests to me that you're not very interested in the history of the field.
> And then you try to write a JIT for a dynamic (and concurrent, let's say, as Erlang succeeded at this in its latest release) language and, with just "arithmetic, pointers, functions and structs", it becomes obvious that's not nearly enough. Then you start reading books and papers and soon you realize that 5 years have just passed. And JITs are not really that complex!
I actually wrote a JIT for one of Google's internal machine learning frameworks, so yes. It was used in production for Google search, Google ads etc. Only for a very small part of that traffic, of course; I didn't do something huge, but at least I know what it takes to make such systems production-ready and performant enough, as these systems are extreme resource hogs. So I know very well what it takes to create a language, parse it, execute it, optimize it, ensure that there are no errors in production, and help the researchers who are supposed to use it to debug when errors happen.
I learned programming by just implementing a lot of complex algorithms and systems. I wrote several high-throughput HTTP servers using sockets and threads, for example. I also got good enough at competitive programming to occasionally place in the single digits in some world programming competitions, though mostly double digits. All of that is very easy to understand compared to math; math just forces you to bend your mind in strange ways, while programming is super concrete.
I have no doubt that programming will one day get as deep and mind-bending as math (physics definitely is, for example), but today it isn't even close.
I think the GP post is criticizing the lack of documenting syntax. Math papers tend to document semantics, whereas the understanding of the syntax by the reader is presumed.
Note that the OP is asking about college-level math, not cutting-edge papers.
Textbooks routinely have a list of symbols and their definitions.
But, from my experience, notation is rarely the problem. I’d bet that the root cause of OP’s frustration is lack of understanding of concepts, not notation. (But, of course, it’s hard to say more without specific examples).
What you're looking at is calculus, specifically differentiation. This is pretty core to understanding physics, because so much of physics depends on the time-evolving state of things. That's fundamentally what's happening here.
The triangle, for example, is the upper-case greek letter delta, which in calculus represents 'change of'. You might have heard of 'delta-T' with respect to 'change of time'.
In calculus, upper-case delta means 'change over a finite interval' while lower-case delta means 'instantaneous change'. The practical upshot is that the lower-case gives the rate of change at a single instant in time, whereas the upper-case describes the change over a whole interval (e.g. the average rate of change per second from time = 0 seconds to time = 3 seconds).
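If it helps to see that distinction in code, here's a minimal sympy sketch; the position function x(t) = t² and the interval from 0 to 3 are made up purely for illustration, the point is only the contrast between the finite-interval change and the rate at an instant.

import sympy as sp

t = sp.symbols('t')
x = t ** 2  # made-up "position as a function of time", just for illustration

# Upper-case delta: change over a finite time, here the average rate from t = 0 to t = 3
avg_rate = (x.subs(t, 3) - x.subs(t, 0)) / (3 - 0)

# Lower-case delta / derivative: the instantaneous rate of change at t = 3
inst_rate = sp.diff(x, t).subs(t, 3)

print(avg_rate, inst_rate)  # 3 versus 6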
If you are trying to grok this, I would suggest an introductory calculus or pre-calculus resource. It doesn't have to be a uni textbook - higher-level high school maths usually teaches this. In this particular case, Khan Academy would be my recommendation because it is about the right level (we're not talking esoteric higher-level university knowledge here) and it is eminently accessible. For example, this link may be a good starter in this instance:
You say "There's a formula with a triangle ..." without telling me where. That's not real helpful, and you're making me do the work to find out what you're talking about. If you want assistance to get started, you need to be more explicit.
However, I have done that work, so I've looked, and in the second column of page 210 there's a "formula with a triangle":
t_c = 5 \cdot 10^{-5} \sqrt{ V / Dt }
... where the "D" I've used is where the triangle appears in the formula.
But that can't be it, because just two lines above it we have:
"For a pulse of width Dt, the critical time ..."
So that's stating that "Dt" is the width of the pulse, and should be thought of as a single term.
So maybe that's the wrong formula, or maybe it was just a bad example. So trying to be more helpful, the "triangle" is a Greek capital delta and means different things in different places. However, it is often used to mean "a small change in".
FWIW ... at a glance I can't see where that result is derived, it appears simply to be stated without explanation. I might be wrong, I've not read the rest of the paper.
I feel you're coming at this without appreciating your body of prior knowledge. Intended or not, your statement "But that can't be it, because just two lines above it we have..." assumes a whole lot of knowledge.
You and I both know that it reads as one term, but someone unfamiliar with calculus but exposed to algebra is drilled to understand separate graphemes as separate items, because the algebraic 'multiply' is so often implied, e.g. 3x = 3 * x as two individual 'things'.
I think there's merit in explaining the concept of delta representing change, because it's not obvious. For example, when I was taught the concept in school, my teacher explicitly started by doing a finite change with numbers, then representing it in terms of 'x' and 'y', then merging them into the delta symbol. That's a substantial intuitive stepping stone, and I think it's pretty reasonable that someone may not find this immediately apparent.
I agree completely that I'm coming at this with a lot of background knowledge, but if I'm reading in an unfamiliar field and I see a symbol I don't recognise, I look in the surrounding text to see if the symbol appears nearby. As I say, "Δt" appears immediately above ... that's a clue. As you say, it's drilled in at school that everything is represented by a single glyph, and if these are juxtaposed then it means multiplication, and that is another thing to unlearn.
But I think the problem isn't the specifics of the "Δ", it's the meta-problem of believing that symbols have a "one true meaning" instead of being defined by the scope.
I agree that explaining the delta notation would be helpful, but that's like giving someone a fish, or making them a fire. They are fed for one day, or warm for one night; it's the underlying misconceptions that need addressing so they can learn to fish and be fed, or make a fire and be warm, for the remainder of their life.
I absolutely agree with your comments regarding teaching the underlying approach to digesting a paper. You definitely raise good points, especially the 'one true meaning' comment. I should state that I'm not discounting the value of your point, especially given this clarification; however, when I reflect on my experience of learning this, the way I learnt best was via an initial explanation, then a worked example, then the customary warning of corner cases and here-be-dragons.
e: I also think, on reflection, that a significant part of your ability to grok a new paper, per your comments, is your comfort in approaching these concepts due to your familiarity. Think of learning a new language - once you have a feel for it, you're likely more comfortable exploring new concepts within it; however, when you're faced with it from the start you probably feel very lost and apprehensive.
I feel that understanding calculus is a fairly fundamental step in the 'language of maths', teaching that symbols don't necessarily represent numbers but can represent concepts (e.g. delta being change). This isn't something you encounter until then, but once you do, you begin to understand the characters associated with integrals, matrices, etc. in a way that you may not have previously with algebra alone.
> someone unfamiliar with calculus but exposed to algebra is drilled to understand separate graphemes as separate items
But most will already be familiar with the family of goniometric functions such as sin and cos; there's log, and possibly exp and sqrt. There's min and max; advanced math has inf and sup.
I think that this is indeed the formula in GP's question. And indeed, sometimes math notation is obtuse like that. It looks like 2 terms, but the triangle goes together with the t as a single term. At other times it might be called "dt" and, despite looking like a multiplication of 2 variables (d and t, or triangle and t in this case), it's just a single variable with a name made of 2 characters.
The important thing here is that "For a pulse of width Dt" is the definition of this variable, but this can be easily missed if you're not used to this naming convention.
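To put it in programmer terms: Δt behaves exactly like a two-character variable name. A throwaway sketch of the quoted formula with made-up numbers (I don't know the paper's units, so treat the values as placeholders):

# "Δt" is one quantity, like a single variable in code -- not Δ multiplied by t.
V = 200.0        # made-up value, units unknown to me
delta_t = 1e-3   # made-up pulse width

t_c = 5e-5 * (V / delta_t) ** 0.5   # the quoted formula, evaluated as written
print(t_c)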
That’s because “Δ” means “a change of” or “an interval of”. So, Δt is “an interval of time”. It is like a compound word, really. It conveys more information than giving it an arbitrary, single-letter name.
This convention is used in a whole bunch of scientific fields, like quantum mechanics, chemistry, biology, mechanics, thermodynamics, etc.
It’s also very useful in how it relates to derivatives, which is a crucial concept in just about any kind of science you could care to mention.
So yes, there is a learning curve, but we write things this way for good reasons, most of the time.
Multiplication should be represented by a (thin) space in good typography, to avoid this sort of thing. Not doing it is sloppy and invites misreading. Same with omitting parentheses around a function’s argument most of the time (e.g. sin 2πθ instead of sin(2 π θ)).
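In LaTeX terms, that's roughly the difference below; a minimal fragment reusing the formula quoted upthread, just to show the thin spaces and the explicit parentheses (nothing here is taken from the paper itself):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Thin spaces to mark multiplication, explicit parentheses around arguments:
\[ t_c = 5 \cdot 10^{-5} \, \sqrt{V / \Delta t} \]
\[ \sin(2\,\pi\,\theta) \quad \text{rather than} \quad \sin 2\pi\theta \]
\end{document}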
> it's just a single variable with a name made of 2 characters.
I have this same problem with programming, when I have to deal with code written by non-mathematicians. They tend to use all these stupid variables with more than one letter and that confuses the heck out of me.
Sorry, I didn't mean to make you work for me, but it's a PDF and I didn't know how to explain the position better (maybe I should have told you the first formula on page X).
For you it was a D, for me it was a triangle and I didn't get the meaning of that Dt. Maybe it's just a too advanced paper for my knowledge.
> Maybe it's just a too advanced paper for my knowledge.
Maybe it is for now ... the point being that if you start at the beginning, chip away at it, search for terms on the 'net, read multiple times, try to work through it, and then ask people when you're really stuck, that's one way of making progress.
You can, instead, enroll in an online course, or night school, and learn all this stuff from the ground up, but it will almost certainly take longer. Your knowledge would be better grounded and more secure, but learning how to read, investigate, search, work, and then ask, is a far greater skill than "taking a course".
Others have answered your specific question about the delta symbol, but there are deeper processes/problems/questions here:
* Not all concepts or values are represented by a single glyph; sometimes there are multi-glyph "symbols", such as "Δt" in your example.
* When you see a symbol you don't recognise, read the surrounding text. The symbol will almost always be referenced or described.
* The notation isn't universal. Often it's an aid to your memory, to write in a succinct form the thing that has been described elsewhere.
* In these senses, it's very much a language more akin to natural languages than computer languages. The formulas are things used to express a meaning, not things to be executed.
* Specific questions about specific notation can be answered more directly, but to really get along with mathematical notation you need to "read like math" and not "read like a novel".
* None of this is correct, all of it is intended to give you a sense of how to make progress.
I'm just saying "D" because I can't immediately type the symbol here and it was easier just to use that. Not least, I didn't know if that was the formula you meant.
But as I say, immediately above the formula it says:
"For a pulse of width ∆t, the critical time ..."
So that really is saying exactly what that cluster of symbols means. There will be things like this everywhere as you read stuff. Things are rarely completely undefined, but you are expected to be reading along.
That gives you a lot of context for what the symbol means, and this is the sort of thing you'll need to do. You need to stop, look at the thing you don't understand, read around in the nearby text, then type a question (or two, or three) into a search engine.
Please don't take this the wrong way. It is not meant to be demeaning, and it is not meant to be gatekeeping (quite the contrary!). But: If you do not know what a derivative is, then learning that that symbol means derivative (assuming that it does, I have not actually looked at what you link to) will help you next to nothing. OK, you'll have something to google, but if you don't already have some idea what that is, there is no way you will get through the paper that way.
I hope you take this as motivation to take the time to properly learn the fundamentals of mathematics (such as for example calculus for the topic of derivatives).
The triangle, or “delta”, is used to indicate a tiny change in the following variable.
Let’s say you go on a journey, and the distance you’ve travelled so far is “x” and the time so far is “t”.
Then your average velocity since the beginning is x / t .
But, if you want to know your current velocity, that would be delta x divided by delta t .
The delta is usually used in a “limiting” sense - you can get a more accurate measurement of your velocity by measuring the change in x during a tiny time interval. The tinier the interval, the more accurate the estimate of current velocity.
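The same idea in sympy, if code reads easier than prose: take the change in x over an interval of width dt and let dt shrink to zero. The distance function x = t³ is made up; the takeaway is that the limit of Δx/Δt is exactly the derivative.

import sympy as sp

t, dt = sp.symbols('t dt', positive=True)
x = t ** 3  # made-up "distance travelled so far"

avg_velocity = (x.subs(t, t + dt) - x) / dt   # Δx / Δt over the interval [t, t + dt]

print(sp.limit(avg_velocity, dt, 0))  # -> 3*t**2, the same as sp.diff(x, t)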
What I’m talking about here is the first steps in learning differential calculus. You could look for that at khanacademy.org. You might also benefit from looking at their “precalculus” courses.
Just keep plugging away at it; the concepts take a while to seep in. Attaining mathematical maturity takes years.
Yes, small changes usually use lowercase delta, e.g. δt. Not to be confused with the derivative symbol dt, nor with the partial derivative symbol ∂t !
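For anyone who finds code clearer than the symbols: in sympy both the 'd' and the '∂' cases come out of the same diff call; the difference is just whether other variables are present to be held constant. A tiny sketch with made-up functions:

import sympy as sp

t, x = sp.symbols('t x')

f = t ** 2              # one variable: df/dt, the ordinary derivative
print(sp.diff(f, t))    # -> 2*t

g = x ** 2 * t          # several variables: ∂g/∂t, the partial derivative
print(sp.diff(g, t))    # -> x**2   (x is treated as a constant)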
Before I continued my maths learning after high school (i.e. before UK A-levels), I learnt the Greek alphabet to make it easier to understand maths notation, as I could 'voice' (internally) all the funny glyphs adopted from Greek.
At uni I learnt how to properly write an ampersand (for logic classes) and how to write aleph and beth (for pure maths, particularly transfinite numbers).
Some professors have a fondness for the more confusing Greek letters (lowercase xi, lowercase eta) ... is it n or eta, epsilon or xi, ...
But this is a physics paper, and that isn't how you use uppercase delta in physics; there it is just a finite range or difference. In physics, however, you do a ton of approximations all the time, in ways mathematicians hate (you don't care about errors smaller than you can measure), so an uppercase delta is often approximated with derivatives and so on, but it isn't a derivative. Math in physics is way more practical and uses very different techniques than math in math, often because physicists invented the math first and mathematicians later went and formalized it.
Everyone is talking about the Δ symbol, but the real problem you'll encounter will be later in the paper, where they start talking about H(ω), the Fourier transform of the impulse response (equation 4 and following). You'll need to know a fair bit about Fourier transforms, impulse responses and filter design to get through this section. The notation is the least of the problems.
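If the H(ω) part feels opaque, a tiny numerical sketch may help: pick some impulse response h, and its Fourier transform is the frequency response. The 5-tap moving average below is made up for illustration; it is not anything from the paper.

import numpy as np

h = np.ones(5) / 5                     # made-up impulse response: a 5-tap moving average
H = np.fft.rfft(h, n=512)              # H(ω): the (discrete) Fourier transform of h
w = np.fft.rfftfreq(512) * 2 * np.pi   # the corresponding angular frequencies, 0..π

print(abs(H[0]), abs(H[-1]))           # ~1 at ω = 0, ~0.2 at ω = π: a low-pass shape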
Wikipedia is truly atrocious for learning math, the articles are like man pages in that they precisely describe the concepts in terms that will only make sense if you already know the thing. They just aren't written for pedagogy.
Like in 300 BC, today there's no royal road to geometry.
1. You're reading a journal article. They will assume you know the notation not just of the broader discipline (e.g. physics/electrical engineering), but of the subdiscipline and at times the subsubdiscipline. Journal papers are explicitly written not to be easy to comprehend by beginners.[1] Notation will be only one problem you'll face.
2. As has been pointed out, this is not a mathematics paper. Mathematicians have their own notation, as do physicists and engineers. As I mentioned in the above bullet, they can have their own notation even in subdisciplines (e.g. circuit folks use "j" for the imaginary number, and semiconductor folks use "i"). There is a lot of overlap in notation amongst these parties, but you should never assume because you know one notation that you'll easily understand the math written by other fields.
3. Most introductory textbooks will explain the basic notation. Unfortunately, I often do find gaps where you go to higher level textbooks and they use notation that they don't explain (i.e. they assume you've seen it before), but is not covered in the prior textbooks.
4. Finally, sorry to say this, but "delta" (the triangle) for representing change is used in almost all sciences and engineering. It was heavily used in my high school as well. If you're struggling with this you really need to read some introductory textbooks in, say, physics.
[1] I'm not kidding. I've spent time in academia and I've complained about how obtuse some articles are, and almost universally the response is "We write for other experts, not for new graduate students". One professor took pride in the fact that in his field one can comprehend only about one page of a paper per day - and this from someone who is an expert. These people have issues.
Looks like you need to grind through an elementary calculus book, exercises included. You may think you can build intuition by reading just the definitions, but half of the understanding is tacit and you get it through the exercises.
If you're trying to get into signal processing, it'll involve calculus with complex numbers, and knowledge of that is often gained by plodding through proofs and exercises over and over.
> I think a real problem in this area is the belief that there is "one true notation" and that everything is unambiguous and clearly defined.
No, that belief isn't the problem; that actual status quo itself is obviously the problem. There are numerous notations and authors don't explain what they are using, assuming everyone has recursively read all of their references depth-first before reading their paper.
Of course you still won't be able to understand most math papers written by pure mathematicians, but it should be fine for whatever you need in CS. I know all the topics on that page; it is just a very fleshed-out math major.
But why are you reading research papers full of math without having studied math? If you want to understand them fully then you need to do the relevant courses; people spend years learning these things. You don't have to take them all, just the branch relevant to the paper.
We are talking about nonstandard notation that is often specific to the author or to a limited research subfield, for which there are no standard courses or books that would explain the notation. You’d need to take a specific course by someone in the respective research community. Or sometimes, as noted above, it’s possible to follow back a dozen or more papers to retrace the idiosyncrasies in an almost archaeological manner.
If I try to read a program written by you without having learned the language, would you expect me to understand everything? Why aren't you explaining the symbols? That is the same thing. Learn the fundamentals of a paper before reading the paper, that is just common sense.
If math were like code: the program relies on macros that are not defined in the program. Those who have the author's previous two programs loaded in the image don't notice any problem.
There pretty much is one true notation. There could be some slight variations, like bolding vectors, putting an arrow over them or not distinguishing them at all from scalars. But 95% of the time everyone uses the same notation.
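If it helps to see them side by side, here's a minimal LaTeX fragment with the three vector conventions mentioned above (bold, arrow, undecorated); which one a given text uses is purely a matter of house style:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The same vector v under three common conventions:
\[ \mathbf{v} \qquad \vec{v} \qquad v \]
\end{document}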
I don't know your background, but I wonder how broad it is in terms of mathematical topics. Consider the notations used in Algebraic Topology vs Category Theory vs Algebraic Number Theory vs Analytic Combinatorics vs Complex Analysis.
This isn't a criticism, it's just that notations vary wildly in those areas, and there's lots of cross-over of notations, not all of which agree with each other.
I'm not an expert, but I've had some exposure to the problem(s).
I studied diff geo at PhD level and met stat at undergrad level, plus a sprinkling of category theory, some discrete mathematics and some physics, so I’ve been exposed to most of these.
I presumed we were talking about basic mathematics here, since new notation is the least of your worries when you're thinking about fibre bundles and cohomologies, but I can’t really think of any significant overlap in notation that would be different between the fields I’ve come across. Could you give some examples?
I'm trying to be more general than specific questions at the mid-undergrad level, because looking in from the outside, people seem to think that if only the notation weren't so mysterious then they could understand everything. But this comment -- https://news.ycombinator.com/item?id=29344238 -- gives a flavour, talking about coming across "π" in different contexts and having to give different interpretations.
But I remember sketching an algorithm to someone and just inventing notations on the fly as I did so, knowing that they would simply be ways to remember the underlying ideas.
Even so, at 1st year undergrad the notations used in Mathematical Physics vary from those used in Introductory Graph Theory, and again from Real Analysis. But once the reader knows the underlying semantics, the actual notation is mostly a non-issue (as you know).
Alright, but are there really any overlapping concepts between graph theory and analysis? There can’t be many!
The comment you linked to is pretty strange. Given the limited number of symbols in the Greek and Latin alphabets, there’s obviously going to be a lot of reuse, but I can’t see how that could really cause any confusion, unless you’re just grabbing books from the shelf and opening them at random. And even then, it should almost always be clear from context whether pi is a number or a plane, and if it’s a function that will be visually distinguished.
I've seen non-mathematicians use words as names of variables and functions; it always makes me shudder. I unsuccessfully tried to introduce Hebrew letters as an alternative when I discovered how to use them in LaTeX, but it never caught on…
I actually find math notation incredibly intuitive and effective, I think it’s close to optimal. In fact it’s only after getting into programming that it even occurred to me how elegant and magical it is. I understand what things mean and can write things myself, without being able to exactly explain how, or to translate it into a fully specified system that a computer would understand.
I'm a math professor, and my students find it revelatory to understand math as I talk and draw.
Math notation is not math, any more than music notation is music. Notably, the Beatles couldn't read sheet music, and it didn't hold them back.
The best comparison is reading someone else's computer code. At its best, computer code is poetry, and the most gifted programmers learn quickly by reading code. Still, let's be honest: reading other people's code is generally a wretched "Please! Just kill me now!" experience.
Once you realize math is the same, and that it's not about you, you can pick your way forward with realistic expectations.
Great insight! I've definitely encountered mathematically inclined people who cannot read or write math. Now it makes sense to me.
Also I’ve found the converse true. There are people who can manipulate mathematical symbols very well but actually don’t understand the big picture or general direction. The analogy would be that there are people who can write and read music notes (even transpose to different keys) without hearing it in their head (I was one of them).
Actually, making my own notation is how I start puzzling out any problem. When one reformulates a problem, the appearance keeps changing but the problem is still there. It often takes multiple reformulations to "see" a problem, unencumbered by artifacts of birth.
The other day an EE grad student came to my office to show me his lab's research. After changing notation a couple of times, I was able to recognize that their novel combinatorial problem was an instance of graph coloring.
It sounds like you're trying to read papers that assume a certain level of mathematical sophistication without having reached that level. Typical engineering papers will assume at least what's taught in 2 years of college level mathematics, mainly calculus and linear algebra, and no they aren't going to be explaining notation used at that level.
But it isn't just about the notation. You also need to understand the concepts the notation represents, and there aren't really any shortcuts to that.
These days there are online courses (many freely available) in just about every area of mathematics from pre-high school to intro graduate level.
It's possible for a sufficiently motivated person to learn all of that mathematics on their own from online resources and books, but it isn't going to be an easy task or one that you can complete in a few weeks/months.
The author explained his problem and asked for resource recommendations.
Your response is to scold him for having the problem he already said he had and instead of recommending resources you told him to go look on the internet.
I hear this question asked quite often, particularly on HN. I think the question is quite backwards. There is little value alone in learning "math notation", even ignoring what many people point out (there is no one "math notation"). "Math notation", at best, translates into mathematical concepts. Words, if you will, but words with very specific meaning. Understanding those concepts is the crux of the matter! That is what takes effort – and the effort needed is that of learning mathematics. After that, one may still struggle with bad (or "original", or "different", or "overloaded", or "idiotic", or…) notation, of course, but there is little use in learning said notation(s) on their own.
I've been repeatedly called a gatekeeper for this stance here on HN, but really: notation is a red herring. To understand math written in "math notation", you first have to understand the math at hand. After that, notation is less of an issue (even though it may still be present). Of course the same applies to other fields, but I suspect that the question crops up more often regarding mathematics because it has a level of precision not seen in any other field. Therefore a lot more precision tends to hide behind each symbol than the casual observer may be aware of.
That covers most of the basics, but I think your real question is how to learn all those concepts, not just the notation for them, which will require learning/reviewing relevant math topics. If you're interested in post-high-school topics, I would highly recommend linear algebra, since it is a very versatile subject with lots of applications (more so than calculus).
As ColinWright pointed out, there is no one true notation and sometimes authors of textbooks will use slightly different notation for the same concepts, especially for more advanced topics. For basic stuff though, there is kind of a "most common" notation, that most books use and in fact there is a related ISO standard you can check out: https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag...
Good luck on your math studies. There's a lot of stuff to pick up, but most of it has "nice APIs" and will be fun to learn.
For about $5 you can find an old (around 1960-1969) edition of the "CRC Handbook of Standard Mathematical Tables". I've owned two of the 17th edition, published in 1969, because back then hand calculators didn't exist and many of the functions used in mathematics had to be looked up in books, like what is the square root of 217. Engineers used these handbooks extensively back then.
Now, of course, you have the internet and it can tell you what the square root of 217 is. Consequently, the value of these used CRC handbooks is low and many are available on eBay for a few dollars. Pick up a cheap one and in it you will find many useless pages of tables covering square roots and trigonometry, but you will also find pages of formulas and explanations of mathematical terms and symbols.
Don't pay too much for these books because the internet and handheld calculators have pretty much removed the need for them, but that is how I first learned the meanings of many mathematical symbols and formulas.
You might also look for books of "mathematical formulas" in your local bookstores. Math is an old field and the notations you are stumbling over have likely been used for 100 years, like the triangle you were wondering about. (Actually the triangle is the upper case Greek letter delta. Delta T refers to an amount of time, usually called an interval of time.)
Unfortunately, because math is an old subject it is a big subject. So big that no one person is expert in every part of math. The math covered in high school is kind of the starting point. All branches of mathematics basically start from there and spread out. If you feel you are rusty on your high school math, start there and look for a review book or study guide in those subjects, usually called Algebra 1 and Algebra 2. If you recall your Algebra 1 and 2, take a look at the books on pre-calculus. The normal progression is one year for each of the following courses in order, Algebra 1, Geometry, Algebra 2, Pre-Calculus, and Calculus. This is just the beginning of math proficiency, but by the time you get through Calculus you will be able to read the paper you referenced.
Is it really a year for each of those subjects? It can be done faster but math proficiency is a lot of work. Like learning to be a good golfer, it would be unusual to become a 10 handicap in less than 5 years of doing hours of golf each and every week.
Calculus is kind of the dividing line between high-school math and college level math. Calculus is the prerequisite for almost all other higher level math. With an understanding of Calculus one can go on to look into a wide range of mathematical subjects.
Some math is focused on its use to solve problems in specific areas; this is called applied math. In applied math there are subjects like Differential Equations, Linear Algebra, Probability and Statistics, Theory of Computation, Information & Coding Theory, and Operations Research.
Alternatively, there are areas of math that are studied because they have wider implications but not because they are trying to solve a specific kind of problem; this is called pure math. In pure math there are subjects like Number Theory, Abstract Algebra, Analysis, Topology & Geometry, Logic, and Combinatorics.
All of these areas start off easy and keep getting harder and harder. So you can take a peek at any of them, once you are through Calculus, and decide what to study next.
All math notation was created by mathematicians who wanted to quickly represent something, either to:
- better see the structure of the problem; or
- reduce the amount of ink they need to write the problem
Very similar to how programmers use functions, in fact.
To this end, mathematicians in different fields have different notation, and often this notation overlaps with different meaning. Think how Chinese and Japanese have overlapping characters with different meanings.
As others have stated, there is no "one true notation" -- all notation is basically a DSL for that math field.
Instead, choose a topic you are interested in, find an introductory text, and start reading. They will almost certainly explain the notation. Unfortunately, even within a field, notation can vary, but once you have a grasp of one you will probably grasp the rest quickly enough.
I will mention, though, that some notation is "mostly" universal. Integrals, partial derivatives, and more that I can't recall right now all use basically the same notation everywhere, since they underlie a lot of other math fields.
Specialized math tends to have specialized notation. For ex Linear Algebra, Calculus, Combinatorics. Any decent textbook will have an appendix or table with what the notation means.
Could it be that you are trying to read things that are a bit too advanced? Maybe look for some first year university lecture notes? In general, if you cannot follow something, try to find some other materials on the same subject, preferably more basic ones.
> […] I find it really hard to read anything because of the math notations and zero explanation of it in the context.
I suggest finding contexts first, and exploring math within those contexts. Different subfields have their own conventions and notation.
For example, you might be working in category theory, and see an arrow labeled “π”. When I see that, I think, “Ah, that’s probably a projection! That’s what π stands for!”
Or you might be in number theory, and see something like π(x). When I see that, I think, “Ah, that’s the prime number counting function! That’s what π stands for, ‘prime’!”
Or you might be in statistics, and see (1/√(2π)) e^(−x²/2). When I see that, I think, “Ah, that’s the number π! It’s about 3.14.”
Or you might see a big ∏ which stands for “product”.
The fact that such a common symbol, π, stands for four different things in four different contexts can be a bit confusing. So if you want to learn mathematical notation, pick a context that you want to study (like linear algebra), and look for accessible books and videos in that subfield. The trick is finding stuff that is advanced enough that you’re getting challenged, but not so advanced that it’s incomprehensible. A bit of a razor’s edge sometimes, which is unfortunate.
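To make the contrast concrete, here is roughly how the same letter reads in each of those contexts (these are typical conventions, not universal rules):

    π : A × B → A, with π(a, b) = a               (a projection map)
    π(x) = #{ p ≤ x : p is prime }                (the prime-counting function)
    f(x) = (1/√(2π)) e^(−x²/2)                    (the constant π ≈ 3.14159, here in the standard normal density)
    ∏_{i=1}^{n} a_i = a_1 · a_2 ⋯ a_n             (capital pi, a product over an index)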
There is no single authoritative source for mathematical notation. That said, there are a lot of common conventions. You could do worse than this NIST document if it's just a notation question:
Of course, if the real problem is that you need to learn some mathematical constructs, that is a different problem. The good news is that there's a lot of material online, the bad news is that not all of it is good... I often like Khan Academy when it covers the topic.
I also used get hung up on “mathematical notation”. But it turns out the problem wasn’t the notation. I was just bad at math. Well, out-of-practice is more like it.
Once you have the fundamentals clearly explained and you're doing some math on a regular basis, the notation, even obscure non-standard notation, becomes relatively intuitive.
I think the problem is that there is no authoritative text, that I know of, and as ColinWright says, the same ideas can be notated differently by different fields or sometimes by different authors in the same field (though often they converge if they are in the same community).
Wikipedia has been helpful sometimes but otherwise I have found reading a lot of papers on the same topic has been useful. However, this is kind of an "organic" and slow way of learning notation common to a specific field.
The Greek alphabet would like to thank all the scholars for the centuries of overloading and offer a "tee hee hee" to all of the students tormented by attendant ambiguities.
1) Search youtube for multiple videos by different people on the topic you want to learn. Watch them without expecting to understand them at first. There is a delayed effect. Each content creator will explain it slightly differently and you will find that it will make sense once you've heard it explained several different times and ways.
I will read the chapter summary for a 1k page math book repeatedly until I understand the big picture. Then I will repeatedly skim the chapters I least understand until I understand their big picture. I need to know the terms and concepts before I try to understand the formulas. I will do this until I get too confused to read more, then I will take a break for a few hours/days and start again.
2) You have to rewrite the formulas in your own language. At first you will use a lot of long descriptions but quickly you will get tired and you will start to abbreviate. Eventually, you get the point where you will prefer the terse math notation because it is just too tedious to write it out in longer words.
3) You might have to pause the current topic you are struggling with and learn the math that underlies it. This means a topic that should take 1 month to learn might actually take 1 year because you need to understand all that it is based on.
4] Try to find an applied implementation. For example, photogrammetry applies a lot of linear algebra. It is easier to learn linear algebra if you find an implementation of photogrammetry and try to rewrite it. This forces you to completely understand how the math works. You should read the parts of the math books that you need.
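To make point 4 concrete, here is a minimal Python/numpy sketch of the kind of linear algebra photogrammetry leans on: projecting a 3D point through a pinhole camera. The numbers and names are made up for illustration, not taken from any particular implementation.

    import numpy as np

    # Pinhole projection: a 3x4 camera matrix P maps a homogeneous 3D point X
    # to a homogeneous 2D image point x, i.e. x ~ P X.
    K = np.array([[800.0,   0.0, 320.0],     # intrinsics: focal length and principal point
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    Rt = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])   # extrinsics [R | t]: camera pose
    P = K @ Rt

    X = np.array([1.0, 2.0, 10.0, 1.0])      # a 3D point in homogeneous coordinates
    x = P @ X
    u, v = x[0] / x[2], x[1] / x[2]          # divide out the scale to get pixel coordinates
    print(u, v)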
Maybe a problem is trying to learn it by reading it.
I was a college math major, and I admit that I might have flunked out had I been told to learn my math subjects by reading them from the textbooks without the support of the classroom environment. It may be that the books are "easy to read if a teacher is teaching them to you."
Talking and writing math also helped me. Maybe it's easier to learn a "language" if it's a two way street and involves more of the senses.
Perhaps a substitute to reading the stuff straight from a book might be to find some good video lectures. Also, work the chapter problems, which will get your brain and hands involved in a more active way.
As others might have mentioned, there's no strict formal math notation. It's the opposite of a compiled programming language. In fact, math people who learn programming are first told: "The computer is stupid, it only understands exactly what you write." In math, you're expected to read past and gloss over the slight irregularities of the language and fill in gaps or react to sudden introduction of a new symbol or notational form by just rolling with it.
Try reading a good undergraduate calculus textbook. It would be hefty and a bit wordy, and it may take a few months to go through, but calculus requires surprisingly little prior knowledge - even the concept of limit should be defined in the textbook (the famous epsilon-delta).
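For reference, the famous epsilon-delta definition just mentioned, in its usual form (any calculus text states an equivalent version):

    lim_{x→a} f(x) = L   means:   for every ε > 0 there is a δ > 0 such that 0 < |x − a| < δ implies |f(x) − L| < ε.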
Also remember that math notations are meant for people. If you learn the sigma summation notation, and if you wonder "So I understand what is \Sigma_{i=0}^{10}, but what is \Sigma_{i=0}^{-1}?" then you're wondering irrelevant stuff. If a math notation is confusing to use, good mathematicians will simply not use it and devise an alternative way to express it (or re-define it more clearly for their purpose).
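If it helps to read the sigma notation the way a programmer would, here is a minimal Python sketch; the helper name sigma is made up for illustration:

    def sigma(lo, hi, term):
        # Sum of term(i) for i = lo..hi inclusive; the range is empty when hi < lo.
        return sum(term(i) for i in range(lo, hi + 1))

    print(sigma(0, 10, lambda i: i))   # \Sigma_{i=0}^{10} i = 55
    print(sigma(0, -1, lambda i: i))   # an empty range; Python returns 0, which is also the usual convention for an empty sum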
Also, don't skip exercises. Try to solve at least 1/3 of them after each chapter. Exercises are the "actually riding a bike" part of learning how to ride a bike.
Practice, just like you learned programming.
"The Context" gives you the meaning for the notation, sadly. You have to kind of know it to understand the notation properly.
You can also get sufficiently angry and just write out linear algebra books and what not in Agda / Coq / Lean if it pisses you off so much (I've done a bunch of exercises in Coq)
I should really pick that one up some day. It had an inspiring story; I believe the author wanted to understand classical mechanics and just wrote it out in Scheme.
Pretty much, yea. And because they are literally a 100× programmer, they also extended Scheme to support stuff you usually use a computer algebra system for at the same time. After all, if your CAS can take the derivative of a function, why can’t your programming language?
It's actually executable, which is part of why they wrote this particular book. The intent was to have a more uniform syntax for presenting the math and being able to (programmatically) use it.
Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. Traditional mathematical notation contributes to this problem. Symbols have ambiguous meanings that depend on context, and often even change within a given context.¹ For example, a fundamental result of mechanics is the Lagrange equations. In traditional notation the Lagrange equations are written
d/dt ∂L/∂q̇ⁱ − ∂L/∂qⁱ = 0.
The Lagrangian L must be interpreted as a function of the position and velocity components qⁱ and q̇ⁱ, so that the partial derivatives make sense, but then in order for the time derivative d/dt to make sense solution paths must have been inserted into the partial derivatives of the Lagrangian to make functions of time. The traditional use of ambiguous notation is convenient in simple situations, but in more complicated situations it can be a serious handicap to clear reasoning. In order that the reasoning be clear and unambiguous, we have adopted a more precise mathematical notation. Our notation is functional and follows that of modern mathematical presentations.² An introduction to our functional notation is in an appendix.
Computation also enters into the presentation of the mathematical ideas underlying mechanics. We require that our mathematical notations be explicit and precise enough that they can be interpreted automatically, as by a computer. As a consequence of this requirement the formulas and equations that appear in the text stand on their own. They have clear meaning, independent of the informal context. For example, we write Lagrange’s equations in functional notation as follows:³
D(∂₂L ∘ Γ[q]) − ∂₁L ∘ Γ[q] = 0.
The Lagrangian L is a real-valued function of time t, coordinates x, and velocities v; the value is L(t, x, v). Partial derivatives are indicated as derivatives of functions with respect to particular argument positions; ∂₂L indicates the function obtained by taking the partial derivative of the Lagrangian function L with respect to the velocity argument position. The traditional partial derivative notation, which employs a derivative with respect to a “variable,” depends on context and can lead to ambiguity.⁴ The partial derivatives of the Lagrangian are then explicitly evaluated along a path function q. The time derivative is taken and the Lagrange equations formed. Each step is explicit; there are no implicit substitutions.
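For readers who haven't seen the book, the corresponding Scheme definition is only a few lines; I'm quoting it from memory, so treat the exact spelling as approximate:

    (define ((Lagrange-equations Lagrangian) q)
      (- (D (compose ((partial 2) Lagrangian) (Gamma q)))
         (compose ((partial 1) Lagrangian) (Gamma q))))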
I think you can see that the Scheme code is a direct and very simple translation of the equation.
And it has the advantage that you can run it immediately after typing it in, assuming you have a coordinate path to pass to it. They immediately go to a concrete example in the book.
Mathematics is a lingo and notations are mostly convention. Luckily people generally follow the same conventions, so my best advice, if you want to learn about a specific topic, is to work through the introductory texts! If you want to learn calculus, find an introductory college text. Statistics? There are traditional textbooks like Introduction to Statistical Learning. The introductory texts generally do explain notation which may become assumed knowledge for more advanced texts or, as you seem to want to read, academic papers. If those texts are still too difficult, then maybe move down to a high-school text first.
Think about it this way. A scientist, wanting to communicate his ideas with fellow academics, is not going to spend more than half the paper on pedantry, explaining notation which everyone in their field would understand. Otherwise what is the purpose of creating the notations? They might as well write their formulas and algorithms COBOL-style!
Ultimately mathematics, like most human-invented languages, is highly tribal and has no fixed rules. And I believe we are much richer for it! Mathematicians constantly invent new syntax to express new ideas. If there was some formal reference they had to keep on hand every time they need to write an equation that would hamper their speed of thought and creativity. How would one even invent something new if you need to get the syntax approved first!
TL;DR: Treat math notation as any other human language. Find some introductory texts on the subject matter you are interested in to be "inducted" into the tribe
1] learn the greek alphabet if you haven’t already.
2] dive deep into the history of math.
3] youtube…
3Blue1Brown, Stand-up Maths, Numberphile, Khan Academy. These channels are your friends.
4] don't give up and make it fun. Once you're bitten by the bug of curiosity and are rewarded with understanding you'll most probably be unstoppable, but still, it's a long road. Better to focus on the journey.
Lastly, the notation is what it is because of the nature of math itself, coupled with the history of who was doing the solving, exacerbated by the cultural uptake. There have been and will continue to be new notations. It's unfortunate that the barrier to learning a new concept is often parsing the syntax. Stick with it and stay curious and those squiggles will take on new magical and profound meanings.
To be honest, I have never understood this question. You learn notation as you go, along with the subject itself (whether it’s math, chemistry, or electronics). You definitely do not learn “notation” for its own sake - it just doesn’t make sense.
Well, the real fun is deciphering a lower case xi - ξ - when written on the blackboard (or whiteboard), especially compared to a lower case zeta - ζ (fortunately way less commonly used).
As all the others already told you, you don't learn by reading alone.
Ah, yes. I remember the time when I saw someone write something vaguely like the following
[0, ξ[ = { x | 0 ≤ x < ξ }
Which was fun trying to figure out when written in handwriting where ξ,{,} all look the same.
If you can't figure out what it's supposed to be, this equation starts with a half-open interval denoted [0, ξ[. This notation has some advantages but can make things hard to read.
"The Probability Lifesaver" has a lot of good mathematics tips (which are not even mathematics related) most of which are not probability-specific. It's a goldmine.
I think a good first resource would be the book and lecture notes of an introductory university course treating the specific domain you are interested in, because often lots of things in notation are domain-specific. Lots of good open university lectures are out there; if you're not sure where to start, MIT OpenCourseWare used to be a good first guess for accessing materials.
As a sidenote, I have an MSc in Physics with a good dollop of maths involved and I am quite clueless when looking at a new domain, so it's not as if a university degree in a non-related subject would be of any help...
Through a really nice and helpful math prof who took time out of her day to explain it to those in the "I'm in trouble" additional course. Forever grateful for that, would have failed otherwise.
Math notation becomes very readable as soon as the teacher writes an example out on the blackboard, and that is why I will never forgive Wikipedia / Wolfram / LaTeX for not having an interactive "notation to example" expansion. They had such a chance to reform the medium, to make it more accessible to beginners, and they basically forgot about them.
Had been in the same situation for years. Read a paper, encounter the first equation, scratch my head and search around trying to understand it, give up. That changed half a month ago, after watching the Linear Algebra and Calculus course at https://www.youtube.com/c/3blue1brown/playlists?view=50&sort....
Let me explain a little bit. Just like a foreign language you stopped learning and using after high school, what prevents you from using it fluently is not just the vocabulary and grammar, but also the intuition and the understanding of the language as a whole. Luckily, math is a human designed language, with linear algebra and calculus being the fundamentals. And again, learning them is about building intuition on why and how they are used, so whenever you encounter transformation, you think in terms of vectors and matrices, and derivative for anything relevant to rate of change. By using carefully designed examples and visual representation, Grant Sanderson greatly smoothed the learning curve in the video courses. Try it out and you'll see.
Beyond that, different fields do have slightly different notation. When you first encounter them, just grab some introduction books or online courses and skim over the very first chapters.
I learned it by asking peers in grad school what stuff meant, and working through the math myself (it was a slog at first) and then writing it out in LaTeX. When one is forced to learn something because one needs to take courses and to graduate, the human brain somehow figures out a way.
A lot of it is convention, so you do need a social approach - i.e. asking others in your field. For me it was my peers, but these days there's Math Stack Exchange, Google, and math forums. Also, the first few chapters of an intro Real Analysis text are usually a good primer on the most common math notation.
When I started grad school I didn't know many math social norms, like the unstated one that vectors (say x) were usually in column form by convention unless otherwise stated (in undergrad calc and physics, vectors were usually in row form). I spent a lot of time being stymied by why matrix and vector sizes were wrong and why x' A x worked. Or that the dot product was x'x (in undergrad it was x.x). It sounds like I lacked preparation but the reality was no one told me these things in undergrad. (I should also note that I was not a math major; the engineering curriculum didn't expose me much to advanced math notation. Math majors will probably have a different experience.)
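For anyone hitting the same wall, a small numpy sketch of the column-vector convention described above (the names and numbers are made up for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    x = np.array([[1.0],
                  [2.0]])            # a column vector: shape (2, 1)

    quad = x.T @ A @ x               # x' A x: (1x2)(2x2)(2x1) -> a 1x1 result
    dot  = x.T @ x                   # x' x: the dot product written as a matrix product
    print(quad.item(), dot.item())   # 18.0 5.0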
Math papers can be pretty sloppy, and you don't realize this until you start working with formal mathematics—then it's obvious.
Almost all hand "proofs" in math papers have minor bugs, even if they're mostly correct in the big picture sense.
Even math designed to support programming (e.g. in computer graphics) is almost always incomplete/outright wrong in some meaningful way.*
But with a struggle, it's still largely usable/useful.
I've used advanced mathematics most of my career to do work (i.e. read a paper, implement it), but for me the ability to actually use math to do new things in computer science that mattered only came after I learned TLA+, which took a few weeks of solid study before it clicked. Since then, it's been a pleasure. My specs have never been this good!
Lamport's video course on TLA+ is pretty good, but honestly I've read everything I can find on the topic so it's difficult to know what helped me the most.
*I think this is because, short of doing formal mathematics, there's no way to "test" your math. It's the equivalent of expecting programmers to write correct code the first time with no tests, and without even running the code.
You might be better off picking an area and trying to work out the notation relating to that area, e.g. vectors / matrices / calculus etc. As Colin says below, there are often multiple equivalent ways of representing things across different fields and timeframes. I seem to remember the maths I studied in Elec Eng looking different from, but equivalent to, the way it was represented in other disciplines.
>... I'd really like to learn "higher level than highschool" math...
This sounds somewhat abstract, as the math field is vast. If you consider the next level up from where you believe your present standing is, I would try to revisit the college-level math which you probably experienced back in the day.
Generally, the textbooks rely on previous knowledge and gradually feed the new concepts, including the math notation as needed in the new scope.
I find it easier to get the feel for the notation by actually writing it by hand. Indeed it's just an expression tool. Also, you may develop your own way of making notes, as you go on dealing with math-related problems.
But in the core of this you are learning the concepts and an approach to reasoning. Of course, for this path to have any practical effect, you would need to memorize quite a bit, some theorems, some methods, some formulas, some applications. Internalizing the notation will help you condense all of that new knowledge.
Picking a textbook for your level is all that is needed to continue the journey!
First, just to state the obvious, if you can accurately describe a notation in words, you can do an Internet search for it.
When that fails, math.stackexchange.com is a very active and helpful resource. You can ask what certain notation means, and upload a screenshot since it’s not always easy to describe math notation in words.
If you don’t want to wait for a human response, Detexify (https://detexify.kirelabs.org/classify.html) is an awesome site where you can hand draw math notation and it’ll tell you the LaTeX code for it. That often gives a better clue for what to search for.
For example you could draw an upside down triangle, and see that one of the ways to express this in LaTeX is \nabla. Then you can look up the Wikipedia article on the Nabla symbol. (Of course in this case you could easily have just searched “math upside down triangle symbol” and the first result is a Math Stack Exchange thread answering this).
I have a master's in engineering, but there were a lot of pure math things that I never understood until recently. I found that the same approach works as for learning software concepts and APIs: just start at the one you don't know and recursively explore the concepts until you find stuff you do know.
My advisor's advice was basically "find a notation that you yourself like and understand well" and stick consistently to it. He said this in the context of having seen many standard notations before (so he's not saying to re-invent the wheel), but his point was just that notations and ways of thinking are personal. Try to be clear and precise (for yourself and others), but realize that you are crafting something that reflects you and your way of thinking.
It's kind of a cop-out, but to be fair it's basically what I would say for programming as well. Try to simultaneously write code that is clear to yourself and clear to others. There's no perfect method. Just constantly self-critique and try to improve.
Related question, does anyone know of any websites/books that have mathematical notation vs the computer code representing the same formula side by side? I find that seeing it in code helps me grasp it very quickly.
One of the best things I figured out is that, at least for work from the last 70 years or so, it's pretty easy to find the "first" or foundational paper for a particular construct, where the authors have to explain their notation for the first reader, or at least have the vibe of working with the new idea in the raw rather than 40 years later when it has matured. One example I use for this is Hamming codes: some of the recent examples or explanations don't build it from first principles, but the original articles do explain it very clearly.
I learned it the hard way at college. The prof just blasted off in a flurry of Greek with us trying to keep up. Eventually he resorted to using the gothic alphabet and at that point I gave up. Every single field of math appears to have its own conventions, and even these are but guidelines that smart people often bend for perceived benefit. So I can't really recommend anything other than learning the notation with the field. If you find yourself at a loss, chances are you have missed some of the context and should backtrack.
Do you mean all the introductory mathematics books you tried fail to properly explain the notation ?
Or that the notation differs from books to books ?
(In my case, I learned the notation via French math textbooks, and in the first day of college/uni we literally went back to "There is a set of things called natural numbers, and we call this set N, and there is this one thing called 0, and there is a notion of successor, and if you keep taking the successor it's called '+', and..." etc.
But then, the French, Bourbaki-style of teaching math is veeeeeeeery strict on notations.)
I’ve run into this problem as well and it’s put me off learning TLA+ and information theory, which bums me out. I assume there’s a Khan Academy class that would help but it’s hard to find.
You get better at it the more you do. A tip is also to actually change a mathematical exposition into a form you better understand (e.g. by writing it in a different notation and/or expanding it out in words to make the existing notation less dense). Basically convert the presentation into the way you would personally like to see it.
If you do this enough, the process becomes easier and the original notation becomes easier to understand. But it takes a lot of time and patience (as I'm sure it did for you to understand undocumented code as well).
It can be quite provincial. Could you please post a link to a paper or website that has notation you'd like to understand? Which domains are you interested in particularly?
My suggestion to you is going to sound pithy, but it's what worked for me: do problems. Lots and lots of problems.
Pick a direction (maybe discrete math, if you're trying to do CS) and get a book (I like EPP, as it is super accessible) and go, in order, through each chapter. Read, do the example problems, and do EVERY SINGLE PROBLEM in the (sub)chapter section enders.
It's a time commitment, but if you really want to learn it, this is one way to do so. IMO finding the right textbook is key.
I’d highly recommend this book. It’s what I had for my intro to proofs class in college and it was the best book I found for understanding. I found many other books on this topic to be kinda garbage but this one was amazing.
> I find it really hard to read anything because of the math notations and zero explanation of it in the context.
So many answers and no correct one yet. Read and solve "How to Prove It: A Structured Approach", Velleman. This is the best introduction I've seen so far. After finishing you'll have enough maturity to read pretty much any math book.
I learned most of my university math through "Calculus: A Complete Course". But it's a bit expensive, so I would recommend you buy an older edition of the book, for which you can find a free solutions PDF.
But you'll have to be a bit realistic when going through the book, it's going to take a good while.
I have a notation problem. I want to write "approximately 24 volt" on my printed circuit board, but I have little space. I could write "≈24V", but the wavy symbol makes it look like it is AC instead of DC. How to solve this without adding more characters or changing my circuit?
If you haven't already, I would start by learning the Greek alphabet and the sounds that the letters make. Conventions like Σ for sum and Δ for difference seem much less strange when you realize that they're basically just S and D.
If it’s not, the book is badly written. Most of the time, you can’t rely on a specific bit of notation to be consistent across books or articles. Smart arses who try to impress the readers with their fancy unique notations are the bane of scientists doing literature reviews.
90% of the time, there needs to be a keyword when a symbol is introduced, e.g. “where Λ is the time-dependent foo operator”, so you can go get a textbook and find out what the fuck a “foo operator” is. Then the first time you spend a day learning what it is, and the next million times you mumble “what a stupid notation for such a straightforward concept”.
I found that there is a physicality/motion to the progression of notation that you learn by solving a lot of problems, especially solving them quickly during tests
Is there any particular topic? I agree with other posters though that the notation is a short hand for the concepts and you need the concepts, not the notation.
after I started taking quantum chemistry, the professor wrote up on the board:
E Psi = H Psi
and we all joked you could just cancel the Psi and so E=H.
several very kind people explained vector calculus to me ("bold means a matrix, and this dot means matrix multiplication") but to be honest, I still can't read math notation; if you show me anything in numpy, though, I'll understand it immediately.
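In that spirit, a tiny numpy sketch of what that advice translates to; the expression y = A·x + b is made up here, just to show the mapping:

    import numpy as np

    # "bold A" -> a matrix, "bold x" and "bold b" -> vectors, the centered dot -> matrix multiplication
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    x = np.array([5.0, 6.0])
    b = np.array([0.5, -0.5])

    y = A @ x + b                    # y = A·x + b
    print(y)                         # [17.5 38.5]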
I sometimes think math notation is a conspiracy against the clever but lazy.
Being able to pronounce the Greek alphabet is a start, as you can use your ear and literary mind once you have that, but when you encounter <...>, as in an unpronounceable symbol, the meaningless abstraction becomes a black box and destroys information for you.
Smart people often don't know the difference between an elegant abstraction that conveys a concept and a black box shorthand for signalling pre-shared knowledge to others. It's the difference between compressing ideas into essential relationships, and using an exclusive code word.
This fellow does a brilliant job at explaining the origin of a constant by taking you along the path of discovery with him, whereas many "teachers" would start with a definition like "Feigenbaum means 4.669," which is the least meaningful aspect to someone who doesn't know why. https://www.veritasium.com/videos/2020/1/29/this-equation-wi...
It wasn't until decades after school that it clicked for me that a lot of concepts in math aren't numbers at all, but refer to relationships and relative proportions and the interactions of different types of things, which are in effect just shapes, but ones we can't draw simply, and so we can only specify them using notations with numbers. I think most brains have some low level of natural synesthesia, and the way we approach math in high school has been like imposing a three-legged race on anyone who tries it.
Pi is a great example, as it's a proportion in a relationship between a regular line you can imagine, and the circle made from it. There isn't much else important about it other than that it applies to everything, and it's one of the first irrational numbers we found. You can speculate that a line is just a stick some ancients found on the ground and so its unit is "1 stick" long, which makes it an integer, but when you rotate the stick around one end, the circular path it traces has a constant proportion to its length, because it's the stick and there is nothing else acting on it, but amazingly that proportion that describes that relationship pops out of the single integer dimension and yields a whole new type of unique number that is no longer an integer. The least interesting or meaningful thing about pi is that it is 3.141 etc. High school math teaching conflates computation and reasoning, and invents gumption traps by going depth first into ideas that make much more sense in their breadth-first contexts and relationships to other things, which also seems like a conspiracy to keep people ignorant.
Just yesterday I floated the idea of a book club salon for "Content, Methods, and Meaning," where, starting from any level, each session 2-3 participants pick and learn the same chapter separately and do their best to give a 15 minute explanation of it to the rest of the group. It's on the first year syllabus of a few universities, and it's a breadth-first approach to a lot of the important foundational ideas.
The intent is that I think we only know anything as well as we can teach it, so the challenge is to learn by teaching, and you have to teach it to someone smart but without the background. Long comment, but keep at it; dumber people than you have got further with mere persistence.
If math was a programming language, all mathematicians would be fired for terrible naming conventions and horrible misuse of syntax freedom.
Honestly, most math formulas can be turned into something that looks like C/C++/C#/Java/JavaScript/TypeScript code and become infinitely more readable and understandable.
Sadly, TypeScript is one of the languages that is attempting to move back to idiocy by having generics named a single letter. Bastards.