That being said, for the most basic concepts the notation is pretty consistent, so if you skim through one or two books you'll get a feel for it. Understanding the actual math will take longer.
As for references, ISO 80000-2 is a very comprehensive standard that defines recommended notation for many mathematical symbols, along with mentions of common variations:
For something shorter (and less complete), you can also check the notation appendices in my books:
This is the thing that trips me up most often. It's especially a problem when the author believes that their notation and variables are universal and therefore don't need to be defined. I've spent a lot of time these past few months trying to "reverse engineer" calculations done in research papers to verify their notation.
Textbooks and theses generally do a better job of defining everything because they aren't as concerned about document length.
Notations that are more recent are... all over the place.
But as parent said, good papers will always include a notation section to disambiguate. Bad papers won't.
- Mathematics for Computer Science: https://courses.csail.mit.edu/6.042/spring17/mcs.pdf
- Calculus Made Easy: http://calculusmadeeasy.org
Not directly related to your question but useful for interviews and programming puzzles nonetheless:
- Algorithms and Data Structures, The Basic Toolbox: https://people.mpi-inf.mpg.de/~mehlhorn/ftp/Mehlhorn-Sanders...
- Basic Proof Techniques: https://www.cse.wustl.edu/~cytron/547Pages/f14/IntroToProofs...
If Wikipedia is too hard to follow, you can learn this from early chapters of a discrete mathematics textbook.
"Mathematical Notation: A Guide for Engineers and Scientists"
Having thought in code (with verbose variables and structure) for many years, I needed a Rosetta Stone for the ambiguous symbology of mathematics - and this is it!
It’s tinier than you’d think, but it is an incredible reference and a requirement for any engineer’s bookshelf.
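That code-to-notation mapping is often direct: compact symbols like Σ and Π correspond to loops or comprehensions over an index variable. A small sketch in Python (the function names are mine, just for illustration):

```python
import math

# Sigma notation: sum_{i=1}^{n} i^2 becomes a generator expression over the index range.
def sum_of_squares(n):
    return sum(i * i for i in range(1, n + 1))

# Pi notation: prod_{i=1}^{n} i (i.e. n!) becomes math.prod over the same range.
def factorial_via_product(n):
    return math.prod(range(1, n + 1))
```

Once you see the index variable, the bounds, and the body as the pieces of the notation, most Σ/Π expressions read like familiar code.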
Sure, one can perhaps expect that something using an integral sign shares some properties with ordinary integration of real functions, but to really understand what the notation entails, one has to study the underlying material.
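To make that concrete, here are a few constructions that all reuse the integral sign but carry different definitions (a small illustrative sample, not an exhaustive list):

```latex
\int_a^b f(x)\,dx      % Riemann/Lebesgue integral of a real function over [a, b]
\oint_\gamma f(z)\,dz  % contour integral along a closed curve in complex analysis
\int_\Omega f\,d\mu    % integral against an abstract measure \mu
```

The shared symbol signals an analogy, but the actual meaning in each case comes from the surrounding theory, not from the glyph itself.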
I feel that what you're asking for is kind of akin to wanting to read a novel in a foreign language using only a dictionary of the 10% most commonly used words of said language, with each entry resolving only to one meaning of the word.
I think your answer is akin to telling a French tourist that they shouldn't try to learn basic conversational French because they could never hope to understand the complexity of the complete language.
That doesn't mean all hope is lost. For now, I won't say more.
Two excellent resources are:
1. Introduction to Mathematical Thinking (if you prefer MOOCs) - https://www.coursera.org/learn/mathematical-thinking?
2. How to Think Like a Mathematician - https://www.amazon.co.uk/How-Think-Like-Mathematician-Underg...
I think the single biggest advantage one can have (in programming that does something "non-trivial" - a loaded term, I know - as opposed to as a person) is a firm grasp of the mathematical basis of one's work. It's so much easier to start something new when you can derive it yourself.
If you have the time, I recommend "Advanced Engineering Mathematics" for bridging the gap between calculus and applications, plus other topics like linear algebra, analysis, and graph theory.
If you just want a mapping of symbols to words, try the LaTeX documentation.
A la https://oeis.org/wiki/List_of_LaTeX_mathematical_symbols
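This works because LaTeX source effectively spells out each symbol's name in words. A quick sketch of how a dense-looking formula reads once you see its source:

```latex
% \forall = "for all", \exists = "there exists", \in = "element of",
% \mathbb{R} = "the real numbers"
\forall x \in \mathbb{R},\ \exists y \in \mathbb{R} : y > x
```

So pasting an unfamiliar formula's source into that symbol list, command by command, gives you a rough English reading of the whole expression.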
Previous HN discussion / it was on the front page earlier this week:
Source: I am a mathematician
In more specialized areas like type theory, first order logic, predicate calculus, temporal logic, etc you have to pick it up as you go.
(if you're typesetting math it's invaluable, not just decent)