One thing to note is that there is no universally agreed upon convention for denoting math objects. Textbooks and research papers typically start with a section on notation to clarify the symbols they will use. Notation often varies between fields and academic schools, and sometimes there are even differences between the notation you would use when writing on a blackboard and the notation you would use in print.
That being said, for the most basic concepts the notation is pretty consistent, so if you skim through one or two books you'll get a feel for it. Understanding the actual math—that will take longer.
"One thing to note is that there is no universally agreed upon convention for denoting math objects."
This is the thing that trips me up most often. It's especially a problem when the author believes that their notation and variables are universal and therefore don't need to be defined. I've spent a lot of time these past few months trying to "reverse engineer" calculations done in research papers to verify their notation.
Textbooks and theses generally do a better job of defining everything because they aren't as concerned about document length.
I think the reason is that math notation is used in context, which is what explains the two different notations for calculus derivatives (Newton's and Leibniz's): they were interested in different things! Similarly, in quantum mechanics (not my area) you may want to use bra-ket notation, or matrix representations if you care about implementation details.
Notations that have been around for a while are usually quite stable, but they can still diverge between countries. For example, in France you would write [2, 5[ for the range of real numbers from 2 (included) to 5 (excluded), but in the US it's more common to see [2, 5).
Notations that are more recent are... all over the place.
But as parent said, good papers will always include a notation section to disambiguate. Bad papers won't.
For interviews and programming puzzles you only need to know the notation for basic mathematical logic, basic set theory, and summation notation, plus maybe some bits and pieces from number theory.
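To give a rough idea of how those pieces map onto code, here is a small sketch of my own (the values of n and S are just placeholders):

    n = 10                  # placeholder value for this sketch
    S = {1, 2, 3, 4}        # likewise, an arbitrary finite set

    # Σ_{i=1}^{n} i^2 : a summation is a loop, or sum() over a generator
    total = sum(i**2 for i in range(1, n + 1))

    # ∀x ∈ S: x is even  /  ∃x ∈ S: x is even : quantifiers map to all() / any()
    all_even = all(x % 2 == 0 for x in S)
    some_even = any(x % 2 == 0 for x in S)

    # {x ∈ S | x > 0} : set-builder notation is a set comprehension
    positives = {x for x in S if x > 0}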
An upvote isn’t enough for this book, so I need to comment that it’s the best I’ve come across for my needs. When I was getting into the more mathematical aspects of coding while starting out with machine learning five years ago, this book was invaluable.
Having thought in code (with verbose variables and structure) for many years, I needed a Rosetta Stone for the ambiguous symbology of mathematics - and this is it!
It’s tinier than you’d think, but it is an absolutely incredible reference. An absolute requirement for any engineer’s bookshelf.
This book single-handedly saved my butt a few years ago when I was diving deep into deep learning without a math background. It’s very slim, which is a huge selling point in a world of 9000-page textbooks. I love it.
I'd argue that any attempt at understanding mathematical notation universally will fail. Different fields and different sub-topics and different authors have vastly different conventions, for good reason.
Sure, one can perhaps expect that something that uses an integral sign shares some properties with ordinary integration of real functions, but to really understand what the notation entails, one has to study the underlying material.
I feel that what you're asking for is kind of akin to wanting to read a novel in a foreign language using only a dictionary of the 10% most commonly used words of said language, with each entry resolving only to one meaning of the word.
Surely the optimistic reading of this question is that someone is interested in expanding their math knowledge and wouldn't have an issue getting exposed to more math.
I think your answer is akin to telling a tourist in France that they shouldn't try to learn basic conversational French because they could never hope to understand the complexity of the complete language.
I understand your point. I just suspect, perhaps wrongly, that people asking the question on HN may be inclined to base serious stuff on their tourist's understanding of French. There's a place for tourist math, but it can be dangerous when misused.
See the book recommended by @kasbah above - one of the best parts of that book is that this ambiguity is addressed front and center, and the reference is broken up by domain...it’s the best attempt I’ve seen and works very well.
I've been working on some of these ideas. Even if you had Mathematica-grade program implementations of all mathematical objects and properties, you can't get around things like "you can't solve the quintic this way, dummy!" or "no, you can't have a good visual of R^n on a 2D screen!" Etc.
That doesn't mean all hope is lost. For now, I won't say more.
In addition to some great responses already on here, I would suggest picking up a functional programming language as a way to bridge the gap between math and the C-style syntax that most of us learned to program in. Haskell and PureScript are good for this; many programs actually use even more mathy aliases for common tokens (e.g. `∀` for `forall`).
What does "understand" mean here? Notation is just that: notation.
I think that the single biggest advantage one can have when programming something "non-trivial" (a loaded term, I know), as a programmer rather than as a person, is a firm grasp of the mathematical basis of their work. It's so much easier to start something new when you can derive it yourself.
If you have the time, I recommend "Advanced Engineering Mathematics" to bridge the gap between calculus and applications, and for other topics like linear algebra, analysis, and graph theory.
If you just want a mapping of symbols to words, try the LaTeX documentation.
To understand basic notation like summations and matrix multiplications, I created Math to Code, which is a quick tutorial to translate math into NumPy code:
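To give a flavour of that kind of translation, here is a minimal sketch of my own (not an excerpt from the tutorial):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])

    # Σ_i x_i : summing the entries of a vector is a reduction
    total = np.sum(x)

    # (AB)_{ij} = Σ_k A_{ik} B_{kj} : the triple-index formula is just matrix multiplication
    C = A @ B      # equivalently np.matmul(A, B)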
There is no rule of mathematical notation except that things that are written as an index (whether as a subscript or superscript or argument) are stuff the given object depends upon. Everything else builds upon that rule and is defined in some context.
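For example, reading a few common patterns with that rule in mind (my own illustration):

    a_{ij}         % depends on the row index i and the column index j
    x^{(k)}        % depends on the iteration number k
    f_\theta(x)    % depends on both the parameter \theta and the argument x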
In addition to other comments, I would also recommend "A Programmer's Introduction to Mathematics" by Dr. Jeremy Kun [0]. The HN submission [1] may have more interesting stuff around the topic.
Notation varies depending on the author and subject area but a good resource for "programmer/computer science" notation is to skim through Concrete Mathematics or the preliminaries to The Art of Computer Programming -- I find this notation to be common.
In more specialized areas like type theory, first-order logic, predicate calculus, temporal logic, etc., you have to pick it up as you go.
This won't solve all your problems, but it _can_ be a big help to know what to search for when you see a wall of symbols, and detexify.kirelabs.org is a decent resource for that -- you can draw a single symbol and get the LaTeX code that would generate it.
(if you're typesetting math it's invaluable, not just decent)
There are a lot in the Abramowitz and Stegun handbook, in the last section, "Index of Notation". It's not quite what you're asking for, but it's fairly authoritative.
As for references, here is a very comprehensive standard, ISO 80000-2, which defines recommendations for many of the math symbols, with mentions of other variations: https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag...
For something shorter (and less complete), you can also check the notation appendices in my books: https://minireference.com/static/excerpts/noBSguide_v5_previ... https://minireference.com/static/excerpts/noBSguide2LA_previ...