The article is mostly fine, but there's a point I vehemently disagree with: you should not, absolutely not, translate math notation into code.
Math is equational and declarative. When you write:
S = \sum_{i=1}^{100} (2i + 1)
you are declaring that we shall call S the sum of the elements of a certain set. You are not prescribing how this sum ought to be calculated: S is a pure value that doesn't depend on what practical operations you perform -- be they on paper or on a CPU -- to actually find out what S is.
For example, using equational reasoning, we may write:
S = \sum_{i=1}^{100} 1 + 2 \sum_{i=1}^{100} i = 100 + 2 \cdot \frac{100 \cdot 101}{2} = 100 + 100 \cdot 101 = 101^2 - 1
Or, without computing anything at all, we may prove by induction that:
(1) \sum_{i=1}^{n} (2i + 1) = (n + 1)^2 - 1
The base case is obvious, and to prove the inductive step it is sufficient to observe that indeed:

((n + 1)^2 - 1) + (2(n + 1) + 1) = (n + 2)^2 - 1

from which (1) follows.
Tying math to code is more like tying your own hands behind your back.
All of this is basically an excuse to point out that the code for the product is wrong: the initial value should be 1, not 0 (or, in general, your monoid identity).
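To make that last point concrete, here is a minimal Python sketch (the function names are mine): a fold has to start from the identity element of its operation, 0 for sums and 1 for products.

    from functools import reduce
    import operator

    def summation(xs):
        return reduce(operator.add, xs, 0)   # 0 is the additive identity

    def product(xs):
        return reduce(operator.mul, xs, 1)   # 1, not 0: the multiplicative identity

    print(summation(2 * i + 1 for i in range(1, 101)))  # 10200 == 101**2 - 1
    print(product(range(1, 6)))                         # 120, i.e. 5!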
Exactly, that was my take-away. You don't need to think of the Sum function as a for-loop every time you use Sum, but it certainly helps explain it to a programmer who doesn't have a sense of what Sum is, but knows what a for-loop is.
Can someone recommend a good resource for a game developer who wants to understand mathematical notation so he can dissect game development papers faster? Currently it takes me multiple days to understand a single formula.
It feels a bit like how I found it impossible to memorise all the countries of the world as an adolescent, but then when I discovered Seterra as an adult I was able to memorise them all in less than a week. I need an equivalent perspective shift, but for math.
Beyond a relatively small set of standard symbols and functions, notation is dependent on the field or subfield you're studying, so a "guide to mathematical notation" is not really possible. However:
> it takes me multiple days to understand a single formula
This is not weird. Papers are not hard because notation is hard but because the ideas behind them are difficult to understand. It usually takes me several hours of uninterrupted work to read a paper; I'm not a researcher and have never been in academia, but everyone I know says the same thing, so I'm comfortable reassuring you that you're definitely not the odd one out.
A few pointers to go faster:
- Write down your observations (on the paper itself, if you can print it). Ask yourself questions and see if you got things right. Try to replicate their computation steps. Actively engage the topic.
- Keep a dictionary of symbols. What do the authors mean by this ridiculous scribble? Ah, that thing.
- Mathematical notation suffers from catastrophic overloading. For example, if A and B are numbers, then AB is multiplication of numbers. If they are matrices, it's matrix multiplication. If A is a matrix and B is a vector, it's the image of B under A. If one is a vector and the other is a number, it's multiplication by a scalar, and so on. Try to undo this process and figure out what the virtual method call resolves to (see the sketch after this list) :)
- Try to assign "types" to variables. Notation is "dynamically typed", in the sense that a variable could (syntactically) be anything. If you have trouble understanding a formula, try to understand what kind of object each variable represents. Is this squiggle a set or an element? If an element, what set does it come from? What are the arguments of this function? What is its image? Is this "i" a free variable or a dummy variable? To what quantifier is it bound?
- Sometimes notation is abused. E.g. we may write:
10n^2 + n = O(n^2)
This is, strictly speaking, nonsense: O(n^2) is a set, and we're saying it's equal to some undefined stuff. What even is "n"? Obviously, it "compiles" to:
f(n) = 10n^2 + n
f(n) \in O(n^2)
It's normal to abuse notation, but it may cause confusion if one is unfamiliar with the topic. Try to undo this process if you find it's preventing you from understanding a formula.
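Back to the overloading bullet: a small numpy sketch (the matrices are arbitrary) of what the "virtual method call" resolves to in each case:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    v = np.array([1, 1])

    print(2 * A)    # number * matrix: scalar multiplication
    print(A @ B)    # matrix * matrix: matrix multiplication
    print(A @ v)    # matrix * vector: the image of v under A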
Sorry, I don't have a recommendation, but this seems hard: symbols in math can mean different things depending on the context (the field you are looking at), and at least while I was in undergrad they seemed to change between professors all the time while meaning the same thing.
Hopefully I am wrong and there is a way to learn with a game as with Seterra.
For many equations (e.g. PDEs), there is no direct translation of math to code anyway. This is the point of contact with numerics: integration methods etc. form the translation layer between what the math expresses and what a computer can evaluate.
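As a toy illustration of that translation layer (the ODE, step size, and function names are all made up): the math declares y' = -y with y(0) = 1, while the computer evaluates a discretization of it.

    import math

    def euler(f, y0, t_end, h=0.001):
        t, y = 0.0, y0
        while t < t_end:
            y += h * f(t, y)   # forward Euler step
            t += h
        return y

    print(euler(lambda t, y: -y, 1.0, 1.0))  # ~0.3677, the discretization
    print(math.exp(-1.0))                    # ~0.3679, the exact solution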
I think it's highly context-sensitive whether a declarative approach, like "translate the math into code", is bad.
Not to pick fights, but your example seems sort of orthogonal to that aspect. What seems more relevant is how you are choosing to structure your code, and how mutable your state is.
That would not be a problem in Practal. That just falls into the non-executable subset. And all that means is that you have not set up equations that are executable and that handle this case. Actually, you could set things up so that this evaluates up to a certain error bound based on a proof of monotone convergence. All without redefining \sum, but by proving additional equations for \sum.
Note that I am not saying that math is code. Practal is based on logic, and logic transcends code. What I am saying is that code is math. At least, that's obviously true for purely functional code. So there is no reason to write the subset of code which is purely functional not in math notation.
Unless the compiler or runtime is smart enough to recognize the limit of that series [1], the naive implementation is an infinite loop which will never terminate.
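One sketch of a way out (the tolerance is made up): stop once the remaining terms are provably below the error you care about, here for the geometric series \sum_{i \geq 0} (1/2)^i, whose limit is 2.

    def geometric_partial_sum(ratio=0.5, eps=1e-12):
        total, term = 0.0, 1.0
        while term >= eps:       # the naive "sum forever" loop never exits
            total += term
            term *= ratio
        return total

    print(geometric_partial_sum())  # ~2.0; the dropped tail is term/(1-ratio)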
> Article is mostly fine, but there's a point I vehemently disagree with: you should not, absolutely not translate math notation to code.
Translating math to code in itself is an art (or rather, a technical discipline called Numerical Methods). That's (at least one) full course at any real engineering school.
The article is good, but I would expect anyone calling himself a software engineer to have already studied and mastered the material covered.
Yes, of course there are. Now, imagine you're a language model, and given the prompt "I'm going to write code to explain what a finite summation is", try to predict the probability that what follows is "and I'm going to do it in Idris".
"Compiling" down the product to a fold brings you to the same issue. Though of course it's a lot easier to reason about math in functional programming than in imperative programming, you still lose some of the equational properties that later allow you to derive closed formulas.
I think there is one more topic a game developer should know about that fell through the cracks: computer algebra systems.
The rationale is simple. If you want to learn math fast, be prepared to stumble over your lack of practice as soon as you go from theory to applications. Normally, you need to go through a few books of exercises to start feeling comfortable with calculus or discrete math. And that's where computer algebra systems step in: they do your math for you.
It's much simpler to learn the basics of any CAS than to train yourself to solve all kinds of equations with pen and paper. It's easier (and more fun to be honest) to practice with a CAS and gain experience on the go too.
In my book, an introduction to SymPy takes only a few pages, but enables practicing with linear systems, matrix operations, calculus, polynomial approximation and interpolation, Bezier curves, NURBS, and polynomial transformations as the book progresses.
And SymPy is not even the most potent CAS out there. It is free and easily accessible though. If you know a little Python, you already know a little SymPy.
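For a small taste (the variable names are mine), here is the summation from elsewhere in this thread done symbolically:

    import sympy as sp

    i, n = sp.symbols('i n', integer=True, positive=True)
    closed = sp.summation(2 * i + 1, (i, 1, n))
    print(closed)                 # n**2 + 2*n
    print(sp.factor(closed + 1))  # (n + 1)**2, so the sum is (n+1)**2 - 1
    print(closed.subs(n, 100))    # 10200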
I used SageMath (https://www.sagemath.org) a lot when I was an undergrad. A very easy-to-use piece of software built on Python, in case anyone wants to explore computer algebra systems.
Sage is beautiful. And, as far as I understand, it is built on top of SymPy. But it brings in its own language. Nothing wrong with that, though.
But what I love about pure SymPy is that it doesn't add anything new to the language at all. You just write in Python, business as usual; it's just that half of your variables are now symbolic, and you can solve things symbolically whenever you want to.
Yes, +1 for SymPy being the perfect tool for teaching (very little new to learn after the initial call to `symbols` to create the symbolic vars).
The best part for me is the SymPy API methods have the same names as the concepts I am trying to teach: solve, expand, simplify, factor, integrate, etc.
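For instance, one-liners matching those verbs look like this (the expressions are arbitrary examples):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.expand((x + 1)**2))                     # x**2 + 2*x + 1
    print(sp.factor(x**2 - 4))                       # (x - 2)*(x + 2)
    print(sp.simplify(sp.sin(x)**2 + sp.cos(x)**2))  # 1
    print(sp.solve(x**2 - 4, x))                     # [-2, 2]
    print(sp.integrate(sp.cos(x), x))                # sin(x)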
Hm... I don't know of any. For precalculus it wouldn't make sense, since the whole point of precalculus is to make sure learners can do the `solve`, `factor`, `expand`, etc. verbs using pen and paper. These are generally useful skills whenever you're manipulating math expressions. It wouldn't make sense to "skip" this part by using SymPy.
For calculus it would make a lot of sense though. You can learn the formulas and general rules so you can do derivatives and integrals by hand, but I think we could cut down significantly on the "integration techniques" topics without any loss. I would love to see a course like that... but I doubt any teacher would be "allowed" to teach it this way, since CALC I and CALC II curricula are usually imposed by the university.
I might be misinterpreting the parent, but I think the parent's idea is more that it helps when deriving the math used in a lot of video game programming (e.g. simplifying expressions in shader code, or deriving closed forms for intersection tests).
Where this might help is if you don't have a strong background in algebra and don't know all the properties of e.g. quaternions by heart. You'd essentially implement the answer that you get after playing with a computer algebra system.
Computer Algebra Systems basically help you do math. If you want to integrate an expression, or solve a system symbolically, you can, of course, sit down and do that with pen and paper. Or you can "ask" a CAS to do this for you.
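As a hypothetical game-dev flavored example (all symbol names are mine), you can ask SymPy for the closed form of a ray-sphere intersection instead of grinding through the algebra: ray p(t) = o + t*d, sphere of radius r centered at the origin.

    import sympy as sp

    t, r = sp.symbols('t r', real=True)
    ox, oy, oz, dx, dy, dz = sp.symbols('o_x o_y o_z d_x d_y d_z', real=True)
    o = sp.Matrix([ox, oy, oz])
    d = sp.Matrix([dx, dy, dz])
    p = o + t * d

    roots = sp.solve(sp.Eq(p.dot(p), r**2), t)  # quadratic in t
    print(roots)  # the familiar quadratic-formula roots, in dot products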
I recently built a mobile app which de-warps live video from a fisheye camera. Admittedly not a game app, but it uses OpenGL and Metal to do the job. I could not have done it without articles like this. I am a fan of the way he explains; the scope of his articles goes beyond gaming.
This is not a comprehensive answer; please DM me if you are interested in a longer one. But the idea was to map the video onto a 2D grid (25x25), map the 2D grid onto a half-dome 3D model (25x25 vertices), then divide the flat video into textures that fit each of the smaller rectangles/triangles on the 3D half-dome.
Once you have the 3D model, manipulate it using OpenGL or Metal primitives.
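For the curious, a rough numpy sketch of the grid construction (assuming a unit-radius dome and an equidistant fisheye projection, which may differ from the app's actual lens model):

    import numpy as np

    N = 25
    phi, theta = np.meshgrid(np.linspace(0, 2 * np.pi, N),   # azimuth
                             np.linspace(0, np.pi / 2, N))   # 0 = dome apex

    # Half-dome vertex positions: the 3D model you upload.
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)

    # Equidistant fisheye: image radius grows linearly with theta, so each
    # vertex samples the matching spot in the warped frame and the GPU
    # interpolates the de-warp in between.
    rad = theta / (np.pi / 2)
    u = 0.5 + 0.5 * rad * np.cos(phi)
    v = 0.5 + 0.5 * rad * np.sin(phi)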
Great overview, though every one of those topics could be the subject of pages and pages of code and writing. I came across a good book recently that covers much of the material in a somewhat project-oriented way (all Python code, but if you're really motivated you can rewrite them into a more commonly used game language like C++ using SDL2 or similar). Lots of detailed examples and exercises (it's a bit more oriented towards machine learning uses than game engines):
I think the most important kind of math is linear algebra if you are serious about building something from absolutely zero. Achieving mastery over the rendering pipeline requires lots of experience with matrices.
Some of the more esoteric stuff might take you multiple papers/books to fully grok. An example of this for me was clipping of geometry against the view frustum (for building a software rasterizer - GPUs handle this for you now). I've got 3 different papers on this exact topic still sitting on my coffee table. I think the hardest overall aspect is thinking in 4 spatial dimensions and getting your head around all of the transforms.
If you are using a 3rd party engine like Unity or Godot, you may find that mastery of this stuff is not as essential. I think it still helps to understand how the scene graph is ultimately rendered, but unless you are building these engines with your own hands, worrying about acquiring all this knowledge up front could get in the way of making progress.
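A minimal sketch of the 4D machinery in question (the values are arbitrary): a translation packed into a 4x4 homogeneous matrix, plus the divide-by-w that brings points back to 3D.

    import numpy as np

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    p = np.array([1.0, 2.0, 3.0, 1.0])   # a point, w = 1
    q = translation(5.0, 0.0, 0.0) @ p   # 4x4 matrix times 4-vector
    print(q[:3] / q[3])                  # perspective divide -> [6. 2. 3.]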
There is an awesome playlist [0] on YouTube called "CodingMath" that I think is perfect for anyone interested in computer-graphics or video game development. I think it's useful for every developer, too.
The author uses JS and HTML5 to implement what he explains, but you do not need any JS or HTML5 experience to understand it, and you can recreate all the code in any language. All you need is experience in some programming language.
Coding Math is really good. I think I've watched most of his(?) videos. Sometimes I think better drawings could help instead of hand-written graphs and diagrams, but the main point gets across.
I like that the author spends time on notation, mapping math concepts to their equivalent in code. In my experience this is a big stumbling block for coders trying to learn math.
It's difficult enough for the general audience to learn math notation (alien symbols), but coders face the extra difficulty of "namespace conflicts" for the meaning of math symbols (e.g. = sign does not mean assignment).
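A tiny illustration of that namespace conflict (the values are made up):

    # In code, "=" binds a name; math's "=" is closer to "==", a claim
    # that two expressions denote the same value.
    x = 3          # assignment: the name x now refers to 3
    x = x + 1      # re-binding; read as an equation, this would be false
    print(x == 4)  # equality test, the nearer analogue of math's "=" -> True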
I found bi-vectors[1] to be a really interesting algebra for graphics. Unfortunately, I've not had much use for this kind of math to be able to really dig in.
Gustavo has created superb courses around Maths and Physics of game development.
I did 2 of his courses and the RoI in terms of learning is huge. The crucial point is: he starts with an empty file and codes up the stuff with you at an understandable pace, line by line. A lot of concepts I had only understood in a hand-waving sense became much clearer after going through this routine.
I'm planning to take his Physics course next, but maybe code it up in Rust so that I'm not re-inventing some wheels (like dynamic arrays). But if anyone wants to brush up on the most important pieces of math (vectors, matrices, calculus), I will heartily recommend his courses.
This is excellent. However, some may feel a large portion on fearing math in school is out of scope (for them). If that's you, after the first few paragraphs of "Our Weird Relationship With Math" skip to "Going Back To The Basics".