It is a reading and discussion group that focuses on math and CS books, articles, and papers, as well as using software tools like Python, WolframAlpha, Desmos, etc. to explore new concepts we come across. As an example of what we discuss in this forum, see "From Vector Spaces to Periodic Functions" that was a result of one of the discussions in this forum. We plan to discuss Munchausen numbers next.
If this sounds interesting to you, please join us.
I didn't carefully read through the section on modular arithmetic, so maybe this is already there, but one thing worth noting that bites a lot of people is that using modular arithmetic for integer representations means that addition, subtraction, and multiplication work the same for signed and unsigned integers. This means that you can typically write arithmetic expressions involving these operations without worrying too much about whether things are being regarded as signed or unsigned at intermediate steps. As soon as you introduce division into the mix, though, the whole thing falls apart, and you have to be very careful about how the language is interpreting each subexpression.
The reason for that is that the compiler is then allowed to assume that expressions like (x + 1 > x) are true, and can therefore optimise more.
Can you provide an example?
Now all of this combines to make addition, subtraction, and even multiplication essentially identical for signed and unsigned numbers. On an 8-bit system (so unsigned integers range from 0 to 255 inclusive) you could be trying to add the signed number 40 and the unsigned number 200. Naturally, this is 200 + 40, or 240. Inside the computer, 240 is stored in exactly the same way as -16, so it is important that the software knows the correct way to handle the variable. This is also equivalent to subtracting 56 from 40, because the signed integer -56 is represented in exactly the same way as the unsigned integer 200.
Now this is where it starts to get crazy. What if you add the signed integer -5 (represented by 251 in the unsigned world) and the unsigned integer 100? Well, it is equivalent to 251 + 100 = 351. However, now we encounter modular arithmetic: because this is an 8-bit system and the maximum value for an integer is 255, we calculate everything mod 256, so we are left with 351 % 256 = 95. You can mentally view this as the calculation occurring with 1s and 0s, where every time a 1 is added to a 1, a single bit is carried and the last bit is dropped, which is exactly how full and half adders work in a processor!
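The wrap-around above is easy to check in a few lines of Python (a quick sketch; the `MASK` constant stands in for the 8-bit word size from the example):

```python
MASK = 2**8  # 8-bit word: all arithmetic wraps mod 256

def to_unsigned(x):
    """Two's-complement encoding: -5 becomes 251, etc."""
    return x % MASK

# Adding signed -5 and unsigned 100, exactly as in the walk-through:
assert to_unsigned(-5) == 251
assert (251 + 100) % MASK == 95
assert (-5 + 100) % MASK == 95  # same bit pattern either way
```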
This also works for multiplication! If we multiply -5 by 5, we get -25 easily. But for computers we do 251 * 5 = 1255. The modulo math starts to get tricky for us puny humans but for a computer it is trivial, 1255 % 256 = 231. As a signed integer that is equal to -25, as it is equivalent to (255+1-25).
For division, everything is just crazy. 251/5 is way different from -5/5; no amount of mod arithmetic will save it. Unsigned 50 and signed -1 have completely different representations, and this is even an example without fractions and rounding.
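You can see division break in Python too. This sketch reinterprets the same bit pattern (251, i.e. signed -5 from the earlier example) both ways; the two quotients no longer share a representation:

```python
MASK = 2**8
bits = 251  # the 8-bit pattern shared by unsigned 251 and signed -5

unsigned_result = bits // 5           # 251 // 5 == 50
signed_value = bits - MASK            # reinterpret as two's complement: -5
signed_result = signed_value // 5     # -5 // 5 == -1

assert unsigned_result == 50
assert signed_result == -1
# The results have different bit patterns (50 vs 255), unlike + - *:
assert (unsigned_result % MASK) != (signed_result % MASK)
```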
I may have gotten long-winded, but hopefully this helps someone!
As an example, say you're working modulo 17 and you want to work out what division by 3 means. You want to solve equations like x * 3 = 10 (mod 17). In this case, x = 9, because 9 * 3 = 27 = 10 (mod 17).
In general, when working modulo 17, 6 is the multiplicative inverse of 3, i.e. 6 * 3 = 18 = 1 (mod 17).
Thanks to Fermat's Little Theorem you can find multiplicative inverses very easily: y^(p-2) mod p gives you the inverse of y (for prime p, and y not divisible by p).
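In Python this is a one-liner, since the built-in three-argument `pow` does modular exponentiation. A quick check against the mod-17 example above:

```python
p = 17  # must be prime for Fermat's little theorem to apply

def inverse(y, p):
    """Multiplicative inverse of y mod prime p, via y^(p-2) mod p."""
    return pow(y, p - 2, p)

assert inverse(3, p) == 6             # matches: 6 * 3 = 18 = 1 (mod 17)
assert (inverse(3, p) * 10) % p == 9  # so 10 "divided by" 3 is 9 mod 17
```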
(Most programming languages won't know that you are going to apply a mod afterwards, so they just give you eg rounded or truncated integer division. Which is completely different in general.)
Technically you can get fields with p^n elements, too; but they are constructed in a more complicated way than just taking a modulo.
I just did some Google Scholar searches, and found a lot about FPGAs etc, but not too much about stock CPUs. But that might reflect more on my search skills than on availability of material.
Some standard CPUs now have instructions to specifically run eg AES.
https://www.snia.org/sites/default/files/files2/files2/SDC20... was probably the most interesting of the bunch I found. It's about error correcting codes, not crypto. (Though interestingly, the holy trinity of error correcting codes, information theory and cryptography all flew once through the common conduit of Claude Shannon.)
Now I'm not aware of any language where there is an integer type that has only two elements, unless you're talking about booleans - but booleans are so special that, while they're of course isomorphic to GF(2), we don't really use the same words (addition and multiplication) but different ones (exclusive disjunction (xor) and conjunction (and)).
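The isomorphism with GF(2) can be checked exhaustively in a couple of lines (a sketch in Python, using its bitwise operators for xor and and):

```python
# In GF(2): addition mod 2 is exclusive disjunction (xor),
# and multiplication mod 2 is conjunction (and).
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b
        assert (a * b) % 2 == a & b
```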
So, in any real-life situation, your fixed-size integers won't form a field, because the ring of integers modulo n is only a field when n is prime, and so in particular, division by nonzero elements will be undefined in general.
And yes, of course, you can't divide by zero, but that's also true of the real numbers themselves, so no surprises there...
(I guess a better point would be that division is mathematically "broken" for integers anyway, since integers technically also don't form a field and, depending on the language, you may either get back a truncated result from division or will get a different type (ratio or floating point).)
If you want to be even more nit-picky, 2^0 might also work. But whether your definition of a field admits fields with only a single element is just a question of taste, that is even less important or deep than whether your flavour of natural numbers includes 0 or not.
It is my understanding that some mathematicians are looking into some objects that are kind of "like" a field with one element, but not in the sense of classical algebra, see: https://en.m.wikipedia.org/wiki/Field_with_one_element
However, I haven't really looked into that.
Certain languages like Python or Ruby automatically convert a smaller integer type to a larger, unbounded one. Other languages like Swift (or Rust in debug mode, I believe) prefer to throw an error upon integer overflow instead of silently wrapping. I believe that the silent wrapping behaviour in e.g. C and Java is a decision that prioritises performance over safety, as this is behaviour that you rarely want, and if you do, there should be better ways of expressing it.
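To illustrate the difference: Python just promotes to an unbounded integer, and getting C/Java-style 64-bit wrapping requires explicitly asking for it. A sketch (the `wrap64` helper is hypothetical, not a standard function):

```python
# Python promotes to arbitrary precision automatically:
big = 2**63 - 1          # largest signed 64-bit value
assert big + 1 == 2**63  # no wrap, no error

# To get the silent two's-complement wrap you must opt in explicitly:
def wrap64(x):
    """Reduce x to a signed 64-bit two's-complement value."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

assert wrap64(big + 1) == -(2**63)  # the wrapped result Java would give
```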
If it’s 11 o’clock and we agree to meet in two hours, you instinctively know I’m talking about 1 o’clock. But what you’ve just done is add two numbers mod(ulus) 12.
In effect your brain added 11+2 to get 13, then took away 12 to make it fit, and came up with the modulus (or remainder) of 1.
Modulus arithmetic is therefore sometimes known as clock arithmetic.
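Your brain's calculation, written out as code (a tiny sketch; note that 0 stands for 12 o'clock under this convention):

```python
def clock_add(hour, delta, modulus=12):
    """Add hours on a clock face; 0 represents 12 o'clock."""
    return (hour + delta) % modulus

assert clock_add(11, 2) == 1  # 11 o'clock + 2 hours -> 1 o'clock
```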
Not my German brain. But mod 24, I'm with you again :) And don't get me started on the Russians, they did it in hardware:
Say we wanted to work with mod 5
Then 6 + 8 = 14 = 4 mod 5
There are 1-2 other books constantly recommended here, which are (probably) fantastic but throw you off the deep end with the assumption that you understand algebra, trigonometry, fundamental calculus, and statistics.
Essentially, that you finished school and paid attention. Which is a well-founded assumption. And I don't necessarily expect an author to assume responsibility for covering fundamentals, but it's nice if there are authors willing to do this.
I passed the course (Analysis I) with the minimum score by memorising previous exams and their answers, without understanding any of it, and I've never needed those skills in my professional career.
Logic and algebra classes were a lot more approachable, enjoyable and eventually useful. I am looking forward to learning something from this material, too.
As a high school dropout who only later developed an interest in learning, it's pretty frustrating. My BS is in Information Systems, so I've missed out on a lot of the math that seems to make learning computer science and programming easier. I'll have to give this resource a try. I'm a big fan of his reverse engineering material already.
I’ve tried to relearn calculus a handful of times, but find myself frustrated when I have to go look up basic trig identities and order-of-operations stuff just to get through practice problems. I’d love to relearn math from scratch if anyone has any recommendations.
For calculus itself, I’d recommend Thompson’s enlightening and witty 1910 textbook “Calculus Made Easy”. It’s available for free at:
I’m in a similar position. I’ve been rebuilding my understanding of mathematics using Khan Academy and 3Blue1Brown videos. It’s going pretty well.
The other one is in web format, it's something along the lines of "Visual Linear Algebra". Really unsure of title, but it's centered around visualizations of the math.
On my laptop I would post bookmark but I'm in bed on phone, sorry =/
and mathematics for machine learning: https://mml-book.github.io/
There are also lectures by one of the authors (Tom Leighton) covering those topics.
Other types of math are relevant depending on what you’re interested in, but I think students should learn discrete math before diving into writing any serious code.
Teaching a mathematically literate student about a discrete mathematics concept when they need or want to know it is quick and easy. Forcing students to take subjects they may not find interesting is a sure way to set them up for failure.
Discrete mathematics also makes it easier to introduce rigorous proof. Calculus courses are not even "proof-based"; real analysis is where you'd find the level of proof you get in discrete math.
I find discrete mathematics proofs very doable and useful, but pretty boring compared to analysis and algebra, and I very much enjoyed practicing mathematics skills in the latter rather than the former. (This is coming from someone who has published papers in combinatorics).
Discrete math is lots of fun, but less applicable to the kind of glue code many people write most often.
(Category theory is in some sense exactly the theory of glue code. But it's also generalised abstract nonsense.)
Sure, knowing this doesn't "help you write code" in the same way that knowing "you press the keys on the keyboard to get text to appear on the screen" but I've found this way of thinking to be beneficial to some degree when programming.
But I think that eru is trying to emphasise that one should learn how to program first and then use category theory as a tool to organise your understanding.
I’ve personally gotten a lot more out of treating tuples as product types or logical AND, and disjoint unions as sum types or logical OR, as opposed to categorical limits.
(Basically, what’s the extra value added from Curry-Howard-Lambek, as opposed to just Curry-Howard, in terms of programming?)
(I’m not trying to be argumentative, I’m genuinely curious.)
For what it's worth, https://bartoszmilewski.com/2015/09/01/the-yoneda-lemma/ does a pretty good job of explaining the Yoneda lemma in a programming context. Look for the section "Yoneda in Haskell".
As I also tried to say earlier, in practice I suggest learning programming first, and then using the likes of category theory to organise your understanding.
If you can explain this to the satisfaction of my understanding, I would like to subscribe to your newsletter.
One simple example would just be a discussion around the different use cases of Functor, Applicative Functor, Arrows and Monads. And how they relate to common programming constructs like functions, tuples, arrays, Maybes, futures, whatever's happening in react, etc.
Category theory won't help you learn those concepts, but once you know those concepts, even a superficial understanding of some category theory, can help you organize your thoughts.
As a genuine application of category theory, whenever you find a useful abstraction (like eg functors or monads again), you can have a look at its dual, and see whether that's another interesting structure you haven't seen yet.
Just like tuples and Either are duals of each other.
(As an analogy, dualizing is also a concept in linear programming.
Linear programming can help solve many interesting algorithmic problems, like sorting or matching or assignment or median selection, basically almost anything that's in P, and if you allow integer linear programming then you can solve what's in NP.
However, I wouldn't recommend anyone who wants to learn about algorithms and data structures to start with linear programming.
But once you know a bunch of algorithms, you can re-express the problems they are solving in terms of linear programming, and see whether you get anything interesting out of looking at their duals.)
It's also not that helpful to learn unless you already know a lot of deeper math, like introductory algebraic topology, etc.
These sorts of interesting functors are not common outside of graduate math.
Enough to build your intuition at least, and then start learning category theory from there. (And then try to use that knowledge to kick off some further investigation into the other areas of math that category theory historically comes from, if you are so interested.)
I think it's a bit like learning Spanish first and then Latin later. Vs learning Latin first and then some romance languages.
An entirely different kind of math is also often used in CS: "numerical methods", along with fundamental arithmetic in the digital domain, which is usually treated separately from discrete math.
If we were to think there exists a subset of programmers who don't find Discrete Math useful, what would those programmers look like?
I had a job building e-commerce websites on top of a framework. Our code was a thin layer invoking and customising the framework. It was difficult work that we charged a lot for, because it required extensive knowledge of the framework (which was badly designed and poorly documented). But it never involved thinking really hard about programming itself.
I would imagine most Rails programmers are in a similar place.
I also have not a clue what I'm talking about with math (sub-highschool understanding) so I could be very mistaken about the path thing.
If you want to do video games you'll need to read something else anyway, so better to get into the habit now.
So maybe what I’m asking for is this: is there a “refresh your math, sharpen your sword” resource that exists that one could tune to what they do today?
If you want to refresh your math, take an elementary result of any particular discipline, e.g. the existence and multiplicity of eigenvalues in linear algebra, or the uncomputability of the halting problem, and see if you can prove it yourself from what you still recall. You will have to look up definitions and theorems along the way, refreshing your skills.
Very illuminating book. Turning coffee into a SAT solver is found in most CS books; using SAT to buy more coffee, less so.
Other people can debate category theory's utility: I have no expert insight of my own to lend.
I'm interested in learning about category theory in general just to get a feel for the subject, but I also want to have a better concrete understanding of the category theory that is put to use in Haskell.
Category Theory for Programmers is just that, for programmers. In fact, if I recall correctly its preface explicitly answers the question 'Why write this when Category Theory by Steve Awodey exists?', with the answer being what I summarise above. If you're a computer scientist, programmer, or otherwise a non-mathematician first, Awodey is not for you and this book I believe should serve you very well instead. I haven't read it, mind.
I haven't come across Seven Sketches in Compositionality, but hopefully that helps compare the other two. It's mostly about your intention (e.g. theoretical mathematics vs practical application) and background.
"Programming with Categories"  is probably the best resource to start, the course notes are great and the videos are easy to follow.
Also yes, Awodey is much more in-depth and technical.
(Whoever can stomach it, should give the book a try. It's awesome and fun!)
The OP looks good as well, a bit more applied it seems.
What I want is their latex/pdf setup. I've messed around with it enough to know I'm no good at it and this seems like it would be a great base.
One section that could be added would be about interpolation. I think the formula I've implemented the most is a * p + b * (1.0 - p), i.e. simple linear interpolation, which is also an extrapolation when p is not in [0, 1]. Obviously useful in graphics and animation, but also any time you want to give the user a non-binary choice.
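As a function, using the formula exactly as written above (so p = 1 gives a and p = 0 gives b; a sketch, since many libraries orient the parameter the other way round):

```python
def lerp(a, b, p):
    """Interpolate from b (at p=0) to a (at p=1); p outside
    [0, 1] extrapolates along the same line."""
    return a * p + b * (1.0 - p)

assert lerp(10.0, 0.0, 0.0) == 0.0   # p=0 -> b
assert lerp(10.0, 0.0, 1.0) == 10.0  # p=1 -> a
assert lerp(10.0, 0.0, 0.5) == 5.0   # midpoint
assert lerp(10.0, 0.0, 2.0) == 20.0  # extrapolation
```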
A lot can be said on that topic: bilinear, cubic, smoothstep, etc...
I find Effective Java an easy read for the exact same reason, it makes a point from the very start without telling so many stories.
I feel that it's missing from this list. I rarely see geometry mentioned on these kinds of lists?
I quite like geometry for itself, but it's probably more of a niche for most programmers?
What prime numbers are was explained earlier in it, and what the first 5 are was... probably assumed to be clear enough?