Mathematics for Programmers [pdf] (yurichev.com)
589 points by signa11 on Sept 24, 2020 | 115 comments



For all the mathematics enthusiasts and scholars out here, we have Slack and Freenode IRC channels[1][2] to discuss mathematics and computer science literature like this.

It is a reading and discussion group that focuses on math and CS books, articles, and papers, as well as using software tools like Python, WolframAlpha, Desmos, etc. to explore new concepts we come across. As an example of what we discuss in this forum, see "From Vector Spaces to Periodic Functions"[3] that was a result of one of the discussions in this forum. We plan to discuss Munchausen numbers[4] next.

If this sounds interesting to you, please join us.

[1]: https://bit.ly/euclidslack

[2]: https://webchat.freenode.net/#euclidpoint

[3]: https://susam.in/blog/from-vector-spaces-to-periodic-functio...

[4]: https://arxiv.org/pdf/0911.3038.pdf


Thanks for this! As a fresh IT dev student, this will help me out!


Just a suggestion: why not add a Discord link too?


I think Slack is better suited for information-oriented conversation.


Why?


Maybe because there are no threads in Discord. If you want to reply to someone on Discord, you can only quote them or use @. And if there are a lot of people in a channel, it's hard to follow a discussion.


Isn't it possible to send them a DM?


And then nobody else benefits from the discussion


Thanks, will join


One subject that's pretty useful for programmers to understand reasonably well is numerical analysis, even if it's at a fairly basic level. It's amazing how quickly you run into problems in practice as soon as you try to do anything with floating point numbers.

I didn't carefully read through the section on modular arithmetic, so maybe this is already there, but one thing worth noting that bites a lot of people is that using modular arithmetic for integer representations means that addition, subtraction, and multiplication work the same for signed and unsigned integers. This means that you can typically write arithmetic expressions involving these operations without worrying too much about whether things are being regarded as signed or unsigned at intermediate steps. As soon as you introduce division into the mix, though, the whole thing falls apart, and you have to be very careful about how the language is interpreting each subexpression.


If you are using C or C++, you need to be extremely careful with signed integers, even when doing modular arithmetic: signed overflow is undefined behaviour.


Is that due to the division taking place like the parent mentions, or something else?


Due to how C and C++ are defined in their standards.

The reason behind that is so that the compiler is allowed to assume that expressions like (x + 1 > x) are true, and thus can optimise more.


I'm not sure I understand what you're saying about mod arithmetic.

Can you provide an example?


Not OP, but I'll give it a shot. At a low level, pretty much all standard integer math is modular arithmetic, typically mod 2^32 or 2^64 depending on whether a computer is 32- or 64-bit. This means that when a number goes above this limit it "overflows", and only the remainder after dividing by the modulus is kept. Additionally, negative numbers are stored by essentially counting down from the maximum number that can be stored plus one; in scholarly terms this is two's complement. For instance, on a 32-bit system, -7 as a signed integer would be stored identically to the unsigned integer (2^32)-7. For anyone new to programming: "unsigned" means an integer variable that can only be positive, while "signed" means an integer variable that can be positive or negative.

Now all of this combines to make addition, subtraction, and even multiplication essentially identical for signed and unsigned numbers. On an 8-bit system (so integers can be between 0 and 255 inclusive), say you add 40 and the unsigned number 200. Naturally, this is 200 + 40, or 240. Inside the computer, though, 240 is stored in exactly the same way as -16, so it is important that the software knows the correct way to interpret the result. This is equivalent to subtracting 56 from 40, because the signed integer -56 is represented in exactly the same way as the unsigned integer 200.

Now this is where it starts to get crazy. What if you add the signed integer -5 (represented by 251 in the unsigned world) and the unsigned integer 100? That is equivalent to 251 + 100 = 351. Here we encounter modular arithmetic: because this is an 8-bit system and the maximum value for an integer is 255, we calculate everything mod 256, so we are left with 351 % 256 = 95. You can mentally view this as the calculation occurring with 1s and 0s, where every time a 1 is added to a 1 a single bit is carried, and the carry out of the top bit is dropped, which is exactly how full and half adders work in a processor!

This also works for multiplication! If we multiply -5 by 5, we get -25 easily. But the computer does 251 * 5 = 1255. The modulo math starts to get tricky for us puny humans, but for a computer it is trivial: 1255 % 256 = 231. As a signed integer that is equal to -25, since 231 = 256 - 25.

For division, everything just breaks down. 251/5 is very different from -5/5; no amount of mod arithmetic will save it. Unsigned 50 and signed -1 have completely different representations, and that is an example without even getting into fractions and rounding.
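The 8-bit examples above can be sketched in Python. Python's ints are unbounded, so the mod-256 wraparound is emulated here with a bitmask (the helper name is mine):

```python
MASK = 0xFF  # 2^8 - 1, i.e. "keep the low 8 bits"

def to_signed8(x):
    """Interpret an 8-bit pattern as a signed (two's complement) value."""
    return x - 256 if x >= 128 else x

# Addition: signed -5 is stored as 251; adding unsigned 100 wraps mod 256.
assert (251 + 100) & MASK == 95

# Multiplication: -5 * 5 = -25, and 251 * 5 = 1255 = 231 (mod 256),
# which is exactly the two's-complement bit pattern for -25.
assert (251 * 5) & MASK == 231
assert to_signed8(231) == -25

# Division breaks the pattern: 251 // 5 = 50, but -5 // 5 = -1 (stored as 255).
assert 251 // 5 == 50
assert (-5 // 5) & MASK == 255
```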

I may have been long-winded, but hopefully this helps someone!


Purely on the mathematical side, you can define division as the inverse of multiplication, and then when you are working modulo a prime number, division works just fine. (But computers usually don't implement it that way.)

As an example when working modulo 17 and you want to work out what division by 3 means. You want to solve equations like: x * 3 = 10 (mod 17). For this case, x = 9; because 9 * 3 = 27 = 10 (mod 17).

In general when working modulo 17, 6 is the multiplicative inverse of 3. Ie 6 * 3 = 1.

Thanks to Fermat's little theorem you can find multiplicative inverses very easily: y^(p-2) mod p gives you the inverse of any y not divisible by p.

(Most programming languages won't know that you are going to apply a mod afterwards, so they just give you eg rounded or truncated integer division. Which is completely different in general.)
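A quick Python sketch of the above; the three-argument form of the built-in pow does the modular exponentiation:

```python
# Division mod a prime p via Fermat's little theorem: y^(p-2) is y's inverse.
p = 17
inv3 = pow(3, p - 2, p)       # multiplicative inverse of 3 mod 17
assert inv3 == 6
assert (inv3 * 3) % p == 1

# Solve x * 3 = 10 (mod 17): multiply both sides by the inverse of 3.
x = (10 * inv3) % p
assert x == 9
assert (x * 3) % p == 10
```

(In Python 3.8+ you can also write pow(3, -1, 17) directly, which works for any modulus coprime to the base, not just primes.)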


Sure, but computer arithmetic is modulo some power of two, which is not a prime. So you don't get a proper field, and in particular no even number has an inverse.


Yes, that is true.

Technically you can get fields with p^n elements, too; but they are constructed in a more complicated way than just taking a modulo.


Yes. These sorts of fields are quite useful for crypto, so I believe there is even some hardware support for them in many CPUs, IIRC?


https://en.wikipedia.org/wiki/CLMUL_instruction_set might be the closest to what you meant?

I just did some Google Scholar searches, and found a lot about FPGAs etc, but not too much about stock CPUs. But that might reflect more on my search skills than on availability of material.

Some standard CPUs now have instructions to specifically run eg AES.

https://www.snia.org/sites/default/files/files2/files2/SDC20... was probably the most interesting of the bunch I found. It's about error correcting codes, not crypto. (Though interestingly, the holy trinity of error correcting codes, information theory and cryptography all flew once through the common conduit of Claude Shannon.)
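For a feel of what those instructions compute, here is a minimal Python sketch of carry-less multiplication, the bit-level analogue of the CLMUL operation (the function name is mine; real hardware does this on 64-bit operands in one instruction):

```python
def clmul(a, b):
    """Carry-less multiply: treat a and b as polynomials over GF(2),
    so addition is XOR and there are no carries between bit positions."""
    result = 0
    while b:
        if b & 1:
            result ^= a   # "add" (XOR in) a shifted copy of a
        a <<= 1
        b >>= 1
    return result

# In GF(2)[x]: (x + 1) * (x + 1) = x^2 + 1, i.e. 0b11 * 0b11 = 0b101.
assert clmul(0b11, 0b11) == 0b101
```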


thanks, hadn't really looked into it much before!


Huh? GF(2) is the smallest finite field, and 2 is a prime...? 0 is also excluded when we’re talking about multiplicative inverses.


Yes, I should have been more precise: "modulo some power of two, which is not a prime, unless we're talking about 2^1".

Now I'm not aware of any language where there is an integer type that has only two elements, unless you're talking about booleans - but booleans are so special that, while they're of course isomorphic to GF(2), we don't really use the same words (addition and multiplication) but different ones (exclusive disjunction (xor) and conjunction (and)).

So, in any real-life situation, your fixed-size integers won't form a field, because the ring of integers modulo n is only a field when n is prime, and so in particular, division by nonzero elements will be undefined in general.

And yes, of course, you can't divide by zero, but that's also true of the real numbers themselves, so no surprises there...

(I guess a better point would be that division is mathematically "broken" for integers anyway, since integers technically also don't form a field and, depending on the language, you may either get back a truncated result from division or will get a different type (ratio or floating point).)


> Yes, I should have been more precise: "modulo some power of two, which is not a prime, unless we're talking about 2^1".

If you want to be even more nit-picky, 2^0 might also work. But whether your definition of a field admits fields with only a single element is just a question of taste, that is even less important or deep than whether your flavour of natural numbers includes 0 or not.


I've never seen anybody allow a field with just a single element, usually it says "a field is a ring with 1!=0...". But yeah, I suppose you could allow it, it's just an incredibly boring ring in which you actually can't divide at all, because there is no nonzero element.


eh that was wrong, you can of course divide in the zero ring, because the single element is its own inverse.


Yes. Of course, still an incredibly boring field.


Has anyone yet demonstrated a field with a single element?


It's not a question of demonstration but of definition. I've only ever seen people define fields as requiring two distinct elements 0 and 1. However, every field is also a ring, and there can be, up to isomorphism, only a single ring with one element, because you don't really get any choice in how you define the operations. That's called the trivial ring or zero ring, and it satisfies basically all of the axioms for a field too, except for having two distinct elements. So if it were possible for a field with a single element to exist, it would have to be this one.

It is my understanding that some mathematicians are looking into some objects that are kind of "like" a field with one element, but not in the sense of classical algebra, see: https://en.m.wikipedia.org/wiki/Field_with_one_element

However, I haven't really looked into that.


While this is true on the machine level, it also really depends on your programming language.

Certain languages like Python or Ruby automatically convert a smaller integer type to a larger, unbounded one. Other languages like Swift (or Rust in debug mode, I believe) prefer to throw an error upon integer overflow instead of silently wrapping. I believe that the silent wrapping behaviour in e.g. C and Java is a decision that prioritises performance over safety, as this is behaviour that you rarely want, and if you do, there should be better ways of expressing it.
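As a small illustration of that difference: Python ints are arbitrary precision, so nothing wraps; the helper below (a made-up name) emulates what a wrapping 32-bit two's-complement int, as in C or Java, would produce.

```python
INT32_MAX = 2**31 - 1

def wrap_int32(x):
    """Emulate 32-bit two's-complement wrapping semantics."""
    x &= 0xFFFFFFFF                          # keep the low 32 bits
    return x - 2**32 if x >= 2**31 else x

assert INT32_MAX + 1 == 2147483648           # Python: just a bigger int
assert wrap_int32(INT32_MAX + 1) == -2**31   # C/Java-style silent wrap
```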


You're a cool bear! Thanks for the detailed explanation!


You probably do this all the time with time, without realising it.

If it’s 11 o’clock and we agree to meet in two hours, you instinctively know I’m talking about 1 o’clock. But what you’ve just done is add two numbers mod(ulus) 12.

In effect your brain added 11 + 2 to get 13, then took away 12 to make it fit, and came up with the remainder 1.

Modular arithmetic is therefore sometimes known as clock arithmetic.
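In code, the clock example is just a mod (the function name is made up for the sketch):

```python
def meet(hour, delta, modulus=12):
    """Add delta hours to hour on a clock face with the given modulus."""
    return (hour + delta) % modulus

assert meet(11, 2) == 1        # 11 o'clock + 2 hours = 1 o'clock
assert meet(22, 3, 24) == 1    # the 24-hour version: 22:00 + 3h = 01:00
```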


> In effect your brain added 11+2 to get 13, then took away 12 to make it fit

Not my German brain. But mod 24, I'm with you again :) And don't let me start with the Russians, they did it in hardware:

https://en.wikipedia.org/wiki/24-hour_analog_dial#/media/Fil...


You can think of it as applying a mod operator after each operation.

Say we want to work mod 5. Then 6 + 8 = 14 = 4 (mod 5).


Doesn't work for division. At least not in general in a naive fashion.


At first glance, this is actually a refreshing take on "teach math to coders".

There are 1-2 other books constantly recommended here, which are (probably) fantastic but throw you into the deep end with assumptions that you understand algebra, trigonometry, fundamental calculus, and statistics.

Essentially, that you finished school and paid attention. Which is a well-founded assumption. And I don't necessarily expect an author to assume responsibility for covering fundamentals, but it's nice if there are authors willing to do this.


That seems somewhat universal. I remember attending the first suggested university course on math and barely recognising what was written on the first page of the material. It turned out that the material was written well before the high school curriculum on math in my country was substantially dumbed down, and was not very accessible for recent high school graduates. Other universities provided more verbose study material, while mine was trying to make the material as concise as possible. For whatever reason, math seems to be the only field in which conciseness usually trumps readability.

I passed the course (Analysis I) with the minimum score by memorising previous exams and their answers, without understanding any of it, or ever needing the skills in my professional career.

Logic and algebra classes were a lot more approachable, enjoyable, and eventually useful. I am looking forward to learning something from this material, too.


Most math programs wouldn't throw students (unless they choose) into heavy-duty analysis without a proof-based prep course.


In many parts of Europe, they actually do: if you study maths or even something related like CS or physics, you'll basically see proofs from day one. The split between "calculus" (non-rigorous) and "analysis" (with proper proofs) seems to be mostly a US thing.


It would be cool if the authors offered a list of pre-reqs. Sometime I look at a math book and don't understand the first few chapters but I am willing to build up to it provided I know what to read first.


Coding and math books frequently list prerequisites by saying something like "only really trivial mathematics knowledge that any adult should be familiar with", and then you start working through the book and realize they meant any adult who took multiple years of calculus in high school and graduated with a BS in comp sci or mathematics.

As a high school dropout who only later developed an interest in learning, it's pretty frustrating. My BS is in Information Systems, so I've missed out on a lot of the math that seems to make learning computer science and programming easier. I'll have to give this resource a try. I'm a big fan of his reverse engineering material already.


I identify so hard with your comment! I'm proficient with a handful of programming languages, very comfortable with most sysadmin tasks on Unix-like OSes, and I'm almost always able to reverse engineer someone else's similar solution to meet my needs. However, I've never had any formal computer science education. Now, in my early 30s, I've found a successful career in a non-tech field but find it unfulfilling. I still love to program and I'd love to make a change, but my lack of a math and comp sci foundation gnaws at me.

I've tried to relearn calc a handful of times, but find myself frustrated when I have to go look up basic trig identities and order-of-operations stuff just to get through practice problems. I'd love to relearn math from scratch if anyone has any recommendations.


I’ve found https://betterexplained.com an excellent resource for developing an intuition for fundamental concepts in math.

For calculus itself, I’d recommend Thompson’s enlightening and witty 1910 textbook “Calculus Made Easy”. It’s available for free at:

https://www.gutenberg.org/files/33283/33283-pdf.pdf


This probably isn’t novel advice, but Khan Academy is great for learning the fundamentals.

I'm in a similar position. I've been rebuilding my understanding of mathematics using Khan Academy and 3Blue1Brown videos. It's going pretty well.


What are the 1-2 books that are constantly recommended here?


"A Programmer's Introduction to Mathematics", by Jeremy Kun is the one you'll see a lot

https://pimbook.org/

The other one is in web format, it's something along the lines of "Visual Linear Algebra". Really unsure of title, but it's centered around visualizations of the math.

On my laptop I would post the bookmark, but I'm in bed on my phone, sorry =/

Edit: Here

https://textbooks.math.gatech.edu/ila/

http://immersivemath.com/ila/index.html


there's also Intro to mathematical thinking: https://www.coursera.org/learn/mathematical-thinking/home/we...

and mathematics for machine learning: https://mml-book.github.io/


MCS - Mathematics for Computer Science - is one. From MIT, free. A google should find it.


https://courses.csail.mit.edu/6.042/spring18/mcs.pdf

There are also lectures by one of the authors (Tom Leighton) covering those topics.

https://www.youtube.com/playlist?list=PLB7540DEDD482705B


Thanks.


In my opinion, the only type of mathematics that is useful to ALL programmers is discrete math. This is applied math that barely resembles the kind of math that you learned in high school, but it’s easy to see how these thoughts lead to the creation of computers.

Other types of math are relevant depending on what you’re interested in, but I think students should learn discrete math before diving into writing any serious code.


I strongly disagree. Having taught mathematics to many computer science students, virtually any mathematics course in which students need to write a proof is invaluable, regardless of content. While I agree that discrete mathematics is more applicable than some other parts of mathematics, I think that the real value mathematics provides to programmers is the skill to think through and reason out complicated problems.

Teaching a mathematically literate student about a discrete mathematics concept when they need or want to know it is quick and easy. Forcing students to take subjects they may not find interesting is a sure way to set them up for failure.


> Having taught mathematics to many computer science students, virtually any mathematics course in which students need to write a proof is invaluable, regardless of content.

Discrete mathematics also makes it easier to introduce rigorous proof. Calculus courses are not really "proof-based"; real analysis is closer to the level of rigour you'd get in a discrete math course.


Rigorous proofs can be introduced in many courses. Personally, my first course making a large emphasis of proof was real and complex analysis (which was the calculus course), then linear algebra. I also took “discrete mathematics for computation”, a course specifically aimed towards computer science students, but that course was so chock-a-block full of content that there was no time for proofs.

I find discrete mathematics proofs very doable and useful, but pretty boring compared to analysis and algebra, and I very much enjoyed practicing mathematics skills in the latter rather than the former. (This is coming from someone who has published papers in combinatorics).


There's a decent series of "Math for programmers"[1] on YouTube from FreeCodeCamp that covers the basics of discrete math. I've been recommending it to people who want to learn to code for a little while and it seems to go down well.

[1] https://www.youtube.com/playlist?list=PLWKjhJtqVAbndUuYBE5sV...


Category theory is also very relevant to programming. But, curiously enough, learning and practicing programming is probably a better intro to eventually get to category theory, than the more usual approaches mathematicians take.

Discrete math is lots of fun, but less applicable to the kind of glue code many people write most often.

(Category theory is in some sense exactly the theory of glue code. But it's also generalised abstract nonsense.)


I don't understand how learning the definition of a limit, colimit, adjunction, pushforwards, pullbacks, equalisers, co-equalisers and the Yoneda lemma, can help you write code. But I'm willing to be enlightened.


Some starting points: tuples are categorical products (an example of limits), sum types are categorical coproducts (an example of colimits), effects are often monadic and monads arise from adjunctions, the Yoneda lemma tells you that if you have "f a" and "a -> b" and you know nothing more about the "a" then all you can do is fmap the function over the functorial value.

Sure, knowing this doesn't "help you write code" in the same way that knowing "you press the keys on the keyboard to get text to appear on the screen" but I've found this way of thinking to be beneficial to some degree when programming.

But I think that eru is trying to emphasise that one should learn how to program first and then use category theory as a tool to organise your understanding.


Do you think category theory is more useful than type theory or logic?

I’ve personally gotten a lot more out of treating tuples as product types or logical AND, and disjoint unions as sum types or logical OR, as opposed to categorical limits.

(Basically, what’s the extra value added from Curry-Howard-Lambek, as opposed to just Curry-Howard, in terms of programming?)

(I’m not trying to be argumentative, I’m genuinely curious.)


I would put category theory very far down the list of things a programmer should learn. Once a programmer has learned a lot of other things about programming, category theory can act as a good organisational and explanatory tool.


Yes, that was what I was going for.

For what it's worth, https://bartoszmilewski.com/2015/09/01/the-yoneda-lemma/ does a pretty good job of explaining the Yoneda lemma in a programming context. Look for the section "Yoneda in Haskell".

As I also tried to say earlier, in practice I suggest learning programming first, and then using the likes of category theory to organise your understanding.


> Category theory is in some sense exactly the theory of glue code.

If you can explain this to the satisfaction of my understanding, I would like to subscribe to your newsletter.


First a qualification: Category Theory doesn't teach you how to write glue code, but it can be seen as a theory to organizing some of our understanding of glue code.

One simple example would just be a discussion around the different use cases of Functor, Applicative Functor, Arrows and Monads. And how they relate to common programming constructs like functions, tuples, arrays, Maybes, futures, whatever's happening in react, etc.

Category theory won't help you learn those concepts, but once you know them, even a superficial understanding of some category theory can help you organise your thoughts.

As a genuine application of category theory, whenever you find a useful abstraction (like eg functors or monads again), you can have a look at its dual, and see whether that's another interesting structure you haven't seen yet.

Just like tuples and Either are duals of each other.

(As an analogy, dualizing is also a concept in linear programming.

Linear programming can help solve many interesting algorithmic problems, like sorting or matching or assignment or median selection, basically almost anything that's in P; and if you allow integer linear programming, then you can solve what's in NP.

However, I wouldn't recommend anyone who wants to learn about algorithms and data structures to start with linear programming.

But once you know a bunch of algorithms, you can re-express the problems they are solving in terms of linear programming, and see whether you get anything interesting out of looking at their duals.)


I disagree, it depends on the kind of programming you do. Build fancy typeclasses to do black magic in Haskell? Then _perhaps_. Write kernel modules for custom functionality in C? Then learning ct is like a fish learning to climb a tree.


Yes. Though CT can help you with systems programming as well. But you wouldn't want to use C nor Haskell.


Category theory is mostly useful for modeling computation theoretically and thus designing languages.

It's also not that helpful to learn unless you already know a lot of deeper math, like introductory algebraic topology, etc.


I suspect the relation with algebraic topology is mostly an artifact of history.


That is true, but it still provides tons of examples of functors such as fundamental group and homology.

These sorts of interesting functors are not common outside of graduate math.


Yes. I think programming provides a lot of interesting examples of eg functors and other such structures.

Enough to build your intuition at least, and then start learning category theory from there. (And then try to use that knowledge to kick off some further investigation into the other areas of math that category theory historically comes from, if you are so interested.)

I think it's a bit like learning Spanish first and then Latin later. Vs learning Latin first and then some romance languages.


At least until some years ago, queueing theory was considered fundamental for CS, and certainly statistics, probability, and spatial geometry are very important for CS, apart from discrete math.

Also, an entirely different kind of math is often used in CS: "numerical methods", as well as fundamental arithmetic in the digital domain, which is also usually treated separately from discrete math.


> In my opinion, the only type of mathematics that is useful to ALL programmers is discrete math.

If we were to think there exists a subset of programmers who don't find Discrete Math useful, what would those programmers look like?


Web developers, or at least the folks who just do A->B frontend stuff. Without being elitist, I've done some of it and the only stress I felt mentally was due to the baggage of learning the frameworks and things (I might have used an integer or two but nothing more)


More generally (and less insultingly!), any programmers who mostly work with frameworks, rather than writing solid chunks of their own code.

I had a job building e-commerce websites on top of a framework. Our code was a thin layer invoking and customising the framework. It was difficult work that we charged a lot for, because it required extensive knowledge of the framework (which was badly designed and poorly documented). But it never involved thinking really hard about programming itself.

I would imagine most Rails programmers are in a similar place.


My impression after a quick once-over is that there's an order of magnitude too little linear algebra in this. Can't have 3D video games without it, can't do large parts of machine learning and statistics without it, etc etc.


It says it's an early draft, and the lack of a coherent path in the Linear Algebra section (currently final section) plus only two sub-chapters makes me think it's under work atm.

I also have not a clue what I'm talking about with math (sub-highschool understanding) so I could be very mistaken about the path thing.


Came here to say this. I think he is focusing on the math that "typical" coders need without any emphasis on what ML practitioners need. Nevertheless, I agree more linear algebra should be included.


I think that a course like this should really prepare you to be able to go and look things up yourself i.e. you only need enough linear algebra to recognise what it looks like, then you can go and read one of many excellent linear algebra books.

If you want to do video games you'll need to read something else anyway, so better to get into the habit now.


These kinds of things pop up a lot on HN, so I want to ask: are they helpful to people? I save basically every single one, planning to come back to them (I studied math and economics in college, and do a little programming now but not enough to call myself a developer), but can’t find time between work and everything else to dive in and actually advance my skills.

So maybe what I’m asking for is this: is there a “refresh your math, sharpen your sword” resource that exists that one could tune to what they do today?


If you studied math, or if you have any kind of respect for the mere concept of rigorous proofs, then these kinds of resources will leave you deeply dissatisfied.

If you want to refresh your math, take an elementary result from any particular discipline, e.g. the existence and multiplicity of eigenvalues in linear algebra, or the uncomputability of the halting problem, and see if you can prove it yourself from what you still recall. You will have to look up definitions and theorems along the way, refreshing your skills.


Dennis Yurichev also wrote an awesome free book about creating SMT/SAT code to solve real-world (and less real-world) problems.

[0] https://sat-smt.codes/


That's where I recognised the name from then.

Very illuminating book. Turning coffee into a SAT solver is found in most CS books; using SAT to buy more coffee, less so.


I'm in the middle of speed-running Khan Academy's entire math section. This'll complement my task well. Thanks for sharing.


Unlike the other comments so far, I have a link that is in my opinion NOT generally useful for most programmers. I figured it is worth a share for the programmer who knows all of the math in this post and wants to learn yet more:

https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

Other people can debate category theory's utility: I have no expert insight of my own to lend.


Does anyone have any opinion on this book as an introduction to category theory compared to something like Category Theory by Steve Awodey (https://books.google.com/books/about/Category_Theory.html?id...) or Seven Sketches in Compositionality (http://math.mit.edu/~dspivak/teaching/sp18/)?

I'm interested in learning about category theory in general just to get a feel for the subject, but I also want to have a better concrete understanding of the category theory that is put to use in Haskell.


Category Theory by Steve Awodey is great, but in my opinion only for a graduate mathematician wanting to do more mathematics and understand category theory in an abstract sense. Preferably one will have done some algebraic topology when reading it.

Category Theory for Programmers is just that, for programmers. In fact, if I recall correctly its preface explicitly answers the question 'Why write this when Category Theory by Steve Awodey exists?', with the answer being what I summarise above. If you're a computer scientist, programmer, or otherwise a non-mathematician first, Awodey is not for you and this book I believe should serve you very well instead. I haven't read it, mind.

I haven't come across Seven Sketches in Compositionality, but hopefully that helps compare the other two. It's mostly about your intention (e.g. theoretical mathematics vs practical application) and background.


Seven Sketches is a nice introductory read, giving examples of how CT may be used model practical problems and introducing concepts along the way.

"Programming with Categories" [0] is probably the best resource to start, the course notes are great and the videos are easy to follow.

Also yes, Awodey is much more in-depth and technical.

[0] http://brendanfong.com/programmingcats.html



Also, there was a course earlier this year by the same author that teaches an intro to category theory but doesn't assume a previous math background.

http://brendanfong.com/programmingcats.html


I would also recommend Concrete Mathematics by Donald Knuth, Oren Patashnik, and Ronald Graham as another refreshing approach.


I really like Concrete Mathematics, but I fear it's a bit too heavy as an intro for most programmers.

(Whoever can stomach it, should give the book a try. It's awesome and fun!)


I have "A Programmer's Introduction to Mathematics" (2018) by Jeremy Kun [0] and can recommend it. HN thread for this book: [1].

The OP looks good as well, a bit more applied it seems.

[0] https://pimbook.org/

[1] https://news.ycombinator.com/item?id=18579076


Nice.

What I want is their latex/pdf setup. I've messed around with it enough to know I'm no good at it and this seems like it would be a great base.


Oh yeah, would love to have a go at it as well. Yet, it is an early draft, so they may have some things planned. In my experience, books, presentations, really everything that tries to convey information, work better when they look "good". And "good" is a matter of opinion, but some general rules do apply. I think the writer has chosen not to bother with styling all too much until the content is set in stone, which is smart.


I bet you could write the author. A brief, polite note will almost always get a reply.


Also interested, I would love to be able to use it.


See also Category Theory for Programmers[1] book.

[1] https://github.com/hmemcpy/milewski-ctfp-pdf


This looks great. I'm looking forward to reading future revisions. I hope you'll post it here when a final draft is available. Keep up the good work. Cheers.


Also see Matters Computational (the fxtbook): https://www.jjj.de/fxt/


Jörg Arndt (jj) also wrote an awesome book about π.

https://www.maa.org/press/maa-reviews/pi-unleashed


I jumped straight to the chapter on graphs and I found it quite weak. It basically documented the author's attempts to query a few datasets for cliques, using Mathematica. Who is the target audience for this content?


The author says on the first page that it is an early draft, and I think he still has a long way to go, the book being less than 100 pages.


Really nice, thank you; I'm keeping it.

One section that may be added would be about interpolations. I think the formula I implemented the most is a * p + b * (1.0-p), i.e. a simple linear interpolation, which is also an extrapolation when p is not in [0..1]. Obviously useful in graphics and animation but also every time you want to give the user a non-binary choice.

A lot can be said on that topic: bilinear, cubic, smoothstep, etc...
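For illustration, here is a minimal Python sketch of the interpolation the comment describes, using the commenter's exact formula, plus a smoothstep easing function (the function names are my own, not from the book):

```python
def lerp(a, b, p):
    """The commenter's formula: returns b when p == 0, a when p == 1.
    Values of p outside [0, 1] extrapolate instead of interpolating."""
    return a * p + b * (1.0 - p)

def smoothstep(p):
    """Cubic easing: 0 at p <= 0, 1 at p >= 1, with zero slope at both ends."""
    p = max(0.0, min(1.0, p))  # clamp to [0, 1]
    return p * p * (3.0 - 2.0 * p)

print(lerp(20.0, 10.0, 0.5))               # midpoint: 15.0
print(lerp(20.0, 10.0, 1.5))               # extrapolation beyond a: 25.0
print(lerp(20.0, 10.0, smoothstep(0.25)))  # eased interpolation
```

Feeding the parameter through smoothstep before lerping is the usual trick for getting animation that accelerates and decelerates instead of moving linearly.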


Unlike almost all the other books recommended in this thread, this one jumps straight to the point without a lot of text. From the very start it makes a point and continues without delving into too much intro, background, or other plain (useless) text.

I find Effective Java an easy read for the exact same reason, it makes a point from the very start without telling so many stories.


I think it would be awesome if OP open-sourced this. That way we could make suggestions via pull request.



As a frontend developer I also find I need geometry from time to time. I used it a lot in my days as a Flash developer, drawing vector graphics programmatically.

I feel that it's missing from this list. I rarely see geometry mentioned on these kinds of lists?


You can do a lot of geometry with linear algebra. (And that's more or less what you'll have to do on a computer anyway. Ruler and compass approaches won't cut it.)

I quite like geometry for itself, but it's probably more of a niche for most programmers?
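To illustrate the point about doing geometry with linear algebra on a computer, a small Python sketch rotating a 2D point with a rotation matrix (plain tuples, no libraries; this is just a generic example, not from the book):

```python
import math

def rotate(point, angle):
    """Rotate a 2D point about the origin by angle (in radians),
    i.e. multiply by the matrix [[cos, -sin], [sin, cos]]."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees gives (0, 1), up to float rounding.
x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 9), round(y, 9))  # → 0.0 1.0
```

The same pattern (points as vectors, transformations as matrices) covers scaling, shearing, and, with homogeneous coordinates, translation and perspective as well.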


Should add quadratic reciprocity to the modular arithmetic section.

https://www.youtube.com/watch?v=X63MWZIN3gM
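For anyone who wants to play with this, here is a hedged Python sketch of the Legendre symbol via Euler's criterion, which is the quantity quadratic reciprocity relates; this is a standard construction, not something from the linked video or the book:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 if a is a quadratic residue mod p,
    p - 1 (i.e. -1) if it is a non-residue, and 0 if p divides a."""
    r = pow(a, (p - 1) // 2, p)  # three-argument pow: fast modular exponentiation
    return -1 if r == p - 1 else r

# Quadratic reciprocity for distinct odd primes p, q:
# (p/q) * (q/p) == (-1)^(((p-1)/2) * ((q-1)/2))
p, q = 13, 17
lhs = legendre(p, q) * legendre(q, p)
rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
print(lhs == rhs)  # → True
```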


In the IRC network social graph section, it was interesting to see my nick pop up in there along with a lot of regulars from when I was active on #ubuntu.


My favourite: https://pimbook.org/ by Jeremy Kun. It's a great read!


It has way too much text compared with the shared book, which jumps straight to the point.


Oh wow, this led me to the author's book on reverse engineering and assembly, and it is fierce. Damn!


Great


You lost me at: "Let's find a first 5 prime numbers, each number for each character". "A first 5 prime numbers": I would need to leave your text to figure out what that is.


It’s “the first 5 prime numbers”; native speakers of Eastern European languages often get “a” and “the” swapped around or left out completely in English.

What prime numbers are was explained earlier in the text, and what "first 5" means was probably assumed to be clear enough?
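For what it's worth, the example being discussed (the first 5 prime numbers, one per character) is easy to reproduce; a quick Python sketch by trial division, not taken from the book:

```python
def first_primes(n):
    """Return the first n prime numbers by trial division against
    the primes found so far."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(5))  # → [2, 3, 5, 7, 11]
```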



