Programmers should know math.. just not all of it (giorgiosironi.blogspot.com)
33 points by fogus on Oct 16, 2009 | 39 comments


Why is single-variable calculus on his list of things that every programmer should know? People learn it early because of how important it is to the hard sciences, but it isn't particularly important to computer science. I would definitely move it down to the list of specific applications that you can learn if you need it.

I would also be inclined to have basic probability theory join combinatorics, for the simple reason that the two subjects are so closely related that you tend to learn them together. However, statistics should remain a specific application of mathematics.
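For a taste of how tightly the two mesh, the classic birthday problem is just a combinatorial count dressed up as a probability. A quick sketch (Python; the example is mine, not the article's):

    # Birthday problem: P(some shared birthday among n people), computed
    # from the combinatorial count of the all-distinct case:
    # days!/(days-n)! ordered distinct assignments out of days^n total.
    def p_shared_birthday(n, days=365):
        p_distinct = 1.0
        for i in range(n):
            p_distinct *= (days - i) / days
        return 1 - p_distinct

    print(p_shared_birthday(23))  # ~0.507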

Otherwise a good list.


If you ever plan on doing anything with machine learning, you will need statistics and basic calculus. Well, you do if you want to progress in understanding beyond the "use someone else's library and hope it gives me a magic answer" phase.

Besides, calculus is one of the crown jewels of mankind's intellectual achievement. Not knowing any calculus ought to be considered on par with not knowing anything about the theory of evolution, or that the Earth orbits the Sun.


Yes, there is no doubt that calculus is very useful and a great achievement. Yes, I think it is wonderful for people to learn calculus. Yes, I have learned calculus; in fact, I've even taught the subject.

However, I don't think that calculus is applicable to most programmers' work. For many programmers it is important, and that is why I suggested it be moved to the application-specific list. But more programmers will get more value from, say, basic combinatorics than from calculus.


Calculus is very useful in CS. Many CS papers use it, for example to explain algorithms that have a discrete implementation (some algorithms for finding edges in images, say), or for probability.


I think the OP might be thinking along the lines of, "how many working programmers need to know or use calculus in their day jobs?" Sure, CS researchers working in physical domains like robotics or computer vision need to use calculus, but the millions of people writing code to shuffle data back and forth between sources (which is what lots of programming jobs are) don't need to know a single bit of calculus.


That is exactly what I was thinking. I do not deny that there are many specific areas where calculus is useful in programming. However, most working programmers won't encounter them.


But what if they encounter them?

"I could theoretically develop a solution within a day, but I have no idea what you are talking about!"

I hope such programmers make sure that they get to work on the next Twitter and the like, and don't have to implement solutions for financial controllers.


Well, there is definitely a strong distinction between "programmer" (a trade) and "computer scientist".

I'd argue that beyond arithmetic and maybe a basic understanding of functions (e.g. f(x)), most "programmers" need to know very little math. But they also tend to produce shoddy, inefficient code and look at problems as "moving bits around".

Computer Scientists, on the other hand, need to have spent some time understanding the theory of computation. Calculus is one way of computing things; so are the lambda calculus, Turing machines, formal grammars, and various algebras like regular languages. Slinging code is just another way of computing things to a Computer Scientist.

Given this, most typical programming trade jobs are approached by Computer Scientists as a computation problem (or at least an application of a computation theory) rather than as just hauling electrons about. This makes their code output both qualitatively and quantitatively different.


Graph theory is conspicuously absent from the list. Is that considered computer science these days, or does the author not think it is particularly important for programmers?


I'd throw in some order theory too. Posets, lattices, Boolean algebras, etc. all crop up pretty often, if you know how to recognise them.

In fact, I think that when people recommend graph theory for computer science, they're often thinking more of order theory (trees, DAGs, posets, etc.), or failing that, of the "basic algorithms over graphs" material, rather than the "let's prove a bunch of clever theorems about k-colourings" kind of graph theory you might get if you bought a book on it.
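By "basic algorithms over graphs" I mean things like topological sort, which is about as far as most working programmers need to go (a Python sketch, my own example):

    # Kahn's algorithm: topological sort of a DAG given as
    # {node: [successors]}. Raises ValueError if the graph has a cycle.
    from collections import deque

    def topo_sort(graph):
        indegree = {u: 0 for u in graph}
        for u in graph:
            for v in graph[u]:
                indegree[v] = indegree.get(v, 0) + 1
        queue = deque(u for u, d in indegree.items() if d == 0)
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in graph.get(u, ()):
                indegree[v] -= 1
                if indegree[v] == 0:
                    queue.append(v)
        if len(order) < len(indegree):
            raise ValueError("not a DAG")
        return order

    # e.g. build dependencies: a before b and c, both before d
    print(topo_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}))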


Boolean algebra is conspicuously missing from the Basic List, probably because the parts of Boolean algebra that are most useful are the first things any programmer learns (although there are plenty of people making a living programming who can't even do basic bit twiddling). Babbage conceptually invented a universal computer a couple of decades before Boole published his major opus.
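By basic bit twiddling I mean things like these, which fall straight out of Boolean algebra (Python, my own examples):

    def is_power_of_two(n):
        # n & (n - 1) clears the lowest set bit, so the result is 0
        # exactly when n has a single bit set.
        return n > 0 and n & (n - 1) == 0

    def popcount(n):
        # Count set bits by repeatedly clearing the lowest one.
        count = 0
        while n:
            n &= n - 1
            count += 1
        return count

    print(is_power_of_two(64), popcount(0b1011))  # True 3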


The Graph Theory which is actually useful has been absorbed into Computer Science, and long abandoned by Mathematics.


I strongly disagree (as a post-grad working on Graph Theory). Mathematics has never abandoned Graph Theory and no Graph Theoretician I've met would ever consider Graph Theory to be a part of Computer Science.

I take issue with the "actually useful" part. Had we been early 20th century citizens, you might have said that about number theory, but where would modern cryptography be without it?


Well, I speak from my own experience; other universities may divide things differently. But here it seems the Math folk are concerned with graphs for the elegant connections they have to abstract algebra, whereas using graphs to solve logistics problems is more the domain of Computer Science. If you have some interesting examples of practical graph theory done by the math department, I would love to hear them.

And to clarify, there is nothing wrong with non-useful Math.


And to clarify, there is nothing wrong with non-useful Math.

You mean: there is nothing wrong with not-yet-useful Math. Math is always useful for finding Truth, and only sometimes useful when scientists find phenomena which can be described by it.


That's wrong.


A lot of modern graph theory is connected to Ramsey theory, which is not particularly practical at this time, nor do I anticipate it becoming so within, say, a decade. There's a lot of important stuff on the subject, but because most nontrivial problems on graphs are NP-complete, it's all heuristics.
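Graph colouring is the textbook case: an optimal colouring is NP-complete to find, but the greedy heuristic is a few lines and is what you'd actually ship (a Python sketch, my own example):

    # Greedy colouring: give each vertex the smallest colour not used by
    # an already-coloured neighbour. No optimality guarantee, but it
    # never uses more than max_degree + 1 colours.
    def greedy_colouring(graph):  # graph: {vertex: set of neighbours}
        colour = {}
        for v in graph:
            taken = {colour[u] for u in graph[v] if u in colour}
            c = 0
            while c in taken:
                c += 1
            colour[v] = c
        return colour

    # A 4-cycle is 2-colourable, and here the heuristic finds that:
    print(greedy_colouring({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))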


We're running into this right now. Our development lead is struggling with some of the finer points of an application of graph theory and a lack of formal training in both the theory and the algorithmic complexity of working with graphs is biting us in the ass.


It's a basic thing that is usually part of a CS curriculum.


You'd be surprised at how many places don't dig into graph theory at all as part of their CS coursework.


I would say that statistics and probability theory are probably the most important things to learn for anyone who analyses data, yet most programmers are clueless about them, and statistics isn't that popular in CS programs (at least not the one I took or the ones I've heard of).


If you come anywhere near machine learning, statistics becomes extremely important. Statistics, calculus and linear algebra are definitely prerequisites for even a rudimentary understanding of the algorithms. You are dealing with inherently probabilistic problems, and optimizing functions over high-dimensional spaces. It's pretty much all math, actually.
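A minimal sketch of what "optimizing a function with calculus" looks like in code; the one-dimensional quadratic here is just my toy example, not any real model:

    # Gradient descent on f(w) = (w - 3)^2. The derivative
    # f'(w) = 2(w - 3) points uphill, so we step against it.
    def minimize(grad, w=0.0, lr=0.1, steps=100):
        for _ in range(steps):
            w -= lr * grad(w)
        return w

    print(minimize(lambda w: 2 * (w - 3)))  # ~3.0

The same idea, with gradients over millions of weights instead of one, is what trains most machine learning models.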


As someone currently studying math in a masters program, I am a huge fan of mathematics itself and of programmers learning at least the basics.

With that said, it is not necessary for a working programmer to know large amounts of math. I have known more than one programmer with a successful career who never studied any math beyond the required introduction to calculus, and who had long since forgotten most of that.

In short, I think the study of mathematics is very helpful to a programmer, both in the general sense of improving their overall thought processes and in the specific sense of helping with certain types of problems, but it is not necessary for most.


>I think the study of mathematics is very helpful to a programmer, both in the general sense of improving their overall thought processes and in the specific sense of helping with certain types of problems, but it is not necessary for most.

This is true. Most of the very best developers I've ever worked with were Math majors. The second best were Physics majors.

They had long since forgotten most of their formal training, but they knew how to think about problems in the right way.


How about some Category Theory? That's used in the theory behind type systems, if memory serves me. On the other hand, it's a field that even mathematicians refer to as "general abstract nonsense"...


When it comes to first-order logic: you don't actually need it to reason about conditionals.

That's just Propositional Logic (or Boolean Algebra, if you want to put it in a slightly more abstract setting, which wouldn't hurt since you can then apply it to other related things like set union/intersection/complement).
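For instance, De Morgan's laws let you rewrite a conditional mechanically (a Python sketch, my own example):

    # De Morgan: not (A or B) == (not A) and (not B), so these two
    # validity checks are equivalent:
    def valid_v1(user, password):
        return not (user is None or password == "")

    def valid_v2(user, password):
        return user is not None and password != ""

    for args in [("bob", "x"), (None, "x"), ("bob", "")]:
        assert valid_v1(*args) == valid_v2(*args)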

First-order logic is really useful for Relational Algebra though. Which you should know if you do anything with relational databases.
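e.g. a query reads almost directly as a first-order formula; here's a Python sketch with made-up toy data (my example, not tied to any particular database):

    # "Names of employees in a department located in Boston":
    # { e.name | exists d: e.dept = d.id and d.city = 'Boston' }
    employees = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2}]
    depts = [{"id": 1, "city": "Boston"}, {"id": 2, "city": "Austin"}]

    result = {e["name"] for e in employees
              for d in depts
              if e["dept"] == d["id"] and d["city"] == "Boston"}
    print(result)  # {'ann'}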


Really? Why do you need relational algebra to do anything with relational databases? It actually seems to me to be wholly irrelevant if you're just using one.


You can even make one without knowing about FOL.


I'm starting to realize that calculus (or rather analysis) is pretty important for UI design. Programmers think in terms of discrete things, but users think in terms of a continuous world (no sudden jumps between 'things'). However, user interfaces often consist of discrete elements. Yet since users think in terms of continuity, they create a mental model that fills in the apparent gaps. So I'm beginning to appreciate that good UIs are the ones where arbitrarily small actions have similarly small effects (e.g. the scroll bar and certain animations), which allows for a smooth learning curve (I'm using the term 'smooth' in the same sense that we use it in calculus).
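(Concretely, this is why animation code prefers a smooth easing curve over a linear jump; a Python sketch, my own example:)

    # "Smoothstep" easing: maps t in [0, 1] to [0, 1] with zero velocity
    # at both ends, so motion starts and stops without a jolt.
    def smoothstep(t):
        t = max(0.0, min(1.0, t))
        return t * t * (3 - 2 * t)

    # position at each frame of a 10-frame scroll from x=0 to x=100
    print([round(100 * smoothstep(i / 9), 1) for i in range(10)])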

I think one of the reasons that OSX and other Apple products have a reputation of being so approachable is that they give the illusion of a continuous mapping between user actions and results.

(I just re-watched the video on seam-carving and even that seems like one of those obvious ideas once you have a good grasp of analysis.)


The writer is not talking about the average programmer, I guess.


Then you start looking at foundational mathematics. So you learn ZFC set theory, but wait, the axioms of ZFC are stated in FOL. Right, so FOL is a foundation that comes before ZFC, but wait, sets are used in FOL... Back and forth like this for a while, and then you're tired of looking for where the beginning is.


I'm not sure what this has to do with the original post, but I'll go with it...

You also have category theory, proof theory, model theory, recursion theory, type theory... They all fail as "foundations" because they all admit paradoxes. In some cases these can effectively be ignored because they don't play into anything (ZFC, for example), and in others they can be ignored because we can work around them (e.g. category theory).

In fact, few working mathematicians actually care about mathematical "foundations" because there is no way to know if they are "correct".


In what way is ZFC paradoxical? One could certainly accuse it of being arbitrary (given the independence proofs) and incomplete (given Gödel's 1931), but I wasn't aware that one could derive a contradiction in ZFC. Russell's paradox, for example, is not a paradox in ZFC but a proof that there is no set of all sets.


Good catch. I'm using "paradox" very loosely here. What I was trying to say is that we ignore the shortcomings of ZFC: undecidable statements, unnecessarily strong axioms added (regularity and replacement). In some ways, avoiding Russell's paradox has made ZFC a weaker theory.


Obviously paradoxes don't have to be actual contradictions, merely results which fail to conform to our intuitions (e.g. the Banach-Tarski paradox), but in my experience when the term is used in the context of the foundations of mathematics, it does mean that a contradiction is derivable within a foundational theory (the class paradoxes being the obvious case in point). This is not a complaint, I'm just explaining why I took your remark in slightly the wrong way.

Working mathematicians ignore the shortcomings of ZFC because foundational issues just aren't what they concern themselves with day-to-day; I'm sure you're aware of the remark that mathematicians are platonists during the week and formalists at the weekend. It's generally left to logicians and philosophers of mathematics (two tribes which, while not coextensional, have a rather large intersection) to worry about these things. Certainly according to structuralists like Shapiro, this is not actually a problem. [1]

[1] Shapiro 1997, http://bit.ly/3om0CO


Well, that's not entirely true. There has been a vast effort to build a foundation for mathematics with the various set theories. Naive set theory is usually used as a language for creating the mathematical primitives most working mathematicians deal with.


I agree, there was a vast effort, but that was decades ago. As I said, most working mathematicians don't concern themselves with these issues and just stick with ZFC.


I am offended by this post, mostly because math and I generally don't get along. I think very procedurally, and the math-y, abstract way of thinking about things seems almost at odds with that. Obviously, abstraction exists in programming, especially if you're using a functional language... which is probably why I feel more at home with OO languages ^-^


Procedural programming is still mathematical.



