I think this is a pretty silly post, to be honest. CS covers so much, and every time I see a list of "things you should know", I have to resist the urge to roll my eyes and ignore it. But then I read it, and inevitably roll my eyes anyway.
An unfortunate side-effect is that many people who could have become highly productive mid-level programmers are scared off because they don't know advanced math.
Yes, studying math will teach you this, because - as mentioned - math is a subset of logic. But studying math is far from the only way to gain these abilities, or to obtain a deeper understanding of CS problems.
This opinion alone will get (and has got) me branded as a heretic amongst many people, and that alone proves how deeply we are stuck in this tar-pit. If we want to get more people to be interested in and eventually deeply understand CS, we need to get rid of the math barrier. Math is a tool used by computer scientists, and not the fundamental basis of our field. Limiting yourself to a single tool is bad, and you don't need abstract algebra or calculus to understand this.
Caution is advised when interpreting opposition to a position as justification for the position. This kind of tenuous logic is frequently used to justify continued oppression in e.g. obscure religious communities ("They all hate us, so we must be right!"). I'm not saying your position is invalid (nor as a non-mathematician am I qualified to do so), just that it could be better supported.
What I've been trying to convey is that I've put forward what I think is a sensible position, and often got responses that ranged from fervent opposition to (very occasionally) verbally violent backlash. Maybe it was just me stating my point incorrectly, though.
I think of points in an n-dimensional space as objects holding n different types of information. It's a practical approach, and it has lit a candle for some of the other people I've explained it to this way as well.
For example, images are often stored as 4-dimensional structures, with the dimensions representing height, width, depth and color value. Every point in that structure is a pixel of the image. You can easily expand this to any number of dimensions, and it still makes sense - you're just adding information (e.g., add a fifth dimension for time, and you've got yourself a sequence of images - a simple video).
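To make that concrete, here's a tiny sketch (the shapes and names are mine, purely illustrative, and it assumes NumPy) where each array axis holds one type of information about a point, and adding an axis for time turns the image into a video:

    import numpy as np

    # One axis per "type of information": row, column, layer, colour channel.
    height, width, depth = 4, 6, 2
    image = np.zeros((height, width, depth, 3))

    # A single point (pixel) is addressed by giving one value per dimension.
    image[1, 3, 0] = [255, 0, 0]   # paint one pixel of the first layer red

    # Add a fifth dimension - time - and the same structure is a short video.
    frames = 8
    video = np.zeros((frames, height, width, depth, 3))
    video[0] = image               # frame 0 of the "video" is our image

    print(image.shape, video.shape)   # (4, 6, 2, 3) (8, 4, 6, 2, 3)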
My beef with the way math is often approached is that on the one hand, it's asserted that math is the universal solution to all problems - including practical ones - but on the other hand, more often than not math is removed so far from reality that it's nearly impossible to make any practical use of it.
Mathematically, an n-dimensional space is R^n, that is, the Cartesian product of the real numbers (R) with itself n times. Each copy of R lets you specify a number independently of all the others.
Mathematics is just careful abstraction. If you are abstracting away certain aspects of your problem and carefully making deductions about what is left, you are doing mathematics. As you point out, this can be taken too far but any tool can be used poorly in the hands of a novice.
See, that's exactly what I'm talking about. This gibberish makes sense to no one except a mathematician. The explanation I've given - while maybe the same in essence - conveys the meaning in a comprehensible way that's not completely removed from reality.
My point is that mathematicians often neither realize that they are talking gibberish that makes no sense to anyone but themselves, nor accept that math without proper context is devoid of any meaning (outside of math, that is).
NOTE: I would like to clear up that I'm not attacking math itself. I neither think math for math's sake is bad, nor do I think math is useless. I respect math as a self-contained field. The problems I'm pointing out are all happening where there is application within other fields - such as CS.
An isolated concept is worthless. It only becomes valuable when you apply it by reasoning with it. The problem with intuitive explanations is that they don't nail down enough details to allow a person to reason with them.
"I think of points in an n-dimensional space as objects holding n different types of information"
Imagine walking into a room and drawing a straight line on the floor. What is the dimension of that line?
Answer = One dimensional.
Proof: We can describe each point by one type of information. Point = (Distance of that point from the start of the line.)
Answer = Two dimensional.
Proof. We can describe each point by two types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall.)
Answer = Three dimensional.
Proof: We can describe each point by three types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall, Distance of that point from the roof.)
Answer = Four dimensional....
So by the intuitive explanation we can make this single line any dimension that we want.
This isn't just a problem within mathematics. For a simple programming example:
Question: How does a computer program work?
Intuitive Answer: You give the computer a list of instructions for it to carry out.
Result: The guy opens up notepad and types in "Make a computer game where I walk around shooting Zombies."
And if this makes sense in the given context - sure, why not?
You seem to be missing my point. What I'm saying is that it's useless to have totally generalized abstractions (outside of pure math) since more often than not, they are so far removed from reality that most people can no longer make any connection to use cases.
My entire argument is that there's a heavy communication failure between mathematicians and scientists of every other field where math is used as a tool. Sure, it's convenient for a mathematician to be able to use shorthand gibberish to talk to other mathematicians. It doesn't justify pushing this jargon on other fields.
Besides which, convenience is no excuse for making something hard to understand. Sure, it's convenient to name all your variables in a program a, b, c, etc., but you're going to get lynched by any programmer who tries to read your code later, including yourself.
When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".
>This isn't just a problem within mathematics. For a simple programming example:[...]
I believe in giving simple explanations and expanding them whenever there's a loophole that needs to be fixed. You don't make programs that cover every single niche use case, either (if you don't have to, that is). That's the problem with math - the generalizations, while useful in math itself, are sheer overkill in many situations outside of math.
Lastly, sorry for taking this out of order, but:
>An isolated concept is worthless.
So is a generalized abstraction without any context. I'm saying that the sweet spot is somewhere in between for most people to understand and apply concepts, and that it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.
Define: An integer n is `even' if there exists some integer m such that n = 2m.
Theorem: For any two even integers n and a, the sum n + a is an even integer.
Proof: Since n and a are even there exist integers m and b such that n = 2m and a = 2b. Now,
n + a = 2m + 2b    (by assumption)
      = 2(m + b)   (by the distributive property)
      = 2z         (for the integer z = m + b)
That is to say, in mathematics we introduce some definitions/gibberish/jargon (in this case 'even') and then we use logic to reason about the implications of our choice of definition (the sum of even integers being even).
The important thing is that the definition plays an essential role; definitions are the building blocks on which all of mathematics operates. To emphasize: if you strip away the definitions you literally have nothing to build on - we can't apply logic to nothing and arrive at something.
This leads to the point I made in my earlier comment: the reason we need definitions rather than intuitive explanations is that you can't logically reason about a concept unless you nail down the relevant details of exactly what that concept is. Without that, we can't do the 'prove the theorem' part of the above example.
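To make the "nailing down enables reasoning" point concrete, here is the same little theorem written out in Lean (my own rendering as a sketch, assuming Lean 4 with Mathlib for the `ring` tactic; it is not part of the original argument). Once the definition is pinned down, a machine can check every step:

    import Mathlib.Tactic

    -- "n is even" spelled out exactly as in the definition above: n = 2*m for some integer m.
    theorem even_add_even (n a : ℤ)
        (hn : ∃ m : ℤ, n = 2 * m) (ha : ∃ b : ℤ, a = 2 * b) :
        ∃ z : ℤ, n + a = 2 * z := by
      obtain ⟨m, hm⟩ := hn                   -- n = 2*m
      obtain ⟨b, hb⟩ := ha                   -- a = 2*b
      exact ⟨m + b, by rw [hm, hb]; ring⟩    -- n + a = 2*m + 2*b = 2*(m + b)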
So how does mathematics then fit into application?
Guy 1: In this basket I have as many stones as I have fingers and in that basket I have as many stones as I have toes. For each basket I can pair up the stones so that each has a partner. Will this still be the case if I combine the stones from each basket?
Mathematician: Well, let's represent the number of stones in each basket with the integer 10. Pairing stones corresponds to the integer being even and combining the baskets corresponds to adding the two integers. I note that 10 is even since 10 = 2x5 and so I can apply my theorem to conclude that the sum 10+10 is even. Thus I conclude that when you combine the baskets you will still be able to pair each stone with a partner.
Guy 1: Wait, wait, wait! I don't understand this 'even' jargon. Do it again without the jargon.
Mathematician: The definition of 'even' was central to my whole process. Without it I can't even set up the problem, let alone apply the theorem used to justify the answer. I could perhaps just give you an answer, "MATHEMATICS SAYS YES", but then you wouldn't be able to repeat it yourself for different numbers of stones.
If the above is understood then I can quickly address the claims you have made.
> it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.
Mathematics is generalisation utterly removed from reality. This is why we have "Adding integers" and not "Adding together collections of dogs" and "Adding together collections of apples" and "Adding together collections of hats" and ...
> Sure, it's convenient for a mathematicians to be able to use shorthand gibberish to talk to other mathematicians.
Mathematics is the practice of defining new gibberish and then reasoning about that gibberish. The gibberish isn't a shorthand for something, it is the thing.
> It doesn't justify pushing this jargon on other fields.
Mathematics is definitions/gibberish/jargon. Applying mathematics to a field thus means applying definitions/gibberish/jargon to that field.
> When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".
At least since Euclid's formulation of geometry.
>> So by the intuitive explanation we can make this single line any dimension that we want.
> And if this makes sense in the given context - sure, why not?
The problem is that it doesn't. Your explanation of an n-dimensional space is more a description of the larger space in which our space of interest is embedded.
In all instances the space (the line) remains unchanged; the only thing that changes is how we are describing it. For the dimension of the space to be a property of the space, it must not depend on how we choose to describe it.
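Here's a small sketch of that point (everything below is my own illustration; the angle and distances are made up). The same line on the floor is written with one, two, or three coordinates, but every description is a function of a single free parameter, which is why the line itself stays one-dimensional:

    import math

    ANGLE = math.radians(30)   # direction of the line across the room (arbitrary)
    START = (2.0, 5.0)         # where it starts, as (east, north) distances in metres

    def as_1d(t):
        """Distance along the line from its start."""
        return (t,)

    def as_2d(t):
        """Distances from the east and north walls."""
        return (START[0] + t * math.cos(ANGLE), START[1] + t * math.sin(ANGLE))

    def as_3d(t):
        """Same, plus distance from the roof - constant, since the floor is flat."""
        east, north = as_2d(t)
        return (east, north, 3.0)

    for t in (0.0, 1.0, 2.5):
        print(as_1d(t), as_2d(t), as_3d(t))   # three descriptions, one degree of freedom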
Personally, I'm really into physics, so I've always really wanted to like math so that I could delve deeper into the subject more easily. It's not that I'm terribly bad at it, but most math texts are so obscenely terse and cryptic that it makes you wonder whether the authors are even trying to teach people what they're talking about...
The terminology (gibberish) I used is nothing more than a convenient but very precise shorthand for communicating abstractions. The reason these particular abstractions are given special names by mathematicians is that they occur over and over again. There is a lot of conceptual leverage to be gained if you are able to see the same patterns in seemingly unrelated problems.
As Poincaré put it, "Mathematics is the art of giving the same name to different things". I'd argue it's worth knowing some of those names.
I've never heard a mathematician claim math is the universal solution to all problems... sounds like you had a bad high school experience. Do you need a math-hug? :)
Not exactly high school. I'm a university student in Germany.
But yes, it's pretty much a bad experience. I've seen quite a lot of other students, otherwise bright people, struggle with math simply because they aren't good at learning to interpret arcane sequences of symbols that represent pretty much nothing relevant to reality. Math, the way it is taught here, is essentially useless for 95% of all CS students.
Symbols are just convenient and useful abbreviations. In my experience, they facilitate both speed and precision in mathematical work, far and above what is possible in a natural language like English. There is no particular reason why certain concepts are associated with certain symbols apart from convention. Any decent mathematician should be able to operate with any set of symbols, although not necessarily with equal proficiency.
I won't deny that many mathematicians have a hard time expressing themselves in natural-language speech and writing. Perhaps for them the symbols have become a crutch. But I still think you have erected a mental block around an ultimately inconsequential aspect of mathematics, which is needlessly impeding your understanding.
That having been said, I think there are definitely bad ways to approach the use of symbols in instruction. Concepts and their related symbols should be introduced incrementally, and you should always have ample opportunity to practice with the ones you have just learned before having to learn new ones. I think that a well rounded mathematics curriculum should include a course with a significant focus on understanding and manipulating common mathematical and logical techniques, with mathematical language being a significant if not necessary part thereof.
Yes, that's exactly what I've been getting at. As I said in other comments, I respect math as a field. What I'm pointing out is that there's a heavy communication problem between math and other fields, and this is something that needs to be solved.
They should probably stop trying to dumb down the math for computer scientists. Math is hard, and pretending otherwise won't save anyone from reality.
In math, we just define n-dimensional Euclidean space, but no implication is made about the physical world.
That's not really true anymore, now that so many technologies from gaming to communications rely on concepts and algorithms from higher mathematics. While I'm skeptical about the importance of the particular equations on the author's list, I don't think it's reasonable any longer for software devs to plead ignorance (your humble correspondent included).
There is far more to CS than what that list implies. I realize that it's not supposed to be all-encompassing, but seriously, calling it "something all CS geeks should know" is, in my mind, unfair - I have little understanding of most of the OP's article, but still consider myself to have a very solid understanding of CS, based on my level of education.
I don't need to know most of the things in the article. There are plenty of other things in CS, however, that I do need to know.
I think putting together lists like this does a disservice to people just about to enter the field. You can't possibly break down CS into a list of 11 equations.
But I tend to agree with you: many of these are skewed by the author's perspective on what should be considered foundational. Someone doing webdev will not ever calculate eigenvectors. Someone doing machine learning will not be able to avoid them.
If you actually ever find yourself needing to show that a particular language is non-regular, it’s almost always clearer to use an ad hoc argument or appeal to the Myhill-Nerode theorem. Actually the latter is much better, because Myhill-Nerode completely characterises the regular languages, whereas there are non-regular languages that pass the pumping lemma test.
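For what it's worth, here's a small sketch of the Myhill-Nerode idea for the textbook non-regular language { a^n b^n } (the language and the code are my own illustration, not the parent's): the prefixes a^0, a^1, a^2, ... are pairwise distinguishable, and infinitely many distinguishable prefixes means no finite automaton can accept the language.

    def in_L(s):
        """Membership in L = { a^n b^n : n >= 0 }."""
        n = len(s) // 2
        return s == "a" * n + "b" * n

    def distinguishable(x, y):
        """Is there a suffix z such that exactly one of xz, yz is in L?
        (Suffixes of the form b^k are enough for this example.)"""
        return any(in_L(x + z) != in_L(y + z) for z in ("b" * k for k in range(10)))

    prefixes = ["a" * i for i in range(8)]
    assert all(distinguishable(x, y) for x in prefixes for y in prefixes if x != y)
    print("all", len(prefixes), "prefixes are pairwise distinguishable")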
Can you give an example of that? You may be right, but I'm having a hard time coming up with a language that is not regular but fulfils the pumping lemma.
As you can see, this example is a little complicated – and I don’t know a simpler one, so I’m not surprised you struggled to just come up with an example off the top of your head.
Given the article's slapdash, loosely-coherent English and tendency to pad out each equation's section with references to even more equations--and not explanation of usefulness--I wish I hadn't tried to read it, for it was very frustrating. I suppose the article might provoke interesting discussion among the mathy set, the same way that a list of "top patterns" might start a discussion among OO fans.
I agree the applications were glossed over or skipped, but that is probably the most interesting aspect of the equations as well.
Without practical information, the article is sorely missing true sustenance.
Also, I'd generally say that almost all jargon is bad. A quote that's frequently attributed to Einstein says that "If you cannot explain something in simple terms, you have not adequately understood it". Math and science in general should take this to heart, because I feel that many people have forgotten this or never really understood it in the first place.
It can be a bit daunting at first, but I find mathematical concepts and notation extremely useful tools for programmers, especially when designing algorithms. It allows you to express powerful, abstract ideas in a few lines whereas the same reasoning in pseudo-code would take much more time and effort.
I'm glad my undergraduate program was heavy on the math (although I hated it at the time)– it made graduate CS courses easier to understand, and programming easier for me in the long term. I actually miss my pure math courses now.
A code-like representation may have the disadvantage of being more verbose, but it has the advantage of being unambiguous. It gives up conciseness in favour of precision. And no, it is not impossible to represent mathematical concepts in a code-like manner -- it just forces you to actually say all of what you mean rather than using an ambiguous and incomplete shorthand.
You have to fight the gloss-over effect. It takes conscious effort. Remember that "deliberate practice" all those blog entries say it takes to master something? Well, same thing. You have to deliberately overcome your brain's tendency to go "math is hard, let's go shopping!" Eventually you'll be able to look at an equation and recognize patterns. When I hit the link I was like "Oh, man, MATH, so bo-- hey wait, that's Bayes's theorem! I know this!" (Yes, just like the Jurassic Park girl.)
So keep working at it. Maybe you'll want to try -- gasp! -- actually cracking a math book and studying the material they present.
Bayes's theorem is a direct consequence of this axiom and the commutativity of conjunction.
Let P'(A) be the distribution on A when we know nothing about B (in a world with only A and B). Plausible reasoning is possible through the Bayes/joint-conditional probability rule and yields:
P(B=true|A=true)=1, P(B=true)=1 |- P(A=true) > P'(A=true)
where logic alone can't conclude (A->B, B |- ?)
P(B=true|A=true)=1, P(A=false)=1 |- P(B=true) < P'(B=true)
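A quick numerical sanity check of those two statements (the joint distribution below is my own made-up example; the only constraint carried over from above is P(B=true|A=true) = 1, i.e. A -> B):

    def posterior(joint, evidence):
        """Condition a joint distribution {(a, b): prob} on evidence(a, b) and renormalize."""
        kept = {k: p for k, p in joint.items() if evidence(*k)}
        z = sum(kept.values())
        return {k: p / z for k, p in kept.items()}

    def prob_a(joint):
        return sum(p for (a, _), p in joint.items() if a)

    def prob_b(joint):
        return sum(p for (_, b), p in joint.items() if b)

    # Prior over (A, B); the (A=true, B=false) cell is 0, so P(B=true|A=true) = 1.
    prior = {(True, True): 0.3, (True, False): 0.0,
             (False, True): 0.2, (False, False): 0.5}

    # Learning "B is true" raises the plausibility of A ...
    print(prob_a(posterior(prior, lambda a, b: b)), ">", prob_a(prior))       # 0.6 > 0.3
    # ... and learning "A is false" lowers the plausibility of B.
    print(prob_b(posterior(prior, lambda a, b: not a)), "<", prob_b(prior))   # ~0.29 < 0.5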
For example: the halting problem. Isn't it interesting that you can't write a computer program which can decide (in general) whether another program terminates? The proof is simple and it gives you a real tool to help you avoid trying to solve impossible problems. It's good to know the limits of your craft.
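If it helps, here's a toy version of that proof in code (entirely my own sketch; "programs" are modelled as zero-argument Python callables, and halts(f) is a hypothetical oracle someone claims to have):

    def build_counterexample(halts):
        """Given any claimed oracle halts(f) -> bool, build a function it gets wrong."""
        def contrarian():
            if halts(contrarian):   # the oracle predicts "halts" ...
                while True:         # ... so loop forever instead,
                    pass
            return None             # otherwise it predicts "loops", so halt at once.
        return contrarian

    # Example with an (obviously bogus) oracle that claims everything halts:
    claims_everything_halts = lambda f: True
    c = build_counterexample(claims_everything_halts)
    print(claims_everything_halts(c))   # True - yet running c() would loop forever.
    # The same contradiction arises for *any* candidate oracle, which is the heart of the proof.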
I'll agree that the y-combinator is less essential; however, if you even have a sense of what's going on, it means you have an understanding of the basic framework of functional programming and a minimal grasp of the lambda calculus.
I think, too, that what was meant by 'contrived' was that the pumping lemma is not really an equation. But you don't get hits if you talk about the top ten theorems CS geeks should know...
"The prefix eigen- is adopted from the German word "eigen" for "own" in the sense of a characteristic description" (from wikipedia)
I think Shannon's Information Theory is pretty fair too. It's about how much information is in a message.
Plus, Computer Science has its own grievous names: A* search, B* search, B tree, B+ tree, B* tree. That always drove me nuts.
Mathematical notation is always supposed to be used in a context similar to literate programming.
There's also o(f(n)): g(n) is a member of o(f(n)) if the limit as n goes to infinity of g(n)/f(n) is zero. Finally, there is asymptotic equality: f(n) ~ g(n) if the limit as n goes to infinity of g(n)/f(n) = 1. O, o and ~ are all subtly different, but if you're just trying to prove upper bounds then O(f(n)) is the one that comes up most frequently, which is why it's probably the only sort of asymptotic analysis most CS grads know.
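A quick numerical illustration of the difference (my own example, not from the parent comment): the ratio g(n)/f(n) tells the three notions apart.

    import math

    def ratios(g, f, ns=(10, 100, 1_000, 10_000)):
        return [g(n) / f(n) for n in ns]

    # n*log(n) is o(n^2): the ratio heads to 0.
    print(ratios(lambda n: n * math.log(n), lambda n: n ** 2))
    # n^2 is O(2*n^2) but not ~ 2*n^2: the ratio settles at 0.5, not 1.
    print(ratios(lambda n: n ** 2, lambda n: 2 * n ** 2))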
This equation represents the number of edges in a complete graph with N+1 vertices, or the number of possible pairings given N+1 objects. Useful when estimating the big-O of certain algorithms that involve comparing an item to every other item in a set.
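A quick check of that count (my own snippet, with N = 7 picked arbitrarily):

    from itertools import combinations

    N = 7
    items = range(N + 1)                            # N+1 objects
    pairs = sum(1 for _ in combinations(items, 2))  # every unordered pairing
    assert pairs == N * (N + 1) // 2                # matches the closed form: 28
    print(pairs)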
One of my math professors said that this and multiplying by things equal to one were among the tricks he used most often.
So don't be afraid! (And besides, there are certainly many CS degrees which don't require anywhere near the amount of mathematical sophistication mentioned in this article.)
Still, I think you should look at college as a way to learn what you don't know and get better where you suck, rather than bolster your strengths and avoid your weaknesses. For example, I wrote really well in high school and at one point wanted to get an English degree. Then I thought, "I already know how to write, let's put that tuition money to better use" and started considering math and science heavy options.
There are many paths you can take through a CS degree, and it's entirely possible to come out with useful and applicable knowledge without having to deep-dive in math. It just means you're more likely to end up in, say, web-dev than building 3D graphics engines. For me that's been a perfectly acceptable outcome.
And besides, a CS degree takes a few years - there's plenty of time to brush up on math if you're really interested.
Thanks everyone for the words of encouragement, but when I was going to school it would take me 4 hours to complete 15 simple homework math equations, even with the ritalin. 'Hard work' in the form of 4 years of that stuff would make me commit suicide.
Moreover, I just don't care to understand complex math. I do want to know how to design compilers and write embedded kernel drivers. If advanced math is required to be able to do these things, CS isn't for me.
THAT SAID I totally agree with what people have said, and the amount of "maths" you need beyond basic arithmetic in CS depends very much on your chosen field of interest, which, given that you don't care about complex maths, probably won't be too mathematically heavy.
Enough about me. If you want it, go get it. Period.
Certainly I consider myself relatively slow, but with enough perseverance, I get told I am smart. :)
If you want to do 'interesting' stuff, you'll need solid math skills. If your goal is to make money, you will likely do fine with average math skills. Oddly, however, many employers currently interview for CS/math knowledge even though their jobs will never, ever make use of such knowledge.
Can?? When is it not true? It's a theorem, after all.
I meant more along the lines of when the assumptions aren't met, e.g. P(B) == 0.
Or, where unitarity in physics isn't met: