Ask HN: Which areas of math are practical to programming/algorithms and why?
149 points by kevindeasis on Sept 21, 2016 | 95 comments

I've recently been trying to make my way through Elements of Programming (https://www.amazon.com/Elements-Programming-Alexander-Stepan...) by Alexander Stepanov and Paul McJones, and it makes me say abstract algebra. It's the first and only rigorous foundation of software engineering that I've seen. It basically maps abstract algebra to a somewhat simple subset of C++, introduces new algebraic objects (such as memory), and then shows a really nice correspondence between this C++ subset and the abstract algebra.

If you are not familiar with the author, Alexander Stepanov is the guy who basically figured out generic programming and was instrumental in the design of C++ templates and the C++ STL. C++ gets a lot of flak, but templates are very powerful (if you disregard the complexity). I think they are actually one of the main reasons why C++ is still relevant. I'm also starting to think that generic programming might actually be the most powerful paradigm out there (this is just a hunch). This book doesn't take the middle road: it covers only the low level (C++) and the extreme high level (abstract algebra), and totally cuts out the middle part (aka boilerplate).

Funnily enough, this C++-like language actually translates very nicely to the modern C++ successors like Swift and Rust (or so it seems; I'm in the process of exploring this).

Has anyone here tried to explore the contents of this book in either Swift or Rust?

But remember that this is not an easy book. I've met very smart people who told me they have read only part of it and are still wrapping their heads around it.

Favorite Stepanov quote below - apparently mixing mathematics and bad fish is quite potent:

Question: What is the origin of STL? Has STL been conceived to be what it is now, that is "the" C++ Standard Library, or does it come from some other project? Could you tell us a history of STL?

Answer: In 1976, still back in the USSR, I got a very serious case of food poisoning from eating raw fish. While in the hospital, in the state of delirium, I suddenly realized that the ability to add numbers in parallel depends on the fact that addition is associative. (So, putting it simply, STL is the result of a bacterial infection.) In other words, I realized that a parallel reduction algorithm is associated with a semigroup structure type.
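
Stepanov's observation can be sketched in Rust (the `Semigroup` trait and both reduce functions below are illustrative, not from the book or the STL): because `combine` is associative, splitting the input into chunks and combining the partial results gives the same answer as a left-to-right fold, which is exactly what makes the reduction parallelizable.

```rust
// Hypothetical sketch: associativity is what lets a reduction be
// split into chunks and combined in any grouping (hence in parallel).
trait Semigroup {
    fn combine(&self, other: &Self) -> Self;
}

impl Semigroup for i64 {
    fn combine(&self, other: &Self) -> Self {
        self + other // addition is associative, so i64 forms a semigroup
    }
}

// Sequential left-to-right fold.
fn reduce<T: Semigroup + Clone>(xs: &[T]) -> Option<T> {
    xs.iter().cloned().reduce(|a, b| a.combine(&b))
}

// "Parallel-shaped" reduction: split, reduce halves, combine.
// Agrees with the left fold only because combine is associative.
fn reduce_split<T: Semigroup + Clone>(xs: &[T]) -> Option<T> {
    if xs.len() < 2 {
        return xs.first().cloned();
    }
    let (l, r) = xs.split_at(xs.len() / 2);
    match (reduce_split(l), reduce_split(r)) {
        (Some(a), Some(b)) => Some(a.combine(&b)),
        (a, b) => a.or(b),
    }
}

fn main() {
    let xs: Vec<i64> = (1..=100).collect();
    assert_eq!(reduce(&xs), reduce_split(&xs)); // both Some(5050)
}
```

A real parallel implementation would hand the two halves to separate threads, but the correctness argument is the same one Stepanov had in the hospital.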

The full interview can be found at:


I just picked this up after having finished 'From Mathematics to Generic Programming', which is a walk in the park compared to what you're reading now. Looking forward to the task of slowly working my way through it.

[Edit] Interested in Rust for the same reason, but I'm not attracted to Swift. It looks so much like Kotlin, I am not sure I understand the fuss over it other than it gives Apple devs a way out of ObjC.

Anyway, I am going full on Android now, not in the hopes of app money, but the sheer numbers of devices out there, Google's backing, and it is more of a hacker's platform than iOS. I will most likely try Kotlin again, and this will prepare me for any possible switch to Swift.

Giving devs a way out of ObjC is a pretty big draw :).

I'm using a very similar approach in the design of my library of data structures and algorithms:

(0) The interface of an abstract data type (set, heap, etc.) is the signature of a variety in the universal algebra sense. Implementations are concrete algebras of this variety.

(1) Homomorphisms of varieties can be used to bootstrap abstract data types from simpler ones in a generic way. Chapter 10 of Okasaki's book “Purely Functional Data Structures” is entirely dedicated to this technique, although he doesn't make the connection to universal algebra.

But there are also some differences:

(2) I'm using a functional programming language (Standard ML), rather than an imperative one (C++).

(3) I emphasize persistent data structures over ephemeral ones. I don't reject destructive updates that are compatible with persistence (e.g. laziness).

(4) I use algebraic data types to statically rule out unreachable code paths. Non-exhaustive pattern matching is considered a bug. Raising an exception saying “this path was supposed to be unreachable” is also considered a bug. This would be at best very difficult to enforce in C++.
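
The same discipline carries over directly to Rust, where `match` over an enum is checked for exhaustiveness at compile time. A minimal sketch (the `Shape` type is a hypothetical example, not from the library being described):

```rust
// The compiler rejects any `match` that misses a variant, so
// "unreachable" paths are ruled out statically rather than with a
// runtime "this should never happen" exception.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Omitting either arm here is a compile error, not a runtime panic.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 3.0, h: 4.0 }), 12.0);
}
```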

Keean Schupke has started to explore the Elements of Programming in Rust. See: http://lambda-the-ultimate.org/node/5330

I know Alexander Stepanov personally, and Paul McJones through him, and they are both interested in seeing the ideas in the book implemented in other languages. Rust is a particularly compelling candidate, because it allows you to specify type constraints on generic functions through traits. This is similar in some ways to "concepts" which Alex has been hoping to see implemented in C++ for a very long time but which keeps getting kicked down the road.
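
As a rough sketch of what that looks like (the function below is an illustrative example, not from the book): a Rust trait bound states the "concept" a generic function requires, and any call site that doesn't satisfy it is rejected at compile time.

```rust
use std::ops::Add;

// The trait bounds play the role of a C++ "concept": the generic
// function declares up front exactly what it requires of T.
fn sum_all<T: Add<Output = T> + Copy + Default>(xs: &[T]) -> T {
    xs.iter().fold(T::default(), |acc, &x| acc + x)
}

fn main() {
    assert_eq!(sum_all(&[1, 2, 3, 4]), 10);
    assert_eq!(sum_all(&[1.5f64, 2.5]), 4.0);
    // sum_all(&["a", "b"]) would be a compile error:
    // &str does not satisfy the Add bound.
}
```

Unlike C++ templates, where a violated constraint surfaces as a wall of instantiation errors, the bound here is part of the signature, which is much closer to what the "concepts" proposals have been after.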

Good ideas are often reinvented by different people in different contexts: An STL concept is a type class equipped with laws is an algebraic variety. Universal algebra tells us that we can gain a lot from paying attention not just to individual varieties in isolation, but also to homomorphisms of varieties, which translated back to programming languages correspond to bootstrapping data structures from simpler ones in a generic way.

Also, a while back, I reinvented InputIterators, OutputIterators and `std::copy` in a more type-safe way, using algebraic data types: https://github.com/eduardoleon/shit . (Sorry for the repo's name!) Although I only provide Standard ML and Scala implementations, this is perfectly doable in Rust as well. (In fact, my code is linearly typed, not just affinely typed.) The only requirements for a port to another language are parametric polymorphism and algebraic data types.

Discrete math comes to mind first: lists, trees, graphs, etc. It's used in basic Algorithms 101, but you need some more serious probability theory for probabilistic algorithms.

Linear algebra is useful for computer graphics, but also for general "systems thinking" concepts like input spaces, output spaces, transformations, and properties of transformations.

Basic differential calculus, meeeh, but multivariable calculus—specifically optimization—is really important in many programming contexts (e.g. machine learning).

Of course, the most important and most basic of all is the notion of a function f(x), its definition, inputs, outputs, properties, etc.

Rather than thinking just of how math ideas are used in CS, we should think of the relation as a two-way street: computing can help us better understand math too. Here is a short tutorial on basic math for programmers: https://minireference.com/static/tutorials/sympy_tutorial.pd...

It serves to review basic high school math, calculus, and even a bit of linear algebra. SymPy is very powerful stuff...

Also related to functions: big-O notation.

Math is rarely necessary for software development. However, math is one way great engineers distinguish themselves from okay engineers.

I work primarily on audio software. Linear algebra is most important, followed closely by signal processing. Those are sufficient to write good software. Great software requires statistics, calculus (mostly for optimization but also modeling), and discrete math (again mostly for optimization). Other specialties probably require more discrete math and less linear algebra and signal processing.

People who don't know math just reinvent it poorly. A lot of people use databases, and their SQL query with joins is performing set operations, yet they have no idea it's math. If you gave them the same dataset in an array, they wouldn't use a set collection to solve it; they would loop over the dataset multiple times, plucking what they need.
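
To make the point concrete, here's a small Rust sketch (hypothetical data): matching ids across two datasets is just a set intersection, and recognizing that replaces an O(n·m) nested loop with an O(n+m) hash-set operation.

```rust
use std::collections::HashSet;

// The "loop over the dataset multiple times" version: O(n * m).
fn join_nested(a: &[u32], b: &[u32]) -> Vec<u32> {
    let mut out = Vec::new();
    for &x in a {
        for &y in b {
            if x == y {
                out.push(x);
            }
        }
    }
    out
}

// The set-operation version: an inner join on ids is intersection, O(n + m).
fn join_sets(a: &[u32], b: &[u32]) -> Vec<u32> {
    let sa: HashSet<u32> = a.iter().copied().collect();
    let sb: HashSet<u32> = b.iter().copied().collect();
    let mut out: Vec<u32> = sa.intersection(&sb).copied().collect();
    out.sort();
    out
}

fn main() {
    let a = [1, 2, 3, 5];
    let b = [2, 3, 4];
    let mut nested = join_nested(&a, &b);
    nested.sort();
    assert_eq!(nested, join_sets(&a, &b)); // both [2, 3]
}
```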

This is another reason why I would like to improve my math skills. I sometimes sense that there would be a much more elegant and efficient way to solve a problem if I could apply some math theory to it. Two examples:

1: When I try to learn a new language, I work through some of the problems on Project Euler. I'm always impressed by some people's one-line math solutions to something that took me 50 lines of code.

2: I once tried to implement a timetable-scheduling web app (matching available teachers to courses). I could see that a solid understanding of set theory would have allowed me to (a) reason about the problem domain more easily and (b) find the best way to set up my SQL queries.

> Math is rarely necessary for software development. However, math is one way great engineers distinguish themselves from okay engineers.

I recently realized that math is not necessary for programming... if you want to be stuck doing boring stuff like webdev or CRUD apps.

Abstract reasoning is necessary for programming. We typically develop those skills doing and applying math, but it is an open question if abstract reasoning is math.

Also, there are many kinds of maths; the Europeans even get this right over the Americans by making it plural. "Math is necessary" is like saying "things are necessary"... which thing? It really depends on what you are doing, and experience with a known field of math might not be useful for some problems (beyond the abstract reasoning you need to write code at all).

Applies also to mobile, DevOps, and data engineering. Take the average ~$150K SV software engineering job and you don't need continuous math (just discrete to get in the door and maybe once every 6-12 months after that, when an interesting problem arises like some kind of divine salvation).

I want to stress before answering that everything you learn is useful if your mind is limber and willing to make connections. Some of the most valuable lessons in programming I learned from editing a philosophy journal in graduate school. I'll answer in terms of mathematical areas/concepts I've found immediately applicable in my programming career (format: what | why):

linear algebra | graphics, scientific computation, mostly graphics.

discrete probability | a whole shitload of perf solutions, understanding risk while planning

number theory, particularly factorization | modular arithmetic for crypto, a lot of very clever hacks for compressed representation of state spaces

predicate logic, basic set theory | I cannot count the number of times someone I work with has expressed something that required one elegant logical operation in a horribly convoluted way.

amortized analysis | just learn it

statistics | operational reasoning using performance metrics, how to test/alarm rationally, how to reason about your customers

calculus | marginal returns (important when managing a team and optimizing where to spend resources). also if you ever end up doing any kind of convex optimization, which you might, maybe. (I do, but I don't think that's super normal outside datascience).
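
As one concrete instance of the "modular arithmetic for crypto" item above, here's a sketch of square-and-multiply modular exponentiation, the core primitive behind RSA and Diffie-Hellman (toy parameters only; real implementations need wider intermediates, big integers, and constant-time code):

```rust
// Square-and-multiply modular exponentiation: computes base^exp mod modulus
// in O(log exp) multiplications instead of exp of them.
// Note: result * base can overflow u64 for moduli near 2^32 or larger;
// production code uses 128-bit intermediates or a bignum library.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1u64;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus; // multiply in the current bit
        }
        base = base * base % modulus; // square for the next bit
        exp >>= 1;
    }
    result
}

fn main() {
    // Fermat's little theorem: a^(p-1) = 1 (mod p) for prime p, gcd(a,p)=1.
    assert_eq!(mod_pow(3, 6, 7), 1);
    assert_eq!(mod_pow(2, 10, 1_000), 24); // 1024 mod 1000
}
```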

TL;DR: there's a reason CSCI curricula look like they do.

>I cannot count the number of times someone I work with has expressed something that required one elegant logical operation in a horribly convoluted way.

Would be great if you can write/blog about it or just provide some examples here...

> I want to stress before answering that everything you learn is useful if your mind is limber and willing to make connections

There are some things you don't want to learn, and there is also an opportunity cost to learning/practicing A instead of B.

Of course. As I said, there's a reason most CSCI curricula look similar. That said though, most applied programming work does not require hard/deep math with a steep learning cost. It requires good instincts and precise logical reasoning.

Good instincts require a breadth of knowledge, moreso than a depth in any particular domain. Precise logical reasoning requires practice and patience in any of a very wide range of fields, not exclusive to mathematics. In both cases, I think developing a careful knowledge of history or neuroscience or any rigorous intellectual discipline is effective practice, although I think a foundation in logic is irreplaceable.

Being an excellent programmer in some specific domains requires deep domain specific knowledge in applied math, which does have a steep opportunity cost. Absolutely. But I did not believe that was the question at hand.

What did you learn from editing that journal?

Mostly that simplicity and clarity are hard to achieve but very worth it. Learning what it feels like to correct and simplify logic into something that is easy to read and easy to verify is experience that has transferred well.

From my experience:

Everything you publish in science/academia will be read by a wide, potentially hostile audience. It is very easy to be misinterpreted, and misinterpretation wastes a lot of time and effort.

Stating what you mean in clear, precise terms is far more work than writing things that make you sound 'smart'. Good writers make their work seem terribly obvious, but it comes at the labor of many, many drafts. Most of the hardest work in editing is helping an author subtract and simplify to get right to the point. The labor of being precise and explicit exposes errors that the illusion of understanding tends to conceal.

Young/inexperienced writers often have a gigantic blind spot when it comes to their own writing, and correcting that is a very painful process.

Also, sometimes the original author was right, and the only precise way to express something was really ugly and horrific and your attempts to make it simple did violence to a lot of careful thought.

The world would be a better place if everybody took the effort to make their point with precision, simplicity and clarity.

However the truth in many situations is that if person A spends an hour constructing a beautifully worded five-liner, and if person B spends the same time to produce several rambling pages, then person B's argument may well win out. Readers who have spent ten times as long wading through B's contribution are quite likely to have forgotten entirely about A's.

I will definitely take that advice to heart. I literally just hit the submit button on my first conference article.

I'd recommend reading Knuth's The Art of Computer Programming for his perspective. Very math intensive and things you won't find in other textbooks.

Definitely a lot of discrete math (e.g. combinatorics), some calculus, number theory (esp. for cryptography), coding theory (e.g. for error correction), information theory, computational complexity theory. The border between theoretical computer science and math in general is very blurry, pretty much anything that is CS pretty quickly touches on more "traditional" mathematics.

A lot depends on your area of programming, things like machine learning will definitely involve more linear algebra, calculus, statistics. If you're programming a 3d engine then again algebra, geometry, etc. Things like finite state machines have a mathematical underpinning.

EDIT: From what I've seen at a university level CS program, Calculus, Linear Algebra, Discrete Math, Group Theory, Logic and Statistics were (IIRC) the core required math courses. Doesn't mean everything you learn there is applicable to any kind of program you might write but at least that's what someone thought was important...

Important to note that the maths used in TAOCP is briefly covered at the start of Vol. I. This was expanded upon in the form of Concrete Mathematics.

If you want just the maths and at a slightly slower pace, go with Concrete Mathematics. If that's a struggle, I found the first part of Discrete Mathematics and its applications by Kenneth Rosen was enough for me to make progress in Concrete Mathematics.

I also found learning a bit of linear algebra and calculus was a mind changer. I expect more fruits by my continuing of those studies.

This seems like a good read on the topic:


Disclaimer: I know nothing about math but have been researching it recently as I want to go to university next year to study CS so I have about 8 months to 'learn' math (45 year old street programmer here :).

I would appreciate any feedback on if this post has any merit, and will be following this thread as it's relevant to my interests :)

I am a self-taught programmer in my 30s (with an undergrad degree in humanities) now back in school. One of the main reasons I decided to return to school rather than continuing to program on the job was that I wanted to improve my math skills.

When you say you know nothing about math, does that mean you have no Calculus? That was where I was starting. To prepare for Calculus, get solid in algebra and trig. Then do single-variable Calculus (Calc 1 & 2). My recommendation would be to do linear algebra before multivariate calculus. You can do Calc 2 and Linear Algebra at the same time.

If you have no Physics experience you might also consider studying it. Physics helps you learn by applying math concepts. Also, I think it is very good for teaching you to problem solve as an engineer.

First I would recommend Cal Newport's blog about using time wisely. http://calnewport.com/blog/2008/11/25/case-study-how-i-got-t... Plenty more great info in the archive: http://calnewport.com/blog/archive/ Obviously the best way would be to go through your school's course syllabus for whatever classes you want to take and look at the material you will be doing, but these are good for a general preparedness:

Axler - Precalculus, 2nd edition http://precalculus.axler.net/ He works through every odd exercise solution, in full. It also gives you a good intro to sets and series, summation notation, and the binomial theorem; all of this will come up later in discrete math. He assumes the reader knows nothing about trig as well.

Gilbert Strang's videos "Highlights of Calculus" https://ocw.mit.edu/resources/res-18-005-highlights-of-calcu... just to get an overview of what calculus really is since I assume your university will throw you into an applied single variable class first year. MIT Open Courseware has their calculus (18.01/18.02) course lectures up if you want to watch them too to get an idea of what you'll run into.

"Elements of Mathematics: From Euclid to Gödel" gives quite a good explanation of Mathematics as a whole, like why we learn elementary math the way we do and how it applies to more advanced concepts, explains Rings/Fields and has a really good introduction to Logic. I wish this book existed 5 years ago when I started http://press.princeton.edu/titles/10697.html

"An Introduction to Mathematical Reasoning" by Eccles is short and excellent. You can also download the lecture notes here from CMU's course on proofs http://math.cmu.edu/~svasey/concepts-summer-1-2014/ and try some of the homework if you want but you probably won't see any of this until second year judging by most university calendars and recommended program course list I've seen.

CLRS https://en.wikipedia.org/wiki/Introduction_to_Algorithms has a great first part, "Foundations", that complements Knuth's 'Mathematical Preliminaries' chapter in TAOCP Vol. 1. With Knuth's book you can really drill yourself on these concepts, thanks to the many, many proof-writing exercises. The skills I learned doing those exercises paid off in full years later when I started more advanced math, and even at work.

Thanks for the info. Do you think I will be able to use the Elements of Mathematics to kind of build a "Tech Tree" of math? One of the hardest things for me to find so far is what should be my order of acquisition for math. I did find this:


But I have no others to go on to compare.

This is a common complaint with a simple solution.

Just look at the curricula for various undergrad math programs at some number of decent universities. They have a set of required courses and the course descriptions usually list the prerequisites, and sometimes this is even accompanied with a diagram of the DAG as a flowchart.

For any given course, there's usually a canonical text or set of roughly equivalent in quality texts.

You can do the same thing for basically any area of study that can be found as an undergrad major at most universities.

Set theory is useful, as well as category theory (although I never formally studied category theory myself...maybe eventually I will).

Basic logic is also important: understanding symbolic logic and methods of proof (direct inference, proof by contradiction, induction, etc.) proves important again and again.

I have had to use basic trigonometry in the past as well, and linear algebra (for CSS rotations/translations driven by JS).

Algorithmic number theory, closely related to algebraic number theory, drives cryptography.

Harmonic analysis drives audio.

Algebraic geometry powers facial recognition software.

The list goes far deeper, but I have been away from the math world for 6 years & have forgotten much.

Matrix multiplication is a must-have for any graphics programming, and knowledge of it is very useful for learning neural networks.

And on the more esoteric side of things, in quantum computing every logic gate is actually a matrix multiplication.
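
A minimal sketch of that last point (using real instead of complex amplitudes for simplicity): applying the quantum NOT (Pauli-X) gate to a state vector is literally a 2x2 matrix-vector multiplication.

```rust
// 2x2 matrix times 2-vector: the whole of "applying a one-qubit gate"
// (real amplitudes only; actual quantum states use complex numbers).
fn mat_vec(m: [[f64; 2]; 2], v: [f64; 2]) -> [f64; 2] {
    [
        m[0][0] * v[0] + m[0][1] * v[1],
        m[1][0] * v[0] + m[1][1] * v[1],
    ]
}

fn main() {
    let x_gate = [[0.0, 1.0], [1.0, 0.0]]; // Pauli-X, i.e. quantum NOT
    let zero = [1.0, 0.0]; // the |0> state
    assert_eq!(mat_vec(x_gate, zero), [0.0, 1.0]); // X|0> = |1>
}
```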

Category theory!

It's cool to learn to recognize how simpler math concepts apply to computing, e.g., associativity is what allows for map/reduce.

Lots of areas are important. Every job I have worked at has had specific math specializations that were important. Reed-Solomon codes are used in storage and communication systems; they make use of abstract algebra (finite/Galois fields). Linear systems theory is essential for signal processing and control systems. Quaternions are important in aerospace and graphics.

About 40 years ago the company I was working for bought a big videotape recorder and procured a custom controller for it, to use it as a large random-access storage device. The controller used a large Reed-Solomon code to detect and correct fairly large multi-bit errors. It worked quite well, storing images for Boeing, Lockheed, and McDonnell Douglas aircraft maintenance manuals.

It's a bit strange that nobody mentioned Boolean algebra: https://en.wikipedia.org/wiki/Boolean_algebra

Could you perhaps be a little more specific? Sure, we use OR, AND, and even XOR to deal with conditionals that map to T and F, but this seems like kind of a surface level use of booleans. When do we invoke the algebraic properties of the structure when coding?

De Morgan's laws! I am sure people use them routinely to simplify complicated boolean expressions while writing code, or to mentally simplify complicated boolean expressions while reading code.
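
For instance (a trivial illustrative check): De Morgan's laws say `!(a || b) == !a && !b` and `!(a && b) == !a || !b`, which is exactly the rewrite you reach for when untangling a negated compound condition.

```rust
// Exhaustively verify both De Morgan's laws over all boolean inputs.
fn main() {
    for a in [false, true] {
        for b in [false, true] {
            assert_eq!(!(a || b), !a && !b);
            assert_eq!(!(a && b), !a || !b);
        }
    }
    // In practice: `if !(is_admin || is_owner)` can be rewritten as
    // `if !is_admin && !is_owner`, whichever reads more clearly.
}
```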

And then you can probably reduce it with a Karnaugh map. It's pretty neat.

Lambda calculus, since it's a computer science field of study that makes math, in my opinion, useful for most situations in our field.

Math isn't majorly needed, and you can learn what you need on the fly. For instance, if you want to get into game development, you need to understand some fairly complex math concepts, but nothing is stopping you from attacking the problem and either (a) working around it with your understanding of the computing devices you are using or (b) learning what you need on the fly.

If you want to practice in a field that operates on computers, don't learn another field. That's like saying "what kind of philosophy most correctly correlates to learning about horticulture?" Just start to learn horticulture and as you need philosophy it will be tied in.

>Math isn't majorly needed and you can learn what you need on the fly.

I really, really hope that whatever you are doing does not involve cryptography.

Cryptography isn't a computer science field; it is an extension of math. It's the exception to the rule, not the rule itself. Cryptography as a field of study existed long, long before computer science was a twinkle in anyone's eye. Modern-day cryptography only centers on computer science in the practice of implementation.

Getting crypto right is not only about having a strong encryption algorithm. I'd say that what makes crypto hard is that it requires a lot of knowledge of both math and computers, but it might be considered a not-only-CS field, as the math requirement is higher than usual for CS.

As I said, crypto is not CS but CS can be used to implement cryptography.

Crypto is no more CS than it is EE. You can implement cryptographic elements in hardware, but you don't see the math people setting up shop in the EE labs at a college, so I don't know why you see them setting up shop in computer labs.

Have you tried interviewing at game development companies? Many will test the hell out of your 3D math concepts and will reject you if you don't already know it well. They don't accept that you can just 'learn it on the fly'.

Honestly, I'm not sure I'd be able to handle a modern graphics programming job unless I studied and practiced constantly for at least six months straight. And I've worked on a bunch of 2D games before (and one 3D game).

What I said was "Math isn't majorly needed and you can learn what you need on the fly."

You said "Have you tried interviewing at game development companies"

These are two separate things. It's like saying "Most people can learn how to fill out their tax forms on the fly. You don't need anything special you can just read up what to do when it comes time to." and then you coming back and asking "have you ever applied for a job at H&R Block? You need to be well versed in the tax system to be a full time tax filer"

This is not at all what I was referring to.

The question at hand was "Which areas of math are practical to programming/algorithms and why?" Would you say that game development is the representation of the entire programming field? No it's a small portion of our profession. If you want to do that professionally round the clock then you need to prepare for what you will be doing in that field but that doesn't mean that if you want to get into programming or algorithmic thinking that you should learn game development. That's just one small element.

> Honestly, I'm not sure I'd be able to handle a modern graphics programming job unless I studied and practiced constantly for at least six months straight

What you have to ask yourself is: what is it that I'm practicing for in this job?

You didn't learn the maths first, say "you know, game development looks good, let me go there", and then sit down and start to learn how to program and think algorithmically. You sat down and said "I need to learn everything that directly applies to this field", which in most cases (for rendering) is vectors, frustum view angles, Euler and maybe polar coordinates, an understanding of how to use a quaternion, and what a matrix is. The remaining, and much more important, part of this equation is understanding how to interface with OGL/DX/Vulkan. This is what a graphics person does best, not math.

It boils down to what our field is: understanding, in great depth, the systems of computation and how to interface with them in a structured, maintainable, terse, and fast manner. Math is a cart, and programming pulls it to the goal post.

Basic algebra for the substitution model, and basic calculus for higher-order functions and the notion of a transformation in general.

Set theory is, probably, the most fundamental. Everything could be defined as a set or as a function.

Lambda calculus, obviously.

Some combinators. Basics of linear algebra.

No category theory and other bullshit is needed. Sets will do.

It is actually a very important skill to avoid wasting time on obscure academic bullshit disconnected from reality, be it philosophy, physics or math. Do not follow other people's hallucinations. Have your own.)

As a rule of thumb - you need just enough math to understand The Wizards Lectures and SICP.

Again, the substitution model, sets (for the notion of types and basic collections) and higher-order functions (the "domain and range" mantra) are enough.

For algorithms, the notions of being bounded by some function and of orders of growth.

Not to start a math war, but why is set theory okay but category theory bullshit?

While they're both useful, I think category theory encompasses a bit more than set theory and it's only a little more abstract. I also like the more functional approach that a lot of category theory requires you to take; if you're used to imperative programming it's a pretty different and useful way to think about things and it's great for adding to your critical thinking/general problem solving skills.

The difference between set theory and category theory is the same as between the philosophy of Spinoza and Hegel: the first is grounded in reality, while the second is abstract metaphysical bullshit. Developing some abstract vocabulary to describe spherical horses in a vacuum contributes nothing to programming, which is a discipline of describing and modeling some aspects and processes of reality.

Knowing where to stop in a heuristic-guided search is the most difficult part. In my opinion, one should stop after realizing that the subject has become too abstract and too academic and has ceased to be a tool of clarification.

Let's say that it has something to do with the pragmatism of Ayn Rand, the Quality of Robert Pirsig, and the general scientific method of removing bullshit, dogmas, and nonsense in order to let the truth stand.

Abstract algebra is certainly abstract, but it is far from "metaphysical bullshit". Abstract algebra is just the precise articulation of patterns that we see across different mathematical formalisms.

Now, the fact that it is so abstract does mean that your average programmer won't use it on a day-to-day basis. I'm certainly not going to argue that it is the most important math to understand. However, we are finding that there are uses for it. Not just in an "oh look, I can describe my code using abstract algebra, nifty" sort of way, but in a "knowing this was essential to finding the correct solution" sort of way.

For instance, the folks working on .NET's LINQ knew they were implementing monadic comprehensions; they just had the good taste to not use the word "monad".

Furthermore, it seems that having an understanding of abstract algebra is essential to making a distributed real-time analytics engine [0].

[0] https://www.infoq.com/presentations/abstract-algebra-analyti...

Oh, monads.. sooner or later they will be mentioned.) Actually it is a canonical example of abstraction for the sake of having an abstraction, which only increases confusion.

Monads make no sense in a non-lazy language.


> Oh, monads.. sooner or later they will be mentioned.

Yep, monads :).

> Actually it is a canonical example of abstraction for the sake of having an abstraction, which only increases confusion.

I intentionally brought them up with a concrete, real-world example where they were useful.

> Monads make no sense in a non-lazy language.

A more accurate statement might be "monads make no sense in a non-lazy evaluation context". There are plenty of cases where, working in an otherwise strict language, you will compose a series of computations over a lazy data source. For example: you want to process a potentially large series of rows coming from a database without blowing through RAM.

You can solve this problem on an ad hoc basis, but as far as I understand it, the folks working on LINQ wanted to develop a declarative, readable, SQL-like syntax for creating/composing such computations in a way that would work generically across data sources. The question is: what is the pattern of operations and constraints common to all of the disparate data sources that can be used in this way? The monadic functions plus the monad laws are a precise (if arcane) description of those operations and constraints.

Using monads is not the only way to structure computation in the context of lazy evaluation, but it is one that is relatively well understood, and it worked well with the goal of being "SQL-like", because SQL queries can be modeled as monadic computations.
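
A rough illustration of that pattern in Rust, whose iterator adapters happen to be lazy (toy data source, not LINQ itself): the pipeline below describes the computation declaratively and only pulls as many "rows" from the source as the consumer actually demands.

```rust
// Nothing here is evaluated until `collect` drives the pipeline,
// so the enormous source range is never materialized.
fn main() {
    let rows = 1..u64::MAX; // a "large data source"
    let result: Vec<u64> = rows
        .filter(|n| n % 3 == 0) // WHERE n % 3 = 0
        .map(|n| n * n)         // SELECT n * n
        .take(4)                // LIMIT 4
        .collect();
    assert_eq!(result, vec![9, 36, 81, 144]);
}
```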

That blog post is a mean-spirited rant and mostly lacking in substantial argument.

Haha! How is category theory "metaphysical"?

Your value judgments in this case actually seem dogmatic and nonsensical... which makes sense if it's derived from Ayn Rand.

Literally. An abstract category is an abstract abstraction. That's the realm of metaphysics.

Ayn Rand was a student of the classic Greek philosophy, nothing wrong with her.

I have a great deal of respect for Rand and her ideas, despite not agreeing with them.

However, her status as an intellectual heir of Aristotle is highly debatable. She's as much a disciple of Nietzsche as anyone.

Meanwhile, metaphysics is a subject that stems directly from Greek philosophy: specifically the subjects addressed in Aristotle's work Metaphysics (literally "the book that came after Physics"). Spinoza's theory of monism is itself a metaphysical theory in this sense.

Perhaps by metaphysics, you mean certain metaphysical theories, such as the Idealism of the 18th century?

edit: missing punctuation

Let's look at Wikipedia's definition.

> Metaphysics is a branch of philosophy concerned with explaining the fundamental nature of being and the world that encompasses it. Metaphysics attempts to answer two basic questions in the broadest possible terms:

> Ultimately, what is there? What is it like?

Does category theory have anything to do with that? No.

> > Ultimately, what is there? What is it like?

> Does category theory have anything to do with that? No.

I could argue that category theory is claiming to answer those questions for mathematics. (So is the axiomatization based on set theory.)

Maybe some mathematicians think like that, but many don't.

> It is actually very important skill to avoid wasting time in disconnected from reality obscure academic bullshit, be it philosophy, physics or math. Do not follow other people's hallucinations. Have your own.

This. So much this. I've lived in a few hacker houses and worked at hacker spaces in the last year. The most impressive programmers I met had picked up just one set of skills (mostly web dev or mobile dev), and while being good at just one thing is less glamorous, these people were actually finishing projects. It was really incredible to watch them take a project from idea to reality in a very short amount of time. The most frustrating people I met were the ones trying to learn everything--Machine Learning, IoT, VR/AR, mobile, web full stack--and getting nothing done on a daily basis.

The math you need to learn is completely dependent on what programming you want to do. What will be most impressive is your ability to focus on one or two areas and not get distracted into trying to learn all of it, and not feel envy towards those who are experts in domains that you don't know.

What's "The Wizards Lectures"?

The set of video lectures from 1986 by Abelson and Sussman.


Timeless classic. And a nostalgic dive back into the 80s: MIT Scheme on a monochrome monitor with a clicky keyboard.

Type theory. The Curry-Howard-Lambek correspondence shows that type theory, logic, and category theory are equivalent. Even set theory can be grounded in type theory, and homotopy type theory (with its univalence axiom) can serve as a foundation for all of mathematics.

There are lots of practical uses. For example, the billion dollar mistake [1] is painfully obvious when viewed through the lens of type theory.

Proof assistants such as Coq can be used to write verified (provably correct) software, and can be used by mathematicians for proving theorems.

[1] http://lambda-the-ultimate.org/node/3186
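As a small sketch of the "billion dollar mistake" point: an optional type makes the possible absence of a value explicit in the signature, instead of letting every reference be silently nullable. (The `find_user` function and its data are invented for illustration.)

```python
from typing import Optional

USERS = {1: "alice", 2: "bob"}

def find_user(user_id: int) -> Optional[str]:
    # The return type advertises that the user may be absent;
    # a checker like mypy then forces callers to handle None.
    return USERS.get(user_id)

name = find_user(3)
greeting = f"hello, {name}" if name is not None else "no such user"
print(greeting)  # no such user
```

In a language like Rust or Swift this is enforced at compile time; in Python it is only as strong as your type checker, but the type-theoretic idea is the same.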

It really depends on your domain. Knowing more math is never a bad thing. It provides you tools for you to model and solve problems. That said, two areas that I felt most broadened my horizons are probability and linear algebra. They are pretty ubiquitous topics that come up regularly in more sophisticated and exotic algorithms.

The book "Concrete Math" (https://en.wikipedia.org/wiki/Concrete_Mathematics) was written as a book-sized expansion of the Mathematical Preliminaries chapter of TAOCP ... so if you want to have a base of knowledge to help you analyze algorithms, it would appear this is a good place to start. I'm working my way through it now.

Someone asked this a few months ago and the book that was recommended was Concrete Mathematics by Knuth (https://www.amazon.com/Concrete-Mathematics-Foundation-Compu...), which I have since purchased and have enjoyed using to re-learn material from college. Highly recommended, and it should cover all you need to know. Very approachable text.

Formal languages[0]. Well, if you want to learn about compilers and programming languages.

[0] - https://en.wikipedia.org/wiki/Formal_language

Something I haven't seen anyone mention yet: the part of discrete math that covers proofs, particularly proofs by induction.

It is a huge help to be able to think in terms of recursive algorithms in many applications, and proofs by induction both give you the tools for thinking this way and the framework for demonstrating that recursive algorithms are correct.
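A tiny sketch of that connection: the correctness argument for this recursive function is literally a proof by induction on the length of the list.

```python
def total(xs):
    # Base case: the sum of the empty list is 0.
    if not xs:
        return 0
    # Inductive step: assume total(xs[1:]) correctly sums the tail
    # (the induction hypothesis); adding the head then gives the
    # sum of the whole list.
    return xs[0] + total(xs[1:])

print(total([1, 2, 3, 4]))  # 10
```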

Maybe this is off topic (but in the spirit of the question): high (graduate) level formal logic in a philosophy department.

Understanding (that there is) a relationship between computability theory and just basic thought is one of the (if not the) most important takeaways from my (disastrous) college career.

Benson Mates [1] is the canonical (succinct) textbook.

[1] https://www.amazon.com/Elementary-Logic-Benson-Mates/dp/0195...

> Which areas of math are practical to programming/algorithms and why?

The basics: + - x /

Generally speaking, any vector or array can be thought of as a matrix (not that it is important).

Some problems and domains involve many operations on [multi-dimensional] matrices, where some math knowledge is advised.

Then there is "graph", both the theory and the practice. It's usually classified as math but has little to do with the rest of mathematics. I'd rather consider it a separate domain with many practical and theoretical applications in other fields.

Forget about "crypto". None of it is needed for programming. You'll use some algorithms but you'll never have to write or invent them.

In a theoretical sense, "functional [programming]" is a branch of mathematics.


Overall, math is seriously overrated and has nothing to do with programming.

If you're into programming: maths is simply not required to be a programmer. Don't worry about it.

There are some domains (maps = graphs, 2D/3D = geometry, various stuff = matrices) where programming regularly requires some maths. The maths part is always a very specific subset that can usually be learnt on its own.

If you're into abstract mathematics: there is some theoretical stuff to work on that is very far from practical programming.

At my previous job ( https://www.rsa.com/ ), where I worked for 7 years, I often had to use various mathematical techniques to do my work as a software engineer.

Here are some concrete examples from my career.

* Combinatorics and probability theory - Entropy analysis of authentication schemes, algorithms for enumeration and analysis of related combinations and permutations, etc.

* Calculus and probability theory - Implementation of bloom filter for space-efficient indexing, tests and analysis of false-positive rates, etc.

* Statistics - Adaptive authentication schemes, performance measurements and predictions, network event correlation, etc.

* Discrete mathematics and algorithms - Tree traversals, graph search algorithms, etc. were useful in a variety of situations, e.g. data retrieval in tree-based distributed databases.

* Asymptotic analysis, Big O notation, etc. - Data deduplication in distributed databases.

* Modular arithmetic - I didn't really implement any cryptography algorithms; I only used them. But the understanding of modular arithmetic was essential to understanding some of the limitations of cryptographic algorithms based on modular arithmetic, e.g. why the message size cannot exceed the size of the modulus, as well as for software engineering tasks like decoding certificates found in network traffic.

* Formal language theory, DFAs, etc. - Parser generators for network event parsing, SQL engine development and related query optimizations, etc.

So these are 7 examples from 7 years. There may be more examples but I cannot remember them right now.
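The modular-arithmetic limitation mentioned above (message size cannot exceed the modulus) can be seen in a toy sketch. These are textbook-RSA parameters chosen purely for illustration; this is not real cryptographic code.

```python
p, q = 61, 53
n = p * q          # modulus: 3233
e, d = 17, 2753    # e*d ≡ 1 (mod (p-1)*(q-1))

m = 42
c1 = pow(m, e, n)
c2 = pow(m + n, e, n)   # a "message" at least as big as the modulus

# Both messages produce the same ciphertext, because m + n ≡ m (mod n);
# decryption therefore cannot distinguish them.
print(c1 == c2)          # True
print(pow(c1, d, n))     # 42 -- decryption recovers m, not m + n
```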

I have felt from my personal experience that although most of the work does not involve mathematics, once in a while an interesting problem comes up that can be solved efficiently with the knowledge of mathematics. It is not easy to predict when such a problem would come and it is not always easy to recognize whether the problem at hand can be solved or analyzed efficiently by known mathematical techniques. So it is good to have someone in the team or be the person who has a good breadth of knowledge in mathematics, so that when such problems do come up, they are addressed efficiently. Fortunately for me, I love mathematics and there were people around me who also loved mathematics, so we got a lot of interesting work done.

It is very likely that the most important new algorithms in the next few decades will involve probability theory.

Mathematical logic for reasoning about conditionals; principles of counting for understanding complexity.

Surprised nobody has mentioned any of Alan Turing's work; it is the mathematics that formed the foundation of computer science. See Turing completeness and universal Turing machines.

Graph theory is important for understanding data structures. I would also stress the mathematical concepts of idempotency and isomorphism: we use these concepts a LOT in our work. Admittedly, we're a database ( http://gun.js.org/ ), which is a very different line of work than building consumer apps.
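To sketch why idempotency matters for a distributed database: an idempotent merge can safely be re-applied when messages are duplicated or replayed. (This is an invented illustration, not gun.js's actual merge logic.)

```python
def merge(a, b):
    # Set union is idempotent: merge(x, x) == x,
    # so applying the same update twice changes nothing.
    return a | b

replica = {"k1", "k2"}
update = {"k2", "k3"}

once = merge(replica, update)
twice = merge(once, update)   # duplicate delivery is harmless
print(once == twice)          # True
```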

Other people already mentioned Big-O notation. Combinatorics, etc.

"Mining Massive Datasets" kind of things: http://mmds.org

Algorithms that give not exact, but pretty good, results on giant data sets.

Better question: what areas aren't?

Good question... Calculus? Of course it has a long history with computing, but I would say that probably less than 10% of programmers outside scientific fields use any calculus. There is calculus in machine learning, but that accounts for less than 10% of programmers, and it will probably become its own specialty separate from programming in the coming decades (to some extent it already is).

From what I know of topology, you use logic and algebra as tools to learn about it, just as you use logic and algebra in computing. But you probably wouldn't use topology to say anything about computing.

Number theory is relevant to crypto, but I would be hard pressed to say it is practical for programmers in general. You could probably have a pretty long and successful career in programming without knowing basic things like what a prime factorization is.

> Good question... Calculus? Of course it has a long history with computing, but I would say that probably less than 10% of programmers outside scientific fields use any calculus.

Calculus is pretty useful in computer graphics and vision, if you want to understand what is going on there are integrals left and right. Audio processing and finance too.

Calculus (yes, continuous calculus) is pretty much the most important area of math you need to know to program well. In my book, if you can't integrate/differentiate a formula, you shouldn't be allowed to program a computer, because that's how you end up with algorithms that take way too long to finish: the author couldn't be bothered to estimate how much computation the algorithm performs at large inputs.

Most programmers can forget the specifics of calculus, but the conceptual ideas of integration and differentiation are important. Many problems are better solved when you transform a series of changes into a whole (integration) and/or vice versa (differentiation).
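The discrete analogues of those two operations are easy to sketch: turning a series of changes into running totals ("integration") and running totals back into changes ("differentiation").

```python
from itertools import accumulate

deltas = [5, -2, 3, 1]              # e.g. a series of daily changes
totals = list(accumulate(deltas))   # "integrate" into running totals

# "Differentiate" the totals back into changes by taking adjacent
# differences (with an implicit starting value of 0).
changes = [b - a for a, b in zip([0] + totals, totals)]

print(totals)             # [5, 3, 6, 7]
print(changes == deltas)  # True: the two operations are inverses
```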

Topology is useful in robotics and control to define the set of achievable states in a system and their connectivity.

On further reflection, it does seem pretty hard to find a subfield of math which isn't relevant to SOME subfield of computing. I was going to say non-Euclidean geometry, but that's very relevant if you're doing mapping software.

But that's a different thing than saying that a particular subfield of math is "practical to programmers".

I would say that programmers need the foundations of logic, algebra, and probability, and a bit of calculus (less than I was taught). And THEN they can learn topology, number theory, non-Euclidean geometry or advanced calculus if they need to, for their specific domain. You never know what you're going to end up doing 10 years down the road.

And of course that seems to be what happens in a good undergrad CS education, so apparently people better than us have thought about it :) Indeed college is supposed to give you the ability to learn how to learn, when new things come up, as they always do in programming. Quantum computing is basically a ton of linear algebra as far as I can tell.
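The "quantum computing is linear algebra" remark can be made concrete in miniature: a qubit state is a 2-vector and a gate is a 2x2 matrix applied by ordinary matrix-vector multiplication. (A toy sketch with plain lists, not a real quantum library.)

```python
import math

def apply(gate, state):
    # Ordinary matrix-vector multiplication.
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

h = 1 / math.sqrt(2)
hadamard = [[h, h], [h, -h]]

zero = [1.0, 0.0]                       # the |0> state
superposition = apply(hadamard, zero)   # equal amplitudes on |0> and |1>

# The Hadamard gate is its own inverse: applying it twice returns |0>.
back = apply(hadamard, superposition)
print([round(x, 6) for x in back])  # [1.0, 0.0]
```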

Programming is pretty broad, as others have observed. In part of my world, scene reconstruction for machine vision tends to drag in tensor calculus, but the average enterprise developer is safe with the basics. Good libraries also mean you can use the functions and processes you need, as opposed to getting recurrent nightmares about numerical stability when you have to code the realization of a complex process yourself.

Proof and problem solving (it's not always taught in its own course). These assumptions about this code lead to these conclusions about its validity. If you can't prove your code works (to a human level of proof, not a machine's), why do you think it won't break? The skill of writing a simple proof (breaking the problem down so that each logical component is clear) is the same skill as writing clean and testable code.

Category theory.

Operations research is 50/50 compsci and maths. Lots of interesting hard problems that require good maths and compsci skills.

We employ some very talented people working on these sorts of problems for rail: http://biarrirail.com

Linear algebra for computer graphics, computer games, machine learning, and engineering simulation.
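A minimal sketch of the bread-and-butter linear algebra of 2D graphics: rotating a point by multiplying it with a rotation matrix.

```python
import math

def rotate(point, angle):
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    # Multiply the rotation matrix [[c, -s], [s, c]] by (x, y).
    return (c * x - s * y, s * x + c * y)

x, y = rotate((1.0, 0.0), math.pi / 2)   # rotate 90 degrees
print(round(x, 6), round(y, 6))          # 0.0 1.0
```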

The CS departments in universities around the world have worked out good answers for this. Their thinking is summarized in the curricula they outline: which math courses they require as core subjects in the undergraduate CS degree program.

False. Decisions like this can be made at a higher level than the department. The mandatory courses in my programme were applied univariate calculus and an introduction to statistical modelling (with examples drawn from biology), imposed by the faculty of science on all science majors - all the while the CS professors were screaming for an introduction to logic and discrete maths to be added to the curriculum.

Well, yes; you have to look at the set difference between the CS math courses, and the common ones that all science students have to take.

That said, nobody in a STEM field should go without knowing at least univariate calculus. Without calculus, you can have only a poor intuition for situations involving rates of change, or little deltas being applied in one place resulting in other little deltas elsewhere. It's also necessary for stats, because you're dealing with, oh, integrals such as the area under sections of a probability density function.

Part of computer science is numerical analysis, too. Once upon a time, numerical analysis constituted the bulk of "computer science".

CS undergrads arguably need some exposure to numerical analysis, and in such a course, the knowledge from other math courses provides support.

The math courses in most CS programs are not particularly useful, except as a weed-out function, and even there, I would argue that it's not particularly useful, since the Calc 1/2/3/Diff EQ sequence is not all that beneficial for anything I've ever encountered, in school or my career.

Tensor math seems to be very important.

I wish I had learned linear algebra in college, because it is essential to machine learning.

To quote Terminator 2: "Erm... all of them, I think."
