Category Theory for Programmers (2014) (bartoszmilewski.com)
267 points by pkd 234 days ago | 63 comments

This article reminds me of a couple of things.

1. Wasn't Feynman the one who was promoting replacing classical mechanics with relativity for freshman physics? How has that turned out?

2. Fuzzy logic makes a similar argument. Bart Kosko down in San Diego, I believe, proved mathematically that fuzzy logic equates calculus and Nyquist equations. I'm probably not getting that quite right. But fuzzy logic was called the "cocaine of math", and yet the Japanese have done quite well using fuzzy math in software and manufacturing. The arguments put forth here about why every developer should learn category theory smack of the same arguments that have been made about fuzzy logic and why fuzzy math should be taught in elementary school. I personally like and use fuzzy logic for programming, but widespread adoption of fuzzy logic never took hold. From what I've read of category theory, fuzzy logic seems to have a better argument as a programming shift because fuzzy is a lot less abstract. Just start with a bunch of if statements that approximate the math.

Fuzzy logic is good for fuzzy things. Dealing with nature, in the widest sense of the term.

Category theory is good for making extremely watertight conceptual arguments for discrete properties being preserved under transformations: it's "general abstract nonsense", yes (a famous characterization of the subject) but it really does elucidate the underlying structures in common across a lot of different mathematical settings.

If there's something wrong with the computational flavour of category theory, it's probably that it lacks fuzziness and is too much part of the linguistic/logical picture of the world. Giuseppe Longo, a category theorist, has a paper about this: http://www.di.ens.fr/users/longo/files/PhilosophyAndCognitio...

Thanks for the paper reference!

It might be interesting to see what mathematicians-in-the-wild think about higher category theory, which is viewed in a somewhat similar way by people who don't work in certain fields.[0]


I suspect the reasons are broadly analogous. To paraphrase a quote of Atiyah's, the usefulness of theory is that once a field is complex enough, a little bit of it can serve to provide "organizing principles" (quoting Qiaochu Yuan from the MO post above) that help both learners (who can exploit similarities between different fields so that unfamiliar things feel less alien) and experts (for whom many things are made easier, or even possible, by working at higher levels of abstraction[1]).

Knowledge just keeps piling up, and abstraction is one way of increasing the "density" of information one takes in so that one doesn't have to spend one's whole life trying to learn enough to work -- or, in the researcher's case, getting to the frontiers of current knowledge.

[0]: Roughly, my understanding of it is that it's what happens when you don't require properties like associativity, but say that (ab)c should be "deformable" into a(bc) in some way. Complications arise, interesting stuff happens. The usual.

[1]: Akin to the birds of Freeman Dyson's "birds and frogs".


My manager dropped a bomb on me about 20 years ago, when I was a young engineer, and had me be a liaison with some of the people doing research on the Pi calculus.

I'm still somewhat shaken to this day by that very first meeting; it really didn't go that well.

I really don't grok the math behind it, so it's very nice to see this pop up, and Bartosz definitely knows his stuff. I'll give it another whirl. Hats off to him for trying to make it approachable.

Also Brian Beckman and others have often had interesting things to say in these areas.


These days I understand the formal math behind it even less than I used to, which is to say abysmally; however, it did teach me some things about how to think about composition that I think have subconsciously influenced me over the years.

Nice post.

If you have FP experience and are interested in practical applications of category theory, I encourage you to reach out: https://news.ycombinator.com/item?id=14025110 - we're hiring mathematicians and computer scientists.

Hi. I'm a junior in college studying Math + CS, with Haskell experience. Are you looking for interns?

Not this summer, but thanks for checking!

if you squint, it all looks like state machines!

State machines aren't even Turing complete. Even if they were, that doesn't mean we should be using them to program von Neumann machines. Building levels of abstraction just helps you get more work done. Otherwise you might as well go back to working in a tarpit (https://en.wikipedia.org/wiki/Turing_tarpit).

Understated comment of the decade!

What research is being done in functional programming/category theory in the academic world? This interests me quite a bit.

Categories of cubical sets/Cubical Type Theory http://dlicata.web.wesleyan.edu/pubs.html

Can anyone point to any concrete benefits they've gained from learning category theory? I've used Haskell and I see some value from that (mostly in getting more comfortable with functions and types, I suppose), but I'm not sure if I should spend any time on category theory.

I didn't learn CT in school, but I got comfortable with the functional programming version of it during a few years of writing Haskell-flavored Scala on a team.

To me, the main value in learning FP CT is becoming handy at recognizing more of the patterns underlying the code I've been writing and working on for ages.

Consider the GoF Design Patterns book -- if you haven't been programming very long, many of the patterns might look odd or needless, but the more code you work on, the more you start to recognize the patterns as applied in real life.

Category theory for programming is like that, plus with some added formalism that proves how it's safe/possible to compose these patterns.

There is an amusing video from Runar Bjarnason about the GoF book, the Interpreter pattern, and functional programming (the free monad): https://www.youtube.com/watch?v=hmX2s3pe_qk

I've applied the Monoid and Semigroup patterns often in Python and JS. Knowing about CT helps you leverage their properties (associativity, often commutativity, an explicit identity element for a monoidal operation) and write tests against those properties.

They're common patterns a lot of people use without knowing anything about CT. For example, if you fold/reduce over a collection with an initial value, and the collection may be empty, you're looking at a monoidal operation, with the initial value as the identity element. Like sum or product.

If you fold/reduce over a collection that must not be empty, and you need to use the first element as the initial value, you're looking at a semigroup operation. Like min or max. (Mean isn't associative on its own, though it becomes monoidal if you fold over (sum, count) pairs.)
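To make that concrete, here's a minimal Python sketch of the two fold shapes described above (the function names are mine, not from any library):

```python
from functools import reduce

def fold_monoid(op, identity, xs):
    # Monoid fold: works on empty collections because the
    # identity element seeds the reduction.
    return reduce(op, xs, identity)

def fold_semigroup(op, xs):
    # Semigroup fold: no identity element is available, so the
    # first element seeds the reduction and xs must be non-empty.
    xs = list(xs)
    if not xs:
        raise ValueError("semigroup fold needs a non-empty collection")
    return reduce(op, xs[1:], xs[0])

# Monoids: sum with identity 0, product with identity 1.
assert fold_monoid(lambda a, b: a + b, 0, []) == 0
assert fold_monoid(lambda a, b: a * b, 1, [2, 3, 4]) == 24

# Semigroup: min has no identity over unbounded ints.
assert fold_semigroup(min, [5, 2, 8]) == 2
```

The property-based tests mentioned above would then assert associativity of `op` and the identity laws over random inputs.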

Applying semigroups in JavaScript is a wholly different enterprise than studying adjunctions, pullbacks, and the Yoneda lemma. I know we programmers like to call using monoids "CT", smashing things together with some associativity (those things existed in abstract algebra long before CT), but I don't think they capture what CT is about or what the OP article is investigating.

So after looking into category theory for a bit, it seems like everything from CT that most people use (and that I had wanted to learn) is covered in abstract algebra: monoids/semigroups, algebraic properties. And in an easier presentation than CT.

What are some programming applicable Category Theory topics that aren't covered in abstract algebra?


Not to be pedantic, but if someone understands monoids well from abstract algebra and someone else says:

"By the way, in addition to all the existing monoids you know (Ints under '+', strings, lists...), functions under function composition also form a monoid. Here's an example."

Isn't that pretty much getting them to the same place? Do they really need a study of category theory?
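To illustrate the point: here's a small Python sketch of the endofunction monoid the comment describes (names are mine for illustration):

```python
# Endofunctions (functions A -> A for a fixed A) under composition
# form a monoid, with the identity function as the unit element.
def compose(f, g):
    return lambda x: f(g(x))

identity = lambda x: x

inc = lambda x: x + 1   # int -> int
dbl = lambda x: x * 2   # int -> int

# Associativity: (inc . dbl) . inc == inc . (dbl . inc), pointwise.
assert compose(compose(inc, dbl), inc)(3) == compose(inc, compose(dbl, inc))(3) == 9
# Identity laws: composing with identity changes nothing.
assert compose(inc, identity)(3) == compose(identity, inc)(3) == 4
```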

Function composition is a monoid, but it is not a monad, even though the words sound similar. So it's not really getting them to the same place, but I'm not going to offer any opinion on whether someone needs to study category theory.

Not quite function composition, but given a type a, (a ->) is a monad (the so called "reader monad"). We have

   return :: x -> (a -> x)
   return x = \a -> x
   (>>=) :: (a -> x) -> (x -> (a -> y)) -> (a -> y)
   m >>= f  = \a -> f (m a) a
(Incidentally, these are two of the Łukasiewicz axioms for propositional calculus.)

It is a functor with

   fmap :: (x -> y) -> (a -> x) -> (a -> y)
   fmap f m = \a -> f (m a)
In the case x = y, fmap takes the monoid of functions on x to the monoid of functions on (a -> x).

Venturing into more abstract territory, "a monad is a monoid in the category of endofunctors of a category C." (This has never helped me with understanding how to use monads in Haskell.) Basically, the choice of endofunctor is the type constructor for the monad, the unit map is `return` and the composition map is `join`.

> return :: x -> (a -> x)

> return x = \a -> x


> (>>=) :: (a -> x) -> (x -> (a -> y)) -> (a -> y)

> m >>= f = \a -> f (m a) a

> (Incidentally, these are two of the Łukasiewicz axioms for propositional calculus.)

Also the K and S in SKI combinator calculus https://en.wikipedia.org/wiki/SKI_combinator_calculus

The Wikipedia article mentions "The combinators K and S correspond to two well-known axioms". I wonder whether Schönfinkel and Curry had Łukasiewicz's axioms in mind when creating it? I'm finding it difficult to find the history of either the combinators or the axioms.

Isn't the identity monad isomorphic to normal functions? In that sense normal functions are a monad.

I checked that while I was writing my comment, and I don't believe there is such an isomorphism. For the identity monad, we have the following functions:

   return :: a -> a
   return = id

   join :: a -> a
   join = id
If this were isomorphic to the function composition monoid, there would be some way to interpret join as function composition, but I don't see it. Please show me if you know the isomorphism! (My thinking is a bit fuzzy on this, but it appears to me that the identity monad is actually isomorphic to a trivial monoid --- if you want, you can model the trivial monoid as the one-element set {0} under addition.)

Bind is

   (>>=) :: a -> (a -> b) -> b
   (>>=) = flip id
that is, function application. Still no function composer in sight, though.

At this point I admit I'm way beyond my depth.

This article seems to suggest a connection between the fish operator (<=<) and composition. Specifically, his section on Kleisli categories.


But I don't know the space well enough to know if that's answering your question.

It's true that the Kleisli category gives a composition law from a monad, but let's not lose sight: the point isn't whether there is a conceivable way to compose things, but whether a monad is just a monoid of functions under composition.

Monads are like a monoid at the type level plus a bunch of coherence rules. For every type a, there is a "composition" m (m a) -> m a and a "unit" a -> m a. (Compare with a monoid, where there is a composition m x m -> m and a unit {1} -> m.) Also, the composition must be natural in the sense that whenever there is a map f :: a -> b then you have a bunch of "commuting squares": the composition m (m a) -> m (m b) -> m b must equal m (m a) -> m a -> m b and the composition a -> m a -> m b must equal a -> b -> m b. (Some of these maps are fmap f or fmap (fmap f).)

The naturality thing is important and shows up quite a lot; category theory was invented to understand naturality. It's sort of higher higher-order functional programming.

Thanks this was helpful. And you're right. I was blurring the concept of composition and actual functional composition.

Clearly much more to learn.

Functions of type A -> A for some A form a monoid under composition, but functions of arbitrary type don't. The nifty phrase "a monad is just a monoid in the category of endofunctors" doesn't refer to the function composition monoid, but something much stranger, where the "composition" operation has type m (m a) -> m a, and the "identity" operation has type a -> m a. To understand how that can be seen as a monoid, note that the composition operation composes two effects on the same computation, and the identity operation adds an empty effect.
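A minimal sketch of that "monoid at the type level" shape, specialized to the list monad (in Python; `unit` and `join` are my names for the "identity" and "composition" operations described above):

```python
def unit(x):
    # return/pure: wrap a value with an "empty effect" (a singleton list)
    return [x]

def join(xss):
    # flatten one layer of nesting: m (m a) -> m a
    return [x for xs in xss for x in xs]

xs = [1, 2, 3]
# Identity laws, the analogue of 1*m == m == m*1:
assert join(unit(xs)) == xs
assert join([unit(x) for x in xs]) == xs
# Associativity: flattening the inner layer first equals
# flattening the outer layer first.
xsss = [[[1], [2]], [[3]]]
assert join(join(xsss)) == join([join(xss) for xss in xsss])
```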

> note that the composition operation composes two effects on the same computation, and the identity operation adds an empty effect.

Thanks - While I did have this vague idea, and it says it from the types, seeing it written out in this way was clarifying.

The way I see it, just as Object Oriented design has its patterns, category theory provides the design patterns of functional programming. It just so happens that the mathematicians got there first and gave everything Greek names.

I recall this guy's blog; he seems a bit boastful, but it seems like a decent response to this question.


Jeez. Whatever he's got to say, I couldn't see past the mountainous ego to get to it.

Most of his stories are more related to immutable data and proper programming than anything else.

It's the design patterns if you are trying to use a 100% pure language, which is a waste of time. Here's the thing: at some point being functionally pure stops paying off. At some point you end up paying more to fit your work into a pure model than getting any benefit from that model. This is the problem I have with CT. It seems cool and may make you feel smart, but it doesn't actually provide any benefit to writing maintainable code.

Why is maintainable code the goal and not expressiveness, or being able to write close to the domain?

Not that being maintainable isn't a worthy goal or that expressiveness and DSLs can't be maintainable, just wondering why it would be the primary goal.

Is CT only used when writing a piece of code that's supposed to be long-lived and have regular updates, or something?

Maintainable code is very important, unless you work for some company that just produces code then dumps it on a client. The key is to balance functional purity with understandability.

Let's take something like Clojure. I get immutable data, sane multithreading (I haven't had thread related problems in my code for years), etc. All without monads. Whenever I've started doing things like introducing state monads, or seen code involving applicative and "free" stuff, it basically becomes an opaque blob of functions. Zero ability to have the code self introspect or to leverage that pile of functions.

The better method is immutable data and data-first designs. Data is searchable, introspect-able, and transformable. Most CT stuff is all about composing functions via functions. Making your entire system opaque, at least from the point of view of the program itself.

Category theory is mostly useful if you're getting very deep into Haskell, or if—for some reason—you need to generalize the lambda calculus to handle strange new kinds of computations.

In particular, the lambda calculus is closely related to Cartesian closed categories, which are in turn related to topoi (which are used in many cases of mathematical logic). And from there it's a small leap to probability and generalizations of probability theory.

So basically, if you're a programming language designer with esoteric tastes, category theory contains some great abstractions.

Here's an example showing how to represent quantum superposition as a monad: http://blog.sigfpe.com/2007/02/monads-for-vector-spaces-prob... If this kind of thing makes you drool, go learn some category theory. :-)

If you're on the fence about it, and are looking for things you can "drop into" your existing programming experience, I don't think you should spend any significant time on it, over just studying idioms of a programming language directly.

I think if folks are interested, they should investigate CT for its own sake (it is interesting mathematically) and not for some expected payoff.

I will say that though I only have only a basic understanding, I've noticed that those who go far with it seem to be more comfortable with how "things can be embedded in other things" and with "proving compositional properties", at least in certain carefully constructed situations.

I'd describe category theory as the study of connected things. It turns out that many things are connected, and often the connections are exactly what makes something both interesting and complicated. Getting better at those sorts of problems is a concretely useful skill. And it's not just functional programming, where connections are functions between things. You can have things like orders, where a single connection exists between x and y iff x <= y. There's a nice connection between database schemas and categories too. I've found it useful in formalizing difficult problems.

I can't find the source of the quote now (it was a Part III Introduction to Category Theory paper, I think) so I might have remembered it incorrectly, but Peter Freyd once said that the purpose of category theory was to prove that that which is trivial is trivial for trivial reasons. It certainly helped me as a mathematician, because it tells you what's actually hard and what is merely complicated. A while back, I wrote the final section ("Why is the product interesting?") of https://arbital.com/p/product_category_theory/?l=5zv, which attempts to explain one part of why CT is helpful.

Every time I've studied Category Theory I've later found a better, simpler, easier to understand, cleaner approach to doing the same thing, perhaps with a little mutation.

And then there's languages like Eff that completely remove the need for stuff like monads: http://math.andrej.com/eff/

So yeah, for me it's been more-or-less a waste of time reading up on this stuff.

You may want to talk to Andrej Bauer about that ;)

Specifically, the semantics of Eff involve free Monads.

"In particular, algebraic effects are combined seamlessly, whereas monad transformers are needed in the monadic style" implies otherwise... but to be frank, I don't care. As long as I can get these features without type pollution (as monads cause in Haskell), I'm happy.

My point is that category theory is useful for discovering nice abstractions such as algebraic effects and handlers or Monads for that matter. It's like all math, just a clever way to write/solve problems...

As for the semantics of algebraic effects and handlers, if you are not really interested I won't bore you with a long winded explanation. Suffice it to say that you can get the advantages of Eff in Haskell by using free Monads (the syntax of Eff is nicer though).

Monads and monad transformers are different things, of course. However, it's all really quite doable in Haskell. There are two main reasons it's _not_ done, though: a) effect style isn't so great when the effects don't commute, and b) it's slower.

If someone ever cracks the speed thing, I bet you'll see a lot more Haskell with Eff monads in it.

We can look at effects as a dsl built upon a monad. The state effect lets us get and put state, for instance. There are two extensible ways to represent this:

    data State a self = Get (a -> self) | Put a self

    class State a repr where
      get :: repr a
      put :: a -> repr ()
With the first implementation we write a function that interprets the ADT; writing different interpreters is easy. We parametrized over self so we can combine languages by combining their ADTs and give that as self. To tie the knot we use recursion and end up with the free monad. (Note Get carries a continuation receiving the state, so an interpreter has a way to deliver it.)

With the second version we depend on multiple type classes to combine languages and implement them for different types to get different interpreters. The big advantage of this approach is that we don't have to construct the ADTs before interpreting, which is where the speed difference comes from.

Tl;dr: if you want to go fast you have to skip constructing ADTs, and if you do that you end up with something like Haskell's mtl.

Probably a similar question, but headed in the other direction "Why should I learn assembly?" You probably shouldn't. But in the process of learning it you might gain a greater understanding and intuition for how and why things work.

It's not clear to me that category theory, functional programming, what have you, provide insight into why things work. In fact, it seems like they might cut the opposite way -- moving from vicissitudes to simple abstractions.

Category theory provides the tools (a language, a set of theorems, etc) to understand why certain things are necessarily true, rather than just "true in practice" or true because of a happy coincidence, similar (IMO) to how complexity analysis gives you the tools to know that merge sort necessarily has a certain baseline performance, rather than just happening to run through all your unit tests quickly.

Disclaimer: I'm currently learning CT (basics), so I'm not really qualified to give an answer yet.

My impression so far is that CT teaches you a set of orthogonal abstractions (call them patterns if you like) that will allow you to talk about ideas very precisely. To some degree, being able to talk about ideas makes it easier to have new ones.

In yesterday's lecture, we learned about products (think: tuple types) in CT, and of course their definition is more abstract than that. Now I know that other things can be products, too -- and next time I'll see one of them, I'll try thinking of that as a tuple.

Additionally, next time we'll flip the arrows in the diagram and talk about co-products. Then, the knowledge I have from yesterday, and the knowledge I have from a few lectures back (that you can flip arrows) will be composed to produce new knowledge.
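Products and their arrow-flipped duals can be sketched in code, too. A hedged Python rendering (objects = types, arrows = functions; `pair` and `case` are my names for the mediating arrows):

```python
# Product of A and B: a tuple, with the two projection arrows.
fst = lambda p: p[0]
snd = lambda p: p[1]

# Universal property: any pair of arrows f: C -> A, g: C -> B
# factors through the product via pair(f, g): C -> A x B.
def pair(f, g):
    return lambda c: (f(c), g(c))

# Coproduct ("flip the arrows"): a tagged union, with injections.
inl = lambda a: ("L", a)
inr = lambda b: ("R", b)

# Dual universal property: arrows f: A -> C, g: B -> C factor
# through the coproduct via case(f, g): A + B -> C (case analysis).
def case(f, g):
    return lambda t: f(t[1]) if t[0] == "L" else g(t[1])

h = pair(abs, str)
assert h(-3) == (3, "-3")
assert case(len, abs)(inl("hello")) == 5
assert case(len, abs)(inr(-2)) == 2
```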

Compared to Haskell, category theory is free of practical limitations, and you work with graphical models, not code.

Both of these are advantages, IMO.

Also, CT is quite beautiful in a way I can't describe.

Another thing I understood yesterday (somewhat) is how injective and surjective functions are related. This has been bugging me since high school -- they seemed "too different to be opposites", but they should've been opposites. CT has a very simple graphical representation of how the two are, in fact, opposites and my life is richer for it :-)

I learned it, and I wouldn't call it "useful" by any means. If you appreciate theory and elegant mathematical constructs, go ahead and learn it, but you're not going to become a better programmer or anything like that in practice.

CT is the design pattern of a kind of programming that - as P. Wadler puts it - is discovered rather than invented.

I think this quote (from the first chapter of the "book") sums it up nicely:

> Category theory is extreme in the sense that it actively discourages us from looking inside the objects. An object in category theory is an abstract nebulous entity. All you can ever know about it is how it relates to other objects — how it connects with them using arrows.

I think it's a great way to learn a different approach to solving math problems and, more importantly, computer science problems. When you start to think of objects not as what they are but what they can be, you get an extremely natural inclination towards immutability and, assuming your chosen language has a powerful type system, an awesome "free" mitigation against programming errors.

With category theory, you easily formulate your thoughts into function composition, and can more readily rely on the type system to validate your code. This is pretty much what you must do in Haskell, but it's useful in imperative languages as well. There's a reason functional paradigms are leaking into Java, C++, Python, etc: they're mighty useful.

> This is pretty much what you must do in Haskell

I think you're being a bit generous saying we are doing CT in programming languages. Fine to say inspired by CT, but the OP comment is much more true than what we do in, say, Haskell. Haskell encourages composition, but it does in fact look into (pattern match on) data all the time.

Category theory is more of a way of thinking than a bag of techniques for brute-forcing your way through specific problems. Thinking this way can sometimes bring more clarity. As with all abstract math, the concepts are widely applicable but don't say very much, as opposed to more applied fields of math, which say very specific things about very specific situations.

I think, in programming, practical experience (and intuition) is way more important than familiarity with any theory. I mean, sure, it is nice, I guess, to be able to think about functions as "arrows" in the "category of types" and about generics as "endofunctors" (in the same category), and all that; on the other hand, this may be very distracting when you have a real-world problem to solve, and, in the worst-case scenario, it might even lead to a failure - kind of like what happened to a centipede who was unable to move ahead as soon as it started wondering which of its legs it should move first.

[This][0] is an example of it but I suppose this is just the tip of the iceberg.

[0]: http://rea.tech/how-we-used-category-theory-to-solve-a-probl...
