Hacker News
Computational Category Theory in Python III: Monoids, Groups, and Preorders (philipzucker.com)
119 points by EvgeniyZh on May 3, 2020 | hide | past | favorite | 56 comments



Can someone explain to me what category theory actually contributes to mathematics beyond the contents of abstract algebra, graph theory, and set theory? Because every interesting result I've seen just looks like it comes from those other branches with some different terminology. Does it actually lead to any unique or synergistic results that the others fail to produce with their own axioms?


It's more helpful to view category theory as a language.

Most undergraduates learn mathematics in the language of sets. If they're doing algebra, then the underlying structures of their groups, rings, etc. are always sets. Same for topology and logic (e.g. classical model theory). An extreme example of this set-theoretic thinking is defining the real numbers in terms of Dedekind cuts.

Category theory, very slowly, changes this perspective. If you study this for years, you will come to realise there is more to mathematics than sets. And your way of thinking will shift. You will learn to ask different questions. For example, instead of defining things in terms of their underlying set (often breaking symmetries), you will ask 'does it have a universal property?' and so on. Note, there are still tons of reasons to use sets / ZFC, even when working with categories mostly. But I do not want to get into that and it's irrelevant for the point I'm trying to make.

Final note: the 'language of sets', i.e. the language of undergraduate mathematics, is very different from set theory. 'Set theory' is like the language talking about itself. The same goes for pure category theory.


Set theory was developed to make analysis “rigorous” by the standards of 19th century mathematicians, who were probing the edge cases of the more intuitive (or maybe handwavey) assumptions from the 17th–18th century. Here’s Rudin to explain: https://www.youtube.com/watch?v=hBcWRZMP6xs

In the 20th century it became trendy to (a) try to base everything on set theory, (b) write mathematics in a very dry and formal style, taking inspiration from the Bourbaki project.

My impression is that undergraduates are taught in the language of sets partly because it is serviceable for describing the subjects they are trying to learn (esp. analysis) but also more importantly because mid-20th century mathematicians who set up the curriculum we are still using wanted to thoroughly teach (indoctrinate?) undergraduates the trendy style of the time.

Personally I think it has been a mixed bag; the style is a big turn-off to many people, and ends up chasing people out of mathematics who could otherwise make valuable contributions. The people who remain seem to mostly like it okay though. YMMV.


I have some problems with this comment.

First, why the scare quotes around "rigorous"? Set theory literally did make the foundations of analysis more rigorous.

Second of all, we don't use set theory because it's "trendy" or because we want to "indoctrinate" undergraduates, we use it because it's the best known foundation for mathematics. If there was something less complicated or annoying that worked, we'd use that instead. But as far as we know, there isn't, and there are good reasons to believe we won't find something better.


Your observations strike a serious chord with me, because the first class I took where a professor wielded set theory like a ruler to the back of students' hands was exactly the point I realized that mathematics was not my calling.

But I've come to understand that I was not realizing that I didn't like mathematics or thought it was too hard, but instead that if I made that my life's work, I'd be working with many more people like that, rather than the people I wanted to - bright and creative, yet kind and humble.

I don't think set theory or any other set of tools is the problem behind potential academics getting turned off. It's the people, and the culture of glorified monkhood.


Note that "sets" and ZFC are not the same thing - ZFC is simply one set theory among many. In fact, structural set theories like ETCS ("Elementary Theory of the Category of Sets") or SEAR ("Sets, Elements And Relations") are even more cleanly suited to typical undergrad mathematics than ZF(C), while also being easier to characterize categorically.



Very good point. To add on this, even category theory is often taught in the language of sets and classes (“A category is a class of objects with a set Hom(X,Y) of morphisms for each pair of objects, etc.”)

It is possible to use categories as basic building blocks instead of sets but, in my anecdotal experience, this is not what the majority of graduate programs in Mathematics do.

It will be interesting to see whether this will change in the next 20 years.


They don't do it for good reason: set theory is basically a strictly better foundation for mathematics, and you have to ape all the set theoretic constructions when doing various things anyway (e.g. constructing the real numbers), so it doesn't buy you anything.

This issue has been litigated extensively, and in my view successfully, by Harvey Friedman on the Foundations of Mathematics mailing list, if you want to check its archives.


OK, but this doesn't answer the question. You intimate there are benefits but never say what they are.

The vast majority of mathematicians go their entire lives without using the word "category" in a paper. What are they missing out on?


Much of theoretical computer science uses categorical methods, for example, as semantics for type theory. In that field, such techniques are often more natural than set-theoretic ones due to issues of computability, decidability, etc. So at the very least, much of that research as originally written would be inaccessible to a non category theorist. Whether that field counts as mathematics and if it does whether it is worth missing out on depends on taste of course, but an example would be 'homotopy type theory' https://homotopytypetheory.org


HoTT is a strictly worse foundation for mathematics than ZFC, and in the end has to end up copying a bunch of the usual constructions anyway (like defining the real numbers). So this is not convincing.

One amusing problem is discussed here: https://mathoverflow.net/questions/289711/defining-sun-in-ho....

But suppose HoTT were equally good. What compelling reason is there for a working mathematician to learn it? We already know about set theory, and it meets all of our needs.


Honestly, I'm not sure; I'm a working computer scientist, not a working mathematician, and I use a proof assistant (Coq) all the time to have confidence that the proofs I write are correct (and more and more this is a requirement for publication in CS conferences). I want HoTT to succeed because it would turn some of the axioms I must assume in the current Coq proof assistant into theorems, with significant implications for 'proof engineering' at scale (e.g., verifying an OS kernel or compiler or database). When I read the MathOverflow post my emotional response is gladness that I can't accidentally conflate a topological space with an infinity groupoid. Many of the HoTT researchers are mathematicians (including the late Vladimir Voevodsky); presumably, the HoTT book (which I understand is available online) can give you a better answer than I can from the perspective of a working mathematician.


I can't comment on the CS stuff with any degree of expertise; the question was from the perspective of a mathematician wanting to know about mathematical applications. And not to pick on you, but again and again I ask category theory people to give me concrete examples, and again and again I get responses like yours, which amount to fairly vague gestures at theoretical purity. Contrast this with something like Galois theory. If an undergraduate asks me why Galois theory is important, I can point to about a half-dozen important problems (in terms of their place in the theory and in history) that are inaccessible without the concepts of Galois theory. For those problems, Galois theory is not just another language or a cool way of looking at things; they are (as far as I know) intractable without it. I have never seen a category-theoretic example as compelling outside of algebraic topology and adjacent fields.

I also think it's important to point out for anyone reading that one certainly doesn't need category theory to do proof verification. Some other options, and some gripes with the Coq community along the lines of the previous paragraph, are listed here: https://xenaproject.wordpress.com/2020/02/09/where-is-the-fa....

Also, if your foundational theory doesn't allow me to (easily) define a fundamental geometric object like SU(n), then it's just a non-starter. Again, I have not been able to find a reason why we ought to endure such pain to do simple things when we already have a perfectly good foundational theory. Re: the HoTT book, it is precisely because I've looked at the book that I have these questions.


From a programming perspective I think there are a couple contributions:

- Category theory has a number of very intuitive-looking graphical notations which nevertheless translate to very exact algebraic expressions. This is pretty dang nice. Category theory is a road to a very principled formulation of things that are already done in dataflow languages, tensor diagrams, UML, and things like that. These graphical languages are a reasonable user-facing interface.

- Empirically in mathematics, category theory has been found to be a flexible and "simple" language to describe and interconnect very disparate things. In programming terms that means that it is a plausible source of inspiration for consistent interfaces that can do the same.

- Point-free DSLs are really nice to work with as a library/compiler writer. They make optimizations way easier to recognize and implement correctly. Named references are phantom links attaching different points in a syntax tree and make it hard to manipulate them correctly. In slightly different (and vague) terms, category theory gives a methodology to turn abstract syntax graphs into trees by making all links very explicit.

I can't comment much on what category theory contributes to mathematics, but I think there are similar points. I think category theory can make precise analogies between extremely disparate fields like logic, set theory, and linear algebra. I think categorical intuitions lead to clean definitions when specialized to some particular field. Combinatory categorical logic does make some aspects of logic less spooky.


Can you give an example of some of these, because I'd love to learn more.


Many results in algebraic geometry about rational points on curves are impossible to prove without category theory. Here's a pop-sci kind of take on it from Quanta that I like. The "bridges" between areas of mathematics they talk about are functors between categories. Also, many proofs of the theorems you encounter in an undergrad-level abstract algebra course are much easier using categories. Also, just from the tautological standpoint, if you can prove something about multiple types of mathematical objects at once using category theory, then you don't need to prove it for each of those objects individually.

https://www.quantamagazine.org/amazing-math-bridge-extended-...


None of those results would be impossible to prove without category theory, and all are easily translatable to non-category theoretic language. The use of categories doesn't provide any substantive input, it's just a useful shorthand. (I might have to make an allowance for the use of derived categories here, but I think I could still pull it off by being sufficiently concrete.)

Further, none of the proofs from an undergrad abstract algebra are going to be "easier" with categories. (Show me one?) At the end of the day, you have to prove the same propositions. Categories just give a different and sometimes more convenient language.


I'd love to see a proof of Fermat's or Mazur/Tate without using any category theory. I suppose you could define lifts between categories by being explicit about the morphisms and functors but that's just category theory without the name.

As for the second part, it's not that one single proof or another is easier with category theory. But the entire presentation is very elegant. Thinking of Aluffi's Algebra: Chapter 0, where he uses universal properties extensively to simplify a lot of proofs. My personal favorite presentation of algebra, but everyone has an opinion here.


Well yeah, that's my point re: the number theory. It's a linguistic gloss that doesn't do much substantive work. Hatcher's book on Algebraic Topology, to give a random example, is able to cover lifts perfectly well without any category theoretic terminology. I'm all for good notation but the hype on certain HN stories seems really out of control.

Re: Aluffi, it's a fine book, but I'm not sure I'd teach it that way, because I'm not sure what introducing universal properties and so on really buys you at that level. I guess it make sense if you commit to getting to the end with the abstract perspective on homological algebra. But for a grad student just looking to pass his or her quals? The discussion in Hatcher is concrete, motivated, and involves zero category talk.


I love Hatcher's book so I can't disagree with you. (Although he does have a section on category theory in chapter 2, but never revisits it again.) But doing anything with sheaves and schemes without categories would feel a lot like doing linear algebra without matrix notation and just using lots of double sums. I also have a copy of J.P. May's A Concise Course in Algebraic Topology next to me and I also love the presentation there. (Although I would absolutely never want to use that book as a first course, haha. Hatcher is a classic for a reason.)

Being slightly facetious here as obviously Griffiths and Harris uses sheaves extensively and no category theory.


Yeah, I totally agree.

It's just I see a lot of category hype on here and I want to yell that it's mostly just linguistic! It's not some deep or scary thing. The analogy to matrix notation is really good.


Oh yea, the computer science side of things seems to rename anything related to abstract algebra as "category theory."


Every branch of mathematics formalizes and studies some underlying phenomena. For example, in calculus, it's rates of change: their division into instantaneous changes (derivatives) and their accumulation into changes over time or space (integrals). In topology, it's properties like continuity and connectedness.

Of course, all mathematics is abstraction, and all abstractions are leaky. Calculus just chooses not to study non-differentiable and non-integrable functions. Topology chooses not to study discontinuous transformations, and abstracts away the spatial metric.

In category theory, the property under study is compositionality: the property that given a path from A->B, and another from B->C, you have a path A->C. Of course, anyone who's ever written a computer program knows that this is not as guaranteed as it sounds. All abstractions are leaky, and if your "paths" are functions, in the programming sense, then the second one could throw an exception, leaving you high and dry with the intuitive transitive property being straight-up wrong. Likewise, if your "paths" are mathematical functions, composition can preserve continuity (f . g for continuous f and g is continuous), but if either one is discontinuous, the result will fail to be continuous.

So category theory abstracts away the "exceptions" and "discontinuities" by ruling them illegal: in a category of topological spaces and continuous functions, everything, by construction, preserves continuity. We thus have an entire field of mathematics devoted to asking and answering the question: "what sorts of properties are preserved under transitive composition in what sorts of settings?"
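The "leaky composition" point can be made concrete in a few lines of Python (a sketch of my own, not from the article): composition works fine on well-behaved inputs, but an exception in the inner function leaks straight through the composite.

```python
# Sketch: composition is only as reliable as its parts.
def compose(f, g):
    """Return the composite x -> f(g(x))."""
    return lambda x: f(g(x))

double = lambda x: 2 * x
recip = lambda x: 1 / x          # "throws" on x == 0

h = compose(double, recip)
print(h(4))                      # 0.5 -- composition works on good inputs
try:
    h(0)                         # but recip raises before double ever runs
except ZeroDivisionError:
    print("composition broke: the inner function leaked an exception")
```

A category of Python functions would have to rule such partial functions out by construction, exactly as the topological-space example rules out discontinuous maps.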



Thank you. I'm finding it fascinating.


Among other things it contributes a precise definition of the notion of products and sums of algebraic spaces, as well as generalizations and proofs of their basic properties.
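As a toy illustration of that definition (my own sketch, not from the thread): the product of two sets can be presented entirely by its projections and its universal pairing operation, without ever mentioning elements of the factors directly.

```python
# Sketch: the categorical product in Set, given by projections plus
# the universal property "any pair of maps into A and B factors
# uniquely through the product".
def product(A, B):
    P = [(a, b) for a in A for b in B]
    fst = lambda p: p[0]                      # projection P -> A
    snd = lambda p: p[1]                      # projection P -> B
    def pair(f, g):
        """The unique map X -> P with fst . pair(f,g) == f and snd . pair(f,g) == g."""
        return lambda x: (f(x), g(x))
    return P, fst, snd, pair

P, fst, snd, pair = product([1, 2], ["a", "b"])
h = pair(lambda x: x + 1, lambda x: "a")
print(h(1))  # (2, 'a')
```

The same shape of definition (projections plus pairing) also yields sums by dualizing, which is the generalization the comment alludes to.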


What is set theory useful for beyond the contents of arithmetic, algebra, calculus, etc...

Category theory is useful for studying mathematical structures in general — by its nature, its results are going to be fairly abstract and general, but that doesn't make it useless.


There's no doubting the important role of category theory in modern mathematics. Less clear to me is why programmers seem so keen on learning category theory. I studied a fair amount of abstract algebra in grad school and never once have I found it useful in my day-to-day life as a programmer. At best, I can say that if you've mastered category theory then you know how to think abstractly and modularize complicated concepts. But there are many less circuitous routes to obtaining those skills that don't involve understanding what a monad is. The moment I'm reading a programming tutorial that starts talking about monads and monoids is the moment I go read some other tutorial that actually teaches a useful skill.


You have basically the right idea, despite what some category theory boosters on this site will say. It's a very useful language for algebraic topology, algebraic geometry, and related fields, to the point where working in those subjects without its terminology would be exceedingly clumsy. Otherwise, its value is inversely proportional to your distance to those areas, more or less.


It makes certain kinds of representation theory easier to deal with. Rather than just group homomorphisms to automorphisms of a vector space (or ring homomorphisms to endomorphisms of a vector space) you can more generally study functors from a category to Vect, the category of vector spaces. This point of view has proven useful in the study of knot invariants like the Jones polynomial. In this case, there is a category of framed tangles and a functor to something called the representation category of quantum sl2, which is a fancy kind of category of vector spaces.


“Diagram chasing”, which is the standard method of reasoning in Category Theory, turns out to be very useful in other branches of mathematics, such as topology. (By the way, do not underestimate the power of abstract algebra, it’s widely used in the rest of mathematics, and its applicability to computing is more obvious.)


Category theory actually replaces set theory in the chronology of things.

So more or less the history of mathematics was something like:

Geometry -> Analysis -> Set Theory -> Category Theory

Number theorists should not be discounted; they are a separate split after analysis and graph theory is technically more like number theory than category theory. Later on they come back and join category theory again. This is my own opinion and I say it because of the methods that they like to use. You have bridges between them in the form of things like algebraic geometry or functional analysis. Analysis also includes calculus and I would say set theory includes topology.

Now, I think your question was a sort of general question about why one should do category theory, but I think the best way is to answer it in two ways. A third point is added as an afterthought.

I) Does it lead to synergistic results? Yes, for example Cayley's theorem is an instance of the Yoneda embedding. You can embed any group injectively into the permutation group of the underlying set. But Yoneda is a bunch of other things at the same time too. So by doing the correct generalisation, you solve all the specifications thereof. And then sometimes when people say category theory is foundational, they mean that: Within a special type of category called a topos, you can do all of mathematics. [1]
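The Cayley point can be shown concretely in a few lines of Python (my own sketch, not from the post): each group element g is sent to the permutation "multiply on the left by g" of the underlying set, and distinct elements give distinct permutations.

```python
# Sketch: Cayley's theorem for Z/3. Each element g becomes the
# permutation h -> g + h (mod 3) of the underlying set {0, 1, 2}.
def cayley_embedding(elements, op):
    """Map each g to the tuple of images of left-multiplication by g."""
    return {g: tuple(op(g, h) for h in elements) for g in elements}

Z3 = [0, 1, 2]
add3 = lambda a, b: (a + b) % 3
perms = cayley_embedding(Z3, add3)

print(perms[1])  # (1, 2, 0): adding 1 cyclically permutes {0, 1, 2}
# Injectivity: distinct group elements yield distinct permutations.
assert len(set(perms.values())) == len(Z3)
```

Yoneda generalizes this: instead of acting on one underlying set, an object is determined by how every object maps into it.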

II) Why should we use category theory in computer science? This is maybe the more relevant question for the audience and I would say we aren't using it. So yes, there are these monad like things in functional programming and functional programming has monoids (a monad is a higher dimensional monoid). But they don't use much of the mathematics that Grothendieck invented and also I haven't seen many posts on functional programming that mention Lawvere. My opinion is that we still haven't implemented the real insights of category theory and that functional programming is actually the extension into practice of maybe Church's lambda calculus, not category theory. In the same way that quantum mechanics has waited 50 years to start using category theory I think computer science doesn't use exactly category theory. I guess my point is that there is a difference between ideas percolating down and ideas being applied over. One area where I would like to see more formal methods being used is reactive programming. So for example, why can't your programming language be specified entirely in category theory? But I think the problem is not that category theory cannot be applied, it is that category theory is percolating to other fields; those fields are only glancing at the mathematics; and a lot of mathematicians can't program at all for that matter. I don't say this as a type of lament or apologetically; I think it's just a matter of time and the right commitment.

This is my own opinion and I would like to be disproven by top researchers that are experts in both category theory and functional programming. I also think it is that we are not always aware of which work is important until it has become important. And said differently: I think Excel probably has a very interesting reactive framework and I have never seen anything about that posted on HN. So the fact that no-one is mentioning it doesn't mean that it doesn't exist. (Could Excel's reactivity be undocumented?)

III) Does it end at category theory? This is not really a question, but more a footnote. I asked my supervisor this once and he answered (rather surprisingly quickly): Probably not, and probably it will look different. For example, a lot of category theorists have not even heard of something like the Langlands Project [2] and the Langlands group seem to also be looking for unifications of theories too. I also asked my supervisor about whether he knew the Langlands project and he just said "No," and he didn't seem to want to say anything else. I have read of lone mathematicians that have their own unique way of approaching mathematics. So it is more a case of: Take what you need, see where you can get. Personally, I would prefer to linger a while with category theory as you can get further with less than in something like traditional analysis (where you have perhaps a thousand rules for integrals).

[1] You can do all of mathematics in set theory and you can do set theory in category theory. A topos is like a generalised set theory where functions have taken preference as the route of abstraction. Topos theory is also the reason why I say that classical mathematics includes in it constructive mathematics, but that is another topic that seems to have a somewhat emotive history. The bottom line is that the logic you use to spend your money at the supermarket is not going to change when someone discovers something new in mathematics.

[2] https://en.wikipedia.org/wiki/Langlands_program


I don't think anyone's really answered my question but you've hit the closest to it I think, so I'll rephrase it in the hopes that you can clarify it for me: what do we _need_ category theory to do that the other branches of math cannot, either in the absolute sense, because its core axioms contain something that other branches of math don't, or in the practical sense of something it VASTLY simplifies in application? On the theoretical example, I could tell someone that using graph theory, I could prove things about visiting nodes and edges that I couldn't simply with tree theory. On the practical example, if someone came to me and said, "Why do I need abstract algebra?" I could point out to him that if I had a vehicle with a really complex set of tires, and if you could prove an isomorphism of permutations of the tires was a group over the set of permutations, you could create a rotation schedule that would move every tire to a different spot of the vehicle on each rotation, maximizing their useful life - I don't think there's a LESS abstract branch of math that could do this. What would be an instance in either case for category theory?
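For what it's worth, the tire-rotation example above can be sketched directly (my own toy code, not the commenter's): a single n-cycle, applied repeatedly, carries every tire through every slot, precisely because it generates a cyclic group of order n.

```python
# Toy illustration: a 5-cycle rotation schedule. Because the rotation
# permutation generates a cyclic group of order 5, every tire visits
# every slot exactly once over the five states.
def rotate(slots):
    """One rotation: the tire in slot i moves to slot (i + 1) % n."""
    return slots[-1:] + slots[:-1]

tires = ["A", "B", "C", "D", "E"]   # e.g. four wheels plus the spare
schedule = [tires]
for _ in range(4):
    schedule.append(rotate(schedule[-1]))

for state in schedule:
    print(state)
# Applying the rotation five times returns every tire to its start.
assert rotate(rotate(rotate(rotate(rotate(tires))))) == tires
```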


I think you can do a rotation schedule for the tires using other techniques that are more concrete than group theory.

If you think about the quadratic formula, the Babylonians used it without having a way to write down the equation.

But, having mentioned the caveats, I think the answer is that we don't have a nice example yet for category theory. I personally think that the key application of category theory will be in quantum mechanics and in reactive behaviour of programs and formal methods around that. So, for example, when you prove your program does something rather than testing it 1000 times. (You'll have to test it in any case due to things like implementation bugs.)


Oh b.t.w., I see I didn't mention the view-update problem in databases here. That would be a direct application of category theory. [1]

[1] M. Johnson and R. Rosebrugh, Fibrations and universal view updatability, Theoretical Computer Science 388.1, 2007, 109–129.


I suggest to the author that they rephrase the title to something like "...Theory in Python Part 3..." to make it clear that this is the third installment in a series, not specific to Python version 3.


Yes. I was thinking, “I’ve never seen Python 3 written as Python III before”.


Interesting point, that hadn't occurred to me. Luckily that won't be a problem for the other parts. I updated it to III which hopefully should make it a little more clear. The title is so long already!


Not a bad attempt at all, but it doesn't showcase how expressive things could be. What would be really cool (and arguably Pythonic) would be to use Python's operator overloading to show what these abstract entities would look like, in the same way that the builtin `set` is used.
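A minimal sketch of the overloading idea (all names are mine, not the article's): give morphisms a `__matmul__` so that `@` means composition, mirroring numpy's use of `@` for matrix multiplication.

```python
# Sketch: morphisms that compose with the @ operator, with a
# composability check on domains and codomains.
class Morphism:
    def __init__(self, dom, cod, fn):
        self.dom, self.cod, self.fn = dom, cod, fn

    def __matmul__(self, other):
        """self @ other means 'self after other', checked for composability."""
        if other.cod != self.dom:
            raise TypeError("codomain/domain mismatch")
        return Morphism(other.dom, self.cod, lambda x: self.fn(other.fn(x)))

    def __call__(self, x):
        return self.fn(x)

f = Morphism("int", "int", lambda x: x + 1)
g = Morphism("int", "int", lambda x: 2 * x)
print((f @ g)(3))  # 7: first double, then add one
```

The type check in `__matmul__` is where the category structure earns its keep: ill-typed composites fail loudly instead of producing nonsense.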


Hey, author here! Neato! I'm on HN!


I haven't read all the way through yet but this seems pretty great. Some rough edges, sure, but it's accessible, fairly comprehensive, and concise.

Have you seen "Multi-Level Languages are Generalized Arrows" by Adam Megacz? https://arxiv.org/abs/1007.2885


Thanks! I haven't seen this one before. I'll check it out!


Ok, the basic question: what are dom and cod, and why does this article spend the time explaining the basic concepts but never give a definition of the only obviously unknown concept?


Sorry, this is the third part of a series. I think I said what those were in the first part, but I'm not sure.

dom and cod are the domain and codomain of a morphism. They are functions that given a morphism will return the object at the head or tail of the morphism.
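In code, that description amounts to something like the following sketch (my own naming, not necessarily what the articles use):

```python
# A morphism carries its domain and codomain; dom and cod just read
# them off. (Illustrative sketch only.)
class Morphism:
    def __init__(self, dom, cod, name):
        self.dom, self.cod, self.name = dom, cod, name

def dom(m):
    return m.dom  # the object at the tail of the arrow

def cod(m):
    return m.cod  # the object at the head of the arrow

f = Morphism("A", "B", "f")
print(dom(f), cod(f))  # A B
```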

I don't think that these articles are a very good self-contained explanation of category theory. At the moment, I've been aiming for a supplement, written in concrete Python, to something more thorough.


From a programming perspective, I could recommend

Bartosz Milewski's series https://bartoszmilewski.com/2014/10/28/category-theory-for-p...

Rydeheard and Burstall - http://www.cs.man.ac.uk/~david/categories/book/book.pdf

Milewski, Fong, and Spivak's course - http://brendanfong.com/programmingcats.html

From a mathematical perspective:

Mac Lane, Awodey's book, Conceptual Mathematics by Lawvere and Schanuel.


More generally, I think all the naming could be improved. Why not write out "domain" and "codomain"? Why abbreviate "identity" as "idd", is that standard anywhere? Why use "mplus" and "mzero"? The latter is as in Haskell, I believe, but there they must do it that way because they can't overload the identifiers. Inside Python classes you don't have that restriction.


A good point. If my goal is educational (and it 2/3 is), these stupid short names are a huge oversight. I honestly didn't think about it much.

I think I started from known Haskell names, some conventions that I may have absorbed from catlab.jl and other places, and some unthoughtful accidents. I really really wanted id to be the identity, but I'm pretty sure it's a python keyword. I added a d and vowed to never speak of it again. Ever written a program about schools and had to use the variable Klass?

I don't feel too bad about mplus and mzero. It's still pretty clear that they are something like plus and zero, but the m gives you pause. And it should, since in one case mplus is in fact multiplication.

If this was rolled out into a library maybe I'd have both long and short names? I'd never want a library to have only long names for really common things. cod and dom don't actually give me a seconds pause. I like them having the same length too. Makes code prettier and easier to read. I also intend to replace some things with operators which would be even more inscrutable to a newcomer. @ has to be compose for example (to mimic numpy).
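The mplus/mzero point can be illustrated with a toy monoid interface (my own sketch, loosely modeled on the post's naming convention, not its actual code): the same generic names cover both addition and multiplication, which is exactly why the plain names "plus" and "zero" would mislead.

```python
# Sketch: one monoid interface, two instances. For the multiplicative
# instance, "mplus" is in fact multiplication and "mzero" is 1.
class AddMonoid:
    mzero = 0
    @staticmethod
    def mplus(a, b):
        return a + b

class MulMonoid:
    mzero = 1                        # the multiplicative identity
    @staticmethod
    def mplus(a, b):
        return a * b                 # here "mplus" is multiplication

def mconcat(monoid, xs):
    """Fold a list with the monoid's operation, starting from its identity."""
    total = monoid.mzero
    for x in xs:
        total = monoid.mplus(total, x)
    return total

print(mconcat(AddMonoid, [2, 3, 4]))  # 9
print(mconcat(MulMonoid, [2, 3, 4]))  # 24
```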


> I really really wanted id to be the identity, but I'm pretty sure it's a python keyword.

It's a built-in function, but you can overload it within a scope you control without problems:

    >>> id
    <built-in function id>
    >>> id = 7
    >>> id
    7
> And it should, since in one case mplus is in fact multiplication.

Good point! I take this part back.

As for having identifiers the same length, I know the urge, but usually find the superficial "elegance" is not worth it.


In CS the tradition is to use longer and more descriptive names, in maths instead short and generic ones. I guess that on a blackboard it’s more useful to maximize information density, in a text editor or an IDE instead clarity is more important.

Even different programming languages have different conventions (short identifiers for C and Fortran, long ones for Java and C#), which reflect the habits and preferences of their creators and users.


What’s the point of using the short names? You could call them domain and codomain and then everyone could Google them


If you don't have knowledge of category theory, even the first part is going to look like spaghetti.


(I realize that I am probably too late to get an answer.)

Besides Category Theory, what other topics are worth reading about to broaden one's CS background? I write a lot of code without formal CS training so I was hoping to expose myself to interesting, preferably "mind-blowing" topics without having to take a course or go through the whole SICP.


Author here. I don't recommend category theory. There are lower hanging more mind blowing fruit.

I'm having a hard time thinking of a blog post's worth of content that is mind-blowing. Mind blows tend to take more investment than that.

I'd recommend checking out an algorithms course https://ocw.mit.edu/courses/electrical-engineering-and-compu... This one is very good. Has free videos.

I'd recommend giving functional programming a try. I'm an FP fiend. I found SICP tough going though.

I'd recommend giving low level programming a try.

Learning prolog is pretty mind blowing http://www.learnprolognow.org/

Digital Circuitry is pretty mind blowing https://www.nand2tetris.org/

Making interpreters can be pretty mind blowing https://craftinginterpreters.com/contents.html


One day I might read something that will undo my conviction that category theory is the mathematical equivalent of literary critical theory. But alas, it is not this day.


This article doesn’t properly use self within python classes. On top of that the author couldn’t even apply a code formatter to the code so there’s a bunch of missing/extra spaces.

It also doesn’t even try to make the given code applicable to real problems. It seems as if someone writing an article of how to use functions.partial while never mentioning when those partials could be practically used.

All in all, 0 respect for the reader; seems like a badly typed retelling of a university course.



