Mathematicians who work in category theory have often seen many, many examples of algebraic structures before they get abstracted away to "a monoid object in the category of blah", and many mathematicians still find it an exercise to unpack these compact definitions back to something more intuitive, or more familiar. And even so, most of the time identifying your favourite object as some abstract category thing doesn't bring anything new or useful to the table for the study of that object. (It can help spot patterns between objects though).
I say this for any aspiring Haskell programmers: don't feel like understanding this statement will make you a better programmer. The best way to get better at using monads is to just use monads more, and play around with them. Tracking down all the definitions in the statement is a neat exercise, though.
Curious, what are these? Do they place you into a startup after you're done?
Surprised that they're teaching category theory if that's the case.
My one-line description of the business doesn't really do it justice -- it's a programming school with a focus on web & app development, and they have fairly good (dev-)community engagement. They also teach some business skills along the way, and students often end up at startups, or founding their own. Many of their instructors currently work at startups, too.
Next time, don't pass up the opportunity.
You can write highly reusable, generalized, clean functional code without knowing a darn thing about the theory behind it. Moreover, if you do learn these words you probably shouldn't use them around your peers, because they'll serve to socially alienate you and confuse them.
Category theory type stuff is a very useful thing to know when designing programs, because it is useful to know things like, "Well, if I give this data structure these properties, then it'll be well-behaved when I map over it."
But you do not need to learn all the technical jargon for the concepts in order to develop a command of them, any more than you need to memorize a natural language's formal grammar rules in order to speak it competently. (Since the grammar needs to be explained using natural language, one could even argue that it isn't really possible to do the one before the other, in a big picture sense.) Also similar to learning a natural language, learning all the technical, formal stuff isn't necessarily all that useful for helping you to achieve practical competence.
To stretch that analogy even further, getting too enamored of the category theory stuff is likely to earn you a certain measure of benevolent disdain from experienced functional programmers, in the same way that getting too enamored of grammar rules earns you a certain measure of disdain from people who write for a living. It's not that they don't value these things; it's that talking about them too much signals that you're more interested in reading the rulebook than playing the game.
Most programmers are familiar with "map" today, and "flatMap" is not much of a stretch. But once you understand these two things, you're 90% of the way to understanding monads and why they're useful.
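A minimal JavaScript sketch of those two operations (the values here are just for illustration):

```javascript
// map: apply a function inside the structure, keeping the shape.
const doubled = [1, 2, 3].map((x) => x * 2);          // [2, 4, 6]

// flatMap: apply a function that itself returns the structure,
// then flatten one level. That flattening step is the essence of monadic bind.
const pairs = [1, 2, 3].flatMap((x) => [x, x * 10]);  // [1, 10, 2, 20, 3, 30]
```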
But on the other hand, I look at some of the stuff Edward Kmett has produced, particularly his lens library which I use all the time, and its very obvious that the design has been informed by a deep knowledge of category theory.
I agree that the deeper category theory is more useful for research than for teaching, in particular when you want to import definitions and theorems from one context to another, or to decompose a proof into its abstract skeleton and concrete particularities.
As an example of how "A monad is a monoid... etc." can be useful for the abstraction (i.e. library) builder: if you import this definition of a monad into the bicategory of endoprofunctors, you get arrows.
My specific gripe is with expecting that a statement like "a Lie group is a group object in the category of manifolds" is going to lead to any new theorems in the theory of Lie groups. It's just not, and you are not going to become any better at Lie groups by understanding that statement. It's good to understand at some point, because it unifies the definition of group / Lie group / algebraic group, etc, but it will really not give you anything new to work with.
I think that's true at an elementary level, but on the other hand when one doesn't know what concepts they're looking for in a new area, category theory can be a machete cutting a swathe through the jungle. I think Grothendieck was right :)
The point is of course to motivate the definition of a Lie group. Whether such strong motivation counts as "a better understanding of Lie groups" is largely a matter of semantics that there's some room to disagree over. Same goes for the categorical definition of a ring that's been mentioned at the start of this thread, or indeed the definition that OP talks about.
(Notably, this includes application to a "graphical" treatment of fairly-elementary linear algebra, which a different HN user is mentioning even as I write this, as a part of math where CT cannot possibly be useful and will only ever confuse students!)
The first volume is roughly about set theory and the second volume is roughly about abstract algebra.
It's better than "eh just use it a lot and you'll figure it out."
The analogue of this approach in Haskell is probably something like sections 1 and 2 of "Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell" , which is quite an old paper (from around 2000 or something?). It does not even contain the word "monoid", but instead shows what an IO action is, and how functions like >> and >>= can be used to sequence and thread them. It also provides a lot of context as to why monadic IO exists. After reading this, a new programmer could start playing around with making new IO actions using these operators. After reading about monoid objects, a new programmer is no closer to being able to write a Haskell program.
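The sequencing idea in that paper can be sketched in JavaScript terms by modeling IO actions as thunks. This is a toy analogy, not real Haskell semantics, and all names here are made up:

```javascript
// Toy model: an "IO action" is a thunk you run for its effects.
const pure = (x) => () => x;                      // analogue of return
const bind = (io, f) => () => f(io())();          // analogue of >>=
const then = (io1, io2) => bind(io1, () => io2);  // analogue of >>

const log = [];
const putLine = (s) => () => { log.push(s); };    // stand-in for putStrLn

// Build a program by sequencing actions, then run it once at the end.
const prog = then(putLine("hello"), putLine("world"));
prog(); // effects happen in order: "hello", then "world"
```

The point, as in the paper, is that actions are first-class values you combine with sequencing operators, and nothing happens until the composed program is run.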
For education, there is a happy medium between providing full theoretical context on the one hand, and hand-waving over technicalities while learning by example on the other.
Very nice diagram though.
That is my impression of the top level comment's point, and I largely agree with it. If you understand what a group is, I can explain what a ring is to you in a very straightforward way that still shows you the connections between the two things. But setting up category theoretic language would take up a nontrivial portion of the course itself, and at the end you'd have a much less specific understanding of the relevant theorems on rings and groups. You'd be missing all the trees for the forest.
My perspective on this is basically informed by the following: I like category theory a lot. I've chatted with Baez about this on a few occasions and insofar as I need to use algebra, I like the category theoretic formalisms that come along with it. I think if you had students with a lot of mathematical maturity but no prior exposure to abstract algebra (except maybe a rigorous treatment of linear algebra), you could accomplish what you're proposing.
To your point, one of my favorite math textbooks is Aluffi's Chapter 0. You're probably familiar with it, but if not: it builds up abstract algebra in a rigorous and modern fashion alongside category theory. However, there are a few caveats here:
1. Aluffi explicitly states it's a textbook for upper level undergrads and preferably graduates. In my experience, when authors say this they actually mean it's appropriate as a year one graduate course. That's not the time to learn abstract algebra for the first time!
2. Aluffi does a great job of covering things like morphisms and categories, but the size of the book is massive. He doesn't have any nontrivial coverage of deeper category theoretic concepts like functors until much later in the book; it's probably far enough in that you couldn't reasonably cover it in a single semester course.
3. I really don't think most math majors would benefit from it. Those who are applied math majors will get questionable benefit from a slower, more comprehensive approach to algebra compared with a faster approach that lets them get to applications. Moreover, not all pure math majors have the mathematical maturity to approach category theory before grad school. Proofs in traditional abstract algebra are a lot less abstract than category theory, and it's easier to motivate them even if they aren't ultimately as elegant.
Perhaps the third course is a nice place to start introducing some category theory, since they will now have actual examples on which to draw, and learn about the unifying concepts of "morphism", "categorical product", and so on. They might even be ready for some interesting functors, such as the fundamental group of a topological space.
If you were to teach category theory as a very first course, what would you teach? What examples of categories do you have? What examples of functors?
But understanding bilinearity and its connection to tensor products is the key part of changing between those definitions, and also the most useful step to understand. The final description is pretty uninteresting in contrast with universal mapping properties of a tensor product.
Turns out, not so much, though. The ring theory bears surprisingly little resemblance to the group theory, to the point that these concepts seem to have nothing to do with each other...
Do you seriously mean to suggest there's no pedagogical value in using groups to teach rings?
Otherwise I agree with you, there's a lot of pedagogical value in combining coverage of algebraic structures.
So I wouldn't be so quick to dismiss the idea of associating these concepts. After all, category theory is all about making those kinds of associations.
What worked best for me was playing with code, reading the definitions in code, and then looking up some CT to get to the mathematical roots and see where they fit in the intuitive picture. (Then you can continue with arrows, monad transformers, etc.)
Since my primary purpose for learning monads (and other algebraic constructs) was and is their use in programming, I concentrate on "practical" intuitions, and periodically check if I can go down to the algebraic base of them in a particular case. If I still can, I suppose I know enough math to get by in the particular area. If not, I open a book and clarify my understanding, and maybe glean something new.
I'm going to go out on a limb, though, and speculate that the average person starting to use FP does not already know category theory.
It's all a bit paradoxical, and I'm not explaining it very well. (And I have no clue about the Haskell stuff.)
- Every commutative ring has a maximal ideal.
- The quotient of a commutative ring by a maximal ideal is a field.
- The quotient of a commutative ring by a prime ideal is an integral domain.
- The classification of finitely generated modules over a principal ideal domain.
There are some universal properties which define what it means to be a quotient, which can help you work with them. But for example, the only "categorical" definition of prime ideals that I know is based on that third theorem.
I think it is easy to forget just how much underlying theory you need to know before you can check that the categorical analogues of definitions are even the same as the original ones.
Hmm, someone managed to do it at Wikipedia:
"In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, matrices and functions."
They were saying that the definition you found, on wikipedia, is better than the definition of "a monoid object in the category of abelian groups" because the definition on wikipedia doesn't require knowing about tensor products etc.
You're proving their point. Wikipedia doesn't define it with that language for exactly the reasons the comment pointed out.
That's precisely what the parent commenter is getting at.
1. A monoid object in Ab, the category of abelian groups.
2. A set R which is equipped with addition and multiplication.
The first definition is category theoretic, and requires you to know what abelian groups are and what it means for something to be 1) a category, 2) a monoid, and 3) a monoid in the category of abelian groups.
The second definition is straightforward if you can follow a few axioms and know naive set theory. It is helpful but unnecessary to understand that a ring is an abelian group which also supports multiplication in order to get the second definition. But even if you know this about rings, you'll still need to understand all the heavy lifting behind what the category of abelian groups actually means.
I'm not saying it's not useful. But I am saying one is clearly more accessible than the other, with fewer prerequisites.
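For reference, the "few axioms" behind the second definition, written out (standard textbook material):

```latex
% A ring (R, +, \cdot) satisfies:
% 1. (R, +) is an abelian group:
(a + b) + c = a + (b + c), \quad a + b = b + a, \quad a + 0 = a, \quad a + (-a) = 0
% 2. (R, \cdot) is a monoid:
(a \cdot b) \cdot c = a \cdot (b \cdot c), \quad 1 \cdot a = a \cdot 1 = a
% 3. Multiplication distributes over addition:
a \cdot (b + c) = a \cdot b + a \cdot c, \qquad (a + b) \cdot c = a \cdot c + b \cdot c
```

Nothing here requires knowing what a category, a tensor product, or a monoid object is.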
Numbers are abstractions
Variables are higher level abstractions
You can have properties that you abstract and then see what you have. Banach spaces. Metric spaces. Three different ways to define compactness.
I liked to teach calculus by teaching the limit of a sequence first, and then defining the limit of a function by considering all sequences x_n that converge to x and checking whether the corresponding values y_n = f(x_n) converge. Building up definitions from simpler definitions, instead of the stupid handwavy definition they use in Calculus 1, which can't even handle limits of functions like cos(1/x), or answer questions about the limits of particular subsequences of x_n approaching 0.
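The sequential definition I mean, spelled out:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\text{for every sequence } (x_n) \text{ with } x_n \neq a \text{ and } x_n \to a,
\text{ we have } f(x_n) \to L.
```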
I started a course about mathematics this way, building up from the very basics.
This goes the other direction. Which is sometimes useful, but can be hard to follow unless every step is meticulously explained and its necessity is justified.
Same with more complex concepts in math or CS - it helps me understand and retain them better if I know where they fit into a bigger scheme. Category theory is particularly interesting in this regard, as it serves to connect different branches of math into a mesh of Categories and Functors describing their interrelationships.
I think this is where a lot of the Haskell explanations fall down, since they start with (definition of functor) -> example, example, example; then go into (definition of natural transformation) -> (no examples), and the problems multiply from there.
They know how to map an object into an array or an array into another array. Many even know how to flatMap promises that resolve to promises.
But they don't know that they follow general rules that make their behavior predictable.
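Those "general rules" are the monad laws, and for arrays with flatMap they can be checked directly. A sketch, with arbitrary example functions:

```javascript
const f = (x) => [x, x + 1];
const g = (x) => [x * 2];
const xs = [1, 2, 3];

// Associativity: flatMapping in two steps equals
// flatMapping the composed function in one step.
const lhs = xs.flatMap(f).flatMap(g);
const rhs = xs.flatMap((x) => f(x).flatMap(g));

// Left identity: wrapping a value and flatMapping f is just f.
const unit = (x) => [x];
const li = unit(5).flatMap(f);  // same result as f(5)
```

Knowing these laws hold is what makes refactoring a chain of flatMaps safe.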
Monads and functors are not really all that complicated per se, but because C++ templates and Java-style generics are not able to operate at their level of abstraction, people that are only used to thinking in these terms fail to map them into their "comfort zone" and usually give up. And instead of pointing out that monads are on a higher abstraction level, you get statements like "monads are like burritos" which only confuse people further.
The explanation that finally made sense to me was when someone didn't just explain things in the abstract and then link them to the next ridiculous non-programming thing, but instead laid out the basic operations and then pointed out that options, lists and asynchronous tasks are monads because they support those operations. I was basically in the exact situation you describe.
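For example, a toy Option type supports the same two operations, which is exactly what makes it a monad. This is a hypothetical sketch, not any particular library:

```javascript
// A minimal Option: either { some: value } or null.
const some = (v) => ({ some: v });
const none = null;

const map = (opt, f) => (opt === none ? none : some(f(opt.some)));
const flatMap = (opt, f) => (opt === none ? none : f(opt.some));

// Chaining short-circuits on none, much like a rejected Promise
// skips subsequent .then handlers.
const parse = (s) => (isNaN(Number(s)) ? none : some(Number(s)));
const result = flatMap(parse("21"), (n) => some(n * 2));   // { some: 42 }
const failed = flatMap(parse("oops"), (n) => some(n * 2)); // null
```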
Of course, that people can't agree on a single terminology to save their lives doesn't help either (return vs. lift, bind vs. flatMap vs. SelectMany). From the outside it's like one of those math papers where you spend a day figuring out which non-standard notation they used in order to understand the equations in it.
That's what "patterns" are for, in C++/Java and any low-level languages that don't provide the level of abstraction you're after. Functors and monads are patterns in these languages, just like subroutines are a pattern in assembly language but not in C, and "objects" or "closures" are patterns in C but not in C++/Java. Monoids are usually called the "Composite pattern", hence saying that monads are monoid objects is just stating that they share analogies with the Composite pattern at a higher level of abstraction. So what's the problem?
Keen observation. If you read through most explications for monads that float around the internet, though, you will find that design patterns are usually introduced to the reader with much more didactic care. Texts explaining, say, the Command Pattern will usually try to lead the "average" (or even junior) Developer to understanding by providing motivation ("let's write an Undo feature!"), introducing new concepts in the context of some simple, but usually tangible scenario ("let's say we have this editor…") and explain things in terms of the stuff the reader can be assumed to know already.
Even if you look for explanations of monads aimed at non-FP developers, usually they start with something like "For some monad `M`, …", followed by either mathematical expressions or some very abstract function signatures in Haskell (which a majority of readers has probably never seen before). At that point, you already have lost 90% of the audience.
I have yet to try to explain the concept to an unsuspecting colleague, but if I were to try, my approach would be to start with familiar things. For a C# dev, this could be LINQ, for example. In this case, you could even get people pretty far, since the inline query syntax is pretty much just a do notation that was bludgeoned with an SQL hammer. If you then point out how other things than IEnumerables and IQueryables (IObservables or Tasks, for example) work in similar ways and could be treated the same way, I suspect you should be able to nudge the average C# dev along far enough to give them a basic understanding of the concept and some of the upsides without instantly resorting to single letter type signatures.
I'll really have to try that sometime.
But some patterns are easier to understand than others.
Most are created through experience gathered while programming, and are rather practical solutions to frequent problems.
Mathematical concepts are patterns too, but they were often "developed" by people who had nothing to do with day-to-day programming challenges, and so they seem rather foreign to the average developer.
Someone said "Functors are objects with a map method, monads are objects with a flatMap method" and then explained several different types of objects that have these methods, and how they work in those specific cases.
After 2-4 cases I saw the principles and rules behind them.
my mnemonic for distinguishing "inductive" vs "deductive" is "PIG":
Particular -> Inductive -> General
See also "infer" vs "deduce".
The idea was to guide people through various important particulars of monads. It still needs some work, but I think the basic approach is promising.
For example, it would be weird if we had three statements A B C that wrote to console, and sequencing A with B and then sequencing the result with C resulted in a different program than sequencing B with C and then sequencing A with the result.
Or, for non-IO things: it would be weird if we had three levels of nested lists, and concatenating two times "from the outside" resulted in a different list than concatenating each inner list first, and then the outer list.
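In JavaScript terms, with Array.prototype.flat (a sketch):

```javascript
const nested = [[[1], [2]], [[3, 4]]];

// Flatten twice "from the outside"...
const outside = nested.flat().flat();

// ...or flatten each inner list first, then the outer one.
const inside = nested.map((xs) => xs.flat()).flat();

// Both give [1, 2, 3, 4]: that's the associativity the law guarantees.
```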
I wish some of the category-theory-driven Haskell tutorials would swap out their "prove the laws" exercises for implementation exercises. While it's great to be able to prove the laws yourself, part of the benefit of using an existing theory as an underlying model is that you get to take advantage of the laws (and the long history of proofs and developments it took to ascertain them) to create implementations or gain particular insights. It seems like a more useful exercise for programmers to implement some programs using these ideas rather than proving the laws yet again; that's a more expedient way to develop insight into how you would actually use these structures.
I have a PhD in mathematics in a field that uses quite a few category theoretic notions, so I've had extensive hardcore contact with the theory. After my PhD I converted to being a developer, and have by now quite a bit of experience doing that.
I have not once found category theory useful, beyond the absolute basics of the idea of a universal property, either in my work as a mathematician or as a developer, despite going on many expeditions to find some use for it.
I'm not saying category theory is not useful. I'm just saying you might do just fine without it.
Also, in an attempt to maybe say something useful, "Categories for the working mathematician" by MacLane is a good introduction if you know some maths already.
As a more practical point, trying to learn category theory without a good understanding of abstract algebra and linear algebra would be difficult. The material would seem very unmotivated and unintuitive. Math undergrads don’t typically encounter the subject except in an upper level topics course; more usually it’s a grad-level math course which requires a lot of exposure to abstract algebra.
With monoids, you can combine things; monads result from choosing a clever combining operation in certain contexts.
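A sketch of the "combine things" part in JavaScript (a monoid here meaning an associative combine plus an identity element):

```javascript
// Two everyday monoids: (strings, +, "") and (arrays, concat, []).
const word = ["mono", "id"].reduce((a, b) => a + b, "");           // "monoid"
const list = [[1], [2, 3], []].reduce((a, b) => a.concat(b), []);  // [1, 2, 3]

// For a monad, the "clever combining operation" collapses nested
// structure: joining a list of lists with concat is Array.flat().
const joined = [[1], [2, 3]].reduce((a, b) => a.concat(b), []);    // [1, 2, 3]
```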