He also covers it all in a few YouTube series: https://www.youtube.com/user/DrBartosz/playlists
Call me old-fashioned, but when it comes to learning math, I believe it pays to seek out people who really know how to teach the subject, rather than (what might uncharitably be described as) self-promoters churning out half-baked introductory materials. This goes for the "Catsters" video series as well. Everybody points to them as if they're brilliant, but compared to what standard? Sometimes it feels as if the people who blithely link to these resources are indifferent to whether the teaching is good or bad.
Even Mac Lane's book "Categories for the Working Mathematician" has been credibly criticised for being poorly written—but because it's so famous/prestigious it still gets loads of five-star reviews.
Personally, I'd recommend Categories and Computer Science by RFC Walters, or Steve Awodey's Category Theory for a more general overview.
(This might all sound peevish, but it feels like a real hazard of trying to learn difficult things from the web: the most conspicuous and superficially charismatic offerings/lecture notes/blogs don't always turn out to be reliable or well thought-out for the average person actually trying to learn the material.
Monads are a good example of this bad situation. A good way to learn about monads would be to have a few shortish, interactive conversations with an expert. Ten thousand cute online monad analogy tutorials are the opposite of this: they're actively counterproductive, because each new analogy just creates more vagueness and FUD.)
There are two roads into the subject:
* Through experience with it as a type system, e.g. in Haskell
* Through experience using it in pure mathematics
From my experience, trying to swap between the two paths at the start is really confusing. You need to stick to one path until it makes sense; then, perhaps later, you can come back to the other. The point where "it makes sense" is when you get past the toy examples and manage to put it to actual use. My hypothesis is that because the first actual usage examples are so different between the two paths, switching early only confuses you.
Obviously, this doesn't mean sticking to a single source. When you get stuck on one source, an alternative explanation is great. It just needs to stay within the same path.
There are two groups here:
* Math heads who made their way into the CS world: for loops were harder to understand in the beginning, but set algebra makes sense.
* Programmers by trade who have had to deal with the math side of things: they probably learned for loops at age 10, and seeing sigma and epsilon symbols drives them totally nuts.
I'm in the second group, and I can definitely say I live off content created by people who came into it the same way.
Shorter answer: there are Java people and there are Python people.
Hits home. Haskell is difficult but solid, mainstream languages are easy but based on arbitrary foundations.
If I may try to point out what's missing: it is the motivation which seems to get lost for lack of examples.
Here's a theory, called category theory, and many of us believe it can inform our designs, providing a perspective on compositionality and a higher kind of equational/algebraic reasoning.
Where are the ideas and examples that will actually help us inform our designs and achieve a higher degree of compositionality? Where is our chance to apply algebraic reasoning to the programs we write?
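To make the kind of algebraic reasoning asked for here concrete, a minimal sketch (example and names mine, not from the thread): for arrays, mapping one function and then another is the same as mapping their composition, so two traversals can be fused into one and the law guarantees the result is unchanged.

```typescript
// Functor-style composition law for arrays:
//   xs.map(f).map(g) === xs.map(x => g(f(x)))
// The left side walks the array twice, the right side once;
// the law licenses rewriting one into the other.
const addOne = (n: number): number => n + 1;
const double = (n: number): number => n * 2;

// Two passes over the array.
const twoPasses = (xs: number[]): number[] => xs.map(addOne).map(double);

// One fused pass, equal by the composition law.
const onePass = (xs: number[]): number[] => xs.map((n) => double(addOne(n)));
```

This is exactly the flavor of "equational reasoning about programs" that the comment says is missing from most introductions: a rewrite you can justify by a law rather than by re-testing.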
So I will provide a lot of C++ examples. Granted, you'll have to overcome some ugly syntax, the patterns might not stand out from the background of verbosity, and you might be forced to do some copy and paste in lieu of higher abstraction, but that's just the lot of a C++ programmer.
What should an experienced programmer learn? One suitable answer seems to be the connection of the lambda calculus with products and CCCs. That, though, would also need the motivation for functional programming: referential transparency. An alternative answer could be to point out the connection between topos theory and logic (or query languages).
It almost seems that by flipping to Haskell examples, Bartosz is making a leap that lets him ignore the motivation: anybody who is writing code in Haskell won't need to be convinced of referential transparency. There is simply a forest of "patterns", and category theory seems to be a systematic path through it. Maybe potential applications in physics provide a similar motivation for physicists.
... but if you don't bring the motivation yourself, you're not going to get it from reading the posts.
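For anyone who hasn't met referential transparency before, a tiny sketch of what it buys you (example and names mine): for a pure function, a name can always be replaced by its definition without changing the result, and that substitution is what licenses equational reasoning about code. In a mainstream language this only holds as long as you keep your functions pure.

```typescript
// A pure function: same input, same output, no side effects.
const len = (s: string): number => s.length;

// Bind the result to a name and use it twice...
const withSharing = (): number => {
  const x = len("category");
  return x + x;
};

// ...or inline the definition at every use site. Referential
// transparency guarantees these two programs are interchangeable.
const inlined = (): number => len("category") + len("category");
```

With an impure function (say, one that reads a counter or logs), the two versions could diverge, which is why this property has to be motivated before the categorical machinery can pay off.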
I also believe that to really understand and get an intuition for a concept, you need to see it applied, preferably in different contexts.
It really shows that the notes are not the battle-tested product of several years of teaching a course to undergrads, as Walters's and Awodey's books are.
(Admittedly, Walters' book is a bit eccentric by current standards of what should be on the syllabus, but it is well written.)
To be fair, I’ve come to CT through Haskell development, which Bartosz caters heavily towards. While it is absolutely not necessary for the daily aspect of my job, it really helps drive home the underlying mechanics of the language.
That all said, it has now brought me in a roundabout manner to wanting to learn more advanced math. But while I am grasping CT, I can't understand what people are talking about when they compare it to pure mathematical concepts (sets, groups, rings, etc.), which tends to be frustrating. I am attempting to find a path to understanding all of this without having to get another four-year degree.
Hopefully this book gets picked up by a publisher so it becomes easier to find in bookstores.
It's been very fun. Our discussions help forge links between the category world and our day-to-day programming. We're only 3 chapters in, so I can't say it's given me some huge insight into programming. But as we move from Go to Rust/Typescript, it gives us a useful way to think about the correctness of our programs and a useful shared vocabulary for talking about types in a language-agnostic way.
We don't learn things together. Instead, if I present something new, another person stays silent, looks it up at home on Google, and comes to work the next day pretending he's an expert on the topic. It's toxic.
I've found when I'm writing libraries that will be leveraged by other programmers, especially in a modern/high level language, insights I've derived from my understanding of category theory have been incredibly valuable.
If you're working very close to the metal (e.g., embedded), very close to non-technical users (UI work, edge-layer business logic), or in less featureful languages, then you'll get much less value.
Functional programming heavily impacted how I write imperative code. Would I have similar revelations with category theory – maybe write better abstractions?
Maybe, but it's not a given. (Haskell folks might have a different view due to their language being a closer match to the theory)
My sense is that Category Theory is one of those things that is useful for explaining things in retrospect rather than as a constructive tool. It exposes inconsistencies in your current model and helps clear up your thinking. John D. Cook wrote a piece on Applied Category Theory which makes the point that the theory itself is perhaps less useful than the discipline of thinking categorically.
I think Category Theory can guide the design of certain types of tools that deal with relationships. LINQ (in C#) for instance was guided by category theory. If you wanted to come up with a new kind of SQL, category theory could be helpful in designing abstractions that are orthogonal and clean.
But the design of these kinds of tools remains a small, specialized domain within programming. It's somewhat analogous in my mind to metaclasses in Python: classes are sufficient for the most part; only framework developers will likely ever need metaclasses (to generate classes).
In most day-to-day programming, it seems unlikely that category theory will have as much of an impact on practice as, say, simple principles derived from functional programming do. In fact, there may even be negative consequences from over-abstracting your code.
Category theory should also be viewed as a close analogy to the programming problem rather than an exact correspondence. These abstractions sometimes leak, especially in numerical code.
edit: More complex and specialized abstractions are useful on fewer problems than simple, general abstractions. In addition, they have more mental overhead. So you work harder for a less useful abstraction, and you make it harder for others to use and understand your work.
Yes. Very much so. Especially if you're programming functionally. It will explain why all this is the way it is. Why sum types and product types are what they are, the big picture on functors, monads, etc, etc.
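A minimal sketch of the "sum types, product types, functors" picture mentioned here, in TypeScript terms (the `Box` type and its names are mine, purely for illustration): a discriminated union is a sum type, a tuple or record is a product, and a functor's `map` touches only the payload while leaving the shape alone.

```typescript
// A toy sum type: a Box either holds a value or is empty.
// (Structurally the same shape as Haskell's Maybe.)
type Box<A> = { tag: "empty" } | { tag: "full"; value: A };

const full = <A>(value: A): Box<A> => ({ tag: "full", value });

// The functor-style map: apply f to the payload if present,
// leave the structure itself untouched.
const mapBox = <A, B>(f: (a: A) => B, box: Box<A>): Box<B> =>
  box.tag === "full" ? full(f(box.value)) : { tag: "empty" };
```

The category-theoretic payoff is the laws: mapping the identity function changes nothing, and mapping two functions in sequence equals mapping their composition. Those laws are what make "functor" a precise claim about `Box` rather than a vibe.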
I like Bartosz Milewski's introduction, where he states (paraphrasing) that he started programming back in the day with assembly; as programs got more difficult we needed higher-level abstractions, so he moved to procedural languages, and then to another higher abstraction, OO. But he realised that OO has a fundamental problem: objects don't compose. And so he found himself doing functional programming with Haskell, each time looking for better abstractions.
Category Theory is the ultimate abstraction and although you can't write code with it, you can step outside of the detail of complex solutions and think about the bigger picture and then use that bigger picture to help you build your detail. And that's why I think it's valuable.
I particularly like his journey too, as it exactly reflects my own.
Category Theory for Programmers - https://github.com/hmemcpy/milewski-ctfp-pdf
Heavy FP folks sometimes study it out of natural curiosity though.
There are a few libraries directly influenced by category theory, but they are rare.
It's worth reading this if you've got a math background at all. It's a nice summary of what Category Theory is and potential applications for it.
I don't have a particular aptitude for maths (as an engineer by training, maths makes sense to me as a tool for solving problems - not as a source of study in and of itself).
Almost all the introductions to CT I've read get into the maths very early. That's entirely understandable, but I personally find I need a more intuitive description of the concepts first, using non-mathematical exemplars. OP is, thus far, the introduction that best articulates things for me in those intuitive terms.
That's in no way a criticism of more formal/abstract treatments: I've great admiration for those who naturally create and assimilate things that way. I wish I was one.
It assumes a bit more knowledge coming in, but it's really what made the entire thing stick for me.
Category Theory for Computing Science (Revised 2013)
Erik Meijer explains it well: https://www.youtube.com/watch?v=JMP6gI5mLHc (check out FP101 on edX)
The Statebox guys are doing stuff with category theory, blockchain, and Petri nets: https://www.youtube.com/user/wires0 https://statebox.org/
Graphical linear algebra: https://graphicallinearalgebra.net/
Jules Hedges working on game theory: https://julesh.com/
(Once one has identity relationships, the structures themselves can in fact be disregarded, and one is left with the study simply of structure-preserving relationships.)
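The parenthetical point above can be sketched in code (the names are mine, for illustration only): keep nothing but identity arrows and a composition rule, forget the objects' internals, and ordinary functions become just one instance of the resulting structure.

```typescript
// An arrow from A to B, with the objects A and B treated as opaque.
type Arrow<A, B> = (a: A) => B;

// The identity arrow: composing with it on either side changes nothing.
const id = <A>(a: A): A => a;

// Composition of arrows. Together with id, and the requirements that
// composition be associative and id be a left/right unit, this is all
// a category asks for -- the structures themselves have dropped away.
const compose = <A, B, C>(g: Arrow<B, C>, f: Arrow<A, B>): Arrow<A, C> =>
  (a) => g(f(a));
```

Nothing here inspects what `A`, `B`, or `C` are; only the relationships between them remain, which is the point of the comment.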