Sounds to me, as an uninitiated programmer, like set theory had issues when wielded in the nether regions of modern mathematical theory, and category theory is the kludge that followed.
Maybe I should have done maths; yet when I was most fascinated by its theories under the influence of a three-letter traditional psychoactive, a former IBM employee and multi-decade pure mathematics researcher literally begged me not to, declaring it an unending path that wastes decades of people's lives. Ho hum.
Categories are relatively uninteresting without functors. Functors are more or less mappings or transformations between categories.
Category theory, by some accounts, originated in the foundations of algebraic topology. In that subject, one wishes to codify some of the aspects of a high-dimensional shape or geometric object in algebraic terms. The hope is that the algebraic image of the shape is easier to understand than the shape itself. The mapping of shapes into their algebraic "shadows" is a functor.
For instance, you can take a torus and look at the collection of all closed loops drawn on the surface of the torus. Now identify two loops as the same if one can be smoothly deformed into the other by moving it along the surface of the torus. Long story short, this gives the set of such loops a structure known as a group. There is a very extensive theory developed around groups, so you might surmise that you could partially classify shapes by looking at this associated group (called the fundamental group or first homotopy group).
This mapping from geometric objects to groups of loops drawn on them is a functor from the category of topological spaces to the category of groups. This is the beginning of homotopy theory.
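To make the torus example concrete (a standard result, not stated above): the group of loops described there, the fundamental group of the torus, works out to

```latex
\pi_1(T^2) \;\cong\; \mathbb{Z} \times \mathbb{Z}
```

i.e. up to smooth deformation a loop is determined by a pair of integers: how many times it winds around the tube, and how many times around the central hole.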
As for your comment on sets, there is the category Set which consists of all sets. That is, the objects of Set are sets and, given two sets X and Y, the morphisms between X and Y in Set are simply the functions f : X -> Y. An interesting technical point here (which is related to your quote) is that Set itself cannot be a set, as there is no set of all sets within Zermelo-Fraenkel set theory (the most common axiomatic formulation of set theory). Rather, there is a notion of class, and one says that the collection of all sets forms a class rather than a set.
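To see what "morphisms are just functions" buys you, here is a tiny sketch (my own illustration, not from the comment above) of composition and identity in Set, the two ingredients every category must supply:

```python
# In Set, objects are sets and morphisms are plain functions between them.
# Composition is ordinary function composition, and each set gets an
# identity function. The category laws are spot-checked on sample points.

def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

identity = lambda x: x

f = lambda n: n + 1          # f : int -> int
g = lambda n: str(n)         # g : int -> str

h = compose(g, f)            # h : int -> str
print(h(41))                 # "42"

# Identity laws:
assert compose(identity, f)(10) == f(10)
assert compose(f, identity)(10) == f(10)

# Associativity:
k = lambda s: s + "!"
assert compose(k, compose(g, f))(1) == compose(compose(k, g), f)(1)
```

Nothing here depends on what the elements are; that indifference to the contents of the objects is the categorical point of view in miniature.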
> there are only two universal properties
Yeah? So what are they???
Hm, Wikipedia (http://en.wikipedia.org/wiki/Universal_property) thinks there are a lot more than two:
"This article gives a general treatment of universal properties. To understand the concept, it is useful to study several examples first, of which there are many: all free objects, direct product and direct sum, free group, free lattice, Grothendieck group, product topology, Stone–Čech compactification, tensor product, inverse limit and direct limit, kernel and cokernel, pullback, pushout and equalizer."
That list is a list of examples of universal properties being used to define things. As such, it's a testament to how mindbogglingly convenient universal properties are.
He generalized an algorithm by using an abstract data type (a Monad, of course) instead of a specific number type. Now the algorithm can be applied, e.g., to probability distributions as well. Category theory tells you about bind and lift; the hard part, however, is generalizing the algorithm, because you now have fewer assumptions to work with.
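For a feel of what "bind and lift" mean for probability distributions, here is a minimal sketch (my own; the names `bind` and `lift` follow the comment above, everything else is invented for illustration). A distribution is just a dict from outcomes to probabilities:

```python
# A minimal probability-distribution monad.

def lift(x):
    """Also called return/unit: the point distribution concentrated at x."""
    return {x: 1.0}

def bind(dist, f):
    """Feed each outcome of `dist` into f (which returns a distribution)
    and mix the resulting distributions, weighted by probability."""
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

# A fair coin, flipped twice, recording both outcomes:
coin = {"H": 0.5, "T": 0.5}
two_flips = bind(coin, lambda a: bind(coin, lambda b: lift(a + b)))
print(two_flips)   # {'HH': 0.25, 'HT': 0.25, 'TH': 0.25, 'TT': 0.25}
```

Any algorithm written purely against `bind` and `lift` runs unchanged over this type, which is exactly the kind of generalization being described.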
For another example: greatest common divisor is a simple algorithm on numbers. Try generalizing it so it also works on polynomials instead of integers.
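A sketch of what that generalization can look like (my own, using Python duck typing and `fractions.Fraction`; the `Poly` class is invented for illustration). Euclid's algorithm only assumes a `%` operation and a zero test, so the same three lines work for integers and for polynomials over the rationals:

```python
from fractions import Fraction

def gcd(a, b):
    """Euclid's algorithm, assuming only `%` and truthiness (zero test)."""
    while b:
        a, b = b, a % b
    return a

class Poly:
    """Polynomials over the rationals; coeffs[i] is the x**i coefficient."""
    def __init__(self, coeffs):
        c = [Fraction(x) for x in coeffs]
        while c and c[-1] == 0:
            c.pop()                      # normalize: drop trailing zeros
        self.coeffs = c
    def __bool__(self):                  # the zero polynomial is falsy
        return bool(self.coeffs)
    def __eq__(self, other):
        return self.coeffs == other.coeffs
    def __mod__(self, other):
        """Remainder of polynomial long division."""
        rem = self.coeffs[:]
        d = other.coeffs
        while len(rem) >= len(d):
            factor = rem[-1] / d[-1]
            shift = len(rem) - len(d)
            for i, c in enumerate(d):
                rem[shift + i] -= factor * c
            while rem and rem[-1] == 0:  # leading term cancels exactly
                rem.pop()
        return Poly(rem)

print(gcd(12, 18))                       # 6

p = Poly([-1, 0, 1])                     # x^2 - 1
q = Poly([1, 2, 1])                      # (x + 1)^2
g = gcd(p, q)
assert g == Poly([-2, -2])               # -2x - 2: a rational multiple of x + 1
```

The algorithm didn't change; what changed is identifying the minimal assumptions it needs (here, a Euclidean domain), which is the hard part the comment above is pointing at.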
There are a lot of areas of math that have no obvious connection to abstract algebra or real analysis (for instance combinatorics, logic, number theory...) yet the vast majority of mathematicians will know what you mean when you ask whether they are an algebraist or an analyst, and will know how they should be classified. In fact if I see a mathematician eating corn on the cob, I can generally tell which they are without asking. (Analysts eat in spirals, algebraists in rows. My best guess for why is that analysts pay attention to the cue from how their teeth eat corn, while algebraists follow the visual cue from the lines on the corn.)
This classification is a matter of taste. Any professional mathematician is adept at both areas of mathematics, but will have a clear preference.
Category theory serves as an excellent litmus test. It seems to have a magnetic attraction for algebraists, and a similar repulsion for analysts. At its heart, category theory is an abstraction of common patterns of algebraic manipulation where the details of what you are manipulating become entirely irrelevant. (Indeed you can often replace one type of thing with an apparently unrelated type of thing. Such as a geometric problem about toruses with an algebraic question about pairs of integers.)
Speaking personally, I learned enough category theory to pass all of the courses I had to take, but words like "category", "monad", "cohomology" and "functor" are red flags warning me that I'm about to be uninterested in the rest of the discussion. But YMMV.
When I studied math I was definitely, in your taxonomy, an algebraist. But if I went back to it now I'm pretty sure I would be an analyst. I say that because my approach to software has evolved in an analogous way. Also I eat corn-on-the-cob in rows :)
I'd love to properly re-learn all the math I learned before. The way I did it at the time was mostly just symbol manipulation, and that way of thinking no longer interests me.
Not intentional. I was trying to be fair, but my interests are clearly on the analysis side, and that shows.
Even traditionally algebraic subjects can be often approached analytically. For example if you do group theory by starting with group actions, it gets an analytic feel (and in my opinion, everything becomes much clearer). The reverse is also true, that's how many of the algebraic subjects initially started: take a bunch of proofs from the analytic side, and see for which classes of objects they remain valid.
In programming you have something similar. Programmers that try to generalize their code as much as possible are algebraists. Dependency injection is an example. Programmers that say "YAGNI" are analysts. If you like interfaces and design patterns you're probably an algebraist. If you like bottom up programming you're probably an analyst. If you like Haskell, you're probably an algebraist. If you like Lisp you're probably an analyst.
After a few years of graduate school, with lots of colleagues working on the same material as me, it has gradually become much easier. The best text I've seen so far is Paolo Aluffi's "Algebra: Chapter 0". It's an algebra textbook meant for first-year graduate students in pure mathematics, but it was the major factor in clarifying category theory for me. Of course, the book also contains pretty much all topics in abstract algebra, from basic groups to abelian categories. So unless you're planning to be a research mathematician, chances are the book is not worth your time.
Part of my goal with this series is to demystify the subject as it was demystified for me, and to incorporate as many concrete applications to programming concepts as I can.
I hope you find some value in what I produce :)
(Sorry, somebody had to say it..)
One of the things I've noticed when programming and dealing with abstractions is how your problem gets translated into the tools of your language (what feels natural to that language). Some problems are easier in one language than another. Imagine your language being a shadow puppet, and the problem you'd like to solve being a light source. When you put your language in front of the problem, can you still see a shadow? With most programming languages today you will, but imagine that instead of your hands acting as a binary filter, they were some weird grayscale thing (like a monochrome LCD). Wherever the light shines through the most, you need to put a bit more work into solving the problem.
Low level languages seem to be much better at watching and reacting to state changes and making decisions on those states (because of the speed of the language, you could do 100s of checks on pins and combinations of pins and get good performance on something like an Arduino). In something like Python, those little checks become more costly. You might need to rethink your problem to get to your solution. You need to abstract from the trees, and get to the forest. The valid solution you had before because of your perspective has changed, and now you need a new approach.
I think a lot of that has to do with finding a pattern in your problem. If you can find a pattern, you can probably make it recursive, write a couple steps and maybe a helper function, and you're done. Problem solved.
Take the Fibonacci sequence in Haskell (from the first link below):
fib = 0:1:[a+b | (a,b) <- zip fib (tail fib)]
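That one-liner leans on Haskell's laziness: the list refers to itself and is only computed as far as you consume it. A rough Python analogue (my own sketch) gets the same effect from a generator, which is the kind of "translation between shadows" described above:

```python
from itertools import islice

def fib():
    """An infinite stream: each element is the sum of the two before it."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fib(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```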
Now take something like FizzBuzz - Python makes this extremely easy, whereas in Haskell it seems a little unnatural (at least for me, although the solutions are pretty similar).
fizzbuzzers = [(3, "fizz"), (5, "buzz"), (7, "baz")]
for x in range(1, 110):
    string = ''
    for num, word in fizzbuzzers:
        if not x % num:
            string += word
    print(string if string else x)
 - http://www.haskell.org/haskellwiki/Haskell_Tutorial_for_C_Pr...
 - http://www.haskell.org/haskellwiki/Haskell_Quiz/FizzBuzz/Sol...
I think that's the cleanest method.
You build the sequence with only the necessary amount of starting information (0:1 for fib, the fizzbuzzers list for fizzbuzz), and generate the rest as you need it. These are, I think, the most beautiful solutions to problems.
import Data.List (intercalate)

fizzbuzzers = [(3, "fizz"), (5, "buzz"), (7, "baz")]

fizzbuzz k =
  let str = concat [s | (n, s) <- fizzbuzzers, k `mod` n == 0]
  in  if str == "" then show k else str

main = putStrLn $ intercalate "\n" $ map fizzbuzz [1..110]