If you are expecting to apply something immediately to your work, then combinatorics and probability theory are way more relevant and practical for day-to-day programming work.
Abstract algebra has also been a fun subject for me, but hasn't really affected my programming as much as I'd anticipated. I suppose by the time I learned about abstract algebra, I'd already been well-versed in ADTs (both kinds - abstract and algebraic data types) and object-oriented design, and, apart from some new terminology, the underlying concepts didn't feel particularly new or profound to me.
If you're thinking about new programming paradigms, I think the theory behind the category-theoretic versions is worth knowing, but if most of your monads are Maybes or lists it'll be of little practical value.
I think even this is maybe too much: if all you use are monads and functors, then it's easy enough to learn monads and functors on their own, without any "real" category theory.
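For instance, here is a hedged sketch of the kind of standalone understanding I mean, using nothing beyond the Maybe monad from the Prelude (safeDiv and calc are illustrative names, not from any library):

```haskell
-- Chaining computations that can fail, with no category theory in sight:
-- Maybe's Monad instance short-circuits on the first Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b   -- a Nothing here aborts the whole computation
  safeDiv x c

main :: IO ()
main = do
  print (calc 100 5 2)  -- Just 10
  print (calc 100 0 2)  -- Nothing
```

Everything you need in order to use this productively is the Monad interface and its laws; no categorical vocabulary is required.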
An abstract framework like category theory can actually be harmful where there isn't much concrete detail to begin with. A personal example: I was overwhelmed by category theory jargon when I started learning Haskell a couple of years back. My confidence returned only when I realized that there was just one category in play, with types as the objects and functions as the arrows. The jargon was unnecessary. Today Haskellers discuss many interesting issues about the language and its implementation that do not fit into the category-theoretic framework at all.
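To make that realization concrete, here is a minimal sketch of that one category as it shows up in everyday Haskell (the example functions are made up):

```haskell
-- Objects are types, arrows are functions; composition is (.) and the
-- identity arrow is id. The category laws are just the familiar facts
--   f . id == f,  id . f == f,  (f . g) . h == f . (g . h)
inc :: Int -> Int
inc = (+ 1)

double :: Int -> Int
double = (* 2)

composed :: Int -> Int
composed = inc . double   -- an arrow built by plain function composition
```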
Category theory provides useful tools for identifying structure. It's not necessarily a good vocabulary to talk about these things. Forcing everything into a first-order mold is typically confusing. On the other hand, there has been a lot of work using category theory to solve real problems. This has resulted in some sophisticated tools and a very "scalable" way of looking at things (e.g. universal properties and adjunctions are everywhere).
Category theory is needlessly complicated. There are many redundant side conditions, and the definitions are ... let's be charitable and call them idiosyncratic. This is just more apparent when you apply category theory to programming languages. In semantics you already have a precise way of writing statements (type theory), and you know that everything you can write down is parametric. This eliminates almost all side conditions on most definitions in category theory. So yes, coming from Haskell, category theory probably looks extremely complicated and not worth the effort to learn.
However, some of the tools from category theory are as useful to computer science as differentiation is to engineering. There are plenty of questions to which you can calculate the answer using some elementary category theory. What is the dual concept to the reader monad transformer? What is the eta law for sum types? Can this data structure be a monad? If not, can I embed it into a monad (e.g. using codensity)? These are all simple questions, but if you don't know the general strategy for dealing with them it seems like I'm asking you to solve a puzzle instead of asking you to compute 7 * 4.
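As a hedged sketch of the codensity question, here is the shape of the construction (it mirrors the Codensity type from the kan-extensions library, but the definitions below are a self-contained illustration, not that library's code):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Codensity f is a monad for *any* type constructor f, which is why it
-- can be used to embed structures that are not themselves monads.
newtype Codensity f a =
  Codensity { runCodensity :: forall b. (a -> f b) -> f b }

instance Functor (Codensity f) where
  fmap g (Codensity m) = Codensity (\k -> m (k . g))

instance Applicative (Codensity f) where
  pure a = Codensity ($ a)
  Codensity mf <*> Codensity ma =
    Codensity (\k -> mf (\g -> ma (k . g)))

instance Monad (Codensity f) where
  Codensity m >>= g =
    Codensity (\k -> m (\a -> runCodensity (g a) k))

-- Note that dropping back down does require f to be a monad.
lowerCodensity :: Monad f => Codensity f a -> f a
lowerCodensity (Codensity m) = m return
```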
Much of math is this way. Find a subject that you can see from two perspectives and abuse the best parts of each!
That's interesting. Can you give an example of a side condition that parametricity would allow you to avoid?
f : forall a b. (a -> b) -> F a -> F b

In type theory, every polymorphic function of this shape, and every equivalence, is automatically natural, so the naturality side condition never needs to be stated.
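For example (a standard free-theorem instance, with t chosen arbitrarily for illustration): any function of type forall a. [a] -> [a] satisfies the naturality square map g . t = t . map g, with no side condition to check:

```haskell
-- Parametricity guarantees  map g . t == t . map g  for every t of this
-- type; reverse is just one arbitrary witness.
t :: [a] -> [a]
t = reverse

lhs, rhs :: [Int]
lhs = (map (+ 1) . t) [1, 2, 3]  -- [4, 3, 2]
rhs = (t . map (+ 1)) [1, 2, 3]  -- [4, 3, 2]
```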
There's a lot more and it's made worse by the fact that most of the side conditions are redundant even in normal category theory. For instance, you can specify an adjunction using two functions F : A -> B and G : B -> A, together with an equivalence phi : forall a b. Hom(F a, b) = Hom(a, G b) such that phi satisfies one additional equation. It follows from this that F, G are functors and that the equivalence is natural. Since a lot of definitions in category theory mention adjunctions you can usually simplify the definitions like this. For example the definition of a Cartesian closed category in most textbooks expands to something like 20 equations, but you can simplify it down to the usual few equations which specify a model of simply typed lambda calculus.
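One familiar instance of that style of definition (a sketch using nothing beyond the Prelude): the product/exponential adjunction Hom((a, b), c) = Hom(a, b -> c) is witnessed in Haskell by curry and uncurry, and the "one additional equation" amounts to these being mutually inverse:

```haskell
-- The hom-set equivalence phi for the adjunction  (-, b)  -|  (b -> -)
phi :: ((a, b) -> c) -> (a -> b -> c)
phi = curry

phiInv :: (a -> b -> c) -> ((a, b) -> c)
phiInv = uncurry

-- Round trips: phi . phiInv and phiInv . phi are identities (pointwise).
example :: Int
example = phi (phiInv (+)) 1 2   -- 3
```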
Has that been written down somewhere convenient? I'd like to have a look.
By the way, I found lots of your comments on your post history also very interesting!
One of the most applicable use cases I remember being constantly mentioned for category theory was the classification of some viruses.
Category theory makes sense when your business is large enough and you are looking to make everything streamlined. You could use some category theory to reorganize everything, from employees to customers to products, into better types.
But the idea that you absolutely need to understand this stuff to get going does not make sense to me.
All computer science is divided into Theory A (algorithms and complexity) and Theory B (logic and programming language design). Applications of category theory are part of Theory B. If you're a programmer who wants to have real world impact, you should study tons of Theory A and completely ignore Theory B.
That sounds inflammatory, but it is unfortunately 100% true. All of Theory B combined has less impact than a single hacker using Theory A to write Git or BitTorrent.
His doctoral advisor was Church, his work built on Gödel's, and his thesis was titled Systems of Logic Based on Ordinals.
Comments like these show the perils of treating mathematics like a cookbook. It's a very limiting mindset.
The wealthy US was doing much more in the way of computations on real hardware, and algorithms and complexity (Theory A) became a crucial focus for equally pragmatic reasons. You still design the new theoretical object on paper, of course...
I think I've mentioned this theory before, and I'd love some corroboration or counterexamples.
pandoc -f markdown -t latex -o foo.pdf foo.txt
[^tag]: Is the footnote denoted by the corresponding tag.
[^1]: Is the footnote '1' above.
Multi-line footnotes may be written by indenting subsequent paragraphs one or more spaces.
At other times, exporting to LaTeX and adding additional structure or elements may prove useful.
Example: a markup of Dugald Stewart's "Account of the Life and Times of Adam Smith, LLD" I did a few days ago:
0 - http://wiki.docbook.org/WhyDocBook
1 - https://github.com/MSmid/markdown2docbook
Also the index.
1.3 What is requested from the student

The only way to learn mathematics is by doing exercises. One does not get fit by merely looking at a treadmill or become a chef by merely reading cookbooks, and one does not learn math by watching someone else do it. There are about 300 exercises in this book. Some of them have solutions in the text, others have solutions that can only be accessed by professors teaching the class. A good student can also make up his own exercises or simply play around with the material. This book often uses databases as an entry to category theory. If one wishes to explore categorical database software, FQL (functorial query language) is a great place to start. It may also be useful in solving some of the exercises.
Added in edit: To reply to both comments (so far): you don't learn how to program by just reading and not doing. You can learn about programming, but you won't actually be able to program, and your knowledge will be superficial. If that's your objective, just to know about this topic, and others in math, then fine. If you want to be able to use your knowledge in any meaningful way, my comment stands. It's not enough just to read. You have to engage, and do the work.
And this is why I don't usually end up doing the exercises.
That might be fine for you, but it's an inherent limitation in reading about something, rather than actually engaging in it.
I've edited my original comment to make a comparison, and expand on why I say this.
But to take you up on this - some exercises have answers in the text. You could do just those. Other exercises don't have the answers, just like code that you write doesn't already exist. When you write the code you then have the ability to check that it does what it's supposed to do.
The idea of having exercises without answers is to foster the ability to recognize when your answers are right. Most of the time it's easy to know whether an answer is right; finding the answer is the challenge. If you choose not to learn this topic, fine, that's your choice. My comment just says: don't expect to understand the material at anything other than a superficial level if all you do is read about it.
As it says in the quoted section: One does not get fit by merely looking at a treadmill.
You don't need the solutions, the whole point is to write your own, which is much more work and which will invariably differ from the official ones -- sometimes slightly, sometimes drastically.
This, in my experience, is utterly false.
It does seem to be a pervasive bias in the community, though -- cf. the ubiquitous "[missing step] is left as an exercise to the reader", and so forth.
Can you tell me how you learned mathematics without doing any exercises?
> without doing _any_ exercises?
The only way to learn mathematics is by doing exercises.
This, in my experience, is utterly false.
The word "utterly" seems additionally to imply that this extra something completely outweighs the exercises, perhaps even to the point of no longer requiring exercises at all. And so my question.
So I believe my question to have been completely reasonable. Rephrasing gone35:
In my experience, it is utterly false that the only way to learn mathematics is by doing exercises.

gone35: I would be very interested to know what methods of learning mathematics you espouse other than, or perhaps in addition to, doing exercises.
Read carefully, go to seminars, ask people, and think through stuff.
I'm in lowly applied math/theoretical CS though, so it might be different in other, more abstract fields.
Perhaps you are simply talking past each other? If "think through stuff" involves coming up with your own questions about the material and answering them, this would fall under what many mathematicians would call "exercises". In fact, I've found the belief that this is often more valuable than in-book exercises to be pretty pervasive in mathematics.
The fallacy here is thinking that the exercises in a book are special and must be completed and/or that you must have completed one before you could complete another outside the book.
Or just a case of mixed up disjunctions... It happens.
Also I happen to be born with two X chromosomes... Not that it should matter tho.
Fortunately, doing the exercises in books isn't the only way to do that.
I disagree. Much of Category Theory isn't complicated -- it is just really abstract. So you need to have some concrete knowledge that you are going to apply it to before you start learning it or alternatively "mathematical maturity" (or as I like to call it "the ability to just accept abstract statements and go with it").
As a tool it clarifies and unifies relationships between mathematical concepts and domains like nothing else has.
If category theory is as cool as it sounds (and as simple as it doesn't) then somebody could do us a neat service by working out a game like http://worrydream.com/AlligatorEggs/ did for the lambda calculus. (Added: I guess it must've already been done, as formalizations of category theory in interactive proof assistants. Any tips on what's the most accessible?)
Maybe the current formulation of the diagrams is just right for people who use them to do a lot of heavy lifting. It would be interesting to hear some opinions on this.
Complicated means there are many strongly coupled parts that you have to understand before you can understand the whole. Computer programs tend to be complicated (one sign of a good computer program is that you can understand and work on pieces of it without having to understand the entire thing).
Abstract means that the idea itself isn't tied to a particular instance of what it is describing. Interfaces in Java are abstract. The idea of a group in mathematics is abstract. Abstraction is usually an attempt to make an idea less complicated so you can focus on its essence.
Another reason: maybe category theory isn't that useful? I'm just speculating here, as I have only basic knowledge of category theory; maybe someone else can enlighten us! While all the maths taught at the undergraduate level is extremely useful and pervasive in all areas of science, I'm still unsure about category theory. It's certainly nice and elegant, as it allows us to reframe different theories and definitions in the same context, but is it useful for an engineer or an applied mathematician?
Also you're quite right that category theory only really starts to make sense when you have a big stock of examples, like "group theory" and "poset theory" and "topology" and "set theory". Once you've got those examples, you can detect these common threads between them and call them "category theory".
Nullable (in the C# sense) does not form a monad, because it does not form a functor, because you cannot write fmap, because it does not nest: Nullable&lt;Nullable&lt;T&gt;&gt; is not a legal type, since Nullable's type parameter must be a non-nullable value type.
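By contrast, a small Haskell illustration (flattenMaybe is just join for Maybe, written out by hand): Maybe does nest, which is exactly what makes its join, and hence its Monad instance, definable:

```haskell
-- Maybe (Maybe a) is a genuinely different type from Maybe a, so the two
-- layers can be collapsed; C#'s Nullable cannot even express the nesting.
flattenMaybe :: Maybe (Maybe a) -> Maybe a
flattenMaybe = maybe Nothing id   -- join for Maybe

outerOnly :: Maybe Int
outerOnly = flattenMaybe (Just Nothing)    -- Nothing

both :: Maybe Int
both = flattenMaybe (Just (Just 3))        -- Just 3
```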