Kleisli Categories – Composition of Logs (bartoszmilewski.com)
57 points by signa11 on Dec 24, 2014 | 6 comments



An astute reader may notice that it would be easy to generalize this construction to any monoid, not just the string monoid. We would use mappend inside compose and mempty inside identity (in place of + and ""). There really is no reason to limit ourselves to logging just strings. A good library writer should be able to identify the bare minimum of constraints that make the library work — here the logging library’s only requirement is that the log have monoidal properties.

If you take anything away from this (excellently written) article, I think it should be the above paragraph. All programming philosophies espouse creating reusable interfaces, but at least for me, the first time it really clicked and I started writing truly composable code was when I started structuring my ideas in terms of categories and objects from abstract algebra.
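
For concreteness, the generalization described in that paragraph looks roughly like this in the article's Haskell (the Writer type is from the post; the Monoid-constrained signatures are my paraphrase, not code copied from the article):

    type Writer w a = (a, w)

    -- Kleisli composition: any Monoid can serve as the log,
    -- with mappend in place of (++).
    compose :: Monoid w => (a -> Writer w b) -> (b -> Writer w c) -> (a -> Writer w c)
    compose m1 m2 = \x ->
      let (y, w1) = m1 x
          (z, w2) = m2 y
      in  (z, w1 `mappend` w2)

    -- The identity morphism: mempty in place of "".
    identity :: Monoid w => a -> Writer w a
    identity x = (x, mempty)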

It's one thing to write an interface that can act on any object that responds to a "foo" method. It's a completely different level of flexibility to support any object that behaves like a ring, or a monoid, or whatever other crazy mathematical formalizations you can come up with to describe your code. Once I made that connection, I started making fewer assumptions about my code, and as a result it's been less prone to edge cases, and magnitudes easier to test.
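
To make the contrast concrete (a made-up example, in Haskell only because that's the article's language; the same shape works fine as a Ruby module): a function that asks only for Monoid behaviour works unchanged whether you're combining numbers, strings, sets, or anything else with an associative operation and an identity element.

    import Data.Monoid (Sum(..))

    -- Requires only Monoid behaviour, not any particular "foo" method.
    -- (This is essentially foldMap.)
    summarize :: Monoid m => (a -> m) -> [a] -> m
    summarize measure = foldr (\x acc -> measure x `mappend` acc) mempty

    -- The same function totals numbers...
    totalLength :: [String] -> Sum Int
    totalLength = summarize (Sum . length)

    -- ...or accumulates a textual log.
    combinedLog :: [String] -> String
    combinedLog = summarize (++ "\n")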

All of this with Ruby being my primary language, too. Fancy languages like Haskell help, but they're not a prerequisite for incorporating these sorts of highly abstract concepts into your work.


> crazy mathematical formalizations you can come up with to describe

The problem with informal speech is that it is ambiguous.

Also, using informal speech would really hurt you when generalizing these concepts abstractly. And after coming up with informal terms for all of these things, you'd probably realize your words aren't much different from the mathematicians' anyway.


Atop these constructions you want laws. The constructions alone only give you about 50% of the power. Laws help you pin down which of the multiple ways of interpreting a structure (which of multiple possible theories) you actually mean. This is a great reason to steal things from abstract algebra: mathematicians have already collected many great sets of laws.
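
To make "laws" concrete: for a monoid they are left identity, right identity, and associativity, and you can state them directly as properties. A sketch using QuickCheck (checked at one concrete instance, lists of Ints, since the laws themselves range over all instances):

    import Test.QuickCheck

    prop_leftIdentity :: [Int] -> Bool
    prop_leftIdentity x = mempty `mappend` x == x

    prop_rightIdentity :: [Int] -> Bool
    prop_rightIdentity x = x `mappend` mempty == x

    prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
    prop_assoc x y z =
      (x `mappend` y) `mappend` z == x `mappend` (y `mappend` z)

    main :: IO ()
    main = mapM_ quickCheck
      [ property prop_leftIdentity
      , property prop_rightIdentity
      , property prop_assoc
      ]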


Would love to see some examples of how you put these concepts into practice.

I'm familiar with the concept of duck typing and generic interfaces, but I'm not sure that's quite what you're getting at and I'm curious to see what an implementation of what you describe looks like.


My favourite example is probably CRDTs - commutative replicated data types (http://pagesperso-systeme.lip6.fr/Marc.Shapiro/papers/RR-695...)

Some researchers a lot smarter than me realized that when you're dealing with distributed systems, you're able to reliably achieve eventual consistency by structuring your data as a CRDT.

The idea is that if operations on your data types are both commutative and idempotent, nodes in a distributed system can arbitrarily leave and rejoin the network, possibly sending duplicate messages at any point, with eventual consistency still being maintained. Since it doesn't matter in what order operations on the data types are performed, and since performing an operation multiple times has no further effect on the state, you're able to guarantee that all nodes will reach the same state after applying all pending operations, regardless of the order in which they apply them.

Given the idea of a CRDT then, we know that in designing data types with eventual consistency as a goal, we can gain a lot of power by ensuring that they uphold two properties: idempotence, and commutativity. By examining our needs for that data type further, we're able to build up as complicated a structure as required. Do we need an append-only datastore? That's a commutative, idempotent monoid.

Now you're able to go and implement a commutative, idempotent monoid in as naive a fashion as possible. If you ever need to optimize it or swap in a more sophisticated structure, you're guaranteed to end up with a system that has the same semantics, as long as the new implementation still forms a commutative, idempotent monoid. As Tel mentioned in another comment, you also get to draw upon all of the laws regarding that structure. It's like implementing an interface in Java, but instead of just knowing that the resulting object supports an "append" method, you also know that it follows all of the other laws that mathematicians have spent the past 4 billion years proving.
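
As a minimal sketch of that kind of naive implementation (a grow-only set, usually called a G-Set in the CRDT literature; the names here are mine, not from the paper): merge is set union, which is associative, commutative, and idempotent, so it forms exactly the kind of monoid described above.

    import qualified Data.Set as Set

    newtype GSet a = GSet (Set.Set a) deriving (Eq, Show)

    -- Merge is union: associative, commutative, and idempotent, so
    -- replicas converge no matter how messages are ordered or duplicated.
    instance Ord a => Semigroup (GSet a) where
      GSet x <> GSet y = GSet (Set.union x y)

    instance Ord a => Monoid (GSet a) where
      mempty = GSet Set.empty   -- the empty replica

    -- The only local update: additions commute with merges.
    insert :: Ord a => a -> GSet a -> GSet a
    insert x (GSet s) = GSet (Set.insert x s)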

In practice, this line of thinking usually doesn't explicitly show up in my code, but it's definitely on my mind while I'm designing new systems on a whiteboard. My coworkers might not understand a thing about abstract algebra, but that's okay! My goal isn't to fill our codebase with mathematical jargon; it's to build small, composable interfaces that document their assumptions, and I've found that this way of thinking lets me do exactly that.

Category theory is very much like UML in my opinion, in that its use lies in describing systems rather than implementing them. I think that's why so many people are able to proclaim its virtues while struggling to point to a specific line of code and say: "This is why category theory is important!"


It's an interesting claim that a discussion of Kleisli categories isn't the place to get into monads, given that the two are essentially the same structure.

I do think Kleisli categories are a better way to explain the benefit, though.
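
For anyone wondering what "identical" means here: giving a type constructor Kleisli composition and identity is equivalent to giving it return and bind, so the two presentations carry the same information. A sketch of the translation in both directions, assuming the standard Control.Monad definitions:

    import Control.Monad ((>=>))

    -- bind recovered from Kleisli composition...
    bind :: Monad m => m a -> (a -> m b) -> m b
    bind ma k = (const ma >=> k) ()

    -- ...and Kleisli composition recovered from bind.
    kleisli :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
    kleisli f g = \x -> f x >>= g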





