Understanding Monoids using F# (gettingsharper.de)
62 points by numo16 on Mar 4, 2015 | 30 comments

This is still a bit over my head. I never had to do functional stuff and in college we only glossed over these concepts. Where should I start?

Here, try this...

A monoid is made up of:

- Some arbitrary type of thing. (Lists, sets, strings, numbers, whatever.) Let's call it 'A'.

- A value of type A. Let's call it the 'empty' value.

- A function that takes two A values as arguments, and returns a single one. Let's call it the 'combine' function.

Here are some examples of monoids:

- String concatenation: 'combine' is string concatenation, and 'empty' is the empty string.

- Integer plus: 'combine' is the usual + operation, and 'empty' is the value 0.

- Set union: 'combine' is the union of two sets, and 'empty' is the empty set.

You can come up with monoids for all sorts of things: maps, lists, functions, bloom filters, futures / promises, ...

There are, however, a couple rules:

- It doesn't matter how you group things when combining: `("hello " + "world") + "!"` has the same result as `"hello " + ("world" + "!")`; and `1 + (2 + 3)` has the same result as `(1 + 2) + 3`.

- You can add the 'empty' value to either end without changing anything. `"" + "hello"` is the same as `"hello" + ""` and plain old `"hello"`. Likewise, `3 + 0 == 0 + 3 == 3`.

That's it! To get an intuition, you might want to pick a couple more types from the list above and pick a good 'empty' / 'combine' function that satisfies the rules.
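The definition and rules above fit in a few lines of code. This is a minimal sketch (the `check_monoid_laws` helper is made up for illustration) that spot-checks the two laws for three of the monoids listed:

```python
def check_monoid_laws(empty, combine, samples):
    """Spot-check the two monoid laws on a few sample values."""
    for a in samples:
        # identity law: combining with 'empty' on either side changes nothing
        assert combine(empty, a) == a == combine(a, empty)
    for a in samples:
        for b in samples:
            for c in samples:
                # associativity law: grouping doesn't matter
                assert combine(combine(a, b), c) == combine(a, combine(b, c))
    return True

# String concatenation, integer addition, set union -- all pass
assert check_monoid_laws("", lambda a, b: a + b, ["hi", "", "!"])
assert check_monoid_laws(0, lambda a, b: a + b, [1, 2, 3])
assert check_monoid_laws(frozenset(), lambda a, b: a | b,
                         [frozenset({1}), frozenset({2, 3}), frozenset()])
```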

Of course, why you'd want to do this is another question. (And one I'd be happy to answer, if you're curious.)

This is probably the clearest explanation of what a monoid is that I've seen. Basically, a monoid gives you a reduction: the 'combine' function takes two values of type A and returns one, so you can fold any number of them down to a single result. The only caveats on top are that type A must have an empty value, that combining the empty value with x yields x, and that combinations are associative.

> why you'd want to do this is another question. (And one I'd be happy to answer, if you're curious.)

If you would, please.

Sure! Here are a couple of concrete applications:

- Any monoid operation is trivially parallelizable: take the dataset, split it into chunks, combine all the elements in each chunk together, then combine those results together as a final step.

- If I'm updating a row in my database with a monoid operation, I can always 'pre-aggregate' a batch of values together on the client side before taking that result and combining it with the stored value.

- If I store some statistics for every day of the year, I can calculate the monthly statistics very cheaply -- as long as those stats are a monoid.
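The first bullet can be sketched directly. This is a toy version of "split into chunks, combine each chunk, combine the results" using integer addition as the monoid; the chunk reductions are where you'd hand work to parallel workers (the `chunked_reduce` name is made up):

```python
from functools import reduce

def chunked_reduce(combine, empty, values, chunk_size):
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    # Step 1: reduce each chunk independently (this part is parallelizable)
    partials = [reduce(combine, chunk, empty) for chunk in chunks]
    # Step 2: combine the partial results as a final step
    return reduce(combine, partials, empty)

add = lambda a, b: a + b
assert chunked_reduce(add, 0, list(range(100)), 7) == sum(range(100))
# Associativity is exactly what guarantees this equals a plain
# left-to-right fold, no matter how the chunks are cut.
```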

The monoid abstraction seems weird at first, because it's so dang general, but it ends up hitting a bit of a sweet spot: the rules are just strong enough to be useful for a bunch of things, but simple enough to be applied to all kinds of different situations. You can think of it kind of like a hub-and-spokes thing -- this interface connects all kinds of different data types to all kinds of different cool situations, so you get a lot of functionality with a lot less typing and thinking.

Another answer as to "why" is to unask the question.

You don't "construct" monoids intentionally, you instead realize that types you already have are monoids under the covers.

Realizing this structure and accentuating it is a good way to discover and enforce good program structure.

The introduction of stream processing in Java 8 is a concrete example - it can auto-parallelize computations. It was not immediately clear to me why an identity value was needed in the reduce() function.

After reading up on monoids, it became clear. Note how the reduce() variant that does not take an identity value is allowed to return null!
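The Java snippet isn't reproduced above, but Python's `functools.reduce` faces the same design question: without an identity value, reducing an empty sequence has no answer. A small sketch:

```python
from functools import reduce

add = lambda a, b: a + b

# With the monoid's identity supplied, the empty case is well-defined:
assert reduce(add, [], 0) == 0
assert reduce(add, [1, 2, 3], 0) == 6

# Without it, Python has to raise (where Java's no-identity reduce()
# instead returns an Optional that may be empty):
try:
    reduce(add, [])
except TypeError:
    pass  # "reduce() of empty iterable with no initial value"
```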

A recent post was about "How to Build Your Distributed Database" (1). The word 'monoid' is not used (and 'commutative' is even misused for 'associative'), but the post shows well why monoids matter and how they can be used.

(1) https://news.ycombinator.com/item?id=9101971

Thank you for the great explanation; I read the article, and if only it had taken it out of the pure abstract and given the int example, it would have been 100% more relatable. I always get the feeling that authors fear that giving an actual example would hurt the article and that the pure abstract is to be preferred.

That is a beautifully clear explanation.

Can you do one for monads?

1) Functional Programming: What? Why? When? by Robert C. Martin @ https://vimeo.com/97514630

2) Erik Meijer Introduction to FP @ https://www.edx.org/course/introduction-functional-programmi...

3) Do you want to learn F# and Functional Programming? Well, you better start coding! @ https://github.com/jorgef/fsharpworkshop

I found it easiest to start with the practical cases, and work my way up to the generic concepts. http://m50d.github.io/2013/01/16/generic-contexts.html is a description of my experience.

This whole video is worth watching, but a simple monoid overview is here:


It's not a difficult concept.

Thanks for the explanation. Such things continue to be impenetrable to me, I guess I will keep trying.

Monads are legitimately a bit tricky. They do things with closures that you do not generally get exposed to in imperative languages, even ones with very solid closure support. There is an "aha!" moment for most people. Perhaps less so than some people's attempts to explain it would imply, but there is legitimately an "aha!"

Monoids are so easy that the biggest impediment to understanding them is believing that they really are that easy, possibly due to a "halo effect" from the similarity with the word "monad".

Do you have:

    1. An associative operation, that is, a <+> (b <+> c) = (a <+> b) <+> c
    2. A single "zero" such that 0 <+> a = a <+> 0 = a
Then you have a monoid.

That's it.

Seriously, that's it.

There's nothing more.

I've represented the operation by <+> because traditionally it involves a "+" since it is most like the plus operator, and I'm too lazy to go get the traditional +-with-circle operator.

If you "de-math" the linked page you'll find that's what it says, but it does make sort of heavy weather of it by comparison. Category theory tends to do that to people. I've on occasion been tempted to try to write a "practical category theory" guide myself, because I really feel like even the ones so labeled get caught up in the math and never provide any solid examples. Just a handful of trivial examples that leave a casual reader unable to figure out how to apply the idea to more complicated ones.

So, in the interests of me not being a hypocrite, let me reiterate the trivial examples, then provide a non-trivial example of something a fluent Haskell programmer would not even blink at implementing.

For lists, the <+> operator is "list concatenation", which is associative, and the 0-element is the empty list.

Integers are an interesting case, as both conventional addition (with the 0-element being literally 0) and conventional multiplication (with the 0-element being 1 here!) form a monoid.

So, how do you "use" monoids? Like any other interface, you create functions that generically take in "a monoid" and do something useful with its operation. Here's an example. Suppose you have two hash tables, and the values are known to be monoids. Now you can create a "combine" function that can losslessly combine two hash tables. If a key exists in either table alone, it will just be copied through, but if it exists in both tables, the resulting value is "left value <+> right value".

Now what happens? Well, if the hash table is full of "strings", you'll get the concatenation of both. If the hash table is full of "addition-monoid numbers", the exact same function will now return the summed total of both hash tables. If the hash table is full of lists of some value of type "X", you'll now get a hash table that has all of the values in the lists. All the same function.
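A minimal Python sketch of that "combine two hash tables" function (the name `merge_with` is made up). Note that it only assumes the values can be combined; it never inspects their type:

```python
def merge_with(combine, left, right):
    """Losslessly merge two dicts whose values form a monoid."""
    result = dict(left)
    for key, value in right.items():
        if key in result:
            result[key] = combine(result[key], value)  # present in both
        else:
            result[key] = value                        # copied through
    return result

concat = lambda a, b: a + b
# Strings concatenate...
assert merge_with(concat, {"x": "foo"}, {"x": "bar", "y": "!"}) == \
    {"x": "foobar", "y": "!"}
# ...and the exact same function sums numbers and appends lists.
assert merge_with(concat, {"a": 1, "b": 2}, {"b": 40}) == {"a": 1, "b": 42}
assert merge_with(concat, {"k": [1]}, {"k": [2, 3]}) == {"k": [1, 2, 3]}
```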

As someone else points out, monoids are also parallelizable. You're 80% of the way to a distributed map-reduce-esque counting algorithm with just hash tables and the monoidal combining operator I just described.

This is the value of "monoids"... not that you would use the interface when you have a concrete class in hand, but that you can use the interface to write generic functions where you just have to pass them some 0-value and some way of combining two values together, and they know how to do "something" based on that information, despite not understanding the type itself. The hash table implementation above does not know about "lists" or "numbers" or "strings"... it just knows that it has "things it can combine".

Because monoids are so simple, you have them all over the place and don't realize it. Here's a really complicated example from the world of practical software engineering: Permissions ought to be monoids. If you have one object that gives you permissions A, B, and C, and another that gives you A, D, and E, you ought to have an operation that will combine them into an object that gives permissions for A, B, C, D, and E. If you have the ability to grant "permission X, except not X's starting with 'a'", you have a permission system that will be insanely difficult to understand once people start actually using it. How do you combine "permission X" with "permission X but not starting with 'a'"? (Either of the two obvious answers is guaranteed to be both what one user intended and not what another user intended. This is not a good characteristic to put into your security system.) And of course you have a "zero-object", a permissions object that grants no permissions at all.

And... if you have this and you've explained to your system that it is a monoid, the aforementioned hash table will happily take permissions as its values and happily combine them together even though the person writing the hash table implementation certainly did not have your permissions objects in mind at the time!
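The well-behaved version of permissions-as-monoid is just set union, with the empty permission set as the zero-object. A sketch (permission names are made up):

```python
# Permissions as a monoid: 'combine' is set union, 'empty' grants nothing.
empty_perms = frozenset()
grant = lambda a, b: a | b  # the combined object grants both sets

ops = frozenset({"read", "write"})
admin = frozenset({"read", "delete"})

assert grant(ops, admin) == frozenset({"read", "write", "delete"})
assert grant(empty_perms, ops) == ops  # the zero-object changes nothing
# Union is associative, so the laws hold -- and a generic hash-table merge
# like the one described above would accept these values unmodified.
```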

Anyhow, that's really all there is. 0-value, associative combiner, wrapped in an interface. You can use that in a huge variety of ways, but the concept itself is really that simple.

Wow, I'm going to use this to explain monoids in the future. Great explanation!

I hate doing memes, but sometimes they're so spot on.

"Stop trying to make monoids happen. They're not going to happen."

I'm not familiar with F#, but the Haskell world has very much accepted the Monoid typeclass.

Even the Scala world has found them to be extremely useful. I know Twitter's Algebird library defines Monoid instances for a lot of structures, and I believe that SummingBird leverages the properties of Commutative Monoids somewhere (could be wrong on this, it's second hand knowledge I got from someone who manages the SummingBird team).

HLearn (Haskell Machine Learning library) uses a cross validation algorithm that exploits the properties of Monoids.

So Monoids are certainly happening, stop trying to ignore that they're happening.

Earnest question: Have you used a monoid?

I know my original comment is sort of bratty, and I was confusing monoids with monads, which... is a different functional thing that people apparently use?

It feels like mon(a/oi)ds are a concept that gets regularly explained on various blog posts. I read/skim the blog post and don't understand what they are or how they're useful (which is clearly on me). And then I read the comments, and they're split between some folks saying "I still don't get it" and others saying, "Yeah, these are common, easier than you think, and incredibly useful in Haskell/Scala/Clojure/F#/whatever". And then the world moves on, and the cycle repeats the next month. It doesn't seem like the number of people who say "I still don't get it" ever goes down. Hence my comment about how they're not happening.

(int) Addition, (int) Multiplication, String Concatenation, AND, OR, XOR? I think I can say that I've used all of these.
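The boolean examples in that list spelled out: each operation paired with its identity element forms a monoid, and folding with the right identity gives you all/any/parity behavior for free:

```python
from functools import reduce

bool_monoids = [
    (lambda a, b: a and b, True),   # AND: identity is True
    (lambda a, b: a or b,  False),  # OR:  identity is False
    (lambda a, b: a != b,  False),  # XOR: identity is False
]

# Check the identity law for all three
for combine, identity in bool_monoids:
    for x in (True, False):
        assert combine(identity, x) == x == combine(x, identity)

# Folding XOR over a list computes the parity of True values:
xs = [True, False, True]
assert reduce(lambda a, b: a != b, xs, False) == (xs.count(True) % 2 == 1)
```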

I find this is a bit of a nicer treatment of monoids (still F#): http://fsharpforfunandprofit.com/posts/monoids-without-tears...

I absolutely promise you that you have used a monoid. Addition is a monoid!

The thing you might not have done is recognized that a subset of the rules of addition end up showing up all over the place and therefore you can translate some of your intuition about addition to these other places.

As a simple example, you might notice that the `length` function converts strings to numbers in such a way that

    length(concat(X, Y)) = length(X) + length(Y)
in other words, length converts the "string monoid" to the "addition monoid" in a very neat way. You can sort of see that it converts "concat" to "+".
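Checking that equation concretely in Python: `len` maps the string monoid (concatenation, `""`) to the addition monoid (`+`, `0`):

```python
# 'combine' maps to 'combine': length of a concatenation is the sum of lengths
for x, y in [("hello", "world"), ("", "abc"), ("a", "")]:
    assert len(x + y) == len(x) + len(y)

# 'empty' maps to 'empty': the empty string has length zero
assert len("") == 0
```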

This isn't even some kind of mumbo-jumbo "intuition translation". It's a literal dyed in the wool functional translation.

So is there reason to believe that monoids won't be going away anytime soon? I think so—they've already been around and in HEAVY use since the dawn of abstract algebra in the, I dunno, perhaps the 30s?

Yes, monoids and monads are different concepts. Monoids are the easier one.

It's not the names that matter, or even necessarily the properties, but the general shape of monads and monoids is one we find ourselves using all over in functional programming. A very large percentage of the Scala standard library is classes with monadic properties, even though Scala does not have a monad type built in.

So it's not that people really need to understand monads to do functional programming: I think that understanding them is unimportant. The difficulty of the concept is proven by what you mention: tons and tons of blog posts trying, and failing, to explain monads.

As far as the utility of monads, just look at actual monads that are very useful everywhere. For instance, Option avoids large amounts of null checks, making code easier to write and read. And why is this? Because it's a monad. I can transform any type A into an Option[A], and an Option[A] into an Option[B], just by having a function that transforms A into B, one that is entirely unaware Option exists.

Understanding category theory makes it easier to write good, useful mechanisms that take functions as parameters. But we can use a monad without understanding that it's a monad, and that's fine for most people. It never has to be a thing.

> Earnest question: Have you used a monoid?

Yes. I have two explicit monoids in the production codebase I'm working on professionally right now. The simple case is one where I use foldMap to put together some values in a list. In isolation it's only saving a few lines compared to an explicit summing loop - but if you can "replace a few lines with a single method call" in a lot of places in your code, you end up with much more readable/maintainable code.

The other is more complex but also shows the power a bit more: a report that wants to accumulate some statistics separately (something like "12 successful, 45 error A, 2 error B"). So I have a type representing that:

    case class Stats(successful: Int, errorA: Int, errorB: Int)
I use https://github.com/typelevel/shapeless-contrib to automatically derive a Monoid instance for this type.

And then I can use this type with the Writer monad: http://eed3si9n.com/learning-scalaz/Writer.html

So with very little code, my report steps all include a "stats for this step", and they automatically accumulate into an overall stats object. Of course this is nothing I couldn't have written myself, but the library methods already exist and help.
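The derived Monoid instance isn't shown above, but it's just componentwise addition. A hand-rolled sketch (in Python purely for illustration; field names mirror the Scala case class):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stats:
    successful: int = 0   # the all-zero default is the monoid's empty value
    error_a: int = 0
    error_b: int = 0

    def combine(self, other):
        # componentwise addition -- each field is itself the (+, 0) monoid
        return Stats(self.successful + other.successful,
                     self.error_a + other.error_a,
                     self.error_b + other.error_b)

# Per-step stats accumulate into an overall stats object:
steps = [Stats(12, 0, 1), Stats(0, 45, 1), Stats(0, 0, 0)]
total = Stats()
for step in steps:
    total = total.combine(step)
assert total == Stats(12, 45, 2)
```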

More importantly though, I have another report that uses the same infrastructure, but makes async HTTP calls. So that report uses Future. But I can share common code between the two reports easily, because my abstract superclass is just `Report[F[_]: Monad]`.

I think they are happening, slowly but surely. Five years ago I wouldn't have dared suggest explicitly using Monad in production code. Two years ago we used one single Monad across our whole application, and even that was controversial. Now our codebase has more than I care to count.

> It doesn't seem like the number of people who say "I still don't get it" ever goes down.

Which is great, that means new people are getting into it every month. Newbies disappearing isn't a sign of success, it's a sign of stagnation.

Monads are fairly mandatory in a lot of functional programming for I/O; most of the time you don't realise you're using them.

The thing is that the idea "using a monoid" will not make much sense unless you're using a language which makes such things explicit. Type classes let you use monoids in the most obvious way; that is, by creating a type class called Monoid, writing instances of the class for various types, and writing functions that are generic over any monoid. In this sense you're obviously "using monoids" because you're writing Monoid somewhere.

In most other languages, the idea of a monoid will appear in the form of an abstraction of operations such as addition. For example in Python, the `int` and `str` types both have a sense of addition via the `+` operator, such that 'a' + 'b' == 'ab' and 1 + 2 == 3, etc. Now this is a step in the right direction, but for example in Python the same doesn't hold for sets: you might think that `{1, 2} + {2, 3} == {1, 2, 3}`, but you'll get a type error with that. The union operator is `|`. Furthermore, for some bizarre reason, even though you can add two strings and two numbers, you can `sum` a list of numbers, but you'll get a type error if you try to use `sum` on a list of strings.

The problem is that there's no real concept of what `+` really is, in the abstract sense. Is it addition? Then why can't I sum a list of strings? Is it joining two things? Then why can't I add two sets? This kind of arbitrariness is what monoids (and other type classes) help address. Any type which has an associative operation and a neutral element is a monoid, full stop. This allows us to write, for example, a sum function which can take any list of elements of some monoid. In Haskell this is called mconcat:

    mconcat :: Monoid m => [m] -> m
    mconcat [] = mempty
    mconcat (object:objects) = object <> mconcat objects
Now we will have `mconcat ["hello", "world"] == "helloworld"`, and `mconcat [1, 2] == 3`, just like we expect. We also have `mconcat [{1, 2}, {2, 3}] == {1, 2, 3}`! (Pretending there is such a syntax for set literals :)). Because integers, strings and sets are all monoids.*

Worth noting is that in haskell the `+` operator comes from a separate type class, that of numbers. In that sense Haskell draws a distinction between things that can be added, in a numerical sense, and those that can be joined in a more abstract sense (for that the operator is `<>`).
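Without type classes, the monoid can still be passed around explicitly. A sketch of mconcat in Python, where an (empty, combine) pair plays the role of the Monoid instance that Haskell resolves automatically (the `mconcat` name here is borrowed, not a real Python builtin):

```python
from functools import reduce

def mconcat(empty, combine, items):
    """Fold a list of monoid elements down to one, starting from empty."""
    return reduce(combine, items, empty)

join = lambda a, b: a + b
union = lambda a, b: a | b

assert mconcat("", join, ["hello", "world"]) == "helloworld"
assert mconcat(0, join, [1, 2]) == 3
assert mconcat(set(), union, [{1, 2}, {2, 3}]) == {1, 2, 3}
# One function, three types: the arbitrariness of `sum` vs `+` vs `|`
# disappears once the identity and the operation travel together.
```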

So in summary, the concept of a Monoid is just one example of the power of type classes to provide convenient abstraction, while simultaneously giving a clear sense of the behavior the abstraction is meant to provide. You've probably used monoids before, and I'd wager there were times when you wished you had monoids (or even implemented them yourself), even though you didn't know it. ;)

Regarding the rest of your post: it's an unfortunate fact that monoids and other functional programming concepts often go in one ear and out the other for those who aren't avid proponents of them already. This is probably in part due to the hifalutin-sounding names, absence of familiar syntax and paradigms, and sometimes writings which assume too much of the reader (or too little). However, if you look at the evolution of how people do programming, it's pretty clear that the world is embracing more and more concepts from the functional world every day. Despite the number of people "not getting it", I have to think the situation wasn't that much different a good 10+ years ago when first-class functions were a fancy new concept.

* Oddly enough, Ints aren't monoids natively in Haskell, because you can define them as monoids in two ways (with 0 and addition, and with 1 and multiplication). But for the purpose of demonstration, I glossed over this...

  you can define them as monoids in two ways
  (with 0 and addition, and with 1 and multiplication)
Thank you for pointing that out!

If you or others are struggling, I recommend "Learn You A Haskell". It's a great book that explains them well.

Monoids are learned in high school, long before people learn OOP, MVC, or whatever other acronym sits on people's resumes these days, with the difference that a monoid is a recipe that's actually simple and well defined. And they are everywhere, so it makes sense to think about generic code that works over monoids.

They already happened.
