What Monoids teach us about software (deque.blog)
187 points by adamnemecek on Sept 13, 2017 | 187 comments

Here's the ELI5 of monoids used in programming that might help get the idea across:

Conceptually, a monoid is anything that:

- Has a "zero-value" (mempty, e.g., 0)

- Can be "appended" together (mappend, e.g., +)

- Has an identity with the zero-value: x + 0 == x, 0 + x == x

- Is associative: x + y + z == (x + y) + z == x + (y + z)

This isn't limited to just arithmetic with operators like +. You can define whole new types as monoids if they have these characteristics. For example, we can merge together configuration files by using monoid operators (in pseudo-code):

    -- Zero Value
    mempty == {}

    -- Append
    {"color": "Blue"} `mappend` {"fontSize": 12} == {"color": "Blue", "fontSize": 12}

    -- Identity
    {"color": "Blue"} `mappend`               {} == {"color": "Blue"}
                   {} `mappend` {"fontSize": 12} == {"fontSize": 12}

    -- Associativity
    {} `mappend` {"a": "b"} `mappend` {"c": "d"}
        == ({} `mappend` {"a": "b"}) `mappend` {"c": "d"}
        ==  {} `mappend` ({"a": "b"} `mappend` {"c": "d"})

There's no need to break out the really formal definitions if you're just trying to get some nice working code.
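
The same laws can be sketched in runnable Python, treating dicts as a monoid under a right-biased merge. This is just an illustrative toy (`mempty` and `mappend` are names borrowed from Haskell, not a real Python API):

```python
# Toy dict monoid: mempty is the empty dict, mappend merges
# two dicts, with the right operand winning on key collisions.
mempty = {}

def mappend(a, b):
    merged = dict(a)   # copy, so neither input is mutated
    merged.update(b)   # right-biased merge
    return merged

# Identity: merging with mempty changes nothing.
assert mappend({"color": "Blue"}, mempty) == {"color": "Blue"}
assert mappend(mempty, {"fontSize": 12}) == {"fontSize": 12}

# Associativity: grouping doesn't matter.
a, b, c = {"a": 1}, {"b": 2}, {"c": 3}
assert mappend(mappend(a, b), c) == mappend(a, mappend(b, c))
```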

Borrowing from mathematics is beneficial because we can work with structures that have simple but useful guarantees. In Haskell, you can expect all the above to be true for correct type instances of Monoid, without the implementation details at hand. It's a core tenet of functional programming to disarm complexity with simplicity.

Thanks. This is a lot more helpful for non-math geeks than the two initial definitions in the article:

> A Monoid can be defined as a tuple (M, op, e)

This definition lost me completely because I'm not even sure what "defining it as a tuple" is meant to imply. Based on your explanation it sounds like that is fancy talk for saying it is defined by having three properties.

> In a statically typed language, we can translate the notion of set into the notion of type.

The second definition again completely lost me because I'm not even sure what "set" and "type" mean in this context and being able to "translate the notion of set" is unhelpful when the definition using a set is the one I didn't understand in the first place.

I'm a programmer with over a decade of professional experience. I don't understand the article because I have no formal education in higher math nor in anything beyond basic computer science. Your explanation proves it's possible to explain these concepts with practical examples rather than academic jargon.

A tuple is just an ordered collection of elements. E.g., a graph is a tuple (V, E) where V (vertices) is a set and E (edges) is a binary relation between vertices.

Defining stuff as tuples makes sense, because then you can define the set of all such tuples (e.g. the set of all monoids for a given set M or the set of all graphs for the given vertices V) and formally reason about it :)

Leading up to monads, monoids are covered in this excellent presentation: https://www.youtube.com/watch?v=ZhuHCtR3xq8

> I'm not even sure what "defining it as a tuple" is meant to imply

Yeah, in a computer science situation it's needlessly obfuscatory. It's literally just an interface with a "plus" method and an identity value.

"interface", "method"... One man's obfuscatory is another man's revelatory. It heavily depends on the "axiom set" from which you first learned computer science concepts. I would contend that "interface" and "method" are objectively more obvious concepts.

If you keep talking like that, you'll start to lose sight of the mathematics upon which your field was founded. Like when someone comes out with a fantastic new theorem about posets or monoids, you're probably going to go around thinking it doesn't bear any meaningful impact on your job.

You learn both languages. You do it because the vast majority of human invention and ingenuity consists of acts of translation. Anyone serious about working with computers should understand that.

Monoids and posets have very little structure, so you really aren't gonna get deep useful theorems out of them. Maybe you can make some clever reductions, but those are hardly deep theorems.

I'm not arguing the underlying math isn't beautiful. I took grad level algebra and logic classes, so believe me when I say I know the math is awesome. But it just doesn't have any practical utility in programming, and more often than not just makes things more confusing.

edit: Also, since I majored in math, with a focus on logic, I don't think I'll have any trouble with losing sight of the foundations :)

Well, maybe not you personally then. :) But most programmers don't have a strong maths background, and yet it is a very relevant subject.

Sure. Convince me that the concepts are useful and I may be willing to invest the energy to learn the maths around it to extract more value out of it.

But I'm a professional, not a CS student. I'm already spending my free time honing my craft and looking into new techniques and developments in software engineering. I can't afford spending yet more time on mostly orthogonal higher mathematics based on the hearsay of some guy on the Internet arguing it will allow me to understand a point somewhere down the line that might make me a better developer.

This is exactly the same thing people say who come to programming from playing jazz music or artistic painting or figure skating. They all claim what they did before makes them a better programmer and they're probably even right. But that doesn't mean I should pick up figure skating as a hobby at this point.

There are many different fields related to software development that can directly improve your performance as a developer. Not all of them are equally applicable to all tasks a software developer might have to perform. I think it helps to have a mix of them, especially at the level of teams. There are times when the UX zealot can save the day, and there are times when it's the one with the Master's degree in Metamathemagics. I wouldn't want to miss either of them, even if they're mostly oblivious to each other's specialisation.

>But I'm a professional, not a CS student.

"I get paid, I don't learn."

You want to get paid, get paid. I won't care if you fail.

I suppose Alan Turing and Charles Babbage are just shit programmers next to you.

I found "tuple" to be an odd word, when I first came across it.

Eventually I figured out it is a generalisation of "couple", "triple", "quadruple", etc. It's one of those things, but we don't want to say the number.

(I just tried googling it, and the definitions I get are very maths-y and hard to understand)

This is a great summary. I think it's valuable to say that learning how to map your problem domain onto instances of typeclasses like Monoid is useful because it immediately makes your problem embarrassingly parallel: you can parallelize the addition process (just making sure to preserve the overall ordering of operations), i.e., map/reduce. And if it's a commutative monoid, you don't even have to sweat the ordering.
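
The parallelization point can be sketched in Python (a toy model, not a real parallel runtime; `chunked_reduce` is my own hypothetical helper): because the operation is associative, you can fold each chunk independently and then fold the partial results, and as long as chunk order is preserved the answer equals a plain sequential fold.

```python
from functools import reduce
import operator

def chunked_reduce(op, identity, items, n_chunks=4):
    """Fold contiguous chunks separately, then fold the partials.
    Associativity guarantees this equals one left-to-right fold;
    in real code, each chunk could run on a separate worker."""
    size = max(1, len(items) // n_chunks)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    partials = [reduce(op, chunk, identity) for chunk in chunks]
    return reduce(op, partials, identity)

# Works even for a non-commutative monoid (string concatenation),
# because contiguous chunks preserve the overall ordering:
words = ["mon", "oid", "s a", "re ", "nea", "t"]
assert chunked_reduce(operator.add, "", words) == "monoids are neat"
assert chunked_reduce(operator.add, 0, list(range(100))) == sum(range(100))
```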

It's maybe worth noting that we have a choice of how values are merged on collision. I think it's a monoid only when that choice forms a semigroup.

In the case of configurations, "right bias" (aka Last) is probably a good choice. I don't like that Haskell chose it for the general Map instance.
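
For contrast, here's what a semigroup-valued merge looks like in Python (a sketch mirroring Haskell's `unionWith`; `union_with` is a hypothetical helper, not a built-in dict API): on collision the two values are combined, rather than one side silently winning.

```python
def union_with(combine, a, b):
    """Merge two dicts; on a key collision, combine the two values.
    If `combine` is associative (i.e. the values form a semigroup),
    this makes dicts a lawful monoid with {} as the identity."""
    merged = dict(a)
    for key, value in b.items():
        merged[key] = combine(merged[key], value) if key in merged else value
    return merged

# Values combined with +, instead of the right side winning:
assert union_with(lambda x, y: x + y, {"hits": 2}, {"hits": 3}) == {"hits": 5}

# "Right bias" (Haskell's Last) is just one particular choice of combine:
assert union_with(lambda x, y: y, {"c": "Red"}, {"c": "Blue"}) == {"c": "Blue"}
```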

Yeah the Map monoid instance is very unfortunate :( When I came from Scala I really missed the one in scalaz (which is as you describe)

reducers has a UnionWith newtype which gives you the better Map monoid. But I think it uses a lazy unionWith, which will cause your Map to leak space when used in a foldMap over many elements :(

Yep, when I was trying to understand the fuss about monoids (I blame Gabriel Gonzalez's excellent talk [1]), I made myself a little toy EDSL for configuration in Python, built mostly by implementing mappend for dictionaries [3] :)

[1] https://www.youtube.com/watch?v=WsA7GtUQeB8 [2] http://notes.asaleh.net/posts/monoid-pattern-in-python/ [3] http://notes.asaleh.net/posts/monoid-for-specifying-configur...

Can I use a composite object or dict for the type?

So all you need to do is to create the null object and the add and sub function on the type? And I can use an implementation of that type as the state for something?

How can I guarantee the order? Would I need to use something like a blockchain, or is a distributed database enough, as long as only I need the data and trust myself? In a cluster, would it make sense to use etcd?

I think even better would be a commutative monoid, which doesn't depend on the order. It should be possible to scale the global state in a distributed system very concurrently, because of the commutativity.

I would have to store a log (easiest part) and the current state. To sync state between the nodes in a distributed system, they would have to send update messages to each other. They're able to batch process multiple messages to improve throughput.

oops thanks

That reminds me of Vector Spaces. Is there any relation? Are Vector Spaces monoids?

tome speaks the truth. More generally, in mathematics there are hierarchies of structures [0]. In this case monoids are both groupoids/magmas (a set with a binary operation returning elements of that set), and semigroups (a set with an associative binary operation returning elements of that set). Monoids add in an identity element.

Vector spaces have addition over vectors and multiplication of vectors by scalars. Considering vector spaces and addition only, that is a monoid. It also happens to be further along this hierarchy as you have commutativity, inverses, and other properties.

A bit like the statement: all squares are rectangles, not all rectangles are squares. We can say that all vector spaces are monoids (under addition), but not all monoids are vector spaces.

[0] https://en.wikipedia.org/wiki/Algebraic_structure

Yes, a vector space is a monoid under its addition.

It's always amusing to watch programmers learn a tiny piece of mathematics and turn it into a totem. All those data structures would work just as well without a monoid formalization, and one can use all their power with no attention given to the algebraic axioms that define a monoid.

It's always baffling to watch programmers learn something new and immediately reject it as useless. When did the search for knowledge and better ways to think about code become a reason to mock someone?

Any decent programming language will let you write code that uses an abstraction, limiting your interaction with some of its input values to only operations defined in the abstraction.

Then the question becomes "how interesting is this abstraction free of concrete details?" It turns out monoids allow you to write general code that gives you incredible flexibility when you later choose a concrete instance of the abstraction to work in. See http://www.soi.city.ac.uk/~ross/papers/FingerTree.html for an example of how you can specialize a general data structure to do a number of specific functions based on a choice of monoid.

The best abstractions are the ones that introduce a lot of flexibility based on choices that code doesn't depend on. A lot of the deepest abstractions in that sense come from math. This isn't an accident. Math is about searching for good abstractions. When one of them is found to be applicable to programming, that should be celebrated.

Don't mock the ones who are celebrating.

Ah. "Flexibility."

This is a much deeper debate, but through my subjective experience programming for two decades, both solo as a PhD and leading a 200-person engineering team, I think flexibility is by and large the last thing to optimize for, and almost always a sign of a novice engineer.

You don't know the future. You don't know what will work with the end users of your software, or where your research will take you. The best thing you can do is to accomplish the task in front of you in the simplest, most directly performant way possible, and put it to use - in front of customers, or in doing your research.

True flexibility is not making every iteration wait while engineers program in every possible version of the future.

There is a huge cost you pay today for 'flexibility', most things advertised as such are not in fact flexible, and the trend in designing complex real world systems is in fact for each project to do exactly ONE thing and do it very well.

Btw, you're assuming that I've rejected algebra, and that I've done so immediately. I have no objections to using abstract algebra or category theory in designing and thinking about programming languages, and none of my opinions are a knee-jerk reaction to anything.

I would consider self-driving cars complex real-world systems, and they are designed to be 'flexible'... I would hope at least commutative (pedestrian + car == car + pedestrian == bad).

Sometimes 'paying' for flexibility has a high ROI.

That's not what parent was talking about...

Well, the article certainly doesn't give one enough to chew on to celebrate.

I don't at all get what the hype is about from this article; the concept of "you can combine two things and get a thing" isn't novel or radical.

I get a feeling that the kicker is about associativity, which is to say "when combining a bunch of things in a row, you can combine them any way as long as you only combine the adjacent ones". The kicker being that if you have that, you can parallelize.

So, seeing associativity allows one to identify parallelizable problems.

It's weird how the article neglected that entirely. Maybe it was assumed to be obvious? I've given brownbags about monoids and they were all focused on parallelization; things like summingbird were good examples I've used.

>All those data structures would work just as fine without a monoid formalization

That's what has been happening for the last 50+ years.

>and one can use all their power with no attention given to algebraic axioms that define a monoid.

Not really. When you can actually identify a pattern, and treat different instances of the same phenomenon as a single concept, you get much more power than simply using those individual instances as different cases.

It's the very reason why we create and use abstractions.

Yes, I know that as a mathematician. Imagine you are a military strategist. And some engineer says, you know what all our tanks, rifles, and missiles have in common? They are all WEAPONS satisfying a few simple formal properties!

Your sarcasm is noted (and pretty funny), but you should know that there are military theorists who are taken seriously despite being superficially silly.

For example, an Army field manual [1]: "Any use of force generates a series of reactions."

Like, duh, right? But if you read on, the manual makes a highly cogent, precise point (one that's especially relevant to a military that all-too-recently used "body count" as a success metric).

"There will be times when an overwhelming effort is necessary to destroy or intimidate an opponent and reassure the populace. However, counterinsurgency forces should calculate carefully the type and amount of force to be applied and who wields it for any operation."

If you fail to generalize from tanks, rifles, missiles, etc. to "force", it's a lot harder to think about how to apply force to get the outcome that you want.

The same is true of programming. For small programs, you don't need abstractions or patterns. But if you want to build large, resilient, performant systems, you'll need formal definitions for those terms, and formal definitions of components you can use to build them, and so on.

Re-reading, it sounds like you're saying that the concept of a monoid isn't useful by itself, and I guess that's fair. But I didn't know anything about them before, and now that I read this article I do. I've got a long way to go!

[1] https://fas.org/irp/doddir/army/fm3-24.pdf

What a refreshing reply - good point & I agree with you. For certain questions, it helps to think in this fashion, and to make non-obvious points that might otherwise be lost in details. I have nothing against formal systems in computing, programming language research, or other forms of theoretical CS. In support of this: in mathematics, Grothendieck took an exceedingly abstract perspective, eventually leading to the solution of some outstanding conjectures of the time.

While I haven't articulated this particularly well in my original comment, what I despair is the singular obsession with Monoids. It is popular because it is easy for people to wrap their minds around, they convince themselves that they've learned deep math and Something Very Important(tm), and it has a cool sounding name.

It's like you've given someone a book about the origins of the French Revolution and because they've heard the word "french", keep repeating "but the Eiffel Tower though amirite?" and keep writing blog posts and articles about the Eiffel Tower and nothing else.

In some ways I suppose my criticism is unfair and has an elitist / snobby vibe to it. It is common for people to find some 'sexy' hooks that initially attract them to an area, and one shouldn't judge newbies too much. After all, no one is forcing me to hang out on forums :)

>While I haven't articulated this particularly well in my original comment, what I despair is the singular obsession with Monoids. It is popular because it is easy for people to wrap their minds around, they convince themselves that they've learned deep math and Something Very Important(tm), and it has a cool sounding name. It's like you've given someone a book about the origins of the French Revolution and because they've heard the word "french", keep repeating "but the Eiffel Tower though amirite?" and keep writing blog posts and articles about the Eiffel Tower and nothing else.

That's not the case (though it might be for some).

The "singular obsession with Monoids"/monads is because we have a popular-ish language that has them as an abstraction. For many people that's a new abstraction they weren't aware of -- in the same way that in 2000 or so everybody talked about GoF Design Patterns.

If linear types were used in some other popular(ish) language and enabled new things and abstractions, people would talk about that concept all the time (and it's an even simpler notion than monoids).

Well, the military has its own share of "simplistic" abstractions exactly like that, that are nonetheless extremely useful to its operations.

Good ol' "Rock or something"

If there were a natural isomorphism between rifles and panzer tanks, you can imagine it'd have pretty big implications for both manufacturing and the common foot soldier's field tactics.

And then you start wondering whether there are other kinds of weapons you could be exploring, and hey, turns out that's pretty useful.

That's absolutely how it would go, and of course, the generals would never think of exploring weapons without the engineer enlightening them.

Well, how many historical instances do we have of generals, as opposed to engineers, creating weapons?

You don't need to give names to things, but often it's useful. It can reduce cognitive load (if you're comfortable with the terminology) and it can be useful when communicating to other engineers. Mathematicians did their thing without symbols (only words) a few hundred years ago, but once everyone agrees to use them you can start worrying about higher-order problems.

The problem with this argument is that a monoid is one of the most basic constructs you can find in abstract algebra. Using monoids as a shorthand for a couple of simple axioms doesn't buy you much; heck, the shape example didn't even have a use case for the neutral element, so all it basically said was: "intersection is associative". Just using a whole lot more words.

Had the shapes example been modified to describe (say) a topological space via open sets, you might have a point.

The real advantage is that we can express the abstraction in the language, which lets us write libraries around it. A single function like fold is much nicer than either having one per type or having to pass in the operation and identity manually:

    fold :: (Monoid m, Foldable t) => t m -> m

The fact that monoids are so simple is actually what makes this powerful: fold works for a large set of the types I use day-to-day, whether they're from the base library or specific to my own codebase. There is only a handful of other generic operations that rely on the Monoid class, but that's enough to make the class quite useful. It's a simple abstraction that applies to a lot of different types.

I actually did a count recently; something like 30% of our modules import Data.Monoid. Many of them were just using the (<>) operator, but that's useful in and of itself: no need to define a distinct operator for each of our types that wants one.

It's a simple, incredibly general abstraction that pulls its weight—partly because it doesn't have much to pull.
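
A rough Python analogue of that generic `fold` (duck-typed, since Python has no typeclasses; passing `mempty` and `mappend` explicitly stands in for the instance dictionary):

```python
from functools import reduce

def fold(items, mempty, mappend):
    """One generic fold that works for any monoid you hand it."""
    return reduce(mappend, items, mempty)

# The same function collapses containers of very different types:
assert fold([1, 2, 3], 0, lambda x, y: x + y) == 6                # Sum
assert fold([1, 2, 3], 1, lambda x, y: x * y) == 6                # Product
assert fold(["foo", "bar"], "", lambda x, y: x + y) == "foobar"   # Strings
assert fold([[1], [2, 3]], [], lambda x, y: x + y) == [1, 2, 3]   # Lists
```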

The cool thing about algebra is that it lets you reuse concepts and computations between elementary arithmetic and a wide range of abstract or complex structures. It's probably true that most programs use only very simple monoids, but look at e.g. this article to see how composition of monoids can be a very powerful design pattern in functional programming.


And the article introduces a whole lot more concepts, such as homomorphisms, in order to be sufficiently interesting. I'm not saying that you can't build interesting stuff on top of monoids (I have several colleagues doing just that), just that the concept of a monoid by itself isn't very interesting or deep.

The goal of programming is to reduce all parts to such simplicity that they become boringly simple, and thus impossible to get wrong. Concepts are only interesting while we don’t fully understand them yet; when we’re done fully understanding them, they become boring and we move on to the next thing. Thus, the solution to programming is to compose programs out of boringly simple parts, using a boringly simple method of composition.

> The goal of programming is to reduce all parts to such simplicity that they become boringly simple, and thus impossible to get wrong.

No, that's a strategy to achieve a goal of programming. Programming is about using machines that accept well defined instructions to automate processes for people. A complex and poorly understood program that still ends up reducing the overall amount of work required for individuals is still worthwhile, even if not reduced to being boringly simple.

The main selling point is that reduction and "boring simplicity" can lead to a formally correct program, or at least one where most errors are caught by static analysis/type checking/etc.

I'm not saying it's not worth while, I'm just saying that it's not the goal (in general. Some academic exercises might have goals such as that because the point is to learn).

It's sort of like saying the goal of owning a store is to match inventory to sales as closely as possible. That's a strategy or sub-goal towards reaching the real goal at best, which is to make enough money that the store pays for itself and hopefully supports the owner. Some stores don't even have inventory, so that strategy doesn't even apply to them. Similarly, that programming strategy may not apply well to some problems (and some problems cannot be proven formally correct, at least in whole).

My original reply was really just meant to point out that what they viewed as the goal of programming, is really just one of many competing strategies for producing software, and many others don't care about that at all.

And how do monoids play into this? This is just a vague generality; unless you can show exactly what more complex programming concepts you can reduce to monoids and how that helps with understanding programming, that doesn't mean much.

Sure. It's funny with these algebraic things. If they're simple, people complain that they're not interesting... but when you add some depth, people complain that it's too difficult!

"I don't want to have to understand abstract algebra to use Haskell!"

"Abstract algebra is too simple to bother writing programming blogs about!"

To be fair, it may not be the same people.

I'm sure it's not! But even so, it's interesting to be caught between these two antipodes. As you probably know, there's another group of people, Andrej Bauer among them, who provide a different antithesis to "I don't want to have to understand category theory to use Haskell!" by exclaiming "Haskell doesn't use real category theory!".

It's just generally quite amusing to be caught in the middle of all this.

>The problem with this argument is that a monoid is one of the most basic constructs you can find in abstract algebra.

That's irrelevant, because we are not naming them to use them in abstract algebra, but as abstract constructs in a different domain with its own complexity (programming).

The lack of complexity comes from the fact that monoids are just defined by two simple axioms. Using them in a different domain does not change that. Associativity or having a neutral element do not become more complicated concepts in computer science.

>The lack of complexity comes from the fact that monoids are just defined by two simple axioms. Using them in a different domain does not change that.

You'd be surprised. A pointer (as in C) is an even simpler notion, but the complexity it represents to new programmers is big.

Complexity in a programming element comes from the interplay it has with everything else -- not from it in isolation.

The original argument was that naming them "monoids" reduces cognitive load. Can you explain how exactly you see this reduction in cognitive load coming to pass in programming? Because from your previous two responses, I don't see it. What aspects of associative operations with a neutral element are so central to teaching programming that naming such a construct "monoid" gives us a stepping stone towards better understanding programming?

>The original argument was that naming them "monoids" reduces cognitive load. Can you explain how exactly you see this reduction in cognitive load coming to pass in programming?

My original argument (in this thread above) is that identifying patterns like monoids (and monads) gives more power through better abstractions.

Not that it's simpler than not knowing those abstractions (obviously assembler, which ditches all abstractions, is as simple as it gets conceptually).

>What aspects of associative operations with a neutral element are so central to teaching programming that naming such a construct "monoid" gives us a stepping stone towards better understanding programming?

It's about unifying similar usage patterns and having a mechanism to treat things like optionals (Maybe), errors, IO etc in a uniform way.

As a simplified non-monad example, consider languages with a strict distinction between primitives (ints, etc.) and objects (like Java, at least before generics/auto-boxing). A language that comes along and treats both of these things (primitives and objects) the same -- all objects -- is an eye opener, and allows for more kinds of expressions than a language that treats them as distinct kinds of variables does.

> The original argument was that naming them "monoids" reduces cognitive load.

The argument as I see it written is only that naming them reduces cognitive load. I don't see anyone suggesting that the specific name they happen to have carries any special power.

In what way is the specific name relevant? What alternative name instead of monoid for a monoid do you have in mind that would accomplish this goal?

Has someone claimed to have a better name?

Which is easier to comprehend at a glance?

* Under my Foo API you are able to combine Bars monoidally

* Under my Foo API you are able to combine Bars using baz such that `baz (baz bar1 bar2) bar3 == baz bar1 (baz bar2 bar3)` for all `bar1, bar2, bar3 :: Bar`, and also `baz bar quux == bar == baz quux bar` for all `bar :: Bar`.

How often do you document an API in such a way? Does the frequency with which this occurs justify introducing new terminology? Plus, you artificially inflated the wording to make "x is associative and has a neutral element y" look more complicated than it actually is, while not even naming in your "short" case what the operation and neutral element are.

If you are writing Haskell, you document your code this way first¹, and what is missing from this kind of definition you write down with words.

It's incredibly useful, since you can just go into Hoogle or some similar tool and query a function that:

    Monoid m => (a -> m) -> [a] -> m

And it will tell you it's called `foldMap`, and that it's available by default. Not long ago I made this exact query because I forgot the name.

1 - Well, the autodoc writes those comments for you.
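
A rough Python rendering of that `foldMap` signature, with the monoid passed explicitly since Python has no typeclasses (`fold_map` is my own name for this sketch):

```python
def fold_map(f, items, mempty, mappend):
    """Map each element into the monoid, then fold the results."""
    out = mempty
    for x in items:
        out = mappend(out, f(x))
    return out

# Total length of a list of strings, via the Sum-of-Int monoid:
assert fold_map(len, ["ab", "cde"], 0, lambda x, y: x + y) == 5
```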

I think at this point, I'll settle on "code people like taking fancy words from math people and running with them".

Associativity is a boring word, because high-schoolers have seen it.

Eh. Your patience here is admirable.

> How often do you document an API in such a way?

Every time `instance Monoid Foo` occurs in the Haddock documentation of a Haskell datatype, which is pretty often!

> Does the frequency with which this occurs justify introducing new terminology?

"Justify" according to what set of criteria? Personally I prefer it.

> you artificially inflated the wording to make "x is associative and has a neutral element y" look more complicated than it actually is

So your suggestion is

* baz is associative and quux is a neutral element for it

Really, if you're going to go that far you may as well go all the way and just say

* Foo is a Monoid under baz and quux

> while not even naming in your "short" case what the operation and neutral element are.

Well, in the Haskell world they're implied by the typeclass instance, but I take your point.

"`baz` is associative and `quux` is an identity element."

(Though normally the latter would be more like "`quux` is a noop", "`quux` is an empty set", or whatever domain-specific term you need, and the associativity claim would be a brief parenthetical inside meaningful documentation.)

It depends on what knowledge you have. To me "monoidally" could just as well be "bryllyg"[1] for all it's worth.

I think jQuery satisfies the monoid laws most of the time, and it worked and could be explained just fine without saying the word "monoidally" even once.

[1] From Jabberwocky, of course

How about this:

"Under my Foo API, Bars are combined associatively".

Associativity is a word. It's also a word that is taught to people in high schools. So use that.

That's absolutely fine. Then you also have to mention that there's an identity element. Once you've used the words "associatively" and "identity" it's no longer scary to use the word "monoid" so you may as well use it.

Well, you still have to use the words "associativity" and "identity" when you describe the operation and say what the identity element is. You can't just say that you can "combine" elements "monoidally" for it to mean anything.

Also, the elements are still combined associatively. With the closure and identity properties, the set forms a monoid under this operation. "Monoidness" is not a property of the operation (at least in the way the language is usually used in math).

For that matter, "combine monoidally" is a phrase that has 2 Google hits at the moment, which indicates to me that in real life, you still have to resort to less-shiny words to describe things.

> You can't just say that you can "combine" elements "monoidally" for it to mean anything. ... For that matter, "combine monoidally" is a phrase that has 2 Google hits at the moment, which indicates to me that in real life, you still have to resort to less-shiny words to describe things.

There doesn't seem to be anything hugely objectionable to me in saying "Bars combine monoidally". But if you prefer, one can say "Bar forms a monoid under baz and quux".

It may not be obvious to you that merely combining two words into one increases simplicity, but across an ecosystem of perhaps a thousand Monoid instances, it indeed does.

* Under my Foo API you are able to combine Bars just like adding numbers.

But seriously, the use of Bar/Baz etc. in this example makes it more convoluted, rather than less, precisely because the names offer no hint for how to understand the nature of each thingy.

> Under my Foo API you are able to combine Bars just like adding numbers.

No. Firstly, Bars may not be commutative. Secondly, would you really say that taking the unions of sets is "just like adding numbers"?

I think it increases the cognitive load. These things already have names. A Monoid is an abstract category that doesn't 1-1 correspond to the particulars of each data structure, increasing overall cognitive load. It's not like I'm going to forget how sets work otherwise. I'm a mathematician. I like category theory. Mathematicians in fact strive for the right level of generality. I've never seen an algorithmic or programming challenge where thinking of something as a monoid helped me solve it better.

I only think it is useful once you're comfortable with it. Before that, you're probably better off using whatever mental intuition you already have about the idea. Personally, I find it useful to associate the concept with the idea of distributed systems (together with other concepts like commutativity). Having this association can help guide me either towards common appropriate ways of solving problems in that space, or towards a good search phrase to include when I google for algorithms.

In contrast, I, personally, have run into programming/design problems where monoids or semigroups seemed to be the correct level of abstraction to work at. People aren't appropriating language and concepts from algebra/category theory for no reason (well, perhaps some are), they're doing so because they find them to be useful tools when building and thinking about systems.

Please provide examples of such problems, because TFA is lacking in those.

Of course we invent new words all the time, but the downside to using specialized jargon is that many people won't understand you. Many good writers will intentionally avoid obscure vocabulary to reach a larger audience.

This is a judgment to be made on a case-by-case basis. For example, when writing programs, I don't think the word "monoid" is useful enough to be worth explaining, so I would avoid using it in API's or documentation and see if I can get the point across in some clearer way.

You’re right that there’s not much to monoids. Though much hay has been made about monads, they barely rate having a name either, and you’d be surprised if they didn’t work the way they do.

That being said, they are useful abstractions and in Haskell, for example, open up all of Control.Monad[0] and Data.Monoid[1] to your code for free.

[0]: https://hackage.haskell.org/package/base-

[1]: https://hackage.haskell.org/package/base-

Not if you're constraining yourself to parametric polymorphism. In that case the mzero property is pretty significant and comes up a lot.

There's this whole discussion about "theorems for free" and why these formalisms are quite useful and relevant to languages like Haskell.

Similarly, All those GoF patterns would work just as fine without a pattern formalization, and one can use all their power with no attention given to the common pattern that all instances share.

When recruiters / interviewers ask me whether I know "Design Patterns" I shy away. It's sort of a red flag. I could just say "Singleton", wrap a cat in an object, and the interview would go on.

I've recently preferred to be reluctant about this and just tell them I really don't like these words (OOD / Design Patterns)... One interview was over really quick, while at another place, I've actually signed now. Any thoughts?

Hit them back with a question about how they conceive the difference between design patterns as a general concept and methodology stemming from Alexander's work in collaborative architecture, and the one specific GoF pattern language for OO coding that accidentally took over the whole meme.

Link? I mean, I can't tell which Alexander you mean.

Sorry, Christopher Alexander, who wrote the book called "A Pattern Language" (and several others) which was the origin of the inspiration for design patterns in coding.

He also later wrote a foreword for a book by Richard P. Gabriel, "Patterns of Software", with this gem of a quote: (the book itself I really recommend for anyone interested in design patterns)


In my life as an architect, I find that the single thing which inhibits young professionals, new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, “Of course not, that isn’t what I am trying to do. . . . I could never do that.”

Then, I express my disagreement, and tell her: “That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. That is what I expect of myself in my own buildings, and it is what I expect of my students.” Gradually, I show the students that they have a right to ask this of themselves, and must ask this of themselves. Once that level of standard is in their minds, they will be able to figure out, for themselves, how to do better, how to make something that is as profound as that.

Two things emanate from this changed standard. First, the work becomes more fun. It is deeper, it never gets tiresome or boring, because one can never really attain this standard. One’s work becomes a lifelong work, and one keeps trying and trying. So it becomes very fulfilling, to live in the light of a goal like this. But secondly, it does change what people are trying to do. It takes away from them the everyday, lower-level aspiration that is purely technical in nature, (and which we have come to accept) and replaces it with something deep, which will make a real difference to all of us that inhabit the earth.

I would like, in the spirit of Richard Gabriel’s searching questions, to ask the same of the software people who read this book. But at once I run into a problem. For a programmer, what is a comparable goal? What is the Chartres of programming? What task is at a high enough level to inspire people writing programs, to reach for the stars? Can you write a computer program on the same level as Fermat’s last theorem? Can you write a program which has the enabling power of Dr. Johnson’s dictionary? Can you write a program which has the productive power of Watt’s steam engine? Can you write a program which overcomes the gulf between the technical culture of our civilization, and which inserts itself into our human life as deeply as Eliot’s poems of the wasteland or Virginia Woolf’s "The Waves"?

Christopher Alexander, an architect who wrote among others A Pattern Language: Towns, Buildings, Construction. A Pattern Language later inspired Gamma et al to write Design Patterns: Elements of Reusable Object-Oriented Software.

Right, like one of the examples is summing two amounts of money. As far as I know, that's just adding integers if you're doing it right. It's not clear what all the formalisms are bringing to the table there.

If you know that amounts of money are a monoid under addition then you know that

    sum (map sum xs) == sum (concat xs)
And there starts a whole slew of possible optimizations, including MapReduce. See, for example, https://userpages.uni-koblenz.de/~laemmel/MapReduce/paper.pd...
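
A quick runnable check of that identity (my own toy data, not from the thread):

```haskell
-- Summing per-chunk and then summing the partial sums gives the same
-- answer as flattening and summing once -- that's the associativity
-- (plus identity) of the monoid at work.
main :: IO ()
main = do
  let xs = [[1, 2, 3], [4, 5], [6]] :: [[Int]]
  print (sum (map sum xs))   -- sum each chunk, then sum the partial sums
  print (sum (concat xs))    -- flatten, then sum once; same result
```

The left-hand side is exactly the map/reduce shape: the inner sums can be computed independently on different workers.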

It may not be clear precisely because it's so obvious, but one of the primary optimizations that makes Haskell fast despite its heavy use of currying and lambda functions is that Haskell doesn't have dynamic dispatch. This is completely unrelated to purity and laziness. It means it's possible to inline every lambda and partially applied function, and therefore use them at zero cost, without the overhead of a JIT compiler or VM at runtime.

The JVM can do similar optimizations because it knows the class hierarchy at runtime, and it therefore declares every method as final until it finds a class with a method that overrides the previous definition. It's also capable of inlining dynamically dispatched calls by profiling the type of the object it's dispatching on. If 99% of all calls go to class X, then it can inline the method of class X and catch the remaining 1% with a fallback clause. Of course this comes at a cost: you need to retain the bytecode, class metadata, profiling information, and the duplicated code generated by inlining in memory.

I don't really know what you are trying to say.

It's saying that the operation can be made parallel.

Applying an operation to a total workload is identical to:

1) Split the total workload into smaller workloads.

2) Apply the operation to each smaller workload.

3) Apply the operation to the results from step 2.

Each smaller workload in step 2 can be run in parallel.

While that may seem obvious for adding numbers or money, it may be less obvious when you have different operations, for example a method in an OO class hierarchy.
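
The three steps work for any monoid, not just numbers; here is a sketch (the function names are mine) where associativity is what guarantees the chunked reduction agrees with the direct one:

```haskell
-- Step 1: split the total workload into chunks of size k.
chunks :: Int -> [a] -> [[a]]
chunks _ [] = []
chunks k xs = let (h, t) = splitAt k xs in h : chunks k t

-- Steps 2 and 3: reduce each chunk (each could run on its own core),
-- then reduce the partial results.
reduceChunked :: Monoid m => Int -> [m] -> m
reduceChunked k = mconcat . map mconcat . chunks k

main :: IO ()
main = do
  let ws = map show [1 .. 9 :: Int]   -- string concatenation is a monoid
  print (reduceChunked 3 ws == mconcat ws)
```

Nothing in `reduceChunked` depends on what the monoid is, which is the clean separation between "how to split the work" and "what the work is".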

I agree. But in some cases it might be nice to omit an explicit 0, or to use some ergonomic higher-order combinator, just because it expects a monoid instance to already be in place.

So they make things a bit nicer, sometimes. Alrighty then.

Amongst the things they do is that they sometimes make things a bit nicer in a completely generic way which is reusable across many problem domains.

I'm not sure why it's interesting to label all these things as monoids. It seems kind of the tail wagging the dog.

More concretely, "monoid" seems both too general and too specific to be useful. Things that commute and things that don't commute are lumped together as monoids, and that seems to be a much more useful distinction.

Consider the monoids described in the article. For concatenation and file paths, order is important. On the other hand, multiplication, addition, maps, sets, and the shape examples all commute, so the ordering a monoid gives you is irrelevant. How does it help to consider both concatenation and addition as monoids?

(I'm honestly trying to see the relevance of monoids.)

I really recommend this article about designing a graphics library with a lot of monoid structures.


For me it gets interesting when you compose monoids, and that article has several elegant examples.

Otherwise I think the monoid structure is not super interesting in itself, but it's a really useful building block when thinking of software "algebraically", which to me is the most interesting aspect of Haskell as a programming tradition.

As for commutativity, I'm not sure how to answer. The monoid notion is useful in both commutative and noncommutative settings. When composing monoids one might be commutative and others noncommutative (like, say, a tuple of int and list of strings, the int being a total size and the strings being file names).

It's useful when you write functions that take generic monoids as arguments, such as many aggregation functions on data structures. For instance, finding the largest element in a tree, or the first successful result in a list of actions. Structuring the code like this separates the concerns of traversing the data structure and performing the particular type of aggregation. Haskell has the function foldMap which implements this idea.
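
In Haskell, both of the examples above (for lists, at least) fall out of `foldMap` with a suitable monoid; a small sketch:

```haskell
import Data.Semigroup (Max(..))
import Data.Monoid (First(..))

-- Largest element via the Max monoid. Bounded is needed so the
-- empty list has an answer (mempty = Max minBound).
largest :: (Ord a, Bounded a) => [a] -> a
largest = getMax . foldMap Max

-- First successful (Just) result via the First monoid.
firstJust :: [Maybe a] -> Maybe a
firstJust = getFirst . foldMap First

main :: IO ()
main = do
  print (largest [3, 1, 4, 1, 5 :: Int])              -- 5
  print (firstJust [Nothing, Just "ok", Just "no"])   -- Just "ok"
```

The traversal is the same in both cases; only the monoid passed to `foldMap` changes.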

If this is interesting to you Dan Piponi goes into significantly more detail (http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-use...).

> (I'm honestly trying to see the relevance of monoids.)


I can't speak to the relevance of monoids (I don't have much experience with FP) but to me those examples are helpful in illustrating the definition. It's easy to assume associativity <=> commutativity until presented with counterexamples.

The question, though, is whether using the definition justifies its cognitive overhead, by making some class of ideas substantially easier to communicate or represent.

Yeah. I mean, I work in computer algebra, and these things strike me as trivialities.

"Early on, Peter Freyd wrote

> 'Perhaps the purpose of categorical algebra is to show that which is trivial is trivially trivial.'

"This can be taken to mean that one thing category theory does is help make the softer bits seem utterly natural and obvious, so as to quickly get to the heart of the matter, isolating the hard nuggets, which one may then attack with abandon. This is an invaluable service category theory performs for mathematics; therefore, category theory is plain good pragmatics."


I would say equally "This is an invaluable service category theory performs for programming; therefore, category theory is plain good pragmatics."

Sure. Especially in abstract algebra we have a whole, complex edifice that builds upon semigroups and monoids. But computer science doesn't have the same edifice (or anything resembling it), so you're mostly confined to "look, this is a monoid, isn't this neat?" and then things sort of end there. Actual use cases for monoids as building blocks for more complex concepts are pretty rare in computer science. And even in mathematics, the fact that (say) a ring is also a multiplicative monoid becomes mostly irrelevant soon enough.

> in abstract algebra we have a whole, complex edifice that builds upon semigroups and monoids. But computer science doesn't have the same edifice (or anything resembling it)

Have you heard of applicative functors and monads? These are part of an edifice of concepts that builds on monoids that we do indeed have in computer science.

Yes. And this is tiny compared to what's there in mathematics, limited mostly to people who write Haskell fanfic (from Haskell proper to Scalaz), and, to be blunt: the terminology is used more as a shibboleth than as an abstraction tool. Contrast this with F#'s terminology ("computation expressions"), for example. I mean, if your goal is to have adoption levels for Haskell rival that for abstract algebra and to keep the unwashed masses out, then the more arcane and unrelatable your terminology is for the average person, the better.

To give another example. "Strength reduction" as a compiler optimization relies basically on the fact that the distributive laws apply to rings. Yet you won't find a mention of rings in compiler textbooks that explain the technique. It's unnecessary to convey the idea and confuses more than it explains, unless you're one of the few people who already are familiar with rings as algebraic structures.
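
For illustration (a toy example of mine, not from any compiler textbook), the transformation can be written out directly; distributivity, i.e. (i + 1) * c == i * c + c, is what makes the two versions agree:

```haskell
-- Computing multiples of c the naive way: one multiplication per element.
multiples :: Int -> Int -> [Int]
multiples c n = [i * c | i <- [0 .. n - 1]]

-- After strength reduction: one addition per element.
multiples' :: Int -> Int -> [Int]
multiples' c n = take n (iterate (+ c) 0)

main :: IO ()
main = print (multiples 7 10 == multiples' 7 10)
```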

I don't really think mass adoption is a goal for Haskell. It's still a really inspiring system, and I've seen it bring a lot of people into contact with the rich traditions of pure functional programming and the use of algebra in code. Which is pretty cool, even if some people think it's irritating when these people write excited blog posts about a concept they find trivial, like monads.

I linked to a paper about composing monoids to build a graphics library. For me that particular paper was, years ago, one of the most interesting things I'd seen on how to structure programs.

Maybe this whole experiment with using algebraic concepts in programming is a dead end, and maybe it is even just a way for a bunch of nerds to feel smart. But it seems like a pretty valuable tradition to me.

There probably wouldn't be "computation expressions" in F# if it hadn't been for Moggi and the Haskell gang. Sure, it's great that we can take these concepts and rebrand them to make them nice and cuddly (although "computation expression" is no paragon of simple English).

Blah blah blah. I'm kinda tired of the programming forum dialectics...

> Contrast this with F#'s terminology ("computation expressions")

OK, great! And what did F# call monoids, profunctors, semigroups, free constructions, adjoints ... ? The fact is that there are a lot of concepts we use in computer science (maybe not your corner, but yes in computer science) that already have names and history in the mathematical world. If you can suggest better names that we can use in computer science then that's great! Please tell me. Otherwise it seems reasonable to stick with the existing names.

They strike me as something less charitable: a vehicle for certain coders to pretend they're engaged in higher mathematics by using the jargon when less formal language will do (often with more clarity in fact).

OK... So exactly why is it useful to me, as someone writing programs in the real world rather than theorising about how computation works, that a list type under concatenation and a money type under summation are monoids? Can I actually write useful code with this knowledge?

At least in C++, knowing that your type forms a monoid under your operation tells you when you're safe to use various parallel algorithms: e.g. a parallel reduction (`std::reduce`, as opposed to the strictly sequential `std::accumulate`) is only correct when the operator is associative.

I think I've always known whether my type was a monoid, without knowing the word.

It's a simple concept that appears all over in computing. I'm not surprised that that's the case.

Sometimes learning things like this actually changes the way you think. You might not notice it the first month you study these things, but little by little you'll grow comfortable with it and eventually it becomes second nature. Eventually this can help you reason about things in ways you weren't able to before, which is useful for certain especially challenging scenarios.

On the other hand, a little knowledge is often worse than no knowledge at all.

A lack of experience using abstract concepts - the experience which can only be gained by learning much more of, let's say, abstract algebra, than just the definition of monoid - will inevitably lead to wasting one's time or, worse yet, to the finished product being an over-engineered monstrosity.

OK, so when was the last time knowing something is a monoid helped you solve a problem that you were having difficulty with?

Not sure about monoids themselves. But learning about monoids was a stepping stone towards mastering parts of pure mathematics, which has definitely contributed to my problem solving abilities. Sometimes you need to just bite your tongue and learn some stuff that short-term feels useless but long-term becomes valuable.

Or, if people are generally incapable of explaining how something they learned has helped them solve a problem in a domain I work in or close to, perhaps it's not that necessary to learn after all.

Here's an example of a monoidal structure used in the design of an elegant algebra for manipulating graphs (the network-of-nodes kind)


Representing graphs is a bit of a problem in Haskell, due to the difficulty of creating/working with cyclical structures, and the previous solutions were not really convincing. So in some ways this might be seen as a problem immutable data purists created for themselves. (Not my opinion)

But, either way, the ability to reason about and derive algebraic models for some problem domain is worth pursuing in general. Mathematically structured software is a fast track to robust, reliable, verified solutions to problems. That's propaganda but there's a fair amount of support for it.

An idea that comes up in functional programming that has a similar algebraic quality to monoids etc. is the notion of a type which contains only one value (usually called "Unit") and a type which contains no values ("Bottom" or _|_). The latter is described as "uninhabited" and is used, among other things, to usefully describe the type of a function which never returns. https://en.wikipedia.org/wiki/Bottom_type

Unit represents a value which is like a point in a space which contains only one point. It gets used to send signals with no content. For example, in predecessors of Go, there's the pattern of sending on a "channel of unit" as a way to essentially indicate an event to the receiver. Obviously in that use case, a type with any distinguishable values would be superfluous.

Interesting, although I am not sure how Unit and Bottom are related to monoids. Also, how does this translate, say, into C or C++? Unit is easy:

  struct Unit {};
but what about Bottom?

Even Haskell doesn't have a way to declare a value precisely of type Bottom.
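
You can at least declare the type itself with no constructors, which is essentially what `Data.Void` in base does (a sketch; only divergence inhabits the type):

```haskell
{-# LANGUAGE EmptyCase #-}

-- An uninhabited type, like Data.Void's Void.
data Void

-- With no constructors there is nothing to pattern-match on,
-- so from a (hypothetical) Void value anything follows.
absurd :: Void -> a
absurd v = case v of {}

-- Either Void Int can only ever be a Right, so absurd discharges
-- the Left case "for free".
main :: IO ()
main = print (either absurd id (Right 42 :: Either Void Int))
```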

As far as the connection with monoids goes, Bottom is a subtype of all types in the same way that the empty element of a monoid is a "zero" of the elements of the monoid. I don't know if that helps, but the similarity is that the types form a lattice, and so do the elements of the monoid. AIUI lattices (in the mathematical sense) have top and bottom elements, and in between they look like a DAG...

How about:

class Bottom { private: Bottom(){} };

This does not look right. If I understand correctly, the position of Bottom in the hierarchy of types is exactly the opposite to that of Unit: the latter could serve as the "universal base" (like the type Object), whereas the former could be seen as the "universal descendant" (some kind of Null type). In other words, Unit is "anything" and Bottom is "nothing".

This leaves out the possibility that breaking down problems as abstract mathematical structures helps you avoid difficult problems.

If you code Haskell then you'll run into monoids all the time, and if you understand what that word means then you can write more concise and idiomatic code that makes use of the wide range of monoid instances available under the common monoid API.

You can then make sure to expose the monoid interface for users of your Haskell libraries and they'll be happier because your values will cleanly fit with the rest of the ecosystem.

If you don't code Haskell then it's not like monoids are crucial learning for a successful career. Maybe you could approach them with some curiosity and find something interesting and useful about basic algebraic structures.

But probably all these monoid nerds are just puttin' you on...

No, I don't code Haskell. So is all this about monoids/monads/other-simple-ideas-with-complex-names just people solving problems in their language of choice that have already been solved in other languages, or just don't come up in other languages?

First of all, what makes "monoid" a "complex" name? Is your complaint basically about terminological aesthetics?

I'd say no to your question. I've linked an article elsewhere in the thread that to me shows pretty clearly that monoid is a useful and generative concept.

On a more basic level, I personally really enjoy taking inspiration from math and logic into programming. You mentioned composing programs in a way that is safe and correct; I think the algebraic approach to programming is a really excellent and deeply fascinating way to approach that goal.

IMHO, naming things sometimes only makes sense in a specific, wider context. Thus, using a domain-specific jargon to name a simple thing like an associative operation with an identity does not make much sense outside of the domain (in this case, abstract algebra).

The extreme example of the same nature is the notion of "magma" that finds some use in algebra. Would you use this word to describe a binary operation on a type (a function taking two arguments of some type and returning a value of that same type)? There's nothing to be gained by doing so outside algebra.

I guess the underlying discussion here is about viewing programming as a kind of applied algebra, which is common within functional programming and not so common in other traditions.

I happen to find that view really fascinating and generative, but I don't blame others if they aren't interested.

I also wouldn't insert the monoid concept into a codebase that isn't Haskell or doesn't already have some alignment with algebra, because it would seem weird... and Haskell has features (typeclasses and purity) that make monoids really nice to use.

I also don't generally do language advocacy, and I have a great respect for the diversity of ways of thinking and learning. But I also wish people wouldn't dismiss concepts that other people find useful and good just because they don't immediately see the point.

In other languages this kind of problem is typically solved with an explicit accumulator and a for loop. Plus a little crutch in places where higher order functions would really make sense :-)

monoids parallelize perfectly. map/reduce is built on top of that.

Parallelising things is the easy part. Making them run faster than the single threaded version is often the harder part because the overhead will eat all your gains if your workload isn't large enough.

The easy cases are already covered by things like OpenMP and the difficult problems also usually benefit from being written in a language that allows you to tweak your memory layout and allocation patterns manually.

I just wish that higher-level languages would give me this freedom without sacrificing productivity or safety. I'm not even asking for completely unreasonable things. At the bare minimum, all I really need are off-GC-heap allocation (with reasonable safety via reference counting or whatever Rust does), value types, and a way to launch threads that bypass the garbage collector's stop-the-world pause. It's just a checklist of features, and the last one is not that important because you can already sort of do it by calling into C and launching an OS thread. For some reason, though, nobody is checking those boxes.

I still stick to C++ and C but it gets old to spend hours fixing memory leaks and other possibly security related bugs from other developers.

I feel like I'm writing too much semi-offtopic things just because I have no other place to express them.

Which is great, if you're theorising about why map/reduce work or trying to recreate it from first principles. I don't think many of us are trying to do that.

If you don't work in a language with a type system powerful enough to express monoids, or weak enough to express them dynamically, there's not much practical value. But note that while cleanly expressing the monad interface is challenging for a type system, monoid is not so hard, and most things that have interfaces or concepts or traits or something like that can do it. At that point you can write generic code that uses them, and you'd be surprised how much value you can get out of that. You write a lot of code manually doing things to monoids that you could have done once.
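
As a tiny sketch of the "write it once" point (the function name is mine): separator-joining logic, for instance, can be written once against the Monoid interface and reused for strings, lists, and anything else with an instance.

```haskell
-- Generic "join with separator", written once for any monoid.
intercalate' :: Monoid m => m -> [m] -> m
intercalate' sep = mconcat . go
  where
    go []       = []
    go [x]      = [x]
    go (x : xs) = x : sep : go xs

main :: IO ()
main = do
  putStrLn (intercalate' ", " ["a", "b", "c"])  -- works for strings
  print (intercalate' [0] [[1], [2], [3]])      -- and for lists of Ints
```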

(It is unfortunate that monoids sound like monad's more complicated brother, when in fact it is exactly the opposite way around. Monoids are so simple that a common reaction once you actually get it is to go "Seriously? That's it?")

The other useful thing about monoids is that because of the associativity, they provide a useful, yet easy, way of breaking up a task for multiprocessing. If you can prove that your data type and the operations you desire to use on it are monoidal (and by "prove" I mean the informal sort of things programmers do all the time, not grabbing a proof system and going at it), then you have some really easy options, and it's really easy to write a system that very cleanly breaks up "the strategy of how I'm going to multiprocess this" from "what it is I want to multiprocess".

Because of this, it is likely that you're going to continue hearing more about these things over time, rather than less. Monoids are just about the simplest non-trivial example of such things, and the more sophisticated steps up (lattices, CRDTs [1]) are probably also things you're going to hear more about in the future.

Personally, I think in a lot of ways we still have collectively done next to nothing as a community to address the multicore world we live in. Right now we're taking advantage of things like Go's green threads or "asynchronous" programming to deal with the common case where a single core is adequate to solve the problem but we need some complicated scheduling, or where huge numbers of relatively small problems come at us at once; but we have made very little progress in doing complex tasks truly in parallel. It is very likely that those solutions are going to involve coming to understand these mathematical abstractions and why they are important. Monoid is going to be the first of them you'll encounter because it is, as I said, probably the simplest non-trivial one. But on its own it won't get you all that far in general.

[1]: https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...

I don't tend to handle things in the domain you're talking about here - 99% of problems I solve are along the lines of "how do I split the function to solve this problem up in such a way that it obviously matches the spec and each part is difficult to misuse". Think security-centric applications - an identity server implementing OpenID Connect and SCIM, for example.

As a result, I currently use languages with decent type systems - Rust, recently F# - but I've still never come across a reason to know that something is a monoid, and I don't regularly come across duplicated code, so I'm not exactly sure what problem knowing something is a monoid is supposed to solve in my domain.

In my experience, it's more important to know that something isn't a monoid / semilattice / <insert structure here>

The CRDTs linked by the parent are a great example. If you're working on an eventually consistent system, and you see a structure that doesn't form a monoid, it's most likely the case that there's going to be some sort of race condition. That isn't to say that any given monoid is going to solve the problem, but it can definitely help to pinpoint possible problems in otherwise complicated code.

It definitely doesn't come up in every discipline, but studying these structures has improved my engineering a ton. If design patterns are about class and object composition, then I'd argue that algebraic structures are the equivalent for function composition.

Full disclosure: I write mostly functional Erlang code for a living

This post does a great job of linking the elements of the mathematical definition of a monoid with their usefulness in code, helping explain why monoids are so outstandingly useful in programming. If you've ever wondered why it's monoids that come up over and over again in programming instead of another mathematical abstraction, this article will help.

I think "outstandingly useful" is a bit strong and I think it polarizes people who are on the fence on whether the concept is practically useful or just formalistic jargon.

Setting aside relevance to common problem domains, I'd say monoids (like monads) are the most used mathematical abstraction in programming after sets, functions, induction, total orders, and various forms of graphs. That's pretty outstanding, especially given their B-list status in mathematics. There are dozens of explanations of monads on programming blogs, and not nearly as many explanations of, say, rings or models or vector spaces.

This is a great introduction to monoids. The shape example actually appeared in a study performed in the early 90s [0]. I'm not convinced that you can draw too many valuable conclusions from the study, but it's a fun, quick paper to read.

[0] "Haskell vs. Ada vs. C++ vs. Awk vs. ... An Experiment in Software Prototyping Productivity" - http://www.cs.yale.edu/publications/techreports/tr1049.pdf

The shape example reminds me of how traditional raytracing renderers work: the entire scene is a function whose input is [x,y] view space coordinates and output is [r,g,b] color. To produce an image, you call the function for each pixel and store the results.

Could be an interesting tutorial to see a raytracer explained in terms of monoids.

There's a good blog post as a kind of follow-up to this one that talks about how thinking in terms of monoids helps enable flat architectures and composability in Haskell: http://www.haskellforall.com/2014/04/scalable-program-archit...

This is awesome. That's the first explanation of the subject that actually made sense to me, and made me understand why I should care. Thanks for sharing!

Fun fact: you can run shortest-path algorithms on pretty much anything as long as it fulfills the monoid constraints (taking the min and combining paths, operating on distances, is a monoid).

So basically, for the cost of implementing two operations you can reuse Dijkstra or Bellman-Ford to do crazy stuff like belief propagation, etc.
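A rough Python sketch of that genericity (toy code of mine, not from the thread): Bellman-Ford-style relaxation where the "distance algebra" is passed in as the two operations plus the identity.

```python
def generic_shortest(graph, nodes, source, combine, better, identity):
    # Bellman-Ford relaxation, parameterized by the distance algebra:
    #   combine(d, w): extend a path cost d with edge weight w (the "append")
    #   better(a, b):  pick the preferred of two path costs (e.g. min)
    #   identity:      cost of the empty path (the unit)
    dist = {source: identity}
    for _ in range(len(nodes) - 1):
        for u, edges in graph.items():
            if u not in dist:
                continue
            for v, w in edges:
                cand = combine(dist[u], w)
                dist[v] = cand if v not in dist else better(dist[v], cand)
    return dist

nodes = ["a", "b", "c"]
weights = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
# Classic shortest paths: extend with +, prefer min, empty path costs 0.
generic_shortest(weights, nodes, "a", lambda d, w: d + w, min, 0)
# → {"a": 0, "b": 1, "c": 2}

probs = {"a": [("b", 0.5), ("c", 0.1)], "b": [("c", 0.9)]}
# Most-reliable path: extend with *, prefer max, empty path has probability 1.
generic_shortest(probs, nodes, "a", lambda d, w: d * w, max, 1.0)
# → {"a": 1.0, "b": 0.5, "c": 0.45}
```

Swapping in different (combine, better, identity) triples gives widest-path, most-reliable-path, and similar algorithms from the same relaxation loop.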


Could anyone provide a "real world," or generic business coding (database, input validation, etc.) example that demonstrates their usefulness outside of a purely "mathy" domain?

I find that most of these FP examples tend to focus on simple things like shapes, which tend to already have monoid properties built in. I'd be interested in seeing examples of how they help with your run-of-the-mill business coding.

Coming from an OOP background, what made me connect some dots about what monoids are and what they're good for, after reading the article, is that they can also be viewed as the GoF Composite pattern.

I've just stumbled upon an article that describes this:


Monoids (& monads) are one of those tools that I think help shape your thinking (like FP) and, even if you don't use them daily or regularly, are still worth understanding because they make you a better programmer.

The concepts are actually very simple but their effects are complex. This is a great introduction, though I think a little high-level for beginners.

Why monoids? Why not groups? Why not vector spaces?

I can't tell why monoids are supposed to be more directly applicable to programming than any other concept from algebra.

Intuitively, it's because lots of things we work with in CS don't have a good notion of inverse. In practice, this usually means you're building up a structure out of smaller structures: you can always add more to the structure, but there's nothing you can add to undo what was added previously.

Semirings are interesting for the same reason. "Fun with Semirings"[1] has some great examples, if you're curious.

[1]: https://pdfs.semanticscholar.org/702d/348c32133997e992db362a...

I'll assume this is a genuine question and give a genuine answer.

> Why not groups?

Because many things you would want to combine don't have inverses. Does the list [1,2,3] have an inverse? No. That's why lists form a monoid and not a group.

> Why not vector spaces?

Because all finite dimensional real vector spaces are isomorphic to R^n so we tend just to use the concrete instances of tuples of floating point numbers.

They are all applicable, it's just that the notion of monoid is more general and thus its area of applicability is bigger.

[Edit] On the other hand, it is important to realize that even though these structures look (and indeed are) related, each focuses on a completely different aspect of reality. Monoids are useful in describing composability (where associativity is essential); groups are instrumental in analyzing all kinds of symmetries; vector spaces form a scene on which one deals with linear independence, subspaces, and linear transformations. (All this finds its application in computational algorithms, but it would be interesting to see if the program structure also could be understood using the notion of symmetry and/or linearity.)
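The "no inverse" point is easy to check concretely (Python lists standing in for Haskell lists, same structure):

```python
# List concatenation satisfies the monoid laws...
a, b, c = [1], [2, 3], [4]
assert (a + b) + c == a + (b + c)    # associativity
assert a + [] == a and [] + a == a   # [] is the identity

# ...but there are no inverses: len(xs + ys) == len(xs) + len(ys),
# so concatenation can never shrink a list back down to [].
# Only [] itself has an inverse, hence a monoid but not a group.
assert all(len([1, 2, 3] + inv) >= 3 for inv in ([], [0], [0, 0]))
```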

Associative operators with an identity are extremely common in programming, inverses are much less common. Most ways to turn a lot of data into a little data have a monoid hidden in them with the identity serving as a default value in the case of missing data.

There is nothing you can concatenate to a non-empty list to get the empty list, nothing you can OR with True to get False, and nothing you can union with a non-empty set to get the empty set.
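That "reduction with a default" shape looks like this in Python (a generic fold I'm writing for illustration, mirroring Haskell's mconcat):

```python
from functools import reduce

def mconcat(combine, identity, xs):
    # Fold any monoid over a sequence. The identity doubles as the
    # default for missing data, so empty input needs no special case.
    return reduce(combine, xs, identity)

mconcat(lambda x, y: x + y, 0, [1, 2, 3])           # → 6
mconcat(lambda x, y: x + y, 0, [])                  # → 0, not an error
mconcat(lambda x, y: x or y, False, [False, True])  # → True
mconcat(set.union, set(), [{1}, {2}])               # → {1, 2}
```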

General application vs specifics, mostly.

But yes, a good grounding in algebra will help anyway. It’s not an exclusive set ;)

monoids taught me that you can use overly complex statements to solve trivial ideas

For those that like SICP, the picture language in section 2.2.4 describes a monoid, though they don't call it that because the term hadn't yet been adopted. Even so, I found it very helpful in understanding what all the monoid literature was going on about because I'm much more familiar with Lisp than I am statically typed ML-family languages.


The word "monoid" has been used with its current meaning since the 50s, maybe before.

I think the authors of SICP didn't describe the picture language as such because it just isn't a very useful concept, not because they didn't know the word.

A monoid is just an algebraic group which is missing the concept of the inverse element.

This has been gutted out of the group so that the category theorists can then see monoids in situations where they would otherwise have to rack their brains to come up with the inverse operation.


For instance, the set of character strings and the catenate operation are almost a group. The identity element is the empty string.

Closure holds: if a and b are strings, (cat a b) is a string.

Associativity holds : (cat a (cat b c)) is (cat (cat a b) c). (Which is then just (cat a b c) to a Lisp programmer, (cat a) is a, and (cat) produces the empty string.)

Identity element: empty string. (cat a "") -> a

Inverse element: oops!

We would have to postulate some sort of "negative string" so that (cat "abc" -"abc") produces "".

Then we have the problem of what is (cat "abc" -"xyz"): the combination of a string and the inverse element of a different string.

Let's not bother; just drop this axiom and call it a "monoid".

It wasn't a term used in computer science then. Gerald Sussman himself said this about the term "monad" in one of his talks.

I still didn't get it. I need a simpler explanation and examples

You have a value of a certain type.

You have another value of the same type.

If there exists a function (let's call it "combine") that combines those two values and emits a third value of the same type,

and there exists a value of this type that does nothing when used as an argument to this "combine" function,

then your type is a monoid. (Basically.)

You left off the associativity; you need (a + (b + c)) == ((a + b) + c), where + is the combining operation.

I did. I left it out for two reasons:

- The comment asked for a "simpler" explanation, and associativity isn't an especially simple concept if you're not familiar with it; it doesn't really help in understanding the core idea of a monoid.

- In practice, monoid associativity isn't checked. Most type systems can't express it, so whenever you encounter a monoid you have no guarantee that it is indeed associative.

There may be no guarantee, but that doesn't make a structure without associativity any more of a monoid, nor is having an identity element without associativity the core idea of a monoid. If anything, associativity is the more important part of the concept, since without an identity element you are still left with a semigroup.
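A concrete example of why "no guarantee of associativity" bites in practice: floating-point addition has an identity but is not associative, so floats under + only pretend to be a monoid.

```python
# IEEE-754 addition has an identity (0.0) but fails associativity,
# so the monoid laws can't be enforced by the type alone.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: 0.6000000000000001 vs 0.6
print(a + 0.0 == a)                # True: the identity law still holds
```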

A bit of advice about understanding concepts like these:

You don't understand every single instance. You understand the concept, and then you begin to learn individual instances.

The shape and the use of monoids is really easy to grasp. Sometimes understanding how a specific thing is a monoid, or how it's implemented, can be obscure. They are similar to monads in this respect, although often less complicated.

In summary, understand that there is a beach and that the beach is made of many grains of sand. Each grain of sand may not be obvious from just staring at the beach.

People are awful at explaining monad-related concepts. I think everyone starts with a reasonable explanation and then tweaks it to be more succinct until they end up at "a monad is just a monoid in the category of endofunctors".

This article is about Monoids, not Monads. Monoids are simple. Just imagine a bunch of things, and imagine you can "combine" any arbitrary thing with another thing. The combination has the property of being associative (whatever), and you're only allowed to call it a monoid (otherwise convention says you need to call it a semigroup) if there is also an item (the "identity" or "neutral" element) which, when you combine it with any arbitrary item, produces that same item as a result.

Common examples are: strings, which you're allowed to combine with concatenation (with the empty string being the "neutral" element). Integers with addition as the combiner also form a monoid; that one happens to have a few additional named properties (like commutativity and the existence of inverses), but it's still a monoid.

By "monad-related concepts" do you just mean abstract algebra? Otherwise, how are monoids and monads related, other than that they are both algebraic structures?

My comment was pretty poorly written, and is probably better understood in terms of just "explanations of things starting with mon-" :-D

The concepts aren't tough, people are just pretty bad at putting themselves in the shoes of a learner, and throw bizarre over-complicated descriptions and examples into the mix when explaining them. The description I gave is someone's tongue-in-cheek stab at these which has stuck with me (unless I'm horrendously off and that is actually someone's legit attempt to describe monads)

I summarized a decent Reddit thread on the topic a while back here: http://metaleap.net/blog.hasplained-monoids.html

It's a stupid over-complication of stuff you do every day while programming. It's these annoying science-y, elitist words that keep people away from programming.

A monoid is just things of the same kind plus an operation for combining them. JS arrays with concat() form a monoid.

> f is associative: for all a, b and c, we have f(f(a, b), c) == f(a, f(b, c))

> Relative paths in a file system form a Monoid under appending

How can appending be associative?

Because ABC could be built by appending C to AB, or appending BC to A.

You might be confusing the associative property with the commutative property (I do this all the time):

associative: (a + b) + c = a + (b + c)

commutative: a + b = b + a

For example, appending strings is associative, but not commutative. Appending items to an unordered set is both associative and commutative.
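The same distinction, runnable in Python:

```python
# String concatenation: associative but NOT commutative.
assert ("a" + "b") + "c" == "a" + ("b" + "c")
assert "a" + "b" != "b" + "a"

# Set union: both associative and commutative.
assert ({1} | {2}) | {3} == {1} | ({2} | {3})
assert {1} | {2} == {2} | {1}
```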

I was! Thank you.

Well, I guess, it's useful to remember that String type is a monoid after all! :)

> Given a value, we do not have to care how it was built, and whether it is a composite or not. We might not be even to tell (*).

missed a word: Able to tell.

Thank you and good catch :) I corrected it.

When I tried Haskell I wanted to use the wreq library to do some simple POST and GET requests.

I took a look at the docs and found the function definition for a POST request.


    post :: Postable a => String -> a -> IO (Response ByteString)
We can see that the second parameter should be an instance of the Postable typeclass.

The examples mention using a JSON value as a parameter

    >>> r <- post "http://httpbin.org/post" (toJSON [1,2,3])
By looking at the definition of toJSON we know that it returns a Value.


Just seeing a single example isn't enough for me to generalise though.

I then proceed to the definition of the Postable typeclass.


Unfortunately it doesn't list the instances of the typeclass, so I decide to use Hoogle.


No results found.

I then start looking at the source code of the library until I found this file which contains the Postable instance for json values.


    import Data.Aeson (Value, encode)
    instance Postable Value where
        postPayload = putPayload
Why is it so difficult to do a simple task like finding all the instances of a typeclass? In a Java IDE, finding the implementations of an interface is a single button press, fully automated. Of course, I'm assuming that you've already imported the library that contains the typeclass instance; in this case the instances are part of the wreq library itself, not of aeson or some plugin that connects wreq and aeson. If Hoogle worked as I think it should, it could even be superior to a local search that only considers libraries that are part of your project.

I'm a big fan of self-documentation and I like the type-aware documentation, but this is a case where, even though it took a short time, I've wasted more time than I'm comfortable with on a simple program that could have been written in Python or whatever instead.

I don't care about category theory at all, and you can have all the practical benefits of category theory without putting it on a pedestal. It's neat to write more generic functions and stuff, but there is no need to glorify it.

From a practical standpoint, you can think of monoid, monad, and functor as interfaces. When your code accepts a specific implementation like ArrayList, you can in some cases use the Java interface List instead to make your code more generic. When your code accepts a specific implementation like Maybe or List, you can sometimes use the Monad typeclass to make your code more generic.

Dear fellow Haskellers who care about the usability of our language,

imtringued has raised an important point here. His/her comment is quite long but the summary is:

Postable is exported here


but defined in some internal module. Its instances are technically orphans because they are defined directly in the module that Postable is reexported from. That means that the documentation for them is detached from the documentation for the class.


This is confusing and Haddock should be improved to address this.

You can find all the instances of a typeclass by loading all your libraries into GHCi and then using the :i command:

    λ> :i Monoid
    class Monoid a where
      mempty :: a
      mappend :: a -> a -> a
      mconcat :: [a] -> a
      {-# MINIMAL mempty, mappend #-}
      	-- Defined in ‘GHC.Base’
    instance [safe] (Monoid a, Monoid b) => Monoid (a :<|> b)
      -- Defined in ‘Servant.API.Alternative’
    instance [safe] Applicative f => Monoid (Traversed a f)
      -- Defined in ‘Control.Lens.Internal.Fold’
    instance [safe] Monad m => Monoid (Sequenced a m)
      -- Defined in ‘Control.Lens.Internal.Fold’
    ... etc ...

I don't think that will work in this case. After all, that's essentially what Haddock does to generate its pages. The problem is that the Postable instances are orphans, for some reason which eludes me.

> I don't care about category theory at all and you can have all the practical benefits of category theory without putting it on a pedestal. It's neat to write more generic functions and stuff but there is no need to glorify it.

Agreed. Here's part of a recent email correspondence between me and a fairly prominent functional programmer who's written at least one celebrated paper:

> > I didn't want anyone to get the impression that they had to understand "category theory" or "maths" in order to use Haskell. This is a misconception about Haskell that is sadly far too prevalent in my opinion.

> I agree.

That being said, I haven't seen anyone in this discussion glorifying category theory, unless I missed something?
