Monoids, Functors, Applicatives, and Monads: Main Ideas (monadmadness.wordpress.com)
94 points by abishekk92 on Jan 5, 2015 | 51 comments



I've had an idea for a monad tutorial that I think might be quite instructive, but haven't had the motivation yet to write.

In essence, the idea is that you are led through the process of implementing early exit, nondeterminism, threaded state, and coroutines using call/cc in Scheme. Then monads are introduced as a way of typing those patterns.
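To make the first of those patterns concrete: the "early exit" behavior alone can be sketched without any monad vocabulary. Here is a hypothetical Python sketch (my own names, not from the parent's planned tutorial), where `None` stands for "we already bailed out" and `bind` only keeps going while there is still a value:

```python
# Early exit encoded as values: None means "already bailed out".
# bind applies f only when there is still a value to work with.
def bind(x, f):
    return None if x is None else f(x)

def safe_div(a, b):
    return None if b == 0 else a / b

# Chain steps; the first failure short-circuits the rest.
def compute(a, b, c):
    return bind(safe_div(a, b), lambda r: safe_div(r, c))

print(compute(8, 2, 2))  # 2.0
print(compute(8, 0, 2))  # None -- exited early at the first division
```

Typing this pattern (what does `bind` require of its arguments?) is exactly where the Maybe monad would come in.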


I think most monad tutorials would be quite OK if they didn't use words like "monad", "bind", "return" and so on. It's just stupid to insist on using names which mean nothing, or which, when they do mean something (return), mean the opposite of what they're supposed to do. And no, there are no upsides to this: you math-inclined people may think so because you were trained to think that way, but it's simply not true.

I suspected this for a long time, but recently I saw a talk from Gilad Bracha - http://www.infoq.com/presentations/functional-pros-cons - which convinced me that I was right. Scroll to ~22 minutes into the talk for the relevant section.

Ok, I finally said it. I feel better now, thanks ;)


See here: http://fsharpforfunandprofit.com/posts/recipe-part2/

I suspect that one could get through the entire thing, handily grasp all of it, and be ready to apply what was learned in practice without ever realizing this had anything to do with the dreaded M-word.

On a somewhat related note, I'm impressed by how Microsoft has their entire developer community comfortably using a few basic monads without even realizing it. I've even had some success with hijacking it. The terms they chose are explicitly coupled to the idea of querying, but I've been able to produce some implementations of .NET's version of the pattern that still produce good and readable results when you use the LINQ query syntax with them. I wouldn't say it's a perfect renaming of all the concepts, but if the goal is to make the concept intuitive enough for most users to tackle without fears then damn if it isn't an impressive 90% solution.


I don't know if it was intentional, but most of the intro literature around Scala seems to avoid terms like monad, just introducing the reader to Option, Future, etc., and letting the reader notice the common patterns (map, flatMap).

Eventually, the reader learns about monads without ever explicitly learning about monads.


Yes, 'return' is egregious. I believe it was co-designed with Haskell's do-notation, so that you could write purely declarative code that was a spitting image of the imperative equivalent.

E.g.

    x <- foo 2      -- x = foo(2)
    y <- bar 4      -- y = bar(4)
    return (x+y)    -- return (x+y)
But something that's always missing in such discussions is that Haskell allows you to define

    a_better_word = return
and then you can freely forget about 'return' in your own code, except when you roll your own monad instances, which is rarer than a blue moon.


With AMP, it should always be possible to use pure in place of return.

https://www.haskell.org/haskellwiki/Functor-Applicative-Mona...


Watched the video. It was interesting. Gilad Bracha certainly knows his stuff; his grasp of the stuff he's criticizing, however, is less thorough.

He does seem to have a reasonable grasp of the what, but is shaky enough on the why that he's presented a few straw men.

Regarding Hindley-Milner type inference, I don't believe that there have ever been advocates who went from "you don't need to specify a type anywhere" to "you shouldn't specify a type anywhere". That is not how practitioners use these languages these days. That you need redundancy for error correction (an accurate and important point) does not mean that your particular level of redundancy is best. Inference allows you to specify types where they will make things clearer, and lets the compiler fill in the gaps. It also lets you ask questions of your interpreter about what shape of thing will fit in a given hole. Even before GHC added TypedHoles, you could open GHCi and say :t (\ x -> some expression involving x) and get back the type of x as the first argument.

There's not enough room in this margin to fit the rest...


I believe Monad is fine as a term, because you cannot describe it based on resemblance to real-world objects, as more conventional OOP design patterns do. And I think that the name of a design pattern should be first of all unique and easy to remember, and not necessarily descriptive, because the whole point of coming up with a design pattern is to grow the vocabulary of those using it, and naming clashes are bad.

I do prefer "flatMap" to "bind", as practiced in say Scala, because it's a direct reference to this property: "m.flatMap(f) == m.map(f).flatten"


I don't know enough Scala... "flatten" seems to be equivalent to "join" from Haskell/math - is "flatten" specialized to lists? For that matter, is "map"?


What do you mean by specialized?

Don't know the equivalent of flatten in Haskell, but if you want a signature then it is something like:

   def flatten(m: M[M[A]]): M[A]
Or another way of thinking about it ...

   flatten(m) == flatMap(m)(x => x)
So basically, in order to define a monad, implementing flatMap is equivalent to implementing flatten (given map). And I think that "flatten" as a verb is very intuitive, although intuitiveness is subjective since it's tightly related to familiarity - but flatten has been used in many languages as an operation available for collections, and it works for monads in general just as well.
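The identity quoted above - m.flatMap(f) == m.map(f).flatten - is easy to check directly for lists. A small Python sketch (`flat_map` and `flatten` are my own helper names, not library functions):

```python
def flatten(m):
    # [[a]] -> [a]
    return [x for inner in m for x in inner]

def flat_map(m, f):
    # apply f (which returns a list) to each element, then flatten
    return flatten([f(x) for x in m])

f = lambda x: [x, x * 10]
m = [1, 2, 3]

# flat_map really is "map, then flatten"
assert flat_map(m, f) == flatten([f(x) for x in m]) == [1, 10, 2, 20, 3, 30]
```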


Specialized in the sense of a more restricted type. From your answer, it seems like "no" - though my Scala is superficial and rusty.

The Haskell equivalent is join :: Monad m => m (m a) -> m a

We also have the equivalent specialized to lists: concat :: [[a]] -> [a]

And as you say, join = bind id = (>>= id).

Incidentally, in math that function is the "mu" (the multiplication) in the triple representing a monad; "eta" is the unit, i.e. return.
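For lists, the join = (>>= id) identity is easy to spot-check. A Python sketch (the names are mine, not from any library):

```python
def concat(xss):
    # Haskell's concat / join, specialized to lists: [[a]] -> [a]
    return [x for xs in xss for x in xs]

def bind(m, f):
    # the list monad's (>>=): map f over m, then flatten
    return concat([f(x) for x in m])

identity = lambda x: x

# join = bind id: flattening is just binding with the identity
assert bind([[1, 2], [3]], identity) == concat([[1, 2], [3]]) == [1, 2, 3]
```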


While I generally disagree with you here, I actually do agree about "return". Hopefully we can deprecate it in favor of pure now that all Monads must be Applicatives.

My broader disagreement is that these names do mean something - they mean the same thing they do in math. Using the same name makes it easier for me to find more relevant things, and it can be tremendously useful to be able pull over results and intuitions from other contexts. There's no good reason to demand an additional translation step when we're talking about the same things.


I find that talk really painful. I do not recommend it. It is mean spirited, full of misleading characterizations, and - I think - it misses the core point of functional programming. Gary Verhaegen's comment "this was a waste of time" addresses the many problems effectively. Brian Craft's "ugh" addresses others. Either of those comments are more worthwhile than the video itself.


> I find that talk really painful.

I suspect it may be painful in some cases, it's somewhat like a clash of different cultures. The guy is firmly rooted in Smalltalk and Lisp schools of thinking and everything he says is rather obvious for those who share the same background. On the other hand it may be obviously false for people following ML tradition and even more so for recent converts.

Watching this talk calmed me down. It turns out that I'm not an idiot, that I know how to do FP (because I quite like it!) and that there's no need to change my ways just because some vocal minority proclaims me a heretic for preferring meaningful names or no-nonsense IO handling. On the whole I think it was an hour well-spent. YMMV ofc.


That makes sense. Your points are well taken. Also, in retrospect, I realize that my reply wasn't helpful for the discussion at hand.


None of those names "mean nothing"; they all have very specific meanings. It's true that in the case of return, the meaning in the context of Haskell monads is somewhat special, but since it's part of the actual syntax of Haskell do-notation, it's particularly important to address it in Haskell-oriented monad tutorials, whether or not it was the best choice from a syntax design perspective.


You may already be aware of this, but for those who aren't: `return` isn't actually part of do-notation syntax, it's a function name. It's true that do-notation must result in a monadic value, but that's just because of the type. `return` can be used to construct that monadic value, but there are many other ways to do so:

    func1 :: Maybe Int
    func1 = do x <- foo
               y <- bar
               return (x + y)  -- Using return

    func2 :: Maybe Int
    func2 = do x <- foo
               y <- bar
               Just (x + y)  -- Building a monadic value explicitly

    func3 :: Maybe Int
    func3 = do x <- foo
               y <- bar
               plusJust x y  -- Using some other function

    plusJust :: Int -> Int -> Maybe Int
    plusJust x y = Just (x + y)
Also, since `return` is a function, we can use it outside do-notation:

    wrapAndApply :: a -> (a -> b) -> [b]
    wrapAndApply x f = fmap f (return x)
What makes `return` "special" is that it's a method of the `Monad` typeclass. In other words, the name `return` is overloaded to work with any instance of `Monad` (`IO`, `Maybe`, `List`, etc.), depending on the type that's required of it.

Loads of other functions are overloaded like this, eg. `+` has implementations for `Int`, `Float`, etc. so it's not monad-specific.

The part which is monad-specific is that Haskell's do-notation is hard-coded to the built-in Monad typeclass. We're completely free to make our own monad implementation, separate to Haskell's built-in one, but we won't be able to use do-notation with them unless we implement the built-in `Monad` typeclass as well. That could be as simple as mapping one name to the other:

    instance MyMonad a => Monad a where
        return = myReturn
        (>>=)  = myBind


"The part which is monad-specific is that Haskell's do-notation is hard-coded to the built-in Monad typeclass."

That's not quite true, at least in GHC. It's hard-coded to use (>>=) and fail, but the only thing that prevents you from defining your own is the name collision. With -XNoImplicitPrelude, you can (and they can even be locally scoped - I did some cute things with this once that I'd never want to see in production code...).


No. You desperately insist on giving them meaning. And sometimes, which is worse, the meaning you want to give them is completely unrelated or even opposed to everything they are or do.

Aside: You realize that the word "monad" (and "dyad") already has a very specific meaning and that you assaulted that meaning and now are trying to beat the poor word into submission? If I had to choose I'd say that J programmers have much more of a right to use and define this word: a single-argument function ("mono") makes more sense to be called a "monad" than an instance of "flat-mappable" interface.

> since its part of the actual syntax of Haskell do-notation

It's not, which makes using it even dumber - as it's no problem at all to change it.

F# uses monads extensively, as one of the selling points of the language, yet the m-word is rarely if ever mentioned in the docs. That's because the designers of that language were rational enough to realize that "computation expression" is a name that both conveys some basic intuition about the thing in question and is concise enough to allow talking about the thing abstractly. From my perspective I don't see any arguments at all for the persistent use of the M-word by the community of some programming language. Other than trying to be different, or something.

Believe it or not, names do matter. A good name speeds up learning/understanding considerably; a bad name slows it down. It's sometimes - rarely! - worth it to invent a completely new name. That's when the thing you want to name really is dissimilar to anything else, and when re-using some name would introduce false intuitions about the object. But monads ARE NOT SUCH THINGS (sorry for shouting). "You could have invented monads", and quite possibly you did a few times already, and that probably wouldn't be the case if the idea were that ground-breaking, that unprecedented.

Ok, nevermind - I'm getting angry for no reason, I should stop it already.


> You desperately insist on giving them meaning.

They have meaning in the same sense that any words have meaning -- that is, they communicate meaning between an existing group of people.

> And sometimes, which is worse, the meaning you want to give them is completely unrelated or even opposed to everything they are or do.

Like many words, their meaning in a specific technical context differs from their meaning in other contexts. This is not particularly unusual.

> Aside: You realize that the word "monad" (and "dyad") already has a very specific meaning

"Monad" has several distinct well-established meanings in different domains besides its use in Category Theory (and thus Haskell). So what?

> and that you assaulted that meaning and now are trying to beat the poor word into submission?

The meaning of "monad" in Haskell comes directly from its meaning in Category Theory, which is inspired by (though not the same as) its earlier use in mathematics, stretching back to its use in metaphysics (particularly, through Leibniz's Monadology), which has its roots in its use in Classical philosophy, from whence also come all its other uses.

> If I had to choose I'd say that J programmers have much more of a right to use and define this word

Well, you don't have to or even get to choose that; words can and do have uses in different contexts and you don't get to choose one and make it the exclusive use of the word because you like it more.

> a single-argument function ("mono") makes more sense to be called a "monad" than an instance of "flat-mappable" interface.

Not really. Sure, a single argument function might have a better claim to the title "mono-argument function" but "-ad" isn't generally a suffix that means "-argument function".


Ok, so I was rather rude unnecessarily, sorry about that. I agree with what you wrote about words:

> words can and do have uses in different contexts and you don't get to choose one and make it the exclusive use of the word because you like it more.

it's just that I think the same applies to concepts. You shouldn't be able to "own" the concept and to insist that it should be called as you want in every context. I see comments and posts about how "X or Y is just a monad!" to be exactly this: people trying to "own" a concept.

Just as musicians, mathematicians, and programmers can all use a single word for different things, different programmers should be able to use the same concept without using the same word for it.

In short (directed to random commenters on the Internet, not to you personally): stop correcting me when I say "computation expression builder"; don't say that what I "really mean" is a monad. It's not: what I "really mean" is an abstract concept, and we both know its properties, and you have no right to oppress me because I chose a different word for it.


Almost everyone who "knows monads well" agrees with you. Studying monads is silly. It's a pattern you'll eventually recognize, and it just so happens to have a syntactic trick. Study particular instances.


I'm 99% sure this is the way to do it. You can't explain any field of math, much less category theory, without doing exercises.


It seems like those with functional programming experience either already understand monads or grasp them quickly. At the same time, those without it instinctively glaze over when shown Haskell, or a Lisp.

Would it be better to use Python or Javascript, which more people understand but which still have that functional flair? Just an idea. Either way, I look forward to reading it!


Dynamically typed languages are not the best choice when you try to explain algebras that work on the type level, I guess.


> It seems like those with functional programming experience either already understand monads or grasp it quickly.

That's probably survivorship bias (amongst Haskellers, at least); i.e. those who haven't grasped it yet probably won't class themselves as Haskellers.

> Would it be better to use Python or Javascript which more people understand but still has that functional flair?

The problem with using such languages is that they're always dealing with concrete values; in particular they have no concept of interfaces (like Java) or typeclasses (like Haskell). It's certainly possible to use monads in such languages, but it's often hard to justify their use, since there are always less-complex ways to implement particular examples, and without interfaces it's hard to relate the examples together.

For example, here are a bunch of monad implementations ("return" and "bind", AKA ">>=") in Javascript, but it's not at all obvious why they are related to each other:

    function state_return(val) {
        return [null, val];
    }

    function state_bind(x, f) {
        return f(x[0], x[1]);
    }

    function maybe_return(val) {
        return [val];
    }

    function maybe_bind(x, f) {
        return (x.length > 0)? f(x[0])
                             : [];
    }

    function list_return(val) {
        return [val];
    }

    function list_bind(x, f) {
        return [].concat.apply([], x.map(f));
    }

    function read_return(val) {
        return function(x) { return val; };
    }

    function read_bind(x, f) {
        return function(arg) {
            // run x, pass its result to f, and run the
            // resulting reader in the same environment
            return f(x(arg))(arg);
        };
    }
The problem is that in Javascript, the implementation details are there for all to see. I might want to give you a string which depends on some integer state, but you can always see that it's actually implemented as an array containing an int and a string. You're also free to mess up that array by adding or removing elements from it, which are perfectly valid array operations, but which makes no sense for "stateful values".

Interfaces like monads are useful when we're not free to mess around with the implementation; when we must perform all actions through a limited API. When we're dealing with such concerns every day ("I'd like to allow X, but don't want anyone to abuse it and end up with Y") then monads (and functors and applicatives, which they're based on) are really useful as standard APIs for allowing arbitrary operations to be performed, under the control of the API.

The only example above that even comes close to this idea is the "reader" monad, implemented with `read_return` and `read_bind`. This is because values in the reader monad let us manipulate the return values of functions, and even in dynamic languages a first-class function is pretty much a black box, so there's little we can do to mess it up.
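To make the reader example concrete, here is roughly what using it looks like. A hypothetical Python sketch with the same shape as the JS above (`ask` and `greeting` are my own illustrative names):

```python
# A "reader" value is a function from a shared environment to a result.
def read_return(val):
    return lambda env: val

def read_bind(x, f):
    # run x, feed its result to f, and run the new reader in the same env
    return lambda env: f(x(env))(env)

# A reader that simply hands back the environment itself.
ask = lambda env: env

# Build a computation that reads config without threading it by hand.
greeting = read_bind(ask, lambda cfg: read_return("hello, " + cfg["name"]))
print(greeting({"name": "world"}))  # hello, world
```

The environment is passed implicitly through `read_bind`, which is the whole point: callers never mention it until the final call.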


What about monads in terms of JS over here:

https://blog.jcoglan.com/2011/03/05/translation-from-haskell...


Sounds not-so-dissimilar to Tom Stuart's "Refactoring Ruby with Monads" (https://speakerdeck.com/tomstuart/refactoring-ruby-with-mona...) which is excellent.


Have you seen "You Could Have Invented Monads"?


I understand concepts better when taken outside their ''''natural'''' environment (namely strongly parametrically typed [lazy] languages). I often link these two:

http://stackoverflow.com/questions/11871065/monads-in-javasc...

http://dorophone.blogspot.fr/2011/04/deep-emacs-part-1.html


I'm not getting the idea of monads here. From my understanding, a monad does a series of transformations (composition, unit, lift) on a function to do its task. So is a monad similar to the Adapter pattern in Java, which makes two classes work together even though they weren't meant to?

Because it does transform the class/interface to work with another class, right?

Am I missing anything here?


It's not quite like the Adapter pattern. Instead of adapting two different classes to work together, a monad's for extending a type to add new behavior.

Think of something like Java's Iterable<T>. It takes a type T and extends it with the concept of a sequence of items. It also gives you all sorts of extra goodies for doing useful stuff with sequences, such as a function to transform from a sequence of one type to a sequence of another (Iterables.transform) and Iterables.concat, which takes two instances of Iterable<T> and concatenates them.

This is close to, but not quite, what a monad really does. To get all the way there you'd want to have a function called Iterables.transformAndConcat() with a signature roughly like this:

    Iterable<T> transformAndConcat(Iterable<F> from, Function<F,Iterable<T>> func)
It differs from basic transform because the function it takes maps from F to Iterable<T> rather than simply from F to T. It uses that to get a set of Iterable<T> that it then concatenates to produce the final output.

This ends up being a really powerful function. It can be used to do all sorts of things, including replacing the behavior of both transform (by supplying a function that always returns an iterable of length 1 for every input) and concat (by supplying an identity function). More generally, it lets you take functions that take some input and output an Iterable of some type and compose them to produce more complex behaviors.
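The two recovery tricks in the paragraph above (a singleton-returning function gives you transform; the identity gives you concat) can be checked directly. A Python sketch over plain lists, with my own hypothetical names standing in for the Java ones:

```python
def transform_and_concat(xs, func):
    # the parent's transformAndConcat, for plain lists:
    # func maps each element to a list; results are concatenated
    return [y for x in xs for y in func(x)]

def transform(xs, f):
    # map, recovered by wrapping each result in a length-1 list
    return transform_and_concat(xs, lambda x: [f(x)])

def concat(xss):
    # concatenation, recovered by passing nested lists through unchanged
    return transform_and_concat(xss, lambda xs: xs)

assert transform([1, 2, 3], lambda x: x + 1) == [2, 3, 4]
assert concat([[1, 2], [3]]) == [1, 2, 3]
```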

As a more expansive example of the concept's power, even for OO languages, let's skip over to the other managed platform: a version of this function (in this case called SelectMany) forms the core of .NET's LINQ library. LINQ's query syntax includes facilities for all sorts of SQL-style querying, including grouping and joins, and virtually all of it ends up being compiled down to calls to SelectMany(). In turn, this means that if you're writing code in C#, you can use the query syntax with any generic class that conforms to a few rules, including having an implementation of SelectMany().


I don't disagree with anything you said, but to pin down some concepts a tiny bit in the hopes that someone might find it helpful...

Java's Iterable is a "type constructor" - a function that turns types into other types. The domain of this function is any (non-primitive?) Java type. The range is things of the form Iterable<T>.

Some type constructors have a special property. In programming speak, they're "things you can sensibly map over." From a math perspective, they're one half of a functor - a "structure preserving map between categories". The type constructor maps the category's objects, we need to add a function to map arrows. In Haskell, we do that by creating an instance of Functor, where we define the function "fmap" for the specific type constructor.

For some functors, you can reasonably define a function like your "transformAndConcat" ("bind" in math and Haskell) and something that produces a singleton of whatever iterable (or the equivalent, for things that aren't containers). The triple of functor, transformAndConcat, and singleton (such that they obey the Monad laws) is a monad - in the same sense that a pair of a set and an associative operation is a semigroup.

The way math defines a monad is subtly different - it uses concat (called "join", or "mu") directly, but you can define transformAndConcat using only concat and map (and concat using only transformAndConcat and id), so these are equivalent; they just look a little different.


I like to think of a Monad in terms of the Monad laws, it makes it much less abstract. https://www.haskell.org/haskellwiki/Monad_laws

Different monads can have drastically different implementations but they should all obey those laws.
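The three laws can be spot-checked mechanically, e.g. for the list monad. A Python sketch (my own names; `unit` plays the role of return):

```python
def unit(x):
    # return for the list monad
    return [x]

def bind(m, f):
    # (>>=) for the list monad: map f over m, then flatten
    return [y for x in m for y in f(x)]

f = lambda x: [x, x + 1]
g = lambda x: [x * 10]
m = [1, 2]

# Left identity:  return a >>= f  ==  f a
assert bind(unit(3), f) == f(3)
# Right identity: m >>= return  ==  m
assert bind(m, unit) == m
# Associativity:  (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assert bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))
```

Of course, a few passing cases are evidence, not proof; the laws have to hold for all inputs.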


To be honest, I have a tough time reading Haskell code or even the laws.

I understood monads in terms of javascript, as it is written over here: https://blog.jcoglan.com/2011/03/05/translation-from-haskell...


And anything that obeys those laws is a monad (if I understand correctly).

[Edit: And if the implementing pieces are named correctly.]


I think "fmap f m = m >>= (return . f)" also needs to obey the functor laws. I can quickly show that "fmap id = id" follows from the monad laws, but I don't know whether "fmap (p . q) = (fmap p) . (fmap q)" does.


Ah, it does seem to follow:

    M1: return a >>= f  = f a
    M2: m >>= return    = m
    M3: (m >>= f) >>= g = m >>= (\ x -> f x >>= g)

    (fmap f . fmap g) m = (fmap f . fmap g) m

    by defn of (.)      = fmap f (fmap g m)
    by defn of fmap     = fmap f (m >>= return . g)
    by defn of fmap     = (m >>= return . g) >>= return . f
    by M3               = m >>= (\ x -> (return . g) x >>= return . f)
    by defn of (.)      = m >>= (\ x -> return (g x) >>= return . f)
    by M1               = m >>= (\ x -> (return . f) (g x))
    by defn of (.)      = m >>= (\ x -> return (f (g x)))
    by defn of (.)      = m >>= (\ x -> (return . (f . g)) x)
    eta reduction       = m >>= return . (f . g)
    by defn of fmap     = fmap (f . g) m

    (fmap f . fmap g) m = fmap (f . g) m


Monads, applicatives, functors, and monoids are different algebras. OK. That's really helpful, actually. (At least, I suspect it's going to be really helpful after I've thought about it for a few days.)

But monads are the way to do sequencing? Only if you insist on putting yourself in a functional straitjacket that's so tight that you cannot escape. Then, yes, you do sequencing via monads because you don't have any other way to do it. But if you're a pragmatist rather than an ideologue, it seems more reasonable to just do sequencing by doing things in sequence, rather than forcing yourself to do unnatural contortions.


"Monads, applicatives, functors, and monoids are different algebras. OK. That's really helpful, actually. (At least, I suspect it's going to be really helpful after I've thought about it for a few days.)"

I think it's not quite accurate. They're all algebraic structures, but an "algebra" is a more specific thing and I don't think they are algebras specifically (except maybe monoid?).

http://en.wikipedia.org/wiki/Algebraic_structure http://en.wikipedia.org/wiki/Algebra_%28ring_theory%29

"But monads are the way to do sequencing? Only if you insist on putting yourself in a functional straitjacket that's so tight that you cannot escape."

"Chaining actions with bind is the only way to combine actions" in Haskell is more strongly motivated by keeping things sane in the face of non-strict semantics than by a "functional straight-jacket". In fact, it wasn't the way Haskell initially did IO.

"Then, yes, you do sequencing via monads because you don't have any other way to do it. But if you're a pragmatist rather than an ideologue, it seems more reasonable to just do sequencing by doing things in sequence, rather than forcing yourself to do unnatural contortions."

"You have to do things my way, or you're not pragmatic!" seems to be your dogma.

Reifying actions in a way that lets you talk about them in the same way you talk about the rest of your data is sometimes a very useful thing.


> "Chaining actions with bind is the only way to combine actions" in Haskell is more strongly motivated by keeping things sane in the face of non-strict semantics than by a "functional straight-jacket".

OK, if you have non-strict semantics (by which I presume you mean non-sequential), then you need some way to make things sequential when they have to be, and you have no ordinary (non-jumping-through-hoops) way to make them so.

> "You have to do things my way, or you're not pragmatic!" seems to be your dogma.

Shoe's on the other foot. Haskell's the one that insists that I have to do it Haskell's way. I complain, and you accuse me of being dogmatic. Doesn't work that way.

But I suppose you'd say that I have the choice to not use Haskell, so stop complaining. And in fact, I take that choice, precisely because I consider the language's dogmatism to be less pragmatic (and therefore useful) than a multi-paradigm approach.


"if you have non-strict semantics (by which I presume you mean non-sequential)"

http://en.wikipedia.org/wiki/Strict_programming_language

"Haskell's the one that insists that I have to do it Haskell's way."

... when you're writing Haskell. But I am the one who sometimes wishes I could manipulate actions in that way when I am writing C.

"But I suppose you'd say that I have the choice to not use Haskell, so stop complaining."

I'm saying that it obviously doesn't make pragmatic sense to require things in contexts where they don't make sense, but that 1) some things can make sense in more contexts than you might think, and 2) that you would do well (for pragmatic reasons) to actually learn about this stuff before deciding you don't need it.


> I'm saying that it obviously doesn't make pragmatic sense to require things in contexts where they don't make sense...

Absolutely.

> you would do well (for pragmatic reasons) to actually learn about this stuff before deciding you don't need it.

I keep learning about it. I still haven't seen any need for it. But then, my whole career is in a context where it doesn't make sense...


"But then, my whole career is in a context where it doesn't make sense..."

Well, like I said, applicability can be occasionally surprising. What is your background like?


Embedded systems. It's all about driving external hardware, and much of it has to be sequential.


Gotcha. I certainly agree (albeit with low confidence - never sure what someone somewhere has done) that would make no sense as a GHC target. But that doesn't mean the concepts necessarily have no applicability. As I said, I've found myself wishing for some of Haskell's capabilities while writing C. That said, when the usefulness is subtle it's probably not the place to start. I'd dig deeper into some of the other monads first and basically forget about IO until you actually want to code in Haskell.

Incidentally, you can write Haskell to write your embedded C: https://hackage.haskell.org/package/atom

I haven't actually played with it, but understand it's used in production.


'there’s no such thing as the “IO monad” or the “List monad”'

I don't think I agree (depending on just what is meant by "such thing as")... I would say there is such thing as "the IO monad" in the same sense as there is such thing as "the rational field" or "the Z/5 group". It's a particular example of a more general algebraic structure, and you'd use the phrase when you're talking particularly about the structure as it occurs in that instance.


Further evidence for my hypothesis that everyone who has ever understood monads has gone on to write a blog post trying to explain them.


This is because monads are quite simple and powerful abstractions, yet they've proven hard to explain effectively, because contrary to other well-known abstractions (let's take as an example the Iterator), monads are more abstract and have been used to describe things that don't bear a resemblance to each other in the minds of beginners.

Let's take the Iterator - which is simple to explain (an API that allows you to iterate over a collection of items only once, a collection which may be infinite), but which on the other hand has strong limitations on what you can implement under it (e.g. the API calls are synchronous and you can only get the items one by one, sequentially). An Iterator can be a Monad if you provide a flatMap / bind implementation with the right behavior, and lo and behold, this operation doesn't care about how the items are produced (synchronously or asynchronously, or in what order). In OOP terms, you could have Iterator be a subtype of Monad, with Monad being the more generic interface.

So it's hard to explain because it is very generic, can be applied to many things and everybody has an aha! moment once they spot the resemblance in the various monad implementations, hence the urge to write a blog post.


... and failed:

> Once you understand what monads are, and why they exist, you lose the ability to explain them to anybody. -- Douglas Crockford [1]

[1] http://www.youtube.com/watch?v=dkZFtimgAcM


How can one lose it? Before understanding it, one also has no ability to explain.



