
Monads without pretension (2011) - signa11
http://rcrowley.org/2011/09/21/monads.html
======
throwaway283719
The whole point about monads is that they provide a uniform interface for
working with values that carry some extra context. The genericity of the
interface is nice, in that it allows you to write generic monad combinators,
but they are only useful because of the context (effects, control flow, etc).
If you don't describe any of the implementations, you are missing the point!

The best introduction to monads was written in 2006 (more than eight years ago
now) and I don't imagine that a better one will be written in the near future
-

[http://blog.sigfpe.com/2006/08/you-could-have-invented-monads-and.html](http://blog.sigfpe.com/2006/08/you-could-have-invented-monads-and.html)

~~~
chriswarbo
It's not even about the effects, really. It's simpler to think of Monad as an
interface for manipulating encapsulated data in a way which keeps it
encapsulated.

The simplest way to encapsulate data is to wrap it using a constructor which
we keep private to our module:

    
    
        -- MkEncap is not exported, so other modules can't wrap or unwrap
        module M1 (Encap(), val1, val2) where
        data Encap a = MkEncap a

        val1 :: Encap Int
        val1 = MkEncap 5

        val2 :: Encap Int
        val2 = MkEncap 10
    

Other modules importing M1 get access to the Encap type, val1 and val2, but
_not_ the MkEncap constructor. They can use val1 and val2 as-is, but they
can't construct new Encap values or take apart existing Encap values to get at
their contents. The problem is, there's not much we can do with this
interface.

One way we can make this more useful is to allow applying a function to an
encapsulated value. That's what Functor is for, so we can add this to M1:

    
    
        instance Functor Encap where
          fmap f (MkEncap x) = MkEncap (f x)
    

Now users of M1 can transform encapsulated values without being able to break
the encapsulation. For example:

    
    
        val1Plus7 :: Encap Int
        val1Plus7 = fmap (+7) val1
    
        val2Str :: Encap String
        val2Str = fmap show val2
    

We're still pretty limited though, since there's no way to combine
encapsulated values into new encapsulated values, or to encapsulate our own
values. That's what Applicative provides, by letting us construct encapsulated
values _without_ gaining the ability to destruct them, and by allowing
encapsulated functions to be applied to encapsulated values (since functions
are closures, this lets us gather up encapsulated values and combine them
arbitrarily):

    
    
        instance Applicative Encap where
          pure x = MkEncap x
          (MkEncap f) <*> (MkEncap x) = MkEncap (f x)
    

Now users of M1 can encapsulate and combine values, like this:

    
    
        -- val1 + val2
        val1PlusVal2 :: Encap Int
        val1PlusVal2 = fmap (+) val1 <*> val2
    
        -- New encapsulated string
        val3 :: Encap String
        val3 = pure "Hello world"
    
        -- val3 repeated val1 times
        val3Repeated :: Encap String
        val3Repeated = fmap rep val1 <*> val3
                       where rep n _ | n <= 0 = ""
                             rep n s          = s ++ rep (n-1) s
    

This is quite a powerful interface, but one thing we can't do is `collapse`
double-encapsulated values into single-encapsulated values. That's what Monad
provides:

    
    
        join :: Encap (Encap a) -> Encap a
        join (MkEncap (MkEncap x)) = MkEncap x
    

An alternative, but equivalent, definition is to allow an encapsulation-
producing function to be applied to an encapsulated value without double-
wrapping the result. Haskell's Monad is defined this way:

    
    
        instance Monad Encap where
          -- (>>=) :: Encap a -> (a -> Encap b) -> Encap b
          (MkEncap x) >>= f = f x
    

This kind of encapsulated value turns out to be able to hold effects without
breaking the language, and these interfaces turn out to be powerful enough for
general computation.
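The same interface can be sketched in Python, the language of the article under discussion. This is a loose translation with hypothetical names (`Encap`, `fmap`, `ap`, `bind` are not from the article), and Python can't hide the constructor the way the Haskell module system does, so the encapsulation here is by convention only:

```python
class Encap:
    """A loose Python sketch of the Haskell Encap example; hypothetical names.
    The leading underscore marks _value as private by convention only."""
    def __init__(self, value):
        self._value = value

    def fmap(self, f):
        """Functor: transform the contents, keeping them wrapped."""
        return Encap(f(self._value))

    def ap(self, other):
        """Applicative <*>: apply a wrapped function to a wrapped value."""
        return Encap(self._value(other._value))

    def bind(self, f):
        """Monad >>=: apply an Encap-producing function without double-wrapping."""
        return f(self._value)

val1 = Encap(5)
val2 = Encap(10)
# fmap (+) val1 <*> val2, with the (+) curried by hand:
val1_plus_val2 = val1.fmap(lambda x: lambda y: x + y).ap(val2)  # Encap(15)
```
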

~~~
jberryman
>This kind of encapsulated value turns out to be able to hold effects without
breaking the language

I wonder if you're hand-waving a bit here. To me "encapsulated values"
describes Identity or Maybe, but really doesn't work (IMO) for e.g. State or
IO where bind is composition.
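To make "bind is composition" concrete, here is a minimal State sketch in Python (a hypothetical `State` class, not the article's code or any library API), where the "value" is a state-transforming function and bind composes such functions:

```python
class State:
    """Wraps a function s -> (result, new_state). A sketch with hypothetical
    names, not the article's code or any library API."""
    def __init__(self, run):
        self.run = run  # run :: state -> (value, new_state)

    def bind(self, f):
        # bind is composition: run this computation, then feed its result
        # to f and run the State computation that f returns.
        def composed(s):
            a, s2 = self.run(s)
            return f(a).run(s2)
        return State(composed)

    @staticmethod
    def unit(x):
        return State(lambda s: (x, s))

def get():
    """Read the current state as the result."""
    return State(lambda s: (s, s))

def put(new_s):
    """Replace the state, returning a dummy result."""
    return State(lambda s: (None, new_s))

# Increment the state, returning its old value.
prog = get().bind(lambda x: put(x + 1).bind(lambda _: State.unit(x)))
print(prog.run(41))  # (41, 42)
```

There is no wrapped-up value here until `run` is called with an initial state, which is why "wrapper" intuitions break down for State and IO.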

~~~
chriswarbo
I would use the term "wrapped-up" where you've used "encapsulated", and in
fact I specifically avoided talking about wrappers for this exact reason :)

Maybe my terminology could have been better, but I meant "encapsulated" in
analogy to OOP, which advocates "encapsulating" all data via methods. The OOP
definition of encapsulation includes using getters/setters, which is like
having "wrapped-up" properties, but the idea is that we can go beyond this to
calculate the data in arbitrary ways without our clients having to know about
the implementation. That's what I was trying to get at here; for example, the
implementation of IO involves horrible imperative yukiness, but we (the
client) don't need to know that: we just use the interface, and if our
functions ever get called, they will be given an appropriate argument.

------
elclanrs
In dynamic languages I don't see the need to create a Monad interface. I'd
create classes/objects and simply use instances of those objects. As long as
they conform to the desired interface, and you can check their relationships,
then it should work; and use monkey-patching if necessary, embracing the
dynamic nature of the language. I don't think it is possible to fully
translate Haskell examples without the types, but you can adapt the ideas to
other languages and get very similar functionality, in JavaScript for example:

    
    
        class Maybe {
          constructor(value) {
            // Dispatch only when constructed directly as `new Maybe(...)`;
            // subclass constructors reach here via super() and must not recurse.
            if (new.target === Maybe) {
              if (value != null) {
                return new Just(value)
              }
              return new Nothing()
            }
          }
          bind(f) {
            if (this instanceof Just) {
              return f(this.value)
            }
            return new Nothing()
          }
        }

        class Just extends Maybe {
          constructor(value) {
            super()
            this.value = value
          }
          toString() {
            return `<Just ${this.value}>`
          }
        }

        class Nothing extends Maybe {
          constructor() {
            super()
            this.value = null
          }
          toString() {
            return '<Nothing>'
          }
        }

        var result = new Maybe(2).bind(x => {
          return new Maybe(3).bind(y => {
            return new Maybe(x + y)
          })
        })

        console.log(result.toString()) // <Just 5>

------
mjburgess
Monads have so little pretension these days, you'd think they were the salt of
the earth. Working classes, for working people.

~~~
chriswarbo
Monads were pretentious in the '90s. The pretention switched over to Arrows in
the '00s and to Algebraic Effects in the '10s ;)

~~~
mjburgess
Yeah, Arrows is my "next thing".

~~~
jerf
You may not want to bother. Their luster has faded as it has been determined
they were a particularly complicated way of representing Applicative +
Category: [http://just-bottom.blogspot.de/2010/04/programming-with-effects-story-so-far.html](http://just-bottom.blogspot.de/2010/04/programming-with-effects-story-so-far.html)
In theory there's nothing necessarily wrong with them but I
think the consensus is that in practice you're better off understanding
Applicative (very simple) and Category (also very simple), and then freely
combining those however you need to accomplish things, rather than trying to
use Arrow (surprisingly complicated).

By contrast, both Applicative and Category are very, very useful.

------
bkeroack
Total nitpick, but it annoys me when people implement something in Python but
use ancient syntax.

    
    
      return '<M a: %s>' % self.a
    

Should be:

    
    
      return '<M a: {}>'.format(self.a)
    

And:

    
    
      class M(object):
    

Should be:

    
    
      class M:
    

etc.

~~~
jonhohle
I can see why you might want the second change (discarding superfluous
information), but string formatting looks better to me in the first case: it's
more terse, and embeds type information in the format string using a well
known and understood convention.

Why do you prefer the explicit method call over an operator?

~~~
bkeroack
It's arguable, but operator overloading should only be used (if at all) when
there is conceptual symmetry between the standard and overloaded usage (eg,
adding two datetime objects with '+').

I can't think of any way that taking the modulo of two strings logically
results in interpolation. If anything, it should do something like return the
original string with all instances of the argument _removed_. It's just
simpler to not support it at all and use the standard .format() method.

There's also the fact that it's deprecated, of course. :)

------
ddellacosta
What is the point of implementing monads in Python? I thought the point of
monads was that they provide a mechanism for introducing side effects
gracefully into a pure language (like, say, Haskell) while still allowing for
type safety, per correspondences between category theory and type theory.
Python is not statically typed and makes side effects pretty easy, if I recall
from the last time I wrote Python.

Seriously, what is this constant obsession with monads that has infected
programmers?

Why not be obsessed with, say, functors? I can then make statements like, "A
functor is a wrapper. Anyone who tells you otherwise is being obtuse" and be
just as correct, and sound just as smart.

 _wanders off grumbling to himself..._

~~~
freyrs3
> I thought the point of monads was that they provide a mechanism for
> introducing side effects gracefully into a pure language

That's roughly a description of the IO monad, which is a specific monad. The
IO monad is really a degenerate case, and most monads ( List, Cont, Maybe,
Writer, State, ... ) have nothing to do with effects.

Half the problem of talking about monads is that in their full generality
they're really just a description of a trivial algebraic structure that
doesn't really impart much intuition. Monads just get a lot of press because
they're one of the simplest examples of a structure that can't be compressed
easily in terms of common experience. There's nothing I can point to in our
everyday experience and say "monad" is like this.

------
lelf
This article can confuse people even more.

This is what a monad is (read: an _interface_ with two functions):

    
    
      class Monad m where
        (>>=)  :: m a → (a → m b) → m b
        return :: a → m a
    

Read: >>= is a function that takes an M[a] and a function from a to M[b], and
returns an M[b]. You can probably write it somehow in MyPy annotations.

(It's actually defined a bit differently in Haskell, but that doesn't matter here.)
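A rough MyPy-style rendering of that interface might look like the following. This is a sketch with hypothetical names: Python's type system has no higher-kinded types, so `Monad[B]` below cannot promise "the same monad" the way `m b` does in Haskell:

```python
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")

class Monad(Generic[A]):
    """Rough MyPy rendering of the two-function interface; a sketch only."""
    def bind(self, f: Callable[[A], "Monad[B]"]) -> "Monad[B]":
        raise NotImplementedError
    @classmethod
    def unit(cls, x: A) -> "Monad[A]":  # Haskell's 'return'
        raise NotImplementedError

class Identity(Monad[A]):
    """The trivial instance: a value with no extra context at all."""
    def __init__(self, value: A) -> None:
        self.value = value
    def bind(self, f: Callable[[A], "Monad[B]"]) -> "Monad[B]":
        return f(self.value)
    @classmethod
    def unit(cls, x: A) -> "Identity[A]":
        return cls(x)
```
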

The author however just implemented _one instance_ of Monad: Identity (the
most trivial). And called it a Monad. And then semi-checked laws on one value.

~~~
ksherlock
Do Haskell people realize that using Haskell to describe monads isn't actually
helpful?

~~~
orbifold
Well you could explain them in terms of category theory as an endofunctor T: C
-> C, together with two natural transformations eta : 1 -> T and mu : T^2 ->
T, that satisfy a number of axioms. The Haskell definition encodes a special
case, where the underlying "category" is "Hask", whose objects are Haskell
types and morphisms are Haskell terms. Of course this is strictly speaking not
a category, but it is close enough.

In mathematics monads usually arise from adjunctions between two functors. For
example, beginning with a set of elements, you can consider the free monoid
generated by it and then forget the monoid structure; this gives you a much
larger set. If you did this operation on a set of characters, you would get
the set of all strings of those characters. Here eta would be the operation
that, given a character in the character set, gives you the corresponding
string of length one, and mu would flatten a string of strings into a single
string by concatenation.
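As a sketch in Python (hypothetical functions, not from the comment's formalism), lists can play the role of free-monoid words: eta embeds an element as a one-element word, and mu flattens a word of words by concatenation:

```python
def eta(x):
    """Unit (eta): embed an element as a length-one word."""
    return [x]

def mu(word_of_words):
    """Multiplication (mu): flatten a word of words by concatenation."""
    return [x for word in word_of_words for x in word]

# For characters this mirrors the string example:
print(mu([eta("a"), eta("b"), ["c", "d"]]))  # ['a', 'b', 'c', 'd']
```
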

~~~
corysama
Unfortunately, you've taken it two steps in the opposite direction from
helpful. The point is that when your audience is neither Haskellers nor
mathematicians (i.e. the vast majority of programmers on Earth), repeatedly
explaining monads in Haskell and/or mathematics is not helpful.

Imagine if you wanted to learn about monads, but every article went "Monads
are a simple and powerful idea that, interestingly enough, can be very, very
well expressed in Latin. Therefore, I will switch to Latin for the remainder
of this article. Oh... You haven't studied Latin? You really should! It's
really very useful. Moving on... Cogitus sin extricatus..."

If you want your audience to understand, you need to explain it in C.

~~~
freyrs3
C is probably the worst language to implement monads in, because its type
system really lacks polymorphism, higher-order functions, and any sort of
type-class overloading to clean up the syntax. I mean you could hypothetically
pass around a record full of void function pointers to get around all this,
but it would be so ugly and unintuitive that it would be silly to try and
explain monads this way.

~~~
dllthomas
Actually, you can do a bit more than that if you specialize. It'll still be
ugly, but it might help people grasp what's going on.

~~~
corysama
I think you have the same idea I do. In a statically compiled language, all of
the high order polymorphism eventually compiles down to actual, concrete code
that should be expressible in hand-specialized C structs and functions
performing a single, specific task. Haskell makes that specialization process
incredibly convenient and C makes it incredibly inconvenient. But, the point
is not to ship a bunch of production code quickly. The point is to provide a
low-abstraction example to demonstrate concretely what the compiler is doing
for you without prefacing the example with "Assuming of course that you have
already studied Haskell..."

~~~
freyrs3
You'd end up building a small functional runtime. That's a fun project to
understand compilers better. But given that Haskell types are erased at
runtime the very thing that makes monads monads wouldn't even be around
anymore. All the values would be represented uniformly by some *StgClosure
struct and the whole program would just be a mess of casts and projections
into these values.

~~~
corysama
So, we're talking about Haskell passing around void pointers and later doing
cross-your-fingers typecasts on them? That doesn't sound very much like the
Haskell I keep hearing about. I was expecting something more like an ungodly
stack of C++ templates eventually building up a return type containing a
record of all the delayed side effects of the function. That C++ template could
then be manually flattened to a C struct. It would be a gross amount of manual
labor. But, it would also be typesafe in plain C.

~~~
dllthomas
There is some broken reasoning here.

There is a translation from any typed language into an untyped language.
Writing code in that untyped language is not going to be type safe, while the
code _generated_ (correctly) in that untyped language _from_ the typed
language is still guaranteed to be correct.

It is entirely possible that the only way to get anything safe out of some
Haskell code is to rely on checks the Haskell compiler gives you at compile
time, which the C compiler _cannot give you_.

That said, people often underestimate the kinds of guarantees you can bang out
of a C compiler, at the cost of a bit of verbosity.

------
kazinator
The following may help some fellow Lisp-heads understand some aspects of
monads:

[http://www.kylheku.com/cgit/lisp-snippets/tree/monads.lisp](http://www.kylheku.com/cgit/lisp-snippets/tree/monads.lisp)

Monads are developed starting with CLOS, then wrapping macros around them for
doing monadic comprehensions. Finally, a monad-defining macro is introduced
which generates the class and methods from a succinct syntax, and is used to
write several monads, including a state transformer monad.

------
louisdefunes
Is it me, or is the title spelled wrong? Shouldn't it be "pretention"?

~~~
bhauer
According to Wiktionary, you are using an older form of "pretension."

[http://en.wiktionary.org/wiki/pretention](http://en.wiktionary.org/wiki/pretention)

