
Monads for functional programming (1995) [pdf] - jxub
http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf
======
triska
In my experience, Haskell's comparatively complex and - arguably - also rather
_ad hoc_ type system can make a discussion of monads quite involved and hard
to follow.

For those who approach Haskell with some Prolog or Mercury background, it may
be useful to know that Prolog's Definite Clause Grammars (DCGs) are very
similar to monads. In Prolog, a DCG clause of the form

    
    
        head -->
            goal_1,
            goal_2,
            ...,
            goal_n.
    

is implicitly augmented with 2 additional arguments that are _threaded
through_ as follows:

    
    
        head(S0, S) -->
            goal_1(S0, S1),
            goal_2(S1, S2),
            ...,
            goal_n(S_(n-1), S).
    

It is in these 2 arguments that you can _implicitly_ pass around additional
information such as counters, list differences, "global" variables, and states
in general. Prolog provides dedicated syntax (called _semicontext notation_ )
to access these implicit arguments if needed. Often, they are simply threaded
through implicitly throughout large portions of code without any modification.

Thus, it is obvious that DCGs, and hence monads, can save a lot of typing that
would arise if you had to pass around these arguments explicitly. Also, they
let you focus on the _essence_ by making data that is only relevant to a small
set of predicates (or functions) _implicit_ in your programs.
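
For comparison, here is roughly what the same implicit threading looks like on the Haskell side: a minimal hand-rolled state monad sketch (the library version lives in Control.Monad.State; the names tick/countAll are made up for illustration). Each step is a function from the incoming state (the DCG's S0) to a pair of result and outgoing state (the DCG's S):

```haskell
-- A minimal hand-rolled state monad: each step maps the incoming state
-- to (result, outgoing state), and >>= threads the state through.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

-- Increment a hidden counter; no state argument appears at the call sites.
tick :: State Int ()
tick = State $ \n -> ((), n + 1)

countAll :: [a] -> State Int [a]
countAll = mapM (\x -> tick >> return x)

main :: IO ()
main = print (runState (countAll "abc") 0)  -- ("abc",3)
```

As with the DCG's S0 and S, the counter is threaded through countAll without being mentioned at each step.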

It may be easier to understand these concepts if you can discuss them in
isolation from a type system.

~~~
michielderhaeg
I find calling Haskell's type system ad hoc rather surprising. Would you care
to elaborate on that?

~~~
triska
For example, aptly named _ad-hoc_ polymorphism violates referential
transparency:

    
    
        ( (7^7^7`mod`5`mod`2)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
    

This yields:

    
    
        (True,False)
    

suggesting that the same arithmetic expression is both 1 and 0.

In GHC, there is a dedicated flag to spot such cases (-fwarn-type-defaults).
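
The defaulting at work can be made visible by pinning the types explicitly. A sketch: at the unbounded Integer the expression is 1, while at the bounded Int it overflows (the 0 result assumes the usual 64-bit GHC Int, matching the (True,False) above).

```haskell
-- The "same" expression at two types: defaulting picks Integer, while
-- (!!) forces Int, where 7^7^7 overflows and the result differs.
e1 :: Integer
e1 = 7^7^7 `mod` 5 `mod` 2   -- 1

e2 :: Int
e2 = 7^7^7 `mod` 5 `mod` 2   -- 0 on 64-bit GHC, due to overflow

main :: IO ()
main = print (e1, e2)
```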

In general, the guarantees that the type system actually gives, and the ways
to specify them, appear somewhat unfinished and are also quite hard to
understand, often necessitating semantic restrictions and syntactic
extensions. For further examples, see for instance:

[https://stackoverflow.com/questions/27019906/type-inference-...](https://stackoverflow.com/questions/27019906/type-inference-interferes-with-referential-transparency)

[https://stackoverflow.com/questions/14865734/referential-tra...](https://stackoverflow.com/questions/14865734/referential-transparency-with-polymorphism-in-haskell)

~~~
joel_ms
They’re not the same expression though. The first gets defaulted to Integer,
the second is specialized to Int by the (!!) function. (Integer is unbounded,
Int is bounded)

The rest of your argument is fairly vague and unspecific. But I would suggest
trying to understand the underlying type theory instead of focusing on the
affordances that the type system makes for practicality and usability
concerns. In my experience that makes it a lot clearer why the type inference
works the way it does in some of the weird cases. (That’s not to say that
Haskell doesn’t have its warts..)

------
edflsafoiewq
The universal algebra perspective on monads is that m x is the set of
expressions (ie. ASTs) for a particular algebraic structure with variables
drawn from the set x. For example, you have a monad for abelian groups where
the expressions are things like a+b, 2a-b, 1. Note that m encodes both the
operations of the algebraic structure, like +, and the laws that they obey,
because we regard a+b and b+a to be the same AST (ie. these are considered
equal as a matter of syntax).

A concrete algebraic structure is a map m x -> x that evaluates expressions,
producing their value, eg. 7+3 -> 10. It essentially provides concrete
definitions for the syntactic operations in the ASTs.

Under this interpretation, the monad operations say "a bare value is an
expression, eg. a, b" (x -> m x) and "nested expressions can be simplified,
eg. (a+b)+(c) = a+b+c" (m m x -> m x). The monad laws essentially ensure
that no matter in which order you evaluate subexpressions and simplify an
expression, it always comes out to the same value in the end. This essentially
lets us state a Church–Rosser theorem for the maps m x -> x.
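
A concrete Haskell reading of this, using lists as the free-monoid case: [x] plays the role of "monoid expressions over x" (formal products), return builds a one-variable expression, and join flattens nested expressions.

```haskell
import Control.Monad (join)

-- Lists as the free monoid: a value of type [x] is a formal product of
-- variables drawn from x.
expr :: String
expr = return 'a' ++ return 'b'      -- the expression a+b, written "ab"

flattened :: String
flattened = join [['a','b'], ['c']]  -- (a+b)+(c) simplifies to a+b+c

main :: IO ()
main = print (expr, flattened)       -- ("ab","abc")
```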

~~~
superlopuh
That's pretty neat. Can you recommend a resource to read up on this a bit
more?

~~~
edflsafoiewq
Not really. You can see the basic idea on the nLab:
[https://ncatlab.org/nlab/show/variety+of+algebras#free_algeb...](https://ncatlab.org/nlab/show/variety+of+algebras#free_algebras)

Continuing the example I gave, the monad law "Identity" says that "trivial
subexpressions are already simplified"

    
    
         a+b  = (a)+(b)
         ||       ||
        (a+b)  =  a+b
    

and the monad law "Associativity" says that "order of simplification doesn't
matter"

    
    
        ((a)+(b))+((c+d)+(e)) = (a)+(b)+(c+d)+(e)
               ||                    ||
          (a+b)+(c+d+e)     =     a+b+c+d+e
    

The laws for an algebra for a monad say "Trivial expressions evaluate to
themselves"

    
    
        a = (a) = a
    

and "you get the same result no matter the order you do evaluations and
simplifications"

    
    
        (6+1)+(3) = 7+3 
           ||        ||
          6+1+3   =  10
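
The same commuting square can be spot-checked in Haskell, with lists as free-monoid expressions and sum as the algebra (a sketch using the 6+1+3 example above; the function names are made up):

```haskell
-- sum :: [Int] -> Int plays the role of the algebra m x -> x; the two
-- paths through the square above must meet at the same number.
evalInnerFirst, flattenFirst :: [[Int]] -> Int
evalInnerFirst = sum . map sum  -- evaluate (6+1) and (3) first
flattenFirst   = sum . concat   -- simplify to 6+1+3 first

main :: IO ()
main = do
  print (flattenFirst   [[6,1],[3]])  -- 10
  print (evalInnerFirst [[6,1],[3]])  -- 10
```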

------
KirinDave
Much kerfuffling comes up about "What monads are" and "Are monads just pipes"
and "is X or Y a monad." I'd like to volunteer to answer questions and provide
examples people may have. This paper is a really practical reference for how
to use them, but without an insight into what "them" is, it may not be
helpful.

So feel free to fire away questions. I can provide simplified examples in a
variety of languages upon request.

~~~
dustingetz
I once saw a paper proposing a generic system connector type for composing
systems-of-systems (it was much more sophisticated than just I/O) - I can no
longer find the paper, do you know it or what is the state of the art here?

~~~
nickpsecurity
Based on CMCDragonkai's comment, I'm guessing the one you're looking for is
below. I thank both of you since it was an interesting concept and paper.
Hopefully, the one you were looking for.

[http://categoricaldata.net/operadics/OperadicSoS.pdf](http://categoricaldata.net/operadics/OperadicSoS.pdf)

~~~
CMCDragonkai
I tried to understand operads before. But I got sidetracked by graphical
linear algebra:
[https://graphicallinearalgebra.net/](https://graphicallinearalgebra.net/)

------
still_grokking
The best explanation for monads I have seen until now is this one:

[http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...](http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html)

Enjoy! :)

~~~
Aearnus
Saw this post a while back, it's absolutely adorable. As someone who forgets
what an applicative is sometimes, I can vouch for the fact that it really does
get the idea across.

------
newen
I see a monad as just an interface, as in a set of generic functions with
specific type signatures that you can overload with different types. The
functions being unit : a -> m a, and bind : m a -> (a -> m b) -> m b. The
type m you use should have some specific properties, but it's not like you can
enforce these properties in Haskell.

It's nice for use in Haskell since it combines well with the do notation,
allowing for an imperative programming style, primarily when you use the IO
monad. The do notation basically converts into a sequence of bind and return
operations, so that it's impossible to call print a second time on an IO
object while having a reference to the same IO object that you called print
on the first time. This is needed because you can't make a copy of the IO
object.
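
That interface view can be written down directly; a sketch with hypothetical names unit/bind (the real Monad class calls these return and >>=), instantiated at Maybe:

```haskell
-- The "interface" view: two overloaded functions with fixed signatures.
class MyMonad m where
  unit :: a -> m a
  bind :: m a -> (a -> m b) -> m b

instance MyMonad Maybe where
  unit = Just
  bind Nothing  _ = Nothing   -- failure short-circuits
  bind (Just x) f = f x       -- success feeds the callback

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (unit 10 `bind` (`safeDiv` 2))  -- Just 5
  print (unit 10 `bind` (`safeDiv` 0))  -- Nothing
```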

------
nickpsecurity
"mitchty" on Lobste.rs told me I should just read the source material from
Wadler instead of the various metaphors and tutorials. Since jxub posted it
here, I figured I'd go ahead and try. I'll admit Wadler did a terrific job
trying to explain it. I don't get it after one read from imperative
background. It does seem a lot more clear, though, than many attempts to dumb
the material down. He gives useful examples in both functional and imperative
styles. We see why imperative makes some things easier, exactly why they're
harder in functional style, and then how monad form solves that problem.

The monad itself is still weird. That we have different examples, application,
and definitions are a good combo that might help it click if I think on it
more, just going through the steps of his examples or similar programs. It
might help to learn functional programming first, especially to get used to
things like let/binding, which he draws on. Although he avoids reliance on category theory
and Haskell, he still might expect people to know basics of functional
programming given how he describes it. So, a good paper like this may reduce
the problem down to (a) learn functional programming or, if time-constrained
folks are lucky, (b) find a few tutorials on just the concepts he uses here
done in ways that imperative programmers can approach. Lambdas, binding, and
typing seem to be the core operators on top of functions and expressions that
we're familiar with. I say typing since types work differently in various
languages. Gotta understand how he's using types along with their interactions
with those examples and monads.

~~~
mitchty
That's me here too! >.<

Yeah, the hidden subtext in that suggestion was my presumption that the reader
was learning monads in Haskell and had at least a basic understanding of
Haskell. Sorry if those papers threw you into the deep end. Monads really were
thrown into play by trying to make a pure functional programming language
useful. I found the more mathematical explanation a lot easier than any
analogies involving, say, burritos or pipes etc...

At first glance, to imperative programmers, I will admit all of this just
looks like a bunch of navel gazing. I thought the same thing, but once you let
go of thinking of computation as a series of steps and view it at a bit of a
higher abstraction, this makes a lot more sense.

For a softer introduction to the concepts, this Channel 9 video is probably
a bit better:
[https://www.youtube.com/watch?v=ZhuHCtR3xq8](https://www.youtube.com/watch?v=ZhuHCtR3xq8)

Another option might be to just get the Haskell Programming from First
Principles book: [http://haskellbook.com](http://haskellbook.com)

It does a good job grounding you up through monads and gives copious amounts
of reference material. Even though I already knew Haskell when I read the book,
it helped clear up some gaps I didn't know I had.

But monads are going to be really weird if all you've ever seen is imperative
programming, despite the fact that you already use them and have probably
discovered them without knowing you did. Much like learning stack-based
programming languages, sometimes the only way up is to climb down the
computational hill you're familiar with and take another path up the other
hillside to see things from that perspective.

------
calebh
There's a lot of confusion around understanding monads, and I remember having
trouble understanding them. Here are the two things that helped me the most:
understanding kinds, and staring at the type signature of >>= for a long
time. I'll attempt to explain both of these here:

Just as expressions have a type signature, a type expression has a kind
signature. In its most basic form, a kind tells us how we can construct a
type. We represent kinds by using asterisks * and kind functions ->. The
asterisk is pronounced as "type". The easiest way to understand kinds is by
looking at a bunch of examples of types and type constructors. Monomorphic
types such as Int and Bool have kind * . Type constructors are handled
differently. An example of a type constructor is [] (list), which has kind *
-> * . So list is a type constructor that takes in a type (which we represent
with an asterisk), and returns another type. Therefore [Int] has kind * ,
since we applied the type Int to the list type constructor [], resulting in
the type [Int]. Type constructors can also in some situations be partially
applied, just like value constructors. Kinds are right associative, so the
kind * -> * -> * is the same as * -> ( * -> * ). I have a table of different
kinds on my blog here: [http://www.calebh.io/Type-Inference-by-Solving-
Constraints/](http://www.calebh.io/Type-Inference-by-Solving-Constraints/)
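
As a small illustration of partial application at the type level (a sketch; halve is a made-up name): Either has kind * -> * -> *, so the partial application Either String has kind * -> * and fits anywhere a one-argument type constructor is expected, such as the f in fmap.

```haskell
-- fmap :: Functor f => (a -> b) -> f a -> f b requires f :: * -> *;
-- here f is the partially applied constructor (Either String).
halve :: Either String Int -> Either String Int
halve = fmap (`div` 2)

main :: IO ()
main = do
  print (halve (Right 10))     -- Right 5
  print (halve (Left "oops"))  -- Left "oops"
```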

Now that we understand kinds, we are ready to understand monads. In
Haskell, type classes are used to overload functions in a disciplined way. One
such function is >>=, which is defined in the Monad type class. When we want
to make a new Monad for a different type, we overload the >>= function. Since
the >>= function is the most important function in a Monad definition, here is
its signature:

(>>=) :: forall a b. m a -> (a -> m b) -> m b

How do we interpret this signature? Well we can see that the >>= function
takes in two arguments, one of type "m a" and another of type "a -> m b".
Remember the kinds from earlier? In this case, "m" is a type constructor of
kind * -> * . So "m" could be the list type constructor, the Maybe type
constructor, or really any other type constructor that has this kind. It can
even be a type constructor that we define ourselves. So what can an instance
of the >>= function do with the first parameter? Well it can do anything that
it wants, as long as it follows some laws, which I'll talk about later.
However notice that bind also takes in a second parameter of type "a -> m b",
which is a function that takes in a value of type "a" and returns a value of
type "m b". So the >>= function might end up calling this "callback function"
and using its result. It could even call this function multiple times if it
wanted to. The point is that >>= can do anything as long as it adheres to the
type signature and follows the monad laws.
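
This "callback may run any number of times" point can be seen directly by comparing two standard instances (a small sketch):

```haskell
-- One signature, three behaviors: depending on the instance, >>= calls
-- its callback zero times, exactly once, or many times.
main :: IO ()
main = do
  print (Nothing >>= \x -> Just (x + 1 :: Int))  -- Nothing: callback never runs
  print (Just 4  >>= \x -> Just (x + 1 :: Int))  -- Just 5
  print ([1,2,3] >>= \x -> [x, x * 10 :: Int])   -- [1,10,2,20,3,30]
```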

The monad laws are not as relevant when learning how to use monads, but I will
cover them anyway. When you write your own overloaded instance of the Monad
type class, you have to make sure that your overloaded functions follow these
laws:

Left identity: return a >>= f ≡ f a

Right identity: m >>= return ≡ m

Associativity: (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)

You can think of these laws as analogous to the laws for operations on numbers
such as commutativity and associativity.
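
The laws can be spot-checked at a particular instance and sample values; a sanity check rather than a proof (f and g are made up for illustration):

```haskell
-- Checking the three laws at Maybe: each print shows a pair of values
-- that the corresponding law says must be equal.
f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)

main :: IO ()
main = do
  print (return 3 >>= f, f 3)                                 -- left identity
  print (Just 3 >>= return, Just 3)                           -- right identity
  print ((Just 3 >>= f) >>= g, Just 3 >>= (\x -> f x >>= g))  -- associativity
```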

