

Why monads have not taken the Common Lisp world by storm (2008) - momo-reina
http://marijnhaverbeke.nl/blog/common-lisp-monads.html

======
marijn
I [author] must say I'm slightly embarrassed to see this turn up here now.
This is not a terribly deep or interesting post. Read
[http://marijnhaverbeke.nl/blog/tern.html](http://marijnhaverbeke.nl/blog/tern.html)
or
[http://marijnhaverbeke.nl/blog/acorn.html](http://marijnhaverbeke.nl/blog/acorn.html)
or [http://marijnhaverbeke.nl/blog/browser-input-reading.html](http://marijnhaverbeke.nl/blog/browser-input-reading.html)
instead.

~~~
moomin
Don't be. The fundamental observation that monads' usability is tied closely
to Haskell's features is important, and still hasn't been taken on board
elsewhere. There's always someone trying to do monads in Clojure...

~~~
octo_t
Scala has plenty of monads and they work fine.

~~~
fusiongyro
Yes, but doesn't it also have the three requirements mentioned by the article:
ML-style function currying, something akin to type classes, and polymorphic
return types?

~~~
yohanatan
In Scala, currying is not all-pervasive. You have to be explicit about it (and
it's a definite pain point).

------
Peaker
> It appears that in the presence of mutable state, a lot of the advantages of
> monads become moot.

In the simpler cases, where use of monads can be replaced by simple mutable
state, you still lose out on the explicit typing of the mutating vs. pure
code.

For example, STM is possible because mutating effects are typed, and so can be
ruled out of STM transactions.

And for the more complex monads (e.g. transformer stacks), mutable state is
just not good enough. You can compose transformers to build things that
mutable state simply cannot express; to express those things without monads,
you'd have to CPS-transform your code and avoid mutable state.

For example, these two monads:

    
    
      ListT (ParsecT m)
      ParsecT (ListT m)
    

have no corresponding "mutable state" representations.
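To make the ordering point concrete with a smaller pair (an illustration, not
the op's exact stack), MaybeT and State from mtl/transformers give observably
different semantics in the two orders, and neither is plain mutable state:

```haskell
import Control.Monad.State
import Control.Monad.Trans.Maybe

-- StateT s Maybe a  ~  s -> Maybe (a, s): failure discards the state
failDiscards :: StateT Int Maybe ()
failDiscards = put 42 >> lift Nothing

-- MaybeT (State s) a  ~  s -> (Maybe a, s): the state survives failure
failKeeps :: MaybeT (State Int) ()
failKeeps = put 42 >> MaybeT (return Nothing)

-- runStateT failDiscards 0         ==> Nothing
-- runState (runMaybeT failKeeps) 0 ==> (Nothing, 42)
```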

~~~
tel
That's a really great point that I rarely see come up. Yes, Haskell makes you
"recreate" the RWST monad stack that imitates impure languages... but the same
tools also let you build other "language domains" which are fiendishly
difficult to implement in non-pure settings (on par with CPS transforming
yourself into the mother monad and then working backwards from there).

On Lisp has a chapter devoted to making a leaky macro-based CPS transformer
for Common Lisp. ContT adds CPS to any monad stack as quickly as

    
    
        newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }
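
For instance, a minimal use of the resulting continuation machinery (via
callCC from Control.Monad.Cont; a toy example, not from the article):

```haskell
import Control.Monad
import Control.Monad.Cont

-- early exit through the captured continuation
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)   -- jump straight out on division by zero
  return (Just (x `div` y))

-- runCont (safeDiv 10 2) id ==> Just 5
-- runCont (safeDiv 10 0) id ==> Nothing
```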

~~~
jeremyjh
>Yes, Haskell makes you "recreate" the RWST monad stack that imitates impure
languages

This is really a point that seems to get lost in most of the learning
materials on the web. LYAH doesn't even acknowledge transformers. I am pretty
new to Haskell, but it seems to me that transformer stack choices are a pretty
significant design decision for your application. What has happened to me is
that almost all the code in my application has wound up inside this stack; the
only pure functions are trivial helpers. I'm not aware of what has been
written on this topic in terms of guidance or advice, but it seems like it is
definitely one of the later-stage humps for a new Haskell developer like me.

~~~
tel
That's definitely a stage of a Haskell program design. The stack helps to
separate out layers of effects, but it can be easy to fix that stack
concretely at an early stage and then feel the weight of it later on with
overly specialized functions.

The solution is usually to use the mtl-style transformer classes, which let
you write generalized functions that depend upon the capabilities of the
transformer stack instead of the concrete stack itself. You end up with a sort
of natural dependency injection framework. For instance, here's a function
which works in _any_ RW-style monad stack

    
    
        augment :: (MonadReader a m, MonadWriter a m) => a -> m ()
        augment a = ask >>= \a' -> tell (a' <> a)
    

where the Monoid constraint allowing us to use (<>) is coming from the
typeclass Prolog involved with the MonadWriter class.

You can then use augment in ANY monad transformer stack which involves Reader
and Writer over the same underlying type.

Note that this defers the extremely important decision about your monad
transformer layer _order_: the caller of augment decides whether to call it
with "ReaderT w (Writer w) a" or "WriterT w (Reader w) a". Reader and Writer
commute, so it's not a problem here, but ordering changes things in the op's
example using "ParsecT (LogicT m)" versus "LogicT (ParsecT m)". If your
function depends on a particular ordering of the effects, then it must have a
less general type.

~~~
nandemo
s/Prolog/Monoid/

~~~
tel
I meant the Prolog-like inference where MonadWriter w m implies Monoid w.

~~~
nandemo
I see. That was a bit obscure, though.

~~~
tel
Ah, sorry, I was accidentally writing in the context of another comment I
wrote in this thread where I talked about the typeclass resolution machinery.
Without that context, it is pretty out of place.

------
asgard1024
Incidentally, I am just reading through All About Monads
([http://www.haskell.org/haskellwiki/All_About_Monads](http://www.haskell.org/haskellwiki/All_About_Monads))
and his point 3 ("Allow polymorphism on return types") is what confused me in
this example:

    
    
      getAny :: (Random a) => State StdGen a
      getAny = do g      <- get
                  (x,g') <- return $ random g
                  put g'
                  return x
    

I was like... how on Earth does Haskell know which "get" function it should
call? I think this is a point which should be stressed more in the tutorials
(I read Learn You a Haskell and glanced at a couple of others...)

~~~
tel
It's easy for a Haskell expert to sweep typeclass resolution under the rug,
but it's definitely some pretty black magic at first no matter how "simple"
its implementation is. The reality is that the compiler works really hard to
ensure that it can guess the right "get" and it's pretty possible for it to
fail.

Of course, when it fails you know immediately and can remedy it by type
annotations, but that can still be challenging.

To answer the question, `get` has its principal type resolved by the type
inference engine. In this case, `get`'s most general type is `MonadState s m
=> m s` and it can resolve what `m` is by unifying it with the type annotation
of `getAny`, so we know that we need `get :: MonadState s (State StdGen) =>
State StdGen s`. From here, the compiler searches through the type class
instances in a Prolog-like style to find that `MonadState s (State StdGen)`
occurs when `s ~ StdGen`. This resolves more type information and also tells
us which definition of `get` is needed—the one that was defined as `instance
MonadState s (State s) where`!

The end result is that get ends up with the type `get :: State StdGen StdGen`
and is the function defined as `get = State $ \s -> (s, s)`.

~~~
asgard1024
Thanks. I sometimes wonder, is there a way in Haskell to display a type
signature of a function for a particular instance? For example, is there a way
to display type of get when used as an instance of State ? I know I can derive
it but I would sometimes like to see if I am correct..

~~~
acomar
Some vim and emacs plugins can give you the type of subexpressions like this.
Namely, you want the ghc-mod plugin since it provides the hooks into the
compiler to query type information.

~~~
asgard1024
Is there a way to do this in ghci? I know the :t command, but I don't know how
to say that I want a specific instance.

------
quarterto

      * Try to quickly write the whole thing as a single recursive descent parser. Note the exploding amount of ugliness. Give up.
      * Separate out the tokenizer (novel idea, huh?) to keep parser complexity down. Parser is still a mess. Ugh!
      * Play around with some CL parser frameworks. This helps a bit, but none of the systems I tried produce errors with enough information.
      * Remember the breeze it was to write a parser with the Haskell Parsec library. Mess around with monads for a while, learn a few things, but not how to write elegant parsers in Common Lisp.
    

I've been going through these exact steps but in JavaScript. Writing languages
is fun! And makes me want to kill things!

~~~
spacemanaki
I have no idea what you're working on, but I spent a few weeks this summer
writing parsers and trying to figure out how to tackle the ugliness that seems
to be inherent in hand-written recursive descent parsers. While I learned a
lot, I am not an expert by a long shot. However I did find an interesting
technique that is not well covered elsewhere.

I came across this* article, which uses JavaScript as the implementation
language, and found it very interesting, in large part because this approach
(Top down operator precedence) sort of pulls the precedence hierarchy out of
the call graph of a recursive descent parser and into a table, but also
because it's an approach that OOP (and JavaScript in particular) is well
suited to. I've used it (in combination with traditional recursive descent) in
a functional setting (Standard ML) as well, and would use it again, especially
for parsing infix expressions (arithmetic expressions, type annotation
expressions).

This is a bit of a tangent, but I thought you might be interested given the
intersection of parsing and JavaScript. I've been meaning to write this up in
a short blog post...

* [http://javascript.crockford.com/tdop/tdop.html](http://javascript.crockford.com/tdop/tdop.html) There's another article on this using Java as the implementation language: [http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-e...](http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/)

~~~
anaphor
I like the precedence climbing algorithm since you can easily fit it into a
normal recursive descent parser and it allows you to easily add new operators
just by putting them in a table. See:
[http://eli.thegreenplace.net/2012/08/02/parsing-expressions-by-precedence-climbing/](http://eli.thegreenplace.net/2012/08/02/parsing-expressions-by-precedence-climbing/)
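
The core loop is small enough to sketch here (in Haskell, with a hypothetical
token type and left-associative binary operators only; a toy, not a full
parser):

```haskell
data Tok = TNum Int | TOp Char deriving Show

-- operator precedence table: adding an operator is one new line here
prec :: Char -> Int
prec '+' = 1
prec '-' = 1
prec '*' = 2
prec '/' = 2
prec _   = 0

apply :: Char -> Int -> Int -> Int
apply '+' = (+)
apply '-' = (-)
apply '*' = (*)
apply '/' = div
apply _   = error "unknown operator"

-- parse an expression whose operators all bind at least as tightly as minPrec
climb :: Int -> [Tok] -> (Int, [Tok])
climb minPrec (TNum n : rest) = go n rest
  where
    go lhs (TOp op : more)
      | prec op >= minPrec =
          let (rhs, rest') = climb (prec op + 1) more  -- +1: left-associative
          in go (apply op lhs rhs) rest'
    go lhs toks = (lhs, toks)
climb _ toks = error ("expected a number: " ++ show toks)

-- fst (climb 1 [TNum 2, TOp '+', TNum 3, TOp '*', TNum 4]) ==> 14
```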

------
tonetheman
I came here to make a joke about the real reason why was that no one
understands them... but apparently everyone here does. Off to google monads
for idiots...

~~~
gernotk
This talk by Douglas Crockford is quite informative:

[http://www.youtube.com/watch?v=dkZFtimgAcM](http://www.youtube.com/watch?v=dkZFtimgAcM)

~~~
recursive
As a connoisseur of monad explanations (if not monad understanding)
Crockford's talk sounds rushed. It spends a fair amount of time on setup, but
when it gets to the meaty bits, it has the feeling of glossing over the
details. It could be that all the essential elements are there, but they're
covered too quickly to impart any comprehension.

~~~
runT1ME
Actually, Crockford is completely wrong in his definition. He is confusing
them with functors.

------
dustingetz
Nah, I think it's just because they result in really nested types, which are
hard to keep track of without a really good type system. Using a single monad
by itself, like error or continuation, is straightforward in Clojure; there
are smart & vocal people who do this. But the super awesome OMFG of monads is
that they can be combined (e.g. parser = error + state), and this more or less
requires abstracting in the type system to keep track of all the lambdas.
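For reference, that "error + state" composition is literally a toy parser type
in Haskell (a minimal sketch, assuming mtl; the names are illustrative):

```haskell
import Control.Monad.State

-- a parser is exactly state (the remaining input) plus error
type Parser = StateT String (Either String)

-- consume one expected character, or fail with a message
char :: Char -> Parser Char
char c = do
  s <- get
  case s of
    (x:xs) | x == c -> put xs >> return x
    _               -> lift (Left ("expected " ++ [c]))

-- runStateT (char 'a' >> char 'b') "abc" ==> Right ('b', "c")
-- runStateT (char 'a' >> char 'b') "axc" ==> Left "expected b"
```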

------
emiljbs
>That's a little too blatantly useless to be interesting though. But note how
ugly CL's multiple namespaces make liftmaybe and its uses.

Oh yeah, incredibly ugly \s

OP, you may be interested in checking out LIL, the Lisp Interface Library.

[https://github.com/fare/lisp-interface-library](https://github.com/fare/lisp-interface-library)

------
ggchappell
There is a point made here about how Haskell's namespace handling makes monads
easy, while CL's makes them ugly. I don't know enough about CL's namespaces to
understand this. Can someone explain?

~~~
emiljbs
Functions and other values are stored in separate namespaces, so when you want
to call a function that is stored in a "regular" variable you must write

(funcall variable arg1 arg2 ... argn)

and when you want to refer to a named function as a value you must write
(function name), or the shorthand #'name.

Basically, some people find this ugly. I mostly find it to say "HEY!! We're
doing this specific thing right here". To me it's more helpful than ugly.

