Why monads have not taken the Common Lisp world by storm (2008) (marijnhaverbeke.nl)
67 points by momo-reina on Sept 17, 2013 | 48 comments



I [author] must say I'm slightly embarrassed to see this turn up here now. This is not a terribly deep or interesting post. Read http://marijnhaverbeke.nl/blog/tern.html or http://marijnhaverbeke.nl/blog/acorn.html or http://marijnhaverbeke.nl/blog/browser-input-reading.html instead.


Don't be. The fundamental observation that monads' usability is tied closely to Haskell's features is important, and still hasn't been taken on board elsewhere. There's always someone trying to do monads in Clojure...


Yup. If I had read this article when it was first written instead of now, I probably would have produced a fair bit less unnecessarily complicated and difficult-to-maintain code over the past several years.

Instead, I had to figure out why I was having such a hard time coming up with a satisfactory implementation of the pattern in C# - and why I maybe shouldn't have wanted one in the first place - the hard way.


Just curious: why would you argue that you shouldn't have wanted it in the first place? Monads are one thing, which you can approximate somewhat via LINQ, but there are other, simpler patterns, such as Monoid, that really make sense as programming constructs yet somehow never work out in C#.


Because it seems like there always turns out to be some other approach that works better and is more idiomatic in C#. Usually something really forehead-smackingly obvious like mutation or a callback.


Scala has plenty of monads and they work fine.


Yes, but doesn't it also have the three requirements mentioned by the article: ML-style function currying, something akin to type classes, and polymorphic return types?


In Scala, currying is not all-pervasive. You have to be explicit about it (and it's a definite pain point).


Scala has a much more powerful type system than Clojure


100% true, but not as powerful as Haskell's. Not sure you can usefully have inheritance and return type overloading.


The Scala and Haskell type systems are incomparable, and the typeclass aspects that are most relevant to monad usability are doable with implicits.


Scala also has some pretty powerful syntax to go with it.


Don't be embarrassed about this. I think it's a very important point. I've heard monads getting buzz outside the Haskell community, and it almost always misses the point. I gave a presentation at the NYC Haskell meetup that made essentially the same point as you, but came at it from a different angle. You wrote this post quite a while earlier, but it's nice to know that we independently came to the same conclusions.


I enjoyed reading the code =)


> It appears that in the presence of mutable state, a lot of the advantages of monads become moot.

In the simpler cases where use of monads can be replaced by simple mutable state -- you still lose out on the explicit types of the mutating vs. pure code.

For example, STM is possible because mutating effects are typed, and so can be ruled out of STM transactions.
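A minimal sketch of that point, assuming the standard stm package (the `transfer` function is illustrative, not from the thread): an action of type STM () can only read and write transactional variables, so arbitrary IO effects simply won't type-check inside a transaction.

```haskell
import Control.Concurrent.STM

-- Only STM effects (TVar reads/writes) can appear inside `atomically`;
-- an IO action in this do-block would be a type error.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  a <- newTVarIO 10
  b <- newTVarIO 0
  atomically (transfer a b 4)  -- the whole transfer commits or retries as a unit
  readTVarIO a >>= print       -- 6
  readTVarIO b >>= print       -- 4
```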

And in the more complex monads (e.g. transformer stacks), mutable state is just not good enough. You can compose transformers to build things that mutable state simply cannot express, and you'd have to CPS transform your code and avoid mutable state to express those things.

For example, these two monads:

  ListT (ParsecT m)
  ParsecT (ListT m)
have no corresponding "mutable state" representations.
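A smaller instance of the same ordering phenomenon, sketched with the common mtl transformers rather than the parser stacks above (the function names are invented): swapping the layers changes whether state changes survive a failure.

```haskell
import Control.Monad.State
import Control.Monad.Except

-- StateT on the outside: a failure discards the state changes
bumpThenFail1 :: StateT Int (Either String) ()
bumpThenFail1 = modify (+1) >> throwError "boom"

-- ExceptT on the outside: the state changes survive the failure
bumpThenFail2 :: ExceptT String (State Int) ()
bumpThenFail2 = modify (+1) >> throwError "boom"

main :: IO ()
main = do
  print (runStateT bumpThenFail1 0)              -- Left "boom"
  print (runState (runExceptT bumpThenFail2) 0)  -- (Left "boom",1)
```

The two types describe genuinely different computations, and the type system keeps track of which one you have.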


That's a really great point that I rarely see come up. Yes, Haskell makes you "recreate" the RWST monad stack that imitates impure languages... but the same tools also let you build other "language domains" which are fiendishly difficult to implement in non-pure settings (on par with CPS transforming yourself into the mother monad and then working backwards from there).

On Lisp has a chapter devoted to making a leaky macro-based CPS transformer for Common Lisp. ContT adds CPS to any monad stack as quickly as

    newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }
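For a taste of what that one-liner buys you, here is a hedged sketch (function names invented) of early exit with callCC from mtl's Control.Monad.Cont:

```haskell
import Control.Monad (when)
import Control.Monad.Cont

-- `exit` is the captured continuation; invoking it abandons the rest
-- of the block and returns its argument immediately.
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)
  pure (Just (x `div` y))

main :: IO ()
main = do
  print (runCont (safeDiv 10 2) id)  -- Just 5
  print (runCont (safeDiv 1 0) id)   -- Nothing
```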


>Yes, Haskell makes you "recreate" the RWST monad stack that imitates impure languages

This is really a point that seems to get lost in most of the learning materials on the web. LYAH doesn't even acknowledge transformers. I am pretty new to Haskell, but it seems to me that transformer stack decisions are a pretty significant design decision for your application. What has happened to me is that almost all the code in my application has wound up inside this stack; the only pure functions are trivial helpers. I'm not aware of what has been written on this topic in terms of guidance or advice, but it seems like it is definitely one of the later-stage humps for a new Haskell developer like me.


That's definitely a stage of a Haskell program design. The stack helps to separate out layers of effects, but it can be easy to fix that stack concretely at an early stage and then feel the weight of it later on with overly specialized functions.

The solution is usually to use the mtl-style transformer classes which let you write generalized functions which depend upon capabilities of the transformer stack instead of the concrete stack itself. You end up with a sort of natural dependency injection framework. For instance, here's a function which works on any RW-style monad stack

    augment :: (MonadReader a m, MonadWriter a m) => a -> m ()
    augment a = ask >>= \a' -> tell (a' <> a)
where the Monoid constraint allowing us to use (<>) is coming from the typeclass Prolog involved with the MonadWriter class.

You can then use augment in ANY monad transformer stack which involves Reader and Writer over the same state type.

Note that this defers the extremely important decision about your monad transformer layer order---the caller of augment decides whether to call it with "ReaderT w (Writer w) a" or "WriterT w (Reader w) a". Reader/Writer commute so it's not a problem, but this changes things in the op's example using "ParsecT (LogicT m)" versus "LogicT (ParsecT m)". If your function depends on a particular ordering of the effects then it must have a less general type.
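To make the deferral concrete, here is a small self-contained version of the example above, run at both layer orders (a sketch assuming mtl, with String as the Monoid):

```haskell
import Control.Monad.Reader
import Control.Monad.Writer

-- depends only on capabilities, not on a concrete stack
augment :: (MonadReader a m, MonadWriter a m) => a -> m ()
augment a = ask >>= \a' -> tell (a' <> a)

main :: IO ()
main = do
  -- ReaderT over Writer
  print (runWriter (runReaderT (augment "!") "env"))
  -- WriterT over Reader: same answer, because Reader and Writer commute
  print (runReader (runWriterT (augment "!")) "env")
```

Both calls print ((),"env!"); the caller, not augment, chose the stack.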


For maximum modularity, you can combine this with functions like "zoom" and "magnify" of module Control.Lens.Zoom. They let you convert stateful operations that work on a fragment of the global state/environment into operations that work with the whole state/environment (provided that you have the appropriate lenses.) Very useful in complex monad stacks.


s/Prolog/Monoid/


I meant the Prolog-like inference where MonadWriter w m implies Monoid w.


I see. That was a bit obscure, though.


Ah, sorry, I was accidentally writing in the context of another comment I wrote in this thread where I talked about the typeclass resolution machinery. Without that context, it is pretty out of place.


Incidentally, I am just reading through All About Monads (http://www.haskell.org/haskellwiki/All_About_Monads) and his point 3 ("Allow polymorphism on return types") is what confused me in this example:

  getAny :: (Random a) => State StdGen a
  getAny = do g      <- get
              (x,g') <- return $ random g
              put g'
              return x
I was like... how on Earth does Haskell know which "get" function it should call? I think this is a point which should be stressed more in the tutorials (I read Learn You a Haskell and glanced at a couple of others..)


It's easy for a Haskell expert to sweep typeclass resolution under the rug, but it's definitely some pretty black magic at first no matter how "simple" its implementation is. The reality is that the compiler works really hard to ensure that it can guess the right "get" and it's pretty possible for it to fail.

Of course, when it fails you know immediately and can remedy it by type annotations, but that can still be challenging.

To answer the question, `get` has its principal type resolved by the type inference engine. In this case, `get`'s most general type is `MonadState s m => m s` and it can resolve what `m` is by unifying it with the type annotation of `getAny`, so we know that we need `get :: MonadState s (State StdGen) => State StdGen s`. From here, the compiler searches through the type class instances in a Prolog-like style to find that `MonadState s (State StdGen)` occurs when `s ~ StdGen`. This resolves more type information and also tells us which definition of `get` is needed—the one that was defined as `instance MonadState s (State s) where`!

The end result is that get ends up with the type `get :: State StdGen StdGen` and is the function defined as `get = State $ \s -> (s, s)`.
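A toy reconstruction of that search (all names invented, deliberately not mtl's): a mini State type and a mini MonadState class, where an annotation is what drives the instance search to a particular definition.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FunctionalDependencies #-}

newtype St s a = St { runSt :: s -> (a, s) }

instance Functor (St s) where
  fmap f (St g) = St $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (St s) where
  pure a = St $ \s -> (a, s)
  St f <*> St g = St $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad (St s) where
  St g >>= k = St $ \s -> let (a, s') = g s in runSt (k a) s'

-- the fundep `m -> s` mirrors mtl's MonadState
class Monad m => MonadSt s m | m -> s where
  getSt :: m s

instance MonadSt s (St s) where
  getSt = St $ \s -> (s, s)  -- the definition that resolution selects

-- annotating at `St Int` is what sends the instance search to `s ~ Int`
example :: Int
example = fst (runSt (getSt :: St Int Int) 41)  -- 41
```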


It's also quite possible for this type of function to get a polymorphic type, in which case the compiler generates code that will pass along, essentially, a virtual table object for the particular instance at the call site. Hardly black magic.


I agree that the implementation is quite obvious. It's the inference—the collusion between the HM type constraints and the Prolog-like typeclass constraints—that's magical.


Thanks. I sometimes wonder, is there a way in Haskell to display a type signature of a function for a particular instance? For example, is there a way to display type of get when used as an instance of State ? I know I can derive it but I would sometimes like to see if I am correct..


There's a hack you can use until Type Holes lands in production GHCi. If you turn on ImplicitParams you can write

  getAny :: (Random a) => State StdGen a
  getAny = do g      <- ?whatAmI get
              (x,g') <- return $ random g
              put g'
              return x
which will fail the type checker because your type didn't include the information about what ?whatAmI is---the error will be something like "could not find definition of ?whatAmI :: State StdGen StdGen -> State StdGen StdGen", although you may only get partial information as the compiler may complain about your undefined function before it's resolved what you're looking for.

You can also leave off the type annotation and GHCi will infer the type of ?whatAmI. Then if you check the type of "getAny" you'll see

    getAny
      :: (?whatAmI::m s -> n s', RandomGen s', Random b,
          MonadState s m, MonadState s' n) =>
         m b
which provides all of the information the typechecker knows all in one go. Take note that the type it derives is somewhat more general than the one we want since it allows ?whatAmI to transform the involved monad. Obviously "get" doesn't do this.


Some vim and emacs plugins can give you the type of subexpressions like this. Namely, you want the ghc-mod plugin since it provides the hooks into the compiler to query type information.


Is there a way to do this in ghci? I know the :t command, but I don't know how to say that I want a specific instance.
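One way that works at the prompt (not mentioned in the thread): annotate the expression itself, and :t will echo back the specialized type. The same annotation compiles in a file; the sketch below assumes the random package for StdGen.

```haskell
import Control.Monad.State
import System.Random (StdGen, mkStdGen)

-- In GHCi, write the instance you want into an annotation:
--   ghci> :t (get :: State StdGen StdGen)
-- and you get the type at that specific instance, instead of
-- the most general `MonadState s m => m s`.
specializedGet :: State StdGen StdGen
specializedGet = get

main :: IO ()
main = print (evalState (specializedGet >> pure "ok") (mkStdGen 1))  -- "ok"
```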


  * Try to quickly write the whole thing as a single recursive descent parser. Note the exploding amount of ugliness. Give up.
  * Separate out the tokenizer (novel idea, huh?) to keep parser complexity down. Parser is still a mess. Ugh!
  * Play around with some CL parser frameworks. This helps a bit, but none of the systems I tried produce errors with enough information.
  * Remember the breeze it was to write a parser with the Haskell Parsec library. Mess around with monads for a while, learn a few things, but not how to write elegant parsers in Common Lisp.
I've been going through these exact steps but in JavaScript. Writing languages is fun! And makes me want to kill things!


I have no idea what you're working on, but I spent a few weeks this summer writing parsers and trying to figure out how to tackle the ugliness that seems to be inherent in hand-written recursive descent parsers. While I learned a lot, I am not an expert by a long shot. However I did find an interesting technique that is not well covered elsewhere.

I came across this* article, which uses JavaScript as the implementation language, and found it very interesting, in large part because this approach (Top down operator precedence) sort of pulls the precedence hierarchy out of the call graph of a recursive descent parser and into a table, but also because it's an approach that OOP (and JavaScript in particular) is well suited to. I've used it (in combination with traditional recursive descent) in a functional setting (Standard ML) as well, and would use it again, especially for parsing infix expressions (arithmetic expressions, type annotation expressions).

This is a bit of a tangent, but I thought you might be interested given the intersection of parsing and JavaScript. I've been meaning to write this up in a short blog post...

* http://javascript.crockford.com/tdop/tdop.html There's another article on this using Java as the implementation language: http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-e...


I like the precedence climbing algorithm since you can easily fit it into a normal recursive descent parser and it allows you to easily add new operators just by putting them in a table. See: http://eli.thegreenplace.net/2012/08/02/parsing-expressions-...
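A compact sketch of precedence climbing over a pre-tokenized input (toy types, invented names, partial on malformed input), with the operator precedences living in a plain table function:

```haskell
data Tok = TNum Int | TOp Char deriving (Show, Eq)

-- the "table": precedence per operator (all left-associative here)
prec :: Char -> Int
prec c
  | c `elem` "+-" = 1
  | c `elem` "*/" = 2
  | otherwise     = 0

apply :: Char -> Int -> Int -> Int
apply '+' = (+)
apply '-' = (-)
apply '*' = (*)
apply '/' = div
apply c   = error ("unknown operator " ++ [c])

-- parse-and-evaluate operators binding at least as tightly as minPrec
expr :: Int -> [Tok] -> (Int, [Tok])
expr minPrec ts0 = go lhs ts1
  where
    (lhs, ts1) = atom ts0
    atom (TNum n : rest) = (n, rest)
    atom _               = error "expected a number"
    go acc (TOp op : rest)
      | prec op >= minPrec =
          let (rhs, rest') = expr (prec op + 1) rest  -- +1 gives left assoc
          in go (apply op acc rhs) rest'
    go acc ts = (acc, ts)

eval :: [Tok] -> Int
eval = fst . expr 1

main :: IO ()
main = print (eval [TNum 2, TOp '+', TNum 3, TOp '*', TNum 4])  -- 14
```

Adding an operator means adding a row to prec and apply, with no new grammar productions.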


The parser is a bastard mix of recursive descent and parser combinators borne out of looking at the Dragon Book and thinking "fuck me, 2000 pages? I'm sure I can remember some CS325...". The language will eventually have macros I'll use for the built-in operators. I guess it's a Lisp? The syntax is sweet-expressions, so infixes are {a op b}, which is just (op a b).


If your needs fall towards JS as the target vs. the source language, then it's worth having a look at Fay[1] and GHCJS[2]. Both allow calls from regular Javascript, but Fay currently has a much better interface for that vs. GHCJS' still very low-level support.

[1] https://github.com/faylang/fay/wiki

[2] https://github.com/ghcjs/ghcjs



I came here to make a joke about the real reason why was that no one understands them... but apparently everyone here does. Off to google monads for idiots...


This talk by Douglas Crockford is quite informative:

http://www.youtube.com/watch?v=dkZFtimgAcM


As a connoisseur of monad explanations (if not monad understanding), I found Crockford's talk rushed. It spends a fair amount of time on setup, but when it gets to the meaty bits, it has the feeling of glossing over the details. It could be that all the essential elements are there, but they're covered too quickly to impart any comprehension.


Actually, Crockford is completely wrong in his definition. He is confusing them for functors.


Cool thanks!


I actually prefer Brian Beckman's explanation:

http://www.youtube.com/watch?v=ZhuHCtR3xq8


I think the proper presentation of this material is the way Learn You a Haskell[1] does it: functors, applicatives, monoids... then monads. But I don't think I've seen this done for other languages. Each of those structures should be learned as a set of properties, rather than as something concrete (container, wrapper, sequencer, programmable semicolon), and then you see how important the applicative-vs-monadic style decision is for library design in Haskell.

[1]http://learnyouahaskell.com/functors-applicative-functors-an...


Nah, I think it's just because they result in really nested types, which are hard to keep track of without a really good type system. Using a single monad by itself, like error or continuation, is straightforward in Clojure; there are smart and vocal people who do this. But the super awesome OMFG of monads is that they can be combined (e.g. parser = error + state), and this more or less requires abstraction in the type system to keep track of all the lambdas.


>That's a little too blatantly useless to be interesting though. But note how ugly CL's multiple namespaces make liftmaybe and its uses.

Oh yeah, incredibly ugly \s

OP, you may be interested in checking out LIL, the Lisp Interface Library.

https://github.com/fare/lisp-interface-library


There is a point made here about how Haskell's namespace handling makes monads easy, while CL's makes them ugly. I don't know enough about CL's namespaces to understand this. Can someone explain?


Functions and other values are stored in separate namespaces, so when you want to call a function that is stored in a "regular" variable you must write

(funcall variable arg1 arg2 ... argn)

and when you want to refer to a named function as a value you write (function name), or the shorthand #'name.

Basically, some people find this ugly; I mostly find it says "HEY!! We're doing this specific thing right here". To me it's more helpful than ugly.



