
Why monads have not taken the Common Lisp world by storm - andreyf
http://marijn.haverbeke.nl/monad.html
======
shrughes
Another reason monads are cumbersome in other languages is that other
languages use lazy evaluation. If you consider the naively implemented
function

    
    
        skipMany :: Parser a -> Parser ()
        skipMany p = (p >> skipMany p) <|> return ()    
    

and then consider how you might write it in some other language, you'll note
that the translation, strictly evaluated, gives infinite recursion. To
practically work with monadic computation requires some form of non-strict
evaluation. Another, less catastrophic example would be the following.

    
    
        printNTimes :: Int -> String -> IO ()
        printNTimes 0 s = return ()
        printNTimes n s = putStrLn s >> printNTimes (n - 1) s
    

Strictly evaluated, that would build up Theta(n) memory usage.

For this reason, the more computation-oriented monads cannot be used
practically in other languages without added weight to avoid strict
evaluation.

Ironically, it's the non-monadic uses that cause the problems -- those that
don't need the full power of 'bind', i.e. uses where the monad's user doesn't
pass return values from previous actions to future actions. When you pass a
return value, you end up needing a lambda expression, which cuts off strict
evaluation. When you only use them in ways that applicative functors can be
used (which often applies for parsers), that's when you suffer. It's not the
end of the world; there certainly are workarounds, but that requires adding
explicit promises everywhere, which is just a pain.

~~~
chandler
The first example, naively translated to scheme, results in:

    
    
        (define (skip-many a)
          (or (and (a) (skip-many a))
              (void)))
    

This is not infinite recursion. The Parser monad used in the Haskell version
is simply a stream dressed in monadic clothes (and even C has streams a-la
FILE*).

As for the second example, the naive translation to scheme is:

    
    
        (define (print-n-times n s)
          (cond 
            ((zero? n) (void))
            (else (put-str-ln s)
                  (print-n-times (sub1 n) s))))
    

Here, both Haskell & scheme demonstrate constant memory usage; not because of
Monads, or strict/lazy evaluation, but because of tail recursion.

Also, in your first sentence, I think you meant "strict".

~~~
shrughes
Neither of your translations converts your code to anything that has to do with
_monads_, and your first translation is just plain broken.

Your print-n-times doesn't return an action---it's an imperative function that
performs an action.

The naive translations to Scheme would be as follows:

    
    
        (define (skip-many p)
          (<|> (>> p (skip-many p))
               (return 'nothing)))
    
        (define (print-n-times n s)
          (if (= 0 n)
              (return 'nothing)
              (>> (put-str-ln s)
                  (print-n-times (- n 1) s))))
    

In order to fix the infinite recursion in my first example, the parsing
framework I made in Scheme had a macro `thunk-parser` (actually, it had a
shorter, stupid-sounding name) which for all intents and purposes just wraps
the expression in a (lambda ()).

    
    
        (define (skip-many p)
          (<|> (>> p (thunk-parser (skip-many p)))
               (return 'nothing)))
    

Also, both your examples rely heavily on non-strict evaluation. 'and' and 'or'
are both macros that non-strictly evaluate expressions.
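
To make the thunking workaround concrete outside of Scheme, here is a rough
sketch in Python (all names are illustrative, not from either framework): a
parser is a function from a position to a (value, position) pair or None, and
the lambda handed to `then` plays the role of `thunk-parser`.

```python
def item(s):
    """Parser that consumes one character of s, if any remain."""
    def p(i):
        return (s[i], i + 1) if i < len(s) else None
    return p

def alt(p, q):
    """<|> : try p; if it fails, try q at the same position."""
    def r(i):
        res = p(i)
        return res if res is not None else q(i)
    return r

def then(p, q_thunk):
    """>> : run p, then the parser produced by forcing q_thunk."""
    def r(i):
        res = p(i)
        return q_thunk()(res[1]) if res is not None else None
    return r

def ret(v):
    """return : succeed with v, consuming nothing."""
    return lambda i: (v, i)

def skip_many(p):
    # skipMany p = (p >> skipMany p) <|> return ()
    # Without the lambda, Python would evaluate skip_many(p) while
    # *building* the parser and recurse forever before reading input.
    return alt(then(p, lambda: skip_many(p)), ret(None))

print(skip_many(item("aaa"))(0))  # (None, 3): skipped all three 'a's
```

The point is only that the laziness Haskell provides for free has to be
reintroduced by hand as an explicit promise.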

~~~
chandler
Agreed; neither of the translations uses monads, because monads are a semantic
hand-job prescribing order within a declarative language. However, Scheme is
not a declarative language, therefore order is implied in the structure,
therefore monads aren't necessary.

>> Your print-n-times doesn't return an action...

Nope, look again. It returns the results of applying `void'. Who knows what
(void) may return in a loosely-typed language?

>> ...it's an imperative function that performs an action.

Well, considering Scheme is an imperative language with side-effects, it's
hard to dodge such criticism.

>> The naive translations to Scheme would be as follows...

That's not a translation from Haskell to Scheme, that's writing Haskell using
a prefix syntax.

>> Also, both your examples rely heavily on non-strict evaluation. 'and' and
'or' are both macros that non-strictly evaluate expressions.

I went back and re-read your original post:

>> ...you'll note that the translation, strictly evaluated, gives infinite
recursion.

Mea culpa--I completely missed your point that translating from a turing
complete language to a non-turing complete language might have some pitfalls.

~~~
shrughes
Then maybe I'm misunderstanding the point of your reply. I was pointing out
how monads are inconvenient to use in strictly evaluated languages. Why did
you follow up with an example about how not using monads is convenient?

~~~
chandler
>> Then maybe I'm misunderstanding the point of your reply.

I was pointing out that your examples don't make sense, as it's trivial to
translate the Haskell code into another general purpose language (such as
Scheme, C, Perl, Fortran, etc), while impossible to translate into an eager
(strictly evaluated) language (such as...?).

Additionally, the reason monads are cumbersome in other languages has nothing
to do with eager/lazy evaluation, or conditional computation, and everything
to do with the fact that monads exist in Haskell to solve its sequencing
problem (arising from the fact that it's a declarative language). Imperative
languages have no problem with sequencing, and therefore have no need for
monads.

------
wingo
I liked the article, but it was very much "in the weeds".

I hack Scheme, and have never written any Haskell, though you have to know
some to read the papers. But when programming in Scheme I find myself wanting
monadic operations, like the seed in Oleg's SSAX parsers. But my code that
rhymes with monads _requires_ tail recursion, and the author didn't mention
tail recursion at all.

I prefer to see monads as fold + tail recursion -- and without the latter, it
seems you need the mutation that Common Lisp encourages.

~~~
sctb
"I prefer to see monads as fold + tail recursion"

I don't think this is accurate; monads are orthogonal to both folding and tail
recursion. Monads are types that represent computations and provide the
operations `bind' and `return'. They're much simpler than many imagine at
first.

"... and without the latter, it seems you need the mutation that Common Lisp
encourages."

I agree that when using looping constructs other than recursion, it is often
required that a counter or accumulator be mutated. Common Lisp does not
encourage _general_ mutation, however, and the standard functions and macros
illustrate that.

~~~
jrockway
> the standard functions and macros illustrate that

You do know that nearly all of the standard list functions destroy at least
one of their arguments, right?

People like to call Lisp a functional language, and you can use it that way,
but it is really just a very nice imperative language.

Statements like:

    
    
       (loop for x in some-list sum (* 2 x))
    

are just more efficient in CL than the "functional" approach:

    
    
       (reduce #'+ (mapcar (lambda (x) (* 2 x)) some-list))
    

In this case, the LOOP is about 10x faster than the reduce, probably because
it conses about 5000 times less memory. CL is not set up the same way Haskell
is, and hence you want to reuse data structures (or never create them) where
possible. This is very imperative and very non-functional. (But I am not
saying it's bad, LOOP (or ITERATE, which is what I use "in real life") is very
easy to read and write.)
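
The same trade-off exists in other strict languages; here is a rough Python
analog of the two styles (purely illustrative, not a CL benchmark):

```python
xs = list(range(100_000))

# "reduce + mapcar" style: builds an intermediate 100,000-element
# list of doubled values before summing it.
total_functional = sum([2 * x for x in xs])

# "LOOP" style: a generator feeds sum one element at a time, so no
# intermediate list is ever allocated.
total_loop = sum(2 * x for x in xs)

print(total_functional == total_loop)  # True; only the allocation differs
```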

~~~
Hexstream
I have a feeling you don't have much CL experience...

There really aren't very many CL destructive (mutating) list functions, and I
personally almost never use them except on fresh lists (like for example when
I nreverse a list I just created with successive pushes) and I'm pretty sure
it's standard practice.

Let's see what destructive list functions I can find in CL (ref:
[http://www.lispworks.com/documentation/HyperSpec/Body/c_cons...](http://www.lispworks.com/documentation/HyperSpec/Body/c_conses.htm))

    
    
      rplaca, rplacd, nsublis, nsubst*, nconc, nreconc, nbutlast, mapcan, mapcon, nintersection, nset-difference, nset-exclusive-or
    

I took some liberties but that's about 12/58 = 20% destructive functions
(push, pop, pushnew are usually used on variables, not list structure). And
good style dictates using them very sparingly... Also, (loop for x in some-
list sum (* 2 x)) is pretty functional, you're just mutating x locally but who
cares; this is exactly like saying (apply #'+ (mapcar (lambda (x) (* 2 x))
some-list)) which is obviously functional.

~~~
jrockway
Point conceded. The post I replied to says "CL discourages mutation" which is
not true. I said "nearly all sequence functions are destructive", which is
also not true.

Upon further reflection, I think we can say CL doesn't really care if you
mutate or not. There's more than one way to do it :)

~~~
Hexstream
CL _does_ discourage mutation... With all the structure sharing going on in CL
programs you can't afford mutating things thoughtlessly.

~~~
jrockway
But OTOH, if you are careful about mutating when possible, your program will
run faster. Not the most important thing in the world, I agree, but it is a
"reward" for mutating things.

------
sayrer
Not knowing a lick of Haskell, these sorts of articles always make me wonder
whether it makes Lisp look like Blub.

~~~
jrockway
It depends on how important you find purity and strict typing. They are both
useful, but the mistakes that experienced programmers make are not usually
related to types. Having recently started writing "real" software with
Haskell, I've noticed that the type system doesn't save me a lot of time. I
still have to debug stupid errors, like calling a function with the right type
signature but with the wrong effect. By the time I've debugged and written
tests, I haven't gained a _whole lot_ over Lisp or Perl. The advantage of
Haskell is that it's as expressive as either of those, but executes _a lot_
faster. (One concurrent app we ported was very quick to use 100% CPU on Perl
with just a few outstanding operations. In Haskell, it gets to 10% CPU and
runs out of file descriptors. Wow!)

As for monadic computation in Haskell, it is very comfortable. You can create
an environment where you only have exactly what you need, which makes your
program easier to read and reason about. (No "IO a" signature? This function
doesn't do IO.) It is not something I can't live without in other languages,
but it does solve the intended problem very cleanly. (Similar abstractions
like monoids and functors are also nice ways of looking at their respective
problems.)

The author of this article is right, though -- in a language with mutable
state and without type inference (for return), monads are just not worth the
bother. They won't give you anything you don't already have.

~~~
andreyf
_I still have to debug stupid errors, like calling a function with the right
type signature but with the wrong effect_

I wonder if a way of writing tests which is significantly easier (taking one
or two magnitudes less time) would help here?

Say for example, something like Python's doctests which run inside your IDE...
it seems my standard way of development anyway - to write a piece of code,
test it in the REPL, repeat.
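
For the curious, the shape of such a doctest (the function here is
hypothetical, just to show the workflow) is a recorded REPL session that the
test runner replays:

```python
def double_all(xs):
    """Double every element of xs.

    The session below was pasted from the REPL; `doctest` re-runs it
    and fails if the outputs ever drift.

    >>> double_all([1, 2, 3])
    [2, 4, 6]
    >>> double_all([])
    []
    """
    return [2 * x for x in xs]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # prints nothing when every example still passes
```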

~~~
jrockway
Haskell's quickcheck is really nice for this case.

------
lacker
Has _anything_ taken the Common Lisp world by storm in the past decade?

~~~
jjs
SLIME, perhaps.

------
shiro
One of the problems with using monads in dynamically typed languages is, as
this article suggests, the need to overload the 'return' operation by the type
of the result, not by the type of the arguments. I once tried to address this
issue.

[http://practical-scheme.net/wiliki/wiliki.cgi?Scheme%3aExplicitMonad%3aEnglish](http://practical-scheme.net/wiliki/wiliki.cgi?Scheme%3aExplicitMonad%3aEnglish)
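
A minimal sketch of why this is awkward (Python here, and not Gauche's actual
mechanism): `bind` can dispatch on the monadic value it already holds, but a
bare `return` has no argument whose type identifies the monad, so one
workaround is to pass the monad around explicitly:

```python
class ListMonad:
    @staticmethod
    def unit(x):                     # 'return' for the list monad
        return [x]
    @staticmethod
    def bind(m, f):                  # can dispatch: m is a list
        return [y for x in m for y in f(x)]

class MaybeMonad:
    @staticmethod
    def unit(x):                     # 'return' for the maybe monad
        return ("Just", x)
    @staticmethod
    def bind(m, f):                  # can dispatch: m is a tagged pair
        return f(m[1]) if m[0] == "Just" else m

def twice(M, x):
    # Without type inference there is no way to tell which 'unit' a bare
    # return means, so the monad M travels as an explicit argument.
    return M.bind(M.unit(x), lambda v: M.unit(v + v))

print(twice(ListMonad, 3))    # [6]
print(twice(MaybeMonad, 3))   # ('Just', 6)
```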

