
Escaping Hell with Monads (2017) - networked
https://philipnilsson.github.io/Badness10k/escaping-hell-with-monads/
======
chousuke
What may not be obvious here is that the Haskell function in each of these
situations is the exact same generic function. The flow of data is abstracted
via the Monad interface, so that it becomes possible to write code that works
generically regardless of whether parallelism, IO, databases or whatever else
is involved. This makes testing such functions quite pleasant.
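
A sketch of what that one generic function could look like (the names mirror the article's pseudocode and are assumptions, not code from the article; any Monad instance can be plugged in for `m`):

```haskell
-- A fully generic pipeline: the same code runs in Maybe, [], IO, State, ...
-- The getData/getMoreData arguments are hypothetical stand-ins.
pipeline :: Monad m
         => m a               -- getData
         -> (a -> m b)        -- getMoreData
         -> (b -> m c)        -- getMoreData, again
         -> (a -> c -> m d)   -- getEvenMoreData
         -> m d
pipeline getData getMoreData getMoreData' getEvenMoreData = do
  a <- getData
  b <- getMoreData a
  c <- getMoreData' b
  getEvenMoreData a c
```

Instantiated at Maybe it short-circuits on Nothing; at [] it enumerates combinations; at IO it sequences effects. Testing means passing in pure stubs instead of real effects.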

~~~
tom_mellior
But is _writing_ such functions, and all the auxiliaries, pleasant as well?
What type would I have to give the "getData" function to work in all these
examples? What if I just want to call an existing getData function that
returns a Maybe, not some generic monad thing? Would I have to tweak the
syntax for all the different monads I want to call it from?

Also, what would the print call really look like in actual code? This post
talks about composing IO with other monads. I've never seen a monad
composition example that wasn't peppered with calls to unintuitively named
functions like liftM_.

The pseudocode looks nice, but if the pseudocode isn't a faithful
representation of actual use, then it's not very useful.

~~~
Munksgaard
Maybe is already a monad.

~~~
tom_mellior
Yes. That would allow me to use a getData returning a Maybe in the Maybe
monad. But I wouldn't be able to use it in the list monad, or the others
mentioned in the article.

~~~
marcosdumay
Well, yes.

You will need to convert your type into something else to use it in another
monad.

As an example, it's common to track failure with an `Either Text` monad, but
functions that can fail in only one way usually have a `Maybe a` return type.
That means you will probably have a `Text -> Maybe a -> Either Text a`
function around and write code like this:

    
    
        toEither "Error message" $ functionInMaybe a b
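
Such a helper is a one-liner; a sketch using String in place of Text to stay dependency-free (the name `toEither` is the hypothetical helper from above, not a standard function):

```haskell
-- hypothetical helper: tag a Maybe's failure case with an error message
-- (String stands in for Text to avoid the `text` dependency)
toEither :: String -> Maybe a -> Either String a
toEither msg = maybe (Left msg) Right
```

So `toEither "Error message" (Just 3)` gives `Right 3`, and `toEither "Error message" Nothing` gives `Left "Error message"`.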

------
sn41
Brilliant article. The best way to explain a strange feature like a monad is
to list out programming situations to which monads are a good solution.

In the same vein, I finally understood what Lisp macros are for, from a
similar article explaining in what programmatic situation you can benefit from
writing a macro: [1]

[1]
[http://www.defmacro.org/ramblings/lisp.html](http://www.defmacro.org/ramblings/lisp.html)

------
ndh2
I don't get it. The article doesn't explain anything. It's just a list of
statements. "This is bad, that is good." But it doesn't explain why the bad
things are bad, and how using monads is better. It also doesn't explain what
monads are. "We note that lists are Monads." Why? How did you come to that
conclusion?

My guess is that all the people praising this article already "got monads".
But for the unenlightened, it doesn't do anything.

~~~
andrewprock
I don't get it either. These are probably toy examples written to motivate a
specific solution. Note that the semantics change between versions. In some
versions, erroring out will not print anything, in others it will.

If this is Java, why not use exceptions, or @NonNull. If this is C++, why not
use exceptions?

I'll note that for loops and if statements are fundamental constructs that
many people don't really understand. I've seen a lot of bad code written with
if statements and for loops. If these fundamental constructs can be misused to
create bad code, how much more likely is it that more abstract constructs like list
comprehensions, futures, and monads will be used to create bad code?

~~~
leshow
> Note that the semantics change between versions.

Because each one describes a different type's implementation of the monad
interface. The blog post is just trying to illustrate:

"all these problems have the same interface. If we have a general solution to
a problem, why use a different ad hoc solution for each?"

~~~
andrewprock
I think something was missed, and maybe I missed it. It looks to me like this:

    
    
        var a = getData();
        if (a != null) {
          var b = getMoreData(a);
          if (b != null) {
             var c = getMoreData(b);
             if (c != null) {
                var d = getEvenMoreData(a, c)
                if (d != null) {
                  print(d);
                }
             }
          }
        }
    

will only print on non-null, whereas this:

    
    
        do
          a <- getData
          b <- getMoreData a
          c <- getMoreData b
          d <- getEvenMoreData a c
          print d
    

will always print something.

~~~
foldr
>will always print something.

Not necessarily. It depends on which monad is being used here (the article
doesn't specify). If you replace the 'print' with 'return' to simplify things,
then this could be an expression in the Maybe monad, which would return either
(Just d) or Nothing.

The article is lying a bit by suggesting that it's trivial to layer short-
circuit-on-error behavior on top of the IO monad. It's possible for sure, but
some relatively subtle issues can arise (in Haskell at least).
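
For instance, with hypothetical Maybe-returning stubs, the same do-notation evaluates to a Just or a Nothing, with no printing anywhere:

```haskell
-- hypothetical stubs: the do-notation from the article, interpreted by Maybe
getData :: Maybe Int
getData = Just 1

getMoreData :: Int -> Maybe Int
getMoreData x = if x < 10 then Just (x + 1) else Nothing

result :: Maybe Int
result = do
  a <- getData
  b <- getMoreData a
  getMoreData b        -- any Nothing along the way aborts the whole chain
```

Here `result` is `Just 3`; change `getData` to `Nothing` and the whole chain yields `Nothing` without evaluating the later steps.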

~~~
andrewprock
This is what I always hated about C++, and to a lesser extent Java. The code
that you cannot see makes it very difficult to understand the code when
reading the source. If the only person who can truly understand the code is
the author, that is an anti-pattern.

The for loops and if statements were never a problem, it was the default
constructors, overloaded operators, and the like that always caused confusion
on my teams.

~~~
foldr
In simple cases you'd be able to see the concrete type if this were real
Haskell code, so you'd know which monad instance was the relevant one. Your
complaint does apply to some uses of the monad transformer library, however.

------
kpmah
This is a really good way of explaining monads. Generally, when people ask
about monads, they're not really asking 'what' monads are - they're asking
'why' and 'how' you use them.

The advantage shown here is that Haskell has special syntax for monads, and
this single abstraction can be used for multiple features that many other
languages have specialised syntax for.

~~~
majewsky
That can also be seen as a disadvantage. There is value to having different
things look different.

~~~
klmr
The difference is given by the code context (which is unfortunately entirely
missing from the article). What’s shown here are the things all these patterns
have _in common_. The whole point of abstraction is to have different, but
related, things look the same.

Haskell (and these examples) certainly raise the level of abstraction to
extremes (compared to other languages). But as long as the context (explicit
type annotations, if necessary) makes it clear what the code does, I don’t see
this (even conceivably) as a disadvantage.

~~~
xenomachina
> But as long as the context (explicit type annotations, if necessary) makes
> it clear what the code does, I don’t see this (even conceivably) as a
> disadvantage.

I spent 3 years trying to learn Haskell, and I never found this clear. When
you see a "do" block, where do you look to figure out what it's actually
doing?

~~~
dllthomas
> When you see a "do" block, where do you look to figure out what it's
> actually doing?

If you need to know, you look at what consumes the result. If it's abstract
(say it's a top level binding `... -> m a` where the m is abstract) you don't
_need_ to know "what it is actually doing" - it should be correct for any
choice of `m`.

Whether this stuff is easy to find/follow certainly depends on the quality of
the code, as well as your experience. It's not something I struggle with,
working day-to-day in Haskell.

~~~
dmitriid
> you don't need to know "what it is actually doing" - it should be correct
> for any choice of `m`.

I don’t know what it’s doing, but it’s correct. :-/

So _what_ is it doing that’s correct? Or is it some abstract correctness?

~~~
dllthomas
It's not abstract correctness, it's satisfying contracts up to an interface.

Take `sequence :: Monad m => [m a] -> m [a]`

That takes a list of actions and gives us an action that runs each in turn and
collects the results in order. The correctness _relative to that description_
is clear regardless of whether each input action is threading state or
handling failure or printing to the screen.
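
To make that concrete, here is the one generic `sequence` specialized to two different monads (the values are chosen purely for illustration):

```haskell
-- the same generic `sequence`, given two different monad instances
ok :: Maybe [Int]
ok = sequence [Just 1, Just 2, Just 3]         -- collects: Just [1,2,3]

failing :: Maybe [Int]
failing = sequence [Just 1, Nothing, Just 3]   -- aborts on the Nothing

combos :: [[Int]]
combos = sequence [[1, 2], [3, 4]]             -- list monad: all combinations
```

`sequence` itself has no idea which of these behaviors it is producing; that is decided entirely by the monad instance of its argument.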

~~~
dmitriid
Then this clearly doesn’t answer the original question, does it?

~~~
dllthomas
Not on its own. Fortunately, it wasn't the entirety of my answer to the
original question. When you're writing something like `sequence`, you don't
need to know the implementation of the interface you're working against. As
you get comfortable in Haskell, more things are "like `sequence`". There will
always be things that aren't - for those, as I said, you should look where the
value is consumed. And as I neglected to say, you can always add a type
annotation when things get unclear.

~~~
dmitriid
Ah, the old Java adage: any abstraction can be solved with many more layers of
abstraction.

------
hota_mazi
All the ad hoc solutions look at least as clear if not clearer than their
monad equivalents.

Also, that article conveniently ignores that as soon as you start mixing
monads, you enter a different kind of hell: monad transformer hell.

Because you see, monads don't compose. Not without these transformers. And if
you think the monad examples in that article were hard to follow, wait until
you see monad transformers in action.

Also, I don't think replacing a `for` loop with a `for` comprehension is that
much of a gain. I'll pick a `map`/`filter`/... any day over such `for` loops.

~~~
dkarl
I worked on a Scala system with two rookie Scala programmers who had been
taught that `for` expressions were a simple and easy way of composing futures,
because it was the monadic way. The code looked very neat, but we had an
epidemic of swallowed Failures, and the ones that weren't swallowed were often
interpreted incorrectly. (For example, a 500 from service A would be logged as
a 500 from service B, because someone miscalculated how the Futures and Trys
were being composed.)

This was due to people not understanding the for expressions they were
writing. Rather, I seemed to be the only one _trying_ to understand the `for`
expressions: I often had to mentally (or even manually) translate them into
flatMaps and maps according to the language spec to discover that they were
not doing what the person who wrote them intended.

The other programmers, who had been promised that `for` expressions were
obvious and self-explanatory, felt that this should be unnecessary, and that
if they were making mistakes, then Scala was failing to live up to its
billing, and we should stop using it. It was extremely frustrating. I love
Scala, and I feel that promoting `for` expressions to beginners like that was
ideologically motivated and bad for the language.

It's much easier to just use map and flatMap. But that won't do, because the
virtue of monads is supposedly the abstractions built on top of them.

~~~
julienfr112
What was your solution with the rookies? Keeping the for expressions and
fixing the bugs, or rewriting them more explicitly?

~~~
dkarl
A little bit of both. I kept hoping something would click and I would end up
loving the for expressions, but for a few monsters that I could only untangle
by translating the code line by line into map, flatMap, and filter, I
committed the translations so I would never have to do that again. Neither of
the other programmers is writing Scala anymore. I've mostly steered clear of
for expressions since then.

------
toolslive
The way I often explain it to a C++ developer is by asking this question:
"what if the semi-colon was merely an operator, that you can overload?"

Then they sort of start to see the power.

~~~
chopin
How would you overload that for getting something monadic-like? Is there a
simple example (C++ is obviously not my native tongue)?

~~~
toolslive
Well, the (;) is the `bind`. The problem with the question is that it only
works on a conceptual level, as C++ does not have type system support for
statements (i.e., in {a;b}, what's the type of a?).

But it allows them to see how you could build
fiber/promises/lightweight_threads support using this concept, which is most
of the time what they are interested in anyway.

~~~
nickpsecurity
I never thought about shifting perspective on the semicolon from imperative to
functional. I'll have to revisit the idea after I learn functional programming.
For now, maybe what you describe could be possible with tools that do C++
metaprogramming in Haskell or Racket like below. I liked what I was reading
despite not even using those languages. ;)

[http://aszt.inf.elte.hu/~gsd/s/cikkek/abel/haskelltmp.pdf](http://aszt.inf.elte.hu/~gsd/s/cikkek/abel/haskelltmp.pdf)

[http://matt.might.net/papers/ballantyne2014metameta.pdf](http://matt.might.net/papers/ballantyne2014metameta.pdf)

------
rattray
A few nits with this:

1\. This seems like an article about Haskell's do-notation, not Monads. The
article even decries Promises, which _are_ Monads.

2\. "Call your language implementor and ask for do-notation today!" isn't very
satisfying. I would be curious to read such a proposal for, e.g., JS, though it
would need another name (do-expressions already have a proposal). Meanwhile,
it might have been helpful to show how to implement these Monads in a common
tongue (e.g., JS).

3\. The article did not display the bodies (or even type signatures) of the
functions at hand. It's unclear to a reader unfamiliar with Haskell whether
they are all the same, and what interface they must conform to.

4\. The examples listed as "Hell" don't sound like hell to me. They're minor
annoyances that I run into occasionally. The code listed as "problematic"
isn't blissful to write and doesn't feel elegant, but is universally easy to
write and read.

As an aside, it feels like Go and Haskell would be at opposite ends of some
spectrum (obvious vs elegant?). In almost any business/production setting, I
would choose the former.

~~~
networked
>do-expressions already have a proposal

For those who are curious, it's this one: [https://github.com/tc39/proposal-
do-expressions](https://github.com/tc39/proposal-do-expressions).

------
dahart
So, the only problem here is none of the examples actually handle the errors.
Chains of partial functions, using any language I've ever tried, are easy when
you don't have to do anything about the null value. The article demonstrates
this in multiple languages. In practice, the hell only starts when you have to
send control flow somewhere else or show the user a different response for
every failed partial. I haven't used monads intentionally, can they do
anything special about the real world case of having to handle every error
separately?

~~~
bjterry
People would use something like an Either monad in that case (in real code
they may use an Error monad). If all your getData functions have a return type
`Either MyError GoodResult` then you could write something like:

    
    
      data MyError = MyFirstError | MySecondError
    
      type GoodResult = String
    
      myFunc = 
        case doEither of
          Right x -> *handle good result*
          Left MyFirstError -> *handle error*
          Left MySecondError -> *handle error*
        where 
          doEither = do
            a <- getData
            b <- getMoreData a
            c <- getMoreData b
            d <- getEvenMoreData a c
            return d
    

If one of those functions returns a "Left" (or error) value, the remaining
functions are not evaluated. If they all return a "Right" value, doEither
evaluates to a successful "Right" value.

This works much better when all of your functions fail in the same way. If
some functions fail with an empty Maybe and some fail with an Either and some
with Error you will have to wrap all of those to take advantage of do-
notation.

~~~
hota_mazi
Please don't use Either to carry errors, it's really silly to use a language
with a strong type system and then rely on convention to represent the error
on the correct side of the monad.

Is error left or right again?

That's right, you shouldn't have to remember.

Use a more specific type for this, some GADT with clearly named constructors
(e.g. "Error" and "Value").

~~~
nemetroid
Right is the right result and Left is the wrong result. That's about as easy
as it gets when it comes to remembering things. There is already a strong
convention for using Either for this purpose and it's built into the standard
library, since the Monad instance is designed to allow you to do this.

If you create your own data type for this, you are designing yourself into the
Monad transformer hell you mention in another comment.
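
The built-in Monad instance in question short-circuits on the first Left; a small sketch with a hypothetical `step` function:

```haskell
-- Either's Monad instance stops at the first Left it encounters
step :: Int -> Either String Int    -- hypothetical fallible step
step x = if x < 3 then Right (x + 1) else Left "too big"

chain :: Either String Int
chain = step 1 >>= step >>= step    -- the third step fails
```

Here `chain` is `Left "too big"`: the first two steps succeed (`Right 2`, then `Right 3`), and the failure of the third is the overall result.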

~~~
hota_mazi
> That's about as easy it gets when it comes to remembering things

I use a statically typed language so I have fewer things to remember because
the compiler does the bookkeeping for me.

`Either` is a generic variant record, it should not be used to carry errors.

And as a general rule, any programming construct that relies on convention
instead of static typing is prone to generating bugs.

~~~
nemetroid
> `Either` is a generic variant record, it should not be used to carry errors.

The name suggests that, but the Monad instance suggests that Either is an
asymmetric data type, where the left and right branches carry different
meaning.

> And as a general rule, any programming construct that relies on convention
> instead of static typing is prone to generating bugs.

If the Either branches were named `Error` and `Success`, using `Error` to
carry errors would be just as much of a convention as using `Left` for that
purpose (a more sensible convention, but nonetheless a convention). The only
way you'd get around that is if you'd somehow encode in the language what is
and what isn't an error and use that to statically force correct branch usage.

------
m1el
I'd like to point out that do-notation in Haskell is syntactic sugar for
something nasty.

Let's look at the example:

    
    
        do
          a <- getData
          b <- getMoreData a
          c <- getMoreData b
          d <- getEvenMoreData a c
          print d
    

This desugars to:

    
    
       bind getData (\a ->
         bind (getMoreData a) (\b ->
           bind (getMoreData b) (\c ->
             bind (getEvenMoreData a c) (\d ->
               print d))))
    

_Edit_ : the above code was initially wrongly desugared.

Which is exactly the callback hell criticized in the article.

Sure, monads and do-syntax provide a nice abstraction and are interesting
constructs from the perspective of type theory, but in my opinion it's not
principally better than Promises or await.

~~~
unhammer
`bind` is usually written as `>>=` in Haskell.

Haskell also lets you write simply

    
    
        print
    

instead of

    
    
        \d -> print d
    

I wouldn't consider this part syntactic sugar – if one did, I suppose one
would have to say that

    
    
        f(g())
    

in Python is sugar for

    
    
        (lambda i: f(i))((lambda: g())())
    

Here's how the desugared version looks in more idiomatic Haskell:

    
    
        getData 
          >>= \a -> getMoreData a 
          >>= getMoreData 
          >>= getEvenMoreData a 
          >>= print
    

(where I kept the lambda around the first getMoreData since `a` was used in
getEvenMoreData)

------
Sharlin
This is probably the most concise and elegant justification for the monad
pattern that I've seen. And I've seen _many_.

------
AndrewDucker
I'm not sure why

    
    
      do
        a <- getData
        b <- getMoreData a
        c <- getMoreData b
        d <- getEvenMoreData a c
        print d
    

is better than

    
    
      var a = getData();
      var b = a?.getMoreData();
      var c = b?.getMoreData();
      var d = c?.getEvenMoreData(a);
      print(d);
    

Other than the "var" at the start of each line they look like they take up
about the same amount space, and the pattern is near-identical, so they're
equally readable.

~~~
alipang
The point is not that any one example is better than the other, rather it's
that the same syntax can be used to uniformly solve multiple different
problems (partiality, async, "non-determinism", and state-passing).

~~~
xenomachina
One thing this article doesn't really make clear is how someone reading the
code can actually determine which of these things is being done. Code that
looks the same but magically does something different doesn't seem like an
improvement to anyone who wants to be able to maintain their code.

I know enough Haskell to know that the "magic" comes from which instance of
Monad is being used, but between the weirdness of overloaded values, the fact
that the "do" syntax doesn't really make it obvious which value is the
important one, and Haskell programmers' aversion to using parens, I still find
this stuff impossibly hard to read. This is largely why I gave up on Haskell
after trying to learn it for 3 years.

~~~
marcosdumay
Well, the `do` syntax makes it patently obvious which value is the important
one: it's the one being assigned to variables within the block and used as an
argument to functions. It hides the not-so-useful one, which is the name of
the monad you are using.

You know the kind of "magic" by looking at the type declaration on the line
just above that `do`. If there's no type declaration there, I hope the code is
simple enough that it is self explanatory, otherwise it's plain bad code.

Things are made more complex when the monad type is generic. But then, this
just makes it patently visible that the monad does not matter at all and the
code must work as intended on any kind of "magic" you can throw at it.

About the aversion to using parens, matched operators make your code brittle
and hard to change. This is not obvious if you have never seen any other way
of organizing code, but it is a large effect.

~~~
xenomachina
> the `do` syntax makes it patently obvious what is the important value

Given that it seems that the majority of people who attempt to learn Haskell
end up giving up, I'd argue that very little about it is "patently obvious".

> it is the one being assigned into variables within the block and used as
> argument of functions

I think this is part of what confounds me when it comes to Haskell (aside from
the terrible syntax and horrible coding conventions): when I'm reading an
expression I understand the overall expression by understanding what the sub-
expressions do. That is, I can start at the leaves of the expression tree and
work my way up to the root.

In Haskell, sometimes this is not possible, because sometimes the type of the
subexpression is constrained by something higher up in the AST. This leaves me
not knowing where to start with trying to decipher the type of an expression.

> About the aversion to using parens, matched operators make your code brittle
> and hard to change. This is not obvious if you have never seen any other way
> of organizing code, but it is a large effect.

What are "matched operators"?

The aversion to using parens means that I can't even parse code unless I know
the fixity rules of all of the operators in an expression. I'm sure you'll say
this is easy to pick up after a while, but that has not been my experience.

~~~
marcosdumay
> In Haskell, sometimes this is not possible, because sometimes the type of
> the subexpression is constrained by something higher up in the AST.

The point is that this constraint always has the exact same shape. It may
carry different semantics, but Haskell makes you abstract over those to get at
the operational code, while the compiler verifies that anything beyond the
local operations fits your overall structure.

Really, learning Haskell is the process of learning to abstract semantics away
from your code. It's a hard process, it's not something natural.

> What are "matched operators"?

Those are operators that require a matching counterpart: usually parentheses,
square brackets and braces. Haskell has the unmatched operators `.` and `$`
that people use instead, and to read them you do have to understand their
priority. You can get by without knowing the fixity of most operators, but
those two are a must.

~~~
xenomachina
> > In Haskell, sometimes this is not possible, because sometimes the type of
> the subexpression is constrained by something higher up in the AST.

> The point is that this constraint always has the exact same shape.

I'm not sure how to even parse that sentence. Let me phrase it as a question:
when trying to understand a given piece of code, how can I determine the type
of an expression when its type is sometimes determined by its sub-expressions
(like in virtually any other typed language), but sometimes determined by what
consumes it?

The way people deal with complex things is by decomposing them into smaller
parts, understanding those, and then composing that understanding to grasp the
whole. It seems that in Haskell you are required to understand the whole thing
at once, because you can't reliably determine the type of a sub-expression out
of context.

> > What are "matched operators"?

> Those are operators that require a matching. Usually parenthesis, square
> brackets and braces.

And how do "matched operators" make code more brittle and harder to maintain?

I think the opposite is true: to maintain code you must first be able to read
it, and what you call "matched operators" are far superior for readability
compared to having to remember fixity rules. Even after 3 years of dealing
with Haskell I found myself constantly misreading code because I didn't get
the fixity right.

~~~
tome
Could you please give a concrete example? The majority of Haskell expressions
have what is called a "principal type". The type of such an expression is
independent of its subexpressions or super-expressions.

~~~
xenomachina
A concrete example of what?

~~~
tome
A concrete example of an expression that you have difficulty determining the
type of.

~~~
xenomachina
It's been a few years since I gave up on Haskell, but I remember anything
involving `do` (especially containing a `return`) or `sequence` seemed to
require a lot of guesswork about what type was involved. Pretty much anything
where the type of a subexpression would be an overloaded type has a high
potential for being confusing.

Another example of what I'm talking about is the fact that even the expression
"5 == 5" gives an error unless the kludgy type defaults are enabled. The fact
that they felt the need to add type defaults and MR to the language is a
pretty strong sign that they screwed up the usability of the type system.

~~~
tome
Haskellers would tell you that what they appreciate about this is that you
don't have to understand sequence and return! They do the "same thing" for all
monads in the sense that they obey well-defined laws.

"5 == 5" is a different issue, and ironically is ambiguous because it's so
simple. Most practical arithmetic expressions would have some way of
disambiguating the type.
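
For example, a single annotation anywhere in the expression pins down the type of the whole comparison (a sketch):

```haskell
-- one annotation disambiguates the entire comparison; without any context,
-- a bare 5 == 5 relies on Haskell's type defaulting to pick a numeric type
check :: Bool
check = (5 :: Int) == 5
```

Any surrounding use that fixes the type (a function argument, a record field, a signature) does the same job, which is why the ambiguity rarely bites in practical code.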

------
z3t4
You can flatten the pyramids with logical AND gates: execution only continues
if the previous step was successful. Here's an example:

    
    
      var a = getData();
      if (a == null) return null;
      
      var b = getMoreData(a);
      if (b == null) return null;
    
      var c = getMoreData(b);
      if (c == null) return null;
    
      var d = getEvenMoreData(a, c)
      if (d == null) return null;
    
      print(d);
    

For async code I use counters. But async code is usually thick, not because
of a lack of abstraction, but because you _do not want any loose ends_, e.g.
you want to handle each and every exception that might arise, as well as
having code for cancellation and rate-limiting.

------
julienfr112
What happens when you mix what <- means:

    
    
      do
        a <- getData             -- this is awaiting a future
        b <- getMoreData a       -- this is looping over a list
        c <- getMoreData2 b      -- this is null checking
        d <- getEvenMoreData a c -- this is awaiting another future
        print d
    

Monads' hell, isn't it?

~~~
mdellabitta
Being able to combine monads like this is part of the point.

~~~
julienfr112
It means that the fact that the monad is looping, checking for null, or
awaiting a future is coded in the function itself, and hidden when reading the
do block. When types are heavily overloaded, it can take minutes to know if we
are looping over an iterable, awaiting a futurable or null-checking a
maybeable. And what about a Maybe of a list? A Future of a list? Come on, how
is this not hell?

~~~
igorbark
Haskell's type inference is optional: if you feel the above code is difficult
to understand, you can manually specify the type of each statement.

In a language with a less powerful type system, you could write the same
information in comments, but then the compiler wouldn't warn you if you
accidentally changed the side effects of a statement.

------
jstimpfle
I think this is another instance of solving the wrong problem.

    
    
        var a = getData();
        if (a != null) {
          var b = getMoreData(a);
          if (b != null) {
            var c = getMoreData(b);
            if (c != null) {
              var d = getEvenMoreData(a, c)
              if (d != null) {
                print(d);
              }
            }
          }
        }
    

Why not simply

    
    
        var a = getData();
        if (a == null)
            return;
    
        var b = getMoreData(a);
        if (b == null)
            return;
    
        var c = getMoreData(b);
        if (c == null)
            return;
    
        var d = getEvenMoreData(a, c)
        if (d == null)
            return;
    
        print(d);
    

And the other example

    
    
        var a = getData();
        for (var a_i in a) {
          var b = getMoreData(a_i);
          for (var b_j in b) {
            var c = getMoreData(b_j);
            for (var c_k in c) {
              var d = getMoreData(c_k);
              for (var d_l in d) {
                print(d_l);
              }
            }
          }
        }
    

Yes, there is the issue that there is too much nesting. But there is no point
in tackling this problem with a clever monad. That's not solving the issue,
it's just hiding it. Operationally there is no difference.

What really needs to be done is to avoid data dependencies. It's typically
possible to do something like

    
    
        for (var x in a)
            foo(x, a_state);
        for (var x in b)
            bar(x, a_state, b_state);
        for (var x in c)
            baz(x, a_state, b_state, c_state);
        for (var x in d)
            quux(x, a_state, b_state, c_state, d_state);
    

This not only looks cleaner, it _is_ cleaner. The path of execution is
simpler. It will also often run much faster, since it's only tight loops and
uniform data accesses.

~~~
lobster_johnson
The example in the article opens itself to an attack like this because it's
simple (in an attempt at explaining things simply). It's easy to mistake the
triviality for naivety.

But the "hell" part that monads solve really arises when you go further than
this -- when you have error handling, null handling, asynchronicity and
mutable state involved. For example, if each of these functions could return
an Either or Maybe, your non-monadic, "if"-based version would get a lot
hairier.

Monads abstract and generalize the concept of chaining computations. _Of
course_ you can write the chaining manually, but you would have to replicate
the same logic for every chain of function calls, with all the ad-hoc error
handling and so on in place. It may look "cleaner" to you, but there's no
generalization happening, and monads exist to generalize patterns like these
into a single tool that can be used over and over again.

This is why every single monad code example in the article is identical.

~~~
jstimpfle
Yes, you can implement an "error monad" instance for each of a given set of
procedures (returning Either, Maybe, or whatever).

You can also wrap each of a given set of procedures that don't conform to a
uniform interface (say, return null or < 0 on error) to be conformant. That's
just basic hygiene. I don't see a problem.

> Monads abstract and generalize the concept of chaining computations.

I get that, but in the real world I don't find any need beyond sequencing
(first do this, then do that) and the occasional nested for-loop or early
return. Everything beyond that just means giving up on other things that are
extremely important, like modularity (complex types / protocols are inherently
anti-modular) and deterministic resource usage.

> This is why every single monad code example in the article is identical.

I didn't even notice that, and I do not consider this a desirable goal at all.
The examples do totally different things, and you _want to see that_. There's
no need to swap one out for the other in any realistic scenario. Ever.

~~~
dllthomas
> in the real world I don't find any need

And I'm sure you're pretty productive with your choice of language, but bear
in mind that tools shape our thinking. You don't find yourself reaching for
those tools in part because you've never expected them to be there. That
doesn't mean they can't be very useful.

~~~
dmitriid
Exactly. That’s why I would never write any of the ad-hoc examples the way
they are written. And that’s why the article is criticized.

------
lazulicurio
Definitely a really neat set of examples. As someone not familiar with
Haskell, I have two questions though:

1) What would the method signatures look like in these examples? I.e., in the
list example, I'm assuming getData would be () -> list(A). And then is
getMoreData (A) -> (B) [which gets lifted to list(A) -> list(B)]?

2) How would you combine multiple of these approaches at once? E.g. using both
a state monad and a list monad? Would you have to use nested do blocks? Or
would you typically define your own custom monad that is a combination of
state and list, writing your own lift method, etc.?

~~~
legopelle
1) Maybe monad

    
    
      getData :: Maybe Data1
      getMoreData :: Data1 -> Maybe Data2
    

Note: the monad is probably not just Maybe if it talks to the outside world
but more in 2)

List monad

    
    
      getData :: [Data1]
      getMoreData :: Data1 -> [Data2]
    

Don't know about continuations.

2) There are a few approaches but the most popular are monad transformers,
building stacks to combine the effects. For example, to speak with the outside
world we need 'IO'. To add failure we might use the Maybe transformer

    
    
      MaybeT m a
    

to build

    
    
      type IOMaybe a = MaybeT IO a
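
For a sense of what using such a stack looks like in practice, here is a
minimal sketch, assuming the standard `transformers` package (`lookupUser`
and `greet` are hypothetical names, not from the article):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..), runMaybeT)

-- Hypothetical lookup that can fail while also being allowed to do IO.
lookupUser :: String -> MaybeT IO String
lookupUser "alice" = return "Alice Liddell"
lookupUser _       = MaybeT (return Nothing)  -- failure short-circuits the block

greet :: String -> IO (Maybe String)
greet name = runMaybeT $ do
  full <- lookupUser name
  lift (putStrLn ("found " ++ full))  -- plain IO actions must be lifted in
  return ("Hello, " ++ full)
```

This is also where the lifting noise mentioned elsewhere in the thread comes
from: `lift` marks which layer of the stack an action lives in.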

~~~
chopin
For the list example: I don't get why that loops over the content of the list
(I don't understand this in the OP either). Wouldn't it need to be:

    
    
      getMoreData :: [Data1] -> [Data2]
    

at least? As I understand the OP, wouldn't the function signature for the non-
monadic example be:

    
    
      getMoreData :: Data1 -> [Data2]
    

and the function body fundamentally different for that reason?

------
rattray
Am I the only one who found the "ad-hoc" solutions more clear?

I appreciate that they make it obvious to the reader which things are loops,
which are null-checking, which are async, etc.

I'm sure the unified do-notation is very satisfying to write, but it seems
confusing to read.

~~~
jimbokun
One of the few empirical findings on programmer productivity is that
programmers produce about the same number of working lines of code per day
regardless of the language, and that the ratio of bugs to lines of code is
relatively constant across languages.

So expressing the exact same behavior in fewer lines of code directly leads to
higher productivity, and fewer bugs for equivalent functionality.

In other words, there is a concrete cost associated with overly verbose
programming language constructs.

~~~
rattray
source?

If that's actually true, we should all be using CoffeeScript:

[http://redmonk.com/dberkholz/2013/03/25/programming-
language...](http://redmonk.com/dberkholz/2013/03/25/programming-languages-
ranked-by-expressiveness/)

According to that analysis, it's more expressive than Haskell, Scala or
Clojure (all of which otherwise rate quite well).

Regardless, the "ad-hoc" solutions are all about the same LOC as the do-
notation syntax, so I'm not sure the LOC argument is relevant here.

I would assert that bugginess and readability vary across coding styles, which
is ultimately the question at hand, not language (i.e., you can write
comprehensions in Haskell too).

------
hoprocker
Discussions like this are always helpful for people with more experience in
dynamic languages. My first experiences with Monads, like the one in
LYAHFGG[0], left me with the sense that Monads were a tool primarily used in
statically-typed languages. After much searching, I stumbled across Tom
Stuart's _Refactoring Ruby With Monads_[1] talk, which helped describe Monads
as just another useful programming abstraction independent of the language.
(Jim Duey's _Monads in Clojure_[2] also helped.) Monads now make sense to me
as a general-purpose programming paradigm, like recursion or immutable data
structures.

[0] [http://learnyouahaskell.com/a-fistful-of-
monads](http://learnyouahaskell.com/a-fistful-of-monads) [1]
[http://codon.com/refactoring-ruby-with-monads](http://codon.com/refactoring-
ruby-with-monads) [2] [http://www.clojure.net/2012/02/02/Monads-in-
Clojure/](http://www.clojure.net/2012/02/02/Monads-in-Clojure/)

~~~
moocowtruck
not sure if you've seen this clojure lib
[http://funcool.github.io/cats/latest/](http://funcool.github.io/cats/latest/)

------
bitL
The elegance of this solution IMO comes to this:

\- you mastered the art of reading math papers, love seeing f: XxYxZ -> D
notations and enjoy (partially) composing functions in papers you write. You
are on academic track for PhD or higher. It's your natural habitat; ideas are
always more important than current mediocre reality; real-world is just a side
effect. You are going to love monads. You'd treat monad transformer hell or
other warts as necessary and unproblematic.

\- you went into programming by observing real-world, how each action modifies
environment and how interactions affect each other. You saw a great analogy to
those when you started with your first language such as
Python/Basic/Java/Pascal/etc. Then you started to optimize runtime more and
more and now can't write a line of code without addressing all kinds of
performance and consistency issues. You are likely going to hate monads if you
ever manage to understand them.

~~~
pwm
This is very black and white thinking. I started with imperative languages,
loved them. Then I learnt some FP theory and Haskell, loved it. I now use both
and what's most important imho is that knowing both made me a better dev
overall.

------
bunderbunder
This post calls attention to my biggest source of discomfort with Haskell's
approach to monads: They end up hiding so much under the covers that
everything starts to look the same. The author leaned on this particularly
hard for rhetorical effect in the article, but it's not always so different in
the real world. It really harms readability. I've noticed, for example, that
it's not unheard of for someone to come asking for help with a do-block's
behavior, and the first response they get is someone having to ask what monad
that do-block is using.

F#'s approach (which it calls computation expressions) makes a small tweak
that makes all the difference in the world - instead of always using the word
"do", you start a block with the name of the monad you want.

So instead of

    
    
      do { ... }
      do { ... }
      do { ... }
    

everywhere, you get:

    
    
      maybe { ... }
      seq { ... }
      async { ... }

~~~
tome
Firstly, that's almost never a problem as you almost always have a type
signature to hand.

Secondly, F#'s approach is not a small tweak. It's forced on them by a lack of
higher-kinded types.

Thirdly, if it really bothers you, use TypeApplications to write

    
    
        with @Maybe $ do { ... }
        with @[] $ do { ... }
        with @Cont $ do { ... }

~~~
dllthomas
I assume `with :: forall m a. m a -> m a; with = id` - is that defined
somewhere?

~~~
tome
No, I just made it up but if enough people shared the OP's feelings it _would_
be defined somewhere. The fact that it doesn't exist is evidence that it's not
needed.

~~~
dllthomas
A fair point, but weakened somewhat by the fact that type applications are as
new as they are.

------
pgt
Clojure has some conditional threading macros like `some->` and `cond->` that
are useful for transforming data conditionally.

`cond->` takes pairs of tests and forms where a form is threaded through the
subsequent forms if the test expression is true.

Example from documentation
([https://clojuredocs.org/clojure.core/cond-%3E](https://clojuredocs.org/clojure.core/cond-%3E)):

    
    
      (cond-> 1          ; we start with 1
          true inc       ; the condition is true so (inc 1) => 2
          false (* 42)   ; the condition is false so the operation is skipped
          (= 2 2) (* 3)) ; (= 2 2) is true so (* 2 3) => 6 
      ;;=> 6
      ;; notice that the threaded value gets used in 
      ;; only the form  and not the test part of the clause.

~~~
justinhj
This is conceptually different to a Monad in Haskell. Monad bind operations
have short circuit failure, so in the example above all of the operations
would have to succeed or none of them would.

cond-> looks like a useful function, but you have to hold the state in your
head at each point to debug it. What happens if two steps change the type from
int to string and the following step requires the string, but the previous
step fails so you get an int? Seems it would be very prone to runtime
surprises.

~~~
networked
_cond->_ doesn't, but _some->_ short circuits on _nil_. Similarly, _with_
expressions in Elixir short circuit on a failed pattern match.

[https://clojuredocs.org/clojure.core/some-%3E](https://clojuredocs.org/clojure.core/some-%3E)

[https://hexdocs.pm/elixir/Kernel.SpecialForms.html#with/1](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#with/1)

------
neonblack
My one gripe is that the author doesn't exactly explain what the problem with
those ad-hoc solutions is. I lied, I also have the gripe that the author
doesn't talk about the non-composition of monads or tease monad transformers.

------
saurik
Deterministic finalization is another example, with "ad hoc solutions"
including manually using a finally statement, using an interface and custom
statements (such as the with statement from Python and the using statement in
C#, or now the try-with-resources statement from recent Java), or syntax like
stack allocation (as in C++). And of course error handling in general (not
just null value checking), with exceptions being the typical "ad hoc
solution". Any time you have pieces of code going together in some boiler-
plate fashion, monads are pretty epic, and it is incredibly depressing to see
people not really learning this lesson (such as in Rust, which went with all
of crazy operators, macros everywhere, and "just do it manually as we don't
believe in exceptions but also don't fully understand monads" as their "ad hoc
solution" to error handling, when what the language really needs is do syntax
to go along with its attempt to rely on the Either monad for all of its
errors: monads are largely interesting because of do syntax, which is the
payoff for the common abstraction).

~~~
pjmlp
Usually when someone complains to me that GC'd FP languages lack deterministic
finalization, I end up explaining how to use higher-order functions to achieve
it.

The problem with _with_, _using_ or similar approaches is that they grew with
the language and the majority of them don't allow for trailing closures.

More modern languages like Swift or Kotlin make it much nicer to achieve that,
making something like _using_ a simple function call.
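
In Haskell, the higher-order-function version of _using_ is `bracket` from
`Control.Exception`. A small sketch (the logging "resource" here is just an
illustration, not a real handle):

```haskell
import Control.Exception (bracket)
import Data.IORef (modifyIORef, newIORef, readIORef)

-- A toy "resource": acquire/use/release steps are logged into an IORef
-- so the deterministic ordering is observable.
runDemo :: IO [String]
runDemo = do
  logRef <- newIORef []
  let note s = modifyIORef logRef (++ [s])
  bracket (note "open")        -- acquire
          (\_ -> note "close") -- release: runs even if the body throws
          (\_ -> note "use")   -- the trailing closure doing the work
  readIORef logRef

main :: IO ()
main = runDemo >>= print  -- prints ["open","use","close"]
```

The release action runs in a deterministic order, exception or not, which is
the point being made about call stacks as arenas.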

~~~
junke
I don't get how higher order functions help with deterministic finalization in
the case of closures, which typically have an unbounded lifetime. How, and
when, is the resource captured by a closure disposed of?

~~~
pjmlp
When the closure itself is disposed of, depends on the GC algorithm (RC,
tracing, parallel...).

My point is about other kinds of resources, namely files, database
connections, IPC channels, sockets, native memory buffers, transactions and so
forth.

The call stack of higher order functions can be used as an arena.

------
nacc
I am still pretty new learning haskell, and find this very helpful. Not
necessarily agree whether monadic solution is prefered, but I like how it
shows how general monad is: it's hard to grasp the generalization from only a
couple of simple examples normally found in any haskell book. I hope there are
more articles like this showing how these abstract concepts can be applied in
somewhat unexpected places.

------
bsenftner
How is this not just an architecture that ensures the return
value/struct/object is a 'context object' that can be passed as input, where
the architecture is arranged for as many functions as possible to pass and
receive these standardized parameter contexts? This is just function chaining,
a la jQuery, right? Or where is this assessment not right?

~~~
pwm
It kind of is, but this architecture has mathematically sound properties and
is built in the language so people don't have to reinvent it in an ad-hoc way
all the time. Your context object would be a Functor, types are employed to
ensure that functions work with these and chaining of such functions is
possible via Monads.

------
maxxxxx
How is error handling done if something goes wrong in one of the functions?

~~~
js8
It depends. In Haskell, functions do not fail in the traditional sense - they
just return an indication of failure. So the return type of a function can be,
for instance, the Either type (as someone already noted), which lets you have
either a proper value or an error value, one or the other.

And Either is also a monad. Typically, you would derive your own monad by
somehow combining the two monads (for instance using a monad transformer) that
would have a behavior you want.

In ordinary languages, the error handling is done ad hoc, in each situation
the semantics can subtly differ. You can do that too but then you have to
explicitly handle the fact that functions return Either in the function body
(this would be somewhat similar to checked exceptions). Or you can define your
own monad that prescribes the correct semantics of interaction between the
monads.
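
A minimal sketch of that (hypothetical `parseAge`/`checkAdult` validators;
errors are Left values and the do-block stops at the first one):

```haskell
-- Hypothetical validators reporting errors as Left values.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not a number: " ++ s)

checkAdult :: Int -> Either String Int
checkAdult n = if n >= 18 then Right n else Left "too young"

-- do-notation in Either stops at the first Left, like an early return.
admit :: String -> Either String Int
admit s = do
  n <- parseAge s
  checkAdult n
```

The caller then pattern matches once on the final Either, instead of checking
after every intermediate step.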

~~~
gmfawcett
> In Haskell, functions do not fail in traditional sense - they just return
> indication of failure.

That's too general a statement. Haskell functions certainly can fail in the
traditional sense. As a trivial example, "head []" is a well-typed Haskell
expression that will raise an exception when evaluated; if the exception is
not caught (using a traditional exception handler), then it propagates to an
error, and the program exits.
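
A small demonstration, using the standard `Control.Exception` API from base:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- "head []" is well typed, but forcing it raises a runtime exception,
-- which can be caught with an ordinary exception handler.
main :: IO ()
main = do
  r <- try (evaluate (head ([] :: [Int]))) :: IO (Either SomeException Int)
  case r of
    Left e  -> putStrLn ("caught: " ++ show e)
    Right n -> print n
```

Without the `try`, the exception would propagate and terminate the program.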

This article [1] about the many ways to handle errors in Haskell is a bit old,
but as far as I know, all his points are still relevant. The conclusion (for
me) is that if modern Haskell tends to use return-values on failure, it's due
to convention (and good taste) among developers, and not due to a design
constraint of the language.

[1] [http://www.randomhacks.net/2007/03/10/haskell-8-ways-to-
repo...](http://www.randomhacks.net/2007/03/10/haskell-8-ways-to-report-
errors/)

------
jeffreyrogers
I like the ad-hoc solutions. They correspond more closely to what your code is
actually doing. I used to work on a Scala code base that heavily used a
functional style with monads, custom operators, and elaborate type
hierarchies.

At the micro level these things seemed great, they removed a lot of
"boilerplate", but when you zoomed out and tried to figure out how larger
parts of the application worked it quickly became overwhelming because these
language features made understanding what the program was actually doing much
harder.

------
westoncb
Hmm, maybe I'm missing something, but I lost interest after seeing that pretty
much all the 'ad hoc' solutions listed (similar non-monad based solutions is
the idea I think) were nearly identical to the monadic solutions, except some
syntactic differences. As someone who doesn't care that much about little
syntax nuances, the motivation to look further here escapes me.

------
bitL
It's more like escaping one hell into another. Monads introduce brutal
conceptual complexity and only a few manage to master them, usually at the top
of intelligence range in the math spectrum. It's unlikely this is going to be
a solution for average or even many advanced developers who won't be able to
comprehend it.

~~~
rossng
> usually at the top of intelligence range in the math spectrum

As someone who is definitely not at the top of the `math intelligence
spectrum' but does have a reasonable understanding of monads (in a practical,
not category theory sense), I can promise you that this isn't the case.

Yes, it takes a while to get your head around them - but this is mostly due to
poor teaching materials. I wouldn't class them as significantly harder than
many of the other concepts required to become a good programmer.

This isn't to say that monadic solutions are an answer to all problems - but
they're interesting to learn about.

------
_pmf_
Here's a little secret: in Java, try + generic catch is my error handling
monad.

~~~
beojan
They're not nearly as good as an Either type, but if you don't have do
notation, exceptions are much, _much_, easier to use than sum types for error
handling.

~~~
np_tedious
Yep. If you want to be really broad, even Go's (val, err) is effectively a
sum type, even though tuples are more literally product types, since it's
generally one or the other that has a meaningful value, not both.

But it sure as hell isn't easy since there is no do syntax or
flatMap/selectMany/whatever to handle them nicely.

------
iskander
This article is more religious than technical. You're meant to be inspired by
the elegance of it all, not necessarily to understand much about the use of
monads in practice.

------
leshow
I love this blog, monoid morphisms was very eye-opening for me.

------
beeforpork
So the monad style completely hides what is going on. That all the solutions
look exactly the same impairs the readability of the code.

~~~
tytytytytytytyt
Isn't that the point of encapsulation?

------
catnaroek
> In this post we’ll have a look at some instances of such situations, their
> “ad hoc” solutions provided at the language level, and finally at how these
> problems can be solved in a uniform way using Monads.

I wonder if the blog post's author is actually aware that the original
application of monads to programming languages was _precisely_ to give a
denotational semantics to strict higher-order languages. In other words, you
don't “add monads” to JavaScript. A monad, perhaps an entire monad transformer
stack, is already hardwired into its semantics.

Now, I'm aware that the author says at the end:

> We’ve had a look at this from a syntactic perspective.

But monads are about semantics, not syntax. Of course, if you want uniform
syntax for using monads, then _say just that_.

\---

> For-loop Hell occurs when iteration through multiple dependent data sets is
> needed. Just as for null-checking, our code becomes deeply nested, with a
> lot of syntactic clutter and needless bookkeeping.

You know what's a simple, clean, straightforward solution to “for-loop hell”?
Helper procedures. It baffles me why people always reach for the bazooka when
a fly swatter will do.

> Both the State and ST monads bound the lifetime of a stateful computation,
> ensuring that programs remain easily reasoned about in the general case.

Yet neither can express the following very useful pattern:

    
    
        allocate A
        allocate B; use A B; deallocate A
        allocate C; use B C; deallocate B
        ...
        allocate Z; use Y Z; deallocate Y
        deallocate Z
    

> The `ST` Monad provides more performant state with references, at the cost
> of some higher order behaviour.

`ST` is not about “performance”. It's about semantics. With `State`, you have
to know beforehand the state type, and you cannot change it in the middle of
the computation. With `ST`, you can allocate whatever you want in the middle
of a computation, and it doesn't have to be reflected in the computation's
type.
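
For instance, a standard `runST` sketch: the mutable cell is allocated in the
middle of the computation, and its existence never shows up in the function's
type the way a `State` type parameter would.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef, newSTRef, readSTRef)

-- A pure function whose implementation allocates a mutable cell on the fly;
-- no state type appears in the signature, unlike with State.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0                 -- allocated mid-computation
  mapM_ (modifySTRef ref . (+)) xs  -- accumulate into the cell
  readSTRef ref
```

`runST`'s rank-2 type is what keeps the reference from escaping, making the
whole thing observably pure.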

~~~
tathougies
you can use monads to do your last example....

~~~
catnaroek
No, you need proper substructural types. Not even monadic regions will help.

~~~
tathougies
You do not need a type system to use monads. In particular, your example could
be written (in a statically typed Haskell-like language) using indexed monads.
In a dynamically typed language, you wouldn't need that.

~~~
catnaroek
Of course, you don't need types to write programs. Types are just a way to
mechanically verify some desirable properties. But, as it turns out, resource
management in large programs is pretty damned hard to get right without this
kind of mechanical assistance. So, of course, when I say “it's inexpressible
without <type system feature>”, what I mean is “it's impossible to
mechanically verify it without <type system feature>”.

~~~
tathougies
Monads have nothing to do with verification, so I'm now unsure what your
original point was.

It's not surprising that you can't use a banana to edit text, so it's not
surprising you can't use the general idea of monad to do verification.

~~~
catnaroek
> Monads have nothing to do with verification, so I'm now unsure what your
> original point was.

Indeed, monads are about language semantics, prior to any verification
attempt. Have you actually tried reading my original comment?

~~~
tathougies
Your original comment was about language semantics. You then provided an
example of something you claimed could not be done monadically. I pointed out
that you could implement such a thing with "indexed monads", which are a
subclass of monads (although not one that's easily used in any current
programming language, Haskell included).

You then claimed that in order to implement the memory allocation example you
would need substructural types, which is not the case. Then you claimed that
you needed the type system feature to verify the memory was allocated/released
correctly.

You are correct that you need an advanced type system and type checker to do
the memory validation. However, the validator tool only needs to know about
one monad (your resource one). This one monad is a small subset in the very
large space of 'things that form a monad'.

It's like if I said, 'groups are about arithmetic'. While true that many
arithmetic operations can be expressed as a group, groups are not inherently
about arithmetic. That would be conflating common use cases with the
generalization.

~~~
catnaroek
> You then provided an example of something you claimed could not be done
> monadically.

Using a _monad_, i.e., an endofunctor with unit and join that satisfies the
well-known laws, you cannot do it.

> I pointed out that you could implement such a thing with "indexed monads",
> which are a subclass of monads (although not one that's easily used in any
> current programming language, Haskell included).

Is a so-called “indexed monad” really a monad, now?

> You then claimed that in order to implement the memory allocation example
> you would need substructural types, which is not the case.

It is the case. What else could be the point to the threaded typestate in the
definition of so-called “indexed monads”? Moreover, even with the threaded
typestate, so-called “indexed monads” may fail to achieve the desired goal of
manipulating resources linearly, e.g. the “indexed list monad” provides no
such guarantees.

------
vesak
> Monads can help solve certain classes of problems in a uniform way.

You solve 3 completely different problems using exactly the same code and have
the gall to call this less complicated? I'm sometimes forced to wonder whether
functional programmers have any idea how to create software.

If this was a joke I didn't understand, I apologise for my thickness.

~~~
greydius
It's called abstracting. The problems fit the Monad interface. Details are in
the implementations.

