Escaping Hell with Monads (2017) (philipnilsson.github.io)
261 points by networked 12 months ago | 213 comments



What may not be obvious here is that the Haskell function in each of these situations is the exact same generic function. The flow of data is abstracted via the Monad interface, so that it becomes possible to write code that works generically regardless of whether parallelism, IO, databases or whatever else is involved. This makes testing such functions quite pleasant.
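To make that concrete, here's a sketch of what one such generic function can look like (the names here are placeholders, not the article's actual code):

    pipeline :: Monad m => m Int -> (Int -> m Int) -> m Int
    pipeline getData getMoreData = do
      a <- getData
      b <- getMoreData a
      pure (a + b)

    -- pipeline (Just 1) (\x -> Just (x + 1))  ==>  Just 3
    -- pipeline [1, 2]   (\x -> [x, x * 10])   ==>  [2,11,4,22]
The body never mentions Maybe or lists; the instance chosen at the call site decides whether the chain short-circuits on Nothing or fans out over every combination.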


But is writing such functions, and all the auxiliaries, pleasant as well? What type would I have to give the "getData" function to work in all these examples? What if I just want to call an existing getData function that returns a Maybe, not some generic monad thing? Would I have to tweak the syntax for all the different monads I want to call it from?

Also, what would the print call really look like in actual code? This post talks about composing IO with other monads. I've never seen a monad composition example that wasn't peppered with calls to unintuitively named functions like liftM_.

The pseudocode looks nice, but if the pseudocode isn't a faithful representation of actual use, then it's not very useful.


We have a library called `mtl` that addresses both abstracting over specific monads (to get that polymorphism) and doing all the `lift`ing ahead of time. You should find some reading material on it; it's really quite a nice solution!

I’m on mobile, otherwise this would’ve been an answer instead of a comment.


Writing abstract functions is usually easier than writing concrete functions, because there are fewer ways things can fit together that look anything like correct.

(Abstract in this way, in this kind of context...)


> What if I just want to call an existing getData function that returns a Maybe, not some generic monad thing?

Maybe is a Monad. In Haskell, Monad is sort of like an interface (called typeclass) that a bunch of types implement. Maybe has an instance of Monad already, same with lists, and continuations, etc.
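For reference, the Maybe instance is tiny; roughly this (with return/pure omitted):

    instance Monad Maybe where
      Just x  >>= f = f x
      Nothing >>= _ = Nothing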


You don't necessarily need to implement each function perfectly generically. You maybe could, but that's not the point here. The benefit, as I see it, is that the main method doing all of these things is clear in its purpose regardless of which monad is being used.

It's moving code more towards "this is what I want" rather than "here's how to do what I want".


The most common way you generate such generic functions is by using higher order functions like fmap etc, not by directly coding them using do notation, though it is nice to have when you do need it.

In languages that have no access to the monad abstraction you'll still need a way to replace your database access with something that you can use during your testing. A monadic database interface won't save you from having to check that your application doesn't do silly things with the database as a side-effect, but it will make it much easier to test that your actual data processing is correct.
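One common way to set that up is to code against a small interface and swap implementations in tests. A sketch (the class and function names here are made up for illustration):

    class Monad m => MonadDB m where
      lookupUser :: Int -> m (Maybe String)

    greeting :: MonadDB m => Int -> m String
    greeting uid = do
      user <- lookupUser uid
      pure (maybe "hello, stranger" ("hello, " ++) user)
In production an IO-backed newtype implements MonadDB against the real database; in tests a pure instance over an in-memory map does, so `greeting` itself never needs any mocking machinery.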


Maybe is already a monad.


Yes. That would allow me to use a getData returning a Maybe in the Maybe monad. But I wouldn't be able to use it in the list monad, or the others mentioned in the article.


Well, yes.

You will need to convert your type into something else to use in another monad.

As an example, it's common to track failure with an `Either Text` monad, but functions that can fail in only a single way usually have a `Maybe a` return type. That means you will probably have a `Text -> Maybe a -> Either Text a` function around and write code like this:

    toEither "Error message" $ functionInMaybe a b


You could use it in code that's operating in the list monad, you just wouldn't be able to stitch the computation in using the monad interface. But you could run it and get a Maybe back, and look at the Maybe, and turn it into something you can stitch in. `maybe (fail "got back a Nothing") pure` is probably the most generic way, but it may or may not be what you want.
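Written out as a helper, that might look like this (a sketch; `stitch` is just an illustrative name, and it works for any MonadFail instance):

    stitch :: MonadFail m => Maybe a -> m a
    stitch = maybe (fail "got back a Nothing") pure

    -- stitch (Just 3) :: [Int]  ==>  [3]
    -- stitch Nothing  :: [Int]  ==>  []   (fail for lists is the empty list)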


You could use the getData function just fine from anywhere. It's just that you wouldn't be able to use both the list monad instance and the maybe monad instance at the same time without a library like mtl.


As I read it, the question was "I have `... -> Maybe a` and I want to use it in a context where I am chaining together some unspecified monad. Will that be awkward?"

And the monad interface doesn't help us with that. But the answer is that you have a Maybe, you can do any Maybe thing with it.


Therein lies the problem. A given piece of code looks the same for any possible monad, but it only provides useful functionality for a specific monad. The reader of the code is left wondering _for what benefit_ the code was written, and may waste some time instantiating it over the wrong concrete monad[s], until it finally clicks.

Secondarily, I wish “monads are cool” tutorials would pay some attention to how the do syntax interacts with the rest of the language, and especially with conditionals. For example, assuming a function that computes a value and also collects some side effects, how does one turn the pure variant into a monadic variant? This is trivial with a mutable collection, but a laborious exercise in monad world.


Hmm, would you be willing to share with us what the type signatures (and ideally implementations) of these `getData` functions might be?

Those unfamiliar with Haskell might have a hard time visualizing.


Brilliant article. The best way to explain a strange feature like a monad is to list out programming situations to which monads are a good solution.

In the same vein, I finally understood what Lisp macros are for, from a similar article explaining in what programmatic situation you can benefit from writing a macro: [1]

[1] http://www.defmacro.org/ramblings/lisp.html


I don't get it. The article doesn't explain anything. It's just a list of statements. "This is bad, that is good." But it doesn't explain why the bad things are bad, and how using monads is better. It also doesn't explain what monads are. "We note that lists are Monads." Why? How did you come to that conclusion?

My guess is that all the people praising this article already "got monads". But for the unenlightened, it doesn't do anything.


> The article doesn't explain anything.

Sure it does, it provides a pragmatic rather than abstract answer to the question “what are monads”.

> It's just a list of statements. "This is bad, that is good." But it doesn't explain why the bad things are bad, and how using monads is better.

As I see it, that's because it's based on targeting an audience in which:

(1) it is widely accepted that the “Hell” things are bad, and

(2) it is widely accepted that simple, general solutions to broad classes of problems are better than myriad special solutions to narrow subsets.

> It also doesn't explain what monads are.

Not in the abstract sense; otoh, the whole piece is a pragmatic explanation of what monads are: the common solution to an array of problems for which a series of examples is provided.


Hmm, if I had to guess, I'd say you were in the camp of people who already "got" Monads...


> Hmm, if I had to guess, I'd say you were in the camp of people who already "got" Monads...

On the level the article seeks to explain? Yes, sure.

On the abstract level lots of other monad resources try to convey? Not particularly.


"My guess is that all the people praising this article already "got monads"."

You are correct. This is an advocacy piece for using Monads to solve common programming problems.

However, I believe this article can still be useful for people who do not yet "get monads". Much of the prose written on Monads never bothers to first address "Why should I care? What problem that I already have can Monads help me solve?"

This shows multiple concrete examples where Monads can improve code clarity and readability. So now you can make an informed decision about whether you want to learn more.

Of course, if you dislike code clarity (and many programmers fall into this category), then Monads may not be right for you.


I don't get it either. These are probably toy examples written to motivate a specific solution. Note that the semantics change between versions. In some versions, erroring out will not print anything, in others it will.

If this is Java, why not use exceptions or @NonNull? If this is C++, why not use exceptions?

I'll note that for loops and if statements are fundamental constructs that many people don't really understand. I've seen a lot of bad code written with if statements and for loops. If these fundamental constructs can be misused to create bad code, how much more likely is it that more abstract constructs like list comprehensions, futures, and monads will be used to create bad code?


> Note that the semantics change between versions.

Because each one describes a different type's implementation of the monad interface. The blog post is just trying to illustrate:

"all these problems have the same interface. If we have a general solution to a problem, why use a different ad hoc solution for each?"


I think something was missed, and maybe I missed it. It looks to me like this:

    var a = getData();
    if (a != null) {
      var b = getMoreData(a);
      if (b != null) {
         var c = getMoreData(b);
         if (c != null) {
            var d = getEvenMoreData(a, c)
            if (d != null) {
              print(d);
            }
         }
      }
    }
will only print on non-null, whereas this:

    do
      a <- getData
      b <- getMoreData a
      c <- getMoreData b
      d <- getEvenMoreData a c
      print d
will always print something.


It won't. If any of the functions on the right-hand side of the '<-' return Nothing, then the later lines do nothing. Think of the '<-' operator as a 'flatMap' call for Optionals in Java, if you are more familiar with that.

This particular syntax for chaining together monads is called 'do notation', and is just syntax sugar for calling the bind function (flatMap in Java) manually.


>will always print something.

Not necessarily. It depends on which monad is being used here (the article doesn't specify). If you replace the 'print' with 'return' to simplify things, then this could be an expression in the Maybe monad, which would return either (Just d) or Nothing.

The article is lying a bit by suggesting that it's trivial to layer short-circuit-on-error behavior on top of the IO monad. It's possible for sure, but some relatively subtle issues can arise (in Haskell at least).


This is what I always hated with C++, and to a lesser extent Java. The code that you cannot see makes it very difficult to understand the code when reading the source. If the only person that can truly understand the code is the author, that is an anti-pattern.

The for loops and if statements were never a problem, it was the default constructors, overloaded operators, and the like that always caused confusion on my teams.


In simple cases you'd be able to see the concrete type if this were real Haskell code, so you'd know which monad instance was the relevant one. Your complaint does apply to some uses of the monad transformer library, however.


The two blocks of code are exactly equivalent, the "if != null" logic is moved into the <-.


And the first snippet is purposely badly written. One can use early returns to make the code flat, just like the monadic example.


Sure, you can flatten that example, but I'd still be interested in losing the explicit and repetitive null checks.


In the old days, we used #define macros to do that.


"If these fundamental constructs can be misused to create bad code, how much likely is it that more abstract constructs like list comprehensions, futures, and monads will be used to create bad code?"

If your developers are consistently writing bad code, fire the developers. They will write bad code in any and every language.


That assumes people are always getting better yet always producing. Things are, and always will be, short of the ideal.


This is a really good way of explaining monads. Generally, when people ask about monads, they're not really asking 'what' monads are - they're asking 'why' and 'how' you use them.

The advantage shown here is that Haskell has special syntax for monads, and this single abstraction can be used for multiple features that many other languages have specialised syntax for.


That can also be seen as a disadvantage. There is value to having different things look different.


I remember a great quote by Bjarne Stroustrup along the lines of "people want big, loud syntax for things they are unfamiliar with and small, quiet syntax for things they are familiar with." It made me realize how arbitrary some of my opinions on language design were/are.

On the posted code, I'm totally OK with the Maybe monad, but the State monad makes me a little uneasy. If the state is an important part of the code, I want to see it! But would I feel the same way if I were more familiar with the State monad?


The difference is given by the code context (which is unfortunately entirely missing from the article). What’s shown here are the things all these patterns have in common. The whole point of abstraction is to have different, but related, things look the same.

Haskell (and these examples) certainly raise the level of abstraction to extremes (compared to other languages). But as long as the context (explicit type annotations, if necessary) make it clear what the code does, I don’t see this (even conceivably) as a disadvantage.


> But as long as the context (explicit type annotations, if necessary) make it clear what the code does, I don’t see this (even conceivably) as a disadvantage.

I spent 3 years trying to learn Haskell, and I never found this clear. When you see a "do" block, where do you look to figure out what it's actually doing?


I work on a largeish commercial Haskell codebase and I never have to look anywhere to figure it out—I know what the do block is doing because I know what function it's in and what the return type is. With a bit of experience I started keeping track of this context subconsciously so the question just doesn't come up.

Haskell code tends to rely on context quite a bit. This does make it more difficult to learn, but once you're over the hump, context is something you end up understanding automatically.


What makes this confusing for me is that this is backwards from the way expressions work pretty much anywhere else. It should be possible to know the type of a sub-expression without knowing anything about the expression that contains it. That is, start at the leaves, and work your way up. In Haskell, sometimes you have to go the other way. It's like trying to read a novel where some chapters randomly assume you know what happens several chapters later.


> When you see a "do" block, where do you look to figure out what it's actually doing?

If you need to know, you look at what consumes the result. If it's abstract (say it's a top level binding `... -> m a` where the m is abstract) you don't need to know "what it is actually doing" - it should be correct for any choice of `m`.

Whether this stuff is easy to find/follow certainly depends on the quality of the code, as well as your experience. It's not something I struggle with, working day-to-day in Haskell.


> If you need to know, you look at what consumes the result. If it's abstract (say it's a top level binding `... -> m a` where the m is abstract) you don't need to know "what it is actually doing" - it should be correct for any choice of `m`.

That seems to explain a lot. So to make an ad-hoc version of the Elvis Operators section that's actually equivalent to the Haskell code would be more like this?

    ___ a ___ getData();
    ___ b ___ a ___ getMoreData();
    ___ c ___ b ___ getMoreData();
    ___ d ___ c ___ getEvenMoreData(a);
    print(d);


> you don't need to know "what it is actually doing" - it should be correct for any choice of `m`.

I don’t know what it’s doing, but it’s correct. :-/

So what is it doing that’s correct? Or is it some abstract correctness?


It's not abstract correctness, it's satisfying contracts up to an interface.

Take `sequence :: Monad m => [m a] -> m [a]`

That takes a list of actions and gives us an action that runs each in turn and collects the results in order. The correctness relative to that description is clear regardless of whether each input action is threading state or handling failure or printing to the screen.
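For instance, specializing it to two different monads in GHCi:

    Prelude> sequence [Just 1, Just 2]
    Just [1,2]
    Prelude> sequence [Just 1, Nothing]
    Nothing
    Prelude> sequence [print 1, print 2]
    1
    2
    [(),()]
Same function, same contract: one instance threads possible failure, the other performs the IO actions in order.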


Then this clearly doesn’t answer the original question, does it?


Not on its own. Fortunately, it wasn't the entirety of my answer to the original question. When you're writing something like `sequence`, you don't need to know the implementation of the interface you're working against. As you get comfortable in Haskell, more things are "like `sequence`". There will always be things that aren't - for those, as I said, you should look where the value is consumed. And as I neglected to say, you can always add a type annotation when things get unclear.


Ah, the old Java adage: any abstraction can be solved with many more layers of abstraction.


Look at the return type of the function and you can tell what instance it's using; if it's using something that's a type variable, then its instance isn't important, just its interface.


All the ad hoc solutions look at least as clear if not clearer than their monad equivalents.

Also, that article conveniently ignores that as soon as you start mixing monads, you enter a different kind of hell: monad transformer hell.

Because you see, monads don't compose. Not without these transformers. And if you think the monad examples in that article were hard to follow, wait until you see monad transformers in action.

Also, I don't think replacing a `for` loop with a `for` comprehension is that much of a gain. I'll pick a `map`/`filter`/... any day over such `for` loops.


I worked on a Scala system with two rookie Scala programmers who had been taught that `for` expressions were a simple and easy way of composing futures, because it was the monadic way. The code looked very neat, but we had an epidemic of swallowed Failures, and the ones that weren't swallowed were often interpreted incorrectly. (For example, a 500 from service A would be logged as a 500 from service B, because someone miscalculated how the Futures and Trys were being composed.) This was due to people not understanding the for expressions they were writing. Rather, I seemed to be the only one trying to understand the `for` expressions: I often had to mentally (or even manually) translate them into flatmaps and maps according to the language spec to discover that they were not doing what the person who wrote them intended. The other programmers, who had been promised that `for` expressions were obvious and self-explanatory, felt that this should be unnecessary, and that if they were making mistakes, then Scala was failing to live up to its billing, and we should stop using it. It was extremely frustrating. I love Scala, and I feel that promoting `for` expressions to beginners like that was ideologically motivated and bad for the language.

It's much easier to just use map and flatMap. But that won't do, because the virtue of monads is supposedly the abstractions built on top of them.


If you are writing functional Scala you should be using cats and the ecosystem. Futures are bad for this use case and you should use a proper Effect monad like IO. The for comprehensions will come alive, trust me :)


Have you looked at Scalaz's Task? It's used all over Verizon, it solves a lot of these issues that people run into with Futures (on how it handles errors in a lawful way).


What was your solution with the rookies? Keep the for expressions and fix the bugs, or rewrite them more explicitly?


A little bit of both. I kept hoping something would click and I would end up loving the for expressions, but for a few monsters that I could only untangle by translating the code line by line into map, flatMap, and filter, I committed the translations so I would never have to do that again. Neither of the other programmers is writing Scala anymore. I've mostly steered clear of for expressions since then.


`For` expressions are not boolean blind compared to map/flatmap/filter. (Boolean blindness: https://existentialtype.wordpress.com/2011/03/15/boolean-bli...)

That might be an advantage.


While I am a fan of the few monads I'm using in C# (IEnumerable[T] and Maybe[T] among them), I will admit that composing them is still a black art for me. I haven't been able to find any solutions besides "write specific code for each possible case", which unfortunately grows exponential in the number of different monads.


The way I often explain it to a C++ developer is by asking this question: "what if the semi-colon was merely an operator that you can overload?"

Then they sort of start to see the power.


How would you overload that for getting something monadic-like? Is there a simple example (C++ is obviously not my native tongue)?


Imagine each statement in a sequence is a function of the previous state to a subsequent state (+ optionally something extra); and the semicolon is a function of the previous state (+ something extra) and the statement function to the subsequent state (+ something extra).

That's a description of the monad pattern in terms of semicolon overloading.
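Spelled out in Haskell, that description is essentially the State monad's bind (a sketch):

    newtype Stmt s a = Stmt { runStmt :: s -> (a, s) }

    -- the "overloaded semicolon": run the first statement, feed its extra result
    -- to the next one, and thread the state through
    semi :: Stmt s a -> (a -> Stmt s b) -> Stmt s b
    semi (Stmt m) k = Stmt $ \s ->
      let (x, s') = m s
      in  runStmt (k x) s'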


It would be a sort of bind operator, I think. It wouldn't work exactly because you can't distinguish between `>>=` and `>>`.


It would work, it's just incomplete. `;` is `>>`, and `>>=` would be involved in the declaration of local variables.


Well, the (;) is the `bind`. The problem with the question is that it only works on a conceptual level, as C++ does not have type-system support for statements. (i.e., in {a;b}, what's the type of a?)

But it allows them to see how you could build fiber/promises/lightweight_threads support using this concept, which is most of the time what they are interested in anyway.


I never thought about shifting perspective on the semicolon from imperative to functional. I'll have to revisit the idea after I learn functional programming. For now, maybe what you describe could be possible with tools that do C++ metaprogramming in Haskell or Racket like below. I liked what I was reading despite not even using those languages. ;)

http://aszt.inf.elte.hu/~gsd/s/cikkek/abel/haskelltmp.pdf

http://matt.might.net/papers/ballantyne2014metameta.pdf


It could pass statements as functions. Imagine a syntactic transformation from `a; b` to `[&](){a;} >> [&](){b;}` (though of course there are lots of details left unspecified...)


> ie, in {a;b} what's the type of a ?

Well, C++ is entirely run on IO, so `a :: IO a`. It's exactly the first argument of Haskell's `(>>)` when specialized into IO.


Few nits with this:

1. This seems like an article about Haskell's do-notation, not Monads. The article even decries Promises, which _are_ Monads.

2. "Call you language implementor and ask for do-notation today!" isn't very satisfying. I would be curious to read such a proposal for eg; JS, though it would need another name (do-expressions already have a proposal). Meanwhile, it might have been helpful to show how to implement these Monads in a common tongue (eg; JS).

3. The article did not display the bodies (or even type signatures) of the functions at hand. It's unclear to a reader unfamiliar with Haskell whether they are all the same, and what interface they must conform to.

4. The examples listed as "Hell" don't sound like hell to me. They're minor annoyances that I run into occasionally. The code listed as "problematic" isn't blissful to write and doesn't feel elegant, but is universally easy to write and read.

As an aside, it feels like Go and Haskell would be at opposite ends of some spectrum (obvious vs elegant?). In almost any business/production setting, I would choose the former.


>do-expressions already have a proposal

For those who are curious, it's this one: https://github.com/tc39/proposal-do-expressions.


So, the only problem here is none of the examples actually handle the errors. Chains of partial functions, using any language I've ever tried, are easy when you don't have to do anything about the null value. The article demonstrates this in multiple languages. In practice, the hell only starts when you have to send control flow somewhere else or show the user a different response for every failed partial. I haven't used monads intentionally, can they do anything special about the real world case of having to handle every error separately?


People would use something like an Either monad in that case (in real code they may use an Error monad). If all your getData functions have a return type `Either MyError GoodResult` then you could write something like:

  data MyError = MyFirstError | MySecondError

  type GoodResult = String

  myFunc = 
    case doEither of
      Right x -> *handle good result*
      Left MyFirstError -> *handle error*
      Left MySecondError -> *handle error*
    where 
      doEither = do
        a <- getData
        b <- getMoreData a
        c <- getMoreData b
        d <- getEvenMoreData a c
        return d
If one of those functions returns a "Left" (or error) value, the remaining functions are not evaluated. If they all return a "Right" value, doEither evaluates to a successful "Right" value.

This works much better when all of your functions fail in the same way. If some functions fail with an empty Maybe and some fail with an Either and some with Error you will have to wrap all of those to take advantage of do-notation.


> If some functions fail with an empty Maybe and some fail with an Either and some with Error you will have to wrap all of those

While this is true, the wrapping can be quite simple. Turning a Maybe into an Either is just a `maybe (Left MyErr) Right`, and you can of course give it a name to clean it up even more.


That particular function is tied to Maybe and Either. Good luck with that when your functions start returning all types of different monads and welcome to monad transformer hell.


Maybe and Either was the topic of discussion. If you want to discuss a different example, present a different example.

Attempting to speak usefully to vague generality... if your functions are returning different concrete monadic values you need to either run them and get something pure out or translate them into your current context. It's usually better practice, however, to leave the choice of monad up to the call site and only ask for the interfaces you need (MonadState, MonadReader, etc), at which point there's no call-site-local translation needed at all, you just have to make sure you're supporting the interfaces requested.
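A sketch of that style with mtl classes (the function name is made up): the code asks only for the capabilities it needs and leaves the concrete monad to the caller.

    import Control.Monad.Reader (MonadReader, asks)
    import Control.Monad.State  (MonadState, modify)

    recordHit :: (MonadReader String m, MonadState Int m) => m String
    recordHit = do
      name <- asks id    -- read the environment
      modify (+ 1)       -- bump a counter in the state
      pure ("hit from " ++ name)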


Please don't use Either to carry errors, it's really silly to use a language with a strong type system and then rely on convention to represent the error on the correct side of the monad.

Is error left or right again?

That's right, you shouldn't have to remember.

Use a more specific type for this, some GADT with clearly named constructors (e.g. "Error" and "Value").


I agree that would be a good convention to move away from - I don't think it takes a GADT, though; a regular data type should be fine.

For my part I think we should also deprecate the Functor (and dependent) instances for Either and tuple - but I know there are some who strongly disagree and I don't care all that much.


Right is the right result and Left is the wrong result. That's about as easy as it gets when it comes to remembering things. There is already a strong convention for using Either for this purpose and it's built into the standard library, since the Monad instance is designed to allow you to do this.

If you create your own data type for this, you are designing yourself into the Monad transformer hell you mention in another comment.


> That's about as easy it gets when it comes to remembering things

I use a statically typed language so I have fewer things to remember because the compiler does the bookkeeping for me.

`Either` is a generic variant record, it should not be used to carry errors.

And as a general rule, any programming construct that relies on convention instead of static typing is prone to generating bugs.


> `Either` is a generic variant record, it should not be used to carry errors.

The name suggests that, but the Monad instance suggests that Either is an asymmetric data type, where the left and right branches carry different meaning.
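For reference, the instance in base looks roughly like this; Left short-circuits and Right continues, so the asymmetry is baked in:

    instance Monad (Either e) where
      Left e  >>= _ = Left e
      Right x >>= f = f x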

> And as a general rule, any programming construct that relies on convention instead of static typing is prone to generating bugs.

If the Either branches were named `Error` and `Success`, using `Error` to carry errors would be just as much of a convention as using `Left` for that purpose (a more sensible convention, but nonetheless a convention). The only way you'd get around that is if you'd somehow encode in the language what is and what isn't an error and use that to statically force correct branch usage.


To be fair, this is more relying on a pun than it is convention.

I don't think this changes your point, though.


You can. Instead of passing null through the chain you could pass an actual error object.

Check out Elixir's `with` statement, for example: http://openmymind.net/Elixirs-With-Statement/

We do something similar in Ruby with the deterministic gem. Each action returns Success(value) or Failure(error). The error object is one of our own defined error instances (e.g. Responses::Unauthorized.new(msg)). These are then translated to actual error codes and messages in the web layer.

We added something similar to do-notation for Ruby in the deterministic gem. Not very nice, but it makes using deterministic a bit more convenient. https://github.com/pzol/deterministic#chaining-with-in_seque...


I'd like to point out that do-notation in Haskell is syntactic sugar for something nasty.

Let's look at the example:

    do
      a <- getData
      b <- getMoreData a
      c <- getMoreData b
      d <- getEvenMoreData a c
      print d
This desugars to:

    bind getData (\a ->
      bind (getMoreData a) (\b ->
        bind (getMoreData b) (\c ->
          bind (getEvenMoreData a c) (\d ->
            print d))))
Edit: the above code was initially wrongly desugared.

Which is exactly the callback hell criticized in the article.

Sure, monads and do-syntax provide a nice abstraction and are interesting constructs from the perspective of type theory, but in my opinion it's not principally better than Promises or await.


> I'd like to point out that do-notation in Haskell is a syntax sugar for something nasty. [..] Which is exactly the callback hell criticized in the article.

What's even worse - the resulting code gets compiled to machine code. That's usually very ugly, there is a limited number of variables (registers), and you have to juggle them around. So, obviously, it's no better than writing the machine code by hand.

See, that's the purpose of compilers - to take readable abstractions and to turn them into unreadable, but efficient, mess that is actually executed.


`bind` is usually written as `>>=` in Haskell.

Haskell also lets you write simply

    print
instead of

    \d -> print d
I wouldn't consider this part syntactic sugar – if one did, I suppose one would have to say that

    f(g())
in Python is sugar for

    (lambda i: f(i))((lambda: g())())
Here's how the desugared version looks in more idiomatic Haskell:

    getData 
      >>= \a -> getMoreData a 
      >>= getMoreData 
      >>= getEvenMoreData a 
      >>= print
(where I kept the lambda around the first getMoreData since `a` was used in getEvenMoreData)


IMHO callback hell is only called hell because of the syntax. So the solution admired here is mostly the do notation, not monads themselves. (But it’s a great feature that the do notation can be used for a general class of problems represented by monads, not a particular one as await, for example.)


If do-notation is the solution, does that mean that adding syntax sugar for nested functions / promises in JS would solve the problem of callback hell as well? I somehow doubt that.


Yes — but only if, as in Haskell, that syntactic sugar was generalisable. The whole point of monads is that they are a powerful generalisation that can be used to solve many apparently very different problems. Compare this with the syntactic sugar that other languages introduce to solve specific problems (list comprehensions/LINQ, `await`, …): you suddenly get a grammar explosion because you keep expressing the same core concept differently.


> Yes — but only if, as in Haskell, that syntactic sugar was generalisable.

Only if you just need one type of monad. If you need to interleave multiple monads, Haskell's syntax is worse because it makes no visible distinction between them.


There is no sensible interleaving of multiple monads, and any sensible language would prevent you from any kind of implicit interleaving.


In the article there are few monads mentioned: async, nested iteration, maybe, state passing. I can easily imagine sensible mixture of those, and languages with "ad hoc" solutions allow you to mix them.


The distinction is visible in the type.


... you mean like async / await?


If async/await worked for the continuation monad, the typical list monad, and the maybe monad, then yes, async/await would be a good, if somewhat strangely named, construct.


The whole point is that do syntax is a generic solution to an entire class of problems, while async/await is a one off solution to a single problem.


This is probably the most concise and elegant justification for the monad pattern that I've seen. And I've seen many.


I'm not sure why

  do
    a <- getData
    b <- getMoreData a
    c <- getMoreData b
    d <- getEvenMoreData a c
    print d
is better than

  var a = getData();
  var b = a?.getMoreData();
  var c = b?.getMoreData();
  var d = c?.getEvenMoreData(a);
  print(d);
Other than the "var" at the start of each line they look like they take up about the same amount space, and the pattern is near-identical, so they're equally readable.


The point is not that any one example is better than the other, rather it's that the same syntax can be used to uniformly solve multiple different problems (partiality, async, "non-determinism", and state-passing).


One thing this article doesn't really make clear is how someone reading the code can actually determine which of these things is being done. Code that looks the same but magically does something different doesn't seem like an improvement to anyone who wants to be able to maintain their code.

I know enough Haskell to know that the "magic" comes from which instance of Monad is being used, but between the weirdness of overloaded values, the fact that the "do" syntax doesn't really make it obvious which value is the important one, and Haskell programmers' aversion to using parens, I still find this stuff impossibly hard to read. This is largely why I gave up on Haskell after trying to learn it for 3 years.


Well, the `do` syntax makes it patently obvious which is the important value: it is the one being assigned into variables within the block and used as an argument to functions. It hides the not-so-useful one, which is the name of the monad you are using.

You know the kind of "magic" by looking at the type declaration on the line just above that `do`. If there's no type declaration there, I hope the code is simple enough that it is self explanatory, otherwise it's plain bad code.

Things are made more complex when the monad type is generic. But then, this just makes it patently visible that the monad does not matter at all and the code must work as intended on any kind of "magic" you can throw at it.

About the aversion to using parens, matched operators make your code brittle and hard to change. This is not obvious if you have never seen any other way of organizing code, but it is a large effect.


> the `do` syntax makes it patently obvious what is the important value

Given that it seems that the majority of people who attempt to learn Haskell end up giving up, I'd argue that very little about it is "patently obvious".

> it is the one being assigned into variables within the block and used as argument of functions

I think this is part of what confounds me when it comes to Haskell (aside from the terrible syntax and horrible coding conventions): when I'm reading an expression I understand the overall expression by understanding what the sub-expressions do. That is, I can start at the leaves of the expression tree and work my way up to the root.

In Haskell, sometimes this is not possible, because sometimes the type of the subexpression is constrained by something higher up in the AST. This leaves me not knowing where to start with trying to decipher the type of an expression.

> About the aversion to using parens, matched operators make your code brittle and hard to change. This is not obvious if you have never seen any other way of organizing code, but it is a large effect.

What are "matched operators"?

The aversion to using parens means that I can't even parse code unless I know the fixity rules of all of the operators in an expression. I'm sure you'll say this is easy to pick up after a while, but that has not been my experience.


> In Haskell, sometimes this is not possible, because sometimes the type of the subexpression is constrained by something higher up in the AST.

The point is that this constraint always has the exact same format. It may have different semantics, but Haskell makes you abstract over those to get at the operational function, while the compiler verifies that anything that isn't a local operation fits your overall structure.

Really, learning Haskell is the process of learning to abstract semantics away from your code. It's a hard process, it's not something natural.

> What are "matched operators"?

Those are operators that require a matching counterpart: usually parentheses, square brackets and braces. Haskell has the unmatched operators `.` and `$` that people use instead, and to read code you do have to understand their precedence. You can get by without knowing the fixity of most operators, but those two are a must.


> > In Haskell, sometimes this is not possible, because sometimes the type of the subexpression is constrained by something higher up in the AST.

> The point is that this constraint always have the exact same format.

I'm not sure how to even parse that sentence. Let me phrase it as a question: when trying to understand a given piece of code, how can I determine the type of an expression when its type is sometimes determined by its sub-expressions (as in virtually any other typed language), but sometimes determined by what consumes it?

The way people deal with complex things is by decomposing them into smaller parts, understanding those, and then composing that understanding to grasp the whole. It seems that in Haskell you are required to understand the whole thing at once, because you can't reliably determine the type of a sub-expression out of context.

> > What are "matched operators"?

> Those are operators that require a matching. Usually parenthesis, square brackets and braces.

And how do "matched operators" make code more brittle and harder to maintain?

I think the opposite is true: to maintain code you must first be able to read it, and what you call "matched operators" are far superior for readability compared to having to remember fixity rules. Even after 3 years of dealing with Haskell I found myself constantly misreading code because I didn't get the fixity right.


Could you please give a concrete example? The majority of Haskell expressions have what is called a "principal type". The type of such an expression is independent of its subexpressions or super-expressions.


A concrete example of what?


A concrete example of an expression that you have difficulty determining the type of.


It's been a few years since I gave up on Haskell, but I remember anything involving `do` (especially containing a `return`) or `sequence` seemed to require a lot of guesswork about what type was involved. Pretty much anything where the type of a subexpression would be an overloaded type has a high potential for being confusing.

Another example of what I'm talking about is the fact that even the expression "5 == 5" gives an error unless the kludgy type defaults are enabled. The fact that they felt the need to add type defaults and MR to the language is a pretty strong sign that they screwed up the usability of the type system.


Haskellers would tell you that what they appreciate about this is that you don't have to understand sequence and return! They do the "same thing" for all monads in the sense that they obey well-defined laws.

"5 == 5" is a different issue, and ironically is ambiguous because it's so simple. Most practical arithmetic expressions would have some way of disambiguating the type.


There are two answers I can think of:

1. You know what the code is doing to the extent that you know how monads work. As in, you know that you’re going to do some monadic action that returns a, then another one that returns b, etc. The code as written is generic so it’s not specified what the actions are.

2. And this is the much more common case; the code in question is being used within a particular context (I.e. for a particular monad that the programmer is working with) and the context makes it obvious what is actually being done by the monadic actions. Here the genericness is not in writing a function that can be used in multiple places, but in having a portable concept of “a pipeline of actions that return things and could fail” which you can use wherever you need it.


Type at point in an IDE helps. You can also use type holes and let the compiler infer types that you can use as a guide to fill in.
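A small example of a typed hole: leave an underscore-prefixed name where you're stuck and GHC tells you the type it needs there (message paraphrased):

    f :: Maybe Int
    f = do
      xs <- Just "hello"
      _todo xs
    -- GHC: Found hole: _todo :: String -> Maybe Int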


Yeah, I feel like a powerful enough IDE could probably make Haskell usable. Something that made the types obvious, and also the nesting of subexpressions. Despite trying for 3 years, I never felt like I could even reliably parse Haskell code in my head (too many fixity rules, not enough parens) let alone infer the types of things.


What resources did you use?


What kind of resource do you mean?

It's been a few years now. I know I read LYAH, and I think also one of the O'Reilly books. I used GHC and vim+syntastic. I used Hoogle for looking stuff up. I asked lots of questions on Stack Overflow, Reddit, and one of the IRC channels. Even after 3 years, everything with Haskell felt like rolling a boulder up a hill. The only reason I'd persisted as long as I did was because the idea of a purely functional statically typed language really appeals to me. Despite those features being what I've wanted in a language for decades, too much about Haskell makes it unusable for me.

I did learn some useful stuff in the process, but came away with the conclusion that virtually all of the good stuff in Haskell does not really need to be coupled to all of the terrible stuff in Haskell.

I think some people (I suspect a tiny minority) of those who attempt to learn Haskell get past the terrible stuff and either stop seeing it, or become brainwashed into thinking it's unavoidable/necessary. Perhaps it only works for the 5% of people whose brain happens to be wired the right way, I don't know. Most people, though, give up. To make matters worse, if you are not one of those people for whom the weirdness of Haskell "clicks", and you mention that parts of it make no sense to you, you'll get very little help from the Haskell community. Most will tell you things like "I use Haskell every day, and this doesn't bother me". Great.

That said, during the time I was trying to learn Haskell I did see some small improvements being made in the usability department. The biggest by far was that they finally turned off MR by default. Still, too little too late for me, so I ended up giving up on Haskell and moving on to other languages where I can actually understand my own code a day or two later.

As an aside, MR was a great example of what's wrong with Haskell: 99% of the time the language designers don't seem to care about usability at all, but in the 1% of the time when they actually try to do something to help usability, their attempts completely backfire. map vs fmap is another example of this, though not nearly as horrendous as MR.


Because it is a general solution that solves every problem listed on that page, and more, instead of only solving that one specific problem of null value checking.


Others have brought up good points, but I'd also add that the ad-hoc solution only supports calling methods on possibly-null objects. If you want to pass a possibly-null object to a function that does not accept null values, only the monadic solution works.

In fact, you can see this in the example. AFAIK, the ad-hoc example will end up printing a null value, while in the monadic example, print is never called at all.


In the Haskell example, a/b/c/d can be lists of simple data structures.

In your c# style example, getData/getMoreData must either be methods on the a/b/c/d objects, or extension methods on lists of simple data structures. Both are pretty bad in comparison, IMO.


If you just focus on specific problems, yeah, I agree, it's hard to see a great advantage.

The magic is that having something like do syntax allows you to take matters into your own hands. You can relatively easily create your own solutions for these kinds of problems, without having to wait around for possibly years (or possibly forever) before the language implementors get around to giving you your Elvis operator or iterator blocks or async/await syntax or whatever.


And the question marks. And see how, for d, there's no symmetry between c and a. The do notation is definitely more readable.


You can flatten the pyramids with early returns that act like logical AND gates, i.e. execution only continues if the previous step succeeded. Here's an example:

  var a = getData();
  if (a == null) return null;
  
  var b = getMoreData(a);
  if (b == null) return null;

  var c = getMoreData(b);
  if (c == null) return null;

  var d = getEvenMoreData(a, c)
  if (d == null) return null;

  print(d);
For async code I use counters. But async code is usually thick, not because of a lack of abstraction, but because you do not want any loose ends, e.g. you want to handle each and every exception that might arise, as well as have code for aborting and rate-limiting.


What happens when you mix what <- means:

    do
      a <- getData              -- this is awaiting a future
      b <- getMoreData a        -- this is looping over a list
      c <- getMoreData2 b       -- this is null checking
      d <- getEvenMoreData a c  -- this is awaiting another future
      print d
Monad hell, isn't it?


The do block syntax is actually just syntactic sugar for the `bind` function (also written `>>=`). The above would de-sugar to something like this (I may have the syntax wrong):

  getData >>= \a ->
  getMoreData a >>= \b ->
  getMoreData2 b >>= \c ->
  getEvenMoreData a c >>= \d ->
  print d
The type signature of the `bind`/`>>=` function requires that both sides be the same type of monad, so it won't actually typecheck unless all of the get data functions return the same type of monad (future, list, optional, etc).

There's a number of ways to get around this in haskell (monad transformers, free/freer monads, etc) but they're all pretty complicated unless you're pretty familiar with the language.

(Disclaimer I'm probably a little wrong in my description - I'm still learning haskell)


This is not possible in regular Haskell. For one, it isn't obvious how to do such a desugaring and, even if it were, this is pretty gross.

I remember a co-worker finding some weird combination of old extensions which allowed you to do something gross-ish of this form. I've _never_ seen such a thing in real life.

The only practical issue that comes close to this is sometimes trying to figure out _what_ monad a particular `do` block is running with (my mental type inference falls short of the compiler's).


I'm currently deciphering such code in Scala...


I believe that monad transformers can be used to solve this issue.


Being able to combine monads like this is part of the point.


It means that the fact that the monad is looping or checking for null or awaiting a future is coded in the function itself, and hidden when reading the for loop. When types are heavily overloaded, it can take minutes to know if we are looping over an iterable, awaiting a futurable, or null-checking a maybeable. And what about a Maybe(List)? A Future of List? Come on, how is this not Hell?


Haskell's type inference is optional: if you feel the above code is difficult to understand, you can manually specify the type of each statement.

In a language with a less powerful type system, you could write the same information in comments, but then the compiler wouldn't warn you if you accidentally changed the side effects of a statement.


But you can't combine monads like that in Haskell.


I think this is another instance of solving the wrong problem.

    var a = getData();
    if (a != null) {
      var b = getMoreData(a);
      if (b != null) {
        var c = getMoreData(b);
        if (c != null) {
          var d = getEvenMoreData(a, c)
          if (d != null) {
            print(d);
          }
        }
      }
    }
Why not simply

    var a = getData();
    if (a == null)
        return;

    var b = getMoreData(a);
    if (b == null)
        return;

    var c = getMoreData(b);
    if (c == null)
        return;

    var d = getEvenMoreData(a, c)
    if (d == null)
        return;

    print(d);
And the other example

    var a = getData();
    for (var a_i in a) {
      var b = getMoreData(a_i);
      for (var b_j in b) {
        var c = getMoreData(b_j);
        for (var c_k in c) {
          var d = getMoreData(c_k);
          for (var d_l in d) {
            print(d_l);
          }
        }
      }
    }
Yes, there is the issue that there is too much nesting. But there is no point in tackling this problem with a clever monad. That's not solving the issue, it's just hiding it. Operationally there is no difference.

What really needs to be done is to avoid data dependencies. It's typically possible to do something like

    for (var x in a)
        foo(x, a_state);
    for (var x in b)
        bar(x, a_state, b_state);
    for (var x in c)
        baz(x, a_state, b_state, c_state);
    for (var x in d)
        quux(x, a_state, b_state, c_state, d_state);
This not only looks cleaner, it is cleaner. The path of execution is simpler. It will also often run much faster, since its only tight loops and uniform data accesses.


The example in the article opens itself to an attack like this because it's simple (in an attempt at explaining things simply). It's easy to mistake the trivialness for naivety.

But the "hell" part that monads solve really arise when you go further than this -- when you have error handling, null handling and asynchronicity and mutable state involved. For example, if each of these functions could return an Either or Maybe, your non-monadic, "if"-based version would get a lot hairier.

Monads abstract and generalize the concept of chaining computations. Of course you can write the chaining manually, but you would have to replicate the same logic for every chain of function calls, with all the ad-hoc error handling and so on in place. It may look "cleaner" to you, but there's no generalization happening, and monads exist to generalize patterns like these into a single tool that can used over and over again.

This is why every single monad code example in the article is identical.


Yes, you can implement an "error monad" instance for each of a given set of procedures (returning Either, Maybe, or whatever).

You can also wrap each of a given set of procedures that don't conform to a uniform interface (say, return null or < 0 on error) to be conformant. That's just basic hygiene. I don't see a problem.

> Monads abstract and generalize the concept of chaining computations.

I get that, but in the real world I don't find any need beyond sequencing (first do this, then do that) and the occasional nested-for or early return. Everything else is only giving up on other things that are extremely important, like modularity (complex types / protocols are inherently anti-modular) and deterministic resource usage.

> This is why every single monad code example in the article is identical.

I didn't even notice that, and I do not consider this a desirable goal at all. The examples do totally different things, and you want to see that. There's no need to swap one out for the other in any realistic scenario. Ever.


> in the real world I don't find any need

And I'm sure you're pretty productive with your choice of language, but bear in mind that tools shape our thinking. You don't find yourself reaching for those tools in part because you've never expected them to be there. That doesn't mean they can't be very useful.


Exactly. That’s why I would never write any of the ad-hoc examples the way they are written. And that’s why the article is criticized.


> That's not solving the issue, it's just hiding it. Operationally there is no difference.

I don’t think that’s a fair argument against abstraction and syntax improvements.


I'm not making an argument against (good) abstraction, or practical syntax. Don't you agree that it is better to solve the problem where it is: at a lower level?


I don’t consider your example of error handling superior to the Maybe monad. It’s more explicit and arguably simpler, but has more boilerplate and is less safe.


Your last example of keeping multiple states is less legible than using the list monad. Also, some problems are naturally highly dependent, like matrix multiplication.


>Why not simply

Because people are too busy demonstrating how smart and talented they are. Simple, clean code is too dumb.


Definitely a really neat set of examples. As someone not familiar with Haskell, I have two questions though:

1) What would the method signatures look like in these examples? I.e., in the list example, I'm assuming getData would be () -> list(A). And then is getMoreData (A) -> (B) [which gets lifted to list(A) -> list(B)]?

2) How would you combine multiple of these approaches at once? E.g. using both a state monad and a list monad? Would you have to use nested do blocks? Or would you typically define your own custom monad that is a combination of state and list, writing your own lift method, etc.?


1) Maybe monad

  getData :: Maybe Data1
  getMoreData :: Data1 -> Maybe Data2
Note: the monad is probably not just Maybe if it talks to the outside world, but more on that in 2)

List monad

  getData :: [Data1]
  getMoreData :: Data1 -> [Data2]
Don't know about continuations.

2) There are a few approaches but the most popular are monad transformers, building stacks to combine the effects. For example, to speak with the outside world we need 'IO'. To add failure we might use the Maybe transformer

  MaybeT m a
to build

  type IOMaybe a = MaybeT IO a
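A tiny sketch of using that stack (the function names are made up; it needs the transformers package):

    import Control.Monad.Trans.Maybe (MaybeT (..))
    import Control.Monad.Trans.Class (lift)

    demo :: MaybeT IO ()
    demo = do
      line <- lift getLine              -- lift a plain IO action into MaybeT IO
      if null line
        then MaybeT (pure Nothing)      -- failure short-circuits the rest of the block
        else lift (putStrLn ("got: " ++ line))

    -- run it with: runMaybeT demo :: IO (Maybe ())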


For the list example: I don't get why that loops over the content of the list (I don't understand this in the OP either). Wouldn't it need to be:

  getMoreData :: [Data1] -> [Data2]
at least? As I understand the OP, wouldn't the function signature for the non-monadic example be:

  getMoreData :: Data1 -> [Data2]
and the function body fundamentally different for that reason?


1) I would assume `getData :: [a]`, `getMoreData :: a -> [a]` and `getEvenMoreData :: a -> a -> [b]`

Notice the type of bind: `(>>=) :: Monad m => m a -> (a -> m b) -> m b`, in our case `m` is list, so the instantiation becomes `(>>=) :: [a] -> (a -> [b]) -> [b]`. (In our case we can also deduce that `a = b`, since the output of `getMoreData` is passed back into `getMoreData`, but that's less important.)

A simple example:

    Prelude> a = \x -> [x,x]
    Prelude> [1,2] >>= a
    [1,1,2,2]
2) There is something called Monad Transformers, which does this https://en.wikibooks.org/wiki/Haskell/Monad_transformers It's not quite as easy as just working with one Monad though, iirc.


getData is `Maybe a` (which means it's not a function). getMoreData is `a -> Maybe b`. And so on.

Which means those variables do not do what their names claim they do. It's a case of an example losing its meaning once carried into too different a context. By the way, if you are wondering what functions that actually get data would look like, there's a great architecture here:

http://www.parsonsmatt.org/2016/11/18/clean_alternatives_wit...


Am I the only one who found the "ad-hoc" solutions more clear?

I appreciate that they make it obvious to the reader which things are loops, which are null-checking, which are async, etc.

I'm sure the unified do-notation is very satisfying to write, but it seems confusing to read.


One of the few empirical findings about programmer productivity is that programmers produce about the same number of working lines of code per day regardless of the language, and the ratio of bugs to lines of code is relatively constant across languages.

So expressing the exact same behavior in fewer lines of code directly leads to higher productivity, and fewer bugs for equivalent functionality.

In other words, there is a concrete cost associated with overly verbose programming language constructs.


source?

If that's actually true, we should all be using CoffeeScript:

http://redmonk.com/dberkholz/2013/03/25/programming-language...

According to that analysis, it's more expressive than Haskell, Scala or Clojure (all of which otherwise rate quite well).

Regardless, the "ad-hoc" solutions are all about the same LOC as the do-notation syntax, so I'm not sure the LOC argument is relevant here.

I would assert that bugginess and readability vary across coding styles, which is ultimately the question at hand, not language (i.e., you can write comprehensions in Haskell too).


Discussions like this are always helpful for people with more experience in dynamic languages. My first experiences with Monads, like the one in LYAHFGG[0], left me with the sense that Monads were a tool primarily used in statically-typed languages. After much searching, I stumbled across Tom Stuart's _Refactoring Ruby With Monads_[1] talk, which helped describe Monads as just another useful programming abstraction independent of the language. (Jim Duey's _Monads in Clojure_[2] also helped.) Monads now make sense to me as a general-purpose programming paradigm, like recursion or immutable data structures.

[0] http://learnyouahaskell.com/a-fistful-of-monads [1] http://codon.com/refactoring-ruby-with-monads [2] http://www.clojure.net/2012/02/02/Monads-in-Clojure/


not sure if you've seen this clojure lib http://funcool.github.io/cats/latest/


The elegance of this solution IMO comes down to this:

- you mastered the art of reading math papers, love seeing f: XxYxZ -> D notations and enjoy (partially) composing functions in papers you write. You are on an academic track for a PhD or higher. It's your natural habitat; ideas are always more important than current mediocre reality; the real world is just a side effect. You are going to love monads. You'd treat monad transformer hell or other warts as necessary and unproblematic.

- you went into programming by observing real-world, how each action modifies environment and how interactions affect each other. You saw a great analogy to those when you started with your first language such as Python/Basic/Java/Pascal/etc. Then you started to optimize runtime more and more and now can't write a line of code without addressing all kinds of performance and consistency issues. You are likely going to hate monads if you ever manage to understand them.


This is very black and white thinking. I started with imperative languages, loved them. Then I learnt some FP theory and Haskell, loved it. I now use both and what's most important imho is that knowing both made me a better dev overall.


This article calls attention to my biggest source of discomfort with Haskell's approach to monads: they end up hiding so much under the covers that everything starts to look the same. The author leaned on this particularly hard for rhetorical effect in the article, but it's not always so different in the real world. It really harms readability. I've noticed, for example, that it's not unheard of for someone to come asking for help with a do-block's behavior, and the first response they get is someone having to ask what monad that do-block is using.

F#'s approach (which it calls computation expressions) makes a small tweak that makes all the difference in the world - instead of always using the word "do", you start a block with the name of the monad you want.

So instead of

  do { ... }
  do { ... }
  do { ... }
everywhere, you get:

  maybe { ... }
  seq { ... }
  async { ... }


Firstly, that's almost never a problem as you almost always have a type signature to hand.

Secondly, F#'s approach is not a small tweak. It's forced on them by a lack of higher-kinded types.

Thirdly, if it really bothers you use TypeApplications to write

    with @Maybe $ do { ... }
    with @[] $ do { ... }
    with @Cont $ do { ... }


I assume `with :: forall m a. m a -> m a; with = id` - is that defined somewhere?


No, I just made it up but if enough people shared the OP's feelings it would be defined somewhere. The fact that it doesn't exist is evidence that it's not needed.
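For reference, it does compile as a one-liner (a sketch; `with` is just the made-up name from above):

    {-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

    with :: forall m a. m a -> m a
    with = id

    main :: IO ()
    main = do
      print (with @Maybe (Just 1 >>= \x -> Just (x + 1)))  -- Just 2
      print (with @[] ([1, 2] >>= \x -> [x, x * 10]))      -- [1,10,2,20]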


A fair point, but weakened somewhat by the fact that type applications are as new as they are.


Clojure has some conditional threading macros like `some->` and `cond->` that are useful for transforming data conditionally.

`cond->` takes pairs of tests and forms where a form is threaded through the subsequent forms if the test expression is true.

Example from documentation (https://clojuredocs.org/clojure.core/cond-%3E):

  (cond-> 1          ; we start with 1
      true inc       ; the condition is true so (inc 1) => 2
      false (* 42)   ; the condition is false so the operation is skipped
      (= 2 2) (* 3)) ; (= 2 2) is true so (* 2 3) => 6 
  ;;=> 6
  ;; notice that the threaded value gets used in 
  ;; only the form  and not the test part of the clause.


This is conceptually different from a monad like Maybe in Haskell. Maybe's bind operations short-circuit on failure, so in the example above all of the operations would have to succeed or the whole chain would produce nothing.
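In GHCi, for instance, the first Nothing stops the whole chain:

    Prelude> Just 1 >>= \x -> Just (x + 1)
    Just 2
    Prelude> (Nothing :: Maybe Int) >>= \x -> Just (x + 1)
    Nothing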

cond-> looks like a useful function, but you have to hold the state in your head at each point to debug it. What happens if one step changes the type from int to string and the following step requires the string, but the earlier step's test is false, so it gets skipped and you still have an int? It seems it would be very prone to runtime surprises.


cond-> doesn't, but some-> short circuits on nil. Similarly, with expressions in Elixir short circuit on a failed pattern match.

https://clojuredocs.org/clojure.core/some-%3E

https://hexdocs.pm/elixir/Kernel.SpecialForms.html#with/1


My one gripe is that the author doesn't exactly explain what the problem with those ad-hoc solutions is. I lied; I also have the gripe that the author doesn't talk about the non-composition of monads and tease monad transformers.


Deterministic finalization is another example, with "ad hoc solutions" including manually using a finally statement, using an interface and custom statements (such as the with statement from Python and the using statement in C#, or now the try-with-resources statement from recent Java), or syntax like stack allocation (as in C++). And of course error handling in general (not just null-value checking), with exceptions being the typical "ad hoc solution". Any time you have pieces of code going together in some boilerplate fashion, monads are pretty epic, and it is incredibly depressing to see people not really learning this lesson (such as in Rust, which went with crazy operators, macros everywhere, and "just do it manually, as we don't believe in exceptions but also don't fully understand monads" as its "ad hoc solution" to error handling, when what the language really needs is do syntax to go along with its attempt to rely on the Either monad for all of its errors: monads are largely interesting because of do syntax, which is the payoff for the common abstraction).
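For what it's worth, that do-syntax payoff looks roughly like this in Haskell (a sketch with made-up validators):

    -- Hypothetical validators; each step returns either an error or a value.
    parseAge :: String -> Either String Int
    parseAge s = case reads s of
      [(n, "")] | n >= 0 -> Right n
      _                  -> Left ("bad age: " ++ s)

    checkRange :: Int -> Either String Int
    checkRange n
      | n < 150   = Right n
      | otherwise = Left "age out of range"

    validate :: String -> Either String Int
    validate s = do  -- Either's do-notation stops at the first Left
      n <- parseAge s
      checkRange n

    main :: IO ()
    main = do
      print (validate "42")   -- Right 42
      print (validate "abc")  -- Left "bad age: abc"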


There are strong technical reasons why writing a monad abstraction in Rust is particularly hard[1], and to my knowledge nobody has found a way to do so whilst preserving fairly basic properties like being able to use loops or avoid allocations.

Using "?" with appropriate traits is much simpler (important for understandability), more globally applicable (important to be general) and preserves ease of important reasoning (eg. of performance). It is certainly not out of ignorance of monads.

[1] For example, https://m4rw3r.github.io/rust-and-monad-trait


If you go all-in with monadic error handling, you end up with code that is isomorphic with and semantically equivalent to checked exceptions.

The only difference, in so far as there is a difference, is in how low-level of a primitive you permit to (throw|return Either). For example, do you model floating-point division as returning an Either, since the divisor might be zero - that kind of thing. Exceptions tend to encourage errors from a lower level of primitive.
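Going all-in on that might look something like this (a sketch; safeDiv is a made-up name):

    -- Pushing Either all the way down to division.
    safeDiv :: Double -> Double -> Either String Double
    safeDiv _ 0 = Left "division by zero"
    safeDiv x y = Right (x / y)

    -- safeDiv 6 3 == Right 2.0
    -- safeDiv 6 0 == Left "division by zero"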


I agree! However, to go "all-in" on monadic error handling requires you to have do notation to abstract away the monad, lest you end up with something (a thing we happen to call Rust ;P) that is much worse than just giving someone checked exceptions.


Usually when someone complains to me that GC FP languages lack deterministic finalization, I end up explaining how to use higher order functions to achieve it.

The problem with `with`, `using` or similar approaches is that they grew with the language, and the majority of them don't allow for trailing closures.

More modern languages like Swift or Kotlin make it much nicer to achieve that, making something like `using` a simple function call.
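In Haskell terms, that higher-order-function approach is the bracket pattern from base (a minimal sketch; the file name is made up):

    import Control.Exception (bracket)
    import System.IO (Handle, IOMode (ReadMode), hClose, hGetLine, openFile)

    -- The higher-order function owns the resource's lifetime; the callback
    -- (a "trailing closure" in Swift/Kotlin terms) only uses it.
    withLogFile :: (Handle -> IO a) -> IO a
    withLogFile = bracket (openFile "app.log" ReadMode) hClose

    main :: IO ()
    main = do
      firstLine <- withLogFile hGetLine  -- the handle is closed even if hGetLine throws
      putStrLn firstLine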


Doing that with higher-order functions instead of with a monad requires the caller to care about something they could let the type system guarantee. I mean: every single example of using a monad in do notation is just syntax sugar for passing a continuation of the rest of the function to a higher-order function...


I don't get how higher order functions help with deterministic finalization in the case of closures, which typically have an unbounded lifetime. How, and when, is the resource captured by a closure disposed of?


When the closure itself gets disposed of depends on the GC algorithm (RC, tracing, parallel...).

My point is about other kinds of resources, namely files, database connections, IPC channels, sockets, native memory buffers, transactions and so forth.

The call stack of higher order functions can be used as an arena.


I am still pretty new to learning Haskell, and I find this very helpful. I don't necessarily agree that the monadic solution is preferred, but I like how it shows how general a monad is: it's hard to grasp the generalization from only the couple of simple examples normally found in any Haskell book. I hope there are more articles like this showing how these abstract concepts can be applied in somewhat unexpected places.


How is this not just an architecture that ensures the return values/struct/object is a 'context object' that can be passed as input, with the architecture arranged so that as many functions as possible pass and receive these standardized parameter contexts? This is just function chaining, a la jQuery, right? Or where is this assessment not right?


It kind of is, but this architecture has mathematically sound properties and is built in the language so people don't have to reinvent it in an ad-hoc way all the time. Your context object would be a Functor, types are employed to ensure that functions work with these and chaining of such functions is possible via Monads.


How is error handling done if something goes wrong in one of the functions?


It depends. In Haskell, functions do not fail in the traditional sense - they just return an indication of failure. So the return type of a function can be, for instance, the Either type (as someone already noted), which basically lets you have either a proper value or an error value, one or the other.

And Either is also a monad. Typically, you would derive your own monad by somehow combining the two monads (for instance using a monad transformer) into one that has the behavior you want.

In ordinary languages, error handling is done ad hoc; in each situation the semantics can subtly differ. You can do that too, but then you have to explicitly handle the fact that functions return Either in the function body (this would be somewhat similar to checked exceptions). Or you can define your own monad that prescribes the correct semantics of interaction between the monads.
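As a rough sketch of what such a combination can look like (using ExceptT from the `transformers` package; the readConfig action is made up):

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

    -- Hypothetical: an IO action that can fail with a textual error.
    readConfig :: ExceptT String IO Int
    readConfig = do
      lift (putStrLn "reading config...")  -- ordinary IO
      throwE "config file missing"         -- failure short-circuits the rest

    main :: IO ()
    main = do
      r <- runExceptT readConfig
      case r of
        Left err -> putStrLn ("error: " ++ err)
        Right n  -> print n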


> In Haskell, functions do not fail in the traditional sense - they just return an indication of failure.

That's too general a statement. Haskell functions certainly can fail in the traditional sense. As a trivial example, "head []" is a well-typed Haskell expression that will raise an exception when evaluated; if the exception is not caught (using a traditional exception handler), then it propagates up and the program exits with an error.
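For instance, a small sketch using base's Control.Exception:

    import Control.Exception (SomeException, evaluate, try)

    main :: IO ()
    main = do
      r <- try (evaluate (head ([] :: [Int]))) :: IO (Either SomeException Int)
      case r of
        Left e  -> putStrLn ("caught: " ++ show e)  -- the traditional handler
        Right x -> print x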

This article [1] about the many ways to handle errors in Haskell is a bit old, but as far as I know, all his points are still relevant. The conclusion (for me) is that if modern Haskell tends to use return-values on failure, it's due to convention (and good taste) among developers, and not due to a design constraint of the language.

[1] http://www.randomhacks.net/2007/03/10/haskell-8-ways-to-repo...


You end up with something that the F# community calls railway-oriented programming: each monad works like a rail switch, driving which of the functions actually gets called on it.

So if they are in an error state it is like a nop, until someone actually retrieves the error value.


This is what I don’t get about FP, in exchange for code that looks more elegant, you get something that is much more of a PITA to debug. If there is a problem, I often find getting rid of the indirected plumbing and directly exposing control flow to be the best way to find out what is going on when and where.


It is only a PITA to debug on paper, not when using a graphical debugger.


How so? Visual Studio still doesn’t reveal the right context when you actually find out the error has occurred.



Nice! Tooling is a large part of my trouble with FP, better tooling would make it easier to debug, especially those complex signal networks that require different debugging paradigms from imperative languages.


This suggests that it's difficult to reason about the code. That's the last thing I want.


The answer to that is the `Either` monad (similar to `Result` in Rust).


You then have to combine multiple monads though, which isn't quite as easy, iirc.


Well, monad transformers make things easy but the abstraction impacts runtime performance :(


I like the ad-hoc solutions. They correspond more closely to what your code is actually doing. I used to work on a Scala code base that heavily used a functional style with monads, custom operators, and elaborate type hierarchies.

At the micro level these things seemed great, they removed a lot of "boilerplate", but when you zoomed out and tried to figure out how larger parts of the application worked it quickly became overwhelming because these language features made understanding what the program was actually doing much harder.


Hmm, maybe I'm missing something, but I lost interest after seeing that pretty much all the 'ad hoc' solutions listed (similar non-monad-based solutions is the idea, I think) were nearly identical to the monadic solutions, except for some syntactic differences. As someone who doesn't care that much about little syntax nuances, the motivation to look further here escapes me.


It's more like escaping one hell into another. Monads introduce brutal conceptual complexity and only a few have managed to master them, usually at the top of intelligence range in the math spectrum. It's unlikely this is going to be a solution for average or even many advanced developers who won't be able to comprehend it.


> usually at the top of intelligence range in the math spectrum

As someone who is definitely not at the top of the `math intelligence spectrum' but does have a reasonable understanding of monads (in a practical, not category theory sense), I can promise you that this isn't the case.

Yes, it takes a while to get your head around them - but this is mostly due to poor teaching materials. I wouldn't class them as significantly harder than many of the other concepts required to become a good programmer.

This isn't to say that monadic solutions are an answer to all problems - but they're interesting to learn about.


Here's a little secret: in Java, try + generic catch is my error handling monad.


They're not nearly as good as an Either type, but if you don't have do notation, exceptions are much, much easier to use than sum types for error handling.


Yep. If you want to be really broad, even golang's (val, err) is effectively a sum type, even though tuples are more literally product types, since it's generally one or the other that has a meaningful value, not both.

But it sure as hell isn't easy since there is no do syntax or flatMap/selectMany/whatever to handle them nicely.


This article is more religious than technical. You're meant to be inspired by the elegance of it all, not necessarily to understand much about the use of monads in practice.


I love this blog; the post on monoid morphisms was very eye-opening for me.


So the monadic style completely hides what is going on. That all the solutions look exactly the same impairs the readability of the code.


Isn't that the point of encapsulation?


In real code, you would have type signatures on top-level definitions.


> In this post we’ll have a look at some instances of such sitations, their “ad hoc” solutions provided at the language level, and finally at how these problems can be solved in a uniform way using Monads.

I wonder if the blog post's author is actually aware that the original application of monads to programming languages was precisely to give a denotational semantics to strict higher-order languages. In other words, you don't “add monads” to JavaScript. A monad, perhaps an entire monad transformer stack, is already hardwired into its semantics.

Now, I'm aware that the author says at the end:

> We’ve had a look at this from a syntactic perspective.

But monads are about semantics, not syntax. Of course, if you want uniform syntax for using monads, then say just that.

---

> For-loop Hell occurs when iteration through multiple dependent data sets is needed. Just as for null-checking, our code becomes deeply nested, with a lot of syntactic clutter and needless bookkeeping.

You know what's a simple, clean, straightforward solution to “for-loop hell”? Helper procedures. It baffles me why people always reach for the bazooka when a fly swatter will do.

> Both the State and ST monads bound the lifetime of a stateful computation, ensuring that programs remain easily reasoned about in the general case.

Yet neither can express the following very useful pattern:

    allocate A
    allocate B; use A B; deallocate A
    allocate C; use B C; deallocate B
    ...
    allocate Z; use Y Z; deallocate Y
    deallocate Z
> The `ST` Monad provides more performant state with references, at the cost of some higher order behaviour.

`ST` is not about “performance”. It's about semantics. With `State`, you have to know beforehand the state type, and you cannot change it in the middle of the computation. With `ST`, you can allocate whatever you want in the middle of a computation, and it doesn't have to be reflected in the computation's type.
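For example (a minimal sketch; sumST is a made-up name), with `ST` you can allocate fresh references in the middle of the computation, and nothing about them shows up in the function's type:

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef, newSTRef, readSTRef)

    sumST :: [Int] -> Int
    sumST xs = runST (do
      acc <- newSTRef 0  -- a mutable reference allocated mid-computation
      mapM_ (\x -> modifySTRef acc (+ x)) xs
      readSTRef acc)

    main :: IO ()
    main = print (sumST [1 .. 10])  -- 55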


you can use monads to do your last example....


No, you need proper substructural types. Not even monadic regions will help.


You do not need a type system to use monads. In particular, your example could be written (in a statically typed Haskell-like language) using indexed monads. In a dynamically typed language, you wouldn't need that.


Of course, you don't need types to write programs. Types are just a way to mechanically verify some desirable properties. But, as it turns out, resource management in large programs is pretty damned hard to get right without this kind of mechanical assistance. So, of course, when I say “it's inexpressible without <type system feature>”, what I mean is “it's impossible to mechanically verify it without <type system feature>”.


Monads have nothing to do with verification, so I'm now unsure what your original point was.

It's not surprising that you can't use a banana to edit text, so it's not surprising you can't use the general idea of monad to do verification.


> Monads have nothing to do with verification, so I'm now unsure what your original point was.

Indeed, monads are about language semantics, prior to any verification attempt. Have you actually tried reading my original comment?


Your original comment was about language semantics. You then provided an example of something you claimed could not be done monadically. I pointed out that you could implement such a thing with "indexed monads", which are a subclass of monads (although not one that's easily used in any current programming language, Haskell included).

You then claimed that in order to implement the memory allocation example you would need substructural types, which is not the case. Then you claimed that you needed the type system feature to verify the memory was allocated/released correctly.

You are correct that you need an advanced type system and type checker to do the memory validation. However, the validator tool only needs to know about one monad (your resource one). This one monad is a small part of the very large space of 'things that form a monad'.

It's like if I said, 'groups are about arithmetic'. While true that many arithmetic operations can be expressed as a group, groups are not inherently about arithmetic. That would be conflating common use cases with the generalization.


> You then provided an example of something you claimed could not be done monadically.

Using a monad, i.e., an endofunctor with unit and join that satisfies the well-known laws, you cannot do it.

> I pointed out that you could implement such a thing with "indexed monads", which are a subclass of monads (although not one that's easily used in any current programming language, Haskell included).

Is a so-called “indexed monad” really a monad, now?

> You then claimed that in order to implement the memory allocation example you would need substructural types, which is not the case.

It is the case. What else could be the point of the threaded typestate in the definition of so-called “indexed monads”? Moreover, even with the threaded typestate, so-called “indexed monads” may fail to achieve the desired goal of manipulating resources linearly, e.g. the “indexed list monad” provides no such guarantees.


> Monads can help solve certain classes of problems in a uniform way.

You solve 3 completely different problems using exactly the same code and have the gall to call this less complicated? I'm forced to sometimes wonder whether functional programmers have any idea how to create software.

If this was a joke I didn't understand, I apologise for my thickness.


It's called abstracting. The problems fit the Monad interface. Details are in the implementations.


I hope you are joking or else a lot of people like me who don’t have any idea about software are just somehow fooling everyone quite successfully :)


Personally, I feel the reason we think of these as completely different problems is that it isn't necessarily natural to see the abstraction between all of them. I don't understand monads fully myself, but it's still kind of missing the point to say this.


> You solve 3 completely different problems using exactly the same code and have the gall to call this less complicated?

That's the point, though. If the code is identical, then the problems are identical. There is less complexity through abstraction.


The problems are indeed similar, but using the same syntax and obscuring what monad is being used is itself a worse programming language hell than the original bad examples.


Why is recognizing the fundamental sameness of the problems, and solving them once, a "worse programming language hell"?


Because that sameness is irrelevant, and building something on it decreases clarity. Just because you can show something in mathematical terms doesn't make it useful.


That's pretty much the point. I'll detail the first example (the Maybe monad).

The idea is that you have a maybe object that is either just an object (denoted Just(a)) or nothing at all (denoted Nothing). If you want to manually apply a function f to a maybe value m, the code would look like this:

  if(m is Just(a)){
      f(a)
  } else{
       // there is nothing to do
  }
You can take f as a parameter to create a generic function apply. You want your new function to be total (to return a value in all cases). Since you may (or may not) have a value within the maybe monad, the return type is also a Maybe value. If there was no value, then return Nothing, else apply `f` to `a` and wrap the result back within the Just part of the monad. The code looks like this:

  apply(m,f){
    if(m is Just(a)){
         return Just(f(a))
    } else{
         return Nothing.
    }
  }

From an initial maybe value m, you can chain calls:

  apply(apply(apply(m,f),g),h)
With a bit of syntactic sugar, the code looks like this.

  do
    a <- getData
    b <- f a
    c <- g b
    d <- h c
    print d

And the code is similar for the other cases. In all the solutions, <- denotes a chaining function on some monad; in Haskell it is usually called bind (>>=), and unlike the apply above, the functions it chains themselves return monadic values. There is one such function for each type of monad, and it is responsible for doing all the repetitive plumbing appropriate to that monad.
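Concretely, in Haskell that sugar expands to nested calls of bind. A small sketch with made-up f, g, h, leaving out the print (which would live in IO):

    getData :: Maybe Int
    getData = Just 1

    f, g, h :: Int -> Maybe Int
    f = Just . (+ 1)
    g = Just . (* 2)
    h = Just . subtract 3

    -- The <- chain above desugars to this; a Nothing anywhere
    -- short-circuits everything that follows.
    result :: Maybe Int
    result =
      getData >>= \a ->
      f a     >>= \b ->
      g b     >>= \c ->
      h c     -- Just 1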

I hope it helps =)


You (and the other repliers) seem to be confused. It's not that I don't understand monads, it's that I don't value them. It seems obvious to me that they serve mostly to harm code quality.


Chaining together the outputs of these very different problems is the same problem, and it should be solved the same way. The implementation details are within the monads themselves, not the do block.



