Monads for functional programming (1995) [pdf] (ed.ac.uk)
69 points by jxub 10 months ago | 49 comments



In my experience, Haskell's comparatively complex and - arguably - also rather ad hoc type system can make a discussion of monads quite involved and hard to follow.

For those who approach Haskell with some Prolog or Mercury background, it may be useful to know that Prolog's Definite Clause Grammars (DCGs) are very similar to monads. In Prolog, a DCG clause of the form

    head -->
        goal_1,
        goal_2,
        ...,
        goal_n.
is implicitly augmented with 2 additional arguments that are threaded through as follows:

    head(S0, S) -->
        goal_1(S0, S1),
        goal_2(S1, S2),
        ...,
        goal_n(S_(n-1), S).
It is in these 2 arguments that you can implicitly pass around additional information such as counters, list differences, "global" variables, and states in general. Prolog provides dedicated syntax (called semicontext notation) to access these implicit arguments if needed. Often, they are simply threaded through implicitly throughout large portions of code without any modification.

Thus, it is obvious that DCGs, and hence monads, can save a lot of typing that would arise if you had to pass around these arguments explicitly. Also, they let you focus on the essence by making data that is only relevant to a small set of predicates (or functions) implicit in your programs.
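For readers coming from the Haskell side, this implicit threading is what the State monad gives you; here's a minimal, dependency-free sketch (the names `next` and `labelTwice` are made up for illustration):

```haskell
-- A hand-rolled State monad (avoiding the mtl dependency): the s -> (a, s)
-- threading is exactly the pair of implicit DCG arguments S0 and S.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

-- A counter threaded implicitly through the do-block, never mentioned
-- in the body, just as S0..Sn never appear in the DCG clause body.
next :: State Int Int
next = State $ \n -> (n, n + 1)

labelTwice :: State Int (Int, Int)
labelTwice = do
  a <- next
  b <- next
  return (a, b)

main :: IO ()
main = print (runState labelTwice 0)  -- ((0,1),2)
```

The do-block reads like the DCG clause body: the counter is passed from `next` to `next` behind the scenes.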

It may be easier to understand these concepts if you can discuss them in isolation from a type system.


I find calling Haskell's type system ad hoc rather surprising. Would you care to elaborate on that?


For example, aptly named ad-hoc polymorphism violates referential transparency:

    ( (7^7^7`mod`5`mod`2)==1, [False,True]!!(7^7^7`mod`5`mod`2) )
This yields:

    (True,False)
suggesting that the same arithmetic expression is both 1 and 0.

In GHC, there is a dedicated flag to spot such cases (-fwarn-type-defaults).

In general, the guarantees that the type system actually gives, and the ways to specify them, appear somewhat unfinished and are also quite hard to understand, often necessitating semantic restrictions and syntactic extensions. For further examples, see for instance:

https://stackoverflow.com/questions/27019906/type-inference-...

https://stackoverflow.com/questions/14865734/referential-tra...


They’re not the same expression, though. The first gets defaulted to Integer, the second is specialized to Int by the (!!) function. (Integer is unbounded, Int is bounded.)
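One way to make the discrepancy disappear is to bind the expression once at a single explicit type, so both uses share one value. A sketch (note the first line computes a roughly 700,000-digit Integer, which GHC handles fine):

```haskell
main :: IO ()
main = do
  -- Bound once at Integer, the value is shared by both uses,
  -- so no defaulting happens and no Int overflow occurs.
  let n = (7 :: Integer) ^ 7 ^ 7 `mod` 5 `mod` 2
  print (n == 1, [False, True] !! fromIntegral n)  -- (True,True)
```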

The rest of your argument is fairly vague and unspecific. But I would suggest trying to understand the underlying type theory instead of focusing on the affordances that the type system makes for practicality and usability concerns. In my experience that makes it a lot clearer why the type inference works the way it does in some of the weird cases. (That’s not to say that Haskell doesn’t have its warts.)


The universal algebra perspective on monads is that m x is the set of expressions (ie. ASTs) for a particular algebraic structure with variables drawn from the set x. For example, you have a monad for abelian groups where the expressions are things like a+b, 2a-b, 1. Note that m encodes both the operations of the algebraic structure, like +, and the laws that they obey, because we regard a+b and b+a to be the same AST (ie. these are considered equal as a matter of syntax).

A concrete algebraic structure is a map m x -> x that evaluates expressions, producing their value, eg. 7+3 -> 10. It essentially provides concrete definitions for the syntactic operations in the ASTs.

Under this interpretation, the monad operations say "a bare value is an expression, eg. a, b" (x -> m x) and "nested expressions can be simplified, eg. (a+b)+(c) = a+b+c" (m m x -> m x). The monad laws essentially ensure that no matter what order you evaluate subexpressions and simplify an expression, it always comes out to the same value in the end. They let us state a Church–Rosser theorem for the maps m x -> x.
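Here is a minimal Haskell rendering of this perspective (ignoring the quotient by the abelian-group laws, so these ASTs are plain trees rather than equivalence classes):

```haskell
-- Expressions over a single operation Add, with variables drawn from x.
data Expr x = Var x | Add (Expr x) (Expr x)
  deriving (Show, Eq)

instance Functor Expr where
  fmap f (Var x)   = Var (f x)
  fmap f (Add l r) = Add (fmap f l) (fmap f r)

instance Applicative Expr where
  pure = Var
  ef <*> ea = ef >>= \f -> fmap f ea

instance Monad Expr where
  -- Substituting expressions for variables grafts trees together,
  -- which is exactly the join m (m x) -> m x.
  Var x   >>= k = k x
  Add l r >>= k = Add (l >>= k) (r >>= k)

-- A concrete algebra m x -> x: evaluate over the integers.
eval :: Expr Int -> Int
eval (Var n)   = n
eval (Add l r) = eval l + eval r

main :: IO ()
main = print (eval (Add (Var 7) (Var 3)))  -- 10
```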


That's pretty neat. Can you recommend a resource to read up on this a bit more?


Not really. You can see the basic idea on the nLab: https://ncatlab.org/nlab/show/variety+of+algebras#free_algeb...

Continuing the example I gave, the monad law "Identity" says that "trivial subexpressions are already simplified"

     a+b  = (a)+(b)
     ||       ||
    (a+b)  =  a+b
and the monad law "Associativity" says that "order of simplification doesn't matter"

    ((a)+(b))+((c+d)+(e)) = (a)+(b)+(c+d)+(e)
           ||                    ||
      (a+b)+(c+d+e)     =     a+b+c+d+e
The laws for an algebra for a monad say "Trivial expressions evaluate to themselves"

    a = (a) = a
and "you get the same result no matter the order you do evaluations and simplifications"

    (6+1)+(3) = 7+3 
       ||        ||
      6+1+3   =  10
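The same monad laws can be spot-checked at a familiar instance, Maybe (`k` and `h` here are arbitrary illustrative functions):

```haskell
k, h :: Int -> Maybe Int
k x = Just (x + 1)
h x = Just (x * 2)

main :: IO ()
main = do
  -- left identity: return a >>= k  ==  k a
  print ((return 3 >>= k) == k 3)
  -- right identity: m >>= return  ==  m
  print ((Just 3 >>= return) == Just 3)
  -- associativity: (m >>= k) >>= h  ==  m >>= (\x -> k x >>= h)
  print (((Just 3 >>= k) >>= h) == (Just 3 >>= (\x -> k x >>= h)))
```

All three print True.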


Much kerfuffling comes up about "What monads are" and "Are monads just pipes" and "is X or Y a monad." I'd like to volunteer to answer questions people may have and provide examples. This paper is a really practical reference for how to use them, but without an insight into what "them" is, it may not be helpful.

So feel free to fire away with questions. I can provide simplified examples in a variety of languages upon request.


Something I've always wondered, you may be able to help with - why are monads? I.e. what is the problem they try to solve? I use objects and first class functions and they're pretty useful, occasionally even do a bit of currying (mainly because I think - ooh I can do a bit of currying here :-)), so how would monads make my programs better/easier etc. I've heard they can sequence your IO operations, but in non-functional languages that's not a problem. I feel there must be something they do that I'm missing, so any pointers would be appreciated. Thanks.

Edit: like the two things I spend most of my day writing code for are UI stuff and database code - can monads do anything for me there?


Sure. This is probably a much more useful question than "what".

For Haskell, Purescript, Agda, Idris and other languages that like to rely on them, Monads represent some cool properties that make code extremely reusable.

1. Monads have a uniform, universal description of sequenced code. They can model essentially any form of computation that depends on its input and known state. This makes them a very nice level of abstraction to write highly generic tools.

As an example I wrote about here awhile ago (https://news.ycombinator.com/item?id=16831985), it makes it easy to write async code execution strategies without ever even considering any aspect of the code other than that it's sequenced.

2. Monads describe behaviors, but a subtle point is that in a language with generic code (parameterized or higher-kinded types, etc.), monads let you write code against a GENERIC monad.

As an example, what does this code do?

    data SomeThing = SomeThing { name :: Text, age :: Int }
    
    makeSomeThing :: (Monad m) => m Text -> m Int -> m SomeThing
    makeSomeThing nameE ageE = do
        name <- nameE
        age <- ageE
        return $ SomeThing name age
This code makes SomeThing (and yes, I could write this more neatly as `liftM2 SomeThing`, and often we use Generics to mechanically derive this code, but I'm making a point here). But we haven't discussed anything except that:

a. We are given effects to get name and age.

b. We sequentially use those effects to produce two values.

c. We then have an effect to produce SomeThing.

That's pretty amazing, if you think about it. Depending on what the monad is, we could be calling over the network, or executing things in parallel, or parsing it off of stdin. We could use the Maybe monad to represent values that may be missing, or we might use Envy (https://hackage.haskell.org/package/envy) to read it from the environment.
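Here's a self-contained version of the snippet above that you can actually run (String stands in for Text to avoid the text-package dependency), instantiated at a couple of different monads:

```haskell
data SomeThing = SomeThing { name :: String, age :: Int }
  deriving (Show, Eq)

makeSomeThing :: Monad m => m String -> m Int -> m SomeThing
makeSomeThing nameE ageE = do
  n <- nameE
  a <- ageE
  return (SomeThing n a)

main :: IO ()
main = do
  print (makeSomeThing (Just "Ada") (Just 36))  -- Just (SomeThing ...)
  print (makeSomeThing (Just "Ada") Nothing)    -- Nothing: one missing field fails the build
  print (makeSomeThing ["a", "b"] [1, 2])       -- list monad: every combination
```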

Monads give you a way to parameterize the structure of the logic you need to describe a task and make statements within that logic.

This is why monadic programming tends to get really addictive. And as time has gone on, everyone has found new compiler techniques to make this kind of code easier AND more powerful.

The next generation of programming languages being researched focuses on Dependent Types (hi Agda and Idris) and Linear Types (hi Rust and Pony and Idris), which let us take monads even further, allowing for a principled approach even to previously complex tricks like interleaved effects, sophisticated type indexes, and consistent resource management.

> Edit: like the two things I spend most of my day writing code for are UI stuff and database code - can monads do anything for me there?

I just posted this code elsewhere: a for-production Haskell microservice that I made generic over the database actions (type "p" in the code). This code does not know what database OR header validation scheme it's going to use, only that we've explained how to make CanValidate and Patchable actions in our action monad. This is better than writing for DI or mocking; DI and mocking usually couple your tests and underlying code structure deeply. This is faking, where I have a trivial alternative service stubbed in so I can test my API logic without appealing to a database, and then run integration tests on my database actions separately.

https://gist.github.com/KirinDave/9b4380376f9f748bcf0f8440de...

You might also see that I have "ValidAction" and "BaseAction" and there is a "validationHook." The framework I chose here lets me use "hooks". These hooks help the compiler check that I only run my database actions inside a Validated context, and don't try to validate again once I've already Validated. More precisely, they're helper functions to let us pretend indexed monads are normal monads.

Here are sample implementations of CanValidate and Patchable from unit tests from that code so you can see the constraints in play.

https://gist.github.com/KirinDave/fe317bffb5e4b2f1316c207325...


Thanks for that, I can see these things would be useful - but are they a 'killer app'? Like it seems a lot of work to figure out the abstraction to build the monad, when I don't really need to do this, cause I only have to do this one thing. Or are there a bunch of monads people have already figured out? and I can say when I strike this problem I can use this monad write a few lines of code and go home? Maybe thats the goal? I can see technically its interesting but I can't see a really good reason to spend a lot of time learning this (and I am curious about them). Or are these tools for compiler writers and they'll filter down to us mere mortals? Like Swift can define types as optionals, which sort of looks like the Maybe in haskell (to me any way).

Edit: Like I started out as a C programmer, and then objects came along and I could see that would be useful because it localised the namespace among other things, then generics came along and they were instantly useful, languages have a lot of functional operations like first class functions and they're useful. Like all these things seemed obvious that they'd help write good code, Monads I just can't see why they're good - you've given a couple of examples and they seem useful, though they're not things I can't do with other language constructs, perhaps not as elegantly. I'm wondering if I'm missing something huge? :-)


> Thanks for that, I can see these things would be useful - but are they a 'killer app'? Like it seems a lot of work to figure out the abstraction to build the monad, when I don't really need to do this, cause I only have to do this one thing.

Most of the core monads are already done. You make new ones to define new compositions of existing ones or to bring in new features. I've only ever needed to make a genuinely novel monad once. You often don't even make custom monads; you make custom typeclasses that explain how to do your work within a monad, and then pick the ones you want.

> and I can say when I strike this problem I can use this monad write a few lines of code and go home? Maybe thats the goal?

It is. But as I said, a lot of the time what you're actually dealing with is a custom transformer stack with either Identity or IO at its root. So you just write generically and specify what you want. In practice, it's not very difficult.

> Like Swift can define types as optionals, which sort of looks like the Maybe in haskell (to me any way).

They are. Good example.

> Monads I just can't see why they're good - you've given a couple of examples and they seem useful, though they're not things I can't do with other language constructs, perhaps not as elegantly.

What's incredibly hard to do is write code that's generic along this axis. There are other examples exercising these principles, and pretty amazing things can fall out of them. For example, lenses use a weaker abstraction (functors), and they can make a function that, depending on how it's called, either gets a value inside a struct or rebuilds the struct with a new value. Pretty amazing!

> I'm wondering if I'm missing something huge? :-)

Maybe. But as many people have noted, Haskell is a committee language that is surprisingly good as an accident of its features all happening to stack together in a somewhat unplanned way. So there is more to the story than this, but this IS a pretty unheard-of story.


well thanks very much for that, you have piqued my curiosity (again :-)), time to get out that Haskell book, and finish it this time. If nothing else it's a whole new set of language constructs that I'm sure will come in handy some day, I am starting to see how they could be used. The idea of being able to stack monads to do something sounds intriguing. Or is there a better language to learn?


You might start with PureScript, which is much smaller and runs in browsers and on Node.

There's also Idris if you want to see what driving a not-quite-complete future hovercar is like.


thanks, think I'm starting to see, so with one sort of Monad I can chain a bunch of functions together and carry along things like an error value (or a null), and just skip out of the chain if an error occurs but without worrying about a bunch of if tests. That seems useful :-). The do syntax makes the chaining a lot more friendly than in other languages.
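Something like this, if I understand right (sketching in Haskell with made-up names):

```haskell
-- Either chains computations and short-circuits on the first Left,
-- with no explicit if-tests between the steps.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv a b = Right (a `div` b)

calc :: Either String Int
calc = do
  x <- safeDiv 10 2
  y <- safeDiv x 0    -- fails here; the remaining steps are skipped
  return (x + y)

main :: IO ()
main = print calc  -- Left "division by zero"
```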


I think the killer app of the monad is really the language feature that Haskell has called "do notation". This is where it really shines. You can definitely use the monad interface in other languages, but it can feel clunky and unintuitive sometimes because you end up either foregoing some of the effectiveness to better fit the language, or you end up in essentially callback hell. Example:

In Haskell, you can use do notation:

  do
    x <- someFunction
    y <- anotherFunction x
    z <- yetOneMoreFunction y x
    return (x + y + z)
This is syntactic sugar for something like (using `flatMap` in place of >>= (bind) ):

    flatMap someFunction (\x ->
      flatMap (anotherFunction x) (\y ->
        flatMap (yetOneMoreFunction y x) (\z ->
          return (x + y + z)
        )
      )
    )
 
If Haskell programmers had to write code like this (which is what monads look like in other languages), there's no way they would use monads!

In languages with objects, you can sometimes get away with a minor victory by using what I'll call a "shallow bind". Consider promises in javascript (which form a monad with unit a = Promise.resolve(a) and flatMap p f = p.then(f) ):

  someHttpRequest().then( response => {
    if (response.body == undefined) {
      return Promise.reject("no body error");
    } else {
      return Promise.resolve(response.body);
    }
  }).then(body => {
    try {
      return Promise.resolve(JSON.parse(body));
    } catch (e) {
      return Promise.reject(e);
    }
  }).then(parsed => {
    if (parsed.status == undefined) {
      return Promise.reject("no status");
    } else if (parsed.status == 500) {
      return Promise.reject("server failure");
    } else {
      return Promise.resolve(parsed);
    }
    // what would you do if you decided you wanted to reference `response` here somewhere?  
  });

This is definitely an improvement over a deeply nested callback style from 10 years ago, but it does sacrifice some flexibility. You can't refer to previously handled promises because the callbacks are shallow. You can do that with do-notation, and that's what makes the concept of monads stick in Haskell.

Once you have that (and in Haskell, it's amazingly implemented as an overloadable piece of syntax), the doors are wide open. Example: at my work, we use free monads to get a similar effect as dependency injection in other languages (clean separation of concerns, easy mocking, etc) but with two huge benefits: 1. Any function is limited to using only the dependencies/effects you allow it to; it won't type check if you try to sneak something in, and 2. Because of the overloadable syntax, we use it to collect timing info on all of our operations that are doing IO in a way where we get a flame graph style hierarchy of effects _for free_. We never have to specify when to start and stop the timers; the syntax knows when to. It's pretty powerful.


Aside, don't you find the performance costs of Free too great? Even with Codensity optimization, I hit painful performance limitations on some work.


oh ok, that's a great example - many times I've had to kludge my way through doing just that with promises. that's very exciting :-)


It sounds like you are literally offering to give a monad tutorial, which is brave because, notoriously, so many people have attempted that, usually using metaphors.

Wadler's misleadingly titled earlier paper "The Essence of Functional Programming" is actually more elementary than the one linked to, and gets to the core of the issue:

"2.1 What is a monad?

For our purposes, a monad is a triple (M,unitM,bindM) consisting of a type constructor M and a pair of polymorphic functions.

unitM :: a -> M a

bindM :: M a -> (a -> M b) -> M b"

If you can read this code and understand what a type constructor is, then that is the most general, least metaphorical way to understand monads in FP.
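For a concrete reading of Wadler's triple, Maybe instantiates it directly (a sketch, using the paper's names):

```haskell
-- Maybe as the triple (M, unitM, bindM):
unitM :: a -> Maybe a
unitM = Just

bindM :: Maybe a -> (a -> Maybe b) -> Maybe b
bindM Nothing  _ = Nothing
bindM (Just a) k = k a

main :: IO ()
main = print (unitM 3 `bindM` (\x -> unitM (x + 1)))  -- Just 4
```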


I think maybe I am, but I have some tricks up my sleeve.


I once saw a paper proposing a generic system connector type for composing systems-of-systems (it was much more sophisticated than just I/O) - I can no longer find the paper, do you know it or what is the state of the art here?


Based on CMCDragonkai's comment, I'm guessing the one you're looking for is below. I thank both of you since it was an interesting concept and paper. Hopefully, the one you were looking for.

http://categoricaldata.net/operadics/OperadicSoS.pdf


I tried to understand operads before. But I got sidetracked by graphical linear algebra: https://graphicallinearalgebra.net/


I'm afraid I don't know, sorry! Sounds awesome.

That said, similar concepts in a world where Monads exist include: Free monads, Freer monads, Final tagless encoding.


Operads


My understanding of monads, based on the examples in this paper, is that a monad essentially takes a series of computational steps and augments them with a secondary piece of data (a value to be printed, an error, etc).

So my question is this: in languages which use monads (like Haskell), is this how monads are actually implemented? Does, say, printing values in Haskell actually involve keeping track of a lazy stream of output text, or does GHC just print out values the way an imperative language would, and use the concept of monads as a way to satisfy the type-system requirements?


It's my understanding that the IO monad is largely a type system thing. The IO monad doesn't "do" anything -- a call to the print function actually does write to a file descriptor right there and then.

In a lazy, pure language like Haskell, there's the problem that since a print function doesn't actually compute anything, it doesn't return anything that makes sense functionally; it's not a function in the mathematical sense. In Haskell, a function is only evaluated when it needs to be (i.e. potentially out of order), and functions called with the same arguments can be memoized, and functions that don't return anything can conceptually be eliminated entirely. So the challenge is to somehow tag the print function as being unmemoizable.

To do this, as far as I recall (it's been a while), the GHC's implementation of IO says that the monad wraps a value of a type "RealWorld". Every time you bind/return an IO function, you are sending the current RealWorld and then getting a new RealWorld back. This is hidden from the caller since it's a detail of the monad that you don't need to see. As far as I know, the world value is just a dummy value that is optimized away and doesn't exist at run time. But by forcing every IO to depend on the previous result, the "computations" will be chained in the right order.

Edit: Haskell wiki has more: https://wiki.haskell.org/IO_inside


I think you're right in that ghc's actual implementation is something like that, but it is a bit lacking in describing the semantics. To see why, imagine functions:

getLine :: RealWorld -> (String, RealWorld)

putLine :: RealWorld -> String -> RealWorld

And some code like:

    let (line, w1) = getLine w0
        w2 = putLine w1 line
        w3 = putLine w2 "other stuff"
        w4 = putLine w3 line
    ...
Now, imagine an optimizing compiler for some reason decides it would be better to recompute `line`, rather than hang on to the value in the interim. It would be well within its rights to do so, because referential transparency guarantees that this is safe.

IIRC ghc essentially "plays dumb" and just never duplicates a computation (which is not typically a desirable optimization at this level, though register allocators will sometimes do it). But it's arguably a hack.

A better metaphor is to think of values of type IO a as fragments of a script in some other (imperative) language. The Haskell program merely computes the script, and the runtime then takes care of actually executing it. In this sense, effects in Haskell are not "side" -- they are first class citizens. What order the fragments of the imperative program are computed in (or how many times) is independent of the text of the final program.

In very early versions of Haskell, main had a type roughly like [Response] -> [Request]; you computed a list of things for the runtime to do, as a function of the results of its (hopefully) previous commands. This kinda works, but it's easy to deadlock by creating a circular data dependency. What the Monad structure does for you is avoid that problem, by making it impossible to depend on a future value.

It's also worth pointing out that all of this is specific to IO; understanding monads in general is neither necessary nor sufficient to understand Haskell's IO.


Thanks to all for the answers!

I want to make sure that I'm understanding this bit correctly:

>A better metaphor is to think of values of type IO a as fragments of a script in some other (imperative) language. The Haskell program merely computes the script, and the runtime then takes care of actually executing it. In this sense, effects in Haskell are not "side" -- they are first class citizens. What order the fragments of the imperative program are computed in (or how many times) is independent of the text of the final program.

In Wadler's paper, the monadic output example looks like this:

  type M a = (Output, a)
  type Output = String

  unit :: a -> M a
  unit a = ("", a)

  (*) :: M a -> (a -> M b) -> M b
  m * k = let (x, a) = m in
          let (y, b) = k a in
          (x ++ y, b)

  out :: Output -> M ()
  out x = (x, ())
where each function, instead of returning a simple numeric value, returns a tuple of a string (the output to be printed) and a number (the value to be returned). If you wrote this function out in Haskell, you would have access to the output text in the form of a lazy list of strings.

If we can think of IO values as fragments of an imperative script that are handled at runtime, then that makes me think that the Haskell compiler doesn't use the strategy presented in Wadler's paper (in the case of printing output, returning a lazy list of strings to be printed), and instead just performs the specified side effects directly, knowing that the semantics of the IO monad guarantee that things will be sequenced correctly. Do I have this right?


More or less, yeah. Most of the language-aware optimization in ghc happens in an intermediate form that still carries the types, and then ultimately it spits out code that just has side effects as a result of evaluation like every other language.

The code you sketch out above is basically the Writer Monad, which exists in the libraries, and does see use, but it's not IO -- as I mentioned above the semantics of IO don't entirely just fall out of the monadic structure.
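For reference, here is a hand-rolled version of that Writer monad, matching Wadler's (Output, a) pairing (this is a sketch, not the library version in Control.Monad.Writer):

```haskell
newtype Writer a = Writer { runWriter :: (a, String) }

instance Functor Writer where
  fmap f (Writer (a, w)) = Writer (f a, w)

instance Applicative Writer where
  pure a = Writer (a, "")
  Writer (f, w1) <*> Writer (a, w2) = Writer (f a, w1 ++ w2)

instance Monad Writer where
  -- bind concatenates the accumulated output, like Wadler's (*).
  Writer (a, w) >>= k = let Writer (b, w') = k a in Writer (b, w ++ w')

-- Wadler's out: emit a string, return ().
out :: String -> Writer ()
out s = Writer ((), s)

greet :: Writer ()
greet = do
  out "hello, "
  out "world"

main :: IO ()
main = print (snd (runWriter greet))  -- "hello, world"
```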

But I find the computed-imperative-program description lends itself to thinking about IO values as values in a way that thinking about them as type system goop to tame side effects doesn't; for example, the program:

    import Data.List (intersperse)
    import Control.Concurrent (threadDelay)

    main = sequence_ $ intersperse (threadDelay 1000000) (map print [1..])
will print out the natural numbers, one per second until you kill it. It works by constructing a (lazily evaluated) list of actions to perform, which alternately print out a number or wait for one second. sequence_ has type [IO a] -> IO (), and it just performs each action in the list in sequence (its type is actually a bit more general than this, but..).

This kind of code is really weird to think about from the "tamed side-effects" model, but very natural from the "computing a script" model.


Right, IO does not collect the results this way. putStr for strings is declared like so:

    putStr :: String -> IO ()
Here, () is the unit type, which is similar to null/nil in other languages. In other words, I/O functions like putStr etc. don't return anything. putStr internally calls lower-level functions that actually write to a file descriptor.

Some people get into Haskell thinking that functions can't have side effects because Haskell is "pure", but that's not right. Rather, as the parent commenter says, they can have all the side effects they want, and the consistency and purity of the program's execution is preserved through the use of monads.

Haskell is considered a pure functional language where everything is immutable, but that's not true, either. A lot of the GHC standard library uses mutation (e.g. with IORefs) for performance reasons. For example, putStr ends up writing to an internal byte buffer that is modified directly.


>My understanding of monads, based on the examples in this paper, is that a monad essentially takes a series of computational steps and augments then with a secondary piece of data (a value to be printed, an error, etc).

Not really.

A monad:

1) helps translate values from one domain to another.

2) helps handle the logic in a series of operations.

One use of that is to enrich data with the possibility of carrying some other information (e.g. the Optional/Maybe that can carry an error). But that's not essential to what a monad does, and monads can be used for all kinds of things.

The description of a monad as a "programmable semicolon" captures that a little better. Semicolons in that phrase are meant to refer to the semicolons in C or Java, that separate different expressions.

So, in a program like:

  do this; do that; do another thing; ...
Monads help pass values between the various steps, and help determine what happens with the sequence of operations (the program won't have actual semicolons between steps in Haskell, e.g. -- it's just how we notate a number of expressions in C-like style for example's sake).


Monads are regular code. They get evaluated lazily like regular code, because they are regular code.

It's really hard to explain it further.


How would you explain monads to an OCaml programmer?


I think most OCaml programmers probably understand the "what" already. They're functions holding to specific rules, dispatched on return types. That's pretty boring fare for OCaml folks.

Maybe the better question for Ocaml folks is "Why Monads?"

Would that be a more interesting response?


Yes, it would be, please fire away.

As an OCaml beginner my understanding is that I can do away with needing to understand monadic IO (yet) because OCaml is non-lazy and allows side-effects. I also like monadic parser combinators because they'll allow me to write parsers without the tedium of recursive descent, or being reliant upon parser generators. I have a rough understanding of monads as a purely algebraic abstraction (https://arcanesentiment.blogspot.in/2017/06/purely-algebraic...) and an even vaguer idea as a "representation of computations as values", based on this excellent thread: https://discuss.ocaml.org/t/can-monads-help-me-my-refactor-c...

But I don't grok it yet, which I'm sure I'll be able to once I find an occasion to use it. I suspect I'll hit that point sooner if I were using Haskell than OCaml though. I'm still collecting analogies (burritos, boxes, "smart pipes") and I want more. If nothing else, it is fun, and contrary to popular opinion, I don't think it distracts me from its essentially abstract nature.


If you want to grok monads as an OCaml programmer, there is a very simple solution: do some concurrent/system programming.

Both Lwt and Async are monadic, but they are presented without requiring you to know about monads, so you can get to use them pretty easily, and after you've used them to build reasonably sized programs, then you'll have a good intuition of what's happening.


Sorry I didn't get back right away, but I did answer this to someone else in a more generic context.

https://news.ycombinator.com/item?id=17008877

OCaml folks have an awesome async library that is monadic, and I think for OCaml specifically, and OCaml-specific value propositions, playing with that is better than any wall of text I can throw at you.

But: parameterizing on a generic monad is what really is the superpower they give you.


Since OCaml allows side-effects, they are not really necessary to do simple things like printing some output. The standard library is not monadic in nature.

However, there are some very popular monadic libraries (for example, for concurrency you have lwt and async). The main advantage is that you have explicit information over which calls are blocking (it's in the signature), and explicit control over when you relinquish control back to the coroutine/fiber/lightweight_thread/future/whatever_you_want_to_call_it scheduler.

Anyway, monads are hard to compose, so the shiny new thing here is called "algebraic effects". http://www.eff-lang.org/


> Anyway, monads are hard to compose,

Monads don't compose because monads shouldn't compose. Many things form monads that do not form proper effects. Perhaps the most obvious is the typical continuation monad. It is a good thing that this particular monad does not compose because if it did, you'd be in for a ton of hurt in understanding what happened when.
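For reference, the continuation monad mentioned above can be hand-rolled in a few lines (a sketch):

```haskell
-- The standard continuation monad: a computation that, given a
-- continuation (a -> r), produces the final result r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont cf <*> Cont ca = Cont $ \k -> cf (\f -> ca (k . f))

instance Monad (Cont r) where
  Cont c >>= g = Cont $ \k -> c (\a -> runCont (g a) k)

example :: Int
example = runCont (do { x <- pure 1; y <- pure 2; return (x + y) }) id

main :: IO ()
main = print example  -- 3
```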

Effect systems are good when all you want to do is write code that has effects. Monads are not really about effects, although they can be used to describe certain classes of them. Monads are nevertheless, strictly more powerful.


>Effect systems are good when all you want to do is write code that has effects. Monads are not really about effects, although they can be used to describe certain classes of them. Monads are nevertheless, strictly more powerful.

This is not true. Monads and effect systems are equally powerful.

More concretely: Effect systems are essentially delimited continuations. Monads and multi-shot delimited continuations are equally powerful, though some effect systems don't support invoking the continuation multiple times and so aren't multi-shot and therefore are less powerful than monads.

You can express any monad with delimited continuations, and vice versa. And using some monad in do-notation, and using the corresponding delimited continuation implementation in normal (non-Haskell) sequential code, will look identical.


> You can express any monad with delimited continuations, and vice versa

source please.


The best explanation of monads I have seen so far is this one:

http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...

Enjoy! :)


Saw this post a while back, it's absolutely adorable. As someone who forgets what an applicative is sometimes, I can vouch for the fact that it really does get the idea across.


Thank you for that!


I see a monad as just an interface, as in a set of generic functions with specific type signatures that you can overload for different types. The functions are return :: a -> m a, and (>>=) :: m a -> (a -> m b) -> m b. The type m you use should have some specific properties, but it's not like you can enforce those properties in Haskell.
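A sketch of that "interface" view, using a hand-rolled Maybe-like type (named Opt here to avoid clashing with the Prelude's):

```haskell
-- A hand-rolled Maybe; the Prelude's instance is equivalent.
data Opt a = None | Some a deriving (Eq, Show)

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some a) = Some (f a)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Opt where
  None   >>= _ = None   -- failure short-circuits the chain
  Some a >>= f = f a    -- success feeds the value onward

safeDiv :: Int -> Int -> Opt Int
safeDiv _ 0 = None
safeDiv a b = Some (a `div` b)

-- 100 / 5 / 2 == Some 10; a division by 0 anywhere yields None
chained :: Opt Int
chained = safeDiv 100 5 >>= \x -> safeDiv x 2
```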

It's nice for use in Haskell since it combines well with the do notation, allowing for an imperative programming style, primarily when you use the IO monad. The do notation basically converts into a sequence of bind and return operations, so that it's impossible to call print a second time on an IO value while still holding a reference to the same IO value you called print on the first time. This is needed because you can't make a copy of the IO value.
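A rough sketch of that conversion (GHC's actual desugaring also handles pattern-match failure):

```haskell
import Data.IORef

-- Written with do notation ...
withDo :: IO Int
withDo = do
  r <- newIORef (0 :: Int)
  writeIORef r 41
  modifyIORef r (+ 1)
  readIORef r

-- ... and the same thing as the chain of >>= and >> it desugars to.
withBind :: IO Int
withBind =
  newIORef (0 :: Int) >>= \r ->
  writeIORef r 41     >>
  modifyIORef r (+ 1) >>
  readIORef r
```

Both compute the same result; do notation is purely surface syntax over bind.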


"mitchty" on Lobste.rs told me I should just read the source material from Wadler instead of the various metaphors and tutorials. Since jxub posted it here, I figured I'd go ahead and try. I'll admit Wadler did a terrific job trying to explain it. I don't get it after one read from imperative background. It does seem a lot more clear, though, than many attempts to dumb the material down. He gives useful examples in both functional and imperative styles. We see why imperative makes some things easier, exactly why they're harder in functional style, and then how monad form solves that problem.

The monad itself is still weird. Having different examples, applications, and definitions is a good combo that might help it click if I think on it more, just going through the steps of his examples or similar programs. It might help to learn functional programming first, especially to get used to things like let/binding, which he draws on. Although he avoids reliance on category theory and Haskell, he still seems to expect people to know the basics of functional programming, given how he describes things. So, a good paper like this may reduce the problem down to (a) learn functional programming or, if time-constrained folks are lucky, (b) find a few tutorials on just the concepts he uses here, done in ways that imperative programmers can approach. Lambdas, binding, and typing seem to be the core operators on top of the functions and expressions we're already familiar with. I say typing since types work differently in various languages. Gotta understand how he's using types, along with their interactions with those examples and monads.


That's me here too! >.<

Yeah, the hidden subtext was my presumption that whoever took that suggestion was learning monads in Haskell and had at least a basic understanding of Haskell. Sorry if those papers threw you into the deep end. Monads were really brought into play by the effort to make a pure functional programming language useful. I found the more mathematical explanation a lot easier than any analogies involving, say, burritos or pipes, etc.

At first glance, to imperative programmers, all of this just looks like a bunch of navel gazing. I thought the same thing, but once you let go of thinking of computation as a series of steps and look at it from a bit higher level of abstraction, it makes a lot more sense.

For a more soft introduction to the concepts this channel 9 video is probably a bit better: https://www.youtube.com/watch?v=ZhuHCtR3xq8

Another option might be to just get the Haskell Programming from First Principles book: http://haskellbook.com

It does a good job of building you up to monads and gives copious amounts of reference material. Even though I already knew Haskell when I read the book, it helped clear up some gaps I didn't know I had.

But monads are going to be really weird if all you've ever seen is imperative programming, despite the fact that you already use them and have probably discovered them without knowing it. Much like learning stack-based programming languages, sometimes the only way up is to climb down the computational hill you're familiar with and up another, just to see things from the other side.


There's a lot of confusion around understanding monads, and I remember having trouble with them myself. Here are the two things that helped me the most: understanding kinds, and staring at the type signature of >>= for a long time. I'll attempt to explain both here:

Just as expressions have a type signature, a type expression has a kind signature. In its most basic form, a kind tells us how we can construct a type. We represent kinds using asterisks * and kind arrows ->. The asterisk is pronounced "type". The easiest way to understand kinds is to look at a bunch of examples of types and type constructors. Monomorphic types such as Int and Bool have kind * . Type constructors are handled differently. An example of a type constructor is [] (list), which has kind * -> * . So list is a type constructor that takes in a type (which we represent with an asterisk) and returns another type. Therefore [Int] has kind * , since we applied the type Int to the list type constructor [], resulting in the type [Int]. Type constructors can also, in some situations, be partially applied, just like value constructors. Kinds are right associative, so the kind * -> * -> * is the same as * -> ( * -> * ). I have a table of different kinds on my blog here: http://www.calebh.io/Type-Inference-by-Solving-Constraints/
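Partial application of type constructors is where kinds get concrete. A small sketch, using a hypothetical Pair type:

```haskell
-- Pair has kind * -> * -> *: it needs two types to produce a type.
data Pair a b = Pair a b deriving (Eq, Show)

-- Partially applying Pair to one type gives kind * -> *,
-- which is exactly the kind the Functor class expects.
instance Functor (Pair a) where
  fmap f (Pair a b) = Pair a (f b)

-- Pair String Int has kind *, so we can have values of it:
mapped :: Pair String Int
mapped = fmap (+ 1) (Pair "count" 41)   -- Pair "count" 42
```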

Now that we understand kinds, we are ready to understand monads. In Haskell, type classes are used to overload functions in a disciplined way. One such function is >>=, which is defined in the Monad type class. When we want to make a new Monad instance for a different type, we overload the >>= function. Since >>= is the most important function in a Monad definition, here is its signature:

(>>=) :: forall a b. m a -> (a -> m b) -> m b

How do we interpret this signature? Well, we can see that the >>= function takes two arguments, one of type "m a" and another of type "a -> m b". Remember the kinds from earlier? In this case, "m" is a type constructor of kind * -> * . So "m" could be the list type constructor, the Maybe type constructor, or really any other type constructor with this kind. It can even be a type constructor that we define ourselves. So what can an instance of the >>= function do with the first parameter? It can do anything it wants, as long as it follows some laws, which I'll talk about later. Notice, however, that bind also takes a second parameter of type "a -> m b", which is a function that takes a value of type "a" and returns a value of type "m b". So the >>= function might end up calling this "callback function" and using its result. It could even call this function multiple times if it wanted to. The point is that >>= can do anything, as long as it adheres to the type signature and follows the monad laws.
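The list monad is a handy illustration of >>= calling its callback more than once: for lists, xs >>= f is concatMap f xs, so f runs once per element and may itself return several results.

```haskell
-- Each n from the first list is paired with each c from the second;
-- the "callback" runs twice for the outer bind, and the results
-- are concatenated.
pairs :: [(Int, Char)]
pairs = [1, 2] >>= \n ->
        "ab"   >>= \c ->
        pure (n, c)
-- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```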

The monad laws are not as relevant when learning how to use monads, but I will cover them anyway. When you write your own instance of the Monad type class, you have to make sure your overloaded functions follow these laws:

Left identity: return a >>= f ≡ f a

Right identity: m >>= return ≡ m

Associativity: (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)

You can think of these laws as analogous to the laws for operations on numbers such as commutativity and associativity.
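The laws can be spot-checked for a concrete monad. A quick sketch with Maybe, where f and g are arbitrary example functions:

```haskell
f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = Just (x + 1)

m :: Maybe Int
m = Just 5

leftId, rightId, assoc :: Bool
leftId  = (return 5 >>= f) == f 5                         -- both Just 10
rightId = (m >>= return) == m                             -- both Just 5
assoc   = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))  -- both Just 11
```

This is only a check on particular values, of course; a real proof has to hold for all inputs.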



