Monads are a class of hard drugs (ix.io)
42 points by memorable on July 11, 2022 | 83 comments


It's amazing that the concept of "monad" is so simple, and yet the author still got the interface mixed up with one particular implementation (mutable state).

Monads are about sequencing. They are an interface to force applying functions in a particular order. This is why Haskell needs them -- in a language with only pure functions, there's no way to encode (for example) the order of 'print' statements directly, it has to be expressed in terms of combining functions.

Consider the interface, expressed in a C-like syntax:

  interface Monad {
    template<A>
    from(value: A) => Self<A>;

    template<A, B>
    sequence(self: Self<A>, f: fn(A) => Self<B>) => Self<B>;
  }
You can see that it forms a sort of one-way valve for functions from one value to another.

Types you could implement that interface for:

* Lists: `from` wraps the value in a size=1 list, and `sequence` becomes "apply the function to each value then concatenate the results".

* Optional: `from` is the case of a value existing, and `sequence` is "if value exists then apply the function, else return None".

* Sets: same as lists.
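
For what it's worth, here is a minimal Haskell sketch of the first two instances (the helper names are mine; the standard library spells `from` as `return`/`pure` and `sequence` as `>>=`):

  -- Maybe/Optional: if the value exists, apply the function, else stay None.
  pureMaybe :: a -> Maybe a
  pureMaybe x = Just x

  bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
  bindMaybe Nothing  _ = Nothing
  bindMaybe (Just x) f = f x

  -- Lists: apply the function to each value, then concatenate the results.
  pureList :: a -> [a]
  pureList x = [x]

  bindList :: [a] -> (a -> [b]) -> [b]
  bindList xs f = concat (map f xs)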

But it's hard to write a blog post about an interface with only two functions, especially if you've already committed a thousand words to making it seem complicated.


Monads are not about sequencing. They don't force applying functions in an order, at least any more than regular function application does. Haskell does not need them, they're just an interface and don't add anything to a language you can't do without them. The order of print statements is encoded the same way the order of elements in a list is, it's part of the definition of the data type. The monad instance says IO (IO a) is "the same as" an IO a, so really any sequencing ability implied in IO (IO a) already pre-exists in IO a.


  > Monads are not about sequencing. They don't force applying functions
  > in an order, at least any more than regular function application does.
Monads are about sequencing. This is easier to visualize if you de-sugar them:

  f = do
    x1 <- m 1
    x2 <- m 2
    return (x1 + x2)
is an imperative form of

  f = m 1 >>= (\x1 -> m 2 >>= (\x2 -> return (x1 + x2)))
You can see that there is no way to evaluate `x1` without first evaluating `m 1`, even if the result is a thunk.

  > Haskell does not need them, they're just an interface and don't
  > add anything to a language you can't do without them.
How would you recommend implementing do-notation without the Monad typeclass? Applicative functors aren't enough.

  > The monad instance says IO (IO a) is "the same as" an IO a, so really
  > any sequencing ability implied in IO (IO a) already pre-exists in IO a.
I don't see your point. That may be true of IO, but it's not true in general. `Maybe ()` and `Maybe (Maybe ())` are different types.


> You can see that there is no way to evaluate `x1` without first evaluating `m 1`, even if the result is a thunk.

Isn't this true of `f(g(x))` as well? You can't evaluate f(g(x)) without evaluating g(x) first, right?


Yep[0], and if you really wanted to, you could use that syntax to express the monadic interface:

  putStrLn :: TheWorld -> String -> (TheWorld, ())

  main :: TheWorld -> (TheWorld, ())
  main world = (part3, ()) where
    (part1, _) = putStrLn world "Hello"
    (part2, _) = putStrLn part1 ", "
    (part3, _) = putStrLn part2 "world!"
It would be a frustrating user experience, but it's a valid and equivalent encoding to Haskell's `IO` type.

[0] I'm ignoring lazy evaluation for now, but `f` could be implemented such that it returns before `g(x)` is evaluated. This is generally not relevant for purely functional code, but it can affect performance and exception-safety.


Lazy evaluation is exactly why Haskell has monads and Ocaml does not.

The whole point of Haskell was to see what happens if you try to build a language with the strong requirement of lazy evaluation.

If you remove it you also remove any need* for the IO monad

* you can obviously use an IO-like interface in a non-lazy language, but it is not necessary


Yeah, I was about to write this. Thinking that monads are only about sequencing works only for IO, effectively. Option and Either are about short-circuiting, and there are others that mess with execution state in hilarious ways (like the TARDIS monad and its time-travelling shenanigans). I tried "understanding" monads for years, but now I'm not really sure there is anything in particular to understand. It's super simple and super generalized at the same time, and it's really hard to explain why it is that way without going deep into scary mathy territory (while also preserving analogies that actually apply).


Sequencing here does not mean "building sequences" or "a monad is a sequence", but that the bind/>>= operator builds monadic values as a sequence of operations.

For example, with Lists the sequence is not the list itself but the sequence of flatMap[0]/bind/>>= operations necessary to build the desired list, starting from `[a]`.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
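
A tiny Haskell illustration (the values are made up):

  [1,2,3] >>= \x -> [x, x*10]
  -- => [1,10,2,20,3,30]; the "sequence" is the chain of bind steps, not the list itself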


It can catch newcomers off guard. People think that if they write -

  do
   a x
   b x
in Haskell, it means the same as a sequential program (a runs before b). But this is not necessarily the case; it does not imply an order. It will only do so if the statements depend on each other -

  do
   c <- a x
   b c


You're technically right: a doesn't necessarily run before b, but that's pretty irrelevant. The only case in which it matters which one runs first is if one of the two results in an exception, which is rare, especially in idiomatic Haskell code.

The effect described by (a x) certainly runs before the effect described by (b x).


Does it though? If I have the following -

  do
    f <- openFile "test.txt" ReadMode
    putStrLn "Hello World"
    contents <- hGetContents f
In this case the file is most likely not opened before Hello World is printed to the console. However, it would be opened before the contents are read. This is what I mean when I say the ordering is not top-down; it is dependency driven. This really tripped me up at first.


I am unable to reproduce your claim. My exact code is:

    import System.IO
    main = do
      file <- openFile "hello.txt" ReadMode
      putStrLn "opened hello.txt"
      contents <- hGetContents file
      print contents
I ran it with:

    ghc main.hs
    strace -fo strace.log ./main
The strace log confirms the file is opened before putStrLn writes its output:

    ...
    96748 openat(AT_FDCWD, "hello.txt", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 3
    ...
    96748 write(1, "opened hello.txt\n", 17) = 17
    96748 read(3, "this is a file\n", 8192) = 15
    96748 read(3, "", 8192)                 = 0
    96748 close(3)                          = 0
    ...
    96748 write(1, "\"this is a file\\n\"\n", 19) = 19
    ...
Do you have an example that demonstrates IO not happening in the order of the monad? I really don't think what you say can possibly be right, but I'd like to see it if it is.


https://wiki.haskell.org/Do_notation_considered_harmful

This article discusses it further, down around where it says "Newcomers might think that the order of statements determines the order of execution." and then proceeds to give examples further down.

I have experienced this in the wild. I was working with a lot of data/csv parsing code along with some heavy computer vision work in haskell (3d reconstruction from point clouds using OpenGL and Repa, and custom algos) about a decade ago. It's been a while but I remember this happening quite well.


That's not what they mean there. You're confusing the evaluation of an expression to its value, with the execution of that value when the value is an IO a. In x >> y, you can evaluate x and y in whatever order, but the program x >> y denotes always runs the program x denotes before the program y denotes. Similar to how in [x,y], x always comes before y in the list order, even if you evaluate y first.


Evaluation order is somewhat "dependency driven" in Haskell for pure functions, because it uses lazy evaluation.

But in this case the code here is basically generating a list of "instructions" for the Haskell runtime, like saying "first open the file, then say hello, then read the file".

The generation of the instructions may be in an undefined order, but that doesn't matter, because writing the instructions doesn't in itself produce any side effects.

The resulting list of instructions ends up in the order you wrote them, and so the actual IO effects are executed in that order.
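
To make that intuition concrete, here is a toy sketch; the `Program` type and `run` interpreter below are invented for illustration and are emphatically not how GHC actually implements IO:

  data Program
    = Done
    | PutLine String Program
    | ReadWholeFile FilePath (String -> Program)

  -- The order of effects is fixed by the structure of the value itself,
  -- regardless of how lazily that value gets built.
  example :: Program
  example =
    PutLine "Hello World" $
    ReadWholeFile "test.txt" $ \contents ->
    PutLine contents Done

  -- A strict interpreter walks the "instructions" front to back.
  run :: Program -> IO ()
  run Done                   = pure ()
  run (PutLine s next)       = putStrLn s >> run next
  run (ReadWholeFile p next) = readFile p >>= run . next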


You know that sentence "A monad is a monoid in the category of endofunctors, what's the problem"? Being a "monoid" is exactly why monads are very much about sequencing.


> The essence of monads is to use abstract types to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion

When I was reading about this stuff, I was not lucky enough to find such a clear high-level description of what the purpose of a monad is. I got it eventually, but I wish I had encountered this concise description earlier.


It's also wrong. Lists are monads, and whether the implementation you use is mutable or not is not relevant to how you use them.

The essence of monads is not about what use can be made of them either, just like the essence of things that have a hook is not that you can use them to hang your coat.

Simply put, monads are data types that share a set of properties. Think of them as you would think of the atoms that are halogens, or of the animals that are mammal. In the case of monads, the property is that functions can be chained on them in a specific way, and that the result of this chaining respects some rules.

The reason it is so hard to explain or understand monads is that this property is used in many different ways for many different ends, and that the what and the why are often mixed together in the explanation.


> Lists are monads

Isn't it just that you can define monadic interfaces for lists, as you can with (probably all) parametric data types?

Saying "the list monad" wouldn't be correct, right? Although it's pretty common to see that phrase.


Monad is just an interface; all "monads" are things that declare some function that conforms to the interface.

You can say "the list monad" reasonably because there is only one sensible implementation of (List a -> (a -> List b) -> List b) in Haskell. You could unconditionally return an empty List b and fulfill that interface, but what would be the point of that? You can't just return a list of all the things the function call resulted in because that would be a List of a List of b, which is not the same as a List of b. You can't sort them in Haskell because you don't have any form of comparability available in that interface specification. etc.

Technically, in other languages this may be less true: with fewer restrictions you can do more things (for example, you may be in a language that defines ordering on all values, and thus could sort the result regardless of the types involved), but there's still really only one sensible and unsurprising implementation.
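
For comparison, a sketch of the sensible implementation next to a degenerate one that still type-checks (the real instance is effectively `concatMap` and lives in the Prelude):

  -- The obvious implementation: apply f to every element and concatenate.
  bindList :: [a] -> (a -> [b]) -> [b]
  bindList xs f = concatMap f xs

  -- Also type-checks, but is useless and breaks the monad laws.
  bindBogus :: [a] -> (a -> [b]) -> [b]
  bindBogus _ _ = []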


> You can say "the list monad" reasonably because there is only one sensible implementation of (List a -> (a -> List b) -> List b) in Haskell.

While this may be true, I think the main reason you hear "the list monad" in Haskell is that Haskell doesn't support multiple instances of a type class (such as Monad) for a single data type (such as List/[]); and the Prelude already includes such an instance, which then becomes the List Monad.

This limitation is perhaps not often relevant for Monads, but it is sometimes relevant for other Algebraic structures. For example, there are two very common Monoids on Number: (Number, +, 0) and (Number, *, 1).
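
This is why base ships newtype wrappers such as Sum and Product in Data.Monoid, so that both Monoid instances can coexist; a small example:

  import Data.Monoid (Sum(..), Product(..))

  sumOfThree, productOfThree :: Int
  sumOfThree     = getSum     (Sum 2     <> Sum 3     <> Sum 4)      -- 9
  productOfThree = getProduct (Product 2 <> Product 3 <> Product 4)  -- 24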


> You can't sort them in Haskell because you don't have any form of comparability available in that interface specification. etc.

Technically, if you were a perverse kind of developer, you could reverse the list.


You can define monadic interfaces for lists because lists are monads.

It would therefore be correct (but slightly pedantic) to talk about "the list monad".

> as you can with (probably all) parametric data types?

Actually, you can't. For instance, if you define a predicate type:

`data Predicate a = Predicate (a -> Bool)`

It is not a monad, it's not even a functor. (If you're interested in what it is you can look into contravariant functors).
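
A quick sketch of why (roughly what Data.Functor.Contravariant captures with `contramap`): you can only pre-compose on the input side of the predicate.

  data Predicate a = Predicate (a -> Bool)

  -- A Functor instance would need fmap :: (a -> b) -> Predicate a -> Predicate b.
  -- Given a predicate (a -> Bool) and a function (a -> b), there is no way to
  -- produce a (b -> Bool), so no lawful fmap exists.

  -- Mapping "backwards" over the input works fine, though:
  contramapPredicate :: (b -> a) -> Predicate a -> Predicate b
  contramapPredicate f (Predicate p) = Predicate (p . f)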


Lists are absolutely not monads. There are implementations of interfaces on lists using monads, but a monad is a specifically defined term that requires a lifting function and monadic combinators, which are simply not part of the definition of the list.

The understanding of the monad as a wrapper with a combinator equivalent to function composition is simply correct.


I don't like being adversarial but this is wrong. https://www.schoolofhaskell.com/school/starting-with-haskell...

A list is absolutely a monad.


The Haskell list is implemented as a monad. A list in general isn't a monad. You can use Monads to implement lists, but that doesn't make them Monads any more than saying that lists are generic container classes.


Do you have an example of a list, which meets the mathematical definition, which would not be able to satisfy the requirements of a monad? Meaning one could not write a bind or lift operation for that list?


No, you deeply misunderstand what a monad is. Monad doesn't mean you can write the bind and lift operations; a monad contains the bind and lift operations. The bind and lift operations are components of the monad. You could probably shoehorn some kind of bind and lift into any conceivable generic type.


> You could probably shoehorn some kind of bind and lift into any conceivable generic type.

You actually can't implement Monad for most contravariant functors.


> There are implementations of interfaces on lists using monads

Monad is an interface, just like List. You don't implement a list using a monad, any more than you implement a list using a set or a tree.


You could absolutely implement a list using a set or a tree if you wanted. In the same way that we say that you would use generics to implement a list, you would be using monads to implement a list. It's really more of a design pattern.


I suspect that it is only after you ‘got it’ that you can find this description nicer than anything else that you can find in the swath of monad tutorials.


Unfortunately, if this is your understanding of monad, that is itself proof that you did not "get it". It is neither required for mutable state, nor required for mutable state in Haskell, nor are all monad implementations about mutability. That is, that understanding is neither a subset of understanding monads, nor is a proper understanding of monads a subset of thinking of them as being about mutability. It's just wrong, unfortunately.

I wrote about this extensively in http://www.jerf.org/iri/post/2958, "Functors and Monads For People Who Have Read Too Many 'Tutorials'". If the original post is from 2008, then it is the sort of 'tutorial' you may have read too many of.


Sure, the idea that they have to be mutable in practice is incorrect, but Haskell has "do notation", which makes Monads act as if they always introduced mutability, and in the spirit of a simplification it is correct in essence.

The understanding that Monads act as a wrapper and define lifting and composition of wrapped types is correct in essence, and that's the meat of it. Define it a bit more rigorously and it's literally mathematically equivalent to saying "monoid over the category of endofunctors".


No. Do notation has nothing to do with mutability. Do notation is a convenient syntax for monads period.

You can use it with the list monad for example and not IO at all.


Do notation gives the syntax and almost the semantics of mutability. I never said it actually means they are mutable, just that it is practically as if they were. Which is what the author says about Monads, they mean that practically it's as if they were mutable.

And yes, that goes with the list monad too when using do notation. It's syntactically and in practice almost as if you were mutating a new list and then returning it.


I don't agree. Maybe let's use a different example to illustrate more clearly - the maybe monad. Let's pretend you have a series of functions a, b, and c which take a value of type X and return a value of type Maybe X. With do notation you can chain the functions like this -

  do
    r1 <- a 1
    r2 <- b r1
    r3 <- c r2
    return r3
This is about composing functions. It has nothing to do with mutability. It is just a convenient way to write the composition rather than using nested bind operators.


We get it, we do. Still, even in your example r1 etc. do in fact look "as if they were mutable".


If you were mutating a variable then you'd probably be using the same variable, but in this case you cannot do that. Not trying to be argumentative, just saying that when you have to store the result in a completely different variable, I don't see how that could be considered to look like mutation.


Nobody here is saying that they are actually mutable. I am saying that it looks as if you are mutating a new variable and then returning it. It looks like mutation because you are "do-ing" stuff. The mutation that is almost happening is in the monadic state.


It's a rather incomplete description of monads, since besides "mutable state", monads also cover optional state (Maybe), multiple states (List), dependent state (Reader), exceptional state, etcetera.


I disagree with the description, though; a monad is effectively the hot cell of functional programming.

You aren't allowed to touch the contents of the hot cell; instead you must send a robot arm into the hot cell. What is surprising, though, is that the robot constructs another hot cell inside the hot cell and then merges it with the outer hot cell.

The problem with monad is that it is a typeclass with not much meat on it, it would be more appropriate to talk about monad instances.


now that you have understood it, it is your duty to write yet another monad tutorial


> you can't duplicate the state but have to use it in a linear, non-branching fashion

So you're saying bitcoin should have been made out of monads?


> The essence of monads is to use abstract types to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion

This is a good description of one particular monad, the ST monad: https://wiki.haskell.org/Monad/ST


The ST monad is a good example, but that's the basic pattern of the Monad. Lifting a base type into a monadic type and composing monadic functions.


I may not have been harsh enough.

The ST monad (enclosing mutable state to use it in a linear, non-branching fashion) is indeed a good example of a monad.

Calling that the essence of monads is as accurate and helpful as calling it a burrito [1].

Future downvoters may want to try to reconcile any of the other common monads (Maybe, Either, List, State, Parser, Reader, Writer) against this enclosed/mutable/linear/non-branching definition and see if the essence fits.

[1] https://kjaer.io/a-monad-is-not-a-burrito/


I have considered all of those Monads and more when assessing that definition. The Maybe monad is in essence a Boolean wrapped around the base type (for which a value may be null but is part of the monoidal type nonetheless). The Either monad is another Boolean wrapped around the union of two types, which is itself a type. In all cases an operation on an object of the monoidal type will mutate the wrapper type but not the wrapped type which always stays the same. The List monad is even simpler: it's a linked list, so the wrapper contains another successor list, which may be empty but of the same type, and a value of the current type, with mapping being an operation that maps functions in the base type into functions on the monoidal type. The State monad is obvious. The Parser monad typically wraps around some form of string which is partially parsed and then contains an element (or a list thereof) of the parsed type, such that operations on the string type yield a partial parse. The Reader and Writer Monads are obvious.

This type of intuition always works on Monads. If you want to learn more about it, I'd recommend learning about the relationship between the Kleisli triple and the monad. In reality this way of understanding them is just an intuition for how the Kleisli triple works and how it is equivalent to the traditional representation of the monad.


Then let me be even more explicit - I claim that this is correct:

  "The essence of *the ST monad* is to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."
I claim that the following 4 are incorrect:

  "The essence of *the Maybe monad* is to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."

  "The essence of *the List monad* is to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."

  "The essence of *the Either monad* is to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."

  "The essence of *the Parser monad* is to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."
> In all cases an operation on an object of the monoidal type will mutate the wrapper type but not the wrapped type which always stays the same.

I know of no type mutation in Haskell.

> If you want to learn more about it I'd recommend to learn about the relationship between the Kleisli Triple and the monad.

The original author has made an overfitting error, taking a specific learning about ST and incorrectly generalising it to all monads. If I were to relate it to Kleisli triples, wouldn't I be overfitting even worse?

  "The essence of *the Kleisli triple* is to use abstract types to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion."


> I claim that the following 4 are incorrect:

All four are correct. That is what they do in essence. What the author is describing is the intuition of the Kleisli triple. The only thing that is questionable at all is the mutable part, but that can be excused because later the author goes back on it and clarifies that it's merely an emulation of mutability.

> I know of no type mutation in Haskell.

"Monad m" is type mutation. You are making the type m into a monadic type "Monad m" that still exposes all of the characteristics of the type m.

> The original author has made an overfitting error, taking a specific learning about ST and incorrectly generalising it to all monads. If I were to relate it to Kleisli triples, wouldn't I be overfitting even worse?

Kleisli Triples and Monads are strictly equivalent. A Kleisli Triple is just a different way of defining a monad. So it can't ever be "worse" than on monads because they are strictly equivalent.

Now as to the quote, the only real problem is "mutable" if you take it too literally. Enclosing a state around the element is what the unit function does, the wrapper itself is what the monadic type provides, and the sequencing of functions in a linear non-branching fashion to work both on the base type and on the state is what the join combinator does. Except for the whole mutability kerfuffle it seems like a fairly straightforward intuitive way to put the Kleisli Triple.
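
In Haskell terms the two presentations really are interchangeable; a quick sketch using standard definitions (a `join` defined this way also lives in Control.Monad):

  -- unit/return wraps a value; join flattens one layer of wrapping.
  join' :: Monad m => m (m a) -> m a
  join' mma = mma >>= id

  -- Bind is recoverable from fmap + join (the Kleisli-triple view),
  -- just as join is recoverable from bind above.
  bind' :: Monad m => m a -> (a -> m b) -> m b
  bind' m f = join' (fmap f m)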


Any discussion about what a monad is rapidly degenerates into an endless sequence of

>it's simple, monads are like X

>you are wrong, it's actually simple, monads are like Y

>you are wrong, it's actually simple, monads are like Z

&c


This blogpost and a lot of comments in this thread (but absolutely not all) don’t seem to realize that monads are an abstraction over lots of different, but common programming constructs.

- Read-only values (e.g. environment variable, configuration)

- Write-only functions (e.g. logging, compiler output)

- State (read-write-values)

- Sum-types (e.g. Optionals/Maybe, Either/result)

- Inductive lists

- Arrays/vectors

- Contexts (e.g IO, file handlers, database cursors, etc)

- Functions

- Sequenced operations

- Continuations

- Parallelization

- Concurrency

- Interpreters

- Compilers

- Combinations of all of the above (plus more)

- And whole programs

- (and a whole lot more)

All of these can be built as monads, since it’s an extremely general abstraction. But when you try to attach the specifics of any of these constructs to your explanation/view/understanding of monads you will inevitably run into issues/contradictions with other things that are also monads.

At its core, monads are just an interface with two (or three) functions:

- unit/pure/return, which puts a value into a monad

- bind (or map and join) which combines a value in a monad with a function from that value into the same monad

(Plus some algebraic laws about how these functions behave when combined.)
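
In Haskell the whole interface fits on a few lines (simplified here; the real class in base has an Applicative superclass), with the laws usually stated informally as in the comments:

  class Monad' m where
    pure'  :: a -> m a
    (>>>=) :: m a -> (a -> m b) -> m b

  -- The laws, informally:
  --   pure' a >>>= f      behaves the same as  f a                        (left identity)
  --   m >>>= pure'        behaves the same as  m                          (right identity)
  --   (m >>>= f) >>>= g   behaves the same as  m >>>= (\x -> f x >>>= g)  (associativity)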

However, the technical simplicity of this interface does not make it easy to understand how and why it’s a useful abstraction over so many programming constructs.

IMHO, the best way to get a grasp of this is to actually use and play around with a bunch of these constructs via these monadic functions.

(I am also thoroughly convinced that you can ignore Category Theory if you are only interested in monads as they are used in programming.)


So monads are from functional programming, right? 10 years or so ago they were all the hype on here. I tried to follow so many Youtube (and written) tutorials on Monads, but I never understood.

Maybe I should try again now that I'm a bit older and more experienced...


I usually tell people that monads are an abstraction which encourages writing code with fewer branches.

Consider the following code where you would like to multiply a number by two, but this number is potentially undefined:

    var x = ... // a number or null
    if (x == null) return null
    else return x * 2
Now, imagine if x were a list containing zero or one numbers (if the number is undefined, the list is empty). In this case, you could refactor the code as follows:

    var xs = ... // a list of zero or one numbers
    return xs.map(x => x * 2)
The advantage of the second snippet is that there are no branches, thus making it easier to reason about the code.

Thus, in order to understand 80% of their utility in 20% of the time, I can recommend that you think of a monad as a wrapper around a value which allows you to treat special cases uniformly. In this case, a list of one or zero items. Of course there is more to it, but this is usually enough to get people to stop worrying and get interested in exploring more.


The reason why most programmers struggle to understand Monads is that they are a tool for fixing several problems that don't exist in imperative languages. In functional languages, the order of operations does not matter, because the operations do not have side effects. Monads hack around this by forcing functions to be applied in a specific order and allowing them to store the side effects in the type.

For this same reason, my favorite explanation of monads is actually https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

Why? Because it does not explain monads at all. In fact, it does not even have the word "monad" in it. But the whole mess about "function coloring" brought up to describe async code is basically the same thing as the "mixing crack and heroin" problem in the original post here - at least in terms of developer experience. JavaScript has two built-in, uncustomizable monads: imperative and async; with the async monad being a superset of imperative but being harder to call and manage.

Rust has at least four: normal imperative code, Option, Result, and Future. We can't really talk about Monad<T> as a type in Rust because its trait system lacks higher-kindedness; so no monad transformers or colorless functions. But they all have combinators that let you sequence functions and store state away in the type. Option/Result let us hack exceptions into a language that does not have exceptions[0], and Future lets us hack user-mode/green threads into a language that is committed to OS stacks.

All the monad tutorials fail because they try to walk you through the underlying math that makes monads work in a functional language. This is wrong; nobody needs to understand the category theory behind the curtain even if it seems super-cool. It's just a hack to force ordering and state into a language that does not have it.

[0] panic!() does not count as an exception mechanism, even though it can be implemented with stack unwinding. There are panics that don't unwind (stack overflow) and you can compile programs that terminate-on-panic instead of unwinding.


Fun fact though, you can actually implement a Monad type in C++. There's not really any reason to do so, but you can.


Try a tutorial written in a language you understand well that does not routinely feature monads.

IME the biggest hurdle is convincing yourself you've actually understood them, because they seem so mundane that it's weird they even have a name.


That's my experience too. I have a perfectly good understanding of the monad interface and some of its more common use cases, but it never felt like I had a magical moment of clarity regarding their nature, so I always doubted I really understood them.


I think a prerequisite is to have some experience with functional programming (F#, OCaml, Scheme...) and then realise that a simple option type implements a monadic interface (bind + return). From there, you can basically try to write functions which return an optional value using bind/return rather than pattern matching on Some/None.
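
In Haskell syntax (the OCaml version with an option bind is analogous; the `people` table and functions here are made up for illustration):

  import Text.Read (readMaybe)

  people :: [(String, String)]
  people = [("ada", "36"), ("bob", "oops")]

  -- Pattern matching on Just/Nothing (Some/None):
  ageOf :: String -> Maybe Int
  ageOf name = case lookup name people of
    Nothing -> Nothing
    Just s  -> readMaybe s

  -- The same function written with bind:
  ageOf' :: String -> Maybe Int
  ageOf' name = lookup name people >>= readMaybe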

From there, it comes with practice and you can see how bind/return work for other things (notably concurrency). There's no need for any theory to use monads effectively.

To me, the problem with Haskell is that you need monads from the beginning, whereas in OCaml, you can start using them once you're familiar with basic ML.


I'm not an expert, but this was the best resource I've found to break it down for me:

https://adit.io/posts/2013-04-17-functors,_applicatives,_and...


Knowledge about monads is more prevalent in some FP circles, but it's not common in all FP circles, and it's not unique to FP.

To give you an analogy, tensors are maybe more common in Python than in other languages because of the data science happening in that community, but that doesn't make tensors a Python thing, and Django folks probably don't care about them.

An example of monads being used in other languages would be all the stream libraries having a flatmap operation somewhere. They don't call it "monad" so they can't brag about it on Twitter, but they're still using it.


Don't learn it from HN (clueless)


> So monads are from functional programming, right?

Monads are from category theory, a branch of math.

Category theory is the study of generic, simple patterns that work in many different contexts. The insights from recognising those patterns and how they fit together helps to find common ground between different areas of math that initially seem distinct. It's like a kind of OOP and pattern language, but for math instead of programming.

Some patterns got names like Monad and Monoid because they were seen to crop up often, and useful insights could be gleaned from recognising the same pattern in different contexts.

Then someone doing functional programming (FP) realised that the Monad pattern from category theory could be used as a good generic structure in FP to encapsulate many different patterns that were already being used in FP another way.

Things like IO, state, efficient arrays, controlled parallelism, even kernel-like scheduling and user interaction, were already being done in pure FP before the introduction of monads. And before Haskell, which itself is quite old (~32 years). So FP didn't need monads. You just threaded all your functions together "by hand", using things like continuation-passing style or accumulators. There are several ways to do it, and it was possible to write large and sophisticated programs this way.

One day, someone realised that much of this threading together of things like IO, state, etc. could be done with a generic pattern called Monad, with a signature (like a generic or interface in OOP) equivalent to the rules of Monad in category theory. They also developed the idea of monad comprehension syntax, like a list comprehension but generalised to any monad.

Haskell was fairly new then, but it already worked fine without monads. These new-fangled FP monads and associated syntax so excited people in the Haskell community that they added them to the language, and redid the standard library around the concept. (Like the way Python initially didn't have list comprehensions, or async, but after those were added to the core language people started to use them a lot.)

Nowadays, monads are seen as a functional programming thing, but really there are many FP languages that don't use monads, and monads aren't just for FP. They aren't necessary, but they are a nice pattern that helps write clearer, longer programs that carry things like state and side effects in a pure-functional world.

The reason there are so many different kinds of Monad in Haskell, not just IO, comes from the same reason they were identified in category theory: They name a basic pattern that occurs naturally in many different contexts where it's not initially obvious. Once named, you can combine them. Unfortunately this leads to some confusing tutorials in Haskell, because Monad is a generic (in the OOP sense) abstraction for many kinds of "threading things together", which covers a lot of things you may write that seem very different from each other.


> So monads are from functional programming, right?

kinda.

they are needed where pure functional and lazy evaluation converge

in a pure, lazy language, you compute things based on dataflow

but what is the dataflow of a keystroke from the keyboard? you need an abstraction to get stuff like this into a pure, lazy programming environment

that's all monads are...abstractions to solve a problem


This is pretty dated (2008). I am not a hard-core functionalist. I get to work with the crap show of "multi-paradigm" languages, enjoying the occasional list comprehension. I've never experienced a monad.

I am curious how many of the side comments made in the paper are still relevant. The quips about slowness, etc. Are these things that have dramatically shifted in 14 years, or only incrementally improved?


[2008]


https plz thx


The term 'hard drug' is meaningless and has no real definition in the field of pharmacology. We need to stop using it.


It's a real law enforcement term, even though it has no pharmacological value. Hard drugs are just controlled substances that come with harsher punishments.

There are also some incorrect definitions being peddled by law enforcement, such as a higher chance of addiction; however, hard drugs like LSD clearly contradict that.

As far as this article goes, I read it and I'm not sure what part of the hard drug meaning was meant to be communicated here.


There are people serving life in prison for growing and trafficking cannabis; the way the terms are defined has zero correlation with reality and therefore doesn't convey any useful information.


It's perfectly meaningful, as the meaning is clear to the reader.


"Hard drugs" implies "bad", but the author appears to be complaining that code eventually gets too complicated and confusing to deal with, and that monads only help in the early stages and not forever. It seems to me that this is really about technical debt, but was given a different name and the author is brushing over that this isn't limited to monads even though they start by discussing pre-monad code?

If the author had written "Monads are a flavour of Pringles" (once you pop, you can't stop), it would have been marginally less bad an analogy.


Great, so are we using phrases and words that people define wildly differently to describe specific things? That's a terrible way to go about communicating ideas.

As a pharmacologist, the phrase is meaningless to me. There is literally no distinction between 'hard' and 'soft' drugs.


Is it really meaningless? Despite how fuzzy the exact boundaries might be defined, I think it would be pretty safe to assume weed would be in the 'soft' category, and crack in the 'hard' category.


It absolutely is meaningless. Why would I assume that cannabinoids are 'soft' and cocaine base is 'hard'? Apart from the fact that the word 'hard' is literal slang for crack while 'soft' is slang for powder cocaine in the salt form

The terms are meaningless to me, I have no reasonable way of classifying anything into those categories because the categories literally have no meaning


> As a pharmacologist, the phrase is meaningless to me.

Glad I'm not a pharmacologist, so I can understand the term in this context.


I'm saying the definitions you learned are meaningless, so I'm not sure what information you could possibly take away from the statement

The terms and how they are used have zero correlation with reality


https://www.dictionary.com/browse/hard-drug

"an addicting drug capable of producing severe physical or psychological dependence, as heroin."

But you already knew what was intended.


Okay, so cannabis is a hard drug: it produces physical withdrawal symptoms as well as psychological ones.

Caffeine is a hard drug: it also produces physical and psychological withdrawal symptoms.

Nicotine is a hard drug: it produces physical and psychological withdrawal symptoms.

Apparently the term includes all psychoactive drugs, which makes the term meaningless

Searching for 'hard drug' on the National Institute on Drug Abuse yields no results that contain any information on 'hard' or 'soft' drugs. Don't you think the national agency would have a definition if it were a real concept?

https://nida.nih.gov/search/Hard%20drug?sort=_score

Searching for 'soft drug' just yields no results at all

https://nida.nih.gov/search/Soft%20drug?sort=_score

Here is an article by a psych professor who works in the field of addiction:

https://www.verywellmind.com/the-difference-between-soft-dru...


Dunno what to say. Everybody who's _not_ a pharmacologist is completely clear on what the distinction is. Hard = heroin, meth, etc. Soft = marijuana, mushrooms, etc, plus nicotine and alcohol if you want to include those as drugs.

If your technical definitions don't include this taxonomy, so what? It's a meaningful term among laypeople because it communicates exactly this taxonomy in a single word.


The definitions and terms that the general public and law enforcement use are the result of misinformation thanks to the war on drugs. This is something we need to correct because it contributes to the stigma attached to drug users which prevents them from seeking help.

It doesn't matter if some people can agree on something when that concept is objectively and demonstrably not only false but systemically harmful

PS alcohol is more harmful than heroin physiologically. Alcohol would be one of the 'hardest' if we were going by physiological and societal risk

Editing to add: we can't not include alcohol and nicotine in the general category of drugs. They are undeniably drugs and anyone who tells you otherwise is a victim of the previously mentioned misinformation


It's poetic, but a reader will likely understand 'hard drug' = something like cocaine, where 'soft drug' = something like marijuana.


good thing it isn't written just for the pharmacologists then



