Hacker News
Are Monads a Waste of Time? (lambda-the-ultimate.org)
177 points by YouAreGreat on Feb 20, 2018 | 270 comments



In university I was obsessed with Haskell and being clever, writing code that was an unholy stack of lambda functions and fancy constructs like monads and monad lifting. I fell back on the typical intellectual-supremacy argument common to many fields of study, where you reason that something isn't used in industry because industry people are stupid and can't possibly grasp all these advanced abstract concepts.

Eventually I realized that Haskell code is slow to write, its performance is hard to reason about, lazy evaluation sucks for most things, and, most importantly, Haskell is hard for others to read. Going on Hoogle to look up some package and finding an abandoned doctoral thesis used by nobody, where the only documentation is the types, becomes exhausting after a while.

Go is now my favorite language.


Surely there's a middle ground here. Most code written by university students in any language is problematic and not professional-quality. The specific deficiencies exhibited may vary from language to language. The extreme you illustrate here is clearly undesirable in any professional context, but that doesn't mean it reflects the code written by any professional Haskell developers out there.


>Surely there's a middle ground here.

LINQ. Monads that are easy to reason about without the need to speak about monads, functors and so on.


To communicate we need names for things; "thing that has a suitably associative implementation of flatMap" is an important concept, and "Monad" is the name that exists for it.
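To make that concrete, here's a minimal Haskell sketch of my own (the names `halve`, `lhs`, and `rhs` are illustrations, not anything from the thread): `(>>=)` is Haskell's flatMap, and the monad laws require it to be associative.

```haskell
-- (>>=) is Haskell's "flatMap". Maybe's instance chains
-- computations that can fail, short-circuiting on Nothing.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- The associativity law: (m >>= f) >>= g == m >>= (\x -> f x >>= g).
-- Both groupings of the same pipeline produce the same result:
lhs, rhs :: Maybe Int
lhs = (Just 12 >>= halve) >>= halve
rhs = Just 12 >>= (\x -> halve x >>= halve)
-- lhs == rhs == Just 3
```

Associativity is exactly what makes it safe to refactor a chain of flatMaps without caring how the calls were grouped.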


Yeah, to me his comment reads like whiplash from realizing some of the downsides of Haskell. But he seems to take it way too far.


I dunno, not being able to reason about performance is a pretty big deal. I've only ever fantasized about learning Haskell, but I can tell you that this quality is what makes SQL (or, God forbid, an ORM) a pain to deal with in production. The explain plan for a given SQL query can change on you for no reason other than that your table has grown. You can go from using an index for speedy queries to a full table scan because your SQL engine decided to no longer use any of your indexes.

I don't know what the equivalent cases might be for a more general purpose language, but I strongly suspect it's a problem all declarative languages suffer from.


>The explain plan for a given sql query can change on you for no reason other than your table has grown

True, but that may well lead to more predictable performance. If you write a procedural program and then the dataset grows, your program might slow down dramatically, while the new execution plan generated by the SQL engine would adapt and keep performing well (...ideally).

You could argue that a procedural programmer could anticipate the growth of the dataset and choose a suitable algorithm from the start. But if you have not one but tens of datasets and value ranges whose size relative to each other determines what the best algorithm is then manual optimization becomes very difficult indeed.

A Haskell compiler doesn't use knowledge of input data to optimize the program though (at least not to the extent a database engine does). So I think a relational database engine is more useful in this regard.


Haskell is not declarative, however, so the programmer actually has control over the evaluation order through ordering demands for the results of computations. It does take a little longer to fully grok how lazy code evaluates compared to a simple left-to-right eager evaluation a la Scheme, so it can be a bit of a rough start (and many only decide to take the full time to understand laziness when they're in the middle of a critical issue caused by it). However, a large domain of problems (for example, search) is more easily expressed in a compositional manner using laziness.


Forgive my ignorance, I really only have a passing familiarity with Haskell, and I did a quick search before trying to lump it in with SQL as declarative, but http://www.haskell.org states above the fold, "declarative, statically typed code". If it's not generally considered a declarative language, is there a better way to describe the language?


I think "declarative" exists on a spectrum. There's no clearly defined line between declarative and nondeclarative that I know of. For instance I would say Haskell < Prolog < SQL are on one end of the declarative spectrum, but in each case you do end up reasoning about the underlying execution model.


This is a fair statement. What I (and I believe most others) mean by declarative programming is more specifically logic-and-constraint programming, but in many ways you could view Haskell's type system, typeclasses, and laziness in combination as being more declarative than, say, a purely functional subset of Scheme. To the grandparent post I originally responded to: if you wanted to take a look at a more purely declarative language, see miniKanren (https://docs.racket-lang.org/minikanren/index.html). There also exists an extension with constraints, cKanren (http://scheme2011.ucombinator.org/slides/Alvis2011.pdf).

To try to summarize, a declarative language is one in which you tell the computer what you want, but not so much how it is done. Logic Variables (Unification) are the key differentiators of these systems.

Take a look at chapter one of CTM for a really nice breakdown of the different orthogonal components to a language. I have a feeling differentiable programming and (dependent+) types belong on that list, too.

As an additional question of my own for anyone reading - is there anything that should be on that list thanks to modern CS research, that isn't?


> It does, however, take a little longer to fully grok how lazy code evaluates

I doubt it would take anywhere near as long if students were told from the get-go that the only thing triggering evaluation is the destructuring as it happens during pattern matching.

Unfortunately it seems like most teaching resources would prefer to uphold a sense of mysticism.
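For what it's worth, that framing fits in a few lines of Haskell (a sketch of my own, not from any teaching resource): nothing is evaluated until a pattern match destructures it, and matching forces only the outermost constructor.

```haskell
-- An infinite list is harmless until something pattern-matches on it.
nats :: [Int]
nats = [0 ..]

-- take destructures one cons cell at a time, so only the first
-- five elements of nats are ever evaluated.
firstFive :: [Int]
firstFive = take 5 nats  -- [0,1,2,3,4]

-- Matching (x : _) forces only the outermost constructor, not the
-- tail, so the undefined below is never touched.
headOr :: a -> [a] -> a
headOr d xs = case xs of
  (x : _) -> x
  []      -> d

demo :: Int
demo = headOr 0 (1 : undefined)  -- 1, despite the undefined tail
```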


It's extremely frustrating, isn't it?


The attraction of Haskell is to build correct software in a mathematically principled way. With practice, it is also remarkably productive, certainly more so than Go when used as a general purpose language. Haskell is simply much more expressive and capable of abstracting over many more things (perhaps this is why you say it is hard to read). The Haskell ecosystem is also constantly evolving and solving more and more real world problems, often problems that have not been solved in other languages. Modern Haskell effectful streaming libraries actually make reasoning about stream processing much easier than in say Java, where the presence of effects (exceptions etc) destroys reasoning.

> Go is now my favorite language.

Is this really true? Or is it just the tool you are now most familiar with? It's certainly a much riskier proposition to try and feed the kids writing Haskell.

Quoting Rob Pike on Google's own Go programmers: "They're typically fairly young, fresh out of school, probably learned Java ... They're not capable of understanding a brilliant language ... So the language that we give them has to be easy for them to understand and easy to adopt."

The Go toolset can obviously get the job done, but I don't understand why it would be anyone's favourite language; it doesn't even sound like it's Rob Pike's favourite language.


>certainly more [productive] than Go when used as a general purpose language.

Come on, there isn't really any evidence that Haskell is more productive than Go. It's not as if anyone's run a controlled experiment.

Anecdotally, I've written code in both languages and not found much difference in productivity. I think a lot depends on the size of the team, the size of the code base, the nature of the algorithms being implemented, and the amount of IO that's performed. Haskell's abstractions have saved me time in some cases. But then again, I've also spent unproductive hours figuring out the complex abstractions that someone else decided to use. Haskell's so-so tooling and culture of using unqualified imports compound this problem.

And then there's the issue of compile times. Do monad transformers make me more productive? Do functional dependencies make me more productive? I'm not sure. In contrast, I have no doubt whatsoever that fast compile times make me more productive. Most Go codebases compile in under a second, whereas GHC's compile times are absolutely abysmal.

Go is a language without intellectual insecurity. It doesn't care if you think it's stupid, or if you think that Go programmers are stupid. It focuses on features that are known to make everyone more productive (e.g. fast compile times) at the expense of features that make you feel smart.


> There isn't really any evidence that Haskell is more productive than Go.

Go appears to be a network services language. Although Haskell has equally good for-loops and channels, it does not have an incremental garbage collector or fast build times. So Go has its place.

For general purpose programming, Go is lacking so many features and provides such verbose syntax for what little it does have (worse than modern Java, e.g. lambdas and conditionals) that one can certainly say it is starting from a disadvantaged point with regard to productivity.

It sounds like you've had some bad experiences with some of Haskell's libraries and tooling. The ecosystem is certainly still rapidly evolving, and good documentation and libraries may not exist in all areas. With regard to complex abstractions, the Haskell community usually likes to formalise and reuse them. Learning an abstraction usually results in a big payoff.

EDIT:

> Do monad transformers make me more productive?

Monad transformers are specific to pure functional programming, but yes they aid productivity. For example, I can build an effectful streaming type with resource management and other desired features by stacking existing transformers. In Go I would be solving such problems with for loops.
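As a rough illustration of the stacking idea (my own sketch using the transformers package that ships with GHC, not the effectful streaming example above): layering ExceptT over State gives a computation that both threads state and can fail, with neither feature hand-rolled.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.State (State, evalState, get, put)

-- ExceptT layered over State: stateful steps that can also fail.
type Counter a = ExceptT String (State Int) a

-- Return the current count and bump it, failing after three ticks.
tick :: Counter Int
tick = do
  n <- lift get
  if n >= 3
    then throwE "limit reached"
    else lift (put (n + 1)) >> pure n

runTicks :: Int -> Either String [Int]
runTicks k = evalState (runExceptT (mapM (const tick) [1 .. k])) 0
-- runTicks 3 == Right [0,1,2]; runTicks 4 == Left "limit reached"
```

Each transformer contributes one capability, and the combined monad gets all of them without any bespoke plumbing.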

> Do functional dependencies make me more productive?

Functional dependencies are specific to Haskell type classes, but yes, they can aid productivity. For example, say you want to express arithmetic ops on matrices and scalars and have the compiler figure out the correct dispatch and return type depending on the types of the values. In Go I can't overload operators or do multiple dispatch.
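A small sketch of what that can look like (the `Mul` class and the `Scalar` and `Vec` types are hypothetical illustrations of mine, not a real library): the functional dependency `a b -> c` tells the compiler that the operand types determine the result type, so dispatch is unambiguous.

```haskell
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

-- "a b -> c": the two operand types uniquely determine the result
-- type, so the compiler can pick both the instance and return type.
class Mul a b c | a b -> c where
  (.*.) :: a -> b -> c

newtype Scalar = Scalar Double deriving (Eq, Show)
newtype Vec    = Vec [Double]  deriving (Eq, Show)

-- Scalar times scalar yields a scalar...
instance Mul Scalar Scalar Scalar where
  Scalar a .*. Scalar b = Scalar (a * b)

-- ...while scalar times vector yields a vector, resolved statically.
instance Mul Scalar Vec Vec where
  Scalar k .*. Vec xs = Vec (map (k *) xs)
```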


>The ecosystem is certainly still rapidly evolving and good documentation and libraries may not exist in all areas.

Hmm, that's what people have been telling me since I first learned Haskell in 2002. You brush this off as if it's a minor point ("some bad experiences with Haskell's libraries and tooling"), but libraries, tooling and compile times are a major, major influence on productivity. If you're at leisure to wait for decent libraries and tools to become available, then good for you! I was kind of hoping to get stuff done now, though.

Abstractions have both positive and negative payoffs. Take MTL as an example. It certainly increases code reuse and makes code more concise. On the other hand, it's also partly responsible for Haskell having a gazillion different techniques for handling errors, with little community consensus on what the best techniques are (see e.g. https://www.reddit.com/r/haskell/comments/5bkqf1/exceptions_...). I spend at least some time thinking about this stuff when I write Haskell code. I spend no time thinking about it when I write Go code -- I just write the same error handling boilerplate that every Go programmer uses. It's not pretty, and I don't get to pat myself on the back for understanding something complicated. But does it make me less productive? Not sure.

Response to edits: I think you're misunderstanding my comment. I use these features and know what they are. I'm telling you that I do not see any clear productivity benefit from them in practice. Haskell advocates do have a maddening habit of assuming that anyone who criticises any aspect of the language must be a complete novice.


> If you're at leisure to wait for decent libraries and tools to become available, then good for you!

As I said, it depends on the area you are working in. Haskell does have some good libraries.

> Haskell advocates do have a maddening habit of assuming that anyone who criticises any aspect of the language must be a complete novice

Please don't take it personally, I was attempting to explain those features that you mentioned to others who might read our posts (perhaps wishful thinking).

I try to promote Haskell whenever I can because it has been enormously successful in everything I have applied it to and needs all the help it can get. It doesn't have a billion dollar company behind it, just a small community of mostly volunteers.


> I was attempting to explain those features that you mentioned to others who might read our posts (perhaps wishful thinking).

/waves from the gallery


Tool quality and compile times are an issue whatever area you are working in, however.


Compile times are often mitigated by just using the runhaskell interpreter, or by turning off optimisations.

What sort of tools does Go have that Haskell doesn't? From what I can see, most Go projects just use Makefiles and text editors. The package manager is described as "experimental".

In Haskell land we have many good choices, with my favourite (for deterministic, reproducible builds) being nix and cabal new-build. I've also seen Haskell Shake replace C++ Makefiles at work for a huge speedup.

Haskell has good API search (Hoogle), docgen (Haddock), lint (HLint) and graphical profiling tools (Threadscope). Refactoring tools like HaRe and others, are also starting to appear.


>Compile times are often mitigated by just using the runHaskell interpreter, or turning off optimisations.

No, not really. If you touch a module that a lot of other modules depend on, then you have to recompile a lot of code, and that takes ages (even without optimizations). I'm hardly the only person who has this issue: https://www.reddit.com/r/haskell/comments/45q90s/is_anything...

To be fair, fast compile times are something that's hard to get excited about in the abstract. Before I tried Go, I'd read that it compiled super fast, but that wasn't something that really attracted me to the language. I certainly didn't think that it was worth giving up, say, generics to achieve those fast compile times. But now that I've actually experienced the joy of never having to wait for code to compile, it's really changed my perspective on language features.

As the reddit thread explains, the only people who are really motivated to work on GHC are people who want to add yet more features to a language that's already extremely feature rich. Compile times are getting worse and worse, and the value added by the additional features is getting less and less.

Honestly, Haskell 98 gives me 95% of what I'd want from a pure, lazy functional language. I'd much rather have a simple, fast Haskell 98 compiler than a beast like GHC. But unfortunately, it's too late for that.

>What sort of tools does Go have that Haskell doesn't?

For me, VSCode integration that just works. I get instant highlighting of compiler errors and I can jump to definitions, rename variables, run tests from within the editor, etc. etc. I'm aware that the Haskell ecosystem has at least some of this functionality in principle, but it's way more fragmented and less reliable.


Pure FP just uses math as its primary source of programming patterns and techniques, while most other languages, like OOP ones, use arbitrary conventions to abstract code. Because mutation gives you tremendous flexibility, you can model many kinds of patterns, give each some analogy like inheritance in animals, and then write a language that has it out of the box. Haskell, on the other hand, uses math to construct its patterns, so while the learning curve is higher, eventually you realize this is the ultimate abstraction (category theory), and it cannot go more abstract than that. This is useful in some scenarios requiring more logical rigor, but I agree it's not always the best tool.


> eventually you realize this is the ultimate abstraction (category theory), and it cannot go more abstract than that

This is exactly the kind of false sense of intellectual superiority to avoid. Mathematicians and category theorists don't talk like this. Even the parts of Haskell that have the flavor of category theory aren't really all that much like category theory.


Maybe you can expand on that, and/or show me something more abstract than category theory. Programming = abstracting information processing. Category theory = the mathematics of abstract functions (or morphisms a.k.a. changing something)... so it makes sense that CT captures something very fundamental about computation.


1. Just because we have not thought of something more abstract does not mean category theory is the "ultimate" abstraction that cannot be surpassed. You would have said the same thing about sets before category theory was invented. 100% guaranteed there will be another, more abstract paradigm when the need arises.

2. Just because one thing captures something fundamental about computation does not mean it captures everything fundamental about computation, nor does it mean nothing else can capture fundamental aspects of computation (disjoint from what may or may not be captured by category theory). For example, category theory does a terrible job representing the mathematical subfield of complexity theory, despite it being entirely about computation. For that matter, "how abstract" is not always comparable, and the intellectual dick measuring of what is the most abstract is counterproductive. Even basic questions in particle physics have no known model in mathematics that doesn't break under gentle prodding, so is particle physics more abstract than mathematics?

3. Haskell does a lousy job representing the majority of ideas that make category theory useful to mathematicians. For example, universal properties are neglected in Haskell, and at best they're implicit. Nobody actually thinks about the math when they're writing Haskell programs; they just think about Haskell types and programs, which makes the "techniques and patterns" of Haskell programming just as arbitrary as the conventions in random Javascript libraries. It's just that there are fewer names for the same thing because fewer people write serious programs in Haskell.


> Haskell does a lousy job representing the majority of ideas that make category theory useful to mathematicians. For example, universal properties are neglected in Haskell, and at best they're implicit. Nobody actually thinks about the math when they're writing Haskell programs; they just think about Haskell types and programs, which makes the "techniques and patterns" of Haskell programming just as arbitrary as the conventions in random Javascript libraries. It's just that there are fewer names for the same thing because fewer people write serious programs in Haskell.

That’s an excellent point. So many people think they’re doing category theory when they’re writing Haskell, but Haskell is essentially nothing like the mathematics it borrows terminology from. They’re really not comparable. Which is absolutely fine, but then people shouldn’t draw parallels to the actual mathematics when the terminology is similar in name only.


Technically, category theory is not more abstract than the rest of (abstract) algebra.


You can derive all other math branches with CT, so yes, it is more abstract. It is "higher level" (in the same sense of programming languages) than other maths.


Can you point out where somebody has genuinely put in the effort to found math on category theory? The standard way to do so is set theory, and the more recent hype is type theory.

Category theory in itself is pretty useless. Categories are a bit like graphs in that they are too general.

There are serious mathematical theories using category theory as their language (mostly commutative algebra and whatever it touches) just as there are serious theories using the language of graphs, but in both cases the meat of the math is not in the underlying language.


There are serious efforts to found maths on elementary toposes, which are categories with certain properties. Much the same way that the axioms of ZFC model the properties of sets in terms of the "is element" relation, you can give first order axioms for how the morphisms and objects of an elementary topos behave. Then you may add more axioms to assert that the elementary topos you're talking about is really a category of sets in the sense of ZFC.

If you want to research more about this, I can recommend Sheaves in Geometry and Logic by Mac Lane and Moerdijk. You can also search for "algebraic set theory" and go from there.

And btw, type theory is essentially the same thing. The axioms are IMO uglier and more ad-hoc, but there are (some proven, some conjectured) equivalence results.


>Please point out where somebody genuinely puts an effort to found math on category theory?

What about the following: https://ncatlab.org/nlab/show/foundation+of+mathematics#CFM

Even if the meat of the math is not in the categorical aspects of a theory, the categorical input can be useful - if nothing else, then as a convenient language.



That is a) a great project b) not category theory.


No, you can’t. The vast majority of analysis cannot usefully be represented in category theory. You can’t express everything in analysis using morphisms and maps.

When you’re working with structures (algebra, geometry, topology, etc), sure, you can construct them in the language of categories. But that’s very far from “all of math”.


CT exists at the same level as some other "fundamental level" fields of maths like topology and abstract algebra (/algebraic topology).

It's certainly very abstract and fundamental, but it's definitely not the root field.


You can't derive CT with CT. Though that has nothing to do with your point, just suggesting that we aren't there yet.


You can derive all other math from any Turing complete system. Category theory is no more powerful than that.


Given that Turing-complete systems are not total, you can actually derive anything, including falsehood, from them. The point being that a general Turing-complete system is about as useful as a basis for mathematics as a banana.

Category theory is not a great basis for mathematics, but at least it's not inconsistent.


You are missing his entire point.


I know, which is why I asked for some clarification. I actually think Haskell is able to directly implement CT; there is the Hask category after all. If it's that this is too pretentious to even mention on HN, then I'm completely lost.


> there is the Hask category after all.

This is disputed and seems to be at best conjecture/intuition without adequate formal definition supporting it.


Hask is not a real category.


>Even the parts of Haskell that have the flavor of category theory aren't really all that much like category theory.

Can you expound on this? Just because the type classes defined in Haskell can't represent the wider range of categorical structures they purport to capture does not mean that more structure doesn't exist throughout a Haskell code base. For instance, many argue that the only type of functors used in Haskell are endofunctors, but that's not really true unless we have a narrow definition of 'functors used in Haskell', no?


The typeclass Functor represents endofunctors from * to *. That's a pretty limited notion of "functor" when considered mathematically but also extremely useful for programming!
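Concretely (a trivial sketch of my own): any type constructor of kind * -> * with a lawful fmap is such an endofunctor, and that limited notion already covers a huge amount of everyday code.

```haskell
-- fmap lifts a function a -> b into f a -> f b for any Functor f.
-- Maybe and [] are two such endofunctors on (an idealised)
-- category of Haskell types.
doubled :: Maybe Int
doubled = fmap (* 2) (Just 21)       -- Just 42

lengths :: [Int]
lengths = fmap length ["ab", "cde"]  -- [2, 3]
```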


"Because mutation gives you tremendous flexibility ... Haskell on the other is using math to construct its patterns" - so mutation is somehow outside of math?


x = 0 and x = 1. Therefore by the reflexive and transitive property of equality, 0 = 1.


OK - it uses different notation where '=' is not the same as in maths. But that is kind of trivial. There are perfectly mathematical semantics of both functional and imperative languages.


> Because mutation gives you tremendous flexibility,

Interesting how perspective works. I frequently think of mutation as a really annoying restriction. In the presence of mutation, I can no longer assume things are what they say they are -- for they may have changed between when I looked at them and when I speak of them.

That is fine, if you have a way of separating state from identity, and a way of synchronising the two when it makes sense to do so. Unfortunately, languages with uncontrolled mutation lack both.


> ultimate abstraction (category theory), and it cannot go more abstract than that.

I’m curious. What makes you think that?


Because you cannot discard more detail than what CT has chosen to discard. You start with objects and arrows, and how the arrows compose. Every object has an identity which is another arrow, so in some sense you have just a composition of arrows, and that's it. CT starts really simple, and I don't think there's anything left to hide or abstract.


Every category is an infinity category, but the converse is not true. Thus, infinity categories are more general than categories.

But you don't have to go that far: Just remove e.g. identity morphisms from categories, and you get something that is more general than categories.

It is of course another matter completely whether these structures are a useful abstraction for what you want to accomplish. I've never come across something reasonable that has arrows and associative composition, but no identities. As for infinity categories, I remain very skeptical that they abstract any problems programmers not concerned with topology (i.e. > 99%) have.

So yeah, categories are a reasonable baseline from where you can build. But you can't show anything relevant and non-trivial that holds in any category. You'll need more structure, hence less generality, for that.


You may be wrong here, there have been some discussions - see for example https://www.quora.com/Whats-more-general-than-category-theor...


Not the other poster, but CT is the language of composition and abstraction, it's the high level language of math.

I think people are just butt-hurt that he used the word 'ultimate'.


Category theory is not “the high level language of math”. You cannot universally apply category theory to all other fields of mathematics.


What fields of math do you think it doesn't apply to?


Analysis.



You cannot build all of analysis from category theory. Categories, morphisms and functors are not nuanced enough to encapsulate everything in analysis. The question was not, “Can you restate some results in category theory”, the question was “what cannot category theory build?”

What you’re linking to is part of a relatively small corpus outside of e.g. algebraic geometry and topology that has successfully been used for nontrivial results. And as another commenter said, while it can be used for e.g. computation, it is extremely poor for complexity theory. There is a reason why basically any analyst (as opposed to algebraic geometer) will use set theory instead of categories for their work.


I think Haskell is full of good ideas waiting to be reused in a successor language, with a focus on software engineering as well as just purity (Multiple senses of the term!) and rigour in abstraction.

The ideas are sound - When I write pseudo"Haskell" in other languages (Especially those with enforced purity), the rewards are fairly good for relatively little extra work - but Haskell (The language) is, even putting nicely, rough around the edges.


I feel like this is a sentiment sort of echoed by its creators: I believe they've stated that they (paraphrasing here) "don't want Haskell to become successful". Not in that they don't want adoption at all, but they don't want to have to support the commercial requirements around stability and whatnot that companies usually require.

As such, I think a language that comes along and takes a subset of the best features and learnings from Haskell would be a really great idea. Maybe like a Python, but for Haskell?


I've been waiting for this for a very, very long time. I've not learned Haskell yet but generally speaking I'm very fond of FP in all languages, including Python.

However, the BDFL has arbitrarily decided that Python shouldn't even have remotely easy-to-use FP semantics (even though Python being so multi-paradigm is often touted as a major advantage), so (as an example) while he gave up on trying to remove `lambda`, he didn't make it simpler and more useful, which makes it so that whenever someone runs into it as a beginner they say, "Oh, what a stupid little thing that I can replace easily, FP is stupid." [1] I've been very upset generally speaking at how much control the BDFL is taking - not necessarily directly, but indirectly through people who seem to think that Pythonic™ programming is literally God and anyone who doesn't do Pythonic is Bad.

[1] https://stackoverflow.com/questions/890128/why-are-python-la...


I agree completely. Other choice decisions include:

"No tail-call-optimisation because I don't like recursion and I reckon the stack traces would probably be ugly"

And "python doesn't need to be any faster", even though it should be possible to do so without compromising development speed.

If you want more functional style stuff in Python, check out Coconut: http://coconut-lang.org

I think an ideal candidate would look like Coconut with its Python-meets-Haskell feel. Maybe target LLVM for nice compile-time checks and performance benefits.


People often attribute this to careful language design - and to be completely honest, I can't say anything about that since I'm just some kid, but I feel like that doesn't make the BDFL immune to criticism. TCO, for example, would change almost nothing that already exists in the language, and would make it faster in at least some cases. As evidenced by Coconut, and in fact the number of libraries that smack TCO onto Python, albeit in a very inelegant way.

And sometimes it seems he intentionally cripples certain features so he can "prove" that certain ways of programming are bad.

> filter(P, S) is almost always written clearer as [x for x in S if P(x)], and this has the huge advantage that the most common usages involve predicates that are comparisons, for example, x==42, and defining a lambda for that just requires much more effort for the reader (plus the lambda is slower than the list comprehension)

(Quoted from https://www.artima.com/weblogs/viewpost.jsp?thread=98196)

If he had made lambdas even a little bit easier to write, then this criticism is completely blown out the window.

I'm not saying that FP is the holy grail and all must bow down to the ivory tower. But it disturbs me that the BDFL seems to be waaayyyyyyy too cautious about adding useful features to the language because Python Is Not A Functional Language.


> [...] Haskell is full of good ideas waiting to be reused in a successor language

Thanks. You managed to articulate something I've been thinking for a long time.

To me a good programming language has empathy. Many languages that have some clever ideas have little empathy with programmers.


Or you could simply use it as a 'better Java', and forget all the extra bells and whistles.


I never went to university and don't have a PhD.

I find Haskell prevents me from being too clever and keeps me honest.

Monads are a part of that toolbox which prevent me from using unconstrained effects in my code.


https://patrickmn.com/software/the-haskell-pyramid/

parts of its type system have a lot of gnarly dark corners, but those parts aren't necessary for line-of-business kinds of programming. unfortunately, those are the ones that get discussed on social media - which it sounds like you were inundated with / attracted to

the core type system is pretty lean


Seems unrelated to the topic under discussion.


I'm not sure. The subtext is "monads are one of many clever ideas in Haskell that make it hard to use in practice." And thus it answers the very (very very) specific question posed by the article, just in a more general context: "Are Monads a waste of time?" "Yes, Haskell is not worth the effort."


Monads are not a clever idea that makes Haskell hard to use. Purity is a clever idea that makes Haskell hard to use, and Monads are the best solution so far.


Welcome to Hacker News.


This is pretty similar to what happened to me. I was implementing an algorithm that had multiple parts and the data would arrive asynchronously.

The first time, I wrote the multipart algorithm using a functional style to encode each state. As the algorithm grew, it became more and more difficult to reason about and managing dependencies between the parts was harder, not easier.

When this became unwieldy, I rewrote it. However, when I wrote it the second time, I wrote the algorithm in a way that made it appear imperative. Everything was linear, starting with part B, then part C, etc. A small backend managed state, but you could read through the algorithm code and it almost seemed like it would run synchronously.

The second method was easier to read, debug, and modify. I will probably not go back to the functional method anytime soon.


Be wary of "programmer's rewrite bias". Anything you rewrite is likely going to be easier to read, debug, and modify since you understand the problem better the second time.


Also the first version may have had post-release bugfixes etc which are frequently omitted from the rewrite and then rediscovered later on.


Haskell's value was never in actual "engineering" (someone somewhere is itching to point me to no fewer than 4 Haskell projects that nobody cares about) but rather in "research". To this day, I don't understand Monads, even though somehow I used the heck out of them in my C# days, and I loved "reactive" programming which also supposedly comes from Haskell, and for that - I'm grateful. Haskell tries dozens of ideas nobody but computer language theorists care about but, once in a while, something cool drops out that is adopted and made usable for mere mortals like myself.


I think you're rather putting it on a pedestal. for the truck-driving contingent of haskell programmers such as myself, i assure you i'm not interested in theory much and simply enjoy haskell as a 'better java'.


May be "better than java" in your use case, but it's not "better java", as it's nothing like Java at all.


I’m in the same boat. I deep dived into Haskell for two years. It was fun and interesting at first but lost its appeal for some of the reasons you mention. I also enjoy using Go.


Even the creators of Haskell have conceded that lazy-by-default was problematic

https://www.google.com/search?q="the+next+haskell+will+be+st...


I think his point was more that it is purity rather than laziness that has been the big contribution from Haskell, despite the original research goals. A good language should allow for both strictness and laziness, as Haskell already does.


So true. Haskell is great to learn about functional programming and programming language theory in general. But to get things done, it's just not as practical.


it's pretty practical if you're building line-of-business apps, web servers, or other things you would use java for.


Yes, miss one character in a thing that looks like HTML but is actually a gigantic function call, get an incomprehensible failure message about how the meta-template macro expansion has an error.


macros/templates are usually confined to borders, databases, routes. not the business logic. Lisps are more susceptible to that.


Not even talking about business logic, just talking about page layout.

https://www.yesodweb.com


Yes, c# and Java do codegen for you too, in the same way. This is how templating in static languages work.


you should try modern erlang. If you try really hard, you can get both the advanced abstract concepts plus obscure machine specific optimizations.


I'd encourage you to look into Kotlin. It's got the 20% of FP features that give 80% of the benefits, while still having generics and a thriving ecosystem (JVM). As someone with a similar personal trajectory, it really hits the sweet spot.


I'm sorry, but it just sounds like you're a terrible functional programmer. There are basic software engineering principles you need to use when using a functional language, just like any other programming language.


Think you should tone it down a little, whether or not I agree


Just to point out the probably obvious thing: that's only one type of monad (the state monad). There are many other things which form a monad and can't be represented easily in JS or other imperative languages.

Even though even the state monad is a clear win in my opinion (since it explicitly delimits a stateful computation which can then be embedded in a pure one in a principled way), the concept itself is much broader and thus much more useful than just allowing for imperative syntax in a pure language. For example, the continuation monad, which is just a library in Haskell, can't be implemented in most imperative languages without extending the language itself (async in JS, C#, etc).


To emphasize, of course the monad interface isn't useful if the only implementation of it that you use is also the monadic implementation used by an imperative language. But if you "just" use an imperative language, you lose all the other implementations because the language hardwires the "conventional" one at the bottom. You lose the parser implementation of the monad interface, the free monad implementations (which are useful for writing "programs" that are then separated from their interpreter), the software transactional memory implementation, the probability implementation, the effects-controlling implementations (that go beyond "either in IO or not" to implement more granular controls), and the many other things that have been implemented.

Obviously that's ultimately a viable solution since almost every programming language works that way. But if you're going to ask "is the ability to implement the monad interface worth it for some given language?", you aren't going to get anywhere if you consider only one of the many implementations of the interface.

Why would anybody use the iterable interface if all it could do is the equivalent of "for (i = 0; i < arr.len; i++)" and couldn't iterate over a binary tree or hash table as well? Why would anyone use any interface if an interface only has one implementation, ever? The problem lies in the question, not the value of the interface.

(I'm trying to get better at phrasing my posts about "monads" better because one of the major problems they have is that people think of "monad" as a noun. But it is an interface, just like in Java or a Rust trait, that can be implemented by a type; it just so happens that the interface is not something that can be cleanly represented in most statically typed languages because of the high-order polymorphism. "Monad" is an adjective, not a noun. And "monads" aren't "about managing state" any more than the Iterable interface is about "incrementing an integer to walk over an array"; the conceptual error is the exact same kind in both cases.)
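To make the "one interface, many implementations" point concrete, here's a minimal JavaScript sketch (the `pairSum` helper and the hand-rolled `Maybe` are invented for illustration, not from any library): the same generic code runs against anything implementing the `flatMap`/`map` pair.

```javascript
// Generic code written once against the "flatMap" (monad-style) interface.
function pairSum(mx, my) {
  return mx.flatMap(x => my.map(y => x + y));
}

// Arrays implement flatMap natively: all pairwise sums.
console.log(pairSum([1, 2], [10, 20])); // → [ 11, 21, 12, 22 ]

// A hand-rolled Maybe implements the same two methods...
class Maybe {
  constructor(has, value) { this.has = has; this.value = value; }
  static just(v) { return new Maybe(true, v); }
  static nothing() { return new Maybe(false, undefined); }
  map(f) { return this.has ? Maybe.just(f(this.value)) : this; }
  flatMap(f) { return this.has ? f(this.value) : this; }
}

// ...so pairSum works unchanged, now short-circuiting on "nothing".
console.log(pairSum(Maybe.just(1), Maybe.just(2)).value); // → 3
console.log(pairSum(Maybe.nothing(), Maybe.just(2)).has); // → false
```

Same caller, different implementations of the interface, different meanings of "combine two computations" — exactly the iterable analogy above.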


This is a bit like saying "writing your own embedded mini-language and interpreter is better since you can make it do arbitrary things".

The question is whether you want people on a large project spending time on custom tools? There's benefit in simplicity and standardization. Using Rust (to pick a nicer example than JavaScript) and the community's standard language tools does have its advantages.

I say this as someone who likes to work on language tools. It's fun, but I wouldn't call it productive, unless you work on tools with a large userbase.


That's literally the opposite of what he's saying - the entire point of monads are their uniform interface for different abstractions.


A little off topic, but I wanted to thank you for your comment ‘..delimits a stateful computation which can then be embedded in a pure one in a principled way’

I always think of it the other way around: I write pure code, and then embed the pure code in IO, etc. I tend to have boilerplate bits of impure code that I use as an afterthought once I have pure code written and tested in a repl.

I think that to progress, I need to swap my view of Haskell, and spend much more time on the impure parts, and earlier in my dev process.


Then you look at professional code written by the famous experts, and you see almost everything is written inside an app-specific ReadWriteStateIO monad.


Most apps do things. But most apps don't do everything. Of course most business code is written in domain-appropriate monads that express some set of effects; why would we expect anything different?


> For example, the continuation monad, which is just a library in Haskell, can't be implemented in most imperative languages without extending the language itself (async in JS, C#, etc).

I don't know about other languages, but the continuation monad is pretty trivial to write using promises. These can either be provided natively or using a promise library. It's a little more verbose than Haskell, but it's actually a useful pattern in JS.

https://codepen.io/anon/pen/JpLLJR?editors=0011

FWIW, I'm no JS expert, so it's likely that there's a library out there that provides the functionality of applyAsync and composeAsync that I wrote.


Of course! When I said "it can't be implemented" I meant the logic that the continuation monad itself encapsulates, not async computations in general. For those you wouldn't even need promises, you could just use callbacks:

    fetchUser("url", (user) => {
      fetchAddress(user.address, (address) => {
        ...
      });
    });
But this repetitive plumbing is exactly what the continuation monad abstracts away:

    do
      user <- fetchUser "url"
      address <- fetchAddress user.address
      ...
So the point is that you can write your async code in the same way you'd write normal, sequential code (which is what the async extensions of JS/C# allow you to do).
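The plumbing that the do-block hides can itself be written in a few lines; here's a bare-bones sketch of a continuation monad in JavaScript (the `of`/`chain` names and the synchronous fake fetchers are invented for the example):

```javascript
// A continuation-style computation is a function that takes a callback.
const of = x => k => k(x);                    // wrap a plain value (return)
const chain = (m, f) => k => m(x => f(x)(k)); // sequence two computations (bind)

// Hypothetical CPS "fetchers" standing in for async calls.
const fetchUser = url => k => k("user-at-" + url);
const fetchAddress = user => k => k(user + "'s address");

// The nesting from the callback version is gone; chain does the plumbing.
const pipeline = chain(fetchUser("url"), user => fetchAddress(user));

pipeline(address => console.log(address)); // → user-at-url's address
```

The do-notation version in Haskell is essentially this, with the calls to `chain` generated by the compiler.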


This is a syntax thing only. You can do the same in Smalltalk with an array of blocks (which are closures).

  {
    [ user := url fetchUser ].
    [ address := user fetchAddress ].
  }
and pass that array as an argument to whatever evaluation strategy you want.

You can also do with with proper macros, e.g. in Nim (fictitious example, there's no actual `usingContinuations` macro):

  usingContinuations do:
    user = fetchUser("url")
    address = user.fetchAddress
This is all a matter of syntax allowing you to write a sequence of code fragments in a readable fashion while allowing transformations on the underlying sequence.

That said, outside of Haskell (or other functional languages emulating the approach) you'll encounter them fairly rarely in this particular form, because just allowing for a sequence of code fragments is a bit limiting if you can combine them in more general ways. For example, Smalltalk builds pretty much all control flow out of what we'd now call combinators (on closures and values) and allows you to extend that arbitrarily. Not because Smalltalk does anything super-special here, but because it has a nice, concise syntax for expressing executable code fragments as values.


Every difference between Turing complete languages is a "syntax only thing".

A monad is basically an abstraction that encapsulates an evaluation strategy. You can certainly reimplement it in most functional languages with a syntax that is only a bit more verbose and error prone. That's not a win.

Also, macros are an all-powerful construction, with heavy implementation and maintainability costs. They can certainly be used to emulate monads. That's also not a win.


> Every difference between Turing complete languages is a "syntax only thing".

I'm pretty sure you can't ignore language semantics here.

> You can certainly reimplement it on most functional languages with a syntax that is only a bit more verbose and error prone. That's not a win.

Note that I was giving examples from imperative languages; Haskell has the additional problem that it has to transform the do notation into what's essentially function composition; but function composition is already the natural denotational semantics of imperative code [1].

[1] https://en.wikipedia.org/wiki/Denotational_semantics#Denotat...


> I'm pretty sure you can't ignore language semantics here.

As long as the languages operate on the same virtual computer (same IO capacities), you can create the same semantics on any language. At the worst case, you can write an interpreter for any language on any other language.

> but function composition is already the natural denotational semantics of imperative code

Most high-level languages got some influence from Lisp-style functional programming, so they transform into function reduction/composition with some degree of naturality. Imperative languages are the ones with the least straightforward transformation, while Lisp is basically alone at the other extreme.

Pure function reduction/composition is a great representation for computers. It's simple to analyze, and cheap to represent. But I don't think it is that great for human consumption. Do notation is a bit more complex to represent, but often a lot more legible.


> As long as the languages operate on the same virtual computer (same IO capacities), you can create the same semantics on any language. At the worst case, you can write an interpreter for any language on any other language.

This is fairly tautological and would make semantics meaningless. I'm not talking about the universality of Turing-complete languages, but about the semantics of language constructs.

> Most high-level languages got some influence by Lisp-style functional programing, so they transform into function reduction/composition with some amount of naturality. The imperative languages are the ones with the less straight-forward transformation, while Lisp is basically alone at the other extreme.

This has nothing to do with what I said, so I'm not sure what your point is?


"Do notation" is a syntax thing.

What's more fundamental is modeling every computation in your language with these chainable algebraic constructs in such a way that they're always composable (when of the same computation type) and combinable (albeit with complexity, as in mtl).

Monads are just a modeling technique for this, and when laid bare without do notation, they're neither impressive nor surprising. However, it's worth noting that there are some very neat properties with generic monads (something that Haskell does very well because of how typeclasses can model constraints).

It allows for reuse across surprising axes. Let's extend the prior example:

    loadUser :: (MonadIO m) => UserRef -> m User 
    loadUser url = do
        userRecord   <- liftIO $ fetchAndParseUser url -- One function for compactness
        emailAddress <- validateEmailAddress userRecord
        pure $ User (publicName userRecord) emailAddress
This is simplified, but what's really nice about this specific construction is that we've placed a constraint on how we load UserRefs (we need to pull data from a url). In most cases I'd expect that this would be some variant of ExceptT IO * or MaybeT *. But it could also be something more sophisticated.

I've had a case where I realized that I was using a dowdy 2008 approach to software assuming users would have only one identity record, but I wrote a load function a lot like this. I needed to change how I loaded users since users would often have multiple user records (i.e., they created and linked accounts to one another). I had to change my fetch and parsers (naturally, I chose a new endpoint). However, I started calling my load user function with ListT IO [1]. I barely had to change logic (mostly pattern matching on the edges) to cope with this change.

But you might also imagine that the computation includes async properties, or validating properties, or whatever. Because we can generically address ALL kinds of computations and even combinations of computations, we can freely swap them around as we see fit without substantial modification.

That kind of flexibility is fundamental to how these languages work in tandem with monadic computation, and it's very powerful.

P.S., I recognize that you might argue a value encoding could capture this in Smalltalk, and I agree. That's often how people implement this under the covers before they optimize it. What's worth noting is that without good type checking, interlocking more than a few of these constraints correctly is very difficult, which is why this is much more feasible to do in Haskell.

[1]: As an aside, I don't use ExceptT for runtime exceptions. IO already covers that and I think it's a bit of a fantasy to pretend I can do better. Also, I'm aware that ListT is a slightly troubled cousin in the world of monad transformers, but I was happy to use it here, none of its downsides really came into play.


Can you extend this approach to delimited continuations?


Depends on what you mean precisely by "extend to", but generally this is mostly a question of whether the underlying execution model supports them. "Code as values" is not exactly a particularly complicated thing (though arriving at an efficient implementation can be).


But Haskell’s execution model has no underlying support for continuations, and yet there are perfectly efficient implementations.

You actually cannot implement delimited continuations in languages like C++, JavaScript, or Go, unless you write your own execution environment, at which point I'd argue you've left the realm of the language. In C++ at least, such an implementation would certainly use undefined behavior


> But Haskell’s execution model has no underlying support for continuations, and yet there are perfectly efficient implementations.

What you need is a way to capture and manipulate stack frames at a very basic level. Haskell cannot magically avoid that. As I recall (though it has been a while), Control.Monad.CC reifies stack frames explicitly. Monads function in this context as a very basic metaprogramming technique (or as some sort of AOP, if you want to look at it this way).

> In c++ at least, such an implementation would certainly use undefined behavior

This is what I mean by support in the execution model. You do need to have access to stack frames, and C++ doesn't permit that. There are workarounds [1], but they generally require non-trivial metaprogramming, due to the insane complexity of C++.

You don't have that problem with, say, Smalltalk, as most Smalltalk VMs give you the necessary functionality. For example, one of the original continuation-based web frameworks, Seaside, was all Smalltalk [2].

[1] http://www.filpizlo.com/papers/baker-ccpe09-accurate.pdf; the context here is GC, but the underlying problem is similar, access to stack frames (in this context, for root scanning).

[2] http://seaside.st/about/examples


Eliminating the repetitive plumbing is a feature of haskell, not a feature of the monad. What the continuation monad gets you is a pipeline of asynchronous and synchronous operations. Whether or not something is asynchronous is abstracted away. Your example could be rewritten as:

    composeAsync([
      fetchUser("url"),
      function(user) { return user.address },
      fetchAddress,
      function(address) { return address.zipcode }
    ]).then(function(output) {
      document.write("Zipcode: " + output);
    });
In this example you don't need to care if fetchAddress is synchronous or asynchronous, it'll work either way.

It's at least worth noting that one thing Haskell gives you is the ability to perform assignment within the async functions. This is also possible with JavaScript, but a bit more intuitive in Haskell. Again, this is a feature of Haskell, not a feature of monads.

    var outputs = {};
    composeAsync([
      fetchUser("url"),
      function(output) { outputs['user'] = output },
      function() { return outputs.user.address },
      fetchAddress,
      function(output) { outputs['address'] = output }
    ]).then(function(output) { ... });
composeAsync could also be rewritten to store this in an accumulator, making it look a little nicer.


This is not as general as a monad though, it's just feeding the output of one function into the next.

I'm sure you know this, but the thing that makes monads a more general solution is that (because bind depends on a value produced at runtime) one can dynamically alter control flow, i.e:

    fetch = do
      user <- fetchUser "url"
      if userEmpty user
        then do
          fetchUserDetails "details"
          fetch -- recurse
        else
          fetchAddress user
For that to work, your functions inside the `composeAsync` array would need to be able to return nested promises and at that point you've just reimplemented the continuation monad :)
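That dynamic-control-flow point can be sketched with a hand-rolled `Maybe` in JavaScript (all names here are invented for the example): the continuation passed to `flatMap` branches on a value that only exists at runtime, which a fixed array of pipeline stages can't express.

```javascript
// Minimal Maybe with just the flatMap (bind) operation.
class Maybe {
  constructor(has, value) { this.has = has; this.value = value; }
  static just(v) { return new Maybe(true, v); }
  static nothing() { return new Maybe(false, undefined); }
  flatMap(f) { return this.has ? f(this.value) : this; }
}

// Division that can fail, modeled as a Maybe.
const safeDiv = (x, y) =>
  y === 0 ? Maybe.nothing() : Maybe.just(Math.trunc(x / y));

// The branch inside flatMap is picked from a runtime value,
// so the shape of the computation isn't fixed up front.
const calc = n =>
  safeDiv(100, n).flatMap(a =>
    a > 10 ? safeDiv(a, 2)    // one follow-up computation...
           : Maybe.just(a));  // ...or a completely different one

console.log(calc(5).value);  // → 10 (100/5 = 20 > 10, then 20/2)
console.log(calc(20).value); // → 5  (100/20 = 5, returned as-is)
console.log(calc(0).has);    // → false (division by zero short-circuits)
```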


Yeah, I guess I may have not completely implemented the continuation monad, but my point was precisely what you just said. That is, it's totally possible to implement continuation monads in vanilla javascript. It's not only possible, but its reasonably easy to do without heavy lifting.


JS has language level support for that now, too!

    async function getThings() {
      const user = await fetchUser(url)
      const address = await fetchAddress(user.address)
      return address.zipcode
    }

    // in some code
    getThings().then(zip => document.write(zip))


The reason async/await is implemented as a language extension in JS/C#/Kotlin/etc. is not because they're not powerful enough for a promise library, but because the language extension version composes with imperative control flow.

Haskell has exactly the same problem- do-notation is nice, but not very different from just any of the sibling comments' tricks with things like arrays of closures. It doesn't let you write things like this:

    while some_condition() {
        let result = await some_async_operation()
        await something_else_async(result)
    }
Instead you just use recursion and (lifted) higher order functions like elsewhere in Haskell. This is not very unusual there but it would be a pain in JS/C#/Kotlin/etc. which do use imperative control flow in idiomatic code.


Yes, they more or less had to lift while/for/if/switch etc into the continuation monad :)


IMO the builtin version is much nicer than the equivalent Haskell, and thinking in terms of composing continuations rather than composing monads makes more sense for imperative languages.


Indeed, one of the advantages monads give is the abstraction over different types of computations with side effects (state, exceptions, non-determinism etc)

Great tutorial:

http://binaryanalysisplatform.github.io/bap/api/v1.3.0/Monad...


> delimits a stateful computation which can then be embedded in a pure one in a principled way

You probably meant to say something different, but to many this reads like "you can embed and call impure code inside pure code while the pure code stays pure", which doesn't make much sense. I mean, you always do the opposite: embed/call pure code from the impure one.


With the state monad (State not StateT) you can run it in pure code because the runner is pure. It only “mutates” one context and when it’s all unwrapped it’s actually pure functions all the way down.


You probably meant ST, but yes, much to nnq's surprise, this is possible!


Explanation for innocent bystanders:

The State type is just a convenient wrapper around regular functions. State makes it look, syntactically, like mutation is happening. You can use it to implement real mutable algorithms etc. But once unfolded, it's just function calls.
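To unpack "just a convenient wrapper around regular functions", here's a rough JavaScript rendering of the idea (the `of`/`get`/`put`/`chain` names are illustrative, not any library's API): a "stateful" action is a pure function from a state to a [value, nextState] pair.

```javascript
// A "stateful" action is just a pure function: state -> [value, nextState].
const of = a => s => [a, s];               // return a value, leave state alone
const get = s => [s, s];                   // read the current state
const put = s2 => _s => [undefined, s2];   // replace the state
const chain = (m, f) => s => {             // bind: thread the state through
  const [a, s1] = m(s);
  return f(a)(s1);
};

// Reads like a mutable counter, but unfolds to plain function calls.
const tick = chain(get, n => chain(put(n + 1), () => of(n)));

console.log(tick(5)); // → [ 5, 6 ]: old value out, incremented state onward
```

No cell is ever mutated; the "state" is just an extra argument that `chain` passes along for you.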

ST, on the other hand, wraps true mutation. It extends its secret magic sauce hooks into the compiler and will generate real, actual (thread-local) mutation under the hood, in a way that guarantees the effects cannot escape the current context in which mutation is allowed.


The fact that something called "ST" is one of the most important libraries in the language, and that it's the one you should choose for state (leave aside the "State" library! that's not the one you want!) shows how, for all Haskell's power, it's terribly unusable for trivial reasons that never get fixed due to extreme deference to legacy mistakes. See also: Prelude.


ST is not the monad that should be used for state. You should use State for that.

ST is an optimization trick mainly used by framework implementors. In my professional experience writing Haskell, I've never seen ST used for anything important


I wouldn't call "mutable arrays" an optimization trick mainly used by framework implementors, or "not important"

https://wiki.haskell.org/Arrays


"mainly used by framework implementors"

IMO it's data structures and algorithms - specifically those where mutation gives substantial speed improvement (which isn't all by any stretch) - more than "frameworks" as typically conceived...


I’ve never had to use ST. I most definitely meant State, which is what I wrote in the grandfather post.


It's not one of the most important libraries in the language. It's hardly used.


not sure if this should stimulate me to re-try Haskell, or to totally scare me :)


The former :D

As elaborated below that comment, State isn't actually mutating anything, it's just hiding away the boilerplate of threading an extra parameter through your functions.

ST, on the other hand, is compiler magic akin to transients in Clojure - STRefs actually get mutated, below a pure context. But type magic ensures that STRefs can't persist across multiple "runs" of the ST value, so it still behaves like pure code. So it's actually a little less scary than Clojure's transients, although you're more likely to encounter the word "skolem".


It's a naming trick. It's not "impure" code, it's locally state-modifying, which looks impure inside a limited context, but the changes can't leak beyond a defined boundary. In Clojure this is called "transient", and it's much better explained there, because Rich Hickey believes programming should be comprehensible: https://clojure.org/reference/transients


> this reads like "you can embed and call impure code inside pure code while the pure code stays pure"

I think that's a fair summary, yes. Monads let you define exactly what you need to do around the impure code to make it safe, and enforce that it's never used in a way that could "infect" the pure code around it.


Monads make a lot of sense in a pure functional language like Haskell, because they allow you to program in an imperative style when you need it, and keep the rest of the program pure and functional.

The benefit of State and IO monads is not that you can use them everywhere so you now have an imperative language. The benefit is you can explicitly delimit their use, and you still get the "easier to reason about" for the rest of the code.

For languages which already supports mutable state everywhere by default it does not make much sense to use monads for this.


Monads are not directly related to handling side effects. Or to the famous IO that you're referring to here. What a Monad does is to force sequencing by creating a data dependency.

Speaking of IO ...

In an imperative program you get guaranteed ordering of those instructions, but that ordering is thread local, meaning that the compiler, the runtime, the kernel, the CPU are all free to reorder instructions for optimal execution and ordering between threads is forced via memory barriers, which is in fact incredibly error prone.

In an FP language such as Haskell you get the extra problem that the compiler is even more free to rewrite and execute your code in any order it sees fit due to non-strictness and to referential transparency.

What the IO monad does is to "suspend side effects" and to force ordering on evaluation. And guess what, the IO monad is useful even in your average imperative program for describing concurrent logic.
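"Suspend side effects" can be sketched in a few lines of JavaScript (a toy `IO`, not any real library; `putLine` and the `log` array are invented stand-ins for real effects): building the value only describes the effects, and nothing runs until `run` is called.

```javascript
// A toy IO: a description of an effect, not its execution.
const IO = run => ({
  run,
  flatMap: f => IO(() => f(run()).run()),
});

const log = [];
const putLine = s => IO(() => { log.push(s); });

// Building the program performs no effects yet...
const program = putLine("first").flatMap(() => putLine("second"));
console.log(log.length); // → 0: nothing has happened

// ...they happen, in order, only when the description is run.
program.run();
console.log(log); // → [ 'first', 'second' ]
```

Because the second effect is only reachable through the callback given to `flatMap`, the data dependency itself fixes the execution order.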

Here are the implementations I contributed to:

1. https://monix.io/docs/2x/eval/task.html (Scala)

2. https://github.com/typelevel/cats-effect (Scala)

3. https://funfix.org (JavaScript)

If you've ever worked with JS Promises, think of IO as a non-broken, pure, awesome alternative.

And this is just IO. There are other monads as well. List is a monad, Option and Either are monads ;-)

---

I recommend you let go of your misconceptions and do some learning because this is a freaking useful concept.


> I recommend you let go of your misconceptions and do some learning

Thank you for the advice! Monads seem to have some kind of smugness creation field.

I know there are other monads beside State and IO, I was specifically referencing the argument in the linked article which was about using monads for mutable state, basically turning the code into imperative style.


That wasn’t smugness. I only replied to you because those were my thoughts from 5 years ago. And I was wrong.

And yes, it’s a little troubling that such awesome concepts are not penetrating the mainstream, being so misunderstood.

I blame the available learning resources.

It took GC 30 years after its invention to become mainstream. Functional programming, actual functional programming, apparently takes longer than that.


> That wasn’t smugness.

Maybe not. It was still condescending, though...


> What a Monad does is to force sequencing by creating a data dependency.

Does it really? Maybe this is another one of those famous misconceptions!

https://hackage.haskell.org/package/tardis-0.4.1.0/docs/Cont...


A monad's semantics are mostly what bind creates. And bind is an operation that describes how to sequence expressions.

What you may be misunderstanding is that "sequencing expressions" does not mean "execute one expression after the other". It means, take a sequence of expressions and give it some meaning. Yes, that meaning can be "execute one after the other", or "run everything at the same time", or any other ordering. The meaning can ignore any ordering and only care about some context, or apply some order over the result in an ordered structure, or almost anything else.


I agree with your characterization, which is not at all equivalent to "What a Monad does is to force sequencing by creating a data dependency".


Nah, it's not really "one of those famous misconceptions".

The crucial detail is that TardisT must wrap a MonadFix, which is used for cyclical data structures. It effectively works not by making time go "backwards", but by making time loop around. It still creates a sequencing relationship.


> The crucial detail is that TardisT must wrap a MonadFix

So? The non-transformer version can still "read data before it's written". It's very hard to see any form of sequencing in that example.

> It still creates a sequencing relationship.

A sequencing relationship between what exactly?


The sequencing is described precisely in the signature of bind.

    (>>=)
        :: Monad m
        => m a         -- Left argument
        -> (a -> m b)  -- Right argument
        -> m b
You don’t need to explain an individual monad instance to notice that this signature requires “a” in order to generate “m b”.


That's not quite right.

Let's define an operator `(|>) = flip ($)`, which has the type signature `a -> (a -> b) -> b` and which, by your reasoning, should force sequencing too. Except it doesn't: laziness is a bitch to work with, and `a |> f |> g` evaluates to `g (f a)`, where g is evaluated first and demands `f a` only as needed. If you consider `1 |> (+1) |> const 4`, this will evaluate to 4 without ever performing the addition.

Instead, let's compare this to the monadic equivalent. `(return 1) >>= return . (+ 1) >>= return . (const 4)`. The sequence that's being forced here isn't that (+ 1) is evaluated, it's that the `return` in `return 1` is evaluated (producing a 1) before `return . (+ 1)` is ever touched, and that _that_ return is evaluated (presumably producing an unevaluated thunk for (1 + 1)) before `return . (const 4)` is touched.

In IO world, this corresponds to guaranteeing that the IO operations happen in sequence irrespective of the order in which the pure computations they wrap are evaluated in.


> the `return` in `return 1` is evaluated (producing a 1) before `return . (+ 1)` is ever touched, and that _that_ return is evaluated (presumably producing an unevaluated thunk for (1 + 1)) before `return . (const 4)` is touched.

That's not quite right either. The former returns need never be evaluated.

    > undefined >>= (return . (const 4)) :: Identity Int
    Identity 4


Right — I should've said "that's the mechanism through which the implementation of `>>=` can force sequencing". `>>=` for Identity doesn't, most of the other common ones do, in some sense or another.


Right! `>>=` can force sequencing. It doesn't for Identity, Writer.Lazy or State.Lazy, but it can. OK, but also Arrow and Monoid can enforce a (different sort of) sequencing relationship.

A monad neither forces a sequencing relationship nor is it the only thing that can force a sequencing relationship, so I can only conclude that "What a Monad does is to force sequencing by creating a data dependency" is indeed one of those misconceptions!


That's just Haskell's crazy evaluation semantics though, not something that applies to monads in general.


Indeed. But bad_user made a claim about monads in general ("What a Monad does is to force sequencing by creating a data dependency.") so all I need to do to refute it is give an example of a particular monad where the claim doesn't hold.


Monads force a denotation of sequencing. Of course you can write a language implementation where the sequencing of operation does not represent the sequencing of denotation (just as you can write a "compiler" that compiles all programs to Hello World), but, well, that's on you.


> Monads force a denotation of sequencing

Aha! Now we're on to something.


What about the Identity monad? Does that "force sequencing"? What "sequencing relationship" does it create?

If you are convinced that the Identity monad does "force sequencing" then you are also convinced that lazy evaluation "forces sequencing" (because the Identity monad is just lazy evaluation). That would be a fairly controversial conviction. If you are not convinced that the Identity monad "forces sequencing" then can you explain in more detail the meaning of your claim "What a Monad does is to force sequencing by creating a data dependency."?
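For concreteness, here's a hypothetical Python rendering of the Identity monad: its bind is nothing more than function application, which is why it's hard to see it "forcing" anything beyond ordinary evaluation:

```python
# Hypothetical sketch: the Identity "monad" in Python is just plain
# function application -- bind unwraps nothing and applies f directly.

class Identity:
    def __init__(self, value):
        self.value = value

    def bind(self, f):
        # f must return an Identity; there is no context, no effect,
        # and no ordering beyond ordinary evaluation.
        return f(self.value)

result = Identity(1).bind(lambda a: Identity(a + 1)).bind(lambda b: Identity(b * 2))
assert result.value == 4
```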


The identity monad's sequencing is just basic cause and effect: you need to have produced an `a` in order to get a `b`.

The sequencing is very obvious from the signature of `>>=`. You don't need to know anything about any instances to see that it's there.


> The identity monad's sequencing is just basic cause and effect: you need to have produced an `a` in order to get a `b`.

You most certainly do not! See https://news.ycombinator.com/item?id=16420742


It's in the signature. `b` has a causal dependency on `a` because `a` is on the left hand side of the arrow and `b` is on the right hand side of the arrow. That's it. Functional programming 101. There's no way around this.

The operational semantics and details of thunks are not relevant here. Tricks with `undefined` are not relevant here. This is just the meaning of the arrow type.


If that were true, why do you need a monad for sequencing? Why doesn’t function composition suffice, or something of the form a -> (a -> b) -> b?


Because the values have extra "stuff" surrounding them. That's why it's `m a` and not just `a`: the `m` holds the invisible context that enriches the `a`.

Although your example is exactly what's happening in the identity monad, because there is no extra stuff.


Monads have been called "executable semicolons", because you can define a function to be executed each time the program passes from one statement to the next.


How is sequencing described in the type of join?


Tardis and the ReverseState monad are more examples of people being funny and clever by creating misleading names and mental-models instead of making libraries easy to understand.

There's no "time loop" in a cyclical data structure. There's just a data loop. Moving backward through a circular buffer, or doing arithmetic mod N, isn't "time travel".


But TardisT is sequenced in the monadic value. It is not sequenced in its inner backwards-traveling state, but that is not part of the monadic structure.

Have you used TardisT? In order to use the backwards traveling state, you need to bind the state syntactically before you use it, just as you have to do with the forwards traveling state.

For example, a typical setup for TardisT is fixing up jump offsets for an assembler. To write this, you do something like

    assembler (Label l) = do
       fixup <- getFuture
       (curOps, offset) <- getPast
       modifyBackwards (M.insert l offset)

       let ops = curOps ++ [ Jump (M.lookup l fixup) ]
       sendFuture (ops, offset + 1)
    assembler ... = ...
Note that both the forwards and backwards traveling state needed to be bound before they were used. There is an inherent 'forwards-traveling' sequencing due to the monadic structure.

If TardisT were not monadically sequenced, you ought to be able to do

    assembler (Label l) = do
       (curOps, offset) <- getPast

       let ops = curOps ++ [ Jump (M.lookup l fixup) ]
       sendFuture (ops, offset + 1)

       fixup <- getFuture
       modifyBackwards (M.insert l offset)
In this sense, the forwards traveling state is privileged (and indeed if you write anything sufficiently complicated, you'll note that you'll need to add a lot of lazy pattern matches on the backwards traveling state, but not the forwards traveling one, again indicating a notion of sequencing, despite what it may seem).


Good explanation of Tardis, but would you really describe that as "forc[ing] sequencing by creating a data dependency"?


I would get rid of the forcing part, but I think my example shows it does cause a sequential dependency?


The separation of pure and impure code in Haskell isn't something inherent to monads but is instead a side-effect (hah) of the typing rules of the language. It's easy to see how this separation need not hold when using monads (look at any monad library in virtually any other language), but more importantly there are ways to accomplish this separation without the use of monads (an effect-tracking type system is being developed along side OCaml's algebraic effects/multicore project).


> The benefit of State and IO monads is not that you can use them everywhere so you now have an imperative language. The benefit is you can explicitly delimit their use, and you still get the "easier to reason about" for the rest of the code.

It is also important to note that apart from providing the ability to write imperative code, monads encapsulate computation. In my understanding, this is where their use shines. Getting to wrap values in such contexts in certain imperative languages can be tedious. Using a monad, one passes around the value within the context of the monad. There is a computation (aka context) associated with the value in the monad, and that computation is performed before a value is yielded.


I agree, but Monads also simplify non-blocking concurrent programming, allowing users to think sequentially, see F# Async and Haskell IO. C# had to add Async/Await support in the compiler and I would argue it is not as pleasant as the F# version using Monads.


There are some monads (or things close to monads) that are quite common in imperative languages. Java has

  - CompletableFuture  
  - Optional  
  - Stream  
  - dependency injection can be seen as a monad


Agreed. The "islands of immutability" approach.


Monad is practically synonymous with choice. Any time you multiplex on whatever is represented by a thing to choose a different thing, you're doing a monadic operation whether you want to or not. The difference is whether or not your language has libraries for this or you have to program it yourself every time.


Conditional evaluation might be a use-case of monads, but that doesn't make them synonymous. "getLine >>= putStrLn" (read a line from stdin, write it to stdout) is a fine Haskell expression of the IO () type, but nothing in it resembles a choice. Monadic expressions can be extremely, boringly linear.


Are you sure? Of all the messages the program could have printed, of all the I/O actions imaginable, the program somehow made a choice that was not hard-coded, but based on my input! The expression multiplexes one I/O action (what to print) based on the result of another (get input from user).

That's choice!

Edit: Just realised what might be confusing. putStrLn is not an I/O action in and of itself. It's a function that multiplexes on several I/O actions based on the input value. However, crucially, that input value is pure, and thus can only be hard-coded (maybe through a bunch of indirections, but ultimately statically determinable.) Monadic bind is what lets us depend on another I/O result.


You're stretching. :) But even if I accept the point you're making here, that doesn't make monads synonymous with choice. Every choice is not a monad, and every monad is not a choice. This is far too reductionist. Is there no choice involved in monoids, arrows, for example? What makes monads special in this regard? And what about I/O in languages that cannot implement these concepts?


> Monad is practically synonymous with choice.

Arrows also allow you to "multiplex on whatever is represented by a thing to choose a different thing" so you'll have to be more precise!


Sure, so the way I understand it, a monad is a special type of arrow, so that's like saying "You also live in France, so you'll have to be a bit more precise than Paris", no? I.e. both claims are equally true.


I think you have that the wrong way round.


please do not talk about Arrows publicly; it has done enough damage


I really like D's approach instead. A function declared pure can have no side effects but it can do whatever kind of mess it wants within its own code. It can have loops, assignments, reassignments, memory allocations, but it can't mutate variables outside its scope nor can it call any impure functions. I find this is a very practical compromise that lets you have the good parts of purity without ever getting the urge to write a blog post about monads.

https://dlang.org/spec/function.html#pure-functions


Haskell has the ST type for this, which is a stripped version of IO in which we can still make proper mutable inplace updates like in other languages, but there is a »runner« function called »runST«, which guarantees that the computation it contains does not leak any state to the outside – in other words, »runST <impure>« is pure from the outside. This allows embedding of mutable algorithms, e.g. inplace sorting of vectors, into pure code.


That's what Haskell calls "ST" and is far less powerful than the general notion of monads.


This is definitely cool, but monads is a more general concept. It can be used to isolate side-effects, but can also be used for many other things.


This is "const" in c++


I don't think this is the same. My C++ is getting rusty, so correct me if I'm wrong, but making a function const is mostly about making the "this" pointer const. A const function can still mutate global variables or call non-const functions, such as operator<< on cout. A pure D function can do neither of those things.


That's fair. C++ has const methods on objects, but I think nothing at global scope. But you shouldn't have global mutable state :-)


The weird thing is that you end up with something that looks like 2 different programming languages in one (talking about Haskell)

Also, Haskell's implementation of it is a bit weird and unintuitive (and the shortcuts just hide the messiness)

Given that you need state to interact with the outside world, making it more complicated in that essential area makes the learning curve harder


Do-notation does look different, but it is similar to the statement/expression distinction which is present in most imperative languages.


Does it make sense to view IO/State monads in Haskell as analogous to unsafe blocks in Rust?


No, that does not make sense. Rust's unsafe code lets you do some things that could violate the compiler's expectations, and making sure that you're aware that you need to manually enforce those invariants. Haskell has some unsafe* functions that are exactly like that, too, such as unsafePerformIO, that are analogous. On the other hand, the IO and State monads are completely safe-to-use, using a simple abstraction to represent impurity in a pure language.


I don’t think so.

- IO can be seen as a tag added to a type that brings »the outside« into scope: the console, the network, the current time, the hard drive. In Rust, these are always available, regardless of the type of the expression at hand.

- IO is »viral«, in the way that things that touch IO will become IO as well; there is no way to remove IO to yield a pure value. If a function in Rust contains an unsafe block, is the whole thing unsafe? I don’t know enough Rust, but I think it’s not.


unsafe is not viral; if it was, everything would be unsafe. An unsafe block is the boundary where you tell the compiler "I've got this bit", and so it trusts you and considers it safe.


Funnily enough, “unsafe” in Haskell (e.g. unsafePerformIO) is the same way. Not viral so as to say “trust me.”


Somewhat tangential, but I became infatuated with Idris's Effect system as a more approachable alternative to using monads directly. Yes, it's still monads under the hood IIRC, but the syntax is a lot easier to reason about IMO.

https://www.idris-lang.org/documentation/effects/


I feel like a recurring theme I find in Idris (though keep in mind it's just something I play with every now and then, not something I use with any kind of regularity) is that it has this "feels like Haskell, but less frustrating and cleaner" quality. I feel like I make fewer dumb mistakes while using it, and it gives me pretty much all the stuff I love about Haskell.

You of course have the relatively simple effects system (as mentioned), but you also have the ability to use `do` notation without being in the monad context, and eager evaluation, which I find makes my programs simpler to reason about.


I agree. I like to think that Idris is to Haskell what C# is to Java, with the exception that it's still a non-production-ready academic open source language.


It's not that academic AIUI - it was designed for production use, right?


From the release notes of 1.0[0] last year:

> Idris is primarily a research tool, arising from research into software development with dependent types which aims to make theorem proving and software verification accessible and practical for software developers in general. In calling this “1.0”, we mean no more than that the core language and base libraries are stable, and that you can expect any code that compiles with version 1.0 to compile with any version 1.x.

> Since Idris has less than one person working on it full time, we don’t promise “production readiness”, in that there is still a lot to do to make the compiler and run time efficient, and there may be libraries you need which are not available. And, there will certainly be bugs!

[0] https://www.idris-lang.org/idris-1-0-released/


Can someone please explain in a simple way what Monads are and why I want to use them in normal daily programming (assume, incorrectly, that I only know Python)?

I know I can google this, and have, and unfortunately I came away even more confused as a non-Haskell practitioner.


My advice is not to worry about it (unless the itch bugs you so much that you're willing to invest a lot of time researching the answer).

I don't think you'll ever get a single, straight answer that satisfies you. Just reading this discussion, I'm sure you can see that people describe monads in wildly different ways. Some descriptions are minimalist (perhaps a bit reductionist): it's just an algebraic data structure, nothing to get excited about. Many rely heavily on metaphors: containers, "programmable syntax", and yes, even burritos have been used to explain them. Some conflate monads with properties that do show up often in monadic code, but aren't themselves inherently monadic (immutable data, pure functions, sequencing of evaluation). Some are very language-limited, in the sense that the description invariably turns into a debate over the semantic fine-points of the language being used (this will almost always happen when Haskell gets discussed, because expression evaluation is a very subtle thing in Haskell). Frankly it's a mess. The only safe ground you have is the algebraic definition, but that's not terribly useful for building an intuition about how one might use them.

I think that you (and me, and most everyday programmers) would use them in your code only if you were using some library that was designed in a monadic style. In Haskell, so many libraries use monads these days that it's a given: using Haskell for any practical purpose means using monads, whether you want to or not. In OCaml (about as close to Haskell as you can get without being purely functional), they show up mainly in third-party libraries, mostly when you want to do something tricky like asynchronous I/O -- again, you use monads if you want to use the library. In run-of-the-mill imperative languages, though, monads don't tend to show up very often, and when they do, they usually are very specific to one library or another.


I like this answer, but it leaves one important thing out. Mathematicians have been studying abstraction for thousands of years. When they decide something is useful, that probably means it has broad applicability and enough functionality to get interesting results out of.

Mathematicians decided monads are useful.

Take advantage of that. Even if you're working in a language that can't express the abstraction, you can allow it to inform your library design. If something could be a monad, that's usually a better choice than something ad-hoc instead. (Sometimes it's overkill and a monoid would suffice, but both are better choices than an ad-hoc interface without any broadly accepted theoretical basis.)


I'm glad you brought up monoids here as a (sometimes) alternative -- in addition to their inherent properties, people have a much easier time wrapping their heads around monoids.

And yes, while abstraction is generally good, principled abstraction is generally better. At the same time, a lousy monadic library is still lousy. The sets of "great libraries" and "algebraically informed libraries" intersect, but neither subsets the other!


Great answer! Should be read by anyone who wants a down-to-earth pre-introduction to monads.


The definition of the thing we call “monads” is a poor place to start understanding what they’re for. The only way to understand monads is to use a number of monads, and you’ll start to see why they’re useful.

One great monad is the Option monad. It lets you represent, at the type level, the two cases of having something or not having it. Think about when you use a Python query operation that may return nothing. You don't know if you'll get back an empty list, or a null, or what. If the API returned an Option, this would indicate what happens when you get nothing back. That's cool. You can then do computations against maybe having a value. Option.map(lambda) will apply that lambda function to the value only if it exists, saving your program from blowing up on a null. Even better, tho, you can chain a bunch of operations that may fail together. In the map example, if your lambda also returns an Option, you would end up with the type Option[Option[A]]. Monads implement a flatten method that will turn an Option[Option[A]] into an Option[A]. They abbreviate this with "flatMap", which can be implemented as a call to map, followed by a call to flatten.

Futures are another great monad that help model asynchronous operations. val f = Future(myDatabaseQuery) will return a Future[A], read as "future of type A". You can perform computations on the result using the map function: f.map(lambda). You don't know when this will happen, you just know it will. If your lambda also returns a Future, you will end up with Future[Future[A]]. You convert this into a Future[A] with the Future.flatten function. The shorthand for this is .flatMap(lambda).

Another great monad is the Either[A, B] monad. It's like the Option monad, but rather than having None or Something, you have Left or Right. If you have an operation that may throw an exception, you can wrap it in an Either. A Left value will be some error, and the Right value will be success. You can conditionally perform operations on the success using the map function: Either.map(lambda). If the Either doesn't have a Right value, it will not execute your lambda, saving you from performing an incoherent operation. In Python sometimes you might return either a string or an int, representing failure or success. Either[String, Int] is just formalizing that pattern, and letting your compiler prove that you've handled both cases. You can model sequential computations that may fail using Either: if your lambda returns an Either, you'll get an Either[String, Either[String, Int]]. You can flatten these using the Either.flatten method. The shorthand for this would be flatMap.
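The Option pattern described above translates to Python fairly directly; here's a minimal hypothetical sketch (class and method names are illustrative, not a real library):

```python
class Option:
    """Minimal Option: either holds a value or is empty (Nothing)."""
    def __init__(self, value, present):
        self.value, self.present = value, present

    @staticmethod
    def some(value):
        return Option(value, True)

    @staticmethod
    def none():
        return Option(None, False)

    def map(self, f):
        # Apply f only if a value is present; otherwise pass the empty on.
        return Option.some(f(self.value)) if self.present else self

    def flat_map(self, f):
        # map followed by flatten: f itself returns an Option.
        return f(self.value) if self.present else self

def lookup(d, key):
    """A query that may return nothing, expressed as an Option."""
    return Option.some(d[key]) if key in d else Option.none()

users = {"alice": {"email": "a@example.com"}}

# Chain operations that may each fail, with no None checks in between.
email = lookup(users, "alice").flat_map(lambda u: lookup(u, "email"))
assert email.present and email.value == "a@example.com"

missing = lookup(users, "bob").flat_map(lambda u: lookup(u, "email"))
assert not missing.present
```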


In computer science, a monad is a notation representing an intermediate value and its computational context. E.g., data necessary to continue the next steps of a calculation, or some kind of marker of the value's source (IO monad.)

It's an abstraction which rarely carries its own weight, but it's necessary to understand Haskell for what seem to amount to cultural reasons. Other strongly typed functional languages don't place nearly so much emphasis on it.


I would say... monads are a concept that is useful when you are programming with monoids.

Do not bother trying to understand monads if you don't already have a good understanding of monoids.

Starting with monads is like trying to learn to divide before you learn to count.


Then can you explain a monoid please?


While monoids are explicitly identified in some functional languages, they don't have to be. Monoids just exist whenever the axioms are satisfied.

A monoid is a set, and an operation for combining elements from the set, that satisfy a few properties.

Closed: For every combination of elements, the result of combination is in the set.

Identity: There is an element of the set that when combined with any other element, results in that same other element.

Associative: For any elements a, b, c: a+(b+c) = (a+b)+c

Examples are helpful here. I'll list a few as (set,operation,identity)

(Integers,+,0) (Integers,*,1) (Strings,&,"") (Lists,&,[]) (functions t->t, function composition, identity function) (Bool,and,True) (Bool,or,False) (Bags of candy and the empty bag, dumping bag a into bag b, the empty bag)
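A quick Python check of the monoid axioms for a few of those (set, operation, identity) triples (an illustrative sketch over small sample sets, not a proof):

```python
# Spot-check the monoid axioms on small sample sets.

from functools import reduce

def is_monoid_on(samples, op, identity):
    """Check associativity and identity for the given samples."""
    assoc = all(op(a, op(b, c)) == op(op(a, b), c)
                for a in samples for b in samples for c in samples)
    ident = all(op(identity, a) == a and op(a, identity) == a
                for a in samples)
    return assoc and ident

assert is_monoid_on([0, 1, 2, -3], lambda a, b: a + b, 0)       # (Integers, +, 0)
assert is_monoid_on([1, 2, 3], lambda a, b: a * b, 1)           # (Integers, *, 1)
assert is_monoid_on(["", "ab", "c"], lambda a, b: a + b, "")    # (Strings, &, "")
assert is_monoid_on([[], [1], [2, 3]], lambda a, b: a + b, [])  # (Lists, &, [])

# Subtraction is not associative, so it is not a monoid operation.
assert not is_monoid_on([0, 1, 2], lambda a, b: a - b, 0)

# The payoff: any monoid can be folded the same generic way.
assert reduce(lambda a, b: a + b, ["foo", "bar"], "") == "foobar"
```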


For me, I really like the “programmable semi-colon” metaphor, where essentially a monad glues together computations and has additional context/actions carried along between the glued computations.

If you’re looking for external resources, it really helped my understanding by reading all the definitions you’ll find online, looking at every example (including the source code on hackage!), and building some play examples myself. It also helped me to first understand functors, applicative functors, then monoids as I was going through other resources.


The problem with the “programmable semi-colon” metaphor is that you don't need a monad just to "glue together computations" (sequencing). That only requires an applicative functor. What sets a monad apart is that you can use the result of the first action to determine the second action.

Functor - If you have a value in a context (f a), you can apply a function (a -> b) to that value to obtain a new value in the same context (f b).

"Pointed" Functor - A Functor with a special unit value (a "point"). In Haskell this is part of the Applicative typeclass and the unit value is written as "pure ()" or (deprecated) "return ()". If starting from the unit value instead, "pure x = fmap (const x) unit".

Applicative Functor - If you have a function in a context (f (a -> b)), and a value in a context (f a), you can combine them to produce a new value in a new context (f b). Combining the contexts is where sequencing comes in, though there are other interpretations besides sequencing effects.

Monad - This can be expressed two ways: 1. The "join" version: If you have a value in a nested context (m (m b)), you can "flatten" the contexts to produce the same value in a new context (m b). 2. The "bind" version: If you have a value in a context (m a) and a function from that type of value to another value in a context (a -> m b), you can combine them to produce a value in a new context (m b).

These perspectives may appear rather different, but "bind" can be recreated from "join" by first mapping the function (a -> m b) over the value (m a) to obtain (m (m b)) and then collapsing the nested contexts to (m b) with "join". Similarly, "join" can be seen as a special case of "bind" with the value (m (m b)) and the identity function (m b -> m b).
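The bind/join equivalence described above can be sketched in Python using lists as the monad:

```python
# Demonstrating the bind/join equivalence, using Python lists as the
# monad (illustrative sketch).

def fmap(f, m):
    """Functor: map a plain function over the context."""
    return [f(x) for x in m]

def join(mm):
    """Monad via join: flatten one level of nesting."""
    return [x for m in mm for x in m]

def bind(m, f):
    """Monad via bind: f returns a value already in the context."""
    return [y for x in m for y in f(x)]

f = lambda x: [x, x + 10]

# bind recreated from join: map f over the value, then flatten.
assert bind([1, 2], f) == join(fmap(f, [1, 2])) == [1, 11, 2, 12]

# join recreated from bind: bind with the identity function.
assert join([[1], [2, 3]]) == bind([[1], [2, 3]], lambda m: m) == [1, 2, 3]
```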

Anyway, I think the article is looking at this backward. The power of (pure) functional programming lies not in providing the most powerful abstractions all the time, but rather in deliberately using the least powerful abstractions that will get the job done. Knowing when you need—or more to the point, don't need—a Monad is a key part of making your program easier to reason about. A traditional imperative language offers the power of IO and Monad everywhere, transparently, whether you need it or not, which makes reasoning about the program very difficult—any part of the program could potentially depend on and/or affect anything in the program's runtime environment. Perhaps a certain function only needs the ability to sequence effects which don't depend on their results (Applicative). Or perhaps it only needs the ability to transform a value (Functor). Or perhaps it doesn't need a context at all (pure code). The less power you demand from the abstraction, the more you can statically prove about the code's runtime behavior, which is good not only for correctness but also for reusability.


Tom Stuart did a seriously great talk about this, which really helped my understanding of monads. You can read it as a blog post, but I recommend watching the video. It's worth finding the time for. https://codon.com/refactoring-ruby-with-monads


Monad is an interface which a lot of useful "context" types conform to - I find that if a function returns any kind of "secondary value" or "effect" (think the sort of things aspect-oriented programming is supposed to be for - permissions, logging, database transaction boundaries) this can usually be represented by a type which will form a monad. http://blog.sigfpe.com/2007/04/homeland-security-threat-leve... is a simple example if you don't mind the Haskell syntax.

The concept is useful because it means that we can implement generic functions that work for any monad - http://m50d.github.io/2013/01/16/generic-contexts was one practical example I met at my job (if you can read Scala - I feel it's quite Pythonlike but YMMV). Better still, these functions already exist for you - things like the "traverse" function I described there, or cataM (write a function that operates on one level of a tree and returns a type in a monadic context, get the corresponding function that operates on the whole tree), are just a library import away.

Why do you want to use them? They make it practical to replace "magic" with plain old values - somewhat cumbersome values, but values that are usable enough that the advantages of working with plain old code start to show. If you're using Python, anything you're using decorators for is a very good candidate; I already mentioned AOP. A really simple example would be error handling: in a lot of languages you have the choice between https://blog.golang.org/errors-are-values (explicit, clear, but too cumbersome to use in practice; means your functions are hard to read because all the error handling gets in the way of reading the business logic) and exceptions (concise, but magic and invisible; code with exceptions can end up with really surprising control flow). Monads give you a middle way: https://fsharpforfunandprofit.com/rop/ , where your error handling is visible enough to not be a surprise when it happens, but hidden away enough that you can read the "happy path" code without getting too distracted.
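The "railway" error-handling middle way mentioned at the end can be sketched in Python with a hypothetical Result type (names are illustrative, not a real library):

```python
# Hypothetical sketch of "railway oriented" error handling: a Result
# carries either a success value or an error, and bind keeps the happy
# path readable while errors short-circuit.

class Result:
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

    def bind(self, f):
        # Errors pass through untouched; successes flow to the next step.
        return self if self.error else f(self.value)

def parse_int(s):
    try:
        return Result(value=int(s))
    except ValueError:
        return Result(error=f"not an integer: {s!r}")

def reciprocal(n):
    return Result(error="division by zero") if n == 0 else Result(value=1 / n)

# The happy path reads top to bottom; errors fall out the side.
ok = parse_int("4").bind(reciprocal)
assert ok.error is None and ok.value == 0.25

bad = parse_int("oops").bind(reciprocal)
assert bad.error == "not an integer: 'oops'"
```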


You don't typically need them in normal programming.

Have you used the Python twisted framework? The way it chains continuations is a good example of a monadic bind in the wild.


On the off chance that you know C# and Linq:

    // A Box that holds stuff
    public class BoxOfStuff<TStuff>
    {
        public TStuff Value { get; private set; }

        public BoxOfStuff(TStuff value)
        {
            Value = value;
        }
    }

    // Class that holds methods for working with Boxes of stuff
    public static class Ext
    {
        // Takes something and puts it in a box
        public static BoxOfStuff<TStuff> PutInABox<TStuff>(this TStuff thingToPutInTheBox)
        {
            return new BoxOfStuff<TStuff>(thingToPutInTheBox);
        }

        // Converts a Box containing one thing into a Box containing something else
        public static BoxOfStuff<TOtherStuff> ConvertBox<TStuff, TOtherStuff>(this BoxOfStuff<TStuff> theBoxItself, 
            Func<TStuff, BoxOfStuff<TOtherStuff>> functionThatConvertsTheBox)
        {
            return functionThatConvertsTheBox(theBoxItself.Value);
        }
    }

    // Class that holds a string value
    public class StringHolder
    {
        public string Value;
        
        public StringHolder(string value)
        {
            Value = value;
        }
        
        public override string ToString()
        {
            return Value;
        }
    }

    var result = new StringHolder("A random string").PutInABox()
        .ConvertBox(randomString => new Random((int)DateTime.Now.Ticks).Next().PutInABox()
                .ConvertBox(randomNumber => (Guid.NewGuid()).PutInABox()
                        .ConvertBox(guid => ($"{randomString}, {randomNumber}, {guid}").PutInABox())));
        
    Console.WriteLine(result);

    // Compare the above result to this:    
    public static class LinqExt
    {
        public static IEnumerable<T> ToEnumerable<T>(this T t)
        {
            return new List<T> { t };
        }
    }

    var linqResult = new StringHolder("A random string").ToEnumerable()
        .SelectMany(randomString => new Random((int)DateTime.Now.Ticks).Next().ToEnumerable()
                .SelectMany(randomNumber => Guid.NewGuid().ToEnumerable()
                        .SelectMany(guid => ($"{randomString}, {randomNumber}, {guid}").ToEnumerable())));
                
    Console.WriteLine(linqResult);
BoxOfStuff + PutInABox + ConvertBox = Monad.

This is analogous to:

IEnumerable + ToEnumerable (the Return/unit function) + SelectMany (the Bind function). To be honest, this is a fairly loose, non-rigorous definition even by programming standards, but if you've used LINQ before it should give you an idea of how useful Monads can be and how they can be used.

LINQ works the way it does because of Monads. In fact, if I were to add a Map function (Select) to BoxOfStuff and rename ConvertBox to SelectMany, I could use query syntax, e.g.

   var result = from thing in myBoxOfStuff
                select thing.ToString();


Basically the traditional way to do I/O is to just call I/O functions (like printing to the terminal and reading keyboard input) in the middle of the program.

There is another way, which consists of having the program only consist of a pure function that returns the kind of an I/O function to call, its parameters, plus a closure that takes the result of the I/O function, and returns again a pair of I/O function kind, parameters and closure.

Then, an external "executor" can just perform the I/O, call the closure with the result, perform the I/O again, etc.

The result of this is the program can be executed to do I/O, but all functions are side effect free (they just compute a result based on the arguments).

The tuple of I/O function kind to call plus parameters plus closure is called an "I/O monad" (or equivalently it can be just the closure).
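The description-plus-executor idea above can be sketched in a few lines of Python (all names here are invented for illustration): the "program" is a pure value describing which I/O operation to perform plus a continuation, and only the `run` loop actually performs effects.

```python
# A "program" is a pure value: either Done(result) or an I/O request
# paired with a continuation that receives the I/O result.
class Done:
    def __init__(self, value): self.value = value

class PrintLine:
    def __init__(self, text, cont): self.text, self.cont = text, cont

class ReadLine:
    def __init__(self, cont): self.cont = cont

# A pure value that *describes* a dialogue without performing it:
program = PrintLine("What is your name?",
           lambda _: ReadLine(
           lambda name: PrintLine(f"Hello, {name}!",
           lambda _: Done(None))))

def run(prog, input_lines, output):
    """The impure 'executor': performs each requested I/O operation
    and feeds the result back into the continuation."""
    while not isinstance(prog, Done):
        if isinstance(prog, PrintLine):
            output.append(prog.text)
            prog = prog.cont(None)
        elif isinstance(prog, ReadLine):
            prog = prog.cont(input_lines.pop(0))
    return prog.value

out = []
run(program, ["Ada"], out)
print(out)  # -> ['What is your name?', 'Hello, Ada!']
```

Note that `program` itself is an ordinary immutable value; the same description can be handed to different executors (real stdin/stdout, a test harness, a network pipe).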

Now in this model if you want to have subroutines you need them to return an I/O monad, and need to be able to "chain" the rest of the computation to it, which is done by adding a method to the I/O monad called "bind".

These subroutines might not do any I/O, but it's still convenient for them to return an I/O monad, so an operation called "return" is added, which builds an I/O monad whose I/O step does nothing and simply yields the value passed to "return".

This interface with those "bind" and "return" functions is called a "monad", of which the I/O monad is an instance (along with some laws: bind is associative, so binding twice is the same as binding once with the composition of the two functions, and "return" acts as a left and right identity for bind).

Another example is the "state monad", which is like the I/O monad except the operations are "read variable #i" and "write variable #i" (Haskell uses STRef objects instead of variable indices, but it's essentially the same).

While the I/O monad can only be executed "externally" to the program, the state monad can be executed internally: a pure function can be written that takes an array of initial values of the variables, a state monad instance and executes the state monad instance and gives back the final variable values.
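A minimal Python sketch of such a state monad (hypothetical names; Haskell's version is typed and uses STRef, but the shape is the same): each action is a pure function from the current state to a (result, new state) pair, and the "executor" is itself a pure function.

```python
def ret(value):
    # "return": an action that yields a value and leaves the state alone
    return lambda state: (value, state)

def bind(action, f):
    # chain a second action onto the result of the first
    def chained(state):
        result, state2 = action(state)
        return f(result)(state2)
    return chained

def get(i):            # "read variable #i"
    return lambda state: (state[i], state)

def put(i, value):     # "write variable #i" (builds a fresh tuple)
    return lambda state: (None, state[:i] + (value,) + state[i + 1:])

def run_state(action, initial):
    """The pure 'executor': threads the state through and returns
    the final result together with the final variable values."""
    return action(initial)

# x = read #0; write #1 (x * 2); return x
prog = bind(get(0), lambda x:
       bind(put(1, x * 2), lambda _:
       ret(x)))

print(run_state(prog, (21, 0)))  # -> (21, (21, 42))
```

Everything here is a pure function; the apparent "mutation" is just a new tuple threaded from step to step.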

So this gives a more complicated way of expressing imperative programming that has the advantage of only involving pure functions and thus being usable in pure languages like Haskell.

Now, why isn't this the best thing since sliced bread?

The obvious problem is that a naive implementation is massively inefficient, since every time you want to perform a memory write you need to stuff the stack of the program into the heap and return it as a closure.

Furthermore, monads for normal I/O and state are just not very useful: isolating the computation can be done with Rust's model of lifetimes and linear types, the ability to execute it in different ways can be achieved by providing a dynamically-dispatched set of I/O functions, and stopping the computation can be performed with unwinding.

The only practical advantage of monads is that you can pause the computation and have the state on the heap instead of the stack. This is why efficiently implemented languages like Rust (and in this case Java/C#/JavaScript) only use monads to implement futures/promises, where this ability is exactly what is wanted, and don't use state monads or I/O monads for synchronous I/O.

As an aside, the reason futures/promises are the best way to do I/O is that when the computation is paused, the state is stored on a heap in a compact layout (as a future/promise monad instance), while with either threads or coroutines it consists of an inefficiently laid out machine stack in userspace (that also takes a lot of virtual address space since it must be able to grow arbitrarily) and in case of native threads a machine stack and thread structure in kernel space as well.

The async/await and future/promise support in Java, C# and JavaScript is essentially a direct implementation of the Haskell monad model, with the downside of requiring an allocation for each await.

However, there's also a way to have a monad-like style while preserving near-optimal performance, which is used in Rust's async/await generators.

The idea is that instead of having monads be closures that return closures, they instead become generic "objects" with a method that, upon being called, updates the state of the object and returns a description of the I/O operation to perform (or even just performs it with parameters provided by the caller), and is called again and again until it signals that the computation is done (this of course involves mutable state, but the mutation can be restricted with ownership).

"bind" can then be performed by simply defining a larger object that embeds the inner "monad" and the whole thing can be monomorphized, thus removing any dynamic dispatch and having a program that behaves as normal, except that instead of storing variables in the stack, it stores them in a single variable that needs to be immovable, either by heap allocating or ensure it's not moved on the stack. Note however that you need further allocations for recursion and in general for any dynamic calls whose state does not have a bounded size.

TLDR: monads primarily exist because Haskell must be pure and doesn't care about performance, but also because they are the only way to do memory-efficient async I/O.


   f = let s1 = 23 in let s2 = 42 in s2
Grammatically, this reminded me of Erlang, and hence indirectly of Prolog. And now that I think about it, Prolog is considered a logic programming language. I'm not advocating logic programming languages, but it might stand to reason that the features of logic programming languages facilitate reasoning about programs written in them.

Looping back, Erlang doesn't generate discussions about monads. Erlang is engineered so programmers don't think about monads. Erlang programmers just use them. It's transparent. It's non-controversial because Erlang doesn't stake out an intellectual argument.


When I worked at Basho, the senior Erlang developers definitely thought/talked about monads and how useful they could be in certain situations.

(I, alas, was and remain insufficiently clueful about monads to understand the discussions.)


Erlang is pure, but it's not lazy, so it's not an issue like in Haskell.


I'm not sure there is anything lazier than "Let it crash." Erlang is lazy at the question in addition to being lazy at the answer. However, all that laziness is transparent because Erlang is an engineering artifact not an academic artifact.


> The question is, just looking at the above code, it looks like "s" is mutated. Yes the implementation in the monad could be pure and use state threading, but how does that help us reason about the above imperative program?

The value of the monad is precisely that both these things are true. We can use our fast-and-loose imperative intuition most of the time, but if we ever get confused about how the "mutation" of s interacts with some other effect (e.g. a continuation), we can desugar the code into the pure state-threading version and bring all the power of our pure, referentially transparent reasoning tools to bear. Most of the time we don't, because that version of the code looks a lot more cumbersome, but it's there if we need it. Whereas if we find ourselves in the same position with the Javascript, we're stuffed; when the effect interactions get too complicated to reason about imperatively, we have no alternative.


You're using monads whenever you use Java 8 streams, Javascript promises, ActiveRecord and collection operations in Rails / Ruby; monads are everywhere these days. Monads are one of the handiest design patterns you can use; all you really need is a decent lambda syntax and you're off to the races.


The monad abstraction takes a lot of unlike programming features and abstracts away most of the important stuff to show something they have in common. The question is whether users of those features need to know about this abstraction? Why should we care that they have something in common? They're mostly different.

As concepts get more abstract, they get harder to teach. I don't think knowing that JavaScript promises are a monad is all that helpful when learning to do async I/O.


> Why should we care that they have something in common? They're mostly different.

You can of course ignore it, but it's useful to care about this because they are actually similar over this shared commonality. You start seeing patterns over which you can apply the same general functions. Just like you can map over an Option or a List or any "mappable-over" structure.

This is similar to asking (in a made-up language, forget the specifics): "why should I care that Lists and Sets and Arrays and Enumerations have a shared 'iterable' interpretation? Sets and Lists are mostly different!" Well, because sometimes it's useful to have functions that operate on iterables, instead of writing one for Lists and another for Sets. Once this idea "clicks" in your mind, you start seeing more and more of these commonalities, and it starts to shape how you think about coding.
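A quick Python sketch of writing once against the shared interface (the `Option`/`MyList` classes and the `.map` protocol here are invented for illustration):

```python
class Option:
    """A value that may be absent, with a .map method."""
    def __init__(self, value=None, empty=False):
        self.value, self.empty = value, empty
    def map(self, f):
        return self if self.empty else Option(f(self.value))

class MyList:
    """A list wrapper with the same .map interface."""
    def __init__(self, items): self.items = items
    def map(self, f): return MyList([f(x) for x in self.items])

def double_all(mappable):
    """Written once against the shared 'mappable' interface,
    usable with Option, MyList, or anything else with .map."""
    return mappable.map(lambda x: x * 2)

print(double_all(Option(21)).value)        # -> 42
print(double_all(MyList([1, 2, 3])).items) # -> [2, 4, 6]
```

The payoff grows with the abstraction: functions written against "mappable" (Functor) or "bindable" (Monad) work unchanged over optionals, lists, parsers, futures, and so on.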


Sure, I agree.

My point is that just as with other design patterns, it's possible to overdo it and lose your audience. Enthusiasts can definitely go overboard here.


> Why should we care that they have something in common?

Well, for one thing, you can write reusable code that leverages this commonality and provides common functionality with regard to any Monad (ditto with related abstractions like Monoids, Applicative Functors, basic Functors, etc.)

That's kind of the whole point: you can provide in reusable library code what in other language would, at best, be recipe-driven repeated-boilerplate design patterns.


You're using monads whenever you code in any imperative language. The "bind" pattern is simply so pervasive that you don't even recognize that it's there, but any time you use information obtained through one side-effect as part of another side-effect, that's a monad. That trivial read-a-name-and-print-a-greeting program which frequently shows up on the first day of Intro to Programming? Yep, that's using a monad.

The difference with functional programming is that you can choose not to use monads, and IO, and other powerful abstractions which are baked in so pervasively into other languages. This enables more powerful static analysis and code reuse, but it also means that you need to be aware of what you're giving up in the process, such as the ability to choose effects at runtime based on the results of prior effects. People who have been using monads throughout their entire programming career are thus suddenly faced with trying to understand what a monad actually is and what sets it apart from non-monadic code. This leads to people writing tutorials and papers and blog posts, which causes people to think it's some super-complex concept accessible only to genius mathematicians, which leads to more tutorials...

The neat thing is, once you've begun to see the pattern, it turns out to be useful in other areas besides sequencing I/O effects; non-determinism, coroutines, generators, parsers, and so on. Most languages make sequencing I/O effects so easy you don't even notice it's happening, but other monadic patterns, lacking built-in syntax, are far more unwieldy. Haskell puts them all on the same footing via the Monad typeclass and do-notation.
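To make the "same bind pattern, different effect" point concrete, here's a hedged Python sketch of a Maybe-style monad (all names invented): the same use-one-result-inside-the-next-step shape as the read-a-name-and-print-a-greeting program, but with failure threaded implicitly instead of I/O.

```python
NOTHING = object()  # sentinel for "no value"

def bind(maybe_value, f):
    """Run f only if the previous step produced a value."""
    return NOTHING if maybe_value is NOTHING else f(maybe_value)

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return NOTHING

def reciprocal(n):
    return NOTHING if n == 0 else 1 / n

# Roughly "do { n <- parse_int s; reciprocal n }" in Haskell notation
def compute(s):
    return bind(parse_int(s), lambda n: reciprocal(n))

print(compute("4"))                # -> 0.25
print(compute("oops") is NOTHING)  # -> True (parse failed, second step skipped)
print(compute("0") is NOTHING)     # -> True (second step failed)
```

No explicit error checks appear in `compute`; the short-circuiting lives entirely in `bind`, just as sequencing lives in the IO monad's bind.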


Monads are too general though. Saying 'try monads' to solve a problem is like telling someone 'try vowels' when they're trying to pronounce a word.


I stumbled through high school math and never took any in university so this is all wizardry to me. All I know is that when I began using immutable data in JavaScript, my data processing workflows and state machines became nearly trivial to understand and debug. Pure functions are also beautiful to reason about and test and debug.

I'm not sure about monads. I barely understand them. But if I could have one wish, it would be to make immutable data structures first class in more languages, like JS and Python.


Your intuition about the benefits of pure functions & immutability are spot-on, and I think that any proponent of monads would agree that these are fundamentally more important concepts than monads themselves. (Many of the ways that people use monads would break, or lose their point, if the data they operated over could be mutated outside of the monadic context.)

You can get an awfully long way with immutable data structures without involving monads. There are plenty of languages that give priority to immutable data, but don't have monads in any inherent way: Prolog, Erlang, Ocaml, and Scheme (to a slightly lesser extent) all come to mind.


Prolog has definite clause grammars (DCGs), which are very similar to monads.

You can think of a DCG as giving you two implicit arguments, which you can use to express concatenations and in fact arbitrary relations between states.

There are also ways to access these implicit arguments, which are similar to the corresponding constructs in Haskell.

DCGs are a very important feature of Prolog. They are frequently used for parsing and state transformation tasks. Like monads in Haskell, DCGs often make working with immutable data much more convenient, because you can make the more uninteresting parts of your code implicit. Other logic programming languages like Mercury also support DCGs.


It never occurred to me that there were similarities between DCGs and monads, but that's an interesting argument. Thank you for this!


I've been wondering a similar thing, lately.

In principle, I like the way that Haskell enforces purity at the compiler level, and kicks everything impure out to the boundary of the program. I can try to write code that's that clean in another language, but it's always an unstable equilibrium. The compiler won't help me make sure functions stay pure, so all it takes is adding one little bit of statefulness to one function out there, and suddenly, like a virus, everything that calls it, everything that calls anything that calls it, and so on is impure, too.

On the other hand, monads are *@#$% opaque. I can't help feeling that, for many use cases, they are unnecessarily opaque.

At the risk of being branded a heretic: not all assignment makes a function impure. If the function never modifies or reads a mutable variable while that variable is visible outside its scope (e.g., it's fine to return that accumulator once you're done with it), and it's more efficient (in terms of performance or code clarity) to get the job done with some mutation, why not? I'd like a language that doesn't worry so much about what consenting adults do behind closed doors. Rather than not letting me do it at all, I'd rather have a language that just helps me make sure I'm doing it in a safe manner that doesn't lead to viral impurity.
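The "consenting adults" case is easy to show in Python: the helper below mutates a local accumulator, but nothing outside the call can observe the mutation, so the function stays pure from the caller's point of view.

```python
def running_totals(xs):
    """Externally pure: same input always gives the same output,
    and the input is never modified."""
    acc = 0
    out = []            # local mutable state, never shared while mutable
    for x in xs:
        acc += x
        out.append(acc)
    return out          # handing it back once we're done is fine

data = (1, 2, 3, 4)
print(running_totals(data))  # -> [1, 3, 6, 10]
print(data)                  # input unchanged: (1, 2, 3, 4)
```

This is exactly the kind of local, non-escaping mutation a compiler could verify and bless without infecting callers.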


> not all assignment makes a function impure

That's exactly what the ST monad/STRef's are for. {new,read,write}STRef let you manipulate mutable references while keeping track of their scope in the type of the reference. runST encapsulates a chunk of code that manipulates mutable variables in an externally-pure way into a pure function. The "Lazy Functional State Threads"[1] paper on it is a pretty good read.

[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...


I think no, and why can be found in these articles:

http://degoes.net/articles/modern-fp

http://degoes.net/articles/modern-fp-part-2


Those posts by Degoes are more polemical than techniques that he actually uses in production.


Monads cannot be a waste of time. Monads just are. It can be a waste of time to design an effect system for Haskell around monads (it's probably not) or it can be a waste of time to design an effect system for Javascript around monads (it might well be). But monads cannot be a waste of time.


Your comment is really pedantic. Obviously the concern is whether using monads is a waste. If I say "television is a waste of time" are you also going to correct me that tvs just are?


By no means.

There's this idea floating around that monads are things that had to be "introduced" into Haskell to "solve" certain problems with purity. This idea is false. Furthermore this question misses half the point of monads, that is, that you can program generically over them. Sure, that one example makes the IO monad look just like code in any other imperative language. But I can also use do notation and monad combinators with lists, coroutines, promises, futures, ASTs, and indeed any other monad I care to come up with. It may seem that any individual monad qua monad is a waste of time but when taken in totality it is a completely different question.

Since the question mentioned no other monad than IO it doesn't begin to address the important issues.


> Are Monads a Waste of Time?

No, they are not supposed to have side effects.


Very funny. Although like most humour, somewhat wasted here.


I think at the right dosage, it's welcome. For me, mild doses of good humour upgrade the most serious of discussions without detracting from their professionalism.


Agreed, let's not become robot news.


Humor does fine on HN.

Just be wary of conveniently blaming the community when your jokes flop. ;)


I've gotten loooots of upvotes from basically joke comments. HN doesn't like all humor, but if it's actually clever, it'll work.


The author asks a general question, using JS code as an example of imperative code because it's faster to write and the syntax is familiar to many programmers.

The second top comment starts with:

> JavaScript is a mess with its truthiness, implicit conversions, dubious semicolon insertion, second class references, automatic variable declarations, weird scoping, etc.. You'd need to try harder to make the Haskell imperative monad just as bad as JavaScript.

That is not the point, is it?

Thankfully most other comments discuss the actual question asked, not the pet peeves aimed at a particular language.


I have been told a story that some CS courses trying to scale up to FP as the first language have people teaching inside the IO monad to preserve imperative order, such that the graduates come out coding a strongly typed language almost completely bound to imperative systems design.

I feel the moments of clarity for me around the roles of monadic elements in a program are like brainwaves which vanish as soon as I try to grasp them. Right now, I'm stuck on the idea my FP colleague described as the boundary of the real world, the thing which follows time order and cannot be gone back to in time or sequence. Most FP example fragments do things inside a domain of existence which is bounded solely by the variables of the problem. Solving Fibonacci for some n thousand integers left in memory is distinctly unlike processing a UNIX pipe sequence, yet the latter is said to be very like functional composition.


The answer to the posters question is yes... monads are a waste of time if you're using them to handle state changes. It's best to just use explicit threading in that case.

The only time (I think) it's okay to use stateful monads in haskell is when it simplifies the code syntactically by removing extra cruft. Otherwise, explicit state is always going to be clearer to understand.

On the other hand, most uses of monads in Haskell are not to handle state. In my Haskell SQL library Beam[1] for example, we use monads to build a relational algebra AST at run-time and then dynamically optimize it. The SQL generating system inspects branches of the AST at run-time to generate sensible SQL expressions. Because the DSL is implemented as a monad, we get a bunch of stuff for free. At my work for example, we use a straightforward haskell `forM` to generate large joins.

Because of Haskell's purity, we know for sure that re-executing parts of the AST will not be disruptive.

I'm currently working on a little utility library for editing rich text with Haskell and I've used the continuation monad to automatically derive a zipper for my data structure. I get O(1) updates at my current position, for free. All I have to do is write a little interpreter. Because of Haskell's purity, I get a lot of functionality without doing anything. This is actually a big deal, as the same implementation in an imperative language involves complicated data structures and O(log n) complexity (look at the GTK text btree source file [2] for the kind of complexity I'm talking about). Because of the flexibility involved with the underlying monad, I have a 'fast' implementation of my zipper for simply modifying text, a 'visual' implementation that records a list of DOM updates to visualize the text as its being edited, and a 'network' implementation that outputs a list of commands for suitable editing. One simple data structure, one monad, three interpreters, and I have a collaborative text editing system.

Ultimately, the point is that you should only use an abstraction if it simplifies your code or reduces the error surface. If it does neither, then it's wasteful.

[1] https://github.com/tathougies/beam [2] https://github.com/GNOME/gtk/blob/master/gtk/gtktextbtree.c
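The "one data structure, several interpreters" approach can be sketched roughly in Python (everything here is invented for illustration; the real library builds a monadic AST rather than a command list): edit commands are plain data, and each interpreter assigns them a meaning.

```python
# Edit commands as plain data: the "description" half of the design.
INSERT, DELETE = "insert", "delete"

def apply_edits(text, commands):
    """Interpreter 1 ('fast'): actually perform the edits on a string."""
    for op, arg in commands:
        text = text + arg if op == INSERT else text[:-arg]
    return text

def to_network(commands):
    """Interpreter 2 ('network'): serialize the same commands
    so a collaborator can replay them."""
    return [f"{op}:{arg}" for op, arg in commands]

script = [(INSERT, "hello"), (INSERT, " world"), (DELETE, 6)]
print(apply_edits("", script))  # -> hello
print(to_network(script))       # -> ['insert:hello', 'insert: world', 'delete:6']
```

Because the script is inert data, re-running an interpreter over it (or adding a third one, e.g. a DOM-update recorder) can't disturb the others.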


> I'm currently working on a little utility library for editing rich text with Haskell and I've used the continuation monad to automatically derive a zipper for my data structure.

Could you go into more detail about this?


Sure... most people familiar with zippers know that a zipper can be thought of as the algebraic derivative of the underlying type.

What’s perhaps less known is that this is the same as ‘differentiating’ the traversal of the data type.

The continuation monad is a sort of ‘derivative’ of a computation in that it nearly captures the ‘infinitesimal’ difference between two steps in a computation.

If your computation is the traversal of your tree, list, or text markup, then you can make a zipper from that without having to declare new data structures. This is isomorphic to declaring your own data structure. Its only advantage is convenience (that I know of at least).

Oleg Kiselyov goes into great detail with this idea and has lots of example code: http://okmij.org/ftp/continuations/zipper.html


Thanks for the answer, it'll keep me busy for a week or two!


How is this related to Monads? This is an argument against mutation, or saying that Haskell doesn't handle mutation better than js, which is absolutely true; mutation is heavily discouraged in pure FP... Monads can be utilized to encapsulate the concept of sequential computation - if that sequential computation happens to be mutations you're gonna have a bad time.


The problem here is not the Monad, but the "do" notation. Rewrite your code without it, and it will be clearer and less of a "waste of time".


As others have said, there are many different kinds of monads.

In my mind, for practical reasons, uniqueness typing >> state monad.

Uniqueness typing allows the programmer to use stateful programming locally while developing strictly functional programs at scale. Best of both worlds.

