
Are Monads a Waste of Time? - YouAreGreat
http://lambda-the-ultimate.org/node/5504
======
shiado
In university I was obsessed with Haskell and being clever and writing code
that was an unholy stack of lambda functions and fancy constructs like monads
and monad lifting. I used the typical intellectual supremacy argument inherent
to many fields of study where you reason that something isn't used in industry
because industry people are stupid and cannot possibly grasp all these
advanced abstract concepts.

Eventually I realized that Haskell code is slow to write and hard to reason
about in terms of efficiency, lazy evaluation sucks for most things, and most
importantly, Haskell is hard for others to read. Going on Hoogle to look up some package and
seeing some abandoned doctoral thesis used by nobody where the only
documentation is types becomes exhausting after a while.

Go is now my favorite language.

~~~
proc0
Pure FP just uses math as its primary source of programming patterns and
techniques, while most other paradigms, like OOP, use arbitrary conventions to
abstract code. Because mutation gives you tremendous flexibility, you can
model many kinds of patterns, give one some analogy like inheritance in
animals, and then write a language that has it out of the box. Haskell, on the
other hand, uses math to construct its patterns, so while the learning curve
is higher, eventually you realize this is the ultimate abstraction (category
theory), and it cannot go more abstract than that. This is useful in some
scenarios requiring more logical rigor, but I agree it's not always the best
tool.

~~~
j2kun
> eventually you realize this is the ultimate abstraction (category theory),
> and it cannot go more abstract than that

This is exactly the kind of false sense of intellectual superiority to avoid.
Mathematicians and category theorists don't talk like this. Even the parts of
Haskell that have the flavor of category theory aren't _really_ all that much
like category theory.

~~~
proc0
Maybe you can expand on that, and/or show me something more abstract than
category theory. Programming = abstracting information processing. Category
theory = the mathematics of abstract functions (or morphisms a.k.a. changing
something)... so it makes sense that CT captures something very fundamental
about computation.

~~~
j2kun
1. Just because we have not thought of something more abstract does not mean
category theory is the "ultimate" abstraction that cannot be surpassed. You
would have said the same thing about sets before category theory was invented.
100% guaranteed there will be another, more abstract paradigm when the need
arises.

2. Just because one thing captures something fundamental about computation
does not mean it captures everything fundamental about computation, nor does
it mean nothing else can capture fundamental aspects of computation (disjoint
from what may or may not be captured by category theory). For example,
category theory does a terrible job representing the mathematical subfield of
complexity theory, despite it being entirely about computation. For that
matter, "how abstract" is not always comparable, and the intellectual dick
measuring of what is the most abstract is counterproductive. Even basic
questions in particle physics have no known model in mathematics that doesn't
break under gentle prodding, so is particle physics more abstract than
mathematics?

3. Haskell does a lousy job representing the majority of ideas that make
category theory useful to mathematicians. For example, universal properties
are neglected in Haskell, and at best they're implicit. Nobody actually thinks
about the math when they're writing Haskell programs; they just think about
Haskell types and programs, which makes the "techniques and patterns" of
Haskell programming just as arbitrary as the conventions in random Javascript
libraries. It's just that there are fewer names for the same thing, because
fewer people write serious programs in Haskell.

~~~
dsacco
_> Haskell does a lousy job representing the majority of ideas that make
category theory useful to mathematicians. For example, universal properties
are neglected in Haskell, and at best they're implicit. Nobody actually thinks
about the math when they're writing Haskell programs; they just think about
Haskell types and programs, which makes the "techniques and patterns" of
Haskell programming just as arbitrary as the conventions in random Javascript
libraries. It's just that there are fewer names for the same thing, because
fewer people write serious programs in Haskell._

That’s an excellent point. So many people think they’re doing category theory
when they’re writing Haskell, but Haskell is essentially nothing like the
mathematics it borrows terminology from. They’re really not comparable. Which
is absolutely fine, but then people shouldn’t draw parallels to the actual
mathematics when the terminology is similar in name only.

------
pka
Just to point out the probably obvious thing: that's only one type of monad
(the state monad). There are many other things which form a monad and can't be
represented easily in JS or other imperative languages.

And though even the state monad is a clear win in my opinion (since it
explicitly delimits a stateful computation, which can then be embedded in a
pure one in a principled way), the concept itself is much broader and thus
much more useful than just allowing for imperative syntax in a pure language.
For example, the continuation monad, which is just a library in Haskell, can't
be implemented in most imperative languages without extending the language
itself (async in JS, C#, etc).
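To make the "just a library" point concrete, here is a minimal, hand-rolled sketch of the continuation monad in plain Haskell. The real one lives in the `transformers` package as `Control.Monad.Trans.Cont`; this toy version only mirrors its shape:

```haskell
-- A hand-rolled continuation monad: no language extensions, no runtime
-- support, just a newtype and three instances.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont f <*> Cont a = Cont $ \k -> f (\g -> a (k . g))

instance Monad (Cont r) where
  Cont c >>= f = Cont $ \k -> c (\a -> runCont (f a) k)

-- An "async-looking" pipeline that is nothing but pure function calls.
example :: Cont r Int
example = do
  x <- pure 1
  y <- pure 2
  pure (x + y)

main :: IO ()
main = print (runCont example id)  -- prints 3
```

Nothing here needed language support: `do` notation desugars to `>>=`, and the callback plumbing that JS needed `async` for is ordinary function composition.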

~~~
nnq
> delimits a stateful computation which can then be embedded in a pure one in
> a principled way

You probably meant to say something different, but to many this reads like
"you can embed and call impure code inside pure code while the pure code
stays pure", which doesn't make much sense. I mean, you always do _the
opposite:_ embed/call pure code _from_ the impure one.

~~~
efnx
With the state monad (State not StateT) you can run it in pure code because
the runner is pure. It only “mutates” one context and when it’s all unwrapped
it’s actually pure functions all the way down.
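For anyone following along, a self-contained sketch of why the runner is pure: this is a toy `State` (the real one is in `transformers`/`mtl`), and the "mutation" is nothing but argument threading:

```haskell
-- A minimal State monad: a function from a state to (result, new state).
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s -> let (a, s') = g s in runState (f a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- Looks imperative, but runState tick 0 == (0, 1): an ordinary pure call.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n
```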

~~~
tome
You probably meant ST, but yes, much to nnq's surprise, this is possible!

~~~
gowld
The fact that something called "ST" is one of the most important libraries in
the language, and that it's the one you should choose for state (leave aside
the "State" library! that's not the one you want!), shows how, for all
Haskell's power, it's terribly unusable for trivial reasons that never get
fixed due to extreme deference to legacy mistakes. See also: Prelude.

~~~
tathougies
ST is not the monad that should be used for state. You should use State for
that.

ST is an optimization trick mainly used by framework implementors. In my
professional experience writing Haskell, I've never seen ST used for anything
important.

~~~
gowld
I wouldn't call "mutable arrays" an optimization trick mainly used by
framework implementors, or "not important".

[https://wiki.haskell.org/Arrays](https://wiki.haskell.org/Arrays)

------
olavk
Monads make a lot of sense in a pure functional language like Haskell, because
they allow you to program in an imperative style _when you need it_, and keep
the rest of the program pure and functional.

The benefit of State and IO monads is not that you can use them everywhere so
you now have an imperative language. The benefit is you can explicitly delimit
their use, and you still get the "easier to reason about" for the rest of the
code.

For languages which already support mutable state everywhere by default, it
does not make much sense to use monads for this.

~~~
jordigh
I really like D's approach instead. A function declared pure can have no side
effects but it can do whatever kind of mess it wants within its own code. It
can have loops, assignments, reassignments, memory allocations, but it can't
mutate variables outside its scope nor can it call any impure functions. I
find this is a very practical compromise that lets you have the good parts of
purity without ever getting the urge to write a blog post about monads.

[https://dlang.org/spec/function.html#pure-functions](https://dlang.org/spec/function.html#pure-functions)

~~~
gowld
This is "const" in c++

~~~
jordigh
I don't think this is the same. My C++ is getting rusty, so correct me if I'm
wrong, but making a function const is mostly about making the "this" pointer
const. A const function can still mutate global variables or call non-const
functions, such as operator<< on cout. A pure D function can do neither of
those things.

~~~
gowld
That's fair. C++ has const methods on objects, but I think nothing at global
scope. But you shouldn't have global mutable state :-)

------
Legogris
Somewhat tangential, but I became infatuated with Idris's Effect system as a
more approachable alternative to using monads directly. Yes, it's still monads
under the hood IIRC, but the syntax is a lot easier to reason about IMO.

[https://www.idris-lang.org/documentation/effects/](https://www.idris-lang.org/documentation/effects/)

~~~
tombert
I feel like a recurring theme I find in Idris (though keep in mind it's just
something I play with every now and then, not something I use with any kind of
regularity) is that it has this "feels like Haskell, but less frustrating and
cleaner" quality. I feel like I make less dumb mistakes while using it, and it
gives me pretty much all the stuff I love about Haskell.

You of course have the relatively simple effects system (as mentioned), but
you also have the ability to use `do` notation without being in the monad
context, and eager evaluation, which I find makes my programs simpler to
reason about.

~~~
Legogris
I agree. I like to think that Idris is to Haskell what C# is to Java, with the
exception that it's still a non-production-ready academic open source
language.

~~~
lmm
It's not that academic AIUI - it was designed for production use, right?

~~~
Legogris
From the release notes of 1.0[0] last year:

> Idris is primarily a research tool, arising from research into software
> development with dependent types which aims to make theorem proving and
> software verification accessible and practical for software developers in
> general. In calling this “1.0”, we mean no more than that the core language
> and base libraries are stable, and that you can expect any code that
> compiles with version 1.0 to compile with any version 1.x.

> Since Idris has less than one person working on it full time, we don’t
> promise “production readiness”, in that there is still a lot to do to make
> the compiler and run time efficient, and there may be libraries you need
> which are not available. And, there will certainly be bugs!

[0] [https://www.idris-lang.org/idris-1-0-released/](https://www.idris-lang.org/idris-1-0-released/)

------
azinman2
Can someone please explain in a simple way what Monads are and why I want to
use them in normal daily programming (assume, incorrectly, that I only know
Python)?

I know I can google this, and have, and unfortunately I came away even more
confused as a non-Haskell practitioner.

~~~
gmfawcett
My advice is not to worry about it (unless the itch bugs you so much that
you're willing to invest a lot of time researching the answer).

I don't think you'll ever get a single, straight answer that satisfies you.
Just reading this discussion, I'm sure you can see that people describe monads
in wildly different ways. Some descriptions are minimalist (perhaps a bit
reductionist): it's just an algebraic data structure, nothing to get excited
about. Many rely heavily on metaphors: containers, "programmable syntax", and
yes, even burritos have been used to explain them. Some conflate monads with
properties that do show up often in monadic code, but aren't themselves
inherently monadic (immutable data, pure functions, sequencing of evaluation).
Some are very language-limited, in the sense that the description invariably
turns into a debate over the semantic fine-points of the language being used
(this will almost _always_ happen when Haskell gets discussed, because
expression evaluation is a very subtle thing in Haskell). Frankly it's a mess.
The only safe ground you have is the algebraic definition, but that's not
terribly useful for building an intuition about how one might _use_ them.

I think that you (and me, and most everyday programmers) would use them in
your code only if you were using some library that was designed in a monadic
style. In Haskell, so many libraries use monads these days that it's a given:
using Haskell for any practical purpose means using monads, whether you want
to or not. In Ocaml (about as close to Haskell as you can get without being
purely functional), they show up mainly in third-party libraries, mostly when
you want to do something tricky like asynchronous I/O -- again, you use monads
if you want to use the library. In run-of-the-mill imperative languages,
though, monads don't tend to show up very often, and when they do, they are
usually very specific to one library or another.

~~~
chowells
I like this answer, but it leaves one important thing out. Mathematicians have
been studying abstraction for thousands of years. When they decide something
is useful, that probably means it has broad applicability and enough
functionality to get interesting results out of.

Mathematicians decided monads are useful.

Take advantage of that. Even if you're working in a language that can't
express the abstraction, you can allow it to inform your library design. If
something could be a monad, that's usually a better choice than something
ad-hoc. (Sometimes it's overkill and a monoid would suffice, but both are
better choices than an ad-hoc interface without any broadly accepted
theoretical basis.)

~~~
gmfawcett
I'm glad you brought up monoids here as a (sometimes) alternative -- in
addition to their inherent properties, people have a much easier time wrapping
their heads around monoids.

And yes, while abstraction is generally good, principled abstraction is
generally better. At the same time, a lousy monadic library is still lousy.
The sets of "great libraries" and "algebraically informed libraries"
intersect, but neither subsets the other!
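As a quick illustration of the monoid point, using only instances that ship with `base`: one associative operation plus an identity is often all a library needs to expose.

```haskell
import Data.Monoid (Sum (..))

-- Summing and string concatenation through the same interface: the
-- combining logic lives in the Monoid instance, not in the caller.
total :: Int
total = getSum (foldMap Sum [1, 2, 3, 4])  -- 10

logs :: String
logs = foldMap id ["one ", "two ", "three"]  -- "one two three"
```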

------
brudgers

       f = let s1 = 23 in let s2 = 42 in s2
    

Grammatically, this reminded me of Erlang. Hence, then indirectly of Prolog.
And now that I think about it, Prolog is considered a logic programming
language. I'm not advocating logic programming languages, but it stands to
reason that the features of logic programming languages facilitate reasoning
about programs written in them.

Looping back, Erlang doesn't generate discussions about monads. Erlang is
engineered so programmers don't think about monads. Erlang programmers just
use them. It's transparent. It's non-controversial because Erlang doesn't
stake out an intellectual argument.

~~~
mwkaufma
Erlang is pure, but it's not lazy, so it's not an issue like in Haskell.

~~~
brudgers
I'm not sure there is anything lazier than "Let it crash." Erlang is lazy at
the question in addition to being lazy at the answer. However, all that
laziness is transparent because Erlang is an engineering artifact not an
academic artifact.

------
lmm
> The question is, just looking at the above code, it looks like "s" is
> mutated. Yes the implementation in the monad could be pure and use state
> threading, but how does that help us reason about the above imperative
> program?

The value of the monad is precisely that both these things are true. We can
use our fast-and-loose imperative intuition most of the time, but if we ever
get confused about how the "mutation" of s interacts with some other effect
(e.g. a continuation), we can desugar the code into the pure state-threading
version and bring all the power of our pure, referentially transparent
reasoning tools to bear. Most of the time we don't, because that version of
the code looks a lot more cumbersome, but it's there if we need it. Whereas if
we find ourselves in the same position with the Javascript, we're stuffed;
when the effect interactions get too complicated to reason about imperatively,
we have no alternative.

------
barrkel
You're using monads whenever you use Java 8 streams, Javascript promises,
ActiveRecord and collection operations in Rails / Ruby; monads are everywhere
these days. Monads are one of the handiest design patterns you can use; all
you really need is a decent lambda syntax and you're off to the races.
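In Haskell itself the payoff of the shared interface is that one combinator serves every monad; a small sketch, with everything below coming from `base`:

```haskell
import Control.Monad (replicateM)

pairs :: [(Int, Int)]
pairs = do            -- list monad: all combinations
  x <- [1, 2]
  y <- [10, 20]
  pure (x, y)         -- [(1,10),(1,20),(2,10),(2,20)]

safeSum :: Maybe Int  -- Maybe monad: short-circuits on Nothing
safeSum = do
  a <- Just 1
  b <- Just 2
  pure (a + b)        -- Just 3

-- replicateM is written once, against the Monad interface, and works here
-- just as it would for IO, parsers, promises, etc.
coinFlips :: [[Bool]]
coinFlips = replicateM 2 [True, False]
```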

~~~
skybrian
The monad abstraction takes a lot of unlike programming features and abstracts
away most of the important stuff to show something they have in common. The
question is whether users of those features need to know about this
abstraction? Why should we care that they have something in common? They're
mostly different.

As concepts get more abstract, they get harder to teach. I don't think knowing
that JavaScript promises are a monad is all that helpful when learning to do
async I/O.

~~~
the_af
> _Why should we care that they have something in common? They 're mostly
> different._

You can of course ignore it, but it's useful to care about this because they
are actually _similar_ over this shared commonality. You start seeing patterns
over which you can apply the same general functions. Just like you can map
over an Option or a List or any "mappable-over" structure.

This is similar to asking (in a made up language, forget the specifics): "why
should I care that Lists and Sets and Arrays and Enumerations have a shared
'iterable' interpretation? Sets and Lists are mostly different!" Well, because
sometimes it's useful to have functions that operate on iterables, instead of
writing one for Lists and another for Sets. Once this idea "clicks" in your
mind, you start seeing more and more of these commonalities, and it starts to
shape how you think about coding.
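A tiny sketch of that "mappable-over" idea in Haskell terms (only `base` needed): the function is written once against the Functor interface and reused across structures.

```haskell
-- Written once, against the interface, not against List or Maybe.
double :: Functor f => f Int -> f Int
double = fmap (* 2)

xs :: [Int]
xs = double [1, 2, 3]   -- [2, 4, 6]

mx :: Maybe Int
mx = double (Just 21)   -- Just 42
```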

~~~
skybrian
Sure, I agree.

My point is that just as with other design patterns, it's possible to overdo
it and lose your audience. Enthusiasts can definitely go overboard here.

------
Waterluvian
I stumbled through high school math and never took any in university so this
is all wizardry to me. All I know is that when I began using immutable data in
JavaScript, my data processing workflows and state machines became nearly
trivial to understand and debug. Pure functions are also beautiful to reason
about and test and debug.

I'm not sure about monads. I barely understand them. But if I could have one
wish, it would be to make immutable data structures first class in more
languages, like JS and Python.

~~~
gmfawcett
Your intuition about the benefits of pure functions & immutability is
spot-on, and I think that any proponent of monads would agree that these are
fundamentally more important concepts than monads themselves.
ways that people use monads would break, or lose their point, if the data they
operated over could be mutated outside of the monadic context.)

You can get an awfully long way with immutable data structures without
involving monads. There are plenty of languages that give priority to
immutable data, but don't have monads in any inherent way: Prolog, Erlang,
Ocaml, and Scheme (to a slightly lesser extent) all come to mind.

~~~
zmonx
Prolog has definite clause grammars (DCGs), which are _very similar_ to
monads.

You can think of a DCG as giving you two _implicit_ arguments, which you can
use to express concatenations and in fact arbitrary relations between states.

There are also ways to access these implicit arguments, which are similar to
the corresponding constructs in Haskell.

DCGs are a very important feature of Prolog. They are frequently used for
parsing and state transformation tasks. Like monads in Haskell, DCGs often
make working with immutable data much more convenient, because you can make
the more uninteresting parts of your code _implicit_. Other logic programming
languages like Mercury also support DCGs.

~~~
gmfawcett
It never occurred to me that there were similarities between DCGs and monads,
but that's an interesting argument. Thank you for this!

------
bunderbunder
I've been wondering a similar thing, lately.

In principle, I like the way that Haskell enforces purity at the compiler
level, and kicks everything impure out to the boundary of the program. I can
_try_ to write code that's that clean in another language, but it's always an
unstable equilibrium. The compiler won't help me make sure functions stay
pure, so all it takes is adding one little bit of statefulness to one function
out there, and suddenly, like a virus, everything that calls it, everything
that calls anything that calls it, and so on is impure, too.

On the other hand, monads are *@#$% opaque. I can't help feeling that, for
many use cases, they are unnecessarily opaque.

At the risk of being branded a heretic: not all assignment makes a function
impure. If the function never modifies or uses a mutable variable while it's
visible outside the function's scope (e.g., it's fine to return that
accumulator once you're done with it), and it's more efficient (either in
terms of performance or code clarity) to get the job done with some mutation,
why not? I'd like a language that doesn't worry so much
about what consenting adults do behind closed doors. Rather than not letting
me do it at all, I'd rather a language that just helps me make sure I'm doing
it in a safe manner that doesn't lead to viral impurity.

~~~
aweinstock
> not all assignment makes a function impure

That's exactly what the ST monad/STRef's are for. {new,read,write}STRef let
you manipulate mutable references while keeping track of their scope in the
type of the reference. runST encapsulates a chunk of code that manipulates
mutable variables in an externally-pure way into a pure function. The "Lazy
Functional State Threads"[1] paper on it is a pretty good read.

[1]: [https://www.microsoft.com/en-us/research/wp-content/uploads/...](https://www.microsoft.com/en-us/research/wp-content/uploads/1994/06/lazy-functional-state-threads.pdf)
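A minimal sketch of that encapsulation (both modules are in `base`): the function below looks imperative inside but is externally pure, because `runST` guarantees the mutable reference can't escape.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- An imperative-looking sum with a mutable accumulator; the ST state never
-- leaks, so sumST is an ordinary pure function: sumST [1, 2, 3] == 6.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (modifySTRef' acc . (+)) xs
  readSTRef acc
```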

------
js8
I think no, and why can be found in these articles:

[http://degoes.net/articles/modern-fp](http://degoes.net/articles/modern-fp)

[http://degoes.net/articles/modern-fp-part-2](http://degoes.net/articles/modern-fp-part-2)

~~~
platz
Those posts by De Goes are more polemical than representative of techniques
he actually uses in production.

------
tome
Monads cannot be a waste of time. Monads just _are_. It can be a waste of time
to design an effect system for Haskell around monads (it's probably not) or it
can be a waste of time to design an effect system for Javascript around monads
(it might well be). But _monads_ cannot be a waste of time.

~~~
3minus1
Your comment is really pedantic. Obviously the concern is whether _using_
monads is a waste. If I say "television is a waste of time" are you also going
to correct me that tvs just _are_?

~~~
tome
By no means.

There's this idea floating around that monads are things that had to be
"introduced" into Haskell to "solve" certain problems with purity. This idea
is false. Furthermore this question misses half the point of monads, that is,
that you can program generically over them. Sure, that one example makes the
IO monad look just like code in any other imperative language. But I can also
use do notation and monad combinators with lists, coroutines, promises,
futures, ASTs, and indeed any other monad I care to come up with. It may seem
that any individual monad _qua_ monad is a waste of time but when taken in
totality it is a completely different question.

Since the question mentioned no other monad than IO it doesn't begin to
address the important issues.

------
_pmf_
> Are Monads a Waste of Time?

No, they are not supposed to have side effects.

~~~
tonyedgecombe
Very funny. Although like most humour, somewhat wasted here.

~~~
placebo
I think at the right dosage, it's welcome. For me, mild doses of good humour
upgrade the most serious of discussions without detracting from their
professionalism.

~~~
carlmr
Agreed, let's not become robot news.

------
dmitriid
The author asks a general question, using JS code as an example of imperative
code because it's faster to write and its syntax is familiar to many
programmers.

The second top comment starts with:

> JavaScript is a mess with its truthiness, implicit conversions, dubious
> semicolon insertion, second class references, automatic variable
> declarations, weird scoping, etc.. You'd need to try harder to make the
> Haskell imperative monad just as bad as JavaScript.

 _That is not the point_ , is it?

Thankfully most other comments discuss the actual question asked, not the pet
peeves aimed at a particular language.

------
ggm
I have been told that some CS courses trying to move to FP as the
introductory language have people teaching inside the IO monad to preserve
imperative order, such that the graduates come out coding a strongly typed
language almost completely bound to imperative systems design.

I feel the moments of clarity for me around the roles of monadic elements in a
program are like brainwaves which vanish as soon as I try to grasp them. Right
now, I'm stuck on the idea my FP colleague described as the boundary of the
real-world, the thing which follows time order and cannot be gone back to in
time or sequence. Most FP example fragments do things inside a domain of
existence which is bounded solely by the variables of the problem. Computing
Fibonacci numbers for some n thousand integers left in memory is distinctly
unlike processing a UNIX pipe sequence, yet the latter is said to be very
like functional composition.

------
tathougies
The answer to the poster's question is yes... monads are a waste of time if
you're using them to handle state changes. It's best to just use explicit
state threading in that case.

The only time (I think) it's okay to use stateful monads in Haskell is when
doing so simplifies the code syntactically by removing extra cruft. Otherwise,
explicit state is always going to be clearer to understand.

On the other hand, most uses of monads in Haskell are not to handle state. In
my Haskell SQL library Beam[1] for example, we use monads to build a
relational algebra AST at run-time and then dynamically optimize it. The SQL
generating system inspects branches of the AST at run-time to generate
sensible SQL expressions. Because the DSL is implemented as a monad, we get a
bunch of stuff for free. At my work, for example, we use a straightforward
Haskell `forM` to generate large joins.

Because of Haskell's purity, we can know _for sure_ that re-executing parts
of the AST will not be disruptive, because the code is pure.

I'm currently working on a little utility library for editing rich text with
Haskell and I've used the continuation monad to automatically derive a zipper
for my data structure. I get O(1) updates at my current position, for free.
All I have to do is write a little interpreter. Because of Haskell's purity, I
get a lot of functionality without doing anything. This is actually a big
deal, as the same implementation in an imperative language involves
complicated data structures and O(log n) complexity (look at the GTK text
btree source file [2] for the kind of complexity I'm talking about). Because
of the flexibility involved with the underlying monad, I have a 'fast'
implementation of my zipper for simply modifying text, a 'visual'
implementation that records a list of DOM updates to visualize the text as
it's being edited, and a 'network' implementation that outputs a list of
commands suitable for collaborative editing. One simple data structure, one
monad, three interpreters, and I have a collaborative text editing system.

Ultimately, the point is that you should only use an abstraction if it
simplifies your code or reduces the error surface. If it does neither, then
it's wasteful.

[1] [https://github.com/tathougies/beam](https://github.com/tathougies/beam)
[2]
[https://github.com/GNOME/gtk/blob/master/gtk/gtktextbtree.c](https://github.com/GNOME/gtk/blob/master/gtk/gtktextbtree.c)

~~~
kmelva
> I'm currently working on a little utility library for editing rich text with
> Haskell and I've used the continuation monad to automatically derive a
> zipper for my data structure.

Could you go into more detail about this?

~~~
tathougies
Sure... most people familiar with zippers know that a zipper can be thought
of as the algebraic derivative of the underlying type.

What’s perhaps less known is that this is the same as ‘differentiating’ the
traversal of the data type.

The continuation monad is a sort of 'derivative' of a computation, in that
it nearly captures the 'infinitesimal' difference between two steps in a
computation.

If your computation is the traversal of your tree, list, or text markup, then
you can make a zipper from that without having to declare new data structures.
This is isomorphic to declaring your own data structure. Its only advantage is
convenience (that I know of at least).

Oleg Kiselyov goes into great detail with this idea and has lots of example
code:
[http://okmij.org/ftp/continuations/zipper.html](http://okmij.org/ftp/continuations/zipper.html)

~~~
kmelva
Thanks for the answer, it'll keep me busy for a week or two!

------
deathrip
How is this related to monads? This is an argument against mutation, or a
claim that Haskell doesn't handle mutation better than JS, which is
absolutely true; mutation is heavily discouraged in pure FP... Monads can be
used to encapsulate the concept of sequential computation, and if that
sequential computation happens to be mutation, you're gonna have a bad time.

------
sfilargi
The problem here is not the Monad, but the "do" notation. Rewrite your code
without it, and it will be clearer and less of a "waste of time".

------
nabla9
As others have said, there are many different kinds of monads.

In my mind, for practical reasons, uniqueness typing >> state monad.

Uniqueness typing allows the programmer to use stateful programming locally
while developing strictly functional programs at scale. The best of both
worlds.

------
proc0
It's the only thing you can use if you want to write in an imperative style
within pure FP. Obviously a simple example like that makes no difference, but
in a giant app, if you eliminate mutation you get many benefits, and in such
apps you sometimes want to express things imperatively, yet still not mutate.
In other words, monads are obviously useful: the first assumption is that
mutation is the root of most bugs, so you eliminate it but still use monads
to emulate it.

------
empath75
Aren’t the state-like effects limited to the scope of that block?

~~~
olavk
No, but the state is part of the type signature, so the scope is explicit.

------
danharaj
The much less discussed killer app of monads is monad transformers, which add
a new dimension of program architecture to the toolbox.

~~~
KirinDave
I agree, although it's funny because the community has been trying to come up
with replacements for mtl for some time now.

------
e12e
Maybe.

------
thanatropism
Monads are just monoids in the category of endofunctors. Don't complicate that
too much!

------
jeffreyrogers
Every discussion I've seen about functional programming is entirely divorced
from the practicalities of real programming.

~~~
AnimalMuppet
OK, I'll bite. Here's the essential truth of functional programming,
applicable to all of real programming: _Constrain your state space or die._

------
bitL
Functional programming has the following issues:

\- it tries to abstract away von Neumann architecture and make it more math-
like, stateless, true and permanent. Yet von Neumann is completely stateful
(resembling real-world in its complexity and model approach), causing all
kinds of frictions, for which monads and other syntactic sugar were invented

\- while Turing complete, it has some weird properties that make it difficult
or inefficient to change communication between components that use deep
function calls when such a need arises later. This leads to mutable
"shortcuts" that render useless many of the advantages functional programming
might have had, and the code starts resembling imperative/OOP programming with
its issues again

\- trivial things like IO can't be modeled without side-effects, so any
software requiring input from keyboard, loading external files etc. can't be
pure

\- often functional constructs make code unreadable despite readability being
one of the main reasons for its existence

\- bottom-up creation of algorithms is preferred by many language authors,
making it difficult for beginners

\- if you really want to know how it works, it takes 5+ years PhD level of
studies in abstract math

Disclaimer: I approach 100 languages I've used in production, worked for
companies that created some of the most popular languages and hate them all
;-) I hope to work on AI-assisted language creation in the near future.

~~~
KirinDave
Hello, most of your points seem wrong in the extreme and I feel like
addressing them. I confess I pressed the enter key hard a few times as I wrote
this, gritting my teeth at some of your points. But I have tried to keep a
level tone to help people see that many of these points either are simply FUD
or, in a few cases, haven't been true for well over a decade.

But, _first and foremost_ , it's very clear you've conflated Haskell (and
perhaps Purescript?) with all functional programming. This is a mistake, and
many functional languages do not make use of pure functions, laziness, monads,
or algebraic data types.

> \- it tries to abstract away von Neumann architecture and make it more math-
> like, stateless, true and permanent. Yet von Neumann is completely stateful
> (resembling real-world in its complexity and model approach),

Most languages abstract away the von Neumann architecture to some degree. This
is generally seen as a feature, as the notion of "von Neumann" itself has
evolved substantially over time, such that many historical practices are no
longer meaningful (example: overlays).

For the last two decades it has been considered Industry Best Practice to do
this, as we increasingly find that we don't want this classical architecture
to limit our computations or computers.

For example, it's quite easy to push computations to GPUs or TPUs in Haskell
because by default it doesn't force assumptions onto users about what state
means.

> causing all kinds of frictions, for which monads and other syntactic sugar
> were invented

No, Monads were not invented to solve this problem in Haskell. They were a
subsequent improvement to the original solution.

> while Turing complete, it has some weird properties, making it difficult or
> inefficient to change communication between various components that are
> using deep function calls when such a need arises in the future.

This is quite untrue. And if it _were_ true, it would be a curious chain of
events that functional composition has only become more mainstream and relied
upon in many languages, and that, even before closures proliferated,
frameworks and techniques (e.g., Dependency Injection) came into being to
model the kinds of interactions that composed functions make natural.

> This leads to using mutable "shortcuts" that render a lot of advantages
> functional programming might have had useless and starts resembling
> imperative/OOP programming with its issues again

I am surely not alone in having no idea what you're talking about other than
perhaps the occasional use of unsafePerformIO in haskell, but even then.

> \- trivial things like IO can't be modeled without side-effects, so any
> software requiring input from keyboard, loading external files etc. can't be
> pure

Monads are a way of modeling impure computation purely. And IO _is_ a side
effect. It's absolutely wrong to suggest that the IO monad is some sort of
shortcut or inherently impure, or that it's deficient in any way compared to a
block of Javascript for modeling the real world.

None of that is true.
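To make that concrete (example mine): an IO action is an ordinary value
describing an effect. Building and combining such values performs nothing;
effects happen only when the action is finally executed.

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Constructing 'bump' and 'twice bump' performs no effect at all;
-- the four increments happen only when the combined action runs.
bumps :: IO Int
bumps = do
  counter <- newIORef 0
  let bump = modifyIORef' counter (+ 1)  -- a plain value of type IO ()
      twice act = act >> act             -- combining actions is pure too
  twice (twice bump)                     -- executed here: 4 increments
  readIORef counter
```

Running bumps yields 4; until that point, bump was just data to pass around.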

> \- often functional constructs make code unreadable despite readability
> being one of the main reasons for its existence

This is a matter of opinion and training. One could make the same argument of
React/Redux reducers (really, the state transitions are in the actions?),
Google's Guice DI framework (what is happening), or elaborate builder-notation
spec framework tests.

Most people will find languages like Clojure or Ocaml pretty familiar.

What I will agree with is that languages that more closely model referential
transparency tend to appeal to some tools modeled after category theory that
require up-front study to really understand and apply. Most languages have one
or two things like this: Ruby has proc vs block, Python has many arbitrary
restrictions and list comprehensions, C has pointers, C++ has generics, Java
has reflection, Javascript has callbacks, Haskell has
Functor->Applicative->Monad.
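For readers unfamiliar with that last one, the hierarchy is three interfaces
of increasing power. A toy sketch (names mine) of the Monad step, where the
second computation depends on the first's result:

```haskell
-- Simplified signatures of the hierarchy:
--   fmap  :: Functor f     => (a -> b)   -> f a -> f b  -- map inside
--   (<*>) :: Applicative f => f (a -> b) -> f a -> f b  -- apply inside
--   (>>=) :: Monad f       => f a -> (a -> f b) -> f b  -- sequence

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Monad in action: whether the second halve runs at all depends on
-- the result of the first.
quarter :: Int -> Maybe Int
quarter n = halve n >>= halve
```

quarter 8 is Just 2, while quarter 6 is Nothing, because the failing second
step short-circuits the chain.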

> \- bottom-up creation of algorithms is preferred by many language authors,
> making it difficult for beginners

This has _nothing_ to do with FP and everything to do with programming in a
modular fashion. Every language with modules naturally leads people to
bottom-up styles. That includes Javascript.

> \- if you really want to know how it works, it takes 5+ years PhD level of
> studies in abstract math

The Haskell compiler is a big, complex piece of work. But so is the C++
compiler, the Java JVM is surely bigger, the CLR is fantastically complex, and
have you ever looked at some of the bugs in the Python interpreter?

Of course it takes dedicated study to understand a modern compiler. That's
only natural.

> Disclaimer: I approach 100 languages I've used in production,

Okay. 100.

> worked for companies that created some of the most popular languages and
> hate them all

Who? Maybe we've worked together.

> I hope to work on AI-assisted language creation in the near future.

You're very late to the starting line here.

~~~
bitL
> Who? Maybe we've worked together.

I don't want to be doxxed, but let's say I worked for your fierce competitors
and the ones your former employer "borrowed" a lot of ideas from. Also
rejected an offer from them.

> You're very late to the starting line here.

I am not talking about gradient boosted decision trees-based meta-programming
here.

Finally, it's good to have an open discussion about this. I simply don't want
to tell everyone that functional programming is the "promised land". I hope we
can figure out much better programming abstractions in the future.

Good luck at Udacity, some NDs like SDC/Robo/DLF were simply awesome!

~~~
KirinDave
> I don't want to be doxxed, but let's say I worked for your fierce
> competitors and the ones your former employer "borrowed" a lot of ideas
> from. Also rejected an offer from them.

You worked for Sun and Oracle, then? I imagine the feelings around the
(somewhat technologically inferior but vastly more popular) JVM are pretty
complicated.

> I am not talking about gradient boosted decision trees-based meta-
> programming here.

Neither am I.

> Good luck at Udacity, some NDs like SDC/Robo/DLF were simply awesome!

Thank you.

~~~
bitL
> I imagine the feelings around the (somewhat technologically inferior but
> vastly more popular) JVM are pretty complicated.

Yeah, type erasure messes up everything; you can feel its bad influence on
Scala & Kotlin. I've been a big fan of Anders' languages since the Delphi days
as well, and listening to internal ideological justifications of how the JVM
is better or why an IDE is evil was sometimes funny. But it was open, so it
won in the end, and I am glad about it :)

But it wasn't the only company I worked for; another one produced another
rising star and was deep in meta-programming as well, with a love-hate
relationship with your former employer: your ex would have loved to kill us,
but some of your customers completely depended on our products, so you had to
let us live.

I am looking at how to use Deep Learning/RNNs/NLP/TTS/SS/CV to automate away
many of the annoying, rudimentary development tasks so that programmers only
have to care about higher abstractions. IMO functional programming is woefully
inadequate for that; your opinion might differ. Nevertheless, I'll see if I
can push it in the direction I like in the future and how far it can get.

