I feel like this rush to adopt functional language features in non-functional languages is like deciding to shoehorn a jet engine onto your hot-air balloon. Yes, it can be done but ... see PLT_HULK.
    const f1 = function(s) {
      let data;
      try {
        data = JSON.parse(s);
      } catch (err) {}
      if (data != null &&
          data.a != null &&
          data.a.b != null &&
          typeof data.a.b.c === 'string') {
        let x = parseFloat(data.a.b.c);
        if (x === x) {
          return x;
        }
      }
      return null;
    };
    const f2 =
      R.pipe(S.parseJson,
             R.chain(S.gets(['a', 'b', 'c'])),
             R.filter(R.is(String)),
             R.chain(S.parseFloat),
             S.fromMaybe(null));
Note that in the first function we have an unsafe expression,
    data.a.b.c
which we must guard with null/undefined checks. Unsafe expressions can easily get out of sync with their corresponding guards.
We can apply functional programming concepts to JavaScript code to obviate the need for null/undefined checks. The fact that we can't have all the benefits of FP in JS shouldn't dissuade us from taking advantage of the ideas which are applicable.
Less as a real argument than as devil's advocate, here's a shorter non-library version:
    function f1(s) {
      try {
        let data = JSON.parse(s);
        let x = parseFloat(data.a.b.c.substr(0));
        if (x === x)
          return x;
      } catch(e) {}
      return null;
    }
...ok, it's a bit golfy, and in JavaScript you don't get much control over exception catching. But in general, exceptions can be used like an implicit Maybe/Either monad, just as mutability is an implicit IO monad.
(Also, you picked an odd task - it's unusual to expect a float to be represented as a string in a JSON object, and a check that rejects NaN but accepts weird floats like Infinity is not terribly useful.)
Your counterexample is informative. I hadn't thought of exceptions in this way. The downside of putting everything in a `try` block, of course, is that we'll potentially catch exceptions arising from a bug in our code which should crash our program.
> I feel like this rush to adopt functional language features in non-functional languages is
Very few languages in use are fairly described as "non-functional"; pretty much any structured programming language that supports function pointers supports the functional paradigm. A language doesn't have to be pure to support functional programming.
Without closures, or something that well approximates them, it seems hard to say a language "supports the functional paradigm". You can write functional code in C, but beyond very simple things you're implementing most of the machinery yourself.
"A language doesn't have to be pure to support functional programming."
I detest how everyone always takes monads, a quite simple concept, and completely butchers the explanation. I think that's likely a large part of the reason for their lack of acceptance and use.
I'd say you're now almost gilt-edged required to provide us with your simple explanation here. But I'll warn you, I've been nicely flummoxed by competing 'simple' explanations in this forum previously: "A Monad is an object whose methods return monads.", "They're simply a highly specific way of chaining operations together.", "You know how jQuery methods can string out multiple methods with periods? That's what monads are.", "Monads are merely an analogy to control flow what abstract data types are to data."
Haskell is a pure language. Purity is a concept that is absolutely bizarre to the real world, where adding two numbers together results in tons of side effects (load into registers, oops, have to load cache lines, alter memory, raise the ambient temperature, etc. etc.). However, the idea of pure functions is useful. So we make our basic program blocks from pure functions that don't "do" stuff, but just transform values.
However, we must chain these into useful programs somehow. Monad is the type name chosen in Haskell for the construct that does this. It's kind of simple if you work it backwards from the syntax. You want to be able to say
    do
      a <- readInt
      b <- readInt
      add a b
      print
The first two can fail, rely on side effects and the last one is purely a side effect. The only pure function is add, which can't work if a or b aren't there. Monads serve as a way to represent the real-world-ness without it seeping into the add function. You can think of each line as a closure getting the next value from the previous line. So readInt returns a Maybe Int. If it returns a Nothing, the next closure is simply skipped. You can insert a check after the add, if you want to, to see if you got Nothing and alert the user, etc. However, the basic idea is that the program does not crash with a NullPointerException. We know what to do if we get Nothing. The add function doesn't need to concern itself with that.
Hence, the Monad. It wraps a real world concern around a type (it has a type constructor). You must be able to make one from a concrete value that is unburdened with exceptional state (it has the unit function). You must be able to call pure functions on these monadic values without the side state seeping into the function (it has the bind function).
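For what it's worth, here is a minimal Haskell sketch of that skipping behaviour, using the standard readMaybe from Text.Read in place of the hypothetical readInt above (so this is an illustration, not the exact program from the do block):

    import Text.Read (readMaybe)

    addInputs :: String -> String -> Maybe Int
    addInputs s1 s2 = do
      a <- readMaybe s1      -- a Nothing here skips everything below
      b <- readMaybe s2
      return (a + b)         -- the pure addition never sees Nothing

    -- addInputs "2" "3" == Just 5
    -- addInputs "x" "3" == Nothing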
Edit: the reason you don't see Monads in other languages is because they're not essential to them. You can do stuff in C without Monads. To Haskell, Monads are like the Higgs boson. It's how you bind stuff together.
* Purity is not at all bizarre to the real world. If I have two apples in one pile and three in the other, then merely regarding them as the same pile is (2 + 3) very, very purely. Computers are the bizarre ones!
* "The first two can fail, rely on side effects and the last one is purely a side effect." It'd be much nicer to say that the first two are "side effecting computations with interesting return values" and the third one it the same... except we are not interested in the return.
* "The only pure function is add". I highly doubt that `add` is pure in your example! (1) this wouldn't typecheck in Haskell now if it were and (2) it'd imply that we are just throwing away the result of the addition.
* "the reason you don't see Monads in other languages is because they're not essential to them" I would challenge you to turn this exactly on it's head. The reason you don't see monads in other languages it that they are so essential to them that you cannot perceive parts without the monad.
Option and Optional are indeed monads, and Java's Optional.flatMap() function is the monadic /bind/ function. But not all monads are Optional -- List is also a monad, and they go on to get a lot more complex than that.
I found that reading about Haskell really helpful for my Java development, and that looking at really simple monads like Optional, and using them in my code, was really helpful for understanding how more complicated monads work. Jumping straight into Haskell's IO monad is really unhelpful.
The maybe monad is just being used as an example here, but Haskell allows you to implement the same interface for any applicable type. for example; IO, mutable state, parsers, etc.
As I understand it, this is because nullability (typically?) doesn't nest and so can't be a functor (with the whole of the types of the language as domain). If you can't wind up with m (m a), then there is no place for join.
Yeah, exactly. The other way to see it is to note that `Nothing(null)` is a perfectly reasonable Javascript expression which ought to mean something different from `null` alone. If that cannot be true then you don't actually have parametricity.
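To put the nesting point in Haskell terms (a small sketch with invented names): with a real Maybe the two kinds of absence stay distinct, which a flat nullable type cannot express, so there is nowhere for join to act:

    foundButEmpty :: Maybe (Maybe Int)
    foundButEmpty = Just Nothing   -- the outer step succeeded, the inner value is absent

    notFoundAtAll :: Maybe (Maybe Int)
    notFoundAtAll = Nothing        -- the outer step itself failed

    -- a null-based encoding collapses both cases to "null", so m (m a)
    -- never really arises and there is nothing for join to do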
So this is going to sound snarky and/or stupid at first, but: Monads are functors with a 'join' ("flatten") operation.
A functor is a type parameterized by another type[1], say 'a', with a function 'fmap' taking a function 'a -> b', yielding the original object except that its type is now parameterized by 'b'[2]. A monad's 'join' operation takes 'f (f a)' onto 'f a'.
On top of this, there are certain laws which must be obeyed. The functor laws amount to saying that 'fmap' can't change anything about the object (including structure) which doesn't depend on the parameter 'a', but must change everything which does depend on 'a'. The monad laws amount to saying that 'join' has a certain associative structure[3], and also that there is a left- and right-identity 'return: a -> f a'.
[1] a la Java generics, for those not into FP
[2] i.e., fmap has type 'forall a b. (a -> b) -> (f a -> f b)'
[3] Explaining exactly what this associative structure is in these terms is tricky, since 'join' as described is unary.
Here's where you have to introduce the function 'bind', '(\f -> join . fmap f)': 'bind g . bind f' === 'bind (bind g . f)'. But it's really not essential; the most intuitive consequence is that Haskell's 'do' notation works like you want it to.
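To make that concrete, a rough sketch with the list functor, where 'join' is just concatenation (lists are only one example, but an easy one to check by hand):

    import Control.Monad (join)   -- for lists, join == concat

    bind :: (a -> [b]) -> [a] -> [b]
    bind f = join . fmap f        -- i.e. concatMap f

    -- associativity: bind g . bind f  ==  bind (bind g . f)
    example :: [Int]
    example = bind (\x -> [x, x + 10]) [1, 2]   -- [1,11,2,12]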
> and also that there is a left- and right-identity 'return: a -> f a'
RTF, uh, Comment.
The point being that I think it's best to understand return in the context of the more important operation join. You might wonder, "well why don't functors have a generic constructor", and (among other answers to that) the reason is that it doesn't serve any purpose in the functor laws. But return does serve a purpose for the monad laws, namely, being an identity of sorts.
Those names are not very good. return takes a value and "monadifies" it. If you have a function that takes a normal argument and emits a monadified value (e.g. int -> monad string), then bind can transform your function so that it takes a monadified argument (e.g. monad int -> monad string) instead.
return and bind satisfy three laws:
1. If you have a function that takes a normal argument and emits a monadified value, you could apply it to some argument directly. Or you could monadify the argument with return, and then use bind on the function so it will accept the monadified argument. These two ways of using the function give the same result.
2. Remember how return takes an argument, and emits a monadified value? That means we can bind it! The second law states that the resulting function is just the identity for monadified stuff.
3. Let's say you have two functions f (e.g. float -> monad string) and g (e.g. string -> monad int) from normal argument to monadified value. We could bind them both and then compose them:
    g_ = bind(g)
    f_ = bind(f)
    g_f_(x) = g_(f_(x))
Or we could imagine instead binding only the second function, then composing the functions, and then binding the composition:
    g_ = bind(g)
    g_f(x) = g_(f(x))
    g_f_ = bind(g_f)
Law 3 states that these two procedures yield the same g_f_.
Having offered this kindly explanation, would you say that this continues in the holy pursuit of a 'simple explanation' of what a monad is?
Here's an intermediate question triggered by your explanation: what is the byte-level nature of your "monadification"? You say one has a function which converts a floating point to a "monad string", a string value to a "monad int" (a raven into a 'monad writing desk')? Exactly what bytes distinguish a "monad string" from a string? A "monad int" from a plain old int? Is it perhaps a structure? Maybe it has stack pointers among those structural elements?
If 'm' is a monad then the difference between a 'String' and a 'm String' (monad string) is most likely just a constructor, but it's really up to that particular monad's implementation. It isn't a byte level difference, it's a context level difference. Constructors are very different in Haskell than (insert imperative Lang). They are much simpler. Monads are similarly simple but hard to explain without any familiarity with the language. I suggest you just dive in. The cool thing about Haskell is you don't have to understand everything in order to work with it.
There are many monads, so the answer will depend on which one it is. For instance the "Maybe" monad I described in a sibling comment to yours could be implemented with a pointer that could be null, or not.
Another Monad could be a Logger monad. "return" transform a value into a pair of a value and an empty log. So in this case a monad int could be implemented as a pair<int, string>. (bind would take a function that starts logging from scratch and make it a function that appends to a preexisting log).
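A minimal sketch of that Logger idea in Haskell (a stripped-down Writer; the names here are invented):

    type Logged a = (a, String)

    returnL :: a -> Logged a
    returnL x = (x, "")                -- the value plus an empty log

    bindL :: (a -> Logged b) -> Logged a -> Logged b
    bindL f (x, older) =
      let (y, newer) = f x             -- f logs "from scratch"
      in  (y, older ++ newer)          -- append to the preexisting log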
It's really, really vital to note that "monad" is not a noun but an adjective. Your question thus has no answer as stated.
Let's pick a type which happens to be monadic, though, like list. If `A` is a type and `List A` is the type of singly-linked lists with values in `A` then `fun a -> Cons a Nil` is what's sometimes called `return` and `flatten :: List (List A) -> List A` is what's sometimes called `join` and (List, return, join) is monadic.
And anyway, the byte-structure of SLLs can vary from place to place, but any good implementation that works generically will do.
Since you raised the linguistic characterization ('monad' is an adjective, not a noun, so your question doesn't apply), you'd therefore say there's "thus no answer" to the question: what's the difference between a "blue envelope" and a basic envelope? ('blue' being an adjective)
In my experience, here's often what's lacking with teaching the latest in programming formalisms: folks need the new concepts explained in some terms of the old in order to understand them, (hence why i was trying to get someone to explain monads in terms of crusty old terms like (C) structures). The frustrating tendency always seems to be to concoct a host of new terms, and then to define them only among those unfamiliar terms; often giving well-intentioned detailed examples yet only using those novel terms and syntax. It would be akin to teaching a foreign language but not even trying to pick up a familiar object and point to it in association with saying the new term for it.
By the bye, I accept and appreciate the explanation given somewhere here (likely by 'im3w1l'), that monadization generates a different structure for different incidences of monadification. This helps with conceptualizing the mechanism, in a way that "binding of flattened functors returning null or non-null monads" doesn't. It's worth remembering that we didn't all spring forth full grown out of the forehead of a particular computer science theory class.
Nonetheless, the efforts are appreciated. Even given the root commenter here promised that monad was a "simple concept" with a simple definition. If you scan about this comment section, do you think this view is well supported?
I agree with you, but think monad is a poor place to start such a discourse. It's a "simple" concept once you've gotten some other ideas firmly cemented.
I also think there's the standard "simple/easy" dichotomy going on here. Monad is SO simple that it can be quite hard to grok in the same way that quantum physics is.
I fully agree with im3w1l's note about different things coming from different "monadifications" and find that completely consistent with what I've been saying. I'm not sure who uttered the other quote but it's as misleading as it is nonsensical. I think that's the risk with explaining this stuff partially anyway: simplicity isn't necessarily easy and there are lots of partial understandings of "monad" floating around.
hm... "it's so simple it's hard to understand". i'm also very familiar with the 'monads can't be explained until later!' concept. yet folks who say they understand monads so often make heavy use of the term considerably earlier than "later".
"quantum physics" is an interesting reference for me. as it's often my job to implement quantum mechanics in software. computers are dumb. they 'understand' little beyond patches of memory upon which simple algebraic operations are done. so when i'm teaching quantum mechanics i often find it useful to explain how one applies it for (dumb) computers. "ooooh! second order differential equations are implemented as simple iterative passes over an array of floating point values - that's easy!" the student comes to see through the notable complexity of quantum mechanics by observing how we 'teach' it to a computer. why the hell do i bore you-all with this? well, it's possible that showing how one implements a monad in terms of dumb patches of memory, dumb interrupt vectors, bits and bytes, might not be a bad starting point for a common language ..maybe. anyway thank you for trying (it won't be the last time monads are null-functor flattened re-re-re-hashed)
Fair enough, and I also provided a "raw patches of memory" example earlier provided that you are willing to buy linked lists as common language :)
QM via approximation is an interesting point! I suppose in that sense my metaphor breaks down. Approximation works in some fashions, but you need notions of convergence to make that go. These cannot be had in describing monad-nature.
I'd definitely say that once you "have" the concept it becomes standard and nigh universally useful vocabulary. You want to use it a lot because it makes a lot of sense to do so, but this isn't a good didactic method.
I also kind of want to argue generally against the idea that computers are just dumb machines capable only of shuffling memory around. Of course to a certain degree this is true, but only in the same way that algebra is just a series of symbolic algorithms. It's true, but there's nothing interesting to be had from that POV.
The fun stuff occurs when you take the perspective that what's going on inside the model represents faithfully something more interesting going on inside our own heads or out in the world.
Monads are a powerful, simple and subtle thing which happen entirely abstractly—it's just up to us as humans to recognize the pattern (or, equivalently, up to automated computer algebra pattern inference machines to do the same).
Also with your edit it's worth noting that your given fmap cannot be right: it violates both functor laws. As it turns out, for Haskell and most datatype-like functors there is exactly one law-abiding implementation of fmap implicit in the structure of the type.
Sorry, that's a good point. The triple formalism (T, mu, eta) already assumes T to be a functor, but I didn't state that explicitly! So you need (List, map, return, flatten) all of them.
Ok let me give you an example of a monad! The Maybe monad. In this case a monadified value either has a normal value, or it has a null. Ok, so for the operations. "return" should monadify a value. In this case it does this by creating a Maybe that has that value (Maybes that have null can not be created with return. They must be created in some other way.). The bind operation creates, from a function that takes a normal value, a new function that takes a Maybe value. If the Maybe has a normal value then the function is run like normal. But if the Maybe has a null, the function is not run. Instead null is emitted directly.
Why is this a good thing? It means you can chain together many failure prone operations, and you only have to check for null at the end, not in the middle. So it works a little bit like exceptions.
Let's verify the laws:
1. We have some value, and some function that could take that value, and either emit a result, or a null. If we monadify our value, we get a Maybe that has that value. If we bind our function, and then run it with our Maybe, what happens? Well we know our Maybe is not null, because we just put in a totally legit value. So because of how we defined our bind, the function will run just like normal.
2. Ok so let's bind "return" and see what happens. If the input is a maybe that has null, then we never run "return" but just emit null. Consistent with identity!
If the input is a maybe with a normal value, then we run "return" with that value. Return creates a maybe that has that value. So we get out what we put in, it works!
3. There are a number of cases we have to go through but let's do them one by one.
First construction:
a) x is null. f_ skips f and just emits null. g_ skips g and just emits null.
b) x is non-null. first we put x in f_. Since x is not null we just apply f to the normal value. Now f can either emit a normal value or null.
b1) f emits null. g_ is skipped result is null.
b2) If result is not null, g is run with the normal result, and result of overall computation is g(f(normal argument))
Second construction:
a) x is null. g_f_ skips g_f and just emits null. Same result as before
b) x is not null. g_f is run with value in x. To run g_f we first run f. Two cases. f emits normal value or null.
b1) f emits null. g_ of null is null. So if value in x causes f to emit null we have null just as before.
b2) f does not emit null. g_ just runs g with normal result. So if value in x does not cause f to emit null, result of computation is g(f(normal argument))
So both constructions work the same, they continue running until problem arises.
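If it helps, the whole walkthrough above fits in a few lines of Haskell (a sketch using the built-in Maybe, with Nothing playing the part of "has a null"):

    returnM :: a -> Maybe a
    returnM = Just                      -- "monadify" a normal value

    bindM :: (a -> Maybe b) -> Maybe a -> Maybe b
    bindM _ Nothing  = Nothing          -- skip the function, emit null directly
    bindM f (Just x) = f x              -- run the function like normal

    -- Law 1: bindM f (returnM x)  ==  f x
    -- Law 2: bindM returnM m      ==  m
    -- Law 3: bindM g . bindM f    ==  bindM (bindM g . f)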
"A monad" isn't a thing, but we can ask what it takes for something to "have monad-nature" or "be a monad". For this to occur our something must be three things, a triple
(T, mu, eta)
The names are all meaningless so for the moment they just get short letters.
T must be what's known as a Functor. For PLs really, though, it's even more specific. It's an "endofunctor on the type category of your language". What this means is simple:
* For any (static) type `a` in your language, `T a` is also a static type.
* For any function `f` from type `a` to type `b`, `T f` is a function from type `T a` to `T b`.
If your language doesn't have static types then you can approximate this by pretending that it does. Also note that `T f` is not usually the syntax used, but it'll do for now.
Now to make a monad we take any choice of Functor `T` and give it two operations. `eta` takes values in type `a` to values in type `T a` while `mu` takes values in type `T (T a)` to values in type `T a`. In other words they give you a "layer manipulation toolkit".
They must follow laws as well, but these laws are all "common sense laws" which allow you to think of values in the following types: `a`, `T a`, `T (T a)`, `T (T (T a))`, `T (T (T (T a)))`, ... as all being the same as values with just one layer: `T a`. This is the "flatten" idea.
And there you have it! Anything at all which can be regarded as equivalent to one of these triples following this design and laws is a monad!
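For a concrete feel of those layer-collapsing laws, take T to be the list functor, with eta wrapping a value in a one-element list and mu being concat; then it doesn't matter in which order you flatten the layers (a small sketch, easy to check by hand):

    nested :: [[[Int]]]
    nested = [[[1], [2, 3]], [[4]]]

    outerFirst, innerFirst :: [Int]
    outerFirst = concat (concat nested)       -- mu . mu
    innerFirst = concat (map concat nested)   -- mu . fmap mu
    -- both are [1,2,3,4]; that's the associativity-style law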
---
So the real question is why should someone care? The answer is simple.
Lots of things have monad-nature in a typed language. By recognizing their common nature we can (a) see them in a new light full of perhaps previously unknown similarities, (b) share terminology lightening the burden of how to use them, (c) write generic operations which work over any monad and expect them to work with each specific one in a similar way, (d) introduce new syntactic sugar which is built entirely from (mu, eta) and expect it to work similarly for every thing with monadic-nature, (e) begin to form theories and impressions of "what it means to be a monad".
The reasons (a-d) show up all the time once you recognize this pattern since monads are really common.
Reason (e) is what everyone wants to hear about but it's hard to talk about without being super arm-wavey. But I'll try.
In order to provide the operation `mu :: T (T a) -> T a` the type `T a` must be able to "internalize itself" without losing too much information. In order to follow the laws, one bit of important information is an idea of sequence or nesting—but one unique to each particular monadic triple.
In this way monads are a very, very, very general way of talking about sequencing or nesting.
What makes this interesting is that you can see imperative languages as being about sequencing or nesting as well. If you have a listing of statements { X ; Y ; Z } then you can see { Y ; Z } as being nested in the execution context that X created and { Z } being nested in the execution context that { X ; Y } created.
This would maybe make you think that imperative languages have monad nature and indeed they do. This leads to an interesting study of questions like "For some given imperative language, what is a pure-functional representation of the monad representing it?" which sometimes has interesting answers.
What's even more interesting is that one example of reason (d) above is to introduce a syntax sugar which makes any monad look a bit like an imperative language. This leads to an interesting study of questions like "What does the imperative language for some monad X feel like?"
So if you really like imperative languages and recognize that there's a larger design space here than most have any familiarity with, then monads are a good thing to keep an eye out for.
I think that depends very much upon the audience. There are quite a few great ideas out there that are difficult to explain: try explaining the theory of general relativity to your mother, for example.
Yeah, if we're talking about JavaScript, the explanation should start with promises. It's a pattern lots of people know, and it's a very real-world example. No need to front-load the explanation with a bunch of complication before people even know why they should care.
I suspect some people (not saying this author) butcher the explanation intentionally. Makes it seem like an ineffable topic that only the smartest programmer can understand.
I agree, and I actually think a better word for `bind` is `then`.
For anyone unfamiliar, the `then` of Promises corresponds to the monadic `bind` or `flatMap` (if the given function returns a Promise) or the functorial `map` (if the given function returns a plain value).
A monad then is just the interface shared between Promises (`then` and `resolve`), collections (`flatMap` and `wrap`), etc.
The hard part for me was visualising the pattern for more complicated types like parsers (functions from strings to results) and continuations.
I kind of agree with this... In many monads binding creates a time-dependent sequentiality!
But there are some that do not. For instance, the "reverse state monad" would not make sense with 'bind' named 'then' since... as the name implies, state flows backwards through time in it!
    if (cond1)
      return val1;
    if (cond2)
      return val2;
    if (cond3)
      return val3;
    return defaultVal;
is simpler and better than
    if (cond1) {
      return val1;
    } else if (cond2) {
      return val2;
    } else if (cond3) {
      return val3;
    } else {
      return defaultVal;
    }
or something like that. In a certain sense this is objectively false as the second one uses fewer features than the first (a statement we can make formal using monads, but that's unrelated). I'd argue this is doubly true because whenever non-linear flow control begins to be used pervasively it is hard to know what code will be executed and under what conditions. With nesting at least these conditions are obvious.
In any case, we can solve this conundrum through the existence of an Option or Maybe type which encapsulates the imperative behavior of short-circuiting failure directly and limits its scope.
rather than (if I understand the monadic way of doing things correctly)
    ra <- fa()
    rb <- fb(ra)
    rc <- fc(rb)
How is the scope of the short-circuiting limited? We will obtain a None result in case of error but we still do not know when the error occurred. Granted, we can add some guards but so can we with promises. Since the return statements are always at the end of each separate function the scope should not be a problem?
I am kind of new to this so please bear with me. Thanks.
The idea is that from "inside", as we are using do-notation, we merely think of working in an imperative language where failure might occur and short circuit things while from "outside" we see the type of this computation as Maybe and can examine whether or not it actually failed. The "inside"/"outside" dynamic is the scope delimitation.
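A small sketch of that inside/outside split, with made-up fa/fb/fc standing in for the failure-prone steps:

    fa :: Maybe Int
    fa = Just 1

    fb, fc :: Int -> Maybe Int
    fb x = if x > 0 then Just (x * 2) else Nothing
    fc x = Just (x + 1)

    inside :: Maybe Int       -- "inside": reads imperatively, may short-circuit
    inside = do
      ra <- fa
      rb <- fb ra
      fc rb

    outside :: Int            -- "outside": the failure is just a value to inspect
    outside = case inside of
      Just r  -> r
      Nothing -> 0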
> Is there any reason to use monads in (let's say) Javascript rather than promises?
Not intrinsically, IMO. The main value of monads is the shared interface, and its utility is contingent on tools that recognise that interface.
In that respect, it's as useful as providing e.g. a `map` method for promises, arrays, dictionaries, etc. Though in my experience JavaScript rarely seems to be written with this kind of generic interface in mind (e.g. there's no coherent interface mandated between classes in the standard library).
I think Monads have been placed on a pedestal such that many consider that you are not a true programmer until you can understand them.
Consequently, whenever someone feels they understand Monads they are compelled to write a tutorial to demonstrate their understanding and consequent elevation to "true programmer" status. There is a large audience for these tutorials as all the "non-true" programmers struggle to reach this stage.
I certainly only wanted to understand Monads because they sounded cool, not really because I wanted to become a better programmer. Although I certainly found this to be quite a positive side-effect in my struggles to become a "true" programmer.
It's hard like abstract algebra is hard. When things are simple, you have fewer tools you can reach for when trying to figure out all the ramifications of your few rules. Simultaneously, when you reach for examples you need to keep straight what properties of your chosen examples are due to them being instances of the abstraction at hand, and which are (from your current perspective) noise.
That definitely doesn't make it "not simple" - it's the simplicity that's the trouble :-P
I think it's because they solve a problem that people don't realize they have (and truth be told maybe they don't really have it), so the learning process has no motivation.
They solve a major problem that Haskell has (namely circumventing a type-anal non-strict compiler), so they're a big deal in Haskell, whereas in imperative languages straightforward code works just fine.
More to the point, compared to the case of for loops vs `map`, the use case for this kind of function chaining doesn't appear all that often (nor is it as syntactically localized as a for loop) so it's not as easy to abstract. I.e. the underlying pattern is not as pronounced, so it's not really recognized as a pattern/problem/something to DRY in the first place.
> They solve a major problem that Haskell has (namely circumventing a type-anal non-strict compiler), so they're a big deal in Haskell,
Type anal? lol. Would you want a type checker that wasn't anal and exact in what it expects?
> whereas in imperative languages straightforward code works just fine.
Except when you have to do things concurrently and have to use locks. Mixing pure and impure functions also makes the source of bugs harder to find in my experience.
> More to the point, compared to the case of for loops vs `map`, the use case for this kind of function chaining doesn't appear all that often
Function chaining appears quite a bit in my Haskell code.
> (nor is it as syntactically localized as a for loop)
Huh?
> the underlying pattern is not as pronounced
But, I can tell you that a map will have no side effects and return a specific type whereas I can't guarantee that with a for loop.
> so it's not really recognized as a pattern/problem/something to DRY in the first place.
You realize that there is also fold and things like concatMap, right? Maybe I'm misunderstanding your argument.
I'm saying that in imperative languages for loops are more "above the noise" syntactically and frequency-wise than sequential error checking is, at least traditionally. It's not as obvious that there is anything there to give a name to, and so without an underlying motivation, monads are quite a bit more abstract than a lot of other FP concepts.
Actually, I think the "underlying motivation" for monads is specifically that they provide a useful abstraction for which there is not an equivalent elsewhere. That does make it harder for those with experience in other programming to form an intuition of the motivation than for abstractions with a close, e.g., imperative counterpart, but it also is why people with experience with monads in environments where they are widely used often look at ways to port them to other environments. Were there a clear, common equivalent, "monads in language foo" would be less popular.
"(namely circumventing a type-anal non-strict compiler)"
Monads don't circumvent anything.
"in imperative languages straightforward code works just fine"
Or seems to, but is actually deeply broken. Which is regularly the case in the code bases I deal with day-to-day.
"the use case for this kind of function chaining doesn't appear all that often"
Or it does, and you miss it. All of computation can be expressed as function chaining. Writing C or Python, I've often wanted to be able to constrain things at that level.
That's my point. Aside from my lack of amusement by Haskell's type system, I'm not making any moral or qualitative judgements one way or another. I'm talking about programmers in general. The parent's question was why monads are difficult to convey. My argument is that, in the first place, the pattern monads are DRYing is not obvious to people, and might not even be recognized as a pattern in the first place.
Hmm. I suppose there is some semantic ambiguity in "appear". If you just mean that any arising opportunities are not noticed, then I certainly agree with that part of your comment.
Or perhaps just the opposite. The more genuinely simple a concept is the more likely you are to run into it. Since nearly every programming language on the planet can be rightfully said to be built implicitly atop a single monad... it's a very pervasive concept!
The simplicity is an illusion. While monads are not complex, the concept is very abstract relative to the imperative languages most people program in.
Think about it. In Python there's really no syntactic sugar to place something in a container, so the concept doesn't really exist in Python until you invent it using other Python language primitives. So when you tell a Python guy about a burrito, he really has no language analogue to think about. When I tried learning about the Maybe monad, I looked up a tutorial that taught it in Python... Big mistake. I was thinking why the hell would people go out of their way to wrap that shit up in a burrito when they can just do a goddamn try/except... It made no sense to me, and even now, it still doesn't make sense for Python. Only when I learned about Haskell did I realize how the Maybe monad makes sense for Haskell.
I think most people learn about monads through Haskell or any other language with a similar type system. It'd be interesting to hear peoples' experiences about how they grokked the concept.
I think Maybe monad could be really useful in Swift (Apple's new language). Some people use it for chaining up the optional type. However, I found that the lack of functional feature such as curried parameters in swift makes it less useful.
OK, go on, explain what else it is, rather than an accidental convention of how to ensure an order of evaluation of a pair of expressions in a particular lazy pure-functional language?
Erlang, for example, being a functional but strict language, requires no monads. Neither does Standard ML. In these languages a monad would be a useless, redundant abstraction which would only clutter the code.
I've been using Monads in Scala for not throwing wild exceptions, but still being able to stop the computation immediately if needed. For example, you want to validate things with a Validation[A] type, which can either be Valid or Invalid. The binding function in Scala is called flatMap; flatMap on a Valid value runs the given lambda with the contained value as its parameter, while on an Invalid the function isn't called at all, so the chain of flatMap operations stops there.
Now in the main function we can handle the Valid and Invalid with pattern matching and look, we can stop the computation without throwing exceptions, which makes testing and everything way simpler.
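For comparison, a rough Haskell analogue of that Scala pattern, using Either String for the Valid/Invalid pair (the names and checks are invented):

    data User = User { name :: String, age :: Int } deriving Show

    validName :: String -> Either String String
    validName n = if null n then Left "empty name" else Right n

    validAge :: Int -> Either String Int
    validAge a = if a < 0 then Left "negative age" else Right a

    mkUser :: String -> Int -> Either String User
    mkUser n a = do
      n' <- validName n      -- a Left here stops the computation
      a' <- validAge a
      return (User n' a')

    -- the caller pattern matches on Left/Right rather than catching exceptions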
While I really appreciate your cleverness, I would argue that this is an example of so-called overengineering or, to borrow an analogy from architecture, redundant decoration. Why do I need all these complications instead of a predicate?
OK, in some statically typed languages which perform type inference, there is a restriction requiring homogeneity of conditionals and aggregates, otherwise all your type inference falls apart. So, to address this problem we could use simple data structures, like tuples, or we could create a new data type. The canonical example is the Nothing or Just T type. Because different branches of a conditional must be of the same type - this is that type. Also, for the sake of type consistency, it is a parametrized type. In old-school languages we would just return nil, or (values ...). Semantically there is no difference.
Ideally, types and semantics should not interfere. Complicating semantics in order to satisfy a type-system is a controversial idea.
As for Monads - it is just an Abstract Data Type, nothing special, in which Semantics and Type information complement each other. It has been created to keep types consistent - a parameterized type along with two procedures.
As an old-school programmer, I used to think about types as tags, the way it is in Lisps. (Of course, I know that these tags could be arranged in categories, hierarchies, and so-called "laws" could be defined). So, in my view, this is nothing but nested type-tags. It makes it easy to view Semantics and Types separately.
In Haskell a Monad has another "function", which, in my opinion, is the reason why it was created. Along with satisfying the type-system, it also ensures an order of evaluation in a lazy language. The semantics is obvious - you evaluate an expression, and lift (type-tag) the value back into Monad, so the whole expression has type Monad T.
OK, this parametrized type is justified in Haskell, but in other languages, in my opinion, it is a redundant decoration, not a necessity.
Hardly. Erlang and OCaml function in an implicit, ambient monad which makes some default choices like "single thread of execution, sequential, deterministic state, has exceptions" and each will use monads when a different choice of monad is useful.
As a simple example, if you'll buy that by your reasoning OCaml would find monads to be a "useless, redundant abstraction which will only clutter the code", then I'd ask why both Async and Lwt are built to be monadic? Or, if you still feel that's a mistake, how you'd design them elsewise?
Or rather, I think your perspective is better handled by a language which supports monads than one without. If I want a new execution context in Haskell—it's a mere ADT, if I don't think it's useful I use something else.
My point is that everything imperative lives in a monad. If you want, you can choose to be explicit about it which opens new freedoms. Or, if you don't, you can just use the ambient one in languages which have those. Sometimes people realize great things by picking new monads. If it's convenient enough you can do it every other line to great effect.
Ambient monads can be a little annoying though since when they exist you cannot "get outside of them" which will reduce your flexibility.
It's for being able to write functions that are generic in the context they operate in. You absolutely would want to do this in Erlang, or in SML if the type system supported it.
E.g. I can use Future for managing async-ness. I can use a system a bit like http://typelevel.org/blog/2013/10/18/treelog.html for weaving statistics collection through my computation. I use Free to express database operations that need to happen in a transaction. I can use Either to handle operations that might fail.
I can write a method that builds a report from a bunch of rows that works with any of these four contexts, because all of them form monads.
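A sketch of what that genericity can look like (the Row and Report types here are invented): because the function only assumes Monad, the very same code runs in Maybe, Either, IO, a Future-like monad, and so on:

    type Row    = String
    type Report = [Int]

    buildReport :: Monad m => (Row -> m Int) -> [Row] -> m Report
    buildReport fetchStat rows = mapM fetchStat rows
    -- fetchStat may return Maybe Int, Either e Int, IO Int, ... unchanged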
That's because IO implies explicit order, so Monad as an ADT has been used to explicitly evaluate one expression at a time. It has no "magic" or "special mathematical" properties.
Haskell's do notation is syntactic sugar over monads which effectively allows you to write 'imperative-looking' code while still carrying a local state forward without mutation. The Wikibook[1] does a pretty good job of explaining what this looks like (though I'm guessing you already know this).
Now, obviously it is true that one of do notation's advantages is the same as any other monad usage: it allows us to explicitly sequence events in a lazy language that otherwise offers no (obviously intuitive) guarantees on evaluation order. In that sense it's nothing more than sugaring over the otherwise necessary usage of a lot of ugly >> and >>= operators everywhere in increasingly annoying indentation.
But the other thing it offers is a syntactic sugaring over carrying state forward into successive computations (like the State monad[2]), which still carries at least some useful sweetness in a language that is otherwise functionally pure, which is why F# generalized the concept even further to computation expressions[3].
Looked at another way, do notation, or something like it, can be used to sugar over something that rather more looks like the Clojure ->/->> operators, where the initial value is essentially a local namespace. Much like the threading macros, the result even appears to be doing a kind of mutation, even though it's actually doing nothing of the sort.
This kind of thing turns out to be useful for games, for instance, as the linked State monad example above does. In games we often have a main update loop, where we have to do several successive operations on our game that might change the state. We can do this a number of ways, but one way is with something like do notation, where for instance (in some hypothetical language) we might do this:
    do with gameState
      oldGame <- gameState
      gameState <- checkInput
      gameState <- tick
      if gameState != oldGame
        draw
And all of this kind of "fake mutation" can be handled underneath the sugar in a purely functional manner. It's something I've been meaning to put into Heresy for some time. Heresy uses continuation based loops that have a "carry value", that can be passed from one cycle to the next. It's a simple matter of some macro magic to then layer over this some syntax sugar that makes that carry value effectively a name space, that can be altered from one statement to the next, but all entirely without actual mutation underneath.
You can write whole imperative, mutation-riddled languages in purely functional ones this way. There's an implementation of BASIC that runs in the Haskell do notation.[4]
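For the curious, a rough Haskell version of that hypothetical loop body, using the State monad from the mtl package (Game, checkInput, and tick are invented placeholders):

    import Control.Monad.State

    data Game = Game { score :: Int } deriving (Eq, Show)

    checkInput :: State Game ()
    checkInput = modify (\g -> g { score = score g + 1 })   -- stand-in for reading input

    tick :: State Game ()
    tick = return ()                                        -- stand-in for advancing the world

    frame :: State Game Bool    -- True means something changed, so draw
    frame = do
      oldGame <- get
      checkInput
      tick
      newGame <- get
      return (newGame /= oldGame)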
We've leaned on this functional JS library to make JS more type safe (https://github.com/plaid/sanctuary). Heavily borrows Monads and other FP concepts.
Am I crazy that I never understood the point of talking about monads? It just seems like a trivial pattern. It's like having a Subroutine Pattern, or a Variable Assignment Pattern. I'm all for investing in better primitives, but do these kinds of libraries really help vs implementing them from scratch with language primitives inside your (inevitably more complex) domain code?
The big problem with monad tutorials is that you can't talk about them without understanding the motivation behind them, and that comes from purely functional languages (i.e. Haskell). When you're not allowed to touch IO, use `null`, exceptions don't exist, there is no global state, and all of these things have a common pattern to them, then you can motivate their use and talk about them. In my opinion, there isn't much of a point in talking about them without those things. There are some neat tricks (`flatMap` etc...) but without proper motivation, it's not likely to stick, and the idea seems relatively useless.
So what you're saying is that these purely functional languages are so restricted that if you want to do anything useful in them, you _need_ this concept that nobody on the internet is able to actually explain.
Makes me want to use purely functional languages even less.
While this statement seems like common sense, there are plenty of examples of limitations creating a better language. I am quite happy that nobody can write goto in code that I have to work with.
> So what you're saying is that these purely functional languages are so restricted that if you want to do anything useful in them, you _need_ this concept that nobody on the internet is able to actually explain.
What do you define as useful? Reading a file and taking the first line maybe? Here's what you need to know:
Start ghci (I'm using stack[0], if you don't have stack you can also just use `ghci` here).
    cody@cody-G46VW:~$ stack ghci
    Using resolver: lts-2.16 from global config file: /home/cody/.stack/global/stack.yaml
    Configuring GHCi with the following packages:
    GHCi, version 7.8.4: http://www.haskell.org/ghc/ :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Prelude> import Control.Applicative ((<$>))
    Prelude Control.Applicative> readFile "/etc/issue"
    "Ubuntu 15.04 \\n \\l\n\n"
    Prelude Control.Applicative>
Now we are going to use the $ sign which allows you to avoid extra parentheses.
    Prelude Control.Applicative> let add x y = x + y
    Prelude Control.Applicative> let subtract x y = x - y
    Prelude Control.Applicative> add 5 (subtract 3 5)
    3
    Prelude Control.Applicative> -- now using $
    Prelude Control.Applicative> add 5 $ subtract 3 5
    3
    Prelude Control.Applicative> -- what's the point of me teaching you the $ operator?
    Prelude Control.Applicative> -- it's similar to one we'll use to apply our lines function to what readFile returns
The duplication of fmap is getting kind of tedious to me. We can use function composition to avoid it. I'll use the add and subtract function definitions from above, but I'm sure it's obvious what those do.
    Prelude Control.Applicative> let addFiveSubtract3 x = (add 5 . subtract 3) x
    Prelude Control.Applicative> addFiveSubtract3 0
    8
Now you know (hopefully) how function composition works, how can you use it for real stuff?
    Prelude Control.Applicative> -- back to reading files and stuff
    Prelude Control.Applicative> fmap (head . lines) (readFile "/etc/issue")
    "Ubuntu 15.04 \\n \\l"
Remember that seemingly pointless detour to teach you what $ does? Let's try using that.
There is an infix version of fmap called <$> which you might notice resembles $ with the addition of angle brackets, kind of like it's in a box or something. Try removing `fmap` and replacing `$` with `<$>`. I'll wait here.
Now how can we print it out? Well, let's see what our type is:
    Prelude Control.Applicative> :t head . lines <$> readFile "/etc/issue"
    head . lines <$> readFile "/etc/issue" :: IO String
Something to know about Haskell is that it's okay to be naive and pick functions whose type signatures look like they do what you want. There are even search engines[1][2] that take type signatures and give you functions.
How could we go from `IO String` to `IO ()`? Well, IO is a Monad (a thing which follows some rules as defined by typeclasses). A monad defines a few things as you can see here:
But remember, we aren't interested in intimately understanding all that stuff... just in getting our functionality working. Maybe later we can circle back and figure stuff out at a deeper level.
The first function defined is >>=, whose type is:
    (>>=) :: Monad m => m a -> (a -> m b) -> m b
What was that type we needed again? Let's compare these two:
    (>>=) :: Monad m => m a -> (a -> m b) -> m b
If we look at the type of our function getting the first line of a file:
    head . lines <$> readFile "/etc/issue" :: IO String
We see that it can fit into the m a part of this:
    (>>=) :: Monad m => m a -> (a -> m b) -> m b
                        IO String
Keep in mind that m means Monad, as is specified in the typeclass constraint to the left of `=>`.
Now we can specialize our type signature to:
    (>>=) :: Monad m => IO String -> (String -> IO b) -> IO b
What's an `IO b`? `a` meant anything so what does `b` mean? `b` actually means anything too, but a was already taken in the earlier part of the type signature.
If we look at the type of our putStrLn function:
    putStrLn :: String -> IO ()
You might notice it fits into the second part of our new specialized type signature:
    (>>=) :: Monad m => IO String -> (String -> IO b) -> IO b
                                      String -> IO ()
aside: () is kind of like void, at least that's how I think of it. Its actual name is unit.
Cool, now notice that I put parentheses around the infix function to make it a prefix function. Let's take those parentheses off and use it as a proper infix function:
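For instance, it could look like this (my reconstruction of the step, along with the output it would print):

    Prelude Control.Applicative> head . lines <$> readFile "/etc/issue" >>= putStrLn
    Ubuntu 15.04 \n \l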
We could have also used do notation and the `<-` or "draw from IO to the variable on the left, taking appropriate actions as defined by the Monad context you are in, in the case of a failure". So if you are in the Maybe monad's context, a failure will cause a break (using imperative terms) and return Nothing.
There, now you know how to use monads and functors to some degree for a real problem, and you have the minimum knowledge (as far as I can tell) necessary. Why do all of this just to read the first line of a file?
Now that you know how the machinery works it's pretty simple and gets you composability and referential transparency.
Want to know more about Functors and Monads (maybe even applicatives O_o)? Check out Functors, Applicatives, And Monads In Pictures[3].
Dude, if you wanted to open a file and read the first line, then just do so. What you're doing is trying to solve this simple task in a very complicated manner full of weird symbols that, even if you don't believe it, are not entirely obvious. People who want the first line of a file want something that expresses that in an obvious manner, not with a baggage of functional programming theory.
> Dude, if you wanted to open a file and read the first line, then just do so.
After knowing how the Monads and Functors work, I can:
    putStrLn =<< head . lines <$> readFile "/etc/issue"
There are advantages to separating pure and impure functions. Pure functions can be tested more aggressively with property testing for instance. They can also be optimized more agressively.
Most of the errors in software in my experience come from impure functions, so handling it in a principled way seems to reduce bugs.
> People who want the first line of a file want something that expresses that in an obvious manner
I feel like you mean to say "want something that expresses that in a familiar manner". The Haskell definition I used at the beginning of this comment is obvious to someone familiar with the basics of the programming language.
> not with a baggage of functional programming theory.
What functional programming theory? To put this into OOP terms this is just working with well defined Objects that adhere to an interface and whose failures are encapsulated in a sensible way.
edit: I agree it's harder to see or explain these advantages. Please give the tutorial I posted a quick once-over and let me know what you think. If you have specific criticisms about smaller and more specific pieces I think we could have good discussion.
To be honest I'm constantly re-assessing whether it's worth having to deal with the "baggage" of Haskell as you refer to it. However I go back to using other less principled languages and things simply don't ever seem to work out as well.
This is crazy talk. Scala (and .NET to a lesser extent) both make use of special language features for making monads easier to deal with, and neither of them forbids side-effecting (IO). For-comprehensions and LINQ comprehensions are both monadic comprehension syntax over the operations flatMap and SelectMany!
Monads are a fantastic way of handling errors, composing parsers, handling concurrency, parallelism, callbacks, and many other non-IO related things.
I'm not saying side-effecting IO is the _only_ thing that motivates the idea behind Monads, it's just one of the big ones. I mentioned more than just that, and the use cases you mentioned are also big motivators.
Great point, most are removing the force behind the emergence of the pattern. To add insult to injury, they explain it top-down, lecture-style, throwing concepts at you first, instead of bottom-up the way non-Haskell tutorials do (JavaScript, Elisp, ...), which don't have an advanced type system or `Monad a` to blur the mind, only the choreography of a bag of lambdas.
> I'm all for investing in better primitives, but do these kinds of libraries really help vs implementing them from scratch with language primitives inside your (inevitably more complex) domain code?
Yes - the best way to build complex things is out of simple primitives. A lot of the time, making a custom type be a monad simplifies the logic - that is, you'd want to implement the monad operations (and support do notation) anyway. Calling them by their standard names makes it easier for other people to read your code, and being able to use generic library functions like traverse (which work for any monad) with your custom type is just a bonus.
I suppose it's nice to have a bunch of generic monad functions already implemented for you. Stuff like do-notation, applicative, sequence, mapM, join, etc.
I've really enjoyed "Functional JavaScript - Introducing Functional Programming with Underscore.js". I almost didn't get it because the use of underscore was a bit of a concern (the library is great but I felt I wanted to learn FP with "pure" JS).
It was a fairly big step up from the intro to JS I read before (Eloquent JS) but it's a fantastic book. Jam packed with good stuff. I think the writing is a bit dense at times but the content is gold.
I came to an understanding of monads the other day thinking about fail-and-reverse-on-error applications like messing with the filesystem. I have an example of this (and a simple implementation in OCaml) if it helps you [0]. After my "breakthrough", I've been seeing monads everywhere; everything is a nail.
In a strict (as opposed to lazy) non-functional language monads make no sense. Write two statements which use a temp variable on the same line, separated by a semicolon - this is your monad.