Functor, Applicative, and Monad (typeslogicscats.gitlab.io)
252 points by TheAsprngHacker on Sept 18, 2019 | 120 comments



I don't think this article is very useful. It doesn't adequately provide an introduction to OCaml (or adequately explain what a given code snippet is doing), and yet it frequently defers to code alone to explain a concept. It's unrealistic to expect an unfamiliar reader to simultaneously infer what a particular code snippet is doing and then also go a level deeper and understand the concept being presented.

This article is most readable to those who already understand OCaml code, and if you can already read OCaml code you already understand these concepts.


You claim that OCaml programmers are already versed in category theory, but that's not correct. The first exposure usually comes with concurrency libraries such as Lwt, which isn't very old, isn't in the standard library, and provides syntactic sugar so you can start being productive right away.

You can write perfectly fine production OCaml programs without knowing what a monad is. Some may, but a lot of OCaml devs don’t care about them.


This is correct, and in contrast to Haskell, where monads are a core part of the language (they are used in the definition of do-notation and list comprehensions) and currently the mainstream way to do IO.


By this definition Monads are also a core part of C#. They're not used to do IO, but async/await, IEnumerable, and the null-safe operator ('?.') are all implementations of Monads (not by coincidence), and above all you have the Linq syntax, which you can extend to all of them and to any other Monads you care to define.

Yet I imagine most C# programmers have little to no formal understanding of Monads (although quite a few probably do at least know about the link).


Sorry, I'm only vaguely familiar with C#, but maybe you're right, if they get sugar in the form of LINQ syntax. But regarding async/await, (?.), etc., a language doesn't have special support for vector spaces just because it has many vector spaces in it - and every language has many vector spaces in it.


That's true, it's just that it's culturally inappropriate to use the full power of monads in C# outside specific scenarios like Linq.


Well that really depends on who's working on it. In my opinion Linq isn't so much the only appropriate way to use monads but rather the most convenient way to demonstrate their full power.


This is misleading. There's no place in the specification of Haskell that specifically defines monads as part of the language. They aren't a language feature. They are a pattern which happens to be expressible in Haskell, and which is supported by libraries:

"Haskell's built in support for monads is split among the standard prelude, which exports the most common monad functions, and the Monad module, which contains less-commonly used monad functions. The individual monad types are each in their own libraries and are the subject of Part II of this tutorial."

https://wiki.haskell.org/All_About_Monads#Monad_support_in_H...

The fact that monads are used to implement parts of the language doesn't make them a core part of it. Techniques used in implementation aren't the same as language features.


I don't think this is true, do notation (see https://www.haskell.org/onlinereport/haskell2010/haskellch3....) specifically depends on the Monad typeclass and exists as syntactic sugar for monadic operators:

> A do expression provides a more conventional syntax for monadic programming


The Haskell 2010 Report mentions monads all over the place, both in the language and library sections. A Monad instance is literally the only way to do IO in standard Haskell. Monads get special syntax in the form of do-notation and, in an extension, monad comprehensions. It's undeniably a core feature of the language.
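
For instance, here is a rough sketch of what the Report's do-notation desugaring amounts to (example names are mine):

    -- this do-block...
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello, " ++ name)

    -- ...is defined to mean this chain of monadic operators:
    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("hello, " ++ name)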


A pedantic distinction without a practical difference. Everyone who learns Haskell learns about Monads.


Is do notation a language feature?


it's syntactic sugar


You can also use monads without knowing anything about categories. It's just a convenient programming pattern.


It's plainly obvious that the GP poster made no such claim about category theory, and the OP post states several times that category theory is merely tangential to the topic.


OCaml (Reason) was my entry point into Typed FP (after a few failed attempts at learning Haskell). The best thing about OCaml is that it allows side-effects, so one can go a long way without having to touch advanced FP.

It is only recently that I've started getting comfortable with concepts cogently explained in this post, so to me it is quite valuable.

As an aside, I think OCaml/Reason/Elm should become the de-facto Typed FP entrypoint for programmers instead of Haskell. That'll help drive more adoption for the paradigm because the ecosystem is a lot more approachable and one wouldn't feel overwhelmed to even begin.


I would add F# to that list, which is an ML language as well.

It works on almost any platform, be it web, mobile, desktop or server via Fable, Xamarin or .NET Core.

It can work as an object-oriented language as well, which makes it easier to interface with C# code, but the documentation makes it very clear that functional is the way to go for F#.

For UIs there are libraries like Fabulous and Elmish, that provide a very similar programming model as Elm does.

You can even share code between all your target platforms:

https://github.com/retendo/FSharp-CrossPlatform


I learned typed FP from Haskell, coming from JS, and I'm glad I started there.

It was a bit of a slog, but there are some wonderful books out there. I don't think we should discount the amount of pedagogical resources that are available in the language. My favorite is the "First Principles" book.

As a side effect, I found that once I had learned most of Haskell (minus some of the language extensions aimed at type-level programming), you pretty much won't find anything in any other typed language that will surprise you.


I understand your point of view and I also don't think my understanding of Typed FP would be complete without sufficient immersion in Haskell. But even with the learning materials, it is too steep a climb for beginner programmers for whom the love of the subject is secondary to their day jobs and regular life.

For people from dynamically typed or static OO background, being able to write functional code with records and sum types alone is a huge quality of life improvement. They should be able to get there with the least amount of effort. Haskell however demands far too much understanding and effort before it can be confidently used in a commercial setting. That excludes mainstream programmers who could otherwise most benefit from the paradigm.


What would you say are some of the main benefits you gained with your journey with Haskell, beyond coming across a lot of abstract concepts?

I'm struggling to justify spending more time on picking up Haskell myself as I can't see clearly what's at the end of that tunnel.


I love Haskell, I find it a joy to write. If you don't really enjoy it then there isn't much reason to spend time on it. It's probably not going to help you with employment. As a side benefit, it's helped me solidify many PLT and CS concepts. That's enough for me to see value in it.


Similar for me. Tried to get into typed FP multiple times with no luck, but then I built a couple of web services with F# + Giraffe and an app in Bolero. It let me take my time getting into FP concepts and incorporate them along the way.


> if you can already read OCaml code you already understand these concepts.

This is a mistake. There are plenty of functional programmers who aren't familiar with these concepts. It's hardly a topic in introductory OCaml...


Yeah, this is a common problem. And the real problem is that few people both understand the subject and remember the stumbling blocks to learning it. There is a Haskell book by Christopher Allen where I think he tested the book out on a friend who didn't know Haskell so that he could be sure that it made sense. That's probably the best way to create material like this.


I majored in math so I was already familiar with some of the concepts, but OCaml is so different from any language I’ve worked in I couldn’t understand how any of it was being applied.


I can try to explain this to you from a math perspective, if you want, but I need to know your math background. Are you familiar with the lambda calculus and its relationship to category theory?


When I first learned Haskell it took a very long time before I started figuring out what monads, applicatives and category theory were, and even then I only ever took a deep dive into monads specifically, basically so that I could do side effects.

For a long time in the beginning I was just learning about the basics, like list manipulation functions, and typeclasses. There was quite a journey in between beginning Haskell and even starting to look at the learning material for monads/functors etc.


As someone who's only spent about an hour reading an intro to Haskell (and none reading about OCaml), I was able to mostly get the gist of what was being said. Not completely. I had to look up the * operator (effectively a comma between arguments, as opposed to a currying arrow), but I would consider the article to have been useful, even if it could've been more useful.


The OCaml type 'a * 'b is the equivalent of the Haskell type (a, b). This type is known as the Cartesian product type, and it's written with a multiplication symbol in type theory (hence the asterisk in OCaml syntax).

The product type is the type of pairs. Its definition is

    x : A
    y : B
    --------------
    (x, y) : A * B
meaning that if x has type A and y has type B, then (x, y) has type A * B.
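
For anyone who reads Haskell better than type theory, the same thing as a throwaway snippet:

    -- OCaml's int * string is Haskell's (Int, String)
    pair :: (Int, String)
    pair = (42, "answer")

    -- the two projections recover the components
    first :: (a, b) -> a
    first (x, _) = x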

- Product type on Wikipedia: https://en.wikipedia.org/wiki/Product_type

- Product type on the nLab (which is a math-heavy resource): https://ncatlab.org/nlab/show/product+type


I found it useful (or at least educational). I've used Haskell before, but not much OCaml -- the let+, and+, and let* forms were new to me.


If I recall, they are very new to OCaml too.


In your opinion, is this article less helpful than the functor/monad tutorials that predominantly use Haskell?


An important realization that I had was that monads/functors/applicatives aren't patterns in the sense of design patterns. You don't solve a singular problem with a monad. With something like a strategy pattern you have a concrete problem: how do I select different potential algorithms? Monads don't have a specific problem that they solve. Any attempt to motivate monads in such a manner falls flat because the problem is either too general to be motivating or too specific to apply to monads as a whole.

Instead, functors/monads/applicatives are more like a technique that can be used to solve a wide variety of problems that all coincidentally use the same function signature. And therefore, it's perfectly acceptable to say "I know how monads work with Maybe and List, but not Reader," because fundamentally, how a Reader implements bind is in no way related to how Maybe or List implement bind.
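
To make that concrete, here's a sketch of how differently two standard instances implement bind (hand-rolled, not the library definitions verbatim):

    bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
    bindMaybe Nothing  _ = Nothing        -- failure short-circuits
    bindMaybe (Just x) f = f x

    bindList :: [a] -> (a -> [b]) -> [b]
    bindList xs f = concat (map f xs)     -- collect every branch's results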


> And therefore, it's perfectly acceptable to say "I know how monads work with Maybe and List, but not Reader," because fundamentally, how a Reader implements bind is in no way related to how Maybe or List implement bind.

If this is indeed true, what is the point of learning these "patterns"? From a mechanical understanding of the signature of bind and unit, you'd arrive at more sophisticated signatures, say filterM, but an understanding of what it does will still need an understanding of the specific implementation of the Monad instance it is being applied on. That sounds like a leaky abstraction. If so, why bother?


The point is that Monad, Applicative and Functor are well defined interfaces with laws (properties) you can count on.

They are in fact much better, more precisely defined than classic design patterns.

And in expressive programming languages (that support higher kinded types or that at least let you encode such types) you can also describe generic code that works over any applicative or monadic type.

Having reusable functions that work just as well on lists, maybe/option, reader, io / promise or what have you means these type classes do a very good job at abstracting over data types. These are the purest forms of abstraction.

And you can get syntactic sugar from the language as well. For example, comprehensions in Python work via the iterator protocol, but that's super limiting. Haskell's "do notation" or Scala's "for comprehensions" work on any monad instead, being much more powerful and reusable.

Unfortunately it takes an expressive language to understand and work with these abstractions comfortably. You won't grok monads in Go or Java.


> Having reusable functions that work just as well on lists, maybe/option, reader, io / promise or what have you means these type classes do a very good job at abstracting over data types. These are the purest forms of abstraction.

This is where I get kinda lost. Can you give an example of such a reusable function that works for all these things which does something valuable?


This is mostly useful for reusable boilerplate. For example, if I have a list and want to map over it with a function, I can call normal `map`. But if I want to map over it with a function that doesn't return a normal value, I can get something weird. With promise, for example, I might end up with a list of promises, where what I want is a promise containing a list. If I didn't have monads, I would have to write a function that manually took each value out of the list, awaited it, and put that value into a new list that will be returned from that function.

In Haskell, that function is just called `mapM`, and it's consistent whether you're working with exceptional cases (Either String a), nullable cases (Maybe a), IO (IO a), Promises (Async a), or even weirder things.
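
A minimal sketch of that uniformity, using Prelude's mapM and a made-up `half` function:

    half :: Int -> Maybe Int
    half x = if even x then Just (x `div` 2) else Nothing

    -- mapM half [2, 4, 6] == Just [1, 2, 3]
    -- mapM half [2, 3, 6] == Nothing   -- one failure fails the whole list
    -- the same mapM turns (a -> IO b) into [a] -> IO [b],
    -- (a -> Either e b) into [a] -> Either e [b], and so on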

This is actually a huge problem for the Java Streams API, because Java's checked exception feature is sort of like most monads. You can't call a function that throws an exception inside of a function that doesn't, except by using some weird incantation. (You can think of `try` as being a built in syntax for "running" the exception monad). Which means that you can't call an exception throwing function in `Stream::map`. Instead, you have to do something really ugly. https://www.reddit.com/r/java/comments/49yjqf/tunnelling_exc...


Here is a quick real world example: You are working on a compiler, type checking statements one by one. You come across a function call and to type check that, you first need to type check the argument expressions to get their types and abort the whole thing if they don't make sense. You use a `Maybe` type to model the result of type checking the argument expressions: The result is either `Nothing` (in case of failure) or `Type` (in case of success). `Maybe` is kinda similar to a nullable type in other languages. In Java you would now make an if-statement for each argument and check that the result of type checking wasn't null; Very annoying. Luckily for you, `Maybe` is an `Applicative` so you can just use the `traverse` function to do this for you automatically: The argument iteration will stop automatically and return `Nothing` if any single argument type check returns `Nothing`. You are happy with your compiler, but your colleague says it sucks, because they can never know what went wrong. To fix this you replace `Maybe` with the `Either` type. Instead of returning `Nothing` in case of failure you can now return an `Error`. Dreadfully you set about to refactor your whole codebase, only to realize that `Either` is also an `Applicative` and has the `traverse` function. Everything works just as before. Your colleague still isn't satisfied: Your compiler always aborts at the first error but they would like to see all the errors in the input at once. Again, you replace `Either` with the `Validation` type, which accumulates errors in independent computations. `Validation` is also an `Applicative` and everything still works as before. You just got your error accumulation for free without large refactoring.
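
A compressed sketch of that story (all compiler names invented; Validation comes from a separate library such as the validation package):

    data Type = TInt | TBool deriving Show
    data Expr = IntLit Int | BoolLit Bool

    typeCheck :: Expr -> Maybe Type         -- first version
    typeCheck (IntLit _)  = Just TInt
    typeCheck (BoolLit _) = Just TBool

    typeCheckArgs :: [Expr] -> Maybe [Type]
    typeCheckArgs = traverse typeCheck
    -- swapping Maybe for Either Error, or for Validation [Error], changes
    -- only the signatures; the traverse call stays exactly the same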


Map-reduce is effectively the fruit of realising that, as long as your processing can be described as a combination of Functor (map) and Monoid (reduce) operations, then you can trivially parallelise your computation by letting the implementation of `map` and `reduce` handle all the parallelism for you.
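
In Haskell terms that combination is just foldMap; a tiny sketch:

    import Data.Monoid (Sum(..))

    -- map each document into a monoid, then reduce; an implementation is
    -- free to split the list and combine the partial sums in parallel
    wordCount :: [String] -> Int
    wordCount docs = getSum (foldMap (Sum . length . words) docs)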


A question. Does map-reduce assume a commutative monoid, or does it need to order the output of map?

Edit: seems like it's the latter.


traverse (for sequence-like structures) or cataM (for tree-like structures). They let you walk a data structure with an applicative or monadic operation and compose the monadicity in the way that obviously makes sense.


Thanks for this short explanation. The benefits of these patterns were not clear to me just from reading the article. Question: is the idea always to think in terms of types as groups of values + some operations on those types such that <insert properties about resulting groups here>?


It's important not to think of types as just sets of values, because types are more semantic than that; two quite different types might have the same underlying values but they're different types because you use them to mean something different. A typeclass is one or two levels higher than a type, and yes, for a type to conform to a typeclass usually means that that type supports particular operations in a way that conforms to certain laws (e.g. a type `a` will have an instance of the monoid typeclass if there is a definition of `+` for `a` that is associative and a value `0` of type `a` that behaves as left and right identity wrt this `+`).

I'm not sure how important that is when working practically with them though. I tend to think of typeclasses as something like categories of types that behave similarly.
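
For instance, a quick sanity check of those monoid laws at the list instance (a throwaway sketch):

    -- for lists, <> is (++) and mempty is []
    lawsHold :: Bool
    lawsHold = (([1] <> [2]) <> [3]) == ([1] <> ([2] <> [3 :: Int]))
            && (mempty <> [1 :: Int]) == [1]
            && ([1 :: Int] <> mempty) == [1]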


Seems similar to type traits in Rust. I skimmed through "Category Theory for Programmers" and it mentions that this thing is mainly about composition. Composing functions? So you've got a trait (say Functor) that is meant for functions. You know something about functions that have this trait, so you can exploit that knowledge to create compositions of those functions. So then in theory you can write algorithms that take as input functions implementing some trait? I'm trying to get the 'aha' moment here, I guess then that there exist many useful such algorithms? Or did I just completely miss the mark? :)


Rust traits are pretty much typeclasses. I don't agree with "this thing is mainly about composition" so I don't know what to tell you there. Functor isn't meant for functions, it's meant for types, or rather type constructors (types of kind * -> *).

Higher-kinded types are just "what if type parameters could be parameterized types" - I'd argue that from a certain perspective a language that allows them is simpler than one that doesn't. If you know Rust then you can hopefully see that Future#and_then, Option#and_then, and Result#and_then are in some sense "the same" function (and that function is the heart of the usual definition of a monad). But if you try to write some generic code in terms of and_then that could work for Future, Option, and Result, you'll find that Rust's type system isn't sophisticated enough to let you do that (even if you try to define a custom trait for it).

More generally, the way I tend to see applicatives and monads is: what if you could write an algorithm that worked generically for any secondary concern. What if you could write functions that incorporated what the AOP people call "pointcuts", in a generic way, but without having to step outside the type system and break the normal way that code and functions behave? But you absolutely need higher-kinded types before you can even start to talk about this, because you need to be able to talk about "wrapper" types in a generic way, which you can't do if you can't have generic (parameterised) types as parameters.
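
For contrast, once the type system allows it, the generic version is trivial; a Haskell sketch (function names made up):

    andThenTwice :: Monad m => (a -> m a) -> m a -> m a
    andThenTwice step start = start >>= step >>= step

    -- works at Maybe:    andThenTwice (\x -> Just (x + 1)) (Just 0)  == Just 2
    -- works at Either e: andThenTwice (\x -> Right (x * 2)) (Right 1) == Right 4
    -- works at IO, lists, parsers, futures, ... with no changes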


That's pretty cool! Too bad Rust doesn't support higher-kinded types...


The simplest but still most versatile one is probably fmap (also known as <$>), which lets you use a function on whatever is inside the burrito:

    (+1) <$> Just 1 → Just 2

    (+1) <$> getIntFromThatUserThatOnlyTypes1 → IO 2

    (+1) <$> [1,2] → [2, 3]

    ((+1) <$> ask) 1 → 2

Want to let a function eat several burritos without making a mess? Keep your burritos apart with <*>:

    take <$> Just 2 <*> Just "abc" → Just "ab"

etc.

You can be quite productive without moving past the burrito analogy.


No. If you attempt to write monad-generic code using the burrito analogy you will inevitably write broken code, causing serious problems for other users and/or your future self.

Burritos may be a good analogy for some subset of monads, e.g. collections, but they are not a good analogy for monads in general: the function you pass to fmap may be executed now or later, zero, one, or many times, before or after a function you pass to a later fmap.
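
A sketch of the "maybe never, maybe later" point, using the Functor instance for functions:

    -- for functions, fmap is composition: nothing has run yet
    f :: Int -> Int
    f = fmap (+ 1) (* 2)   -- f x = x * 2 + 1; no value "inside" ever existed

    -- while for lists, the mapped function runs once per element:
    ys :: [Int]
    ys = fmap (+ 1) [1, 2, 3]   -- runs three times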


Nah, like any bad analogy, it can be stretched :) Once you've moved past the "lies-to-children[1]" version, you can ascend to the real meat of the burrito: https://blog.plover.com/prog/burritos.html

And once you've fully grokked burritos in terms of monads, you can finally reach that zen level where you solve your problems without even a single line of code (because you've started working at the burrito shop instead: http://chrisdone.com/posts/monads-are-burritos ).

[1] https://en.wikipedia.org/wiki/Lie-to-children


too late to edit, but that should look like

    (+1) <$> Just 1 → Just 2 

    (+1) <$> getIntFromThatUserThatOnlyTypes1 → IO 2

    (+1) <$> [1,2] → [2, 3]

    ((+1) <$> ask) 1 → 2
and

    take <$> Just 2 <*> Just "abc" → Just "ab"
as if the operator line noise wasn't bad enough already :)


Sure. Let's look at alterF for the Map type in the containers library:

http://hackage.haskell.org/package/containers-0.6.2.1/docs/D...

It has the type:

    alterF :: (Functor f, Ord k) => (Maybe a -> f (Maybe a)) -> k -> Map k a -> f (Map k a)
Note that it works for all instances of Functor. Trivial choices allow this to reduce to simple things like insert or lookup:

    insert :: Ord k => k -> a -> Map k a -> Map k a
    insert key val map = runIdentity (alterF (\_ -> Identity (Just val)) key map)

    lookup :: Ord k => k -> Map k a -> Maybe a
    lookup key map = getConst (alterF Const key map)
But those aren't really compelling examples because they just reproduce simpler functionality. It starts to pay off when you start compounding requirements, though. What if you need to insert a value and return the previous one, if it existed?

    insertAndReturnOld :: Ord k => k -> a -> Map k a -> (Maybe a, Map k a)
    insertAndReturnOld key val map = alterF (\old -> (old, Just val)) key map
OK, those are all fine and good, but they're still barely scratching the surface. What if you had a problem where, when you had a value to insert and there was a previous value at the same key, it was ambiguous which you should use? And let's say the structure of the problem provides interdependencies which restrict the options such that you can't just stack up a list of every possibility for each key. So ideally, you'd like a way to model "insert this value, but if something was already present at this key, give me both possible maps back". Turns out alterF can do that!

    insertNonDet :: Ord k => k -> a -> Map k a -> [Map k a]
    insertNonDet key val map = alterF (maybe [Just val] (\old -> [Just old, Just val])) key map
Fun facts with that one - thanks to using Functor, it only traverses the tree once, whether it returns one or two results. And thanks to Map being a persistent data type, returning two results only takes O(log n) space more than returning one.

(note: this is all typed on my phone without a compiler to verify. There might be simple mistakes in the above, but it's all conceptually sound.)

That's all still just the start. You can insert an IO operation on the old value to calculate the new one, and it still only traverses the data structure once. Or a huge number of other things. Anything that can be made into a Functor can be used to augment the operation alterF does.

And you know the best part of all this? Despite all that freedom, the type of alterF tells you that you can't change the key associated with a value and even that you can't operate on more than one value in the map. It really is nice when simple things give you lots of options, but make it clear what they don't offer.


>The point is that Monad, Applicative and Functor are well defined interfaces with laws (properties) you can count on.

except you can't because you can have unlawful implementations.


Your example, filterM has the type

    filterM :: Applicative m => (a -> m Bool) -> [a] -> m [a]
from this I can tell you conclusively what it does: it takes a monadic function that returns a Bool after performing some "action", as well as a list of values; it then performs this action on each element and looks at whether the result is True or False; when True, it keeps the element, otherwise it discards it.

Now this is already a lot of information on what filterM does. But to use it in an actual piece of code, you need to supply your own understanding of what the "action" is in this context. It does not matter whether or not you understand the implementation details of the specific Monad instance. You just need to understand its behavior: with the Reader monad, you know the action in question is just reading a value from a "hidden" environment, so your filtering function has access to this environment when it determines whether to keep or discard an element; with the State monad, you know the action in question can possibly modify this environment as well; with the Maybe monad, you know the filtering action can say "I don't know," which makes the entire result Nothing as well.

Your argument that functions like filterM are a leaky abstraction is akin to saying a Java interface is a leaky abstraction because you can't instantiate an interface (you need a class), so you need to understand both the interface and the class that implements it. Instead, think of it this way: combining orthogonal (decoupled) things inherently requires an understanding of both of the pieces you are combining.
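
For what it's worth, a small runnable sketch of those behaviors:

    import Control.Monad (filterM)

    keepSmall :: Int -> Maybe Bool
    keepSmall x
      | x < 0     = Nothing        -- "I don't know" poisons the whole result
      | otherwise = Just (x < 10)

    -- filterM keepSmall [1, 5, 20]  == Just [1, 5]
    -- filterM keepSmall [1, -5, 20] == Nothing
    -- with the list monad it enumerates every keep/discard choice:
    -- filterM (\_ -> [True, False]) [1, 2] == [[1,2],[1],[2],[]]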


"Conclusively" is probably a little strong. You're relying not just on the type, but also the name - the type alone isn't enough here, even without resorting to bottom, and even when requiring every piece be meaningfully used. For instance, with the same type we can write:

    doubleIf :: Applicative m => (a -> m Bool) -> [a] -> m [a]
    doubleIf f values = case values of
        [] -> pure []
        x:xs -> prepend x <$> f x <*> doubleIf f xs
      where
        prepend :: a -> Bool -> [a] -> [a]
        prepend x True xs = x:x:xs
        prepend x False xs = x:xs


Maybe a simpler way to illustrate this is two different functions built off of filter:

`keep`: Given a predicate and a collection, filter the collection to retain only items for which predicate evaluates to true.

`discard`: Given a predicate and a collection, filter the collection to discard items for which the predicate evaluates to true.

These are two very reasonable functions that would have the exact same signature, would be appropriate for the same arguments, but have two different behaviors!


Honestly, you could reasonably call either of those "filter" :D


> I can tell you conclusively what it does: it takes a monadic function that returns a Bool after performing some "action" as well as a list of values; it then performs this action on each element and look at whether the result is True or False; when True, keep it otherwise discard it.

The function below matches the signature of filterM for List monad, but does not do anything remotely similar to what you described.

    foo :: (Int -> [Bool]) -> [Int] -> [[Int]]
    foo f xs = 
      case bool of 
        [True] -> []
        _  -> [xs]
      where bool = f $ length xs  
This is a contrived example, but I think a large part of your guess on what filterM does comes from the word "filter" in its name. This isn't any different from other mainstream programming languages, so it is not a ding on Haskell.


That's actually what I meant: using both the name and the type to figure out what it does. I'm not claiming there's a single implementation given the type. Indeed, I don't think that's possible for any signature that's not totally just type variables; here we have Bool and [] as base types, so there are operations on them that the function can perform by baking in knowledge of these two types.


To my understanding, although the behaviors of filterM, etc. differ depending on the monad instance, as long as the monad instances follow the laws, those functions like filterM have predictable behavior. It's a consequence of "theorems for free" / parametricity.


Monads let you redefine what a statement is. This is very powerful and makes Haskell my favourite language for imperative programming. There's no need to modify the compiler for e.g. Async/Await; it can be a library. A Monad instance has a set of properties or laws that it obeys. This gives you a general understanding of what e.g. filterM will do, i.e. apply an effectful filter function such that the effects are sequenced in the order of the input list.
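
As a sketch of the "it can be a library" point, here's a hand-rolled Writer-style monad (all names invented) that makes every "statement" also accumulate a log, with no compiler support beyond the standard do-notation desugaring:

    newtype Logged a = Logged (a, [String])

    instance Functor Logged where
      fmap f (Logged (x, w)) = Logged (f x, w)

    instance Applicative Logged where
      pure x = Logged (x, [])
      Logged (f, w1) <*> Logged (x, w2) = Logged (f x, w1 ++ w2)

    instance Monad Logged where
      Logged (x, w) >>= f = let Logged (y, w') = f x in Logged (y, w ++ w')

    say :: String -> Logged ()
    say msg = Logged ((), [msg])

    -- ordinary-looking "statements" that also log:
    example :: Logged Int
    example = do
      say "starting"
      say "done"
      pure 42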


You may not know how it is implemented, but because you have used other monads, you will know how to use it and how to compose it.


Your description means they are design patterns: a technique to solve a problem that appears repeatedly. What trips people up is that monad is a 2nd-level pattern, as can be seen in the type. 'List a' is-a pattern for nondeterminism. 'Maybe a' is-a pattern for handling failure. Instancing 'm a' is a pattern for 'sequencing effects with branching' and also other patterns for various other applications.


I still wouldn't call them design patterns. At least in haskell, they are not used like design patterns in OOP languages. They are more inherent properties of certain types (Maybe a "is" a monad etc.) and the instances are provided regardless of whether they are needed to solve a problem or not.

In OOP design patterns not only solve a specific problem, they are only used to solve the problem (no need to sprinkle the visitor-pattern all over your classes just because it's possible). Monad/Functor etc. instances are defined where they are possible. I view them as making inherent properties visible and explicit ("this action obeys these laws").

They are used to solve problems, but mainly by making abstract properties directly visible and reminding the programmer that he can just view them as this abstract concept and reuse existing machinery.



I am the author of this submission. I have strong opinions about functor and monad tutorials, and here are my thoughts:

Back when I didn't understand what a monad was, I would read a bunch of tutorials and get confused by the analogies and examples. For example, I would get confused by comparisons to "boxes," or I would think that Maybe was the definition of a monad, or that IO was the definition of a monad.

The information in the tutorial that you link seems to be all correct. However, it's too "jumpy" for my tastes. The tutorial talks about "contexts," but doesn't really explain what a "context" is, except for making a comparison to "boxes." I get that a functor is an abstract idea, so explaining it in an understandable way is difficult. However, I wish that the article would discuss type constructors, because the idea of mapping types to types is an important part of the definition of functor. Without this explanation, I imagine that the comparison to "boxes" would have given the past me the wrong impression of what a functor is.

In my tutorial, I sought to teach the actual definition of a functor, applicative, and monad. I explain that a functor maps types to types and functions to functions in a way that preserves composition and identity, that an applicative preserves the product, and that a monad is characterized by a "join" operation that "flattens" the data and a "return" operation that "wraps up" the data. I actually would have preferred to explain the actual category theory, but I felt that it would be too intimidating, and so I attempted to convey the ideas in a non-category-theory way. With my current understanding of functor, applicative, and monad, I believe that if one doesn't learn their actual definitions, one doesn't truly understand them. I guess that I wanted my tutorial to be more rigorous.
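
To illustrate the "join" characterization with two throwaway instances:

    joinMaybe :: Maybe (Maybe a) -> Maybe a
    joinMaybe (Just (Just x)) = Just x
    joinMaybe _               = Nothing

    joinList :: [[a]] -> [a]
    joinList = concat

    -- bind then decomposes as:  m >>= f  ==  join (fmap f m)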

However, I am not an expert on teaching, so maybe I'm taking the wrong approach.

See my Reddit comment: https://www.reddit.com/r/programming/comments/cy35zz/functor...


As Doug Crockford once said, "In addition to it being useful, it is also cursed and the curse of the monad is that once you get the epiphany, once you understand - "oh that's what it is" - you lose the ability to explain it to anybody." [1]

I think for someone who is a beginner or just wants to use Functor / Applicative / Monad without mastering the underlying category theory, the boxes model seems good enough. However, if you are creating your own monads, then of course you need to understand the monadic laws. Every programming language has monads of some sort, e.g. Optional in Java. To use optional chaining, I may not need to know all the details, only how `flatMap` works. Maybe I am wrong too :).

[1] https://www.i-programmer.info/news/167-javascript/5207-crock...


Losing the ability to explain it is good, because the explanations are wrong. The best modern advice about monads is that you don't learn them by explanation of the monad in isolation; you grok them by experience with examples and study of the formal definitions. This applies to a lot of Haskell (and mathematics in general), which has the ability to provide precise, concise implementations of very powerful, general concepts.

Monad tutorials for newbies are like explaining quantum mechanics to someone who hasn't learned anything about optics or electricity or complex numbers yet - a bunch of false, meaningless metaphors.


Thank you for taking what seems to be an interesting approach to explaining these concepts.

> I use OCaml in this tutorial, with some occurrences of Haskell.

> My intention is for anyone familiar with the basics of typed functional programming to be able to follow along.

I don't know OCaml or Haskell and I got lost very early on due to the unfamiliar syntax. Do you think your explanation would be easy to translate to a more widely known language like Python or C?


*Shrugs* Well, Python is dynamically typed and C doesn't really have good polymorphism or first-class function support. Both polymorphism and first-class functions (with closures) are important for understanding functors, applicatives, and monads.

I admit that I'm more fluent in OCaml than I am in Python or C, so please correct me if I'm mistaken.


Python and C would indeed be unsuitable. Maybe Rust (quite OCaml-like, but with a more C-like syntax) or Java/C#/Kotlin would let you present these ideas with a more "mainstream" syntax?


You can do it in Scala which has powerful type support.

The problem with other languages is that they lack the support for the higher order types, so the implementation is dishonest or you have to build up an API (new embedded minilanguage) to express them, which is a lot of work and a distraction.

And, the result tends to be very cluttered with syntax junk, as you can see in Scala and Functional Java.

Which incidentally is why people generally don't use these ideas in Java code. You can use these design ideas in your Java architecture, but not directly express them in your Java code.


OCaml also lacks higher-kinded types. As far as I can tell, anything you can do in OCaml you can also do (perhaps verbosely, though less so in more recent versions) in Java.


OCaml has HKT; it's in the module part of the language, it's just verbose.

And OCaml's module language is much more powerful than Java's, since the module language is dependently typed. You can't, say, pass a class including a type to another class in Java.


I wrote a very long and incomplete monad tutorial that primarily uses Python. It's very different from OP, but feel free to check it out.

https://github.com/lalaithion/MonadPaper/tree/master/out


Hi! Reading your tutorial I wanted to leave a quick comment on HN and was worried you wouldn't see it. Nice to see you're present!

Thanks, I liked reading it a lot. It's a good refresher, since I never use haskell but have read about this a couple of times.

The tutorial appears great on mobile, so I was able to refresh my memory very quickly. However, the code boxes had a very small font. I think this could be easily fixed with some HTML-fu on the meta tag or the sizing of the box/font.

I also think that it would be great if you could explain at the end what a counterexample input for the small Haskell program would be. For example, if the user inputs a file that doesn't exist, then readFile returns Nothing, and then putStrLn returns Nothing. There's a sentence in the conclusion that explains that monads are nice to use but not why. I understand that it deals with values inside contexts, but in practice how does it affect development?


Thank you for the praise. For me, the inline code has about the same font size as the surrounding text, maybe smaller, and the block code has a bigger font. Is it possible for you to send me a screenshot?

I'm a little confused by your last paragraph. My last code example doesn't use readFile and putStrLn, and it isn't written in Haskell?


there's an equivalent definition of Applicative that may be a bit less intimidating:

  class Functor f => Applicative f where
    unit :: f ()
    (**) :: f a -> f b -> f (a,b)
[source - a bit CT heavy](https://stackoverflow.com/a/35013667/5534735)

(i'll be writing the second operation as `××` in some places because of HN asterisk weirdness)

this gives us:

- `unit`, a "template" container with a hole we can fill (by doing `fmap (const myValue) unit`, equivalent to `pure myValue`)

- `fa ×× fb`, to "compose" two Applicative values. this shows how, unlike with Monads, the two computations must be "independent". for example, if IO were only an Applicative, you could do

  print "hello" ** print "world"
but not

  getLine >>= \s ->
   print ("hello, " ++ s)
(where the second computation depends on the string we got in the first one)

it also nicely shows the two ways lists can be an Applicative - `fa ×× fb` can be either the Cartesian product `[(a,b) | a <- fa, b <- fb]` or `zip fa fb` (ZipList).

(also, it's kind of like a Monoid, which is neat!)
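
both list behaviours really exist in base, for anyone who wants to play with them (ZipList lives in Control.Applicative); a quick sketch:

    import Control.Applicative (ZipList(..))

    cartesian :: [(Int, Char)]
    cartesian = (,) <$> [1, 2] <*> "ab"
    -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

    zipped :: [(Int, Char)]
    zipped = getZipList ((,) <$> ZipList [1, 2] <*> ZipList "ab")
    -- [(1,'a'),(2,'b')]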


Huh? I cover this definition in the tutorial, when I discuss OCaml's (and+) operator! Did I gloss over things too quickly?


oh man, i must've missed it! tbh though, i just quickly scanned through the article to see if this needed plugging, because I remember being completely stumped by `(<×>)`. too late to edit it now though :/ sorry!


Nice explanation of monads!

My two cents: personally, I would've started with the fact that a monad is exactly the stringing together of functions (a -> m b), and the similarity between monoids, monads, strings / lists and functions under composition. You mentioned that

• (a -> [b]) is the type of non-deterministic computations

• (a -> Maybe b) is the type of fallible computations

• (a -> IO b) is the type of effectful computations

• (a -> (s, b)) is the type of computations with state s

• that the Monad instance merely specifies how to compose them

• all such composable constructs can be expressed as a monad

• do notation and list comprehensions automatically work across all of them

and, in my opinion, these are all much more powerful motivation than beginning with a comparison to functors.
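
A tiny sketch of that "stringing together", using Kleisli composition, (>=>) from Control.Monad (the two steps are made up):

    import Control.Monad ((>=>))

    parseAge :: String -> Maybe Int
    parseAge s = case reads s of
      [(n, "")] -> Just n
      _         -> Nothing

    checkAdult :: Int -> Maybe Int
    checkAdult n = if n >= 18 then Just n else Nothing

    -- compose fallible steps exactly like ordinary functions:
    readAdultAge :: String -> Maybe Int
    readAdultAge = parseAge >=> checkAdult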


It's just mind-boggling how universal these concepts are and how they show up in surprising ways. As a recent example, I've been learning the basics of composing music in Haskell with Euterpea[1] and wanted to make a function which played several notes over a list of octaves to make chords. It turns out the applicative operator was exactly the function I needed to do this! It would be hard to go into the details in just a little blurb here... but here is a piece I made with it[2]. And here's the line of code with the applicative operator:

    notePlayer notes octs dur = musicSeq $ (uncurry <$> notes) <*> ((, dur) <$> octs)

Won't make much sense without context, but it's there!

[1]: http://www.euterpea.com/

[2]: https://soundcloud.com/a-mathematical-way/not-enough-time-to...


I have never heard of Euterpea, nor have I seen the rest of your code, but I suggest you refactor your slightly convoluted code like this:

    notePlayer notes octs dur = musicSeq $ notes <*> octs <*> pure dur
Coincidentally, this might be a testament to the power of parametric polymorphism and equational reasoning.


My favorite part about Haskell is how you can know literally nothing about the domain and make meaningful changes to programs regardless thanks to local reasoning.

That's really one power of functor/applicative/monad - if you understand their interfaces, you can work with new unfamiliar types that have these instances without much effort at all.


It's amazing. No one could ever do something like this with an imperative language!


Nobody ever said that :)

Although abstracting over this stuff isn't possible in any imperative language, unless Scala or maybe advanced C++ templates count. But that isn't due to their imperative nature exactly.

But local reasoning is very hard to count on in most other languages - that's for sure. It's easier to just run a VM in your head.


It's literally a meaningless change! :-) All refactorings are meaningless changes, by definition and design.

Purity's power is that beautiful meaningless changes are safe.


Amazing, thanks! It always amazes me how natural Haskell is for some people. Some seem to see the simplest solution with so little effort while I still struggle with almost embarrassingly simple things. I wish I would have picked up Haskell as my first language, instead my mind was poisoned by C...


I haven't figured out whether the C I learned was poison. I often wish I could intuit Haskell without translating everything to imperative in my head, because of how cleanly many complex things can be expressed. On the other hand, pretty much everything I program has some rough kind of performance constraint in practice and I appreciate how directly I can read the performance characteristics of an imperative program from its control flow.


Definitely learning Haskell as the first language can make it easier to get used to things, but I don't generally agree with C as "poisonous" to one's mind. My last big project written in C was writing a hypervisor, and honestly using C in this context is rather appropriate.


I always go back to "Learn You a Haskell for Great Good".

http://learnyouahaskell.com/functors-applicative-functors-an...


Being someone who is still struggling with these concepts, I like this tutorial because at least it doesn't use the common list/maybe/state to illustrate the concepts. Somehow I feel these concepts are so abstract - unless one is well versed in category theory, maybe only a data approach can prevent people from overfitting these concepts to specific examples.

I would really hope to see a tutorial that has a diverse set of examples and just fmaps each example with a light explanation of, say, what is a monad in this code and what is not, and because it's a monad we can do this.

Essentially the tutorial can just train a classifier in one's head, and with a nice set of examples maybe the brain can learn a general representation of concepts for the classifier ...


In this series functional concepts are very gently introduced. I feel like it really appreciates the beginners starting point and assumes very little. Is this close to what you want?

https://egghead.io/lessons/javascript-linear-data-flow-with-...


I started writing this the other day - https://codersteve.dev/post/refactoring-to-monads/

It's just a start, but maybe it could help.


After learning Elm for a few months I picked up a Haskell book. When I got to the Functor, Applicative and Monad chapters I discovered that I'd actually been using them for months without knowing it. Having had practical experience with them, the theory came as a revelation. I found that I'd developed an intuition for them from all the concrete scenarios where I'd used them.

The Elm community goes out of its way to avoid talking about them because they make learning functional programming seem far more intimidating. It's a very simple language though - Evan has always had beginners in mind when designing the language and tools. If you're looking to improve your functional programming skills it's got a great learning curve.


I should learn to read Haskell one of these days. Not today though.


The majority of my tutorial actually uses OCaml, though. :P

I like OCaml because it strikes a balance between imperative and functional programming. Maybe learning OCaml would be easier than learning Haskell?

(Plus, OCaml has neat features like polymorphic variants and a powerful module system!)


"Learn You a Haskell" is sparse on theoretical foundations. I made it halfway through the book feeling like I only had a superficial understanding of the language. If you are interested in a rigorous or systematic approach I recommend "Haskell Programming from First Principles".


I think both are great books, but the book you choose is secondary in importance to the amount of time you spend just bashing your head against it until it starts to make sense.


> one of these days

More like one of these months/years. It's not just a language, but a philosophy of computation.


I still find Philip Wadler's original paper on Monads to be the clearest explanation of the pattern and concept. It remains scoped to the usefulness of the concept in programming, lays out very clear motivating examples, and proceeds to implement the pattern to solve each case in a lucid and explicit manner. I'd say the only downside is that it assumes at least some familiarity with FP and writing more "theoretical" or academic tools like evaluators, but all in all it's still much clearer than the majority of garbled explanations of monads, even those that attempt to explain the concept through its dependencies/priors (functor/applicative).

here’s the paper: https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...


    It turns out that every generic type t has a corresponding map function map : ('a -> 'b) -> 'a t -> 'b t.
So how about x : 'w -> Bool?


If there is a function `map : ('a -> 'b) -> 'a t -> 'b t`, and there exists a function `x : 'w -> bool`, then `map x : 'w t -> bool t`. Is this what you were asking?


Think `data Foo a = Foo (a -> Bool)`, pardon my Haskell. A function `map :: (a -> b) -> Foo a -> Foo b` is impossible, however `contramap :: (a -> b) -> Foo b -> Foo a` is fine (just pre-compose the given function with the stored function).

Even worse with `data Bar a = Bar (a -> a)`.
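
For completeness, a sketch of the Contravariant instance being described (the class lives in Data.Functor.Contravariant in base):

    import Data.Functor.Contravariant (Contravariant(..))

    newtype Foo a = Foo (a -> Bool)

    instance Contravariant Foo where
      contramap f (Foo p) = Foo (p . f)   -- pre-compose with the stored function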


Thanks. I know what contravariant functors are, but I haven't used them, so I didn't realize that was what the parent commenter was asking about.

You're right, my claim that every type 'a t has a corresponding (covariant) functor is incorrect, and I should either take that out or mention contravariant functors.


It’s even worse than that. Consider

    data Foo a = Foo (a -> a)
This admits neither Functor nor Contravariant. Sadly all you can say is “If you can map, it’s a functor.”

I always end up finding out more about any subject I actually publish a post about when people read it...


The other person also mentioned this. It's called invariance. There's also a fourth variance: https://www.benjamin.pizza/posts/2019-01-11-the-fourth-type-...


Thank you for pointing out contravariant functors; I have fixed that sentence in my post and credited you.


Oh that’s much more extensive. Like the link to Julie’s work.


I believe that you can say "As long as `a` only appears as the last parameter in a higher kinded type, or not as the parameter of any higher kinded type, then you can define a functor over it."


> According to currying, 'a -> 'b -> 'c is interchangeable with 'a * 'b -> 'c. (The fancy math word for this interchangeability is "isomorphism.")

I thought isomorphism was when a function was reversible. I didn't think it had anything to do with currying.


It means there are two functions going between them that witness the isomorphism.

The witnesses in Haskell are the higher-order functions (they transform functions) `{,un}curry`:

    curry :: ((a, b) -> c) -> (a -> b -> c)
  uncurry :: (a -> b -> c) -> ((a, b) -> c)


Isomorphism is not about currying. When I mentioned isomorphism, I was referring to the interchangeability in general, and the currying relationship is just an example of an isomorphism. In category theory notation:

Hom(A * B, C) ~ Hom(A, C ^ B)

This is one of the laws of a Cartesian closed category. The simply typed lambda calculus with products is the internal language of a Cartesian-closed category.


I've always understood intuitively the existence of .map(), .reduce(), and .flat() in JavaScript, but .flatMap() felt like a weirdly arbitrary combination of two of them. Now it makes sense!
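
In Haskell terms (a throwaway sketch), bind for lists is literally "map, then flatten":

    xs :: [Int]
    xs = [1, 2, 3]

    viaFlatMap :: [Int]
    viaFlatMap = concat (map (\x -> [x, x * 10]) xs)   -- [1,10,2,20,3,30]

    viaBind :: [Int]
    viaBind = xs >>= \x -> [x, x * 10]                 -- same result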


Maths is what you get when you limit all your variable names to single letters.


I used to think the way you do (or the way I'm inferring you think), especially coming from the OO world to functional.

But, functional programming is really about programming with types, not named things, and so a lot of the time we work with much smaller functions that are expressions. It should be trivial to follow the intention without explicit names.

Having terse and concise names can make it much easier to read if you're following the types. Occasionally, if a piece of code is more challenging to read, I may use single word variable names, but mostly it's trying to get the names out of the way of the types and operators.

For example, if I have a type (where `a` is generic):

    a -> bool
Then this can only be a predicate function. If it's provided as an argument to a function, I'll call it `f`, not `predicate`, because the type itself is the documentation.

It's definitely a mindset change. Functional programming is much more about function composition, whereas imperative programming is about a list of steps to perform. Composition is much more about whether the types fuse or not, so the names become slightly less relevant, especially for general-purpose library functions.

Personally, I feel it helps. But this is probably one of those subjective things like tabs vs spaces.

(spaces are correct)


And make symbols mean different things in different situations, and sometimes reuse symbols for the same thing, and sometimes make sin(x+3) mean the variables s * i * n * (x+3) and sometimes mean taking the sine of x+3.

Mathematical language is worse specified than markdown and has a huge amount of ambiguity and requirement that you understand the context that you're working in.

It works fairly well for mathematicians, in terms of being concise to express complex ideas on a blackboard, but overall it's a mess.


And the naming convention is absolutely horrid. It seems the main criterion is that it should sound clever.


Pretty weird that a comment linking a chapter from an oft-referred-to Haskell text is a dead comment here. Surely if the alternate explanation in "Learn You a Haskell for Great Good" is somehow sub-optimal it would be better to explain how, rather than kill the comment inside 10 minutes?

You see this sort of thing from language warriors fighting silly wars but, yeah, what's wrong with Learn You a Haskell? Why must it be fought and suppressed immediately? Crazy...


It was probably automated. larusso has only made two comments ever, both containing links, so that probably triggered the spam detectors. You can manually resurrect accidentally dead comments like this by clicking on the comment's time and then clicking "vouch".


I’m personally more the consumer type here at hacker news.



