Hacker News
Monads in pictures (newartisans.com)
98 points by chrislo 1647 days ago | hide | past | web | 24 comments | favorite



That's not really how I visualize functors. Let's go back to the function picture:

    ---          ---
    |a| -- f --> |b|
    ---          ---
This is fairly simple--using f, you can go from a to b.

Now let's imagine a functor F (with f still being our function):

    ---          ---
    |a| -- f --> |b|
    ---          ---
          | |
          |F|
          \ /
           V
    ----           ----
    |a'| -- f' --> |b'|
    ----           ----
So a functor is just a function at a higher level. A function maps values of one type to values of another type. A functor maps types to other types. In Haskell, types in the Functor class always map types of kind * back onto * , so they're much like functions of type a -> a.

The crucial part to notice is that it not only maps between types but it also preserves functions. That is, you get new types corresponding to your old types and new functions corresponding to your old functions.

So the list type, for example, maps any type a to [a] and any function (a -> b) to ([a] -> [b]). In a sense, it is like a well-behaved function between types.
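A quick Haskell sketch of this (using only the Prelude; the names `square` and `squares` are just for illustration):

```haskell
-- The list functor sends the type a to [a], and fmap sends a function
-- (a -> b) to a function ([a] -> [b]) -- the f' in the diagram above.
square :: Int -> Int
square x = x * x

squares :: [Int] -> [Int]
squares = fmap square

main :: IO ()
main = print (squares [1, 2, 3])  -- prints [1,4,9]
```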

Coincidentally, thinking of functors like this is how I realized that functors in Haskell are not completely unrelated to functors in OCaml--they're just different reifications of the same mathematical idea.


That's right. That's how functors work in category theory.

I've seen a nice article explaining how Haskell types form a category. The objects of the category are types, and an arrow a -> b is a function taking an argument of type a and returning a value of type b. A Haskell functor is then an endofunctor in this category (i.e. a functor from the category of types to itself). Unfortunately I cannot find the article anymore.



Thanks for the links! However I am thinking about a different article. If I remember correctly I found it somewhere in the Haskell Wiki. Wikibooks have also a similar article on the subject: http://en.wikibooks.org/wiki/Haskell/Category_theory


I've always liked this visualisation too!

I like to think of fmap as converting a normal function into a function that operates "through" a functor. This is clearer looking at the Haskell type signature, which can be written as:

  fmap :: (a -> b) -> (f a -> f b)


In general, thinking about the different valid rewrites of the type signatures of the various higher order functions in Haskell tends to reveal quite a bit!
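For instance, partially applying fmap to a plain function "lifts" it so it works through any functor (a small sketch, Prelude only; `lifted` is an illustrative name):

```haskell
-- Reading fmap as (a -> b) -> (f a -> f b): giving it one argument
-- produces a function that operates "through" any functor f.
lifted :: Functor f => f Int -> f Int
lifted = fmap (+ 1)

main :: IO ()
main = do
  print (lifted (Just 2))   -- prints Just 3
  print (lifted [1, 2, 3])  -- prints [2,3,4]
```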


Unrhetorically,

What is the relationship between fold + cons (::) and monadic bind? What is the natural transformation from fold to functorial map? Is there a sensical dual?

Thanks.


> What is the relationship between fold + cons (::) and monadic bind? What is the natural transformation from fold to functorial map? Is there a sensical dual?

Whew, working backwards.

Fold is like your Swiss Army knife for working with entire lists at once; you just need to tell it what to do when it sees a node (Cons) or a terminator (Nil). If, for example, you instruct fold to apply a function to each element of the list but leave the linked structure intact, you've implemented map! And for lists, the functorial map, or fmap, is exactly map.
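A sketch of that, writing map as a fold (`mapViaFold` is just an illustrative name; the Prelude's map behaves the same way):

```haskell
-- At each Cons, apply f to the element and rebuild the node;
-- at Nil, return Nil. The linked structure is left intact.
mapViaFold :: (a -> b) -> [a] -> [b]
mapViaFold f = foldr (\x rest -> f x : rest) []

main :: IO ()
main = print (mapViaFold (* 2) [1, 2, 3])  -- prints [2,4,6]
```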

The monadic bind in the List monad is a way of expressing nondeterminism. Let's say I have two functions...

  b = makeSomeBsFromAnA(a); c = makeSomeCsFromAB(b);
In the context of the List Monad, you would evaluate the first statement to produce a list of Bs. Then, for each B, you would generate a list of Cs, and then concatenate all of those lists into a big list. That's why bind is sometimes referred to as flatmap in the List Monad - take each element generated by the last list computation, generate a bunch of new lists from them, and then flatten them (smoosh them together; concatenate them). flatmap could of course be implemented using fold and cons.
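In Haskell, flatmap-via-fold-and-cons looks roughly like this (`flatMap` is an illustrative name; the Prelude calls it concatMap, and for lists (>>=) is the same function with its arguments flipped):

```haskell
-- Map f over each element, then flatten by concatenating with (++),
-- which is itself built from fold and cons.
flatMap :: (a -> [b]) -> [a] -> [b]
flatMap f = foldr (\x rest -> f x ++ rest) []

main :: IO ()
main = print (flatMap (\b -> [b, b * 10]) [1, 2])  -- prints [1,10,2,20]
```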

The point here is that, since we're working within the context of the list monad, I can deal with a single B and a single C, even though the function calls are really giving back lists of those respective types.

By the way, in Haskell the code would really look like this

  makeSomeCsFromAnA a = do
    b <- makeSomeBsFromAnA a
    makeSomeCsFromAB b



Site seems to be down, here's a cached copy: http://webcache.googleusercontent.com/search?hl=en&clien...


Decent article, but something strikes me as a little off ... it's an honest question though:

What compels people to use words like "grok"? Are comprehend, digest, understand, follow, etc. so insufficient that it necessitates an ugly-sounding word meaning emotional absorption to be misappropriated as mental absorption? There are no good tense rules established, and it's not widely known outside the domain of nerddom. So I ask, to what ends?


> meaning emotional absorption to be misappropriated as mental absorption

Misappropriated? Wouldn't 'grok' mean absorption of all kinds?

Personally, I like the word. It implies something more than simple comprehension or understanding. That said, I think it's misapplied here, exactly because of that difference. As an introductory tutorial, the reader is just beginning to understand the concept. It'll take much more than that to achieve what the word implies.




People like it :)

(I don't.)


I know nothing about Monads, and I know nothing about Haskell except for the fact that it's a functional programming language that has Monads. Yet I still found this really informative... Makes me want to play around with Haskell and see if this article makes it easy to grok how Monads work.


From a non-Haskell programmer here: it looks like monads are just a way to call functions in a different context. In JS:

  map(myObj, someFunc, contextObj)
Or am I missing something here?


It's a bit more powerful than that - consider what would happen if, every time you sequenced two statements together...

  doThingOne(); doThingTwo()
... you got to pick what happens during the semicolon, depending on which Monad you happen to be working with.

For example, as praptak mentioned, maybe during every semicolon you can evaluate the failure state of your program and decide not to continue. I.e., "Maybe" the execution worked, and "Maybe" I have a valid computation at this point, but "Maybe" not!

The power here is that while you are really operating in this context with an implicit possibility of having failure, you can program as if each step succeeded. So now we can string together a whole sequence of possibly-failing computations, and create a bigger possibly-failing super-computation that error checks under the hood.
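A small sketch of that in Haskell with the Maybe monad (`safeDiv` and `compute` are made-up names for illustration):

```haskell
-- Each step may fail; bind (hidden inside the do notation) checks for
-- Nothing at every "semicolon" and aborts the rest of the chain.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

compute :: Int -> Maybe Int
compute n = do
  a <- safeDiv 100 n  -- "Maybe" this worked...
  b <- safeDiv a 2    -- ...and "Maybe" this did too
  return (b + 1)

main :: IO ()
main = do
  print (compute 5)  -- prints Just 11
  print (compute 0)  -- prints Nothing
```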


ok, gotcha that is a pretty nifty feature :)


The context isn't a property of the monad abstraction, but of a concrete monad, like the State monad.

If you desugar everything from the State monad, then you can see that the state 'c' is an explicit argument, which is hidden in the "sugared" version.

    import Control.Monad.State.Lazy
    
    type Counter = State Int
    
    -- using the State monad and the do notation
    increment :: Counter ()
    increment = do
       c <- get
       put $ c + 1
    
    -- using the State monad and desugar the do notation
    increment_ :: Counter ()
    increment_ = get >>= \c -> put $ c + 1
    
    -- desugar 'get'
    get_ = \c -> (c, c)
    
    -- desugar 'put'
    put_ c = \_ -> ((), c)
    
    -- desugar >>=
    bind_ a b = \c -> let (result, c') = a c in (b result) c' 
    
    -- "desugar" the State monad
    increment__ = get_ `bind_` \c -> put_ $ c + 1


There's more to monads. One important feature is the separation of composition from execution. Another is that the composition can do some extra things automatically; for example, the Maybe monad composes functions in such a way that if the first function produces 'Nothing', the second will not be evaluated.
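You can see that last point directly: Maybe's bind pattern-matches on Nothing before ever touching the second function, so even an error placed there is never reached (a tiny sketch):

```haskell
-- Nothing >>= f returns Nothing without applying f at all,
-- so the error below is never evaluated.
result :: Maybe Int
result = Nothing >>= \_ -> error "never evaluated"

main :: IO ()
main = print result  -- prints Nothing
```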


I may be misunderstanding you, but from your description, that more describes functors than it does monads. The difference being that in a monad, the `someFunc` you pass in is responsible for creating a new context (of the same monad type).

Pretending that JavaScript is immutable, I think it would probably look something more like

    contextObj = monadBox(myObj)
    // someContextReturningFunc gets myObj as its argument from contextObj and generates newContextObj
    newContextObj = monadBind(contextObj, someContextReturningFunc)
    newNewContextObj = monadBind(newContextObj, someOtherContextReturningFunc)
    ...
To make this more concrete, let's take the Maybe monad/functor in javascript and compare them. The Maybe monad is described by johnpmayer elsewhere here.

    // Creating maybe "contexts"
    maybeNothing = {maybe: "nothing"}; // maybeMonad holding nothing
    maybeJust = function(x) { return {maybe: "just", just: x} }; // create a maybeMonad holding a (the '[a]' in the picture)

    // Monadic bind
    function maybeBind(maybeMonad, f) {
      // maybeMonad is '[a]' in the picture
      if (maybeMonad.maybe === "just") {
        return f(maybeMonad.just); // expects f to return another maybeMonad '[b]'
      } else {
        return maybeNothing; // also '[b]' but the b value doesn't really exist
      }
    }

    // Functor map
    function maybeMap(maybeMonad, f) {
      // maybeMonad is '[a]' in the picture
      if (maybeMonad.maybe === "just") {
        return maybeJust(f(maybeMonad.just)); // expects f to return a value without any maybe stuff ('a -> b')
        // notice that the monad wrapping remains separate from f
      } else {
        return maybeNothing;
      }
    }

    // Functor examples
    //////

    // possible values of 'f', note that maybe is not involved at all
    function addOne(x) { return x + 1; }

    // usage
    console.log(maybeMap(maybeJust(1), addOne)); // returns maybeJust(2)
    console.log(maybeMap(maybeNothing, addOne)); // returns maybeNothing


    // Monad examples
    //////
    
    // possible values for 'f', notice x is what was contained in the maybe (not the maybe itself) and the return is a maybe
    function maybeAddOneIfOdd(x) { return (x % 2 == 0) ? maybeNothing : maybeJust(x + 1); };
    function maybeAddThree(x) { return maybeJust(x + 3); };   

    // usage
    console.log(maybeBind(maybeJust(1), maybeAddThree)); // returns maybeJust(4)
    console.log(maybeBind(maybeJust(1), maybeAddOneIfOdd)); // returns maybeJust(2)
    console.log(maybeBind(maybeBind(maybeJust(1), maybeAddThree), maybeAddOneIfOdd)); // returns maybeNothing (due to 4 being even)
    console.log(maybeBind(maybeBind(maybeJust(2), maybeAddOneIfOdd), maybeAddThree)); // returns maybeNothing because the first bind returned nothing

    // and of course, composition
    function maybeMultiplyThree(x) { return maybeJust(x * 3); }
    function maybeMultiplyThreeThenAddOneIfOdd(x) {
      return maybeBind(maybeMultiplyThree(x), maybeAddOneIfOdd);
    }

    console.log(maybeBind(maybeJust(1), maybeMultiplyThreeThenAddOneIfOdd)); // returns maybeJust(4)
    console.log(maybeBind(maybeJust(2), maybeMultiplyThreeThenAddOneIfOdd)); // returns maybeNothing
Edit: added some extra examples and comments


Can someone point me to more resources on category theory from the computer science point of view?


Someone's not ready for the HN-Effect? Website not responding.



