
On Learning Haskell - karlzt
http://esr.ibiblio.org/?p=1796
======
jrockway
Pretty good article! I read the first few paragraphs and was a little scared,
but the author redeemed himself with the line, "A simpler way to think about
monads is as a hack to turn function composition into a way of forcing the
sequencing of function calls, or the functional-programming equivalent of a
shell pipeline." After that, I would agree with pretty much everything he
says! (A first for an "I looked at Haskell today" article :)

A few nit-picks come to mind: thunks and closures are related but different,
and every other language with thunks also calls them thunks.

Lisp is not really "based on lambda calculus" any more than any other
imperative programming language; "(loop for i from 1 to 10 collect (+ i 42))"
is not so different from a loop in assembler.

And finally, monads are not really a _hack_ for making function application
look like a sequence, that's already what function application does. Monads
just let you run some extra code between function applications. But given that
most people write stuff like, "monads are like nuclear-waste burritos inside
of spacesuits" (er...), I am not going to complain about this explanation too
loudly :)
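
To illustrate the "extra code between function applications" point, here's a
minimal sketch of my own (not from the article) using Maybe:

```haskell
-- safeDiv is an ordinary function that can fail
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- (>>=) sequences the calls like a shell pipeline, and the Maybe
-- monad runs a "did the previous step fail?" check between stages
pipeline :: Int -> Maybe Int
pipeline n = safeDiv 100 n >>= \a -> safeDiv a 2
```

If any stage returns Nothing, the rest of the pipeline is skipped; that's the
"extra code" the monad inserts between applications.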

~~~
barrkel
Well... if you consider IO to be the nuclear waste of functional programming,
the IO monad is a way of wrapping up that nuclear waste so tightly that it
can't escape until the functional program has logically completed executing.
The high-level result (of some IO type) is logically an imperative program
that the functional program built using functional composition with
placeholders where the IO was supposed to happen.

But while I have to stretch to make this analogy, monads are sufficiently
abstract that they could probably be made to fit many strained analogies...
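
A toy model of that picture (my own sketch, deliberately not the real IO
type): the functional program builds a pure data structure describing an
imperative program, and only an interpreter at the outer edge performs it.

```haskell
-- an "imperative program" as a pure value: either finished,
-- or an output step followed by the rest of the program
data Prog a = Done a | Output String (Prog a)

-- sequencing just composes descriptions; nothing runs here
andThen :: Prog a -> (a -> Prog b) -> Prog b
andThen (Done a)     f = f a
andThen (Output s p) f = Output s (p `andThen` f)

-- the interpreter performs the effects after the pure program
-- has finished building the description
run :: Prog a -> ([String], a)
run (Done a)     = ([], a)
run (Output s p) = let (out, a) = run p in (s : out, a)
```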

~~~
Xichekolas
When jrockway said the burrito thing, I think he was referring to the Brent
Yorgey post about the "monad tutorial fallacy":

<http://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/>

To summarize: everyone, once they learn Haskell, comes up with their own
analogy for "what monads are"... and the analogy, while it makes perfect sense
to the person that came up with it, never helps anyone else because everyone
gets to monads via a different route.

~~~
eru
I have the perfect explanation for Monads: Monads are like everything that
satisfies the three monad laws. (Of course this only helps mathematicians, if
at all. Not mere mortals.)
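
For reference, the three laws, written out in Haskell (here as checkable
predicates I've specialized to Maybe):

```haskell
-- left identity:  return a >>= f  ==  f a
leftId :: Int -> (Int -> Maybe Int) -> Bool
leftId a f = (return a >>= f) == f a

-- right identity: m >>= return  ==  m
rightId :: Maybe Int -> Bool
rightId m = (m >>= return) == m

-- associativity: (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assoc :: Maybe Int -> (Int -> Maybe Int) -> (Int -> Maybe Int) -> Bool
assoc m f g = ((m >>= f) >>= g) == (m >>= \x -> f x >>= g)
```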

~~~
jrockway
"A monad is like a functor, except there is also a 'join' operation."

~~~
eru
Don't forget about pointed functors in between, i.e. return.

~~~
jrockway
Yeah. My mental picture is "thing with return/pure -> thing with fmap -> thing
with join". The second is a pointed functor, the third is monad. Not sure what
the first is, probably because it is nearly useless :)

~~~
eru
You may have it backward. The mathematicians use "thing with fmap (functor) ->
thing with return/pure (pointed functor) -> thing with join (monad)", as far
as I know.
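
As a rough sketch of that progression in Haskell types (base has no
standalone Pointed class, so that layer is only implicit):

```haskell
import Control.Monad (join)

-- the three layers, as types:
--   functor:         fmap :: (a -> b) -> f a -> f b
--   pointed functor: pure :: a -> f a
--   monad:           join :: m (m a) -> m a
-- and (>>=) is recoverable from fmap plus join:
bindVia :: Monad m => m a -> (a -> m b) -> m b
bindVia m f = join (fmap f m)
```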

Arrows are another generalization of Monads that goes in a direction different
from functors.

------
Xichekolas
I was bothered by him equating closures and thunks. While they are kind of
similar (a thunk is almost like a closure with no arguments), they aren't
really the same thing.

~~~
eru
Careful lying may aid in explaining. [1] The author knows the difference.

[1] Can anybody find the source of this quote?

~~~
jsyedidia
"The author feels that this technique of deliberate lying will actually make
it easier for you to learn the ideas." --- Donald E. Knuth, The TeXbook

~~~
eru
Thanks! That's what I was searching for.

------
stralep
Nice article, but Haskell's type system is not a logical limit.

I would say that Coq's type system (or some other Martin-Löf type theory)
would be a logical limit.

(Is there an IO monad in Coq?)

Again, nice article.

~~~
camccann
_(is there IO Monad in Coq?)_

To the best of my knowledge, no. If memory serves me, Coq actually isn't even
Turing-complete, being a typed lambda calculus with no general fixpoint
operator (specifically, a variant on the Calculus of Constructions). Any
"program" that "type checks" in Coq is thus provably terminating.

Note that, in Haskell terms, the simplest general fixpoint operator is:

    
      y :: (t -> t) -> t
      y f = f (y f)
    
...which, interpreting "->" as logical implication, is a theorem stating that
a proposition implying its own truth implies its own truth. Given that (t ->
t) is clearly a tautology, this suffices to prove all possible propositions,
also known as the "principle of explosion". [0]
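
Despite its dubious logical status, y is a perfectly usable Haskell program
thanks to laziness; a quick illustrative use of my own:

```haskell
-- the general fixpoint operator from above
y :: (t -> t) -> t
y f = f (y f)

-- tying the recursive knot: factorial without explicit recursion
factorial :: Integer -> Integer
factorial = y (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```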

On the other hand, see <http://ynot.cs.harvard.edu/> for an attempt to add
side-effects and general recursion to Coq in a controlled fashion, in order to
turn it into more of a programming language.

[0] <http://xkcd.com/704/>

------
mnemonicsloth
_Haskell pushes [static typing] to some sort of logical limit._

I think that distinction should go to Qi. Derived from Common Lisp, but with a
type system that's Turing-complete.

<http://www.lambdassociates.org/doc.htm>

~~~
Periodic
I believe GHC's type system is also Turing complete. As an example, you can
implement BrainFuck in the GHC type system. I haven't looked over the details,
but it looks promising.

<http://killersmurf.blogspot.com/2009/11/typefuck.html>

~~~
camccann
Type-level computation is actually quite limited by default in GHC; it becomes
quasi-Turing-complete only when using the UndecidableInstances extension, and
even then it has a very shallow stack by default, though you can increase it
if you want deeper recursion. If memory serves me, Oleg Kiselyov has a type-
level implementation of the untyped lambda calculus, which is rather more
pleasant than brainfuck.
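
A small taste of the kind of thing involved (my example, far short of lambda
calculus or brainfuck): type-level Peano arithmetic via multi-parameter
classes and functional dependencies, where the recursive Add instance fails
GHC's coverage condition and so genuinely requires UndecidableInstances.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Peano numerals at the type level
data Z   = Z
data S n = S n

-- the fundep "a b -> c" makes GHC compute c from a and b
class Add a b c | a b -> c where
  add :: a -> b -> c
instance Add Z b b where
  add _ b = b
instance Add a b c => Add (S a) b (S c) where
  add (S a) b = S (add a b)

-- reflect a type-level numeral back to an ordinary Int
class ToInt n where
  toInt :: n -> Int
instance ToInt Z where
  toInt _ = 0
instance ToInt n => ToInt (S n) where
  toInt (S n) = 1 + toInt n
```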

------
andrewcooke
one odd thing reading through was the comment that _A benefit of lazy
evaluation is that you can write code like an Icon or Python generator that
spins forever, returning a value on each cycle, and it will only be called for
the exact number of returns that the enclosing program actually needs even if
the caller is itself a generator._

that sounds to me like it's making a contrast with icon and python generators.
but, as far as i understand things, that's just the same as icon + python.
also, "even if..." was emphasised, and i don't understand why.

i suspect i've misread it or am missing something. would someone be kind
enough to set me straight? thanks...
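
for what it's worth, the behaviour the article describes looks like this in
haskell (my sketch; the contrast with icon/python is arguably just that
laziness is pervasive rather than opt-in):

```haskell
-- an "infinite generator": conceptually spins forever
naturals :: [Integer]
naturals = [0 ..]

-- a generator consuming a generator
doubled :: [Integer]
doubled = map (* 2) naturals

-- only five elements of either list are ever computed
firstFive :: [Integer]
firstFive = take 5 doubled
```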

------
Auzy
Any reason why Haskell is so popular here? I spoke to people on the Prgmr IRC
channel and they were wondering too. I've learnt Lisp in the past, but I have
to wonder what benefits Haskell really offers that make it so popular
currently.

