

IO evaluates the Haskell heap: in pictures - dons
http://blog.ezyang.com/2011/04/io-evaluates-the-haskell-heap/

======
joelburget
As someone who is relatively familiar with evaluation in Haskell I'm enjoying
this series, but if I didn't already understand most of it I doubt I would get
anything out of it. Real World Haskell has a more concrete chapter on
profiling: <http://book.realworldhaskell.org/read/profiling-and-optimization.html>.
The Haskell wikibook also has a fairly comprehensive chapter on laziness:
<http://en.wikibooks.org/wiki/Haskell/Laziness>.

~~~
ezyang
It's an interesting charge. Here’s my personal experience with the two cited
resources, back when I was learning Haskell. YMMV.

RWH’s “Here’s how you get the job done” style exposition did a reasonably good
job of making me feel comfortable turning on profiling of my Haskell programs,
but didn’t give me much sense of why the particular examples they chose
were representative of space leaks. I went on to do some amount
of performance tuning of some data structures
<http://blog.ezyang.com/2010/03/the-case-of-the-hash-array-mapped-trie/>
... but I still managed to miss some really obvious performance bugs that
Tibell caught later when he revamped my benchmarks
<http://blog.johantibell.com/2011/03/video-of-my-hashing-based-containers.html>.
I just didn’t know where to look! All I had were a bunch of
patterns: "watch out for lazy folds", "using bang patterns can be a good
thing", without any particular mental model of how to deduce these things from
just looking at code.
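
The "lazy folds" pattern, for instance, comes down to something like this (a minimal sketch; `foldl` defers the additions as thunks, while the strict `foldl'` from `Data.List` forces the accumulator at each step):

```haskell
import Data.List (foldl')

-- foldl builds a chain of unevaluated (+) thunks,
-- (((0 + 1) + 2) + 3) ..., which only collapses at the very end;
-- on a long list this can overflow the stack.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator before each step,
-- so it runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])
```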

To be honest about the Wikibooks article... I found it hard to read. (Which is
terrible, since it's a wiki, and if I don't like it, I should improve the
article...) Exactly why is hard to pinpoint. It might be statements like
"Haskell values are highly layered"; it might be the fact that none of the
examples are in motion (they don’t show before-and-after; the actual operation
of the program!); it might be that in many respects, its coverage is
incomplete.

Obviously, this series isn’t going to be the be-all, end-all reference for
laziness. While it would be extremely gratifying to see other people take up
the ghosts and presents metaphor, I’ll be content if people walk away with a
sort of visceral feeling about all the churning that goes on under the hood
when they write a functional program, so that it’s _not_ surprising when a
trace statement shows up much later than you expect, and so that inserting
bang-fields and seqs is _not_ a matter of trial and error.
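
That late trace statement is easy to reproduce with `Debug.Trace` (a minimal sketch: `trace` fires only when its thunk is finally forced, not where the binding is written):

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- x is a thunk; trace prints nothing at this point.
  let x = trace "thunk forced!" (2 + 2 :: Int)
  putStrLn "before forcing x"
  -- Only here, when print demands x, does "thunk forced!"
  -- appear (on stderr), after the line above.
  print x
```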

~~~
joelburget
Really, my reason for saying I wouldn't learn anything is that I personally
learn better from concrete examples. That's why I like the RWH chapter. It
gives you knowledge of how GHC represents your program, and helps you use that
to change your program accordingly. Perhaps I should have reserved my
criticism for the end of the series. Hopefully you will show us how to apply
your model to our real programs. For instance, I think it would be neat to
have a simple example of, say, folding a list. Show us what happens when we
make a field strict. Show why `True || undefined` doesn't blow up. Just some
suggestions.
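
For what it's worth, the `True || undefined` case falls straight out of the definition of `(||)`, which pattern-matches only on its first argument, so the second thunk is never demanded:

```haskell
-- (||) is defined roughly as:
--   True  || _ = True
--   False || y = y
-- With a True first argument the second argument is never
-- pattern-matched, so the undefined thunk is never forced.
main :: IO ()
main = print (True || undefined)
```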

I agree with you on the wikibooks article. I included it so someone reading my
comment would have a place to go for the rest of the story that RWH doesn't
talk about.

(By the way, you might want to reformat your links; HN seems to have
misinterpreted them.)

~~~
ezyang
Good suggestions, and I hope to do them.

------
hendzen
I think the section of the Wizard book (SICP) on streams may also prove
helpful to those trying to gain a deeper understanding of lazy evaluation.

<http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-24.html#%_sec_3.5>

