

Write Haskell as fast as C: exploiting strictness, laziness and recursion - rw
http://cgi.cse.unsw.edu.au/~dons/blog/2008/05/16 

======
dons
BTW, I moved the articles to a different host last year,

<http://donsbot.wordpress.com/2008/05/06/write-haskell-as-fast-as-c-exploiting-strictness-laziness-and-recursion/>

and the follow-on using fusion,

<http://donsbot.wordpress.com/2008/06/04/haskell-as-fast-as-c-working-at-a-high-altitude-for-low-level-performance/>

~~~
tumult
Way cool, thanks :]

------
michaelneale
I am happy to see UNSW powering on in the FP world. Waaay back when I went
there, they tried, valiantly, to teach us grubby engineer hacker types the
virtues of purity (via Miranda), but I tried to ignore it and wanted to get to
the good stuff that let me flip bits on a serial port. But now I am grateful
for someone at least trying to "show me the way" so early on (and over time, I
think it paid off).

------
rw
Ah, was posted previously: <http://news.ycombinator.com/item?id=190963>

Still, an awesome analysis that traverses the code hierarchy from assembly to
Haskell.

------
three14
I personally only care about optimizations that enable someone who's a domain
expert in _something else_ to write code that performs well. If I know Haskell
at an intermediate level, and I'm focusing on the equations for protein
folding, or game mechanics, or rocket trajectories, will I still be able to
write code that performs well? Or do I need to be a Haskell expert first?

~~~
rw
Wasn't that what the author was getting at? The DSL for including
rewrite/optimization rules in the compiler is actually accessible to the "mere
mortal".

~~~
three14
Yes and no. It's great that it's available. I'm just not seeing that it
implies a workable routine for Joe Programmer.

Write code. Run. Runs out of memory after 3 hours of protein folding. Check
the GHC-processed code. Discover that you need to rewrite 5 functions. The
premise is that your mental model wasn't good enough to predict this without
looking at GHC's output, so rewrite, run through GHC, stare at code, rewrite
again.
If that whole cycle is necessary, then this does not sound workable. But I'd
be thrilled to learn that I'm missing something.
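
(To make that concrete, the kind of rewrite I have in mind is something like
the sketch below. It has nothing to do with real protein folding code; it's
just the classic case where a lazy left fold quietly builds a chain of thunks,
and the fix is a strict accumulator.)

    {-# LANGUAGE BangPatterns #-}

    import Data.List (foldl')

    -- Leaks: foldl is lazy in its accumulator, so this builds the thunk
    -- (((0 + x1) + x2) + ...) and only collapses it at the very end.
    meanLeaky :: [Double] -> Double
    meanLeaky xs = foldl (+) 0 xs / fromIntegral (length xs)

    -- Fixed: foldl' plus bang patterns force the running sum and count
    -- at every step, so the fold runs in constant space.
    mean :: [Double] -> Double
    mean xs = s / fromIntegral n
      where
        (s, n) = foldl' step (0, 0 :: Int) xs
        step (!acc, !len) x = (acc + x, len + 1)

Nothing in the source of meanLeaky tells you it will fall over; you find out
from a heap profile or from staring at the compiler's output, which is
exactly the cycle I'm worried about.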

~~~
chancho
To be fair, no language lets you just "write code" and have it run anywhere
near optimally. The programmer always has to be aware of what the compiler is
going to do and work with it.

I've written a fair amount of Haskell, and done a lot of this type of
optimization, looking at the core (intermediate representation) and fiddling
with pragmas, unboxing, strictness, etc. I've put Haskell on the back burner
and gone back to C++ because I spent SO MUCH time doing this. Maybe I'm not
being fair, and in another 5 years I'd be able to predict what GHC is going to
do as well as I can predict what a C++ compiler will do. But for now, I can't
justify the expenditure in time it takes to get a large piece of Haskell code
(more than a microbenchmark) to perform at the C/C++ level. If you write
idiomatic Haskell, you're tracking down stack overflows (the Haskell kind, not
the C kind) and misfired optimizations, and if you're writing C-in-Haskell,
you're better off just writing C. In the end, I think Haskell just isn't (yet)
the best language for what I was doing (image processing, graphics, 3D
modeling, etc.).

(The short reply to your comment is: yes, you the application writer have to
be very aware of what GHC is doing if you want C-level performance.)
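
Concretely, that kind of tuning looks something like this (a made-up sketch
of the style, not code from any of my projects; Vec3 is just an example
name):

    module Vec3 where

    -- Strict, UNPACKed fields keep the Doubles stored flat inside the
    -- constructor instead of behind pointers to possible thunks.
    data Vec3 = Vec3 {-# UNPACK #-} !Double
                     {-# UNPACK #-} !Double
                     {-# UNPACK #-} !Double

    -- With strict fields and an INLINE pragma, GHC at -O2 will usually
    -- compile this down to plain unboxed floating-point arithmetic.
    {-# INLINE dot #-}
    dot :: Vec3 -> Vec3 -> Double
    dot (Vec3 a b c) (Vec3 x y z) = a*x + b*y + c*z

Then you compile with something like

    ghc -O2 -c -ddump-simpl Vec3.hs

and read the Core to check that the Doubles really stayed unboxed. That
reading-and-rereading step is where all the time goes.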

~~~
three14
Thank you. That was the impression I got. I'm still curious to hear if anyone
else had a different experience.

I'm not so much looking for what it takes to get code running optimally as
for what it takes to have idiomatic code run with "reasonable" performance.

