The purpose of the technical problem was to more directly assess candidates' ability to write Haskell programs that can be used to solve real-world problems, where memory usage and performance are important. The problem was all about evaluation order and memory behaviour. We started by asking candidates to look at a short program and say what shape they would expect the heap profile to be. That would then lead on to a discussion of what things are evaluated at what stage and how much memory they are taking in the meantime. For the final step we asked candidates to rewrite the program to run in constant space. We felt overall that the technical problem was quite useful and we allowed it to become a significant factor in our final decision-making process.
The choice of problem is based on our belief that a good understanding of evaluation order is very important for writing practical Haskell programs. People learning Haskell often have the idea that evaluation order is not important because it does not affect the calculated result. It is no coincidence that beginners end up floundering around with space leaks that they do not understand.
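The canonical illustration of this point (my sketch, not the actual interview program) is the lazy left fold, whose heap profile grows linearly as unevaluated thunks pile up, versus its strict counterpart, which runs in constant space:

```haskell
import Data.List (foldl')

-- Lazy left fold: builds a chain of (+) thunks, so the heap profile
-- grows linearly with the list length before the final demand
-- collapses it all at once.
leakySum :: [Int] -> Int
leakySum = foldl (+) 0

-- Strict left fold: the accumulator is forced at every step,
-- so the fold runs in constant space.
constantSpaceSum :: [Int] -> Int
constantSpaceSum = foldl' (+) 0

main :: IO ()
main = print (constantSpaceSum [1 .. 1000000])
```

Both functions compute the same result, which is exactly why beginners assume evaluation order doesn't matter; the difference only shows up in the heap profile.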
I've written a lot of C++ and a fair amount of Haskell, and this kind of reasoning about the evaluation order and space usage of a Haskell program is no less of a black art than pointer arithmetic or manual memory management in C++. In both cases, the learning curve is too steep just to be able to write practical programs, which is why garbage-collected languages have displaced C++ in many domains.
It's particularly damning to say that "good understanding of evaluation order is very important for writing practical Haskell programs" because the easiest evaluation order to understand is strict evaluation order. In that sense, lazy evaluation is to strict evaluation as manual memory management is to garbage collection, except that the latter is supposed to be an improvement on the former.
The real benefit of lazy evaluation is that it decouples generation from consumption in a very thorough way. Unfortunately it often doesn't scale up, as space leaks attest. I'm starting to think that, in the large, it should be feasible to replace it with CSP-like channels; though this essentially hand-writes what the runtime does for you, it gives far more fine-grained control.
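A minimal sketch of that channel idea, using `Control.Concurrent.Chan` (note that `Chan` is unbounded, so a real replacement for laziness would want a bounded queue to cap memory; this only shows the decoupling):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (replicateM)

-- The producer and consumer communicate through an explicit channel
-- instead of a lazy list, so the point at which each element is
-- generated and consumed is under the programmer's control rather
-- than the evaluator's.
main :: IO ()
main = do
  chan <- newChan
  _ <- forkIO (mapM_ (writeChan chan) [1 .. 10 :: Int])
  xs <- replicateM 10 (readChan chan)
  print (sum xs)
```

With a lazy list, `sum [1..10]` gives the same decoupling implicitly; the channel version makes the hand-off explicit, which is the "boilerplate for control" trade being described.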
I'd also say that automatic memory management is similar. The downside to manual memory management is not the debugging difficulties per se, but the fact that you have to decide who allocates and who deallocates, and that these decisions are highly coupled and can greatly constrain the choice of feasible data structures.
I was bitten by space leaks once or twice, but my programs were still useful to my colleagues despite those errors.
One good example is a model of a CPU. A space leak made it slow down quadratically: simulation_speed = O(1/simulated_time^2). It took a second for 5,000 cycles and a couple of hours for 100,000. But all the interesting effects of CPU execution (the inner loops of various use cases) could be discovered within 5,000-20,000 cycles, so you only had to wait a minute or so to see what was good and what was not.
If anything, I would say strict : lazy :: manual memory management : poorly optimized GC.
The compiler should be able to reorder the structure of expression expansion.
Replace "lazy" with "non-strict" and I think you're on to something. Strict and lazy both specify a particular evaluation order, but you are talking about leaving the evaluation order unspecified and letting the compiler or runtime decide which evaluation order is best.
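For concreteness (my sketch of the distinction): Haskell's semantics are merely non-strict, and lazy call-by-need is one implementation of them. The programmer can pin down evaluation order locally where it matters, e.g. with a bang pattern:

```haskell
{-# LANGUAGE BangPatterns #-}

-- The default evaluation is lazy, but the bang pattern forces the
-- accumulator at each step, recovering strict-language behaviour
-- for this one loop without changing the result.
strictLength :: [a] -> Int
strictLength = go 0
  where
    go !n []       = n
    go !n (_ : xs) = go (n + 1) xs

main :: IO ()
main = print (strictLength (replicate 1000000 ()))
```

An evaluation-order optimizer of the kind being proposed would, in effect, insert such annotations itself wherever strictness analysis shows they cannot change the answer.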
I love today's GC, which allows me to mostly forget about memory management, but if GC still meant waiting hours for a collection I would be more inclined to manage my own memory.
Clojure's lazy sequences, combined with doall to force strict evaluation when you want it, seem like a nice compromise for now, but will likely seem archaic when a good evaluation-order optimizer is developed.
-Simon Peyton Jones
The latest research fads in CS aren't always beneficial.
-Simon Peyton Jones
True, he does say that, but the slides you reference show a more nuanced position than that isolated quote implies.
Insisting on laziness forced the Haskell designers not to add side effects, eventually resulting in monads (slides 22 and 23), which ended up with Haskell programs having a "thin imperative shell over a fat pure core" internal structure. (slide 32).
SPJ then concludes that (1) purity is more important than laziness, (2) the next ML will be pure, with effectful programming only via monads, and (3) the next Haskell will be strict, but still pure (slides 39 and 40). (The differences between "the next ML" and "the next Haskell" are worth pondering.)
It is through insisting on laziness as the default and solving some of the problems encountered (e.g. via monads) that he arrived at the position that "the next Haskell will be strict". He also notes some open problems with adding laziness to a statically typed strict language (slide 40: "Still unclear exactly how to add laziness to a strict language. For example, do we want a type distinction between (say) a lazy Int and a strict Int?").
So yes, you are right that "The latest research fads in CS aren't always beneficial," but in this case it seems to have been beneficial.
As someone maintaining a 20k+ line Haskell codebase at my "day job" ("20,000 lines of Haskell is enough to bootstrap SkyNet" - OH on Twitter), I would appreciate a strict-by-default Haskell (with an ML-like module system too, while you are at it, thanks in advance!), but really, Haskell is certainly more "beneficial" than any other language I've seen to date for certain types of programs.
Not contradicting your position, just qualifying it a bit.
Could you unpack that? I'd love to hear what types of programs Haskell is phenomenal at, in comparison to others.
As for Haskell the language vs others in general, I personally like the combination of high abstraction, static typing and raw speed. Besides (like lisp?) it seems to be a productivity amplifier for very small teams of very sharp people, which is my situation.
It would be great, though.