Haskell is very hard. Even after 3 years of pretty intensive use, I never really felt productive with Haskell in the way I've felt in other languages. Anything I did in Haskell required a lot of up-front thinking and tinkering in the REPL to make sure the types all agreed, and it was rather time-consuming to assemble a selection of data types and functions that mapped adequately onto the problem I was trying to solve. More often than not, the result was an unsightly jumble that took a long time to convert into nicer-looking code or into a less terrible API.
I built an underwater ROV control system in Haskell in 2010, which went well enough, but I had to tinker with the RTS scheduling constantly to keep the more processor-hungry threads from starving the others of CPU. The system worked, but I had no idea what horrors GHC was performing on my behalf.
Later I built the first prototype of my startup in Haskell, but the sheer volume of things that I didn't know kept getting in the way of getting stuff done in a reasonable time frame. Then we started to incrementally phase out Haskell in favor of Node.
I write a lot of Node.js now and it's really nice. The whole runtime system easily fits into my head all at once, and the execution model is simple and predictable. I can also spend more time writing and testing software and less time learning obscure theory just so that the libraries and abstractions I use make sense.
The point in the article about haskell being "too clever for this benchmark" sums up haskell generally in my experience.
I started out maybe 5 years ago following tutorials, reading up on all the metaphors about Monads and doing project Euler problems.
After a while I started to tackle some small web related things with Haskell and had exactly your experience of running into a lack of understanding of how the system works and wrapping my head around functional datatypes.
I pretty much gave up on Haskell as a practical language at that point, but something kept me coming back once in a while.
Then at a point I had a use for making a small web service fast and the Node prototype I made performed badly and crashed in spectacular ways under high loads. I found Snap and made a quick prototype in Haskell. At that point the experience of years of small experiments must finally have made something click. In a very short time I had a very fast service using almost no memory. It's deployed in production (as a part of http://www.webpop.com) and has been extremely stable.
By now I think I've crossed some kind of barrier and feel like I'm both being productive and having fun when writing Haskell, but it really didn't come easy to me, and, all else being equal, my experience tells me that a good deal of my colleagues would have an even harder time.
Most languages, even Lisps, are somewhat tolerant of 'programming by guessing' for beginners: usually you write terrible code that works, learn more, and see what you did wrong. Haskell is very unforgiving of this; if you don't understand why your code works, it probably won't.
While I long ago gave up PUI (Programming Under the Influence), I still occasionally do some in Haskell. After a litre of beer I am pretty dumb, but I can follow clues from the compiler to get something working.
Most of the time, it still works the next day, when I'm sober. That's in contrast with C/C++. Scripting languages give some power like that, but I can screw myself with them much more violently.
In my humble opinion, Haskell is the language of choice for drunken programmers.
- Reach a point in a complex application where it becomes hard to reason what laziness will do to performance.
- End up in type hell. E.g., some libraries make extensive use of existential quantification of type variables. Before you know it, you are chasing "type variable x would escape its scope"-style error messages in perfectly fine-looking code.
- Pattern matching is nice, but if you use it extensively, adding an argument to a constructor is a lot of work.
- No one uses the same data type for common things. For textual data alone, there are String, Data.Text, Data.Text.Lazy, Data.ByteString, and Data.ByteString.Lazy. These days there is more or less consensus on when to use which type, but you still end up converting things a lot. There are also kinds of data for which no consensus has been reached yet (e.g. lenses).
- Artificially pure packages. There are some packages that link to C libraries but (forcefully) provide a pure interface. (In other words: purity is just a convention.)
- For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.
- Using functions with worse time or space complexity, to maintain purity.
- I/O looks simple, but for predictable and safe I/O you'd usually end up using a library for enumerators. Writing enumerators, enumeratees, and iteratees is unintuitive and weird, especially compared to (less powerful) iterators/generators in other languages.
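To give a flavour of the last point, here is a deliberately simplified, toy iteratee. This is not the actual API of the enumerator package, just a sketch of the idea: a stream consumer is either finished with a result or waiting for the next input, where `Nothing` signals end of stream.

```haskell
-- A toy iteratee: Done carries a result, Cont waits for the next chunk.
data Iteratee a b
  = Done b
  | Cont (Maybe a -> Iteratee a b)

-- Drive an iteratee with a list, then signal end of stream.
run :: Iteratee a b -> [a] -> b
run (Done b) _        = b
run (Cont k) []       = case k Nothing of
                          Done b -> b
                          Cont _ -> error "iteratee demanded input after EOF"
run (Cont k) (x : xs) = run (k (Just x)) xs

-- An iteratee that counts how many elements it was fed.
counter :: Iteratee a Int
counter = go 0
  where
    go n = Cont step
      where
        step (Just _) = go (n + 1)
        step Nothing  = Done n
```

The inversion of control (the producer pushes chunks into the consumer) is what makes real iteratee code feel backwards compared to pull-style iterators.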
Learning Haskell is something I'd certainly recommend. It provides a glimpse of how beautifully mathematical programs could be in a perfect world. Unfortunately, the world is not perfect, and even Haskell needs a lot of patchwork to deal with it.
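The pattern-matching point above is easy to illustrate with a small, hypothetical example (the `Shape` type and its functions are made up for this sketch): every function that matches on a constructor has to be edited by hand when that constructor gains a field.

```haskell
-- Hypothetical type: if Circle later gained a centre point, e.g.
--   Circle (Double, Double) Double
-- every `Circle r` pattern below (and across the codebase) would break
-- and need editing. Record syntax or wildcards soften, but don't remove,
-- this cost.
data Shape
  = Circle Double          -- radius
  | Rect Double Double     -- width, height

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

perimeter :: Shape -> Double
perimeter (Circle r) = 2 * pi * r
perimeter (Rect w h) = 2 * (w + h)
```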
Explain? What would the alternative be?
> Using functions with worse time or space complexity, to maintain purity.
This seems like the opposite of your previous complaint.
> For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.
This seems to be a "psychological problem" with Haskell: the idea that because Haskell supports declarative, it's not OK to be imperative. It makes beginners tear their hair out looking for 'do'-free solutions when they could just use 'do'. C.f., "Lambda: the Ultimate Imperative" (and the rest of that series of LtU papers) http://dspace.mit.edu/handle/1721.1/5790
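There is nothing magical to avoid: do-notation is purely mechanical sugar for `>>=` and `let`. These two definitions are the same program, and GHC desugars the first into the second:

```haskell
-- With do-notation:
withDo :: IO Int
withDo = do
  let x = 2
  y <- pure (x + 1)
  pure (y * 10)

-- The desugared equivalent:
withoutDo :: IO Int
withoutDo =
  let x = 2
  in  pure (x + 1) >>= \y -> pure (y * 10)
```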
Box the value that results from evaluating an expression that calls impure code in IO?
This is what I'd expect for calling impure code in third-party libraries.
unUnsafePerformIO :: a -> IO a
unUnsafePerformIO = return
ezyang@ezyang:~$ cat Test.hs
import System.IO.Unsafe (unsafePerformIO)

unUnsafePerformIO :: a -> IO a
unUnsafePerformIO = return

main = do
    let a = unUnsafePerformIO (unsafePerformIO (putStrLn "boom"))
    return ()
ezyang@ezyang:~$ runghc Test.hs
(No output: the unsafePerformIO thunk is never demanded, so "boom" never prints; and wrapping it in return certainly doesn't make the effect safe or predictable again.)
Your other concerns don't seem too worrisome to me. Type hell doesn't happen very much, though there are some libraries that really like their typeclass-based APIs (hello, Text.Regex.Base) which can be nearly impossible to decipher without some documentation of what the author was thinking ('I wanted to be able to write let (foo, bar) = "foobar" =~ "(foo)(bar)" in ...').
The data type stuff can be confusing for people used to other languages, where the standard library is "good enough" for most everything people want. A good example is Perl, which uses "scalar" for numbers, strings, Unicode strings, byte vectors, and so on. This approach simply doesn't work for Haskell, because Haskell programmers want speed and compile-time correctness checks. That means that ByteString, Text, and String are different concepts: ByteString supports efficient immutable byte vectors, lazy ByteStrings add fast appends, Text adds support for Unicode, and String is a lazy list of Haskell characters.
All of those types have their use cases; for a web application, data is read from the network in terms of ByteStrings (since traditional BSD-style networking stacks only know about bytes) and is then converted to Text, if the data is in fact text and not binary. Your text-processing application then works in terms of Text. At the end of the request cycle, you have some text that you want to write to the network. In order to do that, you need to convert the Unicode character stream to a stream of octets for the network, and you do that with character encoding. The type system makes this explicit, unlike in other languages where you are allowed to write internal strings to the network. (It usually works since whatever's on the other end of the network auto-detects your program's internal representation and displays it correctly. This is why I've argued for representing Unicode as inverse-UTF-8 in-memory; when you dump that to a terminal or browser, it will look like the garbage it is. But I digress.)
I understand that people don't want to think about character encoding issues (since most applications I use are never Unicode-clean), but what's nice about this is that Haskell can force you to do it right. You may not understand character sets and character encodings, but when the compiler says "Expected Data.ByteString, not Data.Text", you find the Text -> ByteString function, encodeUtf8, and it all works! You have a correct program!
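Concretely, the boundary crossings look like this. The functions `encodeUtf8` and `decodeUtf8` are the real names from Data.Text.Encoding; the `toWire`/`fromWire` helper names are made up for this sketch:

```haskell
import qualified Data.ByteString as B
import qualified Data.Text as T
import Data.Text.Encoding (encodeUtf8, decodeUtf8)

-- Text -> ByteString: you must choose an encoding to get bytes.
toWire :: T.Text -> B.ByteString
toWire = encodeUtf8

-- ByteString -> Text: assumes the bytes are valid UTF-8 (throws otherwise).
fromWire :: B.ByteString -> T.Text
fromWire = decodeUtf8

-- And the String <-> Text hops mentioned earlier in the thread:
fromString :: String -> T.Text
fromString = T.pack

toString :: T.Text -> String
toString = T.unpack
```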
With respect to purity: purity is a guarantee that the compiler tries to make for you. When you load a random C function from a shared library, GHC can't make any assumptions about what it does. As a result, it puts it in IO and then treats those computations as "must not be optimized with respect to evaluation order", because that's the only safe thing it can do. When you are writing an FFI binding, though, you may be able to prove that a certain operation is pure. In that case, you annotate the operation as such ("unsafePerformIO"), and then the compiler and you are back on the same page. Ultimately, our computers are a big block of RAM with an instruction pointer, and the lower you go, the more the computer looks like that. In order to bridge the gap between stuff-that-Haskell-knows-about and stuff-that-Haskell-doesn't-know-about, you have to think logically and teach the runtime as much about that thing as you know. It's hard, but the idea is that libraries should be hard to write if that makes applications easier to write. If everyone were afraid to make purity annotations, then everything you ever did would be in IO, and all Haskell would be is a very nice C frontend.
For a lot of code you end up using monads plus 'do' notation, making your programs look practically imperative, but an oddball variation of it.
That's really just an opinion, rather than any objective fact about the language. I find that do-notation saves typing from time to time, so I use it. Sometimes it clouds what's going on, so I don't use it. That's what programming is; using the available language constructs to generate a program that's easy for both computers and humans to understand. Haskell isn't going to save you from having to do that.
Using functions with worse time or space complexity, to maintain purity.
ST can largely save you from this. A good example is Data.Vector. Sometimes you want an immutable vector somewhere in your application (for purity), but you can't easily build the vector functionally with good performance. So you do an ST computation where the vector is mutable inside the ST monad and immutable outside. ST guarantees that all your mutable operations are done before anything that expects an immutable vector sees it, and thus that your program is pure. Purity is important at the big-scale level, but it's not as important at the one-lexical-scope level. Haskell lets you be mostly-pure without much effort; other languages punt on this by saying "nothing can ever be pure, so fuck you". I think it's a good compromise.
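This build-mutably-then-freeze pattern is worth seeing once. The sketch below uses the boot `array` package (so it is self-contained) rather than Data.Vector, but it is the same idea as vector's `create`:

```haskell
import Control.Monad (forM_)
import Data.Array.ST (runSTUArray, newArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- Build an immutable array by mutating a temporary one inside ST.
-- The in-place writes are invisible from outside: runSTUArray only
-- hands back the frozen, immutable result.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n - 1) 0          -- mutable STUArray, zero-initialised
  forM_ [0 .. n - 1] $ \i ->
    writeArray arr i (i * i)            -- destructive update, safely scoped
  return arr
```

The `forall s.` in runSTUArray's type is what stops the mutable array from escaping the computation, which is exactly the purity guarantee being discussed.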
I/O looks simple, but for predictable and safe I/O you'd usually end up using a library for enumerators. Writing enumerators, enumeratees, and iteratees is unintuitive and weird, especially compared to (less powerful) iterators/generators in other languages.
IO is hard in any language. Consider a construct like Python's "with":
with open('file') as file:
    contents = file.read()
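Haskell has the same shape: `withFile` guarantees the handle is closed even if the body throws, and it is nothing more than `bracket` specialised to handles. A sketch (the `readWhole` and `withFile'` names are made up here):

```haskell
import System.IO
import Control.Exception (bracket)

-- Analogue of Python's `with open(...)`: read a whole file, with the
-- handle released no matter what.
readWhole :: FilePath -> IO String
readWhole path = withFile path ReadMode $ \h -> do
  s <- hGetContents h
  length s `seq` return s   -- force the lazy contents before the handle closes

-- withFile itself is just bracket: acquire, release, use.
withFile' :: FilePath -> IOMode -> (Handle -> IO r) -> IO r
withFile' path mode = bracket (openFile path mode) hClose
```

The `length s \`seq\`` line is the classic gotcha: lazy IO means the contents must be forced before the handle goes away, which is part of why the stricter enumerator-style libraries exist.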
As for jumping in too quickly? I was a pretty heavy Haskell user for about 3 years.
Moreover, I was very proficient in OCaml before I discovered Haskell, and it just spoiled me. It has all of Haskell's qualities that matter (type inference, algebraic data types, a naturally functional mindset) without the parts you regularly have to fight (mandatory monads and monad transformers, algorithmic complexity in a lazy context, tedious interfacing with the underlying OS).
If you felt like Haskell had many amazing qualities, spoiled by a couple of unacceptable flaws, especially when it comes to acknowledging how the real world works, I'd suggest that you give a try to OCaml. You should be proficient with it within a couple of days.
Please take a look at doing real-world, productive web development with Yesod. http://www.yesodweb.com
You are still going to take a productivity hit in Haskell due to lack of libraries in comparison to Ruby, Python, etc. So the practical reason for using Haskell today is to take advantage of the amazing performance, take advantage of Haskell non-web libraries in the backend, or for a high assurance project where its type system can rule out most common web development bugs.
Oh, and Yesod is even faster than the mentioned Snap framework, which is already much faster than Node (and, unlike Haskell, Node does not scale to multiple cores). Although Yesod isn't going to automatically cache the Fibonacci sequence for this artificial benchmark, because in the real world I have never once been tasked with writing code like that for a web application.
Reasoning about laziness? Polymorphism that can only be implemented using existential types plus Typable? Even purity is a double-edged sword (some algorithms are inherently mutable). Some of Haskell's problems in real-life projects can definitely be attributed to the language itself.
So the practical reason for using Haskell today is to take advantage of the amazing performance,
My experience with everything from simple checksum functions to parameter estimators (ML) is that Haskell is generally 2-10x slower than C (even when introducing strictness where necessary, unboxing constructors, etc.). So, in practice you'll often end up doing the heavy lifting in C anyway (whether it is a database server or a classifier that works in the background), and in the end it doesn't matter so much performance-wise whether you use Haskell or a dynamic language if a significant amount of time is spent processing requests.
where its type system can rule out most common web development bugs
Right, this is where Haskell currently has an edge, because it does not only make it easy to make DSLs (as e.g. Ruby), but typechecks everything as well.
oh, and Yesod is even faster than the mentioned Snap framework which is already much faster than Node
Yes, but the benchmarks you implicitly point to (the pong benchmark) is very synthetic and says fairly little about real-life use. Until we see Snap and Yesod more in production, the jury is still out.
 Sure, you can do quicksort in the ST monad, but it will require a lot of unnecessary copying.
I don't think the Pong benchamark http://www.yesodweb.com/blog/preliminary-warp-cross-language...
is that synthetic - I think it demonstrates concurrency capabilities fairly well. We just have to keep in mind which web applications benefit from high concurrency.
As for raw performance of a single request, I agree that the average web application won't see a great difference for the 80% case. However, for most Ruby web applications that I have worked on I have had to spend time re-writing slow parts of the application because Ruby was truly the bottleneck, and I would have been much better off using almost any compiled language with types.
Ruby applications I have worked on always have more complicated deployments, worse response times, and huge memory usage due to the lack of async IO. Async IO is possible in Ruby & Python, but it still sucks because it is extra work and you have to always be on guard against blocking IO. So I hope we can at least agree that async IO is a big win, and that Haskell & Erlang are the best at async IO because it is built into the runtime and no callbacks are required. And likewise deployment to multi-core is no extra effort in Haskell/Erlang, whereas in Node, Ruby, or Python you will need to load balance across multiple processes that are using more RAM.
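To illustrate "no callbacks are required": blocking code is written in plain direct style on lightweight (green) threads, and the runtime's IO manager does the multiplexing. The `fetchBoth` helper below is a made-up name for this sketch:

```haskell
import Control.Concurrent

-- Run two IO actions concurrently on green threads and wait for both.
-- Each takeMVar blocks only its own lightweight thread, not an OS thread.
fetchBoth :: IO a -> IO b -> IO (a, b)
fetchBoth ioA ioB = do
  va <- newEmptyMVar
  vb <- newEmptyMVar
  _ <- forkIO (ioA >>= putMVar va)
  _ <- forkIO (ioB >>= putMVar vb)
  a <- takeMVar va
  b <- takeMVar vb
  return (a, b)
```

Compare this with callback-style Node code for the same task: here the control flow reads top to bottom, and scaling to multiple cores is a `-threaded -N` runtime flag, not a process-level load balancer.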
I disagree: if the language were strict by default, this would not be an issue. It is a language problem that is forced onto libraries.
However, for most Ruby web applications that I have worked on I have had to spend time re-writing slow parts of the application because Ruby was truly the bottleneck,
My point was that Haskell is often a lot slower than C or C++, so people will rewrite CPU-intensive code anyway. Look at many of the popular Haskell modules where the heavy lifting is done (from compression to encryption): most of them are C bindings. That code will be nearly as fast in Haskell as in, say, Python.
BTW, I am not arguing that Haskell isn't faster than Python, Ruby, Clojure, etc. But for computationally intensive work C/C++ are still the benchmark, and that is what people will use in optimized code, whether the host language is Haskell or Python.
Particularly if a library is forcing you to learn about existential types.
But why is that? Because the language does not support, in an intuitive fashion, the kind of polymorphism that is commonly used. Some applications need containers of mixed types that adhere to an interface, and a commonly-used way to realize this in Haskell is existential types.
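A minimal example of that pattern (the `Showable` and `describeAll` names are hypothetical): a list whose elements may have any type, as long as each supports a common interface.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An existential wrapper: the concrete type of `a` is hidden; all we
-- retain is that it has a Show instance.
data Showable = forall a. Show a => MkShowable a

-- A heterogeneous list can now be processed through the interface only.
describeAll :: [Showable] -> [String]
describeAll xs = [ show a | MkShowable a <- xs ]
```

Once the value is wrapped, there is no way to get the original type back out (short of adding Typeable), which is exactly where the "type variable would escape its scope" errors start appearing.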
we can at least agree that async IO is a big win
And likewise deployment to multi-core is no extra effort in Haskell/Erlang, whereas in Node, Ruby, or Python you will need to load balance across multiple processes that are using more RAM.
Since most modern Unix implementations do COW for memory pages in the child process of a fork, this is not as much of an issue as people make it out to be. The fact that you mention Erlang is curious, since spawn in Erlang forks a process, right? Forking is more expensive than threading, but again, in most applications negligible compared to the handling of the request.
I have not found it to be the case that existential types are commonly needed (and need to be forced on the user). Maybe you are in a different problem domain. I find Haskell's regular polymorphism to work very well for 95+% of my use cases.
Fork is not negligible to handling a request, but pre-forking theoretically could be. In practice, COW fork does not automatically solve multi-core. The Ruby garbage collector is not COW friendly and thus there is little memory savings from COW (unless you use the REE interpreter which has a slower COW friendly garbage collector but saves on memory and GC time). I haven't looked at this for other languages but I assume this is still a limiting issue. Also, you are still stuck doing load-balancing between your processes, which will limit or complicate your deployment. I don't know much about Erlang other than async IO is built into the language, which is why I mention it in the same breath as Haskell.
Is this a substantial headache with Akka, in practice?
it's pretty hard to google akka deployments but:
and in terms of memory overheads and how many erlang process-type things you can spin up:
And quicksort for arrays in ST monad wouldn't copy anything unnecessary.
Actually, I've seen many claims that some algorithms are inherently mutable. So far none stand close scrutiny.
Matrix operations? You better copy intermediate results, that way you'll be safer and faster (parallel algorithms). Good compilers do that behind the curtain (array privatization).
Sorting? Use maps or lists, that way you won't forget something important.
Graph operations? Immutable (inductive) graphs are slower by a constant multiplier and sometimes are faster than their mutable counterparts (tree-based maps are faster for changes than arrays).
The last one is even more amusing when applied to compiler optimizations (i.e., to non-trivial graph algorithms): http://lambda-the-ultimate.org/node/2443 Pure version is less buggy, faster (!) and allows more optimizations.
Sure, you can do merge sort. Except that the list split step in Haskell is O(n) in time, while it is constant when using arrays. As well as merging lists, since you have to 'reattach' the second list as the tail of the first list.
You have to copy the data from whatever representation you had to something that lives in a memory block in the ST monad.
You have probably never read Okasaki...
The rest of your argument proposes that slow is better because of persistence. First, persistence is often not required, second persistence can also be implemented in a mutable language.
Oh, no. You shouldn't split the list by calculating its length.
Try this instead:
-- renamed from even/odd to avoid clashing with the Prelude functions
evens (x:_:xs) = x : evens xs
evens xs       = xs
odds = evens . drop 1
splitList xs = (evens xs, odds xs)
As for merge, see here: http://lambda-the-ultimate.org/node/608?from=0&comments_... The solution there contains a proper merge algorithm.
And yes, I have never read Okasaki in full. But I have used Haskell semi-professionally since 1999 and professionally since 2006.
> Sure, you can do merge sort. Except that the list split step in Haskell is O(n) in time, while it is constant when using arrays. As well as merging lists, since you have to 'reattach' the second list as the tail of the first list.
It's no problem writing a merge-sort in Haskell that uses O(n log n) time. So who cares what the asymptotics of the individual elements of the algorithm are? (You may care about the actual speed of the whole thing and its parts, though.)
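For concreteness, here is one such merge sort on lists. Both the alternate-element split and the merge are O(n) passes, but there are only O(log n) levels of them, so the whole thing is O(n log n):

```haskell
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort ys) (msort zs)
  where
    (ys, zs) = split xs
    -- O(n) split without computing the length: deal elements alternately.
    split (a : b : rest) = let (as, bs) = split rest in (a : as, b : bs)
    split small          = (small, [])
    -- O(n) merge of two sorted lists.
    merge as []          = as
    merge [] bs          = bs
    merge (a : as) (b : bs)
      | a <= b    = a : merge as (b : bs)
      | otherwise = b : merge (a : as) bs
```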
I'm curious where you came across this. In an external library you were using, or in the process of trying to implement some kind of dynamic typing in your own code?
class Cond a l where
    applies :: a -> TreePos Full l -> Bool

data Condition l =
    forall c . (Cond c l, Eq c, Show c, Typeable c) => Condition c
fibServer x = quickHttpServe $ writeBS $ B.pack $ show (fibonacci x)
fibServer = quickHttpServe . writeBS . B.pack . show . fibonacci
fibOf42Server = quickHttpServe . writeBS . B.pack . show . fibonacci $ 42
main = print =<< foo
main = foo >>= print
Anyway, it's a little style thing, but it's nice to use the composition operator (.) when you want composition and the application operator ($) when you want application. It makes the code look nicer and it shows its intent more clearly. And really, they are different concepts, even if they both type-check the same.
And finally, remember that function application, by default, is the highest-precedence operator in Haskell. When you write:
foo . (bar 42) . baz
foo . bar 42 . baz
a . b . c . d $ e
a $ b $ c $ d $ e
There is a growing list of smart programmers who get all enchanted with Haskell, jump into it wholeheartedly, and end up frustrated (see bottom of message). GHC makes the typical C++ compiler seem fast. Once code grows past homework-problem size, all hope of understanding memory usage is lost. I don't think people really get how bad it is. The whole culture of Haskell is based around static checking, yet you have to run a program to find out whether it blows your memory limit several times over.
Haskell is still a neat language, but we need less advocacy based on toy programs and more honest realism.
(Here's a typical, non-superficial example:
I do agree that there is entirely too much enthusiastic toying around in Haskell and not enough real world users and honesty about limitations.
As a contradictory anecdote, I have never once had a memory consumption issue with Haskell code. Haskell is actually in a nice position with respect to memory now that enumerators (which always use constant memory) are taking hold. I have no doubt you encountered many memory leaks, but I don't think your experience completely generalises to modern Haskell.
You would expect that people that spend years and years working with their tools would be willing to put a few weeks or months into learning their most important tool: the programming language. It seems most programmers get frustrated and abandon learning of different programming paradigms very quickly.
Experienced developers have learned that, typically, newer languages are better than older ones, but they typically do not get better by leaps and bounds across the whole domain. Instead, language evolution typically is a matter of two steps forward, ten sideways, and one step back.
Also, experience tells that new languages often get overhyped as making only forward steps. Given that, it does not make sense to switch horses too often, or one would be forever learning, and never be productive.
Even a small productivity gain (say 5%) over a long period of time can make a very large difference, and is worth spending weeks to learn.
Additionally, Haskell isn't a new little "fad" language. It is a pretty old research effort that accumulated many novel and useful ideas that are worth learning. I understand someone who knows Python and does not think learning Perl/Ruby/Lua will teach him anything substantial/new. But I think even a cursory look at Haskell will remove any doubt about whether it contains novel ideas to learn.
Learning about programming languages enriches you as a programmer, and I can't imagine spending a few weeks learning novel languages and the reward not far-outweighing the costs.
So, yes, I agree that learning about languages is something one should do often, but I do not think one should try to become fluent in a new language (and its libraries) too often.
And yes, IMO that does apply to Haskell, too.
One cannot be expected to learn the thousands of languages out there. But learning a bit about many of them is possible -- and then learning to use the most interesting ones is most probably worthy.
The question then raised is whether any given new programming language or paradigm will bring that 5% productivity gain, and in what circumstances. If it were an obvious and clear path to greater productivity, and superior to other paths to greater productivity (such as spending more time learning your editor and shell and environment, or learning about a new library in the language your work is written in, or learning new tricks in your current language), no one would hesitate to increase their productivity in this way.
You seem to assume that people are choosing not to become more productive by opting not to switch to Haskell or learn Haskell or something about Haskell. There are thousands of programming languages. Shall we learn them all to become five thousand percent more productive?
I'm not opposed to learning new languages. I think folks should tinker. But, I don't think it is provable that learning Haskell will make you more productive than other activities.
Also, STL provides some infrastructure for FP-like programming (defining functors, argument binding, and providing map/fold-like transformations). But given that C++98 didn't provide lambda functions, it was all a bit too painful.
I've done enough to scream in terror and run towards a Lisp should anyone suggest such an awful thing! I think there's a huge, huge gap between your 'pretty much a functional language' and standard 'a functional language' [with proper meta-programming capabilities].
In the obscure years before C++11, meta-programming in C++ required a language lawyer.
In Haskell, the syntax is so nice that it is easily readable, and it doesn't get in your way.
Unless you want if-then-else in the do notation (yes, I know that there is a GHC extension for this), disagree with its whitespace rules, or like record syntax (which subsequently pollutes your namespace).
Also, point-free style is nice, but it is easily and often abused, leading to unreadable code.
Yeah, but the syntax and the verbosity hides your aim.
Many people would argue the same of Haskell. So much semantics are encoded in the particular operators, monads, functors, monad transformers, arrows being used, that they are hidden from plain sight.
That's why it's called point-less style. It's too seductive.
> Many people would argue the same of Haskell. So much semantics are encoded in the particular operators, monads, functors, monad transformers, arrows being used, that they are hidden from plain sight.
In a sense. But at least Haskell is parseable. And overloading is only done in a very systematic manner. So if something fishy's going on, you at least see strange symbols you haven't seen before.
In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you;
Also, I think that the commas before "the right" in Norvig's statement mean logical AND. Rewritten, the statement reads: if you have the right overall architecture AND the right team of programmers AND the right development process that allows for rapid development with continuous improvement, then many languages will work for you.
I think there are too many ANDs here. In most realistic situations you cannot have such luxury.
Also, the choice of Haskell (or a similar language) addresses at least two points from Norvig's statement: the right team and the right development process.
Those who learned and applied Haskell almost cannot form the wrong team. Almost - as we cannot rule out failure completely.
The right development process is almost ensured by a strong type system. Type systems like Haskell's can be viewed as a tool for propagating requirement changes through the complete program
(that's why introducing or changing a constructor in a data type seems hard).
So, all in all, I think that languages make a difference here. With many languages you have to fulfill those three points yourself; with some languages the points fulfill themselves.
I.e. in Haskell you make a change to a type and propagate it, until the compiler stops complaining. In Java you click some `refactor' button in your IDE, and your changes will propagate through the code base automatically.
That's less of a comment on the languages, since Haskell will probably grow better tools some day, but more a comment on the relative stages of maturity.
You can certainly develop rapidly in C++ if you have the right process and people. In fact, that is where that quote came from: Norvig noting that devs at Google were not hampered by their choice of Java and C++.
About our Haskell experience: Yes, the learning curve seems steep, but mainly because of the things you have to unlearn (OMG no for-loops!). However, functions are the most modular things ever invented. That translates into an uncanny ability to add features quickly. A sophisticated type system catches many errors at compile time.
Ironically, every one of his clients in the ab concurrency test will receive their responses before the users of the hypothetically parallel Python and Ruby services, because Node responds an order of magnitude faster. So he didn't actually demonstrate a problem.
If you want to benchmark concurrency, at least pick a benchmark algorithm that exercises concurrency. The FFT comes to mind, but there are probably lots of better examples (that is a challenge to HNers ;-).
Node developers probably don't do a lot of computationally complex stuff, but when they do, they have to think about the concurrency problem. Even something as trivial as sorting a large list or parsing a huge chunk of JSON is going to stop all other requests from executing.
$ time curl http://localhost:8000
I stand by my contention that he should have implemented an algorithm that could be solved in concurrent pieces and then benchmarked node.js against his favorite language. If the algorithm cannot be parallelized effectively, it doesn't matter how many tasks you spawn to solve it (cooperative or otherwise), the algorithm's dependencies will cause all the tasks to block and effectively serialize their execution.
I suspect that it worked so well because the idea of using Fib to talk about performance is kind of built into the collective programmer unconscious. The whys, hows, and whats of using Fib to talk about performance are somewhat less well-entrenched, though. So there's room to trip people up by getting them to go, "Yeah, this sounds interesting, and I recognize all the words so it probably isn't technobabble!"
EDIT: Well also, the author would have to actually benchmark this vs. Node with many concurrent clients in order for it to be relevant; here he's just timing a single request from start to finish, which obviously doesn't say anything about how this scales.
I'll do my part. Delphi, here I come. ;)
The point of the article was more the difference between the languages that really tackle concurrency (Haskell, Clojure, Go, Erlang) and Node's way of simply offering one solution that works for a lot of problems where the common scripting languages (especially PHP) don't work that well.
It is hard to write a Haskell program that is gimped enough to be in league with other languages in a synthetic benchmark.
PLEASE GIVE UP
If I read the article correctly it's simply a matter of concurrency and parallelism that's important.
There are a host of languages that do that quite well and Haskell just happens to be one of them.
The real point is that Haskell is quite good in many areas and is excellent in parallelism and concurrency. While other languages are excellent in concurrency and not so good in other areas.
Those many languages are the answer for the sole field of concurrency and Haskell is the answer when you combine many fields, one of which could happen to be concurrency.
And I should also point out that said "blub" languages can also implement those features they lack that Haskell includes by design. Some have better features than Haskell, IMO (e.g. Qi/Shen's sequent types and the ability to turn the type system off when you don't need it).
Again, Haskell is a good language. I just don't see it as a "cure." There are many other options.
Lisp can do some of these, but it is not exactly a "blub" language. Is there a nice comparison of Qi and Haskell? Once you implement such a large, non-trivial system (such as an advanced type system), I really doubt using Lisp macros rather than implementing a compiler is easier. Macros that do such non-trivial things also do not compose well, so I doubt Lisp is beneficial for this purpose.
What was the question?
What tool will give us the best task scheduling outcome, with appropriate effort?
Node schedules each task to run until it is done or you tell it to do something else. This seems simple and honest. The impact of excessive CPU load is obvious.
Other tools offer automatic task preemption and time slicing between tasks. Once CPU load gets high the impact is much less obvious.
Haskell occupies a niche similar to Hamilton's quaternions (for classical physics) and Heisenberg's matrices (for quantum mechanics) - not mandatory, inaccessible to the masses and abandoned with haste once a more intuitive tool is found.
But they will always be there if you need them.