
Comparing Haskell and Node concurrency performance - ky3
https://www.fpcomplete.com/blog/2016/12/concurrency-and-node
======
reqres
> Which leads to the 'callback hell' that we all know and hate. Part of the
> bill-of-goods we accept when using Node is that in exchange for better time
> and space characteristics, we lose the thread as an abstraction.

Perhaps my head has been stuck in javascript/node land for too long but I
think accusations about javascript producing callback hell now seem a bit
disingenuous even for relative novices to the language.

It's 2016 and there are many well documented and widely adopted solutions
arising from external libraries and developments in ECMAScript. Thanks to
transpilers like Babel/Typescript we can even shoehorn these new ECMAScript
features into older browsers.

~~~
Noseshine
I never had callback hell even before modern Javascript times. I simply used
named functions instead of inlining everything. Nesting level: 1, maximum 2 if
I felt it was okay. And modules - modularization is key, or it gets too
complex.

So the _lexical structure_ of my code was linear - while the _runtime
structure_ was nested at arbitrary levels. There never was a reason to
represent the nested runtime structure in the written code.
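The style described can be sketched like this (the functions and data are made
up for illustration; only the shape matters):

```javascript
// Hypothetical async steps; in real code these would be I/O calls.
function readUser(id, cb) {
  setImmediate(() => cb(null, { id: id, name: 'alice' }));
}

function readPosts(user, cb) {
  setImmediate(() => cb(null, ['post by ' + user.name]));
}

// Each continuation is a top-level named function, so the lexical
// structure stays flat (nesting level: 1) even though the runtime
// structure is nested: readUser -> onUser -> readPosts -> onPosts.
function onUser(err, user) {
  if (err) return console.error(err);
  readPosts(user, onPosts);
}

function onPosts(err, posts) {
  if (err) return console.error(err);
  console.log(posts); // [ 'post by alice' ]
}

readUser(1, onUser);
```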

I also didn't attempt to use node.js for things it wasn't made for, like
compute-intensive tasks or implementing business logic. The good old chat
server for ten thousand people was an often used example for node.js
programming for a reason - lots of I/O, little processing.

 _Note: I don't write code like that any more in ES2015. I also don't use
classes, prototype, this, bind, apply - only functions and (lexical) scope
(with an eye on capturing only as much scope as I need). This is the opposite
of the above-described method, where lexical scope was not usable; with the
methods available now the code is still "flat", so that's why I switched._

~~~
taeric
That sounds like its own form of hell to me. Ideally, a lexical structure
helps visualize and understand the runtime structure. Anything that obfuscates
that is a recipe for disaster.

~~~
Noseshine
Runtime structure can be arbitrarily nested - how do you want to show that in
code structure? That makes no sense. You presume a static structure of who
calls whom. It also isn't very flexible (refactoring, implementing change
requests).

The key, of course, was to come up with great modularization - without it you
would not want to write "flat code", since the complexity of what function is
where would be (or would have been, since I no longer need to write in that
style) overwhelming.

~~~
taeric
I think that is part of the point. You want to do things that make it obvious
when the runtime structure has gotten arbitrarily nested. Closure callbacks
actually help there, since they make it somewhat visible and easy to "smell."

That is, if you have the same nesting at runtime, but it is just somewhat
obscured by the naming style that you did, that sounds problematic to me.
Ideally, you find structural ways to get rid of that nesting. (I said
elsewhere that I'm a huge fan of first class queues. There are other options.
Callbacks are one. And realistically, what you describe is an option, too.
None of them are intrinsically bad.)

------
spitfire
It feels like this generation has never had to use Windows 3.1, or maintain an
event loop application long term.

There are good reasons we went to thread-based models - developer productivity
and safety. Event loops are fine for toy demos, or very carefully managed
products (trading systems, NGINX), but not for use as general purpose hammers.

Every single bit of extra friction and cognitive overhead costs you dearly a
few years down the line. We scrambled away from this stuff as soon as we
could, and there's no good reason to go back.

~~~
wolfgang42
Having done programming with both threaded and event-looped systems, I think
that event loops (when done well) cause less cognitive overhead. With threaded
systems, I was constantly worrying about how things need to be locked and what
happens if two things run simultaneously. Event-looped systems make the break
points explicit, so I _know_ precisely when other things might run.

"Callback hell" is IMO a terrible way of doing event-loop systems, as the
nesting can get confusing. For implicit event loops like Node, I strongly
prefer the Promises approach (preferably with async/await sugar); for explicit
event loops (I only have practical experience with Arduino, though I have a
passing acquaintance with the classic 68k Mac as well) I like to build a set
of event-driven state machines with cooperative multitasking. If properly
designed, these keep everything nicely separated so you can follow the logic
without any trouble, while also avoiding the concurrency concerns of a
threaded system.
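As a rough sketch of such a state machine (names invented; a real
explicit-loop version would dispatch hardware events instead):

```javascript
// A tiny event-driven state machine: each event handler runs to
// completion quickly, which is what makes cooperative multitasking work.
function makeDoor() {
  let state = 'CLOSED';
  const transitions = {
    CLOSED: { open: 'OPEN' },
    OPEN: { close: 'CLOSED' }
  };
  return {
    send(event) {
      const next = (transitions[state] || {})[event];
      if (next) state = next; // unknown events are simply ignored
      return state;
    },
    get state() { return state; }
  };
}

const door = makeDoor();
door.send('open');  // -> 'OPEN'
door.send('close'); // -> 'CLOSED'
```

Because every `send` returns quickly, many such machines can share one loop
without ever blocking each other.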

~~~
taeric
I don't actually see how promises help. In particular, it seems to encourage
people to build systems that basically hoard promises in odd places.

I greatly prefer first class models for where things queue up, and then to use
producer/consumer objects against those queues.

~~~
wolfgang42
I'm not sure what you mean by "hoard promises in odd places" -- you use a
promise whenever something is happening asynchronously, certainly, but I'm not
sure what's odd about that. Maybe this is an antipattern I haven't encountered
yet?

I think producer/consumer and promises cover two different use cases (with
some overlap). Indeed, when I have a work queue it frequently involves
promises: queueing a work item returns a promise which will resolve when the
work is complete, and part of executing a work item is returning a promise so
the consumer knows when the task is done and can start in on another one.
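A minimal version of the work queue described here might look like this
(sketch only; real implementations add concurrency limits and error policy):

```javascript
// Queueing a work item returns a promise that resolves when that item
// has been processed; the worker itself returns a promise, so the queue
// knows when a task is done before starting the next one.
function makeQueue(worker) {
  let tail = Promise.resolve();
  return {
    push(item) {
      const result = tail.then(() => worker(item));
      // Keep the chain alive even if a task fails.
      tail = result.catch(() => {});
      return result;
    }
  };
}

// Usage: tasks run strictly one at a time, in order.
const queue = makeQueue(async (n) => n * 2);
queue.push(1).then((v) => console.log(v)); // 2
queue.push(2).then((v) => console.log(v)); // 4
```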

~~~
taeric
I have seen paths that will generate upwards of 10 promises and then put them
together at the end. In theory, this shouldn't be a problem and the code was
readable enough. Reasoning about all of the different ways backpressure can
happen was not as easy.

Compared to knowing that you have a set of queues that ultimately feed into
other queues. It is much clearer to reason about the throughput of individual
queues and take that into consideration when designing new queues in the
system.

It is basically like someone throwing a ton of outlets on a wire going through
a room. I mean, yes. You can do that. Often won't even cause issues. However,
for large enough systems, you ultimately need to know what the load on that
circuit will be and it is not acceptable to put yet another plug extender
there.

------
tracker1
I don't think this comparison is really fair at all... it seems to be cherry
picked to point out what is already known to be a bad use case for node. In
terms of the multiple async calls, Promises and async functions take care of
that, not to be confused with the reference to the `async` library.

First, using clustering and memoization would improve the throughput a lot. I
did something similar when adapting a JS based script library to be used in
node, because I knew it would lock the main loop otherwise. Beyond this, cpu
intensive work should be avoided in your service loop regardless. It's best
distributed to an RPC/Worker pool.
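Memoization in this context just means caching pure results so the main loop
never recomputes them; a generic sketch (not the code referred to above):

```javascript
// Cache results of an expensive single-argument function.
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

let calls = 0;
const expensive = memoize(function (n) {
  calls += 1;   // count how often we actually compute
  return n * n; // stand-in for real CPU-heavy work
});

expensive(4); // computed
expensive(4); // served from cache; calls is still 1
```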

In terms of scale, node scales as well as or better than a lot of frameworks;
it's only that you will usually want to use similar techniques locally as well
as remotely.

Another poor example: when you need millions of references in a single
thread, Node will die spectacularly. That doesn't mean it shouldn't be used
for many use cases, it only means that it's bad at some of them.

I find that node is _great_ as an intermediate/translation layer... your UI
talks directly to node, tightly coupled.. then node can translate against
backend databases or other services as a gatekeeper for your front end. It
allows you to make the data the shape that is most convenient, with the least
amount of disconnect of thought and approach.

It's also pretty great for certain types of orchestration control and even in
the proof of concept stages of applications. Doing a first version of almost
anything I've tried in Node is usually much faster than alternative platforms.
And often performs well enough to stick with it. Developer productivity is
more important than absolute scale at the beginning, and if you have a plan to
scale horizontally, you can do that for a while before you need to break off
other optimizations.

~~~
Scarbutt
 _Doing a first version of almost anything I've tried in Node is usually much
faster than alternative platforms._

Do you say this because of the libraries available for it? (curious, wanting
to jump to node).

~~~
tracker1
I mean faster to get up and running... development time, that is. Mostly, for
me, because I'm doing front end work in JS, using mostly the same tooling: npm,
babel (though with webpack on the front end now).

It's just so much faster to get going if you're developing the full stack, and
already in JS heavy land anyway. Not having to context switch for the backend
is huge... being able to use a document/object/json database isn't as big of a
boost but still nice. On the db side, I've been using the template wrappers so
that I can write a simple query and it turns it into a parameterized query
returning a promise.

    
    
        async function getRecords(baz) {
          return await sql.query` 
            SELECT  
              a,
              b
            FROM 
              foo
            WHERE
              foo.bar = ${baz}
          `;
        }
    

So, I still have to think about some SQL, but still usually better than trying
to twist ORMs into shape.
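The wrapper described above can be approximated with a tagged template like
this (illustrative sketch; real libraries differ in placeholder syntax and
return a promise from the database driver):

```javascript
// A tagged template that turns interpolations into positional
// parameters ($1, $2, ...) instead of splicing them into the SQL text.
function sql(strings, ...values) {
  const text = strings.reduce(
    (acc, part, i) => acc + (i === 0 ? '' : '$' + i) + part, '');
  return { text: text, values: values };
}

const baz = 42;
const q = sql`SELECT a, b FROM foo WHERE foo.bar = ${baz}`;
// q.text   -> 'SELECT a, b FROM foo WHERE foo.bar = $1'
// q.values -> [42]
```

Because the values travel separately from the query text, the database sees a
parameterized query rather than user input spliced into SQL.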

Overall, for the past 6 years or so (since 0.8) I've been using Node pretty
heavily (moving from more C# on the backend) and really haven't missed it at
all. Even though the core .net and more open-source stuff has been
interesting... Managed to dockerize a few trivial .Net apps using the dotnet
onbuild base containers.

If you're using windows, the only gotchas are that you need a C++ build
environment (Visual C++ 2015 Build Tools, checking all options) and Python
2.7.x in order to build any binary modules... most of which now run without
issue on windows; it was a _much_ bigger problem in 0.8-0.10 ...

------
jondubois
This article is misleading. Here's the real problem:
[https://github.com/AndrewRademacher/fpco-article-examples/blob/master/concurrency-and-node/node-async/src/starvation-main.js#L22L32](https://github.com/AndrewRademacher/fpco-article-examples/blob/master/concurrency-and-node/node-async/src/starvation-main.js#L22L32)

Anyone who understands JavaScript can see that the recursion invoked in the
slow route is not asynchronous (each recursive invocation keeps piling onto
the call stack without ever releasing it)! You'd have to use process.nextTick
(or setTimeout) if you wanted to recurse asynchronously without spawning a new
process...
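The fix described (recursing via setImmediate so each step yields back to the
event loop) might look like this; the function is a made-up stand-in, not the
article's code:

```javascript
// Each recursive step is scheduled on the event loop instead of the
// call stack, so other callbacks (e.g. I/O) can run in between steps.
function sumTo(n, acc, cb) {
  if (n === 0) return cb(acc);
  setImmediate(() => sumTo(n - 1, acc + n, cb));
}

sumTo(100, 0, (total) => console.log(total)); // 5050
```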

For these kinds of unusual, heavy computations, though, you'd be better off
using the child_process module to spawn a new process and do the recursion
inside that process so that it doesn't block the main event loop.

This has nothing to do with starvation. Node.js just has a completely
different approach to this kind of problem.

~~~
wyager
> Maybe Haskell does some sort of automatic multi-threading behind the scenes
> - If so, I'm not sure that's a good thing

GHC (the leading Haskell compiler) provides an extremely advanced green
threading system with the runtime.
[http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask035-voellmy.pdf](http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask035-voellmy.pdf)

> I like the way it's done in Node.js - Explicitly.

If you wanted to do it this way in Haskell, there are a number of monads that
make it way more convenient than doing it in Node. Cont in particular can be
used for cooperative concurrency. No one really uses these outside of niche
cases, however, because threads are almost always a better abstraction.

~~~
jondubois
Maybe, Golang also has similar advanced multi-threading constructs.

But I do a lot of REST API (and WebSocket) work and all of the workload that
happens inside my Node.js program is extremely lightweight.

If I need to perform some heavy computation, I will offload it to a separate
child process - Node.js forces me to put that code in a separate file/module
but I actually like this because it encourages separation of concerns. It
feels very natural so I don't really need any other special constructs.

I looked into goroutines a while ago; they look cool, but I probably wouldn't
use them much because I don't like the idea of having code from the same
source file splitting off into multiple processes/threads; it makes it harder
to read and reason about the code (this is a bit like what happens with multi-
threaded code when you have mutexes all over the place).

To me, this feature has the same utility value as the ability to define
multiple classes per source file - Ok, that's cool, but is it a good idea to
do that?

~~~
wyager
I don't really have strong feelings about source file organization, but the
way it works in Haskell is that all IO actions have type "IO a", where "a" is
the type of the action's result. To do multithreading, you just do e.g.

    
    
        foo :: IO () -- optional type declaration
        foo = forever (print "hi")
        forkIO foo
    

Now foo will run in its own green thread forever. Very simple. This works with
any IO action. You can put it in whatever file you like. For communicating
across threads you have many great options like STM. If you like Go-style
chans, there are libraries that provide various types of chans.

------
ky3
Previous comments on threaded vs event-loop concurrency abstractions are
relevant:

[https://hn.algolia.com/?query=why%20events%20are%20a%20bad%2...](https://hn.algolia.com/?query=why%20events%20are%20a%20bad%20idea&sort=byPopularity&prefix=false&page=0&dateRange=all&type=story)

[https://hn.algolia.com/?query=why%20threads%20are%20a%20bad%...](https://hn.algolia.com/?query=why%20threads%20are%20a%20bad%20idea&sort=byPopularity&prefix=false&page=0&dateRange=all&type=story)

[https://hn.algolia.com/?query=threads%20vs%20events&sort=byP...](https://hn.algolia.com/?query=threads%20vs%20events&sort=byPopularity&prefix=false&page=0&dateRange=all&type=story)

------
KirinDave
On one thread, I am defending FP from critics who are unaware of the state
of the art.

EDIT: Just to make it clear early on, I agree with the article's conclusion
that Nodejs is not as good at compute heavy workloads as Haskell. I simply
object to any use of "the nested callback problem" as valid in 2016. It's an
issue exclusively for legacy code and developers who take pride in writing
outdated code.

It seems only fair, then, that I should also defend Javascript from people
obviously unaware of the state of the art in pseudo-imperative programming.
And by state of the art, I mean "has been around in some languages for 3+
years."

The example:

    
    
        request('http://example.com/random-number', function(error, response1, body) {
          request('http://example.com/random-number', function(error, response2, body) {
            request('http://example.com/random-number', function(error, response3, body) {
                ...
            });
          });
        });
    

But modern Javascript (before you start, yes, it runs on every browser with
preprocessing, which is normal for this ecosystem) would make it look more
like this:

    
    
        // rp is a request promise, multiple options for creating them
        async function make3StaticRequests() {
            try {
                var res1 = await rp('http://example.com/random-number')
                var res2 = await rp('http://example.com/random-number')
                var res3 = await rp('http://example.com/random-number') 
                // ...
            }
            catch(error) {
                // ... 
            }   
        }
    
        // And of course the promise library allows for many things
        // you'd like with applicative functors, like binding groups
        // of operations together and evaluating them all.
    
        function randomNumberPromise() {
            return rp('http://example.com/random-number')
        }
    
        async function make3ParallelRequests() {
            var [res1, res2, res3] = await Promise.all([randomNumberPromise(),
                                                        randomNumberPromise(),
                                                        randomNumberPromise()])
            // ...
        }
    
    

I don't really understand why people feel comfortable writing up comparison
articles without doing sufficient research into what they're comparing things
to.

That said, the article's point about large compute workloads starving other
operations is very much true, and a good example of the weakness V8 brings as
a server-side programming environment.

~~~
spion
For starvation there is also clustering, which can shift the load towards
worker processes that aren't busy with cpu-intensive tasks, as well as simple
modules such as [https://www.npmjs.com/package/process-pool](https://www.npmjs.com/package/process-pool)
which can be used to offload (known) expensive tasks.

While Haskell is truly better at concurrency (no need to serialise when
passing messages, green threads yield not only at IO but also at memory
allocations), that part of the comparison isn't very good. Spawning a cluster
of NUMCORES processes using the built-in cluster module would be an
improvement.

~~~
KirinDave
Personally I think this is a total non-answer to the question. It enormously
complicates the concurrency and data sharing story for a solution that
literally every programming language has access to.

Nodejs has a poor story for compute-intensive loads. People need to be
comfortable saying that, because it's reality.

~~~
spion
First of all, there is no enormous complication. Clustering in node is super-
easy, and file descriptors of requests are sent automatically to one of the
processes in the process pool. This benchmark should have at least done that.

Secondly, the service presented there doesn't even need to take advantage of
shared memory concurrency at all. This is the case for a vast majority of web
service problems too: they either talk a lot to each other or do a bunch of
cpu-intensive work, but rarely both.

Finally, when you use shared memory concurrency/parallelism to solve web
service problems, there is a risk that the resources of a single machine will
not be enough. And then you are back to serialising things and sending them
through an even slower channel.

Haskell also has poor facilities for compute-intensive stuff, although they
are different facilities. For example, laziness makes reasoning about
performance more difficult. Space leaks are fairly easy to create unless you
know the common gotchas. Most naive/idiomatic Haskell code performs several
orders of magnitude worse than what's possible with optimized code. Etc etc.

~~~
KirinDave
> But those are not web-server problems.

I disagree. Totally and utterly. API servers are webservers, and in fact a
pretty big subgroup of them. API servers can often run into computational
requirements in mid request, and it sucks that ONE slow request can cause
latency ripples across your entire process.

> Haskell also has poor facilities for compute-intensive stuff, although they
> are different facilities. For example, laziness makes reasoning about
> performance more difficult.

This is changing the subject, and ultimately a non-sequitur. "This also has
problems which are different" is not actually an answer to the criticism that
nodejs is bad at these workloads.

Say it with me. It's okay to say. "Nodejs is bad at compute-intensive
workloads."

~~~
spion
> I disagree. Totally and utterly. API servers are webservers, and in fact a
> pretty big subgroup of them. API servers can often run into computational
> requirements in mid request, and it sucks that ONE slow request can cause
> latency ripples across your entire process.

Or it doesn't cause ripples, since the cluster master doesn't send requests to
it, and instead redirects them to the other processes.

That, plus a process pool for known compute-intensive stuff is often good
enough.

~~~
KirinDave
> Or it doesn't cause ripples, since the cluster master doesn't send requests
> to it, and instead redirects them to the other processes.

A methodology that doesn't scale well across boxes, so only buys you so much.
Demo code might benefit from this approach, but production code will do
something totally different to give an API or server actual durability and
uptime.

> That, plus a process pool for known compute-intensive stuff is often good
> enough.

To make it "good enough" for production workloads for anything less than a
trickle of traffic requires an entirely different and probably queue-based
architecture. While this is often in good style, it is (I restate) _a tool
available to every language environment_. If everyone has this technique, you
cannot say that Node is given a pass because "if you just totally re-architect
this code" then it's okay.

Nodejs's scheduler and execution model fundamentally make it worse at compute-
heavy workloads. THIS IS AN ACCEPTABLE TRADEOFF. But denying it exists only
misleads engineers and leads you to bad decisions.

~~~
spion
> A methodology that doesn't scale well across boxes, so only buys you so
> much. Demo code might benefit from this approach, but production code will
> do something totally different to give an API or server actual durability
> and uptime.

Yes, you would have a load balancer in front of several instances running on
several machines. Same as Haskell.

> Nodejs's scheduler and execution model fundamentally make it worse at
> compute-heavy workloads. THIS IS AN ACCEPTABLE TRADEOFF. But denying it
> exists only misleads engineers and leads you to bad decisions.

I agree, in principle. But one, it's not nearly as bad as this article makes
it out to be. And two, the given example is not convincing or very
representative.

~~~
striking
It absolutely is as bad as 'KirinDave says it is, although the article does a
bad job of showing it. Node shines at IO-bound applications, sure, but let's
say you want to do one big computation on 16 cores all at once.

With Node, you'd have to serialize data and pass it between child processes.
And that really, really sucks. Haskell's parallelization story extends way
past the "embarrassingly parallel" request handling.

A good data point based on a less contrived example is how handily Haskell web
frameworks _demolish_ Node.js at JSON serialization... and not much else
([https://www.techempower.com/benchmarks/#section=data-r13&hw=...](https://www.techempower.com/benchmarks/#section=data-r13&hw=cl&test=json&l=4fthvj)).

Finally, if you're looking to do something truly and extremely CPU-bound, I'd
tell you to write it in a C derivative and bind it to Node or Haskell,
regardless of what your favorite language is. Optimize for speed in some
places, and programmer happiness in others. A one-size-fits-all approach isn't
usually appropriate.

~~~
spion
I just spawned a least-latency load balancer in front of a cluster of N
processes (one per core) for the benchmark.

[https://github.com/spion/fpco-article-examples](https://github.com/spion/fpco-article-examples)

Now it gives the same results as the Haskell solution. (Which only means this
concrete benchmark doesn't reflect real world performance)

------
aconz2
The first part about comparing the callbacks in Javascript vs do notation in
Haskell is super misleading. The do notation desugars to code that looks
essentially identical to the Javascript version. Sugar is not a negligible
consideration in system choice, but that kinda thing just bugs me.

~~~
efnx
It desugars that way because of the nested structure of monadic sequencing,
not because of threads or IO. We can distill the real argument down to
"JavaScript must use nested callbacks for async. Haskell doesn't."

------
vikingcaffiene
A lot of the readability problems with NodeJS the author mentions at the
beginning of the article have been solved in numerous ways. Promise based work
flows for instance allow one to define a step by step flow very similar to the
counter examples provided. Packages like babel can expose things like yield
and async/await, which get us even closer. I'm not saying it's ideal but it
certainly mitigates the worst parts of the 'callback hell' problem pointed
out.

------
spullara
"Many web servers, for example, achieve concurrency by creating a new thread
for every connection. In most platforms, this comes at a substantial cost. The
default stack size in Java is 512KB, which means that if you have 1000
concurrent connections, your program will consume half a gigabyte of memory
just for stack space."

As a WebLogic developer, I can say we fixed this in the late '90s, but the
Volano chat benchmark was still run for no apparent reason. Somehow everyone
else didn't get the message until Netty was released and people started using
it.

Obviously what you want is multi-threaded execution with asynchronous I/O.
Using node on multi-core systems just doesn't make a lot of sense as you end
up having to duplicate your entire program on each core to get the full
performance of the machine. Not unlike 512k/thread but much worse — especially
if you cache anything locally in the process like template compilation, etc.

~~~
seangrogg
If you're going to just duplicate your process across all the cores then yes,
local caching becomes an issue. Which was solved with Redis. Back in 2009.

Don't get me wrong, the fact that Node doesn't have a more efficient way of
handling overhead on multicore machines is a drawback. But you pay for your
abstractions. Nobody is going to argue that Node is a phenomenal solution for
lightweight, multi-threaded execution. But most companies using Node seem to
accept this and are fine with not utilizing 100% of their resources (hell,
most don't even seem to know Node can spawn processes, in my experience).

If they care they use a framework that meets their needs.

------
davidw
> Node ... popularized the event-loop

Tcl was doing event loops quite successfully in the late '90s and was fairly
popular, back in the day.

These days, I'd use Erlang (Elixir).

This is kind of ranty, but also makes me laugh:
[https://www.youtube.com/watch?v=bzkRVzciAZg](https://www.youtube.com/watch?v=bzkRVzciAZg)

~~~
hyperpape
This is exactly what the word popularized means. It doesn't mean invented, it
just means exposed it to a large number of people who hadn't previously
encountered it.

I don't know if you can claim definitively that Node introduced more people to
event loops than any other technology, but it certainly is one major
popularizer of the idea, and the one that's had the most impact in the past
decade.

~~~
davidw
So basically people who knew nothing of what had come before, despite it being
a fairly widespread technology. I think that's kind of proving the video's
point.

~~~
hyperpape
People are constantly learning how to program, and they pick up ideas in
different orders from different sources at different times in the history of a
discipline. Every idea anyone has ever known was something they didn't know
until they were introduced to it.

The fact that you knew it first doesn't give you a reason to be snide.

------
partycoder
node is fast when the heaviest work is delegated to libuv or to native modules
(performant ones, that is). If you need to do heavy work in v8, it slows down
significantly.

What would I call "heavy work"? e.g: compression, serialization, encryption,
image processing... tasks that are bound by CPU and not only I/O. Usually you
want to delegate that to a native module and not do that yourself in
JavaScript. If you absolutely have to do it in JavaScript, then you need to
make sure the task is not blocking the event loop. In order to play nicer with
the event loop, you inject something like setImmediate or process.nextTick
after a certain amount of time or number of iterations... otherwise you will
starve other tasks in the loop, notably I/O.
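That technique (doing a bounded batch of work, then yielding with
setImmediate) can be sketched as follows; the names and batch size are
arbitrary:

```javascript
// Process items in batches, yielding to the event loop between batches
// so pending I/O callbacks are not starved by one long-running loop.
function processAll(items, handle, done, batchSize) {
  let i = 0;
  function runBatch() {
    const end = Math.min(i + batchSize, items.length);
    for (; i < end; i++) {
      handle(items[i]);
    }
    if (i < items.length) {
      setImmediate(runBatch); // yield, then continue where we left off
    } else {
      done();
    }
  }
  runBatch();
}
```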

node is also not a really good idea if you need a lot of interprocess
communication.

It is a very viable alternative, though.

------
spion
Here we go, a node version that uses a least-latency load balancer and doesn't
exhibit the problem:

[https://github.com/spion/fpco-article-examples](https://github.com/spion/fpco-article-examples)

It's an interesting benchmark, but it needs more work to give a more accurate
picture. It would be nice if it:

  * used wrk to measure requests and latency percentiles
  * made the percentage of "slow" requests tweakable
  * made the number of workers per core tweakable

Then we could generate a nice chart that shows latency percentiles as a
function of the % of slow requests and workers per core, and compare with
Haskell.

------
switchbak
Another nitpick - most Java based web/app servers have used thread pooling (or
similar approaches) for at least 10 years. Seems overly simplified to say
these environments always "spawn a new thread".

~~~
hyperpape
That saves you from thread spawning costs, but not thread overheads.

------
stanislavb
In regard to concurrency, I'd suggest having a look at Elixir. It's growing
like a weed and offers the best programming experience you can find. Not
kidding. Just try it.

------
guntars
I guess if we're nitpicking, then here's another one:

> Looking near the top of the output, we see that Haskell's run-time system
> was able to create 100,000 threads while only using 165 megabytes of memory.
> We are roughly consuming 1.65 kilobytes per thread.

Those are not the same kind of threads that the author is talking about at the
beginning of the article. Those are green threads, and as such are multiplexed
onto a much smaller number of real system threads to do work in parallel. What
that means is that they, for example, can't all make a system call at the same
time. Go has the same issue.

~~~
wyager
> What that means is that they, for example, can't all make a system call at
> the same time.

Yes, they can. The Haskell IO manager knows how to handle this.
[http://haskell.cs.yale.edu/wp-
content/uploads/2013/08/hask03...](http://haskell.cs.yale.edu/wp-
content/uploads/2013/08/hask035-voellmy.pdf)

~~~
guntars
That's true with a caveat. It knows how to handle IO syscalls on systems where
something like "epoll" or "kqueue" is available, which, to be fair, covers the
majority of what a typical server does. In the general case, waiting for a
syscall to return will block the system thread, and no other green thread is
going to be able to run on it.

~~~
platz
The GHC Haskell runtime supports preemptive scheduling of cooperative threads
("green threads"). In non-preemptive cooperative threading, threads yield to
each other. However, when a thread goes into a syscall, it no longer has the
control to yield. The only way to wake up from a syscall (and thus to decide
whether another thread should be scheduled, inside the runtime, and thus to
get preemptive scheduling) is to send a signal to the process that's blocked
in the syscall; the syscall then gets interrupted with EINTR and the runtime
can make its scheduling decision, and then resume the syscall if needed.

This signal sending is done by setting up a periodic "timer signal" that sends
SIGALRM to the process every 10 ms (by default).

[http://man7.org/linux/man-pages/man2/timer_create.2.html](http://man7.org/linux/man-pages/man2/timer_create.2.html)

~~~
guntars
Thanks, I did not know this.

------
z3t4
Lambdas (anonymous functions) are so popular in JS that it might be a surprise
that you can actually name your functions. Heck, you can even use them like
any object: pass them along, store them in lists, return a function from a
function, etc.
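The points above in miniature (trivial made-up functions):

```javascript
function double(n) { return n * 2; }  // a named function

const fns = [double, (n) => n + 1];   // functions stored in a list

function compose(f, g) {              // a function returned from a function
  return (x) => f(g(x));
}

const doubleThenAddOne = compose(fns[1], fns[0]);
console.log(doubleThenAddOne(5)); // 11
```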

------
eximius
> However, Haskell doesn't impose an additional burden on the design of your
> software to accomplish that goal.

 _giggles uncontrollably_

I like Haskell, but saying that it doesn't impose a design burden is
incredibly misleading.

------
hitgeek
is the author really just comparing single-core to multi-core? I didn't read
any specs on the machine that ran the benchmarks, but assuming it's multi-core,
are the Haskell tests using all the cores, while node is only using one?

node is probably not the best choice for truly CPU bound operations, but you
can sometimes get by using the native cluster module to spread work over
multiple cores.

~~~
tracker1
For a few things, I've spun off another process, and used a pool to constrain
the number of processes actually running at once... did this to adapt a
JS-based scrypt module when I was running node on windows, and none of the
"native" versions of scrypt in npm would even build correctly on windows
(early 0.8 timeframe, iirc).

------
ninjakeyboard
Node has concurrency? :P

~~~
leshow
I think you are confusing concurrency with parallelism.

------
jondubois
>> The difference here is stark because in Node.JS's execution model, the
moment it receives a request on the slow route, it must fully complete the
computation for that route before it can begin on a new request.

With this statement, the author acknowledges that the Node.js code in the slow
route was not asynchronous. The test is therefore invalid; it's comparing
apples and oranges.

Node.js is more than capable of handling different requests asynchronously
(regardless of whether they are fast or slow); if you have any kind of
blocking or waiting around happening, then you're doing it wrong.

I'm so tired of all the anti-Node.js propaganda; it's hurting people. If I
walk into one more company where some zombie tells me that they're migrating
away from Node.js because "the Node.js event loop starves the CPU", I'm going
to have a stroke.

In reality all Node.js 'starvation' problems can be solved with the 'cluster'
module or the 'child_process' module.

Since we've been talking a lot about 'fake news' on Facebook recently, maybe
we should start talking about how fake news is affecting Hacker News. This
anti-Node.js strain is particularly virulent.

------
igl
Comparing apples to oranges

~~~
lgas
How so?

~~~
edem
Node has no built-in concurrency, just callbacks. The whole point of node is
nonblocking io, not multithreading.

~~~
tracker1
Node does use threads for async io, it's just abstracted away from the main
loop. The bigger issue is that cpu-bound code is a bad use case for Node, and
it's known to be. There are options to run this type of code outside the main
process though.

You can scale node, via the same techniques you use to scale anything across
servers, you just do it sooner with node in order to better utilize a larger
server, or use multiple servers sooner. Node is great for just about any io
bound workflow.

------
MrBuddyCasino
So someone compared performance and programming model, and lo and behold,
found Haskell to be superior.

What might be more interesting:

- what is the salary difference between a Node dev vs. a Haskeller?

- is there a productivity difference? does the salary over- or under-
compensate?

- is correctness a core business concern?

- if I need to hire 10 devs, can I do that?

------
jdc0589
I'm getting sick of seeing this. Node wasn't intended for compute-heavy
workloads...ever...at any point...for any reason. This is like the 50th time
someone has decided it was appropriate to point this out by generating
Fibonacci numbers (among other things).

Go watch the node.js presentation Ryan Dahl gave at JsConf 2009, he addresses
this during that speech.

His opinion on the "right way" to do concurrency is a little polarizing, but,
quote: "the right way to do concurrency is to use a single thread and have an
event loop. this requires that what you 'do' outside of IO waits not take very
long".

