
Webserver Benchmark: Erlang vs. Go vs. Java vs. Node.js - slashdotdash
https://stressgrid.com/blog/webserver_benchmark/
======
m45t3r
While Erlang's performance was not the best among the contenders, I really
like its behavior here: load makes almost no difference in latency, so while
you're serving fewer clients, you're guaranteed to be serving them well.

This is much better behavior than having random spikes in latency, which
translate into a small but significant number of users complaining that your
application is slow. Erlang is the opposite: every user gets the same
experience, regardless of the load on the server.

------
jammygit
My understanding has always been that Node.js was ideal for high-IO workloads
with a light CPU load.

Having it sleep for 100 ms per request is a little strange for Node.js,
though, because it is basically simulating CPU load, even though a Node server
typically hands off work to an optimized library that might be disk- or
network-IO-bound instead of CPU-bound. 100 ms is a lot of CPU work per
request, no? That is specifically the sort of task Node is not recommended
for, since it focuses on async I/O.
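(For contrast, here's a rough sketch of the distinction, assuming plain Node with no external libraries: a busy-wait delay burns CPU and blocks the event loop, while a setTimeout-based delay yields so other work can run.)

```javascript
// Two ways to "wait" 100 ms in Node: a busy-wait burns CPU and
// blocks the event loop, while setTimeout yields and lets other
// timers and I/O callbacks fire in the meantime.
function busyWait(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // blocks the whole event loop
}

function asyncWait(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms)); // yields
}

// While asyncWait is pending, other requests can still be served;
// during busyWait, nothing else on this thread can run at all.
```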

Maybe I’m wrong, I need to work on larger projects to see what loads are more
typical. At my last job, the entire focus was to use node to redirect work to
c++ code, databases, etc, basically just request routing.

Still, this looks rough for Node. A 2x to 4x gap between it and the fastest
is still a significant cost for such a nice backend to work with.

~~~
kt315_
Sleeping is done with setTimeout:

[https://gitlab.com/stressgrid/dummies/blob/b342b02407ce09cec...](https://gitlab.com/stressgrid/dummies/blob/b342b02407ce09cecbeac4c93260429e85371300/nodejs_cluster/node_dummy.js#L15)

This isn't a busy wait. Instead, it yields for the specified time period, much
like a network request to a backend database would.

------
zamadatix
Unencrypted HTTP suggests this sits behind a load balancer; it would have been
nice to see per-core loads, as I'm betting Node would scale better if it
weren't acting as its own LB layer as well. RAM usage would still be garbage,
but I bet performance would reach about Go's level.

Go's concurrency model really fits the task best, though, and it's easy to
see that in the results.

------
jdlshore
I don't believe the built-in Node.js http server implements backpressure or
503 errors out of the box. Do the other systems tested do so? Could that
explain the poor behavior and out-of-memory crash?

~~~
kt315_
None of the tested webservers were returning 503.

