Benchmarking Go vs. Node vs. Elixir (stressgrid.com)
75 points by pplonski86 on Feb 5, 2019 | 32 comments

Not sure, but it kind of seems like they are running only one (single-threaded) Node process on multicore CPUs.

This immediately came to mind when I saw the charts.

It is also interesting to see that a single Node process can barely serve 10k connections.

Edit: after checking the source at [0]: for Node they use the plain `http` module to serve requests, while for Go they use `http.ListenAndServe`, which spawns a goroutine per connection, hence the higher CPU utilization.

[0] https://gitlab.com/stressgrid/dummies

It's an unfair comparison, then, if they don't even use Node.js to its full potential. An apples-to-oranges comparison.

Author here. Planning to run the same test using the cluster module with one worker per CPU.

What would be the most performant way to serve HTTP in Go?

I'd avoid the cluster module; it's not the recommended way to scale Node.js. It exists mostly as a way to make naive Node.js benchmarks perform better in multi-CPU comparison testing... like yours! :-) Many cloud providers charge per CPU, so Node's "mostly not threaded" approach is reasonable, in my opinion and that of many other users. Node is scaled by spinning up more instances, each instance being one of the cheapest "one CPU" variety. I'd be more interested in seeing a version of your benchmark that limited each of the language instances to a single CPU, and/or that spun up multiple Node instances equivalent to one multi-CPU instance of Go/Elixir. This latter may sound weird, but it's a "cost equivalent" comparison, which is ultimately what's important: transactions served per $$.

One of the benchmark goals was to test the "scheduling" efficiency of each runtime; in other words, to show how well it scales on a many-core instance, which is often more economical in a transactions-per-dollar sense.

Question: how is using the cluster module different from spinning up multiple Node instances?

FastHTTP is the fastest HTTP server for Go; it's going to crush Elixir by something like 10x. It's really fast, on par with the fastest C++/Rust libraries.

Well, I believe few people use the built-in http module in Go. Not sure if your testing would allow for third-party frameworks, but the Iris framework loves to claim it's the fastest Go web server. Gorilla and Gin are also popular.

As for the structure, you would likely have everything split out into goroutines, with a worker pool of goroutines ready to ferry data from the request to the backend and back to the client.

There are so many options to choose from. But don't use Iris.

See https://www.reddit.com/r/golang/comments/57w79c/why_you_real...

Why would you not use the standard library's http package? I would!

That’s the idiomatic way to handle requests in Go. What’s the problem?

I don't have any problem with their Go implementation. On the contrary, it is very impressive to me, someone with little knowledge of Go.

I just want to point out that Node runs on a single thread, so it does not max out CPU utilization. It would be nice if the author ran the benchmark again with multiple Node.js processes.

Maybe they were simulating some sort of messaging server, or a shared cache, or something else that does not translate well to a multiple-process model?

I'd like to see metrics on memory consumption. Is this something you could add?

Good point, will add.

many thanks!

Would be interesting to compare with Java.

What would be a good Java web server to test?

Slightly off-topic, but: I love those graphs. Any ideas how they created them, or what they are using?

Wonder how much the anonymous function in the node handler affected it.

Not sure what `body` not having `var` in front (so it gets put on the global object) would do either.

Surprising that BEAM is such a CPU hog!

BEAM has some heuristics where it can switch to a busy wait in some cases, in an attempt to reduce latency. This is tunable with the +sbwt argument to erl, but the default means it's hard to use the system CPU % as a load indicator.

It would be interesting to hear more about this, because with the CPU spiking like that I would have expected more variability in the response time. But it was quite the opposite: it was the most consistent, and consistency is one of the big things the BEAM aims for.

I wonder if somebody who's more of an expert on the BEAM could chime into why that might be. More activity in the scheduler maybe?


Node bad, Elixir good but spikey on the CPU, Go best.

It seems the Elixir CPU usage is due to the way the Erlang VM manages schedulers. A scheduler will literally busy-wait if it thinks there might be more work to do soon, in an effort to improve the responsiveness of the system.

I would love some sort of bot that I could point at blog posts that could summarize articles with the exact language you used.

I read the article and saw that Node is not so bad at all.

God's work. Thanks

