It is also interesting to see that a single Node process can barely serve 10k connections.
After checking the source at : the Node side uses the plain `http` module to serve requests, while the Go side uses `http.ListenAndServe`, which spawns a goroutine per connection, hence the higher CPU utilization.
What would be the most performant way to serve HTTP in Go?
Question: how is using the cluster module different from spinning up multiple Node instances?
As for the structure of it, you would likely have everything split out into goroutines, with a worker pool of goroutines ready to ferry the data from request to backend and back to the client.
I just want to point out that Node runs on a single thread, so it does not max out CPU utilization. It would be nice if the author ran the benchmark again with multiple Node.js processes.
Not sure what `body` not having `var` in front (so it gets put on the global object) would do either.
I wonder if somebody who's more of an expert on the BEAM could chime in on why that might be. More activity in the scheduler, maybe?