
Handling 1M Requests per Minute with Go (2015) - rcarmo
http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/
======
35bge57dtjku
It's strange that they seem to think this is an amazing amount of scaling and
then go on to reveal that it's actually with 20 servers. And it's not like
their solution is novel.

~~~
phillipcarter
833 requests per second per server (though it's not likely that the load is
perfectly distributed like that).

I'm ... not very impressed.

------
fastest963
Why didn't he keep the first single-channel solution and just listen on the
channel from multiple goroutines? I don't see the advantage that a tiered
approach gets you over that.
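A minimal sketch of the simpler design this comment suggests: one shared channel with a fixed number of goroutines all ranging over it. The names (`process`, worker counts) are illustrative, not from the article.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// process fans n jobs out to `workers` goroutines that all receive
// from one shared channel, and returns how many jobs were handled.
func process(n, workers int) int64 {
	jobs := make(chan int, 100) // bounded buffer applies backpressure
	var handled int64
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs { // each job is received by exactly one worker
				atomic.AddInt64(&handled, 1)
			}
		}()
	}
	for i := 0; i < n; i++ {
		jobs <- i
	}
	close(jobs) // workers exit once the channel drains
	wg.Wait()
	return handled
}

func main() {
	fmt.Println(process(10, 4)) // prints 10
}
```

The fixed worker count caps concurrency without the intermediate "pool of channels" layer.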

~~~
ohnoesjmr
Yeah, I fail to understand the purpose of the dispatcher thingy as well. It
can also run out of memory, since he keeps spawning goroutines indefinitely
with each request in the dispatcher.
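To illustrate the memory concern: spawning a goroutine per incoming request is unbounded, whereas a bounded queue lets the server block or shed load instead of growing without limit. `enqueue` is a hypothetical name, not from the article.

```go
package main

import "fmt"

// enqueue tries to hand a job to a bounded queue. If the queue is
// full it returns false instead of buffering more work in memory.
func enqueue(queue chan int, job int) bool {
	select {
	case queue <- job:
		return true // accepted; a worker will pick it up
	default:
		return false // queue full: reject rather than grow unboundedly
	}
}

func main() {
	queue := make(chan int, 2) // capacity caps in-flight jobs
	fmt.Println(enqueue(queue, 1)) // true
	fmt.Println(enqueue(queue, 2)) // true
	fmt.Println(enqueue(queue, 3)) // false, queue full
}
```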

------
nwhatt
I appreciate thoughtful posts like this. Instead of the typical "Go is just
better" style of article with specious data, this one talks about pitfalls
around Go. All developers need to realize that there are tradeoffs with any
implementation and this blog does a great job illustrating that.

------
mbrumlow
Seems like they just had a misunderstanding of how range worked. And now they
have complicated code.

~~~
baconomatic
What was the misunderstanding? I'm a Go noob and curious about what they did
incorrectly.

Edit: I read the comments at the end of the article and my question was
answered.

------
meritt
They're uploading each POST payload to S3 at a rate of up to 1M uploads a
minute? They're going to go broke from S3 operational fees. PUT fees are
$0.005 per 1k requests, or $5/minute, or $7,200/day.

S3 is an absolutely terrible financial choice for systems that need to store a
vast number of tiny files.
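A back-of-envelope check of the cost figures above, assuming the quoted price of $0.005 per 1,000 PUT requests:

```go
package main

import "fmt"

// s3PutCostPerDay computes daily S3 PUT fees at the quoted rate of
// $0.005 per 1,000 requests (the price cited in the comment).
func s3PutCostPerDay(putsPerMinute int) float64 {
	perMinute := float64(putsPerMinute) / 1000 * 0.005
	return perMinute * 60 * 24
}

func main() {
	fmt.Println(s3PutCostPerDay(1_000_000)) // 7200
}
```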

~~~
robbles
They're batching the requests into larger files on S3. The 1M refers to the
number of HTTP requests hitting _their_ server.

~~~
meritt
Can you show me where in the post that is described because I do not see it.
All I see is a description of how they moved the UploadToS3 aspect to a job
queue, but it's still sending individual files to S3.

------
kureikain
Last year I wrote a Go service that handles JSON posts with payloads of
~100KB and inserts them into MongoDB. It required a trivial amount of work,
and on a single t2.small server it handled up to 200K RPM just fine. So 1M
requests on 4x c4.large means 250K RPM per c4.large. Maybe their payloads are
too big, and writing data to S3 is probably slower than writing to a MongoDB
cluster.

I was so happy with Go's performance, and since that day I've realized just
how slow other languages are. I found Java to be quite comparable to Go as
well. I was coming from Ruby/PHP and didn't realize how fast other languages
are until I tried them.

------
qaq
Since when did per minute become a thing? 16K/sec is typical for a Node app
on a fairly average server.

~~~
KirinDave
16k r/s is not trivial to get for a non-hello-world app on NodeJS even on a
big server, but if you read the article you'd see it's actually about a
quarter of that.

... Yeah basically it is an article about using concurrency.

~~~
qaq
I was obviously not commenting on the content but on the trend of coming up
with measurements that make for a catchy title.

~~~
KirinDave
> 16K/sec is typical for a node app on fairly avg server.

This certainly doesn't read like "I'm talking about the title" to me.

Also, in other parts of the thread you take performance questions seriously,
further diluting the impression your post was about the title or phrasing.

~~~
qaq
The title presents handling 16K/sec as something special. My only original
point was that the author is using per-minute metrics to make the numbers
look more impressive. It's very possible that I did a very poor job of making
that point :). My comment that I'm sure Go is more performant than Node was
being downvoted, so that took things in a different direction.

------
tuananh
when i read "blah blah with Go", i know for sure people are going to discuss
Node =))

