
10K+ concurrent requests to a single server sounds like a server that is mostly shuffling bits around. I assume 150 concurrent requests is more realistic for a frontend server which actually does something.

Also note the speaker is the CTO of a CDN (fast.ly); I'm guessing he has experience with large numbers of concurrent requests as well :)

10k requests/s is the standard I expect to achieve with an HTTP load balancer (HAProxy or nginx) without any challenge.

It may be less with HTTPS, depending on how efficient the CPU is at the cryptographic algorithms and how many cores it has.
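A back-of-envelope sketch of why HTTPS lowers the ceiling; every number here is an illustrative assumption, not a measurement:

```python
# Back-of-envelope only — all numbers below are assumptions, not benchmarks.
# Full TLS handshakes are CPU-bound (e.g. RSA-2048 signing), so the rate of
# *new* HTTPS connections can fall well below a plaintext 10k req/s figure.
rsa_sign_per_core = 1_000   # assumed handshakes/s one core can sign
cores = 8                   # assumed core count
max_new_tls_per_s = rsa_sign_per_core * cores
print(max_new_tls_per_s)    # 8000 new HTTPS connections/s under these assumptions
```

Session resumption and ECDSA certificates change these numbers substantially, which is why "depends on the CPU" is the honest answer.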

fastly is a CDN. I assume they're talking about the performance of their CDN servers, not their customers' web servers, which actually process the request and are indeed a lot slower.

Yeah, I guess I'm mostly thinking of a typical REST API workload where you just query a database, maybe do some small transformations on the data, and then send it to clients.

Also, it'd have to be a server with good async support like Node.js, Go, Haskell, Scala, Tornado, nginx...
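A minimal sketch (stdlib asyncio, not any particular framework above) of why async servers can hold thousands of in-flight requests: each request mostly waits on I/O, so one event loop can interleave them all.

```python
import asyncio
import time

async def handle(i):
    # Stands in for an I/O-bound request: a database/backend round-trip.
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.perf_counter()
    # 1000 concurrent "requests" complete in roughly 0.1s of wall time,
    # not 100s, because they all wait concurrently on the same loop.
    results = await asyncio.gather(*(handle(i) for i in range(1000)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s")
    return len(results), elapsed

count, elapsed = asyncio.run(main())
```

A thread-per-request server would need 1000 threads to do the same; the event loop needs one.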

Considering that the speaker is CTO at a CDN company (which has to deal with many different kinds of back-ends outside of their control), it makes sense that they would need an algorithm which can handle all possible scenarios: they can't force their customers to use bigger servers and more efficient systems.

> 10K+ concurrent requests to a single server sounds like a server that is mostly shuffling bits around.

Isn't that all a load balancer is supposed to do? I certainly don't want my load balancers performing computations or logic. I want it to pass that work off to another server.

If you actually intend to serve traffic back to the original requestor, you can't have too many requests per server because you have a limited amount of bandwidth. On a 1 Gbps pipe (which is what you get on, for instance, a single ELB instance), 150 requests per server only gives you about 813 KB/s per request. At 10,000 requests you'd be down to 12 KB/s per request. For comparison, the average web page size is now up to 2048 KB!

To be clear, you can do a lot better with a better pipe, smart caching, compression, etc. But people often have horribly unrealistic estimates about how much traffic their servers can handle because they don't take bandwidth into account, and load balancers are no exception.
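The arithmetic behind those figures, as a quick sanity check:

```python
# Per-request bandwidth on a saturated 1 Gbps pipe, split evenly.
LINK_BPS = 1_000_000_000          # 1 Gbps
link_kib_s = LINK_BPS / 8 / 1024  # ~122,070 KiB/s usable at absolute best

for n in (150, 10_000):
    print(n, "concurrent ->", int(link_kib_s / n), "KB/s each")
# 150 concurrent -> 813 KB/s each
# 10,000 concurrent -> 12 KB/s each
```

Real numbers are lower still once you subtract TCP/TLS/HTTP overhead, so these are optimistic upper bounds.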

The HTML for an average web page is not 2048 KB.

The average web page size is more like 2331 KB now: http://www.httparchive.org/interesting.php?a=All&l=Apr%201%2.... (I didn't say anything about HTML, by the way.)

Of course, when you break it down by individual web request, most responses are still below 800 KB, but you shouldn't plan capacity for the average case. And clearly even the average case is well above 12 KB, especially for a CDN (which is responsible for serving the image, video, and large script content). I'm also pretty confident the page I linked already includes compression, which decreases size but can increase time quite a bit; many people expect software load balancers to be using the absolute fastest compression available, but in my experience that's often not the case.

But most of those resources are static and cacheable.

Sure, if you actually set the headers correctly and the browser actually retains them (recall that browsers are free to discard their caches at any time). And if they can't be cached on the user's computer, who caches them? That's right: the CDNs. Like fast.ly. Who wrote the article.
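"Set the headers correctly" boils down to a check like this; the header names are real HTTP, but the helper function is a made-up simplification of what a shared cache (like a CDN) actually does:

```python
def is_cacheable_by_shared_cache(headers):
    """Naive sketch: can a shared cache (CDN) store this response?"""
    cc = headers.get("Cache-Control", "")
    if "no-store" in cc or "private" in cc:
        return False  # private responses are off-limits to shared caches
    return "max-age" in cc or "ETag" in headers or "Expires" in headers

static_asset = {"Cache-Control": "public, max-age=31536000, immutable"}
api_response = {"Cache-Control": "private, no-store"}
print(is_cacheable_by_shared_cache(static_asset))  # True
print(is_cacheable_by_shared_cache(api_response))  # False
```

Real caches implement far more of the Cache-Control grammar than this, but the point stands: cacheability is opt-in via headers, and lots of sites get it wrong.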

Regardless, the small pipe on your web server isn't serving the assets.
