
Limiting Concurrency in Node.js - davidw
http://journal.paul.querna.org/articles/2010/09/04/limiting-concurrency-node-js/
======
tav
This is a really bad idea. Please don't do this in production settings — it'll
open you up to really easy denial-of-service attacks. See the classic
Slowloris attack for a case study <http://en.wikipedia.org/wiki/Slowloris>

~~~
pquerna
No, it's not a bad idea.

The article is fundamentally about hitting limited backend resources.

Most application servers shouldn't have to deal with Slowloris attacks. Your
load balancers will, your frontend proxy will, but not your application
servers.

Letting your application servers transfer this load deeper into your
architecture is a bad thing. It's even harder to debug the farther you get
from the web request. The right place to mitigate and detect is in your load
balancing or reverse proxying layers -- most production Node.js deployments I
know of today still sit behind nginx or Apache proxies.

~~~
tav
The primary example you give in the article is that of a reverse proxy. I have
trouble seeing how that equates to an "application server" — but let's put
that aside as a difference of perspective. Attacks like Slowloris are very
much based on exploiting rate-limiting vulnerabilities in the application
layer.

And rate limiting by accepting less — as neat as your Node.js hack is —
explicitly opens Node.js up to this vulnerability. The vast majority of load
balancers will just pass the request on to Node.js. Not to mention the bizarre
nature of a setup where you "push back" to a load balancer sitting in front of
a rate limiter. If your backend resource can only handle connection rates
lower than what an individual Node.js server can process, then why is there a
load balancer in front at all?

Even just looking at the situations where one has Apache/nginx sitting in
front of the Node.js server — both of them will happily pass on all Slowloris-
like requests. So if you had maxClients set to 100 and an attacker decided to
spend 30 minutes holding 100 requests open, the rest of your users would be
shit out of luck for the duration.
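The starvation scenario above can be modelled in a few lines. This is a toy sketch, not anything from the article: `makeAcceptLimitedServer`, the cap of 3, and the handler shapes are all illustrative, standing in for a server whose only defence is refusing to accept more connections.

```javascript
// Toy model of the starvation described above: if the server's only
// limit is a cap on accepted connections (here maxClients = 3), an
// attacker who holds that many slow requests open starves everyone else.
// All names and numbers are illustrative.
function makeAcceptLimitedServer(maxClients) {
  let active = 0;
  const backlog = [];
  return {
    // handler receives a done() callback to signal the request finished
    request(handler) {
      if (active < maxClients) {
        active++;
        handler(() => { active--; });
      } else {
        backlog.push(handler);   // connection is not even accepted yet
      }
    },
    backlogSize() { return backlog.length; },
  };
}

const server = makeAcceptLimitedServer(3);

// The attacker opens 3 requests and simply never finishes sending them.
for (let i = 0; i < 3; i++) {
  server.request((done) => { /* trickles bytes forever, never calls done() */ });
}

// A legitimate user's request now sits in the backlog indefinitely.
let served = false;
server.request((done) => { served = true; done(); });
console.log(served, server.backlogSize());  // false 1
```

The point of the model: nothing distinguishes the attacker's slow requests from legitimate slow clients at accept() time, so the cap is consumed before any request content can be inspected.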

The initial approach you took to rate limiting would've prevented this
vulnerability: accept and parse the request before queueing it for access to
the limited backend resource. But by limiting accept(), slow requests can
quite easily bring your system to a halt. You could've avoided this by doing
the rate limiting _after_ the request has been accepted and parsed.
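The accept-then-queue approach suggested here can be sketched as a small concurrency limiter. This is a minimal illustration, not code from the article: `makeLimiter`, the cap of 2, and `hitBackend` are hypothetical names, assuming the goal is to cap simultaneous use of a limited backend resource rather than to cap accepted connections.

```javascript
// Sketch of rate limiting *after* accept/parse: every request is fully
// received, then access to the limited backend is queued behind a small
// concurrency cap. Names and the cap value are illustrative.
function makeLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];

  function runNext(task) {
    active++;
    task(function release() {
      active--;
      if (queue.length > 0) runNext(queue.shift());
    });
  }

  return function schedule(task) {
    if (active < maxConcurrent) runNext(task);
    else queue.push(task);   // the request is already parsed; it just waits
  };
}

// In an http.createServer handler you would call schedule() only after
// req has emitted 'end', so a slow sender ties up one socket rather than
// one of the limited backend slots:
//
//   req.on('end', () => schedule((done) => hitBackend(res, done)));
```

With this shape, a Slowloris-style slow sender occupies only its own connection while trickling bytes in; the backend concurrency slots are claimed only once a request has arrived in full.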

I apologise if I haven't been as clear in my explanation as I could've been —
but I would strongly recommend against rate limiting by accepting less. Please
don't do it in production. Thank you.

