

Slowloris - the low bandwidth, yet greedy and poisonous HTTP client - signa11
http://ha.ckers.org/slowloris/
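For context, the technique behind Slowloris is simple: open many connections, send an incomplete request on each, then trickle a bogus header every so often so the server never times out and never frees the worker. A minimal sketch in Python (host, port, and timings are illustrative, not the tool's actual code):

```python
import socket
import time

def partial_request(host: str) -> bytes:
    # Deliberately incomplete: no terminating blank line, so the
    # server keeps waiting for the rest of the headers.
    return b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n"

def trickle_header(n: int) -> bytes:
    # One bogus header per keep-alive tick; still no final CRLF CRLF.
    return b"X-a: %d\r\n" % n

def hold_connection(host, port=80, ticks=3, interval=10):
    # Ties up one server worker for ticks * interval seconds.
    s = socket.create_connection((host, port))
    s.sendall(partial_request(host))
    for i in range(ticks):
        time.sleep(interval)
        s.sendall(trickle_header(i))
    return s
```

The attack repeats this across a few hundred sockets, which is enough to occupy every worker slot of a default Apache prefork setup from a single machine.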

======
forza
While interesting, this attack is old. A newer "variation" on this is with
HTTP POST instead of headers.

[http://www.darkreading.com/vulnerability-management/16790102...](http://www.darkreading.com/vulnerability-management/167901026/security/attacks-breaches/228000532/index.html)
<http://www.owasp.org/images/4/43/Layer_7_DDOS.pdf>
<http://www.owasp.org/images/4/43/Layer_7_DDOS.pdf>

~~~
signa11
yes, that is exactly right. Michal Zalewski and Adrian Ilarion reported it here:
<http://www.securityfocus.com/archive/1/456339/30/0/threaded>. Nevertheless,
it definitely _is_ very interesting

------
FooBarWidget
I dispute the notion that Slowloris's behavior should be considered an
"attack". If your server is hammered by tens of thousands of visitors who are
on slow links - say modems or 2G wireless links - then you will have the same
problem. The real problem, if you ask me, is the limited I/O concurrency that
multi-threaded and multi-process servers can sustain. Pretty much the only
practical solution for this right now is evented I/O.
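As a sketch of why evented I/O sidesteps the problem (standard-library Python, not any particular server's code): every pending connection is just a file descriptor registered with the event loop, so a client that trickles bytes occupies a few kilobytes of state rather than a whole thread or process.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # ephemeral port, for illustration
srv.listen(128)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

def serve_once():
    # One pass of the event loop: accept new sockets, drain readable ones.
    for key, _ in sel.select(timeout=0.1):
        if key.fileobj is srv:
            conn, _ = srv.accept()
            conn.setblocking(False)
            # A slow client costs only this registry entry, not a worker.
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)  # whatever the client dribbled in
            if not data:                   # connection closed: clean up
                sel.unregister(key.fileobj)
                key.fileobj.close()
```

A thousand Slowloris-style connections against this loop would simply be a thousand idle descriptors waiting in `select()`.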

The problem can also be solved by putting the web server behind an evented,
buffering reverse proxy that fully buffers both the request and the response.
This shields Apache from slow requests.
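A hypothetical nginx front-end for such a setup might look like the following (the upstream port and timeout values are illustrative assumptions; `proxy_request_buffering` requires a reasonably modern nginx):

```nginx
server {
    listen 80;
    client_header_timeout 10s;   # drop clients that trickle headers
    client_body_timeout   10s;   # and clients that trickle the body

    location / {
        proxy_pass http://127.0.0.1:8080;  # assumed upstream Apache
        proxy_buffering on;                # hold the response for slow readers
        proxy_request_buffering on;        # read the full request before proxying
    }
}
```

With this in place, Apache only ever sees complete, fast requests arriving over the loopback interface.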

Maybe some time in the future operating systems will be able to handle
millions of threads easily, but we're not there yet.

~~~
jamesaguilar
> The real problem if you ask me is the limited amount of I/O concurrency that
> multi-threaded and multi-process servers can have. Pretty much the only
> practical solution for this right now is by using evented I/O.

Why do you think that? What constraints do you believe limit the number of
threads you can use?

- An idle thread is essentially free.

- The memory used by an ongoing request should not be inherently different
between an evented server and a threaded server.

- The number of sockets an evented server can use is the same as the number
of sockets available to a threaded server.

If evented servers are handling this better, all it shows in my opinion is
that the threaded servers have been written incorrectly or else configured
incorrectly. But that is not an inherent benefit of evented servers. Please
correct me if there is a constraint I am missing.

~~~
FooBarWidget
> Why do you think that? What constraints do you believe limit the number of
> threads you can use?

Virtual memory address space and context-switching overhead. On 32-bit
platforms, if each thread has an 8 MB stack then after creating a few hundred
threads you run out of _VM address space_ even if you don't actually use that
much memory. Most OS schedulers also don't like dealing with tens of thousands
of threads. Furthermore, each kernel thread takes a small amount of kernel
memory, and kernel memory typically isn't swappable, unlike userspace memory.
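The back-of-the-envelope arithmetic behind the 32-bit claim, assuming the common 3 GB userspace split and the 8 MB default stack reservation:

```python
STACK_RESERVATION = 8 * 1024 ** 2   # default pthread stack size on many systems
USERSPACE = 3 * 1024 ** 3           # usable 32-bit user address space (3/1 split)

# Address-space reservation alone caps the thread count,
# regardless of how much physical RAM is actually touched.
max_threads = USERSPACE // STACK_RESERVATION
print(max_threads)  # 384
```

So "a few hundred threads" is literal: the cap lands under 400 before a single stack page is even faulted in.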

~~~
jamesaguilar
By context switching overhead, I assume you mean the overhead of faulting data
into the cache from a thread that had been sleeping. Evented systems have this
overhead too when switching between events related to different requests.

Stack size I've discussed in another response. It can and should be decreased
if you plan on working with a lot of threads.
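In Python terms (the equivalent knob in C is `pthread_attr_setstacksize`), shrinking the per-thread stack before spawning is a one-liner; the 256 KB stack and 1,000 threads below are illustrative numbers, not a recommendation:

```python
import threading

threading.stack_size(256 * 1024)  # 256 KB instead of the multi-MB default

def handle_request(n):
    pass  # stand-in for real per-connection work

threads = [threading.Thread(target=handle_request, args=(n,))
           for n in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

At 256 KB apiece, even a 32-bit address space accommodates an order of magnitude more threads than the 8 MB default allows.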

I can't comment on the Mac OS X scheduler. The Linux scheduler handles it just
fine, and my understanding is that the Windows one does OK with it too.

The small amount of kernel memory that isn't swappable likely isn't your
system's overall throughput constraint, but if it is I stand corrected.

------
someone_here
cleverjake: You're being censored. All your comments are dead on arrival.

~~~
drivebyacct2
And from the looks of it, for submitting a link to a YouTube video.

~~~
Estragon
This is a really disturbing characteristic of Hacker News moderation. The
people it happens to must feel like Bruce Willis in _The Sixth Sense_, when
they realize it. Why not let them know??

~~~
eli
There's a term for it, but I'm blanking on it.

I believe the idea is that it's for people who are legit trolls. If you let
them know, they'll just create another account. But if they think people have
just grown tired of their antics, they're more likely to move on to another
site.

~~~
pronoiac
I think you're looking for the term "hellban":
<http://www.urbandictionary.com/define.php?term=hellban>

------
liuliu
I guess this is the example why you should guard your own home-baked HTTP
server behind nginx or lighttpd.

~~~
marcinw
Right, because nginx and lighttpd aren't easier to exploit. /sarcasm

~~~
tkaemming
They (nginx, lighttpd) are more difficult to exploit, especially when used to
buffer requests to a heavier upstream server like Apache.

EDIT (Clarification):

They (nginx, lighttpd) are more difficult — although not impossible — to
exploit, especially when used to buffer requests to a heavier upstream server
like Apache.

Specifically, they are typically able to handle many more connections than
your application server could (as long as they are properly configured),
without incurring the resource overhead of your application server, by
buffering the HTTP request/response.

~~~
marcinw
Nice job editing your comment without saying so ("— although not impossible
—" and your last paragraph).

Anyway, your nginx/lighttpd server is more likely to be exploited and
compromised via an actual vulnerability than your Apache server is via a
slowloris-style attack. It's akin to putting a wide receiver in front of your
running backs...

~~~
FooBarWidget
I am not aware of Nginx having a bad security track record.

------
nailer
I wish they could have picked a better name - 'Slowlaris' (note the minor
spelling difference) has long been a colloquial name for the older, but more
common, versions of Solaris, owing to its badly performing IP stack.

That said, Slowloris wins a Googlefight these days:
[http://www.googlefight.com/index.php?lang=en_GB&word1=sl...](http://www.googlefight.com/index.php?lang=en_GB&word1=slowlaris&word2=slowloris)

------
nwmcsween
Apache has many different MPMs - you can use an evented MPM similar to how
nginx handles connections. Anyway, since this 'attack' makes a full TCP
connection, a simple fix would be to limit the number of full TCP connections
from one source... or, if you feel particularly evil, add the source to a
blacklist of tarpitted connections and then reverse-DoS the attacker.
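The per-source limit can be done at the firewall rather than in the HTTPd; a sketch using the iptables `connlimit` match (the threshold of 20 is an illustrative number):

```shell
# Reject a source IP's 21st concurrent connection to port 80 with a TCP reset.
iptables -A INPUT -p tcp --syn --dport 80 \
    -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```

This caps what any single address can tie up, though as noted elsewhere in the thread it doesn't help against an attacker with many addresses.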

------
medina
See also "A. Kuzmanovic and E. W. Knightly. Low-Rate TCP-Targeted Denial of
Service Attacks (The Shrew vs. the Mice and Elephants). In Proc. ACM SIGCOMM,
2003." and related papers --
<http://www.cs.northwestern.edu/~akuzma/rice/shrew/>

------
epoxyhockey
Anyone try this out on a Varnish setup?

~~~
epoxyhockey
I just tried this on my stock Varnish config and the site was still responding
to requests. So, Varnish can be added to the list of 'not affected.'

------
hackermom
It's important to note that you can protect against Slowloris not only with
the various Apache modules written specifically for Slowloris and for this
kind of HTTP attack, but also through Apache's native configuration settings
that (among other things) govern the number of simultaneous connections any
single IP is allowed to have. In terms of its effect on the HTTPd, Slowloris
itself is not much different from a script pulling data from the server using
curl or wget.

~~~
ZoFreX
I don't know how the Apache modules that guard against Slowloris work, but I
can think of a modified attack that would still work if connections per IP
were limited. You can't limit connections per IP to just one, as browsers will
pipeline requests, and they may have multiple tabs open, asynchronous
requests, etc. A limit of 256 is obviously very high, so let's say you set the
limit to 10 simultaneous requests per IP.

It's now impossible for a single user to Slowloris your webserver, but they
only need to get hold of 26 separate IP addresses to be able to do so once
more. Depending on your setup, this may be far fewer than they would need for
a naive DDoS attack.
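The 26 comes from dividing Apache's default worker count by the hypothetical per-IP cap:

```python
import math

WORKERS = 256      # Apache's default MaxClients
PER_IP_LIMIT = 10  # the hypothetical cap from above

# Addresses needed to occupy every worker slot despite the cap.
ips_needed = math.ceil(WORKERS / PER_IP_LIMIT)
print(ips_needed)  # 26
```
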

I think a way to mitigate both attacks would be to limit how long the client
can send headers for (and perhaps refuse connections for X amount of time if a
client repeatedly acts in a way that appears malicious, but that's possibly
beyond the scope of "native configuration").
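Apache does ship exactly this knob in `mod_reqtimeout` (bundled since 2.2.15); a sketch with illustrative numbers:

```apache
<IfModule reqtimeout_module>
    # Allow 20s to send all headers, extended by 1s per 500 bytes
    # received, capped at 40s; enforce a similar minimum rate for the body.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```

A client that trickles one header line every 10 seconds then gets cut off regardless of how many IPs it comes from.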

~~~
IgorPartola
First off, as others have pointed out, you can successfully run thousands of
threads, so the default of 256 does not mean much. Also, HTTP pipelining is
not what you think it is: <http://en.wikipedia.org/wiki/HTTP_pipelining>

~~~
ZoFreX
I know - I was one of those people. I just wasn't sure whether Apache can do
that well; I know of other servers that can. And yes, you are correct - that's
what I get for posting when tired. I meant simultaneous requests!

~~~
IgorPartola
Happens to me too. I don't function correctly when I'm tired.

