

HTTP Post Denial Of Service: More dangerous than initially thought - ssclafani
http://www.acunetix.com/blog/web-security-zone/articles/http-post-denial-service/

======
tptacek
This attack seems just as straightforward to address in a reverse proxy as
"slowloris" (which was, despite the publicity, itself an attack known for many
years prior to its announcement).

Two things remain true:

It is straightforwardly possible to bring a conventional full-featured web
server or app server to its knees with leveraged attacks (in which target
pain greatly exceeds attacker effort).

 _and_

This doesn't much matter, since even stupider attacks do just as much damage.
For instance, during the 2002 Winter Olympics, which Arbor Networks was
contracted to do anti-DDoS for, attackers simply got tens of thousands of
machines to make vanilla browser HTTP requests to targets.

If anything, an attack like this "more dangerous than initially thought" one
is _easier_ to defend against than the dumbest attacks; it involves unusual
patterns of POST requests, unusually large content-lengths, and unusually
long-lived connections, all of which can easily make a connection a candidate
for a random early drop queue.

The fact that tools may not immediately exist to counter this kind of attack
is probably not an indication of how hard those countermeasures are to build.

~~~
peterwwillis
The countermeasures you can use today to counter web server resource
starvation attacks are:

    
    
      * Turn off keep-alives
      * Turn MaxRequestsPerChild/Thread down to a minimum
      * Set your timeouts low
      * Set your MaxClients/Threads to a number your server can handle
      * Enforce LimitRequest* settings
      * Enforce CPU runtime, memory, and other limits on CGI to prevent an overly zealous set of web apps from overloading your boxes
      * Don't accept POST and other methods if you don't need them, and enforce a maximum POST size in your web apps
      * Play with mod_antiloris, mod_qos and mod_evasive and see which works best for your application
      * Use a single iptables rule to drop more than X connections at a time from any host (see the sketch below)
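
For that last item, a minimal sketch using iptables' connlimit match (the
threshold of 20 is purely illustrative; tune it for your traffic):

    iptables -A INPUT -p tcp --syn --dport 80 \
        -m connlimit --connlimit-above 20 -j DROP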
    

You're right that a stupid attack like a plain old DDoS using normal machines
to make normal requests is probably the most effective. After blocking Tor and
anonymous proxies there are only a few other ways to block a large number of
different addresses from flooding your site, usually packet analysis on the
average number of connections per host and on repeatedly requested URIs, but
those aren't as trivial to implement as the settings above.

~~~
SageRaven
Any idea if "accept filters" will help mitigate this attack? See "man
accf_http" on FreeBSD; not sure what the equivalents for other operating
systems may be.

I know that varnish, Apache, and nginx support this feature. Speaking of...
does having a reverse proxy like varnish help in these kinds of attacks?
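
(For reference, enabling it on FreeBSD with Apache looks roughly like this, if
I remember right:)

    kldload accf_http               # load the HTTP accept filter into the kernel
    # then in httpd.conf:
    AcceptFilter http httpready     # hold the socket in the kernel until a full request is buffered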

~~~
peterwwillis
Actually, I wasn't aware of such a feature; it's kind of neat. But no, that
won't help with this attack (the filter works only on HEAD and GET methods,
and this attack is about the content sent after a POST).

It looks like this filter deals with reducing the CPU required to handle the
request portion of an initial HTTP connection. I suppose it could help keep
the server from being starved of resources by an attack that consists solely
of slowly transmitting the HTTP request itself, but that could be mitigated by
a 5 or 10 second timeout on receiving the initial HEAD/GET request from the
client (which should be reasonable even for dial-up and mobile connections).
And I personally haven't ever seen a DoS attack on a proxy layer that ate up
more CPU than any other resource, so I don't see this filter as a significant
advantage over the basic connection tuning you can do on the proxies.

Oh, and as tptacek mentioned, a reverse proxy like varnish can definitely help
in different DoS scenarios. I haven't used varnish in the real world so I
can't speak to specific advantages, but in general, dedicated reverse proxies
are a great way to offload initial processing, provide more resources for a
high number of connections and deal with host-level attack mitigation so the
app servers don't have to waste time on it.

------
jws
Original paper: <http://www.owasp.org/images/4/43/Layer_7_DDOS.pdf> (well,
slide deck)

§ Summary

➊ The _slowloris_ attacks of last year consumed all of a host's available HTTP
server connections with minimal bandwidth by trickling out HTTP headers
forever at a slow pace. Countermeasures were developed and applied.

➋ This exploit consumes connections similarly, but does so by trickling out
data in the content section of a POST request. Countermeasures targeting
_slowloris_ are ineffective.

➌ Countermeasures for this will include HTTP request timeouts, but to support
distant users operating over potentially saturated links you will have to set
the timeout high enough that any well-connected attacker will still be able to
kill your server. (That involves some extrapolation on my part.)

§ Discussion

The real solution will be to not consume a process or other expensive resource
until the entire POST request has been received by the server. (Extend that to
PUT and any other method with content in the request.)

This becomes difficult with HTTP/1.1. To really support 1.1 you have to give
the application the option to reject the request before the content is sent.
(Yes, you other old farts, the client doesn't necessarily send the content
blindly; it can wait for your server to say "ok, the method, resource, and
headers look good, send me the content".) I suppose you could force all those
decisions to be made by the HTTP server itself, but that would be sad.
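
The mechanism in question is the Expect: 100-continue handshake; a simplified,
made-up exchange looks roughly like this:

    POST /upload HTTP/1.1
    Host: www.example.com
    Content-Length: 1048576
    Expect: 100-continue

    HTTP/1.1 100 Continue                   <- "looks good, send me the content"
    ...client now transmits the 1 MB body...

    or, if the server decides up front it doesn't want the body:

    HTTP/1.1 413 Request Entity Too Large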

~~~
pyre
This seems easy to defend against for sites that don't accept a lot of POST
data. You could set a really low threshold for content-length. You could maybe
even get fancy and only allow large POST data on certain URLs (e.g. low POST
content-length threshold on www.example.com, but a higher threshold on
admin.example.com; assuming that you can reject the connection as
unauthenticated before receiving the data).
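
A hedged sketch of what that could look like in nginx, if that's your front
end (hostnames and sizes are made up):

    server {
        listen 80;
        server_name www.example.com;
        client_max_body_size 1k;      # public site: next to no POST data accepted
    }
    server {
        listen 80;
        server_name admin.example.com;
        client_max_body_size 10m;     # admin vhost: larger uploads allowed
    }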

~~~
andrewmccall
This makes perfect sense. If, for example, you're accepting 140 characters,
there is no reason to allow an excessive content-length for that URL.

------
eis
This is not as dangerous as the authors make it sound. Just limit the maximum
number of connections per IP (e.g. via mod_evasive in Lighty, or with a
firewall like iptables) and voila, attack voided.

This is very similar to what Slowloris does, but Slowloris is much more
intelligent because it slows the connection down before all the request
headers have been sent, and therefore many modules will not have executed
their limit checks yet.

"What’s special about this denial of service attack is that it’s very hard to
fix" "Therefore, to properly fix it would mean to break the protocol"

Not at all, as outlined above.

From the paper: "In HTTP/1.1 where chunked encoding is supported and there is
no “Content-Length” HTTP header, the lethality is amplified."

No, it changes exactly nothing regarding this attack. Why would it?

I do not know why these guys make such a fuss about something so trivial and
obvious to anyone dealing with HTTP servers. But sadly this is quite common in
the security industry.

~~~
dangrossman
All they need is a botnet of at least 256 computers, which can be rented for a
small fee off any number of shady forums. Now what are you going to do?

A much better solution is to put a proxy in front that can hold lots of
connections very cheaply. nginx and other evented servers can hold 10,000
trickling connections without using significant resources.
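
Something along these lines (the upstream address and timeouts are made up,
not a recipe):

    upstream app {
        server 10.0.0.10:8080;
    }
    server {
        listen 80;
        client_header_timeout 10s;    # time allowed for the request line + headers
        client_body_timeout   10s;    # time allowed between successive body reads
        client_max_body_size  1m;     # cap the advertised Content-Length
        location / {
            proxy_pass http://app;
        }
    }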

~~~
eis
If you have a botnet attacking you, then it's a whole different story anyway.
A botnet can take your services down easily regardless of what attack it uses
or what protocol is in play (HTTP, FTP, SMTP, you name it), so the botnet
argument does not really hold here.

A proxy in front does not help if it does not limit per-IP connections, since
an attacker can just use up all of its file descriptors or connection slots.
It ups the ante a bit, but not much: 10k connections can very easily be
created from any DSL/cable connection, let alone a server in a datacenter.

~~~
btilly
For most websites, sure. But the big boys have to deal with botnets all the
time, and (mostly) manage to remain up. They need to figure this sort of thing
out.

~~~
eis
It still is not specific to this attack.

~~~
btilly
Their problem is still not specific to this attack. But they need to figure
out how to handle every kind of attack that will come, _including_ this one.

------
nodata
(This has been known about for ages.)

~~~
jws
I agree. I've been killing servers with it since the '90s. The current
interest comes from it (possibly) arriving as an attack tool in botnets and
not being mitigated by current anti-DDoS methods.

It is time to cope with it.

~~~
tptacek
Yeah, because existing anti-DDoS methods are _so effective_ at "conventional"
DDoS attacks. Uh-huh.

------
chc
If this became a problem, couldn't we just have the server do triage? Like,
monitor the speed of open connections, and when we're nearing saturation, tell
some percentage of the absurdly slow connections (maybe with some heuristics
to detect more likely attack patterns), "Sorry, we can't support you at this
time." If we're going to deny service to someone, it may as well be the
problem clients.
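
Rough sketch of what I mean, in Python (nothing here is tied to a real server
API; the names and thresholds are made up):

    import random

    def pick_connections_to_drop(connections, max_conns,
                                 near_full=0.9, drop_fraction=0.25):
        """connections: list of dicts with 'bytes_received' and 'age_seconds'."""
        if not connections or len(connections) < near_full * max_conns:
            return []  # plenty of headroom, leave everyone alone
        # Rank by effective transfer rate; the slowest look the most like attackers.
        ranked = sorted(connections,
                        key=lambda c: c['bytes_received'] / max(c['age_seconds'], 1))
        slowest = ranked[:max(1, len(ranked) // 2)]
        how_many = max(1, int(drop_fraction * len(slowest)))
        return random.sample(slowest, how_many)  # close these with a polite error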

------
snorkel
Use Apache's LimitRequest* directives:
<http://httpd.apache.org/docs/2.0/mod/core.html#limitrequestbody>
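
For example, something like this (the path and sizes are illustrative):

    LimitRequestBody 10240            # 10 KB cap by default
    <Location "/upload">
        LimitRequestBody 10485760     # 10 MB only where big POSTs are expected
    </Location>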

~~~
eis
This suggestion is impractical, because the attack works even with a fairly
small Content-Length. Say the limit is 10,000: sending 1 byte per second
restricts you to 10 KB uploads but still gives each connection nearly three
hours to run. And since most I/O timeouts are per read and fairly generous,
even with a limit of 1,000 bytes you can trickle data slowly enough to keep a
connection alive for hours before hitting it.

The only real solution is to limit connections per IP, and maybe defining a
minimum transfer speed can help too.
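
For the minimum-speed part, something like Apache's mod_reqtimeout could work,
if it's available to you (the numbers are just an example):

    # allow 20s to start, then demand at least 500 bytes/sec for headers and body
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500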

------
xxi
acunetix.com

