
Mitigating DoS Attacks with Nginx - Garbage
https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
======
mmaunder
I love nginx. But most of these are mitigations for DoS only targeting the web
server, not DDoS. They're explaining how to throttle a single IP. The whole
point of DDoS is to distribute the attack to bypass mechanisms that throttle
single IPs (and to amplify the attack).
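For reference, the kind of per-IP throttling the article describes can be
sketched like this (zone name, rate, and path are illustrative):

    # key the limit on the client address; 30 requests/minute per IP
    limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

    server {
        location /login.html {
            limit_req zone=one;
        }
    }

Since the limit is keyed on a single client address, a distributed attack
walks right around it.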

Also the DDoS attacks that we've been hit with actually target our uplinks by
saturating them with traffic, not our services. We have a 1 Gbps port and the
last DDoS we were hit with was over 20 Gbps, which is a relatively small one.
The mitigation we used was to have our hosting facility get their upstream
provider to route the traffic through a layer 7 DDoS mitigation filter
provided by an external company. It worked wonderfully.

These are cool features, but when your link is saturated it doesn't matter
what a daemon listening on a port does.

~~~
codinghorror
And once IPv6 gets up a head of steam, welcome to a world of people with
millions of "addresses" to attack from. One advantage of IPv4 is that it was
accidentally pretty granular.

That'll be a while, of course, but already we see attackers with access to a
tremendous number of unique IP addresses in the IPv4 space... they'll have many
orders of magnitude more soon.

~~~
dspillett
_> welcome to a world of people with millions of "addresses"_

... in the same /64 range for the most part, so as easy to block/filter/limit
as one IPv4 address.

You risk inconveniencing people who are assigned just a few addresses because
you potentially end up blocking many of them due to the actions of a few on
the same subnet, but you can't be held responsible for hosts/ISPs doing IPv6
wrong.
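Blocking at that granularity is simple in nginx; a minimal sketch, using a
documentation-prefix range as a stand-in for the offending subnet:

    location / {
        # deny an entire /64 -- roughly equivalent to denying one IPv4
        # host, since most ISPs assign at least a /64 per subscriber
        deny 2001:db8::/64;
        allow all;
    }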

~~~
icebraining
_You risk inconveniencing people who are assigned just a few addresses because
you potentially end up blocking many of them due to the actions of a few on
the same subnet, but you can't be held responsible for hosts/ISPs doing IPv6
wrong._

Not to mention that some ISPs do carrier-grade NAT specifically due to the
limitations of IPv4, so blocking a single IP(v4) might affect multiple people
as well.

------
nodesocket
Here is a sweet trick for dropping traffic with NGINX. When I say drop, I mean
don't send a response: literally terminate the connection.

        location = / {
            if ($http_user_agent ~* "foo|bar") {
                # return non-standard (NGINX-only) 444 code, which
                # closes the connection without sending any response
                return 444;
            }
        }

~~~
NetStrikeForce
If I'm not mistaken, the above implies accepting a TCP connection (3-way
handshake) and a request from the attacker; then you look up the contents of
the user-agent header and decide to stop replying based on its contents.

This will get you nothing in a typical DDoS scenario, but thanks for sharing,
as it may come in handy in other situations.

~~~
creshal
Filtering like that also has a rather big performance impact on the nginx side
and makes things worse under moderate, but not DDoS-y, traffic conditions.
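A cheaper pattern (a sketch; variable name and patterns are illustrative) is
to do the matching once in a `map`, which nginx evaluates lazily, rather than
running a regex `if` inside the location:

    # in the http context: classify the user agent once per request
    map $http_user_agent $blocked_agent {
        default     0;
        "~*foo"     1;
        "~*bar"     1;
    }

    server {
        location = / {
            # test a precomputed variable instead of evaluating
            # a regex inside the location block
            if ($blocked_agent) {
                return 444;
            }
        }
    }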

------
joosters
You can't solve a traditional DDOS attack at the destination. No matter what
you do with the incoming flood, the fact is, your bandwidth is full of the
attack requests, leaving no room for legitimate traffic. Do what you like to
the attack requests, but the pipe is still full.

To mitigate a DDOS, you need to go upstream to your network providers and
filter out the traffic before it reaches you.

~~~
icebraining
True, but not all DDoS involve bandwidth saturation, since if the site has a
decent pipe, those are harder to achieve. Resource depletion based on forcing
the server to perform heavy tasks is common.
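For that class of attack the article's techniques do apply, e.g. capping
concurrent connections per client on expensive endpoints (a sketch; zone size,
path, and limit are illustrative):

    # in the http context
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /search/ {
            # at most 10 simultaneous connections per client IP
            limit_conn addr 10;
        }
    }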

------
nodesocket
Nginx is great, and I absolutely love it. However, if you're under a true DDoS
attack, the box is going to be completely bogged down at the kernel level way
before traffic is even close to being accepted and processed by NGINX. So this
post is not very useful against a decent magnitude attack.

------
DanBlake
For all but the most basic attack, you really want a script putting these IPs
into iptables. Using the application itself to block them still requires the
connection setup/teardown resources to be used, as well as the application
itself.

~~~
hyperdunc
I can see how doing it lower level would be more efficient. Are there any
scripts anyone could recommend as a starting point?

~~~
SwellJoe
fail2ban can watch the nginx logs for throttling and/or blocking messages and
add iptables rules for you.

I haven't read this all the way through, but on a cursory glance it looks
reasonable:

[https://easyengine.io/tutorials/nginx/fail2ban/](https://easyengine.io/tutorials/nginx/fail2ban/)

~~~
XorNot
Urgh, logwatching actively pains me these days. So much wasted string parsing
of what was originally binary data anyway.

I'm starting to think that we need some agreement where instead of logs, we
just get apps to emit a stream of protocol buffers and a format string for the
messages and data.

Which does make me wonder if you couldn't LD_PRELOAD something which replaced
fprintf and the like...

~~~
superuser2
Streams of plain text are what UNIX was built on. If you want binary APIs,
look outside the *nix family.

~~~
Ao7bei3s
Or modernize the applications. Throw out the ad-hoc formats and parsers,
replace them with machine-readable equivalents.

For example, systemd finally provides a logging system that allows structured
logging with key/value fields.

------
mgo
If someone wants to take you down they'll just bombard you with traffic, and
this won't help you there. Having been the victim of several DDoS attacks over
the years, I can say almost none of them were at the application layer.

~~~
aianus
Cloudflare, for example, is good at preventing non-application-layer DDOS
attacks but for application-layer they can't help much.

This blog post is a good starting point for the kinds of strategies you need
to fill that gap in protection.

~~~
rodionos
We tried it, setup was easy, but our response time for dynamic content
increased by 150 millis, so it didn't work for us. It's worth noting that their
model is different from a traditional CDN's: they proxy all of your traffic
through their own servers.

~~~
sciurus
That's not atypical for a CDN these days; Fastly and CloudFront can work the
same way, e.g. [https://aws.amazon.com/cloudfront/dynamic-content/](https://aws.amazon.com/cloudfront/dynamic-content/).
How else do you expect them to cache and serve your dynamic content?

~~~
dorfsmay
I don't recommend it, but you could use different domains for static vs
dynamic content.

~~~
laumars
Some organisations do just that. But having your entire site behind a CDN does
have additional benefits besides mitigating DDoS attacks, such as allowing you
to handle other kinds of service outages more effectively (e.g. busy pages).
They can offer you analytics, allow you to separate different traffic under
the same domain name (sometimes handy for SEO), etc. Some CDN providers also
do some cool stuff like enabling IPv6 on your site even if your origin servers
are only running IPv4 - but that's more a niche time-saving feature than some
"must have" deal breaker.

~~~
rodionos
I like analytics if the price is less than 50ms per request. We use GA and
statcounter for analytics anyways. Charts that show how much static traffic
you saved are nice, but with bandwidth close to free, it's not a big deal. CDN
analytics would need to be better than GA, at which point I would not only
trade off latency but convert to premium all the way.

~~~
laumars
> _I like analytics if the price is less than 50ms per request. We use GA and
> statcounter for analytics anyways._

GA would cost you more than 50ms too. More so than CDN-controlled analytics.
But obviously that cost with a CDN is an upfront latency rather than the more
hidden cost of background-loading GA. So arguably GA's cost is less "bad" than
the CDN's cost.

Personally speaking, I prefer the CDN approach as it produces web pages with a
lower browser footprint, which I think does improve the user experience (though
I'm not implying that GA gives a bad user experience!).

GA does give a greater breadth of information than CDN analytics though. Often
that's the real deal breaker since analytics is usually driven by project
managers / clients rather than by the developers.

> _Charts that show how much static traffic you saved are nice, but with
> bandwidth close to free, it's not a big deal._

Oh it's definitely a big deal if you serve high traffic websites ;) I've spent
hours working against those kinds of reports on projects that were seeing 100k
concurrent users. I will say that these graphs aren't so much about judging
what _bandwidth_ can be saved but more about judging what _requests can be
offloaded_. The idea being that the fewer calls to your origin servers you
need to make, the more resources you have available in your farm for
generating the dynamic content (dynamic content you cannot cache!). This also
has the potential to save you money in server costs (depending on how they're
licensed) as well as improving site performance at peak times.

> _CDN analytics need to be better than GA at which point I will not only
> trade off latency but convert to premium all the way._

Indeed. GA will likely always be better from an account management
perspective. But as a devops engineer, CDN analytics fulfils my needs. The
great thing is that we have a multitude of options available :)

~~~
rodionos
Unfortunately, CDN analytics is no alternative to GA, so it's an either/or
kind of choice for us. Hence, with a full-proxy type of CDN, the latency is
additive.

~~~
laumars
I wasn't aware the point of our discourse was for me to sell your business
additional CDN services ;)

In all seriousness though, it might help to look beyond the very specific
setup of your present company when asking why other people opt for other CDN
services. But for what it's worth, I've not experienced the same degree of
latency issues with either Cloudflare or Akamai that you've described. And I
_have_ done extensive load tests.

------
Lanari
This is really helpful and practical. When an app starts getting more popular
and people start writing scrapers and things like that, they can sometimes
mistakenly send an insane number of requests, so this surely helps in those
cases, since it makes more sense to solve that at the level of the server, not
the app itself.

As for DDoS, I don't think there's any cheap solution for that...

------
ianamartin
I really wish Nginx didn't think it was so important to their business model
to hold the health-check feature hostage.

That's a dealbreaker for me at their prices.

~~~
dlecorfec
Take a look at Tengine, from Taobao. It's based on nginx.

~~~
nikolay
It is, just like OpenResty [0], but it lags severely behind recently - not
sure why.

Edit: Linked to Tengine in [1].

[0]: [http://openresty.org/](http://openresty.org/)

[1]: [http://tengine.taobao.org/](http://tengine.taobao.org/)

------
awqrre
All the characteristics of DDoS attacks listed seem trivial for an attacker to
defeat, e.g. by constantly spoofing IP addresses, etc.

