
Ask HN: How do you handle DDoS attacks? - dineshp2
For owners of small websites running on DigitalOcean, GCP or AWS, how do you handle DDoS and DoS attacks?

For context, while exploring the load testing tool Siege running on a VPS, I was able to bring down multiple sites running on shared hosting, and some running on small VPSes, by setting a high enough number of concurrent users. This is not a DDoS, but it goes to show how easy it is to cause damage. Note: I only brought down sites that I own, or those of friends with their permission.

What tools are useful in fighting DDoS attacks and script kiddies? Mention free and paid options.

What are the options to limit damage in case of an attack? How do you limit bandwidth usage charges?

There was a previous discussion on this topic 6 years ago: https://news.ycombinator.com/item?id=1986728
======
buro9
I've faced DoS attacks for years as I run internet forums.

The simple advice for layer 7 (application) attacks:

1. Design your web app to be incredibly cacheable

2. Use your CDN to cache everything

3. When under attack, seek to identify the site (if you host more than one)
and page that is being attacked. Force cache it via your CDN of choice.

4. If you cannot cache the page, then move it.

5. If you cannot cache or move it, then have your CDN/security layer of
choice issue a captcha challenge or similar.

The simple advice for layer 3 (network) attacks:

1. Rely on your security layer of choice; if it's not working, change vendor.

On the L3 stuff, when it comes to DNS I've had some bad experiences (Linode,
oh they suffered) some pretty good experiences (DNS Made Easy) and some great
experiences (CloudFlare).

On L7 stuff, there are a few things no one tells you about... like if your
application is backed by AWS S3 and serves static files, the attack can be on
your purse, as the bandwidth costs can really add up.

It's definitely worth thinking about how to push all costs outside of your
little realm. A Varnish cache or an Nginx reverse proxy with a file system
cache can make all the difference by saving your bandwidth costs and app
servers.
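One way to sketch the Nginx variant of this idea (every name, path, and size here is an illustrative assumption, not the commenter's setup):

```nginx
# Goes in the http {} context: a file-system cache in front of the app.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:50m
                 max_size=10g inactive=7d use_temp_path=off;

server {
    listen 80;
    location / {
        proxy_cache appcache;
        proxy_cache_valid 200 301 5m;
        # Serve a stale copy instead of hammering a struggling origin.
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://app_backend;  # assumed upstream name
    }
}
```

During an attack the cache absorbs repeated hits to the same URLs, so the app servers and the metered bandwidth behind them are largely spared.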

I personally put CloudFlare in front of my service, but even then I use
Varnish as a reverse proxy cache within my little setup to ensure that the
application underneath it is really well cached. I only have about 90GB of
static files in S3, and about 60GB of that is in my Varnish cache, which means
when some of the more interesting attacks are based on resource exhaustion
(and the resource is my pocket), they fail because they're probably just
filling caches and not actually hurting.

The places where you should be ready to add captchas, as they really are uncacheable:

* Login pages

* Shopping Cart Checkout pages

* Search result pages

Ah, there's so much one can do, but generally... designing to be highly
cacheable and then using a provider who routinely handles big attacks is the
way to go.

~~~
Kenji
Uh, stupid question, but how do you cache a website like, for example, this
comment thread on Hacker News? Suppose a DDoSer requests this comment thread a
lot of times. The request has to go through to the server, because when I hit
F5 or post a comment myself, I see the comments in real time. How do you
handle that exactly? Does caching for a few seconds already help, or does the
backend push updated pages to the CDN server? I have no experience in DDoS
mitigation.

~~~
buro9
Cache everything a guest accesses for 5 minutes or more. Vary on the specific
cookie that represents a signed-in user.

None of my guests have noticed this, and it has increased most of my analytics
numbers as my pages are faster too.

The signed-in users, they get the dynamic pages.

But now the cookie that identifies the user is what you use to correlate any
attack traffic: the attacker is forced to (somewhat) identify themselves, and
you can then revoke their authentication status or ban the account.

Finally you captcha and/or rate-limit the login page.

This is effectively what I do on my sites: the pages themselves and the
underlying API all cache if the cookie or access token is absent.

This is trivial to do within the code, but can be harder to do with the
CDN/security layer (who need to support a "vary on cookie" or "bypass cache on
cookie" or equivalent).
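The application-side half of this might look roughly like the sketch below (framework-agnostic Python; the cookie name "session" and the 5-minute TTL are illustrative assumptions): guests get CDN-cacheable responses, signed-in users bypass the cache.

```python
# Decide cacheability from the session cookie, as the comment describes.
# "session" is a hypothetical cookie name; the 5-minute guest TTL matches
# the "5 minutes or more" advice above.
from http.cookies import SimpleCookie

GUEST_TTL = 300  # seconds

def cache_headers(cookie_header: str) -> dict:
    cookies = SimpleCookie(cookie_header or "")
    if "session" in cookies:
        # Signed-in user: always dynamic, never cached by the CDN.
        return {"Cache-Control": "private, no-store", "Vary": "Cookie"}
    # Guest: let the CDN cache it; Vary on Cookie keeps signed-in pages separate.
    return {"Cache-Control": f"public, max-age={GUEST_TTL}", "Vary": "Cookie"}
```

The CDN or reverse proxy then honors these headers, so only requests carrying the session cookie reach the application.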

------
wesleytodd
My startup's site gets DDoSed about once a week. We have seen a huge range of
attacks, from UDP floods, to WordPress pingback attacks, to directed attacks
on our services.

We have many layers of protection:

* We run iptables and an API we wrote on our ingest servers. We run fail2ban on a separate set of servers. When fail2ban sees something, we have it hit the API and add the iptables rules. This offloads the CPU cost of fail2ban from our ingest servers.

* We block groups of known hosting company IP blocks, like DigitalOcean and Linode. These were common sources of attacks.

* Our services all have rate limits, which we throttle based on IP.

* We have monitoring and auto-scaling which responds pretty quickly when needed, with service-level granularity.

* Recently we moved behind Cloudflare, because Google Cloud did not protect us from attacks like the UDP floods, which didn't even reach our servers.

EDIT: formatting
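The fail2ban-to-API flow described above might be sketched like this in Python (the function names, chain, and dry-run default are assumptions, not their actual code):

```python
# Central ban service sketch: fail2ban on a separate host reports an offender,
# and the ingest servers apply the drop rule locally.
import ipaddress
import subprocess

def build_ban_rule(ip: str, chain: str = "INPUT") -> list:
    """Validate the IP and build the iptables command that drops its traffic."""
    ipaddress.ip_address(ip)  # raises ValueError on malformed input
    return ["iptables", "-I", chain, "-s", ip, "-j", "DROP"]

def ban(ip: str, dry_run: bool = True) -> list:
    cmd = build_ban_rule(ip)
    if not dry_run:
        # Needs root on the ingest server; dry_run keeps the sketch side-effect free.
        subprocess.run(cmd, check=True)
    return cmd
```

Splitting detection (fail2ban, elsewhere) from enforcement (iptables, on the ingest hosts) is what keeps the log-scanning CPU cost off the busy servers.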

~~~
wesleytodd
One other thing to add:

If the attackers are persistent, there is really no way to guarantee zero
downtime. THEY WILL FIND A WAY. Just make sure your stakeholders know you are
doing everything in your power to resolve the issues, and then actually do
those things.

An anecdote:

We had been seeing DDoS attacks for a few weeks, so we had most everything
locked down and working. But then suddenly one of the most important parts of
our site started going down under load. That part is a real-time chat system.
We looked for which chat room had the load, and it was one which did not
require a user to be registered. We switched the room into registered-users-only
mode and thought we had solved it.

About 5 minutes later the attack came back with all registered users. We were
amazed, because there is no way the attackers could have registered that many
accounts in 5 minutes, due to our rate limiting on that. Turns out that they
had spent the past week or so registering users in case they needed them
:)

~~~
nickpsecurity
Great tips and examples. Makes me wonder what site/service you work on that
attracts attackers like that.

~~~
wesleytodd
[https://www.stream.me/](https://www.stream.me/)

We have some controversial users...

~~~
wesleytodd
I guess some people just really hate kittens :)

[https://www.stream.me/kittendorm](https://www.stream.me/kittendorm)

------
kev009
I work at a large CDN that also sells DDoS mitigation.

Firstly, we are built to endure any DDoS the internet has yet seen on our
peering, backbone, and edge servers for CDN services. This is quite important
when you are tasked with running a large percentage of the interweb but
probably not practical for most organizations, mostly due to talent rather
than cost (you need people that actually understand networking and systems at
the implementation level, not the modern epithet of full stack developer).

But, it is critical to have enough inbound peering/transit to eat the DDoS if
you want to mitigate it -- CDNs with a real first party network are well
suited for this due to peering ratios.

Secondly, when you participate in internet routing decisions through BGP, you
begin to have options for curtailing attacks. The most basic reaction would be
manually null routing IPs for DoS, but that obviously doesn't scale to DDoS.
So we have scrubbers that passively look for collective attack patterns
hanging on the side of our core, and act upon that. Attack profiles and
defense are confirmed by a human in our 24/7 operations center, because a
false positive would be worse than a false negative.

Using BGP, we can also become responsible for other companies' IP space and
tunnel a cleaned feed back to them, so the mitigation can complement or be
used in lieu of first party CDN service.

In summary, the options are pretty limited: 1) offload the task to some kind
of service provider, 2) use a network provider with scrubbing, or 3) hire a
team to build this yourself because you are major internet infrastructure.

------
rmdoss
We need to divide DDoS into two categories here:

- DDoS you can handle (small ones): anything up to 1-2 Gbps or 1M packets per second.

- DDoS you cannot handle: anything higher than that.

For the smaller DDoS attacks, you can handle them by adding more servers and
using a load balancer (e.g. ELB) in front of your site. Both Linode and
DigitalOcean will null route your IP address if the attack goes above
100-200 Mbps, which is very annoying. Amazon and Google will let you handle it
on your own (and charge you for it), but you will need quite a few instances
to keep up with it.

For anything bigger than that, you have to use a DDoS mitigation service. Even
bigger companies do not have 30-40 Gbps of extra capacity hanging around just
in case.

I have used and engaged with multiple DDoS mitigation companies and the ones
that are affordable and good enough for HTTP (or HTTPS) protection are
CloudFlare, Sucuri.net and Incapsula.

- CloudFlare: the most popular one, and works well for everything but L7 attacks (in my experience). You need to get their paid plan, since the free one does not include DDoS protection - they will ask you to upgrade if that happens.

- Sucuri.net: not as well known as CloudFlare, but they have very solid mitigation. I have been using them more lately, as they are cheaper overall than CloudFlare and have amazing support.

- Incapsula: I used to love them, but their support has been really bad lately. They are on a roll trying to get everyone to upgrade their plans, so that's been annoying. If you can do stuff on your own, they work well.

That's been longer than I anticipated, but I hope it helps you decide.

thanks,

~~~
martin_
Worth noting that Incapsula had _multiple_, _worldwide_ outages back in March.
Akamai is a more expensive, but more reliable/proven alternative.

[http://www.bauer-power.net/2016/03/incapsula-had-major-world...](http://www.bauer-power.net/2016/03/incapsula-had-major-worldwide-outage.html#.V8MrDJMrLMU)

~~~
rmdoss
Yep, we suffered through it.

To be fair, they all have some downtime from time to time.

------
DivineTraube
We (Baqend) use an approach that is somewhat different from what has been
proposed here so far:

- Every one of our servers rate limits critical resources, i.e. the ones that
cannot be cached. The servers autoscale when necessary.

- As rate limiting is expensive (you have to remember every IP/resource pair
across all servers), we keep that state in a locally approximated
representation using a ring buffer of Bloom filters.

- Every cacheable resource is cached in our CDN (Fastly), with TTLs estimated
via an exponential decay model over past reads and writes.

- When a user exceeds their rate limit, the IP is temporarily banned at the
CDN level. This is achieved through custom Varnish VCL deployed in Fastly.
Essentially, the logic relies on the backend returning a 429 Too Many Requests
for a particular URL, which is then cached using the requester's ID as a hash
key. Using the restart mechanism of Varnish's state machine, this can be done
without any performance penalty for normal requests. The duration of the ban
is simply the TTL.

TL;DR: Every abusive request is detected at the backend servers using
approximations via Bloom filters and then a temporary ban is cached in the CDN
for that IP.
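The ring-of-Bloom-filters idea can be sketched in Python as below. All parameters (filter size, hash count, window rotation, counting scheme) are illustrative assumptions, not Baqend's actual implementation; note that Bloom filter false positives can only over-count, i.e. ban slightly too eagerly, never miss an abuser.

```python
# Approximate per-(IP, resource) rate limiting over a ring of Bloom filters.
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions in an m-bit field."""
    def __init__(self, m=8192, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

class RingLimiter:
    """Allow roughly `limit` hits per (ip, resource) across the ring's windows.
    Counts are recorded as "key#n" entries in the current filter."""
    def __init__(self, slots=5, limit=3):
        self.ring = [BloomFilter() for _ in range(slots)]
        self.limit = limit
        self.current = 0

    def rotate(self):
        """Call once per time window (e.g. from a timer) to expire old state."""
        self.current = (self.current + 1) % len(self.ring)
        self.ring[self.current] = BloomFilter()

    def _count(self, key):
        n = 0
        while any(f"{key}#{n + 1}" in f for f in self.ring):
            n += 1
        return n

    def hit(self, ip, resource):
        """Record a request; return False once the pair exceeds the limit."""
        key = f"{ip}:{resource}"
        count = self._count(key) + 1
        if count > self.limit:
            return False
        self.ring[self.current].add(f"{key}#{count}")
        return True
```

The appeal of the approximation is memory: each server keeps a few fixed-size bit fields instead of an exact counter per IP/resource pair.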

~~~
rootlocus
I'm sorry, I know it's irrelevant, offtopic and I'm a horrible person but...
"Baqend"? Who came up with this name? Was there some brainstorming involved
and that was the best candidate? What does the branding say about your
business? How is it pronounced?

~~~
DivineTraube
It's pronounced "Backend" ;-)

And since we are in the Backend-as-a-Service market, the name is not all that
unfitting. Although it cannot be denied that from time to time some people
think we are French and spell it "Baquend".

------
tombrossman
I use and recommend hosting with OVH if you are worried about DDoS and serving
a Western market. No affiliation, just a happy customer.

OVH includes DDoS protection by default[0], and they have a very robust
backbone network[1] in Europe and North America that they own and operate
themselves (this is how & why anti-DDoS is standard with them).

For quick side projects I still fire up a DigitalOcean instance or two because
their UX is so slick and easy. If I needed huge scale and price didn't matter,
I would probably go with AWS (their "anti-DDoS" is their vast bandwidth + your
ability to pay for it during an attack). For everything else, I put it on OVH.

[0][https://www.ovh.com/us/anti-ddos/](https://www.ovh.com/us/anti-ddos/)

[1][http://weathermap.ovh.net/](http://weathermap.ovh.net/)

~~~
kalleboo
Have you been DDoSed while on OVH? I've heard there are a bunch of providers
who claim DDoS prevention, but what that means in practice is just "we'll take
your site offline right away and not charge you for the incoming bandwidth!".
Super helpful.

~~~
r1ch
I've had several attacks hit my services on OVH. Not once did they null route,
and as far as I could tell the majority of attack traffic was filtered with
only a minor service interruption before mitigation kicked in. Granted these
were shitty $10 booter services hitting me and not a "real" DDoS.

Be careful using services like game servers or VoIP or anything else using
UDP, though, since UDP is subject to much more stringent filtering at OVH and
may get affected during mitigation.

------
jaypaulynice
I work at a CDN/Security engineering company, but this is just my view.

First off, you need to determine where the attack is coming from. You could
redirect based on IP/request headers in a .htaccess file or Apache rules.

Your next bet is to distribute/auto-scale your application if possible.

You need to set up a web application firewall that sits in front of your web
servers and analyzes the requests/responses that hit them. A lot of DDoS
campaigns are easy to identify based on the request headers/IP/geo and
requests per second.

It's not hard to write a small web server/proxy to do this, but it would be
best left to someone who knows what they're doing, because you don't want to
block real user requests. You can use the open-source ModSecurity WAF for
Apache/Nginx, but again, you have to know what you're doing.

When I faced this issue, I wrote a small web server/proxy here that you can
start on port 80:

[https://github.com/julesbond007/java-nio-web-server](https://github.com/julesbond007/java-nio-web-server)

Here I wrote some rules to drop the request if it's malicious:

[https://github.com/julesbond007/java-nio-web-server/blob/mas...](https://github.com/julesbond007/java-nio-web-server/blob/master/src/main/java/com/nio/http/filter/RequestFilter.java)

------
DenisM
AWS informs us that an ELB with HTTP/HTTPS termination takes care of all
problems except application level attacks. Traffic ingress is free, so it
shouldn't be expensive?

For static content there is always a CDN. Costly, but it works in a pinch
while you're planning your other moves.

The one thing left to worry about is dynamic content. Depending on the
application you could restrict all requests to authorized users only while
under attack.

This isn't a complete solution by any means, but it reduces the attack surface
considerably.

[https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...](https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf)

------
rmdoss
To summarize the discussion here so far:

1- For small attacks, you can optimize your stack, cache your content and use
a provider that allows you to quickly scale and add more servers to handle the
traffic. Do not use Linode or DigitalOcean, as they will null route you.

OVH, AWS and Google are the ones to go with.

2- Use a DDoS mitigation / CDN provider that will filter the attacks and only
send clean traffic back to you.

The ones recommended so far:

[https://cloudflare.com](https://cloudflare.com)

[https://sucuri.net](https://sucuri.net)

[https://incapsula.com](https://incapsula.com)

------
carlosfvp
There are many services for HTTP protection, but when you have a custom
protocol for a real-time service like a game, you are kind of screwed. It's
even worse if your game is UDP based.

I used to get attacked with a huge load of corrupt UDP packets for a few
seconds, and that used to hang the main server, which in 1 or 2 minutes
disconnected all my players.

Solution: separate your UDP services from your TCP services in separate
applications and servers, also use different type of protection services for
each.

The attack still hung the UDP services, so I started thinking about making a
plugin for Snort to analyse the traffic and only allow legit protocol packets.
I haven't done any of this last idea, because the attackers stopped once they
noticed that no one was being disconnected.

BTW, for TCP and HTTP I just used any tiny service that protects me from SYN
floods, like Voxility resellers.

~~~
rmdoss
That's a good point. CloudFlare, Sucuri and friends only handle HTTP/HTTPS/DNS
traffic.

If you have custom protocols, you have to get a full /24 mitigation, and so
far nobody can beat Arbor at it. Very expensive, but works well if you have
BGP.

------
tumdum_
[https://www.cloudflare.com](https://www.cloudflare.com)

~~~
DyslexicAtheist
You mean the biggest MiTM on the web?[0]

The only reason why they're not constantly called out by serious infosec folk
for their _scam_ is because they hire guys also involved in DefCon/BlackHat
planning (try to sneak a hostile talk against Cloudflare past REDACTED[2] who
btw is also advising Mr. Robot). It's lobbying at its finest.

[0] [https://scotthelme.co.uk/tls-conundrum-and-leaving-cloudflar...](https://scotthelme.co.uk/tls-conundrum-and-leaving-cloudflare/)

[1] [https://blog.torproject.org/blog/trouble-cloudflare](https://blog.torproject.org/blog/trouble-cloudflare)

EDIT: [2] redacted name since there is more than one, please duckduckgo by
yourself.

~~~
tumdum_
Yes, I'm well aware that Cloudflare is a MITM, yet for my needs I've decided
that this is not a problem.

I can see that you are not happy with what they provide. Luckily, their
service is not forced on you: you don't have to use it, nor visit servers
that use it.

------
r1ch
Most DDoS attacks are volumetric. There isn't a way to defend against this
other than simply having a huge pipe, or paying someone with a huge pipe to be
in front of your site.

Non-volumetric attacks like SYN or HTTP floods can be mitigated with
appropriate rate limiting or firewalling.

Some providers like OVH have decent network-level mitigation in place, but
you're not gonna find that on a $5 VPS where they're more than happy to null
route you to protect their network.

~~~
rmdoss
Depending on the size of the SYN flood or HTTP flood, there is no way you can
handle it locally.

Some SYN floods can generate millions of packets per second, which is way more
than a dedicated Linux server can handle.

Good video on the topic:

[https://www.youtube.com/watch?v=pCVTEx1ouyk](https://www.youtube.com/watch?v=pCVTEx1ouyk)

------
asimjalis
AWS DDoS defense techniques summary

[https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...](https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf)

AWS DDoS defense using rate based blacklisting

[https://blogs.aws.amazon.com/security/post/Tx1ZTM4DT0HRH0K/H...](https://blogs.aws.amazon.com/security/post/Tx1ZTM4DT0HRH0K/How-to-Configure-Rate-Based-Blacklisting-with-AWS-WAF-and-AWS-Lambda)

------
northwardstar
+1 to CloudFlare and Incapsula. Content delivery networks inherently
distribute traffic and most have security enhancements specific to Distributed
Denial-of-Service mitigation.

DDoS protection providers offer a remote solution to protect any server /
network, anywhere: [https://sharktech.net/remote-network-ddos-protection.php](https://sharktech.net/remote-network-ddos-protection.php)

------
toast0
A) have enough servers so when one gets null routed, it's not a huge deal

B) make sure your servers don't fall over while getting a full line rate of
incoming garbage (this is not hard for reflection or SYN floods, but is
difficult if they're hitting real webpages, and very difficult if it includes
a TLS handshake)

C) bored DDoS kiddies tend to DDoS www only, so put your important things on
other subdomains

D) hope you don't attract a dedicated attacker

------
ebbv
Disclaimer: I work for a hosting company, but these views are my own personal
opinions which I held even before working where I currently do.

This is one of the reasons I would consider managed hosting as opposed to AWS,
Digital Ocean, etc. With any good managed hosting provider, they are going to
take steps to help deal with the DDoS. Depending on your level of service and
the level of the attack, of course. But they will have an interest in helping
you deal with and mitigate the attacks.

The reality is that true DDoS solutions are expensive, and if you have a
"small website" then you're probably not going to be able to afford them. But
if you're at a good sized hosting provider, they're going to need to have
these solutions themselves and can hopefully put them to use to protect your
site.

------
damm
1. Have a big enough pipe; if you are getting a DDoS attack of 2 Gbit/s and
your uplink is 1 Gbit/s, there is nothing you can do except look for someone
else to filter your traffic. (They have to basically take on the 2 Gbit/s
DDoS, filter it, and then pass the valid traffic back to you.)

Verisign and others offer this service, typically using DNS, though they
often support BGP as well.

2. Add limiting factors; if you have an abusive customer, rate limit them in
nginx. If you are expecting a heavy day, rate limit the whole site.

3. Stress testing, and likely designing your website to withstand DDoS
attacks.
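The nginx rate limiting mentioned in point 2 might look roughly like this (zone name, rate, burst, and upstream are illustrative assumptions):

```nginx
# Goes in the http {} context: track clients by IP in a 10 MB zone at 10 req/s.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        # Allow short bursts of 20 requests, then reply 429 instead of queueing.
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://app_backend;  # assumed upstream name
    }
}
```

For a site-wide "heavy day" limit, the same `limit_req` directive can sit at the server level instead of a single location.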

You can cache or not cache; that's not really the question. Handling a DDoS
is about what you can do to mitigate the extreme amount of traffic and still
allow everything else to work.

~~~
TimMeade
We got hit by one about a month ago that was over 20 Gbit/s. Even a 10 Gbit/s
pipe has limits.

------
executesorder66
I've found this very useful:

[http://www.linuxjournal.com/content/back-dead-simple-bash-co...](http://www.linuxjournal.com/content/back-dead-simple-bash-complex-ddos)

------
kalleboo
Don't piss anyone off

~~~
simbalion
That is excellent advice, in combination with other tactics.

If you do piss anyone off, keep records of everything. Make sure you know who
they are, and where they live, before you start doing business with them. This
lets you send the police after them if they hire someone to DDoS you. Bad
people need to be removed from the pool to reduce these sorts of attacks.
Record 100% of your phone calls; Android has free apps to do this for you
automatically. If you're in a state that requires two-party consent, move to
a state that offers one-party consent. Sanity in laws = freedom of citizens.

~~~
DenisM
Talk to a lawyer first re phone call recording. Seriously, you will be glad
that you did.

------
bowyakka
You pay Black Lotus a big pile of money and giggle at attackers.

[http://www.level3.com/~/media/files/brochures/en_secur_br_dd...](http://www.level3.com/~/media/files/brochures/en_secur_br_ddos_mitigation_proxy.pdf)

------
anondon
Cloudbric offers free DDoS protection.

[https://cloudbric.com](https://cloudbric.com)

~~~
tmikaeld
They don't have any CDN presence in China, so it doesn't work for that market.

~~~
coconut98
They let me choose a CDN node in Singapore.

------
Kephael
I colocate and rent services from providers who offer DDoS filtering, and put
all my websites behind CloudFlare. OVH's protection is actually an excellent
value; when I used to help run a game server provider, they were mitigating
20 Gbit/s and larger volumetric floods almost daily.

------
voltagex_
Most of the responses here deal with bandwidth floods. Is that really the most
common DDoS?

Thinking like an attacker, wouldn't the most effective DoS be to find a CPU or
memory intensive part of an application and use a small amount of bandwidth to
create a large impact?

~~~
kev009
Attacks that are heavy on L1-4 are the hardest to protect against because of
the need for large fixed infrastructure (peering/transit).

L7 attacks can be scrubbed by the same infrastructure. Beyond that, it's all a
matter of detection. The computational expense of L7 inspection can be
mitigated by sampling or scaled with ECMP. You may see a "WAF" (Web
Application Firewall) enter the picture at this level.

------
sroussey
Depends on your needs, if you are in control of your network, etc. Two options
here:

[http://cloudflare.com](http://cloudflare.com)

[http://defense.net](http://defense.net)

~~~
rmdoss
Surprised to see anyone mention defense.net.

How has your experience been with it?

------
shALKE
Hey, I worked for a game development company and the attack was hitting some
backend services. We tested voxility.com, and it worked out fine after
everything was integrated.

------
vegancap
Cloudflare. It's been a real life saver for us.

~~~
dineshp2
Could you elaborate? Which plan are you on? What was the size of the DDoS
attack?

------
solusipse
To be honest, that was the only reason why I migrated from DigitalOcean to
OVH.

------
dogma1138
Post was too long [http://pastebin.com/48J9Ufdd](http://pastebin.com/48J9Ufdd) :<

Random "wisdom", in no particular order; more like dos and don'ts that I
picked up dealing with (and executing) DoS/DDoS attacks.

Testing, testing, testing. Regardless of how you choose to implement your
mitigation, test it and test it well, because there are a lot of things you
need to know.

Know and understand the exact effect that the DDoS/DoS mitigation has: the
leakage rate, which attacks can still bring you down, and the cost of
mitigation.

Make sure you do the testing at different hours of the day. If not, you had
better know your application and every business process very well, because
I've seen cases where a 50GB/s DDoS would do absolutely nothing except on
Tuesday and Sunday at 4 AM, when some business batch process would start and
the leakage from the DoS attack plus the backend process would be enough to
kill the system. Common processes that can screw you over are backups,
site-to-site or co-location syncs/transfers, and various database-wide
batches; common times for these are anything in the early morning, end of
week, end of month, end of quarter etc.

If you are using load or stress testing tools on your website, make sure to
turn off compression: it's nice that you can handle 50,000 users that all use
gzip, but the attackers can choose not to.

Understand what services your website/service relies on for operation; common
ones are DNS, SMTP etc. If I can kill your DNS server, people can't access
your website; if I can kill services that are needed for the business aspect
of your service to function, like SMTP, I'm effectively shutting you down
as well.

If you are hosting your service on pay-as-you-go hosting plans, make sure to
implement a billing cap and a lot of loud warnings. Your site going down might
not be fun, but it's less fun to wake up to a 150K bill in the morning. If you
are a small business, DoS/DDoS can result in very big financial damages that
can put you out of business.

Understand exactly how many resources each "operation" on your website or API
costs in terms of memory, disk access/IOPS, networking, DB calls etc. This is
critical to knowing where to implement throttling, and by how much.

If you implement throttling, always do it on the "dumber" layer and on the
layer that issues the request. For example, if you want to limit the number of
DB queries you execute per minute to 1000, do it on the application server,
not on the DB server. This is both because you always want to use "graceful"
throttling, which means the requester chooses not to make a request rather
than the responder having to not respond, and because it allows you to
implement selective throttling; for example, you might want to give higher
priority to retrieving data of existing users than to allowing new users to
sign up, or vice versa.
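A sketch of this requester-side throttling in Python, using the 1000-queries-per-minute figure from the text; the fixed-window counter and the priority reservation are illustrative assumptions:

```python
# The application server declines to issue a DB query once the per-minute
# budget is spent, instead of forcing the DB to refuse work ("graceful"
# throttling). The last 10% of budget is reserved for higher-priority work,
# e.g. existing users over new sign-ups.
import time

class QueryThrottle:
    def __init__(self, limit=1000, window=60.0, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.window_start = clock()
        self.count = 0

    def allow(self, priority=0):
        """Return True if the caller may issue a query right now."""
        now = self.clock()
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # new window, reset budget
        budget = self.limit if priority > 0 else int(self.limit * 0.9)
        if self.count >= budget:
            return False
        self.count += 1
        return True
```

Because the check happens on the "dumber" layer, a rejected request costs the system almost nothing, while the DB server never sees the excess load at all.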

Do not leak IP addresses. This applies both to load balancing and to using
scrubbing services like Cloudflare. When you use services like Cloudflare,
make sure that the services you protect are not accessible directly, and make
sure someone can't figure out the IP address of your website/API endpoint by
simply looking at the DNS records. Common pitfalls are www.mysite.com ->
Cloudflare IP while mysite.com/www1.mysite.com/somerandomstuff.mysite.com
reveal the actual IP address. Another common source is having your IP address
revealed via hard-coded URLs on your site or within the SDK/documentation for
your API. If you have moved to Cloudflare "recently", make sure that the IP
address of your services is not recorded somewhere; there are many sites that
show historic values for DNS records. If you can, it is recommended to rotate
your IP addresses once you sign up for a service like Cloudflare, and in any
case make sure you block all requests that do not come through Cloudflare.

When you do load balancing, do it properly: do not rely on DNS for
LB/round-robin. If you have 3 front-end servers, do not return 3 IP addresses
when someone resolves www.mysite.com; put a load balancer in front of them and
return only 1 IP address. Relying on DNS for round-robin isn't smart, it never
works that well, and you are allowing the attacker to focus on each target
individually and bring down your servers one by one.

Do not rely on IP blacklisting, and for whatever reason, do not ever ever ever
use "automated blacklisting", regardless of what your DDoS mitigation provider
is trying to tell you. If you only serve a single geographical region, e.g.
NA, Europe, or Spain, you can do some basic geographical restrictions, e.g.
limit access from, say, India or China; this might not be possible if you are,
say, a bank or an insurance provider and one of your customers has to access
it from abroad. Ironically this impacts the sites and services that are the
easiest to optimize for regional blocking. For example, if you only operate in
France you might say "ha! I'll block all non-French IP addresses", but this
means that all an attacker needs to do is use IP spoofing and go over the
entire range of French ISPs, and you blacklist all of France; this only takes
a few minutes to achieve! If you are blacklisting commercial service provider
IPs, make sure you understand what impact it can have on your site;
blacklisting DigitalOcean or AWS might be easy, but then don't be surprised
when your mass mail services or digital contract services stop working. If you
do use some blacklisting / geoblocking, use a single list that you maintain;
do not just select "China" in your scrubbing service, firewall, router, and
WAF, as all of them can have different Chinas, which causes inconsistent
responses. Use a custom list and know what is in it.
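The "single list that you maintain" idea, sketched in Python; the CIDRs below are RFC 5737 documentation ranges standing in for real country or provider blocks:

```python
# One authoritative blocklist, consulted by every layer, instead of each
# appliance's own idea of "China". The networks are purely illustrative.
import ipaddress

BLOCKLIST = [ipaddress.ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/24")]

def is_blocked(ip: str) -> bool:
    """Check a client IP against the single maintained list."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKLIST)
```

Every enforcement point (scrubbing service, firewall, router, WAF) would be fed from this one list, so they can never disagree about what is blocked.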

Do not whitelist IPs! I've seen way too many organizations that whitelist IPs
so those IPs would not go, for example, through their CDN/scrubbing service,
or would be whitelisted on whatever "Super Anti DDoS Appliance" the CISO
decided to buy into this month. IP spoofing is easy! Drive-by attacks are
easy! And since common IPs to whitelist are things like your corporate
internet connection, nothing is easier for an attacker than to figure those
out. They simply need to google for the network blocks assigned to your
organization (if you are big enough and/or were incorporated prior to 2005),
or send a couple of thousand phishing emails and do some sniffing from the
inside.

Understand collateral damage and drive-by attacks. Know who (if anyone) you
share your IP addresses with, and figure out how likely they are to be
attacked; yes, everyone will piss off someone with keyboard access these days,
but there are plenty of types of businesses that are more common as targets,
and if you are hosting in a datacenter that also provides hosting for a lot of
DDoS targets, you might suffer as well. For drive-by attacks you need to have
a good understanding of the syndication of your service and, if you are a B2B
service provider, your customers. If you provide some embedded widget to other
sites and they are being DDoSed, you might get hit as well if it is a layer 7
attack. If you are providing a service for businesses, for example an address
validation API, you might get hammered if one of your clients is being DDoSed
and the attacker is hitting their sign-up pages.

Optimize your website: remove or transfer large files. Things like documents
and videos can be moved to dedicated hosting providers (e.g. YouTube) or CDNs.
If you are hosting large files on a CDN, make sure they are only accessible
via the CDN; in fact, for the most part it's best to ensure that anything
hosted on the CDN is only accessible via the CDN. This prevents attackers from
fetching those resources from your own servers by targeting your origin IP
instead of the CDN. A common pitfall is that some large file is linked on your
website as cdn1.mysite.com/largefile but is also accessible directly from
your servers via www.mysite.com/largefile.
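One way to enforce "CDN-only" at the origin is a shared secret header that only
the CDN is configured to add on its origin pulls. The header name, secret, and
path prefixes below are made up for illustration; real CDNs offer equivalents
(custom origin-pull headers, signed requests, or allowlisting the CDN's IP
ranges):

```python
# Hypothetical origin-side check: refuse CDN-only paths unless the request
# carries the secret header that only the CDN injects.
CDN_SECRET = "replace-with-a-long-random-value"
CDN_ONLY_PREFIXES = ("/static/", "/video/")

def allow_request(path: str, headers: dict) -> bool:
    if path.startswith(CDN_ONLY_PREFIXES):
        # Direct hits on the origin for heavy assets are rejected.
        return headers.get("X-From-CDN") == CDN_SECRET
    return True  # everything else is served normally

print(allow_request("/video/largefile", {}))                          # direct hit: refused
print(allow_request("/video/largefile", {"X-From-CDN": CDN_SECRET}))  # via CDN: served
```

With a check like this, cdn1.mysite.com/largefile works while
www.mysite.com/largefile stops being a cheap way to drain your origin's
bandwidth.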

Implement anti-scripting techniques on your website: captchas, DOM rendering
(which makes layer 7 attacks much more expensive for attackers if they have to
render the DOM to execute them), and make sure that every "expensive"
operation is protected by some sort of anti-scripting mechanism. Test this!
Poorly implemented captchas are no good, and I don't just mean captchas that
are predictable or easy to solve with computer vision. If your service looks
like LB > Web Frontend > Application Server > DB, make sure the captcha field
is the first thing validated, and that it's validated in the web frontend or
even in the LB/reverse proxy. If a request gets all the way to the application
server, which validates every field, does the work, and only checks the
captcha just before writing to the DB, the captcha will do little, if
anything, to protect you against DoS/DDoS.
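A sketch of "validate the captcha before anything expensive". The verification
function here is a placeholder; a real handler would call the captcha
provider's verify API, and ideally this check would live as far forward as the
reverse proxy:

```python
def verify_captcha(token: str) -> bool:
    # Placeholder: a real implementation calls the captcha provider's
    # server-side verification endpoint with this token.
    return token == "valid-token"

def handle_signup(form: dict) -> str:
    # Cheapest check first: reject before any field validation or DB work,
    # so a flood of scripted submissions never reaches the expensive tiers.
    if not verify_captcha(form.get("captcha", "")):
        return "rejected"
    # ...field validation and DB writes happen only past this point...
    return "accepted"
```

The ordering is the whole point: swapping the captcha check to the end keeps
the form "protected" on paper while every bogus request still burns a full
validation pass and a DB round trip.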

When you implement any mitigation, design it well and understand leakage and
"graceful failure": it's better for the dumb parts of your service to die and
restart than for the more complicated parts to do so. For example, if after
all of your mitigation you still have 10% leakage from your anti-DDoS/scraping
service to your web frontend, and from it a 5% leakage to your DB, do not
scale the web frontend to compensate for the leakage from your scraping
service to the point of putting your DB at risk. A web server going down is
mostly a trivial thing, as it will usually bring itself back up on its own
without any major issues; if your DB gets hammered, it's a completely
different game. You do not want to run out of memory or disk and have to deal
with cache or transaction log corruption or consistency issues on the DB.
Get used to the fact that no matter what you do and implement, if someone
wants to bring you down they will. Do what you can, and what is economical for
you, to mitigate against certain attacks, and for the rest design your service
with predicted points of failure that recover on their own in the most
graceful manner and in the shortest time.

~~~
simon_acca
Why would you turn off compression? A given number of requests is more
burdensome on the CPU when decompression has to be performed.

~~~
bch
Assuming the remote is actually decompressing, and the origin isn't
dynamically compressing for each request.

~~~
dogma1138
Even if the remote is decompressing, it doesn't matter: a botnet owner, or
someone who compromised a few AWS accounts, doesn't care, and it won't really
slow them down. CPUs are pretty fast at compression, and when a site is easily
3-4 times the size when served uncompressed, you are going to hit your network
cap limit faster than you exhaust the CPU, even with dynamic compression and
no caching, under most circumstances.
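The bandwidth side of that argument is easy to illustrate: text-heavy HTML
typically shrinks several-fold under gzip, so serving it uncompressed
multiplies the bytes an attack drains from your network cap. The sample page
below is synthetic and deliberately repetitive, so its ratio is optimistic;
real ratios depend on the content:

```python
import gzip

# A made-up markup-heavy page standing in for typical HTML output.
page = ("<div class='row'><span>some repeated markup and text</span></div>\n" * 500).encode()
compressed = gzip.compress(page)

ratio = len(page) / len(compressed)
print(f"uncompressed: {len(page)} bytes, gzipped: {len(compressed)} bytes, ~{ratio:.1f}x smaller")
```

Even at a conservative real-world 3-4x, turning compression off means an
attacker exhausts your bandwidth allocation three to four times faster for the
same request volume.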

------
thatrascaltiger
* Don't buy service from a single CDN. It's a recipe for disaster, because even Akamai has outages, and a traffic management setup that lets you move partial or entire traffic to a different CDN lets you not only mitigate their outages, but also move traffic to a cheaper/better provider.

* If you can't CDN all your traffic, a CNAME with low TTL that can quickly switch to a CDN/WAF endpoint can be helpful.

* AWS, Azure and GCP all have mitigations for L3 attacks built into their infrastructure. Because you don't know how they operate, or when, don't rely on them. Accept they may break your service and be prepared to have downtime or the means to shift your product quickly if an attack is big enough or presses enough secret buttons.

* Identify and remove all potential means of amplification both at networking/infra and application. This means not exposing your own nameservers or NTP servers publicly, for L7 this is more complicated as it'll depend on how your APIs and products interact with themselves and each other.

* Load test your products often so you know where the breaking point is and when performance regressions arise for a given amount of allocated resources. Fixing these early may mean you can ride out a DDoS without needing to do anything, if the attack is small enough and your application efficient enough.

