Ask HN: How do you handle DDoS attacks?
223 points by dineshp2 on Aug 28, 2016 | 106 comments
For owners of small websites running on DigitalOcean, GCP or AWS, how do you handle DDoS and DoS attacks?

For context: while exploring the load-testing tool Siege from a VPS, I was able to bring down multiple sites running on shared hosting, and some running on small VPSs, by setting a high enough number of concurrent users. This is not a DDoS, but it goes to show how easy it is to cause damage. Note: I only brought down sites that I own, or those of friends, with their permission.

What tools are useful in fighting DDoS attacks and script kiddies? Mention free and paid options.

What are the options to limit damage in case of an attack? How do you limit bandwidth usage charges?

There was a previous discussion on this topic 6 years ago https://news.ycombinator.com/item?id=1986728




I've faced DoS attacks for years as I run internet forums.

The simple advice for layer 7 (application) attacks:

1. Design your web app to be incredibly cacheable

2. Use your CDN to cache everything

3. When under attack seek to identify the site (if you host more than one) and page that is being attacked. Force cache it via your CDN of choice.

4. If you cannot cache the page then move it.

5. If you cannot cache or move it, then have your CDN/security layer of choice issue a captcha challenge or similar.

The simple advice for layer 3 (network) attacks:

1. Rely on your security layer of choice; if it's not working, change vendor.

On the L3 stuff, when it comes to DNS I've had some bad experiences (Linode, oh they suffered) some pretty good experiences (DNS Made Easy) and some great experiences (CloudFlare).

On L7 stuff, there are a few things no one tells you about... like if your application is backed onto AWS S3 and serves static files, the attack can be on your purse, as the bandwidth costs can really add up.

It's definitely worth thinking about how to push all costs outside of your little realm. A Varnish cache or an Nginx reverse proxy with a file-system cache can make all the difference by saving you bandwidth costs and sparing your app servers.
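
For illustration, a minimal nginx file-system cache in front of an app server might look something like this (a sketch for the http{} context; the paths, zone name, sizes and upstream address are placeholders to adapt):

    # on-disk cache definition (placeholder path and sizes)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:50m
                     max_size=5g inactive=60m use_temp_path=off;

    server {
        listen 80;

        location / {
            proxy_cache       appcache;
            proxy_cache_valid 200 301 5m;            # keep good responses for a few minutes
            proxy_pass        http://127.0.0.1:8080; # your app server
        }
    }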

I personally put CloudFlare in front of my service, but even then I use Varnish as a reverse proxy cache within my little setup to ensure that the application underneath it is really well cached. I only have about 90GB of static files in S3, and about 60GB of that is in my Varnish cache, which means when some of the more interesting attacks are based on resource exhaustion (and the resource is my pocket), they fail because they're probably just filling caches and not actually hurting.

The places you should be ready to add captchas, as they really are uncacheable:

* Login pages

* Shopping Cart Checkout pages

* Search result pages

Ah, there's so much one can do, but generally... designing to be highly cacheable and then using a provider who routinely handles big attacks is the way to go.


Uh, stupid question, but how do you cache a website like, for example, this comment thread on Hacker News? Suppose a DDoSer requests this comment thread a lot of times. The request has to go through to the server, because when I hit F5 or post a comment myself, I see the comments in real time. How do you handle that exactly? Does caching for a few seconds already help, or does the backend push updated pages to the CDN server? I have no experience in DDoS mitigation.


Cache everything a guest accesses for 5 minutes or more. Vary on the specific cookie that represents a signed-in user.

None of my guests have noticed this, and it has increased most of my analytics numbers as my pages are faster too.

The signed-in users, they get the dynamic pages.

But now the cookie that identifies the user is what you use to correlate any attack traffic; the attacker is forced to (somewhat) identify themselves, and you can then revoke their authentication status or ban the account.

Finally you captcha and/or rate-limit the login page.

This is effectively what I do on my sites, the pages themselves and the underlying API all cache if the cookie or access token is absent.

This is trivial to do within the code, but can be harder to do with the CDN/security layer (who need to support a "vary on cookie" or "bypass cache on cookie" or equivalent).
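
For what it's worth, the web-server side of "bypass cache on cookie" is straightforward in nginx; a sketch, assuming a cache zone is defined elsewhere and the session cookie is called "sessionid" (substitute whatever your framework actually uses):

    location / {
        proxy_cache        appcache;           # zone defined via proxy_cache_path elsewhere
        proxy_cache_valid  200 5m;

        # signed-in users (cookie present) skip the cache entirely
        proxy_cache_bypass $cookie_sessionid;  # don't serve them from cache
        proxy_no_cache     $cookie_sessionid;  # and don't store their responses

        proxy_pass http://127.0.0.1:8080;
    }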


The important thing you need to assess is how critical is it that clients receive fresh data.

You can imagine that for a real time service it would be better to provide a timeout immediately rather than providing stale data.

HN is an example of a near-real-time site where some delay is perfectly acceptable. No one cares that they're receiving a 2-second-old page; it's better for the site's users to receive old data fast than new data slow.

If you use nginx, the following directives would help out significantly (if I remember them correctly):

    proxy_cache_use_stale updating;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 1s;

This config lets nginx keep serving the (stale) cached copy to clients while it fetches an update in the background; once fresh data is received from the upstream application server, it serves that immediately.

If that's wrong hopefully someone can correct the conf.


What you can do with a site like HN will be different than if you're a shop getting DDoSed on Black Friday by a competitor.

You can put the whole of HN into read only mode if needed and it'll have no real impact; disallowing purchases on MyAmazonCompetitor.com would be catastrophic.


Literally only cache for 1 or 2 seconds at a time.
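
In nginx, that 1-2 second "micro-cache" is only a few directives (a sketch, assuming a cache zone named "microcache" is defined elsewhere via proxy_cache_path; the TTL is the whole point, everything else is plumbing):

    location / {
        proxy_cache           microcache;
        proxy_cache_valid     200 1s;   # at most a 1-second-old copy is served
        proxy_cache_use_stale updating; # serve the stale copy while one request refreshes it
        proxy_cache_lock      on;       # collapse concurrent misses into a single upstream hit
        proxy_pass            http://127.0.0.1:8080;
    }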

Lots of people use page caching to speed up their website, but that's a mistake, since caching means stale data on dynamic sites. Caching should only be used to solve resource issues, not latency issues.

Your entire site should be fast already without caching. This comments page should only take a few milliseconds to generate. If it doesn't, then something's wrong with the database queries.

I will never understand how some sites take hundreds of milliseconds to generate a page.


Make your comments system a static site generator, so that each comment generates a static HTML page and you serve that statically. 4chan does this.


If you're getting more traffic than 1 request/s, it's less work to generate a static cached version on the cadence of ~1 second than to dynamically generate the content for each request.


Have you heard about cache busting? Someone just needs to request a page that's not cached and the request will always hit your web servers.


You'd be surprised how few attacks I've personally seen vary that much. But yes, it happens, and good applications put their identifiers in their paths and ignore the querystrings; most CDN/security providers allow you to configure their layer to ignore querystrings entirely.

Of course, this is precisely the attack that works on a search page, hence the advice above to be ready to captcha that if you haven't.

Cache anything GETable; for everything else you need to think about how to validate the good traffic (trivially computable CSRF tokens help) and captcha the rest.

404s, 401s, etc... they should cost the underlying server as little resource as possible and also have their result cached at an applicable cache layer (404s at the edge, 401s internally, 403s at the edge if possible, etc.).
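
At the nginx layer, the negative-caching part of that might look roughly like this (a sketch; the TTLs are arbitrary and worth tuning to your own traffic):

    proxy_cache_valid 200 301 10m;  # normal cacheable responses
    proxy_cache_valid 404 403 1m;   # negative caching: repeated junk URLs stop hitting the app
    proxy_cache_valid any 5s;       # a short fallback for everything else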


If there are 90GB of static files, and 60GB are in the Varnish cache, cache busting will be pretty ineffective.


If there's any dynamic content and the request hits that, Varnish cache will be pretty ineffective.


Actually, Varnish is great here: one normalises the requests and retains only the querystring parameters that are valid for your application, filtering out (removing) all those that are not valid.

The key thing is, you know your application, and you know what the valid keys are and the valid value ranges. If you can express that in your HTTP server and discard requests then it can be very cheaply done.

A forum really doesn't have that many distinct URLs, and so this is easily done. It would be harder on a much more complex application, but the original question related to these smaller side-project applications.
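
Varnish does this in VCL; the same idea at the nginx layer, for what it's worth, is just a custom cache key built only from the parameters your application actually understands, so every junk querystring collapses onto the same cached object (a sketch; "page" and "sort" are made-up parameter names, substitute your own):

    # only ?page= and ?sort= influence the cache key; any other parameter is ignored for caching
    proxy_cache_key "$scheme$host$uri?page=$arg_page&sort=$arg_sort";

A stricter variant also rewrites the upstream request so unknown parameters never reach the application, but the cache-key trick alone already blunts cache busting via random querystrings.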


Caching doesn't necessarily mean more speed. Sometimes it can make things slower:

1. Get from cache

2. Determine if cached value is valid

3. Query data store

4. Put data store value in cache

5. Return data

All of that, instead of just getting it directly. To be able to cache, you need to think about good cache invalidation. And client-side caching won't work against malicious users.


Those are problems the CDN solves for you.


A CDN is only for static content, and I assumed all your static content would already be on a CDN... it's standard practice.


A CDN is NOT only for static content. At a minimum, cacheable content, or content cacheable based on a cookie value, can be served from a CDN. Also, running ALL traffic through a CDN (like CloudFlare or Akamai) allows you to do traffic optimization, FEO, DDoS protection, and much more.


My startup's site gets DDOS'd about once a week. We have seen a huge range of attacks from UDP floods, to wordpress pingback attacks, to directed attacks on our services.

We have many layers of protection:

* We run iptables and an API we wrote on our ingest servers. We run fail2ban on a separate set of servers. When fail2ban sees something, we have it hit the API and add the iptables rules. This offloads the CPU cost of fail2ban from our ingest servers.

* We block groups of known hosting company IP blocks, like digital ocean and linode. These were common sources of attacks.

* Our services all have rate limits which we throttle based on IP (a rough sketch of the idea at the web-server level follows this list).

* We have monitoring and auto-scaling which respond pretty quickly when needed, with service-level granularity.

* Recently moved behind cloudflare because google cloud did not protect us from attacks like the UDP floods which didn't even reach our servers.
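
A rough sketch of the per-IP rate limiting mentioned above, in nginx terms (the zone size, rate and burst values are arbitrary examples, and this sits at the web-server layer rather than in the application):

    # allow each client IP an average of 10 requests/second with a small burst allowance
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location /api/ {
            limit_req        zone=perip burst=20 nodelay;
            limit_req_status 429;                 # tell well-behaved clients to back off
            proxy_pass       http://127.0.0.1:8080;
        }
    }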



One other thing to add:

If the attackers are persistent, there is really no way to guarantee zero downtime. THEY WILL FIND A WAY. Just make sure your stakeholders know you are doing everything in your power to resolve the issues, and then actually do those things.

An anecdote:

We had been seeing DDOS attacks for a few weeks, so we had most everything locked down and working. But then suddenly one of the most important parts of our site started going down under load. That part is a real time chat system. We looked for which chat room had the load and it was one which did not require a user be registered. We switched the room into registered users only mode and thought we had solved it.

About 5 minutes later the attack came back with all registered users. We were amazed, because there is no way the attackers could have registered that many accounts in 5 minutes, given our rate limiting on that. Turns out they had spent the past week or so registering users in case they needed them :)


Great tips and examples. Makes me wonder what site/service you work on that attracts attackers like that.


https://www.stream.me/

We have some controversial users...


I guess some people just really hate kittens :)

https://www.stream.me/kittendorm


Interesting service. Yeah, I could see you people getting attention. Love the homepage pic haha.


Cloudflare doesn't help you against UDP floods if your backend is publicly accessible on the internet.

For example:

curl https://104.154.116.193 -H 'Host: www.stream.me' -v -k


Indeed, it's a recent integration and we haven't yet shut off that IP.

Btw, check out curl's --resolve flag. You can use it to override default DNS resolution and can then drop the -k flag.


If I remember correctly, Digital Ocean was by far the greatest source of WordPress pingbacks. They've got a severe problem on their hands. I submitted a report to their abuse contact weeks (months?) ago but have not yet heard back.


I work at a large CDN that also sells DDoS mitigation.

Firstly, we are built to endure any DDoS the internet has yet seen on our peering, backbone, and edge servers for CDN services. This is quite important when you are tasked with running a large percentage of the interweb but probably not practical for most organizations, mostly due to talent rather than cost (you need people that actually understand networking and systems at the implementation level, not the modern epithet of full stack developer).

But, it is critical to have enough inbound peering/transit to eat the DDoS if you want to mitigate it -- CDNs with a real first party network are well suited for this due to peering ratios.

Secondly, when you participate in internet routing decisions through BGP, you begin to have options for curtailing attacks. The most basic reaction would be manually null routing IPs for DoS, but that obviously doesn't scale to DDoS. So we have scrubbers that passively look for collective attack patterns hanging on the side of our core, and act upon that. Attack profiles and defense are confirmed by a human in our 24/7 operations center, because a false positive would be worse than a false negative.

Using BGP, we can also become responsible for other companies' IP space and tunnel a cleaned feed back to them, so the mitigation can complement or be used in lieu of first party CDN service.

In summary, the options are pretty limited: 1) offload the task to some kind of service provider, 2) use a network provider with scrubbing, or 3) hire a team to build this yourself because you are major internet infrastructure.


We need to divide DDoS here into two categories:

-DDoS you can handle (small ones): anything up to 1 or 2 Gbps, or 1M packets per second.

-DDoS you can not handle. Anything higher than that.

For the smaller DDoS attacks, you can handle them by adding more servers and using a load balancer (e.g. ELB) in front of your site. Both Linode and DigitalOcean will null route your IP address if the attack goes above 100-200 Mbps, which is very annoying. Amazon and Google will let you handle it on your own (and charge you for it), but you will need quite a few instances to keep up with it.

For anything bigger than that, you have to use a DDoS mitigation service. Even bigger companies do not have 30-40Gbps+ capacity extra hanging around just in case.

I have used and engaged with multiple DDoS mitigation companies and the ones that are affordable and good enough for HTTP (or HTTPS) protection are CloudFlare, Sucuri.net and Incapsula.

-CloudFlare: The most popular one, and works well for everything but L7 attacks (in my experience). You need to get their paid plan, since the free one does not include DDoS protection; they will ask you to upgrade if that happens.

-Sucuri.net: Not as well known as CloudFlare, but they have a very solid mitigation. Have been using them more lately as they are cheaper overall than CloudFlare and have amazing support.

-Incapsula: I used to love them, but their support has been really bad lately. They are on a roll trying to get everyone to upgrade their plans, so that's been annoying. If you can do stuff on your own, they work well.

That's longer than I anticipated, but I hope it helps you decide.

thanks,


Worth noting that Incapsula had _multiple_, _world wide_ outages back in March. Akamai is a more expensive, but more reliable/proven alternative.

http://www.bauer-power.net/2016/03/incapsula-had-major-world...


Yep, we suffered through it.

To be fair, they all have some downtimes from time to time.


We (Baqend) use an approach that is somewhat different from what has been proposed here so far:

- Every one of our servers rate limits critical resources, i.e. the ones that cannot be cached. The servers autoscale when necessary.

- As rate limiting is expensive (you have to remember every IP/resource pair across all servers) we keep that state in a locally approximated representation using a ring buffer of Bloom filters.

- Every cacheable resource is cached in our CDN (Fastly) with TTLs estimated via an exponential decay model over past reads and writes.

- When a user exceeds their rate limit, the IP is temporarily banned at the CDN level. This is achieved through custom Varnish VCLs deployed in Fastly. Essentially the logic relies on the backend returning a 429 Too Many Requests for a particular URL, which is then cached using the requester's ID as a hash key. Using the restart mechanism of Varnish's state machine, this can be done without any performance penalty for normal requests. The duration of the ban is simply the TTL.

TL;DR: Every abusive request is detected at the backend servers using approximations via Bloom filters and then a temporary ban is cached in the CDN for that IP.


Have you guys been DDoSed before? All that sounds very nice until someone UDP floods you and you get null routed.

Looks like you're hosting at least some stuff at Hetzner, they're not going to do any filtering for you.


We have not seen any serious DDoS attacks, apart from the ones we created through our own heavy load testing. UDP attacks are not really a problem, since the servers are only accessible by Fastly over TCP using a combination of SSL authentication and IP whitelisting.


I'm sorry, I know it's irrelevant, offtopic and I'm a horrible person but... "Baqend"? Who came up with this name? Was there some brainstorming involved and that was the best candidate? What does the branding say about your business? How is it pronounced?


It's pronounced "Backend" ;-)

And since we are in the Backend-as-a-Service market, the name is not all that unfitting. Although it cannot be denied that from time to time some people think we are French and spell it "Baquend".


It is actually a really good name for search engines, as it is a completely invented word.


Fantastic name. I love it when you see a brand, mentally sound it out in your head, and then it delights you with its cleverness.


I use and recommend hosting with OVH if you are worried about DDOS and serving a Western market. No affiliation, just a happy customer.

OVH include DDOS protection by default[0] and they have a very robust backbone network[1] in Europe and North America that they own and operate themselves (this is how & why anti-DDOS is standard with them).

For quick side-projects I still fire up a DigitalOcean instance or two because their UX is so slick and easy. If I needed huge scale and price didn't matter I would probably go with AWS (their 'anti-DDOS' is their vast bandwidth + your ability to pay for it during an attack). For everything else, I put it on OVH.

[0]https://www.ovh.com/us/anti-ddos/

[1]http://weathermap.ovh.net/


As a word of caution for OVH: their anti-DDoS protection can be a little too strict at times. The nature of traffic on our site is such that we often get big spikes of traffic via social media. We were testing out OVH earlier this month to see if offloading part of our site to their servers would work well, and they ended up shutting our VPS down for an hour when we got a big spike.

After working with their support team, they did say that if we purchased a dedicated server they would possibly be able to set up a rule to account for this, but the VPS at least couldn't be fixed. For what it's worth, before they blocked the traffic the VPS was handling it just fine.

So if you're thinking of getting a dedicated server you may be okay, but use caution if you expect any sort of spike in traffic, as OVH's algorithms may block it as a DDoS even though you are sure it's legitimate.

Edit: Also, to be clear, they blocked 100% of the traffic when the "attack" happened. Their solution wasn't to block the "attack" but to just null route the IP.


This contradicts what they claim in their DDoS explanation here: https://www.ovh.co.uk/anti-ddos/mitigation.xml. The first question to ask them is why their response differed from the practice they advertise? That isn't cool. I would open a new support ticket with them and ask for an explanation, since that happened to you recently.

If you need something low-cost and dedicated try their SoYouStart range, which is just their last-generation hardware. Going from a VPS to SYS is a huge performance jump for minimal cost. They have a higher guaranteed minimum bandwidth throughput than the VPSs and may get you better support possibilities. Cost is similar to a mid-size VPS.


Here is the result of the support ticket I had with them explaining that the traffic was not an attack and is expected to spike from time to time. Edited only to remove their upsell links.

"After looking into the matter, it would seem that our VPS do not have a profile that would accomodate your traffic spikes. However should you switch to a dedicated server, we would be able to apply a a custom profile to your server that we better suit your needs.

Our Kimsufi server are extremely affordable and do not cost much more then what you are presently paying. However they do have limitation, among other things, you cannot order additional IPs on the server. I would advise you to think carefully before purchasing such a server.

Another alternative would be to move to our mid range server, our Soyoustart servers, which offer many of the same benefits as our OVH dedicated servers, however at a very competitive price."


Sounds like their "Anti-DDOS protection" is bullshit for their VPS plans then, unlike the marketing statements all over the OVH website saying otherwise. :(


Have you been DDoSed while on OVH? I've heard there are a bunch of providers who claim DDoS prevention, but what that means in practice is just "we'll take your site offline right away and not charge you for the incoming bandwidth!". Super helpful.


I've had several attacks hit my services on OVH. Not once did they null route, and as far as I could tell the majority of attack traffic was filtered with only a minor service interruption before mitigation kicked in. Granted these were shitty $10 booter services hitting me and not a "real" DDoS.

Be careful using services like game servers or VOIP or anything else using UDP though, since UDP is subject to much more stringent filtering at OVH and may get affected during mitigation.


I've also been DDoSed while on OVH. It happened twice and was a non-event. I would never have noticed if they hadn't told me. Now the asshats don't seem to even bother DDoSing us any more.

On iWeb, however, they null-routed us for half a day.


OVH's DDoS protection is the real deal, they do not simply null route your IP like many other providers do.


I do use OVH and recommend them from time to time.

The main issue is that I lost a bit of faith in their support and reliability. vracks going down for hours with no updates. Connectivity issues. Servers disappearing.

Besides that, their DDoS protection works well for l3 attacks, except that they force a TCP reset on every connection. So if you are picky about extra connect times and having your clients re-establish their connections, they are great.


What kind of server, and which data center? I've been using a dedicated server at BHS for over a year with no problems at all.


Tangential, but how do you find OVH? Their hardware, bandwidth, uptime, customer service? I ask because of the conflicting reviews of OVH that a quick google search reveals.


I'm living next to France, and in talking to local geeks the name kept coming up (OVH is a French company). I had been using DigitalOcean but needed more storage space, so I tried Kimsufi and SoYouStart, which are both OVH-related, and then I started A/B comparing VPS performance between OVH and DigitalOcean and saw that the VPSs at OVH were consistently out-performing the same-size machines at DigitalOcean. So I moved everything I plan to keep online for more than a month to OVH. OVH's dashboard for creating and managing machines used to be really horrid, but it has improved recently.

I don't know what to make of the bad reviews you found, my personal experience has been great for several years now. Multiple products used, the occasional support ticket with quick response, and decent pricing.


I found OVH's offer to be very good on every point, except customer service. I'm mostly using Kimsufi dedicated servers, and let's just say that when shit goes wrong, you're left alone in the dark.

Anecdote: I had my dedicated server suddenly go down because it overheated. Wouldn't come back to life. Two days after submitting a ticket and getting no input, the machine suddenly came back up without any explanation. A day later, I got a mail saying the motherboard was broken and got replaced. Overall it was a very unpleasant experience, but it's to be expected given the low price of Kimsufi.


I'm surprised people think it's acceptable to be 2 days in the dark just because it's cheaper. Even really low end companies like nocix (former datashack), 1&1, etc. will reply to your tickets in a few minutes.

It's good to know that this stuff happens with OVH. I'll make sure to stay far from them.


It's up to you to choose the product that fits your needs. Slow support is part of the deal. And do your own backups, because they may swap the disks at any time if they detect that something is wrong with them.

This happens with Kimsufi servers, a side brand of OVH for cheap dedicated servers. The real OVH servers are more expensive, but you get all the bells and whistles that come with a professional server hosting offer.


Kimsufi is a completely different service to OVH's, even though they are owned by the same company, so you can't really compare the two. Kimsufi is dirt cheap and has a reputation for terrible support.


This is a marketing strategy. Kimsufi is for people to try out dedicated server hosting. It provides a low entry barrier. Once used to it, people switch to a real server as their needs develop.


This is because Kimsufi are OVH's cheapest dedicated servers. They do provide support, but it's delayed and restricted to hardware issues. These servers are intended for playing around and testing, hence the low price. The offer is also minimal. I use it for some toy website hosting and mail hosting. It's good for bootstrapping.


At work we're using OVH for our production, we've been with them for several years. The key point is that the price-performance ratio is very difficult to beat, and it offsets the problems we've had.

We've had very few hardware-related issues, a disk failing or a motherboard to be replaced. In all of those cases, the components were swapped promptly and we were kept informed of the progress.

Where we're unhappy is with the network, especially with their vRack offering. Looking back at our production incidents of the past 6 months, about 50% of them were caused by some vRack problem where at the same time the public interfaces were up and running just fine.

We're generally happy with customer service, but we pay for VIP support and we speak French to OVH's support agents (I believe that the latter helps a lot).


I work at a CDN/Security engineering company, but this is just my view.

First off you need to determine where the attack is coming from. You could redirect based on IP/request headers in a .htaccess file or apache rules.

Your next bet is to distribute/auto-scale your application if possible.

You need to set up a web application firewall that sits in front of your web servers and analyzes the requests/responses that hit them. A lot of DDoS campaigns are easy to identify based on the request headers/IP/geo and requests/second.

It's not hard to write a small web server/proxy to do this, but it would be best left to someone who knows what they're doing because you don't want to block real user requests. You can use ModSecurity's open source WAF for apache/nginx, but again you have to know what you're doing.
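
As a rough illustration of that kind of filtering at the nginx layer (the user-agent patterns below are examples only; build your own from what you actually see in your logs):

    # map goes in the http{} context: flag requests with an empty or suspicious user agent
    map $http_user_agent $bad_agent {
        default                                  0;
        ""                                       1;  # no user agent at all
        "~*(masscan|python-requests|wordpress)"  1;  # example patterns, e.g. pingback floods
    }

    server {
        location / {
            if ($bad_agent) { return 444; }  # 444 = close the connection without a response
            proxy_pass http://127.0.0.1:8080;
        }
    }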

When I faced this issue, I wrote a small web server/proxy here that you can start on port 80:

https://github.com/julesbond007/java-nio-web-server

Here I wrote some rules to drop the request if it's malicious:

https://github.com/julesbond007/java-nio-web-server/blob/mas...


AWS informs us that an ELB with HTTP/HTTPS termination takes care of all problems except application level attacks. Traffic ingress is free, so it shouldn't be expensive?

For static content there is always a CDN. Costly, but it works in a pinch while you're planning your other moves.

The one thing left to worry about is dynamic content. Depending on the application you could restrict all requests to authorized users only while under attack.

This isn't a complete solution by any means, but it reduces the attack surface considerably.

https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...


To summarize the discussion here so far:

1- For small attacks you can optimize your stack, cache your content and use a provider that allows you to quickly scale and add more servers to handle the traffic. Do not use Linode or Digital Ocean as they will null route you.

OVH, AWS and Google are the ones to go with.

2- Use a DDoS mitigation / CDN provider that will filter the attacks and only send clean traffic back to you.

The ones recommended so far:

https://cloudflare.com

https://sucuri.net

https://incapsula.com


There are many services for HTTP protection, but when you have a custom protocol for a real-time service like a game, you are kind of screwed. It's even worse if your game is UDP based.

I used to get attacked with huge loads of corrupt UDP packets for a few seconds, which used to hang the main server, which in 1 or 2 minutes disconnected all my players.

Solution: separate your UDP services from your TCP services into separate applications and servers, and use different types of protection services for each.

The attack still hung the UDP services, so I started thinking about making a plugin for Snort to analyse the traffic and only allow legit protocol packets. I haven't done any of this last idea because the attackers stopped once they noticed that no one was being disconnected.

BTW, for TCP and HTTP I just used any tiny service that protects me from SYN Flood, like Voxility resellers.


That's a good point. CloudFlare, Sucuri and friends only handle HTTP/HTTPS/DNS traffic.

If you have custom protocols, you have to get full /24 mitigation, and so far nobody can beat Arbor at it. Very expensive, but it works well if you have BGP.



You mean the biggest MiTM on the web?[0]

The only reason why they're not constantly called out by serious infosec folk for their scam is because they hire guys also involved in DefCon/BlackHat planning (try to sneak a hostile talk against Cloudflare past REDACTED[2] who btw is also advising Mr. Robot). It's lobbying at its finest.

[0] https://scotthelme.co.uk/tls-conundrum-and-leaving-cloudflar...

[1] https://blog.torproject.org/blog/trouble-cloudflare

EDIT: [2] redacted name since there is more than one, please duckduckgo by yourself.


As a:

* Longtime repeat speaker at Black Hat

* Repeat review board member (including this year's), and

* Extreme skeptic of Cloudflare's

I do not believe this is true. If you have a talk that is on topic for Black Hat and is harmful to Cloudflare, you'll get accepted. There's no one person who screens Black Hat talks; it's a panel of people, with several of the longstanding members of that panel (I'm not one of those) being more or less unimpeachable (Mark Dowd, Chris Eagle, Alex Sotirov, Dino Dai Zovi). None of these people are in the tank for Cloudflare. In fact: for most of the review board, none of them give a shit about Cloudflare.

The process isn't perfectly transparent! But it's such that if you submitted a talk, and it got shitcanned before reviewers even saw it, and you made a stink about it on Twitter, people would notice.

I generally agree with your assessment of Cloudflare as a threat to the Internet, for what it's worth. I just don't think you're right that they've gamed Black Hat.


Yes, I'm well aware that cloudflare is mitm, yet for my needs I've decided that this is not a problem.

I can see that you are not happy with what they provide. Luckily their service is not forced on you: you neither have to use it, nor visit servers that use it.


The paid plan, yes. But the commonly used free plan does not do much to prevent DDoS or DoS attacks.


Source?


https://www.cloudflare.com/plans/

Look at the "Advanced security" section.

I also used Siege to flood a site behind Cloudflare's free plan and brought it down.


There's an option for free plans, "Basic DDoS protection", with the following blurb:

Built-in security measures automatically protect your website against DDoS attacks. CloudFlare's service allows your legitimate traffic to reach your website, while stopping illegitimate traffic at the edge, before it hits your server.

So Cloudflare promises at least minimal protection for free plans.

As for Siege, I assume Cloudflare is optimized to protect from botnets. A single machine running Siege is not a realistic test case. Perhaps it also depends whether your website is mostly static, then Cloudflare can do a lot of caching.


That page argues against your point; even the basic plan does quite a bit to fend off DDoS. In particular, the most common and effective type of DDoS, which is volumetric and based on reflected UDP traffic, is defended against even on their free tier.

Using a tool like Siege to bring a site behind Cloudflare down doesn't mean it's not protected. A layer 7 attack against a site which can't handle incoming HTTP requests is still possible. Cloudflare, or any other service, can't magically make a site scale.


> That page argues against your point; even the basic plan does quite a bit to fend off DDoS. In particular, the most common and effective type of DDoS, which is volumetric and based on reflected UDP traffic, is defended against even on their free tier.

Maybe I'm missing something here, where does it mention that the free plan protects against UDP floods?

> Using a tool like Siege to bring a site behind Cloudflare down doesn't mean it's not protected. A layer 7 attack against a site which can't handle incoming HTTP requests is still possible.

Flooding a site using Siege from a single IP falls under the layer 7 attack (correct me if I'm wrong), which is protected against in the Business plan.

> Cloudflare, or any other service, can't magically make a site scale.

Where did I mention that I expect Cloudflare to magically scale a site? A POST or GET flood falls under layer 7 protection, which Cloudflare offers in paid plans.

If it was not clear, the point was that layer 7 protection is offered in the paid plans(Business and Enterprise), but not in the free plan.


Most DDoS attacks are volumetric. There isn't a way to defend against this other than simply having a huge pipe, or paying someone with a huge pipe to be in front of your site.

Non-volumetric attacks like SYN or HTTP floods can be mitigated with appropriate rate limiting or firewalling.

Some providers like OVH have decent network-level mitigation in place, but you're not gonna find that on a $5 VPS where they're more than happy to null route you to protect their network.


Depending on the size of the syn flood or HTTP flood, there is no way you can handle it locally.

Some syn floods can generate millions of packets per second, which is way more than a dedicated linux server can handle.

Good video on the topic:

https://www.youtube.com/watch?v=pCVTEx1ouyk



+1 to CloudFlare and Incapsula. Content delivery networks inherently distribute traffic and most have security enhancements specific to Distributed Denial-of-Service mitigation.

DDoS protection providers offer a remote solution to protect any server / network, anywhere: https://sharktech.net/remote-network-ddos-protection.php


A) have enough servers so when one gets null routed, it's not a huge deal

B) make sure your servers don't fall over while getting full line rate of garbage incoming (this is not hard for reflection or synfloods, but is difficult if they're hitting real webpages, and very difficult if it includes a tls handshake)

C) bored ddos kiddies tend to ddos www only, so put your important things on other sub domains

D) hope you don't attract a dedicated attacker


Disclaimer: I work for a hosting company, but these views are my own personal opinions which I held even before working where I currently do.

This is one of the reasons I would consider managed hosting as opposed to AWS, Digital Ocean, etc. With any good managed hosting provider, they are going to take steps to help deal with the DDoS. Depending on your level of service and the level of the attack, of course. But they will have an interest in helping you deal with and mitigate the attacks.

The reality is that true DDoS solutions are expensive, and if you have a "small website" then you're probably not going to be able to afford them. But if you're at a good sized hosting provider, they're going to need to have these solutions themselves and can hopefully put them to use to protect your site.


1. Have a big enough pipe; if you are getting a DDoS attack of 2Gigabits/second and your uplink is 1Gigabit there is nothing you can do except look for someone else to filter your traffic. (They have to basically take on the 2gig ddos; filter it and then pass back the valid traffic to you).

Verisign and others offer this service, typically using DNS. However, they often support BGP as well.

2. Add limiting factors; if you have an abusive customer, rate limit them in nginx. If you are expecting a heavy day, rate limit the whole site.

3. Stress testing and likely designing your website to withstand DDoS attacks.

You can cache or not cache; that's not really the question. Handling a DDoS means what can you do to mitigate the extreme amount of traffic and still allow everything else to work.


We got hit by one about a month ago that was over 20 Gbps. Even a 10 Gbps pipe has limits.



Don't piss anyone off


Chilling effect. With some bad luck this won't help you.


That is excellent advice, in combination with other tactics.

If you do piss anyone off, keep records of everything. Make sure you know who they are, and where they live, before you start doing business with them. This lets you send the police after them when they hire someone to DDoS you. Bad people need to be removed from the pool to reduce these sorts of attacks. Record 100% of your phone calls. Android has free apps to do this for you automatically. If you're in a state that requires two-party consent, move to a state that offers one-party consent. Sanity in laws = freedom of citizens.


Talk to a lawyer first re phone call recording. Seriously, you will be glad that you did.


Don't host any user-generated content, either.


You pay blacklotus a big pile of money and giggle at attackers.

http://www.level3.com/~/media/files/brochures/en_secur_br_dd...


Cloudbric offers free DDoS protection.

https://cloudbric.com


They don't have any cdn in China, so it doesn't work for this market.


They let me choose cdn in Singapore.


I colocate and rent services from providers who offer DDoS filtering and put all my websites behind CloudFlare. OVH's protection is actually an excellent value, when I used to help run a game server provider they were mitigating 20 gbit/sec and larger volumetric floods almost daily.


Most of the responses here deal with bandwidth floods. Is that really the most common DDoS?

Thinking like an attacker, wouldn't the most effective DoS be to find a CPU or memory intensive part of an application and use a small amount of bandwidth to create a large impact?


Attacks that are heavy on L1-4 are the hardest to protect against because of the need for large fixed infrastructure (peering/transit).

L7 attacks can be scrubbed by the same infrastructure. Beyond that, it's all a matter of detection. The computational expense of L7 inspection can be mitigated by sampling or scaled with ECMP. You may see a "WAF" (Web Application Firewall) enter the picture at this level.


At AWS re:Invent 2015, Amazon claimed that 15% of attacks were at layer 7, 65% were network level bandwidth floods, and 20% were network level state exhaustion [1].

[1]: https://youtu.be/Ys0gG1koqJA?t=229


Depends on your needs, if you are in control of your network, etc. Two options here:

http://cloudflare.com

http://defense.net


Surprised to see anyone mention defense.net.

How has your experience with it been?


Hey, I worked for a game development company and the attack was hitting some backend services. We tested voxility.com; it worked out fine after everything was integrated.


Cloudflare. It's been a real life saver for us.


Could you elaborate? Which plan are you on? What was the size of the DDoS attack?


To be honest, that was the only reason why I migrated from DigitalOcean to OVH.


Post was too long http://pastebin.com/48J9Ufdd :<

Random "wisdom", not in any particular order more like do's and dont's that I picked up with dealing with and executing DoS/DDoS attacks.

Testing, testing, testing. Regardless of how you choose and implement your mitigation, test it and test it well, because there are a lot of things you need to know.

Know and understand the exact effect that the DDoS/DoS mitigation has: the leakage rate, what attacks can still bring you down, and the cost of mitigation.

Make sure you do the testing at different hours of the day; if not, you had better know your application and every business process very well, because I've seen cases where a 50GB/s DDoS would do absolutely nothing except on Tuesday and Sunday at 4 AM, when some business batch process would start and the leakage from the DoS attack plus the backend process would be enough to kill the system. Common processes that can screw you over are backups, site-to-site or co-location syncs/transfers, and various database-wide batches; pretty common times for these are anything in the early morning, end of week, end of month, end of quarter, etc.

If you are using load or stress testing tools on your website, make sure to turn off compression; it's nice that you can handle 50,000 users that all use gzip, but the attackers can choose not to.

Understand what services your website/service relies on for operation; common things are services like DNS, SMTP, etc. If I can kill your DNS server, people can't access your website; if I can kill services that are needed for the business aspect of your service to function, like SMTP, I'm effectively shutting you down as well.

If you are hosting your service on pay-as-you-go hosting plans, make sure to implement a billing cap and a lot of loud warnings. Your site going down might not be fun, but it's less fun to wake up to a 150K bill in the morning; if you are a small business, DoS/DDoS can result in very big financial damages that can put you out of business.

Understand exactly how much each "operation" on your website or API costs in terms of memory, disk access/IOPS, networking, DB calls, etc. This is critical to knowing where to implement throttling and by how much.

If you implement throttling, always do it on the "dumber" layer and on the layer that issues the request. For example, if you want to limit the number of DB queries you execute per minute to 1000, do it on the application server, not on the DB server. This is both because you always want to use "graceful" throttling, which means the requester chooses not to make a request rather than the responder having to refuse to respond, and because it allows you to implement selective throttling; for example, you might want to give higher priority to retrieving data of existing users than to allowing new users to sign up, or vice versa.

Do not leak IP addresses. This applies both to load balancing and to using scrubbing services like Cloudflare. When you use services like Cloudflare, make sure that the services you protect are not accessible directly, and make sure someone can't figure out the IP address of your website/API endpoint by simply looking at the DNS records. Common pitfalls are www.mysite.com pointing at a Cloudflare IP while mysite.com/www1.mysite.com/somerandomstuff.mysite.com reveal the actual IP address. Another common source is having your IP address revealed via hard-coded URLs on your site or within the SDK/documentation for your API. If you have moved to Cloudflare "recently", make sure that the IP address of your services is not recorded somewhere; there are many sites that show historic values for DNS records. If you can, it is recommended to rotate your IP addresses once you sign up for a service like Cloudflare, and in any case make sure you block all requests that do not come through Cloudflare.
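
Blocking everything that doesn't come through the scrubbing layer can be as blunt as an allow-list on the origin web server; a sketch (the ranges below are documentation placeholders, not real provider ranges, so pull the current list from your provider and keep it updated):

    # accept connections only from the CDN/scrubbing provider's published ranges
    allow 203.0.113.0/24;   # placeholder range
    allow 198.51.100.0/24;  # placeholder range
    deny  all;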

When you do load balancing, do it properly; do not rely on DNS for LB/round robin. If you have 3 front-end servers, do not return 3 IP addresses when someone resolves www.mysite.com; put a load balancer in front of them and return only 1 IP address. Relying on DNS for round robin isn't smart; it never works that well, and you are allowing the attacker to focus on each target individually and bring down your servers one by one.
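
In other words, instead of three A records, something like this nginx fragment in front of the three frontends (addresses are placeholders; a hardware or cloud load balancer does the same job):

    upstream frontends {
        least_conn;             # or leave the default round-robin
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
        server 10.0.0.13:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://frontends;
        }
    }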

Do not rely on IP blacklisting, and whatever you do, do not ever use "automated blacklisting", regardless of what your DDoS mitigation provider is trying to tell you. If you only serve a single geographical region, e.g. NA, Europe, or Spain, you can do some basic geographical restrictions, e.g. limit access from, say, India or China; this might not be possible if you are, say, a bank or an insurance provider and one of your customers has to access it from abroad. Ironically, this impacts the sites and services that are the easiest to optimize for regional blocking. For example, if you only operate in France you might say ha! I'll block all non-French IP addresses; but then all an attacker needs to do is use IP spoofing and go over the entire range of French ISPs, and you blacklist all of France, which only takes a few minutes to achieve! If you are blacklisting commercial service providers' IPs, make sure you understand what impact it can have on your site; blacklisting DigitalOcean or AWS might be easy, but then don't be surprised when your mass mail services or digital contract services stop working. If you do use some blacklisting / geoblocking, use a single list that you maintain; do not just select "China" in your scrubbing service, firewall, router, and WAF, as all of them can have different Chinas, which causes inconsistent responses. Use a custom list and know what is in it.
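
The "single list that you maintain" point can be made concrete with nginx's geo module: keep one include file of CIDR ranges and reference it everywhere (a sketch; the file path is made up):

    # /etc/nginx/blocklist.conf contains lines like "198.51.100.0/24 1;"
    geo $blocked {
        default 0;
        include /etc/nginx/blocklist.conf;
    }

    server {
        location / {
            if ($blocked) { return 403; }
            proxy_pass http://127.0.0.1:8080;
        }
    }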

Do not whitelist IPs! I've seen way too many organizations that whitelist IPs so those IPs would not go, for example, through their CDN/scrubbing service, or would be whitelisted on whatever "Super Anti-DDoS Appliance" the CISO decided to buy into this month. IP spoofing is easy! Drive-by attacks are easy! And since common IPs to whitelist are things like your corporate internet connection, nothing is easier for an attacker than to figure those out. They simply need to google for the network blocks assigned to your organization (if you are big enough and/or were incorporated prior to 2005), or send a couple of thousand phishing emails and do some sniffing from the inside.

Understand collateral damage and drive-by attacks. Know who (if anyone) you share your IP addresses with and figure out how likely they are to be attacked; yes, everyone pisses off someone with keyboard access these days, but there are plenty of types of businesses that are more common targets, and if you are hosting in a datacenter that also provides hosting for a lot of DDoS targets, you might suffer as well. For drive-by attacks you need to have a good understanding of the syndication of your service and, if you are a B2B service provider, of your customers. If you provide some embedded widget to other sites and they are being DDoSed, you might get hit as well if it is a layer 7 attack. If you are providing a service for businesses, for example an address validation API, you might get hammered if one of your clients is being DDoSed and the attacker is hitting their sign-up pages.

Optimize your website; remove or transfer large files. Things like documents and videos can be moved to various hosting providers (e.g. YouTube) or CDNs. If you are hosting large files on CDNs, make sure they are only accessible via the CDN; in fact, for the most part it's best if you make sure that what is hosted on the CDN is only accessible via the CDN. This prevents attackers from accessing the resources on your own servers by targeting your IP instead of the CDN. A common pitfall would be that some large file is linked on your website as cdn1.mysite.com/largefile but is also accessible directly from your servers via www.mysite.com/largefile.

Implement anti-scripting techniques on your website: captcha, DOM rendering (it makes it very expensive for the attacker to execute layer 7 attacks if they need to render the DOM to do so), and make sure that every "expensive" operation is protected with some sort of anti-scripting mechanism. Test this! Captchas that are poorly implemented are no good, and I don't just mean captchas that are somehow predictable or easy to read with computer vision. If you have a service that looks like LB > Web Frontend > Application Server > DB, make sure that the captcha field is the first thing that is validated, and make sure it's validated in the web frontend or even in the LB/reverse proxy. If you hit the application server, validate all the fields, do the work, and only validate the captcha just before sending it to the DB, it won't protect you against DoS/DDoS nearly as well, if at all.

When you implement any mitigation, design it well and understand leakage and "graceful failure"; it's better for the dumb parts of your service to die and restart than for the more complicated parts. For example, if after all of your mitigation you still have 10% leakage from your anti-DDoS/scrubbing service to your web frontend, and from it there is a 5% leakage to your DB, do not scale the web frontend to compensate for the leakage from your scrubbing service to the point of putting your DB at risk. A web server going down is mostly a trivial thing, as it would usually bring itself back up on its own without any major issues; if your DB gets hammered, well, it's a completely different game. You do not want to run out of memory or disk and have to deal with cache or transaction log corruption or consistency issues on the DB. Just get used to the fact that no matter what you do and implement, if someone wants to bring you down, they will. Do what you can and what is economical for you to mitigate against certain attacks, and for the rest, design your service with predicted points of failure that recover on their own in the most graceful manner and in the shortest period.


Why would you turn off compression? A given number of requests is more burdensome on the CPU when decompression is to be performed.


Because you want to test with it on and off and understand the impact it has. Text (JavaScript, HTML) compresses rather well (on average 3 to 1 or better); it's considerably more likely that you are going to hit a networking limit than a CPU limit when not using gzip or any other compression method. So when you do stress or load testing, you really want to issue the same requests with and without the Accept-Encoding: compress, gzip or similar headers.

Also, in some cases you want to disable accepting gzip on the server side completely, because if you accept gzip-encoded requests, an attacker can send very large requests that compress very well, forcing you to decompress them and eat up a lot of memory and CPU cycles on your side only to discard them. In principle you want to accept only non-compressed requests but send compressed responses to save bandwidth, but in any case you want to know how your service/application scales under all cases and combinations.


Assuming the remote is actually decompressing, and the origin isn't dynamically compressing for each request.


Even if the remote is decompressing, it doesn't matter; a botnet owner or someone who compromised a few AWS accounts doesn't care, and it won't really slow them down. CPUs are pretty fast at compression, and when a site is easily 3-4 times the size when not served compressed, you are going to hit your network cap faster than you exhaust the CPU, even with dynamic compression and no caching, under most circumstances.


* Don't buy service from a single CDN. It's a recipe for disaster because even Akamai have outages, and having a traffic management setup that lets you move partial/entire traffic to a different CDN will let you not only mitigate their outages, but also move traffic to a cheaper/better provider.

* If you can't CDN all your traffic, a CNAME with low TTL that can quickly switch to a CDN/WAF endpoint can be helpful.

* AWS, Azure and GCP all have mitigations for L3 attacks built into their infrastructure. Because you don't know how they operate, or when, don't rely on them. Accept they may break your service and be prepared to have downtime or the means to shift your product quickly if an attack is big enough or presses enough secret buttons.

* Identify and remove all potential means of amplification both at networking/infra and application. This means not exposing your own nameservers or NTP servers publicly, for L7 this is more complicated as it'll depend on how your APIs and products interact with themselves and each other.

* Load test your products often to know what the breaking point is and when performance regressions arise with a given amount of resources allocated. Fixing these early may mean you can ride out a DDoS without needing to do anything, if it's small enough and your application is efficient enough.




