& according to https://www.comparitech.com/blog/information-security/ddos-s... their frequency isn't declining
But yes, the larger sites have gotten their shit together so that the cost to DDoS has gone up
Also if you have a botnet you now have to ask: do you want to rent it out for DDoS, or do you want to mine crypto?
The best I could find was $250 a day using only 15,000 hosts. Not bad considering the cost is literally zero for the attackers. Scale that up to half a million hosts, which is a tiny botnet in reality, and you'd make $8,000+ a day, or over $3 million a year, based on low-hanging fruit.
For example, in a Smurf attack the attacker finds broadcast IP addresses by sending an ICMP request to an address and counting the number of ICMP replies that come back. A broadcast IP address is one that delivers a packet to every host on a network (often with 255 as the last octet, e.g. 192.0.2.255 for a Class C network of 192.0.2.0/24).
After finding suitably large networks with an open broadcast address, the attacker sends packets to the broadcast address with a spoofed source IP of the victim. The attack is then multiplied by however many hosts are on the broadcast network.
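To make the multiplication concrete, here's a back-of-envelope sketch of the amplification; all of the numbers are made-up illustrative assumptions, not measurements:

```python
# Back-of-envelope amplification math for a Smurf-style attack.
# All constants are illustrative assumptions, not measurements.

ICMP_REQUEST_BYTES = 64   # one spoofed echo request sent by the attacker
HOSTS_PER_SUBNET = 200    # hosts that answer the broadcast address
ATTACKER_PPS = 10_000     # spoofed requests per second the attacker sends

# Every request is answered by every host on the broadcast network,
# and every reply goes to the spoofed (victim) source address.
replies_per_request = HOSTS_PER_SUBNET
victim_pps = ATTACKER_PPS * replies_per_request
victim_bps = victim_pps * ICMP_REQUEST_BYTES * 8

print(f"amplification factor: {replies_per_request}x")
print(f"victim sees {victim_pps:,} pps = {victim_bps / 1e9:.2f} Gbit/s")
```

With these (modest) numbers a 10,000 pps attacker already lands roughly a gigabit per second on the victim, which is the whole appeal of reflection attacks.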
DNS reflection is another type of DDoS attack that also relies on the ability to spoof an IP address of the victim.
The more interesting/difficult to mitigate attacks are those that complete handshakes (if TCP) and make fully formed requests at L7 that otherwise appear legitimate.
A bit speculative, but my hunch is --
IoT and some other developments still create opportunities for new DDoS attacks, but attackers herd. And the "X as a service" support infrastructure is mostly supporting ransomware right now, likely because it's safer and more lucrative. You can walk away from a ransomware target, fire and forget, so you can do it at scale. With DDoS you have to pick your victims, monitor and maintain the pressure, and choose how to allocate your resources to targets while they're investigating or waiting you out.
Cloudflare might be part of the story, maybe that was enough of a headwind to stop the trolls, but for the professional criminals, I suspect this is about lucrative alternative attacks.
100 Gbps is basically a trivial test for a new botnet.
Additionally, depending on the exact service, you can certainly firewall traffic close to the source.
The specific problem here is mail servers: they haven't been a target of DDoS until now, which means there are few companies that provide mail-exchange-specific DDoS protection. The larger companies (Verizon/Yahoo, Microsoft/Outlook, Google/Gmail) just operate servers well beyond what they really need, and I don't think they can just run to Cloudflare and violate their privacy promise in the process.
Put that together with the shift toward more people being mobile-only, and there are fewer ways to create the botnets behind these attacks.
Interesting; I think of the TPM as holding keys for BitLocker encryption or personal certificates. Can you clarify how a TPM makes it harder to remotely take over a computer?
Better to have the target blackholed upstream. Can usually be done with a BGP community of 666 if your peers support it.
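For illustration, a Cisco-style remotely-triggered-blackhole (RTBH) setup looks roughly like this. The community your upstream honors varies (65535:666 is the RFC 7999 well-known BLACKHOLE value, but many providers use their own ASN:666), and the tag value and route-map name here are placeholders:

```
! null-route the victim /32 locally and tag it for export
ip route 203.0.113.7 255.255.255.255 Null0 tag 666
! when redistributing statics into BGP, attach the blackhole community
route-map RTBH permit 10
 match tag 666
 set community 65535:666
```

Once the upstream sees the community, it drops traffic to that /32 at its own edge, so the garbage never reaches your pipe.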
From the ISP's point of view, you might have prevented an overload that could have affected other customers. From the customer's point of view, their service was denied all the same. Doesn't sound like anything has improved compared to 10-20 years ago.
Source: Security Engineering by Ross Anderson.
You might want to check how much it costs to increase someone's billing by $20k a day (granted, a botnet makes it cheaper, but measure the opportunity cost of what else that botnet could be doing); see also https://www.reddit.com/r/aws/comments/7z6uc3/comment/dutgw6u...
Full disclosure: I work for Azure
For context, I’ve picked up self-hosting again after many years, and though I don’t anticipate being a target for large attacks, I’ve been curious what tools individuals have at their disposal, or if it’s a fool’s errand to even try.
It is an attack on the httpd and smtpd daemons. imap has been unaffected as far as I can tell.
> how does one protect smtp servers from distributed attacks?
By design, MX can be as distributed and as numerous as you can afford. This can be a combination of load-balancer virtual IPs distributing load to many MX servers behind them, and many MX DNS records with the same or different priorities. This of course won't help much if the attackers are paying DDoS-as-a-service farms to bring massive volume and packet rates that overload all your servers. There are DDoS scrubbing services you can pay for that will advertise your AS number or use GRE tunnels or VPNs to clean the attack traffic for you. These scrubbing solutions are no guarantee of mitigation.
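As a concrete illustration, the "many MX records" part is just DNS; a zone fragment like the following (all host names are placeholders) spreads inbound mail across several servers, each of which can itself be a load-balancer VIP:

```
; equal priority 10: sending servers pick among these roughly evenly
example.org.   IN  MX  10 mx1.example.org.
example.org.   IN  MX  10 mx2.example.org.
; higher number = lower priority: tried when the primaries are unreachable
example.org.   IN  MX  20 mx-backup.example.org.
```

Well-behaved senders retry lower-priority MXes on failure, which is what gives SMTP its natural tolerance for an MX being knocked over.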
> Do tools like fail2ban help?
No, that would be pointless whack-a-mole. If an individual person is mad at you and launching a tool from their own PC or a handful of VMs, then yes, fail2ban will help. But blocking individual IPs on your MX servers under a real distributed DDoS attack would be futile. Scrubbing centers are about the only solution once the attack is big enough. Or, if you had unlimited funds, you could deploy many datacenters or point-of-presence destinations and build your own scrubbing network, but that is very expensive.
But once you have that, you should be able to tell them "forward me all traffic except <list of IPs>" and then list the IPs that are sending you the most (remaining) traffic, or the most cost (e.g. an IP sending little traffic but performing many TLS handshakes). That's where a fail2ban-like tool would come in, no?
The benefit of this approach is that it works completely independently of the protocol you're running. TCP, UDP, doesn't matter, as long as the attacker cannot spoof IP addresses at scale.
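A fail2ban-like accounting loop of that kind can be sketched as follows; the threshold, the event feed, and the IPs are illustrative assumptions, and in practice the resulting blocklist would be pushed upstream rather than enforced locally:

```python
# Minimal sketch of per-IP cost accounting: count "expensive" events
# (e.g. TLS handshakes that never become logins) per source IP and add
# IPs over a threshold to a blocklist you'd then push upstream.
from collections import Counter

COST_THRESHOLD = 100  # handshakes per window with no successful login


def update_blocklist(events, blocklist):
    """events: iterable of (ip, kind) where kind is 'handshake' or 'login'."""
    handshakes = Counter()
    logins = Counter()
    for ip, kind in events:
        if kind == "handshake":
            handshakes[ip] += 1
        elif kind == "login":
            logins[ip] += 1
    for ip, n in handshakes.items():
        # high cost, zero legitimate use -> candidate for upstream blocking
        if n >= COST_THRESHOLD and logins[ip] == 0:
            blocklist.add(ip)
    return blocklist


blocked = update_blocklist(
    [("203.0.113.7", "handshake")] * 150 + [("198.51.100.2", "login")],
    set(),
)
print(blocked)  # {'203.0.113.7'}
```

The key difference from stock fail2ban is where the verdict is enforced: the decision logic is cheap, but the drop has to happen at a box that can absorb the traffic.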
as long as the attacker cannot spoof IP addresses at scale
That is kind of the rub. Until a majority of tier-1 backbone providers implement BCP 38 or some derivative of it, spoofing from the DDoS farms is trivial. There has been talk of implementing this for many years but very little action. Perhaps when DDoS attacks cost enough tax revenue or impact investors there may be a push for legislation in some countries, but in an ideal world most of the providers would work through this as one big team. That probably just made some network engineers laugh, or smirk, or worse.
 - https://en.wikipedia.org/wiki/Ingress_filtering#Networks
So you need to be able to handle:
a) the reflected volumetric attack (you can block IPs and even protocols/ports here),
b) a much smaller volume of IP-spoofed traffic, which for TCP services basically means you have to be able to handle a SYN flood (because if they spoof, they can't get a connection established to do the nastier stuff)
c) make sure you don't let your countermeasures against a) be triggered by b) in a way that disrupts your legitimate customers.
For a TCP based service, I'd expect the following to be quite effective:
1. At the very first stage (has to be able to handle the full volume, may be external), discard all UDP traffic, any TCP traffic pointing to the wrong port, and any traffic from blacklisted IPs.
2. At the second stage (has to be able to handle the IP-spoofable volume), terminate TCP connections using SYN cookies, and forward actual connections to the real servers.
3. At the real servers, any remaining incoming connection is coming from a non-spoofed IP. Monitor cost (TCP handshake, TLS handshake, authentication attempt) and successful logins per IP. Block IPs that cause a high cost but have no legitimate users, or an extreme cost and few legitimate users, using the stage 1 blacklist.
4. Optionally you can also aggregate information about successful logins and block sources that cause a high load at stage 2 and have few successful logins using a short-lived entry on the stage 1 blacklist.
UDP based services would be much harder since you don't have such a trivial stage 2, but for TCP, stages 1+2 should be something that you can buy.
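The SYN-cookie trick at stage 2 can be sketched like this. It's a simplified illustration of the idea only: real kernels use a precise bit layout for MSS and timestamp encoding, and the secret handling and hash construction here are toy assumptions:

```python
# Simplified illustration of the SYN-cookie idea used at stage 2:
# encode the connection identity into the initial sequence number, so
# the server keeps no per-connection state until the final ACK proves
# the client's source address is real. Toy layout, not the kernel's.
import hashlib
import time

SECRET = b"server-local-secret"  # assumption: rotated out-of-band


def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    t = int(time.time() // 64) if t is None else t  # coarse timestamp
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}@{t}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    # 8 bits of timestamp + 24 bits of keyed hash as the sequence number
    return (t & 0xFF) << 24 | int.from_bytes(digest[:3], "big")


def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, t=None):
    t_now = int(time.time() // 64) if t is None else t
    for t_try in (t_now, t_now - 1):  # tolerate one window of clock skew
        if syn_cookie(src_ip, src_port, dst_ip, dst_port, t_try) == cookie:
            return True
    return False
```

Because the cookie is recomputable from the packet headers plus the secret, a spoofed SYN costs the server one hash and zero memory, which is exactly what makes stage 2 survivable under a SYN flood.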
> […] contrary to general opinion about the lack of BCP 38 deployment, some 80% of the Internet (by various measures) were already applying anti-spoofing packet filtering
I know very little about global routing, so I'm wondering: is it an ineffective way of filtering against the attacks we see today, or is 80% simply not enough to be effective?
 - https://archive.nanog.org/sites/default/files/1_Kaeo_Ddos_Tr... [PDF]
It’s not clear to me why fail2ban wouldn’t at least help, if the botnet is a thousand machines wouldn’t I (eventually) have them all blocked? And therefore reduce the overall duration of the attack? Or is the problem that it’s hard to differentiate between good clients and bad clients because no single client is sending enough traffic to be suspicious?
Also, do you have any specific examples of ddos scrubbing services? Would like to take a look specifically at affordability for individuals.
In reality, however, a large portion of the attack won't even show up in logs. Depending on how much the attacker is willing to spend, the attack will also contain tens or hundreds of millions of packets per second: TCP and UDP packets on random ports, no ports, random protocols, random sizes, random TTLs, random headers. That is a volumetric attack. Fail2ban and most network IDS/IPSes would not even see it; it would saturate the uplinks to the ISP before you see anything. Your ISP will most likely null-route you and encourage you to stop advertising your AS number.
In reference to scrubbing centers for individuals, that is not a thing unless you have unlimited funds. There are some VPS providers that can scrub tiny attacks. Linode, Vultr, OVH to name a few but they can only deal with tiny attacks. If you want to research this for your business, my suggestion would be to get on the NANOG mailing lists and discuss it with all the network engineers to find out which scrubbing services are currently most effective. This is a moving target and a very big investment. I am not a fan of any of the companies that provide these services, but that is only based on my limited experiences with them.
I'm thinking of something like a Lambda/FaaS platform where the compute endpoint is massively distributed?
I guess there is (currently) no 'one final solution', but as in traditional security, a good solution consists of many layers/rings of defense? Can one (in theory) decentralize all 7 layers of the OSI model?
Trying hard not to use the word (crypto/blockchain) here
But, voip.ms has points of presence all over and they were DDoSed effectively recently; distribution can often help, but it's not enough. If an attacker sends X gigabits of garbage at your San Jose PoP and disables it, your other PoPs will likely function, but if they send X/N gigabits at each of your N PoPs, that might be enough to disable all of them.
Using a large provider can help a lot though. Volumetric DDoS is 'solved' by having large pipes and discarding lots of traffic, and, where possible, getting upstreams to discard lots of traffic before it arrives on your network. Large providers have large pipes and good relations with their carriers. Smaller providers or DIY don't.
If you mean distributing the destination servers that people access, that is basically the same pattern. One could in theory spin up cloud instances of the email httpd/smtpd servers, but there are still diminishing returns. This also requires the clients to know how to route to the right clusters of servers; not easy, but doable. Ultimately this is the same problem people run into with web servers hosted on AWS: you can spin up more instances, but there are still network link bottlenecks going into each region. AWS and the like have partnerships with scrubbing centers. This model breaks down if the attacker is willing to spend more than $500; most massive-scale attacks cost less than $200 on the dark web. If the extortionists believe they can get more than, say, $1k, then they might be willing to spend enough to take down even some scrubbing centers. It can cost hundreds of millions to defend against a $200 attack.
The current email RFCs would not support something like crypto/blockchain. At that point you are basically inventing a new standard, and adoption may take a very long time unless there is a compelling business advantage to it, in my opinion.
* Machines here could be anything from a lightbulb to a server.
A game I run was recently hit with a 102 Gbps CLDAP reflection attack. We were down for a while until our ISP's DDoS protection detected it; after that we were mostly unaffected. If the attack is difficult to separate from legitimate traffic you'll still suffer, though.
There are theoretically cheap ways to help a customer under attack: set up a GRE tunnel to do that scrubbing, however it's a bit annoying. Get a VM on a host that already has that capability, and route your packets through it first. The con is that you have one more failure mode and increased latency.
1. Network level:
- BGP Filtering (meaning in order to reach your IP range, packets must first route through another company that has sufficient bandwidth to receive AND filter out bad packets), then clean traffic will go through to your data center, and to your server.
- GRE tunnels: similar to the above, but not transparent, in that you will likely use their IPs.
2. Datacenter level:
- Colocate or find a dedicated server that sits behind a dedicated appliance that exists solely to act as a filter. You will also need to ask what their upstream link speeds are (e.g. 40 Gbps to that appliance). You might still encounter leaks, and you are relying on them having configured it correctly, or being willing to apply custom filtering if you face an advanced attack.
3. Node/Server level:
- By now, you can only filter at whatever your line rate is (i.e. 1 Gbps or 10 Gbps most likely). There are various methods, but all of them require you to create custom filters and be active in patching up leaks. You'll want to do the filtering as early in the server's stack as possible. The best option is SmartNICs or NICs that support hardware offloads. The second-best option is in front of iptables. Most tutorials online talk about iptables; that assumes you have Linux in the first place, and it's not the optimal way anyway. Use tc (traffic control) instead; it hooks in earlier in the network stack.
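For instance, dropping a known-bad source range at the tc ingress hook (before netfilter ever sees the packets) looks roughly like this; the interface name and the prefix are placeholders, and these commands need root:

```
# attach an ingress qdisc to the NIC, then drop a source prefix there
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 \
    u32 match ip src 203.0.113.0/24 action drop
```

Packets matched here are discarded before conntrack and iptables do any work, which is where the per-packet savings over a plain iptables DROP rule come from.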
Disclaimer: I run Pingly, an email hosting service, but ironically our signups are turned off at the moment due to a botnet that hits us with fake accounts to send spam, which I'm working to mitigate completely.
How do you mitigate such an attack though? I know Cloudflare can stop this, but how do you create your own bespoke 'DDOS mitigation' tool, and what does that look like?
And it's in their interest to talk to their providers and mitigate even further back if possible. And some mitigations are relatively easy (block all DNS traffic to this subnet, etc).
Otherwise, with centralized infrastructure, you identify the bad traffic and send it elsewhere: an instance that keeps connections open but delivers nothing and has a small resource footprint. The problem is that you still need to be able to handle the traffic, but at least you avoid hitting your main infrastructure.
I can see how this works for some applications, but how would this work for SMTP and IMAP/JMAP?
You can actually get this semi-transparently with a TOR-like system. (I'm not sure TOR itself actually implements all of this, and also TOR is perennially underprovisioned for political reasons, so this mostly won't work in practice.)
First, you need a distributed DNS mechanism to publish "example.com. 9999 IN MX 1 abcdefghijklmnop.onion". This is mostly static, so DDOS doesn't really work.
You then come up with a sequence of rendezvous servers, which we'll number starting from 1. (I think actual TOR just assumes a single (or few) rendezvous server is sufficient, but I'd need to go dig through the code to be sure.)
You then try (in sequence) servers 1,2+rand(2),4+rand(4),...,2^k+rand(2^k),... where rand(x) picks a random number in [0,x). (0: I'm not sure whether distinct rands should share lower bits; see below.) If a server is overloaded it just drops traffic on the floor.
The destination server then checks the rendezvous servers in sequence until it has gotten enough successes that any client would have tried one of the successful rendezvous servers in its random sequence (footnote 0 above affects the distribution here).
Under heavy load, the destination server also sets a proof-of-work requirement (the ClientHello satisfies the standard hash-has-x-leading-zeros check), which allows the rendezvous servers to drop most of the incoming traffic. Legitimate clients by definition are not spamming connections as fast as they can, so they can afford to burn CPU to meet the requirement. DDOS clients can also burn CPU on it, but that reduces the rate at which they generate traffic.
The end result is that volumetric attacks are spread over 2^k rendezvous servers, where k is dynamically chosen such that they can handle the load, while for faux-legitimate attacks, the DDOS just pushes up the computational cost for legitimate clients without ever actually shutting down the target.
This works for anything TCP-like.
1: You can DDOS anything by just behaving like (absurdly many) legitimate clients, e.g. `while true;do wget http://example.com/;done`.
2: If the rendezvous and destination servers are all similar, then the work per server scales as the square root of the attack volume: the destination polls all 2^k rendezvous servers while each rendezvous server sees a 2^-k share of the attack, and the two costs balance when 2^k is about the square root of the volume. Or, put the other way around, the amount of attack traffic this setup can absorb scales as the square of the traffic each server can handle.
3: So for faux-legitimate attacks, the attacker's goal is not to overwhelm the server, but to maximize the costs to legitimate clients trying to connect; the attacker will generate only (roughly) as much traffic as the destination can handle, with as large a proof-of-work as possible. Assuming the destination server normally runs around 50% load, the total work imposed on legitimate clients (distributed over all of them) will be about the same as the attacker's available CPU. If the destination server normally runs significantly below 50% load, the imposed work will be proportionately lower.
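The two client-side pieces described above, the doubling probe sequence and the hash-leading-zeros proof of work, can be sketched as follows; the function names and parameters are illustrative, not taken from Tor:

```python
# Sketch of the client-side pieces of the rendezvous scheme above:
# the doubling probe sequence and a leading-zeros proof of work.
import hashlib
import os
import random


def probe_sequence(k_max):
    """Servers to try: 1, 2+rand(2), 4+rand(4), ..., 2^k + rand(2^k)."""
    seq = [1]
    for k in range(1, k_max + 1):
        seq.append(2**k + random.randrange(2**k))
    return seq


def meets_pow(client_hello: bytes, zero_bits: int) -> bool:
    """Check the 'hash has x leading zero bits' requirement."""
    digest = hashlib.sha256(client_hello).digest()
    value = int.from_bytes(digest, "big")
    return (value >> (256 - zero_bits) == 0) if zero_bits else True


def solve_pow(payload: bytes, zero_bits: int) -> bytes:
    """Burn CPU appending random nonces until the hello passes the check."""
    while True:
        hello = payload + os.urandom(8)
        if meets_pow(hello, zero_bits):
            return hello


hello = solve_pow(b"MAIL FROM:<a@example.com>", 12)  # ~2^12 hashes expected
assert meets_pow(hello, 12)
```

Raising `zero_bits` by one doubles the expected work per connection, which is the knob the destination turns under load.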
You haven't seen Gnutella in its glory days.
The point of the cost is that it's cheap but non-zero. It makes spam uneconomical, creates pseudonyms for accruing reputation and makes it trivial to moderate (IDs are permanent).
Applications built on the platform can take advantage of this ID system and none of them need to rebuild auth or handle networking across the web. This means application devs can just focus on their apps and distribution is trivial.
The modern web is a nightmare of complexity for people trying to build applications, you basically need to raise VC and have a SaaS in order to be able to hire the armies of people required to build anything.
And its identity management is based on Ethereum, so maybe just attach a "Sign-in with Ethereum" component to a Mastodon instance, and you get something with the same feature set as Urbit, but somewhat simpler and vastly more supportable. And as an extra, you get nice features like multi-machine scaling and live encrypted backups.
You can have simplicity with the proper abstractions, and with a design that gets you there over time you can fix it bit by bit. This is the only way a massive rewrite of the stack like this has any hope of success.
Mastodon is mostly a twitter clone - with a lot of effort you could probably tie server creation to eth IDs and resolve that bit at least (make it easier to spin up servers), but you don't solve the issue with updating servers to match versions and the fall out from all of that other stuff. You also don't solve application creation or distribution. Urbit's OS is designed as a deterministic function of its inputs - this is necessary to make it trivial to run/update the nodes. It's more than there being no distinction between users and nodes (though that's a big part).
I'd bet against any other federated system long term. Urbit is still a long shot, but if it wins - it wins big.
Frankly, it doesn't seem like such a bad strategy. With the amount of panic the attack on Voip.ms caused, a number of people were discussing switching voip providers. And switching targets to other providers just as they get a massive influx of new customers seems like a force multiplier IMO.
I'm fully in favor of not paying into the extortion and not going for a global tap system like cloudflare, but we do need some solution in at most a few days. In the Netherlands there is for example the Nawas (it's also a pun, referring to a laundry cleaning thing) that scrubs malicious traffic for ISPs. I don't know the details of how they're being attacked, but with typical reflector services it's easy to remove that traffic based on a source port. Any large Norwegian hosting company (they just need an uplink bigger than the DDoS, or work with whoever the traffic comes from at their peering points) could provide that service as well. Not saying this is trivial but there are options other than waiting.
They hit a number of VoIP providers recently, too.
I expect we will see Cloudflare mail protection soon enough...
Makes you wonder.
Today we can help quite a bit by proxying TCP traffic using Cloudflare Spectrum, e.g., spoofed traffic will never reach origin as it can't complete a handshake, we can use things like SYN cookies to challenge source, etc.
In the future, there's a lot more we plan to do here.
It still seems much easier to just blackhole IPs that are causing problems: collectively (at the edge of your AS) block IPs that long-term host a service actively involved in facilitating DDoS attacks. But for some reason nobody is doing that. It could be a more direct way: see where the DDoS traffic is coming from and... poof.
We didn’t disclose it at the time but this 17.2M rps attack came from (home) Mikrotik devices that were running proxy services: https://blog.cloudflare.com/cloudflare-thwarts-17-2m-rps-ddo....
Still, if an ISP has had multiple abuse reports for the same subscriber and they're not doing anything, after some time it starts to become reasonable to block this IP, and in a further escalation, this ISP's ranges altogether until they clean their act up. I remember getting the Internet connection blocked as a teenager on an XS4ALL connection for being an ass on the Internet (I tried to DoS a domain squatter that tried to sell a domain I wanted for a thousand times the price with no added value). The abuse desk which I had to contact to unblock it took my promise to not do it again seriously (as did I), not sure how other ISPs handle this.
I kept having errors in sending emails and logging into the web interface and the mailbox.org status page indicated nothing was wrong. Why have the page if that’s not where I’m supposed to find out something like this?
Without getting into too much detail: we were limiting some HTTP methods, but unintentionally blocked REPORT, which DAV clients use to see what’s changed.
Basically SMTP login and everything works fine, client sends the message and then it just times out.
If you're a heavy gmail user you should read this story of my last year (out of 10+) with them and IMO migrate off as soon as you can. I wish I hadn't used an @gmail as long as I did.
For some unknown reason, they won't tell me, G marked my google pay account as "possible fraud, unable to verify identity" at some point this year. No idea when, I realized it when CC expired and my 3 month old Google Fi account wouldn't pay its bill and it wouldn't let me enter a credit card to pay for any G service. My G services started to drop like flies once my card expired. They locked me out of Google Pay and won't let me enter a new CC to pay for subs/gdrive. I had to clear out the entire 50gb of my gdrive so that I could get EMAIL again to my now "free" gmail account which is hovering around its 15gb limit, that's how old it is. The real kicker; I can't contact support anymore because I'm not a "paying customer" who gets their "world class support."
The fun part of this was I was completely without a cell phone for a week until I got onto ATT. Support (live chat) was completely useless and just sent me to a page to send them a copy of my license and a utility bill. I did that about 4 times now and I still can't change CCs on my Google account to pay for things. And they still won't contact me, won't tell me what the deal is or anything. It's been about 6 months.
I was joking to a friend before I switched to Fi "watch this be a terrible idea, it breaks and I'm without a phone for a month and have nobody to fix it." Welp.
Anyway, I tried out Protonmail months ago and was completely unimpressed with its search, interface, etc. I had low expectations of Fastmail (due to the Proton experience) but it's great; I'm really impressed. It threads conversations like you'd expect, and I could actually find things I searched for. I sold a house in the middle of my Proton use and it was miserable keeping track of all of the emails/docs/etc. I was getting and sending. I remember searching for important attachments and it finding tens or hundreds of attachments that were my realtor's signature picture of herself, which showed up in every email.
Sorry for the tangent, be very careful of how much you rely on G services because this cliche horror story you randomly see on twitter/reddit completely happened to me. I'll never be able to stop using this @gmail account because I have so, so many things tied to it but I'm going to try my best to undo most of that.
My biggest fear now is that my G account gets completely locked and all of the things I use G to authenticate to will be lost. Undoing all that is a nightmare.
Use a custom domain for your email..
Doing the thing where you try to sign up to services using firstname.lastname@example.org confuses people though. I stopped doing that after having to explain to people I receive emails to any address sent to my domain a few times.
I could see using uncommon top level domains being a problem as well.
Moving to another provider would mean setting up a new account, and the free tiers usually won't let you use your own domain. So you would be locked into a new provider where you could face the same problems.
Temporarily moving to a self-hosted server would be an option, but probably just to receive emails during this time. You'd have to set up your certificate, and optionally DKIM, DMARC, SPF, or whatever is required to ensure your sent emails arrive properly. I can't imagine self-hosting email being something that won't give you a hard time every now and then.
Generally yes, I am in favor of owning your email-domain, but then using it with a professional provider like mailbox.org unless you really know enough about the topic.
But in this case, where the issue is a DDoS attack, I wouldn't do anything, since all undelivered email will be re-sent at a later time.
Are there any mail admins in the comments who could propose an SMTP daemon and config that would store all mail for a domain and wait indefinitely to forward it on? I think there are a lot of users/businesses who would like to implement this.
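For what it's worth, a plain Postfix instance configured as a backup MX already does most of this; a rough sketch follows. The domain and map paths are placeholders, and Postfix can't literally wait forever (it caps retries via maximal_queue_lifetime, default 5 days, which you can raise), so check the parameter semantics against the Postfix docs for your version:

```
# main.cf fragment for a store-and-forward backup MX (illustrative)
relay_domains = example.org                  # accept and queue mail for this domain
relay_recipient_maps = hash:/etc/postfix/relay_recipients   # reject unknown rcpts
transport_maps = hash:/etc/postfix/transport # where to forward once the primary is back
maximal_queue_lifetime = 30d                 # retry for a month instead of 5 days
bounce_queue_lifetime = 30d
```

The relay_recipient_maps line matters: without a recipient list, a backup MX becomes a backscatter source because it accepts mail for nonexistent users and bounces it later.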
It's the same idea behind a VPS provider serving their status page on another company's infrastructure.
Are we sure that isn't the address the extortion mail came from? That would make more sense.
> Are we talking generic consumer desktop machines, like my grandma’s old XP desktop running in her basement?
Those combined with hundreds of thousands of compromised VPSes, etc.