We still hear about DDoS attacks like this once in a while, but it seems nowhere near as common as it used to be. What happened? It looks like the bad guys are having more and more trouble mounting successful DDoS attacks: how come? It also looks like, in desperation, they're targeting smaller fish. Why? Smaller botnets? Cloudflare, OVH, and the like just being too good at absorbing everything and anything you can throw at them? Simple firewall rules getting rid of 99% of the crap? What's the reason it's not as prevalent as it used to be?
Now that's a good point: especially since, if you DDoS while asking for a ransom, you take the risk that your botnet gets taken down. Whereas if you "discreetly" mine CPU- (and/or GPU-) mineable cryptocurrencies, you kind of fly under the radar.
There have been many attempts to do exactly that with varied success.
The best I could find was 250 a day using only 15,000 hosts. Not bad considering the cost is literally zero for the attackers. Scale that up to half a million hosts, which in reality is a tiny botnet, and that would make 8000+ a day, or over 3 million a year, based on low-hanging fruit.
A lot of the old DDoS attacks rely on the ability to spoof your IP address. Many networks are now configured to drop packets exiting their network that don’t have an address from their network.
For example, in a Smurf attack the attacker finds broadcast IP addresses by sending an ICMP request to an address and counting the number of ICMP replies that come back. A broadcast IP address is one that delivers a packet to every host on a network (often with 255 as the last octet, like 207.103.0.255 for a Class C network of 207.103.0.0/24).
After finding suitably large networks with an open broadcast IP address, they then send packets to the broadcast IP address with a spoofed source IP address of the victim. The attack is then multiplied by however many hosts are on the broadcast IP address's network.
DNS reflection is another type of DDoS attack that also relies on the ability to spoof an IP address of the victim.
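That exit filtering is essentially what BCP 38 describes. Here's a minimal sketch of the check a border device would apply to outbound traffic (Python only to illustrate the decision logic; the prefix is just the example network from above):

```python
# Egress-filtering sketch: only forward outbound packets whose source address
# actually belongs to the local network, which breaks spoofing-based attacks
# like Smurf and DNS reflection at their origin.
from ipaddress import ip_address, ip_network

LOCAL_PREFIX = ip_network("207.103.0.0/24")  # illustrative local prefix

def should_forward(src_ip: str) -> bool:
    """Return True only if the packet's source address is one of ours."""
    return ip_address(src_ip) in LOCAL_PREFIX

print(should_forward("207.103.0.17"))  # True  - legitimate local source
print(should_forward("198.51.100.9"))  # False - spoofed victim address, dropped
```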
Once you get to a certain scale, you don’t really worry about those vectors anymore.
The more interesting/difficult to mitigate attacks are those that complete handshakes (if TCP) and make fully formed requests at L7 that otherwise appear legitimate.
IoT and some other advancements still create opportunities for new DDoS attacks, but attackers herd. And the "X as a service" support infrastructure is mostly supporting ransomware right now, likely because it's safer and more lucrative. You can walk away from a ransomware target, fire and forget, so you can do it at scale. With DDoS you have to pick your victims, monitor and maintain the pressure, and choose how to allocate your resources to targets while they're investigating or waiting you out.
Cloudflare might be part of the story, maybe that was enough of a headwind to stop the trolls, but for the professional criminals, I suspect this is about lucrative alternative attacks.
Yes, I have the same suspicion. It is way harder to maintain a certain volume of DDoS, and some people use services to keep them online during a DDoS. Spending your time on a single action, encrypting and keeping a company hostage without further energy, is, what's the word, more efficient for those bad actors.
The general quality of DDoS scrubbing services has dramatically improved in the last 10 years. I work for a large tech company and Silverline has protected us from 100G+ attacks.
Cloudflare is the most well-known, but there are lots of providers now that offer this level of service, from the old guard like Akamai's Prolexic to newer ones like Imperva to tier-1 ISPs like Telia.
Additionally, depending on the exact service, you can certainly firewall traffic close to the source.
The specific problem here is mail servers: they haven't been a DDoS target until now, so few companies provide mail-exchange-specific DDoS protection. That means larger companies (Verizon/Yahoo, Microsoft/Outlook, Google/Gmail) just operate servers well beyond what they really need, and I don't think they can just run to Cloudflare and violate their privacy promise in the process.
More protection at the OS and ISP level. ISPs can isolate nodes that become part of botnets, and operating systems increasingly remove the avenues malicious actors use to cause trouble. Microsoft's push for hardware security is justly controversial, but the move to TPM by default in Windows 11 is the latest in a long line of changes that's made it harder to take over an ordinary person's computer. Android has had an equivalent since 8, and I'm pretty sure iOS has it.
Put that together with the shift to more people being mobile-only, and there are fewer ways to create the botnets behind these attacks.
> TPM by default in Windows 11 is the latest in a long line of changes that's made it harder to take over an ordinary person's computer
Interesting, I think of TPM as being for holding keys for BitLocker encryption or personal certificates. Can you clarify how TPM makes it harder to remotely take over a computer?
It's what you can do with the TPM. With the TPM to hold keys, you can require that e.g. bootloader changes be signed by the vendor. It's hard for malware to convince an ordinary person to go into BIOS and disable vendor locked bootloaders. Of course, Microsoft also gets into trouble here, because sometimes the vendors (and Microsoft itself) don't put the option to disable locking in the BIOS.
Bingo. I've seen multiple instances in the last year or so where people were advised to reboot their devices to make sure a newly identified and patched out malware was removed.
It's a lot easier for something like Windows Defender to untangle something confined to user space than something that can prevent the OS from protecting its files by taking over the boot process.
You are correct on the ISP level. I am a network engineer for an ISP; we utilize Corero to monitor and mitigate DDoS attacks into our network. Since 99% of the time the DDoS is not targeted at us but rather at a customer, I also kill the active IP addressing to their Modem/ONT and configure that endpoint so it isn't allowed to pull an IP. Once the attack stops, I re-config the endpoint and have it pull a new address.
Doesn't that simply mean that the customer loses connectivity, just as the attacker intended, for the duration of the attack?
From the ISP's point of view, you might have prevented an overload that could have affected other customers. From the customer's point of view, their service was denied all the same. Doesn't sound like anything has improved compared to 10-20 years ago.
Might want to check how much it costs to increase someone's billing 20k a day (granted, a botnet makes it cheaper, but measure opportunity cost of what else that botnet could be doing), see also https://www.reddit.com/r/aws/comments/7z6uc3/comment/dutgw6u...
AWS Shield (while expensive at $36,000 a year) does have `DDoS cost protection` as one of its features, i.e. if you have to 10x your server fleet to outscale and outlast the DDoS attack, then AWS will forgive the additional cost.
Is the attack on its webmail/website or on their smtp servers? I’ve been wondering about this, but how does one protect smtp servers from distributed attacks? Let’s assume smaller attackers, do you just need good firewalls in front of your servers to prevent congestion to the smtp servers? Are there off the shelf tools that can be configured to help here (pf maybe)? Do tools like fail2ban help?
For context, I've picked up self-hosting again after many years, and though I don't anticipate being a target for large attacks I have been curious what tools individuals have at their disposal, or if it's a fool's errand to even try.
> Is the attack on its webmail/website or on their smtp servers?
It is an attack on the httpd and smtpd daemons. imap has been unaffected as far as I can tell.
> how does one protect smtp servers from distributed attacks?
By design, MX can be as distributed and as numerous as you can afford or are willing to pay for. This can be a combination of load balancer virtual IP's distributing load to many MX servers behind them and many MX DNS records with the same or different priorities. This of course won't help much if the people attacking are paying ddos-as-a-service farms to bring on massive volume and packet rates that overload all your servers. There are DDoS scrubbing services you can pay for that will advertise your AS number or use GRE tunnels or VPN's to clean the attack data for you. These scrubbing solutions are no guarantee of mitigation.
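As a small illustration of the "many MX records" part above, here's a sketch using the dnspython library (example.com is a placeholder): sending MTAs sort MX records by preference and pick among the hosts sharing the lowest preference, which is how listing several records spreads inbound load.

```python
# Sketch: how multiple MX records distribute inbound mail across servers.
# Requires dnspython (pip install dnspython); example.com is a placeholder.
import random
import dns.resolver

answers = dns.resolver.resolve("example.com", "MX")
records = sorted(answers, key=lambda r: r.preference)

# All hosts sharing the lowest preference value are equally preferred...
best = [r for r in records if r.preference == records[0].preference]
# ...and a sending MTA picks one of them, roughly at random.
chosen = random.choice(best)
print(f"would deliver to {chosen.exchange} (preference {chosen.preference})")
```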
> Do tools like fail2ban help?
No. That would be pointless whack-a-mole. If an individual person is mad at you and launching a tool from their own PC or a handful of VM's, then yes, fail2ban will help. Blocking individual IP's on your MX servers under a real distributed DDoS attack would be futile. Scrubbing centers are about the only solution once the attack is big enough. Or if you had unlimited funds you could deploy many datacenters or point-of-presence destinations and build your own scrubbing networks, but that is very expensive.
I agree that you do need some form of first stage that can take the traffic volume (usually a third party service, but could also be 100+ Gbps of bandwidth you get somewhere and an appliance that can do IP filtering), no good way around that.
But once you have that, you should be able to tell them "forward me all traffic except <list of IPs>" and then list the IPs that are sending you the most (remaining) traffic, or even cost (e.g. if the IP is sending little traffic, but performing many TLS handshakes). That's where a fail2ban like tool would come in, no?
The benefit of this approach is that it works completely independently of the protocol you're running. TCP, UDP, doesn't matter, as long as the attacker cannot spoof IP addresses at scale.
Agreed adding bandwidth can help against some attacks.
> as long as the attacker cannot spoof IP addresses at scale
That is kind of the rub. Until a majority of tier-1 backbone providers implement BCP 38 [1] or some derivative of it, spoofing from the DDoS farms is trivial. There has been talk of implementing this for many years but very little action. Perhaps when DDoS attacks cost enough tax revenue or impact investors there may be a push for legislation in some countries, but in an ideal world most of the providers would work through this as one big team. I just made some network engineers laugh, or smirk, or something.
My understanding is that the attacker can generate a relatively small amount of IP-spoofed traffic, which is then used to generate a much bigger amount of amplified traffic from servers the attacker doesn't control (i.e. not IP spoofed).
So you need to be able to handle:
a) the reflected volumetric attack (you can block IPs and even protocols/ports here),
b) a much smaller volume of IP-spoofed traffic, which for TCP services basically means you have to be able to handle a SYN flood (because if they spoof, they can't get a connection established to do the nastier stuff)
c) make sure you don't let your countermeasures against a) be triggered by b) in a way that disrupts your legitimate customers.
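To put rough numbers on the split between a) and b) — the 50x factor below is just an assumed round figure for illustration, not from the comment above:

```python
# Back-of-the-envelope: the directly spoofed traffic (b) stays small while the
# reflectors turn it into the much larger volumetric flood (a) at the victim.
spoofed_gbps = 1        # what the attacker sends with the victim's forged source IP
amplification = 50      # assumed response/request size ratio of the reflectors
reflected_gbps = spoofed_gbps * amplification
print(f"{spoofed_gbps} Gbit/s of spoofed queries -> ~{reflected_gbps} Gbit/s at the victim")
```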
For a TCP based service, I'd expect the following to be quite effective:
1. At the very first stage (has to be able to handle the full volume, may be external), discard all UDP traffic, any TCP traffic pointing to the wrong port, and any traffic from blacklisted IPs.
2. At the second stage (has to be able to handle the IP-spoofable volume), terminate TCP connections using SYN cookies, and forward actual connections to the real servers.
3. At the real servers, any remaining incoming connection is coming from a non-spoofed IP. Monitor cost (TCP handshake, TLS handshake, authentication attempt) and successful logins per IP. Block IPs that cause a high cost but no legitimate users, or an extreme cost and few legitimate users, using the stage 1 blacklist.
4. Optionally you can also aggregate information about successful logins and block sources that cause a high load at stage 2 and have few successful logins using a short-lived entry on the stage 1 blacklist.
UDP based services would be much harder since you don't have such a trivial stage 2, but for TCP, stages 1+2 should be something that you can buy.
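For what stage 3 could look like, here's a rough sketch; every name and threshold below is made up for illustration, and the real thing would feed blocks back into the stage 1 filter via whatever API it exposes:

```python
# Sketch of per-IP cost accounting ("stage 3"): connections are already
# non-spoofed at this point, so we can attribute cost to source IPs and
# blacklist the ones that burn resources without ever logging in successfully.
import time
from collections import defaultdict

COST_LIMIT = 500        # abstract cost units per window (assumed value)
WINDOW_SECONDS = 60     # accounting window (assumed value)

class CostTracker:
    def __init__(self):
        self.cost = defaultdict(float)   # ip -> accumulated cost this window
        self.logins = defaultdict(int)   # ip -> successful logins this window
        self.blocked = set()             # ips already pushed to stage 1
        self.window_start = time.monotonic()

    def record(self, ip, cost, login_ok=False):
        self._maybe_reset()
        self.cost[ip] += cost
        if login_ok:
            self.logins[ip] += 1
        # High cost and zero legitimate users -> push to the stage 1 blacklist.
        if ip not in self.blocked and self.cost[ip] > COST_LIMIT and self.logins[ip] == 0:
            self.blocked.add(ip)
            blacklist_ip(ip)

    def _maybe_reset(self):
        if time.monotonic() - self.window_start > WINDOW_SECONDS:
            self.cost.clear()
            self.logins.clear()
            self.window_start = time.monotonic()

def blacklist_ip(ip):
    # Placeholder: in practice this would update the external stage 1 filter
    # (e.g. an nftables set or the scrubbing provider's API).
    print(f"blacklisting {ip}")

tracker = CostTracker()
tracker.record("203.0.113.7", cost=5, login_ok=True)    # normal user
for _ in range(200):
    tracker.record("198.51.100.23", cost=3)             # TLS handshakes, no logins
```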
Curious as you speak of "very little action", yet the Wikipedia article you linked says:
> […] contrary to general opinion about the lack of BCP 38 deployment, some 80% of the Internet (by various measures) were already applying anti-spoofing packet filtering
I know very little about global routing, so I'm wondering if it's an ineffective way of filtering against these attacks we see today or is 80% not enough to be effective?
It really isn't enough. Everyone needs to do their part [1] not just ISP's but also large businesses, cloud providers, others. And this has to occur in all regions. I admittedly over-simplified the problem statement.
Thanks for your response! So using specific examples here for smtp, I get a 1gbps guaranteed network from Hetzner so in theory I’d need to distribute over 50 servers to withstand this attack?
It’s not clear to me why fail2ban wouldn’t at least help, if the botnet is a thousand machines wouldn’t I (eventually) have them all blocked? And therefore reduce the overall duration of the attack? Or is the problem that it’s hard to differentiate between good clients and bad clients because no single client is sending enough traffic to be suspicious?
Also, do you have any specific examples of ddos scrubbing services? Would like to take a look specifically at affordability for individuals.
In regards to fail2ban: even assuming the attack was purely SMTP-specific (which it won't be), blocking IP's would be futile. The DDoS-as-a-service farms have hundreds of thousands to millions of IP's under their control. Block one IP and ten more show up. One IP is not one attacker. Those farms have probes that can tell how effective their attack is. Some of them even have "proven work" that is reported back to the buyer to validate the effectiveness of their attack. You may want to research this one as it is a very big topic.
In reality however, a large portion of the attack won't even show up in logs. Depending on how much the attacker is willing to spend, the attack will also contain tens or hundreds of millions of packets per second of TCP and UDP packets on random ports, no ports, random protocols, random sizes, random TTL's, random headers. That is a volumetric attack. Fail2ban and most network IDS/IPS's would not even see this attack. It would saturate the uplinks to the ISP before you even see anything. Your ISP will most likely null-route you and encourage you to stop advertising your AS number.
In reference to scrubbing centers for individuals, that is not a thing unless you have unlimited funds. There are some VPS providers that can scrub tiny attacks. Linode, Vultr, OVH to name a few but they can only deal with tiny attacks. If you want to research this for your business, my suggestion would be to get on the NANOG mailing lists and discuss it with all the network engineers to find out which scrubbing services are currently most effective. This is a moving target and a very big investment. I am not a fan of any of the companies that provide these services, but that is only based on my limited experiences with them.
Word, thanks for all the details! Yeah, I don't have any actual business use-cases outside of a personal interest in making my dedicated server reasonably protected.
You're welcome! For your personal MX servers, the most cost-effective solution I know of would be to have multiple domains, with each domain having its own MX records and a corresponding VM/server on its own unique provider to isolate them. Then ensure that people/businesses that are important to you know to contact you at 2+ email addresses/domains. If someone attacks one of your domains or MX servers, you can safely ignore the attack. These separate MX servers should be on different server/VPS providers in the event that the attack causes one of them to suspend your account. Your imap client can poll each of the servers/domains so that you get your emails.
Would a decentralized service architecture work better than a server-client arch?
I'm thinking of something like a lambda/FaaS platform where the computing endpoint is massively distributed?
I guess there is (currently) no 'one final solution', but like traditional security a good solution consists of many, many layers/rings of defences? Can one (in theory) decentralize all 7 layers of the OSI model?
Trying hard not to use the word (crypto/blockchain) here
Function as a service isn't really what I'd call decentralized. It's centralized at the service provider. But they mix your load in with everyone else's, and they serve the load in many places (maybe geographically distributed is decentralized?).
But, voip.ms has points of presence all over and they were DDoSed effectively recently; distribution can often help, but it's not enough. If an attacker sends X gigabits of garbage at your San Jose PoP and disables it, your other PoPs will likely function, but if they send X/N gigabits at each of your N PoPs, that might be enough to disable all of them.
Using a large provider can help a lot though. Volumetric DDoS is 'solved' by having large pipes and discarding lots of traffic. And, where possible, getting upstreams to discard lots of traffic before it arrives on your network. Large providers have large pipes and good relations with their carriers. Smaller providers or DIY setups don't.
Email is already designed to be decentralized. You can have a very large number of mail servers in any number of clouds behind multiple MX records, Anycast IP's, Load balancers, etc... The scale of distribution required to defend against these attacks is possible, but prohibitively expensive. Some VPS providers do support Anycast so in theory you could dynamically spin up tens of thousands of inbound MX nodes that would at least spool the emails. Anycast will allow you to have any number of MX servers appear as one IP address. Those nodes in turn then need to relay that to your centralized servers. This brings up more scalability issues as you could essentially DDoS yourself in this pattern. There may not be enough clouds to support this idea and the cost would give just about any CTO sticker shock or medical issues.
If you mean distributing the destination servers that people access that is basically the same pattern. One could in theory spin up cloud instances of the email httpd/smtpd servers but there are still diminishing returns. This also requires the clients to know how to route to the right clusters of servers. Not easy, but doable. Ultimately this is the same problem people run into with web servers hosted on AWS. You can spin up more instances but there are still network link bottlenecks going into each region. AWS and the like have partnerships with scrubbing centers. This model breaks down if the attacker is willing to spend more than $500. Most massive scale attacks cost less than $200 on the dark web. If the extortionists believe they can get more than say $1k, then they might be willing to spend enough to even take down some scrubbing centers. It can cost hundreds of millions to defend against a $200 attack.
The current email RFC's would not support something like crypto/blockchain. At that point you are basically inventing a new standard and adoption may take a very long time unless there is a compelling business advantage to it in my opinion.
You're thinking too small. DDoSaaS operations use hundreds of thousands to millions of compromised machines* to attack. You might only see a handful of connections from each IP. Which makes it fruitless to use something like f2b.
* Machines here could be anything from a lightbulb to a server.
At this scale you pretty much need to apply some sort of DDoS scrubbing service. Your ISP might already have one they can route traffic through or if you have your own AS you can let a DDoS service announce the target prefixes.
A game I run was recently hit with a 102 Gbps CLDAP reflection attack. We were down for a while until our ISP's DDoS protection detected it; after that we were mostly unaffected. If the attack is difficult to separate from legitimate traffic you'll still suffer, though.
I've been a server operator and was subject to DDoS attacks, and I've also tried self-hosting, so I am familiar with the challenges. Short answer: 80% of the time you will be able to block a <10gbps attack (assuming that's your uplink speed) at the node level. Most boosters use similar attack methods, and it just requires you to drop packets as far upstream as possible (preferably on the NIC as a hardware offload). In most cases you can block common SOURCE ports of packets. Note, there may be legitimate uses for some protocols, but I doubt most customers use them. If the attack is >10gbps you most likely need a BGP scrubber that can divert your traffic and "tank" the bandwidth before it reaches your line lease. If your uplink is saturated it doesn't matter how much your hardware can filter, good traffic won't be able to get through.
There are theoretically cheap ways to help a customer under attack by setting up a GRE tunnel to act as that scrubbing layer, however it's a bit annoying. Get a VM on a host that already has that capability, and route your packets through it first. The con is that you have one more failure mode and increased latency.
To add to what I said, the general options and availability on the stack is something like:
1. Network level:
- BGP Filtering (meaning in order to reach your IP range, packets must first route through another company that has sufficient bandwidth to receive AND filter out bad packets), then clean traffic will go through to your data center, and to your server.
- GRE Tunnels, Similar to above, but it will not be transparent in that you will likely use their IPs.
2. Datacenter level:
- Colocate or find a dedicated server that sits behind a dedicated appliance that solely exists to act as a filter. You will also need to ask what their upstream link speeds are (i.e. 40gbps to that appliance). You still might encounter leaks, and rely on the fact that they have configured correctly or are willing to apply custom filtering if you have an advanced attack.
3. Node/Server level:
- By now, you can only filter up to whatever your line rate is (i.e. 1gbps or 10gbps most likely). There are various methods, but all of them require you to create custom filters and be active in patching up leaks. You'll want to do it as far up in the stack as possible. The best option is SmartNICs or NICs that support hardware offloads. The second best option is filtering before IPtables. Most tutorials online talk about IPtables; that assumes you're on Linux in the first place, and it's also not the optimal way. Use tc (traffic control) instead, it hooks in further up the network stack.
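To make the "block common source ports" advice concrete, here's just the decision logic in Python (the real filter would be an nftables/tc/NIC rule; the port list is a sample of well-known reflection services, not an exhaustive recommendation):

```python
# Sketch: drop UDP packets whose *source* port matches a known reflection
# service, since responses from these services are a common amplification vector.
AMPLIFICATION_SRC_PORTS = {
    19,      # chargen
    53,      # DNS (open resolvers)
    123,     # NTP
    389,     # CLDAP
    1900,    # SSDP
    11211,   # memcached
}

def drop_packet(proto: str, src_port: int) -> bool:
    """Return True if the packet looks like reflection traffic and should be dropped."""
    return proto == "udp" and src_port in AMPLIFICATION_SRC_PORTS

print(drop_packet("udp", 389))   # True  - looks like CLDAP reflection
print(drop_packet("tcp", 443))   # False - ordinary TCP traffic
```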
If they're hitting the outbound SMTP servers, there's no way (at least that I know of) to protect the IP/servers via obfuscation with a service like Cloudflare etc. Email deliverability relies heavily on the source IP of the sending SMTP for reputation and is going to be viewable in the headers of an email. Also changing IPs isn't a small task unless you're sitting on a load of good reputation IPs that are pre-warmed up.
I think a scrubbing service would be the only way to help or of course, having enough resources to deal with it directly (bandwidth, cpu, etc).
Disclaimer: I run Pingly [1] an email hosting service, but ironically our signups are turned off at the moment due to a botnet that hits us with fake accounts to send spam that I'm working to mitigate completely.
Cloudflare has a product where you don't actually need any public IPs to host your apps. You install a daemon on each server and firewall off the box. It makes it virtually impossible for someone to get around the DDOS protection.
> Since these DDoS attacks started we have worked with our system administrators and Internet Service Provider to mitigate the attacks
How do you mitigate such an attack though? I know Cloudflare can stop this, but how do you create your own bespoke 'DDOS mitigation' tool, and what does that look like?
You simply need bigger pipes to ingest more traffic than the attack can provide. The assumption these days is that in some cases packet analysis costs too much in compute and power, compared to simply scaling up the connection to swallow the attack.
And the "further up the chain" you can move the mitigation, the easier that is. Mitigating on your box requires a huge pipe to your box, but if your provider can mitigate at their border router, well those are bigger and already have huge traffic to and through them.
And it's in their interest to talk to their providers and mitigate even further back if possible. And some mitigations are relatively easy (block all DNS traffic to this subnet, etc).
One way is to build your software on top of distributed/content-addressed P2P software (not Blockchain, but pure P2P). The angle of attack disappears completely then.
Otherwise, with centralized infrastructure, you identify the bad traffic and send it elsewhere, to an instance that keeps connections open but delivers nothing and uses few resources. The problem is that you still need to be able to handle the traffic, but at least you avoid hitting your main infrastructure.
> One way is to build your software on top of distributed/content-addressed P2P software (not Blockchain, but pure P2P). The angle of attack disappears completely then.
I can see how this works for some applications, but how would this work for SMTP and IMAP/JMAP?
You can actually get this semi-transparently with a TOR-like system. (I'm not sure TOR itself actually implements all of this, and also TOR is perennially underprovisioned for political reasons, so this mostly won't work in practice.)
First, you need a distributed DNS mechanism to publish "example.com. 9999 IN MX 1 abcdefghijklmnop.onion". This is mostly static, so DDOS doesn't really work.
You then come up with a sequence of rendezvous servers, which we'll number starting from 1. (I think actual TOR just assumes a single (or few) rendezvous server is sufficient, but I'd need to go dig through the code to be sure.)
You then try (in sequence) servers 1,2+rand(2),4+rand(4),...,2^k+rand(2^k),... where rand(x) picks a random number in [0,x). (0: I'm not sure whether distinct rands should share lower bits; see below.) If a server is overloaded it just drops traffic on the floor.
The destination server then checks the rendezvous servers in sequence until it's gotten enough successes that any client would have tried one of the successful rendezvous servers in its random sequence ([0] above affects the distribution here).
Under heavy load, the destination server also sets a proof of work requirement (clientHello satisfies the standard hash-has-x-leading-zeros), which allows the rendezvous servers to drop most of the incoming traffic. Legitimate clients by definition are not spamming connections as fast as they can, so they can burn CPU to meet this requirement. DDOS clients can also burn CPU on this, but that reduces the rate at which they generate traffic.
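For concreteness, here's a minimal sketch of such a proof-of-work gate (SHA-256 and the parameter names are my assumptions, not part of the scheme above): the client grinds a nonce until its hello hashes with enough leading zero bits, and a rendezvous server can verify with a single hash before forwarding anything.

```python
# Proof-of-work sketch: cheap for the server to verify, expensive for clients
# that try to open connections as fast as possible.
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(client_hello: bytes, difficulty: int) -> int:
    """Client side: grind nonces until the hash meets the difficulty."""
    for nonce in count():
        digest = hashlib.sha256(client_hello + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(client_hello: bytes, nonce: int, difficulty: int) -> bool:
    """Rendezvous server side: one hash to accept or drop the connection attempt."""
    digest = hashlib.sha256(client_hello + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

hello = b"example clientHello bytes"
nonce = solve(hello, difficulty=16)   # ~65k hashes on average at 16 bits
print(verify(hello, nonce, 16))       # True
```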
The end result is that volumetric attacks are spread over 2^k rendezvous servers, where k is dynamically chosen such that they can handle the load[2], while for faux-legitimate attacks[1], DDOS will just push up the computational costs for legitimate clients without ever actually shutting down the target[3].
This works for anything TCP-like.
1: You can DDOS anything by just behaving like (absurdly many) legitimate clients, e.g. `while true;do wget http://example.com/;done`.
2: If the rendezvous and destination servers are all similar, then the work per server scales as the square root of the attack volume. Or put the other way around, the amount of attack traffic this setup can absorb scales as the square of the traffic each server can handle.
3: So for faux-legitimate attacks, the attacker's goal is not to overwhelm the server, but to maximize the costs to legitimate clients trying to connect; the attacker will generate only (roughly) as much traffic as the destination can handle, with as large a proof-of-work as possible. Assuming the destination server normally runs around 50% load, the total work imposed on legitimate clients (distributed over all of them) will be about the same as the attacker's available CPU. If the destination server normally runs significantly below 50% load, the imposed work will be proportionately lower.
Only the initial identity is non-zero cost (and there are free IDs, but they're more likely to get banned for spam).
The point of the cost is that it's cheap but non-zero. It makes spam uneconomical, creates pseudonyms for accruing reputation and makes it trivial to moderate (IDs are permanent).
Applications built on the platform can take advantage of this ID system and none of them need to rebuild auth or handle networking across the web. This means application devs can just focus on their apps and distribution is trivial.
The modern web is a nightmare of complexity for people trying to build applications, you basically need to raise VC and have a SaaS in order to be able to hire the armies of people required to build anything.
But Urbit is an "overlay OS" -- which means it still uses all the existing stuff (Linux, filesystems, UDP, TCP, HTTP, Javascript, web browsers for frontend) but adds a whole bunch more stuff on top of it [0]. So you are not reducing complexity, but increasing it.
And its identity management is based on Ethereum, so maybe just attach a "Sign-in with Ethereum" component to a Mastodon instance, and you get something with the same featureset as Urbit, but somewhat simpler and vastly more supportable. And as an extra, you get nice features like multi-machine scaling and live encrypted backups.
Overlay OS is a strategy (worked well for the web), long term it'd be nice to run directly on the metal but that's obviously a ways off.
You can have simplicity with the proper abstractions, and a design to get there over time; then you can fix it bit by bit. This is the only way a massive rewrite of the stack like this has any hope of success.
Mastodon is mostly a twitter clone - with a lot of effort you could probably tie server creation to eth IDs and resolve that bit at least (make it easier to spin up servers), but you don't solve the issue with updating servers to match versions and the fall out from all of that other stuff. You also don't solve application creation or distribution. Urbit's OS is designed as a deterministic function of its inputs - this is necessary to make it trivial to run/update the nodes. It's more than there being no distinction between users and nodes (though that's a big part).
I'd bet against any other federated system long term. Urbit is still a long shot, but if it wins - it wins big.
Reading through r/VOIP, I could have sworn the Voip.ms attack was 2 weeks ago - then other providers began to be hit with DDoS attacks.
Frankly, it doesn't seem like such a bad strategy. With the amount of panic the attack on Voip.ms caused, a number of people were discussing switching voip providers. And switching targets to other providers just as they get a massive influx of new customers seems like a force multiplier IMO.
I've been with RunBox since 2012 because of Norway's internet privacy policies. I support them not giving in to extortion. If I don't have email for a few days, or a month, big deal. I remember how to use a phone to pay bills.
My employer uses runbox. I'm happy you're so unattached that you can go a month without any email (can't sign up anywhere, for example) but we'd miss incoming invoices, can't send invoices to our customers, can't deliver reports to customers (we do security audits so that's kinda important to be able to deliver using e.g. pgp), would miss requests for new assignments...
I'm fully in favor of not paying into the extortion and not going for a global tap system like cloudflare, but we do need some solution in at most a few days. In the Netherlands there is for example the Nawas (it's also a pun, referring to a laundry cleaning thing) that scrubs malicious traffic for ISPs. I don't know the details of how they're being attacked, but with typical reflector services it's easy to remove that traffic based on a source port. Any large Norwegian hosting company (they just need an uplink bigger than the DDoS, or work with whoever the traffic comes from at their peering points) could provide that service as well. Not saying this is trivial but there are options other than waiting.
This is a surprisingly insensitive comment. Email is as much a part of modern life as phone calls and postal mail. Asking people to "just go without" for a few days may be possible for you, but is certainly not possible for a huge fraction of people.
The sooner these kids realize that DDoS extortion attacks do nothing except waste resources (and monetary resource to CloudFlare, basically), the sooner they'll give up.
They hit a number of VoIP providers recently, too.
Today we can help quite a bit by proxying TCP traffic using Cloudflare Spectrum, e.g., spoofed traffic will never reach origin as it can't complete a handshake, we can use things like SYN cookies to challenge source, etc.
In the future, there's a lot more we plan to do here.
This isn't new; it's called a protection racket and it works especially well if you actually do have a mechanism to protect against the thing (though of course that's not a requirement if you're generating most of the problem yourself).
Does anyone know what the cost of performing a DDoS is these days? As a target, if you can't do anything else, is it best to hang in there and wait it out due to mounting costs for the attacker?
DDoS is said to be very cheap. What if we used that to boot the culprits? It's obviously illegal, so just as a thought experiment: if those amplifiers (dns resolvers or whatever is popular at the moment) started experiencing issues due to their servers being a nuisance to others...?
It still seems much easier to just blackhole IPs that are causing problems, like collectively (at the edge of your AS) block IPs that long-term host a service that is actively involved in facilitating DDoS attacks, but for some reason nobody is doing that. This could be a more direct way: see where DDoS traffic is coming from and... poof
Not really practical or effective to implement. The attacks more often than not come from botnets comprised of compromised consumer devices. You can’t just outright drop traffic from residential ISPs.
Ah right, I was figuring most of those misconfigured udp services (dns, ntp, ...) were running on servers rather than regular home IPs. That does make it a little different.
Still, if an ISP has had multiple abuse reports for the same subscriber and they're not doing anything, after some time it starts to become reasonable to block this IP, and in a further escalation, this ISP's ranges altogether until they clean their act up. I remember getting the Internet connection blocked as a teenager on an XS4ALL connection for being an ass on the Internet (I tried to DoS a domain squatter that tried to sell a domain I wanted for a thousand times the price with no added value). The abuse desk which I had to contact to unblock it took my promise to not do it again seriously (as did I), not sure how other ISPs handle this.
I kept having errors in sending emails and logging into the web interface, and the mailbox.org status page indicated nothing was wrong. Why have the page if that's not where I'm supposed to find out about something like this?
If you didn’t want to depend on a big provider like AWS or Cloudflare, what is the approach to fending off a DDoS attack? What type of hardware would you need to acquire? What type of software? Are there guides on this type of thing?
You could try to do it yourself with firewall rules, reverse proxies, things like that, but to fight large scale ddos you really need to be moving the traffic around using BGP, and you'd want to dump the traffic somewhere, so you'd need bandwidth to dump the traffic into, that's why companies like cloudflare exist, they're able to work with bgp and they have a lot of bandwidth to absorb the traffic on behalf of the customer.
You need a massive amount of bandwidth and a few redundant servers. There are countries with less bandwidth than you would need to handle this. It isn't impossible, but Cloudflare isn't evil (that I know of?), so it is best to use them as your backup.
Coincidentally, the app password that I've used for Fastmail's CalDAV service for years suddenly started causing 403's today. I wonder if that's related (but can't think of how it could be)
(I work for Fastmail.) One of our attempts at doing some mitigation of the attack caused this; we fixed it about 16:00 US Eastern this afternoon. Sorry about that!
Without getting into too much detail: we were limiting some HTTP methods, but unintentionally blocked REPORT, which DAV clients use to see what’s changed.
This is going to sound weird, but my father can't send email with attachments currently. I'm a long time fastmail user/customer and I'm also a long time IT guy so can't see what's going on.
Basically SMTP login and everything works fine, client sends the message and then it just times out.
Well, that explains it. I literally switched from Gmail to Fastmail 2 days ago and it was going amazingly until last night when I couldn't load it. "Oh great, an unreliable service I just paid a year for" -- this makes me want to support them even more. Their customer support was fantastic. I didn't inquire about the outage, but about a DNS issue I'd created.
If you're a heavy gmail user you should read this story of my last year (out of 10+) with them and IMO migrate off as soon as you can. I wish I hadn't used an @gmail as long as I did.
For some unknown reason, they won't tell me, G marked my google pay account as "possible fraud, unable to verify identity" at some point this year. No idea when, I realized it when CC expired and my 3 month old Google Fi account wouldn't pay its bill and it wouldn't let me enter a credit card to pay for any G service. My G services started to drop like flies once my card expired. They locked me out of Google Pay and won't let me enter a new CC to pay for subs/gdrive. I had to clear out the entire 50gb of my gdrive so that I could get EMAIL again to my now "free" gmail account which is hovering around its 15gb limit, that's how old it is. The real kicker; I can't contact support anymore because I'm not a "paying customer" who gets their "world class support."
The fun part of this was I was completely without a cell phone for a week until I got onto ATT. Support (live chat) was completely useless and just sent me to a page to send them a copy of my license and a utility bill. I did that about 4 times now and I still can't change CCs on my Google account to pay for things. And they still won't contact me, won't tell me what the deal is or anything. It's been about 6 months.
I was joking to a friend before I switched to Fi "watch this be a terrible idea, it breaks and I'm without a phone for a month and have nobody to fix it." Welp.
Anyway, I tried out Protonmail months ago and was completely unimpressed with its search, interface, etc. I had low expectations of Fastmail (due to the Proton experience) but it's great, I'm really impressed. It threads conversations like you'd expect and I could actually find things I searched for. I sold a house in the middle of my Proton use and it was miserable keeping track of all of the emails/docs/etc. I was getting and sending. I remember searching for important attachments and it finding tens or hundreds of attachments that were just my realtor's signature picture of herself, which showed up in every email.
Sorry for the tangent, be very careful of how much you rely on G services because this cliche horror story you randomly see on twitter/reddit completely happened to me. I'll never be able to stop using this @gmail account because I have so, so many things tied to it but I'm going to try my best to undo most of that.
My biggest fear now is that my G account gets completely locked and all of the things I use G to authenticate to will be lost. Undoing all that is a nightmare.
I feel like I hear more and more of these horror stories. I’ve been looking to migrate my custom domain away from Google for a while and was also looking at Fastmail. I use an @gmail.com for everyday things because it’s widely recognised and easy to communicate to people over the phone or even face to face in a shop for a digital receipt.
About the domain, I recommend you get your own domain if you ever migrate off Gmail. That way you hopefully won't have to do it again anytime soon, even if you move to another provider
If you keep verbal communication in mind when picking a domain it's fine. Get a five-character one and it's five syllables instead of Gmail's two.
Doing the thing where you try to sign up to services using servicename@yourdomain.com confuses people though. I stopped doing that after having to explain to people I receive emails to any address sent to my domain a few times.
I could see using uncommon top level domains being a problem as well.
I went first initial, middle initial, first three letters of last name. If that doesn't work for you, I'd just try different combinations of your name's letters until you get something available.
Mine is firstname@firstnameLastinitial.dev and it makes people stumble, especially the .dev, also it's kind of rare nowadays but some sites don't think thats a TLD.
Thanks for the reminder that one of my most critical domain names is still on google domains. I've started using porkbun for newer domains I get but I still have the main one on google. I need to get that off before they lock me out of the console or something ridiculous.
Our Bank "A" suffered a major DDoS attack recently. My non-techy partner, upset, declared we were moving to Bank "B". I pointed out that Bank B had also been the target of DDoS attacks, so we would be moving banks only to face the exact same issues. Point being, you may move to a different service provider (email, banking, whatever) only to find the new one has the same problems anyway.
Is it really worth the hassle to move to another provider/self-hosted server for such a temporary problem?
Moving to another provider would mean setting up a new account, and usually the free tiers won't allow you to use your own domain. So you would be locked to a new provider where you could face the same problems.
Temporarily moving to a self-hosted server would be an option, but probably just to receive emails during this time. You'd have to set up your certificate, and optionally DKIM, DMARC, SPF or whatever is required to ensure that your sent emails arrive properly. I can't imagine self-hosting email being something which won't give you a hard time every now and then.
Generally yes, I am in favor of owning your email-domain, but then using it with a professional provider like mailbox.org unless you really know enough about the topic.
But in this case, where the issue is a DDoS attack, I wouldn't do anything, since all undelivered email will be re-sent at a later time.
Or less hassle than completely moving over to a competitor (terrorists win in that case, to use counter strike terminology): add a backup MX record, perhaps to a small vps that just forwards mail to the real server with no retry timeout.
Are there any mail admins in the comments who could propose an smtp daemon & configs that would store all mail for a domain and wait indefinitely to forward it on? I think there's a lot of users/businesses who would like to implement this.
It doesn't look like that email actually belongs to Fastmail staff, but if it did, having a backup on another email service is a good idea in case there's a DNS issue with Fastmail or something.
It's the same idea behind a VPS provider serving their status page on another company's infrastructure.
The name on that email is not one of the Runbox folks listed on their About page, so one can only guess who that actually is or how the email was sent; it could have been a BCC for all we know. https://runbox.com/about/runbox-team/
On another note, the amount they are asking for seems really reasonable; how are they making money? A DDoS attack must cost more than 0.06 BTC (around $3500 USD) to run all weekend?
I’d assume they’re not actually paying for these attacks and have access to a large botnet, which would imply no cost (outside of hours spent constructing the attack) to the adversary, right?
A DDoS can be rented for less than $100/mo depending on how much bandwidth you want to flood. Remember, they're compromised machines; you're not paying for egress bandwidth.
Are most machines involved in attacks like this compromised? Are we talking generic consumer desktop machines, like my grandma’s old XP desktop running in her basement?