PS Technical details: http://blog.cloudflare.com/the-ddos-that-almost-broke-the-in...
Blame poorly configured DNS servers and ISPs failing to configure their networks properly - so that traffic with a source address outside your allocated IP block is not allowed to leave your network. It is not that hard!
The Internet Infrastructure is working as designed.
If you run a DNS server, it is your responsibility to maintain and protect it so that it cannot be used to attack others; by doing that you are helping the 'Internet infrastructure' remain intact as designed. By not doing this, you are helping the attackers.
BCP 38 (RFC 2827) is designed to limit the impact of distributed denial-of-service attacks by denying traffic with spoofed addresses access to the network.
It may not be that hard to set things up this way, but very few ISPs configure their network with this restriction.
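The egress rule BCP 38 asks for is simple to state: drop any outbound packet whose source address falls outside the blocks allocated to your network. A minimal sketch of that check in Python (the prefixes below are documentation-only examples, not any real ISP's allocation):

```python
import ipaddress

# Hypothetical blocks allocated to this network (example-only prefixes).
ALLOCATED = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def egress_allowed(src_ip: str) -> bool:
    """BCP 38 egress check: permit the packet only if its source
    address belongs to one of our allocated prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOCATED)

print(egress_allowed("198.51.100.7"))  # True: inside our allocation
print(egress_allowed("192.0.2.1"))     # False: spoofed source, drop it
```

In practice this is one ACL or uRPF line on the edge router, which is why commenters keep saying it is not that hard.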
If Spamhaus' Linford is quoted accurately, he's kind of full of it. The NYT article gives more detail about CloudFlare's involvement.
Not just for the awesome read but this seems like a data point for the global internet. Very much of interest.
It has an actual location. The name of the owner is known. It has evidently been involved in legal disputes, so it is on record with the government.
Much more likely is someone using the hosting system for something nefarious is retaliating against spamhaus. I don't think the hosting company should go down for that.
I abhor censorship. Does Spamhaus engage in it?
That being said, the problem with many BLs is that they are run by incompetents or extremists. They usually either end up blocking things that are not spam by accident (see lists of supposedly dynamic IPs), or block whole subnets (sometimes entire ISPs) to try and "teach them a lesson" or blackmail them into fixing the problem.
Unfortunately, that includes Spamhaus.
It's a bit sad to see how many companies will blindly support such entities because they've "heard" that they somehow help fight spam. As someone who's had issues with them because of their badly configured hosts and shady practices (e.g. using domains previously used by mail providers as "spam honeypots", meaning anyone who emails someone with an old address can be banned [all content mailed there is considered spam, regardless of what it actually is]), I am disappointed (yes, looking at you cloudflare).
AIUI, as mhurron says, what Spamhaus does is publish a list.
The nominal purpose of that list is to identify spammers, so that people who wish to filter out spam can be assisted by that. People do, in fact, use that list, to filter email. The email recipient wants to be protected from spam, so the recipient's ISP attempts to perform that service, and Spamhaus contributes an opinion that the ISP takes seriously.
So, in practice, if Spamhaus adds you to their list, many many users will stop seeing email from you. Spamhaus has a great deal of power to mostly-silence domains.
I have no reason to believe that Spamhaus uses their power for anything other than good. But it's not quite as simple as "do they censor? no".
In CRAs' own opinions, they are practicing "free speech" and giving what amounts to "numeric editorials on the quality of companies." In critics' opinions, large corporations and sometimes governments are relying on these "editorials", so CRAs' abilities to say whatever-the-heck-they-want should be regulated.
From what I understand, Spamhaus basically provides lists that identify known spammers, or known spam hosts. These lists are used for things like filtering out spam emails. So Spamhaus is basically saying that Cyberbunker is a host to spammers, and therefore email coming from the Cyberbunker's IP addresses should be treated as spam.
Spamhaus isn't preventing anyone from putting anything anywhere, they provide a service that others can use so they don't have to see it.
This post is in no way a comment on any of Spamhaus's practices, which have garnered some criticism.
Some of my sites have had short periods of slow response times for the past few days, but I assumed it was the crappy host they're on. One of my clients on CF hasn't had any issues.
I assume it was related to this attack.
"Before the break of dawn on a morning in April, a full SWAT team was sent to execute a search warrant on CyberBunker's property."
"It must not have occurred to the officers that the blast doors were designed to withstand a 20 megaton nuclear explosion from close range. When the SWAT team realized that the door was not being opened for them, they threw flashbangs and took other actions to draw attention."
And from the NYT article:
“Dutch authorities and the police have made several attempts to enter the bunker by force,” the site said. “None of these attempts were successful.”
Haha, this is too funny.
More detailed article on NYT
"On the other side of the blast doors, no one inside the bunker noticed anything unusual."
According to that story, nobody even realized that a SWAT team had tried to break their door down until hours later!
Edit: talking about this picture: http://cyberbunker.com/web/images/swat-bunker.jpg
Edit2: I think it's more difficult to determine if it's real than I thought.
Light could be coming from the top (around midday) and be blocked in arbitrary parts by the trees, so one part of the place gets light from the right, while another gets it from the left.
I think they shopped the photo to illustrate their story -- they’re not claiming it’s genuine I imagine (after all, apparently it took place while they were asleep).
Everything he does however should be taken very seriously. He has the technical chops (old school hacker) and the means and criminal connections to back it up.
Also, not everybody seems to love Spamhaus, which is largely overlooked in the articles.
Missing in this is who is attacking Spamhaus and why -- what is motivating the attackers to risk eventual detection or capture. Also, it seems implied but not stated: are all of the attacks on Spamhaus originating from leased servers?
However, realistically speaking it would be very, very simple to get their attention. With one shovel: http://online.wsj.com/article/SB1000142405274870463000457624...
You can read that they were "operating from a Cold War era government command bunker that was purpose-built by the military to house sensitive electronic gear".
This makes the story about the SWAT team very believable.
They claim that the SWAT team just gave up and went back to their police station and then denied they ever went there. If a SWAT team were executing a search warrant or seeking to arrest people, they would not just decide to drop the case because they couldn't get in.
They also claim to have extensive video of this event but decided not to release any of it because the police gave them 8,000 Euros to repair their fence. That video would be worth orders of magnitude more for PR purposes if they released it.
If it walks like a duck and quacks like a duck...
Having said that, in the case of 20Mt nukes I suspect 5km does count as "close range".
So, yeah, 5km counts as "close range".
Not even a large mountain is going to withstand multiple 20Mt hits.
Tom Clancy described these missiles as having the mission to "turn Cheyenne Mountain into Cheyenne Lake."
The novel "Arc Light" also has a description (probably not that accurate) of a limited strike by Russia against the US that includes destruction of the bunkers at Cheyenne Mountain, Raven Rock Mountain and Omaha:
On the other hand, an early strike at both ends of the tunnel is very different from eroding it over some period of time (has to be some separation to avoid fratricide); if the construction was done well enough, the latter might have provided more time for NORAD to transmit additional data about the attack in progress.
Then you could have Cory Doctorow write it.
Oops! A URL in your Tweet appears to link to a page that has spammy or unsafe content. Learn more
If this is caused by Spamhaus then I support the other guys. Censorship is worse than spam.
Spamhaus identifies spammers. They don't take down websites or block connections.
Physically cutting its uplink lines would be IMHO the most efficient way to neutralize that datacenter.
When I say "our", I mean the loose knit group of sysadmins, self proclaimed "computer people", hackers, phreakers, security experts, and government officials trying to quell the increasing lurch of botnets and malware that has gone on since the Eternal September.
Botnets get big because users don't know any better, users don't know better partly out of laziness, partly because they feel they can't know any better. I don't know of a single site I can point to and say "If you really give a shit about not getting your credit card data stolen, go here." Instead as far as I can tell the majority of users in this demographic have their needs "met" by fraudsters selling bogus antivirus packages and weird proprietary utilities.
If you want a computing environment that can survive open, it needs users who can use open.
It used to be that you could just tell people to install a security suite on their computer and they'd be mostly OK. I don't think that's really true any longer.
You could also partly lay the blame at Microsoft's door for getting users to start connecting to the internet with an OS designed without any reasonable security (Windows 95).
Now that we have operating systems with better security it's hard to change people's usage patterns to take advantage of that.
Even a sophisticated user is just as vulnerable in many cases. If I give personal information to a third party site that I presume to be trustworthy (say a government site) there's no way I can know if someone is going to find SQLi vulnerabilities in that site next week and exfiltrate all of that data.
SQLi should not be a thing. At all. It's the most trivial fucking thing in the world to validate data before you use it in an SQL query, and people get it wrong, every single day. Security isn't hard, it's just tedious.
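The fix the parent is alluding to is just parameterized queries: never splice user input into SQL text. A minimal sqlite3 sketch (the table and the hostile input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

hostile = "alice' OR '1'='1"  # classic injection payload

# Wrong: string interpolation lets the payload rewrite the query:
#   conn.execute(f"SELECT email FROM users WHERE name = '{hostile}'")
# would return every row instead of none.

# Right: a bound parameter is treated as data, never as SQL.
cur = conn.execute("SELECT email FROM users WHERE name = ?", (hostile,))
print(cur.fetchall())  # [] -- the payload matches no user
```

The safe version is also less code than hand-escaping, which is the commenter's point: tedious, not hard.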
That's just the way the Internet is right now.
Sure, there's evolving threats, there's drive-bys that will slip around all of this, there's ways attackers could still get through. But as scary as it all is, anything beyond these steps gets into the territory of major inconvenience. The problem with that is, the more intrusive and inconvenient the security becomes, the less likely people are going to be to actually use and remember their security practices. If mom can't repeat it at her book club, it's not going to be effective. And to be honest, these types of attacks that bypass these restrictions are exceedingly rare when it comes to mom and grandma. The biggest threat there is phishing and malware. Corporate security has professionals enforcing a policy that meets the business's own requirements.
So to answer your question, yes, that's all you can reasonably do. In most cases, you'll be pretty well protected with just that, and those steps aren't too complicated to follow or remember.
Turn the computer off when you're not using it.
If I look at the logs, it's all connections to CDNs with weird hostnames. How do I know which ones are legit and which ones might be part of a DDOS?
Also, CC numbers are 16 bytes long and would just get lost in all the noise.
The internet is not safe for banking, and I don't see any way it can be made safe.
Have you noticed the spate of attacks against SSL lately? BEAST, CRIME, Lucky 13, RC4 in general? https://en.wikipedia.org/wiki/BEAST_%28computer_security%29#... Not profitable for some things, maybe, but definitely worth mounting such an attack for banking info.
Have you noticed that the certificate authority system is totally broken? http://www.theregister.co.uk/2011/04/11/state_of_ssl_analysi... Heard of DigiNotar? Comodo? http://arstechnica.com/security/2011/09/comodo-hacker-i-hack... These aren't hypothetical attacks! Google got MITM'd by Iran https://blog.mozilla.org/security/2011/08/29/fraudulent-goog...
Most people's personal finance is probably safe to do on the Internet because of legal requirements on banks. Small businesses are another matter.
As well as the daily stats, by the way: https://ams-ix.net/technical/statistics
Reading on through the article, they continue about Spamhaus. What's that got to do with slowing down the internet? And "But we're up - they haven't been able to knock us down." is factually incorrect, Spamhaus did go down. They're winning in the end, but they did go down.
> He added: "These attacks are peaking at 300 gb/s (gigabits per second).
Source? 300gbps would definitely be visible, and I think I remember hearing about something between 60 and 100gbps.
> Spamhaus is able to cope, the group says, as it has highly distributed infrastructure in a number of countries
> We can't be brought down
We've seen that. Am I missing information or is this a lie?
From Cloudflares response [1 - nice graph in blog] ~75Gbps extra traffic was hitting part of their network.
Obviously there would have been much more traffic floating around and getting dropped by ISPs that have correctly configured their outgoing traffic filters.
Many [not all] ISPs that were affected only have themselves to blame, the 'Internet' didn't slow down - the part that they are responsible for did - and it was their fault....
In October, 2011, Spamhaus identified CyberBunker as providing hosting for spammers and contacted their upstream provider, A2B, demanding service be cancelled. A2B initially refused, blocking only a single IP address linked to spamming. Spamhaus retaliated by blacklisting all of A2B address space. A2B capitulated, dropping CyberBunker, but then filed complaints with the Dutch police against Spamhaus for extortion.
I understand taking pride in your work but isn't bragging like this kind of an invitation for more things like this to happen to Spamhaus?
"We mustn't be brought down."
Since it's a Dutch company I highly doubt they host anything illegal (as the article implies). The same rules apply to them as they do to other hosting companies in The Netherlands (and EU).
Is that around like 3000 compromised computers? Maybe 2-5 botnets' worth? I might be a bit off on the prices here, but that sounds like maybe ~$1k/day on the market? It would be nice to get a price tag on the "biggest attack in history".
Nope, this could easily be done with far less. This is an amplification attack.
The design of DNS and UDP allows you to send a simple/small request to a poorly configured DNS server [one that resolves openly for anybody - there are a lot out there] while pretending you are doing it from your target's IP address.
UDP is a fire-and-forget protocol: you send a packet with a source address and the server will reply to that address. With DNS recursion you can easily send a request whose reply goes to your target. The amount of data returned from these DNS servers and sent to your victim can often be 50x larger than your initial request. The more open resolvers you find, the more damage you can do, without needing much more upload bandwidth from your host [relatively].
You request from your host:
dig ANY isc.org @x.x.x.x +edns=0 == 64bytes
; <<>> DiG 9.7.3 <<>> ANY isc.org @x.x.x.x
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5147
;; flags: qr rd ra; QUERY: 1, ANSWER: 27, AUTHORITY: 4, ADDITIONAL: 5
;; QUESTION SECTION:
;isc.org. IN ANY
;; ANSWER SECTION:
isc.org. 4084 IN SOA ns-int.isc.org. hostmaster.isc.org. 2012102700 7200 3600 24796800 3600
isc.org. 4084 IN A 184.108.40.206
isc.org. 4084 IN MX 10 mx.pao1.isc.org.
isc.org. 4084 IN MX 10 mx.ams1.isc.org.
isc.org. 4084 IN TXT "v=spf1 a mx ip4:220.127.116.11/21 ip4:18.104.22.168/16 ip6:2001:04F8::0/32 ip6:2001:500:60::65/128 ~all"
isc.org. 4084 IN TXT "$Id: isc.org,v 1.1724 2012-10-23 00:36:09 bind Exp $"
isc.org. 4084 IN AAAA 2001:4f8:0:2::d
isc.org. 4084 IN NAPTR 20 0 "S" "SIP+D2U" "" _sip._udp.isc.org.
isc.org. 484 IN NSEC _kerberos.isc.org. A NS SOA MX TXT AAAA NAPTR RRSIG NSEC DNSKEY SPF
isc.org. 4084 IN DNSKEY 256 3 5 BQEAAAAB2F1v2HWzCCE9vNsKfk0K8vd4EBwizNT9KO6WYXj0oxEL4eOJ
;; MSG SIZE rcvd: 3223 [bytes]
Seems like 30,000 nodes at 10Mb/s would be more likely?
But I don't have experience in botnets, just curious.
So the botnets involved don't have to send 300 Gbps of traffic to Spamhaus. The DNS servers being much more powerful will take care of that. I have no idea about the going rate for a botnet, though.
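The arithmetic checks out against the dig example above: a ~64-byte spoofed query draws a ~3,223-byte answer, roughly 50x amplification, so reaching the quoted 300 Gbps takes only a few Gbps of real attacker upstream (the 300 Gbps figure is from the articles; the rest follows from the example):

```python
query_bytes = 64        # size of the spoofed "dig ANY isc.org" request above
response_bytes = 3223   # "MSG SIZE rcvd" in the example answer

amplification = response_bytes / query_bytes
print(f"amplification: ~{amplification:.0f}x")

target_gbps = 300       # reported peak attack size
needed_gbps = target_gbps / amplification
print(f"attacker upstream needed: ~{needed_gbps:.1f} Gbps")
```

So a botnet (or a handful of well-connected servers) pushing ~6 Gbps of spoofed queries suffices, which is why the open resolvers, not the bots, are the real lever here.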
There is no legitimate use case for sending traffic with a spoofed source IP. I'm simply amazed that ISPs, who should have the technical know-how, still haven't eradicated all kinds of network attacks that rely on spoofed source addresses (of which DNS amplification is only one).
Cyberbunker brags on its Web site that it has been a frequent target of law enforcement because of its “many controversial customers.”
“Dutch authorities and the police have made several attempts to enter the bunker by force,” the site said. “None of these attempts were successful.”
If this happened in the USA - the police would never leave - they'd call in tanks or bunker busters from the military.
Did the Dutch just turn around and go away and say "oh well" ?
The US may have a bunker buster that can open that thing, but it would kill everybody inside, and possibly around it. I am not quite sure that the US bunker busters of that size are non-nuclear either.
Would the US FBI really drop a nuke to serve a warrant? That seems excessive and counterproductive (as you would destroy whatever it was you wanted).
I have no idea how I know or remember this, I must have read it somewhere but I am very anti-war, so why I know it is strange to me.
Just yank the network and turn off the water, it'll either be pointless to stay inside, or untenable. Either way gets the door open.
I'm not sure I understand why this should slow down the whole internet. It seems to be only for email filtering, not for the web, and only those ISPs that use their service should be impacted, and only when their DNS cache is not triggered. Am I missing something?
This is blatantly wrong, the DNS system and poorly configured networks were used to target and attack Spamhaus.
I got there from the comments on the CloudFlare blog posts, where a user named "STOPhaus" posted taunting CloudFlare, with a link to the stophaus.com website. Apparently it's a meeting place for anti-Spamhaus sentiment and includes such classy stuff as personal information on Spamhaus employees.
Wild stuff, and thanks to CloudFlare for their writeup.
To this day - with v4 exhausted and despite numerous delisting attempts - I have a /21 listed in SORBS because it happened to be part of an ISP's /18 dynamic customer range in the past.
They deserve all that's coming to them and more.
Too bad others get affected in the process.
Indeed, all they should be used for is perhaps 0.5 points worth of spamassassin weight. Sysadmins who use spamhaus as a blacklist are just as incompetent as those who bounce virus e-mails to the address in "From:" ...
Also, the DNS servers are being used to perform the attack via DNS amplification, the slowdown is not caused by clogged DNS servers.
I don't have the exact quote but one of the articles likens the situation to having a motorway with on-ramps and off-ramps to individual networks/hosts. The usual DDoS seeks to clog the on-ramp or off-ramp the target uses by sending too many cars their way. However, this attack is so big that it's clogging up the motorway itself not just on/off-ramps.
1 billion+ people won't hear of it much.
I would be interested to know what Spamhaus paid Google to use its resources, and whether such cooperation on a global scale would mean the end of DDoS in the long term.
My internet down here in Oz has been as slow as ever!
These people have put in place a high-quality method to identify spammers. I've been around since their beginnings, and their list has been incredibly successful (very high quality) for me, compared to njabl and other "dynamic" lists based on honeypots, or backlash entirely (say hi to SpamCop).
You would also recognize that you can just as well tag the message with "likely spamminess" for use along the chain, and people would still complain that your "legitimate" message was tagged as spam by SOMEBODY, while you wouldn't complain if it was tagged as spam by a learning algorithm.
In short, people would complain anyway, except that Spamhaus is doing real damage to the spammers (as in "the mail really didn't go through") and reducing their revenue, thus forcing them to come out with such measures. Not that they will accomplish anything anyway. Spamhaus helped stop a lot of known/professional spammers, and I applaud them for that.
You constantly have to check if there is a chance that spammers noticed your honeypots so that they can avoid them or use them against you as well (the bigger you get the more sophisticated these attackers get too), you have to use tagged email addresses that can be linked back to the offenders. Methods to probe address ranges multiple times before validating them, and ways to automate the unlisting as well. False positives are basically unavoidable at some point, also because spammers themselves like to rotate their addresses based on their previous owners or known datacenters that are "too big to be blocked" wholesale for this exact reason. If they had a chance to know one of your trigger addresses, a common practice is to generate spam from a "safe" range into the trigger address, in an attempt to generate a false positive and thus, of course, backlash. It's sickening.
Exchanging digests of message contents among multiple server cooperatively became a good indicator of spammyness (vipul's razor), though you would catch bulk emails in the process, and spammers quickly adapted to random email contents so that the method became quickly ineffective.
The real problem here is that these assholes don't care as long as they can deliver the message; that's the only metric they have and care for. Maybe you don't care, because you can then use filtering later, but that's a huge volume of trash that needs to be shoveled around. I actually witnessed many cases in organizations bigger than a hundred employees where several servers were used 24/7 just to churn messages through "dspam" or similar filters before delivering to the final mailbox. This is a huge cost in terms of measurable power wasted for a couple of assholes.
The hypothesis I came to was that we weren't using SPF records on the domain associated with our IP address for a long time.
Some spammers were taking advantage of this by sending emails from different IP ranges with the From: header spoofed to be from our domain.
So Spamhaus blocked our IP address on the grounds that spam filters would also be able to confidently block anything appearing to originate from a domain name that resolved to our IP address.
Didn't use SPF to begin with because there was a large number of hosts legitimately sending mail for the domain and it was a pain to get all of the IP numbers for various crazy reasons.
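For reference, an SPF policy is just a TXT record like the `v=spf1 ...` string visible in the dig output earlier in the thread. A toy checker for the `ip4:` mechanisms only - real SPF evaluation also handles `a`, `mx`, `include`, qualifiers, and so on, and the record here is made up:

```python
import ipaddress

def ip4_pass(spf_record: str, sender_ip: str) -> bool:
    """Return True if sender_ip matches any ip4: mechanism in the record.
    Toy sketch: ignores a/mx/include/redirect and qualifier prefixes."""
    addr = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if addr in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

record = "v=spf1 ip4:198.51.100.0/24 ip4:203.0.113.10 ~all"
print(ip4_pass(record, "198.51.100.25"))  # True: inside the listed /24
print(ip4_pass(record, "192.0.2.9"))      # False: falls through to ~all
```

The pain the parent describes is exactly the `ip4:` list: enumerating every legitimate sending host to publish a record like this.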
I wanted to believe him. But before I could reply to his mail, I got first-hand evidence that the SBL has in fact gone bad.
As of this writing, any filter relying on the SBL is now marking email with the url "paulgraham.com" as spam. Why? Because the guys at the SBL want to pressure Yahoo, where paulgraham.com is hosted, to delete the site of a company they believe is spamming.
Wait, there's more!
Impossible. The SBL lists only IP addresses; there is no content filtering at all.
Furthermore, there's a lot of FUD in this thread about Spamhaus listing people who don't emit spam. If this were true, then Spamhaus would have an unacceptably high false positive rate, and we would be able to observe this. In reality, Spamhaus has the lowest FP rate in the industry. Occam's Razor suggests those who claim to have been wrongly blocked are mistaken about the reason for their listings (if they ever existed in the first place).
I hear the SBL can also block domains, how? What is "URIBL_SBL"?
Yes, the SBL can also be used as a URI Blocklist and is particularly effective in this role. In tests, over 60% of spam was found to contain URIs (links to web sites) whose webserver IPs were listed on the SBL. SpamAssassin, for example, includes a feature called URIBL_SBL for this purpose. The technique involves resolving the URI's domain to an IP address and checking that against the SBL zone.
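Mechanically, both the plain SBL lookup and the URIBL-style check boil down to one DNS query: reverse the IPv4 octets and look them up under the list's zone. A sketch of just the name construction (no query is actually sent here; the zone name is Spamhaus's public SBL zone, and the IP is a documentation address):

```python
def dnsbl_query_name(ip: str, zone: str = "sbl.spamhaus.org") -> str:
    """Build the DNSBL lookup name: reverse the IPv4 octets and append
    the blocklist zone. An A-record answer (conventionally within
    127.0.0.0/8) means the address is listed; NXDOMAIN means clean."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7"))
# -> 7.113.0.203.sbl.spamhaus.org
```

For URIBL_SBL, the filter first resolves the domain found in the message's URLs to an IP, then performs this same lookup on that IP.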
And of course they also have the DBL (Domain Block List), though I don't know if that existed back when PG ran into problems.
Do you have a link to the false positive rankings? I'm curious as to how that is measured.
As for DNSBL false positive rates, I haven't seen statistics in a few years, and by now they wouldn't be worth much. The only ones I saw were from 2005 or 2007. This one (linked to from the below article) from 2011 doesn't even test Spamhaus:
This is just my personal experience saying (in 2013) that Spamhaus has the lowest FP rate, which isn't scientific. I'm kind of surprised there haven't been more FP comparison reports of major DNSBLs in recent years. If anyone has a link I'd love to see it.
Ummm, my ISP's IPs have been blocked several times through absolutely no fault of mine. I have a shared IP for browsing, and it turns out that CloudFlare has blocked them. I also had issues with my sites; the IPs assigned to me were blacklisted.
I understand that no one is forcing usage of spamhaus db but it seems unfair and white-listing is near impossible.
Been there, done that, got the t-shirt.
What can I do to provide extra firepower in the ongoing ddos against them?
I'm truly curious as to why the reaction to Spamhaus being DDoSed is so polarised.
Trying to keep mail servers running and keeping up with the different spam clearing houses' policies, which kept changing without notice, was a lot of work back then.
Once you got blacklisted, getting removed wasn't always an easy process no matter how quickly you tried to fix whatever caused it. Methods of communicating were not always available, and when they were, responses were not always helpful or even very polite.
I haven't managed mail servers for over ten years, and I really hope that the conditions for being included on a blacklist and the process for getting removed are more transparent by now.
Given the amount of trust that most people running mail servers put into the different blacklists, organizations like Spamhaus get a lot of power over the internet.
Judging from my experience with spam clearing houses, and from the news stories about Cyberbunker, it looks like that power has once more corrupted.
We need places like cyberbunker to keep the internet free and open. When all the pr0n, w4r3z and 1337 stuff have been cleaned out from the internet the infrastructure to stop anything at will on the internet will be in place and functional.
I wonder what would be the next thing to be removed from the internet?
I took a job in the year 2000, at a company with 3000 email users, listed by Spamhaus. First thing I did was close the open relay they were running. The listing was promptly removed, and the mail queue was back to normal within only a few days. I'm skeptical of your claim. I've never seen a confirmed case of Spamhaus aggression, but I've seen a lot that were disproven, and even more that sound like they were written by miscreants. Like the kind who would advocate DDoS attacks cough.
Open relays were at the time manageable, even the ones that suddenly appeared when someone installed an old OS-version, as were the process for getting removed from the blacklists due to open relays.
Once you had a computer-lab workstation hacked and used for spamming - not so easy to get whitelisted anymore.
The university had a class B network; trying to get the staff's subnet whitelisted while keeping the computer labs blacklisted was apparently not possible according to the spam clearing houses. Blocking port 25 for outgoing traffic was not possible to check from the outside and didn't help.
I can understand that organizations like Spamhaus are overworked and don't have the resources to handle every non-standard case on the internet as quickly as the blocked IP range would like, but the replies we got were truly unhelpful.
The fact that someone bothered to register the domain stophaus.com seems to indicate that my experience isn't unique.
But since the spam trap addresses are secret, it was an impossible charge to defend against or investigate. Not fun.
Yeah, I could fix that, either by hosting my full stack of email, or not doing it at all. Either way, it's a pain.