
Wow, 1.35Tbps? That's a lot for a DoS attack, right?



According to [0] that is around 1/400th of total internet traffic per second. Which raises the question: who has that kind of botnet at their disposal, and why are they targeting GitHub?

Edit: The attacker didn't need nearly that kind of bandwidth to execute this attack. See [1]

Edit: 1/50th -> 1/400th (bits vs bytes)

[0] http://www.internetlivestats.com/one-second/#traffic-band

[1] https://news.ycombinator.com/item?id=16493497


I assume this explains the apparent hugeness of the attack:

> The vulnerability via misconfiguration described in the post is somewhat unique amongst that class of attacks because the amplification factor is up to 51,000, meaning that for each byte sent by the attacker, up to 51KB is sent toward the target.
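
Back-of-the-envelope, using my own rough numbers rather than anything from the post: if the whole 1.35 Tbps peak were achieved at that maximum amplification factor, the attacker's own outbound traffic would be tiny:

    # rough estimate only; assumes every request hit the 51,000x maximum
    amplification = 51_000            # bytes reflected per byte sent
    peak_bps = 1.35e12                # 1.35 Tbps observed at GitHub
    attacker_bps = peak_bps / amplification
    print(f"{attacker_bps / 1e6:.0f} Mbps")   # roughly 26 Mbps of spoofed requests

The real average factor is presumably lower, but it backs up the edit above: the attacker didn't need anywhere near 1.35 Tbps of their own bandwidth.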



Who is exposing their memcached instances on the public internet? Who is not filtering outbound UDP traffic from their memcached instances?

Rhetorical of course. Akamai should have logs of their offenders. Off to scan for offenders and notify their providers!
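
If anyone wants to do the same kind of sweep (against hosts you're authorized to test, of course), a minimal probe is just a memcached "version" command over UDP; any reply from a public IP means the instance is exposed. A rough Python sketch, assuming memcached's standard 8-byte UDP frame header:

    import socket

    # 8-byte memcached UDP frame header: request id, sequence number,
    # datagram count, reserved (two bytes each), then the ASCII command
    PROBE = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"version\r\n"

    def exposed_memcached(host, port=11211, timeout=2.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(PROBE, (host, port))
            data, _ = s.recvfrom(1500)
            return data[8:].decode(errors="replace").strip()  # e.g. "VERSION 1.4.25"
        except socket.timeout:
            return None
        finally:
            s.close()

A timeout just means the host isn't reflecting from your vantage point; any version string back means it will happily answer strangers.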


During some analysis we did notice that at least some cloud providers give instances public IPs (with no network-level ACLs) by default, and some Linux distributions have memcached listening for UDP traffic and binding to `0.0.0.0` by default as soon as it's installed. The unfortunate combination of the two results in the machine being usable as an amplification vector in these attacks.


This is one area where I really disagree with the default behavior of Debian and its derivatives: starting a service immediately after installation, and not having a firewall enabled by default.

If I install a service on CentOS/RHEL/Fedora it is disabled by default, and if I start the service firewalld will block traffic until I have explicitly enabled a rule to allow it (or explicitly stopped and disabled the firewalld service).

Does this prevent people from making poor decisions, like just blindly starting the service without reading the configuration file, or disabling firewalld/enabling a rule without checking the configuration first? No, it doesn't - but that small hurdle at least prevents people from inadvertently turning on a service and opening it up to the world just by installing a package.


As a compromise, Debian also comes with a secure-by-default configuration. In the case of memcached, the service is configured to listen on 127.0.0.1.


Thankfully that is one thing they do pretty well in most cases, though there are some packages (apache2, for example) that do listen on all interfaces by default. And even when a service like Apache ships with a secure configuration by default, it can often be installed by other packages that link or copy their own configuration files into the Apache config directory - and then all you need is a PHP vulnerability or whatever.


Did you receive any cooperation from those cloud providers in using ACLs at their network edge to drop that traffic (that they should’ve been blocking in the first place)?


> Who is not filtering outbound UDP traffic from their memcached instances?

This is of course the wrong way to do it -- you need to filter inbound UDP to your memcached instances so you don't waste your resources generating the responses, and also so you don't accidentally fragment the responses and only drop the first fragment outbound.


I disagree, due to separation of responsibilities. Having run both an ISP and a hosting company, I'd say you have to filter traffic at your edge that can impact external resources (just as ISPs block outbound NetBIOS and SMTP traffic on port 25/tcp).

Yes, the server or instance customer should be doing this. But they’re not, because poor security practices are an externality, not a cost they sustain.

Security is more important than developer velocity, but users pay the bills.


You're confused if you think it's the clouds that are misconfigured here. The issue is the ISPs allowing the spoofed traffic going towards the memcached servers.


If you’re a service provider allowing your equipment to participate in an amplification attack, you’re the fool trashing the commons.

It’s an ISP’s job to filter outbound UDP on arbitrary ports? Shall we only let TCP 443 outbound from eyeball networks?


Any UDP service can be used in an amplification attack. It's not the responsibility of AWS or other hosting companies to shut down all UDP traffic.

The problem is the ISPs allowing spoofed IPs.


So the problem here is that a number of UDP packets were sent from somewhere (with a small bandwidth) that had a spoofed source address. They were then sent to the reflection servers which produced more/bigger UDP packets that did not have a spoofed source address.

So the attacker only needs to find somewhere on the internet that is capable of generating spoofed packets. They needed a lot of places that had a reflection server, but the requirements for the spoofing were much smaller.

In other words, you would have to prevent 99.9% of the internet from being able to spoof source addresses before you fixed this problem.


And taking down UDP services is just as much folly. It only takes 1,000 servers with 100 Mbps upload streams to wipe out any single load balancer. There are at least that many root name servers.


UDP is also used for web traffic, a very, very large amount of web traffic, so you can't really block it.


I see. For anyone else who doesn't have any background in this attack: memcached is an open source general purpose cache that uses sockets to cache data. From what I gather, the attack here was possible because Github engineers accidentally left the memcached port open. So the attackers were able to spam memcached with large requests, and memcached responds immediately with the full contents of the cached memory (assuming, of course, that the client is localhost).

> The memcache protocol was never meant to be exposed to the Internet, but there are currently more than 50,000 known vulnerable systems exposed at the time of this writing. By default, memcached listens on localhost on TCP and UDP port 11211 on most versions of Linux, but in some distributions it is configured to listen to this port on all interfaces by default.

Yikes!


> From what I gather, the attack here was possible because Github engineers accidentally left the memcached port open.

That is incorrect.

The attackers made requests to multiple public memcached instances, forged so that the sender IP address was GitHub's. Memcached then responds back to GitHub instead of the attacker.

This is documented in more detail in the Cloudflare vulnerability report[0]

[0] https://blog.cloudflare.com/memcrashed-major-amplification-a...


Ah ok. This makes much more sense -- leaving a port open seems like an amateur mistake for a firm like Github. Thanks for the link.


If Github had no open ports they wouldn't have much of a website.


I think it was more likely meant in reference to the original (incorrect) interpretation that GitHub had left public access to its memcached instances.


I don't think it was GitHub's memcached instances. It was other public instances that, via spoofed network requests, ended up sending traffic back towards GitHub's network.



Probably to get press. If you have a giant botnet and get press for it, it's probably easier to get clients.


It's in bits/sec, not bytes/sec, so 8x lower.


Linked article for the memcached explanation: https://blog.cloudflare.com/memcrashed-major-amplification-a...

From what I understand, the attack originates from publicly exposed memcached servers that are configured to support UDP and have no authentication requirements:

- put a large object in a key

- construct a memcached "get" request for that key

- forge the source IP address of the UDP request to be that of the target/victim server

- memcached sends the large object to the target/victim

Multiply that by thousands of exposed memcached servers.
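
Back-of-the-envelope on the amplification, with hypothetical sizes (memcached's default cap on a stored value is 1 MB):

    # hypothetical sizes; assumes the attacker pre-stored a value near the 1 MB cap
    request_bytes = 8 + len(b"get a\r\n")   # UDP frame header + a tiny "get"
    response_bytes = 1_000_000              # the large object stored in step one
    print(response_bytes // request_bytes)  # tens of thousands of times larger

That lands in the same ballpark as the ~51,000x factor quoted elsewhere in the thread; the exact number depends on the stored value's size and how you count headers.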

That about right?


Yes, and there are a lot of very, very large attacks going on. Over the last few days we've mitigated some huge ones. Luckily, everyone is working together to rate limit and clean up this problem.


This is honestly why I have no problem with things like brickerbot.

I consider it community service.


It's the largest recorded attack: https://www.wired.com/story/github-ddos-memcached


The GitHub post says the amplification factor is 51,000, while the Wired article says it's 50. So which is it?


We've seen 51,000x. I've personally looked at the pcaps of that.



