According to [0] that is around 1/400th of total internet traffic per second. This raises the question: who has that kind of botnet at their disposal, and why are they targeting GitHub?
Edit: The attacker didn't need nearly that kind of bandwidth to execute this attack. See [1]
I assume this explains the apparent hugeness of the attack:
> The vulnerability via misconfiguration described in the post is somewhat unique amongst that class of attacks because the amplification factor is up to 51,000, meaning that for each byte sent by the attacker, up to 51KB is sent toward the target.
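To put that factor in concrete terms, a quick back-of-the-envelope in Python (the 51,000 figure is from the quote above; the target bandwidth is just an illustrative number):

```python
# Back-of-the-envelope: what an amplification factor of 51,000 means.
AMPLIFICATION = 51_000

def attacker_mbps_needed(target_gbps: float) -> float:
    """Attacker-side bandwidth (Mbps) needed to reflect `target_gbps`
    of traffic toward the victim at the given amplification factor."""
    return target_gbps * 1_000 / AMPLIFICATION  # Gbps -> Mbps, then divide

# A 1 Tbps (1000 Gbps) flood needs only ~20 Mbps of spoofed requests.
print(f"{attacker_mbps_needed(1000):.1f} Mbps")  # -> 19.6 Mbps
```

That is why the "Edit" above matters: the attacker's own uplink can be tiny.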
During some analysis we noticed that at least some cloud providers give instances public IPs (with no network-level ACLs) by default, and some Linux distributions have memcached listening for UDP traffic and binding to `0.0.0.0` as soon as it's installed. The unfortunate combination of these defaults results in the machine being usable as an amplification vector in these attacks.
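A minimal self-check sketch in Python. Caveats: this only detects that *something* has the port bound locally - it cannot distinguish a localhost-only bind from `0.0.0.0` (binding `0.0.0.0` fails either way), it says nothing about network-level ACLs, and 11211 is simply memcached's default port:

```python
import errno
import socket

def udp_port_in_use(port: int) -> bool:
    """Return True if some process already has UDP `port` bound locally.

    Works by attempting to bind the port ourselves: if the bind fails
    with EADDRINUSE, another socket (e.g. memcached) is already there.
    Note: cannot tell a 127.0.0.1-only bind apart from 0.0.0.0.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind(("0.0.0.0", port))
    except OSError as e:
        return e.errno == errno.EADDRINUSE
    finally:
        s.close()
    return False

if __name__ == "__main__":
    # 11211 is memcached's default port.
    print("UDP 11211 bound locally:", udp_port_in_use(11211))
```

To actually verify the bind address you'd still want to check `ss -lun` or the memcached `-l` setting.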
This is one area where I really disagree with the default behavior of Debian and its derivatives: starting a service immediately after installation, with no firewall enabled by default.
If I install a service on CentOS/RHEL/Fedora, it is disabled by default; if I start the service, firewalld will block traffic until I have explicitly enabled a rule to allow it (or explicitly stopped and disabled the firewalld service).
Does this prevent people from making poor decisions, like just blindly starting the service without reading the configuration file, or disabling firewalld/enabling a rule without checking the configuration first? No, it doesn't - but that small hurdle at least prevents people from inadvertently turning on a service and opening it up to the world just by installing a package.
Thankfully that is one thing they do pretty well in most cases, though there are some services (apache2, for example) that do listen on all interfaces by default. And even if Apache's own default configuration is secure, it is often installed by other packages that link or copy their own configuration files into the Apache config directory - and then all you need is a PHP vulnerability or whatever.
Did you receive any cooperation from those cloud providers in using ACLs at their network edge to drop that traffic (that they should’ve been blocking in the first place)?
> Who is not filtering outbound UDP traffic from their memcached instances?
This is of course the wrong way to do it -- you need to filter inbound UDP to your memcached instances so you don't waste your resources generating the responses, and also so you don't accidentally fragment the responses and only drop the first fragment outbound.
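The inbound version of that filter is just "drop UDP to 11211 unless the source is trusted." A sketch of the decision logic in Python (the trusted subnet and function name are made-up examples, not anyone's actual config; in practice this lives in iptables/nftables or a cloud security group):

```python
from ipaddress import ip_address, ip_network

# Hypothetical example: only our own app servers should reach memcached.
TRUSTED = [ip_network("10.0.0.0/8")]
MEMCACHED_UDP_PORT = 11211

def allow_inbound(src_ip: str, dst_port: int, proto: str) -> bool:
    """Drop UDP datagrams to the memcached port unless the source is trusted."""
    if proto == "udp" and dst_port == MEMCACHED_UDP_PORT:
        return any(ip_address(src_ip) in net for net in TRUSTED)
    return True  # everything else is out of scope for this sketch

print(allow_inbound("203.0.113.7", 11211, "udp"))  # spoofed victim addr -> False
print(allow_inbound("10.1.2.3", 11211, "udp"))     # internal app server -> True
```

Filtering here means the spoofed "get" never reaches memcached, so no amplified response is ever generated.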
I disagree, due to separation of responsibilities. Having run both an ISP and a hosting company, I'd argue you have to filter traffic at your edge that can impact external resources (just as ISPs block outbound NetBIOS traffic and SMTP traffic on 25/tcp).
Yes, the server or instance customer should be doing this. But they’re not, because poor security practices are an externality, not a cost they sustain.
Security is more important than developer velocity, but users pay the bills.
You're confused if you think it's the clouds that are misconfigured here. The issue is the ISPs that allow spoofed traffic to reach the memcached servers.
So the problem here is that UDP packets with a spoofed source address were sent from somewhere (with only a small amount of bandwidth) to the reflection servers, which then produced more and bigger UDP packets that did not have a spoofed source address.
So the attacker only needs to find somewhere on the internet that is capable of generating spoofed packets. They needed a lot of places that had a reflection server, but the requirements for the spoofing were much smaller.
In other words, you would have to prevent 99.9% of the internet from being able to spoof source addresses before you fixed this problem.
And taking down UDP services is just as much folly. It only takes 1,000 servers with 100 Mbps uplinks to wipe out any single load balancer. There are at least that many root name server instances.
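Back of the envelope on those numbers:

```python
# 1,000 reflectors, each with a 100 Mbps uplink:
aggregate_gbps = 1_000 * 100 / 1_000  # Mbps -> Gbps
print(aggregate_gbps)  # 100.0 Gbps converging on a single target
```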
I see. For anyone else without background on this attack: memcached is an open-source, general-purpose cache that serves data over network sockets. From what I gather, the attack here was possible because Github engineers accidentally left the memcached port open. So the attackers were able to spam memcached with large requests, and memcached responds immediately with the full contents of the cached memory (assuming, of course, that the client is localhost).
> The memcache protocol was never meant to be exposed to the Internet, but there are currently more than 50,000 known vulnerable systems exposed at the time of this writing. By default, memcached listens on localhost on TCP and UDP port 11211 on most versions of Linux, but in some distributions it is configured to listen to this port on all interfaces by default.
> From what I gather, the attack here was possible because Github engineers accidentally left the memcached port open.
That is incorrect.
The attackers sent requests, forged to carry GitHub's IP address as the source, to multiple public memcached instances. Memcached then responded back to GitHub instead of the attacker.
This is documented in more detail in the Cloudflare vulnerability report.[0]
I don't think it was GitHub's memcached instances. It was other public instances that, via spoofed network requests, ended up sending traffic back toward GitHub's network.
From what I understand, the attack originates from publicly exposed memcached servers that are configured to support UDP and have no authentication requirements:
- put a large object in a key
- construct a memcached "get" request for that key
- forge the source IP address of the UDP request to be that of the target/victim server
- memcached sends the large object to the target/victim
Multiply that by thousands of exposed memcached servers.
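The "get" request in the steps above is tiny on the wire. A sketch of its framing in Python (memcached's UDP protocol prepends an 8-byte frame header - request ID, sequence number, datagram count, reserved - to the ASCII text command; no spoofing or sending here, just sizes):

```python
import struct

def memcached_udp_request(key: str, request_id: int = 0) -> bytes:
    """Build a memcached UDP 'get' request: an 8-byte frame header
    (request id, seq no, total datagrams, reserved) + the text command."""
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + f"get {key}\r\n".encode("ascii")

req = memcached_udp_request("k")
print(len(req))  # 15 bytes on the wire
# If the value cached under "k" is 1 MB (memcached's default max),
# the reflected response dwarfs the request:
print(1_000_000 // len(req))  # amplification on the order of tens of thousands
```

Which is how a 15-byte datagram with a forged source address turns into megabytes aimed at the victim.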
Yes and there are a lot of attacks of very, very large sizes going on. Over the last few days we've mitigated some huge attacks. Luckily, everyone is working together to rate limit and clean up this problem.