
Denial of Service Attacks - silenteh
https://github.com/blog/1796-denial-of-service-attacks
======
eik3_de
To GitHub and everyone: _please_ use UTC timestamps when there are potential
readers outside of your timezone. Since every technical person should know
their current UTC difference, calculating the local time is easy.
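
The arithmetic really is a one-liner in most languages; here is a minimal Python sketch (the timestamp and the UTC-5 offset are made-up examples, not from the post):

```python
from datetime import datetime, timedelta, timezone

# A UTC timestamp as it might appear on a status page (made-up example)
utc_time = datetime(2014, 3, 13, 21, 50, tzinfo=timezone.utc)

# Apply your own UTC offset -- here UTC-5, purely illustrative
local = utc_time.astimezone(timezone(timedelta(hours=-5)))
print(local.strftime("%Y-%m-%d %H:%M"))  # 2014-03-13 16:50
```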

~~~
smackfu
>Since every technical person should know their current UTC difference,

Awww, that's so optimistic.

~~~
elwell
Don't forget the ancient proverb handed down to us from the original
neckbeards:

"At some point in their career, every programmer must pass through the fires
of timezone hell."

~~~
derefr
Timezone hell is, of course, followed up with the three-headed hound of
locales, charmaps, and keyboard layouts. And if you pass by that, you drop
into the river of cache coherency...

(Esoteric software-engineering topics could make a wonderful video game, you
know?)

~~~
ramses0
Turkey Test: [http://www.moserware.com/2008/02/does-your-code-pass-turkey-test.html](http://www.moserware.com/2008/02/does-your-code-pass-turkey-test.html)

------
xedarius
You ever sit there and wonder who the person is on the other end of the
attack? Someone sitting there, I guess with not much on that day, decides to
command their army of infected bots to attack github.

Why github I wonder? Perhaps it provides a challenging target. Perhaps github
is used as a testing ground for a more profitable future attack.

We often get technical writeups after a DDoS attack, but we very rarely get a
writeup surmising the motive behind it. I can't believe _every_ attack is
simply driven by 'because they can'.

~~~
derefr
There's a good piece of journalism (EDIT: see the reply to this comment for
the link), in which someone who was being DDoSed went into the various black-
market forums where such people as would DDoS hang out, to see if they could
find their attacker.

What they found was that, at one forum, there was a convention where new, as-
yet-untrusted sellers of DDoSing-as-a-Service were expected to take down some
big, technically-respected target (e.g. GitHub) to prove their mettle before
anyone would hire them. And conveniently enough, some new DDoSaaS seller was
advertising right then, telling everyone to "look and see that _________ is
down. That was me! Hire me now!"

It further turned out that the site-owner doxxed the account, found them to be
a thirteen-year-old(!), and called their parents to tell them what their child
was doing on the internet.

~~~
cwb71
[http://krebsonsecurity.com/2014/02/the-new-normal-200-400-gbps-ddos-attacks/](http://krebsonsecurity.com/2014/02/the-new-normal-200-400-gbps-ddos-attacks/)

------
noja
It just shows that we need some kind of distributed version control system.

~~~
sanderjd
Ha! This is good satire, but for me personally, github going down isn't a
version control problem nearly as much as it is a project collaboration
problem. I can't go review pull requests and discuss issues when github is
down, but I can still do all the traditional version control activities.
Github is so much more than distributed version control. If somebody started
doing the non-version-control things that github does in a distributed
fashion, I would be very interested in taking a look.

~~~
skymt
Take a look at Fossil, then. It's a distributed version control system, bug
tracker, wiki and blog, from the author of SQLite.

~~~
sanderjd
Yeah, cool, thanks for the pointer. I remember looking at it before but never
quite wrapped my head around it.

------
ozh
Call me naive, but I fail at imagining why someone would want to DoS Github.

I mean, if you're into this, it's certainly fun to launch DOS attacks against
large "evil" things such as government services, large corps and Micro$oft
becoz w1ndoz sux0rz, but... Github? Why?

~~~
meritt
Can you think of an easier way to negatively impact productivity at nearly
every tech startup? There might not be a reason other than simple bullying. In
fact, I think these attackers might be the same ones behind that 2048 game.

~~~
spullara
Or bitcoin! I've always referred to it as a DDoS on engineers.

------
zaroth
If the attacks against Github are mostly proving grounds for fledgling
DDoSaaS, I would assume write-ups like these only serve to elevate their
status as a good proving ground.

Did this article contain anything particularly useful for anyone thinking
about DDoS hardening? I didn't find anything. I guess it's not really supposed
to be a technical article, just a smattering of buzzwords to let you know how
hard they try.

The postmortem-half-apology has become quite an art form, as getting it right
can actually draw a lot of positive publicity, and getting it wrong can be
brutal. But I can definitely see how this post would feel like a pat on the
back to whoever launched the attack.

~~~
asolove
I was going to disagree with you, but then I realized I didn't understand what
you were saying: What do you suggest they should have done?

~~~
zaroth
Github downtime (and subsequent postmortems) are a regular feature of the HN
front page. The postmortems have come to command their own audience, similar
to the CloudFlare reports.

It's actually a pretty bad position Github's being put in. They sit at the
crossroads of playing defense against DDoS and trying to dispel or at least
ameliorate any blame for the downtime.

My point was, if they have indeed become the internet's DDoS proving ground
(as several others were speculating), then while you can see how much effort
they're putting into these postmortems, I can see it becoming a vicious cycle.

Then the challenge is, how does Github placate their users without basically
pinning a ribbon on the attacker? The funny thing is how the "best practice"
checklist for a postmortem (say what happened, say how you thought you were
safe, say how something unexpected broke your assumptions, apologize, say what
you're doing differently in the future) basically ties their hands.

A pretty bad position for Github all around.

------
IgorPartola
I honestly feel bad for the engineers at GitHub for having to deal with stuff
like this. GitHub is large, so they are a target, and the specifics of what
they do means that caching is not a straightforward task. I imagine there are
a lot more vectors of attack that have not been used yet and guarding against
them is always going to be on a case-by-case basis. In the meantime, when
GitHub is having downtime or even badtime it impacts its users pretty
significantly. The private repos I work on are a source of income for GitHub,
but if this gets common enough the people in charge might just move away from
it to a smaller competitor that doesn't have these problems just so that my
time is not wasted on waiting on GitHub to come back up.

~~~
anaphor
Isn't the whole point of git that you don't need to even have internet access
to get work done?

~~~
cytzol
You still have your local source code, but GitHub gives you issue tracking,
comments, pull requests, and other things you could still use for work - none
of which can be cloned locally.

~~~
anaphor
Okay yes, but it probably isn't going to be down for more than a few hours at
a time. I'm sure pull requests and issues can wait a few hours vs. actually
fixing bugs or writing new code. Right?

~~~
king_magic
No, they can't wait a few hours. Not when you have a 60 developer team on a
private GitHub enterprise account, and it's the middle of the workday.

~~~
balls187
if all 60 are working together sure, but if it's a handful of devs working
with each other, you can pull directly from each others repository.

PITB, yes. But you can still get work done.

~~~
coryking
You can pull from each other, but since you know github will be up in a few
hours, and it would take more than that to really coordinate any workflow
change, most of the time people just goof off until github is back.

Plus you are forgetting that a lot of automated jobs get triggered on github
changes. Many shops kick off all kinds of tests, deployments, and other things
based on changes to the github repo.

~~~
balls187
If devs want to use that as an excuse to do other things, that's cool.

My point was simply that you can still be productive when github goes down, if
you want to be.

~~~
king_magic
Sure, _developers_ can still be productive. But what about QA, UX, Product
Managers? They _rely_ on automated jobs that get triggered by GitHub changes.

I don't think you're "wrong" for the developer use case, but the reality is
that it can bring large teams to a screeching halt.

------
caio1982
Kudos to the folks at Github for such a summary of the attack! Clear, honest,
and with a decent amount of info.

------
robgering
I'm not sure why someone would attack GitHub. Extortion? But aren't there more
valuable targets? Showing off their botnet, perhaps? These attacks seem
frequent.

~~~
Zikes
Unfortunately there may not be a "good" reason. It's not hard for me to
imagine that someone could do it for kicks, or perhaps to test some new attack
vectors against a reasonably hardened target.

~~~
ansible
_It's not hard for me to imagine that someone could do it for kicks, or
perhaps to test some new attack vectors against a reasonably hardened target._

Though when you try out a new attack vector, the community (hopefully)
publishes enough details about the attack to help the next target more
effectively deal with that particular attack.

If someone is attacking whitehouse.gov or a similar target, I somewhat
understand their reasons why (though I don't agree with them). github.com, on
the other hand, while a for-profit corporation, is also a valuable bit of
Internet infrastructure that makes the world a better place. Attacking targets
like github.com, Wikipedia, and others helps no one, and forwards no coherent
political agenda.

So I'm going with the "for kicks" / jerkwad theory.

------
geovizer
GitHub has been targeted by Chinese government hackers before, with a man-in-
the-middle attack and blocking of GitHub by the Great Firewall. Maybe they are
at it again?

[http://www.theregister.co.uk/2013/01/31/github_ssl_man_in_th...](http://www.theregister.co.uk/2013/01/31/github_ssl_man_in_the_middle_attack/)

[https://en.greatfire.org/blog/2013/jan/github-blocked-china-how-it-happened-how-get-around-it-and-where-it-will-take-us](https://en.greatfire.org/blog/2013/jan/github-blocked-china-how-it-happened-how-get-around-it-and-where-it-will-take-us)

------
muaddirac
I'd be interested to know who their "DDoS mitigation service provider" is.

~~~
dangerlibrary
Cloudflare is a very popular choice.

~~~
eli
Very popular among consumers. I think large organizations are more likely to
pay someone like [http://www.prolexic.com/](http://www.prolexic.com/)

~~~
gandalfu
I have seen the folks from prolexic at work, and their service/platform is
impressive.

In their spare time they take down botnets:
[http://www.prolexic.com/knowledge-center-ddos-vulnerability-disclosure-dirt-jumper.html](http://www.prolexic.com/knowledge-center-ddos-vulnerability-disclosure-dirt-jumper.html)

[http://arstechnica.com/security/2012/08/ddos-take-down-manual/](http://arstechnica.com/security/2012/08/ddos-take-down-manual/)

Shameless plug for hackmiami: if anyone is interested in learning how it's
done up close, they run frequent talks/meetups locally:
[http://hackmiami.org/](http://hackmiami.org/)

------
csense
What motive does the attacker have?

There are lots of articles on HN about DDoS attacks on various websites or
online services. Most of the discussion is about the bandwidth used and the
technical mechanics of the attack and defense.

This is interesting, but there's little discussion of the economic motivation.

I assume the kind of infrastructure used to launch this attack is not free. I
understand people or groups might be using this as a way to further various
political agendas or simply for bragging rights. I also understand DDoS
attacks might be an extortion tool.

In the former case, wouldn't the attacker try to loudly and publicly claim
responsibility? In the latter case, wouldn't the defenders take pride in their
"we don't negotiate with extortionists" stance while they're in disclosure
mode?

Or maybe this is just some rich guy's private hobby, and he does it for the
amusement he gets out of reading about people's reactions when they can't
figure out who's responsible?

It seems like the set of rich guys who have the technical skills to do this
kind of thing without getting caught would be kinda small. And if they hire
people, the bigger their organization gets, the likelier they'll hire a law
enforcement plant -- or simply someone with a conscience -- and the game will
be up.

Organized crime might be a possibility, but I assume those guys are interested
in making money, not just committing crimes and wreaking havoc. So what's the
business model that motivates these attacks? If it's extortion, why do the
targets feel comfortable revealing the attack, but uncomfortable revealing
they're being squeezed for money?

------
xwowsersx
> In addition to managing the capacity of our own network, we've contracted
> with a leading DDoS mitigation service provider. A simple Hubot command can
> reroute our traffic to their network which can handle terabits per second.
> They're able to absorb the attack, filter out the malicious traffic, and
> forward the legitimate traffic on to us for normal processing.

That's kind of awesome

~~~
gandalfu
This is what we use at our company. Recently bought by Akamai.

[http://www.prolexic.com/why-prolexic-best-dos-and-ddos-scrubbing-centers.html](http://www.prolexic.com/why-prolexic-best-dos-and-ddos-scrubbing-centers.html)

------
Aloisius
It is too bad ICMP Source Quench couldn't have been repurposed to help deal
with these kinds of attacks. It would be extremely nice to be able to simply
send a packet to each host involved in an attack and have them (and optimally
routers in between) slow their rate to the target host.
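
For the curious: Source Quench is ICMP type 4 (RFC 792), formally deprecated by RFC 6633, which is part of why it can't be repurposed today. A hedged sketch of what such a message looks like on the wire (illustrative only; real routers no longer honor it):

```python
import struct

def icmp_checksum(data):
    # Standard ones'-complement Internet checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def source_quench(original_header):
    # ICMP type 4 ("source quench"), code 0. The body echoes the offending
    # packet's IP header plus its first 8 data bytes, so the sender can
    # tell which flow to slow down.
    packet = struct.pack("!BBHI", 4, 0, 0, 0) + original_header
    csum = icmp_checksum(packet)
    return struct.pack("!BBHI", 4, 0, csum, 0) + original_header

# Example with a dummy 28-byte IP header + data snippet
msg = source_quench(b"\x45" + b"\x00" * 27)
print(msg[0], len(msg))  # 4 36
```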

------
jacquesm
The smaller a service is the easier it is to mitigate such attacks. All kinds
of tools that smaller services can use (whitelists, software based filters
such as iptables, location based filters and so on) are not available once you
cross a certain level of scale. So any simplistic solutions that you might
think of for a smaller service will likely simply not be applicable.

------
larrys
Wondering if, for a service like github, it would be possible to setup a
whitelist of allowable ip addresses.

If an attack was launched only that whitelist would be allowed until the
attack was mitigated.

So while certain legitimate traffic would be blocked for sure, people who
connect through fixed ip addresses that were whitelisted would get through and
be able to do what they needed to do.

Thoughts?

~~~
aroman
Seems like it'd be pretty trivial to circumvent that. Just have your botnet do
a few regular old requests to the network a few days before launching. That
way the IPs of the botnet members get whitelisted.

For a website of GitHub's scale, I don't think it would be very effective,
though maybe it could be helpful in combination with other measures.

~~~
larrys
No, I'm specifically talking about clients who have signed up entering in
their ip address that they access from.

"Just have your botnet do a few regular old requests to the network a few days
before launching."

Not talking about "whitelist sites that have made access in the last x days".

For example on HN it would be easy to create a white list. They do it now
recognizing new people who signed up and keeping track of activity as well (by
points).

You could have people identify the IP address that they access from, and
additionally limit the whitelist by a certain period of time and activity.

The idea is not to be 100% perfect but enough so that if you are a regular
user of github from an IP address at your office (as opposed to wifi cafe) you
will be able to get through.

This is, by the way, how registries limit access to their system. It's all
whitelisted: you have to pre-identify the IP addresses that you will access
the system from.

The whitelist only comes into play when under attack. And for sure yes if you
are connecting from a new place you will be blocked. But others will not be
blocked and there will be some access for some people.
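
The scheme sketched above, with pre-registered addresses enforced only while under attack, might look something like this (the prefixes are made-up documentation examples):

```python
import ipaddress

# Customers pre-register the fixed addresses they connect from. The
# whitelist is only enforced while an attack is in progress.
REGISTERED = [ipaddress.ip_network("203.0.113.0/24"),   # example office range
              ipaddress.ip_network("198.51.100.7/32")]  # example fixed IP

def allowed(src_ip, under_attack):
    if not under_attack:
        return True  # normal operation: everyone gets through
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in REGISTERED)

print(allowed("203.0.113.42", under_attack=True))   # True: registered office
print(allowed("192.0.2.99", under_attack=True))     # False: unknown source
print(allowed("192.0.2.99", under_attack=False))    # True: no attack, open
```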

~~~
randunel
There are ISPs who change your ip address every 10 seconds, not just on
reconnection. This would complicate github too much, I'd rather have 2 hrs
downtime every now and then, than have to input my ip address classes from all
the locations :D

------
api
Is there any way to mitigate DDOS attacks systematically without sacrificing
network neutrality?

~~~
jsmthrowaway
If the majority of ASNs in the world followed BCP 38, these attacks would be
more difficult because the origin would be easily identifiable. Today you
can't tell where it's coming from because backwater networks see value in
letting their customers emit forged packets however they like. So all you can
do is mitigate and wait for them to move on.

BCP 38/RFC 2827 would change the DoS game, but it's been a best practice for
longer than most of this audience has been alive and nobody yet gives a shit
and/or they are too lazy to automate the implementation. So operators waste
their lives cleaning up after bad actor ASNs that they can't even identify. I
shouldn't be mitigating 65 Gbps destined for a controversial customer, the
attacker should be removed from the Internet before I even notice.

You can tell from my tone that attacks are part of life for me. I'd venture
that denials are the second largest problem facing the Internet today, behind
the organizational structure of critical systems like DNS and ahead of spam
and surveillance. However, there is now a sizable DoS prevention industry so I
wouldn't be surprised if BCP 38 drifts into even more obscurity, but that's
the cynic typing.
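
For reference, the core of BCP 38 / RFC 2827 ingress filtering is conceptually simple: the edge network forwards a packet only if its _source_ address falls inside a prefix assigned to that customer port, so forged-source floods die at the first hop. A rough illustration (the prefix is an RFC 5737 documentation example, not a real allocation):

```python
import ipaddress

# Prefixes assigned to a hypothetical customer port at the network edge.
CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def ingress_permit(source_ip):
    # BCP 38: permit only packets whose source address belongs to the
    # prefixes delegated to this customer; drop everything else.
    src = ipaddress.ip_address(source_ip)
    return any(src in prefix for prefix in CUSTOMER_PREFIXES)

print(ingress_permit("192.0.2.10"))    # True: legitimate customer source
print(ingress_permit("10.99.99.99"))   # False: forged source, dropped
```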

~~~
kbuck
Actually, since this attack wasn't volumetric and was instead attacking
GitHub's (TCP-based) applications, they have the rare ability to identify the
attacker's drones and possibly hand the list off to someone that can get them
shut down. Hopefully GitHub does the right thing here.

~~~
scurvy
Most popular sites have huge lists of compromised machines. You can't really
do anything with them though. If you block compromised machines, you'll blow
up your support team with people complaining they can't reach your site.

It's not in my interest to "Citizen's Arrest" someone with a pwnt node.

------
julesbond007
I'm quite surprised this happened to github... Sometimes I'm trying to look at
some repos, but I apparently click too fast and have to wait before I can do
other things. I thought they had DDoS attacks under control.

------
kclay
I find it odd that github can even be subjected to DOS attacks, but it seems
it's only HTTP traffic. I also wonder whether it is even possible to DOS the
raw TCP layer of the git protocol.

~~~
marcosdumay
You can DOS anything that has a network interface.

------
coops
"A simple Hubot command can reroute our traffic to their network which can
handle terabits per second."

Really? You have to round-trip through Campfire to control your network?

~~~
imbriaco
It's just the most efficient and visible way for us to do it; it's not the
only way. Here are a couple of reasons why we like it:

1. It's scripted, so you don't have to think about it at 3am.

2. The rest of the team can see it happening in realtime, so you don't have to
explain what you're doing via a side channel.

3. It doesn't require specialized knowledge of routing to enable it. If the
on-call engineer sees an attack and calls someone for guidance, it's super
easy to tell them "type /mitigation enable", for instance.

4. Of course, we can run the exact same script or log in to our routers and
manually change our BGP announcements if we need to.
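
The pattern described here can be sketched generically. GitHub's actual implementation is a Hubot script and is not shown anywhere in the post, so everything below is a hypothetical illustration in Python; only the command name "/mitigation enable" comes from the comment:

```python
def enable_mitigation():
    # Placeholder: the real step would push new BGP announcements to
    # route traffic through the scrubbing provider.
    pass

def handle_message(text, announce):
    # The action is scripted, and every step is announced to the chat
    # room, so the whole team sees it happen in realtime.
    if text.strip() == "/mitigation enable":
        announce("Rerouting traffic to the mitigation provider...")
        enable_mitigation()
        announce("Mitigation enabled.")

log = []
handle_message("/mitigation enable", log.append)
print(log)  # ['Rerouting traffic to the mitigation provider...', 'Mitigation enabled.']
```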

------
lauradhamilton
WTF is wrong with people attacking github and meetup?

DDoSing a government site I can understand, sure. (Aaaand now I'm on a list.)

------
scurvy
tl;dr We're bad at detecting and handling layer 7 attacks. We're better now.

Dear github dudes, netflow is your friend.

------
crashandburn4
Am I the only person that gets slightly annoyed whenever I read "an order of
magnitude" and the article doesn't mention whether it's binary or decimal?
What do you people think they're talking about? I'm guessing a decimal order
of magnitude.

~~~
pessimizer
Doesn't matter to me, 2^X and 10^X are close enough to each other for pretty
small to large values of X. When I say it, I usually mean something between x5
and x15.

I think avoiding the temptation to false precision is more important.
Inconsistent units and the retention of insignificant digits in order to make
numbers look bigger (or smaller) drive me up the wall, though.

~~~
marcosdumay
2^10 = 1024; 10^10 = 10000000000.

Not that close, if you ask me.

If you mean something between ×5 and ×15, you are using base 10. ×16 is 4
orders of magnitude in binary, but 1 in decimal.

That said, everybody asking that same question, please, do not use binary
orders of magnitude. Our language suffers every time somebody does that.

~~~
valarauca1
3-5 orders of magnitude is a decent margin of error for things like
astrophysics.
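
A quick numeric check of the gap the thread is arguing about:

```python
import math

# Ten "orders of magnitude" in each base:
print(2 ** 10)    # 1024
print(10 ** 10)   # 10000000000

# A factor of 16 is four binary orders but only ~1.2 decimal orders:
print(math.log2(16), round(math.log10(16), 2))  # 4.0 1.2
```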

