GitHub is "experiencing major service outages" (twitter.com/githubstatus)
27 points by brokenparser on July 19, 2013 | 47 comments



I don't really understand why this is that big of a deal. Okay, I pay for github and they have downtime, that sucks. But git is designed to be distributed and your workflow can be partition tolerant. I honestly don't care if github goes down for a few hours; it doesn't stop anyone on my team from continuing to work. Now if they have days of downtime... that could be an issue and I would probably look to host repositories elsewhere.


> But git is designed to be distributed and your workflow can be partition tolerant.

But issues are not distributed, for example.


They are.

    $ git add .
    $ git commit -m 'fixing #23'


The ability to write a commit message with an issue number in it doesn't magically create a distributed issue system: the issues are still hosted in ONE place, which is still down when that ONE place is down, and which still demands that you have an account registered with the ONE provider where those issues are posted and discussed.

Btw, the auto-linking, auto-fixing commit syntax is just a host-local convention, and it isn't even compatible between the two biggest players; see https://github.com/zzzeek/alembic/ and https://bitbucket.org/zzzeek/alembic/commits/all for an example.


Correct me if I'm wrong, but isn't there very little someone can do to mitigate a DDoS attack?

Github just happens to be a very large target; BitBucket or alternatives are just as vulnerable.


The DDoS-proof solution is to put your repo on your own infrastructure.


I agree with you; I'm just refuting the claim that this is GitHub's fault due to their own incompetence. What can GitHub realistically do except mitigate? What can _anyone_ do except mitigate?


DDoS-proof, just not idiot-proof.

And we employ such creative idiots.

:-)


Sign up for a service that protects against DDoS such as CloudFlare.


A nice reminder not to put all your eggs in one basket.


It's a good thing git is a distributed system. In such cases, people can just switch to a different remote and push their branches there (with some care, of course).
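A minimal sketch of switching remotes (the remote name and Bitbucket URL below are hypothetical placeholders for whatever backup host you pick):

    # "backup" and the URL are placeholders, not an endorsement of any host
    $ git remote add backup git@bitbucket.org:yourteam/yourrepo.git
    $ git push backup my-feature-branch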


But most people never use the distributed part of git, other than having the repo on their local computer.


Just being able to continue working and committing locally and pushing to Github when it comes back up is still a DVCS win over SVN. People may not use the distributed part of git to its fullest, but they do use it.
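In its simplest form that's just (a sketch; the branch and remote names are whatever your project already uses):

    $ git commit -am 'keep working while the hosted remote is down'
    # ...and once GitHub is reachable again:
    $ git push origin master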


That is true. When working on a project at work that uses SVN, I luckily have IntelliJ's great stashing/changelist features, which act like a local branch.

Many of our projects use SVN just because it's the easier one (at least until recently) to configure and set up locally. So that's a small win for SVN, and it sidesteps the issue of GitHub being down.


If you have a co-worker with a full copy of the repository, isn't it already somewhat distributed? Of course, it isn't all attached to the network.

The real problem we should be talking about is the pull-request-based workflow many companies (including mine) have adopted, which doesn't work when GitHub goes offline.

I hear from a lot of teams that they have switched to using GitHub issues for project management entirely, so that pull requests and tickets can be "seamlessly" linked. That goes out the window as well.
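For what it's worth, plain git can still produce the pull-request text itself while GitHub is down; a minimal sketch with git request-pull, assuming the branch is reachable at some other URL (the one below is made up):

    # summarize changes on my-branch since origin/master, pointing reviewers
    # at a copy of the branch they can actually fetch from (hypothetical URL)
    $ git request-pull origin/master ssh://some-reachable-box/srv/git/repo.git my-branch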


After using Jira for a year (with Grasshopper), I can't imagine a company with more than a few developers maintaining everything with github issues. shudder

But yes, you are right. Being able to copy it from my neighbor is nice.


For us, GitHub issues > corporate intranet. Beyond that, JIRA just doesn't add enough to be truly worthwhile.


Sounds like they should learn.


It was a DDoS attack. Nothing to do with their infrastructure.


Yeah, but it has everything to do with their reputation. Before eight months ago, I never recommended Bitbucket to anyone. Now I do, based on the knowledge that GitHub has an outage at least once a month.

The company I worked for used GitHub as the source for its build system's releases. If you can't do releases a few times because GitHub is down, I'm sure the dev/ops team will start looking elsewhere.


Is there a good countermeasure against DDoS? I believe there is none. Commercial services help, but only up to a certain traffic level.

Tomorrow Bitbucket could come under the same attack.


There are some good countermeasures. CloudFlare, the CDN that I'm using for my site http://gitignore.io, helps mitigate DDoSes [1]. GitHub also took a good step by separating the source-code domain github.com from the Pages domain github.io [2]. I agree that tomorrow Atlassian could suffer a DDoS attack, but since they are a more mature company, I feel like they have a lot more experience dealing with that type of attack.

[1] - http://www.cloudflare.com/ddos

[2] - https://github.com/blog/1452-new-github-pages-domain-github-...


Anycast + a lot of bandwidth is the best solution against DDoS.


There comes a point where there isn't enough bandwidth you can buy... Reflection and amplification attacks can very quickly generate hundreds of Gbps worth of traffic. It simply isn't economical to keep that much bandwidth on hand all the time.


And how will Bitbucket magically handle a DDoS? Please do share.

You are blaming GitHub for being attacked by a DDoS?


Not OP, but my last project on GitHub ran for about 6 months and had at least 3 outages on the order of minutes or hours. It didn't shut down the project, but it was worrisome, and I don't think they were DDoS attacks.


That I could understand, if the comment were on a thread about a GitHub outage that was not DDoS-related.


The enterprise version of GitHub might be a middle ground.


GitHub Enterprise has absolutely no dependencies on the mothership, except for expiring license packs.

And, in my experience, they'll very freely give you 1-2 month temporary packs, even if you're late with the renewals. Very good customer service.


Gitlab is easy to set up and works really well. It has a much cleaner and more pleasant interface as well (especially since GitHub ruined theirs about a month ago).


Definitely agree. gitlab is just getting better and better.


Yeah, that is a great point. Our lead ops guy was pushing for just hosting a bare-bones git repository inside the firewall and not even buying a commercial product. Our problem was that we didn't have enough servers for our product, let alone our code.


Or Atlassian's Stash, which is also really affordable for small teams.


The reminder still stands.


So?


Where I work, we've lately been slowly switching to Stash, self-hosted GitHub-like software (sans Issues) by Atlassian. I took a look and it seems to cost a one-time $10 for teams of up to 10 people. That's a suspiciously low price; I guess they expect customers to add Jira integration for ticket management and make more money off that.

So this makes you basically safe from DDoS attacks, since you host it yourself, and it costs much less than an Enterprise GitHub installation (which is $50K/year for up to 20 people). Judging by the screenshots, it seems pretty similar to GitHub, although, of course, closer to Bitbucket. I haven't tried it myself yet.


Does anyone know of a simple utility to host a remote on your local machine, so that a small team can take advantage of git being a DVCS in the event that a service like GitHub goes down?


Any local machine may be problematic (if you have a dynamic IP), but git is really easy to host on a VPS or any box with a static IP -- just make sure you have everyone's SSH keys authorized (not on root unless you trust everyone with that), and use the following:

    $ git remote add origin2 user@your-ip:/path/to/repo.git
To create repo.git:

    $ mkdir repo.git  
    $ cd repo.git  
    $ git --bare init
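Pushing to it then works like any other remote, for example:

    $ git push origin2 master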


Email.

No, seriously: that is the simplest thing to use, and it punches right through any firewall, anywhere. Instead of git push, do this:

    tar czvf project.tgz .git
    mail -a project.tgz team@example.net

    Subject: Pull request #9042
    Please pull master branch from attached
    git repository, it fixes all the things.
    ^D
The other end can extract that and add the remote pointing to the local filesystem path.

Other than that, any file server will work. Apache, nginx, samba (just mount it), rsync, even *ftpd.
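A related built-in option in the same spirit is git bundle, which packs chosen refs into a single file you can attach instead of tarring .git; a rough sketch, assuming the branch is master:

    # sender: pack master into one file and mail/copy it
    $ git bundle create project.bundle master
    # receiver: clone (or fetch) straight from the bundle file
    $ git clone -b master project.bundle project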


While not as nice as hg serve, there's git daemon.
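A rough sketch of using it (the paths and hostname below are placeholders):

    # export every repo under /srv/git, read-only, over the git:// protocol
    $ git daemon --base-path=/srv/git --export-all --reuseaddr
    # teammates then clone/fetch with:
    $ git clone git://your-hostname/repo.git
    # add --enable=receive-pack to the daemon to allow (unauthenticated) pushes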


If your team is small enough, why not use Bitbucket as a backup remote?


gitolite


Someone didn't do their homework. -- "Man, I'm not going to get this project done by morning. Hey, I know, I'll take down GitHub and say it's checked in."


I love how we put this on HN. Most likely everybody who sees this already knows about it; that's why they're here (they can't work).


If you can't work because GitHub's down, you're doing git wrong. It's a minor annoyance that I can't access the issues, but it's by no means unworkable.


What else is new?


A sad tradition: being down once a month. Oh, come on, guys!



