I don't really understand why this is that big of a deal. Okay, I pay for github and they have downtime, that sucks. But git is designed to be distributed and your workflow can be partition tolerant. I honestly don't care if github goes down for a few hours; it doesn't stop anyone on my team from continuing to work. Now if they have days of downtime... that could be an issue and I would probably look to host repositories elsewhere.
The ability to write a commit message with an issue number in it doesn't magically make the issue system distributed: it's still hosted in ONE place, still down when that ONE place is down, and still demands that you have an account registered with the ONE provider where those issues are posted and discussed.
I agree with you, I'm just refuting the claim that this is Github's fault due to their own incompetence. What can Github realistically do except mitigate? What can _anyone_ do except mitigate?
It's a good thing git is a distributed system. In such cases, people can just switch to a different remote and push their branches there (with some care, of course).
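For example (a rough sketch — the Bitbucket-style URL in the comment is the idea; a local bare repo stands in for the alternate host here so it's runnable as-is):

```shell
set -e
rm -rf /tmp/work /tmp/fallback.git
# a working clone with local commits that the GitHub outage is blocking
git init /tmp/work
git -C /tmp/work -c user.name=dev -c user.email=dev@example.net \
    commit --allow-empty -m "keep working during the outage"
# stand-in for e.g. git@bitbucket.org:team/project.git
git init --bare /tmp/fallback.git
git -C /tmp/work remote add fallback /tmp/fallback.git
# push every branch and tag to the temporary home
git -C /tmp/work push fallback --all
git -C /tmp/work push fallback --tags
```

The "some care" part is mostly about agreeing on which remote is canonical once GitHub comes back, so nobody keeps pushing to the fallback.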
Just being able to continue working and committing locally and pushing to Github when it comes back up is still a DVCS win over SVN. People may not use the distributed part of git to its fullest, but they do use it.
That's true. On a project at work that uses SVN, I luckily have IntelliJ's great stashing/changelist features, which act like a local branch.
Many of our projects are using SVN just because it's been (at least until recently) the easier one to configure and set up locally. So that's a small win for SVN, and it sidesteps the issue of GitHub being down.
If you have a co-worker with full copy of repository, isn't it already somewhat distributed? Of course it is not all attached to network.
The real problem we should be talking about is the pull-request-based workflow many companies (including mine) have adopted, which doesn't work when github goes offline.
I hear a lot of teams have switched to using github issues for project management entirely, so that pull requests and tickets can be "seamlessly" linked. That goes out the window as well.
After using Jira for a year (with GreenHopper), I can't imagine a company with more than a few developers maintaining everything with github issues. shudder
But yes, you are right. Being able to copy it from my neighbor is nice.
Yeah, but it has everything to do with their reputation. Before about 8 months ago, I never recommended Bitbucket to anyone. Now I do, based on the knowledge that GitHub has an outage at least once a month.
The company I worked for used GitHub as the source for its build system's releases. If you can't do releases because GitHub is down a few times, I'm sure the dev/ops team will start looking elsewhere.
There are some good countermeasures. CloudFlare, the CDN that I'm using for my site http://gitignore.io, helps mitigate DDoS attacks [1]. Github also took a good step by separating the source code domain github.com from the pages domain github.io [2]. I agree that tomorrow Atlassian could suffer a DDoS attack, but I feel like, since they are a more mature company, they have a lot more experience dealing with that type of attack.
There comes a point where there isn't enough bandwidth you can buy... Reflection and amplification attacks can, very quickly, generate hundreds of Gbps worth of traffic. It simply isn't economical to keep that much bandwidth on hand all the time.
Not OP, but my last project on github ran for about 6 months and had at least 3 outages on the order of minutes or hours. They didn't shut down the project, but it was worrisome, and I don't think they were DDoS attacks.
Gitlab is easy to set up and it works really well. It has a much cleaner and pleasant interface as well (especially since GitHub ruined theirs about a month ago).
Yeah, that is a great point. Our lead ops guy was pushing for just hosting a bare-bones git repository inside the firewall and not even buying a commercial product. Our problem was that we didn't have enough servers for our product, let alone our code.
Where I work we're slowly switching to Stash lately, self-hosted GitHub-like software (sans Issues) from Atlassian. I took a look and it seems to cost a one-time $10 for teams of up to 10 people. Suspiciously low price: I guess they expect customers to add Jira integration for ticket management and make more money off that.
So this basically makes you safe from DDoS attacks, since you host it yourself, and it costs much less than an enterprise GitHub installation (which is $50K/year for up to 20 people). Judging by the screenshots it seems pretty similar to GitHub, although of course closer to BitBucket. Haven't tried it myself yet.
Does anyone know of a simple utility to host a remote on your local machine so that a small team can take advantage of git's DVCS in the event that a service like GitHub goes down?
Any local machine may be problematic (if you have a dynamic IP), but git is really easy to host on a VPS or any box with a static IP -- just make sure you have everyone's SSH keys authorized (not on root unless you trust everyone with that), and use the following:
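Something like this (the hostname and repo path are placeholders; a local path stands in for the VPS below so you can try it without a server):

```shell
set -e
rm -rf /tmp/central.git /tmp/dev
# on the real box this would be:
#   ssh user@vps.example.net 'git init --bare repos/project.git'
git init --bare /tmp/central.git
# each team member then adds it as a remote and pushes;
# the real URL would be user@vps.example.net:repos/project.git
git init /tmp/dev
git -C /tmp/dev -c user.name=dev -c user.email=dev@example.net \
    commit --allow-empty -m "first"
git -C /tmp/dev remote add backup /tmp/central.git
git -C /tmp/dev push backup --all
```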
No seriously, that is the simplest thing to use and punches right through any firewall, anywhere. Instead of git push, do this:
tar czvf project.tgz .git
mail -a project.tgz team@example.net
Subject: Pull request #9042
Please pull master branch from attached
git repository, it fixes all the things.
^D
The other end can extract that and add the remote pointing to the local filesystem path.
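Git can even do this without tarring up .git by hand: `git bundle` packs refs into a single file the other end can clone or fetch from (paths below are made up):

```shell
set -e
rm -rf /tmp/src /tmp/clone /tmp/project.bundle
git init /tmp/src
git -C /tmp/src -c user.name=dev -c user.email=dev@example.net \
    commit --allow-empty -m "fix all the things"
# pack HEAD and all refs into one mailable file
git -C /tmp/src bundle create /tmp/project.bundle HEAD --all
# the recipient clones (or fetches) straight from the bundle file
git clone /tmp/project.bundle /tmp/clone
```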
Other than that, any file server will work. Apache, nginx, samba (just mount it), rsync, even *ftpd.
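One gotcha with those dumb transports (plain HTTP, FTP, a mounted share): git needs `update-server-info` run on the serving side after each push so clients can discover the refs. A sketch with made-up paths, using a local clone to stand in for the mounted share:

```shell
set -e
rm -rf /tmp/shared /tmp/checkout
git init /tmp/shared
git -C /tmp/shared -c user.name=dev -c user.email=dev@example.net \
    commit --allow-empty -m "hello"
# regenerate .git/info/refs so dumb clients can list refs;
# a post-update hook normally does this automatically on push
git -C /tmp/shared update-server-info
# with a network mount, the mounted path itself works as a remote
git clone /tmp/shared /tmp/checkout
```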
If you can't work because GitHub's down, you're doing git wrong. It's a minor annoyance that I can't access the issues but it's by no means unworkable.