But of course it does, because much of its utility is that it substitutes for a lower-level partition strategy for source-code version control at the level of the individual user. By which I mean: the whole point of distributed version control is that, as the default condition, you have sufficient resources at hand to keep working regardless of what is happening remotely.
Git is designed to be available and partition-tolerant. Using it with the expectation of consistency is a mistake; Git requires manual intervention on the part of users to get even good-enough consistency. GitHub can't change that.
None of which makes GitHub being down suck any less for the people for whom it sucks.
This is totally false. GitHub doesn't just provide a git server, it also provides ticketing and project management. When GitHub is down, projects that rely heavily on, for example, commenting on issues grind to a halt.
Yes, you don't need GitHub to be up 99.999% of the time to do a `git push` every once in a while, but there are other projects that use other features of GitHub besides the git server aspect.
It's not hard to imagine a production system being unable to scale out because they didn't vendor and they can't fetch the repo for an essential library.
I do publish projects on GitHub because of the popularity of the platform, but I really don't approve of GitHub pull requests, and I don't like GitHub's over-simplified issues either. Together, these two features are probably what most projects depend on.
GitHub pull requests are definitely inferior to the patch-by-mail approach for a conversation within a closed group of people. The web UI is clicky and looks friendly, but the mail notification doesn't include the patch itself (you have to be online to see it), and it's more convoluted to edit or apply the change using vanilla git.
Many people don't know it, but git makes patch-by-mail extremely convenient. In fact, it's more convenient than the several GitHub command-line interfaces I tried(!). You submit the patch, have a conversation by regular mail, and then either edit the patch or merge it. Everything is always right there.

Using GitHub you get several side effects: people often add comments to specific lines in the patch, which look like garbage in the generated GitHub notification. People will also naturally edit their comments, which doesn't generate notifications. I cannot count how many times I've had to re-read comments because I had the feeling I missed something (and indeed, some comments had been edited). This is bad. It shouldn't be allowed. It cannot happen with email.
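To make the workflow concrete, here's a minimal, self-contained sketch of the patch-by-mail round trip using only vanilla git. The repo paths, identities, and the commit are all made up for the example; in real use the patch file would go out via `git send-email` rather than sitting in a local directory.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# "Upstream" repo with one commit, and a contributor's clone of it.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=m@example.com -c user.name=M \
    commit -q --allow-empty -m "initial"
git clone -q "$tmp/upstream" "$tmp/contrib"

# Contributor: make a change and turn it into a mailable patch file.
echo "fix" > "$tmp/contrib/fix.txt"
git -C "$tmp/contrib" add fix.txt
git -C "$tmp/contrib" -c user.email=c@example.com -c user.name=C \
    commit -q -m "add fix"
git -C "$tmp/contrib" format-patch -1 -o "$tmp/outbox" >/dev/null

# Maintainer: apply the patch straight from the mailbox-format file,
# preserving the contributor's authorship and commit message.
git -C "$tmp/upstream" am -q "$tmp/outbox"/0001-add-fix.patch
git -C "$tmp/upstream" log --oneline -1
```

The nice part is that the patch file is the conversation artifact: you can read it offline, annotate it in a reply, and `git am` it unchanged when it's ready.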
GitHub has the advantage of making the conversation visible. But if you have a developer mailing list for your project, git and plain, regular patches do work really well. It might take a bit more effort to get used to, but it pays off.
Issues are a different problem, since I still feel the distributed solutions are inferior. I experimented a lot with bugs-everywhere and simple defects (sd), two distributed bug trackers that store the issues in the repository itself. Both are nice, and have distinctive features which I'm not going to discuss much here. The main thing is that 'sd' at some point even supported syncing issues with GitHub, which gave you the best of both worlds. Unfortunately it's unmaintained at the moment, but it would solve ticketing in a distributed way. It looks like there's not much interest in this, as both projects are definitely not as active as the many available GitHub clients which do 1/10 of what these projects offer.
Please, look into these two projects.
I don't know if "login with GitHub" stopped working during this outage, but I can imagine the sort of hell if I suddenly couldn't access the PaaS running my production infrastructure because it couldn't authenticate my GitHub account.
DVCSs still have many merits, even with a central repo. First, when the central repo is down, you still have the whole history in your local repo: you can still view logs, check out old commits, create new branches and commits, etc. Second, if the central repo is destroyed, you can trivially recreate it from any up-to-date clone. This is already way more advanced than centralized VCSs.
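The "recreate it from any clone" part really is trivial. A self-contained sketch, where local bare repos and made-up paths stand in for the hosted central repo:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Bare "central" repo, a working clone, and one pushed commit.
git init -q --bare "$tmp/central.git"
git clone -q "$tmp/central.git" "$tmp/work"
git -C "$tmp/work" -c user.email=a@example.com -c user.name=A \
    commit -q --allow-empty -m "first"
git -C "$tmp/work" push -q origin HEAD

# Disaster: the central repo is wiped.
rm -rf "$tmp/central.git"

# Recovery: recreate it empty, then push every local ref back in one go.
git init -q --bare "$tmp/central.git"
git -C "$tmp/work" push -q --mirror "$tmp/central.git"

# The full history is back on the "central" side.
git --git-dir="$tmp/central.git" log --oneline -1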
>11:32 UTC "We're seeing high error rates on github.com and are investigating".
>11:40 UTC "We're doing emergency maintenance to recover the site".
I don't use GitHub because it gives me a git daemon. I use it because the interface allows me to interact with non-technical people when dealing with code and resources. I use GitHub because, even though it is down right now, my stuff is backed up; I don't have to deal with the nitty-gritty of ensuring backups succeed, and I don't have to deal with the system when it goes down; someone else does that for me. Hence I would never want to run a local GitHub clone.
Assuming that we're talking about things which are already stored in Git and not e.g. an HD video collection, what would make your original statement true?
Arguably you should also mirror to an internal git repository somewhere, but GitHub is down infrequently enough that it's hard to justify the setup work and ongoing costs (your own hardware, security patching, other admin overhead) to managers until you see multiple outages in a small enough window.
It doesn't matter what kind of deployment you do. Even s3 has outages if the network between you and s3 bites the dust. Even if you deploy straight from the developer's workstation to the production server, you can have an outage - network splits, hardware failure, developer-with-commit-rights gets sick. Working around an outage means you're not working on the product, and that's a loss of productivity.
With a ton of projects now hosted on github, github being down is a major dent in my overall productivity even though we _do_ have an internal git server.
There are a lot of people spreading a holier-than-thou attitude in this thread.
GitHub will probably have more downtime, which you can do nothing about, than your self-hosted GitLab, which will probably fail because you upgraded something or tinkered with a configuration.
It's a numbers game: they have more people monitoring, and they spend way more money than you do on uptime and services, though such a service doesn't automatically get more stable as it grows.
I trust myself way more than I trust GitHub or other hosted/cloud services because I cannot affect/help them in any way when stuff happens.
Ultimately, I know this, I accept this, and I happily use GitHub, Bitbucket and such all the time.
Personally I don't think you will see less downtime on your own server. The difference is that when it goes down, there is something you can do about it, so you get busy fixing it.
When github is down we get to all get online and talk about our woes together.
The outages are usually pretty short. Go get a coffee and come back, or grab the team and go play a round of bowling. That has to be better than trying to manage and keep secure our own repos.
Last week we set up the community edition of GitLab on one of our local servers. We then mirror the repository to GitHub. So far so good.
We all like GitHub, but after its recent problems we realised we rely completely on it being up. We couldn't even deploy if it went down. We still use it for open source things; for our main product it has simply become an offsite backup.
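If it helps anyone, the mirroring half of a setup like this can be a one-liner. A sketch of a post-receive hook on the internal primary repo (the GitHub remote URL is a placeholder):

```shell
#!/bin/sh
# post-receive hook on the internal (primary) repo: after every push,
# mirror all branches and tags to the offsite GitHub backup.
# The remote URL below is a placeholder for your own mirror repo.
exec git push --quiet --mirror git@github.com:example/project.git
```

`--mirror` pushes (and prunes) every ref, so the backup is always an exact copy of the primary.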
People should have all their code in place and just merge and rebase from time to time? At least that's what I (a SVN user) have read on the internets about Git.
Most people use GitLab to refer to the downloadable software you run locally https://about.gitlab.com/downloads/
You would think you can still develop an iOS app when GitHub is down, but a simple `pod install` requires access to repos hosted on GitHub.
Somewhat related I highly recommend both Gogs and Gitlab for internal git hosting.
Could be another DDoS?
Update 11:32 UTC "We're seeing high error rates on github.com and are investigating".
It would also be interesting to track loss of productivity in the software engineering professions when GitHub is down. Is this a single point of failure for your company?
Think of all the package managers that pull straight from Github repos as part of their installation processes..
I guess for mission critical applications you could have a local mirror of all your vendor package dependencies (probably not a bad idea) but I would bet most people don't do that (I don't)..
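A local mirror is just a `--mirror` clone per dependency that you refresh periodically. A self-contained sketch, where a local bare repo stands in for github.com and every path and name is made up:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Fake "GitHub" hosting one dependency with a single commit.
git init -q --bare "$tmp/github/libfoo.git"
git clone -q "$tmp/github/libfoo.git" "$tmp/w"
git -C "$tmp/w" -c user.email=a@example.com -c user.name=A \
    commit -q --allow-empty -m "v1"
git -C "$tmp/w" push -q origin HEAD

# Keep a local --mirror clone of the dependency; re-running fetch
# (say, from cron) refreshes it with any new upstream refs.
mkdir -p "$tmp/mirrors"
git clone -q --mirror "$tmp/github/libfoo.git" "$tmp/mirrors/libfoo.git"
git --git-dir="$tmp/mirrors/libfoo.git" fetch -q --prune

# When "GitHub" is down, installs can clone from the mirror instead.
rm -rf "$tmp/github"
git clone -q "$tmp/mirrors/libfoo.git" "$tmp/install"
git -C "$tmp/install" log --oneline -1
```

Pointing the package manager at the mirror path instead of github.com is the part that varies per tool, but the mirrors themselves are cheap to keep.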
11:54 UTC "We've finished emergency maintenance and are monitoring closely".
Octobot iOS app: http://octobotapp.com
And my own service, StatusGator: https://statusgator.io
It's like an unintentional pub crawl-- 400 people show up to party, empty the pub of booze, move on to the next pub that hasn't enough booze...
As you can see here:
Git hosting is on the roadmap, not ready yet.
Also Upsource takes a shitload of memory.
Update: "We've finished emergency maintenance and are monitoring closely".
Update: seems to be back, so it doesn't matter unless someone's having an early morning.
It's all good though, I still have loco host to keep me busy until they resolve the issue.