GitHub's down?
206 points by 9point6 on May 6, 2015 | 146 comments
Getting the unicorn on any pageload



I read about Github being down and know that it sucks for many people. What I find interesting is thinking about how quickly so many installations of distributed version control adopted a de facto central server. Github is a platform that should not need a particularly high uptime from the point of view of any one user.

But of course it does, because so much of its utility is that it substitutes for a lower-level partition strategy for source code version control at the level of the individual user. By which I mean that the whole point of distributed version control is to have sufficient resources for working at hand, regardless of what is happening remotely, as the default condition.

Git is designed to be available and partition tolerant. Using it with the expectation of consistency is a mistake, and Git requires manual intervention on the part of users to even get good-enough consistency. Github can't change that.

None of which makes Github being down not suck for the people for whom it sucks.


> Github is a platform that should not need a particularly high uptime from the point of view of any one user.

This is totally false. GitHub doesn't just provide a git server, it also provides ticketing and project management. When GitHub is down, projects that rely heavily on, for example, commenting on issues grind to a halt.

Yes, you don't need GitHub to be up 99.999% of the time to do a `git push` every once in a while, but there are other projects that use other features of GitHub besides the git server aspect.


It's also the primary host for dozens of package management systems these days.

It's not hard to imagine a production system being unable to scale out because they didn't vendor and they can't fetch the repo for an essential library.


It's a shame this is the case, though.

I do publish projects on GitHub because of the popularity of the platform, but I really don't approve of GitHub pull requests, and I also don't like GitHub's over-simplified issues. Together, these two features are probably what most projects depend on.

GitHub pull requests are definitely inferior to the patch-by-mail approach for a conversation within a closed group of people. It's clicky and looks friendly, but you don't get the patch itself in the mail notification (you have to be online), and it's more convoluted to edit or apply with vanilla git.

Many people don't know this, but git makes patch-by-mail extremely convenient to handle. In fact, it's more convenient than any of the several GitHub command-line interfaces I tried(!). You submit the patch, have a conversation by regular mail, and then either edit the patch or merge it. Everything is always right there.

With GitHub you get several side effects: people often add comments to specific lines in the patch, which look like garbage in the generated GitHub notification. People will also naturally edit their comments, which won't generate notifications. I cannot count how many times I've had to re-read comments because I had the feeling I'd missed something (and indeed, some comments had been edited). This is bad. It shouldn't be allowed. It cannot happen with email.
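For anyone who hasn't tried it, a minimal sketch of that round-trip (list address and patch filename are just placeholders):

    # contributor: turn the last two commits into mail-ready patch files
    git format-patch -2 --cover-letter
    # contributor: send them to the project's list (assumes git send-email is configured)
    git send-email --to=dev@example.org *.patch
    # maintainer: apply a patch straight from the saved message
    git am < 0001-some-fix.patch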

GitHub has the advantage of making the conversation visible. But if you have a developer mailing list for your project, git and plain, regular patches do work really well. It might take a bit more getting used to, but it pays off.

Issues are a different problem, since I still feel the distributed solutions are inferior. I experimented a lot with bugs-everywhere and Simple Defects (sd), two distributed bug trackers that store the issues in the repository itself. Both are nice, and each has distinctive features which I'm not going to discuss here. The main thing is that 'sd', at some point, even supported syncing issues with GitHub, which gave you the best of both worlds. Unfortunately, it's lacking maintenance at the moment, but it would solve ticketing in a distributed way. It looks like there's not much interest in this, as both projects are definitely less active than the many available GitHub clients, which do a tenth of what these projects offer.

Please, look into these two projects.


Thanks for the distributed issue tracker mentions; I didn't know they existed.


Our Jenkins is configured to sign in with GitHub. And yeah, it sucked to not be able to get in. In the future it's probably best to move to non-GitHub accounts.


It also acts as the de-facto authentication provider for a ton of third-party hosted services.

I don't know if "login with github" stopped working during this outage, but I can imagine the sort of hell it would be if I suddenly couldn't access the PaaS running my production infrastructure because it couldn't authenticate my GitHub account.


Agreed. There are gists which people curl or wget straight from the servers. GitHub is not exactly like SourceForge or something like that...


Sounds like a poor choice if you're curling scripts from GitHub


I see your point about productivity. From a CAP perspective the ticketing and other project management processes depend on all three.


Although DVCSs don't require a central repo to operate, the most natural workflow for most projects is still to have a central repo where everybody pulls and pushes.

DVCSs still have many merits, even with that central repo. First, when the central repo is down, you still have the whole history in your local repo. You can still view logs, check out an old commit, create new branches and commits, etc. Second, if that central repo is destroyed, you can trivially recreate it using any up-to-date clone. This is already way more advanced than centralized VCSs.
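As a sketch, recreating a destroyed central repo from any clone is roughly this (server path and URL are placeholders):

    # on the new server: create an empty bare repository
    git init --bare /srv/git/project.git
    # from any up-to-date clone: point origin at it and push everything back
    git remote set-url origin git@newserver:/srv/git/project.git
    git push origin --all
    git push origin --tags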


Every time a Github down article comes on HN (why do they still come and get upvoted?), this is usually the top comment - centralised server for a DVCS.


https://status.github.com/messages

>Today

>11:32 UTC "We're seeing high error rates on github.com and are investigating".

>11:40 UTC "We're doing emergency maintenance to recover the site".


Just noticed an interesting design decision on that page. An 'everything OK' message once a day. One way to push previous downtime below the fold of the page.


It's also a way to show "yes, this status check is still reliably doing its job, it hasn't crashed".


OK I could take that as a reason.


Rightfully so; humans are notoriously bad at interpreting sparse data on a linear scale.


Notice the change in the favicon.


> 11:54 UTC We've finished emergency maintenance and are monitoring closely.

it's back


Along with the switch to the Octocat "We're down for maintenance" message.


For everyone who is currently losing productivity, perhaps your time might be well spent reviewing `git daemon`: http://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon
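A minimal invocation, assuming your repositories live under /srv/git (the daemon is read-only by default):

    # serve everything under /srv/git over the git:// protocol (port 9418)
    git daemon --base-path=/srv/git --export-all
    # teammates can then clone with
    git clone git://your-box/project.git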


For everyone who is currently losing productivity, perhaps you should close your HN tab


^ /thread


OK I get what you're saying, but honestly, who is really 'losing productivity'? I suspect the ones saying they are in fact just getting on with other tasks they can do because they are, despite github, still using distributed version control and do have access to and the ability to commit to code repositories. You don't need to host your own git daemon or github clone to get that functionality.

I don't use GitHub because it gives me a git daemon. I use it because the interface allows me to interact with non-technical people when dealing with code and resources. I use GitHub because, even though it is down right now, my stuff is backed up, I don't have to deal with the nitty gritty of ensuring backups succeed, and I don't have to deal with the system when it goes down - someone else does that for me. Hence I would never want to run a local GitHub clone.


It kinda sucks when you rely on continuous integration, though.


Can't you just run your tests locally?


I disagree on some points: Github/git is not an appropriate backup strategy, and GitHub is not user-friendly enough for non-technical people (the default repository page, that is; GitHub Pages is another matter).


Do you have any support for those assertions? In particular, the backup claim is rather dubious unless you're redefining “backup” to be a much higher bar than most common answers meet.


Git is a distributed revision control system and designed as such. The fact that it can be used as a backup tool doesn't mean it should be used as a backup tool, especially when there are backup solutions designed from the ground up to do just that.


But what you actually said was “Github/git is not an appropriate backup strategy”. Git is tamper-evident and uses strong hashes to protect against bitrot, has full change tracking and numerous measures to avoid data-loss becoming permanent if you detect human error before weeks/months go by, has integrated remote tracking so you can tell how stale your off-site copies are and trivially update them, and most server implementations allow as much access control as you desire.

Assuming that we're talking about things which are already stored in Git and not e.g. an HD video collection, what would make your original statement true?


I guess no-one else has deployment scripts that pull from github, just because you don't have them.


GitHub isn't made for that, though. If you need scripts to pull down files for important work, then you should really be storing that stuff in S3. And probably some other place you can switch to if something goes wrong.


Someone pushed code to master or some other specified pre-master branch. Jenkins pulls down, builds, runs tests, uploads artifacts to S3. That doesn't sound unrealistic or risky on the surface, modulo relying on both GH and S3 being up.

Arguably you should also mirror to an internal git repository somewhere, but GitHub is down infrequently enough that it's hard to justify that work plus the continuing costs (your own hardware, security patching, other admin) to managers until you see multiple outages in a small enough window.
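A cheap middle ground, if you want it (sketch only, hostnames hypothetical), is to let the CI job fall back to an internal mirror when GitHub is unreachable:

    # Jenkins shell step: prefer GitHub, fall back to the internal mirror
    git clone git@github.com:acme/app.git app || git clone git@git.internal:acme/app.git app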


That sounds exactly like our deployment strategy, and I can push to production when GitHub is down. What's preventing you from copying & pasting the build script from Jenkins into a terminal in a local checkout? S3 being down would be a show stopper, but GH being down is just really annoying.


Developers don't always have access rights to the build systems or artifact repositories. Most of the devs I work with don't - not because I'm not willing to share, but because they're supremely uninterested in the deployment systems. If there's an outage, the people who can fix it aren't always available right then and there.


What about the code it has to build? Where does that come from? What about third party libraries it has to pull in? If the only thing the build server needed to build your code was the build script, this would be easy.


The code it's building comes from the local checkout. As for third-party libraries, npm used to be so unreliable that we needed a strategy for that.


Checkout from where? I'm not being dense, but new code has to come from SOMEWHERE. If GH (or wherever your code is hosted) is down, the checkout won't be able to happen. Development isn't generally done on the build box.


I've got about 3 different checkouts of my code on my laptop right now, and each of my coworkers has at least one...


Again, how does that get to the build box? I also have several local copies of my code on my computer, but that doesn't help the Jenkins machine get it unless I scp (or whatever) it over.


You can copy whatever files you want with SCP or SFTP or any number of file transfer protocols.


Yes, for production assets. But that's not the only thing deployment scripts serve. Tomorrow we're trying out a feature branch on our staging servers, and will be pulling from Bitbucket as we iterate. If Bitbucket goes down, we can work around it, but we still 'lose productivity' because we're spending our time working around it. Yes, the devs can 'work on other things', but tomorrow's feature branch is what's on their mind, and it's what the sprint target is.

It doesn't matter what kind of deployment you do. Even s3 has outages if the network between you and s3 bites the dust. Even if you deploy straight from the developer's workstation to the production server, you can have an outage - network splits, hardware failure, developer-with-commit-rights gets sick. Working around an outage means you're not working on the product, and that's a loss of productivity.


How does that help with the long, long lists of external dependencies, with the bug trackers of projects where I'd like to look up whether someone else had the same issue (and maybe there's a workaround), or with viewing and comparing code from older revisions of projects I'm using?

With a ton of projects now hosted on github, github being down is a major dent in my overall productivity even though we _do_ have an internal git server.


For repositories your projects depend on, you can set up a local proxy so you at least have local copies for continuous integration. E.g. PHP's Composer project has _Toran_ and _Satis_ for this: http://tech.m6web.fr/composer-installation-without-github.ht...
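Rough shape of the Satis route, if I remember it right (paths are placeholders): list the GitHub-hosted packages you depend on in a satis.json, build a static Composer repository from it, and point your projects at that internal URL.

    # build the static repository described by satis.json into a local web root
    php bin/satis build satis.json /var/www/satis
    # then reference that URL as a "composer"-type repository in each project's composer.json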


That doesn't give me easy access to the source, browsable and searchable; I can't link to it when discussing with a colleague via chat, can't browse the docs and the wiki, and can't update them. It's a minor advantage at the cost of running yet another piece of infrastructure that may die or exhibit problems. We use Nexus as a proxy for various repositories, and so far I've had Nexus down more often than GitHub.


You can host your own Nexus servers, for whatever that's worth, if you were unaware. Again, adding more infrastructure, but it at least solves the "it's down on the internet!" problem.


We are hosting our own Nexus servers - but they tend to have issues of their own. Bugs, repositories get broken, storage fills up, machines go down for maintenance and patching, yadda yadda. GitHub's track record at being up is better than our Nexus track record.


Or set up GitLab for bigger teams; they even offer a Docker image.


There's even an AMI for Amazon EC2 hosts IIRC, which brings down the costs when coupled with a reserved instance.


You remember correctly, it is available on https://about.gitlab.com/aws/


You can also run `sudo apt-get / yum install gitlab-ce` nowadays.


This is very useful for people who do not like browsing HN and want to have something more work-like to do when their git server goes down.


git daemon isn't encrypted; you might consider using regular git over ssh instead.
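Any box you can already ssh into will do, with no daemon to run at all:

    # clone (and push) over ssh, encrypted end to end
    git clone user@your-box:/srv/git/project.git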


Funny, I switched our projects to GitLab a week ago and told the team: because we should not trust a centralized platform to keep our code decentralized.


So you switched from one central platform to another singular central platform? Am I missing something here?


This is what I don't get. I trust github as a central platform more than hosting my own gitlab in the cloud.

There's a lot of people spreading a holier than thou attitude in this thread.


GitHub is a much larger target, and has many, many more moving parts than your average Gitlab on, say, a Linode virtual server.

Github will probably have more downtime which you can do nothing about, than your self-hosted GitLab, which will probably fail because you upgraded something, or tinkered with a configuration.

It's a numbers game. They have more people monitoring, they spend way more money than you do on uptime/services, but such a service doesn't get more stable as it grows.

I trust myself way more than I trust GitHub or other hosted/cloud services because I cannot affect/help them in any way when stuff happens.

Ultimately, I know this, I accept this, and I happily use GitHub, Bitbucket and such all the time.


>Github will probably have more downtime which you can do nothing about, than your self-hosted GitLab, which will probably fail because you upgraded something, or tinkered with a configuration.

Personally, I don't think you will see less downtime on your own server. The difference is that when it goes down, there is something you can do about it, so you get busy fixing it.

When GitHub is down, we all get to go online and talk about our woes together.

The outages are usually pretty short. Go get a coffee and come back, or grab the team and go play a round of bowling. That has to be better than trying to manage and keep secure our own repos.


This - the last thing I want to do is be on the line when something happens to our git server and no one can work because our solution is my own custom one-off.


I don't know what to say other than I disagree.


Our team did something similar after the problems GitHub had when it came under attack a few weeks ago. We have a team of 20 or so and when GitHub becomes unreliable it becomes a big problem for the team.

Last week we set up the community edition of GitLab on one of our local servers. We then mirror the repository to GitHub. So far so good.

We all like GitHub, but after its recent problems we realised we rely completely on it being up. We couldn't even deploy if it went down. We still use it for open source things. For our main product it has simply become offsite backup.
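For anyone curious, the basic shape of such a mirror (names here are placeholders, not necessarily our exact setup) is just a second remote pushed from a cron job or CI hook:

    # one-time: add GitHub as an extra remote on the canonical repo
    git remote add github git@github.com:acme/product.git
    # periodically: push every branch and tag, pruning deleted ones
    git push --mirror github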


GitLab CEO here, good to hear your experience with GitLab is good. What solution do you use to mirror the repository?


Yes, exactly. Between trusting a third-party centralized platform and trusting my git hosted on my own server... I choose my own, and everyone should. This was the original point of creating Git in the first place.


> Because we should not trust a centralized platform to keep our code decentralized.

People should have all their code in place and just merge and rebase from time to time? At least that's what I (an SVN user) have read on the internets about Git.


That is Murphy trolling you very, very hard.


GitLab CEO here, I might be missing the joke here, GitLab is a decentralized solution you run on your own servers. Maybe you are confused with GitLab.com?


Is it? Having it hosted on my servers doesn't make it decentralized: it's just centralized in a place I own. If the server(s) go down, same problem as Github (except I may be able to take action). Right?


Compared to multi-tenancy it seems more decentralized to me, since it disperses functions from a central point. It is not redundant, if that is what you mean. In general, having the server together with the rest of your infrastructure ensures it is available when the rest of your infrastructure is available and reduces network/DDoS problems.


correction: GitLab is a centralized solution you run on your own servers - you end up with a replicated central set of hosts.


Yes, GitLab.com does not have hosting services.


Uhhmm, GitLab.com has hosting services https://about.gitlab.com/gitlab-com/

Most people use GitLab to refer to the downloadable software you run locally https://about.gitlab.com/downloads/


the d in github stands for distributed


how did you rotate b?


The joke is there is no 'd' in github, it's not distributed.


The joke is a reference to this old gem

http://www.bash.org/?330261


Ah, a true classic :)


That's when you notice how dependent you can be on GitHub.

You would think you can still develop an iOS app when GitHub is down, but a simple `pod install` requires access to repos hosted on GitHub.


Time for a sword fight!


For those confused by this: https://xkcd.com/303/


So glad we don't rely on GitHub for our deployments anymore. We haven't had a git outage in the past year, yet it seems GitHub goes down more often than people realise. Obviously DDoS attacks are nearly impossible for anyone to mitigate, but it should serve as a reminder of companies' dependencies on external services.

Somewhat related I highly recommend both Gogs and Gitlab for internal git hosting.


Yes, response times seem to have spiked: https://status.github.com/

Could be another DDoS?

---

Update 11:32 UTC "We're seeing high error rates on github.com and are investigating".



Cool to see you set up a GitLab instance for others to use. I had not seen that TLD before.


Neither had I. Seems they're managed by donuts.domains

http://www.donuts.domains/


Thanks for GitLab!


You're very welcome, thanks for spreading the word.


To me, that looks like an angry unicorn ... does that mean it's another DDoS?

It would also be interesting to track loss of productivity in the software engineering professions when GitHub is down. Is this a single point of failure for your company?


Not just for a company, but for developer productivity in general, professional or not...

Think of all the package managers that pull straight from Github repos as part of their installation processes..


Exactly ... we've got Ansible scripts that do that. One of the current Java rockstars (not my description), Adam Bien, recommends pulling the source code for all third-party dependencies to build them locally and put them in a local repository. I suppose you can push directly back upstream to avoid having an extra step.


For OS-level package management like Apt or Yum this is less of an issue, but for things like Composer, Gem, NPM and the like, is the concept of "closest available mirror" even a thing?

I guess for mission critical applications you could have a local mirror of all your vendor package dependencies (probably not a bad idea) but I would bet most people don't do that (I don't)..


If you're using Java, most of the build tools use the Maven repositories, for which you can set up a local mirror too. The company I work for runs a Sonatype Nexus, so we're not really affected by others' downtime unless we need stuff that we hadn't ever used before.


Yes, for our Java projects we have a Nexus repository which mirrors everything locally. It doesn't protect us from someone deleting a project ... then we're stuck on the last available version without a way to address issues, but it's worked pretty well so far.


It's just git; you'd go and get the code from literally anywhere else it's being used.


I was trying to run composer ... and many composer packages are hosted directly on GitHub. Also, issues, pull requests ...


Up again!

11:54 UTC "We've finished emergency maintenance and are monitoring closely".


I was uploading a new profile picture, sorry everyone.


A few projects can help you monitor this:

Octobot iOS app: http://octobotapp.com

And my own service, StatusGator: https://statusgator.io


Probably people trying to clone curvytron because curvytron isn't currently playable at its own site.

It's like an unintentional pub crawl-- 400 people show up to party, empty the pub of booze, move on to the next pub that hasn't enough booze...


Github is down again! And Slack too. The only page of slack.com that is available is their status page that says: "All's good under the hood, boss! Slack is up and running normally". UTC/GMT 19:06


Every time GitHub is down I find myself peering at Upsource:

https://www.jetbrains.com/upsource/


Upsource is not a repository. It is just a viewer for your repository. You still need to host everything yourself.

As you can see here:

> https://www.jetbrains.com/upsource/roadmap/

Git hosting is on the roadmap, not ready yet. Also Upsource takes a shitload of memory.


SE yesterday, gh today. Bad week for dev productivity.


SE?


Stack Exchange.


stack exchange I assume


Seems to be partly up, hurray! (maybe)

Update: "We've finished emergency maintenance and are monitoring closely".


I'm just getting unicorns everywhere. Sprint review in an hour. Damn, I love Wednesdays.


Hopefully the responding on-call engineers aren't tired folks but folks in other timezones. That way the risk of compounding the situation is lessened, because trying to fix important shit when tired is like driving without a seatbelt.

Update: seems to be back, so it doesn't matter unless someone's having an early morning.


Probably? This is a very random concern to have; why would you think that GitHub has problematic on-call procedures?


11:40 UTC "We're doing emergency maintenance to recover the site".


Damn you github. Just tried making my first push in my new company and boom!

wasn't me


Had to deploy an important service. And github is down.


Perhaps a good time to reconsider depending on a third party to deploy your important services?


Yes, but these decisions aren't up to me. Plus, setting up a GitHub kind of thing with all the features we're used to now is way too time-consuming.


Understandable, but surely the potential disruption if github was to go down for a long time or indefinitely would be more than setting up a local git server that you control?


Maybe now would be a good time to point out the downsides of that decision to whoever made it.


A GitLab droplet on Digital Ocean should run you only $10/month and literally sets up in ~10 min.


Then you've just shifted the problem from GitHub being down to a VPS you've spent 10 minutes maintaining being down.


What makes the servers of DigitalOcean safer than GitHub's?


They're not GitHub. i.e. The chance of GitHub and your GitLab instance on DO going down simultaneously is very slim. Spend an extra $10/month and you can set up another DO server in another data-center, and reduce the odds of simultaneous failure even further.


Setting up a GitHub alternative is quite expensive, but if you're just making your deployment system more robust, it doesn't need to be more time-consuming than having a server you can SSH to.


It is likely that GH has better uptime than the parent's services.


It is likely that the parent's services would be up while GH is down.


The 'one' day that I wake up at 7AM to get some quality caffeinated work done, and I can't pull.

It's all good though, I still have loco host to keep me busy until they resolve the issue.


OK it's back.


It's back to normal. Back to work guys!!


I thought git was all about decentralization?


Is there any open-source private GitHub?



gitlab


Slack is up. And github too.


Back up !


it's weird


it's up


it's back


it's back now


Not again.


yep


It wasn't me


The site is live now


Yes it is!


:)


looks like it's time to go for lunch


Yep. Down for maintenance.


So the DDOS is basically all of us accessing status.github.com right?


Back now! Orange octocat, "We've finished emergency maintenance and are monitoring closely".



