Hacker News new | past | comments | ask | show | jobs | submit login
Github experiencing major service outages across all services. (status.github.com)
181 points by experiment0 on Dec 23, 2012 | hide | past | favorite | 143 comments

This is why I love git (and distributed version control systems in general). For the most part a short downtime isn't the end of the world. When it comes back up I'll push my changes and that'll be that.

While I actually agree with you, as I look at GitHub as "somewhat reasonable git hosting", that is not how many (if not most) projects actually use the website, so it is an unfair analysis: people often use (and even encourage others to use) GitHub as the only web presence for their project, using a Readme.md to build a landing page; meanwhile, for many people, GitHub is how they handle incoming bug reports and pull requests (as they believe, rightly or wrongly, that it is easier to collaborate on these through GitHub than through a more traditional mailing list).

You thereby can't just say "oh yeah, git is distributed, so I don't care if GitHub is offline"; it makes about as much sense as saying "I don't care that Gmail is offline, as email is distributed: I have all my messages locally in Outlook and can send mail through my ISP" while ignoring that other people now can't send you mail and you have lost access to your calendar entirely.

I spent the downtime trying to figure out how to use Jade without the documentation available -- the docs are in the readme.md on the Github repo. It was only later that I realized that the docs were also available through npm (https://npmjs.org/package/jade), but it was an interesting experience nonetheless.

Emm... it IS exactly the point.

Webservers can become unavailable. Github has a great track record, and because git is distributed, people can still commit their code locally.

If you had an SVN workflow, you would either have to wait to commit (which is costly), or batch multiple changes into one commit.
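That difference is easy to demonstrate in a throwaway repo: each change below becomes its own local commit with no server involved (the paths and identity here are made up for the demo).

```shell
# Offline commits in git: nothing here touches the network.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo one > f.txt && git add f.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first change"
echo two >> f.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam "second change"
git log --oneline | wc -l   # two separate commits, recorded with the server down
```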

Also the email example is exactly the same. It IS a good thing you can access stuff locally.

You're basically saying that Dropbox is useless.

You seem to have ignored what I said... this insistence that "people can still commit their code locally" is focussing on just one of the use cases for this website. To read jtchang's comment, it is as if that's the only service that GitHub provides.

In fact, "git hosting" is one of the least interesting services GitHub provides for the specific reason that, yes, git is so ludicrously decentralized: it seriously takes less time to push to your own git repository than it takes to log in to your GitHub account.

(Really, the only thing it makes simpler, with regards to actual git hosting, is managing user credentials for a centralized Subversion-like repository; however, if you actually think of git as a decentralized system you don't do that in the first place.)

I will thereby attempt to make this point even more clear by taking my example about Gmail (where you totally ignored the Calendar comment) and ratchet it even further: if all of Google went offline tomorrow, my ability to still send mail using Outlook is great, but it doesn't help me watch YouTube videos or, you know, search the web.

Likewise, when GitHub has an "outage across all services", it affects all kinds of things developers do that have nothing at all to do with "hosted git repository". To repeat one example: a lot of developers are crippled if they don't have access to an issue tracker to tell them what they need to work on next.

So, just because your workflow doesn't rely on anything GitHub provides doesn't mean the same is true for most of the people who use it: GitHub is a popular set of project-management services, all bundled together in a way that is easy for people to get started with, not just a git hosting company.

Exactly. To expand on this point, I love the redundancy built into Git.

I had a clone of my personal fork of my department's github repository at home. I read this and thought "that's interesting, I wonder if I could still work on it from home if I want to." I saw with "git config -l" that I had my github repository and my university's github repository, but not my university's mirror of the repository. I did a "git remote add", logged in to my university's VPN, ran "git fetch", and was up and running with my latest changes from Friday.

If that didn't make any sense, well, the gist of it is that the outage wouldn't affect development work in the slightest, thanks to the fact that every repository is a) content addressable, so you have consistency between servers and b) self-contained, so you have a complete copy everywhere. Hit me up for more info if you're curious; I recently upgraded us from CVS to Git for this redundancy advantage, among others.
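For anyone following along, the recovery described above looks roughly like this; here a local bare repo stands in for the university mirror, and all the names and paths are invented for the demo.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the university's mirror of the repository.
git init -q --bare "$tmp/mirror.git"

# A working clone that pushed its history there before the outage.
git clone -q "$tmp/mirror.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Friday's changes"
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"

# At home: point a fresh local repo at the mirror and fetch.
git init -q "$tmp/home" && cd "$tmp/home"
git remote add mirror "$tmp/mirror.git"   # note: 'git remote add', then fetch
git fetch -q mirror
git log --oneline "mirror/$branch"        # Friday's changes are available again
```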

Exactly. I'm sitting here thinking "poor GitHub engineers" rather than "oh crap, how am I going to work?"

(Though I do wish GitHub Issues weren't as centralized...)

One of the neat things about repos in Fossil (the DVCS by drh of SQLite fame) is that they have a built-in wiki and issue tracker whose contents are version controlled along with everything else in a project's repo. The Fossil executable also has a built-in web server that presents a web interface for working with the repo, including the issue tracker and wiki.

In short, if you have a project's Fossil repo, you can work on everything locally, then push source, wiki, and issue tracker changes to a central repo at your leisure.

Just to note, a GitHub Wiki is a git repo, so if you want to back up the data, branch, etc., you can push, pull, and everything else with any GitHub Wiki.

GitHub Issues aren't, however.

But you can't view a GitHub Wiki in the browser without some preprocessing. In Fossil, I just do

    $ fossil ui
And view (and edit) a copy of the whole project website, including wiki, tickets, history, interface for diffs, etc. in my browser.

The problem with fossil is that it's not git. However it would indeed be nice if someone built a distributed github on top of git.

How does Fossil do versioning of the issue tracker? I tried using a setup like that for a while, with ditz as the issue tracker. It turned out to be really inconvenient to have my issues versioned along with my code as part of the same repo - I want to navigate around in my code's history and branch layout with the issue tracker conceptually being an entirely separate repo that always points to HEAD, unless I explicitly go scroll back in it.

Fossil doesn't track issues along with source code, unlike some scripts for bug tracking on top of git. While Fossil stores tickets in the same repository with source code, with the same basic format, tickets are separate entities and are not connected to the source code itself.

You can read more about it here: http://www.fossil-scm.org/index.html/doc/trunk/www/bugtheory...

Yup. I wanted to do some coding before heading off, but I guess that's off the table now.

What? That's the complete opposite of what OP was saying.

You don't need github to be online to work with your code or even your repositories so long as you haven't gone and thrown all your digital eggs in the same basket (i.e. as long as you have other remotes or at least local copies of your repositories).

I think kmfrk is implying his place of employment doesn't use git.

I don't think so, because it's GitHub that's down and I think kmfrk would have mentioned it if his other service was coincidentally down at the same time. I think what kmfrk meant is that he can't operate without GitHub's issues. It isn't ideal but it's understandable.

Exactly. I for one don't have a photographic memory of my issues.

> For the most part a short downtime isn't the end of the world

Unless the Mayans were off by one

I can do the same thing with CVS?

You can't branch, commit, checkout previous versions, etc while the server is down.

He can.

You're kidding, right? Might as well say you can do the same thing with diff, rsync, and cron.


I would point out that both TFS and Mercurial do the same thing (as does ClearCase, but ClearCase is awwwful), but with much less memory consumption on the local PC. Not trying to add snark (I upvoted the above comment), just adding that it's not a GitHub feature.

> I would point out that both TFS and Mercurial do the same thing (as does ClearCase, but ClearCase is awwwful), but with much less memory consumption on the local PC. Not trying to add snark (I upvoted the above comment), just adding that it's not a GitHub feature.

Well, no, obviously it's a feature of Distributed Version Control Systems in general.

In fact, that's probably why OP wrote "This is why I love git (and distributed version control systems in general)" and didn't in any way claim it was a feature of GitHub?

And do you have some kind of reference for git being less memory-efficient (are you talking RAM-wise?) vs Mercurial or TFS? On its face it's pretty hard to believe; git is pretty lightweight.

Git for large projects is not lightweight, nor does it claim to be. In fact, that's the basis of its philosophy - it stores full versions of the files as opposed to just storing the changes, because data storage is considered inexpensive. The only time Git uses pointers is when the files have not changed.

I'm certainly not slamming Git for working exactly the way it was intended.

I think you may be thinking of a different VCS, such as SVN, as Git actually just stores the metadata (changes/deltas) and not full versions of the file in each commit.

As an example, when you clone a repository, it downloads it, then it goes through a process of 'Resolving deltas'. This is it going through the history, applying the changes/deltas to the files so that they are up to the most recent revision.

The only files that are added in full are binary files.

Some reading: http://git-scm.com/book/en/Git-Internals-Packfiles :)

To be fair, it uses the full blob until you pack, which does not happen immediately and depends on your settings. Large repositories spend a significant amount of effort determining optimum pack window sizes etc. to optimise the packing of their history.

So, git can be quite heavy on disk usage, until all the loose objects are packed.
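That loose-then-packed lifecycle is easy to see in a scratch repo; everything below is a throwaway demo.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
seq 1 1000 > data.txt && git add data.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "v1"
echo extra >> data.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam "v2"

git count-objects -v | grep '^count'   # loose objects: each blob stored whole
git gc --quiet                         # repack: similar blobs become deltas
git count-objects -v | grep '^count'   # count drops to 0 once everything is packed
```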

Pro-tip: when you want people to understand a comment you're making about the disk-consumption patterns of one piece of software vs another rather than memory-consumption, consider actually referring to it as "disk-consumption". Git is quite lightweight, if we're actually talking about memory.

Also, maybe don't confuse git with GitHub, as your original comment did.

Your comment looks just like snark to me since 1) the parent comment mentioned other distributed systems and 2) it takes a random unreferenced jab at git for its allegedly bad memory consumption. I have no idea if that claim is true or not, but it is hardly relevant to the discussion.

Last time I used TFS it would take 5, literally 5 minutes to time out for checkouts. The way TFS acts I don't even feel it's accurate to call it a DVCS.

The only SCM I like less than TFS is SourceDepot. The bane of my existence. It's sad when I can rant about it abstractly and my father can sympathize.

Does TFS beat out ClearCase on the Worst SCM Hierarchy? I'm just curious; I had to use ClearCase a lot at a previous job, and hated it but I've never used TFS.

I've yet to have the pleasure of using ClearCase.

I'm amazed at how much this has knocked me on my arse.

I first attempted to redo the README for a service I've just open-sourced, before realising Github is down.

Then, I attempted to fix the company CI server (OOM errors because Carrierwave needs more than 500MB of memory to run 1 spec, for some unknown reason), which failed because it couldn't check out the code.

After giving up on that, I attempted to install Graphite to a company server, where I hit another roadblock because the downloads are hosted on Github, and so I had to use Launchpad, which I had an allergic reaction to.

Also, when I was shelling into the server, oh-my-zsh failed to update because, you guessed it, Github was down.

Still, shouts to the ops team in the trenches, we're rooting for you.

Your company runs its source revisions through Github without a backup solution? Do you really put all your eggs in a basket you have no control over?

I know that in theory a cloud solution should have higher uptime than an amateurishly set-up private server, but cloud solutions have a certain complexity and coherence that make them very vulnerable to these kinds of 'full failures' where nothing is in your control.

Maybe you should take this time to learn from this, and analyze what you could do to reduce the impact of this failure. For example, you could research what it would take for your company to move to another Git provider, perhaps even on your own server or a VM slice at some cloud provider.

I'm not saying you should drop github, because obviously they have great service, but be realistic about cloud service.

Cloud service is like RAID: it is not a backup.

Just as RAID is nice for recovering from errors without downtime, yet there is still a chance something bigger happens and you lose your data, the cloud is nice for offering scalability and availability, yet there's still a chance everything goes down and you can't run your operations.

* Git is decentralised, if Github drops off the face of the earth, it will take the two of us about half an hour to fix it as we each have a copy of the codebase on our laptops.

* If I wanted the build server to point to our internal git mirror, I would configure it to point to our internal code mirror, but I want it to build off of Github webhooks.

* The "eggs in the basket" analogy is probably best saved for a situation where I'm dependent on a cloud service, such as Twilio.

* I would expect an amateurish private server to have better uptime than a monolithic service such as Github because there are far fewer moving parts, and in our case, far fewer people doing things with it.

* The "company impact" of Github going down is next to nil. It's one o'clock in the morning on a Sunday, I'm eating string cheese, and I feel like being productive. The company is not paying me for this, and it has encountered zero losses from it. We have a very simple mirror which we can use to push code to production if Github goes down, which we have never actually had to use.

Personally, I find the "keep a torch in every corner of every room because for 15 minutes last year the power was out" attitude to life is a bit over-rated, and you're planning for an edge case. I'd much rather remember where the torch is and learn to walk in the dark.

Add up your downtime over a three-year period from relying on GitHub (or Gmail, or AWS) versus the cost of trying to engineer some local-backup system, and the downtime associated with that going awry.

Outages happen - as long as we're talking hours a year, pretty much everyone but life-safety systems, critical infrastructure, and payment/high-traffic commerce sites is probably better off just letting third-party cloud vendors manage their systems. Take the downtime and relax.

(Now, if downtime consistently gets into the 10s of hours/year, it's time to look for a new cloud provider. )

You make a very good point, but it took me about three minutes to build a git mirror which we can push/pull to, can re-configure CI to if we need to, and can be used to run a full deploy from on the company VPS server.

* Create an unprivileged account & set a password that you don't need to remember -> sudo adduser git

* Add your public key from your laptop to the unprivileged user's authorized_keys file -> sudo su git; cd ~; mkdir .ssh; vim .ssh/authorized_keys - then copy and paste your id_rsa.pub into that file

* Repeat that for all public keys on your engineering team

* In git's home directory, git init --bare <name of repo>.git

* On your local machine, git remote add doomsday git@<DOOMSDAY SERVER HOSTNAME>:<NAME OF REPO FOLDER>.git

* git push doomsday --all

* On colleague's box, git clone git@<DOOMSDAY SERVER HOSTNAME>:<NAME OF REPO>.git

Let me know if there is a better way of doing this, or if it's monumentally screwed somehow.

Yup. Github going down barely breaks my stride, but for a real production outage (e.g. Heroku going down), I pour myself a tall glass of scotch and thank my lucky stars I'm not the one who has to scramble around tailing logs and restarting servers. I'm pretty sure their ops team is better at this than I am anyway.

It's not about downtimes and outages. It's incomprehensible to me how lax businesses are with their backups, especially businesses where their clients' data is everything. Yes, the brave new world of the cloud seems tantalizing, but even there, data can and will be lost. Don't just use only one way / provider / service / mechanism for backing up your data.

A tape/LTO backup system doesn't cost the world. Yes, it introduces overhead and maintenance, but I'd rather be safe than sorry.

At my place of work we currently use a lot of virtual servers, hosted by external providers. We use their backup and snapshot mechanisms. But we also pull all data to our local backup server. From there we back up to tape on a daily basis.

I do have backups of all my (relevant) GH repos, since that's just a "git pull" away and can be automated nicely. But I'd probably be out of luck running my regular CI process with github down, or doing a deployment. Both rely on having a public-facing git server - having a backup does not imply that I have a full git server running. I could set one up and administer it, but it's just too much effort given GH's uptime.

Github is definitely in 2-digits this year.

I doubt it. (Assuming by 2 digits you mean < 99.0%, ie that they don't have "two nines" (though I guess two nines could even be 98.5 with rounding)).

1% downtime is over three days a year. They've had some big outages, but I think this may be their longest of the year, and it was < 6 hours. They could have one of those every month and still only have 72 hours of downtime, which is 99.18% uptime.

I'm pretty certain he means 2 digits of hours, given the reference in the parent comment to 10s of hours of downtime

I agree with you, except for one thing: you don't have to build anything locally. Just pushing to Bitbucket as a backup would have done it. It does not have to be a locally hosted solution.

Periodic reminder that git != GitHub

> Your company runs its source revisions through Github without a backup solution?

Thanks to git, every developer usually has a complete backup of the repository + all of its history

I did a bunch of work on GitHub today just before it went down. Talk about getting lucky.

I'm sure we will all learn a bunch from the post-mortem. These high-profile and very openly discussed failures are always good for learning all kinds of things.

No issues at all with how GitHub is handling it so far. Eager to learn what happened. Hitting refresh on the status page every so often. Better than watching an underwater basket-weaving competition at the Olympics.

I created a new org on GitHub today (through all the unicorns) and was about to push an existing repo to them so we could all start pulling it. Talk about unlucky.

The good part is you can do it yourself without GitHub (which, after the downtime, could come into the equation quite easily):

  -- Remote machine, or SSH port forwarded machine --
    > adduser git (you either add pushers'/pullers' ssh pubkeys to its 
      authorized_keys, or use a shared password)
    > su git
    > cd ~
    > git init --bare myproject.git

  -- Your local repo --
    > git remote add <name> git@aforementionedmachine:myproject.git 
    > git push <name>
Setting up a git repo for confidential pushing and pulling is quite easy.

Already had the repo up and running elsewhere, thanks. Just needed to do some non-git stuff on github (create a new repo on GH to push it to, add team members, and so on).

I've been doing private DVCS for years (mercurial) but this is my first project that's on git and I've been looking forward to the opportunity to host it on github and see what I've been missing.

Well handled, minus the unicorns. Tangentially relevant: I just wish Github offered some sort of an academic plan for students; having no private repos means that I cannot use Github at all, not because I'm building closed-source software, but because I (obviously) can't put my assignment work up for public viewing before the assignment deadline. So I've been using Bitbucket, which is fine and all, but I would have loved to be a part of the Github community.

They do offer a free plan for students. I get 5 free private repos.

I can't get the link now since GitHub is down ha, but when it's back up, just have a quick google and you'll find a quick form to fill in and voila. It's great.

Once the current issues are resolved, students can sign up for a free GitHub micro account (5 free private repositories with unlimited collaborators) via https://github.com/edu. You just need a verified, valid-looking email address, e.g. joe@stanford.edu, jim@strath.ac.uk, etc.

If you give them a good explanation, or some proof you are at school somewhere, they will be cool with that even without the .edu email. Great guys!

Absolutely, I got one without .edu email.

It's actually a discount. That way you can have a better account for $5 less each month. Although I should probably go turn that off having just graduated.

When I signed up around a year ago it wasn't a discount. It was a free micro account that lasts two years. I'm not sure if that's changed however.

Right. It is. But you can upgrade your account and it acts like a $5 discount (the micro plan is $5...)

If you want free Git hosting, you can DIY with Gitlab [1] if you want a web-based GUI like Github's.

If you prefer command-line usage, I'd recommend Gitolite [2]. It allows you to give people access to git and only git (as opposed to git's built-in system, which requires granting ssh shell access); and it only uses one OS-level user/group regardless of how many people it's managing.

Either of the above solutions are for your compsci professors who are clued-in enough to be comfortable with CLI in general and Git in particular.

If you're trying to give files to technically clueless humanities professors, I'd suggest only using Git privately, to develop your paper or whatever, then using a plain old email attachment, or hosting on an HTTP server, to submit the assignment. Or going really old-school by printing out an old-fashioned dead-tree hardcopy.

Of course, all of these solutions (except email attachments and printouts) require running your own server, which is actually a great learning experience. I'd recommend prgmr.com for hosting; their smallest plans should be able to fit even an undergrad's budget, and you have full root access to your (Xen VM) system, so you can do all kinds of fun and exotic experiments.

It's not necessary for basic usage, but you can install any version of any Linux distro, use LVM, even use a custom-compiled kernel or FreeBSD (the only requirement is guest Xen patches). It's great because if you have problems, they give you access to an ssh-based out-of-band console, rebooter, and rescue image so you can fix them yourself.

(By contrast, many other hosts require you to make changes through some half-baked web UI that lacks half the tools you need, require you to install only approved distros and only do OS upgrades on an approved schedule, and require you to file tickets with lengthy turnaround times and/or fees in order to do the most routine troubleshooting or maintenance tasks.)

Disclaimer: My only relationship with prgmr.com is that I've been their hosting customer for a long time (and very happy with them given the nonsense I've had to put up with from other hosts, in case you couldn't figure that part out from my above rant).

My only relationship with Gitolite is a project user. (I've created and maintained three small-scale Gitolite installations.)

I haven't used Gitlab, but I've heard good things about it.

[1] http://news.ycombinator.com/item?id=4957145

[2] https://github.com/sitaramc/gitolite

> Either of the above solutions are for your compsci professors who are clued-in enough to be comfortable with CLI in general and Git in particular.

I'm a visual design student, and most of my professors are clued in enough to know how to handle git. I'm lucky on that regard.

>Of course, all of these solutions (except email attachments and printouts) require running your own server, which is actually a great learning experience.

I do have my own VPS, Amazon AWS. I just dumped in an ubuntu server image, LAMP stack and python, and it's just vanilla unix after that—it's great. I'll check prgmr out though, thanks for the recommendation.

Gitlab looks good, but I almost exclusively use git CLI, so I don't need a fancy interface. I'll take a look at gitolite. Thanks!

How fitting that one of the links you've posted is unavailable at the moment due to the very downtime that caused this discussion! Evidence of our over-reliance on GitHub in the open source community, perhaps.

Here I took it as evidence of Github's generous-to-the-point-of-being-ridiculously-foolish attitude toward their customers: They'll even give free public Git hosting to products that directly compete with their core business at a lower price point (free)!

Even if you Do No Evil (R), some people will still complain about you.

That being said, Git hosting would be better for everyone if Github had a bigger competitor in their niche.

> They'll even give free public Git hosting to products that directly compete with their core business at a lower price point (free)!

unless you are providing paid accounts and expensive enterprise solutions with support, you are not competing with them in any way.

> unless you are providing paid accounts and expensive enterprise solutions with support, you are not competing with them in any way.

Yes, you are. If you would have bought their product (paid Git hosting), but you used somebody else's product instead (Gitolite), then that other product (Gitolite) is competing with Github for your business.

I agree that there is a subset of the market that (a) won't or can't figure out Git hosting on their own, or (b) decides that paying for a Github account will actually be less expensive. But I never said that Gitolite will ever replace Github.

BitBucket. It's essentially the same service as GitHub, but allows private repositories for free. And with an academic .edu email you get unlimited collaborators. Much simpler than running your own git server as others have suggested I think.

> BitBucket. It's essentially the same service as GitHub, but allows private repositories for free. And with an academic .edu email you get unlimited collaborators. Much simpler than running your own git server as others have suggested I think.

> So I've been using Bitbucket, which is fine and all, but I would have loved to be a part of Github community.

I already have Bitbucket edu plan, as I've said. It's pretty great, but it lacks community.

Ah, missed that - or it was an edit; either way. Yeah, there isn't the community because it's used for more private projects. GitHub has the community because it's focused towards open source and all the sharing and cooperation that OSS entails.

Yeah, I agree. One interesting thing is that their JIRA is really popular as an issue tracker to the point that more than a few organisations I've seen are using JIRA as a tracker and github:enterprise as repo. Github really needs to up their game on this part.

Odds are the orgs in question were using Jira long before they were using GitHub. Pretty sure it's Atlassian feeling the pressure here and not GitHub; given the existence of Stash.

I use both bitbucket.org (for unlimited private repos) and github (for public repos). You are not limited to using only one of them.

Github does have academic plans. When I signed up, it was 2 years of a free Micro plan (5 private repos) if you signed up with a verified school email address. When they're back up, check out: https://github.com/edu

You mean something like this? https://github.com/edu

GitHub do provide Edu accounts, I have one! I guess I went to github.com/edu or /education or something, but it gave me the form at /contact; you say what you're up to, where you go, and such, and they put a coupon on your account to give you a free micro plan.

Five dollars...

You can also commit locally, and only push after the deadline.

It should be pretty far down their list of priorities at this point, but I just noticed the "Exception Percentage" value at https://status.github.com/graphs/past_day is saying 483.704%. The fact that they're measuring this in percentages implies the maximum is 100%, but this isn't so.

I imagine it was a percent measured against successful responses. The exceptions now far exceed those successful responses. You are correct in saying it should be 100% but people have a habit of scaling beyond it.
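Under that assumption (counting exceptions against successes rather than against all requests), the arithmetic works out like this; the request counts below are invented to reproduce a ~483.7% reading.

```shell
# Hypothetical counts: exceptions measured against successes can exceed 100%.
exceptions=4837
successes=1000
awk -v e=$exceptions -v s=$successes 'BEGIN {
    printf "vs successes: %.1f%%\n", 100 * e / s        # can exceed 100%
    printf "vs total:     %.1f%%\n", 100 * e / (e + s)  # always <= 100%
}'
```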

If this is in fact how they're measuring...I was going to make this post a long rant about how ridiculous it is to measure it in that way, but then I realized that people do sometimes measure probabilities by quoting the win-to-loss or loss-to-win ratio.

This is called "odds" and frequently used in gambling. Usually, though, someone says "4.7 to 1" or "47 to 10" (abbreviated 4.7:1 or 47:10) instead of 470%. Usually the larger number is stated first, and the direction is usually indicated by a word like "favorite" or "longshot." So one would say "Errors seem to be a 4.7:1 favorite today."

It's slightly complicated by the fact that odds can measure one of several things:

A. A probability ratio ("Red" is a slight underdog in the game of roulette [1]; the odds against hitting it are 20:18 since there are 20 non-red spaces and 18 red spaces)

B. A payout ratio ("Red" pays 1:1, meaning the prize if you win this bet is equal to the amount of the bet)

C. The current payout of a parimutuel pool [2]

Odds are seldom used outside of a gambling context.

[1] http://en.wikipedia.org/wiki/Roulette

[2] http://en.wikipedia.org/wiki/Parimutuel_betting

I'm really curious how much longer github can offer so much free service. It's not just git but effectively free web hosting as well, at least for statically served pages.

It seems like it's only a matter of time before something will have to give. Either they'll have to start throttling web serving, or cover the site in ads like SourceForge, or something.

I guess I'd better sign up and start paying ASAP to help be part of the solution

Plenty of companies pay GitHub multiple thousands of dollars a year for their services. It’s not just personal accounts.

I don't think GitHub's hosting of free repositories costs them that much. As you said yourself, it is essentially just free hosting, so the largest cost will probably be storage (which is pretty cheap anyway now) and then the much smaller cost of push/pull/clone bandwidth and processing.

Think of it less as a free service and think of it more as marketing. I have a few free repos which I used to open source some of my personal modules; however, when it came time to select a source revision service for work, I remembered how easy GitHub was and went with their paid service. I am fairly sure that us now paying for our repositories offsets the cost of my free repositories, and quite a few others'.

Github is used by large enterprises.

What's funny is the extent to which Github is being used as a centralized repository for many projects. I don't just mean for project discovery; the issues and gists and other services aren't replicated as often or easily as the code.

In fact, a lot of services depend on github for various reasons, all of which are probably borked now ...

A little bit more info here:


What about a script/service to mirror between Bitbucket and Github (or others) through webhooks or the like?

Was just getting my hands on Homebrew after a fresh OS install when I hit the Octobummer :-/


    [remote "bitbucket"]
        url = git@bitbucket.org:you/repo.git
        fetch = +refs/heads/*:refs/remotes/bitbucket/*

    $ git push bitbucket

//edit after tinco's comment

    +refs/heads/*:refs/remotes/origin/* -> +refs/heads/*:refs/remotes/bitbucket/*

That snippet overrides your remotes/origin branches with what you pull from bitbucket.

This might not be what you want. Instead you probably should do:

  fetch = +refs/heads/*:refs/remotes/bitbucket/*
Unless you really know what you're doing and are just crazy like that :)
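A quick local demonstration of where that refspec puts things; all paths below are throwaway temp directories, and nothing here touches a real remote:

```shell
set -e
tmp="$(mktemp -d)"
# Stand-in for the bitbucket remote: a bare repo seeded with one commit.
git init --bare "$tmp/bitbucket.git" >/dev/null
git init "$tmp/seed" >/dev/null
cd "$tmp/seed"
git config user.email you@example.com
git config user.name you
echo hi > f && git add f && git commit -m seed >/dev/null
git push "$tmp/bitbucket.git" HEAD:refs/heads/master >/dev/null 2>&1

# A fresh repo using the refspec from the comment above:
git init "$tmp/work" >/dev/null
cd "$tmp/work"
git remote add bitbucket "$tmp/bitbucket.git"
git config remote.bitbucket.fetch '+refs/heads/*:refs/remotes/bitbucket/*'
git fetch bitbucket >/dev/null 2>&1

# Fetched branches land under remotes/bitbucket/*,
# leaving remotes/origin/* untouched:
git branch -r
```

With the broken refspec (mapping into `refs/remotes/origin/*`), that same fetch would instead clobber your origin tracking branches.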

I thought he meant a push-and-forget backup, so I didn't care. But yeah, you're right.

Ah, but I meant a global service for public repos, instead of each owner setting it up manually.

A remote can have multiple url entries. 'git push origin master' in my case pushes to Bitbucket, then GitHub. They must be added manually through 'git config -e', though. The order is important, as git pulls from the first remote URL while pushing to all of them.
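A minimal local sketch of that multi-URL setup; throwaway bare repos stand in for Bitbucket and GitHub, and `git remote set-url --add` is an alternative to editing the config by hand:

```shell
set -e
tmp="$(mktemp -d)"
# Two throwaway bare repos standing in for Bitbucket and GitHub.
git init --bare "$tmp/bitbucket.git" >/dev/null
git init --bare "$tmp/github.git" >/dev/null

git init "$tmp/work" >/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
echo hello > file && git add file && git commit -m initial >/dev/null

# One remote, two URLs: fetches use the first, pushes go to both.
git remote add origin "$tmp/bitbucket.git"
git remote set-url --add origin "$tmp/github.git"

git push origin HEAD >/dev/null 2>&1

# Both stand-in servers received the commit:
git --git-dir="$tmp/bitbucket.git" log --oneline
git --git-dir="$tmp/github.git" log --oneline
```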

I understood that and was aware of it.

What i mean was for a web service that would monitor changes on public repos (commits, tags, branches) and would sync them automatically (mapping accounts, pruning branches, etc).

What would happen if that service goes down?

:-) The original and mirror repos would still be available, even if the mirrors became outdated after the mirror service failed?

I know this is just speculative, and in fact I was joking, but to achieve that you either have to:

1) Configure a push hook on the master server, which you need access to.

2) Remove decentralized from DVCS, as your service becomes a new master which then mirrors.

3) Continuously poll (pull) the master server from your mirror service.

I can't seem to find any of these three options more desirable than simply adding a new remote, other than being automatic.

Yes :-) Plus the bandwidth and storage would become overkill over time.

You can push to both at the same time, no need for hooks.

It's easy enough to set up a backup repo so a team can keep collaborating when Github is down, but does anyone have a way to deploy Rails apps with Capistrano+bundler? It's terrible not being able to deploy; sometimes that can be really urgent.

With Cap I can just repoint config/deploy.rb to the backup git repo, but what about bundler?

To answer my own question:

Of course if you have private patches of gems with bundler's `:git` option, you'd need to repoint all of those, too (as well as keep them in two places).

It'd be awesome if there was some simpler way, maybe a separate declaration in your Gemfile so that any gems added to your project also get installed on a corporate git server, and then bundler uses that as a fallback if `source` doesn't work.
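For what it's worth, one possible stopgap, not bundler-specific: git's `url.<base>.insteadOf` rewriting applies to every git command, and bundler shells out to git for `:git` sources, so a single config entry can repoint them all. The mirror hostname below is a placeholder for whatever backup host you actually have:

```shell
# Rewrite all github.com SSH URLs to a mirror, at the git level.
# (git@mirror.example.com: is a placeholder for your own backup host.)
git config --global url."git@mirror.example.com:".insteadOf "git@github.com:"

# Check that the rewrite is registered:
git config --global --get-regexp '^url\.'

# Undo once GitHub is back:
git config --global --remove-section 'url.git@mirror.example.com:'
```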

Solution is to self-host your critical files. If you have SSH already on a server (ha!) it is pretty easy:
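Since the snippet didn't make it into the comment, here's a sketch of the idea, using a local path where the SSH URL would go (server names and paths in the comments are placeholders):

```shell
set -e
tmp="$(mktemp -d)"
# The whole "server side" of self-hosted git is just a bare repo.
# Over SSH this line would be:
#   ssh user@yourserver 'git init --bare repos/myproject.git'
git init --bare "$tmp/myproject.git" >/dev/null

# Client side: add it as a backup remote and push everything.
git init "$tmp/work" >/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
echo code > app.rb && git add app.rb && git commit -m 'first commit' >/dev/null

# Over SSH the URL would be user@yourserver:repos/myproject.git
git remote add backup "$tmp/myproject.git"
git push backup --all >/dev/null 2>&1
```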


Good. I was starting to miss "Github is down" submissions. Quick, let's look for alternatives.

I feel like such an uber goober! I was installing a package that relied on a GitHub file, which for obvious reasons failed... little did I realize that that was the problem; all I saw was such-and-such Python error traceback... doh!

Most developers in my country use SVN. I don't use SVN at all, and I'm failing to convince my partner that Git is better than SVN, if only because I don't trust hardware to be stable.

What does hardware stability have to do with it?

I mean the hard disk or mainboard could fail at any time. The distributed model seems more resilient.

use git-svn.

I never forget to pull before going remote, but today I did, and the one time I actually need GitHub to be up, it's not. But I can't be upset even if this wasn't my fault; it's GitHub.

Interesting: Github Pages is still up (got a blog on there and it doesn't appear to have experienced any outages). Static site generators for the win.

Pages was out briefly a couple of hours ago (certainly for some repos, anyway).

Seems to be fine now though.

I guess Murphy is strong on this one, I was just right in the middle of bringing up new server (to host Rails app). Oh well, I'll go to bed early :D

Can't read the clojure 1.5 RC1 release notes. Damn.

Looks like services implemented in terms of github are not reliable (but there was no guarantee of that anyway).

It's back now.

GitHub outage is now over.

I'm curious what actually went wrong.

Time to fork github. Bad bad sign that they can't get their operations in order. I guess $100 Million can't buy you uptime...

That's a really nice status dashboard; tracking and publicly displaying your 98th percentile is really cool. On the other hand, stuff like this:

"13:17 UTC We are seeing unicorns ..."

Comes off as un-professional at exactly the wrong moment.

Can't disagree more.

Gives me the impression there are actually real people trying to fix things, not just blank-faced robo admins following company policy HOWTO-Fix guides.

That github isn't a blank sheet devoid of life and emotion and all 'professional' is the reason I'm willing to shrug off downtime like this.

If anything, I really feel for whoever posted:

   We do not expect this to have visible impact to 
   customers, but will update status if that changes.
Ouch. You never ever want to follow that up with a post in red.

Hopefully it means that they thoroughly thought this through before planning the maintenance, and through their due diligence, they concluded that customers shouldn't see any problems, and thus decided to go forward.

Just shows people aren't perfect. Haven't we all been there?

(I do agree though that it does feel like a slap in the face to say: "Hey there shouldn't be any problems! Never mind, we were wrong...")

Come on, really?

Did you ever have a piece of code that should be impossible to reach under normal conditions? Did you put a message there to remind yourself that the case in question should never, ever occur?

That could be a unicorn.

For extra credit, if you have put these messages in your code, have you seen all of them at one time or another? I certainly have!

Github switched to the Unicorn server (http://unicorn.bogomips.org) some time ago. Sorry if you don't like the name but, uhm well, it's the name of the server they use. Anyway: error pages have featured an "angry unicorn" ever since, so when they say "we are seeing unicorns", most users know that this translates to errors.

You say most users will know what it means. Do the rest not deserve a readable status page?

How is that un-professional? Unicorns mean errors for github, and they report them.

This is more professional reporting than most companies do.

> Unicorns mean errors for github

I have no idea what this means, but I can see how the idea of coming up with an alternate language, in which fantasy creatures represent things we already have established words for, might seem unprofessional.

The term unicorn actually comes from the fact that they run the 'unicorn' web server. When a web application error occurs, unicorn catches it and displays an error to the visitor. They prettied that error up with an image of a unicorn and a short explanation that something went wrong.

So when they say 'we saw unicorns', they literally saw (reports of people seeing) pictures of unicorns. Now, I understand that might confuse someone who does not regularly use GitHub, but it is just a term in their jargon, and I see no reason why they should market-speak it up in an intermediate status message. (Note that these status messages are not press releases but reports that engineers make during the discovery process.)

There's a picture of a unicorn on their error page. Via natural metonymy, seeing an increase in unicorns means they're seeing an increase in errors.

It's the exact same linguistic process that lets Americans refer to the executive branch of government as "the White House". I'd hardly call it unprofessional.

Quite literally, the unhandled-error page on GitHub shows a unicorn, as seen in this photo[1]. It appears they have put up an octocat "we are down" temporary static page right now, though...

[1]: http://www.flickr.com/photos/beatak/4008687328/

Additionally, they could be referring to their use of `Unicorn: Rack HTTP server for fast clients and Unix`.


I'd disagree. Having an easily identified image shown when a particular error is thrown can be really helpful when diagnosing problems, particularly when a customer / user is trying to communicate the problem with the site's support team. It's also quirky and friendly.

Unicorns are also supposed to be rare, mythical beasts, which I'd say is exactly the kind of imagery you want representing your server errors.

Other posters have noted where the terminology comes from.

But you should really learn to take the meaning of an unknown term from context if you don't know what it means.

As for the accusations of unprofessionalism -- I'll take vivid metaphor [1] like "seeing unicorns" over a dull monotone description like "our site is experiencing server errors."

[1] As other replies have pointed out, in this case it's not a metaphor; people are (or were, as the case may be) literally staring at pictures of unicorns.

Maybe it's just a matter of preference, but for me "our site is experiencing server errors" is much clearer communication than "seeing unicorns." The latter requires me to learn new terminology in a high-pressure environment for no good reason.

And if "seeing unicorns" were a code word for a specific kind of error, then again I would rather have more descriptive terminology, and not one that I might mistake for an inside joke.

You are right. Some users are happy to see quirky behavior that serves as an "in joke" for them, but any who failed to understand will not find any comfort during a site emergency.

Status messages should be communicated with no unnecessary ambiguity both internally (where unicorns may be the clearest term) and externally (where it is not).

Lack of clarity during periods of heavy human workload is a common cause of e.g. plane crashes. Operations people of the internet would do well to learn from mistakes made before the cloud was a thing.

I'm not a Ruby programmer. I've never used Unicorn. I had only vaguely heard of it. I found it completely obvious in context.

If you get bent out of shape over some pretty mild slang, there are loads of companies out there who'll take your money for as staid a response as you'd like.

>May be just a matter of preference but for me "our site is experiencing server errors" is much clearer communication than "seeing unicorns." The latter requires me to learn a new terminology in a high pressure environment for no good reason.

If you don't already know what "unicorns" means in this context, then you are not a GitHub user, and thus it being down is not a "high pressure environment".

Likewise - I haven't the slightest idea what "unicorns" are in this context.

The github error page is an angry unicorn, similar to Ars Technica's moonshark or Twitter's failwhale.

Unicorn is their web server for Rails. http://unicorn.bogomips.org/

Unicorns are the illustration for server error pages.

>Comes off as un-professional at exactly the wrong moment.

No, adding a touch of humor is highly professional, not to mention good marketing.

Remember, they are not marketing to enterprise managers, rural Utah programmers, or corporate '80s programmers in suits, but to today's and tomorrow's "hip" programmers.

Now, some of today's programmers are arguably square and adhere to BS '50s professionalism ideals (where professional = boring, somber, and well groomed), but most would take the unicorns over any kind of "professional" copy text.

And this is why I use Google Code: supports git and is more reliable than Github.

It's a shame that they don't offer a paid service for closed-source software.

Where do you live, under a rock? Of course they offer paid service for closed-source software.

I'm assuming by "they" he was referring to Google Code. (Wishing that Google Code offered a paid service.)

Indeed. Not only do they offer a paid service and private repos, but they offer a paid self-host version as well.

Were there any issues with GitHub prior to this? I don't recall any AWS-like pattern, but I might have just missed it.

Github is awesome, but yeah, they've been having lots of outages for a while now. A deploy based on a git fetch && git reset from Github might not be a good thing.

A few. They lost a server about 10 days ago, which took down a few repos, including some gists. All in all, it's been very stable.
