You therefore can't just say "oh yeah, git is distributed, so I don't care if GitHub is offline"; it makes about as much sense as saying "I don't care that Gmail is offline, as email is distributed: I have all my messages locally in Outlook and can send mail through my ISP" while ignoring that other people now can't send you mail and you have lost access to your calendar entirely.
Web servers can become unavailable. GitHub has a great track record, and because git is distributed, people can still commit their code locally.
With an SVN workflow, you would either have to hold off on committing (which is costly) or commit multiple changes in one go.
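To illustrate the difference (a minimal sketch; the repo, file names, and messages are all made up), git commits need no server at all, so you can keep making small, well-scoped commits during an outage:

```shell
# Sketch: local commits with no network in sight (hypothetical repo/content).
set -e
work=$(mktemp -d) && cd "$work"
git init -q myproject && cd myproject
git config user.email dev@example.org && git config user.name Dev
echo "fix one" > app.txt
git add app.txt && git commit -qm "First small fix, committed locally"
echo "fix two" >> app.txt
git commit -qam "Second fix, still no server involved"
git log --oneline            # both commits exist before any push happens
```

With SVN, by contrast, `svn commit` talks to the central server, so both of those commits would have had to wait (or be squashed into one).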
Also, the email example cuts the other way: it IS a good thing you can access stuff locally.
By that logic, you're basically saying that Dropbox is useless.
In fact, "git hosting" is one of the least interesting services GitHub provides for the specific reason that, yes, git is so ludicrously decentralized: it seriously takes less time to push to your own git repository than it takes to log in to your GitHub account.
(Really, the only thing it makes simpler, with regards to actual git hosting, is managing user credentials for a centralized Subversion-like repository; however, if you actually think of git as a decentralized system you don't do that in the first place.)
I will therefore attempt to make this point even clearer by taking my example about Gmail (where you totally ignored the Calendar comment) and ratcheting it even further: if all of Google went offline tomorrow, my ability to still send mail using Outlook is great, but it doesn't help me watch YouTube videos or, you know, search the web.
Likewise, when GitHub has an "outage across all services", it affects all kinds of things developers do that have nothing at all to do with "hosted git repository". To repeat one example: a lot of developers are crippled if they don't have access to an issue tracker to tell them what they need to work on next.
So, just because your workflow doesn't rely on anything GitHub provides beyond git hosting doesn't mean the same is true for most of the people who use GitHub: GitHub is a popular set of project-management services bundled together in a way that is easy for people to get started with, not just a git hosting company.
I had a clone of my personal fork of my department's github repository at home. I read this and thought "that's interesting, I wonder if I could still work on it from home if I want to." I saw with "git config -l" that I had my github repository and my university's github repository, but not my university's mirror of the repository. I did a "git remote add", logged in to my university's VPN, "git fetch", and was up and running with my latest changes from Friday.
If that didn't make any sense, well, the gist of it is that the outage wouldn't affect development work in the slightest, thanks to the fact that every repository is a) content-addressable, so you get consistency between servers, and b) self-contained, so you have a complete copy everywhere. Hit me up for more info if you're curious; I recently upgraded us from CVS to Git for this redundancy advantage, among others.
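The "add a remote, fetch, keep working" dance described above can be sketched like this (local bare repositories stand in for the two hosting locations; every name here is made up):

```shell
set -e
base=$(mktemp -d)
git init -q --bare "$base/github.git"    # stands in for the GitHub remote
git init -q --bare "$base/mirror.git"    # stands in for the second mirror
git clone -q "$base/github.git" "$base/clone"
cd "$base/clone"
git config user.email dev@example.org && git config user.name Dev
echo work > file.txt && git add file.txt && git commit -qm "Friday's changes"
git push -q origin HEAD                  # seed the "GitHub" remote
git push -q "$base/mirror.git" HEAD      # ...and the mirror
# Now pretend github.git is unreachable: add the mirror and fetch from it.
git remote add mirror "$base/mirror.git"
git fetch -q mirror                      # full history available from remote #2
git remote -v
```

Because every clone is self-contained, either remote can reconstruct the other; the remotes are interchangeable.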
(Though I do wish GitHub Issues weren't as centralized...)
In short, if you have a project's Fossil repo, you can work on everything locally, then push source, wiki, and issue tracker changes to a central repo at your leisure.
GitHub Issues aren't, however.
$ fossil ui
You can read more about it here: http://www.fossil-scm.org/index.html/doc/trunk/www/bugtheory...
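For flavor, the whole offline loop looks something like this (a sketch of the workflow the Fossil docs describe; the central URL and commit message are made up):

```
$ fossil clone https://central.example.org/project project.fossil
$ mkdir project && cd project
$ fossil open ../project.fossil
$ fossil commit -m "worked offline: code + ticket edits"   # recorded locally
$ fossil sync    # later: push source, wiki, and ticket changes upstream
$ fossil ui      # local web UI with timeline, wiki, and bug tracker
```

Everything, including the bug tracker, lives in the single `project.fossil` SQLite file, which is what makes the issue tracker as distributed as the code.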
You don't need github to be online to work with your code or even your repositories so long as you haven't gone and thrown all your digital eggs in the same basket (i.e. as long as you have other remotes or at least local copies of your repositories).
Unless the Mayans were off by one
Well, no, obviously it's a feature of Distributed Version Control Systems in general.
In fact, that's probably why OP wrote "This is why I love git (and distributed version control systems in general)" and didn't in any way claim it was a feature of GitHub?
And do you have some kind of reference for git being less memory-efficient (are you talking RAM-wise?) vs Mercurial or TFS? On its face it's pretty hard to believe; git is pretty light-weight.
I'm certainly not slamming Git for working exactly the way it was intended.
As an example, when you clone a repository, git downloads a packfile and then goes through a process of 'Resolving deltas'. That is git reconstructing full objects from the deltas stored in the pack, so that everything is up to the most recent revision.
Only objects that don't delta-compress well (typically large binaries) end up stored in full inside the pack.
Some reading: http://git-scm.com/book/en/Git-Internals-Packfiles :)
So, git can be quite heavy on disk usage, until all the loose objects are packed.
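You can watch the loose-object/packfile transition happen yourself (a sketch; any throwaway repo with a few commits will do):

```shell
set -e
base=$(mktemp -d) && cd "$base"
git init -q demo && cd demo
git config user.email dev@example.org && git config user.name Dev
for i in 1 2 3; do echo "revision $i" > file.txt; git add file.txt; git commit -qm "rev $i"; done
git count-objects -v    # loose objects: each blob is a full (zlib-compressed) snapshot
git gc --quiet          # repack: git delta-compresses related objects into a packfile
git count-objects -v    # loose count drops; 'packs' and 'size-pack' appear instead
```

Disk usage before `git gc` is the "heavy" state the comment above describes; after repacking, the history usually shrinks dramatically.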
Also, maybe don't confuse git with GitHub, as your original comment did.
The only SCM I like less than TFS is SourceDepot. The bane of my existence. It's sad when I can rant about it abstractly and my father can sympathize.
I first attempted to redo the README for a service I've just open-sourced, before realising Github was down.
Then, I attempted to fix the company CI server (OOM errors because of Carrierwave needing more than 500MB of memory to run 1 spec, for some unknown reason), which failed because it couldn't check out the code.
After giving up on that, I attempted to install Graphite to a company server, where I hit another roadblock because the downloads are hosted on Github, and so I had to use Launchpad, which I had an allergic reaction to.
Also, when I was shelling into the server, oh-my-zsh failed to update because, you guessed it, Github was down.
Still, shouts to the ops team in the trenches, we're rooting for you.
I know that in theory a cloud solution should have higher uptime than an amateurishly set-up private server, but cloud solutions have a certain complexity and interconnectedness that make them very vulnerable to these kinds of 'full failures' where nothing is in your control.
Maybe you should take this time to learn from this, and analyze what you could do to reduce the impact of this failure. For example, you could research what it would take for your company to move to another Git provider, perhaps even on your own server or a VM slice at some cloud provider.
I'm not saying you should drop github, because obviously they have great service, but be realistic about cloud service.
Cloud service is like RAID: it is not a backup.
Just as RAID is nice for recovering from errors without downtime, yet there's still a chance something bigger happens and you lose your data, the cloud is nice for offering scalability and availability, but there's a chance everything goes down and you still can't run your operations.
* If I wanted the build server to use our internal git mirror, I would configure it to point there, but I want it to build off of Github webhooks.
* The "eggs in the basket" analogy is probably best saved for a situation where I'm dependent on a cloud service, such as Twilio.
* I would expect an amateurish private server to have better uptime than a monolithic service such as Github because there are a lot fewer moving parts, and in our case, far fewer people doing things with it.
* The "company impact" of Github going down is next to nil. It's one o'clock in the morning on a Sunday, I'm eating string cheese, and I feel like being productive. The company is not paying me for this, and it has encountered zero losses from it. We have a very simple mirror which we can use to push code to production if Github goes down, which we have never actually had to use.
Personally, I find the "keep a torch in every corner of every room because for 15 minutes last year the power was out" attitude to life a bit overrated; you're planning for an edge case. I'd much rather remember where the torch is and learn to walk in the dark.
Outages happen - as long as we're talking hours a year, pretty much everyone but life-safety systems, critical infrastructure, and payment/high-traffic commerce sites is probably better off just letting third-party cloud vendors manage their systems. Take the downtime and relax.
(Now, if downtime consistently gets into the 10s of hours/year, it's time to look for a new cloud provider.)
* Create an unprivileged account & set a password that you don't need to remember -> sudo adduser git
* Add your public key from your laptop to the unprivileged user's authorized_keys file -> sudo su git; cd ~; mkdir .ssh; vim .ssh/authorized_keys - then copy and paste your id_rsa.pub into that file (and chmod 700 .ssh; chmod 600 .ssh/authorized_keys, since sshd is picky about permissions)
* Repeat that for all public keys on your engineering team
* In git's home directory, git init --bare <name of repo>.git
* On your local machine, git remote add doomsday git@<DOOMSDAY SERVER HOSTNAME>:<NAME OF REPO FOLDER>.git
* git push doomsday --all
* On colleague's box, git clone git@<DOOMSDAY SERVER HOSTNAME>:<NAME OF REPO>.git
Let me know if there is a better way of doing this, or if it's monumentally screwed somehow.
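For what it's worth, the steps above can be dry-run locally before touching a real server (a sketch: a local bare repo stands in for the doomsday box, and all paths/names are made up; over the network you'd use git@HOST:repo.git instead of a filesystem path):

```shell
set -e
base=$(mktemp -d)
git init -q --bare "$base/doomsday/myrepo.git"    # on the server: git init --bare
git init -q "$base/laptop" && cd "$base/laptop"   # your existing working repo
git config user.email dev@example.org && git config user.name Dev
echo code > app.txt && git add app.txt && git commit -qm "work"
git remote add doomsday "$base/doomsday/myrepo.git"
git push -q doomsday --all                        # mirror every branch
git clone -q "$base/doomsday/myrepo.git" "$base/colleague"   # colleague's box
```

Same flow, same commands; the only change for the real thing is swapping the path for the ssh URL.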
A tape / lto backup system doesn't cost the world. Yes, it introduces overhead and maintenance, but I'd rather be safe than sorry.
At my place of work we currently use a lot of virtual servers, hosted by external providers. We use their backup and snapshot mechanisms. But we also pull all data to our local backup server. From there we back up to tape on a daily basis.
1% downtime is over three days a year. They've had some big outages, but I think this may be their longest of the year and it was < 6 hours. They could have one of those every month and still only have 72 hours of downtime, which is 99.18% uptime.
Thanks to git, every developer usually has a complete backup of the repository plus all of its history.
I'm sure we will all learn a bunch from the post-mortem. These high-profile and very openly discussed failures are always good for learning all kinds of things.
No issues at all on how GitHub is handling it so far. Eager to learn what happened. Hitting refresh on the status page every so often. Better than watching the underwater basket-weaving competition at the Olympics.
-- Remote machine, or SSH port forwarded machine --
> adduser git (you either add each pusher/puller's ssh pubkey to its
authorized_keys, or use a shared password)
> su git
> cd ~
> git init --bare myproject.git
-- Your local repo --
> git remote add <name> git@aforementionedmachine:myproject.git
> git push <name>
I've been doing private DVCS for years (mercurial) but this is my first project that's on git and I've been looking forward to the opportunity to host it on github and see what I've been missing.
I can't get the link now since GitHub is down ha, but when it's back up, just have a quick google and you'll find a quick form to fill in and voila. It's great.
If you prefer command-line usage, I'd recommend Gitolite. It allows you to give people access to git and only git (as opposed to git's built-in system, which requires granting ssh shell access); and it only uses one OS-level user/group regardless of how many people it's managing.
Either of the above solutions is for your compsci professors who are clued-in enough to be comfortable with the CLI in general and Git in particular.
If you're trying to give files to technically clueless humanities professors, I'd suggest only using Git privately, to develop your paper or whatever, then using a plain old email attachment, or hosting on an HTTP server, to submit the assignment. Or going really old-school by printing out an old-fashioned dead-tree hardcopy.
Of course, all of these solutions (except email attachments and printouts) require running your own server, which is actually a great learning experience. I'd recommend prgmr.com for hosting; their smallest plans should be able to fit even an undergrad's budget, and you have full root access to your (Xen VM) system, so you can do all kinds of fun and exotic experiments. It's not necessary for basic usage, but you can install any version of any Linux distro, use LVM, even use a custom-compiled kernel or FreeBSD (the only requirement is guest Xen patches). It's great because if you have problems, they give you access to an ssh-based out-of-band console, rebooter, and rescue image so you can fix them yourself. (By contrast, many other hosts require you to make changes through some half-baked web UI that lacks half the tools you need, require you to install only approved distros and only do OS upgrades on an approved schedule, and require you to file tickets with lengthy turnaround times and/or fees in order to do the most routine troubleshooting or maintenance tasks.)
Disclaimer: My only relationship with prgmr.com is that I've been their hosting customer for a long time (and very happy with them given the nonsense I've had to put up with from other hosts, in case you couldn't figure that part out from my above rant).
My only relationship with Gitolite is as a project user. (I've created and maintained three small-scale Gitolite installations.)
I haven't used Gitlab, but I've heard good things about it.
I'm a visual design student, and most of my professors are clued in enough to know how to handle git. I'm lucky on that regard.
>Of course, all of these solutions (except email attachments and printouts) require running your own server, which is actually a great learning experience.
I do have my own VPS, Amazon AWS. I just dumped in an ubuntu server image, LAMP stack and python, and it's just vanilla unix after that—it's great. I'll check prgmr out though, thanks for the recommendation.
Gitlab looks good, but I almost exclusively use git CLI, so I don't need a fancy interface. I'll take a look at gitolite. Thanks!
Even if you Do No Evil (R), some people will still complain about you.
That being said, Git hosting would be better for everyone if Github had a bigger competitor in their niche.
Unless you are providing paid accounts and expensive enterprise solutions with support, you are not competing with them in any way.
Yes, you are. If you would have bought their product (paid Git hosting), but you used somebody else's product instead (Gitolite), then that other product (Gitolite) is competing with Github for your business.
I agree that there is a subset of the market that (a) won't or can't figure out Git hosting on their own, or (b) decides that paying for a Github account will actually be less expensive. But I never said that Gitolite will ever replace Github.
> So I've been using Bitbucket, which is fine and all, but I would have loved to be a part of Github community.
I already have Bitbucket edu plan, as I've said. It's pretty great, but it lacks community.
You can also commit locally, and only push after the deadline.
This is called "odds" and is frequently used in gambling. Usually, though, someone says "4.7 to 1" or "47 to 10" (abbreviated 4.7:1 or 47:10) instead of 470%. Usually the larger number is stated first, and the direction is usually indicated by a word like "favorite" or "longshot." So one would say "Errors seem to be a 4.7:1 favorite today."
It's slightly complicated by the fact that odds can measure one of several things:
A. A probability ratio ("Red" is a slight underdog in the game of roulette; the odds against hitting it are 20:18, since there are 20 non-red spaces and 18 red spaces)
B. A payout ratio ("Red" pays 1:1, meaning the prize if you win this bet is equal to the amount of the bet)
C. The current payout of a parimutuel pool
Odds are seldom used outside of a gambling context.
It seems like it's only a matter of time before something will have to give. Either they'll have to start throttling web serving or cover the site in ads like sourceforge or something
I guess I'd better sign up and start paying ASAP to help be part of the solution
Think of it less as a free service and more as marketing. I have a few free repos which I used to open-source some of my personal modules; however, when it came time to select a source-control service for work, I remembered how easy GitHub was and went with their paid service. I am fairly sure that us now paying for our repositories offsets the cost of my free repositories and quite a few others.
In fact, a lot of services depend on github for various reasons, all of which are probably borked now ...
Was just getting my hands on Homebrew after a fresh OS install when i hit the Octobummer :-/
url = firstname.lastname@example.org:you/repo.git
fetch = +refs/heads/*:refs/remotes/bitbucket/*
//edit after tinco's comment
+refs/heads/*:refs/remotes/origin/* -> +refs/heads/*:refs/remotes/bitbucket/*
This might not be what you want. Instead you probably should do:
fetch = +refs/heads/*:refs/remotes/bitbucket/*
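If hand-editing .git/config makes you nervous, the same end state can be reached from the command line (the bitbucket URL here is a placeholder; note that `git remote add` actually sets this fetch refspec by default, so the explicit `git config` line is shown only to match the snippet above):

```shell
set -e
dir=$(mktemp -d) && cd "$dir" && git init -q repo && cd repo
# Hypothetical URL; substitute your real bitbucket remote.
git remote add bitbucket git@bitbucket.org:you/repo.git
# Fetch remote branches into refs/remotes/bitbucket/*, not origin's namespace:
git config remote.bitbucket.fetch '+refs/heads/*:refs/remotes/bitbucket/*'
git config --get remote.bitbucket.fetch    # verify the refspec landed
```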
What I meant was a web service that would monitor changes on public repos (commits, tags, branches) and would sync them automatically (mapping accounts, pruning branches, etc).
1) Configure a push hook on the master server, which you need access to.
2) Remove decentralized from DVCS, as your service becomes a new master which then mirrors.
3) Continuously poll (pull) the master server from your mirror service.
I can't seem to find any of these three options more desirable than simply adding a new remote, other than being automatic.
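Option 3 (polling) is less machinery than it sounds; a `--mirror` clone plus a periodic `git remote update` is essentially the whole service. A sketch with local paths standing in for the master server (all names made up):

```shell
set -e
base=$(mktemp -d)
git init -q "$base/upstream" && cd "$base/upstream"   # the "master" server
git config user.email dev@example.org && git config user.name Dev
echo v1 > f.txt && git add f.txt && git commit -qm v1
git clone -q --mirror "$base/upstream" "$base/mirror.git"   # one-time setup
echo v2 >> f.txt && git commit -qam v2                      # upstream moves on
git -C "$base/mirror.git" remote update --prune >/dev/null  # what cron would run
git -C "$base/mirror.git" log --oneline -1                  # mirror now has v2
```

The `--mirror` clone fetches with the refspec `+refs/*:refs/*`, so branches, tags, and deletions all propagate; that's the "pruning branches" part for free.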
With Cap I can just repoint config/deploy.rb to the backup git repo, but what about bundler?
It'd be awesome if there was some simpler way, maybe a separate declaration in your Gemfile so that any gems added to your project also get installed on a corporate git server, and then bundler uses that as a fallback if `source` doesn't work.
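One possibility, assuming a Bundler version with source-mirror support (this is an assumption; check `bundle config --help` on your version to confirm it covers git sources): Bundler's mirror setting can redirect a source URL to a mirror without touching the Gemfile at all.

```
# Hypothetical URLs: redirect anything Bundler would fetch from
# github.com/mycorp to an internal mirror instead.
$ bundle config mirror.https://github.com/mycorp https://git.internal.example/mycorp
```

That gets close to the "fallback if `source` doesn't work" behavior, though it's a static redirect rather than a true fallback.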
Seems to be fine now though.
Looks like services implemented in terms of github are not reliable (but there was no guarantee of that anyway).
"13:17 UTC We are seeing unicorns ..."
Comes off as unprofessional at exactly the wrong moment.
Gives me the impression there are actually real people trying to fix things, not just blank-faced robo admins following company policy HOWTO-Fix guides.
That github isn't a blank sheet devoid of life and emotion and all 'professional' is the reason I'm willing to shrug off downtime like this.
If anything, I really feel for whoever posted:
We do not expect this to have visible impact to
customers, but will update status if that changes.
Just shows people aren't perfect. Haven't we all been there?
(I do agree though that it does feel like a slap in the face to say: "Hey there shouldn't be any problems! Never mind, we were wrong...")
Did you ever have a piece of code that should be impossible to reach under normal conditions? Did you put a message there to remind yourself that the case in question should never, ever occur?
That could be a unicorn.
For extra credit, if you have put these messages in your code, have you seen all of them at one time or another? I certainly have!
This is more professional reporting than most companies do.
I have no idea what this means, but I can see how the idea of coming up with an alternate language, in which fantasy creatures represent things we already have established words for, might seem unprofessional.
So when they say 'we saw unicorns' they literally saw (reports of people seeing) pictures of unicorns. Now, I understand that might confuse someone who does not regularly use github, but it is just a term in their jargon, and I see no reason why they should market-speak that up in an intermediate status message. (Note that these status messages are not press releases but reports that engineers make during the discovery process.)
It's the exact same linguistic process that lets Americans refer to the executive branch of government as "the white house". I'd hardly call it unprofessional.
Unicorns are also supposed to be these rare, mythical beasts which I'd say is exactly the kind of imagery you want representing your server errors.
But you should really learn to take the meaning of an unknown term from context if you don't know what it means.
As for the accusations of unprofessionalism -- I'll take a vivid metaphor like "seeing unicorns" over a dull monotone description like "our site is experiencing server errors."
As other replies have pointed out, in this case it's not a metaphor; people are (or were, as the case may be) literally staring at pictures of unicorns.
And if "seeing unicorns" was a code word for a specific kind of errors, then again I would rather have a more descriptive terminology and not one that I may confuse for an inside joke.
Status messages should be communicated with no unnecessary ambiguity both internally (where unicorns may be the clearest term) and externally (where it is not).
Lack of clarity during periods of heavy human workload is a common cause of e.g. plane crashes. Operations people of the internet would do well to learn from mistakes made before the cloud was a thing.
If you get bent out of shape over some pretty mild slang, there are loads of companies out there who'll take your money for as staid a response as you'd like.
If you don't already know what "unicorns" means in this context, then you are not a GitHub user, and thus it being down is not a "high pressure environment".
No, adding a touch of humor is highly professional, not to mention good marketing.
Remember, they are not marketing to enterprise managers, rural Utah programmers, or corporate 80's programmers in suits, but to today's and tomorrow's "hip" programmers.
Now, some of today's programmers are arguably square and adhere to BS 50's professionalism ideals (where professional = boring, somber, and well groomed), but most would take the unicorns over any kind of "professional" copy text.
It's a shame that they don't offer a paid service for closed-source software.