
Back when Hurricane Sandy took out my employer's upstream Mercurial server for several days straight, I was pretty chuffed to be able to tell my boss, "You know how we migrated to a new source control system a couple months ago? Well, thanks to that, we can keep working pretty much uninterrupted. We should double-check that XXX is getting frequent offsite backups, though."

My current company uses a self-hosted option, so this doesn't affect me. But I can't help but think that this time it's different, and we'd be hosed. The git part would still work without too much hassle, but we are heavily dependent on a bunch of additional things that GitHub offers, such as the pull request interface. That's slightly worrisome, I suppose.
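To be concrete about why the git part keeps working: every clone carries the full history, so committing, branching, and reading the log are all local operations. A minimal sketch (the paths and author identity here are illustrative, not from any real setup):

```shell
# Illustrative throwaway repo under /tmp; names are hypothetical.
mkdir -p /tmp/offline-demo && cd /tmp/offline-demo
git init -q repo && cd repo

# Committing needs no network at all -- history lives locally in .git.
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "work continues during the outage"

git log --oneline   # reads local history, no server involved

# Only syncing touches the remote, so this is the one step that waits:
# git push origin main
```

Pushing, pulling, and anything layered on top of the host (pull requests, issues, CI hooks) are what actually stall during an outage.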

All that said, I want to steer clear of knee-jerk assuming, "We don't have this problem b/c we self-host." There's a sickening sense of not really being in control of your own fate when a cloud provider goes down, but, realistically, I wouldn't be the person in charge of getting one of our self-hosted services back up, either. What really matters is % downtime. My experience has been that, compared to many in-house IT departments, folks like GitHub are generally very good at keeping the lights on.




While I don't necessarily agree with it, the argument is that with a self-hosted solution, you have more control over when risky maintenance happens.

So if you have a really busy time coming up, you don't roll out a Git upgrade that day; you wait until you'd be okay with some downtime.



