GitHub was down (githubstatus.com)
201 points by benbruscella on July 13, 2020 | 128 comments



This is a nice reminder that you can have multiple remote repos and push/pull to all of them at the same time. For my side projects I usually use both github and google cloud source (I use gcp). If one is down the other is still available and then just resync when service is recovered.
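A minimal sketch of that setup (remote names and URLs here are just placeholders):

    # register both hosts as named remotes
    git remote add github git@github.com:user/project.git
    git remote add gcp https://source.developers.google.com/p/my-project/r/project
    # push the same branch to each of them
    git push github master && git push gcp master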


Additionally, I have a machine on my local network that I push to (and that is backed up separately).

Really quick pushes, too :-)


Nice, good tip, I will set that up for my local projects. Not using GitHub for those, but it never hurts to have redundant Git servers.



Yes, push/pull all.


Do you have something set up for the remotes to sync with each other?


No, I do not. Usually it's easy though, since whatever reconciliation you did in one repo you can do again in the other. The only difficulty is if you rewrite remote history, so don't do that!


I know GitHub is down because I'm trying to update a kops cluster, and the outage breaks it:

    > kops update cluster
    error reading channel "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": unexpected response code "500 Internal Server Error" for "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": 500: Internal Server Error


See, I like the idea of K8S quite a lot. Enough that I decided to set up a cluster from scratch to figure out how everything ties together.

I've since grown to strongly dislike how much of the entire ecosystem seems to depend on random unversioned Github gists or direct links into a raw file in a repo somewhere.


NPM is the same. You might think that all of the packages are stored on NPM servers. Mostly true, but still when you install some of them they'll download precompiled binaries or whatever directly from Github.


If you have IPv6, remove IPv4 to see how many tools break, because GitHub does not support IPv6.

Composer is useless on native IPv6-only servers.


This is actually a testament to the fact that GitHub used to be that damn reliable at one point.


Eh, I'd love to see numbers on this but I don't think this is true. I remember stuff like homebrew breaking even in the beginning of the project because of the dependency on github to pull down formulas and stuff.


Some major tools, like Python's `poetry`, have an officially recommended installation like:

    curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
Guess why our python docker images stopped building today?
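One small mitigation (the tag and path below are just examples) is to vendor a pinned copy of the installer, so at least the script itself isn't fetched from master at build time:

    # fetch once, from a pinned tag rather than master, and keep the copy in your own repo
    curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/1.0.9/get-poetry.py -o vendor/get-poetry.py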


This is slowly becoming a weekly occurrence ever since Microsoft entered the picture...

https://www.githubstatus.com/


I remember it being a weekly occurrence before Microsoft as well.

edit: though their historical data doesn't back it up. It's just a 2020 thing it seems; coincidentally picked up right near the middle of February which correlates way too closely with the beginning of COVID.


I recall it happening many years ago too.. but GitHub seemed to eventually sort it out, I suspect they will this time too.


Might be a side-effect of working remotely because of covid


correlation != causation though.


The whole "correlation does not imply causation" thing is completely misunderstood.

The issue is with the meaning of the word "imply"; when used in the formal sense as it appears in Classical Logic, correlation does indeed not imply causation.

In common parlance however, "imply" is often used to mean "provides evidence for", and correlation can indeed provide (potentially strong) evidence for a hypothesised causal link; the problem lies in people reading "correlation does not imply causation", assuming the informal meaning of "imply", and then going on to reject any notion of causation which uses observed correlation as evidence.

Pretty much every empirical science uses notions of correlation (in its various formal statistical guises) to provide support for causation, indeed to reject such reasoning would be to invalidate huge swathes of mainstream accepted science; half the battle in these instances is making the leap from correlation to causation in a manner which is considered scientifically sound.


Fortunately, otherwise covid outbreaks might be caused by github outages


And yet (Correlation != causation) != (!causation)


I don't think it has anything to do with Microsoft acquisition. Perhaps they even have more compute power available since then.

But, they are moving much faster in recent years. They've added a lot of new features.


It really has been. Hopefully it's just because they're continuing to migrate systems from their previous host to Azure.


From what I understood, they are having capacity issues with one of their MySQL masters or something. I've read that they are in the process of sharding/resolving that, but it takes time.

To GitHub SRE/oncall people: hang in there, you're awesome.



Yeah, our builds have been failing for a couple of hours now, and we only use GitHub because one of the dependencies in NPM downloads a binary from a public repository. We're already forking what we can to our self-hosted GitLab server, but even a simple git clone or just browsing the website can lead to an HTTP 500 right now.

Lots of incidents lately, but it's becoming increasingly hard to get away from Github.


I've said this many times before and I'll say it again: consider self-hosting your projects on a solution like GitLab or Gitea to avoid this sort of situation. [0]

GNOME, Xfce, Redox, WireGuard, KDE and Haiku all self-host on either cgit, GitLab, Phabricator or Gitea.

[0] https://news.ycombinator.com/item?id=23676072


I used to help maintain Xfce's svn and then git server.

I'm not going to say it was hard, but it was work, and it was work that took time (time that was volunteer free time) away from working on Xfce itself. When I did the final git imports of our svn repos in mid-2009, GitHub had been around for about a year, maybe a year and a half, and wasn't that popular yet. And I suppose back then I had the bull-headed attitude of "must self-host, because that's the only thing a respectable open source developer should do".

That was a foolish attitude that took time away from the actual goal, which was making Xfce better. If it were today, I would have moved us to GitHub in a heartbeat, or perhaps GitLab (their hosted offering; I wouldn't self-host), instead. I haven't been involved with Xfce (still a happy user though!) in nearly a decade, but I suspect they still self-host out of inertia, with probably a little of that bull-headedness mixed in (that I myself can't claim to be fully free of either).

While GitHub's uptime isn't perfect, it's pretty damn good, and better than most volunteers will get running their git server off a single box someone had racked somewhere in Belgium, which I believe was what we were doing at the time. Tools for that sort of thing are better now, and if I had to self-host today, I'd use EC2 or something like that, and automate the hell out of everything, but it's still a lot more work than just using somebody else's infra.


The downside is that a lot of these self hosted archives will have disappeared 10 years from now.

Meanwhile, I expect many forgotten repos to remain online with Github one way or another for a very long time.

If you do go the self-hosting route, add a mainstream host as an additional remote and push your commits to both.


The easy solution to this is that git supports pushing to multiple remotes: https://news.ycombinator.com/item?id=23818609

In fact, with all these free services, I'd probably say it's well worth automatically making remotes (at least) for GitLab, GitHub and having a local Gitea for everything you do. This should be resilient against specific outages, or GitHub simply deciding they don't like your project name, or some other disaster.


No guarantees either way - Google Code was alive and kicking, until it suddenly wasn't.

I do appreciate how easy git and other dvcs make it to mirror repositories, though. I find myself using GitHub mirrors of projects that are otherwise self-hosted, just because code search on GitHub works pretty well.


But the content is still available, isn't it: https://code.google.com/archive/

When people try to self-host and get lazy or hit by a bus, the content just disappears unless it was lucky enough to get archived.


Funny, I find myself cloning GitHub repos just because code search on GitHub works pretty badly (e.g. it doesn't pick up partial tokens). Haven't had the chance to compare to other solutions though.


This is a huge reason why they are making MaidSAFE. Anyone remember Freenet?



How far along is safenet? What can it do today, in terms of its parts?


Is there something like archive.org but for git (and why not other vcs) that finds repos and periodically fetches?


Self-hosting is hard work: you easily forget security updates or backups, and should your project get a lot of traffic, your server might not handle it.

And should your project go unmaintained it'll probably disappear if you don't have managed hosting.


True, but the burden of maintenance and up-time is on yourself then...

Might be good to have a self hosted backup that's automatically synced with Github


You can easily do the latter with Gitlab, free for open source projects.


How about a more low-tech solution? I don't want/need a full-blown web application like GitLab, I just need something that auto-syncs my public and private repos so that I can still access everything when GitHub is down.

Any ideas?


Just add another git remote push URL so that your commits are automatically mirrored when you push:

    # fetch from GitLab by default
    git remote add origin https://user@gitlab.com/myrepo
    # push to both GitHub and GitLab with a single `git push`
    git remote set-url --push --add origin https://user@github.com/myrepo
    git remote set-url --push --add origin https://user@gitlab.com/myrepo


The lowest tech solution is to just wait until the outage is over. It’s 100% free!

Seriously, though, any repo that I work on regularly will be cloned to my local dev environment, so it’s not a hard blocker.

That said, a cron job on a cheap VPS would probably do the trick.
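A minimal sketch of that (assuming a bare mirror clone on the VPS; paths and URLs are hypothetical):

    # one-time setup: keep a bare mirror of the repo
    git clone --mirror https://github.com/user/myrepo.git /srv/mirrors/myrepo.git
    # crontab entry: refresh the mirror every 15 minutes
    */15 * * * * git -C /srv/mirrors/myrepo.git remote update --prune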


Moreover, GitHub will eventually get these outages under control. Or we'll all be driven to gitlab :)


A lot of people are posting complex things. All you need is SSH.

    # on the server: create a bare repository
    ssh sparkling@git.example.com
    mkdir project-1.git
    cd project-1.git
    git init --bare
    exit
    # back on your machine: add it as an extra remote
    git remote add alternate sparkling@git.example.com:project-1.git

For the syncing part: if you don't want to do it manually, you can add multiple destinations on the same remote. Someone already mentioned it here: https://news.ycombinator.com/item?id=23818609, https://jigarius.com/blog/multiple-git-remote-repositories


If you are working alone on these projects, store a bare git repository in Dropbox and/or similar services that sync data across all your devices.

If more privacy is required, you can use something like gocryptfs to only send encrypted data to these services.

Syncthing is a similar option that doesn't require an external service; it also doesn't require local encryption, since all data in transit is always encrypted.

This has a big caveat though: there's no locking mechanism that will work reliably on the bare git repo, so you may have to resolve some conflicts manually if two separate devices push to the same git repo at the same time. This is why you should only use this method if you work alone on these repos.
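A minimal sketch of the Dropbox variant (paths and branch name are just examples):

    # create a bare repo inside the synced folder and use it as a remote
    git init --bare ~/Dropbox/project-1.git
    git remote add dropbox ~/Dropbox/project-1.git
    git push dropbox master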


It's still a full-blown web app, but I can highly recommend Gitea. It's just a single binary so getting set up is really easy. I run an instance and it literally requires no maintenance.
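Roughly, the setup looks like this (the version and download URL are assumptions; check the releases page for the current ones):

    # download the single binary and start the web UI (listens on port 3000 by default)
    wget -O gitea https://dl.gitea.io/gitea/1.12.1/gitea-1.12.1-linux-amd64
    chmod +x gitea
    ./gitea web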


Never trust external tools in your build process; you may have an emergency and not be able to fix it.

We are trying to migrate to https://www.sonatype.com/nexus-repository-oss (self-hosted); it caches the git tags and you just have to replace the git links with Nexus ones in the dependency manager.

If you want something simpler, you can try Satis for PHP, or Sinopia or Verdaccio for npm packages. You will find a lot of other tools for other languages.
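For npm, for example, a rough sketch of the Verdaccio route (assuming the default port):

    # run verdaccio as a local caching registry and point npm at it
    npx verdaccio &
    npm config set registry http://localhost:4873/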



Sure, if you're a big project. But smaller ones I bet would have less downtime, require less ops, require thinking less about everything.


I wouldn't consider self-hosting better with regards to downtime. Our company has various self-hosted components like Jenkins, with full-time people to manage them, and I can say for sure their downtime is more than GitHub's. And good luck if you think you can manage everything for a side project.


No, the main point of programming for me is to fill up the activity graph on GitHub. Without that I may just as well quit.


Seriously, again!? I am growing seriously impatient with this. It's been downhill since Microsoft took over.

GitLab is starting to look good (or even Gitea self-hosted).


I might just self-host Gitea on my Pi today


I was going to warn you about SD card corruption, but then I realized even this is probably more reliable than GitHub at this point.


You can always just sync to multiple destinations like gitlab, bitbucket, github etc :).


I personally haven't experienced any SD card corruption, even though I am running a MariaDB instance, a Home Assistant instance and a Pi-hole instance on a Pi Zero with a 32 GB SD card (23 days uptime as of writing).


It typically kicks in after a few months depending on the stuff you're running and type of SD Card. SD Cards aren't really designed for the type of read/write patterns a 24/7 Raspberry Pi server would entail.


I've had 2 SD cards go corrupt (read-only, any writes were lost). Had a MariaDB instance running on it as well. It will hold for some time, but eventually the card will give up.


Oh no! What would you recommend as a long-term solution? I have ordered a Pi 4, should I use a USB flash drive / HDD / SATA SSD?


The advice is to buy a quality SD card and back up data that shouldn't be lost to a more reliable location: an S3 bucket, a local NAS or something like that. The general idea with Pis and IoT devices is for them to act more as input points for data or as programmable controllers, rather than as reliable networked storage or server space.

Also, the Pi 4 now supports booting from USB: https://www.tomshardware.com/how-to/boot-raspberry-pi-4-usb


Since the Pi4 can boot from USB now (might still be in beta, not sure if it's fully released), I would get a USB3 to M.2 adapter, buy a decent M.2 drive, and use that. It'll take up more space than the SD card, obviously, but not that much more.


You can even get SSD based USB sticks like the Sandisk Extreme Pro 3.1: https://www.amazon.com/dp/B01N7QDO7M

I bought one to run Windows on my Retina MacBook Pro. I only need Windows for gaming when visiting friends, and it works flawlessly for that purpose.


You can use a USB hard drive and just save to it while still booting and running on the SD.

Check speeds when you get it to see what works for you best.


Honestly throw the thing in a bin and use an old mini pc instead. Lenovo thinkcentre tiny. You can pick them up on eBay for about the same as a fully equipped pi.

Pi is 100% not suitable for 100% duty work. It’s just a toy.


As someone who's been running a media server and home automation server on his Pi 24/7 for 3 years now, I beg to differ.


I’ve had 6 so far and all have had reliability issues or weirdness, mostly related to SD corruption, crashing or power brown-outs. The power issue was not solved by running them off a proper Keysight bench supply.

SD cards are quite frankly horrible boot media as well.


That's 27W (idle), compared to the Pi 4's 3.4W (idle).

That's an additional 145kg of CO₂ per year (2018 US average).
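Rough arithmetic behind that (the per-kWh factor is simply what the 145 kg figure implies):

    (27 W − 3.4 W) × 8760 h/yr ≈ 207 kWh/yr
    207 kWh/yr × ~0.7 kg CO₂/kWh ≈ 145 kg CO₂/yr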


M600 fanless is around 15W full whack and 5.5W idle (measured). The CPU in it has a 4W TDP.

For that you gain:

- Decent thermal design

- A decent quality enclosure

- A power supply (ThinkPad brick)

- Two DisplayPort holes

- A SATA interface (M.2 form) for an SSD

- Built-in WiFi

- A RAM slot you can chuck 16GB in

- 2 more USB ports

There's no competition. I paid 79 GBP each for mine (I own 3). Pi is 57 GBP bare board.

Pictures. Mac mini for scale: https://imgur.com/a/jXjLusb

And on CO2, perhaps you should just do without it if it's a problem.


I run a cosy Gitea hosting service over at https://hostedgitea.com/ for those who want their own private Gitea box without the hassle of deploying or maintaining it. Basically I handle all that for you myself. Just starting out but happy to get feedback!


I changed the i18n of Gogs to s/Organization/Folder/, which I think works pretty well, though I'm becoming more interested in the Gitea fork. I reckon the same trick will work there.


ITT: Lots of GitHub hate.

Don't forget that change in software is inherently risky and will result in bugs, etc. I'd rather have a platform that is always looking to make things better and risking a bit of downtime, than a stale platform that we all know we depend on.


I think the issue here is that Github is considered a mature product and it just works.

So is there any actual pressure to move fast and break things?


Yeah, it feels like they're just being pushed to out-everything GitLab, to kill a competitor, before relaxing and stagnating.


Getting 500s all over the API; good luck to whoever is on call.


(copied over from other thread - https://news.ycombinator.com/item?id=23817794)

GitHub started doing availability reports. Last month's details are in the blog post below, with a summary of the issue.

Stay tuned till next month for the current outage.

https://github.blog/2020-07-08-introducing-the-github-availa...


Here's the RSS feed of the blog

https://github.blog/feed/

I am using https://github.blog/category/engineering/feed/ for the engineering category.


What if we had smart failover? Use GitHub and GitLab simultaneously: all issues, all comments, all PRs duplicated. If one goes down you use the other in the meantime with no interference at all. One could probably then build a frontend which magically does this failover for CI/CD etc. Isn't that how redundant this should be?


I guess it's time GitHub realizes that they're no longer just relied upon for git, but for so much more.


Cool thing about git is that it is distributed and there is no single point of failure ...


It is. We all have our own copy of the repository, and can still distribute changes using any of the other methods:

- A different central server

- Email

- A shared on-filesystem copy, e.g. local network drive

- HTTP or SSH between developer computers (put your repository somewhere where your NginX or Apache serves it, the other developer can "git remote add chvid http://chvid.example.com/repository").
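A minimal sketch of the email option, for instance (branch and file names are placeholders):

    # sender: bundle the commits that aren't on the shared branch yet
    git format-patch origin/master --stdout > my-changes.patch
    # receiver: apply them from the mailed (or copied) file
    git am < my-changes.patch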


Do you really think the average developer using git has any idea about how it can be used decentralized?


IMO, more people need to ask themselves how their tools work and why they exist and find the answers to those questions


Who would really rely on a single, external, free, no-guarantees service and not have redundancy to tolerate some hours of downtime?

Make GitHub a mirror (at least source-wise) and you can benefit from its outreach without being held hostage. I'm happy with that, e.g. https://notabug.org/mro/ShaarliOS/src/master/doap.rdf Inspired by https://indieweb.org/POSSE


A lot of people pay for Github?


seemingly not enough :-)


Yay this crashed our Jenkins instance as well.


Same mate


Hopefully it's just them reverting their latest UI changes


What's wrong with them? Maybe it's because I do not use the project management features much, but I like the new UI more.


My main dislikes are the slow, lazy loading, and that you can’t read the last commit messages without extra clicks on the “...” button. Even in pull requests I’ve seen new commits show up without the message (and after clicking the expand button, it closed itself after a few seconds).

There have been some other annoyances/changes in behaviour that have bugged me too, but I’ve mostly stopped remembering them because I’m resigned to it now.


>you can’t read the last commit messages without extra clicks of the “...” button

You can opt in to this now. It is preview functionality; I guess it will be GA in a couple of weeks.


The repo top bar is wider than the rest of the content. It feels unbalanced. Personally I'm not a fan of that part. No complaints about any other change.


Ha!


I just had to access my Github stars to find an old app I bookmarked. No dice. Otherwise I've moved all my current projects to Gitlab so Stars and contributing to other repos are my two most used features atm.


Everything seems up now. Does anyone know if Github pages went down too?


In my build pipeline, I query several different package hosts (npm, PyPI, Docker Hub, etc.) and GitHub/GitLab. If any of them is unavailable, the build fails.

What's the best way to keep my own copy of the packages my software needs (and their dependencies), so that my build process is less fragile? Ideally, I'd only have to rely on those 3rd party platforms to download new versions or have them as a backup.

When relying on my own copy of required packages - can I expect much faster builds?


I've used Nexus for a while without any issues.


I still don't understand people who always mention Microsoft's acquisition. Until there's an official statement, it isn't Microsoft's failure. Don't blame them.


It's not MS's fault, of course, but since MS acquired GH, GH has been much more relaxed. New features are added all the time, clearly not ready for the spotlight. It gives you a different comfort knowing your daddy is there as a safety net.


But also they have a lot of new customers/users, so maybe the scale's the problem? I don't know.




They’re minutes away from dropping below two nines looking at Issues, Pull Requests, Projects. Other services look comparably unreliable.


"We have identified the source of elevated errors and are working on recovery."

A day wrecker!


Human sacrifice, dogs and cats living together… mass hysteria!


It is ironic that a version control system engineered to be distributed is now typically used in such a centralised way.


Savage Unicorn, Github who knew.


People are advocating hosting their own Git repos, but wouldn't those go down, too, and wreck the day even more?

Or, are you guys all devops geniuses better than those who work at GitHub?


You get to spend time fixing it rather than waiting for it to be done :p


I used to self-host everything, and while I never had hiccups or problems, I got tired of the additional work of maintaining a git server and moved my personal projects to GitHub just so I don't have to deal with it. I suppose the comments are fueled by frustration and the fact that when you have something in-house, you can directly go and push the devops teams to fix it, whereas with GitHub you just sit and wait. I never understood why people think that poking someone with a stick will magically speed up the fixing process, but there we go...


Of course everything goes down one day or another, it's just it's been a very common occurrence with Github recently.

When my own side-project has more uptime than Github, there's something wrong somewhere.


Git is notionally decentralized. It should be neither as difficult nor as uncommon as it is to have multiple repositories, but we have collectively let convenience get in the way of reliability.


But those wouldn't take most of the world's public git repos down all at once just because of a single issue. Single points of failure have a bad reputation for a reason.


Absolutely not, but I guess that most "internal repo" access patterns see far less traffic than GH. So it's not that difficult to have a "stable enough" configuration for most organizations. Sure, for a 5-10 devs shop it's overkill, but if you have a mid-sized team and already have someone caring for internal tooling/systems, it's not that bad an idea. Unless you have the "hosted here" syndrome.


Well, if you host it yourself, and it goes down, then you have control over getting it back up, rather than waiting for someone else to fix it.

Though really, if it is that important for it to be up, you should mirror it to at least one other provider (ex. self-hosted and github).


I did work somewhere once with GitLab self-hosted in a VM. The person in charge believed it was important to reboot the VM every so often.

One day, for whatever reason, he couldn't bring the VM back up. Self-hosted GitLab was out for the rest of the day. I found this pretty funny.


When I was running subversion we had no downtime in 6 years during office hours.

Really, git is designed to be serverless and decentralised; this centralised GitHub malarkey is probably wrong.


Umm, so say you are hosting your own Git repos, and not fiddling with the configuration much once you have found the perfect fit.

Isn't the only reason it would go down a network issue or a power failure? What other possible cause of system failure could there be?

I have been running HA, Pi-hole, MariaDB and my own API instances on my Raspberry Pi, and in the 22 days of uptime so far there have been 0 failures.


Many reasons, including your hosting provider going down.

At that point you'd have to create and manage a cluster.

You have to update servers, etc. etc., and if you count the hours it's many times more than just working locally and waiting a couple of hours until the technicians at GitHub fix the problem for you.


Self-hosting means something different.

Also, most people don't need to provide access to hundreds of thousands of users, so they won't ever need a cluster of Git servers.[1] (A bloated UI like GitLab may need more compute to host even for a moderate number of users, though.)

Self-hosting Git is easy. Besides power failures there is not much reason it would go down, unless you actively help it along. If you don't touch it besides OS updates, it can run 24 hours a day for many years without any effort. (The biggest issue is actually backup, but you have to think about that anyway if you run something on your own.)

[1] https://news.ycombinator.com/item?id=23804373


Not sure why you're being downvoted; I absolutely agree. For a smallish project, especially made up of volunteers, free time spent toward maintaining infra is time not spent on working on your project.

For larger projects where you have the resources to have dedicated infra people, I guess it depends.


> Or, are you guys all devops geniuses better than those who work at GitHub?

Do you call IBM to maintain your pi-hole?


People are mad/frustrated/busy, that's how it goes


Surely it can't be a coincidence that Github is down every other week after the Microsoft acquisition? Is Microsoft interfering too much? Or did the core technical expertise leave for other greener pastures in Microsoft or outside?


I believe GitHub's been adding new features much faster after the Microsoft acquisition. Moving faster can lead to breaking things more frequently.


probably overly complicated layers-upon-layers of services, infrastructure and feature creep fighting with each other



