GitHub Is Down (githubstatus.com)
51 points by funOtter on Feb 18, 2023 | 44 comments



I see a lot of hate for GitHub since the Microsoft acquisition, but there have been many big improvements that I think outweigh some instability, especially for smaller companies or open source projects: free private repos and free GitHub Actions. The UI is also pretty nice and clean. I think GitHub is still one of the best options for hosting your code.


Do we have to give the you-are-the-product speech? It's a proprietary platform that is trying to sell you all of their other services, get you locked into their tools, and require everyone you wish to collaborate with to create an account. Even then, it's buggy, overly social, and since it's not FOSS, we can't fork it, improve/fix it, or self-host it. We also see the status page come up a lot because folks have centralized on one service for a distributed version control system, and mirrors have become increasingly rare.


> Do we have to give the you-are-the-product speech?

Not for me, because I pay for the service, and I'm extremely happy with the value it provides for the price. I don't like downtime either, and I think GH needs to work on their stability, but I've also worked in companies that hosted all their source control on prem, and it had plenty of downtime, too.


I pay for SourceHut and haven't noticed any (unplanned) downtime. I also don't feel like they're trying to sell me anything beyond the platform itself. The bundled IRC bouncer is a nice touch as well.


Upvote for both this comment and the parent, because both are true.

> We also see the status come up a lot because folks have centralized on a service for a distributed version control system and mirrors have become increasingly less common.

At least if it actually went down for a long time, you could just push your repos from whoever has them checked out to a new mirror and get going again.
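
For example (a rough sketch, assuming you still have a local clone and a hypothetical mirror at git.example.com):

  # add the new mirror as a remote and push every ref there
  git remote add backup https://git.example.com/you/project.git
  git push --mirror backup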


I wish my best to the GitHub operations team. There's going to be a lot of pressure to get things running.


I mean, that's literally just their job. It would be one thing if GitHub and its parent company had a history of treating developers and open source with kindness and respect for their work, but... yeah.


Can confirm 500s. Not sure what is happening, but to me it feels like GitHub has become pretty unreliable in the last 2-3 years. I remember several occasions when teams and whole companies I worked at were basically blocked by GitHub/Actions/whatever funny service they were paying for.

From those experiences I wouldn't recommend GitHub anymore, because I really question the benefit of having a blackbox-as-a-service that you're fully locked into and that fails kind of randomly. IIRC, Actions was down two years ago for 1-2 days, which is, like, a lot these days.


I wonder if developers have realised yet how unreliable GitHub has been since Microsoft acquired them. I expect them to have an incident every month. Just look at their greatest recent hits at [0] and [1].

It's no wonder that going all in on GitHub makes absolutely no sense. Like I said years ago, it is better to just self-host, or merely mirror your repository to GitHub.

Blender surely chose wisely [2], and perhaps this is the time to self-host your repositories rather than 'centralize everything to GitHub' [3].

[0] https://www.githubstatus.com/history

[1] https://news.ycombinator.com/item?id=32752965

[2] https://news.ycombinator.com/item?id=34700390

[3] https://news.ycombinator.com/item?id=22867803


Haha, why would anyone be surprised to see that? Just take a look at other M$ products, pretty much the same.


Many of us saw this coming a year ago, when GitHub shoved a bunch of developer QoL features behind the enterprise tier and started having consistent downtime with actions.


There would have to be some central point eventually, otherwise OSS sort of dies. Maybe a git link aggregator?


I saw this yesterday (https://freeradical.zone/@tek/109876335075879967), although their status page was all green. I ran a command line like:

  for i in (seq 10); git clone https://me:api_key@github.com/… foo; rm -rf foo; sleep 5; end
Out of those 10 clones, 2 of them succeeded, and 8 failed (evenly split between 2 different error messages). I opened a trouble ticket because our CI server was driving us nuts with alerts. The status was never anything but all green, though.


It seems intermittent for me when browsing repositories, both public and internal/private ones. It normally works on every 4th or 5th refresh, but I might be partially responsible for making it worse.


I run a Gitea server on my (home lab) file server. At one point I ran a second server in parallel during a migration, and it was pretty straightforward to configure projects to use two servers in parallel. But 99% of my usage is simply storing a copy of my files; occasionally I file an issue as a reminder. I wonder how difficult it is for users of more advanced features, such as CI/CD pipelines, to duplicate that functionality either on other public servers (GitLab? Bitbucket?) or on a self-hosted server.
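
The "two servers in parallel" part is really just a git remote with two push URLs; a minimal sketch, with hypothetical hostnames:

  # one logical remote, two push destinations
  git remote set-url --add --push origin ssh://git@gitea-1.lan/me/repo.git
  git remote set-url --add --push origin ssh://git@gitea-2.lan/me/repo.git
  git push origin   # now pushes to both servers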


Gitea supports Drone (CI). My experience with running Drone using docker-compose is that you end up messing around with SSL certificates and SMTP for reporting before you get to write your pipeline’s YAML. Some DevOps is required beyond what it takes to set up Gitea.
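
For what it's worth, the Drone-to-Gitea wiring itself is fairly small; the certificates and SMTP are where the time goes. A hedged sketch with placeholder hostnames and secrets (plain docker run rather than compose, same environment variables):

  # Drone server, pointed at an existing Gitea via its OAuth application
  docker run -d --name drone -p 80:80 \
    -e DRONE_GITEA_SERVER=https://gitea.example.com \
    -e DRONE_GITEA_CLIENT_ID=<oauth-client-id> \
    -e DRONE_GITEA_CLIENT_SECRET=<oauth-client-secret> \
    -e DRONE_RPC_SECRET=<shared-secret> \
    -e DRONE_SERVER_HOST=drone.example.com \
    -e DRONE_SERVER_PROTO=https \
    drone/drone:2

  # a single Docker runner that picks jobs up from the server
  docker run -d --name drone-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e DRONE_RPC_PROTO=https \
    -e DRONE_RPC_HOST=drone.example.com \
    -e DRONE_RPC_SECRET=<shared-secret> \
    -e DRONE_RUNNER_CAPACITY=2 \
    drone/drone-runner-docker:1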

I wish CircleCI supported Gitea, because not having to set up the CI system and just configuring the pipeline would save a lot of time.


This is a better URL for this incident: https://www.githubstatus.com/incidents/t2xwk9mz56f4

Everything is currently marked as "degraded", but in reality the entire website is returning 500s.


Well, it's technically correct... the best kind of correct.


Trying to get a production build out and... it's down.


It’s late night Friday. Don’t.


For myself and my company, I'd agree. But let's be honest: we have no idea about this person's use case, culture, or geographical location, so we shouldn't make judgements.


Saturday morning, i.e. a weekend, is arguably worse than deploying on Friday evening.

I don’t see how any culture or geographical location can possibly make deploying outside of work hours reasonable.


The most obvious case is if the current production deployment is severely broken, in which case a weekend deploy may simply be the best of bad options.

Obviously that points to larger problems though.


> Obviously that points to larger problems though.

Does it, though?

Given that the readership of this site is ostensibly hackers and hustlers, including plenty of people with side-businesses from which they are at least trying to earn money, is it really so odd that somebody might be trying to do a deploy at an "unusual" time?

I'm as guilty of this as anyone else in the past, although I try to restrain myself now, but I'm constantly amazed at people who want to wade in with a hot take on practices and processes off the back of next to no information.

Here are reasons that I can think of for deploying at an unusual hour:

- As you suggested, production is broken

- Plenty of companies exist whose busiest times are weekends, or where events happen at weekends for which systems need to be available

- It's a side business, so it gets worked on outside of normal business hours

- It's an important source of income that will be disrupted (regardless of the size of the business, or whether it's main or side revenue)

- It's a startup, and people who work there are focussed on making the business a success at all costs (different points of view on whether that's a valid life choice but, for me, grown adults should be allowed to make their own choices without facing undue criticism)

- The world has mostly normalised on a Monday - Friday work week these days but there are still countries that have a one-day weekend, or where the weekend is split (I'm not sure if Thursday/Friday is still a norm in any Muslim countries these days). Point being, Saturday might be a business day or an important trading day

- The company could be a total shambles with poor processes and practices run by clueless management and leadership (but we don't know that and we shouldn't assume it)

- I'm sure there are plenty of other reasons for deploying late on a Friday

But, no, GP decided to lead with, "It’s late night Friday. Don’t." which I think is pretty arrogant, and to be discouraged. As I pointed out, we really don't know what's going on here, and deploying on a Friday might be a perfectly valid thing to do.


> Plenty of companies exist whose busiest times are weekends, or where events happen at weekends for which systems need to be available

This is a reason to not do Friday deploys. It's always going to be harder to get anything done on weekends if stuff goes wrong, you don't want to add in deploys to that. For example, contacting a third party provider will be harder on a weekend.

I can see the reasons for a Friday deploy - if it must be done or it's just low stakes (which applies to a lot of projects where everyone can just go home and fix issues on Monday with some mild grumbling) then it's fine but otherwise I do think avoiding deployments on a Friday is a very good rule. They certainly shouldn't be routine unless you've got very good justifications.


> I'm as guilty of this as anyone else in the past, although I try to restrain myself now, but I'm constantly amazed at people who want to wade in with a hot take on practices and processes off the back of next to no information.

I’m going to go based on the information posted. If someone posts that they’re deploying on Friday with no other information, I’m going to tell them the sensible thing, to not do it. I’m not going to come up with a million and one excuses about why they could be justified in doing it, I’m going to tell them not to unless they provide the justification. Sorry, not sorry.

It’s not perfectly valid for the vast majority of cases. You coming up with (frankly, extreme, and in most cases far from extenuating) edge cases in no way invalidates my point. Deploying on what is generally considered the end of a work week, without extenuating circumstances described, is going to elicit the same reaction. Because it’s the appropriate reaction to someone doing a risky thing after hours at the end of a week.

Your examples hardly even justify deploying outside of work hours. A scrappy startup or side business should not employ the shittiest of practices this early as it will only lead to these shitty practices being ingrained down the line. Almost every culture has shifted to the Western system and at best has Friday/Saturday as weekend, and even if weekends are the most important time for a business, it does not somehow justify deploying right before the start of that busy time when most people will not be working. There’s many things you should be doing at 9PM before your busiest time of the week, and deploying ain’t it.

> I'm not sure if Thursday/Friday is still a norm in any Muslim countries these days

It’s not, but you consider deploying on a weekend to be more sensible? Ok, sure Jan.

> But, no, GP decided to lead with, "It’s late night Friday. Don’t." which I think is pretty arrogant, and to be discouraged. As I pointed out, we really don't know what's going on here, and deploying on a Friday might be a perfectly valid thing to do.

Quite frankly, I don’t give a shit if a random person on the internet thinks it’s arrogant. Go off sis, you do you. I’m going to continue telling someone that wants to deploy on a Friday not to.


> green is experiencing degraded availability. We are still investigating and will provide an update when we have one.

Does anyone know what the service called "green" is?


Have they ever resolved their MySQL problem?


Seems to be working for me... just cloned, pushed, and browsed files. Regional thing?


I'm in Europe and I'm getting a 500 on ~1/4 of requests.


Things are starting to work, but my actions are stuck in the queue.


USA East Coast Mid-Atlantic, 500 errors for me.


https://www.githubstatus.com/history

So far in February there have been 14 incidents. Today is February 17. If you do the math, 14 ÷ 17 = a lot of fucking downtime.


While I agree that GH should focus on stability after adding features at what felt like a pretty breakneck pace over the past few years, lumping all of their incidents together isn't helpful, nor is weird "math" that compares days to incidents.

For example, issues with repos themselves (pulling or pushing code) or Actions (which many folks use for their deployment pipelines) would be a lot worse than issues with Codespaces, in general.


Maybe in general, but you can keep developing for a while without pushing code.

You cannot keep developing without Codespaces if you use them in the first place.


The metric I was going for was incidents per day. But otherwise you're right, maybe I'm being too hard on them


14 incidents each were a full 24 hours long?


If this is true, they really need to do a code freeze and figure things out.


Note that one of those is a two-minute planned maintenance window.


Either this is Ruby on Rails in all of its glory, or their infrastructure and ops need a lot of attention.

GitHub is definitely down more often in the last couple of years though. It's noticeable. Hope they figure it out.


We're also running a large rails app and I can't remember a single downtime caused by rails itself, it's always been some db, network, cloud or business logic issue. Rails is pretty far down the list of the issues you encounter.


There is a lot more that can fail before the web worker, which can be scaled almost indefinitely by putting more machines behind a reverse proxy.

Things like the database, or the random microservices that make up a death-star architecture.


Sure, and my comment doesn't exclude it. I wonder what's truly the weakest link.


It's back up.



