> GitHub Enterprise Server customers should upgrade immediately - at the time of this writing, our data indicates that 88% of instances are still vulnerable
GHES is essentially unmaintained (perhaps “on life support” would be more charitable since they are certainly accepting payment for it) and has been so for about a decade. It requires a multi-hour downtime to apply even a patch-level release. They do not have any supported mechanism for HA upgrades. So even the most conscientious GHES customers lag the latest version because they can’t afford the downtime.
They are constantly telling all their GHES customers who complain about the severe flaws with the self-hosted appliance product to move to GitHub Enterprise Cloud, which is just regular GitHub.com, but who in their right mind would make that move nowadays??? At least GHES stays up during the daily github.com outages.
Until GHES can do zero-downtime upgrades, nothing will get better. That's not on their roadmap, because as far as I'm aware the GHES team doesn't actually exist or is entirely focused on KLTO. It's a dead product that they wish didn't exist.
It sure isn’t! GitHub Enterprise Cloud is simply an enterprise plan on the regular multitenant github.com. Your repositories are on disk right next to those of everyone else who uses github.com. There is no segregated storage or compute.
I wish they had a plan to literally host GHES for you, because then more people in the company would be forced to reckon with how terrible GHES is from an operational perspective. Conceptually it is stuck about 15-20 years in the past.
GHEC is a terrible fucking product too and for the life of me I don't understand why they didn't use subdomains to namespace customers from each other and from github.com.
It should be mycompany.github.com, because the way it is now, we have to rename all our damn repo orgs as we move from GHES to GHEC ("github.com/mycompany-org/repo"), which is no guarantee either because anyone could create that org before us. All sorts of terrible UX falls out from not having namespaced the GHEC customers.
> X-Stat header that controls whether the server operates in enterprise mode.
Perhaps this header mentioned in the article is related - maybe that's the toggle for the enterprise mode? It seems there are at least traces of "enterprise mode" on the normal github.com servers.
There is no “the toggle”. Read the article. A GHES appliance (and github.com) is dozens of services working together, some of which act differently in ES mode, so there are toggles galore. But probably not a lot that can be toggled by user input :(
I did read the article and that was a direct quote from the section "From GHES to GitHub.com".
The parent comment was talking about "GitHub Enterprise Cloud", not "GitHub Enterprise Server"; they are two distinct products.
The way that they were able to escalate the RCE from a GHES environment to a github.com environment is by injecting this header and enabling this enterprise feature. That supports the idea that "GitHub Enterprise Cloud is on github.com".
I assume a fair number of these on-prem customers restrict access to their GHES instance to behind a corporate VPN or something similar, and are planning an upgrade date that won't affect operations.
Any public instance should update immediately though; it's not very hard to work out how to repro the vulnerability on your own from what they provide in the article, plus the fact that GitHub Enterprise source is publicly available.
I suppose so. The company invested pretty heavily in security tooling, though I think it wouldn't have been hard to do something to bypass the security for internal servers.
If you're in the enterprise, you can update something outside of the normal schedule and be guaranteed to blow up everything (and be blamed), or you can stick with the schedule and hope for the best.
The question is how fragile the upgrade process is in large installations. In other enterprise software that shuffles large amounts of data, I've seen the smallest things break the install and leave the ops team rolling back. It was like that with SharePoint in the past: you were rolling dice every time you upgraded it.
The GitHub blog had an article saying that all patches must pass for github.com before merge, but failing GitHub Enterprise tests have a three-day window to be rectified.
I guess I would say you're fortunate to have not worked in a "we cannot use github.com because we take security very seriously" environment, because that always tells me you'll be running an on-prem product that might get updated once a year.
On-prem beats the heck out of GitHub post-Microsoft though... At least you know how to get it working again when someone breaks it. These days with GitHub you expect a weekly 500, a rainbow unicorn error, build failures due to unavailability errors, etc. Last I checked, the third-party trackers of GitHub's services were showing barely one 9 of reliability.
> Git is pretty simple internally, and its ui is just knobs and levers to reach into that simple reliable internal structure.
that's not true either. originally it was simple internally - it was mostly shell scripts! writing text files! - but now it has all sorts of complicated optimisations.
the "middle" is somewhat simple for CS people, though - a graph of commits, you can put labels on them, you can send and receive strict appends to the graph to another repository. both the stuff under and above that is quite complicated in practice, but the UI does continue to improve - e.g. editing a past commit message until the release last week was ... complicated.
> editing a past commit message until the release last week was ... complicated
Was it? `git log --oneline` to find the commit id if it's not really recent, then `git rebase -i <commit-id>^` and apply the reword action to your commit.
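Concretely, assuming a branch you're allowed to rewrite (this changes the ids of that commit and everything after it):

    git log --oneline            # locate the target commit id
    git rebase -i <commit-id>^   # opens the rebase todo list in your editor
    # change "pick" to "reword" on that commit, save and quit;
    # git then reopens the editor with the old message ready to edit

Two commands and an editor round-trip; awkward to discover, sure, but not complicated.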
anyone who's actually worked there, could you explain why they're finding scalability and reliability so hard? naively it seems like 'repo groups', ie clusters of repositories linked by being mutual forks, would be fairly isolated for the whole git storage layer, and everything else feels pretty easily parallelisable (issues, actions, etc, modulo taking locks now and then to submit results or whatever). and given that, surely you can incrementally deploy changes across those many shards to avoid most big outages?
are there big conceptual serialisation points that I've missed? is it just not well factored? was the move to Azure just a catastrophically bad idea? some other thing?
Almost every high-volume service on the internet is write a little, read a lot, and when there are writes, they're relatively small: a few bytes into a database that can fan out. GitHub is very different: constant writes, large files; it is under far more pressure than the systems the rest of us build. And then, as the article says, vibecoding happens, and suddenly they're receiving 30x the volume of expensive operations. GitHub is responsible for many of the performance improvements made to Git over the years - Git scales today because of work GitHub did - but that work was never intended to handle today's volume.
Even as recently as 18 months ago, Lovable appeared, seemingly overnight, and caused huge problems for GitHub because they were creating a repository on GitHub for every single Lovable project (hundreds of thousands of repositories), offloading the very high cost onto GitHub. A couple of years before that, Homebrew used GitHub as a de facto CDN, and that was a huge problem too.
Nowadays it is easy to imagine how to scale out a service like Twitter or YouTube or Facebook because everything has been done before, but that's not true of Git: Git has never scaled like this before, and there are very few examples of services with GitHub's characteristics.
Recently there was a tweet about how GitHub PR diffs had 10 React components PER LINE, and how they optimized that down to only 2 React components per line or something.
> To summarize, for every v1 diff line there would be:
I'm asking about the infrastructure; obviously they chose, for some reason, to make my computer fans turn on to show some red and green lines on a text file.
I can't speak to the last few years since I left, but over the many years I was there, the git storage layer was almost never the core issue - it was well designed by infrastructure-minded nerds who leveraged and improved git and replicated it really well across multiple nodes.
What always struggled was the richness of the Rails monolith itself and its backing MySQL databases - the expectation that everything links to everything throughout the product (think: issue cross references across orgs that only appear if you're able to access the remote repo, and other things like that).
Those details appear richly everywhere you look. Combine that with a general lack of understanding of, and/or focus on, performance (shipping big features is hard; shipping them with performance at scale is MUCH harder), compounded by Ruby being an easy language to get performance wrong in (object count really hurts, and Ruby makes it very easy to create many objects), and every feature adds to the performance problem, making it daunting or impossible to make things fast once they're slow.
There was a full-on year or more of making GitHub fast while I was there that just couldn't gain enough momentum to make a real dent. I remember finding and fixing an N^3 (or maybe it was N^4? something bad) in the home page activity feed - the worst thing I found, but it gives an idea. IMO it would need a fresh view of how to keep interfaces simple and how to design the data layers performantly - not adding every bell and whistle to every screen.
I hope someone at GitHub realises they are about to lose everything that was hard-earned by early GitHub - it was once a site people (myself included) looked up to for ideal availability, responsible releases, and data-driven improvement - but no more, it seems :(
I agree that there’s chaos everywhere and that judging system integration along individual axes is a better way to look at it, but then it turns out that Electron is more native than most GUI libs out there. Browser devs worked very hard and for a long time to integrate with host OSes. Browsers have native text rendering, good accessibility, IME support on all platforms, etc. All these features are usually missing from the hot new UI libs.
ImHex is a great hex editor with lots of features, but it only got antialiased fonts about a year ago, the fonts still don't look quite like the rest of the OS, and it has no accessibility integration at all.
Zed looks pretty good at first glance. No accessibility.
“Native” implies all the features that the platform provides. It's chaotic on all major platforms, but it's still meaningful.
why did you make it so complicated? magit has a `magit-wip-mode` that just silently creates refs in git intermittently so you can just use the reflog to get things back.
From what I know (correct me) magit-wip-mode hooks into editor saves. UNF hooks into the filesystem.
magit-wip-mode is great if your only risk is your own edits in Emacs. UNF* exists because that's no longer the only risk; agents are rewriting codebases/docs and they don't use Emacs.
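Under the hood, both boil down to the same plain-git trick; a rough sketch (refs/wip/snapshot is an illustrative ref name, not what either tool actually uses):

    # snapshot the worktree and index without touching HEAD or refs/stash
    snap=$(git stash create "wip snapshot")
    # anchor the snapshot to a ref so it stays reachable and survives gc
    [ -n "$snap" ] && git update-ref refs/wip/snapshot "$snap"
    git log refs/wip/snapshot    # recovery point if the tree gets clobbered

The difference is the trigger: magit-wip-mode fires on editor actions, UNF on filesystem events, which also catches writers that never go through Emacs.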
of course not, but it can often give a plausible answer, and it's possible that answer will actually happen to be correct - not because it did any - or is capable of any - introspection, but because its token outputs in response to the question might semi-coincidentally become token inputs that change the future outputs in the same way.
> GitHub Enterprise Server customers should upgrade immediately - at the time of this writing, our data indicates that 88% of instances are still vulnerable
> Upgrade to GHES version 3.19.3 or later
https://docs.github.com/en/enterprise-server@3.19/admin/rele... :
> Enterprise Server 3.19.3 - March 10, 2026
88% of on-prem customers haven't applied a critical security fix from 7 weeks ago; that seems ... bad.