GitHub mirror compromise incident report (gentoo.org)
311 points by amaccuish 11 months ago | 114 comments

He should have sat on the password. He should have watched for PRs, quietly pushed his own commits right after they received approval, and then merged. Instead, he panicked and kicked out all the maintainers, who noticed the intrusion only 10 minutes after he gained access. And all he did was add `rm -rf /*` to build scripts, and the N-word to the readme.

The malicious commits:

He likely wasn't trying to do serious damage so much as trying to have fun.

Putting "rm -rf /*" in every ebuild seems like a pretty clear indication of malicious intent.

Can't really picture anyone doing that as a "just trying to have fun" thing.

Yes, but malicious intent for the purpose of having fun watching the reaction. Not malicious intent for the purpose of personal gain, long term access, or government intelligence.

So chaotic neutral, if we're going by D&D?

More chaotic evil. Personally I'm pretty close to chaotic neutral - varies over time, sometimes true neutral, sometimes chaotic good - and just outright attempting destruction of unknown third parties seems pretty far towards the "evil" side of things.

I would say destroying random things for laughs is chaotic neutral. You do chaotic things not for anyone's pleasure or pain; he isn't doing it for a specific reason, but because he can. That's neutral.

Chaotic evil would be destroying out of greed or hate.

Hmmm, dunno. The only evidence being reported is that they locked out the project admins then attempted to destroy everything they could.

To me, that's not a "neutral" thing. It seems like it was only luck that the rm-ing didn't work, else there would be a bunch more unhappiness.

People do run Gentoo on production systems. Though they obviously shouldn't pull straight from upstream without some real testing before deployment, in the real world it does get done.

Hmmm, thinking about this bit more:

> He isn't doing it for a specific reason but because he can. That's neutral.

If the action is just to do something harmless (`echo "Could have rm -rf'd you there!"`), then sure, it could be seen as neutral. But it was clearly an attempt to destroy other people's stuff and work. That really doesn't seem very neutral to me. :)

Well, it's arguably still personal gain.

It's just of the transient "I had fun from it" kind, instead of something with more tangible rewards (as you mention).

If it was just someone having a lark, there are plenty of ways to do that other than literally attempting to destroy everything they can access. ;)

It's the sort of "fun" that can land the malicious actor in prison.

Almost anything can land you in prison for years nowadays. But that doesn't mean there's no difference in impact. Gentoo can count themselves extraordinarily lucky precisely because there is a difference between getting hit by a troll and getting hit by a white-collar criminal.

Having "fun" by ruining quite a few people's day.

Malicious is malicious.

Looks like a student prank. I just don't know how many files have been deleted in jokes like this during people's education. :)

He forgot --no-preserve-root for it to really take effect properly.

I believe rm -rf /* will work without --no-preserve-root

GNU rm I believe requires the --no-preserve-root flag these days, to prevent that command from happening by accident

The shell expands `/*`, not rm.

It expands to /bin, /etc, /lib and so on, which are not covered by the sanity check.
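This is easy to demonstrate in a sandbox; the directory names below are made up for illustration:

```shell
# The glob is expanded by the shell before rm ever runs, so rm would see
# "/bin /etc ..." as its arguments; GNU rm's --preserve-root check only
# matches the literal argument "/". Shown safely in a temp directory:
sandbox=$(mktemp -d)
mkdir "$sandbox/bin" "$sandbox/etc"
echo "$sandbox"/*   # the shell, not rm, produces the expanded list
```

Running this prints the two subdirectories as separate arguments, which is exactly what `rm` would receive.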

Forcing any commit to be a PR merged by another dev would have solved it.

While I agree this is a good practice, it wouldn't have helped much. He had organization-level admin access and could've easily added a second dev account to accept them by himself.

This part of the discussion started from thinking about an attacker whose goal is for the malicious code to go undetected.

I dunno about "solved", but it would have helped, sure. I bet you could get a commit in soon enough before someone else merged that they wouldn't notice the extra commit. Or even rewrite history to add your code to the last real commit.

Do you know of any companies that do that? That seems like a big hassle.

This is standard practice where I work due to SOX compliance. (No pushing directly to master, all PRs need at least one other person's approval). In practice it's not an issue, since PRs are good practice anyway.

Yes but you can push to PRs after approval and then merge them.

You could revoke push permissions on that branch after the PR is requested though - there are probably tools that do that already.

At mine, every feat/fix/refactor is a new branch, which then requires 2 approvals to get merged, among other things (style guide enforced via linting, minimum test coverage threshold, etc.).

Tbh it's cool, coming from a previous job at a startup where version control meant just zipping the project from time to time.

Same here. All code goes in a Pull Request, and requires at least 1 approval.

Since Pull Request review is a high priority activity there's no "bottleneck", and you double the bus factor for free + prevent bad things from happening.

Yeah but you can still push commits to a PR that has received approval, and the approval is not revoked.

This is what happens where I work as well.

I don't know of company-wide policies like that (though I'm sure that they exist -- and I do know of individual teams that have such policies), but I do know of many projects which have such policies (for instance, all of the Open Container Initiative projects require two approvals from maintainers other than the author).

The company I am working for does that at least for the "junior" members on their core project. There is a bot that checks the tests pass and a senior dev has to approve the review.


I don't think it's appropriate to give advice to bad actors. Certainly everyone should be aware that silent attacks can and do occur. However it seems like a bad idea to post on a public forum ideas for how to better inflict damage.

Security through obscurity never works.

We should all be talking about the worst things that can be done, so we can make sure we are protected from them.

Good security should not depend on obscurity, but it does not mean that security through obscurity never works. It's still better than complete transparency.

> It's still better than complete transparency.

I consider it worse, since it's too easy for people to become content with it.

> I consider it worse, since it's too easy for people to become content with it.

It's not just that.

For a given vulnerability, there is an amount of time before the good guys discover it and fix it, and an amount of time before the bad guys discover it and exploit it. Obscurity makes both times longer.

In the case where the good guys discover the vulnerability first, there is no real difference. In theory it gives the good guys a little longer to devise a fix, but the time required to develop a patch is typically much shorter than the time required for someone else to discover the vulnerability, so this isn't buying you much of anything.

In the case where the bad guys discover the vulnerability first, it lengthens the time before the good guys discover it and gives the bad guys more time to exploit it. That is a serious drawback.

Where obscurity has the potential to redeem itself is where it makes the vulnerability sufficiently hard to discover that no one ever discovers it, which eliminates the window in which the bad guys have it and the good guys don't.

What this means is that obscurity is net-negative for systems that need to defend against strong attackers, i.e. anything in widespread use or protecting a valuable target, because attackers will find the vulnerability regardless and then have more time to exploit it.

In theory there is a point at which it may help to defend something that hardly anybody wants to attack, but then you quickly run into the other end of that range where you're so uninteresting that nobody bothers to attack you even if finding your vulnerabilities is relatively easy.

The range where obscurity isn't net-negative is sufficiently narrow that the general advice should be don't bother.

> obscurity is net-negative for systems that need to defend against strong attackers

If that's the case, why doesn't the NSA publish Suite A algorithms?

> If that's the case, why doesn't the NSA publish Suite A algorithms?

The math on whether you find the vulnerability before somebody else does is very different when you employ as many cryptographers as the NSA.

They also have concerns other than security vulnerabilities. It's not just that they don't want someone to break their ciphers, they also don't want others to use them. For example, some of their secret algorithms are probably very high performance, which encourages widespread use, which goes against their role in signals intelligence. Which was more of a concern when the Suite A / Suite B distinction was originally created, back when people were bucking use of SSL/TLS because it used too many cycles on their contemporary servers. That's basically dead now that modern servers have AES-NI and "encrypt all the things" is the new normal, but the decision was made before all that, and bureaucracies are slow to change.

None of which really generalizes to anyone who isn't the NSA.

A lot of the Suite A algorithms have also been used for decades to encrypt information which is still secret and for which adversaries still have copies of the ciphertext. Meanwhile AES is now approved for Top Secret information and most of everything is using that now. So publishing the old algorithms has little benefit, because increasingly less is being encrypted with them that could benefit from an improvement, but has potentially high cost because if anyone breaks it now they can decrypt decades of stored ciphertext. It's a bit of a catch 22 in that you want the algorithms you use going forward to be published so you find flaws early before you use them too much, while you would prefer what you used in the past to be secret because you can't do anything about it anymore, and the arrow of time inconveniently goes in the opposite direction. But in this case the algorithms were never published originally so the government has little incentive to publish them now. Especially because they weren't publicly vetted before being comprehensively deployed, making it more likely that there are undiscovered vulnerabilities in them.

If we assume for the moment that there are no ulterior motives:

Cryptography is the keeping of secrets. Obscurity is just another layer of a defense in depth strategy. Problems occur when security is expected to arise solely from obscurity.

Because they are the adversary.

I actually agree, sometimes.

But you've got to do exercises thinking out the worst cases (what an attacker could do if they didn't make any "unforced errors") in order to think about defending against them (i.e., to think about security at all, which nearly every dev has to do).

Which is what the above was. We cannot avoid thinking through the best case for the attacker, in public, if we are to improve our security chops. It's not "advice for the attacker".

For widespread and international software like this, it's actually a step backwards not to openly discuss how security can be potentially beaten.

This "advice" is even written on the report itself:

  The attack was loud; removing all developers caused everyone to get emailed.
  Given the credential taken, it's likely a quieter attack would have provided a
  longer opportunity window.

There will always be bad actors of various levels of competency. Should the public only be aware of the simplistic ones, or should we make them aware of the worst-case scenario so they can be aware of the risks of poor security?

Serious question.

Something to note here: "Evidence collected suggests a password scheme where disclosure on one site made it easy to guess passwords for unrelated webpages."

For any folks out there who've been using or promoting formula-based passwords, this is the potential impact: a leak on one site can be leveraged by an attacker towards other sites you use.

The only thing a formula-based password strategy might protect against is a bulk automated attempt to check whether leaked username/password combos are used on other sites. But if any human actually wants to hack you, a formula-based password approach is almost certainly useless.

If the formula is to salt and then hash a string, that could be one very strong formula.

If the formula is to go “hacker$$$$news_” then you’re SOL.

There was a brain password hash developed by Blum IIRC

I suppose it's only as good as the formula itself, and at least it protects you from automated attacks. Still, I wouldn't advise anybody to go with formula-based passwords; just use whichever password manager you prefer and generate unique, strong passwords per website.

Maybe one day the web will get its shit together and let us log in everywhere with a certificate that we'll be able to securely store on an HSM and easily revoke and update if necessary, but in the meantime it's the best we can do.

Most of the day-to-day issues with certificate auth aren't actually problems with "the web" so much as broader user-workflow or endpoint-support issues. Here are a couple:

* How does a user log in from a new device?

* Especially if they've lost / broken their original device?

* What happens if users want to log in from a shared computer (think public library)?

* Do all the OSes/browsers that users are using actually support certificate management and auth?

Message signing with cryptocurrency wallets (especially those that integrate with the browser, like web3.js-compatible ones, e.g. MetaMask) might be a good avenue in the future, as these tools may become widespread for other reasons and could fix web authentication in the process.

Users would find the process of moving their identity between devices tedious, but if it lets them synchronize their whole digital lives, it would seem worth the effort.

Hence using an HSM, e.g. something like a Yubikey.

Of course, they can be lost or stolen, which is why certificate revocation is necessary, and they would need broad adoption among OS and browser vendors.

Plus you still have the issue of proving your identity if it's lost/stolen.

Hardware tokens don't really solve any of the problems I listed. You still have to deal w/ devices that don't do cert auth from a Yubikey (iOS being a great example), so as a site operator you don't get to avoid supporting new device / temp device workflows.

Hardware tokens can allow individual power users to solve issues around multiple devices, lost devices, etc themselves, but unless you're suggesting porting 100% of users to hardware tokens, it doesn't change the workflows a site must support.

I believe Yubikeys work from iOS now. I have not tried it, however.

I'd like to add "truly random" to

> use unique strong passwords per-website.

... use unique, truly random passwords per-website.

"Strong" is ambiguous and hard to explain how to do right. Random means, in practice: "don't come up with one yourself, but let a tool generate one for you".
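For example (one tool of many; the 18-byte length here is an arbitrary choice):

```shell
# Draw 18 bytes from the OS CSPRNG and base64-encode them into a
# 24-character password, instead of inventing the characters yourself.
openssl rand -base64 18
```

A password manager does the same thing for you and remembers the result.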

You're looking for "pseudorandom", not truly random. True randomness is an unnecessary and unrealistic bar to impose for typical cryptographic purposes. Not only is using true randomness more computationally and logistically expensive, it's difficult to implement correctly and safely. If you have a CSPRNG with sufficient pseudorandomness, it's more important to secure and optimize other parts of the system than it is to get stuck in local maxima with randomness fetishization.

To nitpick slightly, I think "truly random" is the wrong phrase here. Computers can't generate such numbers (only pseudo-random). What they can do is generate unpredictable passwords. Unpredictability is the feature we want in a password, whether that happens through "randomness" or something else doesn't matter.

Of course computers are far better at coming up with unpredictable passwords than humans are.

RDRAND is supposed to generate very true random numbers

Well, either an RNG generates true random numbers or it doesn't; there's no room for "very" here. From Intel[0]:

> With respect to the RNG taxonomy discussed above, the DRNG follows the cascade construction RNG model, using a processor resident entropy source to repeatedly seed a hardware-implemented CSPRNG. Unlike software approaches, it includes a high-quality entropy source implementation that can be sampled quickly to repeatedly seed the CSPRNG with high-quality entropy.

So it's a cryptographically secure pseudo random number generator that takes entropy from the processor. It's not a True Random Number Generator. And again, if it does work well for cryptography, it's the unpredictability that matters, not the randomness itself.

[0] https://software.intel.com/en-us/articles/intel-digital-rand...

From the link you provided:

> This method of digital random number generation is unique in its approach to true random number generation in that it is implemented in the processor’s hardware

> The all-digital Entropy Source (ES), also known as a non-deterministic random bit generator (NRBG), provides a serial stream of entropic data in the form of zeroes and ones.

> The ES runs asynchronously on a self-timed circuit and uses thermal noise within the silicon to output a random stream of bits at the rate of 3 GHz

What on earth would classify as TRNG if not above?

Just FYI: I checked, and NIST SP 800-90B defines NRBG as follows:

> Non-deterministic Random Bit Generator (NRBG):

> An RBG that always has access to an entropy source and (when working properly) produces outputs that have full entropy (see SP 800-90C). Also called a true random bit (or number) generator (Contrast with a DRBG).

As a user, how do I know if a tool is generating a "truly random" password?

You probably don't want a certificate. I mean, first of all the thing you're protecting in an HSM is a private key, not a certificate, but beyond that...

A certificate means now Youporn, your employer, and your bank all share the same way to identify you. As does Facebook, and five thousand shady advertising companies.

Something like Web Authn / U2F is better here. With this technology sites don't get any meaningful identity, just confirmation that you still have the same token as before. This also means if you find somebody's token you learn nothing from that, you'll have no idea where to return it and may as well just start using it yourself.

> "let us login everywhere with a certificate that we'll be able to securely store on a HSM and easily revoke and update if necessary"

Would U2F be a part of the answer here? It is by far the most user-friendly implementation I've seen, and it's already supported as a 2FA token on many sites.

Yes, it would very likely completely prevent the attack. So would TOTP, though. In this day and age, if you don't enable 2FA on all sites that support it, you're just irresponsible (especially if you're in a position as interesting as Gentoo maintainer).

Passwordless authentication is one of the most refreshing UX wins for blockchain backed applications.

Formula based schemes are inherently flawed as you still have to remember some kind of nonce to handle forced password rotation.

I disagree.

In this case "evidence collected suggests a password scheme where disclosure on one site made it easy to guess passwords for unrelated webpages" (emphasis mine), however it is trivially easy to create formula-based passwords that are not immediately obviously related to the site for which they were created.

Take some combination of:

  Number of characters in the URL
  Number of syllables in the URL
  Number of characters in the first / last syllable
  Index in the alphabet of the Nth character of the first / last syllable
  Nth character in the URL
  Character offset in the URL according to the number of syllables
  Nth symbol on the number key row according to number of characters in the URL
  Alt-key symbol for the URL character in the URL
  Etc, etc, etc.
Intersperse memorable seeds.

It would take serious effort to even detect the presence of a formulaic password from a single leaked unhashed / unsalted password, never mind to determine what the formula might be.

>> from a single leaked unhashed / unsalted password

These days it’s becoming more common to have been compromised from multiple sources. And unfortunately storing passwords securely seems to still be too hard for a lot of companies. The more examples of your pattern an attacker has, the easier to work out your pattern.

With the frequency of new breach announcements I feel like you’re going to be fighting a losing battle. For now the best and safest solution is unique truly random passwords per site.

You could still pass your formula-based password through a cryptographic hash function and be fine. Though it's not very useful for device logins and, as mentioned in another comment, probably tricky when you're forced to rotate passwords.
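A minimal sketch of that idea (the `derive` helper and the master secret are hypothetical; a real scheme should use a deliberately slow KDF like PBKDF2, scrypt, or argon2 rather than a single bare hash):

```shell
# Derive a per-site password from a master secret plus the site name.
# One SHA-256 pass is shown for illustration only; it is deterministic,
# so the same inputs always reproduce the same password.
derive() {
  printf '%s:%s' "$1" "$2" | sha256sum | cut -c1-20
}
derive "correct horse battery staple" "example.com"
```

Note the rotation problem mentioned above: to change one site's password you'd have to add some remembered nonce to the input.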

It's worse than that, now a malicious admin of any of those sites can maybe figure out your password scheme.

To be clear, GitHub itself was NOT compromised. The password of a Gentoo contributor, who presumably wasn't using 2FA, was.

The title should probably clarify it's a "Gentoo mirror on GitHub compromise incident report"

Glad I clicked through. When I scanned the Hacker News titles earlier, I'd assumed Github had been compromised. Very misleading.

I too thought the title was about one of Github's CDN sites or something.

Title is super misleading.. should be something like

Incident Report for a Gentoo GitHub contributor account being compromised

As they have now done, they could have reduced the threat exposure by requiring 2FA to join the organisation, and using a password pattern scheme does expose you to a targeted attack so ideally encouraging password manager generated passwords is probably also recommended.

But Github could probably have helped by detecting logins from unusual IPs for that user - i.e. a login attempt from an IP they haven't logged in from before, and required something like email verification too. Although if they were using an easy to guess pattern, then likely the admins email could have been compromised too.

Edit: GitHub could also warn people whose accounts have admin access to organisations if they don't have 2FA enabled.

A small annoyance: the clickbait press had a field day with this, from positive coverage, which I appreciate[0], to claiming that putting `rm /*` in the wrong place, so that it didn't even work, is "file wiping malware."[1]

For the most part, this appeared to be the work of a teenage skiddy (given the addition of a readme with a racial slur as the text), and not any actual sophisticated attack.

[0] https://www.techrepublic.com/article/gentoo-stops-github-rep...

[1] https://www.bleepingcomputer.com/news/linux/file-wiping-malw...

Always require 2FA for something as important as your code base.

This wasn't the code base (save the systemd stuff which increasingly is not being used by gentoo users), it was merely a github mirror.

It was the codebase. It's a mirror, but if the attacker had been more subtle and pushed smaller changes instead of force-pushing, anybody using the mirror would have pulled those changes down.

Suggesting that this is "merely" a mirror downplays how seriously a more sneaky attacker could have harmed users of the project. Would similar statements downplaying the attack be made if this had been https://mirrors.kernel.org/ ?

They wouldn't have even had to push smaller changes: there are ways in git to make a commit that basically overwrites the current state of the branch but doesn't require a force push. We use it where I work when promoting branches (e.g. to the production branch), so that we never have to worry about merge conflicts or force pushing. It's a merge strategy that basically says "when merging, just use parent 1, and pretend parent 2 doesn't exist when calculating files".
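That sounds like git's built-in `ours` merge strategy; a minimal sketch of the mechanics in a throwaway repo (branch and file names are hypothetical):

```shell
# Create a repo with a "production" branch diverging from "main".
cd "$(mktemp -d)" && git init -q -b main repo && cd repo
git config user.email demo@example.com && git config user.name demo
echo v1 > app.txt && git add app.txt && git commit -qm "v1"
git checkout -qb production
git checkout -q main
echo v2 > app.txt && git commit -qam "v2"
git checkout -q production
# "-s ours" records main as a second parent but keeps production's
# tree untouched: no force push needed, and no conflict is possible.
git merge -q -s ours main -m "merge main (ours strategy)"
cat app.txt   # still v1
```

The merge commit has two parents, but its file contents are exactly parent 1's, which is what lets an attacker (or a promotion pipeline) rewrite effective state without a force push.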

Actually, it would be much less bad if it were. Gentoo's GPG verification still doesn't work quite right, but most users of the kernel for critical purposes do GPG verification.

While true, many apps, including github, make this problematic for something like an admin account.

2FA, by its nature, is bound to one single piece of software, tool, or piece of hardware[1].

This limits the access of e.g. an administrator login to one person and her personal phone only. A bus factor that is unacceptable to many.

Services like AWS allow a more granular setup, but it's still complex. Linode, DigitalOcean and even docker.io, last time I looked, make it impossible to share the admin account, since they don't allow multiple 2FA devices to be active on one account simultaneously. And if they did, that would greatly lower the security of that account (still better than no 2FA, though).

[1] Some 2FA apps, like Google Authenticator (or one of the much better open-source alternatives), make it possible to share a 2FA secret across devices, but that is both insecure and hard.

Github allows multiple administrator accounts.

It's very simple to do this on Github.

Github supports U2F (two-factor auth). Please use it.

GitHub's SMS-or-TOTP requirement for U2F use is a shame: why would you use U2F if you have already set those up? And if you want to use U2F because you don't trust your phone as a security device (hello, Android users), that's not good. Or maybe you don't want to mix your personal phone with work IT infra; it's a bad equation there too.

I guess you could still use a software TOTP implementation on your workstation to fool GitHub, but then you are not getting the additional security from U2F, because the TOTP codes are a substitute for the U2F token.

What if you just throw away the TOTP secret right after you get it and verify it on Github? You don't have to use your phone either. Just because you used TOTP to set up u2f on Github doesn't mean you have to store the TOTP secret indefinitely.

Yes, I guess you could navigate it this way. But clearly this is not something Github wants users to do, so I wonder if this way is bad somehow. Maybe there is no backup mechanism to recover from u2f token loss other than the old 2fa mechanisms, for example.

> - action-item: review 2FA requirements for GitHub org

> - done: Gentoo GitHub Organization currently requires 2FA to join.

Are there any generic incident response plans or playbooks for FOSS projects? Is Gentoo's documented anywhere?

Gotta say, a bit impressed by the response.

I don't normally post on hackernews (so I've made a throwaway). I'm Antarus from the report.

We don't have a plan for Gentoo. I work for Google and I mostly used a vaguely similar plan to the Google incident plan.

1) Communicate early. For publicly visible stuff (defacement was very obvious) you want to get a message out quickly, before a natural narrative forms.

2) Communicate often.

3) Mitigate the problem first (e.g. prevent the malicious content from being downloaded), then investigate.

4) Assign roles to people and be clear who is responsible for what.

5) Collect lots of data.

It's pretty hard because that's very generic. The security workflow we have for a couple of customers essentially has these steps:

1. Identify a security incident. This is the hardest one usually, unless they are blunt and noisy like in this case.

2. Shut down and firewall everything involved in the security incident, and shut down and firewall everything that has a high probability of being immediately involved or targeted.

- For example, if you have a cluster of several identical application servers and one of them is compromised, all application servers of the same kind in the same cluster must be shut down and firewalled off to the VPN.

- Identical application servers in other clusters are handled as per 2.1, but with less hesitation to react strongly.

- If you have a database only available inside a private network and you have identified suspicious activity, it is reasonable to shut down and firewall everything with access to this database.
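As a sketch, "firewalled off to the VPN" might look like the following in iptables-restore format (the 10.8.0.0/24 management-VPN subnet and the file name are hypothetical):

```
# emergency-isolate.rules -- drop everything except loopback and the VPN
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -s 10.8.0.0/24 -j ACCEPT
-A OUTPUT -d 10.8.0.0/24 -j ACCEPT
COMMIT
```

Applied with `iptables-restore < emergency-isolate.rules` (as root), this leaves the incident responders a path in while cutting off everything else.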

2.1 Increase attention on similar or connected systems in case of lateral movement.

- For example, our productive clusters have access to our monitoring setup. If a cluster is accessed in a creative way by a user, we'll revoke the credentials and certificates of this cluster. However, an attacker might have obtained information and utilize that to pivot his attack onto the monitoring cluster or the management infrastructure, so we need to pay attention to that.

3. Communicate to customer support and management about the incident.

- Yup. This is intentionally after axing productive systems. If senior persons are sure about a security incident, we don't want to wait for management to kill the malicious access. We want the senior guys to shutdown the security incident and ask for forgiveness later. Fast isolation beats bureaucratic process.

4. Identify vector of entry and eliminate it.

- This is normal post-work. What did they do, how did they get in, patch or report. This tends to be split into immediate mitigation (in Gentoo's case, changing passwords and implementing 2FA, or disabling software features in an incident we had some time ago) and further mitigation down the road, such as the audit logs they are planning.

5. Resume operation of service coordinated with customers.

- This is something we learned some time ago. Some of our customers have their own dedicated systems, and from there they have a word how to properly resume operation after an incident. In one case, one department of a customer had an unannounced penetration test against a system without coordinating with the security department of the customer and without coordinating with us. All of that went to hell and back. We were forced to leave the system down for almost a month due to contractual bindings and catfights between teamleads of that customer on our bug tracker. That was fun.

I'd sleep better at night if every organization were this transparent.

Mirror is the important word here.

I'm pretty sure it didn't impact anybody.

Also, the type of the malicious content (rm -fr /*) is "easy" to spot, by which I mean there are no rootkits or remotely exploitable flaws added.

Not very nice to be a victim of that but at least there is no doubt whether you're vulnerable or not.

The timeline also suggests that the malicious content was made after the break-in and not planned beforehand?

Really it was just a major inconvenience to the contributors preferring to use GitHub to merge their work in. There was also a repository that was only on GitHub, but I think one of their action items is to change that. Overall, nothing overly bad indeed.

From the incident report it sounds like the gentoo/systemd GitHub repo is not a mirror: the official repo is on GitHub and they're now looking to mirror it onto Gentoo infrastructure.

Does anybody know why this is the case?

It probably just happened that way. To note, systemd isn't as popular/required on Gentoo; most people I know still use openrc. There is even a set of hacks (that I very thankfully use!) to use Gnome 3 on a non-systemd system[0].

[0] https://github.com/dantrell/gentoo-project-gnome-without-sys...

> The main Gentoo repositories are kept on Gentoo hosted infrastructure and Gentoo mirrors to GitHub in order to "be where the contributors are."

Doesn't sound to me as if it's not a mirror.

Though I guess you could say as their CI depended on this mirror, it had a higher status than normal mirrors.

I'm specifically talking about https://github.com/gentoo/systemd

From their incident report (under https://wiki.gentoo.org/wiki/Github/2018-06-28#What_went_bad... ): "The systemd repo is not mirrored from Gentoo, but is stored directly on GitHub."

And from their Action Items: "mirror systemd repo on git.gentoo.org"

I'm curious why they seem to treat this repo differently from the others, using GitHub as authoritative and adding a mirror to git.gentoo.org rather than making git.gentoo.org authoritative and mirroring to GitHub.

GitHub supports two-factor authentication; maybe make it mandatory for all admins?

> Clearer guidelines that even if users got bad copies of data with malicious commits, that the malicious commits would not execute.

Eh, that seems unwise: suggesting, based on security protections you merely think you're "sure" about, that users need not worry about having a copy with malicious code.

Not enabling MFA is just unacceptable at that level...

No 2FA?

Thanks, we've updated the link from https://lwn.net/Articles/759046/ which points here.


GitHub was not at fault. Someone gained access to an account using a disclosed password scheme.

Also, the Microsoft acquisition of GitHub hasn't happened - they have only agreed to acquire GitHub.

What mitigations do you use to protect systems that automatically update after every git push?

Don't auto-update production like that? If a change will affect all users, manually review it, and as another commenter said, use GPG signatures. Have every prod committer use MFA too.
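On the GPG side, git can sign every commit so that a deploy pipeline can refuse unsigned history; a minimal local sketch (key generation and distribution are omitted, since signing requires a GPG key to be set up):

```shell
# In a throwaway repo, turn on commit signing. Verification would then
# happen with `git verify-commit <sha>` or `git merge --verify-signatures`
# before anything is deployed.
cd "$(mktemp -d)" && git init -q .
git config commit.gpgsign true
git config --get commit.gpgsign
```

The deploy host additionally needs the trusted public keys in its keyring, otherwise verification has nothing to check against.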

I have seen a lot of people describe their continuous delivery setup, where code is checked in and moves into production after passing tests. I wasn't sure if they had a way to deal with this type of issue, or if I am misunderstanding their process somehow.

Would GPG signatures suffice?
