I take responsibility for what happened here. My RubyGems.org account was using an insecure, reused password that had leaked to the internet in other breaches.
I created that account probably over 10 years ago, so it predates my use of password managers, and I haven't used it much lately, so it never surfaced in a 1Password audit or anything.
Sometimes we miss things despite our best efforts.
Rotate your passwords, kids.
1) Find high-value target libraries
2) Grab the usernames of accounts with push access
3) Check those against password dumps
I feel really stupid about this, but, like I said, it was an oversight. I apologize and will try to do better.
The best approach is probably to disable the account completely until an interactive login is made and a password reset can be forced, but some would be up in arms about the inconvenience caused: you can't just allow a simple reset, as the login could be coming from an attacker rather than the original user, so an extra channel would be needed to verify identity. You might just have to leave the account locked forever and expect the user to create a new one. But now you're left with the old account and its content, which may be a dependency of many projects that would then break, unnecessarily so if there never was a login by a nefarious type.
You wouldn't lock the account forever. The point is to establish that the person whose password was compromised knows about it, that the password is not the only factor used to regain access to the account, and that your service (rubygems) and its downstream users are not compromised as a result of the breach.
Any groaning about the inconvenience caused by disabling account access until the password is changed can simply be shrugged away in favor of security concerns, with a link to this story about rest-client.
By the time you have learned the user's plaintext password, their account may already have been compromised. There's a case to be made for disabling all downloads of any gems from that account that might be compromised, until you've verified they aren't. That might be over the top, especially for popular projects, as now we are talking serious inconvenience affecting potentially thousands or more downstreams.
It's a sticky situation, since you don't really know how long that password has been in the open for hackers to use and abuse once you've discovered it in a password dump.
(source: I work for Heroku Support)
RubyGems has a responsibility to its users and community here. It (like npm) needs to take this stuff seriously.
But simply forcing a password change at the next login after detecting an insecure password would not unduly burden anyone and would be better than doing nothing.
Sometimes you have to have your priorities straight. If you found the password, someone else can find it.
The only time you would have access is when the user logs in, so for rarely logged in users you would have to proactively reset their password or cross your fingers.
But even with cost-12 bcrypt hashing, you should be able to fairly cheaply attack a list of 2,000 bcrypted passwords with a million-entry database of leaked e-mail/password combos in a GPU-month.
Probably easier to force a password reset on everyone and then do the checking on password change, although you need to be careful there not to send the plaintext password anywhere.
EDIT: uhm, wait, if you've got the e-mail address in the dump then there's only one user it can match, so just grab their salt, hash the dumped password, and check it. That million-entry database should then be checkable in a bit over half an hour...
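Rough numbers behind that estimate (a sketch only; the bcrypt rate is an assumption and real GPU rates vary widely):

    # Back-of-the-envelope check, assuming ~500 cost-12 bcrypt hashes/sec on one GPU
    rate      = 500.0        # hashes/sec -- an assumption, varies a lot by hardware
    users     = 2_000        # accounts with push access to popular gems
    dump_size = 1_000_000    # leaked e-mail/password combos

    # Naive: try every dump entry against every user's salt
    naive_days = users * dump_size / rate / 86_400
    puts "naive: #{naive_days.round} GPU-days"    # ~46 days, about a GPU-month or so

    # E-mail-matched: one bcrypt evaluation per dump entry
    matched_min = dump_size / rate / 60
    puts "matched: #{matched_min.round} minutes"  # ~33 minutes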
Wasn't challenge/response / SRP authentication debunked?
The larger question is whether gem/npm/cargo-style package managers are such a terrific idea in the long run. The security implications are pretty serious.
that's.... rare. Well done.
Here is a summary of the exploit, re-pasted from a great comment written by @JanDintel on the GitHub thread:
- It sent the URL of the infected host to the attacker.
- It sent the environment variables of the infected host to the attacker. Depending on your set-up this can include credentials for services that you use, e.g. your database or payment service provider.
- It allowed the attacker to eval Ruby code on the infected host. The attacker needed to send a cookie, signed with the attacker's own key, containing the Ruby code to run.
- It overloaded the #authenticate method on the Identity class. Every time the method gets called, it sends the email/password to the attacker. I'm unsure which libraries use the Identity class though, maybe someone else knows?
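For those less familiar with Ruby, that "overloading" is plain runtime monkey-patching: any loaded gem can reopen or wrap methods on application classes. A sanitized sketch of the mechanism (the Identity class and #authenticate come from the report above; this body just logs instead of exfiltrating):

    # Any gem that gets loaded can wrap methods on an application class at runtime.
    module AuthenticateHook
      def authenticate(email, password)
        # A malicious version would send email/password to the attacker here.
        warn "authenticate called for #{email}"
        super
      end
    end

    # Install the hook in front of Identity#authenticate, if the class exists.
    Identity.prepend(AuthenticateHook) if defined?(Identity)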
So... it potentially compromised your users' passwords AND (if you were on Heroku or similar, like many Rails apps are) gave system-level access to all your attached data stores. About as bad as it gets.
The first hijacked version was released on August 13th.
So any gem with more than 50,000 downloads should force the gem maintainer to have MFA set up before they can publish a new version or do anything with that gem.
Having MFA is not about protecting gem maintainers; it's about protecting users. So gem maintainers should not be allowed to be careless with security by going without MFA. It's not their choice to make.
Also worth noting that whilst MFA helps, it's not a panacea as MFA isn't generally compatible with automated CI/CD processes, so API keys will still be required, and can be leaked/lost/stolen.
So if an attacker compromises the API key used by that pipeline, they get the rights to push to Rubygems.
It'd depend on the individual software library and of course as a consumer of many libraries you generally will have limited or no visibility of the practices of all your dependencies.
As a consumer of software libraries, have you ever looked into the security practices of the library author before choosing whether to use it or not?
One thing that would be inconvenient but would protect against that would be to have the API work as usual, but require an MFA login to the website to approve a new release (with information there listing the IP and time of the upload). That would only make sense for heavily used gems like this one, but it seems it would stop most issues?
A less invasive control might be to notify all owners when a new version is pushed, so they would be aware of a risk, if they weren't expecting a new release. Not perfect, but something.
2FA is generally trivial for maintainers to adopt. There is simply no excuse not to require it at this point for all new uploads. The status quo of hoping maintainers never re-use passwords, never use weak passwords, and never have their machines hacked is clearly not working, since security incidents like this keep happening every other week with Rubygems/npm/etc.
"safeguards against malicious changes to project ownership, deletion of old releases, and account takeovers. Package uploads will continue to work without users providing 2FA codes."
They are also working to enforce 2FA on uploads:
"But that's just for now. We are working on implementing per-user API keys as an alternative form of multifactor authentication in the setuptools/twine/PyPI auth flows. These will be application-specific tokens scoped to individual users/projects, so that users will be able to use token-based logins to better secure uploads. And we'll move on to working on an advanced audit trail of sensitive user actions, plus improvements to accessibility and localization for PyPI. More details are in our progress reports."
These guys with libraries that have a massive user base really have no excuse besides laziness for not having 2FA on their accounts.
We all just hope that nothing bad will happen or that it will be noticed fast enough.
Accounts get compromised, maintainers quit and transfer their project, bad actors might even pay the dev of some lesser-known dependency…
I have no easy solution for this problem, and of course I too use external dependencies in my projects, but it feels like it's only a matter of time until disaster strikes, and most of us will just ignore the problem till then.
One of the most ridiculous things about most package repositories (npm, rubygems) is how opaque they are. It's only a mere courtesy to link to the GitHub repo from a package, and a gentlemen's agreement that it actually represents the code that will get run with 'npm/gem install'. There are various ways to go about this, like the package repo requiring linkage to an actual git repo that it builds from.
Commonly pitched solutions like 2FA are useful, but they do nothing to stop a malicious actor who actually has publish rights, as in the trivial attack where you simply offer to take a project off someone's hands. But diffing releases and reading source code should at the very least be absolutely trivial.
I did a talk for AppsecEU back in 2015 on this topic and found good material talking about it as a risk going back years before that...
And even in isolated environments I find myself running code outside of the container for testing. Usually a quick script to test some package's functionality or opening a REPL to run something or running a code-generator (manage.py, artisan, etc). That's all it takes for the malware to break out of the isolation and attack your machine.
Is there any fix at all? Aside from something like multiple-account code signing/release verification I cannot think of something that couldn't be compromised in some way.
At the end of the day you have to trust someone and trust that they trust someone else. The problem is you have no way of vetting the entire dependency chain. You may have reviewed gem/package A but you aren't going to (realistically) review all of its dependencies and those dependencies' dependencies.
At this point it's all a "many eyes" approach. And it seems to be working relatively effectively.
There are a number of possible technical mitigations: maintaining internal package repositories, code review of key libraries, enforcing package signing and checking signatures, etc. But all of them increase costs and decrease development speed, so they're not adopted that heavily.
There are also possible mitigations at a legislative/policy level, but they would be so deeply unpopular that I'm sure they'd never pass muster in most countries.
Taken at face value, yes, obviously more people looking at a specific piece of code will make it better. But this does not extend to the entire landscape of open source. Most developers, especially ones doing so as a hobby, would much rather work on their own new code than look at someone else's old boring code. We would rather reinvent the wheel a thousand times before touching a line of code written by someone else.
This becomes even more dire when you look at code no one wants to touch. Like TLS. There were the Heartbleed and goto fail bugs which existed for, IIRC, a few years before they were discovered. Not surprising, because TLS code is generally some of the worst code on the planet to stare at all day.
This is a major new line of attack, and web app infrastructure is critically weak against it. We rejected distro-controlled package management in favor of pip and gem and npm years ago (for good reasons), but as this sort of attack becomes much more common (which it will), we might find ourselves missing the days of strong central control.
Rubygems should have acted on the strong_password news, but missed the opportunity. I hope they can get their act together now that they are lucky enough to have a second chance before this style of attack really explodes.
All the major language repos are free at the point of use, and I don't get the impression the maintainers are exactly rolling in money, so it doesn't seem likely that they can easily ramp up on that front.
While I'm generally a fan of distribution-provided packages, they would not have helped in this case. Distributions simply lack the manpower to audit all upstream releases for these kinds of issues.
Version 1.7.0 was released to rubygems on 8th July 2014, and 2.0.0 on 2nd July 2016, so anyone who has started using rest-client or run a `bundle update` recently is unlikely to be affected.
The impact could have been significantly greater had the hijacker pushed new versions of 1.8.x or 2.x as well, so it's very fortunate the breach was spotted now.
I only have a passing familiarity with ruby gems, so I may be completely wrong.
This situation definitely adds weight to the push for 2FA by default on all rubygems.
cd ~/code # Where all my projects live
# The hijacked releases were 1.6.10 through 1.6.13, and grep's default BRE
# treats \( \) as grouping, so match the literal parens unescaped:
grep -r --include='Gemfile.lock' -e 'rest-client (1\.6\.1[0-3])' .
With the software supply chain being as complex as it is, and the large number of moving parts, we're only going to see more of these ...
- Would you pay money for access to a package repository that had good security practices? How much would you pay? Would you accept delays in library updates to allow for security checks? If so, how long a delay would be acceptable?
- Have you ever looked into the security practices of open source applications or libraries that you wanted to use and had the information you did or did not find affect your decision to use that software?
- How often do you use your inventory of all the libraries you have to periodically check on the provenance of those libraries and whether they are maintaining good security practices?
Ultimately these problems (like most) are one of incentives. It's very easy to build software very quickly using the huge number of open source components that are freely available.
Whilst speed of development and price are the primary considerations, it's not surprising that security takes a lower rung on the ladder.
Additionally, GitHub provides a bundler-audit-like service for free. And identifying issues that aren't CVEs yet seems like something scriptable, but also obfuscatable (on the attacker's end).
I don't think any team I've been on (federal or private) would pay extra for this kind of service, given the frequency with which we do updates and the amount of effort a careful developer already spends when doing them. The most recent breaches were noted well in advance of any updates we would have done.
I'd be glad to be wrong about it because I think a few more tools in this space would only help the community.
It would be nice if a service could summarize the differences between actual gem releases, though, making things like the changelog and the diff easily digestible for all the available updates (versus a scan/lint). That would let developers identify these kinds of breaches more easily than cloning and diffing.
There should be a very limited number of known external URLs that your production system needs to hit. Whitelist them on a proxy. Block the rest. Dump the blocked requests into a log. Put alerts on that log. This will catch most data exfiltration attempts and attacks such as this one. Remember, the goal is not to have perfect security; the goal is to have better security than someone else, so that someone else gets to be the chump and not you.
If there were a "published with MFA" flag on every gem release, a Bundler setting could block installing gems published without it.
Of course, this would also help attackers find targets. But maybe it's worth the trade-off?
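To make that concrete, it might look something like this in a Gemfile. To be clear, this is purely hypothetical; Bundler has no such setting today and the method name is invented:

    # Hypothetical Gemfile -- the require_mfa! knob below does not exist in Bundler.
    source "https://rubygems.org"

    # Imagined setting: refuse to install any release that rubygems.org
    # hasn't flagged as "published with mfa".
    # require_mfa!

    gem "rails"
    gem "rest-client"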
What about all the attacks where the malicious actor is someone with publish rights, like a friendly package takeover? Your proposal makes that even more effective, since now the attacker gets a nice "published with mfa" badge.
Your post was about a "published with mfa" vanity badge which I was responding to, not the merits of mfa in general.
I don't really care about a badge; I care about the information being available so that it can be used in Bundler. It's about developers being given the choice in their Gemfile to disallow installation of any gems uploaded without 2FA. But in order to do that, we need rubygems to publish that information.
It doesn't seem like it does you much good to know if a package uses 2FA except to potentially weaken your defenses. For example, any scrutiny you level at a non-2FA package should also be leveled at 2FA-enabled packages. Though I suppose there is a non-zero benefit, so I won't belabor this argument any further.
Perhaps package repositories should be nagging publishers to enable 2FA. Though poorly implemented 2FA also introduces new attack vectors like the "lol lost my phone" social engineering attack.
I'm wondering why RubyGems wouldn't implement some basic form of malware detection. This type of code shouldn't be too hard to classify.
Even simple open(), sleep(), or eval() calls could be easily obfuscated.
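For example, a scanner that greps for the literal token "eval" misses even the laziest indirection (a harmless sketch):

    # Build the method name at runtime so "eval" never appears as a literal.
    name = ["ev", "al"].join          # => "eval"
    Kernel.send(name, "1 + 1")        # => 2, same effect as eval("1 + 1")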
> However, this method of securing gems is not widely used. It requires a number of manual steps on the part of the developer, and there is no well-established chain of trust for gem signing keys. Discussion of new signing models such as X509 and OpenPGP is going on in the rubygems-trust wiki, the RubyGems-Developers list and in IRC. The goal is to improve (or replace) the signing system so that it is easy for authors and transparent for users.
But to answer your question, yes there is a system for signing gems, though it's not widely used: https://guides.rubygems.org/security/
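For reference, the moving parts look roughly like this (a sketch; names and file paths are placeholders):

    # One-time: generate a cert pair with `gem cert --build you@example.com`,
    # which writes gem-private_key.pem and gem-public_cert.pem.
    Gem::Specification.new do |spec|
      spec.name        = "example"
      spec.version     = "1.0.0"
      spec.summary     = "Example of a signed gem"
      spec.authors     = ["You"]
      # Releases built from this spec are signed with the private key:
      spec.signing_key = File.expand_path("~/.gem/gem-private_key.pem")
      spec.cert_chain  = ["certs/gem-public_cert.pem"]
    end
    # Consumers opt in to verification at install time, e.g.:
    #   gem install example -P HighSecurity

The catch, as the quote says, is the chain of trust: nothing ties a self-signed cert to the author's real identity, so verification mostly proves release-to-release continuity.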
When consuming APIs without thinking about this, we are only building up technical debt and security issues for the future.
Today's example is really bad since it targets a widely used meta-API open-source library, but how many of these issues are already present in hundreds of other obscure open-source API clients?
Then you get the delight of likely jurisdictional issues, if it turns out the attacker is not a resident of the same country as the victim that reported it.
And the same address is linked to a Malta based company that appears on the "Panama Papers".
It could be one, but it could just as easily be one of the many criminal gangs who used credential spraying to get access to accounts and then figure out what they can do with them afterwards.
If the cost of the attack is as near to zero as makes no odds, any income is profit, whether it comes from being able to compromise bitcoin-related accounts elsewhere, getting a miner to run on hundreds of servers and/or thousands of clients, or getting other details to use in a "send me bitcoin and I will/won't X" blackmail. And if there is no income from the attack, the cost of trying was still near zero.
For a long time, environment variables have been evangelized as the secure place to store credentials and the like, but that just gives third-party scripts a known place to look.
You could argue it might actually be more secure to store your secrets in a separate, custom config file that gets read into the Rails app via an initializer or something.
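A minimal sketch of that approach (file and constant names are made up):

    # config/initializers/app_secrets.rb -- hypothetical initializer
    require "yaml"

    # Read secrets from a file only the app user can read, instead of ENV,
    # so code that dumps the environment doesn't get them for free.
    APP_SECRETS = YAML.safe_load(
      File.read(Rails.root.join("config", "app_secrets.yml"))
    ).freeze

    # Usage elsewhere in the app: APP_SECRETS["database_password"]

Of course, in-process code can still read that file if it knows where to look; the gain is only that secrets are no longer sitting in the one well-known place every exfiltration script checks first.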