A full post-mortem is coming, but here's what happened from my perspective. I got mentioned on twitter about this thread while I was on the bus. I asked @evanphx to put the site into maintenance mode immediately.
Once I got on wifi, I downloaded the "exploit" gems and found that they used the Psych hack to post config/initializers/* and config/*.yml to Pastie. Luckily, none of our API keys were actually in those files - our config is a bit different since you can still run RubyGems.org on Heroku, which needs to put secret keys in the ENV.
I deleted the exploit gems permanently along with the throwaway account that posted them...that's why this post's URL is a 404.
I've reset our S3 keys just to be sure and any other keys we had on our production site. From what we know right now, no other changes on our S3 bucket have taken place, and we're going to check the logs to make sure. Once we get a real fix for this issue out, pushing gems will be enabled again.
Just a general PSA -
Please, if you find an issue like this, be nice. Tell the maintainers privately. Don't post to Reddit, HN, or a public Gist. RubyGems.org is completely volunteer run. No one gets paid to work on it. Thanks for your patience everyone.
If you want security notices to go to the Rubygems team, you have to set up a security page that tells people how to do that. Like everyone else, I appreciate your volunteer work, but no amount of goodwill creates the ability for people to read your mind.
Please post a security page. It literally doesn't need anything more than an email address and a PGP public key.
By the way: if Rubygems needs security help, it looks like there's quite a number of people who are willing to pitch in. When I published a FreeBSD crt0 bug back in 1996, I was given commit privs. I thought that was a pretty effective way to co-opt adversarial researchers.
Maybe I don't understand PGP and this is a stupid question, but if the site is compromised, wouldn't it be possible for them to just put up a different key, one they know the private key for? How would you know the public key on the compromised site hasn't been changed unless you could compare it to the PGP key from before the site was compromised?
You can't, but having a security.html page gives attackers the possibility to contact you privately and securely; this reduces the chance that they'll decide to "contact" you by completely pwning your public site.
You don't want to send an actual exploit in clear text to a possibly compromised email account. PGP encrypts, and provides some level of validation that the message can only be read by someone on the security team.
Respectfully, RubyGems has already been compromised. Not telling anyone has zero advantages for RubyGems users.
Telling people to avoid using RubyGems and to audit if they've used it recently helps protect users.
Responsible disclosure is about avoiding giving bad guys new attack vectors and ensuring that the maintainer has the best chance of fixing the issue before a blackhat discovers and exploits it. That doesn't apply in this case.
Yes, and thanks for telling us. Really, we should have disabled gem pushes immediately. Hindsight is 20/20, and I'm not sure why I didn't think of doing so earlier. The pain of not being able to push gems would have forced us to fix it.
I'm sorry about this. I don't know what else to say. I wish we didn't have to deal with this kind of problem in the Ruby community.
> Please, if you find an issue like this, be nice. Tell the maintainers privately. Don't post to Reddit, HN, or a public Gist. RubyGems.org is completely volunteer run. No one gets paid to work on it. Thanks for your patience everyone.
It'd probably go a long way if you guys considered making an easily-discoverable security page with information on who to contact, how you prefer that contact to take place and how long they should expect to have to wait to hear back from you.
Is this maybe an opportunity to re-evaluate the volunteer only aspect of RubyGems? I love the service (as I'm sure thousands of others do) and I'd happily pay for a subscription model that would fund a full-time developer/maintainer for the service. Based on how critical it is to the Ruby community I can't imagine that I'm alone in thinking this.
Hmm. That's for private gems. Interesting, but a different use-case. I'd chip in to pay for someone to maintain "rubygems.org" on a professional basis. It's awesome that some people take their private time to support such an essential service, but I'm with the grandparent that it might be good if it were run on a paid-for basis. Maybe through donations or similar, so that access is still free.
Hi Nick, instead of being a dick, I want to first of all thank you for understanding the urgency (despite being on a public bus, you took pains to get it temporarily fixed somehow). Like others, I agree that posting on HN does more good than bad, but at the same time I understand that it's also polite to inform the maintainers first (which I don't think happened in your case). Anyway, thank you for being a responsible maintainer!
Once again, thank you for maintaining rubygems.org! You're doing a great job :)
What I saw was Ben twerping to Ruby people asking who to contact about this first, then posting to HN when he couldn't find the right person. Can I ask, where's the security page on Rubygems.org that tells people who to alert when stuff like this happens? I'm trying to find it on Archive.org and drawing a blank.
I became aware of this problem this morning when it was discussed in a public GitHub issue. I think the problem has been known for a couple of weeks. I checked the latest rack and active_support gems (they would seem to be good targets...) to see if they had been backdoored, but they were clean. Not sure if these gems are still clean or if any others have been compromised.
Now would be a good time to build up a quick script to compare the last modification time of the files on S3 against the updated_at of any gem record in the system. If the delta is too long, that would be a candidate for deeper investigation.
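The comparison above is simple enough to sketch. A minimal, hypothetical version of the core check (the actual S3 bucket walk would use whatever client you have at hand; the timestamp names and tolerance below are illustrative assumptions, not anything from rubygems.org's codebase):

```ruby
require 'time'

# Slack for normal upload latency between the DB write and the S3 put.
TOLERANCE = 60 * 5 # five minutes

# Returns true when the file on S3 was modified suspiciously long after
# the gem's database record was last touched -- a candidate for deeper
# investigation, as suggested above.
def suspicious?(s3_last_modified, gem_updated_at, tolerance = TOLERANCE)
  (s3_last_modified - gem_updated_at) > tolerance
end

# Made-up timestamps for illustration:
record_time = Time.parse('2013-01-30 10:00:00 UTC') # gem.updated_at
clean_file  = Time.parse('2013-01-30 10:02:00 UTC') # normal upload lag
odd_file    = Time.parse('2013-01-31 03:00:00 UTC') # modified much later

puts suspicious?(clean_file, record_time) # => false
puts suspicious?(odd_file, record_time)   # => true
```

In a real audit you would feed `s3_last_modified` from the bucket listing's `LastModified` field and `gem_updated_at` from the database, and flag every gem where the delta exceeds the tolerance.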
If someone compromised a package repository and replaced OpenSSH with a backdoored version, I would want to know immediately, and relay that message as quickly as possible to as many other users as possible before it spreads.
It is better to be paranoid about it and get the word out before someone actually gets hurt. If the users get to it before the maintainer gets to it, at least someone got the word out.
Even if most gem developers would understand, there's still a problem with the trust model: if you choose "high security", you cannot install a single unsigned gem - so things won't work unless _all_ authors sign their gems. Gem signing would have to be a requirement, similar to packages on Ubuntu's PPA. If you choose a lower security setting, a gem may just be repackaged and signed again by the attacker, or the signature removed.
No, as I said, this doesn't help. You would need to sign the most popular gems and all their dependencies, and the dependencies' dependencies. Given that a pretty much standard Rails app can easily pull in a hundred or more gems, that's quite a bit of work. It's doable, but I only see that happening if, for example, rubygems requires a key and gem signatures.
Then there's still the unsolved problem of providing key trust - the current implementation relies on self-signed certificates.
On a related note, we are working on adding this requirement to Clojars, the Clojure equivalent of Rubygems. It's a lot of work and requires significant community buy-in, but would really pay off in moments like these.
And do gem/bundler warn if the signature is missing or invalid?
The problem seems to be that work on the trust infrastructure has been dormant since 2011. The basic signing functionality is in place, but it's not enforced and consequently nobody is using it.
I hope this event will remind the people in charge to finish what they started. Otherwise it's a matter of time until something Really Bad™ happens. The code underlying rubygems[.org] is the stuff nightmares are made of (Marshal...).
It is very possible that something Really Bad™ has already happened, and we don't know about it. If it were me, I wouldn't let it out that I had essentially an unlimited backdoor to every system that installs gems.
No, you miss the point. The idea is that if, say, Rails were signed, someone with an exploit couldn't attack rubygems to _modify Rails_ without it being discovered, because they wouldn't be able to sign their modified Rails.
The point of signing gems is not that any signed gem is necessarily trustworthy. It's that any signed gem is necessarily what the signature owner distributed, and has not been modified by someone else since.
But to make that so requires a bunch of things, it's not quite as simple as 'everyone just has to sign their gems'. But that's the idea.
Where are the adults? This is absolute amateurism. Why haven't Heroku or GitHub or some actual professionals in the Ruby community stepped up to ensure that this kind of amateurism doesn't damage their ecosystem?
[Throwaway as I'm a small-fish in the Ruby community. I expect I'll get to use this account again soon though]
1 week is too long for such a severe security issue. A proof-of-concept gem seems to be a logical step to "force" rubygems.org to fix this issue. However, publishing database.yml on pastie is TOTALLY 100% NOT COOL, and turns this from "whitehat who wants to help" to "blackhat that wants to destroy (or doesn't understand social norms)".
I don't quite get what you're trying to say. What "community social norms"? Are you talking about how rubygems.org handled this vulnerability from the beginning? How the community handled this incident after it happened?
And what does it have to do with my quote? I was just pointing out that in general I'm for PoCs that highlight security issues when the vendor doesn't respond, but stealing database.yml makes you nothing more than a common crook.
Did you catch where this payload was injected? Did they have write access to the rubygems.org source? It would seem that if they did, they didn't need to write this script at all and could simply have read the configuration files directly.
After reading through the github.com issue posted in the comments, I get the impression this was a targeted attack on rubygems.org (and not the user install base). There has been a tweet, also posted in the comments, suggesting that it was successful.
Why is it that all these parsing issues with YAML and XML were ignored before it all blew up recently? Input parsing is usually the most vulnerable part of any application, and it's strange that these issues were ignored for so long. These are not super complicated exploits; they are using the _expected_ behavior of the YAML parser in terms of it constructing arbitrary objects.
Well, an unsafe parser (which was designed only for parsing trusted input) was being used by lots of people (including Rails) for parsing untrusted input as if it were a safe parser. You can debate whether there should be a safe parser for YAML, but that's a separate issue.
Agreed, there should be a clear warning - in fact, the load method should be renamed unsafe_load. The root cause of this is probably unclear documentation and misunderstandings between the users and authors of psych.
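To illustrate the unsafe/safe distinction being argued here, a small sketch using a harmless stand-in class (the real exploit instantiated far more dangerous objects, but the mechanism is the same; `unsafe_load` is the name newer Psych versions use, while older ones exposed this behavior as plain `load`):

```ruby
require 'yaml'

# A harmless class for demonstration only.
class AppConfig
  attr_accessor :admin
end

payload = "--- !ruby/object:AppConfig\nadmin: true\n"

# The full (unsafe) loader happily constructs whatever class the YAML
# document names -- the behavior the exploit relied on.
loader = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
obj = YAML.public_send(loader, payload)
puts obj.class # => AppConfig, an object the caller never asked for
puts obj.admin # => true

# safe_load rejects any class that wasn't explicitly permitted.
begin
  YAML.safe_load(payload)
  puts "allowed"
rescue Psych::Exception
  puts "rejected"
end
```

The rename suggested above would make the first call read `YAML.unsafe_load`, which is in fact the name Psych eventually adopted.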
But the developers were resistant (and understandably so) to having gpg as a dependency for gem verification, and I didn't trust myself to write the crypto and a decent keyring validation algorithm from scratch.
The trouble is self-signed X.509 certificates are pretty worthless.
I disagree. If you obtain the public key from a second location that is not rubygems.org (e.g. from gem.homepage) then that raises the bar for a potential attacker. He now has to either compromise two sites in order to inject a malicious gem, or trick you into obtaining the public_key from a forged location.
I hope the 'bundler' and 'gem' maintainers heed the wake-up call and start displaying a visible warning on unsigned gems (please also print the homepage URL, to make it less of a pain for the user to hunt down the public key). After a grace period, the default should be switched to -P HighSecurity (reject unsigned gems).
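For reference, the named policies behind that -P flag can be inspected from Ruby, assuming a RubyGems build with the security module and the openssl stdlib available (the attribute names below come from the `Gem::Security::Policy` API):

```ruby
require 'rubygems'
require 'rubygems/security' # relies on the openssl stdlib being present

# RubyGems ships a handful of named verification policies; the -P flag
# selects one of these by name at install time.
puts Gem::Security::Policies.keys.sort

# HighSecurity is the "reject unsigned gems" setting proposed above.
policy = Gem::Security::Policies['HighSecurity']
puts policy.only_signed # => true: unsigned gems are refused outright
```

The lower-security policies relax `only_signed` and related checks, which is exactly the repackage-and-re-sign loophole discussed earlier in the thread.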
Yes, it is not perfect; users may still import keys without thoroughly checking that the source is legit. However, it'll be a huge improvement over what we have now and will greatly reduce the chance of a compromised gem going unnoticed.
Point taken, but my day-job rails project, which isn't even that big, has 461 gems in my Gemfile.lock. (which lists all the explicitly included gems, as well as their dependencies, if you're not familiar.) That is a lot of self-signed certificates to track down, attempt some form of verification, and manually add as trusted.
My proposal would allow people who want a medium-level of security to verify that whoever uploaded the gem has (1) valid credentials to rubygems.org, and (2) control of the account owner's email, without requiring all that manual installation.
And it would allow people who wanted to set higher standards of verification to do so, basically for free, by configuring their settings in gpg.
I think your approach is problematic because it hinges on the integrity of rubygems.org (the very host that is compromised right now).
I do see your concern about having a large number of gems, but I believe that is a bootstrapping problem which could be relatively easily mitigated.
Trusted entities (rubygems team, 37signals etc.) could publish signed versions of their keyrings, i.e. public_keys that they trust. So, if you trust 37signals, you'd just download their ring, validate their signature, and import it. And that will probably cover most of your 400 gems.
Debian and Ubuntu solve this problem by having an archive signing key that is regularly rolled over and requires multiple people to re-assemble. The entire archive is cryptographically signed by this archive signing key, and each of the individual packages that make up that archive has been signed by the keys of the people who have been approved to be part of the keyring. This provides a cryptographic verification model that goes all the way up, allowing you to verify that each package comes from the right place. It would be a huge improvement over what there is now.
No, it doesn't solve every problem in the universe, but let us use this opportunity to improve the infrastructure.
We're basically on the same page, but coming from different directions.
Once you start having people import lists of trusted X.509 signatures from other trusted sources (e.g. 37 Signals) what you end up doing is basically creating the Web Of Trust that already exists in OpenPGP.
That is, instead of simulating a X.509 Certificate Authority in OpenPGP like I want to do, you're now simulating a OpenPGP web of trust in X.509 land.
I would have the rubygems OpenPGP key be the default for people who want moderate security. Although a compromise could affect new users, all existing users would have the old key from when they installed rubygems. Gems signed only by a forged key would immediately set off flags for the millions of users with an uncompromised install.
In addition, the more security conscious could trust for example 37signals or a co-worker and create a proper Web of Trust for validation.
This is the way OpenPGP works by default. You can even assign different trust levels automatically with this approach. A rubygems.org signature alone would make the gem moderately trusted. Two additional signatures from two other trusted sources (37signals and a co-worker) would make the gem show up as highly trusted.
But rather than manually importing in an ad hoc fashion, you just sign the 37signals key locally, and gpg already takes care of enforcing all the rules, building the chain of trust, publishing peoples signatures to keyservers distributed world-wide, performing security calculations based on a user's custom preferences, etc.
So if you want the web-of-trust approach, best to just start with OpenPGP, instead of trying a NIH approach with X.509 certs.
It's always sad to have people chime in like that.
The changes I specified are, indeed, about a 10 line patch. You change the default for '-P' (1 line) and you inspect pushes to rubygems.org for a signature and reject them if they lack one. I'd be very surprised if the latter exceeds 9 lines. All you have to do is look for s.cert_chain in the gemspec, so it would actually likely be closer to 3 lines.
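The cert_chain check being described is indeed small. A hypothetical sketch of the server-side half (the `signed?` helper and the `demo` gem name are illustrative, not rubygems.org code; `cert_chain` is the real `Gem::Specification` attribute):

```ruby
require 'rubygems'

# Reject pushed gems whose specification carries no certificate chain.
def signed?(spec)
  chain = spec.cert_chain
  !(chain.nil? || chain.empty?)
end

# A freshly built spec has an empty cert_chain, so it would be rejected.
unsigned = Gem::Specification.new do |s|
  s.name    = 'demo'
  s.version = '1.0.0'
end

puts signed?(unsigned) # => false
```

A push endpoint would run this check on the uploaded gemspec and return an error for unsigned gems, which is essentially the whole of the second change described above.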
You may disagree with what I proposed and demand additional changes, or you may prefer the superior (albeit more complex) PGP approach. But you should not go ad hominem over a basic fact when you obviously have no idea what you're talking about.
(disclaimer: one of us two has actually read the code involved with gem signing just a few days ago - and I'm pretty sure it wasn't you...)
I'm not disputing that this change is a step in the right direction, but the implication that it solves roughly the same problem as the existing OpenPGP-based infrastructure is very misleading. [Edit: if you didn't mean to imply this by your post, then I apologize; it was not clear from how I read it.]
Same with the idea that you can "just apply a 10-line patch" without considering the background in documentation, tooling, and educating the community that such a change would require--it implies that the current rubygems maintainers are ignorant or lazy for not having done this easy fix already.
I didn't claim it solves "roughly the same problem", only a very relevant subset of the problem (integrity) closely related to the unfortunate situation rubygems is in right now.
What you call background (docs, tooling, education) is an argument in favor of the simple approach, which is already implemented and just not enforced - in contrast to whipping up new infrastructure of significantly higher complexity from scratch.
Also I didn't imply the maintainers are ignorant or lazy, they were merely oblivious.
So I'll repeat: The PGP approach is obviously superior, but comes with serious complexity and implementation concerns (there is no native PGP-library in ruby). Consequently it might make sense to add basic security right now, in a minimally intrusive fashion, while a more complete solution is developed. This, by the way, is part of the discussion that the rubygems team is having right now.
Even with PGP you'd need a whole lot of infrastructure: where to get the public key from, etc. Ubuntu requires PGP keys for their PPAs and provides a keyserver. Since a lot of gems seem to use GitHub nowadays, maybe an option to provide a PGP key on GitHub would be great. That would allow us to trust at least the most interesting projects - rails, db-layers, rack, etc. - which could limit the fallout of an attack against rubygems.org.
My proposal was that rubygems would create a simulated certificate authority that operates similarly to the PGP Global Directory, which would allow registered users to have their keys signed by an authorized RubyGems.org signing key. (This is covered on the bottom of the page on the link above.)
In addition more active developers who go to conferences and what not would do all the proper WoT stuff to ensure that the RubyGems.org key being used was indeed the authentic one, and that gems in their circle of friends/co-workers were signed by the correct keys.
Yes, in general I think the proposal is sound. I just wanted to add that GH might be in a position to add "trust" quickly since the accounts already contain a little bit of information. Many people there have a paid account and so are at least known by credit card. It's not much and by no means perfect, but it's more than we have now.
Yep, you can do WoT with X.509 if you write the semantics from scratch, and you can also do a CA approach with OpenPGP.
The thing is, for years no-one has stepped in and actually gone through the pesky little details of implementing all those semantics for rubygems via X.509 even though it's come up repeatedly. But maybe someday some expert will come along and fix things.
If you actually read the rubygems documentation on signing gems, it's pretty much "Okay generate your own self-signed key, then sign it with your self-signed key." That leaves a lot to be desired vs best practices. There isn't even an attempt to describe how you get your self-signed keys to users.
I just think leveraging the existing OpenPGP semantics, ecosystem, and battle-hardened gnupg codebase, is a lot easier than implementing an entire system from scratch on X.509 just because openssl is already installed everywhere and ruby has better bindings and gnupg is not and does not.
I'm just trying to take the approach you usually preach: Don't roll your own crypto code and leave the real work to the pros, and it seems like gpg offers a lower path of resistance.
Jeeze, bad things happen (or could) to the greatest services ;) I'm late to the discussion and didn't have time to lurk around; nevertheless, what I agree with most is gem signing. A gem is a package of software, and every serious package distributor makes the author sign their work. Think Linux packages: being a Debian/Ubuntu developer, I can tell you that if you don't get your package signed, it won't get uploaded, period, and in some cases (Debian) your key needs to be verified in person to build a circle of trust. Despite the initial overhead (I'm sure people will contribute scripts or integrate this so that the developer won't notice), this is really a needed step you guys should take.
Yes, but the attacker would need to inject code into every .gem file that's pushed to rubygems.org. Because that source code would be highly visible, it would be simple to figure out what they're trying to do.
A more hidden attempt would be to simply alter the existing .gem files on S3. Once they have the access tokens from the configuration YAML files, they can do that from anywhere, not just the rubygems.org host. That keeps their activity under the radar and is slightly harder to track.
In that case, only newly installed gems in your system that were uploaded to rubygems.org before this incident would be at risk. If you run bundle update and pull in gems built from today onward, there is not as much to worry about. If you're updating gems and the bundle is pulling in gems from a few days ago, then you are at high risk.
I'm not disputing anything here, but I just want to emphasize that if they have the S3 credentials, replacing any gem is trivial. And they wouldn't need to replace every gem. Picking the 10 most popular ones would have pretty big fallout.
And, if like most people you don't verify your installed gems (not that you really can if they're not signed), you're going to have a very bad day. The "tracking" aspect is a situation where it doesn't matter because at that point, you're hosed anyway.
Also, walking S3 is not particularly fast. So if the plan is to walk all gem files to verify them, expect that to take a while. Hopefully they have bucket logging enabled.
Why aren't gems protected by a trust chain all the way up like Debian does? Gems aren't even required to be cryptographically signed by the developer and from what I can tell are just thrown over http without any authentication. This seems terrible.
For those that don't know how this exploit worked, I think the "exploit" gem description provides a pretty good explanation:
'A Proof-of-Concept (PoC) gem that exploits a vulnerability in the Psych YAML parser, which allows the #[]= method to be called on arbitrary Objects. If the #[]= method later calls eval() with the given arguments, this allows for arbitrary execution of code.'
Can someone explain what happened here to those of us who are a little late? It seems like the issue is resolved already, or at least temporarily fixed.
All I can figure out from the URL and the comments here is that there was a gem called "exploit" and that somehow the config files were posted to Pastie.
Oh, the timing. So, there was a big meeting where some of the devs pleaded the case that the recent wave of security issues in the Ruby world had been a tipping point, and that we were past it. They presented a good case, and I think the architects were close to permitting the continued use of Ruby and Rails in their current projects.
Then we get an interruption, as a couple of projects had updated their gems today, and there was a concern about overall security of the apps.
IT Security required that the apps be taken offline, and the green light was given to the PHP and Python guys in house (and a few retained developers) to start the process of rewriting the apps.
It will be painful, but I think the business has lost its appetite for the Ruby community's constant idea of "let's run with YAML" before first learning to "walk safely with YAML".
I can understand it's bad timing, but the reaction seems ad hoc. AFAIK neither Python eggs nor PHP's Packagist repository signs its packages, so in theory they're open to the same attack vector. Judging from the docs, Composer doesn't even allow signed packages. A better reaction would have been to fix the process: allow only signed gems, or cache and audit the packages you install. You don't need to depend on rubygems.
Even OS package repositories and mirrors have been broken into. I faintly remember a break-in where Debian(?) had to check all their mirrors.
I'm glad to hear that at least one organization has done the right thing, and started the process of moving completely away from Ruby and Ruby on Rails. It's just the responsible and sane thing to do, given how the software and the community apparently can't be trusted.
In Ben's defence, I doubt he's the only one who wouldn't have thought to check the whois data. I wouldn't have, for a start. There should have been an obvious contact email linked on the site itself for issues, not just a Twitter account.