A full post-mortem is coming, but here's what happened from my perspective. I got mentioned on Twitter about this thread while I was on the bus. I asked @evanphx to put the site into maintenance mode immediately.
Once I got on wifi, I downloaded the "exploit" gems and found that they used the Psych hack to post config/initializers/* and config/*.yml to Pastie. Luckily, none of our API keys were actually in those files - our config is a bit different since you can still run RubyGems.org on Heroku, which needs to put secret keys in the ENV.
I deleted the exploit gems permanently along with the throwaway account that posted them...that's why this post's URL is a 404.
I've reset our S3 keys just to be sure and any other keys we had on our production site. From what we know right now, no other changes on our S3 bucket have taken place, and we're going to check the logs to make sure. Once we get a real fix for this issue out, pushing gems will be enabled again.
Just a general PSA -
Please, if you find an issue like this, be nice. Tell the maintainers privately. Don't post to Reddit, HN, or a public Gist. RubyGems.org is completely volunteer run. No one gets paid to work on it. Thanks for your patience everyone.
Please post a security page. It literally doesn't need anything more than an email address and a PGP public key.
By the way: if Rubygems needs security help, it looks like there's quite a number of people who are willing to pitch in. When I published a FreeBSD crt0 bug back in 1996, I was given commit privs. I thought that was a pretty effective way to co-opt adversarial researchers.
I know dealing with this stuff is no fun. Try to keep in mind, though, that people like Ben Murphy and Hal (Postmodern) are on your side. Again, maybe consider formalizing that relationship a little!
The purpose of the key is to allow people to report security vulnerabilities without worrying that by doing so they're giving ammunition to people snooping emails.
Github's is in their site footer.
37signals' is in their site footer.
Twitter's is linked off the sidebar in their "About" page.
Google's and Facebook's are the top search result for their site and "vulnerability" "security".
These are all fine options.
Telling people to avoid using RubyGems and to audit if they've used it recently helps protect users.
Responsible disclosure is about avoiding giving bad guys new attack vectors and ensuring that the maintainer has the best chance of fixing the issue before a blackhat discovers and exploits it. That doesn't apply in this case.
I agree, but how do people audit? I think it's time to start putting application dependencies, like rubygems and npm modules, under version control; not in the same repo, but in a separate one.
Mikeal Rogers has some good reasons why, but big dependencies generate too much commit noise if you keep them in the same repo. A separate repo solves this.
Nick, I don't support today's PoC. I really don't. But I told you. Twice in a week. I feel deeply sad now.
I'm sorry about this. I don't know what else to say. I wish we didn't have to deal with this kind of problem in the Ruby community.
Do you think it makes sense to disable pulling down gems until you know for sure nothing in S3 is compromised?
It'd probably go a long way if you guys considered making an easily-discoverable security page with information on who to contact, how you prefer that contact to take place and how long they should expect to have to wait to hear back from you.
Just a suggestion!
It's amazing to see that one can just type 'gem blah' and magically things get installed.
Once again, thank you for maintaining rubygems.org! You're doing a great job :)
Oh, and +1 for the security page
You can never be too sure. There's a whole contest (http://underhanded.xcott.com/) whose goal is to "make something that looks legit, but actually does something evil / has an exploit".
Nuke from orbit.
It is better to be paranoid about it and get the word out before someone actually gets hurt. If the users get to it before the maintainer gets to it, at least someone got the word out.
Here's the chapter on it in the RubyGems manual:
And that's all that I can find. Try to find something else, like a blog tutorial.
Then there's still the unsolved problem of providing key trust - the current implementation relies on self-signed certificates.
The problem seems to be that work on the trust infrastructure has been dormant since 2011. The basic signing functionality is in place, but it's not enforced and consequently nobody is using it.
I hope this event will remind the people in charge to finish what they started. Otherwise it's a matter of time until something Really Bad™ happens. The code underlying rubygems[.org] is the stuff nightmares are made of (Marshal...).
The point of signing gems is not that any signed gem is necessarily trustworthy. It's that any signed gem is necessarily what the signature owner distributed, and has not been modified by someone else since.
But to make that so requires a bunch of things, it's not quite as simple as 'everyone just has to sign their gems'. But that's the idea.
[Throwaway as I'm a small-fish in the Ruby community. I expect I'll get to use this account again soon though]
The vulnerability itself is simply ridiculous:
The response from the Gem folks has betrayed a serious lack of experience.
Has Rails been such an insular and young community for so long that there are truly no qualified engineers around with experience in this level of professional software development?
If other frameworks are using YAML in this fashion, yesterday would have been the time to look for security holes in those implementations.
Edit: At least a PoC. How someone other than him (since he's denying compromising rubygems) found his PoC, wrote the actual payload, and pushed the exploit gem is unclear.
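For anyone auditing their own code for the same bug class: the danger is any full-featured YAML load applied to untrusted input, since type tags can instantiate arbitrary classes. A minimal sketch using Psych's own `safe_load` (the document strings here are made up for illustration):

```ruby
require 'yaml'

# Untrusted input must never go through a full-featured YAML load:
# type tags like "!ruby/object:SomeClass" let the parser instantiate
# arbitrary classes, which is the root of this whole class of exploit.
untrusted = "--- !ruby/object:Object {}"

rejected = begin
  YAML.safe_load(untrusted)   # safe_load allows plain data types only
  false
rescue Psych::DisallowedClass
  true                        # the dangerous tag was refused
end

# Plain data still parses as ordinary hashes, arrays, and scalars:
config = YAML.safe_load("host: example.org\nport: 443")
```

With `safe_load`, `rejected` ends up true and `config` is just `{"host" => "example.org", "port" => 443}`; nothing in the document can pick the class being instantiated.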
- 1 week ago: blambeau reported the issue to the rubygems people: https://github.com/tenderlove/psych/issues/119#issuecomment-...
- Postmodern wrote a PoC - https://gist.github.com/4674219
- Postmodern: "I posted it in a private chat room. Guess someone took it for a ride." - https://twitter.com/postmodern_mod3/status/29665192240284057...
- A gem called `exploit-36.44.16` was pushed to RubyGems. This contains a payload that only sends `uname -r` to pastie.
- A gem called `exploit-22.31.31` was pushed to RubyGems. This contains a payload that posted the config/database.yml to pastie. This gem was removed by the rubygems admins when they discovered it - http://news.ycombinator.com/item?id=5140109 & https://gist.github.com/d891e876c53e55bf0920 (that's the payload)
Other exploit gems that were pushed (but we don't know what they did):
- 16.17.49 - https://twitter.com/rubygems/status/296618422702309377
- 20.22.1 - https://twitter.com/rubygems/status/296617610622144512
- 12.5.4 - https://twitter.com/rubygems/status/296540884781109248
- 7.27.42 - https://twitter.com/rubygems/status/296537952320884736
My conclusion based on the current facts:
1 week is too long for such a severe security issue. A proof-of-concept gem seems to be a logical step to "force" rubygems.org to fix this issue. However, publishing database.yml on pastie is TOTALLY 100% NOT COOL, and turns this from "whitehat who wants to help" to "blackhat that wants to destroy (or doesn't understand social norms)".
This whole thing is a joke, from the ridiculously unsafe YAML parser to the response of the RubyGems people.
If these are your community social norms, then you quite simply deserve the resulting shake-up. This is how you become inoculated against such incredibly poor engineering practices.
And what does it have to do with my quote? I was just pointing out that in general I'm for PoC to highlight security issues when the vendor doesn't respond, but stealing database.yml makes you nothing more than a simple crook.
https://twitter.com/rubygems/status/296537952320884736 (12:38) 7.27.42 ~ 7 hours ago :(
but there were quite a few exploits pushed
'paste[authorization]' => 'burger',
'paste[access_key]' => '',
Maybe the post was removed, or the hash is different from the original, so not everyone can find it.
It was put in maintenance to stop pushing possibly infected gems to users.
I tried to get together a proposal to introduce OpenPGP signed gems and wrote some proof-of-concept code so there could be a proper Web-of-Trust for rubygems:
But the developers were resistant (and understandably so) to having gpg as a dependency for gem verification, and I didn't trust myself to write the crypto and a decent keyring validation algorithm from scratch.
I disagree. If you obtain the public key from a second location that is not rubygems.org (e.g. from gem.homepage) then that raises the bar for a potential attacker. He now has to either compromise two sites in order to inject a malicious gem, or trick you into obtaining the public_key from a forged location.
I hope 'bundler' and 'gem' hear the wakeup-call and will start displaying a visible warning on unsigned gems (please also print the homepage-url to make it less of a pain for the user to hunt down the public key). After a grace period the default should be switched to -P HighSecurity (reject unsigned gems).
Yes, it's not perfect; users may still import keys without thoroughly checking that the source is legit. However, it'll be a huge improvement over what we have now and will greatly reduce the chance of a compromised gem going unnoticed.
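For what it's worth, the policy names that `gem install -P` accepts already exist inside RubyGems, so this proposal is about flipping an unenforced default rather than building new machinery. A quick way to list them (assumes a RubyGems build with OpenSSL support):

```ruby
require 'rubygems/security'

# RubyGems ships several built-in verification policies; HighSecurity is
# the one that rejects unsigned gems outright, as proposed above.
policy_names = Gem::Security::Policies.keys
puts policy_names
```

On a stock install this includes "HighSecurity" alongside the weaker policies, confirming the enforcement knob is already there.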
My proposal would allow people who want a medium-level of security to verify that whoever uploaded the gem has (1) valid credentials to rubygems.org, and (2) control of the account owner's email, without requiring all that manual installation.
And it would allow people who wanted to set higher standards of verification to do so, basically for free, by configuring their settings in gpg.
I do see your concern about having a large number of gems, but I believe that is a bootstrapping problem which could be relatively easily mitigated.
Trusted entities (rubygems team, 37signals etc.) could publish signed versions of their keyrings, i.e. public_keys that they trust. So, if you trust 37signals, you'd just download their ring, validate their signature, and import it. And that will probably cover most of your 400 gems.
No, it doesn't solve every problem in the universe, but let us use this opportunity to improve the infrastructure.
Once you start having people import lists of trusted X.509 signatures from other trusted sources (e.g. 37 Signals) what you end up doing is basically creating the Web Of Trust that already exists in OpenPGP.
That is, instead of simulating a X.509 Certificate Authority in OpenPGP like I want to do, you're now simulating a OpenPGP web of trust in X.509 land.
I would have the rubygems OpenPGP key be the default for people who want moderate security. Although a compromise could affect new users, all existing users would have the old key from when they installed rubygems. That would immediately set off flags when gems signed only by a forged key were installed by the millions of users with an uncompromised install.
In addition, the more security conscious could trust for example 37signals or a co-worker and create a proper Web of Trust for validation.
This is the way OpenPGP works by default. You can even assign different trust levels automatically with this approach. A rubygems.org signature alone would make a gem moderately trusted. Two additional signatures from other trusted sources (37signals and a co-worker) would make the gem show up as highly trusted.
But rather than manually importing in an ad hoc fashion, you just sign the 37signals key locally, and gpg already takes care of enforcing all the rules, building the chain of trust, publishing peoples signatures to keyservers distributed world-wide, performing security calculations based on a user's custom preferences, etc.
So if you want the web-of-trust approach, best to just start with OpenPGP, instead of trying a NIH approach with X.509 certs.
Your approach is definitely more complete but also seems to require fairly substantial code changes (OpenPGP?).
My approach should be doable with a mere ~10 lines patch; Gem and bundler should default to '-P HighSecurity' and rubygems.org should reject pushes of unsigned gems.
Anyway, whichever implementation strategy they choose, I hope they will do it soon.
If you actually believe this can be accomplished in 10 lines of code you are not qualified to be giving security advice.
The changes I specified are, indeed, about a 10 line patch. You change the default for '-P' (1 line) and you inspect pushes to rubygems.org for a signature and reject them if they lack one. I'd be very surprised if the latter exceeds 9 lines. All you have to do is look for s.cert_chain in the gemspec, so it would actually likely be closer to 3 lines.
You may disagree with what I proposed and demand additional changes, or you may prefer the superior (albeit more complex) PGP approach. But you should not go ad hominem over a basic fact when you obviously have no idea what you're talking about.
(disclaimer: one of us two has actually read the code involved with gem signing just a few days ago - and I'm pretty sure it wasn't you...)
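To make the scope concrete, here is a sketch of the check being described (the helper name is hypothetical, and the real rubygems.org push path is of course more involved): a gem counts as signed only if its spec carries a certificate chain.

```ruby
require 'rubygems'

# Hypothetical server-side check: reject a pushed gem whose spec has no
# signing certificate chain. Gem::Specification#cert_chain defaults to []
# for unsigned gems, which is what makes the check this small.
def signed?(spec)
  chain = spec.cert_chain
  !(chain.nil? || chain.empty?)
end

# An ordinary, unsigned spec fails the check:
unsigned_spec = Gem::Specification.new "demo", "1.0.0"
```

Whether a bare presence check like this is *sufficient* is exactly the disagreement in this thread; the point is only that the mechanical part is tiny.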
Same with the idea that you can "just apply a 10-line patch" without considering the background work in documentation, tooling, and community education that such a change would require; it implies that the current rubygems maintainers are ignorant or lazy for not having applied this easy fix already.
(I spoke at length on a PGP-based approach to this same problem at the 2012 Clojure Conj conference if you would like to talk about credentials: https://www.youtube.com/watch?v=sBSUIKMdQ4w)
What you call background (docs, tooling, education) is an argument in favor of the simple approach, which is already implemented and just not enforced - in contrast to whipping up new infrastructure of significantly higher complexity from scratch.
Also I didn't imply the maintainers are ignorant or lazy, they were merely oblivious.
So I'll repeat: The PGP approach is obviously superior, but comes with serious complexity and implementation concerns (there is no native PGP-library in ruby). Consequently it might make sense to add basic security right now, in a minimally intrusive fashion, while a more complete solution is developed. This, by the way, is part of the discussion that the rubygems team is having right now.
In addition, more active developers who go to conferences and whatnot would do all the proper WoT stuff to ensure that the RubyGems.org key being used was indeed the authentic one, and that gems in their circle of friends/co-workers were signed by the correct keys.
Ruby has better support for X.509 PKI than it does for OpenPGP, so using OpenPGP seems kind of silly in this case.
The thing is, for years no-one has stepped in and actually gone through the pesky little details of implementing all those semantics for rubygems via X.509 even though it's come up repeatedly. But maybe someday some expert will come along and fix things.
If you actually read the rubygems documentation on signing gems, it's pretty much "Okay generate your own self-signed key, then sign it with your self-signed key." That leaves a lot to be desired vs best practices. There isn't even an attempt to describe how you get your self-signed keys to users.
I just think leveraging the existing OpenPGP semantics, ecosystem, and battle-hardened gnupg codebase is a lot easier than implementing an entire system from scratch on X.509 just because openssl is already installed everywhere with good Ruby bindings, while gnupg is not and has none.
I'm just trying to take the approach you usually preach: don't roll your own crypto code and leave the real work to the pros, and gpg seems to offer the path of least resistance.
Both OpenPGP and X.509 are going to involve a lot of fiddly semantic details that will take time to hash out. The bottleneck here isn't the technology.
I like, and routinely recommend, OpenPGP for solving app crypto problems. X.509 makes a lot more sense here.
Unfortunately I don't see any way to help fund such an effort. Is there an officially supported way to donate to RubyGems? Can we create a pool to help cover the cost of maintaining this resource?
rubygems.org uses an invalid security certificate.
The certificate is not trusted because the issuer
certificate is unknown.
(Error code: sec_error_unknown_issuer)
not valid before: 2012-03-03
sha1 fingerprint: BD:2E:60:69:49:99:53:29:BD:E9:24:12:95:E1:5E:65:E0:64:79:2A
Edit: I am incorrect. It seems that this uploaded gem was used to compromise RubyGems.org.
> Someone posted http://rubygems.org 's config/*.yml to pastie. Didn't include the S3 bucket secret key, but going to reset everything anyway.
Running `$ bundle update` will not inject this into your app.
You'd have to intentionally add `gem "exploit", "~> 22.31.31"` to your Gemfile.
A more hidden attempt would be to simply alter the existing .gem files on S3. Once they have the access tokens from the configuration YAML files, they can do that from anywhere, not just the rubygems.org host. That keeps their activity under the radar and is slightly harder to track.
In that case, only newly installed gems in your system that were uploaded to rubygems.org before this incident would be at risk. If you run bundle update and pull in gems built from today onward, there is not as much to worry about. If you're updating gems and the bundle is pulling in gems from a few days ago, then you are at high risk.
And, if like most people you don't verify your installed gems (not that you really can if they're not signed), you're going to have a very bad day. The "tracking" aspect is a situation where it doesn't matter because at that point, you're hosed anyway.
Also, walking S3 is not particularly fast. So if the plan is to walk all gem files to verify them, expect that to take a while. Hopefully they have bucket logging enabled.
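If you do end up auditing cached gem files, the per-file check itself is cheap; the hard part is obtaining trustworthy reference checksums out-of-band in the first place. A sketch (the helper name and the stand-in file are made up for illustration):

```ruby
require 'digest'
require 'tempfile'

# Hypothetical audit helper: compare the SHA-256 of a cached .gem file
# against a checksum obtained from somewhere you still trust.
def gem_file_ok?(path, expected_sha256)
  Digest::SHA256.file(path).hexdigest == expected_sha256
end

# Demo on a throwaway file standing in for a cached gem:
file = Tempfile.new("demo.gem")
file.write("not a real gem")
file.close

good_digest = Digest::SHA256.hexdigest("not a real gem")
ok       = gem_file_ok?(file.path, good_digest)  # matches
tampered = gem_file_ok?(file.path, "0" * 64)     # simulated mismatch
```

Of course, if the attacker can rewrite the gems on S3 they may also control wherever you fetched the checksums from, which is the whole argument for end-to-end signatures above.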
'A Proof-of-Concept (PoC) gem that exploits a vulnerability in the Psych YAML parser, which allows the #[]= method to be called on arbitrary Objects. If the #[]= method later calls eval() with the given arguments, this allows for arbitrary execution of code.'
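A harmless way to see the mechanism that description refers to: deserializing a `!ruby/hash:SomeClass` document makes Psych allocate the class and call its `#[]=` once per key/value pair. The Recorder class and document below are invented for illustration, and the `unsafe_load` fallback accounts for Psych 4 renaming the full-featured load:

```ruby
require 'yaml'

# Stand-in for a class with an interesting #[]= (the real exploit targeted
# classes whose #[]= eventually reached eval); this one just records calls.
class Recorder < Hash
  attr_reader :calls
  def []=(key, value)
    (@calls ||= []) << [key, value]   # proof that Psych invoked us
    super
  end
end

doc = "--- !ruby/hash:Recorder\nfoo: bar\n"

# Only a full-featured load honors the ruby/hash tag; Psych 4 calls it
# unsafe_load, older Psych just calls it load.
loader = YAML.respond_to?(:unsafe_load) ? :unsafe_load : :load
h = YAML.send(loader, doc)
```

After loading, `h` is a `Recorder` instance and `h.calls` shows that the parser, driven purely by attacker-controlled YAML text, invoked a method on a class of the document's choosing.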
When did the compromise happen? Was it compromised yesterday or only found out yesterday?
I have default gems installed on my system and haven't updated anything since the last big Rails security issue that was reported a bit ago.
It'd be great to get some guidance on what to do.
I'm considering re-installing my gems directly from the github repos. Thoughts?
IT Security required that the apps be taken offline, and the green light was given to the PHP and Python guys in house (and a few retained developers) to start the process of rewriting the apps.
It will be painful, but I think the business has lost its appetite for the constant Ruby idea of "let's run with YAML" before first learning to "walk safely with YAML".
Even OS package repositories or mirrors have been broken into. I faintly remember a break-in where Debian(?) had to check all their mirrors.