What The Rails Security Issue Means For Your Startup (kalzumeus.com)
401 points by timcraft on Jan 31, 2013 | 176 comments



The "everybody has bugs" response is intellectually dishonest. Yes, everybody has bugs, but most people's bugs aren't an intentional feature that a trained monkey ought to have known was a bad idea.

- Someone implemented a YAML parser that executed code. This should have been obviously wrong to them, but it wasn't.

- Thousands of ostensible developers used this parser, saw the fact that it could deserialize more than just data, and never said "Oh dear, that's a massive red flag".

- The bug in the YAML parser was reported and the author of the YAML library genuinely couldn't figure out why this mattered or how it could be bad.

- The issue was reported to RubyGems multiple times and they did nothing.

This isn't the same thing as a complex and accidental bug that even careful engineers have difficulty avoiding, after they've already taken steps to reduce the failure surface of their code through privilege separation, high-level languages/libraries, etc.

This is systemic engineering incompetence that apparently pervades an entire language community, and this is the tipping point where other people start looking for these issues.


Any sufficiently advanced serialization standard will let you serialize/deserialize a wide variety of objects. When J2EE SOAP libraries are passing large amounts of XML back and forth over the wire, it is often going to get instantiated via calling a zero-argument constructor and then firing a bunch of setter methods. As J2EE has learned to its sorrow, there are some good choices for objects to allow people to deserialize and some not so good ones.

http://wouter.coekaerts.be/2011/spring-vulnerabilities
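
To make that pattern concrete, here's a minimal Ruby sketch of the instantiate-then-set-attributes style of deserializer being described (the Point class and the payload shape are hypothetical, purely for illustration):

    # Hypothetical deserializer core: the payload names a class and a set of
    # attributes; we look the class up, call the zero-argument constructor,
    # then fire the setters. Everything after const_get is data-directed.
    def naive_deserialize(payload)
      klass = Object.const_get(payload[:class])    # class chosen by the data
      obj   = klass.new                            # zero-argument constructor
      payload[:attrs].each do |name, value|
        obj.public_send("#{name}=", value)         # setters chosen by the data
      end
      obj
    end

    Point = Struct.new(:x, :y)
    naive_deserialize(class: "Point", attrs: { "x" => 1, "y" => 2 })
    # Nothing stops a payload from naming a class whose constructor or setters
    # have side effects -- which is exactly the "not so good choices" problem.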

If J2EE is a boring platform to you, pick your favorite and Google for a few variants. You'll find a serialization vulnerability. It's hard stuff, by nature.

*The bug in the YAML parser was reported and the author of the YAML library genuinely couldn't figure out why this mattered or how it could be bad.*

Do you have a citation for this? What particular bug in the parser are you referring to? The behavior which is being exploited is a fairly complicated interaction between the parser and client Rails code -- I banged my head against the wall trying to get code execution with Ruby 1.8.7's parser for over 12 hours, for example, without any luck unless I coded a too-stupid-to-be-real victim class. (It's my understanding that at least one security researcher has a way to make that happen, but that knowledge was hard won.)


> Any sufficiently advanced serialization standard ... it is often going to get instantiated via calling a zero-argument constructor and then firing a bunch of setter methods.

Yes, this is always a bad idea. It's actually in a similar problem space as the constant stream of vulnerabilities in the Java security sandbox (eg, applets); all it takes is one mistake and you lose.

And thus, people have been saying to turn off Java in the browser for 4+ years, and this is also why Spring shouldn't have implemented such code.

> It's hard stuff, by nature.

Which is why deserializing into executable code is a bad idea, by nature. I'd thought this was well established by now, but apparently it is not.

> Do you have a citation for this? What particular bug in the parser are you referring to?

http://github.com/tenderlove/psych/issues/119


> This is systemic engineering incompetence that apparently pervades an entire language community

The original target of that claim was the Ruby community. Now that this comment concedes the same issue existed in the Java community, are you leveling the same claim against it? Does every severe security issue that goes unnoticed by a community for some time, and is eventually noticed, suggest pervasive engineering incompetence throughout that entire community? Maybe you would be entirely right to make that claim, because any security issue is indicative of incompetence at some level, but I think the closer your definition of incompetence comes to including everybody, the less useful that definition is.


> Which is why deserializing into executable code is a bad idea, by nature. I'd thought this was well established by now, but apparently it is not

I'm not sure that means anything. In an OO language, you are always de-serializing into objects, and objects are always 'executable code'. Hashes and Arrays are executable code too, right?

The problem is actually when you allow de-serializing into _arbitrary_ objects of arbitrary classes, and some of those objects have dangerous side effects _just by being instantiated_, and/or have functionality that can turn into an arbitrary code execution vector. (Hopefully Hashes and Arrays don't.)

It is a problem, and it's probably fair to say that you should never have a de-serialization format that takes untrusted input and de-serializes to anything but a small whitelisted set of classes/types. And that many have violated this, and not just in ruby.

But if you can't even describe the problem/guidance clearly yourself, I think that rather belies your insistence that it's an obvious thing known by the standard competent programmer.

(I am not ashamed to admit it was not obvious to me before these exploits. I think it was not obvious to a bunch of people who are, in retrospect, _claiming_ it was obvious to them.)


> I'm not sure that means anything. In an OO language, you are always de-serializing into objects, and objects are always 'executable code'. Hashes and Arrays are executable code too, right?

No. You're conflating code and state (which was the problem to begin with!)

Let's disassemble parsing a list of strings:

When you instantiate the individual string objects, you do not 'eval' the data to allow it to direct which string class should be instantiated. You also do not 'eval' the data to determine which fields to set on the string class.

You instantiate a known String type, and you feed it the string representation as an array of non-executable bytes using a method you specified when writing your code -- NOT a method the data specifies.

The data is not executable. It's an array of untrusted bytes. The string code is executable, and it operates on state: the data.

You repeat this process, feeding the string objects into the list object. At no point do you ask the data what class or code you should run to represent it. Your parsing code dictates what classes to instantiate, and the data is interpreted according to those fixed rules, and your data is never executed.

It should never be possible for data to direct the instantiation of types. The relationship must always occur in the opposite direction, whereby known types dictate how to interpret data.
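
A tiny Ruby sketch of the two directions of control (both parsers are hypothetical, just to pin the distinction down):

    # Safe: the parser's code fixes the types; the bytes are only state.
    def parse_string_list(data)
      data.split(",").map { |chunk| chunk.strip }   # only Array and String, ever
    end
    parse_string_list("a, b, c")    # => ["a", "b", "c"]

    # Unsafe: the bytes pick the type -- the anti-pattern described above.
    def parse_tagged(data)
      tag, rest = data.split(":", 2)
      Object.const_get(tag).new(rest)               # data directs instantiation
    end
    parse_tagged("String:hello")    # works, but so would any class in scope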

> I think it was not obvious to a bunch of people who are in retrospect _claiming_ it was obvious to.

Given the preponderance of prior art, this seems unlikely.


The YAML vulnerability was not from any 'eval' in the YAML library itself, you realize, right?

It was from allowing de-serialization to arbitrary classes, when it turned out that some classes had dangerous side-effects merely from instantiation -- including in some cases, 'eval' behavior, yes, but the eval behavior wasn't in YAML, it was in other classes, where it could be triggered by instantiation.

To use your language, I don't think it's 'intellectually honest' to call allowing de-serialization to data-specified classes "a YAML parser that executed code" -- that's being misleading -- or to say that a 'trained monkey should have known it was a bad idea' (allowing de-serialization to arbitrary data-specified classes).

There have been multiple vulnerabilities _just like this_ in other environments, including several in Java (and in major popular Java packages). You could say with all that prior art it ought to have been obvious, but of course you could say that for each of the multiple prior vulnerabilities too. Of course, each time there's even more prior art, and for whatever reason this one finally got enough publicity that maybe this kind of vulnerability will be common knowledge now.


> The YAML vulnerability was not from any 'eval' in the YAML library itself, you realize, right?

> It was from allowing de-serialization to arbitrary classes, when it turned out that some classes had dangerous side-effects merely from instantiation -- including in some cases, 'eval' behavior, yes, but the eval behavior wasn't in YAML, it was in other classes, where it could be triggered by instantiation.

That is eval behavior.


What you are looking for is not "OO language", but "dynamic interpreted language".

In a traditionally compiled OO language like C++, classes cease to exist after compilation; there is no fully generic way to instantiate an object of a class by data determined at runtime. So this whole concept of deserializing to whatever the protocol specifies goes completely out of the door.
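
For contrast, the fully generic version is one line of Ruby, and it doesn't even need a cooperating constructor: allocate builds an instance without ever calling initialize, which is roughly the mechanism Psych's generic object support relies on. A sketch (not Psych's actual code; Widget is made up):

    class Widget
      def initialize(required_arg)   # never called below
        @required_arg = required_arg
      end
    end

    name = "Widget"                          # imagine this arrived in the payload
    obj  = Object.const_get(name).allocate   # instance exists; initialize skipped
    obj.instance_variable_set(:@required_arg, "attacker-chosen value")
    p obj   # => #<Widget @required_arg="attacker-chosen value">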


So your conclusion is that dynamically interpreted languages are all insecure?

(You can instantiate objects with classes specified by data in Java too, although Java isn't usually considered exactly dynamically interpreted. In fact, there was a very analogous bug in Spring, as mentioned in many places in this comment thread. But anyway, okay, sufficiently dynamically interpreted to allow instantiation of objects with classes chosen at runtime... is the root of the problem, you're suggesting, and if everyone just used C++ it would be fine?)


"Interpreted" is too restrictive. For example, Objective C provides NSClassFromString().


One could argue that since every call goes through a runtime messaging framework, Objective C is really just an interpreted language with pre-JITed function bodies.


Here's the thing. You cannot load YAML built from attacker-controlled data. Period. If you do, you have to assume bad things are going to happen. The fact that psych calls []=(key, val) on instantiated objects, in combination with ActionController::Routing::RouteSet::NamedRouteCollection calling eval on the key, made for a particularly easy drive-by attack on a huge range of deployments, but even without the []=, there are still plenty of ways to exploit loading arbitrary YAML, though they may require more custom targeting.
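
A stripped-down, hypothetical re-creation of that interaction (the class name is taken from the comment; the method body is a simplified stand-in, not actual Rails source):

    class NamedRouteCollection
      # Rails generated route-helper methods by evaluating code that embeds
      # the route name -- so a name arriving via YAML-triggered []= becomes
      # source text.
      def []=(name, route)
        eval("def self.#{name}_url; '/#{route}'; end")
      end
    end

    routes = NamedRouteCollection.new
    routes["home"] = "home"   # intended use: defines home_url
    routes["x; end; puts 'arbitrary code runs here'; def self.y"] = "y"
    # the malicious key escapes the intended method definition, so the
    # injected statement executes immediately at eval time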

As for the request in that issue, I doubt that adding a safe_load option would have stopped the Rails vulnerability. After all, the Rails guys _already knew_ that they should not be loading YAML from the request body; that's why it was not allowed directly. The issue was loading XML, which then allowed YAML to be loaded. Allowing YAML to be loaded there was a mistake; it seems unlikely that someone would make that mistake while at the same time mitigating it by adding safe_load.


You're describing the previous Ruby on Rails vulnerability. The latest one involved them deliberately using the YAML parser to parse untrusted JSON data. Also, the RubyGems compromise was a result of them parsing gem metadata represented using YAML - since the metadata is YAML you pretty much have to use a YAML parser to parse it.


Using YAML to parse JSON was obviously non-optimal, which is (presumably) why Rails stopped doing it in 3.1 (thus the vulnerability you refer to is only present in 3.0 and 2.x).

W.r.t RubyGems, I hear what you're saying, but that doesn't mean there's a bug in psych. Even the feature request of adding a safe_load option strikes me as problematic...either you're limiting the markup to json with comments, or you'd have to name the option something like sort_of_safe_load.


Spring isn't the only widely-used Java framework that's had these problems. The Struts developers put a general-purpose interpreter (OGNL) in their parameter parsing pipeline, and thought they'd kept things safe by blacklisting dangerous syntax.

Wackiness ensued: http://blog.o0o.nu/2010/07/cve-2010-1870-struts2xwork-remote...

It would obviously be unfair to claim on this basis, or the recent problems with the Java browser plugin, that the "entire Java language community" has a bad attitude on security matters. Communities are big, each of them has a range of attitudes within it, and most importantly --- regardless of attitude --- sooner or later, everyone screws up.


Parsing is not deserialization. I keep having to say this on all these threads. There should be a giant glowing force field between these two practices.

The particular issue in the YAML parser is explained pretty well here: http://www.insinuator.net/2013/01/rails-yaml/


The security fuckup is a lot more simple than that - they fucked up as soon as they opened the door to this kind of complicated interaction by letting untrusted code instantiate arbitrary classes and pass strings of their choice to them. Doesn't matter that they weren't aware of any way this could be exploited, as soon as they let an attacker pass data to random classes that were never designed to accept untrusted input a security disaster was basically inevitable.



I guess the difference re: Spring is that I don't load my Spring configuration via an HTTP connection that is pointed at the internet.


"Yes, everybody has bugs, but most people's bugs aren't an intentional feature that a trained monkey ought to have known was a bad idea."

First, given how many times I've seen a deserialization library "helpfully" allow you to deserialize into arbitrary objects in a language that is sufficiently dynamic to turn this into arbitrary code execution, evidence suggests this is not an accurate summary. I'd like to see "Don't deserialize into arbitrary objects" become General Programming Wisdom, but it is not there yet.

It's not like we live in a world where XSS is rare or anything anyhow. The general level of programming aptitude is low here. That's bad, regrettable, something I'd love to see change and love to help change, but it is also something we have to deal with as a brute fact.

Secondly, there are still the points of A: even if you don't use Ruby on Rails, your life may still be adversely affected by the Severity: Apocalyptic bug, and B: what are you going to do when the Severity: Apocalyptic bug is located in your codebase? And that's putting aside the obvious matters of what to do if you use Ruby on Rails and this was your codebase. The exact details of today's Severity: Apocalyptic bug are less relevant than you may initially think. Go back and read the piece, and strike every sentence that contains "YAML". It's still a very important piece.

At which point a re-quoting of my favorite line in the piece is probably called for: "If you believe in karma or capricious supernatural agencies which have an active interest in balancing accounts, chortling about Ruby on Rails developers suffering at the moment would be about as well-advised as a classical Roman cursing the gods during a thunderstorm while tapdancing naked in the pool on top of a temple consecrated to Zeus while holding nothing but a bronze rod used for making obscene gestures towards the heavens." Epic.


> I'd like to see "Don't deserialize into arbitrary objects" become General Programming Wisdom, but it is not there yet.

I think that's pifflesnort's point.


No, his point is that it already is and we can declare anybody who doesn't know to be worse than a "trained monkey". I say the evidence clearly shows that it is not widespread knowledge (IMHO "trained monkey" makes it sound like you could ask a normal college sophomore about this and get the correct response, which is just false, a lot of programmers don't even know what the term "serialization" means), and while it is a great goal we can not act as if it is already true.


Speaking to the first portion of your complaint, I remember coming across that particular feature of YAML years ago, and being surprised at how odd it was that it was present. I shrugged it off because (at the time) I considered YAML to be something that simply would not enter the serialize/deserialize phase of incoming requests. Ignorance of all the framework's actions on my part, and further ignorance in not thinking through the security effects of that particular feature, certainly. I would presume that I am not the only one in that situation.

You're definitely right that the security reports should be handled better. I hope that this whole situation results in a better security culture in the Ruby community.

Regarding your tone ("intellectually dishonest", "trained monkey", "systemic engineering incompetence pervades an entire language community"), it's a bit of hyperbole and active trolling. You are certainly right in many of your points, and you are certainly coming off as a jerk. It may not be as cathartic for you, but I'd suggest toning it down to "reasonable human being" level in the future.


> It may not be as cathartic for you, but I'd suggest toning it down to "reasonable human being" level in the future.

The Rails community has exhibited such self-assured, self-promotional exuberance for so long (and continues to do so here), it feels necessary to rely on equivalently forceful and bellicose language to have a hope of countering the spin and marketing messaging.

Case in point, the article seriously says, with a straight face:

"They’re being found at breakneck pace right now precisely because they required substantial new security technology to actually exploit, and that new technology has unlocked an exciting new frontier in vulnerability research."

Substantial new security technology? To claim that a well known vulnerability source -- parsers executing code -- involves not only substantial new technology, but is a new frontier in vulnerability research?

This is pure marketing drivel intended to spin responsibility away from Ruby/Rails, because the problems are somehow advanced and new. This is not coming from some unknown corner of the community, but from a well-known entity with a significant voice.


I can understand your frustration with the community. I share this frustration at many times, because I feel that Rails / Ruby tends to value style over substance.

I'll also raise an eyebrow at that particular sentence, though without spending much time looking into what's backing it I can only add that I too find it slightly hard to believe.

I definitely question your stated intent. Were you to "counter the spin and marketing messaging", would that reduce the number of vulnerable machines? Overall, reduce the number of people that use Ruby/Rails, if that is your intent? Given the number of comments you've made to that effect versus the number of folks using Ruby/Rails, I'd suggest you have a very long battle in front of you.

Put another way, I perceive your tone as exasperated and reactionary towards a group that you happen not to like. If you are indeed trying to achieve some greater good here, I believe there are more effective ways you could achieve it.

Otherwise, just tone it down in the future. You had good points, there's no need to insult people from an effectively unassailable position.


> Overall, reduce the number of people that use Ruby/Rails, if that is your intent? Given the number of comments you've made to that effect versus the number of folks using Ruby/Rails, I'd suggest you have a very long battle in front of you.

I'd like it to be 'cool' in the Ruby community to apply serious care towards security, API stability, code maintainability, and all the other things that aren't necessarily fun, but are very much necessary to avoid both huge aggregate overhead over time, and huge expensive failures like this one.

I'd like to see a shift towards an engineering culture where taking the time to consider things seriously is considered 'cooler' than spinning funny project names, promoting swearing in presentations, and posting ironic videos.

It seems increasingly obvious to me that for this to occur, one has to push back against emotive marketing with a similarly forceful approach, and thus shift the conversation.


> The bug in the YAML parser was reported and the author of the YAML library genuinely couldn't figure out why this mattered or how it could be bad.

Is that seriously what happened? It sounds oddly similar to the Rails issue from about a year ago (the one in which the reporter was able to commit to master on Github), even though I believe that was a separate set of developers altogether.

If so, then that might suggest a larger community/cultural issue, which makes me wonder what other exploits exist but haven't been reported (publicly) yet...


> Is that seriously what happened?

Surprisingly, yes: https://github.com/tenderlove/psych/issues/119

And the RubyGems folks are trying to handle this with whitelisting specific classes that the YAML parsing will still be allowed to instantiate:

https://github.com/rubygems/rubygems.org/pull/516/files


As a temporary workaround until Psych is updated. At least that's how I read it.

We can either sit around throwing stones at them or roll up our sleeves and help. I'm not sure what there is to gain from the former.


I agree with the sentiment to help and not just throw stones, but I think all the outrage and whatnot is very useful: it makes this kind of stuff less likely to happen.


Rails needs to die. It is super nice to code in (for a certain class of problems, ie. CRUD apps) and the language is awesome but it is too big and insecure to use.


I'm willing to bet if you put someone like benmmurphy, ptacek, or homakov in your codebase he'll have you begging for a web framework in hours.


I wonder: if various companies pooled money for a comprehensive audit of Rails by ptacek et al., one would think that's a win-win for all.


I wondered exactly the same thing, a security-chase kickstarter or similar.


Anything you implement to replace the functionality missed by not using Rails will be, statistically, just as insecure. Arguably even more so, because you will no doubt lack the peer review a large project like Rails benefits from.


Unless you implement it in a fundamentally simpler way.


I don't think so. Rails has to cover all cases; you just have to code the few cases that you actually use.

And even if you get it wrong, you get it wrong in a different way. That might mean that you are technically more at risk, but so long as the attack is focused on getting as many targets as possible, rather than you explicitly, then that is arguably a great strategy: the cost of adapting an already existing attack to target a novel target is going to be astronomically high, versus using an already existing vulnerability. If you are refining nuclear material for Iran, you are going to need all the protection you can get; if you are just another start-up, you just need to not be vulnerable to the latest drive-by exploit.


Rails doesn't need to die, but seeing how Ruby devs like bashing other languages, this event seems like karma to me.


Can we please try to avoid making generalisations like this? Yes, the Ruby community has some very vocal contributors with very questionable social skills. Please don't assume that all Ruby developers are egotistical hipster hackers. The creator of Ruby, Matsumoto Yukihiro, is one of the most softly spoken and humble individuals I have encountered in technology. We can all learn from his example.


Some ruby devs do, yes. Sadly, this feudalistic approach is prevalent in our industry which hurts all of us. It is probably the reason we have to keep re-learning the same concepts over-and-over again.

There is no karma here, there is just a race to the bottom for all of us. I thought the point of open source was for us all to group together and find and address these issues?

You know, kumbaya and all that...


Different issue altogether. It was mass-assignment, where the default Rails config did not have the whitelist enabled. The howto is here:

http://homakov.blogspot.ca/2012/03/how-to.html


Actually it is the same issue as basically all other security issues in web programming (attack on crypto aside):

Failure to blacklist non-conforming input.

Really, it is that simple and that complicated.


Wouldn't whitelisting conforming input be a better approach? I realize it may be more difficult, but wouldn't that be more secure?

Edit: I'm genuinely interested - I always try and whitelist things when I'm building software. Although I have next to no background when it comes to security in particular.


Whitelisting is what Rails did to get around the mass assignment issue. It had been solved for a while; it just was not the default configuration setting.

Whitelisting is what the rubygems folks are doing to work around this problem until a better implementation is put in-place in the YAML parser.

Generally, it is a better solution but it is more difficult and can break a lot of dependencies if not implemented correctly.


Stupid me. I meant to write whitelisting.

Yes, you are absolutely right.


I know it was a different vulnerability - I was asking more whether the same developer(s) was/were responsible, since this seems to be a comment pattern that I'm hearing with regards to the initial response to the vulnerabilities.


> It sounds oddly similar to the Rails issue from about a year ago (the one in which the reporter was able to commit to master on Github)

Hey, yes, the yaml bug is _very_ similar. A whitelist is better than no list at all


What is crazy to me is that everyone has had this bug, and learned from it, and fixed it. Why has it taken so long for Rails? Java has this bug: you can't deserialize untrusted input without a lot of work. Python has this bug: you can't unpickle untrusted input. Bad Javascript JSON parsers that just call eval() have this bug. It's not a complicated concept: you can't treat untrusted user input as code to execute. How'd the YAML developers miss it?
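
For what it's worth, Ruby's own standard library has the same documented sharp edge, so the point isn't YAML specifically. A two-line contrast (the input string is illustrative only):

    require 'json'

    untrusted = '{"a":1}'       # stands in for attacker-controlled bytes
    JSON.parse(untrusted)       # fine: can only produce hashes, arrays, scalars
    # Marshal.load(untrusted)   # Ruby's pickle equivalent; its docs warn it must
    #                           # never be fed untrusted data, for these reasons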


My analysis: layering. YAML doesn't in itself execute untrusted code, but it has a bunch of semi-experimental features most people don't know about or use, and one of those features (deserializing arbitrary classes) has a side effect that sets values on certain other classes -- not written by the people who wrote the YAML parser -- that then, often later on in their lifecycle, execute this untrusted code. I'm not saying this as an excuse -- I have long mistrusted YAML's blasé complexity, not to mention Rails' anarchic pass-HTTP-params-directly-into-the-DB pattern -- but as an explanation of how they missed it.

Also, more than other communities, Ruby has a cultural gap between the people developing the language and core libraries and the people using it to write web apps and frameworks.

Here's two good technical writeups of the exploit as it applies to Rails apps: http://blog.codeclimate.com/blog/2013/01/10/rails-remote-cod... http://ronin-ruby.github.com/blog/2013/01/09/rails-pocs.html


Thanks, the complexity and layering are presumably part of the problem. This reminds me of the old XML External Entity attack that keeps coming back because developers don't realize you can coerce most XML parsers to open arbitrary URLs. That's been affecting products that parse XML for 10 years now and still hasn't stopped and leads to ugly security holes (like in Adobe Reader). The root cause is XML is far too complex and has surprising features, in this case, entity definition by URL.


The initial code for Rubygems was written at Rubyconf '06, if I remember correctly. The Ruby world was very, very different back then. Same with Rails, originally released in '05.

My point is that it's 'taken so long' because all this code is stuff that was written in a totally different time and place. And then was built on top of, after years and years and years.

Now that it _is_ being examined, that's why you see so many advisories. This is a good thing, not a bad one! It's all being looked through and taken care of.


Because it was not obvious that "allowing de-serialization to objects of arbitrary classes specified in the serialized representation" was the same thing as "treating input as code to execute".

And then, as someone else said, because of layering. The next downstream user using YAML might not have even realized that YAML had this feature, on top of not realizing the danger of this feature. And then someone else downstream of THAT library, etc.

Maybe it _should_ have been obvious, but it wasn't, as evidenced, as you say, by all the people who have done it before. After the FIRST time it was discovered, it should have been obvious -- so why did it happen even a second time?

In part, because for whatever reason none of those exploits got the (negative) publicity that the rails/yaml one is getting. Hopefully it (the danger of serialization formats allowing arbitrary class/type de-serialization) WILL become obvious to competent developers NOW, but it was not before.

20 years ago, you could write code thinking that giving untrusted user input to it was a _special case_. "Well, I guess, now that you mention it, if you give untrusted input that may have been constructed by an attacker to this function it would be dangerous, but why/how would anyone do that?" Things have changed. There's a lot more code where you should be assuming that passing untrusted input to it will be done, unless you specifically and loudly document not to. But we're still using a lot of code written under the assumptions of 20 years ago -- assumptions that were not necessarily wrong cost/benefit analyses 20 years ago. And yeah, some people are still WRITING code under the security assumptions of 20 years ago too, oops.

At the same time, we have a LOT MORE code _sharing_ than we had 20 years ago. (Internet open source has changed the way software is written, drastically.) And the ruby community is especially 'advanced' at code sharing, using each other's code as dependencies in a complex multi-generation dependency graph. That greatly increases the danger of unexpected interactions of features creating security exploits that would not have been predicted by looking at any part in isolation. But we couldn't accomplish what we have all accomplished without using other people's open source code as more-or-less black-box building blocks for our own, and we can't do a full security audit of all of our dependencies (and our dependencies' dependencies, etc).


Presumably they missed it the same way the developers of those other things you listed did. I can only assume they didn't know about these other problems when developing their YAML parsers.

Of course, you could argue that developers should always be thinking about and searching for security related issues in whatever field they're working in, but that doesn't appear to be the norm at the moment.


*Python has this bug; you can't unpickle untrusted input*

I thought you could unpickle untrusted input in Python? Sure there's a great big red warning message on the documentation, and hence it's currently rare for people to do it, but it is technically allowed, right?


Sure, this is “can't” in the sense “must not”.


*This is systemic engineering incompetence that apparently pervades an entire language community*

This is master level, "Captain Obvious"-style trolling, beyond me how this is the top comment in a place like HN.


> Someone implemented a YAML parser that executed code. This should have been obviously wrong to them, but it wasn't.

Someone implemented a YAML parser that could serialize and de-serialize arbitrary objects referenced by class name.

It was not obvious that this meant it 'executed code', let alone that this meant it could execute _arbitrary_ code, so long as there was a predictable class in the load path with certain characteristics, which there was in Rails.

In retrospect it is obvious, but I think you over-estimate the obviousness without hindsight. It's always easy to say everyone should have known what nobody actually did but which everyone now does.

As others have pointed out, an almost identical problem existed in Spring too (de-serializing arbitrary objects leads to arbitrary code execution). It wasn't obvious to them either. Maybe it _should_ have been obvious _after_ that happened -- but that vulnerability didn't get much publicity. Now that the YAML one has, maybe it hopefully WILL be obvious next time!

Anyhow, that lack of obviousness applies to at least your first two points if not first three. It was not in fact obvious to most people that you could execute (arbitrary) code with YAML. If it was obvious to you, I wish you had spent more time trying to 'paul revere' it.

> The issue was reported to RubyGems multiple times and they did nothing.

Now, THAT part, yeah, that's a problem. I think 'multiple times' is 'two' (yeah, that is technically 'multiple'), and only over a week -- but that still indicates irresponsibility on the rubygems.org maintainers' part. A piece of infrastructure that, if compromised, can lead to the compromise of almost all of rubydom -- that is scary, and it needs a lot more responsibility than it got. We're lucky the exploit was in fact publicized rather than kept secret and exploited to inject an attack into the code of any ruby gem an attacker wanted -- except, of course, we can't know for sure whether it was or not.


> The issue was reported to RubyGems multiple times and they did nothing.

Er, there would have been trouble on that end too ...

http://news.ycombinator.com/item?id=5139583


"The "everybody has bugs" response is intellectually dishonest."

Indeed. It's the "fallacy of gray". Nothing is black or white, hence everything is gray. Nothing is 100% secure, nothing is 100% insecure, hence everything is "semi-secure": it's bad, but not too bad, because every language / API / server can be attacked.

You've effectively substituted a black/white dichotomy with something even worse: instead of having only two options (black or white), you now only have one: gray.

It is probably one of the most intellectually dishonest logical fallacies of all time, and we keep seeing it more and more.

It's really concerning.


This quote caught my attention:

    There are many developers who are not presently active on a Ruby on Rails
    project who nonetheless have a vulnerable Rails application running on
    localhost:3000.  If they do, eventually, their local machine will be
    compromised. (Any page on the Internet which serves Javascript can, currently,
    root your Macbook if it is running an out-of-date Rails on it. No, it
    does not matter that the Internet can’t connect to your
    localhost:3000, because your browser can, and your browser will follow
    the attacker’s instructions to do so. It will probably be possible to
    eventually do this with an IMG tag, which means any webpage that can
    contain a user-supplied cat photo could ALSO contain a user-supplied
    remote code execution.)

That reminded me of an incredible presentation WhiteHat did back in 2007 on cracking intranets. Slides[1] are still around, though I couldn't readily find the video.

[1]: https://www.whitehatsec.com/assets/presentations/blackhatusa...


Yep. localhost:3000 is only the most obvious guess you could make, too. You could try redmine:3000 and see who that worked on, or 192.168.[enumerate all IPs], or the top 1,000 host names, or use a Javascript port scanner, or... yeah, lots of bad stuff. (I thought getting into that rabbit hole would make a long and convoluted post even longer. Suffice it to say the world is a grimmer and more dangerous place than we thought it was.)


The metasploit folks put a pen-tester's guide to finding Rails-running targets on their own blog here:

https://community.rapid7.com/community/metasploit/blog/2013/...

In addition to common port numbers and stuff like redmine, their tipoffs include looking for Rails-style session cookies, and HTTP response headers emitted by Rails or support machinery. These include "X-Rack-Cache:" and the "X-Powered-By:" header that Phusion Passenger tosses in even if you've configured Apache itself to leave version numbers and component identifiers out of the response. (I'm not sure there's any better way to suppress this stuff than adding mod_headers to the Apache config and using "Header unset")


Note: from a sysadmin standpoint, http://localhost:3000 commonly refers to http://127.0.0.1:3000. When running "rails server" locally in development mode, you actually get http://0.0.0.0:3000. These are not the same! 127.0.0.1 means that "rails server" can only be accessed from your local machine, whereas 0.0.0.0 means it can be accessed on any address your computer is listening on. If you are on a local intranet, say at the office, then you probably have both a 127.0.0.1 and a 192.168.x.x interface, so everyone on the network can access it via 192.168.x.x -- or, god forbid, a public IP ;)
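
If you want the loopback-only behavior, `rails server` accepts a binding flag (a long-standing option, but check `rails server --help` for your version):

    # Bind the development server to loopback only:
    rails server -b 127.0.0.1 -p 3000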


Again, even if your development box is being physically protected by the Swiss guard with a firewall that sprung from Donald Knuth's forehead with the River Styx separating it from all inbound connection attempts, it won't even matter, because you run a browser on your development box, that browser can always connect to your development box, and that browser can be instructed to pass malicious input to your development box if you do innocuous things with it like e.g. viewing web pages on the public Internet.


Yeah, I get it. I guess I'm making an additional point, that anyone can have direct access to the development environment via any address your machine is listening on -- not just localhost.


Other than ease of setup, I've never understood why you wouldn't develop in the same environment as what you're running in production. Setting up a vm is trivial and allows you to easily open/close access to your application as needed.

There are also a lot fewer headaches once you've decided to move it into production.


Vagrant[1] is great for this :)

[1] http://www.vagrantup.com/


I don't think this would be any more secure if you were to enable networking on the VM to allow requests from the host machine, which seems like common behavior so that the developer can access the webapp running on the VM from a browser on the host. Or are people developing inside a VM and then testing with a browser on the VM?


You can set up a cloud VM (on Rackspace for example), and then set up your VM's firewall (iptables) to only allowing connections from your test machines (could be your local IP, or the IP address of the test machines from Browserstack or Sauce.) This allows you to keep your dev/staging/prod environments in sync (and allows you to do things like blueprint/image your dev setup to build it elsewhere), decouple your dev from your staging/prod, and allow you to develop from anywhere without needing to carry around the same laptop or rebuild your dev environment on another box - particularly if you use an intermediate system with a static IP so you can reconfigure your firewall whenever needed.


I agree 100%... but doing so would not protect you from the security issue being discussed.


Wouldn't the same origin policy prevent requests to localhost?


You might be thinking of preventing Javascript on host X from sending XMLHttpRequests to host Y. That will not prevent Javascript on host X from adding a form to the web page and having it post to host Y with arbitrary content, or from having an IMG tag on host X attempt to load (via a GET) a URL on host Y (assuming someone finds a pathway that works via GET requests for these or related vulnerabilities).


afaik you can't use cross site requests to exploit either the xml bug or the json bug without also exploiting a browser or plugin bug. both issues depend on setting a request header and you are not allowed to do this in the browser security model. but it sucks that CSRF bug becomes RCE bug :(


> but it sucks that CSRF bug becomes RCE bug :(

You just said it -- it can't be exploited via CSRF, because you cannot set the header.

NO EXPLOIT FOR LOCALHOST:3000 calm down


i actually lied :) there is #from_xml, so if you were doing Hash.from_xml(params[:trololol]) or Post.from_xml(params[:lols]) then you would be vulnerable to a localhost:3000 attack. but I don't think there is a generic attack; it would have to be application-specific.
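
Roughly the shape of that attack, as I understand it, on Rails versions predating the January patch (the payload is hypothetical; SomeClass stands in for whatever vulnerable class the attacker targets):

    require 'active_support/core_ext/hash/conversions'  # provides Hash.from_xml

    # Pre-patch, Rails' XML parameter parsing honored type attributes, so an
    # element marked type="yaml" had its body handed to the YAML loader:
    xml = '<payload type="yaml">--- !ruby/object:SomeClass {}</payload>'
    Hash.from_xml(xml)  # on vulnerable versions: YAML.load runs on the body,
                        # instantiating SomeClass (patched versions raise instead)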


you still need to bypass CSRF protection, which is on by default


Yet.


You can dynamically create an iframe in JavaScript and do a regular form post to localhost:3000 through it.


how u gonna set content-type?


When I look at the Ruby/Rails community, the word that comes to my mind more than any other is hubris.

You see this in things such as security issues being marked as wontfix until they are actively exploited (e.g. the Homakov/GitHub incident), in the attitude that developer cycles are more expensive than CPU cycles, and on a more puerile level in the tendency towards swearing in presentations.

I've always had the impression that the Rails ecosystem favours convenience over security, in an Agile Manifesto kind of way (yes, we value the stuff on the right, but we value the stuff on the left even more). One of the attractions of Rails is that it is very easy to get stuff up and running with it, but some of the security exploits that I've seen cropping up recently with it make me pretty worried about it. I get especially concerned when I see SQL injection vulnerabilities in a framework based on an O/R mapper, for instance.


I have the same impression.

Many start-ups are built by well-meaning people who have no formal CS or even engineering background and thus are somewhat out of touch with what it means to build a robust system. It's natural for people to focus on "what's important" and ignore boundary/edge conditions, while in reality 90% of sound engineering is getting boundary/edge cases right.

And as most such start-ups use Ruby/Rails due to the ease of "getting it up and running", they inject the Ruby/Rails ecosystem with this "focus on what's important" mindset, and important boundary issues, including security, are neglected.


Except, I don't think it's even all that easy. In fact, there was a bit of a meme I remember from last year in which people stated "Rails was never marketed as easy to use!".

I think in 2006/2007, there was a simplicity to the basic "get up and running" aspect, but Rails 3.x+ is a pretty large ecosystem with quite a lot of decision points to educate yourself on to do any sized project beyond 'hello world'.


I think the recent Rails, Java, RubyGems and other vulnerability issues have been an absolute boon to the industry. And not just for the increased business I think most security consultants are going to be seeing.

The exploits have happened in ways that have exposed and hammered home the myriad places many applications expose unexpected side channels and larger attack surfaces than you'd think. These issues have exposed a broader range of people to vulnerability, and I think opened a lot of people's eyes to the need for a sense of security and what that really means.

Top that off with the level of explanation we've seen in at least the Rails and Ruby exploits, and it's been a tremendous educational opportunity for a lot of people who will benefit greatly from it, and by proxy their users.

When the idea of a "SQL Injection" first became really prevalent, we saw an uptick in concern for security amongst framework developers, as far as I could tell. I think this will help get some momentum going again.

Speaking as a non-expert on the subject, security is all about a healthy sense of paranoia, across the board :)


+1

I was going to post something similar. Also we often see people insulting others when they post exploits too early or describe exploits in depth too early. Posting stuff like: "You're an .ssh.le, wait a few days before posting that".

I don't think so. I think exploits should be publicly posted as soon as possible, affecting as many people as possible. Maybe even damaging exploits, actively deleting users' data or servers' data.

The bigger the havoc, the sooner the entire industry is going to realize security is a very real concern.

People are still considering buffer overflows, SQL injection, object instantiation through deserialization of query parameters, etc. to be "normal" because "everybody creates bugs" and "a lot of bugs can be exploited".

I think it's the wrong mindset. Security is of the utmost importance and should be thought of from the start.

For example, I'm amazed by the recent seL4 microkernel, which makes buffer overflows provably impossible (inside the microkernel), or even the Java VM, which makes buffer overflows in Java code impossible. It's not perfect -- we've seen lots of major Java exploits, but zero of them were buffer overruns/overflows in Java code (some were in 3rd-party C libs, but zero in Java code itself).

So security exploits are not inevitable.

All we need is people, from the very start, designing systems more resilient to attacks.

The more attacks, the more exploits, the more bad reputation and shame on clueless developers, the better.

I actually start to love these exploits, because they fuel healthy research by the white-hat community.

And one day we'll have more secure microkernels, more secure OSes, more secure VMs, more secure protocols, etc.

Let the security exploits come.


It would be interesting if someone wrote a worm that just took all the vulnerable rails apps offline. That way we would have less worry about a million compromised databases. It could be launched from a bookmarklet run from tor browser, and would probably exhaust every ip address in a few days. It would also land whoever did it in jail for a really long time.


Maybe, though with today's legal climate such a thing would be extremely, extremely risky. Gaining unauthorized access to a computer system and exploiting it in a way that potentially causes loss of revenue is not something you want to do lightly even if it might be the right thing to do.


This is a pretty good example of why I hate big frameworks. They are simply too big to prevent stupid issues like YAML being parsed out of JSON and XML requests.

If you are like me, you would expect that YAML was used in the configuration files and nowhere else. A small framework like Sinatra wouldn't have been big enough to hide an issue like this.


Really? Has a giant framework like Django had bugs this severe, that allowed data-file parsers to execute arbitrary attacker code?


Nearly. Both piston and tastypie (the two leading frameworks for writing APIs for Django) were affected by a very similar code execution vulnerability a while ago. See https://www.djangoproject.com/weblog/2011/nov/01/piston-and-...


Those were both third-party modules for Django (albeit popular ones). But at best, this means that Rails devs have known since Nov 2011 or so that YAML code should be carefully audited, especially since there was no equivalent in Ruby for Python's .safe_load (http://stackoverflow.com/questions/14348538/is-there-an-equi...).

I don't mean to beat on the Rails guys too hard, though; they're off shipping stuff and I'm not, and I'm not very fond of those who criticize from a safe distance from the action. But I think it's fair to say that this could have been foreseen earlier (or much earlier, depending on who you ask).


Wow. Not sure how they managed to miss the big warnings about yaml.load. Notice, however, that unlike Ruby's YAML parser the Python one does actually have a yaml.safe_load.


The OP specifically mentioned that a similar bug was found in Django, and had previously been found in another big java framework.

I understand the appeal of "magic" to solve issues when you are under a deadline. It is just that trusting it is dangerous.


I checked the article, and that bug was not similar in the relevant sense. It was a security bug related to hashed message authentication codes, a class of security exploits related to very non-trivial issues in cryptography. It was not comparable to "let's allow a data-file parser to execute arbitrary attacker code".


> The recent bugs were, contrary to some reporting, not particularly trivial to spot. They’re being found at breakneck pace right now precisely because they required substantial new security technology to actually exploit, and that new technology has unlocked an exciting new frontier in vulnerability research.

What technology is he talking about here?


Basically, a new way to combine things we already know about. Like, you might have already unlocked "stone axe", "vines", and "dry wood", but given those three primitives I can show you a novel way of combining them that repeatably produces fire. We now have a fun and exciting new way to use commonly-accepted-general-purpose-programming-tools to blow stuff up, and are iterating -- rapidly -- on bringing other previously-assumed-safe constructs into the "blows stuff up" zone of knowledge.


I'm not sure I get what the analogy refers to.

When I first read your blog post I got the impression that you were saying that the YAML vulnerability was found with some new code-scanning technology that lets us find bugs in Rails faster. Or are you just saying that discovering the existence of the YAML.load() class of vulnerability is "new security technology"?

Or are you talking about the ronin support module people are using in some of the PoCs?


So if I had told you at Christmas three salient facts:

+ Some objects are unsafe to instantiate if you don't pick all values you initialize them with very carefully.

+ YAML can instantiate objects from any class.

+ Rails uses YAML, in a lot of ways.

You might have said "Yes, I am aware of all these three things. Do you have anything important to tell me?" Now, if I demonstrate to you working PoC code which combines those three into remote code execution, the substantial work involved in producing that PoC code -- finding the vulnerable classes which ship with Rails, demonstrating how to get data from where the user controls it into the far-inside-the-framework bits where Rails might actually evaluate YAML, etc etc -- immediately starts suggesting lots of other fun ways to use variants of that trick.


> You might have said "Yes, I am aware of all these three things.

An experienced engineer ought to have said "this is a perfect storm, and it is wrong that YAML can instantiate objects from any class, and there will be a vulnerability here".

The reason such an engineer ought to say this is because 1) In general terms, it should be self-evident that any system built on riding the edge of risk will fail, and 2) We have countless examples over decades of this exact issue occurring repeatedly.

If you need a PoC to understand the severity of such an issue, you do not have the proper engineering mindset to be writing secure code. This was a lesson much of the industry learned in the 90s, where it was necessary to provide a PoC before many developers would take action on an issue.


Those three facts constitute a security vulnerability, and did so before we had working exploit code.

Over the past few years people have developed technology to make it easier to exploit null pointer dereference bugs for instance. That doesn't mean they weren't security bugs before we were good at reliably exploiting them.

Exploitation techniques increase the impact of vulnerabilities certainly, but the 3 facts you stated above would indicate a security issue even before we knew the right class to instantiate.


If told all three things alarm bells should be ringing very loudly.

However if told any one of them you might not worry enough even if the other facts were somewhere deep in your background knowledge.


Anyone competent would, when knowing those three things, immediately think of the possible exploits.


And yet they didn't, and here we are.

Yes, it's super easy to call everyone involved with the YAML library incompetent, but let's be honest - they're not, in general. They fucked up here, and hindsight is 20/20, but I think it's only face-stabbingly obvious now because of what's actually happened.


I wouldn't necessarily blame the YAML library devs, assuming they documented that it could instantiate arbitrary classes.

I do more Perl, but I can tell you that "this deserializer can create new arbitrary objects" would set off screaming alarm bells. And that is because there is a long history of trying (and failing) to safely do stuff like this (e.g. note the lack of warranty for the Safe CPAN module: http://search.cpan.org/~jhi/perl-5.8.0/ext/Opcode/Safe.pm )

Python has the same well-known and well-documented issue with their pickle module.

In general using /any/ deserializer that can create arbitrary objects of arbitrary classes has been known to be a bad idea for some time, and as far as I can tell Ruby YAML documents that it supports doing exactly this: http://www.yaml.org/YAML_for_ruby.html#objects
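
The documented behavior in a few lines, as YAML.load worked at the time (much later Psych releases finally made safe loading the default; Point here is just an example class):

    require 'yaml'

    class Point
      attr_reader :x, :y
    end

    # Point never opted in to YAML support; the tag alone is enough.
    p YAML.load("--- !ruby/object:Point\nx: 1\ny: 2\n")
    # => #<Point @x=1, @y=2>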

So if we were talking about a security vuln from something like JSON where we expect benign data to be the only possible output I think I'd agree completely.

Using a deserializer even more powerful than that is at the very least a bad smell from the POV of security, especially post-Spring (fixed in 2011, even if it was re-iterated in 2013), so I wouldn't be so quick to claim this could only have been predicted in hindsight.

I get that it's still hard work to move from "I made an object of my choosing" to "framework pwned" but you pretty much have to assume that the former implies the latter nowadays. It was more than 5 years ago now that tptacek was gushing over Dowd's "Inhuman Flash Exploit" and I somehow don't think that pen testers and security experts have gotten any dumber since then. ;)


I think we should train ourselves to be fully alarmed at "...can instantiate arbitrary...", or just become hypervigilant around synonyms of "any".


That seemed like an ill-spoken line. What's happening here is that a row of dominoes is falling because of one class of vulnerability. The publication of the YAML hole has caused people to search for a) exploit vectors for that hole, and b) similar classes of exploits that are now much more obvious due to the knowledge of the original hole.

I think that to suggest that this bug could not have been found before is wrong, but the reason we're seeing such a cascade is because security almost never happens in a bubble.


YAML automatic code execution.

Previously you had to send something to rails and find a way to cause rails to execute that. Not so easy.

Now? You just have to send some YAML to rails.


Or JSON. If I understand one of the vulnerabilities properly, the JSON parser in some versions of Rails relies on the YAML parser.


This was a hugely helpful big-picture overview of the recent vulnerabilities. Everyone, please go read it.

I had been meaning to get some context for the recent spate of security problems and this provided that in spades. Thanks for taking the time to write it up and post it.


> The first reported compromise of a production system was in an industry which hit the trifecta of amateurs-at-the-helm, seedy-industry-by-nature, and under-constant-attack. It is imperative that you understand that all Rails applications will eventually be targeted by this and similar attacks, and any vulnerable applications will be owned, regardless of absence of these risk factors.

Who was the first reported compromise of a production system?


I believe Patrick is referring to this Bitcoin exchange being hacked: http://news.ycombinator.com/item?id=5043122


All the RubyGems stuff is happening at a high rate and I understand that over 90% of the Gems are now verified and it looks like nothing was backdoored but I couldn't find a good summary of the current situation so I have a couple of questions.

1) Is it currently safe to "bundle update" and be confident that only verified Gems will be provided? I don't mind errors on any unverified ones but don't want to download them.

2) Is there a drop in replacement for RubyGems? The problems that have occurred this month would have been multiplied if RubyGems was unavailable at the time Rails had an apocalyptic bug.


Pretty good breakdown going on here [1]. To be honest, while the chosen tool to provide the update is odd, it is one of the best post-mortems that I've seen, and I applaud the volunteers for taking it so seriously.

1. I wouldn't say so. Not until they're all the way through.

2. Not at the moment, but general guidance is that we should all have local gem repos that we maintain ourselves and only rely on external sources when needed. It is something I'm going to look into ASAP.

[1] https://docs.google.com/document/d/10tuM51VKRcSHJtUZotraMlrM...


It is extensive and up to date, but it is lacking a brief status of the current situation and whether the site is safe to download from. The answer is currently "no"; 90% safe is unsafe.

It's a shame that they seem to have put the service back up in an unsafe mode, I would have hoped that they could have quarantined the unverified Gems.

Edit: Looking at the status page, the API is down, so it can't be accessed from Bundler; they are doing it the good/safe way.


Regarding #2, you could use gems from the github repositories (just specify the tag) instead of relying on gems hosted on RubyGems.

Obviously then it is up to you to verify everything, including that you're using the right versions and what not.
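
For example, in a Gemfile (v3.2.11 is the patched Rails tag; the nokogiri ref below is a placeholder for a commit SHA you've verified yourself):

  # Pull straight from GitHub instead of rubygems.org. A full commit
  # SHA (:ref) is harder to tamper with than a movable tag.
  gem 'rails', :git => 'https://github.com/rails/rails.git', :tag => 'v3.2.11'
  gem 'nokogiri', :git => 'https://github.com/sparklemotion/nokogiri.git',
                  :ref => 'abc123'  # placeholder, use a SHA you trust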


Probably not safe enough, as most of them list external dependencies that will fall back to RubyGems.


I'm not sure of the best way to go about it, but if all the dependency gems are also on Github, a script might be able to pull the SHAs from the right version of each dependency and return the proper entries for a Gemfile.


gemspecs/Gemfiles rarely list the gh repo, so you'll likely have to get it from a source which is probably rubygems. If it's compromised, they could update the gh repo location as well.


I guess, at least for the most common gems, there could be an independent list which maps gem names to their Github repos. Of course, that list would have to be trustworthy. It would be nice to solve that mapping problem anyway, because sometimes it's not entirely clear which Github repo is the official source for a project.


Good point.


This blindness to how bad YAML was is causing "convention over configuration" to devolve into "security by convention".


you mean "insecurity by convention"?


To me it seems that all of this is due to the obsession with implicit behaviour in Rails, and to some extent Ruby.

I hope they learn from this and stop chanting "convention over configuration" when told that explicit is better than implicit.


Is this YAML vulnerability something that can be patched in relatively short order without Rails itself having to be completely rewritten?

Or should I basically just not run Rails on any machine ever anymore, get a different web server, and start implementing my own request routing and ORM without any sort of YAML-parsing magic?

>One of my friends who is an actual security researcher has deleted all of his accounts on Internet services which he knows to use Ruby on Rails. That’s not an insane measure.

So anyone who uses Twitter, for example, could have their passwords and other data stolen through this exploit?


Is this YAML vulnerability something that can be patched in relatively short order without Rails itself having to be completely rewritten?

Long story short: There's a variety of things that can be done to mitigate this vulnerability and an active conversation on which is the best option. My go-to suggestion would be having Rails ship with either a non-stdlib YAML serialization/deserialization parser or have it modify the stdlib one, with the major point of departure being "Raise an exception immediately if the YAML encodes any object not on a configurable whitelist, and default that whitelist to ~5 core classes generally considered to be safe."
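
To illustrate the shape of that whitelist idea, here is a minimal, unaudited sketch built on Psych's AST (the function names are mine, and a real implementation needs far more care than this):

  require 'psych'

  # Parse to the AST first, refuse anything tagged as a Ruby object,
  # and only then convert the tree to plain Ruby values.
  def whitelisted_yaml_load(yaml)
    tree = Psych.parse(yaml)
    reject_ruby_tags!(tree)
    tree.to_ruby
  end

  def reject_ruby_tags!(node)
    tag = node.respond_to?(:tag) ? node.tag : nil
    raise ArgumentError, "disallowed YAML tag: #{tag}" if tag && tag.include?('ruby')
    (node.children || []).each { |child| reject_ruby_tags!(child) }
  end

  whitelisted_yaml_load("--- [1, 2, 3]")                  # => [1, 2, 3]
  whitelisted_yaml_load("--- !ruby/object:OpenStruct {}") # raises ArgumentError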

Or should I basically just not run Rails on any machine ever anymore, get a different web server, and start implementing my own request routing and ORM without any sort of YAML-parsing magic?

That is astonishingly unlikely to be a net-win for your security.

So anyone who uses Twitter, for example, could have their passwords and other data stolen through this exploit?

I'd expect that Twitter (in particular) has a better handle on it than your average startup, but successful exploitation of this means the attacker owns the server, if the attacker owns the server they probably get all the servers, and they will tend to gain control of any information on all of the servers. That can include, but is certainly not upper-bounded by, passwords/hashes stored in the database. It is absolutely possible, and indeed likely, that many people will be adversely affected by this vulnerability without themselves running Rails or even, for that matter, knowing what Rails is.


>>Or should I basically just not run Rails on any machine ever anymore, get a different web server, and start implementing my own request routing and ORM without any sort of YAML-parsing magic?

>That is astonishingly unlikely to be a net-win for your security.

In the long run, you are probably right, once this gets fixed, which will probably be soon considering how much attention is on it.

But in the short run, is there anything worse than a vulnerability that allows a remote attacker to automatically detect, penetrate, and execute arbitrary code on your machine? To the point where it's not even safe to run the framework on localhost on your dev box?


I think you could make YAML safer and still support most usage by simply preventing the deserializer from instantiating any language-specific native types. Just support the default types (string, list, map) and ignore all language-specific tags. In fact, it looks like the core schema defined in the YAML spec would serve this purpose and would have similar types to JSON.

By making that the default schema, developers would have to explicitly request the dangerous "ruby" schema that makes arbitrary Ruby objects.
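
A rough sketch of what that default could look like, again via Psych's AST (a hypothetical illustration, not a vetted implementation):

  require 'psych'

  # Blank out every node's tag before conversion, so !ruby/... markup
  # degrades to plain strings/lists/maps instead of instantiating objects.
  def core_schema_load(yaml)
    tree = Psych.parse(yaml)
    strip_tags!(tree)
    tree.to_ruby
  end

  def strip_tags!(node)
    node.tag = nil if node.respond_to?(:tag=)
    (node.children || []).each { |child| strip_tags!(child) }
  end

  core_schema_load("--- !ruby/object:OpenStruct\nfoo: bar\n")
  # => {"foo"=>"bar"}, not an OpenStruct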


So I went through my heroku closet and cleaned everything up (pulling the plug on unneeded apps and making sure needed apps were up to date).

My question: do these security issues affect Sinatra apps?


These issues affect all apps which deserialize arbitrary user-specified YAML. I suspect Sinatra doesn't provide any attack surface by default here, but Sinatra apps tend to have a lot of bolted-on functionality, so it's worth doing an audit of any Sinatra apps you run to make sure you haven't introduced any exploitable surfaces.


"Any page on the Internet which serves Javascript can, currently, root your Macbook if it is running an out-of-date Rails on it."

Why are you running Rails as the root user? This is a bad idea.

EDIT: I'm not really into client-side JavaScript these days, but when did browsers start allowing JavaScript to connect to anything except the server from which it came? That would be yet another Bad Idea.


You have bad assumptions. It is all about leveraging access to higher and higher levels.

1. You load the evil JavaScript.

2. That JavaScript adds an image with a URL pointing at localhost:3000.

3. When you load that URL, it causes code execution, causing your computer to open a connection somewhere and start taking instructions.

4. The instructions that arrive include downloading and installing software that takes advantage of known local root vulnerabilities in OS X.

5. Congratulations! Someone rooted your machine!

Nothing in this path required Rails to be run as root, or JavaScript to directly connect anywhere.


You're right. I wasn't seeing all the angles here. But to say this is limited to Macs seems disingenuous.


It is a tongue-in-cheek reference to widespread perceptions about Rails developers' hardware of choice.


He didn't say it was limited to Macs. He gave it as a random example of what could happen.


I'm pretty sure OP specifically said "Macbook" in the article. But see patio11's comment beside yours.


Getting from local user access to root access on an interactively-used Mac is almost trivial. Inject something into the user's bashrc/zshrc that watches their commands and waits for them to successfully use sudo. Then run sudo again immediately and do arbitrary things as root.

There are several tricks that can be used by JavaScript to connect to non-origin servers, in limited ways.

To create a GET, inject an <img>, <script>, <iframe>, or <style> tag. (Or several others.)

To create a POST, inject a <form> tag, and call form.submit()


Local privilege escalation is much easier than remote code execution. Once someone has the ability to execute code as a restricted user, there is generally at least one easily exploitable bug to get root. This is because people don't take local privilege escalation as seriously as remote code execution, and tend not to fix or patch them as quickly.


Is Rails moving to a YAML (or almost-YAML) parser that does not execute code for future major releases? I find it hard to believe that such functionality is used often. Until then, as the article says, people will just keep finding zero-days. This seems like the only logical choice for the Rails core team.


I'm losing my Ruby and Rails faith here; what gives? This is just as bad as leaving SQL injection attacks open.


This is just as bad as leaving SQL injection attacks open.

No, it is much worse.


This is not a Ruby or Rails issue. This is a software issue.


Given the severity, it'd almost be a public service to hit every public Rails server and exploit it to patch it with the security fix(es)...


New internet law: any sufficiently sized platform or framework will attract ever more compromise/malware attacks. Anyone still running Wordpress knows this all too well.


Some time ago I asked certain Ruby people how to dynamically load Ruby code (for configs). They told me it's Wrong. Seems that in practice the idea wasn't much worse than YAML after all.

I am still convinced that configs and templates should be treated as executable code and are best implemented in the same language they're used from. At least it makes certain things blatantly obvious. (It also makes a lot of other things possible without any extra coding/learning.)


You do all deploy from your own cache of all the gems you depend on, right? No? Why not?


That only helps you with availability though, doesn't it? You are just as likely to have pulled backdoored files and cached them as to get backdoored files directly. Also, at some point you need to update.

So I think it only helps if you are likely to need to deploy additional/alternative servers of the same versions. For significant deployed services this makes sense, but if you are only in development/testing or using a service like Heroku, it doesn't really help you very much, does it?


> You are just as likely to have pulled backdoored files and cached them as to get backdoored files directly.

At least your deployments will be consistent. This is a great starting point. Now all you have to do is check your cache against the backdoored version, and you instantly and verifiably know where your deployment stands.
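
Even a few lines of stdlib Ruby will fingerprint the cache for you (a sketch; diff the output across machines or against a known-good list):

  require 'digest'

  # Print a SHA-256 fingerprint for every cached gem, shasum-style.
  Dir['vendor/cache/*.gem'].sort.each do |path|
    puts "#{Digest::SHA256.file(path).hexdigest}  #{File.basename(path)}"
  end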


I'd love to. Any helpful guides on how to proceed?



Wasn't aware of bundle --deployment; that's very helpful, thank you steve.


Any time! Not enough people are.


  bundle package
will cache all of your deps in vendor/cache. You can install from this cache using:

  bundle install --local


Is there no hardened version of Psych which lets you either disable object deserialization or whitelist classes? That would seem like the safest option right now to guard against further vulnerabilities of this kind in Rails.


This is currently being discussed on https://github.com/tenderlove/psych/issues/119

There is also https://github.com/dtao/safe_yaml (hat tip @patio11, who also points out that this has not been audited for completeness/correctness)
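
Last I looked, safe_yaml usage was roughly the following, though the API is still settling, so check its README first (the two string variables are placeholders):

  require 'yaml'
  require 'safe_yaml'  # monkeypatches the YAML module

  YAML.load(untrusted_string)       # object-instantiating tags are refused
  YAML.unsafe_load(trusted_string)  # explicit opt-in to the old behavior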


Why do people write things like "We Should Avoid #’#(ing Their #()#% Up" instead of "We Should Avoid Fucking Their Shit Up"?

http://www.youtube.com/watch?v=dF1NUposXVQ



It seems like the Ruby guys should spend a little more time learning the basics of CS in between self-promotion sessions ("rockstar developers", anyone?).


How feasible would it be to have a gem that sits in the middleware stack, checks for possible attacks before the request gets any further, and blocks/shares the IPs of people fishing for exploits?

I could see it as a service company that shares blacklist info between sites and can even find new exploits from the "bad" requests.
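
A very rough sketch of that idea as Rack middleware (all names made up; a substring blacklist like this is trivially bypassable, so treat it as a starting point rather than a real WAF):

  # Rejects request bodies that carry YAML object-instantiation tags.
  class YamlTagFilter
    def initialize(app)
      @app = app
    end

    def call(env)
      body = env['rack.input'].read
      env['rack.input'].rewind      # leave the body readable downstream
      if body.include?('!ruby')     # crude heuristic for exploit probes
        # this is where you'd log env['REMOTE_ADDR'] to a shared blacklist
        return [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
      end
      @app.call(env)
    end
  end

  # Rails 3.x: config.middleware.insert_before 0, YamlTagFilter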


I'm not a Rails developer, is JRuby on Rails affected by this?


Why don't we have "building codes" for software?

There was a time when anyone who claimed to have the ability could design and build things like bridges and buildings. After enough of them collapsed due to repeated, avoidable mistakes, we said no, you can't do that anymore, you need to be licensed to design and build buildings, and furthermore you have to follow some basic minimum conventions that are proven to work. And you and your firm have to take on personal liability when you certify that your design and construction follow those basic best practices.


So which recent minor versions of Rails are the good ones? And where can I find that information in the future?


Nothing. Neither recent Java flaws.


As much as I was a fan of developing Ruby apps, I was constantly shocked by the lack of engineering rigor, security awareness, and API stability (basically, serious software engineering) within the community.

It would be good if all this was a clarion call to the Ruby community to improve things holistically, rather than the current trend of band-aid fixes they seem to apply.


Holistic improvement is good.

Every popular technology goes through this. (C, Java, PHP, etc)

What is encouraging to me is the speed with which these issues get patched in Ruby and Rails, and how the ecosystem is paying attention to these lessons and learning from them.

Contrast this with the length of time recent Java flaws took to get patched (6 months or more), or how some of the bugs reported in TOSSA got fixed only years later.

The deal is to learn from each of these incidents.

Very few people want to take the trouble to write and use correct programs. We, as an industry, would rather Ship Early and Often. It takes a lot of energy and time to write correct programs. Very few do that. Three that come to mind are Dijkstra, Knuth, and DJB.


ಠ_ಠ

Because other frameworks are rock-solid. Yup. None of this happens anywhere else on the internet.


Doesn't matter whether this happens elsewhere. It's still bad. Here's a near-Godwin-like example. If I say that gun violence is a problem in the USA, you could say "Hey! You're up in Canada, what happened with that guy who gunned down 14 women in Québec? Gun violence happens everywhere!"

It does happen everywhere. It should be stopped everywhere. But it happens more frequently in some places. There are special conditions that permit it to happen in some places. And if it is a serious concern of yours, knowing where it is and isn't most likely to happen again is important.


Well, both the Python and Perl guys at least seem to have a healthier awareness of how dangerous untrusted arbitrary data can be (e.g. http://docs.python.org/2/library/pickle.html)

The Perl YAML warning is less obvious but they at least mention in their LoadCode docs (http://search.cpan.org/~mstrout/YAML-0.84/lib/YAML.pm) that you have to specifically enable code deserialization since untrusted evaluation is a bad idea.

Python's YAML is only slightly worse, with an available safe_load method that refuses to run code (and a failure to use it appropriately led to vulns in popular Django plugins a little more than a year ago).

There's no easy equivalent to safe_load or UseCode for Ruby's YAML (http://apidock.com/ruby/Psych) as far as I can tell, at least while still using the high-level parser. And I'll note that the API docs I provided are for the new YAML parser introduced with 1.9.3. I would like to think that by 2010 there would be a general awareness of the risk of using deserializers/code emitters on untrusted input.


Nothing quite this catastrophic tends to happen to things that aren't PHP.


That is simply not true. Here's an example linked upthread for Struts, for example: http://blog.o0o.nu/2010/07/cve-2010-1870-struts2xwork-remote...


To be fair, other platforms and frameworks have had serialization issues, BUT, and this is the big one, they learned from the experience. Will the Ruby community learn? That is the question. Software Engineering are not dirty words!


Code execution while deserializing/parsing data is my first and utmost concern. Nowadays I'm in Clojure land and it's still not entirely clear to me what I can and cannot do, and what the language has to offer me so that data doesn't contain rogue code that is going to be executed.

In Common Lisp, for example, as far as I know you can bind a flag (*read-eval* to nil) so that the reader is set to "no evaluation ever" and, hence, if you're not using eval yourself specifically, nothing is ever going to be evaluated.

But how would that work in Clojure? And what about other languages? Ruby? Haskell? Java? C#?

I think the ability to execute code has become the most important security issue (more than buffer overflows/overruns, which can now be prevented, sometimes even provably so, thanks to theorem provers).

More thought should be put into explaining how/when a language/API can execute code, and how it can be used so as to prevent such a thing from happening.


FUD much


Hiya, welcome to Hacker News. I'm Patrick and I wrote this article. I run four businesses, three of which are intimately tied to Ruby on Rails, I am a contributor in the community, and I want to see it win. If you believe I am trying to spread fear, uncertainty, or doubt, you are greatly mistaken about my motives.

As someone who loves Rails, to someone who presumably likes Rails, it is imperative that you understand how serious this issue is. If you use Rails, you need to have addressed this already. If you have not, drop what you're doing and go fix it right now.


Patrick, thank you for writing this article. It opened my eyes to how serious this issue is and how easy it is to compromise a server. Upgrading all my Rails apps now. Thanks again.


I think Rails has already succeeded. What is winning?


Not much Uncertainty or Doubt in there.

The Fear seems appropriate.



