Here is the commit where it was introduced:
This is a Rails vulnerability, not a Ruby vulnerability.
But seriously. This is extremely critical, please upgrade!
I know this has to have been a stressful weekend. Is there a tip jar anywhere for beer money for the team that worked on this?
Would also be happy to give you a hug ;-)
As a simple solution, one could pass a signed auth-hash of the fields generated by form_for, and the server could re-hash the fields submitted to ensure the form data you asked for is what you get (this solves the primary issue with attr_accessible). I feel getting this right is crucial to Rails' future.
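A minimal sketch of that signed auth-hash idea, with hypothetical names throughout (SECRET, sign_fields, fields_permitted?): sign the permitted field names when rendering the form, then verify on submission that exactly those fields came back.

```ruby
require 'openssl'

SECRET = 'per-app-secret-key' # hypothetical; a real app would use a securely stored secret

# Sign the sorted list of field names the form builder generated.
def sign_fields(field_names)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), SECRET, field_names.sort.join(','))
end

# On submission, recompute the signature over the submitted field names; any
# added, removed, or renamed field changes the digest. (Real code should use a
# constant-time comparison here.)
def fields_permitted?(submitted_names, signature)
  sign_fields(submitted_names) == signature
end

sig = sign_fields(%w[user_name user_email])
fields_permitted?(%w[user_email user_name], sig)          # => true (order doesn't matter)
fields_permitted?(%w[user_email user_name is_admin], sig) # => false (extra field rejected)
```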
More compared to what, exactly? This vulnerability was responded to pretty damn quickly after it was reported, given that almost nobody is even paid to work on Rails. If you saw Aaron tweeting about "working over the weekend" a few days ago, well, now you know.
That said, you mention attr_accessible in your post: that's gone as of the next release of Rails. Basically everyone agrees that strong_parameters is a better approach, which is why a gem was released that works with 3.2: you can use that better approach now.
As was mentioned below, security is a process, not a result. Nobody wants these kinds of issues to happen, but they will happen, and do to every single framework that's used by lots of people.
Part of being a good developer is understanding your framework and making sure that your app has the level of security you need. The framework cannot protect you from everything.
The team responded really quickly. Aaron is a really talented developer and a nice guy. We should all be thanking him. If you don't feel that enough emphasis is being put into security in rails beat away at it, find holes and then get involved in developing fixes for them. That is the beauty of open source.
Matz developed Ruby for adults. You get a lot of power, but also a lot of rope to hang yourself with. This is why testing in rails applications is so important.
Here are some good and experienced Rails developers who apparently had no idea that Rails would automagically suck in XML and YAML and turn it into symbols instead of strings.
Clearly they aren't the only ones who didn't "understand the framework" or we wouldn't have gone a week with the impression that CVE-2012-5664 was only exploitable in specific circumstances.
I made the "understand the framework" statement in response to some mistakes I have seen introduced by sloppy devs on some Rails projects I have worked on. That obviously wasn't the case here.
If security is important you should periodically try to hack the system and consider incorporating automated penetration testing software against your application. I haven't had a need to go this far with my own recent apps, so I cannot speak authoritatively on this, but I think there are some tools that can help with penetration testing.
In a past life I had to work on some pretty secure systems and did some crazy testing on things. I saw a lot of good developers introduce pretty big security holes. My favorite was when our /etc/passwd was served up by an application, and this was a well-known team of craftsmen on a fairly large project. None of these have been limited to Ruby projects; they have included Java, C, etc.
In my view security is a moving target. If the cost of an attack warrants the effort to protect against it, then you do. If not, then you don't. Even if the developers of the framework concentrate on security, there will always be ways to get around it. Safes are rated on the amount of time it takes to break into them; if someone wants in badly enough, they will get in. The same is true of software.
Should I be more aware of the security of my apps? Probably. Should we as a community be better with it? Yup.
But unless I've taken the time to really dig into it, offer constructive feedback and be willing to jump in to fix it I have no business criticizing the state of things.
And I say all of this as someone who loves Ruby.
It's very hard for an app developer to test for vulnerabilities such as this one which seem to involve a combination of contributory factors. When magical stuff is constantly willing to help the app developer out in the background, it's very difficult to get a handle on what our true attack surface is.
Look at big data and AI. There are loads of permutations. How do you prevent stuff from just happening? How do you make sure that things are correct? Trust me, I would love to do TDD on an AI / big data analysis project. But the reality is that no one has figured out how to reliably test these things, so TDD is not the right tool at this point. But there is also the potential to introduce vulnerabilities. That doesn't mean I am not going to use AI or do big data.
It's always a balance between tighter security/less complexity, or more complexity and less security. Obviously there are other factors as well, but my point is to choose the right tool for the job. Sometimes it is Rails, and sometimes not.
And beyond that give the Rails team kudos for a taking care of things like this when they do find them.
We on the sidelines primarily criticise as a stage of our shock and grief. That the patches keep flowing is unsettling in the short term but reassuring in the long term.
Thanks again everyone who dropped everything and worked so hard to get this set of fixes out!
The more magic, unexpected behavior you have when parsing untrusted input, the more likely you are to have security holes.
Instead of building up some complex object based on untrusted input, the author of the application should specify the values and types expected, and the parser should parse those and nothing more. This would lead to much simpler code paths, as the user never has an object that has unexpected keys, values, behaviors, etc. Don't parse the object using complex, general purpose code, then hand the user an object that they have to treat specially; if their form only expects 5 values of given types, then parse only those values and those types.
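A toy version of that spec-driven approach (all names hypothetical, not Rails API): the caller declares the keys and types it expects, and everything else is dropped before the application ever sees an object.

```ruby
# Declare expected keys and types; anything undeclared never reaches the app.
def parse_expected(raw, spec)
  spec.each_with_object({}) do |(key, type), out|
    value = raw[key.to_s]
    next if value.nil?
    out[key] = case type
               when :integer then Integer(value, 10) # raises on non-numeric input
               when :string  then value.to_s
               else raise ArgumentError, "unsupported type #{type}"
               end
  end
end

params = parse_expected(
  { 'id' => '42', 'name' => 'alice', 'is_admin' => 'true' }, # untrusted input
  { id: :integer, name: :string }                            # what the form expects
)
# 'is_admin' is silently dropped; 'id' is a plain Integer, never a nested object
```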
The problem is, all of this kind of magic is at the very heart of what Rails is. I don't know if you could eliminate it all and still have Rails be Rails.
Whether money is involved or not, for a framework, developers should either be committed to their products or not.
If this was a different sort of product then that limitation of not getting paid might carry some weight, but when you encourage people to develop on top of your platform and when it is shown to have egregious flaws there is no excuse. You either get to work fixing them or you tell the world to stop using your framework because it's broken and not going to be fixed.
Fortunately, the Rails devs are seriously committed to their product, which is why these fixes came out so quickly.
Edit: The Rails team is certainly deserving of many thanks, but they don't get a pass on problems just 'cause they work for free. Similarly, if someone gives me a free car I will thank them, but if that car starts a fire in the garage and burns down my house I will curse them too.
Even without the money, this issue was dealt with swiftly. See the comments below about 'the Rails security team is the best vendor I've worked with.'
I am a committer, and part of my job is to work on open source. Ish.
Other than that, it's everyone else's spare time, IIRC.
I've been busy with some other small OS projects (and stuff that pays the bills) but personally feel like I need to try to carve out some time this year to do something to contribute back to Rails.....
If I can help you help us somehow, please let me know.
There are always going to be security holes in anything we make. We can be a bank and focus two feet ahead on making sure everything is as secure as possible, or stay aware of security (and not do anything stupid) while moving fast enough that any flaws are irrelevant/fixed when exposed.
It also highly depends on how much risk you're willing to accept. For the average rails app, absolute security is not as important as moving fast. Be an adult and make adult decisions about your tools and processes to suit your circumstances.
Because this is Ruby we're talking about. A "fun" language that has 100000 ways to do the same thing, so newbs find it fun and easy. You can almost guess how the language works and almost always be right. That's cool, great for learning, makes you feel like a superstar when you're just getting started with programming... but it's really not such a good thing when it comes to maintainability and security.
This breeds a community of people who aren't very mindful of anything but having fun coding. (Not always a bad thing, but certainly not conducive to good security.)
The second you talk about "multi-platform" or "security" to your average Ruby user, their eyes glaze over. They just want to make cool stuff on their Mac, not worry about security and best practices.
You could say the same thing about a lot of interpreted languages, but Ruby is especially bad.
This has nothing to do with the Ruby language, by the way, any more than a hole in IIS is a problem with .NET.
If you're going to talk smack, at least learn what you're talking about.
Other languages tend to have more support in corporate/educational areas, tend to have more money backing them, tend to have more 'best practices', tend to have more rigorous testing and review. Ruby is the hipster hacker's language.... and the quality of code shows this. (in the core of the language, and by the individuals who use it)
Other languages absolutely have these problems too, but it has been my experience that Ruby is particularly bad. You are welcome to disagree with that part.
But IMO it makes sense. The quick and dirty 'million ways to do the same thing' nature of Ruby breeds this kind of culture. I certainly didn't mean to single out interpreted languages, though, or imply other languages don't have security problems, or unique issues with their cultures.
PHP is probably about just as bad. I'd put Ruby and PHP high up on the 'fun to program in' list, and low on the 'secure, quality languages' list.
No, it isn't.
"90% of the answers to "how do I do this in ruby" on forums are actually "how to do this with rails" answers"
You're looking in the wrong places.
"who'd ever write a ruby program without rails, right?"
People who are writing shell scripts? People who are using Sinatra? People who are writing desktop Ruby programs?
Your examples of this would be?
Neither did I.
Which is worse?
Most Ruby developers primarily work with Rails. I presume this is the reason you are conflating the framework with the language.
I can tell you that "security" is a topic that, unless handled carefully, will make anyone's eyes glaze over.
The only way to fix this by "more of a focus on security" would have been not to do clever things with parameters in the first place, but the clever things provide a lot of value, so the next best thing is security auditing and be on top of patching any vulnerabilities.
The problem here was that a feature somewhat haphazardly added to Rails for ActiveResource was turned on by default, enabling behavior that should only be active for interactions with trusted clients (i.e. authenticated services running in your own infrastructure).
Your suggestion is not without merit, but this is a case of having to learn to walk before you learn to run. There are clearly much more egregious parameter parsing vulnerabilities which need to be solved before the things you're describing would ever make it into rails-core.
/usr/lib/ruby/1.9.1/rubygems/remote_fetcher.rb:215:in `fetch_http': bad response Not Found 404 (http://bb-m.rubygems.org/quick/Marshal.4.8/activesupport-3.2.11.gemspec.rz)
I usually just use that one for my apps, much faster than rubygems.org
gem "rails", :github => "rails/rails", :tag => 'v3.2.11'
$ cd <deploy-path> && bundle install --gemfile Gemfile --path <shared-path>/bundle --without development test
Edit: finally got it out. This deploy model is completely screwed, though. It just shouldn't be normal to have a service like rubygems.org in the daily deploy loop. This is absolutely not a knock on the fantastic volunteers that run it - they simply shouldn't be dealing with this sort of load spike.
Are there others?
This one deals with problematic JSON parsing and affects only 3.x. It is dealt with in the release that fixes the other vulnerability.
1. This class of problems is not unique to Ruby.
2. Similar problems have been identified in Struts, and python's pickle.
3. Specifically in this case, YAML.load() can deserialize unintended object types. In the case of Struts the problem was the expression library used can also deserialize unintended object types (like File), plus setting properties on these types can have side effects (such as dropping files into your system).
4. I took a look at Microsoft's WCF. The DataContractSerializer states that it only is allowed to load types that are specified by a contract. http://msdn.microsoft.com/en-us/library/vstudio/ms733135(v=v... This should be the gold standard. In addition, it warns that even loading XML documents can be dangerous if we then load remote DTDs for validation.
5. For the old salts, remoting or RMI have similar issues - both mitigated by restricting the types that can be deserialized. http://msdn.microsoft.com/en-us/library/5dxse167(v=vs.71).as...
6. Here's another vulnerability which targets serialization http://wouter.coekaerts.be/2011/spring-vulnerabilities
1. all deserializers should be viewed with suspicion.
2. A deserializer which does not implement a whitelist of types that it can deserialize to is not suited for handling arbitrary data.
3. For example, it is capable of creating untainted/trusted objects in application servers, which some time later may be used for XSS, or execution in SQL. In the Struts case, the standard Java libraries have constructors and methods for which deserializing is enough to result in an arbitrary file being dropped on the remote file system.
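Ruby's own YAML library later grew exactly this kind of whitelist: `Psych.safe_load` (the `permitted_classes` keyword shown here is from modern Psych, Ruby 2.6+) refuses to instantiate any type not explicitly permitted.

```ruby
require 'yaml'
require 'date'

# Only whitelisted classes may be instantiated from the document.
YAML.safe_load('--- 2013-01-08', permitted_classes: [Date]) # => a Date object

# Anything else is rejected instead of silently constructed.
begin
  YAML.safe_load('--- !ruby/object:File {}')
rescue Psych::DisallowedClass => e
  e.message # the attack is stopped at parse time
end
```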
This vulnerability is most similar to the object loader vulnerabilities found in Spring a few years back. It is the kind of vulnerability that is occasionally found in Java web stacks.
It is a simpler vulnerability. This is a double edged sword. On the one hand, it is easier to fix (and to be sure we've fixed) than the objectloader-type stuff. On the other hand, it's so easy to reason about and work with that the exploit is straightforward. It was very difficult to find ways to talk about the general pattern of weakness in this code without immediately disclosing the exploit.
The vulnerability is similar in spirit to Python's Pickle, which is also unsafe for untrusted data. A difference between Rails and Django, though: while specific Django apps have had Pickle exposures, I'm not sure Django itself ever did.
PHP has vulnerabilities that are similar in impact to this vulnerability. But there's a big difference between this flaw (and the Python issues) and PHP: PHP grappled for years and years with a publicly known bug class (remote file inclusion) that coughed up code execution. It's not impossible that more RCE flaws will be found in Rails, but it's unlikely to become a class of bug that every Rails developer will need to adopt best practices to stop.
No mainstream web platform has ever survived long deployment in popular applications without some horrible finding. Nobody's hands are clean. It is very difficult to get security right in every single component that a full-featured web framework needs to offer. It only takes one mistake.
You are dead right about deserializers in general.
Some explanation why YAML user input is evil.
It works like this
1.9.3p327 :001 > id = YAML.load("--- !ruby/string:Arel::Nodes::SqlLiteral \"1 --\"\n") # if user input can contain arbitrary YAML
1.9.3p327 :002 > Keyword.where(:id => id).first
Keyword Load (0.3ms) SELECT `keywords`.* FROM `keywords` WHERE `keywords`.`id` = 1 -- LIMIT 1
I intend to share some details about this later on, but not so soon after the vulnerability is announced. There has to be a reasonable amount of time allowed for people to patch their servers.
We've now officially captured both sides of the argument and can safely move on.
module YAML; @@tagged_classes.delete('tag:ruby.yaml.org,2002:object'); end
But by taking out Object, YAML is only left with a whitelist of types that are safe, anything else will get turned into a YAML::DomainType.
irb(main):001:0> YAML::parse("!ruby/object:File 123").transform
irb(main):002:0> module YAML; @@tagged_classes.delete('tag:ruby.yaml.org,2002:object'); end
irb(main):003:0> YAML::parse("!ruby/object:File 123").transform
=> #<YAML::DomainType:0x7f242783a840 @domain="ruby.yaml.org,2002", @type_id="object:File", @value="123">
> There are multiple weaknesses in the parameter parsing code for Ruby on Rails which allows attackers to bypass authentication systems, inject arbitrary SQL, inject and execute arbitrary code, or perform a DoS attack on a Rails application. This vulnerability has been assigned the CVE identifier CVE-2013-0156.
> Due to the critical nature of this vulnerability, and the fact that portions
> of it have been disclosed publicly, all users running an affected release
> should either upgrade or use one of the work arounds *immediately*.
This is the key bit for me: Rubygems is literally straining with everyone being frantic to upgrade. Giving it a few days means that everyone can patch their apps.
I don't believe that everyone will listen to little old me, of course, but that doesn't mean I can't tell them I think it's not a great thing to do.
I think your week started Jan 02 with CVE-2012-5664.
I think you underestimate your adversary. https://twitter.com/mikko/status/288766998228393984
The cat is out of the bag. You can no longer negotiate with this reality.
Publicly disclosing a bug is like birthing a baby. Once it's sticking halfway out, just get it all the way out because it's counterproductive to try to hold parts of it back in.
All this does is allow people who want to do harm to not have to figure it out themselves.
Current version at time of my post is 3.2.11, if I'm using that am I safe or do I need to perform additional steps?
Versions Affected: ALL versions
Not affected: NONE
Fixed Versions: 3.2.11, 3.1.10, 3.0.19, 2.3.15
The problem is that the XML code used in the untrusted request path was also used by code that handled trusted messages elsewhere, and those trusted messages had requirements that weren't appropriate for request-path messages.
Because JSON is so much more popular than XML in Rails apps now, a reasonable workaround for this problem is to just turn off XML if you're not using it. More importantly, it's a workaround that (a) does more to reduce the attack surface given how XmlMini works, and (b) was a workaround that disclosed less of the vulnerability last week. But don't let that confuse you about the nature of this bug.
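For reference, the workaround published in the advisory at the time had roughly this shape (an initializer for Rails 3.x; check the advisory for the exact lines for your version):

```ruby
# config/initializers/disable_xml_params.rb

# If your app doesn't accept XML, stop parsing XML request bodies entirely.
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)

# ...or, if you must keep XML, at least stop XmlMini from instantiating
# YAML documents and Symbols out of typed XML nodes.
ActiveSupport::XmlMini::PARSING.delete('symbol')
ActiveSupport::XmlMini::PARSING.delete('yaml')
```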
A lot of people get bitten by a component they never consciously used and activated in the first place. While the second part is true for almost every part of a framework, the first one is problematic. ("XML? Why do I have a vulnerability through XML and YAML in a JSON-only app?")
I'd like my apps to poll rails.org (or whatever) every few minutes and by default shutdown hard when an incident like this is announced.
Of course Twitter is not exactly the most reliable platform but the likelihood of a twitter-downtime to coincide with a critical vulnerability seems relatively low.
> Tens of Thousands of Rails Applications Remotely Disabled Following Rails.org Intrusion
The aftermath of an incident like the current one is a lot more expensive than an unplanned downtime.
However, perhaps they could just promise to post a signed message, in a specified format, on a dedicated twitter account, if such a thing happens again. This would seem like a relatively low-tech approach, about adequate for such a rare event (just keep that secret key secret!).
The community can then roll their own gems to watch said twitter-account and act according to any user preference. Perhaps one of these gems would even make it into rails-core after sufficient review.
Obviously one can always argue whether such a rare case deserves dedicated infrastructure. But on the other hand we have yet to see how many rails deployments will be bitten by this incident in the long term. It's not uncommon to see years of exploitation for a vulnerability in a popular piece of server software.
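The signed-message scheme sketched above needs very little machinery on the receiving end. Assuming the project published an RSA public key (the key pair is generated inline here only so the example is self-contained, and the message format is made up), a watcher gem would only have to do something like:

```ruby
require 'openssl'

# In reality the project holds the private key and publishes only the public half.
key        = OpenSSL::PKey::RSA.new(2048)
public_pem = key.public_key.to_pem

message   = 'ADVISORY CVE-2013-0156 upgrade-now' # hypothetical broadcast format
signature = key.sign(OpenSSL::Digest.new('SHA256'), message)

# The watcher verifies before acting; a forged or tampered message fails.
verifier = OpenSSL::PKey::RSA.new(public_pem)
verifier.verify(OpenSSL::Digest.new('SHA256'), signature, message)       # => true
verifier.verify(OpenSSL::Digest.new('SHA256'), signature, message + '!') # => false
```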
I'm going to be the asshole here, because it is vitally important that no one responsible for security ever listen to what you're saying. You're advocating some Orwellian kill-switch mechanism based on unspecified "signed messages" over a third-party social messaging service (limited to 140 chars, no less), and throwing in meaningless phrases like "low tech". What about this problem leads you to believe we all need something low tech?
I am not qualified to design such a system. You are negatively qualified to even comment on such a system. Please stop.
Since you apparently understand neither what was discussed (an optional rather than an "Orwellian" kill-switch), nor the implementation options (a signed message via any broadcast mechanism), nor why using Twitter as the transport would be feasible and "low-tech" versus most alternatives, you should perhaps refrain from commenting on this thread at all. And especially not in that tone.
God knows I love programming in Ruby now, but is Rails really that insecure?
It was never about Java(C, C++) vs. Ruby despite what fanboys on either side made out. It was about conservative vs. devil-may-care. All that "convenience" and "it's so clean" came at the price of a whole load of code executed behind the scenes. You didn't write it, and the Gods of TDD preached that you don't need to test it because that's Someone Else's Problem. Auditing it is nearly impossible not least because it's a moving target, so you either get stuck in a backwater of obsolete versions that once got audited and approved or you live on the bleeding edge constantly (and DevOps hate you forever).
FWIW we (large satnav company) prototyped the last major service in Rails (actually a one-man hack proof of concept) then implemented it in production in Scala (and that was a bridge too far for a lot of people) because no-one wanted to run Rails in production.
These kinds of issues are open to all software.
I'm happy you work in the kind of place that audits all of its software, though. I'm sure you've all read through all of Hibernate, Spring and not to mention all the .NET framework code.
The claim here is that people in dynamic languages tend to misuse that and write all sorts of magic that are pure gold for 10-line snippets but open up a vast attack surface, like building completely arbitrary objects from string input.
Really? Could you show me how I could possibly create such a hole in a language like ocaml or haskell?
First you'd have to write the equivalent of Rails in Haskell (I'm not talking about an MVC framework, but something as large, complex, and featureful.)
The link you posted illustrates that it is unfortunate that Java supports reflection, and even more unfortunate that various "enterprise" software stacks abuse reflection in ever more creative ways. Stay away from Java reflection and/or use C/C++, and you'll avoid this kind of vulnerability.
Rails will always have zero-day security issues; I'd hazard that all web apps of any notable size will. What matters most to me is how the core team and community respond on those three criteria above.
Lately the Rails team has been performing exceptionally.
You've escaped anyway.
That's not to say all frameworks are created equally secure, but the differences have more to do with the culture around the project than any technical decisions (minus some very specific language-related issues).
Rails is a big, public project; with many years now of being used by a sizable number of people. There will be more security vulnerabilities discovered, hopefully they'll be addressed quickly and communicated well (which this one seems to be).
That's a long-winded way of saying to be cautious about picking some other framework because it has fewer security vulnerabilities reported; that almost certainly has no bearing on it being more secure.
I can't comment on how on-the-ball the Rails security team is, but I can say it's really easy to update your apps.
It's also relative to your alternatives. It's way safer than not using a framework. Is it safer than Django? That's kind of unknowable; maybe, maybe not.
* Quick turn around. I have another vendor where it takes up to 3 months to get stuff fixed. :(
* They give you a patch to review before releasing publicly. This is very important and gives researchers a chance to fix any problems with the patch. With another vendor their fix missed a really obvious attack vector and anyone who diffed the code would have been given a free zero day vulnerability. :(
The following is speculation, but keep in mind that this bug may have been found because the Rails team has been looking for security holes similar to the exploit that was found a few days ago.
It might be secure NOW. But retrospectively, it never was before this patch.
ps: I love Tekpub! Are there any plans for some new Rails videos? I purchased the Rails 3 series but it's a bit outdated, given how fast Rails moves. Would purchase a series on Rails 4 day-one. :)
https://bugzilla.redhat.com/show_bug.cgi?id=854757 - CVE-2012-4406 Openstack-Swift: insecure use of python pickle()
The 3.2.10 announcement provided an example of `Model.find_by_id(params[:id])` as an exploit, but nobody could figure out how you could get a hash with a _symbol_ key into `params[:id]`, which is what it would take for that to be an exploit. So people were confused.
But the pre-3.2.11 exploit, apparently, possibly provides ways to do just that, eh?
I realize that providing an in-depth answer is tantamount to publishing an exploit how-to, but some reasonable way to privately test this would be very useful.
Maybe a "simple" URL tester hosted by a trusted Rails source (e.g. rubyonrails.org)? Ok, has the obvious issue of showing the world who they should target, but maybe you can riff on that theme?
Auditing and stuff you know. For some reason people in charge get really upset when all our base are belong to the bad guys.
"ah, actually 1.1.x isn't vulnerable. The issue first arrived in 2.0" - https://twitter.com/tenderlove/status/288777229276704768
I'm not advocating it as the only security mechanism, but rather as another barrier to be overcome just like address-space-randomisation, data-exection prevention and all the rest...
(Hasn't Google recently shared a valgrind-lite runtime bounds checker which is being incorporated into GCC etc.? Might lead the way on how this can be done with the minimum of runtime cost.)
If you want to secure a system...whitelist, don't blacklist.
The real problem is the very mentality of the people who downplay security issues, always saying "this is not a serious issue" (or, worse, saying "but language xxx / framework yyy" suffers from issues too, it's how the world works).
That mentality is the reason why such exploits do exist in the first place. Security is nearly always an afterthought.
The most braindead argument being: "My goal is to sell xxx, not to have an unbreakable server".
Once you read that one, you know you have reached the lowest of the low.
Also, the message was not "overblown". It was "don't panic, but still upgrade ASAP".
Patch/issue from the old Rails issue tracker:
This vulnerability is exploitable even if you don't have any exposed controllers.
I still haven't figured out an attack vector yet, but at least I now know that my patches are working!
Considering it affects all versions, what are the odds of multiple people pointing this out at the same time?
Rails has a very good track record regarding these things, but I'm just curious.
My understanding is that while investigating the SQL issue a week or so back, it gave several people ideas on how to make this exploit happen, and they all reported it.
it has just been updated:
Should come in handy today.
Place a couple of lines in a file in config/initializers, and you're good.
in your Gemfile
gem 'rails', '3.2.3'
I think that patch, maybe? But I don't know how. Google is not helping.
gem 'rails', '3.2.11'
bundle update rails
It is just plain wrong to reason like this.
So let me ask something to the ones using the above fallacy: are all programs (say webservers) equals in the face of security?
It's an easy question right? And the answer is: "no, they're not all equal".
So stop saying: "But Java had several DoS bugs affecting Tomcat in 2011 too, so we're not doing anything wrong here".
And start coding (and documenting) to higher standards.
It is very valid to reason within constraints of reality. Like knowing that a car "which will never ever have an accident. ever" is a lie. We know that driving a car brings a risk of an accident. That is realism. Some turn that reality into dangerous behaviour. Saying things like "Statistics tell me I will have an accident no matter what. So I can just as well finish this bottle of whiskey before driving at 150Km/h home". You are making it sound as if the Rails developers follow that logic.
They don't. There simply is a certain realism that, no matter how much effort you put into security, there will be security issues. But nothing more. Or less.
2. Pull it off next time someone starts with the 'test-replace-static-typing' argument.
update your Gemfile and set the version you want. In my case:
gem 'rails', '3.2.10'
'bundle update rails' which will update your Gemfile.lock
check-in and deploy your code. If you are using capistrano, the default 'deploy' task should handle everything for you. Otherwise, run 'bundle update rails' on your production server.
Which is in fact why it's probably wiser to list `gem 'rails', '~> 3.2.10'` (or 3.2.0 or anything) instead; then `bundle update rails` will update you to the latest 3.2.x (but never 3.3.x), in this case 3.2.11, instead of only to the exact version you specified (3.2.10, incorrectly).