Metasploit Rails 3 Remote Code Execution Hours Away (rapid7.com)
398 points by tptacek on Jan 10, 2013 | 166 comments



Expect a point-and-click exploit that will run arbitrary code on vulnerable servers.

If you've never dealt with a problem like this, you may not be ready. So, here's the most important thing you need to understand:

If you have a vulnerable application anywhere, on any port, it will be found and compromised. This is a spray-and-pray vulnerability. It costs attackers nothing to try, attempts don't crash servers, and so people will try everywhere.

If you lose an application in your data center / hosting environment, that's the ballgame. It doesn't matter that the app you lost was the testing instance of a status dashboard with no real data in it, because the exploit coughs up shell access on that server. If there is one thing every black-hat attacker in the world is truly gifted at, it is pivoting from shell access on one server to shell access on every goddam server.

Please make sure you didn't just patch the app servers you know/care about. THEY ALL NEED TO BE PATCHED OR RETIRED.

Additionally:

* If you are one of those "same password on a whole bunch of services" people, now is a good time to make sure nothing you care about has that password. Some app somewhere is about to lose that password.

* Now would not be the worst time in the world to go to your Twitter config, hit Settings -> Apps, and scrub out all the stuff you don't use.

* Now you know why you never give 3rd party web apps your Gmail password.


(Bah, great point about passwords. I need to reform my ways.)

To amplify and expand on Thomas here: when this was announced I hit the Big Red Button and pushed three emergency patches to my servers between 3 and 5 AM Japan time. My perception was "This just can't wait." I went to sleep with the vague feeling that I had probably broken something (there's always something that slips when you're tired and hasty) but that it was almost certainly acceptable given the alternative.

Sure enough: despite automated and smoke tests passing and metrics remaining nominal, Appointment Reminder suffered breaking downtime for some customers (it depended on the browser - long story, not relevant). This ended up locking them out for about 16 hours, felicitously mostly not during the US working day.

After being told of the issue by a few mighty pissed end users, I fixed it and spent a second awake-to-9 AM night both writing a to-all-customers apology email and fielding questions. I went into detail on why I screwed up (acted too fast) and a simplified version of why I had to (third-party software required an urgent patch; delaying deployment by one day would have been an unacceptable risk to customer data).

Several customers - including a few of the ones most inconvenienced - got in touch to say "Right call." One of them was of the opinion that, if I hadn't patched, he would be in Big Red Button mode today, because no machine or data on a local network with an unpatched Rails instance is safe. "I honestly prefer knowing it broke because you were on top of things than it being stable because you weren't" end quote.

I'm not a security guy, I'm just a systems engineer, but my take on it is that this does not just require the Big Red Button, this is the paradigmatic example of why you have a Big Red Button. If you don't, or if you pushed it yesterday like you should have and something blew up, this is an excellent opportunity to improve procedures for next time.

Edit: Big Red Button is funny shorthand for "Immediately drop what you're doing, pull out the In Case Shit Happens folder, and have the relevant people immediately execute on the emergency plan." We call it something different in big Japanese megacorps but I always liked the imagery.


For me "big red button" is "Emergency Power Off", which I guess is also a viable response to some showstopper bugs, until you have time to fix, sometimes.


I turned on Heroku maintenance mode for a couple of apps I didn't care to patch right away. According to its description it prevents requests from reaching dynos, so it should be good for preventing access to existing data/stored keys in the meantime.


Likewise - we had a big meeting yesterday with an important customer, and my cofounder ended up taking much of it alone because this couldn't wait. I actually found out about this a little bit prior to it being publicly announced, so luckily had a little bit of lead time, but it was a 'showstopper' for us nonetheless, especially as a website security company. ;)

Also, if anyone wants help or explanation on the vulnerability (though there are plenty of posts that do a great job), I'd be happy to chat about it - feel free to email me whenever at borski@tinfoilsecurity.com


LastPass makes it much, much easier to never re-use passwords. Just make sure your master password is strong, never re-used anywhere else, and that you use 2FA!



You can use LastPass as a completely zero-knowledge app by not using the binary extensions for Chrome or Firefox.


I prefer KeePass (2 or X) with the encrypted pwd database backed up to SpiderOak (or Tarsnap, the only two really secure backup services I know of).

OSS, local not cloud-based, encrypted pwd file, excellent pwd generator, easy to use.


I use SuperGenPass; it's comparable and stores no state, so there's nothing to lose or back up.
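For anyone unfamiliar with how that works, here's a rough conceptual sketch of a stateless derived-password scheme (this is not SuperGenPass's actual algorithm; the function name and parameters are made up for illustration):

  require 'digest'

  # Conceptual sketch only -- not SuperGenPass's real algorithm. Each site's
  # password is derived from the master secret plus the site's domain, so
  # there is no password database to store, sync or back up.
  def derived_password(master, domain, length = 12)
    Digest::SHA256.base64digest("#{master}:#{domain}")[0, length]
  end

  derived_password("correct horse battery", "example.com")
  # => a stable 12-character string for this master/domain pair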


Downside: You can never change your SuperGenPass password, even if a site requires you to.


This is a huge downside -- accidentally type your password on a device you shouldn't have? You now have to update every site rather than just updating your master password.



I use EnigmaPass[1], which, as an extension, renders outside the DOM, so it's not.

[1] https://chrome.google.com/webstore/detail/enigmapass/bgkipgf...


Thank you, I was looking for one and thinking of buying a yubikey as well.


For people applying the security patches, please beware that Rails 3.2.11 has broken some things (I've been having issues related to bad JSON parsing).

Fortunately, the community is stepping up with patches[1]. Hopefully, these patches are not adding further vulnerabilities.

[1] https://github.com/rails/rails/pull/8855


For this very reason I patched a version of Rails 3.2.8 with the following patch files distributed by the ror-security mailing list[1]:

  3-2-dynamic_finder_injection.patch
  3-2-null_array_param.patch
  3-2-xml_parsing.patch

The changelogs didn't cleanly apply but everything else did. In your Gemfile:

  gem 'rails', :git => 'git://github.com/adamonduty/rails', :branch => '3.2.8_with_security_patches'

This will install version 3.2.8a. If you get a bundler error "NoMethodError: undefined method [] for nil:NilClass", try upgrading your rubygems-bundler gem to version 1.1.0.

See https://github.com/adamonduty/rails/tree/3.2.8_with_security... for the commits.

Given the number of changes and known issues in 3.2.9, I don't understand why the core team didn't perform a similar release.

[1] https://groups.google.com/forum/?fromgroups=#!topic/rubyonra...


This bit me too, upgrading an app from 3.2.8:

https://github.com/rails/rails/issues/8269


I believe I am also seeing these issues in 3.1.10.


Good thinking. Might also be good to prune some auth tokens and app specific passwords for your Google accounts, too. https://accounts.google.com/IssuedAuthSubTokens


Thanks for the tip to lock down Twitter, I hadn't thought of that.


Thanks for the reminder. I totally forgot I have a redmine server running....


I can't echo this sentiment enough: If you have a vulnerable application anywhere, on any port it will be found and compromised.

If anyone needs convincing, here's how it will go down:

- Google will help them find you. They'll search for things found in pages typically produced by different Rails apps.

- They will look at the entire network allocation and scan the whole range. (Try running whois against an IP address.)

- If they can identify the hosting platform, it will make it that much easier to know where to look.

- Even if you don't show up in the results, your neighbor might and you're next.

Edit: formatting.


I would add one additional point of caution:

The Rails community seems unusually keen on 'curl example.org/script | sh' as an installation method (see Pow.cx etc.).

I'd usually recommend reading these scripts before execution, but for now especially so as it would seem an obvious target if people are looking to leverage this exploit to acquire more boxen.


While I fully agree with you that the practice of curling a script and piping it into `sh` is—to say the least—risky, notice that this risk was widely accepted a long time ago. Each time you download an executable file—be it an exe for Windows, an apk for Android, a Linux binary, an OS X executable—you're doing the same thing. I'll go one step further: each time you download a free/open source tarball you do not read the code before typing `make`. You make your machine run code of unknown functionality and only plausible origin.

Arguably, HTTPS is one step forward; however, vulnerabilities like the one discussed here make us defenceless. To make matters worse, the line of defence based on reading the script works only for relatively short, unobfuscated and unminified scripts written in plain text. It also requires the person who's downloading to have skills which, despite being common among this community's audience, are not widely spread across the population.

Sure, many projects sign their releases or announce cryptographic hashes of published files. But let's be honest: how many of us actually run `gpg` or `sha256sum -c` to verify them?

Spreading paranoia is not my goal here, however I hope that this comment will end up being thought-provoking.


I think the point he was making is more that these Rails-centric sites are going to get nailed and, as a result, one should be more wary of using this sort of installation method for the next few weeks.

One should be generally quite wary of it in the first place, given the ease with which someone could swap out a single file and wreak havoc.


But how do you patch it?


If running Rails 3, you can disable XML parsing of parameters by adding this to an initializer:

  ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)
source: https://groups.google.com/forum/#!topic/rubyonrails-security...
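For what it's worth, here's a minimal sketch of how that might look as a standalone initializer (the filename is just an example):

  # config/initializers/disable_xml_params.rb (example filename)
  # Remove the XML parameter parser entirely, so request bodies sent with an
  # XML content type are never turned into Ruby objects by Rails.
  ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)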


What happens with this fix is scary at first, but is actually OK.

If you patched as described, and run the exploit code on your local server, the exploit code will return a 200 response. As in:

  [-] POSTing puts 'boom' to http://localhost:3000 ...
  [-] Success!
This doesn't mean your site is vulnerable. Rails is entirely disregarding the parameters as specified by your initializer code.

Testing locally, watch your server when the request comes through and ensure there are no parameters being registered. You don't want to see something like this:

  Started POST "/" for 127.0.0.1 at 2013-01-09 21:00:04 -0800
  Processing by StartController#index as */*
  Parameters: {"secret"=>"--- !ruby/hash:ActionDispatch::Routing::RouteSet::NamedRouteCollection\n? {}\n: !ruby/object:OpenStruct\n  table:\n    :defaults:\n      :action: create\n      :controller: foos\n    :required_parts: []\n    :requirements:\n      :action: create\n      :controller: foos\n    :segment_keys:\n    - :format\n  modifiable: true"}


On a patched server I see a Hash::DisallowedType (Disallowed type attribute: "yaml") exception raised by activesupport.

As a simpler test case, I modified rails_rce.rb (for CVE-2013-0156) and passed a simpler YAML payload that creates a Time object:

  yaml = %{ --- !ruby/object:Time {} }

  xml = %{ <?xml version="1.0" encoding="UTF-8"?> <#{param} type="yaml">#{yaml}</#{param}> }.strip

  print_info "POSTing #{code} to #{url} ..."

  response = http_request( :method => :post, :url => url, :headers => {:content_type => 'text/xml'}, :body => xml )

This way a vulnerable server's log file shows something like the output below (i.e. the Time object was actually created from the YAML):

  Started POST "/users/sign_in" for 127.0.0.1 at 2013-01-10 00:26:40 -0500
  Processing by Devise::SessionsController#create as */*
  Parameters: {"secret"=>1969-12-31 19:00:00 -0500}

A patched server raises a Hash::DisallowedType (Disallowed type attribute: "yaml") exception.


I'm unable to reproduce - not sure why. I'm seeing no parameters. I'm getting:

  Started POST "/user/sign_in" for 127.0.0.1 at 2013-01-09 21:40:56 -0800
  Processing by UsersController#sign_in as */*
If I comment out the patch, I see:

  Parameters: {"secret"=>1969-12-31 16:00:00 -0800}
So I suspect others may not see the same exception... I'm using ActiveSupport 3.0.3, fyi.


Yep. I see what you mean.

activesupport-3.2.11/lib/active_support/core_ext/hash/conversions.rb has a DISALLOWED_XML_TYPES = %w(symbol yaml) constant, which its typecast_xml_value method uses to raise the exception.

I don't see these lines of code in activesupport-3.0.3/lib/active_support/core_ext/hash/conversions.rb

In my case I could upgrade to 3.2.11.

In your case, I am guessing you added the lines of code that disable xml and yaml parameter parsing to an initializer (or application.rb). This way, activesupport simply wouldn't try to convert the parameter value in question into a ruby object.
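For anyone curious, here's a simplified illustration (not the actual activesupport source; the helper name is made up) of the kind of guard 3.2.11 adds around XML typecasting:

  # Simplified sketch, not the real conversions.rb. The patched
  # typecast_xml_value rejects "symbol" and "yaml" type attributes before
  # attempting any conversion, which is where the exception comes from.
  DISALLOWED_XML_TYPES = %w(symbol yaml)

  def check_type_attribute!(type)
    if DISALLOWED_XML_TYPES.include?(type)
      raise "Disallowed type attribute: #{type.inspect}"
    end
  end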


Right. Thanks!


Thanks for posting this, you gave me the bits I needed to make a simple test I could use to verify the patch, without needing the metasploit stuff. (On Rails 2.3 I don't get the DisallowedType error, but I can verify in the logs that the patch works.)

http://www.railsperformance.com/2013/01/simple-test-for-rail...


If you're running a relatively new version of Rails (hopefully 3.2.10) you can just go to your app on your dev machine and type:

  bundle update rails

Then commit and deploy in the usual manner. You'll probably experience no issues with your application, but it's good practice to run your test suite before you do any sort of production deploy anyway.

If you're running an older version of Rails (pre bundler) or if you don't have a solid test-suite, you'll have a tougher time updating. But you definitely need to put aside time to figure it out now.


You'll need to change the version of rails listed in the Gemfile. I don't think "bundle update" modifies that.


Actually it depends on what your Gemfile looks like. If you've specified the exact version in your Gemfile you'll need to edit it before running `bundle update rails`. This means your Gemfile might look like:

    gem 'rails', '3.2.10'
Running `bundle update rails` when you've absolutely specified the version number won't do anything. Your Gemfile.lock file (which is the file that bundler uses to determine which gems to require) won't be changed. Instead of absolutely specifying the version of Rails you want to use, consider using the '~>' specifier. My Gemfile at Airtasker looks like:

    gem 'rails', '~> 3.2.8'
That means that when I run `bundle update rails` it will update the patch version of rails. Our Gemfile.lock has the following entry for rails:

    rails (3.2.11)
By using the '~>' specifier we can easily update our gems' patch versions without worrying about API changes between major and minor versions.

You can read: http://gembundler.com/v1.2/gemfile.html to find out more.
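A quick way to convince yourself of how '~>' behaves is to poke at rubygems directly in irb (a small sketch, with versions chosen to match the example above):

  require 'rubygems'

  # '~> 3.2.8' means ">= 3.2.8 and < 3.3": patch releases are allowed,
  # minor/major bumps are not.
  req = Gem::Requirement.new('~> 3.2.8')
  req.satisfied_by?(Gem::Version.new('3.2.11'))  # => true
  req.satisfied_by?(Gem::Version.new('3.3.0'))   # => false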


For completeness, it's also worth mentioning that gem 'rails', '~> 3.2.0' may not necessarily update Rails to the latest version when you run `bundle update rails`. The reason is that if another gem has a dependency that does not match the requirement of the latest version of Rails, bundler will backtrack until it finds a compatible set of versions across the board.

So you will want to double-check the specific version that you end up with. If it doesn't work you probably can fix it with `bundle update rails something_else`, unless you have a true conflict in which case you will have to do some spelunking yourself, and perhaps the Rails hotfix should be in place in your app first.


Thank you, in fact I had the 3.1.2 version specified in my Gemfile. So I changed that to 3.1.10 and now bundle show rails shows that version as installed so I think I'm in the clear.


You may want to change it to a less-specific version so future patches (and they are coming) are less work to apply.


That's exactly what bundle update does.


Only in Gemfile.lock. If you've hard-coded a version in Gemfile, you'll need to update it.

Eg. These will update fine:

    gem "rails"
    gem "rails", "~> 3.2"
This won't:

    gem "rails", "= 3.2.10"


No, bundle update only updates the Gemfile.lock

If your gemfile looks like:

    gem 'rails', '3.2.10'
Then `bundle update` does nothing.


bundle update activesupport


If there is one thing every black-hat attacker in the world is truly gifted at, it is pivoting from shell access on one server to shell access on every goddam server.

I'm not even a hacker, and I have done this before.


Thanks for this reminder. Yesterday I went through all the apps that matter to me but today I fixed all the apps on the box.


How big of a problem do you think secondary (browser) exploits are going to be? Should users take any browsing precautions?


>* Now would not be the worst time in the world to go to your Twitter config, hit Settings -> Apps, and scrub out all the stuff you don't use.

OT: It'd be nice if Twitter would recognize mass abuse of accounts via an OAuth app and automatically disable that app until action can be taken. (They may do this, I don't know.)


If they did, they might not say so.


I disabled all 3rd party access to my Twitter account. Every single one. Yeah I had to re-enable my Twitter client… worth it.

YOU SHOULD ALSO disable any OAuth access to your Gmail account. If you're like me, you trusted a few "email as a game" or whatever apps to take a peek, but you can't trust them to stay on top of this security flaw.

It's hard as hell to find the instructions in Google's documentation, so here's the account management link:

https://www.google.com/settings/account?hl=en

Click Security, then the bottom option in the main body of the page.



For those who aren't familiar with the Metasploit Project, it's an open source collection of safe and vetted exploits. Once an exploit module makes it into the Metasploit Framework, it's immediately available to ~250K users. The Metasploit Framework isn't just exploits though, it's an integration point for offensive capabilities that simply work together. It's also very easy to hook your own stuff into it.

There are several programs that build on the Metasploit Framework and take advantage of it. Rapid7 has commercial penetration testing products. I build the open source Armitage GUI for it and a commercial add-on called Cobalt Strike.

It's worth spending some time to learn how it works and what it does. Here are a few links:

Metasploit Unleashed Wiki: http://www.offensive-security.com/metasploit-unleashed/Main_...

My 7-part course on pen testing (with Cobalt Strike & MSF): http://www.advancedpentest.com/training

Quick demo of what it looks like to attack a workstation and use it as a hop point to get other things: http://www.youtube.com/watch?feature=player_embedded&v=S...

The best way to try the Metasploit Framework is to set up BackTrack Linux in a virtual machine: http://www.backtrack-linux.org/

A free vulnerable target is the Metasploitable virtual machine, available at: http://sourceforge.net/projects/metasploitable/files/Metaspl...


This particular gentleman also has a few good presentations on Metasploit I saw at Defcon/Bsides:

http://www.youtube.com/watch?v=G-JaHWaLmgc&hd=1

http://www.youtube.com/watch?v=y2M3SpOzeJY&hd=1



How prevalent is Rails 3 right now?

I somehow feel that defensive programming practices would have caught this bug. There is a lot of "magic" going on that led to this exploit.

Sending text/xml to an application shouldn't have a huge impact but when you are dynamically creating objects out of the content it can lead to some serious problems.

Python has this too with pickle. However with Python it is pretty damn obvious that pickle is NOT SAFE. You also have to import it manually.


The bug has been there for 6 years and is present in virtually ALL deployed versions of Rails. Bugs happen (and they are certainly more common in dynamic frameworks like Rails).


I don't think defensive programming would have helped, based on the current situation across a number of languages. Basically no serializer in any language (PHP, Python, Ruby, etc.) is completely safe. There are known exploits for all of them.

There is lots of magic in rails, but I bet there are a number of popular PHP and Python libraries/frameworks that are unserializing in an unsafe way.


There's an enormous difference between serializers that make any effort at all to be safe, and those like Ruby's YAML library, which make no effort. Python's YAML, for example, exposes a safe_load() method.

It's really criminally negligent that no such method exists in Ruby's YAML library.
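To make the danger concrete, here's a minimal (harmless) demonstration of what unrestricted YAML.load does with a tagged document on a Ruby of this era - the same mechanism the exploit drives with far nastier object graphs:

  require 'yaml'

  # A harmless stand-in for an attacker-controlled payload: YAML.load happily
  # instantiates whatever Ruby class the !ruby tag names. (Later Psych
  # versions restrict YAML.load by default.)
  payload = "--- !ruby/object:Time {}"
  obj = YAML.load(payload)
  obj.class  # => Time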


Python's Pickle lib had something similar to safe_load(), which they removed because it gave a false sense of security.


If you are accepting pickled objects from a remote and using it ... you are an idiot.


Defensive programming in this case would be to not use a general purpose de-serializer for a format that is explicitly designed to allow serialising arbitrary objects....


Serializers aren't inherently unsafe. Look at Lisp's (read) method, for example, which does not evaluate anything.


Proof-of-Concept Exploits for CVE-2013-0156 and CVE-2013-0155 are here:

rails_rce.rb (https://gist.github.com/4499206)

rails_sqli.rb (https://gist.github.com/4499032)

rails_dos.rb (https://gist.github.com/4499017)

rails_jsonq.rb (https://gist.github.com/4499030)



The Good:

The way the Rails team responded so quickly in patching this.

The Bad:

The patches for this and other recent security issues had little time for testing and hence broke things. The old failed idea of trying to prevent full disclosure, which ultimately harms the community whilst doing nothing to really prevent the bad guys arming themselves with working exploit code, and all the resulting kerfuffle we saw.

The Ugly:

The Rails codebase. Seriously. As you read this, interested people are poring over it, looking for new vectors of attack, and we are awaiting the next round of scrambling to fix the bad things that the magic in Rails enabled.


If you installed Rails from Ubuntu, you have to patch it by hand since they're not patching it.

Use the conversions.rb patch for 2.x from https://groups.google.com/forum/#!topic/rubyonrails-security...

"We're not patching it" statement: https://launchpad.net/bugs/320813


In my experience, using Ruby or gems from your package manager leads to headaches down the line - I'd highly recommend using bundler to manage your gems at the very least, and rvm or rbenv to manage rubies.


Does this also apply to end-users of Ruby apps that are just an apt-get away? I don't really want to learn all that stuff (and remember to redo it on all installs) just to use some tool that happens to be written in Ruby.


I don't believe there are any rails-apps-as-packages in the official debian/ubuntu repositories, but if there were I assume they would use bundler to bundle their gems internally.


Yes there are, in our case Redmine. A pretty popular piece of software I believe. In Debian it's in main and in Ubuntu it's in universe.

Re. bundler/gems, I don't know what those are - the file "core_ext/hash/conversions.rb" I hand-patched was from a package called ruby-activesupport-2.3 which is a dependency of the Rails package.


It was redmine I was using when I had the issues actually. The real problem though was that I was trying to use a newer version of Redmine than was available in the repo, and I did still manage to satisfy the dependencies but upgrading my Ruby version broke literally everything.

I think if 100% of your ecosystem is from the package manager you would be fine, but if even a single component needs to come from outside I would reach straight for rvm and bundler (no prejudice against rbenv, rvm is just what I use).


Gems are ruby packages, and bundler is a way to use specific versions in an app, independent of what versions are installed globally. I think bundler would be a good fit for redmine, just because you don't really gain anything other than disk space by being able to share ruby-activesupport-2.3 between apps.
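For reference, a minimal Gemfile looks something like this (the versions here are purely illustrative):

  # Gemfile -- bundler resolves these declarations into Gemfile.lock, and the
  # app then loads exactly those versions, independent of system-wide gems.
  source 'https://rubygems.org'

  gem 'rails', '~> 2.3.15'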


If you apt-get your gems you're doing it wrong.


It shouldn't apply in that situation. My advice applies more to people developing or hosting ruby applications.



"We're not patching it"

That's not what was said. They don't maintain it.


They did too - they're not patching it unless someone "from the community" comes along, does the work, and successfully lobbies to get it sponsored. Which almost certainly won't happen in time (if at all). Have a look at the track record of security bugs in Launchpad that apply to universe (aka "community maintained") packages.


The bug you link is some kind of 3d-related bug, what does it have to do with Rails?


Yep, paste fail, andreaja's link is the right one.


So all these Rails exploits are cropping up because someone decided to actually look at a single module of code? I saw in the last thread someone comparing these Rails vulnerabilities to Python's pickle (which has a huge warning screaming "DO NOT USE") and Django's various issues, but that comparison seems pretty unfair now. Rails seems to be in the PHP4 league of bad code, and reading back through the security threads on this you see tons of people brushing the initial vulnerability off ("You need the secret key", "Only projects that leak the secret are vulnerable", etc.)

Is a simple code review all it took for this fail cascade?


It wasn't a simple code review, it was a vulnerability that existed in code unnoticed for a number of years. It required skilled security researchers to unearth it. Vulnerabilities exist unnoticed in a number of foundational OS projects like this, and it's only when a CVE is released that people realize it had been there for quite some time.


Unnoticed...

Do not make the mistake of assuming that because there's no CVE, that a vulnerability is unknown. Not everyone who analyzes code is wearing a white hat.


Very true. There were no widespread reports of incidents based on this vulnerability in the wild, though, or it would have been discovered already. Thankfully, the folks that found this exploit (and the others they are sitting on for the moment) were wearing white hats. Also thankfully, the core team responded quickly to rectify and publicize the issue.


It requires skilled security researchers to unearth this?

  <?xml version="1.0" encoding="UTF-8"?>
  <bang type="yaml">--- !ruby/object:Time {} </bang>


When the execution path looks like this, yes: http://blog.codeclimate.com/blog/2013/01/10/rails-remote-cod...

An exploit that's simple to use does not mean that it was simple to discover. In fact, the opposite is often the case.


What is a 'metasploit'? I just suddenly read this 50 times on Twitter and I have no idea what's going on. I'm halfway through this article and it's still not making sense.


It's an automated vulnerability tester.

It will probe websites (and local networks) to find out the OS/server information (IIS, apache, etc), database info (mysql, mssql) and language (asp, php).

It then uses a database of known exploits and scripts (all types XSS, SQL injections, etc).


It's a collection of attacks. The open source framework doesn't have this kind of automation. It's used by professionals to conduct manual hacks.


It actually used to have an autopwn feature, which they took out a few years ago. It would essentially scan ports, see what's open, and throw the entire kitchen sink at it.


Armitage, a GUI for Metasploit, has a "Hail Mary" button that does the same.


Right, I should have clarified it's used in conjunction with Nexpose (or other scanners) which does the automated scanning.


So is it a good tool (testing tool) or malicious?

If it's a good tool, why would he release it so soon without giving people much time to update?

If it's malicious, why is he telling people about it?


It's both. Or maybe neither.

See http://en.wikipedia.org/wiki/Full_disclosure - it's a debate that has been going on for a long time. Consider that right now the only people scanning IP blocks for vulnerable apps are bad guys.


It's intended to be used for security testing against one's own machines, demonstrating a vulnerability if it exists by directly exploiting it. The general nature of such tools, however, is that they can be used for good or for evil. It's just assumed that the bad guys have something of the sort already.


You know, when I look at what it is and what it does... I'm not sure I believe that's what it's 'intended' for, although that certainly is what they say it's intended for.

But anyway, in the end, it doesn't matter, it exists. Authorial intent is so 20th century.


Think of it like this. The exploits are already out there, whether they're public or not. MSF takes these exploits, and packages them into a coordinated tool. Sounds evil, right? A script kiddie can grab this tool, update it so it has the latest exploits, and voila! pwn the internet.

Well, they can do that without MSF. It's just harder.

Where MSF helps is with pentesters and other security professionals. When they perform a pentest or audit, tools like MSF/SET/Nexpose allow them to rapidly and accurately determine if a network or system is vulnerable, and prove it (within the bounds of the engagement's scope). Without these tools, a pentest would require far more tedious work.


That's exactly what it's intended for, actually. It was built by pentesters to perform their work. I'd rather my security tools be open sourced and available for all the world to see and contribute to. Doing so also compels lazy vendors to patch awful vulnerabilities.


Wikipedia says:

> The Metasploit Project is also well known for anti-forensic and evasion tools, some of which are built into the Metasploit Framework.

If so, how are such functions valuable for a pentest/audit?


Say you're doing a pentest, and your target has an IDS/IPS to both prevent infiltration and exfiltration of data. And while you discover a system that is vulnerable, the payload that you want to deploy to this vulnerable system would normally trigger an alert by an IDS. With some of the tools in MSF and BackTrack, you can use an encoding process to obfuscate the payload enough to get past the IDS/IPS.

Now a blackhat would be able to do this without BT or Metasploit. The tools are out there, and well known. So the fact that these tools are in BT and Metasploit doesn't change that. But it does make it easier for a pentester to prove a system is vulnerable, and to help a company address their vulnerabilities through remediation.


Intent doesn't matter that much here... the fact of the matter is that people are under enormous pressure to patch their known vulnerabilities, and this is mostly a good thing.


It's a tool, like a gun. If it's pointed at a computer you have authorization to exploit, it's good. If someone malicious is pointing it at you, it's not.


>If it's malicious, why is he telling people about it?

Because the benefit of warning people almost surely outweighs the risk of someone new and malicious stumbling onto Metasploit at exactly the time such a large vulnerability is in play. Especially given that Metasploit has been around for some time and will continue to be long after this exploit is a smudge on RoR's history.


Metasploit is a software package that incorporates all sorts of security vulnerabilities and tools for using them. It's essentially a console where you can target machines, scan for and exploit vulnerabilities, and then install common payloads (like a backdoor for shell access) on the compromised machine.


I wasn't particularly familiar with it either, however there's a wikipedia entry :

http://en.wikipedia.org/wiki/Metasploit_Project

edit : "metasploit module" is likely a reference to the metasploit framework, which is written in Ruby.


> What is a 'metasploit'?

Ruh roh.


Boy did I pick the right week to leave my last job.


I just saw a PoC on twitter a few minutes ago. I tested it locally and was able to run arbitrary ruby code via YAML.load, including shell commands.


which one works w/o Rails?


How about an exploit that patches Rails? or at least disables it?


Current US law would still count that as hacking ("unauthorized remote computer access"). Hey, if you break some apps, their owners would probably agree, so don't do it.


I wrote up a detailed explanation of the issue and how the proof of concept works here: http://blog.codeclimate.com/blog/2013/01/10/rails-remote-cod...


What's the easiest way to verify that "ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)" is handling requests with xml parameters properly? Does one of you beautiful people know of a tool that I could use?


I posted about this a little bit further up. Run the exploit code on your local server and ensure that no parameters are getting logged.

You can test by running this ruby file: https://gist.github.com/4499206

  $ ruby rails_rce.rb http://localhost:3000 param "User.destroy_all"
Monitor your server and ensure it is disregarding the post parameters.


thanks!


So for old versions of rails 2, can someone tell me where to put this?

  ActionController::Base.param_parsers.delete(Mime::XML)

if i just stick it at the bottom of environment.rb, will that work? i am not sure how to test to confirm that i've fixed it.


Run this for Rails 2.x:

  curl -i -H "Content-Type: application/xml" -X POST -d '<id type="yaml">--- !ruby/object:ActionController::Base bar: 1</id>' http://localhost:3000
If in your logs the params[:id] is an object, then you are vulnerable. If it's just a string, then your fix worked.

I put mine in an initializers file.

  ActiveSupport::CoreExtensions::Hash::Conversions::XML_PARSING.delete('symbol')
  ActiveSupport::CoreExtensions::Hash::Conversions::XML_PARSING.delete('yaml')


Thank you, very helpful.

EDIT AGAIN: I think putting it at the end of environment.rb works. I somehow messed it up and got confused, but then I tried it again and it worked. Just make sure you confirm your fix with the curl command!


I can confirm that this method produced the desired change in a Rails 2.1.2 app.


It should work at any place where activesupport has been loaded.


I just updated our app from 2.3.14 to 2.3.15, yet when I run your curl command I'm getting:

    Parameters: {"id"=>#<ActionController::Base:0xb570d2e8 @bar=1>}
Why do you think I'm still unprotected after updating to the fixed version 2.3.15 referenced here? http://weblog.rubyonrails.org/2013/1/8/Rails-3-2-11-3-1-10-3...

Or is there something I'm missing?


You should double check or restart your server or something. I tried it on 2.3.15 and I am getting

  Disallowed type attribute: "yaml"


This is what I had done:

1. Edit Gemfile and change rails version from 2.3.14 to 2.3.15.

2. Commit and push changes

3. bundle exec cap deploy (this is our standard method and works well.)

4. Double check rails -v returns 2.3.15 on production server.

5. On the production server run

    curl -i -H "Content-Type: application/xml" -X POST -d '<id type="yaml">--- !ruby/object:ActionController::Base bar: 1</id>' --insecure http://localhost
Result:

    Parameters: {"id"=>#<ActionController::Base:0xb570d2e8 @bar=1>}
Once I got the curl command to run against my production site (see my comment just below) and saw that it was still vulnerable, I quickly hacked

    ActionController::Base.param_parsers.delete(Mime::XML)
into environments.rb and re-deployed and that fixed it. Now when I run the curl command I get a parameter-less GET request in the log. I still do not understand why updating to 2.3.15 per the recommended method did not fix the problem, but at least our app doesn't need the xml in that way.


Are you running this test against your development or production site? I can't get it to run against my production site, (nothing appears in the log at all,) but I have triple checked everything and am still getting the same unwanted result against my development site.


Are you maybe hitting a cached page on production that can get handled without rails being involved?


Thank you, it turns out I was able to get the curl command to run against the production site by adding the --insecure option and using the default port:

    curl -i -H "Content-Type: application/xml" -X POST -d '<id type="yaml">--- !ruby/object:ActionController::Base bar: 1</id>' --insecure https://localhost


Is Rails 1.X vulnerable at all? I tried running the snippet on a Rails 1.X app without any patches, and I got the id as a string, not an object. Why? The Rails guys seemingly implied 1.X is also vulnerable, just that they don't give a damn about investigating what would fix it because it's too damn old.


Per pixeltrix's comment it appears that you're safe.

  "If you mean that your Ruby on Rails version is 1.2.6 then, no the vulnerability does not affect you as the feature was introduced in Ruby on Rails 2.0"
source: http://weblog.rubyonrails.org/2013/1/8/Rails-3-2-11-3-1-10-3...


Using the above curl command I was able to verify that an old rails 1.2.3 app that I pretend-maintain for a friend returned the ID as a string. Since it is a string and not an object, it's safe. That's all I know. From what I gather/conjecture, rails 1.x didn't have the functionality that caused this vulnerability in the first place.


I am not quite sure how those 2 lines interfere with my Rails 2.0 app (we are using XML features). What exactly is being disabled here? What features will I not be able to use anymore after applying this fix?

Thanks for the help!


Is this good or bad:

Parameters: {"action"=>"list", "id"=>#<ActionController::Base:0x6dc4177ed940 @bar=1>, "controller"=>"news"}


Bad. It means you successfully made an ActionController object.

Good would look something like this:

  Parameters: {"action"=>"index", "id"=>"--- !ruby/object:ActionController::Base


okay - (forgive me). I have a very old project i'm trying to fix. I updated rails to 3.2.11

It's been like 4 years since i did this and haven't touched rails since.

What else do I need to do here to fix this? I thought Rails 3.2.11 was OK...


Rails 3.2.11 is OK. I upgraded a website to it and did nothing else and when I try the curl command ( curl -i -H "Content-Type: application/xml" -X POST -d '<id type="yaml">--- !ruby/object:ActionController::Base bar: 1</id>' http://example.com/ ) I see something like this in the production log:

  Hash::DisallowedType (Disallowed type attribute: "yaml"):
  activesupport (3.2.11) lib/active_support/core_ext/hash/conversions.rb:112:in `typecast_xml_value'


Yeah my site is on a shared host.. and it's picking up a different version of rails. fuuuuck.


If you're running a very old version of rails, it might not be that easy to update to the latest rails version. In that case, just stick this in your config/environment.rb

  ActionController::Base.param_parsers.delete(Mime::XML)

This will disable parsing of XML parameters, which most people never use anyway.


Yes, please. I maintain a Rails 2.3 app. What do I need to do?

Edit: fixed using above info from vinhboy. thanks heaps :)


People on Rails 2.2 can refer to this:

https://github.com/drasch/rails/commits/2-2-stable


Can anyone confirm whether or not I still need to add ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML) to an initializer if my apps have been updated to rails 3.2.11?


I suggest that you test it. You should get something like a 'Disallowed type attribute' error if everything is installed correctly. You can use the Postman Chrome extension to quickly test if your server is OK.


You will only get that error if you upgrade to 3.2.11 (or whatever) and do not disable XML. If you disable XML, you'll just silently not parse the XML (regardless of update status).


If you disable xml or yaml parsing, you'll see the raw yaml come through in the parameters hash.


I'm 90% certain you do not. That's just if you don't really have the mojo to update your app right now.

But don't quote me on that...


Updating to 3.2.11 is sufficient.


Question: this only affects Rails Version 2 and higher, right?

I suspect I'm not the only person out there with legacy Rails Version 1 sites, so it would be very good to know the answer to that!


Note the comment about Yaml parsing in 1.1 era rails apps. You'll want to at least look into what your app is doing.


Yeah - my eventual decision was that it wasn't worth the risk. One extremely busy day later, we don't have any active Rails apps.


That is correct


After reading tptacek's comment, I'm curious:

Has this never happened before? Has there never been an exploit of this significance that has made it to Metasploit, and why not?


Yes, many many times. It's just new to startupland.


On the scale of Internet exploit significance, this is actually quite low compared to many of the exploits in MSF.


Metasploit releases significant exploits pretty frequently. HD (the author of the blog post) is featured for one of his more interesting exploits in this NYTimes article from last year: http://www.nytimes.com/2012/01/23/technology/flaws-in-videoc...


Metasploit Framework, the 2nd most important software framework written in Ruby, and Rails, the topmost ...sweet irony ;)


I wonder if Heroku's current downtime on their MyApps site is related?


I've found that a lot of the posts so far don't cover both how to use Metasploit and how to actually secure a site. So I wrote http://blog.endpoint.com/2013/01/rails-CVE-2013-0156-metaspl....


Sheeeeit.

Is there a quick way to fix this problem for sites running older versions of Rails? I can't be the only one who is in that situation. Frankly, provided it leaves some version of the website online, I'm OK with losing some functionality, even.



Yes. MUCH older. I still have version 1 Rails sites...


Not sure this is correct. I just skimmed the article.

http://www.insinuator.net/2013/01/rails-yaml/

From the comments:

"From what i’ve see here is that versions below 2.0 should not be affected by this issue as they do not support xml parameters and don’t have any XMLMini implementations."


[deleted]


This is not going to be an SQL injection; it's going to be remote code execution of anything you want.

Also expect that all rule-based network inspectors are going to miss it. You can serialize so many objects using YAML (Symbols, Objects, Regexps, Structs, extend Kernel::Hash, etc. - and I've only looked for a few hours) that there are going to be just too many attack vectors.
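A quick illustration of how much YAML will happily round-trip - safe to run locally on a Ruby of this era (later Psych versions restrict YAML.load by default):

  require 'yaml'

  # YAML deserializes far more than plain hashes and strings: symbols,
  # regexps, structs and arbitrary tagged objects all come back as live Ruby
  # values, which is why rule-based filters struggle to catch this.
  YAML.load("--- :a_symbol").class           # => Symbol
  YAML.load("--- !ruby/regexp /foo/").class  # => Regexp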


So does Apache's mod_security... but this isn't a normal SQLi and I certainly wouldn't rely on that saving you.


[deleted]


I'm not a Ruby developer, but keeping any application up-to-date is a necessity. If a server is connected to a network, it is already vulnerable.


Given the nature of this exploit it's unlikely to weed out anything at all, unless they've pushed something out to look specifically for this case.


Since I haven't seen the exploit, I don't know that it will do anything at all!


If I download a fresh version of rails from http://rubyonrails.org/ is it vulnerable to this?


No, rails-3.2.11 has fixed this vulnerability.


The Metasploit people seem to be getting stuck trying to get a version that runs against Rails 2.x, 3.x and different Ruby versions, so it might be a while before they release :)

However, there are multiple PoCs floating around, so it is fairly safe to assume attackers already have access to the exploit.



