
Expect a point-and-click exploit that will run arbitrary code on vulnerable servers.

If you've never dealt with a problem like this, you may not be ready. So, here's the most important thing you need to understand:

If you have a vulnerable application anywhere, on any port, it will be found and compromised. This is a spray-and-pray vulnerability. It costs attackers nothing to try, attempts don't crash servers, and so people will try everywhere.

If you lose an application in your data center / hosting environment, that's the ballgame. It doesn't matter that the app you lost was the testing instance of a status dashboard with no real data in it, because the exploit coughs up shell access on that server. If there is one thing every black-hat attacker in the world is truly gifted at, it is pivoting from shell access on one server to shell access on every goddam server.

Please make sure you didn't just patch the app servers you know/care about. THEY ALL NEED TO BE PATCHED OR RETIRED.

Additionally:

* If you are one of those "same password on a whole bunch of services" people, now is a good time to make sure nothing you care about has that password. Some app somewhere is about to lose that password.

* Now would not be the worst time in the world to go to your Twitter config, hit Settings -> Apps, and scrub out all the stuff you don't use.

* Now you know why you never give 3rd party web apps your Gmail password.




(Bah, great point about passwords. I need to reform my ways.)

To amplify and expand on Thomas here: when this was announced I pushed the Big Red Button and pushed three emergency patches to my servers at 3 to 5 AM Japan time. My perception was "This just can't wait." I went to sleep with the vague feeling that I had probably broken something (there's always something that slips when you're tired and hasty) but that it was almost certainly acceptable given the alternative.

Sure enough: despite automated and smoke tests passing and metrics remaining nominal, Appointment Reminder suffered breaking downtime for some customers (it depended on browser - long story not relevant). This ended up locking them out for about 16 hours, felicitously mostly not during the US working day.

After being told of the issue by a few mighty pissed end users, I fixed it and spent a second awake-to-9 AM night both writing a to-all-customers apology email and fielding questions. I went into detail on why I screwed up (acted too fast) and a simplified version of why I had to (third-party software required an urgent patch; delaying deployment by one day would have been an unacceptable risk to customer data).

Several customers - including a few of the ones most inconvenienced - got in touch to say "Right call." One of them was of the opinion that, if I hadn't patched, he would be in Big Red Button mode today, because no machine or data on a local network with an unpatched Rails instance is safe. "I honestly prefer knowing it broke because you were on top of things than it being stable because you weren't."

I'm not a security guy, I'm just a systems engineer, but my take on it is that this does not just require the Big Red Button, this is the paradigmatic example of why you have a Big Red Button. If you don't, or if you pushed it yesterday like you should have and something blew up, this is an excellent opportunity to improve procedures for next time.

Edit: Big Red Button is funny shorthand for "Immediately drop what you're doing, pull out the In Case Shit Happens folder, and have the relevant people immediately execute on the emergency plan." We call it something different in big Japanese megacorps but I always liked the imagery.


For me "big red button" is "Emergency Power Off", which I guess is also a viable response to some showstopper bugs, at least until you have time for a proper fix.


I turned on Heroku maintenance mode for a couple of apps I didn't care to patch right away; according to its description it prevents requests from reaching dynos, so it should be good for preventing access to existing data/stored keys in the meantime.


Likewise - we had a big meeting yesterday with an important customer, and my cofounder ended up taking much of it alone because this couldn't wait. I actually found out about this a little bit prior to it being publicly announced, so luckily had a little bit of lead time, but it was a 'showstopper' for us nonetheless, especially as a website security company. ;)

Also, if anyone wants help or explanation on the vulnerability (though there are plenty of posts that do a great job), I'd be happy to chat about it - feel free to email me whenever at borski@tinfoilsecurity.com


LastPass makes it much, much easier to never re-use passwords. Just make sure your master password is strong, unique, never re-used anywhere else, and protected by 2FA!



You can use LastPass as a completely zero-knowledge app by not utilizing the binary extensions for Chrome or Firefox.


I prefer KeePass (2 or X) with the encrypted pwd database backed up to SpiderOak (or Tarsnap, the only two really secure backup services I know of).

OSS, local not cloud-based, encrypted pwd file, excellent pwd generator, easy to use.


I use SuperGenPass, it's comparable and requires storing no state, thus nothing to lose or back up.


Downside: You can never change your SuperGenPass password, even if a site requires you to.


This is a huge downside -- accidentally type your password on a device you shouldn't have? You now have to update every site rather than just updating your master password.
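To make the statelessness concrete: a deterministic scheme looks roughly like the sketch below. This is illustrative only - SuperGenPass's real algorithm is different (it iterates MD5 over master+domain) - but it shows why every site password is tied to the master password, so a leaked master forces you to rotate everywhere.

```ruby
require 'openssl'
require 'base64'

# Illustrative only: NOT SuperGenPass's actual algorithm. Derives a
# per-site password purely from the master password and the domain,
# so there is no state to store - and no way to rotate one site's
# password without changing the master.
def derive_password(master, domain, length = 12)
  digest = OpenSSL::HMAC.digest('SHA256', master, domain)
  Base64.urlsafe_encode64(digest)[0, length]
end
```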



I use EnigmaPass[1], which, as an extension, renders outside the DOM, so it's not.

[1] https://chrome.google.com/webstore/detail/enigmapass/bgkipgf...


Thank you, I was looking for one and thinking of buying a yubikey as well.


For people applying the security patches, please be aware that Rails 3.2.11 has broken some things (I've been having issues related to bad JSON parsing).

Fortunately, the community is stepping up with patches[1]. Hopefully, these patches are not adding further vulnerabilities.

[1] https://github.com/rails/rails/pull/8855


For this very reason I patched a version of Rails 3.2.8 with the following patch files distributed by the ror-security mailing list[1]:

  3-2-dynamic_finder_injection.patch
  3-2-null_array_param.patch
  3-2-xml_parsing.patch

The changelogs didn't cleanly apply but everything else did. In your Gemfile,

  gem 'rails', :git => 'git://github.com/adamonduty/rails', :branch => '3.2.8_with_security_patches'

This will install version 3.2.8a. If you get a bundler error "NoMethodError: undefined method [] for nil:NilClass", try upgrading your rubygems-bundler gem to version 1.1.0.

See https://github.com/adamonduty/rails/tree/3.2.8_with_security... for the commits.

Given the number of changes and known issues in 3.2.9, I don't understand why the core team didn't perform a similar release.

[1] https://groups.google.com/forum/?fromgroups=#!topic/rubyonra...


This bit me too, upgrading an app from 3.2.8:

https://github.com/rails/rails/issues/8269


I believe I am also seeing these issues in 3.1.10.


Good thinking. Might also be good to prune some auth tokens and app specific passwords for your Google accounts, too. https://accounts.google.com/IssuedAuthSubTokens


Thanks for the tip to lock down Twitter, I hadn't thought of that.


Thanks for the reminder. I totally forgot I have a redmine server running....


I can't echo this sentiment enough: If you have a vulnerable application anywhere, on any port it will be found and compromised.

If anyone needs convincing, here's how it will go down:

- Google will help them find you. They'll search for things found in pages typically produced by different Rails apps.

- They will look at the entire network allocation and scan the whole range. (Try running whois against an IP address.)

- If they can identify the hosting platform, it will make it that much easier to know where to look.

- Even if you don't show up in the results, your neighbor might and you're next.

Edit: formatting.


I would add one additional point of caution:

The Rails community seems unusually keen on the 'curl example.org/script | sh' as an installation method (see Pow.cx etc.).

I'd usually recommend reading these scripts before execution, but for now especially so as it would seem an obvious target if people are looking to leverage this exploit to acquire more boxen.


While I fully agree with you that the practice of curling a script and piping it into `sh` is - to say the least - risky, notice that this risk was widely accepted a long time ago. Each time you download an executable file - be it an exe for Windows, an apk for Android, a Linux binary, an OS X executable - you're doing the same thing. I'll go one step further: each time you download a free/open source tarball you do not read the code before typing `make`. You make your machine run code of unknown functionality and only plausible origin.

Arguably, HTTPS is one step forward; however, vulnerabilities like the one discussed here make us defenceless. To make matters worse, the line of defence based on reading the script works only in the case of relatively short, unobfuscated and unminified scripts written in plain text. It also requires the person who's downloading to have skills which, despite being common among this community's audience, are not widespread across the population.

Sure, many projects sign their releases or announce cryptographic hashes of published files. But let's be honest: how many of us actually run `gpg` or `sha256sum -c` to verify them?

Spreading paranoia is not my goal here, however I hope that this comment will end up being thought-provoking.
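For what it's worth, the checksum half of that defence is cheap to automate. A minimal sketch in Ruby (the path and the expected digest are placeholders; a real digest would come from the project's release announcement):

```ruby
require 'digest'

# Check a downloaded install script against a published SHA-256 digest
# before ever piping it to a shell. Returns true only on an exact match.
def verified?(path, expected_hex)
  Digest::SHA256.file(path).hexdigest == expected_hex
end
```

This doesn't help against a compromised project that publishes matching digests for a malicious script, but it does defeat a tampered mirror or a man-in-the-middle on a plain-HTTP download.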


I think the point he was making is more so that these Rails-centric sites are going to get nailed and, as a result, one should be more wary during the next few weeks about using this sort of installation method.

One should generally be quite wary of it in the first place, given the ease with which one could swap out a single file & wreak havoc.


But how do you patch it?


If running Rails 3, you can disable XML parsing of parameters by adding this to an initializer:

  ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)
source: https://groups.google.com/forum/#!topic/rubyonrails-security...
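If you're stuck on Rails 2.3, the advisory lists an equivalent workaround; as I recall it's the two lines below, but double-check against the advisory text before relying on it:

```ruby
# Rails 2.3 equivalent of the Rails 3 initializer above, per the
# rubyonrails-security advisory for CVE-2013-0156: strip the symbol
# and yaml typecasts out of XML parameter parsing.
ActiveSupport::CoreExtensions::Hash::Conversions::XML_PARSING.delete('symbol')
ActiveSupport::CoreExtensions::Hash::Conversions::XML_PARSING.delete('yaml')
```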


What happens with this fix is scary at first, but is actually OK.

If you patched as described, and run the exploit code on your local server, the exploit code will return a 200 response. As in:

  [-] POSTing puts 'boom' to http://localhost:3000 ...
  [-] Success!
This doesn't mean your site is vulnerable. Rails is entirely disregarding the parameters as specified by your initializer code.

Testing locally, watch your server when the request comes through and ensure there are no parameters being registered. You don't want to see something like this:

  Started POST "/" for 127.0.0.1 at 2013-01-09 21:00:04 -0800
  Processing by StartController#index as */*
  Parameters: {"secret"=>"--- !ruby/hash:ActionDispatch::Routing::RouteSet::NamedRouteCollection\n? {}\n: !ruby/object:OpenStruct\n  table:\n    :defaults:\n      :action: create\n      :controller: foos\n    :required_parts: []\n    :requirements:\n      :action: create\n      :controller: foos\n    :segment_keys:\n    - :format\n  modifiable: true"}
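If you'd rather not pull in the metasploit module at all, you can build the same shape of request by hand. This is a hand-rolled sketch (the URL, parameter name, and payload are placeholders), with the actual send left commented out:

```ruby
require 'net/http'

# Builds an XML body whose parameter is typed as YAML; a vulnerable
# server deserializes it, while a patched one ignores or rejects it.
def build_probe_xml(param, yaml)
  %{<?xml version="1.0" encoding="UTF-8"?>\n<#{param} type="yaml">#{yaml}</#{param}>}
end

req = Net::HTTP::Post.new('/', 'Content-Type' => 'text/xml')
req.body = build_probe_xml('probe', '--- !ruby/object:Time {}')

# Net::HTTP.new('localhost', 3000).request(req)  # uncomment to send,
# then watch the server log for a Parameters line as described above.
```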


On a patched server I see a Hash::DisallowedType (Disallowed type attribute: "yaml") exception raised by activesupport.

As a simpler test case, I modified the rails_rce.rb (for CVE-2013-0156) and passed a simpler YAML document that creates a Time object:

  yaml = %{ --- !ruby/object:Time {} }
  xml = %{
  <?xml version="1.0" encoding="UTF-8"?>
  <#{param} type="yaml">#{yaml}</#{param}>
  }.strip

  print_info "POSTing #{code} to #{url} ..."

  response = http_request(
    :method => :post,
    :url => url,
    :headers => {:content_type => 'text/xml'},
    :body => xml
  )

This way a vulnerable server's log file shows something like the below (i.e. the Time object was actually created from the YAML):

  Started POST "/users/sign_in" for 127.0.0.1 at 2013-01-10 00:26:40 -0500
  Processing by Devise::SessionsController#create as */*
  Parameters: {"secret"=>1969-12-31 19:00:00 -0500}

A patched server raises a Hash::DisallowedType (Disallowed type attribute: "yaml") exception.


I'm unable to reproduce - not sure why. I'm seeing no parameters. I'm getting:

  Started POST "/user/sign_in" for 127.0.0.1 at 2013-01-09 21:40:56 -0800
  Processing by UsersController#sign_in as */*
If I comment out the patch, I see:

  Parameters: {"secret"=>1969-12-31 16:00:00 -0800}
So I suspect others may not see the same exception... I'm using ActiveSupport 3.0.3, fyi.


Yep. I see what you mean.

activesupport-3.2.11/lib/active_support/core_ext/hash/conversions.rb defines DISALLOWED_XML_TYPES = %w(symbol yaml), which its typecast_xml_value method uses to raise the exception.

I don't see these lines of code in activesupport-3.0.3/lib/active_support/core_ext/hash/conversions.rb

In my case I could upgrade to 3.2.11.

In your case, I am guessing you added the lines of code that disable xml and yaml parameter parsing to an initializer (or application.rb). This way, activesupport simply wouldn't try to convert the parameter value in question into a ruby object.
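For anyone curious, the guard has roughly this shape - a simplified sketch, not the actual activesupport source:

```ruby
# Mimics the shape of the check in activesupport 3.2.11's
# typecast_xml_value; the real implementation lives in
# active_support/core_ext/hash/conversions.rb.
DISALLOWED_XML_TYPES = %w(symbol yaml)

def check_xml_type!(type)
  if DISALLOWED_XML_TYPES.include?(type)
    raise "Disallowed type attribute: #{type.inspect}"
  end
  type
end
```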


Right. Thanks!


Thanks for posting this, you gave me the bits I needed to make a simple test I could use to verify the patch, without needing the metasploit stuff. (On Rails 2.3, I don't get the DisallowedType error, but I can verify in the logs that the patch works.)

http://www.railsperformance.com/2013/01/simple-test-for-rail...


If you're running a relatively new version of Rails (hopefully 3.2.10) you can just go to your app on your dev machine and type:

bundle update rails

Then commit and deploy in the usual manner. You'll probably experience no issues with your application, but it's good practice to run your test suite before you do any sort of production deploy anyway.

If you're running an older version of Rails (pre bundler) or if you don't have a solid test-suite, you'll have a tougher time updating. But you definitely need to put aside time to figure it out now.


You'll need to change the version of rails listed in the Gemfile. I don't think "bundle update" modifies that.


Actually it depends on what your Gemfile looks like. If you've specified the exact version in your Gemfile you'll need to edit it before running `bundle update rails`. This means your Gemfile might look like:

    gem 'rails', '3.2.10'
Running `bundle update rails` when you've absolutely specified the version number won't do anything. Your Gemfile.lock file (which is the file that bundler uses to determine which gems to require) won't be changed. Instead of absolutely specifying the version of Rails you want to use, consider using the '~>' specifier. My Gemfile at Airtasker looks like:

    gem 'rails', '~> 3.2.8'
That means that when I run `bundle update rails` it will update the patch version of rails. Our Gemfile.lock has the following entry for rails:

    rails (3.2.11)
By using the '~>' specifier it means that we can easily update our gems patch versions without worrying about API changes between major and minor versions.

You can read: http://gembundler.com/v1.2/gemfile.html to find out more.
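If you want to sanity-check what a given specifier will and won't match, RubyGems exposes the same matching logic directly:

```ruby
require 'rubygems'

# '~> 3.2.8' allows patch-level updates within the 3.2 series only:
# it means >= 3.2.8 and < 3.3.
req = Gem::Requirement.new('~> 3.2.8')

req.satisfied_by?(Gem::Version.new('3.2.11'))  # => true
req.satisfied_by?(Gem::Version.new('3.3.0'))   # => false
```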


For completeness it's also worth mentioning that gem 'rails', '~> 3.2.0' may not necessarily update Rails to the latest version when you run `bundle update rails`. The reason is if there is another gem which requires a gem that does not match the requirement of the latest version of Rails, it will backtrack until it finds a compatible version across the board.

So you will want to double-check the specific version that you end up with. If it doesn't work you probably can fix it with `bundle update rails something_else`, unless you have a true conflict in which case you will have to do some spelunking yourself, and perhaps the Rails hotfix should be in place in your app first.


Thank you, in fact I had the 3.1.2 version specified in my Gemfile. So I changed that to 3.1.10, and now `bundle show rails` shows that version as installed, so I think I'm in the clear.


You may want to change it to a less-specific version so future patches (and they are coming) are less work to apply.


That's exactly what bundle update does.


Only in Gemfile.lock. If you've hard-coded a version in Gemfile, you'll need to update it.

Eg. These will update fine:

    gem "rails"
    gem "rails", "~> 3.2"
This wont:

    gem "rails", "= 3.2.10"


No, bundle update only updates the Gemfile.lock

If your gemfile looks like:

    gem 'rails', '3.2.10'
Then `bundle update` does nothing.


bundle update activesupport


If there is one thing every black-hat attacker in the world is truly gifted at, it is pivoting from shell access on one server to shell access on every goddam server.

I'm not even a hacker, and I have done this before.


Thanks for this reminder. Yesterday I went through all the apps that matter to me but today I fixed all the apps on the box.


How big of a problem do you think secondary (browser) exploits are going to be? Should users take any browsing precautions?


>* Now would not be the worst time in the world to go to your Twitter config, hit Settings -> Apps, and scrub out all the stuff you don't use.

OT: It'd be nice if Twitter would recognize mass abuse of accounts via an OAuth app and automatically disable that app until action can be taken. (They may do this, I don't know.)


If they did, they might not say so.


I disabled all 3rd party access to my Twitter account. Every single one. Yeah I had to re-enable my Twitter client… worth it.

YOU SHOULD ALSO disable any Oauth access to your Gmail account. If you're like me, you trusted a few "email as a game" or whatever apps to take a peek, but you can't trust them to stay on top of this security flaw.

It's hard as hell to find the instructions in Google's documentation, so here's the account management link:

https://www.google.com/settings/account?hl=en

Click Security, then the bottom option in the main body of the page.




