Rails' Insecure Defaults (codeclimate.com)
226 points by sudonim on March 27, 2013 | hide | past | favorite | 50 comments

> The fix: Rails 4 changed the default session store to be encrypted. Users can no longer decode the contents of the session without the decryption key, which is not available on the client side.

Seriously? So they've added a whole lot more complexity to the session layer just because people abused it to store "secret" data? Newsflash: If you don't want the client to be able to read the data, DON'T SEND IT TO THE CLIENT AT ALL. Way more secure than encrypting it.

It isn’t just storing secret data that’s a potential problem:

...even if a user’s session does not contain sensitive data, it can still create risk. By decoding the session data, an attacker can gain useful information about the internals of the application that can be leveraged in an attack. For example, it may be possible to understand which authentication system is in use (Authlogic, Devise, etc.).

I'm sure Authlogic/Devise leaks information about itself in other places than just the session data (e.g. query params, routes).
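For context, the pre-Rails-4 cookie store is just Base64-encoded Marshal data with an HMAC signature appended, so any client can inspect it without the secret. A minimal sketch with fabricated session contents and a fake signature (the `warden.user.*` key is one way Devise gives itself away):

```ruby
require 'base64'

# Build a Rails-3-style signed session cookie by hand (signature faked):
session = { "session_id" => "abc123", "warden.user.user.key" => [[42], "..."] }
cookie  = Base64.strict_encode64(Marshal.dump(session)) + "--" + "fakehmac"

# The signature only prevents tampering; reading needs no secret at all:
payload, _signature = cookie.split("--")
p Marshal.load(Base64.decode64(payload))
```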

Besides, this only adds security through obscurity and doesn't actually make it any more secure.

Of course, I agree with you that the goal should not be to add sensitive information to session cookies. But if it's possible to encrypt, and it might provide an attacker with information, why not? I'm interested in the trade-offs. Here's the pull that adds it, I believe: https://github.com/rails/rails/pull/8112/commits

The obvious trade-off is that now I can't see what's in the cookie.

You can quite happily set individual cookies[1] in a very similar way to sessions. You can also output or log the session object on any web page you want to.

If certain data being user-visible is a feature rather than a side effect, the session was the wrong place for you to be putting it.

[1]: http://api.rubyonrails.org/classes/ActionDispatch/Cookies.ht...
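For illustration, the cookie jar linked in [1] distinguishes these cases inside a controller (a sketch; the cookie names and values are made up, and `cookies.encrypted` is the Rails 4 addition):

```ruby
cookies[:theme]          = "dark"   # plain: client can read and modify
cookies.signed[:user_id] = 42       # readable by the client, but tamper-evident
cookies.encrypted[:plan] = "beta"   # Rails 4+: opaque to the client entirely
```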

In this case the "I" in question is the user for an arbitrary website. Obviously this is not a huge issue, but it's worth noting that this will make many sites a little harder to reverse-engineer. Of course many people might see this as a plus rather than a minus, so make of it what you will.

Is that actually a concern? Sounds like security through obscurity to me. What frameworks besides Rails encrypt session data?

ASP.NET optionally encrypts and signs the ViewState, although there the goal actually is to allow storing sensitive information in the session.

Session (at least ASP.NET session) and ViewState are pretty different, though. ASP.NET session is on the server, and looked up via a key from the session cookie (sent to the client). ViewState is sent to the client to allow...well, to allow ASP.NET pages to be really horrible and heavy, but also to allow it to know what the 'previous' value for controls was so it can raise change events etc. Serializing ASP.NET session out to the client is definitely weird, and would not be a normal thing to do. Is the Rails 'session' being discussed more analogous to ASP.NET ViewState?

Nope, I think the Rails session sounds more like ASP.NET session. At any rate, the Rails session is like the PHP session (I am not familiar with ASP.NET).

But no, the default Rails implementation does not store the session on the server with only a key in a cookie. You can optionally do that, storing the actual session data in the RDBMS or other server-side places, but the default Rails implementation actually stores the session info itself in the cookie.
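For completeness, opting out of the cookie store is a one-line config change (a sketch; the app name is made up, and the ActiveRecord store lives in a separate gem in newer Rails):

```ruby
# config/initializers/session_store.rb -- hypothetical app name
MyApp::Application.config.session_store :active_record_store   # server-side
# vs. the default:
# MyApp::Application.config.session_store :cookie_store, key: '_myapp_session'
```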

One of the benefits of this is that when you've scaled out/load-balanced your app servers to multiple instances, they don't need to share a session store somewhere; they each get the entire session via the cookie. There are also disadvantages, including with synchronization.

But also including that your session data is now visible to the client when it wasn't before. So, yeah, encrypting the data is one way to change the trade-off calculation and make it work for more (but not all) apps/use cases, why not?

It actually doesn't increase the complexity much. If you're storing session data in cookies to begin with, you already needed to cryptographically sign them to avoid tampering. If you're involving crypto and a secret key anyway, you might as well (as a configurable option, sure, and as the default configuration) tell the crypto library to encrypt+sign/decrypt+verify instead of just sign/verify.
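The point is easy to see with plain OpenSSL from the standard library. A rough sketch of the two schemes (not Rails' exact wire format):

```ruby
require 'openssl'
require 'base64'

secret = OpenSSL::Random.random_bytes(32)
data   = Marshal.dump({ "user_id" => 42 })

# Sign-only (Rails 3 cookie store): tamper-evident, but client-readable.
signed = Base64.strict_encode64(data) + "--" +
         OpenSSL::HMAC.hexdigest("SHA1", secret, data)

# Encrypt-then-sign (roughly the Rails 4 default): same secret-key
# machinery, one extra step, and the payload becomes opaque to the client.
cipher     = OpenSSL::Cipher.new("aes-256-cbc").encrypt
cipher.key = secret
iv         = cipher.random_iv
ciphertext = iv + cipher.update(data) + cipher.final
encrypted  = Base64.strict_encode64(ciphertext) + "--" +
             OpenSSL::HMAC.hexdigest("SHA1", secret, ciphertext)
```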

Finding out which library is used would not be a big deal, but there's also no reason why cookie keys would have to reveal that either.

Signed cookies with app name as key should be good enough for all purposes.

But then how will the users provide us with unlimited cloud storage for stateless horizontal scaling?!

I do believe I was downvoted solely on the basis of combining a question mark and exclamation point.

Dude, if you're going to combine a question mark and an exclamation point, do it RIGHT.

(interrobang ftw)

I'm very glad to hear about the regex issues. Ruby's weird behavior for ^ and $ is something I never would have guessed, since Perl, Java, and Python treat them differently. I bet there are a lot of Rails apps out there with this insecurity.
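The behavior in question: Ruby's ^ and $ match at line boundaries rather than string boundaries, so a format validation can be bypassed with an embedded newline. A quick sketch:

```ruby
input = "legit\n<script>alert(1)</script>"

# Looks anchored, but ^/$ match around the embedded newline:
puts !!(input =~ /^[a-z]+$/)    # true -- "legit" satisfies it

# \A and \z anchor to the whole string, which is what validations want:
puts !!(input =~ /\A[a-z]+\z/)  # false
```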

I'd also like to see secret tokens extracted from source code to something like an ENV var. I know a lot of Rails dev shops have a "Rails template" they use to start new projects, so beyond the GitHub issues the article mentions, I wonder how many projects have the same copy/pasted secret token.

I also appreciate the warnings about off-site redirects and hrefs with untrusted input.
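Moving the token out of source is a small change (a sketch; the app name and ENV variable name are made up):

```ruby
# config/initializers/secret_token.rb -- read the token from the environment
# instead of committing a literal string (hypothetical app/variable names):
MyApp::Application.config.secret_token = ENV.fetch("SECRET_TOKEN")
```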

> I wonder how many projects have the same copy/pasted secret token

Worse is the number of apps with the token in public GitHub:


People still screw up by failing to anchor expressions in Perl and Python, however.

I don't agree that WEBrick's verbose headers are a problem. It should not be used in production anyway, and during development, getting lots of information easily is obviously very useful.

I was surprised they considered this an issue also. Does anyone here run WEBrick in production?

Similarly for the default binding. If you're running Passenger or Unicorn, this shouldn't be a problem. I don't know about other projects like Thin. But insofar as this is something the core Rails team has control over, we're talking about just WEBrick again, right?

Anyone who does not specify a web server in the Gemfile and deploys to Heroku is running WEBrick in production.

Really? I can't find anything one way or another on the Cedar stack, but on the old Bamboo stack they used Thin.

Cedar's default is WEBrick - I only know this because I had to switch to Thin (and then later Unicorn) to deal with some issues on an app on the Cedar stack.

When the recent RCE exploits came out, the idea of a bunch of people running Rails dev environments on their laptops became a lot scarier. Even if an intruder getting into a corporate network and scanning for Rails apps sounds unlikely, there are plenty of opportunities on public networks (e.g. all those Rails hipsters at coffeeshops).

The nasty thing was that no "getting into corporate network and scanning" was required. All you had to manage was to get some browser to send a maliciously crafted request to a vulnerable app. It's easy to just have a snippet of javascript code that just posts to localhost:3000 in the hope of hitting an app. Embed that snippet on a forum that lots of rails developers visit - instant pwnage. It's spray and pray, granted, but as long as you're not trying to hit a specific target that should give you plenty of root shells on developer machines.

Good point, even better.

People developing apps in coffeeshops is a serious threat, you're right.

Edit: Sorry if anyone thought this was supposed to be sarcastic; it wasn't.

I will admit to accidentally running it in production when I forgot to put a different server in the Gemfile. I was being pretty thick that day.

pretty thick

and now, Thin?

I normally use Puma these days, as I've found it gives me the best performance for the least amount of hassle.

And I finally got that pun. Only took me like 8 hours.

Even if used in production, isn't hiding the headers security through obscurity?

Maybe you should present headers from some older, more insecure server? Then any script using the Server headers to target specific vulnerabilities will be completely foiled!
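Joke or not, swapping the banner is trivial with a tiny Rack-style middleware (a sketch; the class name and fake banner are made up):

```ruby
# Rack-style middleware that replaces whatever the app server advertises:
class FakeServerHeader
  def initialize(app, banner = "Apache/1.3.27 (Unix)")
    @app    = app
    @banner = banner
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers["Server"] = @banner
    [status, headers, body]
  end
end

# A stand-in app that leaks a WEBrick-ish header:
app = ->(env) { [200, { "Server" => "WEBrick/1.3.1" }, ["ok"]] }
_status, headers, _body = FakeServerHeader.new(app).call({})
puts headers["Server"]  # "Apache/1.3.27 (Unix)"
```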

Precise version information (in production) is probably not a good idea; obscuring the fact that you're insecure has obvious upsides.

You can't foolproof a framework to the point where it's impossible for a developer to make it insecure.

Does that mean you have no choice but to choose less secure defaults over more secure ones?

Of course not. Don't get me wrong, I'm all about enhancements in secure defaults. But do defaults that can be misused to be insecure mean those defaults are insecure? I don't think so.

Just because you can drive a car on the wrong side of the road doesn't mean it is unsafe.

That analogy doesn't make sense. We are talking about defaults. If the car was automated, insecure defaults would be a car that automatically drove on the wrong side of the road, but you could change it to drive on the right side of the road.

nice analogy

Insecure defaults s^ck.

On a semi-related sidenote, all the recent Rails exploits prompted a heated discussion on the Clojure devs' (the developers developing Clojure itself) group/mailing list about an insecure Clojure default value.

Some devs realized that, using exploits similar to the Ruby YAML deserialization ones, rogue code could be run on a Clojure webapp server if certain functions were used... And the argument used to force the benevolent dictator to act was precisely that the Ruby issue basically turned into a gigantic endless mess of compromised systems and patches.
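The Ruby side of that comparison is easy to demonstrate with Psych (a sketch; `Gadget` is a stand-in class, and `YAML.unsafe_load` requires a reasonably recent Psych):

```ruby
require 'yaml'

class Gadget
  attr_reader :cmd
end

# The YAML tag below instructs the loader to instantiate an arbitrary class:
doc = "--- !ruby/object:Gadget\ncmd: rm -rf /\n"

g = YAML.unsafe_load(doc)   # happily builds a Gadget instance
puts g.cmd                  # the attacker-chosen payload string

begin
  YAML.safe_load(doc)       # refuses classes that aren't explicitly permitted
rescue Psych::DisallowedClass => e
  puts "blocked: #{e.message}"
end
```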

In Lisps it's all too easy to read data (which is good) and to eval it by mistake (which is bad): Clojure by default ships with read-eval set to true, which is maddening. These aren't safe defaults, and several webapps are vulnerable. Sadly, as I understand it, that default behavior is so relied upon by everything in the Clojure ecosystem that Rich Hickey decided not to change it to false.

Instead, Clojure ships with insecure defaults and big fat warnings in the documentation saying: "You should not use this function; use the EDN functions instead". And it is recommended that everyone set read-eval to false. Still, as far as I understand it, even when read-eval is set to false, some functions still have side effects (in Java land) and could potentially be used maliciously.

That's the big problem: I don't understand it all because it's super technical (Rich basically told people on the mailing list they weren't qualified enough to discuss what the defaults should be). All I know is that I'm supposed to set read-eval to false and also not to use read-string etc., but instead use the EDN functions.

And then I guess "cross fingers" because "we promise you this way you'll be secure". I only half-buy it but whatever, I do still love Clojure as of now.

I think it's really sad that security seems to always be an afterthought. The very rare projects I know of which were conceived with security as the main reason are OpenBSD and its OpenSSH implementation (which is also in Linux etc.) and esL4 (a 7000-line microkernel in which hundreds (!) of bugs have been found and eliminated using a theorem prover).

I can't help but dream of the day when devs are going to take security seriously and ship with safe defaults.

Probably not for this decade but the situation is getting so messy that I'm certain something shall be done at some point.

And while some here are busy "getting real things done in PHP and Ruby" (which is great as long as you're not diminishing others), I want to thank all the devs working on thorny security issues and thinking about security from the start. Like all those Ph.D.s working on theorem provers etc.

For web development, the two major Haskell frameworks (Yesod and Snap) are both designed with security as a major concern, and in general Haskell libraries seem to take this stuff pretty seriously (I guess people who like Haskell like safety, _and_ eval is not something that's too easy).

As a more extreme example, this language/framework is designed to statically guarantee that you can't suffer from most web vulnerabilities - http://www.impredicative.com/ur/ - but if you want any complicated libraries, they need to be C (or linked through a C shim), which may not be the most desirable.

> esL4

It's seL4 (secure embedded L4). I'm just pointing it out in case someone else is interested in "microkernels that are formally verified", like a friend of mine who is obsessed with seL4. If you google "esL4" you'll get completely different results, so I thought I'd point that out :)

Calls to read and eval stand out quite clearly, so I think it's nowhere near as bad a problem.

They stand out clearly until I wrap them in a function, or someone else does, and then that function gets wrapped...

If I had bothered to look at the actual YAML deserialization code, it would have immediately looked unsafe. Unsafe code standing out is necessary but not sufficient. It should also be difficult for something to wrap it without you having a clue that it's going on.

But it's still very easy to grep through your codebase and find them, unlike some other issues which can be very context sensitive and far harder to find quickly.

The problem is that if you want security to be in the product from day 1, you need someone trained in security from day 1. The only solution that I see working is that we make standard libraries secure and provide easy-to-understand, high-level libraries for all things crypto.

I don't have much of a Lisp background. Is the reader safe in other Lisps?

But then, in CL nobody assumes (read) is safe.

The article is good and to the point, highlighting tangible issues well. However, I do worry it is wasted a little. The bulk of Ruby on Fails developers are too enamoured with Magic and the promise of the five-minute blog to really think about security or software engineering.
