
Rails' Insecure Defaults - sudonim
http://blog.codeclimate.com/blog/2013/03/27/rails-insecure-defaults/
======
judofyr
> The fix: Rails 4 changed the default session store to be encrypted. Users
> can no longer decode the contents of the session without the decryption key,
> which is not available on the client side.

Seriously? So they've added a whole lot more complexity to the session layer
just because people abused it to store "secret" data? Newsflash: If you don't
want the client to be able to read the data, DON'T SEND IT TO THE CLIENT AT
ALL. Way more secure than encrypting it.
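For what it's worth, Rails already supports exactly that: a server-side session store. A minimal sketch (the app name `MyApp` is hypothetical; in Rails 3 the ActiveRecord store ships with the framework, in Rails 4 it moved to the `activerecord-session_store` gem):

```ruby
# config/initializers/session_store.rb
# Store session data server-side; the client only ever receives an
# opaque session ID, so there is nothing to decode or encrypt.
MyApp::Application.config.session_store :active_record_store
```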

~~~
mjhoy
It isn’t just storing secret data that’s a potential problem:

 _...even if a user’s session does not contain sensitive data, it can still
create risk. By decoding the session data, an attacker can gain useful
information about the internals of the application that can be leveraged in an
attack. For example, it may be possible to understand which authentication
system is in use (Authlogic, Devise, etc.)._
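To make the fingerprinting concrete, here is a simplified sketch of the Rails 3 cookie format: `base64(Marshal.dump(session)) + "--" + HMAC`. The HMAC prevents tampering, but reading requires no secret at all (the session contents below, including the Devise/Warden key, are illustrative):

```ruby
require "base64"
require "openssl"

# Build a cookie roughly the way Rails 3's cookie store does
# (real cookies are also URL-escaped on the wire):
session = {
  "session_id"           => "abc123",
  "user_id"              => 42,
  "warden.user.user.key" => [[42], "somesalt"],  # key left by Devise/Warden
}
data   = Base64.strict_encode64(Marshal.dump(session))
secret = "server-side-secret"
hmac   = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA1"), secret, data)
cookie = "#{data}--#{hmac}"

# The signature stops forgery, but *reading* needs no key at all:
payload = cookie.split("--").first
decoded = Marshal.load(Base64.decode64(payload))
decoded["user_id"]                    # => 42
decoded.key?("warden.user.user.key")  # => true, i.e. "this app uses Devise"
```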

~~~
judofyr
I'm sure Authlogic/Devise leaks information about itself in other places than
just the session data (e.g. query params, routes).

Besides, this only adds security through obscurity and doesn't actually make
it any more secure.

~~~
mjhoy
Of course, I agree with you that the goal should not be to add sensitive
information to session cookies. But if it's possible to encrypt, and it might
provide an attacker with information, why not? I'm interested in the
trade-offs. Here's the pull request that adds it, I believe:
<https://github.com/rails/rails/pull/8112/commits>

~~~
mnarayan01
The obvious trade-off is that now I can't see what's in the cookie.

~~~
garethadams
You can quite happily set individual cookies[1] in a very similar way to
sessions. You can also output or log the session object on any web page you
want to.

If certain data being user-visible is a feature rather than a side effect, the
session was the wrong place for you to be putting it.

[1]: http://api.rubyonrails.org/classes/ActionDispatch/Cookies.html
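A sketch of what that looks like with the standard `cookies` jar (a fragment from a hypothetical controller action, not runnable on its own):

```ruby
# If a value is *meant* to be user-visible, give it its own cookie
# rather than smuggling it through the session:
cookies[:theme] = "dark"                                       # plainly readable client-side
cookies[:prefs] = { value: "compact", expires: 1.year.from_now }

# The session stays reserved for data the client has no business reading:
session[:user_id] = current_user.id
```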

~~~
mnarayan01
In this case the "I" in question is the user for an arbitrary website.
Obviously this is not a huge issue, but it's worth noting that this will make
many sites a little harder to reverse-engineer. Of course many people might
see this as a plus rather than a minus, so make of it what you will.

------
pjungwir
I'm very glad to hear about the regex exceptions. Ruby's weird behavior for ^
and $ is something I never would have guessed, since Perl, Java, and Python
treat them differently. I bet there are a lot of Rails apps out there with
this insecurity.
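A minimal illustration of the pitfall: in Ruby, ^ and $ match at *line* boundaries, so a validation anchored with them can be bypassed by an embedded newline, while \A and \z anchor the whole string:

```ruby
# A payload with a malicious first line and an innocent-looking second line:
url = "javascript:alert(1)\nhttp://example.com"

url.match?(/^https?:\/\/\S+$/)    # => true  (the second line matches)
url.match?(/\Ahttps?:\/\/\S+\z/)  # => false (\A and \z see the whole string)
```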

I'd also like to see secret tokens extracted from source code into something
like an ENV var. I know a lot of Rails dev shops have a "Rails template" they
use to start new projects, so beyond the GitHub issues the article mentions, I
wonder how many projects have the same copy/pasted secret token.
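The ENV-var version is a one-line change (the app name `MyApp` is hypothetical):

```ruby
# config/initializers/secret_token.rb
# Read the token from the environment so it never lands in version control;
# ENV.fetch fails loudly if the variable is missing.
MyApp::Application.config.secret_token = ENV.fetch("SECRET_TOKEN")
```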

I also appreciate the warnings about off-site redirects and hrefs with
untrusted input.

~~~
gkop
> I wonder how many projects have the same copy/pasted secret token

Worse is the number of apps with the token in public GitHub:

https://github.com/search?q=%27Application.config.secret_token+%3D%27+1+extension%3Arb+path%3Aconfig%2Finitializers&type=Code&ref=searchresults

------
symmetricsaurus
I don't agree that WEBrick's verbose headers are a problem. It shouldn't be
used in production anyway, and during development, getting lots of information
easily is obviously very useful.

~~~
pjungwir
I was surprised they considered this an issue also. Does anyone here run
WEBrick in production?

Similarly for binding on 0.0.0.0. If you're running Passenger or Unicorn, this
shouldn't be a problem. I don't know about other projects like Thin. But
insofar as this is something the core Rails team has control over, we're
talking about just WEBrick again, right?

~~~
InAnEmergency
When the recent RCE exploits came out, the idea of a bunch of people running
Rails dev environments on their laptops on 0.0.0.0:3000 became a lot scarier.
Even if an intruder getting into a corporate network and scanning for Rails
apps sounds unlikely, there are plenty of opportunities on public networks
(e.g. all those Rails hipsters at coffeeshops).

~~~
Xylakant
The nasty thing was that no "getting into a corporate network and scanning"
was required. All you had to manage was to get some browser to send a
maliciously crafted request to a vulnerable app. It's easy to have a snippet
of JavaScript that posts to localhost:3000 in the hope of hitting an app.
Embed that snippet on a forum that lots of Rails developers visit - instant
pwnage. It's spray and pray, granted, but as long as you're not trying to hit
a specific target, that should give you plenty of root shells on developer
machines.

~~~
InAnEmergency
Good point, even better.

------
deedubaya
You can't fool-proof a framework to the point where it's impossible for a
developer to _make_ it insecure.

~~~
pekk
Does that mean you have no choice but to choose less secure defaults over more
secure ones?

~~~
deedubaya
Of course not. Don't get me wrong, I'm all about enhancements in secure
defaults. But do defaults that can be misused to become insecure mean those
defaults are insecure? I don't think so.

Just because you can drive a car on the wrong side of the road doesn't mean it
is unsafe.

~~~
dbpatterson
That analogy doesn't make sense. We are talking about defaults. If the car was
automated, insecure defaults would be a car that automatically drove on the
wrong side of the road, but you could change it to drive on the right side of
the road.

------
martinced
Insecure defaults s^ck.

On a semi-related side note, all the recent Rails exploits prompted a heated
discussion on the Clojure devs group/mailing list (the developers developing
Clojure itself) about an insecure Clojure default value.

Some devs realized that, using exploits similar to the Ruby
YAML-deserialization ones, rogue code could be run on a Clojure webapp server
if certain functions were used... And the argument used to force the
benevolent dictator to act was precisely that the Ruby issue had basically
turned into a gigantic, endless mess of compromised systems and patches.
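For reference, the Ruby side of that parallel looks like this (a minimal sketch using modern Psych's `unsafe_load`/`safe_load` naming; `AuditLog` is a harmless stand-in class, whereas the real exploits targeted classes whose instantiation or later use had dangerous side effects):

```ruby
require "yaml"

# A benign stand-in for whatever class an attacker might name.
class AuditLog
  attr_accessor :path
end

# Attacker-controlled YAML naming an arbitrary Ruby class:
payload = "--- !ruby/object:AuditLog\npath: /etc/passwd\n"

# Unsafe parsing happily instantiates whatever class the document names:
obj = YAML.unsafe_load(payload)
obj.class  # => AuditLog
obj.path   # => "/etc/passwd"

# Safe parsing rejects classes that weren't explicitly permitted:
begin
  YAML.safe_load(payload)
rescue Psych::DisallowedClass
  # e.g. "Tried to load unspecified class: AuditLog"
end
```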

In Lisps it's all too easy (which is good) to read data and to eval it by
mistake (which is bad): Clojure by default ships with read-eval set to true,
which is maddening. These aren't safe defaults, and several webapps are
vulnerable. Sadly, as I understand it, that default behavior is so relied
upon by everything in the Clojure ecosystem that Rich Hickey decided not to
change it to false.

Instead, Clojure ships with insecure defaults and big fat warnings in the
documentation saying: _"You should not use this function, use the EDN
functions instead"_. And it is recommended that everyone set _read-eval_ to
_false_. Still, as far as I understand it, even when read-eval is set to
_false_, some functions still have side effects (in Java land) and could
potentially be used maliciously.

That's the big problem: I don't understand it all because it's super technical
(Rich basically told people on the mailing list they weren't qualified enough
to discuss what the defaults should be). All I know is that I'm supposed to
set read-eval to false and not to use read-string etc., but instead use the
EDN functions.

And then I guess "cross fingers" because "we promise you this way you'll be
secure". I only half-buy it but whatever, I do still love Clojure as of now.

I think it's really sad that security seems to _always_ be an afterthought.
The very rare projects I know of that were conceived with security as the
main goal are OpenBSD and its OpenSSH implementation (which has also made its
way into Linux etc.) and seL4 (a 7000-line microkernel in which hundreds (!)
of bugs have been found and eliminated using a theorem prover).

I can't help but dream of the day when devs take security seriously and ship
with safe defaults.

Probably not this decade, but the situation is getting so messy that I'm
certain something will be done at some point.

And while some here are busy "getting real things done in PHP and Ruby" (which
is great, as long as you're not diminishing others), I want to thank all the
devs working on thorny security issues and thinking about security from the
start - like all those Ph.D.s working on theorem provers etc.

~~~
lucian1900
Calls to read and eval stand out quite clearly, so I think it's nowhere near
as bad a problem.

~~~
awj
They stand out clearly until I wrap them in a function, or someone else does,
and then that function gets wrapped...

If I had bothered to look at the actual YAML deserialization code, it would
have immediately looked unsafe. Unsafe code standing out is necessary but not
sufficient. It should also be difficult for something to wrap it without you
having a clue that it's going on.

~~~
BoyWizard
But it's still very easy to grep through your codebase and find them, unlike
some other issues which can be very context sensitive and far harder to find
quickly.

------
static_typed
The article is good, to the point, highlighting tangible points well. However,
I do worry it is wasted a little. The bulk of Ruby on Fails developers are too
enamoured with Magic and the promise of the five-minute-blog to really think
about security or software engineering.

