

Securing a Rails App against Firesheep with HTTPS - jashkenas
http://blog.documentcloud.org/blog/2011/02/https-by-default/

======
erik_landerholm
I would recommend not handling HTTPS in Rails itself. I use nginx to switch
between HTTP and HTTPS when needed.

In your nginx config, you do this for the HTTPS setup:

    # needed for HTTPS
    proxy_set_header X-FORWARDED-PROTO https;

Then, if you want, you can check in Rails that certain controllers received
HTTPS requests.

Using nginx as a proxy means your Rails app only ever deals with plain-text
HTTP.
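The shape of this check can be sketched as a small Rack-style middleware (illustrative only, not CarWoo's actual code): nginx terminates SSL and forwards plain HTTP, setting `X-FORWARDED-PROTO` so the app can still tell which protocol the client originally used.

```ruby
# Sketch: redirect to HTTPS unless nginx says the original request was
# already HTTPS. Names are illustrative; query strings are ignored here.
class RequireHttps
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['HTTP_X_FORWARDED_PROTO'] == 'https'
      @app.call(env)
    else
      host = env['HTTP_HOST'] || env['SERVER_NAME']
      [301, { 'Location' => "https://#{host}#{env['PATH_INFO']}" }, []]
    end
  end
end
```

Because the decision is driven entirely by the header nginx sets, the Rails app itself never has to speak SSL.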

The only issue with all HTTP -> HTTPS transitions is making sure that things
you are storing in the session are also placed in your forms when you go from
an HTTP page to an HTTPS page on form submission. If not, you will lose state
if you are storing sessions in cookies.

The part about relative URLs is right. Using CDNs etc. makes things harder if
they don't support your SSL cert.

At CarWoo!, once you log in, we do everything behind HTTPS. For our user
creation form you can be on an HTTP page, but it submits to HTTPS.

We created partials that represent our sign up forms (we have many kinds of
landing pages) that automatically take important things out of the session and
put them in the form if needed. These things are not security risks, but are
important for the correct functionality of the app.
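A helper along these lines might look like the following (purely illustrative; the helper name and session keys are assumptions, not CarWoo's actual partial): selected session values are copied into hidden fields so they survive the hop from an HTTP page to an HTTPS form target.

```ruby
require 'cgi'

# Hypothetical helper sketching the idea above: emit a hidden <input>
# for each named session value, escaped so it is safe to embed in HTML.
def session_hidden_fields(session, keys)
  keys.map { |key|
    value = session[key]
    next nil if value.nil?
    name = CGI.escapeHTML(key.to_s)
    %(<input type="hidden" name="#{name}" value="#{CGI.escapeHTML(value.to_s)}" />)
  }.compact.join("\n")
end
```

On the HTTPS side, the form handler reads these params back instead of relying on the session being present.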

~~~
mukyu
https targets for forms on http pages are a security anti-pattern. The
Tunisian government keylogged people's Facebook details because of it.

~~~
tptacek
This is exactly correct. Security teams at our clients routinely "flunk"
applications because they fail to set the "Secure" flag on cookies; this flaw
is even worse than that one.

------
agl
The initial redirect from HTTP to HTTPS is a weak spot. Most users will just
type "example.com" into the address bar, and an active attacker can strip
HTTPS from there. There'll be no padlock icon, but how many of your users are
really going to notice?

See <http://dev.chromium.org/sts> to fix this.
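The STS fix amounts to one response header on HTTPS responses: supporting browsers remember it and rewrite future `http://` requests for the host to `https://` before they leave the machine, closing the stripping window. A minimal Rack-style sketch (the middleware shape is illustrative, not from the article):

```ruby
# Sketch: attach Strict-Transport-Security to HTTPS responses only;
# browsers are required to ignore the header when received over HTTP.
class StrictTransportSecurity
  ONE_YEAR = 365 * 24 * 60 * 60

  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if env['HTTP_X_FORWARDED_PROTO'] == 'https' || env['rack.url_scheme'] == 'https'
      headers['Strict-Transport-Security'] = "max-age=#{ONE_YEAR}"
    end
    [status, headers, body]
  end
end
```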

~~~
jashkenas
I dig the STS header, but isn't fixing the problem for only a small-ish
percentage of browsers ... not really fixing the problem?

~~~
pilif
No, it's not. But unless clients know that a site should be accessed only over
SSL, there is no fix. Chrome isn't the only browser to support this: AFAIK,
NoScript for Firefox also adds support, and once this becomes widespread, more
browsers might follow.

Fixing the problem for some is certainly better than not fixing it and waiting
for the perfect solution that might never appear.

Especially if the fix is this easy to implement.

~~~
briansmith
Firefox 4 betas have been shipping with STS support since June. See
<http://hg.mozilla.org/mozilla-central/rev/5dc3c2d2dd4f>

------
strooltz
I'm implementing a similar solution. My only concern is that the SSL handshake
takes anywhere between 600 and 1000ms - far too long as far as I'm concerned.
Does anyone have a suggestion for improving this?

My setup: 1) Linode $20/mo REE box (will bump up in production), 2) nginx, 3)
RoR 3.0.3, 4) SSL through GeoTrust. It is a "chained" cert, but I don't
believe this is the bottleneck.

thanks in advance...

~~~
WALoeIII
Your chained cert might actually be the bottleneck if the total data exceeds
4K and the user has to do a second round trip to ACK the cert.

<http://journal.paul.querna.org/articles/2010/07/10/overclocking-mod_ssl/>

Basically, unless you are certain you need 4096-bit security, use a 2048-bit
key (1024 is not secure anymore) and only include the minimum number of
intermediate certs you can get away with. OCSP stapling doesn't seem worth it
if it causes you to overflow the initial TCP window.
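A quick local sanity check of the size budget (the self-signed cert here is just to illustrate the measurement; substitute your real chain files):

```shell
# Generate a 2048-bit key and a throwaway self-signed cert, purely to
# demonstrate measuring certificate size.
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt \
  -nodes -days 365 -subj "/CN=example.com"

# Every certificate sent in the handshake counts toward the same budget;
# the total should stay comfortably under ~4KB to fit the initial TCP window.
wc -c server.crt
```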

~~~
strooltz
This is a really great piece of advice. I'll check this out.

------
jashkenas
If anyone sees any gaping holes in this scheme, or has a more elegant solution
to the HTTP-for-anonymous/HTTPS-for-logged-in-users pattern, I'd love to hear
the critique.

~~~
EricButler
As a commenter mentioned above, the weak point here is the fact that you only
enforce HTTPS after the user has logged in. Since your login page is served
insecurely, an active attacker could modify it to steal passwords. A well
known tool to do this is SSLStrip:
<http://www.thoughtcrime.org/software/sslstrip/>

The Tunisian government recently took advantage of Facebook's insecure login
page to steal passwords for _everyone in the country_:
<http://blog.jgc.org/2011/01/code-injected-to-steal-passwords-in.html>

Protocol-relative URLs may be useful while migrating to HTTPS, but should not
be needed long-term. All content should only be served securely.

Once a site is fully functional over HTTPS, adding the HSTS header is an
important last step to further mitigate active attacks.
<http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security>
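In nginx terms, the end state described above boils down to something like this (a sketch; the cert paths and backend port are placeholders):

```nginx
# Sketch: redirect every plain-HTTP request, serve everything over
# HTTPS, and mark the site with HSTS. Paths/ports are placeholders.
server {
  listen 80;
  server_name example.com;
  rewrite ^ https://$host$request_uri? permanent;
}

server {
  listen 443;
  ssl on;
  ssl_certificate     /etc/nginx/ssl/example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/example.com.key;
  server_name example.com;

  # supporting browsers will refuse plain HTTP for this host from now on
  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    proxy_set_header X-FORWARDED-PROTO https;
    proxy_pass http://127.0.0.1:3000;
  }
}
```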

~~~
jashkenas
Thanks for the example. Looks like we'll give up attempting to serve any HTTP
pages in the future, and do as you recommend -- I don't see any way of getting
around the login-form-phishing hack you describe.

