
EFF: How to Deploy HTTPS Correctly - roder
https://www.eff.org/pages/how-deploy-https-correctly
======
jtchang
Yes, we already know HTTPS is secure.

I am sick of the articles saying enabling HTTPS is not going to impact
performance "much".

Even without pulling out JMeter, Apache Bench, or LoadRunner, I can tell you
that just hitting F5 on an HTTPS page makes it take longer. It's the
responsiveness. I don't really care if it takes 10% more CPU; CPU cycles are
getting cheaper and cheaper. I do care a lot that it takes 50ms more.

Yes, I know you can tweak things so that the HTTPS connection stays open and
doesn't have to handshake every time. But really, is there any way to get that
handshake down to something acceptable?

~~~
watchandwait
Actually that's really interesting. How can you keep the https connection
open?

~~~
maggit
I don't know HTTPS in detail, but it should just be HTTP over SSL. That means
a regular "Connection: keep-alive" header should work just as well over HTTPS
as over HTTP. In both cases it keeps the connection alive, allowing several
requests one after another.
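A minimal sketch of that reuse with Python's stdlib, demonstrated over plain HTTP against a throwaway local server (the same keep-alive mechanics apply once the socket is wrapped in TLS; the handler and port are made up for the demo):

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to persistent connections
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server on an ephemeral port, running in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first = conn.getresponse(); first.read()
sock_a = conn.sock                  # socket after the first request
conn.request("GET", "/")            # second request on the same connection
second = conn.getresponse(); second.read()
sock_b = conn.sock
print(sock_a is sock_b)             # True: no new connection (or handshake) needed
conn.close()
server.shutdown()
```

With HTTPS the win is larger, since a reused connection skips the TLS handshake as well as the TCP setup.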

------
bluedevil2k
They are correct that everyone should be using HTTPS. It costs as little as
$12/year for a certificate, there are tons of tutorials on how to set it up in
Apache, and there are only a couple of hundred sites in the world that need to
be concerned about the performance hit (no, I doubt your blog is on that list).

~~~
btmorex
https incurs a heavy performance penalty for everyone. Server load isn't the
problem; a large increase in latency for every single connection is.

Your typical blog will probably suffer even more than major sites from
enabling https globally, because not as much effort has been put into
combining js and css files and spriting images.

~~~
mootothemax
I wish I could uprate you with more than one vote.

Seriously, https performance sucks from start to finish. If the average user
doesn't care about https, but _does_ care about performance, who are they
going to go with - you or your faster competitor?

~~~
cookiecaper
That probably depends on whether their account gets hijacked at your site or
at the site of your faster competitor.

I agree that anonymous or non-logged-in activity has no need of HTTPS, but
anything that's transferring cookies, passwords, or other important user or
session information should be HTTPS.
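The cookie part of that is enforceable server-side. A small sketch with Python's stdlib (the cookie name and value are illustrative): the Secure flag stops the browser from ever sending the cookie over plain HTTP, and HttpOnly hides it from page scripts.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"            # illustrative session token
cookie["session"]["secure"] = True      # only ever sent over HTTPS
cookie["session"]["httponly"] = True    # invisible to document.cookie
header = cookie.output()                # the Set-Cookie header to emit
print(header)
```

This doesn't make the page itself HTTPS, but it keeps the session token off the wire in the clear even if some requests still go over HTTP.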

------
gokhan
Correct me if I'm wrong, but one problem is hosting multiple domains on a
single machine. Each SSL cert needs its own IP address, and if you don't have
a wildcard SSL cert, that means a separate IP for every single domain and
sub-domain.

Separating requests by Host header (name-based virtual hosting) is not
supported over https, because the Host header is only readable after the TLS
handshake has already selected a certificate. Startups discussed on HN will
most probably own the machine, but it's problematic for small sites.

~~~
tbrownaw
[http://en.wikipedia.org/wiki/Server_Name_Indication#Client_s...](http://en.wikipedia.org/wiki/Server_Name_Indication#Client_side)

It's possible, but needs XP to die before it's practical (IE on Windows XP
doesn't support SNI).
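On the client side, Python's ssl module shows where SNI plugs in; a small sketch (the hostname in the comment is illustrative, not a real endpoint):

```python
import ssl

# Whether the linked OpenSSL build supports SNI at all:
print(ssl.HAS_SNI)  # True on any recent build

ctx = ssl.create_default_context()
# Passing server_hostname when wrapping a client socket is what puts the
# SNI extension into the ClientHello, letting one IP address serve
# certificates for many names:
#   tls = ctx.wrap_socket(tcp_sock, server_hostname="secure.example.com")
```

Clients that omit the extension (like IE on XP) leave the server with only the IP address to pick a certificate by, which is the whole problem.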

~~~
timmorgan
I wonder if one could practically implement a site with TLS+SNI for browsers
that support it, while letting browsers on XP and others without SNI support
hit the site over unencrypted HTTP.

Of course, that's not appropriate for all sites, but it's definitely a step
forward for sites holding out due to incomplete support.

------
Revisor
How would you handle mixed content, e.g. on a secure forum where some content
sent by users contains non-secure elements like images?

~~~
Semiapies
Good question. If you mean specifically getting around the "some objects on
this page are insecure" popups, a trick I've seen work is to cut off the
protocol portion of remote URLs, so they look like
"//www.example.com/to/file.jpg".
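Mechanically, that trick is just a rewrite of the URL scheme. A rough sketch (a real rewriter should use an HTML parser rather than a regex, and the function name is made up):

```python
import re

def make_protocol_relative(html: str) -> str:
    """Strip the scheme from absolute URLs in src/href attributes so the
    browser fetches them over whatever scheme the page itself used."""
    return re.sub(r'(src|href)="https?://', r'\1="//', html)

page = '<img src="http://www.example.com/to/file.jpg">'
print(make_protocol_relative(page))
# -> <img src="//www.example.com/to/file.jpg">
```

As the reply below notes, this only helps when the remote host can actually serve the resource over both schemes.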

~~~
mrkurt
That's just a link that's relative to the protocol, the same way /blah is
relative to the protocol + host. If you load an https page with inline image
links like that, it'll attempt to fetch those images over https as well.

This is fine if you control the source and can serve both http and https. It
doesn't help for content inlined by users, though; it'll just result in broken
images.

------
xiaomai
One objection to using HTTPS for everything that I occasionally see is the
loading of 3rd party scripts (Facebook, ad networks, etc.). The premise of
the argument is that these 3rd party scripts are only accessible via HTTP, so
users of old versions of IE would get scary popup warnings.

The other interesting item from this article was that you should always load
scripts via HTTPS, because a script tampered with in transit would have full
control over the DOM. It seems like there is a need for some facility in
browsers to let pages delegate limited privileges to 3rd party scripts (maybe
only able to read/write in a certain div or something?), so that users can
still be confident that their connection is secure.

~~~
bluesmoon
If you have third party scripts on your page, you've already given away
control of your page. A compromise on the third party's servers makes your
site vulnerable as well. The safe thing to do is to fetch third party content
server-side, massage it and then pass it on to your front end.

Naturally, this requires more work, and you'd probably end up getting rate
limited since all API requests now come from a single IP (your server's)
rather than each user's IP.
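The fetch-and-massage approach could be sketched like this, with a whitelist of trusted fields (the URL, field names, and helper are all hypothetical; a real proxy would also add caching and error handling):

```python
import json
from urllib.request import urlopen

def proxy_widget(url, fetch=lambda u: urlopen(u, timeout=5).read()):
    """Fetch third-party JSON server-side, keep only whitelisted fields,
    and re-serialize it before handing it to the front end."""
    raw = json.loads(fetch(url))
    safe = {"name": str(raw.get("name", "")), "count": int(raw.get("count", 0))}
    return json.dumps(safe)

# Offline demo with a stubbed fetcher; note the untrusted field is dropped.
stub = lambda u: b'{"name": "widget", "count": "3", "tracker": "evil.js"}'
print(proxy_widget("https://api.example.com/widget", fetch=stub))
# -> {"name": "widget", "count": 3}
```

Because the front end only ever sees your re-serialized output over your own HTTPS connection, a compromise of the third party can't inject script into your page.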

Alternatively, you can iframe third-party scripts (or, depending on privacy
requirements, you may need to double-iframe them), but this means the script
cannot directly interact with content on your page.

Trade-offs everywhere, which is why the architect of your system really needs
to know what he or she is doing.

------
caf
It'd be nice if HSTS also allowed you to limit which root CAs are allowed to
sign your site's certificate.

------
BCGC
This does not fly with our BAFHs. They need to know the content :)

It would be nice if we could have integrity and authenticity without
confidentiality.

