EFF: How to Deploy HTTPS Correctly (eff.org)
167 points by roder on Nov 16, 2010 | 43 comments

Yes, we already know HTTPS is secure.

I am sick of articles saying that enabling HTTPS is not going to impact performance "much".

Even without pulling out JMeter, Apache Bench, or LoadRunner, I can tell you that just hitting F5 on an HTTPS page makes it take longer. It's the responsiveness. I don't really care if it takes 10% more CPU; CPU cycles are getting cheaper and cheaper. I do care a lot that it takes 50ms more.

Yes, I know you can tweak things so that the HTTPS connection stays open and doesn't have to handshake every time. But really, is there any way to get that handshake down to something acceptable?

Actually, that's really interesting. How can you keep the HTTPS connection open?

I don't know HTTPS in depth, but it should be just HTTP over SSL. That means a regular "Connection: keep-alive" header should work just as well over HTTPS as over HTTP. In both cases it keeps the connection alive, allowing several requests one after another.
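
For the curious, a rough sketch of the two knobs involved, assuming nginx (the directives are real nginx, the values are just illustrative): keep-alive holds the connection open between requests, and the session cache lets a returning client resume without a full handshake.

    # illustrative values only
    http {
        keepalive_timeout    65;                # hold idle (HTTPS) connections open
        ssl_session_cache    shared:SSL:10m;    # resume sessions without a full handshake
        ssl_session_timeout  10m;               # how long a cached session stays valid
    }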

They are correct that everyone should be using HTTPS. It costs as little as $12/year for the certificate, there are tons of tutorials on how to set it up in Apache, and there are only a couple hundred sites in the world that need to be concerned about the performance hit (no, I doubt your blog is on that list).

https incurs a heavy performance penalty for everyone. Server load isn't the problem; a large increase in latency for every single connection is.

Your typical blog will probably suffer even more than the major sites from enabling HTTPS globally, because not as much effort has been put into combining JS and CSS files and spriting images.

I wish I could upvote you more than once.

Seriously, https performance sucks from start to finish. If the average user doesn't care about https, but does care about performance, who are they going to go with - you or your faster competitor?

Probably depends on whether their account gets hijacked at your site or at the site of your faster competitor.

I agree that anonymous or non-logged-in activity has no need of HTTPS, but anything that's transferring cookies, passwords, or other important user or session information should be HTTPS.
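
And if you're doing that, mark the session cookie Secure on the server side so it's never sent in the clear. A made-up example (cookie name and value are placeholders):

    Set-Cookie: sessionid=abc123; Secure; HttpOnly; Path=/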

Google is currently working on something called "False Start", which cuts a round trip from the SSL handshake to reduce latency. I believe there is already code in Chrome for this.

The main drawback is the inability of intermediate parties/networks to perform caching.

You can't cache something that's encrypted!

Is there a good way to set up a Varnish or Squid-like caching proxy in front of HTTPS, or is that by design impossible? My fairly small sites don't generally have performance problems, but on the occasions that they get Slashdotted or on the front page of Reddit, the caching sure helps keep things moving.

Edit: It looks like the best way might be to terminate SSL at nginx acting as a reverse proxy on my side, and then use non-SSL internally? Not sure how that setup compares to Varnish in general, but it's probably fine for my purposes.

Yep, you'd use something like nginx or Apache Traffic Server and set that up to terminate SSL.
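
Something like this, roughly (an nginx sketch; cert paths and the backend port are assumptions, not a recommendation):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate      /etc/ssl/example.com.crt;
        ssl_certificate_key  /etc/ssl/example.com.key;

        location / {
            # Varnish/Squid or the app server listens on plain HTTP behind nginx
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }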

One special case, though: if you have multiple load-balanced servers serving your content, and these servers are in different colocation facilities, then you probably need to run any sync between them over SSL as well. Even if you do control the link between the two boxes, there's the off chance that your link goes down and the IP layer automatically routes traffic through a different set of routers.
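
If the sync protocol itself can't do SSL, something like stunnel can wrap it. A rough sketch of the client side (the service name, port, and cert path are made up):

    ; local processes connect to 127.0.0.1:8873 in the clear,
    ; stunnel carries the traffic to the other colo over SSL
    cert   = /etc/stunnel/sync.pem
    client = yes

    [sync]
    accept  = 127.0.0.1:8873
    connect = colo2.example.com:8873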

One nit to pick... there are still people stuck on slow connections, and they are not always there by choice. In some cases the infrastructure lags behind the growth in content. When you get down to dialup (the last ubiquitous and typically unlimited form of connectivity), running HTTPS where it is not needed is painful. V.90 modems can only achieve compression on compressible traffic like plain HTTP; encrypted HTTPS traffic ends up moving at 40-50% of the speed of HTTP. While data security is certainly a valid objective, not everything moving over HTTPS needs to do so.

It's a trade-off: security vs. performance. Personally, I think a small reduction in performance for a massive increase in security is a worthwhile exchange. Other people don't think so.

I'm not attempting to verify the veracity of the OP's claim, but taking it at face value, the OP says HTTPS moves at 40-50% of the speed of HTTP. Surely you do not consider that a small reduction in performance, do you?

I don't recognise the 40-50% figure. I've been using the HTTPS-Everywhere Firefox addon and I notice a very small slowdown on some sites, and nothing noticeable on others.

So I've been setting up all my sites as separate domain names, with things like code.domain.com and email.domain.com. Should I migrate these services, or is there a way to use HTTPS on many subdomains?

You can buy a wildcard certificate that covers any subdomain of domain.com (i.e. *.domain.com). It costs something like $199/year on GoDaddy, I think, so you'd have to figure out whether it would be worth it over getting individual certificates at $12/cert.
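
If you do go the wildcard route, the CSR is the same as for any other cert, just with a wildcard CN. Untested sketch (the subject fields are placeholders):

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout wildcard.domain.com.key \
        -out wildcard.domain.com.csr \
        -subj "/C=US/O=Example Inc/CN=*.domain.com"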

> would be worth it over getting individual certificates at $12/cert

Which you can't really do in shared-IP virtual hosting, since SNI support is still a bit spotty.

"everyone should be using HTTPS"

Can't the argument be made that if you are a startup and have launched a product to test the market, it would take up too much developer time to think about HTTPS? Depending on the nature of the service, shouldn't you defer the extra effort until after you've validated product/market fit?

Well, the opposite argument can be made as well: if you're a startup and your early adopters get annoyed because someone made sure Firesheep works with your web site and they're all getting pranked, they're going to decide they won't bother.

OTOH, I am writing this comment on an open wireless router.

On the gripping hand, nothing I put here is private, and if someone "pranks" me, I can just log in again and delete offensive content. Karma isn't actually money...

Your local network admin, your ISP, any ad networks your ISP has or will have arrangements with, and your government can all log the websites you visit and build profiles of you, because HTTPS isn't used everywhere.

This might not bother you individually, today. But maybe it will cause problems for you in the future if laws change? Maybe it is causing problems for a lot of people who aren't you today? Maybe it is causing problems for citizens in countries other than yours?

The world would be better off if HTTPS were used everywhere.

It's $12 and 15 minutes, not a big investment.

Surely you're understating the time? 15 minutes sounds like the "happy path" estimate. Don't you have to think about mixed content and other edge cases? Or do these just not come up that often?

It's obviously going to be easiest if you do it from the beginning, but even retrofitting won't be a huge issue. It's probably also easier to secure your entire site rather than doing it piecemeal (like just for logins).

I actually did it last night, from yum install httpd to (self-signed) SSL in less than 15 minutes. There are some webapp considerations, but they are negligible when compared to other security efforts like XSS diligence.
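
Roughly what those 15 minutes look like on a yum-based box (a sketch only; paths and hostname are assumptions, adjust to taste):

    # self-signed cert, good for a year
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /etc/pki/tls/private/server.key \
        -out /etc/pki/tls/certs/server.crt \
        -subj "/CN=example.com"

    # then point mod_ssl at it in ssl.conf:
    #   SSLEngine on
    #   SSLCertificateFile    /etc/pki/tls/certs/server.crt
    #   SSLCertificateKeyFile /etc/pki/tls/private/server.key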

Or rather, everyone should be using HTTPS once XP dies (IE on XP doesn't support SNI). Until then, HTTPS breaks the shared-hosting model.

Until XP or IPv4 dies. I'm guessing IPv4 will be around longer than XP though.

Correct me if I'm wrong, but one problem is multiple domains on a single machine. Each SSL cert should correspond to a unique IP, and if you don't have wildcard SSL, that means a separate IP for every single domain and sub-domain.

Separating requests through host header / name-based virtual hosting is not supported on https. Startups discussed on HN will most probably own the machine, but it's problematic for small sites.

> Each SSL cert should correspond to a unique IP

Not true. SSL doesn't even really know what an IP address is, and it's quite possible to use a single SSL certificate on multiple IP addresses if the CN resolves to all of them (in round robin DNS, for example). If you mean that one IP address can only support a single SSL host, that's never been entirely true, as it has always been possible to support multiple SSL sites on a single IP using different ports (example.com:4443, for example). But you probably don't want to specify the port, which is addressed now by Server Name Indication (SNI): http://en.wikipedia.org/wiki/Server_Name_Indication

> if you don't have wildcard SSL, that means every single domain and sub-domain

Some CAs now sometimes include an X509v3 Subject Alternative Name for DNS, so you might get www.example.com tossed in for free when you buy a cert for example.com. Unfortunately, not all clients support this field.
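
You can check what a given cert actually carries with openssl (output formatting varies a bit by version; the filename is a placeholder):

    openssl x509 -in example.com.crt -noout -text | grep -A1 'Subject Alternative Name'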

> Separating requests through host header / name-based virtual hosting is not supported on https.

Once again, SNI is likely to fix this as soon as it is ubiquitously supported by browsers (support is already pretty good). However, note that name-based virtual hosting is a web server feature that really has nothing to do with SSL/TLS. It's quite possible to use it for HTTPS without any problems for at least a single domain. In fact, I do it for all of my secure sites to ensure that content cannot be requested using the bare IP address or a different domain that resolves to the same IP. This should really be a best practice, but there's a lot of shrill advice against it that is extremely outdated and needs to just die.
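
For what it's worth, the Apache side of this is unremarkable. A sketch of two name-based SSL vhosts on one IP (needs Apache 2.2.12+ built against an SNI-capable OpenSSL, and SNI-capable clients; the names and paths are made up):

    NameVirtualHost *:443

    <VirtualHost *:443>
        ServerName            secure.example.com
        SSLEngine             on
        SSLCertificateFile    /etc/ssl/secure.example.com.crt
        SSLCertificateKeyFile /etc/ssl/secure.example.com.key
    </VirtualHost>

    <VirtualHost *:443>
        ServerName            other.example.org
        SSLEngine             on
        SSLCertificateFile    /etc/ssl/other.example.org.crt
        SSLCertificateKeyFile /etc/ssl/other.example.org.key
    </VirtualHost>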

In any case, most of the problems you mention are solvable now, even for small sites. The real problem is in saying that obsolete insecure web clients will not be supported on your site, and that's getting easier to do every day.


It's possible, but needs XP to die before it's practical.

I'm wondering whether one could practically implement a site with TLS+SNI for browsers that support it, while letting browsers on XP and others that don't support it hit the site over unencrypted HTTP.

Of course, not appropriate for all sites, but definitely a step forward for sites holding out due to incomplete support.

You can with SNI. See here: http://en.wikipedia.org/wiki/Server_Name_Indication (check the support section too).

How would you handle mixed content, e.g. on a secure forum where some of the content posted by users contains non-secure elements like images?

The mixed content is, fundamentally, not secured by HTTPS. If your page is important enough to warrant HTTPS in the first place, why are you allowing user-specified content into the mix anyhow? Users are not trustworthy, in general.

Bear in mind that if you implement the proxy solution others suggest, you aren't just serving the content, you're approving it. Blind proxying is not really sensible; again, if it were, then why are you using HTTPS in the first place?

You're probably talking about images. You should probably let them upload images and host them yourself, at which point you should actually examine them somehow for security guarantees, such as "yes, this really is a JPG".

If this sounds a bit utopian or a bit hard-nosed, what it really comes down to is: do you need HTTPS or not? I won't necessarily guarantee there's never an in-between answer, but it's an awfully narrow space. And if the answer is "yes", well, follow through.

I believe Hotmail handles this by proxying all external resources through its own HTTPS proxies.

Good question. If you mean specifically getting around the "some objects on this page are insecure" popups, a trick I've seen work is to cut off the http protocol portion of remote URLs, forcing them to look like "//www.example.com/to/file.jpg".

That's just a link that's relative to the protocol, the same way /blah is relative to the protocol + host. If you load an https page with inlined image links like that, it'll attempt to hit those images over https as well.

This is fine if you control the source and can serve it over both HTTP and HTTPS. It doesn't help for content inlined by users, though; it'll just result in broken images wherever the remote host doesn't speak HTTPS.
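
For reference, the markup in question is nothing more than this (URL copied from the comment above):

    <!-- the browser reuses whatever scheme the page itself was loaded with -->
    <img src="//www.example.com/to/file.jpg" alt="">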

One objection to using HTTPS for everything that I occasionally see is the loading of third-party scripts (Facebook, ad networks, etc.). The premise of the argument against using HTTPS is that these third-party scripts are only accessible via HTTP, so users of old versions of IE would get scary popup warnings.

The other interesting item from this article was that you should always load resources via HTTPS, because a malicious script loaded over plain HTTP would have control over the DOM. It seems like there is a need for some facility in browsers to let pages delegate limited privileges to 3rd party scripts (maybe only able to read/write in a certain div or something?), so that users can still be confident that their connection is secure.

If you have third-party scripts on your page, you've already given away control of your page. A compromise on the third party's servers makes your site vulnerable as well. The safe thing to do is to fetch third-party content server-side, massage it, and then pass it on to your front end.

Naturally, this requires more work, and you'd probably end up getting rate limited since all API requests now come from a single IP (your server's) rather than each user's IP.

Alternatively, you can iframe third-party scripts (or, depending on privacy requirements, you may need to double-iframe them), but this means the script cannot directly interact with content on your page.

Trade-offs everywhere, which is why the architect of your system really needs to know what he or she is doing.
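
A bare-bones illustration of the iframe approach (the vendor URL and dimensions are made up): the widget lives on the vendor's own HTTPS origin, so the same-origin policy keeps its script away from your page's DOM, at the cost of any direct interaction.

    <iframe src="https://widgets.example-vendor.com/like-button"
            width="120" height="30" frameborder="0" scrolling="no"></iframe>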

> It seems like there is a need for some facility in browsers to let pages delegate limited privileges to 3rd party scripts (maybe only able to read/write in a certain div or something?), so that users can still be confident that their connection is secure.

There is most definitely such a need, and here is one attempt to address it:


It's a subset of JavaScript that can be sandboxed within other JavaScript, implemented as a lexical sanitizer. It's a huge hack, but I can't think of a better way to do this with existing technology.

What we really need is a replacement for JavaScript.

The real problem with mixed content is that you're compromising the security of SSL when you allow insecure resources on the page. If a page is otherwise secure and includes an insecure ad script call, for instance, it's relatively easy to hijack that JavaScript request and munge it to do whatever you want with the DOM and JavaScript-accessible cookies.

It'd be nice if HSTS also allowed you to limit the root CAs that should be signing your site's certificate.
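
For comparison, all HSTS lets you express today is roughly this (the max-age value is illustrative):

    Strict-Transport-Security: max-age=31536000; includeSubDomains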

This does not fly with our BAFHs. They need to know the content :)

It would be nice if we could have integrity and authenticity without confidentiality.

