If your site is much bigger than a personal blog, turning on SSL by default will cost you a fortune. Your hosting bill will be virtually unchanged, but the CDN costs are ruinous.
Akamai charges about 10x as much for custom SSL service as for regular. Amazon CloudFront just won't do custom SSL -- you have to use their domain and risk ugly browser warnings.
I don't know about other CDNs, but given the nature of the technology, I expect their cost structures to be similar. (over 40% of clients do not support SNI, so each custom SSL certificate requires a dedicated TCP port, which drives up costs for the CDN. Worse, a lot of clients will reject ports other than 443, forcing you to use a separate IP for each custom certificate.)
If you read the comments, the CloudFlare guys mention that, like Amazon CloudFront, it is not yet possible to use your own certificate. That means their cost is essentially zero (some CPU time or an SSL accelerator card). They can serve every request from the same endpoint.
It also means their SSL service is useless for CDN use, because it will induce warnings in the browser. (You may be able to evade the warnings if you add their custom domain as a SAN on your primary certificate, but I've never tried it, so I cannot speak with certainty to the outcome.)
CloudFlare actually does do SSL for your domain without warnings or SNI. They essentially have a large pool of shared certificates with lots of SANs, so your domain ends up as a SAN on one of their shared certificates with half a dozen other CloudFlare customers.
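If you want to check which shared certificate your domain landed on, something like this should list the SAN entries (the hostname is a placeholder; s_client and x509 are stock OpenSSL):

    # fetch the certificate CloudFlare serves for you and list its SANs
    openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -text \
        | grep -A1 'Subject Alternative Name'

Your domain should appear alongside the other customers sharing the cert.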
It would be nice if you could have a cryptographic hash on img and other tags, so that when the main page is in https, CDN content could be loaded iff it matches the hash.
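Sketching what that might look like (the 'hash' attribute is hypothetical, invented here purely for illustration; no such attribute exists in HTML):

    <!-- hypothetical: the browser fetches from the CDN, hashes the
         response, and refuses to use it unless the digest matches -->
    <script src="http://cdn.example.com/app.min.js"
            hash="sha256:<hex digest of the expected file>"></script>
    <img src="http://cdn.example.com/logo.png"
         hash="sha256:<hex digest of the expected image>">

That way the integrity guarantee comes from the main page's https, even though the CDN bytes travel over plain http.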
I support the notion of enabling HTTPS by default, but calling this "privacy" is a bit optimistic given the way things are these days.
Supposing the NSA wanted to spy on your traffic to tbray.org. Would it not be as simple as them asking Verisign (or whoever signed the server's SSL cert) to hand them the server's private key?
I think true privacy would require a bit more hassle than most users are willing to deal with right now, and that's a shame.
No, it would not be that simple. Verisign does not have the private key; they only have the public key, and their role here is to confirm that it really is the server's public key.
For the NSA to spy, they need to get Verisign to issue a second certificate, and then use it to construct a mirror of your site via a man-in-the-middle attack. Then they'd have to route your traffic through their server in one of various ways (the simplest being to confuse your machine about the DNS entry).
It is still doable, and no doubt they do it already. But they can't do it on a wide scale without being caught at it. And they haven't been.
Surprised it's not mentioned there, but the Perspectives plugin (Chrome + Firefox) is similar, if anyone wants automated checks now. (The only time I've noticed it fail, amusingly, was on the EFF's site.)
The wonderful thing about a free and open Internet developed by passionate participants is that they anticipated the bad actors being not just criminals working outside the law, but states themselves (which has proven quite prescient, even in the case of "good" states like the US, Canada, UK, Australia, etc.); states with the ability to bribe or force participating network owners to allow unfettered access. And they built the protocols to work anyway.
Of course, this is also just a side effect of good security practices. If a state can break it, a well-connected criminal can, too. But, I'm pretty confident the creators of SSL were also thinking of China and North Korea and other violently oppressive regimes.
When opening an SSL connection your browser typically transmits the server name unencrypted, so the server knows which certificate (and public key) to send you. This is known as Server Name Indication [1].
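You can reproduce what the browser does with stock OpenSSL; the -servername flag puts the hostname into the ClientHello, which is sent before any encryption is negotiated (the IP and hostname below are placeholders):

    # ask one IP for a specific site via SNI
    openssl s_client -connect 192.0.2.10:443 -servername www.example.com

    # meanwhile, on the wire, the hostname is visible in cleartext
    tcpdump -l -A -s0 'tcp port 443' | grep www.example.com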
SSL doesn't stop people seeing what servers you connect to, or the amount of traffic you exchange with each one.
That's like saying your bedroom isn't private because some black-helicopter agency could use radar to look through your roof.
While HTTPS does not confer absolute privacy, and it's theoretically possible for a well-financed nation-state to compromise a targeted individual's privacy despite HTTPS, the fact remains that for the other 99.9999999% of society it can be considered unbreakable.
Certainly, encouraging widespread adoption of HTTPS is a good idea when the majority of websites are still transmitting login and session credentials in clear text! Even news.yc doesn't enable a secure connection unless you explicitly type https:// in the location bar, and many websites I log in to don't even have HTTPS support.
> There are no free lunches, and this is no exception. That encryption requires quite a lot of computation, which isn’t free.
No, all my computrons will go to waste! Silliness aside, if you're not paying a hosting provider for CPU cycles, you are not going to be affected by turning on SSL. If you are paying for CPU cycles, optimizing the site's own code is the way to go, like programming the site in C instead of PHP.
Every time I hear people talking about enabling encryption, I hear them talk about performance as if we still had 4K of RAM and a single-digit-MHz CPU, and the extra overhead of computation would ruin your day. Today, even embedded devices won't be affected by it.
There are edge cases which do still cause problems. SSL servers demand entropy, and embedded devices can easily run out of it. That's not a computation problem per se, but still something to take into consideration if you put an SSL web server into a minicam, a light bulb, or something similarly small. There is also an issue if you are a supplier of very small single files, like a JavaScript library. HTTPS can sometimes impact caching, and the handshake will increase the size of the data initially sent by an average of under 5 kB. The handshake will also increase the number of round-trips required. Neither is a computation problem, but both are considerations in regard to SSL.
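On Linux you can check how starved a small device actually is (this file is standard on Linux; a value near zero means the handshake may block waiting for randomness):

    cat /proc/sys/kernel/random/entropy_avail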
But please, in any future discussion about encryption, can we stop talking about the computational overhead of encryption?
That's not a computation problem but a latency issue, thanks to the extra round-trips being made (see above about edge cases). It's also only the initial connection, as keep-alive eliminates that particular problem for HTTP and HTTPS alike.
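To put rough numbers on it, assuming a classic full TLS handshake with no session resumption or False Start:

    TCP handshake:          1 RTT
    TLS full handshake:     2 RTT
    ------------------------------
    extra over plain HTTP:  2 RTT

    at  30 ms RTT  ->  ~60 ms added to the first request
    at 150 ms RTT  -> ~300 ms added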
Latency issues are real (i.e., non-trivial problems), but they are of the nature where context is everything. If you are serving a small JavaScript file, then HTTP instead of HTTPS will be several times faster. If, however, you are running a dynamic site, not on a CDN, with some kind of JavaScript goodies, then the latency users experience is going to be complex. Add in some banners with ads, and the round-trips might not be that relevant at all in the experience of loading the first page. What I am saying is that if latency is an issue, HTTPS is not the first place to go to address it.
To compare with a real-world example: if I load a local news site (round-trip ping of, say, 30 ms), the extra round-trips of HTTPS won't have any direct effect on latency, and the ad networks a continent away will be the biggest offenders. That is, unless I enable JavaScript, in which case the load time of the first page increases by a second or so.
In the app I'm working on now, I really want to get the page loading fast-fast-fast. I have all JavaScript combined into one file, minified, gzipped, and far-future cached. Same with CSS. (Rails does this for me, mostly.) The JavaScript file is also marked with the 'async' attribute. All images are set up so loading images doesn't block the initial page load.
The 300 ms for HTTPS round trips remains. Shrug. It is unfortunate, but I stick with HTTPS anyway. Making sure your web server is doing persistent HTTP connections is about the only thing I know of to ameliorate it somewhat, but the first connection is still slower than I'd like, due to HTTPS.
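For what it's worth, on Apache persistent connections are just the stock keep-alive directives (the values here are arbitrary examples):

    # httpd.conf -- reuse connections so the TLS handshake is paid once
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15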
Something may very well be wrong, and it may be outside of my abilities or time to figure out what. :)
I am looking into increasing the SSL session cache timeout on my Apache; any opinions about that? It does seem to eliminate the extra 'tax' of SSL in my from-slow-network experiments, avoiding re-negotiation for the duration of the session.
As ballpark figures for SSL session cache timeouts, Apache defaults to 5 minutes, F5 load balancers default to 1 hour, and JBoss to 1 day. So it really seems like a wide range of values gets used out in the world.
Upping it to an hour should cause no problems. With an SHM cache that will use a bit more memory; it should make no practical difference for DBM.
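Concretely, in mod_ssl that's something like the following (the path and size are examples; the directives are stock Apache):

    # shared-memory session cache; sessions live for an hour
    SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
    SSLSessionCacheTimeout 3600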
The other thing to watch out for (mentioned at link) is that lots of cheap SSL certs have a cert chain like 4 or 5 items long, bloating startup.
That shows that 5983 bytes of intermediate certs are being sent over and above the actual server cert. That's going to eat a bunch of RTTs during SSL setup.
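You can measure your own chain the same way (the hostname is a placeholder); the "SSL handshake has read N bytes" line near the end gives a rough total for the handshake, certs included:

    # -showcerts dumps every certificate the server sends
    openssl s_client -connect www.example.com:443 -showcerts </dev/null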