I am sick of the articles saying enabling HTTPS is not going to impact performance "much".
Even without pulling out JMeter, ApacheBench, or LoadRunner, I can tell you that just hitting F5 on an HTTPS page makes it take longer. It's the responsiveness. I don't really care if it takes 10% more CPU; CPU cycles are getting cheaper and cheaper. I do care a lot that it takes 50 ms more.
Yes, I know you can tweak things so that the HTTPS connection stays open and doesn't have to handshake every time. But really, is there any way to get that handshake down to something acceptable?
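For the record, the tweaks I mean are TLS session caching plus HTTP keep-alive, which only help repeat connections, never the first one. A rough sketch, assuming nginx (hostnames and paths are placeholders):

    # Let returning clients resume their TLS session instead of doing a
    # full handshake, and keep connections open between requests.
    http {
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;
        keepalive_timeout   65;

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate     /etc/nginx/ssl/example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/example.com.key;
            root /var/www/example;
        }
    }

That still leaves the first handshake at full price, which is exactly my complaint.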
Your typical blog will probably suffer even more than the major sites from enabling HTTPS globally, because less effort has gone into combining JS and CSS files and spriting images.
Seriously, https performance sucks from start to finish. If the average user doesn't care about https, but does care about performance, who are they going to go with - you or your faster competitor?
I agree that anonymous or non-logged-in activity has no need of HTTPS, but anything that's transferring cookies, passwords, or other important user or session information should be HTTPS.
You can't cache something that's encrypted!
Edit: It looks like the best way might be to terminate SSL at nginx acting as a reverse proxy on my side, and then go non-SSL internally? Not sure how that setup compares to Varnish in general, but it's probably fine for my purposes.
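Something like the following is what I have in mind; a rough sketch, assuming nginx terminates SSL and the app listens on a local port (names, paths, and ports are placeholders):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # The unencrypted hop stays on this box (or a trusted LAN).
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            # Tell the backend the original request was HTTPS.
            proxy_set_header X-Forwarded-Proto https;
        }
    }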
One special case, though: if you have multiple servers serving your content load-balanced, and those servers are in different colocation facilities, then you probably need to run any sync between them over SSL. Even if you do control the link between the two boxes, there's the off chance that the link goes down and the IP layer automatically routes traffic through a different set of routers.
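If you don't want to teach the sync protocol itself about SSL, a tunnel does the job. A hedged sketch of the stunnel client side (the other colo runs a matching stunnel server with its own certificate; service name, hosts, and ports are made up):

    ; anything sent to localhost:3307 goes out over SSL to the other colo
    client = yes

    [db-sync]
    accept  = 127.0.0.1:3307
    connect = db.other-colo.example.com:3307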
Which you can't really do in shared-IP virtual hosting, since SNI support is still a bit spotty.
Can't the argument be made that if you are a startup that has just launched a product to test the market, it would take too much developer time to think about HTTPS? Depending on the nature of the service, shouldn't you defer the extra effort until after you've validated product/market fit?
OTOH, I am writing this comment on an open wireless router.
On the gripping hand, nothing I put here is private, and if someone "pranks" me, I can just log in again and delete the offensive content. Karma isn't actually money...
This might not bother you individually, today. But maybe it will cause problems for you in the future if laws change? Maybe it is causing problems for a lot of people who aren't you today? Maybe it is causing problems for citizens in countries other than yours?
The world would be better off if HTTPS were used everywhere.
I actually did it last night, from yum install httpd to (self-signed) SSL in less than 15 minutes. There are some webapp considerations, but they are negligible when compared to other security efforts like XSS diligence.
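For anyone curious, the whole thing is roughly this (CentOS-ish paths, domain is a placeholder):

    yum install -y httpd mod_ssl

    # Self-signed certificate and key, valid for a year, at the paths that
    # mod_ssl's default ssl.conf already expects on CentOS/RHEL.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=example.com" \
        -keyout /etc/pki/tls/private/localhost.key \
        -out /etc/pki/tls/certs/localhost.crt

    service httpd restart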
Separating requests by Host header (name-based virtual hosting) is not supported over HTTPS. Startups discussed on HN will most probably own the whole machine, but it's problematic for small sites.
Not true. SSL doesn't even really know what an IP address is, and it's quite possible to use a single SSL certificate on multiple IP addresses if the CN resolves to all of them (in round robin DNS, for example). If you mean that one IP address can only support a single SSL host, that's never been entirely true, as it has always been possible to support multiple SSL sites on a single IP using different ports (example.com:4443, for example). But you probably don't want to specify the port, which is addressed now by Server Name Indication (SNI): http://en.wikipedia.org/wiki/Server_Name_Indication
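With a server and clients that support SNI, it really is just multiple certificates on the same IP and port. For example, in nginx (names and paths are placeholders), the certificate is chosen by the hostname the client sends in the handshake:

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
    }

    server {
        listen 443 ssl;
        server_name other.example.org;
        ssl_certificate     /etc/nginx/ssl/other.example.org.crt;
        ssl_certificate_key /etc/nginx/ssl/other.example.org.key;
    }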
if you don't have wildcard SSL, that means every single domain and sub-domain
Some CAs will sometimes include an X509v3 Subject Alternative Name for additional DNS names, so you might get www.example.com tossed in for free when you buy a cert for example.com. Unfortunately, not all clients support this field.
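You can check which names a certificate actually covers before counting on this (filename is just an example):

    openssl x509 -in example.com.crt -noout -text | grep -A1 "Subject Alternative Name"
    #     X509v3 Subject Alternative Name:
    #         DNS:example.com, DNS:www.example.com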
Separating requests through host header / name-based virtual hosting is not supported on https.
Once again, SNI is likely to fix this as soon as it is ubiquitously supported by browsers (support is already pretty good). However, note that name-based virtual hosting is a web server feature that really has nothing to do with SSL/TLS. It's quite possible to use it for HTTPS without any problems for at least a single domain. In fact, I do it for all of my secure sites to ensure that content cannot be requested via the bare IP address or a different domain that resolves to the same IP. This should really be a best practice, but there's a lot of shrill advice against it that is extremely outdated and needs to just die.
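In nginx terms, the practice is just an explicit catch-all for anything that doesn't match a known server_name; a rough sketch (cert paths are placeholders, and nginx still needs some certificate to complete the handshake before it can reject the request):

    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate     /etc/nginx/ssl/default.crt;
        ssl_certificate_key /etc/nginx/ssl/default.key;
        # 444 makes nginx close the connection without sending a response.
        return 444;
    }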
In any case, most of the problems you mention are solvable now, even for small sites. The real problem is in saying that obsolete insecure web clients will not be supported on your site, and that's getting easier to do every day.
It's possible, but needs XP to die before it's practical.
Of course, not appropriate for all sites, but definitely a step forward for sites holding out due to incomplete support.
Bear in mind that if you implement the proxy solution others suggest, you aren't just serving the content, you're approving it. Blind proxying is not really sensible; again, if it were, why are you using HTTPS in the first place?
You're probably talking about images. You should probably let them upload images and host them yourself, at which point you should actually examine them somehow for security guarantees, such as "yes, this really is a JPG".
If this sounds a bit utopian or a bit hardnosed, what it really comes down to is, do you need HTTPS or not? I won't necessarily guarantee there's never an in-between answer but it's an awfully narrow space. And if the answer is "yes", well, follow through then.
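To make "actually examine them" concrete, here is a minimal sketch of the kind of check I mean, assuming Python and PIL (the function name and details are just an example):

    # Hypothetical upload check: does this file decode as a real JPEG?
    from PIL import Image

    def is_jpeg(path):
        try:
            img = Image.open(path)
            img.verify()                 # raises if it isn't a valid image
            return img.format == "JPEG"
        except Exception:
            return False

Re-encoding the image rather than merely sniffing it is an even stronger guarantee, since it strips anything that isn't pixel data.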
This is fine if you control the source and can serve it over both HTTP and HTTPS. It doesn't help for content inlined by users, though; it'll just result in broken images.
The other interesting item from this article was that you should always load resources via HTTPS, because a malicious script would have control over the DOM. It seems like there is a need for some facility in browsers to let pages delegate limited privileges to 3rd party scripts (maybe only able to read/write in a certain div or something?), so that users can still be confident that their connection is secure.
Naturally, this requires more work, and you'd probably end up getting rate limited since all API requests now come from a single IP (your server's) rather than each user's IP.
Alternately, you can iframe third party scripts (or depending on privacy requirements, you may need to double-iframe them), but this means that the script cannot directly interact with content on your page.
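For example (URL and sizes are made up), instead of dropping the vendor's script tag straight into your page:

    <!-- The widget lives in its own document on its own origin, so the
         same-origin policy keeps its script away from your page's DOM. -->
    <iframe src="https://widget.vendor-example.com/embed?site=1234"
            width="300" height="250" frameborder="0"></iframe>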
Trade-offs everywhere, which is why the architect of your system really needs to know what he or she is doing.
There is most definitely such a need, and here is one attempt to address it:
It would be nice if we could have integrity and authenticity without confidentiality.