If you want another reason for enabling HTTPS, even for shitty blogs: on one of my visits to the US, I stayed at a motel that was snooping on my WiFi traffic, injecting its own ads into Google search results and displaying its own banners on the websites I was visiting.
If you're the owner of a WiFi network, there are already off-the-shelf solutions for monetizing your users' traffic by injecting ads, like this one: http://rgnets.com/
So, as the publisher of a shitty blog or website, do you really want ISPs, motels or any other third party messing with your content, injecting their own scripts and frames into your HTML, and degrading the experience for your readers?
One easy way of fixing that is HTTPS. HTTPS is not just for security or privacy; it also guarantees the integrity of the content being served.
You can't MITM an HTTPS connection without the browser throwing huge warnings about the connection not being secure, which is why, I've noticed, networks that do this leave HTTPS connections alone.
The only way you can MITM successfully without the user noticing is if you control the browser used (e.g. Nokia).
And secure cookies, yes. But as we have seen, those measures alone simply do not work out. In the usual layered-security approach, it's a good idea not to provide HTTP service at all, so that a redirect isn't hiding failures to use HTTPS.
You should consider combining CipherScan and SSL Labs into a Tools section, then adding SSLyze. It does nearly everything SSL Labs does from the command line: https://github.com/iSECPartners/sslyze
For big sites, sure. But at least as things currently stand, that's infrastructure-inconvenient and somewhat pricey on the low end, especially if you operate multiple sites and they come and go as hobby projects. Not because of the compute overhead (negligible for my uses), but because of the need for a (non-self-signed) SSL certificate, plus a unique IPv4 address, for each domain. Digital Ocean, for example, won't give you more than 1 IPv4 address per VPS, which means you need a separate VPS for every side project, if you want to go HTTPS. I prefer to multiplex all my tiny projects on one VPS, saving both the extra $60/yr per project and the extra sysadmin overhead. Plus $50/yr for the certificate, and $15 for the domain, and you have a startup cost of $125/yr per side project... versus $15/yr that it currently costs me. Not huge, but more than I want to pay, and raises the mental friction on something that I currently consider near-free/throw-away.
If IPv6 actually gets deployed to the extent that I can have an IPv6-only site without needing IPv4 addresses, and someone solves the SSL signing mess, I'd be happy to use it.
As others mentioned, you can run multiple sites using SNI. You can also get a free cert from StartSSL. The barrier to entry is really low.
The only scenario where I wouldn't do it is a blog. There's probably no sensitive content there, and without SSL you can use the free tier at CloudFlare to HN-proof it.
StartSSL has free certificates, or you can pay $60 for a year of validation, which gives you unlimited certificates on unlimited domains with a two-year expiry; so if your set of domains is reasonably static, you only have to pay every two years.
You can also get alt names on your certificates, so if you want to support IE on XP or Android 2.2 then you can put several domains on the same certificate.
I have no financial interest in StartSSL, just a happy customer :).
> Digital Ocean, for example, won't give you more than 1 IPv4 address per VPS, which means you need a separate VPS for every side project, if you want to go HTTPS.
As long as you're not supporting clients running IE on WinXP or other similarly old web browsers, Server Name Indication (where the hostname is included as a part of the handshake) will work and it'll eliminate your need for more than one IP.
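For what it's worth, here's a minimal sketch of SNI-based virtual hosting in Node (assuming a reasonably recent Node; the domain names and cert paths are made up):

var https = require('https');
var tls = require('tls');
var fs = require('fs');

// One SecureContext per hosted domain (hypothetical file names).
var contexts = {
    'siteone.example': tls.createSecureContext({
        key: fs.readFileSync('certs/siteone.key'),
        cert: fs.readFileSync('certs/siteone.crt')
    }),
    'sitetwo.example': tls.createSecureContext({
        key: fs.readFileSync('certs/sitetwo.key'),
        cert: fs.readFileSync('certs/sitetwo.crt')
    })
};

https.createServer({
    // Pick the certificate based on the hostname the client sends
    // in the TLS handshake (the SNI extension).
    SNICallback: function (servername, cb) {
        cb(null, contexts[servername]);
    },
    // Fallback cert for clients that don't send SNI (e.g. IE on XP).
    key: fs.readFileSync('certs/siteone.key'),
    cert: fs.readFileSync('certs/siteone.crt')
}, function (req, res) {
    res.end('hello from ' + req.headers.host);
}).listen(443);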
I just typed a similar comment at nearly the same time, but this is interesting to learn; I had no idea XP didn't like this. I know Android 2.2 and older won't take intermediate SSL certs, so I've already written off some of my traffic. Now I'll add XP to the group. In fairness, if you're running XP or Android 2.2 now, you have bigger problems than SSL not working...
Setting up SSL even for small sites can be a HUGE PITA. Adding a custom cert to CloudFlare will cost you $200 a month[1].
It shouldn't be hard, and you shouldn't have to pay such a premium for something that should just work by default. I helped create a front-end PaaS a little while back[2] that believed in that philosophy; we worked hard to lower the barrier to entry for most things, including SSL.
The reality is that understanding SSL, encryption and everything involved is still overwhelming for most people. This article helps, but we need more services to stop gouging people for doing, or trying to do, the right thing.
You don't need a unique IP per domain for SSL. I've got multiple $10/yr certs via Gandi for my domains, and I can put several on one DO VPS if I like. Tested and working; I have had no trouble, at least with a basic SSL cert.
Just had a look at Namecheap; can anyone explain the main reason to forgo the PositiveSSL option ($7/annum) and take out their more expensive option, EssentialSSL ($21/annum)?
You get a "site seal" with EssentialSSL. Both are equivalent on a technical level, so go with PositiveSSL unless your users care about SSL branding (most don't).
Interesting, I guess I need to change providers. I last used RapidSSL, which advertises itself as a "low-cost" provider at $49: https://www.rapidssl.com/buy-ssl/
Interestingly you can buy the same RapidSSL certificate through Namecheap for $9.49. I'm not sure why they allow resellers to undercut them so dramatically, but I have been using Namecheap's version for years and the savings add up.
Interestingly, the store side of Amazon appears to only use HTTPS for pages requiring credentials or those directly involved in a transaction. Try to view a product under HTTPS: it will redirect to HTTP. Session cookies can be intercepted, and certain actions (adding an item to your wishlist, for example) can be completed without HTTPS. It seems that even with their compute resources, Amazon has deemed the cost of deploying HTTPS on every page to be prohibitive.
Amazon is in a rare situation. They have razor thin margins and page load times can be equated to actual money lost. I doubt that can be said for very many websites.
Personally, I'd very much like what products I look at while shopping to be private. Some enterprising little daughter or son may be snooping the WiFi during Christmas shopping season, trying to figure out what is being bought for them, you know.
Many people prefer partial HTTPS, only for pages that need it. The danger of session id hijacking, mentioned in the article, can be mitigated by IP pinning: remembering the client IP when the session is created and denying access to that session id from any other IP.
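A rough sketch of that idea in Express (assuming Express 3.x and a hypothetical in-memory `sessions` store; a real app would use something persistent):

var express = require('express');
var app = express();
app.use(express.cookieParser()); // Express 3.x style

var sessions = {}; // hypothetical store: session id -> { userId, ip }

app.use(function (req, res, next) {
    var sess = sessions[req.cookies.session];
    if (sess && sess.ip !== req.ip) {
        // Same session id arriving from a different IP: treat it as hijacked.
        res.clearCookie('session');
        res.statusCode = 401;
        return res.end('Session rejected');
    }
    req.session = sess;
    next();
});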
* Server is not powerful enough to handle the extra compute load -- SOLUTION: go buy a server made in the past decade.
* Certificate costs too much -- SOLUTION: go get a free one.
* You hope caching will reduce the load on your servers -- SOLUTION: pay for a CDN or use bittorrent for distribution. Mostly, mid-network caching doesn't happen anyway.
That leaves just one reason:
* You want 3rd world spy agencies and hackers to be able to snoop on and hack your customers just like the NSA can.
Let me give you an example from mobile app development. As many of you know, Amazon's CloudFront lets you make an in-app connection to cdn.example.com, which may be a DNS alias for something like drj6nl5tupx60.cloudfront.net. Amazon will generate a cache hit or miss and, if not cached, connect to your example.com servers.
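In a zone file, that alias is a single record along these lines (using the hostnames from the example):

cdn.example.com.    300    IN    CNAME    drj6nl5tupx60.cloudfront.net.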
The ideal way to do this is to generate an SSL certificate for the CloudFront distribution, but Amazon charges you $7,200 per year for that privilege: http://aws.amazon.com/cloudfront/pricing/
So if you want to load non-private images, video and other content from cdn.example.com via HTTPS without giving Amazon $7,200 a year, there will be an invalid certificate chain error.
Some app development environments (not pointing any fingers right now in hopes this gets fixed quickly) do not support bypassing SSL certificate checks. So in some cases the answer is not to deploy HTTPS. :(
> The ideal way to do this is to generate an SSL certificate for the CloudFront distribution, but Amazon charges you $7,200 per year for that privilege: http://aws.amazon.com/cloudfront/pricing/
Maybe using SSL as an opportunity for a premium price gouge was smart business practice in the 90s. But these days, it's wrong.
Charge for the actual cost. Even add your customary markup. But not some arbitrary $7,200 fee.
It may be legal, but it's unethical. Until they change it, Jeff Bezos and Werner Vogels should be ashamed of doing this.
What's wrong with using HTTPS on drj6nl5tupx60.cloudfront.net? Honest question... no one is going to notice if they are getting assets from cdn.company.com vs drj6nl5tupx60.cloudfront.net, right?
It is a fair question. And you're right that no app user will ever notice.
The problem is that it locks you into Amazon. If another company comes out with a much cheaper offering, or you want to switch to Google Cloud Storage, or (unlikely but I suppose possible) Amazon boots you for one reason or another, you're out of luck until you can get your installed base of app users to upgrade.
> Some app development environments (not pointing any fingers right now in hopes this gets fixed quickly) do not support bypassing SSL certificate checks. So in some cases the answer is not to deploy HTTPS. :(
Wouldn't it be better to be able to specify a set of accepted key fingerprints, instead of bypassing security checks altogether? Outright bypassing security checks will make MITM too easy.
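In Node, for instance, a rough sketch of that looks like this (the pinned value is a placeholder, not a real fingerprint):

var https = require('https');

// Placeholder: the known-good fingerprint of the cert you expect.
var PINNED = 'AB:CD:EF:...';

var req = https.request({
    host: 'cdn.example.com',
    path: '/asset.js',
    // Skip the CA/hostname check (it would fail on the mismatched
    // CloudFront cert)...
    rejectUnauthorized: false
}, function (res) {
    // ...but enforce our own, stricter check: accept only the exact
    // certificate we pinned, and nothing else.
    var cert = res.socket.getPeerCertificate();
    if (cert.fingerprint !== PINNED) {
        res.destroy(new Error('certificate fingerprint mismatch'));
        return;
    }
    res.pipe(process.stdout);
});
req.end();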
What if some of your users are behind a network-level web filter (like the one soon to be deployed in the UK)? If you run HTTPS, your entire site will be blocked because one page fails the filter (because of a single swear word, for example).
Then HTTPS is doing its job by preventing an active MITM attack by a hostile third party.
Option 1: negotiate with the filter maintainers to get your site whitelisted; that may require changing your content, but a site in which you have a commercial interest probably shouldn't have content on it that trips most filters (with some obvious exceptions on which the filter would be doing its job).
Option 2: inform your users that they need to opt out of the filter in question, assuming they're in a situation/regime that allows doing so. Make it clear to them exactly which content is tripping the filter, and hope their reaction is "that's absurd".
Option 3: maintain an insecure HTTP site for use from such locations, and arrange to redirect there or inform your users that they'll need to use that version due to the filter they're using.
Nonsense. Blocking HTTPS traffic rules out far too many useful/essential things: e-commerce for home users, VPNs for business users away from the office, online banking and other financial services, any sort of government service that requires authentication. The list goes on and on, and the people most likely to defend secure browsing are going to be the likes of governments and financial institutions.
What filter are you discussing? AFAIK, UK filtering is done by ISPs using their own solutions; there is no plan for a statewide filtering system.
ISPs, or anyone else on the path, could still insert malicious JavaScript into your site, or present fake download buttons or login forms on your pages.
Visits to your site could be logged by IT departments, getting your users in trouble.
Also, I've heard of places where port 80 is blocked by default (but 443 isn't) and you need permission on a site-by-site basis to get it unblocked.
Because if security and privacy are limited to sensitive or controversial sites, then the use of security and privacy itself becomes suspect. If ALL internet traffic were encrypted, it would make surveillance that much more difficult, because you can't find the signal in the noise. Cory Doctorow's book Little Brother talks a bit about this; in fact, that book was one of the motivations for Google starting to encrypt web search traffic years ago.
An account activity log should also be a must for your users. A stolen session id is not as useful when users can monitor their accounts. It's a work in progress, but I'm writing a page on good application login design: https://tagsauce.com/docs/designsnippets
From an idealistic point of view you're right, but in practice I disagree, as HTTPS doesn't have a persistent cache. So even setting aside the encryption overhead, HTTPS can be very costly for popular sites that don't have user logins (e.g. news sites).
This is untrue so please don't perpetuate this myth. If you send the `Cache-Control: public` header, the resource will be cached to disk just as it would without HTTPS.
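For instance, in Express it's one header on the response (the max-age here is arbitrary):

var express = require('express');
var app = express();

// Mark static assets as cacheable to disk, even over HTTPS.
app.use(function (req, res, next) {
    res.setHeader('Cache-Control', 'public, max-age=86400'); // one day
    next();
});
app.use(express.static(__dirname + '/public'));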
> This is untrue so please don't perpetuate this myth.
That sentence is somewhat unnecessary, as I wouldn't have posted that unless I believed it to be true. The rest of your post is valid enough (in fact, extremely helpful) not to need such a prefix.
Anyhow, I'm not out to start an argument, and I genuinely am grateful you have corrected me, because obviously I wasn't aware of the `public` option in the Cache-Control header, and this is something I can put to use right away.
You're right; sorry about that. I do get annoyed when I see bogus reasons for not deploying HTTPS. It's made all the worse by people who actually know better but spread FUD because they stand to make a profit from selling expensive HTTPS accelerator appliances. But that's clearly not the case here and doesn't excuse my comment, which, upon reflection, was too harsh.
It may have been a little harsh, but I think it was necessary to point out that it wasn't true and that he wasn't just posting some hack or workaround.
I have yet to build a company where I've seen HTTP make things faster than HTTPS in a way that matters. When I reach the level where HTTPS is causing too much CPU usage, I will be super, super happy.
SPDY requires encryption, and yet it's overall faster than HTTP. If you have HTTPS on, you may as well serve your pages over SPDY to the browsers that support it.
The CRIME attack isn't just blindly attacking SSL/TLS like you rudely insinuated. CRIME specifically targets traffic over compressed SSL/TLS connections, using the DEFLATE compression to leak cookies. The simple fix is to disable SSL/TLS compression, which is easily done for HTTPS (and in fact should be disabled if you want PCI compliance), but compression cannot be disabled in SPDY, where header compression is part of the protocol.
Sorry I didn't categorically spell this out for you earlier; I forgot some people need spoon-feeding the facts about the technology they advocate, even after you've already cited a massively dumbed-down article on the subject.
(And the nasty tone of my post is a result of me getting fed up with the way you, and it seems everyone else, feel it is appropriate to talk to each other on HN. This place never used to be quite so rude.)
It's not quite that easy. If your ad network doesn't support SSL, you get mixed-content warnings. Same issue if you allow external images to be embedded (such as on a forum); you then have to host a caching proxy for external images.
Well, as long as you have the money. I know it might be trivial for some, but paying 20 dollars a month for Heroku SSL when you are running an app from a developing country (me, for instance) is a bit much.
The biggest barrier to HTTPS is the fact that it's a royal pain in the ass for someone who's not an ops guy to set up. Unless you're on a shared host or some sort of PaaS that sets your certificates up for you, it's basically a big "go fuck yourself" to get everything installed and configured properly.
Even the Mozilla article linked in the top comment asks you to choose a ciphersuite. Who the heck has time to know and understand what to use, and then figure out how to make it work on their own server?
Oh, the version of Apache you're running combined with the version of OpenSSL that comes pre-installed on your Linux distro causes "Re-negotiate handshake failed" errors? Good luck finding the answer to that on StackExchange.
For most folks that are just trying to ship something, unless security is a huge issue HTTPS doesn't come first because it's simply too much work. And if HTTP works out of the box, there's little incentive to turn on HTTPS.
I was able to get basic SSL working with basically no effort just by Googling around for 'SSL nginx'; the Mozilla article doesn't tell you to choose a ciphersuite, it tells you 'use this one unless you know what you're doing'. And then if you scroll down it gives you some configuration directives you can just copy-paste into your server config file.
I'm not even an ops guy and I was able to get it working for my (granted, fairly simple) site without any pain.
I have dozens of unique domain names to secure and growing. The cost of buying an individual cert for each one is prohibitive. What can be done instead?
StartSSL will let you do a multi-domain wildcard cert for $75 for 2 years (with identity verification). You toss all of your root domains onto a single cert as wildcards and use it everywhere. The cert lasts for 2 years and can be updated at any time to add new wildcard domains. You could in theory get 4 years out of it by adding a domain on the last day of your identity-verification 2-year cycle, which refreshes the 2 years on the cert.
I use this method and it is awesome. One big multi-domain wildcard cert for everything.
I believe you need to buy the identity cert first ($60/year) and then the org cert ($60/year), so more like $120 a year if you want the org cert, but still a good deal.
StartSSL seem technically competent, but at $100+ per year they're no longer the cheapest, and at that price I don't have to put up with their dodgy-as-hell website.
Edit: I take that back. The above link is deceptive: the price charged is for the default number of domains that come with the package, and any additional domains cost extra. This is such a headache already. Let's not devolve this thread into price comparisons and end it here.
Yup; shopping around isn't coming up with any actual competitors for StartSSL. Would love to know if there are any. Selling the boss on handing copies of a lot of personally identifying (sensitive) information to a company with that kind of site is going to be hard :( If only their site didn't look like it was 10 years old...
Sounds good, but StartCom's certificates don't appear to be recognized by Oracle's Java out of the box; this should be a show-stopper for anyone who is providing a Web API.
Except for anything that might actually involve liability. Put "shop" in the domain for which you want a free certificate and you will have it rejected by Startssl.com
Thanks for the link, it helped me get there. I followed the documentation and tutorials for Node.js/Express, which defined the cert and key. As your link suggested, mobile browsers are less lenient about missing intermediate CA information, so adding this solved it:
var express = require('express'),
    https = require('https'),
    fs = require('fs');

var app = express();

var options = {
    key: fs.readFileSync('cert/rsa.key'),
    cert: fs.readFileSync('cert/rsa.crt'),
    // The intermediate CA bundle; without it, desktop browsers are
    // forgiving, but mobile browsers reject the chain. Needed this.
    ca: fs.readFileSync('cert/sub.class1.server.ca.pem')
};

https.createServer(options, app).listen(443);
We host the service for our clients. Our clients choose and pay for their own domain name. The majority of our clients are not-for-profit.
It certainly wouldn't break the bank, but it is an additional cost of doing business. The question was a little sly: I was fishing for the solutions my peers used by posing a beginner's question. I hadn't heard of StartSSL before, so it has been a success.
A lot of e-commerce sites (Amazon, Flipkart) seem to use HTTP instead of HTTPS, even when you're logged in. These sessions can easily be hijacked. I assume this is because of the handshake latency of HTTPS. Is there no way around this latency to make your website feel faster? I imagine there isn't, because even Amazon uses HTTP.
Sure, and this is what HTTPS certificates from a CA are for. If your users are willing to click through the "warning: self-signed certificate" popups, then they're vulnerable, of course. But if they don't make that mistake, then your DNS result is reliable unless someone compromises the CA. Of course, CAs do get compromised.
IsTom's comment was about how an HTTP-served page might be modified to make the "secure" links actually point to a non-HTTPS fake login page (for example). This assumes the user will not notice that the connection is not secure (which I think is a fair assumption).
Given that, another attack might be to MITM DNS and serve an entirely fake Amazon site, all over HTTP, and the user will not notice there's anything wrong.
I think that's the point mro and troels were trying to make.
The only way I can imagine to mitigate this would be to use HSTS on the amazon.com home page.
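HSTS is just a response header; in Express, for example, it might look like this:

var express = require('express');
var app = express();

// Tell browsers to use HTTPS for this host (and subdomains) for the
// next year, before any plain-HTTP request is even attempted.
app.use(function (req, res, next) {
    res.setHeader('Strict-Transport-Security',
        'max-age=31536000; includeSubDomains');
    next();
});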
Can you explain more about how two sessions would work? I mean, if the hijacker hijacks the HTTP session, he can convert it to HTTPS by following the same steps the user does, since Amazon does not ask the user to reauthenticate on HTTPS pages.
You can set the secure flag when creating a cookie, which tells the browser to send it only over an HTTPS connection.
It is possible to use both schemes, but it is likely better to stick to all-SSL if possible, in case a developer error causes something to get exposed when it shouldn't be.
You have two cookies, one for HTTP and one for HTTPS. The latter uses the secure flag, so it is never sent over HTTP connections. When the user logs in, both are set.
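In Express, setting the pair at login time might look like this (the cookie names and the `createSessionId` helper are made up):

var express = require('express');
var app = express();

app.post('/login', function (req, res) {
    // ...authenticate the user first, then issue two session ids:
    var sessionId = createSessionId();       // hypothetical helper
    var secureSessionId = createSessionId(); // hypothetical helper
    res.cookie('session', sessionId, { httpOnly: true });
    res.cookie('session_secure', secureSessionId, {
        httpOnly: true,
        secure: true // the browser never sends this over plain HTTP
    });
    res.redirect('/');
});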
Most e-commerce sites do not deploy SSL site-wide, but only on login and checkout actions. Look at Amazon, Wayfair, etc. Depending on how the site was coded, you can mitigate session hijacking over HTTP. I'd love to see examples of Amazon session hijacking, because product browse pages are served plainly over HTTP; I think you'll find that it's not possible to hijack those sessions.
I've noticed this too. A lot of e-commerce sites use HTTP for browsing; I assume this is because of speed. However, I don't see why the session can't be hijacked. If I copy all of the cookies, how will the server differentiate the hijacker from the user? They both have the same cookies.
Cookies can be encrypted and changed with every page load, with sessions verified server-side. Session tokens can expire on every page load, or after x seconds, and be re-issued by the server.
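A rough sketch of per-request token rotation in Express (again with a hypothetical in-memory `sessions` store and Express 3.x's cookieParser):

var express = require('express');
var crypto = require('crypto');
var app = express();
app.use(express.cookieParser());

var sessions = {}; // hypothetical store: token -> session data

app.use(function (req, res, next) {
    var token = req.cookies.session;
    var data = token && sessions[token];
    if (data) {
        // Invalidate the old token and issue a fresh one, so a sniffed
        // token is only useful until the victim's next page load.
        delete sessions[token];
        var fresh = crypto.randomBytes(32).toString('hex');
        sessions[fresh] = data;
        res.cookie('session', fresh, { httpOnly: true });
        req.session = data;
    }
    next();
});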
I think you just need one HTTPS-only cookie (Amazon seems to have two) and just check that one (in addition to the others) at purchase time, since that one can't be stolen.
However, the sign-in button is served from an insecure page, so an ISP could MITM that and get your password anyway.
What about a CMS for, say, a blog? You want the administration console to be protected with HTTPS, but what about the main pages? Could I make admin.mydomain.com (and have the admin console on that) HTTPS-only, and mydomain.com HTTP?
I don't want to buy a cert for the site, but I'd be happy to create a self-signed cert for all the people who can access the admin.
> self-signed cert for all the people who can access the admin.
This is what I do. The browser generates a warning on first use, and I then put the domain in the "ignore" list (since I trust my own domain). After that, it works fine.
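Generating the self-signed cert is a one-liner with openssl (the file names and CN are placeholders):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj '/CN=admin.mydomain.com' \
    -keyout admin.key -out admin.crt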
IANASE, but that should work, so long as you're diligent about keeping everything that requires a login quarantined to a separate subdomain. Once you leak a cookie over to the public side, it's over.
I'm a little surprised such a basic article has made it to the front page of HN, especially considering a number of omissions, such as that cookies should be marked with a 'secure' flag when using HTTPS, or that weak ciphers should be explicitly disabled.
There's no "we're OK, I switched security to ON" in the security world.
Did we read the same article? The `secure` flag is the fifth of the "Key Points" at the top of the article, and the entire penultimate section is devoted to it.
Does anyone know if it would be completely safe to have a site (blog) with the public part over http:// on one domain (e.g. aa.com), and the administration part over https:// on another domain (e.g. bb.com)?
> Don't forget to disable TLS compression and HTTP compression for pages containing session ids or CSRF tokens.
Dumb question: is this general advice, or is it specific to Django due to the "BREACH" [1][2] HTTPS attack? It's not clear if the underlying flaw is in using a CSRF token at all, or something specific to Django's implementation. I had never heard this before, so thank you.
General. Don't use TLS compression or HTTP compression for any information you want to keep secret. BREACH and CRIME attacks exploit this.
You can still use, say, HTML minification.
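For the TLS side, in Node you can refuse TLS-level compression when creating the server (a sketch, reusing the cert paths from the Express example above):

var express = require('express');
var https = require('https');
var fs = require('fs');
var constants = require('constants');

var app = express();

https.createServer({
    key: fs.readFileSync('cert/rsa.key'),
    cert: fs.readFileSync('cert/rsa.crt'),
    // Refuse TLS-level compression entirely (CRIME mitigation).
    secureOptions: constants.SSL_OP_NO_COMPRESSION
}, app).listen(443);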
You can compress your JS and CSS as long as you make sure you aren't sending your confidential information in those requests/responses. A good way to do this is to set up a subdomain for media that never uses cookies.
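With the Express `compression` middleware, for example, you can choose what gets gzipped per request (the `/account` path here is hypothetical):

var express = require('express');
var compression = require('compression');
var app = express();

app.use(compression({
    filter: function (req, res) {
        // Never compress responses that mix per-session secrets (CSRF
        // tokens, session data) with attacker-influenced input.
        if (req.path.indexOf('/account') === 0) return false;
        return compression.filter(req, res); // default filter otherwise
    }
}));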
Thanks. We've been segregating authenticated vs non-authenticated traffic to different domains for a few years now (we had the same realization as moot), but I was unaware about this specific exploit related to TLS compression.
That said, it seems that on nginx TLS compression was not enabled by default, so we are OK (for this known vulnerability).
It would be better to define HTTPS by default, and plain HTTP for exceptional cases. I would use HTTP only for strictly public and eternally cacheable big files.