Tim Berners-Lee has an article titled 'Web Security - "HTTPS Everywhere" harmful'[1]. He argues not that encryption is bad (as the title might ambiguously suggest), but that using a different URI prefix (http vs. https) is harmful, because security then depends on every link carrying the right prefix. Instead, the http prefix should be kept, and the client should upgrade the connection to an encrypted, authenticated one in real time.
> A proposal then is to do HTTPS everywhere in the sense of the protocol but not the URI prefix. A browser gives the secure-looking user interface message, such as displaying the server certificate holder name above the document, only when the document has been fetched in an authenticated way over an encrypted channel. This can be done by upgrading the HTTP to include TLS in real time, or in future cases by just trying encrypted version first.
He also states
> It is the user's task to ensure that their interactions are secure.
meaning that users are trained to look for the 's', but browsers are hiding the URI prefix, making this difficult.
This pops up every time. I don't really get what he's trying to tell us here. If we forward HTTP requests to HTTPS and use HSTS on top we're basically doing what he wants, right?
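For concreteness, the redirect-plus-HSTS combination can be sketched in a few lines. This is a hypothetical, framework-free Python handler (the function and response shape are illustrative, not any real framework's API):

```python
# Sketch of the redirect-plus-HSTS pattern (illustrative only).

def handle_request(scheme: str, host: str, path: str) -> dict:
    """Return a minimal response dict for an incoming request."""
    if scheme == "http":
        # Permanently redirect the insecure request to the HTTPS URL.
        return {
            "status": 301,
            "headers": {"Location": f"https://{host}{path}"},
        }
    # Once on HTTPS, tell the browser to skip the insecure hop next time.
    return {
        "status": 200,
        "headers": {
            "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        },
    }

insecure = handle_request("http", "example.com", "/login")
secure = handle_request("https", "example.com", "/login")
```

Note that the very first visit still goes over plain HTTP, which is the window HSTS preload lists try to close.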
As I understand it, he's suggesting that HTTP should incorporate something equivalent to STARTTLS, as seen in some form in IMAP, POP3, FTP, LDAP, Postgres, and probably several others I'm forgetting.
Well, that gave really bad results in SMTP, IMAP, POP3, and LDAP. Postgres just doesn't suffer much because the client always knows whether to upgrade the connection or not.
Ideally, we should be using DANE or some other kind of signed broadcasted info. Telling the user what happened is just a very good workaround (that will stay useful even if we change to DANE).
> If we forward HTTP requests to HTTPS and use HSTS
Right, I think his point there was, the ecosystem has to make many changes to ensure the links are the "secure version": we create URLs with https, make browser changes to redirect (HSTS), use redirect in server software, etc.
With "just" HTTP, all URLs are uniform, but the protocol is modified upon connection to 'upgrade' the link to something secure, instead. HSTS maybe wouldn't be needed (or would take a slightly different form).
The protocol prefix is a technical implementation detail. It'd be quite a coincidence if it also happened to be the best user interface.
Browser developers aren't in a position to retrain users to their preferred ideology. Their job is to find ways to make the web easier to use.
And in fact they have: green URL bars, padlocks, etc. – these are UI metaphors that do a much better job than a one-letter change in a cryptic initialism from the '70s.
Even holier-than-thou professionals should profit from this: a green URL actually means the connection is protected, whereas it's quite possible to use https while getting no real encryption at all, or only completely insecure ciphers.
First, browsers added a padlock icon on the left for https:// sites. Then, phishing sites, and even banking websites, started using a padlock for their favicon, and users couldn't tell the difference. So it was abandoned.
Then that effect was removed, I think it scared some users, or they didn't know to look for it, I don't know.
Now, favicons are never shown in the URL bar so a padlock favicon can no longer mislead. Some browsers started hiding "http://", some started coloring "https://". (Now many do both.) The extended-validation green pill thing was added. It's harder than ever to allow a self-signed cert when you're using a service you set up yourself on your LAN.
Apparently there are more changes planned, every year or so. "users" have no chance of keeping this stuff straight, only "experts" like me and you do.
I think people learn to follow whatever their browser does, assuming they bother to notice in the first place.
I wouldn't worry as much about how to get people to notice security, browser vendors are on the right track by popping up giant notices for insecure sites and sites with malware.
They just need to extend the "untrusted download" concept from operating systems to email where you could have "untrusted link" warnings in your browser instead. "Are you sure you want to visit paypal.28753272.server? You really shouldn't trust this link..."
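A warning like that could start from a simple heuristic: flag hostnames that mention a well-known brand but don't belong to that brand's registered domain. This is a toy sketch only (the brand list and the naive two-label "registered domain" guess are illustrative assumptions; a real browser would need a public-suffix list and much more robust logic):

```python
# Toy heuristic for an "untrusted link" warning (illustrative sketch).

KNOWN_BRANDS = {"paypal": "paypal.com", "google": "google.com"}

def suspicious(hostname: str) -> bool:
    """Flag hostnames that mention a known brand but aren't that brand's domain."""
    labels = hostname.lower().split(".")
    registered = ".".join(labels[-2:])  # naive "registered domain" guess
    for brand, real_domain in KNOWN_BRANDS.items():
        if brand in labels and registered != real_domain:
            return True
    return False
```

So `paypal.28753272.server` would trip the warning, while `www.paypal.com` would not.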
You've clearly never worked with the 99% of people who use the internet who don't care about https, much less know what it is.
They don't understand that when someone says "go to google.com", nothing is wrong when the URL bar says "https://google.com". They think something is broken.
Heck, Safari by default doesn't even show you the path; it just shows the domain, because once people get to foo.com they don't understand paths, nor care about them.
> people who use the internet who don't care about https, much less know what it is.
While I agree, I do not believe this is an appropriate excuse for ignorance (I am not accusing you of anything, just criticizing the viewpoint itself).
Computers are part of life. People know about antibiotics, vaccines, stem cells, and black holes; they know that atomic radiation causes cancer and that the same material can be used for atomic bombs; they know about wavelengths and that certain ones (UV) cause cancer by breaking DNA; etc.
But not technology? How is there such well-formed ignorance when it comes to technology? It's fed to us by the media too, who cite "computer glitches"[1,2,3] when something goes wrong, or when a system is breached and all its credentials are stolen.
And not understanding URLs leaves people susceptible to having their information stolen in phishing attacks. Whose "fault" is it then?
Those "basic signals of the web" are useless to the user. They carry no meaningful information for them.
You don't need to learn the Dewey decimal system to read books, either, and for the same reason: it's helpful for finding and classifying books, but it doesn't matter to readers at all, since there are better ways to find books now.
Because people like you, and "UI/UX experts", have put so much effort into downplaying and hiding simple important pieces of information. This shit is as easy, and as useful, as traffic signs.
Am I missing something, or is this suggestion trivially MitM-able? If an attacker strips the upgrade from the request, neither side will know something is missing.
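The strippability is easy to see in a toy model: the client advertises an upgrade via a header, an active attacker silently deletes it, and the server, seeing an ordinary plain request, answers in plaintext. Nothing fails on either end. A hypothetical sketch (header names mimic the RFC 2817 style, but this is illustrative only):

```python
# Toy model of why a silent, in-band upgrade is strippable.

def client_request() -> dict:
    # The client offers to upgrade, but must still send a valid plain request.
    return {"Host": "example.com", "Upgrade": "TLS/1.0", "Connection": "Upgrade"}

def mitm(headers: dict) -> dict:
    # An active attacker simply drops the offer in transit.
    return {k: v for k, v in headers.items()
            if k.lower() not in ("upgrade", "connection")}

def server_response(headers: dict) -> str:
    # The server never saw the offer, so it answers in plaintext as usual.
    return "101 switch-to-tls" if "Upgrade" in headers else "200 plaintext"

seen_by_server = mitm(client_request())
result = server_response(seen_by_server)
```

This is exactly why HSTS (or some other out-of-band signal) is needed: something the attacker can't strip has to tell the client that encryption was expected.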
So the suggestion is that in 1994 (when the https protocol was introduced) we should have put big scary warnings on every non-ssl site? I don't see how that could've gone well. It's now 22 years later and people still aren't ready for that.
Is there a good reason why squarespace can't enable https at least for those who "buy" their domain names from squarespace as well? It shouldn't cost squarespace any significant amount of money, should it?
Hey all -- it's actively being worked on and will arrive soon. As someone on this thread correctly pointed out, it's a complex implementation due to how our infrastructure is set up.
My understanding (I talked to them about it a year ago) was they have to do some work on making a big SNI-based load balancer that works with their current network setup.
Of course, but I don't think they're using a generic LB / terminator and while they could move to one, it'd still take a long time to migrate on their scale
Depends on how they've built their stack. At a guess I'd say if it was easy for them to do it, they would already have done it - at least offering it as a premium upgrade.
If they're both the host and the domain registrar, couldn't they simply point dns to a different set of servers where they host the https keys and then send traffic to the real server which hosts the websites? Is it a performance issue?
I have no idea why they haven't done it, and agree that for some stacks it would be easy. But there are some stacks where it would be really difficult because of small details.
Firefox already adds an (i) icon (with a "Connection is Not Secure" note in the popup) for all http:// pages plus a red icon for http:// pages with a password field.
Quick plug for Dreamhost, who have partnered with LetsEncrypt and offer free SSL certificates with their standard web hosting product, easily setup from the control panel: https://www.dreamhost.com/hosting/ssl-tls-certificates/
This is despite the fact they previously made money from selling certificates, and still will take your money for doing so if you want.
(Not an employee, just a satisfied customer who now has free HTTPS hosting etc..)
Thanks for that, I've been on commodity hosting for a while (site5, actually been very happy with them), but they don't offer lets-encrypt, so this is quite attractive.
Think that's part of it - a substantial amount of the web is on shared hosting (and probably dictated by other areas of the business) and as such won't have any level of root access. Therefore the hosting company can control cert installation, and thus cost.
Usually add-ons are administered by the hosting company though (?) - so if they want to own the cert purchase/installation flow, then they can certainly do that.
If you have a shell and a public IP address, you can get a certificate. Root access is only required for HTTP authentication; you can also authenticate to LE using DNS. I actually just learned about the DNS option.
Check out the lego project; it makes DNS auth very easy :)
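Under the hood, the DNS-01 challenge boils down to publishing a hash in a TXT record. Per RFC 8555, the TXT value is the base64url-encoded SHA-256 of the key authorization (the challenge token joined to your ACME account key's JWK thumbprint). A rough sketch:

```python
import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint: str) -> str:
    """Compute the _acme-challenge TXT record value per RFC 8555 (sketch).

    `token` comes from the ACME server; `jwk_thumbprint` is the
    base64url SHA-256 thumbprint of your account key (RFC 7638).
    """
    key_authorization = f"{token}.{jwk_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # base64url without padding, as ACME requires
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Illustrative values only -- not a real token or thumbprint.
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
```

Tools like lego automate exactly this: compute the value, publish it via your DNS provider's API, then ask the CA to validate.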
These hosting companies are already doing enough things "wrong" that if "well, everybody just stop using them" were going to be a viable strategy it would have worked by now for those other things.
We currently do not have the power to change the behavior or the market share of these hosting companies in any significant way. That leaves working around their behavior as the option.
Who said anything about doing it to change their behaviour? Do it because you aren't getting value for money! If there are better alternatives, then use them.
Not if you have user-generated subdomains, or have subdomains that you don't want to advertise in subjectAltName, or simply just want the full power you used to have with HTTP to use any subdomain you want -- then you'll need a wildcard certificate, which Let's Encrypt won't offer you. Nor will any other free CA, except for CAcert, which browsers will not trust.
The absolute cheapest I've found one of those for is $95 a year. So basically, more than all of my server hosting, my domain hosting, and my domain privacy combined, all to sign my CSR that has a one-character change in it.
Of course, if you want both yourdomain.com and www.yourdomain.com (because just *.yourdomain.com won't match the former), then that will, of course, cost extra. A lot extra.
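That base-domain quirk follows from how wildcard matching is defined (see RFC 6125): the `*` covers exactly one left-most DNS label, so `*.yourdomain.com` matches `www.yourdomain.com` but neither `yourdomain.com` nor `a.b.yourdomain.com`. A minimal sketch of that rule (simplified; real implementations handle partial-label wildcards, IDNs, and more):

```python
def wildcard_matches(hostname: str, pattern: str) -> bool:
    """Simplified RFC 6125-style wildcard match: '*' covers exactly
    one left-most DNS label (a sketch, not a full implementation)."""
    host_labels = hostname.lower().split(".")
    pat_labels = pattern.lower().split(".")
    if len(host_labels) != len(pat_labels):
        return False  # wildcard never spans multiple labels
    if pat_labels[0] == "*":
        return host_labels[1:] == pat_labels[1:]
    return host_labels == pat_labels
```

Hence the common practice of issuing one cert with both `yourdomain.com` and `*.yourdomain.com` in the subjectAltName list.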
I've heard it's possible, but it would be a substantially increased maintenance burden. You won't be able to put all your subdomains into the subjectAltName section of one certificate, and so you'll need to constantly reload your nginx/Apache configurations to reference all of the different certificates. Plus Let's Encrypt certificates expire every 90 days, and you will have to update all of your certificates before that happens.
I really can't for the life of me understand why they won't offer a wildcard certificate, if you prove you are the owner of the base domain name. The cynic in me expects it's the same reason every CA in existence charges four to ten times more for a wildcard certificate that literally costs them nothing more to make. That entire market would vanish overnight if Let's Encrypt offered them for free. I would even immediately start using them for my site.
I suspect it's both a policy question and the fact that there's not really much of a consensus on wildcard validation in ACME (i.e. how to approach it, and whether it should be a topic for ACME or just a policy decision for the CA).
In terms of policy, wildcards encourage some users to both reuse the same certificate for multiple services (i.e. mail, website, api, etc.), and use certificates that are "broader" than needed (use wildcards everywhere because they're "easier", despite the fact that you only need a certificate for imap.example.com). This increases the impact of Heartbleed-like vulnerabilities significantly (in that unrelated services using the same key are suddenly all vulnerable to MitM attacks). It might not be the worst idea to give the ecosystem some time to get used to non-wildcard certificates in order to discourage that behaviour.
I think there's a good chance we'll see wildcard support sooner or later.
I would welcome key usage constraints to limit a wildcard cert to https only.
But yeah I do hope you're right. I'll switch my domain off self-signing the moment a trusted CA offers a free wildcard cert with elliptic curve signing.
Cloudflare works at the DNS/routing level. You can use their layer to communicate via HTTPS with the browser. The connection between your site and Cloudflare won't be encrypted... which is a bit of an antipattern (as discussed elsewhere).
I once tried to do Let's Encrypt or any free SSL with my namecheap domain and apparently you can't. Is that true? I don't really want to change domain name companies because I'm lazy and it's probably time consuming to get right... can I get free HTTPS if I host my site on Netlify and just point my URL there?
All I use it for is my Jekyll-powered personal site on GitHub pages. I don't mind not using GitHub for it anymore, just want it to work with SSL.
In order to do this, you'd need to put an SSL endpoint between your github page and the client. It could probably be done but it sounds like a no-no.
For instance, I'll bet that if you set it up so that your namecheap DNS A entry was pointed to your own box that had nginx/haproxy/cloudflare/whatever handling SSL decryption (and certs) and then backended to your github pages, it would work, but I'm not a fan of the idea.
GitHub doesn't support custom SSL for pages. A workaround is to change your DNS provider (which is probably Namecheap right now) to Cloudflare and use Cloudflare's free SSL (so it essentially will be an SSL proxy).
You can do this while still maintaining Namecheap as your registrar, and it's totally free.
Domain registrar has nothing to do with the hosting. As long as you can point a domain to an IP address you can get a certificate.
Namecheap even goes a step further and has an API for domain validation (makes LE certificate authorization easier) so they are very friendly in regards to Let's Encrypt.
Let's hope that the pressure is high enough for these hosts to rethink their strategy. If your website gets a visible warning and search engines push it down just because your host wants more money for a certificate you can get anywhere else for free, you are likely to switch.
This is a common reaction I see from a lot of people. Tech is hard, but the truth is there are a lot of very smart people working on making it easier for everyone.
When something many people/companies do/use/have feels hard or out of reach, I think it's important to start asking different questions and look around for real solutions.
The blog post is dated September 2, 2016, and it claims:
> Today Chrome's stable channel was updated with a new HTTPS UI.
But I remember noticing these changes at least a week ago, if not longer. The Chrome stable channel link he used supports this, as it's dated July 20, 2016.
Yeah, I agree. For the record, I'm on Linux and I noticed the change yesterday. According to my package manager's log, I upgraded Chrome from 52 to 53 yesterday around 10am (8am UTC).
I'm on Chrome 52 on Mac and I don't see the "i" icon when using plain http. All I see is a very plain looking 'blank page' icon to the left of the URL.
Regardless of your views on the CA system, trusted certificates issued by a CA are designed to offer both encryption and authentication. An untrusted certificate, however, provides only encryption. I expect browsers treat unsigned certificates as bad for a few major reasons:
* Encryption without authentication (while slightly better than plain HTTP) is near useless in the real world. Without the authentication that CA-trusted certificates provide, you have no guarantee that the host you are communicating with is the real site and not a MitM attacker; all the certificate stops is passive eavesdropping.
* The SSH system of connecting to a host once and saving the certificate is unworkable on the larger web. You do not and can not know the certificate of any random site you visit, the CA system solves this problem.
* Browser vendors and CAs actively want websites to be using CA trusted certificates. If browsers accepted untrusted certificates a larger number of sites would use them, as opposed to CA trusted certificates.
* Unlike HTTP, a minuscule number of websites use unsigned certificates, and as a result it is safe to penalise sites using them. The end game for browser vendors is that every site uses a CA-trusted certificate; this update to Chrome further shows their intent to deprecate HTTP.
I agree that TLS has little value if you can't verify who you're talking to. But at the very least, it's no worse than unencrypted HTTP. So it doesn't make sense to me that browsers are more stern about untrusted certs.
It's treated as worse because if people get used to clicking "accept" for self signed then they'll do the same when they get MITM'd. Want an example? Any error dialogue.
Fair point - if the browser is displaying the protocol in the URL bar that is. I believe Edge and Safari don't. Chrome and Firefox seem to display https:// but not http://.
When Chrome visits a site with an untrusted CA, it already displays https:// in red with a strikethrough, which seems like a perfectly adequate way to negate any expectation of security. So the scary warning that takes two clicks to pass seems like overkill.
If I was making a browser, I'd just show a "secure" symbol for valid, trusted certs, an "insecure" symbol for anything else, and no protocols in the URL bar.
If only we had a secure DNS system (let's hypothetically call it "DNSSEC"), where you could put your certificate hashes into a TLSA record (to make up a name for it, how about "DANE"?) ... but surely if such a thing existed, and for several years at that, browsers would support it, right?
They would support it. Very briefly. They would quickly discover that it doesn't work well in practice, simply due to spotty DNS connectivity for end systems, and so every TLS site would need certificates anyways, and DNSSEC+DANE would just end up being another set of CAs. Then they would remove support from the browser.
It's easy to predict a future that's already happened!
Fun fact: if you have a .COM name, guess who becomes your de facto CA in an all DNSSEC+DANE world? You guessed it: Verisign!
I'm aware of your opposition to DNSSEC+DANE from previous posts here. Like others, I'll agree it's not perfect. Nothing is.
Verisign may sign the .com TLD, but that's fine: there are lots of registration providers that offer free DNSSEC support (e.g. gkg.net). Once you have that, you can convey your certificate hash securely, and then you no longer have to pay some racketeering jackass company $100-600 a year just to get "*.yourdomain.com" in the common name field.
The current reality with DNS is that it's the wild west. All this emphasis on security and yet a DNS provider is entirely unencrypted and hijackable by anyone to send you to an alternate domain. If you haven't visited the domain before, even HSTS/HPKP won't help you.
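In DANE terms, conveying the hash means publishing a TLSA record (usage, selector, and matching-type fields plus the data); for matching type 1, the client just compares a SHA-256 digest. A rough sketch, with made-up certificate bytes, using selector 0 (full certificate) rather than the SPKI for simplicity:

```python
import hashlib

def tlsa_matches(cert_der: bytes, record_data_hex: str) -> bool:
    """Check a presented certificate against a TLSA record using
    matching type 1 (SHA-256) and selector 0 (full certificate).
    A sketch only; real DANE also handles SPKI selectors, usage
    fields, and DNSSEC validation of the record itself."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return digest == record_data_hex.lower()

# Made-up certificate bytes for illustration.
cert = b"-----fake der bytes-----"
published = hashlib.sha256(cert).hexdigest()  # what the owner puts in DNS
```

The security of the comparison, of course, rests entirely on the DNS answer itself being DNSSEC-validated.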
The problem isn't that you have to pay to get DNSSEC signatures. The problem is that you have to trust the same entities as you did before, PLUS a bunch of new ones.
Why are we beating around the bush with this? DNSSEC is, on its face, a key escrow scheme. Why would we even consider adopting it?
If the certificate is expired or belongs to a different site, or if the cert chain is clearly broken, then sure, warn the user. But if it just chains to a root you don't recognize, then there's no evidence that anything is really wrong. Just show the page, and use the plain icons (no locks or anything).
The issue with that approach is that it allows man-in-the-middle attacks, because all of the characteristics you are suggesting the browser validate can be faked if the root is untrusted.
I think you're considering only a single request, and not the larger context of how people interact with systems over time.
Imagine that today I visit https://example.com and I see a certificate from a CA that I trust. Everything is good: my browser shows a lock icon. Tomorrow (or a year from now), I visit again, and a malicious state actor intercepts the TCP session and returns a certificate that's self-signed or from a CA I don't trust. What should the user experience be?
It definitely shouldn't be exactly the same as the trusted cert from example.com! It shouldn't be trusted at all. The behavior where the untrusted certificate shows up as "bad" protects me from MITM -- the difference between trusted and untrusted is the protection. Even though self-signed certs are in some sense better than plaintext, treating them that way will compromise the security of path-validated certs: any degree of trust creates a risk that they'll be substituted for CA-signed certs and trick a user into continuing.
In a world where there are only plaintext and self-signed certs, I'd agree that self-signed are better and should be allowed. But in a world with path-validated certs, accepting and trusting self-signed certs has downsides. HSTS helps with this but isn't a complete solution.
That's why I said: Just show the page, and use the plain icons (no locks or anything). It should not have any signs of "trust" and it should look like a plaintext page. I could go further and say that it should be treated as HTTP for all other purposes as well, like access to cookies and cross-site requests.
If you really think sites should never be MITM-able, then you should complain just as loudly on a plaintext site as you do on an untrusted root.
What if it just showed all HTTP and self-signed HTTPS site address bars with a light red background, instead? It would immediately jump out at you on Gmail that something was wrong. Chrome wants to eventually indicate all HTTP sites are insecure, so this is a good way to achieve their goal, is it not?
It doesn't matter whether you believe their actions are altruistic (protect everyone!) or sinister (protect the CAs!) ... by treating self-signed certificates as worse than nothing, they send a pretty clear signal to independents that it's the latter. Treating them both the same would be a great step toward building confidence to those who don't trust their actions currently.
I think this is the first time in this thread someone actually sat down and explained why it's worse instead of just waving it off. In fact, until now, I was even on the self cert side of the debate because the arguments seemed more thought out, but this comment alone essentially changed my viewpoint on this issue.
Just for my own sanity, could I get you to confirm that you'd like the browser to freak out equally on a plaintext site as one with an untrusted root? (Even without HSTS, which would make some noise for gmail in particular.)
A self-signed certificate for a site that I expect to be delivered via HTTPS (either because I typed "https://", or clicked a link that starts with "https://", or because I opened a bookmark that I know to be secure) is definitely more likely to indicate a MitM attack, yes.
If you're arguing for opportunistic encryption for "http://" links, I'm fine with that, but if I'm expecting HTTPS, I want either a trusted certificate or a big, red warning that's hard (or impossible) to bypass. I'm also not convinced that opportunistic encryption is worth the effort nowadays, given that the price of a certificate is zero in all but the most obscure of use-cases.
I think the argument here is that, say, somebody's random blog or forum for their local group or whatever doesn't really need to worry about well-equipped state actors or the mafia or whoever MITM-ing them (and wouldn't be able to defend that anyway if they really became a target).
They do care about someone casually sniffing the password to the blog's admin interface when they post from a coffeeshop, though. But for years -- it's only recently that viable zero-dollar browser-trusted certs became a possibility -- we told those people "screw you, you're so dangerous for wanting that, we have to produce the loudest, nastiest, most terror-inducing popup warnings we can come up with just to slap on your disgustingly evil site and shame you into paying hundreds of dollars to a corrupt cartel".
Which confuses the poor schmuck who just wants that little bit of security against that particular, but far more common than MITM, threat model. De-conflating the two concerns -- inability to eavesdrop, and inability to impersonate -- might not be such a bad idea for those people.
Right, but doing this for HTTPS URLs would change the semantics of those URLs. People expect HTTPS to mean you either have a trusted CA certificate or you see a big, red warning. Changing these semantics would be terrible for security.
You can argue for opportunistic encryption in HTTP, but I'd argue that it's cheaper and easier for most affected parties if browser vendors just help push the price of publicly-trusted certificates to zero, which is exactly what Mozilla and Chrome (among others) did. Rolling out a new protocol would've taken years as well, so I don't think we've lost much time choosing this approach.
> People expect HTTPS to mean you either have a trusted CA certificate or you see a big, red warning. Changing these semantics would be terrible for security.
No. You and some other folks on HN expect that.
The general public does not know or care that HTTPS conflates multiple issues. Some random person who just wants to have a secure way to hit the login page of their blog does not know or care. There's room for a spectrum of things and ways to indicate where on that spectrum a particular site is, but HTTPS has been forced to be the one-size-fits-every-entity-in-the-universe solution and so has bundled in things that just aren't relevant to a lot of use cases and made it a binary "either you implement 100% of this 100% of the time or else" proposition when that makes no sense to do.
Hence the huge warning page in Chrome warning about certificate errors. In the corporation I work for it has stopped a number of very nasty sites from compromising our security.
If I'm being phished by organized crime, I'd still rather the NSA not also be able to listen in.
The blinkered refusal of parts of the security community to understand this is appalling. An untrusted certificate means I know I am communicating with one and exactly one agent, whose identity I may later negotiate by other means.
> The blinkered refusal of parts of the security community to understand this is appalling. An untrusted certificate means I know I am communicating with one and exactly one agent, whose identity I may later negotiate by other means.
All you're doing, then, is attempting to replace a heavily vetted, non-MITMable identity-verification scheme with your own ad-hoc version of the same, and what the security community understands is that you're incredibly likely to fuck this up in a way that makes it trivial for your attacker to forge their identity.
Any of 2,000+ root and intermediary CAs can sign a certificate for any domain name. "That won't happen", except it did: we know of at least DigiNotar and MCS Holdings.
Any hostile government can compel a CA to issue a rogue certificate, and then force them to not disclose that this happened. Our browsers today trust such certificates from basically every country on earth.
Name Constraints would help protect against this, except that no browser implements them. Probably because it'd also make self-signed certificates a lot safer in the process (much easier to trust a self-signed CA that can only validate one domain name.)
DNSSEC+DANE would also help, but again, that'd also reduce reliance on the CA system, so don't expect that to ever happen.
Plus, if you're using a work machine, your IT staff can install a certificate that Chrome and Firefox helpfully refuse to accept HPKP on and will allow said certificate to validate any domain it wants to. The same goes for any malware that stealthily installs a trusted CA for you. And 99.9% of users don't even know where their CA store is, let alone which ones shouldn't be there.
"We don't want anyone spying on you! ... unless it's your school, workplace, or laptop vendor (eg Lenovo.)"
> Any of 2,000+ root and intermediary CAs can sign a certificate for any domain name. "That won't happen", except it did: we know of at least DigiNotar and MCS Holdings.
> Any hostile government can compel a CA to issue a rogue certificate, and then force them to not disclose that this happened. Our browsers today trust such certificates from basically every country on earth.
There's a massive difference between "anyone can MitM this connection" and "a small number of nation-state actors might be able to MitM this connection if you do not implement HPKP." A system that prevents such mis-issuance events from staying secret (Certificate Transparency) is also being worked on and is picking up speed.
> Name Constraints would help protect against this, except that no browser implements them. Probably because it'd also make self-signed certificates a lot safer in the process (much easier to trust a self-signed CA that can only validate one domain name.)
AFAIK all browsers except Safari support name constraints.
> Plus, if you're using a work machine, your IT staff can install a certificate that Chrome and Firefox helpfully refuse to accept HPKP on and will allow said certificate to validate any domain it wants to. The same goes for any malware that stealthily installs a trusted CA for you. And 99.9% of users don't even know where their CA store is, let alone which ones shouldn't be there.
This argument doesn't make any sense - if someone is capable of installing a trusted root certificate on your device, it's over and any enforcement by the browser could easily be bypassed.
> A system that prevents such mis-issuance events from staying secret (Certificate Transparency) is also being worked on and is picking up speed.
So what's next? Browsers begin refusing all connections to sites which aren't pinning their keys and using a CA doing transparency? How much of the web do we have to shut off and force into ludicrously-insecure plaintext connections before people start to get that maybe there's room for a spectrum of options here?
I don't think anyone has indicated that there are any plans to enforce key pinning (which is completely separate from Certificate Transparency).
Certificate Transparency is required for some CAs (with known mis-issuance events) in Chrome, and I expect it will become mandatory for all publicly-trusted CAs within the next couple of years. I doubt this would affect custom (internal) CAs.
I'm not sure I understand the rest of your argument.
> There's a massive difference between "anyone can MitM this connection" and "a small number of nation-state actors might be able to MitM this connection if you do not implement HPKP."
I agree completely. But the parent poster said "non-MITMable", which was false. It's more difficult, but the CA system has quite a lot of flaws. Even HPKP isn't a total solution: there's still the initial connection. I don't think there's a built-in browser preload list for HPKP like there is for HSTS yet. DANE would be the better option here -- we could reduce the number of trusted authorities from 2,000+ down to ~5 or so. At least enough that we don't have to trust CNNIC when we reside in the US.
Further, HPKP scares me. It requires you to keep a secondary key, but if something happens like a fire, you could lose both keys. Not everyone is going to make duplicate backups of certificate keys and store them in bank vaults or doubly encrypted cloud storage locations or something. Especially not for small-time personal websites. Combine it with HSTS and really long max-age values and someone can get themselves into a world of pain quite easily.
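The backup-key requirement is visible in the header format itself: RFC 7469 requires at least two pins, each a base64-encoded SHA-256 of a public key, so you must generate and safeguard a spare key from day one. A sketch of building the header from two made-up key blobs (extracting a real SubjectPublicKeyInfo from a certificate would need a crypto library):

```python
import base64
import hashlib

def pin_sha256(spki_der: bytes) -> str:
    """base64(SHA-256(SubjectPublicKeyInfo)) per RFC 7469 (sketch)."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def hpkp_header(active_key: bytes, backup_key: bytes, max_age: int) -> str:
    # RFC 7469 mandates a backup pin: lose both keys and the site is
    # unreachable for pinned visitors until max-age expires.
    return (f'pin-sha256="{pin_sha256(active_key)}"; '
            f'pin-sha256="{pin_sha256(backup_key)}"; '
            f"max-age={max_age}")

# Made-up SPKI bytes for illustration.
header = hpkp_header(b"fake-active-spki", b"fake-backup-spki", 5184000)
```

Which is exactly the failure mode described above: with a long max-age and both keys gone, returning visitors are locked out until the pins expire.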
> AFAIK all browsers except Safari support name constraints.
Oh, that's encouraging! When I read up on it, I found "The Name Constraints extension is mostly unsupported by existing implementations of SSL. They are likely to ignore the extension." And I know it tends to take browsers several years to support anything related to security (see Firefox's open bug for Curve25519 for the past three years; how many years it took us to get AES256-GCM cipher support [still not in Firefox 48 but will be in 50]; etc.)
So then, perhaps we can petition browser makers to allow self-signed certificate installations to be less terrifying/complicated when they are name constrained to just the domain in question? Yeah, didn't think so.
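For what it's worth, generating such a name-constrained self-signed certificate is a one-liner with OpenSSL 1.1.1 or later (`example.com` is a stand-in for the domain in question); whether browsers then treat installing it any less terrifyingly is exactly the open question:

```shell
# Self-signed certificate whose signing power is constrained to one domain.
# Requires OpenSSL >= 1.1.1 for the -addext flag.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout example.key -out example.crt \
    -subj "/CN=example.com" \
    -addext "nameConstraints=critical,permitted;DNS:example.com"
```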
> This argument doesn't make any sense - if someone is capable of installing a trusted root certificate on your device, it's over and any enforcement by the browser could easily be bypassed.
You're right that once someone has administrator access, it's game over. But you could at least make their lives more difficult: force their hand into compiling a hacked-up version of your browser that bypasses HPKP. That would discourage a lot of casual workplace snooping (you'd have to block users from installing browsers/browser updates, too).
The real objection I have is that right now, browsers say, "yes this connection is totally safe and secure!", even though it's actually not. You're being MiTM'ed by your employer/school/etc. If the fear of self-signed certificates is a false sense of security, then surely this is much worse since you even see the green lock.
There are plenty of stories about employers snooping on SSL traffic, and their employees being stunned/unaware their employers are doing it.
> Further, HPKP scares me. It requires you to keep a secondary key, but if something happens like a fire, you could lose both keys.
You're free to outsource the risk of losing key access by pinning to a small number of CAs you trust (and possibly have a business relationship with) instead of keys under your control. You're already willing to trust at least a small number of organizations for DANE, so you might as well do the same for HPKP.
> So then, perhaps we can petition browser makers to allow self-signed certificate installations to be less terrifying/complicated when they are name constrained to just the domain in question? Yeah, didn't think so.
How is it complicated? Firefox has a simple UI where you can import certificate files in the trust store (I just counted - it's < 10 clicks). Other browsers typically use the OS trust store, to which you can add certificates by just opening a pem/pfx file and following a few steps in a wizard. For the group of users that I'd trust to make reasonable decisions when they're facing a certificate warning, you can't honestly tell me that this would be an obstacle.
> But you could at least make their lives more difficult: force their hand into compiling a hacked up version of your browser that bypasses HPKP.
Or in other words, give users a false sense of privacy, which you're never going to get on a device that's controlled by someone else.
You're also ignoring the fact that this could be a hindrance to HPKP deployment, as users that visit sites that deploy HPKP would see warnings and possibly be scared off, meaning the site might lose visitors initially. That's typically a good way to make sure no one adopts a protocol. I'd rather have a protocol that does not attempt to solve an impossible problem than one that doesn't see any real-life usage.
If I'm on an unsecured wireless network at the local Starbucks, untrusted TLS means I know nobody else in the coffee shop is snooping on it or injecting anything into it.
Untrusted TLS does not mean that nobody else in the coffee shop is snooping on it.
That Starbucks access point you've connected to might actually be the Pineapple belonging to the guy at the corner table, and he's injecting his own untrusted SSL certs into your HTTPS traffic and snooping/injecting into all your traffic.
EDIT - and to clarify what others have said: HTTPS implies a trusted connection (unlike plain HTTP). And a self-signed certificate cannot be trusted under any circumstances.
A self-signed certificate can be trusted when my client recognizes it as one that I already trust. E.g., maybe it's for a server in my home, to which I have previously connected via my wired home network.
This part is wrong, sorry. An attacker can sign his own certificate for your domain, and send you that. Then he can communicate with the real site on your behalf.
Your only protection would be if you pre-installed said self-signed certificate before going to the coffee shop. Your warning would be that the certificate was suddenly untrusted again.
It also wouldn't prevent an attack by someone that could forge a certificate from a trusted intermediate CA. HPKP could help there; but if you have someone like that targeting you in a coffee shop, you have much bigger problems.
The important thing though is that this is still not worse than plain HTTP. And in fact, it's better: it makes the attacker's job a lot more difficult. Especially when done at large scale (eg ISP ad injection.)
Doesn't it just mean that they'd have to MITM you to snoop or inject? It's better, since it raises the bar some, but it doesn't really change what's possible there.
It's not strictly better -- it gives a false sense of security. As you plainly showed by believing that having https at a Starbucks with an untrusted cert is safe, which it is clearly not.
Security involves both a technical and a social aspect. While the unsigned cert may offer better technical security (though you have no way to know), it most certainly makes the social aspect worse, by creating a false sense of security.
So I'd say you're wrong -- it is strictly worse, because it offers no better security but a worse sense of security.
For non-CA certs, you either manually verify hashes or install the self-signed root cert. So no, the attacker would at minimum have to find a second preimage for the certificate's SHA-1 hash; otherwise the client will know they're being MitM'ed.
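The manual hash-verification step above can be sketched as follows (a sketch only: `fingerprint` and `pin_matches` are hypothetical helper names, and this uses SHA-256 rather than SHA-1; a real client would feed in the DER-encoded certificate captured from the TLS handshake):

```python
import hashlib
import hmac

def fingerprint(der_cert: bytes) -> str:
    """Return the SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_matches(der_cert: bytes, pinned: str) -> bool:
    """Compare the presented certificate against an out-of-band pin.

    A MitM substituting their own certificate would need a second
    preimage of the pinned hash to pass this check.
    """
    return hmac.compare_digest(fingerprint(der_cert), pinned)
```

The pin itself has to be recorded out-of-band (e.g. noted down when the cert was generated) -- the scheme is only as good as that first exchange.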
So, how severe should warnings be for untrusted certificates and for plaintext?
For untrusted certificates, the answer is clear: very severe. If https://www.facebook.com suddenly has an untrusted certificate, it is almost certainly a case of MITM. The typical end user is not in a position to make an informed decision about trusting it anyway "because it looks right", so the page should just be blocked. In current browsers, bypassing these warnings is very cumbersome. And frankly, making the warnings less conspicuous would be irresponsible.
Now, you may argue that plaintext is even worse than an untrusted certificate. But whether we like it or not, in today's internet a browser cannot just block all plaintext connections. Making plaintext warnings as conspicuous as untrusted certificate warnings is unrealistic.
That said, there are other steps browsers could take to keep pushing https. Browsers could warn when transmitting passwords unencrypted. And, I don't know if it goes too far, but perhaps browsers could even deprecate persistent cookies for unencrypted connections.
An untrusted cert is strictly no different than plaintext. Replacing "a plaintext exchange with your attacker" with "an encrypted but unauthenticated exchange with your attacker" is rearranging deckchairs on the Titanic after it's hit the ocean floor.
You know that no party other than the sender has read or altered the resource in transit.
Do you really not see the difference there?
I have a blog. If I link to it here with HTTPS, you still have no idea who I am. A third party's assurance that I am who I say I am doesn't help you, because you still don't know who I am. What you care to know is that nobody has read or altered the contents of the transmission (yes: it's possible you're being phished, but that was true before -- now you know this phishing attempt is not also being eavesdropped or itself altered). Which untrusted SSL provides.
Why people ignore such an obvious point is beyond me.
I think the point is if a cert is suddenly from an untrusted source it's a sign of malicious activity, hence the increased negative feedback to the user relative to an insecure connection.
Granted this opens up attacks like ssl stripping, https://github.com/moxie0/sslstrip, but certificate pinning is making that more difficult.
It can be eavesdropped or altered by MITMing the connection and substituting a different untrusted certificate. With trusted certs, you have a reasonably strong assurance that you're talking to the domain owner, even though you may not know who that is.
CloudFlare's free Flexible SSL plan powers a lot of the internet -- do you know you're talking to the owner's server? No. You don't even know whether CloudFlare is transmitting everything in plain text after reaching them. Most of the time, it probably is.
HTTPS in any form would not stop Cloudflare from doing what you describe.
Think about it: an encryption system that stops an end-point server from passing decrypted data somewhere else is basically impossible. It would mean finding a DRM scheme that actually worked, and we know that's impossible because of the analog hole.
The only way to stop this would be for the end-point server to be able to manipulate the data without ever decrypting it. That's called homomorphic encryption, and it's still only theoretical for practical purposes.
You have no idea who I am. "Bandrami" is not a meaningful string to you.
What does Thawte's assurance that I am who I claim to be add to this situation? You shouldn't trust your MITM'er any more than you should trust a verified me.
It means that if I'm at Starbucks and I want to read "bandrami's" blog, I know that the guy in the corner with the Pineapple hasn't changed the content, because Thawte has at least verified that bandrami.com is the one who actually requested the cert.
What I care about is the admin interface of my blog. I want that, and only that, to live on a secure connection, for me and only me. I literally just want to be able to type in the password to my blog so I can post stuff to it.
Why does this have to be forced to be such a jump-through-hoops-and-pay-strangers-money thing? Why shouldn't I just make my own cert, note its fingerprint, and call it a day? Why does my browser have to repeatedly go absolutely apeshit over that?
And more importantly, if this is really such a massive unbelievable vector that literally every web-connected computer on earth must behave this way, why does SSH still just say "want to trust this key? OK, I'll get out of your way now"?
To put it bluntly: The inconvenience caused to you (and other users who might be able to make an informed decision when facing a certificate warning) is less important than the risk that a non-technical user accepts a certificate mistakenly and falls victim to a MitM attack.
Oh, and every browser I'm aware of allows you to import CA certificates, so why not just do that once and get a nice, green lock?
I don't know many non-technical users who use SSH regularly, but even for technical users, I doubt too many check the fingerprint, so the TOFU property of SSH is still a problem. That being said, SSH's TOFU is not all that different from the process of installing a CA certificate in your browser so ... just do that.
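SSH's trust-on-first-use behaviour, for comparison, boils down to something like this (an illustrative sketch; the store format and return values are made up, not how OpenSSH's known_hosts actually works):

```python
import json
import os

def check_tofu(host: str, fp: str, store_path: str) -> str:
    """Trust-on-first-use: pin a host's key fingerprint on first contact.

    Returns "new" (pinned now), "ok" (matches the stored pin),
    or "CHANGED" (pin mismatch -- possible MitM, refuse to connect).
    """
    pins = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            pins = json.load(f)
    if host not in pins:
        pins[host] = fp  # first contact: trust and record
        with open(store_path, "w") as f:
            json.dump(pins, f)
        return "new"
    return "ok" if pins[host] == fp else "CHANGED"
```

Note the asymmetry: a changed pin is a hard failure, while first contact is silently trusted -- which is exactly the TOFU weakness being discussed.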
When I click on a link someone has sent me, I want to see what the server at that link is sending me. I don't want to see what a MITM is sending me. Maybe I shouldn't trust the person who sent me the link and/or the method it was sent. But maybe I have good reason to. Why should you assume weaknesses in a previous link on the chain in order to justify a weakness in the link of the chain that is being discussed?
I think the point of dispute is something like this:
1) Yes, browsers could accept SSL with a bad cert chain and just show the UI as if it were plain text. However, then they'd need multiple codepaths to support what should be a very rare occurrence, in a very security-sensitive part of the code: you have to teach the address bar to display http:// even though it's really https (because otherwise you're diluting the meaning of https), while the URLs themselves still say https, and so on. It's just complicated.
2) Your justification for why it's meaningful doesn't really work: if browsers used very long lived connections then it might be helpful but they don't, they re-negotiate and re-establish connections constantly. If an eavesdropper wants to MITM you and you aren't doing cert checking then they just need to wait a minute or two until you click another link on the page. It's not an especially interesting security upgrade. In your blog example you don't know the blog wasn't altered in transmission.
Have a look at Netlify (https://www.netlify.com) - our free tier is like GitHub pages on steroids including HTTPS for custom domains (and deploy previews, use any build tool not just Jekyll, redirect rules, etc...).
We have a Metalsmith static site that requires "npm build" to build and Netlify seemed to figure it out automatically. Pulls directly from GitHub, built, deployed, SSL with LetsEncrypt. All for free.
I host my static sites out of S3 and use Route53 to manage my DNS. If you go this route, it's only a few clicks through a setup wizard to get an SSL cert and enable https.
I use CloudFlare with SSL (Full) for this. Not optimal, but according to a recent comment on HN (https://news.ycombinator.com/item?id=12389850) CloudFlare plan to offer "more granular origin certificate validation" in the future.
"Don't let the perfect be the enemy of the good". While not the optimal solution this route significantly increases security while at the same time being easy and free to setup. I hope that CF starts to validate github pages certs so we can switch to their "strict" option in the future.
Worth migrating to HTTPS soon, then. There are a couple of tools here to help make sure you don't end up with HTTPS mixed-content issues: https://httpschecker.net/ (desktop & online).
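The core of such a check is simple enough to sketch (a toy version: it only scans `src` attributes in static HTML, while a real tool also parses stylesheets, `<link>` tags, dynamically inserted scripts, and so on):

```python
import re

# Subresources (scripts, images, frames) loaded via src= over plain HTTP
# are what triggers mixed-content warnings; plain <a href> links are not.
MIXED_SRC = re.compile(r'src\s*=\s*["\'](http://[^"\']+)["\']', re.IGNORECASE)

def find_mixed_content(html: str) -> list:
    """Return plain-http subresource URLs referenced from src attributes."""
    return MIXED_SRC.findall(html)
```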
The article mentions free DV certs from Heroku alongside offerings from Let’s Encrypt and CloudFlare. But Heroku don’t offer certificates for your domain like LE / CloudFlare, only ones for *.herokuapp.com. You have to obtain a certificate yourself.
That's a fight you'll have to take up with various root programs and the CA/B Forum. There are provisions that allow root program operators such as Microsoft to force Let's Encrypt to revoke certificates, though as far as I'm aware this should be limited to domains hosting phishing sites or malware.
I'd argue Let's Encrypt's policy is as good as it can be while still following the policies required for root inclusion. Here's a relevant blog post[1].
After updating Chrome, I visit google.com and see the gray i due to "a weak security configuration". If users are seeing this gray i everywhere, it will just be ignored, right?
To be honest, I didn't even realize that the grey 'i' was supposed to be a warning. I'm so used to seeing the little "blank document" icon there that my eyes totally glossed over it. I doubt many people are even going to notice the change, let alone worry about what it means.
It's even worse in incognito mode now where the little padlock icon and "https" are both plain white, exactly the same color as the main URL. To a cursory glance, secure and non-secure both look very similar.
EV certs don't look as cool as they used to. The green they use is too close to black. I know Edge got rid of the green bar as well, but the green they use stands out.
I feel like I'm missing something here, as both you and the article heavily imply that Chrome displayed EV certs prior to 52. In my experience, Chrome hasn't displayed EVs; here's an EV cert site for me under Chrome 51: https://i.imgur.com/azKdzPd.png — this is the same presentation it gives DV certificates.
I'd see a MitM in the cert chain if I manually inspect it, wouldn't I? (It'd be signed by the corporate MitM CA cert, right?)
Interesting that it shows up in your screenshot though; BoA on both Chrome 51 (on OS X) and Chrome 51 on Linux doesn't display the EV for BoA, or GitHub.
(I doubt the MitM one, since the Linux machine is my home one. The OS X one is my corp laptop, so corp MitM'ing is believable there.)
or is the screenshot incredibly outdated, since it says Chrome 8, and CT came later?
> I'd see a MitM in the cert chain if I manually inspect it, wouldn't I? (It'd be signed by the corporate MitM CA cert, right?)
Yep, it should show up in the cert chain.
> Interesting that it shows up in your screenshot though; BoA on both Chrome 51 (on OS X) and Chrome 51 on Linux doesn't display the EV for BoA, or GitHub.
I'd guess for some reason Chrome doesn't think it has received a qualified SCT for the certificate and is refusing EV treatment. Not sure which SCT delivery methods Chrome supports, and why they might be failing here.
> or is the screenshot incredibly outdated, since it says Chrome 8, and CT came later?
It's definitely old, but I don't believe it's related. FWIW I'm getting the EV UI on OS X with Chrome 54 when I visit https://www.bankofamerica.com/.
Author here. It's a more accurate reflection of what EV offers: it's about matching identities to certificates, not the 'green bar' selling the big CAs have been doing for years.
I find it sad that corporations have decided the internet should be predominantly a private space.
Think of it: all of those brilliant http://c2.org wikis and ASCII-doc sites are now marked "insecure".
Fundamentally -- some wonderful scientists and engineers created a paradise of free exchange. Commercial enterprise introduced the concept of 'money'. A bunch of thieves robbed them. And so we all have to put up walls with locks on them.
> I find it sad that corporations have decided the internet should be predominantly a private space.
This has very little to do with "corporations deciding the internet should be a predominantly private space", and everything to do with governments deciding that the internet is open season for information warfare and spying on citizens. The big push for HTTPS-everywhere hasn't been driven by "corporations affected by thieves", it's been driven in the wake of the Snowden revelations.
As long as Advanced Persistent Threats exist, you have no way of verifying you're even reading the real c2.org, or ASCII-doc site, or some censored replacement a privileged network opponent is impersonating it with.
HTTPS gives us some measure of assurance that we're actually talking with who we think we're talking with.
It's not just about money, right? Even if it was just scientists writing essays and sharing them with each other we would still need https to prevent, say, providers from snooping on or altering page contents.
Since I am the creator and maintainer of a scientific web app for molecular dynamics (shameless plug: http://www.bottledsaft.org) which is not on https (yet), I decided to check whether scientists browsing the net will quickly learn to ignore this warning.
Finding so far: Yes, they will. No need to hurry, but I will go https eventually. Of the sites I checked quickly, these are running non-https by default:
While this is good evidence of what is actually happening, please let's not suggest, even implicitly, that publishers are a place to look for guides to best practices [0]. I don't know about the sciences more generally, but, in math, "because the publishers do it" is just about the best reason possible not to do it [1].
[0] Except maybe that they publish them. :-)
[1] Except the AMS, which, in my limited experience, seems to be populated by, and perhaps even more important, in the charge of, people who Get It.
I agree. As I said, I really should get https working. I also really should finish a bunch of papers in various stages of completion and review, etc. This lets me decide whether the new Chrome behaviour rearranges my priorities or not.
>c2.org
>This site requires JavaScript and Cookies to be enabled.
Executing JavaScript from insecure http is a bad idea.[1] Especially for developers/IT admins who are targets of state sponsored hacking.[2]
>Please change your browser settings or upgrade your browser.
How about, no? There's a reason I block JS with noscript. It's not about page load speed, browser responsiveness, or ad blocking. (Although it helps in those areas remarkably) I block JS because executing untrusted JS code[3] is a good way to get hacked.
I consider cookies much more secure than the alternatives. Here's a detailed explanation as it relates to JWT. The main takeaway is, if properly configured by the site, cookies are more secure than alternatives like web storage.
But that's coming from a site admin point of view. As a user, it would be great to have a browser plugin that allows me to reject cookies that aren't secure/httponly.
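"Properly configured" in the sense above means, at minimum, both flags (and, increasingly, `SameSite`); for example:

```
Set-Cookie: session=opaque-random-token; Secure; HttpOnly; SameSite=Lax; Path=/
```

`Secure` keeps the cookie off plaintext connections; `HttpOnly` keeps it away from JavaScript (and thus XSS) -- which is exactly what web storage can't offer.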
Never requires a lawyer or an accountant to fill in paperwork.
I have a lot of hate for the whole concept of EV. Not only because it's theatre, but because I have to involve my accountant, and my legal team every time I buy one from Comodo. It's like pulling teeth.
I think that jeffehobbs is suggesting that your call to action, rather than the facts on which it is based, should be viewed with extra scrutiny because of your affiliation.
I really prefer to do what is best for my users: fast loading simple pages for users with slow connections on the other side of the world. (So, HTTP, not HTTPS.) But there is no sense fighting the powers that be.
EDIT: Thanks for the down votes! Unpopular thoughts like mine should be discouraged. Groupthink is best!
Not necessarily. Once you enable and optimize your TLS stack you're also well on your way to deploying HTTP/2. Unlike HTTP/1.1, HTTP/2 requires only a single connection per origin, which means fewer sockets, memory buffers, TLS handshakes, and so on. As a result, it may well be the case that you will be able to handle more users with fewer resources.
That applies to modern hardware and modern infrastructure – not to people in third-world countries on decade-old cheap hardware with barely working networks.
Then you need to be deploying HTTP/2 over TLS. You are doing a disservice to users with slow connections due to the cost of opening up multiple sockets. Multiplexing your data over an encrypted channel over HTTP/2 and TLS will be faster than domain-sharding over HTTP.
Not OP, but IME it can be faster: setting up multiple HTTP connections can be the bottleneck, though YMMV a lot. That's why connections are one aspect of page-load metrics. Some sites are such that they'll only get ~1.something page-views per client per day.
If your page is anything over a few KB in size you are really messing up then. Break CSS and JS into separate files so the browser can cache them and doesn't have to ask for them every single time the page is reloaded.
Could you do HTTPS as default, then offer to bump people to HTTP (and a slimmer site) if they want it? Don't people with strong requirements for fast pages tend to use proxies that lighten the page (Opera used to have this built in) by reducing image sizes and such? Presumably then it wouldn't matter if the original connection was HTTPS.
Would you mind sharing the site in question and the modal (ie most common) specs of your users devices?
[1] https://www.w3.org/DesignIssues/Security-NotTheS.html