So, a trusted third-party, then? To, you know, certify the site? Like... a Certification Authority, perhaps?
> If this is the first time you are visiting this website, a warning message will appear. Your browser will provide some useful information about this site (see below) and ask you whether you want to continue and really visit this site.
How's that going to work when visiting any of the majority of sites today that pull in resources from perhaps 20 or more different domains? Google Analytics, some other analytics, ads, all the trackers, ...
On a whim, I just pulled up CNN's web site. uBlock Origin reports a total of 21 domains. How is the average user supposed to know if zor.fyre.co is legitimate -- or a tracker or a phishing site or "evil Russian hackers" and shouldn't be trusted?
There's a lot that's wrong with the current state of PKI on the web (too many CAs, questionable CAs, etc.) but that doesn't mean we should just throw it all out tomorrow. We're stuck with it -- at least until something better is devised and rolled out (which would take years!) -- so the best we can do is hope for continued incremental improvements, such as TLS 1.3.
I don't want it either, as it easily trains users to click through it.
User goes to new website: "Oh yeah I have to click allow <CLICK>"...
User returns to site but it's now being MITM'ed: "Oh, I thought I had clicked allow. <CLICK>"...
Even if the warning is something like "the public key for this site has changed, do you wish to continue?", the general public will click continue. The problem is not designing a system that those of us on Hacker News can use and understand and that keeps us secure; it's building a system that "non-technical" (throw in generalisation about parents/grandparents not understanding the interwebs here...) minded people can use, understand, and be kept secured by.
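To make that failure mode concrete, here is a minimal Python sketch of a TOFU pin store (the class name and in-memory storage are illustrative, not any real browser's implementation): the very first key seen is pinned silently, and a later mismatch produces exactly the kind of yes/no prompt users learn to click through.

```python
import hashlib

class TofuPinStore:
    """Minimal trust-on-first-use store: hostname -> pinned key fingerprint."""

    def __init__(self):
        self.pins = {}  # hostname -> hex SHA-256 fingerprint

    def check(self, hostname, public_key_bytes):
        """Return 'first-use', 'match', or 'MISMATCH' for a presented key."""
        fp = hashlib.sha256(public_key_bytes).hexdigest()
        pinned = self.pins.get(hostname)
        if pinned is None:
            # First contact: nothing to compare against, so the key is
            # trusted blindly -- this is the whole problem with TOFU.
            self.pins[hostname] = fp
            return "first-use"
        return "match" if pinned == fp else "MISMATCH"

store = TofuPinStore()
print(store.check("example.com", b"legitimate-key"))  # first visit: pinned blindly
print(store.check("example.com", b"legitimate-key"))  # returning visit: silent
print(store.check("example.com", b"attacker-key"))    # MITM: user gets a prompt
```

Note the only security decision the user ever sees is the MISMATCH prompt, and nothing on screen helps them decide whether the change is a re-key or an attack.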
The current system doesn't grant any legitimacy, it only binds a name to a cert.
There are some very large problems that this introduces and does not solve:
- How do kiosk machines now work? How can I trust any website when every use is first use?
- What about https domains for images and scripts? Is the user supposed to trust each domain separately?
- Assuming trusting a domain trusts all its other domains mentioned in its content-security-policy, how is this trust revoked for misuse?
- Is the author suggesting that we retrain all users of browsers everywhere? I believe that to be a herculean task.
Certificates have their problems, but this is not a solution. I will not propose one here, because it is an extremely complicated problem that needs many separate parties working together to solve.
Should you ever really trust kiosk machines? They could easily be set up with MITM eavesdropping software.
> What about https domains for images and scripts? Is the user supposed to trust each domain separately?
Presumably there could be some sort of <meta> header that listed the public key fingerprint/IDs of any subdomains that the page was going to pull in. This would make the UX better.
> Assuming trusting a domain trusts all its other domains mentioned in its content-security-policy, how is this trust revoked for misuse?
AFAIK, revocation isn't very good even with the current CA infrastructure.
I should have the choice. This doesn't even give me that.
> Presumably there could be some sort of <meta> header that listed the public key fingerprint/IDs of any subdomains that the page was going to pull in. This would make the UX better.
This would make the UX barely passable. If any of these domains change the content at all so the hash changes, how does it get updated?
> AFAIK, revocation isn't very good even with the current CA infrastructure.
I completely agree with you, so let's not make it any worse.
I don't understand why you say this. Can you explain more?
> This would make the UX barely passable. If any of these domains change the content at all so the hash changes, how does it get updated?
Not a hash of the contents, just the sub/external domains' key-ids. Yes, the main page would have to change if you updated the keys. Doesn't seem too onerous to me.
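As a sketch of that idea (the meta-value format and every name below are hypothetical -- no such mechanism exists today): the main page declares the key-ids of the external domains it pulls from, and the browser compares each declared id against the key the external host actually presents.

```python
import hashlib

def key_id(public_key_bytes):
    # Short identifier derived from a key; truncated SHA-256, purely illustrative.
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def parse_declared(meta_content):
    """Parse a hypothetical 'domain=keyid domain=keyid' meta value into a dict."""
    return dict(item.split("=", 1) for item in meta_content.split())

def check_subresource(declared, domain, presented_key_bytes):
    """True only if the external domain's presented key matches the declaration."""
    expected = declared.get(domain)
    return expected is not None and expected == key_id(presented_key_bytes)

cdn_key = b"cdn-public-key"
meta = f"cdn.example={key_id(cdn_key)} img.example={key_id(b'img-key')}"
declared = parse_declared(meta)
print(check_subresource(declared, "cdn.example", cdn_key))  # True
print(check_subresource(declared, "cdn.example", b"evil"))  # False
```

This also shows why re-keying an external domain forces every linking page to republish its declaration, which is the objection raised above.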
>I don't understand why you say this. Can you explain more?
Not who you replied to, but with this system, I need to trust everything between the kiosk and the website server to not be MitMed. With the certificate system, I only need to trust the kiosk itself. Specifically, I need to trust the browser does TLS right, and I need to trust the installed root certificates are correct.
>> Not a hash of the contents, just the sub/external domains' key-ids. Yes, the main page would have to change if you updated the keys. Doesn't seem too onerous to me.
Then that means each external domain has to tell all its linkers whenever its key changes -- assuming it even has such a list. What if a party doesn't respond? I understand it would be said party's problem, but it sure does make re-keying difficult.
This has already been answered, but I'll offer a different perspective. For a lot of people kiosk machines are the only way they get to use the internet. Not everyone has the privilege of owning their own computer or internet-enabled phone.
I don't know what you are commenting on, because the author of the linked article suggests we use Trust On First Use, not HPKP.
2) Key rollovers are accepted silently if signed with the previous key. So an attacker who steals the key can sign their own valid keys forever, right? If the server can invalidate previous keys to reduce that risk, what happens when the attacker uses that feature against the actual site owners?
3) How is TOFU better than previous proposals like Perspectives and Convergence?
yeah.. this brings us back to basically the same setup as before.
your device needs to reach out to some 'trusted party' to ask it whether it should warn you about the key.
how does your device connect securely to this trusted party, so someone can't mitm the trust question? ship the trusted party's key on the device.
.. sound familiar?
That's more in the mindset of how the original X.509 design was supposed to work: you trust your company/university, who installs set Z on their workstations. Except, in this case, you're much more free to switch who you trust.
Of course, in the "carrier profile" case, a lot of the smaller ISPs might delegate their cert-verification to some company set up specifically to be an, ahem, Authority for Certificates.
But those companies would be "CAs" in a very different sense than the one we have today: they wouldn't issue certs; they'd instead generate pin lists and run OCSP portals, i.e., verify certs—including certs that are not their own. They'd essentially provide "TLS stack plugins" for devices that subscribed to them, instructing those devices on what makes for a valid or invalid cert. (Or you could build the infrastructure the other way around, and make a "TLS thin client" library that talks to its CA every time it wants to know if a cert is valid. Given that OCSP already exists, it wouldn't be that much more traffic.)
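A sketch of what the "TLS thin client" half of that might look like (the verifier API here is entirely made up -- it's the shape of the idea, not an existing protocol): the client builds no chains itself; it hands the certificate to whichever verification authority it subscribes to and acts on the verdict.

```python
import hashlib

# Hypothetical verdict database a subscribed verifier might maintain.
# Keys are certificate fingerprints; values are verdicts like "valid"/"revoked".
VERIFIER_DB = {}

def verifier_lookup(cert_der):
    """Stand-in for an OCSP-like round-trip to the subscribed verification authority."""
    fp = hashlib.sha256(cert_der).hexdigest()
    return VERIFIER_DB.get(fp, "unknown")

def thin_client_accepts(cert_der):
    # The client knows nothing about chains, pins, or extensions:
    # it only trusts its chosen authority's verdict.
    return verifier_lookup(cert_der) == "valid"

good_cert = b"good-cert-der-bytes"
VERIFIER_DB[hashlib.sha256(good_cert).hexdigest()] = "valid"
print(thin_client_accepts(good_cert))        # True
print(thin_client_accepts(b"unknown-cert"))  # False
```

The design choice is the one described above: all policy (new extensions, pin lists, revocation) lives at the verifier, so clients pick up changes without shipping new TLS code.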
This would also mean that the TLS "technology" could evolve much more quickly, since TLS extensions wouldn't have to get 100% traction with TLS client software before they'd be useful. Instead, the extension would merely need to be taken into consideration by one CA when that CA generated its pin-lists and OCSP databases (or, dynamically by webapp-like software, in the TLS-thin-client case.)
You don't just need to trust the installed root certificates; you need to trust all intermediate CA certificates signed by those roots (except with public key pinning, which ignores installed root certificates).
Maybe. Unless you're poor, in which case you often don't have any other access besides wifi hotspots.
Critically, unbinding public/private keys from DNS names removes a layer of trust. I trust my browser-maker to grant only legitimate CAs approval to sign certificates that will be accepted by default by my browser, and my DNS provider to send me the correct IP for a given name, and I trust myself that I know the DNS name of the site I want to visit. Those three things work in concert with each other to give me a relatively high level of assurance that I'm talking to the site I mean to be. What's more, thanks to certificates, I can verify this all myself! Export the site certificate, and check its signing chain to go back to the CA, which I can also decide for myself if I want to trust or not, if I don't trust the browser-maker's judgment.
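For instance, the do-it-yourself fingerprint check mentioned above boils down to hashing the certificate's DER encoding and comparing against a value obtained through some other channel -- a sketch (the certificate bytes below are obviously dummy data, not a real certificate):

```python
import hashlib

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate, colon-separated hex."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# In practice der_bytes would come from an exported site certificate
# (e.g. via your browser's certificate viewer); this is placeholder data.
der_bytes = b"\x30\x82\x01\x0a-dummy-certificate-bytes"
print(cert_fingerprint(der_bytes))
```

The same digest appears in browser certificate viewers, which is what makes the manual cross-check possible at all.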
Add to that certificate revocation, and I fail to see how the proposed plan makes any sense at all. The author never makes a case that name-based certificates are actually a problem. Browser makers are free to add extra layers of security to the current system that could do what the author proposes. But from little details of what he says, I suspect what's really going on is that he just doesn't want to pay for certificates or deal with setting up Let's Encrypt.
This article is really about PKI more than TLS, and what it means to be a user's agent ("user agent"). Forgive my critical tone in advance, it's just that proposals like this need to go through the wringer...
> you are supposed to be certain that you are communicating with the website you intended to visit. the green padlock in front of the URL in your browser tells you TLS is securing the connection. But this reasoning is wrong.
No, it's not. It's correct to reason that the green lock (as implemented by most modern browsers) indicates a secure connection to the site in the URL bar. Now, whether that site is behaving virtuously... is a different matter, and not one that TLS is designed to solve. It also doesn't mean TLS is broken. (TLS is quite good, especially the emerging 1.3 standard. Qualms with TLS usually end up being about PKI, which is the best thing we've got widely deployed right now, whether you like it or not.)
"Intended to visit" -- when we look at the browser as the role of the user agent, or in other words, a program that is supposed to act helpfully on behalf of the user -- the browser infers intent from what the user tells it to do. If you have a private chauffeur and you tell him to take you to such-and-such place, he will go there, even if you intended to go to such-another place. Oops? Maybe agents who know you really well could double-check with you if they think you meant something else, but do you really want your browser to know that much about you?
> The real solution: get rid of certificates
This is ridiculous. Certificates make possible the cryptographic exchanges that guarantee confidentiality, host (and client) authenticity, and data integrity. Don't throw away the single most valuable component of modern Web security.
> Being able to tell that we are visiting the same website as before allows us to establish a trust relationship with that website over time. With every positive interaction, our trust increases (and with every bad experience, our trust drops). We don’t need the website’s name for that.
This isn't about TLS, this is now about PKI and trust model issues. Yes, there are various trust models and each have advantages and disadvantages. We went with PKI for the wider Internet because it fits the mold best when connecting machines to other machines at scale. In contrast, WoT and other models (including TOFU) work better in small groups of people or machines in semi-controlled environments (e.g. TOFU is manageable with your small-medium-sized fleet of servers).
No. That might be how you use SSH, but you are not really using SSH correctly IMO so you are not qualified to tell me HTTPS should run like SSH...
For a properly secure connection you should be verifying the fingerprint against a reference you received through a different trustworthy communications channel. If I'm connecting to my own servers I know the value is correct, as I set up the server and have a record of it. If it is a newly set-up machine/VM at a hosting location I might have to trust that the first fingerprint I see is OK, but if I'm connecting to an existing service I expect to be told the fingerprint so I can check it. FYI: in my day job I handle file transfers potentially containing our clients' client data, so we have to do this sort of thing properly.
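Checking against an out-of-band reference is mechanical: OpenSSH's SHA256-style fingerprints are just the unpadded base64 of a SHA-256 digest of the raw public key blob. A sketch (the key bytes below are placeholders, not a real key):

```python
import base64
import hashlib

def openssh_sha256_fingerprint(key_blob):
    """OpenSSH-style fingerprint: SHA-256 of the raw key blob, base64 sans padding."""
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def matches_reference(key_blob, reference_fingerprint):
    # Compare what the server presented against the value you were given
    # over a separate, trusted channel (phone call, signed mail, internal wiki...).
    return openssh_sha256_fingerprint(key_blob) == reference_fingerprint

blob = b"example-raw-public-key-blob"  # normally the decoded key the server presents
ref = openssh_sha256_fingerprint(blob)
print(matches_reference(blob, ref))        # True
print(matches_reference(b"mitm-key", ref)) # False
```

The point of the comparison function is exactly the workflow described above: the reference value travels over a different channel than the connection being verified.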
Ignoring the "you are doing SSH wrong" thing for a moment, there is another major flaw in TOFU: users will just click OK without thinking when presented with an "are you sure" question. The people most vulnerable to the current problems with TLS are not going to have their risk profile changed at all (or if it changes it'll be for the worse) by this proposal.
Now I can deploy as many new machines as I like and I only have to sign their host keys for all of my SSH clients across my machines to trust that new machine before I first use it.
If this is new information, I have left example instructions at https://paste.debian.net/plainh/6b332023
Certificates for SSH aren't something that the OP is going to like though, as he is trying to use SSH as part of a rationale for doing away with certificates for HTTPS...
Clients that have a CA known_hosts(5) entry for a given name (or a mask that matches the name they're connecting to) will prefer host certificates over host keys and attempt to verify them that way if the server presents one, but if that fails they'll still fall back to regular host key validation (unless you explicitly set the HostKeyAlgorithms directive to eliminate non-certificate-based ones).
It's the best of all worlds, in my opinion.
I don't see how this can work. If we need to update keys because of compromise, the keys cannot be relied on to sign the new keys, because they have been compromised. Nor can such a case be pre-announced.
The main problem I'd see is training users to get that initial trust phase correct. Too many wouldn't try to parse the info given and just click Trust regardless (or run away from every site with the warning).
Not if you are doing SSH right... That "fingerprint validation" message you get the first time you connect to an SSH host: you should be verifying that against a copy received through another (preferably secure, of course) channel, not just blindly trusting the first fingerprint you are shown.
That way, we can add CAs and they can be trusted at the root for a small amount of space.
Obviously, we'd need to do some amount of grandfathering, but CAs should know who they've issued for.
Is there any reason this isn't plausible?
See also https://wiki.mozilla.org/CA:NameConstraints
That's not a TLS extension; it's two separate things you've linked.
The first thing is a standard feature of X.509v3, which doesn't help you at the root: although root keys are distributed in the form of X.509v3 certificates for convenience, they aren't really treated as certificates -- after all, you're trusting them directly, and if you can't trust the root then it's Game Over. We use these name constraints all the time; they often protect a corporate subCA.
The second thing is a proposal to Mozilla policy from 2015, ie, if it was implemented, this would be a change to the behaviour of Mozilla's TLS client, most specifically Firefox.
It isn't currently being discussed by m.d.s.policy, the group which decides Mozilla policy. If you care about Web PKI policy m.d.s.policy is basically the only place you can influence it, everybody else of any significance makes these decisions in private (e.g. Microsoft, Apple, Oracle) or takes their cue from m.d.s.policy (most Free Unices, more obscure browsers).
This will cost a lot of money, but if they're pointlessly running a TLD then SAP is already burning money, presumably they can just fork a few million extra onto the bonfire.
However, this is basically the CA system in disguise. Instead of trusting CAs, we need to trust domain servers. Thing is, domain servers tend to be controlled by governments even more so than CAs.
There are some other issues with the technical implementation of DNSSEC, though I do not recall them at the moment.
I like a DNS-based approach, because it ensures only one authority can authorize a key, though it has the issue of often being a state-controlled authority.
But the fact that domain names are hierarchical means you have to trust one or two entities at most. That's why domain name theft is historically less common than illegitimate TLS certificates.
You'll find all sorts of arguments that the CAs do more than that, and no end of security professionals who work for them. Some of these arguments are probably correct, but that doesn't change the overall picture of how a secure global TLS PKI should operate in the presence of domain names. It does make the current system quite resistant to change, however.
For any country TLD domain (.us .uk .nl), those keys are held by a state actor. That TLD then gets to sign any lower level DNS servers.
Say whoever controlled .com wanted to change the key for hackernews.com. They'd simply change the NS record for hackernews.com to point to a different DNS box with different keys, one that proxies DNS requests to the real server but re-signs them. It can then simply serve a new DANE record. (DANE records are DNSSEC's analogue of CA-issued certificates.)
This is essentially the same procedure as a CA issuing a wrong certificate.
The source of this problem is that you don't just need to trust the root DNS servers, but you also need to trust the TLD-servers that are signed by the root DNS servers. (And down recursively if there are more NS records in between)
A better solution might have browsers try to intelligently identify suspicious-looking URLs (including URLs that start with a popular, external URL, like paypal.com.notpaypal.cc). As for the actual certificates themselves, perhaps this is a good application for blockchain technology, with certs being created by website owners and submitted to a blockchain by a machine assigned to the DNS name that wants to create its cert. But in this case, we still require DNS, which is built on trust... so perhaps DNS could also be put on a blockchain. How? I don't know. But until we figure that out, we have to rely on various organizations to provide trust.
In the proposed scheme, they are ALWAYS clicking through blindly. There's no way for a user to know whether he got the original public key or someone else's. How is a user supposed to verify it -- call the owners of the website and ask them to dictate their public key?
This warning screen is thus just a security theater.
The whole proposed "solution" is not a solution at all and doesn't work — the author could have saved a lot of time typing this blog post if they had just listed, on a piece of paper, the threats that certificates are trying to solve and put a check mark next to each one their proposed "solution" solves.
That said, the key flaw that's raised at the start of the article (how do you know that amazon.de is Amazon) is not really solved with this solution. You still need a trusted third party to bootstrap that first visit.
The current TLS/PKI allows for much better UX, users would reject the proposed solution.
This TOFU thing could be useful with self-signed certificates (like those shipped in servers' BMC).
But that's how it already works, at least traditionally (I don't know about Chrome): you connect to it, and the browser asks you if you want to make a permanent exception for that cert. Next time, it'll just connect.
I didn't know this. This is horrible and does not make any sense.
How is the proposed solution where MITM attack could be going on the very first time I visit a website I want to trust a better solution than the current state of PKI?
The proposed solution demands that we trust the website on the first visit if the website behaves well. So it is very easy for Mallory to perform a MITM and behave well so that I trust Mallory's public key. Now Mallory can see all my traffic on subsequent visits. How can one even imagine that this is an acceptable solution?
All your usual websites wouldn't have this icon, but paypa1.com would, and it would be easy for the user to think "wait, what do you mean I've never visited Paypal before?".
Certificate rotation/expiration/invalidation, multiple certificates per domain, multiple subdomain certificates, multiple feature certificates, a dizzying array of certificate extensions, not to mention the organizational control of their issuance across multiple orgs for the same domain, as well as user certificates, are all necessary, and all are abandoned by a bare public-key scheme. There's a lot more I don't even remember off the top of my head.
Might be fun to usability test that scenario in a VM or something just to see how it works in practice.
Not to mention that this scheme still uses a CA of sorts. Lose-lose.