The fact is, if your browser trust roots contain only trustworthy CAs, and you check that you're using a secure site when you expect to, SSL isn't broken. Those sound like two big "ifs", and they may be, but consider: they are both user interface issues:
* There is no collaborative interface for community-rating CA trustworthiness, or even a reasonable interface for pruning CAs out of your browser --- let alone "do I trust certificates delegated from this CA, or just the CA itself". We are still using an interface for this functionality designed in the mid-'90s.
* The "first introduction" problem that key continuity suffers from is, in the SSL case, caused by the fact that the SSL cues in the browser are subtle. But why should that be the case? Is there really no interface anyone can come up with that appropriately signals "this page should be and is secured by SSL"? Because if you have that (or STS), sslstrip doesn't work.
Key continuity's problems, on the other hand, aren't user interface issues. In key continuity schemes, when your browser is first introduced to a new website, you simply are insecure. If at that moment you are MITM'd by an attacker, you might never again be secure. This isn't a UI issue: key continuity protocols don't have the information available to them on first connection to make that decision.
Think to yourself, if you ran the great firewall of China, which system you'd rather have: the flawed one we have now, or the one that explicitly allows your connection on the first and subsequent connections to a new service to be hijacked?
I am our resident SSL/TLS apologist. But let me be clear that this deck is good, and raises a lot of valid concerns. I think SSL/TLS is a good system, but I share the (implied) disgust at the way our PKI has been managed. Some kind of reform is needed. I just don't think it needs to come with additional engineering sacrifices.
Key management is the weakest link of HTTPS/TLS. And the slide deck does a good job of identifying some of the forces that weaken it still further. It assumes (I believe correctly) that the various actors in this play are all working with their own self interest at heart.
So "tofu/pop", to be an improvement (rather than simply training people to click 'ok' on a lot of pop-ups) needs to have some counteracting forces.
But we don't have CAs whose CA status gets revoked when they are shown to have given out untrustworthy certificates, nor do we have liability for browser makers who fail to warn you that something suspicious was going on, and we don't have nation-state actors who will transparently step into the game.
I don't have a good answer, but something that is purely algorithmic is going to be challenging to get right. We trust the bank when we walk up to the brick-and-mortar office because the cost of fooling average people that way would be too high to make the ruse worthwhile. However, we currently have no way to transfer that trust to the bank's presence on the web. An example of doing so would be taking home a 'certificate' key from the branch manager and then using it at home.
I started looking at this as part of some Java security work in the 90's and got to appreciate just how difficult a problem set it is. And worse, the group that does the infrastructure has to be 100% open so that folks will trust it and 100% free so that it will be widely adopted. That makes for a very very difficult business model.
> Think to yourself, if you ran the great firewall of China, which system you'd rather have: the flawed one we have now, or the one that explicitly allows your connection on the first and subsequent connections to a new service to be hijacked?
The one we have now explicitly allows your connection on the first and subsequent connections to a new service to be hijacked by the Chinese government, assuming they can get your packets to go to them, because CNNIC has a CA cert in your browser. (Have you removed it? I haven't. I didn't know it was there.) Additionally, it allows any connection to any service, old or new, to be hijacked, even if you previously connected to that service without being hijacked, and it doesn't leave any evidence that this has happened. TOFU/POP solves these additional problems, without solving the introduction problem.
It's possible to solve the introduction problem --- which is what CAs do --- in a variety of more or less secure ways. SSH gives you a key fingerprint and prompts you, which pretty much pushes the problem back on the user. But you can do better than that in a bunch of ways.
What's the issue here? CNNIC isn't trustworthy. You should get a warning every time you visit a CNNIC-signed site.
I don't understand how people can build entire advocacy campaigns on getting other people to switch from IE, but removing untrustworthy certificates? Time to scrap the protocol and move to a crappier one that we can at least get our heads around!
This isn't to say TOFU/POP is the right move. But there's a very real problem here. One that HTTPS's strengths do nothing to address or offset.
 And of those, I doubt they all remember to check their CA list at every browser install/upgrade/borrowed laptop browsing situation.
This is not a problem with the protocol, but with the criteria used for trusting a signature.
At the end of the day, we're talking about a very simple UI problem. Yes, our notion of "trust" is broken. But it's broken in large part because nobody has put any effort into communicating to end-users enough information to enable them to make informed decisions. Then, after putting no effort into the problem, we look at it and say "you're asking my mom to understand X.509 CNs!", as if that was the extent of what engineering could bring to bear against this problem.
Then we throw up our hands and say "let's make a simpler protocol; who cares if it predictably and reliably allows middlemen to hijack our connections, at least we won't have to explain X.509 to our moms".
Can you blame me for being a bit cynical about this?
Under such a system, you'd know that you were only at the mercy of the Chinese government when browsing .cn sites - which is neither surprising nor too much of a worry.
Not to mention that this would fix some of the incentive problems. DNS top-levels would effectively be competing on the strength of their certificate-issuing properties - if a particular TLD was known to have a more corruption-resistant registrar, then that TLD ought to become more sought-after by security-conscious sites.
This is true, but it is also a problem that needs to be fixed. A CA should not only check whether you control the domain, but should also check your (real-world) identity, and in the case of a company, should also check other documents. Yes, most don't, or do it only ineffectually, but they should do it.
At least with a certificate issued by the registrar, there's some logical reason to believe that it was issued to whoever purchased the domain (whoever that is), which is often enough to establish an encrypted connection at a level the user expects. Significant problems remain, however: 1) Most domain owners have no idea how to generate a key and CSR (the registrar shouldn't do it) and 2) a truly reliable system for certificate revocation must exist to handle domain transfers. These aren't new problems, they just don't go away by shifting to this "easier" approach.
This is just another reason why it's so important to get technologies such as DNSSEC and DNSCurve working. As many have pointed out, it makes sense to replace X.509 with DNSSEC for exactly this reason, so you can verify the SSL certificate by simply doing a secure DNS lookup.
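The check itself is tiny once a secure lookup exists (this is roughly the direction later standardized as DANE). A sketch in Python; the DNSSEC-validated lookup that would produce `record_hash` is assumed here, not shown:

```python
import hashlib

def dns_cert_check(record_hash, der_cert):
    """Compare a cert hash published in (DNSSEC-signed) DNS against the
    cert the server actually presented. The record lookup and signature
    chain validation are omitted; record_hash is assumed to come from a
    validated lookup."""
    return hashlib.sha256(der_cert).hexdigest() == record_hash
```

If the hashes match, the cert is the one the domain owner published; no third-party CA is involved in that decision.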
What if you add that information to a key continuity system? I feel like this is an obvious idea, but I've never heard of a system using it. It's simple: when you first see a certificate you send it to a trusted third party to check. That third party doesn't need to be or even know about the CA who originally issued the cert; all it has to do is make its own connection to the server and make sure it gets the same cert you did. That protects against any MITM on your end of the connection.
To succeed, a MITM would have to control either the server's connection (which becomes the server operator's problem), or both your connection and the trusted third party's connection (which becomes the trusted third party's problem). I'd rather trust a single third party of my choosing than a huge list of CAs chosen by an opaque and somewhat arbitrary process.
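A sketch of the comparison step, with everything network-side omitted and the notaries assumed to report the fingerprints they independently saw (real systems along these lines exist, e.g. Perspectives):

```python
import hashlib

def fingerprint(der_cert):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def verdict(my_fp, notary_fps):
    """Compare the cert we saw against what the notaries saw.

    If everyone sees the same cert, a MITM would have to sit on the
    server's side of the network, or compromise every notary at once."""
    if all(fp == my_fp for fp in notary_fps):
        return "ok"            # everyone sees the same cert
    if any(fp == my_fp for fp in notary_fps):
        return "suspicious"    # partial agreement: possibly a partitioned MITM
    return "mitm-likely"       # nobody else sees the cert we were handed
```

The policy for "suspicious" (warn? block? ask?) is exactly the kind of UI question the rest of this thread is arguing about.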
And even if they didn't, I wouldn't consider hackers getting my bank info-- or governments reading my email-- the server operator's problem.
It certainly wouldn't be worth setting up a MiTM attack between my house and the first Comcast router my cable hits. That approach doesn't scale. Sure, maybe if I'm Warren Buffett (or some mobster if we're talking about the government) it might make sense. And if they did do that, then NO source I connect to is a trusted channel, as they could MiTM any and all connections.
The scheme wouldn't (always) protect you from an attacker sitting somewhere between you and your destination, like the NSA servers at AT&T, or the Firewall of China. Your trusted source could hit the same bad path.
And by your own acknowledgment, the scheme doesn't protect you from a hacker breaking into a bank's data center. If a hacker is smart, and only grabs say every 100,000th credit card, how long will it take to isolate the location of the exploit?
So the scheme doesn't really protect against anything with any level of confidence.
Sure, the scheme is only as good as your trusted source. But the trusted source could be very good. It could have multiple servers in different ISPs and even different countries verifying certs for you, and could use other methods of verification as well.
The bank can easily check if their connections are getting MITM attacked. The hacker has to modify the certs seen by anyone who contacts the bank, even if he doesn't MITM every connection, and that would be easy for the bank to detect.
That would be the empty set, because I have no idea what criteria I'd use to trust a CA, nor do I trust CAs not to give up their private keys to the government under even the slightest pressure.
 Thinking about it a bit more: I'd trust a CA that I'd set up myself. But I wouldn't necessarily trust the hardware it was running on.
The problem with first introduction is that while this is great for technical people, it will not pass the Bill Gates' mum test. I'm not saying that TOFU/POP is or isn't the right approach, I'm highlighting that there would be some resistance to change from people who don't understand and don't want to understand.
Introducing (from a tech luddite's perspective) a complex intermediary step to getting online to a bank or ecommerce site is going to be something that people work against. Some people don't actually care that the government may listen in (for various values of people, care and government).
I pretty much agree with 99% of what you're saying. I think this is a big problem and also political. I have no idea what the answer is though.
This would be fairly straightforward to implement for the Chinese government. Another way to do this is to get your CA accepted (as has happened with CNNIC) by default in various web browsers and abuse that.
A stronger disagreement: after the presentation puts so much time into linking the structure of PKI and the social incentives that mess it up, to just handwave that away as "a misconfiguration of SSL caused by a political quandary" is kind of missing the point. Any system designed with security in mind must take into account the human factor from the start; human behavior cannot be changed independently of the system's structure.
Paint me paranoid, but CA-assisted man-in-the-middle attacks are a real source of concern for me. I would absolutely love to have the SSH-style trust model for my HTTPS connections and I will pay good money for an implementation. A simple browser plugin that implements TOFU/POP on top of PKI would be an excellent start.
Just delete your CA list.
Go into your browser's preferences. Find the "security" section. It should have a button to open up the list of trusted certificate authorities.
Delete them all.
Now, every time you connect to a secure page, one of two things will happen:
1. If you've never visited that page before, your browser will prompt you for instructions. Use whatever means you like (second browser, remote server, call them, etc) to verify the server's SSL fingerprint. If it checks out, mark it as valid (aka "add an exception").
2. If you have visited that site before, its exception will already be present in your browser, and it will work like normal.
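For step 1, a sketch of pulling a fingerprint with Python's standard library, to compare against what you get over a second channel:

```python
import hashlib
import ssl

def format_fingerprint(der_cert):
    """Render a cert's SHA-256 digest in the usual AA:BB:... form."""
    digest = hashlib.sha256(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def server_fingerprint(host, port=443):
    """Fetch the server's certificate without validating it (we just
    deleted the CA list, after all) and fingerprint it."""
    pem = ssl.get_server_certificate((host, port))
    return format_fingerprint(ssl.PEM_cert_to_DER_cert(pem))
```

Run this from two different networks (or have a friend run it) and compare the strings before adding the exception.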
After a few days of this, you'll see why the SSH model hasn't seen widespread adoption for HTTPS.
> After a few days of this, you'll see why the SSH model hasn't seen widespread adoption for HTTPS
That's not really true. It has simply never been done for browsers, because many people think it's an abomination and, to paraphrase well-known HNers from previous discussions, supposedly useless and little more than obfuscation instead of encryption.
It's not the first time TOFU/POP has been suggested on HN, too. I find it curious this proposal got so many upvotes here today. I don't remember that many people coming to mine and other people's defenses when we suggested the exact same thing a few months back and crypto gurus were rending their hair like we had committed the most stupidly blasphemous act conceivable to modern computer science.
Notably, SSH will not automatically connect. It asks first. That's why it's useful. Additionally, SSH is such a niche tool that its users can be expected to be security-minded -- for example, by checking that the fingerprint matches the expected string instead of just clicking through.
If browsers automatically trusted a page the first time they hit, attackers can just redirect the user to http://paypa1.com/ and feed them a fancy, green-url certificate.
> Notably, SSH will not automatically connect. It asks first. That's why it's useful.
How many sysadmins really do check their newly installed server's fingerprint? They just type "yes" on first connect and grab the server's certificate. I believe very few people actually do manual lookups when SSH pops up the first-connection warning.
Most hosting providers will send you cleartext passwords for your server by email. When you log onto that server for the first time, there is no easy way to tell if you're the victim of an elaborate MITM attack. So that's happening right now.
> If browsers automatically trusted a page the first time they hit, attackers can just redirect the user to http://paypa1.com/ and feed them a fancy, green-url certificate.
Indeed they can. Nobody's suggesting otherwise. By the way, I can do that today, by registering a cert for paypa1.com, just to see how many people actually take a second look at the content of the certificate.
It was an old Usenix conference trick --- I think it's Dug Song's, but I'm not really sure --- to snarf people's SSH logins by capitalizing on their lack of interest in those messages. It's a trivial attack.
You are misrepresenting my position and then attacking me for it.
(All bets are off if someone is targeting your organization specifically, of course: they'd presumably have heard about the server changes in advance and take that opportunity to attack. If I considered that a serious concern in my circumstances then I'd ramp up my security level across the board.)
Though I guess DNSSEC puts you back at PKI ...
There are my own servers and there is the rest of the Internet. I want to use self-signed certs on my servers and I want these to be sticky. If any of these servers would suddenly start presenting different certificates (most notably, signed by a CA that sits on browser's trusted list), I want to know about this lovely fact.
For the rest of the Internet I can live with existing PKI.
I can understand why this might be a good feature to have for end-users and internet novices but why isn't there an option to revert it back to the old "yes/no" box instead of having to click 5 times when I know exactly why there is a self-signed certificate in use...
Also, considering this:
> Global PKI, as currently implemented in browsers, does not work.
makes this feature even more dubious; even worse, together with the green SSL indicator it gives a false sense of security.
Yes, that is possibly the single most annoying thing about any piece of software ever written, at least for those of us who develop secured web sites (and therefore self-sign all kinds of stuff before going to production).
It's quite straightforward to do this with openssl.
A good fix, but in light of the looming issues described in the link above: why invest all that time implementing and then defending this feature in the first place, for a negligible (at best) security benefit?
This doesn't look like a win to me. Browsers are introduced to websites all the time.
To me, this solution looks like "let's accept an ambient loss of ~5-10% of all connections to MITM attacks, because it's politically too hard to configure the PKI-based SSL protocol properly".
That would - at least to start with - be additional security.
I care most about a handful of sites, most of which are controlled either by me or by people I trust. My co-located mail server would be one of these sites.
There. You now have "tofu/pop".
I think the claim is that this is already the case. There are certificate-signing certificates in the hands of people who should not have them and there's no accounting of who has them (beyond the government mentioned) or what they're doing with them. I don't think this presentation is suggesting deleting all your installed CAs, just monitoring whether things are changing under you and coordinating those observations with others you trust. The coordination certainly goes beyond the SSH model.
You can do that now. Note the cert ID the first time you log in. If it is different the next time you log in, then it is a different cert.
This would be an easy browser extension.
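The logic such an extension needs is genuinely small. A sketch, with an in-memory dict standing in for whatever persistent store a real extension would use (its role is the same as ~/.ssh/known_hosts):

```python
import hashlib

def check_pin(host, der_cert, pins):
    """Trust-on-first-use: remember each host's cert fingerprint and
    complain loudly if it ever changes. `pins` is the persistent
    host -> fingerprint store."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in pins:
        pins[host] = fp          # first introduction: trust and record
        return "first-use"
    return "ok" if pins[host] == fp else "CHANGED"  # changed cert: investigate
```

The hard part isn't this function; it's deciding what the browser should do (and show) when it returns "CHANGED" for a site that legitimately rotated its certificate.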
- First, hosting provider. They can say "yeah, that stuff's on our network". There's IPsec, but no public keys float around, so it's only for VPNs now.
- Second, DNS registrar. They can say "yeah, that's the domain registered with us". That's already invented and called DNSSEC, although browsers should query for CERT records.
- Third, the notary. That's the current X.509 PKI with CAs. It is NOT broken, it's just insufficient and misunderstood.
- Fourth, yourself. Self-issued certificates can be, in fact, highly trusted, if you met site owner personally and verified the fingerprints. Or they can mean virtually nothing, if you don't know their source.
- Fifth, others. "We cooperate with those guys, they provide us nice stuff, everything works smoothly, and we trust them", "Been there, had everything I wanted, seen nothing weird, trust those guys". WoTs are extremely powerful, as they correspond to natural human trust networks. Implementation is, of course, hard.
An obvious part of the solution would be to allow multiple signatures, so the trust diagram would be a graph, not a tree.
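A sketch of how multiple signatures might combine; the weights and threshold here are made-up illustrations, not a real scoring scheme:

```python
def combined_trust(signers, weights, threshold=1.0):
    """Sum the trust contributed by each independent signer of a cert
    (hoster, registrar, notary, WoT peer...). No single edge in the
    trust graph has to carry the whole decision by itself."""
    return sum(weights.get(s, 0.0) for s in signers) >= threshold
```

So a cert signed by both your registrar and a notary could clear the bar even though neither alone would, while a single unknown signer contributes nothing.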
I still argue that there is no 'usability' problem, but rather 'ignorance'. The fact that https has been hailed as 'secure' to the user (who has no formal understanding of what 'secure' means) is what has led up to the problem he is venting about...
The observation of this pattern is perhaps obvious, but important.
Summary of what I think the main points are:
- certificate authorities are not trustworthy currently. It is deceptive and potentially dangerous to pretend that they are.
- the solution is actually a shift in what we think security is: rather than saying "this site is trustworthy" or "this site is not", we consider multiple factors to decide whether it is likely to be trustworthy or not.
My thoughts: I think the second point is an excellent idea, and I would take it farther by presenting that information to the user.
It would be great if there were a little meter in my URL bar that was more full (or changed color, or whatever) when a site used more secure practices. It would be even better if I could click on that meter and get a list of things the site did, with a note by each one saying "a trustworthy site would do this" or "a trustworthy site would probably not do that".
This would have the benefit of making the security model transparent to the user, which would let them make better decisions.
It would also allow for experimentation with new security ideas, if we allow many possible factors and not expect each site to use all of them. You could easily add a factor for sneakernet key-signing, for instance - if you happen to have a cert directly from the company (i.e. your bank gave you a USB key), your browser could take that into account.
Edit: it would also allow for a gradual transition from the old model. You could make having a CA-signed key a factor. It just wouldn't be the only factor.
First off, there is a different sort of modern attack that's far worse than most others: attacking the trust model. TLS and PKI haven't been broken, though the way they are used and managed has raised some questions. The thing that can happen here is that a lot of people will just jump ship on it all (and quite honestly, there isn't an alternative right now; the alternative is no security), and that's not good. This is happening in a couple of different ways: first, people don't understand it; second, people are raising fears over foreign CAs being included in browsers and OSes.
It seems like the bigger problem to me is that PKI is a dynamic system while the implementations are all static. When a web site is authenticated, the browser could establish a connection to the CA (over a connection signed by another CA, even) and check whether the certificate has been revoked, whether it was really signed, etc. It could actually verify the third-party authentication, live, in real time. To fake a site you'd need to compromise 2 different CAs in that case. And revocation certificates could be sent and honored in real time.
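A sketch of that quorum idea; the status strings are invented, and a real client would be speaking something like OCSP to each authority:

```python
def live_verify(responses, quorum=2):
    """responses maps verifier name -> 'valid' | 'revoked' | 'unreachable'.
    Any live revocation kills the cert; otherwise we need a quorum of
    independent confirmations, so faking a site means compromising at
    least `quorum` authorities, as suggested above."""
    if any(status == "revoked" for status in responses.values()):
        return False
    return sum(1 for status in responses.values() if status == "valid") >= quorum
```

Note the failure mode: if verifiers are unreachable, this policy fails closed, which is exactly the availability-vs-security trade-off that has kept real revocation checking soft-fail.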
It seems to me like it's all rooted in the pay-for-security model. CAs need to verify each other, the distribution of a new CA needs to be a fairly steep and difficult thing to do (not impossible, but maybe face-to-face contact is required), and any hint of misuse needs to result in revocation.
I think it's time to split encryption protocols. The de facto default of non-encryption, caused by big browser warnings, is preventing web developers from implementing any encryption at all. The browsers are indirectly allowing anyone to snoop on what you view on non-encrypted websites.
I want to use some basic encryption on my connection rather than no encryption when visiting websites like Wikipedia.
> Should Firefox throw a generic warning for a 5 minute
> expired certificate for Wikipedia?
> I want to use some basic encryption on my connection
> rather than no encryption when visiting websites like
> Wikipedia.
Encryption without authentication is useless. If mybank.com only supports encryption, I have no idea whether I'm actually connected to mybank.com, or to the guy running tcpdump in the next booth.
Firefox gets a lot of grief (from idiots) for refusing self-signed certificates by default, but it's a huge credit to their security team that they've resisted the pressure to ease up.
The argument that encryption without authentication is useless seems to be too focused on the security of the connection. The larger the percentage of network traffic that is actually encrypted, the less 'suspicious' encrypted connections look. In general, this is a state of affairs that we want to get to. Using encryption shouldn't make you stand out from the general population.
How is connecting to a site using HTTP any more secure than connecting to a site with HTTPS and a self-signed certificate so long as the user is not presented with any indication that the site is now 'secure?' Even better, how about having an icon that is always in one of two states: "secure" or "insecure." So now the user is presented with something that indicates "insecure" even when they are browsing over HTTP. This more accurately represents the state of affairs. The way browsers present security to the user currently is more along the lines of:
HTTP => I can browse the web and login to Facebook
HTTPS (CA-signed cert) => I can now use my credit card
HTTPS (self-signed cert) => ZOMG! I'M BEING HACKED!
I think there is potentially some middle ground to be found (e.g. a URL prefix which works over SSL but makes no implication of security) but clearly Mozilla decided that making it really irritating without any workaround was the best solution to the problem, regardless of any collateral damage it causes.
I'm asking why an insecure SSL connection is worse than an insecure HTTP connection. If they are both completely insecure, why does Firefox turn one of them into a complete UI nightmare?
If the reasoning is that https:// somehow implies security, then the obvious and sensible solution is to create a new prefix for insecure SSL sites such as httpu://, and throw up a warning page when visiting the https version that links to the insecure SSL version.
I see this as being preferable to the awkward and annoying way Firefox treats self-signed certificates at the moment. I also suspect that had Mozilla taken the lead, the major browsers would have followed suit.
Which is exactly my point.
The fact is, even if the browser so much as displays "https" in the address bar, you are giving the unknowledgeable user a form of positive feedback when none whatsoever is warranted. Strong negative feedback is a much more appropriate solution.
Also, consider that there are probably some users who think they understand the implications of trusting a self-signed cert while they actually do not, and the browser has no way of differentiating this.
It's mostly useless. Anyone trying to eavesdrop needs to expend slightly more resources, and needs to take the risk that there was some out-of-band authentication that they aren't aware of.
No, it mostly gets grief (from sane people) for not also refusing plain HTTP under the same pretenses.
If Firefox silently accepted self-signed certificates, and presented the same UI as an unencrypted page, attackers could intercept requests to https://paypal.com/ and very few users would ever notice.
Additionally, automatically accepting self-signed certificates would render ideas like "HTTPS-only cookies" useless, because now the browser would happily send them to anyone who asks.
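The rule being defended here is simple. A sketch of the browser-side decision; `cert_trusted` means "validated against something", which is exactly what silently accepting self-signed certs would hollow out:

```python
def should_send_cookie(secure_flag, scheme, cert_trusted):
    """A cookie marked Secure is only sent over an authenticated HTTPS
    connection. If any self-signed cert counted as trusted, a MITM
    presenting his own cert would receive the cookie too."""
    if not secure_flag:
        return True                      # ordinary cookie: sent anywhere
    return scheme == "https" and cert_trusted
```

With cert_trusted effectively always True, the Secure flag degenerates into "sent to whoever answers on port 443", which is the objection above.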
I don't believe that. I've run into tons of self signed/invalid certs that weren't attacks. If you're implicitly limiting your statement to large sites (paypal, amazon and the like) then you're probably right. But unqualified, it is definitely not the case.
Well, of course encryption without authentication is useless for authentication.
It is still useful for encryption, though :-)
Generally speaking, and regardless of today's protocols, surely passive eavesdropping is and will always remain easier to accomplish than actively mounting a MITM attack.
The vulnerability here isn't "someone running Firesheep" - that's the exploit. The vulnerability here is "an open WiFi network is a completely trusted medium".
Are you happy to visit a John Doe blog with no encryption because he didn't want to pay for a certificate?
Authentication and confidentiality aren't just two "features" of a protocol. You have to have some degree of both, or you're just wanking.
The crux of the issue is privacy in browsing. Some parts of Wikipedia are pseudo-illegal to view in some parts of the world. Viewing it would put the user on the watchlist or more.
It adds TOFU/POP style behavior to Firefox and really should be a default feature of all browsers.
So the remaining threat is if the attacker gains access to the routing equipment, for instance a public WiFi access point or a government.
In the case of a government doing a MITM attack, the easiest way is to subvert the download sites, so that Firefox, MSIE, etc. include CAs that the government controls. There is no solution for this attack, since both TOFU/POP and CAs are subverted.
In the second case, where public WiFi is under the control of criminals, TOFU/POP has more general applicability. In fact, a change of CA is probably a sign that the connection has been tampered with.
Please don't present a complex topic with 87 slides.
Certified Lies: Detecting and Defeating Government
Interception Attacks Against SSL (Christopher Soghoian and Sid Stamm)
It makes it very clear how every trusted CA in the world currently has the power to circumvent the security of any https-protected website; by themselves or compelled by someone else.
Of course the transition period would be a bit painful if everybody began signing their own certs. Tracking individual certs allows a much nicer transition, as the certs can continue to be signed by global CAs, but then citibank-like solutions would be problematic.
Yes, it will be done less efficiently and with less innovation than the market would provide. That trade-off would be worth it.
What does that mean? It means that when looking at the fun little multi-step certificate-acceptance bells and whistles that FF throws up, I have absolutely no way of knowing if there is a 3rd party in the middle. There is nothing helping me out. And I am a "power user". Show those messages to a regular user and they'll be 3x as dumbfounded. So if 99% of people don't understand WTF is going on, how can we help prevent any fraud?