It's Time to Fix HTTPS (docs.google.com)
376 points by fogus on Mar 27, 2011 | 120 comments

I think it's important to note that, whether or not you agree with this deck that key continuity ("TOFU/POP") is a better approach than PKI, what we're talking about here is a misconfiguration of SSL caused by a political quandary, not a fundamental problem with SSL/HTTPS.

The fact is, if your browser trust roots contain only trustworthy CAs, and you check to make sure you're using a secure site when you expect to, SSL isn't broken. Those sound like two big "ifs", and they may be, but consider: they are both user interface issues:

* There is no collaborative interface for community-rating CA trustworthiness, or even a reasonable interface for pruning CAs out of your browser --- let alone "do I trust certificates delegated from this CA, or just the CA itself". We are still using an interface for this functionality designed in the mid-'90s.

* The "first introduction" problem that key continuity suffers from is, in the SSL case, caused by the fact that the SSL cues in the browser are subtle. But why should that be the case? Is there really no interface anyone can come up with that appropriately signals "this page should be and is secured by SSL"? Because if you have that (or STS), sslstrip doesn't work.

Key continuity's problems, on the other hand, aren't user interface issues. In key continuity schemes, when your browser is first introduced to a new website, you simply are insecure. If at that moment you are MITM'd by an attacker, you might never again be secure. This isn't a UI issue: key continuity protocols don't have the information available to them on first connection to make that decision.
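The continuity check itself is easy to sketch; the hole is precisely the first branch below, where the store is empty and the code has nothing to compare against. A minimal illustration in Python (the store layout and function name are made up):

```python
import hashlib

def check_pin(pin_store, host, cert_der):
    """TOFU check: compare a server's certificate against the pinned one.

    pin_store maps hostname -> SHA-256 fingerprint seen on first contact.
    Returns "first-use", "match", or "MISMATCH".
    """
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pinned = pin_store.get(host)
    if pinned is None:
        # First introduction: nothing to compare against. If an attacker
        # is in the middle *right now*, we pin the attacker's key.
        pin_store[host] = fingerprint
        return "first-use"
    return "match" if pinned == fingerprint else "MISMATCH"
```

Note that "first-use" is the only answer the protocol can give on introduction, which is the whole objection above.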

Think to yourself, if you ran the great firewall of China, which system you'd rather have: the flawed one we have now, or the one that explicitly allows your connection on the first and subsequent connections to a new service to be hijacked?

I am our resident SSL/TLS apologist. But let me be clear that this deck is good, and raises a lot of valid concerns. I think SSL/TLS is a good system, but I share the (implied) disgust at the way our PKI has been managed. Some kind of reform is needed. I just don't think it needs to come with additional engineering sacrifices.

Raise your hand if you've been to a PGP signing party.

Key management is the weakest link of HTTPS/TLS. And the slide deck does a good job of identifying some of the forces that weaken it still further. It assumes (I believe correctly) that the various actors in this play are all working with their own self interest at heart.

So TOFU/POP, to be an improvement (rather than simply training people to click "OK" on a lot of pop-ups), needs to have some counteracting forces.

But we don't have CAs that get their CA status revoked when they are shown to have given out untrustworthy certificates, nor do we have liability concerns for browser makers who fail to warn you that something suspicious was going on, and we don't have nation-state actors who will transparently step into the game.

I don't have a good answer, but something that is purely algorithmic is going to be challenging to get right. We trust the bank when we walk up to the brick-and-mortar office because the cost of fooling average people with that sort of ruse would be too high. However, we currently have no way to transfer the trust in the bank to its presence on the web. An example of that would be taking home a 'certificate' key from the branch manager and then using it at home.

I started looking at this as part of some Java security work in the 90's and got to appreciate just how difficult a problem set it is. And worse, the group that does the infrastructure has to be 100% open so that folks will trust it and 100% free so that it will be widely adopted. That makes for a very very difficult business model.

I used to work with Allan Schiffman, although well after Terisa, so I have my own share of SSL/TLS distaste, but I agree that it's basically workable, and I agree with you that the current PKI is broken. However, I think Chris, Seth, and Peter are right about how badly broken it is, and you're wrong. Specifically:

> Think to yourself, if you ran the great firewall of China, which system you'd rather have: the flawed one we have now, or the one that explicitly allows your connection on the first and subsequent connections to a new service to be hijacked?

The one we have now explicitly allows your connection on the first and subsequent connections to a new service to be hijacked by the Chinese government, assuming they can get your packets to go to them, because CNNIC has a CA cert in your browser. (Have you removed it? I haven't. I didn't know it was there.) Additionally, it allows any connection to any service, old or new, to be hijacked, even if you previously connected to that service without being hijacked, and it doesn't leave any evidence that this has happened. TOFU/POP solves these additional problems, without solving the introduction problem.

It's possible to solve the introduction problem --- which is what CAs do --- in a variety of more or less secure ways. SSH gives you a key fingerprint and prompts you, which pretty much pushes the problem back on the user. But you can do better than that in a bunch of ways.
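For reference, the fingerprint SSH shows is just a digest of the server's key; a toy Python version of the old colon-separated MD5 format (modern OpenSSH defaults to a base64 SHA-256 form instead):

```python
import hashlib

def fingerprint(key_bytes):
    """Render a key as a colon-separated MD5-style fingerprint,
    as older SSH clients displayed on first connection."""
    digest = hashlib.md5(key_bytes).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

The security then rests entirely on the user comparing that string against one obtained out of band.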

... so, remove the CNNIC cert.

What's the issue here? CNNIC isn't trustworthy. You should get a warning every time you visit a CNNIC-signed site.

I don't understand how people can build entire advocacy campaigns on getting other people to switch from IE, but removing untrustworthy certificates? Time to scrap the protocol and move to a crappier one that we can at least get our heads around!

The issue is that Firefox still includes CNNIC by default. A few hundred (thousand?) people on the planet know that's a horrible idea.[1] When the community can't manage a no-brainer, pro-user change from the Mozilla foundation, maybe there's something to the compounded self-interest problem?

This isn't to say TOFU/POP is the right move. But there's a very real problem here. One that HTTPS's strengths do nothing to address or offset.

[1] And of those, I doubt they all remember to check their CA list at every browser install/upgrade/borrowed laptop browsing situation.

Removing the CNNIC cert is not an option if you actually need to talk to sites that use it. And at this point there are literally hundreds of untrustworthy root CA certs in browsers. Just because I want to talk to some server in Turkey doesn't mean I ought to rely on a Turkish organization to vouch for the identity of Bank of America.

This is not a problem with the protocol, but with the criteria used for trusting a signature.

I know you're a reasonable guy. But, look: you either trust CNNIC or (like me), you don't. If you can't trust CNNIC, you axe their cert and click through to make persistent exceptions to the sites that need that cert.

At the end of the day, we're talking about a very simple UI problem. Yes, our notion of "trust" is broken. But it's broken in large part because nobody has put any effort into communicating to end-users enough information to enable them to make informed decisions. Then, after putting no effort into the problem, we look at it and say "you're asking my mom to understand X.509 CNs!", as if that were the extent of what engineering could bring to bear against this problem.

Then we throw up our hands and say "let's make a simpler protocol; who cares if it predictably and reliably allows middlemen to hijack our connections, at least we won't have to explain X.509 to our moms".

Can you blame me for being a bit cynical about this?

The fundamental problem is with X.509... It uses a flat namespace, so any CA can issue a valid certificate for any domain on the Internet. Much of the problem could have been avoided if it used a proper hierarchical structure, e.g. mirroring DNS, so that only a .com CA could create a valid google.com certificate.
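Hypothetically, the check such a hierarchical PKI would enforce is one line. The scoping rule below illustrates the proposal, not anything today's X.509 validation actually does:

```python
def ca_may_issue(ca_suffix, domain):
    """Hierarchical issuance rule (hypothetical): a CA scoped to a DNS
    suffix may only sign names under that suffix. Today's PKI has no
    such constraint in practice; every root can sign any name."""
    return domain == ca_suffix or domain.endswith("." + ca_suffix)
```

The `"." + ca_suffix` form matters: a plain suffix match would let a "com" CA sign "evilcom".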

Completely agreed. Note under such a system we wouldn't have even had to create separate CAs - it would have been natural for DNS registrars to take on this role, signing a certificate for your domain at the same time that they delegated your name server records. Certificate expiry dates could have been directly linked to domain registration expiry dates.

Under such a system, you'd know that you were only at the mercy of the Chinese government when browsing .cn sites - which is neither surprising nor too much of a worry.

Why do you trust DNS registrars more than you trust CAs?

It is a logical conclusion that I trust the set {my DNS registrar} at least as much as I trust the set {my DNS registrar} ∪ {all CAs in the world}.

Not to mention that this would fix some of the incentive problems. DNS top-levels would effectively be competing on the strength of their certificate-issuing properties - if a particular TLD was known to have a more corruption-resistant registrar, then that TLD ought to become more sought-after by security-conscious sites.

HTTPS doesn't depend on you trusting your DNS registrar!

There is a transitive trust relationship, because a CA will typically supply a certificate to anyone who can prove that they control the domain, and the DNS registrar has ultimate control of the domain.

> a CA will typically supply a certificate to anyone who can prove that they control the domain

This is true, but it is also a problem that needs to be fixed. A CA should not only check whether you control the domain, but should also check your (real-world) identity, and in the case of a company, should also check other documents. Yes, most don't, or do it only ineffectually, but they should do it.

As always, trustworthy encryption boils down to identity assurance. Even in real-world personal interactions, identity assurance is subject to a leap of faith ("I believe this person is who they say they are based on the evidence they presented"). As such, is this problem even solvable? People rely on a much less stringent level of assurance to create secure connections on the Internet, because they want it handled within a single mouse click, and that expectation isn't going to change.

At least with a certificate issued by the registrar, there's some logical reason to believe that it was issued to whoever purchased the domain (whoever that is), which is often enough to establish an encrypted connection at a level the user expects. Significant problems remain, however: 1) Most domain owners have no idea how to generate a key and CSR (the registrar shouldn't do it) and 2) a truly reliable system for certificate revocation must exist to handle domain transfers. These aren't new problems, they just don't go away by shifting to this "easier" approach.

You are already trusting DNS more than CAs. The CN is set to your domain, not to mention that the authorization process for many CAs uses email, which depends on DNS.

This is just another reason why it's so important to get technologies such as DNSSEC and DNSCurve working. As many have pointed out, it makes sense to replace X.509 with DNSSEC for exactly this reason, so you can verify the SSL certificate by simply doing a secure DNS lookup.

Danica Patrick isn't inherently trustworthy?

The IETF "DANE" working group is specifying a way to do this using DNSSEC.
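For the curious, the DANE idea boils down to publishing a hash of the certificate (or its public key) in signed DNS and comparing it at connect time. A simplified Python sketch of the matching step, assuming the "full certificate" selector and ignoring the certificate-usage field:

```python
import hashlib

def tlsa_matches(cert_der, matching_type, association_data):
    """Compare a certificate against DANE-style association data.
    Matching type (per the draft records): 0 = exact bytes,
    1 = SHA-256 digest, 2 = SHA-512 digest."""
    if matching_type == 0:
        return cert_der == association_data
    if matching_type == 1:
        return hashlib.sha256(cert_der).digest() == association_data
    if matching_type == 2:
        return hashlib.sha512(cert_der).digest() == association_data
    raise ValueError("unknown matching type")
```

Of course, this only moves the trust question to the DNSSEC signing chain.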

> In key continuity schemes, when your browser is first introduced to a new website, you simply are insecure. If at that moment you are MITM'd by an attacker, you might never again be secure. This isn't a UI issue: key continuity protocols don't have the information available to them on first connection to make that decision.

What if you add that information to a key continuity system? I feel like this is an obvious idea, but I've never heard of a system using it. It's simple: when you first see a certificate you send it to a trusted third party to check. That third party doesn't need to be or even know about the CA who originally issued the cert; all it has to do is make its own connection to the server and make sure it gets the same cert you did. That protects against any MITM on your end of the connection.

To succeed, a MITM would have to control either the server's connection (which becomes the server operator's problem), or both your connection and the trusted third party's connection (which becomes the trusted third party's problem). I'd rather trust a single third party of my choosing than a huge list of CAs chosen by an opaque and somewhat arbitrary process.
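The comparison step of such a notary scheme is simple; here's a toy Python sketch, with the quorum policy being a made-up parameter rather than anything from the slides:

```python
def notary_verdict(client_fp, notary_fps, quorum=2):
    """Compare the cert fingerprint the client saw with fingerprints
    reported by independent notaries. Accept if at least `quorum`
    notaries saw the same cert; disagreement suggests a MITM near
    the client (or near the server, which is the operator's problem)."""
    agree = sum(1 for fp in notary_fps if fp == client_fp)
    return "accept" if agree >= quorum else "reject"
```

The interesting design questions are all outside this function: who runs the notaries, how their paths to the server differ from yours, and how you bootstrap trust in them.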

There are multiple hops from you to the server. That scheme would only verify that BOTH your and the third party's initial hops haven't been simultaneously MiTM'ed. It's very possible the routes overlap for several hops.

And even if they didn't, I wouldn't consider hackers getting my bank info-- or governments reading my email-- the server operator's problem.

The middle hops are much (much) harder to MITM than the initial hops. If hackers control the bank's internet connection then they get everyone's bank info, which is absolutely the bank's problem (and as such they would take measures to prevent it, like third-party monitoring of their certificates from other points on the Internet). As for governments reading email, the current CA system hardly prevents that. Under this system, you could detect MITM after the fact by using a trusted channel to compare the client's saved cert with the server's real cert. That would expose large-scale MITM tampering.

I don't see any reason why it would be much harder to hit the middle hops for a MiTM attack.

It certainly wouldn't be worth setting up a MiTM attack between my house and the first Comcast router my cable hits. That approach doesn't scale. Sure, maybe if I'm Warren Buffett (or some mobster, if we're talking about the government) it might make sense. And if they did do that, then NO source I connect to is a trusted channel, as they could MiTM any and all connections.

The scheme wouldn't (always) protect you from an attacker setup somewhere in between you and your destination, like the NSA servers at AT&T, or the Firewall of China. Your trusted source could hit the same bad path.

And by your own acknowledgment, the scheme doesn't protect you from a hacker breaking into a bank's data center. If a hacker is smart, and only grabs say every 100,000th credit card, how long will it take to isolate the location of the exploit?

So the scheme doesn't really protect against anything with any level of confidence.

You are getting out of scope. A hacker breaking into a bank's data center is not at all related to the discussion at hand.

You get the initial trusted source cert through a secure channel in exactly the same way you get CA certs today, so this scheme is just as secure as SSL against an attacker who controls your entire connection.

Sure, the scheme is only as good as your trusted source. But the trusted source could be very good. It could have multiple servers in different ISPs and even different countries verifying certs for you, and could use other methods of verification as well.

The bank can easily check if their connections are getting MITM attacked. The hacker has to modify the certs seen by anyone who contacts the bank, even if he doesn't MITM every connection, and that would be easy for the bank to detect.

Perspectives (mentioned in the slides) goes a couple better: multiple third parties that poll sites and keep a history of what cert they offer.

> The fact is, if your browser trust roots contain only trustworthy CA's

That would be the empty set, because neither have I any idea of what criteria I'd use to trust a CA [1], nor do I trust CAs not to give up their private keys to the govt given even the slightest pressure.

[1] Thinking about it a bit more: I'd trust a CA that I'd set up myself. But I wouldn't necessarily trust the hardware it was running on.

> Is there really no interface anyone can come up with that appropriately signals "this page should be and is secured by SSL"?

The problem with first introduction is that while this is great for technical people, it will not pass the Bill Gates' mum test. I'm not saying that TOFU/POP is or isn't the right approach, I'm highlighting that there would be some resistance to change from people who don't understand and don't want to understand.

Introducing (from a tech luddite's perspective) a complex intermediary step to getting online to a bank or ecommerce site is going to be something that people work against. Some people don't actually care that the government may listen in (for various values of people, care and government).

I pretty much agree with 99% of what you're saying. I think this is a big problem and also political. I have no idea what the answer is though.

> Think to yourself, if you ran the great firewall of China, which system you'd rather have: the flawed one we have now, or the one that explicitly allows your connection on the first and subsequent connections to a new service to be hijacked?

This would be fairly straightforward for the Chinese government to implement. Another way to do this is to get your CA accepted by default in various web browsers (as has happened with CNNIC) and abuse that.

I agree that the "first introduction" is an issue, but the use of Perspectives and key signing seems to be a fairly strong attempt to overcome that.

A stronger disagreement: after the presentation puts so much time into linking the structure of PKI and the social incentives that mess it up, to just handwave that away as "a misconfiguration of SSL caused by a political quandary" is kind of missing the point. Any system designed with security in mind must take into account the human factor from the start; human behavior cannot be changed independently of the system's structure.

YES. Please somebody make this.

Paint me paranoid, but CA-assisted man-in-the-middle attacks are a real source of concern for me. I would absolutely love to have the SSH-style trust model for my HTTPS connections, and I will pay good money for an implementation. A simple browser plugin that implements TOFU/POP on top of PKI would be an excellent start.

You can have it today! No money or plugins needed. And it only takes a minute to get started.

Just delete your CA list.

Go into your browser's preferences. Find the "security" section. It should have a button to open up the list of trusted certificate authorities.

Delete them all.

Now, every time you connect to a secure page, one of two things will happen:

1. If you've never visited that page before, your browser will prompt you for instructions. Use whatever means you like (second browser, remote server, call them, etc) to verify the server's SSL fingerprint. If it checks out, mark it as valid (aka "add an exception").

2. If you have visited that site before, its exception will already be present in your browser, and it will work like normal.

After a few days of this, you'll see why the SSH model hasn't seen widespread adoption for HTTPS.

What you describe is exactly the behavior the author calls "Firefox' war on self-signed certificates". The actual proposal calls for a browser that does NOT prompt the user all the time. It would just prompt when something extraordinary happens (such as the server's already known certificate changing).

> After a few days of this, you'll see why the SSH model hasn't seen widespread adoption for HTTPS

That's not really true. It has simply never been done for browsers, because many people think it's an abomination and, to paraphrase well-known HNers from previous discussions, supposedly useless and little more than obfuscation instead of encryption.

It's not the first time TOFU/POP has been suggested on HN, too. I find it curious this proposal got so many upvotes here today. I don't remember that many people coming to mine and other people's defenses when we suggested the exact same thing a few months back and crypto gurus were rending their hair like we had committed the most stupidly blasphemous act conceivable to modern computer science.

The first time you connect to an SSH server, your client will display that server's key, and prompt for instructions.

Notably, SSH will not automatically connect. It asks first. That's why it's useful. Additionally, SSH is such a niche tool that its users can be expected to be security-minded -- for example, by checking that the fingerprint matches the expected string instead of just clicking through.

If browsers automatically trusted a page the first time they hit it, attackers could just redirect the user to http://paypa1.com/ and feed them a fancy, green-url certificate.

Replicating SSH's UI is not the point here.

> Notably, SSH will not automatically connect. It asks first. That's why it's useful.

How many sysadmins really do check their newly installed server's fingerprint? They just type "yes" on first connect and grab the server's certificate. I believe very few people actually do manual lookups when SSH pops the first-connection warning.

Most hosting providers will send you cleartext passwords for your server by email. When you log onto that server for the first time, there is no easy way to tell if you're the victim of an elaborate MITM attack. So that's happening right now.

> If browsers automatically trusted a page the first time they hit, attackers can just redirect the user to http://paypa1.com/ and feed them a fancy, green-url certificate.

Indeed they can. Nobody's suggesting otherwise. By the way, I can do that today, by registering a cert for paypa1.com, just to see how many people actually take a second look at the content of the certificate.

You really need to start looking more carefully at those SSH messages. In particular, the fact that you get an SSH warning when a site's key suddenly changes is 99% of the security value of SSH.

It was an old Usenix conference trick --- I think it's Dug Song's, but I'm not really sure --- to snarf people's SSH logins by capitalizing on their lack of interest in those messages. It's a trivial attack.

I wasn't talking about ignoring any SSH warnings that occur when a site's key suddenly changes. Really, look at my comments, that's not at all what I said, is it?

You are misrepresenting my position and then attacking me for it.

There is no difference between your fingerprint changing and you not having existing knowledge to compare it to. Your action in either case should be the same.

I disagree. If I get a warning out of the blue, yeah, I take it seriously: something unexpected is going on, and I'm not going to trust the server until I know what it is. But if I know there's going to be a server upgrade overnight and I get an SSH warning in the morning, I figure odds are good that it's because of the new server rather than a coincidentally timed MITM attack. That's not perfect security, certainly, but as long as MITM attacks are rare it doesn't cost you that much of SSH's value.

(All bets are off if someone is targeting your organization specifically, of course: they'd presumably have heard about the server changes in advance and take that opportunity to attack. If I considered that a serious concern in my circumstances then I'd ramp up my security level across the board.)

I have had security-conscious hosting providers who did proper fingerprint validation over the phone for SSH fingerprints, each party reading half. It's rare, though. Sign of a good provider...

However, see-also DNSSEC and http://benctechnicalblog.blogspot.com/2011/03/sshfp-dns.html

Though I guess DNSSEC puts you back at PKI ...

Ha-ha-ha. Not too practical, is it?

There are my own servers and there is the rest of the Internet. I want to use self-signed certs on my servers and I want these to be sticky. If any of these servers would suddenly start presenting different certificates (most notably, signed by a CA that sits on browser's trusted list), I want to know about this lovely fact.

For the rest of the Internet I can live with existing PKI.

That plugin already exists - if you run Firefox, install "Certificate Patrol". It will tell you whenever a site's certificate changes, and it'll show the relevant fields of the old and new certificates side-by-side, so that you can decide whether the change looks benign or malignant.

Ever since they changed Firefox's behavior when an unknown certificate is encountered this would be especially crippling.

I can understand why this might be a good feature to have for end-users and internet novices but why isn't there an option to revert it back to the old "yes/no" box instead of having to click 5 times when I know exactly why there is a self-signed certificate in use...

Also, considering this:

> Global PKI, as currently implemented in browsers, does not work.

makes this feature even more dubious and even worse, together with the green SSL indicator it gives a false sense of security.

> Ever since they changed Firefox's behavior when an unknown certificate is encountered this would be especially crippling.

Yes, that is possibly the single most annoying thing about any piece of software ever written, at least for those of us who develop secured web sites (and therefore self-sign all kinds of stuff before going to production).

If it causes that much irritation, have you considered making your own CA, adding it to your browsers, and then using that to sign your temporary certificates?

It's quite straightforward to do this with openssl.

I suppose that would be a possibility, but we'd still have to configure every browser on every machine to support that CA and then generate a certificate for every new project. I'd rather just be able to click once in my browser to accept the unsigned certificate, instead of jumping through tedious hoops that I don't believe really help anyone.

So rather than click once to install your internal CA you'd prefer to click once for every new site you add?

You're assuming it would be a single CA. That is, unfortunately, unlikely: we work on a variety of projects for various clients and in conjunction with various other contractors/development teams, and each project will have its own requirements for what needs to be signed and by whom. About the only thing they do have in common is that no-one goes out and spends lots of money on certs for every random development server they're using, they all self-sign in one form or another.

> If it causes that much irritation, have you considered making your own CA, adding it to your browsers, and then using that to sign your temporary certificates?

A good fix, but in light of the looming issues described in the link above: why invest all that time implementing and then defending this feature in the first place - for a negligible (at best) security benefit?

How is key continuity any better than PKI against man-in-the-middle attacks? The tradeoff is simple: every first introduction between a browser and a website is insecure, so that subsequent connections might be less insecure.

This doesn't look like a win to me. Browsers are introduced to websites all the time.

To me, this solution looks like "let's accept an ambient loss of ~5-10% of all connections to MITM attacks, because it's politically too hard to configure the PKI-based SSL protocol properly".

Read this sentence again: A simple browser plugin that implements TOFU/POP on top of PKI would be an excellent start.

That would - at least to start with - be additional security.

I don't disagree with that.

> Browsers are introduced to websites all the time.

I care most about a handful of sites, most of which are controlled either by me or by people I trust. My co-located mail server would be one of these sites.

How many times do you set up new computers, or install new browsers, or access a site you care about from a friend's computer? And: you're someone particularly careful about which sites you access. But how is this going to work for my mom? If the problems we're concerned about are government wiretapping and fraud, how does creating a guaranteed and predictable flow of insecure connections not make the problem worse?

Well, it appears that you are responding in the context of the original paper ("let's fix this mess") and I am thinking in the context on my own, very specific immediate need. I do want TOFU/POP now.

Use Firefox only to access your few trusted sites, and use Chrome or Safari to access the rest of the Internet. Delete all the CA certificates from Firefox. Visit your trusted sites, adding their certs to Firefox as the warnings pop up.

There. You now have "tofu/pop".

Strange thing there's no emacs in the picture :) Call me eccentric but I don't want to use two browsers, or even two browser profiles. I'd rather make a one time setup change and then go back to my established way of doing things.

"every first introduction between a browser and a website is insecure, so that subsequent connections might be less insecure."

I think the claim is that this is already the case. There are certificate signing certificates in the hands of people who should not have them and there's no accounting of who has them (beyond the government mentioned) or what they're doing with them. I don't think this presentation is suggesting deleting all your installed CAs, just monitoring whether things are changing under you and coordinating those observations with others you trust. The coordination certainly goes beyond the SSH model.

If you believe that, and you don't want to apply software engineering to that problem, then just delete all your CA keys and accept back the ones corresponding to sites you trust. No change to SSL is required to make that process work.

> SSH-style trust model for my HTTPS connections and I will pay good money for an implementation

You can do that now. Note the cert ID first time you login. If it is different the next time you login, then it is a different cert.

This would be an easy browser extension.

Let's think rationally. Who can certify that your site is, indeed, yours?

- First, hosting provider. They can say "yeah, that stuff's on our network". There's IPsec, but no public keys float around, so it's only for VPNs now.

- Second, DNS registrar. They can say "yeah, that's the domain registered with us". That's already invented and called DNSSEC, although browsers should query for CERT records.

- Third, the notary. That's the current X.509 PKI with CAs. It is NOT broken, it's just insufficient and misunderstood.

- Fourth, yourself. Self-issued certificates can be, in fact, highly trusted, if you met site owner personally and verified the fingerprints. Or they can mean virtually nothing, if you don't know their source.

- Fifth, others. "We cooperate with those guys, they provide us nice stuff, everything works smoothly, and we trust them", "Been there, had everything I wanted, seen nothing weird, trust those guys". WoTs are extremely powerful, as they correspond to natural human trust networks. Implementation is, of course, hard.

An obvious part of the solution would be to allow multiple signatures, so the trust diagram would be a graph, not a tree.
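With multiple signatures, "is this cert trusted" becomes a reachability question over that graph; a toy Python sketch:

```python
from collections import deque

def is_trusted(edges, roots, target):
    """Breadth-first search over a signature graph: `edges` maps a
    signer to the identities it vouches for; `roots` are identities
    you trust a priori. The target is trusted if any root reaches it
    through a chain of signatures."""
    seen, queue = set(roots), deque(roots)
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for vouched in edges.get(node, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append(vouched)
    return False
```

A real WoT would also weight edges and cap chain length, which is where the hard part lives.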

At first I thought "The title is bait.... what a waste of time, etc...". Clearly the problem isn't HTTPS, it is the current CA structure. I then went on Chris' site (http://noncombatant.org/) and there he summarizes it well: "The problems are social and economic more than technical. The technical problems are in usability, not in cryptography. In general, security people should start learning about usability."

I still argue that there is no 'usability' problem, but rather 'ignorance'. The fact that HTTPS has been hailed as 'secure' to the user (who has no formal understanding of what 'secure' means) is what has led up to the problem he is venting about...

I’m not sure that usable security is possible. In general, the pattern is that security requires less-than-ideal user experiences. For example, being emailed your password when you forget it would be “nicer” than getting a password-reset link, but requires a breach of server-side best practice; not having to use a password at all would of course be the easiest and least secure “solution” for authentication; the most secure measures require two-factor authentication and are necessarily the most annoying.

The observation of this pattern is perhaps obvious, but important.

I like the ideas presented in here.

Summary of what I think the main points are:

- certificate authorities are not trustworthy currently. It is deceptive and potentially dangerous to pretend that they are.

- the solution is actually a shift in what we think security is: rather than saying "this site is trustworthy" or "this site is not", we consider multiple factors to decide whether it is likely to be trustworthy or not.

My thoughts: I think the second point is an excellent idea, and I would take it farther by presenting that information to the user.

It would be great if there were a little meter in my URL bar that was more full (or changed color, or whatever) when a site used more secure practices. It would be even better if I could click on that meter and get a list of things the site did, with a note by each one saying "a trustworthy site would do this" or "a trustworthy site would probably not do that".

This would have the benefit of making the security model transparent to the user, which would let them make better decisions.

It would also allow for experimentation with new security ideas, if we allow many possible factors and don't expect each site to use all of them. You could easily add a factor for sneakernet key-signing, for instance: if you happen to have a cert directly from the company (e.g. your bank gave you a USB key), your browser could take that into account.

Edit: it would also allow for a gradual transition from the old model. You could make having a CA-signed key a factor. It just wouldn't be the only factor.

I'll chime in on this. Some buddies and I have been debating this since the Comodo stuff first started leaking out.

First off, there is a different sort of modern attack that's far worse than most others: attacking the trust model. TLS and PKI haven't been broken; the way they are used and managed has raised some questions, though. The danger is that a lot of people will just jump ship on it all (and quite honestly, there isn't an alternative right now; the alternative is no security), and that's not good. This is happening a couple of different ways: first, people don't understand it; second, people are raising fears over foreign CAs being included in browsers and OSes.

It seems like the bigger problem to me is that PKI is a dynamic system and the implementations are all static. When a web site is authenticated, the browser could establish a connection to the CA provider (over a connection signed by another CA, even) and check whether the certificate has been revoked, whether it has been signed, etc. It could actually verify the third-party authentication, live, in real time. To fake a site you'd need to compromise two different CAs in that case. And revocation certificates could be sent and honored in real time.

It seems to me like it's all rooted in the pay-for-security model. CAs need to verify each other, the distribution of a new CA needs to be a fairly steep and difficult thing (not impossible, but maybe face-to-face contact is required), and any hint of misuse needs to result in revocation.

Should Firefox throw a generic warning for a 5 minute expired certificate for Wikipedia?

I think it's time to split encryption protocols. The de facto default of non-encryption, driven by big browser warnings, is preventing web developers from implementing any encryption at all. The browsers are indirectly allowing anyone to snoop on what you view on non-encrypted websites.

I want to use some basic encryption on my connection rather than no encryption when visiting websites like Wikipedia.

  > Should Firefox throw a generic warning for a 5 minute
  > expired certificate for Wikipedia?

  > I want to use some basic encryption on my connection
  > rather than no encryption when visiting websites like
  > Wikipedia.
Then use Wikipedia's secure URL at https://secure.wikimedia.org/wikipedia/en/wiki/Main_Page

Encryption without authentication is useless. If mybank.com only supports encryption, I have no idea whether I'm actually connected to mybank.com , or to the guy running tcpdump in the next booth.

Firefox gets a lot of grief (from idiots) for refusing self-signed certificates by default, but it's a huge credit to their security team that they've resisted the pressure to ease up.

The argument against self-signed certificates seems to be that people will get a false sense of security because they think that the connection is 'secure.' How about accepting the connection but not displaying the normal visual cues that say "this site is secure?" (e.g. the lock icon) Is that a significantly worse alternative?

The argument that encryption without authentication is useless seems to be too focused on the security of the connection. The larger the percentage of network traffic that is actually encrypted the less 'suspicious' encrypted connections look. In general, this is a state of affairs that we want to get to. Using encryption shouldn't make you stand out from the general population.

How is connecting to a site using HTTP any more secure than connecting to a site with HTTPS and a self-signed certificate so long as the user is not presented with any indication that the site is now 'secure?' Even better, how about having an icon that is always in one of two states: "secure" or "insecure." So now the user is presented with something that indicates "insecure" even when they are browsing over HTTP. This more accurately represents the state of affairs. The way browsers present security to the user currently is more along the lines of:

  HTTP => I can browse the web and login to Facebook
  HTTPS (CA-signed cert) => I can now use my credit card
  HTTPS (self-signed cert) => ZOMG! I'M BEING HACKED!

The argument isn't whether encryption without authentication is useless, but whether encryption without authentication is significantly worse than useless and is therefore deserving of Firefox's awkward hoop-jumping UI. I'm not convinced.

I think there is potentially some middle ground to be found (e.g. a URL prefix which works over SSL but makes no implication of security), but clearly Mozilla decided that making it really irritating without any workaround was the best solution to the problem, regardless of any collateral damage it causes.

This is not a hard concept to understand. You can't have secure encryption without trustworthy keys. Without the "authentication" functionality of SSL, you can't trust keys, because a man in the middle can simply swap them out of your session. When we say "you can't have encryption without authentication", we mean that every mechanism for agreeing on an AES key with a counterparty over the Internet relies on some form of authentication.

I understand the concept. I don't understand why you're pointing out the obvious to me, nor how your reply is a relevant response to my post.

I'm asking why an insecure SSL connection is worse than an insecure HTTP connection. If they are both completely insecure, why does Firefox turn one of them into a complete UI nightmare?

If the reasoning is that https:// somehow implies security, then the obvious and sensible solution is to create a new prefix for insecure SSL sites such as httpu://, and throw up a warning page when visiting the https version that links to the insecure SSL version.

I see this as being preferable to the awkward and annoying way Firefox treats self-signed certificates at the moment. I also suspect that had Mozilla taken the lead, the major browsers would have followed suit.

The plain HTTP connection doesn't claim to offer any security. You know not to give it your credit card number or password. What trust can you extend the "poorly encrypted" protocol?

You should give it the exact same amount of trust as a plain HTTP connection.

Which is exactly my point.

Then why the hell would you even use it?

The fact is, even if the browser so much as displays "https" in the address bar, you are giving the unknowledgeable user a form of positive feedback when none whatsoever is warranted. Strong negative feedback is a much more appropriate solution.

You would use it because it makes encrypted traffic on the network the norm. It makes it impossible to decipher your traffic without an active deliberate security attack instead of a passive sniffing session. It means when a traffic sniffer happens on your traffic they will get (plain text, encrypted data) and they won't know whether the encrypted data is 'good' or 'bad' and will have an easier time with the plain text - the "just have to run faster than the other guy" approach.

It's not encrypted if the keys aren't secure.

It filters out some kinds of common attacks, e.g. most (all?) uses of Firesheep session-cookie snooping at coffee shops. Of course, for us nerd types https isn't really necessary for that, because I can just websurf over an ssh -D proxy to a VPS, to encrypt the coffee-shop side of the link.

Browsers are already having a really hard time making users of average proficiency understand SSL feedback and act wisely in response to warnings. Whatever special case they add to the UI to accommodate geeks using self-signed certs becomes yet another complexity that the normal user has to understand.

Also, consider that there are probably some users who think they understand the implications of trusting a self-signed cert while they actually do not, and the browser has no way of differentiating this.

Encryption without authentication is useless.

It's mostly useless. Anyone trying to eavesdrop needs to expend slightly more resources, and needs to take the risk that there was some out-of-band authentication that they aren't aware of.

Firefox gets a lot of grief (from idiots) for refusing self-signed certificates by default, but it's a huge credit to their security team that they've resisted the pressure to ease up.

No, it mostly gets grief (from sane people) for not also refusing plain HTTP under the same pretenses.

An invalid certificate is far more likely to be an attack than a misconfigured server.

If Firefox silently accepted self-signed certificates, and presented the same UI as an unencrypted page, attackers could intercept requests to https://paypal.com/ and very few users would ever notice.

Additionally, automatically accepting self-signed certificates would render ideas like "HTTPS-only cookies" useless, because now the browser would happily send them to anyone who asks.

> An invalid certificate is far more likely to be an attack than a misconfigured server.

I don't believe that. I've run into tons of self signed/invalid certs that weren't attacks. If you're implicitly limiting your statement to large sites (paypal, amazon and the like) then you're probably right. But unqualified, it is definitely not the case.

Plain HTTP isn't lying to the end-user.

Would you support if browsers made password fields not-starred-out on HTTP sites? Just to make it clear that there is no implication of security there?

No, because that would be stupid.

Those people aren't "idiots," they're normal (non-geek) people. They just want to buy stuff on Amazon or whatever and not have their credit card number stolen.

Re-read my post -- the idiots are people who believe Firefox should accept self-signed certificates by default. Amazon has nothing to do with this.

Forget the Amazon example (there are plenty of legit sites that self-sign, like my school's online grading system). It scares the hell out of your average computer user, and it doesn't provide much gain.

The HTTPS Everywhere extension auto-encrypts Wikipedia.


Encryption without authentication is useless. If mybank.com only supports encryption, I have no idea whether I'm actually connected to mybank.com , or to the guy running tcpdump in the next booth.

Well, of course encryption without authentication is useless for authentication.

It is still useful for encryption, though :-)

No, it's not. To encrypt data between two parties who don't know each other, both sides have to agree on a key. There is no protocol that does that securely in the presence of a man in the middle without a "tiebreaker"; the tiebreaker SSL PKI uses is certificates.
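The "tiebreaker" point can be made concrete with toy Diffie-Hellman numbers (parameters far too small for real use, chosen only for illustration). Mallory simply runs one honest-looking key agreement with each side, and the math works out perfectly for everyone:

```python
# Toy Diffie-Hellman MITM: without authentication, Mallory negotiates
# one key with Alice and another with Bob, and neither can tell.
p, g = 2087, 5                 # toy prime and generator -- NOT secure sizes

def dh_public(secret):
    return pow(g, secret, p)

def dh_shared(their_public, secret):
    return pow(their_public, secret, p)

a, b, m = 15, 27, 99           # Alice's, Bob's, and Mallory's secrets

# Mallory intercepts both public values and substitutes her own.
key_alice_mallory = dh_shared(dh_public(m), a)  # what Alice computes
key_mallory_alice = dh_shared(dh_public(a), m)  # Mallory's copy of it
key_bob_mallory   = dh_shared(dh_public(m), b)  # what Bob computes
key_mallory_bob   = dh_shared(dh_public(b), m)  # Mallory's copy of it

# Each side shares a perfectly valid session key -- with Mallory,
# who decrypts, reads, and re-encrypts everything in between.
assert key_alice_mallory == key_mallory_alice
assert key_bob_mallory == key_mallory_bob
```

This is why SSL bolts certificates onto the key exchange: some out-of-band fact has to distinguish Bob's public value from Mallory's.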

But doesn't (unauthenticated) encryption at least protect against eavesdroppers who don't have the ability to modify the stream, i.e. to mount a MITM attack?

"eavesdroppers who don't have the ability to modify the stream" are a nice fairytale, but they don't really exist in practice, with current network protocols. DNSSEC might change this, once it becomes universal.

What about Firesheep users? Surely there are many people who find it a lot easier to just capture some WiFi traffic via libpcap, Wireshark or the like than to set up a fake WiFi access point, poison a DNS cache, or mount other such attacks that would allow them to actually modify traffic.

Why are we talking about defenses that are defeated by just a couple lines of code? Firesheep could use pcap_write in addition to pcap_loop and redirect connections. What's the point of a defense that breaks Firesheep 1.0 only to fall to Firesheep 2.0?

Isn't that a bit like saying that it's pointless for policemen to wear bulletproof vests because there could always be a sniper aiming at their head?

Generally speaking, and regardless of today's protocols, surely passive eavesdropping is and will always remain easier to accomplish than actively mounting a MITM attack.

No. The opposite is true. It's actually easier to MITM in 2011 than it is to sniff passively; the MITM only needs to play packet games long enough to get the victim to connect to her.

Interesting, I didn't realize that. Since you're tptacek I'll take your word for it :-)

The only reason Firesheep doesn't modify traffic is because it doesn't have to. Faking DNS replies or similar would be trivial to add, were it needed.

The vulnerability here isn't "someone running Firesheep" - that's the exploit. The vulnerability here is "an open WiFi network is a completely trusted medium".

Given the recent EFF story about AT&T shunting all their traffic through NSA computers, I would say it most certainly exists in practice. Basic encryption with no authentication doesn't hold up at all to directed attacks, but it would definitely help with big siphoning attacks that are actually happening right now.

MITM attacks don't have to be directed. It's not technically challenging to mass-MITM a channel --- but you probably wouldn't do that, because you can just pick "interesting" connections (like, to Google Mail) to intercept.

Well, they also deserve "credit" for preventing encryption from becoming the standard for all websites by ignoring 'idiots', and I'm talking about non-banks like Wikipedia.

Are you happy to visit a John Doe blog with no encryption because he didn't want to pay for a certificate?

I'd rather have an unencrypted connection than encrypted to an unknown endpoint. If John Doe's blog is sensitive enough to require a secure connection, he can use one of the free certificate vendors like StartSSL.

What's the point of encrypting HTTP if any man in the middle can decrypt it? The whole point of the deck you (hopefully) read is that HTTPS falls to MITM attacks.

Authentication and confidentiality aren't just two "features" of a protocol. You have to have some degree of both, or you're just wanking.

The default standard, as it exists, is non-encryption. Browsers throw warnings for the invalid certificates of a bank, Wikipedia, a random blog, etc., not for unencrypted connections to a random website like a John Doe blog.

The crux of the issue is privacy in browsing. Some parts of Wikipedia are pseudo-illegal to view in some parts of the world. Viewing them would put the user on a watchlist, or worse.

How does it help the user to lie to him and tell him his stuff is encrypted, when in fact he'd have absolutely no way to know whether his keys were force-fed to his browser by his government?

Am I the only one who is frustrated by Hacker News' domain abbreviation with links like this? It makes the link look like it's from Google, and not just a hosted google doc. Could the URL abbreviation change to include subdomains as well?

It was in the slides, but for those who are too lazy to go through them all: install Certificate Patrol if you are using Firefox.

It adds TOFU/POP style behavior to Firefox and really should be a default feature of all browsers.

Maybe it's time to give D. J. Bernstein's CurveCP (http://curvecp.org/) another look. I quite liked the private "key as endpoint" technique that allowed moving a connection across IP addresses without disconnecting.

Even if the CA is cheap and sloppy, an effective attack requires MITM through subverting DNS. This requires the attacker to either have access to the machine or access to the routing equipment. If the attacker owns the machine then it is already game over.

So the remaining threat is if the attacker gains access to the routing equipment, for instance a public WiFi access point or a government.

In the case of a government doing a MITM attack, the easiest way is to subvert the download sites, so that Firefox, MSIE, etc. include CAs that the government controls. There is no solution for this attack, since both TOFU/POP and CAs are subverted.

In the second case, where public WiFi is under the control of criminals, TOFU/POP has more general applicability. In fact, a change of CA is probably a sign that the connection has been tampered with.

It's time to fix ppt-misuse.

Please don't present a complex topic with 87 slides.

There is also a longer paper (mentioned in the presentation) dealing with this issue from a specific point of view:


Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL (Christopher Soghoian and Sid Stamm)

It makes it very clear how every trusted CA in the world currently has the power to circumvent the security of any https-protected website; by themselves or compelled by someone else.

TOFU would be a lot nicer if certificates were hierarchical, i.e. I could say that I trust this CA for all *.citibank.com certs. Wildcard certs mitigate the need for that a little, but they are a bit more insecure (IMHO: with a wildcard cert, an attacker only needs to gain access to any one of your servers to impersonate every other server; with individual certs, the attacker needs to either gain access to your CA or is only able to impersonate the compromised server).

Of course, the transition period would be a bit painful if everybody began signing their own certs. Tracking individual certs allows a much nicer transition, as the certs can continue to be signed by global CAs, but then citibank-like solutions would be problematic.
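The wildcard blast radius can be sketched with shell-style glob matching. (This is illustration only: real TLS wildcard matching, per RFC 6125, is stricter than `fnmatch` -- the `*` covers just one leftmost DNS label -- but the trade-off is the same: one key, many names.)

```python
# One stolen wildcard key impersonates every matching host at once;
# a stolen per-host cert covers exactly one name.
import fnmatch

wildcard = "*.citibank.com"
hosts = ["www.citibank.com", "mail.citibank.com", "login.citibank.com"]

assert all(fnmatch.fnmatch(h, wildcard) for h in hosts)        # all exposed

assert fnmatch.fnmatch("www.citibank.com", "www.citibank.com")  # its own name
assert not fnmatch.fnmatch("mail.citibank.com", "www.citibank.com")
```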

"If people don't understand it, we engineered it wrong."


Yes. To clarify: if people don't understand how to use it, it's engineered wrong.

I'm not convinced that TOFU is the correct solution, but every criticism of https is true. In addition, it requires a public IP, something that may become scarce soon. I think that TOFU combined with sites that poll is possibly a solution. Then you can cross-check, and revoke authorities without bringing the whole system down. It would also make it harder for governments to MITM, since they would have to subvert many places instead of just one.

I am not a big fan of more government involvement in just about anything, but I can see a role for government here. They are in the best position to establish identity, have little profit incentive, and have ample experience with comparable trust processes like issuing passports and bank notes.

Yes, it will be done less efficiently and with less innovation than the market. That trade off would be worth it.

If like me, you live or travel to China and were horrified at the idea of having CNNIC be an accepted CA in your browser, check this link out: http://www.imminentweb.com/technologies/remove-cnnic-ca

Digital Signatures are only good for specifying identity, not authenticity. Perhaps this fact should be explained in colloquial English for users.

I must agree... Here's an issue: 99% of the time there is no CA behind the https version of a regular site that usually uses http. HOWEVER, there is no problem with the certificate itself.

What does that mean? It means that when looking at the fun little multi-step certificate-acceptance bells and whistles that FF throws at me, I have absolutely no way of knowing if there is a third party. There is nothing helping me out. And I am a "power user". Put those messages in front of a regular user and you get someone three times as dumbfounded. So if 99% of people don't understand WTF is going on, how can we help prevent any fraud?

Mind: "No sire, that right now would be procrastination! Let's just use HTTPS as it is..."
