It is already the case today that for Chrome and Firefox users, a compromised CA can't easily hijack connections to Google Mail. Not only that, but any attempt to hijack Google Mail connections in the large will run aground on Chrome and Firefox users, who will not only not accept the rogue certificates, but will also alert Google, which will put a gun to the head of the CA.
The feature that enables this is called certificate pinning. It works well for small numbers of high-profile sites, but requires manual intervention on the part of browser vendors.
TACK pushes certificate pinning out to site operators. It works like HSTS: the first connection to a website is trusted, and that connection loads up state that the browser holds. Subsequent connections check for consistency with the first connection. Dynamic pins, or "tacks", make dragnet surveillance of all sites asymptotically as risky as spoofing Google Mail. The attacker is nearly certain to accidentally catch someone with a tack loaded, and at that point the game is up: the attempt to present an otherwise-valid certificate that violates a tack is a smoking gun, to which Google and Mozilla can respond with their own firepower.
The nice thing about TACK is that it works alongside the CA hierarchy, and even derives some value from it. A tiny fraction of the Internet could adopt TACK and still make life much harder for attackers. The effort required from site operators is small, and the whole system is invisible to end-users.
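The trust-on-first-use idea behind TACK can be sketched in a few lines. This is a toy pin store, not the real TACK design (which pins a separate signing key and has activation and expiry rules this sketch omits); all names here are invented for illustration.

```python
import hashlib

class PinStore:
    """Toy HSTS/TACK-style pin store: trust the first key seen,
    then require consistency on every later connection."""

    def __init__(self):
        self.pins = {}  # hostname -> pinned key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check(self, hostname, public_key_bytes):
        fp = self.fingerprint(public_key_bytes)
        pinned = self.pins.get(hostname)
        if pinned is None:
            self.pins[hostname] = fp   # first sight: record the pin
            return "pinned"
        if pinned == fp:
            return "ok"
        return "contradicted"          # the smoking gun: report it

store = PinStore()
assert store.check("mail.example.com", b"server-key-1") == "pinned"
assert store.check("mail.example.com", b"server-key-1") == "ok"
assert store.check("mail.example.com", b"attacker-key") == "contradicted"
```

The "contradicted" branch is what makes dragnet attacks risky: an otherwise-valid certificate that violates a stored pin is unambiguous evidence of an attack.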
Fixing the CA hierarchy is a lot less sexy than ground-up rewrites of the whole Internet security model. But the ground-up rewrite is never going to happen, and the incremental fixes are not only doable, but doable by the kinds of generalist developers who are champing at the bit to stick it to the NSA. The biggest security problem on the Internet isn't protocols; it's browser UX.
This is one of those "Things which somebody would probably bring up at an anti-trust meeting if anybody at an anti-trust meeting had the foggiest clue of what was going on", incidentally. (The hypothetical threat is "You give our web properties a better SLA than anyone else in the world gets, or we will use the coincidental fact that a large portion of the world's web traffic runs code under our control to end you.")
It's funny, people (including me) always thought that Google's big swinging Wand of Annihilation was google.com, but now they have at least four of them.
I think it is security UX in general. Anything around certificate issuance/life cycle (SMIME or PGP signed/encrypted mail), PGP key exchanges, etc. That problem has not been solved.
At first I thought an active MITM could drop TACK negotiation from ClientHello and wait 30 days until pins expire, but as I read it, I think that should result in a "contradicted" pin.
You could do browser profiling though, and only MITM clients which don't send a TackExtension in the ClientHello, or which behaviourally look like IE, say. I wonder if it would have been better not to indicate that the client supports TACK? (Maybe there are constraints not obvious to me.)
The other thing I'm not sure about is overlapping TACK handling. I don't see what's to prevent an MITM from adding an additional new TACK of their own in the ServerHello, gradually superseding the "valid" TACK. That would take like 60-90 days though.
This looks like a massive improvement, although I wonder if it actually protects clients which do not support the extension?
Right, that would work, assuming there are people going around doing it for every web site (the TACK police!). I can easily see that happening, it is just the kind of thing moxie would do, say.
It still seems to me that it would be simpler if the client just didn't advertise the extension. I am probably missing something though.
Only indirectly through network effects. It's much harder for a valid attack certificate to stay undetected in the wild when a subset of the people you might attack are running TACK. You're right that it might not help clients if the MiTM software intelligently avoids attacking TACK-capable clients.
Maybe we can get some good out of this week's focus on security.
If you can grok TACK better than I do at the moment, start writing patches for web servers.
Pinning directly in the app puts trust in the developers of the app instead, which is indirect and prone to lag. It is also generally fragile (have to issue app updates for cert revocations) and can be hard to scale. How many secure sites does your app need to connect to? Is it flexible, unknown? How are you, the app developer, going to validate those certs beyond relying on the CA PKI? (and then you're back to square one).
I'm asking, because I dimly remember you not being a fan of OE -- have I confused you with someone else there?
This is wrong, though: although new users will, in fact, be temporarily MITMed, all returning users will see a big scary warning page that will cause them to (perhaps automatically) report the problem to their browser vendor. Google/Mozilla will promptly drop the offending URL into the malware-sites list, and the new users will thereby be rescued as well.
We're talking about referring to Telehash's approach as "ubiquitous encryption" with "self-consistent authentication" but nobody seems to agree on what authenticating an identity means in the first place.
Pinning encryption to addressing solves those problems at a lower layer, and leaves phishing attacks to be solved separately. I don't know that anything I'd call "perfect authentication" can be solved within the X.509 framework.
Telehash is taking an approach that completely sidesteps the problem of human-memorable names, though. It uses the public key fingerprint as the "network address" of a node in a DHT. The Telehash address is globally routable, like an IP address, but there is no MITM possible, because only a node with the private key generating the address (fingerprint) can communicate at all using that address.
There is still the problem that humans don't want to type in an IP address, let alone remember something unwieldy as 9ba9c175c3c26af9df5c8163ea91d4ae4eca59ba95d66deb287c89ea0c596979. But deciding whether to trust that key is distinct from verifying data is signed with the same fingerprint.
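The "address is the key fingerprint" property can be shown concretely. This is only a sketch of the idea described above, not Telehash's actual wire format; it assumes the node separately proves possession of its private key (via signatures), and only shows the address check itself.

```python
import hashlib

def node_address(public_key_bytes):
    # The network address is just the hash of the public key,
    # like the 9ba9c1... example in the comment above.
    return hashlib.sha256(public_key_bytes).hexdigest()

def verify_peer(claimed_address, presented_public_key):
    # A MITM can't substitute its own key without also
    # changing the address you were trying to reach.
    return node_address(presented_public_key) == claimed_address

alice_key = b"alice-public-key"
addr = node_address(alice_key)
assert verify_peer(addr, alice_key)
assert not verify_peer(addr, b"mallory-public-key")
```

Deciding whether to trust `addr` in the first place is the separate, human problem the comment points at; the check above only guarantees that whoever you reach at that address holds the matching key.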
So the difference is the reporting mechanism, that allows a powerful organisation to make meaningful threats to protect others?
Because in [Garfinkel 2003] it's similar: returning users get a big, fat warning, but they can't meaningfully do anything about it on their own, except not trust the other side.
Of those four, Google is pushing hardest for CA-compromise countermeasures.
Google's current preferred solution is Certificate Transparency. I don't love CT, for reasons very similar to those that the typical HN reader would come up with after reading it, but it's still a step forward.
We sponsored development of some TACK code last summer, but none of the browser vendors are itching to integrate TACK. Google has nice things to say about it, but "backburnered" would be a fair summary of where it stands right now.
Well, not to be an opportunist, but let's hope that this kind of incident will move the adoption of TACK a little bit faster.
If however we do use CAs as the trust model in order to trust the first connection, then TACK is nothing but a policy system that attempts to keep CAs honest. Users would still need to put their trust in third parties that they never meet, and whose priorities and objectives are unknown.
I do not believe in pure peer-to-peer schemes as workable solutions, but I do believe that we could in 5 years have a system where I can trust ACLU first, and then Verisign as a fallback.
Yeh, I can't see that going wrong any time soon.
1. It amounts to a preservation of today's pay-for-security system (the not-so "nice thing" you mentioned), which is not necessary. Thanks to distributed databases like Namecoin, it is no longer necessary to pay for SSL certificates (or fax credentials, or any of that).
2. It doesn't offer a strong mathematical proof of authenticity the way a blockchain-based solution does. 
It wouldn't surprise me in the slightest if the blockchain solution were actually simpler to implement and deploy. Fetching public key fingerprints involves a single HTTP request that returns some JSON. That's about it.
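That single-request flow might look something like this. The record shape and field names below are invented for illustration (they are not DNSChain's actual API); a real client would fetch the JSON over HTTP from a resolver it already trusts, such as one it runs itself.

```python
import hashlib
import json

cert_der = b"example-certificate-bytes"

# What a hypothetical fingerprint API might return for "example.bit",
# simulated locally here instead of an actual HTTP request:
response_body = json.dumps({
    "name": "example.bit",
    "value": {"fingerprint": hashlib.sha256(cert_der).hexdigest()},
})

def cert_matches(body, presented_cert_der):
    """Compare the certificate the server presented against the
    fingerprint published in the blockchain-backed record."""
    record = json.loads(body)
    pinned = record["value"]["fingerprint"]
    return hashlib.sha256(presented_cert_der).hexdigest() == pinned

assert cert_matches(response_body, cert_der)
assert not cert_matches(response_body, b"some-other-certificate")
```

The CA hierarchy never enters into it: the client trusts whatever fingerprint the record it fetched asserts, which is exactly the "choose your own trust anchor" trade-off debated in the rest of this thread.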
The other option is to run a full Namecoin client with an up-to-date chain, correct?
It's assumed that you find yourself (or a close friend) trustworthy.
DNSChain is designed to be run by individuals, with no powerful deciding authority (like browser vendors) deciding who you should trust (as with CAs today).
Today, you trust the least trustworthy of hundreds of organizations that you've never heard of.
With this proposal, anyone is free to trust whoever they want, and they can change that instantly without any browser updates or anything along those lines. It's about as trustworthy as you can get.
- real-ownership of domain names (free of political pressures that result in domain-seizures)
- a powerful, global identity system (not run by any government or mega-corporation)
Some concepts & partial implementation: http://okturtles.com/#open-source
Replacement of DNS with a blockchain protocol is never going to happen. It's hard enough to talk DNS operators out of baking the CA system into DNS, despite the utter inapplicability of DNS to that problem. DNS has a fierce, powerful status quo advantage.
If you believe strongly that blockchains are going to be the future of global networking, a better plan would be to build a system that ignored the DNS and used a blockchain protocol instead. For instance: the DNS doesn't play any role in matching Google search terms to SERPs, nor does it control how AIM matches names to IM accounts, nor does it control how IRC matches nicks to receivers.
Forklifting out giant chunks of the Internet is a bad plan. Deprecate the Internet and build a new layer on top of it. Eventually, TCP/IP will find itself in the same role as Ethernet; it's inevitable.
That's sorta what's taking place (not the entire Internet, but a part of it that's not serving us well). It's interesting that nearly that exact language was used when DNSChain (back then "DNSNMC") was introduced:
[therightkey] DNSNMC deprecates Certificate Authorities and fixes HTTPS security
> "is never going to happen."
How many times has humanity heard that refrain repeated?
> "For instance: the DNS doesn't play any role in matching Google search terms to SERPs, nor does it control how AIM matches names to IM accounts, nor does it control how IRC matches nicks to receivers."
You seem to not understand that DNSChain is not just a DNS server. It also is a RESTful HTTP API and interface to the blockchain. This means using HTTP, not DNS. DNS is just icing on the cake (and not "throwing the baby out with the bath water").
BTW, some of those things are already starting to happen. For instance, there's a PoC Pidgin fork that works with Namecoin, and also a working Bitmessage + Namecoin client out there:
The web of trust model doesn't scale; that was made abundantly clear by PGP when it first came out. Even Phil Zimmermann, the guy who practically invented it, agreed it didn't scale and something else was needed. X.509 came about not because some person foisted it on the universe; rather, a bunch of people who were writing security systems at the time (myself included) got together with other cryptographers, engineers, and administrators under a group hosted by "Public Key Partners" (the folks collecting together the patent pool associated with public keys) and tried to come up with ways this might work.
It has had some fabulous successes. Certificate authority compromised? Pull their root cert and, blam, none of their keys are trusted any more. It has had some failures. Call the baby ugly if you must, but at least propose something that hasn't already been tried and shown not to solve the problem.
[Edit: I really need to keep peoples names in different buckets in my head]
I like to design secure protocols for fun, and they all inevitably converge into a PKI when you start adding the non-trivial required features. It's incredibly frustrating.
It's also got links to various security usability studies that are required reading for anyone who cares about this space.
tl;dr - the article is dumb, heartbleed doesn't even have anything to do with X.509.
It doesn't appear at the moment that pushing this technology to web browsers would increase the security of most users.
1) "certificate authorities are a closed off oligopoly" - This is absolutely not true. Pretty much anyone can start their very own Certificate Authority. The code isn't that complicated (the specs are all available), and the math is no longer patented, so you don't have to pay tributes to PKP. What you do have to do, though, is convince three companies you're trustworthy: Mozilla, Microsoft, and Google. If they add you to their trusted root certificate list then you've covered a whole ton of the market. I know of at least one "private" certificate authority which shares its root CA with individuals who want to trust that the sites which have a cert from it are "legit" (for some definition of legitimate).
2) "any state they are located in will just seize their keys ..." - this conflates two things: one is trust and one is seizure. If you live in the US, and have a PGP key that is trusted by the target of an investigation, and law enforcement can convince a judge that using that key is the only way for them to get the proof they need, you may find yourself on the receiving end of a subpoena which demands you hand over access to your key. You can refuse, of course, and the court can put you in jail for contempt. This issue is completely separate from the certificate-tree versus web-of-trust choice. The purpose of the certificate is to establish trust, not privacy. The purpose of the TLS protocol (aka SSL) is to establish privacy. Being able to trust the other end is necessary (but not sufficient) for that.
3) "... and read your traffic." - Whether your privacy can be violated relates to how you established privacy, as opposed to the certificate mechanisms in the protocol. I suspect you think that is splitting semantic hairs, but bear with me for a moment. The heartbeat bug is in OpenSSL, not the certificate infrastructure. There are lots of things that use different protocols, and X.509 certificates, that are just as secure today as they were before this bug was disclosed. The key here is that they used a different protocol.
I can completely relate to the OP's angst over the challenges of keeping things secure in today's world; it's something of a life we "chose" relative to using open source.
This may be technically true, but the process of becoming a CA was described a year or two ago on the randombit cryptography list and it was estimated that it is a (roughly) 1+ million dollar undertaking, just to get up and running and accepted in the browsers.
Take out any of these things and you'd be left with something that is significantly worse.
That's why it costs money to be a CA _that browsers trust_. Of course if you want to be a CA that doesn't care about browsers, that's like three lines of code at the command line.
This does not mean that the CA system is broken. There's a huge middle ground between "anyone can do it for free" and "totalitarian oligopoly". $1M to start a business is not that high compared to many other businesses.
And it fails in that case. If a government forces a root CA to give it a copy of the root key, nobody can trust any certificate signed by that CA ever again.
Does the US Govt care? Maybe ... there are no references to such a root key seizure in any of his docs so far. Or maybe not. Just lots of talk about stealing private keys directly from the original holders. But who can really know?
Suffice it to say there are attackers in this world other than the NSA.
Now external trust is not very helpful for some things. It won't protect you from state-sponsored MITM attacks. Also non-EV certs may not inspire a whole lot of trust.
But you have the fundamental question which X.509 tries to answer: "Before I give you my credit card number, how do I know you are who you say you are?" You need some form of external trust there to answer that question.
External trust is not foolproof. See Thompson's important paper on the limitations of it. However, it is very good at addressing certain classes of threats (and very bad at others).
In the end we need both models and there is no real way around that.
Do you really think Granny is going to be happy with the tablet she bought that can't connect to her online banking account out of the box? Have fun explaining to her that she needs to exchange keys with enough trusted intermediaries to have a valid trust path to her bank. I'm sure there are plenty of key signing parties happening at the 'ol retirement home.
Or maybe you can explain to Granny why her money was stolen when a scammer managed to compromise one of her trusted keys and then created a compromised subgraph in the WoT leading to a fake certificate to her bank?
The WoT is a usability nightmare. Sure, the PKI isn't too great, but it's what we have, and it is currently more practical than any other solution out there. Security needs to be usable to be useful.
EDIT: for a good rebuttal to the OP, read this blog post by Mike Hearn which covers the issues I raised and more: https://medium.com/bitcoin-security-functionality/b64cf5912a...
First of all because it makes the assumption that all of them are stupid somehow - or for the less adaptable ones that have problems with newer technology, it makes the assumption that the current status quo works. Do you think that granny from your example wouldn't click "Ignore" on a browser warning?
Second of all, if we really get down to an argument about elders, society and making the world a better place, the priority shouldn't be to keep the status quo because the elders wouldn't cope with change - because in that equation, today's children are more important, don't you think?
> Have fun explaining to her that she needs to exchange keys with enough trusted intermediaries to have a valid trust path to her bank
That's false - she only needs to exchange keys with the bank directly.
The point isn't the slur on elderly users (though that often applies), but to think of the least-technical, large-base user likely to be trying to make use of your product.
In my experience, I've encountered technically challenged users of all stripes: the illiterate, PhDs, strangers on the Internet, immediate family and friends, children, the elderly, mentally or psychologically challenged, executives (but I repeat myself), entrepreneurs, the harried, etc. And, put quite bluntly, there's a hell of a lot of them.
Within the tech world we tend to be fairly insulated from the larger scope of this problem, and yet in my experience it's still ubiquitous.
The point of the example isn't to take affront, but to realize that for widely-deployed systems, base-level usability is crucially important.
Different interests. Different focuses. No one will want to listen to you explain that its for the best, or the current issues with CAs (Also what a CA is). They just want to check their damn gmail.
Apple can act as iPad users' first WoT node. If a user logs into Facebook they immediately add every Facebook friend to their web. etc, etc.
Just because WoTs are currently usability nightmares doesn't mean they have to be forever.
And I certainly could be wrong in my understanding, but I believe all it takes is a single malicious (or pressured) actor to ruin that chain.
That seems scary to me.
I specifically said that PGP may not be the solution, but what we have now is just ridiculous if you really think about it. We have no choice but to trust 4 companies on precisely nothing but their word. Even if you mistrust their word - and I do - there is no alternative choice.
Security always boils down to trust in the end, and the status quo outsources it. It is the definition of stupid.
The status quo outsources trust because that's what you do in an economy. We trust the government to secure the value of our money. We trust banks with storing that money, and we trust that the government again will make sure that they do.
If you want to see what happens when you DON'T outsource trust, look at how terrorist networks operate. They only deal with trusted associates who know each other personally, they only communicate through trusted couriers, and they live in fucking caves. It's not exactly conducive to a modern economy.
You have to outsource some level of trust. Otherwise you waste so much productivity on maintaining and verifying your trust network that you can't actually do anything worthwhile with it. I think the real question is "to whom?" and "for what purposes?" If you need something to be really secure, then you should probably do an in-person key exchange. For the majority of things people do you only need "mostly secure" because there are other protection measures in place in case the communication is fraudulent.
On a technical level there's no meaningful connection.
Just talking philosophically they "live in caves" because the US & other govt's have armies trying to kill them. It has nothing to do with trust networks. If anything that style of trust networking has made them more secure as it's difficult to penetrate. The point that OP was making.
Finally, personal trust networks have worked remarkably well. Look at guanxi in China, social societies like the Freemasons (not in a "control the world" way, just better business contacts, etc.). These are all based on networks of trust.
I have no idea if this is the best way forward for the web but a comparison to terrorist networks is meaningless.
The OP believes that to be economically viable, trust networks must be large. Hence, outsourced trust.
But I agree with you: once your personal network grows beyond a certain size, the property connecting you directly to any particular node is no longer exclusively "trust", but will increasingly be "convenience". Usually followed shortly thereafter by "abused by".
Personal trust works well, and nobody's implying that you can't or shouldn't use more peer-to-peer solutions where you feel you need more security -- but it's not going to form the backbone of the global economy. At the end of the day, you need some form of centralized trusted authority with which individuals can contract to provide trust-management services, otherwise you spend all your time verifying trust and not actually doing anything.
Or are we working under the assumption that every Granny has a grandson who is just as technically competent as you are? The fact of the matter is, PGP has just enough friction that, even if implemented correctly, the vast majority of non-technical users will simply sign up to some SaaS to handle it for them, and with that you end up back at square one, where a handful of SaaS providers are the gatekeepers to everyone's identity.
Why would you trust your friends' keys to validate, say, your bank? You wouldn't. You'd trust your government and various regulatory bodies to do that.
You'd trust friends' keys to validate your friends' websites or the like.
Different trust paths for different things. This is really the problem with UX on all crypto at the moment, though - way too absolutist about 'trust', rather than considering use cases.
> Sure, the PKI isn't too great, but it's what we have, and it is currently more practical than any other solution out there. Security needs to be usable to be useful.
But I disagree with you here, there are better solutions that are just as easy to use:
- Some options: namecoin. If you own the domain you can easily sign stuff with the same key you use to own the domain
- Put stuff in DNS's TXT record once DNSSEC is rolled out. (Or create a new record)
Or take a look at TOR hidden services for example.
You enter an onion domain. And you're there. Guaranteed. No messing around checking whether there is a green lock, or messing around with a WoT.
Note Namecoin might not yet be usable for average users. And most of them probably don't want the blockchain locally. But it's easy to imagine that your ISP still provides you some sort of DNS service.
Online-only or remote businesses like social networks and airlines would face a tougher problem.
I think if you weren't exhausted by the sheer length of the post by the time you reach that proposal tucked at the very end, you might think to ask some critical questions. Like, what are the vulnerabilities and exploits of a peer-to-peer system? Would this not be open season on socially engineering average folks to trust the wrong peer? How vulnerable to attack are local geeks and university computer science departments? How are compromises noticed and handled by the average folks who trust a small local authority? How will the verification work be paid for, or will it be completely volunteer based, and how efficient will that be?
Moreover, what the author fundamentally misunderstands is the importance of usability in security. Web security isn't perfect but that's because more perfect security would make ecommerce annoyingly difficult. Then people start taking shortcuts or just ignore security completely, which is a worse outcome. It's not enough to point fingers at users and yell that they're doing it wrong; security architects have to take responsibility for security outcomes. A peer-to-peer system would be significantly more inconvenient for average folks to use correctly, if only because of figuring out who to trust in the first place.
A different UI might reveal the trust path more directly, so that if I navigate to my bank that path might be forced into view.
I, for one, would love it if my browser displayed the trusted path used to connect to my bank before loading any part of the page. The same goes for self-signed certs. Would I avoid HN if their cert was self-signed? Nope.
To avoid this most people will start just trusting larger companies; Google, Facebook, Apple, Mozilla. And only checking their keys, since they will trust that company's key. And these companies will handle signing new websites. Small websites won't care if you personally trust them, they'll only care if one of the 'big companies' trust them.
In the end we wind up exactly where we started. Large companies are implicitly trusted by everyone. Sure, you may sign your key off to a few dev friends so you can access their test sites, which will make self-signing easier. The cost will be mitigated, but in reality nothing will change. Likely, within 3-4 browser generations we'll even see non-company-trusted PGP keys get scrapped in all but the more free (as in beer) browsers.
No. What little trust there is, it comes from people trusting them to be afraid for their own interests. But even then people generally acknowledge that the customer's interest might not always win here.
See eg. Linux refusal to use Intel's hardware RNG.
The shortest route will always favor the person with the most keys and the most trust, who invariably will figure out that he/she can make money getting more keys and more trust. "Luckily" for us, there are both a finite number of persons and a finite number of keys that will be signed by each key. We end up with a pyramid scheme, where the more trust and keys you have, the easier it'll be to get more trust and keys.
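This hub advantage is easy to see in a toy web of trust: shortest trust paths tend to run through whoever has signed the most keys. The graph and names below are invented for illustration.

```python
from collections import deque

# "you" knows a friend and one well-connected signer ("hub");
# only the hub has signed the keys of the bank and the shop.
trust = {
    "you":    ["friend", "hub"],
    "friend": ["hub"],
    "hub":    ["bank", "shop", "friend"],
    "bank":   [],
    "shop":   [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search for the shortest trust path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no trust path exists

# Every destination routes through the hub:
assert shortest_path(trust, "you", "bank") == ["you", "hub", "bank"]
assert shortest_path(trust, "you", "shop") == ["you", "hub", "shop"]
```

The better connected the hub is, the more paths run through it, which is exactly the feedback loop the comment describes.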
The problem is capitalism. In all honesty, we'll likely see the PGP network end up in the hands of banks. You want secure access to your online account? Sign each other's keys. Now the bank has a 5-million-person-strong trusted key. They'll sell that trust, naturally. I trust most tech companies enough not to instantly monetize the PGP web, but some would.
Likely some tech company attempts to monetize it, they get yelled at. They stop. Another does, nothing changes so people accept it as the new norm. The arguments made it allows for faster page loads, easier access. Nobody says a word after a year.
With PKI, you can't choose the root CAs. Today, Verisign abuses the shit out of their market dominance to price gouge certs, and I have no reason to ever trust that company with anything, they don't give me a reason to, and they almost certainly have their root keys in the pockets of groups like the NSA.
So if I don't want to run my WoT through Google, I could choose not to. For the average user they shouldn't care, but I would at least have the choice. Right now there is none.
With the CA system as it is now, once a CA is trusted, it's effectively trusted FOREVER.
The other great thing is that PGP is not just for sites but for people, so even if all the private keys handled by nginx/apache/whatever were compromised Heartbleed-style, the core person-to-person trust relationships would be unaffected; the core of the web of trust would be intact, and only the endpoints would need to be re-verified.
It also reduces the burden on your bank for maintaining the security of their keys (to some extent). It's still very important, but the consequences are no longer quite so catastrophic.
The author also underestimates the consequences of performing a MitM attack with a root certificate. MitM attacks can be detected and a copy of the signed cert is proof. If the NSA were abusing a root cert, there is a chance it could be noticed.
So what if it was? Well, that certificate would be removed from browsers and operating systems. The CA would be placed under suspicion. In a worst case scenario, the CA could be completely ostracised, perhaps even to the point bankruptcy. An abuse of a root certificate could potentially do hundreds of millions of dollars worth of damage.
That's not even covering the diplomatic fallout. If the CA points the finger at the NSA, the President would have to explain why the target was so important that it merited destroying part of the root trust system of the Internet.
There are far less messy ways of dealing with a high-value target. I'd be more concerned about other zero-day vulnerabilities the NSA might have found.
It's vanishingly unlikely that Google, Microsoft, and Apple would remove a Big 4 CA root cert and break the trusted path of 25% of the secured market.
Browsers don't have to turn a root CA off all at once, either. They could start by turning off Extended Validation for the compromised CA, or they could release a statement saying that if they don't get guarantees this won't happen again, they'll remove the CA in a year's time. They could allow connections, but change the SSL icon to indicate the certificate has been compromised. Browsers have a lot of options to put pressure on root CAs, even without removing the cert.
If one were to attempt to formally specify X.509 in terms of math or logic, we'd get to this part and have no choice but to write "the security of this portion holds because we say so". How many times must we be betrayed before this isn't good enough?
That's not to say there aren't better mechanisms for verifying trust, but you'll never eliminate it entirely. There's always going to be some assumption, such as "the central authorities are trustworthy" in the case of SSL, or "the majority of nodes are trustworthy" in the case of Tor, or "the CPU majority is trustworthy" in the case of Bitcoin.
In a sense, it's worse than that, because a "queen" can actually sign (correctly or not) any "princess-baby" in any "lineage".
Unfortunately not too many people know this, and it's a really important issue.
BTW like a lot of other people here, I didn't like the "Queen" analogy. IMO it didn't make the explanation any simpler.
The average internet user has no idea who's trustworthy and who isn't. If they have to personally grant trust in order to get at some content they're looking for, they'll simply do it. This is the same behavior that causes people to execute boobs.exe attached to a random email that landed in their inbox.
In order for this to work, the average internet user must cede the trust decision-making process to some other entity who claims to be more qualified to do it, like say the company who makes their browser. There are four browser makers that account for probably 90+% of usage. Now you're right back to where you started with the current oligopoly system, except that with the new system there's a much larger attack surface for nefarious agents to use when trying to insert themselves into the trust chain because anyone at all could let them in.
Cynically, that's the problem with internet security protocols in general - they have to work not only for smart, self-interested people but also for stupid people who are actively self-harming. That's a really tough bar to meet.
1) It is probably easier for casual attackers to trick a local geek to trust a phony key. Determined attackers and state-level actors can probably compromise CAs as well, but most day-to-day threats are of the casual type.
2) When a local geek accidentally trusts a phony key, and other people realize it and point it out to them, all that happens is "Oops, I'm sorry." When Comodo is caught issuing phony certificates, there will be a Silicon Valley-wide uproar, browser vendors will very quickly invalidate the offending intermediate key, and the incident will hurt Comodo's bottom line for many years afterward. In other words, Comodo is more accountable than any private individual, not because it's any more ethical, nor because it is any more competent, but simply because it is a highly visible target of public scrutiny whose very survival depends on its public image as a trustworthy CA.
3) Most people (including but not limited to grandmas) who are just beginning to use the Internet have no way to know which keys to trust. We in the programmer community are an exception, not the rule. So what's actually going to happen is that browsers will trust, by default, a bunch of highly reputable individuals or groups (perhaps the browser vendors themselves) and advise the user to trust whomever these people trust. That's not really different from the current situation with CAs. We just replace Verisign and Comodo with @cperciva and @tptacek.
As some of the other commenters have mentioned, the problem seems to be that these social mechanisms don't scale.
Please take my point 2) in combination with point 3). As I said, techies are the exception, not the rule. It's not just Grandma who will have trouble with a web of trust, it's pretty much everyone except us. How do they even know which peers to distrust? Will there be a news feed about compromised peers? Will everyone have to subscribe to one? What if someone wants to explore a part of the web that none of their peers, or their peers' peers, have ever heard about?
The single most important advantage of a centralized model of trust is that a list of trustworthy vs. untrustworthy parties can be quickly and widely distributed in an automated fashion. Comodo issues phony certs? 12 hours later, every copy of Firefox receives an updated list of revoked keys. I know it doesn't currently work like that, but it's entirely possible. Whereas with a web of trust, millions of people will be left trusting compromised peers for many months afterward because they didn't get the news.
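That push model can be sketched in a few lines. This is a made-up illustration of the idea (loosely inspired by mechanisms like Chrome's CRLSet); the function and data names are not from any real browser:

```python
def apply_revocation_update(trusted, revoked_update):
    """Apply a vendor-pushed revocation list to a local trust store.

    `trusted` maps an issuer name to the set of certificate serial numbers
    currently accepted from it; `revoked_update` lists (issuer, serial)
    pairs to drop. All names here are illustrative.
    """
    for issuer, serial in revoked_update:
        trusted.get(issuer, set()).discard(serial)
    return trusted

# Example: the vendor pushes a revocation of two phony serials.
store = {"Comodo": {"01ab", "02cd", "03ef"}, "Verisign": {"9f00"}}
store = apply_revocation_update(store, [("Comodo", "01ab"), ("Comodo", "03ef")])
print(store["Comodo"])  # {'02cd'}
```

The point is exactly the one made above: the update is a flat list that every client can apply mechanically within hours, with no human in the loop.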
Not true, see here: http://privacy-pc.com/articles/ssl-and-the-future-of-authent...
The problem here is that the free-market model doesn't work once you're a big player. Instead of Comodo being bashed by MS/GOOG/Moz, it's still there, all shiny and bright, selling SSL certificates.
So the current model is flawed and can be exploited against technically unskilled users, but worse than that, it doesn't seem to care about its failures.
In general, I'm a fan of analogy, but I'm having trouble following this whole queen/princess/baby thing. Putting that aside, I think you're claiming that CAs can present your certs to random clients?
This might be an indictment against the DNS system, which directs the clients to an IP address of its choosing, but if the client makes it to your server, your server chooses which cert to present to the client.
> What we have done here is fitted our doors with some mega heavy duty locks, and given the master keys to a loyal little dog.
Again with the strained analogy. Who's the dog? What does the mega lock represent?
I think this belies a fundamental misunderstanding of what the CA is doing. The client asks your service to validate itself, your service does so by saying that Verisign/Thawte/etc. has previously signed the cert that your service sent to the client. The client does not have to automatically trust Verisign or Thawte or whomever you say signed it, and furthermore, if it decides that it does trust that party, the NSA is not able to use that to its advantage in any way as a result of Heartbleed.
> As of today, that green padlock no longer means what it once did. And the reason for that is because of the business conditions of gatekeepers.
No, it doesn't mean what it did yesterday because of a bug in an implementation of OpenSSL. The protocol is still just as valid. The business conditions of the gatekeepers, while distasteful to you, don't invalidate the mechanisms by which that little green padlock gained its fame.
Ummm, that's kind of a big "if". The whole point of authentication is to resist an adversary who controls the network. We already know we can't rely on DNS (or any of the other 37 moving parts involved).
>> I think you're claiming that CAs can present your certs to random clients? This might be an indictment against the DNS system, which directs the clients to an IP address of its choosing, but if the client makes it to your server, your server chooses which cert to present to the client.
Here I am fairly confident that he is talking about a situation in which a CA signs a key for your domain and gives it to someone else (NSA/GCHQ), and they perform a MITM attack on a user like this:
Client -> Fake key for yourdomain.com provided by MITM proxy server -> decrypt data then encrypt with real key for yourdomain.com -> Your Server
CAs have been compromised before (and I'd be willing to bet there are quite a few more incidents that they have swept under the rug), and so there has been discussion of what happens when you can sign a certificate for any domain. I believe this is what the OP is referencing.
>> Again with the strained analogy. Who's the dog? What does the mega lock represent?
I agree with you, this one is harder to understand. As I see it, the mega lock = the CAs' private keys, and the dog = the CAs. When he talks about the dog being tempted by a steak, he is referencing the rumors that the NSA/GCHQ have back-room agreements (the steak) with CAs, or have simply hacked the CAs and taken what they needed (for which I would say something like "the dog was asleep").
>> No, it doesn't mean what it did yesterday because of a bug in an implementation of OpenSSL. The protocol is still just as valid. The business conditions of the gatekeepers, while distasteful to you, doesn't invalidate the mechanisms by which that little green padlock gained its fame.
This is less cut and dried than you suggest. The green padlock has always meant jack-shit when it comes to state actors (if you subscribe to the theory that they have either bought off one or more CAs or hacked them, which I do); what it did protect you from was your run-of-the-mill online criminal. It made it impossible for them to sniff your login credentials a la Firesheep (yes, the padlock itself didn't do that, the PKI did, but it gave people a simple way to check whether the connection was secure and the website was who it said it was). What the Heartbleed bug did was allow ANYONE to potentially steal your private key right off your server, opening the door not only to the NSA/GCHQ but to anyone with an internet connection (and the knowledge to exploit it).
The OP is suggesting that CAs should have revoked certificates to force people to fix their servers, but they never would, due to the backlash. CAs have the ability to revoke certificates that are compromised, and we have to assume every certificate has been. I don't know what the right course should be, but one that springs to mind is giving everyone a deadline, at which point all certificates will be revoked, and refusing to re-issue a certificate to a URL that is still vulnerable to Heartbleed. YES, this is extreme, and no, it's neither simple nor easy, but I think there are very good reasons why it should be done. The thing is, at least IMO, that CAs really don't give a shit; as the OP suggests, they care about one thing and one thing only: their investors. If they really did care about making the web a safer and more secure place, then why aren't they sponsoring OpenSSL or working on their own open-source SSL library?
And the description from the Monkeysphere site on why they are a better alternative for HTTPS: http://web.monkeysphere.info/why/#index1h3
TACK, which tptacek mentioned, is an orthogonal strategy for solving the same problem, but it assumes that some MITM will be detected. An ideal solution would involve a combination of both TACK and Monkeysphere.
> 90% of that guff can be automated and hidden underneath a good UI, but can we
> dispense with the need for key exchange parties? Absolutely we can.
But somehow I am qualified to inform the world as to why PGP is superior to X.509.
I'm not debating that point, and informed debate would be welcome. And I have to say that I find it refreshing for a blogger to so inform me in the first paragraph as to just how quickly I should skim through or close their rant.
I really did appreciate that. Though somehow I find myself investing more time in the writing of this comment than in the consumption of the article. Fortunately, like floss, 't'will soon be forgotten.
With the DNSSEC extensions it should be possible to publish enough information to authenticate a given site against a certificate. If your DNS has been compromised, you've got bigger problems than your SSL cert.
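This is roughly what DANE (TLSA records, RFC 6698) does. A minimal sketch of the matching step, assuming the common "matching type 1" case (a SHA-256 digest of the full certificate), with fake data standing in for a real DNSSEC-validated lookup:

```python
import hashlib

def tlsa_matches(cert_der, tlsa_digests):
    """Return True if the certificate's SHA-256 digest appears in the
    TLSA record data published for the host (e.g. at _443._tcp.example.com).
    Real TLSA records also carry usage/selector/matching-type fields,
    omitted here for brevity."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return any(digest == d.lower() for d in tlsa_digests)

# Fake data standing in for a DNSSEC-validated TLSA lookup:
cert = b"...DER-encoded certificate bytes..."
published = [hashlib.sha256(cert).hexdigest()]
print(tlsa_matches(cert, published))            # True
print(tlsa_matches(b"forged cert", published))  # False
```

The security of the scheme then rests entirely on DNSSEC validation of the record, which is exactly why it can't deploy before DNSSEC does.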
I think the solution needs to be something like Moxie's Convergence, which allows for users to decide who they trust, and revoke such trust at any time.
It's a shame that Convergence is basically dead, although there's still some activity in Perspectives on which it was based. (http://perspectives-project.org/)
Uhm, no? Because you can literally just buy a new valid cert for it, as long as we're talking Domain Validated.
I doubt your average user will notice that there isn't a green bar anymore or that the certificate lacks ownership information.
You can; it's called DANE and is a future standard. We're just waiting for DNSSEC to spread, because without DNSSEC everything is insecure.
The complaints here are basically "w.o.t. is not usable", but that's basically what the author said. He therefore also indicated this is as much a design problem as anything else. That's a useful insight we shouldn't dismiss, at least not until some thoughtful, imaginative designers have actually taken a crack at it.
The OpenSSL bug that allows heartbleed is nothing at all to do with the (many) flaws in the public trust system.
The fundamental problem here (as I see it) is that you're trying to set up trust between parties that have no existing relationship. This requires third parties and externalised trust whether you use a CA or a P2P net.
Either way, it's nothing much to do with heartbleed, which would have leaked the keys to the kingdom under either model.
Until there is an implementation of OpenPGP that uses a permissive license, getting the world plus dog to switch to PGP is a non-starter.
"And fundamentally you have to trust that they who hold the Queens aren’t dishing out copies of your certificates."
The entity holding the Queens can give out a copy of your certificate, sure, but in most cases, they do not hold the crown jewels -- your private key -- which is the part of the Heartbleed bug that is really bad.
There have been cases of CAs either issuing, or being compromised and issuing, new certs which duplicate a site identity, but that is different from releasing the private key of a particular certificate.
This seems like utter nonsense to me. Certification authorities should never get to look at my private key, and I don't care about them giving out my public key (it's public, after all). The best they can do, if they're evil, is create a new pair with information that impersonates me.
Without centralized, trusted gateways, it's not even clear that your communications are secure. They need to be centralized to make them easy to monitor and audit. With a distributed trust model, the compromise of one node can be catastrophic; all you're really doing is handing control of the trust network over to botnets.
This is a really hard problem. I can't think of a better solution that would serve the same niche as our current one.
I wonder if we wouldn't be better off with something similar to what SSH does: trust the first connection, then verify that the signature doesn't change on every subsequent connection attempt. This way one would be immune to hijacks.
It wouldn't solve first-time verification, but how likely is a first-time spoof? And for really sensitive communications you could use pre-shared keys: I could, for instance, get a hardware token from my bank containing their public key.
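The SSH-style trust-on-first-use check described above is small enough to sketch. The pin store and fingerprints here are illustrative; a real client would take the fingerprint from the live TLS handshake:

```python
import ssl

def check_pin(pins, host, fingerprint):
    """Trust-on-first-use: remember a host's certificate fingerprint the
    first time we see it, and refuse any later connection where it changed.
    `pins` maps "host:port" to a hex digest; names are illustrative."""
    if host not in pins:
        pins[host] = fingerprint   # first connection: trust and remember
        return "pinned"
    if pins[host] == fingerprint:
        return "ok"                # matches what we saw before: proceed
    raise ssl.SSLCertVerificationError(
        f"fingerprint for {host} changed -- possible hijack")

pins = {}
print(check_pin(pins, "bank.example:443", "aa11..."))  # pinned
print(check_pin(pins, "bank.example:443", "aa11..."))  # ok
```

A changed fingerprint raises rather than reconnecting, which is the whole point: after the first contact, a MITM has to produce the exact key you already pinned, not merely any CA-signed certificate.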
"Why not use public-key encryption for everything?
At face value, it seems that the existence of public-key encryption algorithms obsoletes all our previous secret-key encryption algorithms. We could just use public key encryption for everything, avoiding all the added complexity of having to do key agreement for our symmetric algorithms. By far the most important reason for this is performance. Compared to our speedy stream ciphers (native or otherwise), public-key encryption mechanisms are extremely slow. A single 2048-bit RSA encryption takes 0.29 megacycles, decryption takes a whopping 11.12 megacycles. To put this into comparison, symmetric key algorithms work in order of magnitude 10 or so cycles per byte in either direction. In order to encrypt or decrypt 2048 bytes, that means approximately 20 kilocycles."
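The quoted figures check out with a little arithmetic (cycle counts are the quote's, not fresh measurements):

```python
# Figures from the quoted passage above.
rsa_encrypt = 0.29e6   # cycles for one 2048-bit RSA encryption
rsa_decrypt = 11.12e6  # cycles for one 2048-bit RSA decryption
sym_per_byte = 10      # rough cycles/byte for a fast symmetric cipher

sym_total = 2048 * sym_per_byte          # 2048 bytes at 10 cycles/byte
print(sym_total)                         # 20480, i.e. ~20 kilocycles
print(round(rsa_encrypt / sym_total))    # 14  -> RSA encryption ~14x slower
print(round(rsa_decrypt / sym_total))    # 543 -> RSA decryption ~540x slower
```

So even the cheap RSA direction is an order of magnitude slower than pushing the same bytes through a symmetric cipher, which is why hybrid schemes use public-key crypto only for key agreement.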
EDIT: I suck at copy-pasta