
Heartbleed should bleed X.509 to death - lorddoig
http://lorddoig.svbtle.com/heartbleed-should-bleed-x509-to-death
======
tptacek
What you want is Moxie Marlinspike and Trevor Perrin's TACK.

It is already the case today that for Chrome and Firefox users, a compromised
CA can't easily hijack connections to Google Mail. Not only that, but any
attempt to hijack Google Mail connections in the large will run aground on
Chrome and Firefox users, who will not only not accept the rogue certificates,
but will also alert Google, which will put a gun to the head of the CA.

The feature that enables this is called certificate pinning. It works well for
small numbers of high-profile sites, but requires manual intervention on the
part of browser vendors.

TACK pushes certificate pinning out to site operators. It works like HSTS: the
first connection to a website is trusted, and that connection loads up state
that the browser holds. Subsequent connections check for consistency with the
first connection. Dynamic pins, or "tacks", make dragnet surveillance of all
sites asymptotically as risky as spoofing Google Mail. The attacker is nearly
certain to accidentally catch someone with a tack loaded, and at that point
the game is up: the attempt to present an otherwise-valid certificate that
violates a tack is a smoking gun, to which Google and Mozilla can respond with
their own firepower.

The nice thing about TACK is that it works alongside the CA hierarchy, and
even derives some value from it. A tiny fraction of the Internet could adopt
TACK and still make life much harder for attackers. The effort required from
site operators is small, and the whole system is invisible to end-users.
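The trust-on-first-use consistency check described above is simple enough to sketch. This is a hypothetical `TofuPinStore`, not the actual TACK mechanism: real TACK pins a separate "tack signing key" with activation periods rather than the certificate itself.

```python
import hashlib

class TofuPinStore:
    """Toy trust-on-first-use pin store, loosely in the spirit of TACK."""

    def __init__(self):
        self.pins = {}  # hostname -> sha256 fingerprint from first contact

    def check(self, hostname: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        if hostname not in self.pins:
            self.pins[hostname] = fp   # first connection: trust it and pin
            return "pinned"
        if self.pins[hostname] == fp:
            return "ok"                # consistent with the first connection
        return "MISMATCH"              # otherwise-valid cert violating the pin

store = TofuPinStore()
print(store.check("mail.example.com", b"cert-A"))  # pinned
print(store.check("mail.example.com", b"cert-A"))  # ok
print(store.check("mail.example.com", b"cert-B"))  # MISMATCH: smoking gun
```

The key property is the last line: a certificate that chains to a trusted root but contradicts the stored pin is detectable evidence, which is exactly what makes dragnet interception risky.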

Fixing the CA hierarchy is a lot less sexy than ground-up rewrites of the
whole Internet security model. But the ground-up rewrite is never going to
happen, and the incremental fixes are not only doable, but doable by the kinds
of generalist developers who are champing at the bit to stick it to the NSA.
The biggest security problem on the Internet isn't protocols; it's browser UX.

~~~
patio11
_Not only that, but any attempt to hijack Google Mail connections in the large
will run aground on Chrome and Firefox users, who will not only not accept the
rogue certificates, but will also alert Google, which will put a gun to the
head of the CA._

This is one of those "Things which somebody would probably bring up at an
anti-trust meeting if anybody at an anti-trust meeting had the foggiest clue
of what was going on", incidentally. (The hypothetical threat is "You give our
web properties a better SLA than anyone else in the world gets, or we will use
the coincidental fact that a large portion of the world's web traffic runs
code under our control _to end you_.")

It's funny, people (including me) always thought that Google's big swinging
Wand of Annihilation was google.com, but now they have at least _four_ of
them.

~~~
kbenson
It's not just Google properties; the Strict Transport Security section of the
chromium dev docs [1] lists multiple properties they do this for (for example,
twitter and paypal), and it appears you can specify your own as well through
the command line (and probably elsewhere).

[1]: [http://dev.chromium.org/sts](http://dev.chromium.org/sts)

~~~
Anderkent
I believe patio meant Google/Mozilla going to the CA and saying 'You
duplicated our cert, you better explain or we will stop trusting you in our
browsers'. Which would end the CA, of course. As they deserve to.

~~~
kbenson
I took it as an implication that Google properties were getting special
treatment by Chrome. I'm not sure how Chrome blacklisting a CA could be
construed as anti-trust, even if it essentially killed the CA, because there's
plenty of healthy competition in the browser space. They could just switch to
Firefox, and not even lose the extra protections they were getting since
Firefox pins google property certs as well.

------
ChuckMcM
Sigh, nice try but it doesn't work. It does remind me of the adage that goes,
"For every complex problem there is an answer that is simple and wrong."

The web of trust model doesn't scale, that was made abundantly clear by PGP
when it first came out. Even Phil Zimmerman, the guy that practically invented
it, agreed it didn't scale and something else was needed. X.509 came about not
because some person foisted it on the universe, rather a bunch of people who
were writing security systems at the time (myself included) got together with
other cryptographers, engineers, and administrators under a group hosted by
"Public Key Partners" (the folks collecting together the Patent pool
associated with public keys) and tried to come up with ways this might work.

It has had some fabulous successes: certificate authority compromised? Pull
their root cert and, blam, none of their keys are trusted any more. It has had
some failures. Call the baby ugly if you must, but at least propose something
that hasn't already been tried and shown not to solve the problem.

[Edit: I really need to keep peoples names in different buckets in my head]

~~~
zanny
It doesn't really solve the OP's points though, the largest of which is that
certificate authorities are a closed-off oligopoly. But we have literally no
way to trust them, even beyond the CA price gouging, because any state they
are located in will just seize their keys and read your traffic.

~~~
ChuckMcM
Trust me, zanny, I'm not picking on you; I'd like to examine your three claims
in a bit more detail:

1) _"certificate authorities are a closed off oligopoly"_ \- This is
absolutely not true. Pretty much anyone can start their very own Certificate
Authority. The code isn't that complicated (the specs are all available), and
the math is no longer patented, so you don't have to pay tribute to PKP. What
you do have to do, though, is convince three parties that you're trustable:
Mozilla, Microsoft, and Google. If they add you to their trusted root
certificate list then you've covered a whole _ton_ of the market. I know of at
least one "private" certificate authority which shares its root CA with
individuals who want to trust that the sites holding a cert from it are
"legit" (for some definition of legitimate).

2) _"any state they are located in will just seize their keys..."_ \- this
conflates two things, one is trust and one is seizure. If you live in the US,
and have a PGP key that is trusted by the _target_ of an investigation, and
law enforcement can convince a judge that using that is the only way for them
to get the proof they need, you may find yourself on the receiving end of a
subpoena which demands you hand over access to your key. You can refuse of
course, and the court can put you in jail for contempt. This issue is
completely separate from the Certificate Tree or Web of Trust choice. The
purpose of the _certificate_ is to establish _trust_ not _privacy_. The
purpose of the TLS _protocol_ (aka SSL) is to establish privacy. It's
necessary (but not sufficient) to be able to trust the other end.

3) _"... and read your traffic."_ \- Whether your _privacy_ is violated
relates to _how you established privacy_, as opposed to the mechanisms in that
protocol. I suspect you will think that is splitting semantic hairs, but bear
with me for a moment. The Heartbleed bug is in OpenSSL, _not_ the
certificate infrastructure. There are lots of things that used different
protocols, _and_ X.509 certificates, that are just as secure today as they
were before this bug was disclosed. The key here is that they used a different
protocol.

The OP's rant might more properly be leveled at what is effectively a
monoculture around OpenSSL, and I would agree that is a bad thing. But to
avoid that you would have to have people _write their own TLS libraries_, and
while that would ensure vulnerabilities were contained, it won't happen:
nobody these days can afford to hire a programmer to write TLS libraries when
the only 'hire' has been someone coding up some Javascript and CSS.

I can completely relate to the OP's angst over the challenges of keeping
things secure in today's world; it's something of a life we "chose" relative to
using open source.

~~~
rsync
"Pretty much anyone can start their very own Certificate Authority."

This may be technically true, but the process of becoming a CA was described a
year or two ago on the randombit cryptography list and it was estimated that
it is a (roughly) 1+ million dollar undertaking, just to get up and running
and accepted in the browsers.

~~~
mike_hearn
Yes, that's because browsers don't want to put security of hundreds of
millions of users in the hands of any Joe Random that asks for it. Running a
CA is expensive because they are held to very high standards. You need to have
your root keys inside an HSM, you need to have multiple people on your board
who can access those keys, you need to set policies for certificate issuance
that meet the CA/B requirements, you need to run OCSP servers, you need to be
audited by a third party to verify you're actually following all those rules,
all those things take money, so you need billing, and charging people money
implies you need support. In future you may need to take part in the CT audit
logging system as well.

Take out any of these things and you'd be left with something that is
significantly worse.

That's why it costs money to be a CA _that browsers trust_. Of course if you
want to be a CA that doesn't care about browsers, that's like three lines of
code at the command line.

This does not mean that the CA system is broken. There's a huge middle ground
between "anyone can do it for free" and "totalitarian oligopoly". $1M to start
a business is not that high compared to many other businesses.

------
hendzen
Not going to happen. The WoT is a usability nightmare for the 99.9% of
nontechnical users that don't care about things like 'p2p' & 'decentralized'.

Do you really think Granny is going to be happy with the tablet she bought
that can't connect to her online banking account out of the box? Have fun
explaining to her that she needs to exchange keys with enough trusted
intermediaries to have a valid trust path to her bank. I'm sure there are plenty
of key signing parties happening at the 'ol retirement home.

Or maybe you can explain to Granny why her money was stolen when a scammer
managed to compromise one of her trusted keys and then created a compromised
subgraph in the WoT leading to a fake certificate to her bank?

The WoT is a usability nightmare. Sure, the PKI isn't too great, but it's what
we have, and it is currently more practical than any other solution out there.
Security needs to be usable to be useful.

EDIT: for a good rebuttal to the OP, read this blog post by Mike Hearn which
covers the issues I raised and more:
[https://medium.com/bitcoin-security-functionality/b64cf5912aa7](https://medium.com/bitcoin-security-functionality/b64cf5912aa7)

~~~
tlrobinson
A "web of trust" is essentially an extension of the CA "tree of trust". Why
can't we have both?

Apple can act as iPad users' first WoT node. If a user logs into Facebook they
immediately add every Facebook friend to their web. etc, etc.

Just because WoTs are _currently_ usability nightmares doesn't mean they have
to be forever.

~~~
zanny
And you can easily have a web of distrust, in that if one of your more
immediate trustees stops trusting a distant branch, you can at least be warned
that something is wrong. That way you can avoid the whole "compromise one node
and screw everyone over" problem, as long as someone notices the compromise
before you trust it.

------
abalone
tl;dr: Don't trust big scary corporations like Symantec to verify sites, trust
your friendly local geek's network.

I think if you weren't exhausted by the sheer length of the post by the time
you reach that proposal tucked at the very end, you might think to ask some
critical questions. Like, what are the vulnerabilities and exploits of a peer-
to-peer system? Would this not be open season on socially engineering average
folks to trust the wrong peer? How vulnerable to attack are local geeks and
university computer science departments? How are compromises noticed and
handled by the average folks who trust a small local authority? How will the
verification work be paid for, or will it be completely volunteer based, and
how efficient will that be?

Moreover, what the author fundamentally misunderstands is the importance of
usability in security. Web security isn't perfect, but that's because stricter
security would make ecommerce annoyingly difficult. Then people start
taking shortcuts or just ignore security completely, which is a worse outcome.
It's not enough to point fingers at users and yell that they're doing it
wrong; security architects have to take responsibility for security outcomes.
A peer-to-peer system would be significantly more inconvenient for average
folks to use correctly, if only because of figuring out who to trust in the
first place.

~~~
politician
Well, to be fair, the author raised the UI/UX question which could be a great
way to overcome the bullshit "green padlock == safe" idea. Which it doesn't
mean now, post-Heartbleed, and never did.

A different UI might reveal the trust path more directly, so that if I
navigate to my bank that path might be forced into view.

I, for one, would love it if my browser displayed the trusted path used to
connect to my bank before loading any part of the page. The same goes for
self-signed certs. Would I avoid HN if their cert was self-signed? Nope.

------
valarauca1
The problem I see with PGP is that you'll end up with thousands if not
millions of keys you need to keep on hand to decrypt everything. Not to
mention the web of trust will be massive, and navigating it will likely
demand serious CPU power if it's strictly peer to peer.

To avoid this most people will start just trusting larger companies; Google,
Facebook, Apple, Mozilla. And only checking their keys, since they will trust
_that company 's_ key. And these companies will handle signing new websites.
Small websites won't care if you personally trust them, they'll only care if
one of the 'big companies' trust them.

In the end we wind up exactly where we started. Large companies are implicitly
trusted by everyone. Sure you may sign your key off to a few dev friends so
you can access their test sites, which will make self signing easier. The cost
will be mitigated, but in reality nothing will change. Likely within 3-4
browser generations we'll see non-company-trusted PGP keys get scrapped in
all but the more free (as in beer) browsers.

~~~
lorddoig
I'm not sure this is the likely outcome. There are a lot of people out there
savvy enough to not jump on the big company bandwagon - and they would be very
vocal about why it's a bad idea too. I don't see your vision becoming a
reality for as many as you expect, especially if PGP is brought into our
lifestyles a bit more (e.g. email, chat) and the general population begin to
understand it - it wouldn't be long before they understood enough to value
building their own trust relationships.

~~~
valarauca1
The problem is it's not a personal-savvy problem, it's a mathematics problem.
PGP will pick the shortest route. So if you trust google.com, and google.com
trusts the site, bing-bang-boom, you're done.

The shortest route will always favor the person with the most keys and the
most trust, who will invariably figure out that he/she can make money by
getting more keys and more trust. And, lucky for us, there are both a finite
number of persons and a finite number of keys that will be signed by each
key. We end up with a pyramid scheme.

Where the more trust and keys you have, the easier it'll be to get more trust
and keys.
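The "shortest route" dynamic can be seen in a small sketch. This is a hypothetical trust graph walked with plain breadth-first search; real PGP trust computation also weighs trust levels and required signature counts, but the hub effect is the same.

```python
from collections import deque

def trust_path(graph, start, target):
    """Shortest chain of key signatures from `start` to `target` (BFS).

    `graph` maps each key to the keys it has signed.
    """
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no trust path exists

# A well-connected hub shortcuts every route, so it wins all lookups.
graph = {
    "me":      ["friend", "bigco"],
    "friend":  ["friend2"],
    "friend2": ["smallsite.example"],
    "bigco":   ["smallsite.example"],  # the hub with the most signatures
}
print(trust_path(graph, "me", "smallsite.example"))
# -> ['me', 'bigco', 'smallsite.example'], never the longer friend route
```

Because the hub's route is always shorter, every lookup reinforces the hub, which is the centralizing feedback loop described above.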

:.:.:

The problem is capitalism. In all honesty we'll likely see the PGP network
end up in the hands of banks. You want secure access to your online account?
Sign each other's keys. Now the bank has a 5-million-person-strong trusted
key. They'll sell that trust, naturally. I trust most tech companies enough
not to instantly monetize the PGP web, but some would.

Likely some tech company attempts to monetize it, and they get yelled at.
They stop. Another does, nothing changes, so people accept it as the new
norm. The argument made is that it allows faster page loads and easier
access. Nobody says a word after a year.

~~~
saintgimp
I completely agree with the capitalism problem, but the most likely hands to
end up holding the web of trust would be the browser makers, not the banks.

------
wmf
A bug in a PGP implementation could have leaked your PGP private key. A bug in
an SSH implementation could have leaked your SSH private key. CAs may be a
flawed concept, but I don't think they have anything to do with Heartbleed.

~~~
lorddoig
Heartbleed knocked us down, the CA system is going to make it very difficult
for us to stand up straight again - that's the point.

~~~
wmf
How much effort would it be to rebuild a web of trust after all the keys were
simultaneously assumed compromised?

~~~
lorddoig
Good point, but will it be less than all the effort and money about to be
expended in the coming months? Who knows. For the record I did say that PGP
may not be the solution.

The other great thing is that PGP is not just for sites but for _people_ , so
even if all the private keys handled by nginx/apache/whatever were compromised
Heartbleed-style, the core person-to-person trust relationships would be
unaffected; the core of the web of trust would be intact, only the endpoints
would need to be re-verified.

~~~
politician
That's a great point worth repeating. Your personal trust relationships
probably don't also change when your bank gets hacked and needs to replace
their keys, and a network of a certain size will restore access to your bank
relatively quickly due to friend of a friend connections.

It also reduces the burden on your bank for maintaining the security of their
keys (to some extent). It's still very important, but the consequences are no
longer quite so catastrophic.

------
weavejester
The Queen/Princess/DNA analogy was more confusing than the actual system of
certificate signing.

The author also underestimates the consequences of performing a MitM attack
with a root certificate. MitM attacks can be detected and a copy of the signed
cert is proof. If the NSA were abusing a root cert, there is a chance it could
be noticed.

So what if it was? Well, that certificate would be removed from browsers and
operating systems. The CA would be placed under suspicion. In a worst case
scenario, the CA could be completely ostracised, perhaps even to the point of
bankruptcy. An abuse of a root certificate could potentially do hundreds of
millions of dollars worth of damage.

That's not even covering the diplomatic fallout. If the CA points the finger
at the NSA, the President would have to explain why the target was so
important that it merited destroying part of the root trust system of the
Internet.

There are far less messy ways of dealing with a high-value target. I'd be more
concerned about other zero-day vulnerabilities the NSA might have found.

~~~
politician
If one of the Big 4 were compromised (which we should all agree is most
probably the case for all of them), even then, "too big to fail" rules the
day.

It's vanishingly unlikely that Google, Microsoft, and Apple would remove a Big
4 CA root cert and break the trusted path of 25% of the secured market.

~~~
weavejester
It wouldn't just be the browsers removing the CA. There would be a strong
incentive for websites to switch as well, particularly foreign ones, so you'd
find a mass exodus anyway, even without browser support.

Browsers don't have to turn a root CA off all at once, either. They could
start by turning off Extended Validation for the compromised CA, or they could
release a statement saying that if they don't get guarantees this won't happen
again, they'll remove the CA in a year's time. They could allow connections,
but change the SSL icon to indicate the certificate has been compromised.
Browsers have a lot of options to put pressure on root CAs, even without
removing the cert.

------
jessaustin
_He who controls a Queen can make functionally equivalent copies of every
Princess and Princess-baby in the Queen’s lineage. They have the skeleton keys
to your ‘secure’ kingdom and could at any time decide to become a fraud
factory and dish out copies of your keys to whomever they fancy._

In a sense, it's worse than that, because a "queen" can actually sign
(correctly or not) _any_ "princess-baby" in _any_ "lineage".

~~~
PhantomGremlin
YES YES YES, 1000 TIMES YES!!!!

Unfortunately not too many people know this, and it's a really important
issue.

BTW like a lot of other people here, I didn't like the "Queen" analogy. IMO it
didn't make the explanation any simpler.

------
saintgimp
A couple of problems:

The average internet user has no idea who's trustworthy and who isn't. If they
have to personally grant trust in order to get at some content they're looking
for, they'll simply do it. This is the same behavior that causes people to
execute boobs.exe attached to a random email that landed in their inbox.

In order for this to work, the average internet user must cede the trust
decision-making process to some other entity who claims to be more qualified
to do it, like say the company who makes their browser. There are four browser
makers that account for probably 90+% of usage. Now you're right back to where
you started with the current oligopoly system, except that with the new system
there's a much larger attack surface for nefarious agents to use when trying
to insert themselves into the trust chain because anyone at all could let them
in.

Cynically, that's the problem with internet security protocols in general -
they have to work not only for smart, self-interested people but also for
stupid people who are actively self-harming. That's a really tough bar to
meet.

------
kijin
I would much rather trust a handful of multinational corporations than a group
of "local geeks" to tell me which keys I should trust.

Why?

1) It is probably easier for casual attackers to trick a local geek to trust a
phony key. Determined attackers and state-level actors can probably compromise
CAs as well, but most day-to-day threats are of the casual type.

2) When a local geek accidentally trusts a phony key, and other people realize
it and point it out to them, all that happens is "Oops, I'm sorry." When
Comodo is caught issuing phony certificates, there will be a Silicon Valley-
wide uproar, browser vendors will very quickly invalidate the offending
intermediate key, and the incident will hurt Comodo's bottom line for many
years afterward. In other words, Comodo is more accountable than any private
individual, not because it's any more ethical, nor because it is any more
competent, but simply because it is a highly visible target of public scrutiny
whose very survival depends on its public image as a trustworthy CA.

3) Most people (including but not limited to grandmas) who are just beginning
to use the Internet have no way to know which keys to trust. We in the
programmer community are an exception, not the rule. So what's actually going
to happen is that browsers will trust, by default, a bunch of highly reputable
individuals or groups (perhaps the browser vendors themselves) and advise the
user to trust whomever these people trust. That's not really different from
the current situation with CAs. We just replace Verisign and Comodo with
@cperciva and @tptacek.

~~~
TeMPOraL
I strongly disagree with your point 2). The reality is, if Comodo is caught
issuing phony certificates, there's some media shitstorm that never actually
changes anything, stocks go up and down a bit, and a few days later nobody
remembers it or cares about it, and the company continues doing its business
as usual (don't believe me? then why does GoDaddy still exist?). On the
other hand, we have social mechanisms for dealing with mistrust in place since
forever. If you are caught untrustworthy once about something, you'll probably
never be trusted again on that issue. People know how to deal with those
situations effectively between themselves. It's also easier to boycott an
untrustworthy peer than a multinational corporation. You have many friends to
choose from, but there is usually no other company to go to for a comparable
service.

~~~
kijin
> _we have social mechanisms for dealing with mistrust in place since forever
> ... People know how to deal with those situations effectively between
> themselves._

As some of the other commenters have mentioned, the problem seems to be that
these social mechanisms don't scale.

Please take my point 2) in combination with point 3). As I said, techies are
the exception, not the rule. It's not just Grandma who will have trouble with
a web of trust, it's pretty much everyone except us. How do they even know
which peers to distrust? Will there be a news feed about compromised peers?
Will everyone have to subscribe to one? What if someone wants to explore a
part of the web that none of their peers, or their peers' peers, have ever
heard about?

The single most important advantage of a centralized model of trust is that a
list of trustworthy vs. untrustworthy parties can be quickly and widely
distributed in an automated fashion. Comodo issues phony certs? 12 hours
later, every copy of Firefox receives an updated list of revoked keys. I know
it doesn't currently work like that, but it's entirely possible. Whereas with
a web of trust, millions of people will be left trusting compromised peers for
many months afterward because they didn't get the news.
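The automated-distribution advantage can be sketched in a few lines. This is a hypothetical `RevocationList`, modeled loosely on the idea behind Chrome's CRLSets and Firefox's OneCRL; the real formats and push mechanisms differ.

```python
# Toy model of a centrally pushed revocation list: the vendor updates
# one blocklist and every installed browser picks it up automatically.
class RevocationList:
    def __init__(self):
        self.revoked = set()          # (issuer, serial) pairs

    def push_update(self, entries):
        """Vendor pushes new revocations to every installed browser."""
        self.revoked.update(entries)

    def is_trusted(self, issuer, serial):
        return (issuer, serial) not in self.revoked

crl = RevocationList()
print(crl.is_trusted("PhonyCA", 1234))   # True: nothing revoked yet
crl.push_update({("PhonyCA", 1234)})     # one update, hours after discovery
print(crl.is_trusted("PhonyCA", 1234))   # False, everywhere, at once
```

One central write invalidates the bad key for every client; a web of trust has no equivalent single point from which to broadcast "stop trusting this".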

------
notdonspaulding
> And fundamentally you have to trust that they who hold the Queens aren’t
> dishing out copies of your certificates.

In general, I'm a fan of analogy, but I'm having trouble following this whole
queen/princess/baby thing. Putting that aside, I think you're claiming that
CAs can present your certs to random clients?

This might be an indictment against the DNS system, which directs the clients
to an IP address of its choosing, but if the client makes it to your server,
your server chooses which cert to present to the client.

> What we have done here is fitted our doors with some mega heavy duty locks,
> and given the master keys to a loyal little dog.

Again with the strained analogy. Who's the dog? What does the mega lock
represent?

I think this betrays a fundamental misunderstanding of what the CA is doing.
The client asks your service to validate itself, your service does so by
saying that Verisign/Thawte/etc. has previously signed the cert that your
service sent to the client. The client does not have to automatically trust
Verisign or Thawte or whomever you say signed it, and furthermore, if it
decides that it does trust that party, the NSA is not able to use that to its
advantage in any way as a result of Heartbleed.

> As of today, that green padlock no longer means what it once did. And the
> reason for that is because of the business conditions of gatekeepers.

No, it doesn't mean what it did yesterday _because of a bug in an
implementation of OpenSSL_. The protocol is still just as valid. The business
conditions of the gatekeepers, while distasteful to you, don't invalidate
the mechanisms by which that little green padlock gained its fame.

~~~
joshstrange
Not the OP but I think I might be able to help.

>> I think you're claiming that CAs can present your certs to random clients?
This might be an indictment against the DNS system, which directs the clients
to an IP address of its choosing, but if the client makes it to your server,
your server chooses which cert to present to the client.

Here I am fairly confident that he is talking about a situation in which a CA
signs a key for your domain and gives it to someone else (NSA/GCHQ) and they
perform a MITM attack on a user like this:

Client -> Fake key for yourdomain.com provided by MITM proxy server -> decrypt
data then encrypt with real key for yourdomain.com -> Your Server
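That decrypt/re-encrypt hop can be sketched with toy crypto. The XOR cipher and key names below are purely illustrative stand-ins for TLS; the point is only the shape of the relay.

```python
# Toy sketch of the described MITM relay (XOR stands in for TLS).
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

fake_key = b"rogue-cert-key"   # signed by a coerced CA, trusted by the client
real_key = b"server-own-key"   # the site's genuine key

# Client encrypts to what it believes is yourdomain.com:
wire = xor(b"password=hunter2", fake_key)

# MITM proxy: decrypt with the rogue key, read it, re-encrypt for the server.
plaintext_seen_by_mitm = xor(wire, fake_key)
forwarded = xor(plaintext_seen_by_mitm, real_key)

print(plaintext_seen_by_mitm)                            # fully readable
assert xor(forwarded, real_key) == b"password=hunter2"   # server none the wiser
```

Both endpoints see a valid encrypted session, which is why this attack is invisible without something like pinning.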

CAs have been compromised before [0] (and I'd be willing to bet there are
quite a few more incidents that they have swept under the rug), and so there
has been discussion on what happens when you can sign a certificate for any
domain. I believe this is what the OP is referencing.

>> Again with the strained analogy. Who's the dog? What does the mega lock
represent?

I agree with you; this one is harder to understand. As I see it, the mega lock
= the CAs' private keys, and the dog = the CAs. When he talks about the dog
being tempted by
a steak he is referencing the rumors that the NSA/GCHQ have back room
agreements (steak) with CA's or have simply hacked the CA's and taken what
they needed (for this I would say something like "the dog was asleep").

>> No, it doesn't mean what it did yesterday because of a bug in an
implementation of OpenSSL. The protocol is still just as valid. The business
conditions of the gatekeepers, while distasteful to you, doesn't invalidate
the mechanisms by which that little green padlock gained its fame.

This is less cut and dry than you suggest. The green padlock has always meant
jack-shit when it comes to state actors (if you subscribe to the theory that
they have either bought off one or more CA's or hacked them, which I do), what
it did protect you from was your run-of-the-mill online criminal. It made it
impossible for them to sniff your login credentials a la Firesheep[1] (Yes the
padlock itself didn't do that, the PKI did but it gave people a simple way to
check if the connection was secure and the website was who it said it was).
What the heartbleed bug did was allow ANYONE to potentially steal your private
key right off your server, opening the door to not only NSA/GCHQ but anyone
with an internet connection (and the knowledge to exploit it).

The OP is suggesting that CAs should have revoked certificates to force
people to fix their servers, but they never would, due to the backlash. CAs
have the ability to revoke certificates that are compromised, and we have to
assume every certificate has been. I don't know what the right course should
be, but one that springs to mind is giving everyone a deadline at which point
all certificates will be revoked, and refusing to re-issue a certificate for
a domain that is still vulnerable to Heartbleed. YES, this is extreme, and
no, it's neither simple nor easy, but I think there are very good reasons why
it should be done. The thing is, at least IMO, that CAs really don't give a
shit; like the OP suggests, they care about one thing and one thing only:
their investors. If they really did care about making the web a safer and
more secure place then why aren't they sponsoring OpenSSL or working on their
own open source SSL library?

[0]
[http://crypto.stackexchange.com/a/11765](http://crypto.stackexchange.com/a/11765)

[1] [http://codebutler.com/firesheep/](http://codebutler.com/firesheep/)

~~~
throwaway2048
With a properly done CSR (certificate signing request), a CA never has access
to your private key, and therefore cannot "give it to someone else".

~~~
joshstrange
Correct, they cannot give it away, but they can sign a new key for your
domain which the attacker can use.
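The distinction can be sketched with toy crypto. Hashing and HMAC stand in for real RSA/ECDSA math here, and all names (`ca_secret`, `yourdomain.com`) are illustrative: the point is that only the public half ever reaches the CA, yet the CA can still sign a different key for the same domain.

```python
import hashlib, hmac, secrets

# Site: generate a keypair; the private key never leaves the server.
private_key = secrets.token_bytes(32)
public_key = hashlib.sha256(b"pub:" + private_key).digest()

# Site -> CA: the "CSR" contains the domain and the public key only.
csr = {"domain": "yourdomain.com", "public_key": public_key}

# CA: issues a "certificate" by signing domain + public key
# (toy signature: HMAC with the CA's own secret).
ca_secret = secrets.token_bytes(32)
def ca_sign(domain: str, pub: bytes) -> str:
    return hmac.new(ca_secret, domain.encode() + pub, hashlib.sha256).hexdigest()

signature = ca_sign(csr["domain"], csr["public_key"])

# But a coerced or compromised CA can sign an attacker's key for the
# same domain, and that rogue cert validates just as well:
attacker_pub = hashlib.sha256(b"pub:" + secrets.token_bytes(32)).digest()
rogue_sig = ca_sign("yourdomain.com", attacker_pub)
```

So the CSR protects your private key from the CA, but not your domain from the CA: signing authority, not key custody, is what gets abused.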

------
jboynyc
Previous HN discussion on Monkeysphere, a Debian project which implements
something like what the author envisions:
[https://news.ycombinator.com/item?id=6617132](https://news.ycombinator.com/item?id=6617132)

And the description from the Monkeysphere site on why they are a better
alternative for HTTPS:
[http://web.monkeysphere.info/why/#index1h3](http://web.monkeysphere.info/why/#index1h3)

~~~
infinity0
Indeed, this is pretty much _exactly_ what the OP is talking about. The
problem is that it's hard to bootstrap, since correct verification procedures
are not widely-known.

TACK, which tptacek mentioned, is an orthogonal strategy for solving the same
problem, but it assumes that some MITM will be detected. An ideal solution
would involve a combination of both TACK and Monkeysphere.

~~~
aidenn0
There's also Convergence, which currently can work for the case where the
client is undergoing a MITM, but not the server. Add support for notaries to
cache TACK responses and you are pretty secure.

------
joeblau
I used to work on PKI and this right here would have the old guard of system
security architects up in arms:

      > 90% of that guff can be automated and hidden underneath a good UI, but
      > can we dispense with the need for key exchange parties? Absolutely we
      > can.

So who builds this "good UI that everyone trusts"? Without details of how this
works, there is no way this system can grow. There is no way to have efficient
key exchange except through an arduous process of everyone creating this mesh
of trust manually. PKI approximates this "good UI everyone trusts" with a bad
UI that everyone trusts, which has turned into the four companies mentioned in
the article. It sounds good, but it's an iron triangle.

------
PeterWhittaker
_I’m not a cryptographer; nor am I a hard core C guru; nor have I invented
some brilliant library that gives me street cred to talk about this stuff. I’m
a nobody._

But somehow I am qualified to inform the world as to why PGP is superior to
X.509.

I'm not debating that point, and informed debate would be welcome. And I have
to say that I find it refreshing for a blogger to so inform me in the first
paragraph as to just how quickly I should skim through or close their rant.

I really did appreciate that. Though somehow I find myself investing more time
in the writing of this comment than in the consumption of the article.
Fortunately, like floss, 't'will soon be forgotten.

------
astrodust
Can't this be solved with some kind of distributed, authenticated, pre-
existing protocol? Something like...

DNS?

With the DNSsec extensions it should be possible to publish enough information
to authenticate a given site against a certificate. If your DNS has been
compromised you've got bigger problems than your SSL cert.
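This is roughly what DANE (RFC 6698) does: it publishes a TLSA record in
DNSSEC-signed DNS whose payload, for the common "3 1 1" form, is just a
SHA-256 digest of the certificate's SubjectPublicKeyInfo. A minimal sketch of
building that record (the input bytes and domain here are hypothetical
stand-ins for real DER-encoded key material):

```python
import hashlib

def tlsa_3_1_1_digest(spki_der: bytes) -> str:
    """Digest field of a DANE-EE TLSA '3 1 1' record:
    SHA-256 over the certificate's SubjectPublicKeyInfo (DER bytes)."""
    return hashlib.sha256(spki_der).hexdigest()

# Hypothetical stand-in for a real key's DER encoding:
fake_spki = b"\x30\x82\x01\x0a"
record = f"_443._tcp.example.com. IN TLSA 3 1 1 {tlsa_3_1_1_digest(fake_spki)}"
print(record)
```

A client resolving that record over DNSSEC can check the served certificate
against the published digest without consulting any CA.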

~~~
josteink
If your DNS is compromised, SSL and X.509 are currently what's protecting your
users from a fraudulent site.

~~~
lawl
> _If your DNS is compromised SSL and x509 is currently what 's protecting
> your users from a fraudulent site._

Uhm, no? Because you can literally just buy a new valid cert for it, as long
as we're talking Domain Validated.

I doubt your average user will notice that there isn't a green bar anymore or
that the certificate lacks ownership information.

------
mcgwiz
Rather than all the engineers and tech-minded people here naysay the idea into
oblivion, I think it's worthwhile that we encourage designers to take an
earnest stab at this problem.

The complaints here are basically "w.o.t. is not usable", but that's basically
what the author said. He therefore also indicated this is as much a design
problem as anything else. That's a useful insight we shouldn't dismiss, at
least not until some thoughtful, imaginative designers have actually taken a
crack at it.

------
Nursie
Heartbleed and X.509 are basically unrelated, aren't they?

The OpenSSL bug that allows heartbleed is nothing at all to do with the (many)
flaws in the public trust system.

The fundamental problem here (as I see it) is that you're trying to set up
trust between parties that have no existing relationship. This requires third
parties and externalised trust whether you use a CA or a P2P net.

Either way, it's nothing much to do with heartbleed, which would have leaked
the keys to the kingdom under either model.

------
spiralpolitik
Not going to happen, because the main OpenPGP implementation (gpg and gpg2)
currently has a non-permissive license, such that it cannot be used
"everywhere".

Until there is an implementation of OpenPGP under a permissive license,
getting the world plus dog to switch to PGP is a non-starter.

------
GoodPractice
From the perspective of a layperson with limited tech knowledge I really like
the way you explain things!

------
jmspring
The article makes a generalization that is not correct in _most_ cases around
certificate request and issuance --

"And fundamentally you have to trust that they who hold the Queens aren’t
dishing out copies of your certificates."

The entity holding the Queens can give out a copy of your certificate, sure,
but in most cases, they do not hold the crown jewels -- your private key --
which is the part of the Heartbleed bug that is really bad.

There have been cases of CAs either mis-issuing or being compromised and
issuing new certs that duplicate a site's identity, but that is different from
releasing the private key of a particular certificate.

------
pskocik
_He who controls a Queen can make functionally equivalent copies of every
Princess and Princess-baby in the Queen’s lineage. They have the skeleton keys
to your ‘secure’ kingdom and could at any time decide to become a fraud
factory and dish out copies of your keys to whomever they fancy._

This seems like utter nonsense to me. Certification authorities should never
get to look at my private key, and I don't care about them giving out my
public key (it's public, after all). The best they can do, if they're evil, is
create a new pair with information that impersonates me.

------
mrerrormessage
Surely if Zuck got half the world signed up for a network that does nothing
but suck our eyeballs in return for money out of advertisers' pockets, we
could get a few million, even say, 10-20 million people using PGP. Remember
that Tor was once considered a niche tool as well.

------
lorddoig
A follow up for the keen and eager

[http://lorddoig.svbtle.com/should-we-make-a-working-group-
to...](http://lorddoig.svbtle.com/should-we-make-a-working-group-to-kill-x509)

------
exelius
The problem isn't any one cryptography scheme; the problem is trust. How do we
build a trust framework that facilitates commerce on a wide scale while
remaining truly secure? I don't think we can; so we give up a little bit of
security for a whole lot of economic benefit.

Without centralized, trusted gateways, it's not even clear that your
communications are secure. They need to be centralized to make them easy to
monitor and audit. With a distributed trust model, the compromise of one node
can be catastrophic; all you're really doing is handing control of the trust
network over to botnets.

This is a really hard problem. I can't think of a better solution that would
serve the same niche as our current one.

------
negamax
The next 10 years will be all about decentralization of every infrastructure
and institution. Only in a trustless system can we have any chance at trust.
So no CAs, no authorities.

------
nardi
The missing piece of this for me is: How do we fix X.509 for mobile apps,
considering 80%+ of mobile usage is in apps, not browsers?

------
ClashTheBunny
This already exists:
[http://web.monkeysphere.info/](http://web.monkeysphere.info/)

------
elchief
Anybody know which, if any, of the SSL cert vendors don't use OpenSSL?

------
avodonosov
OMG, you've exposed all those intruders-oligopolists!

------
strictfp
WoT and CA systems are both problematic since they can be altered on the fly
and thus 'hijacked'.

I wonder if we wouldn't be better off with something similar to what SSH does:
accept trust the first time, and verify that the signature doesn't change on
every subsequent connection attempt. This way one would be immune to hijacks.

It wouldn't solve first time verification, but how likely is a first time
spoof? And for really sensitive communications you could use pre-shared keys.
I could for instance get a hardware token from my bank containing their public
key.
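The SSH-style check described above can be sketched roughly like this (the
fetch helper needs a network; `tofu_check` is the pure pinning logic, and all
names here are hypothetical):

```python
import hashlib
import socket
import ssl

def fetch_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 over the server's DER-encoded certificate (needs a network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def tofu_check(pins: dict, host: str, fingerprint: str) -> bool:
    """Trust on first use: pin the first fingerprint seen for a host,
    then insist it never changes on later connections."""
    if host not in pins:
        pins[host] = fingerprint   # first connection: trust and remember
        return True
    return pins[host] == fingerprint
```

A changed fingerprint on a later connection is the smoking gun; the open
question, as noted, is what the client should do about the very first
connection.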

------
rubbsdecvik
PGP would be a problem for high load servers too.

"Why not use public-key encryption for everything?

At face value, it seems that the existence of public-key encryption algorithms
obsoletes all our previous secret-key encryption algorithms. We could just use
public key encryption for everything, avoiding all the added complexity of
having to do key agreement for our symmetric algorithms. By far the most
important reason for this is performance. Compared to our speedy stream
ciphers (native or otherwise), public-key encryption mechanisms are extremely
slow. A single 2048-bit RSA encryption takes 0.29 megacycles, decryption takes
a whopping 11.12 megacycles. To put this into comparison, symmetric key
algorithms work in order of magnitude 10 or so cycles per byte in either
direction. In order to encrypt or decrypt 2048 bytes, that means approximately
20 kilocycles."

[https://www.crypto101.io/](https://www.crypto101.io/)

EDIT: I suck at copy-pasta
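For what it's worth, the quoted figures can be sanity-checked with a little
arithmetic (a rough comparison of cycle counts only, using just the numbers
from the Crypto101 quote above):

```python
# Figures from the Crypto101 quote (cycles, not wall-clock time).
rsa_encrypt = 0.29e6        # one 2048-bit RSA encryption
rsa_decrypt = 11.12e6       # one 2048-bit RSA decryption
sym_per_byte = 10           # order-of-magnitude symmetric cipher cost

block = 2048                                # bytes
sym_cost = block * sym_per_byte             # ~20 kilocycles, as quoted
print(rsa_decrypt / sym_cost)               # RSA decryption ~500x costlier
```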

~~~
wmf
I think the author is proposing to replace CAs with a PGP-like web of trust
but keep the rest of SSL/TLS the same, so public-key crypto would only be used
to set up a session key.
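Right: the expensive public-key step runs once per session to agree on a
symmetric key, and the bulk traffic uses a fast cipher keyed with it. A toy
Diffie-Hellman sketch of that key-agreement step (tiny, insecure parameters,
illustration only):

```python
# Public group parameters (toy-sized; real TLS uses 2048-bit+ groups or ECDH).
p, g = 23, 5

a, b = 6, 15                         # each side's secret value
A, B = pow(g, a, p), pow(g, b, p)    # public values exchanged in the clear

# Both sides derive the same session key; the bulk data would then be
# encrypted with a fast symmetric cipher under this key.
key_client = pow(B, a, p)
key_server = pow(A, b, p)
print(key_client == key_server)      # True
```

Swapping the CA hierarchy for a web of trust only changes how the public keys
in this step get authenticated, not the handshake itself.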

~~~
rubbsdecvik
That's fair. I re-read the article and see your point. I would still agree
with other comments here that a WoT would be difficult to implement in a user
friendly way that wouldn't also be exploited.

~~~
zanny
How is it hard for a browser vendor to implicitly trust itself, and build its
WoT from there? Get Chrome, trust Google. Get Firefox, trust Mozilla. It means
_you_ have to trust your browser, but.. you kind of already have to do that,
you are putting all your personal info through its text fields and such.

