
No, don't enable revocation checking - moonboots
https://www.imperialviolet.org/2014/04/19/revchecking.html
======
wpietri
For those, like me, wondering who the author might be, it appears to be this
guy: "Adam Langley works on both Google’s HTTPS serving infrastructure and
Google Chrome’s network stack. From the point of view of a browser, Langley
has seen many HTTPS sites getting it dreadfully wrong and, from the point of
view of a server, he’s part of what is probably the largest HTTPS serving
system in the world."
[http://www.rsaconference.com/speakers/adam-langley](http://www.rsaconference.com/speakers/adam-langley)

~~~
einhverfr
On the other hand, we found that as of Friday, Chrome DID NOT recognize that
one of our wildcard certs for Efficito had been revoked. We sent out an email
to our customers telling them to enable cert revocation checking.

Revocation isn't perfect, and I would not suggest the current status quo is
OK, but the intermediary approach Chrome takes cannot be trusted, as they have
now shown.

If Chrome will not show our cert as revoked, what is the point of revoking the
cert? The author has points, but the approach Google is taking is a cure worse
than the disease...

~~~
chmars
Honestly, I don't see a point in certificate revocations anymore, i.e., your
implicit conclusion seems to be correct. And I don't blame Google for our
broken revocation system – especially because even the best revocation
system couldn't fix a certification system that is broken at its core.

~~~
einhverfr
Google's problem is they decide which revocations are worth passing on to the
browser. That's at least as broken by design.....

Believe me, I am aware of the limits of soft-fail, but the answer cannot be,
even in the short run, to let a browser vendor tell us which revocations are
worth knowing about.

~~~
enneff
Soft fail doesn't work at all. CRLset works for the certificates that it
covers (some 25k of them, btw).

Which approach is worse?

------
hereonbusiness
So this complete infrastructure is crap. OpenSSL, software half the internet
uses but no one cares about because it's crap. CAs not revoking keys even
though they know they're compromised. Revocation being worthless because it's
too much of a hassle for anyone to bother.

Great. Maybe now, when half the internet is already compromised and all our
certificates are not worth the bytes they're made of ... maybe we should try
to come up with something better.

edit: Actually, this whole Heartbleed affair has been quite eye-opening for
me, so I'm thankful for that. But it certainly didn't help with the paranoia
I've felt over the last couple of years while using services on the internet.

~~~
tptacek
Yes! Now it's time for us to generate a whole new broken infrastructure! I'm
sure if we just rewrite all the Internet's crypto in Rust, everything will be
great 10 years from now. No way will a radically different new transport
cryptosystem grant researchers 100 new bugs to play with; after all, we'll
have option types.

~~~
lucian1900
You're right to mock the attitude people have that the _only_ thing wrong with
OpenSSL is the language it's written in, but memory unsafety has nevertheless
been a factor in many security flaws.

------
gojomo
Still seeing lots of explanation about why the current system sucks, and not
much about how a more robust system might be created and promptly adopted.
Langley (the author) mentions short-lived certificates (either rapid
expiration or via 'must staple')... how soon can we enforce that? How short
can that make the danger period where the CA, Google, and the "connected web"
all know that a certificate is invalid, but a user at risk does not?

Why not other ways to rapid-broadcast invalidity in censorship-proof ways, so
that a browser encircled by an enemy can quickly figure out something's wrong?
(Or, why can't security professionals get around interdiction as effectively
as copyright pirates do?)

~~~
moe
_how a more robust system might be created and promptly adopted_

I'm quite fond of how the SSH host key system works.

Prompt me the first time I see a new key, provide me with supporting evidence
(e.g. show me how many people have previously accepted this fingerprint for
this domain) and alert me the same way in the future if the key ever changes.

If the 'supporting evidence' was plugin-based then this system could quickly
become more user-friendly _and_ trustworthy than the current centralised
system can ever be.

There could be plugins to automatically trigger an SMS challenge on first
contact with particularly sensitive sites. Multiple competing P2P web-of-trust
plugins, plugins that let you follow trust-lists from third parties, etc.

In the current system you rely on a single, very questionable opinion on the
trustworthiness of a given certificate. In the new system you'd be presented
with a trust score compiled from a whole range of opinions, the sources of
which _you_ chose beforehand.

Of course this approach doesn't include a license to print money for corrupt
CA organisations and is not going to happen for that reason alone.
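Sketched in Python, the prompt-on-first-sight / alert-on-change behaviour is
just a trust-on-first-use (TOFU) fingerprint store, like SSH's known_hosts;
the storage format and names here are illustrative assumptions:

```python
# Hypothetical trust-on-first-use (TOFU) fingerprint store in the style of
# SSH's known_hosts. The JSON storage format and API are assumptions.
import hashlib
import json
import os

class TofuStore:
    def __init__(self, path="known_certs.json"):
        self.path = path
        self.known = {}
        if os.path.exists(path):
            with open(path) as f:
                self.known = json.load(f)

    def check(self, host, cert_der):
        """Return 'first-use', 'match', or 'CHANGED' for a host's cert."""
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self.known:
            # First contact: this is where a prompt (or plugin) would run
            # before the fingerprint gets pinned.
            self.known[host] = fp
            self._save()
            return "first-use"
        return "match" if self.known[host] == fp else "CHANGED"

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.known, f)
```

The "supporting evidence" plugins described above would hook in at the
'first-use' and 'CHANGED' decision points.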

~~~
einhverfr
Supporting evidence, to be worthwhile, must come from a trusted source, right?

~~~
moe
As said, it should be pluggable. Yes, you will still need seed fingerprints
for a few independent plugins (like the CA list of today), and you still need
to trust or vet the browser itself (also like today).

The point is that once this seed trust has been established, which ideally
would need to happen only once in your lifetime (given proper sync/backup
facilities), you gain actual control over who you want to delegate your trust
to, if at all. On a site-per-site basis.

If an American, a Chinese _and_ an EU database independently agree on a
fingerprint for a site, then that would be an actual trust indicator. Very
much unlike the perpetually compromised zoo of certificate authorities of
today.

And obviously once there's a market for plugins you'd quickly see plugins
going far beyond what we get to know today (read: essentially nothing). There
could be subscription-based plugins providing detailed information about the
remote party, down to credit ratings, company history, you name it.

------
AhtiK
My quick write-up on this from a few days ago:
[http://www.ahtik.com/blog/startssl-revocation-fees-will-
not-...](http://www.ahtik.com/blog/startssl-revocation-fees-will-not-matter-
and-ssl-certs-are-funky_u1g8E/)

Yes, revocation is broken by design, especially on mobile and in the Chrome
browser. I'd say it's broken everywhere except Firefox with OCSP hard fail
enabled.

Thanks to this flaw, StartSSL's business model of free certs and paid
revocations has become somewhat outdated, IMHO.

I'm dreaming that we can fix the revocation issue with 24-hour-valid
certificates, as suggested at the end of my post.

But I must be naive on this, as it's too simple; I just haven't found the flaw
in it myself. Yes, it needs technical orchestration, but at least it does not
add an extra single point of failure to every session.

EDIT: Just finished the OP post and it does indeed also mention "short-lived
certificates" in the end as a potential solution.

~~~
ivanr
Indeed, short-lived certificates do seem like a solution to this problem. One
downside might be the fact that (anecdotally) many users have inaccurate
clocks. I read somewhere recently that a large web site has to back-date their
new certificates, because, otherwise, certificate rotation/revocation causes a
large spike in support tickets.

Short-lived certificates were explored in Towards Short-Lived Certificates
[http://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-
shortliv...](http://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-
shortlived.html)
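The back-dating workaround for inaccurate client clocks can be sketched as
follows; the 24-hour skew allowance is an assumed value, not any particular
CA's policy:

```python
# Sketch of why back-dating helps clients with slow clocks. A certificate
# issued with notBefore = now looks "not yet valid" to a client whose clock
# runs behind; back-dating notBefore by an assumed skew allowance avoids that.
from datetime import datetime, timedelta

SKEW_ALLOWANCE = timedelta(hours=24)  # assumption, not a real CA policy

def issue_validity(now, lifetime):
    """Return (not_before, not_after) with notBefore back-dated."""
    return now - SKEW_ALLOWANCE, now + lifetime

def client_accepts(client_now, not_before, not_after):
    return not_before <= client_now <= not_after

issuer_now = datetime(2014, 4, 19, 12, 0)
nb, na = issue_validity(issuer_now, timedelta(hours=24))

# A client whose clock runs 6 hours slow still accepts the fresh certificate:
slow_client = issuer_now - timedelta(hours=6)
```

The tension for short-lived certificates is that the skew allowance eats into
the lifetime: the shorter the cert, the larger the back-dating is relative to
the validity window.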

------
e12e
I think the author is a little disingenuous with the term "security theatre".
Basically he argues that OCSP doesn't work because hard fail might cause DoS
-- but fails to conclude that without OCSP, SSL/TLS _is useless_. It's a long
argument for saying that the CA system is broken (you can only trust the
whitelist Chrome provides) -- yet the sensible conclusion, that _you cannot
trust any other certificate chains_ without OCSP, is left out.

Without certificates, SSL/TLS falls apart.

Perhaps a better use of CAs would be to always delegate authority to the
domain owner -- we'd only need OCSP for the CAs, and domain owners could issue
hour/day-valid certs via a cert infrastructure. That would push a lot of
complexity down to domain owners, it would probably lead to a lot of errors in
implementation -- but those errors would only affect the domains -- not the
main CA trust chain as such.

I'm not sure if that would be an improvement or not -- but at least you could
know that if a domain was run correctly, a valid certificate could actually be
trusted...

~~~
dvanduzer
The "better use of CAs" you describe is essentially DNSSEC.

~~~
tptacek
Sure, if "making the DNS totally unreliable", "baking 1990s crypto into the
core of the Internet", and "conceding the CA PKI to world governments" is your
idea of "better use of CAs".

~~~
dfc

      > "conceding the CA PKI to world governments" is your idea of "better use of CAs".
    

How much different is this than the current CA situation? Just recently a
subordinate CA of ANSSI (the French Network and Information Security Agency)
issued a wildcard cert that could MITM just about anything.[^1] Firefox's list
of trusted CAs includes:[^2]

    
    
      China Internet Network Information Center (CNNIC)
      Government of France
      Government of Hong Kong (SAR), Hongkong Post
      Government of Japan, Ministry of Internal Affairs and Communications
      Government of Spain, Autoritat de Certificació de la Comunitat Valenciana (ACCV)
      Government of Spain (CAV), Izenpe S.A.
      Government of The Netherlands, PKIoverheid
      Government of Taiwan, Government Root Certification Authority (GRCA)
      Government of Turkey, Kamu Sertifikasyon Merkezi (Kamu SM)    
      Hong Kong
    

Firefox's list of pending CAs includes additional government CAs.[^3] Things
are no different in Redmond. There are at least 56 government CAs in
Microsoft's Root Certificate Program (56 of the certs start with "Government";
there are probably others with less obvious names).[^4]

[^1]: [https://blog.mozilla.org/security/2013/12/09/revoking-
trust-...](https://blog.mozilla.org/security/2013/12/09/revoking-trust-in-one-
anssi-certificate/)

[^2]: [https://www.mozilla.org/en-
US/about/governance/policies/secu...](https://www.mozilla.org/en-
US/about/governance/policies/security-group/certs/included/)

[^3]: [https://www.mozilla.org/en-
US/about/governance/policies/secu...](https://www.mozilla.org/en-
US/about/governance/policies/security-group/certs/pending/)

[^4]:
[https://social.technet.microsoft.com/wiki/contents/articles/...](https://social.technet.microsoft.com/wiki/contents/articles/14217.windows-
and-windows-phone-8-ssl-root-certificate-program-april-2012-e-g.aspx)

~~~
tptacek
It's not different from the current CA situation. That's my point.

~~~
dvanduzer
That was my point, too: the theorized system already exists. I'm definitely
not advocating its use.

Ultimately, the URL bar needs to go away. More fundamentally, the asymmetric
relationship between very large organizations that authenticate their identity
with browser CA certs, and individuals who authenticate their identity with
passwords needs to change.

Cryptographically generated addressing schemes like Telehash can do the
automate-able stuff better than the current CA situation. The problem (and
solution) I'm struggling to articulate involves the fact that granular
authorization systems and trust databases need better UI before we can really
fix this.

I suspect cheaper hardware tokens will play a significant role.

------
pronoiac
Could we use bloom filters on the CRLs, before checking with OCSP? Maybe I
should go crunch the numbers on the viability of that.

~~~
MertsA
I just did, and if you were okay with a 0.001 probability of false positives
you could list all 500,000 (possibly way off) certificates potentially exposed
through Heartbleed in only 877.5KB of space. The current Chrome CRL contains
24,161 serial numbers and takes up 305.3KB of space. While it isn't a perfect
fix for the revocation problem, it would certainly be much better than the
status quo.

One problem might be that the 0.1% of sites hit by a false positive
effectively couldn't use OCSP stapling, but Chrome could first call back to
Google as a CRL proxy, to avoid making an OCSP request when the site stapled a
valid but potentially revoked OCSP response, and then cache Google's answer
for the current version of the CRL. The end result is that the unlucky
false-positive sites don't have tons of unnecessary (unnecessary as far as the
OCSP spec is concerned) OCSP requests going to the CAs, and the only thing
they would notice is that a new visitor takes 100ms longer on the first page
load.

And through the magic of Bloom filters, if you wanted to bump the false
positive rate down to 1 in 10,000, it only bloats the list to 1.14MB.
Furthermore, there are methods to make the Bloom filter scalable, such that a
client doesn't necessarily have to download the whole filter again if a bunch
of elements are added to it, and can instead download just the portion of the
data required for a full update.

The more I think about it the more I wonder why this isn't already in Chrome
in some form or another. The only downside is weird networks where OCSP might
be filtered, but not https, and access to Google is filtered.

Edit: One thing I feel stupid for overlooking is that Bloom filters aren't
cryptographically secure, so an attacker could theoretically find a serial
number for some CA that would cause a site to always be a false positive. But
I don't think any CAs are still giving out serial numbers in a predictable way
after the MD5 debacle, and even if they were, it would seem impractical to me.
The fix would just be to use a SHA-256 hash of the serial instead of the
serial itself.
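For the curious, the size figures above follow from the standard formula for
the optimal number of Bloom filter bits, m = -n * ln(p) / (ln 2)^2; this
sketch just reproduces the arithmetic:

```python
# Reproduce the Bloom filter size estimates above using the standard formula
# for the optimal number of bits given n items and false positive rate p:
#   m = -n * ln(p) / (ln 2)^2
import math

def bloom_size_kb(n_items, false_positive_rate):
    """Optimal Bloom filter size in KB for n items at the given FP rate."""
    bits = -n_items * math.log(false_positive_rate) / (math.log(2) ** 2)
    return bits / 8 / 1024

# 500,000 revoked certs at a 0.001 FP rate comes out around 877.5 KB, and
# at a 0.0001 FP rate around 1.14 MB, matching the figures quoted above.
```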

~~~
pronoiac
Oh, I wasn't thinking of this as a perfect oracle; just a better and smaller
first pass _(edit:)_ for CRLs, not for OCSP.

I got the idea from Squid and the network of caches.[1] That body of
experience may be helpful.

For shrinking the size, RLE might work (most entries would be 0), and rsync
may reduce bandwidth. It looks like the Squid network just used http requests
for refreshes. There's probably a sweet spot for bandwidth, and I'd guess that
90-99% would work fine; you're balancing the size of the continually updated
bloom filter vs. the requests for certificates that match it. I didn't worry
about false positives, because it could just send an OCSP query in that case.

Your numbers for revocations sounded _very_ low, but I just used crlset-
tools[2] and checked, and it's about right. Which is weird, because someone
else[3] mentioned a size of "4.107Kb" at version 1567, but that's somehow
different -- compression, perhaps. I thought I'd heard of CRLs megabytes long,
but Google Chrome seems to heavily curate its CRLs.

I'd hash over _signatures_ instead of the oft-predictable serial numbers, as
you noted.

[1] [http://wiki.squid-cache.org/SquidFaq/CacheDigests](http://wiki.squid-
cache.org/SquidFaq/CacheDigests)

[2] [https://github.com/agl/crlset-tools](https://github.com/agl/crlset-tools)

[3] [https://scotthelme.co.uk/certificate-revocation-google-
chrom...](https://scotthelme.co.uk/certificate-revocation-google-chrome/)

------
eps
Just make sure you still check revocation of code signing certificates.
Otherwise you will end up running malware that is signed with a legit key they
got off my stolen Windows laptop.

------
colons
This argument only holds if the attacker controls every internet connection
you use. If you're on a portable device or you're otherwise connecting through
various networks, only a subset of which are compromised, revocations are
still useful.

~~~
captainmuon
Exactly. If I'm on my trusted network at home and receive a big revocation
list, and a few weeks later go to, say, Egypt, and someone tries to MITM me
there with a stolen certificate, then it would show up as invalid.

------
chacham15
When I hear these arguments, I always look for what is wrong with OCSP Must
Staple. The author says at the bottom that it might be a solution together
with short-lived certs, but I don't see the need for super-short-lived certs,
only short-lived OCSP staples. The author presents this as the problem:

> if the attacker still has control of the site, they can hop from CA to CA
> getting certificates. (And they will have the full OCSP validity period to
> use after each revocation.)

The solution here is to not allow OCSP stapling to cover a newly requested
certificate, and to use a full OCSP check to verify that the cert wasn't
revoked.
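Enforcing "short-lived staples" client-side amounts to rejecting stapled OCSP
responses past a freshness window; a minimal sketch, with the maximum age as
an assumed policy value:

```python
# Sketch of enforcing a freshness window on a stapled OCSP response. An OCSP
# response carries thisUpdate/nextUpdate times; a client wanting short-lived
# staples rejects responses older than some maximum age, even if they are
# still inside their nominal validity window. MAX_STAPLE_AGE is an assumed
# policy value, not from any spec.
from datetime import datetime, timedelta

MAX_STAPLE_AGE = timedelta(hours=24)  # assumption

def staple_is_fresh(this_update, next_update, now):
    """Accept a staple only if inside its window and recently produced."""
    if not (this_update <= now <= next_update):
        return False
    return now - this_update <= MAX_STAPLE_AGE
```

Under such a policy, a staple produced three days ago is rejected even if the
CA gave it a week-long validity window, which is what limits the attacker's
post-revocation window.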

------
wyager
I'm honestly kind of surprised how little action there has been to assist with
a migration away from the CA model. The technology is there, but people just
don't seem interested enough to leverage it.

Systems like Namecoin could serve this purpose marvelously. Powerful devices
have direct access to the entire cryptographically authenticated DNS and
certificate database. Weak devices can specify whom they trust to provide them
with DNS/certificate data, and even those devices get some cryptographic
security guarantees thanks to technologies like SPV.

~~~
sarahj
Why have a single entity at all? Moxie Marlinspike proposed Convergence
([https://www.youtube.com/watch?v=Z7Wl2FW2TcA](https://www.youtube.com/watch?v=Z7Wl2FW2TcA))
as a solution - I think that something like that has far more potential to
gain traction than a Namecoin-based system.

I should be able to choose who I trust, a notary system would allow me to do
just that. No central CA systems.

The biggest concern I can see is identity management, but, as mentioned by
Moxie, most of these CAs don't do anything close to proper identity management
any more - I have a number of certificates, bought from quite a few different
CAs, all made out to my rabbit, at no fixed address.

Notaries can, of course, do additional verification - they could even
advertise this as a premium.

I don't see why this can't be extended to DNS lookups either. I trust X
notaries and pin the results I get; I can choose to trust a majority, or be
hyper-paranoid and require everyone to agree. No need to run a power-hungry
blockchain, no single point of technology failure.

Technically, all of that is feasible today. And I imagine we will see a number
of different technologies combined to form a proper, decentralised, system.

~~~
vxNsr
The project seems to have lost support; the last GitHub commit was over 2
years ago.

Do you know if there was a specific reason, or were people just not
interested / none of the browsers jumped on board?

~~~
higherpurpose
From Moxie:

 _" Convergence is blocking on TACK, which is blocking on browser vendors."_

[https://twitter.com/moxie/status/451020203099299840](https://twitter.com/moxie/status/451020203099299840)

~~~
ivanr
There should be a clear statement about the status of Convergence on the web
site. IIRC, the Firefox extension has been broken for more than a year now.
Why? If Mozilla broke their APIs and made it impossible for the extension to
work, then we should know about that. Otherwise, what's the excuse for the
extension being broken for so long?

Convergence had the momentum, and there was a small but vocal group of people
willing to support it. But, due to project mismanagement and lack of
communication, that momentum has been lost.

~~~
0x006A
There are some more active forks, like [https://github.com/mk-
fg/convergence/](https://github.com/mk-fg/convergence/), but they too seem to
not really work in current versions of Firefox.

------
mobiplayer
I've wondered many times why OCSP isn't distributed the way DNS is. When we
talk about websites, surely there's no more than one certificate per hostname
(or fewer, e.g. with wildcards). I don't think we're talking about something
impossible, or not feasible with our current technology and computing power.

Also, certificate "whitelisting" could be part of the DNS protocol itself
(return the IP address of the requested hostname along with the hash of its
current, valid certificate).
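What this describes closely resembles DANE's TLSA records, where DNS can carry
a SHA-256 digest of the site's certificate; a sketch of computing that digest
(the cert bytes in any real use would be the DER-encoded certificate):

```python
# "Hash of the current, valid certificate in DNS" resembles DANE TLSA
# records: with selector 0 (full certificate) and matching type 1 (SHA-256),
# the record's association data is the SHA-256 digest of the DER-encoded
# certificate. The input bytes below are a stand-in, not a real certificate.
import hashlib

def tlsa_matching_data(cert_der: bytes) -> str:
    """SHA-256 of the full certificate (TLSA matching type 1), hex-encoded."""
    return hashlib.sha256(cert_der).hexdigest()
```

A resolver that validated such a record (signed via DNSSEC, in DANE's case)
could compare this digest against the certificate presented in the TLS
handshake.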

~~~
mobiplayer
Just to clarify: OCSP is distributed, but I can't ask my local ISP OCSP server
about your certificates. I have to ask your OCSP server about your
certificates.

------
phunehehe0
It seems the only problem with hard-fail is the risk of DoS attacks by
targeting OCSP servers. However, if you include OCSP stapling you won't be
affected. So a solution may be to encourage all users to enable revocation
checking with hard-fail, and all servers to support OCSP stapling.

------
Khaine
It sounds like the internet is broken. Without CRL/OCSP we cannot truly trust
that we are communicating securely.

Something has to give. We need to abolish SSL/TLS and migrate to something
that isn't broken by design.

~~~
lazyjones
> _the internet is broken_

It's not the Internet, just the CA system. There are better systems for
handling trust out there, for example, people have been signing each other's
PGP keys at key signing parties for decades.

~~~
einhverfr
> It's not the Internet, just the CA system.

Ok, so it is just the portion of the internet that involves purchasing things
with credit cards and requiring passwords to access sites. The rest of the
internet is just fine.

Great. I thought for a moment that the commercial basis of the internet might
be in danger. Now to determine what percentage of the internet is not
dependent on the CA system.....

------
yuhong
I am thinking that an HSTS option enabling hard-fail OCSP plus OCSP stapling
is probably a good idea, though probably less secure than putting it in the
certificate.

------
Splendor
Let me get this straight. Sites across the internet are (hopefully) revoking
their certificates and issuing new ones to address Heartbleed, but Mr. Langley
is suggesting that we shouldn't check for revoked certs because it might not
do anything and it's slow?

Sorry, but after the last few weeks I'll happily accept a little slowness for
the security revocation checking provides in the cases where it does work,
even if it's not 100% of the cases.

~~~
ars
I guess you didn't read the article? He's saying there _are_ no cases where it
works. Making it completely pointless.

~~~
Splendor
I did read the article.

> "In order to end on a positive note, I'll mention a case where online
> revocation checking does work..."

~~~
ars
That's even more proof you didn't read it. How does enabling it in chrome make
any difference to code signing?

~~~
Splendor
Does enabling revocation checking make me less safe?

~~~
tptacek
Yes! It involves you reporting all the sites you visit to a CA!

~~~
Splendor
I guess I knew that but hadn't grasped the security problem this presents.
You've changed my mind. Thank you.

------
anaphor
Can't the replay attack mentioned in the article be mitigated by using nonces?
Why doesn't anyone do this? I'm genuinely confused by this.

------
fragmede
The article gives two reasons why 'soft-fail' is required: captive portals,
and OCSP server failure.

To deal with captive portals: have an SSL-signed
'subdomain.google.com/you_are_on_the_internet' site/page that Google Chrome
can use to check whether it's behind a captive portal or not. If it's captive,
enable soft-fail. If internet access is available, set to hard-fail.

Websites these days are complex, with many (digital) moving parts - the
database server(s), the static image server(s), dynamic response server(s),
gateway server, probably a memcache server or something similar. If any one of
those goes down, the site is unusable. Why then, should the OCSP server going
down be considered any differently? Is a black-hat rented bot-net running a
DDoS going to care if it's the main gateway server or the OCSP server?

But let's say we do consider unreachable OCSP servers to be a client-side
issue. Google could query and cache the OCSP server status, either with OCSP
stapling or via some side channel built into Google Chrome.

The combination of both would allow hard-fail to be an option in Google
Chrome.
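The captive-portal check sketched above is close to Chrome's existing
connectivity probe, which fetches a URL expected to return HTTP 204 with an
empty body; mapping the probe result to an OCSP failure policy, as proposed
here, might look like this (the mapping itself is the proposal, not existing
Chrome behaviour):

```python
# Sketch of the captive-portal probe described above. A client fetches a
# known URL expected to return HTTP 204 with an empty body (as Chrome's
# connectivity check does); any other answer (a login page, a redirect)
# suggests a captive portal is intercepting traffic. Mapping the result to
# an OCSP failure policy is this comment's proposal, not existing behaviour.

def ocsp_policy_from_probe(status_code: int, body: bytes) -> str:
    """Return 'hard-fail' on the open internet, 'soft-fail' behind a portal."""
    if status_code == 204 and not body:
        return "hard-fail"   # probe reached the real server: internet is up
    return "soft-fail"       # login page or redirect: likely a captive portal
```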

------
papaf
There's one thing that I do not understand: why not download the full
revocation list?

~~~
pencilcode
Potentially too much data to download: imagine all revoked certificates across
the entire internet.

------
dwightgunning
Why not hard-fail by default and give the user the option to ignore/override
it? Similar to the way other certificate warnings are shown to the end-user.

~~~
x0x0
answered in adam's blog; see the paragraph beginning with

    
    
       Everyone does soft-fail

~~~
dwightgunning
I guess that's true when a hard fail causes the connection to be refused
immediately by the client with no user input. In that case, a DoS on the OCSP
servers breaks things badly.

However, what I meant to suggest is a third option: something like hard-fail
with a latch. The client should fail, but give the user the choice to proceed.

This would seem more desirable than the current soft-fail implementations,
which seem to be entirely silent to the end user.

~~~
richardwhiuk
Users make terrible security decisions. ~95% of users click through
certificate failure pages, and ~99% of users don't notice if a website
transparently downgrades to HTTP. Delegating a choice that would be borderline
impossible to explain to the user is another way of saying 'always say yes to
proceed'.

------
rdl
Why are we not using OCSP Must Staple right now?

------
akerl_
The author appears to entirely ignore attack vectors where the malicious party
can record but not modify/block traffic.

Edit: I get it, I missed that for sites where the key has been changed the
stolen key no longer allows such eavesdropping. Thank you to yuhong for
helping point this out rather than just laughing at my ignorance while pushing
me down the page.

~~~
yuhong
Then a SSL MITM attack would not be possible at all.

------
dvanduzer
I'm having a lot of trouble getting past: "Certificates bind a public key and
an identity (commonly a DNS name) together."

X.509 certificates bind a public key and a human-recognizable string (a
"common name") together to _create_ a verifiable digital identity.
Oversimplified, X.509 is about solving the "I'm Spartacus" problem.

CRLs solve the "He was Spartacus" problem. I agree with the broad conclusion
that CRLs aren't effective for _human_ trust, but they are perfectly
reasonable for _machine_ trust.

Why didn't the author mention Kerberos? The default lifetime of a Kerberos
ticket is designed around humans: roughly the length of a work shift in front
of a computer terminal.

final edit: meta-moderation is hard

~~~
lnanek2
For HTTPS to a web site, the common name is the site domain name. We're not
talking about anything else here.

~~~
dvanduzer
Aha. That perhaps explains all the downvoting. But this was my objection:

Name != Identity

It's nearly impossible to have a meaningful English conversation about these
problems without getting that straight.

