
Revocation still doesn't work - wglb
https://www.imperialviolet.org/2014/04/29/revocationagain.html?radio
======
gojomo
It's the fatalism about fixing this that is most discouraging in Langley's
writings.

Why shouldn't someone who cares very much about security be able to tell their
browser to get the full 30MB it would require today, or the 300MB Langley
projects it will take in 5 years? That's not that much data!

(Google's hand-curated tiny CRLSet lifeboat is currently only ~308KB of
blacklisted certificate data.)

Why can't the organizations with multi-hundred-million-dollar budgets for web
projects design the privacy-preserving/space-saving/probabilistic systems that
would make this work with much smaller overhead?

Once the original site, the CA, and a major browser vendor all know, via
public info, that a certificate is compromised, why should any browser user
have to wait days/weeks/months/indefinitely, while showing the false "lock
icon", before warning users?

~~~
thirsteh
That's the thing I don't understand about this whole discussion: The
SafeBrowsing blacklist bloom filter that Google itself made and uses is
significantly smaller, is updated in real time, and doesn't have a big runtime
overhead. Why not just have a bloom filter for revocations, with a Google
server doing the false-positive elimination?

(Yes, there is a little bit of a chicken-and-egg problem when it comes to the
Google server, but that's a much smaller problem than being unable to check if
most certificates in the world are revoked or not.)

Edit: Looks like Adam (of course) already considered this:
[https://www.imperialviolet.org/2011/04/29/filters.html](https://www.imperialviolet.org/2011/04/29/filters.html)
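The scheme thirsteh describes is just a Bloom filter with a server-side second pass. A minimal sketch of the idea (the filter parameters, the cert-ID format, and `confirm_with_server` are all invented for illustration; this is not Chrome's or SafeBrowsing's actual format):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter (illustrative parameters only)."""
    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # False -> definitely not in the set; True -> *maybe* in the set.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def confirm_with_server(cert_id: bytes) -> bool:
    # Stand-in for the definitive online lookup that eliminates
    # false positives (hypothetical endpoint, not a real API).
    return True

revoked = BloomFilter()
revoked.add(b"ExampleCA:serial=0451")

def is_revoked(cert_id: bytes) -> bool:
    if not revoked.might_contain(cert_id):
        return False                      # fast local path for ~all certs
    return confirm_with_server(cert_id)   # rare round-trip to be sure
```

The appeal is that the common case (an unrevoked cert) never touches the network, while the filter stays small enough to push to every client; the cost is the chicken-and-egg dependency on the confirmation server mentioned above.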

------
dsl
Do people still take GRC seriously?

Back in 2001-2002 people dedicated a lot of time to debunking his every
statement, but he just sort of faded into the nether after TechTV blew up. His
charlatan listing is still maintained, however:
[http://attrition.org/errata/charlatan/steve_gibson/](http://attrition.org/errata/charlatan/steve_gibson/)

------
joshpeek
It seems like Langley and Gibson both agree that OCSP Must-Staple would
resolve all this.

"If we want a scalable solution to the revocation problem then it's probably
going to come in the form of short-lived certificates or something like OCSP
Must Staple." \-
[https://www.imperialviolet.org/2014/04/19/revchecking.html](https://www.imperialviolet.org/2014/04/19/revchecking.html)

"The case for “OCSP Must-Staple”" \-
[https://www.grc.com/revocation/commentary.htm](https://www.grc.com/revocation/commentary.htm)
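The Must-Staple rule both of them point to reduces to a small policy check at handshake time. A sketch (the data model here is invented for illustration; real code would parse the RFC 7633 TLS Feature extension out of the X.509 certificate):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Certificate:
    must_staple: bool               # cert carries the status_request TLS feature

@dataclass
class Handshake:
    cert: Certificate
    stapled_ocsp: Optional[bytes]   # OCSP response stapled by the server, if any

def ocsp_says_good(resp: bytes) -> bool:
    # Placeholder: real code verifies the CA's signature, validity
    # window, and certificate status in the OCSP response.
    return resp == b"good"

def accept(hs: Handshake) -> bool:
    """The hard-fail rule that Must-Staple makes safe to enforce."""
    if hs.cert.must_staple and hs.stapled_ocsp is None:
        # The CA promised a staple would always be present, so a missing
        # staple is an attack indicator, not a network hiccup: hard fail.
        return False
    if hs.stapled_ocsp is not None:
        return ocsp_says_good(hs.stapled_ocsp)
    return True  # no staple required or provided: fall back to other checks
```

The key property: because the site itself fetches and staples the OCSP response, there is no third-party lookup for an attacker to block, so the browser can finally afford to treat "no revocation info" as a failure.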

~~~
yuhong
Personally, I am for a hard-fail OCSP option, signaled via HSTS or the
certificate itself, combined with OCSP stapling. Default to soft fail with a
warning message for now. Remember that captive portals can use OCSP stapling
too.

------
danielweber
Best summary I've seen:

"If the bad guy can MITM your browser going to www.bank.com, they can break
your browser going to the online revocation list."

(Implicit: the browser fails open, connecting to the website anyway if it
can't reach the revocation list.)
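That summary can be made concrete in a few lines. A sketch of the soft-fail logic (`fetch_ocsp` is a hypothetical network call, not any browser's real API):

```python
def check_revocation_soft_fail(fetch_ocsp, cert_id: str) -> bool:
    """Soft fail: treat an unreachable OCSP responder as 'not revoked'.
    Returns True if the browser should proceed with the connection."""
    try:
        status = fetch_ocsp(cert_id)   # hypothetical network call
    except ConnectionError:
        return True                    # fail open: blocking this call defeats the check
    return status != "revoked"

# A MITM who can intercept www.bank.com can also just drop the OCSP request:
def blocked_by_attacker(cert_id: str) -> str:
    raise ConnectionError("OCSP responder unreachable")
```

Since the attacker positioned to present a revoked certificate is also positioned to make `fetch_ocsp` raise, the soft-fail check only ever catches attackers who can't block a single extra request, which is roughly none of them.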

~~~
gojomo
If the revocation data is available from "everywhere", and the browser sees
that it hasn't been able to get it recently from "anywhere", that's a suitably
alarming situation that the user should get a blocking warning, as with any
other self-signed/untrusted-CA certificate.

In the usual browsing situation, it also doesn't have to be a blocking
verification before displaying the first bytes from a site (requiring
subsecond checking). It could load the page but continue verification in the
background, only becoming a blocking warning some number of seconds later,
when the user enters form data or attempts other navigation.

In fact, even minutes into a session, the information that either – (a) it
took a while but we now see that this certificate is definitely bad; or (b)
you are currently so cut-off from the real "Internet" that suspicion towards
all sites is justified – is useful.

That is, even a "delayed hard fail" would still be useful.

The alternative is that people chat, transact, etc. indefinitely at the mercy
of attackers, because the browser isn't even trying to acquire good data via
censor-resistant channels.
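The "delayed hard fail" idea is a small state machine. A sketch (the 30-second grace period and all names are made up for illustration, not any browser's actual behavior):

```python
import time

class DelayedHardFail:
    """Render the page immediately, but escalate to a blocking warning
    on sensitive actions if no revocation data has arrived in time."""
    GRACE = 30.0  # seconds to keep trusting the lock icon without data

    def __init__(self, now=time.monotonic):
        self.now = now
        self.deadline = now() + self.GRACE
        self.status = "unverified"     # -> "good" | "revoked"

    def on_revocation_data(self, revoked: bool):
        # Background verification finished, however long it took.
        self.status = "revoked" if revoked else "good"

    def allow_sensitive_action(self) -> bool:
        if self.status == "good":
            return True
        if self.status == "revoked":
            return False               # (a) definitely bad: block now
        # (b) still cut off from everywhere: block once the grace expires
        return self.now() < self.deadline
```

Passive page display never blocks on the check; only form submission or further navigation does, and only after the browser has demonstrably failed to reach any revocation source for the whole grace period.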

~~~
danielweber
You have to keep the online revocation service up all the time. People will
DDoS it just for the lulz, since breaking it breaks everyone's browser.

While sitting in a meeting I realized there is one situation where online
revocation helps: someone steals bank.com's DNS record. They can't necessarily
MITM anyone with that.

But, TACK is probably a better countermeasure there.

~~~
gojomo
DDoSing DNS would break everyone's browser for lulz too, but somehow the net
manages to chug along. And if/when it happens, at least people know they're
under attack and direct attention to fixing things.

A service with a clear single point to DDoS is obviously the wrong choice.

What if services all over the net could tell you the latest (<5 minutes old)
root/summary-hash of the shared dataset? And, there are many places to either
update yourself to that version, or receive a trustworthy proof a certificate
is/isn't in that dataset?

If they're _all_ blocked, an alarm _should_ go off, letting the user know it's
dangerous to proceed. The failure shouldn't be secret, with the confidence-
inspiring lock icon still appearing, for indefinitely long periods.
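The root/summary-hash scheme described above resembles a Merkle tree: any mirror can serve a compact inclusion proof, and only the small signed root hash needs wide, frequent distribution. A toy verifier, assuming a simple (sibling, sibling-is-left) audit-path encoding invented for this sketch (real systems like Certificate Transparency use a more elaborate one):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Hash helper; real trees domain-separate leaves from interior nodes."""
    return hashlib.sha256(b"".join(parts)).digest()

def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
    """Walk the audit path from leaf to root, combining with each
    sibling on the correct side, and compare against the trusted root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling, node) if sibling_is_left else h(node, sibling)
    return node == root

# A two-leaf tree: anyone holding only `root` can check membership.
a, b = h(b"cert-1"), h(b"cert-2")
root = h(a, b)
```

Because the proof is checked locally against a root obtained out of band, the mirror serving it doesn't need to be trusted, which is exactly the property that removes the single point to DDoS.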

------
anaphor
This just reads as a rationalization for why Chrome is the best browser. Nobody
has come up with a perfect solution to the revocation problem, but if you want
the ability to actually check with the OCSP responder and fail if it cannot be
reached, then you should be using Firefox, not Chrome (also if you don't want
Google controlling every single aspect of what you do).

------
datapolitical
This is some of the most awful writing I've seen on the HN front page in a
while. I'm sure if you're well-versed in the issues surrounding revocation the
discussion makes a great deal of sense.

For the rest of us, it's a forest of technical jargon, which is stunning to
me given that the concepts at play here aren't insanely complicated.

>So I think the claim is that doing blocking OCSP lookups is a good idea
because, if you use CAPI on Windows, then you might cache 50 OCSP responses
for a given CA certificate. Then you'll download and cache a CRL for a while
and then, depending on whether the CA splits their CRLs, you might have some
revocations cached for a site that you visit.

~~~
nkurz
_I'm sure if you're well-versed in the issues surrounding revocation the
discussion makes a great deal of sense._

I think you've summarized the traditional appeal of Hacker News: items for
experts, written by experts, discussed by experts. Even if it's not a field
that I understand (what better way to learn?) I'm always glad to see these
articles on the front page, for it means the real HN is still alive, deep
within.

