Around 293 intermediate CAs in violation of CA/Browser guidelines (mail-archive.com)
254 points by mbrinkers on July 6, 2020 | hide | past | favorite | 129 comments



The link is very technical, resulting in some confusion as to why this is such a big problem. The comments on HN reflect that. Here’s my understanding:

The problem isn’t that a sub-CA can revoke any certificate from any other sub-CA under the same CA. That would be bad, but, at worst, it’s a denial of service.

Rather, this is a problem because any sub-CA can effectively reverse the revocation of any other sub-CA, or of the CA itself. That’s immensely problematic. Suddenly, the CA has no reliable way to fully revoke certificates. Revocation is already somewhat broken as it is, but this gives a lot of entities the ability to deliberately interfere with revocation in ways that they shouldn’t be able to.

The author goes on to explain that revocation of the affected certificates is insufficient, because they could be used to effectively reverse their own revocation at any point in the future. Instead, it must be proven that all copies of the keys have been destroyed. That’s quite an undertaking.
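The mechanics of that un-revocation can be sketched in a few lines of toy Python. This is an illustrative model only (made-up names, no real cryptography): it just shows why a validator that accepts OCSP responses from any same-issuer certificate carrying the OCSP-signing EKU can be talked out of a revocation.

```python
# Toy model of the flaw -- not real PKI code. A spec-following validator
# accepts a "good" OCSP response signed by any certificate that carries
# the id-kp-OCSPSigning EKU and chains to the same issuer as the subject,
# which is exactly what a mis-issued sub-CA certificate can produce.

OCSP_SIGNING_EKU = "1.3.6.1.5.5.7.3.9"  # id-kp-OCSPSigning

class Cert:
    def __init__(self, name, issuer, ekus=()):
        self.name, self.issuer, self.ekus = name, issuer, set(ekus)

def validator_accepts(responder, subject):
    """Accept an OCSP response if the responder holds the OCSP-signing
    EKU and was issued by the same CA as the subject certificate."""
    return OCSP_SIGNING_EKU in responder.ekus and responder.issuer == subject.issuer

root = "Example Root CA"
revoked_sub = Cert("Sub-CA 1", issuer=root)                           # the CA revoked this one
rogue_sub   = Cert("Sub-CA 2", issuer=root, ekus=[OCSP_SIGNING_EKU])  # mis-issued with the EKU

# Sub-CA 2 can vouch "good" for the revoked Sub-CA 1 -- and for itself:
print(validator_accepts(rogue_sub, revoked_sub))  # True
print(validator_accepts(rogue_sub, rogue_sub))    # True
```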

What the author fails to mention is that revocation is already pretty broken. Most major browsers have their own built-in CRL replacements that contain the most important revocations they need to know about. Some browsers, like Firefox, may make additional efforts to ensure that any given certificate hasn’t been revoked; others, like Chrome, don’t. If you’ve ever visited a site that gives you a certificate error in Firefox but not Chrome, that’s likely why.

In the case of browsers, it should be possible for each browser to forcefully revoke affected certificates, but revoking a sub-CA certificate is quite disruptive, so I’d be surprised if that happens within 7 days. The catch is that this technique is really only effective in modern, up-to-date browsers.

In any case, the title is misleading. I don’t see where the author guarantees that this will happen within 7 days. The author claims it should happen within 7 days, but considering that the damage is already done and cannot be fully reversed by revocation, I find it hard to believe enforcing that deadline makes sense here.


There's a reply from Ben Wilson (Mozilla) further down in the thread / the next day stating that Firefox as a client is not affected by the security issue (OCSP responses signed by these intermediate CAs would be rejected), and that Mozilla is not planning on enforcing the 7 day deadline for revocation of these intermediate certificates. The CAs will still need to replace these intermediate CAs, but with a more gradual timeline.

https://www.mail-archive.com/dev-security-policy@lists.mozil...

> We are concerned that revoking these impacted intermediate certificates within 7 days could cause more damage to the ecosystem than is warranted for this particular problem. Therefore, Mozilla does not plan to hold CAs to the BR requirement to revoke these certificates within 7 days. However, an additional Incident Report for delayed revocation will still be required, as per our documented process[2]. We want to work with CAs to identify a path forward, which includes determining a reasonable timeline and approach to replacing the certificates that incorrectly have the id-kp-OCSPSigning EKU (and performing key destruction for them).


Of course, the point is that, by rules the CAs themselves agreed to in their own BRs, they're required to revoke the certificates within 7 days. It's a SHALL requirement.

The CAs can't have it both ways: a BR balloting process that they rely on for moral authority when disputing that the majority of deployed browsers have added new security requirements (like shorter-lived certificates), and BRs that they ignore when they screw up.


Can't they have it however they want as long as the vendors that matter go along with it? I feel like Ryan is sort of pointing that out, that an excuse of "oh well we needed to support vendor X" gives them carte blanche. It's not like customers are going to knock down their doors for not following BRs. At the end of the day, the biggest vendors are where their bread gets buttered.

If Mozilla isn't the majority browser vendor, who cares what they insist on? And if all the CAs band together and say, sorry losers, we're gonna keep doing things our way, what are the browsers gonna do? Cut all their users off from the internet "because principles"? Apple is playing a dangerous game that I don't think will work out in different circumstances. They can't hide behind "protecting users" if their users end up unable to access the internet securely.

We got into this mess because we wanted organizational independence and distributed trust, without considering what internal conflicts would mean to the end users. I'm going to call it and say that within a decade, you'll have to pick which CA you want to trust at browser install time (though you can guess which CA will be the default on which devices).


This is a weird comment. Leave aside that anything the Mozilla root program does will, if history is a guide, likely be backed immediately by Google. Minority browser or not, if your CA isn't trusted by Mozilla, you will lose all your customers. Nobody is going to do business with the CA that offers "the majority of browser users" when the rest of the CAs offer all the browser users. The CAs have almost no cards to play here.


> And if all the CAs band together and say, sorry losers, we're gonna keep doing things our way, what are the browsers gonna do?

For one thing you should consider that several of the companies that make browsers also operate a Certificate Authority. For example, Google's GTS is represented in that m.d.s.policy thread by Ryan Hurst (whereas Ryan Sleevi is there mostly choosing not to put on his Chromium Web PKI hat). Microsoft likewise controls trusted roots. Apple does operate its own PKI, though it is not presently broadly trusted; if it felt the need, I'm sure it has people.

Also the most popular (for this purpose) Certificate Authority is ISRG's Let's Encrypt and they've got no reason not to co-operate. The Web PKI is all they do anyway.

This is often portrayed as though browsers are obliged to either distrust everything instantly or allow CAs to do whatever they like, but neither of those is realistic. The major trust stores have all already imposed constraints short of distrust.

You bring up Apple as an example. Unless I've missed it somewhere, Apple never announced their 398-day limit as a matter of issuance policy; it's simply a fact that Safari and the Mac operating system won't trust new certificates with longer lifespans after a set date. So if one or more CAs decided not to co-operate, nothing happens at first. Nothing whatsoever.

Then, gradually, a few subscribers buy (or renew) 2 year certificates, and these new certificates don't work in Safari. Some of these subscribers will call customer services at the CA where they purchased the certificate. Why doesn't it work in Safari? How can they fix this?

What does the CA say? "We intentionally sold you a product that won't work. Ha ha ha, it's a funny joke, we have your money and you've got useless garbage" ? Maybe they instead try to blame Apple. Apple will point out that the CA knew this wouldn't work and suggest the subscriber seek a refund.

The subscriber demands a refund. The CA is now losing money and it is seeing negative reputational impact. Somebody threatens to sue. It is not a good day to be the CA.

At the subscriber's offices, an IT person has a brainwave and switches to a provider that isn't deliberately disobeying. The web site is back working. Champagne all round.

To achieve the desired robustness and reliability, the Web PKI is structured in a way that makes any individual Certificate Authority expendable. As a subscriber, this means you should plan for your CA going away with at most a few days' notice. Most people won't do that. Too bad; individually, you're expendable too.


Apple did actually make the limit a matter of their policy. Start date is September 1, technical enforcement comes 'later', but the requirement is going to be part of their policy. https://lists.cabforum.org/pipermail/public/2020-April/01494... Plus, it's not just Safari - it's the TLS stack in macOS/iOS etc. - Apple's KB article mentions nothing of Safari, but just like their enforcement of CT, it's basically OS-wide.


That’s about what I expected. It doesn’t make sense for any parties involved to enforce the 7 day deadline, including users. This is a reasonable solution.


It's just slightly disappointing that the CAs and browsers have agreed to operate under a policy that requires the CAs to revoke mis-issued subordinate CAs within 7 days (CA/B Forum BR 4.9.1.2), but that's not actually feasible in practice - at least not at this scale. Certificate issuance would need to be automated enough that it would be feasible to actually replace all certificates issued by the affected CAs within a 7-day period.

Let's Encrypt represents the state of the art in terms of certificate automation, but last time they had an (impending) mass-revocation event, it turned out that even the ACME protocol / client implementations didn't really have any concept of an automated "this certificate is about to be revoked, please re-issue" process. As a result of that event, certbot at least got support for triggering renewal after revocation: https://github.com/certbot/certbot/issues/1028#issuecomment-... -> https://github.com/certbot/certbot/pull/7829


I think it's not so much that it isn't feasible as that it doesn't serve any useful purpose, at least to Mozilla.

The Firefox out-of-band revocation mechanism (OneCRL) certainly could revoke all these intermediates but Firefox isn't vulnerable to a problem here, so there's no obvious upside to doing that and it's disruptive.


Maybe I'm a bit uninformed, but I feel that the whole certification system is quite convoluted and confusing. Couldn't there be a simpler way of solving the basic trust problem a user has regarding the authenticity of a website?


There are technical issues involved and the system has evolved over the years, but part of the issue is that certificate companies make a vast amount of money doing very little and can spend a bunch of money resisting any change to the system that would cut them out of the loop.

IMO, DANE might make sense if DNSSEC wasn't such a mess, although it is a very similar group of parasitic companies involved in DNS. In general, alternative name systems (such as the GNU Name System) could also potentially replace the certificate system and many name and certificate issues are related. Many of the hardest technical issues around certificates relate to revocation and the demonstrated inability of almost anyone to secure anything.

Other options that make a lot of sense in many ways would have governments or banks involved in identity in a direct way. This is resisted for a variety of reasons.


> if DNSSEC wasn't such a mess

I hear this a lot, but in my experience (managing c. 1000 DNS zones all with DNSSEC enabled, using a strictly DNSSEC-validating resolver for >5 years, and having built DNSSEC infrastructure for DNS hosting providers), it is both reasonably well designed and generally quite well implemented. What is the mess that you perceive?


> What is the mess that you perceive?

Not GP, but the mess is that near-zero clients and few recursive resolvers actually do DNSSEC validation in practice, even after 20 years of deployment.

It’s like IPv6.

Also, I believe most active ZSKs are actually held and managed by the larger DNS providers on their customers’ behalf. This leads to very little assurance improvement over unsigned records, since credentials to update a web form are all that is needed to “sign” records. There are no real key management requirements for ZSKs as there are with browser CAs.

The only additional assurance provided by a DNSSEC response is that there was likely no MITM between the authoritative server and validating resolver. Which is something, but that problem is more easily and completely solved by DoH which adds privacy as well as authenticity.


The bigger problem is that none of the browsers support it, or intend to support it. DoH could change that (ironically, the DNSSEC crowd opposes the one DNS change that could offer any chance of viability for DNSSEC), but it's unlikely to.


Well, if the system resolver is doing DNSSEC validation, the browser doesn't need to support it. That's how I've had my system set up for years. Unfortunately, macOS and Windows (other than Server) still don't have any DNSSEC support as far as I'm aware. Support is built in to systemd-resolved now though, which I believe is the default resolver on various Linuxes, and unbound is of course available in all major distros.
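For example, with systemd-resolved, strict validation is a one-line setting (option names per resolved.conf's documented interface; shown here as a sketch):

```ini
# /etc/systemd/resolved.conf
[Resolve]
# "yes" enforces strict DNSSEC validation (lookups fail on bogus
# signatures); "allow-downgrade" validates only opportunistically.
DNSSEC=yes
```

After editing, restart the resolver with `systemctl restart systemd-resolved`.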


If the goal is to have DNSSEC replace the CAs, then you need DANE, and for DANE to work, browsers have to support it.


That can indeed be a goal, but DNSSEC is useful even without DANE.


(a) No it can't be, and (b) that's not what this thread was about.


Interestingly the GNU Name System IETF draft was just officially filed yesterday: https://tools.ietf.org/id/draft-schanzen-gns-01.html


I'm afraid the solution cannot be simple, not only because the trust problem is not so simple, but because the nature of the problem is political, not technical. And so the solution must be political (CA/B forum, ballots, convoluted documents, processes for misbehaving entities).


>The author goes on to explain that revocation of the affected certificates is insufficient, because they could be used to effectively reverse their own revocation at any point in the future. Instead, it must be proven that all copies of the keys have been destroyed. That’s quite an undertaking.

How would this be verified? Presumably the keys are stored on HSMs, but I'm not sure how you can prove that you didn't make a backup of the key.


It is largely impossible to fully prove. CAs are supposed to keep detailed records of any issuing keys, and what was called for was specifically "witnessed Key Destruction Reports", which involve third-party independent confirmation of the destruction of documented keys.

In the event that a key with a Key Destruction Report shows up again, the responsible party for that key will have shown unacceptable negligence and will potentially be subject to the exclusion of their keys as a valid certificate signer.

A lot of these companies' core businesses rely on remaining in a position to sign certificates, so it is in their best interest to protect that privilege by following the documentation requirements and properly destroying their keys. It's effectively a pretty good stick.


It’s impossible to prevent a truly dedicated malicious actor from doing so, but enforcement through both policy and independent auditors — and quality of response to security incidents — provide several layers of defenses against this scenario. (As with all things, a perfect defense is ultimately impossible, but they put in a lot of effort to get a lot of nines of certainty.)


In principle, you would hope a CA would be able to precisely account for the number of backup copies of their private key.

In practice, of course, that doesn't mean every one of them will have done. There's 293 of them, after all.


The point is that you can't verify it. Sometimes HN humor is too dry for its own good.


What are the state-actor-level attack implications of this? Before this was revealed, a party that compromised (or was able to be issued) a certificate for a website could be reasonably likely to be detected and have that certificate revoked if they used it for large-scale MITM or redirection. But now, if the actor were to also compromise any one of these sub-CAs before the key was deleted, could they permanently be in a position to be able to unilaterally reverse any such revocations, effectively giving them carte blanche to begin a campaign of compromising websites in earnest with the knowledge that their attacks would now be "sticky?"

What would the recourse be here if one of those keys were to be compromised, or even if there was reason to believe one might have been? Would the entire CA-level trust chain need to be distrusted, requiring re-issuance of all certificates on that chain?


Yes and almost-yes - you don't need to destroy the root CA itself, but you need to physically destroy the keys for any affected subordinate CAs. (Revoking them isn't sufficient, because they can just un-revoke themselves.) From the post: "... the degree of this issue likely also justifies requiring witnessed Key Destruction Reports, in order to preserve the integrity of the issuer of these certificates (which may include the CA's root)."

The "good" news is that most people haven't really been treating revocation (and OCSP) as a reliable mechanism. The major browsers all have out-of-band mechanisms for revoking known-malicious certs via something equivalent to the software update channel, which bypasses reliance on the CA infrastructure. If there's a large-scale attack, the relevant cert/CA will probably be disabled through that mechanism. And most of the smaller clients don't even bother with revocation checking at all (e.g., I'm pretty sure that on an average Linux system, things like curl or "import requests" do no revocation checking) so there's no point in undoing revocation if you're trying to attack one of those systems.
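The shape of those out-of-band mechanisms (Chrome's CRLSets, Firefox's OneCRL) can be sketched as a locally shipped set lookup; the entries below are made up:

```python
# Sketch of a browser-shipped revocation blocklist in the spirit of
# CRLSets / OneCRL: a small set of (issuer, serial) pairs delivered via
# the software-update channel and consulted locally, with no network
# round-trip to the CA at page-load time. Entries are illustrative.
BLOCKLIST = {
    ("Example Root CA", "04:2b:7c"),   # a known-bad intermediate
    ("Example Sub-CA", "1f:00:9a"),    # a compromised leaf
}

def is_blocked(issuer: str, serial: str) -> bool:
    return (issuer, serial) in BLOCKLIST

print(is_blocked("Example Root CA", "04:2b:7c"))  # True
print(is_blocked("Untouched CA", "00:01"))        # False
```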


> e.g., I'm pretty sure that on an average Linux system, things like curl or "import requests" do no revocation checking

This is correct. Even where there's some provision for checking, it's usually a mechanism where you can supply a CRL (Certificate Revocation List, a signed and dated document that says which certificates were revoked). CRLs are practical for a small private CA but they make no sense at scale. Let's Encrypt doesn't even have CRLs because they'd be enormous.
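A back-of-the-envelope calculation shows why full CRLs don't scale; every number below is an assumption for illustration, not Let's Encrypt's actual data:

```python
# Rough CRL sizing under assumed numbers: ~120M active certificates
# (the approximate 2020 order of magnitude) and ~40 bytes per CRL entry
# (serial number, revocation date, optional reason code).
active_certs = 120_000_000
revoked_fraction = 0.10        # assume a mass-revocation event hits 10%
bytes_per_entry = 40           # assumption; real entries vary in size
crl_bytes = int(active_certs * revoked_fraction * bytes_per_entry)
print(crl_bytes // 1_000_000, "MB")  # one single downloadable file
```

Even under these modest assumptions, that is a ~480 MB file every client would have to fetch and keep fresh.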

To be fair, 10-15 years ago there was a good chance the average Linux system had a set of CA roots which hadn't been updated in a decade, and most such clients weren't actually checking even the CN, let alone SANs, so bad guys didn't need a google.com certificate (or whatever); they could just get themselves a real certificate for actual-bad-guys.example and the client wouldn't check that the name matched anyway.


Having sadly worked with plenty of developers whose first reaction to an invalid certificate error is to Google how to disable the checking, I don't think you need to go back 10-15 years.


Interesting. I do imagine there are a nonzero number of systems that do revocation checking without out-of-band mechanisms, and that those would become susceptible. But all such systems are likely vulnerable to unpatched zero-days anyways, so this may not make things any worse than they already are :/


> What are the state-actor-level attack implications of this? Before this was revealed, a party that compromised (or was able to be issued) a certificate for a website could be reasonably likely to be detected and have that certificate revoked

This is a convenient fiction. The CA system never protected anyone against state actors; it never did and never will. Subverting a single CA is enough to compromise the entire system, and there are hundreds of them.

Security is always grounded in knowledge and physical control — understanding and exercising your capabilities to preserve them. A blind, deaf and fully paralysed person can't be expected to safeguard their own physical security, and neither can an average user — their TLS security. Especially against state actors. More so, when the parties they have to rely on are commercial enterprises whose entire existence revolves around getting paid to issue certificates.


This is all confusing to me but I've been having certificate issues today and it seems like this could be related. It's a weird coincidence if not!

Basically on Chrome one of my sites is throwing:

"NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED"

for most users but not all, even though they're all on Chrome. It seems to work fine in other browsers.

https://transparencyreport.google.com/https/certificates

When I check my domain there, it seems like my certificate is in the transparency logs, so I shouldn't be getting this error.

Is this related to what you're talking about? I would really appreciate any help. I'm using https://letsencrypt.org/ for the cert.


Does your cert have an SCT? It would be strange for a Let's Encrypt cert to be missing it but certainly possible. Try running (replace both example.coms with your domain name)

    openssl s_client -connect example.com:443 -servername example.com </dev/null | openssl x509 -noout -text
which should print an SCT extension at the end - my version displays it by numeric identifier "1.3.6.1.4.1.11129.2.4.2" but maybe newer versions display it by name.

Alternatively, I think you might be able to go to https://www.ssllabs.com/ssltest/ and see if your cert has "Certificate Transparency: Yes", but I'm not sure exactly what that means.

Anyway, I don't think this is related, the question at hand is about OCSP, which is a different mechanism from Certificate Transparency. (Arguably Certificate Transparency is a replacement for revocation in general being flawed in practice for many reasons, but they're different mechanisms.)


It's a weird coincidence for you but for everybody else it's to be expected as there are dozens or hundreds of people having issues every day.

It's extremely unlikely to have anything to do with this incident.

You should obtain a copy of the certificate which triggers NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED and take a look at that. There's an excellent chance there's something else even more obvious wrong (from your point of view as a human) but Chrome decided to focus on the lack of trustworthy SCTs.

My instinct would be that it's likely a middle box (e.g. "anti-virus software" on a PC can install itself to snoop on all HTTPS sites, or a corporate "data loss prevention" proxy or that sort of thing) and the bogus certificate will likely make that pretty obvious if you examine it.


I think it's an interplay between system clock skew, Chromium's SCT validation implementation, and (very) recently issued certificates (which are backdated by 1 hour).

It's a bit of a heisenbug but it's occasionally reported on the Let's Encrypt forums. It always goes away for the reporters just by waiting a little bit.

It would be really nice if a user who runs into this could generate a Chromium event log which would hopefully include the SCT events (chrome://net-internals).


Thanks! It does seem to have gone away today. Very strange.


That is probably not related.

If you run SSLLabs against your host name, does it say “Certificate Transparency: Yes” or No?

https://www.ssllabs.com/ssltest/analyze.html?d=your-hostname


Thanks. It says "yes".


"...revocation of the affected certificates is insufficient... Instead, it must be proven that all copies of the keys have been destroyed."

Isn't the main reason you would want to revoke keys because they were disclosed, making it impossible to destroy all copies?


Normally, yes. This is not a normal circumstance though. In this scenario, the misissued intermediates effectively have sudo access to cancel a revocation issued by the parent CA. This is equivalent to being told that a non-root user could cancel a userdel command run by root. For policy compliance, the intermediates have to revoke their certificates — but since the intermediates can immediately un-revoke themselves, proof of key destruction is necessary as well to ensure that they cannot.


Further, if I can destroy all keys do I even need to revoke the cert? (Honest question, I’m actually not sure)


Revocation is required by policy, so the question is technically moot. It’s generally good practice to generate and publish a revocation prior to destroying a private key, though.

To provide an analogy in the context of PGP keys, if an attacker somehow finds a backup of your revoked and destroyed private key someday, they will have trouble using it because your revocation will be public and on record.


Flagging for a title change, as the revocation deadline was proposed by a single individual on a mailing list, and there’s no evidence from further discussion in that thread that a consensus has been reached. I would not expect all these certificates to be revoked within 7 days.


That "single individual on a mailing list" is Ryan Sleevi, who is basically the voice of Google in all matters to do with TLS PKI, and as I understand it is not working with an "opinions my own" cop-out in this context. It's not some rando.


In addition to his role with Google/Chrome, he is also a peer of the Mozilla CA module and has a lot of influence in Mozilla's policies as well.


It’s still one person. There needs to be a consensus, and that consensus hasn’t been reached. As has already been mentioned in other comments, Mozilla said they wouldn’t enforce the 7-day deadline.


Flagging doesn’t result in title changes, it just terminates the post. Emailing the mods using the footer Contact link does, though.


Thanks. It’s a moot point now, but I’ll note that for the future.


I changed the title. Although I heard from someone involved that the intermediates really should be revoked in 7 days. Let's wait and see.


They definitely should be—that’s what the author is claiming is mandated, and it would make sense. However, I’m a bit skeptical about browsers being able to enforce that timeline here.

Also, given that the underlying cause appears to be ignorance, it would be prudent to take things slow and ensure that this doesn’t happen again. As I said before, the damage is already done—revoking appears to be insufficient here.

If this does actually happen within 7 days, though, I will be thoroughly impressed.


It could be considered a tacit warning that browsers may choose to mistrust the impacted subCAs in the near future. I don’t know the specifics, but I assume they can revoke for non-compliance using in-browser mechanisms without depending on the revocation process.

EDIT: Mozilla’s reply: https://news.ycombinator.com/item?id=23748561


You don't have to trust Sleevi (though: you always should); you can just read the BRs. The revocation requirement is in this case black-letter SHALL.

https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-...


> Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST include an id-pkix-ocsp-nocheck extension.

> For example, consider a certificate like https://crt.sh/?id=2657658699 . This certificate, from HARICA, meets Mozilla's definition of "Technically Constrained" for TLS, in that it lacks the id-kp-serverAuth EKU. However, because it includes the OCSP Signing EKU, this certificate can be used to sign arbitrary OCSP messages for HARICA's Root!

> This also applies to non-technically-constrained sub-CAs. For example, consider this certificate https://crt.sh/?id=21606064 . It was issued by DigiCert to Microsoft, granting Microsoft the ability to provide OCSP responses for any certificate issued by Digicert's Baltimore CyberTrust Root. We know from DigiCert's disclosures that this is independently operated by Microsoft.

So my understanding is this: the CAs have issued certificates/sub-CA certs without the proper extension (or with an extra EKU), causing those certs to be able to sign an OCSP response. And the Online Certificate Status Protocol (OCSP) is used to check the revocation status of certificates with the CA.

So, this would allow e.g Microsoft to generate a fake OCSP response? That would perhaps be useful in some kind of MITM-attack scenario?

While not good, perhaps not an end of the world problem either? However, I wonder how much problem will come for people needing to replace those soon to be revoked sub-CA certs...


Many of the violations discussed on the security lists are not end of the world at all. The 63-bit instead of 64-bit serial number entropy issue is a good example of this. But the strict enforcement of all violations makes it easier to spot bad actors or at least those who aren't competently handling all of the requirements to be a CA. Bottom line: the entire CA system is built on trust.

Would you trust someone who doesn't take issues seriously because they think they're small or unimportant?

EDIT: reading the full report, it seems that the underlying risk is that if one of the intermediate CAs were to be compromised, even after being revoked it could theoretically forge an OCSP response saying that it is still valid (and, as a trusted CA, issue certs for anything). So the response is very appropriate given the potential impact.


The nocheck thing confused the hell out of me.

As I understand it: the issue isn't the nocheck; it's where the OCSPSigning EKU is. You're supposed to see OCSPSigning on end-entity (CA:FALSE) certificates; the purpose of the EKU is to delegate to a non-CA cert the authority to sign revocation status for its parent. When you see that EKU on a CA:TRUE cert, what you're really seeing expressed is that CA's parent delegating OCSP for the root; ie, the CA is granting its customer the right to control revocation for the whole CA.

What nocheck expresses is: "you can't trust this OCSP Delegated Responder to revoke itself, because that's silly; seek confidence in its validity elsewhere". "Elsewhere" apparently usually means "the fact that this certificate has a very short lifetime", which is feasible for an end-entity cert but not so much for a CA.

My understanding is that nocheck (or, lack of it) is how Ryan spotted these certificates, but isn't really the big problem with them.
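The scan that surfaced these certificates can be approximated as a simple predicate over each certificate's parsed fields. This is a toy sketch (dicts standing in for parsed certificates; real tooling parses DER and crt.sh data):

```python
# Flag certificates that are both a CA (CA:TRUE in Basic Constraints)
# and carry the id-kp-OCSPSigning EKU -- the problematic combination
# described above. Toy dicts stand in for parsed certificates.
OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"

def is_problematic(cert: dict) -> bool:
    return bool(cert.get("ca")) and OCSP_SIGNING in cert.get("ekus", [])

ok_responder = {"ca": False, "ekus": [OCSP_SIGNING]}  # normal delegated responder
bad_sub_ca   = {"ca": True,  "ekus": [OCSP_SIGNING]}  # the class at issue here
plain_sub_ca = {"ca": True,  "ekus": []}              # fine: no OCSP-signing EKU

print([is_problematic(c) for c in (ok_responder, bad_sub_ca, plain_sub_ca)])
# [False, True, False]
```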


If an intermediate certificate's private key leaks, you can basically block revocation, since you can sign OCSP messages with that certificate. That kinda defeats the purpose of revocation.

But as far as I know, browsers don't fail hard on OCSP failure anyway; if you can MITM the connection, you can possibly block OCSP requests too.


The author claims it’s a problem because one sub-CA can effectively un-revoke its own certificate and certificates from other sub-CAs. That’s bad because it defeats the most important purpose of revocation.

If someone compromises a key, typically you would want to revoke it. However, if that key also allows revocation to be reversed, you're in trouble.

I’ve explained more in a top-level comment: https://news.ycombinator.com/item?id=23747524


Digicert’s reply is called out by the original poster as being particularly high quality: https://www.mail-archive.com/dev-security-policy@lists.mozil...


From the linked post:

“This is https://misissued.com/batch/138/

A quick inspection among the affected CAs include O fields of: QuoVadis, GlobalSign, Digicert, HARICA, Certinomis, AS Sertifitseeimiskeskus, Actalis, Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.”


Hilarious. And they’re referencing specs so deep nobody understands what they’re talking about. Certificates are BS.


> I've flagged this as a SECURITY matter for CAs to carefully review, because in the cases where a third-party, other than the Issuing CA, operates such a certificate, the Issuing CA has delegated the ability to mint arbitrary OCSP responses to this third-party!

> For example, consider this certificate https://crt.sh/?id=21606064 . It was issued by DigiCert to Microsoft, granting Microsoft the ability to provide OCSP responses for any certificate issued by Digicert's Baltimore CyberTrust Root. We know from DigiCert's disclosures that this is independently operated by Microsoft.


If certificates are BS, give me a hosts.txt entry I can point INSTAGRAM.COM at that my browser will actually honor. It should be easy.

Certificates are problematic. But that's because the world is problematic. They were much more problematic 10-15 years ago. Google and Mozilla drastically mitigated their problems. They're still imperfect, but anything that expresses web trust across the entire world is always going to be imperfect, and the webPKI at least has some stakeholders that are both empowered and deeply give a shit about security.


I thought the BS here was that this compromises OCSP which no one uses regardless due to its numerous design faults.


Yep. Revocation was broken by design so hardly anything supported it, and now the implementation is broken so revocation is getting revoked. The whole thing is ridiculous.


Indeed. I am still in awe people supportive of PKI are referred to as "security experts". PKI is literally where we decided that a bunch of companies nobody's heard of should all be the Most Trusted for the entire Internet, and be able to tell us if everyone else is trustworthy. And then our web browsers, one of which is run by an adtech company, should decide whether or not to trust those entities, and whether or not to let the user override that decision about trustworthiness, to show us the website we wanted to get to.


The PKI, like democracy, is the worst system except for all the others.

I think the main alternatives people suggest are

- something involving a distributed ledger, where revocation isn't even an option, so that clearly doesn't make it better than the current system if we're talking about revocation being a mess (we could just amend the current system to get rid of revocation and throw out a whole bunch of technical complexity if we wanted)

- something involving DNS, which also involves trusting a bunch of companies nobody's heard of (sometimes the same companies, in fact?) who are hardly obviously better at operating cryptographic infrastructure than the existing CAs

- a TOFU approach like SSH, which hasn't been demonstrated to scale well beyond the dozen or so machines in your known_hosts file (most large companies are using something other than TOFU even for internal SSH)

I don't think PKI is an objectively good system, it's just difficult to picture a better one. The main flaws with PKI in practice aren't really about the companies nobody's heard of or a web browser being run by an adtech company - the main flaws are that people want a lot of things out of the system, some of which are contradictory, and running cryptography at this level of scale is genuinely hard. The alternatives don't really address those problems.


A DNS-based system reduces the attack surface for any given domain massively: The gTLD registrar and your domain registrar become the sole entities that can create trusted certificates involving your site.

Right now, how many different companies could issue a microsoft.com cert if compromised or sketchy? Hundreds?

Right now CAs delegate trust to bunches of questionable sites as seen here with poor oversight or security based on business interest. On a DNS-based system, the entities involved are limited to those who actually manage your DNS.

It also removes the agency of browsers to decide who does and doesn't get to play, which is the current system.


The attacks are different, though. Under Certificate Transparency, approximately no one can issue a microsoft.com certificate and get away with it. Under a DNS-based system, the domain registry can do whatever, and there's no effective way to distrust them - if Verisign (who still manages .com, but who was too incompetent to run a CA and sold it to people who have been hard at work trying to clean up the mess) does something unreasonable with .com, the only option is for Microsoft to find a different TLD.

Given that most of the problems with the CA system historically have not been active attacks but incompetence, I don't think we win much from moving to a system where we can, in fact, kick TURKTRUST out of the pool to one where the question is whether .tr remains part of the internet or not. If Verisign screws up with .com in any way short of revealing a letter from the FBI saying "Please help us MITM Windows Update," there will be immense pressure to allow Verisign to continue being the .com registry and continue holding the .com signing keys.

For similar reasons, I'm not convinced that moving from "Hundreds of unqualified companies could issue a bad cert, but hopefully they won't" to "One unqualified company could issue a bad cert, but hopefully it won't" is a meaningful benefit. It doesn't reduce the theoretical bounds on the attacks, and again in practice, these hundreds of companies haven't been misissuing. (The present story is about mis-delegating the power to issue revocation/non-revocation responses, which is certainly a problem, but only relevant in practice if there are actual end-entity certs that are misissued in the first place.) So while it certainly feels better to have fewer entities that can sign - and to be clear, I am all for distrusting many if not most of them - I don't think it addresses either the fundamental theoretical problems nor the actual real-world attacks.


> Verisign (who still manages .com, but who was too incompetent to run a CA and sold it to people who have been hard at work trying to clean up the mess)

The Verisign CA function was sold to Symantec. That name might ring a bell too, because with these CAs set to be distrusted as a result of Symantec's mismanagement the whole business was again sold to DigiCert in 2017.

I think the perverse part of your reasoning is that you think .com is trustworthy now. It's one of the worst run registries. Its popularity with businesses probably tells you more about how scammy most businesses are than whether .com is trustworthy, and not very much about either.


Not sure if you're directing that at me or the parent comment - my position is definitely that Verisign should not be trusted with certificate signing authority over .com. The comment I'm replying to seems to advocate Verisign (and nobody else) being able to issue microsoft.com certs, which I think is a bad idea.


If Microsoft is comfortable with microsoft.com despite the .com registry being appallingly run I don't see any problem with that, just as I wouldn't see any problem with Microsoft choosing to open a Microsoft store in the almost-abandoned decaying mall at the far edge of town whose only other tenants are a discount furniture store and a company that sells only a single item and never has any customers.

It's a mistake to treat the certificate-signing authority as deserving separate attention if it would be (as in DNSSEC) hierarchically constrained. Verisign can already screw up badly enough to cause Microsoft to lose control of microsoft.com or let somebody else have it. They've apparently decided they're comfortable with their capacity to mitigate that risk. Fine.


It was supposed to be a "proof of stake" originally I suppose, if a company was caught doing shady thing it would lose its CA status so they're incentivized not to do so. Sort of like internet notaries.

That might have worked decently in the early internet but it does seem seriously flawed with the current stakes.

That being said, what's the alternative? TOFU? Web of Trust? Those have massive security implications as well. They have the advantage of putting the user back in control, but given that the vast majority of people using the web today don't have a deep understanding of the underlying technology and security model, I don't see how this wouldn't end up in a massive catastrophe.

It's a tough problem to solve.


The problem is that a lot of companies have done shady things and they are still participating in PKI. And a huge issue is that I can't pick my trustworthy parties: for instance, I do not trust Google. But a huge portion of the web won't work unless my browser assumes Google can issue certs for any domain in the world. I also don't trust half a dozen CAs in countries I don't deal with and would prefer not to have access to at all. When a Chinese PKI provider fails, I first wonder why I'm even trusting these CAs to begin with.

I'd prefer a system backed by DNS, and based on verifying the ownership of domains and the authorized DNS provider for that domain. Presumably, in my example, the only domains Google would be authorized to secure would be domains provided via Google's DNS and domain products.


> For instance, I do not trust Google. But a huge portion of the web won't work unless my browser assumes Google can issue certs for any domain in the world.

Um no. Google's four production roots (GTS Root R1 through R4) are essentially dormant. You could (but probably shouldn't) manually distrust these roots with no impact.


Do you have an alternative? DANE looks good but it would require lots of people to get on board with DNSSEC first...


The interesting thing is that web browsers can make people "get on board" with anything. Most of the PKI and TLS changes in the last couple years have happened because Chrome/Firefox/Safari have decided to say "this or your page won't work".

Understanding where web security is right now is about understanding who is making the decisions (regardless of any claims about committees and processes), and what motivations they have to make the decisions they do.


Doesn't this just shift the exact same trust to registrars?


It shifts the trust to a single CA instead of all the CAs.


More precisely, it means that compromising the public key infrastructure requires compromising one specific CA, rather than compromising any single CA out of hundreds. Ideally, we would want it to instead require compromising all CAs out of hundreds, but as long as the defective-by-design X.509 PKI is used, that's not really possible, much less likely.


This is a solution that works reasonably well.

Formulating a working alternative is far from trivial.


To be fair, it's not PKI in general, but specifically X.509 PKI that needs to die in a fire.


I am so glad I'm not alone in this viewpoint. When we first learned about CA's back in college, I still remember the double-take I had in class.

Never could get the professor to double back around to the more problematic stuff.


Complexity kills.


Indeed.


> Certificates are BS.

Pretty much. The whole business model never really made sense: the relying parties have no relationship with the certificate authorities, while the HTTPS servers are the customers of the CAs.

I think it would make a lot more sense for certificates to be issued by domain owners, esp. since the original idea of tying sites to real-world businesses (e.g. with Dun & Bradstreet numbers) has been reduced to just verifying domain-name ownership.

Edit: I think people misunderstand what I am saying here. What I mean is that when one purchases a subdomain of a domain, that domain should just issue a certificate — and that domain should only be allowed to issue certificates for its children. So e.g. if one purchases foo.com, then com issues a certificate for foo.com; if one purchases bar.net, then net issues a certificate for bar.net; if one purchases baz.ac.uk then ac.uk issues a certificate for baz.ac.uk. This is essentially what Let's Encrypt and ACME already do: com has the technical ability to reassign any of its subdomains at any time it wants to, and can get a certificate issued for any of them by reassigning a subdomain and then requesting a certificate for it.

And while we're at it, maybe we could kill ASN.1 with fire?

Edit: if you downvoted for this, you have never tried to debug an ASN.1 BER file.


Then there's the whole issue of putting domain names/common names inside the certificates, but relying on external verification; rather than just having the DNS NICs directly sign for domains they have obvious authority over.


> I think it would make a lot more sense for certificates to be issued by domain owners, esp. since the original idea of tying sites to real-world businesses (e.g. with Dun & Bradstreet numbers) has been reduced to just verifying domain-name ownership.

The problem with that approach is that anyone can create a certificate for any domain; so if I go to "example.com" then it's kinda hard for me to detect if my connection is being MITM'd, especially if this is the first time I'm visiting example.com.

This is why ACME requires a verification that you actually control example.com (via http or dns).

I don't think the CA model is perfect by any means, but I don't think it's completely without value either.


>The problem with that approach is that anyone can create a certificate for any domain; so if I go to "example.com" then it's kinda hard for me to detect if my connection is being MITM'd, especially if this is the first time I'm visiting example.com. //

I thought they meant the .tld registry would issue the certificate, so any registrar could sell you the domain+cert but it would have to come from the registry (ICANN say, for .com).

Can't the DNS data have a hash of the cert to avoid 3rd party certs (unless the 3rd party controls the domain registry entry, but then MitM is a [ahem] dead cert).
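For what it's worth, DANE's TLSA records already do roughly this: a "3 1 1" record publishes the SHA-256 of the certificate's SubjectPublicKeyInfo in DNS. A minimal sketch of computing that association data (the input bytes here are placeholders, not a real key):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Compute the association data of a DANE TLSA "3 1 1" record:
    usage=3 (DANE-EE), selector=1 (SubjectPublicKeyInfo),
    matching-type=1 (SHA-256). spki_der would be the DER-encoded
    public key taken from the server's certificate."""
    return hashlib.sha256(spki_der).hexdigest()

# Illustrative only -- real input is the DER SPKI from the live cert:
fake_spki = b"\x30\x82\x01\x22 placeholder bytes"
record = tlsa_3_1_1(fake_spki)
# Published in the zone as: _443._tcp.example.com. IN TLSA 3 1 1 <record>
print(len(record))  # 64 hex characters
```

A DANE-aware client would fetch that record (over DNSSEC) and compare it against the hash of the key the server actually presented.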


DNS providers should be your HTTPS providers though. Presumably the certs your browser would have to "just know" would be for the root TLDs, so you could verify with them what DNS provider a given domain was entrusted to, and then query that DNS provider whether or not your domain's certificate was legitimate.

The idea that any CA can issue a valid cert for any domain is the heart of what's wrong with PKI.


The name constraint extension (https://tools.ietf.org/html/rfc5280#section-4.2.1.10) can help a lot with that, we chose to trust CA for all names but we could have had CAs for a way more limited set of domains.

Software support is far from universal sadly.
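As a sketch of how dNSName constraints scope a CA — a simplified matcher, not a full RFC 5280 implementation (which also handles excluded subtrees, other name types like IP addresses and rfc822Names, and more edge cases):

```python
def dns_name_permitted(name: str, permitted: list[str]) -> bool:
    """Rough RFC 5280 dNSName permitted-subtree check: a name satisfies
    a constraint if it equals it or is a subdomain of it. Simplified --
    real implementations must also process excludedSubtrees and reject
    names that match no permitted subtree at all, as done here."""
    name = name.lower().rstrip(".")
    for base in permitted:
        base = base.lower().rstrip(".")
        if name == base or name.endswith("." + base):
            return True
    return False

# A CA constrained to example.com couldn't validly sign for microsoft.com:
print(dns_name_permitted("www.example.com", ["example.com"]))  # True
print(dns_name_permitted("microsoft.com", ["example.com"]))    # False
```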


> The problem with that approach is that anyone can create a certificate for any domain; so if I go to "example.com" then it's kinda hard for me to detect if my connection is being MITM'd, especially if this is the first time I'm visiting example.com.

You misunderstand what I mean: I advocate that the owner of .com be permitted to mint certificates for foo.com, bar.com, etc., since right now the owner of .com can point those subdomains to any host he wishes, and then generate a certificate using ACME (because he actually controls every subdomain of .com).


Oops, sorry; I misunderstood your comment. I thought that with "domain owners" you meant "the person who registered the domain" (i.e. self-signed certs).

Using DNS providers for certificates is an interesting idea; one I haven't heard before. I can't really think of any downsides of that at the moment.


> original idea of tying sites to real-world businesses

Ironically, the only system PKI had to attempt this, Extended Validation, is opposed by the loudest voices in PKI today. Despite arguably being the only real benefit PKI potentially offered: Notarizing that a domain really belonged to a given real-world entity.

EV had flaws, but it should've been improved, not axed. Security detached from people-understandable real-world entities will never provide real security, because at the end of the day people still need to interact with the system.


What's an easy way to test if I am affected by this?


Contact the customer support of whoever issued your SSL certificates and ask if you are affected by this.

If you use any of the major auto-issue, auto-renew certificate platforms, then you do not need to take any action — either you aren’t affected or they’ll issue a new certificate to you automatically. (Let’s Encrypt, AWS Certificate Manager, Google-Managed SSL Certificates, Heroku Automated Certificate Management)


Check if your browser shows a lock symbol on any important sites you visit.

(This affects the PKI as a whole, because a single unconstrained compromised sub-CA can misissue for any domain - so if you use HTTPS, you're affected).

If your question is whether you should take action, the answer is no, unless you're in some way responsible for an intermediate CA, which is something you'd know about.


Hi Jake! Enter your website URL in here, and it'll tell you if you're affected or not: https://ohdear.app/tools/certificate


Shameless plug, but I built a monitoring solution with extensive SSL checks that’ll catch & report these if they’re revoked: https://ohdear.app


This looks nice, I've recently started using instatus.com for the status page, but love your monitoring parts, might use the trial to further check it out!


Ironically, I get a HTTPS error when opening the link for this post.

Edit: Now getting mac error: https://i.imgur.com/JmdC8Yi.png


Others aren’t reporting that. What error?


> Websites prove their identity via certificates. Firefox does not trust https://www.mail-archive.com/dev-security-policy@lists.mozil... because its certificate issuer is unknown, the certificate is self-signed, or the server is not sending the correct intermediate certificates.


The certificate is issued by Let’s Encrypt [2], and has a valid and correct intermediate chain from the server [1]. Have you knowingly altered your browser’s TLS security settings, or certificate root store settings (for example, to distrust X3), or are you running an especially old browser on an out-of-date platform? Being able to see a screenshot of which intermediate your browser is refusing to trust would be helpful [3]. (Unless you’re somehow being MITM’d, which can happen on some internet connections or with certain ‘security’ software on Windows or by mitmproxy left enabled, in which case the screenshot of the certificate chain will look nothing like Let’s Encrypt at all and help diagnose that too.)

[1] Normal LE: https://www.ssllabs.com/ssltest/analyze.html?d=www.mail-arch...

[2] Test site: https://valid-isrgrootx1.letsencrypt.org/

[3] In the developer console, there should be a security tab with a View Details button.


I am using a mobile browser; it could be out of date, but I didn't tinker with the network settings. I doubt only this site would get MITM'd. I will see if I can get more details from it.

Screenshot: https://i.imgur.com/JmdC8Yi.png

Now I get a MAC error instead of a cert error


I would say with near certainty that your issues stem from your OS/browser, or if you have any security apps installed, those could be at fault too (since they sometimes run network interception). You might test a browser that ships its own SSL stack (I believe Firefox Android does, though I’m not 100% certain) and see if it Just Works in that, but at the end of the day, I’d simply recommend backing up your data and settings, factory resetting the device and updating it to latest, and then restoring your data and settings — there’s far too many things that can go wrong, especially in rooted scenarios, and I don’t have the ability to triage and repair beyond highlighting the three possible vectors you could tackle exploring yourself.


Not rooted, no security software, tried different browsers, and I am using FF Android in that screenshot.

Perhaps the stingrays are acting up this morning ;)


I wish I knew how to diagnose SSL issues in Firefox Android in order to learn more about why you’re experiencing issues here. If you’re on cellular, try WiFi? Does it affect any other phones in your house? Etc.


The really labour intensive thing you could do goes like this:

1. Get a nice shiny modern Wireshark

2. Tell Firefox you want it to keep records of the session secrets that secure TLS. Set environment variable SSLKEYLOGFILE=/some/path/to/log/secret.keys

3. Packet capture the session you're interested in

4. Give Wireshark the packet capture (if not captured inside Wireshark itself) and the secret.keys

5. Now Wireshark can show the TLS session and you can see what went wrong in detail. So long as you didn't actually do anything secret you can give all these pieces to somebody else to look at.

6. Otherwise, after your investigation destroy the secret.keys and optionally the packet capture itself.

I've used this level of effort to show a customer that, contrary to what they believed they were not presenting the nice client certificate I'd issued them when connecting. It turned out to be a config difference between their staging and production systems or something. But they were absolutely insistent their software was being turned away despite using a client cert (we used mutual TLS) so it took posting a Wireshark capture proving otherwise to get them to actually investigate.
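As an aside on step 2: the same NSS key-log format Firefox writes via SSLKEYLOGFILE can also be produced from Python's own ssl module (3.8+, via SSLContext.keylog_filename), which is handy when the client you're debugging is your own script rather than a browser. A minimal sketch:

```python
import os
import ssl
import tempfile

# Write TLS session secrets in the key-log format Wireshark reads
# (Preferences > Protocols > TLS > (Pre)-Master-Secret log filename).
keylog_path = os.path.join(tempfile.gettempdir(), "secret.keys")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # Python 3.8+; same format as SSLKEYLOGFILE

# Any handshake made with `ctx` now appends its secrets to secret.keys,
# letting Wireshark decrypt the corresponding packet capture, e.g.:
#   urllib.request.urlopen("https://example.com", context=ctx)
print("logging TLS secrets to", ctx.keylog_filename)
```

As in step 6, treat the resulting file as sensitive and delete it after the investigation.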


can anyone explain what that actually means for us endusers?


There are some entities that can sign certificates that don't conform to the standard the browser makers have agreed on. Those entities are about to have their signing rights revoked. Anything they signed will be invalidated, because there's no way to tell if the signature is legit or impersonated. If some sites are using certificates signed by those entities, they're going to have a bad time, and so will you as an end user of those sites. I'm not an expert in this, but this is the impact as far as I understand it. If I'm wrong and someone knows better, please correct me.


Q: Does this affect you?

No action is otherwise required for end users, except for those site operators who manually deploy SSL certificates.

If you manually deploy SSL certificates, then make sure that your contact information with the issuer (e.g. Digicert) is up-to-date; if your certificates are affected, they will contact you and advise you on how to proceed with reissuing and redeploying. Or, you can contact their customer service, reference the mail-archive link, and ask if your certificates are affected.

If your SSL certificates are deployed automatically (Let’s Encrypt, AWS ACM, etc.) then no action is required. Either your certificates are unaffected, or they’ll be updated automatically by the automation.

Q: What’s a simple summary of the problem?

Imagine if root tried to userdel a malicious account, and the non-root user being deleted was able to tell the system “ignore that, I’m still valid”. The system would need to be quarantined and a replacement built without that flaw, since you could not state definitively that you had deleted the malicious user account.

This issue would allow a non-root certificate (‘intermediate’, ‘subCA’) to undo deletion (‘revocation’) of itself by the root authority, as well as undo deletion of its siblings (other intermediates issued by that root authority). To correct the issue, all non-root certificates issued incorrectly in this manner must be revoked and then destroyed, with proof of destruction. This ensures that the mis-issued intermediates can’t zombie-return someday by undoing their own deletion or the deletion of others.
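To make the "zombie-return" failure concrete, here's a toy Python model (hypothetical names, not real X.509/OCSP processing) of why revocation alone can't contain a delegated responder key:

```python
# Simplified model of OCSP trust, not real certificate logic.
# Relying parties only see signed responses; whoever holds a key with
# the (mis-issued) OCSP-signing capability can answer for ANY cert
# under the root -- including its own revocation status.

revoked_by_root = set()          # what the root CA intends to revoke
responder_keys = {"subCA-1"}     # holders of OCSP-signing-capable keys

def honest_status(serial: str, signed_by: str) -> str:
    """What a well-behaved responder would report."""
    if signed_by not in responder_keys:
        raise ValueError("response not signed by a trusted responder")
    return "revoked" if serial in revoked_by_root else "good"

def malicious_status(serial: str, signed_by: str) -> str:
    """The attack: the key simply signs 'good' about itself."""
    if signed_by not in responder_keys:
        raise ValueError("response not signed by a trusted responder")
    return "good"

# The root revokes the sub-CA...
revoked_by_root.add("subCA-1")
# ...but as long as the key exists, it can contradict that revocation:
print(honest_status("subCA-1", "subCA-1"))     # "revoked"
print(malicious_status("subCA-1", "subCA-1"))  # "good" -- still trusted!
# Only destroying the key (responder_keys.discard) closes the loophole.
```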


I was going to blog this after tormenting Sleevi on Twitter with more dumb questions, but I guess we're just going to do this here.

The issue† --- it's super complicated in its particulars but not in its outline --- is that CAs have been issuing constrained intermediate CAs (CAs that can sign only a subset of certificates) that, because of a misconfiguration, can sign any OCSP message for their root CA. So, for instance, Sleevi points out a HARICA (Greek university CA) cert that is constrained by dint of not having the serverAuth EKU (so it can't sign TLS certs), but because it has the OCSPSigning EKU, can be used to sign OCSP messages.

What happened was this: lots of CAs run by companies in the wild (remember: there are all sorts of constrained CAs running inside companies that you aren't supposed to have to care about, in part because they're constrained) run on Microsoft's CA software. Many of those CAs want to support OCSP. The way you generally set up OCSP is to sign a Delegated Responder certificate, which is an end-entity (non-CA) cert with the OCSPSigning EKU, which says "this certificate can be used to sign OCSP responses for this part of the PKI". But Microsoft's CA is broken: it won't let you sign a cert with the OCSPSigning EKU unless your CA cert also has that EKU (the EKUs must "chain"). So, that's what CAs did, despite the fact that chaining EKUs like that changed the semantics of what the certs were intended to express.
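A toy model of the resulting trust relationships (simplified structures, not real certificate parsing) — the point being that the delegated-responder check keys off the OCSPSigning EKU and the issuer, so a sub-CA cert carrying that EKU can answer for everything its root ever issued:

```python
from dataclasses import dataclass, field

@dataclass
class Cert:
    name: str
    issuer: str                 # which CA signed this cert
    is_ca: bool
    ekus: set = field(default_factory=set)

root = Cert("Root", issuer="Root", is_ca=True)
# Intended design: a non-CA delegated responder, scoped to its issuer.
delegated = Cert("OCSP Responder", issuer="Root", is_ca=False,
                 ekus={"OCSPSigning"})
# The misissuance: a constrained sub-CA that ALSO carries OCSPSigning,
# because the CA software insisted that the EKU "chain".
sub_ca = Cert("Customer SubCA", issuer="Root", is_ca=True,
              ekus={"OCSPSigning", "emailProtection"})

def may_sign_ocsp_for(signer: Cert, cert_issuer: str) -> bool:
    # Roughly the RFC 6960 delegated-responder rule: the signer must
    # carry id-kp-OCSPSigning and be issued by the same CA as the cert
    # it answers for. Note it says nothing about CA:TRUE vs end-entity.
    return "OCSPSigning" in signer.ekus and signer.issuer == cert_issuer

# Both can answer for any cert issued by Root -- including revocation
# status of OTHER sub-CAs, which is the security problem:
print(may_sign_ocsp_for(delegated, "Root"))  # True (intended)
print(may_sign_ocsp_for(sub_ca, "Root"))     # True (accidental!)
```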

So, as Sleevi put it, basically a bunch of CAs accidentally signed the OCSP equivalent of a bunch of CA:TRUE certificates. They gave their customers the ability to essentially disable revocation for the whole CA.

We can go back and forth on how sound the WebPKI revocation infrastructure is with or without this mistake. But at a minimum we should be able to stipulate that strengthening revocation is an important project for browser vendors, and CAs that accidentally break the revocation infrastructure by misissuing certificates are an impediment to that project.

There's an interesting backstory to this, which is what I actually wanted to write about, and that's CA/B Forum SC31. The CA/B Forum is the standards body that coordinates between browsers and CAs; they maintain the BRs, which are the bylaws for operating a CA that browsers will trust. The politics of CA/B Forum are weird, because CAs and browsers are structurally adversarial: browsers want maximal security regardless of the commercial implications for CAs (as they should). As a result of that weirdness, the browser vendors rely not only on the BRs, but also on their own bylaws for their respective root certificate programs. SC31 is an attempt by Ryan Sleevi to align the CA-approved BRs with the browser-approved root programs.

Of course, the browser root programs, particularly Mozilla and Google's, are the only thing that really matters, because those rules determine whether the browsers themselves will honor a CA's certificates. But the CA/B Forum is dominated by CAs who sort of dispute this fact. So, SC31 asks that the BRs import that (now common) browser root program rule that certs can only live for just over a year. The "one year only" rule was considered previously by the CA/B Forum and failed in a vote (CA customers don't like that rule), so the CA's are unhappy to be relitigating that point. But then, the litigation itself is kind of theatrical, in that browsers already made this decision and mooted the debate. Standards bodies are weird.

Anyways, what makes this funny is that, while the one-year restriction is the real problem that seemed to piss off the CAs in SC31, they also challenged some of its OCSP language. In the course of debating with them about whether SC31 described reality or not, Sleevi started looking at their OCSP issuances. And here we are: the CAs aren't even following their own BRs, and are breaking their bylaws in ways that materially impact TLS security.

I understood literally none of this when I first read Ryan's message. I thought I knew some stuff about the WebPKI, but even with help on Twitter I had trouble with this eldritch stuff. I don't know how people like Sleevi do it, and, apparently, neither do many CA operators.

This probably would have been a boring blog post anyways, but to make Kurt happy, I'll conclude with: "use Fly.io!". :)

Wrinkle: Sleevi flags these CA certs as missing the ocsp-nocheck attribute, which is presumably how he spotted them. ocsp-nocheck says "this certificate can't be trusted to OCSP revoke itself, because that's silly"; the idea is, certs flagged ocsp-nocheck rely on very short lifetimes rather than OCSP to manage revocation. Very short lifetimes are things you expect on end-entity certificates, which is what OCSP Delegated Responders are, but not so much on CA:TRUE certificates, which are what Sleevi actually found.
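A hypothetical lint in that spirit, flagging the profile described above (CA:TRUE plus OCSPSigning, without the ocsp-nocheck extension), using simplified dicts rather than real DER parsing:

```python
def looks_misissued(profile: dict) -> bool:
    """Flag the profile described above: a CA certificate carrying the
    OCSPSigning EKU but lacking the id-pkix-ocsp-nocheck extension.
    The dict keys here are an invented, simplified representation."""
    return (profile.get("ca") is True
            and "OCSPSigning" in profile.get("ekus", ())
            and not profile.get("ocsp_nocheck", False))

# A proper short-lived delegated responder (end-entity, nocheck set):
ok = {"ca": False, "ekus": {"OCSPSigning"}, "ocsp_nocheck": True}
# The shape of one of the ~293 misissued intermediates:
bad = {"ca": True, "ekus": {"OCSPSigning", "clientAuth"}}
print(looks_misissued(ok), looks_misissued(bad))  # False True
```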


> The CA/B Forum is the standards body that coordinates between browsers and CAs

I would quibble with the description of CA/B as a standards body. It's a standing meeting. Any CA/B documents including the Baseline Requirements manage relationships only between (potential) parties to CA/B itself, the browsers and public CAs.

The reason the meeting exists anyway is that it sucks for the public CAs if the major root programmes have conflicting rules or interpretations of those rules. The idea of the BRs is to as much as possible agree rules with everybody to avoid such conflicts.

Also I would rate Microsoft and Apple as equally significant with Google and Mozilla and it's at least tempting to add Oracle (because Java has its own root trust programme) too.

In terms of whether they'll throw their weight around we're used to seeing that from Google and Mozilla (e.g. on SHA-1 and on the Blessed Methods) but Apple proves here (on the one year certificates thing) that sleeping giants might wake up and kick over everything you're doing if it displeases them.

If Microsoft were to, for example, have refused to trust ISRG (they took a very long time to actually make any decision) I'm sure that we'd moan about it, but we can't make them, and without being trusted in millions of Windows PCs when their cross signatures expire that would be the end of the story for Let's Encrypt, right?


The strength of Mozilla is that it runs its root certificate policy very publicly, with public review periods and chances to objection, as well as detailing the incident reports on its wiki in full view of the public. I know some of the Google engineers are also very present on Mozilla's lists.

The other non-Google, non-Mozilla companies don't have anywhere near this level of public disclosure from what I've seen, and I suspect that they may follow some of Mozilla's groundwork (especially in terms of dishing out punishments in response to incidents).


Without any doubt Mozilla's public oversight role (via m.d.s.policy) is extremely important.

Google actually doesn't operate transparently either, except in the sense that it chooses to participate in m.d.s.policy. You won't find a public process behind Google's decision to require CT for Symantec's roots before it was mandatory for other roots for example, they just announced the policy as a done deal.

You are not alone in concluding that Mozilla's distrust decisions (I wouldn't characterise them as "punishment") are in practice copied by the other root trust stores. It is entirely possible that Microsoft (for example) has a large team of dedicated experts independently investigating incidents and just coming to coincidentally similar conclusions. After all, the facts won't be different if a Microsoft team investigates them than they are when Mozilla and third parties do so for m.d.s.policy. But it's a hell of a coincidence...

I would note that for initial trust decisions Microsoft in particular does not follow m.d.s.policy. If you run Windows there's an excellent chance that your computer (and thus Internet Explorer, Edge and Chrome on that computer but not Firefox) trusts poorly run Certificate Authorities from a variety of organisations and countries which don't seem very trustworthy.

For example the governments of Sweden, Slovenia and Thailand.

[Edited: This used to mention Venezuela but the Venezuelan government CA was in fact distrusted by Microsoft]

Now maybe Microsoft's team carefully vetted all these dozens of Certificate Authorities that aren't trusted elsewhere and concluded they're doing a great job. In some cases we know they weren't able to satisfy Mozilla (or volunteers contributing to m.d.s.policy) but in other cases they never applied at all. Maybe they're just shy?

So far we can say this doesn't seem to have caused any serious reported problems. So maybe it's fine.


If Apple is the sleeping giant of PKI, Microsoft is the come-back kid. The actual set of CAs trusted by Microsoft has massively shrunk under the leadership of their new Root Program manager, and their transparency greatly improved. https://aka.ms/rootupdates shows a regular cadence, particularly on even months, of removing trust in a large number of CAs. While they still add CAs faster than any other program, they also have strong contractual guarantees on CAs in a way unlike that of Mozilla, Apple, or Google. And Microsoft is notoriously not afraid of using lawyers for noble causes.


[[Ryan Sleevi wrote the m.d.s.policy post this HN item is about]]

That link says the even-numbered-month changes are CA-led.

Now of course you certainly have much better insight than I do into what's behind those CA led changes because I'm just a Relying Party with their nose pressed against the window. Maybe that new Root Program manager is encouraging participants to clean stuff up with an implied threat that if they don't Microsoft will. But as an outsider it still looks a lot like the old Microsoft root programme to me. Also Microsoft's "revoke or else" rule still sits badly with me despite its purported use to prevent people scamming Microsoft's customers. But I guess I'm glad to hear you think they've "greatly improved".


"Within 7 days" counting from July 1?


Grab the popcorn


Yes


Wow https://crt.sh/?id=7846179 is also on the list, the certificate used by Dutch government to authenticate citizens. Yikes.


Here's a very quick way to check whether you may be affected: this will check every certificate in the chain to see if it matches a fingerprint on the to-be-revoked list.

https://ohdear.app/tools/certificate
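If you'd rather script the check yourself, the idea is simple: compute the SHA-256 fingerprint of each certificate in the chain and test it against the published list. A minimal Python sketch, stdlib only (the fingerprint set and the chain-gathering are left to you, and the helper names are mine, not part of any tool mentioned here):

```python
import hashlib
import ssl

def sha256_fingerprint(pem_cert: str) -> str:
    """Return the hex SHA-256 fingerprint of a PEM-encoded certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

def affected(chain_pems, revoked_fingerprints):
    """Return the certificates in the chain whose fingerprint is on the list."""
    return [pem for pem in chain_pems
            if sha256_fingerprint(pem) in revoked_fingerprints]
```

You would still need to obtain the full chain yourself (e.g. via `openssl s_client -showcerts`) and the fingerprint list from the m.d.s.policy post, assuming that list is published as SHA-256 fingerprints.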


There's some related technical background in this 2014 bug report on Firefox rejecting a TLS server certificate issued by such an intermediate CA with the problematic 'OCSP Signing' EKU: https://bugzilla.mozilla.org/show_bug.cgi?id=991209#c7

The problem with intermediate CAs with the 'OCSP Signing' EKU being able to sign OCSP responses for any sibling certificates issued by the root CA seems to have been recognized for a while. But in this case, Mozilla allowed and ignored the 'OCSP Signing' EKU for vendor interop with existing intermediate CAs. Mozilla products would reject such OCSP responses, but the CA policy on issuing intermediate CA certificates with the 'OCSP Signing' EKU was not fixed.

Specifically, MSADCS (Active Directory Certificate Services, the Microsoft software used to issue certificates) requires the CA certificate to have the 'OCSP Signing' EKU, "by design": https://support.microsoft.com/en-us/help/2962991/you-cannot-...

The situation seems to be that the MSADCS policy of requiring the 'OCSP Signing' EKU on the intermediate CA certificate is incorrect, and clients validating OCSP responses signed by a delegated OCSP responder certificate do not require the 'OCSP Signing' EKU on the issuing (intermediate/root) CA certificate(s): https://groups.google.com/forum/#!msg/mozilla.dev.security.p...

This is in contrast to e.g. the TLS serverAuth EKU, where these EKUs chain: a technically constrained CA certificate must have a TLS serverAuth EKU in order to issue end-entity certificates with a TLS serverAuth EKU.

The 'OCSP Signing' EKU on the intermediate CA certificate is unnecessary: the intermediate CA does not need the EKU to sign OCSP responses directly, nor to issue a delegated OCSP responder certificate with the 'OCSP Signing' EKU. The unnecessary 'OCSP Signing' EKU on the intermediate CA certificate is harmful, because it may be interpreted as a delegated OCSP responder certificate for the issuing root CA, capable of signing OCSP responses for any certificate issued by the root CA.
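One quick way to triage a suspect intermediate is to look for the id-kp-OCSPSigning OID (1.3.6.1.5.5.7.3.9) in its certificate. A proper check would parse the extendedKeyUsage extension (e.g. with `openssl x509 -text`); as a rough, hypothetical heuristic with no dependencies, you can scan the raw DER for the OID's encoding:

```python
# DER encoding of OID 1.3.6.1.5.5.7.3.9 (id-kp-OCSPSigning):
# tag 0x06 (OBJECT IDENTIFIER), length 8, then the encoded arcs
# (1*40 + 3 = 0x2b, followed by 6, 1, 5, 5, 7, 3, 9).
OCSP_SIGNING_OID = bytes.fromhex("06082b06010505070309")

def may_have_ocsp_signing_eku(der_cert: bytes) -> bool:
    """Rough heuristic: does the raw DER contain the id-kp-OCSPSigning OID?

    This can false-positive (the byte sequence could occur elsewhere) and
    does not check which extension the OID appears in; it is only a quick
    triage step before inspecting the certificate properly.
    """
    return OCSP_SIGNING_OID in der_cert
```

A hit here only tells you to look closer; the certificate is actually problematic when that EKU sits on a CA certificate that was never meant to be a delegated OCSP responder for its root.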

The workaround for MSADCS is to use an untrusted CA certificate (e.g. self-signed) with the same public key and subject DN as the intermediate CA to issue the OCSP responder certificate, adding the 'OCSP Signing' EKU to both the OCSP responder certificate and the workaround-CA. The resulting OCSP responder certificate with the 'OCSP Signing' EKU will validate as an OCSP responder certificate for the actual intermediate CA, without including the 'OCSP Signing' EKU in the CA certificate.

This BR policy violation approach seems to be an attempt to highlight the security problems and fix this long-standing issue, limiting the use of the 'OCSP Signing' EKU in order to protect clients from OCSP responses signed by intermediate CA certificates that were not intended to be delegated OCSP responder certificates for their root CAs.


Anyone remember two years ago when the Comodo CEO emailed hundreds of private keys?


That isn't actually what happened, but I don't doubt that's how you've remembered it.

Comodo at this point controlled the CA roots that had belonged to Symantec. Trustico, a Symantec reseller (same sort of relationship to Symantec that your local Ford dealer has to the Ford Motor Company) asked Comodo to mass-revoke thousands of certificates it had sold to third-party subscribers as a reseller for Symantec.

It's not clear what Trustico hoped to achieve by that, maybe they believed they could get back the cost of the certificates? We don't know the details of the (confidential) contract between Trustico and Symantec or to what extent the contract terms survived transfer to Comodo. Maybe Trustico just wanted to push its customers into new deals, because it was not a Comodo reseller and risked being frozen out.

Anyway, Jeremy Rowley, a Comodo VP asked for a reason to revoke these certificates, and by return he got thousands of private keys. Private Key compromise is a valid reason for revoking certificates, so Rowley confirmed the certificates matched these private keys and Comodo began revoking them.

Trustico are the people who had thousands of private keys. A CA is strictly prohibited from having your private keys (and as we saw, if they are shown them they should revoke your certificates) but of course whether a CA enforces this rule on its resellers (via contract terms) is a matter between the CA and reseller. The whole point of private keys is that they're private. So, in one sense Trustico's customers got what they deserved - do not give your private keys to some reseller or trust them to pick keys for you.

Nothing went wrong at Comodo here. And if you as a subscriber followed good practices you weren't affected either even if you'd bought certificates through Trustico. Only customers who'd gone with Trustico and done something inherently unsafe got burned.


You said Comodo throughout, but it was DigiCert :)


Doh! I can offer no explanation for this mistake.




