Even when Mozilla does fully remove stuff from their root store, in some cases it has taken distros a year+ to ship the updated version
Not to mention stuff like this: https://bugs.launchpad.net/ubuntu/+source/ca-certificates/+b..., where Ubuntu just unilaterally reverted Mozilla’s removal of a cert in their package, because it was breaking nuget… Note that this was early 2021 — Mozilla removed Symantec from their trust store in October 2018!
> where Ubuntu just unilaterally reverted Mozilla’s removal of a cert in their package, because it was breaking nuget… Note that this was early 2021 — Mozilla removed Symantec from their trust store in October 2018!
Debian and Ubuntu had jumped the gun by a few weeks and there were certificates still being used that had not been renewed yet, so we had to revert temporarily.
Mozilla had used the CKA_NSS_SERVER_DISTRUST_AFTER tag with a date to specify that certs issued by that CA after that date were not valid, but as the article above states, the crypto libraries used on Linux don't support that kind of thing.
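For illustration, the check that tag implies is roughly this — a minimal sketch using pyca/cryptography, where the cutoff table, issuer name and file name are made up for the example (the real data lives on the NSS trust-store side):

```python
# Sketch (not NSS itself): reject leaf certs from a "distrust after" CA
# once their notBefore passes the cutoff date. The cutoff table below is
# purely illustrative.
from datetime import datetime, timezone
from cryptography import x509

# Hypothetical map: issuer name (RFC 4514 string) -> distrust-after date
DISTRUST_AFTER = {
    "CN=Example Distrusted Root CA,O=Example": datetime(2022, 12, 1, tzinfo=timezone.utc),
}

def is_distrusted(leaf: x509.Certificate) -> bool:
    cutoff = DISTRUST_AFTER.get(leaf.issuer.rfc4514_string())
    if cutoff is None:
        return False
    # NSS compares the leaf's notBefore against the CA's distrust-after date.
    return leaf.not_valid_before_utc >= cutoff

with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
print("distrusted" if is_distrusted(cert) else "still trusted")
```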
For those who are interested in more details about this, I wrote a paper that examines the delay and trust discrepancies between Mozilla and its derivative root stores (e.g., Linux, NodeJS, etc.): https://zanema.com/papers/imc21_roots.pdf
The bug report quite clearly states that Ubuntu followed the same revert that Debian did: "The Debian ca-certificates package removed this CA for both TLS (expected) and other uses (like timestamping) (unexpected). Trust was added back in a subsequent update."
So no, Ubuntu didn't do anything wrong or unreasonable here.
>So yes shipping a CA known to have intentionally issued false certificates is very on-brand for them.
Did TrustCor turn out to have done that? The last time I checked in on that, the distrust was mainly founded on some not-very-trustworthy behavior involving spyware in a related company within the same corporate umbrella.
The removal of Symantec was a bit questionable as well. There is reason for strictness here, but I don't think continuing to use their certs a bit longer posed any relevant security threat.
TrustCor allegedly spread spyware, which I do think would make anyone untrustworthy. But the allegations should be checked to see whether they can be confirmed.
You’re 100% right. They were unable to convince the right folks that they were trustworthy, and they appear to be in a prime position to abuse misplaced trust due to some fundamental conflicts of interest, but “known to have intentionally issued false certificates” is a false accusation.
Their strong point has always been design and branding. I love their fonts and a few years ago they were the only ones with a patched libfreetype that didn't make your eyes bleed.
As for engineering decisions, let's just say it's better to stick with Debian on anything non-desktop.
The only issue with Debian is that if you want the latest version of some application, e.g. Firefox, then installing it is far from a nice experience. And Debian testing is quite unstable (I run it on my laptop, so I know).
Love it for my servers but it is not very convenient for the desktop.
I have been running Debian Stable on my (gaming/coding/general use) desktop and on my work laptop for close to a decade now, and I have had very few problems, things just work. The last problem of "too old system libraries" nature I remember was maybe 5 years ago, when Steam client did not work because of too old glibc, but one Debian release later the problem was gone.
I don't think it's as bad as most people think it is, nowadays.
There are other issues with Debian. They will radically rearrange upstream software to follow their own standards (e.g. try using Tomcat on Debian sometime). This is bad enough when they apply it to regular software, and downright insane when they do the same thing for security-critical software. It predictably caused quite possibly the worst general-purpose OS bug in history (their 2008 OpenSSL key-generation one). They did not change their policy in response to that incident and see nothing wrong.
Interestingly, this time around, Ubuntu actually removed TrustCor from its root store almost immediately. That technically means that some certificates that should still be valid are now invalid on Ubuntu.
There is no API to actually handle trust.
If you are asking `curl` or `openssl` to verify the chain, they need to read the trust store from somewhere. By convention, there is `OPENSSLDIR`, for example.
But that just tells us where the root certs are, not much more.
In the API, I load the root CAs and check validity of the cert, nothing beyond that.
And I can do that in multiple different TLS engines.
Given that there is no API for this, all the TLS engines need to re-implement this logic.
Hell, in most cases, even things like validity range checks are handled by the application logic.
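Concretely, the most you can usually express looks something like this — a sketch using Python's stdlib `ssl` (which wraps OpenSSL); the bundle path is an assumption, adjust to wherever your distro keeps it:

```python
# Sketch of the "load roots, verify chain, nothing more" model.
import socket, ssl

ctx = ssl.create_default_context(cafile="/etc/ssl/certs/ca-certificates.crt")
# All the context knows is a flat set of trust anchors plus hostname checking;
# there is nowhere to say "trust this CA, but only for certs issued before
# date X" or "only for these domains".
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["notAfter"])
```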
On Windows, there is the built-in TLS API `schannel`, which means you have a lot more structure and order here, including the ability to express those sorts of details.
There is another aspect, as well. You need to be able to express those issues as errors, otherwise the user is going to be left with a broken box and no idea what is going on.
The various TLS libraries already agree on at least one thing: what certificates look like on disk (e.g. PEM and DER formats). There's no reason why some committee couldn't sit down and standardize a format for ancillary data such as the trust date cutoffs mentioned here. Then the TLS libraries would have to implement it, and return appropriate errors from their validation functions. This could possibly be done in a way without needing applications to also be updated, though that depends on how each TLS library reports validation errors.
But even if a particular TLS library doesn't have a way to express these errors properly, I think it's better to have a user confused as to why they can't connect to a site, than silently connect to a site "protected" by a certificate that shouldn't be trusted.
Java actually solves the complaint in this thread, and has done for a long time. The JDK has its own root store program run by Sun and now Oracle, TLS APIs use the bundled root store automatically, it provides tools to manage that root store, exceptions are easily rendered to users in a generic way, and it provides a mix of high and low level APIs.
What you're probably thinking of is the JKS format. Java defined a way to represent a root store on disk that's Java specific, I think because there was no standard at the time. But it migrated to the PKCS#12 format in recent releases.
Java has the exact opposite of a solution to this, though their intentions were very admirable. PKCS#12 is a terrible solution, Java does its best to enforce it, and its move to adopt it further goes in the exact opposite direction from where everyone else has been moving.
What everyone's realized in recent years is that trust is context-dependent, and that you don't want to use the same roots of trust (or trust anchors, or CAs, though each of those terms has a slightly different meaning) everywhere, nor do you want to co-mingle them. It doesn't make sense to have one file that specifies "Here are all the CAs I trust, and here are rules for each of them"; it makes a lot more sense to have per-use-case configuration of "here are all the CAs for this use case", and to rely much more on code within the program to disambiguate. Having one file with all those CA certificates and your private keys is... backwards. Keytool and their custom format at least kept a clear distinction between "these are my private keys" and "these are keys I trust". PKCS#12 has no such distinction, and the tooling around it (particularly Java's!) makes it hard to disambiguate safebags, so you're left with one giant mess.
It's actually the same problem as Global Variables[1]: security and trust isn't one object, it's many. One program may act as itself to many providers, but it might also act as three or four different roles within its scope to the same provider.
Well, but the default of having a bag of certs with a trust store program does make sense - apps want to just be able to talk to random web servers in a secure manner and that's the only way to get that.
In many use cases you could use a store with only a self-signed certificate. That works, the APIs make it straightforward enough. It has many advantages when possible, e.g. you can set the expiry time to so far in the future it never expires, you don't have to muck about with LetsEncrypt and CAs. You do have to control the client and the server. If your frontend is a web app that's a problem. If it's a mobile or desktop app that's easy. Control over the client has other advantages too like not needing HTTP load balancers anymore (in many cases), reducing costs.
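That pattern is short enough to sketch — assuming a host named internal.example and a `server.pem` holding the self-signed cert the server presents:

```python
# Sketch: the trust store contains only our own self-signed certificate.
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="server.pem")  # the one and only trust anchor
ctx.check_hostname = True                       # still verify the name matches

with socket.create_connection(("internal.example", 8443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="internal.example") as tls:
        tls.sendall(b"ping")
```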
That is one bag of trust, yes: "the web CA pool". (There's not just one, either; at least four are in common use in America: Google/Chrome, Mozilla, Microsoft, and Apple, with several more besides.)
But most modern apps don't just connect to the internet, they connect to internal stuff like databases using TLS as well. It's actually a valid strategy to just use the web trust pool for that as well - I personally use it in places, with the certificate managed by CertBot reused for the webserver and the postgres server.
It's not a solution for the modern enterprise, where mutual TLS is sometimes a requirement and is attractive if not. You will want to have at least one non-web CA pool (Trust Bundle) for your enterprise, and sometimes several more for other departments, or other companies you collaborate with.
Unfortunately (in the case of TRUST stores) it uses PKCS#12 in a non-standard way. You cannot, for example, use `openssl pkcs12` to create a p12 store Java can read. Java expects some Oracle-specific non-standard attributes on the bag.
Java has always been its own separate world. For instance, recently we had daylight saving issues on a server (we no longer have daylight savings time, but the Java application thought it had just started). The server did have an updated timezone database with the correct rules... but Java uses a separate timezone database, which hadn't been updated.
Yeah, this confused me when working with Java, because some of the tools silently switched to generating PKCS#12 keystores with the same options that used to generate their proprietary format.
AFAIK those are standards for allowing CAs to revoke certificates. Trust conditions for root stores are a completely different thing.
Or at least, I'm certainly not aware of any mechanism in the CRL standard to "partially revoke" a CA certificate based on complex conditions like "What's the notBefore date on the leaf certificate we're currently validating?".
Ah I see, you're right. I guess they didn't contemplate the scenario of "we want to distrust this CA - against its wishes - but we still trust it enough to make it an orderly transition instead of an emergency". I guess it was previously assumed that CA trust would be binary and that CAs would be cooperative - seems like a pretty common evolution that internet standards go through.
The problem is, code that handles these things is safety critical and requires real and sustained investment to be reliable. Given this overhead (and the gaps in CRL/OCSP support that I mentioned), it's unclear if this use case will be common enough to commit to supporting.
Yeah, TFA seems to make a good argument for a libca-certificates, or so.
> There is another aspect, as well. You need to be able to express those issues as errors, otherwise the user is going to be left with a broken box and no idea what is going on.
TLS libraries are terrible at this. Part of it is C: the "integer is the only error type you'll ever need" cannot convey the necessary context, such as which cert is the problem.
Having a library might also help with path building bugs. I've seen bugs in both OpenSSL and GnuTLS in building a correct path, and both just from when the ISRG cross-sign expired.
… and maybe a more complicated data structure in /etc/certs (or whatever the path is) will keep prying vendors out. I've seen a lot of people make out of band changes there, and I would suspect those qualify as "undefined behavior", but OpenSSL's docs don't seem to answer the question of "what happens if vendors do random stuff?" (Like drop non-CA certs into the list of CAs, or don't update the symlinks that seem to form an index…)
> TLS libraries are terrible at this. Part of it is C: the "integer is the only error type you'll ever need" cannot convey the necessary context, such as which cert is the problem.
Not really; I mean, sure, the validation function will probably rely on returning an integer error code, but there's no reason that the API couldn't also provide a function to retrieve the certificate chain at hand, where an application could walk the chain backwards to the root and figure out the last one (er, or first one, depending on how you look at it) that isn't trusted.
Whether an application would choose to go to the trouble to make use of these APIs is another matter, of course. At any rate, from perspective of the average user, TLS errors are already not so easy to understand, so giving detailed information in this case might not be all that helpful anyway. Just a simple "we don't trust this website, so you shouldn't either" is about as much as most users would understand, probably.
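For what it's worth, the kind of thing a richer verification API could hand back might look like this — purely hypothetical, not any existing library's API:

```python
# Hypothetical sketch of a structured verification error, instead of a
# bare integer code. Names and fields are invented for illustration.
from dataclasses import dataclass
from enum import Enum, auto

class TrustFailure(Enum):
    EXPIRED = auto()
    UNTRUSTED_ROOT = auto()
    DISTRUSTED_AFTER_CUTOFF = auto()   # "CA no longer trusted for new certs"
    HOSTNAME_MISMATCH = auto()

@dataclass
class VerificationError(Exception):
    reason: TrustFailure
    depth: int            # which certificate in the chain failed (0 = leaf)
    subject: str          # subject of the offending certificate

    def __str__(self) -> str:
        return f"{self.reason.name} at depth {self.depth}: {self.subject}"

# A caller could then render something user-comprehensible:
try:
    raise VerificationError(TrustFailure.DISTRUSTED_AFTER_CUTOFF, 2,
                            "CN=Some Removed Root CA")
except VerificationError as err:
    print(f"We don't trust this site: {err}")
```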
We might need something more - but I wonder if trust stores could be modified so that:
- every device get a "host" CA (like a ssh key)
- this CA is the only one trusted
- in turn, this CA signs the issuer/top-level CA (or even cross-signs the intermediate certs directly) (e.g. with -new/-force_pubkey)
Additional logic could be applied in the signing step - say, rules to set the validity, or rules to sign/not sign. One might set a 48h lifetime and run a cron job every night - allowing for "dynamic" "revocation" (cert expiry).
Not sure if this would work out of the box though - I have not looked that deep into ca and ca trust.
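Something like the cross-signing step above could be sketched with pyca/cryptography — file names and the host CA key are assumptions, and this only mirrors the -new/-force_pubkey trick mentioned above, not a full design:

```python
# Sketch: re-issue an upstream CA's certificate under a local "host CA"
# with a 48-hour lifetime, so trust lapses simply by not re-signing.
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

with open("host-ca-key.pem", "rb") as f:
    host_key = serialization.load_pem_private_key(f.read(), password=None)
with open("host-ca.pem", "rb") as f:
    host_ca = x509.load_pem_x509_certificate(f.read())
with open("upstream-ca.pem", "rb") as f:
    upstream = x509.load_pem_x509_certificate(f.read())

now = datetime.now(timezone.utc)
cross = (
    x509.CertificateBuilder()
    .subject_name(upstream.subject)              # same subject...
    .public_key(upstream.public_key())           # ...and same public key
    .issuer_name(host_ca.subject)                # but issued by the host CA
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(hours=48))  # short-lived by design
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(host_key, hashes.SHA256())
)
print(cross.public_bytes(serialization.Encoding.PEM).decode())
```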
Certificates advertise if they're meant to be roots or not and libraries enforce that via things like the path length constraints, so it wouldn't work. But it also just seems kind of complicated - how is that any easier than just updating the root store in a cron job, which is already done via normal package maintenance routines anyway?
The reason the Linux root stores don't have a notion of 'trust until' like the article wants is simple - nothing stops an untrusted CA just issuing certificates that claim to have been issued before the cutoff, so it seems pointless. Browsers have ways to tackle that like Certificate Transparency or crawling the web to try and identify every extant cert, but most apps don't use that.
Linux root stores could piggyback on the browsers' efforts here though. The attack scenario you described was sensible in the past, so it didn't make sense for non-browser clients to go ahead with "trust until".
However, now CAs can't really do this anymore, because - as you say - they'd risk immediate exclusion from browsers if it's detected via CT analysis. So Linux distros can actually benefit from CT and the browsers' influence in the CA space in an indirect manner.
There is definitely a power imbalance though. E.g., even if a distro implemented "trust until", they could not realistically make their own rules that are stricter than what browsers do: CAs could backdate certs so they get accepted by the distro, but if browsers consider the CA fully trusted, they might not care about the backdated certificates.
I think it's more that Linux root stores date from an era when everyone approached CA trust as a binary thing (even browsers), and there has never been enough pressure and coordination to evolve them into a more complex system, unlike browsers. My memory is that browsers added conditional distrust and conditional limits on CAs and various similar things when they became convinced that it would be too bad of a user experience to simply remove CAs but also too dangerous to retain them in fully empowered form. Having conditional distrust also gave browsers more power over CAs, because now browsers had more options for dealing with marginal but (semi-)popular ones.
If you can name a set of trust anchors (e.g., with an environment variable, in a configuration parameter, in a command-line argument, as a function/method parameter), then you've got the power to specify what to trust contextually. The problem is that a) apps usually don't give you such a control, and b) the names of these sets of trust anchors have to be meaningful to users.
When I renewed my Let's Encrypt certificate the last time, I noticed that my Postfix was still using the certificate that had expired 3 months later. I don't receive a whole lot of email on that machine, but I did not notice any delivery failures.
I started to think (but not check specs): Is it even defined what domain name should be checked? MX records can be different from server names. Maybe certificate checking is just not defined for SMTP over TLS?
Considering you basically can only get a certificate for a domain name that refers to a host (and offering an IP address for an unqualified domain name is an anomaly) I’d expect the required subject to be the hostname. But the RFC does not specify it:
> The decision of whether or not to believe the authenticity of the other party in a TLS negotiation is a local matter. However, some general rules for the decisions are:
> - A SMTP client would probably only want to authenticate an SMTP server whose server certificate has a domain name that is the domain name that the client thought it was connecting to.
However, in the case of Postfix there's also the matter of connections to services like an LDAP server to check things. In that case it'll also, by default, happily connect to an ldaps service with a self-signed, expired certificate.
Thanks for the reference. But this does not seem to make sense. Email delivery is about interoperability. How can that be achieved if parties make their local decisions?
P.S. As you probably guessed in my previous comment "3 months later" should of course have been "3 months earlier". Sorry about the mistake.
It’s delivering email; there is no way for the receiving site to state that a certificate is required anyway. Traditionally it’s just a plaintext connection on port 25, and using and supporting STARTTLS is optional.
I suppose you could write up a way to specify this in DNS but it’d take decades to be implemented and you would have to deal with pushback from the snoops who I’m sure don’t mind they can peep into the email you receive.
I guess the support of TLS was added for confidentiality and integrity on untrusted networks. If you don't verify the certificate a MITM attack is possible. So what is the remaining benefit of using TLS?
Note that if you don't trust the CA, you shouldn't trust the issue date either. A dishonest CA would backdate any certificates signed.
So having an arbitrary cutoff date in software seems unnecessary. If you still trust the CA to behave honestly for now, then you can simply instruct them not to issue any more certificates.
If you don't trust the CA to act honestly for now, then you need to remove them from the trust store entirely (and maybe use a whitelist of previously issued certificates if you believe they were trustworthy in the past).
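That whitelist could be as simple as pinning fingerprints — a sketch with pyca/cryptography, with placeholder fingerprint values:

```python
# Sketch: only accept certs whose SHA-256 fingerprint is on a known-good list
# built while the CA was still trusted.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

KNOWN_GOOD_SHA256 = {
    # placeholders: fill with fingerprints of certs issued while the CA was trusted
    "replace-with-real-fingerprint-1",
    "replace-with-real-fingerprint-2",
}

def is_allowlisted(cert: x509.Certificate) -> bool:
    return cert.fingerprint(hashes.SHA256()).hex() in KNOWN_GOOD_SHA256

with open("leaf.pem", "rb") as f:
    leaf = x509.load_pem_x509_certificate(f.read())
print("pinned" if is_allowlisted(leaf) else "not pinned")
```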
> Note that if you don't trust the CA, you shouldn't trust the issue date either. A dishonest CA would backdate any certificates signed.
Dare I say it, but isn't this the exact problem solved by a chain of published certificate hashes?
This is pretty much how Certificate Transparency (CT) logs work too. It's like a blockchain without any proof of work or stake: just a Merkle tree of hashes that ensures you cannot retroactively insert an entry with an older date.
> ... (and maybe use a whitelist of previously issued certificates if you believe they were trustworthy in the past).
Which I guess would mean only trusting CT-logged certificates up until the last point of trust.
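For the curious, the Merkle tree hash CT logs use (RFC 6962, section 2.1) is small enough to sketch; this toy version just shows that appending or altering entries moves the tree head — consistency proofs between successive tree heads are what make retroactive insertion detectable:

```python
# Toy sketch of the RFC 6962 Merkle tree hash.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_tree_hash(leaves: list[bytes]) -> bytes:
    if not leaves:
        return sha256(b"")
    if len(leaves) == 1:
        return sha256(b"\x00" + leaves[0])          # leaf hash
    k = 1
    while k * 2 < len(leaves):                      # largest power of two < n
        k *= 2
    left = merkle_tree_hash(leaves[:k])
    right = merkle_tree_hash(leaves[k:])
    return sha256(b"\x01" + left + right)           # interior node hash

log = [b"cert-1", b"cert-2", b"cert-3"]
head_before = merkle_tree_hash(log)
log.append(b"cert-4")
assert merkle_tree_hash(log) != head_before         # the tree head moved
```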
You're not describing a blockchain here, but rather a Merkle tree (also called a hash tree). Or (because the tree-like behavior is actually undesirable here) just an array of hashes.
Just because something has a set of hashes in a roughly linear timeline doesn't make it a blockchain. For example, git isn't a blockchain either.
Clearly you disagree on the definition of the word “blockchain”. Let's consult Wikipedia, a trustworthy source on anything [1].
> A blockchain is a type of distributed ledger technology (DLT) that consists of growing lists of records, called blocks, that are securely linked together using cryptography. [...] Since each block contains information about the previous block, they effectively form a chain, with each additional block linking to the ones before it. Consequently, blockchain transactions are irreversible in that, once they are recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
So far this is exactly what's going on in the Certificate Transparency scheme.
> Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks.
This part deviates from CT (no validation is going on in real time, browsers usually just have a few hardcoded CT logs and a custom rule on when a certificate is trusted). But note the word “typically”: this doesn't mean CT can't be called a blockchain just because it doesn't use PoW, PoC or any other consensus algorithms conventional in the cryptocurrency world.
Calling Git a blockchain is arguably more of a stretch because the purpose of keeping the old records intact is not what we use Git for. Maybe you can build blockchain on Git though? (edit: OF COURSE YOU CAN https://github.com/CouleeApps/git-power)
The whole CA model is broken by the OS and browser "every CA is trusted for every use case across any TLD or IP range" without any sort of context of scope except dates. Why is it every single one of the 30+ CA roots that Mozilla or Google trusts can issue .com domain certs, not just ones for their local TLDs or local IP ranges which would dramatically limit the blast radius for a breach? Do you really trust that some of these CAs aren't issuing sketchy certificates to their local intelligence agencies, as one of the Middle Eastern CAs was caught doing? To pick on Chrome since they're the biggest, they've got Turkish and Greek, Chinese and Hong Kong and Taiwanese CAs in there, how do you think people in those countries feel about trusting TLS certs from the other sides of disputes?
The high bar to establish a trusted CA root in terms of cybersecurity capital investments and audit requirements means there's very few developing country root CAs. What if LacNIC or AfriNIC could run their own CA roots, but they were scoped only to work on IPv4 or IPv6 blocks they managed?
Meanwhile, in a corporate enterprise use case, how about being able to trust a CA only to issue certs for my dev and test .local domain, possibly with a scoped range of IPs? At that point, I don't care if that internal CA gets compromised and an attacker issues fake .com certs, my OSes and browsers I installed the CA on would know they're only valid for foobarlab.local on 10.10.x.x - 10.100.x.x or some IPv6 equivalent.
X.509 technically does support an extension in CA certificates called Name Constraints, which allows them to be restricted to issuing certificates within a specific set of names. Historically this feature has not been well supported, though it seems the browsers have added support more recently.
I agree wholeheartedly that this feature should be used more widely to restrict CAs where practical, obviously limiting government CAs to their respective ccTLD(s) seems like an easy one. Personally I'd also like to see this extended to allow for a domain owner to get a private CA certificate issued for their domain(s) which can then be used to issue individual certificates within that/those domain(s) as a more secure alternative to wildcard certificates.
There is no substantial technical reason this couldn't be done, just a lot of older software that wouldn't understand the restrictions and could either reject the certs entirely or consider them valid even if they shouldn't be.
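For example, minting a name-constrained private CA is straightforward with modern tooling — a sketch with pyca/cryptography, where the names and key choice are placeholders:

```python
# Sketch: a private CA certificate constrained to one domain via the
# X.509 Name Constraints extension.
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Internal CA")])
now = datetime.now(timezone.utc)

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed for the sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # Per RFC 5280, a DNS constraint of "example.com" covers the domain
    # and all of its subdomains.
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("example.com")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)
```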
There are definitely limitations in GNU/Linux root stores. E.g., to my knowledge it's impossible to distinguish between a root coming from the distro and one coming from a custom configuration, e.g. for corporate MITM proxies. So you can't require SCTs to be present for the former but not the latter, like Chrome does.
Also, is this additional information even published by Mozilla in a standardized format, or is it just put somewhere in the Firefox source code?
Also note that, given how Firefox doesn't implement SCT requirements, TrustCor could also just backdate certificates. So Firefox has similar vulnerability properties to users of distros that haven't removed TrustCor as a CA.
So ideally you would also make it possible to distinguish between OS-provided and custom-configured certificates, to allow libraries to check for SCTs at all. Otherwise you haven't gained much from the additional information limiting a CA's ability to issue new certificates.
> Certificates signed by TrustCor that were issued before December 1st will still be trusted (for now); certificates issued on December 1st or later will not be.
How does this work? If TrustCor is no longer trusted, what keeps them from creating certificates which claim to be issued before December 1st, even after that date?
> If there is reason to believe that the CA has mis-used certificates or the CA backdates certificates to bypass the distrust-after settings, then remove the root certificates from Mozilla’s root store in an expedited timeline, without waiting for the end-entity certificates to expire.
Right now, they're being slowly removed for poor behaviour in general, but there's no direct evidence of abuse of CA powers. If any clear evidence of that appears in future, including backdating certificates, then they'll be completely removed from the trust store immediately.
If you don't then the certificates usually aren't usable at all. All modern browsers should reject certs from any root CA if the cert isn't correctly included in a CT log.
Chrome and Safari require that TLS certificates include cryptographic promises of future log inclusion ('SCTs') from N trusted CT logs. As far as I know, neither of them actually contact the log's API endpoints to make sure that this has gone through, but in practice IMHO it's not much of a security gap for various reasons.
The SCT is a promise of the log to include the certificate (or pre-certificate, which is used for embedded SCTs) within a time window. The only way a cert could have a valid embedded SCT is to have actually sent the pre-certificate to the log in question.
The SCT contains a signature over some log related information (which is also included in the SCT itself) and everything in the pre-certificate except the signature and poison (which means everything in the real cert, except the SCTs and signature). This means the browser can reconstruct the signed data and verify the signature.
Thus the only way to have a valid SCT and not have at least the pre-certificate show up in the log (after the merge delay) is if the log operator/software messed up. For transparency purposes, a pre-certificate is basically as good as a full certificate, although if I recall correctly the CAs are also supposed to submit the full certificates too.
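Reading the embedded SCTs back out of a cert is easy enough — a sketch with pyca/cryptography; which logs and how many SCTs to require (as Chrome/Safari do) is policy you'd still have to layer on top, and the file name is an assumption:

```python
# Sketch: list the embedded SCTs in a certificate.
from cryptography import x509

with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    scts = cert.extensions.get_extension_for_class(
        x509.PrecertificateSignedCertificateTimestamps
    ).value
except x509.ExtensionNotFound:
    scts = []

for sct in scts:
    # log_id identifies the CT log; timestamp is when the log promised inclusion
    print(sct.log_id.hex(), sct.timestamp.isoformat())
```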
IMO the correct solution would be one based on DNS. (E.g. DNSSEC+DANE or something similar.) Right now (unless we bring back extended validation) the entire purpose of certs on the web is to tell browsers what private key is associated with a domain. It's silly to involve completely unrelated third parties in that process when you could just get the information from the authoritative source (DNS) directly.
No, it's not. One of the important benefits you get from this layer of indirection is revocability of trust. CAs can be removed from browsers (Google and Mozilla have both removed some of the largest CAs), and DNS roots cannot. Browsers and root store operators can pressure CAs to adopt safeguards like Certificate Transparency --- they simply won't trust CAs that don't. There is no DANE Transparency in part because nobody can pressure TLD operators to adopt it.
There are a whole variety of other problems with a DNS PKI, and with DNSSEC in particular, but when we're talking about issues like this thread, revocability is the big thing.
You raise some good points. There are definitely some benefits to having a diversity of different trust anchors rather than just one. The problem though is that the way things are currently set up that diversity is somewhat of an illusion; DNS is still a central point of failure for all CAs (e.g. via the ACME DNS challenge), and each individual CA is also a single point of failure unto itself (a CA can issue certs for any domain, not just those of its customers).
I also think you may be underestimating the amount of leverage browser developers have over DNS, should they choose to exercise it. If Google, Apple, Microsoft, and Mozilla all agreed tomorrow that they wanted to migrate the web to a new set of DNS root servers they could probably do it. (Though yes, it would be a huge ordeal for everyone involved.) The real reason we haven't gotten DANE transparency (or anything similar) is simply because it hasn't been a priority for browsers. (Why pressure DNS roots to improve a system that you aren't even using in the first place?)
Still, I can't deny what you're saying. The CA system is probably a lot more flexible than a naively designed DNS-based system would be. I'll have to think about that a bit more. Maybe it would make sense to have something like SCTs in the certificate transparency system, where a number of different independently operated organizations all have to sign off on the validity of each cert _in addition_ to the DNS operator that actually issued the certificate. That way you'd get the benefits of diversified trust anchors without the downside of having hundreds of distributed single points of failure.
The browsers absolutely cannot migrate to a different set of DNS roots. It's only been in the last 5-10 years that they've managed to get as much control over the root programs as they have now, and there was a lot of behind-the-scenes drama involved in that. And TLS certificates are, when you come right down to it, a browser feature.
That was just a hypothetical example to illustrate how much power browser vendors have over the web; they wouldn't need to literally migrate to a different DNS root in order to effect change in this space, but I maintain they absolutely could go that far (again, hypothetically) if they all agreed and there were a sufficiently compelling reason.
Browsers always had full control over CA root programs, it's just a question of how willing they were to actually assert themselves in that fashion. I agree they've become more bold about that over the years, but what Mozilla just did to TrustCor was always an option for the major browser vendors, both legally and technically (or at least since automatic updates became a thing). DNS is also effectively a browser feature these days, as evidenced by all the browsers suddenly supporting things like DNS-over-HTTPS despite host operating systems lacking support. Given a sufficiently smooth transition path for users and website operators, browser vendors could collectively decide to alter their DNS implementations in pretty much any way they want.
I don't think this is really true at all, and I think it underestimates (significantly) the amount of behind-the-scenes work that went into the current WebPKI situation with activist root programs. I don't think there's any reason at all to believe that browsers would have similar success governing a DNS PKI, and there are specific reasons --- evidence, even --- to believe they wouldn't. We can go round and round on this stuff, but I feel like I'm repeating myself at this point.
Like I said, I agree with you that a naive, purely DNS-based PKI would be less flexible in that regard. That was a good point, and well noted.
However, the current status quo is sort of ignoring the big DNS-shaped elephant in the room. You can build all the validation and transparency solutions you want on top of the CA system, but it's still fundamentally dependent on the security of a DNS system that currently requires no cryptographic assurances that the records the CAs are validating against are actually correct.
> If Google, Apple, Microsoft, and Mozilla all agreed tomorrow that they wanted to migrate the web to a new set of DNS root servers they could probably do it.
Yes, there is. But all certificate issuance is cryptographically logged, and DNSSEC signatures aren't. You can spot and react to misissuance of certificates.
If only Kerberos had been adopted as an alternative to TLS for securing host-to-host traffic... things could have been so different (at least for the purposes of an organisation wanting to enable secure communications between its own clients and servers)
The whole CA model is broken by the OS and browser model of "Every CA is trusted for every use case across any TLD or IP range" without any sort of context of scope except dates. Do you REALLY trust that some of these CAs aren't issuing sketchy certificates to their local intelligence agencies, as one of the Middle Eastern CAs was caught doing? Why is it every single one of the 30+ CAs that Mozilla or Google trusts can issue .com domain certs, not just ones for their local TLDs or local IP ranges, which would dramatically limit the blast radius for a breach? To pick on Chrome since they're the biggest, they've got Turkish and Greek, Chinese and Hong Kong and Taiwanese CAs in there; how do you think people in those countries feel about trusting TLS certs from the other side?
The high bar to establish a trusted CA in terms of cybersecurity capital investments and audit requirements means there's very few developing country CAs. What if LacNIC or AfriNIC could run their own CAs, but they were scoped only to work on IPv4 or IPv6 blocks they managed?
Meanwhile, in a corporate enterprise use case, how about being able to trust a CA only to issue certs for my dev and test .local domain, possibly with a scoped range of IPs? At that point, I don't care if that internal CA gets breached and an attacker issues fake .com certs, my OSes and browsers I installed the CA on would know they're only valid for foobarlab.local on 10.10.x.x - 10.100.x.x or some IPv6 equivalent.
> Do you REALLY trust that some of these CAs aren't issuing sketchy certificates to their local intelligence agencies, as one of the Middle Eastern CAs was caught doing?
A decade ago, probably. Today, with CT logging being mandatory in most browsers? Much less likely.
A CA can set the issuance date of a certificate to any date it likes, bypassing the "trusted until" mechanism. Such a CA can still issue any certificate it likes, pass browser checks and do MitM. In this way you still trust them to sign the correct issue date while you want to distrust them. Isn't Mozilla's mechanism kind of weird by being not "too simple"?
They can issue any cert they like, but these days browsers also require that certs are written to certificate transparency logs to be trusted, and if they did that it would be detected immediately.
I don't think curl and openssl directly check CT, though...
This is so obvious and so "acceptable" even in security circles that I have thought for a while that it is a tacit acknowledgement that state actor attacks are "OK".
Trusted CAs should be one of the most scrutinized and controversial aspects of system configuration, and OSes should support a variety of trust models.
I've run this by a few security "experts" and the response has always been that it's a UX issue and the goal is to have websites load correctly without the user seeing and complaining about opaque security errors.
It's quite absurd. The rewards for compromising a single CA are so great that surely most state actors have succeeded in doing it at least once.
By "Linux" it really means any SSL lib that just uses system's "a dir with a bunch of CAs" approach.
That approach is nice for ops (you don't have to worry about commands to add/remove certs, just drop files into the dir) and relatively performant (just one lookup of the cert's subject-hash file name in modern distros).
I think the simplest option would be just adding a metadata file with a bunch of conditions?
While we're at it, allow certs to be imported only for certain domains, so ops can, for example, import a partner's internal CA but limit it to only that partner's domains.
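Something as simple as a sidecar metadata file next to the cert could express that — the format, path and field names below are purely hypothetical, not anything a distro ships today:

```python
# Hypothetical sketch: per-CA metadata file in the trust dir, plus the
# check a library could apply before accepting a chain anchored at that CA.
import fnmatch
import json

# e.g. /etc/ssl/certs/partner-ca.meta.json (made-up path and schema):
#   {"allowed_domains": ["*.partner.example"], "distrust_after": "2026-01-01"}
with open("/etc/ssl/certs/partner-ca.meta.json") as f:
    meta = json.load(f)

def hostname_allowed(hostname: str) -> bool:
    patterns = meta.get("allowed_domains", ["*"])
    return any(fnmatch.fnmatch(hostname, pat) for pat in patterns)

print(hostname_allowed("api.partner.example"))   # True
print(hostname_allowed("www.example.com"))       # False
```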
I hate to say it but... I want a systemd-certd that TLS libraries can call into saying 'verify this please' and then the logic for verifying trust path, validity, revocation status etc can be done in one place, consistently and correctly.
The issue here seems to be that Mozilla/Google/Microsoft are the ultimate trust authorities for TLS, but that is not reflected in how things are structured. If Mozilla/Google/Microsoft certified the current root trust authorities, then an OS could just download the whole mess from them and check its validity on its own, taking into account any revocations.
The problem is very broad though. The CA/B Forum is very effective for what it does, but equivalents for other uses are less so.
Most places crib and slightly rework the Mozilla CA pack with highly variable levels of maintenance and accuracy.
macOS and Windows have richer models, but integrations are often poor. For many years Homebrew had code that seeded an OpenSSL-compatible PEM with explicitly untrusted certs on macOS.
We could do with something as effective as CA/B that covers more use cases. Along with it should come some amount of reference implementation material and regularly updated test vectors.
Essentially we need the client side of let’s encrypt, and it can’t be let’s encrypt that does it.
Yesterday I removed that certificate from all my Linux systems. So far I have not seen any impact on connectivity for any applications or websites I use.
And even if there were, all I'd get is a "this site is not trusted" error that I can manually override if required.
I also have a problem with Debian's method of compiling its list of CAs.
The ca-certificates package rolls in Java and Mongo CAs as well as Mozilla's.
Their tooling is also not very clear about how the ca-certs are assembled from 32 different subdirectories.
A (ahem) short write-up that also includes suggestions for improvements. A Debian bug has been filed, but the Debian maintainer does not seem interested in moving this along.
Any "consistent systemwide policy" is a too-simple view of trust. There are almost never two different applications that should trust the same authorities.
There's been a ton of discussion on what constitutes "trust" in the modern web world of late, but very little of it seems to have borne fruit. One minor advance is Kubernetes formally adopting a proposal for a trust anchor[1][2]; which is to say, a set of CAs that are equivalent.
Someone mentioned earlier that "at least we all agree that they're PEM and DER formatted", to which someone replied that Java has its own world, and ugh... that's a whole thing. Java has moved backwards in recent years; their old keystore format was useful for a single context, in that you could say "Here's the keystore for communicating with Google; it's got a list of public keys to trust and a list of private keys that we are", and that more or less worked with the tooling. The move to PKCS#12 moves to one unified bag with all your private and public keys mixed in and you have to specify which are which, which is theoretically better but in practice represents a configuration nightmare compared to just specifying separate files with the trusted CAs, our public keys, and our private keys. You still can't deliver a single PKCS#12 file to all of your servers as a whole, because SafeBags aren't safe; they're DES-encrypted, which is to say not really encrypted at all[3], but at least it's clear that they aren't.
The right answer is basically what Cloudflare[4] and then Google Cloud[5] tried to do and failed: Create a single file-format for "Public Key and accompanying Private Key", and make it easy to extract the public key from many of those to form a list of CAs.
How would I know if I should trust $THIS_CA? That's why it's good that the browser vendors do this, they know better than me which ones are trustworthy.
The choice of what CA certificates to "trust" should, optionally, in some circumstances, be the computer owner's decision. For example, when the issue with TrustCor was made public, Android users could disable the TrustCor certificates. Whether that actually works, I cannot say. The certificates are probably still there, not removed, not deleted. Same goes for disabling certificates within the browser. This is likely an illusion of owner control. That a system of "trust" is provided by a company that many do not trust is comical by any objective measure.
Personally I rely on a TLS forward proxy that uses the certificates in /etc/ssl/certs, as represented by ca-certificates.crt. Thus, I can remove certificates, regenerate ca-certificates.crt and effectively deny certificates that the advertising-supported browser company chooses to accept.
Web browsers from advertising companies, or organisations that subsist on advertising company profits, like Mozilla, ignore /etc/ssl/certs. They do not allow computer owners to remove and delete CA certificates. They refuse to allow the computer owner to add their own choices for CA certificates, e.g. certificates generated by the owners themselves, without forcing annoyances on the owner to try to discourage such owner control.
The "CA system" should be accountable to computer users, not advertising companies, or organisations that depend on advertising company profits, not "tech" companies. As it stands, computer users are locked out of the decision-making process with respect to CA certificates. The CA system has become a tool of "tech" companies that seek to commercialise every aspect of the internet. The fact is, much of the data "protected" by TLS is data belonging to or about computer users that is being sent to "tech" companies without the user's informed consent. In effect, TLS is used to stop the computer owner from seeing what data is leaving their computers and networks.
We need a CA system that is governed by computer users, not "tech" companies. All computer owners should not be discouraged from trusting themselves. The CA certificates computer owners generate should not be discriminated against in favour of ones approved by "tech" companies. Someone is no doubt going to reply to this with something about LetsEncrypt. But that is not letting users trust themselves. It requires users to rent domain names and ask to be trusted by LE. It is part of the existing CA system where users must (cf. may) let third parties decide who is trustworthy.
> Not to mention stuff like this: https://bugs.launchpad.net/ubuntu/+source/ca-certificates/+b..., where Ubuntu just unilaterally reverted Mozilla’s removal of a cert in their package, because it was breaking nuget… Note that this was early 2021 — Mozilla removed Symantec from their trust store in October 2018!
In general it just seems like a bit of a mess.