There is just one thing missing from this. Name Constraints.
This doesn't get brought up enough, but a Name Constraint on a root cert lets you limit what names the root can sign for. So instead of the cert being able to impersonate any website on the internet, you ratchet it down to just the domain (or single website) that you want to sign for.
Browser support for it is pretty new, which is why it's so often missed. It only happened in mid/late 2023.
I've been shopping a talk since then about how to set up a name-constrained root certificate and what it should look like. It's still hard! CFSSL is my go-to tool, and it doesn't have support; I had to fork it to make it work. OpenSSL has support, but its configuration is like all OpenSSL configuration: poorly documented and nonstandard, mixing INI objects and object-refs.
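For anyone who wants to try anyway, here is a minimal sketch of the OpenSSL side (the section names and domain are placeholders); it shows exactly that INI-object/object-ref mix:

    # openssl.cnf fragment
    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE
    keyUsage         = critical, keyCertSign, cRLSign
    # critical, so validators that don't understand it fail closed
    nameConstraints  = critical, @nc

    [ nc ]
    # only names under this domain may appear in issued certs
    permitted;DNS.0 = internal.example.com

Then self-sign the root with something like:

    openssl req -x509 -new -key root.key -sha256 -days 3650 \
        -subj "/CN=Example Constrained Root" \
        -config openssl.cnf -extensions v3_constrained_ca -out root.crt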
There is mainstream browser support for name constraints now?! That is huge. I had already given up hoping for adoption progress; it was one of my major gripes regarding web stagnation.
> The name constraints extension, which MUST be used only in a CA certificate, indicates a name space within which all subject names in subsequent certificates in a certification path MUST be located. Restrictions apply to the subject distinguished name and apply to subject alternative names. Restrictions apply only when the specified name form is present. If no name of the type is in the certificate, the certificate is acceptable.
> Name constraints are not applied to self-issued certificates (unless the certificate is the final certificate in the path). (This could prevent CAs that use name constraints from employing self-issued certificates to implement key rollover.)
If this is now finally supported that's great. The issue was that for it to be useful it has to be marked critical / fail-closed, because a CA with ignored name constraint == an unrestricted CA. But if you make it critical, then clients who don't understand it will just fail. You can see how this doesn't help adoption.
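At least it's easy to check how a given root is flagged (the cert path and domain here are hypothetical):

    $ openssl x509 -in root.crt -noout -text | grep -A3 'Name Constraints'
            X509v3 Name Constraints: critical
                Permitted:
                  DNS:internal.example.com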
It says "Proposed Standard" on the RFC; maybe that's why it's not widely implemented if that's the case?
https://bettertls.com/ has Name Constraints implementation validation tests, but "Archived Results" doesn't seem to have recent versions of SSL clients listed?
> Registrants publish a "CAA" Domain Name System (DNS) resource record which compliant certificate authorities check for before issuing digital certificates.
And hopefully they require DNSSEC signatures and DoH/DoT/DoQ when querying for CAA records.
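For reference, CAA records are ordinary DNS RRs that are easy to inspect yourself (the domain and output here are illustrative):

    $ dig +short CAA example.com
    0 issue "letsencrypt.org"
    0 iodef "mailto:security@example.com"
    # add +dnssec to see whether the answer carries RRSIG records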
Name Constraints has been around at least since 1999 (RFC 2459).
I'm not sure why CAA is brought up here. I guess it's somewhat complementary in "reducing" the power of CAs, but it defends against well-behaved CAs misissuing certs; it doesn't limit the power of arbitrary CAs, since it's checked at issuance time, not at time of use.
Stuff like this is why I consider giving people a CA how-to akin to handing them a loaded gun. They almost invariably won't store the keys securely, set up CRLs, or manage their PKI in a safe manner.
Some of us are aware of the risks and choose to accept them. Last week I tried to analyze HTTPS traffic on my Linux machine via MITM to check what some programs were sending back home, but omg, it was a pain, and I only partially succeeded: some apps just ignore system certs and use their own. Tools like mitmproxy help (docs are lacking, btw). I paid for both the devices and the software; shouldn't I be able to take a peek at what they are doing?
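For reference, the basic recipe with mitmproxy on a Debian-ish box looks something like this (the paths are mitmproxy's defaults; apps that pin or bundle their own certs will still bypass it):

    # run mitmproxy once so it generates its CA under ~/.mitmproxy
    mitmproxy
    # trust that CA system-wide (Debian/Ubuntu layout)
    sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem \
        /usr/local/share/ca-certificates/mitmproxy.crt
    sudo update-ca-certificates
    # point programs at the proxy; many (not all) honor these
    export http_proxy=http://127.0.0.1:8080
    export https_proxy=http://127.0.0.1:8080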
I certainly wouldn’t trust myself. Now, if I could import a root cert and specify which domains to trust, that would be another thing; it seems browsers are starting to pay attention to name constraints, which has only taken 20-odd years.
I’d rather be able to further constrain it at the cert store, though.
Doesn't matter. PKI for https is a solution in search of a problem.
In reality, all it does is validate domain name ownership, something that could more easily have been done with DKIM keys. We don't need certificate authorities.
Probably not much to say other than that all of our current takes are based on a half-finished system originally designed for telephone and X.500 (DAP+LDAP && GoBlue) standards. Pretty new to me, even after 25+ years ;), and the x509v3 things I kept using in openssl finally made some sense. See
https://en.m.wikipedia.org/wiki/X.500, and especially the section "The relationship of the X.500 Directory and X.509v3 digital certificates", which says that web servers/commerce were built on a system that didn't support remote directories and needed something local, hence the CAs and CA stores. Copied here for reference:
"The current use of X.509v3 certificates outside the Directory structure loaded directly into web browsers was necessary for e-commerce to develop by allowing for secure web based (SSL/TLS) communications which did not require the X.500 directory as a source of digital certificates as originally conceived in X.500 (1988). One should contrast the role of X.500 and X.509 to understand their relationship in that X.509 was designed to be the secure access method for updating X.500 before the WWW, but when web browsers became popular there needed to be a simple method of encrypting connections on the transport layer to web sites. Hence the trusted root certificates for supported certificate authorities were pre loaded into certificate storage areas on the personal computer or device."
I previously used openssl-based scripts to generate certificates to use for local development or applications on a private network. I have since moved to using the step CLI [1].
OpenSSL is powerful, but it's hard to figure out how to use correctly. Each command seems cryptic no matter how many times I use it.
The step CLI is a lot simpler, even though it has a few quirks: it generates PKCS#1 formatted private keys instead of the newer PKCS#8 format, makes every leaf certificate eligible to be either a server certificate or a client certificate, and has absurdly low default certificate expirations.
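For illustration, a local root plus one longer-lived leaf looks roughly like this (the names are made up; --not-after is the workaround for the short default expirations):

    # one-off local root CA
    step certificate create "Dev Root CA" root.crt root.key --profile root-ca
    # leaf signed by it, ~1 year instead of the default
    step certificate create dev.example.local leaf.crt leaf.key \
        --profile leaf --not-after 8760h \
        --ca root.crt --ca-key root.key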
I'm hosting my own internal CA using Hashicorp Vault and some ansible + CI. The root CA is valid for 20 years, intermediate CA 10 years, client certs three months.
Initial setup is a handful of commands against Vault's CLI; from there, with CI in place, client certs are renewed automatically and services are restarted/reloaded as well. Works flawlessly.
I should maybe write a (small) blog explaining how it works.
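Until then, the core of the initial setup looks something like this (mount paths, common names, and the domain are made up; the CI part just reruns the final issue step):

    # root CA, ~20 years
    vault secrets enable -path=pki_root pki
    vault secrets tune -max-lease-ttl=175200h pki_root
    vault write -field=certificate pki_root/root/generate/internal \
        common_name="Home Root CA" ttl=175200h > root.crt

    # intermediate, ~10 years, signed by the root
    vault secrets enable -path=pki_int pki
    vault secrets tune -max-lease-ttl=87600h pki_int
    vault write -field=csr pki_int/intermediate/generate/internal \
        common_name="Home Intermediate CA" > int.csr
    vault write -field=certificate pki_root/root/sign-intermediate \
        csr=@int.csr ttl=87600h > int.crt
    vault write pki_int/intermediate/set-signed certificate=@int.crt

    # role + three-month (2160h) client certs; CI reruns the issue step
    vault write pki_int/roles/clients \
        allowed_domains=home.example allow_subdomains=true max_ttl=2160h
    vault write pki_int/issue/clients \
        common_name=host1.home.example ttl=2160h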
Wasn't there a similar article a week or so back that used a YubiKey and a TRNG for signing your certs? It seems like that might be a better way to do it?
Is it coming? I notice that OpenSSL now has support for raw public keys.
The spec (RFC 7250, "Using Raw Public Keys in Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)") suggests DANE/DNSSEC as a mechanism to bind identities to public keys (section 6).
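For context, that binding is just a TLSA record over a digest of the SubjectPublicKeyInfo, which stock openssl can compute (the key path and domain are placeholders):

    # SHA-256 of the DER-encoded SPKI
    openssl pkey -in server.key -pubout -outform DER | openssl dgst -sha256
    # published as, e.g.:
    #   _443._tcp.example.com. IN TLSA 3 1 1 <that digest>
    # (usage 3 = DANE-EE, selector 1 = SPKI, matching type 1 = SHA-256)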
For testing on localhost (web applications) I will usually write the web app to support HTTP; that way there is no need to deal with certs on the local machine.