The decrease in lifetimes has had a fair bit of discussion, but I haven't seen a lot of discussion about the mTLS changes. Is anyone else running into issues there? We'll be hit by it, as we use mTLS as one of several methods for our customers to authenticate the webhooks we deliver to them, but we haven't determined what we'll be doing yet.
The certificate offered from server to client and the certificate the server expects from the client do not need to share a CA.
This only affects you if you have a server set up to verify mTLS clients against the Let's Encrypt root certificate(s), or maybe every trusted CA on the system. You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificates.
You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
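Concretely, that setup might look something like this in Go; the file paths, port, and CA file name below are just placeholder assumptions, not anything specific to this thread. The server keeps presenting its Let's Encrypt certificate, while client certificates are verified against a private CA you operate yourself:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Server identity: the usual Let's Encrypt cert and key (e.g. from certbot).
	serverCert, err := tls.LoadX509KeyPair(
		"/etc/letsencrypt/live/example.com/fullchain.pem",
		"/etc/letsencrypt/live/example.com/privkey.pem",
	)
	if err != nil {
		log.Fatal(err)
	}

	// Client trust: only certificates chaining to our own CA are accepted.
	caPEM, err := os.ReadFile("/etc/pki/webhook-clients-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{serverCert}, // Let's Encrypt server cert
			ClientCAs:    clientCAs,                      // your own CA for clients
			ClientAuth:   tls.RequireAndVerifyClientCert,
		},
	}
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```

The EKU changes don't touch this arrangement, because the client certificates chain to your own CA, which can carry whatever EKUs you want.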
> The certificate offered from server to client and the certificate the server expects from the client do not need to share a CA.
Sure, but it seems like all the CAs are dropping the client EKU, at least Let's Encrypt and DigiCert, since under the Google (Chrome Root Program) requirement they can't issue both those and normal server certs from the same hierarchy, and I guess there isn't enough market to run a separate hierarchy just for that.
> You might do that if you're using the host HTTPS certificates handed out by certbot or other CAs as mTLS client certificate
Sure, what's wrong with that?
> You can still generate your own mTLS key pairs and use them to authenticate over a connection whose hostname is verified with Let's Encrypt, which is what most people will be doing.
That lets the client verify the host, but the server doesn't know where the connection is coming from. Generating mTLS pairs means pinning and coordinated rotation and all that. Currently, servers can simply keep an up-to-date CA store (which is common and easy) and check the subject name, freeing the client to rotate its cert easily.
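To make that concrete, here's a rough Go sketch of the current approach (the hostnames and key file names are placeholders): accept any client certificate that chains to the system trust store, then authorize it by checking the presented name against an allow list:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// "Keep an up-to-date CA store": here, just the host's system trust store.
	systemRoots, err := x509.SystemCertPool()
	if err != nil {
		log.Fatal(err)
	}

	// "Check the subject name": identities we expect clients to present.
	allowed := map[string]bool{"client1.example.net": true}

	tlsCfg := &tls.Config{
		ClientCAs:  systemRoots,
		ClientAuth: tls.RequireAndVerifyClientCert, // chain validation happens here
		VerifyConnection: func(cs tls.ConnectionState) error {
			if len(cs.PeerCertificates) == 0 {
				return errors.New("no client certificate presented")
			}
			leaf := cs.PeerCertificates[0]
			for _, name := range leaf.DNSNames {
				if allowed[name] {
					return nil // chain is valid and the name is one we expect
				}
			}
			return fmt.Errorf("client certificate for %v is not on the allow list", leaf.Subject)
		},
	}

	srv := &http.Server{Addr: ":8443", TLSConfig: tlsCfg}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```

The client can renew or reissue its publicly trusted cert whenever it wants; as long as the name stays the same, nothing on the server side needs to change.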
I would expect that the certificate authorities used for servers are probably not useful for client authentication, except perhaps when the only thing you care about is the domain name (which it usually shouldn't be; besides, many clients won't even have a domain name).
But if you really do need to use certificates from those CAs anyway, you might ignore some of the fields of the certificate.
> you might ignore some of the fields of the certificate
A lot of software really doesn't like ignoring the constraints. You can make it work, but there's a good chance it'll require messing with the validation logic of your TLS library, or worse, having to write your own validation code.
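For example (a rough Go sketch, with a placeholder CA path): crypto/tls normally insists on the clientAuth EKU when verifying client certificates, so to ignore that constraint you have to switch off the built-in verification and redo the chain check yourself:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"log"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/etc/pki/accepted-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)

	tlsCfg := &tls.Config{
		// RequireAnyClientCert skips the library's own chain and EKU checks...
		ClientAuth: tls.RequireAnyClientCert,
		// ...so we are now responsible for all of that ourselves.
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			if len(rawCerts) == 0 {
				return errors.New("no client certificate presented")
			}
			leaf, err := x509.ParseCertificate(rawCerts[0])
			if err != nil {
				return err
			}
			intermediates := x509.NewCertPool()
			for _, raw := range rawCerts[1:] {
				if c, err := x509.ParseCertificate(raw); err == nil {
					intermediates.AddCert(c)
				}
			}
			_, err = leaf.Verify(x509.VerifyOptions{
				Roots:         roots,
				Intermediates: intermediates,
				// ExtKeyUsageAny is the "ignore the EKU constraint" part.
				KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
			})
			return err
		},
	}

	srv := &http.Server{Addr: ":8443", TLSConfig: tlsCfg}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```

It works, but you're now on the hook for everything the library used to check for you.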
I had partially written a program (in C) to parse X.509 certificates; the part that is missing is the cryptographic stuff, to validate signatures and also to extract the public key for use by a separate TLS implementation. I intended it to also be able to make X.509 certificates; the cryptographic stuff will be needed for that too, to make private/public key pairs and signatures. A separate library for cryptographic functions should probably be used for this purpose, if I had a suitable one, and a separate library should be used for TLS as well (OpenSSL is rather confusing, and I want to use my own handling of the certificates, but OpenSSL makes that too confusing to do).
Nothing, in principle. I suppose you can use that to validate domain ownership, or use Let's Encrypt as a weird authentication service for your cluster. However, it's not exactly common to do so as far as I can tell.
> Currently servers can simply keep an up to date CA store (which is common and easy), and check the subject name, freeing the client to easily rotate their cert.
I understand the ease of use of that approach, but it leaves your authentication wide open to rogue certificates, e.g. through old DNS entries on a subdomain, or accidentally letting someone read email destined for hostmaster@domain.tld, or maybe through a rogue CA if you want to go full conspiracy mode.
As for pinning: you're required to pick a trust store anyway; you can just point it at whatever CA file you want.
As for automated rotation: you can host your own ACME server for your own CA (it's like 10 lines of config in Caddy) and have other servers point an account on their certbot/acme.sh/etc. at it. This gives you even more control and lets you decide how long you want certificates to last.
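For reference, that config might look roughly like this in a Caddyfile; the CA id, hostnames, and lifetime below are my own placeholder assumptions, and the exact subdirectives may vary with your Caddy version:

```
{
	pki {
		# Define an internal CA that Caddy manages for you.
		ca internal {
			name "Internal mTLS CA"
		}
	}
}

# Expose an ACME endpoint backed by that CA.
acme.internal.example.com {
	acme_server {
		ca internal
		lifetime 720h   # pick however long you want the issued certs to last
	}
}
```

Other machines then point certbot or acme.sh at that server's ACME directory URL (certbot's --server flag) and trust your internal root instead of the public bundle.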
It's not as easy as relying on CAs to do that validation for you, but also much better than the old-fashioned manual key configuration of yore.
We use locally generated certs for mTLS, with different lifetimes. Relying on public CAs for chains of trust like that makes me nervous, especially if something gets revoked.
Can I ask - if you're using publicly-trusted TLS server certificates for client authentication...what are you actually authenticating?
Just that someone has a certificate that can be chained back to a trust anchor in a common trust store? (i.e. your authentication is that they have an internet connection and perhaps the ability to read).