Kinda funny to call the current 90 day certs "long lived". When Let's Encrypt started out more than 10 years ago, most certs from major vendors had a 1 year life span. Let's Encrypt was (one of) the first to use drastically shorter life spans, hence all the ACME automation effort.
To someone like me with hobby-level serving needs, the 90 day certificate life is pretty inconvenient, despite having automation set up. I run a tiny VPS that hosts basic household stuff like e-mail and a few tiny web sites for people, and the letsencrypt/certbot automation around certificate renewal is the only thing I seem to need to regularly babysit and log in to manually run or fix. Everything else just hums along, but I know it's been 90 days because I suddenly can't connect to my e-mail or one of the web virtual hosts went down again. And sure enough, I just need to run certbot renew manually or restart lighttpd or whatever.
> To someone like me with hobby-level serving needs, the 90 day certificate life is pretty inconvenient
It's only inconvenient because it isn't properly automated. That's by design.
When this can be an acme.sh script in a cronjob, there isn't much of an excuse. Even my Raspberry Pi dedicated to my 3D printer is happily renewing certificates.
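For reference, this is roughly the kind of cron entry acme.sh installs for itself (paths and schedule here are illustrative; `acme.sh --install` writes the real one for you):

```shell
# Run daily at 03:00; acme.sh is a no-op unless a cert is near expiry,
# so running it far more often than certs expire is cheap and safe.
0 3 * * * "/home/user/.acme.sh"/acme.sh --cron --home "/home/user/.acme.sh" > /dev/null
```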
At least with this thing breaking every 90 days you have it fresh on your mind. One year away you may not even remember what you have to do.
Let's Encrypt doesn't work great when the Let's Encrypt client software has a bug or is misconfigured (one of those is true for your situation).
I think keeping the validity long just removes incentives for people to bother fixing their setups. We've seen the shift from "Craig needs to spend a few days on certificate renewal every year" to full automation in most environments when the 90 day validity period was introduced, and shortening it to a week will only help further automation.
You'll always have the option to skip the hassle (for a small fee, unless a Let's Encrypt competitor joins the market), but I feel the benefits outweigh the downsides.
I personally would've preferred something like DANE working, but because the best we've got is DNSSEC and most of the internet doesn't even bother implementing that, I doubt we'll ever see that replace the current CA system.
Clearly it's not working correctly, so a longer certificate lifetime wouldn't address the root cause - you would just have to fix your setup less often.
> To someone like me with hobby-level serving needs, the 90 day certificate life is pretty inconvenient, despite having automation set up.
I also have hobby-level serving needs. I've been using LetsEncrypt since whenever it was they started. I have two top level domains and a whole lot of subdomains.
I've never had to babysit certificate renewal, nor had to log in manually to fix anything. Not once. How come?
I also use Certbot (v2.1.0) for my small VPS/hobby setup (www + email) and I haven't had to mess with it since I set it up in 2021. Just adding another data point so you know it doesn't have to be painful. I'll be happy to help, just drop me a line.
> To someone like me with hobby-level serving needs, the 90 day certificate life is pretty inconvenient, despite having automation set up.
I've been running an LE client (the official one, dehydrated, others) on various systems for ~8 years, and the one time I had an issue with renewing was when (AIUI) the LE folks changed CDNs, so their responses were (slightly) different and dehydrated needed to be tweaked.
Speaking of the topic of automation, does anyone know of a domain registrar (or DNS provider) that is suitable for issuing Let's Encrypt certificates for a machine behind a firewall (which requires using the DNS challenge)? I currently use Namecheap, but they started requiring you to manually whitelist the client IP address to use their API, which is annoying when your residential ISP changes your IP address.
Edit: seems like using Cloudflare as the DNS host is the way to go here. Thanks everyone!
If you are not allergic to Cloudflare, they work very well with the DNS-01 challenge and they provide both registrar services as well as DNS. Of course, you can use Namecheap domains with Cloudflare or any other DNS provider and that should solve your problem too.
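For what it's worth, the DNS-01 flow with acme.sh and Cloudflare comes down to an environment variable plus one command. A rough sketch (the token value and domain are placeholders; the exact variable names are documented in acme.sh's dnsapi docs):

```shell
# Scoped Cloudflare API token with DNS edit permission for the zone.
export CF_Token="your-cloudflare-api-token"

# acme.sh creates the _acme-challenge TXT record via the API, waits for
# propagation, completes the dns-01 challenge, then cleans the record up.
acme.sh --issue --dns dns_cf -d internal.example.com
```

Since validation happens purely over DNS, the machine itself never needs to be reachable from the public Internet.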
Is it possible for you to run Caddy as a reverse proxy in front of your services? I've done this in the past and it really is set and forget when it's configured correctly.
I don't know what your issues are, but perhaps the know-it-all people who comment on this with a variation of "you're doing it wrong" or "not enough automation" could cool down a bit and realize the web PKI is hacks built on hacks, and there are many reasons why the public ACME system may not be entirely robust for every application.
Off the top of my head, that could be because one or more domains are not accessible from the public Internet (which could be for a variety of reasons), a subset of the subject domains having expired for legitimate reasons without your knowing which in advance (certificates being what they are, some applications rely on them having alternative names), intermittently flaky routing (which might not be a problem for the application), and a number of other reasons. That's without including potentially hostile actors. Then there are plenty of offline uses for certificates!
That said, Let's Encrypt has really been a revolution and made life better for many people. But it's not perfect, and the PKI system itself has many warts. It's absolutely a system that may need a non-negligible amount of babysitting when you venture outside the absolute mainstream.
If you're using LetsEncrypt without automation you're doing it wrong, and the reason that the WebPKI is so hacky is that it was insulated from basic computer science for 2 decades and run by enterprise software companies.
You have to automate certificates. You can't do these by hand anymore. Certificate lifetimes are going to get inexorably shorter.
Not really. PKI has always been that way, since before the web. Mainly because the use cases are so varied and there is a tendency to support every possibility under the sun.
For the longest time the web PKI lacked a singular view on what exactly they were supposed to be signing. Its usage reflects that.
That is deeply rooted in culture. I mean, we do speak about a culture in which X.509 was a reasonable choice. Years after the X.500 universe was cold to the touch at that.
The rest of your comment seems directed at someone else. Framing this on automation is misleading, which is what the examples in my comment were intended to show.
... which means automation was not set up correctly, and 90 days is still long enough that you just tolerated it. If it were 6 days, after a few turns you would have decided "fuck it, I'm going to spend time fixing it once and for all".
How could the person you’re replying to have reasonably phrased their comment to avoid this snark from you?
I’m 1,000% sure that they know what you’re trying to espouse. Nowhere in the comment does it say “here is an exhaustive list of hosted email providers”. It’s a JOKE.
This is not a substantive comment by any measure. To be blunt, this is almost certainly a skill issue, and a skill issue that would very likely be an issue in other aspects of system administration, even at a hobbyist level. Please be less political, and actually contribute to the discussion.
Unsurprisingly the 100% true comment in here is gray: PKI is breaking the Internet and because the PKI folks have literally no guardrails of any kind, they're committed to breaking it further despite still virtually zero benefit from constantly making the Internet more fragile.
But hey, there's an upside: When they finally break this toy badly enough, everyone will finally evict the CAB from their lives and do something else.
Doesn't that run into their rate limits if you generate a certificate every few minutes all the time? Or at least might be a burden, even if it didn't hit an absolute limit. (I'm assuming you're not the only person in the world doing this, so I mostly mean the collective effect this sort of usage pattern has)
Sorry, I should have clarified. You can't do certificates that fast on Let's Encrypt no. I meant running a custom CA inside/alongside Kubernetes, and using that to issue 20-minute validity certs to pods.
When Let's Encrypt got started in 2014, CAs could issue certificates valid for up to five years - and many did. The CA/Browser Forum has slowly been ratcheting that down.
That (five year certs) was technically true, but the CA/B BRs already told you that was going away in 2015, when Let's Encrypt was started. I don't know how many were still actually selling such a product by the point Let's Encrypt was on the scene.
I think the drop-dead date for this product was around April 2015. The ideal customer for a product like this (lazy and incompetent, but with plenty of money) is also the kind likely to leave it too late. I won't guarantee we'd have caught that, but this seems like a product category thing: nobody was openly selling certs that would just break in Chrome, because that's a bad product. (Contrast the SHA-1 deprecation, where some notable financial outfits took forbidden steps to secure certs which should not have existed, averting a bigger mess of their own making and covering for the fact that they hadn't properly managed their own technical risks.)
[Why would such certificates break in Chrome? Google hates these long-lived certs, so Chrome treats certificates whose validity exceeds what the BRs authorise as immediately invalid. If you want to moan to Google about why your prohibited certs don't work, you're basically admitting you violated your agreement with them, so it's like showing up to claim your stolen rucksack full of cocaine from the cops...]
IP certs improve a niche but interesting use case for me. I run a domain registrar that implements a simple OAuth2 protocol[0] for delegating domains/subdomains. I also have an open source tunneling tool called boringproxy that implements the client side of this protocol[1].
boringproxy needs to provide a callback redirect_uri to the oauth server in order to retrieve its token, which it can then use for setting DNS records. However, it can't provide an HTTPS endpoint until it can set up those DNS records and get a cert. Chicken/egg. Currently the spec requires the server to implement a `GET /temp-domain` endpoint which creates a DNS record like 157-245-231-242.example.com which points at the client's IP. This lets boringproxy bootstrap a secure OAuth2 callback endpoint.
IP certs would remove an entire step from this process.
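The temp-domain encoding described above is trivial to derive client-side. A small sketch (with example.com standing in for the registrar's zone):

```shell
# Turn the client's public IP into the hyphenated temp-domain label.
ip="157.245.231.242"
echo "$(echo "$ip" | tr '.' '-').example.com"
# prints: 157-245-231-242.example.com
```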
I remember being surprised when Cloudflare launched https://1.1.1.1 with a valid cert and I immediately wanted one, but couldn’t find an easy way to get one.
I am gonna try to run a DoH resolver on this and see how it goes.
I’m really glad they have a page on that IP. I use it decently often for “is the problem DNS?” troubleshooting.
Because if zero pages load, but that one does, the issue is DNS.
Ping is easy too of course, but I can ask people to type four ones with periods between into their search bar over the phone. No command line required.
I remember calling Clint and Jeremy at DigiCert and asking: "hey we have this cool IP address—what are the odds you guys can issue a certificate for it?"
I'm not sure if they had to dust off some code or process to do it, but they got it done really quickly once the demonstration of control was handled.
What's the end goal here? A new cert per connection? I think if, hypothetically, that were the case, where Let's Encrypt validates the domain owner on every connection, then that'd move the attack surface from trying to get private cert keys to... other attacks, in general. Is there reason to believe that "other attacks" are less likely? Have there been many cases of should-have-been-revoked certs being used improperly?
>We expect to issue the first valid short-lived certificates to ourselves in February of this year. Around April we will enable short-lived certificates for a small set of early adopting subscribers. We hope to make short-lived certificates generally available by the end of 2025.
This feels like a disaster waiting to happen -- like what happens if (when?) Let's Encrypt suffers a significant outage and sites can't refresh certificates? Do we just tolerate a significant portion of the Internet being down or broken due to expired certificates? And for what tradeoff? A very small amount of extra security? Is this because certificate revocation is a harder problem to solve / implement at Internet scale?
I agree. Anecdotally, the last time LE had an outage that prevented my cert from renewing, it took about 4.5 days from when I reported the issue to them to when they started looking and provided a workaround. Since this was a 90-day cert it still had 30 days left on it, so I wasn't worried. If it had been a 6-day cert with only 2 days left on it, I would've had to go to red alert and switch to another CA ASAP.
If they do start providing 6-day certs I hope their turnaround on issue reports is faster than that (and ideally have something better for reporting issues than a community forum where you have to suffer clueless morons spamming your thread).
On average, half of the certs would expire within half of the lifetime. A sustained 3.5-day DDoS attack would cause half of the sites using a 6-day certificate to go offline.
I am not saying 6 days is long enough, but if your automation always waits until the last minute to renew certs, you may have more issues to worry about than the CA's availability. If I am going to use a cert with a 6-day lifetime, I will be renewing it at least once a day.
If you have multiple hosts, the set should not be the same, no? From the linked page the comparison is a set comparison: one host at hosta.example.com and one host at hostb.example.com, each running its own certbot, won't conflict.
Fortunately, most ACME clients, including my own, support other CAs as fallbacks. (Caddy's ACME stack falls back to ZeroSSL by default, automatically.)
That, and extended week-long outages are extremely unlikely.
> That, and extended week-long outages are extremely unlikely.
You only need the outage to last for the window of [begin renewal attempts, expiration], not the entire 6d lifetime.
For example, with the 90d certs, I think cert-manager defaults to renewal at 30d out. Let's assume the same grace, of ~33% of the total life, for the 6d certs: that means renew at 2d out. So if an outage persisted for 2d, those certs would be at risk of expiring.
Sounds like a surefire way to DDoS the next CA in line (and then all the others), since they presumably wouldn't be prepared for that kind of traffic, given Let's Encrypt is currently the default choice almost everywhere.
> The dns-01 challenge type will not be available because the DNS is not involved in validating IP addresses. Additionally, there is no mechanism to check CAA records for IP addresses.
Is in-addr.arpa. not usable for these purposes? Given how you can do PTR records to map IP address to domain name, I had just assumed it would be at least theoretically usable for more, even if few or no hosts exposed it so at present.
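For reference, the reverse-DNS name for an IP is just its octets reversed under in-addr.arpa, which `dig -x 1.1.1.1` computes for you. A sketch using a documentation address:

```shell
# Build the PTR owner name for an IPv4 address by hand.
ip="203.0.113.10"
echo "$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}').in-addr.arpa"
# prints: 10.113.0.203.in-addr.arpa
```

One catch for CAA-style uses: that zone is usually controlled by the IP's network operator rather than the server operator, which may be part of why the BRs never defined CAA semantics for IP addresses.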
Note that ACME profiles are new, to the extent that the draft spec is (a) personal (and not prefixed with draft-ietf…), and (b) currently versioned -00:
Yeah... thankfully, it's "pretty simple" for clients to implement. Just add a JSON field to your payload and that's it. (There's a little bit of error checking logic and input validation, like making sure the value is valid, but overall it's not too complex, unlike ARI, which is much more work.)
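For context, as I understand the draft, the profiles mechanism really is just an extra field in the ACME newOrder payload. A hedged sketch of what such a request body might look like ("shortlived" is the profile name Let's Encrypt has announced, but check their current docs):

```json
{
  "identifiers": [
    { "type": "dns", "value": "example.com" }
  ],
  "profile": "shortlived"
}
```

The server advertises the available profile names in its directory metadata and rejects orders naming a profile it doesn't offer.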
I don't know much about CT requirements, but can't they prune data out of their logs after some time? Since the certs only last 6 days, the growth of the logs can be capped at some point right? If not now, provisions for such operations could surely be implemented, I imagine.
> I don't know much about CT requirements, but can't they prune data out of their logs after some time? Since the certs only last 6 days, the growth of the logs can be capped at some point right?
That's what happens - logs are "expired" after a few years. But if you want to have an exhaustive monitor, you probably don't want to discard the records of expired certificates.
Hmm, I wonder if it's possible to do dedicated intermediate certificates that promise to only sign short-lived certificates for a single site? That way the CT-log could be taught to only keep the intermediate?
Impossible to say, as most people probably don't even know that their private key was stolen. I've personally seen a real certificate revocation for this only once. Yet another reason to have shorter lifespans.
It's a pretty narrow threat model for Alice to get her cert stolen by Bob, be completely unaware that this has happened, and the means Bob used only works once.
Hmmm. This solution still leaves quite a few days during which a compromised certificate can be used. That's significant, but I guess it's better than nothing?
While we're on the subject of cert lifetimes. Is there a longer lived, public CA-issued cert for TLS client purposes?
I sometimes deal with a relying party that insists on public CA issued certs for TLS client use, and then makes rotation very painful behind a portal with 2FA etc. This would be fine if public CAs issued certs for 5 years but they seem to be limited to 1 year now because of browser policy.
It's often very difficult to get domain names in large orgs, but very easy to get public IPs. An IP can be as easy as a couple of buttons to get a static IP and assign it to a cloud LB in AWS or Google Cloud. Domain names usually require choosing a name (without picking one that reveals internal project details), then convincing someone with budget to buy the domain, and then someone has to manage the domain name forevermore. For quick demos, or simple environments, it'd be easier to just get a static IP and use that.
But the TLS cert on https://1.1.1.1 (or https://[2606:4700:4700::1111] on IPv6) is still valid for the IP address; otherwise your browser would put up a warning during the TLS handshake.
I'm very interested in trying this. acme.sh is planning to support certificate profiles, so hopefully that'll be ready when LE's short-lived certificates become available.
(Or I'll switch to a different ACME client I suppose)
ZeroSSL I think will get you IP certificates with their cheapest plan. (Disclaimer: I work on Caddy, which is a ZeroSSL project; but I do so independently.)
Six days? I can't even set the cron job to weekly. Maybe that is the point of this, though; from being on call, I really hate things restarting every day. Caddy, Nginx, HAProxy, and IIS all seem to handle certs without a full restart. MS SQL Server, nope.
While it wouldn't help currently, I'm sure in time accommodations will be made. For example, the acme-client on OpenBSD only renews if <30 days from expiration, so it's run from cron weekly. A client will just need to support custom thresholds: run it daily and have it renew when 1 or 2 days out, to be safe.
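A client-agnostic way to get that behavior today is a daily cron job that only renews below a threshold. A sketch using `openssl x509 -checkend` (the cert path and renew command are placeholders for your own setup):

```shell
#!/bin/sh
# Renew only when the certificate has under 2 days of validity left.
CERT="/etc/ssl/example.pem"              # placeholder path
THRESHOLD=$((2 * 24 * 3600))             # seconds

# -checkend exits 0 if the cert is still valid THRESHOLD seconds from now.
if openssl x509 -checkend "$THRESHOLD" -noout -in "$CERT" >/dev/null; then
    echo "certificate still good, nothing to do"
else
    echo "certificate expires soon, renewing"
    # e.g. acme.sh --renew -d example.com   (placeholder renew command)
fi
```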
AFAIK, Caddy is the only integrated ACME client that is tuned for short-lived certificates. All its own self-signed certs are already 24-hour certificates, so 6-day certs will be no problem.
I'm happy to agree that Caddy is easier, but the claim here is that it's "tuned for short-lived certificates", which I guess could be true, but I seriously doubt it's meaningful. Reloading certs isn't exactly expensive on any other major web server, so even if the most obvious interpretation is true and they made it take, say, 100 ms instead of 1000 ms, we're talking about reloading every few days: who cares?
You have a link to a previous discussion on this? I'm curious if there is some hidden thing occurring or if just connection resets are happening or something else you are aware of.
It feels like there's something of an attack vector here with cloud providers who lease IPs for hours at a time.
1. Lease IP
2. Obtain cert (verify can receive traffic to IP on port 80)
3. Give IP back
4. Cloud provider gives IP to another customer
5. BGP-attack the IP while the certificate is still valid (up to 6 days).
While I support the idea of IP certs I do wonder how thought through this is and what the future consequences for security are.
I agree with another commenter here who said this should be limited to IPs behind RPKI.
Possibly also needs a mechanism for IP owners to clamp the cert time to be below their IP re-lease policy. As an example, a provider like AWS could require a max cert lifetime of (say) 6 hours and ensure any returned IPs stay unleased for 6 hours before reissuing them.
If you control the IP or domain via a BGP hack, you can get a certificate issued while you control it, as long as you control it from the perspective of their CA.
You've got to be pretty lucky, or do a lot of IP cycling for your vector to be terribly useful. A paranoid user of IP certs would let their new public facing assignments settle for a week before using them; but I suspect few people will start using IP address certs, because of usability.
I wouldn't write off the use of IP certs just yet.
AFAIK IP address certs would provide a way to create a secure browsing context in your browser, which is required for service worker ('offline' background threads) and some File API, which could open up a new class of programs that host for friends and family.
This is exactly why the LE IP certs will be limited to 6 days: this exact attack is possible today against any IP address cert, and such certs in general are allowed to have lifetimes up to 398 days. LE isn't comfortable with that situation, so IP certs will have the shortest feasible lifetimes.
> Our six-day certificates will not include OCSP or CRL URLs.
If someone else did this, Mozilla would be threatening to remove them from their trusted roots.
IP address certs sound like a security nightmare that could be subverted by BGP hijacking. Which is why most CAs don't issue them. Does accessing the ACME challenge from multiple endpoints adequately prevent this type of attack?
> Short-lived Subscriber Certificate: For Certificates issued on or after 15 March 2024 and prior to 15 March 2026, a Subscriber Certificate with a Validity Period less than or equal to 10 days (864,000 seconds). For Certificates issued on or after 15 March 2026, a Subscriber Certificate with a Validity Period less than or equal to 7 days (604,800 seconds).
[…]
> §7.1.2.11.2 CRL Distribution Points
> The CRL Distribution Points extension MUST be present in: Subordinate CA Certificates; and Subscriber Certificates that 1) do not qualify as “Short-lived Subscriber Certificates” and 2) do not include an Authority Information Access extension with an id-ad-ocsp accessMethod.
> IP address certs sound like a security nightmare that could be subverted by BGP hijacking.
The attack scenario is exactly the same as hostname certificates, which are often validated by HTTP or TLS ACME challenges.
> Does accessing the ACME challenge from multiple endpoints adequately prevent this type of attack?
Yes. You'd essentially have to MitM all traffic towards the IP for it to work, and with more and more networks rolling out BGP origin validation a global BGP hijack becomes harder and harder to pull off.
You'd still be in trouble if you expect your own ISP to be hostile, of course. Don't single-home with an ISP you don't trust, or stick with domain name certs and force DNS challenges.