E.g. "Hundreds of thousands of domains' certs expire after 2099". Yeah, but no publicly trusted certs. They're capped at a bit more than 2 years and there's a discussion to cap them even more.
The certs they're seeing are almost certainly mostly: "let's create a test selfsigned cert for this host. how long should it last? let's type in a large number so we aren't bothered by it any time soon."
Historically there were some certs that kept getting grandfathered in after lifetimes were restricted because they'd been issued before there were any rules - maybe ten years to expire or even more? I think the last of those probably went away because of the Symantec distrust (not that they were issued by Symantec, but they were issued by a CA which was bought by a CA which in turn was bought by Symantec before it was distrusted) and also of course they'd have either MD5 or SHA-1 signatures, which are not accepted today anyway.
There were still certs issued right up until the end of March 2018 with the old 39 month lifetime maximum. You can see them most easily in the annualised CT logs for 2021. A while back CT log operators realised that logs just get longer (of course) and so they would need to periodically make new ones and archive the old ones. Very quickly they struck upon the idea of annualising them, instead of running FooBar Log, run FooBar Log 2019, FooBar Log 2020 and FooBar Log 2021, and then require any submissions to use the log matching the year of expiry of the certificate they were logging. This way you can archive FooBar Log 2019 when people get back from holidays after celebrating New Year 2020, all the certs in that log are expired anyway now.
I guess that today the last of those 39 month certs will actually expire before a brand new 825 day cert, but they are still out there, so don't write software that assumes leaf certs can't last more than 825 days just yet.
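The shard-selection rule described above is simple: a submission goes to the log named for the certificate's expiry year. A minimal sketch (the "FooBar Log" name and the 825-day / 39-month figures are from the comments above; everything else is hypothetical):

```python
from datetime import datetime, timedelta

def ct_log_shard(base_name: str, not_after: datetime) -> str:
    """Pick the annualised CT log shard by the cert's expiry year."""
    return f"{base_name} {not_after.year}"

def exceeds_825_days(not_before: datetime, not_after: datetime) -> bool:
    """Leaf-cert lifetime check; old 39-month certs can still exceed 825 days."""
    return (not_after - not_before) > timedelta(days=825)

# A 39-month cert issued just before the April 2018 cutoff:
issued = datetime(2018, 3, 30)
expires = issued + timedelta(days=1170)  # roughly 39 months

print(ct_log_shard("FooBar Log", expires))   # FooBar Log 2021
print(exceeds_825_days(issued, expires))     # True: don't assume <= 825 days
```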
The real concern is the percentage of people using browsers that are no longer supported and therefore do not implement the new standards, like Internet Explorer. Microsoft really needs to stop caving to pressure and get rid of these ancient applications like paint.exe and Internet Explorer. That, or just make new applications and give them the same names.
For reference: https://gist.github.com/ScottHelme/5531ed88b1ff0c1e8ce8af565...
> (Over 1.5M expire in the 2040s alone!)
I doubt that ~75% of the self-signed certs in this dataset expire 21-30 years from now. Surely the author is correct that there are a good number of public certs with very long expirations. I'd need to download and filter the dataset to know for sure, though.
Back in 2011 when the Baseline Requirements were first written, they set 60 months (5 years) as the upper limit, with the intent to further restrict to 39 months in a few years and that eventually happened in 2015 or so.
Last year 39 months went down to 825 days and there's currently pressure to reduce it further in 2020.
Note that it's not just self-signed certs you'd be considering, many of the certs in this dataset will be issued by a CA but not a public CA. Could be an internal CA (e.g. Windows Server provides software to run one, so does RHEL) or could even be one of the private-use-only CAs run by the same companies that operate public CAs.
There were about half a million certs in that dataset issued by SomeOrganization with email address firstname.lastname@example.org. Not technically self-signed, but obviously that's not a public CA.
These applications are also frequently heavily firewalled off from the internet, so security risks are limited. But in smaller operations you might not have PKI infrastructure in place to manage your own root certs (especially in BYOD orgs).
If a certificate rollover takes your application offline for an hour, you're not going to want to do it every week.
Also, the open source Let's Encrypt CA software is tailored for how we do things. It's not an "off the shelf" component that you can easily run in another context and end up with a proper compliant CA.
Starting yet another CA is definitely not to be taken lightly.
Also, timeline. It took Let’s Encrypt a couple years to get in root programs and have good penetration, right? Would it be faster or slower now?
If someone was willing to undergo this ordeal and could secure funding so it wouldn’t split the philanthropic community and affect LE, would LE be willing to cross-sign (once they’re in good standing) to get them off the ground, the way IdenTrust did for the ISRG roots?
Asking for a friend ;)
It's not about the money, either. Let's Encrypt had their intermediate certificates cross-signed by IdenTrust; otherwise they wouldn't have the 30% market share.
It takes a lot of time with the CA application process. You will need to convince Microsoft, Apple, Mozilla, Java, etc that you will be a good CA with good practices. Multiple security audits (starting at about $40K... KPMG don't come cheap), hardware (HSM, servers for validation, OCSP, CT, etc) and people (needs to be trusted people, and often are paid top dollars).
It's easier to _buy_ a CA than to create one. Once you already have a root, you can easily get your new root certificates included later, cross-signing from the existing root until the new ones are included. CAs are sold more often than many of us think.
Yea I have a pretty good idea of what’s involved. None of that is impossible to do. Time consuming? Sure. Impossible? Far from it. The root programs have documented processes to get included and will (eventually) accept a legit new CA that’s passed WebTrust audit and has good practices (afaik). Hardest thing is finding someone to cross-sign to accelerate the process. That’s why I asked jaas if he would do so :P.
I run a company that does public key infrastructure stuff (smallstep.com) so I have the software and some of the people. I think I could track down some hosting and a couple operators from a partner or two. What’s left seems like maybe a few hundred thousand a year for audits and HSMs, mostly. Pretty sure I could scrape that together too.
Idunno though. Mostly it’s Friday night and this is a fun thought experiment.
Good thought on buying an existing CA. Are any currently up for sale, I wonder? If anyone who reads this knows, plz email me (mike at smallstep).
They got bootstrapped with a cross-signing from IdenTrust and so were able to get off the ground fairly quickly. That won't have been cheap, and I'd be very surprised if anyone would offer the same service now - there's some pretty big risks associated with it.
Getting a new trusted CA off the ground is slow and expensive, sadly, and of course you can't really issue anything 'trusted' until you get a critical mass of browsers and OSs to include and distribute the root.
Of course it's quicker now than it used to be, with automated updates and better update cadence. You'll still have problems in some areas (embedded devices, older Android devices).
Funny how a project can have problems because there's a ton of customers with 'smart' TVs that only support old roots and have no update mechanism...
Buying a CA or an existing root is the best way for now, probably. Not cheap. Amazon and Google both did this (roots from GoDaddy and Globalsign respectively).
Not sure if any are for sale at the moment, but it would never hurt to ask - although the pricetag might be scary!
(Disclosure: I've been doing this over 16 years, at one of the larger CAs. nick -at- sectigo -dot- com)
At least partly through corporate sponsorship:
Or, more importantly, what happens when the US government decides you're too brown or Jewish and decides to sanction you?
Let's Encrypt has to follow US sanctions, so no more certs for you!
For what it's worth, I'm pretty sure even Let's Encrypt wants to see competitors to Let's Encrypt.
No it wouldn't. If there are 9999 CAs, you need to trust every single one of them.
A better argument for having multiple CAs is that it would increase resilience (against takedowns, bugs, money running out, etc.)
To be clear, all of the evidence shows that LE uses this power only for good. But that’s the threat. 30% market share is probably fine. 90% or 100% would be scary.
There are ways to decentralize the internet, but there will never be much interest in them.
If Let's Encrypt is compromised, whether by an attacker able to issue certificates for arbitrary domains or by a compromise of the CA itself, the impact would be huge given the current number of certificates signed by it, and that number is likely to grow in the future.
With several CAs in different organizations, you have a far lower risk of seeing all the CAs being compromised at the same time, thus limiting the impact of a CA being compromised.
Note that this doesn't mean full decentralization, but rather a set of CAs (5 or 10) independent of each other and providing a service as convenient as Let's Encrypt.
As a side note, an idea a colleague of mine suggested to extend and improve the security model of CAs/certificates would be the ability to have a certificate signed by several CAs.
Let's say you are a bank or a security-sensitive website: you could have your certificate signed by 4 or 5 CAs, and then publish a policy, for example in a DNS TXT record, stating that you need at least 2 or 3 valid CA signatures. This way 1) your certificate doesn't need to be renewed in an emergency if one of its CAs is compromised, and 2) if a CA is compromised (without people noticing), it reduces the overall impact, as an attacker would in fact have to compromise several CAs to actually exploit this (MitM or impersonating websites).
With multiple signatures, at least, your certificate is still valid as it has at least another valid CA signature.
Also, right now, if one CA is compromised and if this goes unnoticed, the effect is a bit catastrophic as it's possible to create valid certificates for any domain. With multiple CA signatures in conjunction with a security policy mechanism stating how many of these signatures must be valid, you mitigate this issue as it would require several CAs to be compromised and controlled by the same actor to issue valid certificates.
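A sketch of the N-of-M policy check described above (the names and the TXT-record policy format are hypothetical; real X.509 certificates don't support multiple CA signatures today, so the "signatures" here are just labels):

```python
def policy_satisfied(signatures: dict, min_valid: int) -> bool:
    """signatures maps CA name -> whether its signature verified.
    The site's published policy (e.g. a DNS TXT record) says how many
    distinct CA signatures must be valid for the cert to be accepted."""
    return sum(1 for ok in signatures.values() if ok) >= min_valid

sigs = {"CA-A": True, "CA-B": True, "CA-C": False, "CA-D": True}
# Hypothetical policy from a TXT record: "min-valid-sigs=2"
print(policy_satisfied(sigs, 2))  # True: still valid with CA-C compromised
print(policy_satisfied(sigs, 4))  # False
```

The interesting property is the second call: a single compromised CA can no longer mint a certificate that passes the policy on its own.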
It's unlikely they would be down for so long that certs would actually expire.
Just pick a discrete number of days before expiration such that you have a comfortable amount of time to deal with issues.
We (Let's Encrypt) recommend renewing with 30 days left, so every 60 days. If your system is automated like we recommend that shouldn't be burdensome, no reason to do it less often than that.
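The arithmetic behind that cadence: with 90-day certs and renewal at 30 days remaining, you renew every 90 - 30 = 60 days. A trivial sketch (function name is made up):

```python
from datetime import date, timedelta

CERT_LIFETIME = timedelta(days=90)  # Let's Encrypt cert lifetime
RENEW_BUFFER = timedelta(days=30)   # recommended: renew with 30 days left

def next_renewal(issued: date) -> date:
    """Renew when 30 days of validity remain, i.e. 60 days after issuance."""
    return issued + CERT_LIFETIME - RENEW_BUFFER

print(next_renewal(date(2019, 1, 1)))  # 2019-03-02
```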
I can see pros/cons either way, just thought you might be interested in this feedback.
Having more options means less fallout if a CA gets compromised.
We comply with U.S. sanctions by not issuing to entities on the SDN list, but that doesn't prevent us from serving the vast majority of people in those countries.
When I registered my domain I got WHOIS protection and then LE just made sure I owned the domain. There was no asking about who I am or where I reside or what lists I might be on.
Some entries have specific domains associated with them, but we are obligated to not serve any domain directly associated with (i.e. controlled by) an SDN entity, not just those listed.
If we become aware that we may be serving an entity on the SDN list (often via someone emailing us) we have to conduct an investigation which may result in termination of service.
What are good non-US CAs?
It's not LE's fault, nor anything they can do anything about, but there are plenty of reasons to have alternative options.
Or better yet, is there a way to start a decentralized organization itself so that no jurisdiction has absolute power over it?
CAs exist solely because you CAN trust them, otherwise what's the point? We'd just have every site self-sign and let the users choose who to trust.
That's the idea, but in practice, I don't really trust the vast majority of them.
Of course there's a question of where you should put such a thing. Even Russians apparently don't find the idea of a Russian CA trustworthy (they expect that Putin would have his thumb on it, and who am I to argue?), a European CA is certainly not free of American influence, so where would this alternative be based? I don't have a good answer.
Or the government could lean on them and require them not to grant certificates to some foreign institutions. Even worse, they could be forced by NSLs to issue fraudulent certs (CT log verification isn't mandatory yet, is it?)
Let's Encrypt is awesome because it's instantaneous! You'd think Google would understand that aspect.
Edit: GCP people: Please give us an explicit "retry" button to press when we've set up the DNS records. (I'm talking about the Google Cloud Load Balancing service here. The UI was awesome up until the point it wasn't. The one thing lacking was a "retry" button.)
Or maybe they just want to keep google CA, limited to Google properties. The same way you put things in googleusercontent.com and not google.com.
Have you noticed where they get the certs from? Let's Encrypt! So there's really no reason for that.
I mean, like... Yes, I understand that by default Google does things at scale, but sometimes you need to do things by precision/on demand, too.
Better hope you don't get your account automatically banned because one of your employees does sketchy things on their own personal account
What I mean here is that TLS up until TLS 1.2 goes like this:
Client: "Hi, I know ciphersuites A, B, C, D, E, F and G"
Server: "OK, let's do C"
And so you can't tell whether the server actually knows A, B, D, E, F, G or even I through Z. They just decided to pick C this time for some reason.
In TLS 1.3 it's sort of better (all of the ciphersuites you shouldn't use don't exist any more) and sort of worse because now it goes:
Client: "Hi, let's do method B, I'll go first, 123456 and a goldfish and the colour yellow"
Server: "Cool, method B works for me, I pick 567890, a swan and mauve"
Probably all TLS 1.3 servers today are willing to do any method, not just method B, since all the methods are shiny and new. But perhaps not, and you can't tell except by failing the connection, which takes more round trips.
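The dialogue above can be sketched as a toy simulation (method names and the function are made up; in real TLS 1.3 a wrong guess triggers a HelloRetryRequest, costing an extra round trip):

```python
def handshake(client_guess: str, server_methods: set) -> int:
    """Return the number of round trips the key exchange costs.
    The client guesses a method and sends its key share up front,
    TLS 1.3 style ("123456 and a goldfish and the colour yellow")."""
    trips = 1
    if client_guess not in server_methods:
        # Server can't use the offered share: it asks the client to
        # retry with a different method, costing another round trip.
        trips += 1
    return trips

print(handshake("B", {"A", "B", "C"}))  # 1: guess was right
print(handshake("Z", {"A", "B", "C"}))  # 2: wrong guess, retry needed
```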
Disclaimer: I run Shodan.
You can have an extra round trip all the time, or only in the case where the client chooses a ciphersuite the server doesn't support. TLS 1.3 makes the typical case much better without hurting the worst case much.
1. Why not integrate and let users monitor all their SSL certs for a domain in a single shot? Like retrieving all certs for a domain (similar to a https://crt.sh identity-style search result)
2. When registered, I did not receive an email confirmation/validation. So I am not sure if I will get an email before my certs are up for renewal.
I still don’t know if that, vs. a version with more paying customers, is a good thing in the long run. Good for now, though.
In the end, the "free" version is good enough for most people. If you're using it in a commercial environment, I would suggest setting up a scheduled donation, which likely helps a lot with what they are doing.
Why do the certificates expire after 90 days? What would be the downside of giving them a longer expiration time?
> They limit damage from key compromise and mis-issuance. Stolen keys and mis-issued certificates are valid for a shorter period of time.
> They encourage automation, which is absolutely essential for ease-of-use. If we’re going to move the entire Web to HTTPS, we can’t continue to expect system administrators to manually handle renewals. Once issuance and renewal are automated, shorter lifetimes won’t be any less convenient than longer ones.
90 days was what they decided was the best compromise between usability and security.
Taken to the extreme ‘immediate’ certificate expiration starts to look a lot like Kerberos which is maybe what we always wanted.
This doesn't make sense. Even assuming you included the ones that had expired certs in "had a cert that expired in the 2010s", it would only be 1.6+3.7~=5.3M.
Where do the remaining 4.3M come from?
"If the domain name is not renewed, redeemed, or purchased through an auction, it is returned to its registry. The registry determines when the domain name is released again for registration."
(At least if you are comfortable terminating your SSL at the load balancer. If you plan to use SSL between the AWS LBs and your EC2 instances, then AWS certs don't work there and you'll need to provision them yourself using something like LE).
Interesting note - ALBs/ELBs (NLBs with SSL termination as well, I would assume, but I am not sure) do not perform validation of your backend certificate. You can terminate at the load balancer and use an expired self signed SHA1 cert for all AWS cares.
I used an Ansible role, antonier77/caddy-ansible, to provision it and it has worked nicely.
But, as another comment mentioned: If you are in an AWS load balancer, you probably want to use the AWS certificates.
Certs/keys get added to a key:value store that is monitored by the edges.
Each edge knows the timestamp of the current key:cert pair that it last successfully wrote (if there was an update), or the timestamp of the key:cert pair that the config-baker wrote during the edge build process if the edge has never written a key:cert pair. If the timestamp in the store is newer, the edge updates the key/cert and restarts itself with the new cert.
The key:value store for keys is also monitored by a config-baker. When the config-baker detects a change in key pairs, it writes a new initial configuration JSON with the new keys/certs, which is stored in the infrastructure-management git. So when a new edge is built and launched by the infrastructure policy enforcer, it immediately has the keys as it comes into service, at which point it becomes just another edge following the same "protocol".
Currently it manages 671 certificates on 16 different edges. After a new key:cert is published in a "go" mode it rolls out in ~1 minute to all the edges.
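The watch-and-reload loop described above might look roughly like this (the store API, class, and field names are all hypothetical; the real system presumably watches a distributed key:value store rather than a dict):

```python
class Edge:
    """Polls a key:value store and reloads when a newer key:cert appears."""

    def __init__(self, store: dict, baked_ts: float):
        self.store = store
        # Timestamp of the pair baked in at edge build time, or of the
        # last pair this edge successfully wrote.
        self.current_ts = baked_ts

    def poll_once(self, name: str) -> bool:
        entry = self.store.get(name)
        if entry and entry["ts"] > self.current_ts:
            self.current_ts = entry["ts"]
            self.reload(entry["key"], entry["cert"])
            return True
        return False

    def reload(self, key, cert):
        pass  # restart the edge process with the new key/cert

store = {}
edge = Edge(store, baked_ts=100.0)
store["example.com"] = {"ts": 200.0, "key": "k2", "cert": "c2"}
print(edge.poll_once("example.com"))  # True: newer pair found, edge reloads
print(edge.poll_once("example.com"))  # False: already up to date
```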
For production we are only using it, currently, for the cert at the backup location, and we couldn't use certbot because, as packaged for Ubuntu, it wouldn't work with Route53 DNS validation. Because it's the backup site, HTTP requests aren't normally directed at the server doing the requests. So I switched to "acme.sh" and that's been really reliable.
Moreover, I don't see how this article is relevant to Let's Encrypt's popularity. There are only two mentions of LE in the article; this is one: "These actors use Let's Encrypts, Comodo, Sectigo, and self-signed certificates in their MitM servers to gain the initial round of credentials." So they use normal public infrastructure to get certificates? How is that concerning?
It allows someone to discover every one of your HTTPS certificates that you've requested.
For instance, here are some free rabbitmq clusters to use...
Default password of guest/guest works on http://rabbitmq.avtomain-crypto.com/#/
Several distinct outfits sell what they call "passive DNS" which is a feed of snooped DNS queries and their answers, minus any identifying information. So you don't need a certificate, if anybody, anywhere, looks up the name and it has an answer then these systems will tell you what it is.
The records come through roughly like this:
What they aren't is coupled to other cryptocurrency nonsense like mineable/tradeable tokens.
For Bitcoin there's currently around $132,000 of incentive for mining given out every 10 minutes. In a race to the bottom that basically equates to a bit under $132,000 worth of compute resources expended for mining every 10 minutes. An attacker with enough resources to perform an attack on Bitcoin would have to spend more than that $132,000 every 10 minutes and keep it up for the duration of their attack.
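That figure scales up quickly. A back-of-the-envelope calculation (using the comment's $132,000-per-10-minutes number, which of course moves with the Bitcoin price and block reward):

```python
REWARD_PER_10_MIN = 132_000  # USD of mining incentive per 10 minutes (from the comment)

per_hour = REWARD_PER_10_MIN * 6   # six 10-minute intervals per hour
per_day = per_hour * 24

print(per_hour)  # 792000: rough hourly cost floor for an attacker outspending miners
print(per_day)   # 19008000: ~$19M per day, sustained for the attack's duration
```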
"Blockchain" for random hip projects more often than not is just buzzword nonsense that does nothing to improve security. A blockchain without mining is just silly, mining only works when it provides sufficient incentive such that attacking that blockchain is much more expensive than any payoff of attacking it is worth.