But LE has been around for a while and its certs mostly work fine everywhere! That's partially because it was already in a ton of root programs, and partially because of cross-signed roots. A cross-signed root is effectively the root CA except signed by a different CA. A client that doesn't yet trust the new root can still trust it via the cross-signed root.
Most sites need two certificates in the chain to be validated: the leaf certificate certifying the site itself, and the intermediate online CA that signs leaf certs. The intermediate is signed by the root, which is in your trust store, so you don't need to send the root. Lots of sites send it anyway. That's a misconfiguration and just makes TLS connections slower: you're sending an extra certificate that by definition nobody uses. It's useless because it's self-signed: either the client already trusted it (and then didn't need it), or it doesn't trust it, and a self-signed certificate is unconvincing.
New CAs also need to send their cross-signed root along to be widely supported, so they basically get that worse performance all the time. Unfortunately, you don't know in advance which roots a TLS client will trust, so until you're in every major trust store, you have to send it. Now that LE is in every major trust store, it's a real first-class CA, and everyone gets to look forward to doing away with the cross-signed root crutch. (You can't do it immediately because of old, unpatched machines.)
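The chain mechanics above can be sketched offline with openssl: build a toy root -> intermediate -> leaf chain and watch validation succeed only when the intermediate is supplied alongside the leaf. Everything here (names, filenames) is made up for illustration.

```shell
# a reusable extensions file marking a cert as a CA
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext

# 1. self-signed root (this is what would live in a client's trust store)
openssl req -newkey rsa:2048 -nodes -keyout root.key -out root.csr \
  -subj "/CN=Toy Root CA"
openssl x509 -req -in root.csr -signkey root.key -out root.pem \
  -days 2 -extfile ca.ext

# 2. intermediate, signed by the root
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Toy Intermediate CA"
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -out int.pem -days 2 -extfile ca.ext

# 3. leaf, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=www.example.test"
openssl x509 -req -in leaf.csr -CA int.pem -CAkey int.key -CAcreateserial \
  -out leaf.pem -days 2

# the "client" trusts only the root; the server must supply the intermediate
openssl verify -CAfile root.pem -untrusted int.pem leaf.pem   # leaf.pem: OK

# without the intermediate, validation fails (the incomplete-chain problem)
openssl verify -CAfile root.pem leaf.pem >/dev/null 2>&1 || echo "incomplete chain"
```

Note that nothing requires sending `root.pem` itself: the verifier either already has it or wouldn't believe it anyway.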
LE is great and has done wonders to make all the other CAs up their game.
You can read more about LE's trust chain here: https://letsencrypt.org/certificates/
New roots get spun up all the time (maybe a dozen a year? That sort of ballpark). For Let's Encrypt that meant the cross-signing arrangement described; an existing CA, though, can use its older roots to sign newer ones while it waits for trust stores to do their thing. This produces a certificate for the same name, but instead of being self-signed it's signed by the old root.
If you send this cert as well as your ordinary intermediate, some older clients that had no reason to trust the intermediate can now connect it back to a root they do trust. So for a few bytes you get a compatibility win.
This type of thing is also done when a root is replaced urgently due to distrust. Clients that are on top of their game trust the replacement root. Clients that don't know about this distrust trust the old root, which has signed the replacement. So everybody trusts the replacement, yet up-to-date devices aren't at risk from the old root any more.
A mechanism called AIA chasing lets browsers that use it "fix" an incomplete chain, but at some cost to privacy. Another approach is to cache bits of chains and try to use the cached bits to repair broken chains; again, this has privacy problems. Definitely send your intermediate (any modern leaf cert in the Web PKI has an intermediate), but more might be necessary.
I would like to see AIA chasing done by TLS servers (there's no major privacy concern there) so that this isn't extra work for admins, but we don't live in that world, so it's on you (or tools you use) to get it right.
As a result there are two versions of each intermediate certificate: one signed by DST Root CA X3, one signed by ISRG Root X1. The URL for the former is baked into your leaf certificate. You _can_ configure servers to send the other version, and Let's Encrypt in fact does so for the test server required by Mozilla's CA root trust program, but most people don't need to do that.
If there ever is a good reason to switch over (maybe in a few years, when DST Root CA X3 is due to expire), then thanks to the relatively short lifetime of Let's Encrypt certificates they can move all their subscribers (at least, all those with compliant ACME implementations; if you hard-code everything, you get to keep both halves when it breaks) without those subscribers even needing to know about it, let alone make any changes.
Also what is stopping a government from getting access to these private keys?
A number of CAs are effectively under the control of a government: https://ccadb-public.secure.force.com/mozilla/IncludedCACert...
The CABforum BRs define, de facto, the rules/requirements for being a CA: https://cabforum.org/baseline-requirements-documents/
Note that there have been several ballots which have been won by the CAs against the browsers, after which the browsers have turned around, shrugged, and just enforced the new requirements for their root stores without them becoming CAB BRs.
Most obviously this applies to Google and Apple requiring all newly issued certificates to be CT qualified.
But beyond that warm fuzzy feeling, and what they write about themselves on their web site, I'd love to know more particulars about the nature of why they're different, why so many people trust them, how they came about, who's running it, who's supporting it, who are they competing against, and what their mission and back story is. Why are they pushing other CAs to up their game, and why weren't those other CAs doing what Let's Encrypt is now pushing them to do, in the first place?
Provides certificates for free that are accepted by all browsers/OSes.
Lets the user easily generate a wildcard certificate if you control the domain linked to it.
In other words, remove the human interaction.
Servers can now easily and automatically renew certificates. No need to go through weird interfaces and obscure settings to always have an up-to-date certificate.
And, if you run your own server, their client even generates the configs for most major web servers on most major OSes, so you don't even need to lift a finger. Type one command and boom, you're secure.
Doesn't get easier than that.
Has anyone set up the new wildcard certs? If so, who did you choose as your DNS 01 Challenge provider? I currently do DNS through a local provider and they don't have an API so it's been out of my reach.
Are you unable to add arbitrary TXT records with your provider?
cert-manager also supports DNS 01, but of course they support the bigger providers (so they'll take some options and do the web requests to set up the TXT records)...
I haven't looked into it a crazy amount (since in the end I can still just make multiple http 01 validated certs), but was just curious.
acme-dns is specifically designed for this purpose: https://github.com/joohoi/acme-dns
By "UPDATE ACL" I believe that you are referring to the DNS UPDATE RFC -- it looks like cert-manager doesn't support generic UPDATEs yet.
Thinking about it again I'm not sure that I fully understand what you were suggesting -- are you suggesting adding a CNAME for x.example.com that redirects to yyyy.different-provider.com, and letting let's encrypt follow and work it out?
I also wanted to know how everyone was dealing with their DNS requirements/how people were making the decision (cost, trust, privacy, country of origin, whatever else).
Probably the easiest way forward, if you have any infrastructure yourself, is to simply delegate some subzone of one of your domains to a nameserver you run yourself (like delegating letsencrypt.yourdomain.example to your own nameserver), then point your CNAME to a name beneath that, and configure that nameserver for dynamic updates so your LE client can change the TXT record(s) on that server as needed.
Alternatively, you can delegate the _acme-challenge zone to a nameserver under your control, although you then have to configure each of the zones on the nameserver too.
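As a concrete (hypothetical) illustration of the delegation approach, the records might look like this; all names are placeholders:

```
; in yourdomain.example's zone:
acme.yourdomain.example.                 NS     ns1.yourdomain.example.
_acme-challenge.www.yourdomain.example.  CNAME  www.acme.yourdomain.example.

; ns1.yourdomain.example then serves the TXT records under
; acme.yourdomain.example, updated dynamically by your ACME client
```

The CA's resolver follows the CNAME, so the TXT record it actually reads lives on the nameserver you control, not at your registrar.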
Since I'll need an _acme-challenge.* PER-DOMAIN, doesn't this just move the goalpost to setting up the CNAME records (instead of TXT records directly), assuming my current registrar doesn't support wildcard CNAME entries?
If I'm understanding DNS 01 ACME challenges correctly, to register two subdomains first.one.example.com and second.one.example.com, I need to set up TWO TXT records, _acme-challenge.first.one.example.com and _acme-challenge.second.one.example.com. This means I need two CNAME records (or one if wildcards were enabled)...
I do thank you for your input; thinking about this has led me to consider just running my own nameserver altogether. I'm going to evaluate all these approaches, see how they pan out, and write a blog post to share.
According to my reading of the challenge specification, it shouldn't work. Notice that it says:
> 2. Query for TXT records for the validation domain name
> 3. Verify that the contents of one of the TXT records match the digest value
To me, this means that it should issue a query for TXT records only (not for ANY) and hence the server shouldn't even see your CNAME "redirector" record.
Thus, if your method works, either the server is wrongly implemented, there's a flaw in my reading comprehension, or the specification should be amended...
My beef with CF is that I cannot see which sites are behind CF.
But you are completely correct that running a CDN (HTTP or HTTPS) requires you to MITM everything. The same complaint applies to Akamai, Level 3, or any other CDN you can name. It definitely is a problem, but not one of CloudFlare's own making.
It would be a fair criticism of CloudFlare to say that they've made their defaults tend towards MITM even though it is very likely that most websites don't actually need a CDN -- meaning that they are MITM-ing more traffic than they need to. And they have had pretty bad bugs in the past that revealed large amounts of private data that was sent over TLS but was MITM'd by them.
I do agree that CloudFlare being so central to so many large websites is a problem though. I just don't agree that this discounts their use as a purely-DNS service.
Using Cloudflare for DNS, and only DNS, doesn't subject you to this.
If you decide to use their reverse proxy features, then sure, the MITM criticism applies.
* `server: cloudflare` - Although CloudFlare runs nginx, they report themselves properly in the `server` header.
* `Cookie: __cfduid=*` - CloudFlare uses this cookie to identify users and prevent abuse. If you delete the cookie too many times, your IP is flagged by CloudFlare and you may receive an interstitial blocking you from accessing a site.
* IP ranges: https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 - CloudFlare owns the routing to these IP addresses. If you want, set up some firewall rules to block access to these ranges.
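A quick sketch of checking for the first signal above. The header blob here is canned for illustration; in practice you'd feed it from something like `curl -sI https://example.com`.

```shell
# canned example of response headers from a site fronted by Cloudflare
cat > headers.txt <<'EOF'
HTTP/2 200
server: cloudflare
cf-ray: 7d2f0a1b2c3d4e5f-AMS
EOF

# case-insensitive match on the server header described above
if grep -qi '^server: cloudflare' headers.txt; then
  echo "likely behind Cloudflare"
fi
```

This is only a heuristic: the header can be stripped or spoofed, so it tells you "probably", not "certainly".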
I'm second most concerned about my ISP. They see every outgoing connection I make, and have no trouble tying it all back to me.
Cloudflare is... just not that big a deal. Are you concerned about Microsoft being able to MITM every connection to a site hosted on Azure? Amazon being able to MITM every connection made to AWS? Google being able to MITM every connection made to GCE?
"Yes" is a fair answer, but it means you're using a minuscule fraction of the available internet. Otherwise I don't really see the need to pick on Cloudflare. They're doing exactly what the company that's using them asked them to do (and getting paid for it too...)
It's not just Cloudflare themselves though. It's everyone else on the open Internet between the Cloudflare edge node and the site I actually wanted to connect to.
I'm not too worried about the parties that the site operator has a direct contractual relationship with, but traffic from Cloudflare could be going unencrypted to literally anyone with an AS number.
But how can I, a website user, know that? Given how many sites are served by CF, my private, decrypted data could be aggregated and I would have no clue.
For ISPs, use a VPN. And I seriously doubt AWS (or Azure) has the means to MITM: reading private keys out of virtual machines? C'mon.
Banking is a real bitch, agree :)
That is to say I now believe that not only are Google, Cloudflare, Amazon not proactively sniffing traffic, but also that they'll have invested a massive amount of money making sure it's really hard to do undetected.
Of course I also fully expect that any one of them would give me up to law enforcement iff compelled by a court.
That's only if the website(s) are using only their IaaS offerings (which I doubt, because those are crazy expensive compared to DO or Vultr) and not their PaaS offerings. With PaaS (think Heroku), they terminate the SSL and control the HTTP server software, not you.
Google and Facebook have legal taps; users willingly provide their chats, emails, links, likes, photos, connections, and locations because it's a great service and it's free. Both are ad companies by main revenue, and it's vital for them to use people's data.
AWS, Azure, and Apple are not ad companies; their main revenue is paid infrastructure, paid software, and paid hardware. Their customers are not users but companies. The reputational risk of openly tapping the data themselves would ruin existing revenue. What companies do with users' data is not their concern. Apple is a special case, with a closed ecosystem, strong privacy and security, and main income from hardware.
Cloudflare is something in between. They provide reverse proxy services, where your little site sits behind a huge wall, for free. Income comes from paid WAF security features and the ability to upload your own SSL certs to CF. In any case, you have to allow MITM of people's data.
The incentive for CF to use users' decrypted data is huge: it could shoot them up to the ranks of Google and Facebook, to hundreds of billions of dollars. So I have my doubts about whether that data is not being harvested.
I think I've said too much already, shutting up :)
Don't most ISPs have to comply with certain laws about protecting their customers? I think those regulations are much stricter than what is required of CloudFlare.
Sites behind CF usually include two headers in the responses: cf-ray and expect-ct.
If you see these headers, it's almost certain the response is coming from CF. So it's likely those extensions are doing that; perhaps you can verify against their source code.
If the thought of connecting to a site hosted by Cloudflare absolutely disgusts you, visit https://www.cloudflare.com/ips/ for a list of IPs you can block.
Do other CDNs offer free plans with SSL?
Seems like you just have an issue with CloudFlare, and will keep changing the subject.
This is against the whole idea of SSL, a closed tunnel between users and websites, so yes, I have an issue.
Plus, many users set their DNS resolvers to CF DNS; browsing history goes there too.
And...that's it. CloudFlare operates in this spirit. It does not route traffic from its edge nodes across the open internet. It routes it across its private network.
So, no, it's not against "the whole idea of SSL"; it's what you have decided the idea of SSL is and nobody else on the internet really agrees with.
The amount of disingenuity you're hucking in this thread is pretty gross and you should stop.
Now, it would be great if Cloudflare supported LE integration at its free tier (replacing the Cloudflare wildcard cert).
Thanks for everything you guys have done to accomplish this, Let's Encrypt!
> The ‘certificates per registered domain per week’ limit has been raised from 20 to 50.
This can especially be a problem because the renewal exception to the rate limit doesn't work like you might expect. If a particular cert (meaning the exact same set of domains) has already been created, it can be renewed regardless of whether it would exceed the rate limit - but it still counts against the rate limit. If 45 certs have already been renewed in the last week, you can only create 5 new ones. If 80 certs have been renewed in the last week, you can't create any new ones. They plan to change this, but it hasn't happened yet: https://github.com/letsencrypt/boulder/issues/2800
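A tiny sketch of that counting behavior; the numbers mirror the examples above, and the 50/week figure is the rate limit under discussion:

```shell
# renewals always succeed, but (under current behavior) still consume quota
LIMIT=50
RENEWALS_THIS_WEEK=45
NEW_CERTS_ALLOWED=$(( LIMIT - RENEWALS_THIS_WEEK ))
echo "$NEW_CERTS_ALLOWED"   # 5 brand-new certs left this week

RENEWALS_THIS_WEEK=80       # over the limit: renewals still go through...
NEW=$(( LIMIT - RENEWALS_THIS_WEEK ))
[ "$NEW" -gt 0 ] && echo "$NEW" || echo 0   # ...but 0 new certs allowed
```

Once the linked boulder issue lands, renewals would stop counting against the limit and this arithmetic would no longer apply.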
Some organizations have gotten rate limit exceptions to handle this particular issue. Maybe they looked at some internal metrics and decided raising it to 50 would reduce the number of exceptions they have to make while still curbing misuse.
That might seem like a ton per week, but consider a PaaS (example-123.herokuapp.com) or a blog platform (example-diary.someblogapp.com).
Personally I'd prefer a wildcard cert there, but at organizations where certificate inventory is a requirement (where they need to track, procure, and invalidate on a per-subdomain basis) Let's Encrypt is a solid option.
If you have a lot of subdomains, you may want to combine them into a single certificate, up to a limit of 100 Names per Certificate. Combined with the above limit, that means you can issue certificates containing up to 2,000 unique subdomains per week. A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate.
Edit: lvh has explained this better at https://news.ycombinator.com/item?id=17699037
What about Linux and the BSDs?
Tangential questions: OSes are usually the system's primary stores of root certs, if I understand correctly, but browsers and other applications store them too. How are conflicts resolved? If Mozilla distrusts Fubar CA's root cert and the OS still trusts it, what happens? And why have redundant stores? I suspect the answer is that the browser vendor wants to ensure the user has a happy TLS experience despite OS problems, but that's just a reasonable guess.
 A reference right in front of my nose: https://news.ycombinator.com/item?id=17699037
For Mozilla, their NSS is almost completely independent of OS trust stores, with the special case that on Windows (maybe macOS, but I'm not sure) they offer to look in your OS trust store for any additions you've made to the OS vendor's store, and trust those on the rationale that you must have had some reason to add them.
For Chrome the OS trust store is used (on Android this is of course Google's trust store, but on a desktop it isn't), but Chrome layers some Google policy rules on top.
> Only two major browser vendors also operate a distinct major trust store. If you're Microsoft (IE, Edge) or Apple (Safari) this is de facto not a problem since you also control the OS.
> For Chrome the OS trust store is used (on Android this is of course Google's trust store, but on a desktop it isn't), but Chrome layers some Google policy rules on top.
If only two major browser vendors operate a distinct major trust store, and they aren't Microsoft or Apple, I infer that Google operates a distinct major trust store (along with Mozilla). But that seems to contradict the second statement: why operate a trust store that you don't use? For ChromeOS?
Linux distros typically use Mozilla's root list.
> If Mozilla untrusts Fubar CA's root cert and the OS still trusts it, what happens?
Then it no longer works in Firefox but works in other apps.
I don't know anything about Gordon Bock (credited with that page), or indeed Microsoft's PM for their root trust programme, Mike Reilly. All the trust programmes (except Mozilla's) are run in a way that doesn't give us (as relying parties, or as subscribers) much insight. I'd love to know more about why it took so long to approve ISRG, but likely we'll never be told.
• You can use CAA DNS records to choose which CAs can create certs for your domain.
• You can watch Certificate Transparency logs to catch CAs that didn't obey.
AFAIK both are becoming mandatory for CAs. They don't technically stop violations, but they ensure CAs get caught and shut down if they fail to obey the rules (like StartCom and Symantec).
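A CAA policy is just a few DNS records; a hypothetical setup restricting issuance to Let's Encrypt might look like this (names are placeholders):

```
; CAA records for example.com
example.com.  IN CAA  0 issue "letsencrypt.org"              ; only LE may issue
example.com.  IN CAA  0 issuewild ";"                        ; forbid wildcard issuance entirely
example.com.  IN CAA  0 iodef "mailto:security@example.com"  ; where CAs report violations
```

A compliant CA checks these records at issuance time and refuses to issue if it isn't listed; CT logs then let you verify, after the fact, what actually got issued.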
Isn't there a timing issue there? E.g., if I get a cert from Comodo and change my CAA record to specify Let's Encrypt immediately afterwards, anyone checking whether the issuer matches CAA could get a false positive.
Point is, people will be able to figure out that you're lying if you attempt to claim that the cert was issued incorrectly.
Why would you do that?
I'm taking issue with the idea of checking CAA records against CT logs after the fact as a means of verifying CA compliance with CAA.
This is orthogonal to CAA records. You can check CT logs without having a CAA records, and CT logs can also be used to detect misbehavior from a CA you authorized in your CAA records. At the same time, CAA records are preventative, whilst CT logs only allow detection after the fact.
I would be very interested to see the percentage of the Internet that is actively using LE certificates vs. the number of certificates that have simply been generated for valid domains.
Most German banks, for example, use EV (Extended Validation) certs, where the organization name appears in the browser's address bar. However, the benefit of EV certificates is debatable, since it's pretty easy to register a valid-sounding company under some jurisdiction or another.
Also, organization structures aren't transparent to everybody (how many of your non-tech friends would be surprised if google.com had a certificate issued to "Alphabet Inc."?).
A clear example of this is KLM, where www.klm.com's certificate is registered to "KONINKLIJKE LUCHTVAART MAATSCHAPPIJ N.V." (try that on a mobile browser!). It's sufficiently different to what people expect (which is, admittedly, just an initialism) that I've known various people who actually understand EV certificates get thrown by it.
Like a lot of big companies, Amazon has a cert from Digicert. To my knowledge, Digicert does not issue DV certs, only OV and EV.
That said, I agree that DV certs are good enough for production for most people.
EV certificates tell you that a site is owned by a company with a particular name, not that it is the company you actually want. There's a reason browser vendors are de-emphasising EV: it isn't very useful.
Fine for e-commerce. They don't do Extended Validation or any of the more "I am really who I say I am" certs.
They're used to seeing the EV SSL-style address bar when they sign in to their online banking and such.
Some people think that EV SSL is like $400, it's not, you really shouldn't be paying over $100/year. Still a racket in my opinion but not one that's easy to circumvent.
You could totally register any random name like Really Legit Internet Enterprise LLC with some state government, put $100 in a bank account, scan the incorporation paperwork and get an EV SSL cert.
Is this a good approach? FYI, I have a blog and I'm not too paranoid about security between CF and my web server.
(If you're hosting an apex domain, e.g., example.com and not just www.example.com, it also makes things easier if you can use Route 53 as your DNS host, because CloudFront IPs keep changing and you can't make a CNAME for an apex.)
Technically this is implemented in two ways. One: where trust stores themselves have distinct trust for different feature sets, ISRG / Let's Encrypt asked only for the flags needed to do Web PKI; e.g., in Mozilla's program they didn't ask for S/MIME or code signing (today Mozilla doesn't do code signing anyway).
Two: certificates have a section called Extended Key Usage (EKU), which can list arbitrary purposes for which the certificate's issuer says the public key included in the certificate is to be used. The EKUs on Let's Encrypt certificates specify two purposes, 1.3.6.1.5.5.7.3.1 (TLS Server Authentication) and 1.3.6.1.5.5.7.3.2 (TLS Client Authentication), so these certificates proclaim themselves unsuitable for other purposes.
Without the client EKU bit, a conforming TLS implementation would reject the connection.
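You can see those EKU values on any certificate with openssl. A self-contained sketch that generates a throwaway cert carrying both purposes and prints them back (all names here are made up):

```shell
# throwaway key + self-signed cert with the serverAuth and clientAuth EKUs
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -subj "/CN=eku-demo" -days 2 \
  -addext "extendedKeyUsage = serverAuth, clientAuth"

# print the EKU section; openssl expands the OIDs to friendly names,
# i.e. "TLS Web Server Authentication, TLS Web Client Authentication"
openssl x509 -in demo.pem -noout -text | grep -A1 "Extended Key Usage"
```

Running the same `-text` dump against a real Let's Encrypt leaf shows the identical two purposes.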
Rather sooner: at the end of September 2021, the DST Root CA X3 that cross-signs their existing intermediates expires.
In practice many systems don't directly obey expiries baked into root certs: a self-signed root certificate is largely a vehicle for conveniently moving the key inside it. It's not signed by anybody we trust independently, so why care what it does or does not say about that key?
And of course if the IdenTrust / ISRG relationship remains good there's no reason IdenTrust can't sign new Let's Encrypt intermediates with another of their CA roots that hasn't expired. The short lifetime of Let's Encrypt leaf certs means they wouldn't even need to have decided before 2021 what to do about this.
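Checking how close any certificate is to an expiry cut-off is easy with openssl's `-checkend`. A sketch using a throwaway 2-day cert (names are placeholders):

```shell
# throwaway self-signed cert valid for 2 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.pem \
  -subj "/CN=expiry-demo" -days 2

# -checkend N succeeds if the cert is still valid N seconds from now
openssl x509 -in t.pem -noout -checkend 86400  && echo "still valid tomorrow"
openssl x509 -in t.pem -noout -checkend 604800 || echo "expires within a week"
```

The same two commands against a downloaded root (e.g. DST Root CA X3) tell you whether the deadline being discussed has arrived yet.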
The answer is that for Oracle's Java, updating to 7u111 is necessary for this to work (or to 8u101 if you run Java 8); for other people's Java implementations it will depend on where they get their trust store, though Oracle's is the most popular in Java.
Now I'm wondering how many invisible Stripe widgets I've been missing.