Let's Encrypt Root Trusted by All Major Root Programs (letsencrypt.org)
1532 points by okket on Aug 6, 2018 | 144 comments

Some context on how this works: your system (usually OS) ships with a bunch of root certificates that are allowed to sign any certificate (de facto). New CAs need to get in that store to be trusted, but they don't get to be in that store until they convince a vendor (here, "root program") to include them.

But LE has been around for a while and its certs mostly work fine everywhere! That's partially because it was already in a ton of root programs, and partially because of cross-signed roots. A cross-signed root is effectively the root CA except signed by a different CA. A client that doesn't trust the new root yet trusts it via the cross-signed root.

Most sites need two certificates in the chain to be validated: the leaf certificate certifying the site itself, and the intermediate online CA that signs leaf certs. The intermediate is signed by the root, which is already in your trust store, so you don't need to send the root. Lots of sites send it anyway. This is a misconfiguration and just serves to make TLS connections slower: you're sending an extra certificate along that by definition nobody uses. That's because the root is self-signed: either the client already trusted it (and didn't need it sent), or it doesn't trust it, and then the self-signed certificate is unconvincing.
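The validation walk described above can be sketched as a toy path builder. Everything here is illustrative: a real client verifies signatures, not just subject/issuer names, and the `build_chain` helper and cert dicts are invented for the example.

```python
# Toy model of TLS chain building. Certificates are modeled as dicts with
# "subject" and "issuer" names; a real client checks signatures too.

def build_chain(leaf, sent_intermediates, trusted_roots):
    """Walk issuer links from the leaf until we reach a trusted root name."""
    by_subject = {cert["subject"]: cert for cert in sent_intermediates}
    chain = [leaf]
    current = leaf
    while current["issuer"] not in trusted_roots:
        issuer = by_subject.get(current["issuer"])
        if issuer is None:
            raise ValueError("incomplete chain: missing " + current["issuer"])
        chain.append(issuer)
        current = issuer
    return chain  # note: the trusted root itself never appears on the wire

leaf = {"subject": "example.com", "issuer": "Let's Encrypt Authority X3"}
intermediate = {"subject": "Let's Encrypt Authority X3", "issuer": "ISRG Root X1"}
chain = build_chain(leaf, [intermediate], trusted_roots={"ISRG Root X1"})
print([c["subject"] for c in chain])  # ['example.com', "Let's Encrypt Authority X3"]
```

Sending the root alongside would just add bytes: the loop stops as soon as it sees an issuer name that is already trusted.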

New CAs also need their cross signed root sent along to be widely supported, so they basically get that worse performance all the time. Unfortunately, you don't know in advance which roots a TLS client will trust, so until you're in every major trust store, you have to. Now that LE is in every major trust store, it's a real first-class CA and everyone gets to look forward to doing away with the cross-signed root crutch. (You can't do it immediately because old, unpatched machines.)

LE is great and has done wonders to make all the other CAs up their game.

You can read more about LE's trust chain here: https://letsencrypt.org/certificates/

A reason you might want to send a "root" in some cases:

New roots get spun up all the time (maybe a dozen a year? That sort of ballpark). For Let's Encrypt that meant the cross-signing arrangement described, but an existing CA can use its older roots to sign newer ones while it waits for trust stores to do their thing. This produces a certificate for the same name, but instead of being self-signed it's signed by the old root.

If you send this cert as well as your ordinary intermediate, some older clients that had no reason to trust the intermediate can now connect it back to a root they do trust. So for a few bytes you get a compatibility win.

This type of thing is also done when a root is replaced urgently due to distrust. Clients that are on top of their game trust the replacement root. Clients that don't know about this distrust trust the old root, which has signed the replacement. So everybody trusts the replacement, yet up-to-date devices aren't at risk from the old root any more.

A mechanism called AIA chasing lets browsers which use it "fix" an incomplete chain, but at some cost to privacy. Another approach is to cache bits of chains and try to use the cached bits to fix broken chains; again, this has privacy problems. Definitely send your intermediate (any modern leaf cert in the Web PKI has an intermediate), but more might be necessary.

I would like to see AIA chasing done by TLS servers (there's no major privacy concern there) so that this isn't extra work for admins, but we don't live in that world, so it's on you (or tools you use) to get it right.
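Server-side AIA chasing, as wished for above, could look roughly like this sketch. Everything here is hypothetical: `fetch` stands in for an HTTP GET of the DER certificate at the AIA "CA Issuers" URL, and the dict-based certs and URLs are invented.

```python
# Sketch of AIA "chasing": a cert's Authority Information Access extension
# carries a URL where the issuing certificate can be downloaded, so a server
# configured with only its leaf can fetch the rest of the chain itself.

def complete_chain(leaf, fetch, max_hops=3):
    chain = [leaf]
    current = leaf
    for _ in range(max_hops):
        url = current.get("ca_issuers_url")
        if url is None:          # no AIA pointer: nothing more to chase
            break
        issuer = fetch(url)
        if issuer is None:       # fetch failed: serve what we have
            break
        chain.append(issuer)
        current = issuer
    return chain

certs = {
    "http://ca.example/int.der": {"subject": "Example Intermediate",
                                  "ca_issuers_url": None},
}
leaf = {"subject": "www.example.com",
        "ca_issuers_url": "http://ca.example/int.der"}
print([c["subject"] for c in complete_chain(leaf, certs.get)])
# ['www.example.com', 'Example Intermediate']
```

Done server-side like this, the fetch happens once at configuration time rather than on every client connection, which is why it has no client privacy cost.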

You're correct, but for clarity for those playing along at home: that's just another variant of a cross-signed root. It's not an actual root CA in the sense that I was using it (self-signed): that is still always pointless.

Interestingly, as shown in the diagram you've linked, in this particular scenario Let's Encrypt didn't have IdenTrust (owner of the DST Root CA X3 used for the cross signature) sign their new root (ISRG Root X1). Instead, DST Root CA X3 was used to sign Let's Encrypt's intermediates, so there isn't actually anything to "do away with", really.

As a result there are two versions of each intermediate certificate: one signed by DST Root CA X3 and one signed by ISRG Root X1. The URL for the former is baked into your leaf certificate. You _can_ configure servers to send the other version, and Let's Encrypt in fact does so for the test server required by Mozilla's CA root trust program, but most people don't need to do that.

If there ever is a good reason to switch over (maybe in a few years, when DST Root CA X3 is due to expire), then thanks to the relatively short lifetime of Let's Encrypt certificates they can move all their subscribers (at least, all those with compliant ACME implementations; if you hard-code everything, you get to keep both halves when it breaks) without those subscribers needing to even know about it, let alone make any changes.

If somehow the private key of a root CA gets leaked, will that make all devices vulnerable?

Also what is stopping a government from getting access to these private keys?

Until we discover the compromise, yes. We have a few better tools for doing that now, like Certificate Transparency.

A number of CAs are effectively under the control of a government: https://ccadb-public.secure.force.com/mozilla/IncludedCACert...

The CABforum BRs define, de facto, the rules/requirements for being a CA: https://cabforum.org/baseline-requirements-documents/

> The CABforum BRs define, de facto, the rules/requirements for being a CA: https://cabforum.org/baseline-requirements-documents/

Note that there have been several ballots which have been won by the CAs against the browsers, after which the browsers have turned around, shrugged, and just enforced the new requirements for their root stores without them becoming CAB BRs.

Most obviously this applies to Google and Apple requiring all newly issued certificates to be CT qualified.

Note that CABF bylaws require a simple majority of browsers to vote positively for a ballot for it to pass, regardless of how CAs vote.

Yes; nothing.

I've heard great things about Let's Encrypt, many people who I trust use and trust Let's Encrypt, and I use and trust them too.

But beyond that warm fuzzy feeling, and what they write about themselves on their web site, I'd love to know more particulars about the nature of why they're different, why so many people trust them, how they came about, who's running it, who's supporting it, who are they competing against, and what their mission and back story is. Why are they pushing other CAs to up their game, and why weren't those other CAs doing what Let's Encrypt is now pushing them to do, in the first place?

Provides an API to generate, renew and revoke certificates.

Provides certificates for free that are accepted by all browsers/OSes.

Lets the user easily generate wildcard certificates if they control the domain linked to them.

In other words, removes the human interaction.

Servers can now easily and automatically renew certificates. No need to go through weird interfaces and obscure settings to always have an up-to-date certificate.

Belphemur said it best: before LE, SSL/TLS was relatively 'obscure'. Sure, many sites had it, but it was an optional extra that cost a yearly fee. Now, any old Joe Bloggs can set up a webserver (or use tools like cPanel, which have LE integrated) and get the nice shiny green padlock with 'secure' in their browser.

And, if you run your own server, their client even generates the configs for most major web servers on most major OSes, so you don't even need to lift a finger. Type one command and boom, you're secure.

Doesn't get easier than that.

Glad to hear this! Let's Encrypt is such an amazing resource -- it single-handedly caused a leap in internet security, as well as removing what was essentially an artificially inflated (IMO) barrier to entry (for most people) to getting certs.

Has anyone set up the new wildcard certs? If so, who did you choose as your DNS 01 Challenge[0] provider? I currently do DNS through a local provider and they don't have an API so it's been out of my reach.

For wildcard certs, I run my own DNS infrastructure, so I just created the _acme-challenge records as indicated by the first run of certbot

Are you unable to add arbitrary TXT records with your provider?

I am, but I'm trying not to do it manually -- I actually use cert-manager[0] on a tiny kubernetes cluster -- which means when I make an Ingress for an application (app.example.com) (which does what it sounds like), watcher processes kick off and go get a cert for me with http 01 validation currently. This works thanks to cert-manager being able to automate the process of setting up the proper /.well-known/xxx route with access to kubernetes features.

cert-manager also supports DNS 01, but of course they support the bigger providers (so they'll take some options and do the web requests to set up the TXT records)...

I haven't looked into it a crazy amount (since in the end I can still just make multiple http 01 validated certs), but was just curious.

[0]: https://github.com/jetstack/cert-manager/

FWIW Personally I just use the DNS RFC2136 plugin for certbot. I use bind for DNS.

Could you expand on this? You run bind on your server, then you add your own server as an (additional?) nameserver to your domain's registrar?

You can also set up a CNAME record to delegate the challenges to a provider with a supported API.

Doing it this way is also more secure, since it means you don't have to give your web server unrestricted write access to every DNS record under your domain.

acme-dns is specifically designed for this purpose: https://github.com/joohoi/acme-dns

If you use BIND you can set an UPDATE ACL that allows your web server access to change acme challenges only: https://fanf.dreamwidth.org/123294.html

BIND looks fantastic but I really like the restricted nature of acme-dns -- I don't know much about DNS and I don't want to inherit a huge amount of functionality that I don't know how to properly administer -- I really only want to manage a nameserver for acme challenges.

By "UPDATE ACL" I believe that you are referring to the DNS UPDATE RFC[0] -- it looks like cert-manager doesn't support generic UPDATEs yet[1].

[0]: https://tools.ietf.org/html/rfc2136

[1]: https://github.com/jetstack/cert-manager/issues/468

It’s pretty easy to make `dehydrated` use `nsupdate` for DNS challenges: https://github.com/fanf2/doh101/blob/master/roles/doh101/fil...

This is huge, thanks! It looks like there's also a PR to cert-manager to support it:


Thanks for the tip -- that sounds like a great way forward. However, I can't find any documentation on it after checking the community page (https://community.letsencrypt.org/t/acme-v2-production-envir...). Do you have a pointer to the documentation on this feature?

Thinking about it again, I'm not sure that I fully understand what you were suggesting -- are you suggesting adding a CNAME for x.example.com that points to yyyy.different-provider.com, and letting Let's Encrypt follow it and work it out?

I also wanted to know how everyone was dealing with their DNS requirements/how people were making the decision (cost, trust, privacy, country of origin, whatever else).

_acme-challenge.yourdomain.example CNAME whatever.name.your.txt.record.lives.at.invalid

Probably the easiest way forward if you have any infrastructure yourself is to simply delegate some subzone of one of your domains to a nameserver you run yourself (like, delegate letsencrypt.yourdomain.example to your own nameserver), then point your CNAME to a name beneath that, and configure that nameserver for dynamic updates so your LE client can change the TXT record(s) on that server as needed.
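The delegation trick can be illustrated with a toy resolver (zone data and names here are invented): the recursive resolver chases the CNAME before answering the TXT query, so the CA's validator transparently ends up reading the record on the nameserver you actually control.

```python
# Toy resolver showing why CNAME delegation of _acme-challenge works: the
# resolver follows CNAMEs before answering, so the validator never needs
# to know about the indirection.

ZONE = {
    ("_acme-challenge.example.com", "CNAME"): "challenge.acme.example.net",
    ("challenge.acme.example.net", "TXT"): "tOkEn-DiGeSt",
}

def resolve_txt(name, zone, max_cnames=8):
    for _ in range(max_cnames):          # bounded, like real resolvers
        cname = zone.get((name, "CNAME"))
        if cname is None:
            break
        name = cname                     # chase the CNAME transparently
    return zone.get((name, "TXT"))

print(resolve_txt("_acme-challenge.example.com", ZONE))  # tOkEn-DiGeSt
```

Your ACME client then only ever needs write access to the delegated zone (challenge.acme.example.net here), not to your main domain's records.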

Not sure why this is downvoted, this works and is supported.

Alternatively, you can delegate the _acme-challenge zone to a nameserver under your control, although you then have to configure each of the zones on the nameserver too.

One more thing that I'm not sure I understand.

Since I'll need an _acme-challenge.* PER-DOMAIN, doesn't this just move the goalpost to setting up the CNAME records (instead of TXT records directly), assuming my current registrar doesn't support wildcard CNAME entries?

If I'm understanding DNS 01 ACME challenges correctly, to register two subdomains first.one.example.com and second.one.example.com, I need to set up TWO TXT records, _acme-challenge.first.one.example.com and _acme-challenge.second.one.example.com. This means I need two CNAME records (or one if wildcards were enabled)...

I do thank you for your input; thinking about this has led me to the possibility of just running my own nameserver altogether. I'm going to evaluate all these approaches, see how they pan out, and write a blog post to share.

I haven't used wildcard LE certs yet, but from what I know those only need a TXT record under one name. What you describe applies when you want a certificate that lists multiple explicit names, then each name gets validated individually, and so you need one TXT record per name to be validated (though you can still point all the CNAMEs to the same TXT record name, as long as they aren't used for concurrent validation).
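As a sketch of the bookkeeping described above (the helper name is invented; real clients derive this per the ACME spec): each name to be validated via DNS-01 gets its own `_acme-challenge` record name, and a wildcard is validated at the base name.

```python
# Map each certificate name to the DNS record name used for its
# DNS-01 challenge. A wildcard name validates at the base domain.

def challenge_record_name(name):
    if name.startswith("*."):   # *.example.com validates at example.com
        name = name[2:]
    return "_acme-challenge." + name

for n in ("first.one.example.com", "second.one.example.com", "*.example.com"):
    print(challenge_record_name(n))
# _acme-challenge.first.one.example.com
# _acme-challenge.second.one.example.com
# _acme-challenge.example.com
```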

Have you tried it?

According to my reading of the challenge specification [1], it shouldn't work. Notice that it says:

> 2. Query for TXT records for the validation domain name. Verify that the contents of one of the TXT records match the digest value

To me, this means that it should issue a query for TXT records only (not for ANY) and hence the server shouldn't even see your CNAME "redirector" record.

Thus, if your method works, either the server is wrongly implemented, there's a flaw in my reading comprehension, or the specification should be amended...

[1] https://ietf-wg-acme.github.io/acme/draft-ietf-acme-acme.htm...

CNAME records are dereferenced by the recursive resolver, not the client software, so querying for TXT records will work.

You might want to read the DNS specification to find out what "query for TXT records" means :-)

I've had no issues with wildcard certs and Cloudflare as my DNS provider.

You're OK giving your browsing history to Cloudflare?

The parent comment is referring to the fact that they use CloudFlare to host DNS for the domains they control. That has nothing to do with the DNS resolvers their computers use (CloudFlare is certainly a reasonable choice there, though... unless you don't use DNS at all, you have to trust someone).

Ok. But if they use Cloudflare, which MITMs traffic, all their users' data is in plaintext to Cloudflare. That leaks not only history, but also logins/passwords of site users.

My beef with CF is that I can not see which sites are behind CF.

CloudFlare can be used purely for DNS -- in which case they are one of the better DNS services because they have an API that almost everyone supports.

But you are completely correct that running a CDN (HTTP or HTTPS) requires you to MITM everything. The same complaint applies to Akamai, Level 3, or any other CDN you can name. It definitely is a problem, but not one of CloudFlare's own making.

It would be a fair criticism of CloudFlare to say that they've made their defaults tend towards MITM even though it is very likely that most websites don't actually need a CDN -- meaning that they are MITM-ing more traffic than they need to. And they have had pretty bad bugs in the past that revealed large amounts of private data that was sent over TLS but was MITM'd by them[1].

I do agree that CloudFlare being so central to so many large websites is a problem though. I just don't agree that this discounts their use as a purely-DNS service.

[1]: https://blog.cloudflare.com/incident-report-on-memory-leak-c...

I'm not alone, praise be. Lol :)

> Ok. But if they use Cloudflare, which MITMs traffic, all their users data is in plaintext to Cloudflare.

Using Cloudflare for DNS, and only DNS, doesn't subject you to this.

If you decide to use their reverse proxy features, then sure, the MITM criticism applies.

That's optional though, right? IIRC, you could still have SSL termination occur on your end but you lose tons of features which would require CF MiTM.

Yes, that's optional.

Cloudflare has very specifically owned IPs and a number of telltales to show that a site is behind it. Why do you have beef when it's practically dead simple to see that a site is protected by Cloudflare? There's zero obfuscation.

Please, how exactly can I see it in the browser?

Well, thanks, but as I thought, it's not that easy. The first one is by CF themselves, no source :) The second is not used and not working. And nothing for Safari.

Incorrect, the source is freely available :)


well done, my bad :)

So some things you can look for in a request:

  * `server: cloudflare` - Although CloudFlare uses nginx, they report 
    themselves properly in the server header
  * `Cookie: __cfduid=*` - CloudFlare uses this cookie to identify 
    users and prevent abuse. If you delete this cookie too many times,
    your IP is flagged by CloudFlare and you may receive an interstitial 
    blocking you from accessing a site.
  * IP ranges: https://www.cloudflare.com/ips-v4 and 
    https://www.cloudflare.com/ips-v6 - CloudFlare owns the routing 
    to these IP addresses. If you want, set up some firewall rules to block 
    access to these ranges.
All in all, CloudFlare is probably the least of your worries. You might want to do some investigation of your ISP; some ISPs MITM and track any insecure content.
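A quick heuristic based on the telltales listed above might look like the sketch below (the function is invented for illustration; it checks only the `server` header and the `__cfduid` cookie, and a thorough check would also match the server IP against Cloudflare's published ranges).

```python
# Heuristic guess, from response headers, whether a site sits behind
# Cloudflare. Headers are passed as a plain dict for the sketch.

def looks_like_cloudflare(headers):
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("server", "").lower() == "cloudflare":
        return True
    if "__cfduid" in h.get("set-cookie", ""):
        return True
    return False

print(looks_like_cloudflare({"Server": "cloudflare"}))   # True
print(looks_like_cloudflare({"Server": "nginx/1.14"}))   # False
```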

No joke. CloudFlare is near the bottom of my list of worries. I'm most concerned about my bank. They know goddamn everything about my spending history, and it's a complete treasure trove of data because it actually shows where I spend money.

I'm second most concerned about my ISP. They see every outgoing connection I make, and have no trouble tying it all back to me.

Cloudflare is... just not that big a deal. Are you concerned about Microsoft being able to MITM every connection to a site hosted on Azure? Amazon being able to MITM every connection made to AWS? Google being able to MITM every connection made to GCE?

"Yes" is a fair answer, but it means you're using a minuscule fraction of the available internet. Otherwise I don't really see the need to pick on Cloudflare. They're doing exactly what the company that's using them asked them to do (and getting paid for it too...)

> Cloudflare is... just not that big a deal. Are you concerned about Microsoft being able to MITM every connection to a site hosted on Azure? Amazon being able to MITM every connection made to AWS? Google being able to MITM every connection made to GCE?

It's not just Cloudflare themselves though. It's everyone else on the open Internet between the Cloudflare edge node and the site I actually wanted to connect to.

I'm not too worried about the parties that the site operator has a direct contractual relationship with, but traffic from Cloudflare could be going unencrypted to literally anyone with an AS number.

> doing exactly what the company that's using them asked them to do

But how do I, a website user, know that? Given how many sites are served by CF, my private, decrypted data could be aggregated and I would have no clue.

For ISPs, use a VPN. And I seriously doubt AWS (or Azure) has the means to MITM: reading private keys from virtual machines? C'mon.

Banking is a real bitch, agreed :)

Personally I trust that GDPR and its potentially enormous fines provide sufficient economic incentive for these big cloud companies to do the right thing.

That is to say I now believe that not only are Google, Cloudflare, Amazon not proactively sniffing traffic, but also that they'll have invested a massive amount of money making sure it's really hard to do undetected.

Of course I also fully expect that any one of them would give me up to law enforcement iff compelled by a court.

>And I doubt (seriously) AWS (Azure) has means to do MITM, reading private keys from virtual machines? cmon.

That's only if the website(s) are using only their IaaS offerings (which I doubt, because they're crazy expensive compared to DO or Vultr) and not their PaaS offerings. With PaaS (think Heroku), they terminate the SSL and control the software for the HTTP server, not you.

Today, data is the new oil. If you have a legal tap into people's data, you're valued at hundreds of billions.

Google and Facebook have legal taps: users willingly provide their chats, emails, links, likes, photos, connections, and locations, because it's a great service and it's free. Both are ad companies by main revenue, and it's vital for them to use people's data.

AWS, Azure, and Apple are not ad companies; their main revenue is paid infrastructure, paid software, and paid hardware. Their customers are not users but companies. The reputational risk of openly using the data tap themselves would ruin existing revenue. What companies do with users' data is not their concern. Apple is an exception, with a closed ecosystem, strong privacy and security, and main income from hardware.

Cloudflare is something in between. They provide reverse proxy services, where your little site sits behind a huge wall, for free. Income comes from paid WAF security features and the ability to upload your own SSL certs to CF. In any case, you have to allow MITM of people's data.

The incentive for CF to use users' decrypted data is huge: it could shoot them up to the ranks of Google and Facebook, to $100x billions. So I have my doubts about whether that data is not being harvested.

I think I've said too much already, shutting up :)

We've told you several times how to know it as a user. You just conveniently are skipping over it..

You posted a link to the source for Claire after making this comment. I said thanks above.

How many people using AWS, GCP or Azure are terminating TLS on their instances, instead of on the offered load-balancing services? How many services run (partially) not in VMs, but on PaaS (e.g. App Engine), load data directly from storage services (e.g. Firebase or S3), ...

> You might want to do some investigation on your ISP

Don't most ISPs have to live up to certain laws about protecting their customers? I think those regulations are much stricter than what is required of CloudFlare.

> My beef with CF is that I can not see which sites are behind CF.

Sites behind CF usually include two headers in the responses: cf-ray and expect-ct.

If you see these headers, it's almost certain the response is coming from CF. So it's likely those extensions are doing that; perhaps you might be able to verify the source code.

If the thought of connecting to a site hosted by Cloudflare absolutely disgusts you, visit https://www.cloudflare.com/ips/ for a list of IPs that you can block.

Yes, thanks, I knew about the headers and IPs. Disgust is too strong a word; aware is better :) Some info may be sensitive, and it goes in plaintext via CF. It's time to write my first extension, sigh.

All content delivery networks have this limitation. Not sure why you're targeting Cloudflare specifically.

No reason. Maybe because they have good PR and offer 'free' SSL, which many just take. I'm unaware of the market size of other CDNs.

Do other CDNs offer free plans with SSL?

That has nothing to do with someone's browsing history...

Seems like you just have an issue with CloudFlare, and will keep changing the subject.

CF is in a unique position to aggregate decrypted data from all users of many websites, attracted by a 'free' plan with provided SSL.

This is against the whole idea of SSL, a closed tunnel between users and websites, so yes, I have an issue.

Plus, many users set their DNS resolvers to CF DNS, so browsing history goes there too.

Let's Encrypt effectively shoots a hole--and this is a good thing--in the idea that TLS is for a meaningful kind of identification and establishes once and for all that the primary reason for TLS is for secured communication across the open internet.

And...that's it. CloudFlare operates in this spirit. It does not route traffic from its edge nodes across the open internet. It routes it across its private network.

So, no, it's not against "the whole idea of SSL"; it's what you have decided the idea of SSL is, and nobody else on the internet really agrees with you.

The amount of disingenuity you're hucking in this thread is pretty gross and you should stop.

I think he meant that his site is using cloudflare as its dns provider, not his personal computer.

Haven't tried with the wildcard certs yet but I've been using lego + Cloudflare as the DNS challenge provider with no issue. Works very nicely and it's scripted to automatically renew every 60 days.

Now, it would be great if Cloudflare supported LE integration at its free tier (replacing the Cloudflare wildcard cert).

I use DigitalOcean, but also have a few wildcards manually verified with the --manual flag and a bit of cut & paste. It's 5 minutes every 3 months, but it's free, so I haven't bothered to automate it (I just get an intern to do it every couple of months). Lame, I know.

Yeah, I bought a secondary domain for my budgeting project on Saturday, and as soon as I had the DNS configured I ran certbot on the server. Something like 30 minutes from finding a good domain name to having SSL set up.

I use digitalocean's DNS and had no problems getting wildcard certs. It's a bit annoying to wait for the updated records to resolve but other than that I had no issues.

Wildcard certs suddenly make auto-renewing a challenge. For now, I just do it manually by updating DNS when monit tells me certs are getting old.

That's great; I'm always glad to hear progress updates from them. Setting their service up on my personal website was one of the easiest improvements I have ever made, thanks mostly to the fact that I have shell access to the server I host on. The auto-renew works perfectly; I haven't even had to think about renewing my cert since last year, and Let's Encrypt certs don't last that long. We are getting very close to there being no reason not to have an HTTPS connection to any website, which is great progress.

Thanks for everything you guys have done to accomplish this, Let's Encrypt!

Also of note, they've raised the rate limit for certificates per registered domain per week: https://community.letsencrypt.org/t/certificates-per-registe...

> The ‘certificates per registered domain per week’ limit has been raised from 20 to 50.


Why would you need 50 new certs a week for a single domain?

The 20 per week limit has been an issue for large organizations with tons of different subdomains managed by different groups. Universities are a good example: you might have things like bobsmith.faculty.example.edu, cs101.compsci.example.edu, etc., which all count against the example.edu rate limit.

This can especially be a problem because the renewal exception to the rate limit doesn't work like you might expect. If a particular cert (meaning the exact same set of domains) has already been created, it can be renewed regardless of whether it would exceed the rate limit - but it still counts against the rate limit. If 45 certs have already been renewed in the last week, you can only create 5 new ones. If 80 certs have been renewed in the last week, you can't create any new ones. They plan to change this, but it hasn't happened yet: https://github.com/letsencrypt/boulder/issues/2800
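The counterintuitive accounting described above can be sketched like this (the function and numbers are illustrative; see the linked boulder issue for the real logic):

```python
# Sketch of Let's Encrypt's rate-limit accounting: renewals are always
# allowed, but they still consume the weekly budget that *new* certificate
# requests are checked against.

WEEKLY_LIMIT = 50

def can_issue(is_renewal, issued_this_week):
    """issued_this_week counts all issuances, renewals included."""
    if is_renewal:
        return True                          # renewal exception: always allowed
    return issued_this_week < WEEKLY_LIMIT   # new certs hit the shared budget

print(can_issue(is_renewal=True,  issued_this_week=80))  # True
print(can_issue(is_renewal=False, issued_this_week=45))  # True (5 left)
print(can_issue(is_renewal=False, issued_this_week=50))  # False
```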

Some organizations have gotten rate limit exceptions to handle this particular issue. Maybe they looked at some internal metrics and decided raising it to 50 would reduce the number of exceptions they have to make while still curbing misuse.

Why wouldn't they use multi-domain certs (SAN) or wildcard certs? (Unless there is no trust between departments.)

Because you could have 50 subdomains, for which you'd need either a wildcard cert or 50 separate certs.

That might seem like a ton per week, but consider a PaaS (example-123.herokuapp.com) or a blog platform (example-diary.someblogapp.com).

Personally I'd prefer a wildcard cert there, but at organizations where certificate inventory is a requirement (where they need to track, procure, and invalidate on a per-subdomain basis) Let's Encrypt is a solid option.

Note that herokuapp.com is a public suffix, so subdomains under it have separate Let's Encrypt quotas.

"The main limit is Certificates per Registered Domain (50 per week). A registered domain is, generally speaking, the part of the domain you purchased from your domain name registrar. For instance, in the name www.example.com, the registered domain is example.com. In new.blog.example.co.uk, the registered domain is example.co.uk. We use the Public Suffix List to calculate the registered domain.

If you have a lot of subdomains, you may want to combine them into a single certificate, up to a limit of 100 Names per Certificate. Combined with the above limit, that means you can issue certificates containing up to 5,000 unique subdomains per week. A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate."
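A sketch of the bundling arithmetic (the helper and names are invented): packing subdomains into SAN certificates of at most 100 names keeps the number of certificates, and hence the draw on the weekly per-domain quota, as small as possible.

```python
# Bundle many subdomains into SAN certificates under the
# 100-names-per-certificate limit.

def bundle(names, max_names_per_cert=100):
    return [names[i:i + max_names_per_cert]
            for i in range(0, len(names), max_names_per_cert)]

subdomains = [f"site{i}.example.com" for i in range(250)]
certs = bundle(subdomains)
print(len(certs), [len(c) for c in certs])  # 3 [100, 100, 50]
```

So 250 subdomains cost only 3 certificates against the weekly limit instead of 250.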

In my case, we provision a wildcard for accounts on our service (*.account.companycustomers.com). While we bundle these together with a few other sign-ups, it's sometimes better to get it out the door initially and then bundle them with more subdomains on renewal. We have received an exemption to the certificate limit per domain to achieve this, at least.

Just in case anyone isn't aware, this isn't about browser compatibility today (which uses the DST certificate from IdenTrust) but about future compatibility without the IdenTrust cross signature.


Edit: lvh has explained this better at https://news.ycombinator.com/item?id=17699037

> Our root is now trusted by all major root programs, including Microsoft, Google, Apple, Mozilla, Oracle, and Blackberry.

What about Linux and the BSD's?

Tangential questions: OSes are usually the system's primary stores of root certs, if I understand correctly[0], but browsers and other applications store them too. How are conflicts resolved? If Mozilla distrusts Fubar CA's root cert and the OS still trusts it, what happens? And why have redundant stores? I suspect the answer is that the browser vendor wants to ensure the user has a happy TLS experience despite OS problems, but that's just a reasonable guess.

[0] A reference right in front of my nose: https://news.ycombinator.com/item?id=17699037

To answer your tangent: Only two major browser vendors also operate a distinct major trust store. If you're Microsoft (IE, Edge) or Apple (Safari) this is de facto not a problem since you also control the OS.

For Mozilla, NSS is almost completely independent of OS trust stores, with the special case that on Windows (maybe macOS, but I'm not sure) they offer to look in your OS trust store for any additions you've made to the vendor store, and trust those on the rationale that you must have had some reason to do that.

For Chrome, the OS trust store is used (on Android this is of course Google's trust store, but on a desktop it isn't), but Chrome layers some Google policy rules on top.

Thanks; that's helpful. One point confuses me:

> Only two major browser vendors also operate a distinct major trust store. If you're Microsoft (IE, Edge) or Apple (Safari) this is de facto not a problem since you also control the OS.

> For Chrome the OS trust store is used, (on Android this of course is Google's trust store but on a desktop it isn't) but, Chrome layers some Google policy rules on top.

If only two major browser vendors operate a distinct major trust store, and they aren't Microsoft or Apple, I infer that Google operates a distinct major trust store (along with Mozilla). But that seems to contradict the second statement: why operate a trust store that you don't use? For ChromeOS?

For both ChromeOS and Android Google are the OS vendor. That's a lot of devices, so certainly not a "trust store that you don't use" although if you only run Chrome on Windows it might seem that way.

Android, of course. I really wish HN would let me go back and edit that one.

> What about Linux and the BSD's?

Linux distros typically use Mozilla's root list.

> If Mozilla untrusts Fubar CA's root cert and the OS still trusts it, what happens?

Then it no longer works in Firefox but works in other apps.

Using Debian as an example, the ca-certificates package uses the Mozilla root CA list as an upstream source.


Here's Microsoft's notice:


I don't know anything about Gordon Bock (credited with that page), or indeed Microsoft's PM for their root trust programme, Mike Reilly. All the trust programmes (except Mozilla's) are run in a way that doesn't give us (as relying parties, or as subscribers) much insight. I'd love to know more about why it took so long to approve ISRG, but likely we'll never be told.

It’s amazing to me that there are multiple entities which can sign valid TLS certs for the same domain. Seems like a serious design flaw, especially with some of those trust authorities being overseas.

This is being fixed:

• You can use CAA DNS records to choose which CAs can create certs for your domain.

• You can watch Certificate Transparency logs to catch CAs that didn't obey.

AFAIK both are becoming mandatory for CAs. It doesn't technically stop violations, but ensures they get caught and shut down if they fail to obey the rules (like StartCom and Symantec).
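Concretely, a CAA policy is just a couple of DNS records. A hedged sketch of what a zone file fragment might look like (example.com and the contact address are placeholders):

```
; Authorize only Let's Encrypt to issue certs for this domain, and ask
; CAs to report violation attempts to the given address (RFC 6844).
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```

A conforming CA is required to look these records up at issuance time and refuse to issue if it isn't listed.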

Will browsers actually refuse a 'valid' cert if my DNS record indicates a different CA than the one presented?

No, the check is done when the CA issues the cert. This allows you to change your CAA record without making your cert invalid (see also https://tools.ietf.org/html/rfc6844#page-2)

> You can watch Certificate Transparency logs to catch CAs that didn't obey.

Isn't there a timing issue there? Eg, if I get a cert from Comodo and change my CAA record to specify Let's Encrypt immediately afterwards, anyone checking if issuer doesn't match CAA can get a false positive

There are people who store historical DNS records for forensic and validation purposes. And any CA worth its salt that does DV will be doing the same for any domain they're issuing certs for, at a minimum.

Point is, people will be able to figure out that you're lying if you attempt to claim that the cert was issued incorrectly.

Ah, quite interesting, thanks for that. On one hand, sounds good that the obvious loophole is not wide open, on the other, it smells a bit like self regulation. I guess the next frontier is getting one of these historical DNS records made readily available alongside the CT logs.

> if I get a cert from Comodo and change my CAA record to specify Let's Encrypt immediately afterwards

Why would you do that?

To cause grief for a CA you don't much like?

I'm taking issue with the idea of checking CAA records against CT logs after the fact as a means of verifying CA compliance with CAA.

The idea isn't for third parties to check CT logs against CAA records. Instead, the idea is for the domain owner to check CT logs to detect CAs issuing certs that shouldn't be there.

This is orthogonal to CAA records. You can check CT logs without having a CAA record, and CT logs can also be used to detect misbehavior from a CA you authorized in your CAA records. At the same time, CAA records are preventative, whilst CT logs only allow detection after the fact.

It's only for issuers to check not clients

You can limit that by deploying CAA records for your domain in DNS. All trusted CAs are required to respect those records.

Damn those untrustworthy "overseas" people, the only trustworthy people happen to be born in the same place I was! (Wherever that is)

Congratulations to the LE team; we have been a very happy major sponsor of the product. It works amazingly well in conjunction with Caddy for https://www.phishprotection.com.

I would be very interested to see the percentage of the Internet that is actively using LE certificates vs. the number of certificates that have simply been generated for valid domains.

I am so proud of letsencrypt. This is a huge step forward for https everywhere!

Is Let's Encrypt good enough for say a production e-commerce site, or is it more for personal blogs and the HTTPS everywhere movement?

Let's Encrypt gives you a DV (domain validated) cert. That is good enough for most use cases, including amazon.com.

Most German banks for example use EV (extended Validation) certs, where the organization name appears in the browser's address bar. However, the benefit of EV certificates is debatable, since it's pretty easy to register a valid-sounding company under some jurisdiction or another.

Also, organization structures aren't transparent to everybody (how many of your non-tech friends would be surprised if google.com had a certificate issued for "Alphabet Inc."?).

> Also, organization structures aren't transparent to everybody (how many of your non-tech friends would be surprised if google.com had a certificate issued for "Alphabet Inc."?).

A clear example of this is KLM, where www.klm.com's certificate is registered to "KONINKLIJKE LUCHTVAART MAATSCHAPPIJ N.V." (try that on a mobile browser!). It's sufficiently different to what people expect (which is, admittedly, just an initialism) that I've known various people who actually understand EV certificates get thrown by it.

Amazon.com does not have a DV cert; they have an OV cert. You can tell because the country, state, locality, and organization name fields have values. In a DV cert they are empty (since a DV cert does not verify those things).
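You can see where those fields live with openssl. A minimal sketch (not Amazon's actual cert: it mints a throwaway self-signed certificate with OV-style subject values, all of which are placeholders, purely to show what an issued subject looks like):

```shell
# Generate a throwaway key + self-signed cert with OV-style subject fields.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ov-key.pem -out /tmp/ov-cert.pem \
  -subj "/C=US/ST=Washington/L=Seattle/O=Example Corp/CN=example.com"

# Print the subject: an OV/EV cert carries C, ST, L and O;
# a DV cert's subject is essentially just the CN.
openssl x509 -in /tmp/ov-cert.pem -noout -subject
```

The same `openssl x509 -noout -subject` inspection works on any site's cert you save locally.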

Like a lot of big companies, Amazon has a cert from Digicert. To my knowledge, Digicert does not issue DV certs, only OV and EV.

That said, I agree that DV certs are good enough for production for most people.

I heard the cost of EV certs is pretty high so it's much less likely a scammer will buy an EV cert vs just a similar domain and a regular cert.

Took this guy $177 to register a Delaware corporation called Stripe Inc and get Comodo to issue him an EV certificate that looks exactly like the real payment gateway. After Comodo revoked his cert, GoDaddy gave him one.


EV certificates tell you that a site is owned by a company with a particular name, not that it is the company you actually want. There's a reason browser vendors are de-emphasising EV: it isn't very useful.

Shopify uses Let's Encrypt for their shops, so I'd imagine it's pretty safe for e-commerce sites.


Awesome, that's great to hear! Thank you.

Its certs are the same in the end as anyone else's certs. You just don't have to pay or go through a bunch of hoops to get one.

Fine for e-commerce. They don't do extended validation or any of the more "I am really who I say I am" certs.

From a technical standpoint it's no different than any other DV (domain validated) cert. If you're selling a LOT of stuff online and care about user interface, the $85/year that it costs to have an EV SSL cert may be worth it just for the "green bar" user interface change which seems to be reassuring to non-technical users.

They're used to seeing EV SSL type address bar when they sign in to their online banking and such.

Some people think that EV SSL is like $400, it's not, you really shouldn't be paying over $100/year. Still a racket in my opinion but not one that's easy to circumvent.


It's almost a false sense of reassurance, as users are blindly led to think the site may be more trustworthy, when in fact they don't know who did what to gain that (possibly fake) trust.

I absolutely agree with you. It's an unfortunate case of having to deal with the perceptions of vast numbers of generally non-technical users, who have already been confused in the past by things like the GUI switch to Windows 10. When they sign in to Paypal they see the big green bar and are reassured.

You could totally register any random name like Really Legit Internet Enterprise LLC with some state government, put $100 in a bank account, scan the incorporation paperwork and get an EV SSL cert.

I have a Let's Encrypt cert for my domain, but I still terminate TLS with Cloudflare's certificate (I keep my own cert config commented out). I think this reduces the load between Cloudflare and my web server.

Is this a good approach? FYI, I have a blog and not too paranoid about security between CF and my web server.

As a side question is there a guide to setup https for a static website hosted at AWS buckets?

I looked at using Let's Encrypt for this, but it's much simpler to use AWS Certificate Manager's own certificate authority for this, because it's also free and it's built in so it will handle renewals for you. It's basically a checkbox in CloudFront; just put your files in S3 and set up a CloudFront distribution.

(If you're hosting an apex domain, e.g., example.com and not just www.example.com, it also makes things easier if you can use Route 53 as your DNS host, because CloudFront IPs keep changing and you can't make a CNAME for an apex.)

Yes, you will need to front the bucket with CloudFront and use the AWS Cert Manager to manage your own cert or to get one through AWS (free) and apply it to CloudFront.

Cloudflare is your friend, either with a CloudFront distribution or with a simple S3 bucket website. Guide: step 1, sign up for free at Cloudflare. Step 2, follow instructions. As simple as it comes!

Doesn't that only encrypt half the trip?

The description is vague. Cloudflare offers customers a certificate from its private CA which can secure connections from Cloudflare to your systems without you needing a publicly trusted cert. This secures the other half of the circuit successfully if you take that route. Arguably in this limited role it's more secure since there's no third party.

Odd question I know, but does this mean that LE certs can be used to sign Java applets?

No. Let's Encrypt is focused only on the Web PKI. So that includes web servers (obviously), and everything that speaks TLS on the Internet (e.g. an IMAP server for your email), but it doesn't include either type of end-to-end email encryption (S/MIME or PGP), nor any kind of code signing, document signing, timestamp services or similar.

Technically this is implemented in two ways. One: Where Trust Stores themselves have distinct trust for different feature sets, ISRG/ Let's Encrypt asked only for the flags needed to do Web PKI. e.g. In Mozilla they didn't ask for S/MIME or code signing (today Mozilla doesn't do code signing anyway).

Two: Certificates have a section called Extended Key Usage (EKU) which can list arbitrary purposes for which the certificate's issuer says the Public Key included in the certificate is to be used. The EKUs on Let's Encrypt certificates specify two purposes, TLS Server and TLS Client, so these certificates proclaim themselves not to be suitable for other purposes.
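A sketch of the same mechanism you can run locally: mint a throwaway cert carrying those two EKUs (serverAuth + clientAuth) and read them back. File names and the CN are placeholders, and `-addext` needs OpenSSL 1.1.1 or later.

```shell
# Self-signed cert declaring the two EKUs Let's Encrypt certs carry.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/eku-key.pem -out /tmp/eku-cert.pem \
  -subj "/CN=example.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Prints the X509v3 Extended Key Usage extension:
#   TLS Web Server Authentication, TLS Web Client Authentication
openssl x509 -in /tmp/eku-cert.pem -noout -ext extendedKeyUsage
```

A TLS stack that honors EKUs will refuse to use such a cert for, say, code signing, because neither purpose is listed.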

I wonder why LE certs specify TLS Client EKU.

As zimmerfrei said, it allows one webserver to connect to another (e.g. a load balancer, with a certificate, to connect to a backend webserver) and present that cert to the server it is connecting to.

Without the client EKU bit, a conforming TLS implementation would reject the connection.

That enables mutually-authenticated TLS connections between servers.

The last time I looked into this, the LE certs did not support code signing.

This is very good news, though they don't really show their work on that five year number. Are they waiting on a specific operating system to drop below a certain level?

Reading between the lines a bit: the last major vendor to accept the ISRG Root was Microsoft. Windows is supposed to pick up new roots through updates, but there are probably enough systems out there with updates disabled or broken that it's safest to wait a few years for them to cycle out.

I don't have an explanation for the five years beyond your guess.

Rather sooner: at the end of September 2021, the DST Root CA X3 that cross-signs their existing intermediates expires.

In practice many systems don't directly obey expiries baked into root certs; a self-signed root certificate is largely a vehicle for conveniently moving the key inside it. It's not signed by anybody we trust independently, so why care what it does or does not say about that key?

And of course if the IdenTrust / ISRG relationship remains good there's no reason IdenTrust can't sign new Let's Encrypt intermediates with another of their CA roots that hasn't expired. The short lifetime of Let's Encrypt leaf certs means they wouldn't even need to have decided before 2021 what to do about this.

They'll probably re-evaluate market share once they get closer to the five year number. I bet they'll find they'll need to cross-sign for more like 10 years.

I'm wondering when I can use them for my wildcard SSL certs on Heroku. Anyone hear of them adding support?

`certbot certonly` + `heroku certs:add` + cron?
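One hedged sketch of that idea (every domain, app name and path here is a placeholder, not a tested Heroku setup; note that wildcard issuance specifically requires certbot's DNS-01 challenge via a DNS plugin): let cron drive `certbot renew`, with a deploy hook pushing the fresh cert to Heroku.

```
# Crontab entry: try renewal every Monday at 03:00. certbot only renews
# certs nearing expiry, and --deploy-hook runs only after a renewal.
0 3 * * 1  certbot renew --deploy-hook 'heroku certs:update /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem --app example-app'
```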

Congrats Josh!



That's a very terse question, but I'll take it that you're interested in whether TLS clients running on Java 7 will (out of the box) trust Let's Encrypt server certificates.

The answer is that for Oracle's Java, updating to 7u111 is necessary for this to work (or to 8u101 if you run Java 8). For other people's Java implementations it will depend upon where they get their trust store; Oracle's is the most popular in Java.

congrats Letsencrypt!


I would really like it if some of these services would support non-Paypal donations. I have little interest in supporting Paypal and their questionable practices, but many services have no other donation option.

We provide multiple options, including PayPal. The primary option here uses Stripe via DonorBox:


Looks like the Stripe widget is completely invisible when JS is disabled, so all I saw was the PayPal button. There isn't even so much as a link or text, it's just completely blank.

Now I'm wondering how many invisible Stripe widgets I've been missing.
