Let's Encrypt Has Issued a Billion Certificates (letsencrypt.org)
1160 points by jaas 32 days ago | 258 comments

>In June of 2017 we were serving approximately 46M websites, and we did so with 11 full time staff and an annual budget of $2.61M. Today we serve nearly 192M websites with 13 full time staff and an annual budget of approximately $3.35M.

That's awesome. Congrats

Glad to see they aren't going the way of Wikimedia. They could pretty easily raise way more money and bloat their budget to an absurd degree; it takes a very disciplined focus to avoid bloat in the face of this much success.

Time will tell. Wikipedia has been around for much longer.

Sure, but, we can hope for them to do the right thing and praise them as and when they do.

What are your issues with Wikimedia? The (somewhat annoying) donation banners?

The issue that is occasionally brought up is that Wikimedia's expenses are growing exponentially[1]. This is a somewhat old graph, but the trend appears to continue. Their total expenses in 2019 were $91.4M[2].

[1] https://www.quora.com/What-is-the-annual-budget-of-Wikipedia [2] https://upload.wikimedia.org/wikipedia/foundation/3/31/Wikim...

What about their userbase? If it's also been increasing exponentially then it doesn't seem that unreasonable for their expenses to grow exponentially.

A bigger userbase would mostly increase server costs. But those are a tiny portion of the total budget, and might actually mostly be covered by in-kind donations from tech companies.

looks linear to me

Only when the first four years are ignored.

Then it means that growth has slowed down, right?

Exponential growth has ever-increasing slope.

I'm not sure what conclusion you are drawing from my reply as I'm not claiming the growth is exponential.

I was just referring to the OP's observation that the growth is linear.

My point was you can only achieve that linear result if the first few data points are ignored.

The budgets are rising exponentially each year, apparently

I don't mean to nitpick, but exponentially? Really?

So I just typed the "Total Expenses" line from page 3 of each of their financial reports [0] into Excel, this is the result: https://imgur.com/lIL79IK

It looks like quadratic growth until around 2011; after that it becomes linear. That's still a lot of growth, considering the number of pageviews per article appears roughly constant over the last 5 years [1]

0: https://wikimediafoundation.org/about/financial-reports/

1: https://tools.wmflabs.org/pageviews/?project=en.wikipedia.or...
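The quadratic-vs-linear eyeballing above can be made slightly more systematic. A quick heuristic for a series of yearly totals: exponential growth shows a roughly constant year-over-year ratio, linear growth a roughly constant difference. The numbers below are made up for illustration, not Wikimedia's actual figures.

```python
def growth_profile(series):
    """Year-over-year ratios and differences for a list of yearly totals."""
    ratios = [round(b / a, 2) for a, b in zip(series, series[1:])]
    diffs = [b - a for a, b in zip(series, series[1:])]
    return ratios, diffs

exponential = [10, 20, 40, 80, 160]  # constant ratio -> exponential
linear = [10, 20, 30, 40, 50]        # constant difference -> linear

print(growth_profile(exponential))   # ratios all 2.0, diffs increasing
print(growth_profile(linear))        # ratios shrinking, diffs all 10
```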

Edit: this is how last year's expenses break down:

    Salaries and wages:             46,146,897
    Awards and grants:              12,653,284
    Internet hosting:                2,335,918
    In-kind service expenses:        1,361,958
    Donation processing expenses:    4,977,583
    Professional service expenses:   8,998,261
    Other operating expenses:        9,005,744
    Travel and conferences:          2,867,774
    Depreciation and amortization:   2,856,901
    Special event expense, net:        209,690
    Total expenses:                 91,414,010
(sorry if that's unreadable on mobile)

So they have huge expenses for a nonprofit, but the people saying exponential growth are actually hyperbolic. Got it.

It's a genuine thing these days for people to misuse the phrase "exponential growth" when they just mean "really fast growth". I don't think the hyperbole was strictly intended.

> hyperbolic


If the expenses grew with inflation, wouldn't that be exponential growth?

1.3 mill for donation processing expenses? What the heck? Well, I surely am not donating to them anymore; that is terrible.

Why is this surprising? Everyone who accepts credit cards has to deal with processing fees, fraud prevention, chargebacks, etc. And at a larger scale they need to deal with lawyers and contracts.

Nope - 4.9 million. I'm wondering - do they mean stuff like credit card processing fees, etc?

Most likely. It appears to be around 5% of this years budget. It’s typical to pay 2-3.6% in CC fees alone plus often a transaction fee of 10-30c which can add up on small $1-$5 donations (30c is 6% of $5 and much more of $1).

They may have raised more or less than their annual budget this year plus I imagine probably spend some money on their email campaigns etc.

So it doesn’t seem totally out of line.
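To make the fee arithmetic above concrete, here's a quick sketch of the effective fee at different donation sizes, using a hypothetical "percentage plus fixed fee" schedule (2.9% + $0.30 is a common retail shape; actual rates vary by processor and negotiated volume).

```python
def effective_fee(donation, pct=0.029, fixed=0.30):
    """Fraction of a donation lost to card fees under a pct + fixed schedule."""
    return (donation * pct + fixed) / donation

for amount in (1, 5, 100):
    print(f"${amount}: {effective_fee(amount):.1%}")
# The fixed fee dominates small donations: ~32.9% of $1 and ~8.9% of $5,
# but only ~3.2% of $100.
```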

Why is no one commenting that nearly half their expenses are salaries and wages? They have 354 staff. That's ~$130,000 per person.
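The per-head figure checks out against the expense breakdown quoted earlier in the thread:

```python
# Loaded cost per employee: salaries line from the expense breakdown
# divided by the 354-person headcount.
salaries_and_wages = 46_146_897
staff = 354
print(round(salaries_and_wages / staff))  # ~130,000 USD per person
```

Note this is fully loaded cost (benefits, employer taxes, etc.), not take-home salary.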

Depending on where they are, and what those positions are, that's pretty cheap... that likely means their average employee's take home is around 80K give or take. Insurance, employer side taxes and other expenses aren't cheap. The government cuts both ways.

If you live in SF, Seattle, Portland, NYC, Boston, Chicago or Philly, 80K is lower-middle class if that. Other cities/countries, it's incredible... just depends.

If your company is spending 130k to employ someone, that likely means the person's actual salary is around 90k. Think of the insurance, administrative costs, recruitment costs, etc. the company bears.

That's pretty reasonable: benefits like health insurance are expensive!

Excuse my ignorance. Does wikimedia pay content writers?

No. All the writing and most of the moderation is done by volunteers. The closest to that on the Foundation's payroll are roughly three community managers. The Foundation does employ the engineers writing the software, the sysadmins keeping everything running, a UX team of 14 people, a lot of people related to fundraising, as well as outreach and public relations, some legal staff, and various administrative staff. There's a full list of employees and contractors at https://wikimediafoundation.org/role/staff-contractors/

They have a UX team? The UX hasn't changed in years!

Page previews are a fairly recent addition.


Nearly two years ago

It's changed pretty heavily for the people who actually do the editing.

It's well-documented that active writers are paid shills for corporate clients. I've read interviews where the writer brags about his "ownership" of certain pages.

Does anyone have links to the “well-documented” evidence mentioned above?

And still not even a spec on the chart compared to the other websites of their size.

But they blatantly lie to the public when they say they need donations to continue Wikipedia. If you look at the expense report, they absolutely don't; Wikipedia is a mote of dust compared to the amount that is fundraised every year.

It's just pure greed. These sort of disinformation campaigns by nonprofits should be illegal.

Wow I had no idea those ads on wikipedia were blatant lies. I've never donated but I've definitely thought about it a few times whenever I saw those ads. This is somewhat eye-opening.

They aren't lies; the money is entirely used by the Wikimedia Foundation. The big "scandal" is that Wikimedia has other projects besides Wikipedia, like Wikidata, which the funds also get used on. Wikimedia has plenty of money to keep Wikipedia running as it is; it needs the extra money to expand into other fields of open data.

And if you've ever led anything, you understand the value of "buffer states". Having those satellite projects, R&D investments, etc that you nurture in the good times, because you've seen hard times, and don't care to return to them.

Donations follow the same axiom as pricing: ask for what the market will bear.

That would all be fine if the donors understood what was happening. Instead, the ads give the impression that Wikipedia is in dire need of funds. Most donors don't even know that the Wikimedia foundation does anything besides Wikipedia.

> Instead, the ads give the impression that Wikipedia is in dire need of funds.

while "dire" might be a bit of an overstatement, I agree with this impression because it is in part why I donated once or twice.

Wikidata is actually used by Wikipedia though.

Yes, I agree, that use of the money is more defensible, although it still doesn't excuse the general urgency of the fundraising pitch, which suggests (but does not say) money is needed to keep the lights on.

> because you've seen hard times, and don't care to return to them.

This is a reasonable stance to take, except when your donations are enough to cover the core product for decades.

> They aren't lies

It's not literally fraud in the legal sense, but it's purposefully taking advantage of the goodwill of unsophisticated donors to fund projects and people the donors predictably wouldn't approve of if they understood what was happening. The donors think they are supporting Wikipedia, and the money is largely going elsewhere. Morally, it's lying.

Fund raising has its own domain knowledge.

Not defending. Having done some myself, I'd guess they're mostly following the standard playbook.

> Wikimedia has plenty of money to keep wikipedia running as it is,

And yet every donation dialogue on their page is a desperate plea for money to preserve Wikipedia itself.

This is the misleading theme around Wikimedia's donation strategy.

I had no idea. That's horrible. Are they trying to raise an endowment or something so that they can run on interest? If so that would be reasonable, but they should still be honest about it.

They have projects they want to fund other than Wikipedia. This in itself is not bad, but the misleading nature of it all definitely is.

it's a nice jobs program :)

Yes. And the smugness of it all.

That's less than $0.02 per site served. Yes, awesome!
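The figure follows directly from the numbers in the quoted post:

```python
# Annual budget divided by websites served, per the Let's Encrypt post.
budget = 3_350_000      # approximate annual budget in USD
sites = 192_000_000     # websites served
print(f"${budget / sites:.4f} per site per year")  # about $0.0174
```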

Those two extra staff are expensive, but worth it.

Honestly, I don't know anything about running a business. But if there's one thing I learned in engineering, it's the great "if it ain't broke, don't fix it."

(I know you're joking.) But even if they're costly, if they keep the service running and bring in funds, why would anyone risk damaging their business by cutting costs?

In grad school, my MBA-level technology course had a case on Zara and their POS software.

The case problem was that the software was running on MS-DOS, had a janky text-based interface, but managed to work well for Zara's fast-fashion inventory.

The 'right' solution for the case was to not change anything at all.

Just curious how one would know if it was the 'right' solution without having tried other solutions?

I find it hard to believe that any business would not benefit from moving from an ancient computer system to one with error validation and better tooling, so that their boots on the ground can make fewer mistakes. Forget the cost of transitioning, since at scale that's a whole executive job function; purely from a day-to-day perspective I don't understand how what you said can be true.

That's why it was taught as a business case, because it forces you to look at the requirements for switching and evaluate the cost/benefit of the action.

Even passenger jets still use floppy disks. AFAIK Zara did update their POS system later though, to be more compatible with payment device hardware.

FYI: The parent comment is a (pretty funny) joke.

Seriously... they're serving 146M sites on their own

The scale isn't really the major issue here, I think.

Assume each site makes one certificate issuance request per month: 146 * 10^6 / (30 * 24 * 3600) ~= 56 reqs/sec. Maybe double or triple the number to account for office-hour peaks.

I don't even think sharding would be required to handle that, strictly speaking. A single medium-to-high end machine could probably handle billions of sites this way, assuming everything has been done right (I assume it has).
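As a quick sanity check of the arithmetic above:

```python
# One issuance request per site per month, per the assumption above;
# renewals, retries, and traffic peaks are ignored.
sites = 146_000_000
seconds_per_month = 30 * 24 * 3600
rps = sites / seconds_per_month
print(f"{rps:.0f} requests/second on average")  # ~56
```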

I do however think a lot of work has to go into:

a) developing/documenting client tools

b) developing/maintaining the server codebase

c) developing/documenting/maintaining security protocols

d) devops for the server(s) doing the key signing

e) devops for maintaining security, according to the protocols developed

and so on...

Certbot [1] is simple and efficient to work with.

[1] https://certbot.eff.org/about/
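For reference, a typical Certbot invocation looks something like this (the domain is a placeholder; flags are per the Certbot docs, so double-check them against your setup):

```shell
# Obtain a certificate and configure the nginx vhost in one step;
# certbot answers the ACME HTTP-01 challenge and reloads nginx.
sudo certbot --nginx -d example.com -d www.example.com

# Renewal is handled by a packaged timer/cron job; a dry run verifies
# the setup without issuing anything.
sudo certbot renew --dry-run
```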

Who pays for the development of certbot? EFF or Letsencrypt?

Assume only ~70% is staff. Requires more than just payroll to run a company.



Let's Encrypt has caused a revolution in the world of public XMPP servers. Now they are pretty much all properly encrypted for both client to server and server to server. The list of servers shows a sea of Let's Encrypt:

* https://list.jabber.at/

I suspect that there are a lot of other non-web applications out there that have hugely benefited as well...

Domain validated code signing certs are super high on my wish list. My domain is a better indicator of my trustworthiness than my name.

For me the primary value of code signing doesn't come from knowing that the signer Donald McRonald would never build malware, it comes from knowing that if this was malware I would have a clear path to finding out who created it and having that person persecuted. And since no malware writer wants to go to jail I can be reasonably certain that the signed software is innocent.

Domains don't have the same property, owning a domain under a false name is trivial.

You probably have excess faith in real world identities, whether Donald McRonald or McRonald Enterprises Ltd of Yorkshire, England.

For about $100 someone can, from the comfort of their apartment in Moscow, or on a beach in New Zealand or wherever they want, create a totally legitimate business in many countries such as the UK and United States despite never having even visited. For their money all the paperwork will be done by local lawyers who won't ask any awkward questions about who they actually are or why they suddenly want to have a business in their country which has zero physical presence. The lawyer can arrange for post sent "to" this company to be delivered to an office which can scan and forward it to the owners by email if they want though that'll cost an extra fee per year. The people working in the office know no more than a postman does, so persecuting (I hope you meant prosecuting? But either way) them achieves nothing for you.

Thus, even if you've 100% checked this is a real company or person in your country your "clear path" can end in a cul-de-sac.

Human names don't even need somebody to pay some dodgy lawyer a fee to do the paperwork, you can just lie. In the UK for example you're allowed to change your name at will. Using this to commit crimes is itself illegal, but that doesn't actually help you find someone.

If so many services use Let's Encrypt, do you think there's a risk they become a target for sneaky surveillance activities?

There's not much to spy on. It's not like LetsEncrypt generate your private keys and give them to you. They sign a request that you send them (and they don't get to ever see your actual private key). So an evil CA won't be able to crack your encrypted website.

An evil CA can of course generate fake certificates for any hostname they like, but those people already exist. Have you taken a look at just how many different root CAs are out there, and are trusted by your OS/browser? It includes hundreds of companies and governments. Then there are even more (countless?) 'second level CAs' (I forget the proper term, sorry!) who can also generate and sign a trusted certificate for any hostname, because, while they aren't a root CA, their authority has been signed in turn by a top-level CA. The web of trust is very large and has many points of failure.

> 'second level CAs' (I forget the proper term, sorry!)

Perhaps intermediate CA.


> So an evil CA won't be able to crack your encrypted website.

If the evil CA is default-trusted by major OS and browser vendors, then they can: issue their own certificate for the hostname, keep its private key, and use it to MITM the connection.

That would require them to issue a separate certificate. Certificate Transparency provides significant deterrence against that.

They can either log it into Certificate Transparency (and include the SCT, the "receipt" for logging it, in the cert), or not do that.

If they log it, the server operator can see (in the public CT logs) that a certificate was issued for his domain by someone else, and raise hell.

If they don't log it, the certificate won't have an SCT. That means that software that enforces CT can treat the certificate as invalid, and if someone saw the cert on the wire, it would immediately appear suspicious since all Let's Encrypt certs are supposed to have an SCT.

They could also get an evil CT log to issue a SCT without logging it, but that would generate irrefutable cryptographic proof of malfeasance of the CT log.

> They could also get an evil CT log to issue a SCT without logging it…

If a SCT is issued, it has to be logged within a certain amount of time; 3rd-party auditors check for this. [1]:

An auditor is a third party that keeps log operators honest. They query logs from various vantage points on the internet and gossip with each other about what order certificates are in. They’re like the paparazzi of the PKI. They also keep track of whether an SCT has been honored or not by measuring the time it took between the SCT’s timestamp and the corresponding certificate showing up in the log.

And only certain logs are trusted anyway [2]; a CT log that doesn't meet certain requirements wouldn't be trusted to begin with. And if a trusted CT log turned evil, it would quickly be noticed.

[1]: https://blog.cloudflare.com/introducing-certificate-transpar...

[2]: https://github.com/chromium/ct-policy/blob/master/log_policy...

The malicious cert would presumably only be used against specific targets, reducing the chance that an auditor gets to see it.

But indeed, an SCT and a later timestamp on the log without having logged the cert would be the cryptographic evidence I talked about.

1. Certification Authority Authorization (CAA) specifies which CAs are allowed to issue certificates for your website. Of course if the CA is truly evil, they would ignore whatever is specified. [1]

2. A DNSSEC-signed website can use DANE to specify which certificate, intermediate or key is authorized to be used. [2][3]

[1]: https://blog.qualys.com/ssllabs/2017/03/13/caa-mandated-by-c...

[2]: https://weberblog.net/how-to-use-danetlsa/

[3]: https://community.letsencrypt.org/t/please-avoid-3-0-1-and-3...
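Concretely, the two mechanisms look roughly like this in zone-file form (the names and the key hash are placeholders, not real records):

```
; CAA: only Let's Encrypt may issue for this name; report violations.
example.com.            IN CAA  0 issue "letsencrypt.org"
example.com.            IN CAA  0 iodef "mailto:security@example.com"

; DANE: pin the server's own key hash for HTTPS on port 443
; (usage 3 = DANE-EE, selector 1 = public key, matching 1 = SHA-256).
_443._tcp.example.com.  IN TLSA 3 1 1 0123456789abcdef...  ; placeholder hash
```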

No it can't, because no mainstream browser supports or ever will support DANE.

Notice I never mentioned browsers.

Breaking news: there are other applications on the internet other than browsers.

And some of them use TLSA records. It's the de facto standard for secure email, especially if you want guaranteed protection from downgrade attacks.

Virtually no email sent on the Internet is protected by DANE, DNSSEC, or TLSA. Nor is it likely ever to be, since the major email providers standardized MTA-STS specifically to avoid DNSSEC, as you can see from the opening paragraphs of the standard.

This is provably false.

DNSSEC, DANE and TLSA are widely used to secure SMTP servers. From the web page showing the nearly 2 million domains with signed MX and DANE records [1]:

The following graph depicts the number of domains that have deployed DANE/SMTP. Specifically, their zone is signed, their MX records all point to hosts that have DANE TLSA records.

Regarding MTA-STS, it's a work-around for the large global email providers that can’t implement DNSSEC for a variety of reasons.

Again, MTA-STS has problems, the main one is being susceptible to downgrade attacks because MTA-STS doesn’t use DNSSEC. An attacker can MiTM a connection preventing a TLS connection.

The MTA-STS RFC [2] confirms this:

    The DNS-Based Authentication of Named Entities (DANE) TLSA
    record [RFC7672] is similar, in that DANE is also designed
    to upgrade unauthenticated encryption or plaintext
    transmission into authenticated, downgrade-resistant
    encrypted transmission.  DANE requires DNSSEC [RFC4033] for
    authentication; the mechanism described here instead relies
    on certification authorities (CAs) and does not require
    DNSSEC, at a cost of risking malicious downgrades.
There’s no dispute about any of this, so it’s not clear why you continue to make false and misleading statements about these technologies, regardless of your opinions about them.

You know the old saying: you’re entitled to your own opinion but not your own facts.

[1]: https://stats.dnssec-tools.org/#dnssec

[2]: https://tools.ietf.org/html/rfc8461#section-2

A more complete description I posted previously: https://news.ycombinator.com/item?id=22338742

As you well know, since this is the third time you've tried to make this point, the overwhelming majority of those zones were signed automatically by their European registrars; these counts have no relationship to actual emails sent.

The entire idea behind STS protocols is to use continuity schemes to defeat downgrades. It's literally the only threat model.

> these counts have no relationship to actual emails sent

These domains wouldn’t have MX or TLSA records unless they were doing email.

Even if a zone is pre-signed, an admin has to intentionally create DANE records, which they wouldn’t do unless they were hosting SMTP servers.

But it does show the statement that DNSSEC, DANE and TLSA aren’t used for secure email to be utterly untrue.

MTA-STS has other issues:

    * Authenticates domain control via CA leap of faith

    * Vulnerable to MiTM at cert bootstrap

    * Vulnerable to weakest root CA, and unauthorized certs

    * Open to downgrade on first (or irregular) contact

    * Complex mix of HTTPS, unsigned DNS and SMTP

Horseshit. I have something like 50 domains on a popular retail registrar and every single one of them has an MX record, despite me never doing a single thing other than claiming the name; they come by default with new domains. If I was in the Netherlands, every one of them would have DNSSEC signatures too, because European registrars opt domains into DNSSEC by default.

I'm having a hard time articulating how silly it is to try to dunk on MTA-STS for being "vulnerable" to downgrade attacks; it's like trying to say that HSTS is vulnerable to SSL-stripping attacks. You have to not understand the idea behind the attack or the countermeasure to lead with that argument.

> Virtually no email sent on the Internet is protected by DANE, DNSSEC, or TLSA.

Here's a short list of domains that are using DNSSEC, DANE and TLSA to protect their email. I also provide a transcript of a utility that connects to and verifies the SMTP server and the DANE/TLSA records for openssl.org and could have done so for every domain on this list but there's no reason to get carried away.

* geektimes.com

* gmx.com

* mail.com

* comcast.net

* dd24.net

* debian.org

* freebsd.org

* gentoo.org

* ietf.org

* isc.org

* netbsd.org

* openssl.org

* samba.org

* torproject.org

There's a DANE TLS SMTP server checking tool: https://www.huque.com/bin/danecheck-smtp

Here's what it does: This application checks a DANE SMTP Service. It queries the MX record set for the given domain, looks up DANE TLSA records at the MX targets, connects to the target servers, negotiates STARTTLS, and then attempts to verify the TLS server certificate against the TLSA records.

Lets test openssl.org:

    Domain Name: openssl.org

    MX host: 50 mta.openssl.org

    ### CHECKING MX HOST: mta.openssl.org

    TLSA records found: 1
    TLSA: 3 1 1 6cf12d78fbf242909d01b96ab5590812954058dc32f8415f048fff064291921e

    Connecting to IPv6 address: 2001:608:c00:180::1:e6 port 25
    recv: 220-mta.openssl.org ESMTP Postfix
    recv: 220 mta.openssl.org ESMTP Postfix
    send: EHLO cheetara.huque.com
    recv: 250-mta.openssl.org
    recv: 250-PIPELINING
    recv: 250-SIZE 36700160
    recv: 250-VRFY
    recv: 250-ETRN
    recv: 250-STARTTLS
    recv: 250-8BITMIME
    recv: 250 DSN
    send: STARTTLS
    recv: 220 2.0.0 Ready to start TLS
    TLSv1.2 handshake succeeded.
    Cipher: TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384
    Peer Certificate chain:
     0 Subject CN: mta.openssl.org
     Issuer  CN: Let's Encrypt Authority X3
      1 Subject CN: Let's Encrypt Authority X3
     Issuer  CN: DST Root CA X3
      SAN dNSName: mta.openssl.org
    DANE TLSA 3 1 1 [6cf12d78fbf2...] matched EE certificate at depth 0
    Validated Certificate chain:
     0 Subject CN: mta.openssl.org
       Issuer  CN: Let's Encrypt Authority X3
     SAN dNSName: mta.openssl.org

    Connecting to IPv4 address: port 25
    recv: 220-mta.openssl.org ESMTP Postfix
    recv: 220 mta.openssl.org ESMTP Postfix
    send: EHLO cheetara.huque.com
    recv: 250-mta.openssl.org
    recv: 250-PIPELINING
    recv: 250-SIZE 36700160
    recv: 250-VRFY
    recv: 250-ETRN
    recv: 250-STARTTLS
    recv: 250-8BITMIME
    recv: 250 DSN
    send: STARTTLS
    recv: 220 2.0.0 Ready to start TLS
    TLSv1.2 handshake succeeded.
    Cipher: TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384
    Peer Certificate chain:
     0 Subject CN: mta.openssl.org
       Issuer  CN: Let's Encrypt Authority X3
     1 Subject CN: Let's Encrypt Authority X3
       Issuer  CN: DST Root CA X3
     SAN dNSName: mta.openssl.org
    DANE TLSA 3 1 1 [6cf12d78fbf2...] matched EE certificate at depth 0
    Validated Certificate chain:
     0 Subject CN: mta.openssl.org
       Issuer  CN: Let's Encrypt Authority X3
     SAN dNSName: mta.openssl.org

    [0] Authentication succeeded for all (2) peers.

With the note that those are literally the best domain names you can come up with, and that you can go to the search bar below and look at my comments to see me running the Moz 500 through the same analysis, I feel like your list makes my point for me. Thanks.

DNSSEC standardization began in NINETEEN NINETY FIVE. That's twenty five years ago. They got GENTOO.ORG. That's the win you're crowing over. Congratulations! As goes GENTOO.ORG, so too goes the Internet.

Internet protocol evolution is in a funny place currently. IPv6 is just as old and it's only recently been widely deployed. There's just so much inertia.

>An evil CA can of course generate fake certificates for any hostname they like, but those people already exist

[citation needed]

If that were true, I'd expect browser/OS makers to promptly revoke trusted CA status for them.


It takes a very long time to revoke trust if you don't want to break the web. Symantec was discovered to have misissued a bunch of certs in 2015[1] (including for google.com). No one removed trust for them. Then in Jan 2017 it was discovered they misissued a bunch more[2]. I believe it was the end of 2018 before browsers finally removed trust for them.

[1] https://security.googleblog.com/2015/10/sustaining-digital-c...

[2] https://security.googleblog.com/2017/09/chromes-plan-to-dist...

I believe you've confirmed mpoteat's fears are well founded. Thanks for doing that for me ;) Here's your upboat.

I think you misinterpreted.

An evil CA can generate fake certificates, because a CA can sign (generate) any certificate they want, because all they're doing is saying "this is legit, trust me".

People with the power to generate fake certificates for any hostname already exist regardless of the size of Lets Encrypt. Consequently LE aren't a particular threat. (And the short lifespan of LE certs significantly reduces the value of them as a target. Once the dodginess is identified, you could detrust them within a much shorter timeframe)

Promptly meaning as soon as they find out, which is likely nearly instant these days for large services.

Comodo issued bogus certs, so did MCS. It's happened a few times over the years.

Only if they abused it. If it's a government CA, and they use it only to trap pedophiles and ISIL, I don't see them doing anything.

I’m pretty sure browsers will de-trust CAs that intentionally sign forged certificates regardless of what they’re used for.

Only pedophiles and ISIL need encryption, so what are you worried about... just do all your banking etc. over plain HTTP... I'm sure no foreign governments are listening on ANY compromised devices in the middle, or have any motivations you disagree with.

A few years ago I attended a conference talk from one of the Let's Encrypt folks. At the end, when he took questions, someone asked something along the lines of "what if the government targets Let's Encrypt for sneaky surveillance things". And his reply was something along the lines of "We're partially founded by the EFF. The EFF is actively looking for organizations that the government has done that with, in order to sue the government."

Edit: https://media.libreplanet.org/u/libreplanet/m/seth-schoen-le... the question is at 46:22.

It is much better to target a small and obscure certificate authority if you are interested in conducting wide scale MITM attacks. The entire system is only as strong as the weakest link.

Isn't this what certificate transparency was meant to solve?


why target them when it's OSINT? :)

[ed: compromise of their HSM would be bad and I didn't consider active attacks, only passive]

Are there any documented cases of HSM breaches anywhere or involving a CA?

Attackers had effective control over the DigiNotar CA before it was distrusted and eventually went bankrupt in, I think, 2011. They may not have been able to extract the keys from the HSM (this would probably require physical access) but they had the ability to cause issuance without accurate records kept so there's not a lot of practical difference.

Incidents at WoSign/StartCom presumably involved malfeasance by key staff. I guess that doesn't count as a breach unless you'd call it a "bank raid" if the manager just empties the vault into his own car and flees.

At Symantec they knew third parties had the independent ability to issue with any of their CAs but that was specifically contracted third parties (in particular a Korean firm named CrossCert) not just random people, it's just that issuance records weren't properly kept and oversight was inadequate. Again the ability to cause issuance isn't technically a breach, the keys stayed inside the HSM but it was possible to cause unrecorded issuance so there's not much moral difference.

tialaramex did a great job answering the second part of your question, so I'll take a swing at the first.

I don't know of any public cases where an org has disclosed that an external attacker exfiltrated key material from an HSM. That being said, there have been a number of disclosed vulnerabilities against HSMs/vendors that could allow this sort of attack to happen. CVE-2015-5464 is my favorite of these. There are also plenty of attacks that compromise the servers that talk to the HSMs, which usually would give an attacker the ability to perform arbitrary crypto operations using the keys in the HSM with no restrictions and little-to-no audit trail. I also know of attacks where the compromised "servers" are part of the HSM itself, but outside of the crypto/FIPS boundary.

IIRC, the DigiNotar attack (used to make fake certs and MITM *.google.com for many Iranians) involved replacing some of the DLL files used to interface with HSMs. Dunno if it's confirmed that this is how the bad certs were made.

Like what? All the certs they issue are already public.

Just because the certs they issue are "public" doesn't mean the information can be decrypted.

But if you compromise the security of a PKI, you can decrypt the traffic.

Only by misissuing certificates, which hopefully can be caught with CT nowadays. Let's Encrypt doesn't have the ability to decrypt subscribers' traffic.

Specifically, in SSL or with older (non-browser) TLS up to 1.2 the encryption of traffic depends on a session key which was itself encrypted using Public Key cryptography based on the public key in the certificate. The CA doesn't have the corresponding Private Key (if you use Let's Encrypt's popular Certbot tool it is minting that key on your machine and never sends it anywhere) and so it could not decrypt this message and get the session key. This is a bit clunky but it works. Bad guys who steal your key (from you not from Let's Encrypt) could retrospectively decrypt stuff though.

In TLS 1.3 the session key is always chosen before any certificates go anywhere, using some flavour of elliptic curve Diffie Hellman key agreement - both sides agree on the same secret key without it being sent anywhere because Mathematics is Cool. So even if you were a fool and entrusted the Private Key corresponding to your web servers to somebody you shouldn't there's no way for bad guys to decrypt stuff without a live MITM attack.
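That key agreement can be sketched with classic finite-field Diffie-Hellman in a few lines of Python (a toy illustration only: the parameters are far too small for real use, TLS 1.3 actually uses vetted elliptic-curve groups such as X25519, and authentication is omitted entirely):

```python
import secrets

# Toy finite-field Diffie-Hellman. Parameters are illustrative, not secure.
p = 2**127 - 1   # a Mersenne prime, fine for a demo
g = 3

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret, never sent
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret, never sent

A = pow(g, a, p)   # sent in the clear
B = pow(g, b, p)   # sent in the clear

# Each side combines its own secret with the other's public value and
# lands on the same number, which was never transmitted on the wire.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
```

An eavesdropper sees only g, p, A, and B; recovering the shared secret from those is the discrete logarithm problem.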

The middle ground (modern browser, co-operative server, but older TLS version) is in between, bad guys can't retrospectively decrypt but it is clunky and some stuff (e.g. certificates) is not encrypted at all.

> a session key which was itself encrypted using Public Key cryptography based on the public key in the certificate

This all depends on the ciphersuite used by the client and server. In TLS 1.2 some ciphersuites use forward-secret methods (typically "DHE" and "ECDHE"), and some don't.

In TLS 1.3 you always get the benefits that you mention, but in TLS 1.2 and earlier you sometimes get them. :-)

ECDHE or DHE with TLSv1.2 is in practice not "sometimes" but "almost all the time". See https://www.ssllabs.com/ssl-pulse/ "Forward Secrecy" is not supported on 2.1% of sites.

Thanks, that's a nice explanation

What prevented them from using free certificates before? StartSSL and WoSign issued free certificates for years before Let's Encrypt was a thing. And it was much easier than setting up Let's Encrypt if you ask me.

With proper tools like Caddy or Apache's mod_md, there's no setup needed - they will simply work.

Using those traditional free CAs still required getting the signed certs through a sometimes obscure process on their website, transferring them manually to the target system, etc.

The manual handling alone (it creates work, adds another point where secrets could be exposed, doesn't scale, and can't be automated) was reason enough to replace it with ACME, even if setting ACME up were quite complex (which it isn't at all).

I don't believe there was a way to automate the certificate signing from StartSSL or WoSign, was there?

StartCom actually did build a rival service, named StartAPI, after Let's Encrypt launched.

There were some fairly grave noob-type security bugs in that service. (To be fair, Let's Encrypt almost went live with a grave but subtler bug, and did go live with tls-sni-01, which it turns out is unsafe if people use Apache HTTPD for bulk web hosting, which, er, they do.) Because of the grave problems StartCom turned their API off after not very long, presumably with the intent to eventually repair it and re-launch.

A surprising number of people became convinced Let's Encrypt was some sort of attack on StartCom, and that the other problems with StartCom/WoSign (eventually leading to them being distrusted) were a conspiracy to popularize Let's Encrypt.

They issued 1-year certificates (I think WoSign even issued 3-year certificates at some point), so it was never a problem. Now with Let's Encrypt I always forget to renew some of my certificates, because they expire so fast.

That you have to remember to update your certificates is the problem here.

Automation solves these problems. Don't be the computer's slave--tell it what to do.

I'll spend more time trying to automate that than renewing it manually for 20 years.

1) That's just plainly and self-evidently silly. `0 0 * * 0 certbot renew`. Season to taste. (There are more involved options out there, but you bought into them by making other choices ahead of time, and that's a learning experience for you to better understand your tradeoffs in the future.)

2) 20 years' expiration is a security risk that should not be taken. Letting you do that would be a bad idea not just for you--you can make stupid decisions on your own--but for anyone who connects to you, and you aren't entitled to do that.

I don't understand why someone who is theoretically a programmer would be so odd about this.

Thanks for making it all so painless. It's so good I forget it's even there. Easily the best piece of infrastructure tech I've ever used.

Also, to folks who wish to "pay" for the certs, you can do so at https://letsencrypt.org/donate/. A yearly recurring donation for the avg price of an SSL cert is what I do.

we still can't certify on an alternative port, DNS is not always an option, and so there are people stuck having to shut down servers while certbot does its thing

Let's Encrypt would be allowed by the Baseline Requirements to offer ports 22 and 25 as well as 80 and 443. But realistically only port 80 makes sense for http-01 and only port 443 makes sense for tls-alpn-01. Why would you speak HTTP on port 25?

[[ Ha, they got around to removing 115 from the list at some point, that was always funny. It got on the list because port 115 is in IANA's list as SFTP, but that's because IANA thinks SFTP means "Simple File Transfer Protocol" a long obsolete protocol like TFTP whereas the SFTP we know today uses port 22 because it's just SSH ]]

There's no intention to add more ports to the Authorized Ports list in the Baseline Requirements AFAIK. Control over other ports doesn't have a very strong connection to control over the whole named machine.

what practical situation do you encounter that DNS isn't an option?

why are you shutting down servers to rotate certificates? a reload should be totally possible!

I think it refers to stopping the main service so that certbot can bind port 80 during the verification process.

The person you're responding to is asking about verification through dns, which is an option that avoids the need for http verification.

certbot can host the verification files on most webservers people would already be running (like Apache and nginx) so this isn’t necessary.

I have solved that problem by putting Nginx in front to redirect the .well-known and pass everything else through to the application.
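A minimal sketch of that layout, assuming the application listens on localhost:8080 (port, paths, and domain are all illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve ACME http-01 challenge files directly from a webroot
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Everything else goes to the application
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this in place, something like `certbot certonly --webroot -w /var/www/letsencrypt -d example.com` can renew without ever touching the application.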

Well, if you're running something on a non-standard port then you could just use a tool (like certbot) on the standard ports and copy over the certificates when you're done?

Did you try something like acme-dns[1]? It is pretty universal. The installation instructions aren't the best, but it allows you to use DNS authentication without the need for a specific adapter for your DNS provider.


> DNS is not always an option

That is why I am suggesting acme-dns. It is different from the normal DNS option. Normally, you require an adapter for your DNS provider and for many DNS providers there are no adapters out there.

But with acme-dns, you just set a static DNS entry once and host your own DNS server solely for acme-challenges. So yes, it uses the DNS protocol, but the implications are very different from the normal DNS challenge option.
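The one-time setup looks roughly like this in your zones (all names, the IP, and the account token are illustrative):

```
; delegate a subdomain to the acme-dns server
auth.example.org.             NS      auth.example.org.
auth.example.org.             A       203.0.113.10

; permanent CNAME pointing the challenge name at your acme-dns account
_acme-challenge.example.com.  CNAME   d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.example.org.
```

After that, the ACME client only ever updates the TXT record on the acme-dns server via its API; the main zone never changes again.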

Let's Encrypt isn't just making the web more secure; it's enabling a new generation of startups to exist and thrive.

My company relies heavily on Let's Encrypt to offer SSL for all kinds of use cases including wildcard domains. We wouldn't be here without them and we're proud to be an official sponsor.

If your company can afford it, do consider a corporate sponsorship: https://letsencrypt.org/become-a-sponsor/

One of the best things to happen to the internet. We started working on our product a year before Let's Encrypt and in the early days onboarding new customers was a nightmare. This is because our product required a domain to work and we spent lot of time hand-holding customers to purchase a certificate and set it up. After Let's Encrypt, this has never been a problem.

Wanted to add that after the dns-01 challenge went live, it has become even more awesome. Many customers saw opening up port 80 as a security concern for their firewall.

DNS challenges are great for internal services too, where you want server-to-server communication to be encrypted but the servers aren't accessible over the public internet.

Since LE has certificate transparency, can we determine which cert was the billionth? I registered two certs with them today, here's hoping it was me ;)

It's surprisingly hard to tell which certificate was the billionth issued. You'd have to decide on a source of truth, and CT is probably not a good one for this purpose. Submissions to CT are not necessarily made and processed in issuance order. The goal is to get all certs into CT as quickly as possible, but ordering isn't particularly important (maybe one submission from our CA starts a second before another but the network request is delayed until after the second one).

At the heart of things, our certificate signing infrastructure includes multiple HSMs, each one with multiple signing cores. This means that we're signing certificates in parallel all the time.

The signed certificates are inserted into our internal database in a serialized order, but due to how we optimize our database it's not easy for us to just ask "what is the billionth one." That kind of query is usually not a very useful one for us to make.

So, what you're saying is that my certificate was definitely the billionth.

No, he's saying that he cannot deny that your certificate was the billionth.

Certificates have an issuance date, no? Given NTP is a thing now, we might get close to finding the billionth.

The issuance date inside a certificate is not required to be accurate. Technical reasons for inaccuracy include a widespread choice to have stuff "Just Work" when you install a brand new certificate (it's not rare for end users to have clocks wrong by ~1 hour), and historically a preference to put entropy (randomness) in the date-time fields because they're near the beginning of the certificate.

The latter doesn't apply to Let's Encrypt (they're too new and this is no longer the Done Thing) but the former certainly does and there might be other technical reasons for small deviations.

Inaccuracies in the timestamps are only forbidden if their purpose seems to be to defeat some other policy. For example, during the SHA-1 deprecation new certificates were forbidden, because once you cease issuing there's no new risk from collision attacks: you can't travel back in time with knowledge of a collision and get certificates, so if a collision is found in 2017 but no certs were issued after 2016 then we're safe. To enforce the prohibition, certificates using SHA-1 but dated after the cutoff weren't trusted. But a misbehaving CA (in this case WoSign) could back-date a SHA-1 certificate, presumably for a hefty mark-up over their usual prices. This was against the rules; Gerv (who has since died) investigated and built up good evidence that that's what happened.

Anyway, there are less than 100 000 seconds in a day and in that time Let's Encrypt issues typically over a million certificates. So there might be dozens of certificates issued in the same second as the billionth one no matter how you count.

Do you have a reference describing how entropy was added to certificate lifetimes? I can't see anything about it in old versions of the CA/B forum baseline requirements, and my understanding is that random serial numbers provide enough unpredictability so they don't need to randomize the other fields.

Could we get a winner according to some source? It's not like it matters a whole lot. Going by issuance date is probably good enough for most readers, but I have no idea how to go about finding that.

It's kind of labor-intensive!

Like, in theory you can look at all of



to see all of them, and then find the billionth one. But wait, there are already 1.7 billion logged on just CAID 16418 alone...?

That's because of something called CT precertificates which are a weird hack where CAs may issue every certificate twice (once in a usable form and once in an unusable form) in order to facilitate CT logging verification. So some (but not all!) of those 1.7 billion are duplicates, which you can determine by excluding the ones that have the "precertificate poison" X.509 extension. Which I don't think crt.sh lets you do easily in this query interface.

Also, I think its interface can only show you results in log order rather than sorted by issuance date. So you would potentially need to do your own CT log parsing into a database in order to run this moderately complicated query, unless you can find someone else with a parsed CT log database who will let you run more specific database queries than crt.sh.

It's always weird when something computer-related is like "that data is public, but not currently present in a useful database schema that you can query to answer your exact question", but I think this is one of those cases!

crt.sh actually has their postgres database open to the public!

You can connect with:

  psql -h crt.sh -p 5432 -U guest certwatch
From what you're saying, the query would be:

  SELECT c.ID
  FROM certificate c
  WHERE (c.ISSUER_CA_ID = 16418 OR c.ISSUER_CA_ID = 7395)
    AND NOT x509_hasExtension(c.CERTIFICATE, '1.3.6.1.4.1.11129.2.4.3')  -- CT precertificate poison
  OFFSET 1000000000 LIMIT 1;
Unfortunately (but expectedly) it times out

I don't know how I didn't know that -- thanks a ton for letting me know!

And the ICANN fees we pay covered none of this (when funding this sort of thing would be an obvious benefit vs what ICANN does spend money on).

we have the fun ceremony videos though

Wouldn't it have been something if every domain purchase went towards infrastructure and projects like this one? Even an email provider that provides a free mail service for your domain (and you can pay extra to get more email storage). I wonder if Gmail would have ever grown the way it did.

It's worth reading the paper: https://jhalderm.com/pub/papers/letsencrypt-ccs19.pdf

I don't think this is just about having money and funding, it's a very careful and tactical approach.

Agree it would be nice if we find and capitalize on opportunities like this, but (to me) it would be hard to run an email provider with the operational goals they list:

- Minimal logic

- Minimal data

- Full automation

- Functional isolation

- Operational isolation

- Continuous availability

Can't that be achieved with POP3 email? It downloads emails to your system and then deletes them on the server.

Most people don't want to have a desktop at home be the single source of truth for their email anymore, though.

I shudder to think about what will happen if they go down for whatever reason. Or get compromised, or the renewal gets bugged etc.

It is all cool and great, but is it sustainable long term? Who guarantees that everything will work in 10 years? Monocultures are not desirable.

A ray of hope: you can get a two-year certificate for 10 bucks. So right there is another major benefit of Let's Encrypt: they make for better competition. (OK, I know Apple will stop honoring these starting in the fall, but I still got two years, since the restriction applies only to new certificates.)

The nice thing about ACME is that it's a standardized protocol, so it's really easy to configure most clients to use a different provider. All we need is a few more CAs supporting the ACME protocol, and it'd be trivial to switch over to them if Let's Encrypt ever had a problem. (Come to think of it, automatic failover would be a pretty interesting feature to include in an ACME client.)

This should be a state provided service. Or provided by the UN. Essential stuff like this or root DNS should be a planetary responsibility.

In a lot of countries, giving the government the ability to issue SSL certificates would be a catastrophe in terms of surveillance and censorship.

Didn't think of that.

It doesn't have to be just the state or the UN though. It's not about removing all the other actors on the market.

But if I make a website on Python programming in French, I'm quite OK using the French government as a CA provider.

I mean, I trust mozilla because it's mozilla. But I wouldn't trust most companies.

There was an incident just last year with Kazakhstan:


Not exactly the same as for-website certificates but in the same realm of government certificate issuance.

The problem is, there’s no way to enforce this. If you are a CA, there is nothing technically stopping you from issuing a certificate for google.com. There’s no infrastructure for the ability to say “this CA is valid only for sites that agree with the risks involved in using it”.

Actually, I believe there is. IIRC it's possible for a CA's root cert to be limited to only a certain domain. There's an extension called "Name Constraints" that limits a root cert, and I believe it is honored by Chrome.


It's an optional extension, so a client may ignore it without failing the connection. It needs a lot more adoption. I could, for example, instead of getting a wildcard cert, get LE to sign a CA valid only for my domain. That way I could issue certs for my domain myself and take load off of LE.
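For illustration, here is roughly how such a constraint is expressed in an OpenSSL x509v3 extensions file (a sketch only; no public CA currently offers subscribers a constrained sub-CA like this):

```ini
# Extensions for a hypothetical intermediate CA limited to one domain
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:example.com
```

A chain built from such an intermediate would only validate for names under example.com in clients that enforce the extension.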

Actually Firefox for example carries around code to arbitrarily set any restrictions on any of the root CAs in Mozilla's trust store.

These are documented, on a best effort basis, at:


It is true, however, that no equivalent rules are enforced in most generic TLS clients (e.g. using OpenSSL directly or something like Python).

Protocols allow for namespace restrictions on CA certificates. There could, in concept, be something stopping you from spoofing google.

https://www.ietf.org/rfc/rfc2459.txt - see the Name Constraints section

Given that I don't think you can actually buy that kind of cert, and the term "CA" is accepted to imply "universal CA", you're correct.

I'm by no means an expert, but I believe Certificate Transparency provides a way to catch that rogue CA after the fact.

Only if the certificate authority actually submits the cert to the CT log servers. Of course, eventually someone might notice (if the certificate is used widely enough; if it's used to attack specific known people, maybe not) and realise that a certificate is being served without corresponding CT info, but browsers don't check that.

Browsers can check that a cert has an equivalent CT log entry. Chrome does since 2018. https://tools.ietf.org/html/rfc6962#section-3.3 https://groups.google.com/a/chromium.org/forum/#!msg/ct-poli...

The ct-policy link you gave and that section of RFC 6962 are about SCTs. The existence of the Signed Certificate Timestamp does not prove the certificate was actually logged, it just proves that the Log promises to log this certificate within the Maximum Merge Delay (24 hours in the case of the Web PKI). A log might in fact be lying or have suffered a grave technical problem.

Figuring out how best to have browsers check the logs are truly accurate while preserving user privacy (e.g. obviously if you call a log to ask "Did you log this certificate for clown-porn.example ?" it gives them a good idea about your taste in circus-related adult content which you probably don't want them to have) is an ongoing topic of research.

With SCT checks you're in very good shape already today - bad guys would need to compromise several distinct entities to successfully pull this off. It's just not quite "Fire and forget" safe yet.

If the certificate wasn't submitted to an otherwise trustworthy CT log server then there's no way to provide an SCT ("Signed Certificate Timestamp") for a qualified CT log server. Some browsers (including Chrome and Safari) expect to be shown qualified SCTs for certificates issued after a certain date.

The SCT is signed by the log, so if you want them to lie you'd need to control enough logs to get enough qualified SCTs. Bad news, for Chrome that means it must include a Google log, which means now you're asking Google to conspire against themselves.

For now you could sidestep this by issuing bogus certificates with back-dated issuance dates. But that will stop working once those dates are too long ago for a certificate to still be valid; I think that happens at some point later this year, or maybe in 2021. If you show Chrome (perhaps other browsers too, but I've read the code in Chrome) a certificate whose lifespan (between NotBefore and NotAfter) exceeds the maximum permissible lifespan at that issuance date, it treats the certificate as invalid anyway.

This Mozilla isn't the Mozilla from back in the day.

It’s hard to imagine the old Mozilla enabling DNS over HTTPS by default to Cloudflare, with DoH’s tracking issues: https://labs.ripe.net/Members/bert_hubert/centralised-doh-is...

In all countries, giving the government the ability to issue SSL certificates would be a catastrophe in terms of surveillance and censorship.


By the way, there have been a number of openly government-operated root CAs in numerous countries for decades. Before I got involved with Let's Encrypt, I was very nervous about government influence in PKI and thought this was quite concerning. (That was also before Certificate Transparency.)

A quick look at a root store turns up

subject=C = CN, O = China Financial Certification Authority, CN = CFCA EV ROOT

subject=C = HK, O = Hongkong Post, CN = Hongkong Post Root CA 1

subject=C = NL, O = Staat der Nederlanden, CN = Staat der Nederlanden EV Root CA [and other roots with the same organization]

subject=C = TW, O = Government Root Certification Authority

I think there are several others, probably especially at the intermediate level.

Just what we need is to get politicians involved in this.

One benefit though would be that they could more easily block access to sites they don't want folks visiting by just denying a CA cert (which would then generate warnings / blocking in many browsers).

One benefit? Did you mean to say "the most obvious downside" instead?

Ok, the more I read answers to my comment, the less I think it's a good idea.

Looks like your browser didn't render the <sarcasm> tags.

> planetary responsibility.

I would like to hear from the planetary president to see what he has to say about that.

Seriously this is all great but unfortunately with all the mess going on with "the big powers" I'm not sure you want to give them that. I wished we lived in a world where we could tho.

Oh, you mean like the state provided service that is trying to sell off the .org domain registry right now?

Let's Encrypt has a backup CA for emergencies such as this: https://letsencrypt.org/certificates/

Here's the best guess what I think would happen:

Some large player - today it'd probably be Google, in the future maybe some other company - or a collection of players would decide that it's in their best business interest not to let half of the open Internet go down. And they'd come to the conclusion that providing a replacement that's basically just "change this url in your ACME software and you can continue do things as before" doesn't cost them too much (Google already runs a CA, as do many other large IT corps) and they should do it.

Oh yeah, it'd be great if Google could MITM half the SSL on the internet...

As I understand it, a CA doesn't have a way of MITMing connections just by virtue of them being the one validating the cert for a certain website. You don't share the private keys of your certs when you generate them[0], you just need for a CA to attest that yes, this certificate's public key is allowed to be used for whatever use you're applying for. ACME doesn't change that, it just allows this verification to be done automatically.

Let's Encrypt doesn't have any more ways of MITMing people using their certs than any other CA - that is, they _could_ do it by generating rogue certs, but that's no different than what Google can already do since they're a CA as well. Plus, certificate transparency logs should make it visible if they ever do so.

0: Barring weird cases I've seen of some companies letting you generate a cert entirely on their website, letting you download the private key once it's done. Which is bad practice for the reason you're talking about right now, since by then you have no assurance that they haven't kept a copy of that private key for later use.

Any CA can sign any certificate they want, including ones they generate themselves. If a bad actor got control of, or even could coerce, a CA, and could do the same for DNS, the end users would be hard pressed to know.

It's a very valid attack, although minimal. To say they don't have any way of MITM'ing a connection is wrong even if it's unlikely.

This is what certificate transparency logs are designed to solve.

Chrome and Safari currently validate that a certificate has been published in the publicly available transparency logs as part of considering it valid.

Either Google doesn't publish the certificate in the logs and it's not valid, or they publish it and people are able to see the misissuance.

It's not foolproof, but it makes the attack even less likely.

> To say they don't have any way of MITM'ing a connection is wrong even if it's unlikely.

I totally agree, it's why I qualified it with "just by virtue of them being the one validating the cert for a certain website" and later on adding that they could do so in other ways, like the one you're suggesting. Reading it again makes me realize that it could be understood that way, though, sorry if I wasn't clear enough. Sometimes not being a native English speaker betrays me a little bit. =)

Wouldn't it be equally easy for a CA to MITM a site that got its real cert from them vs. from a different CA?

CAA DNS records aim to make that more difficult, actually, but otherwise AFAIK you're right. =)


CAA is for telling competent CAs that you don't want to use their service, so as to avoid them being fooled by bad guys who pretend to be you. If you think their methods are dubious or just won't be effective due to how your names are managed, CAA lets you flag that they shouldn't issue for your names at all.

If a CA is incompetent or malevolent it would just ignore CAA records or not check them at all.

It would be a serious bug if a web browser for example went "Hey this site has a cert from Bob's CA but the CAA records for the domain say only Alice's CA is to issue" and rejected the certificate from Bob's CA. The CAA notice is about allowing new issuances right now but maybe last week when I got this certificate from Bob's CA I didn't set that CAA record so that was fine.

It would be valid (maybe not a brilliant idea, but valid) to set CAA to refuse all issuance, changing it only for a few minutes once a week while you do all your certificate changes.
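For reference, the records in question look like this (a sketch; example.com is a placeholder):

```
; allow only Let's Encrypt to issue for this name
example.com.  IN  CAA  0 issue "letsencrypt.org"

; or refuse all issuance entirely
example.com.  IN  CAA  0 issue ";"
```

CAs are required to check these at issuance time; clients are not.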

Oh wow, and here I thought having clients check that record was the whole point, as a layer of defense against rogue CAs. Thank you so much, I hadn't realized. =)

> and could do the same for DNS

Google also controls a major public DNS resolver and the most popular browser.

But hey, y'know, they're the good guys or whatever so I'm sure it's fine. (Probably. Right up until the moment when it's not.)

This is absolutely not how certificate signing requests work at all. Please don't say something so wrong just to scare people; you cannot read encrypted packets just because you signed a certificate.

Cloudflare already MITMs like 2/3 of the traffic on the internet. Personally I'd trust Google more than them. Google may be evil but at least their motives are clear.

> Cloudflare already MITMs like 2/3 of the traffic on the internet.

Cloudflare (MITM [0] or not) is on record that 10% (and not 66%) of all internet traffic now flows through their networks.

Ref: https://news.ycombinator.com/item?id=19204085

[0] https://blog.cloudflare.com/keyless-delegation/

Yo, learn how cert signing works. That's not it.

Counterpoint to what you're saying: Any trusted CA can MITM almost any site on the internet (ignoring CA extensions like Name Restrictions, pinned certs, etc). Google has a trusted CA, so while the OP is wrong it's not something they can't do.

That's not how CAs work.

> Oh yeah, it'd be great if Google could MITM half the SSL on the internet...

How exactly would Google MITM half the SSL on the internet by virtue of issuing certificates via ACME?

The private key never leaves the subject's system (the system hosting a website for example). Google would never have access to the private key for which it would issue the public key certificate.

Further, if Google abuses its power by issuing a fake certificate for another website and uses that to MITM all traffic to that website, all browsers and systems would remove the offending CA certificates from their trust store immediately. Look what happened to DigiNotar.

Not that I expect Google to issue fake certs, but DigiNotar also doesn't command 80%+ browser marketshare to soften the blowback.

> Not that I expect Google to issue fake certs, but DigiNotar also doesn't command 80%+ browser marketshare to soften the blowback.

Not sure how that is relevant. DigiNotar was a trusted root CA in all major browsers. So if an attacker managed to get a fake certificate issued by DigiNotar, they could attack 100% of the users visiting the website for which the fake certificate was issued.

In fact, they did issue fake certificates by accident due to a security breach. As soon as the error was caught, their CA certificates were removed from all browsers. They went bankrupt! That's how serious this business of issuing certificates is.

It's relevant because Google isn't likely to remove _themselves_ from their browser, which is currently the most popular web browser on the planet.

For this reason alone, having a major browser dev as a CA is not a good idea, regardless of how much or little you trust google.

Google is already a CA.


These points could be stated for any major certificate issuer. They could encounter an availability issue, bugs, vulnerabilities, etc. The funding model doesn't change these risks.

Are any of the other providers this big? 10% of all websites and growing. If the NSA get hold of the keys then it's just like the old days, or am I missing something?

> If the NSA get hold of the keys

In X.509, the CA never sees the private key associated with a certificate. So while a state actor could always manufacture a “legitimate CA-signed” replacement cert and MITM you with it, they can’t do anything about modern defense-in-depth security approaches like certificate pinning, since it’s the particular public key of the original cert being pinned, not the CA’s authority + CN.

Pinning can be used with anything in the chain, not just your leaf.

Some pinning strategies assume a particular CA is trustworthy and pin the public key for that CA, so that they don't need to update with new pins just because they got a new certificate. Obviously you do need to stay on your toes (if you pinned Symantec and then it got distrusted... need to get new pins pronto) but that's true for any pinning strategy. Laziness plus pinning is a bad combination. Like er... keeping tigers as pets and having a toddler maybe?

Considering 90% of websites aren't served by LE certificates, I wouldn't be surprised to hear if one of the legacy issuers had a larger market share.

It would not work for mass data collection, only for targeted attacks.

> ok I know Apple will stop honoring these starting in the Fall ...

Do you mind expanding on this? It’s not clear to me what status quo Apple will stop maintaining.

Starting September 1, Safari will not honor new certificates that have validity periods longer than one year. I don't have a link to their documentation because the announcement was reportedly made at a face-to-face meeting at the CA/Browser Forum in February. https://www.thesslstore.com/blog/ssl-certificate-validity-wi...

Safari will no longer honor certificates valid for more than 13 months, starting in September [0]

[0]: https://www.theregister.co.uk/2020/02/20/apple_shorter_cert_...

> I shudder to think about what will happen if they go down for whatever reason.

We would quickly see articles and reminders that free means you are not the client :)

Just recently I had issues with the Let's Encrypt bot. Apparently on a new version of CentOS it craps out with some incompatible Python libraries. Once I fixed it, it turned out I was blacklisted from requesting new certs for 7 days, per their rate limits. That sent me back to the drawing board for the 60-some domains I run, and I just purchased the damn certs. They're so cheap these days that unless you run a hobby website it's just not worth the savings.

> it turned out I am blacklisted from requesting new certs for 7 days, per their rate limits

The rate limit for failed issuance is one hour, not 7 days. The 7 day rate limit applies when you successfully issue 5 identical certificates; then you have to wait 7 days more to issue a new identical certificate.


It's unlikely that you would be subject to the 7 day duplicate certificate limit due to a Certbot dependency problem, because the dependency problem would have to allow Certbot to obtain the certificates correctly but fail to save them properly.

(Source: I'm a Certbot developer.)
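The duplicate-certificate limit described above can be sketched as a rolling seven-day window. This is an illustration of the policy as stated, not Let's Encrypt's actual implementation; the function name and data shape are hypothetical.

```python
from datetime import datetime, timedelta

DUPLICATE_LIMIT = 5          # identical certificates allowed...
WINDOW = timedelta(days=7)   # ...per rolling 7-day window


def can_issue_duplicate(past_issuances, now: datetime) -> bool:
    """past_issuances: datetimes when this exact certificate was issued.
    Returns True if another identical certificate may be issued now."""
    recent = [t for t in past_issuances if now - t < WINDOW]
    return len(recent) < DUPLICATE_LIMIT
```

The failed-validation limit mentioned above is separate and resets after an hour, which is why a broken client normally cannot lock you out for a week.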

I should also add that I'm really sorry that you experienced an installation problem on CentOS. Dependency and packaging problems continue to be a challenge for Certbot and our colleagues are still trying to find a more universal solution to them (probably involving snaps).

I had similar problems with that bot recently (i.e. the python bot "certbot"), so I looked around and found a very nice alternative named PJAC[0]. I have now switched a bunch of my domains to use PJAC, and so far it works like a charm. Its only dependency is Java (version 8 or higher). Everything else is bundled in the downloaded artifact, just as it should be.

[0] https://github.com/porunov/acme_client

I’m a fan of acme.sh [1]. It’s a battle-tested Bash script that handles many different use cases, including many DNS providers if dns-01 is how you roll.

I run my own DNS server using Knot and acme.sh supports it.

[1]: https://github.com/acmesh-official/acme.sh

The worst thing that could happen is a corporation taking them over with malicious intent.

There's nothing a malicious actor could do by taking over LE that they couldn't do by buying a lower-profile trusted CA more cheaply. LE does not have access to sites' private keys, for example.

The same rule as for any critical system applies: make sure your site uses two ACME providers and implements fallback on failure.

The default setup is to renew 30 days before expiry. If it fails at that point, you still have 30 days for your "fallback" to kick in.

I'm not sure it's worthwhile for your fallback to be ACME-based, or even automated. In fact, the failure isn't even one that should page anyone in the middle of the night; 30 days gives you enough time to sort things out manually during regular work hours.

Sure. However, it costs nothing to have multiple providers with automated retries and fallbacks.
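The retry-and-fallback idea above can be sketched as trying an ordered list of providers until one succeeds. The provider callables here are hypothetical stand-ins for whatever ACME client you use.

```python
def issue_with_fallback(domain, providers):
    """providers: ordered list of (name, issue_fn) pairs, where
    issue_fn(domain) returns certificate material or raises on failure.
    Tries each ACME provider in turn and returns the first success."""
    errors = []
    for name, issue in providers:
        try:
            return issue(domain)
        except Exception as exc:  # in practice, catch your client's error type
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed for {domain}: {errors}")
```

Combined with renewing 30 days before expiry, the first provider failing can just raise an alert for business hours rather than trigger the fallback immediately.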

I like my automated functions to stay that way as long as possible!

Redundancy of suppliers (of anything) is a general good practice.

Who else provides free ACME other than let's encrypt?

Congrats and thanks, Let's Encrypt crew! I've been using your services for a while now.

What made Let's Encrypt go through the roof is that a whole bunch of providers effortlessly make your site HTTPS-compatible.

Like netlify (hosting a static site with build is super easy). Same with Cloudflare and a bunch of others.

As a user it’s all abstracted at simply the push of a button.

Why does the number in the TOTAL row and TOTAL column contain 1.7 billion? https://crt.sh/?caid=16418

Probably because it includes pre-certificates. I have to run so I don't have time to explain, but basically at a certain point we started submitting each certificate twice, the first one being a "pre-cert." You can probably Google for more info.

Thanks for answering my question as well as others in this thread! Let's Encrypt is awesome.

Would be interesting to know how many of those billion+ are flat out bad/malicious.

Not to take away from this achievement, but is no one else bothered by this massive centralisation of critical security infrastructure?

I've always been concerned about the centralised model of SSL. But interestingly, whenever I mention that the model is broken and that web of trust is the best thing we have, I get downvoted and shouted at here on HN.

I get the same.

Single point of failure, and you don't even get insurance if something does go tits up, unlike with a normal paid SSL certificate.

If ICANN can’t be trusted what makes me want to trust LetsEncrypt.

I don’t trust it, I won’t use it. I don’t like it.

And that's where they realise they stored the serial number in an int32!

Here's the boulder CA function that generates serials https://github.com/letsencrypt/boulder/blob/master/ca/ca.go#...

int32 is enough to store one cert per ipv4 address....

So a two month supply, since serial numbers don't get reused.

Which isn't really enough, considering SNI and subdomains.
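As a back-of-envelope check on how long an int32's worth of serials would actually last: the issuance rate below is an assumption for illustration, not an official Let's Encrypt figure.

```python
SIGNED_INT32 = 2**31               # about 2.1 billion distinct serial numbers
ASSUMED_RATE_PER_DAY = 1_500_000   # hypothetical issuance rate

days = SIGNED_INT32 // ASSUMED_RATE_PER_DAY   # how long until serials run out
years = days / 365
```

In practice real CAs sidestep this entirely: serial numbers are required to contain substantial randomness and are far wider than 32 bits.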

Now, with domain prices threatening to skyrocket, let's create or adopt a free alternative to ICANN with alternate roots supported by all modern browsers, like OpenNIC.

Yeah, I know there are billions of legacy devices that will never work with it, but you've got to start somewhere, or adopt a current alternative ASAP so that a decade from now it's common.

Stats[1] indicate about 1.5-1.7 billion [2] websites exist.

This would mean Let's Encrypt certificates make up more than 2/3 of the internet!

[1] https://www.websitehostingrating.com/internet-statistics-fac...

[2] https://www.internetlivestats.com/total-number-of-websites/

Edit: Sounds like these include renewals from the thread, but it’s still over 10pct of the internet!

It sounds like a billion certificates includes renewals, so not anywhere close to 2/3rds.

This is correct. We currently serve about 195 million websites, where website is defined as a unique Fully Qualified Domain Name.
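Putting the thread's numbers together as quick arithmetic (all figures come from the comments above; the 1.6 billion total-website count is a rough value from the linked stats):

```python
total_certs = 1_000_000_000     # certificates ever issued, including renewals
active_sites = 195_000_000      # unique FQDNs currently served, per LE above
total_websites = 1_600_000_000  # rough global website count from the stats links

certs_per_site = total_certs / active_sites   # ~5.1: renewals dominate the billion
share_of_web = active_sites / total_websites  # ~0.12, i.e. "over 10pct"
```

With 90-day lifetimes, about five certificates per site is roughly what a year-plus of renewals looks like, so the billion-certificate figure and the 195M-site figure are consistent.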

Not only that, but you also made a profit of zero billion dollars!!! ;) Thanks for such a valuable public service.


I'd gladly pay a dollar a year to use the service (not a dollar a year per certificate).

They do accept donations!


Disclaimer: not affiliated, but if you want to support a non-profit, that's the best way to do it :)

That’s over 10pct! It’s still very impressive!

Great job, and thank you so much!

That's still awesome though!

Why you make me think hard with pct? Why not just use a % sign?

How many were issued to phishing/malvertising/malicious sites?

If you're interested in running your own CA (e.g. for .lan, and not having to deal with dozens of self-signed certs) and want the benefits of ACME, smallstep (step-ca) makes this really easy.

Unfortunately lots of user interfaces don't support setting a custom ACME directory (the clients mostly do, but both pfSense and Proxmox require fiddling to get your custom CA working via the web UI), but it's getting better.

I converted my website from http to https last weekend.

Many thanks to LetsEncrypt with CertBot for making it pretty much painless.
