Let's Encrypt Stats (letsencrypt.org)
134 points by sinak on Dec 5, 2015 | 34 comments



As their verification process already requires DNS to be correct, can we get a way to verify directly via DNS?

Maybe I could put a public key in DNS, sign the CSR with the private key, they’d poll the DNS for the key, verify, and approve it?

That would make it much easier for me to automate, without having to use a webserver.


As clarification: a request for a new certificate could then be as simple as "sign the CSR with the key, do a POST to their URL with the CSR in the form data, get the results back".
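
To make the idea concrete, here is a rough Python sketch of that hypothetical flow. The endpoint URL, form fields, and file names are made up purely for illustration; this is not the real ACME API, just the shape of what the parent describes:

    # Hypothetical sketch of the flow described above. The endpoint URL and
    # form fields are invented for illustration; this is NOT the real ACME API.
    import requests
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Account private key whose public half would be published in DNS
    # (assumes an RSA key for this sketch).
    with open("account_key.pem", "rb") as f:
        account_key = serialization.load_pem_private_key(f.read(), password=None)

    with open("example.org.csr", "rb") as f:
        csr = f.read()

    # Sign the CSR so the CA could check the signature against the key in DNS.
    signature = account_key.sign(csr, padding.PKCS1v15(), hashes.SHA256())

    response = requests.post(
        "https://ca.example/issue",                     # made-up endpoint
        data={"csr": csr.decode(), "signature": signature.hex()},
    )
    print(response.status_code)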

Not necessarily that exact flow – obviously, many other approaches would work, too – but having to run a webserver to get the certificate approved, move files around, etc., makes the process a bit bothersome if one wants to automate it without giving the script access to the webserver, or doesn't want to run a webserver at all.

EDIT: Obviously, I have to be missing something, right? What’s the error in my concept? I mean, if it was that simple, they’d have already implemented it.


DNS verification is already in the ACME spec, but not yet supported by the live servers. So the answer is "soon".
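
For reference, the dns-01 challenge in the ACME spec boils down to publishing a TXT record like the one computed below. The token and account-key thumbprint shown are placeholder values; the real ones come from the CA and from your ACME client:

    # Sketch of the value an ACME dns-01 challenge publishes in DNS.
    # 'token' and 'thumbprint' are placeholders; real values come from the CA
    # and from your ACME account key.
    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"       # placeholder
    thumbprint = "NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs"  # placeholder

    key_authorization = f"{token}.{thumbprint}"
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

    # The CA then looks up this record:
    print(f'_acme-challenge.example.org. IN TXT "{txt_value}"')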


Sad that they don’t support it yet, I’d have loved to use it. It seems so simple and logical.


Am I the only one a bit annoyed by the 90-day expiry limit? Although the reasoning about automation is sound, I don't see how a short expiry limit actually improves security [as stated in one of the reasons] in any way.

Especially given the cert is basically now a requirement to get onboard with http2 at all.

I truly appreciate the heroic efforts of let's encrypt, however that seems to be an annoying middle step before DNSSEC.


The security argument makes sense.

The background is the following: right now, certificate revocation is essentially broken. There are two mechanisms (CRL and OCSP), and neither works: CRL for obvious scalability reasons, and OCSP because you'd need near-100% uptime of the OCSP responder for it to be reasonable, and never have any firewall (think of captive portals) that blocks the connection to OCSP. Therefore browsers decided either to implement OCSP insecurely with a soft fail (mostly pointless) or not at all.

So we have no working revocation. Someone hacks into your server and steals your key and uses it to attack your users. What do you do? Of course you revoke your cert, but it doesn't really make a difference.

People have been thinking about how to fix revocation. One way would be OCSP stapling + must-staple. But it's still a long way until that works (the OCSP stapling implementations of both Apache and nginx would need a huge overhaul; they are pretty bad and not up to the task). Short-lived certificates are a way to make revocation less important, because they reduce the time a hacked key can be used.
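
As a rough way to see whether a given server already staples OCSP responses, something like the following can be used (this assumes the openssl command-line tool is installed and just greps its `s_client -status` output):

    # Rough check for OCSP stapling, by shelling out to `openssl s_client -status`.
    # Assumes the openssl CLI is installed.
    import subprocess

    def has_ocsp_staple(host: str, port: int = 443) -> bool:
        proc = subprocess.run(
            ["openssl", "s_client", "-connect", f"{host}:{port}",
             "-servername", host, "-status"],
            input=b"", capture_output=True, timeout=15,
        )
        out = proc.stdout.decode(errors="replace")
        # openssl prints "OCSP Response Status: successful" when a staple was
        # sent, and "OCSP response: no response sent" when it was not.
        return "OCSP Response Status: successful" in out

    print(has_ocsp_staple("example.org"))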

And if this really bothers you a lot: there are two other free CAs that will give you one-year certs: StartSSL and WoSign.


> Especially given the cert is basically now a requirement to get onboard with http2 at all.

That was a proposal from before the HTTP/2 spec was finalized; it's not a requirement anymore.

> I truly appreciate the heroic efforts of let's encrypt, however that seems to be an annoying middle step before DNSSEC.

I don't think that DNSSEC is a great idea. Chrome supported DANE for a while, but they removed the support for it.

TLS + HPKP + HSTS works well; DNSSEC wouldn't add much on top of that.


It's not a requirement per spec, however it is a requirement for all major browsers.


I'm tired of downvotes without explanation. You don't see a problem with a new piece of automation that, when it breaks, basically brings down your website? Or any problem with letsencrypt basically giving you only <90 days to generate a cert with somebody else?

I'd also really like to hear the argument for improved security. Because a compromised host can now generate new certs for himself at zero cost as well, so what's the real point behind the 90 day limit? Giving NSA less time to crack your key?

Thanks for clarifying.


> You don't see a problem with a new piece of automation that, when it breaks, basically brings down your website?

A TLS cert change doesn't bring down your website. Swap the old cert for the new and reload. Done.

> Or any problem with letsencrypt basically giving you only <90 days to generate a cert with somebody else?

Look at the folks behind the project. Take a good look at the folks running and funding the project. They know how to run an essential Internet service. Moreover, they will give you more than 90 days notice if they ever need to shut Let's Encrypt down.

Frankly, you can level this criticism at any vendor that's a potential SPoF. It's not a good complaint when it's so general.

> Because a compromised host can now generate new certs for himself at zero cost as well...

Because -AIUI- CRLs don't currently work well -in practice-, a 90 day window of cert validity limits the amount of time a given cert can be out in the wild, impersonating your site.

> ...what's the real point behind the 90 day limit? Giving NSA less time to crack your key?

If I might snark for a second:

1) If your box is pwnd, and your private keys aren't in an HSM, then there's no need to try to crack your key.

2) AFAIK, there is no non-brute force attack against the public part of a correctly generated TLS keypair that will reveal the private part of the key. (Or am I totally wrong about how TLS works, and there's only a private key? I can't remember. :( )


> a 90 day window of cert validity limits the amount of time a given cert can be out in the wild, impersonating your site.

But as the OP pointed out, if a site using LE is compromised then the attacker has basically an infinite supply of valid certs, because they are automatically renewed.

There is no manual challenge to receive a new cert. So essentially the validity period of an LE cert means nothing.


...that's only true while the attacker controls the site.

The 90 day window mitigates the effect of the cert being leaked.

Just because I grabbed a site's cert one day doesn't mean I control the site. In fact, there's nothing to say I even compromised the site to get the cert in the first place. Perhaps it was mishandled internally? Or grabbed via a Heartbleed-style attack?


> But as the OP pointed out, if a site using LE is compromised then the attacker has basically an infinite supply of valid certs, because they are automatically renewed.

As zeendo mentions, this is only true while the attacker can continue to provide proof that they control the site in question.

> There is no manual challenge to receive a new cert.

You're talking about a server on which an attacker has the ability to

* Read the server's ACME private key

And one or more of

* Add new documents to [scheme]://[domain]/.well-known/acme-challenge/

* Stand up an HTTPS server at the domain for which a key is being requested that responds with LE-provided data using an LE-provided temporary key.

* Sign challenge information using a previously issued private key.

* Add a TXT record for the domain in question containing data specified by the LE server.

This requires a server to be pretty thoroughly pwnt.
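
For a sense of what the first two of those capabilities amount to in practice, here is a minimal sketch of the http-01-style challenge response an ACME client writes out. The webroot path, token, and thumbprint are placeholders, not values taken from the official client:

    # Minimal sketch of an http-01 style challenge response.
    # The webroot, token, and account-key thumbprint are placeholders.
    import os

    webroot = "/var/www/example.org"                             # assumed webroot
    token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"        # placeholder from the CA
    thumbprint = "NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs"   # placeholder

    challenge_dir = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)

    # The CA fetches http://example.org/.well-known/acme-challenge/<token>
    # and expects the body to be "<token>.<thumbprint>".
    with open(os.path.join(challenge_dir, token), "w") as f:
        f.write(f"{token}.{thumbprint}")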

What's more, even though cert revocation works poorly in the real world, the LE servers almost certainly respond correctly to cert revocation requests. So, if a server operator notices that his box has been pwnt, he can revoke the certs that were issued during that time, closing the "Sign challenge info using a previously issued LE key" barn door for good. (Or until his machine gets pwnt again.)

Frankly, I think that OP hasn't actually had a good look at how the LE software works, or the design of ACME.


Maybe I could see annoyance about the perceived dependency on Let's Encrypt. If someone is used to setting a cert up for multiple years and then feeling like the transaction is complete for a long while, LE could appear too involved. It's almost like the server needs a permanent attachment to an organization, constantly pinging it to ask permission to exist on the web. If everyone goes 100% HTTPS/2 with automated renewal, one could imagine a scenario where Let's Encrypt going out of service would break large chunks of the web. Or it gives them too much power to selectively deny renewal and break sites with inattentive admins.

I'd be a little hesitant to switch commercial or critical sites to Let's Encrypt right now. But overall, no, I like automated renewal. I think I'd choose to get a new certificate monthly instead of every 3 months. The very-low-price provider I was using before requires me to go through a manual process on their website. The main feature that'll cause me to switch is being able to set it and forget it. With automated monthly renewals, the 90-day expiry is extra buffer in case something goes wrong.


There is no way to reliably revoke certificates. Therefore it's crucial to keep a short expiry limit.

90 days isn't short, it's about 89 days too long.


> however that seems to be an annoying middle step before DNSSEC.

Do tell. How is LE a middle step before DNSSEC?

(I would appreciate it if only tenfingers replies. I'm interested in his response, not speculation from the rest of us. :) )


Is there a particularly large marketing presence for LE in Germany? Or am I just misinformed re: the popularity of the .de TLD?


LE has been covered for months now in all major (online) trade magazines and web sites, like Heise's c't and iX, Golem, Chip etc.

Additionally, .de is the most popular ccTLD, second only to .com overall, according to Wikipedia (I'm not sure whether that is based on new registrations or total registrations, though).


No and probably not. You're underestimating just how large the German software development/hacker community is.


It's cheap, it's pretty, it's suitable for puns, and anyone can buy .de domains, so it's probably used by many people everywhere.


Very nice. Now do wildcards!


I know wildcards are useful and all, but they enable bad practices and increase risk. Now that LE makes it easy to mass-request certificates, excuses for using a wildcard are wearing thin. I'd rather LE never allowed them.


Wildcards allow me to serve all of my sub-domains over the same HTTP/2 connection rather than causing the client to reconnect/renegotiate TLS/SSL.

This allows me to use domain sharding for those browsers still on HTTP/1.1, yet not have a performance penalty for doing so in the case of HTTP/2.


Let's Encrypt does not support wildcard certs but they do support SAN certs, so if your list of subdomains is fixed then you can still benefit from socket reuse with HTTP/2.


Want it so bad, but that said, I registered 6 subdomains at once, totally painless.


Yes, it would be very useful so I can use it with a private instance of ngrok.


Why? The signed cert is on your disk in 2 seconds; just request it on the fly. What's your use case for wildcard certs?


One use case is a server that terminates traffic for multiple domains, for example a load balancer. I use AWS ELB/CloudFront for SSL termination and they only accept one cert, but they can send to multiple back ends. Requests are then routed to the right back end based on the host in the URL.


You may simply want AWS ELB to support SNI, and thus allow you to install multiple certificates.


Rats! When I read the headline, I thought the article would be about a proposal to encrypt the statistics generated by web servers.


I should probably switch to Apache. :-/


Why? The client can work pretty well with the --webroot issue method on any webserver. Hell, if you can take your webserver offline for a short time you can use the --standalone option's built-in server.

Even where your webroot is wired into something else (a proxy, FastCGI, uwsgi/Django, etc.), you can rig it to check whether a file exists in another directory before handing off.

    # Serve files that exist on disk (including ACME challenge responses
    # dropped into the webroot); otherwise hand the request off to Django.
    location / {
        root /web/$site/webroot;
        try_files $uri $uri/ @django;
        access_log off;
    }

    # Fallback: pass everything else to the uWSGI/Django backend.
    location @django {
        include uwsgi_params;
        uwsgi_pass unix:/web/$site/socket;
    }
Simple! I reuse this in many sites (hence the $site variable) but you could decouple this and use a single directory for all sites/certificates (it doesn't have to be in the webroot, just something nginx can read).


Thanks to the ISRG and to all the sponsors (except Facebook, I don't like them :)) for making Let's Encrypt.

This does not fix the CA problem, but it does a whole lot to make the internet a safer place than it is now.


You don't have to like Facebook to thank them, I'm not a fan of Facebook either but appreciate their sponsorship of something so fundamental to the web.



