ACME v2 and Wildcard Certificate Support is Live (letsencrypt.org)
1024 points by schoen on March 13, 2018 | 310 comments



First, congrats, this is great news! There are a lot of use cases out there that require a wildcard cert or work far better with one.

> It is our intent to transition all clients and subscribers to ACMEv2, though we have not set an end-of-life date for our ACMEv1 API yet.

Please don't do this. It will break millions of sites needlessly. Most installations of Let's Encrypt plugins aren't going to auto-update to v2. A lot of us are also using custom v1 code for various reasons that may not be easy to change.

The preferable end-of-life date for ACMEv1 (barring any existential security issues) should be never. Otherwise you will be executing a Geocities-sized web meltdown every time you phase out a version of the API.


The reason we haven't announced an EOL for ACMEv1 is that we won't announce one until we are confident we won't cause the kind of meltdown you describe.


You could block new domains (new to Lets Encrypt) from using v1.


This will break many tools which currently rely on LE. E.g. mailinabox, which uses LE to set itself up.


I doubt they would just break it. I imagine if they do this, it will be announced sufficiently far in advance (probably around two or three years) to allow people to update their ACME clients. Then they can just operate ACMEv1 for existing domains until no one is asking for more (and scale down the infrastructure).


The problem is that LE is being used as plumbing. I noticed MIAB was using LE because I recognised that SSL-out-of-the-box was something interesting, and I investigated. But I wager most people who use it will have no idea. They just install it, and "it works", as it should. Great. What's HTTPS? That's the entire point of tools like MIAB, mind you:

> Technically, Mail-in-a-Box turns a fresh cloud computer into a working mail server. But you don’t need to be a technology expert to set it up.

https://mailinabox.email

I'm just choosing MIAB as an example here. This applies to anything that LE now enables. People don't know they're using LE, much like IOT users don't know they're using HTTP/1.1. It's part of the plumbing. What's an ACME client? What's LE? What's v1?

This is probably happening for IoT devices across the globe just the same. A two-year expiration date is an order of magnitude too short for plumbing. Imagine if we suddenly decided to phase out HTTP/1.1 within two years.

We have to recognise that we are shoving HTTPS down people's throats. Pretty soon, HTTP will get big f-off warnings. OK: fair enough. However, if we're doing that, we should also provide a viable alternative, with the same reliability. Otherwise, HTTPS is a massive step backwards for the decentralised web. LE is that alternative, but not if we start breaking backwards compatibility every 2 years.


Again, I'm not saying that the two year expiration date means "v1 stops working".

Rather, "after this point, no new domains may be set up via v1", so any existing certificates and installations are grandfathered. Two years is sufficient for MIAB to update their software and distribute it to users.

>LE is that alternative, but not if we start breaking backwards compatibility every 2 years.

Not what I'm saying either. They have a v2 now, we don't know if they need a v3. And they want to keep v1 running for a while.

But there will be a point where v1 will need to be switched off, similar to how modern browsers have switched off SSLv3 despite a lot of people still having servers running it.

LE will, at some point, have to decide between keeping v1 running and moving away from old protocols to be able to evolve. And that decision cannot be pushed back indefinitely.


That's good to know, thank you for the info.


You might simplify things for yourself to some extent by requiring ACMEv2 for wildcard requests, which will reduce the number of people deploying the old client and spur many to upgrade.

And your old client still works on the systems it's deployed on (by definition) so you could just stop development on that.


Wildcards are only available via ACMEv2. The post linked to here says that.


d'oh, I overlooked that (and I still had the post open in another window!). Thanks.

It's bad enough that some people comment without reading -- I apparently commented without paying attention.


If the EoL is far enough after the release of v2, then I think it is preferable that people start getting security warnings for sites that stop working: it is an indication that they are no longer maintained, and so are potentially not receiving security updates for other matters either.

Obviously a decent length of grace period would be the correct way of deprecating the older version, to give people time to update their infrastructure accordingly. I would suggest at least a full year (giving at least four renewal cycles to test changes in a QA environment before being forced to update production), probably more. Perhaps, if possible, a year for new certificates and two years for renewals?


Since the certificates need to be renewed every three months, they have exact numbers on how many people use ACMEv1. They also have, as a natural part of the process, the domain names of those users. This should allow them to watch very slowly as the number of v1 users drops, until there are so few that they can try to contact any remaining users before deciding to set an end-of-life date for that version.


You are supposed to provide a valid email address when you register for a Let's Encrypt certificate. In theory they should be able to contact all v1 client users.


The email address is optional, though most clients tend to hide that fact (or make it mandatory).


When you run the Let's Encrypt official client (certbot), it updates itself.


Only if you use certbot-auto, not if you use an OS package.

(I'm on the Certbot team.)

Also, some people dislike this feature quite a bit, and there are about 100 different clients.

https://letsencrypt.org/docs/client-options/


And some people take clients like acme.sh and modify them. I do that myself.

There's enough entrenched resistance to HTTPS without giving people more ammunition regarding the actual amount of work involved. Unless there's a security reason to eliminate the v1 endpoint, please don't.


They already disabled TLS-SNI-01 for new certificates because of security issues [1].

This was a major breaking change, without any advance notice, but nothing melted down.

I'm sure the other validation endpoints are used a lot more, but the effect shouldn't be any different, especially if they give a deprecation notice of a year or two.

[1]: https://community.letsencrypt.org/t/important-what-you-need-...


While there was no world-destroying core meltdown, it was still super annoying to deal with. Lots of code needed to be touched. I'd really like to see a fixed TLS-SNI challenge come back, as running a port-80 HTTP server just for LE sucks somewhat.

DNS challenges exist and are useful but have more extensive infrastructure requirements. Nothing beats the ease of use of "just put the box up and it'll retrieve its cert as needed".


That depends only on how different v2 is from v1.


> The preferable end-of-life date for ACMEv1 should be never.

As would be the preferable end-of-life date for SSLv3 and HTTP.


The SSL zealotry drives me nuts. The infosec community screams constantly about "HTTPS everywhere", but they either don't know or don't care about all the effort and pain they're creating for developers who just want their software to work. How many perfectly good sites will be marked ominously as "insecure" by Chrome in the next few months? Sites that were working just fine until someone at Big G decided they weren't.

(Related, a big thanks to Google for un-trusting that whole big Symantec security chain. Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security, as I don't have nation states or motivated hackers in my threat model.)

Security measures should be weighed like everything else - as cost/benefit. In many cases the cost of the security is not worth it.

Edit: I'd just like to point out the irony in some of the replies to this comment. I'm complaining about zealotry, and the vast majority of nasty replies I've received to this comment are using language that only zealots and ideologues would use. My god, you'd think I'm killing puppies based on some of these responses. Nope, just advocating for using HTTPS where it makes sense, and not having it forced down your throat.


> developers who just want their software to work.

Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.

> they either don't know or don't care about all the effort and pain they're creating

You have not been paying attention to the hundreds of tools available to make HTTPS painless.

> until someone at Big G decided they weren't.

And Mozilla. And countless research papers. And real-world attacks that are reported over and over again. The fact is that the global Web has become hostile, regardless of your prejudice against Google's Web security teams.

> In many cases the cost of the security is not worth it.

The problem is that it's not YOUR security, it's other people's. If websites don't implement HTTPS, it's the users of the Web who pay the price. It's their privacy being deprived. And the website becomes easy to impersonate and manipulate, increasing the liability of having a website. HTTP is bad news all around.


What about hosting HTTP content because you verify GPG signatures upon download? This content would then be super easy to cache on the local network. HTTPS defeats this and makes it uncacheable.

I hardly ever see people talk about this use case and how to solve it with HTTPS everywhere. AND it's super widely used: e.g. Debian repositories.


HTTPS doesn't make it uncacheable - you can still mirror an HTTPS repository with another HTTPS repository (with its own domain name and certificate), and preserve the PGP signatures inside the repository. apt works fine with exactly this model: you use HTTPS for transport-layer protection and GPG for the existing things Debian's security model was already good at. The Debian repository is behind HTTPS at https://deb.debian.org - in existing Debian releases you may need to install apt-transport-https, and then just set your sources.list to

    deb https://deb.debian.org/debian stable main
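
For anyone wanting to try it, here's a minimal sketch of the switch-over on an existing install (hedged: the mirror host is the one named above, the sed edit assumes a stock sources.list, and the transport package should be unnecessary on newer releases):

    # install the HTTPS transport and CA bundle (harmless where apt already supports HTTPS)
    sudo apt-get install -y apt-transport-https ca-certificates
    # rewrite existing deb.debian.org entries to HTTPS, then refresh the package lists
    sudo sed -i 's|http://deb.debian.org|https://deb.debian.org|g' /etc/apt/sources.list
    sudo apt-get update
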
HTTPS cannot be used as a replacement for PGP in this scenario, but that's the wrong way to see HTTPS. It doesn't provide purpose-built security for people who have custom threat models and need to build security infrastructure anyway (e.g., Debian verifies PGP signatures on sets of packages uploaded by developers, and then builds those packages and puts them into signed archives). HTTPS is baseline security - it's the security that every web connection should just have. It's not surprising that some specific use case like Debian repositories needs more-than-baseline security.

And because HTTPS is nothing more than baseline security, it's possible to automate it with things like Let's Encrypt and not add any more checking beyond current control of DNS or HTTP traffic to the domain.

(Another confusion along these lines is assuming HTTPS is useful as an assertion that a site isn't malware. It asserts no such thing, only that the site is who it claims to be and network attackers are not present. If I am the person who registered paypal-online-secure-totes-legit.com, I should be able to get a cert for it, because HTTPS attests to nothing else.)


I'm not talking about a mirror, which has a different domain name. I'm talking about a transparent cache like squid. This will mean I don't have to change the OS images that I might not even control in order to get traffic savings, whereas under your model I would have to, which again, may not even be feasible.


Clients for this aren’t web browsers. Making browsers warn about them doesn’t break anything.


There are a variety of attacks against GPG-signed repositories - an article [1] by Joe Damato explains them, and they can all be trivially mitigated by serving the repositories over TLS.

[1]: https://blog.packagecloud.io/eng/2018/02/21/attacks-against-...



I'm actually surprised debian repos are still HTTP.

Don't get me wrong: GPG signatures with a pinned public key are a lot better than trusting the TLS of a random mirror.

But isn't it nice to have two layers? The two key systems are independent and orthogonal, which seems like a solid win.

Need I remind you of Heartbleed (OpenSSL), or the very Debian-specific GPG key derivation bug years ago?

There will always be bugs, we can only hope they aren't exposed concurrently :)


That, and the way gpg is used for apt provides no confidentiality at all, just authenticity & integrity. Someone who can see the traffic will still know which packages you've downloaded.


Indeed. The same is also true for repositories, served via SSL.

The majority of HTTPS traffic is sniffable and largely non-confidential, unless you pad every file and web request to several gigabytes in size.

Does your website use gzip? Good, now padding won't help you either, unless it is way bigger than the original content. Oh, and make sure that you defend against timing attacks as well! Passive sniffers totally won't identify a specific webpage based on its generation time, will they?!

As for authenticity… surely you are going to use certificate pinning (which has already been removed from Google Chrome for political reasons). And personally sue the certificate issuer when Certificate Transparency logs reveal that one of Let's Encrypt's employees sold a bunch of private keys to third parties. Of course, that won't protect authenticity, but at least you will avenge it, right?

SSL-protected HTTP is just barely ahead of unencrypted HTTP in terms of transport-level security. But it is being sold as a silver bullet, and people like you are the ones to blame.


TLS is getting better and there is a LOT of momentum to this.

I bet the SNI issues will eventually be fixed too.

And yes, with momentum behind Certificate Transparency, it could definitely hold CAs' feet to the fire :)

TLS is no silver bullet, but it's a good base layer to always add.


Having two independent systems while destroying the traffic savings from a transparent caching system seems like a bad trade-off to me.

Consider you're a cloud provider running customer images. If everyone downloaded the same package via HTTPS over and over again, the incurred network utilization would be massive (for both you and the Debian repositories in general) compared to everyone using HTTP and verifying via GPG, all served from the transparent Squid cache you set up on the local network.


I fear the trust issues with generic HTTP caching make it infeasible.

It would probably be better to use a distributed system design for this... BitTorrent, or who knows, IPFS maybe.


> What about hosting HTTP content because you verify GPG signatures upon download

If you're doing this, then you've made your own HTTP client so you can do whatever you want.

"HTTPS Everywhere" is a web browser thing.


So it's just bad naming. "Everywhere" to me implies everywhere, not just everywhere in the browser. Regardless, it looks like there are still people in this thread who are confused about it like me, though.


> What about hosting HTTP content because you verify GPG signatures upon download?

Because the rest of the content is not verified?????? That's the whole point of HTTPS????????


I didn't downvote this and this is a valid misunderstanding.

The whole point of having GPG is that you (as the distributor/debian repo/whatever) have already somehow distributed the public key to your clients (customers/debian installations/whatever). Having HTTPS is redundant as it is presumed that initial key distribution was done securely.


Those whom you trust with your internet browsing are usually also those whom you trust with HTTPS certificates. E.g. your browser, your operating system, your ISP, et al. are still able to spy on you unless the site uses certificate pinning, which is unfortunately not feasible with Let's Encrypt due to certs only lasting three months.


"It's their privacy being deprived."

I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.

Will these "advances" benefit users more than they benefit the companies serving ads, collecting user data and "overseeing the www" generally? Is there a trade-off?

To users, will protecting traffic from manipulation be viewed as a step forward if as a result they only see an increase in ads and data collection?

Even more, perhaps they will have limited ability to "see" the increase in data collection if they have effectively no control over the encryption process. (e.g., too complex, inability to monitor the data being sent, etc.)


> I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.

We're talking only about HTTPS. Adding HTTP/2 just muddies the conversation.

Care to give any argument for how adding a TLS layer over the exact same protocol (HTTP/1.1) will be used to do that?


/I wonder/s/HTTPS/TLS/


> Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.

Except most big orgs now employ MitM tools like BlueCoat to sniff SSL connections too.

> You have not been paying attention to the hundreds of tools available to make HTTPS painless.

I have, and they don't. They make it easier, but you know what's truly painless? Hosting an html file over HTTP. What happens when Let's Encrypt is down for an extended period? What happens when someone compromises them?

> And real-world attacks that are reported over and over again.

Care to link to a few?

> The problem is that it's not YOUR security, it's other people's.

Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?


> Care to link a few?

From Friday, in which Turkey takes advantage of HTTP downloads to install spyware on YPG computers: https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlo...

From a couple months ago, where Comcast injects JavaScript into HTTP connections: http://forums.xfinity.com/t5/Customer-Service/Are-you-aware/...


> Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?

Without TLS, how do YOU know that the user is receiving your static resume? Any MitM can tamper with the connection and replace your content with something malicious. With properly configured TLS that's simply not possible (with the exception you describe in corporate settings, where BlueCoat's cert has to be added to the machine's trust store in order for that sniffing to be possible). Hopefully in the future even that won't be possible.


And web servers can even detect TLS MITM: https://caddyserver.com/docs/mitm-detection


> Oh, so you know better than me what kind of content is on my site?

The content of your site is irrelevant. We do know that your lack of concern for your user's safety is a problem though.

I also wish that managing certs were easier, but until then, passing negative externalities to your users is pretty sleazy.


> Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?

Absolutely yes. Without that layer of security, anyone looking at your resume could either be served something that's not your resume (to your professional detriment) or more likely, the malware-of-the-week. (Also to your professional detriment).

Do you care for the general safety of web users? Secure your shit. If not for them, for your own career.


So I've heard this argument countless times, and it completely makes sense from a theoretical perspective. Yes, it's very possible for MitM to happen, and that would cause one of the two scenarios you described.

But how likely is it to actually happen? For the former, someone would need to target both you and specifically the person who you think will view your resume, and that's, let's be honest, completely unlikely for most people. The second case I can see happening more in theory as it's less discriminating, but does it actually happen often enough in real life to the point where it's a real concern?

FWIW, I have HTTPS on all my websites (because, as everyone mentioned already, it's dead simple to add) including personal and internal, but I still question the probability of an attack on any of them actually happening.


I have been MitM'd by my ISP, Comcast, multiple times. Their injection only works on HTTP without TLS.


Sure, I've heard of the Xfinity MitMs which IIRC tracked users in some way. But would that realistically cause any "professional detriment" as expressed by the parent comment? Most users wouldn't even notice it's happening.

Basically, I see it this way:

- You can be MitMed broadly, like the Xfinity case, but the company in question can't really do anything crazy like inject viruses or do something that would cause the user to actually notice because then their ass is going to be on the line when it's exposed that Comcast installed viruses on millions of computers or stole everyone's data.

- Or you can be MitMed specifically, which will cause professional detriment, but would require someone to specifically target you and your users. And I don't see this as that likely for the average Joe.

Really, what I would like to know is: How realistic is it that I, as a site owner, will be adversely affected by the MitM that could theoretically happen to my users on HTTP?


As less and less content is served over HTTP, it becomes more and more realistic for an attacker to simply inject their garbage into every unencrypted connection that has a browser user agent in it.

Consider the websites you view every day.. most of them are probably HTTPS by now.

It's the wild west, basically. Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?


> As less and less content is served over HTTP, it becomes more and more realistic for an attacker to simply inject their garbage into every unencrypted connection that has a browser user agent in it.

I've already covered the general case above. Anyone in a position to intercept HTTP communications like that (into every unencrypted connection) is in a position where if they intercept and do enough to materially harm me or my users through their act, then they will likely be discovered and the world will turn against them. They have far more to lose than to gain by doing something actively malicious that can be perceived by the user. So I don't realistically see it happening.

> Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?

I already said I use HTTPS, so your advice isn't really warranted. I also specifically asked how likely it is, so you can't just "regardless" it away. I get that there's a theoretical risk, and I've already addressed it. But as a thought experiment, it is helpful to know how realistic the threat actually is. So far, I haven't really been convinced it actually is anything other than a theoretical attack vector.


You are making it sound like "injecting random garbage into HTTP" is some new hotness. It has been done since forever. By the way, email still works that way. But Google and a couple of other corporations would not like you to trample their email-harvesting business, so there is disproportionately less FUD and fear-mongering being spread around email connections.

Internet providers have been injecting ads into websites for years. Hackers and governments have been doing the same to executables and other forms of unprotected payload.

Hashes, cryptographic signatures, executable signing, Content-Security-Policy, sub-resource integrity: numerous specifications have been created to address the integrity of the web. There is no indication that those specifications have failed (and in fact, they remain useful even after widespread adoption of HTTPS).

For the most part, the integrity of modern web communication is already controlled even in the absence of SSL. The only missing piece is somehow verifying the integrity of the initial HTML page.


A lot of ISPs, some as huge as the "XfinityWifi" SSID, routinely inject their own JavaScript into HTTP pages. Some don't even take care to namespace their JavaScript and wreak havoc on your window globals, too.


This could be solved without HTTPS. People choose not to for ideological reasons.


How would you solve it without HTTPS?


By signing it.

"Injection" is the process of inserting content into the payload of a transport stream somewhere along its network path other than the origin. To prevent injection, you simply need to verify the contents of the payload are the same as they were at the origin. There are many ways to do this.

One method is a checksum. Simply provide a checksum of the payload in the header of the message. The browser would verify the checksum before rendering the page. However, if you can modify the payload, you could also modify this header.

The next method is to use a cryptographic signature. By signing the checksum, you can use a public key to verify the checksum was created by the origin. However, if the first transfer of the public key is not secure, an attacker can replace it with their own public key, making it impossible to tell if this is the origin's content.

One way to solve this is with PKI. If a client maintains a list of trusted certificate authorities, it can verify signed messages in a way that an attacker cannot circumvent by injection. Now we can verify not only that the payload has not changed, but also who signed it (which key, or certificate).

Note that this does not require a secure transport tunnel. Your payload is in the clear, and thus can be easily cached and proxied by any intermediary, but they can not change your data. So why don't we do this?

Simple: the people who have the most influence over these technologies do not want plaintext data on the network, even if its authenticity and integrity are assured. They value privacy over all else, to the point of detriment to users and organizations who would otherwise benefit from such capability.
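
For what it's worth, a minimal sketch of that last (PKI-style) scheme using openssl, with placeholder file names and assuming the client already trusts pubkey.pem via some out-of-band channel:

    # at the origin: sign the document's SHA-256 digest with the origin's private key
    openssl dgst -sha256 -sign origin.key -out page.html.sig page.html
    # at the client: verify the payload against the trusted public key before rendering it
    openssl dgst -sha256 -verify pubkey.pem -signature page.html.sig page.html

The transport never needs to be encrypted for this check to hold; the open question, as discussed above, is how the client gets pubkey.pem securely in the first place.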


And what happens when the content changes? Cacheability is not always a good thing. Your solution is vulnerable to replay attacks. You could be seeing an outdated version of a resource without knowing it. This is only acceptable for truly static content, which is becoming increasingly rare on the web.


This content should not change, or change very rarely. A bulk of the data on the web is media files and static resources. Until browsers started locking down 3rd party requests, handling these over HTTP was standard. Obviously it was a security problem, but it wouldn't have been with this alternate method.

However, it's not that hard to avoid replay after the cache expires. HTTP sends the Date of the response along with Cache-Control instructions. If the headers are also signed, they can be verified by a client. If the client sees that the response has clearly expired, it can discard the document. As a dirtier hack, it can also retry with a new unique query string, or provide an HTTP header with a token which must be returned in the response.
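
A rough sketch of that freshness check, assuming GNU date and that the Date and Cache-Control headers were covered by the signature verified earlier (the file name is a placeholder):

    # headers.txt holds the signed response headers; discard the document if it is stale
    date_hdr=$(grep -i '^Date:' headers.txt | cut -d' ' -f2-)
    max_age=$(grep -io 'max-age=[0-9]*' headers.txt | cut -d= -f2)
    age=$(( $(date -u +%s) - $(date -ud "$date_hdr" +%s) ))
    [ "$age" -le "${max_age:-0}" ] || { echo "response expired; discarding" >&2; exit 1; }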


Sounds like you just reinvented HTTPS with a null encryption cipher. I don't see how this makes anything easier or better.


I would love it if null-encryption ciphers actually worked in real life, but they don't (for the same reason plaintext HTTP/2 does not: everyone disabled them under political pressure).

By the way, signing is not the same as "null encryption". Signing can be done in advance, once. Signed data can be served via sendfile(). It does not incur CPU overhead on each request. Signing does not require communicating with untrusted parties using vulnerable SSL libraries (which can compromise your entire server).

As we speak, your SSL connection may be being tampered with. Someone may be using a Heartbleed-like vulnerability in the server or your browser (or both). You won't know about this, because you aren't personally auditing the binary data that goes in and out of the wire… Humorously enough, one needs to actively MITM and record connections in order to audit them. Plaintext data is easier to audit and reason about.


And how do you sign these requests? How do you get browsers to trust the signature? Oh, well, we already have a similar solution that also protects the entire connection from spying... it's called HTTPS.


It's like one apt package and one cron job away. I think some ACME clients even do the cron handling for you. So, like one command. There is a really great ACME client written in bash which is incredibly painless to set up.

Literally in the time you've spent thinking about and composing your reply you could have implemented free, secure TLS for your users.


It's not that easy if you don't want to run a public HTTP server. I had to write an ACME client myself because I didn't find a single one simple enough. I spent weeks doing that, compared to the 5 minutes it took to issue a 3-year certificate from WoSign when it was a thing. I hate that Google destroyed every free SSL certificate issuer and pushed its own child to further dominate the world.


>wosign

Are you name-dropping WoSign just to be obtuse? They were untrusted because they were untrustworthy, not because Google just doesn't like them. https://www.schrauger.com/the-story-of-how-wosign-gave-me-an...


I don't trust any US company, so it's not any more untrustworthy for me than DigiCert, for example. I'm dropping its name because they were offering free 3-year certificates and it was the best TLS experience I've ever had.


There are a lot of countries I don't trust to keep sensitive data in. But my point is that WoSign was provably untrustworthy, rather than a matter of speculation about government interference as with other CAs. I saw from your GitHub that you live in Kazakhstan; I would remind you that its government is less than trustworthy as well [0] in regards to digital privacy.

[0]: http://www.slate.com/blogs/future_tense/2015/12/14/kazakhsta...


I doubt that any government is inherently more trustworthy than any other.

It just coincidentally happens that the US controls 100% of root CAs and Kazakhstan (most likely) controls none. So the latter needs more audacious measures, while the former can just issue a gag order to Symantec (or whoever is currently active in the market).

The CA system is inherently vulnerable to government intervention. There is no point in considering defense against state agents in the HTTPS threat model. It is busted by default.


Maybe not 100%. Bermuda has a root CA: QuoVadis Global.


https://github.com/Neilpang/acme.sh does exactly what you want.


What is the point in trusting third parties if you need to keep trusting them after they have shown themselves to be untrustworthy? The entire world depends on the trust chain for SSL; keeping that chain trustworthy is very important.

Marking non-HTTPS sites as non-secure is a result of the network having proven itself to be unreliable. This includes both the Snowden revelations and the cases of ISPs trying to snoop.

Besides, HTTPS isn't hard to get. Worst case, you install nginx, Apache, or the like as a reverse proxy and add in TLS. Things got even simpler when Let's Encrypt came along. Anyone can get a trusted cert these days.


> in my threat model

It isn't your threat model that is important here. It is the users' threat models. Maybe you have full control of that too (the simplest case where that would be true is if you are your only user), but most sites don't.


The nasty language in reply to your comments is righteous anger. You are advocating to hurt people; the proper response by well-adjusted people to such advocacy is anger.

You will see the same sort of anger at e.g. parents who refuse to get their kids vaccinated (they're my kids, they say; Big Pharma can't make decisions for me, if you want to get your kids vaccinated, that's fine but there's a cost-benefit analysis, I just don't want it forced down my throat). It would be incorrect to conclude that the angry people are the wrong people.


I hear you. Moving to SSL for millions of old websites is a pain in the ass. It's a degree of effort that people often skim over.

Speaking as someone who's maintained a lightweight presence on the Web for over 20 years, I've thought about the tradeoff and I think it is worth it. Our collective original thinking about protocols skipped security and we've been suffering ever since. I was sitting in the NOC at a major ISP when Canter and Siegel spammed Usenet. Ow. Insecure email has cost the world insane amounts of money in the form of spam. Etc., etc., etc.

You and I probably disagree on the cost/benefit analysis here, which is OK. It'd be helpful in discussion if advocates on both sides refrain from assuming zealotry on the other side.


Yeah, I'm not opposed to HTTPS. In fact, the reason I get frustrated is because, like you, I've dealt with it at scale for years. I agree it should be used most places, but what about static documentation sites? What about blogs? I've even used Let's Encrypt a few times, and it seems like a great service. But who wants to set up that machinery for a simple resume site?

That machinery has a cost. With every barrier we throw up on the web, it makes it harder to build a reliable site. I also realize this is an argument I've lost. It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.

Oh well.


> It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.

This touches on the real point of all this, which doesn't seem to have been contained in any replies to you.

There's no real choice in the matter: HTTPS is a requirement if, and that's the very big if right there, we truly acknowledge that the network is hostile. With a hostile network the only option is to distrust all non-secure communication.

https isn't about securing the site as you know, it's about securing the transmission of data over the transport layer, and it's needed because the network is hostile.

It doesn't matter one little iota what the data is that's traversing it, as there's no way to determine its importance ahead of time. A resume site might not be of much worth to the creator, but the ecosystem as a whole ends up having to distrust it without a secure transport layer because the hostile network could have altered it.

It doesn't matter that the effect of that alteration might be inconsequential, as there's also no way to determine that effect ahead of time. The ecosystem's 'defense' is to distrust it entirely.

And that's the situation the browsers/users/all of us are left with. There is no option but to distrust non-secured communication if the network is hostile.


Yeah, it is an argument you've lost, because it's a bad argument.

Even places like dreamhost give you a letsencrypt cert for free on any domain.

There is no case to be made for not securing your site, on principle or based on what's already happening out in the world, with shady providers injecting code into non-secure HTTP connections.

You see it as "a simple resume site," and I see it as a conduit for malicious providers to inject malicious code. Good on the browser folks for pushing back on you.


Yup, the Dreamhost model, and the model at generic cPanel sites (sadly some places with cPanel disable this to drive revenue to their commercial CA partner), is the Right Thing here - one of the options when setting up or modifying your web site is "Free automatic certificates", and then it's the host's job to make sure that stays working, just like if you pick "Use latest PHP" or "Strip leading www. from hostname". The guy with a blog about carpentry shouldn't need to care about the ACME protocol any more than he cares about how erbium-doped optical amplifiers work when calling his grandmother halfway around the world. It's just technology.


My favorite part of the internet has always been the small hobbyist websites. The guy who has an encyclopedic database of Grateful Dead trivia, the other guy who collects pictures of plants. Those people are independent, they're not technical, and their 90s-looking websites are going to go under because of blanket security policies that don't concern them.


You do realize you're making this complaint on a discussion about a tool that makes HTTPS easier for said small hobbyist websites? I've updated all of my hobby sites using Let's Encrypt, and I really appreciate how it was easy for me while also being good for my users.


My comment isn't against Let's Encrypt. It's against blacklisting text-only sites that don't need HTTPS.


If a "text-only" website is on HTTP it can be MITM'ed and used to serve up malicious JS.


Nobody is blacklisting them. The visitors are just being informed of the risks.

The warning used to be the absence of a pad-lock, but who notices that?


If not SSL, then they'd go away at the point some other technical change dropped. Or do you suggest "we" continue using broken protocols forever in order to preserve them? Do you still support telnet to accommodate people who can't handle `ssh-keygen`?

In any case, (a small subset of) the random enthusiast sites and such are close to the only reason I use a browser recreationally anymore. I absolutely agree with you.

The answer isn't to stop fixing things. The answer is to make it easier and cheaper to be secure.

Kinda like what LE is doing, no?


My point is that those sites don't need to be any more secure than they are. A hobbyist website written in HTML in Notepad, with only text and images, that can be run on IE 5.0 might not require HTTPS, and yet Google and others are changing that.


I don't get the notion that some sites don't "need" HTTPS. The threat model it protects against isn't only sensitive information being intercepted, it's also man-in-the-middle attacks that actually change what's delivered. Maybe a hobbyist website only has text and images sitting on its server, but the visitor might receive malware — and that can happen to literally any site served over HTTP.


> I don't get the notion that some sites don't "need" HTTPS.

Your failure to grasp this is fairly evident from the rest of your comment.


Plaintext HTTP being fine for delivering public documents might have been true 10 or 20 years ago. Sadly, attacks on, and uninvited mutation/corruption of, plaintext content have become so common (at least in some parts of the world) that you can be almost certain one or more of your users will be affected by it if you're not taking precautions.

It sucks badly. I'd prefer a less hostile network myself. Even back then there were bad actors but at least you could somewhat count on well-meaning network operators and ISPs. Nowadays it's ISPs themselves that forge DNS replies and willfully corrupt your plaintext traffic to inject garbage ads and tracking crap into it. And whole nation states that do the same but for censoring instead of ad delivery.


Yeah and what do those hobbyists do? They go to a blogging service provider or something like a wiki provider and they put their stuff. That stuff still happens today. And of course those users wouldn't want someone else coming along and tampering with their collection, so https everywhere is a must. And these users won't even know or care.


Now we have wordpress, medium, etc. It's never been easier to have a personal blog over HTTPS.


>Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security

Can you explain why you think Symantec demonstrating incompetence is completely isolated from your Symantec SSL protected website?

I sense a lot of hostility coming from you. It seems like you think we do these things for fun. Do you imagine a bunch of grumpy men get together, drink beer, and pick a new SSL provider to harass and bully?


[flagged]


> You can't possibly understand...

Oh, I get it. I've worked with lots of people like you.

You're lazy.

As an infosec practitioner, I'm the one that cleans up after the people who claim good current infosec practices are "too hard" or "impractical" or "not cost-effective", which all boil down to sysadmins and developers like you creating negative externalities for people like me. I have heard all of these arguments before. "Oh, we can't risk patching our servers because something might break." "Oh, the millisecond overhead of TLS connection setup is too long and might drive users away." "Oh, this public-facing service doesn't do anything important, so it's no big deal if it gets hacked."

That's irresponsible.

I'm not at all sorry that the wider IT community has raised the standards for good (not best, just good) current infosec practices. If you're going to put stuff out there, for God's sake maintain it, especially if it's public-facing. If using the right HTTPS config is that difficult for you, move your stuff behind CloudFront or Cloudflare or something and let them deal with it. If you can't be bothered with some minimal standard of care, you need to exit the IT market.

And good luck finding a job in any industry, in any market, where anyone will think that doing less than the minimal standard, or never improving those minimums, is OK.


> If you can't be bothered with some minimal standard of care, you need to exit the IT market.

My goodness, you just nailed it.

The IT job market is so tight that complete incompetence is still rewarded. Incompetence and negligence that would get you fired immediately or even prosecuted in many if not most other professions.

If restaurant employees treated food safety the way most developers treat code safety, anyone who dined out would run about a 5-10% chance of a hospital visit per trip.

I was just arguing with a “senior developer” who left a wide open SQL injection in an app. “But it will only ever be behind the firewall, it’s not worth fixing.”

That’s like a chef saying “I know it’s old fish but we’ll only serve it to people with strong stomachs, I promise”.


I wrote that in anger, and almost right away removed it when I calmed down. Please see my current comment.


It's rather bad form to do so without noting what you edited in the comment itself, especially as your parent poster replied to it.


But why did it make you so angry? My guess is because my viewpoint is completely unfathomable to you. You can't even believe that someone would advocate for it. In situations like that, I always try and put myself in the shoes of that person. Sometimes they are wrong, and sometimes they have a point. But it's always a useful exercise.

To your parent comment -

No, I don't think it's a cabal of "grumpy old men" - I think it's a cabal of morally righteous security-minded people who have never worked for small companies, or realized that most dev teams don't have the time to deal with all this forced entropy.

You care about security, I care about making valuable software. Security can be a roadblock to releasing valuable software on time and within budget. If my software doesn't transmit sensitive data, I surely do not want to pay the SSL tax if I'm on a deadline and it's cutting into my margins.


What the gently caress does encrypting an HTTP connection have to do with morals or age? You are way outside the realm of making sense, man, and offer commentary that is openly harmful to securing the Internet. Please step back and revisit your woefully misinformed opinion on this.

Most people who advocate for security, including myself, have worked on small teams and understand the resources involved. Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes, in a lot of cases. You spent more time downloading, installing, and configuring Apache, then configuring whatever backend you want to run, and writing your product or blog post or whatever it is you’re complaining about securing.

Honestly, in the time you’ve been commenting here, you could have gotten TLS working on several sites. Managing TLS for an operations person is like knowing git for a software developer. It’s a basic skill and is not difficult. If it’s truly that difficult for your team, (a) God help you when someone hacks you, they probably already have and (b) there are services available that will front you with a TLS certificate in even less time than it takes to install one. Cloudflare and done.

> Security can be a roadblock to releasing valuable software on time and within budget.

Great, you've pinpointed it. Step two is washing it off. Ignoring security directly impacts value, and I'm mystified that you don't see this.

But I guess I'm a zealot ¯\_(ツ)_/¯


> Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes

If you have one server, yes. Otherwise it's the other way around, because if you have multiple servers you need to do a lot of fancy stuff. And LE also does not work on your internal network if you do not have some stuff publicly accessible. And it also does not work against different ports.

Oh, and it's extremely hard to have a TLS <-> TLS proxy server that talks to TLS backends, which is useful behind NAT if you only have one IP but multiple services behind multiple domains.

IPv6 fixes a lot of these issues.


You can use Let's Encrypt certificates for non-publicly-reachable hosts by using the dns-01 challenge type. That, of course, means that you need some way of properly automating your DNS infrastructure to add the necessary TXT records, which, admittedly, is sadly not the case in many organizations. It's a solvable problem, though.

I don't understand your last point. Where do you see the problem with letting a reverse proxy talk to a TLS backend? You get the requested server name from the SNI extension and can use that to multiplex multiple names onto a single IP address. The big bunch of NATty failure cases apply to plaintext HTTP just as well, no?


Well, the last point means that I need to roll out the cert to multiple servers (as the poster below writes).


In the most common setups, the reverse proxy usually terminates the TLS session and uses a different connection to make requests to the backend servers (e.g. nginx proxy_pass directive).

This means the backend server certificates are only ever exposed to your reverse proxy. There's no need to use publicly-trusted certificates for that. Just generate your own ones and make them known to the proxy (either by private CA cert or by explicitly trusting the public keys).
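
A hedged sketch of that setup (the proxy_ssl_* directives are nginx's stock upstream-TLS settings; names and paths are placeholders):

    # self-signed certificate for an internal backend; only the proxy ever needs to trust it
    openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
        -subj "/CN=backend1.internal" \
        -keyout backend1.key -out backend1.crt
    # then, on the nginx reverse proxy, trust exactly this certificate for upstream TLS:
    #   proxy_ssl_trusted_certificate /etc/nginx/backend1.crt;
    #   proxy_ssl_verify on;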


This new version issues wildcard certificates. Get one certificate. Use Puppet, Chef, Ansible, Salt, Bolt, multissh, or GNU parallel to put it on multiple servers for that domain.

If you need lots of different domains, use one of the auto certificate tools.

If you can't use one of those yourself, consider hosting on a platform that can automatically do this for you for all your sites, like cPanel (disclaimer: I work for cPanel, Inc).

If your stuff is never publicly accessible because you're in a fully private network, just run your own CA and add it to the trust root of your clients.

If you need an SNI proxy, search for 'sniproxy' which does exist.

If you're so small that you can't afford an infrastructure person, a consultant, or a few hours to set such things up yourself, then maybe you should shorten the HN thread bemoaning doing it and use the time to learn how.
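
For the simplest of those options (plain copying), a hedged sketch with placeholder host names and paths:

    # push one wildcard cert and key to each web host, then reload the web server there
    for h in web1.example.com web2.example.com; do
        scp wildcard.crt wildcard.key "root@$h:/etc/ssl/example.com/"
        ssh "root@$h" 'systemctl reload nginx'
    done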


> offer commentary that is openly harmful to securing the Internet

Funny you mention this.

With this new functionality, I can register valid certs for any domain in the world if their DNS is insecure, or if I can spoof it.

Have we made any headway yet on that whole "anyone can hijack BGP from a mom-and-pop ISP" thing?

How many CAs are still trusted by browsers, again? How many of those run in countries run by dictators?

HTTPS doesn't secure the Internet. It's security theater for e-commerce.


> I think it's a cabal of morally righteous security-minded people who have never worked for small companies or realize that most dev teams don't have the time to deal with all this forced entropy.

This is just one anecdote, but I worked at a company small enough that I was the only developer/ops person. Time spent managing HTTPS infrastructure couldn't have been more than a handful of hours a year.

What is so painful to you about running your website(s) on HTTPS?


Honestly, going from never having used nginx to having an auto-renewing "A+" HTTPS site took no longer than 3 hours.


Would you be open to having a phone call, or some other more direct way of discussing this?

It may be easier to be more empathetic.


> marked ominously as "insecure"

It's not that ominous. It's not even red!

I think it's pretty obvious to most users that "Insecure" doesn't matter as much on some random blog, but does matter a lot on something that looks like a bank or a store.


That has to be balanced against the potential pain for users who will be accessing that software whilst vulnerable to having that information snooped or modified. Perhaps for social engineering purposes, perhaps to serve up the latest zero-day, perhaps just for the lulz... who knows?

SSL has a history of being a pain in the ass. There are a lot of pain in the ass implementations out there. Everyone gets that.

At the same time, it's never been easier, and basic care for what you're serving your users demands taking that extra step. What Google is doing amounts to disclosing something that's an absolute fact. Plain HTTP is insecure (in the most objective and unarguable way possible), and it is unsuitable for most traffic given the hostile nature of the modern web.

Do you want your users being intercepted, socially engineered, or served malware? If the answer is no, secure it. The equation is that simple. Any person or group of people who in 2018 declines to secure their traffic is answering that question in the affirmative and should be treated accordingly!

That's not "zealotry" friend, that's infosec 101.


Your closing argument is essentially "if you're not with us, you're against us," which sounds like quite the zealot's argument to me.


Only because having your stuff SSL'ed (not snoopable) is a binary state. And while you might have business reasons for not doing it, putting those above your user's safety is just plain negligent. In the same way that storing plaintext passwords and sending them around via email, or using SMS as a two factor authentication method is negligent.

So in a way, you're right. I'm not sure why that's a negative.


> just want their software to work

Your software does not work if it is not secure. Security is a correctness problem.


If a given piece of software can't handle TLS, it's a fundamental problem of the software / development process and not the fault of the infosec community. Update or change the libraries used and everything will be fine. I switched a whole distributed system from plain communication to TLS-secured connections just yesterday.

Yes, sometimes it's a pain to solve TLS-based errors, and I also miss the ability to debug each transmitted packet with tcpdump, but I appreciate that the continuous focus on TLS improves the tooling and libraries, and each day it gets a little bit easier to set up a secure encrypted connection.


>developers who just want their software to work.

Do they keep their servers up to date? Why is it so much easier to do that than getting an SSL cert four times a year?

I hope they update their servers more often than that.


You're getting strident comments and downvotes primarily because of the unnecessarily harsh and condescending tone of your post.


Did you file a bug report on the Mozilla site about forcing HTTPS?


One of the wonderful aspects of this, which no one's pointed out yet, is that these can be used for INTERNAL domains, without you having to run your own internal CA.

I.e., let's say your internal network's DNS domain is 'my-company-lan.com' - all you have to do is ensure that 'my-company-lan.com' is also registered in public DNS [1], and then you can secure ALL your internal services using a free LE wildcard cert that's automatically trusted by all platforms and browsers [2]. For some companies that's going to be a BIG cost and resource saving.

--

[1] but not actually used for any public facing services.

[2] Mostly...


Going to reply to my own comment here.

It's at this point that I swear profusely at Microsoft yet again, for pushing the concept of '.local' domain suffixes a decade ago. As it's not a legal TLD, I can't get certs for any of my internal services without rolling my own internal CA, which only works automatically for Windows domain machines, and not for anything else.


The ".local" suffix was a terrible idea, to be sure. Active Directory domain rename in small environments is relatively painless.


Unless of course, you are running Exchange. In which case it's not supported :(


Unfortunately, yes. I've been lucky enough to be able to get domain renames done in Exchange 2003 environments (which is supported) or in non-Exchange environments. Migrating to a new domain because of a poorly-chosen name is a real pain. (I have one Customer who has a "." in their NetBIOS domain name. That creates some interesting kinds of hell-- completely breaks the NPS RADIUS server in Windows 2012.)


I agree that it's terrible, but the reason they used to recommend .local goes back to their Small Business Server in the 1990s, when it was very expensive and bureaucratic to register a domain - not something they could demand of their target market. MS's error was their failure to update their recommendations after domain registration became cheap and easy.


IIRC Microsoft does now recommend using a real domain with a real TLD nowadays.


Can you create a CNAME on your internal DNS so server1.company.local = server1.company.com?

Found here: https://community.spiceworks.com/how_to/139715-letsencrypt-w...


And also conflicts with mdns. :(


Just remember that the cert will be logged (Certificate Transparency) so any names there will be disclosed to the public. Wildcards help a little here though.


You could do this before too, without wildcards.


You could, but the wildcard cert makes it much easier...

"One cert to rule them all, and in the darkness 'bind' them."


Can you outline the approach how this would work? It was my understanding that in order to use Let's Encrypt you needed a public facing server to verify ownership.


For the standard LE certs, you need a public-facing web server for the domain name in question, and LE gives you a key file to put into:

'/.well-known/acme-challenge/'

For the wildcard certs, you just need to add a TXT record to the public DNS entry, no public web server required.

Even if you have no intention of using your internal DNS domain name on the internet, it's good practice to register it anyway.
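
Once the TXT record is in place, a quick hedged sanity check (using the domain from the example above) before letting the client complete validation:

    dig +short TXT _acme-challenge.my-company-lan.com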


Is there a "standard" TLD for internal use that will also fit this requirement?

The problem here is that there's no such thing as domain ownership, only domain renting. You forget to pay your bill (read: someone loses an email) and a core part of your infrastructure is up in smoke, or worse, taken over by a squatter.


Of course not. If there was a domain reserved for internal use and everyone could get a cert for it, everyone would be able to impersonate your internal hosts.

I don't think there's a way around coming up with a reliable process for renewing your domain. You somehow manage to do it for lots of other things already.


Some years ago, at least one of the popular CAs used to issue certs for RFC1918 IP addresses. Fun times.


It makes no sense to have publicly trusted certificates for names that have no defined legitimate meaning - what is being certified? Nothing. Accordingly no public CA is permitted to issue such certs.


You can use a DNS challenge for v1 "regular" certs - there's no requirement for a web server in order to use Let's Encrypt.

See eg point 4: 0https://github.com/Neilpang/acme.sh/wiki/How-to-issue-a-cert


There are multiple authorisation mechanisms. The one you are referring to is HTTP, but you could also use DNS (you add a pre-agreed string as a TXT record). Wildcards require DNS validation, whereas domain-specific certificates can use either.


Instead of fetching the secret via a direct HTTP call, the secret is fetched from the DNS server (eg. _acme-challenge.example.com.) - where the DNS server is usually separate from the server getting the cert. This can be done with ACMEv1 for certs, and now is required for the new wildcard certs.

Most clients that support DNS-01 can use nsupdate or APIs of public DNS providers to make this an automated process.
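
As a minimal sketch of the nsupdate route, assuming a zone on a BIND server that accepts TSIG-signed dynamic updates (server name, key path and token are placeholders):

  nsupdate -k /etc/acme/tsig.key <<'EOF'
  server ns1.example.com
  zone example.com
  update delete _acme-challenge.example.com. TXT
  update add _acme-challenge.example.com. 120 TXT "TOKEN_FROM_ACME_CLIENT"
  send
  EOF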


You just need public facing DNS.


For anyone wondering how to actually obtain a wildcard cert this way, here's the quick version:

1. Use acme.sh: https://github.com/Neilpang/acme.sh

  acme.sh --issue -d '*.example.com' --dns
If your DNS provider has a supported API, you may be able to automatically publish the required DNS records using a slightly different command - see here: https://github.com/Neilpang/acme.sh/tree/master/dnsapi
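
For instance, with Cloudflare-hosted DNS it looks something like this (acme.sh's dns_cf hook reads the credentials from environment variables; the values are placeholders, and other providers use their own variables per the docs linked above):

  # hypothetical credentials picked up by acme.sh's dns_cf hook
  export CF_Email="you@example.com"
  export CF_Key="your-cloudflare-global-api-key"

  acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'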


Thanks! This looks awesome. Can I automate it as well?

I have been toying a little with wildcards using certbot on my Ubuntu OpenVPN appliance, but have been a bit unsuccessful so far.

Maybe I should just try to build a very tiny virtual server that does nothing but spit out a wildcard domain certificate to some predefined destinations, to have it used by anything that wants a certificate. Could be beneficial to a (large) infrastructure to have an always-ready certificate to use for free. Dunno if EV validation would hold up though.


For providers with DNS API support, you can put it in a cron job, and then add a symlink or a copy step at the end of the cron to put the private key and full chain in the appropriate location for your web server.

I think acme.sh is the easiest to use of all the clients.
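
A rough sketch of that setup with acme.sh (domain, paths and reload command are placeholders; acme.sh normally installs a daily cron entry like the first line for you):

  # daily cron entry - acme.sh only renews certs that are close to expiry
  0 3 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null

  # tell acme.sh where to copy the key/fullchain and how to reload the web server
  acme.sh --install-cert -d example.com \
    --key-file       /etc/nginx/ssl/example.com.key \
    --fullchain-file /etc/nginx/ssl/example.com.fullchain.pem \
    --reloadcmd      "systemctl reload nginx"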


Thanks again. Everything works.

I've moved my DNS to Cloudflare, and after that acme.sh was incredibly easy to implement thanks to their API implementation.

Also learned a valuable lesson: *.provider.com is not the same as provider.com :)


acme.sh is great because it supports a manual DNS mode. It's also way easier to use compared with other similar clients. This is all it takes for me:

./acme.sh --issue -d noty.im -d '*.noty.im' --dns

It then told me to add a TXT record, which I just do manually because I use Rackspace Cloud DNS, which has no built-in support.

I manually verify the DNS record with dig, and when it's ready I just do:

./acme.sh --renew -d noty.im -d '*.noty.im'

then the cert (private key and full chain) is stored in ~/.acme/noty.im/

The private key and full chain can be used directly with nginx without any modification.
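
For reference, pointing nginx at those files should just be two directives in the server block (paths assume the layout described above; adjust to wherever your acme.sh home actually is):

  ssl_certificate     /home/user/.acme/noty.im/fullchain.cer;
  ssl_certificate_key /home/user/.acme/noty.im/noty.im.key;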



My suggested amount: recurring donation of $19.84 per month.


The amount of money I've paid for this... I reckon some of these providers are going under soon?


> I reckon some of these providers are going under soon?

I really hope so.

The cost to providers is exactly the same for a wildcard and a standard certificate, and yet wildcards cost hundreds of dollars. It's unbelievable it's lasted this long.


It's not that unbelievable. What software service are you aware of that is sold at only its marginal cost?


It's not "software" in the historic sense, like buying Photoshop or a paying for hosted Slack service. It's literally a command to generate a certificate from their root CA chain.

Yes there's obviously business costs, and they have to employ people to do verification, etc (which they often do a terrible job at), but I think you see what the parent is getting at..


Why hasn't competition reduced the price?


If it was up to me, even right now.

This is great news!


I don't know. Will Letsencrypt also replace EV certificates?


Their FAQ says no, because the process for issuing EVs can't be automated, which, given the requirements for Extended Validation, makes some sense I guess.


I'll happily pay money to get a cert that expires in 3 years instead of 90 days. Some of us don't feel like faffing about with cert renewal every quarter. (I know there are tools and clients that can "make it seamless" - until the ACME endpoints are down or something).


Really long expiration certs are a security issue. The main reason is that if the cert is compromised, there is a much longer window in which it can be exploited. With a 90-day window, even if it is compromised, it will stop working soon.

Even in the case that it is compromised and you know it, your only option is certificate revocation. And you are in big trouble if you are relying on revocation, because most clients do not keep very up to date with the CRL.

The 90 days is not only for security but also to encourage automation. Most clients like certbot check every day, and if the cert is within 30 days of expiry, they attempt to renew it. If Let's Encrypt is down, they will try again the next day. So you have an entire month before an outage would affect you.
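
For example, certbot's renewal check can just sit in cron; "certbot renew" is effectively a no-op unless a cert is inside the renewal window (the timing below is arbitrary):

  # run twice a day; only certs near expiry are actually renewed
  0 */12 * * * certbot renew --quiet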


>I'll happily pay money to get a cert that expires in 3 years instead of 90 days.

No way. Every time I've worked with an organization on a three-year expiry, it's guaranteed that, after three years, they have no idea how to even renew the cert. Three years is in many cases longer than the hiring cycle, and for larger organizations renewal can be a complete nightmare. No one wants to invest time in automation, training, tracking, etc., because it's so far down the road. The 90-day model makes much more sense because it requires automation. As for the ACME endpoints being down: I'm not going to say that won't happen, but renewal starts 30 days before the cert expires, and if Let's Encrypt's ACME endpoints are down for 30 days or longer, there's a good chance we are all dealing with something far more dire than cert renewal at that point.


I've been running a modified copy of the dehydrated client (https://github.com/lukas2511/dehydrated) for, I dunno, a long time now. Since not long after letsencrypt became available.

I have my own domain name servers, so it wasn't hard to wire up DNS-01 support.

Anyway, the client has been running daily out of a cron job, updating certs on remote servers as they need to be, with very little intervention from me, for well over a year now. It's just about a set-it-and-forget-it setup.

Let's Encrypt is intended to be fully automated and you shouldn't have to faff about with it every quarter, it should do its thing all by itself.

...most of the time.


If you are following the recommended practices, it's every 2 months, and ACME would have to be down for a solid month. I think that's fairly unlikely


Well then you are two weeks late. The maximum lifetime for a certificate is now 825 days, most commercial CAs are selling only 1 or 2 year certificates, with the extra days used to allow early renewals to "carry over" a few weeks.


I'm in the same boat. I haven't found a guide for an easy and flawless way to automate cert renewal with letsencrypt when you use multiple services over different servers. For my wildcard, I use the same cert for:

1. Ubuntu VPS #1: (a) dovecot SSL, (b) postfix SSL, (c) apache SSL for multiple virtual domains, (d) pureftpd SSL

2. Ubuntu VPS #2: (a) apache SSL for multiple virtual domains

3. Microsoft Server: (a) IIS SSL for multiple virtual domains


Why does it have to be the same cert on every host? Use a separate cert for each and automation will be much easier.

With Let's Encrypt, you don't need to minimize the number of certs just to save some money.


I'm just saying how I'm running things now. Totally open to better ways. Right now I pay $135 for a two-year wildcard cert (very small business here). It takes 1 hour of my time to update the cert for all these applications. 1 hour of time and $135 every two years is not a lot. When I take a cursory look at how to reliably automate Let's Encrypt across all these applications, there are people who have created scripts that help, but that doesn't give me reassurance that everything will run smoothly every 90 days. I am waiting for Let's Encrypt to get first-class support in dovecot, postfix, pureftpd, and IIS, so it can be set-and-forget and I know long-term support will be there.


Well, you can happily use other CAs if you want to 1. pay money and 2. manage certs manually, as you always have.


"Why we need to do more to reduce certificate lifetimes”: https://news.ycombinator.com/item?id=16582714


DNS providers and domain name registration companies are probably going to get pestered about API access for updating TXT DNS records now... :)


I never understood why DNS providers are so reluctant to offer standards-based access, like nsupdate(1). It's easy to set up, it can do everything, it's secure, requires no custom anything and it just works.


One option is to run your own BIND instance configured however you like, and pay for one or more secondary DNS services to sync off it. You can even hide your own BIND instance from everyone outside your network and just point your NS records at the secondaries, if you’re worried about misconfiguration/DoS attacks/etc.
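
A sketch of the hidden-primary side in named.conf, assuming two paid secondaries at example IPs (your public NS records would then point only at the secondaries):

  zone "example.com" {
      type master;
      file "/etc/bind/zones/example.com.db";
      also-notify { 203.0.113.10; 203.0.113.11; };    // push NOTIFYs to the secondaries
      allow-transfer { 203.0.113.10; 203.0.113.11; }; // let only them AXFR the zone
  };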


A perfectly viable option that is called 'shadow mastering'. dns.he.net lets you do it for free.


That sounds interesting. Would you know of any secondary DNS service headquartered in Europe? I always wanted to host DNS myself but since I lack a secondary DNS...


Unfortunately don't know any EU-based services, but all the big services have their actual servers available in most locations.


Only problem is when your main DNS is down: Let's Encrypt won't check your secondary, because they use Google DNS.


Take a look at https://github.com/AnalogJ/lexicon. It's a python library that provides standardized, programmatic access to DNS entries for a bunch of major providers.


I started using Cloudflare just for their DNS API - the dynDNS providers baked into my router's firmware went under so I started pointing the DNS record to my home dynamic IP with a cronjob that called CF's API.


It's this exact situation why I decided to write a tool that integrates with the CF API [0].

[0]: https://github.com/wyattjoh/cloudflare-ddns


You can also use our Terraform provider to manage DNS: https://github.com/terraform-providers/terraform-provider-cl....

We've got a number of open PRs as well to add other resources, e.g., load balancing, rate limiting, zone settings, etc. HashiCorp is currently reviewing/merging.


The good news is that most of the major providers already have integrations into clients like lego: https://github.com/xenolf/lego/tree/master/providers/dns


Use Terraform to manage records. They have support for lots of DNS providers (AWS Route53, Google Cloud DNS, Cloudflare, DigitalOcean, Azure DNS, DYN, DNSMadeEasy, NS1, UltraDNS, PowerDNS).


I switched to Terraform + CloudFlare for managing my DNS entries and I absolutely love it. No more messing around with web pages, change a line in a file and you're done. Fantastic.

Warning: I have made services inaccessible by deploying before making sure the git repo I was working from was the latest version. That's the downside of stateless deployments!


Did you do a deploy before a plan? ;)

We've all been there!


No I did the plan and then didn't look at it at all and did the deploy :P


Moving a domain between providers is quite disruptive.


Store your DNS records under revision control, and updating your records can be as simple as a "git commit && git push".

https://dns-api.com/


Shameless plug: https://github.com/StackExchange/dnscontrol is a provider-independent way to manage your zones with a single DSL-style file in source control.


That's pretty expensive, esp for side projects. I'm using a certbot extension for CloudFlare. Completely free.


I used to have a sliding scale of prices, based on volume, but my customers fall into two camps:

* Those with 1 or 2.

* Those with 10-40.

I suspect lowering the price(s) on a volume-scale would allow me to find customers with 40+ domains, but at the same time I'm happy where I am and seem to have a reasonable niche.


Is it common for DNS hosts to provide delegated access at the granularity of individual records?

I don't want my webserver to have the ability to change my entire zonefile just so it can authorise certificates!


Not sure if it will work for your use case, but you can also CNAME the _acme-challenge record to a different domain (or a subdomain with a separate zonefile), dedicated only to authorizing certificates.
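
Roughly, the record would look like this (names are hypothetical); the ACME client then only needs update rights in the acme-auth zone, not in your main zone:

  ; in the main zone: delegate only the challenge name elsewhere
  _acme-challenge.www.example.com.  300  IN  CNAME  www.acme-auth.example.net.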


If you're doing DNS-based auth you don't need to renew the cert on the web server at all.

You can generate them on a secure host (or container) which pushes the certs to the machines which needs them.


In OVH you can restrict token access to individual resources (in this case one record) at token creation time.


Now here's hoping that Heroku supports this soon. That would mean I can at last migrate a number of apps that require wildcard domains to their platform.


I'm intrigued. What kind of app that you could host on heroku requires wildcard certificates? Bearing in mind that heroku can't really support wildcard subdomains for a single app. Each custom subdomain for an app needs to be added to the app. And then if you enable Automated Certificate Management for the app (which uses LetsEncrypt under the covers), they'll happily fetch a cert for each listed subdomain.

And Heroku already supports wildcard certs (that you need to provide yourself) if you use the SSL addon.


> Bearing in mind that heroku can't really support wildcard subdomains for a single app.

Why not?


Well I suppose they could, but they'd have to be very careful of someone else spinning up an app and adding your domain to it.


I have a feeling they've been working on this. Their https support has always been top notch!


This is great news! Let's Encrypt has helped me secure many of my own boxes without having to maintain my own CA, very happy to see them grow.


Can anyone list any negatives of Let's Encrypt? I've been using it since the start and just can't find any practical downsides.


The only significant concern I have is that if LE were to essentially "take over" the CA industry, you know, due to being free, and awesome, we'd have a massive single point of failure for the entire Internet's security model.

My biggest peeve with the whole "HTTPS Everywhere" push is not the general notion of using encryption, but that the encryption is annoyingly coupled with the CA system, which is terrible for many reasons.


The encryption part is easy -- you don't need CAs for that -- but they're a necessary evil when it comes to verifying ownership. You need to delegate trust to someone, otherwise using the internet becomes too cumbersome.


Automated SSL providers effectively undermine the idea of "verifying ownership" or "delegating trust", because, for example, someone can buy a domain like... googIe.com, get an SSL cert for it, and it's "valid". We're right back to the same level of security as just checking that the browser bar points at the domain you actually intended to go to. (In this example, bear in mind, Google doesn't use an EV cert, so they'd look equally valid to a web browser. And a lot of EV certs, I believe, are getting distrusted soon as it is.)

CAs seem like a system that really doesn't work today; we've seen multiple times that many of these CAs aren't worth delegating trust to in the first place, and it adds an unnecessary cost and burden on just... encrypting traffic.


> We're right back to the same level of security of you just checking that the browser bar points at the domain you actually intended to go to.

So you’re sitting in a cafe, and you go to Facebook.com. Lo and behold, someone’s installed a MITM proxy on the router, that presents its own encryption key instead of Facebook’s, and your browser has no way to tell this because the CA system isn’t a thing. They now have your password, can steal your session to spam your friends, whatever else. How do you prevent that?

Automated domain validated certificates are meant to ensure that when you go to Facebook.com, you’re talking to Facebook.com and not a MITMing router on the way there. They’re not meant to protect against phishing - they’re meant to protect against the very real cases I’ve seen where my mobile ISP adds random JavaScript into the web pages I view, and sells information about me based on my use of the web.


Idea that's been floated before: TOFU plus a distributed network of people automatically sharing what cert fingerprints they encounter. Chances are high that you already hit Facebook on your $device, and if you all of a sudden retrieved a certificate that didn't match the one you had before, or that most other people online hadn't seen, halt and throw up the warnings.

Given the exploitability, laziness, general failure to follow best practices, not to mention misaligned incentives that we're seeing from major CA vendors, having centralized CAs seems like an ever-worsening solution.


Where do you store the trust from all those people to be able to query the statistics? That's just another central point of failure.


It's not as if distributed hash stores are new...


That didn't answer anything. How can you trust the result if anyone can write there? How can you trust the individual store not to manipulate its contents, etc.?


And how would rollover work?


It would wind up being visible to a large chunk of users simultaneously. Furthermore, since we're relying on the wisdom of the crowd rather than a true CA, you'd be able to trust companies' own CAs rather than delegating off to a not-so-trusted third party.

In other words, if someone claiming to be Facebook has told a significant number of people all over the world that Facebook's cert fingerprint is ABCD124, and that fingerprint matches what they're getting presented, it's probably legitimate. We can add additional points for the cert signer being the same one as the previous cert, lack of listing in a CRL, cert transparency logs, etc.

There's no reason this system couldn't bolt on top of the existing CA infrastructure to avoid a bootstrapping problem either.

It adds a probability value into the mix, in other words. That value has always existed, but now we expose it to the user in some way and stop pretending that it does not.


This is what HTTP Public Key Pinning is for; the hash of the public key of the cert tells browsers to not trust a cert for the same domain with a different public key: https://news.ycombinator.com/item?id=16582534


Technically, automatically validated certificates only guarantee that you are on the website that Let's Encrypt thinks corresponds to facebook.com. A state-wide MITM could tamper with that.


How so?


Presumably, someone could MITM a CA, and get their own domain validated certificate to another site. The cert may protect you from MITM in a coffee shop, but it doesn't necessarily help you against state-level actors.


>The cert may protect you from MITM in a coffee shop, but it doesn't necessarily help you against state-level actors.

I can use HPKP to pin the cert I get from Lets Encrypt; a cert issued for my domain some other way won’t be trusted due to the hash of its public key being different from the one I pinned.

From https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key...:

The Public Key Pinning Extension for HTML5 (HPKP) is a security feature that tells a web client to associate a specific cryptographic public key with a certain web server to decrease the risk of MITM attacks with forged certificates.

HPKP makes administration more complicated but if your threat model includes state-level actors, it prevents them from getting a CA to issue a valid certificate for your domain.

Certificate Authority Authorization (CAA) has been mandatory for CAs since September 2017; it uses DNS to specify which CAs are allowed to issue certificates for your domain: https://blog.qualys.com/ssllabs/2017/03/13/caa-mandated-by-c....
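
Concretely, the two mechanisms look roughly like this (hashes and names are placeholders; HPKP requires at least one backup pin):

  # HPKP response header sent by your server
  Public-Key-Pins: pin-sha256="<primary-key-hash>"; pin-sha256="<backup-key-hash>"; max-age=5184000; includeSubDomains

  # CAA record limiting issuance for the domain to Let's Encrypt
  example.com.  IN  CAA  0 issue "letsencrypt.org"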


It's worth noting that Chrome has plans to deprecate header-based pins in a few months and static pins (the ones baked into binaries) at some point after their Certificate Transparency policy covers all non-expired certificates. That'll make Firefox the only mainstream browser with HPKP support. (Mozilla hasn't announced their intentions so far.)


It’s currently standard for CAs to host multiple verifiers in multiple jurisdictions, to reduce the chances of this happening, afaik.


Let's Encrypt is developing this feature but it might be a little premature to call it "standard"—it's not specified in the Baseline Requirements and I'm not sure whether there's any CA that has announced it as a part of all certificate issuance.


Most CAs aren't automated :) I believe any that do ensure that DNS requests are tried from multiple different locations to prevent this happening. Though you're right, the standards haven't caught up yet.


> someone can buy a domain like... googIe.com, get an SSL cert for it, and it's "valid"

Are you sure that all "old school" CAs wouldn't issue a cert for that?

They were never supposed to fight phishing. Domain Validation certificates literally validate… domains, and nothing more.

It would make more sense to prevent googIe.com from existing at the .com registry level, before any TLS is involved.


I wasn't referring to EV certificates, just to verifying simple ownership of the domain for the purposes of MITM and other attacks of that kind. Let's Encrypt would inform you that the page that appears when you visit googIe.com was indeed served by the owner of that domain (barring server compromises or cert leaks, but that's a separate issue). LE and "basic" certificates do not attempt to answer the question of who owns the domain -- that's also an entirely separate problem.


It's possibly a good target for decentralization + multisig. Decentralization so a CA never "goes down"; multisig so that a certificate needs N signers, and thus if a private key gets hacked the cert isn't compromised. The hard part seems to be verifying ownership and integrating with the existing web (the oracle problem).


Does LE have a secure and resilient infrastructure? Like, do they have multiple sites they can run all operations from in the event of a natural disaster, for example? How about in the event of a government deciding to take it over as part of their national infrastructure? Sounds crazy, but we're putting a lot of eggs in their basket.


If you renew your LE certs a month before expiry you still have a month to find an alternative solution should let's encrypt blow up.


>I have is that if LE were to essentially "take over" the CA industry, you know, due to being free, and awesome, we'd have a massive single point of failure for the entire Internet's security model.

Single point of failure as in getting hacked and misissuing certificates?


That's one scenario. Or maybe they run out of funding and need to shut down. Maybe they end up needing to shut down an old API before everyone is ready. Maybe they have a bug and issue a bunch of subtly broken certs (say, not enough entropy).

It's a concern whenever a large portion of decentralized infrastructure has a single centralized dependency. Even if that dependency is awesome and doing great work right now.

Ideally, there would be several free CAs that all used the ACME protocol. But somebody's got to pay for that and somebody's got to go through the effort of setting it up when Let's Encrypt already works really well.


The one that always sticks out is the certs’ extremely short expiration period. The IMHO weak rationale for this was mentioned in another thread here (See jjeaff‘s response upthread).

It would be nice if they simply offered two choices:

1. I love automation! Give me a 90 day certificate.

2. I understand the security trade-offs. Give me a 3 year certificate.


But issuing 3-year certificates would disqualify them as a CA: https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-...


Can you elaborate as to why that would disqualify them? I don't think most of us are intimately familiar with the Baseline Requirements, or want to wade through 60-some pages to figure out your reasoning.


Three years is much too long. Last year Google's Ryan Sleevi basically said this needs to be much shorter: it takes far too long to fix anything properly with such long-lived certs. Ryan pointed out that if they couldn't get traction by agreement, then Chrome could just be modified to count certs as expiring after 90 days and that's that. Unsurprisingly, CAs did not go "OK, we'll do what Ryan suggests, 90 days it is", but they also didn't try to stick with the status quo of 39 months and call Ryan's bluff. The compromise that got enough votes was 825 days for all certs issued after 1 March 2018.

For future reference: the BRs have a section with a timeline; it's great for finding upcoming or recent changes significant enough that the CAs needed a deadline.


So a bunch of centrally controlled monopolies agreed to realign their offerings to maximize profit and gain greater control over end-users.

They also pretend that compromising a 3-month certificate is "ok" (or at least less harmful than compromising a year-long certificate), when in practice there is no reason to assume so - 3 months is more than enough for any real-life eavesdropper.


No?

Firstly, CA/B explicitly can't talk about pricing or product offerings, because a group of businesses that collaborate on setting prices or product offerings is called a Cartel and is illegal (the example you're probably thinking of, OPEC, exists because its members are sovereign entities, and thus enjoy total immunity from the law). When they meet in person the CA/B members always begin by reading out the rules that lay out what mustn't be discussed for this reason.

Secondly, the idea is not at all that compromising 3-month certs is "ok". Instead Ryan's focus is on the pace of change. During 2016 CAs agreed to use the Ten Blessed Methods for validation; in 2017 that agreement became a concrete rule (thanks to Mozilla), but a 39-month certificate issued under the prior validation status quo would still be trusted until mid-2020.

Historically what has happened is that there's a grace period, and then CAs are supposed to go back and revoke any certificates still outstanding that break the new rules. But this is error-prone: back in early 2017 you can see the list of violations I found while checking that certificates for now-prohibited "internal" names were revoked as required. Each CA had excuses for why they'd missed some, but the overall lesson is that things will be missed. So Ryan doesn't want to rely on grace periods; he wants a shorter window of validity for the certificates.

MD5 and SHA-1 are the go-to examples for this stuff. We already expect that SHA-2 (e.g. SHA-256, used currently in certificates) will fall the same way as the others, because it's the same construction, so we're going to be doing this again in perhaps 5-10 years. But with 39-month certificates the _minimum_ time from changing the rules to getting rid of the problem is 39 months; if it takes a few months to agree what to do, the total may be closer to 4 years. That's a very long time in cryptographic research, too long to predict what's coming. 90 days would be much better from this perspective.


The maximum validity for a cert was recently changed to two years.


"Why we need to do more to reduce certificate lifetimes”: https://news.ycombinator.com/item?id=16582714


The service is great, but they're really the only free SSL cert game in town. As more sites start using their certs, they'll wind up becoming a single point of failure.


They are not the only CA that issues certificates for free. For example, AlwaysOnSSL[0] was on HN a few days ago[1], with some important differences (as pointed out in the HN comments)

[0] https://alwaysonssl.com/

[1] https://news.ycombinator.com/item?id=16566031



It's a very nice feature, but you can't actually get the cert to use on your own servers or devices. You can only use it with AWS services, like their load balancers and Cloudfront. It makes a lot of sense that they do it this way, it makes it very easy to keep secure, since you never get the key. However it doesn't solve the same problems that Let's Encrypt does, and that's ok.


They won’t ever issue EV certs.


Nor S/MIME and code signing certs. They also won't provide auxiliary services like timestamping.


I do hope GitHub employs this for rolling out https for Pages sites using custom domains too.


Not sure if it would help in your situation but I've moved all of my github pages to netlify.com and they have a one button https feature for custom domains.


I did the same. Between Netlify and Zeit.co's Now, I don't see any reason to complain about HTTPS, not to mention the devOps issues that both these services solve.

SSL requires one click with Netlify, and it's on by default with Now.


Btw, the only problem I have with Netlify is how close the name is to Netflix. Chrome's autocomplete is completely confused.


Why do they need to support wildcard certificates for this? They have already started rolling out HTTPS for custom-domain GitHub Pages using Let's Encrypt - check your settings for an Enforce HTTPS option. All my GitHub Pages have it now.


That's great. I just checked and it isn't available/enabled here yet. I'm wondering if GitHub doesn't enable their own SSL if a user is providing that through a service like Cloudflare... perhaps I should disable the latter and see if that makes a difference.


Not Let's Encrypt, but you can run behind cloudflare's free plan for https.


Great news, but interesting to see that they still recommend securing individual domain names. I imagine this is for security purposes?


Yes. Wildcard certificates are useful primarily as an alternative to manually managing many certificates. But in the age of automation (now), LE wildcard certificates are only really useful to stay under rate limits, currently 20 certificates per registered domain per week.

Key compromise for a single site is much less disruptive than losing control of a key that protects hundreds or thousands of sites. Generally you want to keep your scope smaller; it's safer than blanket-verifying everything. Wildcards also make it more difficult for you to see which of your names are going through CT logs.

Caddy will support wildcard certificates, but most users will not need them, because Caddy can already obtain certificates "on demand" - dynamically, during the TLS handshake. Again, the main reason for using wildcards at this point would be to reduce pressure against LE rate limits.


A particularly desired case is Sandstorm.io, which randomly generates a subdomain every time you open a document.


Yeah, there are some edge cases where a wildcard is less secure. https://security.stackexchange.com/questions/8210/what-vulne...


It's not even about edge cases - it's just good security practice to isolate credentials as much as possible and limit their scope.


I imagine so, too. If you have N machines each serving a different site, better to have each only have a key valid for its site so there's less impact from one of them being compromised.

btw, in that scenario, even if the sites all share an IP address, you can use a TCP-level proxy that inspects the TLS SNI exchange to determine where to send the connection, so the proxy doesn't need any of the keys and the encryption is end-to-end.
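
A sketch of that kind of SNI-routing proxy using nginx's stream module with ssl_preread (backend names and addresses are hypothetical); the TLS handshake passes through untouched, so the private keys stay on the backends:

  stream {
      map $ssl_preread_server_name $backend {
          site-a.example.com  10.0.0.11:443;
          site-b.example.com  10.0.0.12:443;
          default             10.0.0.11:443;
      }
      server {
          listen 443;
          ssl_preread on;        # peek at the SNI without terminating TLS
          proxy_pass $backend;
      }
  }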


Yeah, I think that if someone hacked your DNS provider, they could add secure-payments.yourbusiness.com and start spamming people with "late payment! enter your credit card!" notices or something.

So I guess, make sure you trust your DNS provider if you're using wildcards. Or is there another exploit I'm missing?


They would need to both hack your DNS entries and have access to the private key of the pair for which the certificate was signed. Having access to the private key probably indicates a significant hole in the site's infrastructure so that is more of a concern than DNS.

Of course such access may be easier for a disgruntled internal actor so it is a risk worth considering (and mitigating via proper separation of concerns/access).


Not sure how the availability of wildcard certs changes that scenario. If I can set the DNS record for secure-payments.yourbusiness.com, then I can get a non-wildcard cert for it and get on with the spamming straight away.


I think it's somewhat difficult to get a valid (CA-valid) certificate for a domain you don't own, though. At least, that's what the job of the CAs is: to verify that the certs they're issuing are for the actual owner of yourbusiness.com.


I thought that was the case, until CloudFlare issued a cert for a subdomain of mine without a single email round-trip or even notification.

Any DNS-based validation is contingent on full DNS control, and that does mean FULL. CNAME records are absolute: if I CNAME foo to xyz then I'm trusting xyz 100%. I won't get an email round-trip or CAA ping for the certificate unless I'm looking for it, because CNAME implies that everything that applies to xyz applies to anything pointed at it. So the CAA record for xyz applies, not the CAA record for foo - it's not even valid to have any other record types for the same name as a CNAME record, and CAA resolution stops if it gets a valid response rather than walking up to the domain root.

To be clear: CloudFlare issued a perfectly valid certificate for a perfectly valid use case, it just bothers me that I couldn't tell it was issued until after-the-fact by seeing it in CT logs, and couldn't have prevented it from being issued by the mechanisms that seem to be built for that.


That sounds like the description of an EV or OV certificate, where the CA takes additional verification steps.

LE is all about DV certs -- you just need to control the web server at secure-payments.yourbusiness.com, and with DNS control you can aim secure-payments.yourbusiness.com anywhere


Nope, DV certs just verify that you control the domain (i.e. you can place arbitrary content in a specific location). You don't need to own the domain; otherwise SSL would be a lot harder for mysite.hostingcompany.com-type providers.

