Wildcard certs have been a common request, but you can already specify multiple domains using SAN. And, since you can issue a new LE cert on-demand, it's actually not necessary for a lot of use cases that would have required a wildcard in the past. In the bad old days, getting a new certificate (even just to change details like adding a name to it) was time-consuming and often cost money. Most of the time when our users have needed a wildcard, it was just because they wanted to save a little money by having all of their subdomains on one cert; and not so much that they really needed to be able to spring up dozens/hundreds/thousands of new subdomains that could just automatically be secured. If you have a fixed number of names that need SSL with the same cert, you can already do that today with Let's Encrypt.
Nonetheless, this is great. I'm just super impressed by how effective the LE folks have been at improving the state of security on the web, and for free! They really deserve every kind word and every donation dollar that comes their way.
I know it's not the first request, but an easy way from the GUI to force all requests to be SSL would be great. The only way I see in GUI is a "redirect everything", but... that also redirects requests for ".well-known" which means renewal requests don't work (have been hit by this a few times). Some recommended ways of handling "always use SSL" with LE renewals would help.
Thanks for VM work!
What issue did you run into with this? Let's Encrypt follows redirects both to HTTP and HTTPS and accepts practically any certificate for redirect targets when validating via http-01, including self-signed, expired and mismatching certificates (which isn't a problem since the initial request is plaintext anyway).
It can be surprisingly tricky to get redirects right. Web apps often take over early request processing in an htaccess file for things like nice URLs and special assets directories, and there can be surprising interactions between those rules and redirect rules. But I suspect we could make it work in the majority of cases pretty easily. I'll add it to my todo list.
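One common nginx pattern for this (a sketch; paths and domain names are illustrative, and Apache has an equivalent using mod_rewrite) is to exempt the ACME challenge path before the catch-all redirect:

```nginx
# Redirect-to-HTTPS vhost that still serves ACME http-01 challenge
# files over plain HTTP. Paths and domain are illustrative.
server {
    listen 80;
    server_name example.com www.example.com;

    # Serve certbot's webroot challenge files without redirecting.
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Redirect everything else to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```

(As noted elsewhere in the thread, Let's Encrypt's http-01 validation follows redirects anyway, so a blanket redirect often works too; the exemption just removes any dependency on the HTTPS vhost being healthy during renewal.)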
I suspect if you set up some defaults, most Virtualmin users would adapt around those.
(and thanks again)
This move by Letsencrypt should hopefully make them the standard for any external service that doesn't require an EV cert.
I'm kind of worried about this myself.
No matter how well intentioned, secure, or "good" lets encrypt is, having a significant portion of the world's TLS be under one umbrella isn't a good thing.
I'm hoping that we will begin to see other services pop up that are similar to lets encrypt (free, even using the ACME protocol) so that we don't have too many of our eggs in one basket here.
There are several reasons that it could be good to have multiple publicly-trusted CAs out there, though most of them don't depend at all on the relative market share of the CAs at a given point in time.
Security risk: If a particular CA is compromised through an attack and its intermediate certificates need to be revoked, it would be good to have other CAs ready to continue issuance to the public. Even if the CA intends to resume operations after a compromise and user-agents are OK with that, it probably wouldn't be prepared to resume operations immediately.
Geographic risk: It would be good for availability to have CA datacenters on multiple continents, not just one.
Jurisdictional risk: The government of the country where a CA operates could try to compel the CA to stop issuing particular certificates, for example in order to enforce international sanctions, or to facilitate espionage by trying to make it harder for people to get authenticated encrypted connections to certain services. A government could even force a CA in its territory to cease operating entirely. (There's also jurisdictional risk in the other direction, of governments trying to compel misissuance, but this risk is strictly increased by having trusted CAs in more jurisdictions. In general, having more CAs increases the risk of misissuance, while decreasing the risk of certificates being unavailable to a particular site because people don't like that site or its operators for some reason.)
Continuity risk: It would be safer to have CAs with more varied sources of funding for their operating expenses.
Institutional/governance risk: A particular CA might some day decide to do things that relying parties find improper. Having more CA alternatives can give the relying parties more plausible leverage to get the CA to align its practices with their preferences. (As with the jurisdictional risk point, only a decision not to issue certain certificates can be directly addressed in the short term by having other alternatives. A decision to issue certificates that other people think shouldn't have been issued can probably only be addressed this way by removing trust from a particular issuer.)
Looking over this thread, I do want to emphasize again that misissuance risk gets worse, not better, when there are more CAs. If you're particularly afraid that CAs will be issuing certs improperly because they get attacked or coerced or do a bad job of validation or internal controls, you should probably want fewer CAs rather than more, at least as a response to that particular concern. This is because CAs in the X.509 PKI can't "contradict" another CA's issuance; every assertion about a binding between an identifier and a public key is cumulative and operates in parallel and in addition to every other assertion.
There were a couple of cases where CAs like Trustwave or CNNIC signed intermediate certificates that were capable of issuing publicly-trusted certificates for organizations who lacked the required audits. They were typically intended for corporate/internal MitM proxies, though there was no technical enforcement in place for this, and they could've been used for any MitM attack. The recent investigations into Symantec's CA showed similar, but slightly more complex cases.
There were many other incidents involving problems with behavior of PKI participants, and I'm sure reading this chapter will give people a sense that the ability to remove trust from intermediate CAs is an important ability.
Go pull up your certificate authority list and ask yourself for each one of them if you trust that company more or less than Let's Encrypt.
Let's Encrypt publishes auditable logs of all issued certificates, they're backed by some of the biggest names in online privacy, and I trust them much more than other CAs.
I for one would be happy if I could delete all other providers from my browser.
CAs ultimately are centralized and too trusting, giving that same level of trust to less trustworthy companies just damages the overall security of TLS. There's no distributed trust model for CAs, it's pretty much all or nothing, so in the case of CAs, distribution is not a security benefit like it is in say Tor or Bitcoin, but a problem as it means the attack surface has widened.
Short of going to a new, completely decentralized solution like the proposed DNSSEC extensions or Namecoin, a single, very secure CA is probably better than a lot of not secure, often government influenced CAs.
Why is that? The damage from a CA being hacked is not proportional to the size of the CA - they are all equally (small number of exceptions notwithstanding) capable of issuing certificates for any domain which will be trusted by all major browsers.
Is there another aspect I'm not considering? While I see how it feels like a troubling thing, I'm struggling to actually come up with any real consequences of it.
Also, what grandparent said: "The public", in this case, is people who would donate, of which 100% know what a CA is.
If we're quibbling about "the public" then the GP comments only make sense if "the public" means "people who aren't IT professionals", in which case I'd warrant that there are far fewer donors than 30k who aren't IT professionals, indeed it's got to be ~0.
Can't see donor details on the LE pages though? Mind you, at approximately 300k certs issued daily on average (https://letsencrypt.org/stats/) I concede I could easily be orders of magnitude out in my guesswork.
I don't know of anything concrete, but I can imagine an attack that exploits the verification process on their servers to have them sign domains they shouldn't, or DDoS attacks on them to prevent people from renewing their certificates. The bigger they are, the juicier a target they are for these kinds of things. If they were the provider of 50% of the internet's TLS certificates, you could take down half the internet by continually DDoSing a single company!
Hell I can already imagine someone sending a bunch of signing requests spoofed as someone else, locking that person out of renewing due to rate limiting.
Not to mention that even the country they operate in can be a big deal.
Bugs in the cert verification process are the same amount of risk regardless of whether everyone is using the CA or nobody is, as long as the CA is trusted. There's nothing gained by putting your eggs in multiple baskets.
Also, these all seem like hypotheticals when the old-school CAs have had OCSP downtime, bugs in the cert verification process, incompetent staff signing and publicly logging google.com certs to test their infrastructure, governments asking and receiving unconstrained intermediates, unconstrained intermediates as a publicly advertised product, etc.
Assume for instance that the country of Hackeristan manages to have one of its authorities accepted in major web browsers. This authority is only meant to sign Hackeristan domains and only signs a tiny amount of certificates.
Now let's imagine that this authority is compromised; maybe the Hackeristan government wants to intercept connections to gmail, maybe the authority is vulnerable to hackers. One way or another, it signs a bogus *.google.com certificate. Well, it's game over: since the authority is trusted by all major browsers, everybody's vulnerable, even though it was a tiny CA. Only certificate pinning can save you now.
If LE was found to be incompetent and lost control of their private key, browsers would be much less willing to remove them as trusted if they were a significant portion of the web.
And the impact of things like DDoSing LE to take their OCSP servers down still grows with their size.
To clarify, I love LE and I use them almost exclusively. But I'd feel better if there were others trying to follow in their footsteps.
It's a weakness of the current authority architecture really, trusting a CA is an all or nothing decision. If any of the authorities is compromised you're vulnerable until you remove the CA from your browser, regardless of the number of legit certificates it issued.
Wildcard support is one advantage of ACME v2, but another advantage they list is "ACME v2 was designed with additional input from other CAs besides Let’s Encrypt, so it should be easier for other CAs to use" - https://letsencrypt.org/2017/06/14/acme-v2-api.html
So, in addition to this functional improvement to Lets Encrypt, the change should enable more automated CA options in the future.
However, it's still better than having expired certs and a team trying to figure out who owns the app, trying to get in touch with them, asking them to update the cert, finding out that they are no longer with the company...
It's even worse when the service is something that a small team created as a POC - which then became customer facing and mission critical, with the team having moved on to something else.
And it's funny how often this happens over a holiday weekend.
Yes, I know that the issues are deeper and more to do with large company process and bureaucracy than anything technical. But at least you can have secure services that don't fall over.
I would use them if they turned this on and offered the service at a low cost.
Yes, they could branch beyond that, and create a special category of certificates to issue, that has a cost, and that gives you access to the private key, but that isn't really distinguishable from any other certificate provider out there. In fact, Let's Encrypt offers that for free? Why would Amazon decide to compete with a paid product against a free one, when there's no benefit to the consumer to warrant paying?
They compete with free Cloudflare caching today. And free DNS services. And much cheaper VPS services. There are various reasons customers might choose low cost over free.
For me, since I use ACM already, for AWS hosted resources, I would appreciate the advantage of using it for other resources. Even ones on AWS, like a cert on a Lightsail instance, for example.
I say artificial because all that's missing really is a link to download the cert.
High profile sites can buy multiple top level certificates (with mutual signing, say); sites needing less security can fallback on a simplified consensus system (maybe like above).
They do, however, allow *.<service>.company.com.
I work for a large Fortune 150, one that you've heard of, and we have a security team that is constantly scanning our network for weaknesses and potential exploit vectors. They will kill (firewall off) any sites that might compromise the network and tell the application owner to fix the issue before they allow it back on the public net.
> I work at a large Fortune 150 [...] we have [an] internal CA
(This was a lot clearer when there weren't so many other comments in between.)
I worked in a few places that had a *.company.com which covered, obviously, everything under that domain.
That meant if that wildcard cert leaked then our EV cert for, say, checkout.company.com would be essentially compromised too.
Not to mention. If you have a wildcard cert it's rather likely you're passing those certs around servers, lots of scope for leakage.
I really think that if you feel the need to do wildcard certificates, then you should at least try to figure out another way around it. I'm not saying you absolutely must never use them, but be incredibly mindful of what is at stake and limit the scope and availability of such certs as much as possible.
For instance: don't put the same wildcard on mail servers and IM servers and git servers and so on; a compromise of one will compromise them all, and the revocation system is not good enough.
Everyone keeps saying SaaS is the reason for the use of wildcard certs, and I would absolutely argue the point that multi-tenancy's weakest aspect is the fact that if you get compromised the scale can be broad. Why intentionally weaken that system? LE can handle thousands of domain creations a minute, and they've been very forthcoming with lifting limits for people on domain creation.
The downside is that your servers need a little overhead for vhost creation, but that could be automated with less than a day of ops work.
I'll stand by the assertion that vhosts are probably still better off with a wildcard cert if it's the difference between a single server using a single cert vs a single server holding thousands of certs. In a node compromise it's the same either way. If different servers are serving different subdomains then sure, subdomain certs are the better way to go.
Every time someone adds a domain to Tumblr, they'd have to re-do the certbot challenges for all 50 MILLION domains.
Plus, all 50 million domains are listed IN the certificate. It'd be megabytes worth of additional data for every visitor to Tumblr.
*.tumblr.com makes a lot more sense.
Generate a cert for 100 of your client's domains, use that cert across those domains. Cut your 50m domains down to 500,000 certificates. Serving the right certificate for the right domain is a simple enough task.
As new tumblr domains are registered, generate more certs in batches of 100 domains.
I doubt anyone would ever seriously suggest putting millions of SANs on a single certificate, but 100 isn't too farfetched.
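The batching arithmetic above is simple; as a sketch (the 100-SAN-per-certificate limit matches Let's Encrypt's documented limit, while the domain names and helper are illustrative):

```python
# Sketch: splitting a large customer-domain list into multi-SAN
# certificates of at most 100 names each (Let's Encrypt's per-cert
# SAN limit). Domain names here are illustrative.

def batch_domains(domains, batch_size=100):
    """Yield successive groups of at most batch_size names,
    one group per certificate to be requested."""
    for i in range(0, len(domains), batch_size):
        yield domains[i:i + batch_size]

domains = [f"customer{n}.example.com" for n in range(250)]
batches = list(batch_domains(domains))
# 250 names -> 3 certificates: 100 + 100 + 50
```

At 100 names per cert, 50 million domains would indeed come out to 500,000 certificates, as the parent comment says.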
My situation is a bit different: hosting a bunch of subs on the same servers.
With one wildcard I have one server conf with one cert and use the hostname to rewrite each request to the correct directory.
If I did a cert for each sub, the nginx conf would need thousands of server config blocks, each with its own cert. I haven't tested, so maybe nginx would handle this just fine, but it is easier to just go with a single wildcard and not worry about it.
As far as I can tell there is no security advantage to having multiple certs instead of one wildcard since I would have all the certs on the same server anyway but if anyone knows of any I would be happy to hear.
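The setup described above (one wildcard cert, one server block, hostname rewritten to a directory) can be sketched in nginx roughly like this; the paths, base domain, and capture name are illustrative:

```nginx
# One server block with a wildcard cert; the named capture $sub
# maps each subdomain onto its own document root.
# Certificate paths and the base domain are illustrative.
server {
    listen 443 ssl;
    server_name ~^(?<sub>[^.]+)\.example\.com$;

    ssl_certificate     /etc/ssl/certs/wildcard.example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/wildcard.example.com.key;

    root /var/www/sites/$sub;
}
```

Note the `[^.]+` in the regex: a wildcard cert only covers one label, so nested subdomains wouldn't match it anyway.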
Is having separate certs really that big of a deal? Operational overhead with LE is next to none and if you're scared of hitting limits you can contact LE to have the limits increased.
There's certainly a place for wildcard certificates, but I definitely agree that they should be used sparingly.
I've often (always?) fallen into the trap of "Let's just use a wildcard. Setting these things up is a pain in the ass!"
The fact that validation for wildcard certificates is limited to the DNS challenge will hopefully ensure that most users will continue to use non-wildcards, as the other challenges are significantly easier to automate.
But now, with the full LE offer of services, cost is no longer an issue, and all you mention should be easily automated via scripts.
In our prod we use https://github.com/GUI/lua-resty-auto-ssl to generate over 1k lets encrypt certs that refresh as they expire.
Hope these can help
I'm not sure how up to date https://github.com/hlandau/acme#comparison-list-of-client-im... is.
0 9 * * 0 certbot renew --renew-hook "service nginx restart"
I'm sure there are cases where restarting nginx willy-nilly won't fly, but for non-mission critical it's wonderfully simple.
Certbot is also great for the initial setup. I just add the non-SSL entry, run `certbot --nginx` and follow the simple prompts.
I've been using this in production for more than a year now, and if you google around a bit, most guides for automating renewal on nginx will use that command.
How exactly do the other people need to be involved - there is no purchasing/responding to email for proof/etc required.
As for large organizations and the 90 limit, I find dealing with it at work isn't a big deal. We have so many they have to be automated anyway. Even if we only had a few, the process is much easier/faster than it used to be to have someone in the company buy a cert and figure out what files to get to us. Now we can just take care of it, no credit card required. An easy cert every 90 days or so or one that is much more work once a year? Let's Encrypt has my vote and people who want to make excuses will never run out of them.
The entire call (which we ended up pulling and listening to, and then sent back to COMODO as a prime example of why we're through with their business) was designed to have a non-technical decision maker make an impulse decision over the phone to buy thousands of dollars worth of certificates again.
Wildcard certs from Let's Encrypt cannot come soon enough.
We're on LE for 90%. There's a client (there always is...) that demands Network Solutions certs. Yet they cannot put into words why that's their need, other than stupid bullyish business practices.
We're still trying to wrap our heads around how LE plans to offer wildcards... But I digress.
The real security hole was that the operators were patching through salesmen directly to the security staff without verifying who they were...
Are you billing them for the extra cert, and for the extra work that using a non-standard CA implies?
It's a pain, but we have only 14 machines we oversee, with 3-year certs on each. Nagios takes care of alerts within 60 days, so we can easily get the request in on time.
The reason these are free has nothing to do with payments; it's about removing an additional barrier to getting SSL working.
For instance, we are supporters of Vim, but we couldn't make a direct donation to the project. Our corporate policies make things a little stiff, as any donation like this is seen as potential publicity, so we couldn't move forward on that. However, we do often buy things for corporate events from Amazon, so we could use the affiliate link to buy our stuff and still contribute to the project.
There was another instance, I can't remember what the project is as I didn't deal with it directly, where you could donate directly or buy 'swag' (like T-shirts, cups etc). I remember one of the teams that wanted to contribute funds managed to expense it as swag for their department, so everyone got hats, t-shirts, pens, coffee cups etc., because again, they can't directly donate.
And of course, sponsorship is usually out of the question, because they don't want to be known as supporting one specific thing or another.
Sometimes it's just easier to make a 'sale' than it is to get a donation from huge users of your product.
An example: I run a vm that exposes mysubdomain.azure.com, can I turn on ssl at that level? A google search says "no" but I figure this is a place where someone might have a workaround.
Something you have to keep in mind is rate limits. Unless the (parent) domain owner has registered the domain in question as a public suffix, you, together with all other users who have subdomains under the parent domain, will be limited to 20 certificates per week.
Some domains, like for example the hostnames EC2 instances get that resolve to their public IP, have also been explicitly blacklisted because they are generally not assigned to anyone for longer periods of time, and it would be easy to mint certificates for a large number of those hostnames by just spawning tons of EC2 instances, which would make those certificates largely useless.
Finally, domain owners may decide to prevent issuance using CAA DNS records, which are supported by Let's Encrypt.
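For example, a zone fragment like this (illustrative) restricts issuance for a domain to Let's Encrypt, and the `issuewild ";"` record forbids wildcard issuance by any CA:

```
example.com.  IN  CAA 0 issue "letsencrypt.org"
example.com.  IN  CAA 0 issuewild ";"
```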
It looks as though Azure itself also provides a CA, or at least resells one's services, for use with apps hosted on the platform. Depending on your needs, that may be a better alternative, though certainly it will also be more costly. It also appears that the only route that service offers to satisfy the subdomain requirement is a wildcard cert, so there's that.
So you can turn on encryption at that level, and using Let's Encrypt, the private key for your cert would be unique for you. So private keys for azure.com won't be able to decrypt traffic for mydomain.azure.com.
But for SSL etc. to work you would also have to get your VM set up so it "knows its own (new) name" (alias.example.com).
[you could also use an A record with the IP, but I'm guessing guaranteeing sub-domain.azure.com points to the right IP is easier than updating the IP on updates etc. to the VM]
That being said, I know they have a blacklist of certain domains. I've seen it once with amazonaws.com (as in "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"), and it's possible they have similar entries for azure.com, Heroku, etc. They don't publicly release their blacklist.
We decided not to offer HTTP-based (file-based) validation via randomly-generated subdomains for wildcards in part because if you're required to set up random subdomains you're modifying DNS to do that, and if you're already modifying DNS you might as well just use the DNS validation method.
1. Namecheap's API is rubbish - extremely rate limited so after doing 2 or 3 in an hour it basically stopped working
2. Propagation delays - I don't know if this was provider specific for Namecheap and Gandi, but sometimes lego would just hang waiting for LE to confirm propagation.
HTTP challenge works fine and is far easier if punched into the load balancer rather than relying on each back end.
Note none of these problems are the fault of LE from what I see. I'm going to see if any of the ACME clients support updating Google Cloud DNS, which I use now, as their propagation time seems minimal.
Thanks for your work btw in securing the internet.
If you use customer1.mycorp.com, customer2.mycorp.com, etc., the names of your clients are exposed twice:
- if you issue one certificate with all the domains, all the domains are readable in the certificate (Cloudflare free cert. has this issue too)
- all the LE certificates are published in Certificate Transparency logs. So you can detect if anyone issues a cert. for your domain, but anyone can view the certificates you issued.
With a wildcard certificate, the subdomains used are not public.
Note this issue applies to "internal" subdomains too. You probably don't want to expose the hostname of your backoffice (admin.mycorp.com) or your new top secret project (linux.microsoft.org).
It does not prevent people guessing it, though.
There's a hack to prevent this that seeds the zone with false entries, but it requires the server to operate as an online signer. Since this is essentially at odds with the design of the protocol (which makes major cryptographic and usability sacrifices to enable offline signers), there's an "NSEC4" being worked on now.
DNSSEC is silly.
It seems they will evaluate other options, but it's hard to imagine they would use something as convenient as http-01 for wildcards as then it opens up the platform to major abuse.
To be clear, the reason I am asking is that historically a CA was intended to be a way to validate "who" you are talking to. LetsEncrypt is providing a signed cert that does not validate an entity. It just solves the self-signed cert problem, which could also be solved in applications by having a setting to "Accept Self-Signed Certs". Some apps and appliances already have this.
I understand that there are about 17k certs with paypal.com in the name. Are there plans to try to prevent some of that in the future?
Here's a blog post explaining why Let's Encrypt does not think it should be the CA's job to prevent this. At least two browser vendors seem to share this sentiment.
For the usability problem you're hinting at, people mistaking DV certs for EV certs, I believe web browsers should consider demoting the color of the padlock displayed for DV certs from green to plain text color, while still retaining the padlock symbol (plain http would still be red). This solution would provide enough distinction between the two types of certs to the normal end user without retraining them; "look for the green padlock" would still hold.
That said, 17k (even multiples of this) is still a rounding error compared to the total number of certs issued. I believe the public good done here far outweighs the bad.
My point in my previous comment was that browsers should consider exposing the distinction between EV and DV certs to the user in a way that doesn't break their mental model of how browsers indicate the security of websites. How this is implemented is probably better handled by others more knowledgeable in UI design than I.
Teaching people to look for this might be hard, though.
But wait, is the burger place you like the Irish one or the Australian one? The faux German decor and the American accent of their spokespeople on TV give no hint. Turns out - neither, the Top Burgers you love are legally named Upper Deck Barbecue and Burger Company, Inc., and so their EV would need that mouthful on it.
So yeah, EV isn't worthless, but it's probably not going to fix anything much you'd actually care about. If I ran a business with PayPal's money I'd get an EV cert because the price is a rounding error. But for 99.99% that's money they could spend on security or customer service improvements that'd see an actual return.
There shouldn't be any policing at all of which domain names are allowed to have certificates.
However the BRs deliberately don't say what should or should not be on the list. Is Gmail as important as a Russian bank? Probably not if you're Russian!
Also of course CAs are not exactly rushing to reveal everything on their lists, for much the same reason you don't get told every security measure in place at your local bank.
Finally, bad guys will react to any such restriction, if they can't get paypal.example they'll try paypa1.example, not allowed that? How about paypa1-web.example? Even the rules LE have in place today cause problems for somebody a few times per month because their South American trucking business has the same initials as a German bank or whatever.
I ran phishing susceptibility tests for years before LE and would often just expense a $9 certificate for something similar to paypal.com and never had an issue. In fact any time it came up, I got a sales pitch about "this is why you should pay for an OV cert".
LetsEncrypt does this automatically, for free, and in a more user-friendly way. For information on the security considerations involved, see . These are similar considerations to those of most DNS-based CA verification methods (which is most of them).
They do that at a minimum. OV and EV certs require more work and do verify who you are (for some definition of "who") and that's where the more expensive CA's add value.
I understand what the browsers do today and I don't believe this help protect people. It should be very clear what type of transport security is in use, along with a score and what type of identity has been verified and what that means. I honestly don't know the answer to this, but I do know the existing methods in browsers just doesn't give a clear picture to non technical people.
Strangely, our CA called our human resources department to verify a lot of the information. I guess it's all in who you choose.
Also some (maybe you don't think they're major?) don't offer DV. After all they can't be cheapest or fastest so why not focus on a product with higher value.
Here's a concrete scenario: You host linuxbender.com and use a self-signed cert. You set your browser to trust self-signed certs. When you connect your browser to linuxbender.com, I MITM you. I serve you a _different_ self-signed cert, which you then trust. I can read your traffic.
In this scenario, if the site was secured with LetsEncrypt, you wouldn't have to trust self-signed certs, and I wouldn't be able to MITM you.
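The difference between the two trust models shows up directly in Python's `ssl` module (a sketch; the second context is what an "accept self-signed certs" setting amounts to in practice):

```python
import ssl

# Default client context: verifies the chain against trusted CAs
# and checks the hostname, so an attacker's substitute cert is rejected.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname

# A blanket "trust self-signed certs" policy means disabling both
# checks, which accepts *any* certificate, including a MITM's.
trusting = ssl.create_default_context()
trusting.check_hostname = False   # must be disabled before verify_mode
trusting.verify_mode = ssl.CERT_NONE
```

With `CERT_NONE` there is no way to tell the legitimate self-signed cert from the attacker's, which is the scenario described above.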
Key-Pinning goes some way to combating this issue too, but doesn't solve every case.
In practice, I doubt most people that use web browsers actually know the difference. They just see a green lock and assume everything is good to go.
For me accessing your project, third party validation that a consistent entity exists is worth something.
If you pre-share or pin self-signed certs that can be useful but in a different use case.
It is absolutely trivial for even a 5 year old to click a button and perform a downgrade attack on this type of an https:// connection. That is why accepting any self-signed certificates should be treated identically to http:// connections.
That way, Letsencrypt could've just signed a CA cert that was authorised to sign certs for anything under example.com - but not for anything else - and we could bootstrap trust in internal/local CAs just as we now do with certificates.
See: https://www.plex.tv/blog/its-not-easy-being-green-secure-com... and https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...
If your service doesn't provide HTTPS or customers don't have it accessible from the public Internet then you'd need cooperation from them unless you yourselves control the DNS records involved.
[*.a.company.com] and [*.b.company.com]
p.s. How do I get an inline asterisk into my HN comment!?
In seriousness, I pine for the post-`~username`, pre-`/username` days when services would hand out subdomains for user management. It still happens -- viz Tumblr, etc. -- but I feel like it's less frequent than it used to be. One reason I would avoid it is that wildcard certs are pricey. Nice to see that will commoditize a bit come January.
Meanwhile you're at the mercy of third parties, probably volunteers, to make what you want possible.
Wildcard certs, which are generally seen as a security risk, and whose most legitimate uses could have been covered by higher limits on issuance per domain, will be supported.
But S/MIME, the email encryption option that actually works out of the box in basically every mail client, sorry, nothing doing.