In Caddy 2.3, Caddy [1] will default to both Let's Encrypt and ZeroSSL [2]. If it can't get a cert from one, it will try the other. You can configure more too, including self-signed certs, as a last fallback for example. Caddy will be the first web server and ACME client to support multi-issuer fallback. (Pre-releases coming soon, or you can build from source and try it today.)
ZeroSSL's website is being updated to clarify that certs are free and unlimited through ACME. You can even view them in your ZeroSSL dashboard.
I've been using Caddy on some personal and community sites for around a year and it's worked out just great! The built-in certificate management has been so nice compared to having to manage external letsencrypt tooling.
More recently I set up a VM at work to be a domain redirector for a bunch of typo domains to our main domains, since the old outsourced redirectors didn't have TLS, and it was dead simple!
I would appreciate better and more consistent documentation for doing non-trivial things with Caddy, but for your basic use-cases it takes a lot of the pain away. Unfortunately, nginx and Apache still win on the (probably unfair) basis that they've got well over 10 years of history, and all of the cultural knowledge that comes with that.
Yeah, I feel that. I want to work on docs and knowledge-transfer a lot more in 2021. 2019 and 2020 were definitely full of sprinting to bring Caddy 2 up to par. I think in terms of major functionality it's settling down now, so docs and guides will be more feasible.
In the meantime, I always encourage anyone to contribute to our wiki: https://caddy.community/c/wiki/13 -- there's already lots of great topics there and room for plenty more, especially examples.
Hint: there's a LOT of market space for paid content to master the Caddy web server, if anyone reading this is looking to profit from their expertise...
Agreed, I almost mentioned documentation but I forgot... My use-cases are ridiculously simple, and it's definitely gotten way better in the last year, but just figuring out the syntax for putting my e-mail address in the config file and how to start the server without it asking me questions on stdin required some trial and error. Nothing major, but there just wasn't much on Google at the time.
Docs have gotten way better since. As has the install story. Before it was some scripted thing that I was never quite confident in, now it's a deb package that I feel confident in running updates and Ansible against. And honestly I have not had any problems since installing it.
Yes Caddy is fantastic. My favorite unexpected use case has been testing Apple webhooks and sign in, which requires TLS even for local development. Caddy made this so easy! I just set up DNS for my home address, configured my firewall to serve on public port 443 and fired up Caddy. Would have been endlessly more annoying to configure TLS for local development before Caddy. The config file for this is literally 3 lines:
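For reference, a minimal Caddyfile along those lines might look like this (the domain and upstream port are made up, not the commenter's actual config):

```Caddyfile
home.example.com {
	reverse_proxy localhost:3000
}
```

With just a site address, Caddy obtains and renews a publicly trusted certificate automatically, which is what makes the Apple-webhook use case so painless.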
How about supporting absolute hostnames? Even this site (https://news.ycombinator.com./) supports them just fine, only Caddy doesn’t. Caddy does it so badly that the HSTS preload for caddyserver.com is interpreted by browsers as applying to caddyserver.com. (which is correct), but caddy won't actually serve that spec-compliant website: https://caddyserver.com./ (just throws an SSL error)
The specification explicitly says what you want is the Wrong Thing™. The designers of SNI didn't leave this vague, they spelled it out first in RFC 3546 and most recently in RFC 6066
> "HostName" contains the fully qualified DNS hostname of the server, as understood by the client. The hostname is represented as a byte string using ASCII encoding without a trailing dot. †
So, when your web browser sends the hostname "news.ycombinator.com." that's not an "absolute hostname" that's just an error. Maybe you can demand that web browsers stop doing that, but they'll likely tell you they don't care, and you should stop writing URLs incorrectly.
† The trailing dot rule was in every version, but earliest sources talked about using UTF8 because they pre-date the insight that all this backend (ie not human-facing) stuff can use A-labels which are ASCII and avoid any complicated Unicode problems. This version just says to use ASCII here.
It's not just SNI that's the issue. Caddy also can't handle Host headers with trailing dots, where the spec explicitly states those should be handled.
So while the SNI error is clearly on browsers (and bugs have been already filed), the Host header issue is clearly on caddy (which caddy sees as wontfix)
Honestly, in the 5+ years of Caddy, you're the only person I've ever seen complain about this. And you're relentless about it. You bring it up on every thread on HN that ever has a mention of Caddy. It's very annoying.
That said, you can just make a site block with that host label and Caddy will handle it just fine. You just need to explicitly ask Caddy for it.
This has been a shitty thread to read, tbh. It's clearly not pointless if there are improvements which could be made to make the product more spec-compliant, such as handling terminating dots in Host headers. And I'm saying this as someone who's never used Caddy for anything, ever.
This is all from four years ago. And kuschku hasn't taken no for an answer after all this time. It's frustrating and annoying to hear their complaints on every HN thread since.
I agree with you the thread is shitty. I'm just hoping being direct will end the trolling.
Is it "trolling" if one expects a webserver to work like every other webserver, even offering to make the required changes?
If you want to learn why supporting FQDNs is important, you can actually read the issue and PR on the traefik repo where they chose to add support for this:
Yes, because Caddy is not like every other webserver. It also automates TLS management, can act as an ACME server, etc.
You make it out to be a simple change, but it's really not. And the arguments for making any changes are not compelling enough to warrant the risks involved. Compliance for the sake of compliance is completely pointless.
> Yes, because Caddy is not like every other webserver. It also automates TLS management, can act as an ACME server, etc.
That’s why I’m comparing it to traefik, which can also automate TLS management, in the exact same way (even using the same libraries for most of the functionality; they’re, after all, both Go webservers).
> And the arguments for making any changes are not compelling enough to warrant the risks involved
Why is it that you think you know better than the maintainers of e.g. traefik? It’s not like this is legacy functionality they kept around just for the sake of it, they added it in 2019 because there’s demand for it, today.
> even using the same libraries for most of the functionality
You're wrong. Caddy uses a superior library that's been more stress-tested at scale in production: https://github.com/caddyserver/certmagic and https://github.com/mholt/acmez - neither of which lego uses. Bugs and limitations in lego caused severe problems for several of our larger customers.
I’m not using caddy – but I get complaints that websites hosted on caddy servers don’t work with my tools, which follow the specs and require the absolute URL to work just as well as the relative one.
And I can’t remotely telepathically make all caddy users switch to nginx. So as long as a single HTTP server runs a version of caddy with this bug, I’ve got extra work to do, because the caddy people are wasting my time
Hello, ironically it was me who created that issue some years ago. I saw that http://ai./ works in a browser and wondered why chagemann.de. doesn't. The culprit was Caddy, and the Caddy maintainer didn't want to fix it (or, apparently, accept PRs fixing it). Not sure where the trolling is supposed to be.
Thank you so much for Caddy and open sourcing it completely. Every few years there comes a piece of software that really opens one's eyes to how overcomplicated life was before. Caddy is absolutely one of those.
True, that's one reason, but I've been planning this feature for years and would have implemented it either way. As explained in the linked pull request:
- Let's Encrypt is a busy non-profit organization. We can help maximize their budget by not using it as the exclusive default for every server.
- ZeroSSL does not have rate limits and is also publicly trusted. And yes, it is free to use it with ACME.
- ZeroSSL offers a graphical dashboard where you can log in and see and download your certificates.
- Having more than just 1 free ACME CA is a very, very good thing for the PKI ecosystem.
This is the beauty of standardization; if you give a server a URL, you can give it two and three and four, and not have to worry about global reliance on a single source.
There's also a lot of opportunity for CAs to get better IMO, so competition is useful. I'd hate to see a commercial company displace LE, but there are so many value adds that can be sold once you're the CA of choice that it seems inevitable that a commercial CA with a LE style free tier is going to have a lot of opportunity.
IMO the biggest, easiest feature no CA has implemented is CTLog monitoring / reconciliation. The problem I have with LE even on a small scale is that I'm grabbing certificates for ~20 (sub)domains. I also have several of them set up via Cloudflare. With CTLog monitoring notifications (via Cloudflare and Facebook), I get too many notifications. I don't know what's coming or going or which machines are requesting certificates for which (sub)domains.
A service like ZeroSSL is already acting like a central point of certificate management (for me), so it's the ideal location to do CTLog monitoring since the bulk of certificate issuances happen there. That means legitimate CTLog entries can be reconciled and ignored silently (they'll already show up in the dashboard).
I'm not sure how user accounts work in ACME, but the other thing I'd like is to be able to track which user or machine requested a certificate.
I'm sure something like that could also be built as a proxy. I thought about trying once, but it's firmly in my "things I'll never get to" idea box. Lol.
Another problem I've had with LE that could use a solution is a 3rd party service that I signed up for requesting certificates, but not installing them correctly and hitting the LE limits for that domain. If the mindshare changes from LE to ACME, maybe there'll be a day where 3rd parties will let me specify an ACME provider and link it to my main account somehow.
ACME has a concept of EAB (external account binding) credentials, basically like an API key. https://zerossl.com/documentation/acme/
Caddy supports this, so what you want to do should be covered.
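As a sketch, wiring ZeroSSL EAB credentials into Caddy's global options might look something like the following (directive names should be checked against the current Caddy docs for your version; the key values are placeholders):

```Caddyfile
{
	acme_ca https://acme.zerossl.com/v2/DV90
	acme_eab {
		key_id <your-eab-key-id>
		mac_key <your-eab-mac-key>
	}
}
```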
> In an effort to ensure the widest possible SSL certificate coverage around the world, our team has decided to keep all ZeroSSL certificates created using the ACME protocol completely free of charge.
At ZudVPN (https://zudvpn.com) we use Caddy to retrieve certificates for subdomain.zudvpn.com, where they are later used for VPN connections instead of a web server. This lets us issue personal VPNs secured with SSL/TLS on any major cloud provider. I'd like to thank both Caddy and Let's Encrypt for opening up endless opportunities around secure connections.
The ACME protocol (used by Let's Encrypt / ZeroSSL) can be used with internal infrastructure, too. I know that some folks already use Let's Encrypt to issue internal TLS certificates, but that's not always ideal. Step CA[1] is an ACME v2-compliant, open source CA that supports the same challenge types as Let's Encrypt / ZeroSSL.
Note that this only really makes sense if you're using tooling that already came with ACME support, and so this drops in easily. I guess that's becoming pretty common though.
Older tools might support technologies like SCEP but not ACME. These older protocols can't be used (on their own) to get certificates trusted in the Web PKI because the whole point of ACME is to do the proof-of-control step needed to get those certificates. But in your private PKI you likely don't need or even want that feature.
I guess it can make sense for new software that is at least sometimes for public access to just do ACME, but it does feel like maybe Smallstep should have a legacy SCEP mechanism available. I see there is a GitHub ticket for that so no need to raise it again.
Nice, thanks for the tip! I've needed something like this a few times the last few months and knew there had to be something out there I was missing. This looks awesome.
Their pricing page is being revised. You get free unlimited certs through ACME, including wildcards.
> In an effort to ensure the widest possible SSL certificate coverage around the world, our team has decided to keep all ZeroSSL certificates created using the ACME protocol completely free of charge.
Good to know, and I'm glad there's an alternative to Let's Encrypt, just in case. Is ZeroSSL trusted by old Android devices? If so, that might be a work-around for Let's Encrypt's cross-signing from IdenTrust expiring.
Yes as far as I know; their Sectigo/Comodo root is older.
But, you can still use Let's Encrypt with old Android devices until the later part of 2021 using the alternate chain: https://letsencrypt.org/2020/11/06/own-two-feet.html (As a point of reference, Caddy supports configuring this alternate chain.)
IMHO there's an opportunity for a lot of disruption in the CA industry. Managing a lot of certificates gets out of control pretty quickly and if they build a system with decent hierarchical authentication you can start to see a situation where large companies might opt to use them for most (or all) certificates. Put another way, imagine being able to log into your dashboard, create a sub-user and assign permissions for that sub-user to issue certificates for subdomain.example.com.
You can limit certificate issuance to a single issuer via CAA in DNS, so you could set your domains to use ZeroSSL exclusively and ZeroSSL could validate ownership of a domain to allow you to create that hierarchy.
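For example, such a CAA restriction in a zone file could look like this (ZeroSSL certificates chain to Sectigo roots, so the issuer domain used here is an assumption worth verifying against the CA's documentation):

```
; Only the named CA may issue certificates for example.com
example.com.  IN  CAA  0 issue "sectigo.com"
; Disallow wildcard issuance entirely
example.com.  IN  CAA  0 issuewild ";"
```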
I can think of a lot of value added services that can be sold alongside SSL certificates. One example would be CTLog monitoring including for lookalike (FACEB00K) issuances.
The other thing with SSL is that a lot of people equate it with domain security, so I think there's a certain level of domain monitoring that could be sold alongside certificates. Things like domain expiration monitoring, registration of lookalike domains, NS changes, DMARC reporting, etc. all start to feel like a single "domain security" service.
It's good that there are alternatives, particularly those that are outside the scope of US law, where at least some CAs believe certificates can be revoked for copyright reasons [1]. Let's Encrypt says they can revoke your cert if "our Certificate is being used, or has been used, to enable any criminal activity...[or] ISRG is legally required to revoke Your Certificate pursuant to a valid court order issued by a court of competent jurisdiction" [2]. But I'm sure Austria where ZeroSSL is based is still party to a number of copyright conventions and law enforcement data sharing agreements.
.ml domains tend to disappear when they get popular, and sometimes you can't even buy them at that point. With those free DNSes, you get what you pay for. Don't use them for anything important.
I know three people personally who used Freenom domains and lost access at some point. I've never actually been successful at getting it to give me a domain in the first place, but the last time I tried was years ago. So definitely be careful hosting anything vaguely important on one of those domains.
I had a Freenom account several years ago. At one point they decided that I wouldn't be allowed to register any new domains on that account, or renew any of my existing domains, so I lost a few domain names. I still don't know why.
E-mail is one example. Technically, domain names are not necessary to successfully send your own mail from your own server, i.e., without using a 3rd-party email provider. E-mail predates DNS; mail software has always supported IP addresses. However, today, the dominant 3rd-party e-mail providers will reject mail coming from an IP address not associated with a domain name.
The patchwork of anti-phishing and anti-spamming measures like SPF and DKIM require DNS TXT records. Allowing mail from an IP address would likely be used almost exclusively for spamming.
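As an illustration, those TXT records look roughly like this in a zone file (the IP, selector, and key are placeholders):

```
; SPF: only the listed address may send mail for example.com
example.com.                      IN TXT "v=spf1 ip4:203.0.113.10 -all"
; DKIM: public key for verifying message signatures
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```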
The free .tk domains are only free for 1 year. And even if you are only going to use it for a year, some people would rather not give out their credit card number for a free trial.
Please stop spreading misinformation. You don't need a card, you don't have to provide any identification at all. The domains are free for as long as you want (I've been using one for the last 3 years), you just have to click a button once a year, and freenom sends you emails two weeks in advance.
Perhaps I’m mistaken, but I believe at one point in time, years ago, they required a card for the free registration. It certainly wasn’t my intent to spread misinformation.
I have recently tried to pick up a domain name on Freenom and had extreme difficulty getting the website to respond and issue me a domain name. Once I finally locked one down that I liked, my account was disappeared without a courtesy email or anything. To be fair I was using obviously anonymous/pseudonym details, but I would have still expected a courtesy email.
No, they can be renewed for another free year indefinitely, as far as I know. I don't believe they require a credit card number, either. I've had a few freenom domains I've renewed for several years in a row and just got some new ones a few months ago; it's a very useful service.
I really wish ESNI used certs for IP addresses instead of relying on a DNS hack. It could do onion-style security with the outer handshake protected by the IP address cert, and the inner layer protected by the domain name cert.
Almost all modern documents about certificate policy have the same structure, a structure described in RFC 3647. This makes it easier to find what you're looking for in somebody else's policy document and to verify they ticked all the boxes. If they've got a rule about Certificate Revocation for example that'll go in section 4.9.
So there's going to be a section 3 about "Identification and Authentication" and it's going to have a subsection 3.2 "Initial Identity Validation" and that's going to have a sub-subsection 3.2.2 about how we figure out who this actually is we're talking to, and so these sections will be further divided. 3.2.2.4.x is the "Ten Blessed Methods" (these days there are rather more than ten) for how we validate DNS names in the Web PKI.
> Why not just use Let's Encrypt?
ZeroSSL comes with significant advantages compared to Let's Encrypt, including access to a fully-featured SSL management console, a REST API for SSL management, SSL monitoring, and more.
I love LE, like really really love it. I was surprised to hear that certs were going from 2-year to 1-year expiration, and that made me really pause for a second to think about the lack of proper infrastructure around certificates, especially LE certs. I envision these short-lived certs from LE/ZeroSSL needing some of the components that ZeroSSL mentioned above and much, much more. Eventually, if/when we have 1-week/1-day cert expirations, we'll need a certificate exchange system to better handle complex scenarios where other parties are involved (i.e. when doing client certs, SAML certs, etc.).
I've looked at setting that up for my home lab a few times and when reading the docs I always get hung up on one thing. How do I retrieve certificates on my servers? Do I have to use the Certera API for that?
What I'd like to have is an ACME compatible endpoint so I can change the ACME endpoint in my Traefik config to `https://acme.certera.example.com` and not have to make any other significant changes.
Basically I'd like to have an ACME proxy with a dashboard like Certera.
The thing that's unique about Certera is that it's not opinionated on your existing setup. It doesn't care whether it's Traefik, apache, nginx or IIS. The "glue" is a standard PEM file format, the way it should be. It's up to you how to tell whatever system cares about the PEM and do the "reload" of the cert.
I'm not sure how Traefik would communicate with it as I'm not familiar with Traefik in general. I'm assuming that you'd like Traefik to simply say: "gimme the cert for xyz domain" and have some endpoint/system take care of the rest, right? Don't hesitate to create an issue in GitHub and we can discuss further. Sometimes I lose track of HN comments due to a lack of notifications.
Hey. Thanks for replying. FYI, I just noticed the store link on your site is broken.
Having a non-opinionated system using a simple http call makes sense to me. I would say the main drawback is that a lot of automated certificate management has, in effect, standardized around ACME and hook points for integrating anything else seem like an afterthought. For Traefik specifically, it's not possible to cleanly reload TLS certificates:
So with an ACME provider, Traefik deals with scheduling of renewals and reloading TLS certificates as needed and I don't have to think about it. Obviously that has the downside of being a hard to debug (for me) black box, but I think a lot of people are willing to accept opaque systems if it saves them any amount of effort / thought.
That said, when I started using Traefik for TLS termination a year or two ago, it would have been much easier to set up cron or systemd timers to request certificates from Certera than to learn Traefik's manual config for terminating non-docker endpoints. In fact I might be using Certera and HAProxy for all my TLS termination had I known about it back then.
I'll definitely create an issue on GitHub if I try it and run into problems, but I'll try the existing setup first. I actually prefer HAProxy to Traefik and IIRC the only reason I'm using Traefik is that I didn't have an easy way to solve LE challenges in HAProxy. If I can have Certera playing that role I could drop Traefik and it's one less thing to keep up with.
Genuine question, have you considered using Caddy? With the third party caddy-docker-proxy plugin you get essentially the same benefits of Traefik in that regard, without the frustrations/limitations you've experienced with Traefik.
I didn't know that existed. I skimmed it and it looks like it would be good, but Traefik is working well enough for me that I don't have a reason to change it. To be really honest, I'd have to get some kind of noticeable improvement vs my current setup to make it worth building Caddy to get that plugin.
BTW, I've used Caddy before and I like it. It's the first name that pops into my head when I need a webserver. I mocked out my own plugin to add auth to GitLab pages a couple years ago and remember thinking it (Caddy) was pretty slick.
Another player in this market is Sectigo. They are providing cPanel branded free SSL certificates to cPanel servers. Some hosts have switched to these because of API rate limiting done by Let's Encrypt. Mind you, it's specific to cPanel (a web hosting control panel) but that is a giant market.
It's not for people like you. It's for hosts doing mass amounts of hosting for people who couldn't get a directory listing at a bash prompt if their life depended on it.
This looks like it's only for personal and non-public certs in a commercial context.
From their Terms and Conditions:
" ZeroSSL is for your personal or internal business use only and must be in compliance with all applicable ZeroSSL policies, laws, rules and regulations. As well as this, no third party rights can be violated or infringed upon through use of ZeroSSL. You may not use ZeroSSL for any commercial purpose including but not limited to selling, licensing, providing services, or distributing ZeroSSL to any third party unless you have received the express written consent of ZeroSSL beforehand."
Probably just an oversight, but I find it odd their (ZeroSSL) site does not use a certificate issued by their own CA, it is instead one of CloudFlare's SNI certificates.
You are correct, however CloudFlare does support supplying your own certificate, and I'd consider it an element of dogfooding to use their own CA on their own site.
Note that you’d either need to hand Cloudflare the private key for the cert, or use their key server and run it on your infrastructure. You’ll also need to manage the certificate's lifecycle. Unless you have a very compelling reason to do so, I doubt it’s worth the effort.
It's also worth mentioning that you can give Cloudflare your own certificate to use if you care what users see. I think this option might require one of the paid Cloudflare plans.
You're correct, but I recall a Tor dev saying at one point that HTTPS for .onion wasn't completely useless, I think in that more secure settings (CSP, etc.) apply to pages loaded with HTTPS.
Some onion websites use the certificate as an anti-phishing measure. Since onion domains are hard to remember, a certificate can verify that you are, indeed, connected to e.g. Facebook’s servers and not a phishing website.
Let’s Encrypt is a non profit funded by donors, other vendors sell value add services (the free SSL cert is marketing/a loss leader).
More options are good, Let’s Encrypt is mandatory to ensure good (or non predatory or oligopoly) behavior by other cert providers. It’s a check on their power.
But a more direct answer to the parents question is that they "make money" by providing a service that by virtue of its existence saves the sponsoring companies money and headache.
I'm surprised this model isn't more common as an alternative to licensing.
Collective action problem. You don’t have to sponsor to reap the benefits. You can pull it off for this or that cause celebre, but it’s not a workable model in general.
They can't sell the keys since they don't have them.
NSA could still mount an attack by asking the CA to register NSA's certs as valid, and tampering with the victim's network connection. What makes certs secure is our trust in certificate authorities.
Most HTTPS connections today negotiate ephemeral keys at the start of the connection, so even if an attacker has the server's private key (which the CA never sees and couldn't sell!), the attacker can't do passive listening attacks on connections using it. The attacker would have to do an active man-in-the-middle attack that rewrites the connection and swaps out the ephemeral keys with keys known to the attacker, which risks detection.
If an attacker has the CA's private key, then the attacker can mint new HTTPS certificates. They wouldn't be able to do passive listening attacks on connections, but they could use an active man-in-the-middle attack to swap out the server's certificate in the connection. However, this attack could be detected through Certificate Transparency, and the CA's leaked keys would become untrusted by browsers.
Roughly all SSL certificates can be discovered through Certificate Transparency logs. Being the CA doesn't really provide you with any additional information, beyond the source IP of the agent requesting the certificate (which, in most cases, is the same as the IP that the certificate's hostname resolves to).
Well, you don't have to scrap it. And a centralised CA seems dangerous. I'm not saying LE is bad; it's one of the good things that came along. However, whenever we trust a single authority, it always gets dangerous. So yes, I personally welcome another CA. However, don't assume your data isn't, or won't be, used for something. Organizations change, and the people who run them change.
Certificate logs from the certificate transparency project [0] are already public knowledge and shared freely.
The only thing Let's Encrypt gets in addition to what's in those logs and publicly discoverable is which challenge type you chose (dns, http, or tls-alpn), and what email you're using.
> So yes I personally welcome another CA
More CAs generally means more chance that one CA loses a private key or has a vulnerability. Tragically, since browsers trust all CAs for all websites, if the new CA has an issue, people can forge TLS certs for my website even though I have no intention of ever using that new CA.
In a very real way, having an excess of CAs is bad for the security of the entire internet. Letting anyone become a trusted CA would be an unequivocal disaster, so clearly more CAs isn't always good.
I do think there's a balance, where we should have several viable CAs that we trust to be secure, but not 100s of them, just 10s. We already trust a ton more roots than that, so right now I see a new CA as being detrimental to security overall.
That all being said, I'm pretty sure this CA is using an existing trusted root and processes, so since it doesn't require getting a new root trusted or cross-signed, it's less of a big deal.
I have run my own CA in various capacities (workplace, private/hobby). I will say that LE is wayyyy easier than trying to splice my root certificate into X number of systems. Also, some systems do NOT support custom self-signed root certificates.
I acknowledge your points about the risk of intelligence leaking. However, most DNS is benign / not a state secret in general.
Brings up an interesting question: Why do we need a Let's Encrypt to issue a TLS certificate? At what point can we stop using them?
PKI in a nutshell: the CA signs your CSR which was created with your private key; the CA gives you a cert based on your CSR; you show random people a website using your private key + CA-signed cert; random person's browser trusts your private key+CA-signed cert content because their browser trusts the CA-signed stuff implicitly.
Why we need more than 1 of them: in case one of them goes down and so we can't get new certs; in case one gets compromised and the browser revokes their trusted CA cert; diversity of "features".
What could we do instead of this process?
Idea 1: the registry becomes the CA.
Why?
To get a cert, you have to prove you are allowed to have it. Currently that almost always involves proving that you currently control the IP space that is pointed to by a DNS record. There are other verification methods which are somewhat more problematic than this, but this is the simplest method. And if you can find one single CA which will create you a cert if you can do this, that means that if anyone can do it, they can get a cert. Meaning that this verification method is the minimum security barrier for getting a valid cert.
How can an attacker subvert this process to generate a valid cert for any domain?
Options:

1) Take over the domain, by hacking the domain's account at the registrar.

2) MITM the registrar, such that updates to the registry's NS end up controlled by the attacker.

3) Take over the domain, by hacking one of the domain's nameservers.

4) Perform a DNS hijack, such as cache poisoning, so when the CA looks up your domain to find the IP space, you return attacker IP space.

5) Perform a BGP hijack, so when the CA tries to connect to the IP space, it connects to attacker-controlled IP space.

6) Plenty of others based on other validation methods such as DNS record alone, HTTP [no S] content, a pre-configured list of e-mails for a domain, etc.
How to prevent all but one of these attacks?
Option A: Make the registrar the CA. The registrar would simply request the user use an HTTPS API to request a new cert, using an API token generated by the login for the domain's account. The only way for an attacker to generate a cert at that point is to either hack the registrar account, or hack the customer's server to get the API token (basically the same impact as if they compromised the customer's web server).
The browsers would trust the registrars who would follow the exact same stringent guidelines that CAs use today. The difference to the end-user is the CA is run by the registrar.
Option B: The registrar and the CAs stay independent, but they use a standard secure protocol to authorize signing of certs using a customer API token. Same benefits, but an extra hop. Browsers continue working as before, users use an HTTPS API to the CA to generate the certs.
With either option A or B, there are no more stupid hoops to jump through just to prove you own the domain and thus deserve a certificate.
Requiring DANE to issue certs is like requiring a speedometer, running lights, turn signals, and a breathalyzer in order to ride a bicycle.
All you actually need is a stripped-down REST API (or some wireline protocol with 4 fields) and a TLS connection. No need to require DNS or DNSSEC at all.
The registrar generates a unique token for whatever name the domain owner wants to manage a cert for, tied to their domain. "oj3942hur9h239rh9hr4394834r domain.com" or "92839ub9f93hh9hsjdksjd *.foo.domain.com", doesn't matter. The CA makes a request with the token and the CSR; the registrar looks at the CSR, finds the record matching the token, checks that the CSR matches the record, signs its reply "yes, this is valid" with its own key, and the CA obliges the user. You don't even need DNS to make the request if the API is hosted on some static IPs.
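The registrar-side check described above can be sketched in a few lines. This is a toy illustration, not a real protocol: the token values, the in-memory token table, the `matches` helper, and HMAC as the stand-in for the registrar's signature are all assumptions for the sake of the example.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Hypothetical registrar state: a signing key plus a table mapping
# each issued token to the domain pattern it authorizes.
REGISTRAR_KEY = secrets.token_bytes(32)
TOKENS = {
    "oj3942hur9h239rh9hr4394834r": "domain.com",
    "92839ub9f93hh9hsjdksjd": "*.foo.domain.com",
}

def matches(pattern: str, name: str) -> bool:
    """Does the requested name fall under the token's domain pattern?"""
    if pattern.startswith("*."):
        base = pattern[2:]
        return name == base or name.endswith("." + base)
    return name == pattern

def authorize(token: str, csr_domain: str) -> Optional[bytes]:
    """Registrar side: validate the CA's request and sign an approval.

    Returns a signature over the domain ("yes, this is valid") that the
    CA can verify, or None if the token is unknown or doesn't match.
    """
    pattern = TOKENS.get(token)
    if pattern is None or not matches(pattern, csr_domain):
        return None  # unknown token or domain mismatch: refuse
    return hmac.new(REGISTRAR_KEY, csr_domain.encode(), hashlib.sha256).digest()
```

The CA's job then reduces to verifying that signature with the registrar's published key before issuing; the "4 fields" on the wire are essentially token, domain, CSR, and the signed verdict.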
If anyone's seriously waiting on DANE in order to resolve a handful of security vulnerabilities that affect the entire Internet (because at this point the entire Internet's security hinges on TLS), we might as well also wait for Jesus to come down from his bachelor pad in the sky and solve our national debt.
Maybe. In the middle scenario, where most first-hop-to-recursive DNS calls use DPRIVE but the authoritatives overwhelmingly don't, you get free security just by checking the "use DNSSEC" box, so browsers may simply begin doing that, maybe gradually or maybe as the result of a headline-making incident. See also the curious case of TLS_FALLBACK_SCSV. (Basically, if you don't implement SCSV then unsafe fallbacks are too dangerous to attempt, but anybody implementing SCSV doesn't need an unsafe fallback anyway, so, forget this new protocol feature, just never do unsafe fallbacks, duh.)
Once you're checking that box, DANE is almost free. Egos involved may determine that it's important to never technically do DANE, so as to save face, see also why TLS 1.3 isn't just named SSL 3.4 or SSL 4.0 even though that's what it "is" in some sense. But the effect, yes. If that scenario plays out it's what will happen.
One blocker for DNSSEC and thus DANE was deployability difficulties facing middleboxes. But middleboxes have choked so much else since that today "defeat middleboxes" is just table stakes. It was needed for TLS 1.3 and it's needed for QUIC and for HTTPS DNS records and... if you're defeating middleboxes anyway you might as well have DNSSEC.
It makes sense to expend the effort to defeat middleboxes when what you win is QUIC or TLS 1.3, both of which work immediately on the Internet today. It doesn't make nearly as much sense when the payback is DANE, which is supported almost nowhere: significantly less than 2% of domains.
This isn't a chicken-and-egg problem: domains can DNSSEC-sign now, without any middlebox interference, and overwhelmingly choose not to.
It is a chicken-and-egg problem though. DNSSEC requires additional infrastructure, especially for key rotation. Yes, it's not that hard (I set it up for the domains that I control) but it's also not just a record that you can set once and forget. It is beyond unreasonable to expect widespread DNSSEC support from domains when browsers have been dragging their feet for years. There needs to be an actual benefit that comes with setting up DNSSEC before most will do it.
To be clear, Sectigo was split from Comodo by PE. They are separate companies.
The CEO at Comodo who did the LE trademarking attempt hasn’t been a part of Sectigo and the CA for 3 years. Sectigo have also worked with LE and helped to sponsor the CT log they operate.
It may not change your opinion, but it’s important to be aware of the details.
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community. [1]
Put this way: if you didn't use that swearword, I'd have upvoted you. With the swearword, I can't.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
The word is slang, but it's described by multiple dictionaries with multiple meanings, and many include a definition that relates to people regardless of gender, based on their actions or behaviour:
> Used as a disparaging term for a person one dislikes or finds extremely disagreeable.
> offensive slang a mean or obnoxious person
> (a contemptuous term used to refer to an unpleasant person.)
It's funny that you didn't worry about the comment where I referred to the exact same company/people/activity as "a bunch of pricks". Coincidentally, that phrase can relate to genitals as slang too, but it has other slang definitions... I wonder if these look familiar.
> A person considered to be mean or contemptible, especially a man.
> a nasty, obnoxious, or contemptible person.
Also, I can't help but notice that you replied to a comment where I highlighted the irony of a post complaining about me calling a company/CEO who undertook indefensibly scummy tactics "cunts", by suggesting that I "come off as a tosser".
What's the definition of that word...
> a stupid or despicable person
I didn't see you admonishing that poster for insulting someone else.
The original comment was clearly directed at a company, and those within said company responsible for specific actions.
If you're one of those people: I stand by what I said. The people who thought that idea up, and followed it through, are cunts, or pricks, or yes, even tossers.
If you're not any of those people, why the fuck do you care what I call them?
The world has enough problems without people getting offended on behalf of asshole CEOs and yes-men or yes-women reporting to them, because someone called them a rude word.
I used the word "tosser" -- intentionally ironically -- because it comes from the same general subculture that allows "cunt" as a socially-acceptable word. However, "tosser" is an equal-opportunity epithet. It does not single out any meaningful group. Note also that I did not call you a tosser, I said that you might come off as one. We all do sometimes. But there's a difference.
I actually added that word intentionally. To sort of lighten things, but also to make something of a point. Too subtle, perhaps.
I don't know where you're from, but "cunt" is not in the mainstream in most of the English-speaking world. I don't personally believe that words are magic, but when used as an epithet, that word is an oppressive, offensive, sexist, misogynist projection of some really ugly stuff.
I suspect you know that (how could you not?), and don't care. Bully for you.
Assuming that you are not ill-intentioned, then maybe it's worth asking -- is it surprising to you that two words with similar literal/denotative definitions might have different meanings and effects in practical/connotative usage?
Regarding your recitation of guidelines. Come on, I was civil and pointed out that your usage was not appropriate. The only good faith interpretation of your usage is that you might be a member of a small subculture in which the word is not inappropriate. I acknowledged that, and pointed out that you should still be aware that HN is not a part of that culture.
I don't think I'm getting through here, so I think I'm done. But we're not talking about Comodo, so if that was your goal, maybe your approach was imperfect.
Shorter lifetimes are better. There's really no valid reason to make them longer. Upgrade your tooling to automate renewals, and it's no longer a problem.
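"Automate renewals" can be as small as one scheduled job. As an illustrative sketch, assuming certbot (the time of day is arbitrary; `certbot renew` only replaces certs that are close to expiring):

```
# crontab entry: attempt renewal daily at 03:00, silently unless something fails
0 3 * * * certbot renew --quiet
```

With an ACME-native server like Caddy, even this step disappears, since renewal is built in.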
In Caddy 2.3, Caddy [1] will default to both Let's Encrypt and ZeroSSL [2]. If it can't get a cert from one, it will try the other. You can configure more too, including self-signed certs, as a last fallback for example. Caddy will be the first web server and ACME client to support multi-issuer fallback. (Pre-releases coming soon, or you can build from source and try it today.)
ZeroSSL's website is being updated to clarify that certs are free and unlimited through ACME. You can even view them in your ZeroSSL dashboard.
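For the curious, a sketch of what the multi-issuer fallback might look like in Caddy's JSON config. This is based on the `tls` app's automation policies; the exact module names and fields in the 2.3 release may differ, so treat it as illustrative (see the PR in [2] for the real thing):

```json
{
  "apps": {
    "tls": {
      "automation": {
        "policies": [{
          "issuers": [
            { "module": "acme" },
            { "module": "zerossl" },
            { "module": "internal" }
          ]
        }]
      }
    }
  }
}
```

The idea is that issuers are tried in order until one succeeds, with `internal` (locally-signed certs) acting as the last-resort fallback mentioned above.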
[1]: https://caddyserver.com
[2]: https://github.com/caddyserver/caddy/pull/3862