Another free CA as an alternative to Let's Encrypt (scotthelme.co.uk)
652 points by Jaruzel on Aug 20, 2021 | 202 comments



> I'm using the acme.sh client but the process will be similar no matter which client you choose to use.

Always nice to see some variety in clients alongside the official Let's Encrypt one.

While we do use the official Python-based client at work at times, it's a bit disconcerting to me that whenever I install it via apt, it pulls in a whole bunch of dependencies.

I'm a bit partial to dehydrated, which is a shell script (works under Bash and Zsh): I find it a lot easier to understand. It's handy to put on Linux/POSIX-based appliances like F5s, where the only prerequisites are Bash, cURL, and OpenSSL (and standard Unix tools like sed, grep, etc):

* https://devcentral.f5.com/s/articles/lets-encrypt-on-a-big-i...

* https://github.com/EquateTechnologies/dehydrated-bigip-ansib...


> whenever I install it via apt

You have the option to create a virtualenv and install it with pip, or snap, or use a docker image. See [1]. This has a couple of advantages:

* you'll get the latest version from the maintainers - for instance, right now only Debian unstable has the latest 1.18.0 version, while Debian testing bundles 1.12.0-2

* you won't be adding system packages that might affect other parts of the system

[1] https://certbot.eff.org/docs/install.html#alternate-installa...
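
For anyone curious, a minimal sketch of the virtualenv route (paths are illustrative):

    # Install certbot into its own virtualenv so it can't disturb
    # apt-managed packages, then expose it on $PATH
    python3 -m venv /opt/certbot
    /opt/certbot/bin/pip install --upgrade pip
    /opt/certbot/bin/pip install certbot
    ln -s /opt/certbot/bin/certbot /usr/local/bin/certbot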


> You have the option to create a virtualenv and install it with pip, or snap, or use a docker image.

You could jump through all those silly hoops (most of which will be completely alien to people who are not Python devs) in order to use the "official" dependency-heavy Python client.

Or you could just use a single pre-compiled Go binary, LEGO [1].

I have been increasingly favouring Go recently because the functions delivered to the end-user are dependency-free: you can just ship simple single binaries instead of having to say "oh, you need Python X with this, that, and whatever other Python library under the kitchen sink installed on your system".

And that's before we start talking about conflicts that can occur between Python libraries... which, let's face it, will happen in an "average Joe" environment where Joe is just randomly using apt to install any Python dependencies.

[1] https://github.com/go-acme/lego
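
For a sense of how little setup is involved, a first issuance looks roughly like this (email/domain are placeholders; the built-in HTTP-01 solver needs to bind port 80):

    # Single static binary, no interpreter; state is kept under ./.lego
    lego --email you@example.com --domains example.com --http run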


Very good point. This is a great selling point for Go. It takes so much deployment pain away.


Indeed, for example this snippet from the Saltstack docs:

>For historical reasons, Salt requires PyCrypto as a "lowest common denominator". However, PyCrypto is unmaintained and best practice is to manually upgrade to use a more maintained library such as PyCryptodome. See Issue #52674 and Issue #54115 for more info

We wouldn't even be having that conversation with Go.

There would be no weird "lowest common denominator" dependency. There would be no concerns about potential conflicts between other crypto libraries, and no choice to make about which "better" library you want to install. Plus of course the 10 other non-crypto dependencies that Salt needs.

All you'd need is a binary, config file(s) and an init script and you'd be good to go.


Sure, static linking and the like make the initial deployment much easier. On the other hand, it also means that any bugs in the dependencies are baked into each application, and fixing those bugs (which may be critical security issues) requires rebuilding all the downstream code. Provided, that is, you can rebuild the downstream code. Users of closed-source binaries are simply out of luck.

Mixing multiple versions of the same (or closely related) libraries in the same program remains an issue even with self-contained applications. It might work out if the users of each version are isolated from each other, at the cost of program size and attack surface, but say you're using PyCrypto in one module and PyCryptodome in another, and you want to pass configuration or key information between them: the types won't match up. You'll also have two different API styles to deal with for the same tasks, which is bad for maintainability, which is ultimately bad for the user because it creates more opportunities for bugs and makes it harder to implement new features and enhancements.


> On the other hand, it also means that any bugs in the dependencies are baked into each application, and fixing those bugs (which may be critical security issues) requires rebuilding all the downstream code.

I'm afraid I don't buy that argument at all.

Because Python is either the same or worse.

You've got the potential bugs in the Python code itself, and then the potential bugs in the random Python dependencies. You've got a whole stack of potential bugs there.

P.S. What's that nonsense about "closed-source" Go binaries? If it's an open-source project then it's open source until it's compiled! If the code's on GitHub then you can clone it and do as you please if you're not happy with the author's Go code.


> Because Python is either the same or worse.

No, you update a python library and it works for everything importing it. The point is that you can fix vulnerabilities in unmaintained apps with a simple update if the vulnerabilities are just in dependencies.

> P.S. What's that nonsense about "closed-source" Go binaries ?

The discussion is about passing around binaries and nothing else.


In practice the apps that don't publish updated dependency lists are also the ones that end up breaking when you do update their dependencies. It only works for the simplest of patch updates.


> And that's before we start talking about conflicts that can occur between Python libraries....which, let's face it will happen in an "average Joe" environment where Joe is just randomly using apt to install any Python dependencies.

What kind of conflicts are you talking about? In both the Debian and Ubuntu repos, all Python packages and their dependencies are nicely versioned and separated, so there can't be any version conflicts between Python packages. As for the handful of Python packages that are mutually exclusive (Pillow and PIL), either only one implementation is in the repos or you can install any one of them and have it work, since they're drop-in replacements.

Now, if you start installing system-wide packages with sudo pip, you'll of course break things, but if you're installing things from the apt repos, there is no reason to ever do this (and pip will yell at you that you shouldn't do it).


> You could jump through all those silly hoops (most of which will be completely alien to people who are not Python devs) in order to use the "official" dependency-heavy Python client.

Why would snaps or docker be completely alien to people who aren't python devs?


Why would you deploy a snap or an entire docker container for a job a single shell script or binary can do? It's like using a bulldozer to move a pebble.


I didn't say anything about its utility. I was just curious what docker containers and snaps have to do specifically with python developers.


This might be a good scenario for pipx. It’s a Python package manager optimized for deploying applications instead of libraries.

https://pypa.github.io/pipx/
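
Assuming pipx is already installed, usage would look something like:

    # Each app gets its own private venv, with the entry point
    # exposed on ~/.local/bin
    pipx install certbot
    pipx upgrade certbot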


Use pipx-in-pipx to make it even more robust: https://pypi.org/project/pipx-in-pipx/


I hope this is a joke, right?


Sadly, no. Thanks to the way Python works when installed via Homebrew on macOS, this is actually very necessary if you want to install system-wide Python utilities that do not depend on the system Python. Without it, these utilities can be broken by a system update or by a Homebrew update that bumps the Python version.


Which is another reason why I lean towards dehydrated when I can: a lot fewer "system packages" to deal with.



Which is unsupported and may explode randomly in the future. Like they did with `certbot-auto`.

They only support snap. Which is fucking nuts.


I can't bear to use certbot because snaps on a headless server (along with using AIDE) are a complete nightmare and a no-go for me.

I'll second the suggestion for dehydrated.


Prefer to avoid snaps too. First time I'm hearing of dehydrated. I've dabbled a bit with acme.sh too and it's good (can do ECC certs and it's a single shell script).


I've stopped using Ubuntu in favour of good old Debian due to Canonical foisting snaps. I know you can remove snapd but the whole Ubuntu ecosystem is pivoting towards them.


What does certbot have to do with snaps..? You have all the options of container, build from source, download a binary release, get it from apt (or what have you) repos.


certbot does not officially support binary packages on Linux, so you have three choices for supported releases: snap, docker, or pip (best effort). If I’m not already using docker for the site (spoiler: I’m not) then why the fuck would I install docker if I’m not willing to install snap?


I guess the installation via pip is only best effort in the sense that they can't support every platform that pip is able to run on. For example, their dependency on the "cryptography" library might cause problems on platforms which can't get a pre-compiled version from PyPI and have to build it from source.

If you are on x86 and use a distribution with glibc I wouldn't expect any problems.


I've had no issues installing it from debian apt repos on arm64 and amd64... I mean, if you don't want to use certbot and prefer something else, fine, but I can't see how docker/pip/snaps are the only options.

They're even listed as alternative methods here: https://certbot.eff.org/docs/install.html

I wish at least one of the other people downvoting my comment would pipe in to what their issue is, or what I could be missing.

And yes, I've encountered the cryptography-switched-to-rust-thing in various other scenarios.


It's Python, so scratch "binary" above and replace with "pip", obviously.

https://packages.debian.org/buster/certbot

You seem confused.


Genuine question: Why are snaps a nightmare on headless servers? Their auto-updating nature does add some stability issues, but any other reasons?


Because it's not conducive to a minimal attack surface - by way of example: https://www.helpnetsecurity.com/2019/02/13/cve-2019-7304/


Not to completely minimize it, but that says local attacker, not remote attacker. So someone would still have to gain access to the system in question in the first place.


Just because a server is headless does not mean that it isn't interactive in some way or running some user-submitted scripts or code.

Also, compromising a service running as a user (not root) would be sufficient to then escalate.


OpenBSD's 'acme-client' may be a good fit in these kinds of cases - not sure if anyone has ported it to other systems, but it's probably a pretty direct recompile with maybe a couple of adjustments.

https://man.openbsd.org/acme-client.conf.5

http://cvsweb.openbsd.org/src/usr.sbin/acme-client/


I maintain a portable fork of the project: https://git.sr.ht/~graywolf/acme-client-portable


It’s the most sane client and should’ve been a standard one.


The OpenBSD acme-client is OpenBSD-specific and lacks a whole bunch of features. It is, I suppose, exactly what you'd expect the default ACME client to be in OpenBSD, and so it fits its role perfectly, but it doesn't make any sense as the "standard" ACME client, since it has no support for lots of desirable ACME features.


+1 for dehydrated [1]. Aside from being able to run basically anywhere, it is very easy to script: create a file with the list of domains you want and run `dehydrated --cron`. dehydrated will obtain certs, modify existing ones, renew them, or just do nothing, as appropriate.

Certbot is designed for interactive use: obtaining, changing and renewing certificates are all distinct commands, and if you tell it to obtain a cert you already have, it'll just obtain it anyway. Handling this from a script is a huge pain.

[1] https://github.com/dehydrated-io/dehydrated
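
A sketch of that workflow (names are placeholders; dehydrated picks up domains.txt from its config directory):

    # domains.txt: one certificate per line, extra names become SANs
    example.com www.example.com
    mail.example.com

    # one-time account registration, then the idempotent cron call
    dehydrated --register --accept-terms
    dehydrated --cron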


I use lego everywhere. You download it from GitHub or build it with Go, drop it in a bin folder, run it a first time and set up a cronjob. That's it. No on-site deps, no interpreters, no venv.
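
Something like this in a crontab covers renewal (path and schedule are illustrative; `renew --days 30` is a no-op until the cert is within 30 days of expiry):

    # Try weekly; lego exits quickly if the cert isn't due yet
    0 3 * * 1 /usr/local/bin/lego --email you@example.com --domains example.com --http renew --days 30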


I recommend acme-tiny:

https://github.com/Tronde/acme-tiny

It's an acme client in a single, small, stand-alone Python file.

I reverse-engineered it and ported it to Common Lisp. I haven't published the result, but I'd be happy to do so if anyone is interested.


Recommendation from me as well. Have been using this script for multiple years now without a single issue. The minimal code is awesome for avoiding unnecessary external dependencies and complexity.

Be sure to use the latest version from https://github.com/diafygi/acme-tiny though :-)


From the GitHub page docs it requires too much stuff to set up. Imagine if you need certs for >10 domains. I prefer lego + cronjob, or Caddy. Most web apps speak HTTP now and it's just a matter of proxying to a local port, unless you are using PHP, uWSGI, or PSGI. PHP-FPM is also well supported.


[flagged]


Do you realize that "HN" isn't one single monolithic entity that needs to achieve 100% consistency in its views? Like if one person here says "use existing libraries" but then a different person says "don't use this existing library", that doesn't actually mean that someone was being hypocritical.

You might also consider that not all advice (especially when presented in an extremely limited form like a comment box on a website) should be interpreted as black or white, full compliance or complete rejection, but might better be treated as a general guideline or recommendation and that standard disclaimers or tradeoffs ought to be implied.

Certbot is a bad example of "use existing libraries and never reinvent the wheel" because it pulls in soooo many unrelated things. These are certbot's python dependencies: https://github.com/certbot/certbot/blob/master/tools/require...

Almost 200 different dependencies. Most of them handle some niche use case that is likely not what a given user needs. For some people certbot, with its downsides, still might be the best solution. For others, there might be a more appropriate solution that still avoids reinventing the wheel. E.g., I prefer to use a single Go library to do ACME stuff, or to just use Traefik or Caddy or some other reverse proxy/load balancer that supports ACME transparently without pulling in a ton of extra packages that aren't relevant to me. But sometimes, certbot is still the simplest solution, so I use that. I still reserve the right to wish that it had fewer dependencies.


Take advice with a grain of salt is my advice to you. Advice for the average person is not necessarily advice for everyone.

The official client stopped supporting older OSes like CentOS 6. Using a third-party library might be important for some.

People heard using 'goto' is forbidden, so many will get upset if one is found in a code base. You can use gotos, and in some situations it's the better tradeoff. Advice in general is just advice in general.


> People heard using 'goto' is forbidden so many will get upset if one is found in a code base. You can use gotos and in some situations it becomes a better tradeoff.

I'm going completely off-topic on a tangent, but no modern language even supports the goto from the "Goto Considered Harmful" paper. All you can use nowadays (unless you write assembly) is a much weaker version that has none of the problems you'll find there.


The problem that goto solves in a language that has predefined line numbers, like BASIC, just doesn't apply to codebases without line numbers.

It's an example we can understand.

Like "never use jQuery, use some complex framework" even when you only need 5 actions.


> The problem that goto solves in a language that has predefined line numbers, like BASIC, just doesn't apply to codebases without line numbers.

That's not true. There are useful times to use goto, but the main issue with its use doesn't have to do with line numbers... Using goto leads to spaghetti code that is difficult to maintain or reason about.


The amount of code in existence in languages that support "goto" is mind-blowing. I would wager that the majority of developers have encountered it in a code base. Even though it's not modern, it's certainly a relevant example for most people.


My suspicion is that the majority of developers today haven’t used anything besides JavaScript or Typescript professionally, with maybe a bit of Ruby or Python.


You mean C?

C hasn't supported Dijkstra's bad version of goto since the last century. GCC was one of the first to add a check against it, so if your code compiles on GCC without any "ancient code" flag, it's not there.


To accomplish basically the same result, dehydrated lists three dependencies (with openssl probably already installed on most systems):

* https://packages.debian.org/bullseye/dehydrated

(python3-)certbot lists more:

* https://packages.debian.org/bullseye/python3-certbot

As a sysadmin I can examine the code path of, and wrap my head around, one of these much more easily than the other. If something breaks I can throw in a "set -x" in the shell script and start getting debugging information.


The --cron flag is brilliant, I will definitely try it.


Well, if a student driver is given the advice, "always drive to the right," that doesn't mean to turn left out of the right lane! Use some critical thinking, like the person you are laughing at did.

Free advice over the Internet is worth exactly what it sounds like - free advice from nameless, anonymous people.


[All of] HN always... always... never...

What an extremely low bar for an argument. Underachievers, please try harder!


Can this one provide wildcard certificates without having to update DNS entries every three months?

That is the one pain point I have with Let's Encrypt.

PS: Yes, you can automate the DNS updates. That is the pain point I am talking about. It is one more moving part. One more dependency on a third party. One more thing to set up. One more thing that can break. One more thing that will rot (APIs always change at some point in time).

Many people seem to solve the "automate DNS" problem by putting their DNS credentials on the server which serves their website. This is the worst thing from a security perspective. Now someone who breaks into your application can take over your DNS, point your domain wherever they like, and get any certificate for it they like. This probably enables them to also take over your email and then escalate further from there.


If you get a 1 year certificate then yeah, but otherwise no. The requirement to re-validate the DNS record comes not from the CA or the use of ACME, but the Baseline Requirements[1] §4.2.1, to prove you are still in control of the domain on a somewhat regular basis to obtain new certificates. Every 3 months is more frequent than is required, but there is still a regular (398 day) DCV requirement.

[1] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-...


"Delegated Domains" might interest you [0].

In this mode you manually configure a CNAME record once, like "_acme-challenge.important.example => _acme-challenge.lessimportant.example", and then set up your client with DNS API keys for the lessimportant.example domain. You still get valid certs for your important domain without exposing creds for it.

A leaked key wouldn't let attackers change your important DNS records, but they could still generate valid certs for your important domain.

We bought a cheap domain (~$14/y) for this purpose and hooked it up to a DNS provider with a better API than our main provider. It has worked great and gives some peace of mind.

[0]: https://cert-manager.io/docs/configuration/acme/dns01/#deleg...
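
To make it concrete, the one-time record and a quick sanity check (names taken from the example above):

    # Created once at the "important" domain's DNS provider:
    #   _acme-challenge.important.example.  CNAME  _acme-challenge.lessimportant.example.
    # Verify the delegation resolves:
    dig +short CNAME _acme-challenge.important.example
    # -> _acme-challenge.lessimportant.example.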


Even more moving parts you need to set up and maintain.

It introduces a 4th party you depend on. Now you have:

1: The datacenter where your application runs

2: The DNS server

3: Let's Encrypt

4: The "DNS provider with a better API"


The "DNS provider with a better API" can be your own bind9 server on the same machine as certbot, updated with RFC 2136. Completely standard, no changing APIs, no 4th party, no maintenance. I set up a server like this at work and haven't touched it for a year and a half.


bind hasn't exactly had a flawless security history...

(also hope that not touching it means you've automated security updates at least)


use the same provider (but a different account)?

you are unavoidably dependent on 1-3 anyway


Neat! Thanks for sharing this.


What I've done is delegate (NS record) only the subdomain _acme-challenge to a standalone DNS zone the webserver has write access to. This way it cannot escalate to changing the root A/MX records.
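
In zone-file terms the delegation is just two records (names illustrative):

    # In the parent zone, shown in zone-file notation:
    #   _acme-challenge.example.com.  IN NS  acme-ns.example.com.
    #   acme-ns.example.com.          IN A   203.0.113.10
    # Sanity check:
    dig +short NS _acme-challenge.example.com
    # -> acme-ns.example.com.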


Oh that is really smart. Is there a way to build a simple DNS server into a certbot (or other client) plugin? That DNS server doesn't really have to be available outside of the verification time window.

DNS validation has been a thorn in my side for a while. Not only do I use DNS hosts that don't have APIs (like Google Domains), I also don't really want to give every web server access to my entire zone. That seems like a huge attack surface.


I already had Bind on the machine so it was logical to add the zone there and utilize nsupdate : https://gist.github.com/kronthto/893715f12cc0b1cda9fcfdbd8dc...

But what you are suggesting should work just fine as well - there should be no need for a persistent service. Of course, the service would need to run on port 53, so you can't already have another nameserver on that machine, and it also requires CAP_NET_BIND_SERVICE.

A quick search led me to this Python project that could be an inspiration: https://github.com/pawitp/acme-dns-server


I run this one: https://github.com/joohoi/acme-dns It's super simple and has a REST API for updating records.


Me too, but I enabled nsupdate on that zone, while I wanted to keep my first-level domain safe.


> Many people seem to solve the "automate DNS" by putting their DNS credentials on the server which serves their website. This is the worst thing from a security perspective.

Indeed. That's why I have a single machine that I use for running acme. (Maybe people don't realize that you don't have to run the acme client on the same machine where the certificate will ultimately be deployed?) It contains all my keys (ssh keys, acme account keys, API tokens) in an encrypted database and a set of scripts for getting updated certificates and installing them. This machine has no open incoming IP ports, only outgoing SSH and HTTPS connections. It also contains copies of all my source code and deployment scripts. I could in theory tear down my entire production infrastructure and rebuild it with a single command from this machine.


How could you validate wildcards without changing DNS?

And the different value every three months thing is definitely a feature not a bug, because otherwise stale values could lead to mis-issuance.


Having control over the HTTP response for the root domain but not being authorized to have SSL certificates for subdomains is already a very dubious concern, but if you are that worried, you could define a fixed DNS record saying that, for this particular domain, ownership verification of the main domain translates to all subdomains.


> Having control over the HTTP response for the root domain but not being authorized to have SSL certificates for subdomains is already a very dubious concern

What about marketing/static hosting sites like Netlify/Vercel/etc? I can point my domain there. That does not mean they should be authorized to have wildcard certs over the whole domain.


Hmm, isn't that exactly how they work?


No, they work by being able to issue a cert for the specific domain that is pointed at them.

It does not work by allowing them to issue a wildcard cert for the entire domain.

For example, `nrmitchi.com` is pointed at Netlify. Netlify can obtain a certificate for `nrmitchi.com` (and `www.nrmitchi.com`, which is also pointed at them). It does not allow Netlify to obtain a cert for `*.nrmitchi.com`, nor should it.


Technically today it does allow them to get such a certificate, but they choose not to (and Let's Encrypt has never allowed this because it's unsafe). Ballot SC45 for the Baseline Requirements this year fixed that so that from December they will not be able to get a wildcard certificate based on proving control over the parent domain.


CDNs are a bad example, because they usually deal with DNS as well. They usually want to send different replies in different regions, etc. CloudFlare is the only one I used, but I know you can't set it up before you switch to their name servers.

However say you host your root-name website on GitHub pages or similar. You don't want them to have full DNS control over the rest of your zone (emails, app, etc).


    How could you validate wildcards
    without changing DNS?
The same way you validate ownership of a subdomain: By putting stuff into the well-known path. Only that you do it for the root domain. I don't know any case where someone has control over the root domain but is not eligible for a wildcard cert.
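
i.e. the check the CA performs boils down to this (token and thumbprint are placeholders):

    # HTTP-01: the CA fetches the token over plain HTTP on port 80
    curl http://example.com/.well-known/acme-challenge/TOKEN
    # must return: TOKEN.ACCOUNT_KEY_THUMBPRINT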


This loose allowance is going away. Current BRs allow using 3.2.2.4.18 and 3.2.2.4.19 (Agreed upon change to Website) for wildcards until December 2021. After that:

> For Certificates issued on or after 2021‐12‐01, the CA MUST NOT issue Certificates for other FQDNs that end with all the labels of the validated FQDN unless the CA performs a separate validation for that FQDN using an authorized method. This method is NOT suitable for validating Wildcard Domain Names.

Let's Encrypt are just ahead of the curve here, this was always unsafe because it means if your corporate site https://big-corp.example/ is on some bulk host that bulk host can get (even though presumably they wouldn't) wildcard certificates that will also match mail.big-corp.example and db2.big-corp.example and auth.big-corp.example and vpn.big-corp.example ...


> I don't know any case where someone has control over the root domain but is not eligible for a wildcard cert.

Companies very very often point their root domain at a hosting company for their marketing site; let's use Netlify as an example.

This does NOT mean that I would expect Netlify to be able to issue wildcard certs for my domain.

Basic "www-izer" (redirection) services are another example where the root domain is pointed somewhere that should not be able to issue wildcard certs.


To deal with this, a DNS entry like "Root Domain Controls Wildcards" could be required for the validation.


Even if they did, the CA/Browser Forum is clamping down on maximum certificate lifetime. While it will be a while before the max allowed lifetime is 3 months for all certs, I suspect eventually it will happen.

The main reason being that revocation is a hard problem to solve. For example OCSP creates a single point of failure. And top of that most software doesn't even check it.


Certificates having short lifetimes doesn't have to mean that verification has to happen at the same frequency. And frequent verification doesn't have to mean that dns records need to be updated each time - for example you could have a dns record saying that HTTP-based authentication for the main domain covers all subdomains.


The Ten Blessed Methods (no there currently aren't ten of them, yes I'm going to keep calling them that) do not allow you to just make up rules like "Let's have a DNS record saying it's OK not to verify anything".

Currently methods 3.2.2.4.18 and 3.2.2.4.19 allow you to get a wildcard based on the web site changes, but that's clearly unsafe and is going away from December. Let's Encrypt never allowed it because it would be hypocritical to have people saying "This is unsafe" while also allowing it.


Updating the DNS entries can and should be automated as well. There's different solutions for different DNS setups, but many, including me, do so and don't have to worry every 3 months.


It can, but is quite a bit more complex since most people are not running their own DNS servers - and even if they are, updating records is nowhere near as standardized as dropping files under .well-known.


It depends: some ACME clients have a list of DNS providers they support and take API credentials to automate DNS verification.

I use

1. Free Cloudflare DNS

2. Traefik's built-in ACME client


A little off topic, but since most of ACME/Let's Encrypt leverages DNS for validation (directly or indirectly, to find the well-known host's IP), why can't we just skip all of this nonsense and do validated self-signing using DNS via DANE?


You can put certificates (signatures) in DNS so that clients can look up the appropriate record and verify that the certificate the (e.g.) web server is sending matches:

* https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...

However, DNS records themselves are not protected by default: they can be fiddled with in transit. So you have to enable DNSSEC to prevent them from being altered.

So yes, there are standards in place to do what you are suggesting. You'll 'just' have to convince people to enable all of this infrastructure and then have TLS clients use it.


Which isn't going to happen; DNSSEC and DANE were introduced in browsers several years ago, and the browser developers withdrew support. These are dead-letter standards.


I have a beta service I set up to issue certs via let's encrypt, which helps you circumvent the requirement to update DNS records yourself: https://certs.bx.tc/faqs


Depends on your domain/DNS provider. I use Cloudflare, so I can add an API key to the config in certbot and the DNS updates are automated then.
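
With the certbot-dns-cloudflare plugin that looks roughly like this (token value is a placeholder; scope it to the one zone if you can):

    # ~/.secrets/cloudflare.ini (chmod 600):
    #   dns_cloudflare_api_token = SCOPED_TOKEN
    certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
      -d example.com -d '*.example.com'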


Yeah, it’s the same for gandi.net, plug the API key in and you are done.


Great, and now your application server has control over your DNS, which it otherwise would not need.


You can delegate authentication via a permanent DNS record to another DNS hierarchy and give your certificate software authority to change that for ACME purposes. This fixes both the "My API key is allowed to change anything" problem and the "I can't get an API key for the DNS domains I need certificates for" problem, albeit at the cost of needing that one-time setup to tell ACME where to look in DNS.

To do this you need a CNAME from the _acme-challenge DNS name you're being challenged on, to a DNS name you're going to use for this purpose. It needn't be in the same domain or indeed even the same TLD, but of course it does need to be a public DNS name.


Couldn't they just build a DNS responder into the ACME client? That way you can put an NS record delegating _acme-challenge to your application server/web server, so it can reply during validation?


Nobody stops you building an ACME client that does this. However I expect it would mostly accumulate confused bug reports from users who don't know their IP address, or don't even have a public IP address, and certainly can't unblock UDP port 53 on their device.


certbot already has a "stand-alone" authorization mechanism that has all those drawbacks, so doing a similar thing for DNS might not be too terrible...

kro pointed out (in this thread) this plugin that is more or less what I described: https://github.com/pawitp/acme-dns-server


None of what your rant implies is accurate if you follow any kind of security practice.


I'd like to see a free CA for S/MIME certificates once again.

For the last year or two, all the free S/MIME certificates that you can get have had issues:

- some of them are valid for less than one year, which is a huge hassle for S/MIME as opposed to HTTPS, because you need to keep all your old certificates around

- some of them will not let you use your own private key (I wish I was kidding)


I don't know S/MIME, why do you need to keep your old certificates?


From my understanding S/MIME is pretty much enterprise PGP, so if you don't have your old certificates you can't access old emails that were encrypted using them


Decryption is performed with the private key, not with the certificate. If you can get a new certificate issued for the same private key, you can still decrypt old stuff.

Rotating keys might be a good security practice, though. But not necessary.


I've always re-encrypted when getting a new cert. Not sure if that's an antipattern though. I just didn't want to be bothered by which cert encrypted what data.


The screenshot recommends short passwords with one each of upper/lower/numeric/special characters. This policy has never been good. I find it discouraging coming from a company offering security-related services.

https://scotthelme.co.uk/content/images/2021/08/image-3.png


I see that way too often. At least there isn't a maximum size here.

The proper way to do it is to check against a compromised-password database, and maybe apply common rules like "your user name and password must be different". But I've almost never seen it done: I have used passwords that I know are compromised, or even common, and it always passed as long as it had the right characters.

It shouldn't be that hard. Storage is cheap now: save a few GB for the entire haveibeenpwned database and you know you have a good password. There are even online services that do it securely and, I think, for free.

The only real problem, I think, is a UX problem. Having to enter a password and being told repeatedly "haha, nope, try again" with no explicit way to correct it may be more annoying than clear rules but it doesn't strike me as an unsolvable problem.


> At least there isn't a maximum size here.

My bank does this, setting a maximum of 15 characters. It is an incredibly frustrating experience when I have to change my password (which I have to do every 6 months, per their "security" policy): I'm reminded of this ludicrous requirement of < 16 chars, which most certainly means they store these passwords in plain text.

For context: this is the largest private bank in my country, serving 50 million+ customers.


Just curious... has anyone so far decided to extend the 3-month certificate expiration deadline? I understand that for the intended use case it makes sense, but in some cases it's overkill, and having a CA support such a use case could be useful. There's nothing in the technology itself that prevents us from having certs that expire in, say, a year, right?


Let's Encrypt decided on 3 months to force users to automate and to avoid the burden of too much user support. They are a non-profit after all.

I believe commercial CAs are offering free certificates limited to 3 months for the same reasons, and for the up-selling opportunities.

I think cloud providers also offer free 1-year TLS certificates, but of course you are then also using their other services.


3 months was an excellent choice because it means you _have_ to automate so it will never expire. While the 1 and 2 year certs always end up expiring on production because someone forgot about them.


It also means you have to automate things that are tied to your certificate lifetime. Apple Pay on the web requires you to authenticate your server, and it uses your certificate serial number as proof you still own the server.

This means every three months you need to re-authenticate with Apple Pay. But there is no ACME client for authenticating with Apple Pay. So instead, I was having to re-authenticate something manually every 3 months. It involved logging into an Apple Developer account, downloading a PEM file, uploading it to my server and then clicking a button in the Developer Account to check the file.

After doing that dance, I happily paid for 2-year certificates from RapidSSL. Now you can only buy one-year certificates. I really hope the CA/B Forum isn't successful in making those non-conforming and requiring shorter certs.

There are plenty of other environments where certificate automation is not possible. And honestly, I haven’t seen arguments as to how on-machine automation is more secure than requiring someone be involved in the process.

While I’m dreaming about improvements to the CA ecosystem, having some way to actually prove your are the company you claim would be amazing. Instead we are actively removing support for anything that tried to provide that…


I'm not sure why you choose to see this as something that "can't be automated, so must use long-expiry certs" and not "apple fails to offer an api to perform an essential-to-automate task".


I see LE as generally trading low cost certs for expensive labor. It’s great when you have scale and need hundreds of certs.

I care more about making it easy for people to pay me than getting “free” certificates that cost me hundreds of dollars in labor costs.

Everyone talks about LE like it is perfect. I've just determined, after using it at four different orgs, that for smaller shops it tends to take more time/money to get it working than using long-expiring certs deployed via an automation system.

Honestly, setting up even more automation, like you suggest Apple provide, would probably cost 5x the labor of just purchasing 3-year certs for the next 12 years.

Automation is great when you have scale. In this case, I don’t. I tend to work at smaller companies, so I’ve never worked at an org big enough for the automation to pay off versus buying certs.


> long expiring certs deployed via an automation system

If you're already automating it why stop halfway? I don't see your point.

> setting up even more automation ... would probably cost 5x in labor [than the cost of 3 year certs over 12 years].

(0. 3-year certs are going away for operational security reasons, but let's skip that for now.)

1. Are you sure it costs more in labor to automate? Are you factoring in: a) the opportunity cost of lost sales and customer dissatisfaction when the certs expire in prod? b) the time it takes to train new employees how to change the certs [which at the average turnover rate is paid at every cert change]? c) the recurring labor of finance and operations professionals and management expensing, accounting, reporting, and reviewing this irregular cost? d) the opportunity cost of avoiding setting up new https-enabled services because it's such a huge pita for the organization?

> Automation is great when you have scale.

2. Let's decompose your scale argument into "vertical" vs "horizontal" scale that is hopefully familiar to people provisioning infrastructure. Here, "vertical scale" is one company needing many certs. I agree that if your vertical scale is low (say 10 certs) then it may not be worth it to spend 200h of labor automating it by yourself. But automation can also be scaled "horizontally": if 100 companies need it they can amortize the labor cost of creating the automation between them and have plenty of hours left over to implement it. This is the central ethos of open source and the reason why it can work at all.


The reduction from 825 days to 1 year was instituted by Apple. The same people apparently forcing you to perform a manual step every time you replace the certificate.

Some people might look at that and conclude Apple is the problem, but you apparently decided it's "the CA ecosystem".

You can get proof you "are the company you claim" from CAs today, both OV and EV support that capability, but let me guess, Apple doesn't make any use of that information and so once again rather than spot where the problem is, you'll give Apple a free pass and blame everybody else.

Edited to add: As to "actively removing support for anything that tried to provide that… " EV doesn't do what people expect it to do. Maybe Apple knows whether they want to do business with the Ohio Funky Rabbit Pizza or the West Virginia Funky Rabbit Pizza, but the customer has no clue that those are even different companies, much less which is which, so the whole "Let's show the company name in the browser" doesn't achieve what its proponents wanted, not least because of course it'll turn out the "Funky Rabbit Pizza" restaurant actually in Coolville Ohio isn't run by either company but instead by Generic Food Holdings Inc. registered in New York, so with an EV cert their legitimate web site says "Generic Food Holdings Inc." which is even more suspicious, not less.


Same issue with Okta if you want to use a custom domain. The only way to update the cert is via the web UI. It's a very frustrating oversight for a security related service...


The only _supported_ way ... if the browser can do it then a headless browser can do it

Did you open a ticket with Okta about that? I'd like to "me, too" and/or watch it, if so


A headless browser is not the solution because the UI may be very brittle. I always have to update my scrappers as people tend to change the UI from time to time.

A headless browser is only a solution if the upstream website doesn't change its UI and is very stable.


Sure, I doubt anyone in the history of programming has ever fired up a headless browser and thought "whew, I'm glad that work is finished, never to be touched again!"

But if it's a choice between _me_ (a) remembering every 3 months (b) then opening Chrome, authenticating, recalling the 18 clicks to get to the right settings screen, copy-pasting the 3 text fields, hitting submit, then logging out, or puppeteer doing that, there's absolutely no contest which of those is the better use of the company's series-A

---

Separately, I'm not sure where this falls on the HN etiquette guide, but the word is "scraper", because a "scrapper" is someone who collects and recycles metal: https://en.wikipedia.org/wiki/Scrapper

It's just a pet peeve of mine


Do you have links to documentation on the Apple Pay requirement?

That sounds like Apple Pay is encouraging certificate pinning, and I suspect the Apple Root Program may have opinions to the contrary, given how it puts Apple users at risk to encourage pinning.


Except when certbot fails (silently) for some unknown and different reason every three months, and you have to ssh in anyway to fix it. Out of all the software I run on my tiny hobby VPS (a few web sites, E-mail), certbot requires more babysitting by an order of magnitude. Even more than spamassassin, which gets wedged regularly. I'm not a professional sysadmin, so something is probably configured incorrectly, but certbot's error messages are so cryptic and non-actionable that I've never been able to solve it. So, I have a calendar reminder every three months to log in to the VPS and figure out what went wrong this time...


FWIW, while your "every three months ssh to the VPS and check" approach can work, I'd recommend finding a service (there are many free ones out there for a small project) that will notify you, in a way you're happy with, about certificates that are expired or soon-to-expire on your servers.


I agree certbot can be a pain in the arse, especially when combined with the fact that you also need to rely on other moving parts (like DNS updates) that can fail in weird ways too. You could try your luck with acme.sh or dehydrated though.

My previous setup had a lot of weird problems; my current one seems to be doing fine. I still think capping the certificates to 3 months is a good idea, well, unless people start taking DNSSEC seriously and adopt DANE [1].

[1] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


Lego has an option to renew a number of days prior to expiration. LE recommends 30 days. That leaves me with 4 weekly attempts to renew my certificate.


Now you just have to upgrade certbot every 1-2 years instead, it feels like.


I have a Debian server, and it has been running unattended pretty much since I installed it.

I warmly recommend it.

https://wiki.debian.org/UnattendedUpgrades


Me too, also on Debian. But there have been different installation methods over time: certbot via deb package, an installation script piped in from a URL, certbot-auto, something in /opt... I've lost track :D


I had to look at mine once in 5 years because acme v1 shut down.


Did that this morning. Was a complete pain but got there in the end.


A concern when automating and connecting with a third-party service is that you can introduce more vulnerabilities. (Or even introduce a backdoor, if you use malware / insecure software to do so.)


Back before the corporate browsers took over standards via WHATWG and, worse, unilateral moves, you could have a TLS cert for many, many years. I often set mine for 20. But because of changes browsers have made in the name of security this is no longer possible, and so the insecurity shifts to another part of the process (the complex system for automation on both the client and server sides).


If they insist on keeping the three month limit I wish they'd come up with some better ways to allow you to secure your server.

If you want to use the www auth you need to allow inbound connections from any IP (they specifically won't release the ranges they use), otherwise you have the DNS option which means giving the server access to modify the DNS records which is also unsafe should the box get compromised.


> otherwise you have the DNS option which means giving the server access to modify the DNS records which is also unsafe should the box get compromised

This is true, but the machine doing DNS modifications doesn't need to be accessible to any outside initiated connections at all. So if someone has the capacity to compromise such a computer, what would stop them from compromising your desktop computer or your laptop instead?

But either way it would indeed be nice to have further limits on what the box could do. But I think LetsEncrypt does not need to change anything to make this possible.

https://letsencrypt.org/docs/challenge-types/

The DNS verification works by creating a DNS TXT record named "_acme-challenge" with a TXT value on the domain you are verifying.

So really what you want is for your DNS provider to implement API access keys that can be restricted so that the absolutely only thing the key is allowed to do is create, change, and delete the DNS TXT record named "_acme-challenge". Perhaps some DNS providers already make this possible? But the one I am using can only limit a key to a zone, not to a specific record type and not to a specific label.

In fact, I wish CloudFlare would allow such specific fine-grained permissions. But even if they did, they'd probably make it part of the Enterprise plan, and I am still not an Enterprise customer.

Edit: While I was writing this, someone else posted a sibling comment about ACME DNS alias mode, which I had not heard of. https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo... That's very close to good enough. Though it would still be nice if DNS providers made it possible to issue API access tokens that are limited to a specific record type and a specific label.


I do the challenge verification using DNS and Route 53, and the process has permission to update the challenge record and nothing else. So what you are describing is definitely possible.


I looked into this previously and was unhappy to learn that Route53 doesn’t allow permissions based on specific records. The most granular permissions were for a full zone at the time.


> Though it would still be nice if DNS providers made it possible to issue API access tokens that are limited to specific record type and specific label.

A handy CLI utility that can be used in hook scripts that can update dozens of APIs at different DNS providers:

* https://github.com/AnalogJ/lexicon


Yes, but what I am saying is that it’d be nice for the API access tokens issued by the DNS providers to be limited to specific record type and specific label.

For example, the access tokens that you generate for giving tools like that one access to act on your behalf, when using CloudFlare as your DNS provider.

CloudFlare at the moment does not, to my knowledge, offer such fine-grained controls on the API access tokens as what I am talking about.


With the DNS option the machine doing the request doesn't have to be the machine using the certificate though.

I have a separate machine doing the DNS challenge and the cert is then distributed to the machine needing it.

Technically this is true for the regular web challenge too, but it's easier with DNS I think.


I'm doing the same for my personal/home lab stuff. I've been using https://github.com/joohoi/acme-dns for the dns server running on a small vps for all my internal certificates and I haven't had any issues with it.


This is what I do as well. I have set up acme.sh[1] on a Raspberry Pi on my home network, which isn't accessible from the outside. It is triggered every night by a systemd timer and renews (using the DNS challenge) and deploys all expiring certificates.

[1] https://github.com/acmesh-official/acme.sh


> If you want to use the www auth you need to allow inbound connections from any IP

Only for the time period when you're requesting the cert, though: it does not have to be open to the entire Internet 24/7. While this may not satisfy your personal / particular level of security concern, it is something worth keeping in mind. Using the dehydrated client as an example, the web server could be started and stopped (or the host's firewall rules altered) in the startup_hook() / exit_hook() functions, or the deploy_challenge() / clean_challenge() functions:

* https://github.com/dehydrated-io/dehydrated/blob/master/docs...
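
A sketch of such a hook script (the iptables rules are illustrative; dehydrated invokes the script with the hook name as the first argument):

    #!/usr/bin/env bash
    # Open port 80 only while dehydrated is solving HTTP-01
    startup_hook() { iptables -I INPUT -p tcp --dport 80 -j ACCEPT; }
    exit_hook()    { iptables -D INPUT -p tcp --dport 80 -j ACCEPT; }

    HANDLER="$1"; shift || true
    if declare -F "$HANDLER" >/dev/null; then "$HANDLER" "$@"; fi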

> otherwise you have the DNS option which means giving the server access to modify the DNS records which is also unsafe should the box get compromised.

Are you aware of LE/ACME's "DNS alias" mode?

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...

Let us say you want to get a cert for foo.example.com. Letting an ACME client change the value of that record could be a risk, as you state. So what you can do is (manually) create a CNAME at _acme-challenge.foo.example.com and point it elsewhere, like _acme-challenge.foo.dnsauth.example.com. You then allow the ACME client to alter (just) the TXT records of _acme-challenge.foo.dnsauth.example.com.

People have even written simple DNS servers that allow updating of records via a RESTful API, so you can serve just the (e.g.) dnsauth sub-domain from one of them, leaving your main domain untouched (besides the initial CNAME addition):

* https://github.com/joohoi/acme-dns

There's also a CLI utility that can talk to the APIs of several dozen DNS companies, so you don't have to roll your own if you want to serve the sub-domain from your current provider:

* https://github.com/AnalogJ/lexicon

And you don't have to use a sub-domain; it can be something else entirely: instead of dnsauth.example.com you can point the CNAME to example-dnsauth.com or example.org. So if your primary DNS provider doesn't have an API, you can use another one that does. The destination of the CNAME does not matter as long as you control it and can update it.
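
With acme.sh, the alias mode from the wiki link above boils down to this (the dns_cf provider hook and names are placeholders):

    # DNS-01 answered via the delegated name, not the real domain;
    # the API key only needs write access to the dnsauth zone
    acme.sh --issue -d foo.example.com \
      --dns dns_cf --challenge-alias foo.dnsauth.example.com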


> Let's encrypt decided for 3 month to force users to automate and avoid the burden of too much user support.

Yeah, if this is free and you have automated the process then, really, the validity period no longer matters. 1 year, 3 months, 1 month, it's all the same for the majority of purposes.


It’s a tradeoff between comfort and security since the fact that you control a domain now doesn’t guarantee you’ll be controlling it in 5 minutes, not to mention 3 months. This is why Let’s Encrypt gives you tools to automate the renewal process. I also recall them talking about gradually lowering the certificate lifetime so you’d have no choice but to use automatic renewal.

Relevant link: https://letsencrypt.org/2015/11/09/why-90-days.html


Certificates used to be issued with validity up to 10 years way back. There's no technical bound on the validity period AFAIK, but all major browsers will now refuse to trust certificates with validity periods > 1 year so this can be considered the practical limit.

I'm not aware of a dedicated service that offers free 1-year certs in the style of LetsEncrypt, but they'll often be available from e.g. hosting providers as part of a package. Hard to imagine a use-case where 90-day renewals aren't a better option, anyway.


The bound is basically set by the CA/Browser Forum [1], where the current Baseline Requirements [2] stipulate:

"6.3.2

Certificate operational periods and key pair usage periods Subscriber Certificates issued on or after 1 September 2020 SHOULD NOT have a Validity Period greater than 397 days and MUST NOT have a Validity Period greater than 398 days. Subscriber Certificates issued after 1 March 2018, but prior to 1 September 2020, MUST NOT have a Validity Period greater than 825 days. Subscriber Certificates issued after 1 July 2016 but prior to 1 March 2018 MUST NOT have a Validity Period greater than 39 months.

For the purpose of calculations, a day is measured as 86,400 seconds. Any amount of time greater than this, including fractional seconds and/or leap seconds, shall represent an additional day. For this reason, Subscriber Certificates SHOULD NOT be issued for the maximum permissible time by default, in order to account for such adjustments."

- CA-Browser-Forum BR 1.7.9, p67

[1] https://cabforum.org/

[2] https://cabforum.org/baseline-requirements-documents/


Sectigo provides SSL certificates with 1-year validity via their certbot-compatible ACME endpoint.


I think Buypass has (or had) 6 months.


Correct, Buypass Go has a validity of six months. Unfortunately the free version does not support wildcards yet.


Buypass (also does ACME) uses 180-day expiration on their certs. I've been using it for a while, but they do limit certs to 5 alternate names, instead of the 99 on LE.


Buypass will get you 6 months.


The author said he randomizes between the 4 free-of-charge SSL providers because of availability and reliability.

What would interest me: would it be possible to have the certificates cross-signed by all 4 of those, and to automate this?


You could do that I suppose, but at the end of the day it means you'll end up with 4 different certificate chains. In TLS, typically only one chain gets delivered. Anything else would be bloat and probably not well-supported by clients.


And not only are the chains different but the leaf is different each time. TLS 1.3 permits you to write any series of certificates you want that might help your peer to decide this leaf is trustworthy (not just "a chain"), but you can't provide more than one leaf.


Why would the leaf be different? Are CAs supposed to refuse to sign certificates that have been signed by a different CA?


In fact it's much more than that, the CA is not supposed to sign any certificate it didn't choose - and it isn't meaningful in X.509 to have more than one signature.

You will often see people describing the sequence of events something like "I send them this CSR and then the CA signs it" but that's not really what happens at all. The CA is obliged to construct a to-be-signed-Certificate (tbsCertificate) of their own choosing, incorporating only information they're actually happy to stand behind, and with a largely random serial number at the start.

The serial number is because X.509 doesn't have a nonce component, serial numbers are very near the start of the document so it would likely be impossible (using known techniques) to collide a certificate hash (even a known broken one like MD5) so long as this number is very random and chosen by the CA not the subscriber.

The constraint on information in the certificate (e.g. Subject information) is because the CA is signing the entire document. Relying Parties have no reason to assume that a certificate for "Johnny Poopy Pants" was actually not issued based on the CA believing this is really "Johnny Poopy Pants" but instead just because it also mentions the DNS name some.cheap-server.example and the subscriber does control some.cheap-server.example. So, any claims of information about the Subject that aren't being warranted by the CA are not included in the certificate. You can send Let's Encrypt a CSR saying your company name, email address, mailing address of the head office, a logo, whatever, it just gets ignored and Let's Encrypt only care about the DNS names you asked for, those are all that will be in the certificate.


I don't think X.509 has a way to express multiple signatures on the same certificate. Issuer name, etc, are limited to one per certificate. See also tialaramex's excellent sibling comment about CAs constructing a new cert to sign including a new serial number.

Cross signed roots work by offering multiple certificates for the same key: you can use a self-signed root (from your trust store) or an intermediate signed by a different issuer, or if your validating stack is really competent, you can send multiple intermediates and the validator will check if any of them chain to an acceptable root.

But, the leaf (or end-entity) cert MUST be the first certificate sent, and only one certificate can be first, so there's no optionality there.
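
In deployment terms this is why the usual server bundle is built leaf-first (a sketch with hypothetical filenames):

    # Leaf first, then intermediate(s); the root usually isn't sent at all
    cat leaf.pem intermediate.pem > fullchain.pem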

If CAs were willing to sign limited-scope intermediates (and if limited-scope intermediates were widely usable), you could get your intermediate signed by several CAs, have your leaf certificate signed by that intermediate, and include multiple chains from it. But that would take you from two certs (leaf + CA intermediate) to 1 + 2N certs (the leaf, plus, for each of N CAs, your intermediate as signed by that CA and that CA's own intermediate), and all of that adds up to a larger handshake and slower initial communication.

It might be nice in some situations, but it's also costly, and support is iffy if you stray outside browsers.


If you were to use the same private key for the 4 certificates then you could seamlessly switch between whichever leaf certificate you wanted to serve to the client. I'm not aware of the ability to send multiple leaf certificates to a client for consideration though.


More interesting to me will be when one of the ACME CAs will implement RFC 8657, ACME-specific CAA parameters.

Currently privilege separation on a server or a TLS terminator doesn't do much for ACME privileges because an exploit anywhere on the request path can use an arbitrary account to obtain new certs.

Binding to a single ACME account in DNS (accounturi=…) would significantly reduce the attack surface, as would requiring non-http validation methods.
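
For illustration, here is roughly what such a record would look like once a CA honours RFC 8657 (hypothetical dig output; the account ID is made up):

    $ dig +short CAA example.com
    0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"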



Do I understand correctly that this event also means that ssl.com is the first CA to offer free ECC certificates to the general public?


Let's Encrypt will issue you with "ECC certificates" depending on what your idea of an "ECC certificate" is.

For a generic Let's Encrypt site if you have Elliptic curve keys, the existing R3 issuing intermediate will cheerfully sign you a certificate. The certificate will be for your EC public key, but signed by the R3 intermediate using RSA.

If your reason for wanting EC is that it's less work on your server, this achieves the goal, no RSA signatures from your servers (unless you also need to serve customers that can only do RSA and need a separate setup for that).
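
With acme.sh that's just the key length flag (a sketch; the domain and webroot are placeholders):

    # Issue a certificate for a P-256 key instead of RSA
    acme.sh --issue -d example.com -w /var/www/html --keylength ec-256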

If you want, you can enroll in a trial programme for Let's Encrypt's EC issuing intermediates, where E1 will sign your EC keys (if you enroll, R3 still signs any RSA keys you ask for). This chains through ISRG Root X2 and then ISRG Root X1, because ISRG Root X2 is not trusted by most (any?) large trust stores today.

If you can't have RSA anywhere in the chain then yeah, Let's Encrypt can't do that for you in practice today, although it'd be nice if you explain why you'd want that.


Let's Encrypt can issue from an ECC chain, I've tweeted[1] the details on how to enable your account for that.

[1] https://twitter.com/Scott_Helme/status/1392101598852222976
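
Once an account is allowlisted, a recent acme.sh can ask for the shorter all-ECC chain by preferring the chain topped by ISRG Root X2 (a sketch; the domain and webroot are placeholders):

    acme.sh --issue -d example.com -w /var/www/html \
      --keylength ec-256 --preferred-chain "ISRG Root X2"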


What's the shortest chain length that those free CAs can offer (assuming Android >= 5.0 devices)?

It would be great to have a tool somewhere that matches client handshakes & supported CAs vs server config & choice of CA chains


They're all 3 certificates long (leaf/intermediate/root), apart from Let's Encrypt, whose ECC chains, due to the cross-signature, are 4 certificates long.


It's great to have alternatives, although I need nothing more than what Let's Encrypt offers.

PS: do you think there is a chance for a similar service to be available in the future, but for EXE file signing? ;)


Caddy[0] also uses ZeroSSL[1] alongside Let’s Encrypt.

[0] https://caddyserver.com/docs/automatic-https#overview

[1] https://zerossl.com/


Caddy and ZeroSSL are the same company, "Stack Holdings GmbH".

It used to be prominent on the respective pages, but is now stated in the footer.


We use Caddy for serving our free dashboards and status pages on your own domain at https://checklyhq.com

It was not super easy to set up. I think the whole config is 20 lines or so, but the docs, naming and functionality of how Caddy actually interfaces with LE were tricky to figure out. Basically I had to scrape together answers from various GitHub issues etc.

I should write a blog post…

Edit: now it runs fine btw


What did you find unclear, specifically? We encourage users to come ask for help on our forums; we'll gladly explain everything you need to know. https://caddy.community


Hey Francis, let me go back to the project.

I managed to stitch together our use case by reading:

- https://caddy.community/t/https-for-dynamic-subdomains-and-c...

- https://caddy.community/t/best-practise-for-multiple-tenant-...

and various GitHub issues.

Our use case was: "serve SSL certificates on the fly for users hitting :443 AND users hitting *.checklyhq.com, then proxy some content."

The documentation looks visually nice, but it is hard to parse because the layout of the config file (and the hierarchy of the items in it) is separate from the explanation of what each item does. I was continuously doing a Ctrl+F to find words on the page.

Example:

https://caddyserver.com/docs/caddyfile/options

Here the stanzas are clickable.

https://caddyserver.com/docs/caddyfile/directives/tls

Here they are not.

Last thing. The docs present a lot of config examples in snippets, but seeing how to use them in a full, syntactically correct file was hard. I felt I was missing the big picture.

Sorry to not be more concrete, it was some time ago.


I think if you had found your way to this article in the wiki https://caddy.community/t/serving-tens-of-thousands-of-domai... you would have had an easier time. The On-Demand TLS section in the docs now links to there (as of only a week or two ago) https://caddyserver.com/docs/automatic-https#on-demand-tls
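
The core of it really is small. A minimal sketch of on-demand TLS (the ask endpoint and upstream address here are placeholders, not your actual config):

    cat > Caddyfile <<'EOF'
    {
        on_demand_tls {
            ask http://localhost:5555/check
        }
    }

    https:// {
        tls {
            on_demand
        }
        reverse_proxy localhost:8080
    }
    EOF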

Regarding clickable bits in the docs, those are set up with some JS I wrote which tries to dynamically add links to certain things. I only set it up for Caddyfile directives and for the global options page in particular. Our docs are in markdown so any linking of things needs to be wired up after the fact on page load with some vanilla JS. I'll look into improving that for some more pages.

Thanks for the feedback!


Francis has implemented your suggestions


This probably sounds stupid, but the implied $PWD for the static file server in the Caddyfile tutorial [1] confused me for about 15 minutes because I was trying to run it in Docker.

The common patterns page [2] in the reference section shows a better example, but I assumed the reference was a detailed reference, not beginner docs.

So I would say convention / implied config is great once you know how everything works, but it’s awful when you’re trying to learn because you have no idea what’s actually happening. I think the getting started config would be much better if it showed the 5-10 things that most people would expect to see (protocol, port, www-dir, etc.) and then described the shorthand version where all of that’s implied. As is, I have to go look up what the defaults are for all those things and the time it takes is way, way more than I’ll ever save by having that config implied.
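
For example, a fully explicit version of the tutorial's file server is still tiny (a sketch; the port and path are made up):

    cat > Caddyfile <<'EOF'
    :8080 {
        root * /srv/www
        file_server
    }
    EOF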

Have you ever seen a config file where everything is commented out, but shows every option and the default value? I love those configs.

1. https://caddyserver.com/docs/caddyfile-tutorial

2. https://caddyserver.com/docs/caddyfile/patterns


Thanks for your feedback.

Docker does make easy things hard. (And yes, I know, sometimes it makes hard things easy.)

> I think the getting started config would be much better if it showed the 5-10 things that most people would expect to see (protocol, port, www-dir, etc.) and then described the shorthand version where all of that’s implied.

That may be true, but we also get a lot of compliments on our current docs, saying that Getting Started is just what people needed to get going. We also know from experience that a lot of people don't read, and I'm afraid that by showing overly-complex configs, people will copy-and-paste them and use them blindly without understanding them or trimming them down. Then we have a bunch of bloated configs out there, and oftentimes we've found that removing things from Caddy configs solves problems.

> Have you ever seen a config file where everything is commented out, but shows every option and the default value? I love those configs.

Funny, I hate those. I want to have a minimal file that I feel like I crafted just for my purposes, rather than taking some boilerplate and trying to coerce it into working for me. I also understand my tools better this way.

One of the core opinions of Caddy is to build things up to suit your needs, rather than tearing things down to make them work.


> Have you ever seen a config file where everything is commented out, but shows every option and the default value? I love those configs.

> Funny, I hate those. I want to have a minimal file that I feel like I crafted just for my purposes

There is no harm in providing the full config and allowing the user to minimize it.

At least that way I can do a `grep -v ^#` or something to hide the comments and still get a minimal file when I want one.
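
For instance, dropping both comments and blank lines from a fully-commented default config (a sketch; the filenames are made up):

    grep -vE '^[[:space:]]*(#|$)' fully-commented.conf > minimal.conf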


That's a big TIL.

Caddy is actually great! Auto SSL.


Same here! I think it was ZeroSSL that bought Caddy now that I remember.



It also automatically handles failover in case a CA is not available:

https://caddyserver.com/docs/automatic-https#issuer-fallback


[flagged]


It's relevant to the discussion. No other server with ACME supports multiple issuers by default. It speaks directly to the point being made in the linked article (and other previous articles written by Scott).


For what it's worth, Traefik does allow you to change the server you talk to (it's just the staging server for LE they use in the example): https://doc.traefik.io/traefik/v2.0/https/acme/#caserverand

A quick Google search suggests that at least one person has tried using ZeroSSL with Traefik: https://spad.uk/get-free-zerossl-certs-using-traefik/


Yes, you can configure Traefik with other issuers, but it does not support having multiple issuers configured at the same time and failing over between them when something goes wrong. That's the point being made in the article. Caddy does this.


Do any free TLS alternatives to LE handle AWS/GCP/Azure domains? It's been a frustrating usability blocker on tools like Caddy to have a few of the most popular domains singled out, so I've been curious.


I like the use of the shuf command to randomize the list, very nice!
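
The idea, roughly, assuming a recent acme.sh with its CA shortnames (the domain and webroot are placeholders):

    # Pick one of the free ACME CAs at random for this issuance
    server=$(printf 'letsencrypt\nzerossl\nbuypass\nsslcom\n' | shuf -n 1)
    acme.sh --issue --server "$server" -d example.com -w /var/www/html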


Thanks! Sometimes the simple tricks are the best ones :)


Is there any free CA that supports signing on an IP address? e.g. https://1.1.1.1/


This was discussed in depth at the Mozilla forum for CAs, and the answer is "even if someone wished it, nope", mainly because of dynamic IP addresses. And even if we excluded those from this type of issuance, the next hurdle is modifying your contact details on the IP block, which is hard because there isn't a single standard for modifying them (the data format itself is more or less standardised at this point).

Google actually wants to do this for their Cloud Platform, and owns the whole chain (i.e. has its own CA, https://pki.goog), but it's hard under the current state of the CA/B Forum Baseline Requirements.


I realize it’s not exactly what you asked, but you could use a service like https://nip.io to map IP addresses to hostnames.


Is there any free CA supporting intermediate/sub certificates?

This is what I would really be looking for in an alternative to Let's Encrypt.


In practice today a CA would want to actually have physical control over the infrastructure signing for your intermediate, meaning the actual hardware would live in their data centre and they would administrate it, but presumably you'd be paying for it (rackspace in a high availability secure data centre) so that's not going to be free and doesn't offer you any capability you wouldn't have via something like ACME.

To give you physical control over the intermediate basically means they're staking their entire reputation on you doing what you promised and never screwing up.

You could imagine this working with a constrained intermediate, except I can more or less guarantee that the day after you commit to such a thing you discover a client you care about can't handle the constraint, and so you ask for it to be relaxed, whereupon you are back in the exact same situation.

Mozilla requires CAs to tell them about every unconstrained intermediate, so there's actually a complete list you can go look at. It is not large.


What is an unconstrained intermediate, and what about a constrained intermediate (one which could only issue certificates for subdomains)?

My main use-case in mind is a service which provides HTTPS to its clients, but where clients serve content from their own endpoints. Think home devices. The proper way to do this is to generate the private key on the device, send the public key somewhere, and receive a proper certificate for device-0123.example.com. This could be automated with Let's Encrypt and DNS verification, but there's a limit of 50 certificates per week, which would require registering plenty of domains and rotating them. Using an intermediate certificate would solve that issue.


X.509 has a feature called constraints, where you say e.g. "This is a CA but it must only issue for names under .foo.example" or "This is a CA but it can only issue for S/MIME" and to make these effective you set a bitflag that means "If you are parsing this certificate and you don't understand this part, the certificate is invalid for you".

The consequence of that second element is that clients which don't understand the constraint (and thus wouldn't know which are or are not trustworthy leaves under the constrained intermediate) mustn't trust the intermediate at all. This means that if you need those clients, you cannot use the constraints they don't understand, because they'll reject your entire intermediate.
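
To make that concrete, here's roughly what signing a name-constrained intermediate looks like with OpenSSL (a toy sketch, not CA-grade practice; all filenames and the foo.example domain are placeholders):

    # Extensions for an intermediate limited to names under foo.example
    cat > constrained-ca.ext <<'EOF'
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:foo.example
    EOF
    openssl x509 -req -in intermediate.csr -CA root.pem -CAkey root.key \
      -CAcreateserial -days 365 -extfile constrained-ca.ext -out intermediate.pem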

Mozilla defines constrained as it pertains to the problems they care about, so for example they do not consider a CA constrained if it lacks a constraint they require.

For your use-case there are two practical options:

1. Most suitable for commercial projects e.g. you're a startup selling a new IoT device. Talk to a commercial CA and work out a deal where you get what you need. Outfits like Sectigo strike deals like this all the time. They understand that you don't want to pay high prices per certificate for this problem, but on the other hand you may be able to guarantee minimum volume and that means it makes commercial sense compared to piecemeal orders.

2. For a hobby project, talk to Let's Encrypt about getting an exemption to the rate limit for your specific application, or, if in fact the devices are owned by third parties, ask whether this should be on the Public Suffix List and thus exempt from rate limits anyway. (The PSL also means these devices can't share HTTP cookies with each other, which might well be exactly what you wanted anyway.)


Thanks a lot. I had never heard about the Public Suffix List.

From their guidelines: "We do not accept entries whose sole purpose is to circumvent Let's Encrypt rate limits. They have a form you can use."

So probably the best way to go is to talk to Let's Encrypt about lifting limits.


Is there any non-free CA supporting this use-case for a reasonable price?


ssl.com "First, register for a free account. Next, you need to get your API credentials"

yeah, pass


Great to have more choice in the free certificate market


Nice, thanks to the creator for making this!


[flagged]


Paid certificates also expire. Every now and then I'll hit up my old company's GitLab or Jenkins and be amazed they are still running! It's been over 3 months, and either my automated cert process is working or someone at my old job put on their big-person pants and logged into a server and fixed something for once.


How does a paid certificate get automatically replaced on your server if you don't read emails or otherwise deal with it?


There is no reason to differentiate between free certificates and paid certificates. The process works in exactly the same way for either.


obviously, that's why I asked.

EDIT: I see now that the comment I responded to has been totally edited, so the responses don't make sense anymore.


But credit cards expire too, so no matter what there's no "set it and forget it" process.


Wouldn't the same thing happen if you ignored the bills?


That’s ironic.

You’re complaining that LE couldn’t work after “a few years”, yet your solution is to pay for something that definitely requires your intervention yearly. Mind you, the latter still requires you to provide a valid email address that you have to check.

The internet isn’t set and forget.


Comments like these are always amusing after the latest HN "why you don't need Kubernetes" post - setting up cert-manager is a 5-minute affair.
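
For reference, the whole thing is roughly one ClusterIssuer against cert-manager's documented ACME support (a sketch; the email and ingress class are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com
        privateKeySecretRef:
          name: letsencrypt-account-key
        solvers:
          - http01:
              ingress:
                class: nginx
    EOF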



