It is a very bad choice for critical sites that can't go down.
AWS's application load balancer with SSL is much more reliable.
> user-hostile behavior
Which behaviour is user-hostile? Perhaps we could address it?
> outright dangerous behavior in the way it fetches SSL certificates
For reference, Caddy uses https://github.com/xenolf/lego/ for its ACME interactions with LetsEncrypt. Could you elaborate on what part of Caddy's behaviour is _dangerous_?
> very bad choice for critical sites
What makes it a bad choice, exactly?
Caddy used to put non-removable advertisements in the response headers. (That one got such a backlash that they backed away from it...)
Caddy refuses to cooperate with OS packaging teams. It reeks of self-importance. It's questionable whether Caddy is really FOSS with its odd licensing arrangement.
I've seen a number of backward-incompatible updates that break config files across point upgrades.
Caddy EXITS with an error on boot if any of its HTTPS-enabled sites fails to get a certificate.
Which puts your webserver in a fragile state: the server may serve happily for months (with a hidden cert error) until you restart the service. Then all your sites are down.
Migrating a live website from one server to another is (or was, until recently) quite unsupported and hard to do with zero downtime. See, Caddy needs DNS pointed at it before it can get the cert -- but it can't start serving pages until it has the cert... Just not an acceptable story in a high-availability environment.
I understand that now caddy can share its certs with a pool of other workers which may help migration processes moving forward.
I may be bitter about this as it caused a lot of damage and downtime for my business. Not inclined to ever touch Caddy again if I can help it.
That header thing was indeed a bit of a fiasco; a misguided attempt to honour the few that stepped up to support Caddy monetarily. Once the depth of the issue was made clear to the developers, it was indeed walked back.
Regarding OS packaging teams - it's not the devs' responsibility to become approved package maintainers for individual distros; it's generally not done, either. The distro maintainers themselves decide which packages to make available, and how to package them.

Caddy doesn't offer repos for the individual popular package managers because of the nature of Caddy's third-party plugin architecture - none of the package managers allow arbitrary downloads from a build server (rightly so - the package-maintaining process is intended to provide much higher assurance of security), and they don't allow for the package to be built on request, either. Not only that, but those plugins may or may not be trusted by the users themselves; the usefulness of anyone being able to extend Caddy and publish their own plugin at any time comes with that downside.
The licensing arrangement was born out of a simple need - Caddy devs gotta eat. The code itself is Apache 2.0 - the Caddy project is as FOSS as it gets. The commercial part is the build server, which isn't open source - if you use it to build your binaries, those binaries are considered either commercial or personal in nature. I can tell you that the devs would like nothing more than to have a different method that would satisfy their monetary requirements so they could make the build server binaries free, too.
The idea behind exiting on start with an error is to ensure that when the user starts the server, they know straight away that there's a problem and Caddy can't do what they're asking it to (which is manage their HTTPS certificates). There are ways to get Caddy up, even without a valid HTTPS certificate, and get your site online regardless - they're just not _automatic_.
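For example, with the v1-era Caddyfile syntax you can opt a site out of automatic HTTPS, or serve it behind a throwaway self-signed certificate, until real certificates can be obtained. A rough sketch (hostnames and paths are placeholders):

```
# Serve over plain HTTP until DNS is pointed at the box
http://example.com {
    root /var/www/example
}

# Or serve HTTPS with a self-signed certificate in the meantime
example.net {
    root /var/www/example-net
    tls self_signed
}
```

Either way the process starts cleanly instead of exiting, and you swap back to automatic HTTPS once the ACME challenge can succeed.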
The fragile-state concept is one we come across frequently. The truth is that when people say they're restarting the server, they usually mean "shut down, then start" rather than "reload". Caddy has graceful reload capabilities; you can swap out the Caddyfile, and even the binary itself, without interrupting the server. (This isn't true on Windows, though, where signaling the Caddy process isn't possible in the same way as on *nix-based systems.)
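On *nix, the v1-era graceful reload is driven by signals to the running process - a sketch, assuming a single Caddy process on the box:

```
# Edit the Caddyfile, then ask the running Caddy to reload it gracefully;
# in-flight requests finish on the old config, so there's no downtime.
kill -USR1 "$(pidof caddy)"
```

That reload is also the moment any certificate problems surface, rather than months later on a cold restart.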
I myself have posted working solutions to full live server migrations (for the entire set of websites) between two fully working and secured (HTTPS) Caddy instances, accounting for DNS propagation. It's not unsupported or difficult, just not _automatic_; it requires some specific configuration and a careful hand (like most live site migrations). The somewhat-recent filesystem clustering Caddy does isn't even related to migration - it actually lets distributed fleets solve ACME challenges for one another. You've always been able to share the TLS assets between Caddy instances and have them be used.
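Sharing the TLS assets can be as simple as copying Caddy's storage directory to the new host before starting it - a sketch, assuming the v1-era default storage location (hostnames and paths are placeholders):

```
# Copy certificates, keys, and the ACME account to the new server.
# Caddy v1 stores these under $HOME/.caddy by default ($CADDYPATH overrides).
rsync -a "$HOME/.caddy/" newhost:.caddy/

# Start Caddy on the new host: it finds the existing certificates on disk
# and serves HTTPS immediately, without needing DNS pointed at it first.
```

That breaks the chicken-and-egg problem described above: the new server has valid certs before the A record ever moves.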
I wish I (or the developers) had been given a chance to offer some guidance - I believe we would have been able to help avoid some of the downtime and losses suffered by your business.
Obviously hard to complain too much about a free product - I'm sharing my personal experience for others.
So the thing I like about AWS is that they can give you a cert before pointing the DNS A record at your site. Really fool-proof and excellent. Much better than the Let's Encrypt flow by design.
In fact on some of my sites I now run Caddy on AWS behind a load balancer with the AWS load balancer providing HTTPS. Works much better and I can sleep at night with less fear.
I believe AWS can do this because they have proof that you own the domain (effectively DNS validation) before handing out certs. Caddy can do similar with DNS validation - fetching your cert without needing to be publicly accessible. It needs you to hook into the API of one of the supported DNS providers, though, because validation is still done on a per-certificate basis (but it has been able to do wildcards for a while). I understand that AWS's model is more "validate once, sign certificates many times", which is quite convenient - and it all hooks into their systems fairly automatically.
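For reference, the v1-era DNS challenge is configured per site in the Caddyfile, with provider credentials supplied as environment variables (via lego's provider integrations). A sketch using Cloudflare as the example provider - hostname and credentials are placeholders:

```
# Before starting Caddy, export the provider credentials, e.g.:
#   CLOUDFLARE_EMAIL=you@example.com
#   CLOUDFLARE_API_KEY=...
example.com {
    root /var/www/example
    tls {
        dns cloudflare
    }
}
```

With this, Caddy publishes the ACME challenge as a DNS TXT record instead of answering over HTTP, so the box never needs to be reachable from the internet to get its cert.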