Caddy – The HTTP/2 Web Server with Automatic HTTPS (caddyserver.com)
84 points by tambourine_man on April 25, 2018 | 61 comments



Has there been a shift in recent times regarding developer teams paying for software? Yes, ease of use is a great benefit, and it takes a while to become accustomed to nginx configuration. But $25/instance/month for a web server when nginx can do almost everything (and arguably much better than a server that only launched two years ago)? It doesn't sit well with me.


It's open source, right? Just compile it yourself?

To your larger point, I think paying for software is and has been the norm in many places. So nothing new.


I hadn't noticed that you can compile the source code yourself to avoid paying for the license. That seems like a nice alternative.


Only if you're not using it for commercial stuff. If you are, you have to pay the $25/month.


The code is released under the Apache License. You're free to compile the source code and use it commercially. You're not allowed to do that with the official binaries.


My understanding of their license page is different.

"If your company uses official Caddy binaries internally, in production, or distributes Caddy, a commercial license is required."

"If I build Caddy from source, which license applies?" "The source code is Apache 2.0 licensed."

https://caddyserver.com/products/licenses


Would be interested to see how reproducible builds would interact in such a situation.

I.e. you get a byte-identical binary file if you build it yourself, but you don't have to pay the $25/mo. Even though you end up at exactly the same place.

Something like http://ansuz.sooke.bc.ca/entry/23 "What colour are your bits?" I suppose.


> It's open source, right? Just compile it yourself?

The point was that it's easy to install and get running. Your suggestion defeats the purpose of Caddy in the first place.

It would be much easier to:

1. `sudo apt-get install nginx-extras python-certbot-nginx`

2. Add a simple config listening on port 80 for your website/reverse proxy (a sketch follows this list).

3. `sudo certbot --nginx`

4. Not worry that you will be sued by nginx for using it for commercial purposes.
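For reference, step 2 can be as small as the following (hypothetical domain and paths; after step 3, certbot edits the config to add the HTTPS server block):

    server {
        listen 80;
        server_name example.com www.example.com;
        root /var/www/example.com;

        # or proxy to an application instead of serving files:
        # location / { proxy_pass http://127.0.0.1:8080; }
    }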


Installing Caddy is easy. https://github.com/mholt/caddy#build
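Building from source is roughly one command (a sketch; the linked README is authoritative, this assumes a pre-modules Go toolchain, and the binary ends up in $GOPATH/bin):

    # fetch and build the Apache-2.0-licensed source into $GOPATH/bin/caddy
    go get github.com/mholt/caddy/caddy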

Init scripts are included as well, for FreeBSD, macOS, and several different init systems used by various Linux distros. https://github.com/mholt/caddy/tree/master/dist/init

FreeBSD even has it in the ports collection, so you can install it using the package manager that is included with FreeBSD:

    doas pkg install caddy
Once installed on FreeBSD with the above command, create a configuration file in the Caddyfile format in /usr/local/www/Caddyfile. Example Caddyfile:

    www.mysite.com {
        root /var/www/com.mysite.www
    }

    mysite.com {
        redir https://www.mysite.com{uri}
    }

    sub.mysite.com {
        root /var/www/com.mysite.sub
        gzip
        log /var/log/com.mysite.sub/access.log
    }
Provide an SSL certificate issuer email in your rc.conf. By providing an email address you automatically agree to letsencrypt.org's general terms and conditions:

    doas sysrc caddy_cert_email="your.email@example.org"
Enable caddy in your rc.conf:

    doas sysrc caddy_enable="YES"
Start the server:

    doas service caddy start
> Do not worry that you will be sued by nginx for using it for commercial purposes.

The source code of Caddy is distributed under the terms of the Apache 2.0 license.

https://github.com/mholt/caddy/blob/master/LICENSE.txt


(Maybe I should switch to FreeBSD?)

Wait, I have to manually install Caddy and then fight with systemd to make it start at boot? What is this? The 90s?

Since they don't have a standard package repo, you will also always be behind. You will have to keep track of their releases manually to make sure you are not missing an important update.

It is unclear why these folks are resisting standard packages. Is that incompatible with their business model?

It made me move away from Caddy and instead go with Nginx and Certbot, which is simpler and less maintenance work in the long run.


> Maybe I should switch to FreeBSD?

Depends on how tied you are to Linux, but yes it might be worth it. I for one enjoy FreeBSD a lot. I run it on my servers and on one of my laptops. My desktop runs Linux.

If you have the time and motivation I’d say give FreeBSD a shot. Rent a VPS from Vultr for example or install it in a VM on your computer. Play around with it and see how you like it.

As for packaging of Caddy on Linux, there seem to be a few issues if you look at the following thread: https://caddy.community/t/packaging-caddy/61

- The developers dislike packaging because it makes “plugins” unavailable. Personally I think they are concerned about this for no reason. There ought to be a version of Caddy in the package repos without additional “plugins” that covers the 99% use case, which is to serve static files from a directory with automatic Let’s Encrypt certificate retrieval, just like the package on FreeBSD does. Those wanting customization can build from source; that is very reasonable and in line with the expectations one should have of an OS-provided package manager, IMO.

- They don’t want to deal with packaging until they hit v1.0.

- They are talking about wanting to package an auto-updater. This is absolutely the wrong approach IMO, but it relates back to their different opinion about the role of their “plugins” in relation to the packages that would be provided. I think Debian, for example, has the same view as me on this issue, and other distros probably do as well. Packages provided by the OS package manager should never rely on retrieving additional files not included in the package, except via dependencies on other packages, or where they are forced to do so for licensing reasons.

- Speaking of Debian, they mentioned that packaging software written in Go can be problematic/difficult in relation to the Debian packaging guidelines because it seems that to follow those guidelines strictly, each dependency must be packaged separately rather than being vendored. Whether this is correct or not I don’t know.

- Like I said further up ITT, the source is covered by Apache 2.0 and they also confirm in that thread that compiling binaries and distributing binaries you’ve compiled from source is allowed in accordance with those terms. That’s really no surprise — if that was not allowed then they couldn’t have said that the source was covered by Apache 2.0 — but it’s reassuring to see that they intentionally want to allow redistribution of binaries built from source.

It seems to me that if packages for Linux are going to become available any time soon, someone needs to step up and take care of it. For example by providing a PPA for Ubuntu users. There was one PPA mentioned in the thread but I didn’t look into whether it is still maintained or not since I do not have any current plans of running Caddy on Ubuntu myself.


> The developers dislike packaging because it makes “plugins” unavailable

Maybe the plugin support should be fixed then.


Unfortunately they have no good way of doing that yet. Or, at least, they didn’t back when most of that discussion took place.

From a comment in the thread:

> Go is currently really bad at “runtime” plugins. There are only three strategies I know of that may work:

> 1. Launch separate processes for each plugin and use ipc to communicate. Potentially slow, and a lot could go wrong.

> 2. Some kind of cgo based plugin system that runs dynamically linked go libs and uses a cgo shim on both sides for compatibility. Introduces build complexity and makes cross-compiling harder.

> 3. Some kind of dynamic language interpreter: there are native go javascript and lua interpreters, but using those for plugins seems pretty crazy.

> Until the go authors implement better support for runtime plugins, I think we are stuck with build-time.


This isn't something easily fixed. Go's big selling point (pros and cons included) is that it generates statically compiled binaries. Go plugins are a complicated problem because of that.


What shift do you mean? Paying for (dev) software is very common.

> arguably much better

That's very arguable, because Caddy is great at what it does. I think you should try it out before assuming it can't come close to nginx. Older software also tends to be stuck with legacy cruft, so project age isn't such a simple signal.


And it still doesn't obey the DNS specs.

Specifically, the DNS RFCs define that, given no search domain, a relative hostname (e.g. google.com) is equivalent to its absolute hostname form (e.g. google.com.).

This is used in SSL validation as well, a certificate valid for one is valid for the other, and in reverse.

Every webserver SHOULD respond to both names identically, or redirect from one to the other.

Let's try that. https://news.ycombinator.com./ yup, works. https://www.nginx.com./ as well. In fact, nginx, Apache2, IIS, the Google Cloud load balancers, AWS's load balancers, every major site supports this.
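An easy way to check any server yourself (plain curl; note that clients vary in how they treat the trailing dot during certificate matching):

    curl -sI https://news.ycombinator.com./ | head -n 1
    curl -sI https://www.nginx.com./ | head -n 1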

The only http servers breaking the RFC for absolutely no reason are Caddy and Traefik.

And the issue was closed as WONTFIX months ago.

Do you really want to use an http server that doesn't even consider following the RFCs? And pay for that?


You make it sound like the Caddy folks completely ignored the issue without providing a reason ("absolutely no reason", "doesn't even consider"). After looking into it, I find your post misleading and unfair, to say the least.

Link to the issue for other users who, like me, were concerned by your post:

https://github.com/mholt/caddy/issues/1632

According to the author, the issue is that (a) there are two RFCs that contradict each other on this distinction, and (b) most browsers do not treat them as such for the purpose of their same-origin policies. So he thinks it's best not to enable this alternate URL by default since it's trivial to add it as another route if you want.

I don't know if he's right or wrong, nevertheless, this clearly isn't a dude that doesn't give a shit.


I don't see two conflicting RFCs. Can you point them out? I saw a link to a Mozilla thread discussing how they should do normalization, but nothing about not supporting the fully qualified format. Cross-origin rules don't apply when looking up a virtual host.

Also, the security concerns the Caddy developer raises aren't clarified and probably don't exist.

Without the dot, the name has two possible meanings; with the dot it has one _to the resolving application_. To the server resolving a virtual host there is only one meaning for both.


> I don't see two conflicting rfcs. Can you point them out?

The quote is:

"RFC 1034 says the two domains are the same, but RFC 3986 says they are not. I can't tell from the http spec whether the two Host values are equivalent or not."


From rfc 3986:

> The rightmost domain label of a fully qualified domain name in DNS may be followed by a single "." and should be if it is necessary to distinguish between the complete domain name and some local domain.

When there is no ambiguity, they mean the same thing.

Anyway, as a sibling comment said, it's not a good rfc to use for this purpose.


RFC 3986 is absolutely useless for this task, because no one uses it for this, and it considers things unequal that every browser considers equal.

Besides, it was deprecated by the WHATWG URL spec anyway, which does use the same equality rules as RFC 1034 (after normalization).


What's interesting is that it's the first web server not focused on speed or performance but on the user experience. I love nginx, but I had huge headaches configuring it when migrating php apps with huge htaccess rules to nginx. The terms of the license do not seem super clear though.


> I had huge headaches configuring it when migrating php apps with huge htaccess rules to nginx

I also had this issue whenever I did migrations between different web applications that used different URL structures. To preserve my sanity, I now do the redirects at the application level.

I predict nginx will get a lot of competition from servers and applications written in Rust. Until then, it remains viable as a reverse proxy and serves static files really well.


nginx has a huge head start though: it's stable, it's fast, it has lots of modules, etc.

I'm open to competition in the field, but I'm not sure the competition will emerge as fast as people think (just as with other well-established software building blocks in general).


And it already supports HTTP/2 and HTTPS.


I'm not sure it's the first one to focus on that. I remember Cherokee being really user friendly. It can't do nearly as much as Nginx, though, and there hasn't been a new release in four years so it might be dead.

On the Windows side of the world, I've found IIS to be decently user friendly to set up and administer.


I've been putting off deploying one of my side projects because I'm afraid that I'll mess up an existing site if I touch the nginx config again.

Caddy looks promising and simpler. Might have to take a look this weekend.


How is your configuration managed that this is a concern? While I generally have a separate conf for each site I run and include that directory from the main nginx file, you should at least have them in different server blocks, even if in a monolithic config. I don't quite understand how you think you're going to break any other site.

Even if you do manage to screw up the config, just do 'nginx -s reload': this verifies the syntax of the config and then attempts to apply it. If it applies successfully (e.g. DNS is resolvable for listed upstreams, etc.), nginx launches new worker processes and then tells the old workers running the old config to stop accepting new requests (so all requests go to the new workers) and to shut down after finishing any in-flight requests. If it fails to apply, the old workers stay up and keep running with the old config.
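A minimal sketch of that flow (testing the config explicitly first is optional, but it makes failures obvious):

    # test the configuration without touching the running workers
    sudo nginx -t
    # signal the master process to reload; old workers keep serving
    # with the old config if the new one fails to apply
    sudo nginx -s reload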


That said, I've found the performance to be pretty good. I'm getting < 0.01s response times on a Node.js API server that caches data in RAM. I'd expect to see worse if Caddy were a bottleneck.


As someone who is very very new to hosting their own services, this was an absolute piece of cake to set up. My first time dealing with reverse proxies and it took less than an hour to get going.


Good to know! As someone who's been called a senior SRE (I hate titles) and has been building platforms for years, it's great to see high-quality software enabling newcomers like yourself to the field. Welcome! :-)


For someone that hates titles, you both managed to call yourself an 'SRE', and drop the fact that you've been 'building platforms for years'.

Just sayin'.


I think it's always valid to point out in threads like this that everyone on the internet appears to be a domain expert. Everyone is also an entrepreneur, and anyone who is unemployed is still a consultant. It can make it more difficult to evaluate the merits of a piece of software like Caddy because you have to wade through all the "this is awesome!"s which are somehow considered valid contributions to discussion.


$25 a month is quite pricey for the edge case of someone starting up sites that will be commercial but are of unknown profitability. I guess I will build it myself and donate $25 for the year. Building is probably the smart thing to do anyway for a server you're choosing for its security capabilities.


Agree. The $25/month stops me using it for side projects that generate small but non-zero revenue. I love the concept, but I'm not sure when I'll get time to play with it on something genuinely personal.


Can't you build it from source?


It does get quite complex when you need plugins.

I use the abiosoft Docker image and that has simplified it again.


I'm sure he can build it from source, but the "personal" license forbids using it for commercial purposes, so the point is moot.


Those licences are only for the binaries. The source is Apache licensed and can be used for commercial purposes for free.


The whole source is Apache-licensed, so there are no restrictions if you build & integrate it yourself.

The restrictive personal license applies to the official binaries.


They make it quite hard to find out, but here it is:

> Caddy obtains certificates for you automatically using Let's Encrypt.

Not sure why that is not stated front and centre. It's a good idea.


That’s literally the first selling point in 128px font. [1]

It might not mention the implementation details but the concept is what matters.

[1] https://i.imgur.com/hdEaKpG.jpg


No, it's not. It doesn't explain how it works, which makes it come across as an empty marketing promise.


Should've at least said "easily".

As it is, it just looks like a statement that it's capable of doing that, which doesn't grab much attention.


A great alternative that we use in production for thousands of domains is https://github.com/GUI/lua-resty-auto-ssl


Caddy can handle tens of thousands of domains. I know a couple of instances which do.



When Caddy was first released I tried, and failed, to get Caddy to serve an HTTPS site locally. Is that possible now? The docs[1] hint that it could be if I add an entry to my hosts file pointing at the IP address of, say, a Docker container, as that wouldn't technically be localhost or a bare IP address. It doesn't explicitly say it's possible though. Adding something to the tutorials would be immensely helpful if it does work.

[1] https://caddyserver.com/docs/automatic-https


What do you mean by locally? If you want HTTPS automatically then the site must be publicly available so that Let's Encrypt can verify the domain and grant the certificate. If that's not possible then you'll have to use the DNS challenge and set up a provider plugin.

It doesn't matter where the backend points and you can use it to serve a Docker container if you want, but that's different from the host/frontend address you use.


Locally in the sense of a local development server. The issue is that there wouldn't be a real DNS record pointing at the machine (well, unless you added one to point at your external IP address, but that's a pain for teams). I guess if Let's Encrypt needs to verify the domain it won't be possible...


Let's Encrypt can issue certs without needing a public route to your machine if you use the DNS challenge. Here's what I do:

1. Add a public A record (or hosts file entry) pointing local.mydomain.tld to 127.0.0.1.

2. Host my DNS with Cloudflare (other providers have plugins too), and build Caddy with the plugin that does the DNS challenge for certs (see the sketch after this list).

3. Caddy can then get certs for local.mydomain.tld and serve them locally.
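Roughly what the Caddyfile looks like for that setup (a sketch; it assumes the cloudflare DNS provider plugin is compiled in and that credentials are supplied via environment variables such as CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY):

    local.mydomain.tld {
        root /var/www/local
        tls {
            dns cloudflare
        }
    }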


Then you need to acquire the certificate yourself. Or just use a self-signed certificate since it's your own machine....


You can't use automatic HTTPS locally, but you can use a self-signed certificate by adding `tls self_signed`.
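For example, a minimal Caddyfile with a hypothetical port and site root:

    localhost:2015 {
        root /var/www/mysite
        tls self_signed
    }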


That's interesting! I wonder how it scales relative to, say, nginx.


Not comprehensive by any means, but some previous discussion: https://news.ycombinator.com/item?id=13357026


It scales fine… if you don’t already know the answer to that question, none of the web servers you could choose is the bottleneck for your site. ;-)


That's a bizarre statement. I could be incredibly familiar with nginx, Apache, Varnish, etc., and have a website that scales to a huge number of users, and still have no idea how well Caddy scales compared to nginx due to having no prior knowledge of Caddy's performance. Indeed, questions about scaling and performance are going to be some of the very first questions asked by anyone running such a site; that's going to be one of the most important characteristics about it. (Hopefully after security....)


I'm glad you're aware of the performance characteristics of GP's site!

But seriously, how do you know what their site is? Perhaps they're serving a lot of static content out of memory over a fat pipe, in which case the web server would in fact be a bottleneck.


Apache was an unstable bottleneck before I set up nginx as a reverse proxy.


I am immediately suspicious of this because HTTP/2 only operates over HTTPS. So for them to market this webserver as being special because it defaults to HTTP/2 over HTTPS is the sort of thing a snake oil vendor would do.

Also, I'm completely not fond of the license. "Caddy is amazing because it has 3-line config files!" So what? That's only appealing to people who are afraid of editing config files. Here's a harsh reality for the developers (who probably won't see this, ah well): "config files" are not worth $25/mo or whatever the full-scale commercial costs of this are. Do the developers think that their target audience is incapable of configuring traditional web servers?

Just because you pour your blood sweat and tears into a thing doesn't mean that thing is worth any money.


The idea was brilliant before the web changed and made it a bit obsolete.



