
Caddy – a modern web server (vs. Nginx) - n1try
https://medium.com/@n1try/caddy-a-modern-web-server-vs-nginx-e9e4abc443e
======
kmeade
Shortcomings of the article aside, I'd like to say that I use Caddy for a set
of small websites and I couldn't really be happier with it.

I'm a longtime Windows developer who doesn't have much patience with nerdy
complexities. I started in 1996 with a buggy, gawd-awful Netscape webserver,
moved to the late-lamented O'Reilly WebSite (even had a T-Shirt) and
reluctantly settled on MS IIS, with occasional Apache encounters. Caddy has
been an absolute breath of fresh air.

I currently run 4 sites from a system at my home, using Namecheap dynamic DNS.
Caddy serves the basic web pages and static content and also reverse proxies
to an internal Python server for dynamic content. Sounds a little complicated,
but believe me, configuration is dead-simple thanks to Caddy. Plus I get full
HTTPS from Let's Encrypt for the cost of supplying my email address and
agreeing to a EULA - no configuration needed at all.

I've never used a webserver that was easier to configure or had such low
resource requirements.
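
For the curious, the whole setup fits in one short Caddyfile. This is only a sketch; the hostname, paths, and email address below are placeholders, not my real config:

    example.com {
        root /var/www/example
        tls admin@example.com
        proxy /app 127.0.0.1:8080 {
            proxy_header X-Forwarded-Proto {scheme}
        }
    }

Caddy serves everything under the root statically and forwards /app to the internal server; the certificate comes from Let's Encrypt automatically.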

~~~
VT_Drew
>I'm a longtime Windows developer who doesn't have much patience with nerdy
complexities.

That is funny, because I have set up Apache, Nginx, and IIS web servers before, and by far the one I found to have the most nerdy complexities is IIS.

~~~
tracker1
I think it's often easier because most of the nerdy complexities are behind a
GUI... though the web.config offers a lot more in more recent versions. And
while adding in Application Request Routing (reverse proxy) and the like can
get complicated quickly, it's not nearly as bad as either nginx or Apache tend
to be. I do like what nginx can do combined with Lua, though; it's pretty
damned cool.

All in all, I do like Caddy's defaults and out-of-the-box support for newer
tech. I haven't really tried it much, mostly out of complacency with nginx/IIS
on Linux/Windows. And I will definitely never run Apache on a new box again.

~~~
kmeade
I agree with both posts above me. While I normally prefer a GUI configuration
facility, I wouldn't make the argument that IIS is particularly simpler than
Apache.

Getting back to Caddy, configuration is so simple, a GUI would be overkill.

Here's an (entire) example of how to configure a reverse proxy in Caddy...

    proxy / 127.0.0.1:8080 {
      except /favicon.ico /robots.txt /assets /plainpages /staticstuff /test1
      proxy_header X-Forwarded-Proto {scheme}
    }

The "except" line lists the files and directories that Caddy serves
statically.

All other content comes from a local server on port 8080.

The "proxy_header" line lets the second server know whether the content was
requested over HTTPS.

Isn't that nice? I think so.

~~~
mwpmaybe
I think you want header_upstream instead of proxy_header, at least in newer
versions of Caddy, and the transparent[0] preset sets Host, X-Forwarded-For,
and X-Forwarded-Proto for you!

    proxy / 127.0.0.1:8080 {
      except /favicon.ico /robots.txt /assets /plainpages /staticstuff /test1
      transparent
    }

0\. [https://caddyserver.com/docs/proxy](https://caddyserver.com/docs/proxy)

~~~
kmeade
Thanks! I didn't notice _transparent_. I added the _proxy_header_ based on
documented requirements of the 2nd server.

------
VT_Drew
>The configuration is not that intuitive and you really need to get into the
syntax and concepts to get an understanding of knobs to turn in order to
achieve a certain goal.

Personally, I found Nginx configuration much simpler than Apache's. I think
Nginx is pretty intuitive; it's certainly much simpler than IIS, which is a
nightmare.
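
For comparison, a minimal nginx server block for a static site is about as simple as it gets (a generic sketch, not any particular production config):

    server {
        listen 80;
        server_name example.com;
        root /var/www/example;

        location / {
            try_files $uri $uri/ =404;
        }
    }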

------
notacoward
> every middleware you want to use needs to be included into the binary and if
> it’s not, you need to re-compile the program

Having to recompile to add functionality that alternatives provide as
dynamically loaded plugins is insane. This is 2017. Linux has had loadable
_kernel_ modules since 1995, and I worked on them for other versions of UNIX
as far back as 1991. I understand that it's hard for a Go program to support
plugins written in other languages using a more conventional runtime. No
problem there. However, there's just no excuse for relying on recompilation
instead of dynamically loading modules also written in Go and thus using the
same runtime. I really want to like Go, but as long as the Go community clings
to long-discredited ideas regarding things like packaging and distribution and
symbol versioning I just don't feel like I can depend on it to build
infrastructure that will remain robust over time.

~~~
mholt
One of the advantages of Caddy is its purely static binaries. You don't even
need libc to run Caddy, and that goes for any platform. The dynamic loading of
plugins is at odds with this advantage.

> However, there's just no excuse for relying on recompilation instead of
> dynamically loading modules also written in Go and thus using the same
> runtime.

Actually there is.

Go plugins are something we're looking at, but it's complicated and the
technology is currently immature. Please read this forum thread in full:
[https://forum.caddyserver.com/t/go-1-8-plugin-model/934?u=matt](https://forum.caddyserver.com/t/go-1-8-plugin-model/934?u=matt)

~~~
lokedhs
Why is this seen as a benefit? If a severe security issue is found in libc, I
have to recompile all software to take advantage of it? Who thought that was a
good idea?

~~~
tmpxkdks
AFAIK, Go doesn't use libc.

Anyway, it is still a problem if you encounter a bug in any dependency: you
have to recompile every application using it.

------
mwpmaybe
I've been experimenting with Caddy over the past few days and it's great, if
somewhat immature and quirky. The plugin ecosystem is rich but in dire need of
some oversight and QC. What I like most about Caddy is its very sane behavior
out of the box: it just does the right thing in most cases, and requires much
less configuration than NGINX, Apache, HAProxy, etc. (It also has fewer
features, so there's a trade-off.)

I posted a gist[0] with my Caddy+Varnish+PHP-FPM configuration and a README
explaining the hows and whys. I'm moving a bunch of WordPress sites from
Apache/mod_php to this configuration (each site's PHP-FPM instance is
Dockerized, but that's outside the scope of the gist). Hopefully someone finds
it helpful!

0\.
[https://gist.github.com/mwpastore/f42f6f1309a7b067519f4c08e1...](https://gist.github.com/mwpastore/f42f6f1309a7b067519f4c08e18b0b6a)

------
tannhaeuser
I like Caddy, but the DreamHost benchmark of Apache vs. nginx isn't really
telling, as it doesn't mention fundamental Apache config properties. I'm
assuming they were benchmarking an unoptimized MPM prefork setup, but you can
use event-based request processing (and other process models) with Apache as
well. There's nothing magical about nginx.
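
For reference, switching Apache to the event MPM is mostly a matter of loading a different module and tuning its knobs. The numbers below are illustrative, and exact module paths vary by distro:

    # load the event MPM instead of prefork (path varies by distro)
    LoadModule mpm_event_module modules/mod_mpm_event.so

    <IfModule mpm_event_module>
        StartServers            2
        ThreadsPerChild         25
        MaxRequestWorkers       400
    </IfModule>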

I'd also like to see a benchmark vs. H2O, which seems to have the most advanced
HTTP/2 support right now (it's been running the nghttp2 web site for some time
now).

~~~
bwblabs
Any experiences with running h2o on production sites? The documentation is a
bit lacking sometimes; luckily, the code is well written.

~~~
tannhaeuser
No experience with h2o yet. I'm not keen on having to pick up Ruby, which
would be just one more thing for me to take care of.

~~~
bwblabs
The code is mainly written in C
([https://github.com/h2o/h2o](https://github.com/h2o/h2o)), with some testing
in Perl. The mruby used for small (header/push) logic isn't that bad, and not
bad in terms of performance either
([https://h2o.examp1e.net/configure/mruby.html](https://h2o.examp1e.net/configure/mruby.html)),
although I'm also not a Ruby fan.

------
popey456963
Some of your links to [https://caddyserver.com/](https://caddyserver.com/) are
relative rather than absolute, leading me to
[https://ferdinand-muetsch.de/caddyserver.com](https://ferdinand-muetsch.de/caddyserver.com)
\- something you should consider fixing.

All in all however, I like the look of Caddy. The best bit I feel has been
left out though:

"That is why, effective this release, Caddy will automatically serve all live
sites over HTTPS without user intervention."

Which I think is just brilliant.

~~~
n1try
Thanks for that hint, I fixed the link! I think I mentioned that HTTPS is on
by default, didn't I? But anyway, yes, it's very useful and good practice.

------
notheguyouthink
> You don’t need to run any script. You don’t even need to create a Let’s
> Encrypt account or install the certbot.

How is this done? Ie, to not need any account and still get new/valid certs.

And, on that note, what is the difference between having an account and not?
Eg, how might using no accounts harm a production environment?

Just trying to wrap my head around that. Really cool UX for side stuff!

_Edit_: Ah, it looks like there is an account involved - it creates one,
possibly using your email address. This makes more sense now.

~~~
n1try
Yes, right. Maybe I should've mentioned that, but I didn't want to talk too
much about the certs.

------
joshstrange
I like Caddy and I use it inside of Docker for my private home/cloud setup,
but lately I've been rethinking even that. I agree it's almost magical in how
easy it is to set up (especially with HTTPS), but I use nginx for serious
projects, and a large part of me says it's better to use the same thing
everywhere, to keep me up to date and aware of potential issues or features,
and not mix the two.

------
richardwhiuk
The graph is labeled 'Apache2 vs. nginx memory usage', but that's not what it
shows at all - it's requests per second for Apache2 vs. nginx. Did you mean to
show
[https://objects-us-west-1.dream.io/kbimages/images/Webserver_memory_graph.jpg](https://objects-us-west-1.dream.io/kbimages/images/Webserver_memory_graph.jpg)
instead?

~~~
richardwhiuk
Also, this article is really odd. It starts with a performance-based
comparison of a bunch of web servers, then adds a third, completely unknown
option, and then decides it's better, based on its configuration being
'better' (in a way that's not shown), despite its performance being ~1/3 of
the competitor's...

~~~
snug
And there's no comparison of the NGINX config he was using vs. the Caddy
config. He simply states that Caddy is easier to use, with little
acknowledgment of what the config would actually look like.

Googling around for a Caddyfile[0], it looks fairly similar, and not so much
easier or harder to configure or understand than NGINX config.

[0]
[https://caddyserver.com/docs/caddyfile](https://caddyserver.com/docs/caddyfile)

~~~
mholt
Indeed, in some of my own benchmarks, Caddy performed better than nginx with
recent versions of Go. (Using "out of the box" configurations, anyway.) I know
tuning nginx is possible, as it is with Caddy, depending on your situation.
There are so many dimensions to the performance of a web server, though, and I
don't give much weight to benchmarks.
[https://gist.github.com/mholt/3f613740ceb417bf63fa](https://gist.github.com/mholt/3f613740ceb417bf63fa)

------
dsl
This looks awesome. I'd switch away in a heartbeat if the Let's Encrypt
integration could handle multi-server deployments.

------
spilk
Does Caddy support client certificates?

~~~
mholt
Yes. [https://caddyserver.com/docs/tls](https://caddyserver.com/docs/tls)

~~~
spilk
Is the certificate or its parameters (subject/issuer/alt names/etc.) available
for setting downstream headers (for proxy/fastcgi)? I'm sure I'm missing it,
but I can't see anything like that in the docs. Or is this something more
suitable for a plugin?

~~~
mholt
I'm not sure I understand the question. You want a TLS certificate to set HTTP
headers?

~~~
spilk
Yes - X.509 client certificates are the authentication mechanism (TLS mutual
authentication). How this generally works with other HTTP servers (e.g. nginx)
is that the proxy validates that the client's certificate was issued by a
given authority and is valid/not revoked, then injects headers with the
certificate subject or other certificate fields (commonly a UPN in the subject
alternative name) to tell the proxied application the identity of the user.
nginx's TLS implementation populates variables with the various client
certificate fields for use elsewhere (the $ssl_client* variables here:
[http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables)
)

In other cases, the entire client certificate is injected into a header (in
PEM format) for the downstream application to process as it pleases.

This is a common use case in the US Government, since everyone carries X.509
certificates around their neck (the ID badge is also a PIV smartcard). The US
Government usually implements this with F5 BIG-IP appliances, but I usually
use nginx.
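
A rough sketch of that nginx pattern (the header names, file paths, and upstream address are illustrative; the $ssl_client* variables are the real ones from ngx_http_ssl_module):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/server.crt;
        ssl_certificate_key    /etc/nginx/server.key;
        ssl_client_certificate /etc/nginx/client-ca.pem;  # CA that issued the client certs
        ssl_verify_client      on;

        location / {
            # tell the proxied app who the authenticated client is
            proxy_set_header X-SSL-Client-S-DN   $ssl_client_s_dn;
            proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
            proxy_pass http://127.0.0.1:8080;
        }
    }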

