I used nginx for a very, very long time but abandoned it.
I switched to Caddy because I just got so sick of the constant pain of certificates. Caddy does certificates automatically.
I also was sick of the complexity of nginx configuration. I could get the things done that I needed to do but it was always painful and time consuming.
I've worked out how to do in Caddy everything that I did in nginx, though that took a lot of time and effort, but I tend to create quite sophisticated web server configs.
At this stage there's nothing really that nginx could offer that would bring me back from Caddy.
I wonder if there's something about changing use cases. If you design for x and y use cases then you end up with a nice, simple design for x and y. People then start doing a lot of z, so you adapt to support z, but it adds some complexity because it's not exactly what you had in mind when you laid the foundations.
A competitor comes along who's designed for y + z, and everyone loves how nice and simple it is. Then use case u comes along...
If we are to trust a certain philosophy based on simplicity, one of the _very hard_ things you have to do as a head maintainer of software is to have the ability to say no when that case arises.
That’s the trick - sometimes, if you don’t add the features, you lose share because the market has shifted and now a different set of features are what is valued. So you can either patch and lose focus, or not patch and lose customers.
Having used Apache, Caddy, Nginx and more, I can tell you it's not about "sophistication" or features necessitating complexity.
It's about bad configuration file design, little thought put into usability and ergonomics, and, frankly, a total lack of knowledge in that domain (usability) on the part of the devs...
If you look at those three specific examples, you'll see decades of progress in the field between them. I don't think it's fair to say bad config file design, but maybe more fair to say that we've just gotten better at it as an industry over the last 30 years, and the ability to write better config files has gotten easier as well.
This seems very much like the browser/frontend space, where Webpack was once cool and slim, but is now just as bloated and slow as the systems it set out to replace.
Yeah, what's happened here is that Caddy is optimized for the most common use cases and gives them a "happy path." This simplicity is only possible in a world where something like nginx (and apache httpd before it) exists and people can figure out what the "happy path" should even be.
I'm old enough to remember Nginx, and its selling point was never ease of configuration. It was always about the number of connections it could handle, which made it great as a reverse proxy for Web 2.0.
> I'm old enough to remember the same complaints about Apache, and nginx being the simpler alternative.
Nginx is still far simpler than Apache.
It just so happens that subsequent projects picked specific usecases to work on their happy paths to make them far simpler. Certificates is one of them.
> Something about systems evolving to a level of complexity/sophistication until they collapse under its own weight...
This is nothing of the sort. It's a matter of newer and better options popping up, not that old options got worse.
I came from Apache through Nginx to Traefik and briefly tried Caddy. Went back to nginx for a lot of stuff because it's easier to understand and debug than Traefik.
I think a shell script with certbot and "nginx reload" is much easier than some closed service I can't take a look at.
I'm using all three, Apache, Caddy and nginx almost daily. For the trivial, Caddy is amazing. When things get more complex alongside needs themselves, nginx is amazingly powerful. Apache at the same point becomes overly verbose, anomalous and painful. It does take some discipline though, to comment and structure your configuration files, but that applies to anything that isn't trivial. return 444; alone is a great feature I can't do without, not to mention all the modules one can use.
The thing with certificates is that most tutorials are terrible. A very seamless way is to use the webroot method. Create an nginx configuration file that serves LE's challenges from a specified folder on your system, and include that config in your server blocks. Then you instruct certbot to use that folder. Really quite easy.
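For example, something along these lines (the paths and domain are just illustrative):

    # acme-challenge.conf, included into each server block
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

and then:

    certbot certonly --webroot -w /var/www/letsencrypt -d example.com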
Wait, how does that make nginx use the certificates? For me it doesn't have any certbot/letsencrypt-specific configuration, as I use DNS-01. I just point ssl_certificate and ssl_certificate_key to the correct path where certbot saves the certificates and be done with it.
Making use of the certificates is easy as well, like you described. Just a matter of specifying the ssl_certificate(_key) path. It doesn't support variables, so it can't be based on the server block's domain/variables, unfortunately.
DNS challenges are a bit more seamless, but I personally don't like giving access to entire zones to a single machine, which is what most DNS APIs force you to do.
You just have to run those commands once per domain and it'll keep that wildcard certificate valid forever, acme.sh sets up a cronjob to renew the cert when needed and will automatically reload my nginx container after.
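Roughly (the DNS provider hook and paths here are just examples):

    acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
    acme.sh --install-cert -d example.com \
        --key-file       /etc/nginx/ssl/example.com.key \
        --fullchain-file /etc/nginx/ssl/example.com.pem \
        --reloadcmd      'docker exec nginx nginx -s reload'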
And if you use caddy you never need to know what any of that means, since caddy completely automates it. Maybe it's not a pain to you since you're used to it, but it's a major obstacle to most people looking to set up a website, particularly for the first time.
Even though I've set up nginx and certbot before, I'm happy I don't need to think about that stuff with caddy. Total waste of mental resources. I just want to get stuff on the Web.
I’m the same. I’m able to host unlimited subdomains using wildcard TLS and unlimited apex domains using TLS on-demand, all from one Caddy config. It’s amazing.
No disrespect to the good people behind nginx. Caddy is simply part of a new generation of web servers.
One pain point with caddy for me was Cloudflare DNS validation. I had to rebuild Caddy from scratch and build my own container image. This process is slow as hell and very RAM-intensive; I had to add swap just for it. I wish they published images with popular plugins built in.
Hm, I found this to be pretty straightforward through Docker:
e.g.:

    # build stage: compile Caddy with the plugins you need
    FROM caddy:2-builder AS builder
    RUN xcaddy build \
        --with github.com/greenpau/caddy-security

    # final stage: copy the custom binary over the stock one
    FROM caddy:2
    COPY --from=builder /usr/bin/caddy /usr/bin/caddy
You can keep your container image, and just do the "COPY --from=builder" step in there. We do this in CI/CD on every build.
I did something similar. It brings my server to its knees and takes something like an hour to complete, requiring all services to stop before I can run it. This is extremely unergonomic for me.
VPS with 1 core and 512 MB RAM. I just checked: not really an hour, it spent 25 minutes and then failed with "no space left on device". Right now there's 4.9 GB free, so it takes at least around 5 GB of disk space, which is an issue as well.
Building on a separate computer will work, of course, but it makes everything more complex.
You go to the download page. Click the checkbox for cloudflare dns and hit download. The binary you download will have the module built in. It's very simple but of course being nonstandard it makes updates unnecessarily painful. It would be nice if you could wget it with a special hash that represents the modules you need or something like that.
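If I remember right, the download page is just a frontend for an API, so you can script it; something like:

    curl -o caddy "https://caddyserver.com/api/download?os=linux&arch=amd64&p=github.com/caddy-dns/cloudflare"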
The configuration format is easily the worst aspect of nginx. There was a stretch where I got heavily into using it because it really is capable of quite a lot, but I could never totally grasp the config, despite how I've managed to learn pretty much any programming language given to me. Depending on what you're doing, a certain other thing may be totally moot or out of place, but it's certainly not obvious, and good luck actually debugging your nginx config without resorting to The Google. Once the config is set, nginx can do its job quite well, but it's bad if you're frequently making architectural changes.
every configuration format has its quirks, and nginx certainly has a couple.
but coming from apache it was the freshest breath of air possible. i still prefer it to any yaml/json alternatives newer products like envoy use. a pure ymmv point.
i don't think that having a manual or reference open while doing more involved setups is a sign of bad design. some things are simply complicated...
I wasn't trying to equate it to a programming language, though. IMO, a config format shouldn't be more difficult to grasp than a Turing-complete language. To me, nginx conf is hard to gain proficiency in because the amount of experience to know whether a declaration is in the appropriate place or won't be ignored is very high. Obviously someone can indeed become proficient, but I've found it to be extraordinarily high for someone who just needs it occasionally; I really don't want to live in nginx land all the time, in part because I'm a software engineer and not really an ops person.
For instance, should `upstream` be defined at the top-level or under `stream`? If you have another server doing a proxy_pass to that upstream, does it also need to be under `stream`? This is in no way obvious by either looking at nginx configs or by reading the docs.
And then what if you have a web server? Does it need to be under `http`, or can it be top-level? It can be either one depending on whose config you're looking at. And if the server you're pointing to is already an HTTP server, why would you want nginx to layer its own HTTP handling on top of it?
That's just a couple of examples. And yes, there are absolutely logical answers to those, but I do not believe that the nginx config format speaks for itself.
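(For the record, the answers turn out to nest roughly like this; names are made up:)

    stream {
        upstream tcp_backend {        # upstreams for TCP/UDP proxying live under stream
            server 10.0.0.2:5432;
        }
        server {
            listen 5432;
            proxy_pass tcp_backend;   # stream-level proxy_pass takes no scheme
        }
    }

    http {
        upstream web_backend {        # upstreams for HTTP proxying live under http
            server 10.0.0.3:8080;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://web_backend;
            }
        }
    }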
It's quite rare that your webserver will be the bottleneck at this point. Your app will be the bottleneck much more often.
I don't think the difference in performance between Nginx and Caddy would mean you need 3 servers versus 1 of the same spec. But of course, you need to run your own benchmarks on your own config to determine which is best _for you_.
Benchmarks cannot be done generally, because the config drives so much of what happens at runtime, and everyone has different needs.
> I switched to Caddy because I just got so sick of the constant pain of certificates. Caddy does certificates automatically
That may be fine if you only use those certificates for port 443 but it falters where certificates are used for the remaining 65533 ports. Instead of leaving certificate request and renewal to individual tools I prefer to centralise it in one spot - a small container or VM is sufficient - which deals out ready-to-run certificates to those systems which need them. When dealt with this way nginx actually has no problems whatsoever with certificates, all it takes is those three (or more depending on your protocol needs) lines in the config file:
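i.e. something along these lines (paths illustrative):

    ssl_certificate         /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;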
The /etc/letsencrypt/live tree gets pushed to the relevant machines by the certificate handling container/VM with certificate files in the form and shape needed by whatever program is to use them.
So, in short, the Caddy approach may be fine for those who only run web things but it 'fails' (as in 'adds additional complications') for those who venture beyond port 443.
That's not true at all. Caddy can automate certificate issuance with any of the 3 challenge types: HTTP-01 (requires port 80), TLS-ALPN-01 (requires port 443), or DNS-01 (requires building Caddy with a DNS plugin for your DNS provider).
Also you can cluster Caddy by making sure the filesystems are synced, or using a storage module like Redis. Then any of the Caddy instances can do any step of the issuance phase (one can start it, another can complete it).
You can use these certificates for sites on non-443 ports as well, once you have the cert. Nothing special at all to do there, you just configure Caddy to serve a site with a different port, e.g. `example.com:65533`
You also get numerous other benefits, like OCSP stapling, issuer fallback (if LE is down, it'll try ZeroSSL), etc.
You don't seem to understand what I mean when I say 'port 443' - i.e. HTTPS. I'm talking about the use of certificates in non-web applications, e.g. SMTPS, IMAPS, database servers, etc. I prefer not to run a web server on my database servers, mail servers and what have you.
It's been 5 years since I last used any web server, as I started using Go to build web applications.
1. HTTP server is production ready.
2. autocert fetches certificate automatically from LetsEncrypt.
3. Graceful restarts (zero downtime) using signals.
I think the biggest advantage of languages which enable portable applications, like Go, is not having to mess with web servers anymore, at least for the most common use cases.
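A minimal sketch of that setup with the x/crypto autocert package (the domain and cache path are placeholders):

    package main

    import (
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,                    // agree to the CA's terms
            HostPolicy: autocert.HostWhitelist("example.com"), // only issue for our domain
            Cache:      autocert.DirCache("/var/cache/certs"), // persist certs across restarts
        }

        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(), // obtains/renews certificates on demand
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello"))
            }),
        }
        // cert/key paths are empty because the TLS config supplies them
        srv.ListenAndServeTLS("", "")
    }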
heh, I'm on the exact opposite side of the fence. Tried caddy, found it confusing and incomplete so I went back to nginx. Buuut then again, I've been writing nginx config for 10-15 years :)
The correct thing with Caddy is to _not configure anything at all_ for TLS. The defaults are the correct thing to use. If you override the defaults, then you're more at risk of bitrot due to not remembering to update your own config. Let Caddy (and the Go stdlib, really) choose what's secure.
> I used nginx for a very, very long time but abandoned it.
I still think that Nginx is pretty good, however there are a few annoying things about it, certificates being just one of them.
Personally, I also found that attempting to use it as a reverse proxy kills the entire instance when there is no DNS record for one of the sites (say, 1 out of 20 that are proxied). I ran into this when running containers that hadn't passed health checks and therefore didn't have any traffic routed to them, which meant that if any of them went down, all of them would be unreachable. Furthermore, the popular suggestion of using a variable for the proxy URL just broke redirects in some apps: https://blog.kronis.dev/everything%20is%20broken/nginx-confi...
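The workaround in question looks roughly like this (127.0.0.11 being Docker's embedded DNS, as an example):

    resolver 127.0.0.11 valid=5s;
    set $upstream_app http://my-container:8080;  # a variable forces per-request re-resolution
    proxy_pass $upstream_app;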
Caddy is pretty good, though I couldn't get the equivalents of all the options that are available in other web servers, for example allowing encoded slashes when hosting a Sonatype Nexus instance like what you can do with Apache. Things might have changed, but last I tried, it didn't quite seem to work: https://help.sonatype.com/repomanager3/planning-your-impleme...
Furthermore, there were issues with certificates as well: if the configuration for one of the aforementioned 20 sites was bad and the certificate couldn't be renewed, then the entire instance went down. What does this mean? Well, if I run Caddy as a container, I'll get the "fail fast" approach, which will sadly then mean that I'll get a restart, that will still fail with the bad configuration and eventually will hit Let's Encrypt rate limits. Sure, I can have some alerting (extra work) or have restart backoff/delays (though this will be bad for restart times if the instance were to ever crash for other reasons, e.g. load), but neither seems like an actual solution. Very annoying, especially because everything will once again go down, instead of being built for resiliency.
In the end, I kind of just went with Apache for the time being, because despite being a bit awkward at times, it's still an okay web server for the scales that I work at and is okay to deploy inside of containers. I wrote about it more on my blog, "How and why to use Apache httpd in 2022": https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...
Of course, I still use Nginx + PHP-FPM with supervisord for containers where I need to run PHP, because of some other weirdness when you try to get it working with Apache, which probably has something to do with how I build container images. That said, I couldn't actually find what the problem was, so it was easier for me to just use Apache on the edge and Nginx for the apps (PHP, or maybe even serving static assets), about which I wrote on yet another blog post, "Containers are broken": https://blog.kronis.dev/everything%20is%20broken/containers-...
In short, it's nice to have a choice of web servers and being able to pick whatever is the best suited for your needs. It's just that there's a lot of weirdness going on with what should be simple configurations.
Thanks for the response, that's really nice to hear!
I'll probably check out the more recent versions of Caddy as well, admittedly they're most likely a rather painless experience for most folks when compared with Apache/Nginx, though I've also heard good things about Traefik, which is even integrated in K3s as an ingress.
I guess most web servers are workable for a variety of use cases with a bit of effort, though I definitely reminded myself why people weren't exactly the biggest fans of the Apache config syntax just recently.
Free CAs are prone to fraud and abuse because they have no innate funding model, and there's little barrier to someone requesting a cert for any particular domain while attempting to circumvent domain authentication.
Nginx bites you in the a**: you start your project, choose the most famous web server (nginx), then you build and build and start needing the pro features. But they don't give you an up-front price! You have to contact them for a custom quote. That's when you start pondering moving to Caddy. But you're too far down the rabbit hole. Luckily, Cloudflare does a decent job of providing the features you need, at a clear price.
note that this is nginx.com, the division of F5 Networks that was purchased largely to stanch the bleeding from nginx, lvs, haproxy and other open source load balancers. the purchase was made in the same spirit as Oracle's acquisition of MySQL, albeit with less implosion so far.
"part of f5" is written on the logo in almost imperceptible font, that it may evade the PTSD of managers familiar with f5 licenses.
The problem with nginx commercial was it didn't work with the actual resources developers can bring to bear. If an nginx commercial license solves a problem, and the answer is "great, $5000 a year please", then the developer can't take that to their manager - the problem isn't critical yet, may never become critical, and they'll be asked to look for any other solution to work around it (I've been the guy doing this).
A per-node, per-core, per-something license at a much lower rate would've saved them a lot of problems. Because $5 a month per node might get expensive quickly when scaling up, but that's a simple expense I can throw on the corporate credit card and then go "look, now the project is delivered".
1000x this. I see it with other software companies chasing enterprises (or at least VC-backed companies) - Varnish Cache is a particularly egregious example.
In the end it hurts them because a developer or consultant may find a problem that is worth the license fee to solve - but with no hands on experience of the product due to the cost, it's hard to recommend it and take the reputational risk in case it doesn't live up to its promises.
Yes, they really miss the opportunity to get a foot into many smaller companies and scale up. For anything useful you need at least $10k/year (likely more), but what if you only need a tiny feature in addition to the open source software? Then this pricing feels just too much (although they of course have to financially support the open source software too) and it'll be very likely that you try to solve it with the open source nginx.
If they have modules, why not sell those modules and attach a "reasonable" price to them?
It's kind of hard to explain what a big deal NGINX/fastcgi was back in the day, coming from httpd/mod_php. All of a sudden, a basic EC2 instance could handle orders of magnitude more concurrent requests. Before NodeJS was a thing, it opened up whole new possibilities for my company at the time.
Not sure what my point is. But NGINX has a special place in my developer heart and it's really encouraging to hear this.
The big deal was FPM, the PHP FastCGI process manager.
Apache had mod_fastcgi for half a decade before nginx even existed, and then got mod_fcgid, but it was not much good without any support from PHP (which was an even bigger deal back then).
I am probably in the dying cohort who were on mod_python, didn't make the uplift to fastcgi, and are somewhat mentally stuck in the expectation of apache hooks and thin parallelism in our code.
The fastcgi thing was true: I don't for a minute deny what you are saying. But it also stranded some "less agile" minds like me.
I know significant sites using Perl's "use CGI;" still.
Before nginx there was mod_accel for Apache, which was hard to configure but provided the same ballpark of performance for reverse proxying. Earlier work by the same artist.
You can use Apache with FPM. However, Nginx was definitely what I chose to use.
I was working in managed webhosting at the time, and it was just amazing how much could be done in Nginx without needing to track down some abandoned module like you had to do with Apache all the time.
Yep. Electron uses Discord, and any time I want to interact with Electron developers on that platform, I remember that I hate Discord, and I don't care enough to actually open the application to send the message.
Using Matrix would definitely be preferred for me since I have Element/Fractal open all day, everyday.
I am aware. I just don't trust the company at all because of the shade they pull on the desktop application. Snooping processes is just a horrible practice.
I don't know much of the history or politics. I've mostly been a "let's just use (open source, if it needs to be said out loud) nginx" fire-and-forget kind of dude for a while now because I mostly don't care about this layer...
> We realize it complicated matters when we created [...] an open source Ingress controller for Kubernetes, [...] different from the community Ingress solution (also built on NGINX).
> It’s pretty clear that the Gateway API is going to take the place of the Ingress controller in the Kubernetes architecture. So we [...] will make the NGINX Kubernetes Gateway – which will be offered only as an open source product[...]
... but these two bits very much give me a feeling like they're planting their flag here and offering an open source version so that there's no "need" to go making another one that they don't control.
> although I will never be a fan of its JSON config file option
Well, if you use a Caddyfile, the JSON config is just an implementation detail. That decision shouldn't matter to you. The fact is, JSON is the best option for Caddy _at runtime_ to manage its config in memory, especially because Go has first-class support for unmarshaling JSON documents onto a tree of structs.
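You can even inspect that JSON yourself:

    caddy adapt --config Caddyfile --pretty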
I love and use JSON every day. I just don't philosophically believe that it's the right thing for configuration. Too much scope for making errors if done by hand.
And also if not doing it by hand, it means additional tooling to create the JSON file. Another layer to deal with with so many layers already.
Also Go's first class support for JSON is not a reason for supporting JSON in configuration files. Pretty much every language supports JSON anyway, so whether it's "first class" or "second class" does not really matter.
Anyway, we love and use Caddy here. We just don't use the JSON config file.
I love nginx and while I use Caddy on my laptop all my servers are nginx.
Now that the dynamic configuration API is going open, this will give Envoy a good run for its money. The big thing Envoy doesn't do is static file serving, and for smaller (anything but the largest...) deployments Nginx makes a ton more sense.
Global rate-limiting bucketed by Basic-Auth-presented API key? Easy enough in Nginx; no idea how to do it with Caddy.
When people say "production", they mean things like "QoS for a shared-multitenant system, in the presence of customers with really badly tuned and spiky request workloads, whose traffic you must nevertheless mostly accept."
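In nginx that's roughly (untested sketch):

    # in the http{} context: key the shared zone on the basic-auth username
    limit_req_zone $remote_user zone=apikeys:10m rate=10r/s;

    server {
        location /api/ {
            auth_basic           "API";
            auth_basic_user_file /etc/nginx/htpasswd;
            limit_req            zone=apikeys burst=20 nodelay;
            proxy_pass           http://backend;
        }
    }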
As a counterpoint, I've been running 100k+ domains through caddy clusters with about 1 TB of traffic per month (so not a ton, but not nothing). I built and manage that solo with very little maintenance or support, which I think is a testament to how reliable and performant caddy is.
You already know me, just not under this username! I try not to put my name directly out on forums like this, but I run https://approximated.app and those 100k+ domains aren't all in one cluster, though some clusters are growing quickly and might be there in a few months. Customers paying for the clusters are generally SAAS or hosting related so they can bring a lot of user domains when they come on board and tend to grow steadily.
If you can't already do it with the rate limit module I wrote, open an issue with your detailed requirements: https://github.com/mholt/caddy-ratelimit -- should be pretty straightforward for the most part.
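A sketch with that module's Caddyfile syntax, keyed on the Authorization header (untested for this exact case):

    rate_limit {
        zone api_keys {
            key    {http.request.header.Authorization}
            events 100
            window 1m
        }
    }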
> QoS for a shared-multitenant system, in the presence of customers with really badly tuned and spiky request workloads, whose traffic you must nevertheless mostly accept.
Yah, we see that sometimes. Caddy usually handles it fine, sometimes with a bit of massaging the config.
We cannot use Caddy as long as a reliable and battle-tested rate limit module does not exist. "Work in progress" is not something I would like to put on a production system.
It would be great if you would like to finish the work on this project and make it part of the Caddy default distribution.
Honestly I do not understand how people run high-traffic sites without rate limiting. (D)DoS attacks and other misuse (e.g. spambots etc.) are daily issues on the internet; do you just ignore that?
Appreciative of all the work regarding caddy, but the rate limiting seems to be chicken-and-egg: it needs people to test it out before it can be accepted into core, but people are unwilling to test it out because it's not in core.
It's also a blocker for me, so nginx wins by force of inertia.
Happy to finish it if a company would like to sponsor that work. It works well enough for what it was originally commissioned for. It actually will probably work pretty well for you as-is. Try it!
I also only have Caddy running locally to proxy different dev environments and serve certain files, and it is an exceptional tool, but I would never deploy it in production:
1) Nginx configs are (from my experience) easier to template (in our Nomad & Consul cluster architecture)
2) From what I could gather, Nginx is more stable and performant
3) I don’t trust Caddy’s codebase security. It simply has too many dependencies, and Go makes it very easy to get into dependency hell
Honestly if 3) wouldn’t be an issue, and stability from 2) would be proven, I would probably give Caddy a try in production.
Hm, the templating thing surprises me. I'd be interested to know more about that, since Caddyfile is heavily inspired by nginx config. But for automating and scripting, I typically recommend using the JSON config.
Pro tip: Did you know you can use any config format you want to configure Caddy [1]? So if templating the Caddyfile is hard for you, use something else! You can use YAML, TOML, or even NGINX configs.
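e.g. (the non-default adapters ship as plugins):

    caddy run --config caddy.yaml --adapter yaml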
On 2, that's been pretty well debunked at this point. Caddy is written in Go, and is only a very thin wrapper over the Go standard library, which heartily powers much of Cloudflare's, Netflix's, and Google's infrastructure. Plus you gain memory safety and are exempt from a whole class of vulnerabilities with Caddy. We've seen numerous instances where Caddy has kept sites up while nginx let other sites go down, due to Caddy's resilience in the face of certificate problems, for example.
On 3, sure, I can understand that -- but this is true of any open source project. And it IS open source, so you can "own" your own code base. You're in control. And actually, Go's module proxy protects Go projects more than most C projects. Caddy's extensible architecture means that you can add all the features you need without bloating the code base.
I think it would surprise you. A lot of people seem to hold that opinion over from pre-v2, but a lot is well done. Templates are totally doable; heck, you can reference exported shell vars in the config directly.
TBH, v1 was already kilometers ahead compared to Apache or nginx for normal, casual users.
I host some stuff at home and moving from nginx to Caddy v1 was a huge breath of fresh air. V2 made the product extremely good.
I tried a bit to use the API to automate my deployments (which are, again, home deployments) but it stopped making sense when I discovered that Caddy can read config files via wildcards.
The primary reason I never looked seriously at Caddy is because it had an installation via a bash one-liner, and refused to post a repo or a way to download specific versions. This makes maintaining many identical servers via ansible a real challenge.
To be fair, that was several years ago. But can you explain the logic behind that decision?
The docs unfortunately. It always takes longer (maybe it’s just me) to figure out how to do something non-trivial with Caddy and here both Envoy and Nginx shine.
A lot of the docs don’t explain what a module does - for example:
> dns.providers.azure wraps the provider implementation as a Caddy module.
Well great what does the provider implementation do? It’s the same for all DNS and not so useful if I’m coming from Apache or Nginx. At least explain what I’ll be able to do with this module.
From http.handlers.push:
> http.handlers.push is a middleware for manipulating the request body.
That’s a bit generic - any examples? There’s a lot of ways to modify the request body so what can I do with this?
There are 2 http.authentication.providers.jwt - both unofficial and both with 0 documentation. There has to be some standards here, why link to blank docs from your site?
All-in-all it just feels meh. I like to read docs that’ll give me the technical ins and outs of every function with some examples ideally.
Microsoft does a decent job of this, as does Postgres (and others, but these two come to mind immediately). Envoy is pretty nice here too, and Nginx is close.
Re "dns.providers.azure", that's a third-party module, we don't maintain that. The documentation you see in the JSON docs is auto-generated from code comments (godoc) so it's the maintainer's responsibility to write good comments that can act as documentation.
Re "http.handlers.push", fair point, that one's lacking/misleading. But this is a feature that was just added to satisfy a specific need, then fell by the wayside. Mostly because now Chrome is removing support for Server Push, so we'll need to deprecate this feature, in favor of HTTP 103 Early Hints which is the effective replacement for it.
Also, in general, we tend to spend more effort on maintaining the Caddyfile docs than the JSON docs, because the large majority of users use the Caddyfile. See https://caddyserver.com/docs/caddyfile/directives/push for the push handler, complete with examples.
Re "http.authentication.providers.jwt", well again, I think that's an issue with the module's maintainer not sufficiently using godoc comments. The maintainer registered the module under two different module paths (renamed repo) so it caused a duplicate. We'll need to manually remove the duplicate from the database, I think. It might be because there's conflicting ones that no docs are shown (cause the backend which serves the API docs is confused, I dunno, it's a bug clearly, will need to dig deeper).
Another example that was tricky in Caddy: setting up access via either whitelisted ip, or basic_auth (aka apache satisfy-any).
Useful for example for staging environments, whitelisted from certain ips, but allowing access via user-name/password from other IPs and/or the Internet.
Pretty easy (if I understand what you're asking for):
    @needsAuth not remote_ip private_ranges
    basicauth @needsAuth {
        user pass
    }
This uses a named matcher[0] `@needsAuth` with the `not` and `remote_ip` matchers, to match all public IP ranges (change `private_ranges` to a list of CIDRs if you prefer), then applies that request matcher to the `basicauth` handler[1], and passes in user/pass pairs (passwords are bcrypt-hashed).
If the user fails authentication, then they won't be able to get in, as you'd expect. But users inside your allowed IP ranges will get through without basicauth.
Maybe you can better explain what they're asking for. Because otherwise, I find it unclear. Isn't the point of OpenResty to provide a system for building your app right inside the webserver? Because you can absolutely do that with Caddy, by writing plugins.
> But there are many scenarios where being able to extend the HTTP server via Lua is more convenient than writing a plugin I would think?
Well, Caddy is written in Go, so it's only natural to write a plugin in Go. Statically compiled into your binary. We provide a tool called `xcaddy` which is used to produce builds of Caddy with any plugins you need. You just need Go installed on your system to run it, no other dependencies.
The reason why Lua is used for OpenResty is because writing plugins in C is... not fun.
You can absolutely do what you described with an HTTP handler module in Caddy. You'd just wrap the req.Body with a reader that watches the bytes as they're copied through the stream, and when you see the part you want to log, you do that.
We have a replace-response plugin which takes a similar approach, except it manipulates the response as it's being streamed back to the client. https://github.com/caddyserver/replace-response The whole plugin is just one file of Go code.
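The wrapper itself is only a few lines; a rough sketch (type and field names are made up; imports "bytes" and "io"):

    // watchReader wraps a request body and inspects bytes as they stream through.
    type watchReader struct {
        io.ReadCloser            // the original request body
        buf bytes.Buffer         // everything seen so far
    }

    func (wr *watchReader) Read(p []byte) (int, error) {
        n, err := wr.ReadCloser.Read(p)
        wr.buf.Write(p[:n])      // scan wr.buf here and log once the part you want appears
        return n, err
    }

    // inside the module's ServeHTTP, before calling the next handler:
    //     r.Body = &watchReader{ReadCloser: r.Body}
    //     return next.ServeHTTP(w, r)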
I don't follow nginx, so what I say might not mean much, but this is a weird read for me. It reads hollow, like a PR document, but at the same time it also feels a bit like trying to make amends to the open source world, and fails at that a bit. Was there some history here where nginx the company did wrong by the community, or was the community aggrieved with nginx? Something happened for sure.
Also, they should have stayed away from erstwhile Ballmer phrases like "open source is not production ready". Who thinks that way? Tells me that this is not written by someone who contributes to or understands open source at all.
> three words: modernize, optimize, and extend. We intend to make sure these are not just business buzzwords
Why does everything have to be modernized? Nginx is awesome not because it tried to keep up with the ilk of ill-thought-out servers that came and went faster than a flash in the pan. No, let's not put it on GitHub and have literally every person contribute. Elegant architectures and code like this don't just happen because someone "modernized".
I am an Nginx user and I really do not want to go away from here. It's too performant and too mature. It has a terrible UI problem. I don't want to learn a new tool.
All I ask is a Caddy-style JSON configuration format for Nginx. And don't just implement some random JSON schema either; please create a well-designed schema that is extensible. Take Prometheus [1] or Alertmanager [2] as examples of well-designed and extensible schemas.
Just give me some basic features first, easy HTTPS, easy reverse proxy, easy rate limiting. You can release the rest of the advanced features in your own time.
I love the OpenResty nginx distribution (which is used by the official Kubernetes ingress-nginx). Makes most of the things you'd have to pay for with the normal nginx free.
OpenResty with Redis on the same system - both Lua-powered - is a god-like power.
The ability to basically control and rewrite all your ingress AND egress, and have some state via Redis over UNIX sockets.
What's not to love? After all, it's what powers the bulk of the internet.
Granted, it sounds dreamier than it is, since a lot of the contraptions you could come up with might be better placed at the application layer. But having the operational ability to do these things ad hoc, the security advantages of manipulating traffic at your own infra edge, or the performance boosts, sure does come in handy!
We did this in production (still runs at webscale) but we abandoned dev effort on it for ${reasons}. I _loved_ it though :) Hook Lua onto the HTTP event-processing stages and add icing on the cake with Lua modules to hand off non-trivial stuff...
Nginx is a superb piece of software but for me it has largely been superseded by Caddy. I like it for its sane defaults, you get most things working from the get go and a fairly high score on the 'rate your webserver security' checkers.
I seem to remember Caddy having some features behind a rather restrictive license - or something like that, my memory is failing me - that made me not consider it at all. But it's been a while, easily a handful of years, and looking at it right now, it looks like they changed that. Seems to be all open-source/Apache 2.0 licensed.
Am I crazy or did they really change the licensing situation? Cause it definitely looks pretty interesting, looking at current docs.
I sympathize a lot with the licensing story of Caddy. /u/mholt has been a very active maintainer and participant in the larger Go community and has every right to create a commercial side to the project and receive compensation for their work.
But licensing is hard. AWS abused the leniency of Elasticsearch's original OSS license, so now the industry is set back 10 years, with new entrants trying to find the balance between inviting community participation and getting gobsmacked by big corporate interests.
I'm not saying Elastic.co is an angel, but it's a lesson OSS projects have to take to heart, unfortunately. Thanks, Amazon.
I think my favorite reaction to Amazon's aggression is Grafana Labs. They released their code under Affero GPL but let you use a different license if you pay them. Sort of like a new version of the Qt model.
AGPL does nothing to protect commercial projects from AWS. Look at MongoDB: they had been AGPL for a loooong time, but moved to a non-FOSS license specifically because of AWS.
AWS has no problem giving away the code for any managed-X service they run. The magic that they charge for is the managed part - deployment, autoscaling, upgrade management, and other operations stuff that no FOSS license can compel them to make public.
For a time we ran a build service that produced binaries that were commercially licensed if used for business purposes (personal use still free). But that was only with the optional use of our build service. The source code has always been Apache licensed.
Ah! So I guess it was just my junior dev mind that couldn't comprehend the nuances and didn't realize I could just use the open-source version rather than downloading binaries.
Thank you for the info, rather curious about trying it out at this point then. I was about to whip out nginx for a server I wanted to set up over the weekend; guess I'll play around with Caddy instead!
For me personally the problem with Caddy is growth and community guides/resources.
Caddyfile is pretty basic, anything advanced doesn't work, and community resources aren't even close to Nginx's.
So I stick with nginx, even though it's annoying to handle Let's Encrypt auto-renew in containers with plain Nginx, since SWAG doesn't support separate certificates per domain.
What do you mean by "growth" being a problem? (If you're talking about scale, we have companies with tens of thousands of sites behind their Caddy instances.)
> anything advanced doesn't work,
Can you be more specific? I'm sure that's not true because I'm in touch with large companies like Stripe who use some of the most advanced features.
> Caddyfile is pretty basic, anything advanced doesn't work
I disagree. I think we're at about 95%-ish of usecases supported by the Caddyfile. We're continually improving on this.
It's just a reality of having to maintain a config adapter, when the actual underlying config at runtime is JSON. But it's fine, we make it work :)
If you have something specific you find you can't do with the Caddyfile, please open an issue. If you're just not sure, please open a topic on our community forums and we'll help you out.
The most complex thing I remember doing with nginx is video streaming using OBS as a source and multiplexing with nginx to YouTube, Facebook and Twitch.
Well, that's a bit off-topic from the parent comment, which was more about the Caddyfile supporting complex config (versus the underlying JSON config) and not really "complex usecases".
But that said, from a quick Google search... was this an RTMP stream? If so, I suppose you'd want to use https://github.com/mholt/caddy-l4 which is a plugin for Caddy that lets you do TCP-layer things. Caddy's standard distribution just ships an HTTP server (plus TLS and PKI, etc), which is layer-7
You might be able to use caddy-l4's "tee" handler to pipe into multiple "proxy" handlers. But I'm not sure anyone's tried this yet, I had no idea people did this sort of thing. I'd be interested to hear if it does work though.
Their one major disadvantage is the chasm between version 1 and 2. Most info you can find is about version 1 and does not apply to version 2.
I don't like its community much either. The top guys are pretty arrogant. You can meet them in discord.
They spent months trying to get away from adding forward proxy support, claiming another project provided support for it. (That project changed three times in that time frame.) Now they've added the feature as if somebody had just asked for it.
I do not know who the "top guys" are but support from Matt and Francis has been extraordinary for me.
When starting with Caddy (v1) I missed part of the docs and asked questions in the community - I never got a RTFM answer. It is difficult to embrace a new product and these guys understand it.
> They spent months trying to get away from adding forward proxy support, claiming another project provided support for it.
That is not what happened. We did not understand what people were asking for at the time, partly because it wasn't clearly explained to us.
> Now they've added the feature as if somebody had just asked for it.
It being added recently was because of other changes to the codebase that made it possible to implement easily. We didn't have response intercepting working correctly in the reverse_proxy module until more recently, and it took some careful refactors to get it right.
Once we were made aware of an issue in Authelia's issue tracker (one of the top open source auth servers) asking for Caddy support, we looked into it more closely. Nobody reached out to us about that issue being open, for whatever reason. We got a massive amount of help from James Elliott at Authelia who did extensive testing and helped us design the config layer so that it would match general expectations.
The issue I think you're talking about is this one: https://github.com/caddyserver/caddy/issues/2894. Early on in v2 betas, we opened the discussion to get community feedback about the "Authenticator" interface, i.e. https://github.com/caddyserver/caddy/blob/master/modules/cad... and what would be needed to support everyone's needs. Nobody ever suggested "forward auth" in that issue. It was all very handwavey suggestions with nothing truly actionable. So the discussion stagnated, and we closed the issue.
> Their one major disadvantage is the chasm between version 1 and 2. Most info you can find is about version 1 and does not apply to version 2.
Caddy v2's been out for 2.5 years now. I think the balance has shifted on this, it's much easier to find info for v2 than some time ago. That came naturally. It's to be expected for _any_ major change in a project's direction. It's not unique to Caddy in any way.
> I don't like its community much either. The top guys are pretty arrogant.
With all the above said, I think it's more that we feel the need to defend Caddy and its reputation, especially because it keeps getting attacked due to misinformed comments, for example on the question of licensing: the licensing issue people had was mostly invented, based on a misunderstanding.
But feel free to elaborate on what you think we're being "arrogant" about. I'd be glad to clarify any misunderstandings.
> A quiet but incredibly innovative corner of the NGINX universe is NGINX JavaScript (njs), which enables developers to integrate JavaScript code into the event‑processing model of the NGINX HTTP and TCP/UDP (Stream) modules and extend NGINX configuration syntax to implement sophisticated capabilities.
That sounds pretty similar (in spirit, at least) to mod_perl etc. in Apache.
Does anybody happen to know a writeup that compares the two?
As long as dynamic configuration is locked behind an NGINX Plus-only API, these are hollow words.
This is a must-have feature for today's workloads (Kubernetes or just very busy webservers) in production, and nginx will likely continue to lose market share to Envoy-based alternatives where everything is configured through APIs, without needing to reload the server.
This is definitely addressed in the article. The relevant bits talk about moving features from the commercial version to the open source version, as well as the introduction of an open source nginx agent that can do the "dynamic config" on your behalf (via API I presume).
I agree with your sentiment though! I hope they do follow through as nginx is generally an easy sell to others when trying to fill holes in your tech stack and otherwise pretty bulletproof.
Some additional context for what I'm referring to: in this old blog post from 2015, nginx describes exactly why the dynamic configuration feature is important, and what's wrong with just reloading (draining the old process of connections).
https://www.nginx.com/blog/using-nginx-plus-to-reduce-the-fr...
For rolling deployments, it can cause repeated configuration changes, exacerbating the problem; some workloads are more affected by this than others, of course. The nginx ingress controller docs make this clear:
> Every time the number of pods of services you expose via an Ingress resource changes, the Ingress Controller updates the configuration of the load balancer to reflect those changes. For NGINX, the configuration file must be changed and the configuration subsequently reloaded. For NGINX Plus, the dynamic reconfiguration is utilized, which allows NGINX Plus to be updated on-the-fly without reloading the configuration. This prevents increase of memory usage during reloads, especially with a high volume of client requests, as well as increased memory usage when load balancing applications with long-lived connections (WebSocket, applications with file uploading/downloading or streaming).
Just to make things interesting, there’s actually two Ingress controllers based on NGINX, one led by the NGINX company and one under Kubernetes organisation. The Kubernetes-led controller ‘ingress-nginx’ is substantially enhanced with OpenResty integration and doesn’t have the issue with reloads that the blog refers to.
You can use a switch to start a new nginx process and tell the old one to stop accepting new connections without killing it. Forgot the exact switch, but it's in the docs. Don't know if this is well known, but this is how I made a poor man's dynamically configurable nginx like 7 years ago using the free version. It worked great.
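It's probably the upgrade signals described in the nginx docs, something like:

    kill -USR2  $(cat /run/nginx.pid)         # start a new master with the new config/binary
    kill -WINCH $(cat /run/nginx.pid.oldbin)  # old workers stop accepting and drain
    kill -QUIT  $(cat /run/nginx.pid.oldbin)  # retire the old master once drained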
I think this is known, and the Kubernetes Ingress Controller leverages this, however you are leaving that other process behind until connections drain. If you are changing configurations often enough, you might have many old processes lying around, consuming resources and using old settings. So it's not ideal.
> Once the master process receives the signal to reload configuration, it checks the syntax validity of the new configuration file and tries to apply the configuration provided in it. If this is a success, the master process starts new worker processes and sends messages to old worker processes, requesting them to shut down. Otherwise, the master process rolls back the changes and continues to work with the old configuration. Old worker processes, receiving a command to shut down, stop accepting new connections and continue to service current requests until all such requests are serviced. After that, the old worker processes exit.
This article came at an interesting timing for me, since I recently started to explore building my own CDN clusters on top of NGINX open source inspired by this article from Fly: https://fly.io/blog/the-5-hour-content-delivery-network/
I've worked with nginx in the past, and didn't have a great experience, so I was apprehensive diving in, but this time was very different. I think njs (their custom JS scripting environment) was a game changer. Support is built into nginx core, and available by default in their docker containers, so it's much easier to get started with than Lua scripting. Their JS feature support has some quirks (no optional chaining or array destructuring, and console.log calls don't show up in logs, to name a few things that threw me off), but overall nothing that blocked me from implementing the functionality I needed, and the integration points within the nginx config felt a lot more natural than I remember with Lua modules.
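Getting a handler wired up is pleasantly small; for example (the file and function names are mine):

    # nginx.conf (js_import lives in the http{} context)
    js_import main from conf.d/main.js;

    server {
        location /hello {
            js_content main.hello;
        }
    }

    // conf.d/main.js
    function hello(r) {
        r.return(200, "hello from njs\n");
    }
    export default { hello };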
I did run into a number of things that were locked behind their commercial offering that made me a bit uncomfortable betting on it for the long term compared to purely open source alternatives. Off the top of my head:
- DNS discovery. There's a thread on the Fly example repo accompanying the blog post that describes the use case and proposes some workarounds: https://github.com/fly-apps/nginx-cluster/issues/2. Life would be a lot simpler if DNS discovery from the commercial offering was just available (i.e. we can outright delete a brittle bash script that makes DNS queries and reloads nginx on a 5 second interval). This was mentioned in the article as something they're planning to open source.
- Access to some kind of shared key-value store for custom caching logic in njs scripts. With Lua we could just connect to Redis, but njs can't seem to establish persistent network connections for now, so that's off the table. This wasn't mentioned in the article, but they did mention in this Github issue that they're planning on open sourcing their keyval module for this use case: https://github.com/nginx/njs/issues/437. I have some use cases where being able to connect to Redis would be ideal, since I'm already using Redis for caching across a bunch of other services, and syncing keyval across a cluster seems to be eventually consistent (https://docs.nginx.com/nginx/admin-guide/high-availability/z...), but for most of my caching use cases it should be sufficient.
So this article, along with their overall willingness to work with the community to identify and bring commercial features into open source (at least from what I've observed across their responses to Github issues) does a lot to alleviate those concerns.
Though at the end of the day, I don't necessarily need every nginx feature to be in open source. I have no problem paying for great software like nginx to support its development. But as a small bootstrapped founder, their current pricing structure (from what I could gather on the internet, ~$2k-5k per running instance) is completely prohibitive. It'd probably require a revamp to the way they sell the software (i.e. self-serve onboarding and automatic license provisioning for smaller customers, instead of having customers of all sizes go through expensive salespeople), but I'd love to see a more progressive pricing structure with a lower barrier to entry for their commercial product.
I have but only briefly. I needed a pretty high degree of customization for my use case so njs for scripting was a huge value prop over having to do everything in VCL.
Run Kubernetes. Install the open-source Kubernetes ingress controller with your favourite load balancer. I'd wager that'd solve majority of your use cases.
To me it totally sounds like "all those features (some bizarrely so) we locked behind a commercial license to extract money from you" has resulted in people looking for alternatives as soon as the use case exceeded the capabilities of the open source version. Kubernetes and its ecosystem are eating their lunch for more complex service mesh deployment scenarios where nginx is basically an interchangeable dumb proxy, and doing it with generally new and free open source products. So here, have some more stuff we've kept gated behind an expensive license/subscription, so we can stay relevant.
I don't say this to be ungrateful about what nginx has provided for free, it's great, thank you. But the post also feels like a cynical take about what the value of open source truly is.
I would put it in a more positive light. Long ago, nginx was excellent, providing way more than Apache or other alternatives. Then they put features which were pretty advanced and uncommon behind a paywall.
Many of those features have gone from fancy to expected, and they moved too late on making them open source.
The free product went through a slow decline from excellent to artificially crippled, and now they're changing course, which should be applauded. On top of that, they didn't take away features to cripple their product for money; it's just that what counts as basically necessary these days is so much more than it was years ago.
I don't disagree with your take. I just find it disheartening that it took what presumably they must perceive as a threat to their revenue stream to start listening to developers/engineers about the role open source can play in their product offering and what effect that might have on the "community" around nginx.
My web server path has been Apache -> (Varnish) -> Nginx -> Caddy. This is outside of programming-language-specific servers like Waitress or Express.js.
I love the fact that Caddy is a single binary I can run for simple use cases without a configuration file. Plus the built-in Let's Encrypt support. Nginx is definitely on defence here unless you need "web scale" servers.
Yeah Caddy's developer experience is unrivaled when it comes to setting up SSL. You don't have to run Certbot or manage certificates manually. If you give it a hostname to proxy to, it just handles certificate management seamlessly in the background.
The real killer feature for me is the Cloudflare module. It allows you to use the ACME DNS challenge, which means you can test your SSL setup without exposing your server to the public internet.
certbot has DNS plugins for nearly all DNS providers, including Route53 and Cloudflare. I have been using LE certs with the nginx server in my developer environment without any issues, other than copying over the certs.
Certbot is fine, but it’s nice to have that functionality built directly into the reverse proxy as opposed to having to configure and update multiple tools. I would ideally like to see Nginx integrate a subset of Certbot’s feature set into their code base especially given that offloading SSL is one of the primary use cases of Nginx.
To clarify, Caddy absolutely can deal "web scale" as proven by companies like Fathom, Stripe, and many others. (The Go standard library powers many of Google's, Cloudflare's, and Netflix's infrastructure, and Caddy is a relatively thin wrapper over that.)
But since we're replying to a comment that's critical of Nginx's commercialization: Caddy also tried to commercialize, and for some time it got way more criticism for that. Are those concerns no longer applicable?
"Going forward, all NGINX projects will be born and hosted on GitHub because that’s where the developer and open source communities work. [..] We pledge to be more open to contributions, more transparent in our stewardship, and more approachable to the community. We will follow all expected conventions for modern open source work and will be rebuilding our GitHub presence, adding Codes of Conduct to all our projects, and paying close attention to community feedback."
"we recognize that many critical features which developers now view as table stakes are on the wrong side of the paywall for NGINX Open Source and NGINX Plus. For example, DNS service discovery is essential for modern apps. Our promise is to make those critical features free by adding them to NGINX Open Source. We haven’t yet decided on all of the features to move and we want your input. Tell us how to optimize your experience as developers. We are listening."
"The NGINX Kubernetes Gateway is also something of an olive branch we’re extending to the community. We realize it complicated matters when we created both a commercial and an open source Ingress controller for Kubernetes, both different from the community Ingress solution (also built on NGINX). The range of choices confused the community and put us in a bad position.
It’s pretty clear that the Gateway API is going to take the place of the Ingress controller in the Kubernetes architecture. So we are changing our approach and will make the NGINX Kubernetes Gateway – which will be offered only as an open source product – the focal point of our Kubernetes networking efforts (in lockstep with the evolving standard). It will both integrate and extend into other NGINX products and optimize the developer experience on Kubernetes."
It sounds pretty honest and positive to me. And I'm the first person to call bullshit on corporate doublespeak. Most other companies would just put more money into B2B sales rather than courting OSS devs/free users and admitting when their strategy was stupid (actually their language is evasive, but w/e).
Of course hindsight's 20/20; let's see if they make good on these promises.
Practically, almost all open source work is on GitHub. I don't think I ever saw a project or library that I used on Gitlab, I saw something a few years ago on Bitbucket, and except for a couple of legacy libraries on bespoke source control servers I can't think of any other example. N>1000, but of course N<100%.
> almost all open source work is on GitHub. I don't think I ever saw
"all open source work that I know of, is on Github"
Where did you look for source code? On GitHub. Where is the source code you found after looking? GitHub. Where is "almost all open source work" that I saw? On GitHub. Etc.
Sure, but the ecosystems are pretty well separated by legal & technical & language barriers. F5 / nginx mainly target the non-Chinese portion of the tech ecosystem.
I’m disappointed at Microsoft and GitHub as the next person but open source happens on GitHub. Period. I have a GitLab account with some code there but unless there’s a huge shift, I’m not gonna be contributing to OSS on GitLab. It’s just a hassle to remember where everything is.
Also, if you need further evidence of this, look at the MARA diagram. It basically spells it out. There's 2 products in that stack that are branded: nginx and Kubernetes, the rest is noise, with nginx positioned as the only component for traffic ingress. Also if that napkin scribbling constitutes an "architecture"...
It would be interesting to scorecard the progress made on promises made then. There is a lot of repetition. Feels a little hollow.
0. https://www.nginx.com/blog/nginx-sprint-2-0-clear-vision-fre...