Nginx Unit: open-source, lightweight and versatile application runtime (nginx.org)
193 points by thunderbong 10 months ago | 104 comments



At this point I'd rather be good at Caddyfile and have a project folder of:

  /home/me/project/caddy
  /home/me/project/Caddyfile
No sudo, no config spew across my filesystem. Competition is good, and I had a lot of fun with nginx back in the day but it's too little too late for me.
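For reference, a minimal Caddyfile for that kind of setup might look like this (domain and backend port are placeholders):

  example.com {
    reverse_proxy localhost:8000
  }

Caddy then picks up and renews the TLS certificate for the domain automatically.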


This is more of a Linuxism, no? I agree though: I have used Linux for decades, but I never remember the usr, bin, local, etc. permutations of magical paths, nor do I think it makes any sense. It's a mess honestly, and is almost never what I want. When I was younger I thought I was holding it wrong, but these days I'm sure it will never, ever map well to my mental model. It feels like a lot of distro-specific trivia that's leaking all over the floor.

It seems like a lot of the hipster tooling dropped those things from the past and honestly it’s so much nicer to have things contained in a single file/dir or at most two. That may be a big reason why kids these days prefer them, honestly.

As for nginx itself, it's actually much better suited for high-performance proxies with many conns, imo. I ran some benchmarks and the Go variants (traefik, caddy) eat a lot of memory per conn. Some of that's unavoidable because of minimum per-goroutine stacks. Now I'm sure they're better in many ways, but I was very impressed with nginx's footprint.


Windows has the same thing, it's just much less exposed, and none of the paths are magical, they're well defined and mostly adhered to for all major distros. The core of how it works in Linux is fairly straightforward.

The main difference is in how additional software is handled. Windows, because of its history with mostly third party software being installed, generally installed applications into a folder and that folder contained the application... Mostly. Uninstalling was never as simple as that might imply.

Linux distros had an existing filesystem layout (from Unix) to conform to, so when they started developing package managers, they had to support files all over the place, so they make sure packages include manifests. Want to know where user executables are? Check bin. Superuser executables? Check sbin (you don't want those cluttering the available utils in the path of regular users). Libs go in lib.

/bin and /usr/bin and the others are holdovers from the long past when disks were small, and recent distros often merge them (symlinking /bin to /usr/bin and so on), so they're different in name only. /usr/local is for admin-local modifications that are not handled through a package. /opt is for whatever, and often used for software installed into a contained folder, like on Windows.

Just know what bin, sbin, lib, opt and etc are for, and most of the rest is irrelevant as long as you know how to query the package manager for what files a package provides, or ask it what package a specific file belongs to. If you looked into Windows and the various places it puts things, I suspect you'd find it at least as complicated, if not much more.
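For example, with dpkg (rpm -ql and rpm -qf are the equivalents on RPM-based distros):

  dpkg -L nginx              # list the files a package provides
  dpkg -S /usr/sbin/nginx    # ask which package owns a specific file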

Note: what I said may not match the LSB (which someone else usefully posted) perfectly, but for the most part it should work as a simple primer.


> I ran some benchmarks and the Go variants (traefik, caddy) eat a lot of memory per conn.

I'm pretty annoyed with how many people on HN are shouting in every thread "caddy is so much better!" when the only material benefit to caddy I can glean from these threads is that it's easier for noobs. Which, to be clear, as far as I can tell, it does a good job of, and it will probably win the next decade over nginx for that reason alone. But nginx really isn't that hard to set up, and I'm surprised there isn't more push back against the pro-caddy narrative. It's just an uphill battle for an application written in golang to be faster than a mature application written in C. Obviously I will continue to use nginx until hard evidence of on-par performance is published, but at the same time I'm more likely to hold out for a competitor written in rust.


> It's just an uphill battle for an application written in golang to be faster than a mature application written in C.

Yes, but Golang can be fast in almost every respect too, with some simple tricks. The main cost is the per-socket goroutine overhead, which does indeed hurt reverse proxies.

> but at the same time I'm more likely to hold out for a competitor written in rust.

Rust with Tokio can theoretically keep overhead very low. But language is not everything: for instance, Envoy also had quite high per-conn overhead when I checked, despite being C++. I assume it's just too many knobs and features.


I think the reason is that you needed to limit the places to look for files and sort them by function in the FHS. The package manager converts a package view into a system view; it kind of "turns the filesystem layout around by 90 degrees". Both views have their pros and cons.

It is annoying however, that configuration is not standardized across distros.


honestly


Making me self conscious, honestly. I’m not a patient and careful writer. That’s why I’m lurking in the comments.


I don't know if not being able to remember filesystem conventions is Linux's fault. Computers have a lot of esoterica and random facts to recall. How is this one any different?

See also: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html


I'm really not a fan of Caddy. It tries to be too smart and make things simple for users, which means that the second something goes wrong or you step out of the expected box, you get weird behaviour which is hard to debug.

Fun example from last week: a colleague was trying out ACME with a custom ACME server, and configured it. For some reason Caddy was not using it and instead used its own internal cert issuer, even when explicitly told to use the ACME provider as configured. Turns out that if you use the .local domain, Caddy will insist on using its own cert issuer even if there's an ACME provider configured. Does that make sense? Yeah, somewhat, but it's the kind of weird implicit behaviour that makes me mistrust it.
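For the curious, this is roughly the sort of explicit per-site config that was being ignored (hostname and ACME directory URL are hypothetical):

  myservice.local {
    tls {
      ca https://acme.internal.example/directory
    }
  }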

My go-tos are nginx for static stuff, Traefik for dynamic stuff.


I bought into Caddy last year for its simplicity. Loved it, from the moment I saw the off-horizontal default page. I switched back to nginx last month because, like you said, I stepped outside of the expected box. Skill issue? Maybe... I transferred my gunicorn webapp from WAN to LAN. No more dotcom, just a 10.x.x.x IP. To say Caddy didn't like it: error messages were non-specific. Community knowledge (ie, Stack Overflow etc.) was lacking. Then nginx... It worked perfectly with an almost-default config. Skill issue or Caddy issue, the point is: Caddy is only simpler sometimes.


What was the config? It should never override explicit configuration...


AFAIK, nginx doesn’t require root. If you’re thinking about the ability to bind port 80/443, you should be able to do that via CAP_NET_BIND_SERVICE.
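For example (the binary path varies by distro):

  sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx

After that, the binary can bind ports below 1024 without running as root.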

With that said, Caddy is pretty rad.


Caddy sounds like the go-to tool for people who care a lot about getting things done. It's time for me to try it.


A coworker of mine dislikes it as it bundles everything into a single binary. For example, to have ACME DNS-01 challenges for certificate issuance working, I need to compile in a Google DNS-specific plugin.

But then it... just works. Try the same with most other web servers/proxies and you're in for a world of pain. Having this much functionality bundled into a single binary is as much a curse as it is a blessing.

That said, having your own little 'Cloudflare Workers' in the form of Nginx Unit with wasm sounds great. Not sure Caddy can do that.


For me, the promise of Caddy and especially tools around it like FrankenPHP make the "everything in a single binary" idea the MORE enticing option, not less.

Sure we already have repeatable infrastructure, containers, etc. but I also love the idea of just building and shipping a PHP app binary that includes the webserver. It makes server provisioning even less of a priority, especially if I have reasons to not use serverless or PaaS tools.


Having a single binary is definitely what drives me to use certain software, Deno is one of them.


It's great until you want to include a non-standard plugin and need to compile your own binaries.

Now that single binary deployment requires you to compile the software yourself. Caddy has nice tooling for this but it'd be far more convenient to just drop a dll/so file in the right directory.

Single binary deployments are great if someone else did the compiling for you. If you need to compile yourself it truly does not matter if you need to ship a single binary or a directory or whatever.


If you want to see a real-life example of what Caddy can do, feel free to check the configuration of my iss-metrics project:

https://github.com/Radiergummi/iss-metrics/blob/main/caddy/C...

I was in the same boat as you and wanted to try out what Caddy is capable of. I was immediately convinced. So many features, where you expect them. Consistent configuration language. Environment interpolation, everywhere. Flexible API. It’s really all there.


At first glance it doesn't look convincingly better than a generic, manually polished nginx configuration. Are there any other benefits to Caddy?


If you choose to start the project with docker compose, you’ll notice how it will immediately bring up a fully functional reverse proxy setup with TLS support on localhost; set the SITE_DOMAIN environment variable to your proper domain instead, and you’ll find that configured as well, along with a proper, ACME-issued certificate. Add a bit more effort, and you’ll also get mTLS for all services automatically.

All of this is more or less doable with nginx, I’ve done it often enough. But read the Caddyfile and tell me this isn’t miles ahead in clarity.


It does all the letsencrypt stuff for you - certbot is not a massive hassle if you're just serving the one domain of course but I really liked it for that when I was setting up a redirect server (corps do love buying TheirBrand.everytld haha)

Set the config up with CI/CD and now I can just edit the config and git push, knowing Caddy will handle the rest


Seems to be a middle ground between doing certs on a small scale with cronjobs and a fully fledged automated Kubernetes cluster.


it is a total replacement for the former and stupidly simple compared to either; gold standard 'just works'


Better Docker integration out of the box, I guess.

I don't use docker so I don't care.


It's a fine project right up to the point of you needing additional functionality that's split out into one of the plugins. Since Go applications do not support proper .so plugins in practice, you have to build your own binaries or rely on their build service, and this puts the responsibility of supporting and updating such a custom configuration on you.

So no setting up unattended-upgrades and forgetting about it.


I think that's what https://caddyserver.com/docs/command-line#caddy-upgrade (and the following commands) are for ;)


> experimental

also totally non-standard, apt unattended-upgrades won't be doing that for you.

sure you can do a cronjob, but, non-standard


I recently set up a Flarum forum and the instructions mentioned Apache and Nginx. I sighed until I saw Caddy immediately below.

Caddy really is the most pleasant webserver software I’ve ever used.


Eh, it's a bit overhyped imo, although I do like the config format and built-in ACME. My production clusters all run nginx though, and give me minimal fuss with a lot of flexibility.


has anyone figured out why caddy is substantially slower (throughput, not latency) than nginx at reverse proxying? i've switched it around for my seafile and it's a night and day difference.


Garbage collection pauses might have something to do with that.


Also the lack of 20 years of optimization: nginx spent a long time as one of the larger open source web servers, and so got a lot of attention.

In 2024 people are more likely to turn the cloud knob up to pay for throughput (if they need it) and save on dev time with the comparably better dev ex that caddy offers.


> In 2024 people are more likely to turn the cloud knob up to pay for throughput (if they need it) and save on dev time with the comparably better dev ex that caddy offers.

This seems like a weird trade off to me.

The "learning tax" is really only paid once with nginx. Once you understand how it works and configured a reasonably end-to-end example with it then you can carry that over to your next project with minimal changes.

I've hosted countless Flask, Django, Rails, etc. apps over the years and very little changes on the nginx side of things. I'd rather learn this tool once and have better runtime performance all the time across all projects.

With that said, the performance difference probably won't be very noticeable for most sites, but still, I personally wouldn't want to give in to running a less efficient solution when I know a more efficient solution exists right around the corner that requires no application code changes to use -- just a little elbow grease to configure nginx once. This is especially true when nginx has a ~20 year track record of stability and efficiency.


Right. Every time nginx comes up, someone has to bring up how much better caddy is. After using nginx everywhere for nearly two decades I have no desire to learn a new tool. Nginx does everything I want, even some exotic stuff, and has plugins for certain use cases. It is highly configurable, has plenty of good documentation, is well supported in every distro, and is extremely performant. I don't care how much easier caddy is and that it can configure certs for me. I prefer the unix philosophy anyway, and it's not like I'm spending a significant amount of time on nginx configs or certs. I use acme.sh for certs; it only takes a couple of minutes to provision a new instance with nginx and acme.sh, just the way I want it. End rant.


> I don't care how much easier caddy is and that it can configure certs for me.

I always find this a weird selling point, TBH.

It's probably a selling point for people who don't already know the existing $FOO.

For me, putting effort into learning the new thing only to use it exactly as the old thing is wasted effort.

I don't know what I gain by moving to the new $FOO, usually.


Sure, but your list there describes a workflow that is becoming a bit dated.

I would wager the vast majority of Nginx "installs" are running in a container nowadays.

The distro doesn't matter, and few are provisioning an instance of anything; that's some container orchestration job.

Last week I was trying to coax nginx into setting a CSP nonce in a web app's index.html, which apparently meant I would need to custom-build Nginx, or custom-build a container with a custom-built Nginx, to install a plugin to do it. This type of stuff adds up, and having a bunch of stuff hidden in Nginx Plus doesn't help either.

I think Nginx is a great piece of software; it's just that people don't need all its offerings. They just want to host some tiny JS and proxy to an API, and things like caddy were built for that. The limited throughput doesn't matter when Cloudflare or CloudFront cache most of the things it is serving anyway.


I'm pretty much always in a container too, and I'm perfectly happy with the workflow, it scales just fine and can be orchestrated as well. When I'm deploying a new project, bringing up a new container with nginx is like 0.1% of the work, why mess with that? I like Debian, I use it in my containers. Yes there are slimmer and lighter things, I don't care, my stack is rock solid.

As for the CSP nonce, I'm surprised that there isn't a plugin, but compiling isn't a big deal, just annoying. Alternatively, NGINX is scriptable, or you can do it at the application level as well. If caddy is easier for you or for that use case, then that's great, use it with my blessing.


I'd consider only pushing 20Mbps on a 2.5GbE network more than a lack of optimization. Supposedly you can tune some buffer sizes to make it better, but it's still laughably bad for serving larger files.


Go isn't Java; especially after the GC rewrite in 1.5 (and smaller-scale changes in 1.19), the GC doesn't pause long or often enough to affect throughput.


Unlikely.


These are things for sure, but nginx config files are well understood by LLMs so I get good advice from them. That's really the limiting factor for most equivalent tools for me these days, how well the LLM handles it.

If someone hooks them up to a man page I think it might level the playing field.


I also prefer caddy with the wazero plugin to run WASM.

So easy and works on everything


Unfortunately, Caddy does not support, and does not plan to support, anything other than HTTP/HTTPS. These days I find myself going back to nginx only for TCP/UDP reverse proxying.


It supports it with the caddy-l4 plugin: https://github.com/mholt/caddy-l4. It was also indicated that the plugin might move into standard Caddy once there's enough feedback from the user base and the implementation is considered solid.


Hi! I'm currently in charge of Unit. If you're using it, I'd love to chat with you to understand what sucks, what doesn't, and what's missing. I have my own ideas, but external validation is always nice. :)

Contact info is in my profile.


Neat! What is the benefit of using this over "standalone" nginx? The HTTP API enabling configuration change at runtime without downtime (like Caddy)? No need for a process supervisor like supervisord or systemd as nginx Unit is managing the backends?


It’s pretty much like Caddy vs. nginx: Language runtime, static asset serving, TLS, routing and so on bundled in a single package. That makes it very easy to deploy a container, for example.

Think of a typical PHP app, which exposes both dynamically routed endpoints and static assets. With a traditional setup, you'd let nginx handle all paths as static assets and fall back to the index.php file to serve the app. When you package that as a container, you'll either have to use separate PHP-FPM and nginx containers, or run two processes in a single container, neither of which is ideal. And it gets ever more complex with TLS, and so on.

Using unit or caddy, you can simplify this to a single container that achieves it all, easily.
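A rough sketch of what that single-container setup looks like in Unit's JSON config (paths and the app name are made up):

  {
    "listeners": {
      "*:8080": { "pass": "routes" }
    },
    "routes": [
      { "match": { "uri": "/assets/*" }, "action": { "share": "/www/public$uri" } },
      { "action": { "pass": "applications/myapp" } }
    ],
    "applications": {
      "myapp": { "type": "php", "root": "/www/public", "script": "index.php" }
    }
  }

Unit serves /assets/* as static files itself and sends everything else to the PHP app's front controller.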


What is Caddy's language runtime? Which languages does it support? Or are you thinking of the FrankenPHP plugin?


Caddy can either use something sophisticated like FrankenPHP, which I very much look forward to using soon now that it seems stable, or a regular old FastCGI SAPI.


But nginx also supports FastCGI, and you need to run the FastCGI server as a separate process (like php-fpm), right?

I don't see how caddy (without stuff like frankenphp) is any closer to a complete single binary reverse-proxy AND language runtime than nginx.


Unit has its own SAPI for PHP and executes it directly, no php-fpm needed. I’m using it to serve Wordpress right now, works pretty well.


Right, but the parent comment seemed to imply that the same was true for caddy. I was asking what Caddy's language runtime was (besides via plugins like frankenphp)?


Plugins would be it, so you could say Go is Caddy's runtime. Which is of course duh, but since the official mechanism to extend it is by statically compiling in go code, it's also accurate. It's not like nginx and apache are that much different, their "language runtimes" also boil down either to extensions linked into the server or proxying to a backend through another protocol like FastCGI. Caddy supports fcgi out of the box, even using PHP's default settings with one line of config, but I'm not a big fan of php-fpm, and I like having just one daemon to supervise.
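That one line of config being something like (the FPM socket path varies by distro):

  php_fastcgi unix//run/php/php-fpm.sock

inside a site block; it sets up the usual index.php/try_files front-controller handling for you.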


So walking back to the comment (https://news.ycombinator.com/item?id=40543839) this started at:

> It’s pretty much like Caddy vs. nginx: Language runtime, static asset serving, TLS, routing and so on bundled in a single package. That makes it very easy to deploy a container, for example.

> Using unit or caddy, you can simplify this to a single container that achieves it all, easily.

With caddy this is not true, unless you have compiled in your own plugin (custom or frankenphp), right?

All I was asking is what they thought the language runtime for caddy was.


Exactly, they're complements: you'd deploy your application on Unit and put that behind NGINX or another reverse proxy like Caddy or Traefik.

Unit can serve static assets, directly host Python / PHP / WebAssembly workloads, automatically scale worker processes on the same node, and dynamically reconfigure itself without downtime.

Unit cannot do detailed request/response rewriting, caching, compression, HTTP/2, or automatic TLS... yet. ;)


It's an app server. It can run your asgi or wsgi app.


For me I'd rather ship a single binary with PHP support in it when using containers.


Can you elaborate on that? Especially, where is the php runtime and the webserver?


For some reason I had a thought lodged in my head that Unit wasn't open source, but I just checked their GitHub repo and it's been Apache 2 since they first added the license file seven years ago.

I must have been confusing it with NGINX Plus.


“Oops, sorry, thank you for letting us know, we will change that to the proprietary license instead”


I wouldn't bet on that. :)

F5 isn't the most visible corporation in terms of grassroots engagement, but NGINX itself has remained F/OSS all these years and newer projects like the Kubernetes Ingress Controller [0], Gateway Fabric [1], and NGINX Agent [2] are all Apache 2.0 licensed. Just like Unit.

We do have commercial offerings, including the aforementioned NGINX Plus, but I think we've got a decent track record of keeping useful things open.

[0]: https://github.com/nginxinc/kubernetes-ingress

[1]: https://github.com/nginxinc/nginx-gateway-fabric

[2]: https://github.com/nginx/agent


Ok, seems better than the industry then :)

I have trauma from Aerospike, Redis, and a couple of others, so it may have affected my perception.


What's wrong with Redis? It is still open source, as I understand, and you can use it in non-commercial projects without any problems.


I don't know the other one, but what's your gripe with Redis, exactly? Can you articulate it?


Usually it's done on purpose: they wait until it gets very popular and used everywhere before pulling the rug


I tried a setup with Nginx Unit and php-fpm inside a Docker container, but the way to load the config is so cumbersome I was never confident enough to use it in production. It feels like I am doing something wrong. Is there a way to just load a config file from the filesystem?


We're very actively working on improving Unit's UX/DX along those lines. Our official Docker images will pick up and read configuration files from `/docker-entrypoint.d/`, so you can bind mount your config into your container and you should be off to the races. More details at https://unit.nginx.org/installation/#initial-configuration

But that's still kinda rough, so we're also overhauling our tooling, including a new (and very much still-in-development) `unitctl` CLI which you can find at https://github.com/nginx/unit/tree/master/tools/unitctl. With unitctl today, you can manually run something like `unitctl --wait-timeout-seconds=3 --wait-max-tries=4 import /opt/unit/config` to achieve the same thing, but expect further refinements as we get closer to formally releasing it.


That sounds much better, thanks for the effort.


https://unit.nginx.org/howto/docker/#apps-in-a-containerized...

> We’ve mapped the source config/ to /docker-entrypoint.d/ in the container; the official image uploads any .json files found there into Unit’s config section if the state is empty.


I saw that, but I do like to make my own container. So I did roughly the same steps as they do. But it feels complicated.


Can you copy the official image's script? https://github.com/nginx/unit/blob/0e79d961bb1ea68674961da17...


I am building https://github.com/claceio/clace. It allows you to install multiple apps. Instead of messing with routing rules, each app gets a dedicated path (can be a domain). That way you cannot break one app while working on another.

Clace manages the containers (using either Docker or Podman), with a blue-green (staged) deployment model. Within the container, you can use any language/framework.


The docs mention:

> The control API is the single source of truth about Unit’s configuration. There are no configuration files that can or should be manipulated; this is a deliberate design choice

(https://unit.nginx.org/controlapi/#no-config-files)

So yeah, the way to go is to run something like `curl -X PUT --data-binary @/config.json --unix-socket /var/run/control.unit.sock http://localhost/config/` right after you start your nginx-unit.

How to manage that separate config step depends on how you run the nginx-unit process (systemd, docker, podman, kubernetes...). Here's an example I found where the command is put in the entrypoint script of the container (see toward the end): https://blog.castopod.org/containerize-your-php-applications...


I did that, but sometimes it takes a short moment before Unit is started, so you need a loop to check whether Unit is responding before you can send the config. In total it was around 20 lines just to load the config. It feels like I'm doing something wrong. Or using the wrong tool.
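For reference, a sketch of such an entrypoint wait loop (socket and config paths as in the parent comment):

  #!/bin/sh
  # wait until Unit's control socket answers, then push the config
  until curl -s --unix-socket /var/run/control.unit.sock http://localhost/ >/dev/null; do
    sleep 1
  done
  curl -X PUT --data-binary @/config.json \
    --unix-socket /var/run/control.unit.sock http://localhost/config/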


Ouch

From the NGINX Unit website: "Also on Linux-based systems, wildcard listeners can't overlap with other listeners on the same port due to rules imposed by the kernel. For example, :8080 conflicts with 127.0.0.1:8080; in particular, this means :8080 can't be immediately replaced by 127.0.0.1:8080 (or vice versa) without deleting it first."

Systemd (PID 1) also needs to stop opening network sockets itself. In short, systemd now needs a systemd-socketd to arbitrate socket allocations.

I am never a fan of PROM, firmware, or PID 1 opening a network socket; it is very bad security practice.


I've been dabbling with Unit when I've had some downtime over the last few days. It's definitely compelling and I quite like that it could potentially replace language-centric (I know there are ways to bend them to your will ...) web servers like gunicorn, unicorn, Puma, etc. It's also compelling that you can easily slot disparate applications, static assets, etc. alongside each other in a simple and straightforward way within a single container.

As others have said and the team has owned up to, the current config strategy is not ideal but the Docker COPY strategy has been working well enough for my experiments.

The other somewhat annoying part of the experience for me has been logging. I would want access logs to be enabled by default and it'd be great to (somehow) more easily surface errors to stderr/out when using the unit Docker images. I know you can tap into `docker logs ...` but, IMO, it'd be ideal if you didn't have to. It's possible there's a way to do this at the config level and I just haven't come across it yet.

Also, and I know this is a bit orthogonal, but it'd be great if the wasm image exposed cargo, wasmtime, et al so you could use a single image to build, run and debug* your application. *This was a pain point for me and I got hung up on a file permissions issue for a few hours.

On the whole, though, I think it's pretty compelling and I was able to stand up Flask, Django and Rust WASM applications in short order. I'm planning to add some documentation and publish the code samples as time permits.


How does this compare to OpenResty? Could it somehow help with OIDC support (e.g. by integrating a relevant nodejs lib)?


Configured at runtime via REST?? What happened to infrastructure as code?

Is this some property required for something like your primary load balancer entry point, that can never be restarted and can’t have rollout restarts?


Django without gunicorn? I’ll give it a go…


I am using NGINX Unit with Django in a bunch of production workloads, with high traffic. Works really well!

The most time spent was building from source in Docker for ARM support and going down a rabbit hole of targeting the minor Python version that apt was installing on some Debians, w/o a virtual env, instead of the one it defaulted to.

I'm a fan. High performance, easily configurable ASGI server for so many flavors.


I'm wondering what are the pros and cons of that vs this

Years ago every website needed Apache or Nginx, then lately I've hardly used it at all... usually have a container with gunicorn behind a load balancer that is part of the cloud platform

It's easy to see how to get Nginx Unit working, but not sure exactly how it fits into the utility picture vs other options


Nginx is a reverse proxy; it can work as the load balancer, the static asset server, a response caching server, an authorization server, an HLS streaming server, etc.

Nginx covers a lot of use cases in one package; most larger companies and more technical workplaces use Nginx or similar alternatives.

Nginx Unit, by contrast, is typically meant for unifying the entire application space of multiple programming languages under one server (which also acts as a static server).

So you can serve Golang, Python, and PHP applications all under one single wsgi/asgi application server called nginx unit, and dynamically change its routes with API calls too (see the sketch below).

This allows you to have one root process in a docker container that controls your Python FastAPI or Golang API processes, without needing one container for nginx and one container for the Python/Golang process, or a supervisord-like init system controlling two or more processes inside one container.

Everything is under nginx unit, and nginx unit is the main process inside the container.
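As an illustration of that dynamic reconfiguration, swapping what a listener passes requests to is a single call against the control socket (the listener and app names here are hypothetical):

  curl -X PUT -d '"applications/fastapi"' \
    --unix-socket /var/run/control.unit.sock \
    'http://localhost/config/listeners/*:80/pass'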

Moreover, it is also much faster in terms of response time than most language-dependent application servers like gunicorn/unicorn for Python 3, etc. [1]

[1](https://medium.com/@le_moment_it/nginx-unit-discover-and-ben...)


unit replaces gunicorn. It should also be much faster, but run your own tests.


I’ve largely enjoyed gunicorn. What do you dislike about it?


Oh nothing really, it's just another cog is all. I'm running nginx because there's some things I want to serve statically (and it does a couple of other boring tasks really well too), so if I could just point it at the environment and tell it to go for it then it would just be a bit less messing around.

I'm connecting nginx to gunicorn through a unix socket too and found that to be a bit of a pain.
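For anyone hitting the same pain, a minimal sketch of that wiring on the nginx side (the socket path is arbitrary; gunicorn would be started with --bind unix:/run/gunicorn.sock):

  upstream app {
    server unix:/run/gunicorn.sock;
  }
  server {
    listen 80;
    location / {
      proxy_set_header Host $host;
      proxy_pass http://app;
    }
  }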


I largely agree here that of all the components in a stack, gunicorn seems to be the least troublesome and almost invisible.

I have never had a problem that I would have traced back to gunicorn not working ...

On the other hand not having to run gunicorn as another separate service might be an advantage.


There is a certain irony that after the application servers bashing phase, a decade later everyone is doing their own version.


What is an application server exactly and who besides nginx is building one?


Apache with mod_tcl, mod_perl, mod_php, Websphere, JBoss (now Wildfly), Payara, IIS/.NET, Erlang/BEAM...

Basically a full stack experience from programming language, web framework and networking protocols, possibly with a management dashboard.

As for who is building one: everyone that is now trying to sell the idea of packaging WebAssembly into containers, deploying them into managed Kubernetes clusters, or alternatively cloud-managed applications, like Vercel, Netlify, Azure Container Apps, Cloud Run, ...


> everyone that is now trying to sell the idea of packaging WebAssembly into containers, deploying them into managed Kubernetes clusters, or alternatively cloud managed applications

Does Nginx Unit really fit into this picture though?

Is there a place for an all-in-one app server in that scenario, I would have thought they want each component to be separated (wasm host, load balancer, etc etc) for commoditisation and independent scaling of different layers

(This is not a criticism in form of a question... I am honestly curious)


Absolutely keep your load balancer for multi-node scaling, but how are you going to run your WebAssembly workloads within a given node? Unit can do that.

Or what if you have a single logical service that's composed of a mix of Wasm endpoints and static assets augmenting a traditional Python application? Unit pulls that all together into a single, unified thing to configure and deploy.

If you're writing Node, Go, or Rust you haven't had to think about application servers for a long time. Folks writing Python and PHP still do, and WebAssembly will require the same supporting infrastructure since Wasm -- by definition -- is not a native binary format for any existing platform. :)


Well there are other dedicated "WASM in k8s" solutions like SpinKube

and my Python apps have not been behind Nginx for a long time, they're mostly wrapped in a zero-config gunicorn runner in a Docker container, static assets in S3 via a CDN

am wondering who wants a single-node heterogeneous application server these days

TBH the simplicity of it is appealing though


IMHO, it's still a few years too early for pure-play Wasm solutions, though Fermyon is doing exceptional work to manifest that future.

My hope is that Unit can offer a pragmatic bridge: run your existing applications as-is, and when you want to sprinkle in some Wasm, we're ready. That's not to say Wasm is Unit's only use case, but I do believe it's what will get people thinking about application servers again. :)

> my Python apps have not been behind Nginx for a long time, they're mostly wrapped in a zero-config gunicorn runner in a Docker container, static assets in S3 via a CDN

...and are there any reverse proxies, load balancers, or caches on the network path between your end user and your container? ;)


yes there are of course, but typically like an ALB from the cloud platform


We've been using Nginx Unit in production for a Python backend for about a year now and it's been working pretty well. Some thoughts in no particular order:

- "Nginx Unit" is an annoying name to google when you have a problem. You get a lot of nginx results that are of course completely irrelevant to what you're looking for, as there is zero actual overlap between the two things. Using quoted search terms is not sufficient to overcome this.

- When it works, the all-in-one system feels great.

- However, sometimes the tightly-coupled nature can be slightly annoying. For example, they publish packages for the various runtimes (ours is Python) in the various registries, but only for the defaults. Concrete example: we are currently running Ubuntu 23.04 but wanted to upgrade to Python 3.12. However, Nginx Unit only pre-packages a Python 3.11 package for Unit on Ubuntu 23.04, as that is the system-included Python. We had to build our own support from source, which was fairly easy, but still more difficult than our pre-Nginx-Unit infra, where all I would have to do is install Python 3.12 and I'm good to go (because the Python runtime wasn't at all coupled with the webserver when our stack was Nginx + Python + Gunicorn)

- I never properly benchmarked the two in a comprehensive comparison, but Nginx Unit is definitely faster than the aforementioned previous stack. I tested some individual routes (our heaviest/most important) and the speedup was 20-40%.

- Even when I tell it to keep a minimum number of worker processes around, it kinda seems... not to? (See the config sketch at the end of this comment for the knob I mean.) I haven't properly tested, but sometimes it feels more like serverless, where if I haven't sent a request in a while it takes a bit of extra time to spin up a process, but after that requests are snappier. Definitely need to properly investigate this but haven't gotten around to it yet. It might just be the difference between allocated memory and not, rather than spinning up processes.

- It's a shame it doesn't support SSL via Let's Encrypt out-of-the-box, like Caddy. To me that is the biggest (really only) missing piece at the moment.

- I much prefer using the HTTP system to manage config than files, and find the Nginx Unit JSON config much, much more readable than either Nginx or Apache configs I've worked with in the past. I'd also give it a slight edge over caddy configuration.

- That said, managing the config (and system in general) can sometimes be annoyingly opaque. The error messages are somewhat terse when updating config fails, so you need to dig into the actual logs to see the error. Just feels a little cat-and-mousey when you could just tell me what the error is up-front, when I'm sending the config request.

In summary, overall I've liked using Nginx Unit, but wish they would: change the name to something less confusing, add built-in Let's Encrypt support ala Caddy, and make the errors and overall stack feel a little less opaque / black boxy.
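Regarding the worker-process point above, this is the fragment of an app's config I mean (values illustrative; my understanding is that "spare" is the number of idle processes Unit keeps warm):

  "applications": {
    "myapp": {
      "type": "python",
      "module": "app",
      "processes": { "max": 8, "spare": 2, "idle_timeout": 20 }
    }
  }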


Have a look at Jucenit. It bundles Let's Encrypt SSL certs with Nginx Unit.

https://github.com/pipelight/jucenit


Oh nice. So we use Ansible for deploys, and I ended up making a simple playbook to request a new cert whenever we deploy. It works well enough but I still wish it was automatic. I also find it mildly inconvenient that I can't hotswap in a new cert, which was discussed in this[1] GH issue.

[1] https://github.com/nginx/unit/issues/195


Just trying to understand. In the case of a Java app, would this replace tomcat or jersey or similar?


I'm no Java expert but I believe Unit would replace Tomcat, Jetty, etc. (i.e. the service handling HTTP requests) and would be responsible for running/managing/forwarding requests to a Jersey application.


I read this and expected some sort of unit testing for nginx configurations.

I'd love to have something like that: provide a configuration and automatically check all the paths that the configuration enables. Maybe throw in some LLM for some comments and tips to improve performance/security.


Another dud from the people that bought stolen code.

Nginx is awful, archaic software.


Nginx is a state machine that efficiently handles lots of L4-L7 protocols. Seems weird to feel any emotions about it.


What do you use instead?



