Those benchmark outcomes are ridiculously in nginx-unit’s favor over php-fpm, way more than I would have believed was possible. What is php-fpm doing architecturally that is so different to warrant such poor relative performance?
I too would like to see a properly configured php+php-fpm container benchmark. There's a lot of overhead when you connect over HTTP instead of a Unix socket within the same container.
Binary-compiled languages in general: using the embedded libunit library.
Go: by overriding the http module.
JavaScript (Node.js): by automatically overloading the http and websocket modules.
Java: by using the Servlet Specification 3.1 and WebSocket APIs.
Perl: by using PSGI.
PHP: by using a custom SAPI module.
Python: by using WSGI or ASGI with WebSocket support.
Ruby: by using the Rack API.
WebAssembly: by using Wasmtime.
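For anyone who hasn't seen Unit's config format, here's a rough sketch of what registering two apps in different languages looks like (the app names and paths are made up; the exact per-language options are in the docs):

    {
      "listeners": {
        "*:8080": { "pass": "applications/legacy_php" },
        "*:8081": { "pass": "applications/py_api" }
      },
      "applications": {
        "legacy_php": {
          "type": "php",
          "root": "/srv/legacy"
        },
        "py_api": {
          "type": "python",
          "path": "/srv/api",
          "module": "wsgi"
        }
      }
    }

Each listener just points at an application (or a routes chain), and the "type" field selects the language module.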
I'm currently using Apache as a reverse proxy for most apps and sometimes PHP-FPM when I need to run software with PHP, which works well for my personal needs (mod_md and mod_auth_openidc are pretty cool), but it's cool to see something as interesting as OpenResty coming along as well!
Why the obsession (it seems to be the prominent point in the readme) with configuration via API? How often do you need to add php support on the fly? I want to configure my app server via files so it just starts up in the state that I expect. What am I missing?
Probably the most common use case is SaaS providers that support custom domain names for whatever the software is. For example, a site uptime monitoring service might offer a feature to host a status page on a custom (sub)domain of the customer. The SaaS now needs to programmatically create virtual hosts on demand, issue HTTPS certificates, run routine updates, etc.
An API and a web server with small segmented updates make this so much easier. Compare this to Apache, which has to wait for existing connections to end cleanly before reloading, has config-file parsing overhead, and probably does not scale that well with many virtual hosts anyway. There are hardware/filesystem-level limitations as well.
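To make that concrete with Unit: the per-customer vhost is just another route entry pushed to the control socket, roughly like the fragment below (hostname and app name invented for illustration). If I remember the API correctly, you can POST such an object to /config/routes to append it without touching the rest of the config, and certificate bundles are uploaded to a separate /certificates section.

    {
      "match": { "host": "status.customer-a.example" },
      "action": { "pass": "applications/statuspage" }
    }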
> How often do you need to add php support on the fly?
Restarting the binary means you'll lose requests while it's restarting, so adding php (or whatever) support on the fly is what you need when running a system where losing those requests is material. Which it won't be for most people, but for, eg, Google (who don't use Nginx), losing those requests is a problem.
Although it doesn't directly go against what you're saying, many Unix daemons have supported HUP signals for decades which can achieve the same outcome. No need to configure via API, just change the configuration on disk and send HUP.
I suppose arguably that becomes a bit trickier for containers, so perhaps that's why you'd want to configure via an API?
There is a slight difference to me, as having another service running (the API) is an additional attack vector to worry about from a security perspective.
I would also say it's easier to enforce good "IaC hygiene" when the configs are managed via configuration files. They can go through a code review process, deployed via existing config management systems etc.
Any half decent containerized setup should support zero-downtime deploys. Usually it involves bringing up new containers and signaling the existing containers to begin draining connections.
For most workloads it should be entirely possible to deploy a new stateless config and not need to resort to using mutable state for critical infrastructure.
If you have long-lived, stateful connections (perhaps for live streams) then I can see why re-configuring in place would be desirable, but in my experience that's pretty rare.
Neither Apache nor nginx require a restart to add php support, and neither will lose requests under normal operation. They will however parse the complete config on a reload operation. On huge configurations this is noticeable.
>Why the obsession (it seems to be the prominent point in the readme) with configuration via API?
Infrastructure As Code (in all its forms, chef/puppet/ansible/tfe etc.) is the standard for all enterprise cloud setups these days. It makes sense to support that as a first class feature.
I think they intend for you to run something idempotent like Ansible that coerces configs into the intended state.
In this case, I think Nginx is trying to avoid the in-fighting that happens between platform teams and the various teams that they serve. Most platforms with static configuration don't make it easy to ACL the platform so that one user can't mess with another user. E.g. it would be hard to automatically prevent one user from trying to take another's domain name.
This config API could be ACL'ed so that you can update your application code, but not change its domain name or IP. Or whatever else the platform team wants to cut off. This hopefully comes with ACLs like that, but if it doesn't you could always add it in a standard way with a reverse proxy that has ACLs. That's a lot easier to do than trying to write an nginx config parser to make sure platform users aren't tinkering with a particular setting in their config.
It's not hard to imagine a use case where there are backend configurations stored in a database somewhere and you want to apply them. I'm picturing it as data vs. static configuration.
Why? To scale infra of my offering to my customers. They need it too. I’d like for my remaining customers to not suffer downtime. I’d like to use the existing infra I have without spinning up new ones. I’d like to offer a dashboard so my customer can configure their host.
If the state you need at startup isn't the same as you need for production, this could be incredibly useful. It also means that you can save a lot of time starting and stopping containers for many common configuration changes in production. There's a lot more utility in this than just PHP.
Perhaps adding support for PHP on the fly is an extreme case, but reconfiguring eg load balancer backends when new systems come and go without having to render a config file and HUP (and hope) is a typical case.
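For the load-balancer case specifically, Unit models this as an "upstreams" object, so adding or removing a backend can be a small PUT/DELETE against /config/upstreams rather than a full config render plus HUP. A sketch (addresses and names are placeholders):

    {
      "upstreams": {
        "backend": {
          "servers": {
            "10.0.0.11:8000": { "weight": 2 },
            "10.0.0.12:8000": { }
          }
        }
      }
    }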
Trying to find context on what it was, I saw it’s been on HN a few times (mostly to the sound of people asking the same question).
Since I mostly do php/laravel, I was pleasantly surprised that it let me remove php-fpm from my stack - extremely nice when putting PHP apps in a container.
I personally have come to dislike PHP-FPM’s max_children and similar settings, whose low defaults come as an annoying surprise when you suddenly get gateway errors before your server is even overloaded. (And having to add both nginx AND php-fpm “layers” to a container to run PHP apps is all sorts of annoying.)
Unit doesn’t have as many features as Nginx so I suspect it’s best used when there’s another fully-featured http layer in front of Unit (to more easily handle TLS, protecting dot files, gzip, cache headers, and the like).
Overall I’m excited to use it when making images for PHP apps.
Tested this with a moderately complex PHP Laravel app and got a 40% speed improvement. Very useful to be able to run multiple apps each using different lang runtimes/versions without needing separate docker containers.
Compared to nginx and PHP-FPM installed on a host machine. All running opcache, JIT etc. Was pleasantly surprised. Note this is up to 40%; not all apps we’ve run have seen the same improvement.
We tried roadrunner (similar thing), but couldn't do it without rewriting certain bits of the app (anything with globals basically) whereas unit was a straight swap. My guess would be, if you're properly using the advantages swoole can bring (like shared memory cache etc), it will outperform unit.
So much JSON... is that the real thing, as in comments are the gateway to hell, or is it actually some practical JSON superset like JSON5?
I almost see myself jury-rigging some bespoke filesystem on-ramp to that REST configuration interface. Likely with at least half a dozen scary security compromises.
Then on the other hand I guess they've never been afraid of getting called opinionated and that's certainly much better than trying to be everything, for everyone, at the same time.
JSON without comments is hell, forcing one to 1) convert JSON configuration to YAML. 2) add comments to YAML and admire it. 3) convert YAML to JSON for sending to the server.
I literally cannot count the number of times significant whitespace has broken a YAML file I've edited, followed by a completely useless error on line 1. I don't have this problem in Python. I loathe YAML.
This is just my personal opinion, but I think YAML is more intuitive but has a lot more footguns and gotchas, while TOML is more consistent but less obvious (ironically, given the name.)
This may contribute to YAML's popularity in CI systems, because CI systems often need to be adjusted by someone who isn't super familiar with the syntax, and YAML is easier to intuit than TOML.
Fortunately it's only forcing one to convert to a JSON variant that does not forbid comments (which can be as simple as wrapping the entire thing in console.log(JSON.stringify(...))). I'll happily leave the YAML authoring to YAML inventors; supposedly it makes them happy.
I like to think of JSON as a low-level configuration language that most people should generate from a higher-level language such as Python or JSONnet — sort of like assembly is to C. JSON has lots of things to like about it, but I don’t recommend generating it manually for most people for files larger than a few lines.
Thanks for linking to that page. It is very funny. I highly recommend that others read it.
Choice quote:
The benefits of using the assembly language for your web apps are immense:
You get all the bragging rights: coding in assembly while everyone is working safe and secure within the confines of their precious sandboxed languages is like doing a perfect triple somersault without safety mats.
Next: I recommend multi-threaded assembly code with self-modifying code and lock free data structures!
On Linux, where this is primarily expected to be used, ASP.NET is extremely rare, as is C++ for web stuff, and Rust still has a vanishingly small web presence. Lots of people still use Perl though, if for nothing else then for legacy stuff.
So might as well ask why it doesn't support Delphi.
I can’t say I know the percentage of Linux servers running ASP.NET Core but the percentage of ASP.NET Core apps running on Linux is no longer negligible. Bing runs it on Linux, iirc.
This isn’t the dark days of .NET Framework anymore.
And PSGI is actually really rather good (for sync stuff, I dislike the way it handles async/websockets and would tend to use Mojolicious for that) and generally used to deploy OO MVC style apps.
The Perl ecosystem has come a long way from CGI scripts just like everybody else has.
This is anecdata, but my experience with modern ASP.NET teams these days is that while production may still be running Windows, the dev teams are a mix of Mac, Windows, and Linux/WSL. The allure of the M2 MBP is too great when the Windows option is an expensive but otherwise forgettable Dell corporate laptop.
It uses s6 to handle nginx + php-fpm services, meaning we have to maintain configuration for s6 services (written in execline), nginx webserver configuration files (nginx proprietary format) and php-fpm (INI format).
I've created a fork that replaces that stack with nginx unit:
No need for a service manager, and both webserver and php configuration are in one file[1]. What is not well documented IMHO is that you don't have to load the config at runtime: if you place the JSON file in the (compile-time changeable) /var/lib/unit directory, it will be loaded on startup, similar to a traditional /etc config file. But it will also get updated at runtime if configuration is uploaded via the config socket, making it persist across service restarts, hence /var is more appropriate than /etc.
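For reference, a stripped-down sketch of what such a single file can look like (paths and names here are illustrative, not the actual fork's config; if I recall, on recent Unit versions the "share" path has to include $uri): static assets are served directly, and everything else falls through to the PHP app.

    {
      "listeners": {
        "*:8080": { "pass": "routes" }
      },
      "routes": [
        {
          "match": { "uri": ["/assets/*", "*.css", "*.js"] },
          "action": { "share": "/var/www/html$uri" }
        },
        {
          "action": { "pass": "applications/app" }
        }
      ],
      "applications": {
        "app": {
          "type": "php",
          "root": "/var/www/html/public",
          "script": "index.php"
        }
      }
    }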
> What is not well documented IMHO is that you don't have to load the config at runtime: if you place the JSON file in the (compile-time changeable) /var/lib/unit directory, it will be loaded on startup, similar to a traditional /etc config file. But it will also get updated at runtime if configuration is uploaded via the config socket, making it persist across service restarts, hence /var is more appropriate than /etc.
Thank you very much for this example, and calling out the config file option.
Do you find that you get any use from the config api in your setup (vs pushing a new container)?
I imagine it might be more useful if nginx unit acts as a k8s-ingress-like setup?
As for:
> both webserver and php configuration are in one file[1].
Not that difficult, but it’s still a separate dependency (with python requirements). If the goal here is a Caddy competitor, then IMO it’s missing the mark in terms of “one stop shop”. What’s the killer feature?
On the python requirements: I've always been satisfied with Bash implementations of ACME. I use dehydrated from the very beginning (when it was still called letsencrypt.sh) and lately started using acme.sh.
This thing will probably run in a bunch of VMs/containers behind a load balancer, so it's actually better that it doesn't try to obtain any certificates by default.
Even a small number of apps trying to get their own certificates at the same time can exhaust the Let's Encrypt quota for your domain, with serious consequences to your other online properties.
> Even a small number of apps trying to get their own certificates at the same time can exhaust the Let's Encrypt quota for your domain
I got curious about this and decided to look it up; they actually have more restrictions than I expected[1]. Looks like to play it on the safe side you might be better off having a single server issuing certificates and distributing them where needed, as well as using wildcards as much as possible. Interestingly, it looks like Google is doing just that since their certificate covers a very wide range of domains[2].
Can you feasibly run this behind nginx (either over http in a regular reverse proxy setup, or with some sort of cleverer shared-memory forwarding) so you could, e.g., use nginx-unit instead of php-fpm but keep everything (ssl termination, rate limiting, etc) centralized behind your core nginx load balancer?
This is a webserver that can run various web applications for you. In a typical setup you'd run a web/http server (nginx or Apache etc) to handle the 'slow' connections to the client, plus an app server (say, an Express application, or Django or Rails etc) that is completely separate from the webserver. The web server acts as a reverse proxy, sending the request on to the app server. The response from the app server is then passed to the webserver, which sends it back to the client. So a request would take this path: client -> web server -> app server -> web server -> client.
This is done because web servers are optimised to handle (many) requests from clients (including things like TLS termination, optimised handling of static files, deciding which backend service to send a request to, and much more), while application servers are optimised to run the application code.
In this model, the web and app servers run separately, for example in separate containers and they have to talk to each other so you have to somehow connect them (often this is done through a network socket).
So what Nginx Unit does is combine the two: it can do the standard web server stuff (not everything a normal nginx can do, but often that's not needed anyway), but also run your application for you, which could, depending on your setup, make your life a bit easier, as now you don't have to tie the two together.
> NGINX Unit – universal web app server – a lightweight and versatile open source server project that works as a reverse proxy, serves static assets, and runs applications in multiple languages.
This is an interesting idea, but I feel like it is a big oversight not to have built-in support for automatically getting certificates.
Instead the docs have you do something manual with certbot (a complete no-no if you believe in automatic SSL and are using docker images that don't persist data, as Docker is meant to be used).
The only reason I have SSL on my domain in the first place is that my host offers a simple setup that runs automatically.
Which means now you can't do a clean deploy of a new version, you can't be sure that what works on one machine is the same as what works on another etc.
I built something like this, but with much simpler syntax and automatic https simply bc I'm not smart enough to reliably set up server blocks and letsencrypt every time I route a new app.
AppServe takes care of all that, and even if I did know the syntax by heart to do it in nginx, this is still faster.
Nginx Unit helped me out of a hole recently - needing to deal with a legacy Python 2.7 web application, while trying to keep it as 'supported' as possible (ignoring the whole python 2.7 thing for the moment!).
Nginx Unit, with their python 2 module, on Debian 11 ended up working pretty well.
I've been tinkering quite a bit with WasmCloud and WasmEdge lately, for some data workloads (scraping, ingesting, apis etc) mainly as an excuse to learn WebAssembly.
What I want to see for myself is how Unit compares to those for this reduced scope (APIs/services) and how fast I can be productive with it. This could be a good starting point for a lot of new applications, kind of an "airflow (the data orchestration thing) on WebAssembly".
So main points are performance (is it good enough) and developer experience (is it easy to maintain/change).
Nothing really ground breaking, just trying out stuff.
My limited understanding from various blog posts is that you can still get the benefits of a container, but much faster startup/runtimes. As in WASM provides performance very close to native.
Security isolation too. I am using a WASM-based image processing pipeline for handling user submitted images. Much safer than trying to run a binary (Imagemagick, ffmpeg, etc) written in an unsafe language. In my case, the WASM does not have access to anything outside the sandbox.
Have you compared how much performance overhead such a sandbox has in comparison to a native (Linux) binary? Especially for video processing. I just wonder if a WASM app can even benefit from SSE4 or AVX instructions.
You don't need your language/runtime to support arm/other to run the same compiled artifact on arm32, arm64, power5 and amd64 - compile to wasm once, run anywhere (nginx unit would still need to support your target).
Your runtime theoretically doesn't need OS support - compile to wasm, run on FreeBSD/arm.
You can compile your C microservice, your Rust microservice, and your C++ microservice to wasm, and deploy them sandboxed via the same stack - reducing complexity (note that another option is to deploy via nginx unit - a different way to reduce complexity).
This is great to read! Thanks for all the great input!
I'll check the comments next week and make sure to address the issues/ideas mentioned. If you don't mind, feel free to drop a comment here as well with your ideas/needs. https://github.com/nginx/unit/issues/945
I will work on a comparison between Nginx and Unit to close this gap in our documentation.
Unironically this. After trying Kubernetes, Terraform, Ansible, and various proprietary PaaS config formats, I've grown to love Compose files' simplicity.
I'll happily try anything simpler than that if you give me an idea what that might be.
Not trying to be salty or anything – I really think Compose hits the sweet spot of abstraction which is less complex than both the monstrosities I listed and the ad hoc Bash scripts copying code over SSH and restarting services approach (so, the other extremity of declarative v. imperative).
Actually yeah! A single node is usually enough for me though, but I like the ability to throw in more nodes as needed (more or less painlessly, as long as you properly guard services that need persistent volumes with a label constraint).
Shameless plug: I also make a Docker Swarm dashboard, check it out: https://lunni.dev/
What strikes you as difficult with compose files? In fact, I would say it’s the most concise format to describe a desired state of running applications currently available.
Compose is an open source project entirely separate from Hub, though. The compose file specification is versioned separately, and will outlive Docker, probably. So I don't quite get your criticism?
How are you using compose without docker? They are joined at the hip, criticism of one is criticism of the other.
The days of cheap money are over; it's inevitable that certain SaaS companies will start tightening the screws on their users to match the returns they can get with cash in a bank. I just wish lxc (which docker was built off) got a chance to gain traction. It's miles ahead in DX, and sure, they serve different functionalities but can be used the same, and the network effects can't be overstated.
No, that isn't true. Compose can absolutely be used with Docker replacements, and there is no reason you couldn't create an implementation for LXC, for example.
> Compose can absolutely be used with Docker replacements
Can you point to some examples of this?
LXC has profiles which can be mixed and matched for containers, and it's got far more extensibility. Compose files are a one-shot creation that gets copy/pasted/modified all over the place (we're all guilty of this), and you need to read every single compose file and reboot, unlike additively popping another profile onto a Linux container while it's running.
I'd really disagree that compose files are somehow one-shot, or blindly modified. To the contrary, really, we have them checked in with the source code. Upon deployment to the cluster, the (running) services will be intelligently updated or replaced (in a rolling manner, causing zero downtime). LXC might be more elegant, but I have no idea what simple, file-based format I could use to let engineers describe the environment their app should run in without compose.
I need something that even junior devs can start up with a single command, that can be placed in the VCS along with the code, and that will not require deep Linux knowledge to get running. Open for suggestions here, really.
It is a great start and shows very good potential. It still needs support for io_uring and all the sundry features in mainline nginx, apache, haproxy etc.
> open source server project that works as a reverse proxy, serves static assets, and runs applications in multiple languages.
Isn't this what Nginx also does? How is Nginx Unit different beyond configuration via a JSON REST API? Neither the GitHub readme nor their site is very clear on this.
Finally something exciting!
If only this had event push capabilities useful for discoverability and announcement patterns over various sinks (http/nats/mqtt/amqp)...
Yes, this is as bad as it looks: “success” isn’t even part of the schema. It’s in the examples, but not the actual schema definition. The way success or failure is actually indicated is by HTTP status codes: 200 is success, 400/404/500 is error. So they seem to set a very bad precedent with at least one of their responses and their schema.
But I was expecting it to be something like {"success": string} | {"error": string}, which is frankly a perfectly reasonable way of doing things: an untagged, but still unambiguous, union. It can make for quite pleasant code, too: `if response.success` and such.
You’re looking for a tagged union, something like {"result": "success" | "error", "message": string} (or alternatively like {"result": "success", "message": string} | {"result": "error", "code": string, …}), which is also a perfectly reasonable way of doing things. In some ways it can be viewed as more principled, as it more formally allows you to check the tag.
Really, the two approaches are much of a muchness. They each have their strengths and their weaknesses. But if general status is being done at the HTTP layer, I’d prefer {"message": "Reconfiguration done."}, or… well, actually, just a 204 No Content response, and no JSON or body at all.
(On the terms untagged and tagged unions as I’m using them: they’re necessarily a bit different from the concepts as exposed in languages like C, since you’re dealing with a model built on objects rather than bytes, but they’re still reasonable descriptions of them. In Serde’s classifications, they’d be the untagged and internally tagged enum representations <https://serde.rs/enum-representations.html>.)
At the scale of nginx, having automatic verification that the examples (and the output of the API in general) match the specification would be great.
At our scale, here is what really helped me go through the rework (and ensure we do not regress too easily):
- setting additionalProperties to "false" to detect key field "rot"
- using "required: [x,y,z]" on everything, and by default specify "all the property keys", with an opt-out (so that each time a developer adds a field later, it is considered mandatory, unless otherwise specified)
- use tooling during the tests: "assert_schema" (with OpenAPISpex) to ensure our API endpoints responses pass the spec (additionalProperties: false helps ensure we get an exception in case of key field rot, again!)
It can be super frustrating for users to live with the uncertainty of the response of an API for sure, and I was happy to discover the Elixir tooling (OpenAPISpex in particular) worked so nicely once I understood what I had to do.
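As a tiny illustration of those first two points (this is generic JSON Schema, not the actual Unit spec): with additionalProperties set to false and the keys listed in required, a response that silently grows or drops a field fails validation instead of drifting unnoticed.

    {
      "type": "object",
      "additionalProperties": false,
      "required": ["success"],
      "properties": {
        "success": { "type": "string" }
      }
    }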
AFAIK this is just a frontend Postman replacement which also supports OpenAPI. No idea how much it differs from showing the underlying type support from OpenAPI specs, though; it seems to use collections, so it probably accepts Postman ones for import. Dunno, just saw it the other day on Show HN.
If there was always only one top-level field and everything else was inside that, it could be externally tagged, but it’s definitely just untagged here, because #/components/examples/errorInvalidJson/value (line 4397) shows an error body with multiple top-level fields: {"error":"Invalid JSON.","detail":"…","location":{"offset":0,"line":1,"column":0}}. Externally tagged would be {"error":{"message":"Invalid JSON.","detail":"…","location":{…}}}.
I wasn't familiar with Serde's description of this so that was a good read, thanks.
Although I still think this is a bad example of an externally tagged representation. In the Serde example they have the key as "Request" then what follows is the request object. In this example, the "success" key is followed by an arbitrary string message, which isn't obvious at all.
The contents of each variant don't really matter here. The tag represents the variant, not the type contained inside of it. The Serde equivalent here would be something like this (sketching the default externally tagged representation):
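    use serde::Serialize;

    // A sketch of the enum being described: with Serde's default external
    // tagging, the variant name becomes the single top-level key, and the
    // String inside is just the human-readable message.
    #[derive(Serialize)]
    #[serde(rename_all = "lowercase")]
    enum Response {
        Success(String), // serializes to {"success":"Reconfiguration done."}
        Error(String),   // serializes to {"error":"Invalid JSON."}
    }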
Looks like the OpenAPI spec is a little rough around the redundant status fillers? Wouldn't be the first one, I guess.
Putting the redundancy with 200 or not 200 aside, I sense a certain aesthetic quality in the {"success":string}|{"error":string} approach. Namely in how it adheres to keeping the schematic stuff on the left side of the colon.
I think this is trying to be a discriminated union type[1] in json. One way to do it is to have a ‘tag’ field that says “ok” or “err” or whatever and then other fields for the rest of the data (or an array with the tag in the first slot). Another is to have one field whose name is the tag and whose value is everything else (which is what happened here).
[1] eg one where the possible values are either Err(<some error>) or Ok(<some result>), and the data inside could be more complex types instead of just strings
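Spelled out in JSON, the two encodings look roughly like this (the outer keys are just labels for the two styles, not part of either format):

    {
      "tag_field_style": { "tag": "err", "message": "something went wrong" },
      "key_as_tag_style": { "err": "something went wrong" }
    }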
Heh. My pet peeve is useless success fields. A 200 response tells you it was successful already. We can talk about a "message" field or something if you intend to display it to a user, but even that should be implicit by being a response to a specific request. If I send a POST /configuration, is there really something new to be gained from that message that the client cannot figure out on its own? Wouldn't, perhaps, a 204 No Content suffice?
Eric S. Raymond, in his guide "The Art of Unix Programming", mentions this principle as Rule of Silence: "When a program has nothing surprising to say, it should say nothing."
His statement probably referred to command line applications, but it makes sense in a lot of cases.
It sounds smart but what counts as surprising is entirely context dependent and most programs won't be aware of your context.
E.g. a command line app where you put in a subtly wrong switch that does exactly what it thought you wanted, prints nothing, and exits 0 - that's dangerous.
It's about context, of course, but it's not really the program's responsibility to know the user's context. It's the user's responsibility to understand their own context.
A tool should have sufficient interlocks to ensure safety when not engaged, but no more.
I follow your point; however, it actually supports mine: an HTTP response isn't intended to be the output of a command line app. A command line app should take the response from the lower-level HTTP communication and articulate whatever message is appropriate for the user, depending on the verbosity level for example. Talking in OSI terms, this is a presentation layer problem, not a link layer one.
TLDW: Most of the GitLab outage where they deleted the primary prod and backup prod databases could have been avoided if they had just waited for the command line op to complete - but because of a lack of feedback they thought it had frozen, leading to several other plans which made the situation worse.
Some people treat http as a pure transport/network layer and the actual body as the app-level layer.
In such an approach, any http-level error would be a network, server maintenance or other unknown/fatal error type of things.
App errors are encoded in the response body in a app-defined way.
It's not a completely bogus way of handling things, as long as it is perfectly consistent throughout the project and properly documented, which is rarely the case.
The opposite of that is people who try to loosely project the variety of error modes their app has onto http status codes. You then print out a table of http code <-> actual meaning and try to outline ranges and messages you want to handle differently. On top of that, their backend may be down and you have to deal with bare reverse proxy statuses as well, which adds another dimension to that mess.
Unit has been around for half a decade. It feels like an evolution of something like Phusion Passenger, but it's not quite cloud-native. A lot of the documentation is tailored to installing directly on a server, and some essentials (e.g. Prometheus metrics) are missing.
I briefly evaluated it for bringing a PHP team into our Kubernetes cluster, but then ended up writing a bit of Go code to proxy into a real nginx+fcgi while adding a syslog sink (so PHP could log to stdout/stderr) and our standard prometheus metrics on the Go http server.
Depending on the desired metrics, the Unit /status endpoint may provide enough info, and transformation of JSON to Prometheus metrics is somewhat easy with existing tooling.
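From memory (so the exact field names may differ between Unit versions; treat this as an assumption and check the docs), /status returns something along these lines, which maps onto Prometheus gauges and counters fairly mechanically:

    {
      "connections": { "accepted": 1067, "active": 13, "idle": 4, "closed": 1050 },
      "requests": { "total": 1307 },
      "applications": {
        "wp": {
          "processes": { "running": 14, "starting": 0, "idle": 4 },
          "requests": { "active": 10 }
        }
      }
    }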
Wait, did that continue to use php-fpm? If not I want details plz! :D (especially around the logging sink, unless it's just php-fpm configured to collect child process output)
Yes, it did; not too much magic going on. The Go proxy was also a replacement for supervisord, so it started nginx and fpm, reaped zombies, and pulled all the important logs into stdout/stderr. The most remarkable thing was how this revitalized a team that wrote boring PHP software limited to deploying via FTP on PHP 5.
They ended up really getting into stripping out every piece of PHP they didn't use because the image built PHP from scratch, and they eventually took over maintenance of the Go piece and ported a bunch of their apps to Symfony. I was on a platform engineering team and they were one of the few teams to really torture test every feature we ever shipped, to the point they'd report edge cases or a bug to us every other week or so.
As for the logging question, we configured Symfony to log to syslog, which was provided by the Go daemon via a unix socket.
This way you can map to a custom message on your front-end based on the responseCode, whether it's successful or not; you can fall back to the response message if no mapping is found; and you can easily check if the transaction was successful or not.
If I were designing it I’d probably have a result field too, or even a success boolean. But I don’t hate what they’re doing here: the presence of the key is the Boolean value, the value is the description. A two for one.
An economy not worth investing in in my book. It’s always more clear to separate success and result. Sometimes, you’ll have no result, so there will be some parasitic (or truthy) value like null or true. And when you check success-named result deeper/further in code, it looks like you’re indexing into a boolean. Trying to make it clear on-site creates miniprotocols not worth remembering. Worst case scenario is {success:x, error:y}, x and y being 4 combinations of null and non-null, where you aren’t even sure what happened and may swallow a false positive with an optimistic check. Also, since undefined is not in json standard, the existence becomes ephemeral in languages where that’s distinct from just being undefined. You may think that returning undefined from a wrapped worker is okay, but it results in “error encountered: undefined” down the line due to {} payload. It’s one of the things that make programming harder for no good reason.
Also, Docker environments running PHP via Nginx Unit will no longer need separate containers for http + fpm, as it works similarly to Apache's mod_php.
1. https://habr.com/en/articles/646397/
2. https://medium.com/@le_moment_it/nginx-unit-discover-and-ben...
3. https://github.com/nginx/unit/issues/6#issuecomment-38407362...