Nginx Unit (nginx.com)
680 points by tomd on Sept 7, 2017 | 232 comments

I hate sites like this. I'm probably stupid, but I have no idea what precisely it is after reading that page. I just know the marketing team wants me to believe it's going to be my saviour.

Right. And what is the relationship to nginx? What is the license? Maybe hidden somewhere, but not obvious when quickly trying to get a first impression on the phone screen.

EDIT: https://github.com/nginx/unit/blob/master/README.md is the better reading.

It seems to me that software producers often advertise to managers rather than programmers, and that's why this style caught on.

I can't say more than: Me too!

Agree, this is marketing material for bean counters.

Yeah I had a hard time finding out what this site did.

Lol, yep:

> NGINX Unit is a new, lightweight, open source application server built to meet the demands of today’s dynamic and distributed applications.

That means everything and nothing. They wasted their intro sentence on it. They probably don't have a very good idea of what it is either.

While it is a fluffy marketing sentence, it seems clear to me. It's an application server.

Then the next section explains that you can run different kinds of applications, and even different versions of the base software, like different versions of PHP or Python, all in the same server.
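From the README, that multi-language setup is driven by a single JSON configuration pushed over Unit's control socket. A hypothetical sketch (app names, ports, and paths are invented here; the keys follow the README examples of the time, but check the docs for the exact schema):

```json
{
    "listeners": {
        "*:8300": { "application": "blog" },
        "*:8301": { "application": "api" }
    },
    "applications": {
        "blog": {
            "type": "php",
            "workers": 20,
            "root": "/www/blog",
            "index": "index.php"
        },
        "api": {
            "type": "python",
            "workers": 10,
            "path": "/www/api",
            "module": "wsgi"
        }
    }
}
```

Two runtimes, one server process, one config format.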

This looks pretty cool, and makes me sad that Mongrel2 never became popular. In short: Mongrel2 solves the same problem, but does it by letting your application handle requests and websocket connections over ZeroMQ instead of eg FastCGI.

I guess it lost momentum when ZeroMQ did. Anyone know why? Sounds like a dream solution in the current microservice hype.


Yeah, Mongrel2 looked like a good idea… but turns out it's kinda pointless. Why talk to your app via HTTP-reencoded-as-ZMQ when you can just talk straight up HTTP? Pretty much all languages have very fast and concurrent HTTP servers these days.

I thought it was because there's a lot about http that's overhead and/or tough to get right, which can then be delegated to the server.

Also, websockets actually map pretty badly to http, conceptually, and fit zeromq much better IMO.

Regarding websockets (admittedly off-topic re Mongrel though), I recently found out about Pushpin[0], which seems to be an elegant way to translate WS into HTTP, should it be of interest to someone. Basically a proxy-server that takes care of accepting either websockets or HTTP on the front, and talking only HTTP on the other side.

[0] http://pushpin.org

I evaluated PushPin for a project recently but ended up going with Nchan


Pushpin uses Mongrel2 under the hood to handle incoming WebSocket connections, so not entirely off topic. :)

It's pulled in as a dependency and launched in the background.

I thought PushPin was based on Qt?

It's multi-process. The core logic is a Qt application, but it delegates the external protocol I/O to separate processes. Mongrel2 handles inbound and Zurl handles outbound (Zurl is a project of ours that is basically the inverse of Mongrel2).

Parts of mongrel2 were sadly a solution in search of a problem. Mostly the "let's redo FastCGI via ZMQ". Was still immensely fun working on and with it.

Mongrel2 was indeed a nice idea on paper.

It never had much momentum though -- but what killed it was the loss of Zed's interest.

microservices on their face seem cool, but in reality not so much. really, it's just SOA taken to an unnecessary extreme.

It's really SOA without the XML based service bus.

Oh, the "service bus" is coming back, with JSON in place of XML... it's called lambda architecture.

And again nothing new, except someone else takes care of some server software for you with the promise of reduced price and maintenance, but the reality eventually becomes tight proprietary coupling and price gouging.

Amazon's own Lambda is that, yes. But the Lambda architecture it inspired is the opposite: a de-facto standard (based on the way Amazon's works, but probably eventually an open standard) for servers any org can use to stand up their own public or private FaaS cloud, which developers can deploy Lambda functions onto rather than having to build an entire container/VM just to slot it into OpenStack.

I doubt it will ever be a standard. Amazon loves vendor lock in. Plus most of the cloud services love to do their own thing for each service type. The main exception seems to be Kubernetes. Google has it in GCE, and Amazon has said they are working on their own Kubernetes service. If that happens, I bet Azure will follow if they aren't already working on it.

I (and others) are not so much imagining a standard between cloud vendors, as we're imagining a standard "FaaS server function API" (sort of like how the web has a standard DOM API) supported by several FOSS FaaS server implementations (sort of like how the web has several FOSS Javascript engines.)

Given such a standard API and compatible servers, you'd then deploy a FaaS server cluster to your public/private cloud of choice, the same way you deploy e.g. a Kubernetes cluster, or a Riak cluster.

There would likely be small public clouds attempting to be "FaaS native" by exposing only such servers in a multitenant configuration (like small public clouds like Hyper are currently doing with CaaS.) Their implementations wouldn't always be exactly compatible, and might have some lock-in.

However, once FaaS "caught on" with the enterprise, a FaaS server would likely make its way into the OpenStack architecture.

At that point, you'd see medium-sized public cloud providers like OVH and DigitalOcean set up their own multitenant FaaS clusters as well, probably with custom code, but built to be compatible with the OpenStack FaaS tooling, to allow enterprises the freedom to move FaaS functions freely between public and private clouds.

And, eventually, the other major cloud providers would feel the need to support the API.


This path has already been followed: it's what happened to Amazon S3—first cloned (but not compatibly) in FOSS by tools like Riak CS; then standardized by OpenStack Swift; then cloned compatibly in FOSS by tools like Minio; then picked up by medium-scale clouds like Rackspace; and then, eventually, picked up by Azure and GCP as secondary APIs to address their equivalent offerings (that originally had quite different APIs.)

You can definitely do microservices that way but in reality they tend to be more granular both functionality wise and density-wise.

With old skool SOA you'd typically have a monolith app with a bunch of endpoints. With microservices, especially in a containerized environment they tend to be more lightweight.

Microservices is just SOA rebranded for the cool kids. The fact that modern orchestration and tooling makes it easier to have more granular services changes the equation for how you factor the services, to be sure, but it's an evolution not a revolution.


True. XML => Json.

Wait, zeromq lost momentum? When did that happen?

Unfortunately, the founder of ZeroMQ, Pieter Hintjens passed away (due to cancer) [1]. He was a regular on HN [2].

ZeroMQ still works great and the open source community is still maintaining it on GitHub [3]. I just think people are also looking at other technologies. A lot of interest popped up in things like Apache Kafka and Samza. I still think ZeroMQ holds a unique place due to its lightweight and simple nature.

[1] https://news.ycombinator.com/item?id=11547212

[2] https://news.ycombinator.com/user?id=PieterH

[3] https://github.com/zeromq

I have been curious how the community would hold up after Pieter's death. This project is a unique case because of how much work went into building community and welcoming contributions. That said, the world is a different place than in zeromq's heyday. Other commenters refer to Martin leaving the project, C++ regret, and a poor fit with node.js. Maybe in the face of all those changes zeromq's mature community is primarily why it lives as a project.

Yes, I am aware that Pieter had passed away. I thought it was implied that the project is not maintained anymore, which is not true.

It has the same momentum, but too much mass and not enough velocity.

In which direction do you want it to move, if you want velocity, that is?

I more often find platforms have too much velocity. And if not too little mass, then too little solidity.


I recently switched from zeromq to straight libuv sockets with jsonl (\n-separated json) payloads. Because I'm working inside a Node process, combining zmq's threading model with Node's threading model was a pain. Now, there's a single IO thread which is the same as the Javascript engine thread, and I can use uv_work to run CPU-intensive tasks on multiple cores.

Do you not allow \n inside your JSON or encode your JSON as base64? If not you might have problems with disambiguating frame ends from line breaks inside frames.

A common way to do framing is to prepend each frame with its encoded length. That's easier, faster, and less error-prone than searching for ASCII delimiters.

If you do the JSON serialization yourself, there's no reason for newlines to be in the JSON. (Newlines within strings are encoded as \n.)

I'm generating the JSON, either with custom C++ marshaling routines or with JSON.stringify which doesn't include newlines unless you give it extra arguments. I believe that any valid JSON can be converted to a single line by changing any '\n' bytes to ' '. Literal '\n' bytes are not allowed inside strings, and outside strings any whitespace is equivalent.

A newline is encoded as the two characters "\n" in JSON, so it would not be confused with a literal newline (aka \x0a) character.

I think it happened in 2012 when this came out http://250bpm.com/blog:4

Before that, I used to hear people talking about it all the time.

Cool article! I guess that's why Go does its error handling without exceptions.

When the main developer decided that it wasn't good enough and started working on nanomsg (http://nanomsg.org/)

zeromq is still very much in wide use. nanomsg doesn't have nearly as much documentation and community support as zeromq does.

Dunno, I spent close to a week trying to get it to compile with and without encryption on Windows to no avail.

Ended up using a Linux container on Docker to get the thing working.

That is troubling. I always thought of zeromq as having as good Windows support as anything not written exclusively for Windows.

So did I, that's why I selected it. Did my development on OSX and Linux and thought that deploying a static binary on Windows would be a breeze.

It wasn't; it turned out to be a Category 4 hurricane :P

Looks like there is a path hardcoded in the build files causing problems. After some reflection on the msvc/README, renaming the project directory to libzmq (was: zeromq-4.2.2 from the release or libzmq-master from github zip download), and launching cmd.exe using the Developer Command Prompt for VS2015 link, libzmq/builds/msvc/build/build.bat successfully builds all configurations.

When was the last time you heard something about it?

When was the last time you heard something about zlib? At a certain point - libraries are basically done. They are widely distributed, everyone knows what they are, there is no reason to talk about them but they are still maintained and heavily used.

Libraries can be done, but that has got nothing to do with momentum. Momentum depends on mindshare, on the willingness of people to use and to keep using it. Most programmers don't choose technology based purely on merits, they choose it based on "I heard X talk about Y and s/he said good things, so I guess I'll use it". We programmers aren't as rational as we think.

Like it or not, popularity and momentum are important merits of a technology. They lead to all sorts of benefits, like healthy maintenance and further development, better documentation, and support when you run into trouble. It is rational to consider these things when choosing technology.

This morning when I used it.

I don't use zmq nearly as much.

I'm using it for a project now. It's a bit weird, but it does work. Cool thing: you can slot a file descriptor into the zmq provided poll ... point is that you can poll on both zmq and sockets in the one loop.

Pro Tip: Use 'cbor' for serialising.

> you can poll on both zmq and sockets in the one loop

Which is magnificent. The ZMQ poller is tons of fun. (Although I think this doesn't work on Windows.)

Thanks for the tip. In which language(s) do you use CBOR? I want to like it, but the various C APIs look a bit cumbersome and lacking docs.

Python 3.

That is a really cool idea, do you know of other projects like it?

Nginx Unit? :-)

Confusing description. After seeing the Github README (https://github.com/nginx/unit#integration-with-nginx), it looks to be Nginx's alternative to low-level, language-specific app servers, e.g. PHP-FPM or Rack, with the benefit that a single Unit process can support multiple languages via its dynamic module architecture, similar to the Nginx web server's dynamic modules.

It's still intended to run behind Nginx web server (or some other web server), much like you'd run something like PHP-FPM behind a web server.

It's a polyglot app server with microservice orchestration. It's definitely needed.

Some things to look for, such as registration/discovery of services, intra-cluster load balancing (where it started, no doubt), identity propagation & authn/z

The biggest issue to my mind though is distributed transactions and logging/debug/development. My biggest stumbling blocks with this sort of thing.. stepping through code over microservices is such a PITA.

You seem to be very experienced in this area. Can you explain a bit about why you think an "app server with microservice orchestration" is needed?

because you can work with individual microservices across clusters without a ton of overhead (or use a monolithic app server), aiding in deployment, rollback, debugging, development.

How exactly does having an app server reduce overhead, compared to running each service directly without app server? And how does having an app server compare to putting each microservice in its own Docker container and orchestrating them in Kubernetes, which is what more and more companies seem to be doing?

having to deal with e.g. php-fpm, fcgi, tomcat, and unicorn separately in the same stack is a nightmare. even if they run in separate locations/clusters/nodes/machines, it's still several different configuration and deployment paradigms you have to deal with.

some people simply don't like containers or aren't tooled for it.

there's more than one way to do it (TMTOWTDI).

You would be able to merge your services under a single server and have them talk to each other internally sans latency overhead. It also allows you to easily scale up and down and segment things on demand.

At a glance, I think this is an alternative to docker/kubernetes. The general idea seems to be to cut the middleman/topman out and let the bottom man (app server) be the "unit" of configuration. Like a sort of integrated docker/<YourLang>-runtime.

No, this thing is more like inetd, while kubernetes is more like an OS for containers and docker is a package manager.

So in what circumstances would you need the polyglot bit? (I guess I'm assuming a container/VM architecture here).

From the description it sounds a lot like the Passenger Nginx module. https://www.phusionpassenger.com

> It is not recommended to expose unsecure Unit API

why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock in the tech press"

Exposing this otherwise awesome API to the public will amount to a free RCE for everybody. So do not ever expose this to the public, not even behind some authentication.

It's very cool that by design it's only listening on a domain socket. Don't add a proxy in front of this.

> why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock in the tech press"

For the same reason they say, "non-trivial" when they really mean "nearly impossibly difficult". :)

Technically, NOT RECOMMENDED is the same as SHOULD NOT in RFC2119 - i.e. "the full implications should be understood and the case carefully weighed before implementing any behavior described with this label". Not that this document uses those definitions, but.

Technically, you can expose the Unit API within an internal network.

Why that still might not be a good idea: https://research.google.com/pubs/pub43231.html

Thanks for linking that. Typically, if you know what you are doing, a setup of this nature would be segmented off from the rest of the internal network.

I did compliance work for a lot of start-ups and never came across a company that understood this concept. The majority thinks that their wireless router is already doing this via the Guest account.

I am biased, but call me underwhelmed. It seems that with every "new" feature, nginx is copying Apache httpd, even now claiming to be the "swiss army knife" of web servers. Embedded languages. Dynamic modules. Support for uWSGI. Graceful restarts. Thread pools... and yet people eat it up. Just goes to show what having corporate-backed marketing and PR can do.

When I started with apache, I thought it was great, but after moving to nginx, the speed and simplicity made me never look back. While these new features to nginx aren't new to the world, they are a nice welcome addition to a system that IMO is far superior to apache.

I never found nginx especially simple to set up; the config files were always messy. Caddy seems to have knocked this out of the park for me, especially considering automated HTTPS and redirection.

I use Caddy on all my small projects right now. I haven't used it long enough to install enough faith for production sized systems yet, but hopefully I will get there because it is much easier to setup. Still, nginx is a breeze compared to apache IMO

Been pretty rock solid for everything I've put on it! Side projects + corporate.

I will keep that in mind. You mind me asking what's your most complex setup and your scale?

I share your love for Caddy, but having worked with all three, I do agree that nginx is easier than Apache. The config file isn't perfect, but I wouldn't call it messy, and I prefer its syntax to httpd's. But to each their own.

>the config files were always messy.

How? It's so much cleaner and simpler than Apache. I don't get this sentiment.

Simpler than Apache doesn't mean simple. As someone who sets up HTTP servers rarely, I had trouble when I tried out nginx.

But I suspect that for people who do it more seriously, nginx's config hits the sweet spot. To me the language seems sophisticated, well documented, and fairly well behaved if you pay attention to the rules.

That can make it too hard for someone casually trying to quick-start an experimental project. But it's exactly what you want if you are maintaining a long-lived setup that is likely to grow and become complicated over time.

Is this nginx though, or an app server with nginx's branding?

I believe it's an app server built by the nginx people. Still gives me faith that they built it right.

I don't understand why people want to write something off without even trying it.

This is not 'nginx'. You can't just write this off because Apache did something similar ten years ago. The built-in API alone is worth exploring.

Because people have a hard time figuring out what it is. Could you explain what it is? What benefits does it have to make it worth exploring? To me it looks like a rather invasive but flexible and dynamically configurable inetd. But it forces you to use its own libraries to receive http requests.

It's a lot like OpenResty (https://openresty.org/en/), which is Nginx with a Lua interpreter embedded and bridged to its request-response cycle (the OpenResty page explains the point of that pretty well); but instead of Lua, Unit has a bunch of other language runtimes embedded.

I haven't found any embedded interpreters or runtimes here. Quite the opposite, I see they have libraries they ship with other languages that a user has to use in order to receive http requests.

Why remove Lua, though? I'm a heavy Lua user, which is why I use the openresty bundle of nginx. There's no reason for me to try this out. This is unfortunate!

I agree. Apache has great module support. I think their worst sin was that their debian package defaulted to a small number of workers and using a forking mpm leading people to believe apache was slow.

Their eventing/threaded mpm is basically nginx.

And now nginx is starting to gain the features of apache.

Performance is a feature.

Could anyone explain to me why I would want to use this? What exactly is the use case and benefits of it when I am for example running a go web application?

NGINX allows you to proxy back-end applications, giving you the ability to load balance, handle upstream failures with custom maintenance pages, employ server blocks (virtual hosts), and much more. However, you always need to do the leg work to get your specific application language up and running. This new Unit system makes that job easier, as you would no longer need to employ separate middleware, like PHP-FPM for PHP applications, or use a separate init system like systemd to run Go or Node applications. Now NGINX would assume those responsibilities and provide you with a consistent interface.

Here you can see the configuration of workers and user/group permissions for a Go application:
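(The snippet didn't survive the copy; based on the Unit README of the era, a Go application block looked roughly like this, with the app name, port, user/group, and binary path invented here:)

```json
{
    "listeners": {
        "*:8500": { "application": "go_app" }
    },
    "applications": {
        "go_app": {
            "type": "go",
            "workers": 2,
            "user": "www-data",
            "group": "www-data",
            "executable": "/www/app/app"
        }
    }
}
```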


I'm sorry but I'm not sure I get it.

Is it like Apache's mod_php for PHP, for example?

Thanks in advance for your answer

That's how my reading of it goes. You provide an "endpoint" for the library to call, configure the Unit framework, and their Manager connects the nginx frontend to that Unit framework.

No real idea if it does so using fcgi or some other socket-based proxying, or if the unit is spun up as a separate process and handed the raw socket and some shared memory after the headers are parsed (closer to how mod_php works).

Yes, you can generally think of it as a replacement for mod_php as Unit would parse requests from NGINX, pass them along to the PHP parser, then return the responses back to NGINX. That's the same job mod_php does for Apache and what PHP-FPM (essentially) does for servers like NGINX.

You can see the PHP configuration here:
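(The config block was lost in the copy; a sketch in the style of the Unit README of the time, with the app name, worker count, and paths made up:)

```json
{
    "applications": {
        "blog": {
            "type": "php",
            "workers": 20,
            "root": "/www/blog",
            "index": "index.php"
        }
    }
}
```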


And here's the configuration needed to integrate Unit with NGINX:
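(Also lost in the copy; the README's "Integration with NGINX" section showed a plain reverse-proxy setup roughly like this, with the Unit listener port assumed to be 8300:)

```nginx
upstream unit_backend {
    server 127.0.0.1:8300;
}

server {
    listen 80;

    location / {
        proxy_pass http://unit_backend;
    }
}
```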



Upon first reading I thought that Unit needed to be behind NGINX to function, when actually it listens for requests as a separate server entirely. The control API is only there for configuration purposes.

However, if you want to use the other features of NGINX, like serving static files, you will need to put it in front of Unit.

It's worth noting that it's rarely necessary or desirable to put a server like nginx in front of Go HTTP server applications. The Go standard library HTTP and TLS stacks are production quality and rock solid. Putting something in front is mostly cargo-culting from people more used to the worlds of PHP/Python/Ruby/etc.

Pushing back on this a bit...for example, securely exposing a JSON endpoint to the public internet requires extra machinery that applications like nginx bring for free. If you simply set the router to your handler, then you accept arbitrarily large request sizes, wide open for DoS attacks. You have to either manually add limits or pull in some library. nginx caps these by default. Want throttling or load balancing? Again, things that haproxy and nginx do well, but require more cruft in your application.

I would argue that all is part of security-aware software engineering. If you aren't thinking of these things you have no business writing publicly-exposed HTTP applications.

Or... you spend your time building something useful, leveraging skills you do have, and let nginx leverage its own strengths.

What you say sounds like NIH syndrome to me.

Secure software isn't useful? Insecure software isn't eventually value-destroying?

Really what this sub-thread is arguing is that security Isn't My Job(TM) as application developer. I disagree. Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.

Not accepting unbound input and sane rate-limiting are kind of basic stuff, no? I'm not saying every app developer needs to be a Defcon wizard, just that they should have some fundamental awareness of secure coding standards for web apps if that's what they're building.

> Secure software isn't useful?

Nowhere in the sub-thread is this claimed.

> Insecure software isn't eventually value-destroying?

Nowhere in this sub-thread is anyone suggesting otherwise.

> Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.

Nobody said this. But while we're on the topic the more likely false security blanket comes from telling app devs "just use 'net/http' and 'crypto/tls' and everything will be fine without a reverse proxy."

In any case the straw men you've raised are distracting and not driving the conversation forward.

> > Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.

> Nobody said this.

That seems dishonest to say... From the grandparent:

> Or... you spend your time building something useful, leveraging skills you do have, and let nginx leverage its own strengths.

Really sounds like at least one person in this thread is advocating for app devs not to worry about things that nginx takes care of.

Agree that making straw men doesn't help. There's advice on either side regarding which one to use and realistically both are equally 'false security blankets'. The correct answer is to educate yourself on the benefit and drawbacks of each and make a conscious decision about where to implement your security.

What if I have an application that needs to be deployed internally and externally in separate instances. Identical application, but different security contexts. Using Nginx to handle these concerns is easy.

It's a common myth that internal networks are a more secure environment. You are better off implementing the philosophy behind something like Google's BeyondCorp¹ effort.

¹ https://cloud.google.com/beyondcorp

I find it useful for filtering and caching. Things like redirecting traffic to /.well-known/acme-challenge/ to your certificate management host, providing an endpoint for machine status or filtering requests to dot-files. Or telling Nginx to cache responses and allow it to serve stale content when the backend server returns 4xx/5xx status codes during deploys or high load. Handling things like client certificate authentication in Nginx instead of doing it in every backend application is another thing I've found useful.
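A sketch of the stale-serving piece in nginx terms (the zone name, sizes, and upstream address are placeholders):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    location / {
        proxy_cache appcache;
        # Serve a cached copy when the backend errors, times out,
        # or is mid-refresh (e.g. during deploys or under load).
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        proxy_pass http://127.0.0.1:8080;
    }
}
```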

It's useful to put Varnish in front of the app server for caching and to serve static content from a separate process (and domain) running a light/tiny httpd server instead of Apache/Nginx

I don't use Go, but D (dlang), vibe.d, varnish and lighttpd are working real well for my latest venture.

Does the Go http library asynchronously serve static files, by default? Or is it going to block on any app requests?

Nginx is extremely fast for that case, which is typically the reason most people proxy languages through it. ;-)

Go I/O is async by default, I figured, so...

It could be that nginx is more efficient at static file serving, but that'd be down to being specifically designed and optimised for it rather than some "sync vs async" thing.

Minor quibble, in the context of serving static files (ie. from disk), go doesn't use async I/O, the file I/O blocks the thread until it's complete. But since go's scheduler is M:N this doesn't lock up the whole program, so your point stands.

Err, no, this is a misconception. All IO in Go is async - there is no sync IO in Go (as sync IO would block an entire OS thread). There is an internal registry mapping blocked file descriptors to goroutines - when a kernel IO function returns EAGAIN, the goroutine throws the file descriptor + goroutine info onto the registry and yields back to the scheduler. The scheduler occasionally checks all descriptors on the registry to mark goroutines that were waiting on IO as being alive. The scheduler is, therefore, essentially a multithreaded variation on a standard "event loop" - the only difference is that "callbacks" (continuations of a goroutine) can be run on any of M threads rather than just one.

From a Go programmer's perspective, this looks like "blocking a thread", but because goroutines are relatively lightweight in comparison to actual threads, it behaves similarly resource-wise to callback-based async IO. (Although yes, nginx is likely optimised so that it throws out data earlier than Go can free stack space and so can save some memory. Exactly how much is up to benchmarking to find out.)

Basically, the only differences between Go and e.g. a libev-based application as far as IO is concerned is a different syntax - the event loop is still there, just hidden from the programmer's point of view.

Note that this doesn't mean you shouldn't put nginx in front of Go to serve static files - nginx is likely more optimised for the job than Go's file server, might handle client bugs a little better, is more easily configurable (e.g. you can enable a lightweight file cache in just a few settings), you don't have to mess around with capabilities to get your application listening on port 80 as a non-root user, and so on and so forth.

I'm referring specifically to disk IO, which on linux using standard read(2) and write(2) is (almost) always blocking. What you describe is true of socket fds and some other things, but on most systems a file read/write which goes to a real disk will never return EAGAIN.

This is why systems like aio[1] exist, though afaik most systems tend to solve this with a thread pool rather than aio, which can be very complicated to use properly.

[1] http://man7.org/linux/man-pages/man7/aio.7.html

Ah, absolutely, I forgot that the state of disk IO on Linux is terrible - although this still isn't quite the case, since there's a network socket involved in copying from disk to socket, so if the socket's buffer becomes full the scheduler will run.

It seems that nginx can use thread pools to offload disk IO, although doesn't unless configured to - by default disk IO will block the worker process. And FreeBSD seems to have a slightly better AIO system it can use, too.

I would be surprised if Go did not use the 'sendfile' syscall, which does exactly this. Is there a Go nut who can clarify?

Go does shortcircuit to sendfile where possible.

I love Warp for Haskell, but I would still be hesitant to expose it directly. It's simply not used as much as nginx or Apache. Fewer people have spent time trying to break it.

Perhaps it's rarely necessary, but it is often desirable. For instance if you are serving any static content along with your application, nginx is quite handy and is probably better at compressing and caching.

And yet this issue remains unsolved: https://github.com/golang/go/issues/16100

Your choice is force a timeout and kill streaming requests, but defend against slow client DOS, or support streaming requests and suffer from a trivial slow client DOS.

For this and other reasons I still recommend fronting golang with something more capable on this front.

Sounds like uWSGI based on the description. I wonder how it'll play along with certain environments like Kubernetes.

Same. I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).

A properly edited book would be awesome. I would pay for it of course.

> I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).

uWSGI definitely needs more concise tutorials on how to accomplish some tasks (e.g. creating Hello World with python and uWSGI, or how the uWSGI emperor works).

However I disagree with "lacking severely in the department of documentation"

Sure, it's not as easy as some other projects to dive into (e.g. Django) but IMHO the documentation is not lacking, it's just not forthcoming.

If you sit down and read through the uWSGI documentation, you'll discover a lot of very useful functionality and a reasonable description of how to utilise it.

What's lacking is the tl;dr way to bash something out quick and dirty.
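For what it's worth, the "Hello World with python and uWSGI" mentioned above is tiny; a sketch (the filename is arbitrary, and uWSGI's default callable name is `application`):

```python
# hello.py -- a minimal WSGI application.
def application(environ, start_response):
    body = b"Hello World"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Serve it with:
#   uwsgi --http :8000 --wsgi-file hello.py
```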

https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.h... it seems very quick and straight to the point (yet complete, it even starts with apt-get)

https://uwsgi-docs.readthedocs.io/en/latest/Emperor.html - has config snippets too

Or maybe you mean detailed step by step instructions, a'la howtoforge?

> Or maybe you mean detailed step by step instructions, a'la howtoforge?

Yes, this is what I meant when I said

> IMHO the documentation is not lacking, it's just not forthcoming.

Yep. Somewhat tricky when you have 896 runtime options. That said, have been happy running uwsgi in production for a lot of python (and php) services.

Yelp.com runs behind uwsgi, and effectively all of the python services behind it do as well. Some use more uncommon features like gevent support.

I think their documentation is quite thorough. It's just, as the other commenter indicated, that an app that extensible doesn't have cookie-cutter simplistic configs out of the box.

It's perhaps thorough, but it's not particularly organized or edited.

>Same. I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).

Agreed. After recently testing out Python for a web dev project I was really dismayed at the fragmentation and lack of usability in the landscape of application servers. Here's hoping this might lead to some standardization.

Whoa. How did I never notice that uWSGI grew up beyond just WSGI!

More like a replacement for any wsgi server.

I initially thought it would allow dynamically managing the upstreams list (and other configuration) like hipache does [1], which would be awesome for dokku or other container management systems which rely on system nginx. But after seeing languages mentioned, I'm confused.

Is it supposed to replace language-specific servers, like unicorn and puma for Rails (but then, I'm confused about what such support would mean for Go, since the server is directly embedded in the program)? Does it embed interpreters for interpreted languages, like mod_* did for Apache?

[1] https://github.com/hipache/hipache

Re the Go thing: they give you a go package: https://github.com/nginx/unit/tree/master/src/go/unit.

Note that it's actually CGo (which is not Go), and it uses a non-standard build process to install it: http://unit.nginx.org/docs-installation.html#source-code.

I don't like it at all :( I usually put plain nginx in front of my app, to handle static files and simple load-balancing, but this seems to be oriented towards handling issues best handled elsewhere.

Kong would be ideal in front of dokku: https://getkong.org

It works with postgres or cassandra (and eventually scylladb https://github.com/Mashape/kong/issues/754 ).

Also, nginx is pretty good at restarts, even with thousands of files and vhosts.

Dynamic upstreams are available in nginx, but only in the enterprise/paid offering.

it's like swarm - https://docs.docker.com/engine/swarm/#feature-highlights but much more lightweight.

and nothing to do with docker containers

in fact nothing like it really AFAICT

I'm having a hard time seeing what niche this fills. It seems to be both a process manager and TCP proxy. What am I missing here? What makes this better than, for example, using docker-compose?

I think a "how it works" or "design doc" would be really helpful.

That said, the source files do make for pleasant reading. The nginx team has always set a strong example for what good C programming looks like.

EDIT: Their blog post [0] makes this more clear... nginx unit is one of four parts in their new "nginx application platform" [1]

[0] https://www.nginx.com/blog/introducing-nginx-application-pla...

[1] https://www.nginx.com/products/

>What makes this better than, for example, using docker-compose?

Not having to use docker would be a huge plus for me.

yes, infinitely more lightweight. but docker compose and friends are cool

What is this? I've tried to read the blog post, the product site, and these comments, and I'm still having a really hard time figuring out what Unit is and why.

Seems to be a standardized replacement for language specific app servers like fpm for php. I guess that makes it a little easier to deploy stuff, although recently with docker containers, that hasn't been such a big deal anymore. You can just take an off the shelf fpm container and deploy that.

Seems like a simple C app would take far fewer resources than a docker container and have much lower latency, though. How much computing power would you need for each, given the same number of users?

Interesting. I like the restartless configs idea. This is becoming more common these days with short lived microservices. This week I just switched my load balancer setup from HAProxy to Traefik - very nice API based setup. https://traefik.io/

Also note github repo at https://github.com/nginx/unit

If you're unfamiliar, look at this instead.

The homepage on Nginx.com is basically

> Join this webinar to learn

> - What NGINX Unit does that has not been available before

There's also a blog post introducing the project: https://www.nginx.com/blog/introducing-nginx-application-pla...

I'm happy to see this. nginx itself is excellent software, I'll be happy to use similar tech for the application server as well (instead of uwsgi).

There are a couple of options I'd like to see added to the Python configuration though before I could try it:

- Ability to point it at a virtualenv.

- Ability to set environment variables for the application.

lol.. nginx en-masse configuration is a nightmare. i can point to a fortune 50 company that it's destroying for relying on it. I won't name names :)

So they deployed a bad config file to all nodes and restarted the service, which then failed to start.

How is this specific to Nginx? This same mistake is possible with any other software ever written.

nginx is faster at stop/starting?

downvoters: I'm deadly serious. I've seen plenty of deployment systems which were unbearably slow, because the slowness gave a human more time to spot a bad deploy and cancel it, and people were afraid to replace them with something faster because it would lack this safety net.

Any pointers for those interested in the story who have no clue?

Great story bro

What does "en-masse configuration" even mean?

That they deployed a broken config file and forcefully stop-started nginx instead of reloading it, bypassing nginx's built-in protection: on reload it tests the config and refuses to load it if it's broken; on restart it's stuck with whatever busted config you give it.

The logo makes it read as "N Unit", which is probably confusing, as a popular unit-testing framework with that name exists: http://nunit.org/

So it looks like they basically rewrote uwsgi and slapped a rest api on top of it.. (as a big fan of uwsgi, that seems like a reasonable thing to do...)

It badly needs settings like restarting workers every X requests, or a harakiri timeout after N seconds.
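For comparison, uWSGI already exposes both of these knobs; a sketch of the ini settings (the option names are from the uWSGI docs, the values are illustrative):

```ini
[uwsgi]
; recycle each worker after it has served this many requests
max-requests = 1000
; kill any worker that spends longer than this many seconds on one request
harakiri = 30
```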

I'm OK with this.

So in my Flask app this would replace gunicorn?


I'm still not sure I understand "Unit".

I can't speak for the other languages (PHP, Go, Python), but I have some reservations about it helping Java (as well as Erlang and other (J)VM languages), as FastCGI-like stuff has been attempted for Java in the past without much success, with the exception of Resin.

I guess it would be interesting, though, if they did a native Servlet 3.0+ implementation like Resin, but I doubt that is what will happen. Regardless, Netty, Undertow, and even Jetty have caught up speed-wise to Resin (well, at least according to TechEmpower).

CGI for PHP/Python.

AJP/mod_jk for Java.

Looks to be a good candidate to replace the omnipresent nginx-based API routers

I have a small Flask application which is basically a REST GET/POST API server. I'm struggling to make deployment easy. With PHP, I just push to the application server and rsync that folder into /var/www/html for Apache httpd, but what would I do for Flask on Python 3?

As with most things, there is more than one way to do it. Push to the application server and hook it to your flask application using [uWSGI](http://flask.pocoo.org/docs/0.12/deploying/uwsgi/), for example.

[Here's](https://www.digitalocean.com/community/tutorials/how-to-serv...) an old guide for running Flask with uWSGI and nginx on Ubuntu. There are several more recent, detailed instructions online.

Personally, I have an AWS instance running a Node.JS server on (blocked) port 8000, a Django uWSGI app on 8001, and a static resume site, all being reverse-proxy served by nginx. So I don't really see the advantages of Nginx Unit yet.

Use a webserver that proxies requests to a wsgi server. We tend to put Caddy in front of Gunicorn which works really well. Also, look into running Gunicorn under supervisord.

Oh, and also use Fabric - http://docs.fabfile.org/en/latest/

Thank you. I'll look these things up. I haven't had to do deployment stuff in my previous life.

the answer to this is usually "use docker". If you want to deploy your nginx as well, then you need docker-compose.yml and use "docker stack deploy".

If you are only looking to deploy your python code (and nginx/apache is constantly running on the server), then follow these steps

1. install docker on the server
2. create an account on https://hub.docker.com/
3. https://docs.docker.com/engine/swarm/stack-deploy/#deploy-th...

your docker workflow in the future looks like this:

1. test the application on your laptop inside a docker container
2. push the container to docker hub
3. "docker update" your stack

This is where their REST config API to force reloads will come in.

BTW, it is a good idea to always do API versioning on production runs. That eliminates the possibility that different API versions (files stuck in the cache, or simply people who kept a browser open for a long time) hit the same endpoint.

Yes, I have a baseurl/v0/... in the naming scheme for now. (:
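A bare-bones way to do that prefix routing without any framework is a small WSGI dispatcher; a sketch (the function and handler names are made up):

```python
def make_versioned_app(versions, not_found_body=b"unknown API version"):
    """Dispatch /v0/..., /v1/... to per-version WSGI apps."""
    def app(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        # First path segment, e.g. "v0" for "/v0/users".
        prefix = path.lstrip("/").split("/", 1)[0]
        handler = versions.get(prefix)
        if handler is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [not_found_body]
        return handler(environ, start_response)
    return app
```

In a framework like Flask you'd get the same effect by registering blueprints with a `url_prefix` of "/v0", "/v1", and so on.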

You can run gunicorn (that loads your flask app) as a service using systemd on e.g. port 9000 and then have nginx (also run as a systemd service) proxy port 80 traffic to that port and handle static files etc.
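Concretely, the systemd half of that setup is one small unit file; a sketch (the paths, user, app module, and port are all placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=gunicorn daemon for the Flask app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/venv/bin/gunicorn --bind 127.0.0.1:9000 app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The nginx side is then just a location block with `proxy_pass http://127.0.0.1:9000;` plus separate locations for static files.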

Ansible? Puppet? A five line bash script?

rsync && ssh target -t 'cd ~/app/; ...' and it's a one-liner.

All too hard. Use "bottle" (I think it's much the same as Flask) - you just call 'run' and it does its thing.


I think that's more intended as demo server to get you started quickly while you're developing.

You'll probably want to switch to uwsgi or gunicorn before you actually deploy anything.

I haven't actually used Bottle, but with Flask the development web server seems to fall over if a client cancels one of its HTTP requests, for example. It's really just a simple, light thing for mucking around with.

For Go, does anyone have opinions on how this is advantageous over using the built-in HTTP server (net/http) fronted by a regular nginx proxy_pass?

This would take the place of something like tomcat or uwsgi, right?

The day I can toss Tomcat to the bin and replace it with something written in C that allows zero-downtime restarts will be a fine day indeed.

Java is a notable omission on the diagram on their landing page...

Under Features / Multi-language support:

> Full support for Go, PHP, and Python; Java and Node.JS support coming soon

No Ruby support :<

Also "coming soon" according to their github page

It is in beta, but I hope this won't become a commercial-only product.


It's open source at the moment at least and I think it's reasonable to expect at least that the parts that are open source today will remain so in the future. Certainly they could have a commercial version with extra features like they do with Nginx, but as long as they have a useful version of this Nginx Unit available open source I will be happy to use it.

I am surprised no one has mentioned Kong [1] yet. It seems to implement most of the stuff promised by Unit, and it has been around for a few years.

[1] https://getkong.org/

Not sure how it is related. Unit is an app server: it runs app processes and manages them, handles graceful restarts, etc.

Kong is just an API gateway: you run your own infrastructure as usual and put a gateway on top of it.

>Build the foundation of your service mesh.

Not directly related, as Unit seems to be advertised primarily as an app server, but you can see the quoted text on the main page.

The last I saw, Kong didn't support microcaching, which is one of the best nginx features IMHO.

Any use for that on a small scale (of 1 instance)? If you'd need to run nginx in front of it anyway, does it provide any use in cases where you'd normally use php-fpm and some proxy_pass?

Interesting for me, since I run more than just PHP. Depending on what you do, maybe the API is useful for you.

The concept of XUnit is so ingrained in my head that I assumed it was a unit testing framework for NGINX.

The rest of the headline cleared it up of course, but I was curious for a minute how that would look.

EDIT: When discussing a new product, I would think the name is a fair point of discussion.

Furthermore after this thread's title changed, it now requires a clickthrough to dispel similar misunderstandings.

Yeah, given the title, I thought it was about a unit testing framework for Nginx. Kinda like ServerSpec, but more specific: http://serverspec.org/

Is this similar to openresty, but with Python, Go, and PHP instead of Lua? Or something different?

No, this is a replacement for things like php-fpm, gunicorn, etc.

The REST API part of it is for updating its configuration over HTTP.

So, Nginx is following exactly the path of Apache httpd: remember mod_php, mod_perl, etc.?

This is not Nginx, it's a separate project developed by the same company. You could put any frontend proxy in front of it.

I didn't see this mentioned, but is there any way to upgrade the versions of the modules such as Go and PHP independently of the core Unit package?

So is it standalone, or do you still need to run this behind regular Nginx, like you would a language-specific application server?

Oh, they wrote their own uwsgi, based on what presumably started as nginx2. That's cool.

I hope they can avoid second-system syndrome.

[honest question, not being negative] what real use-case is not already being addressed by existing technologies?

For almost every new product you see, the answer is: none.

It's not about making something impossible possible. It's about improving possible things in some dimension - like speed, safety, flexibility, or - in this case - standardization and integration with already used tool.

Great answer! I wondered about this myself for a moment :)

Honest question, not being snarky: when did the existence of other products handling the same use-cases ever stop people from creating another?

For one, it's not just "handling a use-case"; it's also _how_ you handle it. And within what ecosystem you handle it. And what kind of support, etc., you offer with it.

> Run multiple applications written in different languages on the same server

Amazing progress! Someone introduce them to CGI.

How is it different from Envoy?

It seems like it is quite different, you actually change your code to listen with NGINX unit.


I came to ask the same question. The landing page is terrible it says pretty much nothing.

this is an Envoy competitor. If my reading is right, they want to jump on the Istio bandwagon (https://istio.io/docs/concepts/what-is-istio/overview.html) as another data plane option.

Looks fantastic! Will be trying this over the weekend.

Is it an alternative to Docker Compose in some sense?

Any report of the perf (VS uWSGI for example) ?

So does it use fastcgi to rule them all?

Nginx Unit > G Unit

But can I use Perl 6?

that was my question too. seems like no for now, and no plans either.

Sad butterfly

how does it compare to openresty/luajit ?

AFAIK nginx unit would still require an nginx in front, so it is in a different weight category than openresty.

It looks like it's more of a replacement for the good old NGINX+Apache setup, where there would be mod_php, mod_cgi, mod_perl, and .htaccess on the backend to serve the app.

Is this like AWS Lambda you could put in your own cloud?

This type of question is an indication that the NGINX Inc. salesmen failed horribly to convey what the product actually is in layman engineering terms. Too much buzzword compliance.


I came to the comments specifically to try to figure out what the heck this thing does.

The page itself never gets to the point of "Here's what it does".

> Is this like AWS Lambda


> you could put in your own cloud?


So, it's an application server?

I recently tried to deploy a python flask application, and it was quite a mess. It relied on some services I had never heard of, and the documentation was a mess (not the documentation of Flask but of how to deploy it properly).

If Nginx Unit could host flask applications, it would be great news.

> Full support for Go, PHP, and Python;

Does it do WSGI then? Did they write the equivalent of mod_wsgi?

Yes, from the docs it appears the Python app type in Unit provides a WSGI host.
