EDIT: https://github.com/nginx/unit/blob/master/README.md is the better reading.
> NGINX Unit is a new, lightweight, open source application server built to meet the demands of today’s dynamic and distributed applications.
That means everything and nothing. They wasted their intro sentence on it. They probably don't have a very good idea of what it is either.
Then the next section explains that you can run different kinds of applications and even different versions of the base software like different versions of PHP or python all in the same server.
I guess it lost momentum when ZeroMQ did. Anyone know why? Sounds like a dream solution in the current microservice hype.
Also, websockets actually map pretty badly to http, conceptually, and fit zeromq much better IMO.
It's pulled in as a dependency and launched in the background.
It never had much momentum though -- what it did lose, and what killed it, was Zed's interest.
And again, nothing new, except someone else takes care of some server software for you with the promise of reduced cost and maintenance, but the reality eventually becomes tight proprietary coupling and price gouging.
Given such a standard API and compatible servers, you'd then deploy a FaaS server cluster to your public/private cloud of choice, the same way you deploy e.g. a Kubernetes cluster, or a Riak cluster.
There would likely be small public clouds attempting to be "FaaS native" by exposing only such servers in a multitenant configuration (as small public clouds like Hyper are currently doing with CaaS). Their implementations wouldn't always be exactly compatible, and might have some lock-in.
However, once FaaS "caught on" with the enterprise, a FaaS server would likely make its way into the OpenStack architecture.
At that point, you'd see medium-sized public cloud providers like OVH and DigitalOcean set up their own multitenant FaaS clusters as well, probably with custom code, but built to be compatible with the OpenStack FaaS tooling, to allow enterprises the freedom to move FaaS functions freely between public and private clouds.
And, eventually, the other major cloud providers would feel the need to support the API.
This path has already been followed: it's what happened to Amazon S3—first cloned (but not compatibly) in FOSS by tools like Riak CS; then standardized by OpenStack Swift; then cloned compatibly in FOSS by tools like Minio; then picked up by medium-scale clouds like Rackspace; and then, eventually, picked up by Azure and GCP as secondary APIs to address their equivalent offerings (that originally had quite different APIs.)
With old skool SOA you'd typically have a monolith app with a bunch of endpoints. With microservices, especially in a containerized environment, they tend to be more lightweight.
ZeroMQ still works great and the open source community is still maintaining it on GitHub. I just think people are also looking at other technologies. A lot of interest popped up in things like Apache Kafka and Samza. I still think ZeroMQ holds a unique place due to its lightweight and simple nature.
I more often find platforms have too much velocity. And if not too little mass, then too little solidity.
A common way to do framing is to prepend each frame with its encoded length. That's easier, faster, and less error-prone than searching for ASCII delimiters.
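A minimal sketch of length-prefixed framing in Python (function names are illustrative, not from any particular library):

```python
import struct

def write_frame(buf: bytearray, payload: bytes) -> None:
    # Prepend a 4-byte big-endian length header to each frame.
    buf += struct.pack(">I", len(payload)) + payload

def read_frames(buf: bytes):
    # Walk back-to-back frames by reading each header;
    # no scanning for delimiters, no escaping needed.
    offset = 0
    while offset + 4 <= len(buf):
        (length,) = struct.unpack_from(">I", buf, offset)
        offset += 4
        yield buf[offset:offset + length]
        offset += length

buf = bytearray()
write_frame(buf, b"hello")
write_frame(buf, b"world")
print(list(read_frames(bytes(buf))))  # [b'hello', b'world']
```

The payload can contain any bytes at all, which is exactly what makes this less error-prone than delimiter hunting.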
Before that, I used to hear people talking about it all the time.
Ended up using a Linux container on Docker to get the thing working.
It wasn't, turned out to be a Category 4 hurricane :P
I don't use zmq nearly as much.
Pro Tip: Use 'cbor' for serialising.
Which is magnificent. The ZMQ poller is tons of fun. (Although I think this doesn't work on Windows.)
It's still intended to run behind Nginx web server (or some other web server), much like you'd run something like PHP-FPM behind a web server.
Some things to look for, such as registration/discovery of services, intra-cluster load balancing (where it started, no doubt), identity propagation & authn/z
The biggest issue to my mind though is distributed transactions and logging/debug/development. My biggest stumbling blocks with this sort of thing.. stepping through code over microservices is such a PITA.
some people simply don't like containers or aren't tooled for it.
there's more than one way to do it (TMTOWTDI).
why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock in the tech press"
Exposing this otherwise awesome API to the public would amount to a free RCE for everybody. So never expose this to the public, not even behind some authentication.
It's very cool that by design it's only listening on a domain socket. Don't add a proxy in front of this.
For the same reason they say, "non-trivial" when they really mean "nearly impossibly difficult". :)
I did compliance work for a lot of start-ups and never came across a company that understood this concept. The majority thinks that their wireless router is already doing this via the Guest account.
How? It's so much cleaner and simpler than Apache. I don't get this sentiment.
But I suspect for people who do it more seriously, nginx config hits the sweet spot. To me the language seems sophisticated, well documented, and fairly well behaved if you pay attention to the rules.
That can make it too hard for someone casually trying to quick-start some experimental project. But it's exactly what you want if you are maintaining a long-lived setup that is likely to grow and become complicated over time.
This is not 'nginx'. You can't just write this off because Apache did something similar ten years ago. The built-in API alone is worth exploring.
Their eventing/threaded mpm is basically nginx.
And now nginx is starting to gain the features of apache.
Here you can see the configuration of workers and user/group permissions for a Go application:
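Roughly what such a configuration looks like, as a sketch based on Unit's early docs (paths, names, and worker counts here are illustrative):

```json
{
    "listeners": {
        "*:8500": {
            "application": "go_app"
        }
    },
    "applications": {
        "go_app": {
            "type": "go",
            "workers": 4,
            "user": "www-data",
            "group": "www-data",
            "executable": "/www/go_app/app"
        }
    }
}
```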
Is it like the apache mod_php for php for example ?
Thanks in advance for your answer
No real idea if it does so using fcgi or some other socket-based proxying, or if the unit is spun up as a separate process and handed the raw socket and some shared memory after the headers are parsed (closer to how mod_php works).
You can see the PHP configuration here:
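For reference, an early-Unit-style PHP application block looks roughly like this (a sketch; field names and values are illustrative, check the docs for the current schema):

```json
{
    "applications": {
        "blog": {
            "type": "php",
            "workers": 20,
            "user": "www-data",
            "group": "www-data",
            "root": "/www/blog",
            "index": "index.php"
        }
    }
}
```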
And here's the configuration needed to integrate Unit with NGINX:
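A typical fronting setup would look something like this (addresses and upstream names are made up):

```nginx
upstream unit_backend {
    # Unit listener address
    server 127.0.0.1:8300;
}

server {
    listen 80;

    location / {
        proxy_pass http://unit_backend;
        proxy_set_header Host $host;
    }
}
```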
Upon first reading I thought that Unit needed to be behind NGINX to function. Actually, it listens for requests as a separate server entirely; on top of that it provides an API for configuration purposes.
However, if you want to use the other features of NGINX, like serving static files, you will need to put it in front of Unit.
What you say, sounds like NIH syndrome to me.
Really what this sub-thread is arguing is that security Isn't My Job(TM) as application developer. I disagree. Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Not accepting unbound input and sane rate-limiting are kind of basic stuff, no? I'm not saying every app developer needs to be a Defcon wizard, just that they should have some fundamental awareness of secure coding standards for web apps if that's what they're building.
Nowhere in the sub-thread is this claimed.
> Insecure software isn't eventually value-destroying?
Nowhere in this sub-thread is anyone suggesting otherwise.
> Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Nobody said this. But while we're on the topic the more likely false security blanket comes from telling app devs "just use 'net/http' and 'crypto/tls' and everything will be fine without a reverse proxy."
In any case the straw men you've raised are distracting and not driving the conversation forward.
> Nobody said this.
That seems dishonest to say... From the grandparent:
> Or... you spend your time building something useful, leveraging skills you do have, and let nginx leverage its own strengths.
Really sounds like at least one person in this thread is advocating for app devs not to worry about things that nginx takes care of.
Agree that making straw men doesn't help. There's advice on either side regarding which one to use, and realistically both are equally 'false security blankets'. The correct answer is to educate yourself on the benefits and drawbacks of each and make a conscious decision about where to implement your security.
I don't use Go, but D (dlang), vibe.d, varnish and lighttpd are working real well for my latest venture.
Nginx is extremely fast for that case, which is typically the reason most people proxy languages through it. ;-)
It could be that nginx is more efficient at static file serving, but that'd be down to being specifically designed and optimised for it rather than some "sync vs async" thing.
From a Go programmer's perspective, this looks like "blocking a thread", but because goroutines are relatively lightweight in comparison to actual threads, it behaves similarly resource-wise to callback-based async IO. (Although yes, nginx is likely optimised so that it throws out data earlier than Go can free stack space and so can save some memory. Exactly how much is up to benchmarking to find out.)
Basically, the only differences between Go and e.g. a libev-based application as far as IO is concerned is a different syntax - the event loop is still there, just hidden from the programmer's point of view.
Note that this doesn't mean you shouldn't put nginx in front of Go to serve static files - nginx is likely more optimised for the job than Go's file server, might handle client bugs a little better, is more easily configurable (e.g. you can enable a lightweight file cache in just a few settings), you don't have to mess around with capabilities to get your application listening on port 80 as a non-root user, and so on and so forth.
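For example, the "lightweight file cache in just a few settings" bit looks roughly like this (values are arbitrary; note that open_file_cache caches open descriptors and metadata, not file contents):

```nginx
location /static/ {
    root /var/www/myapp;
    # cache file descriptors, sizes, and mtimes
    open_file_cache          max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
}
```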
This is why systems like aio exist, though afaik most systems tend to solve this with a thread pool rather than aio, which can be very complicated to use properly.
It seems that nginx can use thread pools to offload disk IO, although doesn't unless configured to - by default disk IO will block the worker process. And FreeBSD seems to have a slightly better AIO system it can use, too.
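The thread-pool offload is configured explicitly, roughly like so (pool name and sizes are arbitrary):

```nginx
# main context: define a named pool of threads for blocking work
thread_pool disk_io threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            # hand blocking file reads to the pool instead of
            # blocking the event-loop worker process
            aio threads=disk_io;
        }
    }
}
```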
Your choice is: force a timeout and kill streaming requests but defend against slow-client DoS, or support streaming requests and suffer from a trivial slow-client DoS.
For this and other reasons I still recommend fronting golang with something more capable on this front.
A properly edited book would be awesome. I would pay for it of course.
uWSGI definitely needs more concise tutorials on how to accomplish some tasks (e.g. creating Hello World with python and uWSGI, or how the uWSGI emperor works).
However I disagree with "lacking severely in the department of documentation"
Sure, it's not as easy as some other projects to dive into (e.g. Django) but IMHO the documentation is not lacking, it's just not forthcoming.
If you sit down and read through the uWSGI documentation, you'll discover a lot of very useful functionality and a reasonable description of how to utilise it.
What's lacking is the tl;dr way to bash something out quick and dirty.
https://uwsgi-docs.readthedocs.io/en/latest/Emperor.html - has config snippets too
Or maybe you mean detailed step-by-step instructions, à la howtoforge?
Yes, this is what I meant when I said
> IMHO the documentation is not lacking, it's just not forthcoming.
Yelp.com runs behind uwsgi, and effectively all of the python services behind it do as well. Some use more uncommon features like gevent support.
Agreed. After recently testing out Python for a web dev project I was really dismayed at the fragmentation and lack of usability in the landscape of application servers. Here's hoping this might lead to some standardization.
Is it supposed to replace language-specific servers, like unicorn and puma for rails (but then, I'm confused about what such support would look like for Go, since the server is directly embedded in the program)? Does it embed interpreters for interpreted languages, like mod_* did for apache?
Note that it's actually CGo (which is not Go), and it uses a non-standard build process to install it: http://unit.nginx.org/docs-installation.html#source-code.
I don't like it at all :( I usually put plain nginx in front of my app, to handle static files and simple load-balancing, but this seems to be oriented towards handling issues best handled elsewhere.
It works with postgres or cassandra (and eventually scylladb https://github.com/Mashape/kong/issues/754 ).
Also, nginx is pretty good at restarts, even with thousands of files and vhosts.
in fact nothing like it really AFAICT
I think a "how it works" or "design doc" would be really helpful.
That said, the source files do make for pleasant reading. The nginx team has always set a strong example for what good C programming looks like.
EDIT: Their blog post makes this clearer... nginx unit is one of four parts in their new "nginx application platform"
Not having to use docker would be a huge plus for me.
The homepage on Nginx.com is basically
> Join this webinar to learn
> - What NGINX Unit does that has not been available before
There are a couple of options I'd like to see added to the Python configuration though before I could try it:
- Ability to point it at a virtualenv.
- Ability to set environment variables for the application.
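For context, the Python application block at launch was roughly this (a sketch from the early docs); a virtualenv path and an environment-variable map would be hypothetical additions on top of it:

```json
{
    "applications": {
        "my_app": {
            "type": "python",
            "workers": 2,
            "path": "/www/my_app",
            "module": "wsgi"
        }
    }
}
```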
How is this specific to Nginx? This same mistake is possible with any other software ever written.
I can't speak for the other languages (PHP, Go, Python), but I have some reservations about it helping Java (as well as Erlang and other (J)VM languages), as FastCGI-like stuff has been attempted for Java in the past without much success, with the exception of Resin.
I guess it would be interesting though if they did a native Servlet 3.0+ implementation like Resin, but I doubt that is what will happen. Regardless, Netty, Undertow, and even Jetty have caught up speed-wise to Resin (well, at least according to techempower).
AJP/mod_jk for Java.
[Here's](https://www.digitalocean.com/community/tutorials/how-to-serv...) an old guide for running Flask with uWSGI and nginx on Ubuntu. There are several more recent, detailed instructions online.
Personally, I have an AWS instance running a Node.JS server on (blocked) port 8000, a Django uWSGI app on 8001, and a static resume site, all being reverse-proxy served by nginx. So I don't really see the advantages of Nginx Unit yet.
If you are only looking to deploy your python code (and nginx/apache is constantly running on the server), then follow these steps
1. install docker on server
2. create an account on https://hub.docker.com/
your docker workflow in the future looks like this:
1. test the application on your laptop inside a docker container
2. push container to docker hub
3. "docker update" your stack
BTW, it is a good idea to always do API versioning on production runs. That will eliminate the possibility that different API versions (files stuck in the cache, or simply people who kept browser open for a long time) use the same endpoint
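One low-tech way to get that, sketched as nginx routing (upstream names are made up):

```nginx
# Route by URL prefix so stale clients keep hitting the
# API version they were originally served.
location /api/v1/ {
    proxy_pass http://app_v1;
}

location /api/v2/ {
    proxy_pass http://app_v2;
}
```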
You'll probably want to switch to uwsgi or gunicorn before you actually deploy anything.
I haven't actually used Bottle, but with Flask the development web server seems to fall over if a client cancels one of its HTTP requests, for example. It's really just a simple, light thing for mucking around with.
> Full support for Go, PHP, and Python; Java and Node.JS support coming soon
It's open source at the moment at least and I think it's reasonable to expect at least that the parts that are open source today will remain so in the future. Certainly they could have a commercial version with extra features like they do with Nginx, but as long as they have a useful version of this Nginx Unit available open source I will be happy to use it.
Kong is just an API gateway: you run your own infrastructure as usually and put a gateway on top of it.
Not directly related, as Unit seems to be advertised primarily as an app server, but you can see the quoted text on the main page.
The rest of the headline cleared it up of course, but I was curious for a minute how that would look.
EDIT: When discussing a new product, I would think the name is a fair point of discussion.
Furthermore after this thread's title changed, it now requires a clickthrough to dispel similar misunderstandings.
The REST API part of it is for updating its configuration over HTTP.
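Roughly like this (the socket path varies by install; a sketch of the documented pattern, not copy-paste-ready):

```shell
# replace the whole configuration from a JSON file
curl -X PUT -d @config.json \
     --unix-socket /var/run/control.unit.sock \
     http://localhost/config/

# read the current configuration back
curl --unix-socket /var/run/control.unit.sock http://localhost/config/
```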
I hope they would avoid Second System syndrome...
It's not about making something impossible possible. It's about improving possible things in some dimension - like speed, safety, flexibility, or - in this case - standardization and integration with an already-used tool.
For one, it's not just "handling a use-case" it's also _how_ you handle it. And within what ecosystem you handle it. And what kind of support etc you offer with it. Etc...
Amazing progress! Someone introduce them to CGI.
It looks like it's more of a replacement for the good old NGINX+Apache setup where there would be mod_php, mod_cgi, mod_perl and .htaccess on the backend to serve the app.
I came to the comments specifically to try to figure out what the heck this thing does.
The page itself never gets to the point of "Here's what it does".
> you could put in your own cloud?
If Nginx Unit could host flask applications, it would be great news.
Does it do WSGI then? Did they write the equivalent of mod_wsgi?
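For context on what hosting Flask would involve: WSGI itself is tiny, and a Flask app object is just a WSGI callable. A minimal example of the interface any such server has to speak:

```python
def app(environ, start_response):
    # WSGI: the server calls this with the request environment
    # and a callback for the status line and response headers.
    body = b"Hello, world!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # the return value is an iterable of byte chunks
    return [body]

# Any WSGI server can host this, e.g. the stdlib reference server:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()
```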