Nginx 1.12.0 stable (nginx.org)
120 points by ymse 285 days ago | 76 comments

One key missing piece in nginx is a way to interact with the configuration without having to edit the config file. This is a vital piece of modern infrastructure, where backends are added and removed on demand.

You can interact with haproxy via lua[1] or use etcd to have traefik load its configuration[2].

Seeing how [as others also mentioned] nginx seems to favor pro customers in terms of functionality, it would only seem wise to choose another proxy/load balancer for your next project.

[1]: http://www.arpalert.org/src/haproxy-lua-api/1.7/#Proxy.serve...

[2]: https://docs.traefik.io/toml/#etcd-backend

This can be done with lua-nginx-module [1] and its 'balancer_by_lua' directive [2].

There are projects like the one I'm working on [3] that implement dynamic proxies using OpenResty.

[1]: https://github.com/openresty/lua-nginx-module [2]: https://github.com/openresty/lua-nginx-module#balancer_by_lu... [3]: https://github.com/3scale/apicast/
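A minimal sketch of the balancer_by_lua approach (OpenResty directives; the hard-coded peer stands in for whatever your service discovery would return):

```nginx
http {
    upstream dynamic_backend {
        server 0.0.0.1;   # placeholder; the real peer is chosen in Lua
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- in practice the peer would come from a shared dict,
            -- Redis, etcd, etc., rather than being hard-coded
            local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 80;
        location / {
            proxy_pass http://dynamic_backend;
        }
    }
}
```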

nginx can reload the config files without closing the listening port or interrupting any existing requests. haproxy can't do that. So you can actually live-update on-demand much more of the nginx configuration than you can the haproxy configuration.

> haproxy can't do that.

HAProxy reloads don't interrupt existing connections (unless you want it to).

There is a ~microsecond downtime in accepting new connections. https://engineeringblog.yelp.com/2015/04/true-zero-downtime-...

This doesn't work well for modern websites using push-style updates, e.g. via websockets. Old nginx processes might never close after the reload, as connections are very long-lived. If reloads happen often this leads to memory exhaustion. So this is a bad solution to rely on.

Websockets are indeed a complication, presumably for haproxy as well. But it seems nginx has a recently added feature which can fix this:


Logging in to machines to update configs doesn't scale well. You often end up building your own framework/api "only" to edit the config file.

Or paying $2500/instance/year for nginx plus.

Executing a template, writing it to a file, and sending SIGHUP, is not that hugely different from composing a request and writing it to a control socket.

Maybe nginx isn't better than haproxy for many purposes. But updating configuration isn't the reason :)

Ansible is working great for me.

If that's a problem for you, then use a pull-style config management tool (eg puppet) over a push-style one (eg ansible).

If you're at a scale where logging into each server from your control node 'hurts', then you're still going to hurt with all the config other than nginx anyway.

NGINX plus provides a simple http API to add and remove backends to an upstream block. They also have persistence of these changes.

See https://www.nginx.com/products/on-the-fly-reconfiguration/
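Per the linked docs, the setup looks roughly like this (upstream_conf and runtime state are Plus-only; addresses and names here are illustrative):

```nginx
# NGINX Plus only: upstream state is kept in a shared memory zone so it
# can be modified at runtime through the upstream_conf endpoint
upstream backend {
    zone backend 64k;
    server 10.0.0.1:80;
}

server {
    listen 127.0.0.1:8080;
    location /upstream_conf {
        upstream_conf;
        allow 127.0.0.1;
        deny all;
    }
}
```

Backends are then added or removed with plain HTTP requests against `/upstream_conf`, e.g. `?upstream=backend&add=&server=10.0.0.2:80`.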

Disclaimer: nginx plus is about $1900 per server. HaProxy is free.

You can hot reload config files that you have already tested for correctness. No dropped connections.

Been using Traefik under light load on Kubernetes with automatic ACME Let's Encrypt support. Very happy with the ease of use so far, and it even exposes Prometheus metrics.

Apache httpd 2.4 has supported dynamic reconfiguration of backends for years now without leveraging Lua-hacks:


I did use that one, and it actually works pretty well, although you need some Go skills.


We're having great luck using traefik (https://traefik.io) as a kubernetes ingress, we just couldn't get nginx working well and ever since the switch it's been rock-solid.

Traefik is amazing - you can add it as a service with almost no config and get http2, https (with good defaults), letsencrypt and auto-discovery of services

I also use it as a frontend to all local dev

Does it support more advanced forms of authentication than http basic?

Digest. But you probably want to be doing auth on whatever Traefik is pointing at

What was the reasoning of traefik vs haproxy?

Traefik was written before haproxy had hot reloading configuration in 1.7

It does one thing and does it well - auto configured and discoverable lb and proxying designed to run in container environments

Easy to get running, easy to know everything it can do and without much effort it gets a lot accomplished

Not that nginx and HAProxy still don't have their place, but if you want to front a Docker Swarm or k8s stack, traefik is just easy, whereas nginx/haproxy have to be configured for that task

Haven't used traefik but HAProxy doesn't have HTTP/2 support yet.

Was hoping to get HTTP/2 server push in 1.12, since H2O, Caddy and even Apache already support it.

you should have a look at the Nginx fork from Taobao/Alibaba



The only thing that's pretty bitter about Nginx is the access to the server status in the open source community edition.

You get only so much with "ngx_http_stub_status_module" that you have to compile in yourself, as distros don't compile it in: https://nginx.org/en/docs/http/ngx_http_stub_status_module.h...
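For reference, once the module is compiled in, exposing it looks like this (listen address and path are illustrative):

```nginx
server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status;          # older nginx needs "stub_status on;"
        allow 127.0.0.1;      # keep the counters private
        deny all;
    }
}
```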

With Nginx plus you get access to so much more ("ngx_http_status_module") of the server status. It's not about the pretty frontend, it's about the values.


Why isn't there a CentOS-style distro apt/rpm package of Nginx (think of free RedHat) with the status module enabled, as based on open source Nginx?

This is the classic downside of open source that's supported by a "pro" edition: it encourages maintainers to actively avoid adding features that would compete with the paid product.

This is similar to the classic blunder by service product strategists who tier pricing by both capability and scale, rather than one or the other. This results in nonselection of the product rather than the hoped-for upsell.

Curious - for us non-strategists, could you give another sentence explaining your thought here? I don't quite get why tiering by both would prevent a sale - but it sounds like hard-earned insight.

Sure. And yes it is. Bear in mind my perspective is mostly SaaS B2B. Two perspectives on this:

1. Some of my smallest customers are actually lighthouse customers for the rest of their industry. When they can't afford a price-segmented "enterprise" feature then I miss out on their super valuable product feedback and word-of-mouth.

I've seen this pattern in sport, telecoms, public transport, k-12 education. About the only place I haven't seen it is the financial sector.

I have been both the customer missing out and the provider in this scenario.

2. If a large enterprise is prepared to buy many units, but won't use your high end features, they can just pay for the cheaper end unit. You likely just missed out on revenue. They may well have been interested in later adoption of your more sophisticated stuff, but because they bought the cheap product it'll never happen because at scale the cost step function of going up a tier is perceived super negatively. So you also miss out on feature adoption and product feedback.

I have only been the customer in this scenario. I have seen it in both the financial sector and government. It almost happens in healthcare, but they are easily manipulated into higher price segments by placing HIPAA compliance stuff there. NB: Personally I find that kind of manipulation to be toxically abhorrent and when I've been the customer used it as a red flag against doing business. Nonetheless it is common.


To me, business strategy is all about the creation and capture of value. When you find a product model that creates a virtuous cycle between those two aspects of value, you likely have a winner.

The best example I can give validating my experience is AWS, where everyone can use everything on a pay-as-you-go basis. I have been both a customer and on the inside with AWS. The model is incredibly empowering for startups having access to this massive box of capability, and moreover enables enterprises to do long-term planning of feature adoption with price stability.

The only time I recommend charging more for an enterprise feature is because it costs you significantly more to deliver it and there is no other way to capture your share of the value created or use scale to drive down the marginal cost of delivery. A wise PM will want to separate that out into a separate product. A classic example is providing a TAM over and above your hopefully already universally excellent support desk.

The counter-example to this point of view might be Slack. I cannot fathom why they both tier their capabilities and then charge per seat. Hard to see how SSO and AD sync adds to marginal cost per customer seat, and the storage and uptime promises are similarly paper thin value adds. It is one of the reasons I never recommend them. Yet they are apparently successful, perhaps because their product is crack for dev teams.

So execution certainly still matters. And no doubt other counter-examples can be easily found. But if you want to maximise your market engagement, it is essential to know how your customers behave in the product selection process in response to the signals you give them.

Sorry that was more than one sentence. I was waiting for a compile.

"Nonselection"? Did you see the HN post a few days ago [1] about Nginx reaching 33% market share? They seem to be doing very well.

[1] https://news.ycombinator.com/item?id=14078589

Markets are measured in dollars, not usage. The last press release I read from Nginx* says they have barely more than 1,000 paying customers, and shied away from real revenue numbers, but I doubt anyone would place a bet that it is on the order of magnitude of revenues due to e.g. IIS on Windows Server, or even the revenues that Oracle & IBM collect for their rebadged Apache.

* https://www.nginx.com/press/nginx-carries-strong-business-mo...

It's simply replacing Apache, which is old and deprecated. Nothing fancy.

Any citations or info on tiering by features and scale being wrong? Do you mean as two independent axes? Most services I see combine both features and scale. "Enterprise" features (like logging, RBAC, etc.) are available along with the highest volume plans.

I'd say they're correlated, which is part of the reason the fallacy arises. But the coefficient is different for every customer, so unless you're in a market with homogeneous buyers, any segmentation you devise is very likely to be wrong for almost everyone. See my longer answer nearby for some pathological examples.

True. Nginx got very popular when it was open source, without a company behind it. It's a classic example of how some companies do it. If Nginx had been Pro-supported from the start, it would never have gotten that much traction. It's just sad, as server status is a bare essential feature that all competitors offer for free, yet with Nginx you have to pay $2,500/year/server to get it (plus run a binary blob with useless additional features instead of the open source edition).

I don't get why parent got down-voted. He is absolutely right. What's wrong with the confused downvoters?

It's not like nginx-the-company and nginx-the-opensource-project are different people. It's not that "some company" just overtook the project and now is milking the users.

nginx pro has been around for longer than nginx has been popular, so no.

Wrong. At first the company only offered professional support; Nginx was completely open source. Look at how the website looked in 2012: https://web-beta.archive.org/web/20120603095936/http://nginx...

NGINX Plus has been around since at least 2014, if not longer. Your theory makes no sense; the server has been rising in popularity even faster since.

AFAIK the Plus version is just not something people care about, but it's also never had any impact on the open source one. And as a strong advocate of open source, I find it extremely annoying when users find reasons to complain about friendly models for monetization of such software. :/

Nginx Pro enthusiasts / company owners I guess

Have you heard of NGINX Amplify?


The agent is open source and available here:


If you have a minute, could you give it a try and provide some feedback via Intercom in our UI? User feedback is useful for driving feature as well as product directions.

Disclaimer: I work at NGINX on the Amplify project.

It does access the same thing I mentioned: see stub_status (open source, yet you have to include the header and build it yourself) vs plus_status (paid, $2500/year/server): https://github.com/nginxinc/nginx-amplify-agent/blob/master/...

It would be a good move to open up the status module (release it as open source), and promote such a SaaS analytics product based on it with some error log scanning (like you try to do).

> You get only so much with "ngx_http_stub_status_module" that you have to compile in yourself, as distros don't compile it in

Debian compiles it in, so a lot of things that base off Debian would have it as well.

Agreed. nginx stripped all status pages and information from the open source edition. You have to pay $1900 per server to get nginx plus.

It makes it impossible to track issues when you cannot see what servers are online or offline and what's going on.

I'd recommend forbidding the use of nginx for load balancing at your company; only use HaProxy.

Alternatively, if your company is in the position to do so, maybe advocate for buying the license.

I feel they chose rather well with status pages: it's a feature that's useful, yet leaving it out doesn't cripple the OSS version in most use cases.

It cripples the OSS version in every possible use case.

I don't think you realize how important status pages are. That's like the single most important feature in a load balancer, after the load balancing itself.

You cannot perform any troubleshooting without it. You cannot see servers which are online or offline. You can't see what or how many users are online and where. You can't have statistics, bandwidth, connections, health checks... nginx is a black box.

Paying for nginx is a short term solution that causes more troubles. Next you have to count licenses and distribute license keys and use the paid installer from a special source. The long term solution is to ban nginx and use HaProxy instead, same stuff without paywalled features.

If I want to put money on something, it's going to F5 or NetScaler, or ELB, or Google load balancers, or Akamai, or Cloudflare. Not nginx.

"every possible use case" is a bit of hyperbole; there are plenty of non-load-balancer use-cases for nginx. Serving static files, relaying RTMP streams, and running OpenResty applications, for example.

But if load balancing is your killer feature, maybe it is the wrong choice.

The status page is indeed a mandatory feature for a load balancer: when something is down, you simply look at the page to see what's down. Many days of downtime and debugging have been lost because of not having that page.

It's less critical for a web server doing only one thing (a static folder or a single app). However, you still can't get metrics like time to load a page, connections per second, cache hits, error rates, etc... because nginx doesn't expose its metrics (for use by graphite/statsd and equivalent). Metrics are a paid feature, $1900 per server :D

> when something is down you simply look at the page to see what's down

Or look in the logs. Or have your monitoring system tell you what is down, the moment it goes down?

It's a good feature, it's not an absolutely critical stop the world women and children first feature.

The status page is quicker and more readable than logs.

By the way, nginx doesn't format logs properly. You think that an HTTP status code is a number? Nginx will give you "-" or "503, 503" as well. Not sure if the paid edition has correctly formatted logs.

Your monitoring system can't integrate with nginx. nginx doesn't expose its metrics as I just said in the last message. You need to pay nginx plus.
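For what it's worth, the multi-value codes come from nginx's $upstream_status variable, which records one status per upstream attempted; a custom log format at least makes the two fields explicit (the format name here is illustrative):

```nginx
# $status is the code returned to the client; $upstream_status holds one
# code per upstream attempted, hence values like "503, 503" (or "-" when
# no upstream was reached)
log_format upstream_debug '$remote_addr [$time_local] "$request" '
                          'status=$status upstream_status=$upstream_status';
access_log /var/log/nginx/access.log upstream_debug;
```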

Your monitoring system has no need to integrate with nginx... It would monitor your actual systems, not nginx's ability to monitor your systems. Nginx shouldn't be the first to tell you system x is down.

A poor man's solution could be a bash while loop with wget and sleep, hitting the actual servers or just the nginx frontend. A better solution would be any one of the hundreds of monitoring systems you can pick from that do exactly that, and integrate with nginx.
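A sketch of that poor man's loop, factored so the probe itself is reusable (the backend URLs are placeholders):

```shell
#!/bin/sh
# probe each URL once; print a DOWN line for every backend that fails
check_backends() {
  for url in "$@"; do
    if ! wget -q -O /dev/null -T 2 -t 1 "$url"; then
      echo "DOWN $url"
    fi
  done
}

# e.g.: while true; do check_backends http://10.0.0.1:8080/health; sleep 5; done
```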

And yes, those are valid values for status codes. A tad surprising at first perhaps.

Nginx exposes logs. Your monitoring system can handle those logs. A simple tail and a grep for 'Upstream server failed' in the error log would be fine. Better than manually opening some custom gui that offers the same info, just in a worse format you can't work with.

I think you're thinking of all use-cases as corporate use-cases (i.e. places where ops people exist.) There's a whole larger world of hobbyist/personal/academic/SMB-intranet use of software like nginx, where literally no ops is ever done, and the load balancer is considered a "black box."

It's a chicken-and-egg problem in my experience.

It's considered a black box because it is a black box. A hobbyist would learn to use the status page from time to time, if there were a status page. Instead, he's learning that software is a black box and won't look for a status page in his next endeavour.

In some cases, sure. In other cases, nginx is embedded inside some other larger software appliance, and you'd have no idea it's even there in order to think to do that. (Like Apache in macOS's Server.app.)

I agree. The cost of nginx Pro is a drop in the bucket at many larger companies. In the proper situation, I'm all for throwing some dollars at the people that make our infrastructure possible. Most companies aren't altruistic enough to donate money, but will buy the license given a decent rationale.

> Why isn't there a CentOS-style distro apt/rpm package of Nginx (think of free RedHat) with the status module enabled, as based on open source Nginx?

The code of ngx_http_status_module isn't open source (as far as I know). Getting access to it means you need an Nginx Plus license. Someone packaging that up and distributing it for free would be a rather big violation.

It does not make any sense to complain. The constructive approach that led us to open source world domination goes like this: just build what you need. Start today, publish early, and others will chime in, and you will have the best status module you could dream of.

We used NGINX with Consul (template) and Vault quite extensively until I recently found out about eBay's Fabio ( https://github.com/eBay/fabio ). Fabio is really great as an NGINX reverse proxy alternative in an environment with Consul or etcd. I highly recommend it.

What made you switch?

Home-made Consul template duct tape with NGINX vs. a solution that was designed from the ground up to be what we mangled NGINX into with Consul template.

Concrete example: Fabio is far more resilient to operator mistakes than our own NGINX Consul template duct tape solution (which is totally unforgiving).

Glad you like it. :)

Disclaimer: I'm the author.

> Changes with nginx 1.12.0 12 Apr 2017

> *) 1.12.x stable branch.

Well, thank you very much, that's a very informative changelog. :)

On the topic of Nginx, does anyone know if/how one could fire off an HTTP request (GET or POST) to an external service to log requests in real time (rather than, say, logging to a text file then processing that)?

It's undocumented, but you can do that with post_action:

   location /foo/ {
      post_action @mirror_request;
   }

   location @mirror_request {
      proxy_pass ...;
   }
Thank you so much! That looks perfect.

You can do it using the nginx-lua module, something like:

`local response = ngx.location.capture("/some/api/endpoint")`

I would not recommend doing this, though. Nginx internals are rather complicated in my experience.


So better off to just log to a file, monitor the tail of that, and send the data off with a separate script?
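Alternatively, nginx (since 1.7.1) can ship its logs straight to syslog, which avoids the tail-and-ship script entirely (server name, facility and tag here are illustrative):

```nginx
# send access and error logs to a remote syslog collector
access_log syslog:server=logs.example.com:514,facility=local7,tag=nginx,severity=info combined;
error_log  syslog:server=logs.example.com:514 warn;
```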


But you really don't want to do that if you need accurate log collection and aggregation. Having a local log file is essential to coping with backpressure from the log consumers. Consider what might happen if the remote syslog server goes down or is overburdened.

I'm surprised by the lack of first-class ACME/Let's Encrypt support. I figured once Caddy paved the way in that regard that nginx wouldn't be far behind.

There is lua-resty-auto-ssl [1] providing exactly that. Sure, it is not plain Nginx, but OpenResty.

[1] https://github.com/GUI/lua-resty-auto-ssl

I'm all for adding native ACME support to web servers, but I can understand that they'd rather wait till ACME reaches RFC status (which hopefully won't be long now - the draft is in WG Last-Call).

Once that's done, I'd definitely be disappointed if web servers still decide it's not in scope for their core product.

I don't think it's unreasonable to expect an officially supported optional module, at the very least. It's how they've handled experimental features in the past.

I was just thinking the same thing. I've been using Caddy for about a year now and love it.

Thanks for the reference to Caddy! Looks like a pretty neat project. I'm also bummed by nginx's lack of end-to-end HTTP/2 support.
