Nginx 1.6.0 stable released (nginx.org)
233 points by Usu on April 24, 2014 | 109 comments

Slightly OT: Does anyone know if packages for Ubuntu 14.04 are coming soon? We're using the official (mainline) repository[1] on Ubuntu 12.04, but Trusty doesn't seem to be supported yet.

I've always preferred the official repository because I didn't want to start compiling nginx just for stuff like SPDY support.

[1]: http://nginx.org/en/linux_packages.html

Looks like they exist, just not mentioned on that page yet.


It's a good idea to be comfortable compiling/packaging your infra from source (including interpreters, libraries, etc.), if only for the ability to quickly apply and deploy emergency patches. To demonstrate the importance of that capability, look no further than Heartbleed.

While distros are usually pretty good about updating critical software, they shouldn't be your only line of defense, except perhaps if you have an SLA or something.

I agree, although I'd stick with packages for almost anything, as it's just too much work to keep up with every release of every piece of software in your stack (except for, as you mentioned, special circumstances like Heartbleed). Plus, since in this case the repository is managed by nginx.org, it's hard to beat them to a new release even if you compile from source.

> It's a good idea to be comfortable compiling/packaging your infra from source

Would highly recommend becoming comfortable making your own packages (with any security updates, misc changes, etc.) over compiling and installing your own stack from source - distro packaging really is mostly your friend.
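As a rough sketch of that workflow on Debian/Ubuntu (the patch name and paths are placeholders), you rebuild the distro's own source package with your change rather than `make install`ing over it:

```sh
# Fetch the distro's source package and its build dependencies
apt-get source nginx
sudo apt-get build-dep nginx

cd nginx-*/
# Drop your fix into debian/patches/ and register it in the series file,
# then rebuild the binary packages (unsigned):
dpkg-buildpackage -us -uc -b

# Install the resulting .deb; dpkg/apt still track it like any other package
sudo dpkg -i ../nginx*.deb
```

The upside over a plain source install is that the package manager knows about every file, so rollbacks and later official updates stay clean.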

nginx still has a critical bug with SPDY and proxy_cache which causes connections to be aborted on cache hit. SPDY with proxypass+cache is fairly unusable without this patch.


Agreed, and also most production apps don't require the same set of modules.

I don't know, these things are always outdated so I always install from source.

In my experience the official packages are always up-to-date, as soon as I receive the release announcement, apt-get update will have the latest version ready.

The packages are usually uploaded within a couple of hours after the release announcement. It took a bit longer this time because there were two releases. :)

The packages have been uploaded as far as I can see.

I figure this is as good a place as any to ask my question: where can I find someone to hire who can write Nginx configs well? I have spent literally 40-ish hours trying to create an Nginx conf that holds up to my OCD. I have been told numerous times on IRC that I am too picky and that clean URLs are a challenge to write. I am a college student and system administration isn't even my job! Help!

I'm pretty sure the best way to accomplish this is to jump into IRC and declare that it is impossible to be done.

You will have 6 answers in 3 minutes.

Or say, "Well, this is the only way to do it... nothing else works for nginx." and then present an inefficient solution. You will get ripped but solutions will arrive.

Cunningham's Law to the rescue!


I wish, I have tried multiple times and no one wants/can do it.

this is literally happening in this thread

Yes I have; however, I always find it very challenging to modify it to support clean URLs and PHP-FPM in an OCD fashion.

Have you tried handling your URLs in PHP? Most apps/frameworks I've worked on in the past few years do better URL management than you are probably going to get with just nginx configs.

You didn't really specify what you meant by clean URLs, but maybe this will be helpful to you or someone else.

I have a static site served via nginx, but I don't like seeing .html in the URLs.

It's a little tricky serving a static site without the .html extension because you may have a directory and an html page with the same name.

The way to deal with that is to actually use the .html extension in the file system but not URLs.

  location / {
    # (assumed) pass site-verification files (Google's, Yahoo's y_key_) through untouched
    if ($uri ~ ^/google) { break; }
    if ($uri ~ ^/y_key_) { break; }
    # Hide the .html extension: requesting it directly is a 404
    if ($uri ~ \.html$) { return 404; }
    if ($uri = /index) { return 404; }
    if (!-f $request_filename) {
      rewrite ^/$ /index.html break;
      rewrite .* $uri.html break;
    }
  }

I don't mean to be blunt, but using if is not the way: http://wiki.nginx.org/IfIsEvil

Writing clean URLs is much easier with try_files, which allows you to do something like

    try_files $uri $uri.html =404;
if you don't want to see file extensions. (try_files needs a final fallback argument, here =404.)

By the way, the nginx wiki says that you should avoid ifs when possible: http://wiki.nginx.org/IfIsEvil

Wouldn't something like this accomplish the same thing? (Sorry if I'm totally wrong, I'm not really good at this.)

   location /google/ { }

   location /y_key_/ { }

   location ~ \.html$ {
      return 404;
   }
   location = /index {
      return 404;
   }
   location / {
      try_files $uri @rewrite;
   }
   location @rewrite {
      rewrite ^/$ /index.html break;
      rewrite .* $uri.html break;
   }
This is yet another concern of mine. Most of my solutions involve at least 3 if statements, but I need them to check whether a requested URL is a directory or a file, and so on.

What kind of app are you running that requires you to do something like this for clean URLs?

Thanks, I will try this, it looks like it will work.

It does work, thanks!

You should let your web framework of choice do that. You should be able to find any directives you need to add to your nginx conf in your framework's docs. Typically they will use a Front Controller (eg: app.php or index.php), so you need a directive to map all the dynamic traffic (ie. usually not images or other static files) to that file. Everything else is done by the framework.
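As a hedged sketch of that mapping (the socket path and front-controller name are assumptions, not from any particular framework's docs), the nginx side usually looks something like:

```nginx
# Anything that isn't a real file falls through to the front controller
location / {
    try_files $uri /index.php?$query_string;
}

# Only actual PHP files are handed to PHP-FPM
location ~ \.php$ {
    try_files $uri =404;   # refuses /random.gif.php-style tricks
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```

The framework then does all routing internally based on the query string or `REQUEST_URI`.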

I am designing this Nginx conf for use on small vanilla sites and statically generated sites like Jekyll. (Yes, I know Jekyll puts all blog posts in separate folders to solve the URL issue, but I don't want that.)

Try odesk? If you have a clear spec for what you want you can probably get a fixed-price bid.

Even the word "odesk" leaves a bad taste in my mouth. I have heard a lot of bad things about these kinds of freelance sites.

Have you used them? Like anything, odesk/elance/etc require an investment on your side to make successful.

We've found good people through odesk to take on tasks we weren't good at -- front-end javascript, document translation, audio transcription, to name a few.

Ask on StackOverflow or ServerFault, offer a bounty if no one answers within a day.

not a bad idea. Does either have an official way to offer a bounty? If not, any sites that do? Also, what would you say is a fair price range for this kind of request?

A bounty on those sites is not monetary; you use "karma" points that you earn by answering questions, etc.

I bet it will get closed for being too localized.

You could pay the Nginx guys to help I suppose.

What problems exactly with "clean URLs" are you running into?

It is a lot to get into now, but here is a taste.

    foo.com/index > foo.com
    foo.com/ > foo.com
    foo.com/folder/index > foo.com/folder/
    foo.com/bar.html > foo.com/bar
    foo.com/bar.htm > foo.com/bar
    foo.com/bar.php > foo.com/bar
    www.foo.com/ANY OF THE ABOVE TESTS > foo.com/*

Other rules: never add a trailing slash unless it is the index file of a directory. Extension order: .php, .html, .htm. Use h5bp/server-configs-nginx as a base. PHP-FPM should be fully supported and not exploitable by the foo.com/random.gif.php bug. 403 and 404 are sent to /404.html.

I am sure I am forgetting something, but that is a start.

Your specs look a lot like ours (we have a CMS that adds a trailing /index to every page that is part of the core navigation, and we don't want that). Here are the three primary rewrite rules we use for the issue; they don't match your spec exactly, but they might help you get started:

    # - Remove trailing slashes (except root /)
    # e.g. /foo/bar/ -> /foo/bar
    # ([^^] matches every character but the start of the string)
    rewrite [^^](.*)/$ /$1 permanent;
    # - Remove .html and .htm extensions
    # e.g. /foo/bar.htm -> /foo/bar
    rewrite ^(.*)\.html?$ $1 permanent;
    # - Remove index file URLs
    # e.g. /foo/bar/index -> /foo/bar/
    # (the trailing slash is then removed by the first rewrite)
    rewrite ^(.*/)index$ $1 permanent;
Edit: There are a couple of extra directives that go along with the above to make it work, which I neglected to include:

    # Try the request URI, and a potential index file in the URI (in the case of a directory).
    # This lets you hit the file WEBROOT/foo/bar/index with the URI /foo/bar,
    # and hit the file WEBROOT/foo/bar.html with the URI /foo/bar
    try_files $uri $uri/index $uri.html =404;

    # You need this (or some other way to provide the type information)
    # if you don't have extensions on your files.
    default_type text/html;
Also, for error pages, you can probably just use the error_page directive:

    error_page 404 403 /404; # The .html in your spec will be stripped off by the above rewrite rule

I have had problems using rewrite in this way. The URL that rewrite analyses may have already been changed by `index` or some other command, and this will lead to redirect loops. To avoid that problem, I have used:

    if ($request_uri ~* "^(.*)\.html?$") {
        return 301 $1;
    }
The above is a safe use of "if", and can be helpful since it operates on the actual URI, not the internal URI.

That's a good suggestion. Yeah, index directives will cause a redirect loop in my above configuration. The try_files directive is effectively taking the place of an index.

foo.com/* -> www.foo.com/* seems "cleaner" to me than your opposite, due to the issues with cookie leaking, CDN hosting, "normal user" expectations, etc. Like, please appreciate that your usage of "cleaner" is at best subjective ;P. (And I don't quite understand how you are intending to do "foo.com/ -> foo.com"... all URLs must have a path: you can't just GET, you have to GET /.)

I understand that redirecting to non-www is subjective; however, I find fewer characters in the URL cleaner. Also, for the majority of my projects cookies are not used, so it is not a big deal to use a non-www base URL (static blogs and simple vanilla sites for various school projects, etc.).

Also, as far as foo.com/ -> foo.com, I was referring to the URL, not the actual path.

example: https://www.google.com/ redirects to https://www.google.com correct? Maybe I am missing something...

No, that's just your browser (annoyingly) removing the trailing slash from URL shown in the address bar.

http://www.foo.com/ is the "correct" URL. Remember how HTTP works -- you connect to foo.com port 80, then "GET / HTTP/1.1". You can't just omit the "/" and expect it to work.

HTTP clients will just request "/" if no path is specified, so nginx will never even see a request that matches "^foo.com$". You'll cause an infinite redirect loop if you try to force the issue.
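Concretely, a request for `http://foo.com` with no visible path still goes out on the wire with `/` as its path:

```
GET / HTTP/1.1
Host: foo.com
```

So "foo.com" and "foo.com/" are the same resource; only the browser's display differs.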

No, it doesn't redirect.

    $ curl -I https://www.google.com/
    HTTP/1.1 200 OK

I think that's just your browser removing the trailing slash.

oh right, that makes more sense.

What you need to do is route EVERY request to your `index.php` or equivalent, and show errors from there. It would look like this:

    rewrite ^ /index.php last;

I am designing this Nginx conf for use on small vanilla sites and statically generated sites like Jekyll. Otherwise, yes, I would rely on my CMS's index file.

Is anybody here using nginx as a REPLACEMENT for Varnish? I'm not an expert in devops, but I will be deploying a webapp pretty soon, and I was wondering if anyone is replacing Varnish with nginx's cache (memcached-backed?).

nginx seems to be increasingly irreplaceable (with SSL caching, etc.), so I was looking to avoid having to deal with Varnish.

I did some Google searches but was not able to find anything, including nginx configs, etc. Nginx Plus claims to be an accelerator, but again there isn't a lot of info around that.

I've been using nginx+memcached for about 2 years on a high traffic site in production. It's been pretty great and runs without a hiccup. However bear in mind that Varnish is far more capable as nginx's memcached integration is fairly simplistic. You'll have to manage all your keys in the application layer as all nginx can do is send a certain request to a certain key and failover if it's not found. Varnish ACLs allow much finer control.
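For reference, the nginx side of such a setup is roughly the following (the "page:<uri>" key scheme and the addresses are assumptions; the application layer must populate the cache itself):

```nginx
location / {
    # Look the page up under a key the app is assumed to have written
    set $memcached_key "page:$request_uri";
    memcached_pass 127.0.0.1:11211;
    default_type text/html;   # memcached stores no content-type metadata
    # On a miss (or memcached error), fall through to the app server
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://127.0.0.1:8080;
}
```

This illustrates the limitation mentioned above: nginx only maps a request to a key and fails over; invalidation and key management live entirely in the app.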

Also, I've come across benchmarks that say Varnish is faster. I just don't want to deal with a complex setup for something that gets the job done. (Job = lower the load on the app server)

I have not used NGINX cache, but I am a heavy user of Varnish + NGINX. Varnish is really good at caching, and together with NGINX it makes a solid solution. NGINX cache seems (from skimming the docs) simple, and it might be enough for your requirements; with Varnish you know it can do a whole lot more. As always, it depends.

If configured correctly, varnish's cache is lightning fast, and incredibly flexible. Nginx + memcached can make things really fast, but I've seen cases where the time to first byte can be 20-40% faster with a well-tuned Varnish instance (we're talking milliseconds, of course). (Sometimes, I imagine this may be due to the fact that bleeding-edge performance people are more familiar with Varnish, though...).

Nginx has been getting better, and I'm excited to see what happens over the next couple of years.

To get a more accurate response, I'd list a few use cases here. If you want better control over the cache in general (expiration, purging, etc.) or have strong use-case-specific gains from VCL or ESI, it's pretty hard to steer clear of Varnish. Those are at least my criteria when choosing whether to include it in my stack.

Nginx+memc+consistent_hashing+memcached :)

There's also a survey from Nginx Team: http://mailman.nginx.org/pipermail/nginx/2014-April/043282.h...

Your opinion is needed for a great future of nginx!

The big feature, imo, is that it finally implements a newer version of SPDY (Chrome and Firefox are discontinuing the version that 1.4.x implements).

I'm currently running apache 2.2.22 on my Ubuntu 12.04 servers. It works fine. I'll be moving them to 14.04 and thus getting apache 2.4.7. I mostly use it for mod_passenger webapps and static sites.

14.04 includes nginx 1.4.6 but I'm sure the phusion guys will package 1.6 soon so I can easily upgrade to that. Is there any killer feature in nginx that I'm missing, staying with apache 2.4?

Generally speaking, nginx is lighter/faster/less flexible (though no less powerful). It really shines on low-RAM VPSes where the RAM eaten up by a big list of httpd processes really adds up.

For example, here are numbers from Apache+mod_passenger on my dev box:

	                         VSZ    RSS
	root     20050  0.0  0.1 416524 21020 ?        Ss   Apr18   0:13 /usr/sbin/httpd
	root     13370  0.0  0.0 217068  1984 ?        Ssl  Apr21   0:00  \_ PassengerWatchdog
	root     13373  0.0  0.0 503104  2324 ?        Sl   Apr21   0:04  |   \_ PassengerHelperAgent
	nobody   13381  0.0  0.0 218208  3508 ?        Sl   Apr21   0:00  |   \_ PassengerLoggingAgent
	apache   13388  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
	apache   13389  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
	apache   13390  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
	apache   13391  0.0  0.2 500060 34140 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
	apache   13392  0.0  0.2 500132 33924 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
And those same numbers on one of my production Linode instances, running nginx + passenger:

	                           VSZ   RSS
	root     17824  0.0  0.0   7988   328 ?        Ss   Apr10   0:00 nginx: master process
	nobody   31676  0.0  0.5   8732  3248 ?        S    Apr23   0:08  \_ nginx: worker process
	nobody    9103  0.0  0.5   8684  3288 ?        S    Apr23   0:03  \_ nginx: worker process
	nobody    9106  0.0  0.5   8876  3416 ?        S    Apr23   0:04  \_ nginx: worker process
	nobody   22077  0.0  0.4   8400  3004 ?        S    01:23   0:02  \_ nginx: worker process
(yes, I know that ps auxf isn't the best measure of memory usage, but it ballparks to make the point)

Nginx is also non-blocking, which is the main difference from Apache. I don't agree that Nginx is less flexible; from my own experience it's quite the other way around. Try configuring Apache as a reverse proxy and you'll see how "flexible" it really is.

As of 2.4 Apache also supports event-driven serving thanks to the event MPM.


What's so complicated about:

    ProxyPass url1 url2
    ProxyPassReverse url1 url2
Can it get any simpler than that?

Of course, if you need to fine-tune it for performance you are in the advanced category and should know what you are doing, but these two lines are all it takes to configure a reverse proxy with sensible defaults.

By "flexible", I primarily mean Apache's module system vs nginx's "compile all the things" approach. Upgrading a component in nginx means recompiling the whole webserver, whereas Apache modules can be managed separately from the httpd executable.
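For example, enabling an extra module means re-running the whole build and reinstalling the single binary (the module names here are just illustrative):

```sh
# nginx modules are selected at compile time
./configure --with-http_ssl_module --with-http_spdy_module \
            --add-module=../headers-more-nginx-module
make
sudo make install   # replaces the one nginx executable
```

Compare that with Apache, where a module is an .so you can enable, disable, or upgrade without touching httpd itself.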

I'm curious about the reverse proxy comment, though; mod_proxy_balancer generally does the job just fine. I do agree that nginx is easier to set up as a reverse proxy, though.

Also, mod_perl essentially lets you write your own Apache modules in Perl with access to a large chunk of the Apache API. It's a little old-fashioned now though.

Having managed a fairly complex apache based web site (lots of rewriting to maintain various legacy url schemes, a few cgi bin apps -- lots of cruft) -- I do think Apache is more flexible than nginx. Traffic server is probably more flexible still. On the other hand, you could say if you take a routing problem, and you attempt to fix it via mod_rewrite -- you now have (at least) two problems! ;-)

There was a fairly recent comparison between nginx and apache2.2/2.4 (and uwsgi and gunicorn, I believe) driven by jmeter for testing that showed apache was a little lower on throughput -- but more consistent on latency (unfortunately I can't seem to find the link again). So while I think it is generally good advice to "just use nginx", I wouldn't write off apache based on how 1.3 used to behave compared to old versions of nginx.

I would normally advise an architecture where you have a reverse proxy in front of application servers (even if that means PHP with FastCGI) if you can, and when it makes sense. Possibly with SSL termination and/or caching (Varnish) in front of that. I'm not sure that using nginx is actually any better than, say, HAProxy, unless you need a static webserver in addition to your appserver. As always YMMV; choose the stack that fits your needs.

> I would normally advise an architecture where you have a reverse proxy in front of application servers

My understanding too is that as we containerize more applications (whether this be Jails, Zones or Docker) then for shared-IP addresses (e.g. VirtualHosts) we need a reverse proxy to do the mapping to the correct container.

Do you know anything about this, as my research hasn't found anything?

Well, if you're not using ipv6 it can be a bit tricky to map a (public) ip to each application server/container/whatnot. For web services you need a front-end router/proxy that understands http host headers and/or SNI (for ssl). If you have that, you can map stuff in DNS, and still use just port 80/443 on the "user facing" side:

client sends "host: some.service.example.com" -> proxy (alias for some.service.example.com) routes -> internal-ip:port
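A sketch of that routing in nginx (the hostname and the backend's internal address are made up for illustration):

```nginx
server {
    listen 80;
    server_name some.service.example.com;

    location / {
        # Route by Host header to the container's internal address
        proxy_pass http://10.0.0.5:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

One such `server` block per hostname lets many containers share a single public IP and port.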

If you have enough public ips (be that ipv4 or ipv6) the "proxy" can just be a firewall rule that maps/NATs public-ip:80 to service:80 (or whatever). Not that that is necessarily a good idea.

Virtualhosting and proxying are related to containerizing (containing?) services -- but you could for example set up your reverse proxy in one container, map all traffic there, and then after deciphering host-headers and/or SNI route traffic to different back-ends.

It depends on what your needs are. For low traffic services, simply having the container answer on an external ip might be fine.

If you want to do more sophisticated load-balancing some system needs to take care of that, typically between the client and the back-end server (DNS only allows for round-robin distribution, barring tricks like giving different replies depending on who (from where) is asking).

Personally I'm leaning towards moving my "internal" ip-related stuff to ipv6 and only multihoming my outward facing points to ipv4 -- for simplicity. It does mean I actually have to set up firewall rules again, as most "internal" systems are now technically exposed. I guess it depends on how one draws the line -- does the container manage its own SSL/TLS termination (if applicable)?

Thanks, it's interesting to hear about the different options. I've not even thought about IPv6 yet and the options I'd have using that internally. I'm never going to have enough public IPv4 addresses for the number of containers so something has to happen.

Said link on apache vs nginx (among others):


Linode just upgraded me to 2GB RAM for the same price, and apache doesn't seem to be taking all that much RAM. I guess the benefits aren't all that important for my setup. I may as well spend the time improving other things.

Phusion Passenger author here.

Use passenger-memory-stats. It measures the private dirty RSS, which is a more accurate measure of actual memory usage because it takes shared memory into account.

Thanks. I'd forgotten about that. Anyhow, the point remains :)

Production (nginx + passenger)

        17824  1      7.8 MB  0.0 MB   nginx: master process /etc/nginx/sbin/nginx -c /etc/nginx/conf/nginx.conf
	3247   17824  8.3 MB  1.0 MB   nginx: worker process
	3685   17824  8.3 MB  1.0 MB   nginx: worker process
	3696   17824  8.2 MB  0.8 MB   nginx: worker process
	3699   17824  8.2 MB  1.0 MB   nginx: worker process
	### Processes: 5
	### Total private dirty RSS: 3.71 MB

	----- Passenger processes -----
	PID    VMSize    Private  Name
	17806  5.5 MB    0.0 MB   PassengerWatchdog
	17809  36.1 MB   2.1 MB   PassengerHelperAgent
	17814  10.9 MB   0.0 MB   PassengerLoggingAgent
	3385   424.6 MB  48.7 MB  Passenger ClassicRailsApp
	### Processes: 4
	### Total private dirty RSS: 50.79 MB
Development (Apache + mod_passenger, no Rails apps running through it at the moment)

        ---------- Apache processes ----------
	PID    PPID   VMSize    Private  Name
	13388  20050  488.3 MB  18.8 MB  /usr/sbin/httpd -DFOREGROUND
	13389  20050  488.3 MB  18.8 MB  /usr/sbin/httpd -DFOREGROUND
	13390  20050  488.3 MB  18.8 MB  /usr/sbin/httpd -DFOREGROUND
	13391  20050  488.3 MB  18.9 MB  /usr/sbin/httpd -DFOREGROUND
	13392  20050  488.4 MB  18.9 MB  /usr/sbin/httpd -DFOREGROUND
	20050  1      406.8 MB  1.1 MB   /usr/sbin/httpd -DFOREGROUND
	### Processes: 6
	### Total private dirty RSS: 95.45 MB

        ----- Passenger processes -----
	PID    VMSize    Private  Name
	13370  212.0 MB  0.3 MB   PassengerWatchdog
	13373  491.3 MB  0.3 MB   PassengerHelperAgent
	13381  213.1 MB  0.6 MB   PassengerLoggingAgent
	### Processes: 3
	### Total private dirty RSS: 1.18 MB

I would add "less bloated", but that might be a matter of taste.

(my) rule of thumb: use nginx instead of apache unless you need an apache module that has no nginx equivalent. For balance, "if apache is working, don't break it" applies too.

Nginx is much more scalable and uses fewer resources than Apache. Apache uses a thread per socket, whereas nginx is non-blocking and runs with fewer threads.

> 14.04 includes nginx 1.4.6 but I'm sure the phusion guys will package 1.6 soon so I can easily upgrade to that.

Yup. We're working on 14.04 packages too.

I wish there were better authentication options with Nginx. The ngx_http_auth_request_module is limited: First, it assumes that the authentication agent doesn't need to talk to the user. Second, it doesn't cache the authentication.

Perhaps nginx might instead check all requests for a particular signed cookie, verify the signature, and if the signature matches, verify that the cookie isn't too old and then unpack variables from the cookie that the application server might want, such as REMOTE_USER. It seems nginx would then want to freshen up the cookie.

If the cookie doesn't exist, signature doesn't match, or the cookie has expired, then, nginx should proxy the request to a delegate... but, it should return the results of that delegation directly to the user agent. It'd be the job of the delegate to set/sign the cookie with the information needed when authentication succeeds.

In this way, the authentication agent has full control over the process (so it doesn't have to be in nginx), and, heavyweight authentication is cached.

EDIT: Thanks mixedbit -- you're correct that nginx will forward 3xx onto the client. However, I recall patches are needed to support headers; and, without 200 going to the client, how do you support LDAP form authentication? Even so, an extra sub-request to authenticate each request is still heavyweight.

>Perhaps nginx might instead check all requests for a particular signed cookie...

That's called session handling, which is something you want to implement in your web application, not your web server.



Unless you want to use Nginx as an SSL-offloading proxy for a bunch of internal apps that you want to protect from the public but your apps themselves don't use the session in any way? Yes, we can use Lua and effectively write our own, but one of the reasons I've considered Apache again is that there's now a plugin for OAuth 2 + OpenID Connect ;-) https://github.com/pingidentity/mod_auth_openidc

That said, even before this, Apache supported a million different mod_auth_* at http://httpd.apache.org/docs/2.4/mod/ including authentication caching http://httpd.apache.org/docs/2.4/mod/mod_authn_socache.html for modules that don't supply their own cache.

Put Nginx in front of Apache then? Or set up an authentication service and use it from your internal apps?

You'll lose some of the benefits of Nginx at that point, since part of why people like Nginx is how it handles connections, proxying and caching. And the internal apps aren't always mine to maintain, e.g. Apple's Xcode server.

But yeah, there are options in Apache-land, my post was more that nginx could eventually gain those options too :)

In some cases it would be useful to perform session handling like this in nginx. I like the idea of building a reverse proxy which handles authentication and sessions, in front of a backend web page which wasn't designed to handle it. Something like giving access to an old internal intranet without having to change the app.

this reminds me of a government agency that permits access by IP addresses whitelisted in IIS. Can't put fancy caching or load balancing in front because it can't understand X-Forwarded-For, etc.

tl;dr build your authentication into your app, not the web server layer.

There are plenty of upsides to decoupling authentication from your app codebases. For instance, in an enterprise where you have a single sign-on solution implemented as a web server module and a mix of third party and bespoke web apps.

I think no one argues that decoupling isn't the way; in fact, that's exactly what others are saying too, just done properly. Make an authentication (micro)service and call it from your webapp. The proxy should proxy, the auth service should manage (and cache, and keep up to date) credentials, role associations and group memberships, and the web app should serve web pages (based on the business logic coded into it).

I've been using a pubcookie module (http://www.vitki.net/book/page/pubcookie-module-nginx) to do authentication across multiple subdomains (x.example.org, y.example.org, z.example.org). The idea being, you only have to authenticate to one of them in order to access any of them.

However, the module hasn't been updated in forever, and to build it on recent versions of nginx I have to turn certain CFLAGS off (i.e. -Werror).

Does ngx_http_auth_request_module seem like it could do pubcookie's job? Or perhaps, can I approach this problem using ngx_lua?

That signed cookie scheme you are talking about exists in Apache as AuthTkt...

This isn't correct; you can use auth_request when the authentication agent needs to talk to the user. I can't even see how it could be used without such communication.

Are you referring to a modified version of ngx_http_auth_request by davidjb that permits 3xx responses, including cookie headers?


For some reason, I thought this behaviour made it to the upstream, till I re-read the official ngx_http_auth_request documentation and realized it doesn't pass through 3xx or headers other than WWW-Authenticate:

  The ngx_http_auth_request_module module (1.5.4+) implements
  client authorization based on the result of a subrequest.
  If the subrequest returns a 2xx response code, the access
  is allowed. If it returns 401 or 403, the access is denied
  with the corresponding error code. Any other response code
  returned by the subrequest is considered an error. For the
  401 error, the client also receives the "WWW-Authenticate"
  header from the subrequest response.

No, I was thinking about the original auth_request. For cookie-based authentication you need to turn off authorization for the login pages (because every visitor should be allowed to access them) and pass login requests directly to your auth backend. The auth backend can then verify the password, set cookies, etc. auth_request failures (401, 403) can also be configured to show the login page to the user.
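The shape of that setup might look something like this (the location names and backend addresses are assumptions, not from any particular deployment):

```nginx
# Every request is checked via a subrequest to the auth backend
location / {
    auth_request /auth;
    error_page 401 403 = /login;   # show the login page on failure
    proxy_pass http://127.0.0.1:8080;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:9000/is_authorized;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

# Login pages bypass auth_request so the backend can verify
# the password and set the session cookie itself
location /login {
    proxy_pass http://127.0.0.1:9000;
}
```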

Here is a config that does something like this: https://github.com/wrr/wwwhisper/blob/master/nginx/wwwhisper... (deployed here: https://io-mixedbit.rhcloud.com)

(Sorry for the late reply)

Granted, it's more work for you, but this is pretty straightforward to do with access_by_lua (and the associated nginx APIs).
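A minimal sketch of that approach (the cookie name, redirect target, and backend address are assumptions; signature and expiry checks are left as a comment):

```nginx
location / {
    # Hypothetical cookie gate: redirect to /login when no session cookie
    access_by_lua '
        local session = ngx.var.cookie_session
        if session == nil then
            return ngx.redirect("/login")
        end
        -- verify the cookie signature and expiry here
    ';
    proxy_pass http://127.0.0.1:8080;
}
```

Because the Lua runs in the access phase, failures never reach the proxied backend.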

Nice to see we once again have a stable nginx release that supports a version of spdy that browsers currently support!

Wow, finally auth_request is an official module. Thank you!

I wonder what the policy is regarding their Debian repositories now that 1.6 is stable (currently we have 1.4.x installed from that same repo).

They broke some stuff in the past when moving to 1.4 from an older release; it would be nice to have release notes so we could check what could possibly go wrong (if anything).

The changelog is huge, congratulations to the nginx team!

EDIT: nginx twitter account confirmed that there's no expected disruption upgrading from 1.4 to 1.6. Excellent!

I just upgraded one smaller site to 1.6 to test the waters before doing it elsewhere.

So far everything works as expected.

All good, but Nginx is really playing a nasty game now. Basic features such as proxy_cache_purge are available in the commercial version only.

FYI to those oh-so-lucky to be stuck on Windows systems: while nginx.org offers a 32-bit build, you can get a 64-bit build (no extra modules compiled) of the current releases, including 1.7, from http://kevinworthington.com/nginx-for-windows/.

but "In general, you should deploy the NGINX mainline branch at all times." @ http://nginx.com/blog/nginx-1-6-1-7-released/

Waiting for package for Ubuntu 12.04 and crossing my fingers that it comes with SPDY enabled so I don't have to compile it. I know, I am lazy :P.

It hasn't. "This PPA is maintained by volunteers and is not distributed by nginx.org."

you can use http://pilif.me/nginx.tar.bz2 to build a debian package from 1.6.0 that is built from the Ubuntu 12.04 source package, so it's a drop-in replacement.

Most of the best features are in the paid version. I am leaning towards replacing Nginx with HAProxy for the reverse-proxying part, unless they move at least the advanced load-balancing features into the free version.

Have you considered Hipache? ( https://github.com/dotcloud/hipache )

HAProxy is awesome for load balancing IMO.
