
Nginx 1.6.0 stable released
http://nginx.org/#1.6-stable
======
pfg
Slightly OT: Does anyone know if packages for Ubuntu 14.04 are coming soon?
We're using the official (mainline) repository[1] on Ubuntu 12.04, but Trusty
doesn't seem to be supported yet.

I've always preferred the official repository because I didn't want to start
compiling nginx just for stuff like SPDY support.

[1]:
[http://nginx.org/en/linux_packages.html](http://nginx.org/en/linux_packages.html)
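
If/when trusty packages appear, the apt source entry would presumably follow the same pattern as the precise one (the `trusty` dist name here is my assumption, not something nginx.org has published yet):

    # /etc/apt/sources.list.d/nginx.list -- hypothetical trusty entry,
    # mirroring the documented pattern for earlier Ubuntu releases
    deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx
    deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx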

~~~
jallmann
It's a good idea to be comfortable compiling/packaging your infra from source
(including interpreters, libraries, etc), if only for the ability to quickly
apply and deploy emergency patches. To demonstrate the importance of that
capability, look no farther than Heartbleed.

While distros are usually pretty good about updating critical software, they
shouldn't be your only line of defense, except perhaps if you have a SLA or
something.

~~~
lamby
> It's a good idea to be comfortable compiling/packaging your infra from
> source

Would highly recommend becoming comfortable making your _own_ packages (with
any security updates, misc changes, etc.) over compiling and installing your
own stack from source - distro packaging really is mostly your friend.

~~~
MichaelGG
nginx still has a critical bug with SPDY and proxy_cache which causes
connections to be aborted on cache hit. SPDY with proxypass+cache is fairly
unusable without this patch.

[http://trac.nginx.org/nginx/ticket/428](http://trac.nginx.org/nginx/ticket/428)
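
For reference, a minimal config exercising the affected combination (zone name, paths, and upstream address are illustrative):

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m;

    server {
        listen 443 ssl spdy;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;
            # per ticket #428, cache hits served over SPDY can abort
            # the connection without the patch linked above
        }
    }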

------
marshallford
I figure this is as good a place as any to ask my question: where can I find
someone to hire who can write Nginx configs well? I have spent literally 40-ish
hours trying to create an Nginx conf that holds up to my OCD. I have been told
numerous times on IRC that I am too picky and that clean URLs are a challenge
to write. I am a college student and system administration isn't even my job!
Help!

~~~
cheald
I'm pretty sure the best way to accomplish this is to jump into IRC and
declare that it is impossible to be done.

You will have 6 answers in 3 minutes.

~~~
gaoshan
Or say, "Well, this is the only way to do it... nothing else works for nginx."
and then present an inefficient solution. You will get ripped but solutions
will arrive.

~~~
faster
Cunningham's Law to the rescue!

[https://meta.wikimedia.org/wiki/Cunningham's_Law](https://meta.wikimedia.org/wiki/Cunningham's_Law)

------
sandGorgon
Is anybody here using nginx as a REPLACEMENT for varnish? I'm not an expert
in devops, but will be deploying a webapp pretty soon - I was wondering if
anyone is replacing varnish with nginx cache (memcached backed?)

nginx seems to be increasingly irreplaceable (with SSL caching, etc.) - so I
was looking to avoid having to deal with varnish.

I did some Google searches, but was not able to find anything - including
nginx configs, etc. Nginx Plus claims to be an accelerator, but again there
isn't a lot of info around that.

~~~
ashray
I've been using nginx+memcached for about 2 years on a high-traffic site in
production. It's been pretty great and runs without a hiccup. However, bear in
mind that Varnish is far more capable, as nginx's memcached integration is
fairly simplistic. You'll have to manage all your keys in the application
layer, as all nginx can do is map a given request to a given key and fail over
if it's not found. Varnish ACLs allow much finer control.

Also, I've come across benchmarks that say Varnish is faster. I just don't
want to deal with a complex setup for something that gets the job done. (Job =
lower the load on the app server)
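
For anyone curious, the pattern is roughly this (addresses and key scheme are illustrative; your app has to populate memcached under the same keys):

    location / {
        set            $memcached_key "$uri";
        memcached_pass 127.0.0.1:11211;
        # on a miss (or memcached being down), fall through to the app
        error_page 404 502 504 = @app;
    }

    location @app {
        proxy_pass http://127.0.0.1:8080;
    }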

------
vbartathn
There's also a survey from the Nginx team:
[http://mailman.nginx.org/pipermail/nginx/2014-April/043282.h...](http://mailman.nginx.org/pipermail/nginx/2014-April/043282.html)

Your opinion is needed for a great future of nginx!

------
DiabloD3
The big feature, imo, is that it finally implements a newer version of SPDY
(as Chrome and Firefox are dropping support for the version the 1.4.x branch
implements).

------
pedrocr
I'm currently running apache 2.2.22 on my Ubuntu 12.04 servers. It works fine.
I'll be moving them to 14.04 and thus getting apache 2.4.7. I mostly use it
for mod_passenger webapps and static sites.

14.04 includes nginx 1.4.6 but I'm sure the phusion guys will package 1.6 soon
so I can easily upgrade to that. Is there any killer feature in nginx that I'm
missing, staying with apache 2.4?

~~~
cheald
Generally speaking, nginx is lighter/faster/less flexible (though no less
powerful). It really shines on low-RAM VPSes where the RAM eaten up by a big
list of httpd processes really adds up.

For example, here are numbers from Apache+mod_passenger on my dev box:

    
    
    	                         VSZ    RSS
    	root     20050  0.0  0.1 416524 21020 ?        Ss   Apr18   0:13 /usr/sbin/httpd
    	root     13370  0.0  0.0 217068  1984 ?        Ssl  Apr21   0:00  \_ PassengerWatchdog
    	root     13373  0.0  0.0 503104  2324 ?        Sl   Apr21   0:04  |   \_ PassengerHelperAgent
    	nobody   13381  0.0  0.0 218208  3508 ?        Sl   Apr21   0:00  |   \_ PassengerLoggingAgent
    	apache   13388  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
    	apache   13389  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
    	apache   13390  0.0  0.2 500060 33888 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
    	apache   13391  0.0  0.2 500060 34140 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
    	apache   13392  0.0  0.2 500132 33924 ?        S    Apr21   0:00  \_ /usr/sbin/httpd
    

And those same numbers on one of my production Linode instances, running nginx
+ passenger:

    
    
    	                           VSZ   RSS
    	root     17824  0.0  0.0   7988   328 ?        Ss   Apr10   0:00 nginx: master process
    	nobody   31676  0.0  0.5   8732  3248 ?        S    Apr23   0:08  \_ nginx: worker process
    	nobody    9103  0.0  0.5   8684  3288 ?        S    Apr23   0:03  \_ nginx: worker process
    	nobody    9106  0.0  0.5   8876  3416 ?        S    Apr23   0:04  \_ nginx: worker process
    	nobody   22077  0.0  0.4   8400  3004 ?        S    01:23   0:02  \_ nginx: worker process
    

(yes, I know that ps auxf isn't the best measure of memory usage, but it
ballparks to make the point)

~~~
neverminder
Nginx is also non-blocking which is the main difference from Apache. I don't
agree Nginx is less flexible, from my own experience it's quite the other way
around - try configuring Apache as a reverse proxy, you'll see how "flexible"
it really is.
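
For comparison, a basic reverse proxy in nginx is just a handful of lines (backend address is illustrative):

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass       http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }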

~~~
e12e
Having managed a fairly complex apache based web site (lots of rewriting to
maintain various legacy url schemes, a few cgi bin apps -- lots of cruft) -- I
do think Apache is more _flexible_ than nginx. Traffic server is probably more
flexible still. On the other hand, you could say if you take a routing
problem, and you attempt to fix it via mod_rewrite -- you now have (at least)
two problems! ;-)

There was a fairly recent comparison between nginx and apache2.2/2.4 (and
uwsgi and gunicorn, I believe) driven by jmeter for testing that showed apache
was a little lower on throughput -- but more consistent on latency
(unfortunately I can't seem to find the link again). So while I think it is
generally good advice to "just use nginx", I wouldn't write off apache based
on how 1.3 used to behave compared to old versions of nginx.

I would normally advise an architecture where you have a reverse proxy in
front of application servers (even if that means php with fastcgi) if you can,
and when it makes sense. Possibly with ssl termination and/or caching
(varnish) in front of that. I'm not sure that using nginx is actually any
better than, say, HAproxy -- unless you need a static webserver _in addition_
to your appserver. As always YMMV -- choose the stack that fits your needs.

~~~
porker
> I would normally advise an architecture where you have a reverse proxy in
> front of application servers

My understanding too is that as we containerize more applications (whether
that be Jails, Zones or Docker), for shared-IP addresses (e.g. VirtualHosts)
we need a reverse proxy to do the mapping to the correct container.

Do you know anything about this, as my research hasn't found anything?

~~~
e12e
Well, if you're not using ipv6 it can be a bit tricky to map a (public) ip to
each application server/container/whatnot. For web services you need a front-
end router/proxy that understands http host headers and/or SNI (for ssl). If
you have that, you can map stuff in DNS, and still use just port 80/443 on the
"user facing" side:

client sends "host: some.service.example.com" -> proxy (alias for
some.service.example.com) routes -> internal-ip:port
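
In nginx terms that routing is just name-based virtual hosts proxying to internal addresses (hostnames and internal IPs below are made up):

    server {
        listen 80;
        server_name some.service.example.com;
        location / { proxy_pass http://10.0.0.11:8080; }
    }

    server {
        listen 80;
        server_name other.service.example.com;
        location / { proxy_pass http://10.0.0.12:8080; }
    }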

If you have enough public ips (be that ipv4 or ipv6) the "proxy" can just be a
firewall rule that maps/NATs public-ip:80 to service:80 (or whatever). Not
that that is necessarily a good idea.

Virtualhosting and proxying are related to containerizing (containing?)
services -- but you could for example set up your reverse proxy in one
container, map all traffic there, and then after deciphering host-headers
and/or SNI route traffic to different back-ends.

It depends on what your needs are. For low traffic services, simply having the
container answer on an external ip might be fine.

If you want to do more sophisticated load-balancing _some_ system needs to
take care of that, typically between the client and the back-end server (DNS
only allows for round-robin distribution, barring tricks like giving different
replies depending on who (from where) is asking).
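
If nginx is that system, the balancing lives in an upstream block (backends are illustrative; least_conn is one of the methods available in the open-source version):

    upstream app_backends {
        least_conn;                   # default is round-robin
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
        server 10.0.0.13:8080 backup; # only used if the others fail
    }

    server {
        listen 80;
        location / { proxy_pass http://app_backends; }
    }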

Personally I'm leaning towards moving my "internal" ip-related stuff to ipv6
and only multihoming my outward facing points to ipv4 -- for simplicity. It
does mean I actually have to set up firewall rules again, as most "internal"
systems are now technically exposed. I guess it depends on how one draws the
line -- does the container manage its own SSL/TLS termination (if applicable)?

~~~
porker
Thanks, it's interesting to hear about the different options. I've not even
thought about IPv6 yet and the options I'd have using that internally. I'm
never going to have enough public IPv4 addresses for the number of containers
so something has to happen.

------
chiachun
Supplement: [http://nginx.org/en/CHANGES-1.6](http://nginx.org/en/CHANGES-1.6)

------
clarkevans
I wish there were better authentication options with Nginx. The
ngx_http_auth_request_module is limited: First, it assumes that the
authentication agent doesn't need to talk to the user. Second, it doesn't
cache the authentication.

Perhaps nginx might instead check all requests for a particular signed cookie,
verify the signature, if the signature matches, verify that the cookie isn't
too old, and then unpack variables from the cookie that the application server
might want, such as REMOTE_USER. It seems nginx would then want to freshen-up
the cookie.

If the cookie doesn't exist, signature doesn't match, or the cookie has
expired, then, nginx should proxy the request to a delegate... but, it should
return the results of that delegation directly to the user agent. It'd be the
job of the delegate to set/sign the cookie with the information needed when
authentication succeeds.

In this way, the authentication agent has full control over the process (so it
doesn't have to be in nginx), and, heavyweight authentication is cached.
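
The delegation half can be roughly approximated today with auth_request plus error_page (all names are illustrative; the cookie signing/verification would still live in the external agent, and this still costs a subrequest per request):

    location /protected/ {
        auth_request     /auth;
        auth_request_set $remote_user $upstream_http_x_remote_user;
        proxy_set_header REMOTE_USER $remote_user;
        proxy_pass       http://127.0.0.1:8080;
        # unauthenticated requests go to the delegate instead
        error_page 401 = @login;
    }

    location = /auth {
        internal;
        proxy_pass              http://127.0.0.1:9000/check;
        proxy_pass_request_body off;
        proxy_set_header        Content-Length "";
    }

    location @login {
        # the delegate talks to the user and sets/signs the cookie
        proxy_pass http://127.0.0.1:9000;
    }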

EDIT: Thanks mixedbit -- you're correct that nginx will forward 3xx onto the
client. However, I recall patches are needed to support headers; and, without
200 going to the client, how do you support LDAP form authentication? Even so,
an extra sub-request to authenticate each request is still heavyweight.

~~~
tetrep
>Perhaps nginx might instead check all requests for a particular signed
cookie...

That's called session handling, which is something you want to implement in
your web application, not your web server.

[http://en.wikipedia.org/wiki/HTTP#HTTP_session_state](http://en.wikipedia.org/wiki/HTTP#HTTP_session_state)

[http://en.wikipedia.org/wiki/Stateless_protocol](http://en.wikipedia.org/wiki/Stateless_protocol)

~~~
lstamour
Unless you want to use Nginx as an SSL-offloading proxy for a bunch of
internal apps that you want to protect from the public but your apps
themselves don't use the session in any way? Yes, we can use Lua and
effectively write our own, but one of the reasons I've considered Apache again
is that there's now a plugin for OAuth 2 + OpenID Connect ;-)
[https://github.com/pingidentity/mod_auth_openidc](https://github.com/pingidentity/mod_auth_openidc)

That said, even before this, Apache supported a million different mod_auth_*
at
[http://httpd.apache.org/docs/2.4/mod/](http://httpd.apache.org/docs/2.4/mod/)
including authentication caching
[http://httpd.apache.org/docs/2.4/mod/mod_authn_socache.html](http://httpd.apache.org/docs/2.4/mod/mod_authn_socache.html)
for modules that don't supply their own cache.

~~~
pas
Put Nginx in front of Apache then? Or set up an authentication service and use
it from your internal apps?

~~~
lstamour
You'll lose some of the benefits of Nginx at that point, since part of why
people like Nginx is how it handles connections, proxying and caching. And the
internal apps aren't always mine to maintain, e.g. Apple's Xcode server.

But yeah, there are options in Apache-land, my post was more that nginx could
eventually gain those options too :)

------
wbond
Nice to see we once again have a stable nginx release that supports a version
of spdy that browsers currently support!

------
mixedbit
Wow, finally auth_request is an official module. Thank you!

------
reidrac
I wonder what the policy is regarding their Debian repositories now that 1.6
is stable (currently we have 1.4.x installed from that same repo).

They broke some stuff in the past moving to 1.4 from an older release; it
would be nice to have release notes so we can check what can possibly go
wrong (if anything).

The changelog is huge, congratulations to the nginx team!

EDIT: nginx twitter account confirmed that there's no expected disruption
upgrading from 1.4 to 1.6. Excellent!

~~~
gog
I just upgraded one smaller site to 1.6 to test the waters before doing it
elsewhere.

So far everything works as expected.

------
kolev
All good, but Nginx is really playing a nasty game now. Basic features such as
proxy_cache_purge are available in the commercial version only.
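
The open-source build can at least get purging via the third-party ngx_cache_purge module, compiled in with --add-module (zone name and key below are illustrative, and the key must match your proxy_cache_key):

    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny  all;
        proxy_cache_purge app_cache "$scheme$host$1";
    }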

------
d0ugie
Fyi to those oh-so-lucky to be stuck on Windows systems: while nginx.org
offers a 32-bit build, you can get a 64-bit build (no extra modules compiled)
of the current releases, including 1.7, from
[http://kevinworthington.com/nginx-for-windows/](http://kevinworthington.com/nginx-for-windows/).

------
atom7
but "In general, you should deploy the NGINX mainline branch at all times." @
[http://nginx.com/blog/nginx-1-6-1-7-released/](http://nginx.com/blog/nginx-1-6-1-7-released/)

------
pedrogk
_Waiting for package for Ubuntu 12.04 and crossing my fingers that it comes
with SPDY enabled so I don't have to compile it. I know, I am lazy :P._

~~~
Afforess
Nginx has an Ubuntu PPA.

[http://wiki.nginx.org/Install#Ubuntu_PPA](http://wiki.nginx.org/Install#Ubuntu_PPA)

~~~
atom7
It hasn't. "This PPA is maintained by volunteers and is not distributed by
nginx.org."

------
leccine
Most of the best features are in the paid version. I am leaning towards
replacing Nginx with Haproxy for the reverse proxying part, unless they move
at least the advanced load-balancing features to the free version.

~~~
pas
Have you considered Hipache? (
[https://github.com/dotcloud/hipache](https://github.com/dotcloud/hipache) )

