
Why Use Nginx? - seclabor
http://wiki.nginx.org/WhyUseIt
======
simonsarris
Every single project, open source or not, needs to have a "Why Use It" page.

(Now this is more of a "Testimonials" page, but for server tech it will do.)

> Apache is like Microsoft Word, it has a million options but you only need
> six. Nginx does those six things, and it does five of them 50 times faster
> than Apache.

This is exactly how I felt. I'm a pea-brained dolt in the server sphere, and
when I was remaking my server I went with nginx over apache on the advice of a
friend because "the config file is easier to understand."

He was right. And instead of being frustrated and bumbling through apache
until it worked the way I wanted, I was able to configure nginx (for the first
time!) in mere minutes. With nginx I was able to move on to the "get
frustrated by Wordpress" phase of server setup much sooner!

~~~
rlpb
> Every single project, open source or not, needs to have a "Why Use It" page.

Not only that, but a mention of all the major competitors with an honest
factual comparison with them is really handy as well. In the FLOSS world,
naming competitors won't necessarily cause a major problem and does a huge
amount for trust.

~~~
byamit
Yeah, but you can't ask Product X to tell you about Product Y, because they
honestly don't know Product Y well enough to speak to it.

~~~
stedaniels
You have to ask yourself, if you're building a product and you know nothing
about its potential competitors, what are you really doing? Research is key,
in my opinion. It avoids cases of "Hey guys, look what I invented! I call it
the wheel.."

------
pifflesnort
One thing I still don't understand is why one would use a proxy server at all?

Why not just have your load balancers (which can operate cheaply at the TCP
layer) throw traffic directly at your application servers?

If you need caching, that's cheap to do, too. If you need static file serving,
can't you add another load balancer end-point that points directly at static
content servers, or make your application servers faster?

Is nginx primarily useful for slow application server runtimes that can't keep
up with what nginx can do?

~~~
benhoyt
It's a good question. We (a large-scale website serving 250,000 pages/day) use
Python+CherryPy for our "application server", but that's sitting behind an
nginx reverse proxy.

The main reason is that nginx is much better and faster at handling certain
things than Python:

* handling HTTPS and serving plain old HTTP to the application server so Python doesn't have to worry about it

* doing the gzipping of content before it goes out

* routing requests to different places/ports based on various elements matched in the URL or HTTP headers

* virtual hosts, i.e., "Host" header matching and routing things to the right place based on that

* various request sanitization, like setting client_max_body_size, ignore_invalid_headers, timeouts, etc.

Historically we've also had multiple types of application servers, some Python
and some C++, and nginx routes requests to the right app server (based mainly
on URL prefix).
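
A minimal sketch of what such a front-end config might look like (server
names, ports, and paths here are all invented for illustration):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                 # virtual hosts via "Host" matching

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    gzip on;                                 # compress responses on the way out
    gzip_types application/json text/css;

    client_max_body_size 10m;                # basic request sanitization
    ignore_invalid_headers on;

    location /api/ {
        proxy_pass http://127.0.0.1:8080;    # e.g. the Python app server, plain HTTP
    }
    location /legacy/ {
        proxy_pass http://127.0.0.1:8081;    # e.g. a C++ app server
    }
}
```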

We also use nginx to do GeoIP with the GeoIP nginx module (though arguably
that would be just as simple in Python).

Edit: Note that we don't use it because our "application server is slow" (it's
not). Also, I know some people use nginx to serve static content, because it's
usually much faster/better than say Python at doing that -- we serve static
content via Amazon S3 and a CDN, so that's a non-issue for us.

~~~
aleem
I'm curious to know - did you consider Varnish? It's much faster as a reverse
proxy caching server.

~~~
papsosouid
No, it isn't. It claims to be faster than squid, not nginx. But it doesn't
even meet that claim.

------
amalag
A big gotcha with nginx is if you have an app server behind it and you
foolishly have a long running web request which runs longer than the proxy
timeout, nginx will retry the original web request. Better make sure
everything is idempotent and don't have long running web processes. It is bad
design, but we ran into this. Code that used to run in a few seconds started
taking longer and then ran infinitely long without an error because it kept
getting resent to the server.
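
For reference, nginx's retry behavior is configurable; a sketch of the usual
mitigations (the directive values are illustrative, and newer nginx versions
already refuse to retry non-idempotent methods such as POST by default):

```nginx
location / {
    proxy_pass http://app_backend;

    # Raise the timeout above the longest legitimate request...
    proxy_read_timeout 300s;

    # ...and/or stop nginx from replaying a timed-out request
    # against another upstream server.
    proxy_next_upstream off;
}
```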

~~~
chubot
Oh really, does it do it for POSTs? It should retry for GET, but not POSTs
(which is exactly why there is a difference).

If you have non-idempotent GETs in your app, then that's the app's fault. If
Nginx is retrying POSTs then it's Nginx's fault.

~~~
CrLf
That's a pretty broad statement to make... You are assuming that the only
apps out there are "your apps".

Actually, many times "your app" is somebody else's app that you bought or an
app that somebody else develops and you don't have any control over it.
Sometimes those apps are just bad (well, most enterprise apps are) and your
only hope is that the infrastructure that you do happen to control doesn't
make it worse.

I've never used nginx, so I don't know if the parent has a point or not.
However, having a reverse proxy retrying _any_ requests to backends by default
seems very bad form to me. Do you want your routers resending packets? It's
the same thing.

~~~
derefr
> However, having a reverse proxy retrying _any_ requests to backends by
> default seems very bad form to me. Do you want your routers resending
> packets? It's the same thing.

No: HTTP GET is explicitly idempotent and cacheable. As an HTTP client, you are
supposed to be able to send the same GET all day, and it's up to the server to
not screw that up.

~~~
CrLf
I'm not saying that it's a standards violation or anything like that. I'm not
even arguing against GET retries, but only against them being default.

Bad behavior from apps you don't control is a fact of life. Ignoring it
doesn't make it go away, and behaving like it doesn't exist can make it worse.

~~~
derefr
I would argue that _relying_ on the behavior allowed by a standard is the only
way to make it actually _be_ allowed. Otherwise, people code to your
unwritten, stricter standard, relying on nobody else ever sending them
something that _is_ allowed. Then, when something allowed by the standard
_does_ happen, they blow up.

But once you introduce other software in the ecosystem that is guaranteed to
send you these sorts of things, you'll damn well better release a new version
of your package that works with them.

------
marijn
Though I do use nginx and am very happy with it, I am somewhat put off by the
fact that I reported a bug five months ago (including a working patch for the
problem) and no one seems to have as much as looked at it [1]. (Granted, this
is in a module, not in the core server, but in general the community process
for the project seems messy and vague.)

[1]: <http://trac.nginx.org/nginx/ticket/242>

~~~
javajosh
Yes, I agree - nginx's community feels a little strange, at least to someone
who hasn't tried to really get involved. Apache, by contrast, is a huge, loud,
unruly crowd. Whereas the nginx author seems to be one of those quiet, aloof,
l337 h4x0r types. Which is consistent with software that is a) really fast and
b) not responsive to change requests. :)

------
apinstein
I _think_ nginx is likely faster and more stable than apache, but I have yet
to see anything close to a trustworthy benchmark.

I come from the php world, and people always say how much lower-memory
nginx+php-fpm is than apache+mod_php. Well, no doubt! If you understand how the
architecture actually works, it's clear this isn't a fair comparison: mod_php
means PHP is fully loaded even when serving statics, and a smaller pool of
php-fpm processes will take less memory and also be faster (due to less
context-switching) than the larger number of mod_php processes.

However the real comparison should be between nginx+php-fpm and
apache+mpm_event+php-fpm. Nginx is an evented server, so at least try to
compare apples-to-apples. I've seen very few comparisons of nginx with
mpm_event.

Also, apache's default tunings are much more geared towards modest server
usage whereas nginx's seem more geared towards high scalability. An argument
could be made that apache should have "better" defaults, but since at scale
you need to start tweaking your OS/rlimit/etc to prevent bad things from
happening you can see why apache might stick with more modest tunings ootb.

Our app has a lot of apache custom config and so I was hesitant to try to
switch to nginx due to the risk of getting things wrong porting the configs.
We did move from apache/mod_php to apache+mpm_worker and php-fpm and we've
been able to improve throughput (especially on statics) at a far lower memory
footprint. Key to success in lowering the memory footprint was dropping
ThreadStackSize (from 8M default to 1M). What a difference!

Other than that, the competition is good for everyone. I am sure nginx pushed
apache to work on mpm_event much harder.

~~~
FooBarWidget
You may want to take a look at
<http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-faster-than-apache-w-mod_php/>.
The answer might surprise you.

As for ThreadStackSize: it impacts virtual memory but not actual memory usage.
Actual memory usage stays the same. You should never use the 'vm size' as a
good measurement of memory usage. Unfortunately memory management on modern
OSes is complicated and people don't understand the numbers, so they
arbitrarily pick a column in 'ps' and conclude that X is bloating memory... :(

~~~
apinstein
That link about AllowOverride is true. The reason I didn't mention it is that
one of the benchmarks I saw did turn off AllowOverride so I figured at least
that part was fair :) But it is a very good point. I think I'll do a talk soon
about tuning apache and make sure that's in there. Optimizing with strace is
always really fun. I used it pretty heavily when I was researching php/apc and
require/require_once. It's amazing how much faster you can get if you
implement things to not talk to the disk 20x on every request :)

I definitely know VIRT is complicated. I couldn't find any kind of clarity on
it. If you know of a good guide I'd love to see it.

That said, virtual memory still likely affects some kernel decision-making.
For instance the oom-killer was kicking in on a daily basis until I made these
changes. With mpm_worker using 250+ threads, I was able to reduce the
"committed" by several gigs. The system overall seems more stable and the oom-
killer hasn't reared its head in days.

I can imagine that the stack is treated differently since it'd be a terrible
idea to page out stack. I couldn't find proof, but if I were a kernel I
wouldn't page out stack :)

------
jstalin
I dropped Apache in favor of Nginx about two years ago. Haven't looked back
since then. It's so much easier to configure and it uses far less memory.

~~~
killerpopiller
I am using lighty for those reasons. Can anyone compare lighttpd with nginx
performance on small servers like Raspberry Pis?

~~~
FooBarWidget
I dropped Lighttpd years ago because on some occasions it used 100% CPU for no
reason. It didn't make the server crash, and Lighttpd itself appeared to run
fine otherwise, but still... the CPU usage was there for no reason. This was
never solved, and development also seemed stalled. So I switched away from
Lighttpd to Nginx. Nginx just kept working and working, never broke once.

------
velodrome
One major drawback is that you can't control output buffering and gzip with
php-fpm (the LAMP stack equivalent). You cannot flush the head early; the user
has to wait until the whole page loads before rendering.

~~~
mk3
The latest version got much better at handling streams, if I remember correctly.

~~~
velodrome
True. Correct me if I am wrong but I am not sure if streams work for general
text pages (like blogs, eCommerce, etc). It is better suited for chat-style
applications (comet).

------
sergiotapia
I'm a newbie - if I install Passenger to be able to run Rails apps on
Nginx, are these benefits lost?

Better yet: What exactly is Passenger? (Explain it like I'm five)

Their site says, "Phusion Passenger is an application server for Ruby (Rack)
and Python (WSGI) apps." - so it's something that runs below Nginx and runs
Ruby code?

Or is it an extension for Nginx/Apache?

Thanks!

~~~
FooBarWidget
Phusion Passenger extends Nginx and turns it into an application server. An
application server is a program that runs application code, so in this case it
allows Nginx to run Ruby/Python code. Likewise, the Apache version of
Phusion Passenger turns Apache into an app server that can run Ruby/Python
code.

The benefits are not lost. Phusion Passenger integrates into Nginx to give you
the benefits of both. For example one of the tasks of Nginx is to buffer HTTP
requests and responses in order to protect apps from slow HTTP connections.
Phusion Passenger fully makes use of this Nginx feature and even extends it.
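
In config terms, the integration looks roughly like this (the paths are
hypothetical; `passenger_enabled` is the directive that hands requests over
to the app):

```nginx
server {
    listen 80;
    server_name app.example.com;
    root /var/www/myapp/public;   # point nginx at the app's "public" directory

    passenger_enabled on;         # Passenger spawns and manages the Ruby processes
}
```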

------
windsurfer
Nginx is great, but before you get down and start using it, make certain that
you'll never, ever use any features it doesn't support. I was bitten by this
when I found out Nginx has no equivalent to Apache's mpm_itk_module.

~~~
MatthewPhillips
What does that mod do out of curiosity?

~~~
uggedal
"mpm-itk allows you to run each of your vhost under a separate uid and gid—in
short, the scripts and configuration files for one vhost no longer have to be
readable for all the other vhosts"

\- <http://mpm-itk.sesse.net/>

~~~
derefr
I don't quite understand why that would be necessary. Nginx has no business
accessing other users' files in the first place.

I mean, I understand why Apache needs to do it: with Apache, you have things
like mod_php running in-process, so it makes sense to restrict Apache, running
one of Bob's scripts, from accessing Alice's files.

But with Nginx, anything with "intelligence" runs out-of-process. What Nginx
expects you to do is to run _it_ as one user, but run each _app server_ (in
PHP terms, each FCGI socket daemon) as the user whose files that server should
access. (Or, better yet, run the app server in an LXC container along with a
bind mount to only the files it needs to access. Very Plan9y.)
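
A sketch of that per-user layout, with invented names (each php-fpm pool would
be configured to run as its vhost's user and listen on its own socket):

```nginx
server {
    server_name bob.example.com;
    root /home/bob/www;
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm-bob.sock;    # pool runs as user "bob"
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

server {
    server_name alice.example.com;
    root /home/alice/www;
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm-alice.sock;  # separate pool, user "alice"
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```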

~~~
FooBarWidget
It may still be a good idea for security reasons. Suppose that an Nginx
process is exploited. If it runs under a certain user ID then the exploit
likely does not travel past that user. You can make the Nginx user ID
different from the actual user ID so that it only has read access.

------
slacka
I don't care whether it's for anti-wrinkle cream or a web server; I find
testimonials about as useful as that ball of lint in my belly button and
about as trustworthy as a used car salesman.

It's true many of the benchmarks out there use flawed methodologies, so let's
try to fix that. When you benchmark something as complex as Apache, people are
going to find faults with your initial run, no matter how careful you are.
This is why you need to be completely transparent with your setup configuration
and should be prepared for a follow-up run with user-suggested settings.

------
cmwelsh
Can someone give me a small comparison between nginx and HAProxy? It seems
like they're starting to overlap a lot. I'm really excited that nginx added
Websocket (including SSL termination) support.

~~~
manoleet
I don't think you can compare them.

------
Refefer
Nginx is the most reliable workhorse I've used in nearly any tech stack.
Doesn't matter how much traffic we throw at it, it just keeps on kicking. It's
very much the 'Redis' of proxy servers.

~~~
thiderman
Given that nginx is almost twice as old as redis, shouldn't that analogy
rather be "redis is the nginx of databases"? ;)

------
strech
Benchmarks:

<http://blog.inetu.net/2013/01/nginx-vs-apache%E2%80%94which-web-server-is-right-for-your-project/>

<http://blog.celingest.com/en/2013/02/25/nginx-vs-apache-in-aws/>

<http://readystate4.com/2012/07/08/nginx-the-non-blocking-model-and-why-apache-sucks/>

------
bpatrianakos
I've used Apache exclusively for the last 3 years until just a few months ago
when I set up another server to host a number of sites I wanted to move off
Apache and on to Nginx. After having used both, and trying real hard here not
to start a religious war, as often happens in these kinds of discussions, I
have to say neither is "better" overall or in general in my experience.

If you're familiar with Apache configuration then you should have no problem
with Nginx because the way both servers structure their config files is very
similar. I prefer Nginx config files however because it feels more like
writing JSON whereas Apache config files are like writing XML, especially in
the area of virtual hosts. That said, neither is better; it's really more about
what you're comfortable with and prefer. Nginx had most of the same
configuration options and the tough part was figuring out what Nginx calls the
corresponding Apache option.

For me there was a barely noticeable performance difference with Nginx being
faster. The caveat here is that in my case I started moving all of my static
sites and sites with "simple" php script type apps over to Nginx and used the
apache server for a very few apps that were running more memory and CPU
intensive apps. The Nginx server was also new and clean while the apache
server had been in use for a great many more things including non-web
applications and managing private git repos for about 20 code bases.

Nginx did use about 25% less memory in my case than apache even while serving
up more sites.

I love being able to host multiple SSL sites on a single IP with no hoops to
jump through with Nginx. On Apache your options are to acquire more IPs or set
up SNI which for me was more hassle than it was worth.

So since we're on HN I'm assuming most people are serving Ruby or Python based
apps with Nginx or using it as a reverse proxy. That's cool and all but
there's still an enormous cross section of the developer community using it to
serve php and as I have a lot of sites that were originally built in php my
Nginx server needed PHP FastCGI. I mentioned earlier that configuring Nginx
was a breeze since most of my apache knowledge transferred over but setting up
fastcgi was not a breeze. It's easy to get it set up and working, but actually
understanding what it's doing, and whether you really do want to configure it
the way whatever online guide shows you, is the tough part. On apache you'd just
install mod_php5, 'a2enmod' it, and then all you need to worry about is your
php.ini file. On Nginx you have the added step of adding a config block to
each server block for php. That's easy enough to get the gist of but then you
start wondering if you've made the right decision after you read those
warnings about improperly setting it up leading to security holes with file
uploads, and then you start wondering what other options you should know
about, whether you should implement them, and so on. Maybe I'm totally
misguided here but with
mod_php you didn't worry about security. You only worried about the security
of your actual code, the server itself (firewalls, ssh, port blocking and all
that), and your .ini's. So that was a downside for me but not insurmountable
by any means.
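
For what it's worth, the file-upload hole alluded to above is usually closed
by refusing to hand nonexistent scripts to PHP; a sketch of the commonly
recommended per-server block (the socket path is invented):

```nginx
location ~ \.php$ {
    try_files $uri =404;   # reject "/upload.jpg/evil.php" style path tricks
    fastcgi_pass unix:/var/run/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```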

It's also far easier to find information on apache than Nginx. Nginx has a ton
of available support and articles and tutorials out there but most of them
cover the same narrow section of topics and are contradictory sometimes. The
Nginx wiki itself even has warnings about getting advice from outside the wiki
in the Pitfalls section. Of course you need to be careful when sourcing
information from the web no matter what the topic but I felt more secure in
searching for Apache information than Nginx information.

I really love Nginx though. It can really take quite a beating without even
batting an eyelash as I've seen. That said, I'll still be using Apache as my
"workhorse" server for some time until I can get more Nginx experience under
my belt. So I'd say take these testimonials for what they are: just
testimonials. True or not, any piece of software worth using can get people to
rave about it. What's important is whether their situations, expectations, and
needs align with your own.

~~~
ceejayoz
> I love being able to host multiple SSL sites on a single IP with no hoops to
> jump through with Nginx. On Apache your options are to acquire more IPs or
> set up SNI which for me was more hassle than it was worth.

What? Neither Apache nor nginx can serve multiple SSL sites off a single IP
without a UCC certificate, SNI, or multiple IPs. SSL requests have their Host
header encrypted, which means the server doesn't know which site is being
requested until it has already had to present a certificate.

~~~
bpatrianakos
My mistake. I was always told Nginx can serve multiple SSL sites with no extra
work required besides the usual configuration you'd change for a single SSL
site.

~~~
chatmasta
So you said you "love being able" to do something that you have not tried
(since you didn't know it was impossible you must not have tried it)? I think
you should evaluate systems based on how they work for your use case, not by
how they might work if you wanted to do something later.

------
justjimmy
I came across Nginx, for the first time today, when I was trying to figure how
to make 'cleaner' URLs for the wiki I'm making. Totally going over my head and
out of my comfort zone…what a coincidence this article pops up on HN the same
day…maybe it's a sign I need to figure this Nginx thing out.

Thanks for the link!
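
For clean wiki URLs, the usual nginx idiom is `try_files`; a sketch, with the
script name and query parameter invented:

```nginx
location / {
    # Serve the file if it exists, otherwise hand the path to the wiki script.
    try_files $uri $uri/ /index.php?page=$uri&$args;
}
```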

------
dhruvbird
Nginx has data structures that scale sub-linearly with the number of requests,
which is desirable. Others usually don't do things this way.

------
foohey
Fake, these guys are corrupt and the benchmarks are rigged :-P

</joke>

Long live Nginx!

------
manoleet
Why not?

------
another_jerk
yawn!

Nginx's SSI capability is pretty bare at the moment. However, it is an
excellent reverse proxy for me.

