
Nginx vs Apache performance - dangoldin
http://blog.webfaction.com/a-little-holiday-present
======
pinkbike
Benchmarks that are not completely anecdotal are really hard to produce. For
starters you need the following...

1\. Don't run the client on the same server. If you do, you have no business
trying to test for high concurrency. Isolate the variables.

2\. Size of file you are serving. Are you close to saturating your connection
between the client and server? Most of the time this is the case.

3\. Concurrency is hard to test because most of the time the client is the
bottleneck in the test. Don't use ApacheBench for anything like this, as its
behavior at high concurrency leaves much to be desired.

4\. A lot of other details need to match for a benchmark to be useful. Are
you using keepalives on both, or on neither? Nginx workers/processes vs.
Apache threads/clients - are you comparing apples to apples? How's your TCP/IP
backlog in a case like this? What kind of I/O model is each running? Are you
using sendfile on both, or only on one?

Nginx is a great server, and probably a better choice for static files, but
data like this is like saying, "the other day I saw some kind of Honda pass
some kind of Nissan". There's no useful information to infer about either.
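To make the comparison even halfway fair, both runs at least need an isolated client machine and identical settings. A sketch of what that looks like - the hostnames, paths, and exact flags here are illustrative, and `wrk` is just one load generator that holds up better than ApacheBench at high concurrency:

```shell
# Run the load generator from a SEPARATE machine, with identical settings
# against each server: same file, same thread count (-t), same number of
# open connections (-c), same duration (-d), same keepalive behavior.
wrk -t4 -c500 -d60s -H "Connection: keep-alive" http://nginx-box/static/test.html
wrk -t4 -c500 -d60s -H "Connection: keep-alive" http://apache-box/static/test.html
```

Even then you'd still have to confirm the backlog, I/O model, and sendfile settings match before the numbers mean anything.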

------
brianm
Nginx shines as a high-volume proxy, but as a straight-up web server or app
server, if either Apache or nginx is your bottleneck, you are probably doing
something wrong.

At its heart nginx is a fork of apache 1.3 with the multi-processing ripped
out in favor of an event loop (and all the copyright statements removed from
headers, but hey, it's cool). The event loop, time and again, has been shown
to truly shine for a high number of low-activity connections. A blocking I/O
model with threads or processes, by contrast, has been shown just as
consistently to cut per-request latency relative to an event loop. On a
lightly loaded system the difference is indistinguishable. Under load, most
event loops choose to slow down, most blocking models choose to shed load.

A few short years ago the benefits of an event loop over blocking I/O were
much more dramatic -- the level of parallelism achievable in hardware has
gone way up (hey, look, Erlang!) and is accelerating. Paul Tyma did some
great experimentation with this a while back, <http://is.gd/nJ6Z> .
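The "high number of low-activity connections" case is easy to simulate. A toy sketch in Python (simulated idle time standing in for slow sockets, not a real server): a single event loop juggles a thousand mostly-idle "clients" concurrently, in roughly the time one of them takes, where a blocking model would need a thousand threads or processes.

```python
import asyncio
import time

async def idle_client(delay: float) -> int:
    # Stand-in for a low-activity connection: mostly waiting, no CPU work.
    await asyncio.sleep(delay)
    return 1

async def serve_all(n: int, delay: float) -> int:
    # One OS thread, one event loop, n concurrent waits.
    results = await asyncio.gather(*(idle_client(delay) for _ in range(n)))
    return sum(results)

start = time.perf_counter()
handled = asyncio.run(serve_all(1000, 0.1))
elapsed = time.perf_counter() - start

print(handled)        # 1000 connections "served"
print(elapsed < 2.0)  # True: far less than the 100s of serial time
```

The flip side, per the comment above, is that once each connection does real work per request, the per-request latency story favors the blocking model.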

------
sandGorgon
One of the things about nginx is the lack of an organised community - e.g.
there is not even an official repository
([http://marc.info/?l=nginx&m=122153991029203&w=2](http://marc.info/?l=nginx&m=122153991029203&w=2));
there are just mirrors from people who maintain a patch-based tree
(<http://mdounin.ru/hg/nginx-vendor-current>). There is no bug tracker (!!),
just a wiki page (<http://wiki.nginx.org//NginxBugs>) and, as someone mentioned
(<http://www.wikivs.com/wiki/Lighttpd_vs_nginx>), very little activity on IRC.

It comes down to the same issue as Linus Torvalds, Ingo Molnar and Con
Kolivas: do you have a clear roadmap for where the architecture is going, or
just a very cool technology that has a lot of support and is no doubt popular?

I am in no way commenting on the technology behind nginx, but as an architect
making a deployment decision that will be hell to change later, I would be
very concerned.

~~~
grandalf
I think much of this is due to the language barrier, and also to the ease of
use (and ease of writing nginx modules)...

~~~
sandGorgon
The lack of a bug tracker and an SCM shouldn't be down to a language barrier.

------
saurabh
I wish the ModWsgi module for Nginx was maintained. The last commit was 12
months ago. <http://wiki.nginx.org//NginxNgxWSGIModule>

------
jwilliams
Memory is a particularly big deal if you're on a small Slicehost/Linode
instance - a standard Apache setup without tweaking can take up half your RAM.

~~~
patio11
Yep. I lost my slice to thrashing twice before I discovered that a standard
PHP forum under trivial load (6 simultaneous users plus Googlebot) can easily
balloon under the default settings. Nginx has much better "works right out of
the box" properties for folks who are not httpd.conf gurus.
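For the non-gurus, the usual first-aid on a 256MB slice is capping Apache's prefork MPM well below the distribution defaults. A sketch - the numbers are illustrative, not a recommendation, and the right caps depend on how big your PHP children actually get:

```apache
# apache2.conf (prefork MPM) -- illustrative caps for a small VPS.
# Each mod_php child can easily grow to tens of MB, so a default
# MaxClients of 150 can demand more RAM than the whole slice has.
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients           10
    MaxRequestsPerChild 500
</IfModule>
```

Nginx sidesteps the whole exercise because its workers don't multiply per connection.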

------
snprbob86
My understanding is that Nginx is the server of choice for static content and
Lighttpd for dynamic content (particularly FastCGI). Is that still the latest
and greatest advice?

I've found Lighttpd way easier to configure than Apache and am having it serve
my static content simply because we don't need to worry about every little bit
of performance just yet.

~~~
pwk
Depends on the app platform. In the rails world Phusion Passenger (aka
mod_rails or mod_rack) in combination with Apache is making inroads for
serving up dynamic content. Despite the bigger footprint and other downsides
of Apache, I'm hearing more and more that stability and ease of configuration
of Passenger are a win. I'm only running it on a low usage backend app for the
moment, but it was definitely easy to set up.
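For what "easy to set up" means here: after installing the Passenger gem and adding the LoadModule lines its installer prints, the vhost is roughly this (paths and hostname are hypothetical):

```apache
# Hypothetical vhost -- Passenger detects the Rails/Rack app from the
# DocumentRoot pointing at the app's public/ directory.
<VirtualHost *:80>
    ServerName   app.example.com
    DocumentRoot /var/www/myapp/public
    RailsEnv     production
</VirtualHost>
```

No mongrel cluster, no upstream port bookkeeping - which is most of the ease-of-configuration argument.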

~~~
gamache
Passenger is a big win. I once ran nginx + mongrel_cluster, and while it
probably had less overhead than the equivalent Apache + Passenger setup, it
was more than cancelled out by the effort necessary to babysit the mongrel
processes. This was also more of an issue with MRI-era Ruby and its associated
memory leaks.

With Passenger, I set it and forget it. Time is money.

~~~
zealog
I agree.

I've been using Nginx/Mongrel for most of my large deployed apps, but am
working to get things moved to Passenger (and Apache, natch), simply for the
ability to do a graceful restart on most code changes. For large apps, the
ever-expanding mongrel footprint and slow restarts are becoming too much to
bear.

So for me, Nginx has a lot to offer over Apache, but for the rails
deployments I've mostly been working on, it's no longer enough.
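The graceful restart in question is Passenger's restart-file convention: touching a marker file tells Passenger to reload the app on the next request instead of bouncing the whole server. From the application root (the `tmp/` layout is the Rails convention):

```shell
# From the application root: Passenger watches tmp/restart.txt and
# reloads the app on the next request after its mtime changes.
mkdir -p tmp            # ensure the directory exists
touch tmp/restart.txt   # trigger the graceful reload
```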

------
mtalantikite
All of the sites hosted at engineyard.com use Nginx (github is one of them,
for example). Works great.

------
tlrobinson
How does Nginx compare to lighttpd?

~~~
jedberg
We have used both at reddit. Performance-wise they are comparable for us, but
nginx was a lot easier to configure, and lighttpd had a nasty bug that made us
switch away (for the life of me, though, I can't remember what the bug was).

~~~
teej
> lighttpd had a nasty bug that made us switch away

Lighttpd has a bug in mod_proxy that makes it unusable under load.

~~~
jedberg
Yeah, that was it! The load balancing algorithm completely broke down under
load.

------
stanley
What is the optimal solution for PHP-based sites? Is Apache w/ mod_php faster
than Nginx with FastCGI?

~~~
handelaar
Anecdotes are not data, but if you're in the market for an anecdote anyway...

A thousand times no. Nginx+php-fastcgi is _screamingly_ fast by comparison,
while allowing me to free up about 70% of the memory previously in use, and
get huge gains from loading the PHP code into RAM with APC.

I look after one managed server which chucks out tens of millions of requests
per day despite only having half a gig of RAM in it. Before, running apache2,
it had a load average of about 6.0. Now? 0.2.
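The setup being described is roughly this shape on the nginx side - socket path and params are illustrative, and APC itself is enabled in php.ini rather than here:

```nginx
# Illustrative nginx -> PHP FastCGI wiring; the socket path is hypothetical
# and depends on how the PHP FastCGI processes were spawned.
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_pass   unix:/var/run/php-fastcgi.sock;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

The memory win comes from running a fixed pool of PHP processes instead of one heavyweight Apache child per connection.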

------
aliasaria
For a site where Nginx doesn't make sense, has anyone used memory caching on
Apache (to store static files in memory) with success? I am curious as to how
this would perform in comparison.

e.g. modmemcachecache

<http://code.google.com/p/modmemcachecache/>

------
ilaksh
Does anyone (or anything) package php-fpm (or whatever you are supposed to
use) together with nginx?

~~~
pinkbike
apache w/mod_php has the best latency compared to any fastcgi setup. When it
comes to high concurrency, latency and time-to-finish are your biggest
issues. Slow clients are another killer (slow clients are users on a slow
connection who take an order of magnitude longer or more to download the page
data than it took to generate it). If your application is fast (less than
20ms page generation), your best bet is the following setup...

nginx or varnish as a reverse-proxy front end (depending on your load you can
turn keepalives on here). This front end isolates your www/php/db from slow
clients, making sure that your request gets processed fast and resources are
released, while a light proxy process handles delivery of the data. On the
back end use apache/mod_php with a limit of only 50-100 clients.
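A minimal sketch of that front end in nginx terms (port and timeout values are illustrative): keepalives face the slow clients, while each proxied request to the capped Apache backend is short-lived.

```nginx
# Illustrative front end: nginx buffers the response and drip-feeds slow
# clients, so the apache/mod_php backend (capped at 50-100 clients) is
# freed as soon as generation finishes.
server {
    listen 80;
    keepalive_timeout 65;                 # keepalives toward slow clients

    location / {
        proxy_pass      http://127.0.0.1:8080;  # apache/mod_php backend
        proxy_buffering on;               # take the response off apache's hands
    }
}
```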

------
artificer
Interesting. Another nice choice for serving static content is rumored to be
thttpd. It lacks any kind of FastCGI support, though (that's in the
proprietary, premium version). Has anyone had any experience of thttpd versus
nginx?

------
furburger
apache is perfectly capable of saturating the outbound connection on static
content on any reasonable setup. you may save a little on memory with nginx
but you aren't saving on speed (how can you deliver more content than the
outbound connection can carry?). this is why the in-kernel http servers went
nowhere. in any case most people use CDNs these days for static content.

note that by not using apache you give up a lot of security hardening, add-on
modules, and mindshare that nginx does not have.

~~~
sunkencity
For a very tight virtual-server config I can see the use for nginx, but for a
normal server running just apache, memory is not going to be an issue. There
are probably other limits that will affect performance, such as the
connection, just as you say.

For example, I tried running a server with apache + passenger on an ec2 node
and bumped MaxClients up to 1024. I evened out at around 400 simultaneous
connections. Maybe it was due to some mysql limit, or limits from the place I
sent the load from, but I was only consuming around 50% cpu, so something
else seemed to be the bottleneck.

