Nginx vs. Apache in AWS (celingest.com)
49 points by ruggerotonelli 1523 days ago | 42 comments



Disclaimer: I work at AWS, I'm not speaking for my employer. I also helped write Apache httpd, and I'm not speaking for Apache either. Just on my own behalf.

The Apache configuration in the benchmark isn't great. Using values of:

  MaxClients         400
  MinSpareThreads     25
  MaxSpareThreads     75 
  ThreadsPerChild     25
means that Apache will be doing extra work to spin up and spin down threads in response to the load. It's best to keep everything static, e.g.

  MaxClients         1000
  MinSpareThreads    1000
  MaxSpareThreads    1000 
  ThreadsPerChild      25
That way there aren't additional clone() calls under load (when you can least afford them), and the maximum concurrency stays the same. For a still fairer fight, the event MPM could be used.
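
Roughly what that static tuning would look like under the 2.4 event MPM (a sketch with illustrative values; note that MaxClients was renamed MaxRequestWorkers in 2.4):

  <IfModule mpm_event_module>
      ServerLimit           40
      ThreadsPerChild       25
      MaxRequestWorkers   1000
      MinSpareThreads     1000
      MaxSpareThreads     1000
  </IfModule>

ServerLimit x ThreadsPerChild (40 x 25 = 1000) has to cover MaxRequestWorkers, and pinning the spare-thread bounds to the maximum keeps httpd from reaping and respawning threads as load fluctuates.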

Apache has nearly 20-year-old configuration defaults, because it's best not to break things for users who rely on them. Properly configured, and sanely built, Apache can easily perform as well as nginx or lighttpd. These days 99% of web-server performance is in the kernel and the SSL/TLS libraries. The user-space code is almost completely irrelevant (and the configuration knobs I mentioned really just control how often certain system calls are made).

But it hardly matters; "apt-get install nginx" just works, and you really don't have to worry about it; you get a relatively small binary with some sane defaults. Apache is awesome, but I run nginx on my own EC2 micro instance. It's less hassle.


"Apache is awesome, but I run nginx on my own EC2 micro instance. It's less hassle."

...enough said!


If you are going to test Apache, at least use 2.4 in event MPM mode so you are comparing apples to apples.

Using Apache 2.2 in worker mode is like comparing 8-year-old tech to modern code.

Hint: Apache is still about 15-20% slower than nginx even in its fastest event MPM mode, but it has many more years of features and modules available. It all depends on what your needs are: single-site services vs. multi-site webhosting.

You can always get the best of both worlds by putting nginx in front of Apache as a proxy, if you need both. It takes one evening to figure out.
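
A minimal sketch of that setup, assuming Apache is listening on 127.0.0.1:8080 (the port, docroot, and extension list are placeholders, not a recommendation):

  server {
      listen 80;
      root /var/www/html;

      # Static assets served straight from disk by nginx
      location ~* \.(css|js|png|jpg|gif|ico)$ {
          expires 7d;
      }

      # Everything else is handed to Apache on a local port
      location / {
          proxy_pass http://127.0.0.1:8080;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }

The usual division of labour: nginx handles slow clients and static files cheaply, while Apache only sees the dynamic requests.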


It's fair insofar as Debian still ships with 2.2 (2.4 in experimental)


I wouldn't call Debian Sta(b)le fair testing. Ubuntu is slightly better, but to be truly fair you should use the latest stable production branch of each program. Or dev, if that's what you're testing.


The parent argued that the comparison is unfair, but those are the versions you would get from Debian (apt-get install nginx-light apache2).


Well, the title is "Nginx Vs Apache in AWS", not "Nginx from apt-get Vs Apache from apt-get in AWS". If you can't install the latest stable releases, then don't write a comparison about them.


I don't know what versions are provided by the Amazon Linux AMI (which the article cites as the image), but I would venture to guess that they are the standard ones (which would make the title apt).


Both are available in the Amazon Linux AMI:

  httpd.x86_64 : Apache HTTP Server
  httpd24.x86_64 : Apache HTTP Server


Btw, there is also a specially tuned Nginx AMI: https://aws.amazon.com/marketplace/pp/B00A04GAG4/


And what version of nginx does Debian ship with? I suspect it classes nginx entirely as experimental!


1.2.6, which incidentally is what was used in the test.


But in Stable it's not quite that cutting edge: v0.7.63.


"Here is a benchmark that shows the performance of a watermelon and a large cantaloupe lobbed from a trebuchet using different kinds of wood and rope under different wind conditions. We tied random weights to each melon for no apparent reason."

What the fuck is the point of a performance benchmark if you didn't tune any settings for performance?


Indeed, this doesn't tell you anything. Was nginx configured well? Was Apache? How much difference would it have made? Is the old Apache version relevant? Pointless test, unless you intend to use those settings on that software on similar hardware.


Hell, it would have been great if they had used a real closed testbed on a local network instead of AWS. That would give you reproducible test results that aren't influenced by things like communal server resources and a communal network.

Regardless, they put zero thought into what makes each application work to its peak performance and just picked two random configurations, probably based on defaults that we know to be poor on Apache and advantageous on nginx. Using worker instead of event just adds insult to injury.


Call me stupid.

But maybe Apache shouldn't ship defaults that have poor performance?


Well, stupid, Apache ships the default settings it does because of its users' historical expectations. But the application's tuning is driven largely by how you use it, as Apache has several MPMs, each useful in different situations and requiring vastly different configuration options. This is to say nothing of the kernel options, which also need to be tuned to the application.

The defaults are just defaults. They need to be changed to improve performance.
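
For example, a few of the kernel-side knobs that often matter as much as the web server itself (a sketch; the values are illustrative, not recommendations):

  # /etc/sysctl.conf fragment; illustrative values only
  # ceiling on the listen() backlog
  net.core.somaxconn = 4096
  # queue of half-open TCP connections
  net.ipv4.tcp_max_syn_backlog = 4096
  # ephemeral port range (matters when proxying)
  net.ipv4.ip_local_port_range = 1024 65535

Any serious benchmark would tune these alongside the server's own config, since the defaults here constrain either server equally.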


Agreed: you can only compare the two if both are tuned for optimal performance in the specific situation.


What do you mean, what is the point?

The obvious assumption with all software is that it is optimised out of the box.


If this wasn't a troll, please rest assured that most software is not optimized out of the box (especially when half the optimizing that affects the application is in the kernel).


Optimized for what? For which circumstances? What software fits all?


Optimized for serving HTTP requests at a high throughput.

What possible reason is there for Apache deliberately slowing that down?


Are you serious?

Just off the top of my head, how about...

- Simplifying the initial setup?

- Making the default configuration simpler to understand?

- Dialing up logging to make your initial configuration easier to monitor / debug?

- Performance options with security implications (e.g. ok if other parts of your infrastructure are up-to-snuff, but otherwise...)?

The default configuration should be tuned to the needs of whoever is most likely to be using it. I have a hunch that's far more likely to be people using an Apache server (maybe even a web server) for the first time than people with extreme performance requirements.


I'm sure there's a good reason why Apache 2.4 wasn't in the comparison, but the author failed to mention it. Apache 2.2.23 may be a recent release, but it's a security update of an old (though stable) branch. The Apache 2.4 branch is a year old and a significant update, including performance improvements to compete with the likes of nginx.

I would still expect nginx to "win", but it seems a fairer comparison somehow.


Why exactly did they limit nginx to one worker process?


I came here to write the same thing...

  worker_processes 1;   # for nginx on 2 cores?

And what PHP config are you running on nginx? You haven't mentioned it.

The benchmarking needs to get better; this keeps happening.
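
That PHP detail matters because nginx runs PHP out-of-process, usually via PHP-FPM over FastCGI, which behaves quite differently from mod_php under Apache. A sketch of the usual wiring (the socket path is an assumption):

  location ~ \.php$ {
      fastcgi_pass  unix:/var/run/php-fpm.sock;
      fastcgi_index index.php;
      include       fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  }

The FPM pool sizing (pm.max_children etc.) then becomes the real concurrency limit for dynamic requests, so leaving it unreported makes the numbers hard to interpret.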


Yeah... I came here to write this too. Seems just silly. Makes even less sense than not comparing with Apache 2.4.


I feel the same way... one worker process??


I think one of the less obvious conclusions here is how important caching is for WordPress, with both nginx and Apache. I wish this test had also covered W3 Total Cache, though I know it's hard to test all possible permutations.

One of the biggest issues with WordPress in particular, however, is that while it runs happily with nginx, it is written primarily with Apache in mind, and many plugins rely on that too. I got it to work fine with nginx, but it somehow still felt rather experimental. Perhaps it's just me.

I wrote a similar post from a slightly different angle, comparing the cost/benefit of adding different caching layers, including W3TC and Varnish on both Apache and nginx, on cheap shared hosting: http://blog.gingerlime.com/2012/how-much-cache-is-too-much/


Even with one worker process, so without any parallelism, nginx is faster than Apache most of the time. But if you want a meaningful comparison you should set the worker count to the number of CPUs the OS sees (hyperthreaded or not).
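
In nginx config terms, something like this (the benchmark's instances had 1-2 cores; newer nginx versions also accept worker_processes auto):

  # one worker per CPU core the OS sees; the benchmark pinned this to 1
  worker_processes  2;

  events {
      # concurrent connections per worker
      worker_connections  1024;
  }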


Can someone explain how the medium instance is serving more req/sec than the large one when caching is on?


"Moreover… Nginx wasn't always the winner." is my favorite part.


Good research and writeup... But 3D bar graphs... Come on!


It's posts like this making it to the frontpage (and the inevitable comment thread) that make me sad about what HN has become.

What happened to the idea of making submissions (and comments) cost karma points anyway?


Whiny comments like this make me sad, but luckily I know they don't reflect on the community as a whole.


Lighttpd is missing... but good benchmark anyway.


I actually really like the lighttpd config structure, but it's not actively maintained and doesn't have the amazing community that nginx has. It is clear that nginx has won the high-performance web-server battle.


The sensible config syntax is the reason I still use lighttpd. http://wiki.nginx.org/IfIsEvil


Given the usage numbers (looking at Netcraft), the two most popular engines were compared. It's a fair comparison, especially when you note that IIS doesn't run on Linux. Lighttpd isn't even big enough to warrant a mention.

http://news.netcraft.com/archives/2013/02/01/february-2013-w...


>Lighttpd is missing

That's like saying thttpd is missing. It isn't tested because it isn't relevant.


Nginx rulez... end of story!



