The Apache configuration in the benchmark isn't great, using values of:
Apache ships nearly 20-year-old configuration defaults, because it's best not to break things for users who rely on them. Properly configured, and sanely built, Apache can easily perform as well as nginx or lighttpd. These days 99% of web-server performance is in the kernel and the SSL/TLS libraries. The user-space code is almost completely irrelevant (and those configuration knobs I mentioned really just control how often certain system calls are made).
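To make the "knobs" concrete, here is a sketch of the kind of directives usually meant, using Apache 2.4 names (2.2's worker MPM calls the last one MaxClients); the numbers are illustrative placeholders, not recommendations:

```apacheconf
# Keep-alive settings: how long a worker holds a connection open
# between requests (i.e. how often accept()/close() get called).
KeepAlive            On
KeepAliveTimeout     2
MaxKeepAliveRequests 100

# Event MPM sizing: how many processes/threads are available,
# which bounds concurrent connections.
<IfModule mpm_event_module>
    StartServers          2
    ThreadsPerChild      25
    MaxRequestWorkers   400
</IfModule>
```

None of these change what Apache does per request; they only change how many workers exist and how often connection-related syscalls fire, which is why defaults tuned for 1990s hardware look so bad in benchmarks.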
But it hardly matters; "apt-get install nginx" just works, and you really don't have to worry about it; you get a relatively small binary with some sane defaults. Apache is awesome, but I run nginx on my own EC2 micro instance. It's less hassle.
Using Apache 2.2 in worker mode is like comparing 8-year-old tech to modern code.
hint: Apache is still about 15-20% slower than nginx even in its fastest mpm-event mode, but it has many more years of features and modules available. It all depends what your needs are: single-site services vs. multi-site webhosting.
You can always get the best of both worlds by putting nginx in front of Apache if you need both. Takes one evening to figure out.
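A minimal sketch of that setup, assuming Apache has been moved to 127.0.0.1:8080 and the paths/hostname are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;        # placeholder
    root /var/www/example;          # placeholder

    # Let nginx serve static assets directly.
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 30d;
    }

    # Everything else (PHP, .htaccess-dependent apps, etc.)
    # is proxied through to Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nginx handles the cheap static requests and slow clients; Apache only sees the dynamic traffic it's actually needed for.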
httpd.x86_64 : Apache HTTP Server
httpd24.x86_64 : Apache HTTP Server
What the fuck is the point of a performance benchmark if you didn't tune any settings for performance?
Regardless, they put zero thought into what makes each application perform at its peak and just picked two random configurations, probably based on defaults that we know to be poor on Apache and advantageous on nginx. Using worker instead of event just adds insult to injury.
But maybe Apache shouldn't ship defaults that have poor performance?
The defaults are just defaults. They need to be changed to improve performance.
The obvious assumption with all software is that it is optimised out of the box.
What possible reason is there for Apache deliberately slowing that down?
Just off the top of my head, how about...
- Simplifying the initial setup?
- Making the default configuration simpler to understand?
- Dialing up logging to make your initial configuration easier to monitor / debug?
- Performance options with security implications (e.g. ok if other parts of your infrastructure are up-to-snuff, but otherwise...)?
The default configuration should be tuned to the needs of whoever is most likely to be using it. I have a hunch that's far more likely to be people using an Apache server (maybe even a web server) for the first time than people with extreme performance requirements.
I would still expect nginx to "win" but seems a fairer comparison somehow.
worker_processes 1;
- for nginx on 2 cores?
nginx PHP?
- you haven't mentioned what PHP config you are running with nginx
The benchmarking needs to get better; this keeps happening.
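For comparison, a sketch of what a sane nginx + PHP-FPM setup for that box might look like; the socket path and document root are assumptions, not values from the benchmark:

```nginx
worker_processes 2;             # one per core on a 2-core machine

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        root /var/www/site;     # placeholder
        index index.php;

        # Hand .php requests to PHP-FPM over a unix socket
        # (assumed path; TCP to 127.0.0.1:9000 is also common).
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;
        }
    }
}
```

Without publishing details like these (and the matching PHP-FPM pool sizing), the nginx numbers in the benchmark can't be reproduced or compared fairly.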
One of the biggest issues with WordPress in particular, however, is that whilst it can run happily on nginx, it is written primarily with Apache in mind, and many plugins rely on that too. I got it to work fine with nginx, but it somehow still felt rather experimental. Perhaps it's just me.
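Most of the "written with Apache in mind" friction comes down to WordPress assuming mod_rewrite for pretty permalinks. The commonly used nginx equivalent is a try_files rule like this (location block shown in isolation; it belongs inside your server block):

```nginx
# Route anything that isn't an existing file or directory
# through WordPress's front controller, preserving query args.
location / {
    try_files $uri $uri/ /index.php?$args;
}
```

That covers core permalinks; plugins that write their own .htaccess rules (caching, security, redirect plugins) each need their nginx translation done by hand, which is where the "experimental" feeling tends to come from.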
I wrote a similar post from a slightly different angle, comparing the cost/benefit of adding different caching layers. This included W3TC and varnish on both apache and nginx, and using a cheap shared hosting. http://blog.gingerlime.com/2012/how-much-cache-is-too-much/
What happened to the idea of making submissions (and comments) cost karma points anyway?
That's like saying thttpd is missing. It isn't tested because it isn't relevant.