Apache can be tweaked in so many different ways depending on what your traffic patterns look like and how you're processing requests. MaxClients / KeepAlive / MaxRequestsPerChild / etc...
e.g. you would have a completely different config for serving wordpress vs static images for a photo album.
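To make that concrete, here's a rough sketch of how those directives might diverge (the numbers are made up and purely illustrative, using the 2.2 directive names since that's what the thread mentions):

    # Static photo album: lots of small requests per visitor, cheap workers,
    # so keep connections open and let children serve lots of requests.
    KeepAlive            On
    MaxKeepAliveRequests 200
    KeepAliveTimeout     5
    MaxClients           256      # MaxRequestWorkers in 2.4
    MaxRequestsPerChild  10000    # MaxConnectionsPerChild in 2.4

    # WordPress with mod_php: every child carries a PHP interpreter,
    # so far fewer, heavier children and shorter keep-alives.
    KeepAlive            On
    KeepAliveTimeout     2
    MaxClients           40
    MaxRequestsPerChild  1000     # recycle children to cap memory creep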
I can't stress this enough. Besides all the configuration settings, you can really dig into the internals.
Our production deployment of Apache is custom-compiled with something like 90% of the modules disabled. Only the basic stuff we use.
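Something along these lines (the exact module list here is illustrative, not ours; the point is you start from nothing and statically link only what you need):

    ./configure --prefix=/opt/httpd \
                --with-mpm=event \
                --enable-modules=none \
                --enable-mods-static="dir mime log_config headers rewrite"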
Each deployment is unique, making benchmarks pretty useless.
If you need to choose which httpd to use, decide based on how easily it'll fit in your stack. I promise you, if you hit the performance ceiling on any of the httpd suites out there, you'll have bigger fish to fry.
Each deployment is unique, making benchmarks pretty useless.
Not that unique, as in snowflake-unique. There are several classes of deployments, that's all. A lot of things are similar.
That said, benchmarks still say a lot once the gap gets past a certain point. A 10-50% speed difference could be closed with a different config. Maybe even 100%. I doubt a 10x one could. Or 1/5 the memory usage. Or hitting 20,000 rps on an otherwise identical setup while the other server struggles past 4,000.
You've got to think about all of the specifics though...
Here's apache tuned to serve static files of 50kb in length, with 20 requests per client, 500 uniques per hour...
vs.
Here's apache tuned to serve a wordpress blog that gets 500 uniques per hour and each client makes 50 requests (average time on site is 5 minutes)...
and then...
apache serving wordpress on a VPS vs apache serving wordpress on a single core vs apache serving wordpress on a quad-core..
and oh-wait... if we're talking php then let's consider prefork/FastCGI/yada yada yada
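Just to illustrate that fork in the road (module paths, socket paths, and the PHP module name vary by distro and PHP version, so treat this as a sketch — and only one MPM can be loaded at a time, of course):

    # Option A: prefork MPM with mod_php embedded in every child
    LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
    LoadModule php_module         modules/libphp.so   # libphp7.so etc. on older PHP

    # Option B: event MPM, PHP handed off to a separate PHP-FPM pool over FastCGI
    LoadModule mpm_event_module   modules/mod_mpm_event.so
    LoadModule proxy_module       modules/mod_proxy.so
    LoadModule proxy_fcgi_module  modules/mod_proxy_fcgi.so
    <FilesMatch "\.php$">
        # socket path is whatever your FPM pool is configured to listen on
        SetHandler "proxy:unix:/run/php-fpm.sock|fcgi://localhost/"
    </FilesMatch>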
The "problem" (or as I prefer to look at it: feature) with Apache, is that it's been around for so long, that it can accommodate all of the above specific issues.
No matter how many people stress this very important point, there will still be people crying over performance during sleepless nights spent thinking about the one-in-a-million chance that their server will be burdened by some ungodly amount of traffic. I've even been guilty of it on servers that get 14 hits a day (and that's on a good day)!
I disagree that each deployment is unique, though. In theory each deployment should be unique, but most people just go with the defaults their hosting provider suggests, others don't have control over those settings (shared hosts), and still others just use the .htaccess file that comes with HTML5 Boilerplate and leave it at that. There are also a ton of people who have a majority of the modules enabled but neither use them nor know what they do. I've been guilty of that before too. Maybe around these parts it's safe to say each deployment is unique, but out in the wild you tend to see a lot of the same.
Plus it's pretty easy to saturate a 1 Gbit NIC with an Apache setup that's pretty close to out-of-the-box. You might burn 5-10% more CPU than if you were using nginx, but it's not that big a deal in 99% of use cases.
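Back of the envelope: 1 Gbit/s is roughly 125 MB/s, so at an average response of, say, 50 kB (like the static-file example above), the NIC tops out around 2,500 responses per second — well within what a stock Apache will push.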
Benchmarks are like porn. The real thing is always better. Figure out what your stack is, try using all the options available, and go with the one you like best.