> *) HTTP/2 support no longer tagged as "experimental" but is instead considered fully production ready.
- Performant reverse proxy
- RFC compliant caching with various backends, including Redis, memcached, etc...
- Load balancing with fail-over
- Dynamic proxy configuration
- Dynamic reverse proxy health checks
- FPM for PHP and others
- Event-based, async request handling
- HTTP/2 support
- brotli support
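Several of the features above can be sketched in a single nginx server block. This is a minimal illustration, not a production config; the upstream addresses, cache zone name, and certificate paths are hypothetical, and brotli additionally requires the third-party ngx_brotli module:

```nginx
# Hypothetical sketch: reverse proxy with caching, fail-over, and HTTP/2.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

upstream app_backend {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080 backup;   # fail-over target
}

server {
    listen 443 ssl http2;           # HTTP/2 support
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://app_backend;
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;  # cache successful responses for 10 minutes
    }
}
```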
* You are probably not Google and the http/https server is almost certainly not a performance problem worth worrying about.
* Swiss army knife. At my day job we use a pile of Apache modules; if we went nginx we'd have to replace these with a kitchen drawer full of standalone gadgets.
* You know Apache well.
* Lots of LAMP apps pretty much presume Apache (WordPress plugins in particular) and nginx support is a me-too.
* Smaller and cleaner config.
* You know nginx well.
* nginx support is getting better, if you like nginx it's worth checking your prospective application.
Honestly, for almost all use cases it probably doesn't matter which. Pick whatever makes you most effective.
Fixed in Plus, not in the open source product.
And unless you support it solidly and long-term, nobody would really use it: paid support from Nginx is much less risk.
If you have an existing configuration for either server, it has invariably grown into a pile of hacks, domains, and misredirections that is a pain to rewrite for the other. That makes switching hard.
Apache has support for some modules, mostly for authentication, that nginx doesn't have.
Old applications, like WordPress, are only supported and documented on Apache.
WordPress is not old, it is just very badly written.
Not true: https://codex.wordpress.org/Nginx
I've been using WordPress on Nginx for almost a decade now, I just found that bit funny given the context.
WordPress itself is fine. Many plugins, however, assume Apache (and MySQL).
"Come for the performance, stay for the configuration".
This is all turned on with one line in a configuration file:
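Assuming this refers to Apache httpd's HTTP/2 support (per the changelog quote above), the single line in question would be something like the following, which requires httpd 2.4.17+ with mod_http2 loaded:

```apache
# Enable HTTP/2 with HTTP/1.1 fallback (needs mod_http2)
Protocols h2 http/1.1
```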
Nginx and Apache are about the same performance, assuming you are on 2.4.x and using the latest apr/apr-util/apr-iconv. The memory footprint is about the same and benchmarks are about the same.
Nginx still hard codes libc options that I have to undo in the Makefile each time to get the glibc hardening options enabled. (PIE, Full RELRO, stack protector and so on) I have not tested the last 3 or so releases, so maybe Maxim has resolved this?
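The usual workaround is to pass the hardening flags through nginx's configure script rather than patching the Makefile by hand. A sketch, with an illustrative (not exhaustive) flag set:

```shell
# Illustrative: build nginx with PIE, full RELRO, and stack protector
./configure \
  --with-cc-opt="-O2 -fPIE -fstack-protector-strong -D_FORTIFY_SOURCE=2" \
  --with-ld-opt="-pie -Wl,-z,relro -Wl,-z,now"
make
```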
I prefer Apache for anything CGI related, as the security is more tightly coupled to the webserver. With NGINX, I have to use another daemon that has its own security settings, environment, etc.
With .htaccess files turned off (AllowOverride None), I got 282 requests per second. With .htaccess files on (AllowOverride All) I got 286 requests per second (yes, actually a few more hits per second with .htaccess turned on, but the results swing from test to test by a few percent anyway).
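For reference, the two settings being compared look like this in the httpd config (the DocumentRoot path is hypothetical):

```apache
# .htaccess lookups disabled: httpd never stats per-directory override files
<Directory "/var/www/html">
    AllowOverride None
</Directory>

# .htaccess lookups enabled: httpd checks each path component on every request
<Directory "/var/www/html">
    AllowOverride All
</Directory>
```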
I hope this is sarcasm. Apache has support for the same event driven model that NGINX has. Since most people get their copy of Apache from their distribution they're often getting a rather old version of Apache. NGINX has taken good advantage of this to imply that Apache is out of date.
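On a Debian/Ubuntu-style install, switching from the prefork MPM to the event MPM is just a module swap (this sketch assumes the a2dismod/a2enmod helpers and the apache2 service name used by those distros):

```shell
# Switch Apache httpd from mpm_prefork to mpm_event (Debian/Ubuntu layout)
a2dismod mpm_prefork
a2enmod mpm_event
systemctl restart apache2
```

Note that mod_php typically forces prefork, which is why distros often leave the event MPM disabled by default.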
I happen to also use NGINX at work. What I'll say is it has its own set of problems. There are various reasons to choose one or the other for a particular problem. But the idea that Apache httpd is obsolete is nonsense.
> There are various reasons to choose one over the other
Care to elaborate? Off the top of my head I can think of these:
- rfc caching with proper revalidation (as opposed to mere Expires: header generation)
- process isolation (aka CGI)
- better docs (and community/mindshare, though unsure about the latter)
- vhosting ecosystem
But that's just my experience at work. I'm sure there's a ton of other reasons. I can't say that I really agree with your list. I actually find Apache's documentation to be much better, I find myself needing to go read the code to figure out some behaviors with NGINX quite often still. Process isolation with CGI can be equally done with Apache httpd as with NGINX. I can't say I'm familiar with NGINX's caching behavior in detail since we're not using it. Also not sure what you mean by vhosting ecosystem since I see the design as pretty similar, unless you mean the upstream configuration.
- I too find that the Apache documentation is much better than the nginx one
- I also prefer the Apache License over the open-core, pay-a-license-for-more-features model of Nginx.
- Then the module system of Apache is simply stable, proven, and seamless.
I'd go with Apache any time of the day as far as HTTP servers are concerned. I have nothing against Nginx per se, but I think the "Apache is dying / Apache is obsolete" rhetoric is so far from the truth and so exaggerated that it is almost becoming a sort of propaganda.
Apache is doing just fine, is going nowhere anytime soon, and its configuration file format is not that complicated. Just try it for yourself and read the (well-written) docs.
Err, the points I listed are in favour of Apache httpd (I guess I haven't made myself very clear).
I don't think that's fair to Linux distros, either. Most popular distros have been shipping mpm_event for quite a few years, even if not enabled by default (because of mod_php).
I understand that as a developer of a phenomenal piece of software, you would prefer everyone to use the latest and greatest version. But for a typical user, Apache is so stable that the difference between CentOS's 2.4.6, Ubuntu's 2.4.18, and the latest version doesn't look large enough to have a significant effect in an "Apache vs. nginx" debate. After all, distros ship outdated versions of everything, including nginx.
But yes the difference between patch versions is usually quite small.
If you care about performance then you probably don't want to use the distribution's copy. People tend to use NGINX's distributions directly from NGINX or build it on their own. Since NGINX didn't have dynamic loadable modules until recently that drove more people to build their own copies.
I do think it's fair to say that NGINX has a reputation for getting better performance out of the box without configuration. However, it doesn't take long before you have to start tweaking it as well in my experience.
So I don't think some people's impressions of Apache are entirely unfair, but I don't think they are entirely fair either. But this shouldn't surprise anyone given that NGINX is a business and has a marketing department. Apache is a foundation and really doesn't market like NGINX does.
* httpd 2.4.25, default config (prefork), peaked at 24 processes and ~8MB RSS each (192MB) => 88k req/sec
* nginx 1.10.3, default config, peaked at 8 processes and ~4MB RSS each (32MB) => 199k req/sec
I thought all the default modules could be slowing httpd down so I built the most stripped down version that would allow me to run it and also switched to mpm_event:
* httpd 2.4.25, stripped down mpm_event, peaked at 5 processes at ~8MB each (40MB) => 89k req/sec
Both mpm_prefork and mpm_event had similar numbers. Where should I be looking at if I want to increase the request rate?
EDIT: stripped down mpm_prefork got me 120k req/sec but I saw a few defunct processes which is scary.
We used different mpms for different purposes even 10 years ago. A cluster of boxes could still serve over a hundred thousand mod_perl responses per second, at which point with a decent-sized set of perl apps your bottlenecks are application CPU and memory, not connection processing.
Prefork is simply the most compatible mpm for every module that exists, so of course you'd ship that as the default for a distro. Server software being "fast out of the box" is like saying a phone's battery comes "fully charged out of the box". It's convenient, but I can also just take an hour and get it there myself.
I have bothered with php-fpm previously, specifically so that when the server is getting hammered it just stops trying, instead of sending the box into OOM-killer.
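That behavior comes from the pool worker limits in php-fpm: capping the worker count makes overload queue or fail fast instead of forking the box into swap. A sketch of a pool config (all numbers are illustrative and should be sized to the machine):

```ini
; Illustrative pool limits for a php-fpm pool (e.g. www.conf)
pm = dynamic
pm.max_children = 20      ; hard cap: never more than 20 workers, whatever the load
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500     ; recycle workers periodically to contain memory leaks
```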
I've personally experienced the problems with Apache and prefork. We couldn't switch to the event model because mod_python/mod_wsgi wouldn't work with it. And prefork was really awful when it came to serving files. Nginx also had much better defaults, so you could use the version from the distribution and bombard it with requests without it breaking a sweat.
In the end, I think convenience is really overlooked when it comes to server software, or at least has been. Most shops don't have lots of time to spend on tweaking things. The defaults must be good, and necessary configuration should be as simple as possible, autodetecting as much as it can.
Not that nginx is perfect, the caching layer is one example of something that doesn't really do the right thing out of the box. Last time I looked, there was still a bunch of really common stuff it doesn't understand by default, e.g. it can get confused by compression headers or common tracking cookies.
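Common mitigations for those two issues, sketched with nginx's stock proxy directives (the cache zone name and upstream are hypothetical):

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache app_cache;

    # Avoid cache fragmentation from content-encoding negotiation
    proxy_set_header Accept-Encoding "";

    # Keep Set-Cookie responses (e.g. tracking cookies) from blocking caching
    proxy_ignore_headers Set-Cookie;
    proxy_hide_header Set-Cookie;
}
```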
On a similar note, I once sent a bug report to Varnish that didn't at the time respect Cache-Control: private out of the box. In the end it was wontfixed by PHK.
If you know that but can't be bothered to learn what you are putting in production, then it's not really the project's or the distro's fault IMHO.
I am currently running an Nginx server which is using PAM service, which in turn is configured to authenticate against LDAP via pam_sss. I don't really see the use case having Nginx authenticating directly to LDAP.
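For context, that chain looks roughly like this. The nginx side assumes the third-party ngx_http_auth_pam_module, and the PAM service name ("nginx") is whatever the module is configured with:

```nginx
location /private/ {
    auth_pam "Restricted";           # from ngx_http_auth_pam_module
    auth_pam_service_name "nginx";   # matches /etc/pam.d/nginx below
}
```

and the corresponding PAM stack in /etc/pam.d/nginx, delegating to SSSD (which talks to LDAP):

```
auth    required    pam_sss.so
account required    pam_sss.so
```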