OTOH, Apache suffered from the "Slowloris" attack, so the whole shebang ended up being nginx sitting in front of a few kinds of front-end Apache instances, which sit in front of a dozen or so kinds of backend Apache instances.
I find it interesting that although on those servers there are 12x more Apaches than NGINX, it might get counted as a server "using nginx"...
... and that's just because the whole shebang sits under Cloudflare, which reports Server: nginx-cloudflare ;)
While I imagine PHP is the single largest reason, other languages that support or expect the use of fastcgi are also very easy to configure with nginx, whereas I can count on one hand the number of businesses I've seen using Apache's mod_fcgid.
I don't believe cPanel/WHM even supports nginx yet as a standard option.
I also really doubt that the php7.1 module and Apache without .htaccess is faster than nginx and php7.1-fpm in 'ondemand' mode. Even a $5 DO server can handle hundreds of requests a second to big frameworks like Drupal or MediaWiki, and they're securely separated. You can lock down permissions at the group level to the executing PHP pool, then make only specific users belong to that pool and bind a directory in their home to the actual website location.
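A minimal sketch of such an 'ondemand' pool; the pool name, user/group names, and socket path here are all hypothetical placeholders:

```ini
; /etc/php/7.1/fpm/pool.d/example.conf -- hypothetical pool
[example]
user = site-example            ; dedicated unprivileged user for this site
group = site-example           ; lock filesystem permissions to this group
listen = /run/php/example.sock
listen.owner = www-data        ; only the web server user may connect
listen.group = www-data
listen.mode = 0660
pm = ondemand                  ; spawn workers only when requests arrive
pm.max_children = 10
pm.process_idle_timeout = 10s  ; reap idle workers after 10 seconds
```

With one such pool per site, each runs under its own user, and group membership controls who can read the site's files.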
The links below point to benchmarks and discussions of mod_php vs. FPM, all from last year (2016). Fast forward to today: I am seeing people move to PHP 7 and move back to mod_php. I believe we are at the start of a movement; articles and stories will follow, but only after the fact.
The fact is that Apache + mod_php will keep an instance of the PHP interpreter active in every single child httpd process. With nginx+fpm, your static assets are served directly from nginx without the overhead of an unnecessary PHP interpreter loaded into that process, while only your PHP requests are funneled to FPM. The performance overhead of having a PHP interpreter loaded into the process that is only serving a static asset is astronomical.
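That split is visible right in a typical nginx server block; this is a hedged sketch with hypothetical host name, paths, and socket:

```nginx
server {
    listen 80;
    server_name example.com;               # hypothetical host
    root /var/www/example/public;

    # Static assets are served straight off disk by nginx workers;
    # no PHP interpreter is involved for these requests.
    location / {
        try_files $uri $uri/ =404;
    }

    # Only *.php requests are funneled over to the FPM pool.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.1-fpm.sock;
    }
}
```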
At the end of the day, benchmark your shippable product. Never benchmark a "Hello World" or a Wordpress installation if you're not shipping a Hello World or a Wordpress codebase. Based purely on professional experience, I have never seen a real-world app perform better on Apache+mod_php than on nginx+fpm.
The only thing PHP 7 gave us was essentially the ability to ignore HHVM as a "required performance booster". 90% of companies were already able to ignore HHVM; with the improvements made in PHP 7, it's now 95-99%+ of products that don't need to evaluate HHVM as a mandatory alternative. And yes, nginx+fpm is still the de facto standard for PHP 7; the links you have provided do not say any different.
But we should also ask ourselves how to get into such messes less often. That is, how to systematically reduce the number of early-stage design errors. One trick is to choose tools that forbid known anti-patterns.
That means the designers must work harder up-front to figure out a system that can do without the work-arounds. But that is a feature, not a bug; indeed that is what our processes should try to achieve.
Most people do it because they have no idea what they are doing and they never decided on a naming convention for their apps and domains.
Especially links on sites you have no control over.
> if you want your webserver to do something complex, you'd go for Apache
I would tend to disagree. Assuming "complex" = "business logic", Apache hardly seems the right choice. PHP/Python/Node/GoLang or Lua right inside nginx would be more appropriate in most cases, imo.
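For example, with the third-party lua-nginx-module (as shipped in OpenResty), a bit of logic can live right in the config. A sketch, assuming that module is compiled in; the endpoint is made up:

```nginx
# Assumes OpenResty / lua-nginx-module; /hello is a hypothetical endpoint.
location /hello {
    content_by_lua_block {
        -- plain Lua running inside the nginx worker
        ngx.say("business logic right inside nginx")
    }
}
```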
> a set of simple rules that are within what Apache (including there all the module ecosystem) can and is designed to do
I think this statistic is conceptually a little behind the times in some ways
This is hardly a legitimate statistic; think of how many sites sit behind CloudFlare/Sucuri/Incapsula IPs, which are always nginx reverse proxies.
I don't think anything changed that much as of late, apart from more and more people adopting CloudFlare.
> why do you need two?! Just pick one!
When they're good at different things.
Nginx seems to have a different model. It does support a number of features but from what I can see it focuses on composing functionality with HTTP rather than adding more plugins.
Nginx seems to do a great job at being a load balancer and cdn-lite, and it seems like that's what the market wants out of a web server.
The latter seems tricky. Is validation inside the application a bottleneck?
event doesn't support HTTPS, or rather, it stops being event-based when you turn on HTTPS. So Apache can never be performant unless you have an SSL terminating proxy.
Its only advantage is that the config script is slightly easier to test, but it's still way too hard _and_ it's incredibly verbose compared to nginx config.
Have you got any reference to this? I think I should have noticed, having run Apache with fairly bursty traffic.
The only reference in the documentation was a four year old one, which said it was problematic six years prior to that...
I find the Event MPM to be very stable under pressure and unless you have some really specific need it's the only sane way to run Apache. It should have been the default ages ago (in the limited sense Apache even has defaults).
The basis you pick your web server on shouldn't be whether it's "new" or "old". It should be what your software runs on, what support contracts you have, and what functionality you need. But don't use it if you can't configure it. Preferably your organization should have knowledge about every piece of software in production.
Long story short, I decided not to use any of the partial WebDAV implementations for Nginx and just use sshfs instead. It's so true that Apache is a Swiss Army knife, though.
It's the 3rd chart on: https://news.netcraft.com/archives/2017/03/24/march-2017-web...
Web server market share depends a lot on which sites you're looking at: are you checking the top X million sites or checking every site you can possibly find out about? And also how you're deduplicating them: is every blogspot blog counted separately?
Disclaimer: I work at Netcraft (but not on the survey).
- Apache httpd: 11,877,702
- nginx: 4,759,439
- Microsoft IIS httpd: 3,872,974
So what are the servers behind nginx? 9 times out of 10 it is Apache httpd, and numerous instances of it at that. So for each single nginx server "seen" in these surveys, there are unknown multiples of Apache httpd behind the scenes doing the real work.
But all that messes up the popular, if incorrect, narrative that Apache httpd is dying and nginx is gobbling up instances. It's all about marketing, baby, for a product that really isn't truly "open source" but rather open core. And people buy it hook, line, and sinker.
A lot of your post is not wrong, but that statistic is just not right at all. Less than half of the time we see Apache instances behind NGINX, and that's mostly because it's legacy and hard to move away from. The other half of the time it's application-specific web servers, or other NGINX instances.
Source: Worked for Cloudflare and now work at NGINX
Can you cite a source for this?
In my practice, the servers behind nginx are usually platform-specific application servers, such as Gunicorn or uWSGI for Python.
Popular PHP CMS software also relies on .htaccess. Wordpress for example allegedly powers about a quarter of all websites, and auto-creates an .htaccess file when you enable pretty links, which basically everyone does. Drupal ships with multiple .htaccess files.
Sure it's possible to adapt this code into Nginx configuration, but there is really no reason to do that. It's far easier to set up Nginx in front of Apache and get most of the Nginx benefits that way.
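A rough sketch of that front/back arrangement; the host name, paths, and Apache loopback port are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;              # hypothetical host

    # nginx serves static assets directly off disk...
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root /var/www/example;
    }

    # ...and hands everything else to a local Apache that still
    # honors the application's .htaccess files unchanged.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```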
There really are a lot of platforms.
Equally, there is a truckload of LAMP sites out there (Linux, Apache, MySQL, PHP) to give Apache the edge on pure quantity. It's been the standard for personal site hosting and forums for two decades. That's a long-lasting effect. That's not where there is value and work for developers, though.
Really? Completely disagree. Just look at the configuration files for these vs nginx.
Sorry if I don't hold your opinion to that high a standard in that case.
In the context of load balancing, on the performance + latency + load balancing mechanism + configuration files criteria, Apache is the worst by a huge margin compared to both HAProxy and nginx.
Maybe 5-7 years ago, then yeah... maybe. Not even close today. Apache has the lowest latency and fastest total transaction time in various benchmarks. It all depends on how you are using it.
"configuration files criteria"
Got me there. But then again, 2.4 adds a LOT of ways to even streamline that, like mod_macro, mod_define, etc...
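For instance, mod_macro can collapse repeated vhost boilerplate into one template; a sketch with hypothetical macro and host names:

```apache
# Define a reusable vhost template (names here are made up).
<Macro VHost $host $port>
    <VirtualHost *:$port>
        ServerName $host
        DocumentRoot /var/www/$host
        ErrorLog ${APACHE_LOG_DIR}/$host-error.log
    </VirtualHost>
</Macro>

# Stamp out concrete vhosts from the template.
Use VHost example.com 80
Use VHost example.org 80
UndefMacro VHost
```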
Load balancing => Apache doesn't even support healthchecks. I won't even get into the lack of TCP/TLS support or the lack of some load balancing algorithms.
The claimed lack of health checks, TLS support, and load balancing algorithms is also complete and total nonsense (active health checks are, IIRC, only available in paid nginx). I think nginx has some kind of hash LB method that httpd doesn't, although httpd has round-robin, byrequests, bytraffic, and bybusyness.
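Those httpd lbmethods look like this in mod_proxy_balancer config; a sketch with hypothetical backend addresses:

```apache
# Hypothetical two-node balancer; pick one lbmethod:
# byrequests (default), bytraffic, or bybusyness.
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.0.1:8080"
    BalancerMember "http://10.0.0.2:8080"
    ProxySet lbmethod=bybusyness
</Proxy>
ProxyPass        "/app" "balancer://mycluster"
ProxyPassReverse "/app" "balancer://mycluster"
```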
HAProxy > nginx > apache
Of course if you compare apache to nginx, you can find stuff where nginx is lacking too.
Agreed, a lot of critical features are stripped in the open source nginx.
TLS = tcp with tls, not https.
I have no idea what in the heck you are talking about. If one must use mod_php then it is recommended to avoid a threaded MPM, but even that is no longer 100% true; you can run mod_php with Event in most implementations with no issues at all.
"stuck in prefork mode" is a nonsensical phrase. prefork is an MPM.
Just because something is threaded doesn't make it slow. Take varnish for example. There are tradeoffs on all implementations, that's why Apache httpd allows for prefork, worker(threaded) and event-based architectures which the sysadmin/devops can choose for their own particular case. But "Oog. Event be Good. Threads be Bad" is really completely missing the very real tradeoffs of both.
For load balancing, event trumps every other mode; that's just the way it is.
HTTP and TCP balancing are inherently single-threaded operations. There is no need for threading at all; multiple threads actually decrease performance.
In HTTPS and TLS mode, the encryption is the bottleneck, so you use one process per core (and that process needs events).
HAProxy lets me have one process pinned to each core of the system while the network card IRQs are on a dedicated core. Apache can't do half of that.
We could get into how the nginx and HAProxy parsers are insanely optimized, whereas Apache's is not, and cannot be, because of the modules.
Of course, not everyone has to push 10g or 30k requests/s with their load balancers.
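The per-core pinning described above can be sketched in an HAProxy global section (1.7-era multi-process syntax; the core numbering is an assumption about the box):

```haproxy
global
    nbproc 4        # one HAProxy process per proxying core
    cpu-map 1 1     # pin process 1 to core 1, and so on
    cpu-map 2 2
    cpu-map 3 3
    cpu-map 4 4     # core 0 left free for NIC interrupt handling
```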
The rest of your "analysis" suffers from the same misinformation as this. I especially like "Whereas apache is not and it cannot be because of the modules.". I have no idea what in the world you mean by that. Why "because of the modules"?
This gives a lot of flexibility, but it has a performance cost.
I feel the "marketing" concept is less actual marketing and more feature appeal, but I guess that's similar? Maybe not, though. Features provide actual value, whereas marketing is ... tricksy.
At my work we use IIS for our main website, but I've been asked to set up and configure a few wordpress installations. I've played around with both nginx and apache, but it was much easier to find clear instructions for setting up wordpress on apache, so that's what I went with. I also really like Apache's .htaccess support. It's easy to lock down access to wp-admin by just dropping a .htaccess file in there restricting access to the local IP range, instead of having to pollute the nginx config file with that sort of thing.
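That drop-in can be as small as this (Apache 2.4 syntax; the 192.168.1.0/24 range is a placeholder for your own network):

```apache
# wp-admin/.htaccess -- allow only local addresses
<RequireAny>
    Require ip 192.168.1.0/24
    Require ip 127.0.0.1
</RequireAny>
```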
The way you're writing, you sound like you've tried one (Apache), and not really the other (nginx) and are basing your opinion on that, not on any merit-based evaluation...
I don't consciously know either server; I just like the way sites can spin facts differently.
No, it can't. That just misses the entire point of the article. The key takeaway from it is how much Nginx has grown in the last few years by taking share away from Apache, especially in those deployments that needed support for modern protocols (from the article: 76.8% of all sites supporting HTTP/2 use Nginx, while only 2.3% of those sites rely on Apache). All the others have barely made a dent.
Also read the distribution of how Nginx fares vs. Apache among the top websites. It gives a much better picture of what is happening.
> I don't consciously know either server; I just like the way sites can spin facts differently.
Well then you really can't make the statement that it is a spin, can you?
That is, until a while after Nginx was released in 2004 and started to pick up adoption. Most articles I've seen over the past 4 years or so suggest using Nginx, and most web dev newbies I've met in the past year or two only know of Nginx.
So no, this is not about spin, this mostly about "hey, if you're still using Apache, the rate at which Nginx is being adopted just might warrant further consideration of switching, or at least adopting Nginx for further projects."
Consider how Plesk panels nowadays go with Nginx proxy by default, but Apache in the backend; CPanel will probably follow soon and people have already been doing this manually for a while too.
Apache is still there, just not in as much plain sight as it used to.
People like to think they are special and that they need more speed, so they turn to Nginx, but most of us can easily be served by Apache.
I wonder if anyone else had a similar experience. If the first 20 minutes are nicer with one tool over the other, I suspect most people will stick with that tool until it starts limiting them.
I don't think any of the stuff you mention is related to that.
First lot of web server configuration I had to do was Tomcat. After that, IIS 6 through to 8.
Compared to any of those, writing Apache httpd config actually seems pretty straightforward.
But yeah, Nginx is a lot plainer and I pick it most of the time. Usual time when I don't is if deploying someone else's software with complicated rules and don't have the time to port them.
Damn Tomcat config is godawful gibberish. Seems to be a running theme in Javaland.
If I were to take a guess based on my own experience, I'd say hobbyists and teenage experimenters are increasingly using nginx over apache because nginx configs are considerably easier to understand.
Over time, that is slowly translating to increased use in production environments as these people move into the workforce and apply their skills to production grade services.
Is there anything that competes/a "next Nginx"?
Caddy is written in Go and makes use of native fibers/green threads/goroutines.
I went with it for the test networks at my job and liked it a lot.
I know no one (myself included) who has reached for Apache over NGINX in a decade.
I mostly continue to use Apache because I'm more familiar with its syntax, but it sounds like nginx is popular for use cases that we've just used purpose-fit applications like haproxy to fill.
I find it hard to believe this could be accurate considering the vast majority of Node.js deployments are also utilizing Nginx as a reverse-proxy in front of it. I think a large portion of nginx's uptake is actually due to Node.js' popularity.
Also, by amount of traffic it's another story (YouTube).
With that said, use caddy. Not for political reasons, just because it's awesome. https://caddyserver.com/
Sure it is. Loving nginx and hating Apache httpd (especially) is shorthand for telling everyone just how high-tech and up-to-date one is. "Still using Apache? Way to go Grandpa! Get with the times... all the cool kids use nginx nowadays!".
How about choosing the best solution for the job at hand, coolness and hip-factor regardless ;)
This besides the fact that nginx is software and software doesn't really have opinions.
Yes, Apache configs are confusing, but it's not due to them being in an HTML-like style, and I suspect the same thing would happen even if they had a syntax similar to Nginx's.
IMO the problem is that Apache is extremely modular and every little piece was split into a separate module. There are many modules that are pretty much essential. It doesn't help that the example Apache config then comes with many non-essential modules enabled.
One time I was motivated to have Apache run with only the things that are necessary. I started without any modules, and each time the config complained that something was missing, I read the documentation about it and either enabled it or removed it from the config. It took me quite a while. And I know people (in fact the vast majority) who would never bother to do that. That's why Nginx looks attractive out of the box: by default you don't need any modules to have a basic server.
Similarly this is also the reason why people also moved from Sendmail to Postfix. Sendmail doesn't use XML but it also has a steeper learning curve to correctly configure it.
And Apache config predates XML. It was designed to look like HTML, because it's a webserver I guess.
XML with config helpers and the ability to open the file in a text editor, see what is going on, and tweak it is a great experience. XML with the expectation that a dedicated editor will be used to configure it is a terrible experience.
The above also applies if you replace "XML" with "JSON". :-)
There's just opening and closing tags for a stanza. Look at a basic virtualhost config.
Ultimately, XML was a colossal mistake carried out to perfection.
SXML is much easier to parse and it's more concise.
In XML there's also the concept of using self-closing tags only for "empty tags". Meaning, <tagname val="123"/> isn't "correct" and <tagname>123</tagname> should be used instead; while s-expressions simplify this.
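Concretely, with a toy element:

```xml
<!-- XML reserves the self-closing form for empty elements: -->
<timeout val="123"/>
<!-- element content must go between explicit open/close tags: -->
<timeout>123</timeout>
```

whereas the s-expression `(timeout 123)` covers both cases with one uniform shape.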
XML is great because there are plenty of fast parsers for it, with bindings in pretty much every programming language. It can be modified with nothing more than a text editor by someone halfway competent.
XML is bad because it will never be quite as optimal as some binary-only solution, and editing it with a text editor is painful.
It's not like most programming languages would have a hard time with gzipped XML either. This grants most of the benefits of binary formats and is often smaller than all but the most carefully designed ones.
All of that even presumes config load time matters. In a crazy large case, maybe 10MB loads from disk, and the important stuff is cached in memory in the actual structures that will use the data later. It just doesn't matter. Gzipped XML is fast enough for realtime high-performance games; who cares about this silly XML hating anymore? Just pick something that works and move on.