Why Use Nginx? (nginx.org)
210 points by seclabor on Mar 24, 2013 | 88 comments



Every single project, open source or not, needs to have a "Why Use It" page.

(Now this is more of a "Testimonials" page, but for server tech it will do.)

> Apache is like Microsoft Word, it has a million options but you only need six. Nginx does those six things, and it does five of them 50 times faster than Apache.

This is exactly how I felt. I'm a pea-brained dolt in the server sphere, and when I was remaking my server I went with nginx over apache on the advice of a friend because "the config file is easier to understand."

He was right. And instead of being frustrated and bumbling through apache until it worked the way I wanted, I was able to configure nginx (for the first time!) in mere minutes. With nginx I was able to move on to the "get frustrated by Wordpress" phase of server setup much sooner!


> Every single project, open source or not, needs to have a "Why Use It" page.

Not only that, but a mention of all the major competitors with an honest factual comparison with them is really handy as well. In the FLOSS world, naming competitors won't necessarily cause a major problem and does a huge amount for trust.


Still building the "feature comparison" feature but we have the "why use it" pretty much sorted, example: http://slant.co/topics/what-is-the-best-search-engine-for-we...


Yeah, but you can't ask Product X to tell you about Product Y, because they honestly don't know Product Y well enough to speak to it.


You have to ask yourself: if you're building a product and you know nothing about its potential competitors, what are you really doing? Research is key, in my opinion. It avoids cases of "Hey guys, look what I invented! I call it the wheel.."


Yep, feature comparisons, benchmarks (even if never perfect), and explanations of what this stuff is for are, well, useful.

But yep, also: this nginx page is a testimonial page. Testimonial pages bring very little value, as they're 100% subjective and usually just "fanboy" content. And the second half of your post is equivalent.

Personally I find Apache easier to configure, and with mpm-event it's as fast as nginx (albeit both are fine httpds, and nginx has cleaner code). I could also compare both to text editors.

But none of this brings valuable info to me. It's way too subjective, and you know how people get touchy-feely (and extremely subjective) when it's about their favorite brand/software/company/validation-that-what-they-use-is-the-current-cool-thing.


My favourite testimonial was a generic 'I like this product' that was signed 'anonymous'. Yes, the product is so good, I won't put my name to it. Maybe if the testimonial said something specific, but as it stood it was pretty laughable.


> Every single project, open source or not, needs to have a "Why Use It" page.

True, and while creating such a list, please also consider adding a short list of scenarios in which you would not recommend the use of the product, even if this may be a little "contra-marketing" and is probably even harder for the authors to come up with than the positive list.


I recently bought some leather goods from a company that has a page called "Our Competitors". It's a list of 20 companies who make the same categories of goods.

Nothing quite inspires confidence like that.


Interesting. I'm sure they gained some customers due to that confidence, but I wonder how many they lost.


If you are making those comparisons, you should also wonder about the quality of those customers. I.e., the people who stay with you anyway are probably profitable customers.


Not just a "why use it" page, but a "what the fuck it is" page.

The number of projects I've encountered which don't even have this .... Words fail me.


The number of projects I've encountered on HN which don't even have this...

Maybe it's because I'm not a developer or working at a startup, but I get majorly annoyed whenever I click on some "Show HN" link only to be presented with a landing page on some unheard of web site that contains a few nice images and a text box wanting me to give up my e-mail address before I can even see what the application looks like or, in many cases, before I even know what the hell it is or is supposed to do!

(On a side note, that may be the longest sentence I've ever written.)

> SIGN UP NOW!

For what!?


> Every single project, open source or not, needs to have a "Why Use It" page.

Let me expand your comment (with your permission) to make it more general.

"Every project, open source or not, needs to be run with all the branches of a proper organization"

Let me flesh it out with an example: some time ago there was a flame war here and on proggit about criticism of C++. Some arguments and counter-arguments (http://www.reddit.com/r/programming/comments/197dn1/introduc...) got me thinking (have mercy, o' HN C++ overlords):

1. "Problems are due to power and flexibility" -> Design: build simple and tight systems, or get smart enough to build tight complex systems.

2. "clang is fixing template error messages" -> Execution: reasonable to demand this be done before standardization.

3. "This is an implementation problem" -> Management apathy: passing the buck.

4. "C++ is not one language but actually many different ones" -> Marketing: positioning.

etc

There's no reason open-source projects should be "good code, bad at everything else".


One thing I still don't understand is why one would use a proxy server at all?

Why not just have your load balancers (which can operate cheaply at the TCP layer) throw traffic directly at your application servers?

If you need caching, that's cheap to do, too. If you need static file serving, can't you add another load balancer endpoint that points directly at static content servers, or make your application servers faster?

Is nginx primarily useful for slow application server runtimes that can't keep up with what nginx can do?


It's a good question. We (a large-scale website serving 250,000 pages/day) use Python+CherryPy for our "application server", but that's sitting behind an nginx reverse proxy.

The main reason is that nginx is much better and faster at handling certain things than Python:

* handling HTTPS and serving plain old HTTP to the application server so Python doesn't have to worry about it

* doing the gzipping of content before it goes out

* routing requests to different places/ports based on various elements matched in the URL or HTTP headers

* virtual hosts, i.e., "Host" header matching and routing things to the right place based on that

* various request sanitization, like setting client_max_body_size, ignore_invalid_headers, timeouts, etc.

Historically we've also had multiple types of application servers, some Python and some C++, and nginx routes requests to the right app server (based mainly on URL prefix).

We also use nginx to do GeoIP with the GeoIP nginx module (though arguably that would be just as simple in Python).

Edit: Note that we don't use it because our "application server is slow" (it's not). Also, I know some people use nginx to serve static content, because it's usually much faster/better than say Python at doing that -- we serve static content via Amazon S3 and a CDN, so that's a non-issue for us.
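The items above map to a handful of nginx directives; here is a minimal sketch of that kind of front-end config (hostnames, ports, and paths are hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;                # "Host" header matching

    ssl_certificate     /etc/ssl/example.crt;   # nginx terminates HTTPS...
    ssl_certificate_key /etc/ssl/example.key;   # ...the app server sees plain HTTP

    gzip on;                                    # gzip responses on the way out
    gzip_types text/css application/json application/javascript;

    client_max_body_size   10m;                 # request sanitization
    ignore_invalid_headers on;

    location /api/ {
        proxy_pass http://127.0.0.1:8081;       # e.g. a C++ app server
    }
    location / {
        proxy_pass http://127.0.0.1:8080;       # e.g. the Python app server
        proxy_set_header Host $host;
    }
}
```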


I'm curious to know: did you consider Varnish? It's much faster as a reverse-proxy caching server.


No, it isn't. It claims to be faster than squid, not nginx. But it doesn't even meet that claim.


Proxy servers aren't just useful for load balancing. I'd say that this isn't the average use case for Internet-facing HTTP reverse proxies at all (people get all worked up by the big boys, but that's not what most Internet applications are about).

In most cases reverse proxies are useful as application firewalls, where you control what passes and what doesn't in an application-independent way (i.e. your systems administrators can do this without the need to touch applications - which in many enterprise settings can't be easily touched by developers, much less by operations).

This is why I have yet to use nginx, and stick with good old Apache. Apache is extremely configurable: not only is mod_rewrite very powerful, but you can insert your own request-mangling scripts for that weird edge case.

Apache is also good enough for most cases. For applications with a few thousand users hanging on the site all day, apache can handle it just fine in a 5-year old, low-end Celeron rack server, with just 5% CPU usage even with all connections being SSL for both Internet and application server traffic.

Caching and load-balancing are nice things to have, but not the reason for having a reverse proxy in most cases.


For my hobby programming server, I primarily use it to serve different web applications on different domains. The server is a proxy to web applications that listen on different ports on the server. This allows me to multiplex different web servers and relieves me of finding modules that glue together nginx and various programming languages.


I think one response that's missing here is that application servers might use a lot of RAM while they're running, so you can only afford to have a few instances running. In such cases, it's better to have nginx buffer the response and deal with the slow clients, than holding up an application server for the duration of the transfer.
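That buffering behavior is configurable; roughly like this (sizes are hypothetical, and defaults differ by version):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_buffering on;        # nginx absorbs the response from the app server
    proxy_buffers 8 64k;       # in-memory buffers per connection
    # overflow spills to temp files, so the app server is freed immediately
    # while nginx trickles the bytes out to slow clients
}
```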


Another reason people haven't considered here is that nginx can display error pages for you when your application server crashes, or is taking too long to serve requests. This is much nicer for the user than a browser error.


An HTTP level load balancer like nginx can do session affinity, which is helpful when your application server isn't stateless.

I guess a TCP load balancer could do them based on IP as well, but IPs don't identify sessions as well as cookies can.
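In open-source nginx the built-in form of session affinity is IP-based, via `ip_hash` in an upstream block (cookie-based stickiness needs a third-party module or the commercial version); the backend addresses here are hypothetical:

```nginx
upstream app_servers {
    ip_hash;                   # same client IP always hits the same backend
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```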


As can haproxy or any other L7 load balancer.


Sure, but those aren't TCP load balancers (at least, not if you enabled that functionality), since they work at the HTTP level. I wasn't claiming nginx was the only way to do session affinity, just that it's a reason to use it over a TCP load balancer.


nginx can do a ton of things with modules. One of the reasons I use it is because I can pattern match incoming URLs and send them to different servers based on some pretty fancy routing rules.

Also, there are many that use nginx as the load balancer (including me). Would a TCP load balancer be better? Possibly but it also comes at an additional cost whereas nginx is free and far more flexible with routing HTTP content.

I'm not even going to summarize all the other things nginx can do via modules (Lua, memcached, etc -- http://wiki.nginx.org/Modules).
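The URL pattern matching mentioned above is done with `location` blocks, including regex forms; a sketch (backends and paths are hypothetical):

```nginx
# prefix and regex locations route requests to different places
location ~ ^/api/v[0-9]+/ {
    proxy_pass http://10.0.0.20:8080;     # versioned API to one backend
}
location ~* \.(png|jpe?g|gif|css|js)$ {
    root /var/www/static;                 # static assets served directly
}
location / {
    proxy_pass http://10.0.0.30:8080;     # everything else
}
```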


Get a cheap load balancer that can make better decisions because it's operating at the HTTP layer... oh, nginx! :-)


A big gotcha with nginx: if you have an app server behind it and you foolishly have a long-running web request that runs longer than the proxy timeout, nginx will retry the original request. Better make sure everything is idempotent, and don't have long-running web processes. It is bad design, but we ran into this. Code that used to run in a few seconds started taking longer, then ran infinitely long without an error, because it kept getting resent to the server.


Oh really, does it do it for POSTs? It should retry for GET, but not POSTs (which is exactly why there is a difference).

If you have non-idempotent GETs in your app, then that's the app's fault. If Nginx is retrying POSTs then it's Nginx's fault.


It definitely did it for POST. That is what surprised us. We had a lot of users added, and as I said, the few-second operation started taking much longer; we discovered their spreadsheet upload kept getting resent again and again because it kept timing out. Was not fun.

The "feature" is 'proxy_next_upstream'. We spent time cursing this Nginx definition of a feature.

http://serverfault.com/questions/51320/setting-up-nginx-to-n...
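For anyone hitting the same thing: `proxy_next_upstream` is the directive that controls this, and it can be restricted or disabled entirely (check the docs for your nginx version; this is only a sketch, and the upstream name is hypothetical):

```nginx
location / {
    proxy_pass http://app_servers;

    # never re-send a request to another backend; a timeout becomes
    # an error to the client instead of a silent retry
    proxy_next_upstream off;

    # alternatively, retry only on connection errors, which occur
    # before the request reaches the app:
    # proxy_next_upstream error;

    proxy_read_timeout 300s;   # and give long requests room to finish
}
```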


Bit us in the behind as well, with a long-running report. I feel your pain.


That's a pretty broad statement to make... You are assuming that only "your app" exists out there.

Actually, many times "your app" is somebody else's app that you bought or an app that somebody else develops and you don't have any control over it. Sometimes those apps are just bad (well, most enterprise apps are) and your only hope is that the infrastructure that you do happen to control doesn't make it worse.

I've never used nginx, so I don't know if the parent has a point or not. However, having a reverse proxy retrying _any_ requests to backends by default seems very bad form to me. Do you want your routers resending packets? It's the same thing.


If you have an app that completely ignores the HTTP specification like that then you could well have bigger problems.

I recall reading about a very popular image website doing exactly that and, when Google attempted to index it, some strange things happened to the images as the web crawler chased down all those "safe" links.


> However, having a reverse proxy retrying _any_ requests to backends by default seems very bad form to me. Do you want your routers resending packets? It's the same thing.

No: HTTP GET is explicitly idempotent and cacheable. As an HTTP client, you are supposed to be able to send the same GET all day, and it's up to the server to not screw that up.


I'm not saying that it's a standards violation or anything like that. I'm not even arguing against GET retries, but only against them being default.

Bad behavior from apps you don't control is a fact of life. Ignoring it doesn't make it go away, and behaving like it doesn't exist can make it worse.


I would argue that relying on the behavior allowed by a standard is the only way to make it actually be allowed. Otherwise, people code to your unwritten, stricter standard, relying on nobody else ever sending them something that is allowed. Then, when something allowed by the standard does happen, they blow up.

But once you introduce other software in the ecosystem that is guaranteed to send you these sorts of things, you'll damn well better release a new version of your package that works with them.


I agree with that. Automatically resending requests is not good. Better to simply time out. And if there are enough timeouts that it's causing a problem, the functionality should be restructured.


Though I do use nginx and am very happy with it, I am somewhat put off by the fact that I reported a bug five months ago (including a working patch for the problem) and no one seems to have as much as looked at it [1]. (Granted, this is in a module, not in the core server, but in general the community process for the project seems messy and vague.)

[1]: http://trac.nginx.org/nginx/ticket/242


Yes, I agree - nginx's community feels a little strange, at least to someone who hasn't tried to really get involved. Apache, by contrast, is a huge, loud, unruly crowd. Whereas the nginx author seems to be one of those quiet, aloof, l337 h4x0r types. Which is consistent with software that is a) really fast and b) not responsive to change requests. :)


I think nginx is likely faster and more stable than apache, but I have yet to see anything close to a trustworthy benchmark.

I come from the PHP world, and people always say how much lower nginx+php-fpm's memory usage is than apache+mod_php's. Well, no doubt! If you understand how the architecture actually works, it's clear this isn't a fair comparison: mod_php means PHP is fully loaded even for serving statics, not to mention that a smaller pool of php-fpm processes will take less memory and also be faster (due to less context switching) than the larger number of mod_php processes.

However the real comparison should be between nginx+php-fpm and apache+mpm_event+php-fpm. Nginx is an evented server, so at least try to compare apples-to-apples. I've seen very few comparisons of nginx with mpm_event.

Also, apache's default tunings are much more geared towards modest server usage whereas nginx's seem more geared towards high scalability. An argument could be made that apache should have "better" defaults, but since at scale you need to start tweaking your OS/rlimit/etc to prevent bad things from happening you can see why apache might stick with more modest tunings ootb.

Our app has a lot of custom Apache config, so I was hesitant to switch to nginx due to the risk of getting things wrong porting the configs. We did move from apache/mod_php to apache+mpm_worker and php-fpm, and we've been able to improve throughput (especially on statics) at a far lower memory footprint. Key to lowering the memory footprint was dropping ThreadStackSize (from the 8M default to 1M). What a difference!

Other than that, the competition is good for everyone. I am sure nginx pushed apache to work on mpm_event much harder.


You may want to take a look at http://www.eschrade.com/page/why-is-fastcgi-w-nginx-so-much-.... The answer might surprise you.

As for ThreadStackSize: it impacts virtual memory but not actual memory usage. Actual memory usage stays the same. You should never use the 'vm size' as a good measurement of memory usage. Unfortunately memory management on modern OSes is complicated and people don't understand the numbers, so they arbitrarily pick a column in 'ps' and conclude that X is bloating memory... :(


That link about AllowOverride is true. The reason I didn't mention it is that one of the benchmarks I saw did turn off AllowOverride so I figured at least that part was fair :) But it is a very good point. I think I'll do a talk soon about tuning apache and make sure that's in there. Optimizing with strace is always really fun. I used it pretty heavily when I was researching php/apc and require/require_once. It's amazing how much faster you can get if you implement things to not talk to the disk 20x on every request :)

I definitely know VIRT is complicated. I couldn't find any kind of clarity on it. If you know of a good guide I'd love to see it.

That said, virtual memory still likely affects some kernel decision-making. For instance the oom-killer was kicking in on a daily basis until I made these changes. With mpm_worker using 250+ threads, I was able to reduce the "committed" by several gigs. The system overall seems more stable and the oom-killer hasn't reared its head in days.

I can imagine that the stack is treated differently since it'd be a terrible idea to page out stack. I couldn't find proof, but if I were a kernel I wouldn't page out stack :)


I dropped Apache in favor of Nginx about two years ago. Haven't looked back since then. It's so much easier to configure and it uses far less memory.


>it uses far less memory

One of the main selling points for me. Especially for personal projects, I can go with a more affordable vps plan by using nginx instead of apache.


I am using lighty for that reason. Can anyone compare lighttpd with nginx performance on small servers like Pis?


I dropped Lighttpd years ago because on some occasions it used 100% CPU for no reason. It didn't make the server crash, and Lighttpd itself appeared to run fine otherwise, but still... the CPU usage was there for no reason. This was never solved, and development also seemed stalled. So I switched away from Lighttpd to Nginx. Nginx just kept working and working, never broke once.


One major drawback is that you can't control output buffering and gzip with PHP-FPM (the LAMP stack equivalent). You cannot flush the head of the page early; the user has to wait until the whole page loads before rendering.


The latest version got much better at handling streams, if I remember correctly.


True. Correct me if I am wrong, but I am not sure streams work for general text pages (like blogs, eCommerce, etc.). It's better suited for chat-style applications (comet).


I'm a newbie: if I install Passenger to be able to run Rails apps on Nginx, are these benefits lost?

Better yet: what exactly is Passenger? (Explain it like I'm five.)

Their site says, "Phusion Passenger is an application server for Ruby (Rack) and Python (WSGI) apps." - so it's something that runs below Nginx and runs Ruby code?

Or is it an extension for Nginx/Apache?

Thanks!


Phusion Passenger extends Nginx and turns it into an application server. An application server is a program that runs application code, so in this case it allows Nginx to run Ruby/Python code. Likewise, the Apache version of Phusion Passenger turns Apache into an app server that can run Ruby/Python code.

The benefits are not lost. Phusion Passenger integrates into Nginx to give you the benefits of both. For example one of the tasks of Nginx is to buffer HTTP requests and responses in order to protect apps from slow HTTP connections. Phusion Passenger fully makes use of this Nginx feature and even extends it.


> if I install Passenger to be able to run Rails apps on Nginx, are these benefits lost?

No, but you will see the benefits only at a high req/min, because with Apache it would have to spawn new threads to deal with so many requests, taking up memory and other resources, while nginx just sends the requests directly to the Passenger instances as needed. Basically, nginx would use less memory.

> What exactly is Passenger?

Passenger is a module that understands and handles Rails requests. So when a request comes in to nginx for an image, CSS, or other static asset, nginx will find it and send it back; when the request is for a Rails controller/action, it will send it to a Passenger instance to process and then return the results to you.

hope that helps


Nginx is great, but before you get down and start using it, make certain that you'll never, ever use any features it doesn't support. I was bitten by this when I found out Nginx has no equivalent to Apache's mpm_itk_module.


Absolutely correct. Use Nginx because it's small and fast. But don't use it because it's fully featured, because compared to Apache it's not. But that doesn't have to be a problem. I use Nginx by default, and on the occasions that I need an Apache feature I just reverse proxy the virtual host from Nginx to Apache.
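That per-vhost fallback is just an ordinary reverse-proxy block, with Apache bound to a local port (the port and names here are hypothetical):

```nginx
server {
    server_name legacy.example.com;        # the vhost that needs Apache features
    location / {
        proxy_pass http://127.0.0.1:8080;  # Apache listening on localhost only
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```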


What does that mod do out of curiosity?


"mpm-itk allows you to run each of your vhost under a separate uid and gid—in short, the scripts and configuration files for one vhost no longer have to be readable for all the other vhosts"

- http://mpm-itk.sesse.net/


I don't quite understand why that would be necessary. Nginx has no business accessing other users' files in the first place.

I mean, I understand why Apache needs to do it: with Apache, you have things like mod_php running in-process, so it makes sense to restrict Apache, running one of Bob's scripts, from accessing Alice's files.

But with Nginx, anything with "intelligence" runs out-of-process. What Nginx expects you to do is to run it as one user, but run each app server (in PHP terms, each FCGI socket daemon) as the user whose files that server should access. (Or, better yet, run the app server in an LXC container along with a bind mount to only the files it needs to access. Very Plan9y.)
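A sketch of that layout on the nginx side, with one PHP-FPM pool per user and nginx (running as its own unprivileged user) talking to per-user sockets (paths and names hypothetical):

```nginx
server {
    server_name alice.example.com;
    root /home/alice/www;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/alice.sock;  # pool runs as user "alice"
    }
}

server {
    server_name bob.example.com;
    root /home/bob/www;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/bob.sock;    # pool runs as user "bob"
    }
}
```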


It may still be a good idea for security reasons. Suppose that an Nginx process is exploited. If it runs under a certain user ID then the exploit likely does not travel past that user. You can make the Nginx user ID different from the actual user ID so that it only has read access.


I don't care whether it's for anti-wrinkle cream or a web server; I find testimonials about as useful as that ball of lint in my belly button and as trustworthy as a used car salesman.

It's true many of the benchmarks out there use flawed methodologies, so let's try to fix that. When you benchmark something as complex as Apache, people are going to find faults with your initial run, no matter how careful you are. This is why you need to be completely transparent with your setup configuration and should be prepared for a follow-up run with user-suggested settings.


Can someone give me a small comparison between nginx and HAProxy? It seems like they're starting to overlap a lot. I'm really excited that nginx added Websocket (including SSL termination) support.


I don't think you can compare them


Nginx is the most reliable workhorse I've used in nearly any tech stack. Doesn't matter how much traffic we throw at it, it just keeps on kicking. It's very much the 'Redis' of proxy servers.


Given that nginx is almost twice as old as redis, shouldn't that analogy rather be "redis is the nginx of databases"? ;)



I used Apache exclusively for the last 3 years, until just a few months ago when I set up another server to host a number of sites I wanted to move off Apache and onto Nginx. After having used both (and trying real hard here not to start a religious war, as often happens in these kinds of discussions), I have to say neither is "better" overall or in general, in my experience.

If you're familiar with Apache configuration then you should have no problem with Nginx, because the way both servers structure their config files is very similar. I prefer Nginx config files, however, because it feels more like writing JSON, whereas Apache config files are like writing XML, especially in the area of virtual hosts. That said, neither is better; it's really more about what you're comfortable with and prefer. Nginx has most of the same configuration options, and the tough part was figuring out what Nginx calls the corresponding Apache option.

For me there was a barely noticeable performance difference, with Nginx being faster. The caveat here is that in my case I started moving all of my static sites and sites with "simple" PHP-script-type apps over to Nginx, and used the Apache server for a very few more memory- and CPU-intensive apps. The Nginx server was also new and clean, while the Apache server had been in use for a great many more things, including non-web applications and managing private git repos for about 20 code bases.

Nginx did use about 25% less memory in my case than apache even while serving up more sites.

I love being able to host multiple SSL sites on a single IP with no hoops to jump through with Nginx. On Apache your options are to acquire more IPs or set up SNI which for me was more hassle than it was worth.

So since we're on HN, I'm assuming most people are serving Ruby or Python based apps with Nginx or using it as a reverse proxy. That's cool and all, but there's still an enormous cross section of the developer community using it to serve PHP, and as I have a lot of sites that were originally built in PHP, my Nginx server needed PHP via FastCGI.

I mentioned earlier that configuring Nginx was a breeze since most of my Apache knowledge transferred over, but setting up FastCGI was not a breeze. It's easy to get set up and working, but actually understanding what it's doing, and whether you really do want to configure it the way whatever online guide shows you, is the tough part. On Apache you'd just install mod_php5, 'a2enmod' it, and then all you need to worry about is your php.ini file. On Nginx you have the added step of adding a config block to each server block for PHP.

That's easy enough to get the gist of, but then you start wondering if you've made the right decision after you read those warnings about improperly setting it up leading to security holes with file uploads, and then you start wondering what other options you should know about, whether you should implement them, etc. Maybe I'm totally misguided here, but with mod_php you didn't worry about security. You only worried about the security of your actual code, the server itself (firewalls, ssh, port blocking and all that), and your .ini's. So that was a downside for me, but not insurmountable by any means.

It's also far easier to find information on apache than Nginx. Nginx has a ton of available support and articles and tutorials out there but most of them cover the same narrow section of topics and are contradictory sometimes. The Nginx wiki itself even has warnings about getting advice from outside the wiki in the Pitfalls section. Of course you need to be careful when sourcing information from the web no matter what the topic but I felt more secure in searching for Apache information than Nginx information.

I really love Nginx, though. It can take quite a beating without even batting an eyelash, as I've seen. That said, I'll still be using Apache as my "workhorse" server for some time, until I can get more Nginx experience under my belt. So I'd say take these testimonials for what they are: just testimonials. True or not, any piece of software worth using can get people to rave about it. What's important is whether their situations, expectations, and needs align with your own.


> I love being able to host multiple SSL sites on a single IP with no hoops to jump through with Nginx. On Apache your options are to acquire more IPs or set up SNI which for me was more hassle than it was worth.

What? Neither Apache nor nginx can serve multiple SSL sites off a single IP without a UCC certificate or SNI; otherwise you need multiple IPs. SSL requests have their Host header encrypted, which means the server has to pick which SSL certificate to present before it can read which host the client wants.


Adding one more to that list: you can host different domains on different ports (but your load balancer will need to, for example, direct :443 to :8443). This is common practice for people who use Amazon's ELB, for example.


Nginx can (and according to Wikipedia, Apache can too); I had been doing so until my old certificate expired. It's called SNI (https://en.wikipedia.org/wiki/Server_Name_Indication), and if your OpenSSL version has support compiled in, it works without any additional configuration (beyond just normally specifying the correct certificate and key for the correct server).
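With SNI available, the nginx side is simply multiple `server` blocks on the same IP and port, each with its own certificate (names and paths hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name site-a.example.com;
    ssl_certificate     /etc/ssl/site-a.crt;
    ssl_certificate_key /etc/ssl/site-a.key;
}
server {
    listen 443 ssl;                  # same IP and port
    server_name site-b.example.com;  # selected via the client's SNI extension
    ssl_certificate     /etc/ssl/site-b.crt;
    ssl_certificate_key /etc/ssl/site-b.key;
}
```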


Yes, that's why I mentioned SNI in my post.


SNI has the problem that it's not supported on all browsers or operating systems, so even if it worked without configuration it doesn't solve the problem for most use cases. Assuming multiple subdomains on the same domain the only viable alternative to multiple IPs is to use a wildcard certificate.


My mistake. I was always told Nginx can serve multiple SSL sites with no extra work required besides the usual configuration you'd change for a single SSL site.


So you said you "love being able" to do something that you have not tried (since you didn't know it was impossible you must not have tried it)? I think you should evaluate systems based on how they work for your use case, not by how they might work if you wanted to do something later.


It can; it's just using SNI without explicitly asking you to enable anything.


You don't need FastCGI to serve PHP through nginx. You can - and probably should - just use nginx as a reverse proxy in front of a PHP-enabled Apache. Best of both worlds.


> Best of both worlds

How is this best of both worlds?! You add one more layer where things can go wrong, for mostly no benefit nowadays. Now people need to know how to properly configure and secure two pieces of technology that do the same things. I know this is the most popular setup nowadays for new PHP deployments, and this is why I hate it so much!

Why does everyone have to complect everything, from software to devops, nowadays?! If you start with something like this and apply the same mindset of "just the best tool for the job, nothing else, no matter how many tools you use" to everything from devops to frameworks to your app, you'll end up sooner or later with something that makes ASP.NET and IIS setups, or even Spring-based enterprisey ones, look like a breath of fresh air!

Better to spend some extra effort to make your PHP app work great on the nginx+PHP setup. My advice for anybody doing PHP: remember why you are using PHP and not something else in the first place! Because you want everything to be drop-dead, no-think simple, from devops to framework to app. Because this way you can concentrate 90% of your effort on front-end, design, and PR (and it's not necessarily bad if this is how your business should work)!


I've been using Apache behind Nginx for years, and not once has it gone wrong. HTTP reverse proxying is not something overly complicated that can go wrong often.

How is it best of both worlds? Nginx simply does not have all the features Apache has. Take a look at the .htaccess rules that Wordpress and WP-Supercache need. It's much easier to run Wordpress on Apache and put that behind Nginx than to figure out how to rewrite the rules into Nginx form (or worse, update the rules when they change). There's also a ton of Apache modules that have no Nginx equivalent, or no easy Nginx equivalent.

Why I use PHP? Because some apps don't have good Ruby or Python equivalents. After all these years I still haven't found anything better than Wordpress and phpMyAdmin. If it's my own code, sure, I'll write Ruby, but I'm talking about apps.

Putting Nginx before Apache allows it to protect Apache from slow clients because Nginx buffers requests and responses.
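For what it's worth, the core WordPress permalink rewrite does have a well-known nginx form (a sketch; cache plugins like WP-Supercache need considerably more):

```nginx
location / {
    # serve the file or directory if it exists,
    # otherwise hand the request to WordPress's front controller
    try_files $uri $uri/ /index.php?$args;
}
```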


FWIW, I routinely run WP + W3 Total Cache on nginx. Sure, their config files were a little interesting to set up the first time, but now it's smooth sailing. It's only easier on Apache because the .htaccess files are readily available and ship with the product/plugin. Nginx takes a little googling to get started.


Nginx+PHP-FPM blows the doors off of Apache+mod_php. Why "should" you do this?


There is a particular use case where output buffering worked better on Apache/mod_php, but I believe it's been fixed.


Unless I'm missing something, you only need a version of Apache that supports SNI, and then a NameVirtualHost for your https port. If your OS/distro doesn't make that version easily available, then I can see how it might take extra work to get SNI support.


What I should have said was that I've been too lazy (like to an extreme degree) to host multiple SSL sites using SNI. Just buying another IP was easier for me. It's probably stupid to do that but that's just me.


Actually doing SNI can be the stupid solution. We still have a large number of customers on Windows XP and we probably will for years to come. SNI isn't supported on any IE browser running on XP, so you need to buy the extra IPs anyway.


You can make your PHP/FastCGI directives all part of an include - then you don't have to edit a file for each vhost.
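e.g. a shared snippet included from every vhost (the filename is hypothetical):

```nginx
# /etc/nginx/snippets/php.conf -- included by each server block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm/www.sock;
}
```

Each vhost then just adds `include /etc/nginx/snippets/php.conf;` to its server block.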


I came across Nginx for the first time today, when I was trying to figure out how to make 'cleaner' URLs for the wiki I'm making. Totally going over my head and out of my comfort zone… what a coincidence this article pops up on HN the same day… maybe it's a sign I need to figure this Nginx thing out.

Thanks for the link!


nginx has data structures that scale sub-linearly with the number of requests. This is desirable. Others usually don't do things this way.


Fake, these guys are corrupt and the benchmarks are rigged :-P

</joke>

Long live Nginx!


Why not?


Yawn!

nginx's SSI capability is pretty bare currently. However, it is an excellent reverse proxy for me.



