
Battle-ready Nginx – an optimization guide - funkenstein
http://blog.zachorr.com/nginx-setup/
======
BobVerg
What's the purpose of the article if you can find the same information in the
documentation at nginx.org/en/docs/?

And, btw, you are giving bad advice. You are wrong here: "By default, nginx
sets our keep-alive timeout to 75s (in this config, we drop it down to 10s),
which means, without changing the default, we can handle ~14 connections per
second. Our config will allow us to handle ~102 users per second."

No, keepalive connections don't limit nginx in any way. Nginx closes
keepalive connections when it reaches its connection limit.
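To make the distinction concrete, here is a minimal sketch of the directives involved (the 10s value is the one the article uses; the actual ceiling is `worker_connections`, not the keepalive timeout):

```nginx
# keepalive_timeout only controls how long an idle connection is held open;
# it does not cap request throughput. The per-worker ceiling is
# worker_connections, and nginx drops idle keepalive connections as it
# approaches that limit.
events {
    worker_connections 1024;
}
http {
    keepalive_timeout 10;   # value from the article's config
}
```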

"gzip_comp_level sets the compression level on our data. These levels can be
anywhere from 1-9, 9 being the slowest but most compressed. We’ll set it to 6,
which is a good middle ground."

No, it's not a "middle ground". It kills your server's performance. With 6 you
will get 5-10% better compression, but at twice the cost in speed.
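If CPU cost is the concern, a lower level is the usual compromise. A sketch using the standard gzip module directives (the exact level is workload-dependent, so benchmark on your own content):

```nginx
gzip on;
# Levels 1-2 capture most of the size win at a fraction of the CPU cost;
# higher levels trade sharply diminishing compression gains for more CPU.
gzip_comp_level 2;
gzip_types text/plain text/css application/json application/javascript;
```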

"use epoll;"

What's the purpose of this? The docs say: "There is normally no need to
specify it explicitly, because nginx will by default use the most efficient
method."

"multi_accept tells nginx to accept as many connections as possible after
getting a notification about a new connection. If worker_connections is set
too low, you may end up flooding your worker connections. "

No, you have completely misunderstood this directive. It isn't related to
worker_connections at all.
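For reference, what the directive actually controls, per the nginx core module docs: it changes how many pending connections a worker accepts per event-loop wakeup, not how many it may hold.

```nginx
events {
    # off (default): accept one new connection per event notification.
    # on: drain the whole accept queue in one go, which can add latency
    # under load; leave it off unless measurements say otherwise.
    multi_accept off;
}
```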

~~~
BobVerg
And even more:

"send_timeout 2;" Mobile clients from another continent will "thank you" for
this setting when they cannot open your site.

"error_log /var/log/nginx/error.log crit;" A way to be unaware when something
is wrong with your server. Nginx produces not only "crit" errors, but a bunch
of very useful warnings that need attention.

"limit_conn addr 10;" Chrome and Firefox usually open more than 10
connections. And btw, have you ever heard of NAT?
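Taking these three objections together, a more forgiving sketch might look like this (the values are illustrative, not tuned for any particular site):

```nginx
# Give slow or far-away clients a realistic window to read the response.
send_timeout 30;

# Log warnings too, not just critical errors, so problems surface early.
error_log /var/log/nginx/error.log warn;

# Budget for modern browsers (~6 connections each) plus several users
# sharing one address behind NAT.
limit_conn addr 50;
```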

"Most browsers will open up 2 connections" 15 years ago this was true.

~~~
rimantas

    > Chrome and Firefox usually open more than 10 connections.
According to browserscope.org both browsers open only 6 connections per
hostname.

~~~
bjt
For http connections that's true. Websockets have a separate pool though, and
a much higher cap (200 in Firefox). Nginx recently added websocket support.
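nginx proxies WebSockets by forwarding the HTTP/1.1 Upgrade handshake; a minimal sketch (the upstream name and timeout are illustrative):

```nginx
location /ws/ {
    proxy_pass http://backend;          # hypothetical upstream
    proxy_http_version 1.1;             # the Upgrade handshake needs HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 300s;            # idle sockets are closed after this
}
```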

------
tszming
For anyone who is interested in nginx tuning, please follow the H5BP nginx
repo: [https://github.com/h5bp/server-configs-
nginx](https://github.com/h5bp/server-configs-nginx), which is very well
documented already and still being maintained.

~~~
sbarre
This post was worth it just for me to discover that this exists! Thank you!

~~~
rb2e
I have to agree. If only I had known about this repo back when I had a VPS.
The comments for each option are well explained. The Nginx help docs are
helpful, but sometimes it's nice to see a more detailed approach, even though
not every option will be right for "your" circumstances.

------
l_perrin
Good introduction to nginx. However, the guide states: "Keep in mind that the
maximum number of clients is also limited by the number of socket connections
available on your system (~64k)".

This is incorrect. The system can open ~64k connections per [src ip, dst ip]
pair. In the case of a webserver listening on just 1 port, it means you can
open 64k connections per remote IP, which is why some people can write about
how they handle a million connections on a single server.
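A back-of-the-envelope version of the point above: the 16-bit port field caps connections per remote IP, not per server (the client count here is an arbitrary example):

```python
# A TCP connection is identified by (src ip, src port, dst ip, dst port).
# With one listening port, only the remote source port varies per client IP,
# so the ~64k cap applies per remote IP rather than globally.
ports_per_remote_ip = 2 ** 16      # 16-bit source port field
remote_ips = 20                    # hypothetical number of client IPs
print(ports_per_remote_ip * remote_ips)  # upper bound on inbound connections
```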

~~~
lampington
That's true for incoming connections, but if you're proxying back to something
else then the limit does apply to the outgoing ones.

~~~
dialtone
Only if you don't use HTTP/1.1 on the proxy side:

    proxy_http_version 1.1;
    proxy_set_header Connection "";

in the proxy definition.
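For those two directives to actually reuse connections, the upstream block also needs a keepalive pool (per the ngx_http_upstream_module docs; the upstream name and sizes are illustrative):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;                   # idle connections kept open per worker
}
server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;     # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";
    }
}
```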

------
Volscio
Also useful for nginx: adding the pagespeed module
[https://github.com/pagespeed/ngx_pagespeed](https://github.com/pagespeed/ngx_pagespeed)

"ngx_pagespeed speeds up your site and reduces page load time by automatically
applying web performance best practices to pages and associated assets (CSS,
JavaScript, images) without requiring you to modify your existing content or
workflow."

~~~
d0ugie
pagespeed is impressively helpful, and the nginx support announcement was
great news (a tipping point for me). I just wish I had known going in that I
had to compile from source for SPDY instead of using apt.

------
killercup
I'd like to add that using [gzip_static][1] might also be a good idea since
nginx doesn't have to gzip your files over and over again and you can gzip the
files yourself with the highest compression possible (reducing file size).

[1]:
[http://nginx.org/en/docs/http/ngx_http_gzip_static_module.ht...](http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html)
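A sketch of how that fits together (paths are illustrative): precompress at build time, then let nginx serve the .gz files directly:

```nginx
# With gzip_static on, nginx serves foo.css.gz for a request for foo.css
# (if the file exists and the client accepts gzip) instead of compressing
# on the fly -- so you can precompress once at maximum level, e.g.:
#   gzip -9 -k /var/www/static/foo.css
location /static/ {
    gzip_static on;
}
```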

------
ElongatedTowel
"Chances are your OS and nginx can handle more than “ulimit -a” will report,
so we’ll set this high so nginx will never have an issue with “too many open
files”"

If the limit is a hard limit it doesn't really matter what nginx decides to
do, does it? I had to increase the limit by hand, outside of nginx.
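A quick way to check what you are actually allowed before touching nginx (sketch; when the nginx master runs as root it can usually raise the workers' limit itself via `worker_rlimit_nofile`, otherwise the hard limit wins):

```shell
# Soft limit: what processes start with; raisable up to the hard limit.
ulimit -Sn
# Hard limit: the ceiling; only root can raise it (e.g. via limits.conf).
ulimit -Hn
```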

------
rb2e
I would love to see some before and after in the wild stats using this
configuration. Whilst it would be an apples versus oranges comparison, it
would at least show that this config works compared to the default. Maybe a
Blitz.io rush test?

------
kbuck
If you set an application to use more file descriptors than ulimit -n returns,
then either the application will be smart and fix its configuration by using
MIN(configured limit, ulimit -n), or it'll start dropping requests because it
assumes it's allowed to open more file descriptors.

Increasing an application's maximum file descriptors past ulimit -n is bad
advice. The proper way is to increase the limit in /etc/security/limits.conf
(note that assigning a limit to * applies it to every user but root, so if you
really want to assign a limit to every user, you must assign it to both * and
root) and then increase the application's max file descriptors. Restarting the
application is usually required, although on newer versions of Linux, changing
limits for running processes is possible.
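Sketch of the order of operations described above (the 65536 figure is an example, not a recommendation):

```
# /etc/security/limits.conf -- raise the OS limit first.
# Remember: '*' does not cover root, so set both.
*     soft  nofile  65536
*     hard  nofile  65536
root  soft  nofile  65536
root  hard  nofile  65536
```

Then, and only then, raise the application's own ceiling to match:

```nginx
# nginx.conf -- tell nginx it may use the raised limit
worker_rlimit_nofile 65536;
```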

------
adwf
You can also use "sudo service nginx reload" instead of restarting. Helps if
it's in use and you don't want to drop any active users.

------
bifrost
My favorite comment from this whole blog: "(warning, a neckbeard and an
operating systems course might be needed to understand everything)"

That's actually true of a fair amount of what people fiddle around with. I
see a lot of tuning advice based on what I can only assume is guessing. I
guess this is as good a "caveat emptor" as anything.

------
vvoyer
I would love to see optimization guides with actual benchmarking.

It's like saying `for(var i=..` is faster than `.forEach` without giving any
numbers.

Always test for performance; do not blindly follow guides or copy-paste
configuration files into your web server.
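In that spirit, even the for/forEach claim is one measurement away. A Node.js sketch (absolute numbers vary by engine, array size, and workload, which is exactly the point):

```javascript
// Micro-benchmark sketch: a plain for loop vs Array.prototype.forEach.
const data = Array.from({ length: 1e6 }, (_, i) => i);

function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
}

let sum1 = 0;
timeIt("for loop", () => {
  for (let i = 0; i < data.length; i++) sum1 += data[i];
});

let sum2 = 0;
timeIt("forEach", () => {
  data.forEach((v) => { sum2 += v; });
});
```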

------
ericclemmons
I wish I could find a guide like this for Apache as well. Computing max
clients and other options seems like pure guesswork and constant failure =/

~~~
logicalmike
[https://github.com/h5bp/server-configs-
apache](https://github.com/h5bp/server-configs-apache)

~~~
ericclemmons
This isn't really 1-to-1 with the article. I meant something like max_clients,
max_requests_per_child, etc.

The best I know of is my co-workers' efforts here:
[https://github.com/genesis/wordpress/pull/64](https://github.com/genesis/wordpress/pull/64)

------
sergiotapia
Thank you for this write up. Out of sheer curiosity since I love benchmark
numbers, how many concurrent users do you think this config can handle?

------
noqqe
I wish I had read this post before my devnull-as-a-Service was on HN.

------
sigzero
You explain the "what" but not the "why".

------
calgaryeng
breif --> brief

