Bring down a poorly deployed WordPress (and how to stop it from happening) (jalada.co.uk)
26 points by jalada on Feb 5, 2011 | 15 comments

How to set MaxClients:

1 - restart apache

2 - run a handful of typical requests through it a few times (or just leave it live for a minute)

3 - look for the largest few workers by the RSS column in "ps"

4 - average that, then divide the result into the amount of memory you consider acceptable for Apache to use (e.g. 64 MB per worker into 512 MB available for Apache gives 8)

Voilà, that's your MaxClients. If it's greater than 256, congrats, you have more RAM than you need[1]; find something else to do with it.

The real trick is the moment you realize "oh, that's the exact same math I have to do with FastCGI worker processes in nginx": it's all the same.

[1] this statement will become untrue somewhere around 2013 when 128-core servers become cheap and standard
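The four steps above boil down to one division. Here's a worked sketch in shell using the 64 MB / 512 MB figures from the example (the `apache2` process name is an assumption; on some distros it's `httpd`):

```shell
# Step 3 in practice, to find the average worker size (Linux):
#   ps -C apache2 -o rss= | sort -rn | head -5
# Step 4, with the example numbers from the comment above:
avg_worker_mb=64        # average RSS of the largest Apache workers
apache_budget_mb=512    # memory you're willing to hand to Apache
max_clients=$((apache_budget_mb / avg_worker_mb))
echo "MaxClients $max_clients"    # prints: MaxClients 8
```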

There is an even easier way. Use nginx.

Agreed, nginx is nice (I'm using it to serve static content on my blog in front of Apache). But nginx requires additional setup to get WordPress working properly, in particular translating the mod_rewrite rules (especially if you also use WP Super Cache, or at least that's what it was like on lighttpd when I used that).
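For reference, the usual nginx equivalent of WordPress's .htaccess rewrite rules is fairly short. This is a hedged sketch; the PHP-FPM socket path is an assumption for illustration:

```nginx
# Route missing files through WordPress's front controller
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Hand PHP off to PHP-FPM (socket path varies by setup)
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```

Plugins like WP Super Cache need extra rules on top of this to serve their cached files directly.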

I don't consider WordPress to be server-agnostic: it's built for Apache, and it only works well on IIS because Microsoft made an effort to make that happen.

Agreed. I had to optimize my WordPress installation recently, including some caching. Since there isn't much information about nginx out there, I took some notes and wrote about it. Here is my blog post:


There's also this post from patio11 that, amongst other things, describes what he does to use nginx + WordPress.


Nice resource, I'll update my post with a link to that.

Our application runs an Apache MPM/Passenger/MySQL stack. We conduct real-time purchasing events with anywhere from 5-25 sellers bidding in an event that may have 200+ line items spread out over several lots. Using the "minimum viable product" philosophy, the bidder interface relies on AJAX polling at a 3 second interval to get updated information (yes, we have plans to move to something different [Socket.io, actually]). This means that our app gets hammered during an event. All of this runs within 1 GB of RAM on our VPS, and that includes 6 Passenger instances and MaxClients set to 150 (GASP!). Here's a sample of our passenger-memory-stats output during a large-ish event:

    --------- Apache processes ----------
    PID    PPID   VMSize   Private  Name
    480    21168  17.6 MB  0.5 MB   /usr/sbin/apache2 -k start
    1216   21168  17.6 MB  0.6 MB   /usr/sbin/apache2 -k start
    5252   21168  17.6 MB  0.6 MB   /usr/sbin/apache2 -k start
    5340   21168  17.6 MB  0.6 MB   /usr/sbin/apache2 -k start
    5992   21168  17.6 MB  0.5 MB   /usr/sbin/apache2 -k start
    17163  21168  17.6 MB  0.5 MB   /usr/sbin/apache2 -k start
    21168  1      17.3 MB  0.4 MB   /usr/sbin/apache2 -k start
    ### Processes: 89
    ### Total private dirty RSS: 42.10 MB

    -------- Nginx processes --------

    ### Processes: 0
    ### Total private dirty RSS: 0.00 MB

    ---- Passenger processes -----
    PID    VMSize   Private  Name
    5997   55.2 MB  29.6 MB  Rack: /var/www/redacted/current
    6006   60.8 MB  34.5 MB  Rack: /var/www/redacted/current
    14262  63.4 MB  50.1 MB  Rack: /var/www/redacted/current
    20205  63.1 MB  49.7 MB  Rack: /var/www/redacted/current
    25250  5.1 MB   0.2 MB   PassengerWatchdog
    25253  14.1 MB  0.6 MB   PassengerHelperAgent
    25255  10.6 MB  4.8 MB   Passenger spawn server
    25259  9.0 MB   0.5 MB   PassengerLoggingAgent
    25412  63.2 MB  49.9 MB  Rack: /var/www/redacted/current
    32492  51.8 MB  39.6 MB  Rack: /var/www/redacted/current
    ### Processes: 10
    ### Total private dirty RSS: 259.50 MB

I don't know what the author's experience has been, but I've seen the recommendation to trim Apache back to 10-15 MaxClients all over the place, and when we took that advice the results were catastrophic for app performance. We almost fell apart during an event because of slow clients tying up Apache processes. We did plenty of benchmarking before going live, but getting a picture of real-world performance is a lot harder than running `siege -c 250 http://hostname` against your server.

If you're going to trim Apache back that far, I'd recommend turning off KeepAlive altogether: you want each Apache process free immediately. If you're forced to run a config that tight, you can likely afford the CPU and interrupt overhead of setting up a new TCP/IP session, but you can't afford extra processes lying around idle, sucking up memory and MaxClients slots. In other words, you're heavily memory-bound.

So let me be clear about this: trimming MaxClients to 10-15 is horrible advice if you don't understand your actual memory usage (maybe you can run more) and the impact of slow clients. I trimmed our Apache modules back to a bare minimum, resulting in an average Apache process size of 0.45 MB. Yes, less than 512 KB per Apache process. We run `MaxClients 150` with zero fear of running out of memory. If you're running a PHP-based site, your Apache processes will be larger, but you had better know how large before you start tweaking your MaxClients config.
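For what it's worth, the knobs being discussed live in the prefork MPM config. This is illustrative only; the values reflect the scenario described in this thread (small workers, slow clients), not general advice:

```apache
# Free up worker slots immediately instead of holding them for idle clients
KeepAlive Off

<IfModule mpm_prefork_module>
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
    # Sized from (memory budget) / (average worker RSS), as discussed above
    MaxClients       150
</IfModule>
```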

I should probably write a blog post about tweaking Apache settings, because I've learned a lot of lessons from the school of hard knocks, not the least of which is "understand your web server's memory usage". But here are some quick tips in the meantime:

* Unload modules you don't need! Saving a couple of megabytes of memory in each of 100 Apache processes means saving 200 MB of memory.

* To understand how much memory Apache processes are using, DO NOT rely on `top`. Have a look at the way `passenger-memory-stats` calculates real memory usage [1].

* Understand the output of `free -m`, especially the "-/+ buffers/cache" line.

* Do not rely on benchmarking tools like siege to tell you the whole story. Slow clients are difficult to account for.

* If you have to run a limited number of MaxClients, or you run up against your MaxClients limit frequently, try turning off KeepAlive [2].
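On the second tip: the idea behind `passenger-memory-stats` is to sum the Private_Dirty lines from /proc/&lt;pid&gt;/smaps rather than trust the RSS that `top` reports (which double-counts shared pages). A minimal sketch, assuming Linux; the helper name is mine, and you generally need root to read another user's smaps:

```shell
# Sum the private dirty memory (in KB) recorded in an smaps-format file.
private_dirty_kb() {
  awk '/^Private_Dirty:/ { kb += $2 } END { print kb + 0 }' "$1"
}

# For a live Apache worker (process name is an assumption):
#   private_dirty_kb "/proc/$(pgrep -ox apache2)/smaps"
```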

If you follow those tips, you'll see a dramatic improvement in how accurately your Apache config fits your environment, because that's what it comes down to: Apache is a fantastic web server, but it has to be configured to fit your environment. I have nothing against nginx, but a lot of the nginx recommendations I see are born of an inability to properly configure an existing web server. If you don't already know Apache, I wouldn't hesitate to make the jump to nginx; it is also a fantastic web server. Hopefully this has helped someone.

1 - https://github.com/FooBarWidget/passenger/blob/master/bin/pa...

2 - http://serverfault.com/questions/86550/apache-keep-alive-or-...

Why no mention of standard caching and turning off KeepAlive during an onslaught?

What about Varnish?

WordPress is very cookie-heavy. I don't think Varnish out of the box would work in front of it. Sure, you might be able to configure it to work, but that's probably more effort than just tweaking a few lines of Apache config.

I don't think WordPress really uses cookies unless you're logging in or posting comments. As long as most of your visitors are read-only (which is usually the case, especially under heavy traffic), they won't get cookied.
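That's the usual basis for making Varnish work in front of WordPress: strip cookies from anonymous traffic so it caches, and pass logged-in users through. A hedged sketch (VCL syntax shown is the Varnish 3.x style; details vary by version and plugin set):

```vcl
sub vcl_recv {
    # Never cache the admin or login pages
    if (req.url ~ "^/(wp-admin|wp-login)") {
        return (pass);
    }
    # Anonymous visitors: drop cookies so the request is cacheable
    if (req.http.Cookie !~ "wordpress_logged_in") {
        unset req.http.Cookie;
    }
}
```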

Rather than fuss over Apache settings, use some decent caching strategies and you won't need to worry about this at all. And by the way, nginx is not always faster than Apache, and not always the best choice for every app.

I'm always a little frustrated when people recommend nginx over Apache. No argument, they're both great, but why would you suggest that someone adopt a new technology and deal with those hurdles, instead of making suggestions for their current environment and then guiding them towards future adoption?

Thank you sir for standing up for Apache!

True, and true. The main issue with caching WordPress is that WordPress itself is very dynamic and full of cookies. WordPress has plenty of caching plugins available, but as I point out, they won't help you when Apache spawns tens of processes and eats all your RAM.

So you're down to external caching, which is another layer of complexity and is detrimental to anything dynamic you have on your blog (which is pretty common).

You're right, caching won't help directly. But if every hit takes a tiny fraction of a second because it's (at least mostly) cached, it will take a lot more traffic to overwhelm Apache. Caching WordPress is not trivial, but it's not necessarily all that difficult; it depends on the particular blog and how dynamic it has to be. I've had success setting up caching for my own stuff as well as for a high-traffic online magazine.

Your larger point, BTW (that Apache can be overwhelmed, and that it makes sense to do some math and tune the configuration), is well taken. I'd not really thought about that in depth before, but it seems obvious now. :-)
