10 Million hits a day with Wordpress using a $15 server (ewanleith.com)
311 points by EwanToo 1819 days ago | 113 comments



As someone who worked as lead dev for a blog network that does 10M+ visitors a month, here's the thing: if you've gotten your blog to 10M hits a day, you likely have a massive amount of content (we had about 1M posts over 5 or 6 years and about as many comments); that is, far too much to get into cache on a $15 server. Every page gets cached, every gallery page gets cached (if you've got photo galleries), every comment page gets cached and all of those have objects/actual DB rows that get cached as well. With that much content, the GoogleBot alone will kill you if you're not careful.

These are all great tips to help you scale, but unless you've got a very small WP site that is also doing 10M hits a day, there are many more complications. Once I have a little more spare time, I'll have to blog about some other solutions we've come up with.


Of course, all this really lets you do is survive a swarm of people from (for example) here, digg, a tweet meme, stuff like that, which will direct everyone at one single page on your site.

But I think that's the most likely cause of that kind of traffic on a small cheap VPS, as anyone running a big site without clustering is just being silly :)


Yup, that's fair. And there are actually ways to scale WP beautifully with a small number of servers (we're using fewer than 5 servers to handle our 10M visitors a month and content archive). Again, I keep meaning to blog about it; I just need to find the time.

Anyways, I'm all for articles like this that help optimize WP sites and get rid of the stigma that WP doesn't scale without cost. Thanks for writing it.


Thanks :)

Hope I see a blog post from you about it sometime soon, I think the details around that kind of stuff are far too hard to discover for people trying to learn about it.


It's an interesting academic investigation. In theory, you could do 10M pv/day with a relatively small archive. Some recently launched sites that take off quickly (PandoDaily, for example) probably do decent traffic levels with a pretty small number of posts that would fit into a small cache.

But that's all theoretical. There's no way someone running a site with that kind of traffic should be running on shared resources without an infrastructure. When that $15 server needs a reboot, the site is offline, and that would be a significant amount of lost traffic.

Capacity, speed, redundancy and cost of downtime... You really shouldn't run on this kind of architecture.


I can't agree more with this; we've just redesigned a site doing 50M+ hits per month with articles in the hundreds of thousands. Akamai caching and Google indexing alone can turn a relatively steady-paced site into a snail.


This feels a bit like my "9 million hits per day" article from a while back: http://tumbledry.org/2011/08/31/9_million_hits_day_with_120 & http://news.ycombinator.com/item?id=2945185

Now, with 11% more hits! :)


Yeah but my 11% extra cost me 400% more RAM, so you probably win :)

I knew I'd seen a similar post in the past, but couldn't find it.


TL;DR - Vanilla ubuntu, configure firewall, install nginx, install wordpress, turn on wordpress caching, install and configure varnish.


Pretty much, yeah - though if you don't know what settings to put on nginx and varnish, you'll end up wasting most of the performance.

I'm sure my configurations aren't perfect, but they're a lot better than the defaults they ship with, and there doesn't seem to be a lot of "Here's some sensible settings" discussion on either project's website.


Your VCL ignores the fact that logged in users are going to have cookies and every request would then be piped to your backend. Stripping off the right cookies will allow those requests to be cached.
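For anyone following along, the usual approach looks roughly like this - a minimal Varnish 3-style `vcl_recv` sketch (the cookie names are real WordPress cookies, but treat the exact regex as a starting point, not production-ready):

```vcl
sub vcl_recv {
    # Logged-in users, comment authors, and password-protected posts
    # must reach the backend uncached
    if (req.http.Cookie ~ "wordpress_logged_in_|comment_author_|wp-postpass_") {
        return (pass);
    }
    # Everyone else: strip cookies entirely so the request is cacheable
    unset req.http.Cookie;
}
```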

https://www.varnish-cache.org/trac/wiki/VCLExamples

At the bottom are some example templates that are a bit more tuned for production sites and are in use on some relatively large WordPress sites - last August, they claimed 8m pageviews/day on 3 frontend machines.

While I used to advocate W3TC, his support for Varnish purging has been ignored and I published a patch to fix it. Also, there are a number of tuning tweaks you can do - I don't know if you mounted your shmlog in ram or adjusted threading. The Debian packaged defaults are not very well planned - I don't know if Ubuntu blindly accepted Debian's defaults or repackaged with their own settings.

Nginx also includes its own proxy-cache which can eliminate the need for Varnish if you were still RAM starved. There are things you can do in Varnish VCL that you can't do in Nginx without writing your own module if that is an issue.

Since you're not running Apache with mod_rpaf, did you alter $_SERVER['REMOTE_ADDR'] processing? If not, most plugins and even commenting/trackback when used with Akismet would break.
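For reference, a wp-config.php fragment along these lines is the common fix - a sketch that assumes the proxy is on localhost and sets X-Forwarded-For (only trust the header from your own proxy, or clients can spoof their address):

```php
// wp-config.php sketch: restore the real client IP behind nginx/Varnish.
// Without this, every comment appears to come from 127.0.0.1 and
// Akismet and other IP-based plugins misbehave.
if ( $_SERVER['REMOTE_ADDR'] === '127.0.0.1'
     && ! empty( $_SERVER['HTTP_X_FORWARDED_FOR'] ) ) {
    // Take the first (client) entry if the header lists several proxies
    $forwarded = explode( ',', $_SERVER['HTTP_X_FORWARDED_FOR'] );
    $_SERVER['REMOTE_ADDR'] = trim( $forwarded[0] );
}
```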

Your worker_processes setting is probably not well tuned for nginx, and there are a number of other tweaks that can help nginx quite a bit, mostly worker_connections and worker_rlimit_nofile (which, if I recall, with a large site and W3TC's object caching would end up causing a bit of churn).
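As a rough illustration, those three directives sit together like this - the numbers are hedged starting points for a small single-core VPS, not recommendations:

```nginx
worker_processes       1;      # usually one per CPU core; a Micro has one
worker_rlimit_nofile   8192;   # raise the fd ceiling before raising connections

events {
    # per worker; keep processes * connections well under the fd limit
    worker_connections 2048;
}
```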

somaxconn might also have given a bit of a boost as you start having more traffic hit the backend. Not sure what version of PHP you get out of the repo or whether the backlog is patched, but, at some point, you'll need to adjust the backlog there for php-fpm - though, you would be well beyond the point of being able to run it on the ECS instance you tested with.
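Concretely, the two backlog knobs being referred to look something like this (the values are illustrative; the php-fpm directive lands in your pool config, e.g. www.conf):

```ini
; /etc/sysctl.conf - allow a deeper TCP accept queue under bursts
net.core.somaxconn = 1024

; php-fpm pool config - match the listen backlog to the kernel limit
listen.backlog = 1024
```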

Good job benchmarking and actually including your config files.


Thanks - I half knew about the cookies, but didn't worry about them as most people going to my site (or most other blogs) won't actually be logged in.

Didn't touch shmlog configuration, or threading - everything I changed is on the post. I tried to keep it simple enough, in the end Varnish alone is probably enough for 99% of people.

You're right though, I should look at those VCL examples you posted, and thanks for taking a look through the files themselves, very helpful :)


Be careful: I've encountered many plugins that unexpectedly set cookies or session data.


Didn't mean to minimize the contributions made in the article - fully agree that proper settings matter a lot. The attention to detail in turning off unnecessary cruft was great. Loved the sudo ufw logging off.


Thanks - I mostly wrote it because I'd spent so long playing around with settings I realised I had no idea which ones made any difference anymore, so started again.


Isn't there any kind of static generator for Wordpress? I'd expect a static wordpress + nginx would be sufficient to handle quite a serious load.


Most of the caching plugins for WordPress will generate static HTML files, or static HTML stored in memcached, which can further be written to disk using nginx's fastcgi_cache or something like Varnish.



wp-super-cache. The original* and the best. Saves cached pages to html and html.gz, which allows nginx to serve them straight out (without even compressing them on every request).

* Not actually the original, but definitely the best.
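The nginx side of that looks roughly like the following sketch - the supercache path is the plugin's default layout, so adjust if yours differs:

```nginx
# Serve WP Super Cache's pre-built files directly, bypassing PHP entirely.
location / {
    gzip_static on;   # serves index.html.gz when it sits next to index.html
    try_files /wp-content/cache/supercache/$http_host$uri/index.html
              $uri $uri/ /index.php?$args;
}
```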


A blog is the simplest application to scale, just one step beyond static content. The fact that WordPress has traditionally not been very scalable always used to puzzle me...


As a PHP dev with ~100 WP installs under my belt and plenty of customization, I think I'm qualified to say that WordPress isn't written to be scalable. It's actually kind of crap. Many of the things it does to make writing plugins easier for newbies are Very Bad Things in PHP. WordPress is a memory hog, to the point that foreaching over query data in the wrong way can cause you to hit the memory limit, even if just unwinding your foreach into a copy/paste wouldn't. The memory leaks are somewhat nonsensical, and they make scripting with the WordPress API a minefield.

I'm not an expert on WordPress internals, but the scene is definitely ripe for a replacement simply due to the quality of the API. WordPress has been good enough for most people for a long time, but it has many weak points.


IMHO the problem is that it's old technology; the people who used to write free software N years ago are now busy doing startups ;) So the "next generation" of free software web stuff is partly missing.


What's more puzzling is the need to render things from the server when you just want to show people posts, which are really just static content. That's why I love Jekyll.


I do wonder why W3 Total Cache or one of the other options isn't a standard feature of Wordpress.


Caching produces confusion, is my guess. You have to know it's there to realize how to fix the "oops, I made changes and they aren't showing up" issues.


Invalidation in the case of a blog is simple enough that the caching should be completely transparent.
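To make that concrete, a sketch of transparent invalidation as a WordPress hook - the hook and helper functions here are real WordPress APIs, but the PURGE method assumes a Varnish-style cache in front:

```php
// Sketch: purge the post's own URL and the front page whenever a post
// is saved, so readers never see stale cached pages.
add_action( 'save_post', function ( $post_id ) {
    foreach ( array( get_permalink( $post_id ), home_url( '/' ) ) as $url ) {
        wp_remote_request( $url, array( 'method' => 'PURGE' ) );
    }
} );
```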


Should be, but Wordpress is built to accommodate everything, and a caching layer baked in by default would screw up so many plugins, half the community wouldn't know what to do with their blogs.

If it was baked in from the beginning, no problem, but there is so much plugin momentum in Wordpress now that throwing a caching layer on top of it would confuse the hell out of a lot of people who just want to write words.


Is there some good Python-based blogging system that would be more scalable?


Check out the wiki page for python blogging systems: http://wiki.python.org/moin/PythonBlogSoftware

Blaag and Hyde generate static html, which is about as scalable as you can get, w.r.t. speed.


The hallmark of Wordpress is ease of use. That's why you can spin up a blog right on their site, and they have a backend designed by Happy Cog. Wordpress is blog software for the technically illiterate.

And then I take a look at this blog post that lists all the incantations necessary to scale Wordpress to reasonable scale and I wonder why anyone should do this.

If you're setting up your own server, installing nginx, configuring PHP, and doing automated load testing, maybe you should also consider rolling your own software or using a different package that isn't supremely bloated.


The thing is this - configuring a server is one set of problems, building a decent CMS database/backend is another set of problems, and building a decent browser-based UX for content authoring is a third set.

Very few people have all three of these skills, and it's fair to say that 2 and 3 are not yet solved problems. Rolling your own CMS for any non-trivial purpose is always something that sounds like a good idea until you try it, and then you start hitting all of the incredible idiosyncrasies and speed bumps that other CMS have already solved, even if they've done it poorly.

I'm not exactly defending Wordpress and its lousy code, but in my experience with publishing CMSs, if it's powerful enough to flex to non-trivial needs (Modern WP, Drupal, Django, CQ, etc), then it's probably going to feel like a bloated/complex mess to a programmer, because of all the nuances in the problem space.

Having worked on both sides of this one, I'd rather solve the 'scale this crappy software' problem than the 'build a usable UX solution that does everything we need and works on mobile and in IE8' problem.


> if it's powerful enough to flex to non-trivial needs (Modern WP, Drupal, Django, CQ, etc), then it's probably going to feel like a bloated/complex mess to a programmer, because of all the nuances in the problem space.

This is a good rule of thumb, but I have found one shining exception, and it is called ProcessWire. If you've never tried it, I highly recommend a look. It's a CMS that essentially offers you a blank slate and a set of simple, powerful tools that let you mold it into something else quite quickly.


+1 for ProcessWire. I've been playing with it for a day now and I love its flexibility.


I think CMS engines like wordpress are great for semi technical people who are curious enough to set stuff up themselves.

However whenever I've had a website to develop I've never seen the point of using one.

If the website is going to be very simple the chances are it doesn't really need a full on CMS. All it needs is an HTML/CSS layout and some content that can come from either static HTML files, a few form handlers and perhaps some parts that my client can update themselves. Most of the time this can be solved by simply creating a part of the site behind a login with a few text boxes that update a database and are then displayed on the site or the ability to create lists of things.

I can create this sort of functionality myself from scratch in an afternoon or so, and it is usually much easier for the client to use because it will have fewer buttons on the interface and be designed around metaphors they are actually interested in (types of cake, for example). I gave a client Drupal to use once, and the result was that they would just call me up every time they wanted an update done to the site.

If it's something non-trivial, then I'd rather not have to work around a clunky PHP codebase and worry about the plethora of security updates when I could just create something much more flexible in Java/Scala/Python.


You're forgetting that this favors you more than them.

People don't want to learn a new system, and they are comfortable with WordPress.

There are millions of devs and designers also familiar with WordPress, which means they can leave you in an instant and get anyone else to make changes or add features to the site easily.


That's assuming they are already familiar with WordPress; most people have never used a CMS in their life.

If requirements are simple then all you generally need is an admin area with about 3-4 links.

Something like:

Change Homepage Text

Show Customer Inquiries

Add Item to catalog

This is much easier for a newbie to understand than "Add Page", "Add menu item", etc. If they want complicated changes they will usually end up calling me anyway.

I always keep the coding for my simple sites simple enough that any competent developer should be able to figure it all out in an hour or so anyway.

Many people who are not technically inclined do not have time to do much modification to their website themselves, so will generally not bother if you give them something with a lot of power like a full CMS.


* Find a new developer to work on my site
* Install a plugin to handle Facebook like buttons
* Buy a new template/design for $40

The first one is key though -- as a business decision.


Most small business people aren't going to go around installing new designs on their site on a whim (isn't that what they paid the designer to design?).

They want something they can log into once every couple of weeks and post some minor update to, then possibly consider a redesign 2 years down the line, at which point they are likely to want to throw out most of what they have anyway, which doesn't matter since they spent maybe $1000 on the whole thing.

If the site provides any kind of complex functionality then wordpress is really no longer going to make sense as a core to build around. Here 90% of the site is likely to be customer forms, order processes etc that don't fit into any convenient pre-existing model. At this point if you build around an off the shelf CMS 90% of the site will be custom plugin code so there will not be a huge benefit to anyone who would take over the code.

It's all about making the business itself a first class citizen rather than a particular piece of software.


I would argue that the latter 2 things are not truly needed, and can be iterated on over time. You don't need a CMS, you already have one built into your server: the file system. You don't need a browser-based UX, it's a nice to have. If surviving Reddit's front page is your goal, it's pretty easily doable.


_You_ might not need a CMS, but I guarantee that the vast majority of content sites that pull in 10MM+ visitors/day have a CMS.

You make a good point that if you're just a simple blogger firing off posts in a defined layout, you may as well just hand code html and send static files. I took the point of his post to be more for people who know they have to offer a CMS, and want some basic scaling settings.


"I wonder why anyone should do this" - Because you're setting this up for a "technically illiterate" client?


+1 - Nothing like being able to show a client who's got a billion dollar idea that his site can handle 10M users for $15/mo.

Just remember to explain all the caveats that are being discussed here. We're nothing without our integrity. Your client won't be listening to that point though; he'll only be excited that he's going to be a billionaire for $15/mo.


Well if they are going to be getting that amount of traffic either they have a very poor conversion rate/profit margin or they could just afford a proper dedicated server or two.


The Happy Cog design was short lived. We redesigned in house for version 2.7 in 2008.

> "blog software for the technically illiterate"

According to what metric? Not having the ability (and the huge amount of time) to roll their own blog software, or learn how to use Git and Jekyll? Try and look at the world from a broader perspective. Not everyone is a programmer, but they still have interesting things to say to the world.

> all the incantations necessary to scale Wordpress to reasonable scale

10 million page views a day is reasonable scale? C'mon. No combination of Digg and Reddit and Daring Fireball or anything else will get you even close to this. There is a selection of caching plugins for WordPress that'll get any site on shared hosting to easily sustain those types of real world traffic bursts. For people who do need a million plus page views a day, they're in "good problem to have" territory, and probably have long since acquired technical assistance, or have switched to a WordPress-specific host like WordPress.com, WP Engine, Page.ly, or ZippyKid which has high volume caching already configured for you.


"Wordpress is blog software for the technically illiterate."

Is it, really? I see that most of the most popular blogs use Wordpress: http://en.wikipedia.org/wiki/Blog_software

What blog platform would you recommend for the technically literate people?


Are your engineers going to be writing the content? We run a WordPress instance just as a CMS that then gets pulled into memcache to be served up within our site. It's clunky, but there's no reason to reinvent an editing interface.


> If you're setting up your own server, installing nginx, configuring PHP, and doing automated load testing, maybe you should also consider rolling your own software

This is actually very easy to do. I just did it. Maybe 1000 lines of code for everything, with proper caching and templating thrown in. The most difficult parts of a blog are the comments and search. If you use something like Disqus for comments and Google Custom Search for search, things become very easy to manage.


Regarding ease of use, if you are a dev, there's always toto or jekyll. Throw in disqus for commenting and you are pretty much done. Each blog is just a simple haml/erb/liquid template and git push is all you need to get a new page/blog up.


FWIW: Amazon's Micro instances will burst to saturate unused CPU cycles on the host machine up to a hard-capped limit before being choked to death by the hypervisor.

Had he run the Blitz test for 10 mins or more, you would have seen traffic spike up beyond what you'd think a Micro should sustain, and then plummet to near 0 for a disturbingly long period of time[1].

If you are unlucky enough to have a Micro on a host that is fairly saturated, the performance you get is untenable.

Micros are not "smaller than" Smalls -- they are a different type of monetization product from AWS, allowing you to pay cheaply for little bursts of underutilized hardware.

There is no way (none... not possible, zero) that a Micro would be able to provide the bandwidth and CPU power to host a realistic site doing 10 million hits a day even if everything was straight from RAM.

Read through the EC2 forums for any length of time and you'll frequently find people coming in with reports of their machines "stopping" or "crashing, and totally inaccessible" for minutes at a time with a 100% ST ratio; every time it is a Micro that has been hammered for a bit, either through benchmarking or use, before the hypervisor puts it in a full nelson and brings it down to the point that SSH connections to the host cannot even be maintained.

-- I would also point out that not only is CPU time allocated in bursts for a Micro, but its bandwidth is prioritized behind every other instance type (unless you were using a CDN, this would make hosting a typical WordPress site sluggish at best from a Micro -- and again, doing 10 mil hits a day is not going to happen in reality). Yes, you can offload all your graphical assets to a CDN, but now this article is about a $15/mo server and a $4217/mo CDN bill, which is a very different article.

Additionally, if you need to use EBS at all here, the story gets even worse with Micros (even using something like RDS, which requires network I/O in addition to hosting site content, is going to collapse on itself within the first 5 mins of the site's life with traffic like that).

All that said, the tips in the article are great. I only mean to clarify expectations around the use of Micros. A whole swarm of them grinding through a work queue in random order is great; using them as the backbone of your web presence will have pain points. (Yes, you can put 20-50 of them behind an ELB, but at that point why not run a handful of Mediums or a few Larges?)

Anyone with a Blitz.io account, please feel free to set up this exact configuration and run the benchmark for 1hr with the same load to verify the meltdown.

[UPDATE] Ewan, I am not knocking your article, I wouldn't expect most people to be familiar with the ins and outs of every cloud provider. The tips and techniques are great regardless of the actual performance on a Micro (and applies to anyone trying to scale WordPress). Just wanted to clarify for anyone getting excited that they can now run their Fortune 500 website on a single AWS Micro that there are nasty surprises lurking just under the surface.

[1] http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/co...


No problem, no offence taken :)

I only picked the micro because it's cheap, and AWS let me fire one up, build the config, break it, trash it, and start again, all in rapid succession.

One thing though, this configuration doesn't actually need much real CPU or disk resources - it's pretty much all memory, and as far as I know, AWS doesn't overcommit RAM. This means it should be "relatively" stable. The CPU usage is at around 5-10% even at the peak.

Personally, my own blog runs on Linode, because I think EBS is broken, but each to their own :)


EC2 may be a bit slow/clunky but at least it isn't a security nightmare like Linode.

Months after and they still haven't said what happened or what they fixed during the Bitcoin theft fiasco.

I wouldn't trust them again with ANY site (former customer).


> Months after

It's March 31. slush's post was on March 1.


+1 right you are about micro instances getting hammered by the hypervisor if you use too much sustained resources.

One trick that I used when I was temporarily hosting a small Clojure web app for a customer: I ran the web app using "nice" to reduce its priority. I did not do any measurements, so this is really subjective, but the app seemed to run a lot better as far as consistently getting a little bit of processor time to run.

Not so subjective: I did some Clojure development in a repl on a micro instance (because it was already set up for access to a Hbase cluster) and doing a "nice lein repl" really made development possible, if a little slow.
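For anyone wanting to try the same trick, a quick way to confirm the priority actually took effect (positive nice values mean the process yields CPU to others first, which softens the hypervisor's burst throttling):

```shell
# Run a command at nice level 10 and print the nice value it sees.
# "sh -c 'ps ...'" inspects its own process; substitute your real app,
# e.g. (hypothetical jar name):  nice -n 10 java -jar app.jar &
nice -n 10 sh -c 'ps -o ni= -p $$'
```

On a Linux box this prints 10 (possibly with leading spaces from ps).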


Interesting find, Mark - I never tried dabbling with nice inside the Micro so the HV doesn't strangle it to death... appreciate the tip!


After reading this, my question is: will his setup work on a small EC2 instance instead of a micro? It sounds as though the majority of the issues you're describing are micro-specific, and while there are a thousand "tune wordpress" blog posts out there, this one sounds like a good idea minus the use of a micro instance.


The techniques Ewan outlines are spot-on; my clarification is for people to have the right expectations of the Micro's, not on the information he provides on tuning a WordPress install.

That said, Smalls and up on EC2 will give you more consistent performance (they aren't built to provide the spike-performance Micros are) BUT are still relatively subject to "noisy neighbors".

The smaller the instance, the bigger the impact felt; it is still possible to have a Small grind to a halt because of a noisy neighbor, and typically I do not see people hosting web front ends on Smalls because of the erratic and poor performance (in general). Possibly a fleet of them behind an ELB, but even then I see Mediums and Larges much more often as the "web server".

The mediums provide an awesome (and typically unsung) balance between monthly cost and performance on your way to a large (which can get quite expensive).


Great, makes a lot of sense - thanks for expanding. Like you touch on, I think price is the big issue for most people who aren't looking to build for 15MM/month but are just looking to sleep at night knowing that getting to the front page of Digg, Reddit, TechCrunch, Hacker News, etc. won't sink the ship.


I think "a $4217/mo CDN bill" may be a bit of an exaggeration :p I honestly had no idea how much CDNs charge, threw some rough stats into the AWS Monthly Calculator for CloudFront (596GB/month, 500KB avg. object size) and the bill was only $73.87/month. Call me surprised!


Sounds about right to me. How did you get to 596GB? That seems very low for 300,000,000 visits (remember this was a daily figure, but you're paying by the month). If you have 10 requests per visit (reasonable for the low end) that turns into a cool 3B requests. That itself is $2,250 a month.

A million visits times 500KB = 476.84GB. Per day, so 14TB per month. At best that's another $1,600 a month.
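Sketching the request-cost part of that arithmetic (the per-request price is the commenter's assumption here, not a quoted CloudFront rate):

```python
# Re-derive the request-cost estimate: 10M hits/day billed monthly,
# ~10 requests per visit, at an assumed $0.0075 per 10k requests.
visits_per_month = 10_000_000 * 30
requests = visits_per_month * 10
cost_per_10k = 0.0075
request_cost = requests / 10_000 * cost_per_10k
print(requests, request_cost)  # 3000000000 2250.0
```

which matches the "3B requests ... $2,250 a month" figure above.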


Epic math failure on my behalf. Calculated based upon 10M visits per MONTH! D'oh!


Great post, though the 10 million hits figure is a bit misleading. WordPress is a cookie monster, so your default Varnish configuration is going to have issues when people start adding comments etc.

WordPress itself isn't the scalability problem anymore, it's usually how the blog is being used, and when the db needs to be interacted with.

At the same time, a lot of our customers use WordPress as a CMS, and have static home pages and sub pages, which we serve in a similar fashion.

https://github.com/zippykid/php-varnish - the plugin there will be handy when you actually start making blog posts.


I think this article title is misleading. It's not 10 million hits a day on Wordpress, it's 10 million hits a day using Varnish to a single URL. Might as well say you can serve 10 million hits a day to a static file served out of memory.


This is good, but let's look at it from a slightly different perspective. The author (Ewan) is good, so I'll assume his hourly rate runs anywhere between $50 to $80 an hour for sysops work.

Taking the lower figure - $50 - if he spends just a few hours a month keeping his server alive, active, and updated, making sure it's not down, and responding if anything happens, that will cost him around $200 + $15 per month to maintain his setup.

If you're serious about running a well-maintained, popular WordPress-powered blog, why go to the extent of doing it yourself when you can spend a little more and let the people who run such services handle it? That way, you can keep doing your more productive work.

Unless, of course, he does nothing else but earn via the website, does it full-time, and is happy spending his 4 hours a week running the server.


You're absolutely right (except about me being good, I don't know about that), and if someone asked me about hosting Wordpress for a serious business, I'd probably just tell them to use WPEngine or similar.

But as a learning experience for me, I think this has been pretty priceless, and I enjoy it, bizarre as that probably sounds to the people who don't read HN :)


I agree with your reasoning that it might not be worth the time, but I think spending five hours a month on server upkeep is a little steep. I've got two Linodes running and I rarely spend more than an hour a month on server-related issues. Of course, I don't have millions of views, but if that's the case, my assumption is that this high traffic would come with enough revenue to make it worth my time.

So yes, I agree with your sentiment, but once you have servers up and running with good tools they don't take that much effort.


My blog is not that popular - no millions of visits a month - but it's decent enough to choke and die under most shared hosting. Well, I once set up my WordPress blog to run on Linode's cheapest server with nginx, wp-cache, and all the jazz. But then I remained tense most of the time, and even minor incidents at the wrong time gave me the jitters.

Now I use Page.ly with CloudFlare in front (WPEngine is an equally good host, and I'm considering it for another WordPress setup of mine), and I don't even care about the host - I just care about my blog and how to update it with articles. It's costlier, but it comes with its rewards.


This setup is very unlikely to take four hours a week to maintain.

Hell, it takes 15 minutes to set up.


If a blog is generating that much traffic it's probably generating a good deal of revenue as well. $200 doesn't seem like much if it is getting him consulting gigs.


While I agree with the main point of the post (optimization, caching), the title is a bit of an exaggeration.

On a normal website, 10M hits will not be spread evenly across the day. In this example, hits average only 10KB, and the number of concurrent users is only 250.

Also, the test seems to have been done on a fresh WordPress installation. The more content you have, the bigger the database, the slower the communication with your DB, the bigger the cache, etc...

In a real-world scenario, you would probably need something more than a $15/month virtual server to handle 10M hits in a single day.

Still, optimization is always a good thing.


We're including a tool in Ubuntu called juju [1] that will enable us to ship cool setups like this to users.

We did something similar for Wordpress [2] and plan to ship it in 12.04.

I'd love to bring you on board so we can compare configs and make it even better. Also rkalla is correct, micros are great for prototyping but you'll need at least smalls for production. The nice thing about micros is that you can set it up, tweak, and then later reboot them into larger instances, so they're nice for playing, but I wouldn't run a live site on them.

[1]: http://juju.ubuntu.com

[2]: http://www.jorgecastro.org/2012/03/18/redeploying-omg-ubuntu...


What's the point of serving content from a database by default if you have to set up all kinds of caches just in case your blog happens to be on HN, Reddit, your local newspaper, etc.?

Shouldn't the database be more like an add-on, not the core? Sure, search is something that is hard to do with flat files, but everything else should just use files. It might be a good idea to save all the data to the DB as well, in case you want to do some markup changes (which happen maybe once a year). But querying the DB every time someone visits your blog? Crazy.

Also, when you don't need a database, backing up your whole site and/or transferring it to another host is a lot easier.

More complex sites than a normal personal blog are of course a different thing.
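The flat-file idea above can be approximated in nginx: serve a pre-generated static copy of the page when one exists, and only fall through to PHP (and the database) on a miss. A minimal sketch, with the cache path as an assumption:

```nginx
location / {
    # serve the pre-generated static page if it exists,
    # otherwise fall back to the normal WordPress front controller
    try_files /cache$uri/index.html $uri $uri/ /index.php?$args;
}
```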


> echo "deb http://nginx.org/packages/ubuntu/ lucid nginx" >> /etc/apt/sources.list

Shouldn't that be "oneiric" since you're using Ubuntu 11.10? Or does the nginx team compile everything statically so it works no matter which version you choose?

I'm also a bit puzzled with your decision to make PHP run as the "nginx" user. You probably did this to match the username that the nginx debs use (Ubuntu's default package uses www-data), but what's the benefit of matching users there? If you're going to change it anyway, why not make PHP run as "php", for example? Some might even say that running both PHP and nginx under the same user reduces security.


If you've got your php code and static assets in the same repo, then it's generally easiest to have php and nginx run as the same user that owns the files in that repo. php needs access to those file to parse/run them, and nginx needs access to those (static) files to serve them.

Of course, you can manage this in other, more security conscious ways (move static assets elsewhere, use group permissions, etc.) but this is probably the simplest.


Most files and directories have 644/755 permissions by default, which means they can be owned by any user and still be accessible (readable) to any other user on the same system. What really matters is who can write to those files and directories, and there's no reason for anything other than "wp-content" to be writable by the web server. WordPress blogs get exploited all the time, so a bit of paranoia can't hurt.
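A sketch of that lockdown (paths and the "nginx" user are assumptions carried over from the thread): everything read-only to the web server, with only wp-content writable by it.

```shell
# WordPress files owned by root, world-readable but not web-writable
chown -R root:root /var/www/wordpress
find /var/www/wordpress -type d -exec chmod 755 {} \;
find /var/www/wordpress -type f -exec chmod 644 {} \;

# only wp-content (uploads, cache) needs to be writable by the web server
chown -R nginx:nginx /var/www/wordpress/wp-content
```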


Kudos for the step-by-step benchmarking. However, ideal situations and no business requirements can give you pretty impressive statistics. The most load producing elements of every Wordpress site I've worked on have been necessary to pay the bills, allow a reasonable workflow for content creators, or provide good user experience.

When you load in a sizable theme, warm up the DB with thousands of posts, tens of thousands of comments and users, and have requirements of short TTLs for new stories to be posted, comment administration, etc, and real world traffic... the picture looks a little different.


ProTip: Don't run your production MySQL server on a micro. Micros are great for mail relay servers, load balancers (no static assets), or DNS.


Or leverage a template cache on (insert any framework here) and get 2,000+ hits a second on a Linode, or 172,800,000 hits a day.
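For context, the hits-per-day figures in this thread are just rate × 86,400 seconds:

```shell
# hits/day = hits/second * 86,400 seconds in a day
rate_per_sec=2000   # the Linode figure quoted above
echo $(( rate_per_sec * 86400 ))   # -> 172800000
```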


Probably closer to 20,000 hits per second with Nginx configured correctly and if the framework doesn't impose too much overhead. Alternatively, use ESI with Varnish and the HTTP Cache spec and get 10,000+ hits per second on a relatively dynamic website (read: micro-caching).


Indeed, 1M per day may sound like a lot but it's really not that impressive.


Well done on the article, man -- very detailed.

I would say you should dump the caching plugin, however, and just do everything it's doing in nginx itself. My mix also adds CloudFlare as a caching system:

http://danielmiessler.com/blog/how-to-run-a-wicked-fast-word...
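A minimal sketch of doing the plugin's job in nginx itself via fastcgi_cache (the zone name, cache path, socket path, and TTL are all assumptions):

```nginx
# in the http context: define a cache zone on disk
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m inactive=60m;

server {
    location ~ \.php$ {
        # cache PHP responses so most requests never spin up PHP at all
        fastcgi_cache       wpcache;
        fastcgi_cache_key   "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 10m;
        fastcgi_pass        unix:/var/run/php-fpm.sock;
        include             fastcgi_params;
    }
}
```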


Thanks, will take a look at your post later :-)


tl;dr - Varnish

The only way to make WP "fast" is to make it nearly completely static.

Get a half dozen people working admin area and your server will cry.


Though with 6 people doing admin on WP, they're unlikely to be doing things every second - most of it is writing and looking at things. Even if each of them clicked once a second, that's still only 6 requests per second, which even for completely uncached dynamic pages is irrelevant.


Assuming you are serving static content, a much easier method is to use a CDN for all your assets (including the WP generated pages). It might cost a little more for big spikes, but it makes you completely immune to almost any amount of traffic for very little effort.


Wish there were an Amazon AMI with your config... Much better than doing things from scratch :)


Just run the Ubuntu AMI ami-baba68d3 and follow the instructions, it will only take 10 minutes :)


Awesome, Wordpress promotes bad practices in multiple spaces now. Horrible PHP habits formed from green developers learning on the "platform" and now basic server architecture is going to crap for those poor newbies. Thanks Wordpress...


Try it again with just the last step (vanilla Apache / vanilla WordPress / Varnish). What are your results like?

Keep the original setup and have your test script perform some random searches. New figures?

(Everything is fast/non-server intensive when you're serving static data)


Just tried:

This rush generated 7,363 successful hits in 1.0 min and we transferred 56.39 MB of data in and out of your app. The average hit rate of 117/second translates to about 10,159,286 hits/day.

So you're right in that Varnish is the big improvement, but the server's CPU usage seemed to be significantly higher with Varnish alone than with Varnish + APC.

Of course, one issue with these systems is there's no such thing as vanilla, the documentation goes from "Install" to "Read 100 pages of stuff to get a working configuration", with nothing in between...


I think the main difference between using Apache vs nginx as the backend server would be the amount of memory available for Varnish to use for caching. nginx has a much smaller memory footprint.


apache-prefork has a larger footprint because it embeds the PHP interpreter in each worker process. If he's using php-fpm, he would use mpm-worker, in which case Apache and nginx workers have a similar memory footprint.


What about memory usage with MySql? I've found that to be a significant bottleneck with wordpress sites. A basic wordpress installation doesn't use the database much, but if you add just a few plugins, things start to jump pretty quickly.


Varnish takes the load off by caching the results from Wordpress. The Total Cache plugin handles purging cache entries when you update posts/pages.

If you want to see how many queries Wordpress runs even for a simple post, install this plugin I wrote at my last job. It shows a log of queries in the page footer if you are logged in as admin.

http://wordpress.org/extend/plugins/wpdb-profiling/


I didn't play around with MySQL in those settings at all, mostly because the last thing you actually want is a query hitting the database.

You're right though, if you start adding a few less well coded plugins, then you can start hitting the database a lot, for no really good reason.


The caching software and http accelerator probably mitigate the need to hit the database at all.


I found this really digestible. The HowTos that read like an annotated bash history are incredibly effective, and even though I don't need this for any of the WP sites I host, I can use it for almost any (mostly) static site.


I really like these articles; it's nice to see how things like this are done from scratch. The firewall step may be unnecessary in AWS though, because AWS already has a firewall enabled by default through "security groups."


Why setup a firewall when it's hosted on ec2? It's all firewalled off by default.


Mostly because some people won't be using EC2, and I didn't want to leave them unprotected. I thought I might as well include it - it's a trivial addition after all.


> Download the nginx secure key to verify the package

>

>cd /tmp/

>wget http://nginx.org/keys/nginx_signing.key

Verify a package with a key you got over http? Am I the only one who noticed this?


A bit silly yes, but nginx.org doesn't support https, which is slightly more silly and rules out most other options.
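A partial mitigation (an idea, not a guarantee): verify the key's fingerprint against a second, independent source before trusting it, rather than blindly adding whatever came over HTTP.

```shell
# fetch the key over http, then inspect its fingerprint
wget http://nginx.org/keys/nginx_signing.key
gpg --with-fingerprint nginx_signing.key

# compare the printed fingerprint against an independent source
# (e.g. a keyserver) and only then trust the key:
sudo apt-key add nginx_signing.key
```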


What's the right way to do this?


On a single page, without any plugins, menus, custom data, sidebar/widgets logic etc. Real sites will have 10-100x worse performance, so you might get 100k/day on the same machine.


Great article.

Question: If you have Varnish running as a cache, should you really have the WP W3 Total Cache plugin running too? Seems redundant, but I'm not familiar with this tech.


Varnish caches the actual HTTP requests, and speeds up static content substantially by removing the necessity to load up PHP.

WP caching speeds up requests that spin up PHP by preventing them from making a connection to the MySQL Database. This is useful for requests that may peruse multiple blog posts, but load the exact same sidebar content by caching things like post counts, categories, tag clouds, etc.


Really? That's interesting. I assumed the WP caching was straight page output caching.


Some of the cache plugins may just utilize page output caching, but IMO if you're spinning up your language runtime (Ruby, PHP, Python, Java, .NET) just to do page caching, you've already lost. Varnish and even built in Web Server caching can be far more efficient.

It's better to use the language runtime for more specific caching inside your application logic and for smarter cache expiration.


Agreed. I'd like to see the same benchmark run with Varnish + Apache. No W3 Total Cache, no nginx. I'd bet this synthetic benchmark would show the same performance, meaning it's all just hitting Varnish anyway.


This is quite interesting. You could use such a setup to run your prototype without breaking your bank account.


Will the bandwidth not cost way more on AWS than the $15 mentioned?


Mmm, you're right, it'll cost something, though the AWS free tier covers 15GB per month of bandwidth, then 12 cents per GB after that, so I'd expect it to be a few dollars.

Obviously though, there are cheaper options than AWS out there - Linode includes 200GB of transfer per month in their $20 option.

I mostly just used AWS because I could start the server, build it, blow it away, and restart, all without any hassle or full monthly charges.
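Using the free-tier numbers above (15GB included, then $0.12/GB), the overage is easy to estimate; the 50GB of monthly transfer is just an assumed example:

```shell
# bandwidth overage cost = (GB used - 15 GB free) * $0.12/GB
gb_used=50   # assumed monthly transfer
awk -v gb="$gb_used" 'BEGIN { printf "%.2f\n", (gb - 15) * 0.12 }'   # -> 4.20
```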


If you are going to host it for a long duration, you can reserve a heavy utilization micro instance and end up with an instance cost of less than $6.50 per month (with a three year reserved instance).

As long as you aren't serving lots of images, this will keep you well below $20/month - even with large amounts of traffic.

If you are serving lots of images, or worried about excessive bandwidth: Linode gives you 20GB/200GB (storage/bandwidth) on a 512MB server for $20, and MediaTemple gives you a 512MB server with 20GB/350GB for $30 (a little cheaper if you are going to be in the 300GB range when you consider Linode's $0.10/GB for overages).


$15 != $15 per month



