How to handle 1000s of concurrent users on a 360MB VPS (markmaunder.com)
255 points by joelg87 on May 28, 2011 | 78 comments



Why use nginx instead of Varnish as the frontend proxy? If you are using nginx as your http <-> fastcgi gateway, that's one thing, but if you're just using it as a reverse proxy, it seems like a lot of extra stuff to maintain.

I like living on the bleeding edge, so I do varnish -> mongrel2 <-> apps instead... but varnish -> nginx <-> apps or varnish -> apache <-> apps seem equally reasonable. The key is caching and being able to handle slow users without tying up your app. All three of these setups do that. nginx to apache only saves your app server; it doesn't do much caching.


I must admit I've never used Varnish, but from what I gather, its function is caching, not handling large numbers of concurrent connections. Nginx's strength is handling tens of thousands of concurrent connections with very little incremental CPU load and a tiny memory footprint. It then reverse proxies those connections to a threaded-model server, occupying each thread for very little time and quickly freeing it up for other work.

From what I gather, Varnish uses a threaded model with one thread per connection, which is exactly the problem nginx solves. So it might make sense to put nginx in front of Varnish so you get high concurrency with great caching.

Nginx does offer some caching but I've played with it in high load environments and wasn't impressed with the performance.


Varnish is arguably more performant than varnish. I've had amazing success with both, and I don't think you need both together, but they're both screaming fast. I only noticed that Varnish had served 100,000 visitors from the cache in a single day (due to a Google mistake) because I saw it in my analytics; the server load sat at 0.05 all day.

I love varnish. Hard.


Er, would you mind correcting your post to indicate which one is more performant?

Thanks.


What do you mean? I say they're both very, very fast, but I think Varnish might be a bit faster (or better written). In any case, it's a really close call, and you should use the right tool for the job, as they aren't both written for the same one.


I think he means that you should read the first sentence of your original post again.


Ah, I meant "than nginx". The weird thing is that I read it again after the comment. I can't edit it now, unfortunately...


It's clear from his post that he found varnish to be faster.


Unless you subscribe to the concept of case sensitivity, that is.


Varnish is ridiculously amazing. I use it directly in front of Unicorn to host some Ruby apps. The performance is amazing for how low the load average stays. No need for a 'proper' web server.


Thank God you came. I was thinking of doing this exact thing with a passthrough directive for socket.io. What's your experience with it? Any caveats? Do you serve static media in any special way? I usually just have Varnish cache them forever.

Any help would be great, thank you!


Varnish would need to support HTTP/1.1 proxying (specifically the "Upgrade" header) in order to run socket.io WebSockets. haproxy might be more what you're looking for as a reverse proxy here.


Doesn't it work if you just pass that url through?


I don't exactly deal with any high amount of visitors (current record being like 300 hits per month).

But when using it directly in front of an application server, I just register the app as a backend and handle Cache-Control headers in the app itself. As for static files, I haven't really dealt with them, but caching them forever seems like a good enough option. It has worked great so far and I haven't hit any problems yet, and with grace time in Varnish you could even survive a backend dying, as long as it's serving fairly static content.
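The grace bit looks roughly like this (Varnish 2.x-era VCL; the one-hour values are just placeholders):

  sub vcl_recv {
    # how stale an object we're willing to hand out if the backend is sick
    set req.grace = 1h;
  }
  sub vcl_fetch {
    # keep objects an hour past their TTL so grace has something to serve
    set beresp.grace = 1h;
  }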


Actually, the reason I'm not caching more is a bug in Rack: https://github.com/rack/rack/pull/157 Rack doesn't seem to handle session cookies correctly; they are always set even when there is no session. Varnish does not cache responses that set cookies (e.g. pages where you are logged in, like an admin panel).

This means everything served through my Sinatra application on Unicorn gets a session cookie, even something like a CSS file. I'm specifically stripping them out at this point, but that shouldn't be needed. I don't know enough about Rack yet to fix the bug myself, and I'm way too busy with exams anyway.
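The stripping is only a few lines of VCL, roughly like this (a sketch, Varnish 2.x-era syntax; the extension list is only an example):

  sub vcl_fetch {
    # drop the session cookie Rack tacks onto static assets so Varnish will cache them
    if (req.url ~ "\.(css|js|png|jpg|gif|ico)$") {
      unset beresp.http.Set-Cookie;
    }
  }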


I see, thanks. Since nginx would proxy gunicorn anyway, I don't think it would be any different, though, would it? It sounds like a good solution; I'll go with that, thanks.


I don't know how WebSockets work with Varnish, but you could run Varnish on port 80 and nginx on 81, and have the WebSockets hit :81 to get proxied through nginx, I guess. You would still get good caching of normal content through Varnish! :)


That's a good idea, but if you can run WebSockets on a different port you can just have it go straight through :p


For some reason I can't respond to your other post, but yeah, that makes sense too. Let me know what you end up doing; it would be nice to discuss it more :-) I'm on IRC in a few places (Freenode and EFnet mostly) if you want.


If you can't respond to a post, just click "link" and it'll let you. As for the final setup, I'll definitely write it up and post it here :)


How do you test the strength of your setup? A solid way to test my setup would probably reduce my fear of experimenting with the bleeding edge.


You could use something like Grinder (http://grinder.sourceforge.net/) to create a load-test scenario, which you can run to establish a baseline before a change and then run again after the change to assess its impact.

True load testing requires quite a bit of thought to go into your test setup, but for those kinds of differential tests it is usually enough to make sure that your test client and network connectivity to the application are constant factors and good enough to generate a decent load.

You won't catch the subtleties with those tests, but you will normally catch the 'oops, my performance dropped by 50%' type of errors. The subtleties you can then work out during normal operations.


Clojure with mongrel2?


This is surreal. I just wrote a draft blog entry about the changes in startup economics due to event-servers like node and nginx (publishing tomorrow), and I hit HN just before heading to bed and this blog entry is #1.

Thanks for all the love today guys. Have an awesome memorial day weekend if you're in the States.


Or bank holiday weekend if you're in the UK ;)

I've been using nginx for about 2 years and it performs so much better than Apache.

I'm not using Apache as a backend, though; I use FastCGI. It performs even better.


I use nginx exclusively as my server and I run node instances and wordpress behind it. Why even use apache?


Mind if I ask what you use to tie WordPress into nginx? CGI?


As written in another comment, nginx + php5-fpm is a great combo, no need for Apache at all. I run both Drupal and WordPress (the latter against my will :-P ) in such a setup.

I think the only reason people use nginx -> Apache mod_php is that there's a lot of documentation and knowledge about Apache. nginx requires a slightly different mental model of how HTTP requests are served, but it's worth it.

EDIT: You can think of PHP5-FPM as Apache+mod_php but only serving PHP requests. So nginx handles static files and URL magic, and when there's a request for PHP, it passes it to PHP5-FPM, just like it would pass it to Apache+mod_php. This is oversimplified, but it's kind of like that.


I had a similar configuration for a Wordpress blog, and another Linode instance running ExpressionEngine.

I have nginx and php-fpm configured to communicate with each other over Unix sockets.
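The nginx side is only a one-line difference from the usual TCP setup (a rough sketch; the socket path depends on your php-fpm pool config):

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # point at php-fpm's Unix socket instead of 127.0.0.1:9000
    fastcgi_pass unix:/var/run/php5-fpm.sock;
  }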


At the end of the article, why is apache even being used anymore?

Just use nginx and php-fpm (or php-cgi/fcgi).


That's what I use. I thought about the method in the article, but it seemed too complicated, and nginx + fcgi is plenty fast for me.

Are there any performance benefits to having Apache as well?


Performance benefits? No, I wouldn't think so. I can think of too many performance downsides to it.

The only benefits I can think of offhand would be if you had a requirement for something that Apache specifically provides, or custom modules you needed to run. Something like suphp, modsecurity, shared-hosting configuration, etc. All policy-type stuff, not performance.


None at all. It's purely because most people aren't willing to invest the time it would take to not need .htaccess.

In fact by not using PHP-FPM you lose out on awesome stuff like the slow log!


Missing: caching. Caching. Caching. And more caching.

I speak of Wordpress and database-backed PHP applications of its ilk. Wordpress loves to spray MySQL in a thick stew of queries for every single page load.

Without sensible -- preferably aggressive -- caching, your Wordpress site will die in the arse.

My setup generates cached .gz files, which nginx can serve directly. It flies. It didn't use to.
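Roughly this shape (a sketch, not the exact config; the cache path is an assumption and depends on whatever writes the files):

  location / {
    # gzip_static sends the pre-compressed .gz sitting next to each cached page
    gzip_static on;
    try_files /cache$uri/index.html $uri $uri/ /index.php?$args;
  }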


Or, if you don't want to make WordPress handle the caching, use Varnish (also mentioned a few times in this thread).

I've used Varnish for some high-traffic WordPress sites by sitting it in front of the Apache instance running WordPress. With well-planned TTLs and logged-in-cookie detection, most of the inconveniences of caching can be eliminated.
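The cookie detection can be as simple as this sketch (the standard WordPress login cookie name is assumed; adjust for your setup):

  sub vcl_recv {
    # never serve cached pages to logged-in users; everyone else hits the cache
    if (req.http.Cookie ~ "wordpress_logged_in") {
      return (pass);
    }
  }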

Also, using Disqus or similar services for comments will be helpful. You won't have to worry about comments not showing up until the cache clears, leading to confused users. That is, if you use the JavaScript implementation of these comment services.

Alternatively, plugins like WP Super Cache have automatic post cache flushing if there are updates, and you can also tell it to bypass cache for logged in users. But it is IO-bound.


WP Super Cache will serve up cached files via PHP with the flip of a switch. While this isn't as fast as letting the server serve up cached files, it's an easy win.


It's not missing, he specifically addressed caching at the end of the article (indicating he didn't cover it).


This is the best caching plugin I've found for WP: http://wordpress.org/extend/plugins/w3-total-cache/

It supports disk-based caches and memcached.


This technique naïvely assumes the only thing that is slow is the client. If a backend data provider that the web server communicates with (e.g., database) is also slow, this arrangement merely adds complexity without much corresponding benefit. The origin server will still block waiting on the data provider, which can cause process starvation in the pathological case.

Moreover, issues with slow clients often can be solved by raising the TCP send buffer size. As long as the response size is less than the send buffer size, it really doesn't matter how slow the client is: write() will return immediately, leaving the webserver free to serve the next request. Getting the data to the client then becomes the kernel's responsibility.


That would work, unless you're using a container-based VPS. OpenVZ is notorious for standing in the way of system tuning. Using nginx makes a lot of sense in this case, especially with the ey-balancer patch.


The TCP send buffer size can be controlled at the application level (see SO_SNDBUF in tcp(7)). There's no need to adjust kernel sysctl settings.
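nginx, for example, exposes this as a parameter on the listen directive (a hedged sketch; the buffer size and the rest of the server block are made up):

  server {
    # sndbuf= sets SO_SNDBUF on the listening socket straight from the config
    listen 80 sndbuf=256k;
    server_name example.com;
    root /var/www/example;
  }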


By the manual page you mentioned: http://linux.die.net/man/7/socket

> SO_SNDBUF: [...] The default value is set by the wmem_default sysctl and the maximum allowed value is set by the wmem_max sysctl.

You can't use sysctl under OpenVZ. QED.


Fair enough - then you'll have to switch to a different VPS if you're sending > 128KB responses. Use the right tool for the job.


Worth noting that the story is from Dec '09.


People were doing this with Squid years before that, in the 90s.


Yeah, Squid is actually great. I do wish Squid were a package that just needs to be turned on for most VPSes.


As a good number of folks have pointed out, this article (helpful for sure) is basically nginx + PHP-FPM.

nginx serves up static content (e.g., the static cache files WP-SuperCache generates for you, plus JS, CSS, images, etc.), and then you configure a pool of warmed-up PHP VM instances via PHP-FPM (check /etc/php5/php-fpm or some equivalent dir on your server).
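A pool definition might look roughly like this (made-up numbers for a small VPS, not a recommendation):

  ; e.g. /etc/php5/fpm/pool.d/www.conf
  [www]
  listen = 127.0.0.1:9000
  pm = dynamic
  pm.max_children = 8
  pm.start_servers = 2
  pm.min_spare_servers = 2
  pm.max_spare_servers = 4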

Then you set up a rule that directs PHP requests to the FPM service, which would probably look something like this in nginx:

  location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  /var/www/mysite.com/$fastcgi_script_name;
    include fastcgi_params;
  }

and instead of adding the layer of Apache servicing the PHP requests, you have nginx pass them directly to the PHP processes.

Going from an Apache2 + FCGID configuration (somewhat similar to this, but with Apache) to nginx, I saw a 75% drop in server load.

I'm almost certain this still isn't a totally tweaked-out setup and that someone more familiar with the process could do better, but for my needs it took my server from crashing under load no matter what I did to typically sitting idle most of its life (~900k pageviews/month).

So I'm a big fan of nginx. I'm not saying you couldn't configure Apache to do the same... just 5 years of attempts to do so never got me anywhere with it.


Repost: http://news.ycombinator.com/item?id=970682

Interesting comments on the original thread.


oh wow. I didn't even notice the posted date on the original article until I followed the repost link you provided, and saw how long ago that had been posted.


I just find it odd that Apache hadn't employed the likes of epoll already. IRC servers have been using async sockets like this for yeeeears.


Well, it has since Apache 2.1 (the 'event' MPM module, which replaces worker).


Event MPM is still a large pile of fail. Unless you're unwilling to switch away from Apache, the benefits over worker are mostly theoretical from a performance point of view. Hopefully this will change with 2.4, but only for those who haven't already kicked Apache off their production machines.


Are the loopback requests limited by the TCP stack?

Is it possible to set this up where nginx and apache communicate via a socket?


Yes, and that's the recommended way of setting it up. :)


You can do the same on a really high-traffic Drupal site using the Boost module. It's basically a module that will, for non-logged-in users, generate a static HTML page and serve that instead. PHP isn't even called; the .htaccess rules just serve up the cached content.

And that's using Apache!
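The pattern looks roughly like this (a sketch of the idea, not Boost's exact rules; the cache path and cookie name are assumptions):

  RewriteEngine On
  # anonymous GETs with a pre-generated copy on disk skip PHP entirely
  RewriteCond %{REQUEST_METHOD} =GET
  RewriteCond %{HTTP_COOKIE} !SESS [NC]
  RewriteCond %{DOCUMENT_ROOT}/cache/%{HTTP_HOST}%{REQUEST_URI}.html -f
  RewriteRule .* cache/%{HTTP_HOST}%{REQUEST_URI}.html [L]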


That'll definitely give you a speed boost, as the interpreter doesn't have to get involved (if it's configured right). However, this article is talking about IO clogging Apache workers. The issue is that the worker must do two jobs: generate the page (using mod_php) and then send it to the client. Using nginx in front of Apache means that nginx handles the IO to the client (which it is extremely good at), leaving the Apache workers to handle nothing but generating pages, hence getting overall higher requests/sec from the same machine.
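The nginx side of that arrangement is just a plain reverse proxy (a rough sketch; the backend port and headers are assumptions):

  server {
    listen 80;
    location / {
      # nginx absorbs the slow client I/O; Apache+mod_php only generates pages
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }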


I've heard a lot of bad things about .htaccess. Is its slowness comparable to PHP vs. static files, or Apache vs. nginx?


htaccess is just a configuration file used by Apache to determine what should be served to whom.


.htaccess is read and parsed on every request. Putting the same directives in the server's virtual host config is a ton more efficient.

Me, I use php-fastcgi daemons with APC and nginx, with no Apaches. The speed bump and the drop in memory requirements were staggering. The main 'cost' is figuring out how to convert rewrite rules from one type to another, but if you're using a widely-known app, someone's already done it for you.
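For a WordPress-style app, for instance, the whole .htaccess front controller usually collapses to a single nginx rule (a sketch of the common conversion):

  location / {
    # try the exact file, then a directory index, then hand off to index.php
    try_files $uri $uri/ /index.php?$args;
  }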


Which can slow things down, though it seems much less of a problem now than when I last dealt with them. There's a lot to read on using them to speed websites up.

Minus two for an on-topic question?


The amount of slow-down is relative to what you actually write in the htaccess file. You can have two very different files that actually do the same thing.


Dropping keepalives actually hurts your best-case performance slightly. As a config change, though, it will generally help if you are using the prefork MPM (which I wouldn't, if scalability is your goal). With prefork, the scaling math of per-worker memory times concurrent connections versus your box's memory is often the first place things fall apart once you have keepalives on and real traffic.

I suggest the worker mpm if you don't want to switch to nginx.


Hint: you can just run Apache twice; once with an mpm_prefork config for your apps, and once with an mpm_event config to handle client connections.


I've been on Apache servers for years, so I come from that frame of mind. My managed VPSes are running Apache, and one of them is running nginx for static resources. I don't think my managed-hosting provider (Servint) even supports nginx. How, for argument's sake, would I move my PHP/MySQL sites to run on a managed nginx server?


At the risk of stating the obvious, you would move to a better VPS that charges as much or less for a better package :)


He means Linode.


I use straight nginx on Fanboy Adblock (which gets hundreds of SSL connections per minute), no need for Apache.


This approach really works well... I would add that off-loading MySQL to a different VPS within the same network works well too; latency within the same network is negligible. Simple optimizations, such as moving or disabling mail and other stuff you probably don't need, work too.


Indeed, pushing MySQL onto a different VPS in the same network basically halved page generation times for my sites.

This is partly because when a table contains a TEXT field, MySQL performs all joins against that table on disk, regardless of what fields you're selecting or joining on.

Which is a problem for, oh, I dunno, every PHP blog engine / bulletin board / CMS ever.


nginx is actually pretty bad at all the benchmarks I've thrown at it; you can see some here: http://www.webhostingtalk.com/showthread.php?t=918355 This was on an 8-core Intel Xeon 5520 server system; it maxed all 8 cores (18ghz) at 7,500 requests per second for static content.


You're doing them wrong. Just check results posted by Leif Hedstrom (Apache Traffic Server developer)... He's getting 100k req/s on far worse hardware: http://www.ogre.com/node/393


I found that forum thread quite hard to follow, and there are blobs of ab runs sprinkled all over. I clicked through a few pages and didn't see any configs posted, sysctl values, etc. Was it more of an ad-hoc test you were running just to get some vague ideas about performance, as opposed to a comprehensive performance comparison?

My personal (anecdotal) experience shows that nginx is amazingly fast. Some nginx configs can impact performance if you aren't careful (I am thinking of try_files vs if-tests[1]), and there are various nginx tunables (worker process count, cpu affinity, etc) as well.

I have personally seen issues with apachebench when there are lots of connections (not to mention linux system tunables such as max file descriptors!), iptables state tracking, etc...

[1]: http://wiki.nginx.org/IfIsEvil


This was stock RHEL 5. I tested nginx compiled with gcc -O2, -O3, and -O2 -ffast-math; epoll was enabled; I even specified the cache line size at build time and tested with AIO force-enabled. The nginx config consisted of simply serving a simple static HTML page. I tried every option under the sun at the time (tcp_nopush, etc.). I spent days testing Apache vs nginx vs LiteSpeed instead of drinking the Kool-Aid.


Did you try other things like thttpd?


I haven't read through the forum thread, as there are ten pages and much of it seems off-topic. But the requests per second don't even matter here; the fact that you were able to max the CPU at all shows me you did something wrong. I easily get 12k requests/sec on an i7 box, and the CPU wasn't even stressed then; eventually it was IO that limited it.


Even though it was eight cores, he only had 1024MB of system RAM.


Ugh, then there is something seriously wrong with your configuration.

This is another interesting benchmark between nginx, lighttpd and Varnish: http://nbonvin.wordpress.com/2011/03/24/serving-small-static... Even on a Core i3 laptop, nginx can handle 70k req/s.


First of all, there's no such thing as 18GHz. The GHz don't sum up. It's like saying that if a woman makes a child in 9 months, two women can make a child in 4.5 months. It doesn't work that way.

Moving on to the actual issue: something is really broken in your configuration. The first thing is the file AIO, though that is Linux's fault: file AIO is broken by design, since it bypasses the kernel file cache, which is a pretty large performance hit in itself. The second thing is the fact that I can easily get 55k req/s with nginx on a Q9400, without even touching the sysctl parameters of a Ubuntu 10.04 x86_64 install. The only actual difference is nginx 1.0.x instead of the ancient 0.7.x you'd actually find in the standard repos, i.e. the production version I publish to my company's private repo.



