Not a bad how-to... he mentions the mod_wsgi route, and that's the one I'm most fond of. You can still use Nginx to handle all static files and proxy all other requests to Django.
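Roughly, that split looks like this in an Nginx server block (a minimal sketch; the server name, paths, and backend port are assumptions, not from the article):

    server {
        listen 80;
        server_name example.com;

        # Serve static assets straight from disk.
        location /static/ {
            alias /srv/myproject/static/;
        }

        # Proxy everything else to the Django backend
        # (gunicorn/uWSGI listening on localhost, or Apache+mod_wsgi).
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8000;
        }
    }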
> The best seem to be Gunicorn and uWSGI, and Gunicorn seems the best supported and most active project.
This is not so cut and dried. In May, Gunicorn had 12 mailing list messages, whereas uWSGI had 191. uWSGI has a Twitter account with 225 followers, whereas Gunicorn has none. #gunicorn@freenode has 32 users, but #nginx@freenode seems to have a lot of uWSGI users.
Also, the author mentions nothing about installing the Nginx PPA for Ubuntu. If you use the default repo on Lucid (as suggested by the author), you will end up with a very old version of Nginx that doesn't even natively support WSGI.
Gunicorn has only had a mailing list since about May. Most of our dev work happens on IRC or via the GitHub issue tracker. The three committers, @benoitc, @tilgovi, and @davisp, have a combined 1709 Twitter followers. There are also 479 watchers and 64 forks of the main development repo on GitHub.
Bottom line: numbers like these don't really mean much. In the end, people should investigate any project they think is a candidate and make a decision based on their specific criteria, rather than trying to use some proxy metric to make the decision.
> If you use the default repo on Lucid (as suggested by the author), you will end up with a very old version of Nginx that doesn't even natively support WSGI.
The Nginx+Gunicorn combination talks plain HTTP, so Nginx-side WSGI support is not needed in that case.
uWSGI natively supports HTTP and FastCGI too. Using the uwsgi protocol (which has nothing to do with the WSGI standard) is only a way to increase performance and add a bunch of features. uWSGI and Gunicorn are very different projects with very, very different targets; trying to make a fair comparison is impossible.
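To make the distinction concrete, on the Nginx side the difference is just which pass directive you use. These are alternative location blocks, and the addresses are placeholders:

    # Option 1: talk plain HTTP to the backend (gunicorn, or uWSGI in HTTP mode).
    location / {
        proxy_pass http://127.0.0.1:8000;
    }

    # Option 2: talk the binary uwsgi protocol to a uWSGI instance
    # (requires an Nginx build that includes the uwsgi module).
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;
    }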
It's not really hip and trendy; it's the standard way to go nowadays. The advantages to me are: Nginx is much lighter on resources for any given number of concurrent connections, I can keep those connections open if I really need to, and I have found it flexible enough for almost all scenarios. (Hosting SVN and HTTP/1.1 proxying to backends are the only things I can think of that it doesn't do, but SVN's days are numbered for my usage anyway.)
Nginx's configuration syntax tends to be easy to read, and the community is helpful. It's also well tested and hosts a decent percentage of websites.
I question the need to use Apache2 these days, and I question anyone who blindly recommends it without knowing the alternatives, as they tend to be uninformed about modern hosting environments.
A lot of people use the Apache+Nginx+mod_wsgi combination, but that seems more complicated to me than running just one web server. Apache might be more versatile, but if it's only being used for WSGI, and you're running Nginx in front anyway, I don't see the need for all the bells and whistles Apache has.
I happen to find this post very helpful as a student of the craft, and your comments seem to be defensive of a traditional approach because you don't care to examine the reasons for the choices you've made. The pros and cons of either choice would be more helpful than demonstrating animosity toward an approach you've chosen to dismiss.
It's great that you have a setup you know and are comfortable with, but for those of us who would like to explore all the options, these posts are very helpful. It sucks that you have to dismiss this as a ploy to be "hip".
I'm not sure why 18pfsmt is overly sensitive. Maybe he subconsciously feels that his new techniques are inferior to proven approaches, and exhibits this through outrage?
In reality, not that much. There's a pretty detailed benchmark breakdown here [1]. From my experience, the point where you see the different methods start to diverge is past the point you're going to be able to handle on a single box with a real-world application anyway. I find that gunicorn is slightly easier to set up (not requiring Apache to be installed, etc.), so I've stuck with it. To each their own!
Simon, slightly better memory usage in most cases, more so if you don't end up stripping out all of the Apache modules you aren't using. But it's such a small difference that it usually isn't worth changing from what you know and are comfortable with.
The number of packages involved is a very poor measure of the effort needed to maintain a system.
Apache and more established technologies have had much more testing and are far more mature. Their releases are often far less frequent, simply because most of the problems have already been worked out. Newer technologies, on the other hand, aren't in such a position.
Although your solution may involve a smaller number of packages, having to perform weekly or even monthly updates due to the immaturity of the software is much riskier and more disruptive than having to upgrade Apache and a few of its modules once or twice a year.
You could also write those, yeah. I mentioned Upstart, which is a replacement for init.d scripts in Ubuntu. There's also djb's daemontools, and a couple of other ways to do it.
Everyone always shows a single-server setup. What if you have a dozen sites? You can't just give each one 9 processes; you'll swamp your box. Is there any way to do a master setup where you can dynamically feed it your settings file?
If you're already running a dozen sites on one server, collectively they're probably not getting very much traffic. You may very well be able to use a large number of processes just fine.
That is probably true CPU-wise; I am just worried about memory. If each process is 50 MB and you have 12 sites with 9 processes each, that is over 5 GB. If my machine only has 4 GB of RAM, some of those processes will end up in the swap file.
You can do pretty well with 1-2 processes per site; if you don't have the traffic to warrant it, there's no reason to run more. Gunicorn can also use an async worker type (gevent), which can help keep the number of needed processes down.
Django also has a sites framework, so you can run multiple sites per Django instance. It's less encapsulated, but if you run a high number of websites with a low amount of traffic each, it makes a lot of sense.
It's pretty easy to add multiple upstream servers to Nginx. If you're at that point, you want to be reading the docs and understanding what's going on anyway.
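For what it's worth, a rough sketch of two sites, each with its own small backend pool, looks something like this (the upstream names, ports, and hostnames are made up for illustration):

    upstream site_a_backend {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    upstream site_b_backend {
        server 127.0.0.1:8003;
    }

    server {
        listen 80;
        server_name a.example.com;
        location / { proxy_pass http://site_a_backend; }
    }

    server {
        listen 80;
        server_name b.example.com;
        location / { proxy_pass http://site_b_backend; }
    }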
Also, gunicorn wants n+1 workers, where n is the number of cores you want to devote to it. It's in the docs. So an 8-core machine would want 9 workers.
If you've got multi-server scaling issues, cut and paste won't help you.