Django Setup using Nginx and Gunicorn (senko.net)
104 points by senko on June 18, 2011 | 35 comments



Or you could use my Puppet modules, which handle this for you on Debian or Ubuntu, using Monit instead of supervisord: http://journal.uggedal.com/deploying-wsgi-applications-with-...


I'm a fan of uWSGI, personally. If anyone's interested, I wrote up (before it was cool) a guide to Django/nginx/uwsgi in more or less the same format:

http://posterous.adambard.com/start-to-finish-serving-mysql-...


Not a bad how-to... he mentioned the mod_wsgi route, and that's the one I'm most fond of. You can still use Nginx to handle all static files and proxy all other requests to Django.

A tutorial link if you're interested: http://www.meppum.com/2009/jan/17/installing-django-ubuntu-i...
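For reference, the split described above boils down to an nginx server block along these lines (the paths and the Apache/mod_wsgi port are assumptions, not taken from the linked tutorial):

    server {
        listen 80;
        server_name example.com;

        # static files served straight off the disk by nginx
        location /static/ {
            alias /srv/myproject/static/;
        }

        # everything else proxied to Apache/mod_wsgi listening locally
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:8080;
        }
    }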


> The best seem to be Gunicorn and uWSGI, and Gunicorn seems the best supported and most active project.

This is not so cut and dried. In May, Gunicorn had 12 mailing list messages, while uWSGI had 191. uWSGI has a Twitter account with 225 followers, while Gunicorn has none. #gunicorn on freenode has 32 users, but #nginx on freenode seems to have a lot of uWSGI users.

Also, the author mentions nothing about installing the Nginx PPA for Ubuntu. If you use the default repo on Lucid (as suggested by the author), you will end up with a very old version of Nginx that doesn't even natively support wsgi.


As you say, it's not so cut and dried.

Gunicorn has only had a mailing list since about May. Most of our dev work happens in IRC or via the GitHub issue tracker. The three committers, @benoitc, @tilgovi, and @davisp, have a combined 1,709 Twitter followers. There are also 479 watchers and 64 forks of the main development repo on GitHub.

Bottom line, numbers like this don't really mean much. In the end, people should investigate any project they think is a candidate and make a decision based on their specific criteria rather than try and use some proxy metric to make the decision.


> If you use the default repo on Lucid (as suggested by the author), you will end up with a very old version of Nginx that doesn't even natively support wsgi.

The Nginx+Gunicorn combination uses HTTP, so WSGI support is not needed in that case.
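In other words, nginx just speaks plain HTTP to Gunicorn, e.g. something like this (the address Gunicorn is bound to is an assumption):

    location / {
        proxy_set_header Host $host;
        # Gunicorn listening on a local port (a unix socket works as well)
        proxy_pass http://127.0.0.1:8000;
    }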


uWSGI supports HTTP and FastCGI too, natively. Using the uwsgi protocol (which has nothing to do with the WSGI standard) is only a way to increase performance and add a bunch of features. uWSGI and gunicorn are very different projects, with very, very different targets. Trying to make a fair comparison is impossible.


I'm not clear on the advantages of using nginx+Gunicorn over nginx+proxy-to-Apache/mod_wsgi.


It's not really hip and trendy; it's the standard way to go nowadays. The advantages to me: Nginx is much lighter on resources for any given number of concurrent connections, I can keep those connections open if I really need to, and I have found it flexible enough for almost all scenarios (hosting SVN and HTTP/1.1 proxying to backends are the only things I can think of that it doesn't do, but SVN's days are numbered for my usage anyway).

Nginx's configuration syntax tends to be easy to read, and the community is helpful. It's also well tested and hosts a decent percentage of websites.

I question the need for Apache2 these days, and I question anyone who blindly recommends it without knowing the alternatives, as they tend to be uninformed about modern hosting environments.


That's why I run Apache+mod_wsgi behind an nginx proxy (and have nginx serve up static files etc directly).


A lot of people use the Apache+nginx+mod_wsgi combination, but it seems to me that's more complicated than having just one web server. Apache might be more versatile, but if it's only being used for WSGI, and you're running nginx anyway, I don't see the need for all the bells and whistles Apache has.


Apache is also very well-tested and extremely reliable. For some people and organizations, that's a very important factor.


I happen to find this post very helpful as a student of the craft, and your comments seem to be defensive of a traditional approach because you don't care to examine the reasons for the choices you've made. The pros and cons of either choice would be more helpful than demonstrating animosity toward an approach you've chosen to dismiss.

It's great that you have a setup you know and are comfortable with, but for those of us who would like to explore all the options, these posts are very helpful. It sucks that you have to dismiss this as a ploy to be "hip".


I didn't find the parent defensive. It appears to be more of an observation. Perhaps it was edited after the fact?


It wasn't edited.

I'm not sure why 18pfsmt is overly sensitive. Maybe he subconsciously feels that his new techniques are inferior to proven approaches, and exhibits this through outrage?


In reality, not that much. There's a pretty detailed benchmark breakdown here [1]. From my experience, the point where you see the different methods start to diverge is past the point you're going to be able to handle on a single box with a real-world application anyway. I find that gunicorn is slightly easier to set up (not requiring Apache to be installed, etc.), so I've stuck with it. To each their own!

[1] http://nichol.as/benchmark-of-python-web-servers


Simon, slightly better memory usage in most cases; more so if you don't end up stripping out all of the Apache modules you aren't using. But it's such a small difference that it usually isn't worth changing from what you know and are comfortable with.


Fewer packages to keep updated, thus less maintenance.


The number of packages involved is a very poor measure of the effort needed to maintain a system.

Apache and more established technologies have had much more testing and are far more mature. Their releases are much less frequent, largely because most of the problems have already been worked out. Newer technologies, on the other hand, aren't in such a position.

Although your solution may involve a smaller number of packages, having to perform weekly or even monthly updates due to the immaturity of the software is much riskier and more disruptive than having to upgrade Apache and a few of its modules once or twice a year.


Excellent points. Thanks!


It's hipper. More trendy. Lets you write blog articles that make it sound like you're cutting-edge.


This is pretty much my exact setup when rolling a custom server, sans-Upstart (I prefer supervisord just because it's what I'm used to).
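For anyone curious what the supervisord side of that looks like, a minimal program block might be something like this (the paths, project name, and port are made up; older Django/Gunicorn setups used the gunicorn_django command instead of pointing at a WSGI module):

    [program:myproject]
    command=/srv/myproject/env/bin/gunicorn -w 3 -b 127.0.0.1:8000 myproject.wsgi:application
    directory=/srv/myproject
    user=www-data
    autostart=true
    autorestart=true
    redirect_stderr=true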


Am I the only one who's never had gunicorn crash on him? I don't run supervisor or anything, but I've never had any issues either.


If you have Gunicorn crash, please make sure to report any tracebacks in the logs to the issue tracker so we know that something is broken.

https://github.com/benoitc/gunicorn/issues


I run supervisor to be able to restart servers at will. I've never had gunicorn crash at me.


Isn't that what init.d scripts are for?


You could also write those, yeah. I mentioned Upstart, which is a replacement for init.d scripts in Ubuntu. There's also djb's daemontools, and a couple of other ways to do it.
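An Upstart job for Gunicorn is only a few lines, e.g. (the file name, paths, and ports here are hypothetical):

    # /etc/init/myproject.conf
    description "gunicorn for myproject"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    exec /srv/myproject/env/bin/gunicorn -w 3 -b 127.0.0.1:8000 myproject.wsgi:application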


Everyone always shows a single-server setup. What if you have a dozen sites? You can't just give each one 9 processes; you will swamp your box. Is there any way to do a master setup where you can dynamically feed it your settings file?


I've found uwsgi is great for multiple site hosting. There are numerous ways to configure uwsgi for this purpose, one of my favorites: http://projects.unbit.it/uwsgi/wiki/VirtualHosting. Newer versions of uwsgi also have a feature called emperor mode which is pretty slick: http://projects.unbit.it/uwsgi/wiki/Emperor.
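Roughly, emperor mode watches a directory of per-site configs and spawns one uWSGI instance per file (the directory path here is an assumption):

    # one ini file per site in the vassals directory; the emperor starts an
    # instance for each file and reloads it whenever the file is touched
    uwsgi --emperor /etc/uwsgi/vassals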


If you're already running a dozen sites on one server, collectively they're probably not getting very much traffic. You may very well be able to use a large number of processes just fine.


That is probably true CPU-wise; I am just worried about memory. If each process is 50 MB and you have 12 sites with 9 processes each, that is over 5 GB. If my machine only has 4 GB of RAM, some of those processes must end up in swap.


You can do pretty well with 1-2 processes per site; if you don't have the traffic to warrant more, there's no reason to run them. Gunicorn can also use an async worker type (gevent), which can help keep the number of needed processes down.

Django also has a sites framework, so you can run multiple sites per Django instance. It's less encapsulated, but if you run a high number of websites with a low amount of traffic each, it makes a lot of sense.
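For instance, switching a Gunicorn instance to the gevent worker is just a flag (the module path, port, and worker count are placeholders, and gevent must be installed alongside Gunicorn):

    gunicorn -k gevent -w 2 -b 127.0.0.1:8001 myproject.wsgi:application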


It's pretty easy to add multiple upstream servers to nginx. If you're at that point, you want to be reading the docs and understanding what's going on anyway.

Also, gunicorn wants n+1 workers, where n is the number of cores you want to devote to it. It's in the docs. So an 8-core machine would want 9 workers.

If you've got multi-server scaling issues, cut and paste won't help you.
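For what it's worth, fanning nginx out to more than one backend is just an upstream block (the addresses are placeholders):

    upstream django_backends {
        server 10.0.0.2:8000;
        server 10.0.0.3:8000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://django_backends;
        }
    }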


Oops, I've apparently misunderstood what you were asking.

You want something like: include /etc/nginx/vhosts/enabled/*;

and just put your server directives into files in that enabled directory.
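So each site gets its own file under that directory, something like this (file name, paths, and port are made up):

    # /etc/nginx/vhosts/enabled/site1
    server {
        listen 80;
        server_name site1.example.com;
        location /static/ { alias /srv/site1/static/; }
        location / { proxy_pass http://127.0.0.1:8001; }
    }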


Thank you!



