

Benchmark of Python web servers - nichol4s
http://nichol.as/benchmark-of-python-web-servers
A benchmark of 14 different Python WSGI web servers.
======
delano
_Ruby might be full of all kinds of rockstar programmers (whatever that
might mean), but if I had to nominate just one Python programmer for some
sort of ‘rockstar award’ I would definitely nominate Chad Whitacre. It's not
only the great tools he has created (Testosterone, Aspen, Stephane), but
mostly how he promotes them with the most awesome screencasts I have ever
seen._

It's true. I'm not sure if the first 30 seconds of any technical screencast
can get any better than this:

<http://www.zetadev.com/software/testosterone/screencast.html>

------
ubernostrum
Unfortunately, several problems have been pointed out with this benchmark:

* Some of the servers tested aren't designed to interact directly with clients over a network; they are meant to run behind something else like nginx, proxied over a local socket. The benchmark is inconsistent about whether it actually does this, which makes the numbers useless for comparison purposes.

* Several of these servers use preforking architectures that assume multiple workers are available to service requests. Yet the benchmark deliberately limited them to only one worker, which makes the numbers useless for gauging actual performance under concurrent load.
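
The proxied setup described above usually looks something like this (the socket path and `myapp:app` module:callable are placeholders, not details from the benchmark):

```shell
# Run gunicorn with several workers, bound to a local Unix socket
# instead of a public port:
gunicorn --workers 3 --bind unix:/tmp/gunicorn.sock myapp:app

# nginx then sits in front and proxies public traffic to that socket:
#   upstream app { server unix:/tmp/gunicorn.sock fail_timeout=0; }
#   location / { proxy_pass http://app; }
```

Benchmarking such a server directly over TCP with a single worker measures a configuration it was never meant to run in.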

~~~
ubernostrum
And now there are updated numbers in the graphs (labeled "gunicorn-3w") for
gunicorn, based on running it the way it's meant to be run (on a socket with
more than one worker). The results:

* Concurrent requests more than doubled

* Response times dropped 75%

* Error rate dropped 75%

It's amazing what a difference it makes to actually run something the way it
was designed to run...

~~~
davisp
I'm glad he updated it. A lot of people would've just left it. That said, I
also would've liked to see at least a note that he tried more than three
workers and saw no further improvement. When you get an almost linear
improvement, it's a bit confusing why you wouldn't keep increasing the count.

------
swannodette
I love CherryPy because I like simple software. That it can hold its own in
benchmarks is icing on the cake.

I admit that getting your head around configuration for the first time is an
hour or two of staring at documentation, but once you get past that you'll
_love_ how simple it is. Getting it running under Apache via mod_wsgi is also
a refreshing breeze.

Where Django is great for building websites, the beautiful transparency of
CherryPy is excellent for building web services (no cryptic errors).

We're using CherryPy with Routes, CouchDB, couchdb-python, couchdb-lucene, and
lxml for a web annotation project and we couldn't be happier.

------
benoitc
As David said in the comments on that post, gunicorn is specifically designed
to run behind a proxy like nginx, and you should run 2x or 2x+1 workers,
where x is the number of cores.

Also, limiting it to one worker removes any possibility of concurrency in its
responses. (grainbows, built on top of gunicorn and using gevent or eventlet,
is better for that.)

Some of the other servers in the test have the same kind of requirements.
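
As a sketch of that 2x+1 sizing rule (a heuristic, not something measured in the benchmark; the function name is made up for illustration):

```python
import multiprocessing

def suggested_workers(cores=None):
    """Rule of thumb for gunicorn: (2 x cores) + 1 workers, so some
    workers can serve requests while others are blocked on I/O."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1

print(suggested_workers(4))  # -> 9, e.g. gunicorn -w 9 myapp:app
```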

------
reynolds
What I really love about CherryPy is that I can use its web server without
having to use the framework itself.

------
megaman821
Thanks to the author for a thorough and well-written benchmark article.

I have been following the development of gevent lately, and I am glad it did
so well in these benchmarks.

------
mark_l_watson
Interesting, and I liked the CherryPy results. I used CherryPy a lot 5 or 6
years ago, back in my Python days -- simple and does the job.

That said, database access, network latency, caching, etc. are so important
that you might as well just use the framework you like, then tune if required.

~~~
devinj
They are all WSGI servers. Which one you choose probably depends only on how
quickly it can move requests between the client and the WSGI application, how
reliable it is, and so on. The framework you choose is a different layer, on
top of WSGI.
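
To make that layering concrete: a WSGI application is just a callable with this signature, and any of the benchmarked servers can host it unchanged (a generic sketch, not code from the benchmark):

```python
def application(environ, start_response):
    # environ is a CGI-style dict describing the request; start_response
    # is the server's callback for the status line and headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI\n"]

# The stdlib reference server can host it just like gunicorn or CherryPy:
#   from wsgiref.simple_server import make_server
#   make_server("localhost", 8000, application).serve_forever()
```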

------
joshu
Is it just me, or is the first graph almost impossible to read?

