

Automated Scalability with Python on nginx - mahipal
http://almirkaric.com/2010/6/14/automated-scalability/

======
rythie
I thought this guy was being sarcastic the first time I read it. If he is
serious, there are so many problems with it.

\- Getting 823 requests/second should be more than achievable with one server
and memcache. Nginx can serve over 2000 requests/second in a single process on
a single machine in 10 MB or so of memory.

\- his benchmark only tests one page, the first one (with 10 posts on it)

\- Cassandra is designed for high write bandwidth, which a single-author blog
using an external commenting system typically wouldn't need
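
The memcache approach from the first point can be sketched with nginx's
built-in memcached module: serve cached pages straight from memcache and only
fall back to the app on a miss. The ports and key scheme here are illustrative
assumptions, not anything from the article:

```nginx
http {
    upstream app {
        # the blog engine itself (hypothetical port)
        server 127.0.0.1:8000;
    }

    server {
        listen 80;
        default_type text/html;

        location / {
            # look the rendered page up in memcache by URI
            set $memcached_key "blog:$uri";
            memcached_pass 127.0.0.1:11211;
            # cache miss or memcache down -> render via the app
            error_page 404 502 = @app;
        }

        location @app {
            proxy_pass http://app;
        }
    }
}
```

The app would be responsible for writing rendered pages into memcache under
the same `blog:$uri` keys.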

------
agazso
It is wrong on so many levels I don't even know where to start. He is scaling
a blog engine (!) with a "scalability daemon" instead of generating html pages
and storing them in cache. He could then serve tens of thousands of requests
per second with one server instead of 823, and without the complex setup.
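
The "generate html pages and store them" approach amounts to a few lines of
nginx config: check for a pre-rendered file on disk first, and only hit the
blog engine on a miss. The paths and port are illustrative:

```nginx
server {
    listen 80;
    # directory where the engine writes pre-rendered pages (assumed path)
    root /var/cache/blog-html;

    location / {
        # serve the pre-generated file if it exists, else fall through
        try_files $uri $uri/index.html @engine;
    }

    location @engine {
        proxy_pass http://127.0.0.1:8000;
    }
}
```

Everything that hits the static file never touches the backend at all.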

~~~
dageroth
The blog engine is just a proof of concept - I don't think he is interested in
actually scaling his blog...

------
piotrSikora
Please don't use it in production. As was already said, it's wrong on way too
many levels:

1) This "daemon" reloads nginx (by sending "HUP" signal) every X seconds.
There is nothing "dynamic" about it. Also, nginx starts new workers on each
reload, but old workers are kept alive until they complete serving all
requests. This means that you could end up with thousands of workers if you've
got connections that take a while to complete (big files, comet servers, etc).

2) There is already an nginx module that does exactly that (dynamically
changes upstream status, starts and stops backends on demand, depending on
load, etc).

3) nginx's cache can scale blogs much better on a single machine; there is no
need to start additional backend servers.
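
The nginx-cache setup from point 3 is a short config as well: cache backend
responses inside nginx itself, so repeated hits never reach the app. The
cache path, zone name, sizes, and timings below are illustrative assumptions:

```nginx
http {
    # on-disk cache with a 10 MB key zone (names/sizes are examples)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog:10m
                     max_size=100m inactive=10m;

    server {
        listen 80;

        location / {
            proxy_cache blog;
            # keep successful responses for 5 minutes
            proxy_cache_valid 200 5m;
            proxy_pass http://127.0.0.1:8000;
        }
    }
}
```

With something like this, a single backend process is enough for a blog, and
there is no daemon or reload loop involved.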

