
> "automatic" scaling of processes/threads based on load characteristics

How is this done in detail? In the case of Flask, does it create new processes or threads? And how quickly can it create a new Flask process/thread to serve a new request?

EDIT: This page documents uWSGI's autoscaling configuration settings: http://uwsgi-docs.readthedocs.org/en/latest/Cheaper.html - it looks like scaling is driven by various heuristics and algorithms; in practice, spawning a new process for each request would be too slow, so some resources are inevitably spent on idle workers.
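For reference, a hedged sketch of what those "cheaper" settings look like in a uWSGI ini file; the option names come from the linked docs, but the numbers here are arbitrary:

```ini
[uwsgi]
; sketch of the "cheaper" adaptive-worker subsystem settings
workers = 16           ; absolute maximum number of worker processes
cheaper = 2            ; minimum number of workers to keep alive
cheaper-initial = 4    ; number of workers to start with
cheaper-step = 2       ; how many workers to spawn at a time
cheaper-algo = spare   ; default algorithm: spawn more when none are idle
```

So rather than one process per request, it keeps a small floor of workers and grows toward the ceiling in steps as load rises.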

I would think that ideally the number of processes/threads would exactly match the number of live requests; otherwise you either have too few instances to handle all the requests as fast as possible, or too many instances, which means more resources are used than necessary, increasing costs.

I think Node.js is an improvement because the architecture is better suited for scaling and all the dependencies are asynchronous by default. Of course, I think there are other reasons for Node.js (e.g. isomorphism, easier full-stack development etc.), but I guess that's another discussion, really.

A count of the flags that you can pass to uWSGI shows 976 different options. It's highly configurable. WSGI itself is a synchronous protocol, but (as icebraining pointed out) uWSGI can be run in async mode [0] which ends up looking much like node.js.

In terms of the ideal number of processes/threads, I'm not so sure matching the number of live requests is correct. It's going to depend on your various resource constraints, but if requests are quick to service, there's no problem with having other requests waiting in a queue.
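That queueing intuition can be made concrete with Little's law: under steady traffic, the average number of in-flight requests is just arrival rate × mean service time, so a small fixed pool of workers (plus a short queue to absorb bursts) covers a much larger request rate. A sketch with assumed numbers:

```python
# Little's law: average concurrency L = lambda * S, where lambda is the
# arrival rate and S is the mean service time. Numbers are assumptions.
arrival_rate = 200.0   # requests per second
service_time = 0.02    # 20 ms mean time to service one request

avg_concurrency = arrival_rate * service_time
print(f"average in-flight requests: {avg_concurrency:.1f}")
```

At 200 req/s with 20 ms responses, only about 4 workers are busy at any instant, so pinning worker count to "live requests" buys little over a modest static pool.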

I once was called in to fight a fire where the team said, "we don't understand, we have horizontal scalability, but adding more machines seems to be making things run slower!" - Erm, yeah, because your single db server is on its knees :-)

[0] http://uwsgi-docs.readthedocs.org/en/latest/asyncio.html

Edit to add that I've just looked through the cheaper docs you mentioned and that behaviour all looks pretty sane. You can have a good default number of workers and expand as required. Also, uWSGI forks workers after the app is loaded in memory so you get fast copy-on-write behaviour during worker start.
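The preload-then-fork startup can be sketched in plain Python (POSIX only; `app_data` is a stand-in for a loaded application, not anything from uWSGI itself):

```python
import os

# sketch of uWSGI-style startup: the master loads the app once,
# then forks; workers share those memory pages copy-on-write
app_data = bytes(1024 * 1024)  # stands in for the loaded application

pids = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        # worker process: app_data is already in memory, nothing to reload
        os._exit(0)
    pids.append(pid)

for pid in pids:
    os.waitpid(pid, 0)
print("workers forked after app load:", len(pids))
```

Because the expensive import/initialisation happens once in the master, spawning an extra worker under the cheaper subsystem is just a cheap fork.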

uWSGI does have an async mode, which works together with an async loop engine like gevent: http://uwsgi-docs.readthedocs.org/en/latest/Async.html
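A hedged sketch of what enabling that looks like (requires uWSGI built with the gevent plugin; `myapp:app` is a hypothetical module:callable):

```ini
[uwsgi]
; sketch: gevent loop engine handling many requests per worker
http = :8080
module = myapp:app
gevent = 100           ; number of async cores (greenlets) per worker
```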

Of course, this doesn't change your app and its dependencies to work in an async-friendly way, but then again, Node.js callbacks are hardly transparent either.

Well, the difference is that the entire ecosystem of Node.js is built around async. Fitting it into Python is very hard, and you'd better have a really good reason for trying it. It just doesn't interact favorably with a lot of common Python tools (SQLAlchemy being one example).
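The friction is easy to demonstrate: a single synchronous driver call (the way classic SQLAlchemy talks to the database) stalls the entire event loop, while an async-aware call overlaps with others. A minimal sketch using stdlib asyncio (Python 3.7+), with `sleep` standing in for a 100 ms query:

```python
import asyncio
import time

async def blocking_query():
    # a synchronous driver call blocks the whole event loop
    time.sleep(0.1)

async def async_query():
    # an async-aware call yields to the loop while waiting
    await asyncio.sleep(0.1)

async def timed(coro_fn, n=5):
    # run n "queries" concurrently and measure wall-clock time
    t0 = time.monotonic()
    await asyncio.gather(*(coro_fn() for _ in range(n)))
    return time.monotonic() - t0

async def main():
    return await timed(blocking_query), await timed(async_query)

blocking, nonblocking = asyncio.run(main())
print(f"sync calls: {blocking:.2f}s, async calls: {nonblocking:.2f}s")
```

Five "concurrent" sync calls serialize to roughly 0.5 s, while five async calls finish in about 0.1 s - which is why one sync library in the stack undoes the whole async architecture.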

This. Flask can technically run under an async worker (see: http://docs.gunicorn.org/en/stable/settings.html). But gevent is a massive monkey-patch on top of the Python standard library to make it async-friendly.
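For the gunicorn route, selecting the async worker is just a setting from the page linked above; a sketch of a `gunicorn.conf.py` (the values are illustrative):

```python
# sketch of a gunicorn config file selecting the gevent worker class
worker_class = "gevent"  # cooperative workers via gevent monkey-patching
workers = 4              # number of worker processes
```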

If you're hitting enough load where async really matters, just skip Python altogether. And please, please, don't use Node. Just use Golang or Scala for true multithreading, and you'll save yourself the headaches JavaScript will bring for something that's not truly parallelized.
