Hacker News

> It's not uncommon to have ~20 pooled connections lying around. Maybe it's not that frequently used in Python or PHP, but in various other platforms, that's just the normal case.

OK, but say you're doing 500 req/s. If base latency is 50ms, you're going to have at least 25 requests in play at once, and at ~20 pooled connections each that's 500 database connections. That's one worker process. If your site is using two app servers, or your web service has multiple worker processes, etc., now you have 1000, 1500, etc. database connections in play at capacity. This is a lot. Not to mention you'd better be using a middleware connection pool if you have that many connections per process, to at least reduce DB connection use for processes that aren't at capacity.
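A quick sketch of that arithmetic, using Little's law (in-flight requests = arrival rate × latency). All the numbers are the hypothetical ones from above:

```python
# Little's law: requests in flight = arrival rate * latency.
# Numbers are the hypothetical ones from the comment above.
req_per_sec = 500
latency_ms = 50
pool_per_request = 20   # "~20 pooled connections lying around"
workers = 3             # e.g. multiple app servers / worker processes

in_flight = req_per_sec * latency_ms // 1000      # 25 concurrent requests
conns_per_worker = in_flight * pool_per_request   # 500 DB connections
total_conns = conns_per_worker * workers          # 1500 at capacity

print(in_flight, conns_per_worker, total_conns)   # 25 500 1500
```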

On MySQL, each DB connection is a thread (MariaDB also has a newer thread-pooling option), so after all the trouble we've gone to in order to avoid threads, we're stuck with them anyway. On PostgreSQL, each DB connection is a forked process, and those also use a lot of memory. In both cases we have to be mindful of having too many connections in play for the DB servers to perform well.

We're purposely using many, many more DB connections than we need on the client side to chase a fleeting performance gain by spreading small queries across several transactions/connections per request, which is not how these databases were designed to be used (a DB like Redis, sure, but an RDBMS, not so much). And on the client side, I still argue that the overhead of all the async primitives is going to be in a very tight race not to end up slower than just running the queries in serial (plus the code is much more complicated), and throughput across many requests is reduced with this approach. Marginal/fleeting gains on the client vs. a huge price on the server, plus code complexity, plus losing ACID, makes this a pretty tough value proposition.
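A toy simulation of the pattern being criticized (no real database; `asyncio.sleep` stands in for a small query, so it shows only the scheduling shape, not real driver or transaction overhead):

```python
import asyncio
import time

QUERY_TIME = 0.01  # pretend each small query costs 10 ms of DB time

async def fake_query():
    # Stand-in for one small SQL query; a real one would also pay for
    # connection checkout, async scheduling, and transaction overhead.
    await asyncio.sleep(QUERY_TIME)

async def serial(n):
    # One connection, one transaction, queries back to back.
    for _ in range(n):
        await fake_query()

async def fanned_out(n):
    # n transactions/connections in flight at once: the client-side
    # "win" that costs the server n times the connections.
    await asyncio.gather(*(fake_query() for _ in range(n)))

async def main():
    t0 = time.perf_counter()
    await serial(5)
    t1 = time.perf_counter()
    await fanned_out(5)
    t2 = time.perf_counter()
    print(f"serial: {t1 - t0:.3f}s  fanned out: {t2 - t1:.3f}s")

asyncio.run(main())
```

The fanned-out version finishes sooner on the client, which is exactly the wall-clock gain the argument above weighs against the n-fold connection cost on the server.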

The PostgreSQL wiki at https://wiki.postgresql.org/wiki/Number_Of_Database_Connecti... says: "You can generally improve both latency and throughput by limiting the number of database connections with active transactions to match the available number of resources, and queuing any requests to start a new database transaction which come in while at the limit." Which means that stuffing a load of connections into each request limits the throughput of your application, and throughput is the reason we'd want to use non-blocking IO in the first place.
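A minimal sketch of that advice: cap the connections and queue callers at the limit instead of opening more. `BoundedPool` is a hypothetical name, and the `connect` callable stands in for a real driver's connect function:

```python
import queue

class BoundedPool:
    """Cap active DB connections and queue any caller that arrives
    while all of them are busy. `connect` is a stand-in for a real
    driver's connect function."""

    def __init__(self, connect, limit):
        self._conns = queue.Queue()
        for _ in range(limit):
            self._conns.put(connect())

    def acquire(self):
        # Blocks (i.e. queues this request) when all `limit`
        # connections are already inside a transaction.
        return self._conns.get()

    def release(self, conn):
        self._conns.put(conn)

# Toy usage: a "connection" is just a fresh object.
pool = BoundedPool(connect=object, limit=2)
c1, c2 = pool.acquire(), pool.acquire()
pool.release(c1)
c3 = pool.acquire()   # reuses c1 instead of opening a 3rd connection
print(c3 is c1)       # True
```

Real-world equivalents of this are middleware poolers like PgBouncer, which do the same queuing in front of the database.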

> However just challenging it without proof is not really that useful.

This is all about a commonly made assertion (async == speed) that is never shown to be true, and I'm only asking for proof of that assertion. Or maybe blog posts like this one could be a little more specific in their language, which would go a long way toward bringing people back to reality.

Well, all your assertions are wrong. You assume there is only one database and no read-only slaves. You also assume we always need strong serializability and ACID. Guess what? A user doesn't care if he has to reload the page until his picture is online.

Yes, there are workloads where everything you say is true. But most other workloads, probably 80% of all web pages, don't need what you describe.

Also, some pages don't have a conventional database at all. Some have a cache or other services in place, some use microservices, some connect to other internet providers or services like lpd/ipp, etc. The world is just not black and white. Everything you describe is utter crap, since you're just talking around the fact that your application is not as complex as others. And yes, in probably 60-70% of cases async will not yield more "speed"/"performance", whatever you call it.

> cause your application is not as complex as others

I work with OpenStack. I don't think you're going to find something more complicated :). (It does use eventlet for most services, though it's starting to move away from that model and back to mod_wsgi / threads.)
