I said transaction, not query. A database transaction lives on a single connection at a time, and queries within the transaction are performed serially.
Unless you're doing e-commerce or banking sites, that's far less common than non-transactional requests.
edit: also, I'd challenge you to prove that a web request that needs to make ten read queries to a relational database, from Python, gets better performance by opening ten separate database connections (or checking them out from a pool), running one query on each, bundled into the async construct of your choice and then merged back into your response, vs. just running ten queries on a single connection in serial. Assume these are not slow reporting-style transactions, just the usual "load the user's full name, load the current status, load the user's current items", etc.: the small queries common in a web request that is looking for a very fast response.
Note that at the very least, it means your web application needs ten times as many database connections for a given load. In database-land that's more or less crazy.
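For concreteness, this is the single-connection, serial shape being defended, sketched with Python's built-in sqlite3 standing in for a networked RDBMS (the table and data here are invented for illustration):

```python
import sqlite3

# sqlite3 stands in for a networked RDBMS; the point is the shape:
# one connection, one transaction, small queries run back to back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'active')")

with conn:  # one transaction for the whole request
    full_name = conn.execute(
        "SELECT name FROM users WHERE id = ?", (1,)).fetchone()[0]
    status = conn.execute(
        "SELECT status FROM users WHERE id = ?", (1,)).fetchone()[0]
    # ...eight more small reads, all serial, all on this one connection

print(full_name, status)
```

The async alternative being challenged would need a separate connection for each of those reads to actually overlap them.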
Anyway, I respect your position that yes, for the average user, throwing a bunch of "async" in there isn't going to make their code faster, and it's just cargo cult programming. And yes, there is some tradeoff curve where sometimes, for a small benefit, it's not worth the effort to worry about it, as with all things. But it's just a tough sell to argue that no one should need this :-)
More and more often today, the backend serves as glue between frontend clients and a horde of services / data systems. This is often an I/O heavy workload (wait while I make a request, wait for a response, wait while I download x10). This kind of workload is ripe for speeding up with async. That's all I'm saying!
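A minimal sketch of that fan-out shape, with asyncio.sleep standing in for the network waits (the service names and delays are invented):

```python
import asyncio
import time

async def fetch(service: str, delay: float) -> str:
    # stand-in for an HTTP round-trip to a backend service
    await asyncio.sleep(delay)
    return f"{service}: ok"

async def main() -> float:
    start = time.perf_counter()
    # the three waits overlap instead of adding up
    results = await asyncio.gather(
        fetch("profile", 0.05),
        fetch("inventory", 0.05),
        fetch("pricing", 0.05),
    )
    elapsed = time.perf_counter() - start
    print(results, f"elapsed ~{elapsed:.2f}s")  # ~0.05s rather than ~0.15s
    return elapsed

elapsed = asyncio.run(main())
```

This is the best case for async: the waits dominate and overlap cleanly, which is exactly the glue-between-services workload described above.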
At least in Java, C#, Golang. And even psycopg2 offers a pooling abstraction (I guess it's not used in Django, but SQLAlchemy offers that as well).
But of course running a blocking driver atop a non-blocking framework does not give the best performance.
However, just challenging it without proof is not really that useful.
Also, some workloads are better served by threaded servers while others suit an async fashion. It's also highly unlikely that just wrapping your database connection in an async function will make it faster or better suited to an async workload. If you are not non-blocking from the ground up you will still carry a lot of overhead around.
OK, but say you're doing 500 req/s; if base latency is 50ms, you're going to have at least 25 requests in play at once, and at ten connections each that's 250 database connections. That's one worker process. If your site is using two app servers, or your web service has multiple worker processes, etc., now you have 500, 750, etc. database connections in play at capacity. This is a lot. Not to mention you'd better be using a middleware connection pool if you have that many connections per process, to at least reduce the DB connection use for processes that aren't at capacity.
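That estimate is just Little's law (in-flight requests = arrival rate × latency); a quick sketch with those hypothetical numbers:

```python
# Back-of-envelope connection math; all inputs are the thread's
# hypothetical numbers, not measurements.
req_per_s = 500          # assumed load
latency_s = 0.050        # assumed base latency per request
conns_per_request = 10   # one connection per concurrent query

in_flight = req_per_s * latency_s                 # concurrent requests
per_process = int(in_flight * conns_per_request)  # DB connections, one worker

print(in_flight)     # 25.0
print(per_process)   # 250
for workers in (2, 3):
    print(workers, workers * per_process)  # 500, then 750
```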
On MySQL, each DB connection is a thread (MariaDB also has a newer thread-pooling option), so after all the trouble we've gone to in order to avoid threads, we're stuck with them anyway. On PostgreSQL, each DB connection is a fork(), and those use a lot of memory. In both cases, we have to be mindful of having too many connections in play for the DB servers to perform well.

Here, we're purposely using many, many more DB connections than we need on the client side, trying to grab at a fleeting performance gain by spreading small queries among several transactions/connections per request, which is not how these databases were designed to be used (a DB like Redis, sure; an RDBMS, not so much). And on the client side, I still argue that the overhead of all the async primitives is going to be in a very tight race not to end up slower than running the queries in serial (plus the code is much more complicated), and throughput across many requests is reduced with this approach. Marginal, fleeting gains on the client vs. a huge price to pay on the server, plus code complexity, plus ACID is gone, makes this a pretty tough value proposition.
PostgreSQL wiki at https://wiki.postgresql.org/wiki/Number_Of_Database_Connecti...: "You can generally improve both latency and throughput by limiting the number of database connections with active transactions to match the available number of resources, and queuing any requests to start a new database transaction which come in while at the limit." Which means stuffing a load of connections into each request limits the throughput of your application... and throughput is the reason we'd want to use non-blocking IO in the first place.
> However just challenging it without proof is not really that useful.
this is all about a commonly made assertion (async == speed) that is never shown to be true, and I'm only asking for proof of that assertion. Or maybe blog posts like this one could be a little more specific in their language, which would go a long way towards bringing people back to reality.
yes, there are workloads where everything you say is true. but most other workloads, like 80% of all web pages, don't need what you describe.
also, some pages don't have a conventional database at all. some people have a cache or other services in place, some use microservices, some connect to other internet providers or to services like lpd/ipp, etc.
the world is just not black and white. everything you describe is utterly crap, since you just try to talk around it because your application is not as complex as others'.
and yes, in probably 60-70% of cases async will not yield more "speed"/"performance", whatever you call it.
I work with OpenStack. I don't think you're going to find something more complicated :). (It does use eventlet for most services, though it's starting to move away from that model back to mod_wsgi / threads.)
and also, on my count example: it just makes no sense to fetch the count and the list data inside one transaction (ok, there are cases, but they're much rarer, because mostly it's not too bad to give users a slightly wrong count; you don't need strict serializability).