I wanted to evaluate for myself what kind of performance to expect from SQLite on a server, and to compare the experimental `BEGIN CONCURRENT` branch against the built-in `DEFERRED` and `IMMEDIATE` behaviors.
Explanatory blog post: https://reorchestrate.com/posts/sqlite-transactions/
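For reference, here is roughly what the built-in modes look like from Python's stdlib `sqlite3` module (a rough sketch against a throwaway `demo.db`; `BEGIN CONCURRENT` isn't shown because it needs a build from the experimental branch):

```python
# Sketch of DEFERRED vs IMMEDIATE with explicit transaction control.
# Assumptions: stock SQLite, a throwaway demo.db file.
import sqlite3

conn = sqlite3.connect("demo.db", isolation_level=None)  # manage transactions manually
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

# DEFERRED: no lock is taken at BEGIN; the write lock is only acquired at the
# first write, which is where a SQLITE_BUSY can surface mid-transaction.
conn.execute("BEGIN DEFERRED")
conn.execute("INSERT INTO t (v) VALUES ('deferred write')")
conn.execute("COMMIT")

# IMMEDIATE: the write lock is taken at BEGIN, so the transaction either
# starts as the sole writer or fails fast with SQLITE_BUSY.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO t (v) VALUES ('immediate write')")
conn.execute("COMMIT")
conn.close()
```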
Since I don't know where else to ask, maybe this is a good place: how do the async wrappers around SQLite (e.g. for Node or Python) work? As far as I know, SQLite only does synchronous I/O. Is it just a pretend-async function wrapping synchronous code?
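For example, is it essentially this pattern, where the blocking call just gets pushed onto a worker thread so the event loop isn't stalled? (Rough sketch; the `AsyncDB` class is my own invention, not any particular library's API, though I believe e.g. aiosqlite does something along these lines internally.)

```python
import asyncio
import sqlite3

class AsyncDB:
    def __init__(self, path: str):
        # check_same_thread=False lets worker threads use this connection;
        # the asyncio.Lock serializes access so only one call runs at a time.
        self._conn = sqlite3.connect(path, check_same_thread=False)
        self._lock = asyncio.Lock()

    async def query(self, sql: str, params: tuple = ()):
        async with self._lock:
            # The blocking sqlite3 call runs on a worker thread; SQLite's I/O
            # itself stays synchronous.
            return await asyncio.to_thread(self._run, sql, params)

    def _run(self, sql: str, params: tuple):
        return self._conn.execute(sql, params).fetchall()

async def main():
    db = AsyncDB(":memory:")
    print(await db.query("SELECT sqlite_version()"))

asyncio.run(main())
```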
And, as a follow-up: if I have a server with, say, 100 incoming connections that all read from the database, I've got 100 readers. No problem in WAL mode. However, I could still get congested by file I/O, right? Whenever a reader is waiting on data from disk, its thread is blocked on synchronous I/O and can't run application code for any other connection. Is there any benefit to having a thread pool with a limit of more than $NUM_CPU readers?
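Concretely, the setup I'm picturing is something like this (the oversubscription factor, `NUM_WORKERS`, and the table are made up for illustration; whether the extra threads actually help presumably depends on how often reads miss the page cache):

```python
import os
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

DB_PATH = "demo.db"                         # hypothetical database file
NUM_WORKERS = (os.cpu_count() or 4) * 2     # oversubscribe to cover I/O waits

_local = threading.local()

def get_conn() -> sqlite3.Connection:
    # One connection per worker thread; WAL lets them all read concurrently.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(DB_PATH)
        _local.conn.execute("PRAGMA journal_mode=WAL")
    return _local.conn

def handle_read(row_id: int):
    # Assumes the demo table t from the earlier sketch.
    return get_conn().execute(
        "SELECT v FROM t WHERE id = ?", (row_id,)
    ).fetchall()

with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    results = list(pool.map(handle_read, range(1, 101)))
```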
And one more: Would you recommend actually pooling connections or just opening/closing the database for each request as needed? Could keeping a file handle open prevent SQLite from checkpointing under certain conditions?
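By "pooling" I mean something like this minimal sketch, where connections are handed back instead of closed (the pool size, pragmas, and table are arbitrary placeholders):

```python
import queue
import sqlite3
from contextlib import contextmanager

class ConnectionPool:
    def __init__(self, path: str, size: int = 5):
        self._pool: "queue.Queue[sqlite3.Connection]" = queue.Queue(maxsize=size)
        for _ in range(size):
            conn = sqlite3.connect(path, check_same_thread=False)
            conn.execute("PRAGMA journal_mode=WAL")
            conn.execute("PRAGMA busy_timeout=5000")
            self._pool.put(conn)

    @contextmanager
    def connection(self):
        conn = self._pool.get()          # blocks until a connection is free
        try:
            yield conn
        finally:
            self._pool.put(conn)         # return it instead of closing

pool = ConnectionPool("demo.db")
with pool.connection() as conn:
    # Assumes the demo table t from the earlier sketch.
    print(conn.execute("SELECT count(*) FROM t").fetchone())
```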