This is why a database like SQLite has its miraculous read performance: "competing not with Postgres but with fopen()", as somebody said.
But any mutation locks the entire database. This, again, simplifies the implementation a lot, and guarantees serialized DML execution.
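The database-wide write lock is easy to observe from Python's built-in `sqlite3` module. A minimal sketch (table name and file path are made up for illustration): one connection takes the write lock, another can still read the committed state but is locked out of writing until the first commits.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# timeout=0 makes a blocked writer fail immediately
# instead of retrying for the default 5 seconds.
writer = sqlite3.connect(path, timeout=0)
writer.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, page TEXT)")
writer.commit()

reader = sqlite3.connect(path, timeout=0)

writer.execute("BEGIN IMMEDIATE")  # take the write lock on the whole file
writer.execute("INSERT INTO hits (page) VALUES ('/home')")

# Reads on another connection still succeed (and see only committed data)...
count = reader.execute("SELECT COUNT(*) FROM hits").fetchall()[0][0]

# ...but a second writer is locked out until the first one commits.
err = None
try:
    reader.execute("INSERT INTO hits (page) VALUES ('/about')")
except sqlite3.OperationalError as exc:
    err = exc  # "database is locked"

writer.commit()  # releases the lock; now anyone may write
```

(WAL mode relaxes this somewhat for readers, but writers still serialize: one at a time, for the whole database.)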
Rather few web apps need high write concurrency and low write latency. A great many web apps serve 99.9% of their hits with SELECTs only, and when some new data does need to be persisted, the user can very well wait a second or two, so rarely does it happen. But this is often forgotten.
The same realization brought a wave of static site generators: updates are so rare that serving pages from a database makes no sense, and even caching them makes little sense: just produce the "pre-cached" pages once and serve them as is.
Maybe a similar wave can come to lighter-weight web apps: updates are so rare that you don't need Postgres to handle them. You can use SQLite, or Redis, or flat files as the source of your data, with massively less headache. Horizontal scaling becomes trivial. Updates are still possible, of course; you just pay a much lower complexity price for them, in exchange for a higher latency price.
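One way this could look in practice, sketched with Python's stdlib `sqlite3` (the table, helpers, and file path are hypothetical): request handlers open the file read-only, so they physically cannot take the write lock by accident, while the rare update path opens its own short-lived read-write connection.

```python
import os
import sqlite3
import tempfile

def apply_update(path: str, slug: str, body: str) -> None:
    # The rare write: a normal read-write connection, held only briefly.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS pages (slug TEXT PRIMARY KEY, body TEXT)")
    con.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (slug, body))
    con.commit()
    con.close()

def read_page(path: str, slug: str):
    # The common read: strictly read-only (URI mode=ro), cheap per request.
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    row = con.execute("SELECT body FROM pages WHERE slug = ?", (slug,)).fetchone()
    con.close()
    return row[0] if row else None

path = os.path.join(tempfile.mkdtemp(), "site.db")
apply_update(path, "about", "Hello")
body = read_page(path, "about")
```

Any attempt to write through the `mode=ro` connection raises `sqlite3.OperationalError`, which is exactly the kind of guardrail a read-mostly app wants.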
It's easy to notice that horizontally scaled local databases are already a known pattern: it's called "sharding". Unless you need to run arbitrary analytical queries across shards, eventual replication of changes, where needed, is sufficient. (BTW, this is how many very large distributed databases operate anyway.) Litestream already seems to support creation of read-only replicas. This can allow each shard to have a copy of all the data in the cluster, while only being able to update its own partition. This is a very reasonable setup even for some rather high-load and data-packed sites.
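The ownership rule behind this setup can be sketched in a few lines; everything here (shard count, class, key scheme) is a made-up illustration, not any particular system's API. Each shard keeps a replicated copy of all data for reads, but a stable hash decides the single shard allowed to mutate a given key.

```python
import hashlib

NUM_SHARDS = 4  # illustrative cluster size

def shard_for(key: str) -> int:
    # Stable hash, so every node independently agrees on who owns a key.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

class Shard:
    def __init__(self, shard_id: int):
        self.shard_id = shard_id
        self.data = {}  # stands in for the full replicated local copy

    def read(self, key):
        # Any shard can serve any read from its local copy.
        return self.data.get(key)

    def write(self, key, value):
        # Writes are only accepted for keys this shard owns; changes to
        # other shards' local copies arrive later via replication.
        if shard_for(key) != self.shard_id:
            raise PermissionError(f"key {key!r} belongs to shard {shard_for(key)}")
        self.data[key] = value
```

Since every write has exactly one legal destination, the single-writer nature of each local database stops being a bottleneck: write contention exists only within a shard, never across the cluster.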