
The single-writer limitation in SQLite is per-database, not per-connection. You can shard your SQLite tables into multiple database files and query across all of them from a single connection.

I agree that "the single-writer limitation isn't just a theoretical concern", but it's also solvable without forking SQLite. ulimit's the limit! If your goal is to maximize the resources of a single machine, though, Postgres is likely a better fit.
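A minimal sketch of the sharding idea above, using Python's sqlite3 (the file names, table name, and two-shard split are made up for illustration): each shard is a separate database file with its own write lock, and one connection can ATTACH the others and query across all of them.

```python
import os
import sqlite3
import tempfile

# Hypothetical layout: one "items" table split across two shard files.
tmp = tempfile.mkdtemp()
shard0 = os.path.join(tmp, "shard0.db")
shard1 = os.path.join(tmp, "shard1.db")

# Each shard is an independent database file with its own write lock.
for path, rows in [(shard0, [(1, "a")]), (shard1, [(2, "b")])]:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
    con.executemany("INSERT INTO items VALUES (?, ?)", rows)
    con.commit()
    con.close()

# A single connection can ATTACH the other shard and query across both.
con = sqlite3.connect(shard0)
con.execute("ATTACH DATABASE ? AS s1", (shard1,))
rows = con.execute(
    "SELECT id, val FROM items "
    "UNION ALL SELECT id, val FROM s1.items ORDER BY id"
).fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
con.close()
```

Reads and joins work fine across attached files; the caveats below are about write transactions.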



> You can shard your SQLite tables into multiple database files and query across all of them from a single connection.

You mean using the ATTACH statement, right? If you use WAL mode, then you cannot get transaction safety / ACID across files with ATTACH [0]:

> If the main database is ":memory:" or if the journal_mode is WAL, then transactions continue to be atomic within each individual database file. But if the host computer crashes in the middle of a COMMIT where two or more database files are updated, some of those files might get the changes where others might not.

Moreover, ATTACH does not support more than 125 attached databases (and the compile-time default, SQLITE_MAX_ATTACHED, is only 10), so that caps the number of shards per connection. [1]
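You can probe that limit empirically; in Python the loop below just ATTACHes in-memory databases until SQLite refuses. How far it gets depends on how your SQLite was compiled (typically 10 by default, 125 at most):

```python
import sqlite3

# Keep attaching fresh in-memory databases until SQLite says no.
# Hard ceiling is 125; the compile-time default (SQLITE_MAX_ATTACHED)
# is typically 10, so most builds stop well before 125.
con = sqlite3.connect(":memory:")
attached = 0
try:
    for i in range(200):
        con.execute(f"ATTACH DATABASE ':memory:' AS shard{i}")
        attached += 1
except sqlite3.OperationalError as exc:
    print(f"stopped after {attached} ATTACHes: {exc}")
con.close()
```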

ATTACH does not solve the concurrency problems either. That's why SQLite also has the experimental BEGIN CONCURRENT branch.

[0] - https://www.sqlite.org/lang_attach.html

[1] - https://www.sqlite.org/limits.html


I would argue that BEGIN CONCURRENT doesn't solve the concurrency problems either, which is why SQLite also has an HCTree experimental branch :P


Joins and transactions are a pretty big part of SQL. I'm no expert, but if my quick search results are right, both are lost in the separate-file-per-table scenario.



But then you need the write lock for all the databases (assuming your transaction involves writes).
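A small sketch of that point, assuming the default rollback-journal mode (file and table names are invented): a transaction that writes two attached files takes a write lock on each, so a second connection can't write either file until the first commits or rolls back.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db_a = os.path.join(tmp, "a.db")
db_b = os.path.join(tmp, "b.db")
for path in (db_a, db_b):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (x INTEGER)")
    con.commit()
    con.close()

# Writer 1: a transaction spanning both files takes a write lock on each.
w1 = sqlite3.connect(db_a, isolation_level=None)  # autocommit; we BEGIN manually
w1.execute("ATTACH DATABASE ? AS b", (db_b,))
w1.execute("BEGIN")
w1.execute("INSERT INTO t VALUES (1)")
w1.execute("INSERT INTO b.t VALUES (1)")

# Writer 2: writing EITHER file now fails until w1 commits or rolls back.
w2 = sqlite3.connect(db_b, timeout=0.1)
blocked = False
try:
    w2.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError:
    blocked = True  # "database is locked"
print("second writer blocked:", blocked)

w1.rollback()
w1.close()
w2.close()
```

So sharding only buys write concurrency when transactions stay within a single shard; a cross-shard write transaction serializes against writers on every file it touches.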


Unfortunately, if you use WAL mode, transactions are only atomic within each individual database file.

https://www.sqlite.org/lang_attach.html



