Reason (3) is clearly our ulterior motive here, so we're not disinterested: our model user deploys a full-stack app (Rails, Elixir, Express, whatever) in a bunch of regions around the world, hoping for sub-100ms responses for users almost everywhere. Even within a single data center, repeated round trips to a SQL server can blow that budget. Running an in-process SQL server neatly addresses it. Conveniently, most applications are read-heavy, and most performance-sensitive app requests are reads.
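To make the latency point concrete, here's a rough sketch (mine, not from the article) of what "in-process reads" means: every query against a local SQLite database is just a function call into a library linked into your process, with no network hop at all. The table and query are made up for illustration:

```python
import sqlite3
import time

# An in-memory SQLite database stands in for a local, in-process replica.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [(f"post {i}",) for i in range(1000)])
conn.commit()

# Time 10,000 point reads. Each one is a library call, so the per-query
# cost is measured in microseconds -- versus a full network round trip
# (often a millisecond or more) per query to a separate SQL server.
start = time.perf_counter()
for i in range(10_000):
    conn.execute("SELECT title FROM posts WHERE id = ?",
                 (i % 1000 + 1,)).fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"10k reads in {elapsed_ms:.1f} ms")
```

A page that fires a dozen of these queries stays comfortably inside a sub-100ms budget when the database is in-process; the same dozen queries over even a fast intra-datacenter link start eating it.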
Trying to wrap my head around this architecture. It's quite interesting, but a little concerning, that it's now sharding into close-to-local SQLite instances located near the user.
(You're not generally "reaching back to the central source of truth to compare" things, so much as "satisfying the write centrally and shipping out the new database pages back to the read replicas at the edges").
More on this model: https://fly.io/blog/globally-distributed-postgres/
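For what "satisfying the write centrally" looks like in practice: per the linked post, a read replica that receives a write doesn't apply it locally; it asks Fly's proxy to replay the entire request in the primary region by returning a `fly-replay` response header. A minimal sketch of that routing decision, with made-up region names and a hypothetical `handle` function (not any real framework's API):

```python
PRIMARY_REGION = "iad"   # hypothetical primary region (writable database)
CURRENT_REGION = "syd"   # hypothetical region this instance is running in


def handle(method: str, path: str) -> dict:
    """Serve reads from the local replica; bounce writes to the primary."""
    if method != "GET" and CURRENT_REGION != PRIMARY_REGION:
        # Ask Fly's proxy to re-run this exact request in the primary
        # region instead of answering it here.
        return {"status": 409,
                "headers": {"fly-replay": f"region={PRIMARY_REGION}"}}
    # Reads (and writes arriving on the primary) run against the local copy.
    return {"status": 200, "headers": {}}
```

The replayed write commits on the primary, and the changed database pages then stream back out to the edge replicas, which is the "shipping out the new database pages" part of the comment above.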
Are there cold-start delays? From the moment I type domain.com, is it going to spin up a Fly instance closest to me and serve the SQLite database reads?
I'm gonna give this a go this weekend to see what it can do