
It's a database that full-stack culture has relegated to the role of "unit test database mock" for about 15 years, and yet it is (1) surprisingly capable as a SQL engine, (2) the simplest SQL database to get your head around and manage, and (3) able to embed directly in literally every application stack, which is especially interesting for latency-sensitive and globally distributed applications.

Reason (3) is clearly our ulterior motive here, so we're not disinterested: our model user deploys a full-stack app (Rails, Elixir, Express, whatever) in a bunch of regions around the world, hoping for sub-100ms responses for users in most places. Even within a single data center, repeated queries to a SQL server can blow that budget. Running an in-process SQL server neatly addresses it. Conveniently, most applications are read-heavy, and most performance-sensitive app requests are reads.
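
As a rough illustration of "reads served out of the app process" (a sketch only, not anything Fly or WunderBase prescribes; the Express + better-sqlite3 choice, file path, and table names are assumptions):

    // Sketch: an Express handler answering reads from an embedded SQLite file
    // inside the app process, so a read is a local function call plus a
    // page-cache/disk access rather than a network round trip.
    import express from "express";
    import Database from "better-sqlite3";

    const app = express();

    // "app.db" is a placeholder path; in the replicated setup it would be a
    // local read replica that the primary keeps up to date.
    const db = new Database("app.db", { readonly: true, fileMustExist: true });
    const getUser = db.prepare("SELECT id, name, email FROM users WHERE id = ?");

    app.get("/users/:id", (req, res) => {
      const user = getUser.get(req.params.id);
      if (!user) return res.status(404).json({ error: "not found" });
      res.json(user);
    });

    app.listen(3000);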




Hmm, but how would replication and sync be handled if you have many SQLite instances at edge locations around the world? If someone inserts a row with id 234 and somebody on the other side of the world does the same, wouldn't that kind of logic involve reaching back to a central source of truth to compare the diff?

Trying to wrap my head around this architecture. It's quite interesting, but also a bit concerning that the data is now effectively sharded into close-to-local SQLite instances located near the user.


Yes: the model topology you should have in your head is "single writer, multiple readers" --- exactly the same way it would work with a conventional Postgres setup. What you're getting with SQLite here is that the reads themselves are served out of the app process rather than round-tripping over the network; otherwise, it's the same architecture.

(You're not generally "reaching back to the central source of truth to compare" things, so much as "satisfying the write centrally and shipping out the new database pages back to the read replicas at the edges").

More on this model: https://fly.io/blog/globally-distributed-postgres/
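
A minimal sketch of that topology, assuming the request-replay pattern the linked post describes (the env var names, table, and better-sqlite3 choice here are illustrative, not Fly's or WunderBase's actual code):

    // Sketch: reads are answered from the local replica everywhere; a write
    // that lands in a non-primary region is handed back to the proxy to be
    // replayed in the primary region, so there is exactly one writer.
    import express from "express";
    import Database from "better-sqlite3";

    const PRIMARY_REGION = process.env.PRIMARY_REGION ?? "ord"; // where the writer lives
    const THIS_REGION = process.env.FLY_REGION ?? PRIMARY_REGION; // where this instance runs
    const isPrimary = THIS_REGION === PRIMARY_REGION;

    const app = express();
    app.use(express.json());

    // Replicas open the database read-only; only the primary opens it writable.
    const db = new Database("app.db", { readonly: !isPrimary });

    app.post("/notes", (req, res) => {
      if (!isPrimary) {
        // Ask the edge proxy to replay this request in the primary region
        // instead of attempting a write against a read replica.
        res.set("fly-replay", `region=${PRIMARY_REGION}`);
        return res.status(409).end();
      }
      const info = db.prepare("INSERT INTO notes (body) VALUES (?)").run(req.body.body);
      // Row ids are only ever assigned by the single writer, so two users on
      // opposite sides of the world can't both mint id 234.
      res.status(201).json({ id: info.lastInsertRowid });
    });

    app.listen(3000);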


Interesting. Do you have plans to support GPUs as well? I can see this is a bottom-up approach: put a low-load instance close to the user for reads, and have a globally synced writer that handles race conditions, etc.

Are there cold start delays? From the moment I type domain.com, is it going to spin up the Fly instance closest to me and serve the SQLite reads?

I'm gonna give this a go this weekend to see what it can do.


This is getting into Fly.io stuff and not WunderBase or SQLite stuff. GPU is a ways off for us: the programming interface for GPUs is tricky to implement with full isolation between VMs. The post we're commenting on talks a bit about cold start delays (a couple hundred milliseconds).



