I wish there was some sort of an Access-like form/app builder tool for quickly building GUIs around SQLite databases. That's the one thing I miss. I'd love to reach for SQLite instead of Excel or something.
We're maintaining a rapid-development solution for building apps on SQLite and other popular RDBMSs at: https://www.locode.dev
You just need to create your API definitions and you get APIs + Auto management UI for free, here's our most recent video showing how you can build a multi-user Bookings App in minutes [1].
This is the thing about Access. For simple to moderate stuff you can't really beat it for ease of building and deployment, because the forms and reports are integrated with the tables and queries. I'm always surprised by how powerful it is and what you can accomplish with it. Nothing beats it for local, department-level information management; LibreOffice/OpenOffice don't come close.
With that ease of use comes its reputation as a nightmare to manage, mostly because it's easy enough for people with no knowledge of the relational model to use.
If there was a similar solution utilising sqlite as the backend I'd definitely be using it.
I hope to play with Appsmith soon. It looks really cool. It doesn't seem to have an sqlite connector but I wonder how hard that would be to build. Note that I know nothing about Appsmith and almost nothing about sqlite, but I'm intrigued by the possibilities. https://www.appsmith.com/
Many db applications tack on a spreadsheet gui and a form input mode. See baserow, budibase, airtables, memento, etc. Some include fancier features like webhooks and ACLs.
My main rant is the lack of import/export standardization in this space.
Not too familiar with the space, how do all of these stack up? Are they mostly SaaS or are there local/offline options? Do any of these have personal plans?
I can't recall for certain, but I feel at least 75% sure there is an included demo project that shows how to wire up a simple CRUD app in, like, seven mouse clicks.
Not SQLite, but try Retool with their new RetoolDB (which is an integrated version of Postgres).
This totally fills the Access gap for me, assuming that you are looking for / happy with a web interface and a SaaS model which is paid per user (but free for first 5)
If it's SaaS, it's entirely different from the excel use-case (which is unfortunately ubiquitous and offline). It's also different from the average SQLite use-case, which is also local.
Finally, a software fad that I actually want to support! About time I get one!
SQLite is more than good enough for an awful lot of things. Most of the time that I do see software using MariaDB etc, I think to myself "they didn't need a server model for this, SQLite would have been fine".
I'd love to see someone rewrite SQLite in Rust, since it's always the example I point to for full-coverage, fuzzed-to-hell-and-back code. IMO there is nothing a new language could improve: it really is insanely stable and well tested.
I started to use Postgres on AWS for a personal project, after fighting with several non-Docker VMs (I don't want to use Docker). But I decided on SQLite + Litestream, hopefully for free on AWS. The simplicity of deployment is great.
To my surprise, SQLite supports many of the same SQL functions as Oracle and Postgres. I wanted to use window functions, and they are supported.
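For example, the same `OVER (PARTITION BY ...)` syntax you'd write in Postgres or Oracle works in SQLite 3.25+ (a sketch with made-up table and column names, using Python's built-in sqlite3 module):

```python
import sqlite3

# Window functions need SQLite >= 3.25 (2018); any recent Python ships one.
conn = sqlite3.connect(":memory:")

conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("east", 300), ("west", 200)])

# Rank rows within each region -- identical syntax to Postgres/Oracle.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
""").fetchall()
for row in rows:
    print(row)
```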
SQLite came to my attention when I read a Tailscale blog post stating they would be using it.
It certainly has some good use cases and is not a toy, but there are some common SQL idioms (say, `last_at < NOW() - INTERVAL '10 minutes'`) that work in Postgres but are not directly compatible with SQLite, and porting them would require a lot of code rewriting in case you need to scale upwards. Testing against multiple databases is hard, so migrating to Postgres wouldn't be trivial either. Hosting a small Postgres database on something like GCP is ~$8 a month... why not just save yourself the future pain?
A lot of projects will never need beyond the scale that a SQLite host can offer, and gain from the lower bills and simplicity of a single host.
$8/month on GCP gives you 0.6GB of memory and a single shared vCPU (no SLA), and you still need an application server. So with the same 'micro' specs for an application server, you are paying $16/month. With SQLite you can easily deploy to services like Hetzner Cloud, get 4 vCPUs + 8GB RAM for ~$15/month, and use Litestream for backup.
There are definitely trade-offs with SQLite around low write concurrency (due to the single-writer limitation), and it's definitely not as feature-rich as Postgres, but I think a lot of people underestimate the read performance and simplicity of a single-host setup. And now, with sidecar setups like Litestream, backup and disaster recovery are also a lot simpler and, again, cheaper. With the added attention, hopefully SQLite will continue to improve through extensions like SpatiaLite and others. Schema changes are still a point of friction as your data grows larger, but for fairly stable, read-heavy scenarios the simplicity of a SQLite setup is very nice to use.
Don't get me wrong, Postgres is a great default, but for a growing number of use cases, SQLite can be an extremely portable, cheap and efficient setup.
The next one up on GCP is about $15 and supports a much higher workload. You're also not factoring in the developer costs to switch from SQLite to Postgres when you decide you really need some scale...
Do you mean the `HA db-f1-micro` or are we looking at a different pricing chart? I have a hard time believing that would even get close to double of the single db-f1-micro, and limits on memory would still impact both. SQLite can perform extremely well for heavy reads/low change datasets, but it is scenario specific. It's bad at heavy write/chatty workloads due to its single writer limitation.
"easily deploy Postgres", how easy? What are the point in time restores like? One of the advantages of SQLite is its portability, so why wouldn't I compare it to Hetzner cloud since my experience of hosting is _exactly_ the same regardless of provider? The same can't be said for Postgres, unless you go self install route. And all I'm pointing out is the trade off, SQLite is simpler in some situations.
Litestream is simpler for backups/disaster recovery. pg_dump -> S3 gives you a snapshot, but then you have to start worrying about the time windows between backups and the restore process. Litestream replicates continuously, enabling point-in-time restores with two commands. And every time you want to upgrade Postgres you have to manage that yourself, again via pg_dump -> S3 and back, with possible issues around pg_dump versions etc. I'm not trying to say these are insurmountable or even big issues; they are just trade-offs.
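For reference, the whole Litestream setup is roughly one config file (bucket name and paths below are placeholders; check the Litestream docs for the current schema):

```yaml
# /etc/litestream.yml: continuously replicate one SQLite file to S3
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - url: s3://my-backup-bucket/app-db
```

Restoring is then a single command per database, something like `litestream restore -o app.db s3://my-backup-bucket/app-db`, with a `-timestamp` flag for point-in-time restores (flag names as of recent Litestream releases).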
For the right use cases, SQLite's embedded nature can provide simplicity other offerings can't. Is Postgres able to fill more use cases? Absolutely, it is an amazing piece of software.
PgBackRest works like Litestream, archiving the WAL files, and for PG you have been able to get tools like that for more than a decade! Litestream is nothing novel. But you can't get the interval down to 1s; the minimum is 30s.
I wasn't aware of PgBackRest; it's something I will definitely keep in mind when offering alternatives to managed services, thank you! 30s is perfectly respectable, and something I will definitely start testing out as a sidecar container to Postgres.
edit: It doesn't look like PgBackRest composes as well as something like Litestream, e.g. it looks like PgBackRest has to be baked into the image running Postgres, as opposed to being a separate process monitoring changes to files like Litestream does. Again, not huge issues, but definitely not as simple. Still very useful tech, and I'm glad you mentioned it, but a big drawcard of SQLite is the simplicity of use.
Where do you draw the line with compatibility? There's lots of small, DB specific differences like this between all the databases (mysql, sql server, postgres etc).
For this particular one, you can replace the interval with something like `last_at < DATETIME(CURRENT_TIMESTAMP, '-10 minutes')`
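A quick sketch of that substitution from Python (table and column names are made up; note this relies on timestamps being stored as SQLite's default UTC text format so string comparison works):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (username TEXT, last_at TEXT)")
# One stale session (an hour old) and one fresh one, stored as UTC text.
conn.execute("INSERT INTO sessions VALUES ('alice', DATETIME('now', '-1 hour'))")
conn.execute("INSERT INTO sessions VALUES ('bob', DATETIME('now'))")

# Postgres: last_at < NOW() - INTERVAL '10 minutes'
# SQLite:   last_at < DATETIME('now', '-10 minutes')
stale = conn.execute(
    "SELECT username FROM sessions"
    " WHERE last_at < DATETIME('now', '-10 minutes')"
).fetchall()
print(stale)  # [('alice',)]
```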
> Hosting small postgres databases on things like GCP are ~$8 a month... why not just save yourself from future pain?
I think this is a fair point, if you think you'll run into SQLite's limitations in the future. Litestream and friends make SQLite an option for some non-embedded use cases, though they're hardly a panacea.
A lot of devs seem to have an obsession with using the tech that only just works for the use case. Like you pointed out, Postgres works in everything from tiny apps to global scale SaaS. I really just can't see any reason you would want to use SQLite outside of local apps. Sure, many web apps can run on it, but they can all work on Postgres.
Postgres is a great default, no doubt, but it is still more complex to set up and maintain, or much more expensive as a managed offering, than SQLite. Postgres extension upgrades can get pretty hairy, even on managed services. SQLite is a data file, so with projects like Litestream, getting back to a working state is as simple as it gets: it's a file.
SQLite definitely has its own problems, but choosing Postgres over SQLite (or vice versa) is still making a trade-off. There is a massive spectrum between "tiny app" and "global scale SaaS", and with hardware improvements, SQLite can handle quite a lot. Would I pick SQLite as the default when aiming to build another "global scale SaaS"? No, I would more likely default to Postgres as well, but SQLite shines in a lot more scenarios than just local apps.
Weird comment. Don't use a DB that doesn't fit your use case. In this case if you plan to scale or want PGSQL features, don't use SQLite. But SQLite is great if you need to embed a db in an app or are just doing some local stuff.
Hosting postgres on your own computer is free. Why would you possibly pay someone $8 / month for a postgres instance to replace SQLite? Postgres is free, and super easy to set up.
We use an embedded Postgres in our tests (it is literally just a vanilla Postgres); the thing starts up fast and runs great. I'm sure there must be other embedded examples out there.
I'm sure I could find reasons to use sqlite, but I'd have to find reasons NOT to use postgres if I ever thought I'd need its capabilities.
Right, and if a project is small enough to only need a small app server, it's probably small enough to just run the DB on the same server in the background.
Today I opened up the Windows Resource Monitor to see why my elderly neighbor's AOL Desktop Gold app was loading so slowly. The problem turned out to be network related, but I was intrigued to see in its disk activity many reads and writes to a .sqlite file.
I wonder how long AOL's desktop app has been using SQLite - since the early 2000s maybe? If anyone has one of those original AOL disks lying around, it may be interesting to see if the installer unpacks a SQLite DB of some kind.
AOL was one of the first large corporate users of SQLite.
"... the next tech giant to reach out was America Online... They needed a database on that [AOL] CD, and they had some ad hoc thing and they wanted to use SQLite on that. They had limited space, and so, “Hey, we need to put this on the CD.”"
We used SQLite to store the time series data used to show network activity in McAfee.com Personal Firewall circa 2002 or so. Anyway, according to the Wikipedia page for SQLite, AOL funded the development of at least some of SQLite 3 in 2004. I remember that we didn't upgrade right away, or if we ever did for that matter.
SQLite not becoming an "official" browser API was the right choice. It would have tied the web too closely to a specific version and limited its possible use, with no ability to use extensions.
SQLite is exactly what WASM is good at; right now the only thing holding SQLite/WASM back is a low-level filesystem/block-store API. But this is coming soon! It will enable much more than SQLite, too: you will be able to use custom SQLite builds with extensions, and other databases (DuckDB, MongoDB/Realm and others).
A small suggestion: explicitly write out `import`s in examples. I know it sounds trivial but from my experience it really confuses people especially when some classes have generic names like `ColumnDict`.
Can attest to this: we've been so pleasantly surprised by the unbeatable performance and value of the SQLite + Litestream combo [1] that we expect it will become a popular choice for small/medium or multi-tenant apps in future.
Is this not resolved with WAL mode? You can't have concurrent writes, but they can go into a "queue" so as long as you have fast writes and suitable timeouts they'll still be committed.
So whilst they're not technically concurrent, as far as a human using your app is concerned it will feel concurrent, because they were able to write at the same time as someone else, just an imperceptible fraction of a second later.
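For what it's worth, a minimal sketch of that queueing behavior with Python's built-in sqlite3 module (file path is a throwaway temp file): WAL mode plus a busy timeout makes a blocked writer wait for the lock instead of failing immediately.

```python
import os
import sqlite3
import tempfile

def connect(path):
    # timeout=5.0: a blocked writer retries for up to 5s
    # instead of raising "database is locked" straight away.
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
    return conn

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = connect(path)
conn.execute("CREATE TABLE events (msg TEXT)")
conn.execute("INSERT INTO events VALUES ('hello')")
conn.commit()
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # wal
```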
I like SQLite (use it for android apps), but there’s no reason to use it for apps over a network. SQLite is meant to be embedded (also works great with desktop apps and local-only web apps).
In fact, I’d argue there’s no networked use case in which SQLite is cheaper, easier to deploy, maintain or run than Postgres.
All that being said, absurd-sql is great. There should be some optimizations in mirroring network state to IndexedDB that could be queried via SQL(ite).
The SQLite documentation literally says use Postgres and not it for networked use cases (unless you use WAL or rollback, in which case why not just use Postgres?)
As an aside, I think we need a new storage primitive. In the 2000s desktop apps were rampant and SQLite was more or less the standard.
We then moved over to networked apps where RDBMS had its day. There was a moment where nosql was booming for the scaling but it turned out people like relations.
We need an open source, distributed and relational store that’s easy to maintain build this decade.
That's not what this page says. The page suggests PostgreSQL if accessing the SQLite file from the application would involve a network connection (because of a network filesystem), not if the application is accessed over the network.
Using SQLite for a network service when the app runs on the same machine as the SQLite file is perfectly fine.
> (unless you use WAL or rollback, in which case why not just use Postgres?)
Again: WAL / rollback is how SQLite works by default. The page does not merely suggest using WAL; it suggests using WAL "but do all reads and writes from processes on the same machine that stores the database file" if you use a networked FS. And yes, in that case, you might as well use PostgreSQL indeed.
> In fact, I’d argue there’s no networked use case in which SQLite is cheaper, easier to deploy, maintain or run than Postgres.
I disagree. I now use SQLite whenever I can for low-traffic services, unless the app developers strongly advise something else. I'd say there's no reason to bother with a client/server database like PostgreSQL / MariaDB / MySQL if you can help it. It's one less database to administer, one less user and password to create and manage, and the SQLite file is saved by my regular backup routine. It works great and it's less work for me. You also get to use the database engine with probably the most extensive and comprehensive test suite in the world, by far [1]. It is probably more robust than anything else. SQLite is probably capable of handling high traffic too, especially if the majority of accesses are reads and there are only a few writes.
Of course, Postgres is a really good choice too and you can't really go wrong choosing it. It's very robust, works very well, and does not have SQLite's loose typing.
FWIW, it's perfectly fine to backup both MariaDB and PostgreSQL by simply copying files. It's one of the recommended methods to set up PostgreSQL replicas:
You might want to configure the database to put binary logs into a separate directory (or you might want to leave them there, depends on your requirements).
Isn't that a bit different?
As I understand it, he's talking about just copying the sql db file while you're talking about running a cluster wide backup tool to create a separate set of files, that you can then copy elsewhere as a backup.
> Using SQLite for a network service when the app runs on the same machine as the SQLite file is perfectly fine
If preventing data loss is important for your app then this would be more hassle than using a networked db because now you have to replicate SQLite across a network to another machine.
> If preventing data loss is important for your app then this would be more hassle than using a networked db
How so? A networked database requires a persistent connection, while SQLite backups work fine with an intermittent connection. The best case (100% network uptime) and worst case (total network outage) are the same, but in the middle, SQLite-with-backups is more resilient to brief outages.
For some very real modern full-stack workloads, the network is the problem. It implies a separate database tier, and network latency for every query. The latency is managed by careful query design, but a valid alternative to that work is to move the database closer to the app --- within NVMe latency rather than network latency. A key part of this idea is recognizing that reads have different requirements than writes, and that many applications are overwhelmingly read-intensive.
I've found that the key to getting your application up to speed is, 80% of the time, the query itself. Using stored procedures, bulk inserts and the like has sped up more "slow queries / databases" than anything else I've seen.
Of course, if your coder already knows and does this, your last resort will be moving the DB closer to the app (the last 20% of optimisation).
It's useful to keep in mind that part of the premise of SQLite is to mitigate the need to carefully design query patterns; famously, SQLite promotes the fact that the N+1 query pattern tends to work fine (again, because you have ultra-low latency to your database).
So, yes, in an n-tier database architecture, 80% of your optimization work might go into meticulously minimizing your query load with better-designed queries. The point is: it doesn't have to be that way.
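To put a rough number on the N+1 claim (timings are machine-dependent and the schema here is made up): a deliberately naive one-query-per-row loop against a local SQLite database usually finishes in a handful of milliseconds, which is what lets you skip the "rewrite it as one big JOIN" step.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE comments (post_id INTEGER, body TEXT)")
conn.executemany("INSERT INTO posts VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO comments VALUES (?, ?)",
                 [(i, "hi") for i in range(1000)])
conn.execute("CREATE INDEX comments_post ON comments(post_id)")

# The classic N+1 pattern: one extra query per post.
start = time.perf_counter()
total = 0
for (post_id,) in conn.execute("SELECT id FROM posts").fetchall():
    total += len(conn.execute(
        "SELECT body FROM comments WHERE post_id = ?", (post_id,)).fetchall())
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{total} comments via 1001 queries in {elapsed_ms:.1f} ms")
```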
The more important thing I think is, again, thinking separately about reads and writes.
The issue is most apparent in edge-optimized applications, almost all of which are overwhelmingly read-heavy. You've got a (say) 100ms budget for the whole request, and "edge-optimized" with traditional n-tier databases practically implies that your database isn't always in the same data center as your app, so you can eat that budget up real, real quick going back and forth with Postgres. Even inside a database, you can be looking at a couple milliseconds per database hit, which adds up as you do multiple queries.
SQLite with replication is a nice solution for that problem. Writes get handled roughly the way they would with Postgres, serialized into a single write master; reads get satisfied instantaneously from NVMe.
I'd like to see someone advance the SQLite bandwagon by (A) showing how to queue up DB writes into a single thread in a variety of popular application programming languages (that have libraries for SQLite), and then (B.1) testing them for performance, and (B.2) for whether they measurably impede reads.
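For (A), here's a minimal Python sketch of the idea (all names are my own, not from any library): a dedicated writer thread drains a queue, so application threads never contend for SQLite's write lock; they just enqueue.

```python
import os
import queue
import sqlite3
import tempfile
import threading

write_q = queue.Queue()

def writer(path):
    # The only connection that ever writes; everyone else just enqueues.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    while True:
        item = write_q.get()
        if item is None:  # sentinel: flush and shut down
            conn.commit()
            conn.close()
            return
        conn.execute("INSERT INTO log VALUES (?)", (item,))
        conn.commit()

db_path = os.path.join(tempfile.mkdtemp(), "single_writer.db")
t = threading.Thread(target=writer, args=(db_path,))
t.start()

for i in range(10):  # any application thread may enqueue a write
    write_q.put(f"event {i}")
write_q.put(None)
t.join()

# Reads can use their own connections, from any thread.
conn = sqlite3.connect(db_path)
print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 10
```

Benchmarking variants of this (batching commits, WAL mode, larger payloads) would cover (B.1) and (B.2).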
It feels bad to make a database choice which isn't capable of supporting lots of writes. Lots of things start small but get big unexpectedly, and if you choose sqlite then now you have to rearchitect things.
Note the difference between "lots of writes" and "lots of concurrent writes". "Lots of writes" in succession without heavy concurrency support is just fine for desktop/mobile apps. It is not okay for busy webapps.
"Many concurrent writes" is challenging to many parts of most architectures, not just databases. For example it means your caching strategy has to be more complicated than with most read-heavy apps.
> Lots of things start small but get big unexpectedly, and if you choose sqlite then now you have to rearchitect things.
Lots of things predictably don't. It seems like bad engineering to operate as though everything will. Especially in appliances/"IoT" and other embedded use cases, that line of thinking can drive production costs up a lot.
The majority of new things people build fail (say 95%), but it's still worth building for the success case because essentially all of the value comes in the success case. It increases cost on average but it means your website doesn't go down under load the second it's successful. This is only true to an extent of course -- don't build your startup like it's Google. But to a certain degree it's worthwhile architecting for some scale before you have it.
In the context of embedded things of course you should use something like SQLite as opposed to postgres, because there aren't going to suddenly be millions of people using your CO2 monitor (for example). But for web stuff you can plausibly have user counts that span 6 or 7 orders of magnitude, so you want to be prepared for that.
I mean, Amazon and the rest of the cloud vendors will happily sell you managed SaaS database that's open source, distributed and a relational store. That makes it real easy to maintain. Cheaper than a team of DBAs, too, until it isn't.
Came across this article as I was looking for interesting resources in the SQLite ecosystem. I'm building mvsqlite (https://github.com/losfair/mvsqlite) as an attempt to turn SQLite into a proper distributed (not just replicated) database. Check it out if you are looking for this kind of thing!
Why not? It looks like a great project for people building on SQLite who may want to migrate to a distributed solution, if they determine it's a better fit than replication for their use case.
SQLite is a powerful SQL query engine. When combined with a rock-solid distributed storage layer, we get a distributed SQL database, just like what systems like Aurora and Neon have managed to build on MySQL and PostgreSQL.
Seems reasonable. I wonder how it compares to existing things in the space? It'd be cool to bring the test suite into the distributed world as well, I guess.
SQLite is targeting a different application space than PostgreSQL, MySQL, Oracle Database, or SQL Server. It's intended to be embedded in applications to provide easy, rapid access to structured data. Both iOS and Android, for instance, embed SQLite and apps use it for storing things like text messages and contacts. Even Microsoft UWP embeds SQLite and provides an API to access it from .NET applications (which is weird because it's more on-brand for Microsoft to embed, say, the Jet database engine from Access).
It's a serious database for anything that doesn't require concurrent multiuser access.
SQLite is an embedded database so you wouldn’t really want that. Postgres is fine for database servers, SQLite is for program-embedded local databases.
I don't think anyone has questioned sqlite for local databases. But there is a lot of talk around using sqlite for web app databases where it may work, but seems clearly inferior to any of the alternatives.
sqlite.org seems a pretty good traffic-volume cutoff; I would guess 95-99% of all web apps consistently get less traffic than sqlite.org does.
There is a lot of static content on https://sqlite.org/ but also a lot of SQLite-backed dynamic content. I just checked the logs. Over the past 5 days, 12.03% of non-robot HTTP requests were against dynamically generated pages.
Fossil is hosted on the same machine as SQLite. Fossil is self-hosting and the Fossil website is 100% dynamically generated. Every HTTP request against https://fossil-scm.org/ does about 200 SQLite queries (give or take - depending on the page).
Isn't SMB not recommended because it can lead to a corrupted database? Or is there a setting I disabled somewhere in my smb.conf that I should re-enable (oplocks maybe)?
Since only a single process may write to a database, it would be best if that process were local.
NFSv4 has built-in file locking, which likely makes it the preferable protocol. SMBv2 is a massive rewrite of SMB, and I expect any file-locking problems to have been addressed (especially as Microsoft sponsored SQLite changes specifically for Windows 10).
Remote readers in explicit read-only mode likely won't hurt, but there is a deeper discussion below.