The submitted page could add a link to the GitHub repo. If it is already there, it isn't obvious.
As the links to /api/tables and the others return 404, I had to copy and paste the URL from the dev snippet to see examples of the API.
Running the server to view the documentation helps ensure the docs stay current, but it also adds unnecessary friction and deters those who just want to look at the API docs to evaluate the project.
I thought the whole point of SQLite is that you don't have to deal with the cruft and overhead of sending queries over a network connection. Why tack that back on?
> I thought the whole point of SQLite is that you don't have to deal with the cruft and overhead of sending queries over a network connection. Why tack that back on?
An interesting application could be a web application whose ONLY server is the SQLite server. The state of the application would be stored in the SQLite database, not in "local storage" etc. Using a relational database is a great improvement in flexibility over the key-value store of Local Storage.
Could this work? I guess we would need to load one web page into the browser first, to get some HTML and scripts, and then that page could use fetch() to get everything else from the SQLite server.
Good question. The SQLite REST server should probably provide ways of allowing only selected operations. There's no reason why EVERYTHING should be possible over the network.
One possibility could be that each user gets their own copy of the database; let them break it if they want to :-)
That's a great question. This pattern is actually getting popular today with services like Turso and Cloudflare D1 (which are technically only SQLite-compatible).
There are a few reasons I can see for reaching for SQLite behind an API rather than something like Postgres. Portability can be a big benefit, and I would expect you don't need to deal with connection pooling.
If I had a service running on a single box and didn't mind setting up my own db backups, I might reach for SQLite just for the ease of standing up new environments for dev and automated testing.
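The "ease of standing up new environments for dev and automated testing" point can be made concrete with a minimal sketch: each test creates its own in-memory database, so there is nothing to provision or tear down. The `users` schema here is made up for illustration.

```python
# Sketch: a throwaway SQLite database per test, using only the stdlib.
# The `users` table is a hypothetical example schema.
import sqlite3
import unittest

def make_test_db() -> sqlite3.Connection:
    """Create a fresh in-memory database; nothing to install or clean up."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

class UserTest(unittest.TestCase):
    def setUp(self):
        # Each test gets a fully isolated database.
        self.conn = make_test_db()

    def test_insert(self):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
        rows = self.conn.execute("SELECT name FROM users").fetchall()
        self.assertEqual(rows, [("alice",)])

if __name__ == "__main__":
    unittest.main()
```

Compared with spinning up a database server per environment, the fixture is one function call, which is a large part of SQLite's appeal for dev and CI.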
I've found Postgres via Docker to be fine for dev and testing, but there are various ephemeral Postgres scripts, and this could be promising for Node stuff if it works out: https://github.com/electric-sql/pglite. I'd imagine that if it does work out, we'll see the same kind of builds for other runtimes as well (like Python).
Accessing the database from multiple applications or systems.
SQLite is great for sharing application data as well. If you have, for example, data for a specific event, it can make sense to use a separate database/file.
Being able to simply copy the file as a backup/archive is a big win here.
You might still want your application or services separate. Similarly, take a look at Turso or AstroDB for more options. Turso created libSQL as a fork, along with a libSQL server.
I regularly use an embedded SQLite database in my projects, and the existence of tools such as Soul and PocketBase means I can easily make those databases available to other members of my team without writing a custom front end or application component, just because they are unwilling to install an SQLite GUI, or are on systems where they may not be able to install one in the first place.
Frankly, you thought wrong. The whole point of SQLite was to build a database for naval vessels. Aside from that, given that it's open source, the "point" of it doesn't really matter anymore.
That is inaccurate. Ginko's statement is closer to the truth.
I was inspired to write SQLite while working with Informix on DDG-79, where I saw how useful an embedded database would be in some situations, compared to a client/server solution. So I went off and wrote SQLite on my own, while the development contract was on hiatus. There was never a request for SQLite or anything like it coming from the navy (or more precisely, Bath Iron Works), as they were both very happy with Informix on the ship and Oracle on land and had zero desire for anything new or different. The development team I worked on ended up using SQLite some for prototyping and testing on that project, but it was never deployed to the ship, as far as I know.
So yes, the whole point of SQLite was to build a database that operated as a library linked into the application, rather than as a separate server, as ginko postulates. Design issues on a single system within DDG-79 (Automated Common Diagrams) were the inspiration for that idea, but to say that SQLite was designed for DDG-79 is not true. There was never a request for SQLite coming from the navy or the ship designers. Indeed, there was a lot of pushback against SQLite. SQLite was just a crazy idea coming from a rogue developer who happened to be working on one of the many on-board systems at that time.
I've always wondered why SQLite came along but adoption of msql (Matt Dillon's similar database stack) didn't catch on. Did you have any opportunity to review the existing database tools that were available at the time, and if so, what did you find?
I realize this question is a bit archaic, so if there aren't really any good answers, no worries. But as I was using Matt's msql in the 2000s in the same way that I now use SQLite, it's something I often wonder about whenever I set up a new sqlite.db ...
Semi-related: there's an anecdote that the InterBase database was selected for use within the M1 Abrams tank because it was particularly robust in its implementation.
Apparently an issue with the M1 is that, because of the energy produced when the main gun is fired, internal systems may spontaneously reset. So they had to design around that phenomenon through things like implementation robustness and rapid system restart times.
Plenty of these SQLite web front ends exist, including more enticing single static binaries in Go, and they are always far too slow to use in anything but toy projects.
It should be standard to benchmark against a plain SQLite integration (on a modern CPU with NVMe).
Even PocketBase, as nice as its UI is, is an order of magnitude slower than using SQLite directly.
Here are some rough numbers I've found using SELECT * FROM user LIMIT 1; per second ( 1x / 10x / 100x )
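For the direct-SQLite baseline in a microbenchmark like this, a minimal sketch is to time how many queries the library itself can serve with no HTTP layer in between. The table and data here are made up to match the query above.

```python
# Rough sketch of a direct-SQLite queries-per-second baseline.
# The `user` table is a hypothetical schema matching the benchmark query.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO user (name) VALUES ('alice')")

def queries_per_second(duration: float = 0.5) -> float:
    """Run the benchmark query in a tight loop and return queries/sec."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        conn.execute("SELECT * FROM user LIMIT 1").fetchone()
        count += 1
    return count / duration
```

Any REST front end adds parsing, serialization, and a network round trip on top of this loop, which is why the 1x numbers diverge so sharply from the embedded case.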
Keep in mind that PocketBase does a lot more than just execute a raw DB query. We perform data validation, normalization, serialization, enriching, auto fail-retry to handle SQLITE_BUSY errors, etc. All of this comes at some cost and will always have an effect in microbenchmarks like this.
The performance would also depend on which version of PocketBase you tried (before or after v0.10), whether you used CGO or the pure Go driver, etc.
There is definitely room for improvement (I haven't done any detailed profiling yet), but the current performance is "good enough" for the applications PocketBase is intended for (I've shared some numbers from a PocketBase app in production in https://github.com/pocketbase/pocketbase/discussions/4254).
Perhaps it's because I'm particularly snarky pre-morning-coffee, but this doesn't come across as the welcoming attitude open source is supposed to have; it comes across as downright entitled, especially as the first thing to lead with. Maybe you didn't quite mean it that way?
I'm taking a wild guess and saying that you don't have a screaming need to run 'SELECT * FROM user LIMIT 1;' tens of thousands of times per second on SQLite.
And if you did, I'm guessing you'd probably be able to write your own API in front of it.
Am I wrong here?
Kudos to the authors of this tool. It looks great and I'd love to try it out on a recent project.
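The "write your own API in front of it" suggestion above can be quite small in practice. Here is a minimal sketch using only the Python standard library; the /users route and the schema are made up for illustration, and a real service would add auth, pagination, and error handling.

```python
# Sketch: a minimal read-only HTTP API in front of SQLite, stdlib only.
# The /users endpoint and `user` table are hypothetical examples.
import json
import sqlite3
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO user (name) VALUES ('alice'), ('bob')")
lock = threading.Lock()  # serialize access to the shared connection

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/users":
            self.send_error(404)
            return
        with lock:
            rows = conn.execute("SELECT id, name FROM user").fetchall()
        body = json.dumps([{"id": i, "name": n} for i, n in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet

def serve(port: int = 0) -> HTTPServer:
    """Start the API on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The whole "database server" is a few dozen lines wrapping the embedded library, which is roughly what tools like Soul and sqliterg generalize.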
Ran the same test on the same server (Ubuntu 22.04 + 32-thread 7950X + Gen 4 NVMe + 64GB DDR5): nearly zero difference between sqliterg and ws4sqlite.
sqliterg (CURL)
20,000 / 2,500 / 237
For those wondering about write performance: it's poor except when using SQLite directly. If you're write-heavy, look elsewhere, or use SQLite directly (preferably with multiple databases and WAL2).
INSERT INTO users (id) VALUES (..); per second ( 1x / 10x / 100x )
If I have an SPA querying a moderately sized static dataset, then something like this is appealing. I can stand up a container on Fly, store the SQLite DB on the volume, and serve directly from there without needing to worry about the number of reads I incur (with something like Turso) or all the ops complexity of a real DB.
I think this one is more for development or test endpoints. Installation from npm is a big plus, so that I can add it to devDependencies, have the sources in node_modules, and patch them with patch-package.
From what I can see, this doesn't contain any kind of permission model or authentication, at least not in the documentation. I would say that RLS is one of the big killer features of PostgREST, which makes it so great for rapidly prototyping something that's actually usable when exposed to the open internet.
Because then nobody has to stand up a database server separately. I have a toy project I want to move from MariaDB to SQLite just so I don’t have to maintain MariaDB just for this app. All I need is my JS runtime, Deno in this case.
The fact is, SQLite and its derivatives are more than enough for a lot of the web. People always act like their random site needs 10,000 qps to operate, when in reality most barely crack 5-10.
Nice, and a possible alternative to the very polished PocketBase.
I'm working on something similar for PostgreSQL [0], with an API compatible with the excellent PostgREST [1].
I believe these tools can be of great utility in many projects and represent a generalization of the dedicated middleware that was popular a few years ago. Companies like Supabase are demonstrating this.
I have quite ambitious plans, even though it started as a hobby project.
In addition to the already developed DDL functionality and multi-database management, the next main features will include:
I am not sure what Realtime means here. But if it means something close to what I'm used to [1], then REST is a horrible idea. JavaScript is not a good idea either.
[1]
"
Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example from event to system response.[1] Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".[2]
"
Realtime seems to mean, in this context, that you can open a WebSocket and subscribe to changes to rows, i.e. get row changes pushed to you as a client instead of polling for them.
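The push model described above can be sketched in-process: subscribers register a callback and row changes are pushed to them as they happen. This is only an illustration of the pattern; a real server would deliver the same payloads over a WebSocket, and the change dicts here are hypothetical.

```python
# Sketch of push-based "realtime": subscribers get row changes pushed
# to them via callbacks instead of polling the table. Illustrative only.
from typing import Callable

class ChangeFeed:
    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """Register a handler; a server would map this to a WebSocket."""
        self._subscribers.append(callback)

    def publish(self, change: dict) -> None:
        """Called after a write is applied; fan the change out to clients."""
        for cb in self._subscribers:
            cb(change)

feed = ChangeFeed()
received: list[dict] = []
feed.subscribe(received.append)  # client-side handler

# Server side: after applying an UPDATE, push the changed row.
feed.publish({"table": "user", "op": "UPDATE", "row": {"id": 1, "name": "alice"}})
```

The contrast with polling is that clients do no repeated querying; latency is bounded by delivery, not by a poll interval.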
PS to my earlier comment: the source URL is https://github.com/thevahidal/soul