Soul: A SQLite REST and Realtime Server (thevahidal.github.io)
168 points by thunderbong 56 days ago | 51 comments



Some suggestions:

The submitted page could add a link to the GitHub repo. If it's already there, it isn't obvious.

As the links to /api/tables and others throw 404s, I had to copy and paste the URL from the dev snippet to see examples of the API.

Running the server to view the documentation helps ensure the docs are current, but it also adds unnecessary friction and deters those who just want to look at the API docs to evaluate the project.

PS: The source url is https://github.com/thevahidal/soul



You can either play along with words changing meaning, or you can die on this almost meaningless hill.

Signed, with my crypto keys while hacking on side-projects.

(I'm also in the camp of "I don't love it when words change literal meaning" but I've also realized there is literally nothing I can do to stop it)


HATEOAS is a much cooler name anyway.

I mean,

"Let the restfulness flow through you" - lame

"Let the HATEOS flow through you" - now you're cooking with force


feels like going back to the Linux vs GNU/Linux naming discussion.


I thought the whole point of sqlite is that you don't have to deal with the cruft and overhead of having to send queries over a network connection. Why tack that back on?


>I thought the whole point of SQLite is that you don't have to deal with the cruft and overhead of having to send queries over a network connection. Why tack that back on?

An interesting application could be a web application whose ONLY server is the SQLite server. The state of the application would be stored in the SQLite database, not in "local storage" etc. Using a relational database is a great improvement in flexibility over the key-value store of Local Storage.

Could this work? I guess we would need to load one web page into the browser first to get some HTML and scripts, and then that page could use fetch() to get everything else from the SQLite server.
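
A minimal sketch of what that fetch() call might look like, assuming a Soul-style route such as /api/tables/<name>/rows (the actual routes and response shape may differ; check the project's docs):

    // Hypothetical: keep the app state server-side and read it over the REST API.
    // The base URL and route shape are assumptions, not Soul's documented paths.
    const BASE = 'http://localhost:8000/api';

    async function loadTodos() {
      const res = await fetch(`${BASE}/tables/todos/rows`); // assumed route
      if (!res.ok) throw new Error(`request failed: ${res.status}`);
      return res.json(); // assumed JSON body containing the selected rows
    }

    loadTodos().then((rows) => console.log(rows));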


There's also the WASM build of SQLite that you could just run directly in the browser if it's really necessary. Of course, there might be performance differences.
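
As a rough illustration, the in-browser route can look like this with sql.js (one of the SQLite-to-WASM builds; the official sqlite3 WASM API differs, so treat this as a sketch that assumes sql.js has already been loaded via its script tag):

    // Load a SQLite engine compiled to WASM and query it entirely client-side.
    async function demo() {
      const SQL = await initSqlJs({
        locateFile: (file) => `https://sql.js.org/dist/${file}`, // where the .wasm lives
      });
      const db = new SQL.Database(); // in-memory database, entirely in the browser
      db.run('CREATE TABLE t(a); INSERT INTO t VALUES (1), (2);');
      console.log(db.exec('SELECT a FROM t')); // [{ columns: ['a'], values: [[1], [2]] }]
    }
    demo();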


What’s the security model for this? Seems like you’d end up with users able to run arbitrary queries against the database.


Good question. The SQLite REST server should probably provide ways of allowing only selected operations. There's no reason why EVERYTHING should be possible over the network.

One possibility could be that each user would have their own copy of the database; let them break it if they want to :-)
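
One crude way to get "only selected operations" without support in the server itself is a small guard in front of it. A sketch (the ports and the read-only policy are assumptions, not anything Soul ships):

    // Forward only GET requests to the upstream SQLite REST server; reject writes.
    const http = require('node:http');

    http.createServer((req, res) => {
      if (req.method !== 'GET') {
        res.writeHead(405);
        res.end('read-only API');
        return;
      }
      const upstream = http.request(
        { host: 'localhost', port: 8000, path: req.url, method: 'GET', headers: req.headers },
        (up) => { res.writeHead(up.statusCode, up.headers); up.pipe(res); }
      );
      upstream.end();
    }).listen(8080); // clients talk to :8080, the real server stays private on :8000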


That's a great question. This pattern is actually getting popular today with services like Turso and Cloudflare D1 (technically only SQLite compliant).

There are a few reasons I can see reaching for SQLite behind an API rather than something like Postgres. Portability can be a big benefit and I would expect you don't need to deal with connection pooling.

If I had a service running on a single box and didn't mind setting up my own db backups, I might reach for SQLite just for the ease of standing up new environments for dev and automated testing.
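
That "new environment" really can be a single file or nothing at all; a sketch using better-sqlite3, where the table and the driver choice are just illustrative:

    // Throwaway per-test database: no server to provision, nothing to tear down.
    const Database = require('better-sqlite3');

    const db = new Database(':memory:'); // or a temp file path per environment
    db.exec('CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)');
    db.prepare('INSERT INTO user (name) VALUES (?)').run('alice');

    console.log(db.prepare('SELECT * FROM user LIMIT 1').get()); // { id: 1, name: 'alice' }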


I've found Postgres via Docker to be fine for dev and testing, but there are various ephemeral Postgres scripts, plus this could be promising for Node stuff if it works out: https://github.com/electric-sql/pglite. I'd imagine if it does work out we'll see the same kind of builds for other runtimes as well (like Python).


Accessing the database from multiple applications or systems.

SQLite is great for sharing application data as well. If you have, for example, data for a specific event, it can make sense to use a separate database/file.

Being able to simply copy the file as a backup/archive is big here.

You might still want your application or services separate. Similarly, take a look at Turso or AstroDB for more options; Turso created libSQL as a fork, along with a libSQL server.


To me, the whole DB being a single file is a selling point in itself.


I regularly use an embedded SQLite database in my projects, and the existence of tools such as Soul and PocketBase means I can easily make those databases available to other members of my team without having to write a custom front end or application component just to work around their unwillingness to install an SQLite GUI on systems where they may not even be able to do so in the first place.


Frankly, you thought wrong. The whole point of sqlite was to build a database for naval vessels. Aside from that, given it's open source, the "point" of it doesn't really matter anymore.


That is inaccurate. Ginko's statement is closer to the truth.

I was inspired to write SQLite while working with Informix on DDG-79 and I saw how useful an embedded database would be in some situations, compared to a client/server solution. So I went off and wrote SQLite on my own, while the development contract was on hiatus. There was never a request for SQLite or anything like it coming from the navy (or more precisely, Bath Iron Works) as they were both very happy with Informix on the ship and Oracle on land and had zero desire for anything new or different. The development team I worked on ended up using SQLite some for prototyping and testing on that project, but it was never deployed to the ship, as far as I know.

So yes, the whole point of SQLite was to build a database that operated as a library linked into the application, rather than as a separate server, as ginko postulates. Design issues on a single system within DDG-79 (Automated Common Diagrams) were the inspiration for that idea, but to say that SQLite was designed for DDG-79 is not true. There was never a request for SQLite coming from the navy or the ship designers. Indeed, there was a lot of pushback against SQLite. SQLite was just a crazy idea coming from a rogue developer who happened to be working on one of the many on-board systems at that time.


I've always wondered why SQLite caught on but msql (Matt Dillon's similar database stack) didn't .. did you have any opportunity to review the existing database tools that were available at the time, and if so, what did you find?

I realize this question is a bit archaic, so if there aren't really any good answers, no worries - but as I was using Matt's msql in the 2000s in the same way that I now use SQLite, it's something I often wonder about whenever I set up a new sqlite.db ...


This was great, but you should have started the comment with "I'm D. Richard Hipp. You know nothing of my work."


Thank you for your work on SQLite and Fossil !! They are amazing.

Any chance of WAL2 being included as a standard journal_mode option in the near future? ..or BEGIN CONCURRENT ? :)


Huh, this is a little piece of history I was not aware of. Neat, and knowing it somewhat helps explain the design philosophy of SQLite.


Semi-related, there's an anecdote that the InterBase database was selected for use within the M1 Abrams tank because it was particularly robust in its implementation.

Apparently an issue with the M1 is that because of the energy produced when the main gun is fired, internal systems may spontaneously reset. So they had to design around that phenomenon through things like robustness and rapid system restart times.


If you want to know more about the origins of SQLite, I can highly recommend this episode of the Corecursive podcast:

https://corecursive.com/066-sqlite-with-richard-hipp/


I wonder if it is inspired by [1]; it looks very similar, including the "realtime" blurb.

--

1: https://pocketbase.io/


No benchmark = no download.

Plenty of these SQLite web front ends exist, including more enticing single static binaries in go, and they are always far too slow to use in anything but toy projects.

It should be standard to benchmark against a plain SQLite integration (on a modern CPU with NVMe).

Even PocketBase, as nice as the UI is, is an order of magnitude slower than using SQLite directly.

Here are some rough numbers I've found using SELECT * FROM user LIMIT 1; per second ( 1x / 10x / 100x ); a rough sketch of this kind of measurement loop follows the numbers:

SQLite: 68,000 / 50,000 / 4,000

SQLite WAL2 (3x database files): 67,000 / 52,000 / 8,000

PocketBase (CURL): 15,000 / 2,300 / 234

PocketBase (Direct): 62,000 / 29,000 / 3,900

ws4sqlite (CURL): 20,000 / 2,600 / 255
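
For what it's worth, a measurement like this over HTTP can be as simple as the sketch below (the endpoint, duration, and concurrency levels are assumptions, not the exact script behind the numbers above):

    // N concurrent workers hit one endpoint for a fixed time, then report req/sec.
    async function bench(url, concurrency, seconds = 5) {
      let count = 0;
      const stop = Date.now() + seconds * 1000;
      const worker = async () => {
        while (Date.now() < stop) {
          await fetch(url); // assumed read endpoint returning one row
          count++;
        }
      };
      await Promise.all(Array.from({ length: concurrency }, worker));
      return Math.round(count / seconds);
    }

    async function main() {
      for (const c of [1, 10, 100]) {
        console.log(`${c}x:`, await bench('http://localhost:8080/user/1', c), 'req/sec');
      }
    }
    main();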


Hi, PocketBase author here.

Keep in mind that PocketBase does a lot more than just execute a raw DB query. We perform data validation, normalization, serialization, enriching, auto fail-retry to handle SQLITE_BUSY errors, etc. All of this comes with some cost and will always have an effect when doing microbenchmarks like this.

The performance would also depend on which version of PocketBase you tried (before or after v0.10), whether you used the CGO or the pure Go driver, etc.

For a benchmark closer to "real world" scenarios, tested on various servers, you can check the results from https://github.com/pocketbase/benchmarks.

There is definitely room for improvement (I haven't done any detailed profiling yet), but the current performance is "good enough" for the kinds of applications PocketBase is intended for (I've shared some numbers for a PocketBase app in production in https://github.com/pocketbase/pocketbase/discussions/4254).

Hope the above helps.


Man, thanks for building PocketBase. It's very pleasant to use!


> No benchmark = no download.

Perhaps it's because I'm particularly snarky before my morning coffee, but this doesn't reflect the welcoming attitude open source is supposed to have and comes across as downright entitled, especially as the first thing to lead with. Maybe you didn't quite mean it that way?

I'm taking a wild guess that you don't have a screaming need to run 'SELECT * FROM user LIMIT 1;' tens of thousands of times per second on SQLite.

And if you did, I'm guessing you prob would be able to write your own API in front of it.

Am I wrong here?

Kudos to the authors of this tool. It looks great and I'd love to try it out on a recent project.


Who really wants to build on top of something just to hit an immediate performance cliff?

How far you can scale vertically on one server before you have to split or shard your data is why this is a big deal.

A) App servers are easy to scale (stateless, add hardware).

B) Database servers are hard to scale (requires changing logic).

If you're immediately hitting B) you're probably screwed.


> and they are always far too slow to use in anything but toy projects

Or, not every project needs the absolute best performance; sometimes good enough is simply good enough?


Btw, there's a ws4sqlite rewrite in Rust that's allegedly faster: https://github.com/proofrock/sqliterg


Ran the same test on the same server (Ubuntu 22.04 + 32-thread 7950X + Gen 4 NVMe + 64GB DDR5) -- nearly zero difference between sqliterg and ws4sqlite.

sqliterg (CURL): 20,000 / 2,500 / 237

For those wondering about write performance... it's poor except for direct SQLite. If you're write-heavy, look elsewhere, or use SQLite directly (preferably with multiple databases and WAL2).

INSERT INTO users (id) VALUES (..); per second ( 1x / 10x / 100x )

SQLite WAL: 11,000 / 3,000 / 300

SQLite WAL2: 14,500 / 3,000 / 760

SQLite WAL2 (3x database files): 29,000 / 6,000 / 1,400

sqliterg (CURL): 1,750 / 181 / blocks indefinitely


I've had similar ideas recently.

If I have an SPA querying a moderately sized static dataset, then something like this is appealing. I can stand up a container on Fly, store the SQLite DB on the volume, and serve directly from there without needing to worry about the number of reads I incur (with something like Turso) or all the ops complexity of a real DB.


Go calls to C are slow https://vancan1ty.com/blog/post/52


Curious to see how much of a factor it is - did you run the same test with and without cgo?


I think this one is more for development or test endpoints. So installation from NPM is a big plus: I can add it to devDependencies, have the sources in node_modules, and patch them with patch-package.
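
For example, a test-setup script could launch the locally installed CLI; a sketch assuming the package exposes a `soul` binary with --database/--port flags (check the repo for the actual names and options):

    // Start the dev REST server from the locally installed package, run the tests
    // against it, and kill it in teardown.
    const { spawn } = require('node:child_process');

    const server = spawn('npx', ['soul', '--database', './test.db', '--port', '8000'], {
      stdio: 'inherit',
    });

    // ...point integration tests at http://localhost:8000 ...
    // server.kill();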


it feels more like postgrest [1] than pocketbase [2].

--

1: https://postgrest.org/ 2: https://pocketbase.io/


From what I can see, this doesn't contain any kind of permission model or authentication - at least not in the documentation. I would say that RLS is one of the big killer features of PostgREST, which makes it so great for rapidly prototyping something that's actually useful when exposed to the open internet.


Why SQLite? Why not provide a generic SQL interface? I never understood that. Why do people build this around SQLite?


Because then nobody has to stand up a database server separately. I have a toy project I want to move from MariaDB to SQLite just so I don’t have to maintain MariaDB just for this app. All I need is my JS runtime, Deno in this case.


Seems like a toy solution; SQLite is the easiest to set up for someone who isn't familiar with databases.


The fact is SQLite and its derivatives are more than enough for a LOT of the web, lol. People always act like their random site needs 10,000 qps to operate, when in reality most barely crack 5-10.


Nice, and a possible alternative to the very polished PocketBase.

I'm working on something similar for PostgreSQL [0], with an API compatible with the excellent PostgREST [1].

I believe these tools can be of great utility in many projects and represent a generalization compared to the dedicated middleware that was popular a few years ago. Companies like Supabase are demonstrating this.

[0] https://github.com/sted/smoothdb [1] https://github.com/PostgREST/postgrest


Cool! Are you doing this mostly as a fun exercise or do you have a plan to provide something postgrest does not in the future?


Glad you like it.

I have quite ambitious plans, even though it started as a hobby project. In addition to the already developed DDL functionality and multi-database management, the next main features will include:

* Admin UI

* Projects with templates and versioning

* Migrations and deployment


Where are the hypermedia controls?


I am not sure what "Realtime" means here, but if it means something close to what I'm used to [1], then REST is a horrible idea. JavaScript is not a good idea either.

[1] " Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example from event to system response.[1] Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".[2] "


Realtime seems to mean - in this context - that you can open up a WebSocket and subscribe to changes to rows, i.e. get row changes pushed to you as a client instead of pulling them.
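
Roughly, a client-side subscription can look like the sketch below; the endpoint and message shape are illustrative assumptions, not Soul's documented protocol:

    // Subscribe over a WebSocket and receive row changes as they happen.
    const ws = new WebSocket('ws://localhost:8000/ws/tables/todos'); // assumed URL

    ws.onmessage = (event) => {
      const change = JSON.parse(event.data); // e.g. { type: 'insert', row: { ... } }
      console.log('row changed:', change);
    };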


Yes basically the meaning of "realtime" in databases, not the realtime of RTC :S


According to that definition, if your specific time constraints are "between 10ms and 1000ms", wouldn't HTTP be sufficient?

Seems like your argument for whether this is "realtime" or not depends on parameters not specified by either you or the project.


Firebase gives this definition:

> The Firebase Realtime Database is a cloud-hosted database. Data is stored as JSON and synchronized in realtime to every connected client.



