
SQLite 3.30.0 - QuadrupleA
https://www.sqlite.org/changes.html
======
hardwaresofton
Glad to see SQLite on the front page -- it's one of the silent workhorses of
the modern programming/database world. It _is_ the most widely deployed
database[0]. If you haven't given it a look/aren't interested in it since it
seems to be a "toy" database (often the "test" or "local" db for frameworks
like rails or django), you owe it to yourself to see what it can really do.
Easy-to-use full-text search[1][2], CTEs[3], JSON support via an extension[4]
(the extension system itself is also worth a look[5]), and much, much more.
There are certainly things that SQLite does _not_ do, and that's well
documented too[6].
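
As a small taste of the CTE support mentioned above, here's a recursive CTE run through Python's stdlib sqlite3 module (the query is made up purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Recursive CTE: generate the integers 1..5 without any table at all.
rows = con.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5
    )
    SELECT n FROM counter
""").fetchall()

print([n for (n,) in rows])  # [1, 2, 3, 4, 5]
con.close()
```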

If all this doesn't convince you to _use_ SQLite, it's also one of the
best-documented large C codebases around, and fantastic to learn from.

I'd go as far as to say that many modern startups could get pretty far with
SQLite + aggressive caching before even bringing in a big database like
Postgres (though with recent deployment/ops advancements it's easier than ever
to run Postgres).

[0]:
[https://sqlite.org/mostdeployed.html](https://sqlite.org/mostdeployed.html)

[1]: [https://www.sqlite.org/fts5.html](https://www.sqlite.org/fts5.html)

[2]: [https://sqlite.org/fts3.html](https://sqlite.org/fts3.html)

[3]: [https://sqlite.org/lang_with.html](https://sqlite.org/lang_with.html)

[4]: [https://www.sqlite.org/json1.html](https://www.sqlite.org/json1.html)

[5]:
[https://www.sqlite.org/loadext.html](https://www.sqlite.org/loadext.html)

[6]:
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

~~~
Scarbutt
But why? (For a server, that is.)

Postgres can run on a 256 MB VPS (if one can find one these days) and has more
features that you might need later.

Installing is as easy as 'apt-get install postgresql', create user, create
database, done.

~~~
throwaway_bad
I have a small blog (using Ghost) that uses sqlite as the main database and
it's incredibly easy to maintain. It's just a single file. For example if you
want to back up, you just copy the file or add it to git. If you want to do a
diff you just run sqldiff on the two files. It is as easy as working with JSON
or CSV files, except with atomic writes and better querying.
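
Worth noting: copying the file is only safe when nothing is writing to it. Python's stdlib sqlite3 module also exposes SQLite's online backup API, which takes a consistent snapshot even of a live database. A minimal sketch (the table is invented for illustration):

```python
import sqlite3

# Build a throwaway source database (stand-in for a real blog DB).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
src.execute("INSERT INTO posts (title) VALUES ('hello world')")
src.commit()

# SQLite's online backup API copies a consistent snapshot even while
# the source is in use; here the destination is another in-memory DB,
# but it would normally be a file you then copy or commit to git.
dst = sqlite3.connect(":memory:")
src.backup(dst)

print(dst.execute("SELECT title FROM posts").fetchone())  # ('hello world',)
```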

From their site:
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

>SQLite does not compete with client/server databases. SQLite competes with
fopen().

~~~
majewsky
> For example if you want to back up, you just copy the file or add it to git.
> If you want to do a diff you just do sqldiff on two files.

With Postgres, you run pg_dump on the database (which is a shell one-liner,
only with slightly more arguments than cp) and that's even better for putting
into git because it's a text file that git can diff natively and store
efficiently with delta compression.

------
tlamponi
SQLite is the most stable small DB there is, maybe even the most stable one at
all.

Our CTO made a cluster-wide distributed configuration filesystem[0], started
about 9 years ago. For that a local backing store was required, and a small
DB seemed to fit the bill: it should let us worry less about data safety and
about when data is really written out, and it could be a general interface
which could be swapped out rather easily if required one day.

He checked out lots of the then-available options (sorry, I only remember
BerkeleyDB and SQLite, but there were more). Anyway, only SQLite never crashed
or had corrupt data when doing weird things with it, and it was still very
fast. Just great.

Almost 10 years later we still use it, never a single issue with SQLite as the
cause - really impressive IMO.

[0]: [https://git.proxmox.com/?p=pve-cluster.git;a=tree;f=data/src...](https://git.proxmox.com/?p=pve-cluster.git;a=tree;f=data/src;h=67601f78bf53763092ead57e55ceb1a2428891a2;hb=refs/heads/master)

------
shafte
Aside from its merits as a database, SQLite is also a great example of good
documentation[0], thorough testing[1], and focused development. It's something
that all library maintainers should study and try to learn from.

[0]
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

[1] [https://www.sqlite.org/testing.html](https://www.sqlite.org/testing.html)

------
etaioinshrdlu
So, I've heard SQLite described as "Postgres for people not paying attention",
because it ignores data type errors.

A lot of people in the HN community like types. Me included. I really don't
want my database to ignore type errors. I basically want Postgres but embedded
like a library. I cannot think of any reason why that couldn't work. Perhaps
just no one has built it yet.
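
The "ignores type errors" point is easy to see firsthand: column type names in SQLite are only _affinities_, not constraints, so a string slides into an INTEGER column without complaint. A quick demo via Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")

# Both inserts succeed: 'INTEGER' is only a type *affinity* in SQLite,
# not a constraint, so the string is stored as-is.
con.execute("INSERT INTO t VALUES (?)", (42,))
con.execute("INSERT INTO t VALUES (?)", ("not a number",))

print(con.execute("SELECT n, typeof(n) FROM t").fetchall())
# [(42, 'integer'), ('not a number', 'text')]
```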

~~~
js8
I think it's a design choice. Ignoring types lets you focus on other features;
in the case of SQLite, things like crash resistance and reliability.

In other words, typechecking is also a cost for the language designer.

Even SQL itself doesn't have a particularly strong type system. Perhaps you're
looking for something else entirely.

~~~
justinclift
It's amazing to me that many SQLite libraries in the Go ecosystem don't
support these mixed/flexible data types in tables.

If you read back a set of values and one of the fields in a row doesn't match
the data type declared for the column, the libraries will error out.

One library that works correctly is github.com/gwenn/gosqlite, which checks
the data type of each field of the result set (per row), so handles things
gracefully.

------
ISO-morphism
I remember reading once upon a time about the sqlite file format being a good
candidate for data transfer between systems/machines, as it's standardized,
stable, and infinitely queryable. Has anyone here had any experience actually
using it as such? What are your takeaways?

~~~
mroos
We have been using SQLite as a distributed object container for the last few
years as part of an industry spec: <[http://www1.semi.org/en/ritdb-interplanetary-database-manufa...](http://www1.semi.org/en/ritdb-interplanetary-database-manufacturing>).
It has been working quite well on Windows, OS X and Linux, with Java, R, C, JS
and Python clients. We are using MQTT as the transport.

~~~
pdimitar
Your link gives 404.

~~~
bpicolo
Remove the >

------
silvestrov
I'm really hoping we one day will get ALTER TABLE x DROP COLUMN y.

~~~
foxhop
Me too. The only issue I have running SQLite3 in production is database
migrations where I need to drop or change a column.
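
For what it's worth, the usual migration workaround is to rebuild the table: create a new table without the column, copy the data over, drop the old table, and rename. A sketch with made-up table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, obsolete TEXT)")
con.execute("INSERT INTO users (name, obsolete) VALUES ('alice', 'x')")

# The classic 'drop column' migration: rebuild the table without the
# column, copy the data across, then swap the tables in one transaction.
with con:
    con.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("INSERT INTO users_new (id, name) SELECT id, name FROM users")
    con.execute("DROP TABLE users")
    con.execute("ALTER TABLE users_new RENAME TO users")

print(con.execute("SELECT * FROM users").fetchall())  # [(1, 'alice')]
```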

------
nattaylor
I use FILTER once in a while on a Postgres database I work with, so I am happy
to see it added to SQLite.
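
For anyone who hasn't used it: FILTER restricts a single aggregate, so one pass over a table can compute several conditional totals. A small sketch (table and values invented; requires SQLite 3.30+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (amount INTEGER, status TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(10, 'paid'), (20, 'paid'), (5, 'refunded')])

# FILTER (new to SQLite in 3.30) attaches a WHERE to one aggregate,
# so several conditional sums come out of a single table scan.
row = con.execute("""
    SELECT SUM(amount) FILTER (WHERE status = 'paid'),
           SUM(amount) FILTER (WHERE status = 'refunded')
    FROM orders
""").fetchone()
print(row)  # (30, 5)
```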

------
insulanian
We have started a project with SQLite as the data store, but ended up
replacing it with Firebird embedded due to two main reasons:

\- Lost decimal places on a simple database round-trip.

\- Issues with accessing the DB from multiple threads.

~~~
gigatexal
Yeah... many readers but only one writer at a given time. We got around that
by using a SQLite DB per user.
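
Another partial mitigation, for what it's worth, is WAL journal mode: it doesn't remove the single-writer limit, but readers no longer block the writer (or vice versa). A minimal sketch (the file path is throwaway):

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real file; :memory: databases can't use it.
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)

# In WAL mode, readers proceed concurrently with the single writer,
# which softens the one-writer-at-a-time limitation.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
con.close()
```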

~~~
ramraj07
I once conceived of a plan to store a lot of data per user and thought of the
same idea. How do you store the DBs? Keeping them on, say, S3 means there's a
multi-second (or longer) load time when a user logs in and you need to load the
DB onto the hard drive, right? (I'm thinking a GB or more of data per user.) I
considered an EC2 instance with a multi-terabyte EBS attached, which could then
effectively store a thousand users' data. Are there any other possibilities?

~~~
gigatexal
Our DBs are tiny for most users. We run S3 (Minio/Rook/Ceph on k8s) locally, so
the network latency is just cluster latency.

I figure you could just throw hardware at it like you mentioned. Move them to
NVMe-backed S3 if needed.

And our use case is only ever load, do a read or write, and then save, so the
DBs aren't open for very long.

And with S3 compression you could save on download time but pay a decompress
cost.

This approach has its downsides, don't get me wrong, but it scales nicely.
Just forget about running aggregates across the databases, at least not for a
real-time result.

~~~
xyzzy_plugh
Put sqlite in redis.

~~~
gigatexal
Got a link or more details?

~~~
gigatexal
Could be this? [https://grisha.org/blog/2013/05/29/sqlite-db-stored-in-a-
red...](https://grisha.org/blog/2013/05/29/sqlite-db-stored-in-a-redis-hash/)

~~~
xyzzy_plugh
I've never done it, but you can store binary blobs in redis, and a sqlite db
is a binary blob.
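
Exactly: a SQLite database file is just bytes, so stashing it in any blob store amounts to reading the file. A sketch where a plain dict stands in for Redis (the key name is made up; with a real client it would be something like `r.set(key, blob)`):

```python
import os
import sqlite3
import tempfile

# Build a small per-user database on disk.
path = os.path.join(tempfile.mkdtemp(), "user42.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE notes (body TEXT)")
con.execute("INSERT INTO notes VALUES ('hi')")
con.commit()
con.close()

# A SQLite database file is just bytes; any blob store will do.
# A dict stands in for redis here.
with open(path, "rb") as f:
    blob = f.read()
store = {"user:42:db": blob}

# Every SQLite file starts with this well-known header string.
print(store["user:42:db"][:15])  # b'SQLite format 3'
```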

------
jimnotgym
I used to do a fair amount of ad hoc analysis with large (more than Excel can
handle is large enough) csv files.

Sqlite3 was my saviour. It was the work of minutes to have the data in a db
and be chopping out the bits I needed.
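
That workflow really is only a few lines with the stdlib. A sketch with invented columns and data (an in-memory StringIO stands in for the big CSV file):

```python
import csv
import io
import sqlite3

# A stand-in for a large CSV file (the columns are made up).
data = io.StringIO("city,population\nOslo,700000\nBergen,290000\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cities (city TEXT, population INTEGER)")

# DictReader yields one mapping per row, which executemany feeds
# straight into the named placeholders.
reader = csv.DictReader(data)
con.executemany("INSERT INTO cities VALUES (:city, :population)", reader)

# Now it's ordinary SQL: chop out just the bits you need.
print(con.execute("SELECT city FROM cities WHERE population > 500000").fetchall())
# [('Oslo',)]
```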

~~~
speedplane
> I used to do a fair amount of ad hoc analysis with large (more than Excel
> can handle is large enough) csv files. Sqlite3 was my saviour.

This was probably the right decision 5 years ago, but I'm not sure SQLite is
the proper solution for "ad hoc" analysis now. Excel now has plugins to handle
very large data sources. And of course, the company leading this charge is
Tableau, with many others doing something similar.

SQLite can still do all the manipulation that these others can, but the
others can also be used by non-developers and come with pretty graphs and
pictures. For large CSV analysis on an ad hoc basis, SQLite doesn't stack up
well against the competition.

~~~
snthpy
As someone who has to use Excel a lot, I'm very interested in what your
typical workflow looks like.

~~~
speedplane
> As someone who has to use Excel a lot, I'm very interested in what your
> typical workflow looks like.

It's pretty standard Excel stuff: summing columns, averaging, functions
combining multiple columns, pivot tables. The significant difference is that I
routinely deal with 20k+ rows, and occasionally 100k+ rows. For simple stuff,
standard Excel starts breaking down with 10k+ rows, and it even struggles with
1k rows with more complicated manipulation.

------
rcarmo
Every time I have to set up a Java runtime on Windows and click through
Oracle's boast of it running on 3 (US) billion devices (which pops up in the
installer), I think of how SQLite is probably at least a trillion ahead :)

Great to see FILTER on aggregates, solves a small pain I have with my home
automation stats :)

------
rootw0rm
SQLite kicks ass. Oh, the horrible things I've attempted to make it do...

------
mikece
Something else I appreciate about SQLite: it proves that under the ultimate
free software distribution scheme -- public domain -- corporate users
contribute improvements back to the project.

------
alkonaut
I wish SQLite would move to GitHub with issue tracking. Digging through their
issues to find whether my particular issue has been reported is not as pleasant
as with GitHub issues.

Specifically I’m now wondering if a case-insensitive replace() string function
is already a feature request or perhaps even implemented in a recent version.
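
In the meantime, sqlite3 lets an application register its own SQL functions, so a case-insensitive replace is a few lines. A sketch (the function name `ireplace` is my invention, not part of SQLite):

```python
import re
import sqlite3

def ireplace(s, old, new):
    # Case-insensitive replace; re.escape so 'old' is treated literally.
    return re.sub(re.escape(old), new, s, flags=re.IGNORECASE)

con = sqlite3.connect(":memory:")
# Register the Python function so SQL queries can call it by name.
con.create_function("ireplace", 3, ireplace)

print(con.execute("SELECT ireplace('Hello World', 'WORLD', 'SQLite')").fetchone()[0])
# Hello SQLite
```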

------
no_wizard
How does SQLite compare to, say, BerkeleyDB or LevelDB, which I think compete
in the same arena, no?

I've used SQLite and like it a lot, but I've recently started using these two
for some pet projects and found them easier to work with, and in theory they
should scale more easily, if I understand their strengths correctly.

~~~
zzzcpan
It depends on your use case. LevelDB is way too sloppy about memory
allocation, threads and I/O, which may not be acceptable for an embeddable
database. I got to the point once where every daemon on every node using
LevelDB had to be restarted daily from cron.

------
0x76
I would definitely use it more if it played nicer with its DB on an NFS
share.

Although I don't know the underlying cause, so it may be infeasible.

------
fauigerzigerk
I would be using SQLite a lot more if it had a decimal type for monetary
calculations.
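
The common workaround, for what it's worth, is to store exact integer minor units (cents) and convert to Decimal only at the edges. A sketch with an invented table:

```python
import sqlite3
from decimal import Decimal

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices (amount_cents INTEGER)")

# Workaround for the missing decimal type: keep money as exact
# integer cents inside the DB, use Decimal only at the boundaries.
def to_cents(d):
    return int(d * 100)

con.executemany("INSERT INTO invoices VALUES (?)",
                [(to_cents(Decimal("19.99")),), (to_cents(Decimal("0.01")),)])

total = con.execute("SELECT SUM(amount_cents) FROM invoices").fetchone()[0]
print(Decimal(total).scaleb(-2))  # 20.00
```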

~~~
nosezinc
Coming in sqlite4...

~~~
ngrilly
Nope.

> SQLite4 was an experimental rewrite of SQLite that was active from 2012
> through 2014. All development work on SQLite4 has ended. Lessons learned
> from SQLite4 have been folded into the main SQLite3 product. SQLite4 was
> never released. There are no plans to revive it. You should be using
> SQLite3.

Source:
[https://sqlite.org/src4/doc/trunk/www/](https://sqlite.org/src4/doc/trunk/www/)

