
SQLite 3.32 - nikbackm
https://sqlite.org/releaselog/3_32_0.html
======
iagovar
As a non-dev intruder I have to say that I love SQLite. I do a lot of data
analysis and it makes everything easy, from fast SQL wizardry to sharing the
DB by just copying a file! How amazing is that?!

It must sound naive to some of you, but the first time I stumbled upon SQLite
I was so excited!

~~~
mmsimanga
I work in data analysis too, and SQLite is to me what MS Excel is to other
people in the industry. I have a development background, so I never picked up
MS Excel to the same degree as my colleagues from statistics backgrounds. In
corporate organisations where DBAs monitor the use of their databases with an
eagle eye, SQLite is a fantastic option for analysing several million lines of
data on your desktop. I find DB Browser for SQLite (DB4S) [0] invaluable when
using SQLite.

[0][https://sqlitebrowser.org/](https://sqlitebrowser.org/)

~~~
fauigerzigerk
I have a lot of Excel-like use cases for which SQLite would be a great fit if
only it had a decimal type.

~~~
cordite
As in like Java Bigdecimal, with a BigInteger on the left side, and a fixed
precision on the right?

~~~
fauigerzigerk
As in other database systems such as Postgres. Even a C#-style decimal
floating-point type would be sufficient.
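
Pending a real decimal type, one common workaround is fixed-point storage: keep amounts as INTEGER cents and convert at the edges. A sketch in Python's built-in sqlite3 module (the schema and helper names are illustrative, not from the thread):

```python
import sqlite3
from decimal import Decimal

# Sketch: emulate a fixed-point decimal column by storing cents as INTEGER.
# SQLite has no DECIMAL type, so the scale (here, 2 digits) lives in the app.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, amount_cents INTEGER NOT NULL)")

def insert_amount(conn, s: str) -> None:
    """Parse a decimal string like '19.99' into integer cents exactly."""
    cents = int(Decimal(s) * 100)
    conn.execute("INSERT INTO ledger (amount_cents) VALUES (?)", (cents,))

insert_amount(conn, "19.99")
insert_amount(conn, "0.01")
# Integer arithmetic in SQL stays exact; format back to decimal on output.
(total,) = conn.execute("SELECT SUM(amount_cents) FROM ledger").fetchone()
print(f"{total // 100}.{total % 100:02d}")  # 20.00
```

This avoids binary-float rounding entirely, at the cost of tracking the scale yourself.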

------
bob1029
For our B2B application, we've been using SQLite as the exclusive means for
reading and writing important bytes to/from disk for over 3 years now.

We still have not encountered a scenario that has caused us to consider
switching to a different solution. Every discussion that has come up regarding
high availability or horizontal scaling ended at "build a business-level
abstraction for coordination between nodes, with each node owning an
independent SQLite datastore". We have yet to go down this path, but we have a
really good picture of how it will work for our application now.

For the single-node-only case, there is literally zero reason to use anything
but SQLite if you have full autonomy over your data and do not have near-term
plans to move to a massive Netflix-scale architecture. Performance is
absolutely not an argument: properly implemented, SQLite will make localhost
calls to Postgres, SQL Server, Oracle, et al. look like a joke. You cannot
get much faster than an in-process database engine without losing certain
durability guarantees (and you can even turn these off with SQLite if you dare
to go faster).
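
The durability knobs alluded to here are SQLite's pragmas; a minimal sketch via Python's built-in sqlite3 module (the pragma names are real SQLite settings, the rest is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# WAL keeps readers unblocked during writes; synchronous=OFF skips fsync
# for extra speed, at the cost of durability if the OS crashes mid-write.
conn.execute("PRAGMA journal_mode=WAL")  # a no-op for :memory:, shown for shape
conn.execute("PRAGMA synchronous=OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 0 == OFF
```

With a file-backed database, `journal_mode=WAL` plus `synchronous=NORMAL` is the usual fast-but-still-safe middle ground.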

~~~
hobs
I love sqlite, but I just wonder how big you're going?

I regularly see 50TB total of databases on SQL Server, and scaling up to
thousands of clients.

~~~
bob1029
With current volume, our largest transactional datastore is ~50GB.

That said, if we had a situation where we were pushing such volume that the
transactional store was 50 TB, and we were still within the scope of the
overall project, I see no reason why this would be a problem for us. As I
mentioned before, the context of our applications is single process, single
server. Obviously, this is not suitable for all edge cases, and you will
potentially be better off with a hosted solution in those.

At the end of the day, there is no arbitrary number of bytes where B-Tree
indices just stop working because you didn't shell out for a SQL Server
license.

~~~
hobs
Sure, but things like horizontally partitioning your data across multiple hard
drives via splitting files, supporting partition elimination in your queries,
etc. are all things that I am pretty sure sqlite doesn't even want to bring to
the table.

~~~
bob1029
And I would hope that they do not bother to. The current offering is arguably
perfect as-is.

I would prefer to implement things like horizontal partitioning myself using
business keys and logic that make sense for my problem. I do not want a
database to manage this sort of thing for me, because I want perfect
visibility into exception cases so business logic can be developed around
them.

For example, if one of the drives fails but I was only storing a lower-priority
business entity on that particular partition (e.g. user preferences), I could
decide to just continue processing without it (e.g. reset to defaults). With a
hosted database solution, where you do not have this degree of granularity, a
catastrophic loss of part of the storage array means the entire database
is down, even if the important data isn't directly impacted.
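
The partition-by-business-key idea can be sketched with independent SQLite stores routed by a stable hash. All names here (shard_for, stores) are my own illustration, not from the thread:

```python
import sqlite3
import zlib

# Each shard is an independent SQLite database; in a real deployment these
# would be separate files (possibly on separate drives or nodes).
N_SHARDS = 4
stores = [sqlite3.connect(":memory:") for _ in range(N_SHARDS)]
for db in stores:
    db.execute("CREATE TABLE prefs (user_id TEXT PRIMARY KEY, theme TEXT)")

def shard_for(user_id: str) -> sqlite3.Connection:
    # Stable hash -> shard index. zlib.crc32 is used because Python's
    # built-in hash() is salted per process and so is not stable.
    return stores[zlib.crc32(user_id.encode()) % N_SHARDS]

shard_for("alice").execute("INSERT INTO prefs VALUES ('alice', 'dark')")
row = shard_for("alice").execute(
    "SELECT theme FROM prefs WHERE user_id = 'alice'"
).fetchone()
print(row[0])  # dark
```

If one shard's drive dies, the other shards keep answering, which is exactly the failure-isolation property described above.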

------
dtf
While reading the documentation for iif(), I noticed the command-line shell
function edit(), which is pretty cool.

    UPDATE docs SET body=edit(body) WHERE name='report-15';
    UPDATE pics SET img=edit(img,'gimp') WHERE id='pic-1542';

~~~
combatentropy
[https://sqlite.org/cli.html#the_edit_sql_function](https://sqlite.org/cli.html#the_edit_sql_function)

------
ha470
While I love SQLite as much as the next person (and the performance and
reliability are really quite remarkable), I can’t understand all the effusive
praise when you can’t do basic things like dropping columns. How do people get
around this? Do you just leave columns in forever? Or go through the dance of
recreating tables every time you need to drop a column?
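
For reference, the "dance" is the create-copy-swap procedure the SQLite docs describe for schema changes. A minimal sketch in Python's sqlite3 module (a real migration must also recreate indexes, triggers, and views, and should run inside one transaction as here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, legacy_flag INTEGER)")
conn.execute("INSERT INTO users (name, legacy_flag) VALUES ('ada', 1)")

# Drop legacy_flag by building a new table without it, copying, and swapping.
with conn:  # one transaction: either the whole swap happens or none of it
    conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users_new (id, name) SELECT id, name FROM users")
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_new RENAME TO users")

cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name']
```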

~~~
eli
Don’t many MySQL backends also recreate the whole table when you drop a
column? They just hide it from you better.

~~~
faceplanted
Pretty sure they must; with row-based storage on disk you'd practically have
to, just to avoid wasting all of the space you've just gained from deleting
the column by leaving a gap on every single row.

~~~
calpaterson
Adding a nullable column is constant time (i.e. basically instant) in postgres
and innodb, and maybe also in other systems.

~~~
HelloNurse
If _adding_ a nullable column is free, it probably means that the DBMS is able
to distinguish multiple layouts for the same table: existing rows in which the
new column doesn't actually exist and is treated as NULL, and newly written
rows in which there is space for the new column.

But dropping a column is different: even if the DBMS performs a similar smart
trick (ignoring the value of the dropped column that is contained in old rows)
space is still wasted, and it can only be reclaimed by rewriting old files.

~~~
calpaterson
Dropping a column in postgres is also instant, so yes, it uses the same trick.

Deleting a row is similar too - the row is not removed from the heap page and
the database does not get smaller (though if that page gets rewritten the row
is not kept). Last time I used innodb it didn't actually return free heap
pages to the filesystem at all so no matter how much you deleted the database
file never got smaller.

EDIT: Looks like that's still the case now:
[https://bugs.mysql.com/bug.php?id=1341](https://bugs.mysql.com/bug.php?id=1341)

------
why-el
One of the great things one can learn from SQLite is the degree to which they
unit (and integration) test their source code. It's honestly the best unit
test document I have read in my career to date:
[https://www.sqlite.org/testing.html](https://www.sqlite.org/testing.html).

~~~
ardy42
IIRC, some company wanted to use SQLite on an airplane, so they paid the devs
enough to bring the test suite up to FAA standards. IIRC, they have code
coverage of _every machine instruction_.

~~~
why-el
Yep, I think (with 90% certainty) that Richard Hipp, the creator of SQLite,
mentioned this in a YouTube talk, but sadly I can't recall which one. :(

~~~
justinclift
This seems to be it:

[https://youtu.be/Jib2AmRb_rk?t=675](https://youtu.be/Jib2AmRb_rk?t=675)

~~~
why-el
Yep, that's the one. Thanks.

------
wenc
SQLite is great, but its decision not to have a standard datetime/timestamp
datatype -- standard in all other relational databases -- has always struck
me as a surprising omission, though in retrospect I kind of understand why.
Datetimes are undeniably difficult.

So sqlite leaves the datetime storage decision to the user: TEXT, REAL, or
INTEGER [1]. This means certain datetime optimizations are not available,
depending on what the user chooses. If one needs to ETL data containing
datetimes, a priori knowledge of the datetime encoding used in a file is
needed.

In that sense, sqlite really is a "file format with a query language" rather
than a "small database".

[1] [https://stackoverflow.com/questions/17227110/how-do-datetime-values-work-in-sqlite](https://stackoverflow.com/questions/17227110/how-do-datetime-values-work-in-sqlite)
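
The three representations can be seen side by side with Python's sqlite3 module. A sketch (table and column names are mine); SQLite's built-in date functions read any of them, but you must tell them which encoding a column uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two of the documented options: TEXT (ISO-8601) and INTEGER (Unix epoch);
# REAL (Julian day) works the same way via the 'julianday' modifier.
conn.execute("CREATE TABLE events (name TEXT, ts_text TEXT, ts_int INTEGER)")
conn.execute(
    "INSERT INTO events VALUES "
    "('launch', '2020-05-22 10:00:00', strftime('%s', '2020-05-22 10:00:00'))"
)
# date() parses ISO-8601 text directly; epoch integers need 'unixepoch'.
text_day, int_day = conn.execute(
    "SELECT date(ts_text), date(ts_int, 'unixepoch') FROM events"
).fetchone()
print(text_day, int_day)  # 2020-05-22 2020-05-22
```

Same instant, two encodings: the a priori knowledge the comment mentions is exactly that `'unixepoch'` modifier.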

~~~
combatentropy
"SQLite does not compete with client/server databases. SQLite competes with
fopen()." ---
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

------
trashburger
>Increase the default upper bound on the number of parameters from 999 to
32766.

I don't want to know the use case for this.

Keep rocking on, SQLite. It's the first tool I reach for when prototyping
anything that needs a DB.

~~~
oefrha
Simple. Bulk insert with a 999-parameter limit is just painful; if each entry
has 9 columns, you can’t even insert 112 rows at once. In practice, distros
already compile with a higher default; e.g. Debian compiles with
-DSQLITE_MAX_VARIABLE_NUMBER=250000, still way higher than this new default.
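
A sketch of how the limit bites and the usual chunking workaround, in Python's sqlite3 module (the 999 bound is hard-coded here for illustration; the real bound depends on how the library was compiled):

```python
import sqlite3

MAX_VARS = 999                       # the old default bound being discussed
N_COLS = 9
ROWS_PER_STMT = MAX_VARS // N_COLS   # 111 rows max per multi-row INSERT

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE t ({', '.join(f'c{i} INTEGER' for i in range(N_COLS))})")

rows = [tuple(range(N_COLS))] * 500
placeholder = "(" + ", ".join("?" * N_COLS) + ")"
for start in range(0, len(rows), ROWS_PER_STMT):
    chunk = rows[start:start + ROWS_PER_STMT]
    sql = f"INSERT INTO t VALUES {', '.join([placeholder] * len(chunk))}"
    flat = [v for row in chunk for v in row]  # flatten rows into the ? slots
    conn.execute(sql, flat)

(count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(count)  # 500
```

With the new 32766 default, the same 9-column insert fits 3640 rows per statement instead of 111, so the chunking loop runs far less often.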

~~~
abraae
What's the point? Inserting batches of 1,000 rows at once, or even 10k rows at
once, is hardly any faster overall than using batches of 100 rows, assuming
there are no delays in presenting the batches to the DB.

~~~
Carpetsmoker
It's just easier: I won't have to split queries with 1,500 parameters in two
because of some limit.

------
zubairq
Thanks so much for SQLite. Amazing and stable database. Yazz Pilot
([https://github.com/zubairq/pilot](https://github.com/zubairq/pilot)) is
built on it

------
oefrha
Good to see a ternary function iif() added. CASE expressions are usually
pretty painful and/or unreadable when using query builders.
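
A quick illustration of the equivalence, via Python's sqlite3 module (the iif() branch only runs when the linked SQLite library is 3.32 or newer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# iif(cond, a, b) is shorthand for the equivalent CASE expression.
case_val = conn.execute(
    "SELECT CASE WHEN 5 > 3 THEN 'yes' ELSE 'no' END"
).fetchone()[0]
if sqlite3.sqlite_version_info >= (3, 32, 0):
    iif_val = conn.execute("SELECT iif(5 > 3, 'yes', 'no')").fetchone()[0]
else:
    iif_val = case_val  # older library: only the CASE form exists
print(case_val, iif_val)  # yes yes
```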

------
RivieraKid
Is it reasonable to assume that in most current deployments of PostgreSQL or
MySQL, SQLite would be at least an equally good choice?

I was recently choosing a database for a medium-size website, and SQLite
seemed like an obvious choice. One thing I was worried about was that the
database locks for each write -- but this is apparently no longer true with
the write-ahead log.
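
Enabling the write-ahead log is a single pragma. A sketch via Python's sqlite3 module (WAL needs a real file; `:memory:` databases report journal mode 'memory'):

```python
import sqlite3
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
# One PRAGMA switches the database to write-ahead logging; after this,
# writers no longer block readers (though there is still only one writer
# at a time). The setting is persistent: it sticks across connections.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```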

~~~
duskwuff
Depends on the environment. SQLite will scale out reasonably well _so long as_
it's only needed on one machine. As soon as you need a network-accessible
database, traditional database servers start looking like a better option.

------
emadda
I’ve been using SQLite on GCP for a few small projects and it seems to work
well.

I use docker volumes to write to disk. I pass the disk directory to my process
via a CLI arg.

When running on a VM, these disk writes are replicated between zones (this is
the default for regional GCP disks). So you get zero-config high availability
(if you can tolerate downtime during a reboot).

~~~
rhencke
You might find DQLite of interest.

[https://dqlite.io/](https://dqlite.io/)

~~~
emadda
Thanks, I have seen this, but I would prefer to use the data-center-provided
replication at the disk level, as I do not need real-time failover (I just
need to make sure I can recover data in case of a single-zone failure). Also,
incremental disk snapshots are nice to have.

------
devwastaken
Are there resources for good practices on database design? I feel that what I
make 'works', but I'd be curious what databases made by experienced people
look like.

For example, I have an app that you upload files through. Files can be local
to the server or on S3, and have metadata. I end up making a new table for
each API point: a table for listing files/directories, a table for local
files, a table for S3 files, then a table for the metadata, and a table for
the kind of file it is, etc. It works, but it feels like a heavy hammer.
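
One hedged alternative sketch: a single files table with a backend discriminator instead of a table per backend, plus a generic key/value metadata table. Every table and column name here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE files (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    storage_backend TEXT NOT NULL CHECK (storage_backend IN ('local', 's3')),
    location TEXT NOT NULL            -- filesystem path or S3 key
);
CREATE TABLE file_metadata (
    file_id INTEGER NOT NULL REFERENCES files(id),
    key TEXT NOT NULL,
    value TEXT,
    PRIMARY KEY (file_id, key)
);
""")
conn.execute("INSERT INTO files VALUES (1, 'report.pdf', 's3', 'bucket/report.pdf')")
conn.execute("INSERT INTO file_metadata VALUES (1, 'content-type', 'application/pdf')")
(backend,) = conn.execute("SELECT storage_backend FROM files WHERE id=1").fetchone()
print(backend)  # s3
```

Listings, local files, and S3 files then all come from one table with a WHERE clause, instead of a table per API endpoint.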

~~~
vbezhenar
You might want to check out Codd's books. He invented the relational model,
after all, and his books cover database design.

------
nattaylor
A few nice little conveniences like iif(). I like reading SQLite release
notes because they seem good at avoiding cruft. (The refusal to implement
JSONB comes to mind.) Now if only I could get my shared web host to upgrade
to a recent version...

~~~
jventura
If you have SSH access to your web host, you may be able to upgrade it
yourself. I needed something more recent for Django 2.2 and had to download
the latest SQLite, compile it, put the library in some folder, and add it to
.bashrc so that python3 could use it (LD_LIBRARY_PATH or something like
that).

Look for it on Google; it's possible to do.

~~~
simonw
My Google skills are failing me here, can you provide any more details? I'm
very interested in knowing tricks to upgrade the SQLite version used by
Python.

~~~
icegreentea2
You set the LD_LIBRARY_PATH environment variable
([https://unix.stackexchange.com/a/24833](https://unix.stackexchange.com/a/24833)).

Specifically, you'll need to recompile libsqlite3, put it somewhere, and then
set LD_LIBRARY_PATH before invoking Python. You can do that globally in your
shell by modifying your .bashrc or similar file.

Or if you're super brave, you just replace the libsqlite3.so that Python is
pointing to (really depends on your use case).
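
One way to confirm which libsqlite3 Python actually loaded after such a change (this only reads version info; it doesn't perform the upgrade itself):

```python
import sqlite3

# The module reports the version of the C library it is linked against.
# If LD_LIBRARY_PATH points at your freshly compiled libsqlite3, this
# should show the new version instead of the system one.
print(sqlite3.sqlite_version)   # e.g. '3.32.0'
ok = sqlite3.sqlite_version_info >= (3, 0, 0)
print(ok)
```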

------
cptnapalm
Recommendations for learning SQL with SQLite? I've recently started doing the
Khan Academy videos, and am liking them, but I'd like more practice problems
and explanatory text.

~~~
justinclift
Some of these _may_ be useful:

[https://github.com/sqlitebrowser/sqlitebrowser/wiki/Tutorial...](https://github.com/sqlitebrowser/sqlitebrowser/wiki/Tutorials)

[https://github.com/sqlitebrowser/sqlitebrowser/wiki/Video-tutorials](https://github.com/sqlitebrowser/sqlitebrowser/wiki/Video-tutorials)

One of our developers (Manuel) started putting together lists of tutorials and
videos for SQLite + DB Browser for SQLite a while back.

There are probably more we've missed, and contributions to those pages (etc.)
are welcome. :)

------
RivieraKid
One possible disadvantage of SQLite is that it only allows one writer at a
time (but writes don't block readers with write-ahead log enabled). I'm really
curious about whether Postgres performs better at concurrent writing, couldn't
find any benchmarks. In theory, disk writes are always sequential, so I'm
skeptical Postgres would do substantially better.

~~~
justinclift
> I'm really curious about whether Postgres performs better at concurrent
> writing ...

Very much so. PostgreSQL easily handles _lots_ of concurrent writing. It's a
use case where PostgreSQL is much better than SQLite. :)

~~~
RivieraKid
I don't believe this without a benchmark.

~~~
justinclift
Generally that's a good approach. :)

In this case though, it seems a bit weird.

SQLite is widely known to be for single writer workloads, whereas PostgreSQL
is similarly widely known for being extremely good in concurrent usage
scenarios.

Those are the things they're each designed for. eg:

[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)
(the "High-volume Websites", "High Concurrency", and "Many concurrent
writers?" pieces)

Feel free to run benchmarks to demonstrate this to your own satisfaction
though. :)

~~~
RivieraKid
Well, I explained my rationale above: writes are sequential at the disk level
regardless of the database, so PostgreSQL shouldn't have much of an advantage
in concurrent simple writes (it could even be slower than SQLite). PostgreSQL
should be faster in concurrent complex transactions, but the question is how
complex and how much faster.

~~~
justinclift
Good point. That's a reasonable place to start investigating from.

Personally, I can't be bothered (too much higher-priority stuff to do). But if
someone else gets around to testing this specifically, I'd be interested in
the answers as well, just to "make sure" the common understanding isn't wrong.
:)

------
pachico
I have a SQLite-powered service for the free GeoNames gazetteer running in
production. It's a read-only service, so SQLite fits perfectly and provides
really good performance. I also use it to work with data coming in CSV
format. What a great piece of software!

------
me551ah
Where can you use sqlite?

Embedded: Yes

Raspberry Pi: Yes

Mobile Apps: Yes

Desktop Apps: Yes

Microservices: Yes

Big Monolith: Yes

Browsers: No

~~~
goutham2688
Check this out for browsers:
[https://github.com/sql-js/sql.js](https://github.com/sql-js/sql.js)

------
zeroimpl
Why is it “iif” instead of “if”? I don’t recall “if” being a keyword in SQL.

------
hn_1234
Is SQLite used for big data storage? What are high-end use cases, beyond the
small data points I mostly use it for? Excuse me if it's a dumb question.

~~~
mkl
Maybe this will help?
[https://www.sqlite.org/whentouse.html](https://www.sqlite.org/whentouse.html)

------
boksiora
my favorite db format

