
Appropriate Uses for SQLite - ftclausen
https://sqlite.org/whentouse.html
======
danso
I teach SQL to non-techie students. I used to give them the option of doing
either MySQL or SQLite, but not only did I underestimate how different the
syntaxes were, I also underestimated how not-trivial it is for students to
successfully install and run both the MySQL server and client. These are
students who can't even use a spreadsheet well, not that that makes a huge
difference in understanding databases.

I've moved everything to SQLite and couldn't be happier. Not only is it easier
to distribute assignments (e.g. a single SQLite file, instead of CSVs that
need to be manually imported), it does everything I need it to do to teach the
concepts of relational databases and join operations. This typically just
needs read-only access, so our assignments can involve gigabytes of data
without issue.
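Read-only access like this is easy to enforce at the connection level. A minimal sketch using Python's sqlite3 module (the file and table names are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Build a tiny stand-in "assignment" database.
db_path = os.path.join(tempfile.mkdtemp(), "assignment.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO people (name) VALUES (?)", [("Ada",), ("Grace",)])
conn.commit()
conn.close()

# Students open the same file read-only via a URI, so accidental writes
# fail cleanly instead of modifying the shared dataset.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
rows = ro.execute("SELECT name FROM people ORDER BY name").fetchall()
print(rows)  # [('Ada',), ('Grace',)]

try:
    ro.execute("INSERT INTO people (name) VALUES ('Eve')")
except sqlite3.OperationalError:
    print("write rejected")  # read-only connections cannot modify the file
```

The `mode=ro` URI parameter makes the whole connection read-only, which fits the "distribute one file per assignment" workflow described above.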

~~~
Yaggo
"I used to give them the option of doing either MySQL or SQLite"

SQLite is a good choice for absolute beginners.

Later, when teaching "real" multi-user RDBMSes, although MySQL may be more
popular, it makes more sense to teach PostgreSQL as the "default open source
database". Both will do the job, but PostgreSQL got more things right from the
beginning, which is especially important when learning. Think PHP vs. Python:
PHP may cut a few corners, but it's not an ideal language for teaching generic
concepts.

~~~
yannis
>Think PHP vs. Python: PHP may cut a few corners, but it's not an ideal
language for teaching generic concepts.

Maybe not even Python; I'd suggest Go or Rust. In my generation we started
learning CS with Pascal and moved to C once we had a bit more understanding.

~~~
StavrosK
I think it's "Python will teach you more about programming, C will teach you
more about computers".

------
qwertyuiop924
SQLite is quite possibly one of the most useful pieces of software ever
created. It's small, relatively fast, and unbelievably solid. It's up there
with bash, curl, grep, emacs, and nano: tools that are just so good at their
job that we don't even notice how amazing they are.

I mean, really. SQLite is remarkable, impressive, and used everywhere, and _we
never talk about it_. Emacs is a remarkably impressive piece of engineering,
bash is the world's default shell for a reason, Nano is the newbie's text
editor, and, well, just imagine for a second what would happen if grep or curl
stopped working.

~~~
seagreen

> It's up there with bash

Clearly you've just arrived from some wonderful alternate universe where bash
means something different than it does on Earth. Welcome, traveler!

> bash is the world's default shell for a reason

Here on Earth that reason is network effects ("If I write it in bash, it will
run anywhere!"). Bash is a bad language. If you've mastered bash you can have
the honorable feeling of having mastered a difficult, ugly, but practical
skill (see also: knife-fighting, driving a motor vehicle, running for office).
But there's no need to be mean to SQLite by comparing the two.

~~~
jsmthrowaway
Funny, I instead had that reaction to nano being there after I read "good at
their job." I mean, that's one way of looking at nano, I suppose, depending on
how you define "job"...

~~~
qwertyuiop924
Partly because listing emacs or vim would launch a flame war, but Nano's
intent is to be a small text editor, friendly enough for newbies, to handle
small jobs, like writing an email, or editing /etc/fstab. And it does that
pretty well.

~~~
jsmthrowaway
But you listed emacs?

~~~
qwertyuiop924
I forgot I did that. It was because Emacs is really technically impressive.
It's a bit ugly in places, but on the whole, it's remarkably well designed.

------
AstroJetson
I've always loved this line:

> SQLite does not compete with client/server databases. SQLite competes with
> fopen().

I have some small apps that I've written in TCL that use SQLite that I've been
very happy with. Not much more effort than using a file.

There are also some nice hooks to allow the use of SQLite from Lua scripts.
It's pretty easy and it fits into the Lua world view of data.

~~~
hackits
I found it useful for logging. Logging tends to have structured data
associated with it, and trying to re-parse log files back into that structure
gets tiresome and is error-prone.

~~~
qwertyuiop924
Honestly, I'd say that logging should be text. Yes, SQLite DBs are hard to
corrupt, but it can happen, especially in the sort of catastrophic failure you
would want a log for. And when data corrupts, it's generally easier to extract
some degree of data from a text file than from a binary.

Maybe as a log archive format, though...

~~~
simcop2387
The big problem with text logs is the moment something you log doesn't fit
what you were expecting. A message has a newline? All of a sudden you either
have to escape characters or handle the data landing on the next line rather
than in the next log message. How do you detect when that happens? A message
doesn't fit the format properly? OK, so let's encode the data, base64? Now you
can't grep the logs for information anymore; it's an opaque format with
metadata, so you might as well use SQLite or some other structured format.

~~~
the_duke
As someone else said, JSON is great for logging. Newlines get escaped, and
you just write line-separated JSON entries to a log file. JSON can be easily
read by humans and is trivial to parse in most programming languages.

I like SQLite, but I don't think logging is a good use case.
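The newline-escaping point is easy to see in a few lines; this sketch uses Python's json module with hypothetical log records:

```python
import json

# Hypothetical "log file" contents: one JSON document per line.
events = [
    {"level": "info", "msg": "started"},
    {"level": "error", "msg": "failed\nwith a newline"},
]

lines = [json.dumps(e) for e in events]

# json.dumps escapes the embedded newline, so each record stays on one
# physical line and line-oriented tools (grep, tail) keep working.
assert all("\n" not in line for line in lines)

# Reading the log back is one json.loads per line.
parsed = [json.loads(line) for line in lines]
assert parsed == events
```

Because each record is a single line, the "message contains a newline" failure mode from the comment above simply can't occur.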

~~~
emn13
_Why_ don't you think sqlite is a good fit for logging?

Given the extensive testing, including crash testing, it's plausibly _more_
robust than plain text: you're almost surely not using fsync in your
plain-text logger, so that text file isn't as incorruptible as you may think.
And you may write a buggy logger, use a buggy JSON implementation, or write
incorrect error-recovery code when reading the file.

I'm skeptical that robustness is an argument in favor of plain text logging
over sqlite.

~~~
majewsky
> you're almost surely not using fsync in your plain-text logger, so that
> text file isn't as incorruptible as you may think

Agreed. But if the log file is corrupted, then a plain-text one will be easier
to decipher for a human than any binary blob.

~~~
mschwaig
Writing to files in general is fairly difficult if you care about never
losing data under any circumstances, because you have to use the right
syscalls for the semantics of your filesystem, which may sometimes be why
you're looking at corrupted files in the first place. Correctly implemented
transactions can spare you from dealing with that. Maybe that's worth giving
up easily readable, searchable, and processable text files, maybe not.

------
xiaomai
I run the backend/website of my side business on sqlite. It is one of the best
technology decisions I have made. It performs reasonably, is super
straightforward (at my day job we have a team of postgres people to keep our
dbs running smoothly, but for my little side business I don't have those
resources); backups are dead simple. I love sqlite.

~~~
chii
How do you handle concurrent access to your db?

~~~
Scaevolus
Most applications don't actually need concurrent access. SQLite handles
concurrent reads without any issues; writes require an exclusive lock. As
long as your queries are fast and your write load is minimal, you won't
really have any problems.
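A hedged sketch of the two settings commonly suggested for this kind of setup, WAL mode and a busy timeout, using Python's sqlite3 module (the database and table names here are invented):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")  # hypothetical app database

writer = sqlite3.connect(db)
# WAL mode lets readers keep reading while the single writer commits.
writer.execute("PRAGMA journal_mode=WAL")
# If another writer holds the lock, wait up to 5s for it instead of
# failing immediately with "database is locked".
writer.execute("PRAGMA busy_timeout=5000")
writer.execute("CREATE TABLE hits (path TEXT)")
writer.execute("INSERT INTO hits VALUES ('/')")
writer.commit()

# A second connection (say, another request handler) reads concurrently.
reader = sqlite3.connect(db)
count = reader.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
print(count)  # 1
```

With short write transactions, the busy timeout makes contending writers queue up briefly rather than error out, which is usually all a low-write web app needs.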

~~~
jack9
> Most applications don't actually need concurrent access

That's an interesting claim. I would rephrase it as: "Are there more HTTP
calls that use concurrent connections to a DB, or standalone applications
that do not?" I would wager the former.

~~~
emn13
The sqlite website runs on an sqlite db.

Even on most websites, I suspect the need for concurrent, long-lived write
transactions is much rarer than people assume. If your write transactions are
short-lived, then sequential execution is a reasonable approximation of (slow)
concurrency, at which point it's a question of load whether that's good
enough. But the window in which it's not good enough is very slim - hardware
simply isn't all that concurrent in the first place, and as you scale, some
sharding strategy is required anyhow.

So the more plausible limitation is long-lived write transactions; e.g. where
a write cannot be committed until after some other confirmation occurs,
possibly over the network. That simply won't work well at all in sqlite - not
that it's a great strategy to use on other DBs...

~~~
icebraining
_The sqlite website runs on an sqlite db._

Yeah, but the sqlite website doesn't need a db at all.

~~~
emn13
Well, "need"...

To quote the sqlite website itself:

> The SQLite website ([https://www.sqlite.org/](https://www.sqlite.org/)) uses
> SQLite itself, of course, and as of this writing (2015) it handles about
> 400K to 500K HTTP requests per day, about 15-20% of which are dynamic pages
> touching the database. Each dynamic page does roughly 200 SQL statements.
> This setup runs on a single VM that shares a physical server with 23 others
> and yet still keeps the load average below 0.1 most of the time.

I think it's fair to assume that the sqlite site could be redesigned to meet
most of its functionality as a largely static site, but that would come at a
loss of functionality. And obviously it's a form of dogfooding, but that's not
objectionable, right?

------
Lxr
_SQLite works great as the database engine for most low to medium traffic
websites (which is to say, most websites)...Generally speaking, any site that
gets fewer than 100K hits /day should work fine with SQLite._

Do people agree with this? I was under the impression you should not use
SQLite for production websites for some reason. Django has this to say, for
instance [1]:

 _When starting your first real project, however, you may want to use a more
robust database like PostgreSQL, to avoid database-switching headaches down
the road._

[1]
[https://docs.djangoproject.com/en/1.10/intro/tutorial02/](https://docs.djangoproject.com/en/1.10/intro/tutorial02/)

~~~
watermoose
> Do people agree with this?

I don't. I've corrupted SQLite DBs enough to not have warm and fuzzy feelings
about it like I used to have.

I think it's only a good choice when you just need a database for your app
that will barely be using it, and if you didn't use it you'd be writing to a
file instead. And, that's basically what the SQLite docs say.

However, even then, I think it can be short-sighted. I've used webapps before
that used SQLite and I thought to myself: if they'd only used MySQL or
PostgreSQL and then provided access to it, I could have used it.

Be aware, though, that if you decide to use a scalable DB like PostgreSQL,
it will require an open port, even if only a local one. If you're trying to
minimize how people can access your data, don't want an open port (or an
extra one), and you're not going to hit it very hard, SQLite's probably your
best choice.

~~~
qwertyuiop924
OTOH, it is a Real DB, if a small one.

And corruptions, while obviously not unheard of, aren't very common. Even in
power failure.

~~~
watermoose
Yeah- I changed my wording to "scalable". And I appreciate the developers and
community around SQLite. It has its uses, and I appreciate it. However, I
think it could be better with concurrency.

~~~
qwertyuiop924
It can have concurrent reads, and even concurrent read and write, but it
doesn't support concurrent writes.

------
red_admiral
I used SQLite for teaching last year because it was the only thing that I
could get IT to install between when I took over the databases unit and the
start of term.

While it was broadly a success, I consider the following major problems when
teaching beginners:

  * Very loose syntax: CREATE TABLE PERSON ( ID BANANA BANANA BANANA ); is
    legal :)

  * No type-checking: you can insert strings into an INTEGER column and vice
    versa, while you're trying with a straight face to teach students that
    one of the advantages of a proper database is that it can enforce some
    consistency on your data.

  * In the same vein, foreign key constraints are NOT enforced by default.

  * Misusing GROUP BY produces results, but not the ones you want. I'd much
    rather any use of aggregates that is forbidden by the standard also gave
    an error, to discourage students from thinking "it produces numbers,
    therefore it must be ok".

This year, I'll try with MariaDB. I consider SQLite an excellent product for
many things and use it extensively myself, but as a teaching tool its liberal
approach to typing is a drawback.
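The typing and foreign-key complaints above are easy to demonstrate; a small Python sketch (table names invented), including the per-connection PRAGMA that turns enforcement on:

```python
import sqlite3

# Autocommit mode, so the PRAGMA below isn't swallowed by an open transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)

# Any unknown type name is accepted; SQLite only uses it to pick an affinity.
conn.execute("CREATE TABLE person (id BANANA)")

# Strings go into INTEGER columns without complaint...
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES ('not a number')")
stored = conn.execute("SELECT n FROM t").fetchone()
print(stored)  # ('not a number',)

# ...and foreign keys are only enforced after opting in, per connection.
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))")
try:
    conn.execute("INSERT INTO child VALUES (42)")  # no parent row 42
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # True
```

For teaching, issuing `PRAGMA foreign_keys = ON` on every connection at least recovers referential-integrity errors, though the loose column typing remains.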

------
lucb1e
> People who understand SQL can employ the sqlite3 command-line shell to
> analyze large datasets.

And a bit further down:

> SQLite database is limited in size to 140 terabytes [...] if you are
> contemplating databases of this magnitude [use something else]

Yeah, no. "Large datasets" here means a few megabytes. I figured that out
the hard way:

I had a database of about 70 megabytes and ran a query with "COUNT(a)" and
"GROUP BY b" on it. That made it write multiple gigabytes to /tmp until it
died with "out of disk space" (yeah, /tmp on my SSD isn't large).

I heard nothing but awesome and success stories about SQLite until a few weeks
ago when this fiasco happened. I still like SQLite for its simplicity and last
week I used it again for another project, but analyzing "large" datasets?
Maybe with a simple SELECT WHERE query, but don't try anything more fancy than
that when you have 100k+ rows.
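For what it's worth, SQLite can be told to keep that sort/GROUP BY scratch space in RAM instead of /tmp; a small sketch (whether this helps at a given scale depends on available memory, and for big jobs you may instead point the SQLITE_TMPDIR environment variable at a larger disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Keep temporary sort/aggregate structures in memory rather than in
# temp files on disk (the default choice is left to the build).
conn.execute("PRAGMA temp_store = MEMORY")

conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])
rows = conn.execute(
    "SELECT b, COUNT(a) FROM t GROUP BY b ORDER BY b"
).fetchall()
print(rows[0])  # (0, 100)
```

An index on the grouped column can also let SQLite avoid the big external sort entirely.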

~~~
pmorici
Have you considered that you might have used it wrong? If everyone else says
it works great, and you use it and have an issue, my first thought would be
"I must have done something wrong", not "this sucks, everyone else must be
wrong".

~~~
lucb1e
Sound logic, but I didn't think to myself "gee _everyone else is wrong_ ". I
just noticed SQLite did something I've never seen another database do and
figured it's not made for this.

------
Esau
I am just an average, non-programming geek, and I love SQLite. I use it from
the command line to track my blood pressure, my comic book collection, and my
book collection.

It also gave me the chance to learn SQL for fun.

Sadly, it is not often looked upon as an end-user tool.

~~~
qwertyuiop924
>I am just an average, non-programming geek

Wait... Why are you on HN?

It's not a problem or anything, I'm just kind of curious.

~~~
eximius
I can tell you aren't trying to be malicious, but this implies that HN is only
for 'programming geeks' which is ridiculous.

~~~
qwertyuiop924
Well, the news that shows up is stuff primarily relevant to programmers...

I just wonder who else would be _interested_

~~~
mikeash
Looking at the front page right now, out of the 30 stories maybe 2-3 of them
would only be of interest to programmers versus people interested in tech in
general.

~~~
qwertyuiop924
I suppose...

------
kartikkumar
I use SQLite to store all of my simulation data (~10s of GBs). It's
remarkably versatile, and the fact that there are good libraries for Python
and C++ to interface with and query SQLite dbs makes it a cinch to use for
data analysis.

I've seen so many people struggle with custom binary formats; I imagine there
are countless research hours lost in figuring out how to work with these
obscure formats. I've advocated to all students I work with to make use of
SQLite to store simulation data for their thesis projects and my experience is
that they're quick to pick it up and figure out how to do some pretty complex
querying.

It's one of those things that I don't understand about academia: there are so
many standards and well-established tools in the tech/IT sector that we don't
take advantage of. SQLite and JSON are the two that I constantly advocate to
everyone I work with.

------
nbevans
We use SQLite as a data integration tool. We connect to a third-party system's
esoteric database using an ODBC driver. Then export tables to a SQLite
database. This process can sometimes take a few minutes but is generally quite
quick. Then the SQLite database is compressed and uploaded to cloud blob
storage. Effectively at this point it is a "snapshot" of the third party
system's state. Our cloud system is then tailored with SQLite queries to know
how to use and understand that foreign schema. By doing it this way we avoid
needing to know several dozen SQL dialects for esoteric database engines that
"never won the race in the 1990s" (think Progress, Ingres, Paradox, etc). It
means we only need to know SQLite - a current, OSS and well supported variant
of SQL. Epic cost and time savings are the net result.

------
Const-me
For Windows, “Many concurrent writers? → choose client/server” heuristic isn’t
right.

On Windows, we have this:
[https://en.wikipedia.org/wiki/Extensible_Storage_Engine](https://en.wikipedia.org/wiki/Extensible_Storage_Engine)
ESENT plays very nice in high-concurrency scenarios.

Implementing client/server where you only need an embedded DB comes at a price.
It bloats and complicates the installer, increases attack surface, conflicts
with other software for listening TCP port number, interferes with firewalls,
consumes more resources, slows the startup, etc…

------
JohnTHaller
When people say SQLite is everywhere, they mean it. Heck, you're likely
using it right now as you browse HN, since Firefox, Chrome, Opera, etc. all
use it.

------
deadlyllama
I used SQLite to analyze web server logs at my last job (devops at Xero).
SQLite supports in-memory databases, which are very fast. I'd parse a bucket of
logs into a table, then run some queries against them and write the results
into Graphite. The results ended up on the ops wall, and generating another
data point was one more SQL query in a config file away. Wonder if they're
still using it.
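A toy version of that workflow, with hypothetical log lines and an in-memory database (real parsing and the Graphite write are omitted):

```python
import sqlite3

# Hypothetical access-log lines, standing in for a parsed bucket of logs.
log_lines = [
    "GET /home 200",
    "GET /about 200",
    "GET /home 500",
]

conn = sqlite3.connect(":memory:")  # fast, and discarded when done
conn.execute("CREATE TABLE req (method TEXT, path TEXT, status INTEGER)")
for line in log_lines:
    method, path, status = line.split()
    conn.execute("INSERT INTO req VALUES (?, ?, ?)",
                 (method, path, int(status)))

# Each new "data point" is just one more query.
errors = conn.execute(
    "SELECT path, COUNT(*) FROM req WHERE status >= 500 GROUP BY path"
).fetchall()
print(errors)  # [('/home', 1)]
```

The appeal is exactly what the comment describes: adding a metric means adding a SQL query, not writing new parsing code.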

------
cyberferret
I use SQLite almost all the time now on my desktop apps especially, for
logging and other data intensive tasks that require only a single read/write
thread. It replaced text logging for me as searching and segregating log
messages is now a breeze.

I've also used it as a main data store for single user Win32 apps.

In my early days of web app programming, I had an app that created a brand new
SQLite data file for EACH customer that logged in and created an account on
the web app. I thought it would be the most secure way to separate datasets
and protect privacy for each user whilst negating the multiple write lock
issues on the same SQLite database. Tip: Don't even bother to do this! The
eventual data maintenance headache was far worse... :)

~~~
stevekemp
My personal favourite use for SQLite was for my blog. I wanted to use flat-
files for storing individual entries, but I still wanted to present tag-views,
and per-month entries.

My solution was to create a simple SQLite database, import all the entries
into it, and then generate the views by SELECTing from that store.

Populating the database, even if it got thrown away immediately afterwards,
was more efficient than trying to store all the entries in RAM.

[https://github.com/skx/chronicle2](https://github.com/skx/chronicle2)

~~~
cyberferret
Yes, temp storage is another 'use case' for me too. On one particular web app
I designed, there is a requirement for searching across multiple tables for
text data. I simply transpose the columns I need from the different tables
into a memory SQLite database 'on the fly' and perform a full text search on
that for lightning quick responses and no need to do multi table joins.

------
zmmmmm
I love SQLite up until you need to modify the schema. That's when you find
that upgrading a database in place is almost impossible. Rebuilding a whole
table just to rename a column is completely impractical and makes maintaining
applications really cumbersome.
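For reference, the rebuild dance looks roughly like this (a sketch with invented table names; newer SQLite releases, 3.25 and later, also support ALTER TABLE ... RENAME COLUMN directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, nm TEXT)")
conn.execute("INSERT INTO user (nm) VALUES ('Ada')")

# Rebuild-and-swap, all inside one transaction so a crash midway
# leaves the old table intact.
conn.executescript("""
    BEGIN;
    CREATE TABLE user_new (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO user_new (id, name) SELECT id, nm FROM user;
    DROP TABLE user;
    ALTER TABLE user_new RENAME TO user;
    COMMIT;
""")

renamed = conn.execute("SELECT name FROM user").fetchall()
print(renamed)  # [('Ada',)]
```

Indexes, triggers, and views on the old table have to be recreated by hand, which is the cumbersome part the comment is complaining about.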

------
thom
I've used SQLite for single-user analytics in the past and it's fine up to a
point. It's slightly more SQL-literate than MySQL - it supports CTEs but not
window functions - and it has an okay GIS extension. I've also been pleasantly
surprised by performance in some cases.

However, I was recently pointed at MonetDB:

[https://www.monetdb.org/Home](https://www.monetdb.org/Home)

Monet's an open source column store, and I think it's worth evaluating by
anyone doing offline analytics or research-driven work.

------
kennell
I run a number of 1%-write, 99%-read web apps with decent traffic on SQLite.
Works like a charm. It is low maintenance and creating a "backup" is simply
copying the file.
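One caveat worth sketching: copying the file is only safe while nothing is writing to it. The online backup API takes a consistent snapshot either way (file names here are invented):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(tmp, "live.db"))  # hypothetical live DB
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

# Connection.backup copies the database page by page, retrying pages
# that change underneath it, so the snapshot is consistent even if the
# source is in use.
dst = sqlite3.connect(os.path.join(tmp, "backup.db"))
src.backup(dst)

copied = dst.execute("SELECT x FROM t").fetchall()
print(copied)  # [(1,)]
```

For a 99%-read app the plain file copy will almost always work too; the backup API just removes the "almost".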

------
trengrj
I recently made a tool that logs command-line history to a SQLite database:
[https://github.com/trengrj/recent](https://github.com/trengrj/recent).

I really enjoy using sqlite. Not everything needs a client server model, and
having your entire database located in a single file makes a lot of things way
easier.

------
mirekrusin
"Application file format" is the use case that intrigues me. I think people
don't consider it as an option often enough; it should be used more
frequently. File format versions backed by migrations, trivial inspection of
data, transactions for free, a recent history of changes, hierarchy, etc.
It's pretty good.

------
chrismealy
Was there a time when using SQLite for websites would risk data corruption, or
has it always had reliable locking?

~~~
qwertyuiop924
Not AFAIK.

SQLite is the Samuel Vimes of software: It's not the fastest, or the
strongest, but it's solid as a rock, and entirely dependable.

~~~
ageofwant
It is: "His Grace, His Excellency, The Duke of Ankh; Commander Sir Samuel
Vimes", if you please.

~~~
qwertyuiop924
You know full well that he hates being called that.

------
ggregoire
What would be the arguments against using MySQL for a business website, even
a small one? Sure, SQLite probably does the job as well as MySQL, but I don't
have any problems with MySQL, and it's commonly the default option when
choosing an RDBMS. Just curious.

~~~
jdhawk
When you're a developer, not a systems guy, so the idea of running a "database
server" sounds scarier than just opening a file.

------
tmaly
I use sqlite for an intermediate representation of a report that is rendered
to multiple worksheets. I find having the data in sql lets me perform all
types of transformations that are not easily handled outside of sql.

------
Vintila
Slightly off-topic but I love their sql docs[1], those diagrams are just
beautiful.

[1]
[https://sqlite.org/lang_createtable.html](https://sqlite.org/lang_createtable.html)

~~~
contingencies
They are semi-regularly discussed here, and are known as syntax diagrams or
railroad diagrams. You can find a good list of generation tools at
[https://en.wikipedia.org/wiki/Syntax_diagram](https://en.wikipedia.org/wiki/Syntax_diagram)

------
optforfon
I've never had the occasion to use an SQL database. But say I was writing a
game using C++ - at what point would I go from managing a bunch of maps or
vectors of entities to using a SQL database?

If I was writing a ray tracer and needed to store vertices, would it make
sense to use a SQL database? How about for a list of objects? Or textures?

In general I often need to filter on objects, update object state, generate
new objects, remove some others, etc. but I never know when I should stop
thinking containers and start thinking "aha! time for SQL"

~~~
gh02t
It's not really appropriate for any of those things.

You should use a database to store data that you want to keep after the
program terminates, not so much transient things like in-memory data
structures. It's also best used for relational data: stuff that is logically
linked together.

For developing a game, maybe storing item tables with items and stats or the
player's inventory might be good candidates. Sqlite in particular is good for
this because it's easily embedded and a lot of games use it from what I know.

This is oversimplifying a good bit, but it's hard to completely describe the
scope of relational DBs.

~~~
int_19h
>> For developing a game, maybe storing item tables with items and stats or
the player's inventory might be good candidates

Doubtful. Player inventory is not going to be large enough to bother, and item
tables you'll want to be in-memory anyway, so you might as well just read them
from CSV, JSON, XML etc (and that way you can easily edit them, too).

I would say that SQLite only makes sense when your dataset is too big to be
entirely loaded into memory in a cooperative environment (i.e. assuming that
your app is not allowed to hog the entire memory). I'd say that starts at tens
of megabytes.

~~~
gh02t
It could be worth it if you need to do lots of relational queries with complex
inventory management. I was imagining e.g. an RPG that would ship the stats
for all its items in a sqlite data file and then you can store the player's
stats and inventory with foreign keys pointing to the item table. You're gonna
have to store that data somehow and if you've got enough items and/or complex
enough inventory management it seems like maybe you might want to consider
sqlite as it already exists and provides a lot of relevant features. I don't
consider size so much as whether or not there is a need to persist data _and_
the complexity of relationships; size is more of a factor in "should I use
sqlite or should I use a beefier database like Postgres."

I know sqlite is used heavily on iOS and Android and a lot of people use it as
a glorified serialization format. Probably not the best in most cases but hey
sqlite is so lightweight that it doesn't have much downside. I tend to use it
as intended as a lightweight database myself but hey if it works it works.

~~~
int_19h
It is far easier to store such structures as object graphs in-memory (i.e.
your "foreign key" is a pointer/reference to the actual object). The
navigation patterns would mostly be looking up properties on the item
referenced by inventory, so it's not like you need to do joins etc (but even
if you did, a join on in-memory object graph is still pretty easy and blazing
fast).

For C++ especially, I would recommend looking at Boost multi_index library.
This gives you the ability to do fast lookups on a variety of keys across the
same data.

Pretty much the only benefit I can see from SQLite in those small-dataset
scenarios is when you need persistence _and_ the ability to change a subset
of data in an atomic way (if you only need to save the entire in-memory dataset
atomically, you can always just do the rename trick to ensure atomicity with
far less overhead). Well, and, I guess, optimization of complicated queries -
but I'm somewhat skeptical about the ability of their optimizer to use indices
in a query that's really complicated; and simple ones are trivial to do
explicitly.

------
theseoafs
I'm looking at a project right now where I'm planning to use SQLite as a high-
level solution to file locking (i.e. create a record in the DB to "lock" a
file, delete it when you're done, and don't create a record if a record for
that file is already in the DB). Sound like an appropriate use of SQLite? Is
there a better, more direct solution? (I understand there are platform-
specific utilities but I would want something portable.)
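A sketch of that pattern, using a PRIMARY KEY so lock acquisition is atomic (all table and path names are invented; shown on an in-memory database for brevity, though across processes you'd point every process at the same database file):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
conn.execute("CREATE TABLE locks (path TEXT PRIMARY KEY)")

def try_lock(path):
    """Take the 'lock' by inserting a row; the PRIMARY KEY makes a
    second attempt fail, so acquisition is atomic."""
    try:
        conn.execute("INSERT INTO locks VALUES (?)", (path,))
        return True
    except sqlite3.IntegrityError:
        return False

def unlock(path):
    conn.execute("DELETE FROM locks WHERE path = ?", (path,))

first = try_lock("/data/report.csv")
second = try_lock("/data/report.csv")
print(first, second)  # True False
unlock("/data/report.csv")
```

Note this is purely advisory: it only coordinates processes that agree to check the table before touching the file.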

~~~
chj
This is what people mean when they say you brought the whole jungle just
because you wanted the banana.

~~~
theseoafs
For the record, the application already has SQLite, I just figured I would use
it while it's there.

~~~
AtheistOfFail
The main argument against it is that a SQLite lock won't stop other
processes from messing with the file; use a system-based lock instead.

------
cheriot
> SQLite supports an unlimited number of simultaneous readers, but it will
> only allow one writer at any instant in time.

This is the one that usually gets me. For whatever reason, I tend to prefer
side projects that are "take a dataset and make a tool out of it". It often
ends up with simultaneous bulk writes when the dataset is updating.

I'm a big sqlite fan. Just throwing this out as a limitation for anyone
deciding if it's appropriate for their project.

------
Nican
I've always wondered whether using SQLite as the back end for distributed
map/reduce jobs would be efficient. Each machine holds part of the data in an
SQLite file.

It would not solve the usual sort/group by problems that require cross-machine
communication, but would take full advantage of SQLite's optimizations for
other problems.

------
i_feel_great
A good resource on optimising write speed:
[http://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite](http://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite)
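The core advice from that thread, batching inserts into one transaction, sketched in Python (in-memory database for brevity; the speedup matters for file-backed databases, where each commit syncs the journal to disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

rows = [(i,) for i in range(10000)]
# One INSERT per transaction pays a journal sync per row; wrapping the
# whole batch in a single transaction is usually the biggest win.
with conn:  # the context manager commits the batch as one transaction
    conn.executemany("INSERT INTO t VALUES (?)", rows)

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 10000
```

Prepared statements via executemany and (where acceptable) relaxed `PRAGMA synchronous` settings are the other common levers discussed in that answer.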

------
contingencies
"Anything without frequent concurrent writes" pretty much sums it up.

------
jeremy_wiebe
> Each dynamic page does roughly 200 SQL statements.

I haven't done web work in a while but am I the only one who thinks that's a
ridiculously high number for a single page?

~~~
SQLite
An example is [http://sqlite.org/src/timeline](http://sqlite.org/src/timeline)

A complete log of the 200+ SQL statements used in generating the page above
can be seen at
[http://sqlite.org/tmp/timeline-sql-log.html](http://sqlite.org/tmp/timeline-sql-log.html)

The list of check-ins is computed by a single query. But that query then
tosses the list over the wall to another subsystem which generates content for
each check-in. And several queries are required for each check-in to extract
the relevant information needed for display.

The timeline example above is an information-rich page. Perhaps it could be
generated using fewer than 200 SQL statements. But SQL against an SQLite
database is so cheap that it has never really been a factor. You can see at
the bottom of the page that it was generated in about 25 milliseconds.
Profiling indicates that very few of those 25 milliseconds were spent inside
the database engine.

~~~
jstimpfle
This is called the "n+1 queries" problem.

~~~
SQLite
Perhaps the take-away is that when the SQL engine is in-process and queries do
not involve a server round-trip, the "n+1 query problem" is not really a
problem.

~~~
jstimpfle
Good point. If the requests can be satisfied from indexes, I guess.

------
bnolsen
Fossil SCM is a DVCS built around SQLite: one executable, very few
dependencies, and atomic transactional safety. Workflow-wise it's more like
CVS or SVN properly converted into a DVCS. Great for small teams; integrated
web server, wiki, issue tracking. [http://fossil-scm.org](http://fossil-scm.org)

------
mbrock
I noticed the other day that the AWS Lambda runtime environment has the
sqlite3 binary already installed in $PATH.

------
SixSigma
I <3 Sqlite.

