I am using it for a small app, and the latency is so low that the app feels roughly 10x faster than it would running PostgreSQL. Scaling won't be the same, of course, but still.
Is anyone here using SQLite in production? If so, what has your experience been? Have you hit any limits?
* You need to have your code repopulate index statistics with ANALYZE or PRAGMA optimize now and again, or the query planner may make poor index choices based on stale statistics.
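A sketch of what that maintenance hook might look like (the `refresh_stats` name and the idea of calling it after big write batches or at connection close are my own; only the PRAGMA itself comes from SQLite):

```python
import sqlite3

def refresh_stats(db_path: str) -> None:
    """Hypothetical maintenance hook: call after large write batches or
    on a schedule so the query planner has fresh index statistics."""
    con = sqlite3.connect(db_path)
    try:
        # PRAGMA optimize re-analyzes only tables whose statistics look
        # stale, so it is cheap enough to run routinely; a bare ANALYZE
        # is the heavier full rebuild.
        con.execute("PRAGMA optimize")
        con.commit()
    finally:
        con.close()

refresh_stats("stats.db")
```

Running it at connection close is what the SQLite docs themselves suggest for long-lived connections.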
* It's best if your code defines meaningful types to scan SQLite values into, and you ideally avoid ever writing to the DB via the CLI, since SQLite's dynamic typing won't stop a stray string from landing in an integer column. Alternatively, be aggressive in writing CHECK constraints and triggers to ensure data validity.
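Here's a minimal sketch of the constraint-heavy approach, with a made-up `orders` table: CHECK clauses reject out-of-range or wrongly-typed values even when a write bypasses the application, and a trigger enforces a state-machine rule.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    -- SQLite columns are dynamically typed, so enforce invariants in SQL:
    status TEXT NOT NULL CHECK (status IN ('pending', 'shipped', 'cancelled')),
    quantity INTEGER NOT NULL
        CHECK (quantity > 0 AND typeof(quantity) = 'integer')
);
-- Example state-machine rule: a cancelled order can never be revived.
CREATE TRIGGER no_uncancel BEFORE UPDATE OF status ON orders
WHEN OLD.status = 'cancelled'
BEGIN
    SELECT RAISE(ABORT, 'cancelled orders are immutable');
END;
""")

con.execute("INSERT INTO orders (status, quantity) VALUES ('pending', 2)")
try:
    # Violates the quantity CHECK, even if issued from the CLI.
    con.execute("INSERT INTO orders (status, quantity) VALUES ('pending', 0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The nice property is that these rules travel with the database file, so ad-hoc CLI writes are held to the same standard as application writes.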
* You need to be working with a team of developers that is pretty comfortable with databases to get a lot of value out of SQLite. I've been surprised at how performant features can be when a single mind designs the schema, constraints, triggers, and queries, examines the query plan to choose indexes, and writes the application code that accesses the database. SQLite lowers the barrier to doing this with real snapshots of production data - but that still doesn't mean it's going to be easy.
* You need to think about how long your code holds a write transaction open. Individual statements in SQLite run very quickly (I've seen ~250 microseconds per insert on an EBS-backed EC2 instance), but if you have a Django app with @transaction.atomic everywhere, you're going to run into lock contention quite quickly because your Python functions take much longer to run than the statements themselves. If possible, "organize" all the data for your writes outside the transaction and then hold the lock for as short a time as possible.
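For instance (table and helper names invented here), gather and validate all the rows first, then take the write lock only around the actual inserts:

```python
import sqlite3
import time

con = sqlite3.connect("events.db", isolation_level=None)  # explicit transaction control
con.execute("PRAGMA journal_mode=WAL")
con.execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
)

def prepare_rows():
    # Stand-in for the slow part: network calls, validation, serialization.
    # None of this should happen while the write lock is held.
    time.sleep(0.01)
    return [("a",), ("b",), ("c",)]

rows = prepare_rows()            # no lock held during the slow work
con.execute("BEGIN IMMEDIATE")   # lock held only for the fast statements below
con.executemany("INSERT INTO events (payload) VALUES (?)", rows)
con.execute("COMMIT")
```

With the single-writer model, the time inside BEGIN...COMMIT is the budget every other writer has to wait out, so shrinking it matters more than in PostgreSQL.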
* Any transaction that may eventually write must start with BEGIN IMMEDIATE. Otherwise SQLite may return SQLITE_BUSY when the transaction tries to upgrade from reading to writing, to keep its promise of serializable isolation, and no amount of retrying within that transaction will help.
All said, I'd do it again for any other services-based backend. It also incidentally enforces the good hygiene of "services can only access their own data." Happy to answer any questions you have.