
I've found SQLite's bulk-insert performance to be massively (100x) improved by wrapping the whole batch in a single transaction.

fsync()ing a couple hundred thousand individual INSERTs isn't fast.
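
A minimal sketch of the pattern in Python's sqlite3 module (the file and table names are hypothetical): run the whole batch inside one transaction so there is a single fsync() at COMMIT rather than one per INSERT.

    import sqlite3

    conn = sqlite3.connect("example.db")  # hypothetical file name
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()

    rows = [(i, f"item-{i}") for i in range(200_000)]

    # One transaction around the whole batch: a single fsync()
    # at COMMIT instead of one per INSERT.
    with conn:  # commits on success, rolls back on an exception
        conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)

    conn.close()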




Batching inserts, wrapping the batches in transactions, and waiting until after the load to add indexes: best way to go. A sketch follows.
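
Roughly, in Python (batch size, file, and schema are illustrative, not prescriptive):

    import sqlite3

    conn = sqlite3.connect("bulk.db")  # hypothetical file name
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts INTEGER, payload TEXT)")
    conn.commit()

    rows = [(i, "payload") for i in range(1_000_000)]
    BATCH = 10_000  # illustrative batch size

    for i in range(0, len(rows), BATCH):
        with conn:  # one commit (one fsync) per batch, not per row
            conn.executemany("INSERT INTO events (ts, payload) VALUES (?, ?)",
                             rows[i:i + BATCH])

    # Build the index once, after the load, instead of updating it
    # on every insert.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_events_ts ON events (ts)")
    conn.commit()
    conn.close()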


Journaling mode, sync modes, etc. also make a big difference: http://www.sami-lehtinen.net/blog/sqlite3-performance-testin...
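
For instance (the right settings depend on how much durability you're willing to trade for speed):

    import sqlite3

    conn = sqlite3.connect("tuned.db")  # hypothetical file name
    # WAL avoids rewriting a rollback journal on every commit;
    # synchronous=NORMAL drops some fsyncs (OFF drops them all,
    # trading durability on power loss for speed).
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")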


Those optimizations also apply to PostgreSQL.
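
For example, the same single-transaction batching pattern with psycopg2 (the DSN is a placeholder, and this assumes an items(id, name) table already exists):

    import psycopg2
    from psycopg2.extras import execute_values

    conn = psycopg2.connect("dbname=example")  # hypothetical DSN
    rows = [(i, f"item-{i}") for i in range(200_000)]

    with conn, conn.cursor() as cur:  # one transaction, one COMMIT
        execute_values(cur, "INSERT INTO items (id, name) VALUES %s", rows)

    conn.close()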


Optimizing on SQLite would still let you use the simpler solution without painting yourself into a corner; there's no need to preemptively pick a heavier DBMS if the simpler one can handle the work.


Optimizing in SQLite is just as much of a sunk cost as optimizing in PostgreSQL if you switch away to another database.



