I don't quite agree. There's a reason relational won, and continues to win more than a half-century after it was invented, despite no lack of wannabe usurpers.

For your model to work, what would be needed is the reverse: a system which allows me to interact with TSV files with SQL.

For 98% of uses of SQLite, performance and storage efficiency don't matter; we're talking about dozens or hundreds of records. It doesn't need to be well-implemented from that perspective. It does need to be robust. For example, databases solve a lot of problems around consistency under concurrent access, which this would need to handle as well (although a KISS solution, such as handling all access from one thread / process and simply blocking other requests during transactions, would be fine; see the sketch below).
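A minimal sketch of that single-writer idea in Python, assuming SQLite as the backing store (the SingleWriter name and the settings schema are made up for illustration): one thread owns the connection, and every other caller funnels statements through a queue and blocks until its transaction completes.

    import queue
    import sqlite3
    import threading

    class SingleWriter:
        """Serialize all database access through one worker thread."""

        def __init__(self, path):
            self._jobs = queue.Queue()
            threading.Thread(target=self._run, args=(path,), daemon=True).start()

        def _run(self, path):
            conn = sqlite3.connect(path)  # the connection never leaves this thread
            while True:
                sql, params, reply = self._jobs.get()
                try:
                    with conn:  # each job runs as its own transaction
                        reply.put(conn.execute(sql, params).fetchall())
                except Exception as exc:
                    reply.put(exc)

        def execute(self, sql, params=()):
            reply = queue.Queue(maxsize=1)
            self._jobs.put((sql, params, reply))
            result = reply.get()  # callers simply wait their turn
            if isinstance(result, Exception):
                raise result
            return result

    db = SingleWriter("settings.db")
    db.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
    db.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)", ("theme", "dark"))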

Something like that would likely eat SQLite's lunch.

q seems close. It'd need to support writes, table creation, and all the other stuff, perhaps be more actively maintained, provide a documented API (not just command-line access), and work with things like ORMs.




> For your model to work, what would be needed is the reverse: a system which allows me to interact with TSV files with SQL.

Several such systems exist, one of which is SQLite. https://www.sqlite.org/csv.html

I don't know if there's a virtual table extension for tab-separated values, but SQLite imports them just fine.
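The import path is the sqlite3 shell's .import command (with a tab .separator). And if you'd rather not shell out, here's a rough sketch of the same thing in plain Python, no extension needed (the file name data.tsv, the table name t, and the assumption of a clean header row are all mine):

    import csv
    import sqlite3

    conn = sqlite3.connect(":memory:")
    with open("data.tsv", newline="") as f:
        header, *body = csv.reader(f, delimiter="\t")

    # every column comes in as TEXT, which is fine for ad-hoc querying
    conn.execute("CREATE TABLE t (%s)" % ", ".join('"%s"' % c for c in header))
    conn.executemany(
        "INSERT INTO t VALUES (%s)" % ", ".join("?" * len(header)),
        body,
    )
    print(conn.execute("SELECT count(*) FROM t").fetchone())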


Interesting. Thank you.

It's intentionally available as a single source file:

https://www.sqlite.org/src/artifact?ci=trunk&filename=ext/mi...

For easy modification to handle TSV, TSVx, JSON, Pandas, YAML, or whatever else my heart desires.


> a system which allows me to interact with TSV files with SQL

ClickHouse and DuckDB can both do this. Pretty sure AWS has built-in support for this too (Athena, for one).
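For example, with DuckDB's Python package (the file name is a placeholder; read_csv's delim option is how it takes tab-separated input):

    import duckdb  # pip install duckdb

    # Python expands \t to a literal tab, which DuckDB uses as the delimiter
    print(duckdb.sql(
        "SELECT * FROM read_csv('data.tsv', delim='\t', header=true) LIMIT 5"
    ))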


I think those serve a very different use case. SQLite is used for things like app settings. Both of those are heavyweight monsters designed for rather large data.

That said, I'm reading their documentation right now, and I think one or the other might slot into a different use-case I have where I do have rather large data. I'd last looked at this space quite a number of years ago, and both are way ahead of what I'd recalled.



