My favorite tool in this space is [lnav](https://lnav.org), which has an embedded SQLite engine and works with all kinds of structured data. It might obviate the need for datasette, or maybe complement it in a scripted workflow.
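For anyone who hasn't tried it, the workflow is roughly this (a sketch from memory; the table and column names depend on the log format lnav detects, access_log/c_ip is what I recall for common web access logs):

    # open a log file in lnav
    lnav /var/log/nginx/access.log

    # inside lnav, press ';' to drop into the SQL prompt, then query the
    # virtual table lnav builds for the detected log format, e.g.:
    ;SELECT c_ip, count(*) AS hits FROM access_log GROUP BY c_ip ORDER BY hits DESC LIMIT 10;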
Indeed! I was just thinking that this could be a substitute for viewing when I don't want to have to fire up OpenRefine, but for editing, GREL, reconciliation, etc., OpenRefine is still king! I've just started playing with reconciliation in OpenRefine via conciliator + Solr. Pretty cool.
I see something about uploading the CSV file, but it's not clear where the SQLite DB is being created. Is it local? Does the browser create it? Ideally I would like to run this locally when I am playing with data.
What's the largest file that's been tested? Basically, what's the max rows/cols it can handle while staying fast?
You can run it locally - that's the default way to use it. Run "pip3 install datasette", create your SQLite database file, and then run "datasette mydb.db" to start exploring.
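If you're starting from a CSV (like the upload flow on the site), the whole local workflow looks roughly like this - a sketch, with "mydata.csv" and "mydb.db" as placeholder file names:

    # install datasette plus the CSV conversion helper
    pip3 install datasette csvs-to-sqlite

    # convert the CSV into a SQLite database file on your own machine
    csvs-to-sqlite mydata.csv mydb.db

    # start the local web UI (serves on http://localhost:8001/ by default)
    datasette mydb.db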
I've run it successfully on SQLite files up to a GB in size and theoretically it should work with much bigger files than that.
The https://publish.datasettes.com/ tool works by taking your uploaded CSVs and running my "csvs-to-sqlite" script on them, then deploying the datasette application alongside that new SQLite database file.
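If you want to reproduce that pipeline yourself instead of going through the upload form, it's roughly this (a sketch; "uploaded.csv" and "data.db" are placeholders, and the publish target shown is just one of the hosts the publish command supports):

    # same conversion step as above, then push the result to a hosting provider
    csvs-to-sqlite uploaded.csv data.db
    datasette publish now data.db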
You can try https://www.seektable.com which has 'flat table' reports for data browsing and a PostgreSQL connector; however, this is a cloud tool, so your DB needs to be accessible from the tool's server.
BTW, CSV files are also supported.
More about that plugin: https://simonwillison.net/2018/Apr/20/datasette-plugins/