I am currently evaluating dsq and its partner desktop app DataStation. AIUI, the developer of DataStation realised that it would be useful to extract the underlying pieces into a standalone CLI, so they both support the same range of sources.
qsv is a fork of xsv, as xsv is pretty much unmaintained now (I don't mean to sound negative; BurntSushi did an AMAZING job and I love the work they did).
I’ve spent what feels like too much time looking for a Windows app that I can use to occasionally preview CSV files before moving them into WSL.
I just think $50/year is too much for occasional use.
Since this post has initiated several other shameless plugs, maybe I can swoop in and ask for shameless recommendations for something cheap in this space.
This one is another desktop app, with a slightly different approach: in the CSV file case, the data is parsed and dumped into a bundled PostgreSQL database, which you can then query with a visual query interface.
I agree jq's syntax doesn't make much sense for tables, where the primary operations are filter and merge. But for deep tree-like datasets, which is what JSON is supposed to be used for, traversal and iteration are more important, and the syntax makes perfect sense there.
I'd be willing to bet most programmers would instantly understand something like `.users[] | {email: .email, lastLogin: .logins[-1].date}`, even if they've never seen jq.
Now that I'm thinking about it, the kind of structures we often use JSON for are, in a way, a subset of what can be done with tables and pointers (foreign keys), so would it be possible to create a kind of jq to SQL compiler? Has anyone tried that?
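To make the idea concrete, here's a rough sketch of that mapping using Python's stdlib sqlite3. The users/logins schema is invented for illustration, and `.logins[-1]` is approximated with MAX(date), assuming logins are stored in chronological order:

```python
# Sketch only: a hypothetical users/logins schema, stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE logins (user_id INTEGER REFERENCES users(id), date TEXT);
    INSERT INTO users  VALUES (1, 'a@example.com'), (2, 'b@example.com');
    INSERT INTO logins VALUES (1, '2022-01-01'), (1, '2022-03-05'), (2, '2022-02-14');
""")

# jq:  .users[] | {email: .email, lastLogin: .logins[-1].date}
# SQL: one row per user, with the latest login date (approximates .logins[-1]
#      on the assumption that logins are appended in chronological order).
for email, last_login in conn.execute("""
    SELECT u.email, MAX(l.date) AS lastLogin
    FROM users u LEFT JOIN logins l ON l.user_id = u.id
    GROUP BY u.id, u.email
"""):
    print(email, last_login)
```

A real compiler would also need to reassemble nested output on the way back out, but the flattening into rows plus foreign keys is the core of it.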
Response times are in the seconds if you hit partitions (a kind of index that only supports equality and, in this case, is implemented with folders) and your storage format carries content metadata, like Parquet or ORC.
It's not a BI tool though; if queries have high variability in the WHERE clause and you can't leverage indexes, then you're looking at minutes as a response time.
If the data is not in such structures but in plain CSV/JSON, all bets are off.
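For example, with a Hive-style layout the partition filter is just an equality predicate on the folder columns; a minimal pyarrow sketch (bucket name, partition columns, and values are placeholders):

```python
# Minimal sketch, assuming a Hive-style layout like
#   s3://my-bucket/logs/app=billing/date=2022-06-01/part-0.parquet
import pyarrow.dataset as ds

dataset = ds.dataset("s3://my-bucket/logs/", format="parquet", partitioning="hive")

# Equality filters on partition columns prune whole "folders" up front;
# Parquet row-group statistics then prune further within the remaining files.
table = dataset.to_table(
    filter=(ds.field("app") == "billing") & (ds.field("date") == "2022-06-01")
)
print(table.num_rows)
```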
I've not yet tested it at terabyte scale, though it should happily scale up to that.
I have not put it through any stress tests. I look at this kind of tool as a nice convenience. If I needed something high-throughput, I'd probably want a fully baked data warehouse pipeline.
It really, really, realllllyyyy depends on how you set up your prefix/"folder" structure and the underlying file format. Though that's almost certainly true here.
I have a use case where my company's application logs will be shipped to S3 in a directory structure such as application/timestamp(one_hour)_logs.parquet. We want to build a simple developer-focused UI where we can query a given application for a time range, retrieve the S3 blobs in that range, and brute-force search them for the desired string. I see that ROAPI offers a REST interface for a fixed set of files, but I would like to dynamically glob newer files. Are there any alternatives that could be used too? Thanks
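For context, this is roughly the hand-rolled version I'm trying to avoid (bucket name, key format, and the `message` column are assumptions, and this is only a sketch):

```python
# Hypothetical sketch of "glob newer files and grep them"; the bucket, the
# key layout (application/<unix_hour>_logs.parquet) and the column name are
# made up, not anything ROAPI provides.
import io
import boto3
import pyarrow.parquet as pq

BUCKET = "my-log-bucket"                    # placeholder
APP = "billing"                             # placeholder application name
START, END = 1_650_000_000, 1_650_003_600   # time range (unix seconds)

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

hits = []
for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{APP}/"):
    for obj in page.get("Contents", []):
        # keys are assumed to look like "billing/1650000000_logs.parquet"
        ts = int(obj["Key"].split("/")[-1].split("_")[0])
        if not (START <= ts <= END):
            continue
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        table = pq.read_table(io.BytesIO(body))
        # brute-force substring search over an assumed "message" column
        for msg in table.column("message").to_pylist():
            if msg and "desired string" in msg:
                hits.append((obj["Key"], msg))

print(len(hits), "matches")
```

That works, but it re-downloads and scans everything on every query, which is why something that can glob new files and push the time-range filter down would be much nicer.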
This looks really cool! Especially since it uses DataFusion underneath, it's probably blazingly fast.
If you like this, I recommend taking a look at OctoSQL[0], which I'm the author of.
It's plenty fast, and it's easier to add new data sources to, since they are external plugins.
It can also handle endless streams of data natively, so you can do running groupings on, e.g., tailed JSON logs.
Additionally, it's able to push down predicates to the database below, so if you're selecting 10 rows from a 1 billion row table, it'll just get those 10 rows instead of getting them all and filtering in memory.
I’m going to need to see a citation for that. Last I checked, it was being beaten by Apache Spark in non-memory-constrained scenarios [0]. This may be “blazingly fast” compared to Pandas or something, but it’s still leaving a TON of room on the table performance-wise. There’s a reason why Databricks found it necessary to redirect their Spark backend to a custom native query engine [1].
DataFusion outperforms Spark by a large margin. It is on par with Photon based on my experience; see the benchmarks at https://github.com/blaze-init/blaze.
Ah nice, thank you for sharing that. I hadn’t seen it before, and congrats on beating out Spark that hard, I hope it continues to improve!
As an aside, maybe it would make sense to publish a new blog post somewhere so that the top hit on Google for “DataFusion benchmark” isn’t that post I linked.
Haha, yeah, we should definitely put a little bit more effort into SEO :) Everyone is so focused on the hard-core engineering at the moment. I think Matthew from the community is actually working on a new comprehensive benchmark for us, which I hope will be published soon.
As I commented on a recent similar discussion, these tools can't be used for update or insert. As useful as querying might be, it's terribly misleading to claim to "run SQL" if you can't change the data, since that's such a critical part of an SQL database.
What you're really saying is that the database presented in OP is not useful because it only handles DQL.
1. SQL can be thought of as being composed of several smaller languages: DDL, DQL, DML, DCL.
2. columnq-cli is only a CLI to a query engine, not a database. As such, it only supports DQL by design.
3. I have the impression that outside of data engineering/DBA, people are rarely taught the distinction between OLTP and OLAP workloads [1]. The latter often utilizes immutable data structures (e.g. columnar storage with column compression), or provides limited DML support, see e.g. the limitations of the DELETE statement in ClickHouse [2], or the list of supported DML statements in Amazon Athena [3]. My point -- as much as this tool is useless for transactional workloads, it is perfectly capable of some analytical workloads.
Hi Gavin; that sounds interesting! I saw @eirikbakke make a comment about https://www.ultorg.com earlier. It appears to also support editing the underlying data. I'm curious to see how you've each tackled these tricky topics.
I think it is worth pointing out that this tool does support querying Delta Lake (the author of ROAPI is also a major contributor to the native Rust implementation of Delta Lake). Delta Lake certainly supports transactions, so ROAPI can query transactional data, although the writes would not go through ROAPI.
Purely the power law. That would be an interesting thing to figure out though. Maybe a GitHub crawl.
EDIT: I stand corrected based on GitHub code files (which might better represent application CRUD queries versus use by analysts; more thought required!)
Yes, I designed the code base so that the core IO and query logic is abstracted into a Rust library called columnq. My plan is to wrap it with pyo3 so the full API can be accessed as a Python package! If you are interested in helping with this, please feel free to submit a PR. The core library is located at https://github.com/roapi/roapi/tree/main/columnq
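Nothing is wired up yet, but the rough shape I have in mind on the Python side is something like the sketch below (the package and all of these names are hypothetical and will likely change):

```python
# Purely hypothetical usage sketch of a future pyo3 wrapper; neither the
# package nor these APIs exist yet.
import columnq  # assumed future package built from the columnq crate

ctx = columnq.Context()
ctx.register_table("t", "path/to/data.json")  # placeholder table name and path
print(ctx.sql("SELECT COUNT(*) AS n FROM t"))
```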
clickhouse-local - https://news.ycombinator.com/item?id=22457767
q - https://news.ycombinator.com/item?id=27423276
textql - https://news.ycombinator.com/item?id=16781294
simpql - https://news.ycombinator.com/item?id=25791207
We need a benchmark, I think.. ;)