This is nice, but it's less space-efficient than CSV when the data is strictly tabular: CSV carries the column names once in a header row, allowing "pure" data rows after that, whereas JSON Lines has to repeat every key on every row.
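For example, the same two made-up records as CSV and then as JSON Lines; note every key repeated per row in the latter:

    name,age
    Alice,30
    Bob,25

    {"name": "Alice", "age": 30}
    {"name": "Bob", "age": 25}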
I don't understand why people keep using CSV today when SQLite is public domain and can be used everywhere, has a very portable and lightweight implementation, has a stable file format with a long-term commitment, and strikes a good compromise on the type system that gives enough flexibility to be on par with CSV if some entries happen to have a different type...
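To illustrate that last point, a minimal sketch with Python's sqlite3 module (the table and values here are made up): a column declared without a type happily stores a different type per row, much like a CSV column.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, value)")  # 'value' has no declared type
    con.execute("INSERT INTO t VALUES (1, 42)")        # integer
    con.execute("INSERT INTO t VALUES (2, 'n/a')")     # text, in the same column
    print(con.execute("SELECT value, typeof(value) FROM t").fetchall())
    # [(42, 'integer'), ('n/a', 'text')]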
Installed base. You can bet there are thousands of AS/400s and similar architectures putting out CSVs. Also, you can't get more lightweight than CSV; I have several microcontroller projects that output CSV.
Switch from the developer world to the business world and everybody has Excel to open the CSV files with the article information, the sales numbers and so on, and can work with that. How do you even read data from SQLite into Excel? VBA? Some obscure connector? With CSV it's "import" or even just "open file".
Ironically, Excel's implementation of CSV is terrible. It's constantly destroying data (e.g. large numeric and pseudo-numeric fields), not to mention the whole issue of any cell beginning with an equals sign being converted into a formula.
Haha. Wait until you receive a file made in a different language (for example, an Excel file with formulas created on a Japanese-language computer).
Because if I want to make a graph from some data, it's much, much easier to open the CSV in Excel (or LO Calc) and create a graph of the sum of three columns vs. a fourth column than it is to write an SQL query.
That's like a file with S-expressions, only worse (because those already existed, whereas this had to be made up as a new thing, without offering any improvement).
You could make a similar argument about the redundancy of JSON or even XML too. S-expressions predate all of them and can represent any of their data structures as well.
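For instance, one possible rendering of a small JSON object (the mapping is a convention, not a standard; other encodings are equally valid):

    JSON:          {"name": "Alice", "tags": ["a", "b"]}
    S-expression:  ((name "Alice") (tags ("a" "b")))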
The problem is that S-expressions are still pretty niche, whereas JSON support is widespread, and supporting JSON Lines when you already support JSON is trivial.
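Trivial in roughly this sense; a sketch in Python (the helper name is mine), where splitting on newlines is the only code added on top of an existing JSON parser:

    import json

    def iter_jsonl(path):
        # One JSON document per line; blank lines skipped.
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)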
If you’re old enough to be a LISPer then you should be old enough to realise that there’s always going to be a multitude of competing standards and sometimes we have to make pragmatic choices.
ingy and I came up with p3rl.org/JSONY as a "JSON with less ceremony" compromise that I often find useful (it has a PEG grammar that's a superset of JSON so pretty easy to re-implement elsewhere)
The hyphen-prefixed arrays are very weird. It looks like YAML but behaves differently. It doesn't help that YAML is also a superset of JSON.
I also don't like space (char 32) as a field delimiter, because that opens up entire categories of bugs that are already a daily annoyance in shell scripting.
I respect your thought processes but I can see this particular specification being more confusing than productive to most other people.
The hyphen prefixed arrays were an ingyism. I don't use those at all but since he wrote the grammar I didn't feel like it was particularly fair to bitch about him sneaking in a single feature he wanted.
I use it basically like:
    { key1 value1 key2 [ value2a value2b value2c ] }
i.e. treating JSONY as "JSON with as much of the syntactic noise as possible made optional", which is what I always wanted it to be.
Plus, because JSONY isn't at all indentation-sensitive, pasting chunks of JSON in becomes, IMO, a lot easier to deal with.
A valuable thing for me is that if, e.g., I'm writing a daemon that communicates via newline-delimited JSON over a TCP or UNIX socket, I can have a development mode where it uses JSONY as the parser, so it's faster/easier to do quick interactive tests with socat.
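A minimal sketch of that shape of daemon in Python (the handler logic and socket path are placeholders, and jsony.loads stands in for a hypothetical JSONY parser you'd swap in for development mode):

    import json
    import socketserver

    class NDJSONHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for line in self.rfile:            # one JSON document per line
                msg = json.loads(line)         # dev mode: jsony.loads(line)
                reply = {"echo": msg}          # placeholder logic
                self.wfile.write(json.dumps(reply).encode() + b"\n")

    with socketserver.UnixStreamServer("/tmp/dev.sock", NDJSONHandler) as srv:
        srv.serve_forever()

Then "socat - UNIX-CONNECT:/tmp/dev.sock" gives you an interactive session where you type one document per line.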
I'm not going to criticize personal usage, but for the point of accuracy, the indentation-sensitive syntax of YAML is entirely optional. Like I said, it's a superset of JSON, which means the following is valid YAML:
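    {key1: value1, key2: [value2a, value2b, value2c]}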
Granted, that still has a few additional control characters (the comma and colon), but personally I think that aids readability, because it wouldn't be clear to me what the intent of the parameters was in your example if you hadn't named them "key" and "value". But as I said, I'm not going to criticize personal usage.
We need a new schema language with the ability to specify such cross-format …formats.