Friends don't let friends export to CSV (kaveland.no)
245 points by lervag 61 days ago | 440 comments

This article seems written by someone who never had to work with diverse data pipelines.

I work with large volumes of data from many different sources. I’m lucky to get them to send csv. Of course there are better formats, but all these sources aren’t able to agree on some successful format.

CSV that's zipped is producible and readable by everyone. And that makes it more efficient.

I’ve been reading these “everyone is stupid, why don’t they just do the simple, right thing and I don’t understand the real reason for success” articles for so long it just makes me think the author doesn’t have a mentor or an editor with deep experience.

It’s like arguing how much mp3 sucks and how we should all just use flac.

The author means well, I’m sure. Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation.

Totally agree. His arguments are basically "performance!" (which is honestly not important to 99% of CSV export users) and "It's underspecified!" And while I can agree with the second, at least partly, in the real world the spec is essentially "Can you import it to Excel?". I'm amazed at how much programmers can discount "It already works pretty much everywhere" for the sake of more esoteric improvements.

All that said (and perhaps counter to what I said), I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters, so you don't have to have complicated quoting/escaping logic, and it also supports multiple sheets (i.e. a workbook) in a single file.

> in the real world the spec is essentially "Can you import it to Excel?"

And the answer to that is always no. You will think it's yes because it works for you, but when you send it to someone who has a different Excel version or simply different regional settings, it won't work. The recipient will first have to figure out what dialect you used to export.

Oh absolutely. Don't forget about the giant mess Microsoft made in countries like the Netherlands, where Excel will refuse to automatically open comma-separated values files (you know, CSV), unless the separator is a semicolon — because someone in the past thought that was how the Dutch did it.

You want people to be able to open your nice and sensible CSV files in Excel? First figure out which arcane locale specific settings you need, then generate those monstrosities and annoy anyone who genuinely expected Unicode comma-separated values.

My solution was to just write a generic spreadsheet writer component in our API and have it generate CSV (normal, comma-separated, UTF-8), XLSX, or ODS. Anyone using Excel just grabs the XLSX output. Generating XLSX and ODS was just a matter of saving a minimal example file for each and figuring out where to write cells and rows. No extra library needed (both file formats are just zipped up XML files), and everybody is happy.

Many countries use the comma as decimal separator, and Microsoft in its infinite wisdom thinks that data interchange formats should follow regional formatting settings (that's unbelievably stupid; I'll never understand how such an enormous error not only came to be, but was never corrected). That makes the comma unusable as column separator in those countries for exchange of numerical data.

US users have the same problem in my experience. I've had multiple clients complain the exports are broken, because Excel will only import them (perfectly) but won't open them directly. I thought it was their way of forcing Excel files. iPhone & Google Drive have no issues.

"Always no"? What. Really not sure what you mean here when you agree that the possibility of it working exists.

I get the sentiment -- when I request data through FOIA, I will almost always request it as "an excel format" because I know that I'll at least be able to import it. CSV is much less of a guarantee and will have issues -- missing quotes, wrong delimiters, inconsistent column counts, things like that. So requesting "an excel format" implies "make the best effort to give me something that will load in my excel, but without asking what version of excel I have". Removes a fair amount of hassle, especially when it took months to get the data. It also means that if they fuck up the columns by doing a conversion, you have some means of telling them that the data is simply wrong, rather than the data is hard to work with. It does mean dealing with [0-9]+GB sized excel files sometimes, though.

That all said, I prefer to share CSV files. Haven't had much of a problem with it and I can guarantee some consistency from my side. Like, the people I share files with aren't going to know what the hell a parquet file is. A CSV though? Just double click it if you're less technical, or open it up in less if you can use a terminal. It usually compresses well, despite what the author wrote.

   > when I request data through FOIA
Fascinating. Can you share any details? Did you ever think to share some of your interesting finds here on HN as a submission?

That was a really fascinating story! Thanks for sharing.

Wow, what a treasure trove you’ve got there! I’ve subscribed via RSS, in case anything else comes down the pipe :)

Thank you! Hopefully by the end of the year but these things can get, strange.

I've been amazed by how much better LibreOffice is at importing CSVs in a sane manner than Excel. Its CSV import prompt is nothing short of the gold standard and puts Excel to shame.

Also, even if the CSV format is completely valid, Excel will still find a way to misinterpret some of your cells in baffling ways, destroying the original data in the process if you don't notice it in time.

Yeah, I can complain about LO in many ways, but the way it opens CSV is much better than Excel. It was developed by a dev, that's for sure.

The root cause of a lot of problems is that Excel's CSV import is garbage.

Someone should write a simple utility modelled on LibreOffice's CSV import dialog that reads in a CSV file and outputs its content as an XLSX file with all the cells formatted as text. Since XLSX files are just XML text in a renamed ZIP file and CSV is a very simple format, such a project could be written over a weekend.

Network admins could then create a group policy to reassign the CSV file extension to open with the utility instead of Excel. I guess the utility could automatically open the generated XLSX in Excel as well.

This would fix so many data corruption issues across the whole world.

Microsoft themselves could even do this as a powertoy.
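For what it's worth, the core of such a utility really is weekend-sized. Here's a rough Python sketch (the function name `csv_to_xlsx` is made up; the XML is a hand-rolled minimal subset of the XLSX package, with every cell written as an inline string so Excel can't reinterpret anything as a date or number):

```python
import csv, io, zipfile
from xml.sax.saxutils import escape

def col_letter(n):
    """1-based column index -> spreadsheet letters (1 -> A, 27 -> AA)."""
    s = ""
    while n:
        n, r = divmod(n - 1, 26)
        s = chr(65 + r) + s
    return s

def csv_to_xlsx(csv_text):
    """Convert CSV text to a minimal XLSX where every cell is an inline
    string, so nothing gets auto-converted on open."""
    rows = []
    for i, row in enumerate(csv.reader(io.StringIO(csv_text)), 1):
        cells = "".join(
            f'<c r="{col_letter(j)}{i}" t="inlineStr"><is><t>{escape(v)}</t></is></c>'
            for j, v in enumerate(row, 1)
        )
        rows.append(f'<row r="{i}">{cells}</row>')

    ct = ('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
          '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
          '<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>'
          '<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>'
          '<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>'
          '</Types>')
    rels = ('<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
            '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>'
            '</Relationships>')
    wb = ('<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" '
          'xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">'
          '<sheets><sheet name="Sheet1" sheetId="1" r:id="rId1"/></sheets></workbook>')
    wb_rels = ('<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
               '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>'
               '</Relationships>')
    sheet = ('<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">'
             f'<sheetData>{"".join(rows)}</sheetData></worksheet>')

    out = io.BytesIO()
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", ct)
        z.writestr("_rels/.rels", rels)
        z.writestr("xl/workbook.xml", wb)
        z.writestr("xl/_rels/workbook.xml.rels", wb_rels)
        z.writestr("xl/worksheets/sheet1.xml", sheet)
    return out.getvalue()
```

Obviously missing: encoding sniffing, delimiter detection, large-file streaming. But that's the "weekend" part.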

Heck yes, LibreOffice shines when it comes to that. Excel always threw me curveballs a lot.

Yes the LibreOffice CSV import dialog showing you a what-if preview of what you'd get as you play with different settings, is pure amazing.

At this point I suspect Excel is as dangerous as PowerPoint to the quest for sharing and improving human knowledge, in that it gives the illusion of allowing people to analyze data but not the tools to make sure they get their data and model lined up correctly.

otoh it could be instrumental precisely because it is flexible and doesn't require a lot of forethought

>> in the real world the spec is essentially "Can you import it to Excel?"

> And the answer to that is always no.

Sorry, you are wrong! You are confusing "No" and "Yes, after a quick setup at most".

Oh oh oh, I have a story about a quick setup. Sent a csv file to someone in the same org. Guy said that it was not opening. Wanted me to come to their office to see. I told them that IT support should fix it, since I can't, and every machine on my OU could read it. I was the bad guy. Yeah, quick setup my ass. Users can't be arsed to understand that their setup isn't properly configured.

Yeah, I definitely prefer CSV over Excel. Excel messes with your data. CSV is what it is. Sure, you may not know exactly what that is, and plenty of times I've had to figure out manually how to handle a specific CSV export, but once I've figured that out, it will work. With Excel, there's always the risk that a new value for a string field could also be interpreted as a date.

On the other hand you can now use Power Query to import Parquet data into Excel.

Whatever situation got you into the "Power" universe was bad.

Warning you about M$, you will soon be an enterprise dev.

I am not in that universe, on the contrary I try to stay as far away as possible. And I agree with you. However I think the everything-done-in-OLD-Excel universe is worse. Some people will be terminally stuck in Excel but at least they can use new Excel capabilities instead of being stuck with the Excel of 20 years ago.

So why remain stuck importing CSVs into Excel when you can use Power Query to import Parquet? Why remain stuck using VBA in Excel when you can use Python in Excel?

I do not think an Excel user can be convinced to move to things like Jupyter, R, databases, etc. since they won't even make the jump to Access but maybe they can be convinced to use modern features of Excel.

Sorry man, I can't think of a case where I'd import a CSV into excel, but have the skill level to use powerquery and import parquet.

Like, if you are going to use Power Query, why not just Python? At least this way you aren't going to get nailed into a legacy hellhole.

If the answer was always no, importing CSVs to Excel wouldn't be an expectation or widely used.

The answer is "always no" because the question is inherently underspecified, precisely because importing things into Excel is more complicated and context-dependent than it appears on the surface.

ASCII has had field and record separators since like, forever. Wish we had kept using those.

No you don't. It's a holdover from when files were on tapes. The logic is all inverted too. Record separators were at the beginning of a record. Group and unit separators could then be nested. You really needed a spec for whatever document you were trying to make.

It doesn't matter if the sentinel byte is after or before each record.

Having it before is nice for syncing byte streams.

It matters when you're trying to roundtrip the data through a text editor, because existing tools balk at a 300MB file with a single line.

You need a "spec" just the same for a CSV: does it have headers, is there a consistent column ordering, etc. Control characters work the exact same as commas and newlines, but you don't have to worry about escaping them because (unless your data is binary or recursive) the control characters won't show up in the data!
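A minimal Python sketch of that idea, assuming the fields themselves never contain the control characters (`dump`/`load` are made-up names):

```python
# ASCII unit separator (between fields) and record separator (between records).
US, RS = "\x1f", "\x1e"

def dump(rows):
    """Serialize rows of string fields with ASCII separators."""
    return RS.join(US.join(fields) for fields in rows)

def load(blob):
    """Inverse of dump; no quoting or escaping logic needed."""
    return [record.split(US) for record in blob.split(RS)]

# Commas, quotes and even newlines in the data need no escaping at all.
rows = [["name", "quote"],
        ['Doe, Jane "JD"', "line one\nline two"]]
assert load(dump(rows)) == rows
```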

Do you have a reference to how this worked?

Does it have nesting operators? I want to embed ASCII within my ASCII fields. So I can have a table within my table.

The POSIX spec defines portable and non-portable ASCII characters, prudently placing the separators in the non-portable set. To nest tables, base64-encode (or use whatever portable encoding) the inner table into the field. This is much simpler and less error-prone than any escaping strategy.

Regarding visibility in editors: if you are nesting tables I don't think you care too much about manual editing, but if you do, it is easy to set up vim/emacs/vscode to display the ASCII separators. I am told even Notepad++ can display those, so there are no excuses.
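A Python sketch of the nesting scheme described above (`dump`/`load` are made-up names; the base64 step is the whole trick):

```python
import base64

US, RS = "\x1f", "\x1e"  # ASCII unit and record separators

def dump(rows):
    return RS.join(US.join(fields) for fields in rows)

def load(blob):
    return [record.split(US) for record in blob.split(RS)]

# To nest a table inside a field, base64-code it first: the encoded text
# uses only portable characters, so it can never collide with the separators.
inner = dump([["a", "b"], ["c", "d"]])
packed = base64.b64encode(inner.encode("utf-8")).decode("ascii")
outer = dump([["id", "payload"], ["1", packed]])

# Unpacking reverses the steps; the scheme repeats to any depth.
payload = load(outer)[1][1]
recovered = load(base64.b64decode(payload).decode("utf-8"))
assert recovered == [["a", "b"], ["c", "d"]]
```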

It kind of does. See `man ascii`

* FS (0x1C) file separator

* GS (0x1D) group separator

* RS (0x1E) record separator

* US (0x1F) unit separator

I've never seen these in the wild though.

They're used a lot in barcodes, e.g. for delimiting the different fields of a driving license.

That sounds like the premise for an utterly fascinating deep dive.

It's quite the rabbit hole, I can assure you.

Not seeing them in the wild is good, it means they will work when you use them. The more they get used the more often you'll find they crop up in the text fields you're trying to separate and the whole csv escaping nightmare will continue.

If you've got delimited-text embedded inside your delimited-text, you've got a nightmare that escaping can't save you from.

(obligatory https://knowyourmeme.com/memes/xzibit-yo-dawg)

They're definitely used in ACARS messages that go to every commercial airplane cockpit...

A manager: I wish I could have a CSV inside my CSV.

Any sane person: NO!

base64 encoded fields it is.

That was the right answer. You are hired!

Well you might be wrong, but EDI in general and HL7 specifically allow 3 levels of "fields in fields in field".

As long as your parser copes, and as long as you have appropriate structures to import into, it's no big deal.

So now when I'm exporting data I need to know what nesting level it's going to live at so I can generate the correct separators?

I really think that might be the worst idea I've heard for a while!

It's not toooo bad :) But it's very much a "thing" in the real world. It's called EDI, and it's been around for a long time.

About 10 years ago, I worked at a place where we were embedding both XML and JSON in CSV fields.

Then there are always the people who can't generate a valid CSV due to escaping issues...

Nothing is ever simple.

Been there, done the Interop.

Are there unicode characters specifically for delimiters?

If Excel had a standardised "Save as USV" option it would solve so many issues for me.

I get so many broken CSVs from third-parties

ASCII has characters for unit, record, group and file separators. And some days ago there was a story here about using the Unicode printable representations of these for an editor-friendly format.


There are characters from ASCII for delimiting records, which are underused because they cause confusion about whether they should be represented as a state change like a backspace character, or as a glyph. See also: "nobody can agree on the line ending character sequence".

The USV proposal uses additional codepoints introduced in Unicode for the representation of the record delimiters, so they will always look and edit like character glyphs, and nobody is using them for some other purpose. The standardized look of these glyphs is unappealing, and they aren't easy to type, but it's fixable with a font and some editing functions.

Most of the issue hinges on Excel support.
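For illustration, parsing such a file needs no quoting logic at all. A Python sketch, assuming ␟ (U+241F) between fields and ␞ (U+241E) between records per the USV drafts (the newline handling here is an assumption for readability, not settled spec):

```python
UNIT, RECORD = "\u241f", "\u241e"   # "␟" and "␞" from the Control Pictures block

def parse_usv(text):
    """Split USV text into rows of fields. A newline following a record
    separator is treated as cosmetic and stripped."""
    records = text.split(RECORD)
    return [rec.lstrip("\n").split(UNIT) for rec in records if rec.strip("\n")]

sample = "name\u241fcity\u241e\nAda\u241fLondon\u241e\n"
assert parse_usv(sample) == [["name", "city"], ["Ada", "London"]]
```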

> nobody is using them for some other purpose.

There's a lot of tooling which uses them for their intended purpose, which is to represent the C0 control characters in a font, so they can be printed when they appear in a document. Your editor is probably one of those.

Which is why I consider USV a terrible idea. If I see ␇ in a file, I don't want to be constantly wondering if it's "mention ␇" or "use ␇ to represent \x07". That's why the control pictures block exists: to provide pictographs of invisible control characters. Not to make a cutesy hack "look! it's a picture of a control character pretending to be what it isn't!!" format.

I agree about USV, it creates confusion where none needs to exist. For personal use, though, it is not that bad to receive a USV: it should be marked ".usv" and in any case if you suspect shenanigans you can `grep` for the offending (literally!) Unicode characters and `tr` them into proper ASCII separators. Now, if there is nesting in the USV, I give up.

I share the lament: the whole table issue was solved before it became a problem. POSIX divides ASCII into portable and non-portable characters; only portable characters are allowed in the fields and separators are non-portable. If you need nesting, use a portable encoding of the inner table. This scheme repeats indefinitely without escaping hell or exceptions, preventing tons of errors and headache.

Visibility is such a bizarre complaint. Text editors already handle control characters: they handle tabs, they handle newlines, it is not a tremendous, earth-shattering feature request to make them handle separators gracefully.

I don't understand why this is a question up for debate. You need eye tracking, so that there is a beep when you read the relevant part.

Hell, there's ASCII characters specifically for delimiters. 0x1C to 0x1F are respectively defined as file, group, record, and unit separators. Unicode naturally inherits them all.

Except nobody uses them. Another previous discussion: https://news.ycombinator.com/item?id=33935140

My significantly bigger beef would be all of the auto-formatting Excel does to mangle data. Excel loves to turn entries into dates.

Human genes had to be renamed to avoid this Excel feature.

Excel now has a prompt so you can tell it not to convert stuff automatically.

Gasp. Big news. I do not recall ever seeing this, so I wonder if $JOB is running some hilariously outdated version for compatibility with a load bearing VBA script.

Automatic Data Conversion toggle was only added in the past ~year: https://insider.microsoft365.com/en-us/blog/control-data-con...

Yes, this recent discussion has lots of good info and links in the comments: https://news.ycombinator.com/item?id=39679378

> Are there unicode characters specifically for delimiters?

We could use the HL7 pipe ‘|’ and all enjoy that hell.

God, please no.

For those unfamiliar with the atrocity that is HL7v2, the format is essentially CSV, but with the record separator set to a lone CR, and the field separator usually set to |. Usually, because the format lets the producer of the message redefine it, for whatever reasons. (The first use of the field separator determines what character it will be. Thankfully, the first use is in a fixed spot, so it's determinable, but still. Oh, but we don't know the character encoding until like the 18th field in … and it doesn't necessarily have to be an ASCII superset. So I have no idea what an HL7v2 message in a non-ASCII superset even looks like, or how a parser is even supposed to reasonably parse such a thing. I presume you attempt a decode in all possible encodings, see which one matches the embedded character set, and pray nobody can create a polyglot?)

There's also further separators, delimiting within a field.

It also has its own escape sequences, to deal with the above.

… and it is what carries an unfortunate amount of medical data, and is generally how providers interoperate, despite the existence of more civilized standards like FHIR

I've unfortunately had to bless my brain with much more of this standard this week, for some reason.

Did I mention that subcomponents (if you look at it like a CSV, cells are further subdivided into components & subcomponents, so subcomponents are sort of where we hit "cell text", if you want to keep going with that broken analogy) — contain escape sequences, so that you can have things like the field separator. Normal stuff, so far. The escape sequences also include highlighting, binary blobs, and a subset of roff.
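To make the dynamic-separator dance concrete, here's a deliberately naive Python sketch that handles only the structural splitting (repetitions, escape sequences and the character-encoding horror described above are all ignored):

```python
def parse_hl7v2(message):
    """Minimal HL7v2 structure parse: segments -> fields -> components
    -> subcomponents. Repetition (~), escape sequences and non-ASCII
    encodings are deliberately not handled."""
    segments = message.strip("\r").split("\r")        # segments end with CR
    field_sep = segments[0][3]                        # char right after "MSH"
    comp_sep, _rep, _esc, sub_sep = segments[0][4:8]  # MSH-2 encoding chars
    return [
        [[comp.split(sub_sep) for comp in field.split(comp_sep)]
         for field in segment.split(field_sep)]
        for segment in segments
    ]

msg = "MSH|^~\\&|SENDER|RECEIVER\rPID|1||12345^^MR"
parsed = parse_hl7v2(msg)
assert parsed[1][3] == [["12345"], [""], ["MR"]]
```

(Note the off-by-one lurking even here: splitting MSH on its own field separator shifts that segment's field numbering relative to the spec, which real parsers have to special-case.)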

My worst nightmare was a semicolon-delimited file. Where one of the columns had hand-typed street names - without quotes.. so "WELLS" was often "WE;;S".

Since it was the only column like that, the # of columns to the left of the annoying column and the # on the right would always stay the same. So it was pretty easy to clean.

It’s been years since I last worked with HL7. Isn’t there also ^ and ~ to deal with?

Hell indeed.

Isn't that what HL7 stands for? Hell Layer 7 as in the seventh circle of hell.

Yes, there is. Multi-dimensional CSV?

There is the white space bs too. Sometimes it matters, sometimes it doesn’t. What type of white space is it?

Seriously rough.

>I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters

Commas can be typed by anyone on any keyboard and readable by anyone.

Special Unicode Characters(tm) can't be typed by anyone on any keyboard and are readable by no one.

Convenience is a virtue.

I can't remember the last time I, or anyone I've ever worked with for that matter, ever typed up a CSV from scratch. The whole point of USV is that the delimiters can't normally be typed so you don't have to worry about escaping.

USV supports displayable delimiters (see https://github.com/SixArm/usv), so for the much more common case of editing an existing CSV in a text editor, you can just copy and paste.

Every one of us was a beginner at some point. The first time we came across the CSV format we likely typed it in Notepad by hand. A lot of issues with CSVs are also sometimes troubleshot by hand -- by manually fixing a quote or a comma.

There is value in the ability to do this level of editing and troubleshooting.

> The first time we came across CSV format we likely typed it in notepad by hand.

Again, I'm not saying CSVs aren't edited by hand in a text editor, I'm saying they aren't created from scratch in a text editor, even by beginners. USVs are easy to edit in a text editor, too, and I tried viewing and editing USVs with a couple different fonts and had no problems.

If the separators can't easily be typed, how do you add a new cell?

Nobody can type up a GIF image or a Word document in Notepad, yet files of both those formats exist. The answer obviously is tooling. If a format with sane separators was common, so would editors that could edit that format be.

I was responding to the GP's:

> USVs are easy to edit in a text editor

I don't see how that's the case.

> If a format with sane separators was common, so would editors that could edit that format be

Sure, but that's a hypothetical future editor, not something that currently exists.

Edit to add: I also disagree with "sane" in that context. New separators won't solve anything. You'll always need escaping/encoding/encapsulation; get that right and everything else follows. JSON is comma-separated and does just fine.

Copy and paste.

No we didn't; we likely typed it in Excel after double-clicking our first CSV.

I can’t speak for everyone, but I definitely didn’t use Excel.

I've valued the virtue of CSVs being readable by any text editor known to man, and I've occasionally edited them by hand. The pure simplicity of reading and typing commas trumps any value provided by more esoteric configurations.

As for escaping, that's for the subsequent programmers (which could also be me) to figure out. If it is me, I'll deal with it because it keeps things simple.

> I've occasionally edited them by hand.

Yeah, usually when the quoting was f'up.

> Special Unicode Characters(tm) can't be typed by anyone on any keyboard and readable by noone.

While I'm not a fan of USV, I do believe it is type-able on both macOS and Linux¹. The IME's character picker in both OSes contains all of the necessary characters, I think. (I use "␤" in commit messages, occasionally. That was a typed ␤, not copy/pasted from a Google query or such.)

It would be quite tedious, I do admit; reasonably typed by someone, probably not.

I don't normally type out CSVs by hand either, though.

(¹I don't know about Windows as I simply haven't cared about it in years. For all I know, they've grown an IME in the intervening decade.)

Even if the csv isn’t being typed out by hand, when importing into Excel and the delimiter needs to be manually entered, because it isn’t one of the common ones Excel has a radio button for… it is nice to be able to easily type it.

While I can see a convenience argument for the somewhat contrived case of typing an entire file out by hand, entering the character once for the import does not seem like a great bar.

> it is nice to be able to easily type it.

Again, that's where an IME is helpful; on the OSes I mention, entering "␞" is:

  macOS: ⌘+^+Space, "record", <arrows to select>, Enter
  Linux: Super+e, "record", Space, Space, <arrows>, Enter
The process is highly visual on both, so you're getting feedback about whether you're about to hit the right character, or not.

(And like, if you have the file, you can always ^C ^V the character, if say you don't know how to IME, or you don't know what the name of the char is, etc.…)

Would it be possible to just type the file using commas, semicolons, or pipes or something (whatever you happen to know you don’t have in your file) and then convert them using sed?

Yes, it would be possible. You'd have to make sure the character didn't appear / no escaping at all was present, which the data may or may not allow.

Might as well just get a dedicated CSV→USV converter, though.

(I have a variant of this problem in JSON→YAML … usually I just shove one into the other & pray, akin to your sed solution.)

Any character within reason can certainly be entered by way of Character Map in Windows or its equivalent in Linux or MacOS, but if you're arguing that then you don't understand the crux of my argument: Convenience is a virtue.

There is value in the delimiter simply being a key on practically any keyboard in existence. Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs) is a non-starter, because convenience is a virtue.

> Anything that involves something more complicated than just pushing a single button on a keyboard (this includes IMEs)

My point is that this is merely a more stringent argument; it's now "on a keyboard, and cannot involve dead keys, etc." … which now excludes classic CSV, too, which requires two keys to enter a double quote. (Shift+')

Again, it does require more keys, and it is certainly not convenient, but n keys to me is still n keys. The real question is why one isn't using a proper tool to enter the data, and is instead encoding it by hand, which, again, even for a classic CSV, is basically something I've never done. (… because why would one?)

The fact that it is a character on the keyboard is exactly the problem, too. Any character a user can easily enter will definitely end up mixed into your data somewhere.

The IANA standard for TSV already disallows tabs inside fields, so you can skip writing any quoting logic (in principle). The MIME type is `text/tab-separated-values`.
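In Python's stdlib you can even get that rule enforced for you: with `QUOTE_NONE` and no `escapechar`, the `csv` writer raises instead of silently emitting an ambiguous row. A sketch:

```python
import csv, io

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_NONE)

writer.writerow(["id", "note"])
writer.writerow(["1", "plain text, commas are fine"])

# A tab inside a field violates the no-tabs rule; with QUOTE_NONE and no
# escapechar, the writer refuses rather than emitting a broken row.
try:
    writer.writerow(["2", "contains\ta tab"])
    raised = False
except csv.Error:
    raised = True
assert raised
```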


So true. Working with imports/exports in CSV from ERP software. One can't imagine how often "Oh, this import doesn't work. I'll just fix the CSV file" occurs. Try that with some compressed, "esoteric" file, or even XML, and users will break it.

Besides all the downsides CSV has, as soon as it's not only machine-machine communication and a human is involved, CSV is just simple enough.

Check out Polars in python if you want some CSV performance lol. I recently got a 5 million row CSV from a 3rd party and I could manipulate columns (filtering, sorting, grouping) in actions that took less than a second. It's an incredible tool.

USV has a mountain of problems.

And really is in search of a problem to solve.

JSON objects as a CSV field has been mostly agreeable for my usage. It would be nice if some of the spreadsheet apps displayed the object tree.

Ditto. If it fits in RAM, file types don't matter.

It seems like you missed the conclusion in the article. If users want CSV exports, let them have it.

If you have important data being shuffled around systems, pick something with a specification instead.

To me this criticism feels excessive. It feels like the author is describing their frustrations with internal usage of CSVs - there's no mention of customers and non-technical stakeholders at all. I think it goes without saying that Parquet files and other non-human-readable formats are a nonstarter when working with external stakeholders and the last paragraph makes that clear - if the end-user wants CSV, give them CSV.

I also think we shouldn't blindly dismiss the performance drawbacks of CSV when working with data pipelines. At even modest scales it becomes hard to work with very large CSVs because the data often doesn't fit into memory, a problem easily solved by Parquet and other formats assuming you only need a subset.

I deal with gig size csvs all the time and don’t have any performance issues. These aren’t huge files, but decent sized. And most are just a few megs and only thousands to millions of records.

Csv is not very performant, but it doesn’t matter for these use cases.

I'll also add that I'm not working with the CSVs, they are just I/O. So any memory issues are handled by the load process. I certainly don't use CSVs for my internal processes. Just for when someone sends me data or I have to send it back to them.

That being said my workstation is pretty big and can handle 10s of gigs of csv before I care. But that’s usually just for dev or debugging and anything that sticks around will be working with data in some proper store (usually parquet distributed across nodes).

That may be your experience, but certainly not a universal experience (and apparently not the author's, either). In my experience, it's pretty easy to have CSVs (or Parquet files, or whatever) that are tens or hundreds of GBs in size. The space savings from a more modern file format are significant, as is the convenience of being able to specify and download/open only a subset of rows or columns over the network. Most of us don't have workstations with 50GB of RAM, because it's far more cost-effective to use a Cloud VM if you only occasionally need that much memory.

That being said, the real point here is that folks blindly use CSVs for internal-facing processes even though there's no particular reason to, and they have plenty of drawbacks. If you're just building some kind of ETL pipeline why wouldn't you use Parquet? It isn't as if you're opening stuff in Excel.

The author is giving universal advice to all friends.

It would be different if the title were “friends in certain circumstances shouldn’t let friends in certain circumstances export to CSV.”

Even a laptop with 8gb ram can open a gig csv.

Of course the internals of your etl will use some efficient data structure, but you’d still want to export as csv at some point to get data to other people. Or you want your friends to export csv to get data to you.

If I run a simulation workload it's pretty easy to generate gigabytes of data per second. CSV encoding adds a huge overhead space and time wise, so saving trajectories to disc for later analysis can easily become the bottleneck.

I have had many other situations where CSV was the bottleneck.

I still would default to CSV first in many situations because it's robust and easily inspected by hand.
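A back-of-the-envelope illustration of that overhead, stdlib only: the same 8,000 doubles as CSV text versus raw bytes.

```python
import csv, io, random, struct

random.seed(42)
rows = [[random.random() for _ in range(8)] for _ in range(1000)]

# Text: each double becomes ~17-19 characters plus a delimiter, and must be
# re-parsed on every read.
buf = io.StringIO()
writer = csv.writer(buf)
for row in rows:
    writer.writerow(f"{x:.17g}" for x in row)   # full round-trip precision
text_bytes = len(buf.getvalue().encode("utf-8"))

# Binary: each double is exactly 8 bytes, mmap-able, no parsing.
binary_bytes = len(b"".join(struct.pack("<8d", *row) for row in rows))

assert binary_bytes == 8 * 8 * 1000
assert binary_bytes < text_bytes / 2   # text is typically well over 2x larger
```

And that's before counting the float-formatting and float-parsing CPU time, which is usually the bigger cost in a hot simulation loop.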

> That being said my workstation is pretty big and can handle 10s of gigs of csv before I care.

How much RAM do you have? What's the ratio of [smallest CSV file which bottlenecks]/[your RAM]?

My dev workstation has 96gb. I don’t work with massive data files so I’ve never really hit my limit. I think the biggest raw data file I’ve opened was 10-20gb.

I very much agree with this. For an integration where you have control over both ends of the pipeline, CSV is not optimal unless there's existing work to build on, and even then it's a legacy choice.

Parquet and Avro are widely supported in backend languages and also in data analysis. I don't think the article is talking about exported-like-a-jpeg, but instead exported-like-a-daily-report-run: the data scientist doing the exporting is probably using R or Pandas instead of Excel, and can reasonably be expected to read https://arrow.apache.org/docs/r/reference/read_parquet.html.

btw, xsv has solved most of my problems dealing with 'large' 40GB csv files

xsv? I never heard of it. This one? https://github.com/BurntSushi/xsv

If yes, looks very cool. Plus, bonus HN/Internet points for being written in Rust!

yep .. his utils are most excellent.

its parser is buggy! https://github.com/BurntSushi/xsv/issues/337

(I ran into this issue myself)

I just responded to that. It isn't the parser that's buggy. The parser handles the quotes just fine. If it didn't, that would be a serious bug in the `csv` crate that oodles of users would run into all the time. There would be forks over it if it had persisted for that long.

The problem is that `xsv table` doesn't print the parsed contents. It just prints CSV data, but with tabs, and then those tabs are expanded to spaces for alignment. Arguably it ought to print the parsed contents, i.e., with quotes unescaped.

It almost looks like it's doing that because the quotes are removed in one case, but that's only because the CSV writer knows when it doesn't need to write quotes.

Ok this might sound stupid, and a bit unrelated, but you make so many great tools that I can't help but ask. How do you start planning and creating for a tool that needs to "follow standards"(in this case I know CSV is under specified but still!), is it by iteration or do you try to set and build a baseline for all the features a certain tool needs? Or do you just try to go for modularity from the get go even if the problem space is "smaller" for stuff like csv for example.

I suppose https://old.reddit.com/r/burntsushi/ might be a good place for questions like this.

I don't really have a simple answer unfortunately. Part of it is just following my nose with respect to what I'm interested in. So there's an element of intrinsic motivation. The other part is trying to put myself in the shoes of users to understand what they want/need. I typically do a lot of background research and reading to try and understand what others have done before me and what the pain points are. And iteration plays a role too. The `csv` crate went through a lot of iteration for example.

I think that's about it. It's hard to answer this question in a fully general way unfortunately. But if you want to get into it, maybe you can be the first person who opens a thread on r/burntsushi haha.

Why is this getting downvoted? They're right that the criticism is pretty excessive:

  "Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation."

I got a little snarky but I think the analogy holds.

Esperanto is a superior language to English. And English has many flaws.

Theoretically it would be better to have all pilots and airports learn an efficient language.

But it would be stupid and immature to seriously write a blog post about that, especially without talking about all the flaws in that plan.

> Esperanto is a superior language to English.

Not really. You can say it's more regular, but that's because it sees barely any actual use; if it ever gained popularity it wouldn't stay regular. (And given that pilots speak in set phrases anyway, irregularity isn't really an issue.) It's not a great language by any stretch; it's an awkward mishmash of four European languages. Sure, it sounds kind of nice in an Italianate way, but if that's what you want, then why not just speak Italian?

> if it ever gained popularity it wouldn't stay regular

This. Utility and purity always pull in opposite directions.

I see those boutique little (programming) languages written by amateur language designers with exotic type systems or “everything is an X” philosophies, and my reaction is to assume that they are useless for anything past toys and experiments.

I know useful language features have been born in that world and then eventually bolted onto mutt languages like Java and Python, but that suits me just fine.

It would stay regular if there was a strict governing body for it that wasn't a Webster-style "whatever people are speaking is the new definition of correct".

English really is a disaster of a language. There was a(nother) great XKCD about it just a few days ago. https://xkcd.com/2907/

> if there was a strict governing body for it that wasn't a Webster-style "whatever people are speaking is the new definition of correct".

There is no way it can work.

People don't care about governing bodies when they speak a language.

They kind of do. In my language there exists a central "governing" body that decides what is correct, and some "incorrect" regionalisms are disappearing because of it.

When that happens, it’s likely more about politics and social status than the governing body.

Ie, the governing body decrees that the regionalisms from the dominant region is the definitive version of the language. But it might be considered cool to speak that way even without a governing body.

Are you referring to French? Because, if anything, French in France has an insane quantity of slang and an extremely emergent vocabulary, much more so than any English-speaking country I can think of. Quebec isn't really influenced by the Académie française, yet generally speaking has a much more "correct" usage of the language.

Maybe it's a totally different language but still it goes to show that even a very prestigious central authority doesn't make a language better or less prone to diverge. Regardless of the reason, French is evolving much more quickly than English.

But maybe that's beside the point. If someone wants to "learn French" they can learn by the official rules and communicate with other french-speaking people regardless of how many slang variants exist in France. They can also probably watch French television and understand it.

The point of Esperanto was to make it easier to learn. French is regular but extremely complicated. English is complicated and has a million special cases. Both languages are hard enough to master that society starts to judge a person's intelligence by how well they know the rules and special cases.

Correction: it's a disaster of at least 4 languages, and this is probably why English is so hard to dethrone: it has no strict ownership, so everyone is kind of equal in speaking it incorrectly.

Sometimes a lack of rigidity is actually a feature that allows things to sort of work that would be politically impossible if they had to be specified formally before being used.

I'm pretty sure the reason English is hard to dethrone is because Britain ~helped~ forced the various colonies to join world commerce using English, so they started teaching it to entire generations as the national second-language, and then because the USA dominated world commerce after that in a sort of "we'll let you in on the game if you speak our language and use our money" sort of way.

> Esperanto is a superior language to English. And English has many flaws.

From at least one perspective, English is superior. That perspective is that you can actually use it in almost any modern situation, because it has been tried and tested globally.

This is the point of my analogy.


(Although I think parquet is much more useful than Esperanto and may eventually end up dethroning csv)


Not saying csv doesn’t have its issues, but I don’t think the author made a convincing argument.

A lot of the issues the author brought up didn’t sound that bad and/or it sounds like he never looked at the source data first.

If you’re doing work with large datasets, I think it’s a good practice to at least go and look at the source data briefly to see what to expect.

This will give you a good idea of the format it outputs, data types, some domain context, etc. or some combination thereof and I don’t think it even takes that long.

Also, it reminds me of the arguments against excel in a way. Most people know what a csv is, more or less how to open it, and don’t need too much context when discussing the file. Someone will quickly understand if you tell them the file isn’t delimited properly right away. These are pros that shouldn’t be taken for granted.

Again, I’m not saying csv doesnt have issues or that there aren’t better alternatives, simply that I didn’t find this particular argument convincing.

IME most people don't know that using Excel to open and save a csv will silently mangle data. In our application leading zeros are significant, so we constantly get screwed by people trying to do quick manual edits and breaking the data. If we're lucky it breaks so badly the import fails. It's worse when the mangling results in structurally valid but wrong data.

I think what you’re saying is accurate, but it’s also important to be practical about stuff.

These are pretty well-known Excel limitations by now.

And really, anyone using excel who is somehow not aware of that limitation is probably not someone yet experienced enough to be working on a larger and/or mission critical dataset to begin with.

Are there exceptions? Sure. You might be tempted to cite the example of the incident where this happened to some biologists not too long ago, but mistakes happen. I’ve seen people make mistakes building Android or iPhone apps using the right (TM) tools.

What is the exact number of mistakes where you make the decision to jump to a new format?

I’m not sure. This does happen eventually, but the author didn’t make a strong case here imo.

But the point is that you don't have to look at the source data if you have an actual specification and defined format, right?

> all these sources aren’t able to agree on some successful format.

But the same is true for csv, and they are not readable by everyone, since you don't always know how to read them; there is not enough info for that.

Also, it's not a good reflection on "deep experience" if it leads to reflexive defense of common stupid things people do, backed by wrong analogies (e.g., FLAC is the less efficient option, so it's more like CSV).

In my experience csv has the fewest problems. Not that it has zero problems.

For me, the giveaway was:

"You give up human readable files,..."

I was genuinely interested in some alternative suggestions - but the human readableness of csv is what makes it so sticky imo.

My entire experience with software development has been me bellyaching about how stupidly things are set up, why don't we do it this way instead, etc... only to actually set about working on fixing these things and realizing it's either way harder than I thought, it makes more sense than I thought, or it just plumb isn't worth the effort.

The effervescent suggestion of brighter, more logical, even obvious solutions is often a clear indicator of domain inexperience or ignorance.

I worked plenty enough with 'diverse data pipelines' and most of them were shit due to other companies just not knowing how to work properly.

CSV created tons of issues regarding encoding, value separation etc.

I started talking to our customers and was able to define interfaces with a better, aligned format. JSON made my life easier.


In some senses, I think internet culture (maybe modern intellectual culture generally) gets stuck in these repetitive conversations.

Reprosecuting without seemingly knowing about all the previous times the conversation has been had.

And it's surprisingly hard for etl departments to export to csv correctly. I mean, if they can't do csv they can't do anything more complicated for sure.

This article seems written by someone who never had to work with diverse data pipelines

I think that's a little unfair, it sounds like the author does have a decent amount of experience working with real-world CSV files:

I remember spending hours trying to identify an issue that caused columns to "shift" around 80% into a 40GB CSV file, and let me tell you, that just isn't fun.

Yup, csv is always the best fallback, imo. It's: easily generated, easily parsed, human readable/editable, compact, portable, list goes on.

> Csv that’s zipped is producible and readable by everyone. And that makes is more efficient.

If only CSV were CSV, as opposed to some form that's 80-90% CSV by line count with enough oddities to really make the parser ugly and hard to read.

See, the sweet spot isn't something completely unstructured, because then you feel justified in throwing up (your hands) and declaring defeat. The sweet spot is a file that's sufficiently close to being structured you can almost parse it nicely, but has enough bad lines you can't go in and fix them all by hand in a reasonable timeframe, and you can't tell upstream to get their shit in order because it's only a few lines.

There’s definitely hair to deal with and it’s a little messy, but it’s never a blocker.

But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.

> But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.

The thing with CSV-related issues is it's usually not a fixed percentage but instead depends on the data.

I work in the VoIP industry, so I deal with the FreePBX Asterisk GUI quite often, and it uses CSV as its bulk import/export format. This mostly makes sense, as the data is almost entirely (with one notable exception) simple tables that fit nicely into rows and columns. The issue I run into most commonly with this is that it doesn't quote numerical fields, and as a result the fields for extension numbers and voicemail PINs can be problematic when they contain one or more leading zeroes. All of the major spreadsheet software I've used defaults to dropping leading zeroes from columns it has decided contain numerical values, and this results in broken data in these cases. It's of course relatively rare for users to choose a voicemail PIN starting with zero, and even more rare for extensions to be set up with a leading zero, but both happen regularly enough that I need to remember to manually configure those columns as "Text" when opening an extension export CSV.

Either way, how often the problem occurs depends entirely on the data being sent through this pipeline. Most sites will never see the problem on the extension column, but one of my sites where the company liked a user's extension to be the last four of their DID when they were initially set up 20 years ago has a dozen of them in a row.
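For what it's worth, the mangling happens in the spreadsheet's type inference, not in CSV itself. A parser that leaves fields as text preserves leading zeros just fine; a minimal sketch with made-up extension data:

```python
import csv
import io

# The csv module never coerces types: every field comes back as a string,
# so leading zeros survive. The loss happens only in tools that guess types.
data = "extension,pin\n0420,007\n"
rows = list(csv.reader(io.StringIO(data)))
print(rows[1])  # ['0420', '007']
```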

Depends on your tools, I suppose. I'd just like to share this:


Did you actually read the conclusion at the end of the article?

"Of course, we can't conclude that you should never export to CSV. If your users are just going to try to find the quickest way to turn your data into CSV anyway, there's no reason why you shouldn't deliver that. But it's a super fragile file format to use for anything serious like data integration between systems, so stick with something that at the very least has a schema and is more efficient to work with."

It's a premature optimization issue. If you don't have special requirements like IO throughput, or mission critical data accuracy guarantees, be biased towards picking the format that anyone can easily open in a spreadsheet.

You can open it easily, but just as easily it can be wrong. So with this bias you'd still not export csv; you'd use xls.

"Of course there are better formats, but all these sources aren’t able to agree on some successful format."

It's the same with csv. They come in all kinds of formats because nobody agreed on the standard: comma separated, semicolon separated, pipe separated, escaped, not escaped.

Every time I have to deal with csv, I first have to figure out how to parse it in code.

So I think the author is right, we must agree on a better format because that is what friends do.

You are also right, because it's an illusion to think that this is going to change anytime soon. But who knows..
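The figure-out-how-to-parse-it step can at least be partly automated. Python's stdlib ships a dialect sniffer, for example, though there's no guarantee it guesses right on messy real-world files:

```python
import csv
import io

# Guess the dialect of an unknown CSV from a sample, then parse with it.
sample = "a;b;c\n1;2;3\n4;5;6\n"
dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter)  # ';'

rows = list(csv.reader(io.StringIO(sample), dialect))
print(rows[0])  # ['a', 'b', 'c']
```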

Every integration I’ve ever worked on has started off with high ideas of APIs and nice data standards. And has eventually devolved into “can we just put a CSV file on an FTP site…”. With the inevitable, “it’s not really CSV…”

... And what's more, you'll be an Engineer my son.

"You give up human readable files, but what you gain in return is..." Stop right there. You lose more than you gain.

Plus, taking the data out of [proprietary software app my client's data is in] in csv is usually easy. Taking the data out in Apache Parquet is...usually impossible, but if it is possible at all you'll need to write the code for it.

Loading the data into [proprietary software app my client wants data put into] using a csv is usually already a feature it has. If it doesn't, I can manipulate csv to put it into their import format with any language's basic tools.

And if it doesn't work, I can look at the csv myself, because it's human readable, to see what the problem is.

90% of real world coding is taking data from a source you don't control, and somehow getting it to a destination you don't control, possibly doing things with it along the way. Your choices are usually csv, xlsx, json, or [shudder] xml. Looking at the pros and cons of those is a reasonable discussion to have.

I think his arguments apply more closely to SQLite databases. They're not directly human readable, but boy are there a lot of tools for working with them.

We have a use case where we effectively need to have a relational database, but in git. The database doesn't change much, but when it does, references between tables may need to be updated. But we need to easily be able to see diffs between different versions. We're trying an SQLite DB, with exports to CSV as part of CI - the CSV files are human-readable and diff'able.

It's also worth noting that SQLite can ingest CSV files into memory and perform queries on them directly - if the files are not too large, it's possible to bypass the sqlite format entirely.
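The sqlite3 CLI does this with `.mode csv` plus `.import`; the same workflow in Python (with an invented table and data) is only a few lines:

```python
import csv
import io
import sqlite3

# Load CSV rows into an in-memory SQLite table and query them with SQL.
data = "name,age\nalice,34\nbob,28\n"
header, *body = csv.reader(io.StringIO(data))

con = sqlite3.connect(":memory:")
con.execute(f"CREATE TABLE people ({', '.join(header)})")
con.executemany("INSERT INTO people VALUES (?, ?)", body)

over_30 = con.execute(
    "SELECT name FROM people WHERE CAST(age AS INTEGER) > 30"
).fetchall()
print(over_30)  # [('alice',)]
```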

> we need to easily be able to see diffs between different versions

Can git attributes help in this case? It allows you to teach git how to diff binary files using external tools. Here [0] is a demonstration for teaching git to produce an "image diff" for *.png files using exiftool. You can do something similar for *.sqlite files by adding these 3 lines [1] [2]. The sqlite3 cli needs to be installed.

Alternatively, there's a tool that might also fit the bill called datafold/data-diff [3]. I'm pretty sure I originally heard of it on a HN thread so those comments may offer even more alternative solutions.

[0]: https://youtu.be/Md44rcw13k4?t=540 [the relevant timestamp is @ 9:00]

[1]: https://github.com/kriansa/dotfiles/blob/7a8c1b11b06378b8ca8...

[2]: https://github.com/kriansa/dotfiles/blob/7a8c1b11b06378b8ca8...

[3]: https://github.com/datafold/data-diff
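For reference, the recipe in the linked dotfiles boils down to roughly this (the driver name and glob are just conventions; check the links for the exact lines, and the sqlite3 CLI must be on PATH):

```
# .gitattributes: route *.sqlite files through a custom diff driver
*.sqlite diff=sqlite3

# .gitconfig: the driver dumps the database to SQL text before diffing
[diff "sqlite3"]
    textconv = "f() { sqlite3 \"$1\" .dump; }; f"
```

With that in place, `git diff` on a .sqlite file shows a readable SQL-level diff instead of "binary files differ".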

Somebody already said this, but we built exactly this and it's called Dolt.


Would love to hear how it addresses your use case or falls short.

Have you considered https://github.com/dolthub/dolt for your use case?

Real world example of this that we just experienced:

I work with a provider who offers CSV exports as the only way to access data. Recently, we found they were including unsanitized user input directly in fields. They weren't even quoting these fields.

The platform "notified their quality assurance team ASAP" (like every other issue, we never heard back), but we had a deadline. This, of course, was a mess, but being able to quickly open the file and fix quotes was all it took. I shudder at the thought of trying to salvage a corrupted binary with zero help from the holder of the data.

This sounds like a problem that wouldn’t have existed in the first place if following a binary protocol with a standard format and using a proper serialization library.

The issue comes from CSV files looking easy to generate by hand, when in fact they are not.

This is a decent point, but practically the platform uses some library in their ancient ASP application. The issue is that these types of things can't be fixed because the original author is gone and the tech debt has become unmanageable. This is not the only issue we've had, unfortunately.

Debugging this same issue in a binary format is far and away not going to happen in this scenario.

...but if it was a binary protocol that did have a problem, of any sort whatsoever, and you couldn't get the provider to address is (in time), then you're hosed if it's not human readable.

In my experience, human-readable file formats are a mistake. As soon as people can read a single file, they think that's the entire format and that it's okay to write it by hand or write their own code for it. And when everyone writes code based on just what they've personally seen of a format, everyone is sad.

This is why not a single piece of software on earth uses the CSV RFC. This is why people hand you CSVs that don't quote string fields with commas in them. This is why you find software that can't handle non-comma delimiters. This is why you find software that assumes any field made of digits is an integer and then crashes when it tries to do string operations on it. This is why you find software that can't be used unless you change your computer's locale, because the front end understands locales and uses commas for numbers, but the backend is running on a server, doesn't know what locales are, and now everything is broken.

This has happened for every single "human-readable" format in existence: HTML, Markdown, CSV, RTF, JSON, everything. I consider human readability to be a death knell for a format's chances of broad, easy interoperability. There are certainly other things that can doom a format (Parquet is almost too complex to implement and so only just barely works, for example), but I'll take an SQLite database over a CSV every single time.

I think a takeaway could also be not to give people options when designing a human-readable format. "You always need quotes; they're not optional" solves the comma problem. "The delimiter is always a comma" solves the delimiter problem. JSON has also fared better than CSV, I'd say.
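Python's csv module can approximate the "quotes are mandatory" rule with QUOTE_ALL, which also shows the RFC 4180 convention of doubling embedded quotes:

```python
import csv
import io

# Quote every field unconditionally; embedded quotes are doubled (RFC 4180).
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(["hello, world", 123, 'say "hi"'])
print(buf.getvalue())
# "hello, world","123","say ""hi"""
```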

That makes the effective delimiter "," (quote, comma, quote), which is ugly, so someone is just going to use a bare , instead and you are back to square one.

This is only the case because it was allowed from day 1. If no one had ever allowed it to begin with, it just wouldn't work. Of course this is speaking in hypotheticals, but my point is a more general one about specifications of human-readable formats. No one ever attempts to use strings without quotes in JSON, because then you would be incompatible with everything. There are compatibility issues, but they're far more subtle edge cases.

As a French person, I can add another problem with CSV.

In the French locale, the decimal point is the comma, so "121.5" is written "121,5". It means, of course, that the comma can't be used as a separator, so the semicolon is used instead.

It means that depending whether or not the tool that exports the CSV is localized or not, you get commas or you get semicolons. If you are lucky, the tool that imports it speaks the same language. If you are unlucky, it doesn't, but you can still convert it. If you are really unlucky, then you get commas for both decimal numbers and separators, making the file completely unusable.
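Reading the semicolon dialect is straightforward once you know that's what you have; the real trap is the locale-dependent decimal mark, which you have to convert yourself. A small sketch with invented data:

```python
import csv
import io

# French-style CSV: ';' separates fields, ',' is the decimal mark.
data = "produit;prix\npain;1,20\nvin;12,50\n"
rows = list(csv.reader(io.StringIO(data), delimiter=";"))

# Convert the decimal comma before parsing the numbers.
prices = [float(prix.replace(",", ".")) for _, prix in rows[1:]]
print(prices)  # [1.2, 12.5]
```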

There is a CSV standard, RFC 4180, but no one seems to care.

There are tools to convert between the formats. Either you have a defined data pipeline where you know what you get at each step and apply the necessary transformations. Or you get random files and, yes, have to inspect them and see how to convert them if necessary.

It’s unfortunate that there isn’t a single CSV format, but for historical reasons it is what it is. It’s effectively more like a family of formats that share the same file extension.

Excel actually has a convention where it understands when there is a line

  sep=;

at the start of the file.

By the way, in addition to differing separators, you can also get different character encodings.

Excel does understand a BOM to indicate UTF-8, but some versions of Excel unfortunately ignore it when the “sep=“ line is present…
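For anyone who hasn't seen it, writing a file with that hint looks like this (note it's an Excel-only extension, not part of RFC 4180, and as mentioned it can interfere with BOM detection in some versions):

```python
import csv

# Write a semicolon-delimited CSV with Excel's "sep=" hint as the first line.
with open("report.csv", "w", newline="", encoding="utf-8") as f:
    f.write("sep=;\n")
    w = csv.writer(f, delimiter=";")
    w.writerow(["name", "value"])
    w.writerow(["pain", "1,20"])  # the comma needs no quoting with ';' fields

with open("report.csv", encoding="utf-8") as f:
    print(f.readline(), end="")  # sep=;
```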


Thank you! This is a game changer. As I was reading through these comments I was thinking how much better it would be if the separator could be specified in the file, but it would only be useful for my own stuff, if I was to do that on my own.

I’ll be trying this first thing tomorrow at work. I don’t do as much with CSVs as I used to, but am currently working with them a lot, and have a single entry that always throws the resulting Excel file off, and I can’t be bothered to figure out how to escape it (I spent too much time in the past messing with that and got nowhere). This sep= line will hopefully solve my issues.

I have sometimes distributed a CSV file with a similarly-named text file that contains key-value pairs to aid in the use of the CSV file.

A minimal text file of this type would contain:


What kind of voodoo is this? I've always wanted something like this for non technical coworkers across countries but I didn't know it existed. I always just exported TSV and provided steps for how to import it (although I think most excel versions have native tsv support).

If I'm not mistaken this is pretty universal outside of the US (and maybe the UK).

Going by the Wikipedia article and included map, use of comma versus period as decimal separators is roughly an even split:



Seems geographically split, but I wonder what the actual population split is. Most of the top 10 most populous countries use the decimal point. Only Brazil, Russia and Indonesia don't.

Maybe someone with a CSV of the world populations and a CSV of the countries broken down by their separator can do that comparison.

There's definitely a big distribution disparity. 11 of the 15 most populous countries use the period for decimals.

Most of Europe is in the fat long tail though, as those countries are counted individually.

You are mistaken. Probably more countries overall use a decimal comma, but the decimal point is used as convention in many countries, including China, India, Nigeria and the Philippines.

Does any tool seriously localize CSVs?

Any serious CSV tool has the option to pick a delimiter, usually semicolon or comma; some offer additional options. The only impact it has is on which fields need quoting: with a comma delimiter, decimal numbers in many languages need to be quoted; with a semicolon, they don't.

Overall, semicolon feels like the superior delimiter.

Most sensible people don't export formatted numbers (e.g. 100.000,00), but even those are pretty trivial to import.

So many tools do that, even (most importantly) Excel does

In my experience, this is only a problem when you are using Excel. It's ridiculous how bad Excel is at handling CSVs, I really cannot comprehend it. If you use LibreOffice all your problems magically disappear.

I found out that Excel follows the Windows locale: you can change the decimal and thousands separators in Windows, and it will affect how Excel exports and reads CSVs.

Isn't this exactly what quoting solves?

i.e.:

  "1,20","2,3",hello
  "2,40","4,6",goodbye

If your tool reads CSV by doing `string_split(',', line);`, your tool is doing it wrong. There's a bunch of nuance and shit which can make CSVs interesting to work with, but storing a comma in a field is a pretty solved issue if the tool in question has had more than 5 minutes of thought put into it.
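Running data like the example above through a real parser shows the point; commas inside quotes come through untouched:

```python
import csv
import io

# A real CSV parser handles commas inside quoted fields without trouble.
data = '"1,20","2,3",hello\n"2,40","4,6",goodbye\n'
rows = list(csv.reader(io.StringIO(data)))
print(rows)
# [['1,20', '2,3', 'hello'], ['2,40', '4,6', 'goodbye']]
```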

Now all your numbers are strings

It's a text file. All your numbers were already strings. Nothing has changed.

There's a difference between "1" and 1. When you import a csv and try to do maths on a "number" you won't get the expected result. Some importers won't even allow you to specify that "number" columns are numbers, they'll outright fail and force you to say it's a string, or you'll have to specify which columns are "numbers" and map the strings to numbers on the importer side.

If they are numbers to begin with (not "numbers"), you can just import the csv and you'll get the expected result out of the box.

In the end things are just 1s and 0s, but that doesn't mean we only ever do binary operations on data at the abstraction layer we humans operate, so saying it's just 0s and 1s or just strings is not very smart.

Sounds to me like you're using a shitty parser. CSV is schemaless. It is up to you to tell the parser what types to use if it isn't unambiguous. Quoted values can be numbers, and unquoted values can be strings. I have not used any CSV tools that don't support this behaviour.

A shitty parser is one that assumes that if I quote a number, I want it to be a string.

But now all our strings might be numbers! We now have to parse every quoted string, and we can no longer represent numbers as text.

    unquoted input:
    0, 10, "Text", "123"

    unambiguous output:
    (Num) 0, (Num) 10, (Text) Text, (Text) 123

    quoted input:
    "0", "10", "Text", "123"

    output:
    (Num) 0, (Num) 10, (Text) Text, (Num) 123

Why would you quote Text in the first example? That makes no sense. Text does not contain any delimiters or special characters.

Unquoted input should look like this:

unquoted input: 0,10,Text,123

Note the absence of spaces as well. Not sure what flavour of CSV you are using, but there usually aren't spaces after the delimiter.

Fair point, but that doesn't really resolve the issue. Here's a cleaned up example showing the same problem:

    unquoted input:

    (Num) 0
    (Num) 10
    (Text) Text,
    (Text) 123

    quoted input:

    (Num) 0
    (Num) 10
    (Text) Text,
    (Num) 123

Same in German. We have things like 1.500.021,92 (I.e. 1,500,021.92)

If only a totally separate data field separator character had been invented early on and been given its own key on the keyboard, coloured: "only use for field delimiting". You know as well as I do that it would have been co-opted into another rôle within weeks, probably as a currency indicator.

You should probably use PSV - Point Separated Variable! Obviously we would need to adjust PSV to account for correct French word order (and actually use French correctly). Britain and USA would use something called PVS2 instead as a data interchange format with France which involves variable @@@ symbols as delimiters, unless it is a Friday which is undefined. The rest of the world would roll its eyes and use PSV with varying success. A few years later France would announce VSP on the back page of Le Monde, enshrine its use in law (in Aquitaine, during the fifteenth century) but not actually release the standard for fifteen years.

The world is odd. We have to work within an odd world.

Interestingly enough, you and I could look at a CSV encoded data set with commas as decimal separators and work out what is going on. You'll need a better parser!

Don’t look at the ASCII table, at entries 28 to 31. You are not mentally ready for what you will find there.

> ..what you will find there

Made me laugh, especially since you can’t “see” them.

In fact, that was maybe an oversight when ASCII was designed, but maybe there was a reason for it. If they were visible, and actually recognizable as separator types, then people would know them better.
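For the curious, those four control characters are the file, group, record, and unit separators. A sketch of using the last two the way the designers apparently intended:

```python
# ASCII control characters 28-31: file, group, record, and unit separators.
FS, GS, RS, US = "\x1c", "\x1d", "\x1e", "\x1f"

# Fields joined by the unit separator, records by the record separator:
# commas and newlines in the data need no escaping at all.
records = [["field with, commas", "and\nnewlines"], ["second", "row"]]
encoded = RS.join(US.join(fields) for fields in records)
decoded = [rec.split(US) for rec in encoded.split(RS)]
print(decoded == records)  # True
```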


My company has been using DSV for a bit: Dagger Separated Values. Unicode dagger (†) to separate values and double-dagger (‡) to indicate end of row. It allows us to easily handle both commas and newlines.

> In the French locale, the decimal point is the comma, so "121.5" is written "121,5". It means, of course, that the comma can't be used as a separator

Heh. Ah, HN. Always good for a laugh line.

Yeah, hon hon hon and all, but one of my (US) bank statements exports a CSV which uses commas in numbers in the US fashion, so $1,500 and the like. Writing a custom CSV munger to intake that into ledger-csv was... fun, but then again, only had to do it once.

Surely if they're putting commas in values they were quoting the values though?

They were not.

Fortunately the values are always prefixed by a dollar sign, making parsing deterministic, though ugly.

That's exactly my point, and I still got downvoted.

This place...

Of course if you only consider the disadvantages, something looks bad.

The advantages of CSV are pretty massive though - if you support CSV you support import and export into a massive variety of business tools, and there is probably some form of OOTB support.

This is the biggest win IME.

You have a (usually) portable transport format that can get the information into and out of an enormous variety of tools that do not necessarily require a software engineer in the middle.

I'm also struggling with such a quick dismissal of human readable formats. It's a huge feature.

What happens when there's a problem with a single CSV file in some pipeline that's been happily running fine for years? You can edit the thing and move on with your day. If the format isn't human readable, now you may have to make and push a software update to handle it.

Of course, CSV is a terrible format that can be horribly painful. No argument there.

But despite the pain, it's still far better than many alternatives. In many situations.

In a POSIX shell, I actually prefer to use the bell character for IFS.

  while IFS="$(printf \\a)" read -r field1 field2...
  do ...
This works just as well as anything outside the range of printing characters.

Getting records that contain newlines would be a bit trickier.

Heaven help you if you cat the file in a shell, though!

I think IFS=$'\a' works too.

Only in bash and possibly other shells that extend the POSIX syntax, not in the basic POSIX standard.

I seem to remember something about dash adding that functionality.

...I found it - the question is under review by the Austin group for inclusion in POSIX.


You can do tab-separated "CSV" and it'll be much better, avoid the quoting and delimiter issues that somehow trip up something half the time, and pretty much all these tools have always supported that format as well.
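A small stdlib sketch of that: the csv reader handles tab-separated data by just switching the delimiter, and embedded commas then need no quoting at all.

```python
import csv, io

# Tab-separated values with the stdlib csv module: same reader,
# different delimiter; the comma in the value is just a comma.
data = "name\tcity\nAda\tNew York, NY\n"
rows = list(csv.reader(io.StringIO(data), delimiter="\t"))
print(rows)  # [['name', 'city'], ['Ada', 'New York, NY']]
```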

If only the ASCII 31 "unit separator" were well supported

I also like that Preview can display CSV

The reason CSV is popular is because it is (1) super simple, and (2) the simplicity leads to ubiquity. It is extremely easy to add CSV export and import capability to a data tool, and that has come to mean that there are no data tools that don't support CSV format.

Parquet is the opposite of simple. Even when good libraries are available (which they usually aren't), it is painful to read a Parquet file. Try reading a Parquet file using Java and the Apache Parquet lib, for example.

Avro is similar. Last I checked there are two Avro libs for C# and each has its own issues.

Until there is a simple format that has ubiquitous libs in every language, CSV will continue to be the best format despite the issues caused by under-specification. Google Protobuf is a lot closer than Parquet or Avro. But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.

> Parquet is the opposite of simple. Even when good libraries are available (which they usually aren't), it is painful to read a Parquet file. Try reading a Parquet file using Java and the Apache Parquet lib, for example.

I skimmed their docs a bit: https://parquet.apache.org/docs/

I would not look forward to implementing that.

It all seems rather complex, and even worse: not actually all that well described. I suppose all the information is technically there, but it's really not a well-designed, well-written specification that's easy to implement. The documentation seems like an afterthought.

This is probably why good libraries are rare.

Thanks for the link. I couldn't even get past this part:

> Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem

Nope and nope.

Yeah, it all seems very application specific. I mean, for starters columnar storage isn't really appropriate for a lot of data. That's perfectly fine! Nothing wrong with any of this. Just means it's not a great candidate for a general application-agnostic data-exchange format.

> But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.

What? When I worked at Google, concatenating serialized protobuf strings was a common way to combine protobufs; they are absolutely splittable. People might not know it, but there is a reason they are designed like they are: it is to handle big data, as you say.

If you mean you can't split a single protobuf, sure, but you don't need to do that; it is like trying to split a single CSV line, which doesn't mean CSV isn't splittable.

Edit: Maybe some of that tooling to work with protobufs are internal only or not a part of the external protobuf packages though.
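For what it's worth, the usual way to make a stream of serialized messages concatenable and splittable is an explicit framing layer on top. A minimal sketch (protobuf itself defines no framing, so the 4-byte little-endian length prefix here is just one illustrative choice):

```python
import struct

# Length-prefixed framing: each payload is written as
# <4-byte little-endian length><payload bytes>.
def write_frames(payloads):
    out = bytearray()
    for p in payloads:
        out += struct.pack("<I", len(p)) + p
    return bytes(out)

def read_frames(buf):
    i, frames = 0, []
    while i < len(buf):
        (n,) = struct.unpack_from("<I", buf, i)
        frames.append(buf[i + 4 : i + 4 + n])
        i += 4 + n
    return frames

msgs = [b"\x08\x01", b"\x08\x02\x12\x03abc"]  # two fake message payloads
framed = write_frames(msgs)
print(read_frames(framed) == msgs)  # True
```

Note this still doesn't let a reader resynchronize from an arbitrary byte offset, which is the property sync markers (as in Avro) provide.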

> designed like they are

I laughed. We don't call somebody writing a poor varint serializer for in-house use, then discovering that it doesn't handle negative numbers and floats that well so slapping a couple of hotfixes on top of it that make it necessary to have a protocol specification file, "design".

Protobuf does not have built-in delimiters or sync markers between records, which makes it impossible to start reading from an arbitrary point in the middle of a Protobuf-encoded file and correctly interpret the data. That makes Protobuf not a splittable format.

> there are no data tools that don't support CSV format.

They support CSV but not your CSV.

For example, how does quoting work? Does quoting work?

I was working with a vendor’s csv recently…

They had never had a customer do X on Y field, so they never quoted it nor added code to quote it if needed.

Of course, we did X in one entry. Took me too long to find that which obviously messed up everything after.

Avro is a terrible serialization format. Well, not necessarily the spec but all tooling around it is. It should never be picked over multitude of other, better options unless the company is, maybe, a Java shop.

> Google Protobuf is a lot closer than Parquet or Avro. But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.

Eh, I don't think that's the problem. If that was the problem, there are a zillion ways to chunk files; .tar is probably the most ubiquitous but there are others.

The bigger problem is that Protobuf is way harder to use. Part of the reason CSV is underspecified is it's simple enough it feels, at first glance, like it doesn't need specification. Protobuf has enough dark corners that I definitely don't know all of it, despite having used it pretty extensively.

I think Unicode Separated Values (USV) is a much better alternative and as another poster mentioned, I hope it takes off.

Author here. I see now that the title is too controversial, I should have toned that down. As I mention in the conclusion, if you're giving parquet files to your user and all they want to know is how to turn it into Excel/CSV, you should just give them Excel/CSV. It is, after all, what end users often want. I'm going to edit the intro to make the same point there.

If you're exporting files for machine consumption, please consider using something more robust than CSV.

Well what would be a more accurate title? "CSV format should only be for external interchange or archival; columnar formats like Parquet or Arrow better for performance"?

People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)

There's also an implicit assumption to each use-case about whether the data can/should fit in memory or not, and how much RAM a typical machine would have.

As you mention, it's pretty standard to store and access compressed CSV files as .csv.zip or .csv.gz, which mitigates at least trading off the space issue for a performance overhead when extracting or searching.
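As a quick stdlib-only illustration, a gzipped CSV can be streamed directly, without unpacking to disk first:

```python
import csv, gzip, io

# Read a .csv.gz in one pass: gzip.open accepts an existing file object
# and text mode, so the csv reader can consume it directly.
raw = gzip.compress(b"a,b\n1,2\n")
with gzip.open(io.BytesIO(raw), mode="rt", newline="") as f:
    rows = list(csv.reader(f))
print(rows)  # [['a', 'b'], ['1', '2']]
```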

The historical reason a standard like CSV became so entrenched with business, financial and legal sectors is the same as other enterprise computing; it's not that users are ignorant; it's vendor and OS lock-in. Is there any tool/package that dynamically switches between formats internally? estimates comparative file sizes before writing? ("I see you're trying to write a 50Gb XLSX file...") estimates read time when opening a file? etc. Those sort of things seem worth mentioning.

> Well what would be a more accurate title? "CSV format should only be for external interchange or archival; columnar formats like Parquet or Arrow better for performance"?

Something more boring, like "Consider whether other options make more sense for your data exports than CSV". Plenty of people have suggested other good options in comments on this submission, such as, for example, sqlite. I think the post comes off as if I'm trying to sell a particular file format for all use cases, when what I had in mind when writing it was to discourage using CSV as a default. CSV has a place, certainly, but it offloads a lot of complexity on the people who are going to consume the data; in particular, they need to figure out how to interpret it. This can't necessarily be done by opening the file in an editor and looking at it; beyond a certain size you're going to need programming or great tools to inspect it anyway.

I was given an initial export of ~100 poor quality CSV files totaling around 3TB (~5-6 different tables, ~50 columns in each) in size a few years back, and had to automate ingestion of those and future exports. We could've saved a lot of work if the source was able to export data in a friendlier format. It happened more than once during that project that we were sent CSVs or Excel sheets that had mangled data, such as zip codes or phone numbers with leading 0s removed. I think it is a good thing to inform people of these problems and encourage the use of formats that don't necessitate guessing data types. :shrug:

> People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)

This might be an interesting thing to follow up later, but would require a lot more work.

> I see now that the title is too controversial, I should have toned that down.

Sometimes a click-baity title is what you need to get a decent conversation/debate going. Considering how many comments this thread got, I'd say you achieved that even if sparking a lengthy HN thread had never been your intent.

Congratulations on getting the article upvoted, and don't be too hard on yourself.

I got a parquet file once and I was like WTF is this format?

The problem with parquet is it's complicated and you basically have to remap from parquet to whatever you're importing into because the people on the other side have remapped from whatever to parquet.

There are likely relationships and constraints there that you'll have to hack around - which is harder to do because the parquet tools sort of suck/aren't as flexible.

With CSV you can hack around any problem in the ETL process.

Or export to CSV correctly and test with Excel and/or LibreOffice. Honestly, CSV is a very simple, well-defined format that is decades old and is "obvious". I've had far more trouble with various export-to-Excel functions over the years, which have much more complex third-party dependencies to function. Parsing CSV correctly is not hard, you just can't use split and be done with it. This has been my coding kata in every programming language I've touched since I was a teenager learning to code.

CSV is not well-defined. Data in the wild doesn't even agree that it's comma separated.

String encoding? Dates? Formatted numbers? Booleans (T/F/Y/N/etc)? Nested quotes? Nested CSV!?

How about intermediate systems that muck things up. String encoding going through a pipeline with a misconfiguration in the middle. Data with US dates pasted into UK Excel and converted back into CSV, so that the data is a mix of m/d/yy and d/m/yy depending on the magnitude of the numbers. Hand-munging of data in Excel generally, so that sometimes the data is misaligned WRT rows and columns.

I've seen things in CSV. I once wrote an expression language to help configure custom CSV import pipelines, because you'd need to iterate a predicate over the data to figure out which columns are which (the misalignment problem above).

If the use-case has these complexities, then other formats may be better. I'll go out on a limb and say that MOST data exports to CSV are simple column data where it works just fine; at least that's been my experience.

The problem is that it is simple until it's not, and then you have problems. You need to validate immediately after export, and work out what happens when that fails.

I agree about validating after export, which is a good practice. But if you know your use-case to be CSV-friendly, then it's a nice, simple, long-standing, almost universal format. Lots of pros with that. Using a more complex format for simple data may (or may not) save you issues with a rare edge case, but could cost you in other areas. Like a non-technical manager having no idea how to look at it.

CSV is fine if you control both ends (in which case it's worth asking why not use something else, but CSV is a totally valid choice). The problem is typically you don't, and what you expected to be simple ends up with lots of hacks until you realise that you want a more defined format (what that is depends on your field), which would have been easier to do if CSV wasn't there already.

Intermediate systems like Excel will break anything, they aren’t constrained to CSV. Excel screws up at the level of a cell value, not at the file format.

On the topic of nested CSV, three approaches:

- treat it as a join, and unroll by duplicating non-nested CSV data in separate rows for every element in the nested CSV

- treat it as a projection, have an extraction operator to project the cells you want

- treat it as text substitution problem; I've seen CSV files where every line of CSV was quoted like it was a single cell in a larger CSV row

You get nested CSV because upstream systems are often master/detail or XML but need to use CSV because everybody understands CSV because it's such a simple file format. Good stuff.

Seems more like a problem of the exporting and importing software and not the format.

CSV gives you the freedom of choice how to write the data.

You need the French date format? You can write the French date format. You need # as separator? You can use # as separator.

Sure, you can't read any file without knowing its specification, but that's not the use case of CSV.

CSV is for data transfer between systems which know each other.

Indeed. It's like 'text file' - there are many ways to encode and decode these.

Add another munging to the list: ids that 'look like numbers', e.g. `0002345`, will get converted to `2,345`. Better be sure to prepend ', i.e. `'0002345`.

Most of the problems come from treating CSV fields as something other than US-ASCII text strings. They're not dates, they're not numbers, they're just freeform text. If you want fancy-pants features like "numbers" or the ability to represent all common words in use in the US (e.g. the California town Rancho Peñasquitos) and aren't a masochist, you don't want CSV.

And “do you have a header row”? And null/nil/nul/blank specification/expectation?

Yeah but we're not in the dark ages of computers anymore. Export to Sqlite database instead.

Sure, you tell the finance industry that. They have systems, the systems already produce CSV. They can sign a contract worth multiples of your salary if you can consume it. Do you want the money or not?

In case it wasn't clear, I want software engineers in the finance industry to implement sqlite import and export in their various pieces of software, not to give up on lucrative, existing contracts, obviously.

The inertia around CSV in the finance industry is incredibly strong, and let’s just say some of the biggest industry players aren’t exactly renowned for their cutting-edge practices when it comes to data interchange formats.

For interchange between banks, XML is also very common, because it is usually accompanied with a schema doc.


There’s certainly XML. There’s also some very strange hand-rolled formats that are a right pain to parse.

The person in the fintech industry rarely decides, "oh, I'm going to add this format no one is asking for". They'll get a specification requiring CSV and start implementing it, get fired halfway through, and then someone on the other side of the planet will implement it incorrectly. The first 3 revisions won't meet the customer's needs, but the customer can't move away anyway.

Specifically, people in the FinTech industry should start asking for sqlite as an export format.

> Parsing CSV correctly is not hard, you just can’t use split and be done with it.

Parsing RFC-compliant CSVs and telling clients to go away with non-compliant CSVs is not hard.

Parsing real world CSVs reliably is simply impossible. The best you can do is heuristics.

How do you interpret this row of CSV data?

1,5,The quotation mark "" is used...,2021-1-1

What is the third column? The RFC says that it should just be literally

> The quotation mark "" is used...

But the reality is that some producers of CSVs, which you will be expected to support, will just blindly apply double quote escaping, and expect you to read:

The quotation mark " is used...

Or maybe you find a CSV producer in the wild (let's say... Spark: https://spark.apache.org/docs/latest/sql-data-sources-csv.ht...) that uses backslash escaping instead.
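To make the ambiguity concrete, here's a small Python sketch: the same logical value written by an RFC 4180-style producer and by a backslash-escaping producer, each needing a differently configured reader.

```python
import csv, io

# Two producers writing the same logical value, 'He said "hi"':
rfc_style = 'a,"He said ""hi""",b\n'       # RFC 4180: double the quotes
spark_style = 'a,"He said \\"hi\\"",b\n'   # backslash-escaped (e.g. Spark's default)

rfc_row = next(csv.reader(io.StringIO(rfc_style)))
spark_row = next(csv.reader(io.StringIO(spark_style),
                            escapechar="\\", doublequote=False))
print(rfc_row)    # ['a', 'He said "hi"', 'b']
print(spark_row)  # ['a', 'He said "hi"', 'b']
# Swap the readers and neither raises an error; you just silently
# get mangled field values.
```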

The RFC 4180 says:

5. Each field may or may not be enclosed in double quotes (however some programs, such as Microsoft Excel, do not use double quotes at all). If fields are not enclosed with double quotes, then double quotes may not appear inside the fields.

The point is your clients and partner firms don't actually care what RFC 4180 says, they just expect you to deal with whatever their CSV library spits out.

4180 is just one of many CSV variants. You can't simply examine a CSV and determine for sure which flavor of file you were given, you can only guess.

For added fun, last column should be 1-2-2021.

At least in that case you know that the last part is the year. It is much funnier when you encounter something like 3-4-17 and you don't know if it is d/m/y, m/d/y, or y/m/d.

Oh that's easy, it's Janreburary Firscond 2021.

The delimiter is a setting that can be changed. I never said hard code. This is an interesting exercise to give students, but I am struggling to look back in my mind through the last 25 years of line-of-business application development to find where any of this was intractable. My approach has been to leverage objects. I have a base class for a CsvWriter and CsvReader that does RFC-compliant work. I get the business stakeholders to provide samples of files they need to import or export. I look at those with my eyes and make subclasses as needed.

And data type inference is a fun side project. I worked for a while on a fully generic CSV to SQL converter. Basically you end up with regex matches for different formats, and you keep a running tally of errors encountered and then do a best fit for the column. Usually a single CSV file is consistent with itself for weird formatting induced by whatever process the other side used. It actually worked really well, even on multi-gigabyte CSV files from medical companies that one of my clients had to analyze with a standard set of SQL reports.
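A heavily simplified sketch of that regex-tally idea (the patterns and type names here are illustrative, not from any particular tool, and a real converter would score partial matches rather than require zero misses):

```python
import re

# Check each candidate type's pattern against every value in a column
# and pick the first type with no misses; otherwise fall back to text.
PATTERNS = {
    "int": re.compile(r"-?\d+$"),
    "float": re.compile(r"-?\d+\.\d+$"),
    "date": re.compile(r"\d{4}-\d{2}-\d{2}$"),
}

def infer_type(values):
    for name, pattern in PATTERNS.items():
        if all(pattern.match(v) for v in values):
            return name
    return "text"

print(infer_type(["1", "42", "-7"]))       # int
print(infer_type(["1.5", "2.25"]))         # float
print(infer_type(["2021-01-01", "oops"]))  # text
```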

Y'all caught me being less than rigorous in my language while posting from my phone while in the middle of making breakfast for the kids and the wife to get out the door. To the "not well defined" aspect, I disagree in part. There is an RFC for CSV and that is defined. Now the conflated part is the use of CSV as an interchange format between various systems and locales. The complexity of data exchange, in my mind, is not the fault of the simple CSV format. Nor is it the fault of the format that third party systems have decided to not implement it as it is defined. That is a different challenge.

Data exchange via CSV becomes a multiple party negotiation between software (sometimes the other side is "set in stone" such as a third party commercial system or Excel that is entrenched), users, and users' local configuration. However, while not well defined none of these are intractable from a software engineering point of view. And if your system is valuable enough and needs to ingest data from enough places it is NOT intractable to build auto detection capabilities for all the edge cases. I have done it. Is it sometimes a challenge, yes. Do project managers love it when you give 8 point estimates for what seems like it should be a simple thing, no. That is why the coding kata RFC compliance solution as a base first makes the most sense and then you troubleshoot as bug reports come in about data problems with various files.

If you are writing a commercial, enterprise solution that needs to get it right on its own every time, then that becomes a much bigger project.

But do you know what is impossible, getting the entire universe of other systems that your customers are using to support some new interchange format. Sorry, that system is no longer under active development. No one is around to make code changes. That is not compatible with the internal tools. For better or worse, CSV is the lingua franca of columns of data. As developers, we deal with it.

And yes, do support Excel XLSX format if you can. There are multiple third party libraries to do this in various languages of various quality.

As a developer, I have made a lot of my living dealing with this stuff. It can be a fun challenge, frustrating at time, but in the end as professionals we solve what we need to to get the job done.

Unfortunately that is only the case as long as you stay within the US. For non-US users, Excel has pretty annoying defaults, such as defaulting to ; instead of , as a separator for "CSV", or trouble because other locales and their Excel instances use , instead of . for decimal separators.

A nice alternative I've used often is to construct an Excel table and then give it an .xls extension, which Excel happily accepts and which requires much less user explanation than telling individual users how to get Excel to correctly parse a CSV.

Localization is a thing. None of this is a show stopper. Subclasses and configuration screen for import and export.

And then you get a business analyst at your client going "I just hit export in our internal tool, what's this delimiter that you're asking me about in the upload form? Google Sheets doesn't ask me to tell them that"

And how do you fix the “some analyst” problem? Is there a better format that reduces this problem?

JSON? RON? Protobuf? Cap'n'Proto? Anything with more types than "string" which is all CSV has, and with an unambiguous data encoding. Preferably also a way to transmit the schema, since all of these formats (including CSV) have a schema but don't necessarily include it in the output.

About half the problems with CSV are due to encoding ambiguities, the other half are due to schema mismatches.

> Unfortunately, that is only the case as long as you stay outside the US. For US users Excel has pretty annoying defaults, such as defaulting to , instead of ; as a separator for "CSV", or trouble because US instances of Excel use . instead of , for decimal separators.


* What does missing data look like? The empty string, NaN, 0, 1/1-1970, null, nil, NULL, \0?

* What date format will you need to parse? What does 5/5/12 mean?

* How has multiline data been written? Does it use quotation marks and properly escape those inside multiline strings, or does it just expect you to count delimiters? And by the way, can delimiters occur inside bare strings?

And let me add my own question here:

what is the actual delimiter? Do you support `,`, `;` and `\t`?

What is the encoding of the text file? UTF8, windows-1252?

What is the decimal delimiter “.”, “,”?

Most csv users don’t even know they have to be aware of all of these differences.

The main issue is that "CSV" isn't one format with a single schema. It's one format with thousands of schemas and no way to communicate them. Every program picks its own schema for CSVs it produces, some even change the schema depending on various factors (e.g. the presence or absence of a header row).

RFC 4180 provides a (mostly) unambiguous format for writing CSVs, but because it discards the (implied) schema it's useless for reading CSVs that come from other programs. RFC 4180 fields have only one type: text string in US-ASCII encoding. There are no dates, no decimal separators, no letters outside the US-ASCII alphabet, you get nothing! It leaves the option for the MIME type to specify a different text encoding, but that's not part of the resulting file so it's only useful when downloading from the internet.

> RFC 4180 provides a (mostly) unambiguous format for writing CSVs,

What are the ambiguities in RFC 4180?

It allows non-ASCII text but does not provide any way to indicate charset within the file, instead requiring it out-of-band. Once the file is saved, the text encoding becomes ambiguous. Likewise for the presence or absence of a header row.

Likewise for whether double quotes (`"`) are allowed in fields (rule 5). This one gets even worse, since the following rule (6) uses double quotes to escape line breaks and commas, but they may not be allowed at all so commas in fields may not be escapable.

It only supports text, not numbers, dates, or any other data, and provides no way to indicate any data type other than text.

One example that will kill loading a CSV in Excel, beyond the usual dates problem: if you open in Excel a CSV file that has some large id stored as int64, it will be converted to an Excel number (I suspect a double) and rounded. Also, if you have a text column where some of the codes are numeric with leading zeros, the leading zeros will be lost. And NULL is treated as the string "NULL".

I am aware you can import a csv file in excel by manually defining the column types but few people use that.

I'd be fine with an extension of the csv format with one extra top row to define the type of each column.

> aware you can import a csv file in excel by manually defining the column types but few people use that

And what fraction of those users would be able to do anything with another format?

> an extension of the csv format with one extra top row to define the type of each column

If the goal is foolproof export to Excel, use XLSX.

Excel will literally use a different value separator depending on the locale of the machine (if the decimal separator for numbers is a comma and not a dot, it’ll use a semicolon as a value separator instead of a comma).

I will hard disagree here. I have always had a CRLF issue or some other weirdness come up.

Especially if you work with teams from different countries, CSV is hell. I always generate RFC-compliant CSV; not once was it accepted from day one. Once, it took us two weeks to make it pass the ingestion process (we didn't have access to the ingest logs and had to request them each day, after the midnight processing), so in the end it was only 10 different tries, but still.

I once had an issue with JSON (well, not one created by me), and it was clearly a documentation mistake. And I hate JSON (I'm an XML proponent usually). CSV is terrible.

> Parsing CSV correctly is not hard

Parsing the CSV you have in front of you is not hard (usually). Writing a parser that will work for all forms of CSV you might encounter is significantly harder.

Excel and data precision are really at odds. But it might be interesting to see what would happen if excel shipped with parquet import and export capabilities

> Honestly CSV is a very simple, well defined format, that is decades old and is “obvious”

This is the problem though. Everyone thinks it is "obvious" and does their own broken implementation where they just concatenate values together with commas, and then outsources dealing with the garbage to whoever ends up with the file on their plate.

If a more complex, non-obvious format was required, instead of "easy I'll just concatenate values" they might actually decide to put engineering into it (or use a library)

> Parsing CSV correctly is not hard, you just can’t use split and be done with it.

And yet you can anyway if you are confident that your CSV won't contain anything that would mess it up.

CSV is so simple that if someone forgets to send you escaped CSV, you can and should simply smack them with a shoe.

> well defined format

