I'm now only using Markdown when I absolutely have to and live in org-mode for everything else. And I've only just begun scratching the surface of org's features!
Personally, I disable most fonts and limit font sizes to improve my browsing experience.
Of course, there is also reader mode.
And what's more, what an absolute joy it is to find something published on the web where the text has good contrast, weight, and width.
I'm totally fed up with having to use the browser's developer tools or Firefox's reader mode because the text has font-weight 200/300, is slightly darker grey on a light grey background, and uses less than 30% of my screen width.
It's so sad that this is such a rare treat on the web these days.
Many "plain text" formats, like markdown or INI files or json, actually have very strict formatting requirements and character set constraints, but the value-add comes from a human's ability to examine the on-file-system object, examine it with well-known and reliable tools (grep, awk, text editor, etc.), figure out what it's supposed to mean, then feed it to the machine, and compare the machine's behavior with their expectations.
With non-human-readable data, this is much harder: you pretty much need a tool to convert the binary data to readable text before you can distinguish between "my program is broken" and "my program works but is getting bad input."
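To make that concrete: before you can even eyeball binary input, you need a dump tool. A minimal sketch in Python, imitating the output style of `hexdump -C` (the function name and layout here are just for illustration):

    import sys

    def hexdump(data: bytes, width: int = 16) -> None:
        # Render bytes as hex plus an ASCII gutter, hexdump -C style.
        for offset in range(0, len(data), width):
            chunk = data[offset:offset + width]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{offset:08x}  {hex_part:<{width * 3}} |{ascii_part}|")

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:
            hexdump(f.read())

With plain text you skip that step entirely and go straight to grep.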
Note that even structured ASCII can still make this hard: XML is nominally human-readable, but as a practical matter it often isn't.
ASCII is much simpler than Unicode encodings, to the point where text can even become an attack vector. A fully featured UTF-8 parsing and rendering engine is a sophisticated thing.
Does it matter whether one or the other is classified as text or binary? Not as much as it matters which requires the more complex code to process.
No, UTF-8 decoding is trivial; you can do it in a few dozen lines in just about any language (see the sketch below). It's Unicode itself that is a complex and moving target. But you can also just choose to implement a sane subset of Unicode for your application.
Recommended reading: http://cat-v.org/, https://github.com/cls/libutf
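To put a number on "a few dozen lines", here's a minimal validating UTF-8 decoder in Python -- a sketch to illustrate the scale of the problem, not a replacement for your language's built-in decoder or libutf:

    def decode_utf8(data: bytes) -> str:
        # Rejects overlong forms, UTF-16 surrogates, and code points > U+10FFFF.
        out, i = [], 0
        while i < len(data):
            b = data[i]
            if b < 0x80:                 # 1 byte: plain ASCII
                cp, n = b, 0
            elif 0xC2 <= b <= 0xDF:      # 2 bytes (0xC0/0xC1 are always overlong)
                cp, n = b & 0x1F, 1
            elif 0xE0 <= b <= 0xEF:      # 3 bytes
                cp, n = b & 0x0F, 2
            elif 0xF0 <= b <= 0xF4:      # 4 bytes (0xF4 is the last lead for U+10FFFF)
                cp, n = b & 0x07, 3
            else:
                raise ValueError(f"invalid lead byte {b:#x} at offset {i}")
            for j in range(1, n + 1):
                if i + j >= len(data) or data[i + j] & 0xC0 != 0x80:
                    raise ValueError(f"bad continuation byte at offset {i + j}")
                cp = (cp << 6) | (data[i + j] & 0x3F)
            if (n == 1 and cp < 0x80) or (n == 2 and cp < 0x800) or (n == 3 and cp < 0x10000):
                raise ValueError(f"overlong encoding at offset {i}")
            if 0xD800 <= cp <= 0xDFFF or cp > 0x10FFFF:
                raise ValueError(f"invalid code point {cp:#x} at offset {i}")
            out.append(chr(cp))
            i += n + 1
        return "".join(out)

That's the whole encoding layer. Normalization, grapheme segmentation, bidi, and rendering are where the real, moving-target complexity lives.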
Encoding a number in binary takes 4 or 8 bytes. Encoding it in plain text (ASCII or Unicode) takes as many bytes as there are digits, plus one for the sign or decimal separator. If you're arguing about ASCII versus Unicode, you're no longer arguing about text versus binary.
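A quick worked example (Python's struct module standing in for "binary"): a ten-digit number costs 10 bytes as text but only 4 as a fixed-width integer -- and the comparison flips for small numbers.

    import struct

    n = 1234567890
    as_binary = struct.pack("<i", n)        # fixed size: 4 bytes, 32-bit little-endian
    as_text = str(n).encode("ascii")        # variable size: one byte per digit

    print(len(as_binary), as_binary.hex())  # 4 d2029649
    print(len(as_text), as_text)            # 10 b'1234567890'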
> ASCII is much simpler than Unicode encodings
I don't disagree with you, but Unicode is not binary.
> Does it matter whether one or the other is classified as text or binary?
They're both text.
I also think that plain text is a bad idea; binary files are much easier to parse.
And yet, everyone agrees there is such a thing, and has no problem telling it apart from other formats, even if they're all 0s and 1s underneath.
Still, missing the point.
The difference between plain text formats and binary formats is not that plain text files don't consist of bytes or don't need an encoding to be read.
It's that you can work on them with a plain text editor, and that they're based on actual written text -- as opposed to packed bytes in custom (proprietary or not) formats.
Plain text is anything but plain.
And I'm just saying it's pedantry. 'plain text' is an umbrella term for 'not binary'.
> I also think that plain text is a bad idea; binary files are much easier to parse.
Binary files are definitely faster and more compact. And tools such as Google Protocol Buffers make passing information around very convenient and efficient. Unfortunately, most APIs out there use JSON, so we just have to live with it.
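For a rough sense of the tradeoff without pulling in protobuf, here's the same made-up record serialized both ways using only the standard library (struct stands in for a schema-based binary format; the field layout is invented for illustration):

    import json
    import struct

    # Hypothetical sensor reading: 32-bit id, 64-bit timestamp, 64-bit float.
    record = {"id": 42, "ts": 1700000000, "value": 21.5}

    as_json = json.dumps(record).encode("utf-8")
    as_binary = struct.pack("<iqd", record["id"], record["ts"], record["value"])

    print(len(as_json))    # 43 bytes -- self-describing and greppable
    print(len(as_binary))  # 20 bytes -- compact, but opaque without the "<iqd" layout

The binary form is less than half the size, but you need the layout in hand to read it at all -- which is exactly the tradeoff being argued about here.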
Maybe the widespread adoption of JSON and plain-text APIs is a reflection of how we, as developers, have become more likely to optimize for our own development process than for the actual hardware (see the whole Electron craze).