
Protobuf, Serialization, JSON, etc. - CastleWood
I have been programming for more than 40 years and have seen many evolutionary and some disruptive changes in technology. While working on a large project for NASA in 1990, we determined, and all agreed, that data files (containing complex combinations of objects and primitive types) should be both human and machine readable. If they were entirely binary or cleverly condensed into some obscure form, we would have great difficulty detecting and tracing data errors when they occurred.

To implement human- and machine-readable data files, we created a utility that performed all of the necessary reading and writing, including procedures to read and write all conventional data types, objects with names for each field, delimited lists, fixed-length arrays, and necessary but uncommon data formats (such as UTC), and we added new data types and associated readers and writers as they became necessary.

I find it comical that JSON provides the means to read only a few delimiters and a few basic data types: identifiers, numbers, strings, etc. I also find it alarming when major players go to great lengths to create intermediate binary representations that are (generally) unreadable by the humans who may need to debug their data transfer applications. I categorize this as lazy: we will make it possible to read text enclosed within quotes, but leave it to the user to parse the text within the string. Likewise, Java's Scanner can read an entire line of text as a string, but leaves it to the programmer to parse the string to extract structured information.

I agree with one reader's comment that smart people can make great tools, but can also make tools that are unusable by others. C'est la vie. Those that control the machines of technical governance (Microsoft, Google, Oracle, etc.) are not accountable for the quality and usability of their tools. The rest of us are left with the challenge of using them.
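The complaint about quoted strings is easy to see with any stock JSON parser: a timestamp survives the parse only as an opaque string, and turning it into a structured value is left entirely to the caller. A minimal Python sketch (the record and field names here are invented for illustration):

```python
import json
from datetime import datetime

# JSON has no date/time type: the parser hands back a plain string,
# and structured interpretation is left to the application.
raw = '{"event": "launch", "when": "1990-04-24T12:33:51+00:00"}'
record = json.loads(raw)

print(type(record["when"]).__name__)  # the parser stops at 'str'

# The second parsing step is the user's problem.
when = datetime.fromisoformat(record["when"])
print(when.year)  # 1990
```

A serialization layer of the kind described above would instead register a UTC reader/writer once and hand back a structured value directly.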
======
PaulHoule
This is one of the biggest problems in software today.

The range of requirements for serialization is extreme.

Some people are moving a vector of 10 billion floats from one place to
another, and that is a job for binary. It is shocking how many floating-point
ops you can do (sometimes your whole calculation) in the time it takes to
convert ASCII to float and back. On top of that, it is brain-damaged that a
binary representation (power-of-2 denominator) can't exactly express ASCII
values such as 0.2, whose denominators have factors of something other than 2.
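The 0.2 point can be demonstrated in a few lines: the nearest IEEE 754 double to 0.2 is a fraction whose denominator is a power of two, so the decimal literal is already inexact before any serialization happens. A quick Python illustration:

```python
from decimal import Decimal
from fractions import Fraction

# The stored double is the nearest fraction with a power-of-2 denominator,
# not 1/5, so it cannot equal the decimal value 0.2 exactly.
print(Decimal(0.2))               # the true stored value, slightly above 0.2
print(Fraction(0.2).denominator)  # a power of two, not a multiple of 5

# The accumulated binary error shows up in plain arithmetic:
print(0.1 + 0.2 == 0.3)  # False
```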

Finance systems need to handle large numbers of credit card transactions, bank
deposits, and trades, and the cost of serialization and deserialization at the
central points of networks is stupendous.
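That cost difference is easy to make concrete. In this hedged sketch (the transaction layout is invented for illustration), the same toy record is encoded as fixed binary and as JSON; the text form is larger, and a real system would also pay the CPU cost of formatting and parsing it:

```python
import json
import struct

# A toy transaction: (account id, amount in cents, unix timestamp).
txn = (48291734, 1999, 1700000000)

# Fixed binary layout: unsigned 64-bit id, signed 64-bit cents,
# unsigned 64-bit timestamp -- always exactly 24 bytes.
binary = struct.pack("<QqQ", *txn)

# The same record as JSON, with field names repeated in every record.
text = json.dumps({"account": txn[0], "cents": txn[1], "ts": txn[2]}).encode()

print(len(binary), len(text))  # the JSON encoding is more than twice the size

# The binary form round-trips with no string parsing at all.
assert struct.unpack("<QqQ", binary) == txn
```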

Other applications don't care about speed so much. Early academic research in
remote procedure calls was highly concerned about performance -- they wanted
RPC to compete with local procedure calls.

RPC didn't take off in a big way until it was implemented over the web with
'performance doesn't matter' serialization such as XML and JSON. Once it took
off, we saw an explosion in binary serializations.

