1) This is an appealing idea, but my claim is that there's no single serialization format that will work. (Or if there is one, it has yet to be invented.) More detail here:
There's nothing stopping anyone from using structured data over pipes, but I think it's a mistake to assume there will be or needs to be a "standard".
3) I agree that JSON over HTTP is very much in the vein of Unix. The REST architecture has a very large overlap with the Unix philosophy -- in particular, everything is a hierarchical namespace, and you have a limited number of verbs (GET / POST vs. read() / write() ).
Then I'm not sure how the problem would be solved. The main reason "everything should be plain text" is problematic (aside from being inefficient storage-wise) is that there's no standard format for that text. My reading of the article's criticism is that overreliance on tools like awk to re-parse ad-hoc text at every pipeline stage is the problem, not the solution.
Hence, the recommendation to just standardize on YAML (or some stricter subset thereof). If unstructured data is really needed in the pipeline, then it can easily be encapsulated in an ordinary YAML document. This would unify the strengths of the Unix way (ease of human inspection) and the PowerShell way (ease of plugging arbitrary tools together without needing to stick a bunch of text filters all over the place).
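To make the contrast concrete, here's a minimal sketch of that kind of pipeline in Python. It uses the stdlib `json` module rather than a YAML library, on the grounds that every JSON document is also valid YAML 1.2, so the same pattern carries over to a YAML-based toolchain; the record fields and threshold are made up for illustration.

```python
import io
import json

def produce(out):
    """Producer stage: emit structured records, one JSON document per line.
    (JSON is a subset of YAML 1.2, so a YAML pipeline works the same way.)"""
    for rec in [{"user": "alice", "bytes": 120}, {"user": "bob", "bytes": 300}]:
        out.write(json.dumps(rec) + "\n")

def filter_large(inp, out, threshold):
    """Filter stage: parse each record and keep those above the threshold.
    Fields are addressed by name -- no awk/cut-style text munging required."""
    for line in inp:
        rec = json.loads(line)
        if rec["bytes"] > threshold:
            out.write(json.dumps(rec) + "\n")

# Simulate a shell pipe with in-memory streams.
pipe = io.StringIO()
produce(pipe)
pipe.seek(0)
result = io.StringIO()
filter_large(pipe, result, 200)
print(result.getvalue().strip())  # only the "bob" record survives
```

The point of the sketch is that each stage agrees on the serialization, so stages compose like ordinary Unix filters while keeping named, typed fields, which is essentially what PowerShell gets by passing objects.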