POSIX is stuck in the days when UNIX software meant plain CLI tools or daemons, with keyboards and teletypes as the devices.
On a more serious note, this is not happening because POSIX is bad or irrelevant but because people a) don't know about it and b) even if they did, reinventing the wheel is more fun.
For me the good old days were the time spent with the Amiga 500, discovering the world of Smalltalk, Oberon and all the Xerox PARC research and other pioneers.
I got into UNIX via Xenix, and used almost every commercial flavour of it, but don't consider it the good old days.
The fact that POSIX is stuck in a PDP-11 world is proof that no big player in the industry, with the power to drive POSIX forward, sees it as a relevant OS API for anything besides writing daemons and CLI applications accessed via SSH.
If you want proof, just look at the systemd fiasco. Red Hat's way or the highway.
What kind of proof is this? systemd is the new standard across all major distros (except for stuff like Gentoo or Slackware, which is irrelevant in the targeted enterprise market anyway). And in the process, systemd steam-rolled over lots and lots of bizarre inconsistencies between distros.
If that's Red Hat's version of EEE, I'm very happy with it.
Things started picking up around the time Linux appeared, though that age of goodness started getting stale around 2006 (maybe because Linux started to get commercialized? Or maybe because everyone started getting interested in handheld devices for their 'fun' programming?)
Standards probably shouldn't sit still, especially if they hew more closely towards existing practice than ideal practice. I think POSIX is overdue for an update.
As an aside though, did people ever voluntarily use csh? Aside from committed masochists, that is.
POSIX is being updated constantly. It is just not as noticeable because POSIX is, as you say,
> an attempt to codify existing practice amongst Unix vendors, picking the best of the most widely supported features, as opposed to actually trying to come up with a good standard.
Yes, for an interactive shell, back when the choice was between csh that had command-line history and sh that didn't.
This is just one example of where the old ways aren't necessarily the best. I'm fairly confident we could come up with something better than POSIX if we were willing to make the effort.
please, no. The OOP approach means that my process needs to know how to communicate with other processes via very specific protocols, which aren't well defined (any process could define its own object types). This is the exact opposite of flexibility. The usefulness of tools like grep or sed would drop drastically, and we would fall back to big blobs of software.
Since I'm just passing data, do I really need the behavior attached to it?
How would state persistence be handled?
The text approach (bytes interpreted as ASCII up to a line ending) may seem ugly and dirty, but in fact you can pass lists, tuples, maps, trees or just text, and it is up to the receiver to make sense of the data.
Need data from A but B can't understand it? Use C (which operates on text) to format A's output as B needs. With objects, how many "translators" (C in the example) would you need to achieve the same result?
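As a minimal sketch of the kind of "translator" C described above (assuming, hypothetically, that A emits `key=value` lines and B expects tab-separated fields):

```python
import sys

def translate(line: str) -> str:
    """Turn A's 'pid=123 cmd=vim' style line into the TSV that B expects."""
    fields = dict(part.split("=", 1) for part in line.split())
    return "\t".join([fields.get("pid", ""), fields.get("cmd", "")])

if __name__ == "__main__":
    # Acts as a plain Unix filter: A | translate.py | B
    for line in sys.stdin:
        if line.strip():
            print(translate(line.strip()))
```

The point being that because everything is text, one small filter sitting in the pipe is enough; neither A nor B has to know the other exists.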
That's why we have IDL -- interface description language for RPC calls. An IDL-to-X (usually C) compiler generates the necessary glue so that anybody can talk to the program in question. IDL is also the basis of MS COM, which I quite like from the design standpoint.
> The usefulness of tools like grep or sed would drop drastically, and we would fall back to big blobs of software.
So you teach grep to take an IDL file, invoke an IDL compiler and dynamically load the parser for the protocol in question. Also, if the broker were a standardized, perhaps in-kernel component (dbus, kdbus), you could attach "idlgrep" to any process to trace its calls. You wouldn't be restricted to pipes.
> Since I'm just passing data, do I really need the behavior attached to it? How would state persistence be handled?
The parent's wording was a bit unfortunate. You can have interfaces and interface inheritance and versioning (the "OOP" part), but there's no behavior sent between processes.
> With objects, how many "translators" (C in the example) would you need to achieve the same result?
Exactly one: the IDL compiler.
So you introduce a huge load of accidental complexity because the text interface has some perceived inefficiency? I'll choose simplicity and accessibility over this mess anytime.
It's not as hard as you make out. Take a look at PowerShell for an idea of how an OOP-based approach to CLIs can work.
> and it is up to the receiver to make sense of the data
That is exactly the same thing.
Either way you're throwing bytes from one process to another and hoping the second one can do something useful with it.
In a world where things are "fixed" by just blowing away a VM and spinning up a new one, you might not care but in that case why bother to log anything at all...?
This comes down to a failure of the binary design. It's possible to design a binary format uniform enough to survive the truncation of a few bytes. To give one example, you have a header with pointers to the start and end of each data block, and you can keep multiple copies of this header. An alternative is to specify that each block records where its own data ends. That way, even with partial corruption, you can still read the uncorrupted data blocks.
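The second alternative can be sketched in a few lines: each block carries its own length, so a reader can recover every complete block even if the file is cut short. (The `LOG1` magic and the little-endian `u32` length prefix are illustrative choices, not any real format.)

```python
import struct

MAGIC = b"LOG1"

def pack_blocks(blocks):
    """Serialize blocks as: magic header, then [u32 length][payload] per block."""
    out = [MAGIC]
    for b in blocks:
        out.append(struct.pack("<I", len(b)))
        out.append(b)
    return b"".join(out)

def read_blocks(data):
    """Recover every complete block; a truncated tail is simply skipped."""
    if data[:4] != MAGIC:
        return []
    blocks, pos = [], 4
    while pos + 4 <= len(data):
        (length,) = struct.unpack_from("<I", data, pos)
        pos += 4
        if pos + length > len(data):
            break  # truncated final block: ignore it, keep the rest
        blocks.append(data[pos:pos + length])
        pos += length
    return blocks
```

Chop the last few bytes off such a file and `read_blocks` still returns all the blocks that made it to disk intact.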
POSIX is not Unix.
Passing data around as text is a tenet of the Unix philosophy, while AFAIK POSIX doesn't mandate any of that.
"This is just one example of where the old ways aren't necessarily the best."
The lowest hanging fruit gets picked. "Let's dump absolutely everything and NIH the whole thing for a one-time 5% performance increase" doesn't sell well when that one-time gain is dwarfed by the time it takes hardware or network capacity to improve 5%, or by fixing poorly scaling algorithms. Also insert the usual analogy of the cost of microscopically faster hardware vs the labor cost of extremely expensive rockstar ninja programmers.
Also, it's assumed that change will lead to improvement because of anecdote, or because change is always good. However, "the thing that won uses text, so naturally we gotta get rid of text" doesn't sound like a wise plan.
I'd argue the Unix command line ecosystem has only 'won' in the sense that many developers are familiar with it. I don't think it is technically the best we could do if we were starting from a blank slate.
Now look at what happens on Windows. Devops can use PowerShell to hack together something quickly, but if they want to do something more complex, they can easily add new PowerShell commands and data types, because it's all based on .NET: you can pull in any .NET code you want, whether that's something you write yourself or an existing library.
You could do the same with Bash, but people don't do it as often. I would suggest this is because it would tend to rely on plain text and regular expressions, and programmers tend to prefer better-specified data types.
All more powerful than UNIX shell languages while using structured data.
Coming from the Spectrum, MS-DOS and the Amiga, the UNIX shell seemed powerful until I discovered what a development workstation should look like, through the eyes of Xerox PARC.
Amiga REXX should have militated against that view of Unix shells slightly.
OO on its own is slower to parse than plain text, but binary is faster to parse than plain text. If you combine the two, you can get the best of both worlds: something that's both reliable and fast.
As one silly example, you can consider Google's protocol buffers. They are not particularly OO-y. (I have great sympathies for functional languages, but I don't think they offer too much insight into how to format your data files. And neither does OO?)
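To make the text-vs-binary parsing contrast concrete, here is a toy sketch (using Python's `struct` module rather than protobuf itself; the field layout is made up for illustration): the same record as a comma-separated line and as a fixed-width little-endian int32 + float64 pair.

```python
import struct

record = (1234, 98.6)

# Text form: human-readable, but parsing means splitting and converting.
text = f"{record[0]},{record[1]}\n"

# Binary form: fixed layout, decoded in a single unpack call.
binary = struct.pack("<id", *record)

def parse_text(s):
    a, b = s.strip().split(",")
    return int(a), float(b)

def parse_binary(buf):
    return struct.unpack("<id", buf)
```

Neither form is object-oriented; the reliability comes from agreeing on the layout, which is exactly what a schema like protobuf's pins down.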
Here is our project site: https://columbia.github.io/libtrack/
and our GitHub repo: https://github.com/columbia/libtrack