This reminds me a lot of GStreamer. If you are not familiar with it, it is a decades-old media platform that is still used extensively today in everything from DVD / Music players to deep learning on NVIDIA and Intel hardware. It's a swiss army knife of time-based media.
It has this really cool concept where elements can connect together in directed acyclic graphs to create amazingly complex pipelines that are all synchronized and buffered. A trivial example: connecting an RTSP source to an MP4 decoder to an image resizer and out to JPEG images. Each pad defines the formats it speaks (like RGB or YUV), called capabilities, and GStreamer matches up the pad capabilities transparently to the user. There's no need to explicitly tell the pipeline manager how to connect two pads; it will automatically pick a shared capability. Of course, the caps can be forced explicitly, but they don't need to be.
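A minimal sketch of that trivial example using GStreamer's Python bindings (the RTSP URL and output filename pattern are placeholders). Note that nothing here declares the pixel formats flowing between decodebin, videoscale and jpegenc; caps negotiation picks them, and the one caps filter in the middle is the "forced explicitly" case:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Only the elements and their order are named; the formats on each pad
    # (RGB vs. YUV, frame sizes, etc.) are negotiated between the pads.
    # The video/x-raw caps filter is the one place a format is forced.
    pipeline = Gst.parse_launch(
        "rtspsrc location=rtsp://example.local/stream ! decodebin "
        "! videoconvert ! videoscale ! video/x-raw,width=640,height=360 "
        "! jpegenc ! multifilesink location=frame-%05d.jpg"
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()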
I wonder if executables that are pipe-friendly could expose formats in much the same way GStreamer pads do, instead of your ".typed" properties file. It wouldn't require explicitly coercing the pipe, but it also wouldn't be a type; it would be something more flexible...
Huh, I was gonna nitpick about this phrase, but then I checked and... look at that, GStreamer turns (turned?) 20 this year. So, technically correct! The best kind of correct!
See my post elsewhere here that begins: What if every source or sink had a MIME type?
Extend the concept of GStreamer out to the shell, filesystem and command line tools. You could still process sound and video this way. Tools would indicate what type of data they emit, and other tools could check the incoming type of data to be sure it is something they can handle. (eg, you can't pipe a PNG file into a tool expecting JSON.)
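As a toy illustration of the consuming side of that check, a tool expecting JSON could at least refuse obviously wrong input. Sniffing magic bytes here stands in for whatever type signal the shell would actually pass along:

    import json, sys

    data = sys.stdin.buffer.read()

    # PNG files begin with a fixed 8-byte signature; JSON never does.
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        sys.exit("error: expected JSON on stdin, got a PNG image")

    doc = json.loads(data)  # still fails loudly on any other non-JSON input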
I wonder if a different approach could work: have something conceptually like the @types/ libraries for JavaScript (which add type definitions to packages that don't have them). In this case, the @types are parsers to and from normalized JSON and schemas for each command. You need a new shell now, but it could be a simple layer on top of existing ones. Let's call it tbash. tbash will check each command to see if it's typed, and handle invoking the type conversions automatically, falling back to byte streams if there is no type. It would also handle raising errors.
I'm probably missing a lot of edge cases here, but I think it could work.
The main difference in this approach is that the shell takes on most of the work; the commands themselves can remain unchanged. They just need an @typed wrapper and some indication of what types they can accept (TypeScript's structural typing could also give guidance here).
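Very roughly, and with every name below made up for illustration, an @typed wrapper could be little more than a manifest plus to/from-JSON converters that tbash consults before wiring up a pipe, falling back to raw bytes when nothing is declared:

    import json, shutil, subprocess

    # Hypothetical manifest an @typed package might ship for a command.
    LS_TYPED = {
        "accepts": [],                   # reads nothing on stdin
        "emits": "application/json",     # after conversion
        "to_json": ["ls-to-json"],       # converter executable (made up)
    }

    def run_typed(cmd, manifest=None, stdin_bytes=b""):
        """Run cmd; if a manifest declares a converter, normalize its output
        to JSON. Otherwise fall back to plain byte streams, as today."""
        out = subprocess.run(cmd, input=stdin_bytes,
                             capture_output=True, check=True).stdout
        if manifest and manifest.get("to_json"):
            conv = manifest["to_json"]
            if shutil.which(conv[0]) is None:
                raise RuntimeError(f"converter {conv[0]} is not installed")
            out = subprocess.run(conv, input=out,
                                 capture_output=True, check=True).stdout
            json.loads(out)  # raise early if the converter didn't emit JSON
        return out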
A couple weeks ago, `david_chrisnall` on lobste.rs talked about pipe content negotiation [0]. Here is the relevant paragraph:
"""
Pipe content negotiation is a simple protocol modeled on the OpenStep drag-and-drop protocol, implemented as three ioctls. The sender issues a blocking ioctl to advertise a set of things that it can produce, the [receiver] issues a blocking ioctl to receive this list and a second (non-blocking) one to select the one that it wants to use. If the sender does a write before content negotiation, the receiver’s ioctl unblocks and returns an error; similarly, if the receiver does a read before completing content negotiation, the sender unblocks and returns an error. This lets you establish a pipe between two processes that can negotiate a content type but also fall back to whatever your default was if neither end supported it.
"""
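Purely to illustrate the shape of that handshake, here is what caller code might look like if such ioctls existed. The request codes, buffer layout, and type strings below are all invented for the sketch; the actual constants from that system aren't given here:

    import fcntl

    # Invented ioctl request codes standing in for the real ones.
    PIPE_OFFER_TYPES = 0x8001   # sender: advertise what it can produce (blocks)
    PIPE_FETCH_OFFER = 0x8002   # receiver: fetch the advertised list (blocks)
    PIPE_SELECT_TYPE = 0x8003   # receiver: pick one (non-blocking)

    def negotiate_as_sender(pipe_w):
        offer = b"application/json\0text/plain\0"
        try:
            fcntl.ioctl(pipe_w, PIPE_OFFER_TYPES, offer)  # unblocks once the peer selects
        except OSError:
            return None   # peer read first or can't negotiate: fall back to default bytes
        # ...then write() in whichever format was selected

    def negotiate_as_receiver(pipe_r):
        buf = bytearray(256)
        try:
            fcntl.ioctl(pipe_r, PIPE_FETCH_OFFER, buf)    # blocks until an offer arrives
        except OSError:
            return None   # peer wrote first: fall back to default bytes
        offered = bytes(buf).rstrip(b"\0").split(b"\0")
        choice = b"application/json" if b"application/json" in offered else offered[0]
        fcntl.ioctl(pipe_r, PIPE_SELECT_TYPE, choice)
        return choice     # ...then read() and parse according to the choice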
All files have a MIME type. eg, this file is PNG, or PDF, or TSV, CSV, TXT, JSON, XML, audio, video, etc.
Just like the clipboard can contain multiple (MIME type) representations of something you copy to the clipboard; a tool could offer or emit multiple MIME types of the data it is writing. Similarly a tool consuming input data could check that the data type coming in is compatible with a format it will accept. (eg, can't pipe a PNG file to a tool expecting to see JSON.)
Next step: have a system registry for conversion pipe tools.
eg. GPS data. You write a tool that takes in application/gpx+xml, and gpsbabel is registered as a conversion tool, so you can just pipe a FIT file in. Of course, there are conversion-lossiness issues to be dealt with, but those can then be decided by your tool.
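If such a registry existed, the shell (or a helper library) could resolve conversions mechanically. A toy sketch, where the registry contents, the FIT MIME type, and the exact gpsbabel invocation are all assumptions for illustration:

    import subprocess

    # Hypothetical registry: (from_type, to_type) -> command converting stdin to stdout.
    CONVERTERS = {
        ("application/vnd.ant.fit", "application/gpx+xml"):
            ["gpsbabel", "-i", "garmin_fit", "-f", "-", "-o", "gpx", "-F", "-"],
    }

    def adapt(data: bytes, from_type: str, to_type: str) -> bytes:
        """Return data converted to to_type, splicing a registered converter
        into the pipeline when the types don't already match."""
        if from_type == to_type:
            return data
        cmd = CONVERTERS.get((from_type, to_type))
        if cmd is None:
            raise TypeError(f"no registered conversion from {from_type} to {to_type}")
        return subprocess.run(cmd, input=data, capture_output=True, check=True).stdout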
If there is such a registry, it should be completely automated. It should be possible for the shell you use to observe all possible executables available on your path. The executable files should have some sort of "manifest" with information that the software developer can declare. That manifest could include version, company, and things like input and output MIME types.
The shell would be able to offer exact compatible autocomplete suggestions when constructing a pipe, so that only tools compatible with the MIME type output by the most recent tool in the pipeline are offered.
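As a sketch of how that autocomplete filter could work once manifests exist (the manifest format and entries here are entirely made up):

    # Hypothetical manifests a shell could harvest from executables on $PATH.
    MANIFESTS = {
        "jq":        {"in": ["application/json"], "out": ["application/json", "text/plain"]},
        "csvtojson": {"in": ["text/csv"],         "out": ["application/json"]},
        "exiftool":  {"in": ["image/png", "image/jpeg"], "out": ["application/json"]},
    }

    def suggest_next(previous_tool: str) -> list[str]:
        """Offer only tools whose accepted inputs overlap the previous tool's outputs."""
        emitted = set(MANIFESTS.get(previous_tool, {}).get("out", []))
        return sorted(name for name, m in MANIFESTS.items() if emitted & set(m["in"]))

    # e.g. suggest_next("csvtojson") -> ["jq"]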
Interesting. I've always wondered why there isn't an additional in/out fd pair for communicating with the previous/next program in a chain of pipes, e.g.:
foo | bar
STDOUT --> STDIN
ALTIN <-- ALTOUT
You could detect it by looking for specially numbered fds that are open at program launch, or maybe an environment variable.
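A sketch of what that detection might look like (the fd numbers 3/4 and the ALT_IN/ALT_OUT variable names are inventions for the sake of the example; no such convention exists):

    import os

    # Invented convention: side channel on fds 3 (read) / 4 (write),
    # optionally advertised via ALT_IN / ALT_OUT environment variables.
    def find_alt_channel():
        alt_in = int(os.environ.get("ALT_IN", "3"))
        alt_out = int(os.environ.get("ALT_OUT", "4"))
        try:
            os.fstat(alt_in)    # raises OSError if the fd isn't open
            os.fstat(alt_out)
        except OSError:
            return None         # no side channel: behave like a plain pipe filter
        return alt_in, alt_out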
It would make pipes, the shell, and troubleshooting programs an order of magnitude more complex.
What if you want to inspect the ALT flow? How do you write logic that concurrently reads and writes data without blocking or deadlocking, while still getting decent performance? What's the terminating condition? And is it necessary at all? Pipes describe transformation of data; bidirectional pipes imply transformation of the agent/process itself, not only of the data, which flows in one direction.
Make it two orders of magnitude more complex.
I mean, it's possible, but unidirectional pipes are much easier to grok and pretty excellent already.
Text generally needs to be formatted somehow; after all, it does represent data. So how do you format the text? You can use newline-delimited values if your data is simple enough. This is great for things like iterating over a set of files, as each file is just a path separated by a newline.
What about more complicated data, though? Maybe you format the text as JSON? You can still use text-parsing tools like grep with it, but you can also use more specific processing tools like jq.
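From the consuming side, the two conventions look roughly like this (a trivial sketch):

    import json, sys

    raw = sys.stdin.buffer.read().decode()

    # Newline-delimited: one opaque value per line, e.g. paths from `find`.
    paths = [line for line in raw.splitlines() if line]

    # JSON: nested structure survives the pipe, at the cost of a real parser.
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        records = None  # not JSON; fall back to line-oriented handling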
"Just text" is actually pretty complicated, a CSV and a JSON file are both "just text". You've got to have some structure somewhere, even if only by convention.
Sure, but a program like jq will essentially throw a type error if you try to pipe it stuff that isn't in a format it can do something with. Generally speaking you're not going to get silent errors when you try to extract formatted data from plain text content.
And at the point that you need types, maybe the shell isn't the place to be doing that data manipulation?