The claim was that text-based tools are slower; right now that is clearly not the case. Of course, a comparison between the current implementation of PowerShell and Unix tools is not quite fair: they only had twelve years, after all, and Unix had ~forty. I would give them another three decades to catch up.
Compiled code generally runs faster than plain-text interpreted code, even with no compiler optimizations. Why would these tools be different? The whole point was that text parsing is an additional overhead that will make things slower. That's trivially true.
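A toy C sketch of what that overhead looks like (my own illustration; the file names are made up): summing a stream of integers stored as decimal text versus the same integers stored as fixed-width binary. The text route pays a parse per value; the binary route is essentially a bulk copy.

    /* Toy sketch: per-value text parsing vs. bulk binary reads.
       Illustrative only; real pipelines have many more variables. */
    #include <stdio.h>

    int main(void) {
        long sum = 0, v;

        /* Text route: every value costs a scan/convert step. */
        FILE *txt = fopen("numbers.txt", "r");   /* hypothetical input */
        while (txt && fscanf(txt, "%ld", &v) == 1)
            sum += v;                            /* parse per value */
        if (txt) fclose(txt);

        /* Binary route: one bulk read per chunk, no per-value parsing. */
        FILE *bin = fopen("numbers.bin", "rb");  /* hypothetical input */
        long buf[4096];
        size_t n;
        while (bin && (n = fread(buf, sizeof buf[0], 4096, bin)) > 0)
            for (size_t i = 0; i < n; i++)
                sum += buf[i];
        if (bin) fclose(bin);

        printf("%ld\n", sum);
        return 0;
    }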
How does PowerShell pass objects from one process to another? Presumably there's a serialization/deserialization step involved? That could easily be slower than copying text around.
There doesn't have to be. It could just pass references to memory addresses of the object, or even just a copy of the raw binary data. If both the sender and the receiver understand the object's class, then no serialization/deserialization is necessary.
> It could just pass references to memory addresses of the object
How? They're separate processes with separate address spaces. If it's raw binary data, it at least has to be copied, and for complex objects that will often be a lot less efficient than just copying text. You can't just memcpy a complex .NET object anyway.
There's nothing stopping Unix programs from piping raw binary data; it's just not generally done, because it's usually a bad idea.
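The mechanics are trivial, for what it's worth. Here's a self-contained sketch of both ends of a binary pipe in one program (the name pipedemo and the struct are invented): it only works when both ends share the same struct layout, padding, and endianness, which is exactly the fragility that makes it a bad idea between unrelated tools.

    /* Sketch: piping raw structs between Unix processes, e.g.
         ./pipedemo w | ./pipedemo
       Any argument selects writer mode; no argument means reader. */
    #include <stdio.h>

    struct sample { int id; double value; };

    int main(int argc, char **argv) {
        struct sample s;
        if (argc > 1) {                      /* writer: raw structs to stdout */
            for (int i = 0; i < 3; i++) {
                s.id = i;
                s.value = i * 1.5;
                fwrite(&s, sizeof s, 1, stdout);
            }
        } else {                             /* reader: raw structs from stdin */
            while (fread(&s, sizeof s, 1, stdin) == 1)
                fprintf(stderr, "id=%d value=%g\n", s.id, s.value);
        }
        return 0;
    }

Nothing in the kernel cares that the bytes aren't text; the pipe is just a byte stream either way.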