- The use of `1>&2` is to redirect the stdout of `echo green` to stderr, so that `echo green` actually never writes to the pipe, so it never gets SIGPIPE.
- `echo` only echoes its arguments and always ignores stdin, so putting `echo blue` on the RHS of the pipe only serves to run the two sides of the pipe in parallel.
- `bash -c '(echo green 1>&2) | echo blue' 1>stdout 2>stderr` will show you that green and blue actually write to different files
> if it was a separate command (or even if the shell forked to execute it internally), only the 'echo red' process would die from the SIGPIPE instead of the entire left side of the pipeline.
Most Linux distributions have /bin/echo as a separate program. Running `(/bin/echo red; echo green 1>&2) | echo blue` will always print both green and blue.
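You can check where each word ends up by capturing the two streams separately (the log file names here are arbitrary):

```shell
# "red" goes into the pipe and is discarded (echo blue never reads stdin);
# "green" is redirected to stderr; "blue" is the RHS's stdout.
bash -c '(/bin/echo red; echo green 1>&2) | echo blue' 1>/tmp/out.log 2>/tmp/err.log

cat /tmp/out.log   # blue
cat /tmp/err.log   # green
```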
EDIT: fix typo of stdin/stdout/stderr as suggested
Also regarding one of your examples: maybe I'm misunderstanding, and anyway this wouldn't change your overall point, but should it maybe read like this instead:
bash -c '(echo green 1>&2) | echo blue' 1>stdout 2>stderr
since stdin is normally associated with file descriptor 0
You could replace the `|` with an `&` and get the same behavior, because nothing in the second stage depends on the first stage of the pipe.
In my old man grumbling at clouds voice: This is not arcane.
For normal people, pipes are as serial as can be. For the average person, parallel pipes = 2 pipes.
The shell ties the prior program's output (or, for the first program, user input) to the input of the next program. That's (old) stdout and (new) stdin taken care of.
stderr is sent to the terminal as output by default.
For reference, stdin, stdout, and stderr are normally numbered 0 through 2 respectively. When you're redirecting input and output in the shell, the default is (usually) to wire things up the way that makes sense, but a number placed directly against the redirection operator (without any IFS characters) tells the shell to grab a different input or output.
These will yield different results due to left to right parsing:
echo test >/tmp/test 2>&1
echo test 2>&1 >/tmp/test
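To make the difference visible, here's a sketch using a throwaway helper (`both` is a hypothetical function, not a real command) that writes one line to each stream; redirections are applied left to right:

```shell
both() { echo out; echo err 1>&2; }

both >/tmp/demo1 2>&1   # 1 -> file, then 2 -> (wherever 1 points NOW, i.e. the file)
both 2>&1 >/tmp/demo2   # 2 -> (wherever 1 points NOW, i.e. the terminal), then 1 -> file

cat /tmp/demo1   # out, err -- both lines landed in the file
cat /tmp/demo2   # out only; "err" went to the terminal instead
```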
Here's a more interesting example:
echo test >/dev/null/test 2>/dev/null
echo test 2>/dev/null >/dev/null/test
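What makes it interesting: /dev/null is a file, not a directory, so the shell can't open /dev/null/test at all and the command never runs. Whether you see the shell's own error message then depends on where stderr pointed at the moment the open failed:

```shell
# The redirection itself fails, so echo never runs in either case.
echo test >/dev/null/test 2>/dev/null   # prints an error: stderr still pointed
                                        # at the terminal when the open failed
echo test 2>/dev/null >/dev/null/test   # silent: stderr had already been
                                        # discarded before the open failed
```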
Send stdout to a file.
ls myFileWhichExists > myStdLog
- or -
ls myFileWhichExists 1> myStdLog
Send stderr to a file.
ls myFileWhichDoesNotExist 2> myErrLog
Send stdout to one file and stderr to a different file.
ls myFileWhichExists myFileWhichDoesNotExist 1> myStdLog 2> myErrLog
Send stdout and stderr to the same file
ls myFileWhichExists myFileWhichDoesNotExist 1> myBothLog 2>&1
I read that last part "2>&1" as "Send stderr (2) to the same place as stdout (1) is already going to".
Notice that if you send stdout and stderr to the same file, because of buffering and other issues, the output from stdout and stderr may interleave in unpredictable ways.
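Here's a concrete (hypothetical-path) version of the combined-log case; note the exact wording of ls's error message varies between implementations:

```shell
dir=$(mktemp -d)
touch "$dir/exists.txt"

# stdout (the listing) and stderr (the complaint) land in the same file
ls "$dir/exists.txt" "$dir/missing.txt" > "$dir/both.log" 2>&1

grep -c txt "$dir/both.log"   # 2: one listing line plus one error line
```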
In retrospect, getting non-preemptive pipes (a type of coroutining) working in DOS would not have been all that difficult, if it weren't for the limited memory available to PCs of the time and the fact that most programs assumed they owned it all when they ran.
(Eventually people retrofitted a sort of multitasking with TSR programs, but that's not really the same thing.)
(sleep 1; echo red; echo green 1>&2) | echo blue
(echo red; echo green 1>&2) | (sleep 1; echo blue)
It appears that sleep (at least on a typical modern Linux desktop) behaves similarly to echo in that it does nothing with the pipe's contents rather than echoing them to the terminal.
It's curious because someone might assume the default behavior would be to forward all file descriptors unless something was done to the data streams. Clearly that isn't the case: the shell ties the prior program's standard output to the next program's standard input irrespective of whether anything is ever read from it.
The shell sets up a pipe and connects the left side to the pipe's writing end, and the right side to the reading end. Now, the right side is actually a subshell (indicated by the parentheses "(...)"). And that subshell can spawn as many other processes, sequentially or in parallel, as it wants. All of them will get the open file descriptor (the pipe's reading end) inherited by the operating system.
If you had multiple processes in parallel reading from the pipe, the outcome would be totally nondeterministic (dependent on the kernel's scheduling behaviour). In the example case, none of the potential readers actually reads (not the subshell itself, not the sleep, and not the echo).
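A quick way to see the inheritance in action: a process started later inside the subshell can still read from the pipe, because it inherits the pipe's read end and the data just waits in the kernel's pipe buffer:

```shell
# cat starts after the writer has long finished, yet it still gets "hello":
# the subshell's stdin is the pipe's read end, and cat inherits it.
echo hello | ( sleep 1; cat )    # prints: hello
```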
Here's a perhaps illuminating example:
(echo h; echo hello; echo HALLO; ) | ( read firstline; echo "Firstline is $firstline"; grep A; )
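If it helps, here's the same pipeline annotated with what each stage consumes (assuming bash, where `read` pulls exactly one line off the pipe and no more):

```shell
(echo h; echo hello; echo HALLO; ) | ( read firstline; echo "Firstline is $firstline"; grep A; )
# Firstline is h
# HALLO
# -- read consumed the first line ("h"); grep saw "hello" and "HALLO"
#    and kept only "HALLO", since "hello" has no capital A.
```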
thing | thing | thing > /another/file/I/didnt/mean/to/smash
If you include > redirection, it's parsed and processed before the pipeline executes. Even if the pipeline subsequently fails, you have probably already smashed /thing/you/didnt/mean/to/smash if you > into it.
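A minimal demonstration (file names are hypothetical): the target is truncated even when the command can't be run at all:

```shell
printf 'precious data\n' > /tmp/precious.txt

# The shell opens (and truncates) the target before trying to run the
# command -- so even though the command doesn't exist, the file is
# already empty by the time "command not found" is reported.
no-such-command > /tmp/precious.txt 2>/dev/null

wc -c < /tmp/precious.txt   # prints 0: the data is gone
```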
I recently found out that you can’t easily spawn a shell and then send commands to it. It’s doable with tmux commands, but you’d think it would be easier. I just wanted to write something that locates npm/virtualenv stuff in bash, nothing fancy.
That's what gets set up for you by running tmux, screen, expect, xterm, ssh, etc.
(Bonus: https://askubuntu.com/questions/481906/what-does-tty-stand-f... )
Bash is pretty bad at helping you discover how to do things.
It sounds like you are doing something simple in a very roundabout way. Explain instead the intended outcome and many people will be happy to help.
echo ls | bash
<testpipe1 bash # in separate window
echo ls > testpipe1
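For completeness, here's a self-contained sketch of that named-pipe approach (it needs an `mkfifo` step first, and the paths here are arbitrary):

```shell
dir=$(mktemp -d)
mkfifo "$dir/testpipe1"                   # create the named pipe first

bash < "$dir/testpipe1" > "$dir/out" &    # long-lived shell reading commands

echo 'echo hi from the fifo' > "$dir/testpipe1"   # send it a command
wait                                      # the shell runs it, hits EOF, exits
cat "$dir/out"                            # hi from the fifo
rm -r "$dir"
```

Note that when the writer closes its end, the background shell sees EOF and exits; to keep it alive across multiple commands you'd hold a write descriptor open (e.g. `exec 3> testpipe1`).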
2) Even if the RHS does not exit quickly enough, `echo red` writes its output to the pipe, where it is simply ignored by the RHS `echo blue` (echo never reads stdin)
Would you also expect "echo something < file.txt" to show the contents of file.txt?
Perhaps you are thinking of cat or some other command, because piping things to echo is such a strange and unexpected thing to do that you normally won't encounter it.
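Right: echo prints its argument and never touches the redirected stdin. A quick check (the file name is arbitrary):

```shell
printf 'contents of the file\n' > /tmp/f.txt

echo something < /tmp/f.txt   # prints "something"; the redirected
                              # stdin is simply never read
```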