Imagine you get a lengthy help description which you then pipe to less... and all you get is (END) in your terminal. It turns out the author decided to print the help message to stderr instead of stdout. I assume newcomers will be as confused as I was when it happened to me for the first time. GNU utils use stdout for help texts, and so should you.
There's a case where this might be valid, which is where a command-line invocation is incorrect and a utility provides help as part of an error message. In that case, writing to stderr is justifiable.
(Of course, an alternative argument is that commands should fail silently but emit a nonzero return value.)
When invoked directly, as with '-h', '--help', etc., help output should be written to stdout, not stderr.
StackOverflow has tackled this question; the 2nd response follows the course I suggest:
<https://stackoverflow.com/questions/1068020/app-help-should-...>
And in this case, the first response:
<https://stackoverflow.com/questions/2199624/should-the-comma...>
I'm looking for any specific guidance from, e.g., GNU but am not finding any.
More than justifiable, I'd say it's the correct thing to do in that case. Otherwise, the caller (which can be another script) may end up working with the help message thinking it was the output it expected.
The whole rule should be something like "Print to stdout if it's part of what's asked by the caller. Print to stderr if it wasn't asked but the user should know about it." So outputting it to stdout should happen when it's asked via --help, and outputting it to stderr should happen when it's part of an error.
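A minimal sketch of that rule in shell (the script name and options here are invented):

#!/bin/sh
# Sketch only: requested help goes to stdout, unrequested usage goes to stderr.
usage() {
    echo "usage: frobnicate [-v] FILE..."
}

case "$1" in
    -h|--help) usage; exit 0 ;;       # asked for: stdout, success
    -*)        usage >&2; exit 2 ;;   # bad option: stderr, failure
esac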
The traditional Unix command "philosophy" is that commands succeed quietly, using their termination status to indicate all is well. Failing quietly was never a part of it.
Chatter on success reads like a cheesy sci fi script.
> copy * dir
34 files copied, captain!
> mount plasma_cannon /dev/sdc
plasma_cannon mounted, ready to fire, captain!
A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself though.
ESR's a somewhat less reliable narrator on many topics these days, but his TAOUP remains useful, and indeed suggests "Rule of Repair: Repair what you can — but when you must fail, fail noisily and as soon as possible."
Postel's Law does not count among "repair what you can". Do not try to repair Postel's Law as ESR is doing here; it's broken beyond repair and can only be replaced.
- Rarely repair a bad input; it is optional at your discretion. Just fail.
- If you repair a bad input, do it only in order to try to diagnose more of that input; more things could be wrong, or the first failure encountered could even have a root cause in those other things.
- Remember to fail if you repaired the bad input, even if there are no more errors after the repair.
Not part of it, but not against it. It's useful to stay quiet when the program is meant to be used as a condition and failure is normal. For example: `test`/`[`, `false`, `grep` (when no matches are found), etc. Also when the program is meant as a sort of wrapper to other programs, like `ssh localhost false`, `script -qec false /dev/null`, `true | xargs false`, etc.
> A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself though.
I don't agree that it's incorrect to save the user the step of calling --help, when it's obvious they need to see that info from an incorrect call. Once you've decided that including the --help message in an error is right, I don't think it's correct to send it to stdout when it's not expected.
This isn't an odd behavior either, including the --help message (or at least just the synopsis) in stderr on incorrect options is the behavior I'm seeing in utilities like GNU's `bash`, `grep`, and OpenBSD's `netcat`, for example.
There are others I've encountered which return a much longer help set. I believe opkg from OpenWRT is amongst those, which prints over 80 lines worth of options to stderr when given an invalid argument.
You can't tell that in the program itself. Whether standard input is a TTY isn't a reliable indicator either way. Though if you insist on spewing help text alongside diagnostics, at least refraining from doing that to a non-TTY device might not be a bad idea.
> Can you be explicit with what they are missing, I’m clearly missing it as well?
There are two cases when a CLI program can print its usage:
1. when the `--help` option, or often also the short-option `-h`, is passed to the program
2. When the user passes a wrong option to the program, where first the error is printed and then often also the general usage.
For 1. the output should always go to stdout; for 2. the error should go to stderr, and in that case it can be warranted to print the usage on stderr too, so that everything is on the same stream.
Doing 2. is not a must though; one can also go for an output like:
> error message
> Try 'program-name --help' for more information
This avoids "hiding" the actual error in the often rather big amount of usage-text while still hinting how to get information about what options the program expects and/or accepts.
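In shell, that style is just a couple of lines (program-name and --frob are placeholders):

# Bad invocation: a short diagnostic plus a hint, both on stderr, nonzero exit.
echo "program-name: unrecognized option '--frob'" >&2
echo "Try 'program-name --help' for more information." >&2
exit 2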
This is a special case of 2, but is distinctly different since no context can be inferred.
In my opinion the program should simply fail (as in a non-zero return) since no command was given. I'm highly annoyed when kubectl starts spewing help text when I forget the command somewhere in a script.
Can we also find a "special place" for programs that always output the help text to stderr no matter what and have pages of options? I don't want to be redirecting before being able to grep...
This is exactly how I write my Bash functions. I didn't even realize it was a standard, it just made the most sense to me. Being given help via a --help argument is intentional and thus appropriate for STDOUT (and a return code of 0); being given help after an argument error makes sense to go to STDERR (and a return code of 2, "USAGE").
Since you can nest functions in Bash (did you know?), I usually have a help function within the main function that is called from both logic branches and just outputs to the right file descriptor.
$ foo()
> {
> bar()
> {
> echo 42
> }
> }
$ foo
$ bar # bar can be called though we are not in foo!
42
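A rough sketch of the help-function pattern described above (the function and its options are made up):

deploy() {
    usage() {
        echo "usage: deploy [-n] TARGET"
    }
    case "$1" in
        -h|--help) usage; return 0 ;;      # asked for: stdout, status 0
        "")        usage >&2; return 2 ;;  # missing args: stderr, status 2 ("USAGE")
    esac
    echo "deploying to $1"
}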
Probably the most profitable use for this is for individual functions to override some callback.
But without even dynamic scope, you have no nice way of restoring the previous one, which could be one that some intermediate caller installed for itself.
It could be used to delay the definition of functions. Say that for whatever reason, we put a large number of functions into a file and don't need them all to be defined at once. There can be functions which, when invoked, define sets of functions.
A module could be written in which certain functions are intended to be clobbered by the user with its own implementation. A function which defines those functions to their original state would be useful to recover from a bad redefinition, without reloading that module.
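For example, a sketch of such a module (all names invented):

# mymodule.sh -- on_event is intended to be clobbered by the user.
mymodule_defaults() {
    on_event() { echo "event: $*"; }    # original implementation
}
mymodule_defaults                       # define the defaults on load

# user code overrides the callback...
on_event() { logger -t myapp "event: $*"; }

# ...and can later restore the original without re-sourcing the module:
mymodule_defaults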
I wish I could use Elixir as a shell scripting language without incurring the VM startup cost and losing the ability to export shell variables or create shell functions... Maybe it would make sense for someone to hack its REPL to dupe most of what one would need on a commandline
The context here is that I may want to run an Elixir script from a Bash command line which results in some environment variables being changed. I don't think that's possible unless that script itself sends that env command to STDOUT, and I then capture it and eval it on the Bash side. Which seems... inelegant.
Hm, and not with `local bar` either, bit of a shame. I think at the point that mattered I wouldn't want to be using bash though! It's nice just as a grouping, 'this is related to that' sort of thing.
> 2nd response (…) And in this case, the first response
There’s a “Share” link under each answer that you can use to link directly to them. In this case it’s impossible to know which answer you mean because we don’t know what your “Sorted by” option is. But even then, the order changes over time.
This is going to sound snotty, and I'm not really trying to be, but... Unix and its derivatives were made for people who sort of knew what they were doing. The reason you pipe exceptional events to STDERR is so the STDOUT output, if it exists, can flow into the next command in the pipe. Asking for help is an exceptional event. If you want the error output of a Unixish (linux, macos, solaris, etc.) machine running bash to be lessable, re-direct it to STDOUT with `2>&1`. You probably shouldn't be touching the shell if you don't know what that does. These tools were developed assuming the users would have a basic understanding of the system they were running on.
The GNU Project has published tools of varying quality, based on who was around to write the tool, debug it, give feedback, etc. It is not the exemplar of high quality software. (But it's far from crap.) The important bit about GNU (and any other software) is that it was written to adhere to their uses. Other people have different requirements. Telling people to "write your software like GNU writes their software" is to misunderstand personal agency and one of the major points of open source software.
Your comments sound like you're saying "Software freedom means you're free to write software the way I want you to write software."
Sure, but adding --help to a command means that the output of help is not exceptional, thus you're expecting it in stdout. If on the other hand you invoke the command incorrectly, and the author decides the best thing to do is print out the help in such cases, then yes, it should be stderr.
Let's take the cut command for instance. If you read the man page you discover its job is to parse fields out of each line of input. Printing a usage message to STDOUT is not part of its documented behaviour. It is therefore an exceptional event.
- if you run cut with no input and no parameters, it outputs an error message to STDERR
- if you run cut --help it outputs usage info to STDOUT
This is what I'd expect based on the man page. Running cut with no parameters is undefined, hence the output to STDERR. Running cut --help is defined, so the output goes to STDOUT.
I think people get confused because some tools, when run without any input, output the full help info to STDERR, instead of suggesting the user run foo --help.
So foo and foo --help appear to be equivalent. Until you pipe into less.
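For illustration, on a GNU coreutils system the two streams can be told apart by discarding one at a time (messages approximate):

$ cut
cut: you must specify a list of bytes, characters, or fields
Try 'cut --help' for more information.
$ cut 2>/dev/null        # the diagnostic was on stderr, so nothing prints
$ cut --help >/dev/null  # the help text is on stdout, so nothing prints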
Regardless, there is no argument about software freedom to be made here.
You’re allowed, be it open or closed source, to write and publish software that defies common, well-established conventions.
You can pretend that it’s some sort of first amendment right to do so if you like, and attempt to deflect your unwillingness to write software that behaves properly as incompetence on the users’ end.
But whether anyone will be convinced by that is a separate question, and those who aren’t convinced certainly have the right to tell you, in turn, that your software sucks. This does not infringe on your rights to write broken software.
This is going to sound snotty, but arguments were made for people who sort of knew what they were doing. You probably shouldn't be touching the comments box if you don't know what it does. These thoughts were written assuming a basic understanding of the language they were written on.
"Exceptional event" is not a useful or well defined concept. A better concept is "error" or "unexpected result".
Asking for help is a request for information. The normal, non-error, expected result is that a bunch of text will show up on the output. It is entirely reasonable that the "next command in the pipe" might want to do something with that expected output.
I shouldn't have to guess whether or not you think the output I specifically requested is "exceptional", so it's entirely reasonable to expect that programs in general consistently put user-requested help on stdout.
You are of course free to write your software any way you want. And I'm free to think it's stupid, and to not use your software.
If the next command in the pipe wants to do something with non-normal output of a command, then yes, it should do something with that. And the command can redirect stderr to stdout with the use of `2>&1`.
Yes. You're unlikely to like my software. I don't recommend you use it.
> Unix and its derivatives were made for people who sort of knew what they were doing. [...] You probably shouldn't be touching the shell if you don't know what that does.
You really do not need to be such a grumpy elitist. People are not born with Unix knowledge already in their heads. Asking questions, raising doubts, and getting answers from more knowledgeable users is a very effective way of learning new things!
With that said, can you make an example of a legitimate use of `command1 --help | command2`, where `command2` does something useful and is not `less`?
The problem isn't that you get a usage message when you ask for it. It's that you get a usage message (written to STDOUT) when you don't. Many commands will print out the usage message when command line options specify a condition that can't be met. I find this frequently when ssh'ing into busybox based systems. Busybox's find command is much less "refined" than comparable desktop OS finds (BSD & GNU/Linux).
I don't mean to disagree with your first paragraph, but I do pipe help to grep (or rg) all the time. (And never to less, who uses a non-paging/scrollback terminal emulator in 2023? Why else would that be beneficial, just to clear it when I've read what I wanted?)
I use `less` with help output because if the help output is long, it starts me at the top of the help output rather than the bottom, and the top usually has a nice summary of the command usage that I usually want to read.
More importantly, I can easily find things by searching with less's `/` hotkey. Relying on the terminal emulator's built-in search isn't great because (a) I'm not used to it - I am more used to vim's keybindings, and the search hotkey `/` is the same in vim, and (b) that's also going to search all the output from before I ran --help (not as big of a gripe, but still somewhat annoying).
I can see how that's a reasonable preference. Though I fall in @OjFord's camp. I have a mouse scroll wheel and I'm not afraid to use it. But... I have to remember to hit return a few times beforehand because it's sometimes hard to find the top of help when you're scrolling up in the terminal.
And if your brain is wired for vi, then that makes complete sense.
But... the cool thing about using the scrollwheel to scroll up to see the --help output is it's always there. If you pipe it into less, it disappears as soon as you exit less. So if you're writing a big, beefy command with lots of unfamiliar options, you can start typing down at the prompt and then scroll up to read the help output. It's annoying that when you type, you immediately scroll back down to the bottom of the terminal buffer, and I think all terminal emulators default to doing this, but maybe it's a configurable behaviour.
This also works with `man <command> | cat`.
Also... how many times have I had to type out `git branch -a | cat` and tried to remember to put the `| cat` in it. I HATE that the stock git cli automagically pipes to /usr/bin/pager. If I wanted to pipe the output to /usr/bin/pager, I would type `command | /usr/bin/pager`. But now I'm just kvetching.
Yeah ok now you say it I realise I do sometimes pipe help to less too. It's just that typically I'll run it bare or piped to grep/rg first, and potentially stop there. I too don't use terminal search for pretty much the same reason, and just never got into it for whatever reason.
(Alacritty has a tonne of great features I just haven't taken the time to learn the muscle memory to use.. ctrl-j/k bindings to scroll back and some custom patterns to open as URLs is about it.)
It turns out that the right answer doesn't even depend on your level of shell knowledge or on how you tend to use the shell.
I use much more complicated pipe tricks than that interactively on a daily basis, and I definitely don't think of them as "arcane". As somebody who does that, it's useful to me to know which channel the data I want to pipe are going to come out on. Which is why help, which is normal requested output, should obviously go to stdout.
Usage messages issued in response to actual user errors are different, of course.
Sure. But the comment wasn't "You probably shouldn't be touching the shell if you haven't shipped industry-leading products" it was "You probably shouldn't be touching the shell if you don't know what that does."
Also, if you needed to use it every day, I suspect it would be more familiar than a vague memory.
I don't pipe command output daily, though. I use the shell because often it's easier than point-and-click for a lot of operations.
The shell has plenty of use cases that don't involve piping output. Saying you shouldn't touch the shell unless you understand piping output is like saying you shouldn't touch a refrigerator unless you know the perfect temperature to store milk.
I think this might be the crux of my discomfort with the OP's exhortation we should all adhere to his preferences. Unix and its derivatives are great because there's a long history of people trying to do things, finding it hard, and then slathering on a layer of functionality which is invoked through abstractions that are novel and probably inconsistent with those abstractions that came before. You don't need to ask anyone's permission and you won't find people saying "meh. you shouldn't do it that way." (Okay, maybe a few, but you can ignore them. The Unix(tm) police won't show up to cart you away like what happens with VMS.)
Sure. But the benefit of open source is you can do whatever the eff you want. I just don't like being told "I have a use case that dictates your behaviour. You need to follow that use case, even though it's not your use case and contravenes an existing convention."
This is sort of my hot button issue. After years of working on BSAFE, OpenSSL, firefox and libnss, I hate that people say "Hey. Great software. Here's a list of things you must add to it. Of course I'm not going to pay you."
Why should I change my code to adhere to someone else's conventions when they're in opposition to existing conventions?
print --help to stdout, but if the usage is part of an error (ex: because of bad arguments), then print it to stderr.
Many tools, for consistency or out of laziness, always print usage to stderr. But that is still better than always printing it to stdout. Errors should never go to stdout, and paging stderr can easily be done with 2>&1.
Edit: and maybe, if your --help output is several pages long, consider leaving the details to a manpage.
You should have a man page regardless of the length of your help.
But it's also really useful to be able to get full synopsis of all the options even if all you have available is the binary. Some programs have a lot of options. The "--help" output for rsync on my machine is 184 lines, and is actually pretty terse.
... and there truly is no agreed-upon idea of what constitutes a "page". Even the VT100 screen size was never dominant enough to always count. And nowadays people's windows may be of almost any size.
What's a page? I set all my terminals to 22x23. Will --help run a few quick ioctls to calculate screen size?
But yeah, agree. It's way more preferable to have surprise output on stderr than surprises mingling with stdout, and it's good to be prepared for that.
Frankly, I nearly quit piping to a pager when GUI terminal backscroll became easy and infinite.
Actually the ADM-3A automatically wraps overflowing text similar to what you would expect from a terminal unless you disable the "Auto NL" switch and it is 80x24 characters unless you got the basic 80x12 variant without the "24 lines" option.
In college we used to print out "one-page" ASCII art on the LPs. Naturally, it could get rather risqué. But I would just stare into them, marveling at the ingenuity. Of course I'd already been into it awhile, thanks to C64 Print Shop.
Also my father produced tonnes of scratch paper in narrow strips. I found a way to weave them into a perpetually-growing "tractor-feed snake" which I placed in a padded crib and brought to grade school to show off. I would typically juggle a few bits and bobs to attract even more attention...
Sure, what is the standard URI format for a really long piece of tractor feed paper?
I'm sure you could dig around into my hometown's landfill for awhile. They're only 30 years sediment. I'll send you a tarp, shovel, duct tape to stick the pages back together, and my sister's old clothes to wear while you work. DM for deets.
$ /bin/bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin21)
on the off chance anyone was curious/didn't remember. Yes, I have $HOMEBREW_PREFIX/bin/bash and it's 5.2.15
In that same vein, I wondered if that syntax was supported in zsh since modern macOS went whole hog and ... I gained one more pebble on the huge pile of steaming turds why I detest zsh
I'm not sure that I follow: you chose options "-exc" so:
XTRACE (-x, ksh: -x)
Print commands and their arguments as they are executed.
So if you use "zsh -ec" does it still surprise you?
-x is the most invaluable tool in my shell-debugging toolkit. It is great to see every command evaluated and run alongside the script output itself. I use it multiple times per day at work.
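For anyone who hasn't used it, a trivial illustration of what -x prints (output approximate):

$ bash -xc 'n=6; echo $((n * 7))'
+ n=6
+ echo 42
42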
But in an ideal world, if every tool is careful to restrict it to one page or less, you'd literally never need a pager anymore.
Alternatively, tools could build in pagers for fewer pipeline surprises, like the all-encompassing systemd.
Even better, I could envision a framework where any tool that produces output can be automatically subject to pagination, with auto terminal detection and the whole bit. Think of libreadline but for output. You could thereby eliminate plenty of ad hoc hacks.
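Something like this wrapper approximates that idea in shell (maybe_page and mytool are invented names):

# Page output only when stdout is a terminal; less -F exits
# immediately if everything fits on one screen.
maybe_page() {
    if [ -t 1 ]; then
        "$@" | less -FRX
    else
        "$@"
    fi
}

maybe_page mytool --list-everything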
No, please DO print --help output to stderr, and exit with a nonzero status while at that. I mean, please do that if you have to make a conscious choice; there's no point fighting if you're using an argument parsing library.
--help, when used correctly, is almost always interactive, where stdout/stderr and exit status don't matter at all. The few noninteractive uses like help2man or zsh auto parsing can trivially handle a redirect. Sure, a noob piping --help to less may be confused for a first time, that's rare and it's a good chance for them to learn about streams and redirection.
That leaves accidental noninteractive usage. Sooner or later someone will call your program with dynamic arguments from another program, and if your command accepts filenames/IDs there's always a chance to encounter one that starts with '-' and contains an 'h' (a practical example: YouTube video IDs). It's very easy to forget to add -- before the unsafe argument(s), so that it's accidentally interpreted as flags. Nonempty stdout, empty stderr and zero exit status makes it way too easy to accidentally accept the output as valid, only to discover much later.
This is not a theoretical concern, I've made this mistake myself and had it masked by -h behavior. A noob only need to learn redirection once, in a totally harmless setting. Meanwhile, even the most seasoned expert could forget -- in a posix_spawn.
At the end of the day, this is not a big deal, but as I said, if you have to make a conscious choice, make the one that makes accidental mistakes more obvious, because humans do make mistakes. This principle applies everywhere.
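A concrete version of that failure mode (the downloader and the ID are made up):

id='-hXAbCdEfGhI'     # a perfectly valid ID that happens to start with a dash
download "$id"        # oops: parsed as options, may print help and exit 0
download -- "$id"     # with --, the ID is always treated as an operand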
No, keep --help on stdout. There's a good chance we want to pipe it to grep or less and having to write the 2>&1 incantation is a perfect way of scaring newbies off
By default getopt prints (to stderr) a one line message for unknown options or missing arguments, so I usually don't bother catching ':' or '?' explicitly.
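With the bash getopts builtin that looks roughly like this (option letters are arbitrary); without a leading ':' in the optstring, getopts reports unknown options and missing arguments on stderr by itself:

while getopts "vo:" opt; do
    case "$opt" in
        v)  verbose=1 ;;
        o)  outfile="$OPTARG" ;;
        \?) exit 2 ;;            # getopts already printed the diagnostic
    esac
done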
If you used --help, then decided to pipe it to less, and it disappeared, then you can't have been confused for too long. But agreed that an explicit --help should print to stdout.
However, if you used the tool incorrectly (passed the wrong args) and you expected the usage information to go to stdout rather than stderr, I would disagree vehemently. stdout is (generally) for parseable information, whereas stderr is kind of a garbage bin of everything else.
I love this answer! Indeed, we cannot put --help output on stdout, because sometimes the help appears when we pass wrong arguments, so we should output it to stderr, so we can see that we made a mistake.
I disagree. stderr is a misnomer, in modern usage it has effectively become stdlog/stdinfo. The only thing that should go to stdout is the result of the program.
For example, many programs will print usage/help when used incorrectly. Imagine you upgrade the "read_reactor" tool, and your usage in your "control_reactor" becomes invalid - suddenly you're piping help message data to the control rods. By sending it to stderr instead, no bogus data would be piped and, as a bonus, you would see the help message after invoking your script because (as you have experienced) stderr is not piped by default.
If you want to send it to less: read_reactor -h 2>&1 | less
I disagree, the program's purpose isn't to show the help. Asking directly for help is one of the things in the path to getting desired output, along with getting an error.
If you're following such and such standards that says it should go to stdout, it should go to stdout though. I don't take that as a given.
> If you're following such and such standards that says it should go to stdout, it should go to stdout though. I don't take that as a given.
This is being pedantic. If 99% of programs operate a certain way (GNU coreutils or their BSD equivalent), while not dogma, it becomes convention.
You need a better reason to break convention than whatever rationalisation you seem to have against it.
The comment in the linked github issue does not present any advantage to using stderr (lack of buffering, really?) yet they completely ignore convention. Quit being fancy, be a good GNU/BSD citizen.
Bingo. The arguments for making unix harder than it has to be by sending help to stderr when the user has asked for it are the kind of stuff that gives linux its bad reputation.
AFAIK all standard tools on typical Linux distributions send help to stdout when it’s been asked for, and to stderr when it’s a response to a malformed command. Third parties can make their software do whatever they want, but that’s hardly “Linux”’s fault.
When passing dash dash help, the help is the output of the program, so it should go to stdout. When help is printed because the invocation was invalid, it should go to stderr. This is what most programs do, btw.
You could use &3 etc, but that's non-standard. 2>&1 does pipe both into stdout; you would have to do 2>&1 1>/dev/null to send only stderr to where stdout was going.
syslog(3) is a sort of std{log,debug}. You can tee it to a vt if needed. (Of course with systemd you have to weed through configuration.)
But stderr was designed to be seen on terminal regardless of piping or logging. It’s its purpose, so that a pipe user could see what’s wrong or what’s up. There may be a programmatic need to read stderr, but mixing it with stdout is only needed with programs that use these descriptors incorrectly.
|& isn't POSIX. As for the redirection order, if you haven't learned it yet, learn now!
These numbers aren't magic. 0 is stdin, 1 is stdout, and 2 is stderr. These are the three standard file descriptors you get on Unix and on Windows. Stdout (1) from the previous process goes to stdin (0) of the next process in a pipeline. So, if you want less to see the previous process's stdout (1) and stderr (2), you just need to tell the shell "send stderr to wherever stdout is going right now". That's exactly what 2>&1 means.
A fun caveat about this syntax is the difference between these two:
ls >/dev/null 2>&1
ls 2>&1 >/dev/null
The first one tells stderr to go to /dev/null ("where stdout is going right now"), and the second one sends stderr to the next process and stdout to /dev/null.
While I appreciate you taking the time to explain the details, it's not that I haven't tried to learn/remember the specifics but that I tended to use it so infrequently that I'd forget in between uses--and even while knowing the specifics in terms of file descriptors & redirection syntax I would claim the ampersand placement/usage isn't exactly intuitive.
Also, my use primarily tends to be interactive rather than scripting so POSIX compatibility is less of a concern than whether I have to think about it--if I end up somewhere POSIX-compliant that isn't also bash clearly I took a wrong turn and should just throw the whole computer out the window. :D
It might be a weird & non-POSIX pipe to die in but at least it's mine. :)
It didn't exist until "recently", but my mnemonic for 2>&1 is two goes to one, but not just one, but to the same place as one, so you have to dereference it with an ampersand.
In the meantime until every tool is rewritten to account for your recommendation, just add 2>&1 and stderr will redirect to stdout, so you can easily pipe it to other stuff
On that same note, please do not exit with non-zero when --help is passed. Requesting the help is expected, and otherwise I'm left unsure whether the program did some weird stuff to output the help text and one of those weird things failed, or someone was just too lazy and decided to exit with non-zero.
# in your .bashrc/.zshrc/*rc
alias bathelp='bat --plain --language=help'
help() {
    "$@" --help 2>&1 | bathelp
}
This highlights the help output with colors so it looks nicer. It works with most help outputs, as it highlights the first part, the flag/argument, in one color and the description in another color.
I agree this makes sense. The help text is the primary output of the program in this case, so it should be stdout. (However, if you want longer documentation then a man page might be better anyways.)
You would use stderr for status messages, error messages, and other stuff that is not the primary output of the program. (In some cases, this might include help text; like another comment says, if you specified wrong arguments (not --help) then a short description might be a part of the error message.)
One program that does write error messages to stdout and that annoys me significantly is Ghostscript. (Although you can tell it to write it to stderr, doing that causes all output to be written to stderr; I want output from "print", "==", etc to be written to stdout.)
I think that's personal opinion though. IMO, stdout is used for anything that might make sense to pipe to another program. Help output doesn't generally make sense to send down the line, so don't send it to stdout. Fine if you do, but a general rule should be that output meant only for human consumption goes to stderr.
Also I wasn't suggesting to speculatively use |&, I was saying to always use it when adding --help, so you don't have to speculate
> IIRC, ffmpeg is like this with everything going to stderr, making metadata parsing more difficult than it needs to be.
If you're trying to parse metadata, ffprobe has a set of options for structured output to stdout. Parsing this will be dramatically easier than whatever you're doing.
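For example, something along these lines (output fields vary by file, but these are standard ffprobe options):

# JSON-formatted metadata on stdout, log noise suppressed:
ffprobe -v error -print_format json -show_format -show_streams input.mp4 > meta.json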
Sometimes you just want to quickly look at ffprobe's natural output, but the file has a lot of metadata or subtitles or whatever, and the stuff you want to look at is printed up on top, so the natural way is to pipe it to less or a file.
But no, you have to mess with stderr, because the program's natural output without using the flags is an error apparently.
In ffmpeg's defense, it is designed so that the video stream can go on stdout. I guess it could log to stdout if you specify an explicit output file and to stderr when you want the video on stdout. But sometimes being consistent is better than being convenient.
Is there any list of do’s and don’ts for CLIs? Common flags such as -h are used for help, but I’m not sure what other soft rules exist and don’t want to clobber well-established paradigms when coding CLI interfaces.
Java (since version 9) has -version which prints to stderr and --version which prints to stdout (also with slightly different content). Same for -showversion and --show-version and to get back on topic -help -h and -? print to stderr while --help prints to stdout.
Standard output is for program output. Standard error is something of an unfortunate name since it's actually a side channel for all non-output messages meant for the user to read.
So what's "output" anyway? Whatever the user asked the program to compute. If you pass a --help option, the help message is clearly the program's output because it's what the user asked for. If you use it incorrectly and the program prints usage information, the help message should go to standard error while the output is empty because the operation failed.
I don't see why a developer would do this, nothing failed, so why print to stderr on `--help`? I've printed help contents with an additional `one or more arguments expected` line to stderr in my code, but I wouldn't think someone would take it this far.
Just because it's unenforceable doesn't mean you shouldn't make a rule. In fact it seems just the opposite, that's exactly the time to make a rule. If it were enforceable you wouldn't need a rule, you could just enforce it and have it be true.
It's probably better to measure rules by success rate than pass/fail. One popular tool breaking convention sounds like one popular exception, which is probably better than no convention.
Agree or not (i don’t), we have loads of unenforceable conventions in software. Hell, if this field is known for anything at all, it’s people getting worked up over subtle deviations from longstanding conventions.
I'm not sure this is good advice. Many want to show usage info when there's an argument error, and you want to reuse the help code with minimal complexity.
I prefer that I don't need to figure it all out - that the tools pipe lengthy --help to $PAGER themselves. Or open a manpage like git does. I don't think this is ever used in a noninteractive environment.
My two cents: do not do that. Git invoking PAGER trips me up every time. For example, when I want to cross-reference terminal output and --help, the PAGER hides the entire terminal output.
Question regarding this: should I print output to stdout and log to stderr? Or should only log levels Error and above go to stderr?
What about structured logging? Where does that go?
The logging libraries I know of let you configure the log destination, for example, to save to stderr and/or a file and/or syslog or the Windows event log, etc.
Use the defaults for your logging library and support a config option.
Oh please no. If I wanted a manpage, I would have typed man git, that's just as easy to type. I use tool --help to get a quick cheat sheet of available options. Opening a man page that I didn't ask for changes my context away from the shell I was working in. Please don't force context switches on unsuspecting users.
So if you are admittedly a newcomer maybe you shouldn't try to tell people how to write programs?
Edit: I didn't mean to be rude, but from my own experience I know that a lot of times when you don't like something it might be because you don't understand it, and therefore it might be more fruitful to ask why something is a particular way, and only afterwards ask people not to do it, if that makes sense.