Tell HN: Please don't print --help to stderr in your CLI tools
299 points by RicoElectrico on Sept 27, 2023 | 147 comments
Imagine you get a lengthy help description which you then pipe to less, and you only get (END) in your terminal. Turns out the author decided to print the help message to stderr instead of stdout. I assume newcomers will be as confused as I was when it happened to me for the first time. GNU utils use stdout for help texts, and so should you.



There's a case where this might be valid, which is where a command-line invocation is incorrect and a utility provides help as part of an error message. In that case, writing to stderr is justifiable.

(Of course, an alternative argument is that commands should fail silently but emit a nonzero return value.)

When invoked directly, as with '-h' '--help', etc., help output should write to stdout, and not stderr.

StackOverflow has tackled this question; the 2nd response follows the course I suggest:

<https://stackoverflow.com/questions/1068020/app-help-should-...>

And in this case, the first response:

<https://stackoverflow.com/questions/2199624/should-the-comma...>

I'm looking for any specific guidance from, e.g., GNU but am not finding any.


> In that case, writing to stderr is justifiable.

More than justifiable, I'd say it's the correct thing to do in that case. Otherwise, the caller (which can be another script) may end up working with the help message thinking it was the output it expected.

The whole rule should be something like "Print to stdout if it's part of what's asked by the caller. Print to stderr if it wasn't asked but the user should know about it." So outputting it to stdout should happen when it's asked via --help, and outputting it to stderr should happen when it's part of an error.
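A rough sketch of that rule in shell (mytool and print_usage are made up for illustration):

  print_usage() {
    echo "usage: mytool [--help] FILE..."
  }

  case "${1:-}" in
    --help|-h)
      print_usage        # asked for: stdout, success
      exit 0
      ;;
    -*)
      echo "mytool: unknown option: $1" >&2
      print_usage >&2    # not asked for: stderr, alongside the error
      exit 2
      ;;
  esac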


The traditional Unix command "philosophy" is that commands succeed quietly, using their termination status to indicate all is well. Failing quietly was never a part of it.

Chatter on success reads like a cheesy sci fi script.

  > copy * dir
  34 files copied, captain!

  > mount plasma_cannon /dev/sdc
  plasma_cannon mounted, ready to fire, captain!
A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself though.

Some GNU guidance is in the GNU Coding Standards:

https://www.gnu.org/prep/standards/standards.html#Command_00...

That does say that --help and --version should go to standard output.

The document also gives a list of common options; i.e., don't invent your own name for an option if something in this list matches.


Fair point on succeed quietly.

ESR's a somewhat less reliable narrator on many topics these days, but his TAOUP remains useful, and indeed suggests "Rule of Repair: Repair what you can — but when you must fail, fail noisily and as soon as possible."

<https://www.catb.org/~esr/writings/taoup/html/ch01s06.html>


Meta-rule of Postel's Law:

Postel's Law does not count among "repair what you can". Do not try to repair Postel's Law as ESR is doing here; it's broken beyond repair and can only be replaced.

- Rarely repair a bad input; it is optional at your discretion. Just fail.

- If you repair a bad input, do it only in order to try to diagnose more of that input; more things could be wrong, or the first failure encountered could even have a root cause in those other things.

- Remember to fail if you repaired the bad input, even if there are no more errors after the repair.


> Failing quietly was never a part of it.

Not part of it, but not against it. It's useful to stay quiet when the program is meant for conditions and failure is normal. For example: `test`/`[`, `false`, `grep` (when no matches are found), etc. Also when the program is meant as a sort of wrapper to other programs, like `ssh localhost false`, `script -qec false /dev/null`, `true | xargs false`, etc.
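A minimal illustration (filenames made up): the caller only branches on the exit status, so silence on no match is exactly right:

  if grep -q 'needle' haystack.txt; then
    echo "found it"
  fi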

> A message like "incorrect arguments, use --help" can itself go to stderr. Not --help itself though.

I don't agree that it's incorrect to save the user the step of calling --help when it's obvious from an incorrect call that they need to see that info. Once you've decided that including the --help message in an error is right, I don't think it's correct to send it to stdout when it's not expected.

This isn't an odd behavior either, including the --help message (or at least just the synopsis) in stderr on incorrect options is the behavior I'm seeing in utilities like GNU's `bash`, `grep`, and OpenBSD's `netcat`, for example.


If that bad command is in a script where it is run 1000 times in a loop, you don't want 1000 copies of the help text.

The user doesn't always need help; maybe they just made a typo.

Spewing help on every error gets old, fast. Much faster than a Unix graybeard gets old.


Take grep as an example.

If it doesn't find a match for the specified regex, it fails silently (and exits with a return value of 1).

If the file isn't found, then grep reports that error, e.g.,

  $ grep foo /doesnotexist
  grep: /doesnotexist: No such file or directory
Mind that it's quite possible that that error might result from an improperly quoted or escaped multi-term regex (containing whitespace), e.g.,

  $ grep foo bar baz /etc/bashrc
  grep: bar: No such file or directory
  grep: baz: No such file or directory
In a well-designed bash script, you'd test the file and quote the regex, e.g.,

  test -f somefile && grep 'foo bar baz' somefile
Grep is verbose on error where that's useful, but not overly so.

A tool which returns a (brief) usage note is "magick" (from the ImageMagick suite):

  $ magick --asdf
  Error: Invalid argument or not enough arguments
  
  Usage: magick tool [ {option} | {image} ... ] {output_image}
  Usage: magick [ {option} | {image} ... ] {output_image}
         magick [ {option} | {image} ... ] -script {filename} [ {script_args} ...]
         magick -help | -version | -usage | -list {option}
There are others I've encountered which return a much longer help set. I believe opkg from OpenWRT is amongst those, which prints over 80 lines worth of options to stderr when given an invalid argument.


Yeah, but if it's run interactively a quick hint (and not the full help message) can be useful.


You can't tell that in the program itself. Whether standard input is a TTY isn't a reliable indicator either way. Though if you insist on spewing help text alongside diagnostics, at least refraining from doing that to a non-TTY device might not be a bad idea.
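In shell that compromise might look something like this (a sketch; `usage` is a hypothetical function holding the long help text):

  if [ -t 2 ]; then
    usage >&2    # stderr is a terminal, so the long text may actually be read
  else
    echo "Try '$0 --help' for more information." >&2
  fi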


Interactive and non-interactive use should diverge as little as possible.

Personally, I like it when a program directs me to its help command when I give it a bad option, like

    > openssl blabla
    Invalid command 'blabla'; type "help" for a list.


Specific guidance from GNU:

https://www.gnu.org/prep/standards/html_node/_002d_002dhelp....

> The standard --help option should output brief documentation for how to invoke the program, on standard output, then exit successfully.


I think you missed the point. The guidance you quote doesn't cover the case described.


Can you be explicit with what they are missing, I’m clearly missing it as well?

OP says please have --help print to stdout; the GNU guidance posted above says the exact same thing?


> Can you be explicit with what they are missing, I’m clearly missing it as well?

There are two cases when a CLI program can print its usage

1. when the `--help` option, or often also the short-option `-h`, is passed to the program

2. When the user passes a wrong option to the program, where first the error is printed and then often also the general usage.

For 1. the output should always be on stdout, but for 2. the error should be on stderr, and in that case it might be warranted to print the usage on stderr too, so that everything is on the same stream.

Doing 2. is not a must though, one can also go for an output like:

  > error message
  > Try 'program-name --help' for more information
This avoids "hiding" the actual error in the often rather big amount of usage-text while still hinting how to get information about what options the program expects and/or accepts.


3. With no input when input is required.

This is a special case of 2, but is distinctly different since no context can be inferred.

In my opinion the program should fail successfully (as in a non-zero return) since no command was given. I'm highly annoyed when kubectl starts spewing help text when I forget the command somewhere in a script.

Can we also find a "special place" for programs that always output the help text to stderr no matter what and have pages of options? I don't want to be redirecting before being able to grep...


Yes, the OP is explicitly talking about case 1, and the quoted GNU docs are as well?


IMO it does.


I remember when they realized this was a bug in the GNU false utility and changed it. I thought it was funny.


The original "task failed successfully".


Thanks, that would be what I was looking for.


This is exactly how I write my Bash functions. I didn't even realize it was a standard, it just made the most sense to me. Being given help via a --help argument is intentional and thus appropriate for STDOUT (and a return code of 0); being given help after an argument error makes sense to go to STDERR (and a return code of 2, "USAGE").

Since you can nest functions in Bash (did you know?), I usually have a help function within the main function that is called from both logic branches and just outputs to the right file descriptor.
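Roughly like this (a simplified sketch; the names are made up):

  main() {
    usage() {
      echo "Usage: mytool [-h|--help] FILE..."
    }

    case "${1:-}" in
      -h|--help) usage; return 0 ;;      # intentional: STDOUT, status 0
      -*)        usage >&2; return 2 ;;  # argument error: STDERR, status 2
    esac
    # ... actual work here ...
  }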


> (did you know?)

Yes, but they are not scoped to the parent:

  $ foo()
  > {
  >    bar()
  >    {
  >       echo 42
  >    }
  > }
  $ foo
  $ bar # bar can be called though we are not in foo!
  42
Probably the most profitable use for this is for individual functions to override some callback.

But without even dynamic scope, you have no nice way of restoring the previous one, which could be one that some intermediate caller installed for itself.

It could be used to delay the definition of functions. Say that for whatever reason, we put a large number of functions into a file and don't need them all to be defined at once. There can be functions which, when invoked, define sets of functions.

A module could be written in which certain functions are intended to be clobbered by the user with their own implementation. A function which redefines those functions to their original state would be useful to recover from a bad redefinition, without reloading that module.


That is unfortunate and true.

I wish I could use Elixir as a shell scripting language without incurring the VM startup cost and losing the ability to export shell variables or create shell functions... Maybe it would make sense for someone to hack its REPL to duplicate most of what one would need on a command line.


Are you sure? I would be astonished if Erlang didn't have a way to export environment variables for child processes.

In a language that provides nothing beyond a function that lets you run system commands, you can control the child's environment via the env utility:

   system("env FOO=bar command arg ...");


The context here is that I may want to run an Elixir script from a Bash command line which results in some environment variables being changed. I don't think that's possible unless that script itself sends that env command to STDOUT, and I then capture it and eval it on the Bash side. Which seems... inelegant.


Hm, and not with `local bar` either, bit of a shame. I think at the point that mattered I wouldn't want to be using bash though! It's nice just as a grouping, 'this is related to that' sort of thing.


> 2nd response (…) And in this case, the first response

There’s a “Share” link under each answer that you can use to link directly to them. In this case it’s impossible to know which answer you mean because we don’t know what your “Sorted by” option is. But even then, the order changes over time.


This is going to sound snotty, and I'm not really trying to be, but... Unix and its derivatives were made for people who sort of knew what they were doing. The reason you pipe exceptional events to STDERR is so the STDOUT output, if it exists, can flow into the next command in the pipe. Asking for help is an exceptional event. If you want the error output of a Unixish (linux, macos, solaris, etc.) machine running bash to be lessable, re-direct it to STDOUT with `2>&1`. You probably shouldn't be touching the shell if you don't know what that does. These tools were developed assuming the users would have a basic understanding of the system they were running on.

The GNU Project has published tools of varying quality, based on who was around to write the tool, debug it, give feedback, etc. It is not the exemplar of high quality software. (But it's far from crap.) The important bit about GNU (and any other software) is that it was written to suit its authors' uses. Other people have different requirements. Telling people to "write your software like GNU writes their software" is to misunderstand personal agency and one of the major points of open source software.

Your comments sound like you're saying "Software freedom means you're free to write software the way I want you to write software."

No thank you.


Sure, but adding --help to a command means that the output of help is not exceptional, thus you're expecting it in stdout. If on the other hand you invoke the command incorrectly, and the author decides the best thing to do is print out the help in such cases, then yes, it should be stderr.


Let's take the cut command for instance. If you read the man page you discover its job is to parse fields out of each line of input. Printing a usage message to STDOUT is not part of its documented behaviour. It is therefore an exceptional event.


cut does the right thing:

- if you run cut with no input and no parameters, it outputs an error message to STDERR

- if you run cut --help it outputs usage info to STDOUT

This is what I'd expect based on the man page. Running cut with no parameters is undefined, hence the output to STDERR. Running cut --help is defined, so the output goes to STDOUT.

I think people get confused because some tools, when run without any input, output the full help info to STDERR, instead of suggesting the user run foo --help.

So foo and foo --help appear to be equivalent. Until you pipe into less.


> This is going to sound snotty

The word you’re looking for is “condescending”.

Regardless, there is no argument about software freedom to be made here.

You’re allowed, be it open or closed source, to write and publish software that defies common, well-established conventions.

You can pretend that it’s some sort of first amendment right to do so if you like, and attempt to deflect your unwillingness to write software that behaves properly as incompetence on the users’ end.

But whether anyone will be convinced by that is a separate question, and those who aren’t convinced certainly have the right to tell you, in turn, that your software sucks. This does not infringe on your rights to write broken software.


I believe you have completely misunderstood my "argument".


This is going to sound snotty, but arguments were made for people who sort of knew what they were doing. You probably shouldn't be touching the comments box if you don't know what it does. These thoughts were written assuming a basic understanding of the language they were written in.


> Asking for help is an exceptional event.

"Exceptional event" is not a useful or well defined concept. A better concept is "error" or "unexpected result".

Asking for help is a request for information. The normal, non-error, expected result is that a bunch of text will show up on the output. It is entirely reasonable that the "next command in the pipe" might want to do something with that expected output.

I shouldn't have to guess whether or not you think the output I specifically requested is "exceptional", so it's entirely reasonable to expect that programs in general consistently put user-requested help on stdout.

You are of course free to write your software any way you want. And I'm free to think it's stupid, and to not use your software.


If the next command in the pipe wants to do something with non-normal output of a command, then yes, it should do something with that. And the command can redirect stderr to stdout with the use of `2>&1`.

Yes. You're unlikely to like my software. I don't recommend you use it.


> Unix and its derivatives were made for people who sort of knew what they were doing. [...] You probably shouldn't be touching the shell if you don't know what that does.

You really do not need to be such a grumpy elitist. People are not born with Unix knowledge already in their heads. Asking questions, raising doubts, and getting answers from more knowledgeable users is a very effective way of learning new things!

With that said, can you make an example of a legitimate use of `command1 --help | command2`, where `command2` does something useful and is not `less`?


Sure. I don't need to be a grumpy elitist.

The problem isn't that you get a usage message when you ask for it. It's that you get a usage message (written to STDOUT) when you don't. Many commands will print out the usage message when command line options specify a condition that can't be met. I find this frequently when ssh'ing into busybox based systems. Busybox's find command is much less "refined" than comparable desktop OS finds (BSD & GNU/Linux).

So if I do something like:

  ssh me@foo.example.com "find . -type f | while read line; do sha256sum \$line; done" | tr -s ' ' | cut -f 2,1 -d ' ' | sort | uniq -c
And expect output that looks something like a sorted list of hashes, I will be sorely disappointed.


I don't mean to disagree with your first paragraph, but I do pipe help to grep (or rg) all the time. (And never to less, who uses a non-paging/scrollback terminal emulator in 2023? Why else would that be beneficial, just to clear it when I've read what I wanted?)


I use `less` with help output because if the help output is long, it starts me at the top of the help output rather than the bottom, and the top usually has a nice summary of the command usage that I usually want to read.

More importantly, I can easily find things by searching with less's `/` hotkey. Relying on the terminal emulator's built-in search isn't great because (a) I'm not used to it - I am more used to vim's keybindings, and the search hotkey `/` is the same in vim, and (b) that's also going to search all the output from before I ran --help (not as big of a gripe, but still somewhat annoying).


I can see how that's a reasonable preference. Though I fall in @OjFord's camp. I have a mouse scroll wheel and I'm not afraid to use it. But... I have to remember to hit return a few times beforehand because it's sometimes hard to find the top of help when you're scrolling up in the terminal.

And if your brain is wired for vi, then that makes complete sense.

But... the cool thing about using the scrollwheel to scroll up to see the --help output is that it's always there. If you pipe it into less, it disappears as soon as you exit less. So if you're writing a big, beefy command with lots of unfamiliar options, you can start typing down at the prompt and then scroll up to read the help output. It's annoying that when you type, you immediately scroll back down to the bottom of the terminal buffer; I think all terminal emulators default to doing this, but maybe it's configurable behaviour.

This also works with `man <command> | cat`.

Also... how many times have I had to type out `git branch -a | cat` and tried to remember to put the `| cat` in it. I HATE that the stock git cli automagically pipes to /usr/bin/pager. If I wanted to pipe the output to /usr/bin/pager, I would type `command | /usr/bin/pager`. But now I'm just kvetching.


Yeah ok now you say it I realise I do sometimes pipe help to less too. It's just that typically I'll run it bare or piped to grep/rg first, and potentially stop there. I too don't use terminal search for pretty much the same reason, and just never got into it for whatever reason.

(Alacritty has a tonne of great features I just haven't taken the time to learn the muscle memory to use.. ctrl-j/k bindings to scroll back and some custom patterns to open as URLs is about it.)


Exceptional events, yes, like when the help output is printed due to incorrect usage. But when calling with --help directly, it's the expected output.


> This is going to sound snotty [...] Asking for help is an exceptional event.

Yup, sounds pretty snotty.


> 2>&1. You probably shouldn't be touching the shell if you don't know what that does...

I use the shell almost daily, and have shipped industry-leading products.

2>&1 is a vague memory because I'm not sure if I've ever done that; and I certainly shouldn't have to know some arcane shell trick to read the manual.


It turns out that the right answer doesn't even depend on your level of shell knowledge or on how you tend to use the shell.

I use much more complicated pipe tricks than that interactively on a daily basis, and I definitely don't think of them as "arcane". As somebody who does that, it's useful to me to know which channel the data I want to pipe are going to come out on. Which is why help, which is normal requested output, should obviously go to stdout.

Usage messages issued in response to actual user errors are different, of course.


Sure. But the comment wasn't "You probably shouldn't be touching the shell if you haven't shipped industry-leading products" it was "You probably shouldn't be touching the shell if you don't know what that does."

Also, if you needed to use it every day, I suspect it would be more familiar than a vague memory.


I use the shell almost daily.

I don't pipe command output daily, though. I use the shell because often it's easier than point-and-click for a lot of operations.

The shell has plenty of use cases that don't involve piping output. Saying you shouldn't touch the shell unless you understand piping output is like saying you shouldn't touch a refrigerator unless you know the perfect temperature to store milk.


I am ambivalent. I see the value in what you're saying, and we owe so much to the ad-hoc accumulation of what we now call the gnu toolset.

On the other hand, a system that has been designed coherently is much nicer to use.


I think this might be the crux of my discomfort with the OP's exhortation that we should all adhere to his preferences. Unix and its derivatives are great because there's a long history of people trying to do things, finding it hard, and then slathering on a layer of functionality which is invoked through abstractions that are novel and probably inconsistent with those abstractions that came before. You don't need to ask anyone's permission and you won't find people saying "meh. you shouldn't do it that way." (okay. maybe a few, but you can ignore them. The Unix(tm) police won't show up to cart you away like what happens with VMS.)

But the flip side of this is, yes, cruft.


The reason you pipe exceptional events to STDERR [..]

Uh-huh. And then you get developers that do this (from inside a f#@<ing library, of all things):

  FILE *stream = (level == LOGGER_LEVEL_ERROR) ? stderr : stdout;
If it's not an error, it is not exceptional, so it should go to stdout, right? Yay for personal agency!


I'd be more convinced by a technical argument than this appeal to "personal agency".

Is there no value in following a convention?


Sure. But the benefit of open source is you can do whatever the eff you want. I just don't like being told "I have a use case that dictates your behaviour. You need to follow that use case, even though it's not your use case and contravenes an existing convention."

This is sort of my hot button issue. After years of working on BSAFE, OpenSSL, firefox and libnss, I hate that people say "Hey. Great software. Here's a list of things you must add to it. Of course I'm not going to pay you."

Why should I change my code to adhere to someone else's conventions when they're in opposition to existing conventions?


print --help to stdout, but if the usage is part of an error (ex: because of bad arguments), then print it to stderr.

Many tools, for consistency or out of laziness, always print usage to stderr. But that is better than always printing it to stdout. Errors should never go to stdout, and paging stderr can easily be done with 2>&1.

Edit: and maybe, if your --help output is several pages long, consider leaving the details to a manpage.


You should have a man page regardless of the length of your help.

But it's also really useful to be able to get a full synopsis of all the options even if all you have available is the binary. Some programs have a lot of options. The "--help" output for rsync on my machine is 184 lines, and is actually pretty terse.

... and there truly is no agreed-upon idea of what constitutes a "page". Even the VT100 screen size was never dominant enough to always count. And nowadays people's windows may be of almost any size.


What's a page? I set all my terminals to 22x23. Will --help run a few quick ioctls to calculate screen size?

But yeah, agree. It's way more preferable to have surprise output on stderr than surprises mingling with stdout, and it's good to be prepared for that.

Frankly, I nearly quit piping to a pager when GUI terminal backscroll became easy and infinite.


> What's a page?

80x25

> I set all my terminals to 22x23.

You are a very silly man and silliness should not be catered for.

> Will --help run a few quick ioctls to calculate screen size?

Your terminal can wrap text around just fine. If it can't, ask for a refund.


You sound like someone who’s never programmed on an ADM-3a much less an ASR-33


Actually, the ADM-3A automatically wraps overflowing text, similar to what you would expect from a terminal, unless you disable the "Auto NL" switch; and it is 80x24 characters, unless you got the basic 80x12 variant without the "24 lines" option.


In college we used to print out "one-page" ASCII art on the LPs. Naturally, it could get rather risqué. But I would just stare into them, marveling at the ingenuity. Of course I'd already been into it awhile, thanks to C64 Print Shop.

Also my father produced tonnes of scratch paper in narrow strips. I found a way to weave them into a perpetually-growing "tractor-feed snake" which I placed in a padded crib and brought to grade school to show off. I would typically juggle a few bits and bobs to attract even more attention...


Got any links? You know, to revel in the ingenuity.


Sure, what is the standard URI format for a really long piece of tractor feed paper?

I'm sure you could dig around into my hometown's landfill for awhile. They're only 30 years sediment. I'll send you a tarp, shovel, duct tape to stick the pages back together, and my sister's old clothes to wear while you work. DM for deets.


You got us <3


I think the suggestion was not that the tool should page the output, but that the user can easily do so:

    tool [args] 2>&1 | less
Presumably "less" knows how big your terminal is.


btw, as of Bash 4.0

    tool 2>&1 | less
can be replaced by

    tool |& less


  $ /bin/bash --version
  GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin21)
on the off chance anyone was curious/didn't remember. Yes, I have $HOMEBREW_PREFIX/bin/bash and it's 5.2.15

In that same vein, I wondered if that syntax was supported in zsh since modern macOS went whole hog and ... I gained one more steaming turd on the huge pile of reasons why I detest zsh

  $ /bin/zsh -exc '{ echo alpha; echo beta; } |& wc -l'
  +zsh:1> wc -l
       4
dafuq?

  $ /bin/zsh -exc '{ echo alpha; echo beta; } |& cat'
  +zsh:1> cat
  +zsh:1> echo alpha
  alpha
  +zsh:1> echo beta
  beta
oh, aren't you just the funniest. har. de. har.


I'm not sure that I follow: you chose options "-exc" so:

  XTRACE (-x, ksh: -x)
  Print commands and their arguments as they are executed.
 
So if you use "zsh -ec" does it still surprise you?

-x is the most invaluable tool in my shell-debugging toolkit. It is great to see every command evaluated and run alongside the script output itself. I use it multiple times per day at work.


But in an ideal world, if every tool is careful to restrict it to one page or less, you'd literally never need a pager anymore.

Alternatively, tools could build in pagers for fewer pipeline surprises, like the all-encompassing systemd.

Even better, I could envision a framework where any tool that produces output can be automatically subject to pagination, with auto terminal detection and the whole bit. Think of libreadline but for output. You could thereby eliminate plenty of ad hoc hacks.
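In shell, at least, the building block is tiny (a sketch; `generate_output` is a placeholder, and it assumes $PAGER or less is available):

  page() {
    if [ -t 1 ]; then "${PAGER:-less}"; else cat; fi
  }

  generate_output | page    # pages only when stdout is a terminal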


No, please DO print --help output to stderr, and exit with a nonzero status while you're at it. I mean, please do that if you have to make a conscious choice; there's no point fighting if you're using an argument parsing library.

--help, when used correctly, is almost always interactive, where stdout/stderr and exit status don't matter at all. The few noninteractive uses like help2man or zsh auto-parsing can trivially handle a redirect. Sure, a noob piping --help to less may be confused the first time, but that's rare and it's a good chance for them to learn about streams and redirection.

That leaves accidental noninteractive usage. Sooner or later someone will call your program with dynamic arguments from another program, and if your command accepts filenames/IDs there's always a chance of encountering one that starts with '-' and contains an 'h' (a practical example: YouTube video IDs). It's very easy to forget to add -- before the unsafe argument(s), so that it's accidentally interpreted as flags. Nonempty stdout, empty stderr and a zero exit status make it way too easy to accidentally accept the output as valid, only to discover the mistake much later.

This is not a theoretical concern; I've made this mistake myself and had it masked by -h behavior. A noob only needs to learn redirection once, in a totally harmless setting. Meanwhile, even the most seasoned expert could forget -- in a posix_spawn.

At the end of the day, this is not a big deal, but as I said, if you have to make a conscious choice, make the one that makes accidental mistakes more obvious, because humans do make mistakes. This principle applies everywhere.
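For what it's worth, the caller-side defense is the `--` end-of-options marker (a sketch; `sometool` and the ID are made up):

  id="-hxxxxxxxxxx"     # e.g. an ID that happens to start with '-'
  sometool -- "$id"     # without the --, the ID would be parsed as options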


Although unconventional, this makes sense.


No, keep --help on stdout. There's a good chance we want to pipe it to grep or less and having to write the 2>&1 incantation is a perfect way of scaring newbies off


I always write my usage printing routine to take a FILE pointer, similar to the following,

  static void usage(const char *arg0, FILE *fp);
and then at the very end of main's getopt switch something like,

  case 'h':
    usage(argv[0], stdout);
    return 0;
  default:
    usage(argv[0], stderr);
    return EXIT_FAILURE;
  }
By default getopt prints (to stderr) a one line message for unknown options or missing arguments, so I usually don't bother catching ':' or '?' explicitly.


If you used --help, then decided to pipe it to less, and it disappeared, then you can't have been confused for too long. But agreed that an explicit --help should print to stdout.

However, if you used the tool incorrectly (passed the wrong args) and you expected the usage information to go to stdout rather than stderr, I would disagree vehemently. stdout is (generally) for parseable information, whereas stderr is kind of a garbage bin of everything else.


"But agreed that an explicit --help should print to stderr."

I think you meant stdout here, not stderr.


Fixed, thanks!


I love this answer! Indeed, we cannot put --help output on stdout, because sometimes the help appears when we pass wrong arguments, so we should output it to stderr, so we can see that we made a mistake.


I disagree. stderr is a misnomer, in modern usage it has effectively become stdlog/stdinfo. The only thing that should go to stdout is the result of the program.

For example, many programs will print usage/help when used incorrectly. Imagine you upgrade the "read_reactor" tool, and your usage in your "control_reactor" becomes invalid - suddenly you're piping help message data to the control rods. By sending it to stderr instead, no bogus data would be piped and, as a bonus, you would see the help message after invoking your script because (as you have experienced) stderr is not piped by default.

If you want to send it to less: read_reactor -h 2>&1 | less


Modern usage is a misnomer. When invoked with --help, help is the result of the program.


I disagree; the program's purpose isn't to show the help. Asking directly for help is one of the things on the path to getting the desired output, along with getting an error.

If you're following such and such standards that says it should go to stdout, it should go to stdout though. I don't take that as a given.

I agree with this remaining open after all of these years: https://github.com/commandlineparser/commandline/issues/399 OP should add 2>&1 before the pipe or replace the pipe with |& (bash) or &| (fish)


When invoked with --help, the program's purpose is to show the help.



Your reply doesn't take into account the second paragraph of my comment.


> If you're following such and such standards that says it should go to stdout, it should go to stdout though. I don't take that as a given.

This is being pedantic. If 99% of programs operate a certain way (GNU coreutils or their BSD equivalent), while not dogma, it becomes convention.

You need a better reason to break convention than whatever rationalisation you seem to have against it.

The comment in the linked github issue does not present any advantage to using stderr (lack of buffering, really?) yet they completely ignore convention. Quit being fancy, be a good GNU/BSD citizen.


Bingo. The arguments for making Unix harder than it has to be by sending help to stderr when the user has asked for it are the kind of stuff that gives Linux its bad reputation.


AFAIK all standard tools on typical Linux distributions send help to stdout when it’s been asked for, and to stderr when it’s a response to a malformed command. Third parties can make their software do whatever they want, but that’s hardly “Linux”’s fault.


When passing dash dash help, the help is the output of the program, so it should go to stdout. When help is printed because the invocation was invalid, it should go to stderr. This is what most programs do, btw.


> Imagine you upgrade the "read_reactor" tool, and your usage in your "control_reactor" becomes invalid

As others have noted, that's an error which shouldn't go to stdout.

But help text is not an error. It's arguably the expected and primary output of the help function.


... Is there a concept of a STDLOG or a STDINFO? (STDDEBUG, etc.)

These sound useful, in any event.

I'd also like to be able to pipe both STDOUT and STDERR to the next in a sequence of pipes, but eh


You could use fd 3 etc., but that's non-standard. 2>&1 does pipe both into stdout; you would have to do 2>&1 1>/dev/null (in that order) to get only stderr going to stdout.


syslog(3) is a sort of std{log,debug}. You can tee it to a vt if needed. (Of course, with systemd you have to weed through configuration.)

But stderr was designed to be seen on terminal regardless of piping or logging. It’s its purpose, so that a pipe user could see what’s wrong or what’s up. There may be a programmatic need to read stderr, but mixing it with stdout is only needed with programs that use these descriptors incorrectly.


|& pipes both


Yes! And |& is so much easier to remember than 2>&1 or is it 1&>2?!

I'm sure there's some obscure reason why |& isn't the one people suggest first but when I learned about it recently it was hugely useful.

(And if I haven't learned the correct ordering for the > variant by now I think it's fair to assume I was never going to do so...)


> I'm sure there's some obscure reason

|& isn't POSIX. As for the redirection order, if you haven't learned it yet, learn now!

These numbers aren't magic. 0 is stdin, 1 is stdout, and 2 is stderr. These are the file descriptors you get for free on Unix and on Windows. Stdout (1) from the previous process goes to stdin (0) of the next process in a pipeline. So, if you want less to see the previous process's stdout (1) and stderr (2), you just need to tell the shell "send stderr to wherever stdout is going right now". That's exactly what 2>&1 means.

A fun caveat about this syntax is the difference between these two:

    ls >/dev/null 2>&1
    ls 2>&1 >/dev/null
The first one tells stderr to go to /dev/null ("where stdout is going right now"), and the second one sends stderr to the next process and stdout to /dev/null.


> > I'm sure there's some obscure reason

> |& isn't POSIX

POSIX, the obscurest of reasons! :)

While I appreciate you taking the time to explain the details, it's not that I haven't tried to learn/remember the specifics but that I tended to use it so infrequently that I'd forget in between uses--and even while knowing the specifics in terms of file descriptors & redirection syntax I would claim the ampersand placement/usage isn't exactly intuitive.

Also, my use primarily tends to be interactive rather than scripting, so POSIX compatibility is less of a concern than whether I have to think about it--if I end up somewhere POSIX-compliant that isn't also bash, clearly I took a wrong turn and should just throw the whole computer out the window. :D

It might be a weird & non-POSIX pipe to die in but at least it's mine. :)

EPIPE


It didn't exist until "recently", but my mnemonic for 2>&1 is two goes to one, but not just one, but to the same place as one, so you have to dereference it with an ampersand.


Thanks for sharing, now I think I can't remember either approach. :D

But I'm glad the mnemonic works for you. :)


The problem with this is it just merges both. I want to be able to pipe both while keeping them separate.

I guess at that point you'd need 4 standard FD's: STDIN, STDOUT, STDERRIN, and STDERR.
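Bash can already fake that with process substitution (a sketch; `mytool` and both filters are placeholders, and it's not POSIX):

  ./mytool > >(grep 'useful') 2> >(sed 's/^/ERR: /' >&2)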


wait what? in what shell? (since that’s not POSIX)


Yes, errors (and perhaps debug messages, diagnostics, etc.) should go to stderr.

The output of --help is not an error message, it's the legitimate and expected output of the program when invoked with that argument.


In the meantime until every tool is rewritten to account for your recommendation, just add 2>&1 and stderr will redirect to stdout, so you can easily pipe it to other stuff


Or just use "|& less" to do the same thing more succinctly.


...if you happen to be using bash.


Ah bummer, every command I run needs to be backward compatible with Bell Labs' first release


Unlikely. Pipes weren't added until Version 3 Unix. https://en.wikipedia.org/wiki/Pipeline_(Unix)#History



On that same note, please do not exit with non-zero when --help is passed. Requesting the help is expected, and now I'm not sure whether the tool did some weird stuff to output the help text and one of those weird things failed, or someone was just too lazy and decided to exit with non-zero.


what really drives me crazy is when I invoke

    $ foo -h
I get "haha, you idiot, there is no such option -h, rerun with --help"

when I rerun with --help, it prints a worse than useless usage string and says "for more help, type --help-advanced"


Or when a man page is a stub that refers you to an info page.


For this reason I have a zsh function in my .zshrc with bat (which pages by default, if it's longer than your console height):

https://github.com/sharkdp/bat#highlighting---help-messages

  # in your .bashrc/.zshrc/*rc
  alias bathelp='bat --plain --language=help'
  help() {
      "$@" --help 2>&1 | bathelp
  }
This highlights the help output with colors so it looks nicer. It works with most help outputs, highlighting the flag/argument in one color and the description in another.


I agree this makes sense. The help text is the primary output of the program in this case, so it should be stdout. (However, if you want longer documentation then a man page might be better anyways.)

You would use stderr for status messages, error messages, and other stuff that is not the primary output of the program. (In some cases, this might include help text; like another comment says, if you specified wrong arguments (not --help) then a short description might be a part of the error message.)

One program that does write error messages to stdout and that annoys me significantly is Ghostscript. (Although you can tell it to write it to stderr, doing that causes all output to be written to stderr; I want output from "print", "==", etc to be written to stdout.)


Here's a funny bug caused by stderr being used for information.

Windows Powershell gives you the detail of git push in scary red text.

https://stackoverflow.com/questions/12751261/powershell-disp...


Also please do always exit with some non-zero status on error. Looking at you, Fedora Python tools ...


As many others have said, please disregard this advice.

OP (and others), if you're using zsh or bash, just use |& instead of |. That's all you have to do

    command --help |& less
It's shorter than 2>&1 and it does the same thing.


Good tip, but what the OP said is still valid. You shouldn't need to speculatively use |& then redo when you get interleaved rubbish.


I think that's personal opinion though. IMO, stdout is used for anything that might make sense to pipe to another program. Help output doesn't generally make sense to send down the line, so don't send it to stdout. Fine if you do, but a general rule should be human consumption is sent to stderr.

Also I wasn't suggesting to speculatively use |&, I was saying to always use it when adding --help, so you don't have to speculate


This feels like an easy one.

Not quite the same, but I really dislike programs which log status updates and more fundamental output to the same place.

IIRC, ffmpeg is like this with everything going to stderr, making metadata parsing more difficult than it needs to be.


> IIRC, ffmpeg is like this with everything going to stderr, making metadata parsing more difficult than it needs to be.

If you're trying to parse metadata, ffprobe has a set of options for structured output to stdout. Parsing this will be dramatically easier than whatever you're doing.

    ffprobe -print_format json -show_format -show_streams example.mp4
(There's more -show_stuff options, but they're probably more detailed than you want. Run ffprobe --help for details.)


> Run ffprobe --help for details.

But we all wanted to know, does it stdout or stderr? :-D

(It prints help to stdout, and a bunch of junk to stderr)

  $ ffprobe --help 2>/dev/null
  Simple multimedia streams analyzer
  usage: ffprobe [OPTIONS] INPUT_FILE

  $ ffprobe --help >/dev/null
  ffprobe version 6.0 Copyright (c) 2007-2023 the FFmpeg developers
    built with Apple clang version 14.0.0 (clang-1400.0.29.202)


Sometimes you just want to quickly look at ffprobe's natural output, but the file has a lot of metadata or subtitles or whatever, and the stuff you want to look at is printed up on top, so the natural way is to pipe it to less or a file.

But no, you have to mess with stderr because the program's natural output without using the flags is an error, apparently.


In ffmpeg's defense, it is designed so that the video stream can go on stdout. I guess it could log to stdout if you specify an explicit output file and to stderr when you want the video on stdout. But sometimes being consistent is better than being convenient.


Is there any list of do’s and don’ts for CLIs? Common flags such as -h are used for help, but I’m not sure what other soft rules exist and don’t want to clobber well-established paradigms when coding CLI interfaces.


Even more fun, Python printed --version to stderr before 3.4, then to stdout.


Java (since version 9) has -version, which prints to stderr, and --version, which prints to stdout (also with slightly different content). Same for -showversion and --show-version, and, to get back on topic, -help, -h and -? print to stderr while --help prints to stdout.

https://docs.oracle.com/en/java/javase/17/docs/specs/man/jav...

But since the more standard double-dash variants printing to stdout were added with Java 9, I would actually laud it as good backward compatibility.


It depends.

Standard output is for program output. Standard error is something of an unfortunate name since it's actually a side channel for all non-output messages meant for the user to read.

So what's "output" anyway? Whatever the user asked the program to compute. If you pass a --help option, the help message is clearly the program's output because it's what the user asked for. If you use it incorrectly and the program prints usage information, the help message should go to standard error while the output is empty because the operation failed.


  use Getopt::Long;
  use Pod::Usage;
  =pod
  ... your script's documentation here...
  =cut
  GetOptions(
    'help' => sub{ pod2usage(-exitval=>0, -verbose=>99); },
  ) or pod2usage(-exitval=>2, -verbose=>99);
  =pod
  ... continuing here...
  =cut


I don't see why a developer would do this, nothing failed, so why print to stderr on `--help`? I've printed help contents with an additional `one or more arguments expected` line to stderr in my code, but I wouldn't think someone would take it this far.


Disagree here. It's also unenforceable so there isn't much point trying to make rules out of it.


Just because it's unenforceable doesn't mean you shouldn't make a rule. In fact it seems just the opposite, that's exactly the time to make a rule. If it were enforceable you wouldn't need a rule, you could just enforce it and have it be true.


It takes one popular tool breaking convention to make the rule obsolete. There's no point.


It's probably better to measure rules by success rate than pass/fail. One popular tool breaking convention sounds like one popular exception, which is probably better than no convention.


Agree or not (I don’t), we have loads of unenforceable conventions in software. Hell, if this field is known for anything at all, it’s people getting worked up over subtle deviations from longstanding conventions.


This always works for some foo:

  foo --help 2>&1 | less

I'm not sure this is good advice. Many want to show usage info when there's an argument error, and you want to reuse the help code with minimal complexity.


In Bash, just use `tool --help |& less` by default :)

Saves a few characters compared to 2>&1


I prefer that I don't need to figure it all out - that the tools pipe lengthy --help to $PAGER themselves. Or open a manpage like git does. I don't think this is ever used in a noninteractive environment.


My two cents: do not do that. Git invoking PAGER trips me up every time. For example, if I want to cross-reference terminal output and --help, the PAGER hides the entire terminal output.


Been really tempted to integrate PAGER support into the CLI library I maintain (clap, which is for Rust).


Question regarding this: should I print output to stdout and log to stderr? Or should only log levels Error and above go to stderr? What about structured logging? Where does that go?


The logging libraries I know of let you configure the log destination, for example, to save to stderr and/or a file and/or syslog or the Windows event log, etc.

Use the defaults for your logging library and support a config option.


Please learn to redirect output. Printing help to stderr will not mess up piped commands; instead you'd know right away that something's up.


Use:

  asdf --help 2>&1 | less
Even better would be if programs did it like git and opened their own manpage upon --help.


Oh please no. If I wanted a manpage, I would have typed man git, that's just as easy to type. I use tool --help to get a quick cheat sheet of available options. Opening a man page that I didn't ask for changes my context away from the shell I was working in. Please don't force context switches on unsuspecting users.


I just found out in this thread that you can use |& instead for newer bash/zsh versions, which is so much better than trying to remember 2>&1.


If you're in this situation, redirect stderr to stdout:

    $ tool-that-uses-stderr-for-help --help 2>&1 | less


So if you are admittedly a newcomer maybe you shouldn't try to tell people how to write programs?

Edit: I didn't mean to be rude, but from my own experience I know that a lot of times when you don't like something, it might be because you don't understand it, and therefore it might be more fruitful to ask why something is a particular way, and only afterwards ask people not to do it, if that makes sense.


FWIW:

    foo --help 2>&1 | less


2>&1




Search: