While writing this I'm thinking of my experience trying to do anything with ffmpeg or imagemagick... or even find.
* For any sufficiently complicated cmd line app, the list of arguments can be huge and the --help so terse as to become useless. For man pages, the problem is the opposite... the forest hides the tree! I'm sure we all end up using google to look for example invocations.
* Very often completion doesn't work, since custom per-app machinery is needed. For instance: git-completion with bash-completion.
* Sometimes I end up passing the help output through grep, then copy-pasting the flags from the output, and then hoping I got the right flag.
* ...how about things like regular expression parameters... always so hard to remember the escaping rules! (and the regex flavor accepted by each different app).
* Not to mention more complicated setups involving -print0 parameters or anything involving xargs, tee, and formatting with sed and cut, etc.
Is there a better way? Not sure. I like powershell a bit but some of the things I mention above still apply.
I think we may be able to get a workflow that is a bit closer to the tooling we use for writing programs while not being perceived as verbose and heavy (I'm thinking, the kind of workflow I get with a Clojure repl).
1. Auto-complete is by-design part of the language/shell
1. A pre-approved verb list that prefixes commands helps discoverability and usability no end. Still learning? Get-<anything> will literally never cause a problem.
Because of the above, get-help <function> can automatically create some fairly useful documentation right away, and whoever wrote the code didn't need to do anything. They can significantly add to the help, though. Because that mechanism is part of the language it's worth doing: literally every user is going to access it, not some blog you wrote 5 years ago that you hope is still online.
Passing objects rather than strings: whatever, maybe it's not the best way, but I think it's great. And that's not the only thing to learn from PS.
If anyone uses the cli and doesn't know how the PS help system works, it's certainly a breath of fresh air to learn.
About the objects vs strings: Kalman Reti (of Symbolics, IIRC) talked about how an OS passing pointers around could have a much easier time than one serializing everything as strings and then deserializing, especially when done ad hoc through sed/grep/perl/whatever. It pains me to see how Linux's basic utils are 30% --usage and 30% output formatting (and they all share this).
MS did a great thing with PS.
Remember that you can always use and create aliases, so you could still use kube-scale or kube-expose in the terminal if you want. But for scripts readability is the most important thing. Any newcomer can look at any PowerShell script and know what it does. And any newcomer could type in New-Kube, hit tab, and see what new things you can create in Kubernetes, instead of having to google what the command for creating a service is.
It may not cover your particular use case (I want to foo this bar), but as a restriction on the language it helps your users not have to discover esoteric commands.
Suppose the possible commands are (for the sake of discussion) `Get-AppLockerFileInformation` and `Get-AppLockerPolicy`.
If you type `Get-App` and hit Tab, it will autofill to `Get-AppLockerFileInformation`, which I just really don't like, and I need to keep hitting Tab to cycle through all the other possible `Get-App*` commands.
What I want is some UI that when I hit tab it autocompletes as much as possible until there's a decision to make, and then shows me the options. So in my example case, it would look something like this:
Type `Get-App` -> hit tab -> shows `Get-AppLocker` -> type "P" -> hit tab -> `Get-AppLockerPolicy` is displayed
bash-ish systems nail this, and they even have the "double tab" to list all options from that point which is normally pretty nice.
The PS idea of "keep hitting tab or shift-tab while you cycle through all possible options" sucks in comparison. Especially for discoverability (there are about 20 commands on my Windows system that start with "Get-App", and in order to figure them all out I just need to keep hitting Tab until it cycles around).
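For comparison, the behaviour bash users get comes from readline and can be tuned in ~/.inputrc; a sketch (show-all-if-ambiguous makes Tab complete the common prefix and list the matches instead of cycling):

set show-all-if-ambiguous on

PSReadLine can be told to do much the same thing: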
Set-PSReadLineOption -EditMode Emacs
Set-PSReadLineKeyHandler -Key Tab -Function Complete
And while I'm sure there's a way to make it persist, a shell's power is in its ubiquity. If I have to change settings on every machine I work with, I'm probably going to want to use that time to install a different shell.
I know that's an attitude that is pretty impossible for any new shell to solve for, but it's the truth. I'm not choosing the most powerful or "best" shell, I'm choosing the one that gets in my way the least. And for me, that's bash in 99% of cases. And until another comes along that really improves discoverability and makes it easy to learn and adjust to, I'm not going to switch.
I have a ton of knowledge built up over the years about bash and I'm not in a position where I'd be able to dump it all and painfully try to learn everything new from scratch by reading manuals, trying out syntaxes, searching around for if what I want to do is possible, figuring out the "right" way to do things, and more.
Even if it is better in the end, if I can't get over the initial hump, it's not very useful to me. Maybe this is just me getting old...
Firstly, that form of completion has existed in the Microsoft/IBM operating system world for significantly longer than PowerShell has, tabbing backwards and forwards around a list of matches being the way that one did filename completion with some MS/PC/DR-DOS tools back in the 1980s. Secondly, PowerShell ISE can do things differently, so this cannot be a PowerShell thing.
Crank up PowerShell ISE, enter "Get-App", then press Control+Space. For more, press Control+Space to complete an option after entering "-".
Being able to change defaults is nice, but the power of PS or bash is that they are ubiquitous. So anything that has to change defaults to be usable makes it kind of pointless in a lot of ways (at least to me). Because if I need to change defaults on every system I'm on, then why not just use that time to install a different shell? (which is pretty much what I end up doing on Windows machines)
I am running PowerShell 6.1.0 (which I think uses PSReadline). PSReadline is also very customisable, but this is the default behaviour.
Why can't a pipeline be a more complicated multi-io workflow? In a 2D GUI this would be trivial to construct and read, but in a 1D command line it would get confusing in a hurry. And the concept works much better with AV, I can easily construct and reason about complicated arrangements of audio and video inputs and outputs, with mixers, compositors, filters, shaders, splitters, etc. between them.
Instead we worship text. Is that because manipulating text is actually more useful, or because our tools are only good for working with text?
exact: in many GUI tools, you can have non-default settings that you changed via menus. Where are they stored? Which ones are currently active? Does it matter that you selected four objects first, then a transform tool, then another object?
> find /var/spool/program/data -name 'foop*' -mtime +3d -print
vs "open the file manager, go to /var/spool/program/data, sort by name, secondary-sort by last modification time, find the ones that are more than 3 days old, make sure you don't slip"
repeatable: OK, do that again but in a different directory.
transmissible: here's the one-liner that does that.
Now, your specific requests are about audio and video toolchains, where I will admit that reasoning about flows is easier with spatial cues -- but I'd really like the output of that GUI to be an editable text file.
And they scrapped it in WP6 for some shitty object oriented version that lacked all the "serializable workflow" of the previous one.
A classic CLI command composition (a.k.a. pipeline) example:
More shell, less egg:
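For reference, the pipeline at the heart of that essay is McIlroy's classic word-frequency counter (prints the ${1} most common words of its input; I've written '\n' where the original used a literal newline):

tr -cs A-Za-z '\n' |
tr A-Z a-z |
sort |
uniq -c |
sort -rn |
sed ${1}q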
After reading that post, I wrote about it here:
with a couple of solutions in Unix shell and Python.
Why you should learn just a little Awk (2010) (gregable.com)
A comment by me there:
What we need is a ubiquitous command interface where the output of any command is a tree-structured set of typed objects, and any command is free to consume any of those sets of typed objects. The typed objects in this pattern would not be limited to text.
This paradigm removes the restraints of GUI applications but satisfies a great many use cases with the obvious exceptions being any program that requires a lot of mouse or pointing device input like photo manipulation.
(The following paragraph is not directed at the author of the parent post. It's fully rhetorical.)
Did you know that icons were supposed to be live representations of in-memory objects? That objects were more fundamental for the OS than files? Did you know that windows were views onto those objects? Did you know that interactions to and between icons were synonymous with OOP polymorphism?
And this isn't the best UI, it's simply the first modern UI.
You don't. Your question makes as much sense as asking how to do polymorphism in shell commands. Not operating on data was the whole point of OOP. (Or at least one of the key points.)
"I wanted to get rid of data. The B5000 almost did this via its
almost unbelievable HW architecture. I realized that the
cell/whole-computer metaphor would get rid of data, and that "<-"
would be just another message token (it took me quite a while to
think this out because I really thought of all these symbols as
names for functions and procedures.)" -- Alan Kay
Look into Pharo or Squeak, at least watch some demos on YouTube.
man x | grep -P y -C 3 | less
I might suggest, however, that image/video manipulation is a case for which CLIs are uniquely unsuitable.
There surely are better ways (as you mentioned, Powershell appears to be a step or three in the right direction, though I have little familiarity with it), but it seems as though the greatest difficulty is in maintaining full backwards compatibility while still encouraging new applications to make full use of novel features, not to mention the effort that would need to be expended modifying old applications to fit new standards.
The goal of grepping in that snippet is that you're `less `ing through precisely the parts that matter, and no more, rather than wading through the whole man page, and you're using -C to control how much context you (think you) need around the search results. This is a much better setup for skimming through potential hits than going through the man page wall of text.
Meaning I usually struggle to comprehend what particular options mean rather than how they bind to CLI.
Maybe indeed a good intuitive UI would give more intuition about the more obscure options. But then again, a good intuitive UI is a quest of its own.
Which is kinda a CLI application.
It's qualitatively different from your average CLI experience. As such, I don't think you can actually compare the two.
I used to agree with you on this point, but then came Bender. Chatbots singlehandedly liberated the CLI from the terminal and made them ubiquitous once again.
So yeah, it's not just the terminal where we're creating CLI applications anymore.
> Cannot write to myfile.out, file does not have write permissions
> Fix with: chmod +w myfile.out
I actually much prefer:
"can't write myfile.out: Permission denied"
This shows the same information as the first two lines of the example combined, and the 3rd line is not necessarily the correct way to fix the problem anyway (e.g. you might be running it as your user when it should be root; chmod +w would not help).
If you are so convinced that chmod +w is the way to fix the problem, why not just do that and carry on without bugging the user?
And having each error confined to one line also means it's much less likely that some of the lines are missed, e.g. when grepping a log.
EDIT: And to add to this: it's sometimes useful to prefix the error messages with the name of the program that generated them, so that when you're looking at a combined log from different places, you know which of the programs actually wrote the error, e.g. "mycli: can't write myfile.out: Permission denied".
I agree that you can do lots of stuff suboptimally and still have a usable tool, but I disagree with the author on what the ideal error output looks like.
Ultimately all we're doing is trying to save human time anyways.
Joel Spolsky's timeless advice applies: "Users can’t read anything, and if they could, they wouldn’t want to." https://www.joelonsoftware.com/2000/04/26/designing-for-peop...
Yes, this is a clear break with the unix tradition! But it's not intrinsically bad - it's just a context to bear in mind.
Still, conceivably the app could check the file owner before showing the message. If it was a common enough error, it might be useful to do something like this.
There is a difference between an error title and error description. The description can and should be long to help clarify what is wrong. It's ok to be verbose. It's ok to spill out on multiple lines. You're much more likely to be helpful to a confused user than someone grepping logs and looking for terse output on a single line. That user can just use `grep -C` anyways.
I agree that it's useful to include the app name though.
Yes, this is an example, but surely you'd want an example that shows the strength of doing this verbosely? A good example of a verbose error message system is Rust's compiler output (or newer clang/gcc outputs). Being verbose for no reason other than to be verbose is just wasting the users' time (or they just ignore the spam -- which is what I would do if I used a tool that spammed me with multiple lines of output whenever it hit an -EPERM).
Personally, something like:
% foo ./bar
foo: write config to "./bar": Permission denied
% foo ./bar
foo: write config to "./bar": Permission denied
Hint: Have you tried <possible-recommendation>?
> You're much more likely to be helpful to a confused user than someone grepping logs and looking for terse output on a single line.
There are two sides to this. Outputting lots of text can also cause a user to get confused (if we're talking about making things easy for not-necessarily-technical users).
When teaching (high-school) students to program, we quickly learned that even somewhat verbose output like Python's stacktraces can cause students to suddenly become anxious because there's a pile of text on their screen telling them they did something wrong. Adding more text to output does not always help, and you should keep that in mind.
SYS0003: The system cannot find the path specified.
EXPLANATION: The path named in the command does not
exist for the drive specified or the path was
SYS0002: The system cannot find the file specified.
Explanation: The filename is incorrect or does not exist.
Action: Check the filename and retry the command.
Please don't. There is nothing wrong with interactive tools, but they should not be interactive by default. So instead of making a non-interactive session possible via flags, the default should be non-interactive; if there is an option to start an interactive session, everything is fine.
Otherwise, you would never know when your script could run into some kind of interactive session (and therefore break, possibly after an update).
(I'm going to assume the passive voice means "the article considers this more user-friendly" rather than some sort of commonly accepted fact).
I disagree with this strongly and agree with the GP -- I would much rather have the command exit with a message saying that a required parameter is missing. For example, if I have a script using a command and the command becomes interactive, then my script is dead; but if it simply exits then my script has failed at a repeatable point.
You could say that I should pass a "--noninteractive" flag into everything just in case, but sometimes these things aren't supported. I would much rather have an application support a "--interactive" flag to support those who want to be able to interact with the tool.
I think the two sides of this are unlikely to be able to convince each other. At least the article presents a reasonable-ish middle ground of always offering help as to the way to circumvent the interaction at the point where interaction is required.
ps. the verb "considered" is a good sign that this is an opinion, and would be even in a more "active" sentence.
A five minute pause sounds ridiculous to me, absolutely not user friendly from either end. It's just unpredictable and time-wasting. If you absolutely must, you can use 'isatty' to check whether stdin/stdout/stderr are connected to a terminal and act accordingly.
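A minimal sketch of that check in plain sh (the prompt text and exit code are just placeholders):

if [ -t 0 ] && [ -t 2 ]; then
    # both stdin and stderr are terminals: safe to prompt interactively
    printf 'value required, please enter it: ' >&2
    read -r value
else
    # non-interactive: fail fast instead of hanging a script forever
    echo 'error: required argument missing' >&2
    exit 1
fi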
There is some merit to having consistent and predictable behavior regardless of where and by whom the tool is invoked, though.
Let's say that I did test the cronjob but that it starts "failing" after an update to the tool. My fault, I know, but at least I get mail when it fails while I won't if it's just waiting for input.
Yes, but in one case the issue would result in a mail because the cron job failed, and in the other case the issue would just cause the process to hang indefinitely without notice.
I think it's a fine suggestion, though in 94% of cases probably too much work to be worth it.
EDIT: remove rogue 'not'
Um... why? That is literally the situation for which the pseudotty device was created.
There is a spectrum between "interactive" and "scripted" utilities, and the command line interfaces we're talking about sit balanced in between. There's no way to make everyone happy, more or less by definition. So I think "huge antipattern" is maybe spinning a bit too hard.
Unless you already know your users will want man pages, I wouldn’t bother also outputting them as they just aren’t used often enough anymore.
That's the first place I look for help, and it annoys me to no end when a CLI program doesn't come with one.
I stopped taking the article seriously at that point and quickly skimmed through the rest of it.
Man pages are a great Unix culture heritage; please, new developers, don't give up on them!
I really can't express strongly enough how much I disagree with the view that man pages are no longer relevant or necessary.
User friendliness is a feature, and like all features it's not always worth adding to professional tools.
From people who don't know how to use man pages.
Please, always ship man pages with whatever you write. It's /easy/ to do and has a great added value.
PS: doc on the web is so often irrelevant… either too old, or too recent — and in the rare cases where it's properly versioned, the workflow is something like “cmd --version ; google cmd $version” instead of just “man cmd”, which is nowhere as convenient or reliable.
I understand that man pages might represent a minority, but I cannot express enough how wonderful it is to get the full manual of a program without interfacing with the web. Not to mention how powerful that is, since most apps have short names that are difficult to search for, but how accessible that makes the application.
My (current) position is that they're useful, but not worth the extra effort for most CLIs. It's a cost-benefit thing.
I'm genuinely curious as I've never had anyone request man pages in our CLI.
* Web docs are a problem because I don't always have access to the internet when trying to do something on my computer, and usually there are so many kinds of web doc generators that you have to figure out how the information you want is laid out. Web docs are useful as a quick-start guide or a very lengthy reference guide -- but not for the common usecase of "is there a flag to do X?"
* In-CLI docs are a cheaper version of man pages. In most cases, the output is larger than the current terminal size so you end up piping to a pager (where you can search as well), and now you have a more terse version of a man page. Why not just have a man page?
Man pages are useful because they have a standard format and layout, provide both short and long-form information, and are universally understood by almost anyone who has used a Linux machine in the past. "foo --help" requires the program to know what that means (I once managed to bootloop a router by doing "some_mgmt_cmt --help" and it didn't support "--help" -- I always use man pages now). One of the first things I teach students I tutor (when they're learning how to use Linux) is how to read man pages. Because they are the most useful form of information on Linux, and it's quite sad that so many new tools decide that they aren't worth the effort -- because you're now causing a previously unified source of information (man pages) to be fractured for no obvious gain.
I still add support for "--help" for my projects (because it is handy, I will admit) but I always include manpages for those projects as well so that users can actually get proper explanations of what the program does.
> I'm genuinely curious as I've never had anyone request man pages in our CLI.
Honestly, I would consider not using a project if an alternative had man pages (though in this case it would be somewhat more out of principle -- and I would submit a bug report to bring it to the maintainers' attention).
Some applications (e.g. Git) make "--help" redirect to man. What do you think of that?
Not to mention that using "--help" for man pages requires I open up a separate window when I typically just want a quick reference to the most used flags.
Moving man pages to a different command is like coming up with an alternative icon to the hamburger menu for your regular UI. Sure, all the functionality is still there, but it requires a full stop and search to remember where to find it.
- works without internet: very important when you want to use a long train ride to write some code (I also have the entire rust-doc and all IETF RFCs on my disk for quick referencing)
- quick and reliable access to known items: `man ascii` is way quicker than finding an ASCII table on the web (probably on Wikipedia). And finding the syntax for an obscure bash feature is way easier when your search is confined to `man bash` rather than to the entire web.
- Don't know how to label this, but I like that the manpage is a complete documentation of one tool, unlike a disconnected set of Stack Overflow questions. That allows one to cursory read through the manpage to learn the scope of what that tool can accomplish.
Notice that if you already have help, you can build the manpages automatically from them using "help2man". You could get manpages for all your tools by simply adding a line into your makefile!
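That line might look something like this (./mycli is a hypothetical binary supporting the --help and --version flags that help2man invokes):

# --no-info suppresses the pointer to a Texinfo manual
help2man --no-info --output=mycli.1 ./mycli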
It's good manners for a CLI app you want installed on someone's system to also integrate with the help infrastructure of that system.
I don't really care about documentation on the web. In fact I'd rather you simply treat the man pages as the single source of truth and put the man pages on the web. Sort of like https://linux.die.net/man/
I expect the -h flag to give me a summary of the flags and arguments, to remind me of the particular name of the flag I'm missing. I most certainly don't want the whole documentation there, partly because the whole documentation is (presumably) large enough to scroll my history off screen.
So, yes, man pages are definitely more important than web or in-cli docs.
I'd be just as happy with bundled HTML documentation.
I myself write doco in DocBook XML, generate HTML from it that can be read directly, and generate roff for man pages from it as well.
I'm not saying they're not useful. If you've got plenty of time to write up docs, go ahead, but the reality is we only have so much time and I think we should spend our time writing in-CLI docs and web docs before we start man pages.
Also, you don't need web access to use in-CLI docs either, and that works on all platforms.
Having said this, I do plan on having man pages be an export type of the oclif docs (which is currently in-CLI and markdown). I intentionally made the output very similar to man pages already so it should be relatively easy to do.
Also, you briefly say a few things about CLI apps using a remote API; you may want to add to that and say a few things about the proxy environment variables. These are indispensable for corporate users. I think some early, early version of npm didn't respect the no_proxy environment variable, and for http_proxy and https_proxy it required some arcane combination of: proxy in a flag, proxy in a config file, proxy environment variable set. It really should be an OR, not an AND...
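For reference, the conventional trio looks something like this (hostnames made up):

export http_proxy=http://proxy.corp.example:3128
export https_proxy=$http_proxy
export no_proxy=localhost,127.0.0.1,.internal.example

Some tools only read the uppercase HTTP_PROXY/HTTPS_PROXY/NO_PROXY variants, so the friendliest CLIs accept both.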
Last but not least, another annoying thing was tools changing their config format or location. I think it was docker that changed their config file format and/or location like two or three times. Absolutely infuriating.
We get away without using any config files in the Heroku CLI which is certainly preferable. (Well, there is a config file, but I don't think anyone's using it and it's undocumented. I think all it can do is disable colors) Config is another topic that I do think would warrant its own article as well. I may not be the best author though as we've tried to avoid config. (Though it's a common enough problem I do want to solve as generically as possible in oclif).
As far as automatically building man pages, I still think that's a wasted effort. Nobody has ever asked for or even mentioned man pages in our CLI. Setting up a build process and distribution is considerable effort and maintenance burden.
If the users of your CLI want man pages, then of course build them. In my experience though, that's not what users want. Though it's important to note that a CLI that interacts with a cloud service is pretty useless without internet.
If no internet is the only compelling reason to support man pages, I'm still not convinced it's a better use of your time. The docs should already be available offline in the CLI itself.
I disagree about 11 (using "main_command sub_command:sub-sub_command" rather than "main sub sub-sub" syntax), but it's mostly a matter of taste.
Seriously, though, if you've already taken the time to write documentation, then there's no reason not to also generate a manpage. Just using pandoc to convert your, say, README.md gives good-enough results:
pandoc -s -f markdown_github -t man -o your_cli.1 README.md
(There probably are other good conversion methods.)
Why I like man:
Advantages over online docs:
It's offline and available directly in the terminal, without having to open a browser and it has a distraction-free, clean look. The only slight disadvantage is the lack of support for images, which are occasionally helpful, but in a pinch, for some use-cases, you can have ascii diagrams.
Advantages over "--help":
1. Conventionally, "--help" just provides a brief rundown/reminder of the options, so having full documentation is valuable.
2. If "--help" provides the full docs then:
a) You lose the option of having the brief rundown, which is also very valuable.
b) "man command" is slightly faster than "command --help" :p (yes, it is a slight pity that accessing the full docs is faster than accessing the brief version, if you use convention).
c) man deals with things like having nice output, with proper margins, at different terminal widths.
d) man deals with the formatting for you, providing consistency with all other applications.
FWIW I think that texinfo is (mostly) even better than man, as it considerably improves on the navigation, but it's been crippled by the FSF-Debian GFDL feud, which meant that the info pages weren't actually installed on many systems, and it's mostly a lost cause now.
You can have the best of both worlds if the manpages are built automatically from the "--help" output (e.g., using help2man). Then you can have "-h" give a brief rundown and "--help" give the full docs.
> FWIW I think that texinfo is (mostly) even better than man, as it considerably improves on the navigation
I am curious about that. Do you really like texinfo navigation? I find it completely unusable, to the point of preferring to download and print a PDF from the web instead of opening (gasp!) the dreaded "info" program.
Also, I expect man pages to be more detailed than --help.
(The article sort of touches on this with its mention of `mycli subcommand --help` as something that should show help and `mycli subcommand help` as potentially confusing `help` with an argument. But I find that making help a full subcommand tends to avoid this ambiguity. And having the bare help command give the table of contents lends structure to the help system.)
I have future plans for oclif to take this a step further and make the help contextual based on what you're working with. For example, if you wanted to see what commands might relate to a file you might get different commands than a directory.
I'm so glad to see this included. I don't like $HOME being cluttered with .<app> config directories, but worse than that, far too many apps, when releasing on macOS, say "Oh, Library/Application\ Support/<app>/vom/something is the standard config location on Mac, so I'll respect XDG on Linux, but on Mac it should go there." No! That's such an unfriendly location for editable config files.
Regardless, I think it probably makes sense to have a uniform interface for getting said directories, so that however the OS decides things should be laid out, the developer just needs to `local_config_dir(app_name)`. If the user (or at least administrator) can decide between <app>/<function> and <function>/<app>, all the better.
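The XDG half of that lookup is nearly a one-liner in shell; a sketch, with "myapp" as a hypothetical app name:

# respect $XDG_CONFIG_HOME, falling back to ~/.config per the XDG Base Directory spec
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
mkdir -p "$config_dir"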
Do you have to use ~/.config/app/ or could you just use ~/.config/app.conf?
If anyone can verify that I'm either right or wrong here that would be helpful.
> Your app is responsible for cleaning out cache data files when they are no longer needed. The system does not delete files from this directory.
However, many third-party tools delete from that directory with minimal caution if any, so it's a good idea to consider it ephemeral.
In general, if I'm targeting OSX, I use those locations as recommended, but make symlinks to relevant config/etc. from there to the XDG-suggested locations.
The one thing this does miss is distribution, which is a HUGE part of offering a great CLI app. Specifically, I'd say:
1. Make your OFFICIAL distribution channel the primary package manager on each platform (ex: on Mac, homebrew. Ubuntu, apt/snap). Beyond that, support as many as you have capacity to.
2. Also offer an official docker image which fully encapsulates the CLI tool and all of its dependencies. This can be a great way to get a CLI tool loaded into a bespoke CI environment.
In my experience, Homebrew always eventually results in pain and complex debugging and it's almost impossible to audit software it installs to prevent the installation of prohibited or dangerous software.
It's really not that hard to build a .pkg file and developers that want to properly support the Mac platform should go down that path before offering Homebrew support.
How so? I’m not aware of anything built-in that allows for this.
The top hit on Google for "os x pkgbuild" is a link into Apple's documentation that 404s. (Further Googling turns up some blog posts, and man pages, so that's good.) Does this support dependencies? How do I get updates to users? How does a user receive updates?
I just wish everyone would take a day and read an intro to nix/nixpkgs and the world would really be a better place. There are so many "popular" hyped tools these days that can barely do a fraction of what is going on in the Nix ecosystem, but it doesn't seem to get the hype that brew, buildkit, linuxkit, etc all seem to get.
Per my original comment, nix can do this, for example, and already has an enormous number of packages packaged, pre-built/cached, ready to go.
I'm not that much of a devops/containers guy, but I'm not aware of a linuxkit alternative in the Linux world; care to elaborate?
I can create nix derivations that look like a linuxkit yaml, but instead of having a bunch of opaque sha256 hashes to some container in them, it has symbolic references to packages that are defined in my nixpkgs repository. This nixpkgs repository includes package definitions for basically everything in a distribution. From it, out of the box, you can issue single commands to build: VM images, container images, images ready to deploy on GCE/AWS/Azure, all from a single set of package definitions.
This means I can issue a single command that will output a VM image (or a container) that includes the exact revisions of all of my software, down to the kernel options and compiler flags. It makes it trivial to take in patches for critical components and rebuild a base image. No cloning extra repos. No build, container build, push, grab sha256, copy sha256 into a yaml file. Just specify the patch, hit rebuild, done.
You can do this for VM images, specifying the total system configuration - how you want etcd/kubelet/etc running, for example. One command and you have a bootable Azure VHD. You can then use the same tree, or maybe a different branch, and declaratively(!), very minimally build the most optimal container images that you then have deployed to Kubernetes or wherever.
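As a sketch of how that feels day to day (image.nix is a hypothetical expression using nixpkgs' dockerTools.buildImage, which leaves the image tarball at ./result):

# build the image from pinned package definitions, then load it into docker
nix-build image.nix
docker load < result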
And you can be sure that you can build this exact configuration in 1 year, 2 years, 3 years, etc, due to how Nix works.
I hope I've done a somewhat okay job of explaining this. I'm trying to take some time and write a "container oriented look at why Nix is cool" guide soon too.
Consider a usage line like this:
prog <user> [password]
Also, an example like this:
git add <pathspec>...
git add src/*
Consider an interface like this:
prog [--rm] [--name <name>] [args...]
prog --name --rm -- --name foo
"args: ["--name", "foo"]
By definition any CLI that accepts variable args is fine here as it's all the same type.
The -- is a great point as well. It solves a lot of problems users have, but a lot of the time people don't even know about it. It solves issues with `heroku run`, for example.
EDIT: updated to clarify my point
Or if they specify `NO_COLOR`!
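Honouring that is cheap; per no-color.org, the mere presence of the variable (any value, even empty) disables colour. A sketch in sh:

# ${NO_COLOR+set} expands to "set" whenever NO_COLOR is set at all
if [ -n "${NO_COLOR+set}" ]; then
    use_color=0
fi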
With interpreted languages with language-specific package managers, you have to:
1) Install the language
1a) Possibly have to install a language version manager (rbenv, pyenv, etc)
2) Install the language's package manager
3) Install the CLI utility via the language's package manager
Here's the order in which I think CLI maintainers should strive to make their utilities available:
1) Install via OS package manager
2) Install via prebuilt release with OS-specific package, from hosting site (GitHub, etc).
3) Install from source
4) Install via language-specific package manager
5) Install via curl | sh :)
The awscli is just terrible in this respect. There's no man page for 'aws' so I say "aws --help". It then literally tells me "To see help text, you can run: aws help". OpenShift's 'oc' sucks at this only a little less, with no man pages and for some inexplicable reason you can only get a list of global options in a dedicated global options help subcommand instead of at the bottom of every help page. The documentation system for 'git' on the other hand is a work of art. Pure beauty.
Using tables, colors and other stuff requires a lot of terminal support. The macOS terminal, iTerm, and Linux terminals support a lot of stuff, but not always (our team generally uses XTerm, for example). Implementing these is acceptable if there's robust code detecting terminal capabilities and falling back gracefully, without treating these more streamlined terminals as lesser citizens; and that requires a lot of development, head banging and maintenance. If you're accepting the challenge, then go on.
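A minimal sketch of such a capability check (tput prints -1 or fails outright on colourless terminals, which the fallback catches):

# colour only when stdout is a tty and terminfo reports at least 8 colours
if [ -t 1 ] && [ "$(tput colors 2>/dev/null || echo 0)" -ge 8 ]; then
    use_color=1
else
    use_color=0
fi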
BTW, that unicode spinner is nice. Very nice.
The article does specifically recommend using stderr for messages to the user, is there any reason that would kill scriptability?
stdout/stdin are reserved for user interaction, informational messages (md5sum), actual output (ls, less, etc.), and the like. OTOH, stderr is explicitly for error messages only, and it's very useful (and necessary).
A real use case from my daily job: I'm an administrator of a large system (approx. ~1K servers), and I have a substantial amount of cattle, and a lot of pets. All of these servers have cron jobs and other automated tasks on them. All servers can mail a local mail server to report problems to us, so our cron automatically mails any output to us.
All the tools we use, and scripts we write have the following properties:
- If everything is OK, they are silent by default.
- They output to stderr, if anything notable happens.
- Also we copy (think tee) all stderr to their respective logs (both local and on a centralized server); see the sketch just below.
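That tee pattern, sketched in bash with a made-up task name and log path:

# silent on success; anything notable goes to stderr, with a copy appended to the log
mytask 2> >(tee -a /var/log/mytask.err >&2)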
Now consider the problems if a tool ignored these conventions:
- The (e-mail, log) noise if all the tools were writing their outputs to stdout.
- The work required to silence all tools. What if they don't have any --quiet switches?
- Furthermore, suppose they wrote everything to stderr. How can I know whether the thing has worked as it should?
- How can I find the problem quickly if everything is written to "Error" log?
- Furthermore how can I revise the errors happened quickly? Info & Error on the same file. grep galore!
- How can I silence a tool (by redirecting to /dev/null) if both normal and error output are written via stderr?
These are the simple problems that I can come up with on a real, big production system in five minutes. I could find more problematic scenarios, which would happen on a daily basis, if I thought more.
All these conventions, folklore, recommended usages and facilities are in place because of the needs and the experience acquired over the history of these systems. Running amok around them, just because color, pretty spinners, or some narrow usage scenario seem to justify it, is not correct.
These conventions and this philosophy allowed *NIX systems to scale without needing excessive administrative elbow grease. Ignoring them, and developing tools which use facilities and conventions as they please, will degrade the ability to manage these systems with minimal work. I can always grep, but it will be inefficient, it's not guaranteed to get everything that I want, and it will cost me and the computer time.
Like algorithms, systems are easier to manage when "n" is small. When "n" gets big, these tasks get really hard, really fast.
So, develop tools to ease tasks. Not to show-off.
Edit: Tried to increase readability.
Also, similarly, you can have a -v/--verbose flag to make the application talkative. It's also an option.
> It’s important that each row of your output is a single ‘entry’ of data.
It felt weird to me that they use `ls` as an example, as it's not immediately obvious from the printed output that it adheres to the advice. I suppose they were also trying to highlight the earlier point of differing output format depending on whether output is a tty/pipe.
Unrelated, but I didn't know `ls` was that smart about isatty. Once upon a time I read the '-1' option to print one name per line in the man page and assumed it was necessary for that functionality. Thanks!
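A quick way to see that behaviour for yourself (GNU and BSD ls both do it):

ls          # stdout is a tty: multi-column, possibly colourized
ls | cat    # stdout is a pipe: one entry per line, same as ls -1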
I am going to add a note about `ls`'s behavior with isatty. It's sort of conflating a couple of things, but I think it's interesting enough to leave it in.
I really appreciate that you're taking the time to respond to all the feedback in this thread and wade through everyone's nitpicks. Looking forward to more articles.
I really think that we need a stdmeta file descriptor. See my comments at the link below, I would appreciate your feedback:
Unfortunately, if using CL to deploy a CLI app for modern systems, all of this has to be shoehorned into the usual stdin/stdout/stderr split.
Then realise my skills have atrophied to the point I can barely remember how to use edit.
Also unix commands tend to have illegible colors in Powershell on Windows. (Ripgrep for example). Powershell defaults to a blue background.
I would suggest these rules for using default coloured output:
1. Don't.
2. Really, don't. Bold is fine, though!
3. (experts only) Make sure the colour scheme works with white-on-black, black-on-white, white-on-navy (for powershell), and Monokai/Solarized/whatever the flavour of the month for insecure hipsters is.
If you use colours, by default or not, make it really easy to configure the colours, so people can make it work with their terminal's colour scheme.
Also for colors it's good to try to pick a decent palette of a couple-three colors and stick with it rather than try to categorize a bunch of different bits of output on a single command.
Still, colors are awesome and make your CLI look and feel 10x better than it is, so it's worth the extra effort.
 - https://github.com/BurntSushi/ripgrep/blob/acf226c39d7926425...
The issue I've found is that of the 16 built-in colors, cmd defaults to trivial ones (e.g. Blue is 000080 and Bright Blue is 0000FF), which give terrible contrast. MS seems to be working on improving things on their end; then I'll be able to make it readable. (E.g.: https://github.com/Microsoft/console/tree/master/tools/Color...)
Speed is the ultimate fancy enhancement ;)
That said, you need to be careful. Don't use a spinner if stdout is not a tty or if TERM=dumb. Do use it in CI environments that support it (Travis, CircleCI). That handles all the issues we've seen, and everyone seems to be happy.
Continuous Delivery / Change Management. I believe any policy without change management principles isn't really complete, especially when it's about 12-factor, which is considered a gold standard for production.
* CLIs are notoriously difficult to update because you have to convince every single consumer to update it manually, otherwise you just have scattered logic everywhere. Having an update workflow is essential before releasing the first version in production.
* Closely tied, a clear Backwards compatibility policy.
Apart from those two major items, I have also found one optionally nice pattern to reason about CLIs:
Design CLIs like APIs wherever possible. Treat subcommands as paths, arguments as identifiers, and flags as query/post parameters. It's not always applicable, but doing that for large internal tools helps against the "kitchen sink" syndrome.
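A sketch of that correspondence, with "svc" and its resources entirely hypothetical:

# GET    /users/42?verbose=true  ->  svc users get 42 --verbose
# POST   /users {"name": "ada"}  ->  svc users create --name ada
# DELETE /users/42               ->  svc users delete 42
svc users get 42 --verbose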
Instead of failing and then spitting out --help or manpage-style info, the program just asks the user to enter the needed argument or flag to continue. Having more ways to learn usage is always good IMO.
An oldie but goodie here: https://eng.localytics.com/exploring-cli-best-practices/
I've of course spent some time on automating help, flag parsing etc., so I essentially just say what data I need and then what to perform.
It was the best idea in a long time. I'm thinking that I'll publish the framework for this as open source (it's a C# project).
> By keeping each row to a single entry, you can do things like pipe to wc to get the count of lines, or grep to filter each line
> Allow output in csv or json.
Yes please. Default to readable-but-shellable tabular output, and support other formats.
libxo from the BSD world is a really smart idea - it provides an API that programs can use to emit data, with implementations for text, XML, JSON, and HTML:
I personally love CSV output. Something like libxo means that CSV output could be added to every program in the system in one fell swoop.
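For a taste of how that surfaces to users: on FreeBSD, libxo-aware base utilities take a --libxo option (if I'm remembering the convention right), so the same binary emits whichever format you ask for:

wc --libxo json /etc/motd
df --libxo xml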
GCC does this, leading to no colour output where it would be useful if you're building with Google's Ninja-build. Maybe there are some people who do pipe GCC output to a file - I've never had to. If you do this with your app, I'd appreciate being able to re-enable the colour.
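GCC does at least provide an override for when the auto-detection guesses wrong (GCC 4.9+; less -R makes the pager pass the colour codes through):

gcc -fdiagnostics-color=always -c foo.c 2>&1 | less -R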
I like how this is their #1. In my opinion the best way to do this is with tldr.
I'd highly recommend folks create a tldr page for their CLI app. Add 4-8 examples to cover 80%+ of the most common use cases. -h flags, readmes & man pages can cover the other 20%.
I hadn't considered this before you mentioned it, but oclif CLIs could integrate to tldr pretty well. It already supports arrays of strings for examples.
We're doing that for our internal CLI applications and it's great to be able to just copy-paste the common use case from the top of the documentation without searching much.
Using it less as a complete docopts kind of thing and more as a set of common usages (like `man tar` and what you linked) is far more useful.
I think there is something here I hadn't really considered before. It's not an example, but also not a useless dump of flags. Food for thought I suppose.
My hopes were crushed.
Isn't it some kind of disguised tracking? I know it doesn't give as much info as the user agent of a browser, but still, you could track the OS, even the linux distribution, and surely more, while still being a reproducible build.
Replace “pipe content” with “redirect content”
sure, web apps are themselves probably overcomplicated, but given that you're doing a web app, the recommendations aren't bad. compare to where things have gone since, with containerisation.