
12 Factor CLI Apps - dickeytk
https://medium.com/@jdxcode/12-factor-cli-apps-dd3c227a0e46
======
emmanueloga_
Don't get me wrong! I love command line apps. But I wonder if we all have a
bit of Stockholm syndrome... there are several things that suck about
them...

While writing this I'm thinking of my experience trying to do anything with
ffmpeg or imagemagick... or even find.

* For any sufficiently complicated cmd line app, the list of arguments can be huge and the --help so terse as to become useless. For man pages, the problem is the opposite... the forest hides the tree! I'm sure we all end up using google to look for example invocations.

* Very often completion doesn't work, since custom per-app machinery is needed. For instance: git-completion for use with bash-completion.

* Sometimes I end up passing the help output through grep, then copy-pasting the flags from the output, and then hoping I got the right flag.

* ...how about things like regular expression parameters... always so hard to remember the escaping rules! (and the regex flavor accepted by each different app).

* Not to mention more complicated setups involving -print0 parameters or anything involving xargs, tee, and formatting with sed and cut, etc.
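For concreteness, the -print0/xargs plumbing that last bullet alludes to looks something like this (the directory and pattern are made up for illustration):

```shell
# delete every '*.log' under ./logs; -print0/-0 pass NUL-delimited names,
# so file names containing spaces (or even newlines) survive the pipeline
find ./logs -name '*.log' -print0 | xargs -0 rm -f
```

Forgetting the -print0/-0 pair and feeding plain newline-separated names into xargs is exactly the kind of trap that makes these incantations hard to remember.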

Is there a better way? Not sure. I like powershell a bit but some of the
things I mention above still apply.

I think we may be able to get a workflow that is a bit closer to the tooling
we use for writing programs while not being perceived as verbose and heavy
(I'm thinking, the kind of workflow I get with a Clojure repl).

~~~
AnIdiotOnTheNet
At this point I'm not really sure what makes the command line so great. We
should have something like it in the GUI space that works much better but,
like you said, "Stockholm syndrome".

Why can't a pipeline be a more complicated multi-I/O workflow? In a 2D GUI
this would be trivial to construct and read, but in a 1D command line it would
get confusing in a hurry. And the concept works much better with AV: I can
easily construct and reason about complicated arrangements of audio and video
inputs and outputs, with mixers, compositors, filters, shaders, splitters,
etc. between them.

Instead we worship text. Is that because manipulating text is actually more
useful, or because our tools are only good for working with text?

~~~
dsr_
Text is exact, programmable, repeatable and transmissible.

exact: in many GUI tools, you can have non-default settings that you changed
via menus. Where are they stored? Which ones are currently active? Does it
matter that you selected four objects first, then a transform tool, then
another object?

programmable: > find /var/spool/program/data -name 'foop*' -mtime +3 -print

vs "open the file manager, go to /var/spool/program/data, sort by name,
secondary-sort by last modification time, find the ones that are more than 3
days old, make sure you don't slip"

repeatable: OK, do that again but in a different directory.

transmissible: here's the one-liner that does that.

Now, your specific requests are about audio and video toolchains, where I will
admit that reasoning about flows is easier with spatial cues -- but I'd really
like the output of that GUI to be an editable text file.

~~~
gambler
Command line is not the only HCI that can use text. Also, no one (except
immense mental inertia) stops developers from producing serializable graphical
interfaces.

~~~
Shorel
I think the macro language of WordPerfect 5.1 was one of the most awesome
serializable interfaces ever.

And they scrapped it in WP6 for some shitty object oriented version that
lacked all the "serializable workflow" of the previous one.

------
jstanley
> Error: EPERM - Invalid permissions on myfile.out

> Cannot write to myfile.out, file does not have write permissions

> Fix with: chmod +w myfile.out

I actually much prefer:

"can't write myfile.out: Permission denied"

This shows the same information as the first two lines of the example
combined, and the 3rd line is not necessarily the correct way to fix the
problem anyway (e.g. you might be running it as your user when it should be
root; chmod +w would not help).

If you are so convinced that chmod +w is the way to fix the problem, why not
just do that and carry on without bugging the user?

And having each error confined to one line also means it's much less likely
that some of the lines are missed, e.g. when grepping a log.

EDIT: And to add to this: it's sometimes useful to prefix the error messages
with the name of the program that generated them, so that when you're looking
at a combined log from different places, you know which of the programs
actually wrote the error, e.g. "mycli: can't write myfile.out: Permission
denied".
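A minimal sketch of that one-line, program-prefixed convention in shell (the program name and error text are just illustrative):

```shell
# print a one-line, program-prefixed error to stderr;
# "${0##*/}" is the script's own basename, e.g. "mycli"
err() {
    echo "${0##*/}: $*" >&2
}

err "can't write myfile.out: Permission denied"
```

Keeping the whole error on one line, prefixed and on stderr, is what makes it greppable in combined logs.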

~~~
Beowolve
So, I understand the basis of your comment. You have the knowledge to know
that there are other things that may be "the right way" given your situation.
I think what the author is getting at is that there are users who don't have
that knowledge. Giving them a hint that is verbose and non-arcane can make a
world of difference. Speaking from personal experience, there are many
developers that I have met who don't have basic *nix knowledge, much less
knowledge of a terminal. The reality of the situation is that a business is
going to hire people regardless of that ability. They want someone who can
move the features out the door. Whether this is good or bad is probably beyond
this conversation. I think, for those users, these sorts of helpful hints are
extremely important because it makes them feel like they aren't stuck and
helpless. I think that, to your point, it may be useful to have the CLI offer
a "pro" mode in which you could set a config to not give you as verbose error
messages. Annoying? Yes. However, it would strike a balance and serve both
needs.

~~~
IcePic
Then again, on a recent Linux system, the inability to write to the file
might be permissions. Or an immutable attr, or selinux, or apparmor, or
setfacl flags, or a RO mount where it lies. As soon as you decide to print out
the solution to "can't write to: X" you are in for a page full of advice on
what to look for. Perhaps the disk was full, perhaps uid was wrong, perhaps
the 5% reserved-for-root-only kicked in. You'd end up writing a unix sysadmin
guide, and then perhaps the parent dir had too strict perms to allow you to
write to a file in it...

~~~
smarinov
Also, on an unrelated note, I would never suggest that novice users blindly
`chmod +w` random locations. This is only marginally better than the
`chmod 777 <root-folder-name>` that used to be so widespread in many (e.g.
PHP-related) tutorials a decade or two ago.

~~~
mixmastamyk
Agreed, though perhaps it is just a bad example. I could imagine a situation
where a single line of advice could be useful outside the realm of filesystem
security or disk usage.

------
JepZ
> 7. Prompt if you can

Please don't. There is nothing wrong with interactive tools, but they should
not be interactive by default. Instead of making a non-interactive session
possible via flags, the default should be non-interactive. If there is an
option to start an interactive session, everything is fine.

Otherwise, you would never know when your script could run into some kind of
interactive session (and therefore break; possibly after an update).

~~~
jessaustin
My interpretation was that the prompt is for required information. In the
example graphic, "run demo" really does require that "stage" be specified.
This is considered more user-friendly than simply crashing. If you don't want
to see the prompt, provide that information as a flag or in a config file or
whatever.

~~~
boomlinde
User friendly until the user decides to invoke that command in a cron job and
ends up with a headless process waiting for additional input.

~~~
jessaustin
Anyone who doesn't test cron jobs before saving them deserves whatever she
gets. There are scores of ways for cronjobs to fail. b^)

~~~
boomlinde
Well, this is not a cronjob that failed, it's just waiting for input.

Let's say that I did test the cronjob but that it starts "failing" after an
update to the tool. My fault, I know, but at least I get mail when it fails
while I won't if it's just waiting for input.

~~~
dickeytk
This scenario would only happen if a flag became required. Prompting or not it
would still be an issue. (And it wouldn't prompt as this is a non-tty
environment)

~~~
boomlinde
_> Prompting or not it would still be an issue._

Yes, but in one case the issue would result in a mail because the cron job
failed, and in the other case the issue would just cause the process to
hang indefinitely without notice.

~~~
dickeytk
NO IT WON'T HANG. I give up. I don't know how else to try to explain this to
you.

------
Walkman

> Unless you already know your users will want man pages, I wouldn't bother
> also outputting them as they just aren't used often enough anymore.

I don't know where this is coming from; my colleagues and I read man pages
every day. I would be interested in how much others read them.

~~~
kolme
I strongly disagree on this one too.

That's the first place I look for help, and it annoys me to no end when a CLI
program doesn't come with one.

I stopped taking the article seriously at that point and quickly skimmed
through the rest of it.

Man pages are a great piece of unix cultural heritage; please, new developers,
don't give up on them!

~~~
davidhcefx
I myself also love reading man pages, but speaking of compatibility, I have to
say that “--help” is a more universal way of showing help pages. Of course
it’s better to have both of them though.

~~~
larkeith
--help is fine, but almost never a substitute for a full man page, except for
the most trivial of applications (unless your --help is as complete as a man
page, in which case... good on you for providing full documentation, but I'll
hate you a bit every time I unthinkingly drop two hundred lines of text in my
terminal.)

~~~
smarinov
Unless it just opens the man page if it is so long. Like git.

------
stepvhen
> I would skip man pages as they just aren't used that often anymore.

I understand that man-page users might be a minority, but I cannot express
enough how wonderful it is to get the full manual of a program without
interfacing with the web. Not to mention how powerful that is (most apps have
short names that are difficult to search for), and how accessible it makes
the application.

~~~
dickeytk
For people who like man pages (there appear to be lots of you): do you think
that man pages are _more_ important than web or in-cli docs? Or just that they
should be written in addition, not left out?

My (current) position is that they're useful, but not worth the extra effort
for most CLIs. It's a cost-benefit thing.

I'm genuinely curious as I've never had anyone request man pages in our CLI.

~~~
cyphar
> do you think that man pages are more important than web or in-cli docs?

Yes.

* Web docs are a problem because I don't always have access to the internet when trying to do something on my computer, and usually there are so many kinds of web doc generators that you have to figure out how the information you want is laid out. Web docs are useful as a quick-start guide or a very lengthy reference guide -- but not for the common use case of "is there a flag to do X?"

* In-CLI docs are a cheaper version of man pages. In most cases, the output is larger than the current terminal size so you end up piping to a pager (where you can search as well), and now you have a more terse version of a man page. Why not just have a man page?

Man pages are useful because they have a standard format and layout, provide
both short and long-form information, and are universally understood by almost
anyone who has used a Linux machine in the past. "foo --help" requires the
program to know what that means (I once managed to bootloop a router by doing
"some_mgmt_cmt --help" and it didn't support "--help" -- I always use man
pages now). One of the first things I teach students I tutor (when they're
learning how to use Linux) is how to read man pages. Because they are the most
useful form of information on Linux, and it's quite sad that so many new tools
decide that they aren't worth the effort -- because you're now causing a
previously unified source of information (man pages) to be fractured for no
obvious gain.

I still add support for "--help" in my projects (because it is handy, I will
admit), but I always include man pages for those projects as well so that
users can actually get proper explanations of what the program does.

> I'm genuinely curious as I've never had anyone request man pages in our CLI.

Honestly, I would consider not using a project if an alternative had man pages
and it didn't (though in this case it would be somewhat more out of principle
-- and I would submit a bug report to bring it to the maintainers' attention).

~~~
majewsky
> I still add support for "--help" for my projects (because it is handy, I
> will admit)

Some applications (e.g. Git) make "--help" redirect to man. What do you think
of that?

~~~
falcolas
Personally, I still pull up "man git-pull" or similar. I'm actively annoyed
that I have to remember that the AWS CLI is different in this regard.

Not to mention that using "--help" for man pages requires I open up a
separate window when I typically just want a quick reference to the most used
flags.

Moving man pages to a different command is like coming up with an alternative
icon to the hamburger menu for your regular UI. Sure, all the functionality is
still there, but it requires a full stop and search to remember where to find
it.

------
gnomewascool
I almost stopped reading at "I would skip man pages", but the rest of the
article was mostly great advice.

I disagree about 11 (using "main_command sub_command:sub-sub_command" rather
than "main sub sub-sub" syntax), but it's mostly a matter of taste.

Seriously, though, if you've already taken the time to write documentation,
then there's no reason not to also generate a manpage. Just using pandoc to
convert, say, your README.md gives good-enough results:

    pandoc -s -f markdown_github -t man -o your_cli.1 README.md

(There probably are other good conversion methods.)

Why I like man:

Advantage over online docs:

It's offline and available directly in the terminal, without having to open a
browser and it has a distraction-free, clean look. The only slight
disadvantage is the lack of support for images, which are occasionally
helpful, but in a pinch, for some use-cases, you can have ascii diagrams.

Advantages over "--help":

1. Conventionally, "--help" just provides a brief rundown/reminder of the
options, so having full documentation is valuable.

2. If "--help" provides the full docs then:

a) You lose the option of having the brief rundown, which is also very
valuable.

b) "man command" is slightly faster than "command --help" :p (yes, it is a
slight pity that accessing the full docs is faster than accessing the brief
version, if you use convention).

c) man deals with things like having nice output, with proper margins, at
different terminal widths.

d) man deals with the formatting for you, providing consistency with all other
applications.

FWIW I think that texinfo is (mostly) even better than man, as it considerably
improves on the navigation, but it's been crippled by the FSF-Debian GFDL
feud, which meant that the info pages weren't actually installed on many
systems, and it's mostly a lost cause now.

~~~
enriquto
> Advantages of man over "--help":

You can have the best of both worlds if the manpages are built automatically
from the "--help" output (e.g., using help2man). Then you can have "-h" give a
brief rundown and "--help" give the full docs.

> FWIW I think that texinfo is (mostly) even better than man, as it
> considerably improves on the navigation

I am curious about that. Do you really like texinfo navigation? I find it
completely unusable, to the point of preferring to download and print a pdf
from the web instead of opening (gasp!) the dreaded "info" program.

~~~
TeMPOraL
I use the info browser from Emacs and like it very much. The best benefit is
that you can stuff a whole book into info pages - and projects using info
usually drop their _full_ manual in there, to be perused offline and
distraction-free.

~~~
enriquto
How do you search for a word inside the whole info documentation of a program
(say, gcc), and cycle through all appearances of that word? I never managed to
do that (which is trivial for manpages).

~~~
TeMPOraL
Don't know how it works in regular info browser; in Emacs's info browser,
incremental search can cover the entire manual (or even all info pages) if it
fails to find a phrase on the page you're currently viewing.

~~~
defanor
It does, see `info '(info) Search Text'`.

------
OJFord
> 12. Follow XDG-spec

I'm so glad to see this included. I don't like $HOME being cluttered with
.<app> config directories, but worse than that, far too many apps, when
releasing on macOS, say "Oh, Library/Application\ Support/<app>/vom/something
is the standard config location on Mac, so I'll respect XDG on Linux, but on
Mac it should go there." No! Such an unfriendly location for editable config
files.

~~~
andreareina
I agree that seeing a bunch of ~/.<app> directories is annoying, but at the
same time I do think that it makes sense for each application to manage its
own hierarchy, rooted under e.g. ~/.apps/<app> instead of splitting it into
~/.config/<app>, ~/.local/share/<app>, etc.

Regardless, I think it probably makes sense to have a uniform interface for
getting said directories, so that however the OS decides things should be laid
out, the developer just needs to `local_config_dir(app_name)`. If the user (or
at least administrator) can decide between <app>/<function> and
<function>/<app>, all the better.
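In shell, a helper like the hypothetical `local_config_dir(app_name)` above is only a couple of lines under the XDG base-directory spec (the function name just mirrors the one in the comment):

```shell
# hypothetical helper: resolve an app's config dir per the XDG base-dir
# spec, i.e. $XDG_CONFIG_HOME if set, otherwise ~/.config
local_config_dir() {
    printf '%s/%s\n' "${XDG_CONFIG_HOME:-$HOME/.config}" "$1"
}

local_config_dir myapp    # e.g. /home/you/.config/myapp
```

Because the environment variable takes precedence, a user or administrator can relocate the whole hierarchy without the application changing.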

~~~
ptman
The reason ~/.config/app is superior is that you can then, e.g., back up all
your configs, or remove your cache, or store ~/.local and ~/.cache on a local
fs and the rest of ~ on NFS.

Do you have to use ~/.config/app/ or could you just use ~/.config/app.conf?

------
013a
This is all great advice.

The one thing this does miss is distribution, which is a HUGE part of offering
a great CLI app. Specifically, I'd say:

1. Make your OFFICIAL distribution channel the primary package manager on
each platform (ex: on Mac, homebrew. Ubuntu, apt/snap). Beyond that, support
as many as you have capacity to.

2. Also offer an official docker image which fully encapsulates the CLI tool
and all of its dependencies. This can be a great way to get a CLI tool loaded
into a bespoke CI environment.

~~~
curun1r
Homebrew is NOT the primary package manager on Mac and I wish people would
stop perpetuating that falsehood. Apple includes pkgutil/pkgbuild in the OS
and that official package management strategy plays much better with corporate
IT control of managed machines.

In my experience, Homebrew always eventually results in pain and complex
debugging and it's almost impossible to audit software it installs to prevent
the installation of prohibited or dangerous software.

It's really not that hard to build a .pkg file and developers that want to
properly support the Mac platform should go down that path before offering
Homebrew support.

~~~
colemickens
The best part is that some people think 'brew' is a solid package manager and
then show up in Linux and try to make 'linuxbrew' happen (no really, it is a
thing).

I just wish everyone would take a day and read an intro to nix/nixpkgs and the
world would really be a better place. There are so many "popular" hyped tools
these days that can barely do a fraction of what is going on in the Nix
ecosystem, but it doesn't seem to get the hype that brew, buildkit, linuxkit,
etc all seem to get.

~~~
3PS
Say what you will about Homebrew, but we need more package managers that can
easily install without root. Not everyone has the time and energy to compile
from source, and often it's a circular problem - I need to compile and install
Python 3.7, which needs openssl, which needs to be compiled from source, which
has even more dependencies... ad nauseam.

~~~
colemickens
Why do we need more of them when we already have a number that are capable of
deploying into a home directory? As someone who has worked on software that
has needed packaging, and someone who tries to help out with packaging for a
distribution, I can't imagine why we need more for the sake of having more.

Per my original comment, nix can do this, for example, and already has an
enormous number of packages packaged, pre-built/cached, ready to go.

------
kpcyrd
I disagree on the 2nd point. Flags prevent globbing by the shell and make the
help text less clear.

Consider a usage line like this:

    prog <user> [password]

This tells you which argument is mandatory and which argument is optional in a
second, without searching for the help text of --user and --password.

Also, an example like this:

    git add <pathspec>...

Tells you that git add accepts multiple paths and you can invoke it with:

    git add src/*

What's more important in my opinion is making sure your argument parser can
handle values that start with a dash and respects a double dash to stop the
option parser.

Consider an interface like this:

    prog [--rm] [--name <name>] [args...]

And you invoke it like this:

    prog --name --rm -- --name foo

This should result in:

    {
      "name": "--rm",
      "args": ["--name", "foo"]
    }

Getting things like this wrong can result in security issues.
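The double dash can be demonstrated with everyday tools; for example, a file whose name starts with a dash can only be removed once option parsing is stopped:

```shell
touch -- --rm    # create a file literally named "--rm"
rm --rm          # fails: rm tries to parse "--rm" as an option
rm -- --rm       # works: "--" stops option parsing
```

Argument parsers that don't honor `--` make values like these impossible to pass safely, which is where the security issues come from.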

~~~
dickeytk
I may try to expand on this in the article, but it's in there if you read
between the lines. There is a difference between something that takes in
multiple args of the same type and multiple TYPES of args. I'm arguing against
multiple args of different types, not the same.

By definition any CLI that accepts variable args is fine here as it's all the
same type.

The -- is a great point as well. It solves a lot of problems users have, but
a lot of the time people don't even know about it. It solves issues with
`heroku run`, for example.

EDIT: updated to clarify my point

------
woodruffw
> The user may have reasons for just not wanting this fancy output. Respect
> this if TERM=dumb or if they specify --no-color.

Or if they specify `NO_COLOR`[1]!

[1]: [https://no-color.org/](https://no-color.org/)
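A sketch of the combined checks, in shell; the exact precedence, and treating NO_COLOR as "set and non-empty", are design choices here rather than anything the article mandates:

```shell
# should this process emit ANSI colors on stdout?
want_color() {
    [ -z "${NO_COLOR:-}" ] &&     # NO_COLOR set disables color
    [ "${TERM:-}" != "dumb" ] &&  # so does TERM=dumb
    [ -t 1 ]                      # so does a non-tty stdout (e.g. a pipe)
}
```

A `--no-color` flag would simply force the same function to return false.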

~~~
dickeytk
yes, adding this

------
sudofail
I'm probably being picky, but I would also include that the CLI be a self-
contained binary. I'm tired of managing Python / Ruby / Node versions.

~~~
nhumrich
It's bad both ways. The advantage of python/ruby, for example, is that you
can simply pip/gem install or update. With a binary, you have to download,
move, and change permissions on every update. For experienced linux users,
the binary is fine, but for newer users, it's much more "friction".
~~~
Sean1708
IMO using a language's package manager to install applications is a massive
anti-pattern; that should be handled by your OS package manager.

~~~
sudofail
This is what I prefer as well. Let me use my OS's package manager for managing
my packages.

~~~
mixmastamyk
That would still be the case if the packages weren't 1-3 years out of date.

~~~
jetblackio
That is true. But there are ways around that. Including documentation on how
to build it locally is pretty standard. And hosting prebuilt binaries with
package installation for targeted platforms is also pretty common.

With interpreted languages with language-specific package managers, you have
to:

1) Install the language

1a) Possibly have to install a language version manager (rbenv, pyenv, etc)

2) Install the language's package manager

3) Install the CLI utility via the language's package manager

Here's the order I think CLI maintainers should strive for in making their
utilities available:

1) Install via OS package manager

2) Install via prebuilt release with OS-specific package, from hosting site
(GitHub, etc).

3) Install from source

4) Install via language-specific package manager

5) Install via curl | sh :)

------
lousyd
Only a web programmer could believe that man pages "just aren't used that
often". It drives me nuts when compiled cli programs don't have a man page. It
says to me that the author of the program doesn't know Unix conventions or
doesn't care enough to put the effort into meeting his or her users where they
are, and so I'm gonna have to be careful about how I use the program lest it
do something unexpected. Use man pages.

The awscli is just terrible in this respect. There's no man page for 'aws' so
I say "aws --help". It then literally tells me "To see help text, you can run:
aws help". OpenShift's 'oc' sucks at this only a little less, with no man
pages and for some inexplicable reason you can only get a list of global
options in a dedicated global options help subcommand instead of at the bottom
of every help page. The documentation system for 'git' on the other hand is a
work of art. Pure beauty.

------
bayindirh
I don't agree with 7 and 8. I like silent apps while working, and actually I'm
used to applications saying nothing if everything is correct. Also,
"outputting something to stdout just because I can" kills scriptability a lot.

Using tables, colors and other stuff requires a lot of terminal support. The
macOS terminal, iTerm, and Linux terminals support a lot of stuff, but not
always (our team generally uses XTerm, for example). Implementing these is
acceptable if there's robust code detecting terminal capabilities and falling
back gracefully, without treating these more streamlined terminals as lesser
citizens, and that requires a lot of development, head banging and
maintenance. If you're accepting the challenge, then go on.

BTW, that unicode spinner is nice. Very nice.

~~~
oblio
You can just have a "quiet" mode for scripting. Or even better, detect if
you're connected to a TTY.

~~~
scbrg
I run scripts from a tty all the time...

------
ISO-morphism
These are all good points, and I wish more CLIs were like this. My own pet
peeve is un-disablable stdout logging.

> It’s important that each row of your output is a single ‘entry’ of data.

It felt weird to me to use `ls` as an example, as it's not immediately
obvious from the printed output that it adheres to the advice. I suppose they
were also trying to highlight the earlier point of differing output format
depending on whether output is a tty/pipe.

Unrelated, but I didn't know `ls` was that smart about isatty. Once upon a
time I read the '-1' option to print one name per line in the man page and
assumed it was necessary for that functionality. Thanks!
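The behavior is easy to observe: `ls` columnates for a terminal but emits one name per line into a pipe, so `-1` is only needed to force that format on a terminal:

```shell
ls         # columnated when stdout is a terminal
ls | cat   # one name per line: ls checks isatty(1) and switches format
ls -1      # forces one-per-line even on a terminal
```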

~~~
dickeytk
I just picked `ls` as it's a common utility everyone understands and isn't
some contrived example using `cat`.

I am going to add a note about `ls`'s behavior with isatty. It's sort of
conflating a couple of things, but I think it's interesting enough to leave it
in.

~~~
ISO-morphism
I agree, it is a good example, and hey I learned something.

I really appreciate that you're taking the time to respond to all the feedback
in this thread and wade through everyone's nitpicks. Looking forward to more
articles.

~~~
dickeytk
Thank _you_ for the great feedback!

------
henkdevries
Man I wish OpenVMS was still a thing. All the commands worked the same due to
the DCL enforcing it.
[https://en.m.wikipedia.org/wiki/DIGITAL_Command_Language](https://en.m.wikipedia.org/wiki/DIGITAL_Command_Language)

~~~
bigpicture
A) PowerShell was inspired by OpenVMS DCL. B) OpenVMS on x86 is due to be
released in 2019 (it's in private beta right now).

------
hornetblack
Also on color: don't go all psychedelic. I've found that some programs using
256 colors are unreadable with my terminal colors.

Also, unix commands tend to have illegible colors in PowerShell on Windows
(ripgrep, for example). PowerShell defaults to a blue background.

~~~
burntsushi
Can you suggest a better default color configuration for ripgrep? We actually
already have different default colors for Windows as opposed to unix.[1]

[1] -
[https://github.com/BurntSushi/ripgrep/blob/acf226c39d7926425...](https://github.com/BurntSushi/ripgrep/blob/acf226c39d79264256c0295b8381f8c7f0d74d59/grep-
printer/src/color.rs#L7-L24)

~~~
hornetblack
After some time experimenting with this, I probably can't recommend any
colors.

The issue I've found is that, of the 16 built-in colors, cmd defaults to
trivial values (e.g. blue is 000080 and bright blue is 0000FF), which gives
terrible contrast. MS seems to be working on improving things on their end;
then I'll be able to make it readable. (E.g.:
[https://github.com/Microsoft/console/tree/master/tools/Color...](https://github.com/Microsoft/console/tree/master/tools/ColorTool))

------
nimish
#6 is great, except if it causes performance issues:
[https://github.com/npm/npm/issues/11283](https://github.com/npm/npm/issues/11283)

Speed is the ultimate fancy enhancement ;)

~~~
roylez
Even if it does not cause any performance issues, I dislike it. If the thing
runs in a terminal, it should be expected to be used in a script, so it would
be better to make no assumptions about terminal capabilities and leave the
fancy part to external tools, if one is interested. I always hate systemctl's
piping to a pager by default. "Do one thing, do it well"; don't try to
surprise users with fanciness, because at work we don't like surprises.

~~~
dickeytk
In practice I've had overwhelming feedback praising our use of spinners in the
Heroku CLI and not a single complaint I can think of. In fact, I've had more
praise for adding spinners and color than any other change we've put in over
4+ years of development.

That said, you need to be careful. Don't use a spinner if it's not a tty or
TERM=dumb. Do use it in some CI environments that support it (Travis,
CircleCI). That handles all the issues we've seen and everyone seems to be
happy.

------
rdsubhas
Really great advice here. But it's missing one key area:

**Continuous Delivery / Change Management**. I believe any policy without
change management principles isn't really complete, especially when it's
about 12-factor, which is considered a gold standard for production.

* CLIs are notoriously difficult to update because you have to convince every single consumer to update it manually, otherwise you just have scattered logic everywhere. Having an update workflow is essential before releasing the first version in production.

* Closely tied, a clear Backwards compatibility policy.

Apart from those two major items, I have also found one optionally nice
pattern to reason about CLIs:

Design CLIs like APIs wherever possible. Treat subcommands as paths, arguments
as identifiers, and flags as query/post parameters. It's not always
applicable, but doing that for large internal tools helps against the "kitchen
sink" syndrome.

------
jessaustin
_...all of these must show help._

    $ mycli

Some commands have an unambiguous meaning and don't need arguments. For
example, it would be weird if a bare "make" command returned help information.
Great post, though. I'm looking forward to digging into oclif.

------
davemp
In regards to 7, prompts are great for teaching new users how the program
should be used.

Instead of failing and then spitting out --help or manpage-style info, the
program just asks the user to enter the needed argument or flag to continue.
Having more ways to learn usage is always good IMO.

~~~
dickeytk
yep +1. A lot of times users are only ever going to run a command once.
Better to ask for the right information than bail out because the syntax
isn't perfect.

------
keithnz
While a bit quirky, I really like PowerShell's approach, where you aren't
limited to text streams. All the same advice applies.

------
willio58
Just the help factor alone is a big one.

------
drewmassey
I’m a huge admirer of well-crafted CLI apps; they can massively boost the
effectiveness of a team.

An oldie but goodie here: [https://eng.localytics.com/exploring-cli-best-
practices/](https://eng.localytics.com/exploring-cli-best-practices/)

------
O_H_E
> Follow XDG-spec

Oh, please, guys. Some apps even put visible (non-dot) folders in my home.

------
czechdeveloper
I recently created a single CLI program to hold all the actions I need to
automate. It's so fast to add a new action that I automate anything that saves
me even a few seconds a day. Even stuff like "invoice" will open my
time-scheduling app and invoicing app and put a canned email on the clipboard.
"Clockout" will open the time-scheduling app and copy the expected date and
times to the clipboard, ready to paste into the app.

I've of course spent some time automating help, flag parsing, etc., so I
essentially just say what data I need and then what to perform.

It was the best idea I've had in a long time. I'm thinking I'll publish the
framework for this as open source (it's a C# project).
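The commenter's framework is C#, but the shape of the idea can be sketched in a few lines of Python (action names and help strings are illustrative, and the actions here just return strings rather than opening real apps): register each action once and get dispatch plus a help listing for free:

```python
import sys

ACTIONS = {}

def action(name, help_text):
    """Decorator that registers a function as a named CLI action."""
    def register(fn):
        ACTIONS[name] = (fn, help_text)
        return fn
    return register

@action("invoice", "open time-tracking and invoicing apps, copy canned email")
def invoice():
    return "invoicing..."

@action("clockout", "copy expected date/times to the clipboard")
def clockout():
    return "clocking out..."

def run(argv):
    # Unknown or missing action: print the auto-generated help listing.
    if len(argv) < 2 or argv[1] not in ACTIONS:
        return "\n".join(f"{n:10} {h}" for n, (_, h) in ACTIONS.items())
    return ACTIONS[argv[1]][0]()
```

Because every action is declared through the same decorator, new automations cost one function each, which is what makes the "anything that saves a few seconds" threshold viable.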

------
twic
> 8\. Use tables

> By keeping each row to a single entry, you can do things like pipe to wc to
> get the count of lines, or grep to filter each line

> Allow output in csv or json.

Yes please. Default to readable-but-shellable tabular output, and support
other formats.

libxo from the BSD world is a really smart idea - it provides an API that
programs can use to emit data, with implementations for text, XML, JSON, and
HTML:

[http://juniper.github.io/libxo/libxo-manual.html](http://juniper.github.io/libxo/libxo-manual.html)

I personally love CSV output. Something like libxo means that CSV output could
be added to every program in the system in one fell swoop.
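A minimal Python sketch of that pattern (stdlib only; the function name and row fields are mine): one record per line by default, with csv and json behind a format flag:

```python
import csv
import io
import json

def render(rows, fmt="table"):
    """Render a list of dicts. The default keeps one record per line,
    so output pipes cleanly to wc, grep, and friends."""
    if fmt == "json":
        return json.dumps(rows)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    # plain table: tab-separated columns, one row per entry
    return "\n".join("\t".join(str(v) for v in row.values()) for row in rows)
```

A libxo-style approach moves this switch out of each program and into a shared library, which is what makes the "one fell swoop" scenario possible.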

------
kurtisc
>Still, you need to be able to fall back and know when to fall back to more
basic behavior. If the user’s stdout isn’t connected to a tty (usually this
means they’re piping to a file), then don’t display colors on stdout. (likewise
with stderr)

GCC does this, leading to no colour output where it would be useful if you're
building with Google's Ninja build system. Maybe there are some people who do
pipe GCC output to a file - I've never had to. If you do this with your app,
I'd appreciate being able to re-enable the colour.

------
sytelus
Do these CLI features (tables, OS notifications, etc.) work cross-platform
(Linux, Windows, OSX)? Is there any good library for developing such CLI apps?

~~~
dickeytk
we use [https://github.com/mikaelbr/node-notifier](https://github.com/mikaelbr/node-notifier)

------
breckuh
> 1\. Great help is essential

I like how this is their #1. In my opinion the best way to do this is with
tldr.

[https://github.com/tldr-pages/tldr](https://github.com/tldr-pages/tldr)

I'd highly recommend folks create a tldr page for their CLI app. Add 4-8
examples to cover 80%+ of the most common use cases. -h flags, readmes & man
pages can cover the other 20%.
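For reference, a tldr page is just a short markdown file of examples; a sketch for a hypothetical `mycli` tool, shown only to illustrate the page format:

```md
# mycli

> Hypothetical CLI, used here only to illustrate the tldr page format.

- Deploy the current directory to an environment:

`mycli deploy --env {{production}}`

- Tail the logs of an app:

`mycli logs --tail {{app_name}}`
```

The `{{placeholder}}` tokens mark the parts the user is expected to replace, which keeps each example copy-pasteable.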

~~~
dickeytk
I almost want to rewrite the help section to encourage examples even more.
They're incredibly valuable.

I hadn't considered this before you mentioned it, but oclif CLIs could
integrate with tldr pretty well. It already supports arrays of strings for
examples.

~~~
tetha
Yup. On CPAN, it is encouraged that the first part of your documentation after
the table of contents is the synopsis[1]. The synopsis should clearly show how
to do the common tricks with the library. From there you can link and refer to
the more detailed documentation.

We're doing that for our internal CLI applications and it's great to be able
to just copy-paste the common use case from the top of the documentation
without searching much.

1: [https://metacpan.org/pod/Carp](https://metacpan.org/pod/Carp)

~~~
dickeytk
I feel the synopsis section of man pages often just becomes a bunch of useless
garbage above the fold (for instance, look at `man git`).

Using it less as a complete docopt kind of thing and more as a set of common
usages (like `man tar` and what you linked) is far more useful.

I think there is something here I hadn't really considered before. It's not an
example, but also not a useless dump of flags. Food for thought I suppose.

------
gtramont
Related: [http://docopt.org/](http://docopt.org/) – There are implementations
for various languages. Whenever I need to write something that has a CLI, this
is my default option…

------
fiatjaf
Is it just me, or does Medium every now and then open with what looks to be a
snapshot of the article instead of the HTML? I can't select text or scroll the
page. If I refresh the page everything is normal, though.

------
phoe-krk
I clicked this hoping it was an article describing 12 CLI apps
written in the Factor language.
[http://factorcode.org/](http://factorcode.org/)

My hopes were crushed.

------
justinrlle
> I also suggest sending the version string as the User-Agent so you can debug
> server-side issues. (Assuming your CLI uses an API of some sort)

Isn't that some kind of disguised tracking? I know it doesn't give as much
info as a browser's user agent, but still, you could track the OS, even the
Linux distribution, and surely more, all while the build remains reproducible.
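For context, the quoted suggestion is a one-liner with Python's stdlib (the CLI name and version here are hypothetical):

```python
import platform
import urllib.request

CLI_VERSION = "1.4.2"  # hypothetical release of our CLI

def build_opener():
    """Identify the CLI release (and platform) to the API via User-Agent,
    so server-side errors can be correlated with specific versions."""
    ua = f"mycli/{CLI_VERSION} ({platform.system()})"
    opener = urllib.request.build_opener()
    opener.addheaders = [("User-Agent", ua)]
    return opener
```

Note that even this small header carries the OS and version information the commenter is concerned about.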

------
superlevure
Does someone have the name of the terminal app used in the article? (the one
with the nice colored path)

~~~
kjaer
It looks like zsh with Oh My Zsh [1] and the Agnoster theme [2].

1\. [https://ohmyz.sh/](https://ohmyz.sh/)

2\. [https://github.com/robbyrussell/oh-my-zsh/wiki/Themes#agnoster](https://github.com/robbyrussell/oh-my-zsh/wiki/Themes#agnoster)

~~~
superlevure
Thank you !

------
sigjuice
Nitpicks:

Replace “pipe content” with “redirect content”

~~~
dickeytk
nice catch, ty

------
stuaxo
"12 factor" anything seems to be a symptom of the over-complexity of modern
apps, go back and rethink.

~~~
shoo
I found the original 12-factor website had a number of pretty reasonable
suggestions, based on the experience of people who ran a business doing
operations for other people's web apps.

Sure, web apps are themselves probably overcomplicated, but given that you're
doing a web app, the recommendations aren't bad. Compare to where things have
gone since, with containerisation.

~~~
subway
In a lot of ways the "12 factors" have overly simplistic views of the world.
For instance, storing your config in the environment is fantastic -- until you
remember a great many frameworks will dump their environment to the browser in
a number of failure scenarios. There go all your secrets.

