The -a to -z of Command-Line Options (2003) (catb.org)
132 points by polm23 on Dec 10, 2018 | 64 comments



Wow, the fact that Mac OS Classic did not even have a command line interface really baffles me. I looked it up (https://news.ycombinator.com/item?id=18646286) and it seems to be true.

It's ironic that in 2018 the terminal is more prominent than ever in developer circles (think of the package managers for all the different programming languages), while in the 90s people somehow thought the GUI could solve all problems, not only for novices but also for computer experts.


Yes, classic Mac OS truly did not have a command line interface of any kind. As a consequence, everything your application did had to have a corresponding GUI command or preference.

Command line interfaces are powerful, yes, but they're also a crutch. It's very easy to hide certain features by exposing them only through command line flags or what have you. On classic Mac OS you could not do this; you needed to attach every feature to a GUI control of some kind.

As a consequence, people came up with all kinds of GUI tools for pro/expert users. Linux, by comparison, treats GUI users as second-class citizens and tends to relegate them to only the most basic functionality.


Eh, lots of stuff would just be stuffed into an appropriate resource, and if you wanted to hack it you would use ResEdit.

I miss ResEdit. Pretty much every Mac app back in the day used the standard system for managing its resources, so you could pick apart anything with ease using ResEdit. I don't know of any modern equivalent for Unix/Windows/Mac programs.


I would think that the modern macOS equivalent to the "resource fork" is simply an app bundle's Resources folder.

Also, regarding "burned-in configuration" resources—those are now found in the app bundle's Info.plist :)


Well, I was too young to really explore classic Mac OS at the time. But my feeling is that the unixoidity of Mac OS X was a real blessing for the hacking community. So there definitely was a need.


Oh don't get me wrong, I live in the terminal these days. It's immensely satisfying to whip up a quick bash or Perl script to solve a problem. That being said, I believe the reliance on Unix command-line tools has made this computing power inaccessible to ordinary users.

In some sense, I think it has deepened the digital divide.


I like your viewpoint, but my experience with GUIs was largely that THEY hid power...from me. Any setting that could only be managed from within a GUI meant automation was pretty much off the table (and by "automation" I mean any basic connection of two tasks, things joe average user would want, not necessarily heavy industrial automation). Remember when Windows NT had only a binary event log, and their graphical client was the only way to view it?

I think the issue is not so much "command line hides power and treats GUI like a second-class citizen", though that is true, but rather that preferring one over the other creates problems.

This issue is again rearing its head in website design - many "web applications" have states that are not reachable via linking.


Back before the Unix-ization of macOS, there was a large focus on programs providing accessibility/automation APIs. The GUI parts of macOS were actually far more scriptable back in the OS7-9 days (and early OSX days) than they are today. It used to be that any effect you wanted to trigger in the OS, you could do by sending an https://en.wikipedia.org/wiki/Apple_event through the OSA scripting component to the relevant system service.

Nowadays on macOS, with the availability of the CLI, Apple engineers have decided to expose those same effects through CLI utilities that call directly into private system APIs (which are themselves usually client libraries for private Mach client-server ABIs.) If there's a switch exposed in the GUI but not the CLI, it's unlikely there's an event to reach it any more.

(Though, in macOS or Windows or Linux, it's always a possibility to write a script that has GUI effects directly in terms of "click the first button on the right"-style accessibility directions. But that's extremely brittle to system version updates, and precisely the thing that exposing an automation API is meant to avoid.)


Most applications focus on either the GUI or the CLI because building two complete interfaces takes a lot of work.

You're right that a lot of GUIs aren't designed for automation, but CLIs aren't the only answer. Classic MacOS had a powerful scripting architecture. (It still exists, but it's been neglected.)


You could still do scripting akin to Bash scripts on OS 7 and above via AppleScript [1], which worked a little bit like WSH did on Windows.

[1] https://en.wikipedia.org/wiki/AppleScript
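
On modern macOS you can still run it from a shell via osascript(1), e.g.:

  # count the Finder's open windows
  osascript -e 'tell application "Finder" to count windows'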


Yes, and I'm an old fan of AppleScript as well. The difference, as far as I know, is that you couldn't hide core application functionality behind that API, could you?

If you could, I think most developers didn't, since AppleScript isn't a default means of using the computer the way Unix shells are.


IIRC, Win3.1 didn't have a command line either; you had to exit Windows completely, back to DOS.

IMHO the Amiga offered the best of both worlds at that time (except for expensive UNIX workstations). On one hand you had a modern windowing UI, and a real multitasking OS. On the other hand you had terminal windows with a command line interface that was slightly different to UNIX, but close enough to be enjoyable.

Later in the life of the Amiga it became standard that UI applications provided standardized scripting interfaces (via AREXX), so you could write complex automation tasks that glued together all sorts of UI- and command-line-programs. In a way this was a lot more advanced than anything that's available on Windows or Linux today, since it wasn't limited to a few applications that all live in their own isolated world.


Oh, AREXX, the one thing the Amiga had which still has no counterpart. I mean, the technology is of course there, but the ecosystem and culture are not. AREXX integration is so alien it's hard to even explain it to someone who did not see it in action back in the day.

The closest I can think of is Siri and similar automation, except AREXX was a full programming language in its own right. (The language came from IBM's REXX.)


I think Windows 3.1 did have one, accessible via command.com.

Here's the best source I could dig up https://www.bleepingcomputer.com/tutorials/windows-command-p...


Windows 1.x-3.x didn't really have a command line; the window you use to run command.com was for DOS programs only. Windows programs didn't have a text mode to utilize it.

Windows NT's console window was properly accessible to Windows programs, as well as providing DOS compatibility. Windows 95/98/Me inherited limited forms of text-mode compatibility from NT.


> the window you use to run command.com was for DOS programs only.

This is not strictly true. You could launch Windows programs from the DOS box. If you tried to run something like PBRUSH.EXE from true DOS you would get "This program requires Windows" but in a DOS box it would detect that Windows was running and launch the program.

I think it is true that there were no Windows-only command line apps though.


There was no native support for command line apps (in part because the cooperative multitasking model is not especially conducive to such things), but at least some Win16 development environments came with libraries for emulating console applications (e.g. BP7 and Delphi 1 had the unit WinCRT, which did exactly that).


By the Win95 era there was. In the 3.x and below era people tended to either be in DOS or be in Windows rather than running DOS in Windows. Or at least that was my experience, YMMV


Even in the Win95 era, I remember having to boot into DOS to run some games (Redneck Rampage comes to mind).


I think you could largely get around having to boot into DOS mode by enabling a specific setting on the MS-DOS shortcut for those games. I think it unloaded win.com (so you're essentially shutting down Windows rather than rebooting into DOS), but we're talking > 20 years since I last used that trick, so my memory is a bit fuzzy.


In practice, people usually rebooted into DOS mode to play so as to do memory optimization in config.sys & autoexec.bat (HIMEM, EMM386 etc). Although that was for the benefit of older 16-bit games - 32-bit DOS extender stuff like Redneck Rampage wouldn't care.


I don't think you'd need to reboot to take advantage of that, because they're loaded as part of Windows booting, and then a lot of the DOS kernel was patched into the Windows kernel for 16-bit driver support.

I certainly don't recall ever having any issues with 16-bit games (aside from my weird soundcard not working with half the games I tried, but I can't recall why that was offhand).


It's not a 16-bit driver issue; it's a conventional memory size issue. Remember the old 640 KB limit? When you had too many DOS drivers loaded, you often ended up with less than 600 KB free, and some 16-bit games didn't like running that low, hence the need for HIMEM etc. And running DOS apps from within Windows reduced the amount of conventional memory even further.


Yes, I understand the problem, but what I'm saying is that when you shut down to DOS (even without rebooting), win.com was shut off. Since it was win.com that disabled EMM386, you were free to run your HIMEM and whatnot again. Thus you had a full DOS environment without a full reboot (which used to be slow in those days).

Thinking about it some more, I think I might have had a subset autoexec.bat I used to run for those occasions.

That technique seemed to work for me. Maybe I'm forgetting a tonne of 16-bit games that didn't play, but nothing immediately springs to mind.


EMM386 was usually run from autoexec.bat. But HIMEM was a driver installed via config.sys. I think there were some complications with the latter with "reboot to DOS".


It's indeed interesting how important the command line is now. You might like this mini-book / essay by Neal Stephenson:

https://en.wikipedia.org/wiki/In_the_Beginning..._Was_the_Co...

It examines the issue of GUIs vs. command line interfaces, among other things.

I'm pretty sure there is an anecdote there that says that Mac OS did have a hidden command line? It was only for Apple engineers, not for end users, but it was there. I'd appreciate a pointer if anyone remembers.


The Macintosh Programmer's Workshop (MPW), the first IDE for the Mac, had a command line with a simple shell.

When I used it in 1986 it really needed a big machine, i.e. 512K RAM :-)


MPW had an interesting way to ease GUI-centric users into its CLI-centric world. There was a mode you could trigger for commands that brought up a help-oriented GUI for the command to build the command argument list, which would then echo onto the command line. Depending upon who built the GUI, it was usually less terse and friendlier for new users than AIX's SMIT, for those familiar with that.

I sometimes wish there were an open-source equivalent like that for Linux, but with live, interactive building of the command as the GUI options are manipulated, so new users could get a feel for commands more readily. This would act as an intermediate help system for casual users, friendlier than manpages and more immediate than searching around the Net.

[1] https://www.macintoshrepository.org/1360-macintosh-programme...
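
You can approximate that today with zenity(1); a toy sketch of the idea (the "grep builder" is made up, not an existing tool):

  # build a grep invocation in a GUI, echo the resulting
  # command so the user can learn it, then run it
  pattern=$(zenity --entry --title="grep builder" --text="Search pattern:") || exit
  file=$(zenity --file-selection --title="File to search") || exit
  echo "grep -n -- '$pattern' '$file'"
  grep -n -- "$pattern" "$file"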


> There was a mode you could trigger for commands that brought up a help-oriented GUI for the command to build the command argument list, which would then echo onto the command line.

This still lives on in a sense. For example, in VSCode, many UI commands translate to executing shell code in its internal terminal (and you can see both the commands and the output).


Thanks, I'm pretty sure that's what Stephenson mentioned in his essay / book.


Wow, the fact that this is baffling to someone is, in turn, baffling to me! I don't mean this to be insulting, it's just one of those generational things. It hadn't occurred to me that someone (someone who cares about CLIs, at least) wouldn't know this.

It might be informative to step back and point out that this is why so many of "us" originally disliked Macs; before the "walled garden" of iOS, before even the walled garden of needing iTunes to load music onto your iPod, there was the GUI-only MacOS. Now, those in the know could dramatically alter Mac OS with ResEdit, but I wasn't a "Mac guy", so I never really knew anything about it; I hated a locked-down OS that only let you do things you could click on.

It was such a step forward to many of us hobbyists that OS X was based on Unix, and suddenly we had the best of both worlds -- a good GUI and a good CLI. Suddenly Windows was no longer the commercial OS that let you "do whatever you wanted" with it (relatively speaking).

In fact, (to personalize this for a paragraph), as a Linux desktop person buying his first laptop in 2002, I decided "well, I guess I might as well buy a Windows laptop instead of trying to hassle with running Linux on it." It felt like "giving in" to buy the commercial OS just to have easy access to Office and DVD players and the like. It slowly dawned on me "wait, if I'm going to buy a commercial OS, maybe I could buy a Mac?" I had never considered a Mac in my life, but I realized, wait, I can have Office on it (which not everyone realized, even then), AND easily play DVDs, AND this new OS X thing (which had come out a year before). It was a huge leap of faith, though.

The reputation of Apple building things that are just "pretty and shiny", I think, still stems from this sense that it's a consumer device that only does what Apple lets you do with it. And yes, some of these criticisms are still valid, but for those who weren't aware of the no-CLI Mac OS, I think it's very useful context that many of us take for granted.


It's worth considering what makes command line interfaces useful, who they are useful for, and whether Mac OS offered an analogous interface that fit the bill for its users. For me, the main gain of using a shell is that you can easily automate and document workflows. Apple added this to System 7 with AppleScript, to some extent. I haven't used AppleScript, so I have no idea how successful it was to this end, but it does seem to cover some surface that would traditionally be in the domain of a command line shell.

Apple released their GUI operating system to consumers before most competitors had anything like it. Its rough contemporaries in the consumer GUI OS space, from the mid-80s through the 90s, almost all had some kind of command line interface. The addition of AppleScript was probably a response to this, while trying to stay true to Jobs' and Raskin's idea of the Macintosh as a "computer appliance".


I vaguely remember having to create a bootable floppy for an old OS 9 installation. It had to be done by dragging a system file from the OS onto the floppy in a special way, which is termed "blessing".


You can now do this from the command line using bless(8).
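
For example (modern macOS, needs root; the volume path here is illustrative):

  sudo bless --folder "/Volumes/Macintosh HD/System/Library/CoreServices" --setBoot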


AppleScript existed on System 7 - I used to use it like a very limited command line (scripting common activities and complex workflows.)

The fun thing about it was that you were essentially interfacing with the GUI. Everything was incredibly discoverable - if you knew something was possible then it was almost always obvious how to do it in AppleScript.


Well, to be fair, GUIs can be as powerful as command lines, to the point that they even satisfy most experts. But that takes more work, and you either end up using text input anyway or accept a slightly worse-performing interface. So in the end, most invest in the simple solution.

With the rise of HTML5 & Electron and the ease of creating powerful GUIs and AI/speech-driven interfaces, we might in the next decades again reach the point where keyboard interfaces are in decline. And if the AI overlords take over, we won't have any need for complicated interfaces anyway.


It didn't have a Unix shell, but you likely had some way to type commands and get the computer to run them. Depending on context, you might have had HyperTalk, or AppleScript, or the MPW Shell.

In some ways these could be even more powerful than a Unix shell. What's the Unix way to say "Select a menu item", if the developer of your favorite app didn't think to also include a command-line interface to it?

(Yeah, I know: accessibility APIs. Do all your apps include a complete set of those, too?)
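
For the record, the closest modern macOS answer is UI scripting via System Events, which needs the Accessibility permission and is exactly the brittle thing mentioned upthread; a sketch:

  osascript -e 'tell application "Finder" to activate' \
            -e 'tell application "System Events" to click menu item "New Finder Window" of menu "File" of menu bar 1 of process "Finder"'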


I use my package managers from within Eclipse and Visual Studio.

The built-in IDE REPLs are pretty good and more powerful than a plain old-style UNIX CLI.

Usually I use the CLI when I have to, not by choice.


That whole book (not just the linked chapter about command-line options) - i.e. TAOUP - The Art Of Unix Programming - by ESR (Eric Steven Raymond, of Cathedral and Bazaar fame) - is good (although verbose and using many heavy words and verbal flourishes in his trademark style). I've read most of it. The Rules section stands out.

Having been a Unix user and dev (above the kernel level, that is) for a long time before reading it, I found that it crystallized and verbalized many concepts and unspoken techniques that are hazily known and used but taken for granted (though applied instinctively) - although I had also figured out and verbalized some of those on my own before reading the book.

Also, related, anyone interested in learning to write (not just use) their own command-line utilities in C on Linux, may like to check out this tutorial I wrote for IBM developerWorks:

https://jugad2.blogspot.com/2014/09/my-ibm-developerworks-ar...

It was on the site for a long time, got 30-40K views, and good ratings (4/5). Clients and others I know have used the tutorial as a guide to creating production command-line utilities. Follow the selpg links recursively to get to the PDF of the article and the source code.

I first wrote the utility that is used as a case study in the article, for production use at one of the largest motorcycle manufacturers in the world.


See also the canonical table of long GNU-style options:

http://www.gnu.org/prep/standards/html_node/Option-Table.htm...


Fun fact: -v typically stands for verbose, except in "grep-like" utilities such as pgrep, pkill, etc.

[edit] Forgot to mention: for grep-like utilities, -v negates the match.

This led to an unfortunate incident in which I attempted to get verbose output from pkill and instead killed every single process on the machine except for my target. To make it worse I had no console access so I had to email another team to reboot and explain what I did.
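
For anyone who hasn't been bitten yet, the grep-family meaning:

  ps aux | grep ssh      # only lines matching "ssh"
  ps aux | grep -v ssh   # every line EXCEPT those matching "ssh"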


I'll get crucified for saying this but this is one of the differences between a CLI and a GUI. A button/menu item/etc. always tells you what it does (even if it's just a brief description) before you click it. A command line parameter, well, reduces the information redundancy to near zero... you have to go out of your way to figure out its meaning.


Hasn’t anyone tried to add some kind of interactivity to command line typing to eliminate this advantage?

Not something simplistic like keyword completion; more like the best state-of-the-art code editors or IDEs.

In those environments it's still just text-based, but it can be incredibly rich and efficient. You get seamless access to info on any language command or param, predictive code-completion suggestions based on machine learning, easy refactoring/correction of parts of a line you've already started typing, etc.

The thing is it doesn’t have to get in the way if designed right. So someone can also just sit down and blaze away typing and ignore it.


>Hasn’t anyone tried to add some kind of interactivity to command line typing to eliminate this advantage?

One example: /bin/rm has the -i (interactive) option. Some people alias rm (or, if they are newbs, it is sometimes aliased for them by someone senior in their org) to 'rm -i', which helps avoid deleting files by mistake due to typos or typing too fast. With -i, rm asks for confirmation before deleting each file. This matters more on Unixes because most of them do not have any undelete utility; nor does Windows, AFAIK, except for third-party ones like the Norton Utilities (does that still exist?), which need to be installed. And installing the utility (if you can get it) after an accidental delete has some chance of the utility landing on disk blocks previously occupied by the deleted file.
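
The alias itself is one line in ~/.bashrc (or your shell's equivalent):

  alias rm='rm -i'
  # now every delete asks first:
  #   $ rm notes.txt
  #   rm: remove regular file 'notes.txt'? n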


While not as powerful, a number of modern tools support a --dry-run option which can be useful in avoiding mistakes.
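
A few real examples:

  rsync -av --dry-run src/ dest/   # report what would be transferred, copy nothing
  git clean -n                     # short for --dry-run: list what would be removed
  make -n                          # print the commands without running them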


Unless you use a keyboard shortcut in your GUI, in which case all bets are off. I remember being constantly frustrated back in the Adobe Illustrator 10 / Photoshop 7 days that the standard shortcuts (like cmd-H and cmd-M) did something completely different than I expected.

I think this is one of the big problems with both CLIs and (most) GUIs. We're stuck in a particular mode of thinking. For CLIs, it's that pressing a letter key must append that letter (and only that letter) on the screen, like a typewriter. Traditional GUIs have their own problems, which I'm sure half the crowd here can recite in their sleep.

When developing my app Strukt, I discovered I really needed to decouple "how you type it on the keyboard" from "what it looks like on screen". You get autocomplete on all operations and flags, so you can type just the first letter of a flag, but still see the whole thing on screen. It appears that some other modern editors (like Atom) have adopted a similar system, which is great. Emacs, of course, has had this forever (but has its own UI issues). :-)


That's why I prefer the wordier -- alternatives, like --verbose


I use one-letter flags when working interactively, to save time, but my shell scripts always use the --verbose form. It makes them much easier to read.
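
curl is a good illustration; both of these are the same request:

  # interactive
  curl -fsSL https://example.com/install.sh
  # in a script
  curl --fail --silent --show-error --location https://example.com/install.sh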


ctrl-W


One interesting thing to note is that IEEE 1003 says: "The -W (capital-W) option shall be reserved for vendor options."

GNU getopt() uses this to make --foo and -Wfoo equivalent, but this is a GNUism of questionable utility.

I have a feeling that the original reason for this is that there was historically a libc that implemented wordexp(3) by exec'ing /bin/sh -W, but I have no way to confirm that.


I can imagine one of the reasons it might be useful: if you have a badly written argument parser that only accepts single-dash options, but you need to pass a long option through (that gets shelled out). However, it's entirely opt-in, so I imagine this is not practically useful for most programs.

Notably, GCC's -W<foo> works the same way. You could do "gcc --all" if you wanted to.


What's the flag to specify a separator character?

  paths() { echo "$PATH" | tr : '\n'; }
  paths | cut -d / -f 3
  paths | column -s / -t
  paths | sort -t / -k 1
Any more?


  paths | awk -F / '{ print $1 }'
(edit: format)
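
And going the other way, -d picks the join separator too (GNU paste), gluing the lines back together:

  paths | paste -sd : -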


> -h Help. This is actually less common than one might expect offhand — for much of Unix's early history developers tended to think of on-line help as memory-footprint overhead they couldn't afford.

This one does surprise me. It is almost always one of the first options I try. That or "--help".
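
Supporting it costs almost nothing in a shell script; a minimal POSIX getopts sketch ("mytool" is just a stand-in name):

  usage() { echo "usage: mytool [-h] [-v] file..."; }
  while getopts hv opt; do
    case $opt in
      h) usage; exit 0 ;;
      v) verbose=1 ;;
      *) usage >&2; exit 2 ;;
    esac
  done
  shift $((OPTIND - 1))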


A friend of mine had once telnetted into their (awful) ISP home router and started playing with the various commands. Not thinking, they typed "init -h" and the router promptly stopped routing packets. Turns out the (probably custom-written) "init" automatically sets the runlevel to single-user mode, and they hadn't bothered to implement "-h".

For this reason, I always use "man <command>". The prevalence of "--help" has always bothered me since a badly-written command could start doing all sorts of unexpected things, when you just wanted to see the documentation. I've seen all sorts of ISV software not implement --help correctly. "man" always works.


Ooh, good catch.

"man oven" = look up your oven model in your Home Manual

"oven -h" = turn your oven dial past "Broil" and hope it pops out a hidden cheat sheet


I find that with most modern tools, whether or not -h or --help is implemented, using them often yields at least a usage statement.

I'm sure it's just a default "I don't recognize that option, so here's how to use this tool" message, but the result is the same.

There are exceptions of course, e.g. mysql uses -h for host so you just get an error message about requiring an argument. --help is implemented properly in this case though.


I usually use `-h`, because it's shorter to type than `--help`. I find I encounter this kind of message way too frequently:

    wc: invalid option -- 'h'
    Try 'wc --help' for more information.


That is annoying; I actually prefer the behavior I described. If an argument is not recognized I just want a usage message so I can check my syntax quickly.


-p is also very commonly "password". It may take an argument, or it may cause the program to interactively prompt you for a password when you run it. Similarly, -k is often "key".

-a could also be "ask"; Gentoo, for example, uses it to ask you to confirm before proceeding with emerge.

-h is very often "host".


For everyone who finds this (and similar work by ESR) valuable, consider supporting him on Patreon: https://www.patreon.com/esr


All I can think of, while noting all the numerous exceptions to the rule, is https://www.xkcd.com/293/


pkill -v 1


In my version of pkill the short version is specifically disabled.

-v, --inverse

              Negates the matching.  This option is usually used in pgrep's context.  In pkill's context the short option is disabled to avoid accidental usage of the option



