I say this as someone who has been heavily using the command line for the last decade: even if you "know" how to use a CLI decently well, go read this if you haven't. From only a couple of minutes of reading I found not one but two new tidbits that I had never even considered looking up. This information will completely change my daily levels of frustration when using a CLI. Very, very high ROI link.
Not read the article yet, but this is something that is new to me but probably shouldn't be. Hopefully I'll remember it next time it might be useful!
Also, scanning the sort documentation for other bits I'd not remembered/known, I notice --parallel=n. I'd just assumed sort was single-threaded, whereas it is not only capable of multicore sorting but does so by default. Useful to know when deciding whether to do things concurrently by other means.
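For example (the file name and numbers are made up), you can also cap the thread count and buffer size explicitly:

```
# GNU sort picks a thread count automatically; --parallel overrides it,
# and -S sets the in-memory buffer before it spills to temporary files
sort --parallel=4 -S 1G bigfile.txt > sorted.txt
```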
I’ve been using the command line for almost 3 decades. This is really great! I found it covers basically everything I ever had to care about. Strikes a good balance in level of detail. Well done!
Working at the command line is a superpower, especially when combined with auto-suggestions from tools such as zsh + oh-my-zsh.
I would be completely lost if I didn't have the auto-suggestions as my "second brain"!
One problem though: to run a variant of an existing previous command, I find myself editing super long command lines, i.e. bringing up a previous command by accepting the auto-suggestion, then Ctrl-B through it to make an edit and run a new variant. This made me wonder: is there some tool that presents a TUI type of two-dimensional interface that lets me navigate through the various options using arrow keys etc.?
I like zsh, and I want to mention the fish shell auto-completion.
I start typing anything contained in a command in history (not necessarily the beginning) and can flip through the matches with the up and down arrows. It is nicely highlighted and it's the default, so I do not need to spend too much time fiddling with settings.
I can’t directly answer your question, but in iTerm2 I am able to Option+click where I want to put the cursor, and it will put the cursor exactly there. This lets you edit annoyingly long commands with more ease, and is (kind of) a GUI feature in a sense.
> is there some tool that presents a TUI type of 2-dimensional interface
In an Emacs shell, that's the default behaviour - then you can edit a previous command, and by pressing Enter it gets copied to the command line and executed.
Please, can anyone provide guidance for making Win10 CLI UX tolerable? After more than 2 decades on macOS, very comfortable w/ customized zsh in iTerm, I'm now unavoidably working in Windows and hating it. Sad to discover my vague perception of Windows as a 2nd-class citizen (or 3rd-world country) is all too accurate. Running git-bash in Terminal, surfacing any meaningful git status in my $PS1 incurs multisecond latency. Surely there's a better way. Right?
For a good PowerShell experience I use https://starship.rs/ which includes git info. I use the new windows terminal with the font/colors/etc set just so. For a package manager I like using https://scoop.sh/ and for anything missing there chocolatey usually has it. Good luck, there's more good stuff out there but it's hard to find.
I prefer macOS however my work machine is Win10. To improve matters, I use Cmder for my terminal with WSL2/Ubuntu. It isn’t perfect however it isn’t awful either.
Since the list mentions grep -o (--only-matching) and regex, here is my preferred trick to extract a specific part of a line and circumvent the lack of capture groups in grep.
Imagine you have a line containing
prefix 1234 suffix
and want to only grep the number but need to match prefix and/or suffix to not grep another unwanted number.
The magic is '\K', which kind of resets the 'matched buffer'. [1] So anything before \K is matched but not output with -o (and without -o, anything before \K would not be highlighted).
And for the suffix: (?=) is a positive lookahead [2] which checks if, in this case, ' suffix' is found after the number but also does not include it in the 'matched buffer'.
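Putting it together (the input line is the hypothetical one above; this needs GNU grep with PCRE support for -P):

```
# match the prefix and suffix, but output only the digits between them
$ echo 'prefix 1234 suffix' | grep -oP 'prefix \K\d+(?= suffix)'
1234
```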
Nice one. I'll try to counter this garbage GPT-3 answer:
- I don't think the regex engine in Perl is implemented in Perl. It's probably implemented in C/C++, like grep's. libpcre is in C anyway.
- Even if grep is/were more efficient, you might have consumed more time and energy thinking, typing and running "grep --only-matching --perl-regexp 'prefix \K\d+(?= suffix)'" than the suggested Perl solution.
- I might have consumed even more energy typing this reply. My computer is there, waiting for me to type, not doing much.
In connection with history, you can use !$ for the last argument, but you can also use escape-dot. I use that quite a bit (and escape-dot is slightly easier to type than !$).
Also worth pointing out that you can modify a command from the history list before running it by typing !xxx:p (adding :p, instead of !xxx which just re-runs the command). Then I press arrow-up and modify it before running it.
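A quick illustration of both tricks (the directory name is made up):

```
$ mkdir -p ~/projects/new-thing
$ cd !$                  # !$ expands to the last argument: ~/projects/new-thing
# pressing Esc-. instead inserts that same last argument at the cursor

$ !mkdir:p               # print the last mkdir command without running it
mkdir -p ~/projects/new-thing
# now arrow-up recalls it for editing before you run the variant
```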
The less variant is good if you only want to follow for a short time or check for new output once, since it can be enabled/disabled within less itself with 'F' and Ctrl+C, and then you can scroll and search in less again.
And regarding tail: I think most users want tail -F instead of tail -f in most cases. Lowercase -f follows the file descriptor, while uppercase -F follows the file name.
So with -f you can still follow a file after it is moved or deleted (but I can't imagine many cases where you'd want to keep following a deleted file while someone continues writing to it).
With -F you can follow a file which does not exist yet (following starts once it's created), or keep following the new logfile after the old one is rotated.
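In short (the log path is just an example):

```
# -f follows the original file descriptor: a rename or delete doesn't stop it,
# but you silently keep reading the old, rotated-away file
tail -f /var/log/app.log

# -F follows the name: it waits for the file to appear and re-opens it
# after log rotation
tail -F /var/log/app.log
```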
It's interesting how dev tools went from text-based in the 80s to GUI in the 90s and then back to text-based. As well as the terminal, think about Markdown vs WYSIWYG, Visual Studio/Xcode vs Ruby on Rails, YAML config files vs XML with a visual editor, etc.
The terminal is good because it's unconstrained. But most of the time I would prefer a GUI interface.
We just need to blend the two approaches better and always have a terminal fallback.
Not sure if it really “went back to”, but an important reason is that Microsoft dropped the ball on consistent and power-user-friendly Windows GUI, and the year of the Linux desktop remains perpetually in the future.
No one paradigm serves all needs. I use GUIs if the tool needed is complex, and I'm not familiar with it; a well-done GUI is much more discoverable than a CLI.
I also use GUIs if the output is more than one-dimensional. Image editors are an obvious case, but how about spreadsheets? And I love SmartGit because it shows the log and diffs in a much more intuitive and interactive way than even tig.
Note that I'm a bash-boy from way back, and spend time every day using it, interactively and in scripts. CLIs are great when they match my tasks. But they aren't the be-all and end-all.
Beyond the differences in task, I'm sure there are people whose needs are consistently best-served with CLIs (you sound like one). Just like there are people who consistently go to GUIs for their tools.
So: just because you view the world of tools in a certain way doesn't mean everyone else should as well.
Git is a good example because most tools will have a command-palette that prints out the git commands they are running to retrieve data to render.
Otherwise they are using libgit2.
It's interesting to think about the difference between an API as a library vs a CLI/REPL. Often when I am building a library, for testing I would like almost every function to be runnable from a CLI.
Anytime someone is doing some piping (joining commands) or shell scripting, it could usually also be its own script with a CLI. Many applications could also just be shell scripts piping stuff together, which I think is actually what the Unix people envisaged. It starts making you ask: why are some things CLIs and not everything? Lots to think about...
Yes. Once again, it's And, not Or. Even when playing a game you use the keyboard a lot, not just graphical input, even though you live in a graphical world. The same goes for editing, etc.: both.
Debuggers. A GUI debugger can show you a watch window with some variables and you can see it change in real time as you step through code, without you having to explicitly run the print command on each step for each variable, and without leaving a mess of historical values on your screen. Thanks to that, observing state as it changes costs less effort. Instead of manually asking all the time, you passively observe. It means greater productivity for the user of the debugger. And it goes beyond the watch window: source preview, disassembly, registers, hexdump of memory…
And obviously: editors. Unless you’re using ed, everybody’s using GUI or TUI editors. And TUI is just a poor man’s GUI. All the benefits of CLIs are gone, while the things GUIs are good at are degraded.
Not to mention anything related to graphics, photography, video…
CLIs all have the same consistent user interface. Positional arguments, flags, and text output. On unix-like envs they also have pipes. This is why people like them. This is what makes people productive in them.
Sure, modern GUIs have a few shared UX paradigms, but largely they are all different.
I wonder how the Unix philosophy applies to GUIs? Or what the early developers thought about it. How would piping work in a GUI?
There was a GUI at some point - in an old rarely used OS - that allowed you to use point and click to connect the output of one text box into a form field.
Debuggers are also an interesting example. In IntelliJ, when using the debugger I've often used the command-line interface too. Sometimes I have been debugging something and wished I could use a Node.js script to automate some stuff, or pipe the output through a Node script. The way the debugger is implemented in IntelliJ makes this a little difficult, certainly not as easy as piping. I think this is because it uses the gdb machine-interface API, which is different from the text-command one.
For source control, IntelliJ does actually print all the git commands it is running which is nice.
cp is faster than drag-and-drop, especially when operating on multiple files (e.g. cp myphotos/2022-11-* someplace/), vs drag and drop, where you need to open both locations in windows, select the files in one, and drag them to the other. Then probably close one of the windows.
Command-line is also discoverable, just not by the 'click on this' mentality, you have to be more curious. Which might be the better way to learn.
You might be right about higher density of information, but do you get to choose the information? E.g.
ls, ls --color, ls -lht, ... How quickly can you change between different representations? E.g. find big directories with du -h | sort -h.
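For instance (--max-depth=1 is just one common refinement to keep the list short):

```
# the same listing, re-sliced on demand
ls
ls --color
ls -lht

# biggest immediate subdirectories, largest last
du -h --max-depth=1 | sort -h
```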
Changes can be seen instantly in the CLI, I don't understand what you meant by this.
The use of a keyboard when it is natural.
imagemagick
With the added advantage that all of the above, because it is text-based, is:
- recordable,
- repeatable,
- searchable,
- scriptable.
I used to be of this opinion. Now I think more “GUIs are great once you’ve mastered the CLI and know what underlying operations you want to execute”. A GUI I discovered recently that I really like is docker-desktop. I used to do everything from the CLI. The gui gives me a much better overview of everything. If I need to dive in to the CLI, I know exactly where to go.
The problem is that when you need to perform a task on some data being rendered by the GUI that is not supported by the GUI. Usually bulk tasks. Like, for all your docker containers, run a certain command.
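For example, something like this (the command run inside the containers is just illustrative) is trivial in the shell but usually not possible from the GUI:

```
# run a command inside every running container
for id in $(docker ps -q); do
  docker exec "$id" uname -a
done
```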
A compromise is that GUIs should print the commands that can be run to get the output they are rendering and to perform the actions they are doing. Like a little command palette at the bottom of the window.
The other day I was doing some `make` stuff. I was passing in a bunch of env vars and flags. I wanted to tweak them between each run. I would have preferred to have check boxes to enable/disable certain flags. Rather than copying into
Then in the output, I have a bunch of make tasks and child makefiles running. I care about the output for each makefile as it runs, but then I'd prefer it to collapse when it's finished. Otherwise it's too difficult to see each step of the build process. A terminal cannot do this. Xcode does it kind of well.
At the end of build too, when it reports errors, I'd like to jump up to the errors.
Almost every command output I would prefer to view as a GUI table or as a webpage.
The problem is that then, instead of just printing output, now I am dealing with a huge mess of frontend frameworks and build toolchains.
One example that comes to mind is a GUI for interactive rebasing, which lets you re-order commits with a drag-and-drop interface. I’m thinking specifically of the one that (I think) is included with the Git ReFlow VS Code extension.
True, a CLI tool could be made to mimic the same thing the GUI variant does in most respects, but at that point you’ve simply re-implemented a GUI, just with all the interface restrictions imposed by the shell.
I’d agree that apps in general should promote some level of scriptability, and letting users drop down into the CLI is a great option for that. I’d just make an argument for giving CLI users an option to “rise up” to a GUI where it makes sense.
> One example that comes to mind is a GUI for interactive rebasing, which lets you re-order commits with a drag-and-drop interface.
With `git rebase -i <some_commit>` on the command line, you get a list of one commit per line which are trivial to re-order in a text editor. It's probably faster than a GUI too, if you're at least a bit efficient in a text editor.
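For reference, the todo list that `git rebase -i` opens looks roughly like this (hashes and messages made up); re-ordering commits is literally re-ordering lines:

```
pick a1b2c3d Add feature X
pick d4e5f6a Fix typo in docs
pick 9f8e7d6 Update changelog
# re-order these lines, save and quit, and the rebase replays them in the new order
```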
---
This is not to say that there aren't valid uses for a GUI, I think there are, but re-ordering commits is not one of them, I'd say.
> Do you have any examples of tools where the GUI is more useful than the CLI, and could you explain why that is?
Dunno if you’re just having some Thanksgiving fun, since the question of GUI vs CLI is mostly settled and moot, and this debate is irrelevant, but I’ll take your comment literally and respond as though you are serious.
So are you talking about shells, or any programs at all? Are you talking about programmers or all software users? What makes a GUI, exactly? (Are text menus CLI or GUI? Is CLI defined by REPL? Is Logo a CLI or GUI? What about vi, nano, or ddd?)
Speaking as someone with a lot of love for the shell, and few decades of experience with it, it seems to me like your question assumes a rather extreme position that doesn’t seem to be well supported by most people’s usage, even if we’re talking only about programmers. If the CLI is strictly better then why do people tend to prefer GUIs? That question needs a serious answer, ala Chesterton’s Fence, before dismissing GUIs. Make sure to thoughtfully consider learning curve, effort, discoverability, prior knowledge requirement, overall task and workflow efficiency, etc.
Web browsing, such as visiting HN and commenting, is much much better in a GUI browser than via manually typed curl or wget commands. What if you had to send POST requests to get your comment up? What if people had to send GET requests to know your comment existed, and then another one to retrieve it? We wouldn’t even be chatting if this was a CLI, right?
More or less all artist tools are better as GUIs, from Photoshop to video editing to 3d modeling to animation. If the output is graphical, then there’s no way around a graphical interface. Using the CLI for this isn’t just tedious and expensive, it’s far less efficient and effective.
Text editors are not CLI REPLs, even vi and nano. Spreadsheets are all GUI. Desktop OSes are GUIs. Smartphones are GUIs. (Imagine making calls via CLI!) Programming GUI IDEs can be extremely effective, especially when refactoring and debugging.
There’s also a gray area of text-based GUIs in the console window, like nano and GDB's TUI mode, just for two examples. Even in CLI-land these things are easier to use and more efficient for some tasks than a pure REPL with text commands.
Could you maybe explain why you claimed a GUI has no value? Are you aware of the history of debate on this topic, and of academic research on this topic?
This is because as Linux started taking over the server space, it brought with it a culture of "stone knives and bearskins" tooling from old-school Unix -- and most applications these days are web applications.
One of the major disappointments of recent years is seeing Microsoft backpedal from promoting Windows as the premier development platform and embrace (to extend or extinguish?) Linux with its command line bullshit. Windows in the late 90s was an entire environment based on composable components, linked together via discoverable interfaces. It had a software composition story far superior to Unix pipes. We should be able to build entire web and cloud infrastructures by wiring components together visually, and use visual inspection tools to debug problems as they happen in the pipeline. Not monkey about with YAML and JSON.
> We should be able to build entire web and cloud infrastructures by wiring components together visually
100%. I think this is the next wave of development. Back to the future.
> Not monkey about with YAML and JSON.
The key difference this time needs to be two-way sync between code and visual.
Having a serialized format (YAML/JSON) for the wiring at some level is still important, but it should be easily modifiable by humans.
In the last wave, we left this two-way syncing behind. An example of this is Xcode's NIB files and Interface Builder. NIBs weren't designed for humans to modify, so everything had to be done through IB, which made certain things a pain and created a lot of VCS churn.
I've been thinking about whether we can achieve a two-way syncing (text <-> diagram) visual programming interface by interacting with the AST of a (subset of) an imperative language and using data flow tracing (via code instrumentation).
I wonder what the minimal data structures needed are to represent most of configuration programming. Such as state machines, streams, graphs (dependency tracking).
Cut from the same cloth as COM (which you are likely talking about) was CORBA, an international standard for "object" interoperability.
Guess what came out of that?
It's one of those systems that's perfect (as in perfectly abstracting away everything) but notoriously and impractically complex. COM is only slightly less so.
The downfall of all the object-oriented approaches is that we don't really think about objects, but instead about actions ("functions") we perform, which is much simpler as well. Basically, you don't see bread and say bread->eat() (or at least your doctor will tell you not to :)) but instead look for something to eat() and then stick bread in it once you find it (eat(bread)).
But as soon as you've got someone else who needs to eat (cat->eat(), dog->eat()...), it is still better to go with a functional approach:
feed(person, bread)
Basically, my abstraction was bad for that case, but it is so much easier to move from eat(bread) to eat(person, bread) and then rename that to "feed" than to introduce object inheritance/composition and think about the commonality between dog and person if all you want to do is feed them.
Sure, there are cases where a purely OOP approach is not burdensome, and may even be easier, but the functional approach, as an explicit superset of possibilities, will usually be simpler and more understandable.
> Windows in the late 90s was an entire environment based on composable components, linked together via discoverable interfaces. It had a software composition story far superior to Unix pipes.
I’ve never heard anything about this (probably because it was before my time); could you elaborate?
I think the parent is talking about the COM interface. [1] PowerShell still has a legacy of object-oriented manipulation as opposed to text/line-based manipulation. I'm too much living in the Unix world to give more insight into how well it functioned, though.
Sounds like COM. Comparing a programming framework to pipes seems a bit over the top though. I don't know anyone who'd seriously advocate building a large application solely from small programs piping their output to other small programs.
I've already complained previously about the new Microsoft though, so I have sympathy with the underlying sentiment :)
I find the suggestion of using `apropos` somewhat weird, since `man -k` has been functionally identical, easier to type, and easier to remember (for me at least) in the 45 years I've been using unix systems. And I think I've only come across one esoteric system (which was some unix on a PDP) that differed in any way between the two.
I think the idea behind preferring `apropos` instead of `man -k` is that you can get to `apropos` with `apr<TAB>`, while the other invocation is longer (and for sloppy typists, the dash may prove problematic).
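For what it's worth, both spellings do the same keyword search of the man-page index on the systems I've used:

```
# both search the short descriptions in the man-page database
man -k compress
apropos compress
```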
In a way, true, but if you want to work in the corporate world, it should be "learn ksh". There is very little difference between them, and it will force you to write portable scripts. Using bashisms will make things a bit harder for you.
Most proprietary software that companies like to buy uses ksh. I support a few of them now where I work; all use ksh.
> Most proprietary software that companies like to buy uses ksh. I support a few of them now where I work; all use ksh.
I've never known companies to buy large software products written in a shell language, but on top of that I have rarely if ever come across ksh in a professional setting.
I can imagine this being the case in some industry niche, but don't think "most" is appropriate.