A pet peeve of mine is confusing the terms "terminal", "shell", "CLI" etc. They are not interchangeable and the distinction is important to understand.
- shell: the "top level" user interface for the operating system. Can be text-based or graphical, command-line based or point and click etc. It's how you do things once you've loaded an OS. Examples: sh, bash, zsh, fish, eshell (emacs).
- terminal: A hardware device for input/output of text. Originally teletype machines (ttys) but later used CRT monitors instead of a printer. Example: DEC VT100,
- terminal emulator: a piece of software that emulates a terminal. Examples: xterm, GNOME Terminal
- CLI: command-line interface. A user interface based on typing commands one line at a time. Does not include text-based UIs like htop etc. Examples: git, curl.
- TUI: text-based UI. Essentially a GUI, but one that works on more advanced terminals (i.e. not actual ttys). Examples: htop, emacs, vim.
This is a lot to take in. Not sure if you're the author, OP, but in case the author is reading: Where would you suggest someone starts? (Note: Don't answer here! Use your answer to shape the organization of information on the site!)
For example, I use ZSH. Should I switch to fish? The list suggests I'd be sacrificing power and the ability to write scripts but gaining, uh, intelligence and user-friendliness? Is that true?
Then there's a ton of stuff about customizing ZSH. I know that oh-my-zsh is the most popular but I did not find it especially performant, so I never really looked into using modular components of it. Now I know there are 50 other guides I could read? Maybe I want a plugin manager. antigen is a plugin manager, maybe it'll be more performant than oh-my-zsh? But what about antibody, which is "faster" and "simpler"? But zgen is "lightweight". Are weight and speed connected? What about weight and simplicity? Maybe I should switch to pure, which is "pretty, minimal, and fast".
Then I moved on to my terminal. Currently I use iTerm 2, which "does amazing things". But maybe I should switch to Terminator, which is the "future of terminals". Or MacTerm, which is "powerful". Or Alacritty, which is GPU accelerated? (Isn't iTerm 2? I don't know?)
I went to macOS package managers. I use homebrew because it's what every webpage for everything I want to install says. Should I get fink instead? MacPorts, I think, is old, but it "simplifies the installation of software". I thought homebrew was pretty simple, but I guess it could be simpler?
Then for text editors: I've never been good at emacs-fu, so I often use nano. Maybe I should replace it with micro, which is "modern and intuitive"? Or jed, which is "freely available"? I don't think I paid for nano, but maybe I did.
How does a curated list of Terminal frameworks, plugins, and resources differ from an uncurated one? Can the author give some example of a framework, plugin, or resource that was considered but not included because of the curation? I suspect the things that I know aren't included simply weren't considered to begin with. Maybe I'm wrong.
Most of this criticism applies to every awesome list that gets linked, of course.
The problem with most of these lists and how-tos is that they're the CLI equivalent of 'I Fucking Love Science'. The people suggesting things like zsh frameworks rarely seem able to properly justify their use — they don't know what they actually do, nor what the alternative is, nor even whether you could accomplish the exact same thing in a less featureful shell like bash. It's just a cargo cult of people who don't really understand what they're doing writing Medium posts or whatever to evangelise unnecessary, badly written software to other people who don't really understand what they're doing.
(In zsh's case, I think that's partly the project's own fault; it has very underwhelming defaults and a very overwhelming set-up process for new users.)
I've been developing an opinion lately that what we need is fewer roundups and less "objective" reporting, and more opinionated "here's the end-to-end stack I use, I think it's good, and here are the tradeoffs" pieces. With a diversity of stacks and opinions, this covers most of the roads you can go down and lets you pick the person whose approach or philosophy most closely aligns with yours.
This is quite good. I wish it would go a little further and let them expound on why they settled on those choices, but I understand that's not the intention of that interview format.
A lot of people seem to complain about losing the ability to write scripts that work on other shells because Fish breaks compatibility.
Well, I've been using fish for a couple of years now, and besides a couple of things that are very specific to my main computer, I build the rest of my scripts to be as POSIX as I can. I keep everything in a repo and I can then push/pull things on new machines and have everything available.
One can still execute `/bin/sh script.sh`. Nothing prevents someone from doing that.
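In other words, the interactive shell and the script interpreter don't have to match; a minimal sketch (the script name and contents here are made up):

  $ cat backup.sh
  #!/bin/sh
  # plain POSIX: no fish-isms, so it runs the same under dash, bash, etc.
  set -eu
  tar czf "backup-$(date +%F).tar.gz" "$HOME/notes"

  $ sh backup.sh                         # pick the interpreter explicitly, even from a fish prompt
  $ chmod +x backup.sh && ./backup.sh    # or let the shebang decide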
More seriously: it’s hard to put together a list like this because so many things are subjective. Maybe you prefer fish because it gets rid of some of the dumb things required for a POSIX-compatible shell, but I prefer zsh because it has every feature and the kitchen sink. I don’t know, and neither does the person putting together the list.
I don't know how someone reading the list would understand that "fish ... gets rid of some of the dumb things [what things?] required for a POSIX-compatible [?] shell, but [others] prefer zsh because it has every feature [what features?] and the kitchen sink."
For example, I didn't realize "smart" in the list's vernacular means "deprecates dumb legacy behaviour". Your post is the first time that became obvious to me. Now to figure out what that behaviour is and why I should care and if it impacts performance and and and...
My post wasn't asking the curator to make decisions for me, it was asking them to help me make decisions.
It would be nice if the list explained that, though!
And with the ~20 zsh "frameworks" and "plugin managers" here, they could really do some comparison, or probably even remove a couple.
I'd rather they showed me 3 good options and explained the trade-off than 20 options with little commentary (the lines seem to be mostly cribbed from the projects themselves).
If you haven't tried micro then you should, it's a huge improvement over nano. For the rest, most people are going to want what you're already using: iterm2 + zsh + homebrew.
We already have distributions, like Debian/Arch/Red Hat, that provide perfectly packaged software.
I already find Homebrew hacky on macOS, so why port it to Linux distributions? :D Is there anyone on HN using it? I'd be curious to know what the advantages are.
Linuxbrew, like Homebrew, is useful for non-invasive non-system/infrastructure packages. Compared to system package managers, it is (arguably) simpler to use and contribute to, and often has more up-to-date packages (especially compared to e.g. RHEL) because it tries to solve a much simpler problem. I think there is room for a package manager that focuses more on managing your user-specific CLI tools than your system libraries.
I don't like macOS all that much, but Homebrew is really good.
The “benefits” you are listing for Homebrew are interesting, because you can also look at them as disadvantages:
> it is (arguably) simpler to use and contribute to, and often has more up-to-date packages (especially compared to e.g. RHEL) because it tries to solve a much simpler problem.
Unfortunately, this often means it breaks or doesn’t do certain things…doing something simple like getting an old version of a package is nigh unto impossible, both because there doesn’t seem to be a built-in way to do this, and also because old packages are constantly removed from the package index. The fact that package inclusion is easy is nice, but it also opens issues that we’re already seeing in the npm community with regards to malicious packages.
The ability to install software to a non-system location is useful, but that's not something Homebrew itself really recommends doing on macOS because it breaks a lot of things. It's really only designed for a single-user machine with the regular user being an admin, which is likely true for the majority of its authors, but those who don't fit into this need to resort to a bunch of hacks to make it work.
I think the latter restriction is because multi-user macOS machines are rare. Linuxbrew is very much oriented towards non-root use and works quite well on a multi-user system. In fact, unprivileged use on multi-user systems is the main reason I think Linuxbrew is useful.
I agree that Homebrew very aggressively focuses only on the common case (newest version, few/no options). I think it's the right choice, as Homebrew is never the only way to install something. It would be bad if RPM had made the same design tradeoff, though.
It's also useful for people who do have root access to a terminal, remote or otherwise, but want more up-to-date packages.
Some packages (my favorite example is youtube-dl) are useless if they're not up-to-date. And sometimes the new version has a feature that's really important to you.
I'm using it for ripgrep and node, for instance. ripgrep isn't available in apt, and the node version in apt is incredibly outdated.
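For anyone curious, the day-to-day usage is just the familiar brew commands (assuming the formulas are named ripgrep and node, which I believe they are in the core tap):

  # installs into Linuxbrew's prefix under your home directory, no root required
  brew install ripgrep node
  rg --version && node --version
  brew upgrade        # later, pull the latest versions of everything installed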
There's a reason most software these days tells you to curl an installer and pipe it to bash, and that's because distro package managers never have the latest version.
Oh, yeah, I don't personally use Arch, but from what I hear, it isn't an Arch problem at all. I was mostly talking about apt (Debian, Ubuntu, Mint, etc) and yum (Fedora, CentOS, etc).
In case you're suggesting Linuxbrew is unnecessary (I don't think you are, but just in case), there's definitely something way more convenient about just using Linuxbrew to install everything than tracking down which software is best installed from which repository.
To respond to the implied dig, I've spent months to years using the following distros on desktops and laptops, in no particular order:
Gentoo, Red Hat, Fedora, Ubuntu, Debian, Mandrake
So I've had plenty of exposure to three major package management systems and a bunch of distro-managed repos, from the perspective of a desktop user (I've used Linux on the server plenty, too, but that's less directly relevant). I've poked around in a few others in a desktop context—Slackware, Arch, Nix, probably more that I'm forgetting.
I also used Macports for over a year.
Homebrew's the most pleasant overall solution for managing user-facing desktop software that I've used, by a long shot.
I use macOS as my daily driver. I've been using CentOS and Ubuntu Server to run my webservers for around twelve years now, which includes a top-1000 website in the US.
I used to manually compile things like Node.js and ripgrep from source to get up-to-date versions, but these days I just use Linuxbrew.
Just discovered it, so I still have to see if it fits my needs. However, if I understood what it does, it could be useful for a recent problem: after some upgrades (apt, Debian), all the Windows VST audio plugins I used under .wine through LinVST stopped working. Not a single one can be used anymore, and it seems a newer wine version is to blame for the incompatibility. So I would attempt to create a different user, install a previous wine version locally there, move the .wine directory over, and see if running the LinVST patcher there makes the plugins usable again. That way I would not touch the system wine installation, which runs fine for other things, ditto for the main user's .wine directory, while still being able to test a different version.
I hate non-discoverable UIs. The first thing I do with any software is click through all possible menus, windows, ribbons, whatever. After some time, I usually get quite a grasp on what the software is capable of. The two notable exceptions are text-based UIs (terminals, DSLs, ...) and software requiring domain-specific knowledge (e.g. geomodeling software, specific 3D modeling software, ...).
I never understood the appeal of terminals. Often (Microsoft's PowerShell is a notable exception) the syntax is full of incomprehensible abbreviations, it's wildly inconsistent, and it's riddled with hard-to-remember acronyms...
No, for me, terminals and most textbased interfaces will never be as usable as GUIs.
As a generality, GUIs can do only the tasks they were designed to do, and little more. Text-based terminals allow simple composition of tasks into arbitrarily-powerful pipelines.
Terminals might not have discoverability, but in terms of sheer power they wildly outmatch GUIs for many tasks.
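A trivial sketch of what that composition buys you (the log file and its format are hypothetical):

  # top 10 client IPs in an access log, assuming the IP is the first space-separated field
  cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head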
Well, hopefully this is not a sentiment that continues to grow in the technical community. All nice things have a cost. For all the efficiency gained from a web view, consider the cost of building the web view, maintaining it, hosting it, and the myriad other frontend activities. Contrast that process with downloading a few popular libraries, coding up some methods, exposing them via a CLI lib, and then publishing to one of the many package managers. I can do the latter in an afternoon, but I may never release at all if I think all my users need a web view.
I suspect if all technical users simply stopped considering text-based UIs acceptable, we'd actually have less quality software overall. There are too many programmers like myself who can make decent programmatic interfaces but simply can't put together a usable web interface to save their life.
If you don't understand the appeal, do you think you should pass judgement? Hate is quite a strong word. Terminal interfaces are quite "discoverable", more so than GUIs, I could argue. You can often display the list of available commands, see their documentation, etc. Terminals are universal interfaces to many different types of software. With GUIs, you must learn the different ways people have decided to hide away commands in convoluted menus. I am personally a reformed GUI-holic, so I understand where you are coming from, and know from my own reform that it is a place of misunderstanding.
A command line interface lets you communicate to the computer using language. Learning a language is something you can practise and get better at until you find that you're able to communicate very complicated tasks that the original authors of the programs had never even imagined.
Graphical user interfaces are a caveman interface. You go to the market, see what's on offer, point to it and grunt. That's fine as long as you see what you want. But you'll never do anything that you can't already see.
It's not easy, though. Nobody pretends that it is. But learning to read and write wasn't easy either and you managed that. What if our education systems didn't enforce that? How would you ever know the power you're missing out on?
But most commandline languages are inconsistent, have inconsistent abbreviations, have inconsistent command parameter naming/defaults, contain acronyms of incomprehensible words, have plain weird names, contain inconsistent or outright incorrect documentation...
A language to do stuff is actually a great thing, especially if it can produce readable and reproducible objects (programs). However, I get really scared looking at most commandline scripts. It's an incomprehensible mess that people can only start to grok after years of experience with the particular commandline tool.
> But most commandline languages are inconsistent, have inconsistent abbreviations, have inconsistent command parameter naming/defaults, contain acronyms of incomprehensible words, have plain weird names, contain inconsistent or outright incorrect documentation...
And how is this different for GUIs? Location / icons / descriptions / ... depend entirely on the application. Some will have hotkey handles, some won't. The same app on a different system will look different (possibly with a different layout). Creating a discoverable GUI takes as much will and attention as a good set of CLI options.
With a language, it is far more important. In a GUI I can just click around and see. With a command line, I need to enter a command and hope it is the right syntax.
IDK about you, but I find writing a command and seeing the result far, far faster than clicking around.
I would spend far less time at my computer (I would only use it for work) if it only had a CLI, but I would never, never use a GUI (beyond text editors, of course) for most of the grunt work: the CLI is simply faster and, most importantly, almost everything you do with it can be automated.
I couldn't even begin to imagine having to navigate the endless dialogues in IDEs to configure every little thing I would do in the command line, I'd give up programming altogether.
So? English is horribly inconsistent but you seem to be doing just fine. We know bash etc. aren't ideal. That's why Python was invented. But the reason they stick around is because they are useful and people do use them day to day. Are you really going to forgo language completely just because it doesn't meet your superficial idea of perfection?
Human languages are also forgiving on the receiving side, which is not true for computer languages. One wrong character and you get an error. This is an important difference both when discovering a language and when using it.
Terminals don’t suit this approach of trying to get an overview of what a program does. Personally I find trying to get an overview of a complex program/concept/project/anything quite taxing and laborious. Terminals are a welcome relief from this issue. They are precisely as complicated as they need to be for a given task. A GUI will never have this property.
For CLI apps you don't have to click through all the menus to discover stuff; there should be a man page you can skim instead. Even better, it's searchable: you don't have to click through all possible menus looking for that one odd feature you don't remember in six months' time.
For full TUI applications I'd agree: they are usually worse at discoverability, and they are used because they make up for it in other ways, like speed and learnability.
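For example, a few standard ways to search instead of clicking around (the keyword here is arbitrary):

  man -k compress       # apropos: search man page names and descriptions
  man tar               # inside the pager, type /keyword to search, n for the next hit
  tar --help | less     # most tools also print an option summary you can grep or page through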
As an example of composability... Say you have an api that returns products, but you want to find out how many have the word aliens:
curl # gets the api responses
grep # searches input
wc # counts input.
jq # parse and query json
You have your answer quickly without thinking (after you know all this cold).
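Strung together, it might look something like this (the endpoint and JSON shape are made up):

  # how many products mention "aliens"? one name per line, filter, count
  curl -s https://example.com/api/products | jq -r '.[].name' | grep -i aliens | wc -l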
Most of the flags are also mnemonic: -r is usually reverse, -R recursive, -a all.
I recently took the API response from one API and, in 5 minutes, constructed SQL queries into a test DB for test data using roughly the above method. Another fun, trolling-type example: a product manager who wanted daily updates got a scrum.sh that ran on cron and posted a summary of what people did to Slack.
I don't know how you can work effectively as a software engineer without knowing something about it.
Speaking of discoverability, I commonly use --help with a command/program that I haven't used before. And if it doesn't have that option, I do dislike it for that.
With zsh and a good completion script (e.g. one provided by oh-my-zsh), tab completion will discover every option for a command. And the man page will list all the options as well.
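For reference, the zsh side of that is just the built-in completion system plus whatever completion functions your framework ships; a bare-bones ~/.zshrc sketch:

  # enable zsh's completion system
  autoload -Uz compinit
  compinit
  # now e.g. `git ch<Tab>` or `ssh <Tab>` offers subcommands, hosts, options, ...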
I hate them too. There are a number of things wrong with them, discoverability is one of them but what really gets me is tracking state. Why do I need to guess at the state of things inside my head as I'm running commands, or query the machine to tell me, when the actual state is sitting there in memory the whole time?
Every time I encounter a tmux recommendation it has me pondering whether it's worth learning minicom.
When I first read the author's rationale for tmux it rubbed me the wrong way, because the main feature they dismiss as cruft (the ability to use it as a terminal emulator for a serial port) is something I still find myself using from time to time.
Is tmux really so much nicer than screen that it's worth learning minicom?
I doubt it? An apparently old version of the tmux FAQ listed these advantages:
- a clearly-defined client-server model: windows are independent entities which may be attached simultaneously to multiple sessions and viewed from multiple clients (terminals), as well as moved freely between sessions within the same tmux server;
- a consistent, well-documented command interface, with the same syntax whether used interactively, as a key binding, or from the shell;
- easily scriptable from the shell;
- multiple paste buffers;
- choice of vi or emacs key layouts;
- an option to limit the window size;
- a more usable status line syntax, with the ability to display the first line of output of a specific command;
- a cleaner, modern, easily extended, BSD-licensed codebase.
Which is all well enough. However, the way I use screen and tmux, I hardly notice any difference. I don't care about BSD vs GPL. I don't try to script either from the shell, mostly just interactive use. I don't really care whether it uses vi or emacs keybindings. I mostly use the system clipboard. I use one pretty standard status line on either one and I have already memorized what I usually type to get it.
I think screen still has some features that tmux doesn't, and vice versa, but I don't have a very strong affinity for either one. They even have similar keybindings. Give them the same prefix key and statusline/statusline color and, if I'm not paying close attention, I might not realize which one I'm using.
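If anyone wants to reproduce that, the relevant config is tiny (a sketch; the prefix and colours are just an example, and status-style needs a reasonably recent tmux):

  # ~/.tmux.conf -- use C-a as the prefix, like screen
  set -g prefix C-a
  unbind C-b
  set -g status-style bg=blue,fg=white

  # ~/.screenrc -- roughly the same look and feel
  escape ^Aa
  hardstatus alwayslastline "%H  %=  %Y-%m-%d %c"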
Matter of preference. I use tmux as a multiplexer and minicom for serial consoles. Mostly because minicom works out of the box more frequently than screen does; even with the proper baud rate and other options set, sometimes screen just shits the bed when used for serial. Never had any issues with minicom, it just works.
Perhaps I was unclear: screen is what I use now (and have used for quite a long time) and I haven't moved away from it because I actually use and value the support of serial consoles.
It mentions TotalTerminal, which hasn't been properly supported since OS X 10.10, and outright doesn't work on versions of macOS newer than 10.12.
I really miss TotalTerminal. I've been forced to switch to iTerm, the only other way I can find to get a hotkey terminal, but it's an extremely noticeable difference to switch from the overall fastest terminal to the overall slowest.
Also, recent versions include a Metal renderer[3] (not sure if it's enabled by default) which might be even faster; I have been running it for some time but have not noticed much difference.
Another reason it might be hard to notice: when you're using ssh you have network latency anyway, so maybe one is simply trained not to react to latency in terminals as much.
For me, the most noticeable delay is in opening the hotkey window. I have animation disabled, and there's still a few hundred milliseconds between when I press the hotkey, and when iTerm appears.
A command-line interface can indeed be very convenient for many tasks; nevertheless, a properly designed GUI can always be made way better than a TUI, so I can see no reason to stick with terminal-based editors given a choice. And I wish people would stop dichotomizing command-line and GUI and start integrating their capabilities to build something synergistic. E.g. I'd love to have a terminal featuring powerful, intelligent pop-up autocompletion and correction, visual shell command construction assistance, a file choice dialogue (so I could choose a file the GUI way and get its name pasted into the command), etc.
I had a product idea (maybe it is even a business idea) a while back to do shell autocompletion based on an autocompletion engine which was trained across multiple users. Kind of like a cloud-connected fzf.
The problem this solves is that the ecosystem of unix tools is so massive it's impossible to wrap your head around it. Most of us just pick our tools and end up at a local optimum. But with that sort of thing seeing the way other people have solved a problem is very useful.
I still think this would be so useful but the privacy situation is very sketchy. If I could come up with a solution to that I might try to build it.
eshell is a shell implemented in Elisp that runs inside Emacs. It also has support for ssh/ftp/sftp and more via TRAMP; highly recommended. You can multiplex it and have a powerline without much effort.
- shell: the "top level" user interface for the operating system. Can be text-based or graphical, command-line based or point and click etc. It's how you do things once you've loaded an OS. Examples: sh, bash, zsh, fish, eshell (emacs).
- terminal: A hardware device for input/output of text. Originally teletype machines (ttys) but later used CRT monitors instead of a printer. Example: DEC VT100,
- terminal emulator: a piece of software that emulates a terminal. Examples: xterm, GNOME Terminal
- CLI: command-line interface. A user interface based on typing commands one line at a time. Does not include text-based UIs like htop etc. Examples: git, curl.
- TUI: text-based UI. Essentially a GUI but works on more advanced terminals (ie. not actual ttys). Examples: htop, emacs, vim.