The best way is being restricted to dumb terminals for compiling and running university assignments.
I don't know how younger people force themselves to learn to use the command line. I had Windows 98 at home at the time and the university environment seemed primitive in comparison, but I had no choice.
For even faster learning, make sure the remote machine is far enough away that network lag is a factor. It’s amazing how fast the student will learn I/O redirection, tailing files, and grepping for needles, and of course automating the entire build/test lifecycle.
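Roughly the kind of thing the lag teaches, sketched here with invented host and file names:

    # run the whole build remotely in one shot instead of poking around interactively
    ssh build-host 'make 2>&1 | tee build.log'
    # pull just the needles out of the haystack
    ssh build-host 'grep -n "error:" build.log | head'
    # watch a long-running test without holding a live prompt hostage
    ssh build-host 'tail -f test.log'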
I first learned not only the Unix CLI but Vim under these conditions.
I typed the HTML for my student website by hand in a minimal and ancient Vim build, because student websites were hosted on some kind of Unix mainframe.
It probably took me 2 hours to write 5 sentences! I was so proud of myself that I added a footer to the effect of:
> This page was typed painstakingly in Vim. "What doesn't kill you makes you stronger."
I'm sure I would have eventually forced myself to learn Vim anyway if I never had access to this Unix system, but it was definitely important and formative to start off this way.
When I was at uni and I logged in from home with my 1200 baud modem, I used vi's "open mode" (no, vim doesn't have it), which is visual mode, but restricted to one line at a time. I even had a shell alias for "ex +o $*" to start it. It was quite fun to use, but nowadays it's not terribly useful anymore, so I can see why vim hasn't implemented it.
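(In a modern Bourne-style shell that alias would be a small function, something like the sketch below; the name is arbitrary, and it only works where ex still implements open mode, which, as noted, vim's ex doesn't.)

    # start vi's one-line "open mode" via ex
    vo() { ex +o "$@"; }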
Having used a 3270 terminal in an extremely saturated system, it's amazing how much your way of working changes. Typing a full page of code, checking for typos, doing a test run in your head, then finally pressing Enter and sipping your coffee while the system took 1-2 minutes to process the screen.
As soon as it was upgraded, we reverted to the normal workflow of pressing Enter first, then correcting typos, paginating instead of searching, debugging live instead of in our heads, etc. I don't think we were more or less productive either way; they're just different ways of working.
When I were a lad we had a teletype terminal in Pollock Halls (Edinburgh) to complete our assignments (well there were some VT100s but with deadlines coming up you had to take what was free!).
Over at Heriot-Watt we had SGIs and Windows boxes aplenty and I still found myself working in a little office on a VT100 hooked up to a Sun 3 (which ran a forgotten line printer). At first just because there was never any demand for the room, then later because it was just more fun.
Why, when I was a lad we had to get up 10 minutes before we went to sleep, mill our own paper AND punch the cards, and if you left an instruction out it was no supper for a week.
I don't remember if it was a VT220 or VT320 that did it, but I had to use one in an undergrad computer lab (also connected to some kind of DEC running Ultrix) where, whenever emacs did a screen redraw (for example if you did a page down with C-v), it would drop into I-search mode: something in the terminfo sent a C-s to freeze the screen (then some more commands, and a C-q to un-freeze it), and that C-s was visible to emacs.
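(Related trivia for anyone who hits C-s weirdness today: the common modern version of the problem is typing C-s yourself and having the terminal freeze until C-q. That XON/XOFF handling can usually be switched off:)

    # stop the line discipline from eating C-s/C-q as flow control
    stty -ixon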
Anyway what I'm trying to say is just as I'm glad I didn't have to use punch cards, I'm glad that the youth don't have to deal with that kind of nonsense.
I have a few dumb terminals connected to my machine. I wanted to get an X terminal, but I’m holding off because I’m worried about incompatibility between modern X clients (with whatever extensions they may use) and the old X servers typically running on the terminals.
That sounds good, I had a hunch video playback wouldn’t work. My only other hesitation is that these particular ones only connect over 10Base5, which would involve some extra networking work and SLIP, which I’m not sure is even supported on modern Linux, given PPP seems to be the go-to for that sort of thing now.
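(If it is still there, the classic incantation was roughly the following; this is a sketch from memory, with placeholder device and addresses, assuming a kernel built with SLIP and net-tools' slattach:)

    # attach the SLIP line discipline to a serial port, then configure the interface
    slattach -p slip -s 9600 /dev/ttyS0 &
    ifconfig sl0 192.168.7.1 pointopoint 192.168.7.2 up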
Does it actually have onboard 10Base5 or an AUI connector? If the latter, there are millions, or at least dozens, of 10BaseT to AUI transceivers sitting in people's junk drawers. EBay has a bunch at $10 and $15 price points.
This post definitely contains some good general advice; to me the most important step is really simple:
Just use them. And keep on using them.
Resources like those suggested are good, but each will suit people differently - the one thing that all people who are expert at these tools have in common is that they've all used them a lot!
The last bit is crucial. Be prepared to relearn it after a prolonged break, so taking notes may be of some practical help.
I wish *NIX land had some uniform convention for option naming and a targeted help system to go with it, so one could look up an option directly (preferably in shortened form, no need to spell out the whole word). OpenVMS HELP was a good example of a user-friendly command-line help system, well, but that's another story...
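(The closest common workaround is the pager's own search, plus apropos; e.g.:)

    man grep           # then type /--invert and press Enter to jump to that option
    man -k compress    # apropos-style keyword search across names and descriptions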
And, to be fair, give yourself a safe enough space to experiment.
Use Docker, libvirt, Digital Ocean, multipass, Virtualbox, or whatever you want, but find a place where you can use the stuff without worrying too much about the consequences :)
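A throwaway container is probably the cheapest version of this; a minimal sketch assuming Docker is installed (the image tag is arbitrary):

    # everything inside vanishes on exit, so experiment as destructively as you like
    docker run --rm -it ubuntu:24.04 bash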
In terms of getting started with the command line and getting an understanding of what you are doing (rather than the handful of “memorize these commands” that some beginner tutorials sadly end up being), I’ve found “Learn Enough Command Line to Be Dangerous” to be absolutely amazing.
It’s free, even though the webpage makes it look like you might need to buy a subscription (edit: you need to pay beyond chapter 2). It takes half a day to a whole workday to work through, but for me it was a great investment of time.
For users, The UNIX Programming Environment by Kernighan and Pike is the book that got me going. There are a few things in there that don't exist now but it is the most concise easy-to-read book for new *nix users.
For administration the UNIX System Administration Handbook by Nemeth, Snyder, Seebass, and Hein is how I got started. Newer editions of this book cover Linux specifically.
2. play a man page quiz with a friend:
- read a random command and ask for the description
- read a random description and ask which command it is for.
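A quick way to draw random entries for the quiz, assuming GNU shuf and a populated whatis database:

    # pick a random command and show its one-line description
    cmd=$(ls /usr/bin | shuf -n 1)
    whatis "$cmd"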
I've done both as an undergraduate. The HP-UX manual set remains on the shelves right next to me. Over the years, I've found that letting candidates compose a pipeline of commands to solve a problem is a great way to make interviews more fun ("How would you solve this problem without writing a program, by combining existing ones?").
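A hypothetical example of the kind of answer that makes these fun (the log file name is invented): find the ten busiest client IPs in a web server log without writing a program:

    # one field extraction, two sorts, and a count
    cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head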
I don't think absolute precision regarding specific commands or their flags would be required, because when you're in the chair and dealing with a problem you're going to have access to the resources you need to get that precision (man pages, vendor documentation, etc.).
Though being able to explain the mental model of how you want to go about manipulating the output and extracting the information or data you need, be it through JSON manipulation, regex, or line/column parsing, is valuable.
> Most of these tools come with a --help or -h switch, that tells you the options available, with a short explanation of what each does.
Please don't do this by default unless you're sure such a command supports that flag; otherwise you're literally just running unknown commands with unknown consequences.
The first port of call should always be `man` and not `--help` like the article suggests. Aside from that, the article looks good.
> If `--help` does anything besides display usage info or fail, the author of the tool is to blame.
No, they're not. Not every command line tool uses flags; some use ordered parameters. So parameter 1 might be a file name to write to rather than a flag. Just look at how messy it is working with tools like `cp` and the `--` flag to get an idea of the problems some programs face differentiating between files and flags.
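(The classic demonstration, for anyone who hasn't been bitten yet, is a file whose name looks like a flag:)

    touch -- -rf    # create a file literally named "-rf"
    rm -- -rf       # without "--", rm would parse this as flags, not a file name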
Other tools might not take any command line parameters at all. So it's entirely forgivable for them not to check if any parameters were supplied.
Then there is the issue that some programs accept `-h` and others `-?` as a shorthand. Some programs have `-h` as short for `--host`.
Plus, even if I were to agree with your point, which I don't, it's entirely moot, because it's not the author of the tool who has to fix any issues that befall a host system due to someone blindly running commands.
So whichever way you cut it, blindly throwing flags at executables and expecting a favourable outcome every time can be risky.
> Many applications that are installed outside the scope of a package manager, especially tools distributed as a single binary, lack a manpage.
I hadn't forgotten that point. But if you've downloaded an application outside the scope of the package manager then you should at least have some idea how it functions, and thus know whether it's safe to run `--help`. If not, you probably know where to look online to find that information out.
Like I said in my OP, I've got nothing against people using `--help` et al. I use them myself. But it shouldn't be the first thing you recommend UNIX newbies do when they're figuring out a command. It's usually something you do when you're already familiar with what a command does but need a nudge as to which flags to supply.
For starters, anything that’s a shell script. Maybe add Perl to that equation too (I’ve seen all too many Perl scripts that didn’t properly check parameters).
I’ve also seen plenty of enterprise-UNIX-era software not follow GNUisms (which `--help` technically is). It’s all good and well if you’re just running GNU/Linux with modern DevOps tooling, but don’t expect `--help` to work on every Solaris or BSD application. Not to mention relics like Informix and its tools.
Also anything hacked together in house should be treated with caution too. Open those files in $EDITOR and/or examine the README (if you’re lucky enough to have one) first because there’s no telling how lazy your predecessor might have been.
So yeah, over my career I’ve run into numerous instances where `--help` might have had unforeseen consequences.
There are almost no man pages at my job and the software is very old so good luck Googling. I practically default to --help here. I figure if --help somehow screws something up then they deserve it.
> I've never read a man page from start to finish, and I'm not sure you're meant to.
When I was 16 and just starting to get into Linux, I got a tiny little paperback, "reference to Unix programs" or something. It was literally just a list of every single Unix program and a shortened version of its man page. I read it cover to cover. After that, if I needed to do something on the command line, I knew there was a program to do it and what its capabilities were. After a couple weeks I didn't need the reference anymore.
I can't remember the book anymore, but this is basically what it was like: https://dspinellis.github.io/unix-v4man/v4man.pdf See if you can find a version newer than 1973, though... O'Reilly's Unix In A Nutshell isn't bad.
Read your man pages cover to cover. It's for the same exact reason you learn a bunch of "useless BS" in school: to save you time down the road because you'll know where to go for the solution.
I think I have tried it. The only reason I don’t use it much is that it uses a lot of RAM, plus it’s made with Python, which makes it slow compared to other browsers.
Maybe it’s just my opinion, but I’m happy to hear your thoughts on this.
Personally I use Qutebrowser on the same screen next to my terminal when I code. That allows me to be mouse-free during coding. It’s a great experience. I’m not seeing it being slow, and I feel that the Python hackability gives me superpowers.
Can’t judge the RAM consumption; my machine has sufficient RAM (40GB).
For private use (banking, email) I still use Firefox though.
Wow, you have 5 times more RAM than mine.
Anyway, I have used Qutebrowser, and it makes things slow, especially when I’m watching something on YouTube, which is why I stuck with Firefox.
This is awesome and so simple that I can't believe I haven't done it for myself already. It would be really useful to have your personal man page displayed as the header or footer of the official man page.
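A minimal sketch of that idea (prepending rather than truly splicing, and with the notes path and function entirely hypothetical):

    # show personal notes for a command (if any) before the official page
    man() {
        [ -r ~/.notes/"$1" ] && cat ~/.notes/"$1"
        command man "$@"
    }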
(On NetBSD,) I alphabetically went through /bin, /usr/bin, /sbin, /usr/sbin, and /usr/games and read each program's man page.
Yeah, took a while. Also read POSIX.2
But it was OK, NetBSD doesn't have zsh installed by default (nor bash; only csh, sh, and (pd? m?)ksh).
When I did that, I was primed to use tcsh at $WORK. Switched to ksh later (and remained on there for a loooong time).
Anyways. I failed to make an actual point. My point was the following: The BSD manpages are _excellent_. IMO way better than the linux/GNU manpages (or info pages) of comparable utilities. It shows, again IMO, that the BSDs are a complete, single-tree distribution where documentation, userland and kernel go hand in hand and are equally well groomed.
So maybe, for learning "UNIX", go get yourself a nice little {Net,Free,Open}BSD install and toy around with the elder ways.
FWIW it was just a silly joke. I was going to write xargs first, but then realized that there's a couple of letters after x. I'm not even a zsh user myself.
I honestly agree that reading through the man pages is a good idea (after you've acquired some basic knowledge already, of course), and as a Linux user I'm a bit envious of many things BSD, the documentation included.
I used to spend my time in #sed and #awk, and try to answer any short question that came up in the channel. They came up at a good pace; I don't know if this is still the case.
It was a really good exercise, and it has an additional social aspect you will not get from reading books (which is a great resource too).
I often look at web versions of man pages because it's just plain easier to deal with when researching mega-commands like curl or openssl. The green-screen-text-only bastard in me argues vehemently against this, but the pragmatist in me doesn't care. It's just plain easier.
How so? You can navigate much more easily inside the less pager that displays manpages than inside a browser. Moreover, it is easier to copy/paste (or simply read) snippets of the manpage into your terminal, after all everything happens in the same window. Opening a web browser and finding the man page also seems much more cumbersome than typing "man". I would never have thought that it would seem easier to anybody!
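(For reference, the handful of less keys that cover most man page navigation:)

    man curl    # opens in $PAGER, usually less
    # inside less: /pattern searches forward, ?pattern backward,
    # n/N repeats the search, g/G jumps to top/bottom, q quits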
I think it's a combination of a larger window, antialiased text, an arguably better font, a semi-transparent background, the terminal being 256-color (or more), and so on. By passing -a I get it down to 80% CPU. This is progress, guys!
I have all of this except the transparent background. But that should be a compositor thing, independent of the window contents. That's bizarre, pushing a few thousand characters to the screen shouldn't, by far, reach 80% CPU usage. There's something in your system that's really botched up.
I tested again, it's Terminal.app doing something! I tried with iTerm and CPU usage is negligible. Wonder what it is doing. My system is otherwise very fast.
Is it the program that's chewing up the CPU, or is it your terminal app doing so to make many individual changes per frame? Many terminal emulators are far from efficient in that sort of use case, as they simply aren't optimised for it at all; it is relatively rare.
Yep, I've just installed it - looks like it will be perfect for when I have to do things that I rarely do (and just want to quickly know how to do something)
You might be interested in the Rust implementation dbrgn/tealdeer. There is also denisdoro/navi which can use both cheat.sh and tldr, and you can also write your own cheatsheets.
Plan 9 man pages are also much more manageable than their respective GNU versions.
The tools in Plan 9 are also much simpler, with fewer options (and also a few differences), but it's a very useful subset, you will rarely need anything else.
The genius of man is that it uses any standard system pager, and so the navigation semantics (admittedly simple) are clear.
Info uses a dedicated document viewer which, unless you use it frequently, is novel and idiosyncratic.
(This can be changed, but more painfully than with man.)
Info is a hypertext documentation system admittedly. It happens to have been invented pretty much simultaneously with another you may have heard of, the World Wide Web. A system with slightly greater user familiarity. And Free Software implementations.
Debian's 'dwww' is an interesting marriage of multiple documentation formats. It presents manpages, info pages, Debian package information, and additional documentation, all categorised and indexed, and accessible through a web browser from your local system.
Including, as it happens, console mode browsers (lynx, w3m, elinks, links2, etc.). Which tends to amp up and standardise access to all of the above.
dwww is the one thing which makes info actually useful. Which is a shame, as there really is some quite good info documentation; it's just buried under what is, for most people, an impenetrable interface.
In what way is info's interface "impenetrable"? It takes two minutes to look up its keybindings, most of which happen to be mnemonic: h-help, n-next, p-previous, u-up, l-last, q-quit, space-scroll down, shift+space-scroll up, tab-next link, shift+tab-prev link. You probably wouldn't ever need anything more than these 10 keybindings.
What does info add over plain old manual pages? I honestly don't really know, since I usually forget that info even exists.
I feel like the effort the GNU project put into creating a competing standard for documentation would have been better spent improving the tooling around the standard man page mechanisms instead.
I feel like info pages are more suitable for explaining the features of larger pieces of software. Can you imagine what a mess the man page for emacs would be, if it were to try to explain all its functionality? Instead the man page of emacs only gives a brief description of the software and explains the command line options and all the functionality is explained in an info document.
I think of info as an alternative to /usr/share/doc, not man pages.
The most important features of info that are missing from man:
I never knew about tldr. That tip alone is phenomenal. I'm embarrassed to say I've been doing this 20 years and never seen it. Trying to pull out several common usage examples from man pages has always been my difficulty.
man is missing some concrete examples of how the tool could be used. tldr seems like a good thing to fix that. cheat.sh is also a great alternative to tldr; I use it a lot.
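Both are a single command once set up, and cheat.sh doesn't even need that:

    tldr tar             # a handful of copy-pasteable common invocations
    curl cheat.sh/tar    # similar, fetched over plain HTTP, nothing to install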
I regularly google "man {x}": man grep, man curl...
and a couple of times a year I absentmindedly google: man find.
There's no Linux in the first 30 pages of results.
> I don't know how younger people force themselves to learn to use the command line. I had Windows 98 at home at the time and the university environment seemed primitive in comparison, but I had no choice.
And we were lucky, we had X-terminals!