The Art of Command Line (2015) (github.com/jlevy)
597 points by axiomdata316 on May 23, 2019 | hide | past | favorite | 169 comments

> Learn basic Bash. Actually, type man bash and at least skim the whole thing; it's pretty easy to follow and not that long

Skimming The Grapes of Wrath would be shorter. Thank you, but no thank you.

If you read, execute and understand all the code in the "Advanced Bash-Scripting Guide", you will suddenly find yourself in the top 10% of people who use bash.


(as a pdf: http://tldp.org/LDP/abs/abs-guide.pdf )

I'm pretty sure if you read, execute and understand all the code in most advanced guides to a language you will suddenly find yourself in the top 10% of the users of that language. But the understanding is the hard part.

The Bash Hackers Wiki is a much better resource, especially as reference material: https://wiki.bash-hackers.org/

Wonder why they advise cat /dev/null > file to empty a file rather than echo > file

This will empty a file without closing or invalidating any open file handles, so you can empty things like log files without restarting the services that are writing to those files.
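That behaviour is easy to demonstrate: a descriptor opened in append mode stays valid across the truncation, so the writer never has to reopen the file. A minimal sketch (the log path is just a temp file here):

```shell
log=$(mktemp)

# A "service" holds the log open in append mode on fd 3.
exec 3>>"$log"
echo "first entry" >&3

# Truncate the file out from under it; the descriptor stays open.
cat /dev/null > "$log"

# Because fd 3 is in append mode, the next write lands at the new
# end of file (offset 0), not at the old offset.
echo "second entry" >&3
exec 3>&-

cat "$log"    # only "second entry" remains
```

If the writer had the file open without O_APPEND it would keep writing at its old offset, leaving a sparse hole at the front of the file, which is the usual surprise when people truncate live logs.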

Well, `echo > file` won't empty the file; it will have a newline in it. At the very least you'll need `echo -n > file`.

I've always done it as: `> file`

That's actually very nice, I never thought about that one.

Unfortunately, this doesn't work in fish:

> fish: Expected a command, but instead found a redirection

Right, but I don't think anyone is really concerned about whether it runs in fish. Fish is non-standard and doesn't try to adhere to bash. If we want to write fish shell scripts, we can do that elsewhere.

Funny how the same sentence applies to bash wrt POSIX shell in other contexts

Or after thinking about it for a moment, true > file. Or by its shorter name, :> file.

`printf > file`, but there's endless ways of achieving the same thing (and that's part of the beauty).
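For what it's worth, the variants in this subthread really do differ only in that trailing newline; a quick byte count over each form:

```shell
f=$(mktemp)

echo > "$f";          wc -c < "$f"   # 1 byte: a lone newline
echo -n > "$f";       wc -c < "$f"   # 0 bytes
: > "$f";             wc -c < "$f"   # 0 bytes
true > "$f";          wc -c < "$f"   # 0 bytes
printf '' > "$f";     wc -c < "$f"   # 0 bytes
cat /dev/null > "$f"; wc -c < "$f"   # 0 bytes
```

(A bare `printf > file` also leaves the file empty, because the shell performs the redirection before printf complains about the missing format argument.)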

Trick: You can use just


Because echo-ing into a file will make a file with 1 byte in it: a single newline character.

echo -n >file

Or why they aren't recommending 'truncate -s 0 file'

IMO, the most natural method is simply:


or why not `touch file`?

That works to create an empty file if none exists, but if the file already exists then 'touch' simply updates its atime/mtime and doesn't alter the contents. This latter effect is actually the purpose of touch; the file creation part is just a side effect.
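Easy to check at a prompt: on a fresh path `touch` creates an empty file, and on an existing file it leaves the contents byte-for-byte alone:

```shell
f=$(mktemp -u)     # a pathname that doesn't exist yet

touch "$f"         # side effect: creates an empty file
wc -c < "$f"       # 0 bytes

printf 'hello' > "$f"
touch "$f"         # main purpose: bump atime/mtime only
cat "$f"           # contents are still: hello
```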

How large is it? Like 40 pages when you put it on a printer?

Let me help you with this:

    man -t bash > man.ps && lpr man.ps
Then, grab a coffee, sit in the sun, and spend some quality time with your tools. :)

The man pages are the worst form of information ever for anybody trying to solve any real-life problem that isn't specified as "learn all the unnecessary switches and once-useful but used-by-nobody-today options of command X."

It has happened to me often enough: even when I knew what I wanted to achieve and which command X could do it, I would first try to read the darned man page and fail to find the solution, but then just googling gave enough material to

a) have something to try

b) have a discussion of the cases where that "something" doesn't fulfill most of the use cases of those who asked (the "natural need"), including, of course, the use case I had.

I have the completely opposite attitude. Bash is horrible on many levels. The bash man page is only one part of the horribleness.

The main problem with all man pages is that they are mostly written as if for people who already know every unnecessary detail of both the tool in question and all related tools and environmental issues, but who are searching for some even more obscure detail among all the obscure details they already know.

Not to mention the descriptions that aren't: e.g. the complete explanation of the option --frob-blonk FILE would be something like "this option uses the FILE as the frob blonk". Oh, how helpful.

I use bash every day, but I've at least managed to reduce its use to the minimum that keeps me still sane.

Sometimes I just want to reach through the monitor and smack the nerd that ends every answer on a forum or SO with, "you just type awk -@dee(++ | grep %%s, man pages are your friend. "

When you don't know what to look for, it is not as simple as "looking something up".

The worst are those who don't answer the question at all but "helpfully" give you a link to the man page. If the answer contains an example which matches the question, it is useful.

(And I too actually consider awk much simpler than e.g. bash)

Skimming the bash manpage every once in a while is important to stay out of information bubbles.

There are too many developers who rely only on Google/Stack Overflow to find and write answers to questions, and it's common to repeat bad, incomplete, or outdated information. I can think of a lot of times when the most accepted SO answer for many questions was "copy and paste this gigantic block of code" instead of using an existing command.

Most of the information there is not worth having in my "brain cache" at all, and especially not worth refreshing regularly. Life is too short to even try to remember things like the difference between .bash_profile, .profile, .bashrc, .inputrc and .bash_login (bonus points for remembering what is redirected to what in the files themselves in which distribution), or what the hell is a "login shell" and what is not. And there are many such details that are effectively useless for 99.99% of users, but they just have to survive the suffering to fix the darn thing when they have some issue.

I'm sure even most of the readers here can't give a one-line answer to what the "right" and "general" way to set a darn variable for a shell is. By design of all these artifacts we work with, it necessarily leads to a page-length discussion of which shells exist and what their differences are and whatnot. Such a waste of everybody's mental energy on a global level.

I think it's a bit unfair you're being downvoted because you raise some good points. A lot of people agree POSIX is a mess and you've highlighted a fraction of reasons related to that. Would any of the downvoters care to elucidate? Maybe I'm missing something obvious.

Not a downvoter, but I assume the reason is that many people do understand these mechanisms without evicting too much from their brain cache.

Files like .bash_profile, .bash_login, .profile, .zprofile, /etc/profile, /etc/profile.d/whatever.sh all serve the same purpose, the primary aspect of which is to apply things like environment variables at login. Display managers and [login] shells source these things in a predictable precedence order so that environment variables are correctly inherited by children of GUI environment processes and login shells.

Login shells? Well yeah, the point of a login shell is that it invokes login-related tasks like setting environment variables from a profile. That's intentionally separate from the behavior of interactive shells because the environments of interactive shells may have been modified intentionally and shouldn't be "reset" to the values applied at login.

edit: also .inputrc is the readline configuration file for things like shortcut keys in all programs which use readline (not just bash) and .bashrc is for customizations specific to interactive shells

You see, I've read this kind of information, in exactly the words you use, more than once, and I still can't summarize in one sentence where one should put one's variables for things to "just work." Locally, for one user. Provided I just don't know and don't even care what the differences between login and interactive shells are, and that I just run whatever terminal is the default from the GUI menu (whichever GUI it is).

And your explanation of what the differences are is, for me, still circular, exactly like in the man pages: a login shell is the one which invokes login-related tasks, and I should care about the difference from interactive shells. I still have no idea what that even means: what exactly happens, and when?

Does anybody know of a sane explanation of these? I admit I never tried hard to find one: I tried to read the man pages, didn't find anything reasonable, but knew it didn't matter to me. I just want the darn variables to exist whenever I need them. I don't care about the differences. One single place is enough.

But now that you say that you "do understand these mechanisms without evicting too much from" your "brain cache", I'd really like to learn everything about the differences. Specifically, when is one invoked and when another, for the user "acqq", let's say, on a default Ubuntu system, and on a default Red Hat system?

Are different shells really invoked in different forms for a single user, on a single machine, between the booting of the machine, logging in (in the GUI) and then starting the terminal with a shell inside a few times? Which is invoked, and when? And why shouldn't there be a single place for the variables I want to define? No idea, but I would be grateful to read about it now. Thanks in advance.

The general way to set an environment variable is in /etc/profile or ~/.{bash_,,z}profile.

PAM uses /etc/environment, and systemd would prefer to have you set up default environment variables in /etc/systemd/system.conf. You can even pass them directly to Linux from your bootloader (unrecognized arguments on the kernel command line in the form foo=bar become environment variables), but the profiles are the generally accepted way of doing it.
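For the narrow case the thread keeps circling (one user, one variable, "just works" in login shells and most GUI sessions), a single line in `~/.profile` is usually enough. A sketch; `EDITOR=vim` is only a placeholder:

```shell
# ~/.profile is read by login shells and by most display managers,
# so children of the session inherit anything exported here.
echo 'export EDITOR=vim' >> ~/.profile

# Pick it up in the current shell without logging out and back in:
. ~/.profile
```

One caveat that feeds the complaints above: bash skips `~/.profile` entirely if `~/.bash_profile` exists, so if you have both, the line needs to go in (or be sourced from) the latter.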

Is it a joke? If I want to set some variable to be present for a single user (me), I surely don't want to ever touch /etc/profile or /etc/systemd/system.conf (and I have no idea what /etc/environment is supposed to be). And I want even less to change something in a bootloader. These things are surely configurable in my user home directory.

I see where you're coming from, but I think it's the attitude of the developer that's more important. I've learned a good deal of Bash from places like SO, but I've always tried to learn why the command is like that, and I've found other sources more useful than man. If someone is the kind of developer who blindly copies and pastes chunks of code from SO then there are bigger problems than not using man.

I'd agree with this. I can't stand prolix documentation when I'm just trying to get something done (I'll never understand why people extol the Python docs!). I used bropages [0] for a while but it isn't as expansive as it should be, presumably on account of a lack of traffic.

I'm happy to read more expansive pages in my own time but when I'm 'in the zone' as it were, the last thing I want to do is read through why something works. Perhaps it's a gap in the market? Or more likely I'm in the minority!

The Python docs are, on the contrary, quite good when compared to the GNU man pages. There is an advantage of context: the GNU *nix tools have to reflect how things grew, totally independently, accidentally and ad hoc. Python is a much narrower topic, and its development mostly had some concept of the "overall goal."

The Python docs opened the door to programming as a career for me in my late 20s. I/O, concurrency, polymorphism, it's all there. Thanks Python.

TLDR is a pretty cool project that cuts out the verbosity of man pages.


Using bash's process substitution, you could also use this:

    lpr <(man -t bash)
See: https://unix.stackexchange.com/a/64011

Here is a function for mac os that outputs a pdf of a man page

    function pman() {
        man -t "${1}" | open -f -a /Applications/Preview.app
    }


Thanks for `man -t`!

It's actually 78 pages on my system (Debian, bash 4.4.12).

I've heard that OpenBSD has the best man pages that are more pleasant to read and more informative/concise.

OpenBSD man pages are what engineering documents should be. The contrast with Linux's is astounding.

This reminds me of a writeup by John Carmack. Last year he did a programming retreat where he set up an OpenBSD dev environment and tried to use the base system and the included documentation as much as possible, rather than using ports and the WWW†. That would probably not be so nice on a Linux system.


† Anyone else here that remembers the days when man pages and Windows help files used to be referred to as online documentation? As opposed to printed manuals.

The MSDN CDs were an endless source of amusement. The reference manuals were very good and search actually worked well. Did not feel crippled at all.

I don't know.

From `man wc`

    WC(1)                     BSD General Commands Manual                    WC(1)

    NAME
         wc -- word, line, character, and byte count
When in actuality `wc` outputs lines, words, bytes ...

You're probably referring to the FreeBSD man page; OpenBSD's is slightly different: https://man.openbsd.org/wc.1

That still has the order wrong in the name section.

I mean, strictly speaking, the name field doesn’t document the output, but having those aligned makes the documentation much easier to read.

man -t is awesome!

IME it's easier to focus when reading printed documentation. One can jot down notes, and there's no temptation to go and reload Facebook/Twitter/etc. I haven't done it often, but some commands are really worth knowing inside and out.

That got me looking into how to read them on an e-reader:

    man --html=calibre bash

Indeed. Never liked reading man pages more than when I was briefly exposed to OpenBSD. I really envied them for the concise and well-written documentation they have.

Or, if you're fine with reading it online, but don't want to run the man bash command each time (for whatever reason), and/or want to strip out the formatting characters once and for all, and/or want to use the search and other editing features of your favorite text editor on the contents of the man page, for faster navigation, copying-and-pasting snippets into scripts, etc., you can use this small shell script:

m, a Unix shell utility to save cleaned-up man pages as text:


Edit: Also check out the stuff about the word mu in the latter part of the post :)

In vim, I can run:

  :r !man bash | col -b
the col utility will strip backspace characters with the -b switch (along with reverse line feeds).

You can also display man pages in vim by default:

    export MANPAGER="/bin/sh -c \"col -b | vim -c 'set ft=man ts=8 nomod nolist nonu noma' -\""

Right. But check what my script called m does :)

It uses col, but also redirects the cleaned-up output to a file named cmd.m (so you don't have to run that man command for the same command - like bash or other - each time), where cmd is the cmd you give to man as an arg. That file cmd.m is stored under your ~/man dir, which gets created first, with mkdir -p. More refinements possible, of course, but this was just a quick and dirty script I whipped up. For example, could check more for permission and other kinds of errors, not create the ~/man dir each time but only once, etc.
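For the curious, the whole thing fits in a handful of lines. This is only my guess at the shape of such a script; the name `m`, the `~/man` cache dir and the `.m` suffix follow the description above, the rest is filler:

```shell
# m: save a cleaned-up man page as ~/man/<cmd>.m, then open it.
m() {
    cmd=${1:?usage: m command}
    dir=$HOME/man
    out=$dir/$cmd.m

    mkdir -p "$dir" || return 1

    # Only render and clean the page the first time it is requested.
    if [ ! -f "$out" ]; then
        man "$cmd" | col -b > "$out" || { rm -f "$out"; return 1; }
    fi

    ${PAGER:-less} "$out"
}
```

After `m bash`, the plain-text page sits in `~/man/bash.m`, ready for grep or your editor.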

Even better than the man page, just print out the reference manual:

    sudo apt install -y bash-doc && lpr /usr/share/doc/bash-doc/bashref.pdf
178 pages of awesome.

I'd probably add a pdf2ps step before sending it to my non-PostScript printer. The output of sending a PDF straight to a non-PostScript printer would be interesting.

Cause there's gotta be a button on this thing for that thing, right? --Homer Simpson

Learning the fundamentals of your tools is a...fundamental of doing this job. If you don't want to know how your tools work, I question one's competence or desire to do this job.

Homer was wrong in his assumption. If there however does exist a button, and it works, and there is no danger associated with ignorance about its internals, then there is nothing wrong with using the button. After all, someone put it there.

> If you don't want to know how your tools work, I question one's competence or desire to do this job.

I in turn question one's competence when, instead of getting the job done, hours upon hours are spent on irrelevant parts of configuration or on getting some ideal solution to work where something less elegant would perfectly suffice. Unfortunately, the folks insisting on 'mastering' one's tools often tend to be in that category.

As an analogy: I don't care if the craftsman who fixes my house uses his hammer holding it upside down. I only care that my house is properly fixed and stays so. How that was achieved, I could not care less about.

If one is spending hours upon hours on irrelevant parts, I, again, question their competence.

If one doesn't care that the carpenter building his house is using a hammer upside down, that's a whole 'nother bunch of issues I won't go into here.

I think OP's point still stands. If the job is done to the same standards (assuming same time frame as well), what other issues does this bring up?

An apt comparison might be Jimi Hendrix playing a right-handed guitar left-handed (i.e. upside down) and still producing masterpieces.

Hendrix knew his tools intimately. The comparison proves a point other than that which you make.

> If one doesn't care that the carpenter building his house is using a hammer upside down, that's a whole 'nother bunch of issues I won't go into here.

I'd love to see at least a small allusion to what the issues are. Because, to reiterate, I'd rather have a well-built house built by an absolutely unconventionally working carpenter than a mediocre house built by someone who knows how to use a hammer according to the textbook.

If I want to see nice processes and fantasies fulfilled, I watch movies. In real life, I care about results.

You are basing your story on one who doesn't know the correct way to use his tools and can still build a house well. That's not going to happen, or it will take far longer to complete.

Some friends and I read the whole thing aloud, start to finish, pausing to discuss and experiment, over a handful of lunches. The Grapes of Wrath would be longer, and the Bash man page has fewer turtles.

This attitude is why we can't have nice things! Seriously, how many times do you use your shell every single day?

Uh, multiple times a day. I'm a professional programmer in a UNIX/BSD/Linux environment.

... You can leave? Kidding, but seriously: I have 20+ tmux tabs open right now. I live here; I only run X because I need a browser. (A graphical browser that handles JS; yes I've tried w3m, no it's not quite enough.)

Very rarely. I hate the command line and actively avoid it.

Of course if your day to day job actually allows you to fully avoid bash, then by all means ignore it!

I always question what one's work day involves and how committed they are to their work when they say they hate the command line and avoid the shell. It says, to me, that they really don't want to delve very deeply into how computers work and don't care. If they care, but cannot understand, then I question their competence and whether they need to find another line of work.

I'm a massive command line advocate (to the point that I've even written my own $SHELL) but I don't agree with your point that the command line is how computers work.

The command line is just a UI like any other - GUIs included. The way modern computers work is via API/ABI calls; via kernel syscalls and drivers separating out the responsibility of peripherals.

Your command line is just an interface for managing applications that are compiled against libraries that interface with those syscalls - so in that regard using the CLI isn't any different to launching an application via a graphical icon on a typical WIMP interface. In either case you're not making those API calls from the CLI, you're not manually managing your memory nor any of the important things that an OS does under the hood.

All you're doing is using a text interface to fork() a process rather than using a mouse click to fork() a process. The difference being many CLI tools require their config supplied as command line arguments while many GUI tools don't. But even there, there are as many exceptions to that rule as there are examples of it.

> I don't agree with your point that the command line is how computers work.

I said no such thing. My point was that, if one wants to program computers, I would think one would want to be good at their job and delve deep into how their code makes computers do what they do. Showing no interest in learning the command line or the shell makes me question their interest and desire.

What is the problem with not knowing how computers work? I care about well-written and working code. I couldn't care less about technology. In fact, I consider it an unfortunate fact of life that programming is tied to computers.

I don't really understand this comment. Surely to write well-written code you need to understand how computers work. Just like understanding how cars work is important to be competent at servicing and repairing cars.

Saying "it's an unfortunate fact of life that programming is tied to computers" is as weird as saying "it's unfortunate that being a car mechanic is tied to cars". The job exists because of computers and cars - not in spite of it.

If you wanted a problem-solving or engineering job that wasn't tied to computers, then there are other options, like being a mechanic, electrical engineer or mathematician, or working in any of the numerous fields of science. But at the end of the day, you're still going to need to understand the core principles of that field if you want to be effective in it.

I don't really know what this means, but I would like to because it sounds cool.

Web developer checking in - no, I don't really care about the depths of how computers work. I care about competently developing the web applications my employer asks me to, for which the command line is scarcely necessary.

I'm a web developer, too. I care to know where I can speed things up or minimize things. How the code I write affects how my web applications run and which technologies I should consider or dismiss based on how it might affect the hardware we run with.

Too often, the solution of some is only to throw more hardware at the problem.

I have no doubt there's great value in learning basic bash, but I use zsh and almost all the features I use are exactly the same commands in both bash and zsh, so I think you'd probably be okay learning one or the other thoroughly.

5931 lines here. 25 lines per page of a novel seems reasonable.

~240 pages for man bash

Google says The Grapes of Wrath has 464 pages.

I call that a similar ballpark. If I had an English exam with Steinbeck as one of the set texts just once, I would read every word of it. I have a shell exam every damn day I work, rest and play. Do you? Does skimming seem more reasonable in that light?
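The arithmetic is a one-liner if you want the estimate for your own system (25 lines per page is of course just the novel-ish guess used above):

```shell
# Rendered line count of the bash man page, divided by 25 lines/page.
lines=$(man bash 2>/dev/null | wc -l)
echo "$((lines / 25)) pages"

# With the 5931 lines reported above that comes out at 237 pages.
echo $((5931 / 25))
```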

An aside: I highly recommend reading the grapes of wrath

Many people also use the classic Emacs, particularly for larger editing tasks. (Of course, any modern software developer working on an extensive project is unlikely to use only a pure text-based editor and should also be familiar with modern graphical IDEs and tools.)

Huh?! Unlikely?!

I belong to a 100+ person team working on the Chromium code base; a project with 150+ third-party sub-projects, all tied together and built on the command line with a few build tools like GN and ninja.

Not one person on my team uses an IDE, because IDEs die merely trying to index everything. Everyone uses a simple editor of choice: Emacs, Vim, VS Code and Notepad++ are the popular choices. Everyone has at least two Bash/CMD instances open.

The systems programming world is a different one unlike the web programming world where everything is small and self contained for an IDE and GUI.

I tried getting Sublime to work with a 60k-file code base; I got the indexer down to 20k files, but the CPU usage of the indexing process was always at 100% when building (diagnostics showed it wasn't making any index db changes), even though I filtered out most of the build cruft, third-party code, and useless subdirectories.

Vim with cscope/ctags is the best of both worlds. Index your project once -- no penalty when building. Symbolic searching is a must-have on any large code base (declaration, usage, etc.).
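The "index once" step is just two commands at the project root, assuming exuberant/universal ctags and cscope are installed (the flags below are the common defaults, guarded here so the sketch degrades gracefully when a tool is missing):

```shell
cd "$(mktemp -d)"                       # stand-in for your project root
echo 'int foo(void) { return 0; }' > a.c

if command -v ctags >/dev/null 2>&1; then
    ctags -R .       # tags file; jump in vim with Ctrl-] or :tag foo
fi
if command -v cscope >/dev/null 2>&1; then
    cscope -Rbq      # -R recurse, -b build only, -q extra inverted index
fi
```

Inside vim, `:cs add cscope.out` loads the database and `:cs find s foo` lists every use of the symbol; neither step needs repeating on rebuilds, only when the index goes stale.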

Is VS Code not an IDE at this point?

It more or less is; my use case for it is more or less Notepad++ with IntelliSense and a few of the marketplace extensions. I like it though; it's lightweight and I've used it as a plain text editor a few times.

That being said, my most used is probably either regular Notepad or Notepad++ in a Windows env.

Yes, not everyone is a unix purist / keyboard warrior.

(I've got 10+ years of experience in software development and I just can't get to grips with it; at best I can do basic operations in Vim (for git). I looked at emacs the other day, but I'd need to read a lot of manuals and tutorials and practice to get to grips with it.)

I'm currently using VS Code, before that it's been IDEA, Atom for a little while (but it was slow), Sublime Text (I still miss it), and much longer ago, notepad++ and eclipse. (the folder I have all my code in is still called "workspace").

Well, let me know if you need help getting started with it. The built in tutorial usually helps.

Yes, well, see what happens? Every time fads change or new languages appear, you have to change your editors. What if you could have just one editor for everything? For text files and taking notes, for all the programming languages in the world that you just learn once and learn it well, and be done with it. No more need for new editors.

This is exactly why I stick to Emacs. I have used it since senior high school and haven’t had to switch ever since. Really convenient IMHO.

It was the built-in tutorial that got me started. One just needs to type C-h t and then you’re good! Each to their own but that has suited me well at least.

I’ve never understood getting so passionate about IDEs or editors. I use VScode at the moment. I’d be just as happy using any other number of editors. If I wanted to switch to sublime or webstorm or anything else, it wouldn’t take me very long at all to get it up and running and get back to work. They all do the job I need them for perfectly well.

Maybe the Blub paradox applies to code editors as well:

As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

Emacs is not just a code editor. It will change the way you approach programming. As long as you don't understand it you might (rightfully, I guess) compare it to just "another text editor, only with these weird commands". Some of the stuff that Emacs can do and the way it integrates into your daily activities, cannot be seen in other text editors, so you're not even expecting such benefits from the other tools or realize that they exist out there, in the wild.

> Emacs is not just a code editor. It will change the way you approach programming.

This is the most pretentious nonsense I've ever read. Writing code is writing code. God could create for you the most perfect IDE to ever exist, with every feature you know you want, and every feature that you don't yet know you want. Tailored exactly to your needs. But at the end of the day, you're just going to sit down and write code in it.

All I see when I look at people who endlessly tinker with their setup are people who are spending time on things that aren't providing value to their customers. I've worked with quite a few emacs wizards. Do they write better code than me? Some do, some don't. It doesn't seem to be related to their use of emacs though. Are they more efficient than me? Some are, some aren't. Did they all spend a lot more time than I did to learn how to use their text editor? Yes. Did they all have to tinker with their text editor a lot more than I did to get it working the way they want it? Yes. Have I ever had a problem that I wouldn't have had if I'd invested all the time required to master emacs? No.

In Emacs, every key is evaluating some Lisp expression. You are interactively using Emacs Lisp in a funny sort of REPL. Most of the time you just use predefined commands and they do what you want, but if you know there's a whole programming language there, then you'll see opportunities to take advantage of that sometimes, just like how sometimes it's nice to be able to write a for loop or a pipeline at a bash prompt.

Once you get used to that model of using a computer, it feels crippling to use applications that you can't easily program in on the fly. It's the difference between having a full programming language and a bunch of buttons you can press, and it's for something you're going to be doing a lot for a long time.

I don't doubt that a skilled Emacs user would be more efficient at editing text than I am. But I don't have any problems with my ability to efficiently edit text. The problems that I get paid to solve are "how do I effectively deliver value to my customers" and "how can I design solutions and systems that meet the business' needs". The kind of challenges that I have to overcome to meet these goals do not include "I'm not efficient enough at editing text". If I were to reach a perfect 100% efficiency in text editing, I doubt it would have much impact on my productivity, and I know it would have no impact at all on the quality of my work. My IDE already has a lot of productivity tools; perhaps I could improve on that by learning Emacs, but I've never seen the RoI in devoting time to that over devoting time to learning any number of other things that would actually make me better at my job.

> Once you get used to that model of using a computer, it feels crippling to use applications that you can't easily program in on the fly.

Funnily enough, that's how I feel about Lisp itself: I love having a language I can program at runtime, at compile-time, at read-time (and, in a decent editor, at edit-time).

It really makes me wish that there were a decent modern Lisp OS. It'd need some solution to run Firefox and emulate emacs. Maybe when I retire I'll devote my remaining years to that.

I feel obligated to point out that people have run emacs as pid 1 on Linux. That's... some level of lisp OS, surely.

I don't believe they have, in any real capacity. Can emacs reap processes? Is there an elisp binding for every kernel syscall?

I'd be interested to see the results of a survey of how many Emacs users use Emacs Lisp in any capacity beyond cut-and-pasting examples found online into their .emacs or .emacs.d/ configs.

>I've worked with quite a few emacs wizards. Do they write better code than me? Some do, some don't. It doesn't seem to related to their use of emacs though. Are they more efficient than me? Some are, some aren't.

Are they really wizards then? :^)

You know, I am personally much more comfortable using emacs as an extremely configurable text editor. As time passes, I want less and less to create ad-hoc lisp scripts and to debug my editing commands.

That Blub paradox was insightful. However, you do realize that people using Emacs might be Blub programmers as well!

I think you've got a point that some emacs users might severely underestimate the power of limited choices and pre-configured workflows. However, as I understood Blub, it is strictly about power of expression. I think emacs is at the top of the food chain here; at least I don't know a text editor that would be extensible with such ease as emacs.

Same with the Lisps (which Blub is about, btw): some Lisp programs become unmaintainable for people who didn't write them, but without a doubt, Lisp is one of, if not the most, expressive languages there is.

Funny enough, as I was reading this I was thinking the users and non-users of Blub were switched.

The argument is symmetrical.

> Looked at emacs the other day but I need to read a lot of manuals and tutorials and practice to get to grips with it)

My recommendation: go with spacemacs. It is perfect for people who just want the goodness of emacs without its weird edges and without spending hours on configuration. One day I might switch to plain emacs, but for the time being I am perfectly happy using a pre-configured one. The time I don't spend working, I'd rather spend reading more papers and doing non-compsci stuff.

I tried spacemacs, but found the documentation to be lacking, and the "everything and the kitchen sink" approach led to some slow behavior. Right now I'm trying emacs+evil, and slowly building it up with features I like. I still have to be careful about enabling the wrong feature and slowing things down, though.

Most people I know use IDE with code style enforcement, automatic variable renaming, unused variable warning, etc.

i.e. Emacs? Or Vim? Or VSCode? Especially with language servers there isn't much difference between the three, except that VSCode is a bit primitive in the editing department, but if it works for you, by all means.

Yes, unlikely. Most people use IDEs; very few limit themselves to simple editors.

they're very rare but do exist.

I don't think using a text editor is limiting, using an IDE certainly is. When using Visual Studio, you're definitely limited to Windows. When you're using an IDE, they lock you into a certain set of paradigms.

An editor like Vim or Emacs is great because eventually

_the editor adapts itself to the programmer and not the other way around_.

Every configuration of Emacs and Vim I've seen has been different, suiting the programmer who wrote it. People even have distros of Emacs/Vim:

- Doom

- Spacemacs

- NeoVim


> I don't think using a text editor is limiting, using an IDE certainly is.

I agree. Nonetheless, most developers use IDEs. This in turn means that a developer is unlikely to use only simple editors.

Maybe actually read the comment chain you're responding to?

I know that Stack Overflow is not representative of developers at large but as of 2018[1] most developers on Stack Overflow use "simple" editors.

1: https://insights.stackoverflow.com/survey/2018#technology-_-...

That question was "select all that apply".

VSCode, Visual Studio, Intellij may be fighting for mindshare on laptops, but if you ssh into a server, I bet you're using vim.

I mean, sometimes I run vim in the integrated terminal in VSCode. :) Whatever is quicker/handy. I know I answered that question with "VSCode, Sublime, Emacs, and Vim".

So you think emacs is a simple editor, eh?

I'm reading that "simple" word in the GP as meaning something that doesn't impose a workflow on you. (Which is ironically best described as "versatile", except that somebody not used to the tool cannot use most of its functions.)

By any other meaning, emacs and vim wouldn't qualify.

Vanilla emacs without config and plugins, started as a terminal application, is a simple editor, just the same as vim. Yes.

But that is entirely beside the point. The parent was outraged at the claim that developers are unlikely to use only simple text editors, and the phrase he was taking issue with didn't say that emacs is a simple editor either. So you're either willfully misinterpreting everything or just unable to read and understand simple sentences.

Name a feature and I bet Emacs has it.

Slight tangent, but forcing biology undergrads to work their way through this would probably help them a lot when they suddenly get saddled with some sequencing output and have literally never heard of Bash let alone used it. My lab of 40+ people has maybe 2 or 3 people who know how to use the command line.

As a rule, for any given topic, experts on the topic propose to add this topic to the curriculum. While this topic would indeed be useful, many other topics that are also not in the curriculum would be equally useful.

Does your lab work well with 2-3 people being able to set up the data processing for the rest of the team?

Learning the unix commandline is considerable effort and requires regular practice. Instead of learning bash, biologists could spend their time reading relevant papers in their specialization or brush up on statistics.

>Does your lab work well with 2-3 people being able to set up the data processing for the rest of the team?

Those people who know their stuff aren't doing that job because they have their own data to deal with. There are people with 10,000-line proteomics outputs who manually search for missing entries. There is at least one student with single-cell RNAseq data which she can't even look at (never heard of fastqc or the less command). It is frankly disastrous.
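The sad part is how little it takes to get started. A self-contained sketch (toy data standing in for a real FASTQ file; real ones are just this, times a few hundred million records):

```shell
# Toy FASTQ file standing in for real sequencing output
printf '@read1\nACGT\n+\nIIII\n@read2\nTTGA\n+\nIIII\n' > sample.fastq
gzip -f sample.fastq

# Peek at the first records without decompressing the whole file
zcat sample.fastq.gz | head -8

# Count reads: a FASTQ record is exactly 4 lines
zcat sample.fastq.gz | wc -l | awk '{print $1/4 " reads"}'   # -> 2 reads
```

That `zcat | head` pattern alone would save the students hours, since it never loads the whole file.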

I'm a nuclear engineer and I noticed that there was no official venue where people were learning the particular computer basics that I consider essential for daily operation of a computer as an engineering tool.

I figured maybe other fields have the issue too, and I got it together enough to put out my first and only ebook on the topic. It's higher level than the OP here, but still focuses mostly on the command line. Anyway, it's called Digital Superpowers and is $10. If anyone wants a preview chapter or anything, I can send you one.


I really like the concept of this book. Your choice of topics is interesting, because it's so eclectic.

Thanks. Yeah it's an odd set, totally derived from the things I personally have found really useful in my computational physics career as well as my computer-related hobbies.

Genuinely curious about the use cases of the command line for biology students. Can you elaborate?

A raw sequencing file can be massive, like 80-100 GB per sample, and if you have a sample size of 1000, storage alone is going to be a huge requirement. Then you need to load these huge files into memory, so you are going to want something like 128 GB of RAM. Then you need to process your files, and if you want to get it done this week, you want at least a dozen cores working on your job. These requirements are far too expensive for an individual or even a lab to own a capable workstation, so everyone gets familiar with the command line and cluster computing instead, and they use whichever computer is cheap or comfortable (or even a phone if you wanted to be cheeky) and just run the jobs on the cluster.
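Concretely, "run the jobs on the cluster" usually means writing a small batch script. A hypothetical SLURM example; the directive names are real sbatch options, but the tool, filenames and resource numbers here are made up for illustration, and every cluster differs:

```shell
#!/bin/bash
#SBATCH --job-name=align-sample42
#SBATCH --cpus-per-task=12
#SBATCH --mem=128G
#SBATCH --time=24:00:00

# The heavy lifting runs on a cluster node, not your laptop
bwa mem -t "$SLURM_CPUS_PER_TASK" ref.fa \
    sample42_R1.fastq.gz sample42_R2.fastq.gz > sample42.sam
```

You submit it with `sbatch align.sh`, log out, and come back when the scheduler has run it.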

When you have DNA sequencing data, you often have to use bash / command line to glue together the various tooling.

As an example: https://bedtools.readthedocs.io/en/latest/
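The glue itself tends to be plain pipes over tab-separated text. A self-contained sketch using only coreutils and awk (a real pipeline would call bedtools and friends on real data, but the shape is the same):

```shell
# Tiny BED-like file: chromosome, start, end (tab-separated)
printf 'chr1\t100\t200\nchr2\t50\t150\nchr1\t300\t450\n' > regions.bed

# Typical glue: filter to one chromosome, sort by position,
# then report the widest interval
awk -F'\t' '$1 == "chr1"' regions.bed \
  | sort -k1,1 -k2,2n \
  | awk -F'\t' '{w = $3 - $2; if (w > max) {max = w; line = $0}}
                END {print line}'
# -> chr1    300     450
```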

Yes, this is very common.

But there are also many people who use Common Workflow Language (https://www.commonwl.org/) for this purpose.

CWL allows for workflow portability between execution engines. The open source engine my team creates (Arvados - https://arvados.org) is ideal for large-scale processing work.

We store, process and manage petabytes of genome data with it, and execute workflows that span hundreds of simultaneous compute nodes.

This looks very cool! Thanks for sharing it. I'm definitely going to mention this in the next technical meeting at the research group.

There is an entire field called bioinformatics. There is a fair amount of text and flat file database processing in it.

* https://bioinformatics.stackexchange.com/

* https://en.wikipedia.org/wiki/Bioinformatics

I feel grateful for learning these tools before repos like this existed. Discovering new CLI tools back then was like learning about new bands by word of mouth – it made it more special.

I remember finding out about ctrl+r, I thought it was incredible. The shell has so many helpful features and I'm continuously discovering more.

I was so happy when I found out ctrl+r. I can't believe I went so many years without using it, and now I use it constantly.

Now couple that with infinite bash command history and you never have to type out any command more than once.

Search for "eternal bash history" and set it up once. I can't overstate its usefulness.
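One common variant of that setup, in ~/.bashrc (details vary by taste; the unlimited HISTSIZE=-1 needs bash >= 4.3):

```shell
# "Eternal" bash history: keep everything, forever, with timestamps
export HISTFILE=~/.bash_eternal_history
export HISTSIZE=-1                   # unlimited in-memory history
export HISTFILESIZE=-1               # never truncate the history file
export HISTTIMEFORMAT="%F %T "       # timestamp every entry
shopt -s histappend                  # append on exit instead of overwriting
# Flush after each command so nothing is lost if a session dies
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```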

Feast your eyes on this, then: https://github.com/junegunn/fzf

Before "repos" existed there already were widely available manuals, with exactly equivalent content as this repo. And excellent printed books, for example, "the unix programming environment" by Kernighan and Pike is a beautiful read, even if somewhat dated.

I don't need my job to be "special", I just want to be effective.

Some of us use these outside our jobs.

I often wonder if I would have been able to achieve command-line fluency if I had been born 10 years later. GUIs just weren’t a thing when I was learning computers, so I had no choice but to learn to work on the command line. I can tell that it was time well spent now - I can script solutions to things that GUI-only folks sometimes spend hours repetitively doing “by hand”, but it took years to get to that point. I can’t entirely blame somebody for giving up and saying, “meh, the ‘long way’ is good enough”.

I'm considering going on a GUI cleanse and working command line only for a while to build my chops. Bold idea, or absolutely insane?

It's perfectly possible, and not that insane. Much more possible than most people likely think. The primary pain point is mostly web browsing - for a long time Elinks was the only halfway-viable web browser, and it has a lot of pain points. But maybe brow.sh is better these days - I've never tried it (although, arguably, running Firefox through a shoddy GUI inside a terminal doesn't really count as a "GUI cleanse".)

It's worth doing, purely for the enlightenment of freeing you from the busted, "macho" notion that a GUI is inferior to a command line. GUIs are awesome.

Probably insane, but if we were sane we wouldn’t be programmers, would we? ; ) I actually sometimes develop Java apps entirely on the command line using only shell scripts (no maven or gradle), so I have to download all the dependencies and resolve all the transitive dependencies myself. It’s a bit time consuming, but I end up discovering a lot of intricacies of some of the libraries I work with that I would never have come across otherwise. You can even debug using jdb, although at times System.out.println is less painful.

Just yesterday in the students' room I witnessed the awe-inducing effect and joy of discovery that proficient use of Vim has on someone who has never seen anything like it before ;)

If pictures or a hand-drawn look is more your thing, Julia Evans's work is very accessible. She sells zines like "Bite Size Command Line!" https://jvns.ca/blog/2018/08/05/new-zine--bite-size-command-... but also posts a lot of the content on Twitter too: e.g. https://twitter.com/i/moments/1026078161115729920

(I actually made a proof-of-concept tool called 0rk which displays the images in the terminal https://github.com/follower/0rk so you don't even have to leave your terminal to look something up.)

There's also the TLDR pages project which aims to create "Simplified and community-driven man pages" with practical examples which looks like a helpful approach: https://tldr.sh/

Not sure if any of the TLDR clients already do this but it just occurred to me it'd be cool to be able to view an example and then edit it directly...

If you're an experienced command line user, this is a great reminder of how far you've come, and check list to figure out where your weak spots lie. I wonder how long these skills will remain useful; skeptics often seem to (wrongly) think that the CLI is 'obsolete.' I learned some of the things on this list more than 10 years ago and I am sure for some people it has been much longer than that. If anything, the increased prevalence of embedded Linux devices has made these skills feel more useful over time.

While this has some helpful pointers, I found it amusing that some points were relatively little work, while others contain enough to keep you busy for weeks - for example, comparing output redirection to network management.

A big discussion 4 years ago:


(Provided for interest purposes, not suggesting it's a dupe - https://news.ycombinator.com/newsfaq.html)

Some time ago I stumbled upon a really good introduction to text manipulation in command line:


After years of using bits of bash I've found quite a few new things for me.

Have you ever read Unix Text Processing?

My suggestion would be: Learn basic POSIX Shell (sh). It's smaller than Bash and more portable.


It’s also really limited. I do a lot of shell programming and I lean heavily on Bashisms to get any work done. Even in Bash I’m hitting the wall constantly. Sh-only is utterly impractical: reading code from big projects that try to stay compatible (e.g. automake) really showcases these limitations. For instance, performing proper argument quoting of variables without arrays is next to impossible.

Even POSIX sh effectively gives you a stack of lists in $@, one for each function call. If you absolutely must, you can maintain proper quoting very far with those. Shell is not as limited as you think, though it can be rather inefficient about it, and it's probably not a good choice for programs whose designs it isn't especially good at handling.
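A sketch of what that looks like in pure POSIX sh, with no arrays:

```shell
#!/bin/sh
# Build up a properly quoted argument list: the positional parameters
# are the one real "list" POSIX sh gives you.
set --                                  # start with an empty list
set -- "$@" "first arg"                 # append an element
set -- "$@" "second arg with  spaces"   # internal spacing survives
set -- "$@" --flag

# "$@" expands to exactly one word per element, quoting preserved
printf '<%s>\n' "$@"
# -> <first arg>
#    <second arg with  spaces>
#    <--flag>
```

Since each function call gets its own positional parameters, you effectively get one such list per stack frame.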

For whatever reason, programmers tend to be rather dismissive of non-general-purpose programming languages, or more generally languages that do not work like conventional general-purpose languages, and refuse to learn them. I think the worst case we see of this is with CSS, where people come up with elaborate ways to place boxes procedurally in javascript (and JS once suffered from being considered a toy language, too), but I've seen similar attitudes get pointed at things like apache configuration, systemd unit files, jq, and Lua.

(On the other end, sometimes people believe that they're just writing configuration, and that this is somehow more maintainable than writing code, when really they're writing code embedded in a data description language; see Chef, Ansible &c.)

As for autotools, autoconf's "Portable Shell Programming" document covers seriously ancient shells. New programs following those guidelines aren't really staying compatible: their code isn't going to run on the relics that ship those shells without active effort that generally isn't happening. It's just a cargo cult.

I intentionally use it as a sanity check; if the task exceeds POSIX sh, it's time to pull out Python (for me; substitute your "real" language of choice).

This is a good heuristic for advanced logic, but not when the main purpose of the script is to encode a workflow of calling other tools. I’m using other languages for that too, but shell scripting languages are consistently superior to all of them (including Perl and Ruby, IMHO; workflow languages might help but pull in tons of dependencies).

Skimmed most of it but still learned some useful stuff. Will keep it in my bookmarks.

Also, would suggest mentioning gitbash in the Windows section. You get it for free with git so it's easy to come by, even in corporate environments where obtaining permission to install software is hard. If combined with something like ConEmu, it's actually pretty nice.

Holy crap! I never knew

    curl cheat.sh/<command>
was a thing. All these years....

  $ whois cheat.sh | grep 'Creation Date'
  Creation Date: 2017-04-16T13:42:06Z
Not that many years.

Thank you. That helps (but then, that means I missed it...)

I've written many thousand lines of code in bash (along with way more in 15 years of programming in other languages) and I must tell you:

Never again!

Use xonsh at least.

> Obscure but useful

I find the choice of the word "obscure" a bit amusing, given that this section lists many commands used on a daily basis by thousands of sysadmins.

> Know > overwrites the output file and >> appends. Learn about stdout and stderr.

Yep, learned that the hard way
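For anyone who hasn't yet: the difference, in one quick terminal sketch,

```shell
echo one  > out.txt      # > truncates (or creates) the file
echo two >> out.txt      # >> appends
cat out.txt              # -> one, then two

# stderr is a separate stream (fd 2); redirect it independently of stdout
ls /nonexistent-path 2> errors.txt || true   # error message lands in errors.txt
cat errors.txt
```

The classic gotcha is `cmd > out.txt` looking "clean" while errors still spray onto the terminal, because only stdout was redirected.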

I was ready to knock it, but it's actually a decent reference.

If you're just picking it up, check it out.

Gosh! How is it possible I missed C-x C-e for over 20 years?

Is it worth it to master the (newly reloaded) Windows Terminal?

If you work with Windows at all, it's worthwhile to have some PowerShell scripting chops for system automation. And while it's kind of like Bash, its syntax is more consistent and everything is an object, which is nice. No more string-parsing hoops to get basic info.

Windows Terminal is like iTerm2 / xterm etc.; CMD.exe and PowerShell (posh) are like bash, zsh, etc.

It's less likely you'll "master" a terminal emulator. (Other than figuring out settings where the defaults get in your way of using it).

This looks great! Is there an equivalent for zsh users?

> This looks great! Is there an equivalent for zsh users?

Most of it works just the same.

It's missing nmon and bmon

Do I really have to point out that it's an open source project and you're free to write it and open up a pull request instead of merely pointing out things in a comment thread? At least make the effort and open up an issue in the project itself - provide your motivation, link to resources, etc.

I thought about doing it, but I can't do it from a phone on a train.
