Hacker News
What's the one Linux command you wish you knew years ago? (reddit.com)
524 points by roadnottaken on Nov 20, 2011 | 212 comments



Actually the new top comment blew me away. I'm in the 20 year veteran category and was also oblivious. Quoted:

disown - bash built in.

You know how you always start that mega long rsync in an ssh session from your laptop and then realize you have to go offline halfway through? You forgot to start it in screen or nohup, didn't you? You can pause it (ctrl+z), background it (bg), and then disown it so it is protected from SIGHUP when you quit your ssh session. Bash (and, to be fair, zsh and others have copied much of this) has some wonderful job control features most people are completely oblivious of - even veterans of 20 years.
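The sequence from the quote can be sketched non-interactively too (the `sleep 30` below stands in for the long-running rsync):

```shell
# Start a job in the background, then detach it from hangup signals:
sleep 30 &
disown -h %1   # -h: keep the job in the job table, but don't forward SIGHUP
jobs           # still listed; a plain `disown %1` would remove it entirely
```

Interactively, the order is: Ctrl+Z to suspend, `bg` to resume in the background, then `disown`.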


You will be pleased to know that there is a site full of little gems like this:

http://www.commandlinefu.com/commands/browse/sort-by-votes

Most of my time savers that I've never thought of were found there.


python -m SimpleHTTPServer

Love this one.


And for you crazy people on python3...

python3 -m http.server


    [ward@hathor ~]$ ls -l /usr/bin/python
    lrwxrwxrwx 1 root root 7 Sep  5 00:45 /usr/bin/python -> python3
Arch default


I think he covered Arch users when he said "crazy people" :)


disown is great. However, I don't know if it's possible to start up a new shell, then bring back that process to the foreground. 'jobs' will return no entries, and fg doesn't take a PID parameter. Any ideas? It would be brilliant if that were possible with bash alone.


reptyr: https://github.com/nelhage/reptyr is Linux specific & involves a fairly gruesome abuse of ptrace() to attach a running process to your current pty. It won't put the process in the shell jobs list though.

There's also retty: http://pasky.or.cz/dev/retty/ but again it's not a complete solution.


Screen is the only real answer I've ever found.


There's also 'tmux':

http://tmux.sourceforge.net/


Which according to reports has the benefit of being maintainable.

GNU screen, which I love and use to obscene extents daily, has an exceptionally hairy codebase. That said, I've got to get my tmux on and learn how to do screen equivalents with it....

http://www.techrepublic.com/blog/opensource/is-tmux-the-gnu-...


I keep seeing people comment on the codebase of screen being bad vs tmux, but I'm not sure why that is an issue? Are there any features of screen missing that haven't been added to the codebase? The vertical split patch has been added a long while back, and even though I'm sure tmux has a smaller memory footprint, I've never had an occasion where screen has ever been an app I'm worried might impact my server memory usage.

Since it is installed for the most part everywhere I have no desire to switch to tmux.


FWIW, I've switched to tmux but think tmux's implementation of split screen is broken and desperately miss screen's.

Screen lets each part of a split screen select a different existing window. Tmux makes both parts of a split part of a single window, so you can either have two things going in one split, or have something else going on in another window, but if you suddenly decide you want two existing windows side by side, you can't do it.


I can think of a few.

If there are problems found with screen (and it's a widely used admin tool that may be left running on remote servers, hence a high-value attack vector), it's going to be harder to find and fix the problem.

If there are features to be added, they'll be harder to add to screen than tmux. Which means that with time, tmux stands a higher chance of becoming more featureful.

Both argue in favor of tmux in the long run, though there's no immediate need to ditch screen. As you note, screen's very widely available, which is a feature. Then again, so was telnet 10 years ago.


Yes. screen will not run in terminals with long names, like rxvt-unicode-256color. This is due to a fixed-length buffer for storing the terminal name. Simple solution: make the buffer bigger. I did that and submitted the patch to Debian and upstream, and last I looked, it was still broken. I switched to tmux and never looked back.

OK, I looked back and it's fixed now: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=621804. 6 months is a long time to wait. (Not that I'm criticizing Debian. It would never be fixed if it wasn't for the distributions patching it.)


True, screen and screenlikes. I've never used tmux, mostly because screen just works well enough for me.


I'm still a die-hard (and vertical-split-patched) screen user, but it seems tmux is the only suggested alternative. The combination of the more orthogonal dvtm[1] (for window management) and dtach[2] (for screen's killer feature: detaching) is flexible and compelling and sadly doesn't seem to have been mentioned here.

I currently use dvtm to split terminals and connect to multiple screen sessions in a single window. The downside of dvtm is it takes the dwm[3] approach of using #defines rather than configuration files.

1. http://www.brain-dump.org/projects/dvtm/ 2. http://dtach.sourceforge.net/ 3. http://dwm.suckless.org/


The only solution I've seen is to start up gdb with the process and point the process's 0, 1 & 2 file descriptors to the shell's STDIN, STDOUT and STDERR, then detach the process and quit gdb.

I can't remember the exact details, but that should be enough to set you up for some googling.


disown -h will (reportedly) keep it under job control.


True, but doesn't help if you've quit the shell & want to re-attach to the process after logging back in again :(


'man bash' is pretty amazing the first time you actually read through it.


And when you're done with that, try 'man zsh'.


Didn't know about disown, I always just did an "exec $SHELL" in this case.


Could someone explain what this would accomplish? If you have to go offline you would still lose that session, no?


Not really a command but this:

  mv /path/to/some/file.{jpg,png}
Has been a huge timesaver. It's the same as

  mv /path/to/some/file.jpg /path/to/some/file.png
But saves you the typing/copy pasting the path for the second argument. Can also be used with cp etc.
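An empty alternative in the braces is handy for quick backups, and echo previews any expansion (demo.conf is just an illustrative file created here):

```shell
touch demo.conf
cp demo.conf{,.bak}                    # same as: cp demo.conf demo.conf.bak
echo mv /path/to/some/file.{jpg,png}   # echo shows what the shell will run
```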


Either that or

    rename jpg png /path/to/some/file.jpg


I didn't know you could do that with rename. I would have done

    rename 's/jpg/png/g' /path/to/some/file.jpg


These are actually two different "rename" commands. bgaluszka's version is usually found on Fedora/RHEL/etc. systems coming from the util-linux package, while your version is usually found on Debian/Ubuntu/etc. systems coming from the Perl package.

I prefer the regex one, of course, especially since it takes proper Perl regexes :)

You have to be careful of this though and probably avoid using this command in shell scripts.

edit: You can just stick Debian's rename at $HOME/bin/rename.pl and call it a day; this command is useful in the terminal since it's less typing than a for loop.


The best is when you mix filename expansion and brace expansion:

  > ls *.{png,jpg,bmp,gif}
But this gives an error if there are no jpg files. In bash, you can activate the extglob mode with "shopt -s extglob" and do this:

  > ls *.@(png|jpg|bmp|gif)
http://www.faqs.org/docs/bashman/bashref_35.html



Nobody knows this: the bash extension <(cmd) to pass the output of a command, as if it was a file, to another command. Example to diff the list of files in 2 directories (a and b) without using temporary files:

  $ diff -u <(cd a && find . | sort) <(cd b && find . | sort)
Internally, bash creates a pipe and passes /dev/fd/XXX to the main command.


Just diff the two directories.

    $ ls a b
    a:
    1  2  3  4
    
    b:
    1  2  3
    
    # Yours:
    $ diff -u <(cd a && find . | sort) <(cd b && find . | sort)
    --- /dev/fd/63	2011-11-20 02:50:21.919072726 -0700
    +++ /dev/fd/62	2011-11-20 02:50:21.919072726 -0700
    @@ -2,4 +2,3 @@
     ./1
     ./2
     ./3
    -./4
    
    # Mine:
    $ diff -u a b
    diff -u a/1 b/1
    --- a/1	2011-11-20 02:43:57.073130007 -0700
    +++ b/1	2011-11-20 02:45:14.535912768 -0700
    @@ -1 +1 @@
    -1
    +11
    Only in a: 4
Maybe I'm missing something?


The OP's example is diffing listings of the two directories whereas your command is asking diff to directly work on the directories themselves.


Which means for example you could get that listing in any format, so you could for example spot changes in permissions or ownership if you included it in the listing.


I stand enlightened.


Yes, you are because the example was contrived. The point is that process substitution (as it is called) is available on systems that support named pipes. You could think of better uses for it.
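One such use: comm wants sorted input, and process substitution supplies it inline (hosts_a.txt and hosts_b.txt are illustrative files created here):

```shell
# comm needs sorted input; process substitution sorts inline, no temp files.
printf 'web2\nweb1\n' > hosts_a.txt
printf 'web3\nweb1\n' > hosts_b.txt
comm -12 <(sort hosts_a.txt) <(sort hosts_b.txt)   # lines in both lists
```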


Thanks so much. I had to do this just earlier today and ended up using a tmp file.


100% agree with CTRL-R. It changed my life. Others that I love:

screen, pdsh,

perl -pi -e 's/find/replace/g' *.txt

redis-cli's new ability to have piped in data be the last argument of a command.

vim -o file1.txt file2.txt file3.txt (opening multiple files with vim)

Using vim instead of less as a pager: cat file.txt | vim -


http://www.vim.org/scripts/script.php?script_id=1723

vimpager. Set vim as your pager. I love reading man pages in syntax-colored vim glory.


I prefer keeping file viewing and file editing as separate actions.

'most' will colorize bash output as well (it's strictly a pager, in the pg / more / less / most continuum).


As I recall, vimpager doesn't edit files, it just uses vim to view the file instead of using a different program to view the file.


Wow! I knew about less.sh (aka vim less), but this seems even more powerful. Installing now!


sed -i does the same thing as perl -pi. Though I've not encountered a system that doesn't have both.


sed's regular expressions are different from those perl uses (even when using the (GNU sed?) flag that tells sed to use extended regular expressions). Especially if you "live in perl", that can be a problem.


Does sed allow in-place file replacement, or do you need to redirect to a different file then cp/mv it over the original? I haven't used it in many years, but sed didn't used to allow this.


sed -i 's/old/new/g' file
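Worth noting: the attached-suffix form -i.bak (no space) works on both GNU sed and BSD/macOS sed, and leaves a backup. A small demo (demo.txt is just an illustrative file created here):

```shell
printf 'old text\n' > demo.txt
sed -i.bak 's/old/new/g' demo.txt   # -i.bak: portable across GNU and BSD sed
cat demo.txt                        # -> new text
cat demo.txt.bak                    # -> old text (the backup copy)
```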


A hack for in-place editing, for old school sed (no -i):

sedi(){ case $# in 0) echo usage: sedi cmds file;; 2) sed -an ' '"$1"'; H; $!d; g; w '"$2"'' $2;; esac; }

Note: "cmds" is one or more s commands. For other commands, if it's a big file, I will just pipe through less (still no temp file) and then save the buffer (to overwrite the original file), instead of using this hack.


Did you know that Ctrl-s will search forwards through the history too? You'll have to switch flow control off first.


Fantastic. I've always wanted to know what I'm going to do next.


C-R was one of a few watershed moments for me. I discovered it over a decade ago as a cow-orker was flying through commands way faster than was humanly possible. "How did you do that?" And so I learned.

It, along with other readline functionality, shell functions, substitutions, expansions, scripts, and the bazillion utilities are what make Linux (and Unix) shell so much more than just "it's like DOS, right?".

Yeah, kind of, in the same way that ... a cross between a Prius, a Mack Truck, a Lamborghini, an F-16 fighter, a helicopter, and a freight train are like a pushcart. It's an interface that helps you manage your computers, the things on them, and the things they're connected to. It's also a hugely efficient and effective way to process information and issue commands and controls in a useful way.


And vim -p file1.txt file2.txt to open them in tabs.


> screen

Unless you only use GPL software, there's no reason to not use tmux these days. Well, maybe if you're on HP-UX or something.

http://tmux.sourceforge.net/


I switched from screen to tmux a year or two ago, after decades of using screen, and I'm perfectly happy with tmux, as it fits my needs.

However, it's not true that screen has no advantages over tmux. Each program has its own strengths and weaknesses. For instance, screen is scriptable via its Lua bindings, tmux is not. Screen has zmodem support built in, tmux does not. There are probably many other examples, since screen has a bazillion features which have been developed over decades, while tmux is relatively new (and its developers don't seem to care to duplicate every one of screen's features).


tmux is scriptable from the shell via its command-line arguments, so any language can script it.

It's certainly true that screen does have tons of small old features... but I sort of swept that in with "unless you're using HP-UX or something." zmodem support isn't exactly relevant to the overwhelming majority of programmers, but yes, I admit these things are still relevant to a few niches.


"tmux is scriptable from the shell via its command-line arguments, so any language can script it."

I'm not sure if tmux is scriptable to the same extent that screen is via screen's Lua bindings, as the latter probably exposes many of screen's internals, while scriptability of tmux is limited to what you could do with its command line arguments. I haven't researched the matter, though, so I may well be wrong.


Is tmux as available as screen?

I often have to fix servers that are not from my own group, so all my knowledge is honed around tools that I can find anywhere.


Perhaps not, but tmux's bindings are nearly all backwards-compatible with screen's. Well, the default escape char is ctrl-B but I (and I think most folks) just remap it to ctrl-A immediately.


How do people use CTRL-A for screen without being driven batty that the emacs-style 'start-of-line' command is not available? I use that constantly!


Pressing ^A then a in screen should deliver a single ^A to emacs.


People rebind it. I bind it to ^o.

  # put this in your .screenrc
  escape ^Oo


I always rebind it to ^z. Since I never need to put a process in the background (I just open a new screen instead) when running screen, it never bothers me that that combo is a bit harder to type.


or if you never use toggle-input-method, maybe C-\

  escape ^\\


I don't use CONTROL-A to go to the start of the line, instead I hit ESCAPE 0

It works because I have my shell configured to use vi mode for command line editing.


I use the ratpoison window manager (it works a lot like tmux/screen, but for X11) and have Control+A set as its prefix key. I use tmux for some long-lasting remote sessions, so to send a Control+A to the shell running under tmux, under ratpoison, I type Control+A, a, Control+A, a. Yes, I do it a lot. No, I'm not going to change it.


I actually only learned that ctrl-A works in bash 3-4 months ago, and haven't yet picked a new screen/tmux escape and retrained to it. Mainly because these days, I only use screen when I WFH or am traveling, when I ssh into my workstation. Almost nobody has ssh rights to a production machine, so I'm actually struggling to think when I would use it from my workstation.


Ctrl-Z, you mean. Ctrl-A is for start of line in bash and emacs, so that won't do.


My method to learn Linux/OSX command line.

Step 1: Think "I wish I could do X."

Step 2: Google/duckduckgo "How to do X in bash"

Step 3: Delight that a simple solution already exists.

This method is successful in 95% of all cases.


This post and probably your comment will be in the results, gotta love that.


For me it was definitely find. I went through a ton of contortions to replicate its behavior several times before somebody knowledgeable clued me in.

Also, `` and $(). Find combined with one of these is great. One of my favorite one-liners (I even figured it out myself :)) is:

    cat `find . --name *.java` | wc -l


You'll want to quote .java so the shell doesn't interpret it as a glob, and find doesn't use GNU style flags. You can also pass wc the filenames directly like this:

    wc -l $(find . -name '*.java')


That has different semantics (it prints one line per file) and can have line length limitations. This has the same semantics as the parent's version, but without line length issues.

  find . -name \*.java -exec cat {} + | wc -l


Note that the + may not be portable. I know it doesn't work on older Solaris versions (<10) and HP-UX (from a couple years ago when I last touched it).

Also, find may or may not split the files it finds into multiple invocations.


Too many invocations of "cat"

find . -name \*.java -print0 | xargs -0 cat | wc -l

Edit: Dammit. Others have already posted this exact version further below. :-(


Yeah, the flags always get me. For some reason, I've never had issues with the *. It's good to know about passing files to wc directly.

I just use M-x find-dired and friends these days for most things, admittedly.


For find, you need to use single quotes so your shell doesn't automatically expand the *.java glob into the names of all the files. Also, find uses one dash for parameters, and can take an exec parameter: find . -name '*.java' -exec wc -l {} \;

The {} means "replace with filename", and the semicolon terminates the command (escaped so your shell doesn't eat it). This will run wc -l <filename> for every file that matches.

The -regex flag lets you do the same thing as -name with regular expressions.

The problem with this is we run n different wc's, as opposed to just one with all the parameters. We can use xargs to fix this: find . -name '*.java' | xargs wc -l

This even gives you a total!

The really simple solution (which is probably best) is to just use shell globbing: wc -l *.java

While find and xargs are definitely useful, the easy solution here works better


You can skip xargs altogether with the plus sign instead of semicolon.

    -exec utility [argument ...] {} +
            Same as -exec, except that ``{}'' is replaced with as many path-
            names as possible for each invocation of utility.  This behaviour
            is similar to that of xargs(1).
The problem with just wc -l *.java is that it doesn't recurse into subdirectories.


With bash-4 or zsh

  wc -l **/*.java
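A note on the bash side: unlike zsh, bash 4 ships with ** disabled until you turn on globstar (the src/sub/Demo.java path below is just illustrative):

```shell
shopt -s globstar                          # bash 4+: ** is off by default
mkdir -p src/sub && touch src/sub/Demo.java
wc -l **/*.java                            # now matches recursively
```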


I was an avid find user until I discovered ack-grep. These days it's one of the first things I sudo apt-get install.


Agree with "find", especially finding files with recent changes, e.g. in last 5 minutes:

find . -mmin -5 -print
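A few more time-based predicates in the same vein:

```shell
find . -mmin -5     # contents modified in the last 5 minutes
find . -mtime -1    # contents modified in the last 24 hours (-mtime counts days)
find . -cmin -5     # inode/status changed in the last 5 minutes
```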


zsh's extended globbing support makes the use of find unnecessary in most cases. See the "extended globbing" section of the link below[1] for some examples.

zsh also has the 'zargs' command, which works something like xargs, except that it gets its arguments from the command line (often in conjunction with extended globbing) rather than from stdin.

You might also be interested in reading about the "useless uses of cat"[2]

[1] - http://linuxgazette.net/184/silva.html

[2] - http://partmaps.org/era/unix/award.html


Never thought to use ``. I always used xargs:

find . -name *.java | xargs cat | wc -l


Your approach works better than backtick expansion, since it won't run into 'command line too long' issues; but it will still break on "weird" file names (e.g., file names with spaces in them). The correct way to do this is

    find . -name *.java -print0 | xargs -0 cat | wc -l


BTW, recent versions of Linux don't have a fixed limit on command line length.

The command line can be as large as available memory (for a single process). (Although the ratio between command line length and allocated memory is not obvious, since it depends on the number of words in the command line.)
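You can inspect the limits on a given system:

```shell
getconf ARG_MAX                   # combined argv + environment budget, in bytes
xargs --show-limits </dev/null    # GNU xargs: the limits it will actually apply
```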


That glob could cause trouble.

    find . -name '*.java' -print0 | xargs -0 cat | wc -l


xargs can still run you into the 'too many args' issue. You'll want to use this:

       --max-args=max-args
       -n max-args
              Use  at  most max-args arguments per command
              line.  Fewer than max-args arguments will be used
              if the size (see the -s option) is exceeded,
              unless the -x option is given, in which case xargs
              will exit.


Only if there's a bug in your xargs. POSIX states:

The xargs utility shall limit the command line length such that when the command line is invoked, the combined argument and environment lists (see the exec family of functions in the System Interfaces volume of POSIX.1-2008) shall not exceed {ARG_MAX}-2048 bytes

so you should never run into the ARG_MAX limit.


Dollar-paren expansion beats backticks for being nestable.

E.g.: I like dumping temporary files with timestamps:

some-command-to-generate-log > /tmp/log-$( date +%Y%m%d-%H%M%S )

Now, if I want to wrap that in a larger loop -- say, iterating over a number of files or parameters:

command $( expansion one $( expansion 2 ))

Works

With backticks you'd have to do escapes:

command `expansion 1 \`expansion 2\` `

... which gets tedious.


I like to use (for GNU date at least):

  $ date -Iseconds
  2011-11-21T10:56:17+0000

because it requires less typing. It's the ISO 8601 standard format.

-Is, -Im, -Ih, and -Id allow you to change the displayed resolution.

Bizarrely, this doesn't seem to be documented in my version of date (coreutils 8.10).


A lot of the date documentation is unfortunately in info only. Debian (and derived distros) do a good job of updating the manpages.

That said, I can't find "-I" documented anywhere.

What I like about the timestamp I use is that it's semantic but sorts lexically as well. Though yours does as well. Hrm.


Without xargs:

    find . -iname '*.java' -exec cat {} + |wc -l
But if you do that, why not just:

    find . -iname '*.java' -exec wc -l {} +
Which will also print out output of wc -l for each files along with total lines.


Also remember find's -print0 and xargs -0 so that you can deal with all filenames.


In zsh:

    cat **/*.java | wc -l


Bash got this functionality in v4.

It's not as useful as find.


pv - very useful to put in the middle of a series of piped commands to provide feedback on what, if anything, is happening.

I could've used this many times to catch cases where the amount of work being performed didn't match my expectations; usually an indication of an incorrect assumption or mistake I had made. Instead I sat there waiting for something way longer than I should've, only because I had a bad feedback loop.
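Two typical placements, assuming pv is installed (it's a separate package, not part of a base system; data.txt below is just an illustrative file):

```shell
command -v pv >/dev/null || exit 0   # skip the demo if pv isn't present
printf 'some data\n' > data.txt
pv data.txt | gzip > data.txt.gz     # throughput/progress reported on stderr
pv -q -L 1m data.txt > /dev/null     # -q: quiet, -L: rate-limit to 1 MB/s
```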


I'm not sure why, but it was a while before I ran across du -h. Not sure how I ever did without it!

Oh, and "sudo !!". Definitely key, along with "& disown".
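du -h pairs nicely with GNU sort's -h flag, which sorts human-readable sizes (K/M/G) correctly:

```shell
du -sh ./*/ 2>/dev/null | sort -h         # per-directory sizes, smallest first
du -sh ./*/ 2>/dev/null | sort -rh | head # the biggest directories first
```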


"!ssh" to run the most recent ssh command without having to type out the long list of arguments I use and the host name.


It helps to put your most frequent entries in ~/.ssh/config

  Host somewhere
  HostName somewhere.example.com
  User username
  IdentityFile ~/.ssh/somewhere_key.pub


Yes, and do you know about ProxyCommand?

It lets you set up hops when you don't have direct access to a server.

e.g. if you don't have direct access to serverB but have direct access to a serverA that has direct access to serverB

put this in your ~/.ssh/config file

  host serverA
  hostname serverA.domainname.com

  host serverB
  hostname serverB.domainname.com
  ProxyCommand ssh serverA "nc %h %p"
then typing ssh serverB will log you on serverB


I also use !$ all the time. It is replaced with the last argument of the previous command line.


That reddit thread is delightful, because it is full of wonderful commands that I didn't know, or rarely use, or had forgotten.

But it's also terrifying, because it exposes how many people will post questions before using man.


Many comments seem to be shell- and bash-related. Perhaps the thread exposes how difficult to read the bash manpage is. On my system, the bash 4.1 manpage is 41026 words long, or about 80 pages. Manpages are most useful to me when they are short and to-the-point so I can find what I am looking for before I lose interest in skimming through an 80 page book.


I'm baffled all the time by the lack of clear examples on many manpages.


Yep, a short list of useful one-liners would go a long way. Often, your use-case will be a straight specialisation of one of the examples, and if not, chances are you can cobble together something from several examples. I'd like to have an auxiliary help command for this, e.g. example ffmpeg in addition to man ffmpeg (another 18k-word manpage).

Edit: This works rather well:

  function cmdfu() {
         curl "http://www.commandlinefu.com/commands/matching/$@/$(echo -n $@ | openssl base64)/plaintext" --silent | sed "s/\(^#.*\)/\x1b[32m\1\x1b[0m/g"
  }
From http://www.commandlinefu.com/commands/view/3086/search-comma..., it uses the commandlinefu API: http://www.commandlinefu.com/site/api


Another alternative is Cheat (http://cheat.errtheblog.com/), which is essentially a CLI-accessible wiki of usage examples. It was originally focused on Ruby (and still is to an extent), but it gives decent results a lot of the time.


You could try the OpenBSD man pages. One of OpenBSD's must-haves is good documentation. If I'm looking for an example and the Linux man doesn't have one, I'll often google for "openbsd man xxxx".

Of course, watch out for differences in the utility. OpenBSD (usually) won't have any GNU extensions, for example.


Yes, bring back the VAX/VMS help system with the helpful examples at the end; it was awesome last time I used it.


Same here, most man pages seem pretty good at documenting all the available options, but are pretty useless at explaining how the tool should actually be used.

I'd like to see someone dump all the common man pages into a public wiki so people could flesh out the usage examples and suggest alternative tools.


There is a standard "EXAMPLES" section to manpages. It's just not used often enough.

Most Linux distros have a bug reporting feature which could be used to suggest such examples, and it would be a Good Thing if more were suggested.


man bash | pr | less

... gives you paginated manpages. I get about 96 pages for bash 4.1.5.

man -Tps bash > /tmp/bash.ps && gv /tmp/bash.ps

... gives you the manpage pretty-printed as postscript output (assumes you've got the gv ghostview viewer installed). Or you can use a PDF converter (e.g.: ps2pdf) and read with your preferred PDF viewer.

This version runs about 70 pages for the same manual.


How would you make the bash manpage short and to the point, considering the complexity that it implements?


By reducing the needless complexity that it implements. The purpose of a Unix shell is to interpret a simple language wrapping syscalls so the user can easily give instructions to the kernel. There are several such shells whose source code is smaller than the bash manpage.

13 pages of the 80 page manual are devoted to readline and programmable completion. Readline and programmable completion could be removed from the shell, and their functional equivalents moved to a more logical place in the terminal emulator. Plan 9 does this, and it works well, and people seem to like it and the flexibility it gives.

24 pages are devoted to builtin commands, many of which are unnecessary duplicates of regular commands, and several unnecessary altogether.

3 pages are about weird parameter expansions, all of which can be done more intuitively with sed, except for the array one, which can be done with awk.

And there are many smaller things as well, like biffing or arithmetic evaluation, which can be done with dc or whatever. You could try moving history out of the shell and into the terminal emulator as well.


  $ man bash | wc -l
  5351
  $ 9 man rc | wc -l
  496
You may want to check out the `rc' shell from Plan 9 (carried over from the last research Unices). Simple, elegant. Available for Linux via http://swtch.com/plan9port/ Manpage at http://swtch.com/plan9port/man/man1/rc.html


I believe the p9 in his name is short for "plan 9" :)


Responding that bash itself is too complex is skirting the question. The original post made it sound like the bash manpage was needlessly complex within the constraints of bash's current complexity.


I wish there was an easy way to get info about builtin commands. So things like man bash-disown would work (similar to git).


The 'help' command is exactly what you're looking for -- e.g., 'help disown' will describe the 'disown' command. Definitely beats searching the entire bash man page for the command you want (which is what I did for far too long, until I found out about 'help').
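For example:

```shell
help disown      # the builtin's full description, straight from bash
type -t disown   # prints "builtin", confirming there's no external man page
```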


Yeah, but it often isn't as long as a real man page. Plus for some reason zsh doesn't have it!


There's always google you know :D. The point is to know what to look for:

$ help "*"

will print out all shell built ins (list help for them). And if the short description is not enough, go google for each.


While it doesn't fix the issue, zsh's man page is divided into different sections, which at least alleviates it.


And zsh has the user-friendly users guide: http://zsh.sourceforge.net/Guide/

This is how I learned the unix shell back around 2000ish. Aside from giving an excellent insight into zsh, it also gives many good hints and notes about unix shells in general.


The first two or three chapters of that guide are pretty good, but then it gets bogged down in obscure minutia to such a degree that you start to feel almost like you're just reading the zsh man pages again.

The guide is in some serious need of a good editor to make it more concise and better organized. It also needs many more simple, practical examples. Unfortunately, it hasn't been updated since 2002, which also makes it a bit obsolete, since zsh development has been very active since then and zsh is now on to a new major version.

Overall, the guide is a nice try, but zsh needs more and better documentation if it's not going to overwhelm all but the most dedicated users.


'info bash' is much better than 'man bash', give it a try.


On my system, the info documentation for bash 3.1, not including indices, is 53610 words, or 105 pages. The content is identical to that in the manpage. In addition, you have to read it in a clunky, unintuitive, and un-Unixy browser. Far from short and to-the-point, I nearly lost interest just trying to figure out how to coax info to send its output to stdout so I could pipe the 105 page book to wc. The info documentation is not in any way I can measure better than the manpage. Both the info documentation (and I use that term loosely) and the manpage are more noise than signal.


Info pages are written as a single document, but displayed by the 'info' or 'pinfo' readers in multiple pages.

You can simply view the original file (it has minimal markup) in a pager for a single-file view. I generally write an "info" script or function to replace the info viewer with something more useful.

I share your dislike of the info format.


Or, for another alternative, w3mman, which is bundled with the w3m browser. Tabbed manpage browsing in the terminal with URLs converted into links and "foo(1)" converted into a link to the man page for "foo". Don't leave home without it.


On my MacBook the manpages scroll hellishly slow and there are no "Page Down" keys (maybe there is some key combo? I don't know). I end up googling for the manpages on the web instead most of the time...


On the MacBooks I've seen, Page Up/Down is usually Fn+Up Arrow/Fn+Down Arrow. I don't know if this is implemented in software or hardware, though, so if you're running *nix, you might be out of luck.


f and b will do page scrolling in the man viewer.


I was taken aback by one commenter: the command he wished he'd known for years was man, which he had only just discovered. "All those years of googling, wasted."

What happened that it's possible for someone to not know that it exists?


Google is what happened. Not to date myself, but when I was first learning (BSDi) UNIX Google didn't exist. The first thing the sys-admin who helped me said was, "type 'man'."

These days most people just know that if they have an unanswered question, ask the Google.


> The first thing the sys-admin who helped me said was, "type 'man'."

And that also happened. It used to be virtually impossible to jump in alone, because someone had to create an account for you. It's now possible to start in Unix with no one's help. As long as you can figure out how to burn an iso to CD, you're in. The installation gives you an account, with sudo.

After you install CrunchBang, it pops up a terminal the first time you log in, and allows you to make additional choices that weren't made during install.

I think all distros should pop up a terminal upon first boot and direct you to read "man man."


You can learn a lot by searching the web, so many folks never learn the basics by reading the first chapter of an intro to Unix (or shell) book anymore. It's still surprising, though.

If you search for help with command line utils you are almost guaranteed to get some results that are man pages online, so the person who said that probably has read man pages without realizing it, and without the convenience of the man program.


Quite often asking a good question to knowledgeable people can give you a tested and well working solution in a fraction of the time it would take to search and try and fail and try a working solution by yourself.

For me, often asking the question and then still trying to figure it out myself worked well. Often I come to a solution, post it (IRC) and then later some guru responds with "try this instead" or "with that flag it could do some thing better".


> Quite often asking a good question to knowledgeable people can give you a tested and well working solution in a fraction of the time it would take to search and try and fail and try a working solution by yourself.

The pool of knowledgeable people decreases with each question asked, especially if you're not making an effort to at least try to do it yourself.


I personally find man and info very cumbersome and hard to read. I usually google for things even when I know full well that it's covered by man and where, because the google results are normally a lot easier for me to read and figure out than the manpages.


If that is surprising to you, then much of human behaviour must utterly baffle you.


emacs --daemon

(or after you start emacs, M-x server-start). This starts up the emacs server, so that it preserves your open files, and you can start new emacs instances immediately with emacsclient. (I actually symlink emacsclient to vi since it loads as fast as vi, which was one of the weakest points of emacs for me.)

Very useful if you are using a remote box (like a vps) as your dev environment.


In my .bashrc I have:

  export ALTERNATE_EDITOR=''
  alias e='emacsclient -t'
Now

  $ e file.txt
will fire up emacsclient to connect to a running emacs daemon. If there is no emacs daemon running it will start one for you and then connect to it.

You can

  export VISUAL="emacsclient -t"
  export EDITOR='emacsclient -t'
for good measure if you like.


And if you are VIM user, you can do the same :D:

    $ vim -h | grep -i server
       --remote <files>	Edit <files> in a Vim server if possible
       --remote-silent <files>  Same, don't complain if there is no server
       --remote-wait-silent <files>  Same, don't complain if there is no server
       --remote-send <keys>	Send <keys> to a Vim server and exit
       --remote-expr <expr>	Evaluate <expr> in a Vim server and print result
       --serverlist		List available Vim server names and exit
       --servername <name>	Send to/become the Vim server <name>
    $


watch - saves you from repeatedly typing a command; I usually use it to poll an app's status, or to check when a file's been accessed.

tee - pipe to a log file with tee while still seeing normal stdout on screen.


I love the watch command too. I really enjoy it with -d (delta or differences) which will highlight what changed since the last run.


Thank you! No more while true + sleep loops for me.


It's not a command, but it's still useful. When you're typing a long command and realize you need to run another command (or set of commands) first, you usually hit Ctrl+C to cancel it, but by default that doesn't get saved in the history. If you put # in front of the command to comment it out (you can do it quickly with Ctrl+A then #) and hit Enter, you can then use Ctrl+R to get it back. I'm sure there are other ways to save the command in the history, but this is a simple way to get the same effect.


For those using zsh hit alt-q to stash the current command. After you run the next command your stashed line will be brought back for you.

alt-h behaves similarly but runs `man <command you were typing>`, so if you have this:

    $ rsync -
and need to check the options, hit alt-h and the man page for rsync comes up. When you exit you'll be back at:

    $ rsync -


What I usually do in that situation is Ctrl + u (delete all the way back to the start of the line), then type the intermediate command, then Ctrl + y (place the deleted partial line back on the prompt, with my cursor at the end where it was).

This assumes Emacs-style bindings for the shell.


Even better: just type \ (line continuation) at the point you realize you need to do something else first. Then hit Ctrl-C to cancel once you get to the new line.


I just put a random character instead of # to get "command not found".


# seems safer... what if you're typing a mv command on a BSD machine that also has GNU binutils and the random character you hit is a "g"? Bad news.


Awk. Actually a full programming language but i usually use it for quick one liners: http://gregable.com/2010/09/why-you-should-know-just-little-...


every developer should know sed and awk. it is amazing how many devs write cumbersome scripts or little programs to handle tasks that are built for sed and awk.

my ~/bin folder has ~50 different little scripts that handle all sorts of small tasks.


I don't really know a lot about this stuff, but my understanding is that Perl more or less replaced sed and awk.


I code in Perl at my day job, and I still use awk and grep. Sometimes it's quicker to do it in awk than perl, especially when you're doing it in command line and have to figure out quotes.

  awk -F, '{print $3}' foo | xargs -I{} echo 'command -x -y -z {}'

  perl -e 'open F, "<foo"; while(<F>) { my($a,$b,$c,@rest) = split /,/; system("command -x -y -z $c"); }'

I think I could do the split into an array and then take the third element, but I'm just trying to do an elementary example. When you want to do more things to this argument you necessarily have to grow this, and at one point I got to the point where I started having to escape quotes. That's rather difficult to read when you're just trying to do something quick in the command line and you mismatch a pair.


Instead of opening the file "foo" by hand, you can use Perl's "awk mode" thusly:

   perl -lane 'system("command -x -y -z $F[2]")' foo


awk (and to a lesser extent sed) is cleaner and easier to remember.


For me, netcat. I mostly use it for

* Copying files across machines (`cat file| nc -l port` and `nc host port`)

* Remote pasteboard (on OSX w/ pbcopy, `while true; do nc -l port| pbcopy; done`)

Not life-changing, but I wish I'd known it earlier.

If only there were a way to make a "remote $EDITOR" with netcat...


Quitting SSH connections that hang: ~.

If you frequently connect through VPNs to other hosts, often enough the SSH connection times out and just sits there, taking no commands. Hitting ~ and then . will kill the session (it's an OpenSSH feature).

Other frequently used tools: awk, sort, uniq, tail, find, grep.. they alone make the shell a really powerful tool


it's important to note that this needs to be preceded by a newline to work.

so:

    <ret>~? (list possible escapes)
    <ret>~. (disconnect session)
If you're dumping binary data through ssh as a pipe, this can sometimes bite you if these sequences appear in your data.

    ssh -oEscapeChar=none ...
handles that situation. (for the gory details, man ssh_config)


these are great ones!

But here is mine that I recently learned:

  sort -h 
compare human readable numbers (e.g., 2K 1G)


Canonical example:

    du -h | sort -h


Sort to the same output file: sort -o foo.txt foo.txt

You can't do sort foo > foo because you'll be left with a blank file.
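A quick demo, using a made-up scratch file. The in-place form is safe because sort reads all of its input before opening the -o target:

```shell
printf '3\n1\n2\n' > /tmp/nums.txt
sort -n -o /tmp/nums.txt /tmp/nums.txt   # safe: input is fully read first
cat /tmp/nums.txt                        # prints 1, 2, 3 on separate lines
```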


sponge(1) from "moreutils"

Although sort(1) handles this for you as you mention, the general problem in the shell of wanting to overwrite your input file with the output file, but not being able to redirect to it for the reason you mention, is solved with sponge. Rather than:

$ grep "foo" bar.txt >baz.txt; mv baz.txt bar.txt

instead:

$ grep "foo" bar.txt | sponge bar.txt



!$ (short for !!:$)

Expands to the last argument of the previous command. In general, the whole HISTORY EXPANSION section of the bash manual is a must-read.

A few ones from the moreutils package:

- vipe : manually edit the data in the middle of a pipe.

- vidir : treat a directory and its files as a text file. Does renames and deletions.

- sponge : when you're doing modification on a file in a pipe, and want the output to be to this file (file will be scrambled without sponge).

It has others, very useful tools, but I won't spoil them all.


alt-. or (ESC .) does something similar, but can be pressed repeatedly to keep going back further in history in case the argument you want isn't on the most recent command.

!! is short for the previous command, but you can do !-2$ to get the last argument from the 2nd last command, or !cp$ to get the last argument from the most recent cp command. Or !cp:2 to get the 2nd argument to the most recent cp command. !# refers to the current command being entered (in zsh, don't know about bash).

Basically I second Aissen's recommendation to read up on history expansion. Super useful.


#proxy from port 80 to port 3001

socat TCP-LISTEN:80,fork TCP:localhost:3001


Great program, socat is like netcat, but with a lot more features.


The way uniq should have been implemented:

    awk '!x[$1]++'
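As written it keys on the first field ($1); use $0 to deduplicate whole lines instead. For example (input order is preserved, unlike sort -u):

```shell
# prints each line only the first time it's seen
printf 'b\na\nb\na\nc\n' | awk '!x[$0]++'
# prints: b, a, c (one per line)
```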


lsof comes in handy to match a pid to a socket, and thus things like an apache proc to a running MySQL process.


strace - for looking a bit deeper into what's going on with live processes when things go awry.


Most of those commands are not specific to GNU/Linux and will work with *BSD, OS X, etc.


"Hey guys check out these sweet Ubuntu commands!" ;-)


In bash:

  cp /dir/dir/dir/dir/filename !#:^:h/newfilename

(applies to more than cp). !#:^:h expands to the first argument minus its last path component, so this repeats the same dir tree with /newfilename in place of /filename.

This can be mimicked by brace expansion: cp /dir/dir/dir/dir/{filename,newfilename}
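Easy to sanity-check with echo, since the shell expands the braces before the command ever runs (a bash/zsh feature, not plain sh):

```shell
# cp never sees the braces; it receives two fully expanded paths
echo cp /d1/d2/d3/{filename,newfilename}
```

In bash this prints: cp /d1/d2/d3/filename /d1/d2/d3/newfilename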


The "^[." (esc-.) keybinding. In zsh it maps to insert-last-word which allows you to do this:

    echo hi > /tmp/blah.txt
    ls -l <esc-.>
There's a bunch of complementary commands to go with it too.


Is this only in zsh? I've been recommending this and not marking it explicitly as a zsh thing.

alt-. works too. In the shell ESC <foo> is the same as alt-<foo>. The difference being that with alt-<foo> you hold alt and press <foo>, then release them both. with ESC <foo> you press escape, release, and then press and release <foo>.


It's bash too.


zless, zgrep, zcat... Only recently found out about these and now I love them


uptime. I had a phone screening interview with Google and it was the answer to one question. I answered sar, which is much more powerful, and top, but the caller wasn't technical and I probably missed a checkbox.


I always use "w" instead. It gives the same information as uptime plus some more (basically "who"), for much less typing.


For me, it was using 'cd' without argument. (It takes you back to your ~). Also, learning the various options of grep (-v, -i, -A, -B) made its use really more powerful.


A useful one I somehow missed early on is

  cd -
to change back to whichever directory you were just in. Every once in a while most of us find ourselves facing someone's "how could you not know that?" look.
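Under the hood it swaps $PWD and $OLDPWD, and echoes the directory it switches to:

```shell
cd /tmp
cd /etc
cd -        # prints /tmp and takes you back there
```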


Also along these lines are the directory stack manipulation commands: "pushd", "popd", and "dirs" -- along with some related environment variables.


I wonder, how do you remember all the command line options?

Recently I had this thought: maybe the people who can remember all the options are actually on the autism spectrum (I don't know the proper term; the thing many geeks are deemed to be), and I am not. Remembering command line options might be like knowing large prime numbers within seconds.

But I am also interested in memory techniques like the method of loci. Perhaps some good system for memorizing command line arguments could be devised...


Umm, we just use the commands a lot. Doing something 12 hours a day for 10 years tends to leave some imprints on your memory.


set -o vi


The one that starts the system up - still forget what it is - but I know it lives in /boot.


This is useful for finding and killing a process:

  ps -A | grep <some text in program name>

Upon getting the pid:

  kill -9 <pid>


Instead of kill -9 (a nuke), you can often get results with just kill, kill -INT or kill -HUP. (The advantage is that the "weaker" signals allow your runaway program to do some cleanup, but kill -9 doesn't.)

There's a nice write-up and a function automating attempts to run increasingly strong signals on this site: http://web.archive.org/web/20080610070315/http://sial.org/ho...

(Thank god for the wayback machine. That site was full of good Unix tips and tutorials, but it's gone now.)
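In that spirit, a minimal sketch of such an escalation (the function name and the signal ladder here are my own choices, not taken from the linked write-up):

```shell
# Send progressively stronger signals, giving the process a moment
# to clean up after each one; only reach for KILL as a last resort.
nicekill() {
  pid=$1
  for sig in TERM INT HUP KILL; do
    kill -s "$sig" "$pid" 2>/dev/null || return 0  # pid already gone
    sleep 1
    kill -0 "$pid" 2>/dev/null || return 0         # it exited
  done
  return 1
}
```

Usage: nicekill 12345. One caveat: if the target is a direct child of a script, it can linger as a zombie until wait-ed, so the kill -0 liveness check may keep succeeding; for interactive shells this isn't an issue.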


It's still there, just without a domain:

http://72.14.189.113/


Thanks for letting me know. It's still probably easier to use the Wayback Machine since all of the site's internal links point to http://sial.org/ so browsing within the site is very painful. Still, I'm glad to see the content is all still online. I may just mirror the whole thing (there's tons of good reference material in there, especially for Perl, but also for Unix generally).


Instead of the first, try "pgrep".


also check

killall programname


Be careful with killall -- BSD and Linux killall does what you say, but SysV killall kills every program on the machine.

Most of us probably won't be executing killall on any HPUX or IRIX boxes, but it's still good to know the difference ;)


pkill

pidof


I wish I would have known about dmidecode long before I found out about it.


Along these lines there are also: lspci, lshw, and procinfo.


du -sh ./D*

  872K	./Desktop
  78M	./Documents
  3.2G	./Downloads


I would add | sort -h here


if you don't have "sort -h" (mine doesn't do it), you can also use "du -m", which gives file sizes in megabytes -- pretty reasonable. (then "sort -n")


ctrl+d was an amazing improvement when my boss showed it to me years ago. Now I discover that a lot of people don't know it.


to logout or to delete characters?


More accurately: End Of File.


equals to "logout" on the console


pushd / popd


also cd - which goes back to the last directory you cd'd from.


slabtop


This looks interesting. But would it be useful to anyone who's not a kernel developer?


python -m json.tool < unformatted.json > formatted.json

edit: requires simplejson
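It also works as a filter in a pipe; with python3, json.tool ships in the stdlib, so no simplejson is needed:

```shell
# pretty-prints the JSON from stdin with 4-space indentation
echo '{"menu": {"id": 1}}' | python3 -m json.tool
```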


screen

Saved my work on remote servers too many times to count. :)


tree


mytop


dd


sed -n '/./{ /beg/,/end/p;}'



