
Unix tricks - shawndumas
http://mmb.pcb.ub.es/~carlesfe/unix/tricks.txt
======
unfletch

        '!!:n' selects the nth argument of the last command, and '!$' the last arg
    

A lot of people know about "!$" (which is shorthand for !!:$), but that's just
the tip of Bash's history expansion. I use these things all the time. One of
my favorite keystroke savers is adding :h, the head modifier, to !$. For
example:

    
    
        $ cp file.txt /some/annoyingly/deep/target/directory/other.txt
        $ cd !$:h
        $ pwd # => /some/annoyingly/deep/target/directory
    

Once you understand how each component works it's easier to put them together
into new (to you) combinations. For example, once you know that !$ is
shorthand for !!:$, it's not a huge leap to reason out that you can use !-2:$
to get the last argument to the 2nd-to-last command. Or !ls:$ for the last arg
to the most recent `ls` command.

I also prefer to do substitution with the :s modifier rather than ^ as
suggested at the link, for consistency's sake:

    
    
        $ echo "foo bar"
        foo bar
        $ echo !!:s/bar/baz
        foo baz
        $ echo !?bar?:s/foo/qux
        qux bar
    

Relevant Bash manual pages:

<http://www.gnu.org/software/bash/manual/html_node/Event-Designators.html>

<http://www.gnu.org/software/bash/manual/html_node/Word-Designators.html>

<http://www.gnu.org/software/bash/manual/html_node/Modifiers.html>

~~~
pjungwir
Another nice ending is `:p` to print the command instead of executing it. I
use this if I'm doing something complicated and I want to make sure it's
right. Or if I'm saying `!-n:foo` with n>2. Then just up-arrow and enter to
run it for real.

~~~
absqua

      shopt -s histverify
    

to show the expanded command before executing it. Then just enter. I never get
these right the first time.

~~~
pyre
In zsh, you can hit <tab> to expand it in place before hitting <enter>.

------
niggler
1) `pgrep` is a standard utility that does what his `psgrep` does and much
much more.

2) htop is a cpu and memory hog -- every time I've used it I noticed it takes
6+% CPU time

3) there's an awk trick to do the `sort | uniq` recommendation that works on
10+GB files (single pass):

    
    
        awk '!x[$0]++'
    

4) Passwordless keys are dangerous -- use ssh-agent to save the password of
the keys
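A quick sketch of why the awk one-liner in (3) works: `x[$0]++` is zero
(false) the first time a line is seen, so `!x[$0]++` prints only first
occurrences, in a single pass and in the original order (unlike `sort |
uniq`, which has to sort first):

```shell
# Keeps the first occurrence of each line, order preserved, one pass.
printf 'b\na\nb\nc\na\n' | awk '!x[$0]++'
# prints:
# b
# a
# c
```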

~~~
carlesfe
Not trying to contradict you, just some explanations

1. I prefer 'psgrep' because it covers 99% of my use cases for pgrep (ps axuf
| grep $NAME)

2. htop is very nice, come on! I wouldn't leave it running in the background
for hours, but it's nicer than top

3. Note taken, thanks!

4. Is ssh-agent really safer than using passwordless keys? Just asking, I'm
curious

~~~
niggler
1. `man pgrep` is your friend (you save two greps, and TBH the `grep -v grep`
should be a hint that there's a better way)

2. In my experience on Debian (granted this was in 2010), there is a
noticeable performance difference between `htop` and `top`.

4. ssh-agent stores the password in memory and is erased on reboot. OTOH, if
you use a passwordless key file, anyone who has the key can use it.
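For reference, a sketch of the pgrep replacement for the `ps aux | grep NAME
| grep -v grep` pipeline (the process and sleep time are arbitrary
stand-ins):

```shell
# pgrep replaces 'ps aux | grep NAME | grep -v grep' in one call.
n=31337
sleep "$n" &
pgrep -f "sleep $n"    # -f matches against the full command line
pgrep -l sleep         # -l prints the process name next to the PID
kill "$!"              # clean up the stand-in process
```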

~~~
npsimons
I'm pretty sure ssh-agent doesn't store the password, but the private key.
Also, the fact that it supports timed expire (and can be setup to drop keys
upon events such as screen lock) make it a wiser choice than passwordless
keys.

~~~
gingerlime
That's correct. And ssh-agent doesn't give access to the private key either,
only to perform operations like signing. The only way to extract the key is to
search for it in the process memory, which I believe would require root-level
access.

------
artagnon
Feels very outdated.

1. Use zsh, not bash. AUTO_PUSHD, CORRECT_ALL and tons of other options make
some tricks redundant. Also, the zle M-n and M-p are more useful than C-r imo.

2. Use tmux, not screen.

3. Use z (<https://github.com/rupa/z>), not j.py.

4. Use cron, not at. Or even systemd timer units, if you're so inclined.

5. Use public-key authentication and keychain, not password-based SSH.

6. Don't send emails from the command line naively; you have no control over
the headers. Use git-send-email or similar.

7. Consider using something slightly more sophisticated than Python's
SimpleHTTPServer to share files/folders. One example: woof
(<http://www.home.unix-ag.org/simon/woof.html>)

~~~
laichzeit0
I take it you don't work on many disparate unix systems on a daily basis :) I
find most of the time I'm lucky if there's even bash installed on the remote
system; it's usually ksh. tmux? Way too new. Python? Nope. Perl is the only
scripting language I'd wager on beyond awk if I hope to reuse the script
again.

Sad, but I find this is generally the case in extremely large enterprises
where there is a mix of AIX, HPUX, Linux and Solaris being used due to years
of weird procurement decisions. Sigh.

~~~
carlesfe
Yes, that's exactly why those tricks "feel outdated". It's because they're
made to run on systems from the early 2000s

------
carlesfe
So happy to see this on HN! Actually I took most of the tips from unix threads
here and r/commandline. I highly recommend that subreddit if you like this
kind of trick!

Edit: also, @climagic: <https://twitter.com/climagic>

Second edit: I'm enhancing the file with these comments, so if there are any
inconsistencies between the txt and a commenter, it's my fault.

~~~
adlpz
Wow! Topopardo! I used to follow your podcasts long ago :D. Anyway, nice
list of tips.

------
crusso
The special bash command I'm most often asked about by shoulder surfers is !$.

It substitutes the last argument in the previous command into the current one.

For example:

    
    
      $ ls /some/long/path/somewhere/looking/around/
      <output>
      $ cd !$
      cd /some/long/path/somewhere/looking/around/

~~~
vilgax
Better than that for me is "Alt+.", much easier to type. It can also be
combined with a number: "Alt+2 Alt+." inserts the second argument, and
"Alt+0 Alt+." inserts the command itself (word zero).
<http://linuxcommando.blogspot.in/2009/05/more-on-inserting-arguments-from.html>

~~~
micampe
If you are on OS X and use Terminal.app “Alt-.” won’t work because Alt is used
for alternate characters. You have two options: enable “use option as meta” in
the app settings (but you lose the extra characters) or use “Esc-.” instead.

 _Yes, I know about iTerm. I don’t want it._

~~~
kps
One way or another, that's true of any terminal. The shell sees _characters_ ,
not _keystrokes_.

------
etrain
His last trick - compressed file transfer without intermediate state:

    
    
      'tar cz folder/ | ssh server "tar xz"'
    

Can be pulled off with two flags to scp - and you get to see progress as a
benefit!

    
    
      scp -Cr folder server:dest/

~~~
ominous_prime
Tar can transfer more filetypes and attributes than scp can (even using the -p
option). `scp -p` only transfers mode, mtime and atime; you lose ownership,
extended attributes, symlinks, and hardlinks.

You will also get better compression with tar (or rsync), as it is compressing
the files directly, and not just the ssh stream (-C is just passed on to ssh).

I did the tests years ago, but a quick google found someone who tried to test
the various combinations:
<http://www.spikelab.org/transfer-largedata-scp-tarssh-tarnc-compared/>

~~~
jerf
In particular, scp is mindblowingly slow on lots of small files. I
independently rediscovered the tar-pipe trick while sitting there watching scp
laboriously copy thousands of 100-byte files so slowly I could count them as
they went by. That should not be possible, even at _modem_ speeds. Fine for
moving one file, OK for directories of very large files, not suitable for
general usage where you might encounter a significant number of smaller files.

~~~
ominous_prime
Absolutely. Connection latency hits you the hardest, since each file is sent
serially and requires 2 (or 3 with -p) round trips in the protocol, and this
is on top of an ssh tunnel with its own overhead. I can't remember what my
tests showed, but I have an inkling that tar over ssh was far faster than
rsync for an initial load, since no round trips are required, but you lose
some of rsync's possible benefits, like resumability and checksums.

~~~
jerf
If my first tar attempt fails for some reason, but it made a lot of progress,
I switch to rsync. Best of both worlds. This hasn't come up often enough for
me to script it.

------
VeXocide
Instead of

    
    
        ProxyCommand ssh -T host1 'nc %h %p'
    

you can use

    
    
        ProxyCommand ssh -W %h:%p host1
    

which uses ssh itself and therefore also works on machines where netcat isn't
installed.

~~~
carlesfe
Hi, I tried that but my ssh version doesn't have the -W flag. What would you
suggest?

~~~
argarg
Unfortunately OS X ships a badly outdated OpenSSH version, just like many
other unix tools, and it doesn't support the -W option. You could try
upgrading OpenSSH using Homebrew if you want.

~~~
carlesfe
I'm using Ubuntu 10.04 and it doesn't have the -W flag either.

~~~
argarg
You need openssh 5.4+. I am running 12.10 and I've got 6.0p1

------
kibwen
> 'cd -' change to the previous directory you were working on

To my surprise, this also works with git:

    
    
      git checkout -  # to checkout the previous branch
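Both are "back to where I was" shortcuts; in the shell's case, `cd -` works
because the previous directory is kept in $OLDPWD. A minimal sketch:

```shell
# cd - swaps the current directory with $OLDPWD and prints the target.
cd /tmp
cd /usr
cd -        # prints /tmp and returns there
pwd         # now back in /tmp
```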

------
growt
If I may add a trick:

ctrl-z - stops a program

bg - sends the stopped program to the background

fg - gets the program back to the foreground (interactive mode)

very useful in editor sessions or when you want to get rid of the endless
download/scp that is blocking your terminal
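The same mechanism is available outside the keyboard: ctrl-z sends SIGTSTP to
the foreground process, and bg/fg resume it with SIGCONT. A sketch using
explicit signals (the sleep is an arbitrary stand-in for a blocking command):

```shell
sleep 60 &            # stand-in for the endless download
pid=$!
kill -STOP "$pid"     # roughly what ctrl-z does (ctrl-z sends the catchable SIGTSTP)
kill -CONT "$pid"     # what bg/fg send to resume the process
kill "$pid"           # clean up
```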

~~~
niggler
Note that each job gets an identifier (which you can see by running `jobs`).
Other commands like `kill` can work with the id number by using %[id]. For
example:

    
    
        $ some_command
        ^Z (hit control z)
        $ some_command_2
        ^Z (hit control z)
        $ jobs
        [1]-  Stopped                 some_command
        [2]+  Stopped                 some_command_2
        $ kill %1
        $ jobs
        [1]-  Terminated: 15          some_command
        [2]+  Stopped                 some_command_2

~~~
dminor
Way back in my college days, one of the student admins took down the CS
department's server by forgetting the % and killing process 1 by mistake.

------
firebones
Wanted: a lint for your history that analyzes your commandline usage,
suggesting these types of tips based on your historical use.

    
    
      history | commandlint

~~~
ninetax
Where can I find commandlint?

~~~
vvpan
That's a hypothetical program (note "Wanted").

------
networked
Related earlier discussions:

<https://news.ycombinator.com/item?id=5022457>

<https://news.ycombinator.com/item?id=4481234>

A pretty good thread on Reddit:

[http://www.reddit.com/r/linux/comments/mi80x/give_me_that_on...](http://www.reddit.com/r/linux/comments/mi80x/give_me_that_one_command_you_wish_you_knew_years/)

Edit: see <https://news.ycombinator.com/item?id=3257393> for a discussion of
the above thread.

While we're at it, consider using weborf [1] as an alternative to Python's
SimpleHTTPServer for simple file sharing. I found it able to saturate a
gigabit ethernet connection when hosted on a Core 2 Duo ULV laptop with an
SSD.

[1] See <http://galileo.dmi.unict.it/wiki/weborf/doku.php?id=start>. It's
available from the official repos in Debian and Ubuntu with

    
    
      sudo apt-get install weborf
    

Invoking it is dead simple:

    
    
      weborf -b ~/dir-to-share

~~~
dfc
RE: weborf

I will have to look at weborf. I have always wished debian packaged
publicfile, similar to djbdns or even dbndns. As it is I am still looking for
a "djbdns-like http server" that is apt-get installable and actively
maintained.

I will never understand why gnome-user-share depends on apache...

~~~
keenerd
Consider webfs too. It is even simpler than weborf.

<http://packages.debian.org/search?keywords=webfs>

------
nathanstitt
Whut? ctrl-r? How have I missed that? No more 'history|grep foo' for me!

~~~
pavel_lishin
Try this in your .inputrc:

    
    
        # Bind the up arrow to history search, instead of history step
        "\e[A": history-search-backward
        "\e[B": history-search-forward
    

No more "^rls" to search for ls in your bash history; just type "ls" and start
hitting the up arrow.

~~~
nathanstitt
Thanks for the suggestion. I used to have a very customized .bashrc with nice
little things like that, but have decided to stick with standard stuff for
things that work off of muscle memory.

I got tired of sshing into a new box and half the things I'd type wouldn't
work properly until I remembered to copy my settings files over, which seemed
more trouble than it was worth for a short-lived s3 instance.

That's why I was excited to discover ctrl-r. It's a built-in method of
searching history that I can remember and it'll work everywhere.

~~~
pavel_lishin
> I got tired of sshing into a new box ...

I've got a public repo of my dotfiles, so the first thing I typically do is
"git clone git@github.com:pavellishin/dotfiles.git && cd dotfiles &&
./install.sh"

After that, I launch tmux, and it's all hunky dory.

~~~
nathanstitt
Heh.. right after I posted that comment, my thought was: 'you know - the
correct answer here would have been to make an Uber command that would suck
all the configs in and install them'.

Thanks for giving me the push to do so. I think I'll take your suggestion but
put the command on a site somewhere so I can simply 'curl
<https://foo.bar/configs> | bash'

------
bradbeattie
Of similar note, <http://www.commandlinefu.com/commands/browse/sort-by-votes>
indexes a good number of these tricks.

------
nonamegiven
"Add "set -o vi" in your ~/.bashrc to make use of the vi keybindings instead
of the Emacs ones." Better to do this kind of thing in .inputrc, as:

set editing-mode vi

(or set editing-mode emacs) _because_ any application that uses readline gets
to use those settings. So for example you get command line editing in various
command line apps. bash uses readline, so you'll get that. The python repl
will give command line editing with .inputrc set as above.

    
    
      $ apt-rdepends -r libreadline6 |egrep -ve "^ " |wc -l
      8995
    

psql (postgresql) and mysql (mysql) are really handy with command line
editing.

"'ctrl-x ctrl-e' opens an editor to work with long or complex command lines"

If you've set -o vi, or set editing-mode vi in .inputrc, then on a command
line type:

    
    
      esc-v
    

(Escape key to get out of insert mode, then the 'v' key) That will open a full
vim session for editing your complex command line.

    
    
      :wq
    

exits vim and gives your command to bash to execute.

~~~
codygman
s/wq/x/g

~~~
nonamegiven
Duly noted. :)

------
Symmetry
Lots of good tricks, though I replaced a lot of them with "Use fish instead of
bash" <http://ridiculousfish.com/shell/>

------
65a
Here's one it took me a while to figure out...

Hung SSH session (such as wifi out of range)?

Type Return-Tilde-Period

~~~
philsnow
Also, if you've done this:

    
    
        you@somehost:~$ ssh otherhost
        you@otherhost:~$ some_command
    

If you hit ^Z right now, it will tell the shell on otherhost to stop
some_command and give you the shell prompt on otherhost back.

If instead you wanted to stop the ssh process and get the shell prompt on
somehost, hit Tilde ^Z (you don't have to hit a new Return, but ssh only
notices these escape sequences after a Return).

Also if you use ControlMaster and have a few xterms open all with sshs to
otherhost, and then you exit the first ssh you happened to have open, it will
seem to hang and not give you your prompt back. What's happening is that that
ssh process is the "control master" and it's still open because you've got
other sshs to the same host open. Hit Tilde & to background the ssh process
and get your terminal back.

Yes, you could also hit Tilde ^Z and then 'bg' the ssh process.

------
setrofim_
> Compile your own version of 'screen' from the git sources. Most versions
> have a slow scrolling on a vertical split or even no vertical split at all

or just use tmux

~~~
tomsthumb
Just to note, splits in tmux and screen behave differently. IIRC, in screen
you have a set of splits and fill them in with different windows (kind of how
vim thinks of viewports), so technically you can have the same window open
twice on your monitor and the other will mirror the workings of the one you
are working in right now. In tmux each 'tab'/window is a set of splits, which
act more like sub-windows, and don't really share across multiple spaces.

------
jcr
This is bad juju.

    
    
      'find . -type d -exec chmod g+x {} \;'
    

If you happen to have a malicious directory named (without double quotes):

    
    
      ".;sudo rm -rf /"
    

You'd be stuffed.

It's better practice to use the '-print0' flag of find(1) and pipe the result
into xargs(1) with the '-0' flag set. For safety, it's best to also use
'-n 1' to limit the number of arguments per invocation, '-r' to stop xargs
from running without arguments, and "-J %" so you can use quoting.

    
    
      find . -type d -print0 | xargs -0 -r -n 1 -J % chmod g+x "%"
    

I believe on some implementations '-r' is unnecessary, since they never run
the command when there is no input, but on others the command would still run
once with no arguments.

EDIT: thanks for the vim tip on for finding spelling mistakes!

~~~
neoteric
This is simply not true:

    
    
        $ mkdir '; echo woops'
        $ find . -type d -exec echo {} ';'
        .
        ./; echo woops
    

As you can see 'woops' is never echoed.

EDIT: The reason being the shell is never involved in this process, and the
shell is what is responsible for splitting commands on semicolons/newlines.

~~~
jcr
You are assuming the exact versions of the shell and find programs that you
use are the only ones that exist. It may not be a problem on _your_ exact
system, but it can be a problem elsewhere.

~~~
neoteric
I am assuming a POSIX-compliant implementation of `find`. The shell is not
involved.

FWIW, your `find -print0`/`xargs -0` is not POSIX.

~~~
jcr
Sorry, I didn't see your 'EDIT' caveat when I responded --now that will teach
me to reply too soon. ;-)

Also, it seems I failed to be clear; I'm probably too tired I suppose.

My point was there is plenty of ancient and buggy code out there. It could be
that "most", or even "many", modern unix variants have fixed a lot of the old
bugs in find(1), but if you don't have the luxury of working on a current
system, and you're not allowed to upgrade it, then plenty of bad things can
happen due to invoking a shell, handling space, quote, and delimiter
characters, and so forth.

reference:

    
    
      $ uname -a
      OpenBSD alien.foo.test 5.1 GENERIC.MP#207 amd64
    

setup:

    
    
      $ mkdir test
      $ cd test
      $ touch file1
      $ touch file2
      $ touch file3
      $ mkdir ';ls'
    

bad:

    
    
      $ find . -type d -exec sh -c {} \; 
      sh: ./: cannot execute - Is a directory
      ;ls     file1   file2   file3   test.sh
    

better:

    
    
      $ find . -type d -exec sh -ec {} \;
      sh: ./: cannot execute - Is a directory
    

also bad:

    
    
      $ find . -type d -print0 | xargs -0 -r -n 1 -J % sh -c "%"
      sh: ./: cannot execute - Is a directory
      ;ls     file1   file2   file3   test.sh
    

better:

    
    
      $ find . -type d -print0 | xargs -0 -r -n 1 -x -J % sh -ec "%"
      sh: ./: cannot execute - Is a directory
    

better:

    
    
      $ find . -type d -print0 | xargs -0 -r -J % sh -c "%"
    

best:

    
    
      $ find . -type d -print0 | xargs -0 -r -J % sh -ec "%"
    

POSIX is all great and wonderful in theory, but in practice it's no different
than the bogus Java "write once, run anywhere" claim. If a system or utility
claims to be POSIX compliant, then you're probably close, but you'll still
need to do testing and debugging.

At least some of the issues with find/xargs are mentioned in the following
wikipedia article. It's probably more clear than I am right now.

<http://en.wikipedia.org/wiki/Xargs>

~~~
neoteric
The contrived examples you've shown aren't examples of POSIX-incompatibility,
or bugs in `find` at all. You've explicitly involved the shell. Of course
trying to run every directory name as a shell command string is going to
result in executed code!

Your original argument was that given:

    
    
        find . -type d -exec chmod g+x {} ';'
    

It is possible to force code execution of arbitrary commands given a carefully
crafted directory name. The key difference in this case is that the shell is
not involved _at all_. I challenge you to find an implementation of `find`
that is broken in this way.

As a side note, it is even possible to involve the shell in the picture in a
safe way with `find`, without the use of `xargs` (and thus avoid the overhead
of setting up a pipeline):

    
    
        find . -type d -exec sh -c 'chmod g+x "$1"' _ {} ';'
    

(my contrived example is quite poor, though, since it does nothing but
introduce unnecessary shell overhead)

Modern (POSIX > 2001?) `find` implementations support `-exec {} +`, which
further reduces the number of reasons to invoke `xargs`:

    
    
        find . -type d -exec sh -c '
            for x; do
                do_foo "$x"
                do_bar "$x"
                do_baz "$x"
            done
        ' _ {} +
    

(example above shows how to make proper use of this feature with an explicit
shell invocation)

------
jofel
Instead of

    
    
      find . -type d -exec chmod g+x {} \;

you can usually use

    
    
      chmod -R g+X .

which gives additionally the group execute permission to files which have
already user/everyone execute permission.
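A sketch of that difference: capital X adds execute only where it makes sense
(directories, and files that already have some execute bit), so plain data
files are left alone. File names below are arbitrary:

```shell
mkdir -p demo/sub
chmod 0700 demo/sub             # group has no permissions yet
touch demo/data.txt demo/tool.sh
chmod 0644 demo/data.txt        # no execute bit anywhere
chmod 0744 demo/tool.sh         # owner-executable file
chmod -R g+X demo
# demo/sub      -> 0710  (directories always get g+x)
# demo/tool.sh  -> 0754  (already executable for the owner)
# demo/data.txt -> 0644  (untouched: no execute bit to mirror)
ls -l demo
```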

~~~
jff
His example changes permissions for directories only, which I believe your
example does not.

------
ciupicri
> SMB is better than NFS.

More details would be nice.

~~~
carlesfe
NFS basically breaks the client computer until the host responds, while SMB
detects the I/O error earlier and, at least, doesn't hang the terminal.

Other than that, SMB allows for permissions and a bunch of other features. It
is, basically, a more modern and robust protocol. Does NFS shine for some use
cases? Indeed. Would it be the first choice for most? Nope.

Disclaimer: my sysadmin skills are not so great, I'm talking as a user

~~~
EwanToo
NFS has 2 mount types, soft and hard.

<http://tldp.org/HOWTO/NFS-HOWTO/client.html>

Soft mounts report errors immediately, hard mounts hang.

And for permissions, NFS provides everything under the sun that you could
possibly need via ACLs. NFSv4 is a very modern protocol, much as SMBv2 is (not
SMB though, it's awful).

Linux has had NFSv4 for what, 6 years now, at least, and even v2 and v3 had
some limited ACL support?

<http://wiki.linux-nfs.org/wiki/index.php/ACLs>
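For reference, the soft/hard choice is made on the NFS client at mount time,
e.g. in /etc/fstab; a sketch (server, export path, and timeout values are
placeholders):

    
    
      server:/export  /mnt/data  nfs  soft,timeo=100,retrans=3  0  0
    

With "soft", an unreachable server eventually returns an I/O error instead of
hanging the process.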

~~~
carlesfe
That's nice to know, it seems that we have the "hard" configuration at the lab
and it's a real pain. Backwards compatibility, you know. For my mini-cluster
we use SMB and couldn't be happier.

~~~
EwanToo
Not sure what your lab's trying to be backwards compatible with (NFS has had
those mount options since at least 1989), but whatever works for you :)

It's a mount option on the client, not the server, so maybe you can change it
yourself.

<http://www.ietf.org/rfc/rfc1094.txt>

------
rburhum
I freaked out when I found out about the "screen" command several years back.
"screen" starts a virtual screen that you can detach from with "ctrl-a d";
you can log out, log in from a different machine/session, and reattach with
"screen -r". It has history, so you can run long-running commands and
reattach 3 days later to continue from where you left off as if you had been
logged in the whole time.

~~~
revscat
Yup! You might want to try tmux, though. I used screen for many years and
switched over to tmux about 2 years ago, and have been really happy with it.
It (tmux) is also maintained more regularly now; screen development seems to
have stagnated.

Either way, enjoy!

~~~
rburhum
I will check out tmux. Thank you!

------
McUsr
This was a nice, very timely page; it's sitting with my private repo in
Mercurial, and the other one at GitHub…

I discovered help <builtin> some months ago, and that was a great boon really.
Like help "test", so I didn't have to go through the rather large bash man
page.

Here is a little shell script for displaying a man page on Mac OS X (gman).
(If you then click on one of the links on the man page, it may pop up in your
default browser.)

    
    
      #!/bin/bash
      if [ $# -lt 1 ] ; then
      	echo "gman takes a man page, if found and formats it into html."
      	echo "Usage: gman [manfile]"
      	exit 2
      fi
      a=`man -aw $* |head -1`
      if test  x$a = x ; then
      	echo "Can't find $1"
      	exit 1
      fi
      # Figures out if it is a normal man page or something else (gz).
      b=`man -aw $* |head -1 |grep "gz"`
      echo $b
      if test  x$b = x ; then
      	groff -man $a -Thtml >|/tmp/tmp.html
      else
      	gzcat $b |groff -man -Thtml >|/tmp/tmp.html
      fi
      qlmanage -p /tmp/tmp.html >/dev/null 2>&1

------
daGrevis
Only one Vim tip? I'm disappointed.

Check out this resource!

<http://www.rayninfo.co.uk/vimtips.html>

~~~
carlesfe
Great resource! Thanks for the link.

Most of my vim tips are on my .vimrc (my dotfiles here
<https://github.com/carlesfe/dotfiles/blob/master/.vimrc>) and on the links on
my homepage: <http://mmb.pcb.ub.es/~carlesfe/#programming_h3>. The .txt that
op linked feels a bit out of context :)

------
runejuhl
When creating excessively long oneliners (you know, the kind that should
actually be a script, because you know you're going to find a use for it in a
few weeks), the following key combo is golden:

    
    
        ^x ^e
    

It opens up the exported EDITOR with a tmp file containing whatever is on the
command line.

Using SSH, especially on campus/in a train/other places where a wifi
connection doesn't last long, mosh is really a godsend. In places where mosh
isn't practical or available, the following combos are really good to know:

    
    
        <RETURN> ~ .    # end ssh connection
        <RETURN> ~ ?    # show available commands
    

Paired with autossh which will reconnect by itself, it really takes the pain
out of traveling while doing remote work.

Oh, and when you need to know what the decimal value of 0x65433 is, it's good
to know that bash can do stuff like that:

    
    
        $ echo $((16#65433))
        414771
    

Reading the bash man page is not a bad idea in itself...
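The same $(( )) arithmetic handles any base via the base#value syntax (a
bash/ksh feature, not plain POSIX sh), and printf converts back the other
way:

```shell
echo $((16#ff))       # 255
echo $((2#1010))      # 10
echo $((8#777))       # 511
printf '%x\n' 414771  # 65433 -- decimal back to hex
```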

------
marekrud
GNU parallel: easily substitute of file extensions + parallel execution, e.g.

    
    
        ls *.png | parallel convert {} {.}.jpg
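The {.} placeholder strips the extension. Without parallel, the same renaming
can be sketched with plain parameter expansion (convert is ImageMagick,
assumed installed; echoed here rather than run):

```shell
# ${f%.*} drops everything from the last dot onward.
for f in img1.png img2.png; do
    echo convert "$f" "${f%.*}.jpg"
done
```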

------
mikegioia
_'ssh -R 12345:localhost:22 server.com "sleep 1000; exit"' forwards
server.com's port 12345 to your local ssh port, even if you machine is not
externally visible on the net._

This one blew my mind

~~~
malloc2x
Better to use, e.g.:

    
    
      ssh -fN -o ServerAliveInterval="240" -R 2222:localhost:22 example.com
    

(And on example.com _ssh -p 2222 localhost_ )

This lets you easily keep the tunnel open long-term. Why?

    
    
      -f    Background the ssh process (don't need nohup)
      -N    Don't run any command
      -o... Make ssh do the work of keeping the session alive forever

~~~
carlesfe
But the remote host can limit your ServerAliveInterval. However, most hosts
don't close your session if there is something running. On our clusters this
is the only working solution, and I tried both, trust me.

~~~
malloc2x
If that's an issue (I've never met a server like that, not to mention one like
that where I couldn't change the setting) then wouldn't you rather use this?

    
    
      while [ true ]; do sleep 1000; done
    

Just using:

    
    
      sleep 1000; exit
    

means you can't make new connections after 17 minutes.

~~~
carlesfe
Yes, in fact I use the first one (while true; sleep; ls) but I thought the
second is more succinct. Anyone interested can make a loop. Please notice that
most of the snippets aren't meant to be copied & pasted, but rather analyzed
and understood by the user.

~~~
malloc2x
Ah, sometimes that's a hard line to draw since often the people that know
enough to understand not to take it literally don't really need the pointer in
the first place. :)

I enjoyed your list.

------
implicit_cast
More:

Bash and zsh support a surprising amount of emacs editing functionality.
Cursor navigation (M-b, M-f, C-a, C-e), text selection (C-space), copy/paste
(C-w, M-w, C-y, M-y), and undo (C-_).

------
gcv
Learn to use your shell's globbing features instead of overusing find. In zsh,
you can do 'print -l **/*.(c|cc|h|hh)' for example (I'm sure bash has an
equivalent).

~~~
oftenwrong
For me, the ** glob gets a lot of use.

~~~
oftenwrong

        ** glob

------
grn
Personally I put the following in my .bashrc:

    
    
        function pushcd {
            if [ $# -eq 0 ]; then
                pushd ~ > /dev/null
            else
                pushd "$@" > /dev/null
            fi
        }
    
        alias cd='pushcd'
        alias b='popd > /dev/null'
    

Then cd saves the history of visited directories and b navigates backwards,
e.g.:

    
    
        ~$ cd /tmp
        /tmp$ cd /usr
        /usr$ b
        /tmp$ b
        ~$

------
dunk010
Here's a really great presentation I found (probably on HN) some years ago.
<http://www.ukuug.org/events/linux2003/papers/bash_tips>

All extremely useful, my favourite being the .inputrc rebindings of up and
down to search history. Takes a little getting used to but is great once you
are (good luck to anyone else trying to use your terminal though ;))

------
jbp
As an emacs user I map "C-p" "C-n" for history search in bash

In .inputrc

    
    
      "\C-p": history-search-backward
      "\C-n": history-search-forward
    

Also I found the following settings to be very useful:

    
    
      # List the possible completions when Tab is pressed
      set show-all-if-ambiguous on
      #tab complete without needing to get the case right
      set completion-ignore-case on

------
xiaoma
One of my new favorites -- use ctrl-r history search to grab a command close
to what I want from history, tack on a character that breaks it if necessary,
and run that.

Then use fc to get that command back in a vi editing context, change what I
want and :wq

The resulting command is immediately executed. This process can be _really_
fast compared to manually constructing long piped commands.

------
FiloSottile

        * Read on 'ssh-keygen' to avoid typing passwords every time you ssh
    

He meant ssh-agent?

~~~
jabo
No, he did mean ssh-keygen, which generates a public and private key pair for
you. Run ssh-keygen on your local machine and copy the public key to
~/.ssh/authorized_keys on the server, and you'll be able to log in without a
password using ssh -i /path/to/private/key user@host

~~~
LiveTheDream
That approach is insecure, however, because anyone with the private key now
has access. When running ssh-keygen, you should add a passphrase to the key,
then add the key to ssh-agent so you don't need a password for the account,
nor do you need to type the key's passphrase constantly.

~~~
emidln
Anyone with private key access has control over my user account and has better
access than what my private key + passphrase would provide.

I understand layers (probably moreso than most), but this is something that
always bothers me a lot from a practicality perspective. My passwords are
encrypted at rest via encrypted filesystems. If you are running things on my
personal machine as my user account, I'm already being keylogged and/or am
executing arbitrary code for you. If I'm logged into somewhere via ssh (hint:
I am whenever I have a network connection), you can just scan my ssh config
and use my ssh key anyway. From there, you can probably do a lot of other
nasty stuff. ssh-agent won't really prevent this. It will prevent the malware
from working again when I reboot until I log into another remote host (which
I've established I do a lot) where the keylogger now gets me.

It's possible, but extremely unlikely, that I have completely read-only
media. I could be using my TPM device to protect from booting and executing
modified system states. Some of this might prevent you from easily
persisting the keylogger threat across reboots. I might also have a module or
something that calculates checksums on startup of critical things, have
ridiculous anti-exfil outgoing connection policies, etc that prevent all but
the most targeted attacks.

I don't have all of that in place. (In particular, to anyone generating a
profile for me, I don't build detailed outgoing packet filter rules (you are
welcome).) But what I do have in place will probably prevent me from getting
my initial passphrase keylogged if I used ssh-agent, since it's likely (although
this isn't strictly necessary) that I'm going to get attacked again after I
start logging into remote hosts. So they can't steal my password, but they do
have unrestricted access to my user account and the remote users I can log
into. That complicates things, but is still a major security failure, to the
point where them having the passphrase to my key isn't super important. I
mean, in this scenario, they already have the absolute best input vector (a
history of me logging in so they can execute attacks at the times I'm supposed
to be logging in, as well as direct access to the systems from my ip
addresses) to the point where using my ssh key from elsewhere is probably a
worse idea.

------
hackerpolicy
If you work with something like this

    
    
      /this/is/some/very/nested/directory/
    

and you need to move to almost the same structure

    
    
      /this/be/some/very/nested/directory/
    

then you might benefit from

    
    
      function bcd {
        cd ${PWD/$1/$2}
      }

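A quick sketch of the helper in action, with made-up directories:

```shell
# bcd swaps one path component for another via bash's
# ${PWD/pattern/replacement} substitution, then cds to the result.
function bcd {
  cd "${PWD/$1/$2}"
}

base=$(mktemp -d)
mkdir -p "$base/project/alpha/src" "$base/project/beta/src"
cd "$base/project/alpha/src"
bcd alpha beta
pwd    # -> .../project/beta/src
```

Note that the substitution replaces only the first match, so a component name
that also appears earlier in the path (or inside another component) can
surprise you.
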
------
siavosh
Is there a command/shortcut for storing the path(s) returned from a find
command? Currently I do something like:

find . -name somefile.txt -print

Then copy and paste the results manually into a cd command for example. I feel
like there's a much better way somewhere.

~~~
Tyr42
xargs?

Here is an example from the man page: "For example, the following command will
copy the list of files and directories which start with an uppercase letter in
the current directory to destdir:"

    
    
        /bin/ls -1d [A-Z]* | xargs -J % cp -rp % destdir
    

You can use this with find really nicely.

find . -name somefile.txt -print0 | xargs -0 -J % cp -rp % destdir

Note the -print0 and -0 flags (zero). This will use a null byte as a separator
instead of spaces to avoid failing on files with spaces in their names.
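
Note that -J is a BSD xargs flag; GNU xargs spells the placeholder option -I.
A sketch of the same idea in a throwaway directory:

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir destdir
touch 'a file.txt' 'another file.txt'

# -print0/-0 pass NUL-separated names; -I{} substitutes each one into cp.
find . -maxdepth 1 -name '*.txt' -print0 | xargs -0 -I{} cp {} destdir/

ls destdir/    # both names survive, spaces and all
```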

------
duggieawesome
:set spell is pretty neat. I won't use it when coding, but it's great when
writing blog posts.

~~~
nathan_long
If you put that in, for example, `~/.vim/ftplugin/markdown.vim`, it will be
used on all markdown files. (`../ftdetect/markdown.vim` controls how Vim
determines that a file is markdown.)
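
For example, a minimal filetype plugin might look like this (`setlocal` keeps
the settings buffer-local; the spelllang value is just an example):

```vim
" ~/.vim/ftplugin/markdown.vim
setlocal spell
setlocal spelllang=en
```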

------
D9u
_- Use 'apt-file' to see which package provides that file you're missing_

Ummm, "apt-" isn't a "Unix trick..." It's specific to linux distros which use
the "Aptitude" package manager.

Linux != Unix

~~~
iso-8859-1
Aptitude is just a front-end to APT. And APT (the Advanced Package Tool) is
just an interface to package managers like dpkg or RPM.

------
dmackerman
Did a quick and dirty format for easy reading:
<https://gist.github.com/dmackerman/5117156>

------
Ensorceled
Weird, I've been using vi/elvis/vim/MacVim as my primary editor since 1984 and
I _hate_ vi key bindings for shell; I _always_ use the emacs bindings.

~~~
AaronBBrown
I'm the opposite. Every time I log into a server that doesn't have my bashrc,
I immediately have to `set -o vi` or else I'm useless.

~~~
Ensorceled
That's why I love *nix ... options!

------
SEJeff
I can't believe this post has sort | uniq when GNU sort (important
distinction, the BSD version and hence OSX can't) has a -u flag, so sort -u ==
sort | uniq
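
The equivalence is easy to check with a throwaway list:

```shell
printf 'pear\napple\npear\n' | sort -u
# apple
# pear
printf 'pear\napple\npear\n' | sort | uniq
# apple
# pear
```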

~~~
lri
OS X does have sort -u. -u is also in POSIX
([http://pubs.opengroup.org/onlinepubs/9699919799/utilities/so...](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sort.html)).

------
init0
Md version for readability <https://gist.github.com/hemanth/5109020>

------
artursapek
Regarding set -o vi, which I love, is there a way to have it load my vimrc as
well? I have custom bindings I would love to have.

------
appplemac
tar czf - . | ssh destination "tar xz"

To pipe all the contents of your current directory (including dotfiles) to the
destination machine.

Greetings from LSI-UPC!

~~~
cowmix
I would change this to:

tar czf - . | ssh destination "cd /remote/dir; tar xz"

~~~
zengargoyle
You will one day punch yourself for not being safe.

    
    
        tar czf - . | ssh destination 'cd /remote/dir && tar xzf -'
    

Test that `cd` or one day you'll end up extracting in the wrong place.
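
The same tar-to-tar pipe can be tried locally; a sketch with made-up paths,
using -C to pick each tar's working directory:

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir src dst
touch src/hello.txt

# Pack src on one side of the pipe, unpack into dst on the other.
# Swap the right-hand side for: ssh destination 'cd /remote/dir && tar xzf -'
# to get the remote version.
tar czf - -C src . | tar xzf - -C dst

ls dst    # -> hello.txt
```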

------
DanHulton
sudo !!

Perform the previous command as root. Great if you continually forget which of
your scripts need to be run as root and which don't.

------
emmelaich
I see people often have aliases for ls -ltr | tail but ls -lt | head might be
a lot faster with a lot of files.
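
Both spellings pick out the same files; head can just stop the pipeline early
instead of printing the whole listing. A quick sketch with made-up files:

```shell
dir=$(mktemp -d) && cd "$dir"
touch old.txt
sleep 1
touch new.txt

ls -t  | head -n 1    # -> new.txt  (newest first, take the top)
ls -tr | tail -n 1    # -> new.txt  (oldest first, take the bottom)
```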

------
bramswenson
SMB is better than NFS in Unix Tricks? Shame Shame Shame. Possibly just wrong.
Like the rest though. Thanks!

~~~
carlesfe
I know, that's controversial... but I've had zero problems with SMB and many
with NFS. Just my two cents.

------
marvwhere
I've been working with Unix for years, but only on my server, and for a
little more than a year now on OS X two days a week.

And for a month now I've had my first MacBook of my own. Totally helpful for
getting in touch with some of the magic in the console.

Thanks to everybody who makes my working life easier =)

------
globalpanic
find . -name "file-wildcard" -exec "string" {} ";" -print

is something I use a lot - plus xargs sometimes

~~~
martinced
I use variations on "find" so often that I've created several little commands
"fij" (find in any java source), "fit" (find in any text / org file), "fix"
(find in any XML file) etc. which I use all the time

~~~
prakashk
You might want to try `ack` then (<http://betterthangrep.com/>).

On Ubuntu/Debian systems, it is packaged as `ack-grep`.

------
rogerd
I really could have used that "chmod g+X * -R" the other day! Thanks for the
tips.

------
andr3w321
find . -name "*" | xargs grep "hello" 2>/dev/null

Searches all the files in the current directory and all subdirectories for
files that contain "hello". Add the -l option to grep to display only the
filenames instead of filename and match.
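
A sketch of both forms on made-up files (using -type f instead of -name "*"
so directories are skipped up front):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir sub
echo 'hello world' > a.txt
echo 'goodbye'     > sub/b.txt

find . -type f | xargs grep 'hello' 2>/dev/null
# ./a.txt:hello world
find . -type f | xargs grep -l 'hello' 2>/dev/null
# ./a.txt
```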

------
woodchuck64
zsh: setopt extendedglob allows you to use numeric ranges on globbing with <
>:

mv p1080<100-300>.jpg folderx/

to move p1080100.jpg through p1080300.jpg to a new folder.

Still not available on bash or more common shells?

~~~
LukeShu
Bash:

    
    
        mv p1080{100..300}.jpg folderx/

~~~
1amzave
Though that's not strictly a glob, but brace expansion (it will expand to all
the numbers in that range regardless of whether or not files with those names
exist). bash does have 'shopt -s extglob', which enables a number of useful
globbing extensions, though I don't believe there are any numeric ones among
them.
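
The difference is easy to see with echo in bash, since brace expansion
produces every name in the range whether or not the files exist:

```shell
echo p1080{199..201}.jpg
# p1080199.jpg p1080200.jpg p1080201.jpg
```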

------
mixedbit
ack-grep is great for searching source code

~~~
sepeth
Yesterday, I discovered the_silver_searcher:

<https://github.com/ggreer/the_silver_searcher>

I think you might like it.

------
UNIXgod
% :(){ :|:& };:

