Actually the new top comment blew me away. I'm in the 20 year veteran category and was also oblivious. Quoted:
disown - bash built in.
You know how you always start that mega long rsync in an ssh session from your laptop and then realize you have to go offline halfway through? You forgot to start it in screen or nohup, didn't you? You can pause it (ctrl+z), background it (bg), and then disown it so it is protected from SIGHUP when you quit your ssh session. Bash (and, to be fair, zsh and others have copied much of this) has some wonderful job control features most people are completely oblivious of - even veterans of 20 years.
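Spelled out as a terminal session, that rescue looks roughly like this (the job number and paths are made up):

  $ rsync -avz /big/dataset/ remote:/backup/    # oops, started outside screen/nohup
  ^Z
  [1]+  Stopped                 rsync -avz /big/dataset/ remote:/backup/
  $ bg
  [1]+ rsync -avz /big/dataset/ remote:/backup/ &
  $ disown %1        # now the shell won't send it SIGHUP when you log out
  $ exit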
disown is great. However, I don't know if it's possible to start up a new shell, then bring back that process to the foreground. 'jobs' will return no entries, and fg doesn't take a PID parameter. Any ideas? It would be brilliant if that were possible with bash alone.
reptyr: https://github.com/nelhage/reptyr is Linux specific & involves a fairly gruesome abuse of ptrace() to attach a running process to your current pty. It won't put the process in the shell jobs list though.
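Usage is about as simple as it gets (the pid is illustrative; depending on the kernel's ptrace restrictions you may need root):

  reptyr 12345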
Which according to reports has the benefit of being maintainable.
GNU screen, which I love and use to obscene extents daily, has an exceptionally hairy codebase. That said, I've got to get my tmux on and learn how to do screen equivalents with it....
I keep seeing people comment on the codebase of screen being bad vs tmux, but I'm not sure why that is an issue. Are there any features screen is missing that haven't been added to its codebase? The vertical split patch was added a long while back, and even though I'm sure tmux has a smaller memory footprint, I've never had an occasion where screen was an app I worried might impact my server's memory usage.
Since it is installed more or less everywhere, I have no desire to switch to tmux.
FWIW, I've switched to tmux but think tmux's implementation of split screen is broken and desperately miss screen's.
Screen lets each part of a split screen select a different existing window. Tmux makes both parts of a split part of a single window, so you can either have two things going in one split, or have something else going on in another window, but if you suddenly decide you want two existing windows side by side, you can't do it.
If there are problems found with screen (and it's a widely used admin tool that may be left running on remote servers, hence a high-value attack vector), it's going to be harder to find and fix the problem.
If there are features to be added, they'll be harder to add to screen than tmux. Which means that with time, tmux stands a higher chance of becoming more featureful.
Both argue in favor of tmux in the long run, though there's no immediate need to ditch screen. As you note, screen's very widely available, which is a feature. Then again, so was telnet 10 years ago.
Yes. screen will not run in terminals with long names, like rxvt-unicode-256color. This is due to a fixed-length buffer for storing the terminal name. Simple solution: make the buffer bigger. I did that and submitted the patch to Debian and upstream, and last I looked, it was still broken. I switched to tmux and never looked back.
OK, I looked back and it's fixed now: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=621804. 6 months is a long time to wait. (Not that I'm criticizing Debian. It would never be fixed if it wasn't for the distributions patching it.)
I'm still a die-hard (and vertical-split-patched) screen user, but it seems tmux is the only suggested alternative. The combination of the more orthogonal dvtm[1] (for window management) and dtach[2] (for screen's killer feature: detaching) is flexible and compelling and sadly doesn't seem to have been mentioned here.
I currently use dvtm to split terminals and connect to multiple screen sessions in a single window. The downside of dvtm is it takes the dwm[3] approach of using #defines rather than configuration files.
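dtach itself is tiny; a session is just a unix socket (the socket path and command here are illustrative):

  dtach -A /tmp/work.sock bash   # attach to the session, creating it if needed
  # detach with Ctrl+\ (the default escape), then later:
  dtach -a /tmp/work.sock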
The only solution I've seen is to start up gdb with the process and point the process's 0, 1 & 2 file descriptors to the shell's STDIN, STDOUT and STDERR, then detach the process and quit gdb.
I can't remember the exact details, but that should be enough to set you up for some googling.
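A sketch of that recipe (the pid and pty are made up; the 0 and 1 passed to open() are the numeric O_RDONLY and O_WRONLY flags):

  gdb -p 12345
  (gdb) call close(0)
  (gdb) call open("/dev/pts/4", 0)
  (gdb) call close(1)
  (gdb) call open("/dev/pts/4", 1)
  (gdb) call close(2)
  (gdb) call dup(1)
  (gdb) detach
  (gdb) quit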
These are actually two different "rename" commands. bgaluszka's version is usually found on Fedora/RHEL/etc. systems coming from the util-linux package, while your version is usually found on Debian/Ubuntu/etc. systems coming from the Perl package.
I prefer the regex one, of course, especially since it takes proper Perl regexes :)
You have to be careful of this though and probably avoid using this command in shell scripts.
edit: You can just stick Debian's rename at $HOME/bin/rename.pl and call it a day; this command is useful in the terminal since it's less typing than a for loop.
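Side by side, the two flavours look roughly like this (the .htm to .html rename is just an example):

  # util-linux rename (Fedora/RHEL): rename <from> <to> <files...>
  rename .htm .html *.htm

  # perl rename (Debian/Ubuntu): rename <perl-expression> <files...>
  rename 's/\.htm$/.html/' *.htm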
Nobody knows this: the bash extension <(cmd) to pass the output of a command, as if it was a file, to another command. Example to diff the list of files in 2 directories (a and b) without using temporary files:
$ diff -u <(cd a && find . | sort) <(cd b && find . | sort)
Internally, bash creates a pipe and passes /dev/fd/XXX to the main command.
Which means, for example, that you could get that listing in any format, so you could spot changes in permissions or ownership if you included them in the listing.
Yes, you are because the example was contrived. The point is that process substitution (as it is called) is available on systems that support named pipes. You could think of better uses for it.
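A couple of arguably less contrived uses (the hosts and filenames are made up):

  # diff a config file against the copy on another machine, no temp files
  diff /etc/ssh/sshd_config <(ssh otherhost cat /etc/ssh/sshd_config)

  # feed two sorted streams to comm(1)
  comm -12 <(sort list-a.txt) <(sort list-b.txt)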
sed's regular expressions are different from those perl uses (even when using the flag (gnu sed?) that tells sed to use extended regular expressions). Especially if you "live in perl", that can be a problem.
Does sed allow in-place file replacement, or do you need to redirect to a different file then cp/mv it over the original? I haven't used it in many years, but sed didn't used to allow this.
Note: "cmds" is one or more s commands. For other commands, if it's a big file, I will just pipe through less (still no temp file) and then save the buffer (to overwrite the original file), instead of using this hack.
A hack for in-place editing, for old school sed (no -i):
sedi(){
case $# in
0) echo usage: sedi cmds file;;
2) sed -an '
'"$1"';
H;
$!d;
g;
w '"$2"'' $2;;
esac;
}
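For what it's worth, any reasonably modern sed (GNU, and BSD with a slightly different calling convention) supports in-place editing directly, so the hack above is only needed on seds without -i:

  sed -i 's/foo/bar/g' file        # GNU sed: edit in place
  sed -i.bak 's/foo/bar/g' file    # same, keeping a backup as file.bak
  sed -i '' 's/foo/bar/g' file     # BSD/macOS sed: the suffix argument is mandatory (may be empty)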
C-R was one of a few watershed moments for me. I discovered it over a decade ago as a cow-orker was flying through commands way faster than was humanly possible. "How did you do that?" And so I learned.
It, along with other readline functionality, shell functions, substitutions, expansions, scripts, and the bazillion utilities are what make Linux (and Unix) shell so much more than just "it's like DOS, right?".
Yeah, kind of, in the same way that ... a cross between a Prius, a Mack Truck, a Lamborghini, an F-16 fighter, a helicopter, and a freight train is like a pushcart. It's an interface that helps you manage your computers, the things on them, and the things they're connected to. It's also a hugely efficient and effective way to process information and issue commands and controls in a useful way.
I switched from screen to tmux a year or two ago, after decades of using screen, and I'm perfectly happy with tmux, as it fits my needs.
However, it's not true that screen has no advantages over tmux. Each program has its own strengths and weaknesses. For instance, screen is scriptable via its Lua bindings, tmux is not. Screen has zmodem support built in, tmux does not. There are probably many other examples, since screen has a bazillion features which have been developed over decades, while tmux is relatively new (and its developers don't seem to care to duplicate every one of screen's features).
tmux is scriptable from the shell via its command-line arguments, so any language can script it.
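For example, an entire session can be assembled from a plain shell script (the session name and commands here are made up):

  tmux new-session -d -s build          # create a detached session called "build"
  tmux send-keys -t build 'make -j4' Enter
  tmux split-window -h -t build         # add a pane beside it
  tmux attach -t build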
It's certainly true that screen does have tons of small old features... but I sort of swept that in with "unless you're using HP-UX or something." zmodem support isn't exactly relevant to the overwhelming majority of programmers, but yes, I admit these things are still relevant to a few niches.
"tmux is scriptable from the shell via its command-line arguments, so any language can script it."
I'm not sure if tmux is scriptable to the same extent that screen is via screen's Lua bindings, as the latter probably exposes many of screen's internals, while scriptability of tmux is limited to what you could do with its command line arguments. I haven't researched the matter, though, so I may well be wrong.
Perhaps not, but tmux's bindings are nearly all backwards-compatible with screen's. Well, the default escape char is ctrl-B but I (and I think most folks) just remap it to ctrl-A immediately.
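The usual ~/.tmux.conf snippet for that remap:

  set-option -g prefix C-a
  unbind-key C-b
  bind-key C-a send-prefix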
I always rebind it to ^z. Since I never need to put a process in the background (I just open a new screen instead) when running screen, it never bothers me that that combo is a bit harder to type.
i use the ratpoison window manager (it works a lot like tmux/screen, but for x11) and have control+a set as its prefix key. i use tmux for some long-lasting remote sessions, so to send a control+a to the shell running under tmux, under ratpoison, i type control+a, a, control+a, a. yes, i do it a lot. no, i'm not going to change it.
I actually only learned that ctrl-A works in bash 3-4 months ago, and haven't yet picked a new screen/tmux escape and retrained to it. Mainly because these days, I only use screen when I WFH or am traveling, when I ssh into my workstation. Almost nobody has ssh rights to a production machine, so I'm actually struggling to think when I would use it from my workstation.
You'll want to quote .java so the shell doesn't interpret it as a glob, and find doesn't use GNU style flags. You can also pass wc the filenames directly like this:
That has different semantics (it prints one line per file) and can have line length limitations. This has the same semantics as the parent's version, but without line length issues.
Note that the + may not be portable. I know it doesn't work on older Solaris versions (<10) and HP-UX (from a couple years ago when I last touched it).
Also, find may or may not split the files it finds into multiple invocations.
For find, you need to use single quotes so your shell doesn't automatically expand the .java glob into the names of all the files. Also, find uses one dash for parameters, and can take an exec parameter:
find . -name '*.java' -exec wc -l {} \;
The {} means "replace with filename", and the semicolon terminates the -exec command (it's escaped so your shell doesn't eat it). This will run wc -l <filename> for every file that matches.
The -regex flag lets you do the same thing as -name with regular expressions.
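With GNU find, for instance (-regex matches against the whole path, not just the basename):

  find . -regex '.*\.java' -exec wc -l {} \;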
The problem with this is we run n different wc's, as opposed to just one with all the parameters. We can use xargs to fix this:
find . -name '*.java' | xargs wc -l
This even gives you a total!
The really simple solution (which is probably best) is to just use shell globbing:
wc -l *.java
While find and xargs are definitely useful, the easy solution here works better
You can skip xargs altogether with the plus sign instead of semicolon.
-exec utility [argument ...] {} +
        Same as -exec, except that ``{}'' is replaced with as many
        pathnames as possible for each invocation of utility.  This
        behaviour is similar to that of xargs(1).
The problem with just wc -l *.java is that it doesn't recurse into subdirectories.
zsh's extended globbing support makes the use of find unnecessary in most cases. See the "extended globbing" section of the link below[1] for some examples.
zsh also has the 'zargs' command, which works something like xargs, except that it gets its arguments from the command line (often in conjunction with extended globbing) rather than from stdin.
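A rough zsh equivalent of the find/xargs examples elsewhere in this thread (zargs ships with zsh but has to be autoloaded first):

  wc -l **/*.java                # recursive glob; no find needed

  autoload -U zargs
  zargs -- **/*.java -- wc -l    # roughly find | xargs, driven by the glob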
You might also be interested in reading about the "useless uses of cat"[2]
Your approach works better than backtick expansion, since it won't run into 'command line too long' issues; but it will still break on "weird" file names (e.g., file names with spaces in them). The correct way to do this is
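presumably the standard null-delimited pipeline:

  find . -name '*.java' -print0 | xargs -0 wc -l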
BTW, recent versions of linux don't have a fixed limit on command line length.
The command line can be as large as available memory (for a single process). (Although the ratio between command line length and allocated memory is not obvious, since it depends on the number of words in the command line.)
xargs can still run you into the 'too many args' issue. You'll want to use this:
-n max-args, --max-args=max-args
        Use at most max-args arguments per command line.  Fewer than
        max-args arguments will be used if the size (see the -s
        option) is exceeded, unless the -x option is given, in which
        case xargs will exit.
Only if there's a bug in your xargs. POSIX states:
The xargs utility shall limit the command line length such that when the command line is invoked, the combined argument and environment lists (see the exec family of functions in the System Interfaces volume of POSIX.1-2008) shall not exceed {ARG_MAX}-2048 bytes
pv - very useful to put in the middle of a series of piped commands to provide feedback on what, if anything, is happening.
I could've used this many times to catch cases where the amount of work being performed didn't match my expectations; usually an indication of an incorrect assumption or mistake I had made. Instead I sat there waiting for something way longer than I should've, only because I had a bad feedback loop.
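A couple of illustrative uses (the filenames and host are made up):

  pv huge.log | gzip > huge.log.gz                              # progress bar and ETA for the compression
  tar cf - ./project | pv | ssh backuphost 'cat > project.tar'  # throughput in the middle of a pipeline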
Many comments seem to be shell- and bash-related. Perhaps the thread exposes how difficult to read the bash manpage is. On my system, the bash 4.1 manpage is 41026 words long, or about 80 pages. Manpages are most useful to me when they are short and to-the-point so I can find what I am looking for before I lose interest in skimming through an 80 page book.
Yep, a short list of useful one-liners would go a long way. Often, your use-case will be a straight specialisation of one of the examples, and if not, chances are you can cobble together something from several examples. I'd like to have an auxiliary help command for this, e.g. example ffmpeg in addition to man ffmpeg (another 18k-word manpage).
Edit: This works rather well:
# usage: cmdfu <search terms> -- queries commandlinefu.com and colours the comment lines
function cmdfu() {
    curl --silent "http://www.commandlinefu.com/commands/matching/$*/$(echo -n "$*" | openssl base64)/plaintext" \
        | sed "s/\(^#.*\)/\x1b[32m\1\x1b[0m/g"
}
Another alternative is Cheat (http://cheat.errtheblog.com/), which is essentially a CLI-accessible wiki of usage examples. It was originally focused on Ruby (and still is to an extent), but it gives decent results a lot of the time.
You could try the OpenBSD man pages. One of OpenBSD's must-haves is good documentation. If I'm looking for an example and the Linux man doesn't have one, I'll often google for "openbsd man xxxx".
Of course, watch out for differences in the utility. OpenBSD (usually) won't have any GNU extensions, for example.
Same here, most man pages seem pretty good at documenting all the available options, but are pretty useless at explaining how the tool should actually be used.
I'd like to see someone dump all the common man pages into a public wiki so people could flesh out the usage examples and suggest alternative tools.
... gives you paginated manpages. I get about 96 pages for bash 4.1.5.
man -Tps bash > /tmp/bash.ps && gv /tmp/bash.ps
... gives you the manpage pretty-printed as postscript output (assumes you've got the gv ghostview viewer installed). Or you can use a PDF converter (e.g.: ps2pdf) and read with your preferred PDF viewer.
This version runs about 70 pages for the same manual.
By reducing the needless complexity that it implements. The purpose of a Unix shell is to interpret a simple language wrapping syscalls so the user can easily give instructions to the kernel. There are several such shells whose source code is smaller than the bash manpage.
13 pages of the 80 page manual are devoted to readline and programmable completion. Readline and programmable completion could be removed from the shell, and their functional equivalents moved to a more logical place in the terminal emulator. Plan 9 does this, and it works well, and people seem to like it and the flexibility it gives.
24 pages are devoted to builtin commands, many of which are unnecessary duplicates of regular commands, and several unnecessary altogether.
3 pages are about weird parameter expansions, all of which can be done more intuitively with sed, except for the array one, which can be done with awk.
And there are many smaller things as well, like biffing or arithmetic evaluation, which can be done with dc or whatever. You could try moving history out of the shell and into the terminal emulator as well.
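To make the parameter-expansion point concrete, here is one of the expansions in question next to the external-tool equivalent (filename is illustrative):

  file=report.txt
  echo "${file%.txt}"               # parameter expansion: strip the suffix
  echo "$file" | sed 's/\.txt$//'   # the same thing with sed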
Responding that bash itself is too complex is skirting the question. The original post made it sound like the bash manpage was needlessly complex within the constraints of bash's current complexity.
The 'help' command is exactly what you're looking for -- e.g., 'help disown' will describe the 'disown' command. Definitely beats searching the entire bash man page for the command you want (which is what I did for far too long, until I found out about 'help').
This is how I learned the unix shell back around 2000ish. Aside from giving excellent insight into zsh, it also gives many good hints and notes about unix shells in general.
The first two or three chapters of that guide are pretty good, but then it gets bogged down in obscure minutiae to such a degree that you start to feel almost like you're just reading the zsh man pages again.
The guide is in some serious need of a good editor to make it more concise and better organized. It also needs many more simple, practical examples. Unfortunately, it hasn't been updated since 2002, which also makes it a bit obsolete, since zsh development has been very active since then and zsh is now on to a new major version.
Overall, the guide is a nice try, but zsh needs more and better documentation if it's not going to overwhelm all but the most dedicated users.
On my system, the info documentation for bash 3.1, not including indices, is 53610 words, or 105 pages. The content is identical to that in the manpage. In addition, you have to read it in a clunky, unintuitive, and un-Unixy browser. Far from short and to-the-point: I nearly lost interest just trying to figure out how to coax info into sending its output to stdout so I could pipe the 105-page book to wc. The info documentation is not, in any way I can measure, better than the manpage. Both the info documentation (and I use that term loosely) and the manpage are more noise than signal.
Info pages are written as a single document, but displayed by the 'info' or 'pinfo' readers in multiple pages.
You can simply view the original file (it has minimal markup) in a pager for a single-file view. I generally write an "info" script or function to replace the info viewer with something more useful.
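For instance, GNU texinfo's standalone reader can dump the whole manual to stdout, so a normal pager works:

  info --subnodes -o - bash | less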
Or, for another alternative, w3mman, which is bundled with the w3m browser. Tabbed manpage browsing in the terminal with URLs converted into links and "foo(1)" converted into a link to the man page for "foo". Don't leave home without it.
On my MacBook the manpages scroll hellishly slow and there are no "Page Down" keys (maybe there is some key combo? I don't know). I end up googling for the manpages on the web instead most of the time...
On the MacBooks I've seen, Page Up/Down is usually Fn+Up Arrow/Fn+Down Arrow. I don't know if this is implemented in software or hardware, though, so if you're running *nix, you might be out of luck.
I was taken aback by one commenter who said the command he wished he'd known about for years was man, which he had only just discovered. "All those years of googling, wasted."
What happened that it's possible for someone to not know that it exists?
Google is what happened. Not to date myself, but when I was first learning (BSDi) UNIX Google didn't exist. The first thing the sys-admin who helped me said was, "type 'man'."
These days most people just know that if they have an unanswered question, ask the Google.
> The first thing the sys-admin who helped me said was, "type 'man'."
And that also happened. It used to be virtually impossible to jump in alone, because someone had to create an account for you. It's now possible to start in Unix with no one's help. As long as you can figure out how to burn an iso to CD, you're in. The installation gives you an account, with sudo.
After you install CrunchBang, it pops up a terminal the first time you log in, and allows you to make additional choices that weren't made during install.
I think all distros should pop up a terminal upon first boot and direct you to read "man man."
You can learn a lot by searching the web, so many folks no longer learn the basics by reading the first chapter of an intro to Unix (or shell) book. It's still surprising though.
If you search for help with command line utils you are almost guaranteed to get some results that are man pages online, so the person who said that probably has read man pages without realizing it, and without the convenience of the man program.
Quite often asking a good question to knowledgeable people can give you a tested and well working solution in a fraction of the time it would take to search and try and fail and try a working solution by yourself.
For me, asking the question and then still trying to figure it out myself has often worked well. Often I come to a solution, post it (on IRC), and then later some guru responds with "try this instead" or "with that flag it could do something better".
> Quite often asking a good question to knowledgeable people can give you a tested and well working solution in a fraction of the time it would take to search and try and fail and try a working solution by yourself.
The pool of knowledgeable people decreases with each question asked, especially if you're not making an effort to at least try to do it yourself.
I personally find man and info very cumbersome and hard to read. I usually google for things even when I know full well that it's covered by man, and where, because the google results are normally a lot easier for me to read and figure out than the manpages.
(or after you start emacs, M-x server-start). This starts up the emacs server, so that it preserves your open files, and you can start new emacs instances immediately with emacsclient. (I actually symlink emacsclient to vi since it loads as fast as vi, which was one of the weakest points of emacs for me.)
Very useful if you are using a remote box (like a vps) as your dev environment.
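With the server running, the client side looks like this (the alias mirrors the vi symlink trick described above):

  emacsclient -t notes.txt      # open the file in the running Emacs, in this terminal
  emacsclient -n notes.txt      # hand the file off and return immediately
  alias vi='emacsclient -t'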
$ vim -h | grep -i server
--remote <files> Edit <files> in a Vim server if possible
--remote-silent <files> Same, don't complain if there is no server
--remote-wait-silent <files> Same, don't complain if there is no server
--remote-send <keys> Send <keys> to a Vim server and exit
--remote-expr <expr> Evaluate <expr> in a Vim server and print result
--serverlist List available Vim server names and exit
--servername <name> Send to/become the Vim server <name>
$
It's not a command, but it is still useful. When you are typing a long command and realize that you need to run another command (or set of commands) first, you usually hit Ctrl + c to cancel it, and by default the cancelled line is not saved in the history. If instead you put # in front of the command (commenting it out, which you can do quickly with Ctrl + a, then #) and hit Enter, you can later use Ctrl + r to get it back. I'm sure there are other ways to save the command in the history, but this is a simple way to get the same effect.
What I usually do in that situation is Ctrl + u (delete all the way back to the start of the line), then type the intermediate command, then Ctrl + y (place the deleted partial line back on the prompt, with my cursor at the end where it was).
Even better: just type \ (line continuation) at point that you realize that you need to do something else first. Then hit Ctrl-c to cancel once you get to the new line.
every developer should know sed and awk. it is amazing how many devs write cumbersome scripts or little programs to handle tasks that sed and awk were built for.
my ~/bin folder has ~50 different little scripts that handle all sorts of small tasks.
I code in Perl at my day job, and I still use awk and grep. Sometimes it's quicker to do it in awk than perl, especially when you're doing it on the command line and have to figure out quotes.
I think I could do the split into an array and then take the third element, but I'm just trying to give an elementary example. When you want to do more things to this argument you necessarily have to grow this, and eventually I got to the point where I had to start escaping quotes. That's rather difficult to read when you're just trying to do something quick on the command line and you mismatch a pair.
If you frequently connect through VPNs to other hosts, often enough the SSH connection times out and just sits there, taking no commands. Typing ~ and then . at the start of a line will kill the session (it's an openssh escape sequence).
Other frequently used tools: awk, sort, uniq, tail, find, grep.. they alone make the shell a really powerful tool
Although sort(1) handles this for you as you mention, the general problem in the shell of wanting to overwrite your input file with the output file, but not being able to redirect to it for the reason you mention, is solved with sponge. Rather than:
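redirecting back onto the input file (which truncates it before it is even read), sponge, from the moreutils package, soaks up all of its input before touching the output file:

  sort file > file          # wrong: the redirection truncates file before sort reads it
  sort file | sponge file   # sponge buffers everything first, then overwrites file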
alt-. or (ESC .) does something similar, but can be pressed repeatedly to keep going back further in history in case the argument you want isn't on the most recent command.
!! is short for the previous command, but you can do !-2$ to get the last argument from the 2nd last command, or !cp$ to get the last argument from the most recent cp command. Or !cp:2 to get the 2nd argument to the most recent cp command. !# refers to the current command being entered (in zsh, don't know about bash).
Basically I second Aissen's recommendation to read up on history expansion. Super useful.
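A few concrete examples (the paths are made up):

  $ cp notes.txt /backups/projects/
  $ ls !cp:2        # expands to: ls /backups/projects/   (2nd word of the last cp command)
  $ echo !-2$       # last argument of the command before the previous one
  $ !!              # rerun the previous command as-is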
In bash: cp /dir/dir/dir/dir/filename !#:^:h/newfilename (Applies to more than cp) Strips cp & repeats previous dir tree minus /filename, plus /newfilename.
This can be mimicked by brace expansion:
cp /dir/dir/dir/dir/{filename,newfilename}
Is this only in zsh? I've been recommending this and not marking it explicitly as a zsh thing.
alt-. works too. In the shell ESC <foo> is the same as alt-<foo>. The difference being that with alt-<foo> you hold alt and press <foo>, then release them both. with ESC <foo> you press escape, release, and then press and release <foo>.
uptime. I had a phone screening interview with Google and it was the answer to one question. I answered sar, which is much more powerful, and top, but the caller wasn't technical and I probably missed a checkbox.
For me, it was using 'cd' without argument. (It takes you back to your ~). Also, learning the various options of grep (-v, -i, -A, -B) made its use really more powerful.
cd - to change back to whichever directory you were just in. Every once in a while most of us find ourselves facing someone's "how could you not know that?" look.
I wonder, how do you remember all the command line options?
Recently I had this thought: maybe those people who can remember the options actually are autistic (I don't know the proper word; that thing many geeks are deemed to be), and I am not. Remembering command line options might be like knowing large prime numbers within seconds.
But I am also interested in memory techniques like the method of loci. Perhaps some good system for memorizing command line arguments could be devised...
Instead of kill -9 (a nuke), you can often get results with just kill, kill -INT or kill -HUP. (The advantage is that the "weaker" signals allow your runaway program to do some cleanup, but kill -9 doesn't.)
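In practice (the pid is illustrative):

  kill 12345          # default SIGTERM: a polite request to exit
  kill -INT 12345     # the same signal Ctrl+C would send
  kill -HUP 12345     # many daemons re-read their config on SIGHUP
  kill -9 12345       # SIGKILL: the nuke; no cleanup handlers run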
Thanks for letting me know. It's still probably easier to use the Wayback Machine since all of the site's internal links point to http://sial.org/ so browsing within the site is very painful. Still, I'm glad to see the content is all still online. I may just mirror the whole thing (there's tons of good reference material in there, especially for Perl, but also for Unix generally).