I was surprised to see `$()` missing from this (otherwise quite extensive) list. There are a few commands listed which employ it, but it absolutely deserves its own entry.
That and `readlink -f` to get the absolute path of a file. (Doesn't work on macOS; the only substitute I've found is to install `greadlink`.)
And `cp -a`, which is like `cp -r`, but it leaves permissions intact - meaning that you can prepend `sudo` without the hassle of changing the ownership back.
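A rough illustration (paths hypothetical):

sudo cp -r ~/project /srv/project   # copies end up owned by root; you'd have to chown them back
sudo cp -a ~/project /srv/project   # ownership, modes and timestamps survive the copy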
I never see `lndir` on these lists either. It makes a copy of a directory, but all of the non-directory files in the target are replaced with symlinks back to the source, while directories are preserved as-is. That means when you `cd` into it, you actually land in a copied structure of the source directory rather than in the source directory itself, as would be the case if you just symlinked the source folder.
Once inside, any file you want to modify without affecting the original just needs you to turn the symlink into a regular file, which an in-place edit like `sed -i '' $symlink` will do. There you have it: effectively a copy of your original directory, with only the modified files actually taking up space (loosely speaking).
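Roughly, the workflow looks like this (paths hypothetical; note that `sed -i ''` is the BSD form, GNU sed takes a bare `-i`):

lndir ~/music ~/music.shadow          # directories are recreated, files become symlinks
cd ~/music.shadow
sed -i '' -e 's/old/new/' tags.txt    # the in-place edit replaces this one symlink
                                      # with a private regular file; everything
                                      # else still points back at ~/music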
Other languages have this too. In fact it's a C system call on Linux (readlink(2)), though it only resolves one level of indirection at a time.
This is useful if you run a script and want to use files contained in the directory where the script is located; see the sketch after the use cases below. This comes up if you have a symlink to the script in some /something/something-else/bin directory.
Use cases:
Script needs its own location for e.g. configuration files.
User of the script needs the location for e.g. another script that needs to be run that isn't in the search path.
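A minimal sketch of the usual pattern (the config file name is made up):

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")   # resolves any symlinks back to the real script
. "$SCRIPT_DIR/settings.conf"                 # hypothetical file living next to the script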
FWIW `realpath` on macOS should be functionally equivalent to `readlink -f` - particularly if you ignore all the other functionality `readlink` provides.
Nice, thank you! I did not know about `realpath`. Piping absolute paths to the clipboard is a pattern I use a lot, so I will get substantial mileage out of this tip.
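For instance, on macOS (swap in xclip or wl-copy on Linux):

realpath ./some/file.txt | pbcopy   # absolute path straight to the clipboard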
When I need to shuffle around and/or rename my media files, for instance, it's risky to operate on the originals themselves. I've screwed up hundreds of mp3 files by issuing a bad `rename` command. I've lost the hierarchical structures of genres and artists and albums and such by accidentally moving them into the same directory together. And so on.
If you `lndir` your mp3 directory to a functional copy of it, a playground of sorts, you can move things around and rename them without having to worry about scenarios like having to listen to a bunch of mp3s in order to put them back to where they're supposed to be.
When you're satisfied with the re-organization of your files, you can replace your symlinks with the original files. Since none of the directories are symlinked, you never have to worry about `cd`ing into a place you didn't intend to.
You could use hardlinks for this, though (as long as both places are on the same filesystem). Then when you're finished you just delete the old folder.
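With GNU cp, that could look like this (paths hypothetical; both must be on one filesystem):

cp -al ~/music ~/music.playground   # -l hardlinks the files instead of copying them
# ...rename/reorganize inside the playground, then:
rm -rf ~/music
mv ~/music.playground ~/music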
Huh. As a hardlink avoider, I never even thought of this. (I don't have a good reason for avoiding hardlinks - it's mainly just because I didn't grok them enough to predict their behavior. When you teach yourself, you gotta expect some gaps!) Thanks for the suggestion.
At work, we have a codebase that is used by numerous projects. The solution for project-specific changes is 'check out the repo and make your changes locally'.
What that has turned into is 'check out the repo and make a new directory structure that symlinks everything back to the original'. That keeps the maintenance burden relatively small, as you can easily update the base version and only need to worry about the files you changed.
I certainly wouldn't recommend this approach for anything, but it is not as terrible as it sounds.
Not one-liners, but some of the tools that I have found helpful for working with large bash codebases are:
- shellcheck and shfmt for linting and formatting
- sub for organizing subcommands
- bashdb for debugging scripts interactively via the VSCode extension
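For the first two, usage is a one-liner each (script name hypothetical):

shellcheck deploy.sh   # lint: catches quoting mistakes, unset variables, etc.
shfmt -w deploy.sh     # rewrite the script in place with consistent formatting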
I'm still missing a way to share modules with others, the way you can with ansible/terraform, but I have not found a good way to do it yet.
Watch out: that's a Linux-ism, and macOS's sed will cheerfully use the thing after it as the backup extension. As far as I know, the absolute safest choice is to always specify a backup extension ("sed -i~" or "sed -i.bak") to make it portable, although there are plenty of workarounds trying to detect which is which, and "${SED_I} -e whatever"-type silliness.
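Concretely:

sed -i.bak -e 's/old/new/' file && rm file.bak   # the attached suffix works in both GNU and BSD sed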
My contribution (and yeah, I know, PR it ...) is that I get a lot of mileage out of setting the terminal title from my scripts:
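Something like this (a sketch; the function name is mine, using the tsl/fsl capabilities discussed below):

settitle() {
    tput tsl           # "to status line": begin writing the title
    printf %s "$*"
    tput fsl           # "from status line": done, back to normal output
}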
Even better, use a real text editor like ed or ex. (Nowadays ex is more portable because many distros — against POSIX — omit all 55 kilobytes of GNU ed. Of course, smaller systems might not have ex/vi.)
Basic usage looks like this:
printf '%s\n' '" some commands...' 'wq' | ex -s file
Or:
ex -s file <<'EOF'
" some commands...
wq
EOF
By the way, these commands are the ones that you use in your vimrc or after a colon in vim — at least, the POSIX subset of that — so any ex commands you learn translate naturally to your normal editor.
That's an interesting trick, I'll bear it in mind.
That said, the "lottery factor" is often a bigger contributor than "optimality" to the things that land in codebases. Plus, I've actually seen somewhere that perl is the most common binary across every system, and my guess is that more people know perl than ed.
How many major versions of Perl do you know about?
What is the distribution of these different versions of Perl across various OSes and OS versions?
Hint: Lots of backward-incompatible changes tend to get made around different major versions. Having Perl 4 is not like having Perl 5 which is not like Perl 6.
If you want to claim broad compatibility, you can't just look at the latest distributions. You have to look at OSes other than Linux. You have to look at older versions of OSes, too. And don't forget the billions of embedded and handheld devices, too.
edit: I just realized that's because apt is _written in_ perl. But tomato, tomahto; it may very well be that they picked perl for that same universal-binary reason.
It's GNU sed vs (Free)BSD sed, which are different enhancements of the POSIX standards for sed that went in different design directions. One could Homebrew/macports install gnu-sed on macOS to get a GNU version to write Linux-portable scripts as-needed.
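On macOS with Homebrew, for example (the formula installs the binary as gsed):

brew install gnu-sed
gsed -i -e 's/old/new/' file   # Linux-style in-place edit, no backup suffix needed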
Plan9 sed has no -i option. Older versions of NetBSD will not have it either.
I never understood the point of the -i option other than to conserve keystrokes.
A temporary file is still created then removed; the -i option only saves the user from having to specify it. Maybe the intent is it is only for "one-off" use, not for use in scripts.
This will work for GNU, BSD and Plan9:
sed -n -e 's/old/new/' -e 'w file.tmp' file
mv file.tmp file
Or just use redirection.
Given the choice between saving a few keystrokes and writing more portable scripts, I will keep choosing the latter.
NetBSD sed may have the -i option now but I do not see anyone using it in scripts meant to be portable, like build.sh^1
-i creates a temporary file and then replaces the original with the temporary. The BSD man page advises not to use -i alone because if there is insufficient space to store the temporary, data may be lost.
Using raw escape codes is ugly and device-dependent. People learned this in the 1970’s, and created libraries to get away from having to hard-code escape codes.
Having arrived back at my Mac, running iTerm2, I wanted to share another fun fact about using those executables: running that function while the outer shell is in "set -x" causes the title of the window to end up containing the trace output itself.
You can’t run only one of the tput commands! You need to run both of them, as in the shell function; i.e. both tsl and fsl need to be sent to the terminal!
If you want to see what bytes it would output, use “od”:
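{ tput tsl; tput fsl; } | od -c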
Yeah, I actually thought about that afterward. In that same "I wonder what happened" vein, I also wondered whether tput is bright enough to know the difference between the local termcap and the connected one.
As a concrete example, my printf version works even when run inside docker, but
$ docker run --rm ubuntu:22.04 bash -c '{ tput tsl || tput -T xterm+sl tsl; } | od -c'
tput: No value for $TERM and no -T specified
tput: unknown terminal "xterm+sl"
0000000
Your example suffers from being a toy; it makes no sense to run a noninteractive command in a docker container merely to output terminal escape codes. If this were the norm, “docker run” would probably copy the TERM setting to the inner command by default.
I would assume that if you run an interactive shell inside docker, TERM would actually be set correctly. It’s the same when you ssh somewhere else – the TERM environment variable is sent along, so that the remote program can see it and output the correct codes for your local terminal. Also, the docker image needs the terminfo database installed for “tput” to work.
Thank you so much for the navigation shortcut link! I love these and picked them up from mentors at jobs but never found a definitive guide to all of the ones I could learn.
Learnt a neat trick from an old sysadmin colleague.
If you’ve written a command but realize you don’t want to run it right now, you can just put a `#` in front of it (ctrl-a, then `#`), making it a comment; hit enter and it is saved in your history without running.
When you’re ready to run it, you find it in your history and remove the preceding `#`.
I had been in the habit of symlinking ~/.bash_history to /dev/null to avoid AFS/NFS writes on every local command execution. When I moved over to the financial industry, it didn't occur to me that such a symlink might look like an attempt to evade monitoring. A year or two in, I realized it didn't look good, but it had clearly been made my first week on the job, so I just left it in place for over 10 years rather than risk looking like I was again monkeying with my history.
I hope and presume they had much better monitoring than scanning bash history, but I'm not bet-my-career confident of that.
Enterprises that require logging of user actions will very likely not be doing it at the shell level, either through compiled-in options or shell history.
Instead, the kernel has built-in functionality called auditd[0], which is capable of logging any and all executions, file or socket accesses, and much more, along with included tooling for quickly finding and alerting on events[3].
Further, if terminal logging or playback is really required (usually not), it's generally done through pam with tlog[1]. Red Hat 8 and above come with built-in tlog support[2].
Lol, that makes sense; I never thought of commenting out the command, but I guess I do something similar. If I realize I don't want a command yet, I enter it with a trailing `\`, then `CTRL+C` to get back to an empty prompt.
This is useful when saving command lines to files (scripts) using the POSIX-required fc builtin. Command line histories are relatively cumbersome to save with Ash; Bash saves them but truncates them to 500 entries by default; scripts, on the other hand, can easily be kept indefinitely. Amongst Bash and other feature-heavy shell users, there are Rube Goldberg-like workarounds for command line history saving. OTOH, all shells aiming for POSIX compliance, including the fastest, lightest-weight ones I prefer, will implement fc. It's already there; I make use of it.
I will type fc, save to a file (script) and then delete all lines before exiting the default EDITOR, e.g., %d in vi. This prevents the commands from being re-executed when I exit vi.
Also, I sometimes use # combined with a semicolon to disable portions of command lines, e.g., early commands ;# late commands. I might cut and paste from one entry in the history into another one. Or I might fc -l 1 > file and edit the file down to the entries that form the starting point for a new script. By far, the shell is the most useful REPL for me.
There is no shortage of comments online praising the utility of the REPL concept but the only comment I have ever seen about fc was from a shell implementor/maintainer; it was negative. I use fc all the time. It has become essential for me to use the shell effectively as a REPL.
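For anyone unfamiliar, the basic moves look something like this:

fc               # open the previous command in $EDITOR; it re-runs when you exit
fc -l            # list recent history entries with their numbers
fc -l 1 > file   # dump history from entry 1 into a file to edit down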
> Bash saves them but truncates them to 500 entries
That behavior can be modified with the HISTSIZE shell variable.
> The maximum number of commands to remember on the history list. If the value is 0, commands are not saved in the history list. Numeric values less than zero result in every command being saved on the history list (there is no limit). The shell sets the default value to 500 after reading any startup files.
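e.g., in ~/.bashrc (assuming a bash new enough, 4.3+, to honor negative values):

HISTSIZE=-1       # no limit on in-memory history
HISTFILESIZE=-1   # and don't truncate the history file either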
My favourite example on this page is used four times but never pointed out specifically. It's the use of <(some command) to create a temporary file descriptor, known as process substitution. Since it's not explained, I'll give it a quick go.
If you ever found yourself doing something like this:
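sort file1 > file1.sorted
sort file2 > file2.sorted
diff file1.sorted file2.sorted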
You can instead skip making two new files, and do:
diff <(sort file1) <(sort file2)
And voila - you've got two 'files' you are comparing, but without having to save them to disk. The examples on the page use this with `curl` and `head` to good effect, but it wouldn't necessarily be obvious what's going on.
No, $SHELL is the user’s default shell; i.e. the shell started in a new terminal or when logging in on a console or remotely. If another shell program is started, $SHELL will still refer to the default shell, not the running shell program.
I know that may be convenient, but you'll want to exercise caution, because $'' turns on inner escaping that wouldn't otherwise happen inside single-quoted strings.
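For example:

printf '%s\n' 'a\tb'    # plain single quotes: the backslash-t stays literal
printf '%s\n' $'a\tb'   # $'...': the shell expands \t to a real tab before printf sees it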
It's c-x followed by c-e, although there's no need to release the control key, as far as I'm aware. The c-x is in the same family as alt-x (sometimes called "meta-x"), which is a similar Emacs "mode switching" leader keystroke.
I'd add this for when a filesystem gets almost full (but not completely full; see below). This shows where most of the space goes:
# du -axm / | sort -n | tail # takes a while on large filesystems, or ones with lots of files
Then narrow it down within each of the biggest directories:
# du -axm /some/dir | sort -n | tail # subsequent searches are fast, now that metadata is cached.
In case there is no space at all, sort will complain if /tmp is on the same filesystem; then the only option is to check any suspect directories one by one with du -sm $dir.
A bit surprised that the Ctrl+b(ack one...) and Ctrl+f(orward one char) shortcuts are not included.
As well as their Alt+b/Alt+f counterparts for moving a word back/forward. Very convenient for going through a long command: jump to the beginning or end of the line, then move back/forth word by word to update it.
Ctrl + s : to stop output to terminal.
Ctrl + q : to resume output to terminal after Ctrl + s.
Who uses these? These are newb as well as advanced-user killers that don't seem to serve a purpose.
It just makes you think your shell is locked up: either the server is out of memory, the process is hung so badly that no keys react, or the network is hosed.
I /remember/ them from when I was logging in to "the computer lab" at my Uni from literally decades ago, but I haven't thought about them nor used them in at least that time.
to accomplish something similar. Not by default of course.
It re-runs the command, so if it's not idempotent (etc.) it will not return the expected results. Also, when re-run, the command will not be in a tty context, so if the executable is sensitive to such things (e.g. `ls`), the output format might be different.
Might be kludgey, but I wonder if bash coproc would help; it can basically send stdout to a named pipe whose file descriptor is made available through a shell variable.
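A rough sketch of that idea (the coprocess name is arbitrary):

coproc lister { ls -l; }   # file descriptors land in the array variable ${lister[@]}
cat <&"${lister[0]}"       # read the command's stdout through that descriptor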
I love these one-liners. It's also about knowing your tools better.
I hadn't known about `look` [0], which is great.
The writer looks to be a bioinformatician, so it might be a bit out of scope, but I also found `socat` [1] to be quite a good helper tool for serial communication.
I've used `look` for years when I'm not sure how to spell some obscure word and I'm in a context where there isn't a built-in spellchecker (e.g. editing source code). I was today years old when I learned that looking up words from the system dictionary is just the convenient default and you can use it to search lines from any file.
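For example (the log file is made up, and `look`'s binary search requires the file to be sorted):

look receiv                # prefix-search /usr/share/dict/words
look 2024-05-01 app.log    # find lines starting with that date in a sorted log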
I don't recall where I heard it, but my understanding is that socat is the sort of successor to good ol' netcat. (Of course, don't ask me to compare each, nor know what socat brings that netcat lacks, etc.)
Related: the Warp terminal has the concept of "workflows" - which are a list of snippets you can pull and use with auto-complete. I found that to be a good way to remember those one-liners that I only use once every two months.
Looks like I have a few pull requests to submit.