I love fuzzy shell history. Game changer in terms of shell productivity.
I use atuin[0] instead of fzf as I find the experience a bit nicer and it has history backups built in (disclaimer, I am a maintainer)
Some of our users still prefer fzf because they are used to how it fuzzy finds, but we're running an experiment with skim[1] which allows us to embed the fuzzy engine without much overhead - hopefully giving them back that fzf-like experience
Thank you!! Why on earth isn't that the default. It always seemed weird that with multiple bash windows open, the commands from most of them weren't added to the history.
I often have three or more terminals open, doing different tasks in each; I also often have cycles of work where I'll repeat the last three commands again (three up-arrows and a return). This breaks if one terminal's commands get inserted into another terminal's history.
"Ted, the change I suggest doesn't affect the independence of your sessions as you suggest. Each shell maintains a unique history in memory, so modifying the history file has no effect on running terminals. The only time the history file is read is when you start a new terminal. I recommend you try my suggestion. Really, all I am doing is eliminating the race condition that causes the bash history file to have inconsistent data."
To some degree; it depends on the amount of multitasking. I mainly care about which commands in which order when I'm looking at recent commands from that terminal; otherwise I use C-r.
My guesses are that it's on-close so you can follow the per-shell history slightly easier (rather than it being interleaved from multiple shells?), or reducing disk writes?
It is very useful; just be careful when switching between shells and hitting the up arrow to get the previous command, as you may get something from another shell.
That will only happen if PROMPT_COMMAND also contains "history -c; history -r", right? "history -a" just saves it, but "history -c; history -r" clears memory history and reloads from disk.
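For reference, the full recipe usually looks like this in `~/.bashrc` (a common pattern, not from the thread; the `history -c; history -r` pair is exactly the part that causes the cross-shell bleed discussed above):

```shell
# Append to the history file after each command, then clear the in-memory
# history and reload it from disk so this shell sees other sessions' commands.
shopt -s histappend
PROMPT_COMMAND='history -a; history -c; history -r'

# To persist commands immediately WITHOUT pulling in other shells' history,
# drop the -c/-r part:
# PROMPT_COMMAND='history -a'
```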
Meta question: do you keep the archive.org link of the article in your favorites, or did you manually look up the link before posting?
Or maybe an extension that does that automatically?
I have used that for years, but there are downsides to the approach as well.
So you revisit a window and want to start from where you left off, but now you might have to wade through hundreds of commands before you get back to that point in time. There are fixes for this too, of course; my point is that it doesn't come without side effects, and that is maybe why it isn't the default behaviour.
At least in a pre 'fzf/atuin/smenu' world.
I prefer smenu's history search, even if I consider myself a heavy fzf user.
> The first command changes the history file mode to append and the second configures the history -a command to be run at each shell prompt. The -a option makes history immediately write the current/new lines to the history file.
I used to have something like this set up on my Linux laptop. The downside is that separate shells/terminals/windows/tabs don't keep separate history, so if you e.g. start a server in shell one (rails s), start an editor in shell two, then go back to one and ctrl-c out, up arrow will now give you "vim", not "rails s".
The problem compounds if you ping, or curl in another shell etc.
Not sure what that link had as it's dead for me as well but...
PROMPT_COMMAND='history -a'
Has always worked for me. Goes in your .bashrc from the FM
PROMPT_COMMAND ¶
If this variable is set, and is an array, the value of each set element is interpreted as a command to execute before printing the primary prompt ($PS1). If this is set but not an array variable, its value is used as a command to execute instead.
FYI, speaking as a skim library user/lover, two things -- 1) it's really not maintained, and 2) once you dig into the code it gets a little gnarly.
I have a branch[0] where I'm trying to do things like reduce the user perceptible lag in search, the initial time of ingest, and add small features I need, etc (all done). I've tried to create PRs where I can, and they go unnoticed and unused.
One other thing I was trying to get a handle on is memory usage. The issue is, you're implicitly creating objects with static lifetimes everywhere. Now try to refactor that: there is a trait object held in a struct which depends on another trait object, so good luck figuring out the lifetimes. This is totally fine for a fuzzy-finder tool, probably, but less fine when you drop a fuzzy-find feature into an app.
Love to have others interested in skim, and eager to work with anyone with big ideas about how to make it better. I'll have to try out atuin!
Was really trying to figure out why you would name a fork of a project that skims the filesystem two_percent. "What, is it invoked using %% or something? That seems unw... oh. Nice." Well done.
Fuzzy history is nice, but the real game changer for me was the ability to pretty much stop remembering paths in big projects. The default keybindings provide the Ctrl-T shortcut to insert a path/filename on the command line using fuzzy search and Alt-C to fuzzy-cd. No more tedious completion - just search + enter.
fzf includes the path in your match, so unless the directories also have impractical names, this typically won't be a problem.
Learning for me: Brew suggests to install the shortcuts but doesn’t automatically do this. Just activated this and Ctrl-R is a big improvement this way.
Something I've always wanted from my shell history is the ability to record relative filepaths as their absolute equivalents in the history; is that supported in atuin? If you do a lot of data munging on the CLI, you end up with a lot of commands like `jq 'complicated_selector' data.json`, which is good if I want to remember the selector, but not so good if I want to remember which data I ran it on. I could do it with better filenames, but that would involve thinking ahead. I also run into this a lot trying to remember exactly which local file has been uploaded to s3 by looking at shell history.
I sometimes use a #a-text-comment at the end of a long/complex command-line incantation. Easy to find using fzf at a later date, and it can also provide you with quick context.
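For example (the command and tag name here are made up for illustration):

```shell
# The trailing comment is ignored by the shell but saved in history,
# giving Ctrl-R / fzf a memorable handle to search for later:
ffmpeg -i talk.mkv -c:v libx264 -crf 23 talk.mp4  # reencode-talk-video
```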
Technically, yes. For every command you run, atuin stores the current directory, the time, the duration, the exit code, a session id and your user/host name.
With the current directory you should be able to get the absolute path from your relative paths
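Given that stored cwd, the reconstruction is just string joining (a sketch with made-up values; real entries would come out of atuin's database):

```shell
# Join a recorded working directory with a relative argument to recover
# the absolute path. Pure string manipulation, no filesystem access.
cwd="/home/me/project"
rel="./data.json"
abs="${cwd%/}/${rel#./}"   # strip any trailing "/" and leading "./" first
echo "$abs"                # /home/me/project/data.json
```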
File paths are just strings from the shell, though, and each tool can handle relative paths differently. So `./` can mean relative to your shell's cwd or the program could interpret it as relative to something else. Moreover something like `go test -v ./...` is...ambiguous.
I think what would be more useful is to record `env` or some similar context along with the time and command. That would probably get weird pretty fast, though. Maybe just a thing that could insert some useful bookmarking/state into the history record on-demand? `history-set-checkpoint` or something would save your pwd and local vars or something.
Issues like this are why I write everything into scripts and pipelines, even the munging. This way everything is documented: the environment, the input, the output, the log file, what ran, and longform comments with why I was doing it.
Good choice, but for me that breaks the "carpe diem" part: the inspiration that can vanish on a whim, which is what I'm riding when writing (complex) one-liners in the shell.
Just have two windows open: your console and your editor. If you ran a command that worked, just copy and paste it into the editor. There's your script, if you didn't want to get fancy. If you copied your initial cd command or whatever, you'd know what the relative paths were referring to as well (although this issue is why I have gotten into the habit of using a path variable instead of relative or otherwise hardcoded paths).
I wanted to like atuin. The idea is great. But it just could not match the instant search that ctrl-r with fzf offers sadly. There is always a noticeable delay that annoyed me and made me revert back to fzf for search.
For me, another issue was that I needed more keystrokes for the same behaviour (search a previously run command and execute it).
Fuzzy shell history search is just one of those mind-blowing things. I love my history, it's such a trove and I can trust it to work as some kind of external memory (it's enough to vaguely know the kubectl command, or that I want to "du -h | sort" to see what is using disk space, etc).
I also had a rather noticeable delay when launching atuin. As it turns out, this was because it checked for an update every time it launched! You can disable that update check: add `update_check = false` to your `~/.config/atuin/config.toml` [1]. That made the delay pretty much disappear for me.
What? I don't expect any tool to interact with the network if it's not made specifically for this (curl, netcat, etc). This would betray my expectations, I find it unacceptable.
But it's up to you. You can disable any sync features by installing with `--no-default-features --features client` set. Your application won't have any networking features built into the binary then
Oh, it's nice you fixed it, thanks! And don't worry, I updated atuin, as it's in my distro's repository (which is why I wasn't worried about disabling the update check).
Intrigued by local-directory-first feature of McFly (It brings up commands you previously executed in that folder first). I tried it for a bit. But there was noticeable lag compared to FZF and also the UI was a bit shaky.
Went back to FZF (in Zsh).
I have a seven-year zsh history. That may be a contributing factor to the issues?
I noticed that before and am glad to see it fixed. My current issue is that typing to search is noticeably slow on a 150,000 entry history, especially for the first few characters. fzf is instant for me.
Yeah we accidentally had this blocking :/ It does only check once an hour though, and can totally be disabled!
We introduced this as we found a lot of people reporting bugs that had already been fixed + they just needed to update, or users on the sync server that were >1yr behind on updates (making improvements really difficult to introduce).
Same, I enjoyed atuin but found myself missing fzf's fuzzy search experience so I ported fzf's own ctrl-r zsh widget to read from atuin instead of the shell's history to solve this. Best of both worlds imo, you get fzf's fuzzy search experience and speed with atuin's shell history management and syncing functionality.
Zsh snippet below in case it's helpful to anybody. With this in your .zshrc ctrl-r will search your shell history with fzf+atuin and ctrl-e will bring up atuin's own fuzzy finder in case you still want it.
It only searches the last 5000 entries of your atuin history for speed, but you can tweak ATUIN_LIMIT to your desired value if that's not optimal.
atuin-setup() {
    if ! which atuin &> /dev/null; then return 1; fi
    bindkey '^E' _atuin_search_widget

    export ATUIN_NOBIND="true"
    eval "$(atuin init "$CUR_SHELL")"

    fzf-atuin-history-widget() {
        local selected num
        setopt localoptions noglobsubst noposixbuiltins pipefail no_aliases 2>/dev/null
        # local atuin_opts="--cmd-only --limit ${ATUIN_LIMIT:-5000}"
        local atuin_opts="--cmd-only"
        local fzf_opts=(
            --height=${FZF_TMUX_HEIGHT:-80%}
            --tac
            "-n2..,.."
            --tiebreak=index
            "--query=${LBUFFER}"
            "+m"
            "--bind=ctrl-d:reload(atuin search $atuin_opts -c $PWD),ctrl-r:reload(atuin search $atuin_opts)"
        )
        selected=$(
            eval "atuin search ${atuin_opts}" |
                fzf "${fzf_opts[@]}"
        )
        local ret=$?
        if [ -n "$selected" ]; then
            # the += lets it insert at current pos instead of replacing
            LBUFFER+="${selected}"
        fi
        zle reset-prompt
        return $ret
    }
    zle -N fzf-atuin-history-widget
    bindkey '^R' fzf-atuin-history-widget
}
atuin-setup
I'm now using atuin for shell history and fzf for fuzzy completion[0], works awesome! As Shell I use zsh with some plugins managed via antigen on my Linux Mint default terminal.
Thanks for the atuin reminder, I knew fzf reminded me of a crate I had on my backlog to try out but I completely forgot the name. I should probably just get to it now.
I liked the way he covered just one function, though; it's a fully working starting point for that kind of operation, and probably gives most people a better workflow than going without fzf at all.
Funny thing: I try to stop using Ctrl-R from fzf at the moment, and rather use the one that ships with smenu.
I'd never heard of fzf before! It looks like there's a lot of cool stuff you can do with it, but honestly, I think it's worth it even just to replace the frustrating ctrl+R search in bash, which I use a lot, but constantly dislike.
I feel like with a lot of these kinds of tools, if I have to actually actively use them, I forget that they exist ('z'[0] ended up being like this for me) and eventually remove them, but something that directly replaces something I use regularly (but hate the UX of) is perfect for me.
Another bit that helps is that I'm not having to learn/remember something new that I'll only have on my laptop. I'll continue to ctrl+R and get a nicer experience, but if I'm ssh'ed into some random box without fzf, ctrl+R will still work, just with a worse experience.
You do have to remember to use it, but the thing to keep in mind is that you can pipe a list of ANYTHING into it. Any list of text items you can search through with a text query is fair game.
git log --oneline | fzf
for example is one of my favorite tricks. Instead of scanning by eye or repeatedly grepping to find something, it's a live fuzzy filter. And depending on how deep you want to go you can then add key bindings to check out the selected commit, or preview the full message, or anything really.
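A slightly fancier version of that trick might look like this (the keybinding and preview choices here are my own illustration, not the commenter's):

```shell
# Live-filter commits; the preview pane shows the selected commit's stat,
# and Enter opens the full patch in a pager. {1} is the first field of the
# selected line, i.e. the abbreviated hash from --oneline.
git log --oneline --color=always |
  fzf --ansi \
      --preview 'git show --color=always --stat {1}' \
      --bind 'enter:execute(git show --color=always {1} | less -R)'
```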
apt install? There's no way I'm integrating a core tool like that with a crude "curl | zsh" hack. How and when would it get updated? Am I expected to stick that in an Ansible playbook?
For an article titled "You've installed fzf. Now what?", I would've expected step 1 to be "how to add the shell hook for your particular shell". Especially since people here are referencing extra integrations that the bundled shell hook doesn't setup.
This is what happens when tooling providers try to do their own packaging. Homebrew, apt, rpm, and other package managers, as well as bash, zsh and other shells, all offer standardized ways to install configuration loader scripts for the user's environment, and to display installer messages that prompt the user to do it - but fzf has a bunch of functionality to go around all that and auto-update itself in place and hack lines into the user's bashrc/zshrc.
Not trying to single out fzf here - there are many other tools that do this - but I find this behavior really sad because it makes me very disinclined to trust the tool with anything. A command-line tool should not be trying to auto-update itself or manipulate dotfiles in the user's home directory. It's dangerous and unexpected.
There’s no really good mechanism for what fzf is trying to do. profile.d might work, but I think that is more intended for environment variables than for scripts that change interactive shell behaviour.
There’s no standardised way to extend interactive shell configuration without just appending to /etc/bashrc.
The other problem with this is that it is hard to disable. What if you want fzf but without the key bindings? FZF could add a mechanism for this, but it doesn’t fully solve the problem, because the FZF script being added to global configuration will always be run before any user config.
In this specific case, it is a limitation of bash (and probably zsh as well) that there is no simple package-manager-compatible extension mechanism for interactive shell plugins which doesn't sacrifice user control.
Pretty much par for the course, Linux distributions don't do per user configuration of tools beyond appending to PATH or setting some essential environment variables using scripts in in /etc/profile.d/. They will most certainly not mess with login scripts.
Yeah sure, that makes sense. I'm surprised that the .deb doesn't install the stuff from `$upstream/shell` into `/usr/share/fzf` though. That's the sort of thing `/usr/share/$prog` is intended for.
This enables the C-r / M-c / C-t key bindings, but there's also a completion.bash file. This enables the "**" completion trigger, which can contextually complete paths, envvar names or PIDs in commands. (See "Fuzzy completion for bash and zsh" in fzf's readme [1]). Similarly, you can add to your .bashrc:
if [ -e /usr/share/doc/fzf/examples/completion.bash ]; then
    source /usr/share/doc/fzf/examples/completion.bash
fi
/usr/share/doc/fzf/README.Debian explains this, as well as the setup for zsh, fish and vim.
Thanks for sharing. I was also lost reading this article, because I installed with apt (Debian). After searching for "fzf + Ctrl+R" I found out about the keybindings part.
I don't think this article goes far enough into what a game-changer fzf can be.
Combined with some clever "preview" functions and a pop-up terminal function (which, admittedly, is harder to set up than it ought to be), it can be an across-the-board better general-purpose script menu creator than most command-line AND GUI tools, namely due to
- instant previews AND
- the ability to either arrow up and down OR fuzzy type what you want.
For me, it renders, e.g., `select ... in` and `case` constructs in Bash completely obsolete. At any time, if I want quick selection of anything that can be a list, my first thought is: can I do this with fzf?
I have ~200 lines of scripts that use fzf and mblaze (https://github.com/leahneukirchen/mblaze) as the ui for my very own mua. It's all Unixy and lovely. I should write a page about that
Yes please! Even just posting some of the scripts somewhere would be awesome. I'm always looking for better ways to handle email. Mblaze seems great, but I've never managed to quite tap into its obvious power.
Here's a little function that I use pretty often when I want to install a package but I'm not sure what the exact package name is. Give it a keyword and it searches apt-cache and dumps the results into fzf.
function finstall {
    # Quote "$1" and "$PACKAGE_NAME" so keywords/names with spaces don't break
    PACKAGE_NAME=$(apt-cache search "$1" | fzf | cut --delimiter=" " --fields=1)
    if [ -n "$PACKAGE_NAME" ]; then
        echo "Installing $PACKAGE_NAME"
        sudo apt install "$PACKAGE_NAME"
    fi
}
I have a similar one for Homebrew, with the ability to preview package info and install multiple targets:
# ~/.config/fish/functions/fbi.fish
function fbi -a query -d 'Install Brew package via FZF'
    set -f PREVIEW 'HOMEBREW_COLOR=1 brew info {}'
    set -f PKGS (brew formulae) (brew casks |sed 's|^|homebrew/cask/|')
    set -f INSTALL_PKGS (echo $PKGS \
        |sed 's/ /\n/g' \
        |fzf --multi --preview=$PREVIEW --query=$query --nth=-1 --with-nth=-2.. --delimiter=/)
    if test ! -z "$INSTALL_PKGS"
        brew install $INSTALL_PKGS
    else
        echo "Nothing to install…"
    end
end
Interesting, I'm of the first kind, who uninstalled it quickly.
I use ctrl-r a lot, though I have no issue remembering exact portions of the commands, but that may just be me.
I'll give it another try.
In the same vein as ripgrep (rg), I recommend the author (and everybody else) give fd-find (fd) a try as a replacement for find. It's much, much faster (multithreaded search), and has better defaults for shell use (it doesn't complain about permission issues, and `fd abc` is roughly equivalent to `find . -iname '*abc*'`).
A few comments here mention it already, but I wanted to recommend it.
I loveeee fd! About the only time I reach for find is if I need to do something with printf.
I’ve noticed a bunch of CLI tools recently released written in rust that are along this same line of being snappy and well-written. Fclones, ripgrep, paru, fd, exa, to name a few. This probably has more to do with the type of developers the rust platform attracts, rather than the language itself (many awesome tools have been written in go recently as well). But yea, devs who have an interest in Linux and command line tools tend to be great IMO :)
If you don't want to install a tool just for this:
fs() { find -iname '*'"$1"'*' ; }
I don't really have big directories where speed would be a benefit, so haven't tried `fd` yet. I do use `ripgrep` but mostly for its features (like default recursive search respecting .gitignore, `-r` option, etc) over speed benefits.
time find ~ -type f
    6.99s user 10.66s system 53% cpu 33.218 total
Second run: about the same (-0.32, -0.78, -1.929)
time fd --type f
    4.61s user 11.27s system 120% cpu 13.172 total
Second run:
    4.06s user 4.35s system 178% cpu 4.720 total
That's on an SSD. Admittedly, fd skips over hidden files and some directories by default. Adding them back (-u), it seems to take about the same time:
    13.74s user 40.58s system 145% cpu 37.324 total
Though it falls back to 13s if I disable colored output.
Piping these in wc -l to eliminate vt overhead, I have 1.4M matches, in about 5s for fd and 11s for find (warm runs, there is quite a bit of jitter). About 1s if I let fd skip hidden directories (.cache, .git, etc: 0.4M matches).
Point is, it's usually noticeably more responsive for realistic use-cases (and has colored output, parses .gitignore files, etc).
Just to explain a bit more... fzf is able to fuzzy-search a list for the input string. When it shows a list of directories, it relies on `find` to do so.
fzf is also more suited to interactive search, while fd is more of a "give me a list" thing.
Maybe I'm just getting old, but I found most of the animations in this article to go too fast. I couldn't tell what was happening, couldn't tell what I was supposed to be looking at.
> Maybe I'm just getting old, but I found most of the animations in this article to go too fast.
It's not that you're getting old, and it's not that it's "too fast". The problem with many animations like this, and why they make no sense whatsoever, is that they cycle without adding a little pause, so you have no way to know when one loop begins and ends. It's endemic on GitHub.
Some .gif files are properly done but IMO most don't explain anything, only add confusion and would be better served by three or four screenshots.
There is poor UX, and then there are these kinds of cycling .gif files.
We have an IoT command server, and connecting to one of the devices (for maintenance, logs or whatever) involved two manual lookups in the output of two commands and then finding your match in both. I got sick of this one day, and with some grep, sed, awk and finally fzf, you can now just sort of type the device's name and press "enter" to connect.
People's reaction when I showed them was basically WHAT IS THIS MAGIC?!?
Too often do I ssh into some ECS, run a docker or whatever, only to find it doesn't have locate.
The problem then is that even if you can quickly install it, it'll still need to reindex the whole system, which may take minutes (and can easily cause stressed servers to start thrashing).
If it ain't there (and often it isn't), getting it is a lot of hassle.
`locate` doesn't seem to be on everything, unlike `find`. Same with `adduser` vs `useradd`. Both were a rude awakening to me later on as I hadn't learned the hard/common versions.
The default ctrl+r behavior doesn't allow me to scroll up through my history of matches, which is always disappointing. Often it's some random incantation I'm trying to search for that shares a prefix with other more common commands and the default ctrl+r requires me to type out far enough to be unique, which defeats the purpose of history search. Maybe I'm missing some essential behavior that I'm not familiar with.
ctrl+r matches any part of the line, not just the commands / start of the line. I'm often using it on a unique argument I remember last using the command with.
Imagine you have a bunch of different for loops and you want to get the one that ran a specific command. You can use something like "history | grep for | grep command" or you can use fzf. Ctrl-r, type "f o r <space>" then type "c o m m" and you'll quickly narrow down the search results.
It's the fuzziness that really beats ordinary ctrl-r, being able to search on multiple fragments of text is just fantastic.
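fzf's extended search syntax makes those multi-fragment queries even sharper (these operators are documented in fzf's readme):

```shell
# Space-separated terms are AND-ed; each term can carry an operator:
#   for exec     fuzzy-match "for" AND fuzzy-match "exec"
#   'exact       exact (non-fuzzy) match of "exact"
#   ^docker      line must start with "docker"
#   .json$       line must end with ".json"
#   !test        line must NOT match "test"
history | fzf --query "for comm"
```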
I don't like the `Ctrl+r` interface. Instead, I use these settings in `.inputrc`: type the command I want and use the arrow keys to search (I prefer matching from the start of the command, but you can match anywhere too).
# use up and down arrow to match search history based on typed starting text
"\e[A": history-search-backward
"\e[B": history-search-forward
# if you prefer to search anywhere in the command
"\e[A":history-substring-search-backward
"\e[B":history-substring-search-forward
which will instantly upgrade my search capabilities, if I'm stuck. Which doesn't really happen that much.
I think the main part of my vanilla C-r experience is that I trained myself to remember commands differently. I somehow remember exact char-to-char tokens, like the "/fzf/" substring above. Or for a more extreme example "ose -p d" when I try to find:
docker compose -p devenv exec myservice /bin/sh
Weirdly, I kinda know that if I used "se -p d" instead (shorter) it would land me on the wrong command (so, not shorter).
There's a potential well of habit that you have to climb out of before you start getting a benefit. If you don't put in the energy to try it for a while, you'll slide back down into the well.
I initially felt the same way and found it cluttered. Though I still left fzf-history around and bound it Ctrl+Alt+R so Ctrl+R could still be the default. In zsh I just did this with `bindkey '^[^R' fzf-history-widget`.
Over time I eventually found myself using Ctrl+Alt+R more and more so I made it the default and now Ctrl+Alt+R is still 'old school' history search for me.
When I learned what Ctrl-R was originally, by mistake, I was like damn... maybe the majority of shell users just have no idea how to drive bash. No wonder people lose their shit over it, it's like trying to drive a car but you're never told the car can go in reverse and the rear view mirror is just tucked up into the roof!
If anyone who uses i3 needs an fzf application launcher in their lives, adding the following two lines to your config will make $mod+d open it as a floating window in the middle (uses rxvt-unicode):
Nice. I'm going to dig through these tomorrow. Thanks for posting! The PR review helper looks excellent and I'm excited to try it out. Til about `pandoc` and `glow`.
I whipped up a nice, performant branch picker last night that I'm pretty happy with, hopefully there are some useful tidbits for others. It's similar to your `flog` command but it uses the reflog to find the most recently checked out branches. It filters those which have been deleted using a set structure (well, map of bools), thus requiring BASH 4+. I'll be interested to see the differences in behavior and performance of your approach
Is there a preceding blog post I could read here? Something like, "So you've just heard of fzf for the first time, what even is it?" I was kind of surprised that the author doesn't even make a passing attempt to define or explain it.
The tl;dr is that it provides terminal fuzzy finding that can be plugged into just about any task that involves finding things.
I use it many times an hour, it's the main way I navigate files and buffers in nvim, it's how I find files in my projects, it's how I select sessions in tmux.
I really like the suggested command `<my editor> $(fzf)` to quickly open files in <my editor> relative to the directory I am in. However, when I abort the fuzzy find with esc, it still opens <my editor> without a file input, which is not what I want, so I wrote a little script for my shell config to prevent this:
fuzz() {
    file=$(fzf)
    if [ -n "$file" ]; then
        nvim "$file" # or any editor you prefer
    fi
}
but don't press enter. Press tab instead. The shell will expand the $(), which will run fzf and let you choose a file. When you've made your choice, fzf will exit, and your choice will be written in place of the $(), so you can see the result before you run it. It works with any command, not just vim. It's particularly useful when doing multi-selects. And if you press ESC then nothing gets written.
I've only tested this with zsh, not sure how other shells behave. And you may need to alter fzf's default command or pipe something into it.
I hope this article along with this comment serves as a gateway drug for people to realize fzf is also really useful in injecting some interactivity into their shell scripts. It feels different to write scripts which call down fzf from the heavens for a split second.
I am very new to shell scripting and as I wrote the script, I actually just realized that I can plug anything into fzf that is split into lines of text and that I can use the selected output for further processing, since it’s just text.
I just love how simple it is to stick anything together via the universal plain text interface in the shell and even pipe this text/dataflow through interactive tools like fzf, as you just mentioned.
You could also map <ctrl-t> to fzf and then do `$EDITOR <ctrl-t>`. This works as you’d expect and with any command. I believe the instructions to set it up are in the fzf readme.
I like `<editor> <ctrl-t>`; ctrl-t opens fzf and pastes any selected files onto the command line. If you choose to abort you end up back at the prompt after `<editor> `.
I've been using fzf for a long time, but I didn't even know about Ctrl-R. Since I use fish, I've been used to just typing some random part of an old command, then hitting the Up key to cycle through old commands with that snippet. I'm going to have to give this a real shot.
It gets my autojump history, sorts it by most used on top, extracts just the directory names, pipes the list to `fzf` and `cd`s into the selection. It's great because my most-used directories will be right on top of the `fzf` list.
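A rough sketch of such a function (the `autojump --stat` parsing is an assumption about its output format, weight-then-path per line; the `jcd` name is made up):

```shell
# Pick one of autojump's most-used directories with fzf and cd into it.
jcd() {
  local dir
  dir=$(autojump --stat |             # "weight  path" lines plus a summary
        sort -rn |                    # highest weight (most used) first
        awk '$2 ~ /^\// {print $2}' | # keep only lines whose 2nd field is a path
        fzf) && cd "$dir"
}
```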
I've been seeing mentions of fzf for some time, but this article might make me try it.
Zsh's H-S-MW plugin [1], which provides multi-word CTRL+R search of the history, plus a couple of half-assed tools I wrote [2] seem to cover most of the things I would need fzf for, but maybe fzf would work better or be a useful addition to my toolset. Maybe it could replace those half-assed tools.
Was not immediately apparent to me. It will first search the current directory, then search from the root directory of the filesystem. This is useful (beyond just `find` which is implicitly `find .`) because it will turn up local results quickly and global results eventually.
I actually did not catch myself that it has that two-tiered behavior. That's fascinating! Maybe I would have found nginx faster if I had reflexively hopped to /etc first.
FZF is a great ad-hoc UI. Let’s say I want to run an operation against a cluster- it’s way better to use fzf to select clusters from a list than it is to have to copy/paste a particular cluster name
On macOS 12.5 + bash + MacPorts, I'm having trouble getting anything to work — Ctrl-R, Esc-C, **TAB complete — even after trying to follow the suggestions linked here.
The basic problem, however, is that whatever I type into the fuzzy search, it finds many thousands of hits. It seems it's picking up a lot of aliased Downloads folders in ~/Library/Containers. Anyone else have that problem? Not sure how to turn that off.
I just typed "abcdefghijklmno" trying to narrow it down, and still had 10 hits. Typing a further "p" reduced that to 2, but I could see most of the 8 eliminated had a "p" in the filename. Confusing!
edit: I got Ctrl-R, Esc-C and **TAB complete working by adding this to .bash_profile, not .bashrc as it said to when installing:
But I still have many thousands of options for Esc-C cding, for example, whatever I type—mostly from ~/Library/Containers. I don't remember having that problem when I tried fzf a few years ago, on macos 10.13 I think.
I've made a simple tagging solution based on fzf and don't know how to live without it anymore.
You just name your folders and subfolders by tags. And then search it via fzf (I prefer the exact search) launched by a hotkey.
To exclude a subfolder from the global search I use a trick: add the stop symbol " #" at the end of the subfolder name, and a custom fzf configuration (see below) then excludes what's inside. You can still cd into the excluded folder and make fzf search in it. So, in effect, it gives you a hierarchy of tagged folders with stop symbols. This helps to avoid search pollution and unnecessary disk reads:
export FZF_DEFAULT_COMMAND="fd -E '**/*#/**'"
I also use hotkeys to open items in default programs and to preview files in fzf UI:
Of course, it still requires strong discipline to tag everything appropriately. But the payoff is immense: you can find anything you remember and open it in the default program in a matter of seconds.
Obviously only useful for MacOS, but fzf plus spotlight is pretty useful. You can use it to find that lost file that you only remember part of the name!
The article talks about rg (ripgrep) but not rga (ripgrep-all)[1], which extends rg to other file types like pdf, tar, and zip. The rga website provides a function integrating it with fzf[2] to interactively search documents from the terminal.
Basically, prefix any jq command (or rg, or anything you can think of) with fzr and it'll give you an interactive interpreter for your query. People have written a number of dedicated jq interactive repls, but not much is needed beyond fzf.
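The core of that idea can be sketched with fzf's --print-query flag and the {q} placeholder (the fzr name and exact flags here are my assumptions, not the actual tool's implementation):

```shell
# Type a jq query interactively; the preview pane shows its live output
# against the given file, and the final query is printed on exit.
fzr() {
  echo '' | fzf --print-query --preview "jq -C {q} $1"
}
# usage: fzr data.json
```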
For windows, I like everything search https://www.voidtools.com/
It lets you search all your files and folders instantly, as long as they are on NTFS partitions, and there is no heavy background indexing needed (it still needs to build an index, but it's very light since it uses the NTFS file journal).
I also customize it so I can click the folder in the UI and open it in total commander.
I love fzf for one-liners. Simple pattern: a command gives you everything, pipe it to fzf so that you can pick one, then pipe that to a command to switch to it.
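Switching git branches fits the pattern exactly. A sketch (the fco name is made up, and `git switch` assumes git 2.23+):

```shell
# list everything -> pick one with fzf -> act on the pick
fco() {
  local branch
  branch=$(git branch --format='%(refname:short)' | fzf) && git switch "$branch"
}
```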
The article is a pretty helpful collection of examples, and I appreciate it.
The intro irks me, though:
> Software engineers are, if not unique, then darn near unique in the ease with which we can create tools to improve our own professional lives; this however can come at a steep cost over time for people who constantly flit back and forth between different tools without investing the time to learn their own kit in depth. As someone with a healthy respect for the tacit knowledge of people better than me, I think a great 80/20 heuristic is “Learn the oldies first”: venerable Unix tools like cat, ls, cd, grep, and cut. (sed and awk, too, if you have the good fortune of landing yourself in an actual modern sysadmin role.)
This seems to be either naive or hubristic in light of the thousands of years of history of craftspeople building their own workspaces. Even a hobbyist woodworker will build plenty of jigs and fixtures for their work. Machinists as well. Anyone whose work comprises building things is readily capable of applying those skills to make their work easier.
> unique in the ease with which we can create tools to improve our own professional lives
I think the key line is the ease with which we can create tools. A software engineer has free access to the lumber yard. Builders of old had to work hard to create the tools to create tools.
On ease: much of what a woodworker needs can be built from wood with basic hand tools, so the woodworker, almost by definition, can build many of their own tools.
This is similar to programming, where much of what a programmer needs can be written in code.
On cost, simple observation of woodshops—including many in person and among acquaintances that are not professionals—has shown me that the cost is not prohibitive. Every shop I have seen includes a significant amount of self-built tools and fixtures.
Cost also explains the difference in programming environments I have seen. I have met a great many professional programmers, some of whom have no scripts directory or self-written tools, whose only programming output is their direct work product. This reflects the fact that professional-grade tools are available for free to programmers.
As individuals, my observations lead me to understand woodworkers as much more likely to use their skills to build tools for themselves. The same holds true of other craftspeople I have had the opportunity to observe; even in small, one-off projects, it is common to use elements of the craft to build a tool or otherwise aid the endeavor in a way that is not directly producing the work product.
As a collection, I would agree that programmers build tools that allow us to do our work better. The open source community is incredible. Because of the collective action and the cost to individuals, it is much less common for those individual programmers to have to write tools for themselves.
I would argue that it is similarly easy, but circumstances lead to these disparate outcomes.
A woodworker will also just try stuff. They will first lay out some specs and then set out to fill those specs. One step at a time.
While not TDD by the letter, many crafts work more or less agile, and more or less incremental.
An example: I'm currently cutting down some 15 trees (Elms, died of Elm Blight). While it's impossible to cut down a single tree using TDD, I do cut them down incrementally, agile and in small steps: start out with the smallest tree (that stands alone) see how it reacts. Don't plan too far ahead, but plan a little - escape route, sharpen the chains etc. TDD wise: delivery criteria is "wood on a safe, manageable pile".
(I halted the moment a funny little owl peeked out of one of the trees, annoyed. Apparently it was building a nest and I'm too late in the season. Also agile)
Most traditional construction was done with green wood, in fact, and there is a whole bunch of furniture making and other fine work that can be done with green wood. Much carving and turning is preferable on green wood, as it works more easily than dry.
And in fact, testing is one of the most widely shared pieces of advice among woodworkers I have seen working and met. If there is more than one of any cut to be made, it is customary to build a custom jig to hold the work, then test on scrap to ensure all is correct. After work pieces are all in or near their final shape, it is typical to do a dry fit: put the entire (or some substantial subassembly of the) finished product together without glue or fasteners to ensure it does fit. Iteration and small tests which allow for fast failure and early correction are a longstanding tradition in woodworking, much longer than in programming.
In my personal experience, working professionally as a programmer and as a hobbyist in woodwork, I have been much more impressed by the degree to which woodworkers build their own tools to support their workflows than by programmers. This holds up among individuals I know personally, discussions I have followed online, and popular personalities I have been exposed to.
If you follow discussions and developments in both communities, it becomes clear that the difference is not the ease with which one can build tools to make their work better, but the price at which programmers can get professional-grade tools for the same purpose.
BSD and Linux are free. Hyper-powered text editors are free. Most programming languages are free. Postgres (and a plethora of other databases) is free. Compilers, debuggers, package management, CI/CD tools, and collaboration tools are all available for free.
As for the availability of raw materials for building such tools to improve work, that is a question of cost, not ease. It is certainly easy to build tools for a woodworker, and the cost is not prohibitive, based on the observation that every woodshop I have seen has a significant amount of tools and fixtures built by the worker.
This article makes things more complicated than they need to be:
> vi $(find . '/' | fzf): For finding random config files
Instead you can just use:
> vi **<tab>
** is the real game changer here. You can use it with ssh, too. I manage about 500 SSH hosts; ssh **<tab> will find hosts via /etc/hosts and ~/.ssh/config.
You have to parse it to extract the directories without all the metadata it spits out.
Then you probably need to write a shell function in your preferred shell to tie the pieces together, as well as assign it to a keybinding. It's not too hard.
I use xonsh for my shell, so my config probably won't help. And doing it in xonsh is probably a lot messier than doing it in Bash.
https://github.com/vapniks/fzfrepl: Edit commands/pipelines with fzf, and view live output while editing. There's a cool example using ffmpeg, graph2dot & graph-easy for displaying ffmpeg filter graphs.
https://github.com/vapniks/fzf-tool-launcher: Browse & preview contents of files, and launch tools/pipelines to process selected files. You can use fzfrepl in sequence to process each part of a dataprocessing pipeline with different tools.
So I might as well ask here: I want to search for a string using ripgrep, select one of the files from the result list using fzf, and then open the selected file with VS Code.
> rg . | fzf
How do I do this on windows (NOT Linux)??
*Note:* Assume I already have ripgrep, FZF & VS Code installed.
You open Total Commander in that folder, press Alt+F7, put your string into the lower search box and press search. It will give you the list of matching files, with F3 available for quick preview; then you right click the file you need and choose "Open with VS Code" from the context menu.
Sadly, you don't get the "context" (the content of actual matches) out of the box, you'll have to resort to double-tapping F3 while manually going down the file list. That's a downside, I fully admit that.
I appreciate your answer but the whole point is to do everything from the command line. If I need to use another application, I can just open VS Code in the directory and search from there directly.
cut is part of Git for Windows. It should be inside the "C:\Program Files\Git\usr\bin\" folder. So after adding this folder to your path, `rg . | fzf | cut -d ":" -f 1` should work. However, I wasn't able to actually use that to open the file in VS Code or vim, because the application started immediately without waiting for me to pick a file ...
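Command substitution avoids that last problem, since the editor doesn't start until the pipeline has produced a filename. A sketch (assuming Git Bash with `code` on the PATH; the open_match name is made up):

```shell
# search -> pick a match -> strip everything after the filename -> open it
open_match() {
  local file
  file=$(rg --line-number "$1" | fzf | cut -d ':' -f 1) && code "$file"
}
# usage: open_match "some string"
```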
I use orthodox file managers heavily (mc, FAR, etc). Your approach was my norm for over 20 years. However, fzf is the first tool I've found that far exceeds the file managers in speed of navigation.
I’ve never used fzf before and live in a shell all day, but the capabilities shown in the article don’t seem very useful to me. Am I missing something?
For Ctrl+R, how many times are people running similar variants of commands that they 1) don’t know what they typed and 2) didn’t think to simplify their workflow to not be running duplicated commands?
For Alt+C, are peoples’ file and directory layouts such a mess that they need to search entire directory trees to find what they’re looking for?
For Ctrl+R, suppose you ran 'curl' against a bunch of API endpoints on a single domain name 2 months ago. I can now easily find one without knowing too much detail by fuzzy searching curl+part_of_domain_name+part_of_path.
For Alt+C, yes files and directories are often a huge mess. It may be no fault of your own. Maybe you're working with a giant legacy codebase or digging into node_modules. Now you can type 'vim Alt+C', to find and immediately open whatever you're looking for.
Of course this can all be done other ways, but it's very convenient and very fast when paired with ripgrep especially.
Regarding Ctrl+R: Forget fzf for a moment and ask if you ever use it or some similar history command. If you do, then fzf is automatically a better interface than the default one.
If you don't, then yes, I use it all the time to find the exact command I typed in a few weeks ago. I'm not going to memorize all the options I passed in, etc.
For Alt+C: Useful even if you organize things very well. I'm in my home directory. I have /home/me/media/video/youtube/channel_name. I want to go in there. That's a lot of typing (even with Tab autocompletion). When I can just press Alt+C and type perhaps 4 characters of the channel name and I'm instantly in that directory. Do this 100 times over for different directories and the benefits become obvious. In the past I would put convenient symlinks to get to deeper directories quickly, and I now realize that approach is just a hack due to a poor navigation interface.
> For Alt+C, are peoples’ file and directory layouts such a mess that they need to search entire directory trees to find what they’re looking for?
Have you ever joined a new project? Usually they're both messy and you don't know where anything is.
Besides, even now with plenty of familiarity in my current legacy code base I can use fzf to type filenames or directories without typing the full path.
Ctrl+R is great for those repeat commands in your shell history that you want to use right now but don't want to add to your .bashrc, like repeatedly running a script with some arguments. I think the value of searching your history is self evident.
Fuzzy matching is honestly vastly superior to any sequential search; the few times the match is incorrect, it still feels like a better time investment as a user.
Yes, the article misses what fzf actually is by focusing too much on the shell integrations. I hardly ever use those features, except for fzf-tab (which partially subsumes them).
Fundamentally, fzf is a well-behaved UNIX tool for choosing things. It does one thing well (choosing things), it communicates by simple text streams on stdin and stdout, and it integrates seamlessly with other text-based programs.
It's like an interactive, human-friendly version of grep that narrows down the possibilities on every keypress. You pass any newline-separated text to its stdin, and it will let you choose from among them. Whatever you choose will get written to stdout. This can be a single choice or multiple choices. You can customize the layout and even run arbitrary scripts when each option is selected (not chosen) to show a preview.
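That contract is small enough to sketch in a couple of lines (the choose name is made up):

```shell
# newline-separated options in on stdin; the chosen line out on stdout
choose() {
  fzf --prompt "${1:-choose}> "
}
# usage: printf 'red\ngreen\nblue\n' | choose color
```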
Once you recognize it, "choosing things" shows up everywhere, so fzf can accelerate any terminal-based workflow. Examples of what I use it for:
- what unit tests to run
- what git branch/commit to check out
- what process to kill
- what wifi to connect to
- what todo list items to check off
- find an emoji and put it on the clipboard
- you're leaving your laptop and you want to choose among shutdown/restart/suspend/logout/lockscreen
- what files or options to pass into an arbitrary command (using fzf-tab)
Basically anything that, if it were in a GUI, would be shown as radio buttons or checkboxes or a dropdown menu.
You can use it in scripts/aliases, or you can just write a quick fzf command inline. I use it for so many things, it's hard to even recall them. It's part of my muscle memory now. Check the wiki on the fzf github, there are all kinds of examples. e.g. here's the one I use for killing processes[2].
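A process-killer along those lines looks roughly like this (a sketch, not necessarily the wiki's exact version):

```shell
# list processes -> multi-select with fzf -> extract PIDs -> kill them
fkill() {
  local pids
  pids=$(ps -ef | sed 1d | fzf -m | awk '{print $2}') && kill $pids
  # $pids is deliberately unquoted: word-splitting passes each PID separately
}
```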
An example from recently where I used it "inline": I was in the middle of debugging something in a Python project. I needed to temporarily remove a bunch of packages from the virtual environment, but not all of them, to narrow down where the problem was coming from. After 20 seconds of trial and error (I forgot the syntax for tail) I had something along these lines:

    pip uninstall $(pip list | tail -n +3 | fzf --multi | awk '{print $1}')
This let me multi-select from the list of installed packages and uninstall them. Go down the list, boom-boom-boom, done. Pressing enter would uninstall my choices right away. Pressing tab would first expand the $() and replace it with the stdout of the pipeline inside, so the text after the prompt would become `pip uninstall requests numpy pandas ...` or whatever I chose, without running the `pip uninstall` part until I pressed enter. I tend to do the tab-expand trick a lot with multi-selections or with dangerous commands like rm, so I can double-check the full thing first before running it.
NB: in that pip example, the "fuzzy" part wasn't even relevant. All I did was use the up- and down-arrows to navigate the list; there were only a few dozen entries so I didn't need the fuzzy search. In fact, in most of my scripts I actually turn off the fuzzy matching and use the --exact flag, so that it just searches for exact substrings, whitespace-separated, order ignored. I find this makes its behaviour more predictable. e.g. if I want to find a pyproject.toml file from among all my files, in --exact mode I can just type "pypro" and it will show matches like

    ~/repos/foo/pyproject.toml
    ~/repos/bar/pyproject.toml
then I type "bar" to narrow it to the one in the "bar" repository, so my query is just "pypro bar". But unlike fuzzy mode, it doesn't show entries that just happen to have "b", "a", and "r" somewhere in the string, like something named
~/repos/big-archives/pyproject.toml
^ ^^
I have to type slightly more than I would with fuzzy-mode, but the lack of bad search results more than makes up for it.
This is what I mean when I say the core of fzf is "choosing things". It's not really about fuzzy-searching, despite the name.
I'm a recent fish convert (from zsh) and I love it.
The only thing that annoys me is that working with Android AOSP requires sourcing a bunch of bash functions that I don't feel like porting to fish, so I'm occasionally required to drop into bash, whereas with zsh and its POSIX compatibility I could just source the bash functions and they would work fine. But fish's completions work much better out of the box, and it has some useful features like being aware of your history in each directory.
> I reviewed my options. I could
> Use my half-remembered knowledge of the FHS to guess around, with trees and greps, or
> Just know and commit it to memory and feel superior to everyone else, or
> Just pipe find . ‘/’ to fzf and start searching.
Or you could have just typed find / -name nginx.conf
Maybe an extra 2>/dev/null if you don't want to do it from sudo.
Notice in that section I called it within a subshell. If I ran `vi $(find / -name nginx.conf)`, and there happened to be multiple files named nginx.conf lying around my system, my guess is `vi` would try to open all of them as different buffers, which is not generally what I want.
Personally I never liked the "interactive" part of fzf; ideally I should be able to do `micro | fzf "myfile"` and have it opened in the editor (fzf matching the first result for the file name).
Instead it's:
`micro $(fzf)`, then type the filename, select it, and press enter to get it opened in the editor.
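fzf's non-interactive filter mode gets close to that wish, though. A sketch (the fopen name is made up; --filter prints matches without opening the UI):

```shell
# take the best match for the query and open it directly, no interaction
fopen() {
  micro "$(fzf --filter "$1" | head -n 1)"
}
# usage: fopen myfile
```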
1. You are running jump for the first time. Have you integrated jump with your
shell? Run the following command for help:
$ jump shell
If you have run the integration, enter a few directories in a new shell to
populate the database.
Are you coming from autojump or z? You can import their existing scoring
databases into jump with:
$ jump import
Doesn't work AFAIK. I tried everything and it doesn't work. How do you set this up, @elAhmo?
I didn't have to do any manual setup, I used it on ~three machines so far. Maybe something is odd with your shell? Which one do you use?
I have seen other people suggesting https://github.com/wting/autojump too, so it might be worth giving that tool a look; it seems a bit better supported and more actively developed.
I just type in a fuzzy match for some of the directories I have visited.
For example, if I know I frequently visit `/users/elahmo/developer/project`: if I type `j pro` I know it will jump there most of the time, and if I type `j project`, it will jump there 99% of the time. So it is quite good for cycling between things you often open, though of course you can only reach things that are in the history it builds.
Now combine the two: autojump and fzf. Basically look at all the directories in the jump database and pass that to fzf (sorted by frequency). You'll be amazed at what an improvement that is. I've bound it to Alt-j on my shell. I use it a ton more than Alt-c.
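A sketch of that combination (assuming gsamokovarov/jump, whose `jump top` subcommand prints the database ordered by score; the fj name is made up, and the Alt-j binding is separate shell config not shown here):

```shell
# list scored directories -> fuzzy-pick one -> cd into it
fj() {
  local dir
  dir=$(jump top | fzf) && cd "$dir"
}
```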
I have `,fv` running `nvim $(fzf)` and it's actually a little annoying because afterwards I can't find the filename that I had open. I wish there were some way to expand it into my history.
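One workaround is to expand the pick before running the editor, so the real filename is what gets recorded. A bash-specific sketch (`history -s` appends an entry to the in-memory history list):

```shell
fv() {
  local f
  f=$(fzf) || return
  history -s "nvim $f"  # record the expanded command, not the $(fzf) form
  nvim "$f"
}
```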
Just knowing about Alt+C blows my mind. I've been using fzf for a long time now and I basically just use Ctrl+T with it; I don't know why I never looked into more of its shortcuts.
3. A command to fuzzily run npm scripts with previews and syntax highlighting. Depends on bat and gojq, but you can sub gojq with jq. It does have a bug where it doesn't handle ":" characters in script keys well, but I'll fix that at some point.
npz() {
  local script
  script=$(gojq -r '.scripts | keys[]' package.json |
    fzf --preview 'gojq -r ".scripts | .$(echo {1})" package.json | bat -l sh --color always --file-name "npm run $(echo {1})" | sed "s/File: /Command: /"') &&
    npm run "$script"
}
4. A command for rapidly selecting an AWS profile. As a user of AWS SSO with many accounts/roles, this is a life-saver. I combine this with having my prompt show the currently selected role for added value.
alias aws-profile='export AWS_PROFILE=$(sed -n "s/\[profile \(.*\)\]/\1/gp" ~/.aws/config | fzf)'
alias ap="aws-profile"
5. A command for fuzzily finding and tailing an AWS CloudWatch log group. I didn't come up with this one, and I can't remember where I read about it, or I'd attribute it properly.
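Since the original source is lost, here's a sketch of how it might look (assumes AWS CLI v2, which provides `aws logs tail`; the cwtail name is made up):

```shell
# list log groups -> fuzzy-pick one -> follow its log stream
cwtail() {
  local group
  group=$(aws logs describe-log-groups \
            --query 'logGroups[].logGroupName' --output text |
          tr '\t' '\n' | fzf) || return
  aws logs tail "$group" --follow
}
```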
The interface is really cool. But is it possible to globally set the default "fuzziness" to zero? I'd like to get only exact matches in all circumstances.
[0]: https://github.com/ellie/atuin [1]: https://github.com/lotabout/skim