The connection with ripgrep is that fuzzy selection works well when getting all the possible candidates is fast, and ripgrep (or sd) is good at that. So there's a connection, but the big change is really the move to fuzzy down-selection. When dealing with a large context it really makes a difference in productivity, in my experience, because it helps discoverability a lot in a smooth and efficient way.
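For instance, a minimal sketch of that pipeline (assumes rg, fzf, and vim on PATH):

    # list candidates fast, down-select fuzzily, act on the pick
    vim "$(rg --files | fzf)"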
For a GUI alternative to fzf, one can look at rofi.
I've been using Helm for a couple of years, and it has crippled all other editors and IDEs for me.
I never read the manual, I have no idea how Helm works, or what it does, or what it is supposed to do. Yet I do use it all day every day. I just type, and Helm delivers what I meant, blazingly fast.
I've tried Atom, VSCode, Eclipse, CLion, Visual Studio, Xcode... I can't use them. You have to click on tabs to switch, you have to click dialogs to open views of the file system, and then navigate them to where you want to go, manually search for the file you want, actually use search boxes somewhere... they all feel super ancient UI-wise. Sure, Atom, VSCode, Sublime, they all have some magic box where you can do anything, but that box just isn't Helm.
With Helm I know there is a file or a buffer or anything somewhere containing something I might need, and I don't even know what it is. I just start pressing keys, and Helm goes from showing me the whole universe to pinpointing the grain of sand that I didn't know I wanted, by interacting with me.
I have no idea how to even start Helm, nor which keys I press. Do I press Tab? Sure. What do I press to start interacting with Helm? I don't even know; it just happens. Best tool ever.
- http://wikemacs.org/wiki/Helm and http://wikemacs.org/wiki/Helm-swoop
- which link to an extensive tutorial: https://tuhdo.github.io/helm-intro.html
and +1 for Spacemacs: http://spacemacs.org/ (see other starter kits: http://wikemacs.org/wiki/Starter_Kits)
It's great. I wouldn't work the old way anymore. The way I use it is wherever I am in emacs I just type the file name, then the matches are listed from all of my working directories, most recently used ones displayed first and I can quickly narrow down by typing any part of the path.
In practice, since I store the last 1000 files in the recent files list, the desired match is usually the first or at least in the top 3, if I work on the same group of files for a while.
One sad thing about FZF is that it doesn't also support tab-completion. Both ido and ivy support tab-completion and fuzzy searching at the same time. The two complement each other really well and make both tools much more general-purpose. One important application is when your search space is a tree (like a file system). FZF sucks at that, and the authors have no interest in supporting tab-completion or trees.
git clone https://gitlab.com/myaccount/mydotfiles
followed by an optional:
which I have working on any number of Linux boxes as well as the odd Windows one. I've got a bunch of other config, such as bash aliases/functions etc. anyway, so if I'm going to want that everywhere it doesn't matter what else comes with it; I only need to perform the above once and it takes a couple of seconds; there's literally no reason not to do it.
One advantage of using vim-related config is precisely so you can do the same thing on different machines, including windows and linux. One thing that article doesn't touch on is tmux, and there you can have config for vim, tmux and bash so that you can effortlessly and identically move between panes in tmux as you do in vim; copy text between panes, ssh sessions and vim buffers etc. Without this system I'd be running multiple ssh sessions in little windows and copying text between them (using the mouse!) like a doofus.
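For the tmux side, the usual trick is a vim-aware pane-switching binding along the lines of vim-tmux-navigator (a sketch; the process-name regex is the commonly circulated one and may need tweaking):

    # ~/.tmux.conf: Ctrl-h/j/k/l moves between tmux panes, or is passed
    # through to vim when the active pane is running vim/nvim
    is_vim="ps -o state= -o comm= -t '#{pane_tty}' | grep -iqE '^[^TXZ ]+ +(\\S+\\/)?g?(view|n?vim?x?)(diff)?$'"
    bind-key -n C-h if-shell "$is_vim" "send-keys C-h" "select-pane -L"
    bind-key -n C-j if-shell "$is_vim" "send-keys C-j" "select-pane -D"
    bind-key -n C-k if-shell "$is_vim" "send-keys C-k" "select-pane -U"
    bind-key -n C-l if-shell "$is_vim" "send-keys C-l" "select-pane -R"

With matching Ctrl-h/j/k/l mappings in vim, pane movement becomes one uniform set of keys everywhere.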
Disclaimer: I recently made one of these "dotfiles" repositories anyway, so take from that what you will.
Yes, you could download random scripts from the internet and execute them, and you'll probably get into trouble once an auditor checks your system.
Not a nice experience. Would not recommend.
The person you initially responded to was likely referring to ssh connections to servers.
I always get into trouble due to inconsistencies across unix environments, and still don’t understand all the different greps I’ve encountered.
I absolutely love these newer tools that shed all the attempts at unix-grey-beard backwards-compatibility.
(This is ofc completely subjective)
In my fish_user_key_bindings.fish (but surely adaptable to other shells), I bind it to Alt+G:
bind \eg 'test -d .git; and git checkout (string trim -- (git branch | fzf)); and commandline -f repaint'
bind \eG 'test -d .git; and git checkout (string trim -- (git branch --all | fzf)); and commandline -f repaint'
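For bash, a rough equivalent might look like this (a sketch; the helper name is mine, and `bind -x` needs a reasonably recent bash):

    # ~/.bashrc: Alt+G fuzzy-picks a local branch and checks it out
    __fzf_git_checkout() {
      [ -d .git ] || return
      local branch
      branch=$(git branch | fzf)
      [ -n "$branch" ] || return    # fzf aborted: do nothing
      git checkout "${branch#??}"   # strip git branch's "* " / "  " prefix
    }
    bind -x '"\eg": __fzf_git_checkout'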
and then do a checkout with `git cof`
Which means that if you run VS Code on OS X you actually have ripgrep installed already - just run the following:
/Applications/Visual\ Studio\ Code.app/Contents/Resources/app/node_modules.asar.unpacked/vscode-ripgrep/bin/rg
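If you use it regularly, an alias saves typing that path (the quoting handles the embedded spaces):

    alias rg='"/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar.unpacked/vscode-ripgrep/bin/rg"'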
I'd recommend using fd for listing files as it offers lots of filtering options right out of the box; much more usable than find, imho.
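A couple of examples of those filters (flags as documented by fd):

    # regular files only, including hidden ones, but skip .git
    fd --type f --hidden --exclude .git
    # files under ~/projects changed within the last day
    fd --type f --changed-within 1d . ~/projects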
I like the result ordering better than fzf, and it has a neat interactive mode.
- Ctrl-r for shell history
- Ctrl-t for file search
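Those bindings can be pointed at fd too (a sketch using fzf's documented environment variables; assumes fzf's shell key bindings are sourced):

    # make fzf and its Ctrl-T binding list files with fd
    export FZF_DEFAULT_COMMAND='fd --type f'
    export FZF_CTRL_T_COMMAND="$FZF_DEFAULT_COMMAND"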
$ hyperfine --warmup 2 -r 10 'rg --files | fzy -e hello' 'rg --files | fzf -f hello'
Benchmark #1: rg --files | fzy -e hello
Time (mean ± σ): 153.6 ms ± 57.9 ms [User: 399.2 ms, System: 96.9 ms]
Range (min … max): 85.9 ms … 244.3 ms 10 runs
Benchmark #2: rg --files | fzf -f hello
Time (mean ± σ): 210.5 ms ± 61.6 ms [User: 443.3 ms, System: 90.9 ms]
Range (min … max): 123.3 ms … 315.5 ms 10 runs
'rg --files | fzy -e hello' ran
1.37 ± 0.65 times faster than 'rg --files | fzf -f hello'
With fzf, and also helm in Emacs, rofi... a space is a separator between matchers, and the order of the matchers does not matter. So if you type "foo bar", it will select a path "/bar/bla/foo" just fine. This is important to me for discoverability: very often you know quite well what keywords to look for, but you don't know or are not sure about the order. So order irrelevance is very important.
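You can see this non-interactively with fzf's filter mode (the toy paths here are mine):

    $ printf '%s\n' /bar/bla/foo /bar/bla/baz | fzf --filter 'foo bar'
    /bar/bla/foo

Both terms must match, but in any order, so "foo bar" finds /bar/bla/foo.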
fzy treats the space as a literal, so the order matters, and that's a killer for me. Plus, in my experience the speed difference doesn't matter; they're all fast enough in practice.
Honestly I just feel like the authors of these tools were too lazy to learn grep/find/cut/tr/bash/awk/etc. and decided to implement everything in their own toy program, which always…
The list of "features" in the article led me to the same conclusion.
> no need of googling the right command
Yeah, right, but if you spent time learning the existing tools you would not have to google it. And this new tool requires time to get accustomed to, too.
> no need to look around for the right line in the output
I feel like the author just doesn't understand the Unix philosophy. Instead of having one tool with a quadrillion options, you incrementally plug together the tools that have already been part of Linux distributions for decades.
You don't need to "find the right line in the output", you just grep/cut/awk the output, after some time it becomes second nature / muscle memory.
I used grep for over ten years, almost every day, before I sat down and wrote ripgrep. Hell, I still use grep. So no, I'm pretty sure you're just a bit off the mark here. You might consider that speculating on the motivation of others without the facts is pretty counterproductive.
> Instead of having 1 tool with a quadrillion options
Have you looked at the man page of pretty much any GNU tool? The Unix philosophy is just a means to an end, not an end in itself. ripgrep, for example, has _tons_ of options and flags. But if you went and looked at the number of flags in GNU grep, you'd find a similar number!
Besides, grep/cut/awk/find/whatever do not even remotely come close to providing the user experience of fzf, so your entire analysis here seems way off the mark to me.
And it does respect the Unix philosophy; as I wrote in another comment, it's easy to pipeline between a source (of the stuff to find) and a sink (of what to do with the fuzzy-found item).
It does one thing, and it does it well.
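For instance (one possible source and sink; the column picked by awk assumes `ps -ef` layout, and -r is GNU xargs' "skip if empty"):

    # source (ps) -> fuzzy selection (fzf) -> sink (kill)
    ps -ef | sed 1d | fzf | awk '{print $2}' | xargs -r kill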
The Unix philosophy isn't magically better and faster than other software philosophies.
You have provided absolutely no evidence that these tools don’t fit the Unix philosophy (and you cannot, because the 2 tools mentioned actually do a great job fitting the Unix philosophy). In fact, the first example is piping the output of ls into fzf. I’m not sure what could be more Unix philosophy like than that.
Once I started using ripgrep I couldn't really go back. Maybe you don't work with huge files, but I do. And everyday I'm thankful to burntsushi for all the hard work he put in to making ripgrep.
The author creates small wrappers to kill processes, search files, etc.
The concept could be extended to recognize certain object types. E.g. if you select some files with search then you could choose from a list of operations on files (delete, grep, copy path, etc.)
If you select from processes then after selection you could select from process actions to perform on the selection (killing, sending some other signal, etc.)
So this way you don't implement separate wrappers for every kind of usage, you just create actions for certain object types and connect selections to object types.
This is how Helm works in Emacs, and probably there is a VIM equivalent too.
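In plain shell, a toy version of the idea might look like this (everything here is hypothetical glue; xclip is just one possible clipboard sink):

    # toy sketch: select an object (a file), then select an action for it
    f=$(rg --files | fzf) || exit
    action=$(printf '%s\n' edit delete copy-path | fzf) || exit
    case "$action" in
      edit)      "${EDITOR:-vi}" "$f" ;;
      delete)    rm -i "$f" ;;
      copy-path) printf '%s' "$f" | xclip -selection clipboard ;;
    esac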
I've tried to play with Zsh's zle but wasn't able to find a way to abort FZF and retain the input; I suppose I'd need to modify FZF itself.
If there's a way to do it with Zsh I'm all ears...
I'm using Zsh, and Ctrl-r is bound to fzf-history-widget. I start the history search with Ctrl-r (fuzzy by default) and type the command, but it's not present in the history and something unrelated gets selected; if I press Ctrl-g to abort, the original input is erased.
I've found at least a couple of feature requests (#389, #993) where it's suggested to bind a shortcut to the --print-query behaviour.
I think I've tried it in the past without success, but having a fresh look at it right now, I think the blame lies with fzf-history-widget and the binary way it either fetches a history line or resets the zle prompt...
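One thing that might work without patching FZF (a sketch; print-query is a bindable fzf action, and FZF_CTRL_R_OPTS is the documented hook into the history widget):

    # make Ctrl-G print the current query instead of discarding it,
    # so fzf-history-widget puts the query back on the command line
    export FZF_CTRL_R_OPTS='--bind ctrl-g:print-query'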
I doubt ripgrep will beat indexing, like GNU id-utils (mkid to build the ID file, lid to query).
If you're using git, "git grep" is useful; it searches only files indexed in git, which provides a useful speedup.
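For example (`-n` adds line numbers, and the pathspec limits the search; the pattern is just an example):

    # search only files tracked by git, C files only
    git grep -n 'PM_RESUME' -- '*.c'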
Does it provide the same user experience? i.e., Does it keep the index up to date for you automatically? If so, that's something a lot of users aren't willing to pay for.
If you want a pre-indexed solution, I'd recommend checking out qgrep instead: https://zeux.io/2019/04/20/qgrep-internals/
> If you're using git, "git grep" is useful; it searches only files indexed in git, which provides a useful speedup.
Depends on what you're searching. In a checkout of the Linux kernel:
$ time LC_ALL=C git grep -E '[A-Z]+_SUSPEND' | wc -l
maxmem 63 MB
maxmem 9 MB
$ time LC_ALL=en_US.UTF-8 git grep -E '[A-Z]+_SUSPEND' | wc -l
maxmem 64 MB
maxmem 9 MB
$ time rg '[A-Z]+_SUSPEND' | wc -l
maxmem 21 MB
maxmem 9 MB
More generally, if someone can find a non-trivial example of ag being faster than ripgrep, then I'd love to have a bug report. (Where non-trivial probably means something like "not I/O bound" and "not so short that the differences are human imperceptible noise.")
When I started, I didn't know about ripgrep; now I use it as a reference. Of course mine is still slower for regex searches and it has fewer options, but in some cases (e.g. simple string-matching searches) it is faster than rg (PM_RESUME in 160-170ms), mostly thanks to mischasan's fast strstr:
Let me know what you think about it.
I would also caution you to make sure you're benchmarking equivalent workloads.
And I know that benchmarking is hard; a coarse comparison is in scripts/compare.sh, and more detailed performance tests are in test/TestPerformance.
I did some playing around with your binary, but it's pretty hard to benchmark because I don't know what your tool is doing with respect to .gitignore, hidden files and binary files. Your output format is also non-standard and doesn't revert to a line-by-line format when piped into another tool, so it's exceptionally difficult to determine whether the match counts are correct. Either way, I don't see any evidence that fsrc is faster. That you're using a fast SIMD algorithm is somewhat irrelevant; ripgrep uses SIMD too.
On my copy of the Linux checkout (note the `-u` flags passed to ripgrep; each additional `-u` disables more filtering: ignore files, then hidden files, then binary detection):
$ time /tmp/fsrc PM_RESUME | wc -l
maxmem 67 MB
$ time rg -uuu PM_RESUME | wc -l
maxmem 13 MB
$ time rg -uu PM_RESUME | wc -l
maxmem 13 MB
$ time rg -u PM_RESUME | wc -l
maxmem 13 MB
$ time rg PM_RESUME | wc -l
maxmem 21 MB
$ time /tmp/fsrc 'Sherlock Holmes' /data/benchsuite/subtitles/2018/OpenSubtitles2018.raw.sample.en
Error : option '--term' cannot be specified more than once
Usage : fsrc [options] term
-h [ --help ] Help
-d [ --dir ] arg Search folder
-i [ --ignore-case ] Case insensitive search
-r [ --regex ] Regex search (slower)
--no-git Disable search with 'git ls-files'
--no-colors Disable colorized output
-q [ --quiet ] only print status
Build : v0.9 from Jul 5 2019
Web : https://github.com/elsamuko/fsrc
maxmem 9 MB
* gitignore behaviour: If there is a .git folder in the search folder, it uses git ls-files to get all files to search in
* a .git folder itself is never searched
* hidden folders and files are searched
* binaries are ['detected'](https://github.com/elsamuko/fsrc/blob/f1e29a3e24e5dbe87908c4ca84775116f39f8cfe/src/utils.cpp#L93), if they contain two binary 0's within the first 100 bytes or are PDF or PostScript files.
* pipe behaviour is not implemented yet
* it supports only one option-less argument as search term
* folders are set with -d
I use it for a slew of git subcommands, e.g. if they expect a ref and I don't provide one, open the log and fzf a SHA.
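Something along these lines (the helper name is mine; --ansi keeps git's colors through fzf):

    # fuzzy-pick a commit SHA from the log
    fzf_sha() {
      git log --oneline --color=always | fzf --ansi --no-sort | awk '{print $1}'
    }
    # e.g.: git rebase -i "$(fzf_sha)"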
I also use it in vim to find files or lines within them.
I also use it with my password manager to find the one I want if what I specify isn't an exact match.
fzf is great.
There are situations where git is not installed or I can't download those files for security reasons set by administrators, but those situations have been pretty rare for me.
and since i write code for a living, the time spent hacking vim to work just how i want it to is definitely worth it. with a "modern editor" i'd probably spend a similar amount of time configuring it to be vim-like, so might as well use the real thing :) i use neovim tho lol