Hacker News
How FZF and ripgrep improved my workflow (medium.com/sidneyliebrand)
324 points by daddy_drank on July 5, 2019 | 87 comments

Fuzzy down-selection (fzf) is the big win. In Emacs that would be Helm or swiper. Instead of local completion (tab tab tab ;) you do a global search, and narrow down interactively using fuzzy selection. It's really a different experience, a bit disconcerting at first (tab is built into muscle memory), but it quickly becomes a one-way change. No way I'd let go of fuzzy selection now.

The connection with ripgrep is that fuzzy selection works well when getting all the possible candidates is fast, and ripgrep (or sd) is good for that. So there's a connection, but the big change is really moving to fuzzy down-selection. When dealing with a large context it really makes a difference in productivity in my experience, because it helps discoverability a lot, in a smooth and efficient way.
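The shape of the trick is just a pipe: a fast candidate generator on the left, the fuzzy selector on the right (with the real tools, something like `rg --files | fzf`). A toy sketch, with made-up filenames and `grep` standing in for the interactive fuzzy-narrowing step so it runs non-interactively:

```shell
# Real (interactive) use:  vim "$(rg --files | fzf)"
# Toy stand-in: printf fakes the candidate list, grep fakes the narrowing.
printf '%s\n' src/main.c src/fuzzy.c docs/notes.md | grep 'src/'
```

The point is that the selector doesn't care where the candidates come from, only that they arrive quickly, one per line, on stdin.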

For a GUI alternative to fzf, one can look at rofi.

> but it quickly becomes a one-way change.

I've been using Helm for a couple of years, and it has crippled all other editors and IDEs for me.

I never read the manual, I have no idea how Helm works, or what it does, or what it is supposed to do. Yet I do use it all day every day. I just type, and Helm delivers what I meant, blazingly fast.

I've tried Atom, VSCode, Eclipse, CLion, Visual Studio, Xcode... I can't use them. You have to click on tabs to switch, click through dialogs to open views of the file system, navigate them to where you want to go, manually search for the file you want, actually use search boxes somewhere... they all feel super ancient UI-wise. Sure, Atom, VSCode, Sublime, they all have some magic box where you can do anything, but that box just isn't Helm.

With Helm I know there is a file or a buffer or anything somewhere containing something that I might need, and I don't even know what it is; I just start pressing keys and Helm goes from showing me the whole universe to pinpointing the grain of sand that I didn't know I wanted, by interacting with me.

I have no idea how to even start Helm, nor which keys I press there. Do I press Tab? Sure. What do I press to start interacting with Helm? I don't even know, it just happens. Best tool ever.

Sounds amazing. Would https://github.com/emacs-helm/helm/blob/master/README.md be the correct starting point for learning more about Helm?

Yes. I also like

- http://wikemacs.org/wiki/Helm and http://wikemacs.org/wiki/Helm-swoop

- which link to an extensive tutorial: https://tuhdo.github.io/helm-intro.html

and +1 for Spacemacs: http://spacemacs.org/ (see other starter kits: http://wikemacs.org/wiki/Starter_Kits)

No idea. I just use spacemacs, and there it is enabled by default.

The cherry on the cake, the ultimate big win for me, is that with fzf, fuzzy selection behaves uniformly across all the tools in my stack. Before that, each one had subtle differences in how it filtered the result set from the input, which often threw me off.

That's why emacs users tend to use emacs for everything. Ultimately, everything we do is all text. Editing source code, writing notes, searching through documents, writing emails. It's natural to want to use your favourite tool for all of your text editing needs.

Why would you choose such a terrible text editor then? </joke>

Honestly? I derive pleasure from hard work. I'm somewhat of a masochist in that way, I guess. I shun convenience and choose the hard path towards personal enlightenment.

Curious what the rest of your dev env is...

I run Gentoo. Very minimal system. Really quick. sway window manager (i3 clone for wayland). It's probably not as bad as I made it sound, though. I use tools and I am always trying to automate away any work in my life. But I like simple tools that I can understand. I don't like magic. I won't use a tool that promises to solve all my problems and make me rich at the same time. I like to stay on top of it all.

Oh yeah? Then try coding in Word...

> Instead of local completion (tab tab tab ;) do a global search, and narrow down interactively using fuzzy

It's great. I wouldn't work the old way anymore. The way I use it: wherever I am in Emacs, I just type the file name, the matches are listed from all of my working directories with the most recently used ones displayed first, and I can quickly narrow down by typing any part of the path.

In practice, since I store the last 1000 files in the recent-files list, the desired match is usually the first, or at least in the top 3, if I work on the same group of files for a while.

For (Neo)Vim, the plugin you are looking for is Denite, it can be cajoled into using fzf, rg, whatever external process you need. It's a game-changer.

ido-mode was the standard back in the day. I have since moved to ivy/swiper but it was ido-mode that showed me the way some ten years ago (or more).

One sad thing about FZF is that it doesn't also support tab-completion. Both ido and ivy support tab-completion and fuzzy searching at the same time. They complement each other really well and make both of them much more general-purpose tools. One important application is when your search space is a tree (like a file system). FZF sucks at that, and the authors have no interest in supporting tab-completion or trees.

What made you switch? I'm still using ido-mode and am quite happy.

I went back and forth for a bit. Ultimately ivy was faster, though, which meant I could use it for more things. I used ido-hacks before which sped ido up enough to use it for M-x, but ivy is already easily fast enough for that.

I also use fzf as a git branch picker :)

In my fish_user_key_bindings.fish (but surely adaptable to other shells), I bind it to Alt+G:

    function fish_user_key_bindings
        bind \eg 'test -d .git; and git checkout (string trim -- (git branch | fzf)); and commandline -f repaint'
        bind \eG 'test -d .git; and git checkout (string trim -- (git branch --all | fzf)); and commandline -f repaint'
    end
This goes neatly with fzf's default Alt+C binding to navigate to a directory: Alt+C -> go to a git repo, followed by Alt+G -> pick a branch

In Bash/Zsh you can do it with `git config --global alias.cof $'!git for-each-ref --format=\''%\(refname:short\)\'' refs/heads | fzf | xargs git checkout'`

and then do a checkout with `git cof`

That is extremely useful (and for fish, which I don't usually see), thank you!

I can't really fault either of these tools. They have good defaults, are easy to configure, and are well documented. Of course they are very performant too, but if you take the time to configure your wildignore and learn how to use find's and grep's flags (maybe script some of this with shell or vimscript if necessary), you can get close to the performance of these tools and define your own usability if needed. Basically I'm saying two things, and I'll just state that this is what works for me: learning the included tools is worthwhile, and not having to install and set up additional tools is good, especially when you are using lots of different machines. Again, for Vim specifically, I'm enjoying having a small set of keybindings for built-in functionality (:b*, :vimgrep) rather than a ton of plugins. I just have to move a single .vimrc around, or failing that, I understand how it all works by default anyway.

(Edit: typo)

Not to argue with you as I see where you're coming from (all things being equal, I too usually go with the defaults), but the argument about "not having to install and set up additional tools...on lots of different machines" is very weak in these days of:

git clone https://gitlab.com/myaccount/mydotfiles

followed by an optional:


which for me works on any number of linux boxes as well as the odd windows one. I've got a bunch of other config anyway, such as bash aliases/functions etc, so if I'm going to want that everywhere it doesn't matter what else comes with it; I only need to perform the above once and it takes under a couple of seconds. There's literally no reason not to do it.

One advantage of using vim-related config is precisely that you can do the same thing on different machines, including windows and linux. One thing that article doesn't touch on is tmux: you can have config for vim, tmux and bash so that you move between panes in tmux as effortlessly and identically as you do in vim, copy text between panes, ssh sessions and vim buffers, etc. Without this system I'd be running multiple ssh sessions in little windows and copying text between them (using the mouse!) like a doofus.

If you are allowed to git clone random repositories and execute binaries from them (presumably pulling in more random dependencies), then I'd say you do not (or should not) manage what most people mean when they say "lots of different machines".

Note that not every machine I SSH into has git installed or provides me with permission to install it.

Disclaimer: I recently made one of these "dotfiles" repositories anyway, so take from that what you will.

Assuming it's hosted on something like GitHub, you can just as easily download a zip/tarball of the repository instead.

A copy is, so I can grab my bashrc if I need it. However, it’s still impossible to install things on certain systems…

If it's the "too complicated to bother" sort of "impossible" as opposed to "denied by the company security policies", you can always use linuxbrew or nix.



It's not the technical capability.

Yes, you could download random scripts from the internet and execute them, and you'll probably get into trouble once an auditor checks your system.

Not a nice experience. Would not recommend.

Yes, that's a good point. I also have a dotfiles repo, and I actually go one step further in that most things are organised into Ansible roles, for my laptop locally and for remote machines if necessary. But as the other responses have suggested, it's also about the times when using a dotfiles approach isn't possible or desired.

Also, not that it's a day-job thing for me, but it's handy being more portable across the Unices. I am especially interested in both NetBSD and OpenBSD (and most of my work colleagues develop on macOS). Some "newer" tools might of course be readily available in the base packages or ports, but being able to use the defaults on Linux and other Unices is good. Of course there is still the different-flag issue due to GNUisms vs everything else, but when you factor in a cloud virtual machine which might be running a different distro to you locally, or container images that you may only use interactively, I think it ends up being easier and simpler to learn the defaults rather than having to deal with different package systems or manual compilation. Again, this is how I work and is my opinion; nothing against these tools or the people using them.

I do hope you don't do this on production servers!

Who develops on production servers?

> not having to install and setup additional tools is good, especially when you are using lots of different machines

the person you initially responded to was likely referring to ssh connections to servers

Not everyone is a developer. We still have application and system admins, especially where segregation of duties exists due to security requirements.

Non-developers need to change the state of machines on occasion, yes. You're saying there's something about using dotfiles and vim plugins which are inherently less safe than using other open source and proprietary software? Why? (If it's the plug-ins you're worried about there's nothing to stop you forking a version you're happy with; copying to your own corporate repo etc).

>especially when you are using lots of different machines.

I always get into trouble due to inconsistencies across unix environments, and I still don't understand all the different greps I've encountered. I absolutely love these newer tools that shed all the attempts at unix-greybeard backwards compatibility.

(This is ofc completely subjective)

Fun fact about ripgrep: it's built into Visual Studio Code - it's why the "find in files" feature runs so fast.

Which means that if you run VS Code on OS X you actually have ripgrep installed already - just run the following:

    /Applications/Visual\ Studio\ Code.app/Contents/Resources/app/node_modules.asar.unpacked/vscode-ripgrep/bin/rg
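If you want that bundled copy on your PATH without a separate install, an alias works. Treat this as a sketch: the exact path can shift between VS Code releases and platforms, so verify it first.

```shell
# macOS sketch: alias the ripgrep binary bundled with VS Code.
# Inner quotes keep the spaces in the path intact when the alias expands.
alias rg='"/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar.unpacked/vscode-ripgrep/bin/rg"'
```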

Shameless plug: FZF even works well as a dictionary/word autocompleter. https://github.com/Avi-D-coder/fzf-wordnet.vim

Neat. I use it for all kinds of stuff in Rails Feb, from searching routes and rake tasks to finding Cucumber step definitions.

I'd recommend using fd for listing files as it offers lots of filtering options right out of the box - much more usable than find imho.


I can't edit now, but that was typed on my phone and it autocorrected 'dev' to 'Feb'. Amazing.

See also Skim (fzf-a-like written in Rust):


I like the result ordering better than fzf, and it has a neat interactive mode.

I'm pretty junior at coding but I find FZF great.

  Ctrl-r for shell history
  Ctrl-t for file search
Especially typing `code`, then Ctrl-t to find the file, then Enter to open it in VS Code.

Game changer.

Ctrl-t is a game changer, thank you for sharing this, I only knew the `**<TAB>` notation.

export FZF_COMPLETION_TRIGGER='' and you don't need `**`, just <TAB>

fzy[1], which is written in C, seems to be slightly faster:

    $ hyperfine --warmup 2 -r 10 'rg --files | fzy -e hello'  'rg --files | fzf -f hello'
    Benchmark #1: rg --files | fzy -e hello
      Time (mean ± σ):     153.6 ms ±  57.9 ms    [User: 399.2 ms, System: 96.9 ms]
      Range (min … max):    85.9 ms … 244.3 ms    10 runs
    Benchmark #2: rg --files | fzf -f hello
      Time (mean ± σ):     210.5 ms ±  61.6 ms    [User: 443.3 ms, System: 90.9 ms]
      Range (min … max):   123.3 ms … 315.5 ms    10 runs
      'rg --files | fzy -e hello' ran
        1.37 ± 0.65 times faster than 'rg --files | fzf -f hello'

[1] https://github.com/jhawthorn/fzy

There's a big difference in how their matching works, that made me prefer fzf. It's related to how they deal with spaces.

With fzf, and also helm in Emacs, rofi... a space is a separator between matchers, and the order of the matchers does not matter. So if you type "foo bar", it will select a path "/bar/bla/foo" just fine. This is important to me for discoverability: very often you know quite well what keywords to look for, but you don't know or are not sure about the order. So order irrelevance is very important.

fzy treats the space as a literal. So the order matters. And that's a killer for me. Plus in my experience the speed difference is not important, all are fast enough in practice.
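A toy way to see the difference: fzf-style "foo bar" is an AND of two independent matchers, which a chain of greps approximates (ignoring the fuzziness itself; the paths are made up):

```shell
# "foo bar" as two independent matchers: order within the path is irrelevant.
printf '%s\n' /bar/bla/foo /bar/only /foo/only | grep foo | grep bar
# A literal-space matcher (fzy-style) would look for "foo bar" in that
# exact order and match none of these paths.
```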

Thanks for pointing this out. I guess fzy's primary goal is path matching which might explain this behaviour.

Fzf is one of those tools I learned about and that instantly became one of the things I can't go without anymore.

Been using linux for 15 years and I never found any use case for all these fancy grep/ctrl+r/find replacements.

Honestly I just feel like the authors of these tools were just too lazy to learn grep/find/cut/tr/bash/awk/etc and decided to implement everything in their own toy program, which always

The list of "features" in the article led me to the same conclusion

> no need of googling the right command

Yeah right, but if you spent time learning the existing tools you would not have to google it. And this new tool requires time to get accustomed to, too.

> no need to look around for the right line in the output

I feel like the author just doesn't understand the unix philosophy. Instead of having one tool with a quadrillion options, you incrementally plug together the tools that have been part of linux distributions for decades. You don't need to "find the right line in the output"; you just grep/cut/awk the output, and after some time it becomes second nature / muscle memory.

> Honestly I just feel like the authors of these tools were just too lazy to learn grep/find/cut/tr/bash/awk/etc and decided to implement everything in their own toy program, which always

I used grep for over ten years, almost every day, before I sat down and wrote ripgrep. Hell, I still use grep. So no, I'm pretty sure you're just a bit off the mark here. You might consider that speculating on the motivation of others without the facts is pretty counterproductive.

> Instead of having 1 tool with a quadrillion options

Have you looked at the man page of pretty much any GNU tool? The Unix philosophy is just a means to an end, not an end in itself. ripgrep, for example, has _tons_ of options and flags. But if you went and looked at the number of flags for GNU grep, you'd find a similar number!

Besides, grep/cut/awk/find/whatever do not even remotely come close to providing the user experience of fzf, so your entire analysis here seems way off the mark to me.

I'm sure fzf has fewer options than awk...

And it does respect the Unix philosophy; as I just wrote in another comment, it's easy to pipeline between a source (of the stuff to find) and a sink (of what to do with the fuzzy-found result).

It does one thing, and it does it well.
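That source | select | sink shape looks like this in practice, e.g. `ls | fzf | xargs rm -i`. A non-interactive sketch with `head -n1` standing in for fzf and made-up filenames:

```shell
# producer | selector | consumer — fzf slots into the middle of any pipe.
# Real use:  ls | fzf | xargs rm -i
printf '%s\n' a.txt b.txt c.txt | head -n1 | xargs echo would-remove
```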

All else aside, grep is _noticeably_ slower than ripgrep.

Once I started using ripgrep I couldn't really go back. Maybe you don't work with huge files, but I do. And everyday I'm thankful to burntsushi for all the hard work he put in to making ripgrep.

> I feel like the author just don't understand the unix philosophy.

The unix philosophy isn't magically better and faster than other software philosophies

Maybe, but it has been used successfully and productively for decades by millions of programmers. It may be worth learning instead of reinventing...

Your complaint seems to be people are writing new tools.

You have provided absolutely no evidence that these tools don't fit the Unix philosophy (and you cannot, because the two tools mentioned actually do a great job of fitting it). In fact, the first example is piping the output of ls into fzf. I'm not sure what could be more Unix-philosophy-like than that.

I am puzzled, what tool can do anything remotely similar to fzf? How does it reinvent anything?

FZF is also very nice for Ctrl-r history search in the shell.

May be of interest:

The author creates small wrappers to kill processes, search files, etc.

The concept could be extended to recognize certain object types. E.g. if you select some files with search then you could choose from a list of operations on files (delete, grep, copy path, etc.)

If you select from processes then after selection you could select from process actions to perform on the selection (killing, sending some other signal, etc.)

So this way you don't implement separate wrappers for every kind of usage, you just create actions for certain object types and connect selections to object types.

This is how Helm works in Emacs, and probably there is a VIM equivalent too.
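A minimal sketch of that idea in shell — the type names and action lists here are made up, and in real use both menus would be fzf invocations rather than the `head -n1` stand-in:

```shell
# Map an object type to its action menu; the selection then dispatches.
actions_for() {
    case "$1" in
        file)    printf '%s\n' delete grep copy-path ;;
        process) printf '%s\n' kill stop cont ;;
    esac
}
# Real use:  action=$(actions_for process | fzf)
action=$(actions_for process | head -n1)
echo "would run: $action"
```

New object types then only need a new `case` arm, not a whole new wrapper script.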

FZF/rg combination is so much snappier than Ctrl+P on my vim setup!

I use workflows in Alfred[0] for finding files, navigating to pages, and searching around a multitude of other customized datasets. The advantage I have with that over a fzf, at least for me, is being able to search instantly with a Cmd+Space without having to switch to the terminal.

[0] https://www.alfredapp.com/workflows/

Using FZF to search the shell history is great until you don't find the command you were searching for: you need to abort, and what you typed is cancelled. That's my only gripe with it.

I've tried to play with Zsh's Zle but wasn't able to find a way to abort FZF and retain the input, I suppose I need to modify FZF itself.

If there's a way to do it with Zsh I'm all ears...

In Helm you can go back and forth as much as you want. Sometimes you end up in a dark alley, and you are just a key press away from stepping back out and trying somewhere else.

Emacs Helm? I'm so used to ido... ^__~

can you explain what you are looking for in a little more detail?

I'll try.

I'm using Zsh, and Ctrl-r is bound to fzf-history-widget. I start the history search with Ctrl-r (fuzzy by default) and input the command, but it's not present in the history and something unrelated is selected; if I press Ctrl-g to abort, the original input is erased.

I've found at least a couple of feature requests (#389, #993) where it's suggested to bind a shortcut to the --print-query option.

I think I've tried it in the past without success, but having a fresh look at it right now, I think the blame lies with fzf-history-widget and the binary way it either fetches a history line or resets the zle prompt...

> It performs amazing even in a larger code base.

Doubt ripgrep will beat indexing, like GNU id-utils (mkid to build the ID file, lid to query).

If you're using git, "git grep" is useful; it searches only files indexed in git, which provides a useful speedup.

> Doubt ripgrep will beat indexing, like GNU id-utils (mkid to build the ID file, lid to query).

Does it provide the same user experience? i.e., Does it keep the index up to date for you automatically? If so, that's something a lot of users aren't willing to pay for.

If you want a pre-indexed solution, I'd recommend checking out qgrep instead: https://zeux.io/2019/04/20/qgrep-internals/

> If you're using git, "git grep" is useful; it searches only files indexed in git, which provides a useful speedup.

Depends on what you're searching. In a checkout of the Linux kernel:

    $ time LC_ALL=C git grep -E '[A-Z]+_SUSPEND' | wc -l

    real    1.033
    user    5.769
    sys     0.592
    maxmem  63 MB
    faults  0

    real    1.033
    user    0.000
    sys     0.008
    maxmem  9 MB
    faults  0

    $ time LC_ALL=en_US.UTF-8 git grep -E '[A-Z]+_SUSPEND' | wc -l

    real    3.624
    user    21.910
    sys     0.404
    maxmem  64 MB
    faults  0

    real    3.623
    user    0.000
    sys     0.008
    maxmem  9 MB
    faults  0

    $ time rg '[A-Z]+_SUSPEND' | wc -l

    real    0.138
    user    0.767
    sys     0.704
    maxmem  21 MB
    faults  0

    real    0.138
    user    0.003
    sys     0.006
    maxmem  9 MB
    faults  0
This is even despite the fact that `git grep` already has a file index (ripgrep has to process the >200 `.gitignore` files in the Linux repo for every search) and the fact that `git grep` is also using parallelism.

This looks interesting and I'm going to play with it, but if you still pipe ps into grep before killing, 'pkill -f' will do exactly what you want 80% of the time.

pgrep is pretty handy too.

There are a bunch of similar tools. I use https://github.com/peco/peco

How does ripgrep compare to silversearcher-ag for performance?

I did an extensive comparison a while ago: https://blog.burntsushi.net/ripgrep/ --- It should still largely be pretty accurate (and ripgrep has only gotten faster).

More generally, if someone can find a non-trivial example of ag being faster than ripgrep, then I'd love to have a bug report. (Where non-trivial probably means something like "not I/O bound" and "not so short that the differences are human-imperceptible noise.")

I'm currently working on a searcher on my own: https://github.com/elsamuko/fsrc

When I started, I didn't know about ripgrep; now I use it as a reference. Of course it's still slower for regex searches and it has fewer options, but in some cases (e.g. simple string-matching search) it is faster than rg (PM_RESUME in 160-170ms), mostly thanks to mischasan's fast strstr: https://mischasan.wordpress.com/2011/07/16/convergence-sse2-...

If you want, let me know what you think about it.

I don't see any build instructions, so I don't know how to try it. Sorry. I did run `./scripts/build_boost.sh`, but that didn't produce any `fsrc` binary that I could use.

I would also caution you to make sure you're benchmarking equivalent workloads.

There are no build instructions yet, you need to build boost with build_boost.sh and then open qmake/fsrc.pro with Qt Creator. There are binaries available here, too: https://github.com/elsamuko/fsrc/releases

And I know that benchmarking is hard; a coarse comparison is in scripts/compare.sh. More detailed performance tests are in test/TestPerformance.

I don't know what Qt Creator is. Please provide tools to build your code from the command line.

I did some playing around with your binary, but it's pretty hard to benchmark because I don't know what your tool is doing with respect to .gitignore, hidden files and binary files. Your output format is also non-standard and doesn't revert to a line-by-line format when piped into another tool, so it's exceptionally difficult to determine whether the match counts are correct. Either way, I don't see any evidence that fsrc is faster. That you're using a fast SIMD algorithm is somewhat irrelevant; ripgrep uses SIMD too.

On my copy of the Linux checkout (note the `-u` flags passed to ripgrep):

    $ time /tmp/fsrc PM_RESUME | wc -l

    real    0.143
    user    0.330
    sys     0.474
    maxmem  67 MB
    faults  0

    $ time rg -uuu PM_RESUME | wc -l

    real    0.149
    user    0.564
    sys     0.690
    maxmem  13 MB
    faults  0

    $ time rg -uu PM_RESUME | wc -l

    real    0.112
    user    0.481
    sys     0.675
    maxmem  13 MB
    faults  0

    $ time rg -u PM_RESUME | wc -l

    real    0.118
    user    0.507
    sys     0.701
    maxmem  13 MB
    faults  0

    $ time rg PM_RESUME | wc -l

    real    0.142
    user    0.749
    sys     0.726
    maxmem  21 MB
    faults  0
I originally tried to run `fsrc` on a single file (in order to better control the benchmark), but I got an error:

    $ time /tmp/fsrc 'Sherlock Holmes' /data/benchsuite/subtitles/2018/OpenSubtitles2018.raw.sample.en
    Error  : option '--term' cannot be specified more than once
    Usage  : fsrc [options] term
      -h [ --help ]         Help
      -d [ --dir ] arg      Search folder
      -i [ --ignore-case ]  Case insensitive search
      -r [ --regex ]        Regex search (slower)
      --no-git              Disable search with 'git ls-files'
      --no-colors           Disable colorized output
      -q [ --quiet ]        only print status

    Build : v0.9 from Jul  5 2019
    Web   : https://github.com/elsamuko/fsrc

    real    0.005
    user    0.002
    sys     0.002
    maxmem  9 MB
    faults  0

I included qmake and added a `deploy.sh` in the main source folder, which generates the deployed zip file. Let me know if this doesn't build.

  * gitignore behaviour: If there is a .git folder in the search folder, it uses git ls-files to get all files to search in
  * a .git folder itself is never searched
  * hidden folders and files are searched
  * binaries are ['detected'](https://github.com/elsamuko/fsrc/blob/f1e29a3e24e5dbe87908c4ca84775116f39f8cfe/src/utils.cpp#L93), if they contain two binary 0's within the first 100 bytes or are PDF or PostScript files.
  * pipe behaviour is not implemented yet
  * it supports only one option-less argument as search term
  * folders are set with -d

Emacs has something similar to FZF called ido or ivy

Can this be hooked into <shell>, kinda like pagers? So every long listing gets piped automatically to fzf?

Yes. The composability (in the usual Unix way) is the best thing about fzf.

I use it for a slew of git subcommands for example, e.g. if they expect a ref and I don't provide one, open log and fzf a SHA.

I also use it in vim to find files or lines within them.

I also use it with my password manager to find the one I want if what I specify isn't an exact match.

fzf is great.

See also pgrep & pkill

ripgrep is great: all of the CLI options are very memorable, and it has sane defaults for everything I want to use it for. The one thing I add to a ripgreprc is --smart-case.
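For anyone wiring that up: ripgrep only reads a config file if the RIPGREP_CONFIG_PATH environment variable points at one, and the file is just one flag (with optional value) per line. Something like:

```shell
# ~/.ripgreprc — one flag per line:
#   --smart-case
#
# Then, in your shell rc, tell ripgrep where to find it:
export RIPGREP_CONFIG_PATH="$HOME/.ripgreprc"
```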

I’m getting one

I see these Vim-enhancement posts a lot. If you need to install a mountain of plugins just to make it usable, why not use a more modern editor? Seems like nostalgia for the Unix greybeard era, possibly some elitism for when someone who isn't intimately familiar with their config has to ask for help to use their machine.

Vim is actively developed, with major new features like a terminal emulator and async arriving in the latest versions. You need to better define "modern".

Exactly. I use a modern editor whenever I'm working on one of my main computers, and the only time I use vim is when I'm working on a remote server and can't mount the drive remotely (or where mounting would just be a pain). Because of that, I don't want any plugins when I use vim as I'll probably be using it on a new machine every time, and setting up my environment would be a waste of time.

Setting up your vim environment is as simple as cloning a git repository that contains your .vim files. It takes me about 60 seconds to have vim up and running with all my favorite plugins. That's faster than you can download and install any other text editor. It's what I've been doing on just about every machine I've used for the past 5 years.

There are situations where git is not installed or I can't download those files for security reasons set by administrators, but those situations have been pretty rare for me.

can't speak for everyone, but as a software engineer, it's about getting as close to my ideal editing workflow as possible. with vim i get to stay in my terminal, and my fingers rarely leave home row. it's just quicker to do things and hard to go back once muscle memory sets in.

and since i write code for a living, the time spent hacking vim to work just how i want it to is definitely worth it. with a "modern editor" i'd probably spend a similar amount of time configuring it to be vim-like, so might as well use the real thing :) i use neovim tho lol
