Ripgrep – A new command line search tool (burntsushi.net)
740 points by dikaiosune on Sept 23, 2016 | 209 comments



Meh, yet another grep tool... wait, by burntsushi! Whenever I hear of someone wanting to improve grep, I think of the classic ridiculous fish piece[0]. But when I saw that this one was by the author of Rust's regex library, which I know from a previous post on here to be quite sophisticated, I perked up.

Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.

Thanks for your hard work! Looking forward to taking this for a spin.

[0] http://ridiculousfish.com/blog/posts/old-age-and-treachery.h...


Totally agree. This is like burntsushi's technical tour de force, the culmination of years of work building up a stack of technology leading up to this utility and article. It's a simple tool, but he's built every piece from the ground up to be the very best, and then documented the shit out of it. So cool.


Another burntsushi project was recently posted, but didn't get much attention:

https://news.ycombinator.com/item?id=12559515


Looks interesting. But I did not find binaries for it and do not want to set up a Rust env to try it out.


The releases are right there on GitHub: https://github.com/BurntSushi/ripgrep/releases


I think the GP was asking about xsv, not ripgrep. There are binary releases for xsv though: https://github.com/BurntSushi/xsv/releases


Awesome. Thanks


PS: You might want to revise this verbiage in the README markdown file:

    Installing xsv is a bit hokey right now. Ideally, I could release binaries for Linux, Mac and Windows. Currently, I'm only able to release binaries for Linux because I don't know how to cross compile Rust programs.


Ah how embarrassing! I will fix that soon. Thanks :-)


No need for any embarrassment. I should have looked for releases instead of only looking at the doc.


I'm the author of ag. That was a really good comparison of the different code searching tools. The author did a great job of showing how each tool misbehaved or performed poorly in certain circumstances. He's also totally right about defaults mattering.

It looks like ripgrep gets most of its speedup on ag by:

1. Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support. Switching would cause me to receive a lot of angry emails. Maybe I'll do it anyway. PCRE has some annoying limitations. (For example, it can only search up to 2GB at a time.)

2. Not counting line numbers by default. The blog post addresses this, but I think results without line numbers are far less useful; so much so that I've traded away performance in ag. (Note that even if you tell ag not to print line numbers, it still wastes time counting them. The printing code is the result of me merging a lot of PRs that I really shouldn't have.)

3. Not using mmap(). This is a big one, and I'm not sure what the deal is here. I just added a --nommap option to ag in master.[1] It's a naive implementation, but it benchmarks comparably to the default mmap() behavior. I'm really hoping there's a flag I can pass to mmap() or madvise() that says, "Don't worry about all that synchronization stuff. I just want to read these bytes sequentially. I'm OK with undefined behavior if something else changes the file while I'm reading it."

The author also points out correctness issues with ag. Ag doesn't fully support .gitignore. It doesn't support Unicode. Inverse matching (-v) can be crazy slow. These shortcomings are mostly because I originally wrote ag for myself. If I didn't use certain gitignore rules or non-ASCII encodings, I didn't write the code to support them.

Some expectation management: If you try out ripgrep, don't get your hopes up. Unless you're searching some really big codebases, you won't notice the speed difference. What you will notice, however, are the feature differences. Take a look at https://github.com/BurntSushi/ripgrep/issues to get a taste of what's missing or broken. It will be some time before all those little details are ironed out.

That said, may the best code searching tool win. :)

1. https://github.com/ggreer/the_silver_searcher/commit/bd65e26...


Thanks for the response! Some notes:

1. In my benchmarks, I do control for line numbers by either explicitly making it a variable (i.e., when you see `(lines)`) or by making all tools count lines to make the comparison fair. For the most part, this only tends to matter in the single-file benchmarks.

2. For memory maps, you might get very different results depending on your environment. For example, I enabled memory maps on Windows where they seem to do a bit better. (I think my blog post gives enough details that you could reproduce the benchmark environment precisely if you were so inclined. This was important to me, so I spent a lot of time documenting it.)

3. The set of features supported by rg should be very very close to what is supported by ag. Reviewing `ag`'s man page again, probably the only things missing from rg are --ackmate, --depth, some of the color configurability flags (but rg does do coloring), --passthrough, --smart-case and --stats maybe? I might be missing some others. And Mercurial support (but ag's is incomplete). In exchange, rg gives you much better single file performance, better large-repo performance and real Unicode support that doesn't slow way down. I'd say those are pretty decent expectations. :-)

Thanks for ag by the way. It and ack have definitely ushered in a new kind of searching. I have some further information retrievalish ideas on evolving the concept, but those will have to wait!


In terms of core features, ripgrep is totally there. It searches fast. It ignores files pretty accurately. It outputs results in a pleasant and useful format. If a new user tries rg, they'll be very happy.

My warning about the feature differences was meant to temper ag users' expectations. There are lots of little things that ag users are accustomed to that are either different or missing in ripgrep. Off the top of my head: Ag reads the user's global gitignore. (This is harder than most people think.) It detects stdout redirects such as "ag blah > output.txt" and ignores output.txt. It can search gz and xz files. It defaults to smart-case searching. It can limit a search to one hardware device (--one-device), avoiding slow reads on network mounts. And as a commenter already pointed out, it supports the --pager option. Taken together, all those small differences are likely to cause an average ag user some grief. I wanted to manage expectations so that users wouldn't create annoying "issues" (really, feature requests) on your GitHub repo. Sorry if that came off the wrong way.

On a completely unrelated note: I see ripgrep supports .rgignore files, similar to how ag supports .agignore. It'd be nice if we could combine forces and choose a single filename for this purpose. That way when the next search tool comes along, it can use the same thing instead of .zzignore or whatever. It would also make it easier for users to switch between our tools. I'd suggest a generic name like ".ignore" or ".ignores", but I'm sure some tool creates such files or directories already.

Edit: Actually, it looks like .ignore can work. The only examples I've found of .ignore files are actual files containing ignore patterns.


You raise good points, thank you. I hope to support some of those features, since they seem like nice conveniences.

In principle I'd be fine standardizing on a common ignore file. We'd need to come up with a format (I think I'd vote for "do what gitignore does", since I think that's what we're both doing now anyway).

Adding files to this list is kind of a bummer though. I could probably get away with replacing `.rgignore` proper, but I suspect you'd need to add it without replacing `.agignore`, or else those angry users you were talking about might show themselves. :-)

I do kind of like `.grepignore` since `grep` has kind of elevated itself to "search tool" as a term, but I can see how that would be confusing. `.searchignore` feels too long. `.ignore` and `.ignorerc` feel a bit too generic, but either seems like the frontrunner at the moment.


I also vote for "do what gitignore does". My plan is to add support for the new file name, deprecate .agignore, and update docs everywhere. But it'd be a while before I removed .agignore completely.

I really like .ignore, and I like it because it's generic. The information I want it to convey is:

> Dear programs,

> If you are traversing this directory, please ignore these things.

Of course, some programs could still benefit from having application-specific ignore files, but it'd cut down on a lot of cruft and repetition.



…and merged: https://github.com/ggreer/the_silver_searcher/pull/974

I'll tag a new release in a day or two. Also, it looks like the sift author is getting on the .ignore train: https://github.com/svent/sift/issues/78#issuecomment-2493352...

This worked out pretty well. :)



This is probably the best case I have ever seen of out-in-the-open open source developers of similar-but-different tools collaborating on a new standard and implementing it in record time.

Keep it up all (rg/ag/sift)!


I completely agree. That was one of the most reasonable and level-headed discussions between strangers I have _ever_ seen on the Internet!


I really like .grepignore, as it is generic enough to encompass all tools with grep-like functionality, while never stepping on the toes of other programs that may also require their own, different ignore files.


The problem is that grep will never obey .grepignore. That's so confusing as to be a deal-breaker.

Also, what about programs that have search functionality as part of their design, but not as their core function? For example, I don't want my text editor to search .min.js files. I'd even prefer it if such files didn't show up in my editor's sidebar. Do I have to add *.min.js to .searchignore and .atomignore? (Or if the editor people ever work out a standard, maybe it will be .editorignore.)

If I had to draw a Venn diagram of ignore patterns in my text editors, my search tools, and my rsync scripts, they'd mostly overlap. I don't deny the need for application-specific ignores, but there is a large class of applications that could benefit from a more generic ignore file.


I do think it would be better to have the name at least reflect that class of applications; maybe ".searchignore", as someone else suggested. There may be overlap, but it's hard to predict all the types of applications that need ignore functionality, and something as simple as backing things up with rsync seems like a case where someone could well want considerably different ignores.


I'd say that .ignore is too generic a name. What is to be ignored by what?

But I like the idea of standardizing this. Perhaps the cache directory tagging standard gives some inspiration.

http://www.brynosaurus.com/cachedir/spec.html


.gignore? .ignore is way too generic. .gignore as in grep ignore, rg ignore, ag ignore: they all have a g in their names somewhere. Well, ack doesn't, but what kind of name is that anyway; it sounds like someone (the author?) got annoyed with grep's lack of PCRE. .gignore seems generic enough, yet specific to these tools. Expecting a single .ignore file to rule all these text search tools is rather too optimistic.


Maybe .searchignore?


.grepignore


> detects stdout redirects

Is there a portable way for you to find out that stdout is connected to output.txt? isatty() only tells you that there may be a redirection, and I suppose on Linux you could use /proc/self/fd/1, but I don't know how to do it portably.


If output is redirected, ag calls fstat on stdout and records the inode. It then ignores files based on inode rather than path:

https://github.com/ggreer/the_silver_searcher/blob/b995d3b82...


Could I suggest .ignorerc?

I would immediately intuit that such a file is a (r)untime (c)onfiguration for ignoring something.

.ignore isn't... bad, it just looks like something I can safely delete, like a file styled `such-and-such~`


If you're taking the time to create a .ignore file and set up personally-relevant regexps, I doubt you'll forget. (You'll probably also have it versioned.)

What I like about '.ignore' is that it's not tied to grep (which will never use it) but expresses that the concept is agnostic. You can imagine lots of programs supporting it.


I notice that subsequent runs in the same (non-changing) directory get different results. These runs are all within 20 seconds; what gives?

  $ rg each | md5sum
  670b544e15f9430d9934334a11a87b7e  -
  $ rg each | md5sum
  4d13be6b4531ad52b1b476314fe98fb7  -
  $ rg each | md5sum
  88e15dbb943665ea54482cb499741938  -
  $ rg each | md5sum
  eec6d6d5c9a592cec25aa8b0c19aae15  -
  $ rg each | md5sum
  ad74b78ef8f0d21450f8f87415555af0  -
And:

  $ date
  Sat Sep 24 01:42:27 EEST 2016
  $ rg each > foo1
  $ rg each > foo2
  $ rg each > foo3
  $ rg each > foo4
  $ ls -la foo*
  -rw-r--r--  1 coldtea  staff  1429646 Sep 24 01:42 foo1
  -rw-r--r--  1 coldtea  staff  2250868 Sep 24 01:42 foo2
  -rw-r--r--  1 coldtea  staff  4536031 Sep 24 01:42 foo3
  -rw-r--r--  1 coldtea  staff  9140652 Sep 24 01:42 foo4
  $ date
  Sat Sep 24 01:42:44 EEST 2016
OS X 10.12, installed with brew.


This can happen because rg searches files in parallel, so the order in which it finishes the files can be nondeterministic. If you run with -j1 (single-threaded) then it is deterministic.

To get deterministic output in multi-threaded mode, rg could wait and buffer the output until it can print it in sorted order. This might increase memory usage, and possibly time, though I think the increase would be minor.


In the first case, it's searching in parallel, so I bet the order of results is different each time.

In the second case, rg each > foo2 found results in foo1 and put them in foo2. Then rg each > foo3 found results in foo1 and foo2, and put them in foo3. Etc. That's why the file size increases so quickly.


> In the first case, it's searching in parallel, so I bet the order of results is different.

Aha. Thought that needed the -j flag (it says: default threads: 0 in the cli help).

Could it do anything to put them out in order of "depth" (and directory/file sorting order)?

> In the second case, rg each > foo2 found results in foo1 and put them in foo2. Then rg each > foo3 found results in foo1 and foo2, and put them in foo3. Etc. That's why the file size increases so quickly.

LOL, facepalm -- yes.


Forcing it to use one worker (-j1, I think) should give it deterministic output.


`ag --pager` is the most important one (I use alias ag='ag --pager "less -FRX"')


I just made a cursory pass at `man rg`, but it seems to me that the `-g` option from ag is also missing. I use it with vim-ctrlp to search file names.

Thanks for rg and the very informative blog post!


It's there. You need to pass --files.


> Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support

Would it be possible to detect when an expression requires PCRE-specific features and use a different engine when possible?


It's possible, but it's certainly not easy. Here are some complications:

1. The DFA-regex engine's syntax must be a subset of PCRE's syntax. If it's not, then users will be very confused when regex features work fine in isolation, but cause errors when combined in the same query.

2. The DFA-regex's behavior must be the same as PCRE. If whitespace matching or unicode support is even slightly different, it will frustrate users.

3. Adding another dependency means yet another way in which compilation can fail or incompatibilities can arise.

Considering the marginal usefulness of backtracking and captures, I'd prefer to keep ag as simple as possible and ditch them.


> 1. Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support. Switching would cause me to receive a lot of angry emails. Maybe I'll do it anyway. PCRE has some annoying limitations. (For example, it can only search up to 2GB at a time.)

The standard trick here is to use the faster method for searches that it supports, and use the slower but more capable method only for searches that require it. Parse the regex, see if a DFA will work, and only use PCRE for expressions with backreferences and similar.
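A sketch of what that detection could look like (the feature list here is illustrative, not exhaustive; a real implementation would parse the pattern properly rather than scan for substrings):

```c
#include <stdbool.h>

/* Rough check: does this pattern need a backtracking engine (PCRE),
 * or can a DFA-based engine handle it? Backreferences (\1..\9) and
 * lookaround ((?=, (?!, (?<=, (?<!) are the usual DFA-breakers.
 * Note: this naive scan would also flag named groups like (?<name>...),
 * which a proper parser would have to distinguish. */
bool needs_backtracking(const char *pattern) {
    for (const char *p = pattern; *p; p++) {
        if (*p == '\\' && p[1] >= '1' && p[1] <= '9')
            return true;  /* backreference */
        if (*p == '(' && p[1] == '?' &&
            (p[2] == '=' || p[2] == '!' || p[2] == '<'))
            return true;  /* lookahead / lookbehind */
    }
    return false;
}
```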


Or use fancy-regex. Not ready for prime-time, but potentially the best of both worlds.


Do you know any engines that actually do this? As in, is it really standard? I thought maybe Spencer's Tcl regex engine did it? Although I confess, I've never read the source.

I guess RE2/Rust/Go all kind of do it as well. For example, RE2/Rust/Go will actually do backtracking in some cases! (It's bounded of course, maintaining linear time.) But this doesn't actually meet the criteria of being able to support more advanced features.


This is probably a digression... While it's not a regex engine (and I suppose this is standard in compiler development), the Neo4j query planner team uses this approach extensively to incrementally introduce new techniques or to add pointed optimizations.

For instance, Neo4j chooses between a "rule" planner, which can (at least while I still worked at Neo) solve any query and a "cost" planner, which can solve a large subset. For those queries the cost planner can solve, it usually makes significantly better plans, kind of like the example with regex engines here.

For the curious, that happens here: https://github.com/neo4j/neo4j/blob/3.1/community/cypher/cyp...

Likewise, once a plan is made, there are two runtimes that are able to execute them - an interpreting runtime that can execute any plan, and a compiling runtime that converts logical plans to JVM bytecode, much faster but only supports a subset.

That choice is made here: https://github.com/neo4j/neo4j/blob/3.1/community/cypher/cyp...

This goes on in finer and finer detail. Lots of similar examples in how the planners devise graph traversal algorithms on the fly, by looking for patterns they know and falling back to more generic approaches if need be.

FWIW, the overhead of this has, I would argue, massively paid for itself. It has made extremely ambitious projects, like the compile-to-bytecode runtime and the cost-based query planner safely deliverable in incremental steps.



> Do you know any engines that actually do this? As in, is it really standard?

I don't know about choosing between multiple regex engines, but GNU grep and other grep implementations check for non-regex fixed strings or trivial patterns, and have special cases for those.
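That fast path is roughly this shape (a sketch; GNU grep's actual literal matcher is a Boyer-Moore variant, not strstr):

```c
#include <stdbool.h>
#include <string.h>

/* A pattern with no regex metacharacters can bypass the regex engine
 * entirely and use a plain substring search. */
bool is_literal(const char *pattern) {
    return strpbrk(pattern, ".*+?[](){}|^$\\") == NULL;
}

/* Offset of the first literal match, or -1 if absent. */
long find_literal(const char *haystack, const char *pattern) {
    const char *hit = strstr(haystack, pattern);
    return hit ? (long)(hit - haystack) : -1;
}
```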


Well... yeah... I was more thinking about really big switches like between PCRE and a DFA.

It looks like the sibling comment found one that I think qualifies based on the description (assuming "NFA" is used correctly :P).


For anyone else interested in the memory map issue, here's some more data: https://news.ycombinator.com/item?id=12567326


> It looks like ripgrep gets most of its speedup on ag by:

A non-trivial amount of the time spent is simply reading the files off the disk. If speed is the all-encompassing metric, there's a big gain to be made by pre-processing the files into an index and loading that into memory instead, and that's what livegrep[1] does.

If you find yourself waiting for grep often enough, it's a pretty handy internal tool to configure across all repos.

[1] https://livegrep.com/about


You just said that to the author of The Silver Searcher (ag).

[0] https://github.com/ggreer/the_silver_searcher


Thanks for bringing ag to the world. It's my favorite code searching tool because the letter 'a' and the letter 'g', from its name, feel easier to type than the ones of any other code searching tool. I am definitively willing to sacrifice a few milliseconds, while searching for a word in my megabyte of code, for this feature.


I've just installed ag. It defaults to case insensitive searches when the pattern is in lowercase. Is there a way to change this default? Perhaps a config file of some sort?


There's no config file for default options. You probably want to add an alias to your bash/zsh/fishrc:

    alias ag='ag -s'
I tend to favor aliases over config files. It reduces complexity and improves startup time. Startup time may not seem like a big deal, but it really matters if you're running something like:

    find . -ctime 2 -exec ag blah
(Find all files modified in the past two days and search them for instances of "blah".) If 10,000 files were changed in the past two days, and your search program takes 10 milliseconds to parse a config file, that's an extra 100 seconds wasted.


find can batch arguments using +, much like xargs:

    find . -ctime 2 -exec ag blah {} +
This makes startup latency less important.


> In contrast, GNU grep uses libc’s memchr, which is standard C code with no explicit use of SIMD instructions. However, that C code will be autovectorized to use xmm registers and SIMD instructions, which are half the size of ymm registers.

I don't think this is correct. glibc has architecture specific hand rolled (or unrolled if you will lol) assembly for x64 memchr. See here: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...


Drats, you're totally right. It's easy to mess up that kind of thing.

Thankfully, it looks like my analysis remains mostly unchanged. I don't see any AVX2 in there (and indeed, I didn't when I looked at the profile either, in contrast to Go's implementation).

I updated the blog, thanks again for the clarification.


Nice! Lightgrep[1] uses libicu et al to look up code points for a user-specified encoding and encode them as bytes, then just searches for the bytes. Since ripgrep is presumably looking just for bytes, too, and compiling UTF-8 multibyte code points to a sequence of bytes, perhaps you can do likewise with ICU and support other encodings. ICU is a bear to build against when cross-compiling, but it knows hundreds of encodings, all of the proper code point names, character classes, named properties, etc., and the surface area of its API that's required for such usage is still pretty small.

[1]: http://strozfriedberg.github.io/liblightgrep
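The core trick, turning a code point into concrete bytes that any byte-oriented matcher can then search for, is small; a sketch for the UTF-8 case (ICU generalizes this table-free logic to hundreds of encodings):

```c
#include <stddef.h>
#include <stdint.h>

/* Encode a Unicode code point as UTF-8. Returns the number of bytes
 * written (1..4), or 0 for invalid input (surrogates, > U+10FFFF).
 * Once the pattern is bytes, searching is encoding-agnostic. */
size_t utf8_encode(uint32_t cp, unsigned char out[4]) {
    if (cp < 0x80) {                       /* 1 byte: ASCII */
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {                      /* 2 bytes */
        out[0] = 0xC0 | (cp >> 6);
        out[1] = 0x80 | (cp & 0x3F);
        return 2;
    }
    if (cp < 0x10000) {                    /* 3 bytes */
        if (cp >= 0xD800 && cp <= 0xDFFF) return 0;  /* surrogates */
        out[0] = 0xE0 | (cp >> 12);
        out[1] = 0x80 | ((cp >> 6) & 0x3F);
        out[2] = 0x80 | (cp & 0x3F);
        return 3;
    }
    if (cp <= 0x10FFFF) {                  /* 4 bytes */
        out[0] = 0xF0 | (cp >> 18);
        out[1] = 0x80 | ((cp >> 12) & 0x3F);
        out[2] = 0x80 | ((cp >> 6) & 0x3F);
        out[3] = 0x80 | (cp & 0x3F);
        return 4;
    }
    return 0;
}
```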


I hadn't heard of liblightgrep, nice. It's on my short list for looking more closely.

I doubt I'd ever be comfortable with Rust's regex engine growing a dependency on libicu, but it's still worth understanding your implementation. Some questions, if you don't mind. The big one is: does your regex engine use finite automata, and does it put the text decoding into the automaton itself? For example, when you compile the `.` regex, you end up with an automaton that inlines UTF-8 decoding itself. It looks like this: https://gist.github.com/anonymous/8fbe170bfcca5d7475b59299fa...

Does your regex library do that for each type of encoding? Or is there a transcoding step?


Reply fail, see above.


Yep! It's a multipattern engine, so we use an NFA: slower than a normal engine, but effective when you have thousands of patterns to search for in binary streams. Since we are often searching binary, we'll encounter text fragments in lots of different encodings, so they're all compiled into the NFA. We wrote a paper about its Unicode support: http://www.dfrws.org/sites/default/files/session-files/paper....


I wish more people actually took steps to optimize disk I/O, though; my current source tree may be in cache, but my logs certainly aren't. Nor are my /usr/share/docs/, /usr/includes/, or my old projects.

Chris Mason of btrfs fame did some proof of concept work for walking and reading trees in on-disk order, showing some pretty spectacular potential gains: https://oss.oracle.com/~mason/acp/

Tooling to do your own testing: https://oss.oracle.com/~mason/seekwatcher/


> Anti-pitch
>
> I’d like to try to convince you why you shouldn’t use ripgrep. Often, this is far more revealing than reasons why I think you should use ripgrep.

Love that he added this


It would be interesting to benchmark how much mmap hurts when operating in a non-parallel mode.

I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().

The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.

It might also be that mmap is a loss these days even for single-threaded... I don't know.

Side note: when I last looked at this problem (on Solaris, 20ish years ago), one trick I used when mmap'ing was to skip the madvise(MADV_SEQUENTIAL) if the file size was below some threshold. If the file was small enough to be completely prefetched from disk, the hint had no effect and was just a wasted syscall. On larger files it seemed to help, though.


One thing I did benchmark was the use of memory maps for single file search (cf. `subtitles_literal`). In that case, it saved time (small, but measurable) to memory map the file than to incrementally read it. Memory maps were only slower in parallel search on large directories.

Thankfully, ripgrep makes it easy to switch between memory maps and incremental reading. So I can just do this for you right now on the spot:

    $ time rg -j1 PM_SUSPEND | wc -l
    335
    
    real    0m0.406s
    user    0m0.350s
    sys     0m0.293s

    $ time rg -j1 PM_SUSPEND --mmap | wc -l
    335
    
    real    0m0.482s
    user    0m0.380s
    sys     0m0.317s
Note that this is on a Linux x64 box. I bet you'd get completely different results on a different OS.


Interesting that user time went up as well... not sure if that's significant.

I guess it's not too surprising that mmap isn't much of a win these days for anything... SIMD can copy a memory page pretty fast these days.

I just installed rg from homebrew and it's quite impressive... about 2.5x faster than ag on my macbook pro. Interestingly I get another 25% improvement by falling back to -j3 even though I'm on a quad-core machine. Not sure what is bottlenecking since it's all in cache.


Yeah, figuring out the optimal thread count has always seemed like a bit of a black art to me. I can pretty reliably figure it out for my system (which has 8 physical cores, 16 logical), but it's hard to generalize that to others.

-j3 will spawn 3 workers for searching while the main thread does directory traversal. It sounds like I should do `num_cpus - 1` for the default `-j` instead of `num_cpus`.


I recently looked into if/why parallel mmap might be slower, without reaching a satisfactory conclusion. One specific thing I couldn't answer is whether reusing the same filesystem buffer and program memory addresses has a less negative effect than reading a wide range of mapped memory addresses.


Since all processors must share the mapping,

- The initial mapping of each file in any thread must halt all of the threads which are otherwise active.

- Every page fault in any mapping must also halt all of the threads.

Worse, since the page tables are getting munged, some or all of the TLB cache is getting flushed every time, again, on every processor.

I'm not sure of the details, but this hypothesis should be directly testable. IIRC, there are some hardware performance counters for time spent waiting on TLB lookups.

Addendum: One other possibility is that the mere act of extending the working set size (of the address space) is blowing the TLB cache.


    - The initial mapping of each file in any thread must halt all of the threads which are otherwise active.

    - Every page fault in any mapping must also halt all of the threads.
These are certainly not the case on Linux, and I'd imagine not on other OSes, as it would be terrible for performance.

Each mapping (i.e., mmap(2) call) is synchronized with other paths that read process memory maps, such as other mmap(2), munmap(2), etc syscalls, and page faults being handled for other threads. (i.e., mmap(2) takes the mmap_sem semaphore for writing). Running threads are not halted. The page tables are not touched at all during mmap, unless MAP_POPULATE is passed. (Linux delays actual population of the page tables until the page is accessed.)

The page fault handler takes mmap_sem for reading (synchronizing with mmap(2), etc, but allowing multiple page fault handlers to read concurrently) the mappings and page_table_lock for the very small period when it actually updates the page tables.

Again, running threads are not halted. The active page tables are updated while other cores may be accessing them. This must be done carefully to avoid spurious faults, but it is certainly feasible.

In fact, at least on x86, handling page faults does not require a TLB flush. The TLB does not cache non-present page table entries, and taking a page fault invalidates the entry that caused the fault, if one existed.

There are plenty of places here that may cause contention, but nothing nearly so bad as halting execution.

munmap will be rather noisy. It involves tons of invalidations and a TLB flush. I wouldn't be surprised if a good bit of performance could be regained by avoiding munmapping the file until the process exits.


So, I did some testing with 'perf'. This is on an older Intel processor: 2 cores with hyperthreading. These were all done on the same set of files, using the binary release of ripgrep v0.1.16 on Debian Jessie:

At -j1, --mmap: 95k dTLB load misses, 2800 page faults

At -j2, --mmap: 170k dTLB load misses, 2840 page faults

At -j3, --mmap: 230k dTLB load misses, 2800 page faults

At -j4, --mmap: 4180 context switches, 2900 page faults, 200M insns, 280k dTLB load misses, 35M dTLB loads

At -j1, --no-mmap: 50k dTLB load misses, 635 page faults

At -j2, --no-mmap: 70k dTLB load misses, 675 page faults

At -j3, --no-mmap: 90k dTLB load misses, 715 page faults

At -j4, --no-mmap: 377-400 context switches, 750 page faults, 275M insns, 100k dTLB load misses

As the number of threads goes up, the total amount of TLB pressure goes up in both cases. These results are consistent with a number of TLB cache flushes proportional to N_threads * M_mappings + C for the --mmap case, and N_threads * M_buffer_perthread + D for the --no-mmap case. I think that does support the model that each thread's mmap adds pressure to all of the threads' TLBs.


I did some experimentation last night as well. I suspected a lot of the cost came from the unmapping the files and the required invalidations and TLB shootdowns required to do so.

I made rg simply not munmap files when it was done with them (I made this drop do nothing: https://github.com/danburkert/memmap-rs/blob/master/src/unix...)

Searching for PM_RESUME in the Linux source gave me these results:

    --no-mmap: ~400ms
    --mmap (with munmap): ~750ms
    --mmap (without munmap): ~550ms
So eliding munmap made a big difference, but it was still not enough to beat out reading the files. perf shows that the mmap syscall itself is just too expensive (this is --mmap (with munmap)):

      Children      Self  Command  Shared Object       Symbol
    -   81.88%     0.00%  rg       rg                  [.] __rust_try
       - __rust_try
          - 50.57% std::panicking::try::call::ha112cda315d6c57d
             - 47.73% rg::Worker::search_mmap::h5179a76c63e344d0
                - 23.91% __GI___munmap
                     6.08% smp_call_function_many
                     3.14% rwsem_spin_on_owner
                     1.86% native_queued_spin_lock_slowpath
                     0.94% osq_lock
                     0.67% native_write_msr_safe
                     0.52% unmap_page_range
                + 21.41% _$LT$rg..search_buffer..BufferSearcher$LT$$u27$a$C$$u20$W$GT$$GT$::run::hd0f8b2830716be0c
                  0.80% memchr
          - 17.99% __mmap64
               5.20% rwsem_down_write_failed
               1.96% rwsem_spin_on_owner
               0.77% osq_lock
               0.59% native_queued_spin_lock_slowpath
          + 6.79% 0x1080d
            1.72% __GI___libc_close
            1.27% __memcpy_sse2_unaligned
            0.98% __fxstat64
            0.56% __GI___ioctl


To build a static Linux binary with SIMD support, run this:

    RUSTFLAGS="-C target-cpu=native" rustup run nightly cargo build --target x86_64-unknown-linux-musl --release --features simd-accel


That's an awesome demonstration of how easy it is to swap out the libc in Rust. :-)

Note that I also distribute statically compiled executables with musl and SIMD enabled (using target-feature=+ssse3 instead of target-cpu=native): https://github.com/BurntSushi/ripgrep/releases


> That's an awesome demonstration of how easy it is to swap out the libc in Rust. :-)

Now I have to look up how I use cargo to build a static binary on FreeBSD, where I don't have to swap out libc.

> Note that I also distribute statically compiled executables with musl and SIMD enabled (using target-feature=+ssse3 instead of target-cpu=native): https://github.com/BurntSushi/ripgrep/releases

I took the flag from your blog post, thanks for pointing out the explicit feature flag. That will allow the binary to run on more cpus.


Very nice. Not only fast, but feels modern.

Tried it out on a 3.5GB JSON file:

  # rg
  rg erzg4 k.json > /dev/null  1.80s user 2.54s system 53% cpu 8.053 total

  # rg with 4 threads
  rg -j4 erzg4 k.json > /dev/null  1.76s user 1.29s system 99% cpu 3.059 total

  # OS X grep
  grep erzg4 k.json > /dev/null  60.62s user 0.96s system 99% cpu 1:01.75 total

  # GNU Grep
  ggrep erzg4 k.json > /dev/null  1.96s user 1.43s system 88% cpu 2.691 total
GNU Grep wins, but it's pretty crusty, especially with regards to its output (even with colourization).


My guess is that since you ran `rg` first, the file wasn't in memory, and you ended up benchmarking disk IO. (Notice the sys time decrease from your first run to the second run.) Subsequent commands then run faster with the file already in memory.

This is one of many reasons why assembling the benchmarks in my blog post was so difficult. For example, on every command I benchmarked, I ran them 3 times for "warmup" and didn't record any measurements. I then ran them another 10 times in which I recorded them. You can see the raw output here: https://github.com/BurntSushi/ripgrep/blob/master/benchsuite...
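The warmup discipline described above amounts to something like this (a generic sketch, not the actual benchmark harness):

```python
import time

def bench(run, warmups=3, runs=10):
    """Run `run` a few times untimed to populate caches, then record timings."""
    for _ in range(warmups):
        run()  # discarded: these warm the page cache, CPU caches, etc.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run()
        samples.append(time.perf_counter() - start)
    return samples
```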

In any case, on my underpowered Mac, here are some results on a 1.2 GB file (notice how much the time fluctuates until it's fully in cache):

    mac:~ andrew$ ggrep --version
    ggrep (GNU grep) 2.25                                                                                                                                                                                                
    Packaged by Homebrew
    mac:~ andrew$ time ggrep 'Bruce Springsteen' foo.jsonl > /dev/null   
    
    real    0m5.447s
    user    0m0.600s
    sys     0m0.350s
    mac:~ andrew$ time ggrep 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m1.247s
    user    0m0.549s
    sys     0m0.264s
    mac:~ andrew$ time ggrep 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m0.803s
    user    0m0.542s
    sys     0m0.259s
    mac:~ andrew$ time ggrep 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m0.805s
    user    0m0.544s
    sys     0m0.260s
And now for rg:

    mac:~ andrew$ time rg 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m1.062s
    user    0m0.339s
    sys     0m0.333s
    mac:~ andrew$ time rg 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m0.640s
    user    0m0.337s
    sys     0m0.302s
    mac:~ andrew$ time rg 'Bruce Springsteen' foo.jsonl > /dev/null
    
    real    0m0.637s
    user    0m0.336s
    sys     0m0.300s
Oh! And check this out, on a Mac, not using a memory map for single files is faster. My goodness---memory map performance is all over the place.

    mac:~ andrew$ time rg 'Bruce Springsteen' foo.jsonl --no-mmap > /dev/null
    
    real    0m0.445s
    user    0m0.170s
    sys     0m0.274s
If I do this on my Linux machine on the same file, I get timings of 0.275s for rg, 0.398s for rg with no memory maps (opposite direction for Mac) and 0.708s for GNU grep (v 2.25).

Benchmarks are fun, eh?


I ran each test four times and picked the best result — not my first rodeo — but for the first result I picked the wrong time from my output, which obviously didn't make use of the cache. Here it is again, complete results, with --no-mmap added:

    # ggrep
    zerogravitas$ for x in {1..4}; do (time ggrep erzg4 k.json >/dev/null); done
    ggrep erzg4 k.json > /dev/null  1.96s user 0.67s system 99% cpu 2.641 total
    ggrep erzg4 k.json > /dev/null  1.95s user 0.68s system 99% cpu 2.660 total
    ggrep erzg4 k.json > /dev/null  2.00s user 0.66s system 99% cpu 2.672 total
    ggrep erzg4 k.json > /dev/null  1.96s user 0.67s system 98% cpu 2.662 total

    # rg
    zerogravitas$ for x in {1..4}; do (time rg erzg4 k.json >/dev/null); done
    rg erzg4 k.json > /dev/null  1.76s user 1.40s system 99% cpu 3.180 total
    rg erzg4 k.json > /dev/null  1.77s user 1.31s system 99% cpu 3.088 total
    rg erzg4 k.json > /dev/null  1.74s user 1.36s system 99% cpu 3.128 total
    rg erzg4 k.json > /dev/null  1.76s user 1.41s system 97% cpu 3.265 total

    # rg --no-mmap
    zerogravitas$ for x in {1..4}; do (time rg erzg4 k.json --no-mmap >/dev/null); done
    rg erzg4 k.json --no-mmap > /dev/null  0.98s user 0.75s system 99% cpu 1.743 total
    rg erzg4 k.json --no-mmap > /dev/null  0.99s user 0.75s system 99% cpu 1.740 total
    rg erzg4 k.json --no-mmap > /dev/null  1.01s user 0.76s system 99% cpu 1.772 total
    rg erzg4 k.json --no-mmap > /dev/null  0.99s user 0.75s system 99% cpu 1.754 total

    # rg -j4
    zerogravitas$ for x in {1..4}; do (time rg erzg4 k.json -j4 >/dev/null); done
    rg erzg4 k.json -j4 > /dev/null  1.75s user 1.35s system 98% cpu 3.134 total
    rg erzg4 k.json -j4 > /dev/null  1.75s user 1.44s system 98% cpu 3.224 total
    rg erzg4 k.json -j4 > /dev/null  1.80s user 1.38s system 99% cpu 3.204 total
    rg erzg4 k.json -j4 > /dev/null  1.80s user 1.35s system 99% cpu 3.164 total

    # rg -j4 --no-mmap
    zerogravitas$ for x in {1..4}; do (time rg erzg4 k.json -j4 --no-mmap >/dev/null); done
    rg erzg4 k.json -j4 --no-mmap > /dev/null  0.98s user 0.75s system 99% cpu 1.740 total
    rg erzg4 k.json -j4 --no-mmap > /dev/null  0.97s user 0.74s system 99% cpu 1.721 total
    rg erzg4 k.json -j4 --no-mmap > /dev/null  0.99s user 0.75s system 99% cpu 1.752 total
    rg erzg4 k.json -j4 --no-mmap > /dev/null  0.98s user 0.76s system 99% cpu 1.748 total
Sounds like `alias rg='rg --no-mmap'` is a good idea on a Mac.


Wow. Those are awesome results, thank you.

> Sounds like `alias rg='rg --no-mmap'` is a good idea on a Mac.

I will fix that in ripgrep proper by making --no-mmap the default on Mac. :-) It should be an easy one to knock off: https://github.com/BurntSushi/ripgrep/issues/36

> not my first rodeo

Right, sorry about that. :-) Just had to cover all my bases!

(Also, `-j` on a single file won't do anything, and ripgrep should try to use multiple threads by default when searching multiple files.)


I suspected that -j wouldn't do anything on a single file. For three large files (6.5GB in total) I'm getting good performance, about 1.6x GNU Grep's speed in the best case.


Compiling it to try right now...

Some discussion over on /r/rust: https://www.reddit.com/r/rust/comments/544hnk/ripgrep_is_fas...

EDIT: The machine I'm on is much less beefy than the benchmark machines, which means that the speed difference is quite noticeable for me.


Nice work. But your story is incomplete if you don't include a comparison with icgrep (parabix.costar.sfu.ca). Although icgrep is still an active research project, it is faster in many cases and has broader Unicode support (full Unicode level 1 of UTS #18, plus many level 2 features). For example, try the '\N{SMIL(E|ING)}' search that finds lines containing emoji characters with SMILE or SMILING in their Unicode name. icgrep also correctly applies Unicode character class intersection with expressions such as [\p{Greek}&&\p{Lu}], while ripgrep fails to meet UTS #18 level 1 requirements by interpreting '&&' as literal characters.

Nevertheless, we appreciate the challenge that ripgrep presents to our performance story. We definitely see some cases in which rg achieves better performance by taking advantage of fixed strings somewhere in the pattern. We'll have to work on that... But for patterns based on Unicode classes (e.g., \p{Greek}), icgrep can be much faster, especially on large files. (We are only focused on big data applications -- icgrep has significant dynamic compilation overhead.) It also does very well in cases involving alternations and ambiguity.

The icgrep performance story is based primarily on a new bitwise data parallel regular expression algorithm working off the Parabix transform representation of text. See our ICA3PP 2015 or PACT 2014 papers.

It might be fair to say that icgrep is not yet polished enough for inclusion in your study. I just added our first implementation of -r/-R flags last night and we certainly haven't yet handled .gitignore, etc. But if you want to understand the truth about regular expression performance, I think that the data point represented by icgrep (and its continuing development) needs to be included.


Benchmarking aside, I think the icgrep approach is very interesting.

The appeal of this sort of bit parallel stuff is that you don't get performance variability. It's all fun and games with literal optimizations until you pick the wrong literal - all you have to do is stuff the file in the down-thread benchmark with a huge pile of "Holmes Holmes Holmes Lestrade Lestrade Lestrade" etc and suddenly both rg and Hyperscan will look a lot dumber and icgrep will keep going at the same speed.

Unfortunately, we've never figured out a way to dispense with literal-based matching for the large scale cases that we work with. When someone gives you a dozen regexes - or 12,000 - bit parallel approaches go into the weeds (as do most regex-oriented techniques; DFAs explode and bit-parallel NFAs - regardless of organization - become ponderous).


Speaking as the leader of the Hyperscan project (https://github.com/01org/hyperscan), I'd say you might not be the only project feeling a bit neglected in the performance comparison here.

It's nice to be mentioned - and even called out by name as the inventor of Teddy (!) - but we're always even more pleased when someone else measures Hyperscan, as it minimizes the prospect of us embarrassing ourselves by posting some self-serving and/or outlandish microbenchmark.

Also it saves everyone time reading the obligatory Intel legal disclaimers...


We are very interested in tackling the multiple-pattern regular expression problem and will definitely want to use Hyperscan as a comparator.

Any help in setting up a study with both patterns and data sets would be most appreciated!


This sounds interesting. I suspect there are many alternate approaches to multiple regex.

Sadly, regex benchmarking is a sewer. There are two classes of multiple regex benchmarks: public ones and good ones, and not much intersection between the two. Synthetic pattern generation can be manipulated to say whatever you want it to say, Snort patterns aren't intended to be run simultaneously (so putting a big pile of them into a sack and running them is of arguable use), and most vendors guard proprietary signature sets closely (we have thousands, but all the good ones are customer confidential).

That being said, there are some paths forward here. Let's talk.


Certainly. I dropped an e-mail at the hyperscan account.

By the way, if you try out icgrep and use the -DumpASM option, you may notice a very unusual characteristic: the generated code is almost completely dominated by AVX2 instructions!


I did a brief performance comparison with icgrep here: https://github.com/BurntSushi/ripgrep/issues/63 --- Yes, it does a little better on things like \p{Greek} (and has even better Unicode support than ripgrep) but is missing literal optimizations, which are a ridiculously common case for search tools like this. (As you acknowledged.) Just about every single person who has come to me with a performance comparison of ripgrep has used a simple literal pattern, because that's what people search. You need to knock that case out of the park to be competitive. Even sift and pt do this to some extent.

Hyperscan is another top-notch regex implementation worth looking at too, for example. With that said, it does look like icgrep could be added to the single-file benchmark at least, but it is by far the hardest piece of software to actually get installed. (Your build instructions required me to compile llvm.)

> But your story is incomplete if you don't include comparison with icgrep

My story is incomplete for a very very large number of reasons. icgrep is only one of them. As I said in the blog post, the benchmarks are not only biased, but curated.

Of course, I agree icgrep is worth looking into. It's on my list to learn more about.

> It also does very well in cases involving alternations and ambiguity.

So does ripgrep. :-)

    $ ls -hl OpenSubtitles2016.raw.en
    -rw-r--r-- 1 andrew users 9.3G Sep 10 11:51 OpenSubtitles2016.raw.en
    
    $ time rg '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464
    
    real    0m30.606s
    user    0m29.987s
    sys     0m0.567s
    
    $ time icgrep '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464
    
    real    0m41.346s
    user    0m40.770s
    sys     0m0.513s
    
    $ time rg -i '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370
    
    real    0m17.611s
    user    0m17.100s
    sys     0m0.510s
    
    $ time icgrep -i '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370
    
    real    0m43.715s
    user    0m43.193s
    sys     0m0.523s
In the case insensitive search, ripgrep isn't actually doing any literal optimizations, but it is in the first case. Namely, it sees that ` ` is required in every alternate. Since ` ` is so common, this ends up slowing down the search. (This shows how literal optimizations aren't necessarily so simple, because it's easy to make a mistake like I've done and cause search to be slower.)
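The prefilter idea, and why a too-common literal backfires, can be sketched like so (hypothetical Python, not ripgrep's actual implementation):

```python
import re

PATTERN = re.compile(r"\w+ Holmes|\w+ Watson|\w+ Adler")
REQUIRED_LITERAL = " "  # common to every alternate here, but far too frequent

def search(lines):
    hits = []
    for line in lines:
        # Prefilter: skip the (slower) regex engine when the literal is absent.
        # A literal as common as ' ' rejects almost nothing, so the prefilter
        # becomes pure overhead; a rare literal would reject most lines cheaply.
        if REQUIRED_LITERAL in line and PATTERN.search(line):
            hits.append(line)
    return hits
```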

If we change ` ` to `\s+`, we can see ripgrep regain its performance precisely because it can't do any literal optimizations:

    $ time rg '\w+\s+Holmes|\w+\s+Watson|\w+\s+Adler|\w+\s+Moriarty|\w+\s+Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464
    
    real    0m17.607s
    user    0m17.117s
    sys     0m0.490s
    
    $ time icgrep '\w+\s+Holmes|\w+\s+Watson|\w+\s+Adler|\w+\s+Moriarty|\w+\s+Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464
    
    real    0m46.368s
    user    0m45.883s
    sys     0m0.483s
    
    $ time rg -i '\w+\s+Holmes|\w+\s+Watson|\w+\s+Adler|\w+\s+Moriarty|\w+\s+Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370
    
    real    0m17.583s
    user    0m17.080s
    sys     0m0.493s
    
    $ time icgrep -i '\w+\s+Holmes|\w+\s+Watson|\w+\s+Adler|\w+\s+Moriarty|\w+\s+Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370
    
    real    0m48.662s
    user    0m48.127s
    sys     0m0.503s
Nevertheless, ripgrep outperforms icgrep in every case.

Now, with all that said, I bet this is my bias showing. I'm confident you can find other cases (we don't already know about) where icgrep beats ripgrep. :-)

BTW, the corpus I used can be got here: http://opus.lingfil.uu.se/OpenSubtitles2016/mono/OpenSubtitl... (The one I used in my benchmark is a sample of this. For these benchmarks here, I used the full file.) The Cyrillic corpus is here: http://opus.lingfil.uu.se/OpenSubtitles2016/mono/OpenSubtitl...


Running the same tests on our test machine, I am seeing dramatically different performance results. Our test machine has AVX2, but this can't be the whole story.

    cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ time rg '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464

    real    1m8.380s
    user    1m6.211s
    sys     0m2.006s

    cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ time icgrep '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    26464

    real    0m11.899s
    user    0m9.212s
    sys     0m2.125s

    cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ time rg -i '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370

    real    0m31.891s
    user    0m29.559s
    sys     0m2.119s

    cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ time icgrep -i '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en | wc -l
    27370

    real    0m14.359s
    user    0m11.765s
    sys     0m2.211s
Here is the processor info (only showing the first processor):

    cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ cat /proc/cpuinfo
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 61
    model name      : Intel(R) Core(TM) i3-5010U CPU @ 2.10GHz
    stepping        : 4
    microcode       : 0x16
    cpu MHz         : 2076.621
    cache size      : 3072 KB
    physical id     : 0
    siblings        : 4
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 20
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap
    bogomips        : 4189.93
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 39 bits physical, 48 bits virtual
    power management:


Strange. I'm not sure how to explain the results either. Is there something I'm supposed to do to enable icgrep to use AVX2? I followed the build instructions in the README verbatim. Here's my cpu info (which is quite new and does have AVX2):

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 79
    model name      : Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz
    stepping        : 1
    microcode       : 0xb00001d
    cpu MHz         : 1267.578
    cache size      : 20480 KB
    physical id     : 0
    siblings        : 16
    core id         : 0
    cpu cores       : 8
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 20
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
    bugs            :
    bogomips        : 6398.91
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 46 bits physical, 48 bits virtual
    power management:


Are you using icgrep1.0? That may explain it.

My reports are from our current development version r5163.

  cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ perf stat -e instructions:u,cycles:u,branch-misses:u icgrep1.0 -i -c '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en 
  27370
  Performance counter stats for 'icgrep1.0 -i -c \w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade OpenSubtitles2016.raw.en':
   252,725,532,395      instructions:u            #    2.75  insns per cycle        
    91,867,444,975      cycles:u                 
       283,661,301      branch-misses:u                                             
      46.570725331 seconds time elapsed

   
  cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ perf stat -e instructions:u,cycles:u,branch-misses:u rg -i -c '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en 
  27370
  Performance counter stats for 'rg -i -c \w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade OpenSubtitles2016.raw.en':
    84,296,004,027      instructions:u            #    1.38  insns per cycle        
    61,298,903,577      cycles:u                 
           510,918      branch-misses:u                                             
      31.962195024 seconds time elapsed

  cameron@cs-osl-10:~/ripgrep/datadir/subtitles$ perf stat -e instructions:u,cycles:u,branch-misses:u icgrep -i -c '\w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade' OpenSubtitles2016.raw.en 
  27370
  Performance counter stats for 'icgrep -i -c \w+ Holmes|\w+ Watson|\w+ Adler|\w+ Moriarty|\w+ Lestrade OpenSubtitles2016.raw.en':
    42,064,581,840      instructions:u            #    1.94  insns per cycle        
    21,723,251,095      cycles:u                 
        47,953,756      branch-misses:u                                             
      13.301160493 seconds time elapsed


    [andrew@Cheetah icgrep-build] pwd
    /home/andrew/clones/icgrep1.0/icgrep-devel/icgrep-build
    [andrew@Cheetah icgrep-build] ./icgrep --version
    LLVM (http://llvm.org/):
      LLVM version 3.5.0svn
      Optimized build.
      Built Sep 24 2016 (11:27:32).
      Default target: x86_64-unknown-linux-gnu
      Host CPU: x86-64
If the file path is any indication, it looks like I'm using `icgrep1.0`. But the file path also has `icgrep-devel` in it. So I don't know. When I get a chance, I guess I'll try to figure out how to compile the devel version. (It seemed like that was what I was doing, by checking out the source, but maybe not.)


Yes, you have icgrep 1.0. The current development version has about the same build process, sorry that it is such a pain. It is available by svn checkout as follows.

  svn co http://parabix.costar.sfu.ca/svn/icGREP/icgrep-devel
AVX2 is autodetected and used if available and enabled by the operating system/hypervisor (Although icgrep1.0 can be compiled to use AVX2, it had some issues).


Rust is really starting to be seen in the wild now.


I agree, and I'm already using both ripgrep and rust-parallel ( https://github.com/mmstick/parallel , a gnu-parallel replacement which should probably get another name ).

I am really happy to see Rust apps actually being written -- Rust programmers seem to be actually trying to replace the existing code lying around, instead of just insulting it and telling us how much better the world would be if we used their language.


Honestly, for a certain kind of developer—I haven't pinpointed exactly which kind—writing Rust code is dangerously addictive. There's some combination of aggressive type checking, tooling, and expressiveness that just hits a sweet spot in my brain.

Rust fills roughly the same niche in my brain as old-school C++, except it also lets me do some rudimentary functional stuff, there are virtually no footguns, and the CLI tooling is out of this world. And I've long since made friends with the borrow checker. So it's tempting to just spend my life cranking out Rust code as fast as I can. It's actually a problem. :-)


Well, in the interests of making controversial statements: that's now two Rust programs I plan on using day-to-day, and zero for Haskell, Scala, and I'm sure a few other languages I'm not remembering.


Pandoc

Xmonad

Darcs

Git-annex

Although, it would be very hard to write SIMD performant code in Haskell, even its string libraries don't use any optimization tricks.


I was excited when I heard of it, but I thought Darcs had run into some major issues with patch theory and exponential merges. Other than it being written in Haskell, why would I use it over git?


Well, speed issues are currently not that noticeable (a lot of time has passed since the initial bumps), but I'm pretty sure some large repositories (more than millions of lines of patches) wouldn't be able to work with it.

git is a whole different story compared to darcs; darcs is simpler to use, especially when merging.

if it were consistently fast, I'm pretty sure git wouldn't be as popular as it is.

there is rust vcs https://pijul.org/ that seems to be doing the same thing as darcs but faster.

so I guess CJefferson's controversial comment stands :D


It will only be a matter of time before someone rewrites xmonad in Rust.


Negative time, actually: https://github.com/kintaro/wtftw

(there's an i3 inspired one as well, I forget the link)


Nice!

I think this is the other one you were thinking of: https://github.com/tsurai/xr3wm


There is a different one based on wlc (So it's a wayland compositor, rather than just an X11 window manager). The one you linked doesn't seem to have been updated since 2015

Edit: Here it is: https://github.com/Immington-Industries/way-cooler


I need an R5RS+ Scheme that is tightly integrated with Rust. High-level and fast enough, with the ability to drop down into Rust for speed.


Well, burntsushi, the author, is a quite famous and level-headed programmer who has written in both Go and Rust. He is an exception to what you describe of Rust programmers, who are still not much changed.


Actually, I'm no exception at all. The Rust community (and the Go community) is full of level headed exceptional people. I love working with them!


This would have been a lot more exciting if it were designed to actually be even slightly compatible with grep (or maybe had some core in there that was, while leaving the other UI parts he wants to change on top) and were then approaching GNU and saying "hey, this is something I've been working on: would you consider making grep the first standalone tool to move to Rust, and what would it take from me to make this happen?" as opposed to writing an article about how "I am smarter than the GNU grep people for these reasons and have built a tool named after how my tool is going to kill their tool", which almost seems to go out of its way to set up an air of competition rather than collaboration. Maybe you value pure technical chops, and the difference between "insulting it and telling us how" is much worse than "insulting it and actually writing code", but to me they both start with "insulting it" and demonstrate an almost tragic inability to work with others. For people like me, people who actually want to see a language like Rust get used en masse and entirely replace languages like C, the attitude in this blog post is extremely depressing. Even if I were now to take the time myself to go to the GNU grep authors and try to talk to them about this, the mere existence of this blog post is going to make that slightly more taxing and slightly more of a battle for everyone involved :/. (I mean: seriously... "ripgrep"?! This developer is clearly going out of their way to be combative. Whatever happened to the open source spirit of collaboration? What happened to actual communication between teams? Why do projects seem to just assume "the design decisions, or even specific implementations, or even accidental mistakes of existing tools are set in stone, and so the right way to talk about changing them is to discuss competition between entire projects or at best hard forks rather than working with other people"? :/)


> as opposed to writing an article about how "I am smarter than the GNU grep people for these reasons and have built a tool named after how my tool is going to kill their tool"

You're assuming way too much bad faith here. The article tries very hard to make a fair comparison, and doesn't throw dirt on grep at any point. It's pretty clear that the intention of the author was to write a search tool with particular characteristics, and it turned out that that choice was faster than grep.

I think you're extrapolating from the name of the crate -- as steve said, the name is not "R.I.P. grep", and the intention was not to replace grep.

Go through the post again with this in mind. Does it sound at all combative to you? It's purely technical, and fairly explains everything.


Nobody is under the slightest bit of moral obligation to work with existing projects when starting a new one. There is nothing wrong with competition.


I'm trying to describe just how disappointed I am in hearing you say that, but the words just aren't coming to me today.


If you invert the two elements of pcwalton's comment, another way of putting it that might sound less disappointing to your ear would be this: competition is another way of contributing to other projects, if done openly, collaboratively, and in a constructive way.

One of the delightful aspects of this entire HN thread is the interchange between the authors of rg and ag with regard to burntsushi's detailed explanation of the tradeoffs involved, and how the ag author considers making some changes based on learning and philosophy detailed by burntsushi. This interchange is a great bazaar moment in the sense that new ground was staked out. If the proposed moral obligation would be to simply contribute PRs to existing projects (for a slew of philosophical architectural and features differences like Unicode support, etc.), then innovation slows to a crawl because of passivity constraints, bikeshedding, etc.

If you have the talent and capacity to build a better alternative, and share it openly, and explain it so well, as burntsushi has in this domain, then the only moral imperative I see is that you should do so.


(small note, the name wasn't intended to be about killing grep, it was "rip" as in "rips through your code very quickly". Of course, intention doesn't count, etc, but just to be clear about the history.)


> (or maybe had some core in there that was, while leaving the other UI parts he wants to change on top)

But I did! Have you looked at the dependency list of ripgrep? It's utterly filled with tons of tools that you can pick out and use in other projects for any purpose you like. I didn't mention this in the blog post because there's already too much there, but sure, here they are:

memchr - Fast single byte search: http://burntsushi.net/rustdoc/memchr/

walkdir - Recursive directory iterator: http://burntsushi.net/rustdoc/walkdir/

utf8-ranges - Generate utf8 automata: http://burntsushi.net/rustdoc/utf8_ranges/

regex-syntax - A regex parser (including literal extraction helpers): https://doc.rust-lang.org/regex/regex_syntax/index.html

regex - The regex engine itself: https://doc.rust-lang.org/regex/regex/index.html

grep - Line-by-line search (as a library). This is where all of the inner literal optimizations happen, for example. http://burntsushi.net/rustdoc/grep/

And this is only the stuff that I did. This doesn't count all the other wonderful stuff I used that other folks built!

And sure, I could do better. There's more stuff I could move into the `grep` crate, but this is only the beginning, not the end.
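
To give a feel for how these pieces compose, here's a rough sketch (in Python, purely for illustration) of the line-oriented literal search that the `grep` crate generalizes. The real thing is Rust, uses memchr-accelerated scans, and finds line boundaries lazily around matches; this only shows the shape of the idea:

```python
def search_lines(text, literal):
    """Return 1-based line numbers whose line contains `literal`.

    Scan the whole buffer for the literal first (the role a
    memchr-style search plays), then resolve each hit to its
    containing line, instead of splitting into lines up front.
    """
    hits = []
    start = 0
    while True:
        i = text.find(literal, start)  # fast substring scan over the buffer
        if i == -1:
            break
        lineno = text.count("\n", 0, i) + 1  # newlines before the hit
        if not hits or hits[-1] != lineno:
            hits.append(lineno)  # dedupe multiple hits on one line
        start = i + len(literal)
    return hits
```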

> "hey, this is something I've been working on: would you consider making grep the first standalone tool to move to Rust, and what would it take from me to make this happen?"

But I didn't want to do that. I don't understand why that's a problem. Doing this requires being POSIX interface compatible, and that's not a hole I care to dig. I don't mean any disrespect, it's just not what I want to spend my time doing.

> as opposed to writing an article about how "I am smarter than the GNU grep people for these reasons and have built a tool named after how my tool is going to kill their tool" which almost seems to go out of its way to set up an air of competition rather than collaboration

I'm really sorry if I came across that way. It was of course not my intention. My intention was to write about what I had learned, which seems like a pretty fundamental component of collaboration. I'm sure five years from now, when I've moved on to other problems, someone will come along and beat ripgrep, and I can only hope that they write about it. :-)

I mean, if it weren't for the innumerable people who wrote about their experience with this kind of stuff, I never would have been able to get here. It only makes sense for me to write if I think I have something valuable to share.

> (I mean: seriously... "ripgrep"?! This developer is clearly going out of their way to be combative.

I'm not. "rip" was supposed to mean "rip through text." I'm sorry it came off as combative. I wasn't even aware of the "rest in peace" interpretation until someone else pointed it out.

Honestly... I was trying hard to find a way to justify the binary name `rg` because I liked that `r` could stand for Rust. But `rustgrep` seemed a bit too in-your-face, so I started searching for small relevant words starting with `r`. That's it.


I just wanted to say that I think you're doing great work here! Obviously everyone has their own motivations for doing open source work, and yours are clearly to work in Rust and contribute to the Rust community while also scratching some personal itches.

Honestly, I'm not sure that trying to contribute to grep would even be worthwhile. You've got a few cases where ripgrep is faster than grep, but to submit patches to grep to fix it for those cases would be an enormous amount of effort, and either require you to rewrite a whole bunch of Rust code in C or the grep maintainers to accept patches in Rust, neither of which seem like great ideas. What you have done is advance the state of the art in search tools, as well as provide lots of building blocks to the Rust community for other developers to get great behavior out of their own programs. (Incidentally this is one thing I love about Rust--it's very straightforward for someone to write a crate that implements something in an extremely optimized way but still presents an easy-to-use API for other developers.)

I haven't had an opportunity to use many of your crates in my Rust development yet (except byteorder!) but I'm excited that they exist and I can definitely see myself using them in the future.


Thanks so much for your kind words and I agree with everything you said. :-)


> I'm sure five years from now, when I've moved on to other problems, someone will come along and beat ripgrep, and I can only hope that they write about it. :-)

I contend that the world would be a much better place if instead of someone building a new tool which beats yours, they worked to improve your tool (which, at the point where you moved on, would hopefully have been granted to a body of separate maintainers, whether people you find or a group such as Apache which specializes in maintaining valuable open source projects). Sure: a world where people write about what they do is better than a world where they don't, but that's a really depressing thing to be "hoping for".

I am apparently becoming extremely unpopular in these circles for expressing this opinion, but a really important aspect of open source was about people collaborating towards a common effort to build high quality software: to avoid working together, to even expect that people will or should continually build new projects from scratch that "compete" with each other, defeats many (if not even most) of the benefits of open source software, as it relegates us to the same process by which closed source software improves.

> But I didn't want to do that. I don't understand why that's a problem. Doing this requires being POSIX interface compatible, and that's not a hole I care to dig. I don't mean any disrespect, it's just not what I want to spend my time doing.

Providing some of the parts, or an offer to help, goes a long way: you don't have to do all of the work (and others wouldn't expect that); also, it is worth noting that GNU grep has on occasion added alternative backends (whether "extended" expressions or later PCRE), and has often made improvements or added features. The assumption that grep is a tool which works the way it does and which will always do what it does, and that improvements should come in the form of competition, is demotivating to contribution.


> I am apparently becoming extremely unpopular in these circles for expressing this opinion

To be really clear: I don't think your opinion is necessarily the problem. When I first read your comment, I typed up a response that I wasn't proud of. It wasn't nice because your comment wasn't nice. I had to step away from the computer and take a moment to put things back into focus to give you the response I did. It wasn't easy.

And really, my reaction to your comment had nothing to do with your opinion that we should try to collaborate more. That's a completely reasonable thing to hope for. But some of the things you said, or implied (about me personally), were really way way off, and I personally found them pretty insulting.

I get that you took my blog post as combative, so maybe you think the same about me. But you didn't ask for clarification, you just kind of dove right into the insults and assumptions and bad faith, and personally, I think that is just a really awful way to interact with other humans.

It's clear that we have different valuations on how to spend our time, and I really don't appreciate your implicit condescension. I also don't appreciate you telling me how I should spend my time. My free time is precious, and I want to spend it doing the things I find interesting. I don't want to work on a legacy code base, in C and spend enormous social resources pushing on one of the most established C projects in existence to switch to a new programming language. That does not sound like fun to me, and I want to work on something fun in my free time. `ripgrep` happened to be it. (N.B. Fun is not the only criterion, but it's a big one.)

> but a really important aspect of open source was about people collaborating towards a common effort to build high quality software

I've spent a huge portion of my free time in the past 2.5 years contributing to the Rust ecosystem. If that's not collaborating towards a common effort, then I don't know what is. `ripgrep` itself is barely a blip in that effort. All the stuff that went into building `ripgrep` that is freely available as other libraries? Yeah, that took a while.


I don't really accept moral arguments that argue that one thing is a better use of one's free time than some other thing. Even the best cancer researchers have every right to use their free time to watch a movie or read fiction instead of working on solving problems, even though they could theoretically save human lives by doing so. If they get to, then surely people who write programmer tools to search files get to do whatever they want in their free time too.

The author chose to spend his time writing his own file searching tool instead of trying to work with GNU. That's his call.


> Providing some of the parts,

He just gave a laundry list of parts which grep or any other tool could use, and gives a thorough explanation of what benefits each one gets you in the blog post.

He's also having a seemingly productive discussion with the ag author on HN, which could result in ag sharing components.

Ripgrep seems to be a fun side project for the author. Making it posix compatible and getting involved in the politics of changing the language and algorithms of a major tool might not be considered fun by the author. This is totally reasonable. Especially given that Rust isn't really the reason why rg is fast -- justifying Rust is harder in this case. But if you like Rust (and not C/++), then it's even less fun to work on a C/++ codebase.

The blog post is written in a way that any tool can pick up the same perf improvements if it wishes (often by just picking up a dependency). The author has been collaborative, not combative, with the ag author here (and presumably with other tool maintainers if they come forth). This is very much in the spirit of open source, since even if things superficially compete they still can share code and ideas. You're being very unreasonable here.


The developer has already stated that they did not intentionally name the tool "RIP grep". Even if they did, who cares? Why do people making grep alternatives have to treat grep with an extreme, undue amount of respect?


Why claim GNU grep would get "extreme, undue" amounts of respect? Should not all collaborative projects be afforded some basic level of respect? What is the underlying meaning of entering a collaborative software development community (the world of open source) and feeling, as you do, that it would be an "extreme, undue" level of respect to not purposefully attack (as to explicitly reference your "even if they did") the work of other people who are just trying to do good and would likely love to work together with you on a shared goal?


If you think a project is bad then you should be allowed to say so. I'm sure you have used a software tool that you have complained about despite the fact that it was open-source.

It's a bridge too far to directly attack people just for working on an open-source project, but the creator of this project did not do that, obviously.


Is there anything about ripgrep that makes its performance uniquely fast? That is, is it not possible to achieve the same speed and correctness of results using, say, C or C++?

If neither condition holds, then the use of Rust is almost purely incidental. It isn't even interesting, from that perspective. Far more interesting would be a measure of how safe the Rust code is versus equivalent-results implementations in other languages.


I can't think of any particular piece that requires Rust specifically for performance, no. The real beauty of Rust is that I was able to achieve this performance in the first place, all with very very little use of unsafe (even in the regex engine itself).

But of course, as we all know, performance isn't everything. Even ripgrep itself accepts a performance penalty in its default mode of operation, in order to improve the relevance of results shown.


Yes, that was my main point, re: safety. I've been implementing a BLAS with Rust (pure Rust--not FFI wrappers around C-wrappers around Fortran or other bindings) and it has been interesting to see where the C programmer in me gets tripped up and how the performance compares.


It seems to be faster than grep primarily by having fewer features (it doesn't even attempt to support all of POSIX regular expressions, for one).


To be clear, I do not believe ripgrep is faster than grep because it has fewer features. At least, I've yet to see any evidence that it is.

I did try to explain why I think ripgrep is faster in all kinds of gory details though. :-)


I don't know about that (as in I haven't looked so I literally don't know). First of all, provided the tools produce identical results for similarly-complex regex patterns I'm not sure having to support all POSIX regexes is necessary. Second of all my point was really that the performance achievement is not nearly as interesting as the level of rust-guaranteed safety found as compared with similar implementations in other ("unsafe") languages.


I love the layout of this article. Especially the pitch and anti-pitch. I wish more more tools/libraries/things would make note of their downsides.

I'm convinced to give it a try.


When I use grep (which is fairly regularly), the bottleneck is nearly always the disk or the network (in case of NFS/SMB volumes).

Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.


> Just out of curiosity, what kind of use case makes grep and prospective replacements scream?

Unicode? Check out the subtitle benchmarks in the blog post. In the best case, grep is a little slower. In the worst case, grep is orders of magnitude slower.

ripgrep achieves speed by building UTF-8 decoding straight into its DFA regex engine (well, strictly speaking, this is Rust's regex engine, not ripgrep).

The other case where grep users might scream is when you're searching large code repositories. A `grep -r` might catch a large binary file, or search your `.git` or whatever. Both `ag` and `rg` will look at your `.gitignore` so that the results you see have higher relevance. (Of course, this is just a default, you can always "search everything" with ripgrep too!)
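
To make the Unicode point concrete, here's a small illustration in Python (whose `re` module is Unicode-aware on `str` by default): a Unicode `\w` matches vastly more than `[A-Za-z0-9_]`, and ripgrep pays for that once at regex-compile time by baking the UTF-8 ranges into its DFA, rather than per character at match time:

```python
import re

unicode_word = re.compile(r"\w+")          # Unicode-aware \w on str
ascii_word = re.compile(r"\w+", re.ASCII)  # classic ASCII-only \w

text = "naïve Straße δοκιμή"

# Unicode \w keeps accented and Greek letters inside one token...
assert unicode_word.findall(text) == ["naïve", "Straße", "δοκιμή"]
# ...while ASCII \w splits words at every non-ASCII character.
assert ascii_word.findall(text) == ["na", "ve", "Stra", "e"]
```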


Maybe I'm unique to this, but that sort of default would drive me batshit insane. Not all of us are programmers by trade who use git. It's just another gotcha to keep track of. I'd seriously recommend removing that as a default.

That said, this is an overall awesome project.


The default isn't going to change. The Silver Searcher has, IMO, proven that it's a good default. I've spoken with so many people that love it, myself included.

If you want to not respect .gitignores, it's easy: `alias rg="rg -u"`.

(To be clear, I agree that the default is a trade off. I'm not so gung-ho on this that I think it should never be anything else. But for my project and its goals, I think it's the best fit.)


Thanks for the explanation!


`grep` itself is admirably fast on single files, and if you're just searching a small number of files, the bottleneck will be IO, and `grep` will be about as fast as anything. But if you want to use `grep` recursively on a selected subset of millions of files it suffers a bit. `grep` assumes that if you want to collect some subset of files to search, you'll do that with some other utility, and pass the list of files to `grep`, but that creates a bottleneck.

More recent search programs like `ack` and `ag` solve the multifile bottleneck issue, but also perform worse than `grep` on single large files.
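
The usual fix is to filter while walking, pruning whole subtrees before they are ever visited. A hypothetical Python sketch (the ignored names are illustrative, not any tool's actual rules):

```python
import os

IGNORED_DIRS = {".git", "target", "node_modules"}  # illustrative only

def walk_source_files(root):
    """Yield file paths under root, pruning ignored directories
    *during* the walk so their subtrees are never visited."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place tells os.walk to skip these subtrees.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        for name in filenames:
            yield os.path.join(dirpath, name)
```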


> and `grep` will be about as fast as anything

I'd encourage you to check out the benchmarks in my blog, especially the subtitle benchmarks, because this isn't actually true in the case of ripgrep. :-)


I take it back! I'm definitely switching over to using ripgrep for the next bit.

Off topic - is there a way to exclude directories? I tried things like `rg pattern -g '!my_dir/'` but nothing I tried seemed to work.


Other than using your .gitignore or .ignore files to ignore it, that is indeed the intended way to do it.

Sadly, I have an embarrassing bug. Should be fixed soon: https://github.com/BurntSushi/ripgrep/issues/43


Ahh, fantastic. I just tried out the new build. It's a real joy to be able to do things like `rg pattern -g '!{docs,tests}/' -tpy` and have it just work. Honestly, all performance aside, your thoughtful choice of command line options is the big selling point for me. I struggle to find the command line flags I need to make `ag` do just what I want, but `rg` seems to fit my expectations without much training.


Thanks for your encouragement. It's really hard to stand my ground because so many people have so many different use cases. The design space is large and it's very challenging to get it even a little right.

In any case, your example is quite fortuitous, since the `{docs,tests}` glob syntax was also added in 0.2.0. :-)


Thank you!


"if you like speed, saner defaults, fewer bugs and Unicode"

Warning - Conditional always returns true.


Thanks for the detailed comparisons and writeup.

I find this simple wrapper around grep(1) very fast and useful:

http://www.pixelbeat.org/scripts/findrepo


Thanks for the kind words!

Note that the key thing that `findrepo` doesn't support is respecting your .gitignore files. For example, in the Rust ecosystem, we often have a `target` directory in our projects that contains a lot of stuff we probably don't want to search. In fact, running `cargo new` will add that directory to your `.gitignore` automatically!

Tools like The Silver Searcher and ripgrep will ignore that directory (and all others like it) automatically.

There are other advantages to ripgrep. For example, every other tool that supports Unicode as well as ripgrep (that's `git grep` and GNU grep) experience a substantial slow down when trying to use more advanced Unicode features (like \w, -i, etc.). This is one of the things my benchmarks show.


Yes, skipping files matched by .gitignore is a very useful feature. That could be added quite easily to findrepo, though it would probably have to be opt-in, as it is often useful to search intermediate build files etc. that aren't checked in. Whereas `git grep` handles the other use case of only searching checked-in files.

Interesting info wrt efficient unicode processing for \w and -i.

cheers


I'm glad to see this work get written up. If anyone wants a good project, there are some optimizations we (the Hyperscan team) have done in our "Teddy" SIMD string implementation that aren't captured (yet) in the Rust implementation to my knowledge. We're very happy to see techniques from our library get used in other projects as one of the points of open sourcing it (https://github.com/01org/hyperscan) was to share how we do things with the community.


You guys have so many amazing optimizations that I haven't captured yet. :-)

I personally love bragging about your Teddy algorithm everywhere I go! Thank you so much for opening up the Hyperscan project. It has been a huge boon!


Anyone have any suggestions regarding how to best use Ripgrep within Vim? Specifically, how best to use it to recursively search the current directory (or specified directory) and have the results appear in a quickfix window that allows for easily opening the file(s) that contain the searched term.


This is what I set in my .vimrc today, and it seems to be working pretty well:

  if executable('rg')
    set grepprg=rg\ --no-heading\ --vimgrep
    set grepformat=%f:%l:%c:%m
  endif
Then you can just `:grep yourquery` (or `:grep yourquery path`), and the results will show in a quickfix window.


I'd like to get this working too, since I know a lot of folks are happy with it for ag and ack. rg does have a --vimgrep option that should make it as easy as ag to use, but I don't think there is a proper integration just yet.


I like ripgrep quite a bit from trying it today, and am hoping to find time to work on a Vim plugin this weekend. No promises, but I'll share as soon as I have something usable.


That's fantastic! Please don't hesitate to file an issue if you run into problems.


I am not sure how excited I am ... I readily accept this to be faster than ag -- but ag already scans 5M lines in a second for a string literal on my machine. Not having to switch tools when I need a recursive regexp is win enough to tolerate a potential 0.4s vs 0.32s on an everyday search.


nice, but does it compile and run on armhf? I don't see any binaries


I was able to build ripgrep from source for ARM, cross-compiling from my laptop running Debian, following the directions here:

https://github.com/japaric/rust-cross#tldr-ubuntu-example

(I haven't actually run it because I don't have an ARM linux device handy.)


Cross-compiled and ran a couple basic searches on an armv7l device. So at least the basic functionality works just fine.


I'm not terribly familiar with Rust's story on ARM, but I do know there are people working on cross compilation. I'll see if I can hook it up to the CI later today. Certainly, there's nothing in ripgrep itself that should prevent it from working on ARM.


I cross compile my home automation server that I wrote in Rust to ARM. It should work.


I'll be doing that soon! Care to share your code? or explain what exactly you're doing with it?


I've published a few crates in the home automation space, but they're both pretty terrible at this point. One of them is for controlling WeMo devices, and now that I know Rust better I'm going to rewrite it using the new blocking APIs rather than Mio. (And real SOAP support.)

What kind of home automation will you be doing? I'd absolutely love to collaborate on a gateway that interacts with Wemo, Z-wave, Zigbee, etc. and can do basic things like events, scenes, schedules, etc.

I haven't focused on home automation all summer long since I've been trying to get my Donald Trump TTS engine (also written in Rust) ready for the election. It's soooo close.


Why not make --with-filename default even for e.g. "rg somestring" ? That seems like it could hinder adoption since grep does it and it's useful when piping to other commands.

Is it enabled when you specify a directory (rg somestring .) ?


It should be the default whenever you search more than one file.


That is really cool. Although I think this is a case where Good Enough will beat amazing, at least for me (especially given how much I use backrefs).


Does it use PCRE (not the lib, the regex style). If not, ack is just fine. My main concern with grep are Posix regular expressions.


It uses the re2 style, as provided by the rust regex library.


"regex style" is a bit too broad. The regexes feel more like PCRE syntactically than they do POSIX, but does not support the more exotic features of PCRE.


On a somewhat related note.

There does not appear to be a popular indexed full-text search tool in existence.

Think of a cross-platform version of Spotlight's mdfind. Could there be something fundamental that makes this approach unsuitable for code search?

Alternatively, something like locate, but realtime and fulltext, instead of filename only.


AFAIK mdfind heavily depends on file system events in macOS; it would be painful or impossible to implement such a system in a cross-platform way with support for file systems like FAT, etc.


sure, FAT. OTOH some people think it's not a totally intractable problem:

A cross-platform file change monitor with multiple backends: Apple OS X File System Events, BSD kqueue, Solaris/Illumos File Events Notification, Linux inotify, Microsoft Windows and a stat()-based backend.

http://emcrisostomo.github.io/fswatch/

https://github.com/emcrisostomo/fswatch
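
For reference, a stat()-based fallback backend like the one fswatch lists amounts to polling and diffing modification times; a minimal sketch of the idea:

```python
import os

def snapshot(paths):
    """Map each path to its mtime in nanoseconds (None if missing)."""
    state = {}
    for p in paths:
        try:
            state[p] = os.stat(p).st_mtime_ns
        except FileNotFoundError:
            state[p] = None
    return state

def diff(old, new):
    """Return paths whose mtime changed between two snapshots."""
    return [p for p in new if old.get(p) != new[p]]
```

A real backend would also walk directories and sleep between polls; event APIs like inotify, kqueue, and FSEvents exist precisely to avoid that rescanning cost.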


nice.


Superb work, and a superb writeup. It's really great to see such an honest and thorough evaluation.


  Mega-Thanks to the authors of grep (longtime user) 
                                 ack (nice innovation)
                                  ag  (outstanding work)
                                  rg  (outstanding work)


Impressive! It works really well. Has anyone set it up with ctrlp in vim? I have `rg %s --files --color=never`, which works, but shows an empty white line at the prompt, so I need to use a cursor to jump to the desired file.


Great tool. Does there exist a faster implementation of sort as well? I once implemented quicksort in C and it was faster than Unix sort by a lot, I mean, seconds instead of minutes for 1 million lines of text.


I've never waited more than a few milliseconds to sort a million lines.

I expect GNU sort does quite a bit, including handling data sets that don't fit into memory.
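
An external merge sort is the standard trick for that: sort bounded-size chunks, then k-way merge the sorted runs. A toy Python sketch (real implementations spill each run to a temp file; here the runs stay in memory to keep it short):

```python
import heapq

def external_sort(lines, chunk_size):
    """Sort an iterable of lines using bounded memory per chunk."""
    runs, chunk = [], []
    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            runs.append(sorted(chunk))  # a real impl writes this run to disk
            chunk = []
    if chunk:
        runs.append(sorted(chunk))
    # k-way merge of the sorted runs, streaming one line at a time.
    return list(heapq.merge(*runs))
```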

Anyway, this post is about regexes, which really has nothing at all to do with sorting. They are two very different problems. :)


rg is harder to type with one hand because it uses the same finger twice. :)


It does?

I guess if you use a rigid fingering system. I never bothered to learn that back in elementary school, so my fingering is ad hoc based on what I'm typing. To type "rg" (using QWERTY), I'd just bring my middle finger up to R while using my index finger for G. This would probably be a little slower than "ag" because my hand is more likely to already be in place to type the latter without movement ("home row"), but not as slow as reusing a finger would be. It's not something I'd have to think about; this is already what I do whenever I have to type "rg" as part of a word.

I'm curious whether an ad-hoc approach is more or less efficient overall. Fingering customized per word clearly has the potential to optimize finger movement, but my error rate is relatively high - mostly timing-related - which might be exacerbated by an ad-hoc system because there are more (and more complex) unfamiliar transitions between words.

Anyway, I upvoted you for mentioning typing. It might seem trivial - well, if you use a lot of custom aliases, it is trivial - but if a command runs fast enough (and ag is already very fast on small source trees), the time spent typing its name can become a significant bottleneck. The author of Pijul, for instance, a version control system meant to compete with Git, seems not to recognize this... the command is 'pijul', which is essentially impossible to type on QWERTY without reusing a finger at least once.


I'm not super rigid with all keys but I do seem to have "R with index finger" ingrained. R with the middle finger does work nicely in this case if I think about it and force myself to do it.


Off topic, but you've just made me realise I don't always use the same finger when I type the letter R. If I were typing lower case "rg" I use middle finger for r and then index finger for g.

I'm now thinking about it too much to be able to type naturally, but I seem to use middle and index interchangeably for some keys depending on which key was just pressed / is next to be pressed.

... and I just noticed that I sometimes use different fingers from the "wrong" hand depending on context too! I've typed the letter Y using right-index, left-index and left-middle during the course of writing this post. I guess this is what learning to touch type without being actually taught buys you :)


Was just about to write a similar reply too. I type "rg" the same way. My left hand very often goes for the YHN column, the right hand won't ever touch TGB though. For typing "lol" I'll very often just move my right hand to the right (or sort of angle it) and press the l with the index and the o with the middle finger.

I've been plateaued at a 110wpm avg (121 top speed) @ 10fastfingers for quite a few months now, so perhaps breaking some of these bad habits would help me get to the desired 140.


But 'grep' doesn't?


True, but `ag` and `ack` don't.

This is a really interesting point. If we're going to be typing commands frequently, it would make sense for them to be optimally efficient to type using QWERTY (and maybe other common keyboard layouts).

`ls` and `cd` are super easy to type.


Not if you switch to the Colemak layout

...you can switch your keyboard layout back to qwerty straight after you've typed the rg command. :)


lol


It looks very good and I'd like to try it. However I'm lazy and I don't want to install all the Rust dev environment to compile it. Did anybody build a .deb for Ubuntu 16?


If you don't want to compile it yourself, the blog post has links to binaries. Since they're entirely self-contained, you don't need a full .deb to compile them; just delete the binary when you want to get rid of it.


I didn't notice that. Thanks.

It works. The binary is 11 MB, or 1.5 MB stripped. ag is 69176 bytes. Shared libraries were a good invention :-) Sooner or later rg will be packaged properly too.


I'd much rather have a self-contained binary than 11 more MB of free space.


Yes, especially given that rustc/cargo are appearing in distro's package sets already. Exciting times :)


Looks like every tool has its upsides and downsides. This one lacks full PCRE syntax support. Does one have to install Rust to use it?


No, Rust tools don't need a Rust environment installed.


Nice writeup! Any chance you'll support macports for those of us who never jumped ship to homebrew?


I'm not a mac user, so I'm terribly unfamiliar with the ecosystem. In principle, I have no problem with supporting macports, but I haven't looked into it.

One thing that would be a huge help is if someone briefly wrote up what would be necessary to get ripgrep into macports: https://github.com/BurntSushi/ripgrep/issues/10


> We will attempt to do the impossible

Oh well. Waste of time then.


Maybe the author hasn't had breakfast yet and has only done 5 impossible things so far?


Tragically the news that LLVM is switching to a non-Copyfree license (see copyfree.org/standard/rejected) has ruined everything... Nothing written in Rust can be called Free Software. :(


According to that site, Free Software (as defined by the FSF) is not Copyfree.

As far as I can tell, Copyfree is like the BSD license, except you can only restrict changes to the license text itself; the license itself may not be changed or added to, but everything else in the project may be used and changed in any way. So, effectively, it seems like a license where the license itself is copyrighted with all rights reserved, but the rest of the project is public domain.

So, just like BSD, anyone can take a Copyfree project proprietary, turning it non-free.

The GPL is as important today as ever.


I'm never sure whether or not I should adopt these fancy new command line tools that come out. I get them on my muscle memory and then all of a sudden I ssh into a machine that doesn't have any of these and I'm screwed...


I have to switch back and forth between ack and grep all the time. Sometimes I use ack, sometimes I use grep. I wrote ack, but I've never stopped using grep, and ack has never been intended as a replacement for grep.

For most of the common ack flags, they are the same as grep. -i, -l, -v, -w, -A, -B, -C, etc. That was intentional, to minimize that whiplash.

One other suggestion is that you drop ack into your ~/bin directory that you sync between machines. It's a single Perl file, and that portability is a feature. As long as your machine has a Perl on it, you can use ack.


Just make your ssh script install all the binaries wherever you go! ssh-caravan


1. Ag has nice editor integration. I would miss Emacs' helm-projectile-ag.

2. PCRE is a good regexp flavor to master. It has a good balance of speed, power, and popularity. In addition to ag, there are accessible libraries in many languages, including Python.

I think it would be good if everyone settled on PCRE, rather than each language thinking it will do regexps better.


PCRE suffers from worst case exponential behavior, so it's not suitable for all tasks.

For the most part, the syntax supported by ripgrep is a strict subset of the syntax supported by PCRE.

But yes, I can agree that supporting PCRE can be considered an advantage if you use advanced features heavily (backreferences and lookaround come to mind).
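
For anyone curious what "worst case exponential" looks like, here's the classic pathological pattern on Python's backtracking engine (PCRE behaves the same way); it's kept tiny here so it returns instantly:

```python
import re

# Nested quantifiers force (a+)+ to try exponentially many groupings
# before it can report failure: runtime roughly doubles with each
# extra 'a' on a backtracking engine, while a finite-automaton engine
# like RE2 or Rust's regex stays linear.
PATHOLOGICAL = re.compile(r"(a+)+b")

n = 10  # try n = 28 on a backtracking engine to feel the blowup
assert PATHOLOGICAL.match("a" * n + "c") is None
assert PATHOLOGICAL.match("a" * n + "b") is not None
```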


Meh. Back-references and lookaround both take too much brainpower to use at an interactive shell. I've used them a few times in programs that I was writing, but just to find some text in some files on my disk? Never.


> I think it would be good if everyone settled on Pcre, rather than each language thinking they will do regexps better.

On the contrary, I think we can do better than a big pile of C code: https://www.cvedetails.com/vulnerability-list/vendor_id-3265...


> It is not, strictly speaking, an interface compatible “drop-in” replacement for both, but the feature sets are far more similar than different.


    ...
    $ rg -uu foobar  # similar to `grep -r`
    $ rg -uuu foobar  # similar to `grep -a -r`
I knew it. The name is absolutely ironic. I cannot just drop it in and make all my scripts (and whatever scripts I download) work immediately faster, nor is it compatible with my shell typing reflexes. New, shiny, fast tool, doomed from birth.


It fundamentally can't be interface compatible, sorry. I think I was pretty clear about this in the blog. :-)



