I'm not trying to start a theological war about grep/ack here; I'm just mentioning it in case someone hasn't heard of 'ack' before and, like me, finds it extremely useful: http://betterthangrep.com
It's grep, just better. It highlights the matched text, shows which file and line each match was found in (with vivid colors so you can tell them apart easily), ignores .git and .hg directories (among others that shouldn't be searched) by default, and can restrict the search to, say, `--cpp`, `--objc`, `--ruby`, or `--text` files with a flag rather than a filename pattern. It has many other neat features that I'm sure grep has too, but with grep you have to memorize them; ack comes with sensible defaults.
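A quick taste, in case the type flags sound abstract (the patterns here are purely illustrative):

ack --ruby 'def initialize'   # search only Ruby files
ack --cpp TODO                # search only C++ files
ack pattern                   # recurse from ., skipping .git/.hg automatically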
Did you benchmark read() vs. mmap()? Most tools seem to go with read() for grep-like I/O patterns.
In fact, it looks like GNU grep has an --mmap switch, and in the simple case it's a little faster than the default on my Ubuntu system. But -i makes mmap slower. Maybe GNU grep just avoids mmap because of error handling: you get a segfault/bus error instead of an I/O error return when things go wrong.
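The comparison is easy to reproduce (assuming your grep build still honors the flag; later GNU grep releases quietly turned --mmap into a no-op, so check your version):

time grep -r pattern .          # default read() path
time grep --mmap -r pattern .   # mmap path
time grep --mmap -ri pattern .  # the -i case, where mmap lost for me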
Ag supports the same regexes as Ack. I use the PCRE library; I only call pcre_study once, and I use the new PCRE JIT on systems where it's available. These tweaks add up to a 3-5x speedup over Ack when matching regexes.
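If you're curious whether your PCRE build has the JIT at all, pcretest (shipped with the library) will tell you; the exact wording varies a bit by version:

pcretest -C
# look for a line like: Just-in-time compiler support: available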
You're starting ack again for every single file found, so what you're really comparing is startup times. Ack is so slow here because it's a Perl script: for every single file you start the Perl interpreter, and the interpreter compiles and interprets ack every time.
First, without knowing the makeup of the files he has, you can't tell how much of a corner case this is. It could be 100K small files or 10 large ones. Few care about runtimes on small files, but many care about runtimes on large ones.
Also, and probably more importantly, you'd use ack differently in a recursive-find situation: you'd just run "ack" from the top of the tree, so the Perl interpreter starts only once.
I don't think this is a useful benchmark for typical uses of ack.
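To make the difference concrete, the two shapes being compared are roughly these (hypothetical commands, since the article doesn't show its exact invocation):

find . -type f -exec ack pattern {} \;   # one Perl startup per file
ack pattern                              # one Perl startup total; ack recurses itself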
> shows which file and line each match was found in (with vivid colors so you can tell them apart easily),
grep -rn --color pattern ./files/
files/foo.sh:123: echo "Look at the floral pattern on this dress!"
> ignores .git and .hg directories (among others that shouldn't be searched) by default,
grep -r --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn pattern .
> can restrict the search to, say, `--cpp`, `--objc`, `--ruby`, or `--text` files with a flag rather than a filename pattern,
You would use `find` in conjunction with `grep`: "The Art of Unix Programming", modularity, and all that jazz. Presumably you would just modify your own grep alias or define a function to avoid retyping. The end result looks pretty much like my grep alias:
alias grep='grep -Ein --color --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn'
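For the per-language filtering, the same modular approach works; something like this (glob and pattern purely illustrative):

find . -name '*.rb' -print0 | xargs -0 grep -n pattern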
I still fail to see a reason to use ack, especially when I can assume grep is always available for portability.
I just replicated the test and I can confirm the FreeBSD grep compiled on Darwin is about 30x slower.
% /usr/local/bin/grep --version
/usr/local/bin/grep (GNU grep) 2.14
% time find . -type f | xargs /usr/local/bin/grep 83ba
find . -type f 0.01s user 0.06s system 8% cpu 0.870 total
xargs /usr/local/bin/grep 83ba 0.66s user 0.31s system 95% cpu 1.017 total
% /usr/bin/grep --version
grep (BSD grep) 2.5.1-FreeBSD
% time find . -type f | xargs /usr/bin/grep 83ba
find . -type f 0.01s user 0.06s system 0% cpu 28.434 total
xargs /usr/bin/grep 83ba 31.65s user 0.40s system 99% cpu 32.113 total
There was also some discussion about this on one of the Apple mailing lists a few months ago, and it turns out there are major differences in how the two grep implementations on OS X interact with the buffer cache. In particular, empirical evidence suggests that the GNU grep build on 10.6 ends up reading its input from the cache, while the BSD grep on 10.7+ does not.
Incidentally, on OS X, you can commonly get another order of magnitude improvement over even GNU grep with Spotlight's index: use xargs to grep only through files that pass a looser mdfind "pre-screen".
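Something along these lines (assuming Spotlight has indexed the tree, and that your mdfind is new enough to have the -0 flag; the loose mdfind query over-matches, and grep then does the exact filtering):

mdfind -0 -onlyin . 83ba | xargs -0 grep -n 83ba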
Is speed really that much of a concern with grep? I typically use :vimgrep inside of vim, not because it's faster (it's orders of magnitude slower due to being interpreted vimscript), but because I hate remembering the differences between pcre/vim/gnu/posix regex syntax.
I use grep in some pipelines to bulk-process data, because if you have a fast grep, using it to pre-filter input files and remove definitely-not-matching lines is one of the quickest ways to speed up certain kinds of scripts without rewriting the whole thing. And in that case, sometimes over gigabytes of data, it's nice if it's fast.
One common case: I have a Perl script processing a giant file, but it only processes certain lines that match a test. You can move that test to grep, to remove nonmatching lines before Perl even hits them, which will typically be much faster than making Perl loop through them.
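Concretely, the shape is (file and script names hypothetical):

grep 'needle' huge.log | perl process.pl

rather than having process.pl read huge.log and skip the non-matching lines itself.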
At the scale you're talking about (10GB+ files), it's far more efficient to put the primitive filtering in the application generating the lines in the first place. You pay two penalties for using grep: having another process touch the data, and having to generate the superfluous lines in the first place.
Just tried on Snow Leopard: not quite 10x, but certainly nearly 2x faster. (Admittedly, my Firefox checkout is Mercurial, and hg locate seems to pass something invalid to xargs halfway through, but I guess the first chunk of files is the same.)
Someone commented on the article that this might be caused by leaving off the -F flag; I tried it, and -F makes both versions slightly faster again.
I once ran a sed script over a couple million text files (60 GB in total). They were web pages downloaded in some archive format (WARC? I don't remember what it was called), and I needed to change their formatting slightly to feed them to Nutch. Mac's default sed was literally 50 times slower than gsed on the same machine. If I remember correctly, gsed finished the task in under two hours.
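For anyone hitting the same wall on a Mac, the fix is just installing GNU sed and pointing the pipeline at it. The shape was something like this (the expression and directory are illustrative, not what I actually ran; note that BSD sed's -i also demands an explicit backup suffix, e.g. -i '', which GNU sed doesn't):

find pages/ -type f -print0 | xargs -0 gsed -i 's/old/new/g'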