
Useful Perl one-liners (2013) - kawera
http://www.catonmat.net/download/perl1line.txt
======
0xmohit
Reminds me of the sed and awk one-liners:

[http://sed.sourceforge.net/sed1line.txt](http://sed.sourceforge.net/sed1line.txt)

[http://www.pement.org/awk/awk1line.txt](http://www.pement.org/awk/awk1line.txt)

~~~
pkrumins
That's exactly how perl1line.txt was created.

Also, I did an article series explaining every single one of the sed, awk
and perl one-liners in those files:

[http://www.catonmat.net/blog/sed-one-liners-explained-part-one/](http://www.catonmat.net/blog/sed-one-liners-explained-part-one/)

[http://www.catonmat.net/blog/awk-one-liners-explained-part-one/](http://www.catonmat.net/blog/awk-one-liners-explained-part-one/)

[http://www.catonmat.net/blog/perl-one-liners-explained-part-one/](http://www.catonmat.net/blog/perl-one-liners-explained-part-one/)

~~~
pcsanwald
I bought your book and still use it as a reference all the time. These
articles were my entry point. Nice work, Peteris!

------
shanemhansen
My favorite perl one-liner is a globbed search and replace. It even has its
own website: [http://perlpie.com/](http://perlpie.com/)

    
    
         perl -p -i -e 's/foo/bar/gi' ./*.txt

~~~
jxy
What is the advantage of this over sed?

~~~
paulmd
They're functionally interchangeable for simple usage (minus some Perl-
specific regex features that sed's engine lacks).

As I note below, escape behavior is a little different because sed wants you
to escape +'s to have the normal regex semantics ("one or more matches"). And
I actually think Perl is correct here and you should only need to escape those
characters if you want _literal matches_, but I have a weird environment
(Cygwin) and it's possible the sed build there is a little messed up.

The major difference for me is that Perl can match across multiple lines using
the -0777 flag. I've been doing a lot of regex-based mass manipulation of
source code lately, and most people write functions across multiple lines. You
can't do that with sed without multi-line appending, and it gets really ugly
really fast. Sed is essentially limited to single-line matches.

For example, I had 100-odd classes with getters for certain values but not
setters. So I did:

    
    
      grep -rle "getAddTime" | while read line; do if ! grep -q "setAddTime" "$line"; then echo "$line"; perl -i -0777 -pe 's/public\s+Date\s+getAddTime\s*\(\s*\)\s*\{[\s\w=;]+\}/$&\n\n  public void setAddTime(Date addTime) { this.addTime = addTime; }/' "$line"; fi; done
    

Translation: look for files that contain getAddTime; if they do not contain
setAddTime, then find the string "public Date getAddTime() {...}" and append
the setter after that. There are a few edge cases you could hit there, but it
was close enough to work on my codebase.

I wish Perl would do an inplace edit of a file without creating a backup,
though. I am under source control so there's no harm in just operating right
on the files. It's not the end of the world to follow up with a rm -r *.bak I
guess, but it's annoying. At least they're in my git-ignore which helps a
little.
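
For what it's worth, a stock Unix perl already skips the backup when -i is
given with no suffix; it's certain Windows builds that historically refuse
with "Can't do inplace edit without backup", which may be what's happening in
that Cygwin environment. A quick sketch (demo.txt is a throwaway file made up
for this example):

```shell
# demo.txt is a throwaway file for this sketch
printf 'foo\n' > demo.txt

# -i with no suffix: edit in place, keep no backup (on a Unix perl)
perl -i -pe 's/foo/bar/' demo.txt

cat demo.txt                  # -> bar
ls demo.txt.bak 2>/dev/null   # -> nothing; no backup was written
```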

~~~
throwbsidbdk
Fun factoid most have forgotten: regex as we know it is Perl. The beginnings
are elsewhere, but modern regex was designed as part of the language, and the
engine was pulled out and reused when people found how useful it was.

Perl's regex engine still offers features beyond those in more modern
languages, supporting, among other things, code execution within the pattern.
If I remember right, the Perl regex engine is actually Turing complete.

~~~
TazeTSchnitzel
Hence the PCRE regex library (Perl-Compatible Regular Expressions), for
instance.

~~~
theoh
Note that this library is by Philip Hazel and did not originate in the Perl
source code, but in Exim.

Regex facilities for text processing were first implemented by Ken Thompson,
long before Perl.

On the topic of implementations, this is important:
[https://swtch.com/~rsc/regexp/regexp1.html](https://swtch.com/~rsc/regexp/regexp1.html)

------
JadeNB
I like Krumins's explanations series even more than the bare one-liners:
[http://www.catonmat.net/search?q=Perl+one-liners+explained](http://www.catonmat.net/search?q=Perl+one-liners+explained)
. (That said, a good context-free Perl one-liner can always be enjoyed on its
mystifying own.) Note that there is a book with the one-liners (and, I assume,
their explanations):
[https://www.nostarch.com/perloneliners](https://www.nostarch.com/perloneliners)
.

He also posted about Abigail's classic primality-testing one-liner at
[http://www.catonmat.net/blog/perl-regex-that-matches-prime-numbers](http://www.catonmat.net/blog/perl-regex-that-matches-prime-numbers).

~~~
waynecochran
For real?

    
    
         # Check if a number is a prime
         perl -lne '(1x$_) !~ /^1?$|^(11+?)\1+$/ && print "$_ is prime"'

~~~
JadeNB
I guess you mean "For real?" as in "Is this really a primality test?" Yes, it
is, and Krumins describes how it works in the post that I linked
([http://www.catonmat.net/blog/perl-regex-that-matches-prime-numbers](http://www.catonmat.net/blog/perl-regex-that-matches-prime-numbers)).

Essentially, although it looks complicated, it's the simplest possible idea:
it's just trying factorisations, with the `(11+?)` part being the factor, and
the `\1+` part insisting that we repeat that factor. It is of course wildly
inefficient, and limitations in the backtracking engine mean that it can give
false positives for large numbers
([http://zmievski.org/2010/08/the-prime-that-wasnt](http://zmievski.org/2010/08/the-prime-that-wasnt), also linked from that post).
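
To see the trial-factorisation idea in action, here is the regex run over a
small range (the number is first expanded into its unary representation with
`1 x $ARGV[0]`):

```shell
# Feed a few numbers through the unary-representation primality test
for n in $(seq 2 12); do
  perl -le 'print "$ARGV[0] is prime" if (1 x $ARGV[0]) !~ /^1?$|^(11+?)\1+$/' "$n"
done
# -> reports 2, 3, 5, 7 and 11 as prime; stays silent for the composites
```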

------
pmoriarty
I like to use "ped" (Lee Eakin's "sed done right" rewrite in Perl, which
allows in-place editing):

[http://www.cpan.org/authors/id/L/LE/LEAKIN/ped-1.2](http://www.cpan.org/authors/id/L/LE/LEAKIN/ped-1.2)

[https://jakobi.github.io/script-archive-doc/cli.list.grep/00_LISTPROCESSING.html](https://jakobi.github.io/script-archive-doc/cli.list.grep/00_LISTPROCESSING.html)

------
vram22
While it's not in Perl (it uses grep, sed and awk), I like this Unix one-liner
(mine) to kill a hanging Firefox process, not so much for the code but for the
interesting comment thread that it resulted in on my blog - about Unix
processes, zombies, etc.

UNIX one-liner to kill a hanging Firefox process:

[http://jugad2.blogspot.in/2008/09/unix-one-liner-to-kill-hanging-firefox.html](http://jugad2.blogspot.in/2008/09/unix-one-liner-to-kill-hanging-firefox.html)

~~~
vacri
Use `xkill` (if you're using X). `xkill` in a terminal, then click on the
window you want to kill. Boom. No need to type out a convoluted command.

~~~
petre
Sadly there are situations when a hanging Firefox doesn't have a window to
place the xkill skull-and-crossbones cursor of death on.

------
tyingq
I use perl with find quite a bit...it's great for doing something with a list
of files without spawning a bunch of processes via -exec.

Like, for example:

    
    
        # delete a bunch of matching files and print the name
        find . -name whatever\* -type f | perl -ne 'print;chomp;unlink'

~~~
vesinisa
I think you can achieve exactly the same without pipes with:

> find . -name whatever\* -type f -print -delete

... except this variation should be safe with filenames containing the '\n'
character (which is perfectly legal on most UNIXes).

~~~
JadeNB
> > find . -name whatever\* -type f -print -delete

Or you can also do it all in Perl:

    
    
        perl -E 'while (<whatever*>) { say; unlink }'
    

EDIT: As tyingq points out
([https://news.ycombinator.com/item?id=13065132](https://news.ycombinator.com/item?id=13065132)),
I forgot that `find` recurses into subdirectories. I don't know a way to make
Perl do the same that doesn't start getting more verbose (but there must be a
Perl golfer somewhere around here who does …).

~~~
EvilTerran
> there must be a Perl golfer somewhere around here who does …

Hello!

    
    
        perl -MFile::Find -e 'find sub{if(-f&&/^whatever/){say$File::Find::name;unlink}},"."'
    

You might even be able to do 'find{...}"."' (no "sub" or ","), but I don't
have perl on my phone to test that ;)

~~~
a_gopher

      perl -E 'sub x{say;for(<$_/*>){&x;unlink}};x for@ARGV' *
    
      perl -E 'push@ARGV,<$_/*>for@ARGV;say,unlink for@ARGV' *

~~~
EvilTerran
Isn't that just "rm -rv *"? The original spec was to only delete files, not
directories - and only those matching a given pattern, at that.

Also, I think yours would go into an infinite spin if it met a symlink loop
(say, "ln -s . foo") - File::Find is hardened against that kinda shenanigans.

~~~
a_gopher
fair. not quite so pretty now...

    
    
      perl -E 'for(@ARGV){say,unlink for<$_/whatever*>;push@ARGV,($_)x(-d^-l)for<$_/*>}' .

------
rektide
I liked this part of perl so much that I recently rewrote perl-style
substitution in Node.js. Used as an executable, it functions largely like
perl -p -e: [https://github.com/rektide/perls](https://github.com/rektide/perls)

------
berntb
Here are a few quick and dirty I use for handling output of MySQL. (For
serious stuff I include real CSV libraries in the one liner to quote correctly
etc.)

    
    
      alias sqlcmdtocsv="perl -nE 'chomp; s/\\t/;/g; say \$_;'"
      alias sqlcmdtoperl="perl -MData::Dump=dump -nE 'chomp; @r=split(/\\t/); if (@titles == 0) { @titles=@r; next; } \$row={};   for(\$i=0; \$i < @r; \$i++) { \$row->{\$titles[\$i]} = \$r[\$i];  } push @rows, \$row;  END { say dump \\@rows };'"
    
      # Quick and dirty for moving from the real DB to the test DB.
      # Use: mysql_generates_a_row | sqlcmdtoperl  | dumptosqlinsert
      alias dumptosqlinsert="perl -E 'my \$txt; { local \$/; \$txt=<>; } my \$rows= eval \$txt; \$q=chr(39); for my \$r (@\$rows) {  @cols=map { \$v=\$r->{\$_}; if (\$v eq \"NULL\") {\"\$_=NULL\"} else {\"\$_=\${q}\$v\$q\"} } keys %\$r; say join(\", \", @cols);}'"
    

I have a bunch of aliases for conversions, to generate common SQL that
depends on parameters, and so on.

~~~
pre_action
I wrote the `ysql` Perl application for exactly this problem.

~~~
berntb
It was faster than Googling. :-)

But it would have been better to build off a well-tested module as a basis for
my set of tools, sigh. I could have used YAML as an intermediary.

A favorite saying is from chemistry -- with a few weeks of hard work in the
lab, you can save _hours_ in the library...

------
yati
A very useful idiom I often use is

    
    
        $ foo |perl -nafe '<code>'
    

Where <code> will be wrapped in a while (<>) { ... } loop. You can access the
current line as $_ and either print something every time, or keep state and
then have an END block that emits the final output. E.g., I was using
something like the following today to get some info on row sizes in a MySQL
table (untested):

    
    
        $ echo 'DESCRIBE table'|mysql |cut -f1 |tail -n +2 |perl -nafe 'chomp; push @fields, $_; END { print qq/SELECT MAX(row_size) from (SELECT/ . join(" + ", map { qq/CHAR_LENGTH($_)/ } @fields) . qq/ AS row_size FROM table)/; }'

~~~
EvilTerran
Incidentally, you can pass a query as a parameter to `mysql` with -e, you
don't need to pipe it in; and -N suppresses column headings, so you could use
that instead of `tail`.

... also, you can pass -l (lowercase L) to `perl` to enable automatic line-end
processing, which autochomps each line of input (and sets the output record
separator so print() includes a trailing newline, not that it matters here).
And FWIW you're not actually using -a (autosplit each line into @F) here - but
you could use it instead of `cut`, and it does imply -n; you might not need -f
either, unless you actually have a sitecustomize.pl for it to disable.

So:

    
    
        $ mysql -Ne 'DESCRIBE table' | perl -ale 'push @fields, $F[0]; END { print qq/SELECT MAX(/ . join(" + ", map { qq/CHAR_LENGTH($_)/ } @fields) . qq/) FROM table/ }'
    

(I could golf it further, but I think that covers the stuff you might actually
find useful. I blame my perlmonks days...)

~~~
yati
Ha, nice. I must admit I just do `perl -nafe` almost out of muscle memory,
without paying too much attention to what the options mean individually.
Thanks for this :)

------
EvilTerran
A personal favourite of mine:

    
    
        perl -lp0e1 /proc/$pid/environ
    

It relies on some fairly subtle interactions in perl's command-line
processing, and I do enjoy how cunning that makes it feel. _And_ it's actually
useful, by way of being a lot easier to type (less moving your hands around
the keyboard for odd bits of punctuation) than the "sensible" way to do it:

    
    
        tr '\0' '\n' < /proc/$pid/environ

~~~
teddyh
I prefer this, which is shorter than both of those:

    
    
        xargs -0n1 < /proc/$pid/environ

~~~
EvilTerran
Oh, nice! I always forget that xargs defaults to echoing the parameters if you
don't specify a command.

That said, variations on my one are also handy for doing any further
processing on the lines - eg, I do stuff like this pretty regularly:

    
    
        perl -ln0e '/^FOO=/&&print' /proc/$pid/environ

------
scrame

       cat FILE | perl -lne 'm|(REGEX)|&&print$1'
    

Pays for itself pretty quickly. I default to pipes for the delimiter so you
don't have to escape file paths.

------
ldoroud
My favorite command when dealing with csv/txt files generated by Microsoft
products:

    
    
       perl -p -e 's/\r/\n/g' Bad_Microsoft.csv > Good_unix.csv

~~~
Gorgor
Don't you end up duplicating every line break that way? I think it should be

    
    
        perl -p -e 's/\r//g' Bad_Microsoft.csv > Good_unix.csv
    

After all, Microsoft files usually end in a carriage return followed by a line
break and not a bare carriage return, don't they?

~~~
ldoroud
Not really, or at least I haven't had that problem. It would be a problem if
they used both \r and \n at the same time; in that case you can always convert
\r\n to \n or something like that.
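
If a file does use \r\n pairs, the CRLF-to-LF version is just as short (the
file names here are placeholders for this sketch):

```shell
# crlf.csv is the DOS-style input, unix.csv the converted output
perl -pe 's/\r\n/\n/g' crlf.csv > unix.csv

# or edit in place instead:
perl -i -pe 's/\r\n/\n/g' crlf.csv
```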

------
kazinator
Most of the basic text processing ones are shorter in Awk, and when they're
not shorter, they are still clearer: no cryptic command-line options, no
gratuitous line noise:

    
    
      # Double space a file
      perl -pe '$\="\n"'
      perl -pe 'BEGIN { $\="\n" }'
      perl -pe '$_ .= "\n"'
      perl -pe 's/$/\n/'
      perl -nE 'say'
    
      awk 'BEGIN { ORS="\n\n" } 1' # output record separator
      awk '{ print; print "" }'
      awk '$0=$0"\n"'
    
      txr -e '(awk ((set rec `@rec\n`)))' # Awk macro in Lisp!
    
    
      # Double space a file, except the blank lines
      perl -pe '$_ .= "\n" unless /^$/'
      perl -pe '$_ .= "\n" if /\S/'
    
      awk 'BEGIN { ORS="\n\n" } /./'
      awk '/./ { print; print "" }'
      awk '/./&&$0=$0"\n"'
    
      txr -e '(awk (#/./ (set rec `@rec\n`) (prn)))'
    
    
      # Remove all consecutive blank lines, leaving just one
      perl -00 -pe ''
      perl -00pe0
    
      awk -v RS= -v ORS="\n\n" 1    # RS=<blank> -> Awk paragraph mode
    
      txr -e '(awk (:set rs nil ors "\n\n") (t))'
    
      # Number all lines in a file
      perl -pe '$_ = "$. $_"'
    
      awk '{ print FNR, $0 }'
    
      txr -e '(awk (t (prn fnr rec)))'
    
    
      # Print the total number of lines in a file (emulate wc -l)
      perl -lne 'END { print $. }'
    
      awk 'END { print FNR }'
    
      # txr elides the loop if cond/action clauses are absent, so (nil) is needed
      txr -e '(awk (nil) (:end (prn fnr)))'
    
      # Find the total number of fields (words) on each line
      perl -alne 'print scalar @F'
    
      awk '{print NF}'
    
      # Print the last 10 lines of a file (emulate tail -10)
      perl -ne 'push @a, $_; @a = @a[@a-10..$#a] if @a > 10; END { print @a }'
    
      awk '{ ln[FNR%10]=$0 }
           END { for(i=FNR-9;i<=FNR;i++)
                  if (i > 0) print ln[i%10] }'
      txr -e '(awk (:let l)
                   (t (push rec l)
                      (del [l 10]))
                   (:end (tprint (nreverse l))))'
      txr -e '(awk (:let l)
                   (t (set [l 10..10] (list rec))
                      (del [l 0..-10]))
                   (:end (tprint l)))'
      # (real way)
      txr -t '[(get-lines) -10..:]' < in
      txr -t '(last (get-lines) 10)' < in

------
MichaelMoser123
If you really need it, finding the one-liner you need takes more time than
writing it yourself. Am I the only one who thinks so?

~~~
berntb
I think the obvious counter argument is:

You read them to learn. I learned a couple of perl flags in this discussion I
had forgotten. Also, you get ideas about what you can do. And third, after
reading through a collection like that, you can find stuff back by looking in
an index.

------
btilly
And here is a dirty Perl joke. What does this command have to do with pussy?

    
    
        perl -penis
    

.

.

.

(It is an implementation of cat.)

