Awk is still one of my favorite tools because its power is underestimated by nearly everyone I see using it.

    ls -l | awk '{print $3}'
That’s typical usage of Awk, where you use it in place of cut because you can’t be bothered to remember the right flags for cut.

But… Awk, by itself, can often replace entire pipelines. Reduce your pipeline to a single Awk invocation! The only drawback is that very few people know Awk well enough to do this, and this means that if you write non-trivial Awk code, nobody on your team will be able to read it.
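
For example, a counting pipeline like

    grep ERROR app.log | cut -d' ' -f1 | sort | uniq -c

(file name made up for illustration) collapses into a single Awk invocation:

    awk '/ERROR/ { n[$1]++ } END { for (d in n) print n[d], d }' app.log

(The output order differs, since for-in iterates in unspecified order; pipe through sort if that matters.)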

Every once in a while, I write some tool in Awk or figure out how to rewrite some pipeline as Awk. It’s an enrichment activity for me, like those toys they put in animal habitats at the zoo.


>To Perl connoisseurs, this feature may be known as Autovivification. In general, AWK is quite unequivocally a prototype of Perl. You can even say that Perl is a kind of AWK overgrowth on steroids…

Before I learned Perl, I used to write non-trivial awk programs. Associative arrays and other features are indeed very powerful. I'm no longer fluent, but I think I could still read a sophisticated awk script.
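
For example, awk arrays (and their elements) spring into existence on first reference, with no declarations needed:

    printf 'alice\nbob\nalice\n' | awk '{ seen[$1]++ } END { for (k in seen) print k, seen[k] }'

which prints each name with its count (in unspecified order).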

Even sed can be used for some fancy processing (i.e., scripts), if one knows regex well.
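
For instance, the classic hold-space trick that prints a file in reverse line order (a poor man's tac):

    printf '1\n2\n3\n' | sed -n '1!G;h;$p'

outputs 3, 2, 1.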


> this means that if you write non-trivial Awk code, nobody on your team will be able to read it.

Sort of! A lot of AWK is easy to read even if you don't remember how to write it. There are a few quirks like how gsub modifies its target in-place (and how its default target is $0), and of course understanding the overall pattern-action layout. But I think most reasonable (not too clever, not too complicated) AWK scripts would also be readable to a typical programmer even if they don't know AWK specifically.
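
To illustrate the gsub quirk: it edits its target in place (here the default, $0) and returns the number of substitutions made:

    $ echo 'foo bar foo' | awk '{ n = gsub(/foo/, "baz"); print n ": " $0 }'
    2: baz bar baz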


I wrote a BASIC renumberer and compactor in bash, using every bashism I could so that it called no externals and didn't even use backticks to call child bashes, just pure bash itself (but a late version, using every available feature for convenience and compactness).

I then re-wrote it in awk out of curiosity and it looked almost the same.

Crazy bash expansion syntax and command-line parser abuse were replaced by actual proper functions, but the whole thing, when done, was almost a line-by-line in-place replacement, so almost the same LOC and structure.

Both versions share most of the same advantages over something like python. Both are single-binary interpreters that are always already installed. Both versions will run on basically any system, any platform, any version (going forward at least) without needing to install anything, let alone anything as gobsmackingly ridiculous as pip or venv.(1)

But the awk version is actually readable.

And unlike bash, awk pretty much stopped changing decades ago, so not only is it forward compatible, it's pretty backwards compatible too.

Not that that is generally a thing you have to worry about. We don't make new machines that are older than some code we wrote 5 years ago. Old bash or awk code always works on the next new machine, and that's all you ever need.(2)

There is gnu vs bsd vs posix vs mawk/nawk, but that's not much of a problem, and it's not a constantly-breaking new-version problem; it's the same gnu vs posix differences as for the last 30 years. You have to knowingly go out of your way to use mawk etc.

(1) With bash, for example, everything is on bash 5 or at worst 4, except that a brand new Mac today still ships with bash 3, and so you can actually run into backwards-compatibility problems in bash.

(2) And bash actually does have plugins & extensions, and they do vary from system to system, so there are things you either need to avoid using, or you run into exactly the same breakage as with python or ruby or whatever.

For writing a program vs gluing other programs together, really awk should be the goat.


>and so you can actually run into backwards-compatibility problems in bash.

let's have a bash and bash that backwards compatibility in bash.


I feel the same about using Awk; it is just fun to use. I like that variables have defined initial values, so they don't need to be declared, and the most common bits of control flow needed to process an input file are implicit (there's a tiny sketch at the end of this comment). Some fun things I've written with awk:

Plain text accounting program in awk https://github.com/benjaminogles/ledger.bash

Literate programming/static site generator in awk https://github.com/benjaminogles/lit

Although the latter just uses awk as a weird shell, maintaining a couple of child processes for converting md to html and executing code blocks with output piped into the document.
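
Here's the sketch promised above: accumulators start from zero without any declarations, and the read-a-line loop is implicit, so an average is one pattern-action pair:

    awk -F: '{ total += $3; n++ } END { if (n) print total / n }' /etc/passwd

(average UID; any columnar file works the same way).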


AWK, rc, and mk are the 3 big tools in my shell toolkit. It's great.

Why mk instead of any of the other builders?

I already get it with plan9port and it addresses 100% of my issues with make. It integrates nicely with rc so there's really not a lot of additional syntax to remember.

> That’s typical usage of Awk, where you use it in place of cut because you can’t be bothered to remember the right flags for cut.

Even if you remember the flags, cut(1) will not be able to handle ls -l, or any other command that uses spaces to align text into fixed-width columns.

Unlike awk(1), cut(1) only works with delimiters that are a single character. Meaning, a run of spaces will be treated as several empty fields. And, depending on factors you don't control, every line will have a different number of fields in it, and the data you need to extract will be in a different field.
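
You can see the empty fields directly:

    $ printf 'a  b\n' | cut -d' ' -f2

    $ printf 'a  b\n' | cut -d' ' -f3
    b

The two spaces produce an empty second field, and 'b' lands in the third.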

You can either switch to awk(1), because its default field separator treats runs of spaces as one, or squeeze them with tr(1) first:

  ls -l | tr -s ' ' | cut -d' ' -f3

Cut has flags to extract byte or character ranges.

You don't have to use fields.


Can these flags be used to extract the N-th column (say, the size) of every line from ls -l output?

Yes.

    $ ls -l | cut -c 35-41

        22 
      4096 
      4096 
      4096 
      4096 
      4096 
      4096 
        68 
       456 
       690 
      7926 
      8503 
     19914

This is what I get:

  ls -l | cut -c 35-41
  
  6 Nov 1
  6 Nov
  6 Nov 1
  6 Nov 1

Well, sure. I said it did character ranges so you don't have to use fields.

What were you expecting? That your character ranges in ls would match mine?


> What were you expecting? That your character ranges in ls would match mine?

I would expect the command to work in any directory. Try a few different directories on your computer and you'll see that it won't work in some of them.


> I would expect the command to work in any directory.

But ... why expect that? That's not what "character ranges" mean.

I mean, I was only trying to clarify that `cut` is not limited to fields.


Love awk. In the early days of my career, I used to write ETL pipelines and awk helped me condense a lot of stuff into a small number of LOC. I particularly prided myself in writing terse one-liners (some probably undecipherable, ha!); but did occasionally write scripts. Now I mostly reach for Python.

one of the best word-wrapping implementations I've seen (handles color codes and emojis just fine!) is written in pure mawk

very fast, highly underrated language

I'm not sure how good it would be for pipelines, if a step should fail, or if a step should need to resume, etc.


This sounds interesting. Could you give an example where you rewrote a pipeline in awk?

Not the OP, but here is an example:

    TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secrets -n kube-system | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t' | tr -d " ")

This pipeline can be significantly reduced by replacing the cuts with awk, folding the greps into awk patterns, and using awk's gsub in place of tr.
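
Something like this (untested sketch; same two kubectl calls, with the greps, cuts, and trs folded into two awk invocations):

    TOKEN=$(kubectl describe secret -n kube-system \
        $(kubectl get secrets -n kube-system | awk '/default/ { print $1 }') |
        awk -F: '/^token/ { gsub(/[ \t]/, "", $2); print $2 }')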


Example of replacing grep+cut with a single awk invocation:

    $ echo token:abc:def | grep -E ^token | cut -d: -f2
    abc
    
    $ echo token:abc:def | awk -F: '/^token/ { print $2 }'
    abc
Conditions don't have to be regular expressions. For example:

    $ echo "$CSV"
    foo:24
    bar:15
    baz:49
    
    $ echo "$CSV" | awk -F: '$2 > 20 { print $1 }'
    foo
    baz

Somebody wanted to set breakpoints in their C code by marking them with a comment (note “d” for “debugger”):

  //d
You can get a list of them with a single Awk line.

  awk -F'//d[[:space:]]*' 'NF > 1 {print FILENAME ":" FNR " " $2}' source/*.c
You can even create a GDB script, pretty easily.
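
For example (output file name is mine), print a break command per marker and load the result with gdb -x:

    awk -F'//d[[:space:]]*' 'NF > 1 { print "break " FILENAME ":" FNR }' source/*.c > breakpoints.gdb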

(IMO, easier still to configure your editor to support breakpoints, but I’m not the one who chose to do it this way.)


Why are you using the locale-specific [:space:] on source code? In your C source code, are you using spaces other than ASCII 0x20?

Would you have //d<0xA0>rest of comment?

Or some fancy Unicode space made using several UTF-8 bytes?


> Why are you using the locale-specific [:space:] on source code?

Because it’s the one I remembered first, it worked, and I didn’t think that it needed any improvement. In fact, I still don’t think it needs any improvement.


Tab characters can also be found in source code.

Since you control the //d format, why would you allow/support anything but a space as a separator? That's just to distinguish it from a comment like "//delete empty nodes" that is not the //d debug notation.

If tabs are supported,

  [ \t]
is still shorter than

  [[:space:]]
and if we include all the "isspace" characters from ASCII (vertical tab, form feed, embedded carriage return) except for the line feed that would never occur due to separating lines, we just break even on pure character count:

  [ \t\v\f\r]
TVFR all fall under the left hand, backslash under the right, and nothing requires Shift.

The resulting character class does exactly the same thing under any locale.


There's also [:blank:], which is just space and tab. Both I think are perfectly readable and reasonable options that communicate intent nicely.

ISO C99 says, of the isblank function (to which [:blank:] is related):

The isblank function tests for any character that is a standard blank character or is one of a locale-specific set of characters for which isspace is true and that is used to separate words within a line of text. The standard blank characters are the following: space (’ ’), and horizontal tab (’\t’). In the "C" locale, isblank returns true only for the standard blank characters.

[:blank:] is only the same thing as [\t ] (tab space) if you run your scripts and Awk and everything in the "C" locale.


Interesting, the GNU Grep manual describes both character classes as behaving as if you are in the C locale. I shouldn't have assumed it was the same as in the C standard!

awk is so much better than sed to learn, given its abilities. The only unix tools it doesn't replace are tr and tail, but other than that, you can use it instead of grep, cut, sed, and head.
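
Rough equivalents, for illustration (the sed one assumes you want every line printed, modified or not):

    grep foo file            # awk '/foo/' file
    cut -d: -f1 file         # awk -F: '{ print $1 }' file
    sed 's/foo/bar/g' file   # awk '{ gsub(/foo/, "bar") } 1' file
    head -n 5 file           # awk 'NR <= 5' file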

I think you could replace tail with awk, if you absolutely needed to. This is a naive attempt:

   cat /etc/passwd | \
   awk -v n=10 '{ lines[NR] = $0 }
            END{
                for (i = NR - n + 1; i <= NR; i++)
                    if (i > 0) print lines[i]
            }'
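
A ring buffer variant keeps memory bounded at n lines instead of buffering the whole file (though it still has to read everything, as the reply below points out):

    awk -v n=10 '{ buf[NR % n] = $0 }
        END { for (i = NR - n + 1; i <= NR; i++) if (i > 0) print buf[i % n] }' /etc/passwd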

You can, sure. But tail seeks to EOF and then reads backwards until it finds "\n"; awk cannot seek, so you must do what you did there, which means the bigger the file, the longer it takes.

And there's also tail -f; how would you go about doing that? A while loop that sleeps and reopens the file? Yuck.


Stop using awk, use a real programming language+shell instead, with structured data instead of bytestream wrangling:

  > ls -l | get user

  ┌────┬──────┐
  │  0 │ cube │
  │  1 │ cube │
  │  2 │ cube │
  │  3 │ cube │
  │  4 │ cube │
  │  5 │ cube │
  │  6 │ cube │
  │  7 │ cube │
  │  8 │ cube │
  │  9 │ cube │
  │ 10 │ cube │
  │ 11 │ cube │
  │ 12 │ cube │
  │ 13 │ cube │
  │ 14 │ cube │
  │ 15 │ cube │
  └────┴──────┘
You don't need to memorize bad tools' quirks. You can just use good tools.

https://nushell.sh - try Nushell now! It's like PowerShell, if it was good.


PowerShell is open source and available on Linux today for those who enjoy an OO terminal.

MIT licensed.

https://learn.microsoft.com/en-us/powershell/scripting/insta...


> try Nushell now!

So, I'm curious. What's the Nushell reimplementation of the 'crash-dump.awk' script at the end of the "Awk in 20 Minutes" article on ferd.ca ? Do note that "I simply won't deal with weirdly-structured data." isn't an option.


Once you get TSV- and CSV-related tools, nushell and psh are like toys.


Current AWK (the One True AWK, in the OpenBSD base system) has CSV support; you can read the man page for it.
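
On builds recent enough to have it, that looks like:

    awk --csv '{ print $2 }' data.csv

where --csv switches input parsing to CSV rules, so quoted fields with embedded commas survive intact.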

While your recommendation is sound, this is not only a rudely worded take, it also misses the point of the parent comment.

Also, the nushell code is self-explanatory. Who knows what $3 refers to?


