
The most surprising Unix programs - vitplister
https://minnie.tuhs.org/pipermail/tuhs/2020-March/020664.html
======
abetusk
For me, the most surprising one was paste.

paste allowed me to interleave two streams or to split out a single stream into
two columns. I'd been writing custom scripting monstrosities before I
discovered paste:

    
    
        $ paste <( echo -e 'foo\nbar' ) <( echo -e 'baz\nqux' )
        foo     baz
        bar     qux
        $ echo -e 'foo\nbar\nbaz\nqux' | paste - - 
        foo     bar
        baz     qux
    
    

I wonder what other unix gems I've been missing...

~~~
jerf
Recently I found myself wanting to stop a program after a certain timeout. I
found myself thinking "surely there's a UNIX program to do this in the shell?" There
is. And it's called... brace yourself... "timeout".

20+ years in UNIX, never encountered it. Part of GNU coreutils.
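
A quick demo (GNU coreutils `timeout`; exit status 124 is how it signals that the command was killed):

```shell
# Give `sleep 10` one second to finish; timeout kills it and exits 124.
timeout 1 sleep 10
echo "exit status: $?"
```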

~~~
mekster
Sometimes it's good to use for cron jobs, so that a previous run won't keep
running on top of the current one.

You should debug the job so that doesn't happen, but it's still better than
watching your system choke on dozens of instances that slow the jobs down even
more.

There's also "run-one" that allows you to control which job should survive
instead of just killing the old one.

[http://manpages.ubuntu.com/manpages/trusty/man1/run-one.1.ht...](http://manpages.ubuntu.com/manpages/trusty/man1/run-one.1.html)

~~~
majewsky
If you're using systemd timers, you get this behavior for free because those
timers only activate service units. If the service is already active, nothing
happens.

(Of course, this can be surprising in a different way when you expect a new
job to start, but it does not because a previous one is still lingering. Not
saying it's better or worse, just pointing out for those who don't know.)
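
For those unfamiliar with the pattern, a minimal sketch (unit names, schedule, and the script path are illustrative):

```ini
# backup.timer -- fires hourly, but only *activates* backup.service
[Unit]
Description=Hourly backup trigger

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target
```

```ini
# backup.service -- if this unit is still active when the timer fires,
# the activation is a no-op and no second instance starts
[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```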

------
mci
> Hidden inside WWB (writer's workbench), Lorinda Cherry's Parts annotated
> English text with parts of speech, based on only a smidgen of English
> vocabulary, orthography, and grammar.

Writer's Workbench was indeed a marvel of 1970s limited-space engineering.
You can see it for yourself [1]: the generic part-of-speech rules are in
end.l, the exceptions in edict.c and ydict.c, and the part-of-speech
disambiguator in pscan.c. Such compact, rule-based NLP has fallen out of favor
these days but (shameless plug alert!) Writer's Workbench inspired my 2018
IOCCC entry that highlights passive constructions in English texts [2].

[1] [https://github.com/dspinellis/unix-history-repo/tree/BSD-4_1...](https://github.com/dspinellis/unix-history-repo/tree/BSD-4_1_snap-Snapshot-Development/.ref-BSD-4/usr/src/cmd/diction)

[2]
[https://ioccc.org/2018/ciura/hint.html](https://ioccc.org/2018/ciura/hint.html)

~~~
ivan_ah
This _Writer's Workbench_ seems really cool. The Wikipedia page indicates
there were quite a few more programs in the suite:
[https://en.wikipedia.org/wiki/Writer%27s_Workbench#Package_c...](https://en.wikipedia.org/wiki/Writer%27s_Workbench#Package_contents)

Do you know where I could find the source for all of these?

I'd be interested to "revive" these utils, possibly rewriting them in Python or
bash for easy hacking. I have some basic scripts for that, and they are already
proving useful even though they simply call grep:
[https://github.com/ivanistheone/writing_scripts](https://github.com/ivanistheone/writing_scripts)

~~~
dbremner
I don't think it has all of them, but [0] is a tarball from Research Unix that
has some Writer's Workbench source code.

The files are in cmd/wwb.

Other tarballs may have more Writer's Workbench code but I haven't looked at
them.

[0]
[https://www.tuhs.org/Archive/Distributions/Research/Dan_Cros...](https://www.tuhs.org/Archive/Distributions/Research/Dan_Cross_v10/v10src.tar.bz2)

------
sn41
One of the useful applications of trigram-based analysis I have done is the
following: for a large web-based application form where about 200000 online
applications were made, we had to filter out the dummy applications - often,
people would try out the interface using "aaa" as a name, for example.

Since the names were mostly Indian, we did not even have a standard database
of names to test against.

What we did was the following: go through the entire database of all
applications and build a trigram frequency table. Then, using that trigram
table, do a second pass over the database of names to find names with
anomalous trigrams. If the percentage of anomalous trigrams in a name was too
high (for names long enough), or the absolute number of anomalous trigrams was
too high (for short names), we flagged the application and examined it
manually. Using this alone, we were able to filter out a large number of dummy
application forms.

Of course, it is not a comprehensive tool since what forms a valid name is
very vague, but I think this kind of a tool is useful and culture-neutral.
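
The two passes can be sketched as follows (a toy corpus and illustrative thresholds; the real system combined percentage and absolute-count tests tuned to name length):

```python
from collections import Counter

def trigrams(name):
    s = name.lower()
    return [s[i:i + 3] for i in range(len(s) - 2)]

# Pass 1: trigram frequency table over the whole database.
# Duplication stands in for a large corpus of real, recurring names.
names = ["ramesh", "suresh", "mahesh", "ganesh"] * 50 + ["aaa", "xqzqx"]
table = Counter(t for n in names for t in trigrams(n))

# Pass 2: flag names whose trigrams are rare in the corpus.
def is_suspicious(name, min_count=2, max_rare_fraction=0.5):
    tris = trigrams(name)
    if not tris:  # too short to judge at all
        return True
    rare = sum(1 for t in tris if table[t] < min_count)
    return rare / len(tris) > max_rare_fraction

flagged = sorted({n for n in names if is_suspicious(n)})
# flagged == ['aaa', 'xqzqx']
```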

~~~
amelius
The problem with these methods is that you exclude everybody that deviates
from the norm. Yes, it might make your life (as a developer) a little bit
easier, but it makes the lives of some of the applicants a lot harder.

~~~
pjc50
I think the "manual review" phase makes this OK, in a way that simply
autobanning Mr Null from your system isn't.

~~~
amelius
I'm not sure if that would work either.

Why not send them an email to verify? Or use captcha tech developed by big
companies that actually has some science behind it.

~~~
sn41
These are high school students in India. Many of them are from rural
backgrounds and do not have personal email accounts. From our experience, the
forms are many times filled up by employees at cyber cafes who fill their own
email addresses and mobile numbers instead of the students.

The only reliable means of communications back to students is by a government
approved website, or newspapers, or official media. (The process also has to
stand up in court in case some student says that (s)he did not get the
communication, and newspaper ads are a documentable evidence of communications
on a specified date.)

(BTW, the forms do have captchas, the spurious forms are manually filled in by
mischievous/malicious/curious applicants.)

------
znpy
It's surprising that Doug McIlroy still reads and writes about UNIX.

For those who don't know, Doug is the guy who invented pipes.

~~~
cmroanirgo
Also interesting:

> _Originators of nearly half the list--pascal, struct, parts, eqn--were
> women, well beyond women's demographic share of computer science._

~~~
indigochill
In the 40s, computing was seen as primarily women's work (similar to the
stereotype of switchboard operators). Into the 60s, women still comprised up
to half of the computing workforce. In 1984, they peaked at 37%. So
demographically speaking, the ratio was not as bad as it is today.

(Source:
[https://en.wikipedia.org/wiki/Women_in_computing](https://en.wikipedia.org/wiki/Women_in_computing))

~~~
reaperducer
_Into the 60s, women still comprised up to half of the computing workforce._

My life experience corroborates this.

When I was in school, girls were taught to type, and boys weren't. Because of
that, many of the girls from my school went into computing, while the boys
went to more "manly" pursuits. It's also why Catholic nuns were over-
represented in early computing.

~~~
ruricolist
"It's also why Catholic nuns were over-represented in early computing."

Would you mind expanding on that?

~~~
reaperducer
Highly educated, and able to type. The first woman to earn a PhD in Computer
Science was a nun in Ohio.

Catholic nuns have done much for the business world that is overlooked.

From memory, so no citations:

The first female CEOs were Catholic nuns. In the early 1900s, a time when
women were uncommon in the American workplace, Catholic nuns founded over 800
hospitals in the United States. They also ran schools, colleges, and
universities.

The concept of just-in-time delivery was invented by nuns running those
hospitals.

The first health insurance company was founded in Missouri by nuns to care for
railroad workers.

There are others. I once ran across a list of them on the intarwebs, but these
are the ones that stuck with me.

~~~
yesenadam
Fascinating, thanks. This all makes Sister Celine's contribution less
surprising. I came across her in the classic _A=B_ "about identities in
general, and hypergeometric identities in particular, with emphasis on
computer methods of discovery and proof." An amazing and wonderfully-written
book.

[https://www.math.upenn.edu/~wilf/AeqB.html](https://www.math.upenn.edu/~wilf/AeqB.html)

[https://mathworld.wolfram.com/SisterCelinesMethod.html](https://mathworld.wolfram.com/SisterCelinesMethod.html)

[https://en.wikipedia.org/wiki/Mary_Celine_Fasenmyer](https://en.wikipedia.org/wiki/Mary_Celine_Fasenmyer)

------
tangue
I didn't know about typo. One surprising Unix program I discovered this year
is cal (or ncal). Having a calendar in your terminal is sometimes useful, and I
wish I'd known earlier that I could type things like _ncal -w 2020_

~~~
hinkley
A similarly flavored one I’ve always appreciated is the man page for ascii,
which shows the octal, decimal, and hex values for each character in the ASCII
space.

Most unixes have one, although the format differs.

~~~
dotancohen
How have I been googling ASCII codes for two decades with this right under my
fingertips?!? Thank you!

------
saagarjha
And people say theoretical computer science isn’t useful in “the real world”…

I am curious about this one, though, has anyone used it?

> The syntax diagnostics from the compiler made by Sue Graham's group at
> Berkeley were the most helpful I have ever seen--and they were generated
> automatically. At a syntax error the compiler would suggest a token that
> could be inserted that would allow parsing to proceed further. No attempt
> was made to explain what was wrong.

On the surface it sounds a lot like it would produce error messages like
“expected ‘;’” that most beginner programmers come to hate: was it any better
than this, or was that the extent of its intelligence and everything else at
the time was even worse?

~~~
thaumasiotes
> On the surface it sounds a lot like it would produce error messages like
> “expected ‘;’” that most beginner programmers come to hate

Do people really come to hate these? I'd expect the opposite -- that people
would start off hating messages like "expected ';'", but fairly quickly become
accustomed to what they almost always mean.

As long as you can look at the message and have a good idea of what's wrong,
it's not a bad message.

~~~
nicoburns
Once you've used a compiler like Rust or Elm that actually provides
suggestions for common solutions to these errors (effectively building the
tribal knowledge of what the error "really means" into the compiler itself),
it's hard to tolerate these cryptic errors that only really make sense to
machines.

~~~
loeg
I often found Rust's errors completely confusing, even after chasing down the
'--explain CrypticNNNNN' follow-up explainer. This was in 2019 — not some
ancient version of Rust.

~~~
saagarjha
Yeah, Rust’s compiler errors are decent if you make simple mistakes but
degrade to being about as bad as any other modern compiler’s once you start
doing complicated things. Which isn’t horrible, but --explain isn’t really
useful so it’s just wasting space on my screen.

~~~
steveklabnik
Please file bugs for any message that is confusing! We track them like any
other bug, and there’s some folks actively working on them.

~~~
saagarjha
Hmm, I wouldn’t call them confusing per se, they’re just not useful, and I
don’t think any compiler has really solved this problem (but then again,
generation of compiler error messages is not something I’m an expert in).
Let’s say I forgot to put a “*” in front of something: the compiler’s error
might be something like “xyz does not implement SomeTrait, here is a page
explaining what traits are”. I’d be more than happy to file bugs for things
like these but I have generally refrained from doing so because I am unsure if
this is something that is possible to fix. If you’d like, I could file issues
for things like this, but I’m genuinely curious to hear if there’s any
strategies on improving these or work done in this area.

~~~
steveklabnik
Let us determine if it's possible or not. The person who currently works on
errors is of the opinion that any time the error isn't useful, it's a bug.

> I’m genuinely curious to hear if there’s any strategies on improving these
> or work done in this area.

It's just a ton of work. You look at what the compiler produces, look at what
information you can have at that point, see if you can craft something better,
and then ship. And then look at the next error.

~~~
loeg
> Let us determine if it's possible or not. The person who currently works on
> errors is of the opinion that any time the error isn't useful, it's a bug.

That's a fantastic attitude and I really appreciate that someone is working
towards that goal, thanks.

> It's just a ton of work. You look at what the compiler produces, look at
> what information you can have at that point, see if you can craft something
> better, and then ship. And then look at the next error.

Exactly. That person is a saint.

------
chmaynard
The author is THE Doug McIlroy. It's wonderful to learn that he's still around
and spreading the good word.

[https://en.wikipedia.org/wiki/Douglas_McIlroy](https://en.wikipedia.org/wiki/Douglas_McIlroy)

------
mjw1007
« Typo was as surprising inside as it was outside. Its similarity measure was
based on trigram frequencies, which it counted in a 26x26x26 array. The small
memory, which had barely room enough for 1-byte counters, spurred a scheme for
squeezing large numbers into small counters. To avoid overflow, counters were
updated probabilistically to maintain an estimate of the logarithm of the
count. »

This sounds like something from the same family as hyperloglog

Wikipedia traces that back to the Flajolet–Martin algorithm in 1984. When
would typo have been written?

~~~
DagAgren
Probably not related.

Sounds like it's just doing something like replacing `counter++` with
`if(rand() % counter == 0) counter++`, so that the counter will increase
slower and slower the larger it gets.
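
That one-liner is essentially Robert Morris's approximate counting: the stored value tracks log2 of the true count, so a 1-byte counter can estimate counts far beyond 255. A minimal sketch (base 2, unbounded Python ints for clarity):

```python
import random

def morris_increment(c):
    # With probability 2^-c, bump the counter, so c approximates
    # log2(n) and the count can be estimated as 2^c - 1.
    if random.random() < 2.0 ** -c:
        c += 1
    return c

random.seed(0)
c = 0
for _ in range(10_000):
    c = morris_increment(c)

# c should land near log2(10000), i.e. around 13.
estimate = 2 ** c - 1
```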

~~~
morelisp
Absolutely related! This is essentially the same observation that makes
Flajolet-Martin and HyperLogLog work - that when comparing counts, the exact
low bits of large numbers "matter less" than the low bits of small numbers, so
you can store the logarithm of the count. They differ in how they calculate
the "incremental log" without storing the real values, based on what they are
counting (high-dimensional events vs. high-cardinality sets).

------
adben
How about GNU parallel?
[https://www.gnu.org/software/parallel/](https://www.gnu.org/software/parallel/)

~~~
nunoferreira
wow! You just saved the future me thousands of hours.

~~~
saagarjha
I hope you don’t mind the citation nags ;)

------
nunoferreira
What about "comm" - compare two sorted files line by line. You can easily get
occurrences only in file 1, in both files, only in file 2.

Super powerful and saved me hours of work.

~~~
pimlottc
comm is a really useful tool, with one big caveat — you must make sure your
input files are all sorted the exact same way. If not, you can get unexpected
results, and worse, might not even realize it.

This may seem obvious, but there are many tiny ways that sorts can differ
between locales, operating systems and programs (e.g. Excel), especially when
dealing with Unicode. It may look the same 99% of the time, and you may not
realize until later that you’ve accidentally filtered out values.
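
One defensive habit is to pin the collation order explicitly (a sketch; `LC_ALL=C` forces plain byte order on both the sorts and comm itself):

```shell
printf 'b\na\nc\n' > fileA.txt
printf 'c\nb\nd\n' > fileB.txt
# Column 1: only in A; column 2: only in B; column 3: in both.
LC_ALL=C comm <(LC_ALL=C sort fileA.txt) <(LC_ALL=C sort fileB.txt)
```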

~~~
TomNomNom
My advice is to sort the files just-in-time using the shell:

    
    
        comm <(sort fileA.txt) <(sort fileB.txt)

------
beefbroccoli
There's a very simple system tool that clicked on about 50 simultaneous
lightbulbs in my brain after only 10 minutes of playing with it: mkfifo

~~~
ric2b
The man page for it is awful: zero explanation of what it actually does.

It allows you to create pipes as files! So you can do:

`echo 'hello world' > mypipe` on one terminal and `cat < mypipe` on another!

Very neat, I'm sure I'll find uses for it in the future.
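
The same demo works in a single shell if the writer is backgrounded (the reader blocks until data arrives):

```shell
mkfifo mypipe
echo 'hello world' > mypipe &   # writer blocks until a reader opens the pipe
cat < mypipe                    # prints: hello world
rm mypipe
```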

------
ur-whale
The fact that dc does (or at least tries to) guarantee error bounds on the
_result_ is news to me.

And if that does indeed work, that's pretty cool.

~~~
nn3
I doubt the modern GNU or BSD versions of it that you are likely using do.
No one uses the original anymore.

~~~
swixmix
Is scale factor the same as error bounds in
[http://man.openbsd.org/dc](http://man.openbsd.org/dc) ?

~~~
nn3
I believe it's just the number of digits at which printing cuts off.

------
kmstout
sl

```

    
    
                              (  ) (@@) ( )  (@)  ()    @@    O     @     O     @      O
                         (@@@)
                     (    )
                  (@@@@)
    
                (   )
             ====        ________                ___________
         _D _|  |_______/        \__I_I_____===__|_________|
          |(_)---  |   H\________/ |   |        =|___ ___|      _________________
          /     |  |   H  |  |     |   |         ||_| |_||     _|                \_____A
         |      |  |   H  |__--------------------| [___] |   =|                        |
         | ________|___H__/__|_____/[][]~\_______|       |   -|                        |
         |/ |   |-----------I_____I [][] []  D   |=======|____|________________________|_
       __/ =| o |=-O=====O=====O=====O \ ____Y___________|__|__________________________|_
        |/-=|___|=    ||    ||    ||    |_____/~\___/          |_D__D__D_|  |_D__D__D_|
         \_/      \__/  \__/  \__/  \__/      \_/               \_/   \_/    \_/   \_/

```

~~~
elteto
During college my friend and I kept an innocent prank going for a couple of
years: every time one of us left our laptops unlocked the other would jump in
and type 'alias ls=sl' in the prompt and then clear the screen. Good times.

~~~
Izkata
Put it in their bashrc ;)

------
morelisp
> _struct - Brenda Baker undertook her Fortran-to-Ratfor converter against the
> advice of her department head--me. I thought it would likely produce an ad
> hoc reordering of the original, freed of statement numbers, but otherwise no
> more readable than a properly indented Fortran program. Brenda proved me
> wrong. She discovered that every Fortran program has a canonically
> structured form. Programmers preferred the canonicalized form to what they
> had originally written._

We could've had prettier et al instead of style linters 40(+?) years ago. :(

~~~
qubex
I had to look up ‘Ratfor’ because I’d never heard of it — apparently it’s a
FORTRAN preprocessor that added control structures.

~~~
davidwihl
One of the books that most influenced my coding was Software Tools by Brian W.
Kernighan, P.J. Plauger [0]. Even though I never used Ratfor, the clear
descriptions were immensely useful.

[0]
[https://www.goodreads.com/book/show/515603.Software_Tools](https://www.goodreads.com/book/show/515603.Software_Tools)

~~~
qubex
I have that book. I have read snippets of it. Evidently I should read all the
way through it.

------
jawilson
I've written a few useful scripts that everyone should have.

histogram - simply counts each occurrence of a line and then outputs from
highest to lowest. I've implemented this program in several different
languages for learning purposes. There are practical tricks that one can
apply, such as hashing any line longer than the hash itself.

unique - like uniq but doesn't need to have sorted input! again, one can
simply hash very long lines to save memory.

datetimes - looks for numbers that might be dates (seconds or milliseconds in
certain reasonable ranges) and adds the human readable version of the date as
comments to the end of the line they appear in. This is probably my most used
script (I work with protocol buffers that often store dates as int64s).

human - reformats numbers into either powers of 2 or powers of 10. inspired
obviously by the -h and -H flags from df.

I'm sure I have a few more, but if I can't remember them off the top of my
head, then they clearly aren't quite as generally useful.

Anyone else have some useful scripts like these?
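
For comparison, two of these have close equivalents in stock tools (sketches only; they don't do the long-line hashing trick):

```shell
# histogram: count each distinct line, highest count first
printf 'a\nb\na\na\nb\nc\n' | sort | uniq -c | sort -rn

# unique without sorted input: awk prints only the first occurrence
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'
```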

~~~
nonesuchluck
I work with csv files a lot. I have a short awk script which truncates/pads
each column to a fixed width which I can specify at runtime. It also repeats
the top row (headers) every 20 rows in a different ANSI color. I pipe the
output to less -SR for interactive use so I can scan delimited data in a
scrollable grid, with all columns aligned and labeled.

I understand there's vim plugins for this, but, ehh.

~~~
JdeBP
There's also the likes of console-flat-table-viewer. One would have to
convert the comma-separated stuff into one of the table types, but that's what
Miller is for. (-:

* [http://jdebp.uk./Softwares/nosh/guide/commands/console-flat-...](http://jdebp.uk./Softwares/nosh/guide/commands/console-flat-table-viewer.xml)

* [http://johnkerl.org/miller/](http://johnkerl.org/miller/)

------
mkchoi212
_“To avoid overflow, counters were updated probabilistically to maintain an
estimate of the logarithm of the count.”_

Stuff like this really makes me love what the pioneers of CS did. Back then,
they were counting every byte and every register, while nowadays programmers
make things without considering the impact they will have on the hardware.

------
londons_explore
> The math library for Bob Morris's variable-precision desk calculator used
> backward error analysis to determine the precision necessary at each step to
> attain the user-specified precision of the result.

I wonder if compilers could do this today? If you can bound values for
floating point operations, you might be able to replace them with fixed point
equivalents and get a big speedup. You might also be able to replace them with
ints or smaller floats if you can detect the result is rounded to an int.

CPUs could also do this, since they know (some of) the actual values at
runtime, and could take shortcuts with floating point calculations in places
where full precision isn't needed for the result.

~~~
pavlov
Replacing floats with fixed point isn’t usually a meaningful optimization on
modern CPUs. The FPU runs in parallel to the integer units, so you can easily
end up idling the FPU while the integer units are too busy doing both the math
and the necessary state management (counters, pointer arithmetic etc.)

This could make sense for SIMD however, but then the problem is getting the
array data in the right format before the computation — if you’re converting
from float to int and back within the loop, it destroys any performance gain.

~~~
londons_explore
Fixed point uses a lot less power though, and many use cases are effectively
power limited rather than functional-unit limited, since if you really do fill
all functional units on every cycle you'll soon need to throttle back your
clock speed...

Perhaps a good example of that is video encoding, which is mostly fixed point,
despite it looking like a pretty close fit for floating point maths.

~~~
pavlov
A very good point. My worldview of performance is highly biased towards “full
steam ahead” desktop graphics.

Video encoding is a bit of a special case though because the common algorithms
are carefully designed for hardware acceleration. For most rendering, it
doesn’t make sense to go out of your way to avoid the FPU.

------
tannhaeuser
What's surprising about eqn, dc, and egrep? I'm using the latter two all the
time, and have used eqn (+troff/groff and even tbl and pic) in the 1990s for
manuals and as late as the early 2000s to typeset math-heavy course material.
Not nearly as feature-rich as TeX/LaTeX, but much more approachable for casual
math, with DSLs for typesetting equations, tables, and diagrams/graphs. I was
delighted to see that GNU had a full suite of roff/troff drop-in replacements
(which I later learned was implemented by James Clark, of SGML and, recently,
Ballerina fame).

~~~
Mediterraneo10
I had never heard of eqn and was surprised to find that the binary is still
there on my Linux box.

With regard to roff in general, when I got into Linux-based typesetting around
the turn of the millennium, that was already seen as antiquated tech,
superseded by LaTeX which was undergoing a frenzy of development and
improvement around that time. So, anyone under the age of 30 will probably be
hearing of such *roff stuff for the first time (and sadly even familiarity
with LaTeX has waned).

~~~
tannhaeuser
Ok, I'm probably showing my age here then :) Back in the 1980s and 1990s, the
roff suite, and most definitely egrep and classic Thompson DFA construction
and DFA->NFA conversion was definitely Unix folklore/taught in Uni. Manpages
are still rendered using roff/groff today, so probably many of us are using it
regularly. Whereas GNU's texinfo has matured less well I'd say, or wasn't even
very useful in practice to begin with due to lack of content.

I'm also using TeX/LaTex, but it's still a programming language whereas
roff/eqn etc are non-Turing DSLs and renderers for particular narrow purposes.
I get your point, but saying these are "antiquated" is like saying HTML is
obsoleted by JavaScript.

~~~
burntsushi
> and most definitely egrep and classic Thompson DFA construction and DFA->NFA
> conversion was definitely Unix folklore/taught in Uni

I think you mean "Thompson NFA construction" and "NFA->DFA."

Regardless though, this is not what the OP is pointing out. 'egrep' (or just
GNU grep these days) is doing something more clever (emphasis mine):

> Al Aho expected his deterministic regular-expression recognizer would beat
> Ken's classic nondeterministic recognizer. Unfortunately, for single-shot
> use on complex regular expressions, Ken's could finish while egrep was still
> busy building a deterministic automaton. To finally gain the prize, Al
> sidestepped the curse of the automaton's exponentially big state table by
> _inventing a way to build on the fly only the table entries that are
> actually visited during recognition._

Russ Cox talks about this a bit in part 3 of his articles on regex
matching[1]. Its implementation in RE2 is here:
[https://github.com/google/re2/blob/master/re2/dfa.cc](https://github.com/google/re2/blob/master/re2/dfa.cc)

[1] -
[https://swtch.com/~rsc/regexp/regexp3.html](https://swtch.com/~rsc/regexp/regexp3.html)

~~~
tannhaeuser
> _I think you mean "Thompson NFA construction" and "NFA->DFA."_

Yep, only noticed it later, then left it in to see who's paying attention :)

------
ur-whale
First time I hear of typo ... it's not on my standard Linux install ... where
can I find the source code?

~~~
saagarjha
It’s not quite the original, but Rob Pike wrote an implementation in Go:
[https://github.com/robpike/typo](https://github.com/robpike/typo)

~~~
TheDesolate0
In go? This is a job for rust!

...there goes my weekend.

------
ruslan
I would add bc to the list, very useful to make occasional calculations from
command line using "human readable" syntax.

~~~
bonzini
Fun fact, the first version of bc was just a frontend to dc. It converted the
structured input to dc's stack-based form and let dc do the math.

~~~
ruslan
Did not know that, thanks. I searched for dc inside bc and found a reference
to /usr/bin/dc, so I think bc is still just a wrapper.

% uname -a

FreeBSD skyrocket 9.3-RELEASE FreeBSD 9.3-RELEASE #1: Fri Nov 27 20:28:19 UTC
2015

~~~
bonzini
GNU bc isn't, though they share the bignum code. I am not surprised that the
BSDs are following the older implementation more closely!

------
lcall
I have found it useful to survey the existing Unix utilities every few years.
I'm no genius, but I always find things I will use. One way, of course, is
simply to review the names wherever your system stores manual pages, read (or
skim) those whose purpose you don't know, try some things out, or at least
remember where to look them up later when ready to use them. Another is to
browse to [https://man.openbsd.org/](https://man.openbsd.org/), put a single
period (".") in the search field, optionally choose a section (and/or another
system; I'm not sure how far the coverage goes), and click the apropos button.

------
jhoechtl
Doug McIlroy is regularly active on the groff mailing list:
[https://lists.gnu.org/archive/html/groff/](https://lists.gnu.org/archive/html/groff/)

------
Torwald
What does he mean by "record structure in the file system" with regard to Multics?

~~~
tenebrisalietum
Unix files are simply a stream of bytes and outsource concern of file
structure to userland. There's nowhere to set/get a type, no mechanism to
create schema in the file like fields, lengths, constraints, etc. You can
simply seek to a place in the file (if it's seekable) and read/write the bytes.
What they mean is up to the programs/user/convention.

Earlier filesystems were trying much more to be like databases.

------
vladdoster
Crabs seems like a really cool program.

Here is a paper about it from Bell Labs:

[http://lucacardelli.name/Papers/Crabs.pdf](http://lucacardelli.name/Papers/Crabs.pdf)

------
noisy_boy
I didn't find egrep surprising - I use it quite often. The thing I didn't know
about it was that it was Al Aho's creation. I only knew about him from awk.

------
yegle
killall5 is the most bizarre command that I learned recently.

Read manpage before trying it.

~~~
nullc
SysV killall: bane of all regular Linux administrators who also sometimes
administered Solaris boxes.

Once, after blowing up an in-production database server during the day, I had
the unfortunate difficulty of explaining why running a command called
"killall" on a critical server, killing everything, was an innocent mistake,
and that I didn't have any reason to expect it to kill everything.

It's extremely difficult to not sound like a moron when explaining that you
didn't expect "killall" to "kill all".

------
smitty1e
Hadn't heard of most of these.

The peoples' names were more recognizable.

------
winrid
I found GNU parallel to be very useful/cool.

------
katharine7
sed, awk, tr, egrep for processing, making special greetings (lol), converting images

all are so exciting!!

------
TheDesolate0
sed & awk for life

------
pvaldes
both rename and mmv are pretty handy

------
hyperpallium
xargs parallelizes with -P _n_
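
A quick illustration (the order of completion is nondeterministic, which is the point):

```shell
# Run up to 4 jobs at once, one argument per invocation.
printf '%s\n' 1 2 3 4 | xargs -n 1 -P 4 sh -c 'echo "job $0"'
```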

