
Xz format inadequate for long-term archiving (2017) - pandalicious
https://www.nongnu.org/lzip/xz_inadequate.html
======
moltensyntax
This article again? In my opinion, this article is biased. The subtext here is
that the author is claiming that his "lzip" format is superior. But xz was not
chosen "blindly" as the article claims.

To me, most of the claims are arguable.

To say 3 levels of headers is "unsafe complexity"... I don't agree.
Indirection is fundamental to design.

To say padding is "useless"... I don't understand why padding and byte-
alignment are given so much vitriol. Look at how much padding the tar
format has. And tar is a good example of how "useless padding" was used to
extend the format to support larger files. So this supposed "flaw" has been in
tar for dozens of years, with no disastrous effects at all.

The xz decision was not made "blindly". There was thought behind the decision.

And it's pure FUD to say "Xz implementations may choose what subset of the
format they support. They may even choose to not support integrity checking at
all. Safe interoperability among xz implementations is not guaranteed". You
could say this about any software - "oh no, someone might make a bad
implementation!" Format fragmentation is essentially a social problem more
than a technical problem.

I'll leave it at this for now, but there's more I could write.

~~~
pmoriarty
_" Look at how much padding the tar format has. And tar is a good example of
how "useless padding" was used to extend the format to support larger files.
So this supposed "flaw" has been in tar for dozens of years, with no
disastrous effects at all."_

Just because it's in tar doesn't mean that the design is flawless. tar was
created a long time ago, when a lot of things we are concerned with now
weren't even thought of.

Deterministic, bit-reproducible archives are one thing that tar has recently
struggled with[1], because the archive format was not originally designed with
that in mind. With more foresight and a better archive format, this need not
have been an issue at all.

[1] - [https://lists.gnu.org/archive/html/help-tar/2015-05/msg00005.html](https://lists.gnu.org/archive/html/help-tar/2015-05/msg00005.html)
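
For reference, a common recipe for deterministic output with GNU tar looks
roughly like this (flag availability depends on the GNU tar version; the paths
are illustrative):

    
    
        tar --sort=name --owner=0 --group=0 --numeric-owner \
            --mtime='1970-01-01 00:00Z' -cf out.tar dir/
    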

~~~
rootbear
The name tar comes from Tape ARchive. Lots of padding makes sense when you
know that tar was originally used to write files to magnetic tape, which is
highly block oriented. The use of tar today as a bundling and distribution
format is something of a misapplication, as it lacks features one might want
of such a program.

------
comex
Last time this came up on HN, I did some research, and discovered that _lzip_
was quite non-robust in the face of data corruption: a single bit flip in the
right place in an lzip archive could cause the decompressor to silently
truncate the decompressed data, _without_ reporting an error. Not only that,
this vulnerability was a direct consequence of one of the features used to
claim superiority to XZ: namely, the ability to append arbitrary “trailing
data” to an lzip archive without invalidating it.

Like some other compressed formats, an lzip file is just a series of
compressed blocks concatenated together, each block starting with a magic
number and containing a certain amount of compressed data. There’s no overall
file header, nor any marker that a particular block is the last one. This
structure has the advantage that you can simply concatenate two lzip files,
and the result is a valid lzip file that decompresses to the concatenation of
what the inputs decompress to.

Thus, when the decompressor has finished reading a block and sees there’s more
input data left in the file, there are two possibilities for what that data
could contain. It could be another lzip block corresponding to additional
compressed data. Or it could be _any other_ random binary data, if the user is
taking advantage of the “trailing data” feature, in which case the rest of the
file should be silently ignored.

How do you tell the difference? Simply enough, by checking if the data starts
with the 4-byte lzip magic number. If the magic number itself is corrupted in
any way? Then the entire rest of the file is treated as “trailing data” and
ignored. I hope the user notices their data is missing before they delete the
compressed original…
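
To make that concrete, here is a rough shell sketch of the failure mode
(assuming lzip is installed; the exact warning and exit-status behaviour
depends on the lzip version, and may have improved since):

    
    
        printf 'first\n'  > a.txt && lzip -k a.txt
        printf 'second\n' > b.txt && lzip -k b.txt
        # Concatenating members yields a valid multimember lzip file:
        cat a.txt.lz b.txt.lz > both.lz
        lzip -dc both.lz      # prints "first" then "second"
        # Corrupt one byte of the second member's "LZIP" magic number...
        printf 'X' | dd of=both.lz bs=1 seek=$(stat -c%s a.txt.lz) conv=notrunc
        # ...and the rest may be treated as ignorable trailing data:
        lzip -dc both.lz      # may print only "first"
    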

It might be possible to identify an lzip block that has its magic number
corrupted, e.g. by checking whether the trailing CRC is valid. However, at
least at the time I discovered this, lzip’s decompressor made no attempt to do
so. It’s possible the behavior has improved in later releases; I haven’t
checked.

But at least at the time this article was written: pot, meet kettle.

~~~
jwilk
The maintainer's response when I reported this bug was 'Just use "lzip -vvvv"
to see the warning':

[https://lists.debian.org/55C0FE82.7050700@gnu.org](https://lists.debian.org/55C0FE82.7050700@gnu.org)

Their advocacy in this thread was so good that I removed lzip from my system.

------
tedunangst
Are these concerns, about error recovery, outdated? If I want to recover a
corrupted file, I find another copy. I don't fiddle with the internal length
field to fix framing issues. Certainly, if I want to detect corruption, I use
a sha256 of the entire file. If that fails, I don't waste time trying to find
the bad bit.

To add to that, if you need parity to recover from errors, you need to
calculate how much based on your storage medium durability and projected life
span. It's not the file format's concern. The xz crc should be irrelevant.
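
A minimal sketch of that whole-file check, with hypothetical file names:

    
    
        sha256sum backup.tar.xz > backup.tar.xz.sha256    # record when archiving
        sha256sum -c backup.tar.xz.sha256                 # later: OK, or go find another copy
    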

~~~
pmoriarty
_" If I want to recover a corrupted file, I find another copy."_

So you've archived two or more copies of each file? That means you're using at
least twice as much space (and if you're keeping the original as well, more
than twice).

For the likely corruption of the occasional single bit flip here and there,
you could do a lot better by using something like par2 and/or dvdisaster
(depending on what media you're archiving to).

~~~
jlgaddis
> _So you've archived two or more copies of each file_

You haven't?

It took me just one minor "data loss incident" ~20 years ago to very quickly
convince me to become a lifetime member of the "backup all the things to a few
different locations" club.

> _That means you're using at least twice as much space (and if you're keeping
> the original as well, more than twice)._

"Storage is cheap."

~~~
rikkus
Storage is cheap indeed, though it takes some effort to make it cheap.

99% of the digital data I'm keeping for the long term is family photos and
videos. All my photos go to Dropbox (easy copy-from-device and access
anywhere) and are then backed up to multiple locations by CrashPlan.

It'll be a while yet, but in the next few years I'll be hitting the 1TB
Dropbox limit. I'm hoping that Dropbox make a >1TB 'consumer' plan in the next
couple of years. There's no way I'm assuming my backups are fine, deleting
from Dropbox to make space, then finding out in a few years that some set of
photos is missing.

I also sync up to Google Drive - but again, there's a 1TB limit (or a large
cost).

In the future, I might have to create a new Dropbox account and keep the old
one running. Storage might be cheap, but keeping it cheap is tricky.

~~~
viraptor
> Storage might be cheap, but keeping it cheap is tricky.

If it's really for pure backup, not continuous sync, Glacier is $4 per TB.

~~~
white-flame
That's $4 per TB-_month_, i.e. roughly $48 per TB-year. Meaning you're
effectively paying more than the cost of a 1TB hard drive replaced every year,
for every TB you're storing. Plus fees to get your data back out. An 8TB
drive, replaced every year, is half the cost per TB, with no additional access
cost.

Depending on how price conscious you are, I agree with the GP's "keeping it
cheap is tricky". And with things like backup, even if you do it yourself, the
time spent maintaining it should be negligible: Occasionally kick off a format
shift or failed drive replacement, have scripts running everything else.

~~~
viraptor
> Meaning you're effectively paying more than the cost of a 1TB hard drive
> replaced every year, for every TB you're storing.

Yes. But what you get in return is not having that data at home. It doesn't
matter how many copies you have locally if your home gets robbed, flooded, or
burns down.

------
arundelo
I upvoted this because it seems to make some good points and I think the topic
is interesting and important, but I can't understand why the "Then, why some
free software projects use xz?" section does not mention xz's main selling
point of being better than other commonly used alternatives at _compressing
things to smaller sizes._

[https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/](https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/)

~~~
wyldfire
> compressing things to smaller sizes.

...relative to ... ? Is it better than lzip? lzip sounds like it would also
use LZMA-based compression, right? This [1] sounds like an interesting and
more detailed/up-to-date comparison. Also by the same author BTW.

[1]
[https://www.nongnu.org/lzip/lzip_benchmark.html#xz](https://www.nongnu.org/lzip/lzip_benchmark.html#xz)

~~~
derefr
Relative to the compression formats people were aware of at the time (which
didn't include lzip.)

People began using xz mostly because they (e.g. distro maintainers
like Debian) had started seeing 7z files floating around, thought they were
cool, and so wanted a format that did what 7z did but was an open standard
rather than being dictated by some company. xz was that format, so they leapt
on it.

As it turns out, lzip had already been around for a year (though I'm not sure
in what state of usability) before the xz project was started, but the people
who created xz weren't looking for something that compressed better; they were
looking for something that compressed better _like 7z_, and xz is that.

(Meanwhile, what 7z/xz is _actually_ better at, AFAIK, is long-range
identical-run deduplication; this is what makes it the tool of choice in the
video-game archival community for making archives of _every variation of_ a
ROM file. Stick 100 slight variations of a 5MB file together into one .7z (or
.tar.xz) file, and they'll compress down to roughly 1.2x the size of a single
variant of the file.)

~~~
userbinator
_that did what 7z did but was an open standard rather than being dictated by
some company._

7z _is_ an open standard, and the SDK is public domain:

[https://www.7-zip.org/sdk.html](https://www.7-zip.org/sdk.html)

------
carussell
(2016)

Previously discussed here on HN back then:

[https://news.ycombinator.com/item?id=12768425](https://news.ycombinator.com/item?id=12768425)

The author has made some minor revisions since then. Here are the main
differences to the page compared to when it was first discussed here:

[http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inade...](http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inadequate.html?r1=1.3&r2=1.4)

And here's the full page history:

[http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inade...](http://web.cvs.savannah.nongnu.org/viewvc/lzip/lzip/xz_inadequate.html)

------
cpburns2009
It may not be a good choice for long-term data storage, but I disagree that it
should not be used for data sharing or software distribution. Different use
cases have different needs. If you need long-term storage, it's better to
avoid lossless compression that can break after minor corruption. You should
also be storing parity/ECC data (I don't recall the subtle difference). If you
only need short to moderate term storage, the best compression ratio is likely
optimal. Keep a spare backup just in case.

~~~
snuxoll
For long-term archival I think relying on your compression software to protect
data integrity is a fool's errand; protecting against bit-rot should be a
function of your storage layer as long as you have control over it (in
contrast to say, Usenet, where multiple providers have copies of data and you
can't trust them to not lose part of it - hence the inclusion of .par files
for everything under alt.binaries).
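
For example, with a checksumming filesystem such as ZFS the storage layer
handles this itself (commands illustrative; the pool name is hypothetical):

    
    
        zpool scrub tank        # verify checksums, repair from redundancy
        zpool status -v tank    # report anything that could not be repaired
    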

~~~
dv_dt
I keep seeing recommendations for par/par2, but it seems like, as software, the
project isn't actively maintained? As an aside, that makes me think of dead
languages and the use of Latin for scientific names because it isn't changing
anymore... but do you want that out of archival formats and software?

~~~
pronoiac
There's a par2 "fork" under active development -
[https://github.com/Parchive/par2cmdline](https://github.com/Parchive/par2cmdline)

The fork compiled for me this week, when the official 0.3 version on
Sourceforge wouldn't. I vaguely remembered par3 being discussed, but couldn't
find anything usable. And that's an example of why to be wary of new formats,
I guess?

------
jwilliams
I send a fair amount of data to Cloud Storage. It varies a lot: usually
~10GB/day, but it can regularly be up to 1TB/day.

xz can be _amazing_. It can also bite you.

I've had payloads that compress to 0.16 with gzip then compress to 0.016 with
xz. Hurray! Then I've had payloads where xz compression is on par, or worse.
However, with "best or extreme" compression, xz can peg your CPU for much
longer: gzip and bzip2 take minutes while xz -9 takes hours at 100% CPU.

As annoying as that is, getting an order of magnitude better in _many_
circumstances is hard to give up.

My compromise is "xz -1". It usually delivers pretty good results, in
reasonable time, with manageable CPU/Memory usage.

FYI. The datasets are largely text-ish. Usually in 250MB-1GB chunks. So
talking JSON data, webpages, and the like.
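
Roughly, the compromise looks like this (paths illustrative; -T0 uses all
cores):

    
    
        tar cf - ./chunk/ | xz -1 -T0 > chunk.tar.xz     # fast, modest CPU/memory
        tar cf - ./chunk/ | xz -9e -T0 > chunk.tar.xz    # sometimes far smaller, much slower
    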

~~~
londons_explore
If you get compression ratios that good, you should consider if your
application might be doing something stupid like storing the same data
thousands of times inside its data file.

If you store enough of the same type of data, invest in redesigning the
application. There's a reason we all use jpegs over zipped bitmaps...

~~~
jwilliams
> There's a reason we all use jpegs over zipped bitmaps...

It's because it's an appropriate compression - just like xz can be? Not sure
what you're actually suggesting here.

~~~
yorwba
The suggestion is to design an application-specific format that avoids storing
redundant data in the first place. When that's an option at all it gives you
higher compression than any general-purpose compression algorithm can achieve.

------
freedomben
This is purely anecdotal and could easily be PEBKAC, but I created a bunch of
xz backups years ago and had to access them a couple of years later after a
disc died. To my panicked surprise, when trying to unpack them, I was informed
that something was wrong (sorry at this point I don't remember what it was). I
never did get it working. From that point on I went back to gzip and have not
had a problem since. Yes xz packs efficiently, but a tight archive that
doesn't inflate is worse than worthless to me.

------
eesmith
FWIW, PNG also "fails to protect the length of variable size fields". That is,
it's possible to construct PNGs such that a 1-bit corruption gives an entirely
different, and still valid, image.

When I last looked into this issue, it seemed that erasure codes, like with
Parchive/par/par2, were the way to go. (As others have mentioned here.) I
haven't tried it out as I haven't needed that level of robustness.

------
davidw
FWIW, xz is also a memory hog with the default settings. I inherited an
embedded system that attempts to compress and send some logs, using xz, and if
they're big enough, it blows up because of memory exhaustion.

~~~
pmoriarty
_" xz is also a memory hog with the default settings"_

Then why use the default settings?

I tend to use the maximum settings, which are much more of a memory hog, but I
have enough memory where that's not an issue.

Just use the settings that are right for you.

~~~
davidw
You'd have to ask the guy who wrote the code in the first place.

I think he saw "'best' compression" and stopped looking there.

~~~
pmoriarty
I didn't mean to ask why the defaults are defaults, but rather why anyone
would use the defaults rather than settings more appropriate to their use
case?

It's not like xz is unable to be lighter on memory, if that's what you want.
It's an option setting away.

~~~
davidw
To clarify: you'd have to ask the guy who wrote _our_ code.

------
doubledad222
Thank you for sharing this. I am in charge of archiving the family files -
pictures, video, art projects, email. I want it available through the aging of
standards and protected against the bitrot of aging hard drives. I'll be
converting any xz archives I get into a better format.

~~~
moviuro
Mix and match, according to criticality and maximum affordable data loss:
multiple locations, multiple solutions, multiple local copies (e.g. one cloud
solution + DVD + NAS). See:
[https://www.backblaze.com/blog/the-3-2-1-backup-strategy/](https://www.backblaze.com/blog/the-3-2-1-backup-strategy/)

------
ryao
Requiring userland software to worry about bitrot is a great way to ensure
that it is not done well. It is better to let the filesystem worry about it by
using a file system that can deal with it.

This article is likely more relevant to tape archives than anything most
people use today.

------
nurettin
Too bad for Arch:
[https://www.archlinux.org/news/switching-to-xz-compression-for-new-packages/](https://www.archlinux.org/news/switching-to-xz-compression-for-new-packages/)

~~~
saghm
Is this really an issue for this use case? My naive take is that since Arch
updates packages so often, "long-term storage" doesn't come up that much in
practice.

~~~
aidenn0
It's a non-issue since the packages are updated regularly and hashed before
installing.

~~~
saghm
Good point, I hadn't even thought of checksums!

------
pmoriarty
When I use xz for archival purposes I always use par2[1] to provide redundancy
and recoverability in case of errors.

When I burn data (including xz archives) on to DVD for archival storage, I use
dvdisaster[2] for the same purpose.

I've tested both by damaging archives and scratching DVDs, and these tools
work great for recovery. The amount of redundancy (with a tradeoff for space)
is also tuneable for both.

[1] -
[https://github.com/Parchive/par2cmdline](https://github.com/Parchive/par2cmdline)

[2] - [http://dvdisaster.net/](http://dvdisaster.net/)
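
Roughly, the par2 side of that workflow looks like this (the redundancy level
and file names are just examples):

    
    
        par2 create -r10 archive.tar.xz     # write ~10% redundancy as .par2 recovery files
        par2 verify archive.tar.xz.par2     # detect damage
        par2 repair archive.tar.xz.par2     # reconstruct, as long as damage fits the redundancy
    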

------
londons_explore
The purpose of a compression format is not to provide error recovery or
integrity verification.

The author seems to think the xz container file format should do that.

When you remove this requirement, nearly all his arguments become moot.

~~~
zzzcpan
> The purpose of a compression format is not to provide error recovery or
> integrity verification.

On the contrary. People archive files to save space, exchange files with each
other over unreliable networks that can corrupt data, and store them, even if
only temporarily, in RAM and on disks that can corrupt them. Compression
formats are there to help with that; it is their main purpose. This is why
fast, proper checksumming is expected, but not cryptographic hashing like
SHA-256, which adds nothing to this goal but overhead.

------
leni536
I fail to see why integrity checking is the file format's responsibility. Is
this historical? Like when you just dd a tar file directly onto a tape and
there is no filesystem? Anyway seems like it should be handled by the
filesystem and network layers.

I can understand the concerns about versioning and fragmented extension
implementations though.

~~~
JdeBP
> _you just dd a tar file directly onto a tape_

Actually, one uses the _tape archive_ utility, tar, to write directly to the
tape. (-:

------
LinuxBender
Perhaps renice your job so that others don't complain about their noisy
neighbor.

    
    
        renice 19 -p $$ > /dev/null 2>&1
    

then ...

Use tar + xz to save extra metadata about the file(s), even if it is only 1
file.

    
    
        tar cf - ~/test_files/* | xz -9ec -T0 > ./test.tar.xz
    

If that (or the extra options in tar for xattrs) is not enough, then create a
checksum manifest, always sorted.

    
    
        sha256sum ~/test_files/* | sort > ~/test_files/SHA256SUMS
    

Then use the tar command above to compress it all into a .tar.xz that now
contains your checksum manifest (the manifest gets a non-hidden name so that
the shell glob picks it up).

------
AndyKelley
I did some compression tests of the CI build of master branch of zig:

    
    
        34M zig-linux-x86_64-0.2.0.cc35f085.tar.gz
        33M zig-linux-x86_64-0.2.0.cc35f085.tar.zst
        30M zig-linux-x86_64-0.2.0.cc35f085.tar.bz2
        24M zig-linux-x86_64-0.2.0.cc35f085.tar.lz
        23M zig-linux-x86_64-0.2.0.cc35f085.tar.xz
    

With maximum compression (the -9 switch), lzip wins but takes longer than xz:

    
    
        23725264 zig-linux-x86_64-0.2.0.cc35f085.tar.xz  63.05 seconds
        23627771 zig-linux-x86_64-0.2.0.cc35f085.tar.lz  83.42 seconds
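
Presumably the measurements boil down to commands along these lines
(illustrative; -k keeps the input .tar around):

    
    
        time xz   -9 -k zig-linux-x86_64-0.2.0.cc35f085.tar
        time lzip -9 -k zig-linux-x86_64-0.2.0.cc35f085.tar
    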

------
qwerty456127
Why do people use xz anyway? As for me, I just use tar.gz when I need to back
up a piece of a Linux file system into a universally compatible archive, zip
when I need to send some files to a non-geek, and 7z to back up a directory of
plain data files for myself. And I dream of the world just switching to 7z
altogether, but that is hardly possible as nobody seems interested in adding
tar-like unix-specific metadata support to it.

~~~
LinuxBender
xz has substantially better compression than gz or bz2, especially if using
the flags -9e. You can use all your cores with -T0 or set how many cores to
use. I find it to be on par with 7-zip.

Perhaps folks are trying to stick with packages that are in their base repo.
p7zip is usually outside of the standard base repos.

~~~
yason
Substantially is a relative term. There are niche cases but how many people
really care, or need to care, about the last bytes that can be compressed?

Packing a bunch of files together as .tgz is a quite universal format and
compresses _most of the redundancy_ out. It has some pathological cases but
those are rare, and for general files it's still in the same ballpark with
other compressors.

I remember using .tbz2 at the turn of the millennium because at the time
download/upload times did matter and in some cases it was actually faster to
compress with bzip2 and then send over less data.

But DSL broadband pretty much made it not matter any longer: transfers were
fast enough that I don't think I've specifically downloaded or specifically
created a .tbz2 archive for years. Good old .tgz is more than enough. Files
are usually copied in seconds instead of minutes, and really big files still
take hours and hours.

None of the compressors really turn a 15-minute download into a 5-minute
download consistently. And the download is likely to be fast enough anyway.
Disk space is cheap enough that you haven't needed the best compression
methods for ages in order to stuff as much data on portable or backup media.

Ditto for p7zip. It has more features and compresses faster and better but for
all practical purposes zip is just as good. Even though it's slower, it won't
take more than a breeze to create and transfer, and it unzips virtually
everywhere.

~~~
joveian
I never thought bz2 was worth it over gzip, but xz is much much better in many
common cases (particularly text files, but also other things). Source code can
often be xz compressed to about half the size gzip achieves. If you are downloading
multiple things at once or a whole operating system or uploading something
then even on slower DSL lines it makes a huge difference IMO. I wish more
package systems provided deltas.

The only issue I've had with xz is that it doesn't notice when it isn't
actually compressing the file and fall back to storing it uncompressed, as
some other utilities do. So if you try to xz a tar file with a bunch of
already highly compressed media files, it both takes forever and you end up
with a nontrivially larger file than you started with.

Also, I like that, unlike gzip, xz can sha256 the uncompressed data if you use
the -C sha256 option, providing a good integrity check. Yes, I would really
like to use a format that doesn't silently decompress incorrect data and I
can't understand why the author of this article thinks that is a bad thing.
For backups I keep an mtree file inside the tar file with sha512 of each file
and then the -C sha256 option to be able to easily test the compressed tar
file without needing another file. In some cases I encrypt the txz with the
scrypt utility (which stores HMAC-SHA256 of the encrypted data).
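
The -C sha256 part of that is simply (file name hypothetical):

    
    
        xz -C sha256 backup.tar    # embed a SHA-256 of the uncompressed data in the .xz
        xz -t backup.tar.xz        # later: verify the archive against that check
    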

------
orbitur
Related: where can I find a thorough step-by-step method for maintaining the
integrity of family photos/videos in backups on either Windows or macOS?

------
ebullientocelot
The [Koopman] cited throughout is my boss, Phil! At any rate I'm sadly not
surprised and a little appalled that xz doesn't store the version of the tool
that did the compression.

------
microcolonel
Given that there is basically one standard implementation, and virtually
nobody has ever had an issue with compatibility with a given file, I don't see
how it is "inadequate". Sure, if it's inadequate now, it'll be inadequate if
you read it in a decade, but not in any way which would prevent you from
reading it.

If your storage fails, maybe you'll have a problem, but you'd have a problem
anyway.

Sometimes I feel like genuine technical concerns are buried by the authors
being jerks and blowing things way out of proportion. I, for one, tend to lose
interest when I hear hyperbolic mudslinging.

------
Annatar
So long as xz(1) gets insane amounts of compression and there is no compressor
which compresses better, people are going to keep preferring it.

------
vortico
What is the probability that a given byte will be corrupted on a hard disk in
one year?

What is the probability of a complete HD failure in a year?

------
loeg
Use par2 to generate FEC for your archives and move on with your life.

------
sirsuki
So what's wrong with plain and simple

    
    
      tar c foo | gzip > foo.tar.gz
    

or

    
    
      tar c foo | bzip2 > foo.tar.bz2
    

Been using these for over 20 years now. Why is it so important to change
things especially as this article points out for the worse?!

~~~
dchest
Better (smaller and/or faster) compression.

------
nailer
To read the article:

    
    
        document.body.style['max-width'] = '550px'; document.body.style.margin = '0 auto'

~~~
fenwick67
or just resize your browser window

------
Lionsion
What are better file formats for long term archiving? Were any of them
designed specifically with that use case in mind?

~~~
cpburns2009
There's a post on Super User that contains useful information:

"What medium should be used for long term, high volume, data storage
(archival)?"
[https://superuser.com/q/374609/52739](https://superuser.com/q/374609/52739)

It mostly focuses on the media instead of formats though.

~~~
paulmd
It all depends on what your definition of "high-volume" is, and just how
"archival" your access patterns really are.

Amazon Glacier runs on BDXL disc libraries (like a tape library). There's
nothing truly expensive about producing BDXL media, there just isn't enough
volume in the consumer market to make it worthwhile. If you contract directly
with suppliers for a few million discs at a time, that's not an issue (you
_did_ say high-volume, right?).

[https://storagemojo.com/2014/04/25/amazons-glacier-secret-bdxl/](https://storagemojo.com/2014/04/25/amazons-glacier-secret-bdxl/)

For medium-scale users, tape libraries are still the way to go. You can have
petabytes of near-line storage in a rack. Storage conditions are not really a
concern in a datacenter, which is where they should live.

(CERN has about 200 petabytes of tapes for their long-term storage.)

[https://home.cern/about/updates/2017/07/cern-data-centre-passes-200-petabyte-milestone](https://home.cern/about/updates/2017/07/cern-data-centre-passes-200-petabyte-milestone)

If you mean "high-volume for a small business", probably also tapes, or BD
discs with 20% parity encoding to guard against bitrot.

Small users should also consider dumping it in Glacier as a fallback - make it
Amazon's problem. If you have a significant stream of data it'll get expensive
over time, but if it's business-critical data then you don't really have a
choice, do you?

~~~
jlgaddis
> _Amazon Glacier runs on BDXL disc libraries ..._

This has been a rumor I've heard for quite a while (probably since shortly
after Glacier was announced) but has it ever been confirmed?

------
kazinator
> _The xz format lacks a version number field. The only reliable way of
> knowing if a given version of a xz decompressor can decompress a given file
> is by trial and error._

Wow ... that is inexcusably idiotic. Whoever designed that shouldn't be
programming. Out of professional disdain, I pledge never to use this garbage.

~~~
menacingly
Histrionic reactions don't improve the overall quality of software.

We certainly should have environments where we can tell someone code is shit,
it's just silly and counterproductive to then leap to attacks on the abilities
of the person behind it.

~~~
kazinator
Improving badly designed software that is _unnecessary_ in the first place is
foolish; just "rm -rf" and never give it another thought.

