I was lucky enough to have David Huffman as an instructor at UC Santa Cruz. Very engaging and smart guy. He got a little tired of being asked about "Huffman Coding" all the time, given that it was so long in the past and he had done a number of other things since.
One of the things he enjoyed talking about during office hours (if help wasn't needed) was his paper folding:
Gives a good example.
I hate it when people output JSON that isn't self-documenting enough because they don't understand that aReallyNiceLongNameThatDescribesTheAttribute and nam1 compress to functionally the same size once you turn on compression.
Recently I saw an API that had a few single-letter names. I could still figure it out, but it was figuring it out instead of just reading the name.
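The claim is easy to check with Python's zlib, which implements the same DEFLATE compression gzip uses. This is a quick sketch, not a rigorous benchmark; the exact byte counts will vary with the zlib version and compression level:

```python
import json
import zlib

# Two structurally identical payloads; only the key length differs.
long_keys = json.dumps(
    [{"aReallyNiceLongNameThatDescribesTheAttribute": i} for i in range(100)]
)
short_keys = json.dumps([{"nam1": i} for i in range(100)])

long_compressed = len(zlib.compress(long_keys.encode()))
short_compressed = len(zlib.compress(short_keys.encode()))

# Uncompressed, the long-key payload is several kilobytes larger.
# Compressed, the gap shrinks to a handful of bytes, because DEFLATE
# replaces every repeat of the key after its first occurrence with a
# short back-reference.
print(len(long_keys) - len(short_keys), long_compressed - short_compressed)
```

The descriptive name only costs its full length once, on first occurrence; every later occurrence is a back-reference of roughly constant size.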
I agree with the general sentiment though. Compression isn't magic, and understanding how it works helps you to work with compressed formats.
Writing your own implementation of some compression algorithms is a lot of fun too. I learned a lot from implementing Huffman coding and an LZ77 variant a few years ago.
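For anyone tempted to try it, the core of Huffman coding fits in a few lines: count symbol frequencies, repeatedly merge the two least frequent nodes, then read codes off the tree. A minimal sketch (the tiebreaker counter just keeps the heap comparisons well-defined):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-code table mapping each symbol to a bit string."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, node);
    # a node is either a symbol or a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                        # leaf: a symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# 'a' dominates the frequency count, so it gets the shortest code.
print(codes)
```

Frequent symbols end up near the root (short codes), rare ones deep in the tree (long codes), and because symbols only ever sit at leaves, no code is a prefix of another.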
That's how gzip works, but all you have to know is that gzip WILL pack the output via Huffman coding as part of its compression algorithm, and from that you can conclude that far longer variable names come at essentially no cost.
The actual algorithm is a far less common topic than Huffman coding.
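Assuming "the actual algorithm" here means the LZ77 stage of DEFLATE (the part that actually deduplicates repeated strings), a toy version is short too. This sketch uses a naive linear search rather than the hash chains real implementations use, and emits (offset, length, next-char) tokens:

```python
def lz77_compress(data, window=255, min_match=3):
    """Toy LZ77: emit (offset, length, next_char) tokens."""
    out = []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Naive search for the longest match in the sliding window.
        for j in range(max(0, i - window), i):
            k = 0
            # Matches may overlap the current position; that's legal in LZ77.
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append((best_off, best_len, data[i + best_len]))
            i += best_len + 1
        else:
            out.append((0, 0, data[i]))  # literal
            i += 1
    return out

def lz77_decompress(tokens):
    out = []
    for off, length, ch in tokens:
        for _ in range(length):  # copy from the already-decoded output
            out.append(out[-off])
        out.append(ch)
    return "".join(out)

print(lz77_compress("abcabcabcabcx"))
```

In DEFLATE these tokens are then Huffman-coded, which is why the two techniques almost always get taught, and implemented, together.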
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
The notion that the first brick in a run has an offset of 0, as does the first upright stud in a frame, is something that's been basic to bricklayers and carpenters for a very long time.
Sometimes it's natural to refer to elements by their position, or offset; other times it's natural to talk about sequence numbers or indexes.
Naming things. Again.
1-indexed arrays are truly indexed :D