Specifically, it abuses the progressive scan system. It's well known that progressive scan in JPEG (which makes the image load at low detail first and then refine up to full quality) doesn't just help the viewer, it also slightly improves compression. What's less well known is that progressive scan lets you split the DCT coefficients into scans in almost any way you like, and each scan gets its own Huffman table for compression.
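As an illustration (a made-up split, not necessarily one the script would pick), a jpegtran-style scan script for a three-component JPEG could divide the coefficients like this, and each of the five scans is then Huffman-coded with its own tables:

    # DC coefficients of Y, Cb and Cr in one interleaved scan
    0,1,2: 0-0,  0, 0;
    # low-frequency AC coefficients of Y, then the rest of Y's AC band
    0:     1-5,  0, 0;
    0:     6-63, 0, 0;
    # all AC coefficients of Cb, then of Cr
    1:     1-63, 0, 0;
    2:     1-63, 0, 0;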
To make this script, Loren did an exhaustive search of all possible splitting options on a large collection of images. He then collected statistics from that search and used them to devise a fast, simple search over the splits that most often turn out best, to maximize compression. You can read the script's comments for more specific details.
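To make the mechanics concrete, here is a minimal sketch of that kind of search in Python. It is not Loren's script: the use of jpegtran's -scans option and the handful of candidate splits below are just assumptions for illustration. It writes each candidate scan script to a temporary file, re-encodes with it, and keeps whichever output comes out smallest:

    import os
    import subprocess
    import tempfile

    # Each scan script splits the DCT coefficients into scans; every scan is
    # entropy-coded with its own optimized Huffman tables. These candidates
    # assume a 3-component (YCbCr) JPEG and are purely illustrative.
    CANDIDATE_SCRIPTS = [
        # DC first, then one AC scan per component (no AC split)
        "0,1,2: 0-0, 0, 0;\n0: 1-63, 0, 0;\n1: 1-63, 0, 0;\n2: 1-63, 0, 0;\n",
        # split the luma AC band after coefficient 5
        "0,1,2: 0-0, 0, 0;\n0: 1-5, 0, 0;\n0: 6-63, 0, 0;\n"
        "1: 1-63, 0, 0;\n2: 1-63, 0, 0;\n",
        # split the luma AC band after coefficient 8
        "0,1,2: 0-0, 0, 0;\n0: 1-8, 0, 0;\n0: 9-63, 0, 0;\n"
        "1: 1-63, 0, 0;\n2: 1-63, 0, 0;\n",
    ]

    def smallest_progressive(src, dst):
        """Re-encode src with each candidate split, keeping the smallest output as dst."""
        best_size = None
        for script in CANDIDATE_SCRIPTS:
            # write the candidate scan script where jpegtran can read it
            with tempfile.NamedTemporaryFile("w", suffix=".scans", delete=False) as f:
                f.write(script)
                scans_path = f.name
            tmp_out = dst + ".tmp"
            subprocess.run(
                ["jpegtran", "-copy", "none", "-optimize",
                 "-scans", scans_path, "-outfile", tmp_out, src],
                check=True,
            )
            os.remove(scans_path)
            size = os.path.getsize(tmp_out)
            # keep only the smallest result seen so far
            if best_size is None or size < best_size:
                best_size = size
                os.replace(tmp_out, dst)
            else:
                os.remove(tmp_out)
        return best_size

The real script's value is in knowing which splits are worth trying; the loop above only shows the mechanics of trying them.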
Extra note: Loren has been the primary maintainer of the x264 video encoder over the past 5 years and is also an ffmpeg developer.
1) Huffman table optimization
2) Ordinary progressive scan
3) Trying all sorts of split orders for progressive scan
Most image apps worth their salt hopefully do 1) and 2). This script does 1) if it hasn't been done already, and 3), which is its main purpose.
The best result is then used.
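For a rough idea of how those steps stack up on a single file, here is a small sketch along the same lines (the tool choice and file names are placeholders, not what the script itself does); the outputs of a scan-split search like the one sketched earlier would go into the same comparison as option 3:

    import os
    import subprocess

    def jpegtran(args, out, src="in.jpg"):
        # Re-encode src with the given options (dropping metadata) and report the size.
        subprocess.run(["jpegtran", "-copy", "none", *args, "-outfile", out, src],
                       check=True)
        return out, os.path.getsize(out)

    candidates = [
        jpegtran(["-optimize"], "opt.jpg"),                   # 1) Huffman optimization only
        jpegtran(["-optimize", "-progressive"], "prog.jpg"),  # 2) ordinary progressive scan
        # 3) results from a scan-split search would be appended here
    ]
    best = min(candidates, key=lambda c: c[1])
    print("keeping", best[0], "at", best[1], "bytes")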
du -d 1 -k
du -d 1 -k
So, about a 6-7% reduction. These are files uploaded via WordPress; I'm not sure whether it does any kind of size reduction of its own in the upload wizard, but I doubt it.
Time to run the utility:
279.84 real 176.08 user 72.19 sys