He's basically taking advantage of a trick that is almost never exploited for compression purposes; since nobody else seems to do this, it can be applied after any JPEG encoder to improve compression further.
Specifically, it abuses the progressive scan system. It's well known that progressive scan in JPEG (which causes the image to load at low detail first and then be refined up to full quality) doesn't just improve usability for the viewer; it also slightly improves compression. Less obviously, progressive scan lets you specify almost any splitting of the DCT coefficients into scans, and each scan gets its own Huffman table for compression.
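To make that concrete, here's roughly what such a custom split looks like using jpegtran's scan-script mechanism. This is only a sketch of the general idea, not the script being discussed, and the split points are arbitrary; it assumes a libjpeg-style jpegtran with -scans and -optimize support:

    # Hand-written progressive scan script: each line is one scan, giving the
    # components it covers, the DCT coefficient range, and the
    # successive-approximation bits.  Each scan gets its own Huffman table.
    {
      echo '0,1,2: 0-0, 0, 0;'   # DC for Y, Cb, Cr, interleaved
      echo '0: 1-8, 0, 0;'       # low-frequency luma AC
      echo '0: 9-63, 0, 0;'      # remaining luma AC
      echo '1: 1-63, 0, 0;'      # Cb AC
      echo '2: 1-63, 0, 0;'      # Cr AC
    } > scans.txt
    # Losslessly re-encode with that scan layout and optimized Huffman tables.
    jpegtran -optimize -scans scans.txt input.jpg > output.jpg

Moving the split points around changes how well each scan's coefficients compress under their own Huffman tables, which is exactly the degree of freedom being exploited here.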
To make this script, Loren ran an exhaustive search over all possible splitting options on a large collection of images. From the statistics he collected, he devised a fairly fast and simple search over the most common best-split configurations to maximize compression. You can read the script's comments for more specific details.
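A brute-force version of that idea is easy to sketch in shell, purely to illustrate the search; it is nowhere near as thorough or as fast as what's described above, and the candidate split points below are made up:

    #!/bin/sh
    # Illustrative brute force: try a few split points for the luma AC
    # coefficients and keep whichever layout produces the smallest file.
    in=$1
    best_size=
    for split in 2 5 8 12 17 23; do
      {
        echo '0,1,2: 0-0, 0, 0;'
        echo "0: 1-$split, 0, 0;"
        echo "0: $((split + 1))-63, 0, 0;"
        echo '1: 1-63, 0, 0;'
        echo '2: 1-63, 0, 0;'
      } > /tmp/scans.txt
      jpegtran -optimize -scans /tmp/scans.txt "$in" > "/tmp/out.$split.jpg"
      size=$(wc -c < "/tmp/out.$split.jpg" | tr -d ' ')
      if [ -z "$best_size" ] || [ "$size" -lt "$best_size" ]; then
        best_size=$size
        best="/tmp/out.$split.jpg"
      fi
    done
    echo "smallest: $best ($best_size bytes)"

Since jpegtran is lossless, every candidate decodes to the same image, so picking the smallest output is safe.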
Extra note: Loren has been the primary maintainer of the x264 video encoder over the past 5 years and is also an ffmpeg developer.
If I recall correctly, it only lets you pick the number of progression levels; it doesn't experiment with different locations at which to split the DCT coefficients, only the total number of splits.
Just ran this on a random set of about 801 JPG files...
Pre utility:
du -d 1 -k
39710 .
Post utility:
du -d 1 -k
37074 .
So about a 6-7% reduction. These are files uploaded via WordPress; I'm not sure whether it does any kind of compression by itself in the upload wizard, but I doubt it.
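For the record, with du -k reporting kilobytes, the saving works out to:

    # (39710 - 37074) KB saved out of 39710 KB
    echo 'scale=1; (39710 - 37074) * 100 / 39710' | bc
    # 6.6 (percent)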