It's odd that, in listing the ways to store a choice of 40 numbers from the set {1..100}, the paper doesn't mention the traditional approach of using a string of 100 bits: bit n is set to 1 if n is in the subset, and 0 otherwise. This is damn close to the theoretical optimum of ~94 bits, and way simpler.
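For illustration, here's a minimal sketch of that bit-string idea in Python (the function names `encode_bitmask`/`decode_bitmask` are mine, not from the paper):

```python
def encode_bitmask(subset, n=100):
    """Pack a subset of {1..n} into an n-bit integer: bit (i-1) is 1 iff i is in the subset."""
    mask = 0
    for i in subset:
        mask |= 1 << (i - 1)   # set bit i-1 for element i
    return mask

def decode_bitmask(mask, n=100):
    """Recover the subset (in increasing order) from the n-bit integer."""
    return [i for i in range(1, n + 1) if mask & (1 << (i - 1))]

# Example: a few elements of {1..100}
s = [3, 17, 42, 99]
m = encode_bitmask(s)
assert decode_bitmask(m) == s   # round-trips; storage is always 100 bits regardless of subset size
```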
Thanks for the hint! I had really forgotten that one; I've just added it to the presentation. In addition, I adjusted the example to "30 of 100", as this better reflects the average case ("40 of 100" was too close to the pathological corner case of "50 of 100").
The author has a rather narrow definition of "efficient". Yes, he uses close to the minimum number of bits of storage, but his encoding scheme is horribly slow: encoding takes O(k^2 log n) time, and decoding using the presented algorithm takes O(k^2 n log n) time.