> These days if you’re going to iterate on a solution you’d better make it multithreaded.
Repetition-eliminating compression tends to be inherently sequential. You'd probably need to change the file format to support chunks (or multiple streams) to compress in parallel.
Because of LZ back references, you can't LZ compress different chunks separately on different cores and still emit a single compression stream.
Statistics acquisition (histograms) and entropy coding could be parallelized, I guess.
(Not a compression guru, so take above with a pinch of salt.)
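To illustrate the histogram point: symbol counting commutes, so you can count chunks independently and sum the results. A minimal sketch (hypothetical helper names; threads shown for brevity, though pure-Python counting holds the GIL, so a real implementation would use processes or native code):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def byte_histogram(chunk: bytes) -> Counter:
    # Per-chunk symbol counts; no ordering dependency between chunks.
    return Counter(chunk)

def parallel_histogram(data: bytes, workers: int = 4) -> Counter:
    n = max(1, len(data) // workers)
    parts = [data[i:i + n] for i in range(0, len(data), n)]
    # Histograms commute: counting chunks independently and summing
    # gives the same result as one sequential pass over the data.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(byte_histogram, parts)
    total = Counter()
    for c in partials:
        total.update(c)
    return total

data = b"abracadabra" * 1000
assert parallel_histogram(data) == Counter(data)
```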
There are gzip variants that break the file into blocks and run in parallel. They lose a couple of % by truncating the available history.
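The block-parallel trick works because concatenated gzip members form a valid gzip stream (RFC 1952), which is what pigz-style tools exploit. A rough sketch of the idea (not how any particular tool is implemented; zlib releases the GIL during compression, so threads get real parallelism here):

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

def compress_block(block: bytes) -> bytes:
    # Each block becomes an independent gzip member, so its LZ history
    # is truncated at the block boundary -- that's where the couple of
    # percent of compression ratio is lost.
    return gzip.compress(block)

def parallel_gzip(data: bytes, block_size: int = 128 * 1024) -> bytes:
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor() as pool:  # zlib releases the GIL
        members = pool.map(compress_block, blocks)
    # Concatenated members decode with any stock gunzip.
    return b"".join(members)

data = b"the quick brown fox jumps over the lazy dog\n" * 10_000
packed = parallel_gzip(data)
assert gzip.decompress(packed) == data
```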
But zopfli appears to do a lot of backtracking to find the best permutations for matching runs that have several different solutions. There are a couple of ways you could run those in parallel: some with a lot of coordination overhead, others with a lot of redundant calculation.