So up until now Dropbox used zlib compression? I would have thought they would use zstd or brotli for a good speed/ratio trade-off, or lz4 for pure speed.
DivANS looks interesting/promising like a nice little kit for compression. Doesn't seem like it's good for archiving since it doesn't seem to have support for seek (or so it seems, would like to be proven wrong), but very interesting nonetheless.
DivANS author here: the compression benchmarks all measure seekability to the nearest 4 MB chunk, but the current lib doesn't support seekability out of the box yet. However, it would be trivial to add: simply reset the encoder at each 4 MB boundary. This can even be done without additional library support by reinitializing the compressor (or decompressor) every 4 MB.
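The reset-per-chunk idea isn't specific to DivANS. A minimal sketch using Python's zlib as a stand-in for the DivANS encoder (the chunk size and helper names are illustrative, not part of any real API) shows how reinitializing the compressor at each boundary yields independently decodable, seekable chunks:

```python
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB, matching the benchmark granularity

def compress_seekable(data, chunk_size=CHUNK_SIZE):
    """Compress data as a list of independent chunks by creating a
    fresh compressor for each one, so no chunk depends on prior state."""
    chunks = []
    for i in range(0, len(data), chunk_size):
        comp = zlib.compressobj()  # reinitialize: no shared history
        chunks.append(comp.compress(data[i:i + chunk_size]) + comp.flush())
    return chunks

def read_chunk(chunks, index):
    """Decompress a single chunk without touching any other chunk --
    this is what makes the stream seekable at chunk granularity."""
    return zlib.decompress(chunks[index])

data = bytes(range(256)) * 40000  # ~10 MB of sample data -> 3 chunks
chunks = compress_seekable(data)
assert read_chunk(chunks, 1) == data[CHUNK_SIZE:2 * CHUNK_SIZE]
```

The cost of this approach is a slightly worse ratio, since each chunk starts with an empty dictionary and model; that's the usual seekability trade-off.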
What future work can we expect here? Is this approach really pushing the state of the art, or is it just an attempt to squeeze another few percent out of brotli at the expense of more CPU?
My understanding is the main novel idea here is splitting compression into independent subproblems. Is there potential for this idea to become the basis for all new modern (lossless) codecs (e.g. redesigns of FLAC or PNG)?