
Real Time Data compression using LZ4 - suprgeek
http://fastcompression.blogspot.fr/p/lz4.html
======
mkup
With such a compression ratio for text, and such compression speed, this
algorithm fits well into the Google Snappy / LZO / FastLZ group. Every
database engine should use one of these; they operate at disk I/O speed.

~~~
alecco
Only for certain kinds of data, like a document index. Not for numerical or
many other types. RLE, delta encoding and many other simpler algorithms are a
better match in many cases.
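The point about delta encoding can be sketched quickly. This is a minimal
illustration (not from the thread): monotonically increasing values such as
timestamps become a stream of tiny, repetitive differences after delta
encoding, which a generic compressor then squeezes far better than the raw
bytes.

```python
import struct
import zlib

def delta_encode(values):
    """Store each value as the difference from its predecessor."""
    prev = 0
    out = []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

# Monotonic timestamps: large raw values, but constant small deltas.
timestamps = list(range(1_000_000, 1_010_000, 5))

raw = struct.pack(f"<{len(timestamps)}q", *timestamps)
deltas = struct.pack(f"<{len(timestamps)}q", *delta_encode(timestamps))

# Generic compression does far better on the delta stream.
print(len(zlib.compress(raw)), len(zlib.compress(deltas)))
```

The same trick is why columnar databases delta-encode sorted columns before
handing them to a general-purpose compressor.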

------
blitzprog
I just integrated it into the programming language I am developing, thanks for
such a small and handy library!

~~~
alecco
You should consider zlib: it's compatible with everything and has very nice
options. It's also very light on memory (around 500KB vs *MB). Check out
how pigz implements it with pthreads (one thread per block).
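The one-thread-per-block idea is easy to sketch. This is a simplified,
hypothetical version in Python rather than pigz's actual C implementation:
each block is compressed as an independent zlib stream, whereas real pigz
stitches the blocks into one continuous gzip stream (priming each block with
the tail of the previous one as a dictionary).

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 128 * 1024  # pigz's default block size is 128 KiB

def compress_blocks(data, level=6, workers=4):
    """Compress fixed-size blocks in parallel, pigz-style (simplified).

    Threads help here because zlib.compress releases the GIL while
    compressing, so the blocks really run concurrently.
    """
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda b: zlib.compress(b, level), blocks))

def decompress_blocks(compressed_blocks):
    """Inverse of compress_blocks: inflate each block and concatenate."""
    return b"".join(zlib.decompress(b) for b in compressed_blocks)

data = b"the quick brown fox jumps over the lazy dog\n" * 20_000
assert decompress_blocks(compress_blocks(data)) == data
```

The independent-block shortcut costs a little compression ratio at block
boundaries, which is part of why pigz bothers with dictionary priming.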

------
altrego99
Speed is good, but how does it perform in terms of compression ratio against
other algorithms?

~~~
pyxy
From the Compression Ratings website I pulled this:

    
        program    comp-ratio  comp-time  decomp-time
        bzip2      34.1%       468.09s    167.03s
        gzip -5    37.6%       141.30s    34.76s
        lz4 -c2t4  43.9%       52.72s     5.84s

<http://compressionratings.com/sort.cgi?rating_sum.full+p3>

~~~
alecco

      program       comp-ratio   comp-time  decomp-time
      pigz -1          40.3%       26.92s      20.31s
      lz4 -c2t4        43.9%       52.72s       5.84s
      info-zip -1      41.3%      122.31s     119.81s
    

More relevant would be a comparison with gzip -1, lzop, snappy and rolz.

Also the test data is mixed. It is not very helpful to see where this
algorithm shines.

Also note the memory usage shoots up from 5m/2m for info-zip -1 to 46m/42m
for the specific case of lz4 you picked.

EDIT: also bzip2 seems to be particularly bad for this specific dataset;
other algorithms in that category get better compression ratios. Added pigz
to the comparison (info-zip with pthreads).
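The speed-versus-ratio trade-off the whole thread circles around can be
reproduced in miniature with stdlib zlib. The sample payload below is an
assumption standing in for the benchmark corpus; the point is only the shape
of the curve, not the absolute numbers.

```python
import time
import zlib

# Hypothetical mixed payload: repetitive text plus less-compressible bytes.
data = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 5_000
        + bytes(range(256)) * 2_000)

for level in (1, 5, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: ratio {len(out) / len(data):.1%}, "
          f"{elapsed * 1000:.1f} ms")
```

Fast formats like LZ4 and Snappy sit even further toward the speed end of
this curve than `level 1`, which is exactly why headline ratio numbers alone
can be misleading.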

