
FastLZ - lightning-fast compression library - ColinWright
http://fastlz.org/
======
ogrisel
It does compress slightly faster and better than lz4 (when the output is
written to disk on an SSD) using the default parameters:

For an input text file alpha.txt whose size is 3.3GB:

\- fastlz compresses in 1m6.446s (real time)

    
    
      $ time ./6pack ~/Desktop/alpha/alpha.txt /tmp/out.fastlz 
      alpha.txt         
      [##################################################] 43.4% saved
    
      real	1m6.446s
      user	0m26.482s
      sys	0m8.346s
    

\- lz4 compresses in 1m15.681s (real time)

    
    
      $ time ./LZ4Demo.exe ~/Desktop/alpha/alpha.txt /tmp/out.lz4
      *** Compression CLI using LZ4 algorithm , by Yann Collet (Jul 25 2012) ***
      Compressed 3590403790 bytes into 2418583396 bytes ==> 67.36%
      Done in 35.99 s ==> 95.13 MB/s
    
      real	1m15.681s
      user	0m23.081s
      sys	0m12.927s
    

Here is the size of the output:

    
    
      1.9G /tmp/out.fastlz
      2.3G /tmp/out.lz4
    

lz4 is here: <http://code.google.com/p/lz4> alpha.txt was generated as
advertised on this website: <http://leon.bottou.org/projects/sgd>

Both projects were compiled from the trunk on a 2009 MacBook Pro running OSX
10.6 with gcc i686-apple-darwin10-gcc-4.2.1.

I will now make some room on my SSD to time uncompress.

 _Edit_ : I reran the previous compressions and the results are pretty stable:
+/- 2s
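For what it's worth, the figures above can be cross-checked from the byte counts LZ4Demo prints (6pack reports "% saved", i.e. 100 minus the compression ratio). A small sanity-check sketch:

```python
# Recompute the compression figures reported in the logs above.
original = 3_590_403_790           # bytes in alpha.txt (from LZ4Demo's output)
lz4_out = 2_418_583_396            # bytes in /tmp/out.lz4

ratio = lz4_out / original * 100   # LZ4Demo reports "==> 67.36%"
print(f"lz4 ratio: {ratio:.2f}%")
print(f"lz4 saved: {100 - ratio:.2f}%")

# 6pack reports space *saved*: 43.4% saved means the output is roughly
# original * (1 - 0.434) bytes, i.e. about 1.9 GiB, matching `du`.
fastlz_out = original * (1 - 0.434)
print(f"fastlz output ~ {fastlz_out / 2**30:.1f} GiB")
```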

~~~
ogrisel
Here are the decompression timings:

    
    
      $ time ./6unpack /tmp/out.fastlz
      Archive: /tmp/out.fastlz
    
      alpha.txt       
      [..................................................]
    
    
      real	2m1.514s
      user	0m12.038s
      sys	0m16.199s
    
      $ time ./LZ4Demo.exe -d /tmp/out.lz4 /tmp/out.lz4.txt
      *** Compression CLI using LZ4 algorithm , by Yann Collet (Jul 25 2012) ***
      Successfully decoded 3590403790 bytes 
      Done in 24.54 s ==> 139.53 MB/s
    
      real	1m47.692s
      user	0m6.320s
      sys	0m18.227s
    

Decompression is thus faster for LZ4, but both runs are probably limited
mostly by writing the output file to the SSD.
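The gap between the tool-reported time and the wall-clock time points the same way: the LZ4 decoder itself ran for 24.54 s, so most of the 1m47s real time goes to writing the ~3.3 GB output. A back-of-the-envelope check using the numbers from the run above:

```python
decoded = 3_590_403_790            # bytes decoded (from LZ4Demo's output)
MB = 1_048_576                     # LZ4Demo appears to report MB as 2**20 bytes

internal = 24.54                   # seconds the decoder itself reports
real = 107.692                     # wall-clock time: 1m47.692s

# Decode-only throughput matches the tool's reported 139.53 MB/s;
# end-to-end throughput is far lower because of the filesystem write.
print(f"decode throughput: {decoded / MB / internal:.2f} MB/s")
print(f"end-to-end:        {decoded / MB / real:.2f} MB/s")
```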

~~~
ogrisel
Indeed, for in-memory decompression LZ4 is much faster, as expected:

    
    
      $ time ./LZ4Demo.exe -d /tmp/out.lz4 /dev/null
      *** Compression CLI using LZ4 algorithm , by Yann Collet (Jul 25 2012) ***
      Successfully decoded 3590403790 bytes 
      Done in 8.04 s ==> 426.03 MB/s
    
      real	0m17.829s
      user	0m5.837s
      sys	0m2.206s
    

I cannot run the same benchmark with fastlz's default 6unpack program, as it
does not seem to offer a way to skip writing the decompressed output to the
filesystem.
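When a tool lacks a /dev/null mode, the filesystem can also be taken out of the picture by decompressing into a buffer and discarding it. A minimal sketch of that pattern, using zlib from the Python standard library as a stand-in codec (neither fastlz nor LZ4 ships a stdlib binding):

```python
import time
import zlib

# Stand-in payload; in the benchmark above this would be the whole archive.
payload = b"the quick brown fox jumps over the lazy dog " * 100_000
blob = zlib.compress(payload, 1)   # fast compression level

start = time.perf_counter()
out = zlib.decompress(blob)        # decompress purely in memory, no file I/O
elapsed = time.perf_counter() - start

assert out == payload
mb = len(out) / 2**20
print(f"decoded {mb:.1f} MiB in {elapsed:.4f}s ({mb / elapsed:.0f} MB/s)")
```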

------
beagle3
Comparing to zip (an algorithm last updated in '94 or so) for speed? Why not
compare against the leaders, LZO and Snappy?

~~~
dchest
Here's a comparison table from a competing library, LZ4:
<http://code.google.com/p/lz4/>

------
0x1997
<http://blosc.pytables.org>

~~~
j_s
Wow!

    
    
      > In this mode Blosc can copy memory usually faster than a plain memcpy()
    

I'm surprised to see something like this flying under the radar (this sounds
too good to be true!). Has this library or similar ideas become a standard
part of mainstream projects?

~~~
ogrisel
Hadoop can use various fast compressors such as Snappy, LZO, and LZ4 so that
it can read data from and write data to the hard drives faster than if the
data were uncompressed.

------
WimLeers
I wonder how it compares to 7-Zip (<http://www.7-zip.org/>) and its LZMA/LZMA2
algorithm, when you have a read-heavy/write-light situation.

~~~
ogrisel
7zip is a slow compressor that compresses well. fastlz is better compared to
LZ4 or Snappy, which are meant to trade compression ratio for speed.

------
FigBug
I'm looking for something like this for a 16-bit micro (dsPIC33). From a quick
scan of the code I don't see any obvious issues with running on a 16-bit
machine, so hopefully it works.

~~~
beagle3
<http://www.oberhumer.com/opensource/lzo/> has been ported to everything.

~~~
FigBug
LZO requires 64kB of memory. I only have 16kB total. It looks like FastLZ
requires 8kB. Still looking for something smaller.

~~~
beagle3
I suspect you'll have to write something yourself - I've not seen an LZ-style
compressor (either lz78/lzw/"arc" or lz77/lzss/lzma/zip) that can get by with
so little memory and still have reasonable performance. (The memory mostly
holds hash tables; less memory with these algorithms essentially guarantees
much, much longer run times or a much, much lower compression ratio.)

Note, though, that's only for compression. LZO can decompress in-place, as can
snappy, and I'm sure others can too.
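The hash-table point can be seen in a toy LZ77-style match finder: the table maps a short prefix hash to the last position it was seen, so shrinking it makes distinct prefixes collide more often and matches get lost. A deliberately tiny sketch (not fastlz's actual algorithm):

```python
HASH_BITS = 10                       # 2**10 entries; real codecs use far more
TABLE_SIZE = 1 << HASH_BITS

def find_matches(data: bytes):
    """Greedy LZ77-style match finder with a fixed-size hash table.

    Memory use is dominated by `table` (TABLE_SIZE positions); fewer bits
    means more collisions, hence fewer matches and a worse ratio.
    """
    table = [-1] * TABLE_SIZE        # prefix hash -> last position seen
    matches = []
    i = 0
    while i + 3 <= len(data):
        h = (data[i] * 33 + data[i + 1] * 7 + data[i + 2]) % TABLE_SIZE
        cand = table[h]
        table[h] = i
        # Verify the candidate really matches (hash collisions are possible).
        if cand >= 0 and data[cand:cand + 3] == data[i:i + 3]:
            n = 3                    # extend the match as far as it goes
            while i + n < len(data) and data[cand + n] == data[i + n]:
                n += 1
            matches.append((i, i - cand, n))   # (position, distance, length)
            i += n
        else:
            i += 1
    return matches

# Repetitive input yields plenty of matches even with a tiny table.
print(find_matches(b"abcabcabcabc hello hello hello"))
```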

------
ChrisArchitect
fast or not, this url/link is olddddd

