I'm totally correct in what I'm writing. zstd is not a compression format suitable for a wide range of uses, and every current implementation that uses it for anything other than single-file compression is awful.



No, your critique is of how 7zip implements an archival format built on top of zstd. The zstd algorithm has no concept of files, only bytes.

Archival software then has to build a file format on top of the compression algorithm, and there are multiple ways to slice the problem. For example, a tar.gz will first tar everything into a big archive file, then feed it into gzip for compression. zip, on the other hand, feeds each file individually into the chosen algorithm (DEFLATE for most implementations).
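The effect of that design choice is easy to demonstrate. A minimal sketch, using stdlib `zlib` as a stand-in for zstd (the principle is identical for any stream compressor) and hypothetical made-up file contents:

```python
import zlib

# Simulate 100 small "files" with identical contents.
file_data = b"key=value\nanother_key=another_value\n" * 4
files = [file_data] * 100

# zip-style: each file is fed to the compressor independently,
# so identical content is re-compressed from scratch every time.
per_file_total = sum(len(zlib.compress(f)) for f in files)

# tar.gz-style: concatenate first, then compress once; the
# compressor sees the repetition across file boundaries.
solid_total = len(zlib.compress(b"".join(files)))

print(per_file_total, solid_total)  # solid is dramatically smaller
```

The solid approach wins here precisely because the compressor's window spans all the duplicates, which is the behavior being discussed in this thread.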

Your critique is that the 7zip archive format is not suitable for use with zstd in the case of many small yet identical files. zstd is doing its job; it's the archival format that isn't playing along.


Zstandard doesn't accept multiple files at all, so it's the archiving format's job to convert files into byte sequences and compress them accordingly. It looks like 7z wasn't able to deduplicate entirely or partially identical files; in other words, zstd could never know it was compressing almost identical files over and over.



