
An option in the future to write data with a fast ZFS compression level, so writes stay speedy, and then recompress blocks that haven't changed in some time at a higher compression ratio would be really great. You would get almost no performance penalty when writing data and a very high compression ratio for old data.



Fast compression levels of a given algorithm mean a lower compression ratio.

I don't know if ZFS supports variable compression levels (maybe per dataset), but Btrfs sets the zstd level via a mount option, e.g. mount -o compress=zstd:[1-15]

Thus it's possible to use a higher level (higher compression ratio, but slower and heavier on CPU and RAM) for, e.g., an initial archive, and later use a lower level (or even no compression) when doing updates. Writes use the compression algorithm and level set at mount time, and it can be changed while the filesystem stays mounted, using -o remount.
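
A quick sketch of that workflow (the device path /dev/sdb1 and mountpoint /mnt/archive are just placeholders):

    # Initial bulk load: mount with a high zstd level for maximum compression.
    mount -o compress=zstd:15 /dev/sdb1 /mnt/archive

    # Later, switch to a faster level without unmounting; only blocks
    # written after the remount use the new level.
    mount -o remount,compress=zstd:3 /mnt/archive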


Btrfs is soon getting support for specifying the level on a per-file basis: https://github.com/kdave/btrfs-progs/issues/184


> I don't know if ZFS supports variable compression levels (maybe per dataset)

> Thus it's possible to use a higher level (high compression ratio, slower speed, more CPU and RAM) for e.g. an initial archive. And later use a lower level (or even no compression) when doing updates.

Yup, it works the same in ZFS: you can change the compression setting any time you like, and it applies to future writes.
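
For example (assuming OpenZFS 2.0 or later for zstd support; the dataset name tank/data is just a placeholder):

    # Check the current compression setting for a dataset.
    zfs get compression tank/data

    # Switch to a higher zstd level; only blocks written from now on use it.
    # Existing blocks keep whatever compression they were written with.
    zfs set compression=zstd-19 tank/data

    # Drop back to a fast default for day-to-day writes.
    zfs set compression=lz4 tank/data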


I think that would require the same architectural changes needed for offline deduplication (i.e. probably not going to happen any time soon, unless I've missed some recent developments).



