OpenZFS deduplication is good now and you shouldn't use it (despairlabs.com)
8 points by tjwds 5 months ago | 2 comments



Nice work!

I found setup worked well on my ZFS system. The main issue I ran into with ZFS was slow writes, even with caches, when dealing with a lot of disks. It just felt like I wasn't making the most of my hardware, even for bulk writes that didn't modify existing data.

I guess it may be the result of needing to do random writes to update directory structures or similar?

I had an array of ten 8 TB drives, and large writes would get <100 MB/s even over 10GbE, and the bottleneck wasn't CPU or memory either.
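
If anyone hits this again, here's a minimal diagnostic sketch, assuming a pool named tank with a dataset tank/data (hypothetical names, swap in your own). Watching per-vdev throughput during a large write usually shows whether one vdev, the whole pool, or a sync setting is the limit:

    # Per-vdev throughput, refreshed every second, while the write runs
    zpool iostat -v tank 1

    # Properties that commonly cap streaming write speed
    zfs get recordsize,compression,sync tank/data

In particular, sync=always forces every write through the ZIL before returning, which can cap throughput well below the raw disk speed.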


Everybody talks about OpenZFS block-level dedup. The real gem is file-level deduplication in copy-on-write transactional filesystems like ZFS.

   cp --reflink=auto source.file dest.file
The command above performs a lightweight copy (effectively a zfs clone at the file level), where the data blocks are only copied when one side is modified. On ZFS this relies on block cloning, which landed in OpenZFS 2.2.
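
A quick way to see the effect, as a sketch: this assumes a pool named tank with a dataset mounted at /tank/data (hypothetical names), OpenZFS 2.2+ with the block_cloning feature enabled, and a coreutils cp new enough to support --reflink:

    # Confirm the pool supports block cloning
    zpool get feature@block_cloning tank

    # Write ~1 GiB, then make a lightweight reflink copy
    dd if=/dev/urandom of=/tank/data/big.bin bs=1M count=1024
    cp --reflink=auto /tank/data/big.bin /tank/data/copy.bin

    # USED barely grows: both files share the same data blocks
    # until one of them is rewritten
    zfs list tank/data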



