Hacker News

How does ZFS handle that test?



In both ZFS and btrfs, at initial setup you can create an extra dataset/subvolume of 1-2 GB or whatever and leave it unused. If you ever fill up root and run into problems freeing up space on root, you can release that extra volume and return its space to root to fix the problem.
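A minimal sketch of that reserve-volume trick (pool, dataset, and mount point names here are made up for illustration; run as root on a system that actually has these filesystems):

```shell
# ZFS: carve out a 2 GB reservation that no other dataset can consume.
zfs create -o reservation=2G pool/spare
# When the pool fills up, drop the reservation to free the space:
zfs set reservation=none pool/spare

# btrfs: the analogous trick is a ballast file in a dedicated subvolume.
btrfs subvolume create /mnt/spare
fallocate -l 2G /mnt/spare/ballast
# When the filesystem fills up, delete the ballast to recover space:
rm /mnt/spare/ballast
```

The ZFS variant is the cleaner of the two, since a reservation is enforced by the allocator itself rather than depending on a file that something might delete early.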


I haven't tried that particular test, but I've had a ZFS drive filled to the brim. I just deleted a bunch of files and was back to normal. This was on a computer with a single pool, also used for booting. The system didn't even crash or anything.


dd says "no space left" and exits, df says 100% full, but zpool list says 98% used with 80 GB free... it's like black magic that ZFS knows to keep some space for itself ;)
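That "black magic" is less mysterious than it looks: OpenZFS holds back a small fraction of pool capacity as "slop space" so that deletes and other metadata operations can still allocate blocks on a "full" pool. A simplified sketch of the arithmetic, assuming the default `spa_slop_shift` of 5 (i.e. 1/32 of the pool, ignoring the minimum and maximum clamps newer OpenZFS applies):

```python
def slop_space(pool_bytes, spa_slop_shift=5):
    """Approximate bytes ZFS holds back so the pool never truly fills.

    Simplified: real OpenZFS also clamps this value to a floor and a
    ceiling, but the default is pool_size / 2**spa_slop_shift (~3.1%).
    """
    return pool_bytes >> spa_slop_shift

# A hypothetical 2560 GiB pool would hold back roughly 80 GiB,
# which is the kind of gap between df and zpool list described above:
pool = 2560 * 2**30
print(slop_space(pool) // 2**30)  # → 80
```

So df reports the space user data may consume, while zpool list reports raw pool capacity including the slop, hence the mismatch.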

And to other comments:

It was a data-center hard disk, and NO, not even root should be capable of destroying the filesystem simply by writing to it; it's not 1970 anymore.

Calculating the metadata block to reserve should be rather trivial, since it's ONE big file. And it's not dd that is the problem; it's btrfs that cannot handle a process writing ONE BIG file.




