
Some filesystems may require allocating metadata to delete a file. AFAIK it's a non-issue with traditional Berkeley-style filesystems, since metadata and data come from separate pools. Notably, ZFS has this problem.



btrfs has this problem too it seems. but there it is usually easy to add a usb stick to extend the filesystem and fix the problem.
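Something like this, roughly, assuming the filesystem is mounted at /mnt and the stick shows up as /dev/sdX (paths are placeholders):

      # add the stick so btrfs has somewhere to allocate new metadata
      btrfs device add /dev/sdX /mnt
      # deletions can proceed now; free up some real space
      rm /mnt/some-big-file
      # migrate everything back off the stick and drop it from the fs
      btrfs device remove /dev/sdX /mnt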

i find it really frustrating though. why not just reserve some space?


btrfs does reserve some space for exactly this issue, although it might not always be enough.

https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html

> GlobalReserve is an artificial and internal emergency space. It is used e.g. when the filesystem is full. Its total size is dynamic based on the filesystem size, usually not larger than 512MiB, used may fluctuate.
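You can see it in the space report; the exact size depends on the filesystem:

      btrfs filesystem df /mnt
      # output includes a line like:
      #   GlobalReserve, single: total=512.00MiB, used=0.00B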


Yeah, with ZFS some will make an unused dataset with a small reservation (say 1G) that you can then shrink to delete files if the disk is full.
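Roughly like this, if I have the commands right (pool and dataset names are made up):

      # set aside ~1G that nothing else can allocate from
      zfs create -o refreservation=1G tank/emergency

      # later, when the pool is full: release it, delete, then re-reserve
      zfs set refreservation=none tank/emergency
      rm /tank/some-big-file
      zfs set refreservation=1G tank/emergency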


This hasn't been a problem you should be able to hit in ZFS in a long time.

It reserves a percentage of your pool's total space precisely to avoid hitting 0 actual free space, and only allows using that reserved space for operations that are a net gain on free space.
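If I remember right, that's the slop space controlled by the spa_slop_shift tunable (pool size >> 5 by default, so roughly 3%, with lower and upper caps); on Linux you can check it with:

      # default is 5, i.e. about 1/32 of the pool is held back
      cat /sys/module/zfs/parameters/spa_slop_shift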


For more details about this slop space, see this comment:

https://github.com/openzfs/zfs/blob/99741bde59d1d1df0963009b...


Yeah, a situation where your pool gets suspended due to no space and you can't delete files is considered a bug by OpenZFS.


I mean, the pool should never have gotten suspended by that, even before OpenZFS was forked; just ENOSPC on rm.


Oh, that's good to know. I hit it in the past, but it was long enough ago that ZFS still had format versions.


Yeah, the whole dance around slop space, if I remember my archaeology, went in shortly after the fork.


The recommended solution is to apply a quota on the top-level dataset, but that's mainly for preventing fragmentation or runaway writes.
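For example, something like this to keep ~10% of a 10T pool off-limits (numbers are just illustrative):

      # cap the top-level dataset a bit below the pool's raw capacity
      zfs set quota=9T tank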


I think the solution is to not use a filesystem that is broken in this way.


Note that ZFS explicitly has safeguards against total failure. No filesystem handles a near-full state well when it comes to fragmentation.


This is whataboutism. Being unable to use the filesystem when it's full, without arcane knowledge, is not the same as "not working well".

This is a broken implementation.


You're misunderstanding. See the sibling thread where p_l says that this problem has been resolved, and any further occurrence would be treated as a bug. Setting the quota is only done now to reduce fragmentation (ZFS's fragmentation avoidance requires sufficient free space to be effective).


No, I'm not. They said the "recommended solution" for this issue is to use a quota.

They also said it was mainly used for other issues, such as fragmentation. Either way, it was stated as a fix for the file delete issue.

How does this invalidate my comment that this was a broken implementation?

It doesn't matter if it will be fixed in the future, or was just fixed.


According to rincebrain, the "disk too full to delete files" problem was fixed "shortly after the fork", which means "shortly after 2012." My information was quite out of date.


Well, I'm glad they fixed a bug that made the filesystem unusable. Good on them, and thank you for the clarification.



