I've had one issue with btrfs that took it off my radar completely. A customer had a runaway process that filled a btrfs device with unimportant data. We found the errant process and killed it, but apparently if a btrfs device is completely full, you can't delete anything to free up space: file removal requires some amount of free space. Bricked the device, annoyed a customer, back to ext4.
ZFS had this issue too (I believe it's since been fixed). The workaround was to pick one large file you wanted to delete and do `echo -n > /the/unimportant/file`; once the file had been truncated to zero bytes, rm started to work again.
Not sure if that workaround would work in btrfs, but it worked on ZFS.
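In shell terms, the workaround amounted to something like this (the path is obviously just an example):

```
# Truncating in place frees the file's data blocks without creating a new
# directory entry, so it can succeed even when plain rm fails with ENOSPC.
echo -n > /the/unimportant/file    # or: truncate -s 0 /the/unimportant/file
rm /the/unimportant/file           # once a little space is free, rm works again
```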
ZFS reserves 1/64 of every disk precisely so it can't be truly fully allocated. It leaves enough room to delete snapshots, truncate files, and so forth.
Mind that everything is copy-on-write: you can't do anything, even metadata changes, without allocating new blocks. That's why it needs the reserve space.
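On Linux OpenZFS the size of that reserve is governed by the `spa_slop_shift` module parameter (the reserve is roughly pool size / 2^spa_slop_shift). A quick way to peek at it, assuming a pool named "tank":

```
cat /sys/module/zfs/parameters/spa_slop_shift   # reserve factor
zpool list -o name,size,alloc,free tank         # what the pool reports as usable
```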
I hit a ZFS bug once where a new release increased the amount reserved, which caused my file system to read as 100% full and left me unable to delete anything until I went back to the previous release.
Btrfs uses the disk completely. That makes this harder to get right (compare e.g. ext4, which reserves a fixed amount of inode space that may sit unused when the disk is full). At some point they added an in-memory "global reserve" of metadata space, which lets you delete things even when the file system is full.
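You can see that reserve on any reasonably current system; something like this (mount point and numbers are just examples):

```
$ btrfs filesystem df /mnt
Data, single: total=..., used=...
Metadata, single: total=..., used=...
GlobalReserve, single: total=512.00MiB, used=0.00B
```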
Yep. I had this happen a few weeks ago (I'm not sure how much maintenance the server had gotten since it was set up 2-3 years earlier). Thankfully, after seeing whatever the error was ("No space left on device" or something) and furrowing my brow, it seemed obvious enough to try without having to search for a solution. It seemed just dumb enough to work.
I do a similar thing with my laptop's swap partition. swapoff, add it to btrfs, then remove it and mkswap again. Always seemed safer than a potentially dodgy USB drive.
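Roughly, assuming the swap partition is /dev/sda3 and the full btrfs file system is mounted at /:

```
swapoff /dev/sda3                      # stop using the swap partition
btrfs device add /dev/sda3 /           # temporarily grow the btrfs volume
rm /path/to/the/junk                   # the delete can commit now
btrfs device remove /dev/sda3 /        # shrink back to the original device
mkswap /dev/sda3 && swapon /dev/sda3   # restore swap
```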
Ext4 reserves space that can be used only by root (5% by default); it's there so system services can keep working when users take all the space. It doesn't have issues like this even if you exhaust all of that space.
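The reservation is a tunable percentage set at mkfs time; for example, with a placeholder device name:

```
tune2fs -l /dev/sda1 | grep -i 'reserved block'   # show the current reservation
tune2fs -m 5 /dev/sda1                            # reserve 5% of blocks
tune2fs -u root /dev/sda1                         # root gets to use the reserve
```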
In ZFS, and I'm sure in btrfs too, you can set up quotas and reserved space, globally and/or per user, but by default they're not set. I actually set my quota to about 80% of capacity because apparently filling ZFS past that causes heavy fragmentation.
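For reference, the ZFS knobs look roughly like this (pool/dataset names are made up):

```
zfs set quota=800G tank/data             # hard cap on what the dataset may use
zfs set reservation=50G tank/scratch     # space guaranteed to that dataset
zfs set userquota@alice=100G tank/home   # per-user quota
zfs get quota,reservation tank/data      # check current settings
```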
Ext4 reserved space also helps with fragmentation.
To be more specific, reserved space on ext3 gives the fs more flexibility during allocation, which helps it avoid fragmentation.
Ext4 has delayed allocation for that purpose (it's on by default and can be disabled with the `nodelalloc` mount option), so reserved space matters less for fragmentation, but it would still help if you turn delayed allocation off.
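If you want to experiment, a minimal sketch (device and mount point are placeholders):

```
grep /data /proc/mounts               # no "nodelalloc" in the options => delalloc is on
mount -o nodelalloc /dev/sdb1 /data   # mount with delayed allocation disabled
```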
A copy-on-write file system has this potential problem because nothing is overwritten in place. Deleting anything requires space to write the metadata change reflecting the deletion, and before the data extents can be freed, that change must be committed to stable media.
It's been years since Btrfs introduced the "global reserve", which sets aside enough metadata space to ensure it's possible to delete files on a full file system. But an old workaround for this is to add a small device to the Btrfs volume, making it a 2-device volume. It could be a USB stick, a zram device (ramdisk), a partition, or even a loop-mounted file on some other file system. Delete the files, and then you can remove the temporary 2nd device.
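A rough sketch of the loop-mounted-file variant, assuming another file system has room at /other and the full Btrfs volume is at /mnt:

```
truncate -s 2G /other/btrfs-spill.img       # sparse file on a different fs
losetup /dev/loop7 /other/btrfs-spill.img   # attach it as a block device
btrfs device add /dev/loop7 /mnt            # now a temporary 2-device volume
rm -r /mnt/junk                             # deletes can commit
btrfs device remove /dev/loop7 /mnt         # migrates data back, drops the device
losetup -d /dev/loop7 && rm /other/btrfs-spill.img
```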
Wasn't this fixed years ago?
I had a btrfs partition filled by a rogue process and it didn't get bricked; it let me remove the junk files without any issue. And I'm talking about an Ubuntu 14.04 LTS server.
This was 14.04 as well, on a Tegra (armhf) system.
ETA: looked up the ticket; the customer reported "when trying to delete any files, even as root, btrfs says 'cannot remove'", and field engineering observed the same.