But... is it normal for massive amounts of data to show up as error nodes (/open (No such file or directory))? In my case that's nearly everything in my system subvolume except snapshots/ and var/.
Also: Is there a way to stop sampling after some time or number of samples? It turns out your idea is correct and you don't actually need all that many to get a good impression :-)
Thanks for making it!
No... This particular one most likely indicates that the path on btdu's command line is not that of the root subvolume. The explanation attached to the node (shown when you open it) mentions this. If that's not it, then it's not something I'm aware of - what paths are under the node?
> Also: Is there a way to stop sampling after some time or number of samples?
Once you're satisfied with the precision, you can press p to stop sampling. Does that do it or were you looking for a configurable limit?
Thanks for the feedback!
My mistake was thinking that my mounted file system's root would be the btrfs root, which of course it is not; it's itself a subvolume (a standard configuration has @system and @home, I think). When I re-ran btdu on the separately mounted root subvolume, it worked flawlessly.
Maybe I'm the only one making this mistake, but it might make sense to remind people of it, especially since btdu currently doesn't fail with an error: it seems to navigate up to the root volume (it shows my @-subvolumes from wherever I start) but then presents a broken result.
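(For anyone else who hits this: one way to point btdu at the true top-level subvolume is to mount it separately, e.g. `mount -o subvolid=5 /dev/sdX /mnt/btrfs-root` and then `btdu /mnt/btrfs-root` - the device and mountpoint here are placeholders for your own.)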
The basic idea behind this project is that you only need to cast 100 random samples to have roughly a 1% resolution, which is usually enough to know what ate your disk space. Even on very slow drives, resolving 100 samples should take no more than a few seconds.
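This isn't btdu's actual implementation (btdu samples random points in the filesystem's physical address space), just a minimal Python sketch of the statistics behind that claim, using a made-up disk layout. Each of the 100 samples stands for 1% of the disk, so the estimates land within a few percentage points of the true shares:

```python
import random
from collections import Counter

# Made-up disk layout: the fraction of the disk each top-level path occupies.
layout = {"@system": 0.55, "@home": 0.30, "snapshots/": 0.10, "var/": 0.05}

def sample_usage(layout, n_samples=100, seed=0):
    """Estimate per-path usage from n_samples random samples, analogous to
    how btdu estimates it from random points in the filesystem."""
    rng = random.Random(seed)
    # Each sample hits a path with probability equal to its true share.
    hits = rng.choices(list(layout), weights=list(layout.values()), k=n_samples)
    counts = Counter(hits)
    # Each sample represents 1/n_samples of the disk: 100 samples ~ 1% steps.
    return {path: counts[path] / n_samples for path in layout}

print(sample_usage(layout))
# e.g. {'@system': 0.54, '@home': 0.31, 'snapshots/': 0.10, 'var/': 0.05}
```

Raising n_samples tightens the estimates (the error of each share shrinks roughly with the square root of the sample count), which is why letting btdu keep sampling only buys precision you often don't need.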