I understand your design (and yes, it would work for the simple cases). Even then, it still boils down to whether you are willing to pay extra computation on every write just to maintain a [lower, upper] bound on the size of each folder's contents.
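To make that overhead concrete, here is a minimal sketch of the bookkeeping each write would trigger if size deltas were propagated eagerly to every ancestor folder. The in-memory `sizes` table and the eager-propagation strategy are my assumptions for illustration, not necessarily your design:

```python
import os

def propagate_size_delta(path: str, delta: int, sizes: dict[str, int]) -> None:
    """Charge a size change to every ancestor folder of `path`.

    `sizes` is a hypothetical {folder: total_bytes} table; a real
    filesystem would have to persist these counters durably.
    """
    folder = os.path.dirname(os.path.abspath(path))
    while True:
        sizes[folder] = sizes.get(folder, 0) + delta
        parent = os.path.dirname(folder)
        if parent == folder:  # reached the filesystem root
            break
        folder = parent
```

Even this in-memory toy is O(depth) extra work per write; making the counters crash-safe on disk means extra metadata updates and journaling on top of that.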
Then there are the more complex situations (just a small sample off the top of my head):

- What happens when a file is hard-linked twice under the same ancestor folder? Should its size be counted once or twice? (See the first sketch after this list.)
- How do you even know the parents of a file at write time? Current Unix filesystems only store folder -> [inodes], where an inode in that list may also be referenced by other folders; there is no inode -> folder(s) reverse mapping that I know of. (See the second sketch after this list.)
- And then there are bind mounts (similar to "folder hard-links", but not quite), special files/devices, etc.
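On the hard-link question: the usual userspace convention (the one `du` follows) is to count each inode once. A rough sketch of that convention, assuming a plain directory walk and deduplication on `(st_dev, st_ino)`:

```python
import os

def tree_size(root: str) -> int:
    """Total size of the files under `root`, counting each
    hard-linked inode only once (the du-style convention)."""
    seen: set[tuple[int, int]] = set()
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))  # don't follow symlinks
            except OSError:
                continue  # file vanished or is unreadable; skip it
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # another hard link to an inode already counted
            seen.add(key)
            total += st.st_size
    return total
```

Counting once matches `du`; counting twice matches what you would get by summing the sizes of all directory entries. Either answer is defensible, which is exactly the problem.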
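And on the missing reverse mapping: the only portable way I know of to answer "which folders link to this inode?" is a brute-force scan of the tree, which is exactly why resolving a file's ancestors at write time is impractical. A hypothetical illustration:

```python
import os

def find_links(root: str, dev: int, ino: int) -> list[str]:
    """Find every path under `root` that is a hard link to the given
    inode. With no inode -> folder(s) mapping, a full scan is the
    only portable option."""
    links = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if (st.st_dev, st.st_ino) == (dev, ino):
                links.append(path)
    return links

# Example: locate all links to one file; cost is O(size of the tree).
# st = os.lstat("/srv/data/report.bin")  # hypothetical path
# print(find_links("/srv", st.st_dev, st.st_ino))
```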
All in all, it is a huge mess for a questionable benefit. What actual use cases are just not possible without this feature?