It’s only a ‘disaster’ if you are using it exclusively programmatically and want to do special tuning.
File systems are pretty good if you have a mix of human and programmatic uses, especially when the programmatic cases are not very heavy duty.
The programmatic scenarios are often entirely human-hostile if you imagine what it would take for a person to actually use them directly. Like direct S3 access, for example.
High-density drives are usually zoned storage, and it's pretty difficult to implement the regular filesystem API on top of that with any kind of reasonable performance (hence the device- vs. host-managed SMR split). The S3 API can work great on zones, but only because it doesn't let you modify an existing object without rewriting the whole thing, which is an extremely rough tradeoff.
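To see why, here's a minimal sketch of that tradeoff (a toy model, not a real SMR/ZNS driver): each zone only accepts writes at its write pointer, so "modify a byte in the middle" degenerates into reading the zone out, resetting it, and rewriting everything — the same whole-object rewrite S3 forces on you.

```python
class Zone:
    """Toy model of one zone on zoned media: sequential writes only."""
    def __init__(self, size):
        self.size = size
        self.data = bytearray()

    @property
    def write_pointer(self):
        return len(self.data)

    def append(self, buf):
        # Writes are only legal at the write pointer.
        if self.write_pointer + len(buf) > self.size:
            raise IOError("zone full")
        self.data.extend(buf)

    def reset(self):
        # The only way to reclaim space: wipe the whole zone.
        self.data = bytearray()

def modify_in_place(zone, offset, patch):
    """What a random 'write at offset' costs on zoned media:
    snapshot the zone, reset it, replay the patched contents."""
    snapshot = bytes(zone.data)
    patched = snapshot[:offset] + patch + snapshot[offset + len(patch):]
    zone.reset()
    zone.append(patched)   # full rewrite, like an S3 PUT of the whole object
    return len(patched)    # bytes physically rewritten to change len(patch) bytes

zone = Zone(256 << 20)     # 256 MiB zone, a plausible order of magnitude
zone.append(b"hello world")
rewritten = modify_in_place(zone, 6, b"zoned")
# 5 logical bytes changed; the entire 11-byte payload was rewritten
```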
One way it's a disaster is that file names (on Linux at least; I haven't used Windows in a long time) are raw byte strings, and a single directory path can cross multiple file systems.
So if you have non-ASCII characters in your paths, encoding/decoding is guesswork, at worst differing from path segment to path segment, and there's no metadata attached saying which encoding to use.
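A quick Python illustration of that guesswork (the file name here is made up): the same byte string is a perfectly legal name on disk but doesn't decode as UTF-8, several single-byte encodings all "work," and Python's surrogateescape hack only lets you round-trip the bytes, not learn what they were meant to say.

```python
# A legal Linux file name: 'café.txt' encoded as Latin-1, not UTF-8.
name = b"caf\xe9.txt"

try:
    name.decode("utf-8")
except UnicodeDecodeError:
    pass  # strict UTF-8 rejects the lone 0xE9 byte outright

# Guess 1: Latin-1 -> 'café.txt'. Guess 2: CP1252 -> also 'café.txt'.
# Guess 3: KOI8-R -> 'cafИ.txt'. Nothing in the name says which is right.
guess = name.decode("latin-1")

# What Python itself does for os.listdir()/os.fsdecode(): smuggle the
# undecodable byte through as a lone surrogate so it at least round-trips.
lossy = name.decode("utf-8", errors="surrogateescape")
assert lossy.encode("utf-8", errors="surrogateescape") == name
```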
ZFS actually has settings for that (the `utf8only` and `normalization` dataset properties), which originated from serving filesystems to different OSes: it can enforce canonical UTF-8 names with a specific normalization form. AFAIK the reason they exist was cooperation between Solaris, Linux, Windows, and Mac OS X machines all sharing the same network filesystem hosted from ZFS.