I fully agree.
The problem is that it's even harder to design something that is fully p2p than something federated.
(One thing that tries to be exactly what you want would be https://secushare.org/ But it's in a very early stage right now. Others exist, though.)
And you have to agree that (even if most people choose the biggest provider) simply having the choice of different providers or even being your own provider is a huge improvement.
It's not really about the design. At some point you have to recognize the physical impossibilities of p2p models, primarily availability. The reason Matrix is more popular than Tox, and why we haven't seen any remotely successful p2p social network while projects like Mastodon took off, is that there is simply no way to make a good UX out of the scenario where you want to send a message to X, who is offline, and before they come online you go offline, so the message is never delivered.
The way Tox does it (and any network trying to work around this problem) is to cache messages en masse as close to the destination as it can get. But as you can imagine, that makes the bandwidth and power requirements of maintaining the network too strenuous to be competitive with a federated option that simply works when the always-on server is available and doesn't when it's offline.
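To make that trade-off concrete, here's a rough sketch of the store-and-forward pattern being described (all the names, and the DHT-style lookup function, are made up for illustration, not Tox's actual API):

```typescript
// Sketch of store-and-forward caching (hypothetical names throughout).
// If the recipient is offline, replicate the encrypted message to peers
// whose IDs are closest to the recipient's, and hope at least one copy
// survives until the recipient comes back online.

interface Peer {
  id: string;
  online: boolean;
  send(msg: EncryptedMessage): Promise<void>;
}

interface EncryptedMessage {
  to: string;          // recipient ID
  ciphertext: Uint8Array;
  expiresAt: number;   // caches drop the message after this time
}

const REPLICAS = 8;    // more copies: better availability, more bandwidth

async function deliver(
  msg: EncryptedMessage,
  lookup: (id: string) => Promise<Peer[]>, // DHT-style "closest peers" query
): Promise<void> {
  const peers = await lookup(msg.to);
  const recipient = peers.find((p) => p.id === msg.to && p.online);
  if (recipient) {
    await recipient.send(msg); // recipient is online: deliver directly
    return;
  }
  // Otherwise fan out to nearby online peers acting as temporary caches.
  // Every cached copy costs a stranger's bandwidth, storage, and battery,
  // which is exactly the overhead a federated always-on server avoids.
  const caches = peers.filter((p) => p.online).slice(0, REPLICAS);
  await Promise.all(caches.map((p) => p.send(msg)));
}
```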
"Physical impossibilities of p2p models" - Although there might be structural limitations, I think they're not too strong.
Just because we currently do not have a major p2p network doesn't mean that it's not possible.
I think it's very possible to have something like this (even for availability). You just need to have a good design/mechanism.
But here's the problem (and why we haven't seen something like this yet): no one puts serious resources into the design of p2p systems. The competing, centralized solutions get tons of resources from big companies trying to make money off them. No company tries to build something p2p, because by giving away control they give away the possibility of making money from it.
Peer-to-peer connections in web browsers are pretty good (assuming you have relays to get around NAT issues with shared IP addresses). And JavaScript is generally fast enough for encryption (although I'm not sure what the random number generator situation is).
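For what it's worth, the relay part is just configuration on the peer connection, and the RNG situation is actually fine: browsers expose a cryptographically strong generator through WebCrypto. A quick sketch (the server URLs and credentials below are placeholders):

```typescript
// Browser peer connection with a TURN relay as fallback for when a
// direct NAT-traversed path can't be established (placeholder servers).
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.org:3478" }, // NAT discovery
    {
      urls: "turn:turn.example.org:3478",   // relay fallback
      username: "user",
      credential: "secret",
    },
  ],
});

// WebCrypto provides a CSPRNG...
const nonce = crypto.getRandomValues(new Uint8Array(32)); // 256 random bits

// ...and does the heavy lifting for encryption itself.
const aesKey = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable
  ["encrypt", "decrypt"],
);
```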
But we lack the ability to easily guarantee file contents, which makes delivering encryption software more suspect. Additionally, data storage is still very unreliable. It is difficult to share information seamlessly between multiple browsers without a server, storage limits vary between browsers, data can get deleted for weird reasons. I've advocated for a while that users probably should be able to grant pages separate read/write access to specific files and folders on disk, but that's obviously a tricky decision to make and implement.
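As a small illustration of how little control a page actually has over its own data: the most it can do today is ask the browser not to evict its storage, and the browser is free to say no. These two calls are standard, but the quotas and eviction behavior behind them vary by browser:

```typescript
// A page can request persistence and inspect its quota, nothing more.
// Even "persistent" data has limits that vary by browser and free disk.
async function checkStorage(): Promise<void> {
  const persisted = await navigator.storage.persist(); // may return false
  const { usage, quota } = await navigator.storage.estimate();
  console.log(
    `persistent: ${persisted}, using ${usage ?? 0} of ~${quota ?? 0} bytes`,
  );
}
```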
The Same Origin Policy obviously comes with security benefits. But it also means that if you share a third-party link, there's no way to look up metadata about the link without a proxy server to bypass the policy. Building something like an RSS reader in purely client-side JavaScript is impossible because you literally won't be able to request many of the RSS feeds.
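Here's roughly what that failure looks like in practice, and why the workaround reintroduces a server (the feed and proxy URLs are placeholders):

```typescript
// Fetching a third-party RSS feed from client-side JS usually fails:
// unless the feed's server sends Access-Control-Allow-Origin headers,
// the browser blocks the response before your code ever sees it.
const FEED = "https://example.org/feed.xml"; // placeholder feed URL

try {
  const res = await fetch(FEED); // rejects with TypeError for most feeds
  console.log(await res.text());
} catch {
  // The only general workaround is a proxy you run yourself that
  // re-serves the feed with permissive CORS headers, i.e. a server,
  // which is exactly what a purely client-side app was trying to avoid.
  const res = await fetch(
    `https://my-cors-proxy.example.org/?url=${encodeURIComponent(FEED)}`,
  );
  console.log(await res.text());
}
```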
It can be a little surprising, when you dig into everything that's theoretically possible with client-side JavaScript, to discover exactly which areas the web is behind native in. They're usually not the parts that get the most attention.