The major issue is that the Web is vulnerable to single points of failure (SPOFs), unlike the Internet. Personally I wouldn't use the phrase "unravel from its own internal contradictions", but I think most people have been bitten by a SPOF on the Web, even though the Internet itself routes around failures. IPFS is about making the Web as reliable as the Internet itself.
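To make that concrete, here's a minimal sketch of why content addressing removes the SPOF: an IPFS CID names the data itself rather than a server, so any gateway or peer holding the content can serve it. The gateway URLs are real public gateways, but the CID is a placeholder and the retrieval logic is an illustrative assumption, not how a full IPFS node resolves content.

    // Content addressing in a nutshell: the CID names the bytes, not a host,
    // so the same content can be fetched from any gateway or peer that has it.
    const cid = "bafy...";  // hypothetical placeholder content identifier

    const gateways = [
      "https://ipfs.io/ipfs/",
      "https://dweb.link/ipfs/",
    ];

    async function fetchByCid(cid: string): Promise<string> {
      for (const gw of gateways) {
        try {
          const res = await fetch(gw + cid);
          // Any source works; the hash identifies (and lets you verify) the data.
          if (res.ok) return await res.text();
        } catch {
          // This gateway is unreachable; try the next one.
        }
      }
      throw new Error(`no gateway could serve ${cid}`);
    }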
I agree there's a technical issue there, but I don't see it as much of a practical one. People wait a bit or they hit reload. And most of the practical issues I see are bad nearby network connections (overloaded first hop, bad connection to a cell tower or wifi, an ISP issue), none of which would be helped by this approach.
For those working at scale, single points of failure get solved through CDNs, multiple servers with failover, and serving from multiple data centers. Google's a good example: despite being an enormously complex service, I've seen effectively no downtime from them.
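For illustration, here's a hedged sketch of the failover pattern those setups rely on, expressed as a client-side retry across mirrors. The mirror URLs and timeout are assumptions, and real CDNs do this at the DNS/anycast/load-balancer layer rather than in application code.

    // Illustrative failover: try each mirror in turn with a short timeout,
    // the same redundancy a CDN or multi-datacenter deployment automates.
    // The mirror URLs are hypothetical.
    const mirrors = [
      "https://us-east.example.com/api/status",
      "https://eu-west.example.com/api/status",
    ];

    async function fetchWithFailover(): Promise<Response> {
      let lastError: unknown;
      for (const url of mirrors) {
        try {
          const res = await fetch(url, { signal: AbortSignal.timeout(2000) });
          if (res.ok) return res; // first healthy mirror wins
        } catch (err) {
          lastError = err; // unreachable or timed out; fall through to the next mirror
        }
      }
      throw lastError;
    }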
For me, IPFS seems like a solution in search of a problem. It's definitely a cool solution, but cool isn't enough for broad adoption.