In practice, recovery speed is typically bottlenecked by the incoming bandwidth of the recovering server, which the combined outgoing read bandwidth of the other servers easily exceeds, so this limitation rarely matters.
The actual insight is that you want failure-domain anti-affinity. That is, if you have 1000 servers on 50 network switches, your replica selection algorithm should pick not three machines at random, but three different switches at random. If you have three AZs, put one replica of each copy in each AZ. Copysets can provide this, but as the article notes, they're much more likely to give you Achilles heels: a typical failure won't hurt and won't cause any unavailability, but the wrong one takes you down hard, with N% data loss rather than thousandths of a percent.
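The switch-level version of this is easy to sketch. Here's a minimal, hypothetical replica picker (the names `servers_by_switch` and `pick_replicas` are my own, not from any particular system): group servers by failure domain, choose N distinct domains at random, then one server within each. No single switch failure can then take out more than one copy.

```python
import random

def pick_replicas(servers_by_switch, n=3):
    """Choose one replica per failure domain (switch), so that losing
    any single switch costs at most one copy of the data.

    servers_by_switch: dict mapping switch id -> list of server ids.
    Sketch only; a real system would also weight by free capacity, etc.
    """
    if len(servers_by_switch) < n:
        raise ValueError("need at least n distinct failure domains")
    # Pick n distinct switches, then one server on each.
    switches = random.sample(list(servers_by_switch), n)
    return [random.choice(servers_by_switch[s]) for s in switches]

# Example: 50 switches, 20 servers each.
cluster = {f"sw{i}": [f"sw{i}-srv{j}" for j in range(20)] for i in range(50)}
replicas = pick_replicas(cluster)  # three servers on three different switches
```

The same shape works one level up for AZs: make the outer dict keys AZs instead of switches.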
In short: failures happen. Recovering from them is what matters, not convincing yourself that they can't happen.
Chainsets were an attempt to add the properties of copysets to a system based upon chain replication.
Working with the original Copysets authors, we refined the chainsets algorithm into a tiered replication algorithm that can enforce independence assumptions on the replica set (what you've termed anti-affinity). Here's the paper on the subject: https://www.usenix.org/conference/atc15/technical-session/pr...