The only one of the counterarguments that interests me is that it defeats caching. I mean, if 100 users in a large network want to access the same video or other large resource from the Internet, it seems pretty ridiculous that the connection must use 100 times as much bandwidth as it would if they could just install a simple caching proxy, especially if it's just some cat video or online game, which is probably the common case. True, not all large resources are as innocent, and there is no real way around encrypting and not caching everything if you don't want devices on the network to tell the difference... but the result is just so pathological. The price of freedom?
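Back-of-envelope, the savings scale linearly with the number of users (the 1 GB video size here is a made-up number for illustration):

```python
users = 100
video_gb = 1.0  # assumed size of the shared resource, in GB

# Without a cache, every client fetches the full resource from the Internet.
without_cache_gb = users * video_gb

# With a caching proxy, one upstream fetch serves all 100 clients.
with_cache_gb = video_gb

print(without_cache_gb, "GB vs", with_cache_gb, "GB upstream")
```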
[For the record, YouTube seems to use HTTPS by default for video content, so this is already the case for some large percentage of the types of large resources typically accessed from shared networks.]
Caching already happens through CDNs at the ISP level, for example via Google Global Cache (YouTube) and Netflix Open Connect. That covers roughly half of all network traffic.
Plus, running a Squid proxy in front of 100 users isn't nearly as effective as it once was; pages contain far more dynamically generated content than they used to. Think of a Facebook news feed or a Twitter timeline.
Let's say one uses HTTP/2 across microservices in a datacenter (or "cloud"), possibly with IPv6 (or IPv4), over secure links (VPN or physically secure). Would you really want to complicate the stack by having to choose between supporting both 1.1 and 2 or doing double encryption?
I get that browsers demand TLS, since there's no sane UI/UX to show the user that the link is already secure because of a VPN or the like. Not so for other clients.
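For what it's worth, HTTP/2 does define a cleartext mode ("h2c"), so the no-TLS option exists at the protocol level even if browsers won't use it. With curl it looks like this (the internal hostname is hypothetical):

```shell
# Speak HTTP/2 in cleartext from the first byte ("prior knowledge" h2c),
# relying on the VPN / physically secure link for confidentiality.
curl --http2-prior-knowledge http://internal-service.local/api/health
```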
The answer is probably to have a way to sign and/or encrypt headers separately so that clients can request authentication and/or encryption on a per-resource basis. Perhaps a public and a private header section.
Checksumming and cryptographic signing of responses as an alternative to full-blown encryption might be useful as well (since response signatures could still be cached).
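A minimal sketch of that idea: the origin signs the response body, any intermediary can cache the (body, signature) pair verbatim, and clients verify integrity without the transfer being encrypted. For simplicity this uses a stdlib HMAC with a shared key; a real scheme would use an asymmetric signature (cf. HTTP Message Signatures) so caches and clients never hold the signing key. The header name and key are assumptions for illustration.

```python
import hashlib
import hmac

ORIGIN_KEY = b"origin-secret"  # assumption: provisioned out of band

def sign_response(body: bytes) -> dict:
    """Origin attaches a signature over the body as a response header."""
    sig = hmac.new(ORIGIN_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "headers": {"X-Content-Signature": sig}}

def verify_response(resp: dict) -> bool:
    """Client recomputes the signature and compares in constant time."""
    expected = hmac.new(ORIGIN_KEY, resp["body"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, resp["headers"]["X-Content-Signature"])

# A cache can store and replay `resp` unchanged: the signature still validates.
resp = sign_response(b"cat video bytes")
assert verify_response(resp)

# Tampering by an intermediary is detected, even without encryption.
tampered = {"body": b"evil bytes", "headers": resp["headers"]}
assert not verify_response(tampered)
```

The point is that integrity and confidentiality are separable: only the former is needed for the cacheable cat-video case.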
As the article mentions, for better or worse, TLS-piercing proxies aren't exactly unusual anymore. An ISP may not be able to just jam one in front of its customers, but a corporate entity that owns its employees' machines can push a root-cert update, so the use case of a company wanting such a cache is unaffected.