> Penalty #2: TCP window size will drop dramatically, and all streams will be simultaneously throttled down.
This implies that, for both HTTP/1 and HTTP/2 connections, per-stream throughput is bounded by the throttled throughput of the connection as a whole. And since an HTTP/2 connection carries multiple streams, and therefore more packets, it is more likely to be hit by a dropped packet.
That scenario doesn't quite seem realistic, because it assumes that dropped packets are independent (3% in the test). The injected error rate was apparently also not adjusted for any of the factors that affect errors in the real world, such as packet size and (attempted) transfer speed.
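Under that independence assumption, the disadvantage of one big connection falls out of basic probability. A minimal sketch (the per-stream packet count and the six-connection comparison are my own illustrative assumptions, not numbers from the test):

```python
# Under independent per-packet drops (3% in the test), the chance that a
# connection sees at least one drop grows with the packets it carries.
# An HTTP/2 connection multiplexing six streams' worth of packets is hit
# far more often than any single one of six HTTP/1 connections.
p_drop = 0.03            # per-packet loss rate used in the test
packets_per_stream = 20  # assumed packet count for one transfer (illustrative)

def p_hit(n_packets: int, p: float = p_drop) -> float:
    """Probability that at least one of n_packets is dropped."""
    return 1 - (1 - p) ** n_packets

one_h1_conn = p_hit(packets_per_stream)   # one of six HTTP/1 connections
h2_conn = p_hit(6 * packets_per_stream)   # one HTTP/2 connection, six streams

print(f"HTTP/1 connection hit by a drop: {one_h1_conn:.1%}")
print(f"HTTP/2 connection hit by a drop: {h2_conn:.1%}")
```

With these numbers the single HTTP/2 connection is almost certain to take a loss (and thus a window cut affecting all six streams), while any given HTTP/1 connection has roughly even odds of escaping unscathed.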
But reducing window sizes and throttling are direct countermeasures intended to extract maximum throughput even from flaky connections. Since connection problems are bound to be highly correlated per client, it would seem that HTTP/2 may actually perform faster than HTTP/1, because the "information" about connection trouble, and the countermeasures, are applied to a larger number of streams at once.
A client-specific, randomised error rate would be the second step. Without it the test is still meaningful, but it measures the system's response to internal failures, not a typical production environment, where most clients see zero dropped packets and a small number encounter most of the errors. I'm not sure how dropped packets are actually distributed, but I would guess that 99.9%+ of clients see none, while those that see any might experience an error rate far higher than 3%.
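To make the clustering intuition concrete, here is a toy Monte Carlo sketch. All numbers (population size, share of flaky clients, their loss rate) are assumptions for illustration, not measurements:

```python
import random

random.seed(0)  # deterministic toy run

# Toy client population where packet loss is clustered, not uniform:
# almost all clients drop nothing, a tiny fraction sit on flaky links
# and drop packets at a high rate.
N_CLIENTS = 20_000
PACKETS = 100                # packets observed per client
BAD_CLIENT_SHARE = 0.001     # assumed fraction of clients on flaky links
BAD_CLIENT_RATE = 0.10       # assumed per-packet loss for those clients

losses = []
for _ in range(N_CLIENTS):
    rate = BAD_CLIENT_RATE if random.random() < BAD_CLIENT_SHARE else 0.0
    losses.append(sum(random.random() < rate for _ in range(PACKETS)))

total_rate = sum(losses) / (N_CLIENTS * PACKETS)
share_with_loss = sum(1 for l in losses if l > 0) / N_CLIENTS

print(f"overall loss rate:        {total_rate:.4%}")   # tiny on average
print(f"clients seeing any loss:  {share_with_loss:.2%}")
```

The fleet-wide average comes out minuscule, yet the few affected clients each see a 10% per-packet loss rate, far above both the average and the test's uniform 3%. A uniform injected rate measures neither population well.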
The one benefit I can see with QUIC is that, because it is a user-level protocol, it is more likely to be updated. Many embedded or IoT devices never get kernel updates, although they sometimes get user-level software updates.
There is even an issue requesting that HTTP/2 features be implemented in HTTP/1 for better performance.
The point of the blog post is that the design of HTTP/2 (specifically, multiplexing multiple HTTP transfers over a single TCP connection) behaves badly under packet-loss conditions.