
> As the packet loss rate increases, HTTP/2 performs less and less good. At 2% packet loss (which is a terrible network quality, mind you), tests have proven that HTTP/1 users are usually better off - because they typically have six TCP connections up to distribute the lost packet over so for each lost packet the other connections without loss can still continue.

> Fixing this issue is not easy, if at all possible, to do with TCP.

Are there any resources to better understand _why_ this can't be resolved? If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?

I'm a bit wary of this use of UDP when we've essentially re-implemented some of TCP on top, though I understand it's common in game networking.




>Are there any resources to better understand _why_ this can't be resolved?

The issue is TCP's design assumption of a single ordered stream. You never receive data out of order, but that also means you can't get data out of order even when you want it. When you have multiple conceptual streams within a single TCP connection, you actually just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that. If you can ignore this issue, http/2 is really nice because you're saving a lot of the overhead of spinning up and tearing down connections.
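To make that concrete, here's a toy model (plain Python, not real networking code; the packet numbers and stream labels are made up) of how one lost packet stalls every multiplexed stream under connection-wide ordering, but only its own stream under per-stream ordering:

    # Toy model: three conceptual streams multiplexed over one
    # strictly ordered byte stream (TCP-style). Illustrative only.
    packets = [  # (sequence number, stream, payload)
        (1, "A", "a1"), (2, "B", "b1"), (3, "C", "c1"),
        (4, "A", "a2"), (5, "B", "b2"), (6, "C", "c2"),
    ]
    lost = {2}  # packet 2 (stream B) is lost in transit

    # TCP delivers strictly in sequence order: everything after the
    # lost packet waits, even data for the unrelated streams A and C.
    delivered = []
    for seq, stream, data in packets:
        if seq in lost:
            break  # must wait for retransmission; all later packets stall
        delivered.append((stream, data))
    print("connection-wide ordering:", delivered)  # only a1 arrives

    # Per-stream ordering (QUIC-style): the loss stalls stream B alone.
    blocked, per_stream = set(), {}
    for seq, stream, data in packets:
        if seq in lost:
            blocked.add(stream)  # only this stream waits for the retransmit
        elif stream not in blocked:
            per_stream.setdefault(stream, []).append(data)
    print("per-stream ordering:", per_stream)  # A and C continue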

>If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?

Because it performs worse under good conditions. TCP has no support for handing off what is effectively part of the connection into a new TCP connection.

And QUIC essentially _is_ your suggestion.


Good points, but the fragment

> just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that.

seems to imply that routers inspect TCP streams and maintain order. I'm not aware of any routers that actually do anything like this, and things need to keep working just fine if different packets in the stream take different paths. Certainly in theory, IP routers don't have to inspect packets any deeper than the IP headers if they're not doing NAT / filtering / shaping. The protocols are designed to minimize the amount of state kept in the routers.

As far as I'm aware, only the kernel (or userspace) TCP stack makes much effort at all to maintain packet order (other than routers generally using FIFOs).


Hard to do deep packet inspection otherwise. Or DDoS protection, to some degree. Etc. On a SoHo router, though, I agree with you.


What uses of deep packet inspection on the router itself don't fall under filtering / shaping?


The other problem with TCP is the assumption that packet loss is caused by congestion. That's why a 2% loss causes more than a 2% drop in bandwidth (a back-of-envelope sketch follows below). Unfortunately, congestion is no longer a problem on the modern internet. [1]

1. Sic.
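The "more than a 2% drop" part falls out of the standard loss-based model: the well-known Mathis et al. approximation bounds steady-state TCP throughput at roughly (MSS/RTT) * C/sqrt(p) with C ~ 1.22, so throughput degrades with the square root of the loss rate, not linearly. A sketch, where the MSS and RTT values are assumptions chosen just for illustration:

    from math import sqrt

    # Mathis et al. approximation for loss-limited TCP throughput:
    #   throughput <= (MSS / RTT) * C / sqrt(p), with C ~= sqrt(3/2)
    MSS = 1460      # bytes per segment (assumed)
    RTT = 0.05      # 50 ms round trip (assumed)
    C = sqrt(3 / 2)

    def mathis_bps(p):
        return (MSS * 8 / RTT) * C / sqrt(p)

    for p in (0.0001, 0.001, 0.02):
        print(f"loss {p:.2%}: ~{mathis_bps(p) / 1e6:.2f} Mbit/s ceiling")
    # 0.01% loss -> ~28.6 Mbit/s; 2% loss -> ~2.0 Mbit/s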


Fixing TCP requires a lot of coordination. First you need microsecond timestamps. Then you need an RFC to reduce RTOmin below 200 ms. Then you need ATO discovery and negotiation. That's a lot of moving parts, and you end up with a protocol that's still worse than QUIC. Also note that Linux maintainers have refused to accept patches for all of these things; QUIC is to some extent a social workaround for their intransigence.
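To see why the 200 ms floor matters, some illustrative arithmetic (200 ms is the Linux default RTO floor; the RTT values below are assumptions):

    # Cost of an RTO-triggered retransmit under a 200 ms RTO floor.
    RTO_MIN = 0.200  # seconds, Linux default floor
    for rtt in (0.0001, 0.001, 0.050):  # 100 us, 1 ms, 50 ms paths
        print(f"RTT {rtt * 1e3:g} ms: a timeout stalls the connection "
              f"for ~{RTO_MIN / rtt:.0f} round trips")
    # The faster the path, the more round trips a single timeout wastes.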


There are TCP proxies designed to make it work better over lossy links. These are frequently found in satcom modems.

An example of this is SCPS-TP.

https://en.wikipedia.org/wiki/Space_Communications_Protocol_...


> why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense

Because using 6 TCP connections per site is a hack to get larger initial congestion windows, i.e. faster page loading, which ends up spending more bandwidth on retransmissions instead of on goodput. Instead we could have more intelligent congestion control algorithms in one TCP connection to properly fill the available bandwidth. See https://web.archive.org/web/20131113155029/https://insoucian... for a more detailed account (esp. the figure of "Etsy’s sharding causes so much congestion").
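Rough numbers behind the "larger initial congestion windows" point, assuming the RFC 6928 default initcwnd of 10 segments and ~1460-byte segments (both values are assumptions for illustration):

    # Effective first-RTT burst when a site shards across connections.
    INITCWND = 10   # segments per connection (RFC 6928 default)
    MSS = 1460      # bytes per segment (assumed)

    for conns in (1, 6):
        burst = conns * INITCWND * MSS
        print(f"{conns} connection(s): ~{burst / 1024:.0f} KiB "
              f"can be in flight in the first RTT")
    # 1 connection: ~14 KiB; 6 connections: ~86 KiB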




