> yet having to shoulder additional complexity for little or no benefit
They don't, though? HTTP/2 and HTTP/3 are voluntary for all parties concerned: whether you're a client, a server, or a gateway, if you don't speak those protocols and choose to just speak HTTP/1.1 instead, it's the peer that has to cope with that, not you.
(There isn't even any fancy forward-compatibility needed in TLS to support the semantics of the ALPN extension. If you use an old TLS library that just ignores the unknown extension in the handshake, the other side carries on assuming you didn't understand the question, and therefore aren't an HTTP/{2,3} server.)
HTTP/{2,3} are like a local language of a culture, spoken by immigrants when they run into other immigrants from the same culture. If either party is not an immigrant from that culture, it just doesn't come up.
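If it helps, here's a minimal sketch of how opt-in that negotiation is, using Go's crypto/tls (the host is just a placeholder): the client offers "h2" in the ALPN extension, and a peer that ignores or declines the extension simply leaves NegotiatedProtocol empty, so the client carries on with HTTP/1.1.

```go
// Toy client (placeholder host) showing how opt-in the upgrade is: we offer
// "h2" in the TLS handshake's ALPN extension; a peer that ignores or declines
// the extension leaves NegotiatedProtocol empty, and we just speak HTTP/1.1.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		// The protocols we're willing to speak, in order of preference.
		NextProtos: []string{"h2", "http/1.1"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	switch conn.ConnectionState().NegotiatedProtocol {
	case "h2":
		fmt.Println("peer answered the ALPN question: speak HTTP/2")
	default: // "http/1.1" or "" (peer never looked at the extension)
		fmt.Println("no h2 answer: carry on with HTTP/1.1")
	}
}
```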
> Why can't the "big sites" not then create their own network and take their enormous traffic there?
That's called circuit switching (i.e. the thing telco and cable services do that's not the Internet), and it's the thing the packet-switched Internet effectively obsoleted. From an Internet engineer's perspective, if you have two data streams, it's strictly better engineering to feed them both into a switch that interleaves their packets onto one high-bandwidth line "as they come" (and upgrade the bandwidth of the switch+line as needed, so that no stream is ever starved of line-time) than to try to time-divide or frequency-divide the pipe, let alone to keep those packets isolated on two separate networks of pipes. Then you'd need to maintain two networks of pipes! (And the people working at FAANG are still fundamentally Internet engineers who believe in Internet principles, rather than telecom principles.)
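To put toy numbers on the statistical-multiplexing argument (everything here is invented for illustration), here's a quick Go sketch: 100 bursty streams that each peak at 10 Mbps but transmit only ~20% of the time would need 1,000 Mbps of reserved circuits, while a single shared line sized for the 99.9th-percentile combined load needs roughly a third of that.

```go
// Toy simulation (numbers are made up) of why sharing one packet-switched line
// beats dedicated circuits: circuits must be sized for every stream's peak,
// while a shared line only has to track the combined load, which is far
// smaller almost all of the time.
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

func main() {
	const (
		streams = 100
		peak    = 10.0 // Mbps while a stream is bursting
		duty    = 0.2  // fraction of time each stream is bursting
		samples = 100_000
	)

	// Circuit switching: every stream reserves its own peak-rate pipe.
	circuits := streams * peak

	// Packet switching: sample the instantaneous combined load.
	loads := make([]float64, samples)
	for i := range loads {
		for s := 0; s < streams; s++ {
			if rand.Float64() < duty {
				loads[i] += peak
			}
		}
	}
	sort.Float64s(loads)
	p999 := loads[int(0.999*float64(samples))] // 99.9th-percentile load

	fmt.Printf("dedicated circuits: %.0f Mbps reserved\n", circuits)
	fmt.Printf("shared line covering 99.9%% of moments: %.0f Mbps\n", p999)
}
```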
But besides that, how would that network be delivered into people's homes? Unless you're proposing that these services take the form of their own additional cable going into your house/SIM in your phone, this network has to merge into the regular Internet somewhere. And it's exactly at that point when that traffic once again contends with the rest of the traffic on the Internet. Even if it's only on the last mile, it's still getting in the way.
> it's just Google who's behind QUIC anyways, isn't it?
SPDY and QUIC are the names of "prototype standards" developed by Google. HTTP/2 and HTTP/3 are standards inspired by them, developed in the IETF (the HTTP and QUIC working groups), with Google as just one participant in that conversation.
The other backers of these standards are, of course, the groups whose interests are aligned behind having more-efficient HTTP: carriers, bigcorps, switch/NAT/WAF hardware manufacturers, cellular ISPs, etc.
But I see your deeper point—you're saying that this is all Google's solution to Google's problem, so shouldn't the onus be on Google to solve every downstream problem as well?
Well, it is and it isn't. Google is solving this problem for us right now, but it's not a Google-exclusive problem. TCP was created by DARPA, but maintaining a consistent stream over packet loss/reordering is not a DARPA-specific problem. They just happened to be the first group to need a solution for that problem.
The reason HTTP/2 and HTTP/3 are public standards, rather than things going on secretly only between Google Chrome and Google's backend servers, is that other parties see value in them—not just present value to themselves, but also future value.
New big uses of Internet bandwidth arise every day. Netflix started sucking up half the Internet ten years ago, and it's already dropped down to less than 15% because other even larger use-cases have eclipsed it.
HTTP/2 and HTTP/3 are engineered to give small businesses a path to grow into the next big bandwidth-sucking businesses (i.e. things so many people find useful that the whole Internet becomes about using them) without having to solve a thousand little scaling problems along the way.
Would you rather we live in a world where TCP/IP was a proprietary thing DARPA did, and any company that needed TCP/IP semantics had to re-invent it (probably poorly)? No? Then why would you rather live in a world where any company needing HTTP/{2,3} semantics in the future has to re-invent them (probably poorly)?