
Multiplexing on top of TCP (2011) - riobard
http://250bpm.com/multiplexing
======
zimbatm
That's one of the issues HTTP/2 has. Because all the requests and responses
are now multiplexed over a single TCP connection, losing a single packet is
going to block everything. This makes HTTP/2 actually worse on unreliable
networks. This problem is also known as Head-Of-Line (HOL) blocking.
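
To make that concrete, here's a minimal Go sketch of stream multiplexing over
one ordered byte stream (an invented framing, not the actual HTTP/2 wire
format). Because every frame is read sequentially from the same connection, a
frame whose bytes are stuck behind a lost TCP segment stalls the frames of
every other stream queued behind it:

    // Invented framing: 4-byte stream ID, 4-byte length, then payload,
    // all carried on one shared, strictly ordered byte stream.
    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // writeFrame appends one frame for the given stream to the shared stream.
    func writeFrame(w io.Writer, id uint32, payload []byte) {
        var hdr [8]byte
        binary.BigEndian.PutUint32(hdr[0:4], id)
        binary.BigEndian.PutUint32(hdr[4:8], uint32(len(payload)))
        w.Write(hdr[:])
        w.Write(payload)
    }

    // readFrame blocks until the next frame's bytes arrive. If those bytes
    // sit behind a lost TCP segment awaiting retransmission, frames for
    // every other stream behind them wait too: HOL blocking.
    func readFrame(r io.Reader) (id uint32, payload []byte, err error) {
        var hdr [8]byte
        if _, err = io.ReadFull(r, hdr[:]); err != nil {
            return
        }
        id = binary.BigEndian.Uint32(hdr[0:4])
        payload = make([]byte, binary.BigEndian.Uint32(hdr[4:8]))
        _, err = io.ReadFull(r, payload)
        return
    }

    func main() {
        var conn bytes.Buffer // stand-in for the single TCP connection
        writeFrame(&conn, 1, []byte("stream 1 data"))
        writeFrame(&conn, 2, []byte("stream 2 data"))
        for {
            id, p, err := readFrame(&conn)
            if err != nil {
                break
            }
            fmt.Printf("stream %d: %q\n", id, p) // delivery order == arrival order
        }
    }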

Google is aware of that issue; that's why they explored SCTP first and now
QUIC as alternate transport protocols to replace TCP in the future.

~~~
riobard
SCTP is problematic because middleboxes need to be made aware of it, since it
uses a different IP protocol number. That would require massive infrastructure
changes and probably won't fly (much like IPv6).

QUIC, which runs over UDP, is easier to adopt. However, in reality many ISPs
and company networks de-prioritize UDP traffic so the quality of connection is
unpredictable.

TCP is still the safest bet. We need to figure out a better (and magical!) way
somehow...

~~~
DanielDent
[quickly becoming standard comment for me]: IPv6 is in widespread use. IPv6 is
not "the future". It's the present. I have services in production where >30%
of users access the service over v6. Use is growing. IPv6-only mobile
operators are also now a thing.

The idea that middleboxes need to be aware of protocols is incredibly
destructive to the end-to-end principle - the critical property of IP networks
that enables a rapid pace of permissionless innovation. Middleboxes are evil
incarnate.

Fortunately, academic research as well as operational experience has shown
that TCP and UDP provide the primitives required to build any protocol. QUIC,
which you mentioned, is an example of this. Google has made the explicit - and
probably wise - decision to keep protocol state encrypted. While there would
be some nice optimizations possible if routers could be aware of things like
'this session is closed now', the risk of middleboxes making it so that the
protocol can't continue to evolve is too great. The solution is to
specifically prevent middleboxes from having the knowledge needed to meddle in
an 'intelligent' way.

The last statistics I saw from Google were that 95% of networks don't cause
problems for QUIC (i.e. QUIC connections are as fast or faster than HTTP/2).
The main issue I'm aware of is not actually de-prioritization but rather rate
limiting (it's often implemented as a quick - and very, very dirty - DDoS
mitigation technique). Google's stated solution is to fall back to HTTP/2
based on measurements of user experience on a per-AS basis.
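
The general shape of such a fallback (a sketch of the idea only, not Google's
actual implementation; the one-byte probe/echo stands in for a real QUIC
handshake, which a library would perform) looks something like this in Go:

    package main

    import (
        "net"
        "time"
    )

    // dialWithFallback probes the UDP path with a short deadline; if no
    // reply comes back (UDP blocked or rate-limited), it falls back to TCP.
    // The one-byte probe is a placeholder for a real handshake exchange.
    func dialWithFallback(addr string) (net.Conn, error) {
        u, err := net.Dial("udp", addr)
        if err == nil {
            u.SetDeadline(time.Now().Add(300 * time.Millisecond))
            u.Write([]byte{0}) // hypothetical handshake probe
            buf := make([]byte, 1)
            if _, err := u.Read(buf); err == nil {
                u.SetDeadline(time.Time{}) // UDP path works; clear the deadline
                return u, nil
            }
            u.Close()
        }
        // No reply in time: fall back to TCP (HTTP/2 in the browser case),
        // ideally remembering the outcome per network/AS as described above.
        return net.DialTimeout("tcp", addr, 5*time.Second)
    }

    func main() {} // sketch only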

Networks which try to be "intelligent" rather than just provide adequate
bandwidth will tend towards providing a crappy user experience. Solutions
which optimize for well-behaved, adequately provisioned networks and have an
acceptable-though-degraded fallback seem like a reasonable way to promote
progress.

SCTP has an RFC-defined UDP encapsulation (RFC 6951) which will deal with many
evil networks.
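
The encapsulation itself is about as simple as it sounds: per RFC 6951, the
whole SCTP packet just rides as the payload of a UDP datagram (default port
9899). A hedged Go sketch, where sctpPacket and the peer host are
placeholders (no actual SCTP stack is implemented here):

    package main

    import "net"

    func main() {
        // Placeholder: a real SCTP stack would produce these bytes
        // (common header + chunks); none is implemented here.
        sctpPacket := []byte{}
        conn, err := net.Dial("udp", "peer.example.net:9899") // placeholder peer
        if err != nil {
            return
        }
        defer conn.Close()
        conn.Write(sctpPacket) // the whole SCTP packet rides as the UDP payload
    }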

~~~
nucleardog
> IPv6 is in widespread use. IPv6 is not "the future". It's the present. I
> have services in production where >30% of users access the service over v6.
> Use is growing. IPv6-only mobile operators are also now a thing.

That's... great, and I'm glad there's progress being made _somewhere_, but I
don't think it's fair to say it's "the present". It's on its way.

My ISP has rolled out FttH across a wide area but still only offers IPv6 on
business buildouts. Even if they offered IPv6 to general customers, they still
have a ton of home gateway devices out there that don't support IPv6 at all.
The other ISPs don't offer it and have no public roadmap either. I don't think
this situation is all that unusual or unique - it's just not something that
the people paying the bills are clamouring for, and it's a long road from
where we are to significant penetration.

Oh, and EC2 still doesn't support IPv6 on anything but their load balancers.
What portion of the market and traffic do you think they make up?

------
frederikvs
I'm glad to see they came to the sensible conclusion: "Multiplexing on top of
TCP in overall fails to deliver the advantages it is assumed to provide."

~~~
josteink
> I'm glad to see they came to the sensible conclusion: "Multiplexing on top
> of TCP in overall fails to deliver the advantages it is assumed to provide."

So basically the only thing the HTTP/2 crowd could come up with as a _real_
advantage over HTTP/1.1 (discounting, of course, that HTTP/1.1 supports
pipelining), which is a massive source of complexity and will also be a
massive source of bugs, does _not_ provide the benefits it claimed it would.

Can we just call off HTTP/2 yet? It was pushed by Google to serve their needs
and agenda, without any respect for the protocol's history. It had a bunch of
riders introduced with misleading language with regard to privacy-violating
semantic changes. (For instance, HTTP/2 cannot do regular HTTP, only HTTPS.
You can no longer deploy private apps on your LAN without registering with
centralized internet registries such as DNS and CAs. Etc.)

Basically HTTP/2 was Google's attempt at making it easier for them to keep
their huge amount of tracking cookies on every single HTTP(S) request across
the internet, without it causing too much of an impact on users.

You know what? If I have 200 KB of Google cookies tracking me, I _want_ those
cookies to impact performance. I _want_ to know something is up.

Can't we just say "HTTP/2 considered harmful" at this point?

I'll be disabling it in all my browsers which support proper user-managed
configuration. Needless to say, that excludes Chrome.

~~~
roblabla
What on earth is this comment even about? First of all, HTTP/2 has a lot of
advantages over HTTP/1.1, like server push. Saying it doesn't provide the
benefits it claimed is wrong. It uses a single TCP connection, which for
websites that require a lot of them (hint: lots of sites nowadays do; you've
got the webapp trend to thank for that) actually IMPROVES things. TCP overhead
is not negligible. And then there's the header compression algorithm included
in HTTP/2 that further helps get sizes down.
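
The single-connection point is easy to see from Go, whose net/http client
negotiates HTTP/2 over TLS automatically: the concurrent requests below end
up as streams on one TCP connection rather than one connection each, assuming
the server speaks h2 (the URL is a placeholder):

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                resp, err := http.Get("https://example.com/") // placeholder URL
                if err != nil {
                    fmt.Println(n, err)
                    return
                }
                defer resp.Body.Close()
                // resp.Proto reports "HTTP/2.0" when the requests were
                // multiplexed as streams over a single TLS+TCP connection.
                fmt.Println(n, resp.Proto, resp.Status)
            }(i)
        }
        wg.Wait()
    }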

HTTPS-only is NOT a protocol limitation. It was pushed by BOTH Google AND
Mozilla, in the name of security, like all the new features coming to the web
(need WebRTC? HTTPS. Want the camera? HTTPS. Want HTTP/2? HTTPS). I'm not
actually too sure I like that trend either; it feels like shoving candies down
my throat. I like candies. But not like that.

However, saying this is a Big Google Conspiracy is ridiculous. I mean, what
the hell do cookies have to do with anything?

PS: On a separate note, could we get 10.0.0.X, 192.168.X.X, 127.0.0.1 and the
file:// protocol counted as "Secure", please? It's a pain to have to create
self-signed certs just to develop stuff.

~~~
colanderman
10.0.0.X is commonly used across large institutional or corporate networks. If
anything, a MITM is even easier there than on the public Internet.

~~~
roblabla
The point is, it's also outside the reach of the public internet, so you can't
get "proper" HTTPS anyway (unless you get a custom root CA). I fail to see the
danger added in making these IPs "Secure". Suddenly, a MITM-ing actor could
modify the responses to do some WebRTC and _ask the user_ for his geolocation.
I don't think it's that much more dangerous.

And let's not mention the fact that you can get free HTTPS with Let's Encrypt,
MITM an existing 10.0.0.X connection, and serve a 301 to your HTTPS-enabled
domain name.

My use case is rather simple: I host a few things at home, and I've had to
install a root CA on every device just because I can't do WebRTC otherwise.

I could get a Let's Encrypt certificate for a domain name, but honestly that
sucks.
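
For what it's worth, minting that self-signed certificate is a few lines of
Go (crypto/x509); the hostname and LAN IP below are examples, and you'd also
need to persist the private key to actually serve TLS with it:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "home.lan"}, // example name
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
            ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:              []string{"home.lan"},              // example SAN
            IPAddresses:           []net.IP{net.ParseIP("10.0.0.5")}, // example LAN IP
            BasicConstraintsValid: true,
            IsCA:                  true, // self-signed root to install on each device
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        f, _ := os.Create("home-lan.crt")
        defer f.Close()
        pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }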

------
known
TCPTuner: congestion control utility
[https://tuxdiary.com/2016/05/14/tcptuner/](https://tuxdiary.com/2016/05/14/tcptuner/)

------
zaxomi
Does this solve something that ssh-tunneling can't do?

By the way, maybe it should say in the title that the post is from 2011?

------
caseymarquis
I think it really depends on the use case. My current work project is
multiplexing on one port with a separate port for critical control commands.
It seems like a good design, but maybe I'll make another writeup like this in
a year when I realize it was a terrible decision.
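
A sketch of that shape (ports and names invented): bulk data multiplexed on
one connection, with a second TCP connection reserved for small,
latency-critical commands so they never queue behind a stalled bulk transfer:

    package main

    import "net"

    // Client keeps bulk data and urgent commands on separate TCP
    // connections so a stalled transfer can't delay a command.
    type Client struct {
        data    net.Conn // multiplexed streams share this connection
        control net.Conn // small, latency-critical commands only
    }

    func Dial(host string) (*Client, error) {
        d, err := net.Dial("tcp", host+":9000") // example data port
        if err != nil {
            return nil, err
        }
        c, err := net.Dial("tcp", host+":9001") // example control port
        if err != nil {
            d.Close()
            return nil, err
        }
        // Keep Nagle off on the control path (Go's default, made explicit)
        // so tiny command writes go out immediately.
        if tc, ok := c.(*net.TCPConn); ok {
            tc.SetNoDelay(true)
        }
        return &Client{data: d, control: c}, nil
    }

    func main() {} // sketch only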

