
The Road to QUIC - zackbloom
https://blog.cloudflare.com/the-road-to-quic/?hn
======
peterwwillis
If I were developing a new protocol, and I thought more than just web browsers
might want to use it, I would develop it as a new standard internet transport
protocol, instead of designing around the lack of industry adoption. Talk to
Cisco, Juniper, IBM, Google, Microsoft, Apple, Intel, Broadcom, TP-Link,
Huawei, Foundry, Avaya, Marvell, Foxconn, etc. Get it adopted in silicon and
software, and get the industry actually excited about an upgraded protocol.
They get to use it to push sales of new devices, so it's great for them. You
get to use it to work around stupid intermediary networking issues.

------
cm2187
I am so glad to see these old standards (TCP, HTTP) finally being offered a
better, more secure alternative. But the most problematic of all, in my
opinion, is SMTP. Is anyone aware of an equivalent of the QUIC/HTTP2
initiative for SMTP?

DNS would be the second most problematic, but as with IPv6, DNSSEC exists; it
is just not used enough to get a chance to take over.

~~~
dtech
It's been a few years since I did server e-mail management, but SMTP/email in
2018 is quite different from SMTP/email in 1985.

For starters, all large e-mail providers (Gmail/Live) require you to use TLS,
so most MX-to-MX connections are already secured. Additional protocols like
SPF, DKIM, and DMARC secure origins and messages, and reduce spam and
spoofing.
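
To make that concrete, here is a minimal sketch (the domain and record are
made up) of what an SPF policy published in DNS looks like, and how a
receiver splits it into mechanisms:

    # Illustrative only: a made-up SPF TXT record and a minimal parse of it.
    record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com ~all"

    def parse_spf(txt):
        """Split an SPF record into its mechanisms."""
        parts = txt.split()
        assert parts[0] == "v=spf1", "not an SPF record"
        return parts[1:]

    # -> ['ip4:192.0.2.0/24', 'include:_spf.example.com', '~all']
    # i.e. allow that /24, allow whatever _spf.example.com allows, softfail the rest
    print(parse_spf(record))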

~~~
cm2187
Yeah, but that TLS means STARTTLS, which is trivially downgradable; most SMTP
servers on the internet use invalid (self-signed or expired) certs, and no
one validates anything. As for SPF and DKIM, they are nice hints for spam
scoring but are not used to block spoofed traffic.

So all these features are a thin layer of lipstick on the pig and don't
really solve the problem.
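
The downgrade is depressingly simple. A minimal sketch (hostname made up) of
what an active attacker in the path does to the EHLO exchange:

    # Because STARTTLS is negotiated in cleartext, a man-in-the-middle can just
    # filter the STARTTLS capability out of the server's EHLO reply; the client
    # then falls back to plaintext and never attempts TLS.
    def strip_starttls(ehlo_reply: str) -> str:
        return "\r\n".join(
            line for line in ehlo_reply.split("\r\n")
            if "STARTTLS" not in line.upper()
        )

    server_reply = "250-mx.example.com\r\n250-STARTTLS\r\n250 8BITMIME"
    print(strip_starttls(server_reply))  # the client never sees STARTTLS offered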

~~~
dtech
Not blocked, but in Gmail at least you get a nice big red warning if the SPF
or DKIM does not match up.

~~~
tambourine_man
Not only that, I’ve had email outright bounce without those two.

------
IshKebab
I think they forgot to change the source port in the NAT diagram.

------
hinkley
So who are the bad actors they're freezing out with this spec? They mention
problems with middleboxes, which I've heard before, but no finger-pointing.

~~~
tialaramex
I can't find any mention of "bad actors" or "freezing out" anybody or
anything, so I'm going to guess you meant, generally, which middleboxes are
"bad", and the answer is all of them, almost by definition.

The specifications we're dealing with here don't allow for any middleboxes
(and in modern protocols, this is quite deliberate). The only, minimal way to
implement such a thing correctly in the face of that situation is to act as a
full proxy, which is going to _suck_ for performance; your customers aren't
going to pay for a product that throttles their connectivity badly, nor for
the hardware that would let you run a line-speed proxy.

So they don't; they try to make an end run around protocol compliance.
Typically the idea goes something like this:

During connection setup we'll inspect everything and implement whatever rules
are key to our product, but mostly we'll pass things between the real client
and server transparently, only intervening as necessary for our role.

Then we can "whitelist" most connections and let them continue at line speed
without actually being inspected further.
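
In pseudocode, the fast path looks roughly like this (a sketch; the policy
and forwarding functions are placeholders, not any vendor's actual code):

    # Inspect the handshake, then whitelist the flow so later packets are
    # forwarded at line speed without further inspection.
    whitelisted = set()

    def forward(pkt): return pkt          # placeholder actions
    def drop(pkt): return None
    def policy_allows(pkt): return True   # product-specific rules go here

    def handle_packet(flow, pkt, is_handshake):
        if flow in whitelisted:
            return forward(pkt)           # fast path: no inspection at all
        if is_handshake:
            if policy_allows(pkt):
                whitelisted.add(flow)
                return forward(pkt)
            return drop(pkt)
        return forward(pkt)               # mid-flow packet for an unknown flow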

Unlike a full proxy, this design breaks, messily, when optional protocol
features are understood by the client and server but not the middlebox.
Either the middlebox pretends to understand when it doesn't (so client and
server seem to get their mutually agreed new feature, but if it has any
impact on how the protocol is used, it breaks mysteriously, since the
middlebox didn't know), or the middlebox squashes everything it doesn't
understand, then steps out of the way and expects that to work out OK even
though the client and server now misunderstand each other's situation.

~~~
vitus
> generally which middle boxes are "bad" and the answer is all of them, almost
> by definition.

NATs and firewalls are the first two classes of middleboxes that come to mind,
and I wouldn't consider either of them inherently "bad".

As pointed out in the article, NATs suffer from the shortcoming that without
visibility into stream semantics (e.g. SYNs / RSTs / FINs), they often fall
back to using arbitrarily set timeouts that can sever long-lived connections
(e.g. an idle SSH session, where messages might be infrequent).
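
(This is why long-lived applications end up papering over NAT timeouts with
keepalives. A Linux-specific sketch of the usual workaround:)

    import socket

    # Enable TCP keepalives so the NAT mapping sees periodic traffic even when
    # the application is idle. TCP_KEEPIDLE/TCP_KEEPINTVL are Linux option names.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # first probe after 60s idle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)  # then every 15s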

In my view, "bad" middleboxes are those that lead to protocol ossification;
TLS 1.3 (also from the article) is a good example of that. With encrypted
control state, middleboxes (without cooperation from one of the end hosts)
are forced to treat QUIC packets as opaque UDP blobs.

Part of the problem is that some middleboxes don't actually follow the
robustness principle, and will in fact strip unrecognized protocol options or
drop the packets entirely.

~~~
fulafel
NAT is objectively bad in the sense that it violates protocol standards and,
as seen in the article, breaks stuff that should work.

This makes it harder (often prohibitively hard) to develop and improve new
protocols and applications.

~~~
vitus
That's not inherent to the provided functionality, but rather an
implementation detail of existing boxes.

There are objectively bad NATs, yes, but extending the lifetime of IPv4 by 20
years and providing isolation between internal and external networks are not
inherently bad.

~~~
fulafel
NA(P)T is fundamentally incompatible with the guarantees set out by the IP
standards, which specify that packets be transmitted unmodified end-to-end.
Its entire idea is to change the IP address and higher-level protocol
identifiers such as ports.

A basic case that they break is when applications embed IP addresses in the
data. The "timeout" problem in the article is also impossible to avoid in a
guaranteed-correct way, since the NA(P)T cannot know when a flow is finished
and the mapping is safe to recycle.
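
FTP's active mode is the textbook case of an embedded address. A quick
sketch (addresses made up) of the command a NA(P)T has to find and rewrite
inside the payload:

    # FTP active mode sends the client's own IP and port as ASCII inside the
    # data stream; a NA(P)T must rewrite this string or the transfer breaks.
    ip, port = "192.0.2.10", 5001
    cmd = "PORT {},{},{}".format(ip.replace(".", ","), port >> 8, port & 0xFF)
    print(cmd)  # -> PORT 192,0,2,10,19,137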

Since these things are basically forbidden by the standards, their
functionality has never been standardized. Hence the wild west of varying
timeouts, heuristics, and more-or-less broken attempts to munge
application-level data (ALGs).

~~~
cesarb
> NA(P)T is fundamentally incompatible with the guarantees set out by the IP
> standards that specify packets to be transmitted unmodified end-to-end.

A small nitpick, but some fields of the IP packet are meant to be modified in
transit, for instance the TTL, the ECN bits, and the fragment fields. The
checksum field is defined so it can be updated (without being recomputed) to
match these changes.

But yeah, other than these fields in the IP header, and a few hop-by-hop
headers or options, packets are not meant to be modified in transit
(fragmentation aside, and that is undone once the fragments are reassembled).
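
The incremental update is specified in RFC 1624: when a 16-bit field changes
from m to m', the new checksum is HC' = ~(~HC + ~m + m') in one's-complement
arithmetic. A quick sketch:

    # RFC 1624 incremental checksum update: patch the header checksum for a
    # single changed 16-bit field without summing the whole header again.
    def ones_complement_add(a, b):
        s = a + b
        return (s & 0xFFFF) + (s >> 16)   # fold the carry back in

    def update_checksum(hc, old_field, new_field):
        return ~ones_complement_add(
            ones_complement_add(~hc & 0xFFFF, ~old_field & 0xFFFF),
            new_field,
        ) & 0xFFFF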

------
omginternets
Does anybody have data (or some sort of formal analysis) comparing protocol
overhead between TCP and QUIC?

~~~
AndrewDucker
Not specifically overhead, but there are general benchmarks covering a variety
of situations: https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobile-desktop/

~~~
omginternets
Thanks for this. It's very informative, even though it's not the packet-level
analysis I was initially looking for.

>However, we observed that QUIC performs significantly worse than TCP when the
network reorders packets (Figure 2).

>Upon investigating the QUIC code, we found that in the presence of packet
reordering, QUIC falsely infers that packets have been lost, while TCP detects
packet reordering and increases its NACK threshold.
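
As I understand it, the heuristic being described works roughly like this (a
simplified sketch, not the actual implementation):

    # Packet-threshold loss detection, QUIC-style: a packet is declared lost
    # once a packet sent sufficiently later has been acknowledged. Under
    # reordering, acks for later packets arrive first, so a merely-reordered
    # packet can be falsely declared lost.
    PACKET_THRESHOLD = 3

    def detect_losses(unacked, largest_acked):
        return [pn for pn in unacked if largest_acked - pn >= PACKET_THRESHOLD]

    # packet 5 is delayed in the network while 6..8 are acked first:
    print(detect_losses(unacked={5, 9}, largest_acked=8))  # -> [5], a false loss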

Under what conditions does packet reordering usually occur?

From the looks of the figures, it seems like packet-reordering is a function
of the x-axis value (rate-limit of some sort?).

What's going on here? Does anybody know?

------
kim0
Sounds very interesting indeed. I wish more of these secure protocols would
focus on some sort of obfuscation, to stop repressive regimes from simply
blocking all this impressive privacy tech, which would leave zero actual
benefit for end users.

------
innocenat
I don't know about the differences between gQUIC and the new QUIC, but I hate
gQUIC to the point that I always disable it on my machine. The main reason is
that my ISP always prioritizes TCP over UDP, so during congested periods
(6-10pm), gQUIC is useless.

~~~
andrius4669
Shouldn't that hate be directed at your ISP?

~~~
pixl97
Yes, but most people have very little choice of ISP, and so much hate has
already been directed at ISPs with little change in outcome that directing
hate at any other group is more effective.

------
popshart
Too much complexity.

I find it interesting how everything has to be encrypted and 'secure' now. The
excuse is always the NSA, but let's be honest: A new transport protocol isn't
going to protect you from them. I think there's a lot more to the cellular
baseband and Intel ME backdoors than anyone can imagine.

Google is going to do the same thing with this they did with HTTPS. Soon
enough, you'll be penalized through search and/or Chrome for not supporting
it.

~~~
dtech
The incentive for Google is for all browsers to support QUIC; it saves them a
lot in hardware costs and increases performance. They couldn't care less
whether your website uses QUIC.

You're receiving downvotes because you're propagating the "pushing encryption
is motivated by Google's evil agenda" narrative that's currently in vogue
with a subsection of HN commenters, for reasons unknown to me, because quite
frankly the arguments are ridiculous.

~~~
myhf
> all browsers support QUIC

Try blocking your network's access to www.google-analytics.com (with a fast-
loading block notice page) and you will see that most webpages become unusable
with a 30-second delay before page load completes.

If QUIC only works when the network has no partitions, then QUIC doesn't work.

~~~
icebraining
I block Google Analytics and experience no such delay.

