
Google’s QUIC protocol: moving the web from TCP to UDP - Mojah
https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/
======
cm3
If you're looking for a proven UDP-based protocol that's better suited to
transferring vast amounts of data across the globe, and you have been using
rsync or BitTorrent, here's something less well known that offers a good speed
advantage: [https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol](https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol)

Please don't complain about SourceForge; UDT is an old project:
[http://udt.sourceforge.net/](http://udt.sourceforge.net/)

And here's their distributed filesystem built on top of UDT:
[http://sector.sourceforge.net/](http://sector.sourceforge.net/)

Edit: Protocol-level security is being worked on for UDT5, but in the meantime
there's an experimental Rust-based attempt at replacing SCP with a UDT-based
solution: [https://github.com/mcginty/shoop](https://github.com/mcginty/shoop)

~~~
_asummers
Re: SourceForge, my understanding is that the previous owners sold SF in the
past few months. The new owners have apparently taken down the malware and are
working to restore its former good name, despite the steep incline of that
hill.

[https://www.reddit.com/r/sysadmin/comments/4n3e1s/the_state_...](https://www.reddit.com/r/sysadmin/comments/4n3e1s/the_state_of_sourceforge_since_its_acquisition_in/)

~~~
cm3
Given that GForge is a fork of the original SourceForge code, it's probably
possible to host a mirror so as not to repeat the loss of projects that
happened with Google Code. There's just too much abandoned code on there, and
most of it is still valuable. Even the stuff that isn't useful today is still
valuable for archival purposes. Is there already a Neocities-like attempt at
doing that?

------
x3ro
For the interested: A working group to standardize QUIC was accepted into the
Transport and Services area of the IETF at the last meeting in Berlin a couple
of weeks ago [1].

The WG charter is this one [2]:

Define a new standards track IETF transport protocol based on deployment
experience with QUIC. Four focus areas:

* Core transport work: wire format, basic mechanisms

* Security: TLS 1.3 to protect QUIC header and payload

* Application semantic mapping: initial focus on HTTP/2

* Extension to multipath for migration and load sharing

[1]: [http://etherpad.tools.ietf.org:9000/p/notes-ietf-96-quic?useMonospaceFont=true&showChat=false](http://etherpad.tools.ietf.org:9000/p/notes-ietf-96-quic?useMonospaceFont=true&showChat=false)

[2]: [https://www.ietf.org/proceedings/96/slides/slides-96-quic-0.pdf](https://www.ietf.org/proceedings/96/slides/slides-96-quic-0.pdf)

~~~
amluto
> * Security: TLS 1.3 to protect QUIC header and payload

That seems odd. QUIC's crypto is much nicer than any TLS version or draft last
time I looked.

------
skywhopper
Interesting stuff and a good technical overview. I remember hearing something
about this a while ago but I hadn't seen the details. I found the time spent
on the need to unblock 443/udp on server firewalls amusing because that's the
absolute least concern I would have about the protocol.

I know Google has the most genius geniuses working for them, but it's
important to be wary of the risks of this sort of thing. Not that TLS/SSL
itself has a fantastic track record, but replacing the TLS handshake with
something new and shorter _and_ mixing it with a protocol that can accept
incoming packets from multiple source IPs sounds like a recipe for a thousand
new security vulnerabilities. If not in Google's code, then in the other
implementations. Researchers, take note.

------
nmalaguti
The post didn't discuss the backoff behavior of QUIC. For TCP, a dropped
packet results in a halving of the data rate, to help with assumed congestion
of the network.
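
For reference, classic TCP Reno reacts roughly like this (a minimal sketch of
the textbook AIMD behavior; the constants are illustrative, not a real
implementation):

    # Minimal sketch of TCP-Reno-style congestion control (AIMD).
    # Constants are illustrative, not a real implementation.
    MSS = 1460              # bytes per segment
    cwnd = 10 * MSS         # congestion window
    ssthresh = 64 * MSS     # slow-start threshold

    def on_ack():
        global cwnd
        if cwnd < ssthresh:
            cwnd += MSS                 # slow start: doubles roughly every RTT
        else:
            cwnd += MSS * MSS // cwnd   # congestion avoidance: ~1 MSS per RTT

    def on_loss():
        global cwnd, ssthresh
        ssthresh = max(cwnd // 2, 2 * MSS)  # halve the rate on assumed congestion
        cwnd = ssthresh                     # (Tahoe would restart from 1 MSS)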

Does QUIC act as a good network citizen? Are they experimenting with different
approaches?

~~~
r1ch
TCP congestion control is one of the main offenders holding back high-speed
broadband. It can take minutes for TCP to discover the link bandwidth, a
single lost packet can cut performance in half, and the ramp back up to full
speed may never happen.
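
Back-of-envelope (assuming a 1 Gbps link, 100 ms RTT, 1460-byte segments, and
Reno-style recovery):

    # Back-of-envelope: time for Reno to re-fill a fat pipe after one loss.
    # Assumptions: 1 Gbps link, 100 ms RTT, 1460-byte segments.
    link_bps, rtt, mss = 1e9, 0.100, 1460

    bdp_packets = link_bps * rtt / (8 * mss)  # ~8,500 packets in flight at full speed
    rtts_to_recover = bdp_packets / 2         # halved window, +1 packet per RTT
    print(f"{rtts_to_recover:.0f} RTTs, i.e. ~{rtts_to_recover * rtt / 60:.0f} minutes")
    # -> about 4,300 RTTs, roughly 7 minutes back to full speed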

Is congestion collapse still a risk on today's internet? Do we need such
aggressive congestion control?

~~~
wtallis
Congestion control is as important as ever. Your packets can traverse links of
just a few Mbps, then 1 Gbps, then back down to a few Mbps, all within your
own house on the way from your computer to your ISP's network. Common consumer
networking equipment will do extremely stupid things in the face of
congestion, such as storing packets in a FIFO buffer more than one second
long. Initial window sizes still need to be small, because there will always
be users at the fringes of a wireless network where speeds are low enough that
a burst of a few dozen full-size packets is a major call-dropping problem.

The best solution is for AQM and ECN to be deployed widely, so that congestion
can be identified and dealt with before it gets bad enough to require drastic
rate decreases. QUIC currently cannot use ECN because those bits of the IP
header typically aren't accessible from the APIs for UDP. Modern TCPs
operating on networks that keep buffering delays low and signal congestion
without dropping packets don't have trouble determining link bandwidth
quickly.
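
For what it's worth, a Linux-only sketch of why: marking outgoing UDP
datagrams as ECN-capable is easy, but reading the ECN field on received
datagrams takes IP_RECVTOS plus recvmsg() ancillary data, which most UDP code
never touches:

    # Linux-only sketch: the ECN field is the low 2 bits of the IP TOS byte.
    # Marking outgoing UDP datagrams ECN-capable is easy; *reading* the field
    # on received datagrams needs IP_RECVTOS plus recvmsg() ancillary data,
    # which typical UDP APIs never surface -- hence QUIC's problem.
    import socket

    ECT0, CE = 0b10, 0b11  # ECN codepoints: ECN-capable, Congestion Experienced

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)   # mark outgoing packets
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_RECVTOS, 1)  # request inbound TOS
    sock.bind(("0.0.0.0", 9999))

    data, ancdata, flags, addr = sock.recvmsg(2048, socket.CMSG_SPACE(4))
    for level, ctype, cdata in ancdata:
        if level == socket.IPPROTO_IP and ctype == socket.IP_TOS:
            if cdata[0] & 0b11 == CE:
                print("router signaled congestion (CE mark) on", addr)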

------
falcolas
Somewhat of a meta observation - we seem to be entering an era where mantras
like "Those who write their own networking protocol are doomed to re-create
TCP, badly" are becoming less true, at least for a small subset of the
programming population.

It's fascinating to watch some of the foundations upon which we do our work
being shaken up a bit. I just hope they settle into more stable and more
secure foundations, not just "better".

~~~
chillydawg
Virtually no one has access to a network they own end-to-end that is big
enough to create and test a credible alternative. Google, with the tiny number
of people inside it who work on this stuff, is one of probably 10 orgs in the
world qualified and well placed to do it. Other contenders would be MS, the US
Navy, Apple... who else?

------
kartickv
Is packet loss in the real world random or correlated?

QUIC's fix for TCP's head-of-line blocking problem goes away if packet loss
turns out to be correlated: lose a packet belonging to one stream within the
connection, and you're going to lose packets belonging to other streams as
well, so QUIC doesn't help.

~~~
orasis
With Forward Error Correction the packet loss issue is somewhat mitigated.
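
A minimal sketch of the simplest scheme, XOR parity (roughly what QUIC
experimented with, as I understand it): one parity packet per group lets the
receiver rebuild any single lost packet without waiting a round trip:

    # Sketch of XOR-parity FEC: one parity packet per group lets the receiver
    # rebuild any *single* lost packet in the group without a retransmit.
    from functools import reduce

    def xor_packets(packets):
        size = max(len(p) for p in packets)
        padded = [p.ljust(size, b"\x00") for p in packets]
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

    group = [b"packet-1", b"packet-2", b"packet-3"]
    parity = xor_packets(group)

    # Packet 2 got lost; XOR of everything that did arrive reconstructs it:
    recovered = xor_packets([group[0], group[2], parity])
    assert recovered.rstrip(b"\x00") == b"packet-2"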

~~~
jthol
I think the question is: if I have 1000 packets and I lose 10, is the packet
loss spread out or is it clustered? If it's clustered you'll have to
retransmit anyway.

~~~
orasis
Interleaving the FEC can also mitigate bursts. The tradeoff is that the more
you interleave, the higher the latency on reconstruction. They could do
something like using the round-trip time to calculate the number of packets in
the air and interleave based on that.
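
A minimal sketch of block interleaving (the depth and packet names are
illustrative): packets are written into rows and sent by columns, so a burst
of consecutive losses lands in different FEC groups:

    # Sketch of block interleaving: write packets into rows, send by columns.
    # A burst of consecutive losses on the wire then lands in different FEC
    # groups (rows), so each group loses at most one packet and XOR parity
    # can still repair it. Depth is illustrative; as suggested above, it
    # could be derived from RTT * bandwidth (packets in the air).
    def interleave(packets, depth):
        rows = [packets[i:i + depth] for i in range(0, len(packets), depth)]
        return [row[c] for c in range(depth) for row in rows if c < len(row)]

    packets = [f"pkt{i:02d}".encode() for i in range(12)]
    wire_order = interleave(packets, depth=4)
    # wire_order: pkt00, pkt04, pkt08, pkt01, pkt05, ... -- losing three
    # consecutive wire packets now hits three different rows.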

I was working on this type of stuff 15 years ago and now might be the time to
do it. Bandwidth keeps increasing but latency stays the same, so it makes
sense to waste some bandwidth to improve latency.

------
crazyloglad
UDP-based and encrypted from the get-go? Sounds like Daniel Bernstein, ~6
years ago...

[1] [https://curvecp.org/index.html](https://curvecp.org/index.html)

[2]
[https://www.youtube.com/watch?v=K8EGA834Nok](https://www.youtube.com/watch?v=K8EGA834Nok)

~~~
anonbanker
wow. any reason why it hasn't caught on?

------
a_imho
Total networking layman here. Am I understanding correctly that the author
uses a somewhat outdated example to show how QUIC can be theoretically faster
for the initial connection, while at the same time admitting that TCP Fast
Open will reduce the RTT?

Also, with QUIC you actually receive 10% less data, so the claim that a packet
retransmit would take longer is not convincing to me. It should depend on how
many packets are lost on average (something like 1 RTT = 20 UDP packets *
10%), so using QUIC/FEC on a stable network would actually _decrease_
performance and drive up data plan costs.
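
For what it's worth, the back-of-envelope trade-off looks something like this
(all numbers assumed):

    # Back-of-envelope for the FEC-vs-retransmit trade-off. All numbers
    # assumed: 100 ms RTT, a 20-packet response, 10% FEC overhead.
    rtt, packets = 0.100, 20
    fec_overhead = 0.10   # redundant bytes sent whether or not they're needed
    loss_rate = 0.01      # fraction of packets lost

    # Without FEC: each lost packet costs roughly one extra RTT to retransmit.
    expected_retransmit_delay = packets * loss_rate * rtt   # 0.02 s on average

    # With FEC: single losses cost no extra RTT, but 10% more data, always.
    extra_packets_sent = packets * fec_overhead             # 2 packets, every time

    # On a clean link (loss_rate -> 0) FEC is pure overhead; on a lossy one it
    # trades bandwidth for latency. Which wins depends entirely on loss_rate.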

Also, I don't see why a few packets matter so much as to justify introducing a
new networking stack with all of its own problems, e.g. increased complexity.
I just opened a news site on desktop and it was over 2 MB in size without the
ads. If we're concerned about percentages, we would surely cut down on the
JS/CSS bloat first.

~~~
k__
TCP has a few ugly problems that compound each other. First you need to wait
until all the handshaking is done, and even then you can't transmit at full
speed.
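
Roughly, with the usual textbook handshake counts (numbers assumed for
illustration):

    # Rough time-before-the-request-can-be-sent, using the usual textbook
    # handshake RTT counts (assumed here for illustration).
    rtt_ms = 100

    setups = {
        "TCP + TLS 1.2 (fresh connection)": 1 + 2,  # TCP handshake + full TLS
        "TCP Fast Open + TLS resumption":   1,      # roughly
        "QUIC (first contact)":             1,
        "QUIC (repeat connection)":         0,      # 0-RTT: data in first packet
    }

    for name, rtts in setups.items():
        print(f"{name}: ~{rtts * rtt_ms} ms of handshake before the request goes out")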

If you use UDP, you have to implement much of what TCP does yourself. But you
can draw on the accumulated experience with web connections and leave out the
things that turned out not to be needed.

~~~
wtallis
> [...] and then you can't even transmit with full speed.

This is nothing specific to TCP. "Full speed" is a fundamentally unknowable
quantity in advance in the general case. It varies with time and endpoints. If
you try to start a connection transmitting at the full speed of your first-hop
link (the only one you have any chance of knowing the bandwidth of), you just
put the congestion control a hop or more away from the box that has the
information necessary to do it right.

QUIC can have an advantage in re-using a connection in situations where
multiple sequential TCP connections might have been made. Probing for link
bandwidth fewer times is not the same as not having to probe for link
bandwidth. And HTTP/2 has already addressed the most common case of this
without abandoning TCP.

~~~
k__
Ah, didn't know this.

I thought this was because of the TCP slow start.

------
Qantourisc
Shouldn't this eventually be put in OSI layer 4 (on top of IP instead of on
top of UDP)?

~~~
drdaeman
The problem is, we've basically lost L4. Too many networks (firewalls, and
especially NATs) just don't pass anything but protocols 6 (TCP) and 17 (UDP).
Even ICMP is not always reliably available (leading to all sorts of weird PMTU
issues [1]), or it's heavily filtered.

I think that's why many good protocols (e.g. SCTP) are rare sights and
frequently aren't even considered as an option.

[1] Yes, some idiots just block ICMP completely because they heard it's
"secure", and make DHCP or PPP cap the MTU at 1200 with an "uh, it works just
fine" attitude.

~~~
panic
It's too late for IPv4, but maybe a strong push could get a protocol like SCTP
supported on IPv6 networks?

~~~
drdaeman
Given that IPv6 doesn't normally use NAT (and for IPv4, it's NAT, not
firewalls, that is the primary reason non-TCP/UDP protocols don't work in
practice), it's possible it would be okay.

IPv6 adoption is a problem, though. I think that despite all IANA's efforts to
push v6, we'll be stuck with IPv4 for years to come.

------
pc2g4d
As I read this post, QUIC has been introduced at least partly to solve the
problem of head-of-line blocking. Head-of-line blocking in turn has become a
problem since the adoption of HTTP/2, formerly known as SPDY, which is the
protocol Google previously foisted upon the world. So the core protocols
involved in the web are now the playthings of Google, and when Google screws
up, the answer is yet another new and shiny protocol with unknown side
effects.

I know this misconstrues Google's role in all of this somewhat, but it's an
interpretation that crossed my mind.

~~~
arpa
And it does ring true.

------
nimrody
Do mobile (cellular) networks these days commonly allow anything but TCP? (Or
even HTTP? Most firewalls block anything that is not HTTP/HTTPS.)

~~~
azylman
Lots of video games (including mobile) use UDP so I'd be very surprised if
mobile networks blocked UDP.

~~~
clarry
In Finland, the big operators use CGN by default. So if it works at all, it
only works as a one-way street, and p2p or hosting your game at home is a
no-go.

At least one operator allows you to lift the restriction, though. With money.

------
pookeh
> The QUIC protocol implements its own crypto-layer so does not make use of
> the existing TLS 1.2.

So in addition to proving a lower-layer solution to things like network
congestion, they also have to prove their crypto layer? Sounds like an equally
large task, if not larger.

And even if all TCP implementers decide to adopt it, the next task is to make
servers and clients adopt the crypto layer.

------
z52
How about creating a QUIC tunneling tool (which would be needed on both sides
of the connection) - for example, for tunneling MySQL TCP traffic between a
client and server that are on different networks (pings > 1 ms)?

------
meganvito
I wish Googlers would stop adding drama to our daily jobs. I'd kindly suggest
they instead dedup the web and provide web-wide random file access.

------
anonbanker
How long until this becomes a transport over BATMAN?

