
A QUIC update on Google’s experimental transport - jplevine
http://blog.chromium.org/2015/04/a-quic-update-on-googles-experimental.html
======
josho
Google really gets how standardization works. They innovate and once the
innovation has proven its value they offer the technology for standardization.

I previously saw companies, like Sun, completely fail at this. E.g. the many
Java specifications that were created by the standards bodies. Sun tried to do
it right by having a reference implementation with the spec. But the reference
implementations were rarely used for real work, so they proved only that the
spec could be built, not that the spec elegantly solved real problems.

~~~
username223
> Google really gets how standardization works. They innovate and once the
> innovation has proven its value they offer the technology for
> standardization.

I wouldn't necessarily say "innovate" or "offer," but they do understand the
process. You can make pretty much anything a "standard" with a bit of money
and time (isn't Microsoft Office's XML format a "standard"?), but adoption is
always an issue. However, Google controls a popular web server (Google search)
and client (Google Chrome), so for web-things, they can create "widespread
adoption" for whatever they standardize.

~~~
josho
> However, Google controls a popular web server (Google search) and client
> (Google Chrome), so for web-things, they can create "widespread adoption"
> for whatever they standardize.

Google's innovation is to make HTTP faster over a slow, unreliable network
(e.g. a wireless device). They solved a real-world problem, proved the
solution on their own users, and are now going to standardize it. Their
innovation is driving their standardization efforts.

If Google didn't solve a real-world problem, then even with their platform
they couldn't drive widespread adoption. Their innovations (SPDY and now QUIC)
solve real-world problems, so adoption will become widespread.

MSFT with Office XML was solving a political problem, not a real-world
problem. I.e. Office was taking a hit because DOC/XLS were proprietary
formats, and governments were concerned about archiving documents in a
proprietary format and were therefore threatening to move to open standards
(i.e. OSS office suites). MSFT fought back by pushing through a standard
document format to give their sales staff a rebuttal to customers threatening
to move to an open standard. I.e. the 'standard' only has traction due to
MSFT's monopoly on Office and serves no real benefit to anyone except MSFT's
sales force.

~~~
yuhong
I have OOXML on my poorly written wishlist, along with a proposal to withdraw
it from standardization without breaking compatibility.

------
jws
_Today, roughly half of all requests from Chrome to Google servers are served
over QUIC and we’re continuing to ramp up QUIC traffic_

Google says they will eventually submit this to IETF and produce a reference
implementation, but it is interesting how a company was able to quietly move a
large user base from open protocols to a proprietary protocol.

~~~
_stephan
The Chromium implementation of QUIC is released as Open Source, so I'm not
sure how "proprietary" the protocol actually is.

~~~
pcwalton
In a competitive multi-vendor ecosystem like the Web, public-facing protocols
that are introduced and controlled by a single vendor are proprietary,
regardless of whether you can look at the source code. NaCl and Pepper, for
example, are proprietary, even though they have open-source implementations.

The distinction between open-source-but-proprietary and open-standard is
important for many reasons. One of the most important is that open-source-but-
proprietary protocols, if they catch on, end up devolving into bug-for-bug
compatibility with a giant pile of C++.

~~~
kjksf
I don't understand this negativity towards QUIC/Dart/NaCL/Pepper etc. which
are exemplary open-source efforts.

By your definition Mozilla's (your employer's) asm.js and Rust are also
proprietary.

Somehow I doubt that you jump on every thread about asm.js or Rust to point
out how proprietary they are or how they are implemented as a giant pile of
C++. Double standards.

There have been plenty of research and work even in standard bodies like IETF
that try to implement a better tcp/ip-like protocol.

They all went nowhere because, at this point in time, you can't just have some
guys in a room design a new transmission protocol and have it taken seriously
by anyone that matters (Google/Apple/Microsoft/Mozilla).

Google is following the only realistic route: implement something, test it in
a scale large enough to conclusively show an improvement and then standardize
it.

This is exactly how HTTP/2 happened.

We should be cheering them on instead of spreading FUD because it doesn't live
up to your impossible standard of non-proprietariness.

~~~
pcwalton
> Somehow I doubt that you jump on every thread about asm.js or Rust to point
> out how proprietary they are or how they are implemented as a giant pile of
> C++. Double standards.

asm.js isn't a new protocol, and so isn't proprietary according to that
definition. It's a subset of JavaScript (to be specific, ES6, including
extensions such as Math.imul). You can implement asm.js by simply implementing
the open, multi-vendor ES6 standard. In fact, that's exactly what some
projects, like JavaScriptCore, are doing.

Rust isn't relevant, as it's not intended to be added to the Web. Adding
<script type="text/rust"> to the browser would be a terrible idea for numerous
reasons. Nobody has proposed it.

~~~
tptacek
Plenty of IETF standardization efforts can be described as "a subset of
Javascript" or even just "a bunch of Javascript APIs". WebCrypto, for
instance, fits that bill. What makes QUIC so different from WebCrypto?

~~~
pcwalton
QUIC and Web Crypto are both things that need to be standardized, so I don't
know what the implication is or how to respond to that statement.

I do think there is a big difference between "a subset of JavaScript" and "a
bunch of JavaScript APIs" from a standardization point of view. All engines
have been implementing special optimizations for JavaScript subsets ever since
the JS performance wars started. Nobody thinks we need to standardize
polymorphic inline caches, for example, even though the set of JS code that is
PIC-friendly is different from the set of JS code that is heavily polymorphic
(and this distinction would be easy to describe formally if anyone cared to).
asm.js is just an optimization writ large: the reason why it's not a protocol
is that any conforming JavaScript implementation is also an asm.js
implementation.

I think people are reading a lot more into my posts than was intended. I'm not
calling out QUIC specifically, since I'm not involved with the details of its
standardization anyway. The point is simply that open source doesn't
automatically mean non-proprietary.

~~~
tptacek
Oh, sure. QUIC is a proprietary protocol. An IETF-standardized QUIC would not
be.

------
jeremie
As part of telehash v3, we've separated out most of the crypto/handshake
packeting into E3X, which has a lot of similarities to QUIC:
[https://github.com/telehash/telehash.org/blob/master/v3/e3x/...](https://github.com/telehash/telehash.org/blob/master/v3/e3x/intro.md)

Personally I have a much broader use case in mind for E3X than QUIC is
designed for, incorporating IoT and meta-transport / end-to-end private
communication channels. So, I expect they'll diverge more as they both
evolve...

~~~
1gn1t10n
How is work on Telehash coming? I'm still waiting for an XMPP equivalent for
the mobile age that will free us from the medieval [1] state of communication
we are experiencing.

[1]
[https://www.schneier.com/blog/archives/2012/12/feudal_sec.ht...](https://www.schneier.com/blog/archives/2012/12/feudal_sec.html)

------
FullyFunctional
MinimaLT [1], developed independently and at about the same time as QUIC, also
features minimal latency, but with more (and better, IMO) emphasis on
security and privacy. (Though both are based on Curve25519.) QUIC has an edge
with header compression and an available implementation. EDIT: and of course,
forward error correction!

[1] cr.yp.to/tcpip/minimalt-20130522.pdf

~~~
djcapelis
I hate to be harsh because I like a lot about MinimaLT, but until MinimaLT
ships code it doesn't feature anything.

I wish we were having a conversation where djb had written an amazing,
performant MinimaLT implementation that we could compare against QUIC. But
we're not. We're having a conversation where shipping, performant code runs
one protocol, and you're presenting an alternative that pretty much exists
only as a PDF document.

Believe me, I looked to figure out if there was a good solution for
incorporating MinimaLT into code right now and there's not. I have a project
where this is relevant. I'm looking at QUIC now and I may incorporate it as an
alternative transport layer. (It duplicates some of my own work though, so I'm
not sure whether to strip that stuff out or just make mine flexible enough to
work on top.)

(To say nothing of the fact that QUIC can be implemented without a kernel
module, which is a handy side effect of doing things over UDP. A shame that's
a factor, but of course it is in realistic systems deployment.)

~~~
FullyFunctional
Not harsh at all. I agree completely and wish it wasn't so. At this rate, it
might be better to focus on improving the security and privacy of QUIC.

Re. kernel module: both QUIC and MinimaLT can be implemented in user space.

------
jzawodn
I wonder if this is why I've been having weird stalls and intermittent
failures using GMail the last few weeks. Every time, I try it in Firefox or
Safari and it works perfectly.

~~~
svijaykr1
I work on the QUIC team. If you file a bug with a network log, we'll take a
look to see what is going on.

------
portmanteaufu
Possibly silly question: I was under the impression that only TCP allowed for
NAT traversal; if I send a UDP packet to Google, how can Google respond
without me configuring my router?

~~~
wmf
NAT traversal is easier with UDP than with TCP. Here's a good article on the
topic:
[https://www.zerotier.com/blog/?p=226](https://www.zerotier.com/blog/?p=226)
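The gist of why no router configuration is needed can be shown with plain
sockets. This is a minimal sketch using localhost as a stand-in for a NATed
path: the server simply replies to whatever source address the datagram
arrived from, which is exactly the mapping a NAT keeps alive for an outbound
UDP flow.

```python
import socket

# "Server" with a known address; in the real world this is Google's endpoint.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port for the demo
server_addr = server.getsockname()

# Client sends first; on a real network this outbound packet is what creates
# the NAT mapping that lets the reply come back in.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)

# Server replies to the source address it observed, not a configured one.
data, client_addr = server.recvfrom(1024)
server.sendto(b"world", client_addr)

reply, _ = client.recvfrom(1024)
print(reply)  # b'world'
```

The client never opened a port on its router; it only needs the NAT to route
replies for a flow it initiated, which virtually all NATs do for UDP.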

~~~
manigandham
Thanks for posting, that was a nice article but the intro to ZeroTier was even
better, pretty cool software.

------
FreakyT
Interesting. I wonder if this will end up gaining enough momentum to become a
standard, similarly to how SPDY ended up essentially becoming HTTP/2.

------
rpcope1
It will be interesting to see how this works out, given how difficult NAT can
often make working with UDP.

It's a shame that SCTP is not more widely adopted, as I suspect it may be just
as good (if not better) a transport layer for building a new web protocol on.

~~~
lucian1900
It's unlikely that DTLS over SCTP would be faster than QUIC, which has been
specifically designed to have TLS with a minimal number of round trips.

------
Fando
I wonder how they managed the zero RTT connections? How would that ever work?

~~~
api
Crypto? You can know who your peer is with a single packet if you've already
exchanged keys, and other cleverness is also possible.

~~~
ndesaulniers
At the cost of perfect forward secrecy, since then you're no longer using
ephemeral keys?

~~~
jorangreef
QUIC has a mechanism to upgrade to ephemeral keys once the connection has
started.

~~~
ndesaulniers
As will TLS 1.3, starting from page 3:
[http://www.ietf.org/proceedings/92/slides/slides-92-tls-3.pd...](http://www.ietf.org/proceedings/92/slides/slides-92-tls-3.pdf)

------
rdsubhas
I'm not sure if this is related, but sometimes I have slow home internet (60
kbps after I cross a threshold). At those times, I see websites loading really
slowly, especially HTTPS connections crawling - but YouTube streaming, Google
search and Google webcache work really fast! In fact, I've waited a few
minutes for a normal website to load on my PC while the whole time YouTube
was streaming on a mobile device without any interruption.

Does UDP mess up other traffic?

------
Splendor
The first image really confused me with the 'Receiver' looking like a server
and the 'Sender' looking like a laptop.

------
antirez
50% and nobody noticed. Can't wait for another marginal latency win that makes
the software stack more complex.

~~~
tptacek
Isn't this an argument that nothing in TCP/IP should change, and that we
should still be pretending that there is a point to the URG pointer?

~~~
brighteyes
I took it more for an argument on the "and nobody noticed" aspect.

------
easytiger
I assume they aren't counting the transit time of the first SYN equivalent?
Are they saying it traverses the network infinitely fast? Because it doesn't.

------
polskibus
Google should investigate (or perhaps just buy outright) a low level
communications technology stack from one of the HFT firms - they've already
mastered low-latency networking, they just have no incentive to share this
knowledge with the outside world.

~~~
polskibus
There are plenty of vendors that provide good UDP-based solutions, for example
TIBCO. In my opinion multicast is not used widely enough, partially because
everybody thinks that TCP/IP pub/sub is good enough.

Financial incentives made HFTs and the like go further than the average
software company - just look at the microwave networks.

~~~
tptacek
Multicast isn't used widely for a lot of reasons, but the most important of
them is that it simply doesn't scale across routing domains.

------
higherpurpose
Wasn't the point of QUIC that it's basically encrypted UDP? I'm not seeing
that great of a performance improvement here - 1 second shaved off the loading
of the top 1% slowest sites. Are those sites that load in 1 minute? Then 1
second isn't that great.

However, if the promise is to be an always-encrypted _Transport layer_ (kind
of like how CurveCP [1] wanted to be - over TCP though) with small performance
gains - or in other words _no performance drawbacks_ \- then I'm all for it.

I'm just getting the feeling Google is promoting it the wrong way. Shouldn't
they be saying "hey, we're going to encrypt the Transport layer by default
now!" ? Or am I misunderstanding the purpose of QUIC?

[1] - [http://curvecp.org/](http://curvecp.org/)

~~~
comex
The first diagram, if I'm interpreting it correctly, shows two whole round
trip times shaved off compared to TCP + TLS, and one compared to plain TCP
(which is basically no longer acceptable). For a newly visited site, that
becomes one and zero.

The 100ms ping time in the diagram may be pretty high for connections to
Google, with its large number of geographically distributed servers, but for
J. Random Site with only one server... it's about right for US coast-to-coast
pings, and international pings are of course significantly higher. [1] states
that users will subconsciously prefer a website if it loads a mere 250ms
faster than its competitors. If two websites are on the other coast, have been
visited before, and are using TLS, one of them can get most of the way to that
number (200ms) simply by adopting QUIC! Now, I'm a Japanophile and sometimes
visit Japanese websites, and my ping time to Japan is about 200ms[2]; double
that is 400ms, which is the delay that the same article says causes people to
search less; not sure this is a terribly important use case, but I know I'll
be happier if my connections load faster.

Latency is more important than people think.

[1] [http://www.nytimes.com/2012/03/01/technology/impatient-
web-u...](http://www.nytimes.com/2012/03/01/technology/impatient-web-users-
flee-slow-loading-sites.html)

[2] [http://www.cloudping.info](http://www.cloudping.info)
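The savings described above are just round trips times RTT, so they are easy
to tabulate. A back-of-envelope sketch using the diagram's 100 ms RTT, with
the round-trip counts taken from the comparison in this comment (resumed TLS
vs. plain TCP vs. QUIC):

```python
rtt_ms = 100  # ping time from the blog post's diagram

# Round trips spent on connection setup before the request can be sent.
handshakes = {
    "TCP + TLS (resumed session)": 2,
    "TCP alone": 1,
    "QUIC (first visit)": 1,
    "QUIC (repeat visit)": 0,   # 0-RTT: request rides on the first packet
}

for name, rtts in handshakes.items():
    print(f"{name}: {rtts * rtt_ms} ms of setup before the request leaves")
```

At 100 ms RTT, a repeat QUIC visit saves 200 ms over resumed TCP+TLS, which
is most of the 250 ms threshold the NYT article mentions.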

~~~
nomel
> Packet sequence numbers are never reused when retransmitting a packet. This
> avoids ambiguity about which packets have been received and avoids dreaded
> retransmission timeouts.

How does this work?

As a total guess, I assume the client gets a stream of packets, buffers them
all up, and waits for some threshold before re-requesting any missing
sequence numbers. When that missing packet comes back in (all while the
stream continued) with its new number, it puts it in place, pushes the data
up to the application, and clears its buffer. The client probably sends "I'm
good up to sequence n" every once in a while so the server can clear its
retransmit buffer.
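One way the "never reuse a sequence number" trick can work (a simplified
sketch, not the actual QUIC wire format): every transmission gets a fresh
packet number, while the payload carries its own stream offset. The receiver
reassembles by offset, so a retransmission is unambiguous - an ACK for the
retransmitted packet's new number can't be confused with a stale ACK for the
original.

```python
received = {}  # stream offset -> payload bytes

def deliver(packets):
    """Reassemble the in-order stream prefix from
    (packet_number, stream_offset, payload) tuples; packet numbers only
    identify transmissions, offsets identify the data itself."""
    for pkt_num, offset, payload in packets:
        received[offset] = payload
    data, offset = b"", 0
    while offset in received:           # walk the contiguous prefix
        data += received[offset]
        offset += len(received[offset])
    return data

# Packet 0 (offset 0) is lost; packet 1 (offset 5) arrives out of order;
# the lost data is retransmitted under a brand-new packet number, 2.
stream = deliver([(1, 5, b"world"), (2, 0, b"hello")])
print(stream)  # b'helloworld'
```

The sender keeps its own map from packet numbers to offsets, so an ACK of
packet 2 tells it exactly which copy got through and lets it retire the data.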

That's pretty cool. Treat it as a lossy stream, rather than a "OH CRAP
EVERYBODY STOP EVERYTHING, FRED FELL DOWN!". With this, FRED IS DED!

