
Identifying QUIC deliverables - arusahni
https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0iuz0ZBa35s
======
tyingq
_" confusion between QUIC-the-transport-protocol, and QUIC-the-HTTP-binding. I
and others have seen a number of folks not closely involved in this work
conflating the two, even though they're now separate things."_

Well, yeah, they have the same name. My first reaction to the headline was
that lots of software isn't ready for http over udp.

~~~
ksec
So what _exactly_ is the QUIC-the-HTTP-binding?

Everything on the web about QUIC focuses on UDP. None of it explains the
difference between HTTP/3 and /2, and /2 is already complex enough as it is.

~~~
tveita
AIUI, the QUIC protocol [1] will be a general-purpose transport protocol built
on top of UDP that offers multiple independent streams with encryption. For
the application it's like it can open as many TCP connections as it wants,
without the extra overhead from handshakes and rate control start.

HTTP-over-QUIC [2] specifies how to talk HTTP 2 over the QUIC protocol, with
some adjustments like using "native" QUIC streams instead of multiplexing
streams over a TCP stream, and adjusting the header compression to allow more
out-of-order operations.

[1] [https://datatracker.ietf.org/doc/draft-ietf-quic-transport/](https://datatracker.ietf.org/doc/draft-ietf-quic-transport/)

[2] [https://datatracker.ietf.org/doc/draft-ietf-quic-http/](https://datatracker.ietf.org/doc/draft-ietf-quic-http/)
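One of the "adjustments" mentioned above is header compression. A toy sketch of why HTTP/2's HPACK needed changing for QUIC (this is a hypothetical mini-codec, not real HPACK or QPACK): the decoder's dynamic table is built up by earlier header blocks, so blocks must be decoded in the order they were encoded, and QUIC's independently delivered streams no longer guarantee that order.

```python
# Toy indexed-header codec. The decoder's table depends on what earlier
# blocks inserted, so decoding blocks out of order fails.

class Encoder:
    def __init__(self):
        self.table = {}  # header -> table index

    def encode(self, headers):
        block = []
        for h in headers:
            if h in self.table:
                block.append(("indexed", self.table[h]))  # back-reference
            else:
                self.table[h] = len(self.table)
                block.append(("literal", h))              # defines an entry
        return block

class Decoder:
    def __init__(self):
        self.table = []  # table index -> header

    def decode(self, block):
        headers = []
        for kind, val in block:
            if kind == "literal":
                self.table.append(val)
                headers.append(val)
            else:
                headers.append(self.table[val])  # fails if blocks arrive out of order
        return headers

enc = Encoder()
b1 = enc.encode([("user-agent", "x")])  # defines table entry 0
b2 = enc.encode([("user-agent", "x")])  # only references entry 0

dec = Decoder()
dec.decode(b1)
dec.decode(b2)      # in order: works fine

dec2 = Decoder()
try:
    dec2.decode(b2)  # out of order: entry 0 doesn't exist yet
except IndexError:
    print("out-of-order block could not be decoded")
```

This is the ordering dependency that the QUIC working group's header compression work loosens.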

~~~
breatheoften
> For the applications it’s like you can open as many TCP connections as you
> want without extra overhead from handshakes and rate control start.

This sounds a fascinating claim — is the idea that the programming model when
implementing QUIC server applications will allow one to easily aggregate
distinct server endpoints which automatically share or amortize the overhead
of establishing encryption and congestion detection between all the logical
streams used by the application across all the involved nodes?

~~~
tveita
I mean just between two endpoints, sorry for being imprecise. So instead of
opening X TCP connections to a server to run requests in parallel, you'd use
one QUIC connection with multiple streams. Much like HTTP 2 does, but using
UDP instead of TCP so streams can be ordered and retransmitted separately.
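The difference can be shown with a toy model (purely illustrative, not a real TCP or QUIC implementation): when all streams share one ordered sequence, a gap left by a lost packet stalls every stream behind it; with per-stream sequence spaces, only the stream that lost data waits.

```python
# Toy head-of-line-blocking model: two streams multiplexed on one wire,
# and one of stream A's packets is lost in transit.

WIRE = [("A", 0), ("B", 0), ("A", 1), ("B", 1), ("A", 2), ("B", 2)]
LOST = {("A", 1)}  # one of stream A's packets is dropped

received = [p for p in WIRE if p not in LOST]

def tcp_like(wire, received):
    """Deliver the in-order prefix of the single shared sequence."""
    delivered = []
    for pkt in wire:
        if pkt in received:
            delivered.append(pkt)
        else:
            break  # a gap anywhere stalls everything behind it
    return delivered

def quic_like(wire, received):
    """Deliver each stream's own in-order prefix independently."""
    delivered = []
    for sid in {s for s, _ in wire}:
        for pkt in [p for p in wire if p[0] == sid]:
            if pkt in received:
                delivered.append(pkt)
            else:
                break  # only this stream waits for the retransmission
    return delivered

print(tcp_like(WIRE, received))   # stream B is stuck behind A's loss
print(quic_like(WIRE, received))  # stream B delivers all of its data
```

In the TCP-like case stream B delivers nothing past the gap even though all of B's packets arrived; in the QUIC-like case B is unaffected.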

------
mobilemidget
Seeing how long we have used HTTP 1.0/1.1 (and still do), and with HTTP/2 far
from fully adopted, I'm somewhat surprised to see v3 being discussed already
and v4 being mentioned in there.

Though maybe it requires way more future vision and planning than I expect.

~~~
nextweek2
What you are seeing is someone picking up the baton and running with it. HTTP
1.1 has drawbacks which we've been living with for far too long. Getting
people used to change was step one; that was HTTP/2. Now we're in a position
to fix pain points.

I'd also like to see updates to IMAP, SMTP and FTP to name a few.

~~~
tptacek
The most reasonable "update" to FTP would be to formally replace it with HTTP,
because FTP is an awful protocol --- maybe the worst one in common use ---
that deserved to die off decades ago.

~~~
skissane
FTP includes support for record-oriented files (STRU R, MODE B). This mostly
isn't supported by FTP clients/servers on Unix-like platforms or Windows, but
it is on those platforms which have record-oriented filesystem support (IBM
mainframes, IBM i, Unisys mainframes, OpenVMS RMS, etc.) Although one could
standardise a mechanism for transferring record-oriented files over HTTP, no
such standard has been widely adopted. If someone wants to transfer a record-
oriented file from e.g. VMS to z/OS and have the record boundaries kept (and
with the necessary ASCII-EBCDIC conversion applied), FTP is the only widely
adopted standard that can do that.
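For the curious, the record-keeping lives in FTP's block mode (MODE B) framing from RFC 959: each block carries a descriptor byte whose bits mark end-of-record (128) and end-of-file (64), plus a 2-byte length. A minimal sketch, assuming for simplicity one record per block (real transfers also involve charset conversion, restart markers, and records spanning blocks):

```python
import struct

EOR, EOF = 0x80, 0x40  # RFC 959 MODE B descriptor bits

def encode_records(records):
    """Frame each record as one MODE B block; set EOF on the last block."""
    out = b""
    for i, rec in enumerate(records):
        desc = EOR | (EOF if i == len(records) - 1 else 0)
        out += struct.pack(">BH", desc, len(rec)) + rec  # 3-byte block header
    return out

def decode_records(data):
    """Recover record boundaries from the block headers (one record per block)."""
    records, off = [], 0
    while off < len(data):
        desc, length = struct.unpack_from(">BH", data, off)
        off += 3
        records.append(data[off:off + length])
        off += length
        if desc & EOF:
            break
    return records
```

The point is that the boundaries travel in the framing itself, which a plain byte-stream transfer (HTTP, SFTP) simply does not carry.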

This is also why these platforms often use FTPS (FTP over TLS) instead of SFTP
(SSH-based) – SFTP doesn't include any support for record-oriented files, only
the stream-oriented files used on Unix and Windows.

~~~
kbenson
> Although one could standardise a mechanism for transferring record-oriented
> files over HTTP

Or you could just provide a simple API over HTTP, whether an actual REST
system or a single endpoint with one or two params and well-defined inputs
(a CGI, basically). Why bother formalizing a standard when the tools to
handle this are so ubiquitous (Apache+$LANG on the server, cURL or wget on
the client)?

> SFTP doesn't include any support for record-oriented files, only the stream-
> oriented files used on Unix and Windows.

That's because SFTP isn't really FTP (in the protocol sense) at all. It's just
a specialized shell started after an SSH session/tunnel is created.

That it includes FTP in the name is really just marketing because they wanted
to supersede the real FTP. In that respect, it makes sense for them to just
cover what 99% of FTP users needed and stop.

~~~
skissane
curl and wget on the client can't easily do this, when the client is another
mainframe/minicomputer OS with a record-oriented filesystem. I don't believe
they have any platform-specific code to support record-oriented files. You
probably can get it to work with external configuration (e.g. on z/OS, using
JCL to invoke curl/wget with a DD statement which sets the necessary dataset
parameters.) But FTP-over-TLS is already a well-documented and well-understood
technology in mainframe environments. What possible advantage could one get by
replacing it with something hacked together with Apache/curl/wget?

~~~
kbenson
> I don't believe they have any platform-specific code to support record-
> oriented files.

Yes, I was assuming a more traditional client, which is probably a mistake on
my part.

> What possible advantage could one get by replacing it with something hacked
> together with Apache/curl/wget?

Well, there's always the obvious one, which is firewalls are much easier to
deal with, since there aren't two separate ports in use, so you get rid of a
whole class of network and firewall errors that are quite common.

~~~
skissane
> Well, there's always the obvious one, which is firewalls are much easier to
> deal with, since there aren't two separate ports in use, so you get rid of a
> whole class of network and firewall errors that are quite common.

If possible, one should use extended passive mode (EPSV) over TLS. Then, we
are just talking about two ports instead of one, without any of the connection
tracking complexity on middle-boxes that active mode or non-extended passive
mode can require (such as rewriting the IP address in the PASV command
response to implement NAT). And then, you have to wonder, if you have to
substantially change the software in use at the client and server (and
possibly even write custom code, per your Apache+$LANG suggestion), are those
significant changes really worth it just to save on one extra port open on the
firewall?
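The middlebox-friendliness of EPSV is easy to see in the RFC 2428 reply format: it carries only a port, never an IP address, so a NAT has nothing to rewrite. A toy parse of the reply (sketch only; a real client also has error handling and uses the control connection's address for the data connection):

```python
import re

def parse_epsv(reply):
    """Return the data-connection port from an EPSV '229 ...' reply, else None.

    Per RFC 2428 the reply contains (<d><d><d><port><d>) where <d> is a
    delimiter chosen by the server, almost always '|'.
    """
    if not reply.startswith("229"):
        return None
    m = re.search(r"\((.)\1\1(\d+)\1\)", reply)
    return int(m.group(2)) if m else None

print(parse_epsv("229 Entering Extended Passive Mode (|||6446|)"))  # 6446
```

Compare that with PASV, whose reply embeds the server's IP address and therefore invites the middlebox rewriting described above.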

~~~
kbenson
> And then, you have to wonder, if you have to substantially change the
> software in use at the client and server (and possibly even write custom
> code, per your Apache+$LANG suggestion), are those significant changes
> really worth it just to save on one extra port open on the firewall?

Oh, sorry, I didn't make it clear before. I'm fully admitting that in the case
of record-oriented file requirements, especially on the client side, FTPS
probably doesn't have a better solution available.

I was just noting that a vastly simplified firewall configuration that
doesn't require multiple open ports (or somehow reading the PORT/PASV or
EPRT/EPSV commands and opening ports dynamically) is an advantage. It doesn't
necessarily make up for the record-oriented storage problems that would have
to be dealt with in those cases, though.

------
ggm
Don't people read about the OSI/ISO 7 layer model any more?

The biggest deliverable ( _to me at least_ ) of QUIC is the formalism of a
"session" layer which is transport binding agnostic: If you move to alternate
underlying IP addresses, it can recover and continue.

Sessions were big in OSI. The Internet application/protocol stack basically
ignored them, for reasons of NIH, complexity, and place-and-time. But a
session layer is inherently _useful_.

UDP? Schmoo Dee Pee. What matters to me, is a session layer with cryptography.

(OK: taste-testing the underlying fragmentation barriers, which was in an
earlier QUIC draft, was interesting, because pMTU discovery is broken.)

------
detaro
Seems to me there is a difference between "we're gonna propose that naming"
and "will most likely be"?

That said, it doesn't seem like a good idea to me, since it's less of a clear
protocol evolution. A variation of the http/2 name would IMHO make more sense,
e.g. http/2q or something, since it's not changing the HTTP semantics.

~~~
bpicolo
To be fair, it helps that this is coming from the chair of the IETF HTTP
working group.

~~~
arusahni
That was my thought in constructing the title for this post. Apologies if I
failed to capture the appropriate level of nuance.

~~~
detaro
I might very well be wrong with my interpretation of the politics involved,
and you may be right :D

------
maxfurman
[https://en.wikipedia.org/wiki/QUIC](https://en.wikipedia.org/wiki/QUIC), for
those like me wondering what it is.

I think it's funny that the Q in QUIC stands for Quick. Naming things is hard.

~~~
Cyph0n
That reminds me of recursive acronyms[0].

Some examples:

* GNU: GNU's Not Unix

* cURL: Curl URL Request Library

* WINE: WINE Is Not an Emulator

[0]:
[https://en.wikipedia.org/wiki/Recursive_acronym](https://en.wikipedia.org/wiki/Recursive_acronym)

~~~
diydsp
From the old days when it ran on slow hardware: Emacs Makes A Computer Slow

~~~
teddyh
No, that’s a _joke_. The ones you commented on are all the _official
definitions_.

------
mychele
From what I understand, this email is proposing to update the name of the RFC
for HTTP over QUIC, not QUIC itself.

------
QuadrupleA
This was the first I'd heard of QUIC. Having dealt a lot with crypto
libraries, HTTPS, TLS, etc., it sounds super promising: it gets rid of a lot
of old-cipher cruft and TLS latency, bakes security into the transport,
multiplexes connections, etc. Some really cool ideas there.

~~~
ori_b
Let's see a few independent implementations before we start using it. As far
as I can tell, this is also horrendously complicated, and I'm not aware of any
implementations outside of Google's.

~~~
wmf
[https://github.com/quicwg/base-drafts/wiki/Implementations](https://github.com/quicwg/base-drafts/wiki/Implementations)

~~~
nicoburns
This one wins the best name competition
[https://github.com/ghedo/quiche](https://github.com/ghedo/quiche)

------
anentropic
I had to disable QUIC in Chrome a few days ago; I was completely unable to
access YouTube, and google.com search only worked intermittently.

maybe they need to make it work first

~~~
DanielDent
One of the ways to get it working is to get more people using it... QUIC is
implemented with a number of fallbacks to deal with broken networks, but at
some point it's probably better to just let things break and force network
operators to fix their broken networks.

------
ArtWomb
Google Cloud introduced QUIC support recently for load balancing

[https://cloudplatform.googleblog.com/2018/06/Introducing-QUI...](https://cloudplatform.googleblog.com/2018/06/Introducing-QUIC-support-for-HTTPS-load-balancing.html)

Adoption of cloud gaming is accelerating. And we are already starting to see
custom protocols in the wild

[https://blog.parsecgaming.com/a-networking-protocol-built-fo...](https://blog.parsecgaming.com/a-networking-protocol-built-for-the-lowest-latency-interactive-game-streaming-1fd5a03a6007)

~~~
the_clarence
QUIC is effectively TCP 2.0. It is not a lossy protocol, which is what games
need.

------
plopz
Not being familiar at all with QUIC other than that it has something to do
with UDP: would this have any impact on being able to send/receive UDP from
the browser for games?

~~~
the_clarence
QUIC is like TCP 2.0. So it won't help games

~~~
Matthias247
Even if QUIC brought along an unreliable stream option, the browser APIs
still only expose HTTP semantics via XHR and fetch. So those would need to be
extended too.

~~~
Dylan16807
I would imagine it as something like an option on websockets, so it wouldn't
take much API work.

Actually doesn't webRTC already have unreliable channels?

~~~
Matthias247
Sure there are possibilities, but they all require additional standardization
(which takes time and effort).

WebRTC has unreliable data channels, but they are based on sctp/dtls/udp.
Making them work over quic streams might be possible, but also requires a new
standardization effort.

If someone does it I would actually prefer an API outside of webrtc, since
that one carries a lot of complexity for signaling and requires again a few
extra protocols (ice, stun, turn, etc). An unreliable/unordered client to
server protocol and api could potentially be much simpler.

~~~
Orphis
There is a WebRTC over QUIC experiment, the standard document can be found
here: [https://w3c.github.io/webrtc-quic/](https://w3c.github.io/webrtc-quic/)
It is currently not a work item of the W3C WebRTC WG, but we hear about it
every time we meet.

The current focus right now is to deliver WebRTC 1.0; when that is done, we
may reconsider it.

~~~
pthatcherg
At TPAC last week the WebRTC WG decided to adopt the spec, although it needs
to be verified on the mailing list, as there were some members present not in
favor of adopting it.

(I'm an editor of the document and am in favor of it).

------
LinuxBender
Somewhat OT, if v3 has not yet been ratified, is there a chance to get SRV
record support added for http? That could indirectly tie into QUIC
optimization.

~~~
londons_explore
The goal would be to use an SRV record rather than an A record for HTTP?

What's the benefit?

~~~
wmf
Using ports other than 443.

~~~
teddyh
And load balancing without a separate server acting as a load balancer in
front. Just announce a bunch of servers and every client will pick one, with
a ratio between servers that you can adjust! No more proxy!

~~~
LinuxBender
Exactly! This also means the clients become more self-healing. Throw a couple
of Anycast IPs out there, add some SRV records, and entire datacenter routing
issues could be side-stepped by the client. That is something load balancers
can't even directly handle. Today we hack around it using GSLB and
short-lived DNS, which makes DNS DDoS more effective.
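What's being asked for already exists for other protocols in RFC 2782 SRV records: clients sort targets by priority, then choose among equal-priority targets at random, proportionally to weight. A sketch of that client-side selection (the hostnames are made up, and the real RFC 2782 algorithm is slightly more involved about zero weights):

```python
import random

# Hypothetical SRV record set: (priority, weight, port, target)
records = [
    (10, 60, 443, "a.example.com"),
    (10, 20, 443, "b.example.com"),
    (10, 20, 443, "c.example.com"),
    (20,  0, 443, "backup.example.com"),  # used only if priority 10 is gone
]

def pick_srv(records):
    """Lowest priority wins; ties broken by weight-proportional random pick."""
    best = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best]
    point = random.uniform(0, sum(r[1] for r in candidates))
    for priority, weight, port, target in candidates:
        point -= weight
        if point <= 0:
            return target, port
    return candidates[-1][3], candidates[-1][2]  # float-rounding safety net

target, port = pick_srv(records)
```

With weights 60/20/20, `a.example.com` gets roughly three times the traffic of the other two, and the backup host is never chosen while any priority-10 host is announced. The "no more proxy" point above is exactly this: the selection logic moves into the client.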

------
KaiserPro
Why does it still adhere to the weird semantics of HTTP, when in reality HTTP
has evolved into a file delivery protocol, with a rich inband messaging
system?

Surely we have an opportunity to split HTTP into a file delivery system and a
socket-like streaming system?

------
the_other_guy
Does QUIC provide any advantage for wireless connections over TCP/TLS? Also,
does QUIC use DTLS or something that's more customized?

~~~
londons_explore
It is especially beneficial for wireless connections, since it handles packet
loss better than HTTP/2 and does flow control better than many HTTP/1.1
connections.

------
fulafel
Meanwhile we are still waiting for http 1.1 pipelining to be enabled in
browsers.

~~~
wmf
Stop waiting; there's no reason to work on this any more. If you want
pipelining that bad, use HTTP/2.

------
Lotus123
It seems they are getting ahead of themselves... why would it be a good idea
to burn the http/3 name now, given the stage of development QUIC is in?

~~~
xori
Names aren't sacred. Windows skipped 9 for technical reasons, iPhone skipped 9
for marketing reasons. The only people impacted by a skipped http/3 would be
technical people upgrading from /2 to /4 in 5 years.

~~~
m_eiman
I really, really hope we're not going to be replacing fundamental low-level
infrastructure code every five years.

Keep the web frontend culture contained where it is now, don't let them ruin
everything else too.

~~~
arusahni
I don't think of subsequent HTTP versions as replacing the previous ones -
merely augmenting.

~~~
marcosdumay
So we'll need 3 servers instead of 1 just for serving HTTP?

~~~
dagenix
Nginx can support HTTP 1.0, 1.1, and 2 all at once. I suspect one day it will
support 3 as well, and I'd imagine any other server software of consequence
will too.

~~~
marcosdumay
Yes, and the more protocols running, the less likely it is to ever be replaced
some day. Who has the time to write two (1.0 and 1.1 are basically equal)
servers for a minimal package?

If somebody still sold HTTP servers, I would look at how this one was bribing
the standardization process. But it looks like people are doing it freely.

~~~
dagenix
I really can't figure out what you are saying. I don't see any evidence of
bribery. And the technologies we're talking about have pretty legitimate
reasons for existing. So, ...?

------
dustingetz
Is QUIC what is disrupting my video calls when I type into a google doc from
chrome, or any other chrome->google interaction?

~~~
vntok
No.

------
rebelde
Interesting. Google has been pushing TLS on the world, with its additional
750ms round-trip time to clients on the other side of the world, while taking
advantage of their ownership of Chrome to use QUIC to deliver very fast
connectivity for their users. That is a sweet competitive advantage that they
have over competing services. EDIT: Am I misunderstanding this?

~~~
tialaramex
If you are False Start compatible (allow modern ECDHE, etcetera and speak
HTTP/2, don't use crappy middleboxes from Cisco, Palo Alto) you get 1RTT with
TLS 1.2. If you do TLS 1.3 you always get 1RTT.

You can't avoid the one round trip on first connections, you pay that in QUIC
too.

~~~
xfs
There's 0-RTT, but once the connection setup cost is amortized by preconnects
and multiplexing, being 0 or 1 RTT is not important enough to justify its
engineering ugliness.
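The round-trip accounting behind these comments, as a rough sketch (the RTT value is hypothetical; DNS, loss, and middlebox effects are ignored):

```python
rtt_ms = 150  # hypothetical round-trip time to a distant server

# Handshake round trips before the client can send its first request:
handshakes = {
    "TCP + TLS 1.2":               3,  # 1 RTT TCP + 2 RTT TLS
    "TCP + TLS 1.2 (False Start)": 2,  # request rides with the client Finished
    "TCP + TLS 1.3":               2,  # TLS 1.3 handshakes in 1 RTT
    "QUIC (first connection)":     1,  # transport + crypto handshakes combined
    "QUIC (0-RTT resumption)":     0,  # request goes in the very first flight
}

for name, rtts in handshakes.items():
    print(f"{name}: {rtts} RTT = {rtts * rtt_ms} ms before the first request")
```

This is the sense in which the first connection always costs a round trip somewhere, while 0-RTT only applies to resumed connections.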

------
sametmax
When http2 is widely adopted by most small companies, and not just big ones,
yeah, let's talk about v3.

But right now, http2 adds complexity: a lot of frameworks are not compatible
with it and must be fronted by a proxy that is, and the perf benefit is not
obvious at all.

~~~
xroche
The perf benefits are mainly for the big companies [namely Google, Mozilla,
and a few others]. Hard to see any real ones for the rest of the world.

~~~
drewg123
The problem is that QUIC, due to using UDP and not trusting the NIC to do
segmentation or crypto offload, is incredibly expensive on the server side.
You're paying costs upwards of a factor of 4 as compared to TCP for quality of
experience gains that are questionable when compared to TCP.

This doesn't matter for small sites, or for clients. But when you're serving
190Gb/s from a single box, paying 4x per byte really hurts.

~~~
ldng
Which Google can afford. Oh look, your small independent cloud provider is
slow, come to Google Cloud. They've been subverting standards to push up the
cost of entry, and very few people have been calling them out ...

~~~
dagenix
The article is literally about the IETF standards body making a decision. If
working with the IETF to propose and refine a new technology is, by your
definition, "subverting standards" I really am at a loss for what a company
should be doing.

~~~
ldng
It's not one standard. It's about having a huge head start on all the
important internet standards and pushing their cloud interests. It's designed
to work well at their scale, on their network, which you connect to directly.
The standards group should take small and medium providers into account, but
they don't really. Who has the resources to follow Google's multi-azimuth
"standardisation" pace?

~~~
dagenix
> It's not one standard.

Do you have specifics?

> It's about having a huge head start on all the important internet standards
> and pushing their cloud interests

Is Google not supposed to be trying to improve the services they offer?

> It's designed to work well at their scale, on their network, which you
> connect to directly.

I don't see any reason why QUIC won't be perfectly usable on most networks,
and that is what the article was about. I'm not sure what standard Google is
spearheading that is only going to be useful on networks their size and will
somehow harm other networks.

> The standards group should take small and medium providers into account,
> but they don't really.

Again, what is an example of this?

> Who has the resources to follow Google's multi-azimuth "standardisation"
> pace ?

I really don't see what the issue here is - Google is spearheading new
standards. In doing so, it's working with the IETF to develop those standards.
I'm not aware of any indication that it's working in bad faith or subverting
IETF processes. The IETF process is fairly open. No one can follow everything
happening in technology all the time - too much is going on. It seems like you
are saying that Google should stop or artificially slow its work with
standards bodies so that outside observers unaffiliated with those bodies can
follow more closely?

------
mtgx
If we're really looking to start from scratch with a modern protocol and break
compatibility then HTTP v3 should be something like
[http://noiseprotocol.org](http://noiseprotocol.org) instead.

I assume the QUIC protocol also comes with some "happy accidents" that allow
Google to more easily collect data on us, too.

~~~
dagenix
> I assume the QUIC protocol also comes with some "happy accidents" that
> allow Google to more easily collect data on us, too.

This is just conspiracy theory nonsense. If QUIC has such a flaw, it has an
open spec and you are free to go find it and point it out.

~~~
teddyh
Pointing it out would change nothing; Google would implement it anyway.

~~~
dagenix
So? If what is claimed to be true is true, maybe Apple, Mozilla, and Microsoft
won't implement it - and that would be a big deal to the alleged nefarious
plot. But, that won't happen if no one speaks up. That is, of course, if this
isn't just nonsense speculation.

