
Working Group Last Call: QUIC protocol drafts - pimterry
https://mailarchive.ietf.org/arch/msg/quic/F7wvKGnA1FJasmaE35XIxsc2Tno/
======
api
I'm really happy to see QUIC pushed, not because I personally have a use for
it but because it's a battering ram against network non-neutrality and what
I've come to call "network nerfing."

Now all those ISPs, IT departments, and cloud providers that de-prioritize or
outright block UDP will get "bug reports" about things being "slow" or not
working.

Now all those traffic shaping middle-boxes are worthless, and your ISP can no
longer spy on your requests to gather marketing data about you.

The Internet is an IP network, not a TCP/80 and TCP/443 network.

~~~
gruez
>Now all those ISPs, IT departments, and cloud providers that de-prioritize or
outright block UDP will get "bug reports" about things being "slow" or not
working.

Doubt it. I suspect that browsers have some sort of happy eyeballs algorithm
for determining whether to use http/3 specifically because some networks don't
handle it well. In those cases it'll fall back to http 1.1.
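That kind of fallback race can be sketched in a few lines. This is a toy
illustration, not any browser's actual algorithm (real implementations also
cache which origins advertise and successfully speak QUIC): the client gives
the UDP-based attempt a short head start and switches to the TCP path if it
hasn't won in time. The function names here are made up.

```python
import asyncio

async def try_http3():
    # Stand-in for a QUIC handshake on a network that silently drops UDP:
    # the attempt just hangs and never completes.
    await asyncio.sleep(10)
    return "h3"

async def try_http2():
    # Stand-in for a TCP+TLS handshake that succeeds quickly.
    await asyncio.sleep(0.05)
    return "h2"

async def connect(grace=0.3):
    # Give HTTP/3 a head start; if it hasn't completed within the grace
    # period, cancel it and fall back to the TCP-based protocol.
    h3 = asyncio.ensure_future(try_http3())
    try:
        return await asyncio.wait_for(asyncio.shield(h3), grace)
    except asyncio.TimeoutError:
        h3.cancel()
        return await try_http2()

protocol = asyncio.run(connect())
print(protocol)  # the blocked-UDP path falls back to "h2"
```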

>Now all those traffic shaping middle-boxes are worthless

How so? how is TCP 80/443 and UDP 80/443 harder to traffic shape than TCP
80/443 alone?

> and your ISP can no longer spy on your requests to gather marketing data
> about you.

That's more about encryption (i.e. HTTPS) than about switching to HTTP/3.
Also, encryption is already mandatory for HTTP/2 (for most browsers).

~~~
humblebee
Curious, why fallback to /1.1 over /2?

~~~
tialaramex
So far as I can see they won't. Because QUIC (and thus HTTP/3) is always
encrypted your fallback is always a TLS connection.

Modern TLS agrees the sub-protocol to use (in this case h2 = HTTP/2) early
with ALPN. If that ALPN sub-protocol isn't available that same connection just
becomes HTTP/1.1 (over TLS) instead.

So there's no reason to fall all the way to HTTP/1.1 without asking if HTTP/2
is available.
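The server-side ALPN choice (RFC 7301) is simple enough to sketch: the server
walks its own preference list and picks the first protocol the client also
offered, all within the one TLS handshake. A toy model, with the function name
invented here:

```python
def select_alpn(server_prefs, client_offers):
    """RFC 7301-style selection: the server walks its own preference
    list and returns the first protocol the client also offered."""
    for proto in server_prefs:
        if proto in client_offers:
            return proto
    return None  # a real server may abort with no_application_protocol

# A server that speaks h2 negotiates it with an h2-capable client...
print(select_alpn(["h2", "http/1.1"], ["h2", "http/1.1"]))  # -> h2
# ...while a server without h2 lands on HTTP/1.1 over the same connection.
print(select_alpn(["http/1.1"], ["h2", "http/1.1"]))        # -> http/1.1
```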

------
eadan
How well does QUIC perform over networks with relatively high packet loss? It
seems like Aspera
([https://www.ibm.com/products/aspera](https://www.ibm.com/products/aspera))
is the industry standard for high performance WAN transfers, but it's a
proprietary protocol. I'm wondering if QUIC performs better than HTTP1/2 in
this respect?

~~~
atesti
How does Aspera work?

How is it possible to be 1000x faster? Is it?

TCP works by slowing the data rate when data loss happens. On purpose, to be
fair!

I think it's not that hard to write a program/protocol that blasts out a big
file over UDP, sending all the parts as packets, then waits for the receiver
to assemble a list of the missing packets and send it back, and has the sender
blast the missing parts out again at a high rate.

But this would be at the expense of all the other TCP connections.
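A toy version of that blast-and-NACK scheme fits on a page. This sketch runs
both ends over loopback in one process and fakes the packet loss; a real tool
would add pacing, timeouts, and some form of rate control, which is exactly
the part that decides how unfair it is to everyone else:

```python
import socket
import struct

CHUNK = 1024
data = bytes(range(256)) * 40          # 10240 bytes -> 10 chunks
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(0.2)
addr = recv_sock.getsockname()
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

got = {}

def blast(seqs, drop=frozenset()):
    # Sender blasts every requested chunk at full rate, each prefixed
    # with its sequence number; `drop` simulates loss in the network.
    for seq in seqs:
        if seq not in drop:
            send_sock.sendto(struct.pack("!I", seq) + chunks[seq], addr)
    # Receiver drains whatever actually arrived.
    while True:
        try:
            pkt, _ = recv_sock.recvfrom(CHUNK + 4)
        except socket.timeout:
            break
        seq, = struct.unpack("!I", pkt[:4])
        got[seq] = pkt[4:]

blast(range(len(chunks)), drop={2, 7})   # first pass "loses" two packets
missing = sorted(set(range(len(chunks))) - got.keys())
blast(missing)                           # receiver NACKs; sender re-blasts
reassembled = b"".join(got[s] for s in range(len(chunks)))
```

Note there is no back-off anywhere: the sender transmits as fast as the loop
runs, which is the "expense of all the other TCP connections" mentioned above.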

Wasn't QUIC changed to be so-called "TCP friendly", meaning it has the same
back-off behaviour as TCP, so that in a crowded hotspot every TCP stream has a
fair chance?

(On the other side, Google and some other companies use a huge initial TCP
window size and try (or tried?) to send e.g. the whole Google homepage in
about eight 1500-byte packets at once, aiming for a better UX than others,
again at the expense of other TCP streams.)

Looks like using Aspera is unfair and only works as long as only a few people
do it.

~~~
eadan
My understanding is that the protocol underlying Aspera (FASP) uses UDP for
the main data transfer and a TCP connection for coordination. By using UDP for
data transfer it's not restricted to the ACK ranges imposed by TCP which can
hinder throughput on networks with relatively high packet loss. Its throughput
is not necessarily achieved by being a "bad citizen" but by having full
control over how and when it communicates lost packets.

Since QUIC is also over UDP, perhaps we now have more flexibility on ACK
windows etc.

Btw., there are open protocols like Tsunami UDP
([https://en.wikipedia.org/wiki/Tsunami_UDP_Protocol](https://en.wikipedia.org/wiki/Tsunami_UDP_Protocol))
that try to fill the same niche.

~~~
the8472
My understanding is that window scaling and SACKs enable TCP to detect losses
within large segments too. The only limitation is that most congestion
controllers throttle back when detecting packet loss. Newer latency-based
controllers don't suffer from that problem.

~~~
eadan
This is interesting. I haven't seen much in regard to comparing TCP+BBR to
FASP other than a masters thesis which suggests FASP outperforms on
transferring large files over long distances (which is exactly its intended
purpose). But, I wonder if splitting a file over multiple QUIC connections and
re-assembling at the other side would come closer to the performance of FASP?
Could be a fun experiment, I might try it!

~~~
the8472
An optimal CC should have high utilization with a single stream¹ and behave
fairly to other connections on the network. Measuring multiple streams misses
the point of the exercise.

> other than a masters thesis which suggests FASP outperforms on transferring
> large files over long distances

I found that one too. I would take it with a grain of salt since it uses scp
for data transfer (which has its own flow control and may be the limiting
factor here) and doesn't list the values of some important tcp settings, e.g.
tcp_wmem, which can be essential for transoceanic connections.

¹ ignoring the case where a single CPU core can't move bytes fast enough to
saturate a link.
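The tcp_wmem point matters because of the bandwidth-delay product: a single
stream can only fill the pipe if the send buffer can hold a full round trip's
worth of data in flight. A quick back-of-the-envelope with illustrative
numbers:

```python
# Bandwidth-delay product: the send buffer a single TCP stream needs to
# keep a long fat pipe full. The link speed and RTT are illustrative.
bandwidth_bps = 1_000_000_000   # 1 Gbit/s link
rtt_s = 0.150                   # ~150 ms transoceanic round trip
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"{bdp_bytes / 2**20:.1f} MiB")   # ~17.9 MiB
```

The default tcp_wmem maximum on many Linux systems is only a few MiB, so an
untuned benchmark over such a path can bottleneck on the buffer rather than on
the congestion controller being compared.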

------
exabrial
All I would like is an unencrypted version of QUIC for low-overhead stuff
where privacy/security isn't a concern, such as experimental, CAN, or
air-gapped networks. This has been staunchly rejected, unfortunately :/

~~~
bawolff
There are reasons for that beyond crypto-everywhere goodness - in the main use
case, crypto helps prevent middleboxes from messing with things, which is
critical for its success.

~~~
DaiPlusPlus
Funny that - I thought the advantage of unencrypted HTTP was so that
middleboxes could do things like caching (squid, etc) and outbound deep packet
inspection to prevent information leaks in corporate environments.

~~~
axaxs
Middleboxes can still do that, assuming they have a valid SSL certificate. But
that obviously makes it a -trusted- middlebox, not say, your ISPs middlebox.

~~~
ta17711771
That almost sounds like security.

~~~
tialaramex
_If_ they actually do this correctly† then sure, you get security in the sense
that you decided to trust My Corp Monkey Proxy to do whatever corporate
monkeying, and nobody else can meddle with that.

†The correct way to proxy HTTPS is wire a client and server with your meddling
code in between, your client negotiates a TLS connection to say
[https://news.ycombinator.com/](https://news.ycombinator.com/) and it decides
exactly what is sent and can see in plaintext exactly what's received. Users
make TLS connections to your server and it presents them with a certificate it
made, maybe claiming to be news.ycombinator.com and signed by My Corp Monkey
Proxy then passes through some requests to the client we described.

But it's _much_ cheaper not to do this, and all of the other things you might
do to achieve the same goal are insecure or break the protocol or both. So
there's a big financial incentive to destroy security, while often ironically
claiming to improve security, e.g. anti-phishing. For example, choosing random
numbers is hard yet at the same time essential for security - so several major
vendors sold products that just didn't. When they turned A->B into A->M->B
with their product as M, they'd just re-use the random numbers from the other
conversation in the pair. This had been destroying security for all their
customers for years, but we only found out they were doing it because it broke
with TLS 1.3 final.

Specifically when a TLS 1.3 server (but not draft versions for important
compatibility reasons) talks to a client that doesn't agree to TLS 1.3 it
scribbles the letters "DOWNGRD" over part of the random number field. Since
the client supposedly doesn't know TLS 1.3 it shouldn't care, DOWNGRD is a
ludicrously unlikely "random" choice but so is "FCKTRMP" or "KICKASS" and
those don't mean anything in older TLS versions either. _But_ if the client
really does speak TLS 1.3 and has somehow been tricked into negotiating an old
version, this weird choice of "random" number signals it's a trick and it will
disconnect.
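RFC 8446 §4.1.3 makes the "scribble" concrete: the sentinel is the ASCII
bytes "DOWNGRD" plus a version byte, written into the last 8 bytes of the
server random. A minimal sketch of the client-side check (the function name
and surrounding glue are invented here):

```python
import os

# RFC 8446 §4.1.3: a TLS 1.3-capable server negotiating an older version
# sets the last 8 bytes of ServerHello.random to one of these sentinels.
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")           # "DOWNGRD" + 0x01
DOWNGRADE_TLS11_OR_OLDER = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00
SENTINELS = (DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_OLDER)

def looks_like_downgrade(server_random: bytes) -> bool:
    # A TLS 1.3-capable client that ends up negotiating TLS 1.2 or below
    # must check for either sentinel and abort the handshake if present.
    return server_random[-8:] in SENTINELS

# A genuinely old server fills all 32 bytes with fresh randomness...
honest_old_server = os.urandom(32)
# ...while the broken proxies copied the upstream 1.3 server's random
# field verbatim, sentinel included.
broken_proxy = os.urandom(24) + DOWNGRADE_TLS12
print(looks_like_downgrade(honest_old_server), looks_like_downgrade(broken_proxy))
```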

The affected "proxies" would tell a nice shiny new Chrome or Firefox (which
can speak TLS 1.3 and thus are looking out for DOWNGRD) that they only knew
TLS 1.2, fair enough. And they'd tell servers, say www.google.com, the same
thing. But the TLS 1.3 capable servers would scribble DOWNGRD into their
random number field, and then because these "proxies" weren't actually
assembled from a separate client and server but just re-used all the numbers,
the DOWNGRD gets copied into data received in Chrome, and it disconnects
because it has seemingly detected an active attack.

~~~
axaxs
This is all great information. FWIW, managing and automating anycasted
'middleboxes' for others is part of my work. The scary part, to me, is what
paying customers demand. Largely, they don't want to fix broken or legacy
systems, but just want an A+ on SSL-Labs. Of course, you can imagine the
security implications of downgrading, or worse, terminating the TLS connection
at the proxy. The end user has no idea their data is being sent in plaintext
over the open web. I won't do such things, but I'm sure plenty of other
companies are willing.

------
jakeogh
HTTP/3 spin bit:
[https://news.ycombinator.com/item?id=20990754](https://news.ycombinator.com/item?id=20990754)

------
baby
My only problem is: why force TLS when we have better protocols nowadays? We
did some research on using Noise (in addition to TLS), but I'm not sure anyone
really pushed for this to be considered officially (cf. nQUIC).

~~~
tialaramex
We have actual security proofs for TLS 1.3 (which is the only version offered
in QUIC)

Of course such proofs come with caveats - in particular the proofs assume your
cryptographic primitives work as intended (e.g. that AES isn't broken) and
that you've implemented the specification and not something else - but our
experience has been that it's valuable to get those proofs, and where proof
turns out to be difficult it's a valuable pointer to weaknesses in your
design.

As far as I know nobody has done such a proof for the Noise design, and so we
only have our intuition that it looks safe.

~~~
luizfelberti
AFAIK Noise has end-to-end proofs of correctness using CryptoVerif, the work
having mostly been done by the Prosecco team at INRIA. You can explore Noise
protocols on NoiseExplorer [0][1]

Also, I may be wrong again, but isn't it TLS for which, even with all the
simplifications and major changes in TLS 1.3, we only have proofs of
correctness for certain subsets of the spec and specific modes of operation?
(There's a version of the protocol targeting IoT for which this work has been
done, but I can't remember the name.)

[0] [https://noiseexplorer.com/](https://noiseexplorer.com/)

[1]
[https://rwc.iacr.org/2019/slides/NoiseExplorer.pdf](https://rwc.iacr.org/2019/slides/NoiseExplorer.pdf)

~~~
baby
Not sure why you are getting downvoted because what you say is true.

------
jacobush
I thought it would be some sort of last stand of QUIC drives.

------
mekster
Is http/3 completely transparent to the current network and no network that is
capable of handling http/1 and 2 would have any problem handling it?

~~~
shockinglytrue
Far from it. It's quite likely going to be over 5 years, if not a decade,
before it would be possible to run a pure-HTTP/3 service without risking
connectivity problems.

The problem is similar to the IPv6 transition, except that thanks to the
browser monopolies, network providers can at least quickly feel significant
pressure to fix their networks. But there will always be some networks that
will never be fixed.

edit: for those inexplicably downvoting this, please pay attention to the
parent comment's question, and the Internet's long chequered history of
adopting new protocols in _any_ setting. TCP port 443 isn't going to magically
disappear overnight, or indeed any time soon. This is evidently true because
it has been true for all prior transitions. Mail still flows to many places
unencrypted despite the standardization of STARTTLS 21 years ago. The long
tail has only gotten much longer in those intervening 21 years.

~~~
cryptonector
HTTP/2 and HTTP/3 do not change the semantics of HTTP. That means you can run
reverse proxies to serve HTTP/1 services as /2 and/or /3 and vice versa. As a
result the transition will be a lot easier than the transition to IPv6. I
expect that the transition in corporate networks will be faster -- the
opposite of the IPv6 case -- because there is a lot of appeal to HTTP/3.

~~~
user5994461
You realize that HTTP/2 is still nowhere near being adopted by corporations?
It's really far-fetched to plan for HTTP/3 and expect any adoption.

~~~
cryptonector
The fact that UDP involves much smaller PCBs (protocol control blocks) than
TCP alone will drive adoption of HTTP/3, because it will free up a fair bit of
memory.

More availability of HTTP version gateways in load balancers and other reverse
proxies is all that's needed, and that's coming along.

~~~
microcolonel
More specifically _no PCB,_ for UDP itself.

~~~
cryptonector
It's not nil. For "connected" UDP sockets, it's smaller than TCP's, but not
nil because, well, buffers. And for non-connected UDP sockets there's still
buffers. The main thing is that you can have much less buffer space because
you might always be willing to drop packets. Ultimately you can have much
lower memory pressure from those buffers and the smaller PCBs.

------
The_rationalist
I wonder how different it would be if HTTP/3 were based on SCTP instead of
UDP.
[https://en.m.wikipedia.org/wiki/Stream_Control_Transmission_...](https://en.m.wikipedia.org/wiki/Stream_Control_Transmission_Protocol)

In a parallel universe: [https://tools.ietf.org/html/draft-natarajan-http-
over-sctp-0...](https://tools.ietf.org/html/draft-natarajan-http-over-sctp-02)
Maybe HTTP/4 will experiment with this.

~~~
bawolff
It wouldn't be adopted due to middle boxes, so all the difference?

~~~
The_rationalist
_Until middleboxes support SCTP, UDP encapsulation is a possible solution_
That would mean HTTP/3 enables a path for HTTP/4 towards SCTP.

------
The_rationalist
Where are the benchmarks? HTTP/2 gave ~2x faster performance. I believe that
HTTP/3 will give far less, especially compared against TCP Fast Open.

~~~
microcolonel
"Faster" is a hard comparison to make. QUIC resolves issues that will be more
important to some networks than others.

~~~
The_rationalist
I'm talking about average browsing, e.g. of the Alexa top sites.

------
Avamander
Hm, I don't see any mention of ESNI (TLS ECH), is there a good reason it isn't
recommended/mandated by the standards?

~~~
tialaramex
eSNI isn't finished and time resolutely insists on moving in one direction, so
a document which says "Use this thing that will exist in the future" isn't
useful today.

~~~
LunaSea
Hasn't it already been implemented in Firefox?

~~~
dathinab
isn't finished => the standard for it isn't finished, so implementations might
still change quite a bit over time and might not be compatible with each
other

------
microcolonel
Hopefully the state of UDP will improve.

