QUIC as a solution to protocol ossification (lwn.net)
203 points by signa11 on Feb 9, 2018 | 120 comments



Honestly, I am kind of disappointed at QUIC.

It could have been much more...

* there is no support for non-reliable streams

* I am not keen on the variability and non-alignment of the whole header (which can go from 0 to 60+ bytes).

* it points too much towards just reducing the RTT for the initial connection (0RTT is a bad idea, especially if your application now needs to handle the special case of the connection being 0RTT or not).

* There is no support for datagrams, just datastreams.

* Perfect forward secrecy is not granted on the initial handshake (you need to explicitly do a second key exchange for that)

* Forward error correction is actually cool but just using XOR is too limited.

* The proof of ownership of the connection is one incredible hack

And these are just the things I remember from looking at it a couple of years ago. Still, it's a nice improvement over older protocols.

To those that are going to comment something like "Well, then do your own": that's exactly what I'm doing, although it's going slowly since I don't have Google's budget.


Regarding non-reliable streams: there is https://tools.ietf.org/html/draft-tiesel-quic-unreliable-str...; while it won't be part of the initial standard, the working group will likely make sure it can be added later.


did not see that, thanks for the link


> There is no support for datagrams, just datastreams.

Datagrams can be easily and effectively encoded as discrete short-lived streams. If you need streams of datagrams, you can encode your own header to do that. I see no reason why this has to be built into the protocol when the provided primitives are sufficient.

> Forward error correction is actually cool but just using XOR is too limited.

Pretty sure FEC was removed from the IETF draft.


> Datagrams can be easily and effectively encoded as discrete short-lived streams. If you need streams of datagrams, you can encode your own header to do that. I see no reason why this has to be built into the protocol when the provided primitives are sufficient.

Using discrete short-lived streams could mean a little more trouble understanding which message was in response to which other message. A stream of datagrams is already a logical container. But I guess you could track that by hand, too.

Maybe I just found an easy way to implement all of that in my protocol, so I don't see why QUIC could not. Maybe it's just something that I always end up doing by hand, so I would just be happier if it were provided. I mean, IP/UDP already has all the primitives I need, but it's not like everyone enjoys reinventing the wheel.

> > Forward error correction is actually cool but just using XOR is too limited.

> Pretty sure FEC was removed from the IETF draft.

Yes, I did not know about that. A pity; my experiments with proper FEC did wonders once a minimum packet loss rate was surpassed. Even better at high latencies.


> * there is no support for non-reliable streams

Can you have a stream-based protocol that doesn't try to be reliable? A message-oriented one I understand.

> * There is no support for datagrams, just datastreams.

because, I guess, you can always just use UDP!


I am currently building a protocol in which each stream can be any combination of (un)reliable, (un)ordered, datagram or datastream. Yes, some combinations make little sense, but it can be done.

You only need the control stream to be reliable+ordered.

UDP datagrams limit you to the size of a single packet. The basic difference between a datagram and a datastream is only whether you have a way to identify the start/end of the user's messages. Think of TCP, where you have no way to mark the beginning and end of your messages.
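To make the distinction concrete, here is a minimal sketch (my own illustration, not from any QUIC draft) of layering datagram semantics on top of a byte stream with a 4-byte length prefix:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length, giving it datagram semantics."""
    return struct.pack("!I", len(payload)) + payload

def deframe(buffer: bytes):
    """Pull complete messages out of a byte stream; return (messages, leftover bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # the rest of this message has not arrived yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer
```

Over a raw stream the receiver has to carry this state itself; a datagram-aware transport would hand back the complete messages directly.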


Have you looked at SCTP?


yes, nice protocol, except everything is cleartext :)


> there is no support for non-reliable streams

I think the point of QUIC is to provide a reliable protocol. I'm not sure if this complaint is relevant.

> I am not keen on the variability and non-alignment of the whole header (which can go from 0 to 60+ bytes).

Agree

> it points too much towards just reducing the RTT for the initial connection (0RTT is a bad idea, especially if your application now needs to handle the special case of the connection being 0RTT or not).

It's annoying because you need to try decrypting with two different keys, but it simplifies the handshake for both sides, as the state machine is now super simple

> Perfect forward secrecy is not granted on the initial handshake (you need to explicitly do a second key exchange for that)

if you're talking about 0-RTT, I think replayability is a much bigger problem than forward secrecy as forward secrecy is bound to the server config rotation, so you just need to rotate frequently.

> The proof of ownership of the connection is one incredible hack

How is it a hack? It allows the server to be stateless and it works quite well. How would you make it better?


> I think the point of QUIC is to provide a reliable protocol. I'm not sure if this complaint is relevant.

I found it easy to include support for unreliable streams in my protocol, I'm just not sure why they did not even try. I heard about a hack-proposal where you could just use a new stream for one packet and then forget about it, but it's not really the same, and I am not sure they did even that.

> It's annoying because you need to try decrypting with two different keys, but it simplifies the handshake for both sides, as the state machine is now super simple

The issue I have with it is that now it's the application that needs to be sure not to write too much without an additional round-trip; otherwise you could end up with an amplification attack. I remember reading that there was an additional call to be made by the application just for that. But the other simplifications in the handshake are actually great, yes.

> if you're talking about 0-RTT, I think replayability is a much bigger problem than forward secrecy as forward secrecy is bound to the server config rotation, so you just need to rotate frequently.

Yes, see above. But if I remember correctly, rotating keys also makes clients unable to use 0-RTT, so you can't do it too often. QUIC has a weaker PFS than TLS. Still secure, just weaker, and basically disabled unless you explicitly call a key rotation. I am just afraid that the implementations and servers out there will not use that.

> How is it a hack? It allows the server to be stateless and it works quite well. How would you make it better?

Are we talking about the same thing? I meant the entropy bit. Really dislike it. (Then again, it works, eh...).

Yep, my bad, I said "ownership of the connection" instead of "ownership of the IP".

The ownership of the connection is derived only from the connection id and crypto keys, that's a nice simplification, yes.


> I found it easy to include support for unreliable streams in my protocol, I'm just not sure why they did not even try. I heard about a hack-proposal where you could just use a new stream for one packet and then forget about it, but it's not really the same, and I am not sure they did even that.

Thinking about it, what would you want for an unreliable stream beside encryption? Isn't DTLS taking care of what you want?

> The issue I have with it is that now it's the application that needs to be sure not to write too much without an additional round-trip; otherwise you could end up with an amplification attack. I remember reading that there was an additional call to be made by the application just for that. But the other simplifications in the handshake are actually great, yes.

So, the server is not going to reply unless the client has proven that it owns the source IP address. This should prevent any amplification attack tied to 0-RTT. Or you mean the kind of amplification attack where the client is tricked into talking to the wrong server? This probably could happen in some scenarios, mmm...

I'm thinking that the client wouldn't use 0-RTT without knowing exactly who he's talking to (the cached server configuration is probably tied to the ip:port the client had to connect to the first time around).

> If I remember correctly, rotating keys also makes clients unable to use 0-RTT, so you can't do it too often.

I think 0-RTT targets people connecting to the same website multiple times per day. So I can see scenarios where you would rotate it every day? I'd be interested to know how Google configures its own servers.

I just checked: news.google.com has an STTL of 84:4e:02:00:00:00:00:00 seconds; I think this should be little-endian, so it should be around 2 days. This is how much time is left. It's not really informative actually; the SCFG (server config) contains an EXPY field with a unix timestamp of 1518350400 (2018-02-11 @ 12:00 UTC). I realize now that this doesn't tell us much more, since it doesn't tell me when the server config was signed :|

> QUIC has a weaker PFS than TLS

Are you comparing it to TLS 1.3? If you're doing a PSK-based handshake with no key exchange, they should provide the same security properties.

> I meant the entropy bit

Can you tell me what you're talking about? I don't know the spec by heart.

The cookie that the server gives to the client is an encryption of source ip and an expiration date, it just looks opaque to the client but the server can decrypt it and check it.
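As a toy illustration of such a stateless token (my own sketch using an HMAC; a real server might encrypt the fields instead, and the actual QUIC format differs):

```python
import hashlib
import hmac
import struct
import time

SECRET = b"server-local secret"  # known only to the server, rotated periodically

def make_cookie(client_ip: str, lifetime_s: int = 60) -> bytes:
    """Bind an expiry time to the client's address; the client sees an opaque blob."""
    expiry = struct.pack("!Q", int(time.time()) + lifetime_s)
    tag = hmac.new(SECRET, expiry + client_ip.encode(), hashlib.sha256).digest()
    return expiry + tag

def check_cookie(cookie: bytes, client_ip: str) -> bool:
    """Recompute the tag from the server secret; no per-client state is needed."""
    expiry, tag = cookie[:8], cookie[8:]
    expected = hmac.new(SECRET, expiry + client_ip.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    (expires_at,) = struct.unpack("!Q", expiry)
    return time.time() <= expires_at
```

Because the token is recomputable from the server's secret, the server can validate the client's address without keeping any per-connection state before the handshake completes.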


> Thinking about it, what would you want for an unreliable stream beside encryption? Isn't DTLS taking care of what you want?

No, the encryption/transport is a nice part, but I'm also working on other stuff like federated authentication. Besides, why would a developer want to use different stacks when he can use one that includes all the others? I believe protocols should be as general as possible, so I really see no reason not to include this.

> Are you comparing it to TLS 1.3? If you're doing a PSK-based handshake with no key exchange, they should provide the same security properties.

I was comparing it to a standard TLS key exchange with PFS enabled.

> Can you tell me what you're talking about? I don't know the spec by heart.

As I said I remember old information so I have not checked the latest draft, but I am sure that after the connection is set up the server and client continue to flip a single bit somewhere, and hash the result. This hash is sent by the client to the server so that the server can check that the client owns the IP and is actually receiving the data.

Don't know if they still have it, but it was there a couple of years ago.


> federated authentication

What is it?

> the server and client continue to flip a single bit somewhere, and hash the result

There is a "message authentication hash" being sent in messages and I can't see anything about that in the spec, so that might be it?

But two things about that:

* I don't think it makes much sense to prove that you own the IP after the handshake has been done. It's mostly a countermeasure to DoS attacks for the very first handshake message

* The client should actually be able to roam and change IP without having to re-negotiate a secure connection with the server, as long as it uses the same 64-bit connectionID.


>> federated authentication

> What is it?

Think Jabber, AD, Kerberos, email... those are federated environments.

Basically you tie a user to a domain, have some way to discover all domains, and design an authentication that can be trustworthy enough for the interaction of multiple domains. The end result is that your username is (more or less) trusted on other domains.

Of course, it does not mean that what I'm building is limited to federation, but it simplifies a lot of things after a while.

> There is a "message authentication hash" being sent in messages and I can't see anything about that in the spec, so that might be it?

By the name it is either that or you are talking about the MAC of each packet, but it could be that, yes.

> * I don't think it makes much sense to prove that you own the IP after the handshake has been done. It's mostly a countermeasure to DoS attacks for the very first handshake message

> * The client should actually be able to roam and change IP without having to re-negotiate a secure connection with the server, as long as it uses the same 64-bit connectionID.

Yes, I think only QUIC has such a mechanism. It was probably introduced exactly to handle roaming stations, and avoid having to require another check before the server can send data to the new IP. If no such check is in place, then you could start a file transfer and then spoof the source IP to that of someone you wanted to DoS.


> what would you want for an unreliable stream

Video?


That's application specific, not protocol specific. Or maybe I didn't understand your comment? Could you be more specific about what DTLS doesn't provide for streaming that QUIC could have provided?


As far as I know, neither DTLS nor QUIC provides datagram transport (as in: managing the beginning/end of user messages, regardless of size). I am not sure if DTLS does unordered delivery (probably not, if it simulates a stream), and my favourite is that neither can handle transmission of data in a "reliable multicast" fashion.

By reliable-multicast I mean something like having a main multicast stream plus a unicast that delivers only the lost data.

DTLS multicast also kind of sucks because all clients share the same keys for the MAC, meaning that the clients could spoof the server data towards other clients.

Also, I am afraid the forward error correction of QUIC is a bit too basic to be really useful, which is why I developed a second library with more flexible FEC than just implementing RAID-4 over the network.

I am designing my protocol with that transport in mind, because I did not want to limit the user in what he could do.

You can do all of this in the application, and you can even use multiple stacks, but the upper layers are usually less efficient and more error-prone, and we cause a global "reinvent the wheel" movement without even realizing it.


> I am not sure if DTLS does unordered delivery

That's an interesting point, indeed the RFC[1] seems to support your claim:

> DTLS implementations maintain (at least notionally) a next_receive_seq counter. This counter is initially set to zero. When a message is received, if its sequence number matches next_receive_seq, next_receive_seq is incremented and the message is processed. If the sequence number is less than next_receive_seq, the message MUST be discarded. If the sequence number is greater than next_receive_seq, the implementation SHOULD queue the message but MAY discard it. (This is a simple space/bandwidth tradeoff.)

[1]: https://tools.ietf.org/html/rfc6347
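The quoted rule is easy to model; a sketch of the reassembly logic (my own toy model, not the RFC's wire format):

```python
class HandshakeReassembler:
    """Models the next_receive_seq rule quoted from RFC 6347."""

    def __init__(self):
        self.next_receive_seq = 0
        self.queued = {}  # out-of-order messages we chose to keep

    def receive(self, seq: int, message: bytes) -> list:
        """Return messages now deliverable in order; silently drop old duplicates."""
        delivered = []
        if seq < self.next_receive_seq:
            return delivered  # "MUST be discarded"
        if seq > self.next_receive_seq:
            self.queued[seq] = message  # "SHOULD queue ... but MAY discard"
            return delivered
        delivered.append(message)
        self.next_receive_seq += 1
        while self.next_receive_seq in self.queued:
            delivered.append(self.queued.pop(self.next_receive_seq))
            self.next_receive_seq += 1
        return delivered
```

Note this gives in-order handshake processing, which supports the claim above: the record layer itself still tolerates loss, but handshake messages are effectively ordered.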

> DTLS multicast

I'm not seeing mentions of that in the RFC

> Forward Error correction of QUIC

I'm looking at the QUIC spec and I don't see anything about that, there doesn't seem to be any error correction done in QUIC (perhaps at the UDP level with the checksum you mean?)


>> DTLS multicast

> I'm not seeing mentions of that in the RFC

My bad, still a draft: https://datatracker.ietf.org/doc/draft-lucas-dtls-multicast/

> I'm looking at the QUIC spec and I don't see anything about that, there doesn't seem to be any error correction done in QUIC (perhaps at the UDP level with the checksum you mean?)

Nope. Basically it should be: send X packets (X >= 2), then the XOR of those packets. Lose one, and everything can be recovered.
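That scheme is essentially RAID-4 applied to packets; a minimal sketch, assuming equal-length packets:

```python
from functools import reduce

def xor_parity(packets):
    """XOR a group of equal-length packets together; the result is the parity (FEC) packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_missing(survivors, parity):
    """If exactly one packet of the group was lost, XOR-ing the survivors
    with the parity packet reconstructs the missing one."""
    return xor_parity(list(survivors) + [parity])
```

The limitation discussed below follows directly: with a single parity packet per group, losing two packets of the same group is unrecoverable.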

Googling again, I see that they later disabled it because it did not give the expected results on YouTube:

https://docs.google.com/document/d/1Hg1SaLEl6T4rEU9j-isovCo8...

Which is not that surprising, considering that network errors more often come in bursts, and the scheme could recover only one error per group.

I still think proper FEC can be useful, especially in networks with high packet loss (at 5% and up, my experiments gave me a huge advantage over any congestion control algorithm of TCP, even more so on high-latency networks)


Have you tried engaging with IETF-QUIC?


I'm a nobody, the protocol is already well defined and what I need would call for something close to a rewrite, which is why I am actually doing my own.

Hopefully in one or two months I will have a stable enough implementation that people might be interested in contributing; then I can think about writing a proper RFC.


If you can nonetheless voice a design issue, consider putting it on the record in their own forum [1], even if your concerns won't result in substantive changes in the end.

[1] https://github.com/quicwg/base-drafts/blob/master/CONTRIBUTI...


From your website it sounds like your project is much bigger than just improving QUIC. Wouldn’t it make sense to bring your expertise to improve QUIC in the short term? It seems to be relatively common to hear people talk about how nobody will listen to them because of cronyism or something, but then the thing that they are proposing is actually a one-man rewrite of everything. Maybe nobody has the time to dig into your massive, non-working project. Also, it’s kind of an implicit statement that you are not interested in working with anyone else. Maybe if you approached things in a more collaborative way, your ideas would get some traction. I’m reminded of a relevant XKCD about standards here.


QUIC has already enough intelligent people behind it, I'll look at the github page to see if I can say something, but the main difference is probably the scope.

The scope of QUIC is quite clear: it aims at putting together TCP and TLS. It already does that well enough, it won't do more. I found that limiting, the only option I have is to do something myself.

I tried talking to people around (at conferences and privately), but what I got from it is basically: first show that it can work, then we can talk. I tried to talk about the theory and objectives behind it before the code, but without an implementation it does not seem to be worth much.

Google is a big company that can put people to work on something and that gets things done, whether it is a good idea or not. I can't pay anyone, and I have not found people interested in even just the theory, so I'll go on, and maybe something will change when things start to work.


I encourage you to just enter the working group and engage them via whatever medium they're using. There's nothing to lose, really, and much potential win for both them and you.


One of the things that I don't like about TCP is that it really goes out of its way to emulate a stream of data over a wire. This then leads to things like Nagle's algorithm that deliberately slow it down.

This is problematic when doing things like HTTP, or any of the infinite RPC schemes that come into popularity every few years (REST, Thrift, Protocol Buffers, etc.). As programmers, we're trying to send a request of known length and get a response whose length the server knows. Because TCP makes this look like a wire stream, the protocol implementer needs to disable Nagle and implement logic that figures out where requests begin and end, even though these boundaries most likely fall on packet boundaries.

Think of it this way: An HTTP GET request may fit into one or two IP packets; but the nature of TCP streams means that the server's HTTP API needs to find the \n\n to know when the request headers end. Instead, the stream API itself should just be able to say, "here's a packet of 300 bytes!" Furthermore, if the client didn't disable Nagle, its TCP library will deliberately delay sending the header for 200 ms.
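A sketch of that boundary-hunting against a blocking socket (a hypothetical helper of my own; note that real HTTP uses \r\n\r\n rather than bare \n\n):

```python
def read_http_headers(sock):
    """Accumulate bytes from a stream socket until the blank line that ends the headers."""
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed before end of headers")
        buf += chunk
    head, _, rest = buf.partition(b"\r\n\r\n")
    return head, rest  # rest may already contain the start of the body
```

With a message-boundary-preserving transport, neither the scanning loop nor the leftover `rest` bytes would be the application's problem.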

The reality of the TCP API is that it's great for long streams of data. It just isn't optimized for half of its use case.


It's a common misconception that Nagle's causes the delay problem. The problem is tcp delayed acks. Nagle's can improve network efficiency (and latency as a result) on slow links. On fast links it shouldn't have much impact at all.
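For what it's worth, toggling Nagle is a one-line socket option; a minimal sketch:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes are sent immediately instead of
# being coalesced while waiting for outstanding ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
```

The pathological latency usually comes from the interaction between Nagle on the sender and delayed ACKs on the receiver, which is why disabling either side can help.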

The rest of what you said doesn't make much sense to me. Searching for \n\n is a property of HTTP; it has nothing to do with TCP. TCP is meant as a generic connection-oriented protocol that uses packet switching, and it's largely optimized for that.

The main benefit of QUIC it seems is that it hides itself in UDP so middleboxes don't tamper with it. It isn't really a fault of TCP itself that middleboxes mess it up so much.


The Nagle toggle and relegating record boundaries to the application level sound like quite small potatoes (and have significant upsides too). Those probably wouldn't be the main things to optimize if protocol designers could start from scratch with perfect hindsight.


That doesn't make much sense. HTTP being that way was a deliberate decision by its creators. They could just as well have designed it so that the request header is prefixed by its length as a 4-byte value, for example. You definitely still want the stream representation for that, rather than juggling packet fragmentation yourself. You can simply disable Nagle for your own protocol if it speeds things up for you. The only difference in your suggestion is that the length-prefix handling would be done by the network API instead of your app, which I don't think is much of a hassle anyway.


John Nagle comments on HN frequently. Wonder if he'll see this...

https://news.ycombinator.com/user?id=Animats


if you care about these things, I imagine that using the PUSH flag of TCP gets rid of Nagle's?


In RHEL 7, using TCP_NODELAY and FAST_ACK still results in IP-level conflation, even when you turn off hardware coalescing. The only way I've been able to get one push = one TCP packet is via Solarflare, and even then you have to disable an SF-specific batching amount that still kicks in when Nagle is disabled.


I thought SSH managed to do one packet per key pressed via the push option :o


I think we definitely need more standalone libraries for both servers and clients to really see QUIC's adoption flying. Nowadays it's either a Go library some people have written or scavenging Chromium sources for libquic - which is way less than ideal.


Honestly, I'd rather they keep it limited to Chrome and Google while they iterate and develop the protocol. That way they can iterate much faster and without backward compatibility concerns.

In a way, that's another example of ossification, this time without middle boxes.

Once the protocol has settled and development slowed down, they can then build libraries for all the other languages and servers.


The protocol is quite complex, unfortunately, and also in flux, which deters a lot of companies from adopting it in their supported solutions.

There is also the issue of HTTP/2 ... just because the major open source servers have implementations for it, it doesn't mean that it has been assimilated to its full capabilities in hundreds of other stacks. For many organizations, it's hard to sell internally the adoption of yet another protocol in such a short timeframe.


You're also using a protocol that changes at every version of Chrome, making you only compatible with the last version of Chrome if you're keeping up to date, or with multiple versions if you're willing to do that to your code.

Chrome is also only talking QUIC to Google websites, not sure how you would go about making it work with other websites (ask the G maybe?)


Wait until the IETF-version is finished and build on that? (it's a bit confusing it is named the same, since it is not just a version of Google's)


It'll either NOT happen, or if it does, it will probably take ages.


I don't think that's true. There's a lot of momentum in the space, and several large vendors have IETF-quic implementations that are pretty interoperable already.


That's too bad. The last I heard of it (which admittedly has been a while), it seemed like they were making useful progress.


Chrome will talk QUIC to anyone else who talks QUIC


How does Chrome determine that a service supports QUIC? Does QUIC use a specific port and Chrome always pings it?


Emit an HTTP header and Chrome will attempt a QUIC connection.

Alternate-Protocol: quic:<QUIC server port>

https://www.chromium.org/quic/quic-faq


Do you need to serve that over HTTPS?

EDIT: Alright, I figured it out, this page is out of date. Google services reply with this header which works:

alt-svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000; v="41,39,38,37,35"

No idea what these quic values are :) but here it is.


Those are probably supported version numbers.


yup, 4-octet version numbers for IETF QUIC (according to the draft)
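Those hex strings are just the four ASCII octets of the Google QUIC version tag; a quick check:

```python
def decode_quic_version(hex_tag: str) -> str:
    """Google QUIC version values are four ASCII bytes, e.g. 51303431 -> 'Q041'."""
    return bytes.fromhex(hex_tag).decode("ascii")
```

So the alt-svc header above advertises Q041, Q039, Q038, Q037 and Q035, matching the v="41,39,38,37,35" list at the end.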


If nginx had QUIC support, I'd use it. I used nginx's spdy back before nginx got http2 support.


David Benjamin has proposed GREASE for TLS. Basically, random values of future flags are thrown around so that servers (not middleboxes) have no choice but to process them. I think it is a good idea, but there are still occasional niggles, such as certain machines assigning different meanings to specific flags, leaving them unable to speak TLS 1.3.


Cloudflare had an interesting blog post[1] a month ago about TLS 1.3. It talks extensively about ossification and the GREASE proposal.

[1] https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet...

Previous HN discussion of the post: https://news.ycombinator.com/item?id=16010930


CloudFlare's Filippo Valsorda also dives into the nitty gritty of their TLS 1.3 implementation here:

TLS 1.3 - CloudFlare London Tech Talk

https://vimeo.com/177333631

And his experimental golang crypto/tls pkg:

https://github.com/cloudflare/tls-tris


GREASE works for string-valued flags. It's a bit more difficult to grease bitfields.


> The first byte of the clear data was a flags field which, he said, was promptly ossified by a middlebox vendor, leading to packets being dropped when a new flag was set.

That was a classic example of why changing network protocols is hard and what needs to be done to improve the situation. Middleboxes are the control points for the Internet as we know it now. The only defense against ossification of network protocols by middleboxes, he said at the conclusion of the talk, is encryption.

So apparently we're now at a point where any middlebox trying to make any use of a protocol higher than IP counts as "ossification"? No matter if the middlebox is well-behaved or not.

And the only solution, naturally, is to make any stack parts higher than IP completely unusable by middleboxes by encrypting them. I guess this makes sense if you already fully control both clients and servers and get annoyed by middleboxes restricting your freedom to change the protocol on a whim.

However, this seems like a major change in the vision of the internet to me. So has the internet community at large now agreed that we want to get rid of middleboxes completely? What about middleboxes I want to employ myself, e.g. to watch my own traffic?


> However, this seems like a major change in the vision of the internet to me.

The vision of the Internet is that the network does little more than moving packets from point A to point B, with all the intelligence being in the endpoints. That way, the core network doesn't have to be upgraded for every little change in the protocols; protocol evolution happens on the edges of the network. Please read the classic "World of Ends" (http://www.worldofends.com/), and the technical article it links to, "End-to-end arguments in system design" (http://www.reed.com/dpr/locus/Papers/EndtoEnd.html).


Is it middle boxes that do TTL-- as in ping?


The problem here isn’t really middleboxes - it’s that people who build middleboxes tend to break protocols. The flags field is specifically designed so that you can add stuff to it - if the middleboxes were to be built properly, they would fall back to “don’t touch anything” rather than “break the connection” when they don’t understand what’s going on.

Middlebox vendors brought this upon themselves.


> Middlebox vendors brought this upon themselves.

Not really, I'm sure they don't care much; most internet users will blame their browsers instead.


> However, this seems like a major change in the vision of the internet to me. So has the internet community at large now agreed that we want to get rid of middleboxes completely? What about middleboxes I want to employ myself, e.g. to watch my own traffic?

The internet was built on the idea of having a dumb network with smart endpoints. Middleboxes are a good idea that just fundamentally doesn't work. Figure out how to do what you want on your edge nodes.


Except I don't have control about the edge nodes either, because the one edge is a remote server while the other edge is a locked-down smartphone or IoT device.


> What about middleboxes I want to employ myself, e.g. to watch my own traffic?

> Except I don't have control about the edge nodes either, because the one edge is a remote server while the other edge is a locked-down smartphone or IoT device.

If you don't have control over either of the endpoints, is it really your own traffic? Going a bit deeper: if you don't have control over your smartphone or IoT device, is it really your smartphone or IoT device?


Interestingly, note that TLS 1.3 is not going to make things better, as it is jumping through hoop after hoop just trying to conform to every little quirk middleboxes can have.

I'm wondering if the IETF should have taken another route to force middleboxes to be more flexible in the future?


Indeed. It seems header encryption would be a good start. David Benjamin has proposed the GREASE protocol, but I think that is only for servers, not middleboxes.


GREASE only fuzzes some of the fields in TLS 1.3 handshake messages, to prevent middleboxes and TLS servers from behaving incorrectly towards unknown values for extensions and other various fields. That doesn't help outside of TLS 1.3.


Indeed: it would be nice to see these techniques applied in more contexts.


> However, this seems like a major change in the vision of the internet to me. So has the internet community at large now agreed that we want to get rid of middleboxes completely?

Quite the opposite: middleboxes are traditionally considered rule-breaking kludges; they were never part of the vision of the internet, or even allowed by the standards. The internet vision was an end-to-end architecture, global addressability, and a dumb network that does routing in a best-effort fashion in order to enable host-to-host communication. IPv6 is an attempt to keep this all working.

(And no, inspecting your own personal traffic on your own private network does not traditionally count as middleboxing)


> What about middleboxes I want to employ myself, e.g. to watch my own traffic?

You'll need to do encryption termination. But then, you're subject to the same problem. If you reject packets you don't understand, or if you badly parse stuff, you'll compromise your traffic.


It's not a problem if you can keep them updated, but that's not usually what happens.

Network equipment often keeps running in the closet without updates for 10+ years. That's the part that isn't working well.


Coincidentally came across this while looking up a board game: "The hidden costs of QUIC and TOU" https://www.snellman.net/blog/archive/2016-12-01-quic-tou/


While I still stand behind what was written in that post re: the non-endpoints needing the ability to troubleshoot networks, it might not be totally up to date.

A caricature of the stakeholders in QUIC standardization would split them in three groups:

- The privacy cabal thinks that every single bit of unencrypted data is unacceptable, since with enough statistical evidence over enough data paths it could be used to deduce some bit of totally irrelevant information about a user.

- The operator cabal believes that any bit of encrypted metadata (but not payload data) is unacceptable, since there could be some valid operational reason to access it.

- The ossification cabal believes that any bit of unencrypted data is unacceptable, since somebody in the operator cabal would end up misusing it, and accidentally fix the protocol in place.

Now, obviously actual humans are more nuanced than that. Still, at the time that was written, it appeared that there was absolutely no chance of getting any manageability metadata at all. These are not positions conducive to compromise. (And the lack of that kind of data leads to horrible things like using the TCP traffic going to client X to infer measurements on the QUIC traffic going to the same client).

But at the moment it feels like it might be possible (but nowhere near guaranteed) for the standardization to export some minimal amount of transport layer data. See for example the spin-bit proposal [0], where just 1-2 bits of incredibly non-sensitive data would already go a long way. Give me a single spin-bit and say 3 low-order bits of the packet number, and I'd be happy.

[0] https://tools.ietf.org/html/draft-trammell-quic-spin-01


I have a question about an older UDP-based protocol. I was wondering for a long time whether UDT is worth it and in which scenarios. Has somebody tried it or is it kind of outdated?


UDT solves problems TCP has with congestion, especially at 10Gbit+, which gives it much higher throughput than TCP and pretty much anything else at those speeds. I don't think it did much to fix latency issues, which I think QUIC is more focused on.

If you are interested in low-latency protocols you should check out minimaLT[1]: better performance than QUIC, but the connections are portable across IPs, so it's better for mobile, and it can tunnel multiple connections together. DJB helped out with the design, so you know it's super secure.

[1] https://cr.yp.to/tcpip/minimalt-20130522.pdf


We did some experiments a few years ago when we were trying to quickly move several large-ish (30GB I think?) climate and weather data files around on Internet2 and we found that UDT solved our problems quite nicely.


For those of you using Go, it's been an absolute pleasure to use https://github.com/lucas-clemente/quic-go.

FWIW.


I failed to find anything a few times in the last years, so maybe it does not yet exist, but maybe someone here has any pointers.

How can Linux's iptables/netfilter match QUIC packets? Because QUIC is not an IP protocol (it sits on top of UDP), `-p quic` will not be implemented. Will one have to write a QUIC helper, so that one could match using `-m helper --helper quic`? I tend to think so, but I failed to find such a helper.

Is there some difficulty I fail to see? Or is nobody interested (because nobody uses linux as a router)?


It should in principle be possible to classify brand-new Quic connections, since a tiny handful of its fields are cleartext during the initial handshake (flags and version, in both client/server directions).

As for classifying existing Quic connections, it might get a little messier. There are still some cleartext fields but it might require observing multiple packets in the flow before making a guess the traffic is likely to be Quic.

Meanwhile an explicit design goal of Quic is to avoid interference from middleboxes, so it would be no surprise if a perfect iptables Quic match is or remains impossible.
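As a sketch of the first idea (assuming the draft-09 long-header layout quoted elsewhere in this thread: form bit, 8-byte connection ID, 4-byte version, 4-byte packet number), a stateless classifier for the handshake packets might look like:

```python
def looks_like_quic_long_header(datagram: bytes) -> bool:
    # draft-09 long header: form bit set, then an 8-byte connection ID,
    # a 4-byte version, and a 4-byte packet number -> 17-byte minimum.
    if len(datagram) < 17 or not (datagram[0] & 0x80):
        return False  # short-header packets can't be classified statelessly
    version = datagram[9:13]
    return (
        version == b"\x00\x00\x00\x00"      # version negotiation
        or version[:1] == b"Q"              # Google QUIC ("Q043", ...)
        or version[:3] == b"\xff\x00\x00"   # IETF drafts (0xff0000NN)
    )
```

Note this is exactly the kind of heuristic that the encrypted-transport design is meant to keep from being load-bearing: later packets in a flow give you nothing to match on.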


I don't want to mess with the content in any way, I do want to send connections over more than one WAN, depending on utilization. That works great with https?-connections on tcp. It would be great to be able to do the same with quic connections.


> I do want to send connections over more than one WAN

QUIC has something which can help with that (from https://tools.ietf.org/html/draft-ietf-quic-transport-09):

"QUIC connections are identified by a Connection ID, a 64-bit unsigned number randomly generated by the server. QUIC's consistent connection ID allows connections to survive changes to the client's IP and port, such as those caused by NAT rebindings or by the client changing network connectivity to a new address."

If that's not enough for your use cases, now is a good opportunity to mention them and suggest enhancements, since the IETF standard is still under development.


I don't think this will work for randomly sending over one or the other connection, but I will just have to test to be sure.


Simply classifying on UDP port 443 would cover most use cases, I'd guess?


Where did you get 443 from? AFAIK even Google aren't using that. I didn't think there was any standardized port number at all


Google uses UDP 443 for all its QUIC services. Chrome will autoconnect to UDP 443 for any HTTPS request in parallel with TCP, and prefer QUIC if it works.

Sure you could use another port, but it looks like UDP 443 is becoming the unofficial standard, even if not IETF endorsed...
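For the multi-WAN steering use case mentioned upthread, a minimal netfilter sketch (untested; it assumes a routing table named "wan2" has already been set up) would be to mark UDP/443 flows and route on the mark:

```shell
# Mark UDP traffic to port 443 (where Chrome currently sends QUIC)
# so a policy-routing rule can send it out a second WAN.
iptables -t mangle -A PREROUTING -p udp --dport 443 \
         -j MARK --set-mark 0x2
ip rule add fwmark 0x2 table wan2
```

Per-port marking can't split individual QUIC connections across links, though; that's where the connection-ID behavior discussed below matters.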


Yoiks, good to know! When scanning the Quic docs, especially around the HTTP header that advertises an alternative protocol, the examples showed some random >1024 high port. I assumed that was part of the middlebox aversion stuff

But it makes perfect sense to have a defined port, so that connection racing can work without first having to receive a HTTP header


> Because quic is no ip protocol (but on top of udp)

Did you mean to write something else?

UDP very much is an IP protocol, just like TCP is an IP protocol.

https://en.wikipedia.org/wiki/Internet_Protocol_Suite


TCP and UDP are "IP protocols" in that their packets are nested directly inside an IP packet and they have associated protocol numbers [1] to distinguish them from other things that might be inside an IP packet. QUIC theoretically deserves to be one of these too but for pragmatic reasons it has to be nested inside UDP. GP is asserting that `-p quic` will not be implemented, because it would be a layering violation.

[1] https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers


Exactly! Thank you for putting in the time to elaborate what I meant.


Ahh, thanks!


I think the GP wanted to know how a firewall can tell quic traffic from other UDP traffic.


can't you do it easily with eBPF?


There might be a way to use eBPF, but it won't be as easy as having a netfilter module. I assert that such a module would be useful.


I'm talking out of my knowledge bucket here, but isn't an eBPF script basically the same as having a netfilter module?


Not entirely, but it's possible to funnel flowing traffic through eBPF. The main difference will be the ease of use. Thank you for the suggestion though.


How do QUIC features and capabilities overlap with those of HTTP/2? For example would using them both defeat the purpose of using one in the area of encryption or transfer speed?


You SHOULD use both of them at the same time. HTTP/2 interleaves your data using different streams, but all of that runs on top of TCP, so one stream blocking (because a packet was dropped, for example) will block the other streams. This is called head-of-line blocking, and you basically can't fix it in TCP unless you create multiple TCP connections for your different resources. But that has a bunch of other issues: you will need to do congestion control for each separate TCP connection even if they're connecting to the same service :/

QUIC also has multiple streams, but it doesn't have this head-of-line problem: other streams continue to work fine if one stream blocks. So QUIC was made for HTTP/2, where your HTTP/2 streams are basically replaced by QUIC streams, which are not affected by packet loss on other streams.

If you want to see how packet loss affects an HTTP/2 connection over TCP, it's pretty bad. In general, if you're on a lossy connection you should just use HTTP/1. Check out this awesome talk: https://www.youtube.com/watch?v=0yzJAKknE_k
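A toy model of the difference (my own sketch, not from either protocol's spec): when one packet is lost, a single ordered TCP byte stream can deliver nothing past the hole, while per-stream ordering only stalls the stream that lost the packet.

```python
# Packets for three multiplexed streams, tagged (stream, global seq).
# Packet 2 (stream B) is lost and will arrive last via retransmission.
packets = [("A", 1), ("B", 2), ("C", 3), ("A", 4), ("C", 5)]
lost = {2}

# TCP-style: one global sequence; deliver only up to the first gap.
delivered_tcp = []
for stream, seq in sorted(packets, key=lambda p: p[1]):
    if seq in lost:
        break  # head-of-line blocking: everything after the hole waits
    delivered_tcp.append((stream, seq))

# QUIC-style: the loss on stream B doesn't block streams A and C.
delivered_quic = [(s, n) for s, n in packets if n not in lost]

print(delivered_tcp)   # [('A', 1)]
print(delivered_quic)  # [('A', 1), ('C', 3), ('A', 4), ('C', 5)]
```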


I looked for this some time ago, but IIRC there's no third party implementation that does that. e.g integrating quic-go and grpc-go


They overlap significantly, and some of it is by accident and some is by design. Most of this resulted from convergent evolution by some of the same authors; HTTP/2's transport innovations over HTTP/1.1 were largely lifted from SPDY by Google.

QUIC is a more holistic view of the transport techniques to achieve some of the same design goals as SPDY, so the ultimate vision is HTTP/2-over-QUIC, and this is being reconciled [1]. Earlier versions of QUIC even included a homebrew transport encryption protocol [2], but this has since been dropped in favor of TLS 1.3.

[1] https://datatracker.ietf.org/doc/draft-ietf-quic-http/ [2] https://www.ietf.org/proceedings/92/slides/slides-92-saag-5....


> HTTP/2 was designed to address this problem using multiple "streams" built into a single connection. Multiple objects can be sent over a stream in parallel by multiplexing them into the connection. That helps, but it creates a new problem: the loss of a single packet will stall transmission of all of the streams at once, creating new latency issues. This variant on the head-of-line-blocking problem is built into TCP itself and cannot be fixed with more tweaks at the HTTP level.

[0] https://lwn.net/Articles/745590/



”One key feature of QUIC is that the transport headers — buried inside the UDP packets — are encrypted”

I’m not sure I completely understand this, but does that mean you get net neutrality for free if you use this?


Not for any useful definition of net neutrality; IP addresses are still visible for ISPs to discriminate on.


Makes me wonder whatever happened to the PGM Reliable Transport Protocol specification

- https://tools.ietf.org/html/rfc3208

Not an entirely dissimilar concept. LBM/29West being its successor.


Instead of making order guaranteed on UDP, allow removal (delay) of guarantee from TCP:

https://www.ietf.org/id/draft-add-ackfreq-to-tcp-00.txt


For cases where the payload is small (<100kb), Caffeine (http://www.caffei.net/) is a good solution (TCP based)


I wonder what is the real difference between QUIC vs SCTP-over-UDP?


Lower connection-establishment latency: SCTP over UDP using DTLS requires 4 round trips (that's worse than standard TCP), while QUIC (which includes DTLS-like encryption) only requires 1 round trip for the first connection and 0 RTT for subsequent connections, once it has cached some information.


Thanks, I found this: https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqs...

"One of the major and believably achievable goals of QUIC, is to predominantly have zero RTT connection (re)establishment, as was mentioned in goal 3a above. It is highly doubtful that SCTP over DTLS can come close to achieving that goal."

----

I actually think the "zero round-trip-time" claim is a bit too strong. It's only 0-RTT if you've communicated before, so in a way it's not a new connection. There's a privacy implication here -- you should wipe that state between communications to be more anonymous -- and you also only get "predominantly 0-RTT" with an increasingly centralized web.


I'm guessing QUIC integrates better with HTTP/2?


Or enet (ignoring encryption).


The title really should be "UDP as a solution to protocol ossification". QUIC makes the point that you can develop a new transport-ish layer over UDP. Developing a new protocol on raw IP packets apparently gets blocked by firewall boxes around the world.


My take-away from the article is actually "encryption as a solution to protocol ossification". If some data (headers, etc.) is available as clear-text, there will be routers that act on it and possibly misbehave or block traffic when they encounter new patterns.

So the article suggests that making it impossible for routers to inspect any protocol-specific information (via encryption) helps with this problem. Routers can't act on fields they can't read.


But then you have the whole industry of "security appliances" against you, which do want to have a look, and which do want to MITM everything - ironically arguing that this makes everything safer, despite becoming a huge attack surface themselves.


Ya but they're wrong.


They're interested in business security at least, security of their own business I mean.


Ya I agree with the encryption statement for sure. My only complaint with the title is that UDP was used as the solution to not being able to make QUIC on top of IP directly (which is madness really, but also not the QUIC developers problem).


It's not as though udp has much overhead; it's pretty damn close to straight IP


The beauty of it is that UDP is not even really a protocol, it's just a few additions (sport and dport) on top of IP. The RFC is only 3 pages long. So it's not like UDP is going to get in the way of your new protocol if you design on top of it.
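To illustrate just how thin UDP is (a sketch based on RFC 768): the entire header is four 16-bit fields, and one of them (the checksum) is even optional over IPv4.

```python
import struct

# The whole UDP header is 8 bytes: source port, destination port,
# length (header + payload), and checksum. Everything else is left
# to whatever protocol you build on top.
def build_udp_header(sport: int, dport: int, payload_len: int) -> bytes:
    length = 8 + payload_len  # header plus payload, in bytes
    checksum = 0              # 0 means "no checksum" over IPv4
    return struct.pack("!HHHH", sport, dport, length, checksum)

print(len(build_udp_header(12345, 443, 100)))  # 8
```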


While I think you are correct in that protocols directly over IP seem more stuck because of middleboxes (SCTP, for example, often doesn't get through), I think any protocol, whether over UDP or TCP, can be subject to ossification. QUIC seems to have some useful properties to prevent this, such as encrypted headers. I could imagine alternative UDP protocols not doing this and causing more ossification.


Also encryption:

> The first byte of the clear data was a flags field which, he said, was promptly ossified by a middlebox vendor, leading to packets being dropped when a new flag was set.

[...]

> The only defense against ossification of network protocols by middleboxes, he said at the conclusion of the talk, is encryption.


Well, it shows that Google can develop a new transport layer protocol over UDP.


Heads up to the lazy folks: the article is much shorter than the scroll bar suggests.



