It could have been much more...
* there is no support for non-reliable streams
* I am not keen on the variability and non-alignment of the whole header (which can go from 0 to 60+ bytes).
* it points too much towards just reducing the RTT for the initial connection (0RTT is a bad idea, especially if your application now needs to handle the special case of the connection being 0RTT or not).
* There is no support for datagrams, just datastreams.
* Perfect forward secrecy is not guaranteed on the initial handshake (you need to explicitly do a second key exchange for that)
* Forward error correction is actually cool but just using XOR is too limited.
* The proof of ownership of the connection is one incredible hack
And these are just the things I remember from looking at it a couple of years ago. Still, it's a nice improvement over older protocols.
To those who are going to comment something like "Well, then do your own": that's exactly what I'm doing, although it's going slowly as I don't have Google's budget.
Datagrams can be easily and effectively encoded as discrete short-lived streams. If you need streams of datagrams, you can encode your own header to do that. I see no reason why this has to be built into the protocol when the provided primitives are sufficient.
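As a sketch of the "encode your own header" idea: a few bytes of channel id plus a length prefix are enough to multiplex datagrams over a reliable stream. The header layout here is purely illustrative, not from any spec:

```python
import struct

# Illustrative framing: 4-byte channel id + 2-byte payload length, then payload.
HEADER = struct.Struct("!IH")

def pack_datagram(channel: int, payload: bytes) -> bytes:
    """Wrap one datagram so it can travel inside a reliable byte stream."""
    return HEADER.pack(channel, len(payload)) + payload

def unpack_datagrams(buf: bytes):
    """Split a stream buffer back into (channel, payload) datagrams."""
    out = []
    while buf:
        channel, length = HEADER.unpack_from(buf)
        body = buf[HEADER.size:HEADER.size + length]
        out.append((channel, body))
        buf = buf[HEADER.size + length:]
    return out
```

The channel id gives you your "stream of datagrams" back if you want one; receivers that don't care can just ignore it.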
> Forward error correction is actually cool but just using XOR is too limited.
Pretty sure FEC was removed from the IETF draft.
Using discrete short-lived streams could mean a little more trouble understanding which message was in response to which other message. A stream of datagrams is already a logical container. But I guess you could track that by hand, too.
Maybe I just found an easier way to implement all of that in my protocol, so I don't see why QUIC could not. Maybe it's just something that I always end up doing by hand, so I would just be happier if it was provided. I mean, IP/UDP already has all the primitives I need, but it's not like everyone likes to reinvent the wheel.
> > Forward error correction is actually cool but just using XOR is too limited.
> Pretty sure FEC was removed from the IETF draft.
Yes, did not know about that. A pity, my experiments with proper FEC showed wonders after a minimum packet loss rate was surpassed. Even better on high-latencies.
Can you have a stream-based protocol that doesn't try to be reliable? A message-oriented one I understand.
> * There is no support for datagrams, just datastreams.
because, I guess, you can always just use UDP!
You only need the control stream to be reliable+ordered.
A UDP datagram limits you to the size of the packet. The basic difference between a datagram and a datastream is only whether you have a way to identify the start/end of the user messages. Think of TCP, where you don't need to handle the beginning and end of your messages.
I think the point of QUIC is to provide a reliable protocol. I'm not sure if this complaint is relevant.
> I am not keen on the variability and non-alignment of the whole header (which can go from 0 to 60+ bytes).
> it points too much towards just reducing the RTT for the initial connection (0RTT is a bad idea, especially if your application now needs to handle the special case of the connection being 0RTT or not).
It's annoying because you need to try decrypting with two different keys, but it simplifies the handshake for both sides, as the state machine is now super simple.
> Perfect forward secrecy is not granted on the initial handshake (you need to explicitly do a second key exchange for that)
if you're talking about 0-RTT, I think replayability is a much bigger problem than forward secrecy as forward secrecy is bound to the server config rotation, so you just need to rotate frequently.
> The proof of ownership of the connection is one incredible hack
How is it a hack? It allows the server to be stateless and it works quite well. How would you make it better?
I found it easy to include support for unreliable streams in my protocol, I'm just not sure why they did not even try. I heard about a hack-proposal where you could just use a new stream for one packet and then forget about it, but it's not really the same, and I am not sure they did even that.
> It's annoying because you need to try decrypting with two different keys, but it simplifies the handshake for both side as the state machine is now super simple
The issue I have with it is that now it's the application that needs to be sure not to write too much without an additional round-trip, otherwise you could end up with an amplification attack. I remember reading that there was an additional call to be made by the application just for that. But other simplifications in the handshake are actually great, yes.
> if you're talking about 0-RTT, I think replayability is a much bigger problem than forward secrecy as forward secrecy is bound to the server config rotation, so you just need to rotate frequently.
Yes, see above. But if I remember correctly, rotating keys also makes clients unable to use 0-RTT, so you can't do it too often. QUIC has a weaker PFS than TLS. Still secure, just weaker, and basically disabled unless you explicitly call a key rotation. I am just afraid that the implementations and servers out there will not use that.
> How is it a hack? It allows the server to be stateless and it works quite well. How would you make it better?
Are we talking about the same thing? I meant the entropy bit. Really dislike it. (Then again, it works, eh...).
Yep, my bad, I said "ownership of the connection" instead of "ownership of the IP".
The ownership of the connection is derived only from the connection id and crypto keys, that's a nice simplification, yes.
Thinking about it, what would you want for an unreliable stream beside encryption? Isn't DTLS taking care of what you want?
> The issue I have with it is that now it's the application that needs to be sure not to write too much without an additional round-trip, otherwise you could end up with an amplification attack. I remember reading that there was an additional call to be made by the application just for that. But other simplifications in the handshake are actually great, yes.
So, the server is not going to reply unless the client has proven that it owns the source IP address. This should prevent any amplification attack tied to 0-RTT. Or you mean the kind of amplification attack where the client is tricked into talking to the wrong server? This probably could happen in some scenarios, mmm...
I'm thinking that the client wouldn't use 0-RTT without knowing exactly who he's talking to (the cached server configuration is probably tied to the ip:port the client had to connect to the first time around).
> if I remember correctly rotating keys also makes client unable to use the 0-RTT, so you can't do it too often.
I think 0-RTT targets people connecting to the same website multiple times per day. So I can see scenarios where you would rotate it every day? I'd be interested to know how Google configures its own servers.
I just checked, news.google.com has an SSTL of 84:4e:02:00:00:00:00:00 seconds; I think this should be in little endian, so it should be around 2 days. This is how much time is left, so it's not really informative actually. The SCFG (server config) contains an EXPY field with a unix timestamp of 1518350400 (2018/02/11 @ 12:00pm UTC). I realize now that this doesn't tell us much more, since it doesn't tell me when the server config was signed :|
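For what it's worth, decoding that byte string as a little-endian 64-bit integer (assuming that really is the encoding) does come out to just under 2 days:

```python
# Interpret the SSTL field "84:4e:02:00:00:00:00:00" as a little-endian
# 64-bit count of seconds (my assumption about the encoding).
raw = bytes.fromhex("844e020000000000")
seconds = int.from_bytes(raw, "little")
days = seconds / 86400
print(seconds, days)  # 151172 seconds, about 1.75 days
```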
> QUIC has a weaker PFS than TLS
Are you comparing it to TLS 1.3? If you're doing a PSK-based handshake with no key exchange, they should provide the same security properties.
> I meant the entropy bit
Can you tell me what you're talking about? I don't know the spec by heart.
The cookie that the server gives to the client is an encryption of source ip and an expiration date, it just looks opaque to the client but the server can decrypt it and check it.
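A minimal sketch of that stateless pattern. I'm using an HMAC-authenticated token instead of encryption so it stays stdlib-only (the real cookie is encrypted, so the client can't even read the fields; the secret and lifetime here are made up):

```python
import hashlib
import hmac
import struct
import time

# Illustrative server-side secret; a real server would rotate this.
SECRET = b"rotate-me-regularly"

def make_cookie(client_ip: str, lifetime: int = 60) -> bytes:
    """Opaque cookie = expiry timestamp + MAC over (ip, expiry).

    The server keeps no per-client state: everything it needs to
    validate later is inside the cookie itself.
    """
    expiry = struct.pack("!Q", int(time.time()) + lifetime)
    mac = hmac.new(SECRET, client_ip.encode() + expiry, hashlib.sha256).digest()
    return expiry + mac

def check_cookie(client_ip: str, cookie: bytes) -> bool:
    """Recompute the MAC and check the expiry; statelessly validates the IP."""
    expiry, mac = cookie[:8], cookie[8:]
    expected = hmac.new(SECRET, client_ip.encode() + expiry, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected) and \
        struct.unpack("!Q", expiry)[0] >= time.time()
```

A client that echoes the cookie back proves it can receive packets at that source IP, which is the whole point.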
No, the encryption/transport is a nice part, but I'm also working on other stuff like federated authentication.
Besides, why would a developer want to use different stacks when he can use one that includes all the others? I believe protocols should be as general as possible, so I really see no reason not to include this.
> Are you comparing it to TLS 1.3? If you're doing a PSK-based handshake with no key exchange, they should provide the same security properties.
I was comparing it to a standard TLS key exchange with PFS enabled.
> Can you tell me what you're talking about? I don't know the spec by heart.
As I said I remember old information so I have not checked the latest draft, but I am sure that after the connection is set up the server and client continue to flip a single bit somewhere, and hash the result. This hash is sent by the client to the server so that the server can check that the client owns the IP and is actually receiving the data.
Don't know if they still have it, but it was there a couple of years ago.
What is it?
> the server and client continue to flip a single bit somewhere, and hash the result
There is a "message authentication hash" being sent in messages and I can't see anything about that in the spec, so that might be it?
But two things about that:
* I don't think it makes much sense to prove that you own the IP after the handshake has been done. It's mostly a countermeasure to DoS attacks for the very first handshake message
* The client should actually be able to roam and change IP without having to re-negotiate a secure connection with the server, as long as it uses the same 64-bit connectionID.
> What is it?
Think Jabber, AD, kerberos, email... those are federated environments.
Basically you tie a user to a domain, have some way to discover all domains, and design an authentication that can be trustworthy enough for the interaction of multiple domains. The end result is that your username is (more or less) trusted on other domains.
Of course, it does not mean that what I'm building is limited to federation, but it simplifies a lot of things after a while.
> There is a "message authentication hash" being sent in messages and I can't see anything about that in the spec, so that might be it?
By the name, it is either that or you are talking about the MAC of each packet, but it could be that, yes.
> * I don't think it makes much sense to prove that you own the IP after the handshake has been done. It's mostly a countermeasure to DoS attacks for the very first handshake message
> * The client should actually be able to roam and change IP without having to re-negotiate a secure connection with the server, as long as it uses the same 64-bit connectionID.
Yes, I think only QUIC has such a mechanism. It was probably introduced exactly to handle roaming stations, and avoid having to require another check before the server can send data to the new IP. If no such check is in place, then you could start a file transfer and then spoof the source IP to that of someone you wanted to DoS.
By reliable-multicast I mean something like having a main multicast stream plus a unicast that delivers only the lost data.
DTLS multicast also kind of sucks because all clients share the same keys for the MAC, meaning that the clients could spoof the server data towards other clients.
Also, I am afraid the forward error correction of QUIC is a bit too basic to be really useful, which is why I developed a second library with more flexible FEC than just implementing RAID-4 over the network.
I am designing my protocol with that transport in mind, because I did not want to limit the user in what he could do.
You can do all of this in the application, you can even use multiple stacks, but the upper layers are usually less efficient and more error prone, and we cause a global "reinvent the wheel" movement without even realizing it.
That's an interesting point, indeed the RFC seems to support your claim:
> DTLS implementations maintain (at least notionally) a
> next_receive_seq counter. This counter is initially set to zero.
> When a message is received, if its sequence number matches
> next_receive_seq, next_receive_seq is incremented and the message is
> processed. If the sequence number is less than next_receive_seq, the
> message MUST be discarded. If the sequence number is greater than
> next_receive_seq, the implementation SHOULD queue the message but MAY
> discard it. (This is a simple space/bandwidth tradeoff).
> DTLS multicast
I'm not seeing mentions of that in the RFC
> Forward Error correction of QUIC
I'm looking at the QUIC spec and I don't see anything about that, there doesn't seem to be any error correction done in QUIC (perhaps at the UDP level with the checksum you mean?)
> I'm not seeing mentions of that in the RFC
My bad, still a draft: https://datatracker.ietf.org/doc/draft-lucas-dtls-multicast/
> I'm looking at the QUIC spec and I don't see anything about that, there doesn't seem to be any error correction done in QUIC (perhaps at the UDP level with the checksum you mean?)
Nope, basically it should be: send X packets (X >= 2), then the XOR of those packets. Lose one, and everything can be recovered.
Googling again, I see that they later disabled it because it did not give the expected results on YouTube.
Which is not that surprising considering that network errors are more often in bursts, and they could recover only one error.
I still think proper FEC can be useful, especially in networks with high packet loss (at 5% and up my experiments gave me a huge advantage over any control flow algorithm of TCP, even more on high-latency networks)
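The XOR scheme in question is essentially RAID-4 over the network: one parity packet per group lets you rebuild any single lost packet, but a burst that takes out two packets in the same group defeats it. A minimal sketch, assuming all packets in a group have equal length:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def xor_parity(packets):
    """One parity packet: the XOR of every data packet in the group."""
    return reduce(xor_bytes, packets)

def recover(survivors, parity):
    """XOR the parity with all surviving packets to rebuild the one lost packet."""
    return reduce(xor_bytes, survivors, parity)
```

Since a^b^c XORed with a and c leaves b, the receiver can fill in any single gap without a retransmission round-trip.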
Hopefully in one or two months I will have a stable enough implementation that people might be interested enough to contribute; then I can think about writing a proper RFC.
The scope of QUIC is quite clear: it aims at putting together TCP and TLS. It already does that well enough, it won't do more. I found that limiting, the only option I have is to do something myself.
I tried talking to people around (at conferences and privately), but what I got from it is basically: first show that it can work, then we can talk. I tried to talk about the theory and objectives before the code, but without an implementation it does not seem to be worth much.
Google is a big company that can put people to work on something and that gets things done, whether it is a good idea or not. I can't pay anyone, and I have not found people interested in even just the theory, so I'll go on, and maybe something will change when things start to work.
This is problematic when doing things like HTTP, or any of the infinite RPC schemes that come into popularity every few years (REST, Thrift, Protocol Buffers, etc.). As programmers, we're trying to send a request of known length and get a response where the server knows the length. As TCP makes this look like a wire stream, the protocol implementer needs to disable Nagle and implement logic that figures out where the requests begin and end, even though most likely these boundaries are the packet boundaries.
Think of it this way: An HTTP GET request may fit into one or two IP packets; but the nature of TCP streams means that the server's HTTP API needs to find the \n\n to know when the request headers end. Instead, the stream API itself should just be able to say, "here's a packet of 300 bytes!" Furthermore, if the client didn't disable Nagle, its TCP library will deliberately delay sending the header for 200 ms.
The reality of the TCP API is that it's great for long streams of data. It just isn't optimized for half of its use case.
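The boundary-finding logic every protocol implementer ends up writing is usually some variant of length-prefix framing; a minimal sketch (4-byte big-endian length, purely illustrative):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a message with a 4-byte big-endian length."""
    return struct.pack("!I", len(msg)) + msg

def deframe(buf: bytes):
    """Pull complete messages out of a TCP byte stream.

    Returns (messages, leftover): leftover holds a partial message
    that must wait for more bytes to arrive.
    """
    msgs = []
    while len(buf) >= 4:
        (n,) = struct.unpack_from("!I", buf)
        if len(buf) < 4 + n:
            break  # message not fully received yet
        msgs.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return msgs, buf
```

HTTP does the same job with a delimiter (the blank line) instead of a length prefix, which is exactly the scanning work a message-oriented API would make unnecessary.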
The rest of what you said doesn't make much sense to me. Searching for \n\n is a property of HTTP, it's nothing to do with TCP. TCP is meant as a generic connection oriented protocol that uses packet switching, and it's largely optimized for that.
The main benefit of QUIC it seems is that it hides itself in UDP so middleboxes don't tamper with it. It isn't really a fault of TCP itself that middleboxes mess it up so much.
In a way, that's another example of ossification, this time without middle boxes.
Once the protocol has settled and development slowed down, they can then build libraries for all the other languages and servers.
There is also the issue of HTTP/2 ... just because the major open source servers have implementations for it, it doesn't mean that it has been assimilated to its full capabilities in hundreds of other stacks. For many organizations, it's hard to sell internally the adoption of yet another protocol in such a short timeframe.
Chrome is also only talking QUIC to Google websites, not sure how you would go about making it work with other websites (ask the G maybe?)
Alternate-Protocol: quic:<QUIC server port>
EDIT: Alright, I figured it out, this page is out of date. Google services reply with this header which works:
alt-svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000; v="41,39,38,37,35"
No idea what these quic values are :) but here it is.
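They decode as ASCII, and they line up with the v="41,39,38,37,35" list in the same header, so my guess is they are version tags:

```python
# The quic= values from the alt-svc header, read as hex-encoded ASCII.
tags = [bytes.fromhex(v).decode("ascii")
        for v in ("51303431", "51303339", "51303338", "51303337", "51303335")]
print(tags)  # ['Q041', 'Q039', 'Q038', 'Q037', 'Q035']
```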
Previous HN discussion of the post: https://news.ycombinator.com/item?id=16010930
TLS 1.3 - CloudFlare London Tech Talk
And his experimental golang crypto/tls pkg:
That was a classic example of why changing network protocols is hard and what needs to be done to improve the situation. Middleboxes are the control points for the Internet as we know it now. The only defense against ossification of network protocols by middleboxes, he said at the conclusion of the talk, is encryption.
So apparently we're now at a point where any middlebox trying to make any use of a protocol higher than IP counts as "ossification"? No matter if the middlebox is well-behaved or not.
And the only solution naturally is to make any stack parts higher than IP completely unusable by middleboxes by encrypting them. I guess this makes sense if you already fully control both clients and servers and get annoyed by middleboxes restricting your freedom to change the protocol on a whim.
However, this seems like a major change in the vision of the internet to me. So has the internet community at large now agreed that we want to get rid of middleboxes completely? What about middleboxes I want to employ myself, e.g. to watch my own traffic?
The vision of the Internet is that the network does little more than moving packets from point A to point B, with all the intelligence being in the endpoints. That way, the core network doesn't have to be upgraded for every little change in the protocols; protocol evolution happens on the edges of the network. Please read the classic "World of Ends" (http://www.worldofends.com/), and the technical article it links to, "End-to-end arguments in system design" (http://www.reed.com/dpr/locus/Papers/EndtoEnd.html).
Middlebox vendors brought this upon themselves.
Not really, I'm sure they don't care much; most internet users will blame their browsers instead.
The internet was built on the idea of having a dumb network with smart endpoints. Middleboxes are a good idea that just fundamentally doesn't work. Figure out how to do what you want on your edge nodes.
> Except I don't have control about the edge nodes either, because the one edge is a remote server while the other edge is a locked-down smartphone or IoT device.
If you don't have control over either of the endpoints, is it really your own traffic? Going a bit deeper: if you don't have control over your smartphone or IoT device, is it really your smartphone or IoT device?
I'm wondering if the IETF should have taken another route to force middleboxes to be more flexible in the future?
Quite the opposite: middleboxes are traditionally considered rule-breaking kludges; they were never part of the vision of the internet, or even allowed by the standards. The internet vision was an end-to-end architecture, global addressability, and a dumb network that does routing in a best-effort fashion in order to enable host-to-host communication. IPv6 is an attempt to keep this all working.
(And no, inspecting your own personal traffic on your own private network does not traditionally count as middleboxing)
You'll need to do encryption termination. But then, you're subject to the same problem. If you reject packets you don't understand, or if you badly parse stuff, you'll compromise your traffic.
Network equipment often keeps running in the closet without updates for 10+ years. That's the part that isn't working well.
A caricature of the stakeholders in QUIC standardization would split them in three groups:
- The privacy cabal thinks that every single bit of unencrypted data is unacceptable, since with enough statistical evidence over enough data paths it could be used to deduce some bit of totally irrelevant information about a user.
- The operator cabal believes that any bit of encrypted metadata (but not payload data) is unacceptable, since there could be some valid operational reason to access it.
- The ossification cabal believes that any bit of unencrypted data is unacceptable, since somebody in the operator cabal would end up misusing it, and accidentally fix the protocol in place.
Now, obviously actual humans are more nuanced than that. Still, at the time that was written, it appeared that there was absolutely no chance of getting any manageability metadata at all. These are not positions conducive to compromise. (And the lack of that kind of data leads to horrible things like using the TCP traffic going to client X to infer measurements on the QUIC traffic going to the same client).
But at the moment it feels like it might be possible (but nowhere near guaranteed) for the standardization to export some minimal amount of transport layer data. See for example the spin-bit proposal, where just 1-2 bits of incredibly non-sensitive data would already go a long way. Give me a single spin bit and say 3 low-order bits of the packet number, and I'd be happy.
If you are interested in low-latency protocols you should check out MinimaLT: better performance than QUIC, plus the connections are portable across IPs (so it's better for mobiles), and it can tunnel multiple connections together. DJB helped out with the design, so you know it's super secure.
How can Linux's iptables/netfilter match QUIC packets? Because QUIC is not an IP-level protocol (it runs on top of UDP), `-p quic` will not be implemented. Will one have to write a QUIC helper, so that one could match using `-m helper --helper quic`? I tend to think so, but I failed to find such a helper.
Is there some difficulty I fail to see? Or is nobody interested (because nobody uses Linux as a router)?
As for classifying existing QUIC connections, it might get a little messier. There are still some cleartext fields, but it might require observing multiple packets in the flow before making a guess that the traffic is likely to be QUIC.
Meanwhile, an explicit design goal of QUIC is to avoid interference from middleboxes, so it would be no surprise if a perfect iptables QUIC match is or remains impossible.
QUIC has something which can help with that (from https://tools.ietf.org/html/draft-ietf-quic-transport-09):
"QUIC connections are identified by a Connection ID, a 64-bit unsigned number randomly generated by the server. QUIC's consistent connection ID allows connections to survive changes to the client's IP and port, such as those caused by NAT rebindings or by the client changing network connectivity to a new address."
If that's not enough for your use cases, now is a good opportunity to mention them and suggest enhancements, since the IETF standard is still under development.
Sure you could use another port, but it looks like UDP 443 is becoming the unofficial standard, even if not IETF endorsed...
But it makes perfect sense to have a defined port, so that connection racing can work without first having to receive a HTTP header
Did you mean to write something else?
UDP very much is an IP protocol, just like TCP is an IP protocol.
QUIC also has multiple streams, but it doesn't have this head-of-line problem, and other streams will continue to work fine if one stream blocks. So QUIC was made for HTTP/2, where your HTTP/2 streams are basically replaced by QUIC streams, which are not affected by packet loss on other streams at all.
If you want to see how loss of packets affect an HTTP/2 connection over TCP, it's pretty bad. In general if you're on a lossy connection you should just use HTTP/1. Check this awesome talk: https://www.youtube.com/watch?v=0yzJAKknE_k
QUIC is a more holistic view of the transport techniques to achieve some of the same design goals as SPDY, so the ultimate vision is HTTP/2-over-QUIC, and this is being reconciled. In earlier versions of QUIC, it even included a homebrew transport encryption protocol, but this has since been downplayed in favor of TLS 1.3.
 https://datatracker.ietf.org/doc/draft-ietf-quic-http/  https://www.ietf.org/proceedings/92/slides/slides-92-saag-5....
I’m not sure I completely understand this, but does that mean you get net neutrality for free if you use this?
Not an entirely dissimilar concept. LBM/29West being its successor.
"One of the major and believably achievable goals of QUIC, is to predominantly have zero RTT connection (re)establishment, as was mentioned in goal 3a above. It is highly doubtful that SCTP over DTLS can come close to achieving that goal."
I actually think the "zero round-trip-time" claim is a bit too strong. It's only 0RTT if you've communicated before, so in a way it's not a new connection. There's a privacy implication here -- you should wipe that state between communications to be more anonymous -- and also you only get "predominantly 0RTT" with an increasingly centralized web.
So the article suggests that making it impossible for routers to inspect any protocol-specific information (via encryption) helps with this problem. Routers can't act on fields they can't read.
> The first byte of the clear data was a flags field which, he said, was promptly ossified by a middlebox vendor, leading to packets being dropped when a new flag was set.
> The only defense against ossification of network protocols by middleboxes, he said at the conclusion of the talk, is encryption.