The IETF is not a conventional SDO, nor indeed a conventional organisation of any sort: it has no members, and it operates on "rough consensus" rather than some specific formal process that would invariably (see Microsoft's interactions with ECMA and ISO) be gamed by scumbags.
But nevertheless, those are de jure standards that come out the far end, the result of "getting all major stakeholders in a room", albeit that room is most often a mailing list, since only a hardcore few can attend every IETF physical meeting. The IETF even explicitly marks standards-track RFCs distinctly from non-standards-track ones. If you contribute documentation for a complete working system, rather than (as Google did with QUIC) a proposal based on such a system that needs further refinement, it'll just get published as an Informational RFC. Such RFCs are how a bunch of Microsoft network protocols are documented, by Microsoft. Whereas months of arguing and back-and-forth technical discussion have shaped the IETF's QUIC and will continue to do so, the documentation for MSCHAPv2 (commonly used in corporate WiFi deployments) is an Informational RFC: a Microsoft employee just dumped it as written, with no chance for anyone to say "Er, this protocol is stupid; change it so it doesn't shove zero bytes into this key labelled C, or else anybody can crack user passwords after spoofing the AP". So they didn't, and you can.
[Edited: wording tweaks near start, sorry]
Which is ironic, considering “RFC” stands for “Request for Comments”.
For an RFC to be published by a WG, it must first be "adopted" by the group: a first draft is submitted by the author and then debated (sometimes lightly) until the group agrees that it fits the topic and is suitable for adoption. Similarly, once the draft is adopted, it goes through a series of calls by the WG chair where people have opportunities to comment, until it is finally published. Informational RFCs have lighter requirements than standards-track ones, so they are easier to get published, but they still get some amount of review and comment before publication.
It took 14 months and 4 drafts for MSCHAPv2 to get published: https://datatracker.ietf.org/doc/rfc2759/
In fact, even "independent submissions" with "experimental" status (which do not go through a WG at all, https://tools.ietf.org/html/rfc2026#section-4.2.1) get reviewed before publication. The reviews in that case are private, but an RFC Editor is responsible for sanity-checking the document and sometimes requests additional input from reviewers specialised in the domain area covered by the draft.
[Edit: the actual WG for MSCHAPv2 was https://tools.ietf.org/wg/pppext/, not "Networking" which is just the generic name on top of the RFC]
Edited to add:
The drafting process wasn't worthless; it fixed typographical errors, unclear descriptions, and so on. For example, the zero draft insists Windows usernames are "Unicode" (UCS-2), but actually they're just ASCII: the examples show ASCII-encoded hexadecimal even though the text in the zero draft specifically calls it Unicode. And the document originally said, repeatedly, that something is a 16-bit value in the text while showing a 24-bit value in the structures; the final RFC corrects this by splitting out an 8-bit all-zero "reserved" field in the structure wherever this happens. In at least one place the RFC seems to "extend" the protocol compared to the zero draft, but again this isn't a response to Working Group feedback; it's documenting a patch Microsoft shipped in later Windows versions after the zero draft.
I don't know how much a WG chair could have usefully intervened here. As I say, it's documenting something that already existed, so "fixing" it to document a more secure protocol nobody was using wouldn't help. The IETF's role here was to help people interoperate with Microsoft's solution; your non-Windows OS being able to sign in to a corporate WiFi system with Windows domain servers is enabled by this documentation.
Isn't QUIC a new transport-layer protocol based on UDP, and, if I remember correctly, won't HTTP/3 be the HTTP bindings for QUIC?
You might think this is nitpicking, but HTTP is an application-layer protocol, so it's a little bit confusing to me.
>To address this, I'd like to suggest that -- after coordination with the HTTP WG -- we rename our HTTP document to "HTTP/3", and use the final ALPN token "h3". Doing so clearly identifies it as another binding of HTTP semantics to the wire protocol -- just as HTTP/2 did -- so people understand its separation from QUIC.
TL;DR the rename is to resolve the confusion.
If it doesn't pan out they'll just move on. Remember IPv5?
But hey, who knows. SCTP never took off, but we are talking about Google here.
People here say that we should use HTTP/2 over SCTP, but no protocol will be adopted if there are no good implementations of it.
> But moving from TCP to UDP can get you much the same performance without usermode drivers. Instead of calling the well-known recv() function to receive a single packet at a time, you can call recvmmsg() to receive a bunch of UDP packets at once.
TCP is a streaming protocol; there are no datagrams to read one at a time. Nothing stops you from reading the entire kernel buffer (containing multiple HTTP messages) into userspace in one syscall.
Notwithstanding the other benefits of QUIC, the UDP-vs-TCP difference with respect to crossing the kernel-userspace boundary doesn't seem that significant.
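To illustrate the parent's point, here's a minimal sketch in plain Python (Linux assumed; Python has no `recvmmsg()` binding, so this only shows the stream-vs-datagram semantics, not syscall batching). The messages are made up for the demo:

```python
import socket

# Stream socket: three separate writes, but the bytes sit in one kernel
# buffer, so a single recv() can drain all of them at once.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
for msg in (b"GET /1 ", b"GET /2 ", b"GET /3 "):
    a.sendall(msg)
data = b.recv(4096)                       # one syscall, all three messages
assert data == b"GET /1 GET /2 GET /3 "

# Datagram socket: message boundaries are preserved, so each recv()
# returns exactly one datagram -- hence the appeal of recvmmsg() for UDP.
c, d = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
c.send(b"pkt1")
c.send(b"pkt2")
assert d.recv(4096) == b"pkt1"
assert d.recv(4096) == b"pkt2"

for s in (a, b, c, d):
    s.close()
```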
You could do something similar with TCP if there were kernel support, but the article suggests that putting this much complexity in the kernel's TCP stack is a bad idea because it increases the chance of failure--which can be catastrophic in kernel space. Better to have your web browser/server crash instead of the entire machine.
On the other hand, HTTP/3 is way, way faster. So there are strong incentives for big players to adopt it as fast as possible :)
This also breaks connection steering: the kernel could previously steer all traffic for a connection onto a single core. Realistically, QUIC will need a BPF filter to inspect the connection identifier and steer to the same core in the event of an IP address change.
All this is to say: I don't think most software is ready for QUIC, even if the protocol allows for cool things.
Yeah, but I don't think that's a hard thing to do, realistically speaking. Instead of mapping a source ip/port to a destination in the LB it's mapping a QUIC session ID instead.
> This also means the kernel, which could previously steer traffic to a connection onto a single core.
Most likely the kernel will not be rewritten to support this, since QUIC is a user space protocol. Also, the LB might just be able to rewrite the packet so it looks like it came from the LB when it sends it on to the server. The end server would never see the ip address change.
The load balancer will route any ongoing sessions with the session id, and not the ip address.
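A hedged sketch of what that lookup might look like; the class, backend addresses, and connection ID are all hypothetical, not any real load balancer's API:

```python
class ConnIdBalancer:
    """Toy LB that keys routing decisions on the QUIC connection ID,
    not on the client's source address/port."""

    def __init__(self, backends):
        self.backends = backends
        self.table = {}               # connection ID -> backend

    def route(self, conn_id: bytes) -> str:
        if conn_id not in self.table:
            # First sight of this ID: pick a backend and remember it.
            self.table[conn_id] = self.backends[hash(conn_id) % len(self.backends)]
        return self.table[conn_id]

lb = ConnIdBalancer(["10.0.0.1", "10.0.0.2"])
cid = bytes.fromhex("deadbeefcafef00d")

first = lb.route(cid)          # client on Wi-Fi
after_move = lb.route(cid)     # same client after switching to LTE (new IP)
assert first == after_move     # routing is stable across the address change
```

Because the key never changes when the client's IP does, the session keeps hitting the same backend, which is the whole point of the connection ID.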
Also, most firewalls on end-point devices (think built-in to the OS) are stateful firewalls.
Granted, it requires transferring of ICE connection candidates out of band (via the horrors of SDP)
I get why Google, which controls a great deal of the software on both ends of a very large number of connections, finds a settled standard inconvenient.
But from my perspective, as somebody who uses Google software but does a lot of other things too, I like when we have standards that are implemented by many different people and aren't controlled by a single vendor that is eager to maintain or extend their large market shares in many areas. Can that be slow to change? Sure. But the speed is proportional to how much the change benefits people besides Google.
Personally, I hope that QUIC is a first step toward taking the lessons learned and implementing them widely, rather than something that will evolve at a rapid pace precisely as long as Google needs it to and then stop.
Maybe that’s a matter of opinion, but I consider that to be a feature.
I want protocols to remain stable, and not be subject to whatever whims-of-the-month the fruity people at Google LLC come up with.
With existing HTTP, that library needs to handle portability concerns to use TCP on different operating systems.
With new HTTP, the library will need to do the same thing but with UDP.
Seems like the complexity is no higher, just a difference (TCP vs. UDP) in how your library will interface with the platform network stack API.
Some UNIX distros also have their own kernel HTTP server implementations, and Linux has/had TUX, but I'm not sure if it's in use anywhere.
Didn't read through this article the whole way, but it was the first I found to share that seemed to give a good overview: https://www.codeproject.com/articles/437733/demystify-http-s...
The point being that there are advantages to implementing a kernel-mode or hybrid-mode HTTP server. Microsoft has done it on Windows, and some other implementations exist, but other than MF/Big Unix I've never seen them in actual use.
I don’t think there is much of a point in implementing an HTTP client library in the kernel, though, since performance shouldn’t really be an issue on the client side.
setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, ...)
I'm not dismissing QUIC, but it is in your control to redefine those defaults. Maybe in 2020 we'll be grappling back toward [sane] defaults.
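For example, the same option can be set from Python; this sketch assumes 64-bit Linux, where `struct timeval` packs as two longs:

```python
import socket
import struct

# Override the default send timeout on a TCP socket via SO_SNDTIMEO,
# which takes a struct timeval: (seconds, microseconds).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
timeval = struct.pack("ll", 5, 0)         # 5 seconds, 0 microseconds
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDTIMEO, timeval)

# Read it back to confirm the kernel accepted the new default.
raw = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDTIMEO,
                      struct.calcsize("ll"))
secs, usecs = struct.unpack("ll", raw)
assert (secs, usecs) == (5, 0)
sock.close()
```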
QUIC also frees us from other outdated misfeatures of TCP such as timestamps in milliseconds when they should be in microseconds.
Notably many of the people whose proposals have been shot down by linux netdev are currently working on QUIC.
I feel this move won't make internet a better and safer place, but let's see.
QUIC uses two mechanisms to make sure you cannot mount such attacks:
* it requires a proof of IP ownership (exactly like TCP sequence numbers) to set up a connection ID (essentially, you must be able to receive the server's response to finalise the connection)
* it requires the client's first message (ClientHello) to be padded to at least the size of the server's response, which implies that an attacker would need as much bandwidth as the server spends performing the attack, making the attack no more practical than it is without QUIC.
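A rough sketch of the second mechanism. The IETF QUIC drafts express the same idea as a minimum Initial packet size of 1200 bytes plus a 3x reply budget toward unvalidated addresses; the function below is purely illustrative, not any real implementation:

```python
# QUIC drafts: Initial packets must be padded to at least 1200 bytes,
# and a server may send at most 3x the bytes it received to an address
# it has not yet validated.
MIN_INITIAL = 1200
LIMIT_FACTOR = 3

def server_reply_budget(client_initial: bytes) -> int:
    """Bytes the server may send back to an unvalidated source address."""
    if len(client_initial) < MIN_INITIAL:
        return 0                      # unpadded first flight: reject outright
    return LIMIT_FACTOR * len(client_initial)

# A tiny spoofed packet buys the attacker nothing...
assert server_reply_budget(b"x" * 100) == 0
# ...and even a valid one caps the reflected traffic at 3x what was spent.
assert server_reply_budget(b"x" * 1200) == 3600
```

So the amplification factor toward a spoofed victim is bounded by a small constant, unlike open DNS resolvers.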
But, since it runs on UDP, an attacker could attack a few DNS servers and amplify a UDP attack toward a QUIC server. This is true of all reflection and amplification attacks.
Hence, QUIC servers can be on the receiving end of huge amplification attacks of 100 Gbps and soon 1 Tbps.
It will not make the internet a safer and better place.
Even video game companies used to use UDP, and they moved away because UDP is too dangerous. They now use TCP with a kind of WebSocket technology so as not to allow UDP.
Many big enterprises don't allow UDP toward their critical infrastructure.
I've personally worked on multiplayer game engine code, and I assure you that UDP is far superior for VOIP and game-state packets. TCP requires far too much overhead, requires packets to be received in order, etc.
These requirements make no sense for a game engine. If we have a sequence of player movements, let's say their X position [1, 2, 3], but we miss a packet [1, -, 3], we're fine; we only want their most recent packet. But TCP will require acknowledgement and that packet to be resent, so it will require 8 different packets to be sent, instead of 3! We don't even need the packet!
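The "only the newest update matters" logic above can be sketched like this (hypothetical class, not any engine's actual API): stamp each state packet with a sequence number and simply drop anything stale.

```python
class PositionChannel:
    """Keep only the most recent position update; ignore late arrivals."""

    def __init__(self):
        self.latest_seq = -1
        self.x = None

    def on_packet(self, seq: int, x: float) -> bool:
        """Apply an update; return False if it arrived out of date."""
        if seq <= self.latest_seq:
            return False              # older than what we have: drop it
        self.latest_seq, self.x = seq, x
        return True

ch = PositionChannel()
ch.on_packet(1, 1.0)
# Packet 2 is lost in transit; packet 3 arrives anyway and we carry on.
ch.on_packet(3, 3.0)
assert ch.x == 3.0
# The "lost" packet 2 finally shows up (as TCP would force): it's stale.
assert ch.on_packet(2, 2.0) is False
assert ch.x == 3.0
```

This is exactly what TCP's in-order delivery forbids: it would stall position 3 until position 2 was retransmitted, even though nobody wants position 2 anymore.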
A lot of games are implementing web-based technologies for their UIs (Panorama, for example), and those will of course use TCP, but that's not what the actual game server uses for VOIP/game state.
TCP can cover more use cases than UDP but for some use cases this will be at the expense of performance.
Most (all?) multiplayer games I play still seem to use UDP, though there is definitely more mixed TCP use than there used to be.
Real-time games have to be UDP, or more typically a variation of Reliable UDP (RUDP). Many networking kits use reliable UDP as the core/base of their network layers, with common early implementations such as enet or RakNet (Unity, Unreal, Sony, Oculus and more). RUDP and its variants are UDP with channels, ordering and priority, as well as ACKs where needed for reliable/must-deliver messages, through the use of a return UDP ACK datagram for verification. Reliable UDP is a set of service enhancements such as congestion control, retransmission and thinning-server algorithms that allow a Real-time Transport Protocol (RTP) for media broadcasts even in the presence of packet loss and network congestion.
Reliable UDP ACKs are commonly used for global events such as game start, game end, player entered, player died, player hit, etc.; all other positioning/action is UDP broadcast, with dropped packets lerped and slerped out using interpolation and extrapolation to deal with lag compensation and client prediction. Sometimes this also involves channels and grid/graph areas where only messages to players around you or in that area are required to ACK when needed, i.e. player hit/death.
Most large real-time games are just UDP broadcast for 99% of the action. TCP is almost never used in real-time action games like FPS, MMO, RTS, etc.
Rarely are TCP and UDP combined; rather, RUDP (or later something like SCTP) allows streaming/real-time-capable broadcasts with enough verification/reliable messages where needed. Combining TCP and UDP can end up with queuing issues that affect both TCP and UDP traffic, so most games just go with a reliable variant of UDP.
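The reliable-channel half of RUDP described above can be sketched roughly like this (hypothetical code, not enet's or RakNet's actual API): must-deliver events are held until the peer ACKs their sequence numbers, while everything else stays fire-and-forget.

```python
class ReliableChannel:
    """Toy RUDP reliable channel: queue important events until ACKed."""

    def __init__(self):
        self.next_seq = 0
        self.pending = {}             # seq -> payload awaiting ACK

    def send(self, payload: bytes) -> int:
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = payload   # keep a copy for retransmission
        return seq

    def on_ack(self, seq: int) -> None:
        self.pending.pop(seq, None)   # peer confirmed receipt: forget it

    def to_retransmit(self):
        """Everything not yet ACKed gets resent on the next tick."""
        return list(self.pending.items())

ch = ReliableChannel()
s0 = ch.send(b"player_died")          # global event: must be delivered
s1 = ch.send(b"game_end")
ch.on_ack(s0)                         # ACK datagram came back for s0 only
assert ch.to_retransmit() == [(s1, b"game_end")]
```

Position updates would bypass this channel entirely and go out as plain unacknowledged datagrams, which is why the reliable machinery stays cheap.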
Gaffer on Games has a good section on why UDP is used in games:
> The web is built on top of TCP, which is a reliable-ordered protocol.
> To deliver data reliably and in order under packet loss, it is necessary for TCP to hold more recent data in a queue while waiting for dropped packets to be resent. Otherwise, data would be delivered out of order.
> This is called head of line blocking and it creates a frustrating and almost comedically tragic problem for game developers. The most recent data they want is delayed while waiting for old data to be resent, but by the time the resent data arrives, it’s too old to be used.
> Unfortunately, there is no way to fix this behavior under TCP. All data must be received reliably and in order. Therefore, the standard solution in the game industry for the past 20 years has been to send game data over UDP instead.
> How this works in practice is that each game develops their own custom protocol on top of UDP, implementing basic reliability as required, while sending the majority of data as unreliable-unordered. This ensures that time series data arrives as quickly as possible without waiting for dropped packets to be resent.
> So, what does this have to do with web games? The main problem for web games today is that game developers have no way to follow this industry best practice in the browser. Instead, web games send their game data over TCP, causing hitches and non-responsiveness due to head of line blocking.
> This is completely unnecessary and could be fixed overnight if web games had some way to send and receive UDP packets.
>The problem is fairness in the presence of network congestion. To a large extent it depends on most TCP implementations using the same congestion control algorithm, or at least algorithms that have the same general behavior. Google's developed a new algorithm called BBR that is robust, but also unfair. When a TCP connection implementing the NewReno algorithm shares a congested link with another one implementing BBR, the BBR grabs the lion's share of the bandwidth:
>QUIC specifies NewReno as default and mentions CUBIC, but the choice of algorithm is left to the implementation. I can easily envision Google using BBR for connections between Chrome and Google properties, which means Google traffic would be prioritized over competitors'. Over time, more players would implement BBR in a race to the bottom (or a tragedy of the commons) and Internet brown-outs as in the 1980s and 1990s would come back.
That part definitely needs more justification. An algorithm playing badly with NewReno doesn't mean that we'd be worse off if every system switched to it.
If you break TCP, and your answer is that "we'd be ok if every system switched to it", then you'd better start working on switching every system.
And this kind of thing needs more numbers in general. Maybe in a large mix of traffic, outside the edge case of a saturated link with two streams, it doesn't dominate too badly. Maybe because TCP incorrectly blocks so often, the average 'victim' user still benefits overall because only some of their devices are unpatched.
WebRTC is powerful but seems to be limited by not having a good portable server implementation, due to complexity.
In fact, as I understand it, the point of the rename to HTTP/3 is to distinguish the QUIC-HTTP bindings from the QUIC-by-itself protocol.
> there is going to be a new TCP named TCP/2
There is a new protocol called QUIC, which runs on top of UDP. That is, to routers/firewalls/middleboxes etc. that are not specifically aware of QUIC, it will just look like UDP traffic.
QUIC provides TCP-like features (reliable streams, with retransmissions of dropped packets, etc.) plus more. As to why QUIC instead of improving TCP: experience has suggested that TCP has essentially become ossified, meaning that middleboxes will drop TCP packets using new features (see e.g. ECN). Thus in QUIC there's a focus on ossification-resistance (mandatory encryption, and as little information as possible exposed outside the encryption, etc.).
HTTP/3 runs on top of QUIC (which runs on top of UDP). Hopefully that makes it clearer.
Just another question: are the "middleboxes" well prepared for a massive switch toward UDP traffic? I don't know the network hardware world very well.
Also I thought TCP traffic was benefitting from hardware implementation and optimization, I guess that's also wrong.
I'm not an expert either, but AFAIK QUIC development has been heavily influenced by the requirement to work on the "real" internet with various middleboxes of varying quality.
And it's not like it's going to be an instant change. QUIC is AFAIK already used between Chrome and google/youtube, and once QUIC & HTTP3 are official, it'll be a very long tail.
> Also I thought TCP traffic was benefitting from hardware implementation and optimization, I guess that's also wrong.
Well, going all the way (that is, TOE) hasn't been that popular, and e.g. the Linux kernel has refused to support such cards in the mainline kernel. What is common, and very useful, is checksum and segmentation offloads. Checksums are present in both TCP and UDP, and AFAIK NICs capable of TCP checksum offload can also do UDP checksum offloading. Similarly for segmentation offloading, except in the UDP world it's called fragmentation offload. Though I guess for QUIC, receive fragmentation offloading won't work as long as the kernel and hardware don't understand QUIC, as they won't understand that two incoming UDP packets can be merged.
Why run HTTP/3 if HTTP/1 costs nothing?
Where I work legacy gets dropped sooner rather than later.
I hope this bombs, er tanks.
You run HTTP/2 and HTTP/1, because a lot of people are still using that, and you don't want to lose them. This especially applies to mobile devices, many of which are stuck with software that cannot be updated for various reasons.
There's no threat of the majority of websites going HTTP/3 anytime soon. By the time that might be a possibility, Tor will catch up.
Tor doesn't support anything that goes over UDP as of 2018, and some volunteer project is going to figure this out?
If you wait a few years for HTTP/3 to settle, proxies will be available that could be glued into tor inside a weekend hackathon.
In 16 years they have not managed to support UDP, and now you say of a project (that you are not familiar with...) that they can get it up over a weekend hackathon.
And you are calling me ridiculous?
This speaks volumes for your predictive ability.
Good day, sir!
Tor doesn't have UDP support right now because it doesn't need it for anything. The last 16 years are not equal to the next 16 years, surprisingly enough.
The second one requires that the feature be maintained in major web servers. Which it will be. On a per-site basis, the developer needs to do nothing.
Lots of things need to happen for HTTP/2 to 'stay alive'.
Your use of the future tense does not convince me.
It's built into the software because enough sites will still be using it fifty years from now, even if it's a minority of traffic.
I seriously doubt you'll ever see TCP blocked.
I literally cannot find a criterion that would support this claim.
Market cap - no. Revenue - no. Customer satisfaction? Employee satisfaction? Contribution to society? No.
2. Alphabet (Google)
3. Alphabet (Google)
If you want to quibble that the five listed aren't always the top five or that the order isn't exactly that listed in the article, or that this isn't valid criteria, go ahead, but this seems like a pretty minor point and tangential to the article.
"With QUIC, however, the identifier for a connection is not the traditional concept of a "socket" (the source/destination port/address protocol combination), but a 64-bit identifier assigned to the connection."
Also, Google benefits from a faster and more secure Internet and its employees have the freedom to pursue that in a wide variety of ways. They aren't all mustache-twirling villains.
But I do think the security and privacy implications should be explored. What could an attacker do with a persistent connection?
But a session cookie doesn't allow transferring a stream to a new IP address during a long request/response.
To be fair to the proponents of QUIC, I'll add the next sentence, which depicts a use case:
"This means that as you move around, you can continue with a constant stream uninterrupted from YouTube even as your IP address changes, or continue with a video phone call without it being dropped"
This is the ultimate dream of every surveillance company & gov't. Of course Google is solving this "problem."
HTTP/3 aka QUIC driven by Go & protobufs ? ;))))