Personally I am _very_ excited by HTTP/3 (and QUIC); it feels like the building block for Internet 2.0, with connection migration across different IPs, mandatory encryption, bidirectional streams, and the fact that it's a user-space library. Sure, more bloat, but from now on we won't have to wait for your kernel to support feature X, or even worse, for your ISP-provided router or some decade-old middlebox on the Internet.
I haven't had the chance to read the actual spec yet, but it's obvious that while the current tech (HTTP/2) is an improvement over what we had before, HTTP/3 is a good base to make the web even faster and more secure.
HTTP/3 won't be IPv6: it only requires support from the two parties that benefit from it the most: browser vendors and web server vendors. We won't have to wait on the whole internet to upgrade their hardware.
I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---whether because they have thousands of open-source developers or are backed by a corporation---have it easy. Smaller players? Not so much.
Also, note that the exact problem that HTTP/3 tries to solve was known during the design process of HTTP/2, and some people even noted that having multiple flow control schemes at multiple layers would become a problem. We are letting the same people design the next layer, and probably too fast, in the name of time to market.
This should definitely live in a way people can make use of it easily, with an API highly amenable to binding. If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth. This kills operating system diversity as well, or runs the risk of doing so.
OTOH, I see the lure: SCTP never caught on for a reason, and much of this is the opposite of my above worries.
It could, but it didn't in reality. HTTP/2 has two levels of flow control, stream-level and connection-level. You use 1 connection per site and as many streams as you want multiplexed inside that connection, thus stream-level flow control is necessary to avoid stream head-of-line blocking.
The actual layering violation is connection-level flow control, which seems to duplicate TCP's flow control. But it's not mandatory: as you can see, most if not all open-source implementations simply set a very large connection-level window size, handing flow control at this level off to TCP.
There is a good reason for this to exist, which is to compete for bandwidth with HTTP/1.1's domain-sharding technique, which uses N connections per "site", effectively getting N times the Initial Congestion Window (IW) that HTTP/2 can have in one connection. IW was a huge issue in improving connection startup latency, and after managing to convince Linux netdev to raise it to 10, Google couldn't get them to let applications customize its value any further. The only solution for Google was to add some flow control information to HTTP/2 and couple it with TCP's flow control to improve the effective IW. So in reality only one flow control scheme is active at any time, contrary to the common perception of a TCP-over-TCP-style meltdown. Anyone else can simply skip connection-level flow control in HTTP/2 and nothing of value is lost.
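To make the two levels concrete, here's a toy sketch (the class and numbers are my own, not from any real HTTP/2 implementation) of how a sender is constrained by both windows, and how a huge connection window effectively hands that level off to TCP:

```python
# Toy model of HTTP/2's two flow-control levels. Sending on a stream is
# limited by BOTH the per-stream window and the shared connection window;
# setting the connection window very large effectively delegates that
# level to TCP's own flow control.

class H2FlowControl:
    def __init__(self, stream_window, conn_window):
        self.conn_window = conn_window
        self.default_stream_window = stream_window
        self.stream_windows = {}

    def open_stream(self, stream_id):
        self.stream_windows[stream_id] = self.default_stream_window

    def sendable(self, stream_id, nbytes):
        # A DATA frame may carry at most min(stream window, connection window).
        return min(nbytes, self.stream_windows[stream_id], self.conn_window)

    def consume(self, stream_id, nbytes):
        n = self.sendable(stream_id, nbytes)
        self.stream_windows[stream_id] -= n
        self.conn_window -= n
        return n

fc = H2FlowControl(stream_window=65_535, conn_window=2**31 - 1)
fc.open_stream(1)
fc.open_stream(3)
# With a huge connection window, each stream is only capped by its own window:
sent = fc.consume(1, 100_000)
print(sent)   # 65535
```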
TCP isn't alien technology we don't understand. We do understand it, and its limits, and its constraints, and that means we can build a better one next time.
/* Enable TCP keepalive probes on socket s (checking the return value). */
int yes = 1;
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes)) < 0)
    perror("setsockopt(SO_KEEPALIVE)");
Not really, or only up to a point. After that it will drop them into the bit bucket without telling either the sender or the receiver. With TCP the sender will eventually "find out" that the receiver isn't getting the data.
The point is that with streams on top of UDP all that has to happen in the application layer.
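A quick way to see the fire-and-forget nature (standard library only; port 9 is the discard port and is almost certainly closed on a typical machine):

```python
import socket

# A UDP send "succeeds" locally even if nobody is listening at the other
# end; no error will ever come back to tell us the datagram was dropped.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
n = s.sendto(b"hello?", ("127.0.0.1", 9))   # discard port, almost surely closed
s.close()
print(n)   # 6 -- the bytes were handed to the kernel; that's all we know
```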
If it's better for TCP to be handled in userspace, fine -- they should build the APIs for that on the OSes they control; and agitate for it in the OSes they don't.
And, maybe, just maybe, they could turn on path MTU blackhole detection, please please please please please; it's been in the Linux kernel for as long as Android has existed, just never turned on.
This can be partially mitigated in the same way it has been worked-around before: Through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.
E.g. on the server side it might be good enough to have an API gateway, load balancer or CDN which understands HTTP/3 and forwards things in boring HTTP/1.1 to internal services. That's not very different from terminating TLS somewhere before the actual service implementation. In fact, service implementations don't even have to speak HTTP - they can also talk via stdin/out to an HTTP/3 server in another language - which means we're back to CGI.
On the client side, we could deploy a client-side proxy server which translates localhost HTTP/1.1 requests into remote HTTP/3 requests. If that thing is part of the OS distribution, it's actually not that much different from a TCP/IP stack which is delivered as part of the kernel. However if it's not part of the OS it might cause some trust issues. And apart from that it might be a bit inconvenient for users, since now applications need to be changed to make use of the proxy.
But if we’re doing that we get none of the so called benefits Google-HTTP 2.0 and Google-HTTP 3.0 brings, so what’s the point of using them in the first place?
That’s completely ignoring Google-HTTP 4.0, 5.0 and 6.0 probably coming next year, and the issue of when Google thinks it is “reasonable” to break compatibility with the real HTTP, ie HTTP 1.1.
(also, if you want your concerns to be taken seriously, I'd tone it down a bit. "so called benefits", "Google HTTP", and "probably coming next year" when QUIC has been in development and testing for over 5 years all don't really give the impression you actually care about the details)
If you think that's bad, try building a browser from scratch these days!
Then, make it adhere 100% to the HTML5 and CSS3 specs! (W3C versions; I know WHATWG uses living docs.)
What is that reason, exactly?
I know why I never use it. The use cases where it really shines aren't that common, and it's a very heavy, telco-style protocol.
However those things are (mostly) true for QUIC as well.
Why not SCTP-over-UDP
SCTP is a reliable transport protocol with streams, and for WebRTC there are even existing implementations using it over UDP.
This was not deemed good enough as a QUIC alternative due to several reasons, including:
- SCTP does not fix the head-of-line-blocking problem for streams
- SCTP requires the number of streams to be decided at connection setup
- SCTP does not have a solid TLS/security story
- SCTP has a 4-way handshake, QUIC offers 0-RTT
- QUIC is a bytestream like TCP, SCTP is message-based
- QUIC connections can migrate between IP addresses but SCTP cannot
> But because of the second point, why should someone implement it?
I'm reading this as "SCTP has ports, why should someone implement it?" There is way more to SCTP than ports. For example, SCTP can deliver data on multiple independent streams, something HTTP/2 in many ways reinvents.
Those already exist.
After that point it's become more and more sterilized. My web apps that automatically played some sound aren't going to work anymore without some obnoxious "click here to begin" screen that doesn't fit in with the content. No more plugins letting us extend our browsers in new ways (what a convenient "coincidence" for Google that this gives them more control over what the user gets to do and makes tracking what goes on easier). I have to give Reddit Enhancement Suite permission every single time it tries to show a preview from a domain it hasn't previously shown one from. It's all suffocating. HTML5 makes up for some of the lost capability, but it's not enough, and which parts of HTML5 will actually work is basically at the whim of Google now.
But at least HTTP/3 will let us load buzzfeed listicles a few milliseconds faster, so there's that.
On the other hand, we already live in this world. When was the last time you used a homemade CPU or graphics chip?
It's still possible for an indie scene to arise that values hand-crafted stuff, possibly at a different layer.
Then again, it is still a massive cross platform content publishing and distribution system that works, despite the hostile ecosystem it inhabits. And it even includes the first truly successful cross platform programming environment.
So there's that.
1. You don't need Web PKI certificates for encryption. Indeed in TLS 1.3 this is very obvious because the encryption switches on before any certificates are even involved. You need certificates to... certify identity. And this isn't some oddity of "the web" which might show it's "broken" but simply a mathematical fact about what identity is. If you don't want certificates, you have to just magically know every identity somehow. Works for ten PCs in your office, doesn't scale for tens of millions of web sites.
2. Tim's "Original design goals" are for a system that runs at CERN in Switzerland and is modelled on an earlier system he'd worked with in the 1980s. Tim's system has no encryption, nor does it have most other features you'd expect.
The other comment sums it up: a trusted third party is a good middle ground between convenience and security.
Device? We're talking about browsers. Browsers are getting increasingly hostile towards self-signed certs. Ironically, Google doesn't trust third-party root CAs, so they became one themselves. It's good to be the exception to the rules you push on others.
It's definitely worth having the encryption that prevents a lot of problems today, but I'm worried that QUIC has no unencrypted variant at all. That's almost certainly safer for the user, but it means that if a government blacklisted you out of a certificate, you're screwed.
I'm trying to interpret your stance in the most favorable possible manner, but... dude. If you think hobbyist websites are increasingly burdensome to set up, you haven't been paying any attention at all.
Flash/Java (applets, presumably) were never easier to deploy than HTML...
and deploying static sites continues to get easier and easier. See eg Netlify or Zeit/Now.
Autoplay is abused by advertisers and is a terrible UX. I get that you have a particular, outdated workflow and you'd prefer that nothing change, but really that ship sailed a long time ago.
The energy invested in developing HTTP successor protocols is not being taken away from efforts to stop Google from ruining the concept of the web browser as a _user_ agent.
If you're a large organisation you can move to IPv6 "today". What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing etcetera. Everything inside is purely IPv6. A lot of your networking gets simpler when you do this, and debugging is a LOT smoother because there's no more "Huh 10.0.0.1, could be _anything_" everything has globally unique addresses because it's not crammed into this tiny 32-bit space.
At the edge, you have protocol translators to get from IPv6 (which all your internal stuff uses) to IPv4 (which some things on the Internet use), but you probably already had a bunch of gear at the edge anyway, to implement corporate policies like "No surfing for porn at work" and "Nobody from outside should be connecting to port 22 on our machines!".
This isn't really practical for "One man and an AWS account" type businesses where your "Internet access" is a Comcast account and an iPhone, but if you're big enough to actually have an IT department, suggest they look into it. It may be cheaper and simpler than they'd realised.
"Throw everything away and start from scratch." uh yeah, that's totally gonna work for a large organization. They'll be done in an afternoon! That includes rewriting all your legacy apps that only support ipv4, including the ones you bought from 3rd parties where you don't even have the source code.
Yes and no. As I stated at the end of my comment, the problem with IPv6 is that it's not clear who benefits the most: I am interested in it, as a power user. Average Joe doesn't care. App developers don't care (no killer IPv6 apps yet). Large ISPs with extensive CG-NAT deployments don't care (not worth the money yet; see IPv6 adoption in the UK).
Who cares about HTTP/3? Average Joe — Not really. Mozilla/Google — Hell yeah they do. It'll be in Chrome before anyone else (if it isn't already). Same with nginx/Apache/any other webserver, Joe Blog with his own VPS will want to enable it. And that's all you need.
If it helps, Apple now requires apps support IPv6-only networking.
I mean, if you can easily update whatever userland library you're using, why can't you upgrade your OS? If the library is easy to upgrade it means that it uses a well defined and backward-compatible interface. What do you get by shifting everything one layer up? In the end it's just software, there's not really any reason why upgrading a kernel driver should be any harder than upgrading a .so/.dll.
So the logic is "kernels are too slow to update and integrate the last new standards, so let's just move everything one step up because browsers auto-update"? Except that there's no technical reason for that, on my Linux box my browser and my kernel are updated at the same time when I run "apt-get upgrade" or "pacman -Syu" or whatever applies. The kernel I'm using at the moment has been built less than a week ago.
So if the problem is that Windows sucks balls and as a result people end up effectively re-creating an operating system on top of it to work around that, then yeah, from a practical standpoint I get it but I'm definitely not "_very_ excited" about it. It's a rather ugly hack.
If, in general, the question is "who do you trust more to select and implement new internet standards, kernel developers or web developers?" then I take a side-glance at the few GBs used by my web browser to display a handful of static pages at the moment and I know the answer as far as I'm concerned...
So yeah, it might make sense, but I still think it just goes to show what a shitshow modern software development has become. Instead of fixing things we just add a new layer on top and we rationalize that it's better that way.
The problem is the network appliances sitting between you and the server, i.e. the whole internet. To support feature X, everything between you and the server needs to support it (unless it's backwards compatible, but that's not always the case, as described in the article).
Decade-long adoption will solve this problem, until one day your packet gets routed through some router running Linux v2.5 and your connection silently fails.
This isn't good enough to build a faster (and more reliable) internet on, whereas UDP is a 40-year-old standard, and we can assume everybody supports it, even Linux v2.5.
>from now on we won't have to wait for your kernel to support feature X
This is orthogonal to the issue you're discussing (for instance as a thought experiment you could design a new protocol on top of ethernet in userland using raw sockets and it won't be supported by anybody, or you could implement something on top of TCP in the kernel and it'll work everywhere).
I just wanted to point out that outdated kernels aren't inevitable; they're a consequence of bad industry practices (in particular, although not uniquely, by Microsoft with its Windows OS). On Linux everything is updated together and the kernel is mostly just another package, so it's a non-issue. It also means that applications don't have to ship a custom updater (and all the related infrastructure) themselves.
Except on my Linux box (Ubuntu): yes, the kernel is patched, but sadly the version doesn't increase very often at all. I do choose to run the mainline kernels since I'm on a laptop and I find that beneficial, but I believe that's not the default for most Linux installations.
Actually I'm kinda relieved QUIC succeeded at all with much less layering on top of existing stuff than usual. (Compared to, say, Websockets-over-HTTPS-over-TLS-over-TCP-over-someIPv6-over-IPv4-tunnel...) If it's feasible to deploy a major new protocol over just UDP, that's practically as good as directly over IP!
P.S. I think encryption is the main force that held back the (economically almost inevitable) desire of middleboxes to "add value" by manipulating inner layers.
If you can actually remember the days of Linux 2.5 (development branch which became 2.6) this is a hilarious analogy. I guess that's what the kids are calling ancient these days, eh? Linux 2.5, when dinosaurs roamed the earth! It even did UDP, can you believe it?!
On the other hand there are some really nice QUIC implementations in Rust, and running in userspace has security advantages.
Instead we now have transport layers that are application specific and 3 completely different web protocols with none of them being considered legacy, 2 of them being complex enough that people aren't very willing to move.
That does not look like a good foundation for anything.
[About a third of all "popular" (ie top 10 million) web sites are HTTP/2 today]
Or did you just mean "I don't care about the facts, I'm angry and the world changes which I don't really understand, so I just make things up and call that truth because it's easier" ?
Don't forget that a huge chunk of them are hosted on megacorp cloud platforms.
Everything became so "simple" and "streamlined" that companies are forced to outsource all their hardware and platform management and then hire a small army of AWS certified devops.
Nothing is being forced. You can still set up a server in your basement, or rent/build a data center and run nginx to get all of the benefits of H2, TLS1.3, etc. You can even get "megacorp-quality platform management" with things like Outposts, GKE on-prem, Azure stack, etc.
Not directly, but it is by complexity of dominant technology stacks, protocols and standards that are influenced by ubercorps.
It definitely has an impact on our system which requires sub 50ms response times on 2000+ concurrent requests.
It's a PITA if you want to debug the streams because they're not plain text, but given that we're over TLS, that's not really possible anyway.
In testing, we use ye-olde HTTP/1.1 and no TLS, but even over HTTP/2 and TLS, the browser will still display a JSON request/response happily. Rare that we have to go lower in the stack.
I like datagrams so much more than an accept-listen-keepalive-blocking-foreverloop-callback-async-threaded-future hell that is TCP.
I could be wrong of course I need to read the spec too. Anything UDP makes me giddy.
I can guarantee you that middleboxes will continue to exist. If they need to, they'll force QUIC connections to terminate and fall back to TLS 1.3 over TCP. There's no way that companies will allow encrypted communications to leave their networks en masse without being able to decrypt the content. Even more so for any totalitarian state governments that need to spy on their citizens.
Then they'll install MITM certificates on the individual endpoints that they already control. The ability to intercept connections between endpoints is inexorably going away.
> Fixing this issue is not easy, if at all possible, to do with TCP.
Are there any resources to better understand _why_ this can't be resolved? If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
I'm a bit wary of this use of UDP when we've essentially re-implemented some of TCP on top, though I understand it's common in game networking.
The issue is TCP's design assumption of a single stream. You never get any out-of-order data, but that also means you can't get out-of-order data even when you want it. When you have multiple conceptual streams within a single TCP connection, you actually just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that. If you can ignore this issue, http/2 is really nice because you're saving a lot of the overhead of spinning up and tearing down connections.
>If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
Because it performs worse under good conditions. TCP has no support for handing off what is effectively part of the connection into a new TCP connection.
And QUIC essentially _is_ your suggestion.
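A toy model of the difference (entirely illustrative, not real protocol code): three streams multiplexed over one connection, with one early packet delayed. With a single TCP-style sequence space, the gap stalls everything; with per-stream ordering, only the affected stream waits:

```python
# Toy model: three streams ("a", "b", "c") multiplexed over one connection.
# The packet with sequence number 1 is "lost" in transit and arrives last.

packets = [  # (sequence number, stream id, payload)
    (0, "a", "a0"), (2, "b", "b0"), (3, "c", "c0"), (1, "a", "a1"),
]

def tcp_like(pkts):
    """Single sequence space: nothing past a gap can be delivered."""
    buf = {seq: (sid, data) for seq, sid, data in pkts[:-1]}  # seq 1 still in flight
    delivered, nxt = [], 0
    while nxt in buf:
        delivered.append(buf[nxt])
        nxt += 1
    return delivered

def quic_like(pkts):
    """Per-stream ordering: a gap only stalls its own stream."""
    return [(sid, data) for seq, sid, data in pkts[:-1]]

print(tcp_like(packets))   # [('a', 'a0')] -- streams b and c are stalled by a's gap
print(quic_like(packets))  # [('a', 'a0'), ('b', 'b0'), ('c', 'c0')]
```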
> just want the order maintained within those conceptual streams and not the whole TCP connection, but routers don't know that.
seems to imply that routers inspect TCP streams and maintain order. I'm not aware of any routers that actually do anything like this, and things need to keep working just fine if different packets in the stream take different paths. Certainly in theory, IP routers don't have to inspect packets any deeper than the IP headers if they're not doing NAT / filtering / shaping. The protocols are designed to minimize the amount of state kept in the routers.
As far as I'm aware, only the kernel (or userspace) TCP stack makes much effort at all to maintain packet order (other than routers generally using FIFOs).
An example of this is SCPS-TP.
Because using 6 TCP connections per site is a hack to have larger initial congestion windows, i.e. faster page loading, ending up using more bandwidth in retransmission instead of in goodput. Instead we could have more intelligent congestion control algorithms in one TCP connection to properly fill up the available bandwidth. See https://web.archive.org/web/20131113155029/https://insoucian... for a more detailed account (esp. the figure of "Etsy’s sharding causes so much congestion").
— UDP-based and different stream multiplexing such that packet loss on one stream doesn't hold up all the other streams.
— Fast handshakes, to start sending data faster.
— TLS 1.3 required, no more clear-text option.
Overall this has the potential to help with overall latency on the web, and that is something I am really looking forward to.
(Yes I'm aware that there are many steps that can be done today to reduce latency, but having this level of attention at the protocol level is also an improvement.)
The documentation says how in theory it could happen, but all actual client software just does ALPN, which is a TLS feature to let you pick a different sub-protocol after connecting. Since it's a TLS feature you are obliged to use encryption.
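For the curious, ALPN is exposed directly in e.g. Python's standard `ssl` module; a client offers protocols in preference order and reads back what the server chose after the handshake:

```python
import ssl

# ALPN: the application protocol is negotiated inside the TLS handshake
# itself, which is how clients end up speaking h2 (HTTP/2 over TLS)
# without any separate cleartext upgrade step.
assert ssl.HAS_ALPN   # true on any modern OpenSSL build

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # offered in preference order
# After wrapping a socket and completing the handshake, the result is read
# with conn.selected_alpn_protocol() -> "h2", "http/1.1", or None.
```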
Frequent use of Google probably puts this number on the higher end without revealing much information about general adoption.
Personally, I am waiting for HTTP/5, since the speed for new protocol versions seems to be set on "suddenly very fast".
That said, I think HTTP/2 was a good add-on for the protocol.
On the other hand, a lot of over-engineered protocols fail or are a giant pain to use. I think we will only see adoption if there is a real, tangible benefit to upgrading infrastructure.
QUIC doesn't really convince me yet. It is certainly advantageous for some cases, but they aren't obvious to me. Yes, non-blocking parallel streaming connections are certainly great... 0-RTT? Hm, I don't think the speed advantages are worth the reduced security if used with a payload. Maybe for Google and similar services, but otherwise? QUIC needs to re-implement TCP's error checking and puts these mechanisms outside of kernel space. Let's hope we don't see other shitty proprietary protocols that are "similar" to HTTP.
(I am no web- or network-developer)
Protocols that live on top of a transport (QUIC or TLS 1.3 itself) that offers 0-RTT are supposed to explicitly define whether and how it's used. HTTP is drafting such advice.
You should definitely avoid software that "magically" uses 0-RTT today without that definition being completed, particularly client software. Because of how TLS works, if you never use client software that can do 0-RTT, nothing you send can be replayed, so you're safe. The danger only sneaks in if you run client software that does 0-RTT _and_ the server has dangerous behaviour. Well, you can't tell about the server, but you can easily choose not to run that client.
No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today. They've talked about it, and I can imagine it sneaking in for specific jobs where nobody can see how it causes problems, but I do not expect them to screw up and start doing 0-RTT GET /money-transfer?dollars=1million because they've been here before and they know what will happen when some idiot builds a server.
In client software libraries it's a bit scarier. So, if you use an HTTP library and one day it's like "Yay, now we do 0-RTT to make everything faster" that's probably going to need some stern words in a bug report.
This was wrong. 0-RTT is enabled in current Firefox builds. I haven't been able to determine under what circumstances Mozilla now chooses to do 0-RTT, but you can switch it off if you're concerned, it is controlled by the pref security.tls.enable_0rtt_data
Have you ever, err... checked how bad TCP's error checking is? It just adds together all the 16-bit words! It's hard to imagine a worse algorithm for the purpose.
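For the doubtful, that really is the whole algorithm (RFC 1071's ones'-complement sum), and because addition is commutative it can't even notice two 16-bit words being swapped:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"   # pad odd-length data with a zero byte
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = b"\x12\x34\xAB\xCD"
swapped  = b"\xAB\xCD\x12\x34"   # the same two 16-bit words, transposed
# Addition is commutative, so the corruption goes completely undetected:
assert inet_checksum(original) == inet_checksum(swapped)
```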
Major versions look better on Googlers’ quarterly promotion case.
I'm wondering if anyone with a little more knowledge could go deeper into what the difference is between "TLS messages" and "TLS records" as talked about in this snippet:
> the working group also decided that [...] [QUIC] should only use "TLS messages" and not "TLS records" for the protocol
From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames. Is this accurate? If so, why define a new frame format? Just to be able to lump multiple frames into one packet?
Sort of, kinda, no? It's a "standard TLS handshake" from a cryptographic point of view, but the TLS standard specifies that all this data travels over TCP. QUIC doesn't use TCP, so for QUIC the same data is cut up differently and moved over QUIC's UDP channel. So, everything uses QUIC's frames, not just application data.
QUIC needs to solve a bunch of problems TCP already solved, plus the new problems, and chooses to do so in one place rather than split them and have an extra protocol layer. For example, "What do I do if some device duplicates a packet?" is solved in TCP, so TLS doesn't need to fix it. But QUIC needs to fix it. On the other hand, "What do I do if some middleman tries to close my connection to www.example.com?" is something TCP doesn't solve and neither does TLS but QUIC wants to, so again QUIC needs to fix it.
One reason to do all this in one place is that "it's encrypted" is often a very effective solution even when your problem isn't hostiles, just idiots. For example, maybe idiots drop all packets with the bytes that spell "CUNT" in them in some forlorn attempt to protect "the children". Ugh. Now nobody can mention the town of Scunthorpe! But wait: if we encrypt everything, now the idiot filter will just drop an apparently random and vanishingly small proportion of packets, which we can live with. "I just randomly drop one entire packet for every 4 gigabytes transmitted" is still stupid, but now everything basically works again.
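As an example of owning the whole layer: QUIC never sends two packets with the same packet number on a connection, so a receiver can reject duplicated packets with simple bookkeeping. A toy sketch (the names are my own, and a real implementation tracks ranges rather than a set):

```python
# Sketch of duplicate detection with never-reused packet numbers, the
# scheme QUIC relies on: a retransmission gets a NEW packet number, so
# any repeated number must be a network duplicate (or a replay).

class DedupReceiver:
    def __init__(self):
        self.seen = set()

    def accept(self, packet_number: int) -> bool:
        """Return True for fresh packets, False for duplicates."""
        if packet_number in self.seen:
            return False
        self.seen.add(packet_number)
        return True

rx = DedupReceiver()
results = [rx.accept(n) for n in (0, 1, 1, 2, 0)]
print(results)   # [True, True, False, True, False]
```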
>The work on sending other protocols than HTTP over QUIC has been postponed to be worked on after QUIC version 1 has shipped.
I'm very interested in this bit. I'm working on a sensor network using M2M SIM cards which are billed for each 100kb. Being able to maintain an encrypted connection without having to handshake every time could have nice applications.
QUIC is also decently old itself, the last 7 years have been spent proving it is well suited for the real world and able to be iterated upon. This is the kind of difference that matters for standards track vs ignored.
I've read it's 2 to 3 times more CPU intensive. Aren't we implicitly giving an artificial competitive advantage to the "Cloud"? By the "Cloud" I mean big providers like (obviously) Google, Cloudflare, Akamai...
That is raising the barrier to entry for newcomers, is it not?
Isn't TCP already versioned ?
I think parts of it can still be hardware-accelerated. For example, OpenSSL et al will take advantage of available AES encryption CPU instructions, if it knows about them. So, if the TLS library supports such offloading, then the HTTP/3 library would get that benefit.
> I've read it's 2 to 3 times more CPU intensive. Aren't we implicitly giving an artificial competitive advantage to the "Cloud"? By the "Cloud" I mean big providers like (obviously) Google, Cloudflare, Akamai...
Happily, a number of those vendors are kernel developers, and contribute changes back upstream. So, if the bottleneck is in the kernel (for example, by a lack of UDP fast processing paths), then I expect those cloud providers would be working on contributions to make kernel UDP as performant as kernel TCP.
The next thing that would be missing is support for UDP offloading in the NIC space. But TBH I don't know much about the current state of hardware offloading, so I can't speak to it.
> Isn't TCP already versioned ?
I was curious about this, so I looked it up, and I don't think it is. IP is certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol numbers, I only see one entry for TCP. And I don't see anything that looks obviously like 'TCPv2'.
> I was curious about this, so I looked it up, and I don't think it is. IP is certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol numbers, I only see one entry for TCP. And I don't see anything that looks obviously like 'TCPv2'.
Currently there is only a single TCP, it didn't need new version, because it has options mechanism to add additional information as needed. If it would need to be redesigned a new protocol would be created and a new protocol number would be allocated. Kind of like what happened with ICMP and ICMPv6.
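That options mechanism is a plain kind/length/value list at the end of the TCP header; a small sketch of parsing one (the MSS option, kind 2) shows how new information gets squeezed into the existing protocol:

```python
import struct

# The TCP options area is a simple kind/length/value list (RFC 793):
# kind 0 ends the list, kind 1 is a one-byte NOP used for padding, and
# every other option is kind, total length, then length-2 bytes of data.
# Kind 2 is the Maximum Segment Size (MSS) option.

def parse_tcp_options(raw: bytes) -> dict:
    opts, i = {}, 0
    while i < len(raw):
        kind = raw[i]
        if kind == 0:          # end of option list
            break
        if kind == 1:          # NOP padding
            i += 1
            continue
        length = raw[i + 1]
        opts[kind] = raw[i + 2:i + length]
        i += length
    return opts

# NOP, NOP, MSS(kind=2, len=4, value=1460), end-of-list
raw = b"\x01\x01\x02\x04" + struct.pack("!H", 1460) + b"\x00"
opts = parse_tcp_options(raw)
mss = struct.unpack("!H", opts[2])[0]
print(mss)   # 1460
```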
And you could offload UDP, TLS but not QUIC itself. Unless you're BigBuck Company and offload to FPGAs competition can't afford. Could happen.
It's a gap that might close, but right now, to me, it is a notable competitive advantage.
Pretty sure it stands for Quick UDP Internet Connections.
That was my first thought, and the following seem to be assuming that companies will decide to change their policy.
But many public WiFi networks block UDP traffic; are they going to change their policy? Are the people in charge of them even aware of it? (Think coffee shops, restaurants, hotels, ...)
Are we going to have websites supporting legacy protocols ("virtually forever") in order to build a highly available internet?
Also, ISPs in some countries have not been UDP-friendly. I'm thinking mainly about China, where UDP traffic is throttled and often blocked (connection shutdown) if the volume of traffic is significant - I assume they apply this policy to block fast VPNs.
Are they going to change their policy? The worst scenario here would be to see a new HTTP-like protocol coming out of China, resulting in an even larger segmentation of the internet.
If you control the clients you may be able to retain your status quo for some time (by just refusing to upgrade) but the direction is away from having anything filterable. So client software or MITM are your only options.
Ignoring ESNI will probably work fine for a good length of time. If pornhub implements it or something I'd probably have to revisit. Or, since I control the clients I might disable it in their browsers.
If enough people bark up the filter vendor's tree I'm sure they'll add a checkbox to drop esni traffic. They added one for QUIC recently.
Disappointingly, out of all of the changes in HTTP/3, cookies are still present. It'd be nice if HTTP/4 weren't also a continuation of Google entrenching its tracking practices into the Web's structure and protocols.
If it were off by default, would web developers cater to the incredibly small percentage of people who change default settings to turn it on?
We, on the other hand, have no unbiased numbers to look at to discover whether it's a common behavior ;)
I would bet that the definition of "completely broken" could vary as well.
They can more easily just assume (correctly, no doubt) that few users are emulating favoured browsers rather than actually using them. One might imagine they could have a bias toward assuming that the number of such users is small, even if it wasn't. :)
Because without JS pages load much faster and the browser takes less memory. Ads and tracking often don't work without JS. Why would anyone want to use JS?
What you say is true, so the question is why is it on and not off by default?
Also sorry: I answered the wrong question. HTTP basic auth would still work.
I remember a time when basic auth gave you a unique url and the referrer was used to validate you. This was easy to break because you can fake the referrer.
I think the comment you are replying to is reacting to the top-level comment which is advocating removing cookies.
Cookies have been around almost as long as the web proper, and vilified for about the same amount of time.
It's not that I think an encrypted web is bad, it's a very good thing. I am just spooked by tying a text transfer protocol to a TCP system.
You can tell browsers to dump the session keys, which can then be read by Wireshark.
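For instance, browsers honour the `SSLKEYLOGFILE` environment variable and append TLS session secrets to that file, which Wireshark can load to decrypt captured traffic. Python's `ssl` module exposes the same mechanism programmatically (the log path here is purely illustrative):

```python
import os
import ssl
import tempfile

# Browsers append TLS secrets to the file named by SSLKEYLOGFILE;
# Wireshark reads the same format (Preferences -> Protocols -> TLS).
# Python's ssl module can write that format too:
keylog_path = os.path.join(tempfile.gettempdir(), "tls-keys.log")  # illustrative path

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # secrets are appended here on each handshake

print(ctx.keylog_filename)
```

Any capture made while this context (or a browser launched with `SSLKEYLOGFILE` set) is in use can then be decrypted offline.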
> What about devices that are power constrained?
That's thinking from 10 years ago. 10 years ago, there were no native AES extensions in power constrained devices. But now there are, so encryption is really power efficient.
> I am just spooked by tying a text transfer protocol to a TCP system.
I guess by "TCP system" you meant a transport layer protocol. I can actually understand your view: stuff is getting more complicated. I can fire up netcat, connect to Wikipedia, and type out an HTTP/1.0 request manually. With 1.1 this is hard, and with 2.0 it's impossible due to TLS requirements. But there are reasons for this added complexity: you want to be able to re-use connections, or use something better than TCP. As long as there is a spec, and there are several implementations lying around, I think it's okay to add complexity if there is a performance reward for it. Most people care about the performance; who wants to fire up netcat to do an HTTP request?
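To illustrate that difference (a sketch in Python; `example.com` is a stand-in host): an HTTP/1.0 request is plain text you could pipe through netcat, while an HTTP/2 connection opens with a fixed binary preface followed by length-prefixed frames, and in practice sits behind TLS anyway.

```python
# What you could type into `nc example.com 80` for HTTP/1.0 (plain text):
http10_request = (
    "GET / HTTP/1.0\r\n"
    "Host: example.com\r\n"   # Host wasn't required in 1.0, but servers expect it
    "\r\n"                    # a blank line ends the request
)

# By contrast, every HTTP/2 connection starts with this fixed binary
# preface, followed by length-prefixed binary frames; hand-typing an
# exchange is impractical even before TLS enters the picture.
http2_preface = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

print(http10_request)
print(http2_preface)
```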
Of course there are disadvantages, like when you are on a LAN or such. But I think those cases are covered well by the HTTP/1.x family already, and if not you can always add root certificates yourself or make public DNS names you control point to your 192.168.... address.
- HTTP/2 is "multiple HTTP streams multiplexed over 1 TCP-ish L4 connection"
- HTTP/3 is "HTTP over QUIC"
HTTP/3 is meant to replace HTTP/1 or HTTP/2 only to the degree that QUIC replaces TCP. In your air-gapped system, or for local development, QUIC-instead-of-TCP is less compelling.
The whole point of HTTP/3 is that it doesn't treat TLS as a separate layer, that it tightly binds parts of the two protocols to allow more efficient use of time and data. It's not just an option, the protocol doesn't make sense without it. If doing encrypted HTTP isn't what you're after, then this protocol isn't for you.
I suppose this can still happen regardless, except the HTTP/3 connection would stop at the load balancer (which would have to translate to plain ol' HTTP/1 for the servers behind it).
This feeds into the debugging conversation: web browsers and web servers have debugging tools 10x better than reading HTTP packets in Wireshark/tcpdump.
This revealed a problem with the http proxy that the government department used, when they were blaming our system.
A government department will not install tcpdump for you. They might be actively unhelpful if their IT team is run by a BOFH.
This problem had to be debugged from the client, the server comms show no problem.
The more the browser can help me solve real life communication issues, the happier we are (and our users are!)
Now since you know this value, and the other value you need (from the client in this case) is sent over the wire, you can run the DH algorithm and decrypt everything.
You should (obviously) never do this in production, although it is what various financial institutions plan to do and they have standardised at ETSI as an "improvement" on TLS (you know, like how TSA locks are an "improvement" over actually locking your luggage so random airport staff can't steal stuff) ...
Using a fixed ephemeral on the other hand is going to happen in prod too...
If you are a developer or engineer then eat the complexity tax as part of your responsibility and ensure that you are shipping code and products that are secure for the end user who probably doesn't have the expertise to overcome the security gaps left by "developer inconvenience".
I'm at the point where I believe that you can't "layer on" or "abstract away" security like you can with other things, it needs to be thought about at every step.
Just look at attacks that can take advantage of content-length to pluck out which page the user is requesting of a mostly-static site, or how compression and encryption seem to almost be at odds with one another.
You can't ever just assume TLS will handle it when it's abstracted away, and while HTTP/3 may not get rid of those kinds of attacks entirely, bringing "security" closer to the application logic may enable better protections.
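As a sketch of the content-length attack mentioned above (the page names, sizes, and overhead constant are all made-up numbers): an observer who has crawled a mostly-static site knows each page's transfer size, and unpadded encryption hides content but not length.

```python
# Traffic-analysis sketch: the observer pre-computes each page's size,
# then matches observed ciphertext lengths against that table.
page_sizes = {"/index.html": 13450, "/about.html": 8211, "/admin.html": 20992}

def guess_page(observed_len, overhead=29):
    # `overhead` stands in for per-record TLS framing bytes (illustrative).
    return [p for p, s in page_sizes.items() if s + overhead == observed_len]

print(guess_page(8211 + 29))   # → ['/about.html'], without decrypting anything
```

Padding responses to fixed size buckets is the usual mitigation, which is exactly the kind of decision that benefits from security living close to the application.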
Going from using an application framework that's more abstracted, such as ASP.Net (not mvc/api/core) to those where you are closer to the metal (node, python, .net core/mvc/api) was a jump.
Thinking in terms of leveraging push with HTTP/2 alone has me concerned. The tooling around building web applications hasn't even caught up to the current state of being, let alone moving farther. Another issue is dealing with certificates against local/internal development in smaller organizations. It may get interesting, and it may get more interesting than it's actually worth in some regards.
But this book isn’t about concerning yourself with using it or implementing it, it’s about understanding what the future holds, how it works, and what roadblocks lie ahead.
The lack of API support in OpenSSL for its TLS requirements and poor optimization for heavy UDP traffic loads on Linux et al (they say it doubles CPU vs HTTP/2 for the same traffic) sound like they're going to be a major hurdle for widespread adoption any time soon.
If they add easy support in NGINX and HTTPD, then it's easier for self-hosted endpoints to change as well, with minimal to no effort on their side.
The way I have understood, the book says that what is now in use (these 7%) is a "Google-only-QUIC" whereas the "standardized HTTP/3" is still used... nowhere?
The IETF QUIC remains a work in progress, perhaps to be published in 2019. HTTP/3 is an application layer on top of (IETF) QUIC, it might also be published in 2019 or later. There are implementations of current drafts, and the rough shape is settled but they're a long way from being truly set in stone and aren't in anything ordinary people use.
So unsurprisingly nobody is already doing a thing that isn't even standardised yet, but people are, as you see, writing about it.
"> The book says 7% of all internet traffic already uses QUIC (HTTP/3)"
was wrong: the book doesn't say that, and what is claimed the book says (even if it doesn't say it) is false in other respects as well.
IMO it's positive. We are getting free new stuff, and I actually prefer to have two incremental steps, where HTTP 2.0 still uses TCP, giving stuff like multiplexing and pipelining, and HTTP 3.0 uses a novel UDP based transport layer protocol, improving stuff further.
There is objectionable stuff, like the recent Manifest V3 changes that make ad blockers crappier, but this is not one of the objectionable things imo.
I used uMatrix myself in the past (I also used NoScript a much longer while ago), but it requires too much time to cherry pick the remote hosts (usually CDNs) and files to allow.
The web is so refreshingly fast without all the scripts.
aside: used to use/implement something like ajax callbacks with hidden frames and dynamic form posts.
Man, I love JS/Browsers today over pre-2016.
Here are just a few of the immediately obvious flaws I found:
* The UDP checksum is only 16 bits, when it should have been 32 bits or of arbitrary length
* The combined IP+UDP headers are far too large, using/wasting 28 bytes (20 for the IPv4 header plus 8 for UDP) when about 12 bytes would have sufficed to represent source IP, source port, destination IP, destination port
* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)
* Secure protocols like TLS and SSL needed several handshakes to begin sending data, when they should have started sending encrypted data immediately while working on keys
* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)
* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)
* TCP is connection oriented, so your stream dropped when you did something as simple as changing networks (WiFi broke a lot of things by the early 2000s)
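The first bullet can be made concrete. UDP (like TCP) uses the 16-bit ones'-complement Internet checksum, which just folds everything into 16 bits; among other weaknesses, reordering the 16-bit words of a payload leaves the checksum unchanged (sample payloads below are made up):

```python
def inet_checksum(data: bytes) -> int:
    # RFC 1071-style Internet checksum: ones'-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    s = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while s >> 16:                            # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

a = b"\x12\x34\x56\x78"
b = b"\x56\x78\x12\x34"   # same 16-bit words, reordered: identical checksum
assert inet_checksum(a) == inet_checksum(b)
print(hex(inet_checksum(a)))
```

So a corrupted packet whose damage amounts to a word swap sails straight through, one reason higher layers end up re-verifying integrity anyway.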
There's probably more I'm forgetting. But I want to stress that these were immediately obvious for me, even then. What I really needed was something like:
* State transfer (TCP would probably have been more useful as a message-oriented stream; this is also an issue with UNIX sockets. Such a stream could be used to implement, for example, a software transactional memory, or STM)
* One-shot delivery (UDP is a stand in for this, I can't remember the name of it, but basically unreliable packets have a wrapping sequence number so newer packets flush older packets in the queue so that latency-sensitive things like shooting in games can be implemented)
* Token address (the peers should have their own UUID or similar that remain "connected" even after network changes)
* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
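The one-shot delivery bullet is typically built on a wrapping sequence number compared with serial-number arithmetic (as in RFC 1982 and game netcode; the 16-bit width here is an arbitrary choice):

```python
def is_newer(seq: int, than: int, bits: int = 16) -> bool:
    # Serial-number arithmetic over a wrapping counter: `seq` is newer
    # than `than` if it lies in the half-window ahead of it, modulo 2^bits.
    half = 1 << (bits - 1)
    return 0 < (seq - than) % (1 << bits) < half

assert is_newer(5, 3)
assert is_newer(1, 0xFFFF)      # wrapped around: 1 is newer than 65535
assert not is_newer(3, 5)
```

A receiver using this rule simply drops (or replaces) any queued packet superseded by a newer one, so latency-sensitive events like shooting in a game never wait behind stale state.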
Right now the only protocol I'm aware of that comes close to fixing even a handful of these is WebRTC. I find it really sad that more of an effort wasn't made in the beginning to do the above bullet points properly. But in fairness, TCP/IP was mostly used for business, which had different requirements like firewalls. I also find it sad that insecurities in Microsoft's (and early Linux) network stacks led to the "deny all by default" firewalling which led to NAT, relegating all of us to second class netizens. So I applaud Google's (and others') efforts here, but it demonstrates how deeply rooted some of these flaws were that only billion dollar corporations have the R&D budgets to repair such damage.
Okay, enough with the sarcasm. Is it too much to ask for historical perspective in protocol design?
The reason that TCP beat out all the other protocols is because it didn't "layer" everything. OSI was beautiful in the abstract, but a complete cluster-fuck in the implementation.
Now we have enough processing power that the abstract layering makes more sense. But where the layers interact with cross-layer requirements like security was never actually dealt with in the OSI days.
1. Is QUIC only for HTTP/3, or can it be generalized to any TCP-based L7 protocol, but over TLS/UDP?
2. How are WebSockets dealt with in HTTP/3?
> The QUIC working group that was established to standardize the protocol within the IETF quickly decided that the QUIC protocol should be able to transfer other protocols than "just" HTTP.
> The working group did however soon decide that in order to get the proper focus and ability to deliver QUIC version 1 on time, it would focus on delivering HTTP, leaving non-HTTP transports to later work.
That's the real question nobody is asking.
And HTTPS, which is much slower than HTTP, was said to be much, much faster BECAUSE with HTTPS you could use HTTP/2, which you could not with plain HTTP.