
HTTP/3 explained - signa11
https://http3-explained.haxx.se/en/
======
1_player
So much negativity in this thread.

Personally I am _very_ excited by HTTP/3 (and QUIC), it feels like the
building block for Internet 2.0, with connection migration over different IPs,
mandatory encryption, bidirectional streams and it being a user-space library
– sure, more bloat, but from now on we won't have to wait for your kernel to
support feature X, or even worse, for your ISP-provided router or some
decade-old middlebox on the Internet.

I haven't had the chance to read the actual spec yet, but it's obvious that
while the current tech (HTTP/2) is an improvement over what we had before,
HTTP/3 is a good base to make the web even faster and more secure.

HTTP/3 won't be IPv6: it only requires support from the two parties that
benefit from it the most: browser vendors and web server vendors. We won't
have to wait on the whole internet to upgrade their hardware.

~~~
jlouis
I'm worried, though not about the standard itself, which seems well thought
out, even if rushed.

I'm worried because you have a protocol implemented in userland for a few
mainstream languages. It seems everyone now has to pay the price of a protocol
implementation on top of a protocol implementation on top of a protocol
implementation. Big players---either because they have thousands of open
source developers or are backed by a corporation---have it easy. Smaller
players? Not so much.

Also, note that the exact problem that HTTP/3 tries to solve was known in the
design process of HTTP/2 and some people even noted having multiple flow
control schemes at multiple layers would become a problem. We are letting the
same people design the next layer, and probably too fast in the name of time
to market.

This should definitely live in a way people can make use of it easily, with an
API highly amenable to binding. If it gains traction, we need a new UDP
interface to the kernel as well, for batching packets back and forth. This
kills operating system diversity as well, or runs the risk of doing so.

OTOH, I see the lure: SCTP never caught on for a reason, and much of this is
the opposite of my above worries.

~~~
shereadsthenews
The TCP state machine sucks and all of its timing parameters are outdated and
unsuitable for modern networks. QUIC frees us from the tyranny of the kernel.
Being in userspace is a feature.

~~~
jandrese
The rallying cry of everybody who later comes to the realization that they
have re-implemented TCP.

~~~
Jasper_
So what if we use our experience and in-depth knowledge of a past protocol,
take its flaws into account, and build something better? You say
"re-implemented TCP" as if it's the only possible way to build a reliable
packet protocol, as if it has no flaws and we can't make any improvements to
it.

TCP isn't alien technology we don't understand. We _do_ understand it, and its
limits, and its constraints, and that means we can build a better one next
time.

------
lol768
> As the packet loss rate increases, HTTP/2 performs less and less good. At 2%
> packet loss (which is a terrible network quality, mind you), tests have
> proven that HTTP/1 users are usually better off - because they typically
> have six TCP connections up to distribute the lost packet over so for each
> lost packet the other connections without loss can still continue.

> Fixing this issue is not easy, if at all possible, to do with TCP.

Are there any resources to better understand _why_ this can't be resolved? If
HTTP 1.1 performs better under poor network conditions, why can't we start
using more concurrent TCP connections with HTTP 2 when it makes sense?

I'm a bit wary of this use of UDP when we've essentially re-implemented some
of TCP on top, though I understand it's common in game networking.

~~~
jayd16
>Are there any resources to better understand _why_ this can't be resolved?

The issue is TCP's design assumption of a single stream. You never receive
any packets out of order, but that also means you can't receive them out of
order even when you want to. When you have multiple conceptual streams within
a single TCP connection you actually just want the order maintained within
those conceptual streams and not the whole TCP connection, but routers don't
know that. If you can ignore this issue, HTTP/2 is really nice because you're
saving a lot of the overhead of spinning up and tearing down connections.
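
To make that concrete, here's a toy sketch (plain Python, made-up data, not
any real TCP/QUIC API) of two logical streams multiplexed over one in-order
connection: losing the segment that carries stream A's data also stalls stream
B's segments that already arrived, because the receiver may only hand
contiguous bytes to the application.

```python
from collections import deque

# Five segments on one TCP connection, interleaving two logical streams.
segments = {1: ("A", "a1"), 2: ("A", "a2"), 3: ("B", "b1"),
            4: ("B", "b2"), 5: ("B", "b3")}
# Segment 2 is dropped and only shows up after retransmission.
arrival_order = deque([1, 3, 4, 5, 2])

reorder_buffer, next_needed = {}, 1
for seq in arrival_order:
    reorder_buffer[seq] = segments[seq]
    # TCP semantics: deliver only contiguous data, in sequence order.
    while next_needed in reorder_buffer:
        stream, data = reorder_buffer.pop(next_needed)
        print(f"delivered {data} on stream {stream}")
        next_needed += 1
# b1-b3 sit in the buffer until stream A's retransmitted segment 2 arrives.
```

QUIC keeps delivery order per stream instead, so the same loss would only
stall stream A.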

>If HTTP 1.1 performs better under poor network conditions, why can't we start
using more concurrent TCP connections with HTTP 2 when it makes sense?

Because it performs worse under good conditions. TCP has no support for
handing off what is effectively part of the connection into a new TCP
connection.

And QUIC essentially _is_ your suggestion.

~~~
KMag
Good points, but the fragment

> just want the order maintained within those conceptual streams and not the
> whole TCP connection, but routers don't know that.

seems to imply that routers inspect TCP streams and maintain order. I'm not
aware of any routers that actually do anything like this, and things need to
keep working just fine if different packets in the stream take different
paths. Certainly in theory, IP routers don't have to inspect packets any
deeper than the IP headers if they're not doing NAT / filtering / shaping.
The protocols are designed to minimize the amount of state kept in the
routers.

As far as I'm aware, only the kernel (or userspace) TCP stack makes much
effort at all to maintain packet order (other than routers generally using
FIFOs).

~~~
sroussey
Hard to do deep packet inspection otherwise. Or DDoS protection to some
degree. Etc. On a SoHo router, though, I agree with you.

~~~
KMag
What uses of deep packet inspection on the router itself don't fall under
filtering / shaping?

------
athenot
Several things that excite me about this protocol:

— UDP-based, with stream multiplexing done such that packet loss on one
stream doesn't hold up all the other streams.

— Fast handshakes, to start sending data faster.

— TLS 1.3 required, no more clear-text option.

Overall this has the potential to reduce latency on the web, and that is
something I am really looking forward to.

(Yes I'm aware that there are many steps that can be done today to reduce
latency, but having this level of attention at the protocol level is also an
improvement.)

~~~
fiatjaf
If we aren't using TCP anymore, does that mean all the network congestion
tooling developed in the last 30 years is suddenly worthless and quality of
service will degrade everywhere?

~~~
collinmanderson
HTTP/3 hopes to improve flow control.

------
raxxorrax
> [..] around 70% of all HTTPS requests Firefox issues get HTTP/2 [...]

Frequent use of Google probably puts this number on the higher end without
revealing much information about general adoption.

Personally, I am waiting for HTTP/5, since the pace of new protocol versions
seems to be set to "suddenly very fast".

That said, I think HTTP/2 was a good add-on for the protocol.

On the other hand, a lot of over-engineered protocols fail or are a giant
pain to use. I think we will only see adoption if there is a real, tangible
benefit to upgrading infrastructure.

QUIC doesn't really convince me yet. It is certainly advantageous for some
cases, but it isn't obvious to me. Yes, non-blocking parallel streaming
connections are certainly great... 0-RTT? Hm, I don't think the speed
advantages are worth the reduced security if used with a payload. Maybe for
Google and similar services, but otherwise? QUIC needs to re-implement TCP's
error checking and puts these mechanisms outside of kernel space. Let's hope
we don't see other shitty proprietary protocols that are "similar" to HTTP.

(I am no web- or network-developer)

~~~
tialaramex
0-RTT is one of those features where the decision was that it's better to
build it and have nobody use it in the end (because it's so dangerous) than to
not build it and then all wish we had it, because then we'd need an entirely
new protocol to get it.

Protocols that live on top of a transport (QUIC or TLS 1.3 itself) that offers
0-RTT are supposed to explicitly define whether and how it's used. HTTP is
drafting such advice.

You should definitely avoid software that "magically" uses 0-RTT today without
that definition being completed, particularly client software. Because of how
TLS works, if you never use client software that can do 0-RTT, nothing you
send can be replayed, so you're safe. The danger only sneaks in if you run
client software that does 0-RTT _and_ the server has dangerous behaviour.
Well, you can't tell about the server, but you can easily choose not to run
that client.

No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today. They've
talked about it, and I can imagine it sneaking in for specific jobs where
nobody can see how it causes problems, but I do not expect them to screw up
and start doing 0-RTT GET /money-transfer?dollars=1million, because they've
been here before and they know what will happen when some idiot builds a
server that acts on it.

In client software libraries it's a bit scarier. So, if you use an HTTP
library and one day it's like "Yay, now we do 0-RTT to make everything faster"
that's probably going to need some stern words in a bug report.
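
For what it's worth, the HTTP-level advice mentioned above (published as RFC
8470) gives servers a way to push back: a TLS-terminating front end can mark
requests that arrived as 0-RTT early data with an Early-Data: 1 header, and
the origin can answer 425 (Too Early) for anything non-idempotent. A minimal
sketch of that idea, assuming such a front end sits in front of this toy
origin:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReplaySafeHandler(BaseHTTPRequestHandler):
    """Toy origin behind a proxy that sets Early-Data: 1 on 0-RTT requests."""

    def do_POST(self):
        # Refuse to act on state-changing requests carried in early data;
        # 425 tells a well-behaved client to retry after the full handshake.
        if self.headers.get("Early-Data") == "1":
            self.send_response(425, "Too Early")
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"transfer accepted\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ReplaySafeHandler).serve_forever()
```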

~~~
tialaramex
> No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today.

This was wrong. 0-RTT is enabled in current Firefox builds. I haven't been
able to determine under what circumstances Mozilla now chooses to do 0-RTT,
but you can switch it off if you're concerned; it is controlled by the pref
security.tls.enable_0rtt_data

------
lazulicurio
Really neat resource. Coming into this thread with next-to-no knowledge of
HTTP/3, this was a great high-level overview of the motivation and resulting
protocol.

I'm wondering if anyone with a little more knowledge could go deeper into what
the difference is between "TLS messages" and "TLS records" as talked about in
this[1] snippet:

> the working group also decided that [...] [QUIC] should only use "TLS
> messages" and not "TLS records" for the protocol

From my understanding quickly reading through the spec, it looks like HTTP/3
starts with a standard TLS handshake for key exchange, but then QUIC "crypto"
frames are used to carry application-level data instead of TLS frames[2]. Is
this accurate? If so, why define a new frame format? Just to be able to lump
multiple frames into one packet[3]?

[1] [https://http3-explained.haxx.se/en/proc-
status.html](https://http3-explained.haxx.se/en/proc-status.html)

[2]
[https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_r...](https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_record)

[3] [https://tools.ietf.org/html/draft-ietf-quic-
tls-18#page-8](https://tools.ietf.org/html/draft-ietf-quic-tls-18#page-8)

~~~
tialaramex
> From my understanding quickly reading through the spec, it looks like HTTP/3
> starts with a standard TLS handshake for key exchange, but then QUIC
> "crypto" frames are used to carry application-level data instead of TLS
> frames[2]. Is this accurate?

Sort of, kinda, no? It's a "standard TLS handshake" from a cryptographic point
of view, but the TLS standard specifies that all this data travels over TCP.
QUIC doesn't use TCP, so for QUIC the same data is cut up differently and
moved over QUIC's UDP channel. So, everything uses QUIC's frames, not just
application data.

QUIC needs to solve a bunch of problems TCP already solved, plus the new
problems, and chooses to do so in one place rather than split them and have an
extra protocol layer. For example, "What do I do if some device duplicates a
packet?" is solved in TCP, so TLS doesn't need to fix it. But QUIC needs to
fix it. On the other hand, "What do I do if some middleman tries to close my
connection to www.example.com?" is something TCP doesn't solve and neither
does TLS but QUIC wants to, so again QUIC needs to fix it.

One reason to do all this in one place is that "it's encrypted" is often a
very effective solution even when your problem isn't hostiles, just idiots.
For example, maybe idiots drop all packets with the bytes that spell "CUNT" in
them
in some forlorn attempt to protect "the children". Ugh. Now nobody can mention
the town of Scunthorpe! But wait, if we encrypt everything now the idiot
filter will just drop an apparently random and vanishingly small proportion of
packets, which we can live with. "I just randomly drop one entire packet for
every 4 gigabytes transmitted" is still stupid, but now everything basically
works again.

------
jorrizza
The author gave a talk about the topic at FOSDEM last weekend.

[https://fosdem.org/2019/schedule/event/http3/](https://fosdem.org/2019/schedule/event/http3/)

------
ttsda
>Non-HTTP over QUIC

>The work on sending other protocols than HTTP over QUIC has been postponed to
be worked on after QUIC version 1 has shipped.

I'm very interested in this bit. I'm working on a sensor network using M2M SIM
cards which are billed for each 100kb. Being able to maintain an encrypted
connection without having to handshake every time could have nice
applications.

------
nitrix
I want to mention that ENet
([http://enet.bespin.org/](http://enet.bespin.org/)) did this a decade ago and
was mostly ignored.

~~~
zamadatix
At first glance I don't think it's fair to say "ENet did this a decade ago".
ENet simply provides multi-channel communication over UDP. It does not provide
0/1-RTT handshakes, encryption of the protocol beyond the initial handshake,
or HTTP bindings. Based on some GitHub issues it doesn't even look like there
is any protocol extension/version negotiation.

QUIC is also decently old itself; the last 7 years have been spent proving it
is well suited for the real world and able to be iterated upon. This is the
kind of difference that matters for standards track vs ignored.

~~~
andrewmcwatters
nitrix isn't referring to the other features, simply the concept of
reliability over UDP to minimize overhead. The games industry has been using
this concept for decades for efficient networking, and only now is the web
community thinking about it.

------
ldng
One thing I don't understand is, if it's encrypted, we'll never see hardware
accelerated QUIC ?

I've read it's 2 to 3 times more CPU intensive, aren't we implicitly giving an
artificial competitive advantage to the "Cloud" ? By the "Cloud" I mean big
provider with like (obviously) Google, Cloudflare, Akamaï ...

That is raising the barrier of entry for newcomers, is it not ?

Isn't TCP already versioned ?

~~~
CaliforniaKarl
> One thing I don't understand is, if it's encrypted, we'll never see hardware
> accelerated QUIC ?

I think parts of it can still be hardware-accelerated. For example, OpenSSL et
al will take advantage of available AES encryption CPU instructions, if it
knows about them. So, if the TLS library supports such offloading, then the
HTTP/3 library would get that benefit.
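
As a rough sanity check of how cheap the symmetric part is on a modern CPU,
here's a quick throughput measurement (a sketch using the third-party
cryptography package, which calls into OpenSSL; on CPUs with AES-NI the
hardware path is picked automatically, so numbers will vary by machine):

```python
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)       # nonce reuse is acceptable only because this is a benchmark
block = os.urandom(1 << 20)  # encrypt 1 MiB per call

start, total = time.perf_counter(), 0
while time.perf_counter() - start < 1.0:
    aead.encrypt(nonce, block, None)
    total += len(block)
elapsed = time.perf_counter() - start
print(f"AES-128-GCM: ~{total / elapsed / 1e6:.0f} MB/s on this machine")
```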

> I've read it's 2 to 3 times more CPU intensive, aren't we implicitly giving
> an artificial competitive advantage to the "Cloud" ? By the "Cloud" I mean
> big provider with like (obviously) Google, Cloudflare, Akamaï ...

Happily, a number of those vendors are kernel developers, and contribute
changes back upstream. So, if the bottleneck is in the kernel (for example, a
lack of fast UDP processing paths), then I expect those cloud providers would
be working on contributions to make kernel UDP as performant as kernel TCP.

The next thing that would be missing is support for UDP offloading in the NIC
space. But TBH I don't know much about the current state of hardware
offloading, so I can't speak to it.

> Isn't TCP already versioned ?

I was curious about this, so I looked it up, and I don't think it is. IP is
certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol
numbers[0], I only see one entry for TCP. And I don't see anything that looks
obviously like 'TCPv2'.

[0]: [https://www.iana.org/assignments/protocol-
numbers/protocol-n...](https://www.iana.org/assignments/protocol-
numbers/protocol-numbers.xhtml)

~~~
takeda
> > Isn't TCP already versioned ?

> I was curious about this, so I looked it up, and I don't think it is. IP is
> certainly versioned (IPv4 vs. IPv6), but looking at the list of protocol
> numbers[0], I only see one entry for TCP. And I don't see anything that
> looks obviously like 'TCPv2'.

Currently there is only a single TCP; it hasn't needed a new version because
it has an options mechanism to add additional information as needed. If it
ever needed to be redesigned, a new protocol would be created and a new
protocol number would be allocated. Kind of like what happened with ICMP and
ICMPv6.
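
For illustration, that options mechanism is just a kind/length/value list
tacked on after the fixed 20-byte header; a small sketch parsing a made-up SYN
option block (MSS, SACK-permitted, window scale, NOP padding):

```python
# Hypothetical option bytes from a SYN: MSS=1460 (kind 2), SACK permitted
# (kind 4), window scale=7 (kind 3), padded to a 4-byte boundary with NOPs.
options = bytes.fromhex("020405b40402030307010101")

def parse_tcp_options(data: bytes):
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:         # end of option list
            return
        if kind == 1:         # NOP, used as padding
            i += 1
            continue
        length = data[i + 1]  # counts the kind, length and value bytes
        yield kind, data[i + 2:i + length]
        i += length

for kind, value in parse_tcp_options(options):
    print(f"option kind {kind}: {value.hex() or '(no value)'}")
```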

------
IshKebab
> QUIC is a name, not an acronym.

Pretty sure it stands for Quick UDP Internet Connections.

[https://lwn.net/Articles/558826/](https://lwn.net/Articles/558826/)

~~~
winklock
The QUIC protocol draft states that it's not an acronym:
[https://tools.ietf.org/html/draft-ietf-quic-
transport-16#sec...](https://tools.ietf.org/html/draft-ietf-quic-
transport-16#section-1.2)

~~~
BurnGpuBurn
The original creator of QUIC also explicitly named it as an acronym [0]. But
of course, if the big boys at the IETF decree it's not an acronym, it isn't.
Just like we've always been at war with Oceania.

[0] [https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-
saqs...](https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFV-
ev2jRFUoVD34/edit)

~~~
anticensor
Prescriptive language always falls apart from actual usage.

------
tbronchain
> A lot of enterprises, operators and organizations block or rate-limit UDP
> traffic

That was my first thought, and the text that follows seems to assume that
companies will decide to change their policy. But many public WiFi networks
block UDP traffic; are they going to change their policy? Are the people in
charge of them even aware of it? (Think coffee shops, restaurants, hotels,
...) Are we going to have websites supporting legacy protocols ("virtually
forever") in order to build a highly available internet?

Also, ISPs in some countries have not been UDP-friendly. I'm thinking about
China mainly, where UDP traffic is throttled and often blocked (connection
shutdown) if the volume of traffic is significant - I assume they apply this
policy to block fast VPNs. Are they going to change their policy? The worst
scenario here would be to see a new HTTP-like protocol coming out in China,
resulting in an even larger segmentation of the internet.

~~~
discreditable
Working in a school I block QUIC traffic so my web filter can (attempt to)
keep kids off porn. Such filtering is required by law for schools. I haven't
found a passive filter that handles QUIC. I don't want to install invasive
client software or MITM connections.

~~~
tialaramex
There won't ever be a passive filter. The QUIC traffic is deliberately opaque.

If you control the clients you may be able to retain your status quo for some
time (by just refusing to upgrade) but the direction is away from having
anything filterable. So client software or MITM are your only options.

~~~
discreditable
I've seen it coming for a while. I'll have to decide which is the lesser evil:
blocking QUIC/HTTP3 or using MITM.

------
DaiHafVaho
This "book" does not work without JS enabled.

Disappointingly, out of all of the changes in HTTP/3, cookies are still
present. It'd be nice if HTTP/4 weren't also a continuation of Google
entrenching its tracking practices into the Web's structure and protocols.

~~~
Jonnax
JavaScript is part of the web. Developers shouldn't have to cater for an
incredibly small percentage of people that don't enable it.

~~~
3xblah
No need to enable it. It is on by default. :)

If it were off by default, would web developers cater to the incredibly small
percentage of people who change default settings to turn it on?

Why is there even a setting? How many people would ever want to turn
Javascript off?

When they provide a setting to toggle Javascript are browser developers
catering to an incredibly small percentage of people? How many people use it?

~~~
marcosdumay
Those people making sites that are completely broken without javascript have
very precise numbers to look at showing that approximately none of their
repeat visitors disable javascript.

We, on the other hand, have no unbiased numbers to look at to discover
whether it's a common behavior ;)

~~~
3xblah
I reckon the key word in this comment is "approximately".

I might still be able to get what I need from a site that someone believes is
"completely broken", including on repeat visits, without using Javascript.

Sometimes HN commenters debate what it means when a site "does not work"
without Javascript. Some believe if an HTTP request can retrieve the content,
then the site works. Others believe if the content of the site is not
displayed as the author intended then the site is not "working".

I would bet that the definition of "completely broken" could vary as well.

Do the people running sites try to determine how many users are actually using
Javascript to make the requests, e.g., to some endpoint that serves the
content, maybe a CDN?

Browser authors could in theory include some "telemetry" in their software
that reports back to Mozilla, Google, Microsoft, Apple, etc. when a user has
toggled Javascript on or off. Maybe it could be voluntarily reported by the
user in the form of opt-in "diagnostics".

OTOH, what can people making sites do to distinguish whether a GET or POST
accompanied by all the correct headers, sent to a content server, came from a
browser with Javascript enabled, or whether it was sent with Javascript off or
by some software that does not interpret Javascript?

The content server just returns content, e.g., JSON. It may distinguish a
valid request from an invalid one, but how does it accurately determine
whether the http client is interpreting Javascript? If a user were to use
Developer Tools and make the request from a custom http client that has no JS
engine, can/do they measure that?

Regardless of how easy or difficult it would be to reliably determine whether
a client making a request is interpreting Javascript (i.e. more than simply
looking at headers or network behaviour), the question is how many people
making sites are doing that?

They can more easily just assume (correctly, no doubt) that few users are
emulating favoured browsers rather than actually using them. One might imagine
they could have a bias toward assuming that the number of such users is small,
even if it wasn't. :)

------
exabrial
TLS 1.3 _being required_ makes me sigh loudly. What about local development,
where tools like tcpdump and wireshark are really handy? What about air gapped
systems? What about devices that are power constrained?

It's not that I think an encrypted web is bad, it's a very good thing. I am
just spooked by tying a text transfer protocol to a TCP system.

~~~
est31
> What about local development, where tools like tcpdump and wireshark are
> really handy?

You can tell browsers to dump the session keys, which then can be read by
wireshark [1].
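
The same trick works outside the browser too; a minimal sketch (Python 3.8+)
that writes the NSS key-log format Wireshark's TLS dissector reads, just like
a browser does when the SSLKEYLOGFILE environment variable is set:

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls-keys.log"  # point Wireshark at this file

with socket.create_connection(("example.org", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls.recv(100))  # this traffic is now decryptable via the key log
```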

> What about devices that are power constrained?

That's thinking from 10 years ago. 10 years ago, there were no native AES
extensions in power constrained devices. But now there are, so encryption is
really power efficient.

> I am just spooked by tying a a text transfer protocol to a TCP system.

I guess instead of "TCP system" you meant transport layer protocol. I can
actually understand your view: stuff is getting more complicated. I can fire
up netcat, connect to wikipedia, and type out an HTTP/1.0 request manually.
With 1.1 this is hard and with 2.0 it's impossible due to TLS requirements.
But there are reasons for this added complexity: you want to be able to re-use
connections, or use something better than TCP. As long as there is a spec, and
there are several implementations lying around, I think it's okay to add
complexity if there is a performance reward for it. Most people care about
performance; who wants to fire up netcat to do an HTTP request?
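
For contrast, the whole HTTP/1.0 exchange really is just a couple of typed
lines; the same thing as a small Python sketch instead of netcat (assuming
Wikipedia still answers plain HTTP on port 80, where it will just redirect you
to HTTPS):

```python
import socket

# Exactly what you would type into `nc en.wikipedia.org 80` by hand.
request = b"GET /wiki/HTTP HTTP/1.0\r\nHost: en.wikipedia.org\r\n\r\n"

with socket.create_connection(("en.wikipedia.org", 80)) as s:
    s.sendall(request)
    response = b""
    while chunk := s.recv(4096):  # HTTP/1.0: the server closes when done
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, likely a redirect
```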

[1]: [https://jimshaver.net/2015/02/11/decrypting-tls-browser-
traf...](https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-
wireshark-the-easy-way/)

~~~
exabrial
To clarify, HTTP 1.0/1.1 were successfully transmitted over TCP, multiple
versions of SSL, then several versions of TLS. It just seems a bit pretentious
to tie it to TLS 1.3.

~~~
est31
Those older SSL and TLS versions are insecure now, or at least deemed a bad
idea by today's security standards. TLS 1.3 was partly about removing insecure
modes from TLS 1.2. If HTTP/3.0 supported anything other than TLS 1.3, then
those insecure setups would persist.

Of course there are disadvantages, like when you are on a LAN or such. But I
think those cases are covered well by the HTTP/1.x family already, and if not
you can always add root certificates yourself or make public DNS names you
control point to your 192.168.... address.

------
Solar19
Any data on HTTP/3 performance? I don't see it in the book. There's the
general claim that it's faster/lower latency, but there are no numbers behind
that claim -- last time I checked QUIC's performance benefits were incredibly
slight.

~~~
shereadsthenews
It is really easy to observe the performance benefit of QUIC in a congested
datacenter network. In the face of loss, QUIC tail latency is dramatically
better than TCP tail latency. This is mainly due to TCP's 200ms minimum
retransmit time; a single dropped packet will add at least 200ms to the
request time (modulo tail loss probing which can lower this to ~20ms in many
cases). When your request service time is 10µs this makes a huge difference.
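
Back-of-the-envelope numbers for that claim (assuming, crudely, that any drop
stalls the whole request for one minimum RTO):

```python
service = 10e-6  # 10 µs request service time
rto = 200e-3     # ~200 ms TCP minimum retransmission timeout

for loss in (0.001, 0.01, 0.02):
    mean = (1 - loss) * service + loss * rto
    # Once 1% or more of requests hit an RTO, it owns the 99th percentile.
    p99 = rto if loss >= 0.01 else service
    print(f"loss {loss:.1%}: mean {mean * 1e3:.2f} ms, p99 ~{p99 * 1e3:.3f} ms")
```

Even 0.1% loss drags the mean to roughly 20x the service time, and at 1-2%
loss the tail is entirely the retransmission timer.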

------
mscasts
Many, many sites haven't even started migrating to HTTP/2 yet. Why should I
care about HTTP/3 now? It seems kind of far in the future.

~~~
dmix
The book says 7% of all internet traffic already uses QUIC (HTTP/3) and Chrome
has long implemented Google’s version of it.

But this book isn’t about concerning yourself with using it or implementing
it, it’s about understanding what the future holds, how it works, and what
roadblocks lie ahead.

The lack of API support in OpenSSL for its TLS requirements, and poor
optimization for heavy UDP traffic loads on Linux et al (they say it doubles
CPU vs HTTP/2 for the same traffic), sound like they're going to be a major
hurdle for widespread adoption any time soon.

~~~
acqq
> The book says 7% of all internet traffic already uses QUIC (HTTP/3)

The way I have understood it, the book says that what is now in use (these
7%) is a "Google-only-QUIC", whereas the "standardized HTTP/3" is still
used... nowhere?

~~~
tialaramex
Google's QUIC (GQUIC) is used on this 7% of sites.

The IETF QUIC remains a work in progress, perhaps to be published in 2019.
HTTP/3 is an application layer on top of (IETF) QUIC, it might also be
published in 2019 or later. There are implementations of current drafts, and
the rough shape is settled but they're a long way from being truly set in
stone and aren't in anything ordinary people use.

So unsurprisingly nobody is already doing a thing that isn't even standardised
yet, but people are, as you see, writing about it.

~~~
acqq
Therefore I believe I'm right that claiming HTTP/3 is used at all is wrong,
and that the 7% is not even the same QUIC that will be used with HTTP/3. So
"already uses QUIC (HTTP/3)" is a wrong statement; the correct one can only be
"GQUIC is used at the moment", making up, according to Google, _7% of the
traffic_ (and not on 7% of the sites as claimed). And HTTP/3 and the matching
IETF QUIC are used nowhere. So, again,

"> The book says 7% of all internet traffic already uses QUIC (HTTP/3)"

was wrong: the book doesn't say that, and what the book is claimed to say
(even though it doesn't) is false in several more respects.

------
all_blue_chucks
I've got to say, the phrase "physical TCP connection" made me chuckle.

------
BorRagnarok
Happy for everybody, but since it only really delivers benefits in less than
2% of use cases (those with crappy connections) I personally can't wait to
have it be as quickly implemented and supported as ipv6 was.

------
Asooka
It's sad that the site doesn't work without javascript. We had this exact
navigation working with iframes 20 years ago. And I could resize the TOC on
the left back then.

~~~
cupofjoakim
Hey, javascript is fundamental to the web today, unlike 20 years ago. Even if
a site like this definitely wouldn't need javascript since it's so simple,
there really isn't much of a trade-off, since less than one percent of all
visitors are likely to have javascript disabled.

~~~
mrspeaker
I disagree that JavaScript is fundamental to the web - I'm a huge JavaScript
fan (top 1% I'd say), and write it for my job... I've written a few books on
it even... but I always use NoScript when browsing the web: hackernews,
reddit, twitter - they can all operate fine without JavaScript. Dodgy
third-party ad scripts/malware do need it, but I don't really want them
running anyway.

Yes, 20% of sites I load are either a blank page or "You need to enable
JavaScript to run this app" (it's the new "An error has occurred"). If it's a
friend's site, or something that obviously needs it - like a game, or art
project - then I'll temporarily whitelist it. But if not, then hey, I just got
a 20% productivity boost by saving some time on whatever it is that thinks it
needs JavaScript!

~~~
x15
Most if not all ads can be blocked with an ad blocker like uBlock Origin.

I used uMatrix myself in the past (I also used NoScript a much longer while
ago), but it requires too much time to cherry-pick the remote hosts (usually
CDNs) and files to allow.

------
zackmorris
I'm still reading through the article, but I have to say that I'm pleasantly
surprised by QUIC and HTTP/3. I first learned socket programming around the
fall of 1998 (give or take a year) in order to write a game networking layer:

[https://beej.us/guide/bgnet/](https://beej.us/guide/bgnet/)

Here are just a few of the immediately obvious flaws I found:

* The UDP checksum is only 16 bits, when it should have been 32 or arbitrary

* The UDP header is far too large, using/wasting something like 28 bytes (I'm drawing that from memory) when it only needed about 12 bytes to represent source ip, source port, destination ip, destination port

* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)

* Secure protocols like TLS and SSL needed several handshakes to begin sending data, when they should have started sending encrypted data immediately while working on keys

* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)

* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)

* TCP is connection oriented so your stream drops if you do something as simple as changing networks (WiFi broke a lot of things by the early 2000s)

There's probably more I'm forgetting. But I want to stress that these were
immediately obvious to me, even then. What I really needed was something
like:

* State transfer (TCP would probably have been more useful as a message-oriented stream; this is also an issue with UNIX sockets. It could, for example, be used to implement a software transactional memory, or STM)

* One-shot delivery (UDP is a stand-in for this, I can't remember the name of it, but basically unreliable packets have a wrapping sequence number so newer packets flush older packets in the queue, so that latency-sensitive things like shooting in games can be implemented; see the sketch after this list)

* Token address (the peers should have their own UUID or similar that remain "connected" even after network changes)

* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
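
On the one-shot delivery point, here's a minimal sketch of the
wrapping-sequence-number trick (16-bit here, names made up; it's the same
newest-wins comparison game netcode and RTP-style protocols use):

```python
SEQ_MASK = 0xFFFF  # 16-bit sequence numbers that wrap around
HALF = 0x8000

def is_newer(a: int, b: int) -> bool:
    """True if sequence number a was sent after b, allowing for wraparound."""
    return 0 < ((a - b) & SEQ_MASK) < HALF

latest_seq, latest_state = None, None
for seq, state in [(65534, "pos1"), (65535, "pos2"), (1, "pos3"), (0, "late")]:
    if latest_seq is None or is_newer(seq, latest_seq):
        latest_seq, latest_state = seq, state  # newer update wins
    # else: a stale packet (e.g. seq 0 arriving after 1) is simply dropped

print(latest_seq, latest_state)  # -> 1 pos3: the wrapped packet won
```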

Right now the only protocol I'm aware of that comes close to fixing even a
handful of these is WebRTC. I find it really sad that more of an effort wasn't
made in the beginning to do the above bullet points properly. But in fairness,
TCP/IP was mostly used for business, which had different requirements like
firewalls. I also find it sad that insecurities in Microsoft's (and early
Linux) network stacks led to the "deny all by default" firewalling which led
to NAT, relegating all of us to second-class netizens. So I applaud Google's
(and others') efforts here, but it demonstrates how deeply rooted some of
these flaws were that only billion-dollar corporations have the R&D budgets to
repair such damage.

~~~
spc476
Yeah, it really sucks that the developers of TCP didn't foresee these issues
in 1981 when they first designed it. I can't believe they were so short-
sighted.

Okay, enough with the sarcasm. Is it too much to ask for historical
perspective in protocol design?

~~~
rswail
Agreed, as well as the "Oh why didn't the Berkeley people implement the OSI
7-layer model, then TCP would have been layered over UDP".

The _reason_ that TCP beat out all the other protocols is _because_ it didn't
"layer" everything. OSI was beautiful in the abstract, but a complete
cluster-fuck in the implementation.

Now we have enough processing power that the abstract layering makes more
sense. But how the layers interact with cross-layer requirements like security
was never actually dealt with in the OSI days.

~~~
anticensor
OSI was a compromise between SNA and ARPA protocol stacks.

------
the_other_guy
HTTP/3 is exciting. I've got a couple of questions:

1. Is QUIC only for HTTP/3, or can it be generalized to any TCP-based L7
protocol, but over TLS/UDP?

2. How are WebSockets dealt with in HTTP/3?

~~~
pacificmint
From the link:

> The QUIC working group that was established to standardize the protocol
> within the IETF quickly decided that the QUIC protocol should be able to
> transfer other protocols than "just" HTTP.

> ...

> The working group did however soon decide that in order to get the proper
> focus and ability to deliver QUIC version 1 on time, it would focus on
> delivering HTTP, leaving non-HTTP transports to later work.

------
aruggirello
Obligatory xkcd:

[https://xkcd.com/927/](https://xkcd.com/927/)

------
CloudNetworking
If QUIC is a name and not an acronym, as the book says, why is it written all
in CAPITALS?

That's the real question nobody is asking.

------
fiatjaf
So HTTP/2 was crap all along but was still forced upon everybody with the
repeated discourse: "it's much much better"?

And HTTPS, which is much slower than HTTP, was said to be much much faster
BECAUSE with HTTPS you could use HTTP/2, which you couldn't with plain HTTP.

