HTTP/2 Push is dead (evertpot.com)
354 points by lostmsu on Dec 3, 2020 | 168 comments



"link" headers giving asset references with `rel="preload"` work pretty fine for this purpose, although it can kick start only after the first response comes in.

Browsers seem to do the right thing on encountering this: all the "preload" resources get fetched in parallel, meaning the server doesn't wait for resource1 to finish before starting to stream resource2. On the network panel they all appear to come in simultaneously, each taking roughly the same time as when fetched from the server sequentially.

Header pattern:

   link: </somewhere/something.png>; rel="preload"; as="image", </somewhere/something.js>; rel="preload"; as="script", ...
This saves considerable page-load time if you maintain granular resources, and simplifies build and deploy by permitting that granularity in the first place. The resources can also be parsed and compiled on multiple cores (not sure which browser engines do that, but it would be possible).
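For illustration, a tiny helper (hypothetical, not from the comment itself) that assembles such a header value:

```python
def build_link_header(resources):
    """Assemble a Link header value carrying rel="preload" hints.

    resources: iterable of (url, as_type) pairs, e.g. ("/app.js", "script").
    """
    parts = [f'<{url}>; rel="preload"; as="{as_type}"'
             for url, as_type in resources]
    return ", ".join(parts)

header = build_link_header([
    ("/somewhere/something.png", "image"),
    ("/somewhere/something.js", "script"),
])
print(header)
```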

This mechanism seems effective enough in cases where HTTP/2 push could be beneficial. Thoughts by push advocates/critics?

edit: .. and this works well with the cache.


Yeah this works well for specifically this case.

There's other cases that I feel we haven't had a good chance to explore, specifically as it pertains to APIs.

I wanted to collapse many HTTP requests into a single one. Many API formats pack many logical entities into a single response, and that doesn't work well with caches.

Wouldn't it be nice if the next paradigm shift in API development went towards every entity being addressable and cacheable by a browser, without suffering the penalty of having to do many requests?

That's mostly where my work on Prefer-Push came from. I wrote a much longer article about this use-case earlier this year: https://evertpot.com/h2-parallelism/


> Yeah this works well for specifically this case.

What do you think of the opinion of Kévin Dunglas (Vulcain) in this case? Too optimistic?

See:

https://github.com/dunglas/vulcain/issues/70#issuecomment-72...


I'm aware of the project but don't know enough details of this feature to really comment on it! I imagine that it addresses some, but not all of the potential benefits H/2 push could have.


Thanks.

Do you still see ways/options leading to that 'perfect world' scenario? Or does that seem unrealistic for years to come given the current situation?


I may not have a full understanding of the project, but I have doubts.

One benefit of Push is that the server can potentially generate a group of responses together, which can be much cheaper than generating them individually. When prefetch is used instead of push, the decision on when and what to fetch is on the client, so if the server could generate them as a group, a proxy like Vulcain would have to buffer the results somehow until the browser asks for them, which is not ideal. I might be wrong, but I don't think their server ever intended to fix the n+1 query issue.

If Vulcain and Push already worked well for you in the past, maybe it's still as effective without Push though!


Maybe you're familiar with it already, but Joe Armstrong had some similar ideas about addressable entities: https://joearms.github.io/published/2015-03-12-The_web_of_na...


Joe was such a prolific thinker. I could listen to him talk about anything.


Isn’t a URL a “uniform resource locator”, which effectively addresses an “entity” conceptually? It would seem then that your gripe is with the design of particular APIs?


Yes, nearly all of them. For example, consider an operation like 'give me a list of articles': most APIs will combine the result into a single request/response.

But even if each article has a URI of its own, they won't end up in the browser cache under their own addresses.
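To make the idea concrete, here's an illustrative sketch (the field names follow HAL-style conventions and are my assumption, not any particular API) of a collection response where each member advertises its own URI — the shape that would let each entity be cached under its own address:

```python
import json

# Illustrative collection response: the list itself is one resource,
# but every member carries its own canonical URI, so a client (or an
# intermediary doing push/prefetch) could cache each article
# individually under its own address.
collection = {
    "_links": {"self": {"href": "/articles"}},
    "_embedded": {
        "articles": [
            {"_links": {"self": {"href": "/articles/1"}}, "title": "First"},
            {"_links": {"self": {"href": "/articles/2"}}, "title": "Second"},
        ]
    },
}

body = json.dumps(collection)
uris = [a["_links"]["self"]["href"]
        for a in collection["_embedded"]["articles"]]
print(uris)
```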


Your article mirrors very well the use case I wanted to solve in one API I'm working on. Thank you for detailing the matter so thoroughly.


> although it can kick start only after the first response comes in

There is an RFC for a new 103 Early Hints response: https://tools.ietf.org/html/rfc8297

This would allow the server to send `Link` headers before the full response has been generated.
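As a sketch of what RFC 8297 describes, the interim 103 response precedes the final response on the same exchange; in HTTP/1.1 framing (chosen here for readability) it would look roughly like this:

```python
# Interim 103 response followed by the final 200, as raw HTTP/1.1 text.
# The Link headers in the 103 can be acted on by the client while the
# server is still generating the real response.
early_hints = (
    "HTTP/1.1 103 Early Hints\r\n"
    'Link: </style.css>; rel="preload"; as="style"\r\n'
    'Link: </app.js>; rel="preload"; as="script"\r\n'
    "\r\n"
)
final = (
    "HTTP/1.1 200 OK\r\n"
    'Link: </style.css>; rel="preload"; as="style"\r\n'
    "Content-Type: text/html\r\n"
    "\r\n"
)
wire = early_hints + final
print(wire.count("HTTP/1.1"))  # two status lines within one exchange
```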

However, no browser supports it right now, and Chrome has simply added instrumentation to decide later whether they'll implement it.

The problem is nobody wants to bother implementing it if it doesn't bring anything yet, so we're kinda stuck in a chicken-and-egg situation.

If some Chrome or other browser developers are around: please implement 103 Early Hints support.


Chrome sets the exclusive flag on its H2 requests, so although the requests might be dispatched in parallel, servers may still queue the responses.

Preload is as much of a footgun as it is a useful tool - it's way too easy to preload content that then delays other, more critical content, and it takes time to find the balance.


It’s a little bit different in that with preload the browser needs to parse the content first, but with push it was coming as a header and could be acted on earlier.

I think.


Weird that all these man hours were spent revising the entire web stack for the sake of squeezing out the last bit of unnecessary latency via quic / http3 just for them to leave out server push. It’s like going from 80% to 99% just to leave off the last 1%.

Without going all the way with server push, the logic behind the transition from http1.1/tls/tcp to http3 seems a bit dubious to me. If they aren’t going to go all the way, things already worked well enough IMO. Seems like google is just arbitrarily deciding what they want the web stack to look like.


Indeed, and I won't be surprised to hear that the other advertised benefit of http/2 and beyond (avoid head of line blocking) turns out to be marginal at best, and just as difficult to take advantage of. I've got to ask: what was all the buzz about, then, and why was it necessary to "improve" a simple text protocol into a giant ball of binary protocols replacing and circumventing regular TCP/IP? In combination with subverting web standardization, I can only assume it was a deliberate effort to deprive the web of its open nature, by making it so fscking complicated that only Google is capable of implementing it. Same with Kubernetes, which turns out to be the one ring to bind them all and enslave humanity into "the cloud". I have a suspicion a coming generation will not look favorably on Google tech in a post-SV era, which managed to subvert the RFC process and just about every other org we had to keep the net sane.


> the other advertised benefit of http/2 and beyond (avoid head of line blocking) turns out to be marginal at best

When I'm on a slow connection (EDGE, for example) I can clearly notice a difference with HTTP/2. The extra round trips and potential connection resets of multiple connections do have an impact that can sometimes be measured in whole seconds of loading. The same might also be true for those forced to use satellite connections for internet access, where latencies of half a second are common even with good reception and clear weather. Burst speeds are fine, as long as the data comes in over existing connections, for which HTTP/2 provides a real benefit.
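A back-of-the-envelope model of that setup cost (the RTT counts assume a TCP handshake plus a TLS 1.2-style handshake; real stacks vary with TLS 1.3, TCP Fast Open, and session resumption):

```python
def connection_setup_time(rtt_s, tls_rtts=2):
    """Rough setup cost of a fresh HTTPS connection:
    1 RTT for the TCP handshake plus (by default) 2 RTTs for a
    TLS 1.2-style handshake. Illustrative only."""
    return rtt_s * (1 + tls_rtts)

sat_rtt = 0.6  # satellite-ish round trip, in seconds
one_conn = connection_setup_time(sat_rtt)
six_conns_serial_worst = 6 * one_conn  # worst case if opened back-to-back
print(one_conn, six_conns_serial_worst)
```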

Most websites still push half a megabyte of JS down your throat on slow connections, so there's not as much of an improvement as advertised, but that's not the protocol's problem. Of course there are no improvements on cable internet or faster, but there are plenty of people for whom protocols like HTTP/2 make a difference.

As for Kubernetes, it's a decent tool that solves a real problem. Automated replication and load balancing isn't trivial and for companies that need it, k8s, even on bare metal, can save huge amounts of development effort. It wasn't the first product in its class, it just became the most popular. There are plenty of alternatives available online if you just look for them. That said, most "cool" companies that use kubernetes because it's the "standard" or whatever should really pick something else because you waste loads of resources for nothing if your problem doesn't need a solution as complex as K8s.


EDGE won't cut it for any modern bloated site, http/2+ or not. Will timeout long before the page is loaded.


EDGE can get you up to 1 Mbit/s. Even if we just take 400 kbit/s as a sustained estimate, that's about 3 MB/min. It's not a fun experience, but most of the time, it's enough to eventually load the site you want.
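The arithmetic checks out (using decimal units for simplicity):

```python
# 400 kbit/s sustained on EDGE:
kbit_per_s = 400
bytes_per_s = kbit_per_s * 1000 // 8      # bits -> bytes per second
bytes_per_min = bytes_per_s * 60          # bytes per minute
mb_per_min = bytes_per_min / 1_000_000    # decimal megabytes per minute
print(mb_per_min)
```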

(Source: Where my parents lived, they only had EDGE for a really long time, until LTE was deployed.)


Not sure I agree here.

The binary protocol aspect of HTTP/2 is already a massive benefit, especially when combined with the ability to omit redundant headers. Consider e.g. just a basic API with Bearer tokens - those tokens are sent once instead of on each request. Cloudflare saw a 53% reduction in ingress volume size (https://blog.cloudflare.com/hpack-the-silent-killer-feature-...). These changes restore something that had been lost - efficient use of the available bandwidth - GET requests fit in one packet.
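A toy estimate of the redundant-header effect (this is not the real HPACK encoding, just an optimistic model where a repeated header costs one dynamic-table reference byte after its first transmission):

```python
def naive_header_bytes(headers, n_requests):
    """Bytes spent on headers if every request repeats them verbatim
    (the HTTP/1.1 situation)."""
    one = sum(len(k) + len(": ") + len(v) + len("\r\n") for k, v in headers)
    return one * n_requests

def indexed_header_bytes(headers, n_requests, index_ref_bytes=1):
    """Optimistic HPACK-style model: full headers once, then a one-byte
    table reference per header on later requests. Toy model only,
    not the actual HPACK wire encoding."""
    first = naive_header_bytes(headers, 1)
    later = (n_requests - 1) * len(headers) * index_ref_bytes
    return first + later

headers = [
    ("authorization", "Bearer " + "x" * 300),
    ("user-agent", "ExampleClient/1.0"),
]
n = 50
print(naive_header_bytes(headers, n), indexed_header_bytes(headers, n))
```

Even this crude model shows why ingress volume drops so sharply for token-heavy API traffic.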

W.r.t. head of line blocking, all you have to do is find a website that shows a hundred pictures or loads a hundred scripts and you can see the benefit. It's true that the benefit is minor if you're already webpacking and spriting everything, but restoring performance to naive implementations that just use the img tags is not a minor benefit. See this website for example: https://imagekit.io/demo/http2-vs-http1.

HTTP/3 adds a lot more complexity than HTTP/2 added, for a smaller benefit, but.. that's how these things go. Our processors reorder instructions too. Even if it's only single-digit percent, optimizations are worth it. A few percent in the processor, a few percent in the OS, a few percent from the network, a few percent from the JIT - these things add up.


> W.r.t. head of line blocking, all you have to do is find a website that [...] loads a hundred scripts and you can see the benefit

Ok then, I guess this answers the question regarding the other purported benefit of http/2+.


Have you ever used Facebook and clicked on "my friends"? Have you ever browsed a page of forum posts where each user has their own picture? Have you ever looked at a slideshow on the internet? Have you ever gone on reddit or another aggregator site that shows thumbnails? These are hardly obscure use cases.


These things exist since way before 2000, and are working just fine without http/2+. I can think of another use case, though: sending lots of ads and videos from a single domain of a well-known ad company.


You’re getting the narrative quite wrong here and assuming a great deal of bad faith that simply doesn’t exist.

People often seem to think Google presented SPDY and QUIC to IETF as faits accomplis, and they were just adopted as HTTP/2 and HTTP/3 because Google said so.

This is not how the standardisation processes work. Rather: (1) many involved parties recognised that something like this would be of value; (2) Google developed something, because they happened to be one of the parties that cared the most about it; (3) Google gave it to IETF; (4) all the relevant stakeholders joined in on improving it until there was consensus and practical experience that it really was worth it; and (5) it was finalised and published.

What ends up being standardised normally has some quite major differences from what was initially proposed. IETF QUIC is definitely quite different from gQUIC, as HTTP/2 is from SPDY. Standardisation within IETF brings diverse parties together to improve things based upon their experience and expertise. Certainly there will be dissenters, because there are trade-offs everywhere (e.g. the Varnish author was lukewarm about HTTP/2, reckoning it wasn’t worth it and that more radical changes should be made to HTTP semantics), but the end result will be better than what was initially presented, and there must be broad consensus that what is to be published is better than what preceded it (in this case HTTP/1.1). At Fastmail I observed a fair bit of the process of the standardisation of JMAP at IETF, and it benefited enormously from the process, changing shape quite significantly in some areas from what Fastmail initially presented.

The end result of HTTP/2 is definitely harder to implement than HTTP/1 (though it’s still not too bad—I implemented it in the draft days and didn’t have any real trouble, there was mostly just more to implement than with HTTP/1), but of its operational parameters, it’s better in every way than HTTP/1. Turns out that a protocol being plain-text really just isn’t useful, so long as the semantics are conveyed—literally the only people that need to care about the wire protocol are the people making tools that speak it (that is, HTTP libraries).

There’s really just one issue with HTTP/2: it makes it possible to use a single TCP connection where HTTP/1 probably used up to six, but this leads to TCP head-of-line blocking issues becoming more serious.

And make no mistake, TCP head-of-line blocking is a real issue on high-loss networks like the outskirts of wi-fi and cellular networks.

HTTP/2’s multiplexing solved real problems, and HTTP/3’s HOLB-fixing solves the last real problem with HTTP/2.

Google didn’t browbeat people into doing their will; rather, they presented a draft, and then everyone worked together to improve upon that draft, and they all (or at the least, almost all) agreed that the end result was a good improvement.

> a deliberate effort to deprive the web of its own nature by making it so complicated that only Google is capable to implement it.

… are you aware of how many HTTP/3 implementations there are already? In Rust alone, there are at least three fairly mature implementations: Quinn by various people, Quiche by Cloudflare, Neqo by Mozilla; as well as a handful more not-so-mature implementations.

Look, I’m not fond of Google, and I do think they abuse their position in many and various places, but this is not one of them.


The question is whether, on balance, http/2 and up really are worth the complexity when http/1.1 has served us well enough, with peak web traffic already behind us - considering that we were somehow able to run the web (and messaging and mail and apps) 20-30 years ago with much, much less capable computers. It's almost as if computers have become way too powerful, so we had to invent atrocious web apps to offset any performance advances. Similarly, F/OSS was too ubiquitous so we had to invent "the cloud". I can't help but see a self-serving power end game in this, with a few monopolies grabbing everything left, and a legion of developers having a vested interest in keeping the hamster wheel spinning.

Regarding the implementations of http/2 and up you speak of, I know only a single F/OSS one (ng-http2) actually used in server-side production.


> Regarding the implementations of http/2 and up you speak of, I know only a single F/OSS one (ng-http2) actually used in server-side production.

I've been using NginX HTTP/2 for years, and that's F/OSS. It doesn't use ng-http2.

Pretty sure this is widely used by others, including by Cloudflare who sponsored it.


Point taken, you're right re nginx


> peak web traffic already behind us

I absolutely don't see peak traffic being behind us. The internet is moving more and more bits over HTTP; videos' size is always ever increasing and is always distributed on HTTP, VR is coming, the default protocol for any Service is _always_ HTTP... we definitely haven't reached the maximum yet.


> videos' size is always ever increasing and is always distributed on HTTP, VR is coming, the default protocol for any Service is _always_ HTTP...

Therein lies the problem. Do we really all have to shoulder youtube's (and porn networks) problems? Especially when yt always sends huge video streams even when you only want music?


> Therein lies the problem. Do we really all have to shoulder youtube's (and porn networks) problems?

No, you don't. Whether you're working on the client or server side, you can keep using HTTP/1.1, and the other side will downgrade to accommodate you. Meanwhile, those of us who want to optimally serve our users on both good and bad connections will just use the multiple, freely available implementations of HTTP/2 and eventually HTTP/3.


This is a half-truth at best. It's only a matter of time until HTTP/1.1 is forced out by the tech/browser oligopoly: just look where we are with plaintext HTTP now.

The reason for HTTP3 is marginal gains that only make sense at enormous scales for large operators. The rest of us pay for this with increased complexity.


> just look where we are with plaintext HTTP now.

Yes, and the upgrade to HTTPS has been an improvement for end-users.


And upgrade to HTTP3 is going to be even bigger improvement!

Except when you don't need it, but are stuck with obsolete ecosystem otherwise.


Fortunately, there is Cloudflare (incidentally, also behind the http/3 push) who'll be happy to proxy your old site using http/3.


3 has real value; 2 is a waste of time. I will state with great confidence that you won't see 2 in the wild in the next 5 years.


I dunno, the topic may be a bit more nuanced than that because the HTTP/2 upgrade is free (via ALPN, part of the TLS handshake) while there’s not yet any good way of starting on HTTP/3. See https://news.ycombinator.com/item?id=24855848 that I wrote 42 days ago on this very topic, with arguments in both directions.


> http/2 and up you speak of, I know only a single F/OSS one (ng-http2) actually used in server-side production.

I can think of, like, five off the top of my head (mod_http2, nginx, h2o/quicly, about a million go apps that use http2, Rust has a production HTTP/2 implementation or three, and Microsoft/Apple's implementations if you want ones on the client side.) It's not anyone's fault but your own if you can't look it up. There are 23 implementations in the QUIC interop matrix which are cross tested against each other as of now, too, and it wasn't hard to find: https://docs.google.com/spreadsheets/d/1D0tW89vOoaScs3IY9RGC... and several of those stacks also implement HTTP/2 as well.

It's not like the internet was some rosy garden in the HTTP/1.1 era where everything was magical and democratic and perfect. HTTP/1.1 is easy to implement wrong, and most people just used stock HTTP servers to front their application anyway regardless of the actual protocol spoken to the end user, which is how it's always been.

Besides, you don't actually have to be a megacorp to see the benefit of HTTP/2 or QUIC; you can just... try using your imagination. I have an actual real workload where I want to fetch potentially hundreds of metadata files from an HTTP server. HTTP/2 is a dramatic performance boost for workloads like this. It's not rocket science to see why, despite people wringing their hands about opening multiple parallel connections, etc.

> Similarly, F/OSS was too ubiquitous so we had to invent "the cloud".

You've got a lot of things very confused in your head, it seems. FOSS was never "ubiquitous" until recently, and it was only allowed that status because corporations decided they could make more money with it. They can also make money with proprietary software, so they do that too when possible. You seem to be implying the rise of FOSS was some kind of "outsider threat" to the system which needed to be suppressed, lest it make things "too good for us", and so it was then tragically coopted by Google. No, it was not; FOSS as a movement was always a captive animal from the very beginning and its viability was always at the mercy of corporations with mass market penetration and reach, not the other way around. It's not surprising it took off; it turns out "Don't pay people for their work and keep all the profits for yourself" is a tried-and-true corporate tactic for making money since basically forever.

Not that it's relevant to this thread, but the sooner the free software movement realizes it's completely failed, that it's never even truly had a chance at success, the sooner it'll be able to actually succeed at something.


mod_h2 uses nghttp2, and so does h2o, I believe.


> And make no mistake, TCP head-of-line blocking is a real issue on high-loss networks like the outskirts of wi-fi and cellular networks.

On high loss, and high speed networks. Think of wifi, or lte, with a very fat pipe after it.

On really slow networks, there is not much difference.

Google's "real world" telemetry, and benchmarks shown HTTP2 as great, but it wasn't.

Opening multiple TCP connections may well be cheaper than dealing with all of that.


It saves google a very large amount of money if every chrome browser opens http3 connections to google.com on startup instead of the layered stack. You don’t see much of an effort to deprecate the layered stack and in my opinion that’s because standardizing http3 is enough to lower their costs, what other server hosts do is irrelevant to their bottom line.


HTTP/2 and HTTP/3 are both very useful even without server push. HTTP/2 fixed the issues of HTTP pipelining and made it so multiple requests could share one connection without issues like head-of-line blocking, and HTTP/3 improves things further so that a missed packet in one request doesn't slow down simultaneous requests. Server push was never a critical feature of them. The measured improvements from 2 and 3 were almost always without server push.


> so that a missed packet in one request doesn't slow down simultaneous requests

This is already true for HTTP/1.1. The browser uses 6 parallel TCP connections to fetch resources, and a packet loss in one of the TCP connections wouldn't stall the other ones. QUIC only fixes this for HTTP/2, which uses a single TCP connection with multiplexed HTTP streams; a single packet loss at the TCP level would stall all multiplexed streams at the OS level.


But HTTP/1.1 using multiple TCP streams means that, where there are a lot of small requests, there's a lot of bandwidth left on the table because the TCP window never grows.


Late with a reply, but want to note that this is not true. Most OSes scale tcp congestion control windows (snd_cwnd) based on the target IP, that is any increase of the window in any tcp connection to the same host increases the window for all open connections. Even if you close all connections, the kernel caches the last known window size for this host and re-uses this on subsequent connections for a given timeframe.


You can open multiple tcp connections in parallel to virtually eliminate the head of line blocking problem for the vast majority of experiences.

The problem has been exaggerated for the sake of justifying http2/3 but in practical terms it barely affects the average user experience. 80% optimal was good enough. On the other hand it reduces cost by a few percentage points for the vanishingly few web hosts large enough such that that would have a meaningful financial impact.


There is a cost to opening a new connection and having to re-incur multiple roundtrips in order to complete a new TCP and TLS handshake. This is a meaningful overhead savings for folks with a network setup that has high last-mile latency.


As has been demonstrated by my original post, clearly saving a few rtt is irrelevant here. Anyway your point is halfway-irrelevant since if you initiate all connections in parallel you only suffer the rtt latency of a single connection.

The only people saving here are the people running servers that serve world-scale numbers of requests. The average user notices nearly no difference.


Those connections will all go through the TCP startup phase, where throughput is limited by the congestion window size.

And because they're unaware of each other, it's likely they'll eventually start competing with each other and cause packet loss.


...but multiple TCP connections (within reason: too many causes a problem, but not just a handful) actually almost always out-perform a single one for numerous reasons (including the congestion controller having more leverage from looking like multiple users, being able to hit multiple backend machines from dns load balancing, and working around various per-connection bottlenecks); this is in fact the primary trick of "download helpers": they download multiple independent segments of the file on parallel connections.
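The segmenting step of that trick can be sketched like so (the fetching itself would then issue a `Range: bytes=start-end` request per segment on its own connection):

```python
def split_ranges(content_length, n_parts):
    """Split [0, content_length) into n_parts contiguous (start, end)
    byte ranges, end inclusive, as used in a "Range: bytes=start-end"
    request header. Earlier parts absorb any remainder."""
    base, extra = divmod(content_length, n_parts)
    ranges, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_ranges(10, 3))
```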


In the vast majority of cases, neither of those things practically matters. The multiple connections are intended for concurrency, not parallelism. Web assets are not large enough for slow start to impact overall download speed by any large margin, nor do they sustain downloads long enough that the connections start to compete. Requests are bursty.


It's quite a few years old – https://web.archive.org/web/20170724120441/https://insoucian... – but I still see this in some packet captures.


Yea technically there is a potential benefit but the point is that it’s so marginal that the majority of users won’t ever notice it. You’re saving 1-5ms off of a page load.


What if you're not running requests in parallel and instead want a pre-warmed tls connection?

Something like gRPC could be implemented as a websocket protocol but you lose all the http semantics for each internal request. You can't expect a load balancer to handle your custom socket protocol but with a standard like http/2 nginx is able to do it.


That sounds like a TLS problem and not an http problem. Fortunately TLS has already solved that problem: https://blog.cloudflare.com/introducing-0-rtt/

Though http keepalive seems to already solve your use-case, without TLS 1.3’s connection rtt optimizations, if you indeed aren’t running requests in parallel.


>Specifically, only GET requests with no query parameters are answered over 0-RTT.

Sorry but no. Not solved.

They also have a newer post about 0-RTT and how QUIC makes it even faster.

https://blog.cloudflare.com/even-faster-connection-establish...


>Specifically, only GET requests with no query parameters are answered over 0-RTT.

That only applies to cloudflare’s implementation of 0-rtt. That isn’t a general restriction.

Additionally 0-rtt isn’t the only rtt optimization in tls 1.3.

Either way your original point was moot since http keepalive is sufficient to avoid tls connection costs for serial requests.


The world-scale companies are also the companies writing the standards.


Funny that


I'm not sure I fully follow here, but I would like to be able to. Created a video response:

https://www.loom.com/share/4539869f41cf47c5a154a342308584de


Hehe I engage with people like that all the time and I think I’ve learned a lot. In my humble opinion the people who are easily offended aren’t really worth discussing difficult problems with. Actually I find that I learn a lot more from people who don’t get hung up when their ego happens to unintentionally get bruised.

I think you need to reread my initial post in this thread. The importance of minimizing rtt blocking is acknowledged in my questioning of chrome’s decision to not implement http push.


My ego is not bruised. As I said:

> I'm not sure I fully follow here, but I would like to be able to

What I'm calling out is that you may have fallen into the trap (as I did years back) that you need to be an asshole to drive towards solving hard problems. You don't, and it actually blocked me from creating the psychological safety necessary to get good ideas out of people who otherwise wouldn't feel comfortable speaking up. Genuinely good/supporting vibes from my direction to you - do with it what you will, but I do believe your ego is showing here more than mine. We can agree to disagree since I believe each of us will over-index on our past life experiences vs. those of some random person on Hacker News.

> I think you need to reread my initial post in this thread. The importance of minimizing rtt blocking is acknowledged in my questioning of chrome’s decision to not implement http push.

I think I get what you're saying, but honestly the way you're wording your points is a bit confusing. I am going to chalk this up to "I'm just too dumb" for sake of laziness at this point.


I don’t believe that you need to be an asshole to drive towards solving hard problems. Not in the least. That is your projection. Calling your point irrelevant is not a personal attack on you, it was intended to be a neutral objective statement. Smart people often make irrelevant points, in fact the smartest people often make the most irrelevant points when trying to drive at the truth. I don’t believe you are dumb but I do have reason to believe you can interpret a personal attack where there is none.


This is a fair assessment.


Isn't part of the idea with HTTP2 that the server can see everything you're asking for and more intelligently prioritize vs a bunch of separate independent connections?


I’m not aware of any open source servers that implement such an optimization and if proprietary software exists that does, it’s trivial to implement custom logic to do that across multiple connections as well.


It's actually not trivial to optimize dynamic prioritisation across multiple connections.

To keep a TCP efficient, the sending process needs to fill up the send buffer to a reasonable amount, generally by writing data to the kernel with write()/send(). If the sending process doesn't do this, the sending TCP will not send full frames as fast as the network or receiver allows, nor do so at a steady cadence.

But to prioritise the data flow of multiple responses across multiple connections, the sender needs to pause existing TCP flows in progress as soon as it has higher priority response data ready to send if the higher priority TCP will send at full rate. Yet if the higher priority TCP is sending slower than full rate temporarily due to slow start, congestion control or receive window full, etc., the lower priority TCP needs to be resumed just enough to fill the gaps.

It's not even possible to do this with the ordinary sockets API, so it's certainly not trivial.

With HTTP/2 and TCP in kernel, it's still not optimal because prioritised response data will be queued behind data already in the TCP send buffer, but at least there's only one send buffer's worth to compete with rather than lots.

With HTTP/3 and QUIC in userspace, adaptive prioritisation can be optimised further because the decision is made on every packet just before it leaves the machine.

Per-packet prioritisation would be possible in theory with HTTP/1 and HTTP/2 by implementing TCP in userspace, or by implementing HTTP in kernelspace, in either case tightly coupling TCP and HTTP. I don't know of anyone doing it, because it would be a lot of work for a marginal gain.
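The per-chunk prioritisation decision described above can be modelled as a toy scheduler (no sockets or send buffers, just the selection logic an HTTP/2 server — or a userspace QUIC stack, per packet — gets to make):

```python
import heapq

def schedule_chunks(streams):
    """streams: {stream_id: (priority, [chunks])}; lower number means
    higher priority. Emits chunks one at a time, always preferring the
    highest-priority stream that still has data ready."""
    heap = [(prio, sid, iter(chunks))
            for sid, (prio, chunks) in streams.items()]
    heapq.heapify(heap)
    out = []
    while heap:
        prio, sid, it = heapq.heappop(heap)
        chunk = next(it, None)
        if chunk is not None:
            out.append((sid, chunk))
            heapq.heappush(heap, (prio, sid, it))  # stream may have more data
    return out

order = schedule_chunks({"css": (0, ["c1", "c2"]), "img": (5, ["i1"])})
print(order)
```

In practice the server only controls ordering up to the point data enters the kernel send buffer, which is exactly the limitation described above.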


The optimization proposed was not about packet prioritization but prioritizing assets to send.


The optimization proposed was whatever HTTP/2 does:

> "part of the idea with HTTP2 that the server can see everything you're asking for and more intelligently prioritize"

HTTP/2 prioritisation does in fact optimize continuously by starting lower priority streams when higher priority data isn't yet available, then pausing streams in mid-send as other higher priority streams' data becomes available.

The size of the send buffer makes a difference to how fast this can react, as I described. Details at Cloudflare: https://blog.cloudflare.com/http-2-prioritization-with-nginx...

> "if the most important response takes longer to generate than lower priority responses, the server may end up starting to send data for a lower priority response and then interrupt its stream when the higher priority response becomes available"

> "the problem with large send buffers is that it limits the nimbleness of the server to adjust the data it is sending on a connection as high priority responses become available. Once the response data has been written into the TCP send buffer it is beyond the server’s control and has been committed to be delivered in the order it is written"

You proposed to replace this with HTTP/1.1 and multiple TCPs, which does not provide an equivalent optimization.

Browsers do in fact optimize HTTP/1.1 per asset by keeping a list of requests in priority order and running a limited number of TCP connections in parallel, but if they can use HTTP/2 that usually works out faster. On the internet for reasons described in the Cloudflare article, and also because giving as many requests to the server as possible up front allows the HTTP server to start fetching or generating lower priority assets sooner, hiding some backend latency - especially significant with load balancers, other reverse proxies, or HDD cold storage.
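To make the "nimbleness" point concrete, here's a toy Python model (not any real server's code; stream names, priorities and timings are made up): one chunk leaves per tick, always from the highest-priority stream that has data ready right now, so a slow high-priority response lets lower-priority data slip out first and then pre-empts it mid-send.

```python
def schedule(streams, ticks):
    """streams: name -> (priority, ready_at); ready_at[i] is the tick at
    which chunk i of that stream becomes available (lower priority
    number = more important). One chunk leaves per tick, always from the
    most important stream with a chunk ready right now."""
    sent = {name: 0 for name in streams}
    order = []
    for t in range(ticks):
        best = None
        for name, (prio, ready_at) in streams.items():
            i = sent[name]
            if i < len(ready_at) and ready_at[i] <= t:
                if best is None or prio < streams[best][0]:
                    best = name
        if best is not None:
            order.append(best)
            sent[best] += 1
    return order

# index.html is most important but slow to generate; style.css is ready
# immediately. The scheduler starts on css, then pauses it mid-send the
# moment html chunks become available.
streams = {
    "html": (0, [2, 2]),        # 2 chunks, neither ready until tick 2
    "css":  (1, [0, 0, 0, 0]),  # 4 chunks ready immediately
}
print(schedule(streams, 6))  # → ['css', 'css', 'html', 'html', 'css', 'css']
```

A big TCP send buffer corresponds to committing many ticks' worth of chunks at once, which is exactly what limits how quickly this re-ordering can react.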


> vanishingly few web hosts large enough such that that would have a meaningful financial impact

By that logic, electricity wasted on inefficient power adapters and devices in standby mode is irrelevant: The marginal savings per household are cents per year.

If it doesn't hurt me as an end user (and I don't see how HTTP/2 does), what's the objection here? Nor is any smaller service provider forced to adopt it server-side.


The premise of my argument is that the end user is barely affected either way. There is potential harm to the web ecosystem in making the stack more complex, and I am questioning what the rationale for tolerating that harm is, since these fundamental changes are mostly driven by the specific use cases of large corporations, not the needs of the greater community.

You’re right no one is forced to adopt this new web stack and for that I am grateful. Given recent events I will likely not adopt google’s web stack.


But you would have to implement it on a layer that maybe should be exempt from dealing with network loading.

I agree that the benefits only fit certain use cases, but it is still nice to have, and it is already widely adopted.


> google is just arbitrarily deciding what they want the web stack to look like

Well, yes, they do try, but not arbitrarily: with them running as much infra as they do, and margins being what they are, a bit here and there on the stack can dramatically influence their bottom line.


People often seem to think that HTTP/2 Push was a good thing that could let you improve performance.

It really wasn’t. As a performance optimisation tool, it absolutely depended upon cache digests to make any sense, but cache digests were never finished, and so push became a wild guess that would routinely be a pessimisation instead of an optimisation: “why did you clog my pipe up, sending me that megabyte of JavaScript? I already had it!”

“But this helped webpack and other such bundlers”—this was never actually true. Even if cache digests had materialised, almost every situation that PUSH frames could have helped with in JavaScript bundling would be better or at least similarly handled in a different way: if you need the resource immediately, you were probably splitting your bundle up too small and should merge the files (though this is certainly not always the case; there are definitely cases where cache digests + push could have done a better job); and if you don’t need the resource immediately, <link rel=preload> is much of a muchness with the push frames, but allows the client to retain control of what it fetches when, which is normally a mildly desirable property; or a service worker could also directly interact with the cache, too.

I think it was the other purposes of PUSH frames that were more interesting in their technical possibilities, things like letting APIs kinda return multiple responses to a request. But those possibilities never really eventuated, because once you’re doing things like that, streaming response bodies and WebSockets are mostly easier and definitely more reliably available—though you may end up reimplementing a few pieces of HTTP/2 Push along the way, depending on your use case.

So all up, I say of it all that yeah, it was a nice idea, but it ended up just not being useful in practice, because its use cases all either depend on something else that doesn’t exist, or are at least as well handled by something else.

I haven’t followed any discussions around this, but as a developer that’s not at all fond of Google my feeling and expectation is that it’s not really “Chrome dictates the direction of the web and they’ve unilaterally killed it” here so much as a general consensus of “we thought this thing would be great, but it didn’t really pan out, and it makes things quite a bit more complex and no one should be actually depending on it, so we’re removing it”—and Google is just the first one to actually do so. I think the average developer here is just thinking that HTTP/2 Push is better and more useful than it actually is.


This makes sense. What makes less sense is why cache digests were abandoned and not finished? Surely that would have made Push useful?


Great summary. We indeed switch to Websockets for a few things that we would like to have (eventually) used HTTP/2 Push for.

To give an example of an issue with this: we would like to receive updated resource states when they happen. Receiving these via WebSocket unfortunately means that they can't really interact with the browser cache, which led us to implementing our own JavaScript caching layer.

There's many small examples of this where the ideal approach doesn't exist yet, and we're left to imperfect workarounds.


There's still one area where it kind of mitigates another issue, which is making sure fonts are on the client by the time text is visible. I don't know what the status is now, but last time I checked, admittedly more than a year ago, there weren't many other options to at least reduce the flicker.


I'm quite sad about this, there was one thing that push was great for, and thats optimizing pageload speeds for first-time visitors.

We used it to push about ~140kb of static assets before our django views finished computing, and it noticeably sped up initial pageload speed for first-time visitors, which is the segment we cared most about.

It's a powerful footgun, obviously it will only make things worse if you're pushing multiple megabytes of JS to users who already have it cached, but when used sparingly in the right situations it was quite helpful.

I even built a django middleware that handled figuring out which assets to push automatically: https://github.com/pirate/django-http2-middleware


One place where I can still see this working well is landing pages for marketing campaigns; it’s a fair assumption that most visitors there will be first time visitors.

Other than that, I’d say that there should be a mechanism where the browser informs the server of what assets they have cached, although I can only assume that may open a privacy can of worms.


That was cache digests but it's dead unfortunately.


I hope browsers implement 103 early hints. I’d like to tell the browser to preload assets while the server is still working on the response, which may take a significant amount of time to determine if the response is 200, 401, or 404.

https://evertpot.com/http/103-early-hints
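For anyone who hasn't seen one, a 103 exchange on the wire is pretty simple. A sketch per RFC 8297 (the asset paths here are made up):

```python
# Sketch of a 103 Early Hints exchange (RFC 8297). The server sends an
# interim 103 response carrying Link headers as soon as it knows which
# assets the page will need, then the final response once the (slow)
# handler has finished.

early_hints = (
    b"HTTP/1.1 103 Early Hints\r\n"
    b"Link: </style.css>; rel=preload; as=style\r\n"
    b"Link: </app.js>; rel=preload; as=script\r\n"
    b"\r\n"
)
final = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: 13\r\n"
    b"\r\n"
    b"<html></html>"
)
# Both go down the same connection; a supporting browser starts
# preloading while the server is still computing `final`.
wire = early_hints + final
```

The nice part is that the 103 is purely advisory, so the server is free to end up answering 200, 401 or 404 either way.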


I was seriously thinking the domain name was a play on an HTTP status code: "Ever Tea Pot", but went to the root and it is just the guys name "Evert Pot".


Evert here =) first and last name are dutch. I moved to Canada and joked about naming my son Jack ;)

I didn't like my name growing up, but as an adult I'm grateful I have a kinda weird but more memorable name.


It's certainly a better name to have than John Smith, Joe Bloggs, or Karen.


It is definitely for discoverability!


You're confusing him with his wife, Ima T.


I was thinking the same thing


Yes, to me 103 is more exciting than even HTTP/3.

Do we know if Safari, Chrome and Firefox even plan to have it implemented?


None at this stage. Chrome did add some instrumentation a few versions back to collect data and decide whether it's worth implementing.

Problem is: who's going to bother sending early hints if it doesn't improve performance?


> Yesterday, the Chrome team announced to remove the feature from their HTTP/2 and HTTP/3 protocol implementations.

It's sad and scary that Chrome has such market dominance that they can basically decide how the standards should be.


I think in this case it's the opposite. At least back in the day when I worked on this, the Chrome team (or parts of the team) tried to introduce this feature, but it never caught on, so now they're killing it.

Overall I think this is good. Experiment and try new stuff, but if it doesn’t work out kill it off at some point.


The only silver lining in this is that at least most of what they are doing is reasonable, or some level of good, whereas Microsoft's version of this control back in the IE6 days was mostly horrific.


For a long time both Firefox and Chrome had a bug where, if you HTTP/2-pushed resources, they re-fetched them anyway. But since that got fixed in both browsers, I've used HTTP/2 push extensively when optimizing web applications. For example, instead of first requesting the main HTML file, parsing it, requesting the CSS file and parsing it, requesting a background image, etc., both the CSS and the background image are pushed when the index.htm is requested. This can shave off seconds and drastically improve user experience.

I've also implemented commonJS module system where all dependencies are HTTP2 pushed on the first module request.

The main advantage of HTTP/2 push is that it simplifies development, as you do not get penalized for not using resource bundling (which complicates caching).
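The module-system part could be sketched roughly like this (pure illustration with made-up file contents; a real implementation would handle relative paths, cycles, node_modules resolution, etc.). At startup you scan each module for its require() calls, and then a request for an entry point tells you the full transitive dependency set to push, or nowadays to emit as Link: rel=preload headers:

```python
import re

# Naive require() scanner; good enough to show the idea.
REQUIRE = re.compile(r"""require\(\s*['"]([^'"]+)['"]\s*\)""")

def build_dep_map(modules):
    """modules: dict path -> source text. Returns path -> direct deps."""
    return {path: REQUIRE.findall(src) for path, src in modules.items()}

def transitive_deps(dep_map, entry):
    """All modules reachable from `entry`, in discovery order."""
    seen, stack = [], [entry]
    while stack:
        mod = stack.pop()
        for dep in dep_map.get(mod, []):
            if dep not in seen:
                seen.append(dep)
                stack.append(dep)
    return seen

modules = {
    "/app.js": "var ui = require('/ui.js'); var db = require('/db.js');",
    "/ui.js":  "var dom = require('/dom.js');",
    "/db.js":  "",
    "/dom.js": "",
}
dep_map = build_dep_map(modules)
print(transitive_deps(dep_map, "/app.js"))
```

On a request for /app.js you'd push (or preload-hint) every path in that list in one go, rather than waiting for the browser to discover them one parse at a time.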


The part I don’t get is why http push wasn’t cache friendly.

The idea of fetching a graph of objects is so common.

All I want to do as a web dev is have the browser client request a tip REST object or an entrypoint JS script, and have the server send that object/script plus all the connected objects back together, to avoid multiple round trips.

It makes me sad that this still isn’t a thing (server can reply with more than one cacheable url)

To work around this we invent all sorts of tools like graphql, webpack with layers of complexity.

Sure there is the problem of what if the client already has the resource. Then the answer is the response can optionally send content or just the url. You then have a 2nd roundtrip to fetch the content the browser doesn’t have.

At the end of the day, I just want the browser to essentially rsync-ish a bunch of url/resources from the server.


Agreed. We're building HATEOAS-powered SPAs for customers, and we've ultimately built an architecture that reimplements features we wished browsers did better. Our end goal is ultimately exposing a graph, making everything addressable, and having great cache control and means for different parts of the stack to decide when caches should expire.

All of these are features of the 'web'. But clients and servers are a bit behind what we would like to see.


It's a shame, because it would do away with nasty packing strategies like webpack.

I always built my Go+React apps to avoid webpack: on service start it would scan all the includes from each entry point and build out a map for each entry point. On request I would push all the dependencies.

It was really nice. :(


While I think webpack is a beast in its complexity, it and other bundlers can/do output based on the strategy you described.

It emits entry point bundles, as well as smaller chunks, many of which remain unchanged between releases. All of that gets pushed out. I think it works pretty well.


How much have you benchmarked this? If you have a cache-friendly HTML page, any modern browser will load individual resources almost as quickly — and this way you avoid the concern about clogging the pipe with pushed content which is already cached.

I get the appeal, having spent plenty of years fighting old browsers back in the day but we're in a much better place now with cheap CDNs and HTTP/2. I've yet to see a project where this time wouldn't be better spent on, for example, using less JavaScript or tuning image/font usage.


Push doesn’t work with cache, so that strategy was less efficient than just concatenating everything.


Push can work with cache (the article mentions Cache-Digest proposal which directly does this) but it never got implemented for reasons unclear to me.


I guess I never really understood the problem that wasn't just solved by old-school tcp keepalive and pipelining HTTP requests for more assets/files from the CDN


HTTP/1.1 pipelining was never useful due to head-of-line blocking and no browsers enabled it by default.


It continues to be useful to me. I do not use a browser for pipelining; I wrote simple, custom utilities for generating HTTP.

The notion of "HOL blocking" as a "problem" relies on an assumption about the user. Namely, that (a) the user is trying to load resources from multiple sources to display a webpage and (b) the order of the delivery of those resources does not matter. I understand how this fits into Google's business however this is not something I am trying to do.

Rather, I routinely download multiple resources from the same source and the order of delivery is important. The simplest example is I am downloading 100 pages from a website; I want page one first and page 100 last, and I do not want page 2 before page 1 is finished. I use pipelining to make 100 HTTP requests over a single TCP connection. It is fast, efficient and has worked reliably for decades.^1

There was a time when server resources were important. For example, opening 100 TCP connections to make 100 HTTP requests might be unduly burdensome on the server. Not to mention each request might complete (or not complete) at a different time. I might receive page 42 before page 21. Then I have to check to make sure each page was received and order them after they all complete. Relative to what I can do, easily, using HTTP/1.1 pipelining, this is a PITA.

HTTP/1.1 pipelining has more uses than what the designers of HTTP/2 (Google) might envision. The way I use HTTP/1.1 pipelining, for information retrieval, which only requires a TCP client, e.g., netcat, and a text editor (cf. a third party HTTP/2 library), does not have a "HOL blocking problem".

HTTP/1.1 pipelining is not "useless" simply because it does not fit a particular use case involving browsers, internet advertising and commercialisation of the www by those organisations who control browsers and the internet advertising market (and pitch the things they develop in-house like HTTP/2 as "standards").

1. It is common when the topic of discussion occasionally turns to HTTP/1.1 pipelining to see the following factoid repeated: "No browsers have it enabled". There is another factoid we should repeat: "All web servers have it enabled". (This is why I have been using it for so long.) In both cases there may be exceptions. There could be a browser that has pipelining enabled, and there are some websites, a very small minority, that have pipelining disabled.
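For the curious: pipelining really is just writing the requests back to back on one connection and reading the responses in order. A minimal sketch of building the request bytes (host and paths are placeholders):

```python
def pipelined_requests(host, paths):
    """Build N back-to-back HTTP/1.1 requests for one TCP connection.
    The last request closes the connection; the rest keep it alive."""
    reqs = b""
    for i, path in enumerate(paths):
        last = i == len(paths) - 1
        reqs += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: {'close' if last else 'keep-alive'}\r\n"
            "\r\n"
        ).encode()
    return reqs

# 100 pages, one connection, responses guaranteed to arrive in order.
payload = pipelined_requests("example.com", [f"/page/{n}" for n in range(1, 101)])

# To actually run it (the moral equivalent of piping this into netcat):
#   import socket
#   with socket.create_connection(("example.com", 80)) as s:
#       s.sendall(payload)
#       ... read the 100 responses sequentially ...
```

Which is exactly the in-order, single-connection property the parent is describing: page 2 can never arrive before page 1 is finished.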


Opera had it enabled. Firefox and Chrome supported it but had it disabled because web servers were too buggy.


The idea was to eliminate the latency caused by round trips on initial payload. With quic+server push it was theoretically possible to get the server to send you everything necessary to display the requested page with a single request packet (sans subsequent application-transparent flow control packets).


Maybe when resources have dependencies it avoids having to wait for each resource to parse before knowing what else will need to be fetched.


You can achieve the same thing by concatenating those dependencies into your HTML. It's equally un-cacheable.


Google never liked HTTP/2 push. I've been hosting some pages on Google App Engine and push never got implemented for the mandatory frontend proxy.


Google also "invented" it as part of SPDY (it's surprisingly difficult to find a clear reference, but I'm reasonably sure it was in pre-standardization-process SPDY too). For this kind of thing, on one hand it doesn't make much sense to apply a global opinion about a relatively minor thing to a company of that size, where different teams and their goals aren't aligned or are even at odds. On the other, I wouldn't take GAE not having a feature as a big signal: its best days clearly are over and it's lacking/buggy/... in all kinds of places, sadly.


You are correct, I worked a bit on spdy back in the start.

Also correct in that google doesn’t have one opinion. Even inside the chrome team there were many different opinions.


I don’t understand. How can Google just arbitrarily decide to not support part of a standard because it’s easier for them?


It is optional for a client to support server push.

If I understand the RFC correctly, a client can announce server push support via the SETTINGS_ENABLE_PUSH setting. So if a client chooses not to announce push support, it is still standards compliant.


Two words: Market domination.


Please, they're the ones that added this in SPDY to begin with. They would have done this even as a minority player, because the reason they're actually removing it is that it provides no gain for the work required.


Well, either it's a standard or not.

And if the most powerful browser vendor decides to drop support for standardized features that is a strong "political" signal against the standard and it absolutely has a different meaning...


I don't understand. What makes you think Google or any other company out there HAS to support the entirety of a (non-certification-dependent) standard if they estimate that they could drop a part of it at no loss / at a gain?

Is your company properly supporting HEAD on its website? Probably not (treating it as GET), because it's easier and no real loss is incurred. Same here.


Almost as if there's no standard, and everything's really just running on a proprietary Google protocol...


I always thought that push was interesting, but frameworks/tooling integration never made it very approachable to use on smaller scale projects in small/medium businesses.

If your userbase is below tens of millions, there's usually other places to spend your development efforts for far bigger impact.


As someone who helped contribute to Google's pre-HTTP2 apache/mod_spdy push, I am sad :(


> The Chrome team has considered removing Push support since at least 2018. The main reason being that they hadn’t really seen the performance benefit for pushing static assets.

Because they probably benchmarked Google properties that are already using something akin to HTTP/2 Push, because they are inlining a lot of their assets and data.


Is there a mechanism for communicating to the server that the http2 implementation doesn't support push beyond parsing user agent?

Or are we just going to end up with servers pushing to chrome and it just ignoring it? That sounds even worse.


Yes there is. HTTP/2 has SETTINGS frame to communicate implementation capabilities, and SETTINGS_ENABLE_PUSH is standardized to communicate push support. Chrome will send SETTINGS_ENABLE_PUSH set to zero, so no user agent parsing is necessary and all compliant implementations should handle this transition transparently.
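Concretely, per RFC 7540 §6.5 that opt-out is a 15-byte SETTINGS frame on stream 0. A quick sketch of the encoding (the frame layout is from the RFC; the helper function is just for illustration):

```python
import struct

SETTINGS_FRAME_TYPE = 0x4
SETTINGS_ENABLE_PUSH = 0x2  # RFC 7540 §6.5.2

def settings_frame(settings):
    """Encode an HTTP/2 SETTINGS frame from {identifier: value}."""
    # Each setting is a 16-bit identifier plus a 32-bit value.
    payload = b"".join(struct.pack(">HI", k, v) for k, v in settings.items())
    # Frame header: 24-bit length, 8-bit type, 8-bit flags, 32-bit stream id
    # (SETTINGS always applies to the whole connection, i.e. stream 0).
    header = (struct.pack(">I", len(payload))[1:]
              + bytes([SETTINGS_FRAME_TYPE, 0])
              + struct.pack(">I", 0))
    return header + payload

# What a push-disabling client sends right after the connection preface:
frame = settings_frame({SETTINGS_ENABLE_PUSH: 0})
print(frame.hex())  # → 000006040000000000000200000000
```

Any compliant server that sees ENABLE_PUSH=0 simply must not send PUSH_PROMISE frames, so the removal really is transparent at the protocol level.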


What is dead may never die. Server Push never gained any traction sadly, despite having a great potential.



You can still do initial page speed optimization using dirty tricks. Some of these were possible even before HTTP/2 Push existed.

If you already have fancy middleware to predict the best assets to push for different users, perhaps you can extend the middleware to inject the dirty tricks as well.

- data: URLs inlined into the initial page HTML

- compression to remove the entropy of base64 encoding

- ordinary inlining, e.g. for JavaScript, CSS and (less likely) HTML fragments

- Cache and ServiceWorker to allow future pages to use the inlined assets without fetching them a second time

Combine the above with heuristics like you were using for HTTP/2 Push to decide what to inline for each user. So a user with an If-None-Match Etag or a cookie has probably seen the page before, and a user with neither probably has not.

As others have pointed out, <link> preloads are also available. Although those don't kick in until the first data from the initial page has been received, if there is enough page data, one extra RTT needn't matter for some resources, especially JavaScript that is only needed once the user starts interacting with the page, items far below the fold, and other page resources if it's a single-page application.
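A minimal sketch of the data:-URL item above (the asset bytes are placeholders; the real gain depends on wire compression clawing back most of the base64 overhead):

```python
import base64

def data_url(mime, raw):
    """Inline raw asset bytes as a data: URL usable in HTML/CSS."""
    return f"data:{mime};base64,{base64.b64encode(raw).decode()}"

# Placeholder bytes, not a real image; in practice you'd read the file
# and decide per-user (cookie / If-None-Match heuristics) whether to
# inline it or just reference it normally.
png_bytes = b"\x89PNG\r\n\x1a\n..."
url = data_url("image/png", png_bytes)
html = f'<img src="{url}">'
```

A ServiceWorker can then pull these inlined assets out of the first page and seed the cache with them, so later pages reference the plain URLs.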


Looks like you can achieve an equivalent effect with service workers, like this:

1. If no cookie is set, the server inlines all resources in the initial HTML, along with the original URL in a data attribute if not content-hashed, and sets a cookie. Ideally, the inlining is performed so that base64 data is all together for better compression (this can be achieved by putting all scripts in the header and images in the header as CSS background-image values)

2. If JavaScript is enabled, a service worker is installed, and all the inlined resources are extracted, put in local storage (or in the browser cache if possible) and will be returned by the service worker

3. If JavaScript is enabled, the service worker keeps track of what is cached in the browser and modifies each request to send a bloom filter indicating these resources

4. The server later only inlines resources that are not cached by the client, using either the bloom filter if available, or using server-side tracking

As far as I can tell this has equivalent latency and bandwidth requirements to having server push plus cache digests, with the disadvantage of requiring JavaScript and slightly higher overhead, but the advantage of being more customizable.
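Step 3's digest could be as small as a toy bloom filter like this (sizes and hash counts are arbitrary for illustration, not a proposal; a real deployment would tune them for its asset count and acceptable false-positive rate):

```python
import hashlib

M = 256  # filter size in bits
K = 3    # hash functions per entry

def _positions(url):
    # Derive K bit positions from independent slices of one SHA-256.
    digest = hashlib.sha256(url.encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % M for i in range(K)]

def add(bits, url):
    for p in _positions(url):
        bits |= 1 << p
    return bits

def maybe_contains(bits, url):
    return all(bits >> p & 1 for p in _positions(url))

# Client side: fold every cached URL into one integer, send it as ~M/8
# bytes in a request header.
bits = 0
for cached in ["/app.js", "/style.css"]:
    bits = add(bits, cached)

assert maybe_contains(bits, "/app.js")  # definitely reported present
# Bloom filters can only false-positive, never false-negative, so the
# worst case is the server skipping an inline the client actually
# needed -- which just costs one ordinary fetch, the safe direction.
```

This is essentially what the Cache-Digest draft specified at the HTTP level, minus the Golomb coding.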


That's a lot more overhead. For one it requires you to spin up an entire JavaScript engine! I don't know if it was ever implemented but the beauty of doing this at the HTTP level is that (for example) it could be built into the networking layer of iOS or Android.


Developers asked for some support for PUSH frames in Fetch for 5 years[1]. There were a couple api ideas that came up. But no browser ever implemented anything to make Push actually useful & interesting to developers, beyond the very basic & boring.

It is extremely unfortunate (cruel?) that we will no longer have any chance to see what these very interesting capabilities are good for.

This is a seriously difficult change, but I don't think it should be ended. I hope a browser with more courage can help it along, and that the world has a chance to use this amazing enhancement to HTTP.

[1] https://github.com/whatwg/fetch/issues/65


The writing is on the wall about Google being the most powerful actor mandating what should live or die in the web specs. The monopoly would be bad alone, but to make things worse, they have a clear conflict of interest, given they are a powerful cloud computing operator that is very interested in centralizing, controlling and monopolizing data (which is the 21st century's gold).

We can expect a "war" from big tech against anything really distributed, once some tech emerges that would menace their centralized control over our data.

Having them dictate what can or cannot be in browsers will prevent new blue seas in tech, and I expect that if nothing changes the picture, even the startup scene will be pretty slow, given it will not have much left to do; the ones who find a way will probably be bought and left in a dump of vaporware.

Let's take a blue sea as an example: AI. They have access to massive data no one else has, and there's no way any AI product can compete with that in the long term.

So your browser being a "dumb" client that fetches and saves everything in the cloud is the only thing they need, and the thing they will optimize for. And this is not in users' or devs' best interests, much less civil society's as a whole.


Apple has deliberately ceded this ground by focusing on native apps and Microsoft are still in purgatory for their crimes of the IE era. Google won this war by attrition.


A subscription mechanism seems like it would be broadly useful. Maybe "push" is the wrong framing for this? I've seen some efforts in that direction [1], but don't have a real sense of what's coming when.

1. https://braid.news/


> A subscription mechanism

Already specced out and implemented for years. https://enwp.org/AtomPub https://enwp.org/WebSub https://enwp.org/Webhook


Man. Modern internet standards are some serious turd-polishing.

First you have HTTP2, which goes from a plaintext protocol to a binary protocol. But it's backwards compatible with HTTP/1.1, because... nobody in the entire world will consider adding a new internet service port (81, 442, etc). In 100 years, we will still be using ports 80 and 443 - not because it's a good idea, but because literally every business in the world just refuses to update their software.

Then you have HTTP3. Where we do the above, but with a transport protocol. Literally nobody in the world wants to update their protocol stack, hardware, middleware, software, etc to add a new internet protocol. So we clamber on top of another protocol (UDP)'s back, to create a new protocol (QUIC), and then clamp yet another protocol (HTTP3) on that. So we have IP->UDP->QUIC->HTTP3. Which is backwards compatible with HTTP/1.1, because, you guessed it.... nobody wants to add ports 82 and 440.

The next step is going to be nobody wanting to update to IPv6. So they will probably implement IPv7, and stick it on top of whatever replaces UDP on top of IPv7. Then add on QUIC, and HTTP4. So it will be Ethernet->Arp->IPv4->UDPv4->IPv7->UDPv?->QUIC->HTTP4. All because nobody wants to update their software to use an existing, widely supported, network layer protocol. Or transport protocol. Or application protocol. Meanwhile, anything that wants to use the new features of the new protocols needs to implement 4 new protocols.

It's going to keep going like this until we have some new Apple-ish company come along with brand new everything, and everyone will flock to it, and the only way anyone will be able to use it will be to implement the new snowflake protocols. To even connect to the devices you'll need to have a license agreement with them and establish a secure connection that only that company has the keys for (because "privacy", because the internet standards are so ancient they never adapt fast enough and hackers basically compromise all web security). And those new things will become a de-facto standard. So we'll never have, say, a new TCP, but we'll have an AppleTCP.


> So we have IP->UDP->QUIC->HTTP3. Which is backwards compatible with HTTP/1.1, because, you guessed it.... nobody wants to add ports 82 and 440.

I’m not sure what you mean by backwards-compatible here. HTTP/3 is not running over TCP port 80. It conveys the same semantics as older TCP HTTP, but that’s so applications can transparently use one or the other.

I’m afraid your rant only goes downhill from there. Your IPv7 and UDPv4 are quite implausible, especially where you’ve put them in the stack.

Look, we’re dealing with people here. You can’t just say “we’re dropping this thing you’ve been using for years, effective immediately”. We’re also dealing with capital investments: half the world is built upon stuff that only supports TCP and UDP at the layer on top of IP and can never support alternatives like SCTP. So replacing things takes a great deal of time. Therefore, sitting on top of existing protocols lets you do something rather than nothing.

All progress in all fields is developed on top of existing things.


> HTTP/3 is not running over TCP port 80

Yes, obviously it's not TCP, and obviously not that port. I explicitly stated "UDP->QUIC->HTTP3", which is only inaccurate because HTTP3 is an implementation of QUICv1.

But it is still running over port 443. So my point is exactly the same. They refuse to change the port so that nobody has to agree to implement something crazy, like a new L4 protocol, or worse. (HTTP/3 does support the ability to advertise additional ports besides 443, but so does HTTP/1.1, in "Location: https://someurl:442/" redirects, and still nobody has ever used anything but 443 for production.)

> Look, we’re dealing with people here. You can’t just say “we’re dropping this thing you’ve been using for years, effective immediately”.

But we are saying "All the clients [that effectively 2 companies control] will be using this new thing, so unless you want to be left behind, grandpa, get on the bandwagon." There is plenty of muscle to start the process of upgrading legacy crap. Over 5-10 years we could get 90% adoption of an entirely new network stack. But we have to actually start going there by writing the standards with that intent.

QUIC is a good start, but from the protocol stack's perspective it's a hack. 10-15 years ago, it simply would have been a new revision of SCTP. But the design is being pushed by an advertising company that just wants an 8% reduction in traffic so they can increase their shareholder dividend. They don't give a crap about evolving the stack to actually improve computing as a whole, which is what the standards are really intended for.


UDP port 443 is entirely unconnected to TCP port 443—they’re the same number, but in a completely different namespace. HTTP/3 didn’t have to use UDP ports 80 and 443, it was just most convenient for all parties concerned to use the same numbers as the TCP ports, because there was no obvious reason to change it.

I do wish you could specify which port to use for your HTTP with SRV records, but it’s unlikely that will ever happen.

10–15 years ago, SCTP didn’t work on the internet at large, just like it doesn’t now. Face it, protocols at this layer have ossified so that TCP and UDP are your only options if you want broad compatibility—and that’s nothing to do with any one party, it’s everyone’s fault. TCP and UDP will never get another sibling. (See also how TLS ossified in 1.2 so that fundamental improvement was an enormous effort and 1.3 is still not universally supported.) Fortunately, UDP is pretty close to raw IP, so there’s not a great deal of difference between implementing QUIC on top of UDP and QUIC on top of IP.

> Over 5-10 years we could get 90% adoption of an entirely new network stack.

Nope, this is absolutely false; I even have a concrete counterexample: IPv6. Standardised in 1995–1998, supported by DNS since 2008, supported by all major OSes by 2011, but still unavailable to a very large fraction of internet users. (As a consumer of internet, I’ve encountered IPv6 support in one commercial internet supply in Australia, but not in any other, or any residential or cellular connection in Australia, or in a few stints in India, or in a cellular connection in the USA.)

You’re seriously overestimating the ability to get people to change. If your solution is incompatible with the past, people won’t change because no one’s using the new thing (because it’s incompatible). If your solution is compatible with the past, people won’t change because there’s no good reason to. Most of the time people will only change if there’s a compelling reason to: e.g. with HTTP/2, many adopted it because the multiplexing normally improved page load performance, and with HTTPS, many adopted it because browsers were starting to label their sites as insecure if they didn’t, and they didn’t like that.


> Face it, protocols at this layer have ossified so that TCP and UDP are your only options if you want broad compatibility—and that’s nothing to do with any one party, it’s everyone’s fault. TCP and UDP will never get another sibling.

> Nope, this is absolutely false; I even have a concrete counterexample: IPv6. Standardised in 1995–1998, supported by DNS since 2008, supported by all major OSes by 2011, but still unavailable to a very large fraction of internet users.

> You’re seriously overestimating the ability to get people to change.

I think it's just the opposite: everyone else in the world is just giving up.

Actually changing these things is not some insurmountable, inscrutable task. It's just a protocol! It's ones and zeroes! It's not rocket science, or nuclear fusion (which now seems possible?). It's a crappy little specification for a communications protocol. All we have to do is do the work. It's not easy, and it's not fast, but it's also not some unknown, unseen force in the universe that we can't comprehend. It's a damn network protocol (and a simple one at that).

We know how to change it, and we know that the only thing stopping us is the will to do it and cooperation. Giving up on this change is defeatism and laziness. Can you imagine if in any other category of science or technology, researchers refused to make progress because "it's too hard to work with other people" ?


How do you propose to convince literally millions of businesses to spend at least tens or hundreds of thousands of dollars to change something deep in the technical stack that they don't understand, to achieve some nebulous slight improvement, when what they have at present works perfectly? And unless you can convince almost all of them to do it at about the same time, the old will necessarily linger (see also the compatible/incompatible problem I mentioned) and provide further disincentive. And then there are regulations that make any changes like this take an absolute minimum of five or ten years, because of multiple iterations of required regulation changes and certifications (e.g. the regulator must be convinced to allow thing Q, then manufacturer A must make and get certified product X, then manufacturer B that builds on that can make and get certified product Y, then C for Z, and finally the product you need is on the market and now you have to convince your boss you actually need to buy it).

You must have a strong lever if you wish to effect meaningful change. HTTPS adoption is a good example: browsers were in a position to honestly bully people into preferring it; and even then, it has been taking quite some years to get any substantial majority. I see no possible route whereby HTTP/3 could have succeeded had QUIC been built upon IP rather than UDP.

Progress fails to happen due to inertia all the time. This is not something specific to software or hardware. The first example that springs to mind is climate change mitigation. Simply as a general observation about life, most people don't want to change what they're doing.


That was an amazing rant. I really respect the level of detail and dedication to the rant.


> The next step is going to be nobody wanting to update to IPv6.

I hate that IPv6 has such low usage, and it is exactly the same problem you describe. We have a protocol meant to replace IPv4 and eliminate several kludges (NAT) in the process. But guess what? People would rather keep on hacking the previous protocol to the extremes than learn and implement a newer better system. IPv6 is better than IPv4 in literally every sense, but your sysadmin can't be bothered to deviate even slightly from her decade old knowledge.

So now we have all sorts of v4-in-v6 tunneling, and someone even talked about NAT for IPv6 (WTF?!). The end result is that v6 is incredibly complex due to all this backward-compatibility baggage.

We need Apple to deprecate IPv4 like they did 32 bit. Otherwise companies will keep us locked into old technology for eternity.


The telcos use it, ISPs use it. Cloud doesn't use it yet because AWS is backwards at migrating some of their services (eg RDS). But they support IPv6 in VPCs and for ingress/egress to the internet.

Effectively they run dual stack, and people are starting to realize that allocating subnets in /24 chunks is getting painful, that it's much easier to allocate in /56 then /60 then /64, and that using ULAs, routing, NDP, and DNS is a much better way to do things.

IPv4 in the cloud is mostly people not wanting to understand more than 10/8 and 172.16/12 and 192.168/16.

IPv6 is actually easier but it requires learning about a new Layer 2/3 set of protocols. NDP is really better than arp/bootp/dhcp etc.

Use SLAAC and then mDNS and DNS-SD.

There was a really neat use of IPv6 on here the other day that lets you avoid the port mapping/iptables dance in K8s by treating a node as a LAN and giving each pod its own IPv6 "node area network" address. [1]

[1] https://news.ycombinator.com/item?id=25245057


> IPv6 is actually easier but it requires learning about a new Layer 2/3 set of protocols.

Exactly! It's simple to understand if you throw away the decades of cruft that has accumulated in IPv4.

ARP is a mess, half the specification is vague and everybody just does whatever they want. NDP is amazingly intuitive and sensible, but you have to stop thinking of it in terms of ARP.


> IPv6 is better than IPv4 in literally every sense

The addresses are harder to type, harder to yell across an office while setting up equipment, and impossible to remember. I honestly think this is its major failing, the one that has prevented wider adoption. Aesthetics make a big difference, like it or not, and IPv6 is far less accessible than IPv4.

Beyond that, IPv4 is arguably generally simpler for someone not well versed in networking to get a baseline understanding of.


And yet if you set it up right, you don't need to type them or yell them or remember them.

> Beyond that, IPv4 is arguably generally simpler for someone not well versed in networking to get a baseline understanding of.

And BASIC is easier to program in than Rust if you're not well versed in programming.

IPv6 addresses come in 3 flavors, unicast, anycast, multicast.

Unicast addresses are for a single host, like a unique telephone number

Anycast addresses go to the nearest receiver, like 911

Multicast addresses go to a group of hosts that are listening to the address for that group, like everyone streaming the same show on Netflix at the same time.

Unicast addresses are either local to an interface/LAN/site, or global.

Anycast addresses are global.

Multicast addresses can be very local or across a wider area like a company or can be global.

You can tell what sort of address it is by the start:

* Multicast addresses start with "ff".

* LAN addresses start with "fe".

* Site (multiple LAN) addresses start with "fd".

* Anything else other than "00" is global, but some prefixes have special meanings; for example, "2001:db8" is reserved for documentation.

There's a protocol called NDP that all the hosts use to give themselves addresses and find out how to connect to other hosts.

That's a start. The fiddly bit twiddling details can come later.
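The prefix rules above can be checked mechanically. A small sketch using Python's standard `ipaddress` module (the labels are my own shorthand, and the ULA check leans on `is_private`, which also covers a few other reserved ranges):

```python
# Classify IPv6 addresses by prefix, matching the rules sketched above.
import ipaddress

def flavor(addr: str) -> str:
    a = ipaddress.IPv6Address(addr)
    if a.is_multicast:                        # ff00::/8 -- starts with "ff"
        return "multicast"
    if a.is_link_local:                       # fe80::/10 -- starts with "fe8"
        return "link-local unicast"
    if a.is_private:                          # includes fc00::/7 ULAs ("fd...")
        return "site-local unicast (ULA)"
    if a.is_global:
        return "global unicast"
    return "other/reserved"

print(flavor("ff02::1"))          # multicast (all nodes on the link)
print(flavor("fe80::1"))          # link-local unicast
print(flavor("fd12:3456::1"))     # site-local unicast (ULA)
print(flavor("2606:4700::1111"))  # global unicast
```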


> And BASIC is easier to program in than Rust

And Visual Basic is #6 on the TIOBE rankings despite being officially retired whereas Rust is #25

Simplicity sells.

Look at XML vs JSON. XML is feature packed. Arguably the "better" format but JSON has taken over because it's way easier to use.

REST beat SOAP and XML-RPC because it was less complex and waaay easier to build.

When there is something that is easy, and even potentially a little simpler than needed, which does the same thing as something complex, the simple thing almost always wins, even if (or especially if) it's worse. The whole "Worse is Better" mantra has always proven true.

I've said for like 20 years now that they should have just tacked a couple more octets onto IPv4 rather than trying to reinvent the wheel.

Exhaustion was and still is the most important problem to fix. Get that out of the way, and then we could have fixed the other things wrong with IP one at a time.

We could have been on like IPv12 by now. Small iterative changes are easier to absorb. Adoption would have happened already. Instead we threw the baby out with the bathwater; classic second-system syndrome.


I thought domain names were created so we don’t have to yell IP Addresses.


Faster computers and network speeds will encourage higher levels of abstraction. We can't just leave all that extra performance on the table! Unused CPU cycles are wasted cycles, unused RAM is wasted RAM, and unsaturated link capacity is wasted capacity! \s


Can someone give a useful example of HTTP/2 Push being used?

I believe I understand how it works, but I'm curious what real-world techniques could be achieved with it.


The blog post links to another one of their blog posts with a real world example for push: https://evertpot.com/h2-push-for-apis/


It's great for pushing a small bundle of static assets (e.g. <250kb) in advance of the HTML to make pageloads first-time visitors nearly instant. (e.g. for landing pages, public sites, blog posts, etc.)

It's bad for pushing large bundles of JS and images to every visitor that already has them cached (which is unfortunately how it was misused in practice).


Unrelated, but I just want to rant.

I hate how the minimum possible websocket message is smaller than the largest possible websocket header. You can have an entire message that's 6 bytes, but you can also have a header that's 14 bytes.

I'm curious why they added support for massive messages (the 64-bit length field allows payloads up to 2^63−1 bytes). And I'm curious how much bandwidth would be "wasted" if, instead of supporting a 6-byte masked header (for payloads of 0-125 bytes), an 8-byte one (up to 65535), and a 14-byte one (everything larger), they just made it a fixed 9-byte header for payloads up to 2^32.
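For reference, RFC 6455's framing gives a 2-byte base header, an optional 2- or 8-byte extended length, and a 4-byte masking key on client-to-server frames. A small sketch of the resulting header sizes:

```python
# Header size for a WebSocket frame per RFC 6455: 2 base bytes, plus
# 0/2/8 extended-length bytes, plus a 4-byte mask on client-sent frames.
def header_size(payload_len: int, masked: bool = True) -> int:
    if payload_len <= 125:
        extended = 0          # length fits in the 7-bit field
    elif payload_len <= 0xFFFF:
        extended = 2          # 16-bit extended length
    else:
        extended = 8          # 64-bit extended length (up to 2**63 - 1)
    return 2 + extended + (4 if masked else 0)

print(header_size(0))         # 6  -> smallest masked frame is 6 bytes total
print(header_size(70_000))    # 14 -> largest possible masked header
```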


I think you are ignoring the TCP/IP overhead. Add:

22 bytes Ethernet Header

20 bytes IP Header

20-32 bytes TCP Header

16 bytes Ethernet Footer

That's 78 to 90 bytes extra - trying to optimise the websocket packet size is really not going to help much.

See https://www.researchgate.net/publication/269031593_Performan...
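To put numbers on that, using this comment's own figures (which vary in practice with VLAN tags, TCP options, and so on) and a hypothetical 100-byte payload, a 3-byte header saving is a rounding error next to the lower layers:

```python
# Per-packet overhead below the WebSocket layer, per the parent's figures.
ethernet = 22 + 16                 # Ethernet header + footer
ip, tcp = 20, 26                   # IPv4 header, TCP header (mid of 20-32)
lower_layers = ethernet + ip + tcp # 84 bytes before any WebSocket data

ws_header, payload = 6, 100        # minimal masked frame, small message
total = lower_layers + ws_header + payload
print(total)                       # 190 bytes on the wire
print(round(3 / total * 100, 1))   # shaving 3 header bytes saves ~1.6%
```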


I'm trying to simplify parsing/writing by making it less variable. The spec is the one with the optimization to save 3 bytes for short messages. Why?


Ahhhhh, well, perfection is hard to achieve eh?

Off-topic, but all those little cuts do sting. Of course, if they did it your way, surely someone else would be ranting about why they wasted bytes unnecessarily.


What kind of optimization? From a performance point of view, 64 bytes would be a cache line (on most architectures), so if the message fits there, it's good enough.


WebRTC is UDP based and can have a lot less overhead than this (with the obvious drawbacks).


They mention WebSockets, which are TCP-based, and are not talking about WebRTC.


Is HTTP/1.1 pipelining basically dead too?


Yes. No current browsers implement HTTP/1.1 pipelining, and various proxies and other intermediaries don't support it either, due to head-of-line blocking and proxy errors.

Also, with HTTP/1.1 pipelining, if a client sends multiple requests and one of them results in an error that causes the server to close the connection, all of the other requests are lost.
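For anyone who wants to see the mechanics, here's a minimal sketch using only Python's standard library: both requests go out before any response is read, and the server has to answer them strictly in order, which is exactly the head-of-line-blocking problem.

```python
# HTTP/1.1 pipelining against a throwaway local server.
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # keep-alive, needed for pipelining
    def do_GET(self):
        body = f"resource:{self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
# Pipeline: write both requests back-to-back before reading any response.
sock.sendall(b"GET /a HTTP/1.1\r\nHost: localhost\r\n\r\n"
             b"GET /b HTTP/1.1\r\nHost: localhost\r\n\r\n")

data = b""
while b"resource:/b" not in data:          # read until both responses arrive
    chunk = sock.recv(4096)
    if not chunk:
        break
    data += chunk
sock.close()
server.shutdown()

print(data.count(b"HTTP/1.1 200"))         # 2: both answers, one connection
print(data.index(b"resource:/a") < data.index(b"resource:/b"))  # True: in order
```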


Thrift and gRPC can "push" until the cows come home (streaming). Use the correct tool for the job. HTTP is for hypertext documents, not APIs or RPC.


Well, gRPC uses HTTP/2 as its default transport so I'm not sure "HTTP is for hypertext documents" is valid.

Note that gRPC never used HTTP/2 PUSH_PROMISE frames so it isn't affected by their removal. gRPC's bi-directional streams have to be initiated by the client.


HTTP is for REST, REST is an API style for client/server interactions.

RPC is another API style, but it's more tightly coupled. RPC has been tried and tried and tried and the same problems happen all the time. Tightly coupled, brittle, hard to upgrade.

If by "gRPC" you meant protobuf as a message format, then sure. But TLV based binary streaming has been around for yonks (eg ASN.1).

Pushing is an optimization to avoid API round trips, not just the capability of sending a binary stream.

Link headers have much the same effect as HTTP/2 push, but a client can serve linked resources from its cache of previous requests (and the new 103 Early Hints status lets the server send them before the main response is ready).

It's a hint from the server that it's highly likely that the Link URLs will have useful information in terms of the current request.

It can be seen by caching layers between the client and the server, so they can pre-cache the links or deliver them without server interaction if previously cached.

All of that is part of HTTP, which has nothing to do with "hypertext documents". It just happened to be that hypertext documents were the first use of the protocol.
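As a sketch of how an intermediary might consume such headers, here's a rough parser for the Link format shown upthread (a hypothetical helper, not a full RFC 8288 implementation; it ignores quoted commas and other edge cases):

```python
# Parse a Link header value into a list of {"target": ..., param: ...} dicts.
def parse_link_header(value: str) -> list:
    links = []
    for part in value.split(","):
        target, *params = [p.strip() for p in part.split(";")]
        link = {"target": target.strip("<>")}
        for param in params:
            key, _, val = param.partition("=")
            link[key.strip()] = val.strip().strip('"')
        links.append(link)
    return links

hdr = '</app.css>; rel="preload"; as="style", </app.js>; rel="preload"; as="script"'
for link in parse_link_header(hdr):
    print(link["target"], link["as"])
```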


Most "REST" APIs are just RPC. Many "REST" URLs are just the names of the methods they call on the remote endpoint.

REST is good for things like downloading and uploading documents or media to a web server. GET, PUT, POST, DELETE, etc. were designed for documents/media. Because it's available from a web browser, like JavaScript, people started using it even on native apps because it's familiar. That doesn't mean it's the best or most productive protocol.


And telephones are for phone conversations, not squealy modems! Use an ISDN line like god intended!


this but unironically


Does this have any effect on Server Sent Events (SSE)? I understand that uses HTTP/2 in some way.

https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...


The main benefit of HTTP/2 for SSE is having just one server connection regardless of how many naively implemented widgets you have on a single page. SSE works just fine over HTTP/1.1, but it keeps a socket open to the server for each connection, so a badly architected client can exhaust server resources quite fast.


no


Two of my co-workers migrated completely to Firefox in the last month. I don't expect Firefox to regain its market share, probably ever, but I can't stand Chrome, its ads, its backer. DDG extension installed.



