HTTP/1.1 to the backend? How about HTTP/2 so people can run gRPC services straight through without translations?

To be fair, the majority of the benefits of HTTP/2 and QUIC are for the browser and are experienced at the edge of your service. That's not to say having the option to pipe HTTP/2 to your gRPC servers wouldn't be nice, but the majority of setups I've seen use HTTP/1.1 at the application layer, and what happens from there is a matter between consenting servers.
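
For example, the usual split looks roughly like this in Go (the cert paths and upstream address are placeholders): the edge listener negotiates HTTP/2 with browsers over TLS, while the reverse proxy speaks plain HTTP/1.1 to the app server.

    // Minimal sketch of the usual split: HTTP/2 at the edge, HTTP/1.1 to the app.
    // The upstream address and cert paths are placeholders.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Go's net/http negotiates HTTP/2 with clients automatically when serving TLS.
        upstream, err := url.Parse("http://app.internal:8080") // HTTP/1.1 backend
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(upstream)

        // Browsers talk h2 to us; the proxy's default transport talks HTTP/1.1 upstream.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }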

If QUIC allows arbitrary-length datagrams and retransmits them out of order, then it's still going to be useful for video games and other systems with long-lived connections. That's how I imagine a replacement for TCP and UDP should work. But it doesn't look like there are standalone implementations of QUIC, so I have no idea what the API actually looks like. There is only Google's implementation, which is part of Chromium.

There are a few. This page lists the IETF QUIC implementations and the gQUIC implementations (which is what's supported here) at the bottom. https://github.com/quicwg/base-drafts/wiki/Implementations
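
To give a feel for what the API looks like, here's a rough sketch with quic-go, one of the standalone implementations on that list. Its signatures have shifted between releases, so treat this as an approximation rather than a reference; the address and ALPN string are made up.

    // Rough sketch of the quic-go API (github.com/quic-go/quic-go). The library's
    // signatures have changed across releases, so this is an approximation.
    package main

    import (
        "context"
        "crypto/tls"
        "log"

        "github.com/quic-go/quic-go"
    )

    func main() {
        ctx := context.Background()
        tlsConf := &tls.Config{
            NextProtos: []string{"my-game"}, // QUIC requires an ALPN protocol
        }

        // One QUIC connection carries many independent, ordered byte streams;
        // loss on one stream doesn't stall the others.
        conn, err := quic.DialAddr(ctx, "game.example.com:4242", tlsConf, nil)
        if err != nil {
            log.Fatal(err)
        }
        stream, err := conn.OpenStreamSync(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer stream.Close()

        if _, err := stream.Write([]byte("hello")); err != nil {
            log.Fatal(err)
        }
    }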

The counterpoint to that would be that, inside Google, it's all gRPC or Stubby.

I counter your counter by pointing out that this is not an internal tool, and that gRPC/Stubby doesn't have even a plurality of usage outside of Google.

HTTP/2 would still be great for upstreams to ensure HTTPS and make things much more efficient via compression and multiplexing. It makes a difference in higher-scale situations.

HTTP/2 as a spec doesn't demand HTTPS. Only browsers enforce this requirement.

From a technical perspective, HTTP/2 makes little sense when you're commingling requests from different user agents. Header compression is ineffective. Persistent connections provide little benefit, as keep-alive works just as well in a data center environment. Between servers, there is no connection limit (as there is with browsers), so there aren't really head-of-line blocking issues that would go away by using HTTP/2.

In general, you're unlikely to benefit materially from using HTTP/2 (versus vanilla HTTPS) between your edge servers and your application servers.
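
To put something concrete behind the keep-alive point: with Go's stdlib, for example, the thing that actually bites between servers is the idle-connection pool size, which defaults to two connections per host. A rough sketch (pool sizes are made up, tune for your workload):

    // Illustrative HTTP/1.1 client tuning for server-to-server traffic.
    // The pool sizes here are made up; tune them for your own workload.
    package main

    import (
        "net/http"
        "time"
    )

    func newUpstreamClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                MaxIdleConns:        1000,             // total idle keep-alive connections
                MaxIdleConnsPerHost: 100,              // default is 2, far too low between servers
                IdleConnTimeout:     90 * time.Second, // recycle idle connections eventually
            },
            Timeout: 10 * time.Second,
        }
    }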

Yes, with low traffic it won't matter, but "it makes a difference in higher-scale situations." We benefit materially when doing billions of daily requests.

Servers don't have unlimited connections, so HTTP/2 provides more throughput and concurrency over the same TCP connection pool, while also preventing a slow request from holding up all the other requests queued on the same connection. HPACK compression operates on individual headers, and eliminating repeated keys alone can save significant overhead for smaller payloads; the binary framing helps too.
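
Rough illustration of those savings, using the hpack package from golang.org/x/net (the header names and values are made up): encode the same header block twice on one connection and the repeat shrinks to a handful of bytes once the dynamic table is primed.

    // Rough illustration of HPACK's savings on repeated headers,
    // using golang.org/x/net/http2/hpack. Header values are made up.
    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        headers := []hpack.HeaderField{
            {Name: ":method", Value: "GET"},
            {Name: ":path", Value: "/api/v1/items"},
            {Name: "user-agent", Value: "edge-proxy/1.0"},
            {Name: "x-request-id", Value: "abc123"},
        }

        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)

        // First request: names and values get added to the dynamic table.
        for _, h := range headers {
            enc.WriteField(h)
        }
        first := buf.Len()

        // Second request on the same connection: mostly table references.
        buf.Reset()
        for _, h := range headers {
            enc.WriteField(h)
        }
        fmt.Printf("first request headers: %d bytes, repeat: %d bytes\n", first, buf.Len())
    }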

A QUIC to h2 adapter is probably very complex to implement in a fully compliant way, since both protocols are highly stateful. gRPC has a Cronet transport, which in turn has experimental support for QUIC. I don't think it's reached production level yet...

It shouldn't be that complex. On top of both protocols sit HTTP semantics. You get a request, which has headers in both directions and contains a bidirectional stream. You forward both in each direction.

It's no different from building an HTTP/2 (or HTTP/QUIC) to HTTP/1 gateway. On both sides there are HTTP APIs.

The statefulness is at the lower level, where streams/requests are multiplexed. However, that state doesn't need to be conveyed or transferred during proxying.
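
A sketch of what the forwarding core looks like in Go terms (the upstream address is a placeholder; hop-by-hop header handling and timeouts are omitted): copy the headers, hand the incoming body to the upstream request, and stream the response back. Whatever framing each side negotiated stays on that side; for fully bidirectional gRPC streams the upstream leg would also need to be HTTP/2.

    // Sketch of the "forward headers plus a body stream" idea. A real gateway
    // also strips hop-by-hop headers, sets timeouts, handles trailers, etc.
    package main

    import (
        "io"
        "log"
        "net/http"
    )

    func main() {
        client := &http.Client{} // speaks to the upstream with whatever version it negotiates

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Re-issue the request upstream; r.Body streams the client's body through.
            out, err := http.NewRequestWithContext(r.Context(), r.Method,
                "http://app.internal:8080"+r.URL.RequestURI(), r.Body)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
            }
            out.Header = r.Header.Clone()

            resp, err := client.Do(out)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
            }
            defer resp.Body.Close()

            // Copy response headers and body back; the framing on each side is
            // handled by that side's HTTP stack, not by this code.
            for k, vs := range resp.Header {
                for _, v := range vs {
                    w.Header().Add(k, v)
                }
            }
            w.WriteHeader(resp.StatusCode)
            io.Copy(w, resp.Body)
        })

        // Serving over TLS lets the listener negotiate h2 with clients that want it.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", handler))
    }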

You're basically describing attaching an h1 to h2 adapter behind the existing QUIC to h2 adapter, with HTTP/1 as the intermediary protocol, which is a simplification for both sides. Of course that won't be that complex. But the downside is that this approach keeps both protocols from reaching their full potential by forcing them to operate at a higher level.

gRPC over QUIC has been in use by some Google apps in production for over a year :)
