
HTTP/2 is not the future, it’s the present - Eleven_Wilson
http://blog.eleven-labs.com/en/http2-future-present/
======
rita3ko
It's not Eleven Labs' present, seeing as they don't support HTTPS, and HTTP/2
doesn't work without it.

~~~
lumisota
The standard [1] doesn't require the use of TLS/HTTPS. However, all of the
major implementations only support HTTP/2 over TLS.

[1] RFC7540 -
[https://tools.ietf.org/html/rfc7540](https://tools.ietf.org/html/rfc7540)

~~~
rita3ko
Correct, "all of the major implementations" :)

~~~
frik
Which is quite bad. I don't understand why, or who lobbied to implement it that
way in Firefox. Can someone disclose the decision-making behind this? It would
help devs a lot if HTTP/2 worked without TLS too.

~~~
Twirrim
All the main browsers want to encourage HTTPS everywhere. They're taking a
number of initiatives to encourage it, some are carrots, some are sticks.

Mostly it's ending up as the stick, e.g. "You can only get these major
performance advantages over HTTPS". Soon there is an intent to specifically
call out non-HTTPS content very prominently in the browser, even more so than
HTTPS content was ever called out back when that was introduced.

~~~
frik
Yes, exactly. But why? What's the agenda? Why now? It's not like Amazon.com
was broken between 1996 and 2016. There is a push to HTTPS at all costs, even
if things (ad networks, devs, software) are not ready. And is there even a need
for 100% TLS? That an open source browser is engaged in such an initiative is
not good. With HTTP you can be anonymous, and most website visits aren't
mission-critical things. With HTTPS your traffic is quite unique, and the
players with big money have the resources anyway, so for them TLS is no big
obstacle, as we learned in the last few years. So which think tank or whatever
is behind this initiative?

~~~
drchickensalad
Even if it's not a mission-critical web request, ISPs injecting content into
your HTTP pages is unacceptable.

Even if the government could brute force or fingerprint one of your TLS
sessions, they can't do it for everyone for every site.

Just because Amazon wasn't broken doesn't mean we can't improve it.

Also your comment about a think tank coming up with this idea is an odd
insult, given how broadly you can empirically see people support this.

~~~
jsudhams
Hmmm, but I am pretty sure ISPs and ad agents (Google etc.) will figure out a
way to inject content.

~~~
orf
Well... They won't be able to. If they can figure out a way it means they have
broken the underlying encryption, so that gets fixed.

------
KirinDave
I'd like to go further: HTTP/2 is the present not just for content serving,
but could be for _APIs_.

Right now, everyone who's supported both mobile and web APIs (or cared about
bandwidth constraints) can tell you how frustrating it is that well-factored
RESTful (or REST-ish) APIs are often the casualty of accepting the realities
of mobile client support. You end up increasingly collapsing requests into one
Special Bigger Request, because One Special Bigger Request almost always
performs better on bandwidth-constrained clients than the same requests made
separately.

And this is especially frustrating because oftentimes the client can't tell
us exactly what they need in the initial request. The best we can often do is
stash an ETag header in the request and try to get very fancy there. We end
up with smarter endpoints which require tons more programmer maintenance, and
move us away from the world of autogenerated and automatically audited
endpoints.

HTTP/2 w/ Push offers us a way to split the difference. While individual
endpoints can be served very quickly, we can have our APIs recognize (via
etags, probabilistic inference, previous negotiation, etc) that more data is
about to be requested and push it immediately.

This means we can keep our well-factored, simple API designs that we developed
10 years ago without sacrificing performance to the bandwidth-constrained. It
means that we can further lean on the abstraction Service Worker gives us in
web clients to just naively poll in client code and trust in clientside
caching to make this efficient (with the fallback being the same network call
we would have always made).

It's why I'm absolutely infuriated with so many languages for balking at
supporting HTTP/2 as part of their stack (more often than not, on the grounds
that SSL + ALPN seems tedious to implement).

~~~
shanemhansen
I think that within a datacenter, a bunch of microservices make sense. When
communicating with a customer, you want to craft just the data they need and
minimize roundtrips.

GraphQL seems to be a great candidate for bridging this gap. I recommend using
GraphQL to let clients make the "One Special Bigger Request", which then gets
split into multiple requests on the backend.

~~~
derefr
The thing about One Special Bigger Request is that it isn't cacheable. GraphQL
is no exception: [https://philsturgeon.uk/api/2017/01/26/graphql-vs-rest-
cachi...](https://philsturgeon.uk/api/2017/01/26/graphql-vs-rest-caching/)

If you want network-level caching, you want multiple atomic resource requests.
HTTP2 enables this.

------
yoavm
This just made me add the 'http2' directive to my Nginx configs, and it seems
that I'm getting a 15% decrease in loading time for my websites. Should have
done this long ago...
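
For anyone wanting to try the same thing, the change is a one-word addition to the `listen` directive (server name and certificate paths below are placeholders; since browsers only speak HTTP/2 over TLS, the `ssl` part is required in practice):

```nginx
server {
    # Appending "http2" here is all that's needed on nginx >= 1.9.5
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```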

~~~
pixl97
Just to be sure, use a site like ssllabs.com to make sure you are actually
negotiating ALPN. For example on Ubuntu 14, setting http2 does nothing, as it
requires OpenSSL 1.0.2. If you are using Ubuntu 16, or a distro that provides
the newer OpenSSL, it should work correctly.

------
TazeTSchnitzel
> Remember, in may 1996, the very first HTTP protocol version (HTTP/1.0) was
> born.

Uh, what? HTTP predates 1996. The first version was HTTP/0.9, from 1991. 0.9
is still supported by a lot of web servers, too!

~~~
yes_or_gnome
Personally, I've found it hard to find much information about pre-HTTP/1.0. To
my understanding, "HTTP/0.9" is a retroactive version number. The original
HTTP protocol was incredibly simple. An HTTP request was simply "GET /", and
the response was just the content, followed immediately by the server closing
the connection. There were no headers.

"HTTP/0.9" is used to describe the implementations of "HTTP/1.0" before it was
-- for lack of a better word -- 'ratified'.

If you, or anyone, has more information or analysis of pre-HTTP/1.0 protocol
implementations (client or server), I would be very interested in reading it.

Also, if there is a public, pre-HTTP/1.0 HTTP server available on the
internet, I would very much like to check it out.

Thanks.

Edit: Alternatively, if it's possible to run a pre-HTTP/1.0 server (or
client), that would be phenomenal. It only occurred to me after making the
comment; however, I highly doubt that it would be a simple process to get one
running. I suspect that I wouldn't be able to spin up a Docker container with
some ancient version of Apache (I assume due to changes in the Linux
environment, shared libraries, changes to gcc, etc.), but possibly a VM
running an equally ancient version of Red Hat (pre-EL) would do the trick.

~~~
TazeTSchnitzel
This is the only reference I've known:
[https://www.w3.org/Protocols/HTTP/AsImplemented.html](https://www.w3.org/Protocols/HTTP/AsImplemented.html)

It's a little more sophisticated than GET /, as you can also specify a full
URL.
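
The protocol is simple enough that a rough client fits in a few lines. A Python sketch, assuming you can find a server that still answers 0.9 (the host is up to you, and many modern servers will reject or misinterpret this):

```python
import socket

def http09_request_line(path="/"):
    # An HTTP/0.9 request is a single line: no version, no headers.
    return ("GET %s\r\n" % path).encode("ascii")

def http09_get(host, path="/", port=80):
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(http09_request_line(path))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server signals end-of-response by closing
                break
            chunks.append(data)
    # The response is the raw document: no status line, no headers.
    return b"".join(chunks)
```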

> Also, if there is a public, pre-HTTP/1.0 HTTP server available on the
> internet

Most HTTP/1.0+ web servers still support HTTP/0.9. I don't know of any
0.9-only servers, but there are probably some.

------
daphreak
While I look forward to a faster desktop and mobile browsing experience with
HTTP/2 I do worry about the complexity. I hope that the simpler protocols
remain supported for a long time to enable implementations on low resource
embedded systems.

The last thing we need is for all the closed-source, internet-connected black
boxes in our lives to poorly implement a complicated web standard protocol.
There are so many places where we have already seen vulnerabilities in
implementations of simple web servers and clients.

~~~
MichaelGG
What looks particularly complex about HTTP/2?

The "simple" text based protocols are NOT simple to implement correctly. Go
try out line-wrapping and play with using \r\n vs \r vs \n as line endings and
tell me what the compatibility ends up like.
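
A toy illustration of how much room for disagreement there is: two equally plausible header parsers disagree about the same bytes, depending on whether a bare CR counts as a line ending (both parsers and headers here are made up for the example):

```python
# Raw header block with a bare CR buried inside it.
raw = b"Header-A: 1\rHeader-B: 2\r\n\r\nbody"
head = raw.split(b"\r\n\r\n")[0]

# A strict parser that only treats CRLF as a line ending sees ONE
# header line, with a stray CR inside its value:
strict = head.split(b"\r\n")

# A lenient parser that also accepts bare CR sees TWO header lines:
lenient = head.splitlines()

print(len(strict), len(lenient))  # 1 2
```

Two reasonable implementations, two different messages: exactly the kind of disagreement the SIP example in the next paragraph exploits.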

And this does create real-world security problems. Some VoIP companies allow
you to make free calls by screwing with their SIP proxies because popular
software handles line endings differently allowing you to make their edge
software interpret packets differently than their core.

Even parsing is easier in a nice binary format (and much, much, faster, too).

~~~
ris
It's not got much to do with the serialization protocol itself as such - most
server software uses a library which handles it properly and then doesn't have
to worry about it. It's more that web developers have to get bits of their
stack (the dynamic and static serving aspects) to cooperate in ways they never
had to before (in fact, the general wisdom was to separate them as much as
possible as they have totally different requirements when it comes to
performant serving).

This is not even to get into persistent connections & server push etc., which
is quite a big subject for developers used to handling independent fire-and-
forget requests.

I guess I'm saying the massive complexity comes in trying to _use_ these new
features.

------
spankalee
For those interested in HTTP/2 Push, check out Eric Bidelman's Push Manifest
format[1], which is a JSON format describing the dependencies to push for any
request URL. You can generate the manifest by hand or with tools, and then
there are servers that can read the manifest and push the right resources.[2]
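
For context, a push manifest is just a JSON map from resources to push metadata, roughly along these lines (field names approximated from memory of the project's README — check the repo for the exact schema):

```json
{
  "/css/app.css": { "type": "style", "weight": 1 },
  "/js/app.js":   { "type": "script", "weight": 1 }
}
```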

In Polymer we're most of the way done with support for automatically
generating push manifests from a dependency analysis of your project[3] and
have push manifest support in our dev server and soon a small production
server, so you can `polymer deploy` and get a push-enabled site.

[1]: [https://github.com/GoogleChrome/http2-push-
manifest](https://github.com/GoogleChrome/http2-push-manifest)

[2]: [https://github.com/GoogleChrome/http2push-
gae](https://github.com/GoogleChrome/http2push-gae)

[3]: [https://github.com/Polymer/polymer-
build/pull/172](https://github.com/Polymer/polymer-build/pull/172)

~~~
pcollins123
This is really interesting thanks.

Doesn't this lead to the most highly used static assets being pushed for every
subsequent request? Isn't that wasteful?

I use varnish cache extensively and am considering writing a probabilistic
server push module. It could keep track of subsequent requests and push the
next set of assets. This is more of a server module approach rather than
client side.

Any thoughts on that?

------
tabeth
Can someone recommend an up-to-date book/guide on HTTP(2)? I'm thinking this
may do: [https://http2.github.io/](https://http2.github.io/), but I'm not sure
if that's the best guide.

Much of the jargon in the comments is unknown to me. The article in the OP is
a good start.

~~~
shurcooL
I've learned most of what I know about HTTP/2 from talks given by Brad
Fitzpatrick. Go's HTTP/2 support in the standard library was largely
implemented by him.

------
6t6t6t6
I'd like to see a web framework that implements HTTP/2 well.

AFAIK, the Assets Pipeline in Rails is not designed for HTTP2 and I think that
there are no immediate plans to implement it.

~~~
chucke
> AFAIK, the Assets Pipeline in Rails is not designed for HTTP2...

Not true. You can define as many manifests as you want, and design your own
heuristics around resources in Rails. It doesn't need to be an HTTP feature.

You have to patch Rails to fill the Link header with the assets, however. If
you do, proper reverse proxies like h2o can serve those assets using server
push.
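
Concretely, the convention such proxies use is the standard preload Link response header; h2o, for example, turns headers like these into pushes when serving over HTTP/2 (asset paths are placeholders):

```
Link: </assets/application.css>; rel=preload; as=style
Link: </assets/application.js>; rel=preload; as=script
```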

~~~
6t6t6t6
> You have to patch rails

That sounds like a bad idea in the long term... ;)

------
solidr53
Can somebody ask Mr. Heroku why they don't support it...

~~~
6t6t6t6
You should probably not be serving a site directly from Heroku anyway. Put
Cloudflare in front of your Heroku app and you will get HTTP/2.

------
magicbuzz
To be clear, Server Push as described in the article is not actually provided
by nginx yet, and thus not by the server config in the article. I have no doubt
that the awesome nginx folks will implement it in due time though.

~~~
avichalp
Its implementation has been present in NGINX Plus since 2015.
[https://www.nginx.com/blog/http2-r7/](https://www.nginx.com/blog/http2-r7/)
Not sure if they are going to provide server push in the community version
soon.

~~~
juliangoldsmith
From your link:

>The “Server Push” feature defined in the HTTP/2 RFC is not supported in this
release. Future releases of NGINX Plus might include it.

~~~
scardine
That does not invalidate his main point: the developers have a conflict of
interest in releasing features for the open source version that compete with
the paid version.

~~~
juliangoldsmith
My reply wasn't intended to. I was pointing out that even the Plus version
doesn't have Server Push. I don't expect it will ever be present in the open
source version, barring a fork.

------
ElijahLynn
Worth watching:
[https://www.youtube.com/watch?v=CkFEoZwWbGQ](https://www.youtube.com/watch?v=CkFEoZwWbGQ)
(HTTP/2: What no one is telling you by Hooman Beheshti @ Fastly)

tl;dr: H2 can be slower than H1.1 under degraded network connections, e.g.
mobile connections with high packet loss.

------
cflat
As I've said previously [1][2][3], you shouldn't PUSH with `Link
rel="preload"` for performance gains. You will likely run into TCP HOL
blocking or browser request race conditions. Instead, send PUSH_PROMISE frames
during the idle time while the client waits for the TTFB.

[1]
[https://news.ycombinator.com/item?id=14082636](https://news.ycombinator.com/item?id=14082636)
[2] [https://shouldipush.com](https://shouldipush.com) [3]
[https://youtu.be/GjWD1pOkxUk?t=1534](https://youtu.be/GjWD1pOkxUk?t=1534)

------
webo
Is the common pattern to support HTTP/2 at the highest level?

If a request goes through CDN/edge network -> Load Balancer -> nginx -> Python
app, should HTTP/2 be enabled on all components, or is enabling it just on the
CDN enough? Any pros/cons?

With HTTPS, usually enabling it on the CDN is good enough for most cases.

~~~
bluejekyll
That will only get you the protocol. All of the server push stuff for example,
needs to be supported by the actual web server.

~~~
predakanga
That's not correct - while any HTTP/2 capable servers along the path _may_
implement server push, the most important one is the closest server to the
user (i.e. the CDN). The only requirement is that the intermediate servers
must not strip out the server push headers/tags, so that the edge server knows
what to actually push.

It's quite possible to have your own server speak HTTP/1.1 and still take
advantage of server push; this is a use case supported by Akamai[1],
Cloudflare[2], and I suspect other CDNs (can't find details). You may get
better performance using HTTP/2 on that intermediate link, but that's less
likely to be because of server push.

[1]: [https://blogs.akamai.com/2016/04/are-you-ready-for-
http2-ser...](https://blogs.akamai.com/2016/04/are-you-ready-for-http2-server-
push.html) [2]: [https://blog.cloudflare.com/announcing-support-for-
http-2-se...](https://blog.cloudflare.com/announcing-support-for-
http-2-server-push-2/)

~~~
bluejekyll
Ah yes. For static content you're correct. I've got my head in dynamic stuff
at the moment. Of course you're right.

------
cagenut
Anyone combining h2o and node in interesting ways they'd like to share/blog-
about yet?

------
treve
It's here, but current server-side implementations are super weak.
rel="preload" is a hack. Only the few languages that have a native HTTP/2 push
API are really here.

------
merb
and it's not as good as it could've been. the statefulness makes it really,
really awkward.

~~~
sametmax
HTTP/2 has been created for the big players. When you are a small team, the
benefits from it are not that interesting. Plus, mandatory TLS makes
bootstrapping a project unnecessarily hard.

~~~
fredsir
If you mean bootstrapping a webservice project, I don't believe it's hard in
any way. With offerings like Let's Encrypt, it's incredibly easy and takes no
more than 20 minutes.

~~~
dijit
Bootstrapping using Let's Encrypt requires an internet connection and a
public-facing service. I can say that 14-year-old me, who had books, a PC and
no internet, would have been screwed by this.

~~~
icebraining
If you're just using it locally, you can use a self-signed cert. Caddy
generates one automatically by just adding "self_signed" to your config[1]. Or
you can copy-paste a single OpenSSL or PowerShell command to generate one.
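
For reference, the OpenSSL one-liner is along these lines (file names and the CN are placeholders; fine for local testing, but browsers will still warn about the untrusted cert):

```shell
# Generate a self-signed cert plus key, valid for one year,
# without prompting for a passphrase (-nodes) or any details (-subj).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt \
  -days 365 -subj "/CN=localhost"
```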

[1]: [https://caddyserver.com/blog/caddy-0_9-released#easy-self-
si...](https://caddyserver.com/blog/caddy-0_9-released#easy-self-signed-
certificates)

~~~
sametmax
But this implies you can then deploy your Caddy server, which means a private
server with root rights. Do you realize most devs don't even know server
admin? If you know the command line, you are not most devs. You are far, far
away from the reality of enterprise software.

Tomcat and Apache are still massively deployed. If you moved to nginx years
ago, remember that it's still a new toy for a huge number of people. So
Caddy...

~~~
jonathanoliver
Caddy doesn't require root. You can easily use systemd/upstart/custom scripts
to run it as an unprivileged user.

