

HTTP2 [pdf] - kator
http://daniel.haxx.se/http2/http2-v1.7.pdf

======
Animats
From the article: _“The protocol is only useful for browsers and big services”
This is sort of true. One of the primary drivers behind the http2 development
is the fixing of HTTP pipelining. If your use case originally didn't have any
need for pipelining then chances are http2 won't do a lot of good for you._

This may be a big issue, and may impact net neutrality. You get better
performance if your stuff is inside a pipe from a Big Service. This makes
Google look good. It also increases the relative benefit of running everything
through a content delivery service.

This, in turn, creates a use case for CDNs which suck up all the components
needed to display a page by whatever means necessary and deliver them to the
end user over one HTTP2 connection. It then makes sense for the CDN, not the
end user page, to be the advertising insertion point. If ads are loaded from a
third-party ad server, they might not show up before the user is done viewing
the page. This sometimes happens now. With server-side control of ordering
within a single HTTP2 pipe, advertisers tied in with the CDN can be sure that
ads will appear when the advertiser wants them to appear.

So there's a net neutrality issue. When multiple streams are multiplexed over
a single pipe, the server gets to determine who goes first. Control over that
order is valuable. That control may end up in the hands of the CDN, not the
site operator.
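
A purely illustrative sketch of that ordering control (made-up resource
names, weights, and sizes; not any real server's code): whoever assigns the
weights on the multiplexed streams decides whose frames hit the wire first.

    # Hypothetical sketch: a server multiplexing several streams over one
    # HTTP2 connection and choosing which resource's frames go out first.
    import heapq

    # (resource, weight, total_bytes) -- higher weight gets more of the pipe
    streams = [
        ("cdn-preferred-ad.js", 256, 100_000),
        ("page-content.html",    64, 100_000),
        ("third-party-ad.js",     8, 100_000),
    ]

    FRAME = 16_384  # default max DATA frame payload in HTTP2

    def interleave(streams):
        """Yield (resource, frame_size) in a weight-proportional order,
        similar in spirit to HTTP2 dependency weights."""
        # virtual-time scheduling: each stream advances by frame_size / weight
        queue = [(0.0, name, weight, total) for name, weight, total in streams]
        heapq.heapify(queue)
        while queue:
            vtime, name, weight, remaining = heapq.heappop(queue)
            size = min(FRAME, remaining)
            yield name, size
            remaining -= size
            if remaining:
                heapq.heappush(queue, (vtime + size / weight, name, weight, remaining))

    for name, size in interleave(streams):
        print(f"send {size:5d} bytes of {name}")

Run it and the "preferred" ad finishes while the third-party ad is still on
its first frame, which is exactly the kind of ordering control I mean.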

~~~
cmpb
That's true unless the client is able to change or enforce the priority of
the streams, or block a stream altogether with a reset; in that case it
should still be possible to strip out the ads through some Adblock-like
mechanism.
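
Something like this, purely hypothetical (made-up blocklist and function
names): the real mechanism would be answering the server's PUSH_PROMISE with
an RST_STREAM frame, error code REFUSED_STREAM (0x7), at the framing layer.

    # Hypothetical client-side filter: refuse pushed streams whose promised
    # path looks like an ad, instead of accepting the data.
    BLOCKLIST = ("/ads/", "doubleclick", "adserver")

    def refuse_push(promised_path: str) -> bool:
        """Return True if the client should reset this pushed stream."""
        return any(pattern in promised_path for pattern in BLOCKLIST)

    # What a client might do for each PUSH_PROMISE it receives:
    promises = {2: "/static/page.css", 4: "/ads/banner-123.js"}
    for stream_id, path in promises.items():
        if refuse_push(path):
            print(f"stream {stream_id}: send RST_STREAM (refusing {path})")
        else:
            print(f"stream {stream_id}: accept pushed {path}")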

I see how this could potentially shift the ad-bearing responsibility to CDNs,
but I'm not seeing how it's any more of a net neutrality issue than what we've
got now. Could you propose a hypothetical scenario?

~~~
Animats
Whoever owns the big single pipe gets to determine what goes into it first. A
CDN can ensure that preferred ads go in early, so they always appear before
the content is fully loaded. Ads that don't go through the CDN may show up,
eventually.

There's also an encryption issue. The big pipe is one SSL/TLS session under a
single key. Whoever owns the big pipe gets to see everything. Things like
embedded Facebook content, normally encrypted with Facebook's keys, can't go
through the big pipe unless Facebook accepts the CDN looking at its traffic.

It's not clear how all this plays out, but it definitely implies more
centralization.

------
meowface
Slightly off-topic, but I know Google is working on efforts to completely
replace TCP with something specifically designed for web applications. Does
anyone know if they also plan on optimizing HTTP 2.0 to fit the new transport
protocol they're trying to design, or vice versa?

It's currently implemented as a layer over UDP:
[http://en.wikipedia.org/wiki/QUIC](http://en.wikipedia.org/wiki/QUIC)

~~~
charleslmunger
See this presentation:
[https://www.youtube.com/watch?v=hQZ-0mXFmk8](https://www.youtube.com/watch?v=hQZ-0mXFmk8)
Slides, with a quote about why QUIC is being worked on:
[https://docs.google.com/presentation/u/0/d/13LSNCCvBijabnn1S...](https://docs.google.com/presentation/u/0/d/13LSNCCvBijabnn1S4-Bb6wRlm79gN6hnPFHByEXXptk/present?slide=id.g2b4eb9937_815)

------
jonobird1
Good document and a good read. I've got a question about using separate CDNs
for image serving. The initial HTTP connection to a host takes a while,
especially from countries with high latency when there's no nearby server, so
splitting images onto separate hosting servers may actually make page loads
take longer... I'd love to hear thoughts on this, as that's always been my
suspicion.
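
A rough back-of-the-envelope with made-up numbers for what I mean: every new
image host pays DNS + TCP + TLS setup before its first byte, while another
request on an already-open connection pays roughly one round trip.

    # Made-up RTT and handshake counts, just to show the order of magnitude.
    RTT = 0.300                # seconds, a distant user with no nearby server
    DNS, TCP, TLS = 1, 1, 2    # round trips: lookup, TCP handshake, TLS 1.2

    def first_byte_new_host(rtt=RTT):
        return (DNS + TCP + TLS + 1) * rtt   # +1 RTT for the request itself

    def first_byte_warm_connection(rtt=RTT):
        return 1 * rtt                       # request on an open connection

    print(f"new image host:   {first_byte_new_host():.2f} s to first byte")
    print(f"warm connection:  {first_byte_warm_connection():.2f} s to first byte")

At 300 ms that's 1.5 s of setup per extra host before a single image arrives,
which seems to wipe out whatever you gain from the extra parallelism.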

------
mdaniel
Thanks for a great document.

I think in the future all protocol diagrams should use colored lego blocks.
That was fantastic.

------
0x0
I wonder how many experimental deployments we're missing out on due to the
pay-to-play mandatory SSL CA regime.

~~~
ehPReth
[https://letsencrypt.org/](https://letsencrypt.org/) is a promising new
solution to the 'pay-to-play' system

~~~
codexon
But what if the organization behind it goes bankrupt, or some browser removes
the CA because one of the sites it issued for hosts malware?

I think this whole CA system is needlessly complex. Your registrar should be
providing you with a free certificate for your domain and that should be the
end of the hassle.

~~~
schoen
I'm working on Let's Encrypt, and I'd be happy to see domain registrars reduce
the need for Let's Encrypt by issuing cryptographic credentials to domain
registrants.

Every CA that issues DV certs for public DNS names takes registrars' databases
as the ultimate ground truth about domain ownership -- at least for the
domains that the CA is willing to issue for -- so the DV-cert-issuing world is
reliant on them to be correct, secure, up-to-date, and so on. That's true
whether the CA is using whois data plus DNS data, or just DNS data, to verify
domain control.

(In saying that, I thought about the idea that Let's Encrypt may use
safeguards to limit issuance based on historical observations of domain
control and prior issuance history by other CAs. For example, our draft ACME
spec has a mechanism where we could ask a requestor to prove control of an
existing subject key from a cert that we know was issued for the same subject
domain by another CA. So if we've already seen a valid cert in the wild, or in
Certificate Transparency, for example.com, we could say that you have to show
that you have control of the key in that cert before you can get a new cert
from Let's Encrypt for example.com. But all of those things ultimately go back
to what registrars said in the past, even if some of them are independent of
what registrars say today.)
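
Roughly the shape of that check, with made-up function names and assuming an
RSA key for simplicity (the actual draft ACME challenge has its own message
format): the CA sends a fresh nonce, the requestor signs it with the private
key matching the public key in a cert we already know about for the domain,
and we verify the signature against that cert's public key.

    # Illustrative sketch only -- not the ACME wire format.
    import os
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def make_challenge() -> bytes:
        """CA side: a fresh nonce for the requestor to sign."""
        return os.urandom(32)

    def respond(challenge: bytes, existing_private_key) -> bytes:
        """Requestor side: sign the nonce with the key from the earlier cert."""
        return existing_private_key.sign(
            challenge, padding.PKCS1v15(), hashes.SHA256()
        )

    def verify(challenge: bytes, signature: bytes, known_cert: x509.Certificate) -> bool:
        """CA side: check the signature against the public key in the cert we
        already saw (e.g. via Certificate Transparency) for this domain."""
        try:
            known_cert.public_key().verify(
                signature, challenge, padding.PKCS1v15(), hashes.SHA256()
            )
            return True
        except Exception:
            return False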

