
HTTP/2 Frequently Asked Questions - xngzng
http://http2.github.io/faq/
======
teddyh
Here is a question I hoped would be answered, but it wasn’t: Why doesn’t the
HTTP/2 protocol use SRV DNS records¹? Using them would solve some real problems:

1. The well-known resilience of mail can be largely attributed to its use of
MX records to specify the use of backup servers and/or load balancing. SRV
records are essentially a generalization of MX records for any service, not
just mail, so using SRV records would bring this benefit to any protocol which
uses them.

2. The problem of “www” or no “www” would be entirely eliminated; no need for
non-standard so-called ANAME or ALIAS “records”. Just imagine – if, in an
alternate universe, mail servers _hadn’t_ used MX records, it would then have
been necessary to have an A record on the bare domain name, and later, when
the Web was created in this alternate universe, the web server would have to
be the _same_ server as the mail server, since both now would use the bare
domain name. In our universe, this position is now solely occupied by the HTTP
protocol, which causes lots of pain in the DNS when redundant (and harder to
update) A records need to be put on the bare domain name just for HTTP. Not to
mention IPv6, which, for HTTP to work, requires an _additional_ AAAA record on
the bare domain.

3. The “everything is on port 80, let’s reinvent everything on top of HTTP”
problem could certainly have been entirely avoided, and could be slowly fixed,
by using SRV records. (A server concerned about clients needing to traverse
old-style restrictive firewalls could provide a low-priority port 80 server as
a fallback.)

¹ _A DNS RR for specifying the location of services (DNS SRV)_, [https://tools.ietf.org/html/rfc2782](https://tools.ietf.org/html/rfc2782)
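
For illustration, here is roughly what client-side SRV resolution looks like, as a minimal Go sketch. The "_http._tcp" service label is hypothetical, since no HTTP standard ever sanctioned SRV lookups; net.LookupSRV is the standard-library call that performs the RFC 2782 lookup:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Resolves _http._tcp.example.com. Results come back sorted by
        // priority, with weights for load balancing among equal
        // priorities -- the mechanism MX records give mail, generalized.
        cname, addrs, err := net.LookupSRV("http", "tcp", "example.com")
        if err != nil {
            fmt.Println("no SRV record; a client would fall back to A/AAAA:", err)
            return
        }
        fmt.Println("canonical name:", cname)
        for _, srv := range addrs {
            fmt.Printf("target=%s port=%d priority=%d weight=%d\n",
                srv.Target, srv.Port, srv.Priority, srv.Weight)
        }
    }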

~~~
Lukasa
> Why doesn't the HTTP/2 protocol use SRV DNS records?

This was discussed by the WG. The best rationale is given in the thread
starting from this mail[0] from Will Chan at Google, followed up by this
one[1] from Amos Jeffries (Squid Proxy). The summary is: waiting for SRV
lookups can be slower than just doing straight-up A records, so no client in a
competitive space is going to do it.

Everyone agrees that in an ideal world this would be great, but in practice
no-one is going to be the first person to move on that front. The wins aren't
big enough and the cost is too high.

[0]: [http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/...](http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/0788.html)

[1]: [http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/...](http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/0797.html)

~~~
nl
Interesting.

Reading that, I'm not sure the problem is that SRV lookups themselves are
slower, but that forcing clients to do an SRV lookup, wait, and only then issue
A/AAAA queries is slower:

 _If I own a popular client and I switch to mandatory SRV lookup, and don't
issue A and AAAA queries until the SRV lookup fails, then for the services and
paths where SRV lookup doesn't work, user experience will be vastly
degraded._ [1]

However, it appears that making SRV optional remains a possibility:

 _Note that none of what I've said necessarily applies to optionally doing
SRV and using it when the results are available._

[1] [http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/...](http://lists.w3.org/Archives/Public/ietf-http-wg/2014JanMar/0788.html)
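
To make the "optional SRV" idea concrete, here's a rough Go sketch of a client that starts the SRV and A/AAAA lookups in parallel and only uses the SRV answer if it arrives quickly. The 50ms grace period and the "_http._tcp" label are illustrative assumptions, not anything from the spec or the thread:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // resolve returns a host:port to connect to, preferring an SRV answer
    // when one shows up fast enough and falling back to plain address
    // records otherwise, so a missing SRV record costs (almost) nothing.
    func resolve(host string) (string, error) {
        type srvResult struct {
            addrs []*net.SRV
            err   error
        }
        srvCh := make(chan srvResult, 1)
        go func() {
            _, addrs, err := net.LookupSRV("http", "tcp", host)
            srvCh <- srvResult{addrs, err}
        }()

        // The ordinary lookup proceeds in parallel with the SRV query.
        ips, ipErr := net.LookupIP(host)

        select {
        case r := <-srvCh:
            if r.err == nil && len(r.addrs) > 0 {
                return net.JoinHostPort(r.addrs[0].Target, fmt.Sprint(r.addrs[0].Port)), nil
            }
        case <-time.After(50 * time.Millisecond):
            // SRV is slow; don't degrade the user experience waiting for it.
        }
        if ipErr != nil {
            return "", ipErr
        }
        return net.JoinHostPort(ips[0].String(), "80"), nil
    }

    func main() {
        addr, err := resolve("example.com")
        fmt.Println(addr, err)
    }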

~~~
Lukasa
Correct, but you don't need the RFC to make SRV optional. If you'd like to use
it, use it.

~~~
teddyh
No, on the contrary, the SRV standard specifies that clients SHOULD NOT use
SRV records for protocols for which the standards do not explicitly allow for
SRV records.

------
protonfish
Under the "Why Revise HTTP" [http://http2.github.io/faq/#why-revise-
http](http://http2.github.io/faq/#why-revise-http) section the summary is
basically "So we don't have to be as mindful of the number of HTTP requests"
Is this really a compelling reason? I would argue it is not, especially if we
have to trade a human-readable and easy to troubleshoot text-based protocol
with a binary one. It seems like we are solving a small problem by creating a
larger one.

~~~
ubernostrum
The correct answer is that HTTP/2 has to do this because Google is unable to
show you a simple search box without making over a dozen HTTP requests.

Their more involved properties require dozens to even hundreds of requests in
order to perform complex tasks like showing a list of plain text posts.

And, of course, they can't _possibly_ impose any kind of discipline on the way
they do things, so instead the entire internet has to change to accommodate
them.

~~~
zaphar
My plain-text, no-animations, statically generated site makes 5 connections to
load its content:
[http://jeremy.marzhillstudios.com](http://jeremy.marzhillstudios.com)

Now I _could_ optimize that even further by inlining everything with data:
URIs, but that would impact caching and be strictly worse for the network.

Any property from Google, even web search, has fancy transitions, animations,
and dynamic content. And Google has every incentive to limit the number of
connections to their pages. When I worked there, I worked on some of those
technologies specifically to avoid making too many connections.

Blaming Google for not being disciplined enough disregards their economic
incentives to _be_ disciplined enough. They invested a lot of energy in this
before SPDY came out. There comes a time where you just have to recognize that
the protocol is fighting against you and either give up or evolve the
protocol.

------
shurcooL
Brad Fitzpatrick is working on a Go implementation of HTTP/2 at
[https://github.com/bradfitz/http2](https://github.com/bradfitz/http2).
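
Assuming the ConfigureServer helper the repo exposes works as advertised (the API is still moving), wiring it into a stock net/http server is only a couple of lines. A sketch; the cert and key paths are placeholders:

    package main

    import (
        "log"
        "net/http"

        "github.com/bradfitz/http2"
    )

    func hello(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello over HTTP/2\n"))
    }

    func main() {
        srv := &http.Server{Addr: ":443", Handler: http.HandlerFunc(hello)}
        // Registers the package's TLS next-protocol hooks on the server.
        http2.ConfigureServer(srv, &http2.Server{})
        // HTTP/2 is negotiated during the TLS handshake, so serve TLS;
        // cert.pem/key.pem are placeholder file names.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }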

------
fletchowns
I'm using nginx with SPDY and I've noticed that if I start downloading a file,
I can no longer load pages on the website. I get an error message in Firefox
saying "The connection was interrupted". If I do a ctrl+F5 then I can continue
browsing the site. Does this have to do with the "one TCP connection"?

~~~
mankyd
It should not. That sounds like a bug or configuration problem somewhere.

SPDY and HTTP/2 can multiplex their data streams across a single connection.
That is to say, they can receive two or more data streams on the same TCP
connection at the same time.
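
The mechanism is visible right in the framing layer: every frame carries a stream identifier in a fixed 9-byte header, so frames from different streams can interleave freely on one connection. A minimal decoder for that header, following the layout in the current HTTP/2 drafts (Go, for illustration):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // frameHeader mirrors the fixed 9-byte header that precedes every
    // HTTP/2 frame: 24-bit payload length, 8-bit type, 8-bit flags, and a
    // 31-bit stream identifier (stream 0 addresses the whole connection).
    type frameHeader struct {
        Length   uint32
        Type     uint8 // e.g. 0x0 DATA, 0x1 HEADERS
        Flags    uint8
        StreamID uint32
    }

    func parseFrameHeader(buf [9]byte) frameHeader {
        return frameHeader{
            Length:   uint32(buf[0])<<16 | uint32(buf[1])<<8 | uint32(buf[2]),
            Type:     buf[3],
            Flags:    buf[4],
            StreamID: binary.BigEndian.Uint32(buf[5:9]) & 0x7fffffff,
        }
    }

    func main() {
        // A 16-byte DATA frame on stream 3; a frame for stream 5 could
        // follow immediately on the same TCP connection.
        h := parseFrameHeader([9]byte{0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03})
        fmt.Printf("%+v\n", h)
    }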

~~~
fletchowns
Which configuration parameters should I be looking at?

------
bigbango
In my opinion HTTP/2 is too complex.

I think the better route is to investigate how HTTP/1.1 could be layered atop
a multiplexing transport protocol like SCTP.

It might take 10 years to reach wide deployment, but HTTP/1.1 over TCP is
working just fine anyway.

~~~
wmf
We know SCTP is less deployable. What benefit does it have?

And as the FAQ says, if you don't use header compression then response headers
alone will take many RTTs to transmit due to slow start.

~~~
bigbango
It has the benefit of giving us separate protocols for the transport and
application layers. And of already existing, though it isn't widely deployed
yet.

But it would allow multiple HTTP requests to be multiplexed over a single
connection.

SCTP also uses slow-start congestion control, but since it would multiplex
the HTTP requests over a single connection there would only be one initial
slow start for all the requests.

~~~
wmf
_there would only be one initial slow start for all the requests_

Which is worse than HTTP/1.1. Avoiding that problem is why HTTP/2 uses HPACK.
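
To put a rough number on it, here's a sketch using the hpack package from the Go x/net repo. The sizes are illustrative, not a benchmark; common fields hit HPACK's static table, and repeated custom headers land in the dynamic table:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)

        // A typical request's headers.
        headers := []hpack.HeaderField{
            {Name: ":method", Value: "GET"},
            {Name: ":scheme", Value: "https"},
            {Name: ":path", Value: "/"},
            {Name: ":authority", Value: "example.com"},
            {Name: "accept-encoding", Value: "gzip, deflate"},
        }
        plain := 0
        for _, hf := range headers {
            enc.WriteField(hf)
            plain += len(hf.Name) + len(hf.Value) + 4 // rough "name: value\r\n" cost
        }
        fmt.Printf("plain-ish: %d bytes, hpack-encoded: %d bytes\n", plain, buf.Len())
    }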

~~~
bigbango
Why worse? Wouldn't multiple HTTP/1.1 requests sharing a persistent TCP
connection also have only one initial slow-start phase?

Maybe the SCTP multiplexing/parallelization of the requests, and thus of the
initial headers, affects this negatively, since more headers would be
transferred during the single initial slow start.

If the delay caused by slow start is a big problem, one could add header
compression to HTTP/1.x and run it over SCTP.

I understand that a transport protocol designed especially for, and embedded
in, HTTP/2 will be better optimized than a generic one like SCTP. But my
argument is that a multiplexing transport protocol like SCTP could be good
enough, and usable for more than HTTP/2. And the focus could then be on
simplifying HTTP instead.

------
andrewstuart2
> Spriting, data-inlining, domain sharding, and concatenation. These hacks are
> indications of underlying problems in the protocol itself.

While this is true, I don't think HTTP/2 will or should negate some of these.
Precomputing things like minification and concatenation will certainly reduce
load and always increase capacity. Human-readable code is only useful for
humans; during transfer, storage, parsing, etc., it's just plain inefficient.
Also, with spriting and concatenation, fewer files means more of the
information you know you need is in the same place on disk or in memory,
speeding access at a low level.

------
thedufer
This page appears to be Twitter branded at certain window sizes. Specifically,
if the Twitter icon in the header happens to show up more or less centered, it
looks just like the header on Twitter (when you're logged in). Maybe use text
for that link, or have another logo in the header somewhere?

------
Animats
Amusingly, many of the design decisions of HTTP/2 were made the same way in
Macromedia Flash. Binary representation and multiplexing of streams to improve
the user experience are both used in Flash.

~~~
ender7
Approximately half of Flash's design decisions and APIs were extremely well-
designed and well-engineered. There are still many things that Flash does
better than pure HTML and their view framework API was actually quite lovely.

That said...the other 50% was frequently so bad that it more than cancelled
out the benefit on the other side. Which is why no one uses Flash anymore [1].

[1] Everyone still uses Flash.

------
carsonreinke
Never understood the re-use of the "http://" scheme; why not use "http2://"
instead?

~~~
parasubvert
Because that would break every hyperlink on the web, and would create a
separate web with content that existing clients or intermediaries would be
excluded from. Much more evolvable to keep the wire protocol version
orthogonal to URI.

Further, technically speaking, the URL scheme has little to do with the
protocol used; it is merely a way of linking a hierarchical namespace to a
spec (or several) that tells a developer how to resolve it. That means we have
a lot of flexibility in terms of protocol (in some cases we don't even need a
protocol defined at all; mailto: URIs, for example).

~~~
jimktrains2
> Because that would break every hyperlink on the web,

Or you just run two servers (or a single server that speaks both), you know,
like you'll have to for the next 30 years anyway.

> and would create a separate web with content that existing clients or
> intermediaries would be excluded from.

If you have an HTTP2-only site, aren't you doing that anyway?

> Much more evolvable to keep the wire protocol version orthogonal to URI.

> Further, the URL scheme has little to do with the protocol used

That first part, before the colon, is a protocol descriptor. It would be
perfectly reasonable to change the protocol descriptor.

What should have been done was to use the Upgrade header (like WebSockets) or
to specify the use of SRV records.
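
For what it's worth, the HTTP/2 drafts do define an Upgrade-based path for cleartext connections (the "h2c" token). A rough Go sketch of the client side of that handshake; the empty HTTP2-Settings value stands in for a base64url-encoded SETTINGS payload (an empty payload encodes to the empty string):

    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // An ordinary HTTP/1.1 request offering to upgrade to cleartext
        // HTTP/2.
        fmt.Fprint(conn, "GET / HTTP/1.1\r\n"+
            "Host: example.com\r\n"+
            "Connection: Upgrade, HTTP2-Settings\r\n"+
            "Upgrade: h2c\r\n"+
            "HTTP2-Settings: \r\n"+
            "\r\n")

        // A willing server replies "HTTP/1.1 101 Switching Protocols" and
        // continues in HTTP/2; anything else means stay on HTTP/1.1.
        status, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Print(status)
    }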

~~~
wpietri
The only way an upgrade this big will happen is having a single server that
speaks both, one that's a drop-in replacement. In which case, there's no point
in breaking reverse compatibility with every HTTP-capable device on the planet
just to make some fussbudgets happy that the URL is "more correct" in some
obscure technical sense.

As a counterexample, look at how long we've been working on IPv6. [1] We've
been working on that for 20 years, and we're not even close.

And all that aside, URLs aren't primarily API. They're UI. You don't go
messing with a UI that billions of people use without a strong reason to do
so.

[1] [http://www.ipv6.com/articles/general/timeline-of-ipv6.htm](http://www.ipv6.com/articles/general/timeline-of-ipv6.htm)

~~~
jimktrains2
> The only way an upgrade this big will happen is having a single server that
> speaks both, one that's a drop-in replacement. In which case, there's no
> point in breaking reverse compatibility with every HTTP-capable device on
> the planet just to make some fussbudgets happy that the URL is "more
> correct" in some obscure technical sense.

Why would this preclude using a different protocol in the URI?

> And all that aside, URLs aren't primarily API. They're UI. You don't go
> messing with a UI that billions of people use without a strong reason to do
> so.

You mean the UI that is hidden by most browsers by default now?

~~~
wpietri
Hi! You get one round of point-sniping. After this, make a coherent point if
you want a reply.

It wouldn't _preclude_ using a different protocol. It means that there's no
point. If every HTTP/2 server is also an HTTP/1.1 server offering the same
content then there is no practical need to distinguish the protocols.

As to URL-hiding, you know that's controversial, and for good reason. Their
theory is that a lot of people don't normally look, and that could even be
true. Many always will, though.

More importantly, that's not the only place that people use links. Every time
they share one via email, for example. Every time they type one in from a
magazine or the side of a bus or a flyer. Every time they cite one in a paper.

Forcing those people to suddenly include a protocol distinction that is
meaningless to them and unimportant to the server involved would be a
pointless waste. HTTP/2 is purely an under-the-hood improvement. They didn't
have to change the URL for SPDY, and they don't have to change it for this.

~~~
jimktrains2
> If every HTTP/2 server is also an HTTP/1.1 server offering the same content
> then there is no practical need to distinguish the protocols.

But not every HTTP/1.1 server will be an HTTP/2 server; that's my point.

> As to URL-hiding, you know that's controversial, and for good reason.

I know; I hate it. I brought it up because most people evidently don't care
enough about it that we should overload a scheme specifier for the sake of
aesthetics that don't matter.

> More importantly, that's not the only place that people use links. Every
> time they share one via email, for example. Every time they type one in from
> a magazine or the side of a bus or a flyer. Every time they cite one in a
> paper.

OK, some people cite ftp:// links. I'm not seeing your point.

> Forcing those people to suddenly include a protocol distinction that is
> meaningless to them and unimportant to the server involved would be a
> pointless waste.

So we shouldn't use ftp:// or smb:// anymore? I don't get your point, because
people are already used to non-http schemes.

> HTTP/2 is purely an under-the-hood improvement.

I mean, you could say the same thing if all your assets were served over FTP
and the browser rendered them as it does HTTP. HTTP/2 is _fundamentally_
different from HTTP/1.1.

------
jokoon
there are some serious problems with current http use. it's almost gore.

~~~
jimktrains2
And implementing TCP at Layer 7 isn't?

~~~
jokoon
what do you mean?

~~~
jimktrains2
HTTP2 implements stream multiplexing and flow control inside itself, instead
of relying on the underlying networking protocol.
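
Concretely, the drafts give every stream (and the connection as a whole) its own flow-control window: the sender debits it for each DATA frame and the receiver replenishes it with WINDOW_UPDATE frames, which is more or less what TCP's receive window already does one layer down. A toy sketch of that bookkeeping:

    package main

    import "fmt"

    // stream tracks the sender-side flow-control window for one HTTP/2
    // stream; the spec's initial window size is 65535 bytes.
    type stream struct {
        id     uint32
        window int32
    }

    func (s *stream) sendData(n int32) error {
        if n > s.window {
            return fmt.Errorf("stream %d: DATA would exceed flow-control window", s.id)
        }
        s.window -= n
        return nil
    }

    // windowUpdate applies a WINDOW_UPDATE frame from the receiver.
    func (s *stream) windowUpdate(increment int32) {
        s.window += increment
    }

    func main() {
        s := &stream{id: 1, window: 65535}
        if err := s.sendData(60000); err != nil {
            fmt.Println(err)
        }
        fmt.Println("window after send:", s.window)
        s.windowUpdate(60000) // receiver consumed the data and re-opened the window
        fmt.Println("window after update:", s.window)
    }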

~~~
jokoon
why is that so bad?

I mean, multiplexing, why not; but flow control is more the job of TCP, I
agree with that.

But if the goal is network performance and relieving stress on servers,
routers, and wireless internet phone equipment, I don't think it's such a bad
idea. If CPUs are faster and have more cores, it might be good to let the CPU
do more.

What I was saying is that the current problems this link is talking about,
the ones HTTP would try to solve, are quite gore.

~~~
jimktrains2
To me the issue with multiplexing is that it's multiplexing "streams", not
simply sending multiple files.

(There isn't complete support for MIME multipart in browsers today, but if
that were fixed it could be useful for "pushing" content from the server.)

> What I was saying is that the current problems this link is talking about,
> the ones HTTP would try to solve, are quite gore.

I just don't think that HTTP2 (that name still kills me, as this protocol is
as much HTTP as clean coal is clean) really makes for a cleaner protocol. I
think its being sold as HTTP is what really bothers me to the core -- it's not
a replacement for HTTP; it's a Layer 4 protocol implemented at Layer 7. That's
fine, I guess, but I don't think it's inherently "simpler" than a
request-response, text-based protocol (which, admittedly, has been used to
kludge a lot of things).

Also, I was hoping HTTP would have solved actual issues, and not just those
faced by heavy-weight websites. Issues such as:

    * Better authentication
    * More secure caching
    * Better methods to find alternate download locations
    * Keeping the protocol simple
    * Making each request contain less information about the sender
    * Improved metadata

(I get into those a little more here:
[https://github.com/jimktrains/http_ng](https://github.com/jimktrains/http_ng))

