

HTTP/2.0 Initial Draft Released - bpedro
http://apiux.com/2013/07/23/http2-0-initial-draft-released/

======
inopinatus
Why didn't they use SRV[1] records in DNS to resolve http2 requests? They
have so many advantages:

  * Permitted at the domain apex (yes really! unlike CNAMEs!)
  * Allows weighted round-robin
  * Allows lower-priority fallback services
  * Unusual port numbers no longer required in URIs
  * Doesn't get confused with non-HTTP services located at the same FQDN.

It's the modern way to federate services! And there's very wide DNS server
support - everything from BIND to Active Directory.
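
The lookup the comment envisions might look like the following zone
fragment (the `_http2` service label, host names, weights, and ports are
all hypothetical, sketched from RFC 2782's priority/weight/port/target
field layout):

```
; Hypothetical SRV records for an HTTP/2 service at example.com.
; Fields after "SRV": priority weight port target
_http2._tcp.example.com. 3600 IN SRV 10 60  443 a.example.com.
_http2._tcp.example.com. 3600 IN SRV 10 40  443 b.example.com.      ; weighted round-robin
_http2._tcp.example.com. 3600 IN SRV 20  0 8443 backup.example.com. ; lower-priority fallback
```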

Fortunately, neither the standard nor (as far as I can see) the normative
references actually says you have to use an A-type record. Unfortunately,
that will remain the convention unless someone makes this easy but explicit
change.

I'd get involved but I fear the politics. Would I have any chance of being
able to advocate for this change?

[1]
[http://en.wikipedia.org/wiki/SRV_record](http://en.wikipedia.org/wiki/SRV_record)

~~~
hobohacker
[https://code.google.com/p/chromium/issues/detail?id=22423](https://code.google.com/p/chromium/issues/detail?id=22423)
discusses some of the issues with using SRV records in browsers.

SRV records could help clients discover server-side HTTP/2 support, but
that does not mean that all intermediaries along the path support it.

~~~
inopinatus
Kinda moot; that thread is a discussion of what happens if it isn't explicitly
in the standard.

------
judofyr
> Another new concept is the ability for either side to push data over an
> established connection. While the concept itself is hardly revolutionary —
> this is after all how TCP itself functions – bringing this capability to the
> widespread HTTP world will be no small improvement and may help marry the
> simplicity of an HTTP API with the fully-duplexed world of TCP. While this
> is also useful for a server-to-server internal APIs, this functionality will
> provide an alternative to web sockets, long polling, or simply repeated
> requests back to the server – the traditional three ways to emulate a server
> pushing live data in the web world.

As far as I know, this is not true. Server Push is only for the server and can
only be done as a response to a request. It's not a WebSocket alternative.

Server Push means that when a client sends a request (GET /index.html), the
server can respond with responses for multiple resources (e.g. /index.html,
/style.css and /app.js can all be sent). This means the client doesn't have
to explicitly GET those resources, which saves bandwidth and reduces
latency.
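
The one-request, several-responses flow described above can be sketched as
a toy model (this is not a real HTTP/2 stack; the resource contents and the
push map are made-up examples):

```python
# Toy sketch of HTTP/2 server push semantics: the server answers one GET
# with the requested resource plus the resources it promised to push.

RESOURCES = {
    "/index.html": b"<html>...</html>",
    "/style.css": b"body { margin: 0 }",
    "/app.js": b"console.log('hi');",
}

# Resources the server promises to push alongside a given request.
PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}

def handle_request(path):
    """Return (promised_paths, responses) for one client GET."""
    promised = PUSH_MAP.get(path, [])
    # One request in, several responses out: the original response plus
    # one pushed response per promise.
    responses = {p: RESOURCES[p] for p in [path] + promised}
    return promised, responses

promised, responses = handle_request("/index.html")
# The client issued a single GET but received three resources.
```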

~~~
ablankst
If you check out the draft ([http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04](http://tools.ietf.org/html/draft-ietf-httpbis-http2-04)), it looks
like either the client or the server is able to initiate and send data along
the HTTP/2.0 streams.

And in Section 2.2:

> HTTP/2.0 provides the ability to multiplex multiple HTTP requests and
> responses onto a single connection. Multiple requests or responses can be
> sent concurrently on a connection using streams (Section 5).

This requires supporting client-initiated requests over an established
connection.

------
fenesiistvan
Microsoft already released and open sourced server code which (partially)
supports HTTP 2:
[http://blogs.msdn.com/b/interoperability/archive/2013/07/29/...](http://blogs.msdn.com/b/interoperability/archive/2013/07/29/start-
testing-with-first-implementation-of-ietf-http-2-0-draft-from-ms-open-
tech.aspx)

------
jimktrains2
I think that the changes being made for "HTTP 2" are a terrible decision for
HTTP. For SPDY, sure, make it as complex and as hard to work with as you want
in the name of performance, but please keep my HTTP a nice, simple, text-based
protocol that I can work with very easily.

I just feel that HTTP should not reïmplement TCP. SPDY/HTTP2 just seems much
more complex than necessary.

[http://jimkeener.com/posts/http](http://jimkeener.com/posts/http) is a
90%-complete post on what I would like to see as HTTP 1.2, plus some other
things I think would be beneficial.

~~~
chacham15
I actually don't like a lot of what is on that page. For example, he says to
remove the User-Agent header. Without that,
[https://www.dropbox.com/downloading](https://www.dropbox.com/downloading)
wouldn't work (where they can give you the correct download and show you
pictures of how to access/install it). Furthermore, the Date header is used
very successfully for caching operations in many cases. Moreover, the post
points out problems I agree exist (such as the cookie kludge) but offers no
good replacement/solution for them. The solution given isn't adequate,
because the problem cookies are trying to solve is keeping servers
stateless. However, servers cannot trust the clients, and so have to resort
to nasty things like HMAC'ing the cookies and other easy-to-mess-up
security details.

~~~
jimktrains2
> For example: he says to remove the User-Agent header. Without that
> [https://www.dropbox.com/downloading](https://www.dropbox.com/downloading)
> wouldn't work (where they can give you the correct download and show you
> pictures of how to access/install it).

There is no good reason to do UA sniffing. That page could simply provide you
one of 3 (or more) options to select.

> Furthermore, the Date header is very successfully used for caching
> operations in many cases.

Date headers are not useful for that purpose. Expiration would be based on the
time of the UA, not the one given by a server.

> Moreover, it suggests problems that I see (such as the cookie kludge) but
> not a good replacement/solution for it.

A session identifier, or client-side storage until the data is actually
needed. The session identifier is not the best solution, but I believe it
is a step towards a better system. Eventually I would like to see it
removed.

> However, they cannot trust the clients and so have to resort to nasty things
> like hmac'ing the cookies and more easy to mess up security details.

You should never trust anything given to you from a client. If I send you a
product list, that product list should be opaque ids. The session should be
ephemeral and not matter anyway, so there is no reason for it to be signed.

~~~
chacham15
> There is no good reason to do UA sniffing. That page could simply provide
> you one of 3 (or more) options to select.

So, giving people the correct file instead of making them know what they
need (which many people don't... especially if it is browser specific) is
not a good reason? What if a server wants to provide a client with its
native byte order (for RPC, for example); shouldn't that be allowed?

> Date headers are not useful for that purpose. Expiration would be based on
> the time of the UA, not the one given by a server.

Tell that to my browser, which countless times doesn't fetch a new file
because it has a cached copy. Furthermore, if the headers are stored with
the cached copy, there is no server/client time problem, because you can
calculate the difference between server time and client time.
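
That difference-based adjustment can be sketched like this (the header
values are illustrative; the point is that the freshness lifetime is
computed against the server's Date header and then re-anchored to the
client's own clock, so absolute clock skew cancels out):

```python
# Sketch of the skew correction described above.
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Two headers from the same (hypothetical) response.
server_date = parsedate_to_datetime("Tue, 30 Jul 2013 12:00:00 GMT")
expires = parsedate_to_datetime("Tue, 30 Jul 2013 13:00:00 GMT")

# Freshness lifetime according to the server's own clock: one hour.
lifetime = expires - server_date

# Anchor that lifetime to the client's local receive time, so any
# constant offset between the two clocks drops out.
received_at = datetime.now(timezone.utc)
expires_locally = received_at + lifetime
```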

> The session should be ephemeral and not matter anyway, so there is no reason
> for it to be signed.

This is correct in principle, but not in practice. Oftentimes the server
isn't a single server, but rather a set of load-balanced servers. When this
happens, it is hard to keep track of client state, because a client might
get load-balanced to another machine on its next request. Therefore, client
state is kept with the client (and signed to make sure that it is
legitimate). This sort of behavior has become necessary, although it could
be dealt with better by some sort of standard (esp. one ensuring the
protection of the cookie, et cetera).
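
A minimal sketch of that signed-cookie pattern, assuming every server in
the load-balanced pool shares one secret key (the key and cookie payload
here are made up):

```python
# Servers keep no per-client state; a tampered cookie fails verification.
import hashlib
import hmac
from typing import Optional

SECRET = b"key-shared-by-all-load-balanced-servers"  # illustrative only

def sign(value: str) -> str:
    """Append an HMAC-SHA256 tag so any server in the pool can verify."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(cookie: str) -> Optional[str]:
    """Return the payload if the tag checks out, else None."""
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the MAC via timing.
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("user_id=42")
```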

~~~
jimktrains2
> So, giving people the correct file instead of making them know what they
> need (which many people don't... especially if it is browser specific) is
> not a good reason?

No, it is not. Give the user the option of what to download. What if I want
the Windows version even though I'm running Linux?

> What if a server wants to provide a client with native byte order (for
> RPC, for example); shouldn't that be allowed?

RPC should have a standard byte order defined.
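
The point above can be shown in a couple of lines: a wire format that pins
one byte order (network/big-endian, `!` in `struct` notation) means neither
host's native endianness ever matters (the value here is just an example):

```python
# Fixed network byte order on the wire, regardless of host endianness.
import struct

value = 0x01020304
wire = struct.pack("!I", value)   # always big-endian on the wire
# The receiving side unpacks with the same fixed byte order.
decoded = struct.unpack("!I", wire)[0]
```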

> Tell that to my browser which countless times doesn't fetch a new file
> because it has a cached copy.

That's based on cache control, which isn't affected by the Date header sent by
the server.

> Often times, the server isn't a single server, but rather a set of load-
> balanced servers.

I know; I've set these systems up before. There shouldn't be anything of
consequence stored on the client. So what if someone changes the session
ID? Does it really matter? If someone has someone else's ID, they probably
have the signature too. Against random guessing, using random IDs goes a
very long way. Also, there doesn't always need to be a session: there are
now ways to store data on the client that don't require sending it back and
forth to the server on every request.
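
The "random IDs go a very long way" point is easy to illustrate:
identifiers drawn from a large random space make blind guessing infeasible
even without a signature (the sizes chosen here are just examples):

```python
# Unguessable opaque identifiers from a cryptographic RNG.
import secrets

session_id = secrets.token_urlsafe(32)    # 256 bits of randomness
opaque_product_id = secrets.token_hex(16)  # 128-bit opaque product ID
```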

------
asm89
The draft was released earlier this month. There was an interesting discussion
about it back then too:
[https://news.ycombinator.com/item?id=6012525](https://news.ycombinator.com/item?id=6012525)

At the same time I also submitted another article that I still think is
interesting and relevant today:
[https://news.ycombinator.com/item?id=6014976](https://news.ycombinator.com/item?id=6014976)

------
cm3
Does [http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#secti...](http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#section-4.1) [http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#secti...](http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#section-4.2) and [http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#secti...](http://tools.ietf.org/html/draft-ietf-httpbis-
http2-04#section-9.1) mean sendfile(2) can't be used with HTTP/2.0?

~~~
ealexhudson
Not entirely. You have TCP_CORK to allow headers to be stuck in front;
sendfile can also take ranges so you don't blow the frame limits. I would
imagine that kind of setup is more trouble than it's worth, though (is
sendfile(2) still the fastest way of doing things? I thought it had been
superseded anyway...)
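
The corked/ranged approach could look roughly like this (a Linux-only
sketch; the `header` argument is a stand-in for a frame header, not real
HTTP/2 framing): cork the socket, write the header, sendfile a byte range
of the body, then uncork to flush coalesced segments.

```python
# Linux-only sketch of combining TCP_CORK with sendfile(2).
import os
import socket

def send_framed_file(sock: socket.socket, header: bytes, fd: int,
                     offset: int, length: int) -> None:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)  # hold partial segments
    try:
        sock.sendall(header)  # e.g. a frame header for the data that follows
        sent = 0
        while sent < length:
            # Zero-copy transfer of the requested byte range.
            n = os.sendfile(sock.fileno(), fd, offset + sent, length - sent)
            if n == 0:
                break  # EOF before `length` bytes: stop
            sent += n
    finally:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)  # flush
```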

~~~
cm3
You'd context-switch to/from the kernel far more often with small ranges.
sendfile on Solaris, Linux, and BSD, and TransmitFile on Windows, allow
much larger ranges in one call.

What's the replacement for sendfile(2)? Solaris has sendfilev, which is
still pretty much the same thing, and sendfile(2) on Linux uses splice(2),
vmsplice(2), and tee(2) internally, but I don't know of a replacement.

~~~
infofarmer
If needed, sendfile will most probably just learn enough to handle framing
on its own, with a few more parameters passed into it.

------
X4
I think HTTP/2.0 should break backward compatibility and take a more
advanced step than little improvements like these. Killing TCP/IP
completely and inventing a more efficiently compressed, more
government-resistant, and more easily encryptable protocol would be highly
welcome. The reason is that even adopting HTTP/2.0 in its current state
would take at least a decade.

Here's stuff that backs my argument:

a) [http://rina.tssg.org/docs/PSOC-
MovingBeyondTCP.pdf](http://rina.tssg.org/docs/PSOC-MovingBeyondTCP.pdf)

b)
[http://users.ece.cmu.edu/~adrian/630-f04/readings/bellovin-t...](http://users.ece.cmu.edu/~adrian/630-f04/readings/bellovin-
tcp-ip.pdf)

And here are more viable and real alternatives that not only increase speed
by a factor of n, but also improve security and compatibility for our
mobile generation:

[http://www.fujitsu.com/global/news/pr/archives/month/2013/20...](http://www.fujitsu.com/global/news/pr/archives/month/2013/20130129-02.html)

[http://roland.grc.nasa.gov/nrg/local/sctp.net-
computing.pdf](http://roland.grc.nasa.gov/nrg/local/sctp.net-computing.pdf) /
[http://tools.ietf.org/html/rfc4960](http://tools.ietf.org/html/rfc4960)

[http://www.qualcomm.com/media/documents/why-raptor-codes-
are...](http://www.qualcomm.com/media/documents/why-raptor-codes-are-better-
tcpip-file-transfer)

PS: I was initially afraid that HTTP2.0 was optimized for advertisers...
phew

~~~
jimktrains2
> Killing TCP/IP completely and inventing a more efficiently compressed, more
> government resistant and more easily encryptable Protocol would be highly
> anticipated.

You do realize HTTP and TCP/IP reside at very different OSI stack levels,
right? Reïnventing TCP is not HTTP's job.

~~~
X4
Eh, yes... now what? You do realize that the HTTP RFCs define what
protocols are used?

I know that most TCP improvements just add new behaviour for specific
situations, especially congestion, and yes, I have read those papers/links.
I know that many improvements are UDP-based. So, I think you misread me: I
said kill TCP/IP in order to replace it with something better, and I wished
that HTTP/2.0 would be that anticipated step. Did you even check the
alternatives before going negative?

~~~
jimktrains2
> Did you even check the alternatives, before going negative?

One does not need alternatives to dislike a system. Alternatives may affect
whether the system is used, but they don't negate criticism of it.

I don't think HTTP 2 should replace TCP/IP or any other protocol that low in
the OSI stack. That is not what HTTP was designed for and I believe throwing
out the ideas behind its creation and still calling a new protocol HTTP is
disingenuous.

