
Last Call: HTTP2 - lazyloop
http://lists.w3.org/Archives/Public/ietf-http-wg/2014OctDec/0982.html
======
jacquesm
Giant mistake in the making. HTTP is elegant, HTTP2 is a monstrosity.

Edit: downvoters: please explain what's to like about HTTP2. I have a very
hard time finding anything to like.

For example: no more easy debugging on the wire, another TCP-like
implementation inside the HTTP protocol, tons of binary data rather than
text, and a whole slew of features that we don't really need but that please
some corporate sponsor because their feature made it in. Counterexamples
appreciated.

Compare:
[http://tools.ietf.org/html/rfc1945](http://tools.ietf.org/html/rfc1945)

~~~
inopinatus
I can only agree, and further voice my strong disapproval at the continuing,
damaging and absurd lack of DNS and IPv6 considerations, most notably the
omission of any discussion of endpoint resolution.

Literally so: this protocol document does not specify how you determine which
server to connect to. HTTP2 is, by definition, only very loosely coupled to IP
despite making significant optimisations for TCP. Thus in implementation we
simply get the same old mistakes and undefined behaviours. Issues with
floating apex records, hacks based on IPv4/6 race conditions, unnecessary
address wastage and so forth will continue; all derived from the colossal
architectural wart of overloading the DNS host (A/AAAA) record as a service
endpoint discovery mechanism.

Once again, I say unto the peanut gallery: shoulda used SRV. The benefits are
many and the downsides greatly overstated. I bemoan the missed opportunity.
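
For the unfamiliar, here is a rough sketch of what SRV-based discovery could
look like (Go; the "http" service label is hypothetical, since no SRV service
name was ever standardized for HTTP):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // An SRV answer carries target host, port, priority and weight, so
        // one name can map to many service endpoints without overloading
        // the A/AAAA host records. The "http" label is hypothetical; none
        // was ever standardized for HTTP.
        cname, addrs, err := net.LookupSRV("http", "tcp", "example.com")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("canonical name:", cname)
        for _, srv := range addrs {
            // Per RFC 2782, clients sort by Priority and choose among
            // equal priorities in proportion to Weight.
            fmt.Printf("connect to %s:%d (priority %d, weight %d)\n",
                srv.Target, srv.Port, srv.Priority, srv.Weight)
        }
    }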

~~~
haberman
Grandparent says HTTP2 is a monstrosity, because it includes "a whole slew of
features that we don't really need."

You say you agree, because "the protocol does not specify how you determine
what server to connect to."

In other words, HTTP2 sucks because it simultaneously includes too many
features, and not enough features.

At least everyone can agree that they don't like it for _some_ reason, even if
the reasons themselves contradict each other.

~~~
runeks
> At least everyone can agree that they don't like it for some reason, even if
> the reasons themselves contradict each other.

How does too many features and missing features contradict each other?

It's entirely possible for something to both have too many, unneeded features,
while at the same time missing other important features.

------
magila
It feels like HTTP2 is a classic case of "something must be done, this is
something, therefore it must be done". Clearly there are shortcomings in HTTP
1.1 which would be nice to address. Google, to their credit, spent a lot of
resources coming up with a solution which met their needs. The problem is that
when Google then went to httpbis the people on the WG apparently took it as an
imperative that _something_ must be released as HTTP2 in relatively short
order. There was a halfhearted attempt to open things up to competing ideas,
but unsurprisingly SPDY was by far the most mature of the proposals. Thus SPDY
became the heir apparent to HTTP by default, despite being a mud ball of
complexity and layering violations.

------
drawkbox
Technology ebbs and flows; I feel like this is a backdrift, like XHTML was,
but it will flow again.

Binary in Hypertext Transfer will never seem right. I understand it is more
performant, but it always creates more bugs; ask any game developer. Binary is
sometimes needed, but it means living on the edge of indexing, ordering, and
headers, and being harder to debug. Indexing errors, overflows, and incorrect
implementations will follow.

Many of the advancements in HTTP2 are good, but there are some steps backwards
we'll have to re-learn again. It isn't all about performance when it comes to
correct interoperability, since standards lead to many interpretations; that
is why XML and then JSON won data transfer: they are easy to interoperate
with. Yes, binary is more efficient over the wire, but not to interoperate
with. Should we go back to binary formats for data exchange on the network?
The protocol level is lower level, but it has still been beneficial that the
current standards spread innovation with lower barriers to understanding.

HTTP2 is one of those 'version 2' apps where some of the legacy genius, like
simplicity, was lost and overlooked in the redesign. An engineer's job is to
make something complex into something simple, and black-boxing data isn't
simplifying it.

~~~
nemothekid
> HTTP2 is one of those 'version 2' apps where some of the legacy genius,
> like simplicity, was lost and overlooked in the redesign.

Calling HTTP1/1.1 genius sounds like an "intelligent design" argument (as
opposed to "evolution"), and I think detracts from what makes it good.

What makes more sense to me is that HTTP1/1.1 was invented and then we
hacked/adapted/"evolved" on top of it to get it to do what we want. It wasn't
the spec that was genius - it was the effort of countless engineers over time
that crammed a genius, trillion-dollar industry into an "okay" spec (the same
way it was done for HTML/CSS/JS).

In that vein, the whole binary/plaintext header argument seems a lot closer
to "this is the way my father did it" than "this is the most efficient way".
To counter your XML/JSON example - I would argue they won over binary formats
because there was a huge need for humans to write & edit data exchange
structures. OTOH, I can't remember the last time I sent/edited/created HTTP
headers by hand. While JSON has tons of uses and is stored in countless
places (config, user data, state data), HTTP servers are the only services
that seem to care about HTTP headers.

~~~
XorNot
And moreover, the new push is towards protocols which seamlessly turn JSON
into binary and back again. Which is a very good way to do things, it's just
so rare that anyone ever bothers to maintain and standardize both sides of
something.

------
hjfgdx
That mess should have never made it to last call. [https://www.varnish-cache.org/docs/trunk/phk/http20.html](https://www.varnish-cache.org/docs/trunk/phk/http20.html)

------
gmzll
The fact that M. Belshe is listed as the primary author, when he didn't even
work on the document, says it all. This is just Google forcing the IETF to
gold plate SPDY.

~~~
youngtaff
Mike has continued to contribute to the standard long after leaving Google.

------
fubarred
SSL/TLS is something that needs to be thrown away and started over (not that
it would happen realistically without immense pressure after another
spectacular failure). The over-complexity of X509 and the ease with which one
can acquire legitimate certs for domains one doesn't own are appalling. From
recent revelations, the number and scale of private-key exfiltrations is even
more troubling, making it possible for some state actors to MITM tens to
hundreds of millions of connections. (One has to put on their tinfoil hat to
estimate how many countries have successfully placed staff in core IT/webops
positions of Fortune 100 companies who are then able to leverage that
access... Not to mention high-level engagement. [The direct approach
conversation might go like this: "give us your keys or we will send in agents
to expose embarrassing details about your org, and we will still get the keys
anyway."])

Perhaps folks like 'cperciva would be kind enough to propose a single, simple
TOML-based cert system that is extremely lightweight, with the fewest possible
features. (Not that TLS/SSL would change without focused, sustained, herculean
effort immediately after yet another Heartbleed.)

~~~
tptacek
What exactly does the ease of acquiring a bogus cert have to do with the
complexity of X.509?

------
nly
I'm so glad HTTP/2 is finally here to save us from the horrors of the web
stack by providing a decent session layer, privacy preserving defaults, cross-
domain and efficient differential caching, as-near-as-can-be bulletproof
password-based authentication, and mandatory encryption.

Oh, wait... maybe that was a dream.

------
sanxiyn
While HTTP2 is a layering violation incarnate, apparently a properly layered
solution is undeployable. Perfect is the enemy of good.

~~~
tptacek
What's a "layering violation"? Who draws the borders between the "layers"?
Isn't "layering" just another way of invoking the status quo?

~~~
magila
HTTP2 is a layering violation because it implements a new layer of
multiplexing and flow control _on top of_ the existing layer of multiplexing
and flow control. Rather than solving the problems with TCP they slapped a
band-aid on top because that allowed them to get to market faster. In the
short term it's a win for Google et al, but in the long term this sort of
thing will turn the internet into (even more of) an unmanageable mess.
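
To make the doubled-up layering concrete, here's a rough sketch (my own Go
illustration, not code from the draft) of the 9-octet header that prefixes
every HTTP/2 frame inside the TCP byte stream. The per-frame stream ID is the
second layer of multiplexing, and WINDOW_UPDATE frames (type 0x8) carry the
second layer of flow control:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // FrameHeader mirrors the 9-octet header on every HTTP/2 frame:
    // 24-bit payload length, 8-bit type, 8-bit flags, then 1 reserved
    // bit and a 31-bit stream identifier.
    type FrameHeader struct {
        Length   uint32 // payload length (24 bits on the wire)
        Type     uint8  // e.g. 0x0 DATA, 0x1 HEADERS, 0x8 WINDOW_UPDATE
        Flags    uint8
        StreamID uint32 // which multiplexed stream this frame belongs to
    }

    func parseFrameHeader(b [9]byte) FrameHeader {
        return FrameHeader{
            Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
            Type:     b[3],
            Flags:    b[4],
            StreamID: binary.BigEndian.Uint32(b[5:9]) &^ (1 << 31), // drop reserved bit
        }
    }

    func main() {
        // A 4-byte WINDOW_UPDATE frame for stream 3: per-stream flow
        // control riding on top of TCP's flow control -- the band-aid
        // described above.
        raw := [9]byte{0x00, 0x00, 0x04, 0x08, 0x00, 0x00, 0x00, 0x00, 0x03}
        fmt.Printf("%+v\n", parseFrameHeader(raw))
    }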

~~~
tptacek
There's a huge cost to "solving problems with TCP": the existing protocol is
intricately coupled between implementations (see: every attempt to improve TCP
congestion control since Vegas) and so widely deployed as to be impossible to
forklift out. Meanwhile: new protocols deployed alongside TCP aren't firewall-
friendly.

If there's a multiplexing problem to be solved on the Internet, it more-or-
less must be solved _above_ TCP, no matter what the "layering" guidelines are.

Meanwhile: I'm still not clear on why these "layers" exist, or need to be
dignified. The whole idea of a "layer" of complicated functionality is in
tension with the End To End Argument: if there's a debate about how something
should be implemented, that thing should be implemented as close to the
endpoints and as close to the application as possible.

~~~
magila
Sure, it would be a huge amount of work to fix/replace TCP. Google is one of
the few companies with the resources to even attempt it, so it's disappointing
that they aren't making it a priority.

The really frustrating thing is that Google is already working on QUIC, which
is probably the best approach with a realistic chance of success. It removes
TCP from the stack and provides a general purpose transport layer rather than
one which is tightly coupled to HTTP. Unfortunately Google decided to push
SPDY with its own kludged-up transport rather than taking the time to do it
right with SPDY over QUIC.

------
Pxtl
Anybody got a good summary of HTTP2 features (which I know could be described
as "everything plus the kitchen sink")?

~~~
shurcooL
It's mostly more efficient (in terms of average latency, speed and total
bandwidth).

Kinda like going from the BMP format to the PNG format (with compression and
alpha channels). Yes, BMP is a lot simpler, and you can find the RGB of any
pixel by looking at pixels[y*width+x], while with PNG you have non-trivial
complexity with compression, etc. But the size efficiency is worthwhile.

~~~
pdkl95
This analogy exaggerates the bandwidth savings.

Converting from uncompressed BMP to compressed PNG can easily save over half
of the file size; even 10:1 compression is common for some images.

The bandwidth savings in HTTP2 are much smaller, and are probably only
significant in aggregate.

~~~
stephen_g
True - the bandwidth savings aren't huge.

What you do get with HTTP/2 is the ability to use bandwidth a lot more
efficiently. Because of TCP's congestion control, such as slow start, opening
a separate TCP connection for each file, as HTTP/1.1 does, is actually fairly
terrible for transferring small files. One way that web developers have tried
to get around this is to combine resources together - such as having huge
combined javascript and CSS files, and big sprite sheet images. But this
messes up caching - if you change a 20KB source file that is part of a
half-meg combined JS file, you have to redownload the entire file.

Another problem is that you're limited in how many HTTP requests the browser
will make to a single domain at once, so you have to wait around for files to
finish before others will download. You can try sharding the files across
different subdomains, but it's a suboptimal solution.

The multiplexing in HTTP/2 solves these problems. You can send a bunch of
files at once on one connection without repeating the (necessary) slowness of
TCP slow start for every file, and the browser knows that they're different
files, so it can cache them separately.

This can translate into noticeably faster page load times.
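
A back-of-the-envelope sketch of that slow-start cost (idealized Go
arithmetic; the file counts, sizes and initial window are made up, losses are
ignored, and real browsers open several connections in parallel, so treat the
numbers as illustrative only):

    package main

    import (
        "fmt"
        "math"
    )

    // roundTrips estimates how many RTTs idealized TCP slow start needs to
    // deliver "bytes": the congestion window starts at initCwnd segments
    // (~1460 bytes each) and doubles every round trip, with no loss.
    func roundTrips(bytes, initCwnd int) int {
        const mss = 1460
        segments := int(math.Ceil(float64(bytes) / mss))
        rtts, cwnd := 0, initCwnd
        for segments > 0 {
            segments -= cwnd
            cwnd *= 2
            rtts++
        }
        return rtts
    }

    func main() {
        // 20 files of 30 KB each, initcwnd = 10 segments. Separate
        // connections each repeat the ramp-up; one multiplexed connection
        // pays it once for the combined 600 KB.
        perFile := roundTrips(30*1024, 10)
        fmt.Println("20 separate connections:", 20*perFile, "data RTTs") // 40
        fmt.Println("1 multiplexed connection:", roundTrips(20*30*1024, 10), "data RTTs") // 6
    }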

------
dreszg
Shouldn't a protocol as important as HTTP get more than two weeks?

~~~
lazyloop
New Year's Eve was also an odd choice for starting a last call.

~~~
aroch
Or a great choice if you want to shoehorn your wished-for spec into reality?

~~~
pluma
Maybe the W3C didn't want a repeat of what happened with XHTML2.

~~~
aroch
If memory serves there were a number of issues with XHTML2, chief among them
being that it didn't serve the original intent of XHTML very well. HTTP/2 is
well _intentioned_, but some of the design choices were made because Google
wanted them, not because they were necessarily best (at least from my
understanding as an interested spectator).

~~~
pas
Could you expand on that? Is there a list of these features, or at least a
blogpost that interested parties could read to quickly get up to speed with
the special interest influences currently observable in the spec?

~~~
jacquesm
[http://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/?include_text=1](http://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/?include_text=1)

Tons of change for marginal gain. And of course the old stuff will need to be
supported at the same time.

------
cdent
HTTP2 is yet another in a long series of developments that feel like the
corporate takeover of the commons. Sure, there are plenty of excellent
features in it, but they are primarily of benefit to systems doing huge
(along many dimensions) stuff.

Is this the inevitable path of any technology which has initial promise for
enabling individual public expression?

~~~
organsnyder
My opinion is exactly the opposite. Large organizations have the resources
(person-hours, server infrastructure, etc.) to do all of the hacks required to
get decent performance out of HTTP/1.x. Smaller shops aren't as likely to have
the time to do domain sharding, sprites, etc.; yes, there are tools and
services to do a lot of that work for you, but it still adds complexity.

------
TwoBit
I am against this. This is not a good standard. It's a response to Google's
Microsoft-like protocol hack.

------
lkrubner
Back in 1989 Sir Tim Berners-Lee put a lot of careful thought into the design
of a protocol for sharing documents using TCP/IP. However, when Ajax and Web
2.0 got going circa 2004, the emphasis was on offering software over TCP, and
for that the HTTP protocol was poorly suited. Rather than carefully rethink
the entire stack, and ideally come up with a new stack, the industry invented
what amount to clever hacks, such as WebSockets, which were then bolted into
the existing system, even relying on HTTP to handle the initial "handshake"
before the upgrade.
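
To see how literally WebSockets lean on HTTP for that handshake, here's a
bare-bones sketch of the upgrade exchange (Go; the host and path are
placeholders):

    package main

    import (
        "bufio"
        "crypto/rand"
        "encoding/base64"
        "fmt"
        "net"
    )

    func main() {
        // A WebSocket connection starts life as a plain HTTP/1.1 request
        // and only becomes a different protocol once the server answers
        // "101 Switching Protocols" (RFC 6455).
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        key := make([]byte, 16) // random nonce, echoed back hashed by the server
        rand.Read(key)
        fmt.Fprintf(conn, "GET /chat HTTP/1.1\r\n"+
            "Host: example.com\r\n"+
            "Upgrade: websocket\r\n"+
            "Connection: Upgrade\r\n"+
            "Sec-WebSocket-Key: %s\r\n"+
            "Sec-WebSocket-Version: 13\r\n\r\n",
            base64.StdEncoding.EncodeToString(key))

        // After this status line, the TCP stream stops speaking HTTP.
        status, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Print(status) // expect: HTTP/1.1 101 Switching Protocols
    }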

What I would like to see is the industry ask itself: can HTTP be retrofitted
to work for software over TCP or UDP? It is clear that HTTP is a fantastic
protocol for sharing documents. But is it what we want when our goal is to
offer software as a service?

I'll briefly focus on one particular issue. WebSockets undercuts a lot of the
original ideas that Sir Tim Berners-Lee put into the design of the Web. In
particular, the idea of the URL is undercut when WebSockets are introduced.
The old idea was:

1 URL = 1 document = 1 page = 1 DOM

Right now, in every web browser that exists, there is still a so-called
"address bar" into which you can type exactly 1 address. And yet, for a system
that uses WebSockets, what would make more sense is a field into which you can
type or paste multiple URLs (a vector of URLs), since the page will end up
binding to potentially many URLs. This is a fundamental change, that takes us
to a new system which has not been thought through with nearly the soundness
of the original HTTP.

Slightly off-topic, but even worse is the extent to which the whole online
industry is still relying on HTML/XML, which are fundamentally about
documents. Just to give one example of how awful this is, as soon as you use
HTML or XML, you end up with a hierarchical DOM. This makes sense for
documents, but not for software. With software you often want either no DOM at
all, or you want multiple DOMs. Again, the old model was:

1 URL = 1 document = 1 page = 1 DOM

We have been pushing technologies, such as Javascript and HTML and HTTP, to
their limits, trying to get the system that we really want. The unspecified,
informal system that many of us now work towards is an ugly hybrid:

1 URL = multiple URLs via Ajax, WebSockets, etc. = 1 document (containing
what we treat as multiple documents) = 1 DOM (which we struggle against, as
it often doesn't match the structure, or lack of structure, that we actually
want).

Much of the current madness that we see with the multiplicity of Javascript
frameworks arises from the fact that developers want to get away from HTTP and
HTML and XML and DOMs and the url=page binding, but the stack fights against
them every step of the way.

Perhaps the most extreme example of the brokenness is all the many JSON APIs
that now exist. If you do an API call against many of these APIs, you get back
multiple JSON documents, and yet, if you look at the HTTP headers, the HTTP
protocol is under the misguided impression that it just sent you 1 document.
At a minimum, it would be useful to have a protocol that was at least aware of
how many documents it was sending to you, and had first-class support for
counting and sorting and sending and re-sending each of the documents that you
are supposed to receive. A protocol designed for software would at least
offer as much first-class support for multiple documents/objects/entities as
TCP allows for multiple packets. And even that would only be a small step
down the road that we need to go.

A new stack, designed for software instead of documents, is needed.

I would have been happy if they simply let HTTP remain at 1.1 forever -- it is
a fantastic protocol for exchanging documents. And then the industry could
have focused its energy on a different protocol, designed from the ground up
for offering software over TCP.

------
thomasfoster96
Pushing content to the client, emphasis on encrypted and secure connections -
woo!

Waiting months/years for HTTP/2 support to appear in all the tools I use - :(
....

------
alexwilliamsca
Let's just skip this like we did with IPv5.

