
Please admit defeat (HTTP WG) - Supermighty
https://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/0815.html
======
mike_hearn
As far as I can tell everyone thinks HTTP/2 is a good idea except one guy,
whose HTTP cache software doesn't even implement HTTP/1.1 fully.

His reason for why HTTP/2 sucks is that it's not ambitious enough, yet when
the WG chair says "v2 isn't perfect but we can always do a v3" he also gets
upset.

Apparently this guy's hobby is boiling oceans. I think we can give it a rest.
There's no real controversy or drama here. There's just one guy who is very
vocal.

~~~
copsarebastards
> His reason for why HTTP/2 sucks is that it's not ambitious enough

No, his reason (and everyone's reason) that HTTP/2 sucks is that it doesn't
solve the problems it was intended to solve. This isn't even a topic of debate
within the working group. The argument at this point is not whether or why
HTTP/2 sucks, it's how to deal with that fact. The approach supported by the
majority seems to be to just hurry through HTTP/2 so they can start working on
HTTP/3, but PHK is arguing that they should treat current HTTP/2 code as a
prototype and scrap it.

Given that supporting HTTP/2 will cost the entire industry billions of dollars
while still leaving a giant security gap, I tend to agree with PHK.

> whose HTTP cache software doesn't even implement HTTP/1.1 fully.

And you've done what? You think writing BitcoinJ qualifies you to talk about
HTTP/1.1? PHK was committing solid code to FreeBSD when you were still in
diapers. He invented the term "bikeshedding". If we are going to discard his
opinion because Varnish doesn't support HTTP/1.1 fully, we should _definitely_
discard your opinion.

Maybe don't attack people based on their qualifications when they're more
qualified than you.

~~~
youngtaff
Whether HTTP/2 sucks is a matter of opinion - yes, it has some technical
compromises, but it delivers a faster experience for browsers (though not
always) by overcoming some of the latency penalty inherent in modern web
pages.

IETF had no option but to start a standardisation process around HTTP/2,
otherwise SPDY would have become a de-facto standard.

The IETF have been trying to come up with the next version of HTTP for years
and didn't get anywhere, so perhaps Google and others forcing their hand is a
good thing.

PHK has been telling everyone what's wrong with the current HTTP/2 approach
since the beginning but, with the exception of his session stuff, has never
really come up with any detail on what should be done differently.

It's easy to kick HTTP/2 but the reality is it delivers many good things, not
least a way for us to upgrade from it in the future.

~~~
copsarebastards
> IETF had no option but to start a standardisation process around HTTP/2
> otherwise SPDY would have become a de-facto standard.

As it is, SPDY is becoming the de-facto standard. If this is the role the IETF
wants to relegate itself to, just slapping an "Approved" sticker on existing
de-facto standards, then the IETF is already irrelevant.

I think a better role for the IETF would be to define best practices for the
industry to work toward rather than simply describing what has already
happened.

> PHK has been telling everyone what's wrong with the current HTTP/2 approach
> since the beginning but with the exception of his session stuff has never
> really come up with any detail on what should be done differently.

Opportunistic encryption. AFAIK PHK hasn't said anything about this, but only
because he doesn't need to: it's the obvious giant hole in the standard. A few
people have already proposed solutions.

------
cnst
I think HTTP/2 should have opportunistic encryption support, as per
[https://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar/0106.html](https://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar/0106.html).

As things stand, HTTP/2 does not comply with RFC 7258 / BCP 188, because it
doesn't do anything for the independent sites that cannot deploy the
`https://` address scheme for compatibility reasons.

You basically either have to support the whole pre-1.2 TLS stack and forget
about `http://`, or you cannot have any TLS at all.

~~~
youngtaff
You need to read the rest of that thread; there are several wrong assertions
in that email, not least that HTTP/2 requires TLS. The reality is that Chrome
and Firefox have decided to only support it over TLS.

------
stormbrew
The worst thing about all of this is that there has never been a good reason
to push through anything as HTTP/2 at this time. The key innovation of spdy
wasn't any of the technical aspects of the protocol itself, but the fact that
it was making good on the idea that protocol upgrade is a feasible prospect.

Formalizing and standardizing the means by which we can upgrade to spdy now,
and later to http/1.2 or /2 (beyond the lip service paid to the idea in the
HTTP/1.1 RFCs), would have done a great service to the evolution of the web.
It also would have allowed a reasonable time for other alternatives to spdy to
show up, rather than a standardization timeline that was so ridiculously
narrow as to allow only one possible alternative to have enough experience in
the wild to succeed.

To sum up: spdy should have been standardized as spdy, not http/2, and the
mechanism for protocol upgrade that it brought to the table should have been
standardized separately in order to make it so when http/2 was actually ready,
we'd be able to move to it.
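For what it's worth, the cleartext upgrade path the HTTP/2 spec itself
settled on (RFC 7540's "h2c") builds on exactly that HTTP/1.1 Upgrade hook.
Roughly (the settings value is a placeholder, not a real payload):

```
GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS frame payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[server begins speaking HTTP/2 frames on the same connection]
```

Over TLS the negotiation happens in the handshake via ALPN instead, which is
the only path the major browsers actually implement.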

------
dcsommer
I'm wondering what can be gained by discussing this exact email again? See
[https://news.ycombinator.com/item?id=7798946](https://news.ycombinator.com/item?id=7798946)

We've also had more recent PHK rants on HN since then, so I don't see what we
gain from a plain repost, with no additional context.

~~~
copsarebastards
> I'm wondering what can be gained by discussing this exact email again?

More publicity for the discussion. I didn't see the email the first time and
I'm sure I'm not the only one.

------
kemayo
It does seem like if they are already saying that an HTTP/3 is going to be
needed then they might as well not bother releasing HTTP/2. Implementors are
going to have minimal motivation to actually implement it, and will probably
hang around waiting for the improved version that they know is coming.

~~~
debacle
Those "Implementors" are the ones who will benefit from HTTP 2. Namely, I
think Google could probably provide patches to nginx, apache, webkit, and
firefox without even feeling it. Doing so would put MS on the hook to update
IIS and IE. That's 90% of the Internet.

~~~
youngtaff
MS are already updating IIS and IE - you can use HTTP/2 using the Azure
platform preview today and the IE Windows 10 preview supports it too.

Of the mainstream commercial HTTP servers I think MS are going to get there
before nginx, and HTTP/2 may well render Apache irrelevant, as there seems to
be very little work happening on updating it.

------
debacle
On the one hand, any HTTP 2.0 spec that still has cookies is a travesty.

On the other hand, I think we need to get some input from dissenters other
than PHK (not that I don't agree with him).

~~~
gsnedders
This is _by design_. The aim for HTTP/2 was always a new messaging scheme for
HTTP/1.1 semantics. Basically, any HTTP/1.1 message can be converted to an
HTTP/2 message (and this can be done by intermediaries); the only headers not
expected to be supported in HTTP/2 are those that fall into the category of
connection control (e.g., Connection).
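As a rough illustration of that mapping (function and constant names are my
own, not from the spec, and a real intermediary would use an HPACK encoder
and handle bodies and trailers too), the 1.1-to-2 conversion of a request's
headers looks something like:

```python
# Sketch: map an HTTP/1.1 request onto HTTP/2 header-list form:
# pseudo-headers first, names lowercased, connection-control headers
# dropped. Simplified - e.g. "TE: trailers" is actually permitted in
# HTTP/2, but is dropped wholesale here.

HOP_BY_HOP = {"connection", "keep-alive", "proxy-connection",
              "transfer-encoding", "upgrade", "te"}

def h1_to_h2(method, path, host, headers, scheme="https"):
    # Headers nominated in Connection are also connection-control.
    nominated = set()
    for name, value in headers:
        if name.lower() == "connection":
            nominated.update(t.strip().lower() for t in value.split(","))
    drop = HOP_BY_HOP | nominated | {"host"}  # Host becomes :authority

    out = [(":method", method), (":scheme", scheme),
           (":authority", host), (":path", path)]
    for name, value in headers:
        if name.lower() not in drop:
            out.append((name.lower(), value))
    return out
```

The point of the charter text above is that this conversion (and its
inverse) is mechanical, which is what lets intermediaries translate between
the two versions.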

From the charter:

> It is expected that HTTP/2.0 will:

> * Retain the semantics of HTTP/1.1, leveraging existing documentation (see
> above), including (but not limited to) HTTP methods, status codes, URIs, and
> where appropriate, header fields.

> * Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in
> intermediaries (both 2->1 and 1->2).

Quite probably a more radical new version of HTTP that drops message
compatibility would be a good idea — but that's a lot harder to design and
harder to deploy. As the old adage goes, "we believe in rough consensus and
running code" — we have running code for a new message serialisation, and
rough consensus for it. We should accept just that as done.

~~~
debacle
So if HTTP 2.0 is a superset of HTTP 1.1 then why the hell can't they just
leave it as SPDY?

Which goes back to PHK's argument about the IETF getting caught with their
pants down.

