
HTTP/2.0 – Please admit defeat - hungryblank
http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/0815.html
======
taspeotis
Related [1]:

    
    
        Wired: How has your thinking about design changed over the past decades?
    
        Brooks: When I first wrote The Mythical Man-Month in 1975, I counseled
        programmers to “throw the first version away,” then build a second one.
        By the 20th-anniversary edition, I realized that constant incremental
        iteration is a far sounder approach. You build a quick prototype and
        get it in front of users to see what they do with it. You will always
        be surprised.
    

[1]
[http://www.wired.com/2010/07/ff_fred_brooks/](http://www.wired.com/2010/07/ff_fred_brooks/)

~~~
aaron695
I'm surprised that people still believe the Mythical Man-Month. As mentioned,
it was written almost 40 years ago.

It was revolutionary at the time, but people have moved on and found many
improvements to the original, and also outright mistakes.

(As the author seems to have acknowledged when releasing a new, improved
iteration of his book.)

~~~
adamlett
> I'm surprised that people still believe the Mythical Man-Month. As mentioned
> it was written almost 40 years ago.

That hardly means that what it has to teach isn't still valid. Admittedly,
I've only read a few chapters of it, but the central point, that throwing more
manpower at a late project only serves to make it later, is at least as
relevant today as when the book was first published.

> It was revolutionary at the time but people have moved on and found many
> improvements to the original and also outright mistakes.

Could you be more specific?

> (As the author seems to have acknowledged when releasing a new improved
> iteration of his book.)

So, do your points above refer only to the first edition of the book then?

~~~
aaron695
> throwing more manpower at a late project only serves to make it later

This is only part of the book, and as per this thread there's a lot more to
the book; i.e. we are specifically talking about its comments on prototyping,
which the original OP mentioned in their email.

It's been years since I read it, but I'll toss in this review with their
points
-[http://www.goodreads.com/review/show/882155551?book_show_act...](http://www.goodreads.com/review/show/882155551?book_show_action=true&page=1)

To me, believing one person's opinionated book from 40 years ago is still
relevant today is just plain wrong. Even the way science was done back then is
questionable today.

Systems they developed will be improved on, technology and societal change
will make specifics no longer totally accurate, and one person won't get an
entire book right.

------
jballanc
I think this ([http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0816.html)) follow-up makes a valid point:

> _In the old days we had different protocols for different use cases. We had
> FTP and SSH and various protocols for RPC. Placing all our networking needs
> over HTTP was driven by the ubiquitous availability of HTTP stacks, and the
> need to circumvent firewalls. I don’t believe a single protocol can be
> optimal in all scenarios. So I believe we should work on the one where the
> pain is most obvious - the web - and avoid trying to solve everybody else’s
> problem._

If we're not careful, we're just going to end up cycling back around again and
find ourselves 20 years in the past.

That said, I do think to some extent "that ship has sailed". The future of
network programming seems like it will be "TCP --> HTTP -(upgraded
connection)-> WebSockets --> _actual_ application layer protocol". See, for
example, STOMP over WebSockets. While it is annoying that this implies we've
added a layer to the model, it's hard to argue with the real-world
portability/ease of development that this all has enabled.
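
To make the layering concrete, here's a rough browser-side sketch of that
stack, sending raw STOMP 1.2 frames over a WebSocket rather than using any
particular client library (the endpoint URL and destination are made up):

```typescript
// The WebSocket handshake is itself an HTTP GET with an Upgrade header,
// carried over an ordinary TCP (or TLS) connection: three layers deep before
// the "actual" application protocol even starts.
const ws = new WebSocket("wss://broker.example.com/stomp"); // hypothetical endpoint

// STOMP 1.2 frames are plain text: COMMAND, header lines, blank line, body, NUL.
const frame = (command: string, headers: Record<string, string>, body = ""): string =>
  command + "\n" +
  Object.entries(headers).map(([k, v]) => `${k}:${v}`).join("\n") +
  "\n\n" + body + "\0";

ws.onopen = () => {
  // Layer four of the stack: the application protocol proper.
  ws.send(frame("CONNECT", { "accept-version": "1.2", host: "broker.example.com" }));
  ws.send(frame("SUBSCRIBE", { id: "sub-0", destination: "/queue/example" }));
};

ws.onmessage = (event) => {
  // CONNECTED, MESSAGE and ERROR frames come back the same way.
  console.log("server frame:", event.data);
};
```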

~~~
pjc50
The importance of firewall punching can't be overstated. There are plenty of
end users in workplaces or on other people's wifi who find that all outgoing
ports other than 80 and 443 are blocked. Yes, this is incredibly stupid, but
they're not going to do anything about it.

~~~
josteink
So what you're saying is that since incompetent firewall admins have blocked
all applications except the web, and no amount of reason can convince them to
do otherwise, all applications of the future should be tunnelled through HTTP.

And thus naturally HTTP must be made to accommodate all those applications.

You may think that makes sense, but you don't create something as long-lasting
as internet-scale architecture based on hacks around incompetence.

Besides, if that's the path we're walking, DPI-based firewalls with HTTP-level
application firewalling will become the new norm, and we've gotten nowhere
further. Except we now have an even bigger mess to work with.

While the OSI model may be going a bit overboard in some aspects, making all
future application protocols be squashed through HTTP is madness. This
thinking is of the same quality and mindset as that of PHP developers.

~~~
thaumasiotes
> you don't create something as long-lasting as internet-scale architecture
> based on hacks around incompetence

The right half of your brain is wired to the left half of your body (and vice
versa). That's just the standard, go-to, basically harmless example of
stupidity in the design of long-lasting systems.

From
[http://uncyclopedia.wikia.com/wiki/Unintelligent_Design](http://uncyclopedia.wikia.com/wiki/Unintelligent_Design)
:

> Unintelligent Design is the theory that the world was designed by some
> higher power, but this higher power did a piss poor job at it. There are
> many theories as to how the universe could have been so stupidly and half-
> heartedly spilt into existence.

Or from the slightly more serious
[http://en.wikipedia.org/wiki/Unintelligent_design](http://en.wikipedia.org/wiki/Unintelligent_design)
:

Your optic nerve originates at the front of your retina and pierces through
it, instead of more sensibly originating at the back:

> The retina sends electrical signals to the brain through the optic nerve and
> people see images. The optic nerve, however, is connected to the retina on
> the side that receives light, essentially blocking a portion of the eye and
> giving humans a blind spot. A better structure for the eye would be to have
> the optic nerve connected to the side of the retina that does not receive
> the light, such as in cephalopods.

You stupidly breathe through the same tube you eat and drink with, causing a
staggering number of unnecessary deaths:

> If the [pharynx and larynx] were not connected and did not share a portion
> of their travel paths, choking would not be an issue, as it isn’t for most
> other animals in the world.

~~~
vxNsr
I think people are missing your point: human beings seem to be built based on
hacks around incompetence, yet we've made it for quite some time. It's not a
great point, but it nevertheless makes sense.

~~~
thaumasiotes
That's one of two points. Indeed, humans are a much, much longer-lasting
system than the internet. But you can also think about why these kinks in the
design of living things arose in the first place; as wikipedia nicely points
out, a lot of glaring mistakes in one animal work the way you'd expect in
other animals. The "mistakes" arise because they're the fastest way to produce
a desirable result, and they persist for great periods because making changes
to a working system is very hard. In a certain sense, history does show that
adding hacks on to the "outside" of a working system, so that it stays working
at all times, can be a superior strategy to trying to revamp the whole thing
in an elegant manner.

------
stormbrew
I am so glad there is at least one prominent name advocating this line,
because I feel like this quote from another IETF discussion is becoming more
and more relevant:

> Is there an IETF process in place for "The work we're doing would harm the
> Internet so maybe we should stop?" - [http://www.ietf.org/mail-
> archive/web/trans/current/msg00238....](http://www.ietf.org/mail-
> archive/web/trans/current/msg00238.html)

HTTP/2.0 has been rammed through much faster than is reasonable for the next
revision of the bedrock of the web. It was always clearly a single-bid tender
for ideas, with the call for proposals geared towards SPDY and the timeline
too short for any reasonable possibility of a competitive idea to come up.

There has never been any good reason that SPDY could not co-evolve with HTTP
as it had already been doing quite successfully. If it was truly the next
step, that would have become clear soon enough. All jamming it through as
HTTP/2.0 does is create a barrier to entry for similar co-evolved ideas to
come about and compete on an even footing.

~~~
youngtaff
PHK has always been skeptical of HTTP/2 & SPDY.

He wants radical change in the protocol, but when given the opportunity he
submitted a (by his own admission) half-baked proposal - there's also the
question of what a protocol like HTTP/2 means for his product.

Although HTTP/2 started from SPDY it has evolved, and in different ways, e.g.
see the framing comments from the thread the OP links to.

We need a better protocol for the web now. Yes, we could wait around longer
for more discussion, but where did that get us with HTTP/1.1? I'd be quite
happy if the IETF had just adopted SPDY lock, stock and barrel (and no, I
don't work for Google).

~~~
stormbrew
I'm aware that he's always been skeptical; I saw his posts as my own
excitement over the idea of HTTP/2.0 died on the vine while I was subscribed
to the WG mailing list.

There has never been, and will never be, a point in time where we don't need
"a better protocol for the web now." The issue is that canonization was
unnecessary; adoption of SPDY has been progressing fine without it. And HTTP/2
diverging significantly from SPDY does not inspire confidence, either. Rather,
it just reminds me of a famous xkcd [1] and again begs the question of whether
trying to turn SPDY into HTTP/2 even manages to achieve any of the goals the
process set out for.

The whole thing just seems like a big fat SNAFU.

[1] [http://xkcd.com/927/](http://xkcd.com/927/)

------
justincormack
There is another interesting thread about the Internet of Things
[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0602.html)

"So it looks like HTTP 2 really needs (at least) two different profiles, one
for web hosting/web browser users ("HTTP 2 is web scale!") and one for HTTP-
as-a-substrate users. The latter should have (or more accurately should _not_
have) multiple streams and multiplexing, flow control, priorities,
reprioritisation and dependencies, mandatory payload compression, most types
of header compression, and many others."

[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0604.html)

"First and foremost, it needs to be recognized that HTTP/2 has been designed
from the start to primarily meet the needs of a very specific grouping of high
volume web properties and browser implementations. There is very little
evidence that ubiquitous use of the protocol is even a secondary consideration
-- in fact, the "they can just keep using HTTP/1.1" mantra has been repeated
quite often throughout many of the discussions here on this, usually as a way
of brushing aside many of the concerns that have been raised. So be it. It's
clear at this point that HTTP/2 is on a specific fixed path forward and that,
for the kinds of use cases required by IoT, alternatives will need to be
pursued."

~~~
youngtaff
Pretty sure every ecommerce site will benefit from deploying HTTP/2

(although their tendency to fill sites full of third party components may
reduce some of its benefits)

~~~
fredliu
Not exactly sure under what specific scenarios "every e-commerce site" will be
running, but if HTTP/2 == SPDY, it's not looking good on mobile:
[http://conferences.sigcomm.org/co-
next/2013/program/p303.pdf](http://conferences.sigcomm.org/co-
next/2013/program/p303.pdf) <== TL;DR: the current SPDY implementation has a
negative impact on mobile performance. That SPDY sucks on mobile is sort of a
known fact, but this paper shows solid evidence that it's true.

~~~
youngtaff
The conclusion of "As a result, there is no clear performance improvement with
SPDY in cellular networks, in contrast to existing studies on wired and WiFi
networks." is rather different from "current SPDY implementation has negative
impact on Mobile performance."

Of course, as SPDY only uses a single connection, it's more vulnerable to
issues with that connection.

~~~
fredliu
So you agree that a single connection is a problem :)? On mobile, where high
RTT is the norm rather than the exception, this has a compounding negative
effect on performance (long response times mistaken for packet loss, a reduced
congestion window that makes all flows suffer, etc.). Everything else being
equal, applying SPDY on mobile introduces this new mobile "vulnerability";
isn't that SPDY's negative impact on mobile? Sure, if cellular networks had
the same delay/packet-loss characteristics as wired/wifi, SPDY would fly, but
mobile and wired/wifi are clearly not the same. Also, I used that paper just
as an example showing SPDY's performance problem on mobile (with a very nice,
detailed analysis of why SPDY suffers). My conclusion that "SPDY sucks on
mobile" is from my experience, not from that paper; I just use the paper to
make my point. Actually, I think the paper's conclusion is a little bit too
"polite" toward SPDY. [Edit: adding reference to the paper] In that paper, the
sentence right before the "conclusion" you quoted is "In cellular networks,
there are fundamental interactions across protocol layers that limit the
performance of both SPDY as well as HTTP."

~~~
youngtaff
I've tested SPDY over wired / wifi but not over mobile so my mobile experience
is anecdotal.

That said all my mobile browsing (minus HTTPS) is run over SPDY (via the
Google proxy) and I wouldn't describe it as sucking.

Even in the HTTP case it will still depend on which resource the packet loss
occurs for, e.g. if it's something on the critical rendering path, will it
make that much difference?

~~~
fredliu
Glad that you brought up the critical rendering path. Because SPDY uses a
single connection for all requests, once a packet loss/retransmission happens
the entire connection's congestion window will be cut, which will affect _all_
requests, so it most definitely will hurt requests on the critical rendering
path. Good old HTTP/1.1 actually doesn't suffer from this: if multiple
connections are used and only one connection suffers packet loss, only the
request using that connection suffers (assuming pipelining is not enabled);
the requests using other connections won't. This actually reduces the chance
of resources on the critical rendering path suffering from packet loss.
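
A back-of-the-envelope sketch of that argument (a toy model only: it ignores
slow start, retransmission timing, head-of-line blocking, and the fact that
six connections also start with six times the aggregate window, which is
itself part of why parallel connections can win on lossy links):

```typescript
// Toy model: one loss event halves the congestion window of the connection it
// lands on; the aggregate window is a rough proxy for bytes in flight.
function aggregateWindowAfterLoss(connections: number, cwndPerConnection: number): number {
  // Only one connection takes the hit; the others keep their full window.
  return (connections - 1) * cwndPerConnection + cwndPerConnection / 2;
}

const cwnd = 10; // segments per connection, arbitrary

// SPDY-style: one multiplexed connection. The loss halves everything,
// including whatever is on the critical rendering path.
console.log(aggregateWindowAfterLoss(1, cwnd)); // 5 of 10

// HTTP/1.x-style: six parallel connections. The same loss only affects
// one sixth of the aggregate window.
console.log(aggregateWindowAfterLoss(6, cwnd)); // 55 of 60
```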

~~~
youngtaff
But is there any research into how often the 'independent' connections
actually mitigate packet loss?

Even with HTTP, if the packet loss comes in the middle of negotiating the
connection for the CSS, the page is still going to be waiting for the
three-second timeout before re-negotiating the connection.

~~~
fredliu
We've seen in our work that under high RTT and a high packet loss rate (you
can simulate a similar effect using things like dummynet, but it won't be
exactly what you'd see on a cellular network, for the reasons mentioned in the
AT&T Labs paper), SPDY results in performance degradation over raw HTTP. Also,
I'm not saying 'independent' connections can mitigate packet loss; they can't,
and for the case you mentioned we'd definitely get a performance hit (either
first paint or page load). It's just that SPDY makes it worse than the default
HTTP behavior, and that's understandable because it only uses a single
connection, and a single connection suffers from different sorts of problems,
which you seem to agree with in your first comment.

------
zhyder
Interesting response from the WG chair:
[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0820.html)

An aside: I find it odd how HN users jump to agreement when a link to a single
mailing-list message is posted, ignoring other discussion on the thread. I
think it's because the UI makes it hard to see the rest of the conversation
(unlike, say, the comments UI on HN itself).

------
gioele
To put the comment and its author in context, Poul-Henning Kamp is the main
developer of Varnish, a widely used, high-performance, standards-compliant
HTTP cache.

PHK has experience of HTTP both from the server point of view (the main job of
Varnish is acting as a fast HTTP server) and from the client point of view
(Varnish acts as a client to the slow upstream HTTP servers).

As a side note, he also refrained for years from adding TLS support to Varnish
after his review of OpenSSL and SSL in general (see [https://www.varnish-
cache.org/docs/trunk/phk/ssl.html](https://www.varnish-
cache.org/docs/trunk/phk/ssl.html) ).

~~~
ruben_varnish
Good one. More context on the ideas and proposal Poul-Henning has put on the
table:
[http://phk.freebsd.dk/words/httpbis.html](http://phk.freebsd.dk/words/httpbis.html)

------
themgt
A much better comment to link to would have been the grandparent, by Greg
Wilkins from Jetty, who gives a lot more substance and context to the debate:
[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0807.html)

It does seem a little shocking that the WG chair is proposing last call while
there's still serious discussion of things like dropping HPACK.

------
phkamp
I'm here in case you want to ask me anything.

Poul-Henning

~~~
acdha
For those of us who don't follow this discussion in detail, why are you
thinking the protocol needs to be scrapped outright rather than modified? Is
it simply the complexity Greg Wilkins mentioned or are you really thinking
about bigger philosophical changes like dropping cookies as we know them?
Dropping HPACK seems like a great engineering call but that seems like a
relatively minor change rather than starting over.

[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0833.html) made me wonder what your standby plan would be – let
the people who really need to care about performance use SPDY until a more
ambitious HTTP 2.0 stabilizes? One of the concerns I have is that many people
want performance now and it seems like HTTP 2.0 might turn into the next XHTML
if it takes too long to emerge.

~~~
phkamp
I think the fundamental problem with HTTP/2.0 is that it is an inadequate rush
job for no good reason.

If you really want to gain performance, for instance, the way to go is to get
rid of the enormous overhead of cookies, to replace the verbose but almost
content-free User-Agent header, and so on.

Likewise, wrapping all the small blue 'f' icons and their associated tracking
icons in TLS/SSL does not improve privacy on the net in any meaningful way.

But the entire focus has been to rush out a gold-plated version of SPDY,
rather than to actually solve these "deep" problems in HTTP.

Similarly: rather than accept that getting firewalls fixed will take a bit of
time, everything gets tunneled through port 80/443, with all the interop
trouble that will cause.

And instead of working with the SCTP people on getting a better transport
protocol than TCP? Stick it all into the HTTP protocol.

Nobody seems to have heard the expression "Festina Lente" in this WG.

~~~
pierrebai
All these concerns share an overlook of the practicalities of the web.

On the protocol level, there is a huge knowledge and API momentum that makes
any protocol that fundamentally differs from HTTP an uphill battle for
adoption. Whatever changes are made to HTTP, if it is too different from the
application POV, it will linger behind.

Same thing about tunneling. It may not be elegant, but it's the way to fight
the system. You won't change IT behaviour with a protocol change. One possible
way to work would be to make new version oF HTTP have a working mode that
reduces overhead to a minimum for any tunneling operation.

(The same argument hold to SCTP vs TCP.)

~~~
phkamp
A well-designed HTTP/2.0 would _not_ "fundamentally differ from HTTP", and it
would be trivial for web servers like NGINX and Apache or frameworks like PHP
to mask the differences from the application code.

For instance, moving cookies to the server side would just require a simple
key-value store lookup.
~~~
Sammi
Forgive me if I'm not well enough informed to partake productively in this
discussion. I just wanted to ask:

Can't anyone implementing a web client and web service on top of common web
servers and browsers choose to forgo cookies and keep the state server-side?
If so, then why do people choose to use cookies if they add more overhead?

Also, you would need to keep some sort of unique identifier on the client,
which the client can send the server, in order for the server to be able to
look up the session state (a session id). Isn't this what cookies are often
used for? I'm guessing this is probably what you meant "information-free
session nonces" would solve above. This sounds interesting; could you explain
this scheme to me, or maybe point me in the direction of a good resource?

~~~
phkamp
They could, if the client provided a session-id so they knew which "cookie" to
retrieve from server-side storage.

My proposal is to do that, and have the top bit in the session-id mean
"Persistent" or "Anonymous", so that the client indisputably controls whether
anything will be stored about the session.

A Persistent session-id would be the same the next time you visit the site; an
Anonymous one would be random, and thus not retrieve any state on the server
side, even if they saved it last time.

This would put the privacy decision in the hands of the client, provided we
also eliminate crap like almost-per-user-unique User-Agent headers.
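
A sketch of how a server might act on such an id (the 64-bit width and the
bigint encoding are my assumptions for illustration, not part of the proposal
as written):

```typescript
// Top bit of the session id encodes the client's choice:
// set = "Persistent" (state may be stored), clear = "Anonymous".
const PERSISTENT_BIT = 1n << 63n;

const persistentSessions = new Map<bigint, Record<string, string>>();

function sessionState(id: bigint): Record<string, string> {
  if ((id & PERSISTENT_BIT) === 0n) {
    // Anonymous: the id is random and nothing is ever stored under it,
    // so there is no state to retrieve even if the user visited before.
    return {};
  }
  // Persistent: the client opted in, so state survives across visits.
  let state = persistentSessions.get(id);
  if (state === undefined) {
    state = {};
    persistentSessions.set(id, state);
  }
  return state;
}
```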

~~~
gregory144
Can't the client side do this already (with HTTP/1.1) by turning off cookies?

Or are you pushing for browsers to turn off cookies by default?

------
lerouxb
I think it makes sense to take the time to get it right. Every version of HTTP
will effectively have to be supported by just about every server and client
forever or otherwise the web will break.

An incredible aspect of the web is that Tim Berners-Lee's first website back
at CERN still works in modern browsers. Same with things like basically the
entire Geocities archive.

When it gets to core infrastructure like HTTP you can't just iterate quickly
and expect the entire internet to constantly upgrade along with you.

What works for early stage lean startups won't work here.

------
maaaats
What an awful way to try and get a point across. An aggressive tone and
negative words baked into every other sentence, making the statements very
loaded. There's probably a lot of missing context from viewing only this link,
though.

------
chacham15
I think that one of the best things about HTTP/1.0 (and to a lesser extent
1.1) is its simplicity. The reason, to me, that that simplicity is so vital is
that it has fostered a large amount of innovation.

------
sebcat
It should be noted that the sender of this e-mail is Poul-Henning Kamp, known
among other things for another e-mail from back in the day (relatively
speaking): [http://bikeshed.com/](http://bikeshed.com/)

------
brianpgordon
I haven't been following the development of HTTP/2.0. What are the most
egregious "warts and mistakes" in SPDY?

~~~
fredliu
Mobile is one of them, although OP's arguments are valid as well.
([http://conferences.sigcomm.org/co-
next/2013/program/p303.pdf](http://conferences.sigcomm.org/co-
next/2013/program/p303.pdf)) Also, among many other things: every thing over
SSL. Single Connection. Also, less of a technical problem, but as many already
mentioned, too complicated as compared to plain text human readable HTTP 1.x

~~~
zobzu
Also SSL everywhere without authentication by default, i.e. slower and
encrypted, but not actually safe.

------
quasque
Would it be accurate to suggest that the rushing of Google's SPDY, as
HTTP/2.0, through IETF standardisation is roughly equivalent to the situation
a few years ago when Microsoft pushed Office Open XML through as an ECMA
standard? Or is that just a huge mischaracterisation?

~~~
marcosdumay
That's a huge mischaracterization.

There are a few different implementations of SPDY, and a clear use case where
it applies. Also, it's a clear standard, made to be used.

What's happening here is that there is a group of very active people who
create most of the software we use on the web and have a use case they want to
support. At the same time, there are lots and lots of people who are not as
active, with a huge number of use cases that will be hindered, but since they
are not active, they have very little voice.

~~~
quasque
Thanks for the explanation, I haven't been following the progress of SPDY so
I'm not familiar with the detail. What use cases would be hindered by this new
standard?

~~~
marcosdumay
There are people complaining that it will break things because it's binary,
that they can't have mandatory encryption, and that it's just too complex to
fit in limited resources. There are probably other complaints that I didn't
see.

I've never read it in enough depth to verify those claims, but the response
from the standards group is always "then use HTTP 1.1", which is about as much
of a non-solution as it gets.

SPDY was great exactly because it was not the standard; it was an extra
option, available if everybody agreed to it. Call it HTTP 2, and it will
become mandatory in no time. The IETF calling it optional won't change a
thing.

------
higherpurpose
I'm in favor of dumping HTTP/S and using a faster and more secure (by default)
transport protocol altogether. Post-Snowden, we should be focusing our energy
on that, rather than continuing to hack around this old protocol to make it
faster and more secure.

~~~
marios
MinimaLT [1] comes to mind. Minimal latency through better security sounds
very appealing, especially when it's not a marketing trick but a paper signed
by people like DJB.

The way I see it, though, the question is not only how to design a protocol,
but how to get it adopted. Especially when you're talking about network
protocols, you need rock-solid stacks in all major operating systems, which is
not an easy feat to accomplish.

[1]:
[http://cr.yp.to/tcpip/minimalt-20130522.pdf](http://cr.yp.to/tcpip/minimalt-20130522.pdf)

------
kolev
Well, SPDY might be a "prototype", but it's solving real problems _today_. I
care less whether it's perfect or not, or whether it solves _all problems_ or
not, as long as it's easy to implement, has a decent footprint, and offers
significant improvements over HTTP/1.1. An imperfect working prototype is
better than a perfect blueprint that materializes in a distant future where
the problems and environment can differ greatly.

~~~
adamtulinius
Well, according to this mail from the Jetty team, it doesn't seem like
HTTP/2.0 is easy to implement at all:
[http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0807.html)

Furthermore, if HTTP/3.0 is already being discussed, why not just skip
HTTP/2.0 entirely and live with the current HTTP/1.1+SPDY situation until the
work towards a new standard for HTTP is actually done?

~~~
canadev
What does "LC" mean in that post?

~~~
Tomte
"Last Call".

See [http://datatracker.ietf.org/doc/help/state/draft-
iesg/](http://datatracker.ietf.org/doc/help/state/draft-iesg/)

------
alexnewman
It's not perfect so start over. Seems like the definition of why v2 is always
so hard.

------
cwp
Sounds to me like the real problem is lack of IP addresses, and the best
strategy would be to hold off on updating HTTP and work on IPv6 ubiquity
first. I can see why Google went a different route, but we don't all have to
follow.

------
josteink
Just like they admitted that XHTML 2 was a mistake and scrapped it, I feel
they should do the same with this nonsense.

Nothing about SPDY or HTTP/2.0 sparks any sort of confidence with regard to
proper, robust protocol design, keeping things simple, or properly separating
concerns.

~~~
haberman
It's funny that you mention XHTML 2, because I think it demonstrates the
opposite of what you are arguing.

XHTML 2 is a lot more like what PHK is proposing: an attempt to "rethink"
HTML, come up with something simpler, revolutionary rather than evolutionary,
"The Right Thing." It was an attempt to reinvent the space from first
principles, and had lots of ideas that were theoretically good but unproven at
large scale.

When that went nowhere, the world settled on HTML5: evolutionary, incremental,
and based on standardizing existing practice. Much less sexy, but more useful
in practice.

There is a time and a place for bold new ideas, but a standards body designing
v2 of a protocol isn't it. Standards are for codifying proven ideas. When
standards bodies try to innovate you end up with XHTML, VRML, P3P, SPARQL,
etc.

------
mantrax5
You know what we need? We need to pick _one_ of those people and give 'em
_one_ day to invent HTTP/2.0, and it'll be a better spec than letting them all
"decide" together by nerd-fighting each other into eternity.

No standard is perfect, but the worst standard is no standard.

Make up your fucking mind already.

