

HTTP 2.0 specs - September 2013 - redcrusher
http://http2.github.io/http2-spec/

======
pilif
I see all the arguments for optimizing HTTP, but by going binary and
increasing complexity the way we're seeing here, I think we lose something
valuable.

A huge part of what I know I owe to tinkering with protocols in the late
'90s. Being able to send an email using telnet on port 25 was one hell of an
eye-opener for me. But even today, being able to quickly debug an HTTP issue
using telnet is incredibly handy.
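
To make that concrete, the whole exchange is just readable text on the
wire. Here's a minimal Python sketch of what you'd otherwise type into
telnet by hand (the host name is only a placeholder):

    import socket

    # The entire request is human-readable text: exactly what you'd
    # type into `telnet example.com 80` by hand.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The status line and headers come back as plain text, too.
    print(response.decode("latin-1", errors="replace")[:500])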

Yeah, 1.1 will remain, and with it some of the debuggability. But then what
you're quickly debugging is not what browsers are going to see. Yes, you can
add more tools to the mix to help you, but I still think we lose something
(quite like moving to a binary syslog format, btw).

Is the speed increase to be gained from HTTP/2.0 really worth the loss of
discoverability and the increase in complexity? My feeling is that
connections are getting faster on their own more quickly than optimizing
HTTP could ever make them.

If HTTP over TCP is inefficient, can't we try to "fix" TCP? Yeah, that'll be
really hard, but so will getting the Upgrade header to work in order to do
HTTP/2.0 over port 80. Too much stuff is interfering with HTTP these days
(maybe also a result of the high readability of the current protocol - I
don't know).

I wonder whether these aspects are part of the current discussion, or
whether this feeling of mine is just a sign of me getting old.

~~~
AYBABTME
It's not that involved to decode a standard binary protocol. At least, not
involved enough to justify keeping every single user on the web on a less
efficient implementation only to facilitate casual debugging with plain-text
tools.
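
Purely for illustration, here's a minimal sketch of decoding the 9-octet
frame header as HTTP/2 was eventually standardized (RFC 7540, section 4.1;
the September 2013 draft's header layout differed slightly):

    import struct

    def parse_frame_header(data: bytes):
        """Parse a 9-octet HTTP/2 frame header (RFC 7540, section 4.1):
        24-bit payload length, 8-bit type, 8-bit flags, and a reserved
        bit followed by a 31-bit stream identifier."""
        if len(data) < 9:
            raise ValueError("need at least 9 bytes")
        hi, lo, ftype, flags, stream_id = struct.unpack(">BHBBI", data[:9])
        length = (hi << 16) | lo
        stream_id &= 0x7FFFFFFF  # drop the reserved bit
        return length, ftype, flags, stream_id

    # Hand-built header of an empty SETTINGS frame (type 0x4) on stream 0.
    header = bytes([0, 0, 0, 0x04, 0, 0, 0, 0, 0])
    print(parse_frame_header(header))  # -> (0, 4, 0, 0)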

I'd compare that to gzipping your logs and using zgrep and zcat and z<tool>.
Sure, it's a bit more involved, but the savings from gzipping everything are
definitely worth it.
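
For comparison (the file name is made up), the zgrep equivalent is about
three lines of Python:

    import gzip

    # Stream a compressed log and filter lines - the moral equivalent
    # of `zgrep " 500 " access.log.gz` - without decompressing to disk.
    with gzip.open("access.log.gz", "rt", encoding="utf-8") as log:
        for line in log:
            if " 500 " in line:
                print(line, end="")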

