

Why HTTP/2.0 does not seem interesting (2012) - asm89
https://www.varnish-cache.org/docs/trunk/phk/http20.html?re

======
exceptione

      In my view, HTTP/2.0 should kill Cookies as a concept,
      and replace it with a session/identity facility, which makes 
      it easier to do things right with HTTP/2.0 than with HTTP/1.1.
    

count me in. Cookies are a huge waste of bandwidth and freaking annoying here
in Europe as you cannot visit a site anymore without being warned you are
about to receive yet even more cookies.

~~~
yahelc
> Cookies are...freaking annoying here in Europe as you cannot visit a site
> anymore without being warned you are about to receive yet even more cookies.

Seems like the blame for that lies not with cookies themselves, but with the
EU's cookie law.

~~~
drostie
At best that's a symptom.

Cookies are for session management; the central problem with cookies is that
people _feel_ that servers will treat certain sessions as ephemeral, but
instead those servers track them for long-term, creepy analysis. A connected
problem is that many sites _require_ cookies in order to show _public
content_. Public-content sessions should be _entirely_ ephemeral, meaning you
shouldn't need a cookie in the first place. (The New York Times offends
egregiously and persistently in this regard.)

You can easily comply with the EU law by either placing the notice on the
login page or else not storing cookies. This means that anybody who abuses
cookies in the above way needs to be loud about it; "we're not giving you an
ephemeral presence like you think!" -- which actually not only fixes this
problem but also creates an incentive to not abuse cookies in this way.

I am not saying that we should abandon sessions entirely, but that it would be
nice if the 'default' session treatment followed the rules that online banking
uses: when the browser is closed, all sessions are done. If we did this then
we'd want to include a 'persistent login' mechanism, which would take the form
of an in-browser 'would you like to sign in?' dialogue accompanying a web
site. This means that unlike current HTTP authentication, it would have to be
somewhat asynchronous; you are shown the ephemeral version of the page while
the browser itself requests you to confirm that you want to join your long-
term session there. (I was originally going to recommend that the browser just
handle a digital signature scheme, but of course that does not solve the
'logging on to Facebook from your sister's computer' problem easily. Hm.)
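
To illustrate the distinction (a sketch, not a proposal): in today's HTTP, the
only difference between an ephemeral session and long-term tracking is whether
`Set-Cookie` carries a lifetime, e.g. in Python:

```python
from http import cookies

# An ephemeral "session" cookie: no Expires/Max-Age, so a compliant
# browser discards it when closed (the banking-style default).
ephemeral = cookies.SimpleCookie()
ephemeral["sid"] = "abc123"

# A persistent tracking cookie: the same mechanism, but with a
# one-year lifetime -- the treatment of public content objected to above.
tracking = cookies.SimpleCookie()
tracking["uid"] = "abc123"
tracking["uid"]["max-age"] = 365 * 24 * 3600

print(ephemeral.output())  # Set-Cookie: sid=abc123
print(tracking.output())   # Set-Cookie: uid=abc123; Max-Age=31536000
```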

~~~
emmett
For advertisers this is a non-starter, because it prevents you from knowing
the size of your audience. All sites would immediately begin requiring some
form of "login" in your scenario in order to enable tracking again.

If you can't track uniques, you can't sell ads, and that's pretty much all
there is to it. So there's huge incentive to undermine any scheme to prevent
unique user tracking.

The solution is to somehow ban advertising, but that's biting off a bit more
than simple user privacy.

~~~
kenko
"For advertisers this is a non-starter, because it prevents you from knowing
the size of your audience. All sites would immediately begin requiring some
form of "login" in your scenario in order to enable tracking again."

If this is really a non-starter for advertisers, then mandating it will
effectively ban advertising, non?

Remember: your business model is not sacrosanct! Disruption!

~~~
nine_k
It will also kill most "free content" websites that live off the ads revenue.
Are you ready for paywalls everywhere, even if subscription is 25 � / mo?

~~~
lambda
Your symbol comes out as a Unicode replacement character for me. What was it
supposed to be?

~~~
nine_k
It was the cent symbol, ¢; somehow it was broken in transmission.

~~~
lambda
That time it worked. Not sure why it broke the first time.
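
One plausible mechanism for this kind of breakage (an assumption; what
actually happened in transit is unknowable): the two UTF-8 bytes of ¢ being
reinterpreted in, or truncated by, a single-byte encoding:

```python
cent = "¢"                     # U+00A2
utf8 = cent.encode("utf-8")    # b'\xc2\xa2' -- two bytes
print(utf8.decode("latin-1"))  # 'Â¢': classic mojibake if misdecoded
# If one byte is lost, the remainder is invalid UTF-8 and a strict
# decoder substitutes U+FFFD, the replacement character seen above:
print(b"\xa2".decode("utf-8", errors="replace"))
```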

------
richardwhiuk
What really frustrates me about HTTP as a protocol is that it provides the
beginnings of a framework for session management via the WWW-Authenticate
headers, but it's ignored because sites can't provide a good UX with it.
Instead we end up with phishing, terrible login forms, and poor security as
people reimplement session management with cookies.
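
For concreteness, the built-in mechanism in question looks like this on the
wire; a minimal Python sketch of the Basic scheme (Digest is analogous, with a
nonce challenge), with illustrative credentials:

```python
import base64

# Server challenge, sent alongside a 401 response:
challenge = 'WWW-Authenticate: Basic realm="example"'

# Client answer: credentials are merely base64-encoded, not encrypted,
# and the browser renders an unstyleable dialog to collect them -- two
# reasons sites fall back to their own cookie-based login forms.
def basic_auth_header(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("alice", "secret"))
# Authorization: Basic YWxpY2U6c2VjcmV0
```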

~~~
jessaustin
I've long wondered if we might have had better alternatives via WWW-
Authenticate if major browsers had made it straightforward/possible to write
auth plugins. (Actually it probably _is_ possible, but AFAICT not without non-
portable munging about in NPAPI.) If Mozilla actually do something to
integrate Persona into their clients, will they do so in an open, repeatable
way (with an API accessible to extensions) or will it just be more of the same
oddball one-off coding that supported NTLM?

------
cromwellian
This has been going on for a long time and nothing got done. HTTP-NG started
in 1997. If you try to boil oceans, you get nothing.

The reason SPDY is succeeding is that it is not trying to solve every problem
and grind every axe.

------
kkielhofner
A good analysis, but somewhat dated. It has been established, for instance,
that with a compatible SSL library and TLS NPN support, SPDY/HTTPbis can be
served on the same socket and port as HTTPS. That said, I really like what PHK
has to say about "HTTP routers".

~~~
mortehu
His suggestion is:

> One simple way to gain a lot of benefit for little cost in this area, would
> be to assign "flow-labels" which each are restricted to one particular Host:
> header, allowing HTTP routers to only examine the first request on each
> flow.

I don't understand what he's saying here. Who assigns a label to the "Host"
header when? Is he proposing sticky cookies?

------
jokoon
Well, transmitting text never seemed all that interesting.

HTTP was nice because it was easy for programmers to write apps that worked
over it: no binary protocol was involved, and reading ASCII strings is never
complicated. It was good for a growing industry.

Now that most browsers are open source, why can't the IETF work out a binary
protocol? BitTorrent is binary, and it's awesome and it's used. Why can't any
browser work out a binary protocol? Truly dynamic pages over the network? Why
not? I'm sure some software already does that. Make one open source, make it
work on Firefox or Chrome, and I guess things would start to light up.
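
A hedged sketch of what binary framing looks like in practice:
length-prefixed, as BitTorrent's wire protocol is (this exact format is
illustrative, not any real protocol's):

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    # 4-byte big-endian length prefix, then the payload. No delimiter
    # scanning: the parser knows exactly how many bytes to read.
    return struct.pack(">I", len(payload)) + payload

def decode_frame(data: bytes) -> bytes:
    (length,) = struct.unpack(">I", data[:4])
    return data[4 : 4 + length]

frame = encode_frame(b"GET /index.html")
assert decode_frame(frame) == b"GET /index.html"
```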

------
osth
I wonder if header compression is primarily to allow for ubiquitous, large
cookies.

Cookies were originally, and with few exceptions remain, a hack to add state
to transactions that were never intended to be stateful.

If the header compression is indeed driven by the growing prevalence and size
of cookies, then HTTP/2 is an effort to accommodate a hack. Not very
interesting.

Some hacks that find their way into RFCs are difficult to remove because the
transition process would be unreasonably expensive, like replacing the
"sophomoric" compression scheme in DNS with something more sensible like LZ77
(credit: djb). I guess we might see some passionate arguments by web
developers about the great expense of removing cookies from the HTTP standard
and replacing it with a session facility, but I think the (long term) benefits
easily outweigh the (short term) costs.
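
The bandwidth point is easy to demonstrate: HTTP/1.1 resends the same
cookie-laden headers verbatim on every request, so they compress extremely
well (SPDY used zlib for its header compression). A rough sketch, with
illustrative header contents; exact sizes will vary:

```python
import zlib

# A typical cookie-laden request header block, repeated for 20 requests
# on one connection, as HTTP/1.1 effectively does.
header = (b"GET /page HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"Cookie: uid=abc123; _ga=GA1.2.123456789; prefs=dark\r\n\r\n")
raw = header * 20
compressed = zlib.compress(raw)
print(len(raw), len(compressed))  # the repeated headers nearly vanish
```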

------
_delirium
Some discussion from last year, fwiw:
[https://news.ycombinator.com/item?id=4253538](https://news.ycombinator.com/item?id=4253538)

------
tracker1
Reading this, I imagine a protocol that supports two initial exchanges upon
connection/negotiation, as follows...

    
    
        s: http/2.0 {SERVER INFO}
        c: connect host/   <-- no path
        s: OK {server-cert/key}
        -- all further requests encrypted against public key/cert
        c: session-start {client key/cert}
        s: SESSION: {session id} ({domain1},{domain2},...)
        c: (COMMAND|get|put|post|delete) {PATH}
        s: OK
           or
           DENIED ### (reason)   <-- response code & reason
           or
           REDIRECT host/(path)  <-- if the file is physically on another backend
        c: {OTHER REQUEST HEADERS START}
    

after a session is started, the client may make other requests

    
    
        s: http/2.0 {SERVER INFO}
        c: connect host/{path}
        s: OK {server-cert/key} or DENIED ### Reason
        -- all further requests encrypted against public key/cert
        c: session-join {SESSION_ID} {client key/cert}
        s: OK or DENIED...
        c: {COMMAND} {path}
    

From there, the session id can serve as a key for server-side value
storage/lookup, etc., sent over the encrypted channel.
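
On the server side, the session id negotiated above would presumably just key
a store; a hypothetical minimal sketch (all names are illustrative):

```python
import secrets

# Hypothetical server-side store keyed by the session id handed out
# in the session-start exchange sketched above.
sessions: dict[str, dict] = {}

def session_start(client_cert: str) -> str:
    sid = secrets.token_hex(16)
    sessions[sid] = {"cert": client_cert, "values": {}}
    return sid

def session_join(sid: str, client_cert: str) -> bool:
    # DENIED unless the id exists and the presented cert matches.
    entry = sessions.get(sid)
    return entry is not None and entry["cert"] == client_cert

sid = session_start("client-cert-pem")
assert session_join(sid, "client-cert-pem")
assert not session_join(sid, "other-cert")
```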

------
mbetter
> Our general policy is to only add protocols if we can do a better job than
> the alternative, which is why we have not implemented HTTPS for instance.

~~~
asm89
A longer explanation of why there is no SSL in Varnish can be found here:
[https://www.varnish-cache.org/docs/trunk/phk/ssl.html](https://www.varnish-cache.org/docs/trunk/phk/ssl.html).

