Why HTTP/2.0 does not seem interesting (2012) (varnish-cache.org)
189 points by asm89 604 days ago | comments



  In my view, HTTP/2.0 should kill Cookies as a concept,
  and replace it with a session/identity facility, which makes 
  it easier to do things right with HTTP/2.0 than with HTTP/1.1.
Count me in. Cookies are a huge waste of bandwidth, and freaking annoying here in Europe: you cannot visit a site anymore without being warned that you are about to receive yet more cookies.

-----


> Cookies are...freaking annoying here in Europe as you cannot visit a site anymore without being warned you are about to receive yet even more cookies.

Seems like the blame for that lies not with cookies themselves, but with the EU's cookie law.

-----


At best that's a symptom.

Cookies are for session management; the central problem with cookies is that people expect servers to treat certain sessions as ephemeral, but those servers instead track them for long-term, creepy analysis. A related problem is that many sites require cookies just to show public content. Public-content sessions should be entirely ephemeral, meaning you shouldn't need a cookie in the first place. (The New York Times offends egregiously and persistently in this regard.)

You can easily comply with the EU law either by placing the notice on the login page or by not storing cookies at all. This means that anybody who abuses cookies in the above way has to be loud about it -- "we're not giving you an ephemeral presence like you think!" -- which not only surfaces the problem but also creates an incentive not to abuse cookies this way.

I am not saying that we should abandon sessions entirely, but it would be nice if the 'default' session treatment followed the rules that online banking uses: when the browser is closed, all sessions are done. If we did this, we'd want a 'persistent login' mechanism, which would take the form of an in-browser 'would you like to sign in?' dialogue accompanying a web site. Unlike current HTTP authentication, it would have to be somewhat asynchronous: you are shown the ephemeral version of the page while the browser itself asks you to confirm that you want to join your long-term session there. (I was originally going to recommend that the browser just handle a digital signature scheme, but of course that does not easily solve the 'logging on to Facebook from your sister's computer' problem. Hm.)
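The ephemeral/persistent distinction described above already lives in one attribute of the Set-Cookie header: a cookie without Expires/Max-Age dies when the browser closes, while a far-future Max-Age makes it a long-term identifier. A minimal sketch with Python's stdlib (names and values are made up for illustration):

```python
from http.cookies import SimpleCookie

# Ephemeral session cookie: no Expires/Max-Age, so the browser
# discards it on exit -- the "online banking" behaviour.
ephemeral = SimpleCookie()
ephemeral["sid"] = "a3f1c9"
ephemeral["sid"]["secure"] = True
ephemeral["sid"]["httponly"] = True

# Persistent tracking cookie: a far-future Max-Age keeps it across
# browser restarts, enabling long-term correlation of visits.
persistent = SimpleCookie()
persistent["uid"] = "track-me-42"
persistent["uid"]["max-age"] = 60 * 60 * 24 * 365 * 2  # two years

print(ephemeral.output())
print(persistent.output())
```

The only thing separating "done when the browser closes" from "tracked for two years" is whether that one attribute is set.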

-----


For advertisers this is a non-starter, because it prevents you from knowing the size of your audience. All sites would immediately begin requiring some form of "login" in your scenario in order to enable tracking again.

If you can't track uniques, you can't sell ads, and that's pretty much all there is to it. So there's huge incentive to undermine any scheme to prevent unique user tracking.

The solution is to somehow ban advertising, but that's biting off a bit more than simple user privacy.

-----


> For advertisers this is a non-starter, because it prevents you from knowing the size of your audience. All sites would immediately begin requiring some form of "login" in your scenario in order to enable tracking again.

If this is really a non-starter for advertisers, then mandating it will effectively ban advertising, non?

Remember: your business model is not sacrosanct! Disruption!

-----


I'm not arguing against banning advertising. Actually, I think that would be a fine idea.

Just recognize that unless you do ban advertising, the result of this change will not be what's intended. The intended result is that websites, in general, stop tracking anonymous users. Instead, it will result in every user being explicitly tracked.

The most important issue here is that the political feasibility of reducing tracking of anonymous users is equivalent to the political feasibility of banning advertising.

(I'm slightly overselling here -- in reality, a good amount of advertising of the "sponsorship" form would still work. But the impact would be large enough that most websites would force login as I describe rather than stop tracking people.)

-----


It will also kill most "free content" websites that live off the ads revenue. Are you ready for paywalls everywhere, even if subscription is 25 � / mo?

-----


Your symbol comes out as a Unicode replacement character for me. What was it supposed to be?

-----


It was the cent symbol, ¢; somehow it was broken in transmission.

-----


That time it worked. Not sure why it broke the first time.

-----


To sell ads you don't need to show the exact size of your audience; an estimate is enough, as TV proves.

-----


If you’re an Internet company trying to pull advertising dollars away from TV, one of your arguments is that you can do much better tracking. When you run a TV ad, everyone watching the show gets the same ad, and people watching different shows see the same ad a non-optimal number of times.

-----


That's a very good point. This would definitely happen to some degree. But it would still hurt internet advertising a good deal not to be able to track uniques, to show you an ad only once per day, to count how many people have been shown an ad, etc. even if you could know about how many people were on the site.

The result would likely be tracking via login as I describe, rather than true anonymity.

-----


Well, the law is implemented in an annoying fashion. For functional cookies there is no requirement to ask permission. But many websites choose to block all content while asking permission to place additional cookies, i.e. tracking cookies.

Like the author said:

  Cookies are, as the EU commision correctly noted, 
  fundamentally flawed, because they store potentially 
  sensitive information on whatever computer the user 
  happens  to use, and as a result of various abuses and
  incompetences, EU felt compelled to legislate a "notice 
  and announce" policy for HTTP-cookies.

-----


Oh yes, the regulation was rushed in by people not really sure what to do or what was happening.

Why it couldn't be a browser option, I have no idea.

-----


Isn't it already a browser option?

-----


Yes. But the default browser options are "accept all cookies, without asking", which goes against the spirit of the law, and also the letter.

-----


Please correct me if I'm wrong, but wasn't that the UK's law? If it was, they seemed to backtrack on it and say that it's fine as long as you mention it in your site's TOS

-----


It was an EU-wide law. Each country implemented it in similar but different ways.

-----


It is a EU Directive: https://en.wikipedia.org/wiki/Directive_(European_Union)

AFAIK the EU dictates the result, not the means. UK's first implementation of the e-Privacy Directive was unfortunate, but I don't think it means the directive was a bad idea.

-----


An "EU Directive" is a law; it's decided on in the same democratic manner that (say) UK laws are. It's like how in the UK there are "Acts of Parliament" (i.e. laws).

It's up to individual countries to implement the minimum, and likewise, each country will implement each one slightly differently. The UK law and the EU Directive are basically the same: "It's illegal to do X, Y and Z unless you do W," etc. There isn't really a means/results difference here.

-----


Thanks for that!

-----


The EU "Cookie law" is not limited to cookies. The term used in regulation 6 is "storage", which includes at least localStorage, headers, and remotely hosted solutions (off the top of my head).

-----


Basically, I can guarantee that killing cookies would lead to 0% adoption of HTTP 2.0 forever, due to the same inertia that is holding Python 3 or IPv6 back. Doesn't matter if there's a better mechanism included that does the same job.

-----


Provide session support in HTTP and keep cookies for backward compatibility. If the new support works right, people will migrate on their own.

-----


…unless the EU forbids tracking and session cookies altogether; but a ban on tracking cookies alone would be enough motivation already, I guess.

-----


I always thought that RFC 2616 felt more "pure" since it didn't mention cookies. I mean, HTTP was supposed to be stateless, right?

-----


What is the alternative to cookies? What does he mean by session/identity facility?

-----


There's some discussion in this thread: http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/...

in short,

   the need for a Session header to replace the use of Cookies
   for basic session management

-----


So is that just cookies by another name?

-----


Cookies with an expiry of "session" and the Secure flag set: yes. But this should be more secure, and implicitly scoped to a single session.

-----


And in theory less data, right? A session ID doesn't need to carry the kilobytes that cookies do.

-----


Why can't there be multiple sessions for different functionalities? I am not sure people are going to relinquish the cookie concept. All sessions do is have the "cookies" transported as part of the HTTP message rather than as a separate file (payload).

-----


I'm not sure I understand you.

I'm talking about a Session-ID header with a (say) 128-bit max length, not something with a few-kilobyte limit like a cookie.

Also, a GET request wouldn't send a payload, so I'm not sure what you mean.
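For scale: a hex-encoded 128-bit identifier is only 32 characters on the wire, versus the kilobytes a cookie header may carry. A sketch of minting such an ID with Python's stdlib (the Session-ID header name is this commenter's hypothetical, not part of any standard):

```python
import secrets

# 128 bits of cryptographic randomness, hex-encoded:
# 32 characters on the wire.
session_id = secrets.token_hex(16)

# What the hypothetical header might look like:
header = f"Session-ID: {session_id}"

print(header)
```

The server would treat the value as an opaque key into server-side state, so nothing sensitive ever needs to be stored client-side.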

-----


Cookies are sent in the Cookie and Set-Cookie headers. No separate files.
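Concretely, the whole round trip is plain header text, which Python's stdlib will both emit and parse (the header values here are made up):

```python
from http.cookies import SimpleCookie

# Server side: emit a Set-Cookie header line in a response.
jar = SimpleCookie()
jar["theme"] = "dark"
set_cookie_line = jar.output(header="Set-Cookie:")

# Client side: on later requests the browser echoes the pairs back
# in a Cookie header; parsing it is just header-text parsing.
parsed = SimpleCookie("theme=dark; lang=en")
print(set_cookie_line)
print(parsed["theme"].value, parsed["lang"].value)
```

No files are involved at the protocol level; where the browser chooses to persist the pairs between runs is an implementation detail.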

-----


What really frustrates me about HTTP as a protocol is that it provides the beginning of a framework to do session management using the WWW-Authenticate headers, but it's ignored because the site can't provide a good UX. Instead we end up with phishing, terrible login forms and poor security when people reimplement session management in Cookies.
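The framework in question is the 401 challenge/response: the server sends a WWW-Authenticate challenge, and the client replies with an Authorization header. A minimal sketch of the Basic scheme in Python (the credentials are obviously made up):

```python
import base64

# Server challenge on a 401 Unauthorized response:
challenge = 'WWW-Authenticate: Basic realm="example"'

# Client answer: base64("user:password") in the Authorization header.
# Base64 is encoding, not encryption -- one reason Basic auth is only
# safe over TLS, and part of why sites reimplement login in cookies.
creds = base64.b64encode(b"alice:s3cret").decode("ascii")
authorization = f"Authorization: Basic {creds}"

print(challenge)
print(authorization)
```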

-----


I've long wondered if we might have had better alternatives via WWW-Authenticate if major browsers had made it straightforward/possible to write auth plugins. (Actually it probably is possible, but AFAICT not without non-portable munging about in NPAPI.) If Mozilla actually do something to integrate Persona into their clients, will they do so in an open, repeatable way (with an API accessible to extensions) or will it just be more of the same oddball one-off coding that supported NTLM?

-----


This, a thousand times. Why haven't they fixed this already?

-----


A good analysis, but somewhat dated. It has been established, for instance, that with a compatible SSL lib and TLS NPN support, SPDY/HTTPbis can be served on the same socket and port as HTTPS. That said, I really like what PHK has to say about "HTTP routers".

-----


His suggestion is:

> One simple way to gain a lot of benefit for little cost in this area, would be to assign "flow-labels" which each are restricted to one particular Host: header, allowing HTTP routers to only examine the first request on each flow.

I don't understand what he's saying here. Who assigns a label to the "Host" header when? Is he proposing sticky cookies?

-----


This has been going on for a long time and nothing got done. HTTP-NG started in 1997. If you try to boil the ocean, you get nothing.

The reason SPDY is succeeding is that it is not trying to solve every problem and grind every axe.

-----


Well, transmitting text never seemed very interesting.

HTTP was nice because it was easy for programmers to write apps that worked over it: no binary protocol was involved, and reading ASCII strings is never complicated. That was good for a growing industry.

Now that most browsers are open source, why can't the IETF work out a binary protocol? BitTorrent is binary, and it's awesome and it's used. Why can't any browser adopt a binary protocol? Truly dynamic pages over the network? Why not? I'm sure some software already does that. Make one open source, make it work on Firefox or Chrome, and I guess things would start to light up.

-----


I wonder if header compression is primarily to allow for ubiquitous, large cookies.

Cookies were originally and with few exceptions remain a hack to try to add state to transactions that were not intended to be stateful.

If indeed the header compression is driven by the growing prevalence and size of cookies, then HTTP/2 is an effort to accommodate a hack. Not very interesting.

Some hacks that find their way into RFCs are difficult to remove because the transition would be unreasonably expensive, like replacing the "sophomoric" compression scheme in DNS with something more sensible like LZ77 (credit: djb). I expect some passionate arguments from web developers about the great expense of removing cookies from the HTTP standard and replacing them with a session facility, but I think the long-term benefits easily outweigh the short-term costs.
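For what it's worth, SPDY's header compression was a zlib-based scheme (HTTP/2 later moved to HPACK), and the win on a large, repetitive cookie header is easy to demonstrate with plain zlib as a stand-in:

```python
import zlib

# A request header block padded with a multi-kilobyte cookie -- the
# kind of payload header compression is accused of accommodating.
# Host, cookie names, and values are invented for illustration.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Cookie: " + b"; ".join(
        b"tracker%d=%s" % (i, b"x" * 64) for i in range(40)
    ) + b"\r\n\r\n"
)

compressed = zlib.compress(headers)
print(len(headers), "->", len(compressed))
```

The repetitive structure compresses to a small fraction of its size, which is exactly why compressing headers mostly amounts to subsidizing fat cookies.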

-----


Some discussion from last year, fwiw: https://news.ycombinator.com/item?id=4253538

-----


Reading this, I imagine a protocol that supports two initial exchanges upon connection/negotiation, as follows...

    s: http/2.0 {SERVER INFO}
    c: connect host/   <-- no path
    s: OK {server-cert/key}
    -- all further requests encrypted against public key/cert
    c: session-start {client key/cert}
    s: SESSION: {session id} ({domain1},{domain2},...)
    c: (COMMAND|get|put|post|delete) {PATH}
    s: OK
       or
       DENIED ### (reason)   <-- response code & reason
       or
       REDIRECT host/(path)  <-- if the file is physically on another backend
    c: {OTHER REQUEST HEADERS START}
after a session is started, the client may make other requests

    s: http/2.0 {SERVER INFO}
    c: connect host/{path}
    s: OK {server-cert/key} or DENIED ### Reason
    -- all further requests encrypted against public key/cert
    c: session-join {SESSION_ID} {client key/cert}
    s: OK or DENIED...
    c: {COMMAND} {path}
from there, the "session_id" can be a key for server-side value storage/lookup, etc... sent over the encrypted channel

-----


> Our general policy is to only add protocols if we can do a better job than the alternative, which is why we have not implemented HTTPS for instance.

-----


Longer explanation of why there is no SSL in Varnish can be found here: https://www.varnish-cache.org/docs/trunk/phk/ssl.html.

-----



