
HTTP/2.0 — Bad protocol, bad politics - ibotty
https://queue.acm.org/detail.cfm?id=2716278
======
akerl_
"HTTP/2.0 is not a technical masterpiece. It has layering violations,
inconsistencies, needless complexity, bad compromises, misses a lot of ripe
opportunities, etc."

I wish the article had spent more time talking about these things rather than
rambling about "politics".

"HTTP/2.0 could have done away with cookies, replacing them instead with a
client controlled session identifier."

That would have destroyed any hope of adoption by content providers and
probably browsers.

"HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause
increased CO2 pollution adding to climate change."

Citation? That said, I'm not particularly shocked that web standards aren't
judged by the power draw of whatever computing devices end up using them, or
by the energy sources feeding those devices' power grids.

"The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL
anywhere" agenda, despite the fact that many HTTP applications have no need
for, no desire for, or may even be legally banned from using encryption."

In the same paragraph, the author complains that HTTP/2.0 has no concern for
privacy and then that its proponents attempted to force encryption on everybody.

"There are even people who are legally barred from having privacy of
communication: children, prisoners, financial traders, CIA analysts and so
on."

This is so close to "think of the children" that I don't even know how to
respond. The listed groups may have restrictions placed on them in certain
settings that ensure their communications are monitored. But this doesn't
prevent HTTP/2.0 with TLS from existing: there are a variety of other avenues
by which their respective higher-ups can monitor the connections of those
under their control.

~~~
joosters
"HTTP/2.0 could have done away with cookies, replacing them instead with a
client controlled session identifier."

In fact there's no need for this to be tied in with HTTP/2.0 at all. Alternate
systems could be designed without regard to HTTP/1.x or HTTP/2.y, they just
have to agree on some headers to use and when to set them.

Making these kinds of changes as part of a new version of HTTP would just be
bloat on an already bloated spec; it's actually a good thing that the spec
writers did not touch this!

~~~
robomc
Anyone have a link where I can read about these 'client controlled session
identifiers'? I can't picture an approach to web app sessions which isn't
essentially equivalent to a cookie (in terms of privacy, and ability of the
client to control persistence).

~~~
phkamp
It's really very simple:

Instead of all the servers dumping cookies on you, you send a session-id to
them, for instance 127 random bits.

In front of those you send a zero bit, if you are fine with the server
tracking you, and you save the random number so you send the same one every
time you talk to that server. This works just like a cookie.

If you feel like you want a new session, you can pick a new number and send
that instead, and the server will treat that as a new (or just different!)
session.

If instead you send a one bit in front of the 127 random bits, you tell the
server that this "session" is over once _you_ consider it over, and that you
do not want them to track you.

Of course this can be abused, but not nearly as much as cookies are abused
today.

But it has the very important property that all requests get a single
fixed-size field to replace all the cookies we drag across the net these days.
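
A minimal sketch of what the client side might look like, assuming the
flag-bit-plus-127-random-bits layout described above (the encoding is purely
illustrative, not anything specified):

    import secrets

    def make_session_id(allow_tracking: bool) -> bytes:
        # 127 random bits; the top (128th) bit is the flag described above:
        # 0 = fine with the server tracking this session,
        # 1 = forget me once I consider this session over.
        n = secrets.randbits(127)
        if not allow_tracking:
            n |= 1 << 127
        return n.to_bytes(16, "big")  # one fixed-size 16-byte field per request

    # Reuse the same value to keep a session with a server, or call
    # make_session_id() again whenever you want a fresh, unlinkable one.
    sid = make_session_id(allow_tracking=True)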

~~~
robomc
Seems like anything a server could do via cookies, they could also do via your
client-generated session id.

Servers which currently drop lots of individual cookies on you would just
start storing that data in their server-side session data instead. In either
case they can tie your client to the same persistent information. Same for
situations where JavaScript sets or gets cookie values - these could all be
achieved via Ajax, storing the data server-side against your session id.

If anything, it possibly reduces my options as a client in situations where a
site previously dropped lots of cookies on me: now my only options are to
completely close my session or persist all the data, whereas before I could
maintain, say, my logged-in user session while removing the "is_a_jerk=true"
cookie.

~~~
Vendan
Well, yeah, for a basic implementation it's rather easy to track like that.
What about a system where the unique id is based on a random value hashed
with the domain name of the window? Then third-party trackers would get a
different id from you on each site, but your sessions would stay stable on
any given site.
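
Something like that could be sketched as follows (the secret, the hash, and
the truncation are illustrative choices, not a proposal):

    import hashlib, secrets

    # One long-lived random secret kept by the browser; regenerating it
    # resets every session at once.
    client_secret = secrets.token_bytes(16)

    def session_id_for(window_domain: str) -> str:
        # Stable per top-level site, but a third party embedded on two
        # different sites sees two unrelated values.
        digest = hashlib.sha256(client_secret + window_domain.encode())
        return digest.hexdigest()[:32]

    print(session_id_for("news.example"))  # same on every visit here
    print(session_id_for("shop.example"))  # different ID on another site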

~~~
robomc
True, it would make it difficult to do third-party tracking across multiple
sites (unless there was server-side information connecting your accounts, like
an email address), and leaving aside browser fingerprinting.

That could also just be achieved by disallowing third-party cookies, though -
it feels on the cusp of being a browser implementation problem (just stop
allowing third-party cookies).

------
jlebar
> _The same browsers, ironically, treat self-signed certificates as if they
> were mortally dangerous, despite the fact that they offer secrecy at trivial
> cost. (Secrecy means that only you and the other party can decode what is
> being communicated. Privacy is secrecy with an identified or authenticated
> other party.)_

I'm frustrated to read this myth being propagated. We should know better.

In the presence of only passive network attackers, sure, self-signed certs buy
you something. But we know that the Internet is chock-full of powerful active
attackers. It's not just NSA/GCHQ, but any ISP, including Comcast, Gogo,
Starbucks, and a random network set up by a wardriver that your phone happened
to auto-connect to. A self-signed cert buys you nothing unless you trust every
party in the middle not to alter your traffic [1].

If you can't know whom you're talking to, the fact that your communications
are private to you and that other party is useless.

I totally agree that the CA system has its flaws -- maybe you'll say that it's
no better in practice than using self-signed certs, and you might be right --
but my point is that unauthenticated encryption is not useful as a widespread
practice on the web.

Browser vendors got this one right.

[1] Unless you pin the cert, I suppose, and then the only opportunity to MITM
you is your first connection to the server. But then either you can never
change the cert, which is a non-option, or otherwise users will occasionally
have to click through a scary warning like what ssh gives. Users will just
click yes, and indeed that's the right thing to do in 99% of cases, but now
your encryption scheme is worthless. Also, securing first connections is
useful.

~~~
djcapelis
Yes, pinning the cert (i.e. TOFU - Trust On First Use) is exactly the right
way to treat self-signed certificates, and under that model they offer real
security. The idea that you can't do anything with self-signed certs and that
nothing makes them okay is a much more troublesome untruth, IMO.

Rejecting self-signed certs and only allowing users to use the broken CA PKI
model is the wrong choice. Browsers didn't get it right. The CA model is
broken, is actually being used to decrypt people's traffic, and though your
browser might pin a couple of big sites, it won't protect the rest very well
by default. It's a bad hack and we should fix the underlying issue with the
PKI. I believe moxie was right: a combination of perspectives + TOFU is the
way to do this.

Something else that works like this, that we all rely on, and that generally
seems more secure than most other things we use: SSH.
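
To make the TOFU idea concrete, here is a rough sketch assuming a simple
local pin store keyed by host; a real client would fingerprint the DER
certificate or the public key and handle legitimate rotation, so treat this
as illustrative only:

    import hashlib, json, ssl

    PIN_FILE = "pins.json"  # hypothetical local store: {host: fingerprint}

    def check_tofu(host: str, port: int = 443) -> bool:
        pem = ssl.get_server_certificate((host, port))  # no CA validation
        fingerprint = hashlib.sha256(pem.encode()).hexdigest()
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        if host not in pins:
            pins[host] = fingerprint        # first use: trust and remember
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[host] == fingerprint    # later: scream if the cert changed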

~~~
drzaiusapelord
>TOFU - Trust On First Use

The scenarios where this works are pretty limited - pretty much only a server
you set up yourself. Jane User has no idea if the first use of
ecommercesite.com is actually safe. You generally do, because you have
out-of-band access to that server to see the key or key fingerprint. Even
that can be thwarted by a clever MITM attack.

>Things that also work like this that we all rely on and generally seems more
secure than most other things we use: SSH.

Yeah, that's generally for system administration access, not general public
access on the web. For that kind of access you need higher safeguards, thus
the CA system we have today.

------
blfr
If _there are already so many ways to fingerprint via cookies, JavaScript,
Flash, etc. that it probably doesn't matter_, then why do cookies matter? That
the EU parliament decided to pass some weird law is not much of an argument.
They're not exactly known for their technological proficiency. And it did
nothing for privacy. It only annoys people[1].

Sure, we could start with cookies. However, that would break a lot of the web
with no immediate benefit.

On SSL everywhere (not "anywhere"): how many resources does it cost to
_negotiate SSL/TLS with every single smartphone in their area_? Supposedly,
not much[2]. I run https websites on an Atom server.

Frankly, that was rather unconvincing. Although it does seem likely that the
entire process was driven by the IETF trying to stay politically relevant in
the face of SPDY.

[1] [https://github.com/r4vi/block-the-eu-cookie-shit-
list](https://github.com/r4vi/block-the-eu-cookie-shit-list)

[2] [https://istlsfastyet.com/#cpu-latency](https://istlsfastyet.com/#cpu-
latency)

~~~
stingraycharles
For what it's worth, the cookie law in the EU is far more general than
cookies, and had a totally different origin. Originally, the law was meant to
require explicit consent when installing something on an electronic device,
to combat malware. However, some clever politicians later realized that
setting a cookie also "installs" something on an electronic device, and thus
the law now known as the cookie law was passed.

If a webserver were to set some secure session identifier, the same laws
would still apply -- just as installing software without explicit consent is
covered by the same law.

~~~
boracay
Not if you are using the session identifier for its intended purpose; see the
exceptions at:
[http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm#se...](http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm#section_2)

~~~
stingraycharles
We all know how broadly "intended usage" can be defined. Google has a cookie
that is required for logging in to Gmail (clearly intended usage), but reuses
that exact same cookie for tracking purposes.

Until there is actual legal precedent from people suing businesses that abuse
these abilities, I have no idea how to interpret these laws other than as
"very broad and vague".

~~~
boracay
I don't think the laws are vague so much as how we use cookies today. With
different mechanisms for different purposes it would (could) be much more
transparent to the end users how things work. It would be more like the "save
password" feature in various browsers. Of course since all the major browsers
vendors also make money from ads this isn't really in their interest.

------
zimbatm
The article is basically a rant. I was hoping the author would go more into
the layering-violation issues.

Most of the interesting stuff in HTTP/2.0 comes from the better multiplexing
of requests over a single TCP connection. It feels like we would have been
better off removing multiplexing from HTTP altogether and adopting SCTP
instead of TCP for the lower transport. Or maybe he had other things in mind.

> There are even people who are legally barred from having privacy of
> communication: children, prisoners, financial traders, CIA analysts and so
> on.

This argument is quite weak: SSL can easily be MITMed if you control the host,
generate custom certs, and make all the traffic go through your regulated
proxy.

~~~
aidenn0
In the early days of SPDY there was an article comparing SPDY to HTTP/1.1 over
SCTP. The short version was that, other than the lack of header compression,
it got nearly all the wins of SPDY, except:

1) Unless tunneled over UDP (which has its own problems), it failed to work
with NATs and stateful firewalls.

2) HTTPS-only environments (e.g. some big corporations) would not work with
it; SPDY will look enough like HTTPS to fool most of these.

3) The lack of Windows and OS X support for SCTP (without installing a
3rd-party driver) means tunneling over UDP.

~~~
gsnedders
What are the problems with SCTP over UDP, except the obvious extra (eight
byte) overhead?

~~~
aidenn0
UDP is just not as reliable over NAT (mainly due to crappy NAT
implementations). SCTP tries really hard to keep it working (including
heartbeats on idle connections), but implementing the 50% of SCTP that HTTP
benefits from on top of TCP will work exactly as well as HTTPS, whereas SCTP
over UDP runs into lots of tiny issues because nobody tested it on their $5
NAT chip before putting it in a wifi router or a DSL/cable modem.

------
cromwellian
I'll just leave this here: [http://www.w3.org/Protocols/HTTP-NG/http-ng-
status.html](http://www.w3.org/Protocols/HTTP-NG/http-ng-status.html)

In 1995, the process began for HTTP to be replaced by a ground-up redesign.
The HTTP-NG project went on for several years and failed. I have zero
confidence that a ground-up protocol that completely replaces major features
of the existing protocol used by millions of sites, and would require
substantial application-level changes (e.g. switching from cookies to some
other mechanism), would a) get through a standards committee in 10 years and
b) get implemented and deployed in a reasonable fashion.

We're far into 'worse is better' territory now. Technical masterpieces are the
enemy of the good. It's unlikely HTTP is going to be replaced with a radical
redesign any more than TCP/IP is going to be replaced.

Reading PHK's writings, his big problem with HTTP/2 seems to be that it is not
friendly to HTTP routers. So, a consortium of people just approved a protocol
that does not address the needs of his major passion, HTTP routers, and a
major design change is desired to support that use case.

I think the only way HTTP is going to be changed in that way is if it is
disrupted by some totally new paradigm that comes from a new application
platform/ecosystem, and not as an evolution of the Web. For example, perhaps
some kind of Tor/FreeNet-style system.

~~~
phkamp
Yes, clearly nothing has changed since 1995 at all, obviously nobody has
gotten any wiser or anything.

My big problem with HTTP/2 is that it's crap that doesn't solve any of the big
problems.

~~~
MichaelGG
You say that, but as a small site operator, my experience is that I add the
characters "SPDY" to my nginx config, and clients are happy cause stuff loads
faster.

Why did nothing happen after the HTTP/1.1 spec? Everyone sat around until
Google decided to move stuff forward.

------
stephen_g
I really don't understand this bit:

> Local governments have no desire to spend resources negotiating SSL/TLS with
> every single smartphone in their area when things explode, rivers flood, or
> people are poisoned.

I remember some concerns about the performance of TLS five to ten years ago,
but these days is anybody really worried about that? I remember seeing
benchmarks (some from Google when they were making HTTPS the default, as well
as from other people) showing that it hardly adds a percent of extra CPU or
memory usage, or something like that.

Also, these days HTTPS certificates can be had for similar prices to domains,
and hopefully later this year the Let's Encrypt project should mean free high
quality certificates are easily available.

With that in mind, forcing HTTPS is pretty much going to be only a good thing.

~~~
workermonkey
On my load balancers it is more like 5-10%. Not terrible, but not trivial
either. Also, it multiplies throughout the environment. If you are dealing
with PII or financials, everything needs to be encrypted on the wire.

Load balancer decrypts, looks at headers, decides what to do, re-encrypts,
down to the app tier, decrypt, respond encrypted, etc. I'm not saying that is
a bad thing, but that's why some people get cranky.

Somewhat unrelated: compressed headers in HTTP 2.0 make sense if you only
think about the browser; they save 'repeated' information (see the sketch
below). The problem is that the LB has to decrypt them every time anyway, so
someone still has to do the work - it just isn't on the wire. Server push, on
the other hand, could be awesome for performance (pre-caching the resources
for the next page in a flow) but also has the potential for abuse.
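
To make the "repeated information" point concrete, here is a rough sketch
with the Python hpack library (the header values are made up): the second
encode of the same header set comes out much smaller, because names and
values are now referenced from the encoder's dynamic table rather than
resent.

    from hpack import Encoder  # pip install hpack

    headers = [
        (":method", "GET"),
        (":path", "/index.html"),
        ("user-agent", "Mozilla/5.0 (X11; Linux x86_64)"),
        ("cookie", "session=0123456789abcdef"),
    ]

    enc = Encoder()
    first = enc.encode(headers)   # literal names and values on the wire
    second = enc.encode(headers)  # mostly short dynamic-table references
    print(len(first), len(second))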

~~~
tracker1
Why not just run unencrypted behind the load balancer? Unless they're on
wholly different networks, it shouldn't be needed. SSL termination makes
sense at a load balancer, reverse proxy, etc.

~~~
stephen_g
Like how Google used to do it? [http://cdn01.androidauthority.net/wp-
content/uploads/2014/06...](http://cdn01.androidauthority.net/wp-
content/uploads/2014/06/SSL-Added-and-Removed-Here.jpg)

------
joosters
_Twenty-six years later, [...] the HTTP protocol is still the same._

Not true at all. Early HTTP (which became known as HTTP/0.9) was _very_
primitive and very different from what is used today. It was five or six years
until HTTP/1.0 emerged, with a format similar to what we have today.

~~~
TazeTSchnitzel
Thank you, I was going to point it out myself. Early HTTP was literally just
this:

      GET /somepath

That's it. Nothing more (well, that and a CRLF), nothing less. The response
was equally barren: just pure HTML. Existent page? HTML. Non-existent page?
HTML error message. Plaintext file? HTML. (The text is wrapped in a
<plaintext> tag.) Anything else? Probably HTML, though you could also deliver
binary files this way (good luck reliably distinguishing HTML and binary
without a MIME type)!

I actually like HTTP/0.9. If you're stuck in some weird programming language
without an HTTP/1.1 client (HTTP/1.0 is useless because it lacks Host:, while
HTTP/0.9 actually does support shared hosts - just use a fully-qualified URI),
you can just open a TCP connection to a web server and send a GET request the
old-fashioned way, as in the sketch below.
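
A rough sketch (most modern servers no longer answer bare HTTP/0.9 requests,
so this is illustrative):

    import socket

    # No headers, no version string; the reply is just the body,
    # with no status line and no MIME type.
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET /\r\n")
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    print(data.decode(errors="replace"))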

------
bayesianhorse
After learning how public key infrastructure really works, I've become quite
disillusioned by the security it seems to provide. After all, what use is a
certification authority, if basically any authoritarian state and intelligence
service can get on that list? In some countries the distinction between
government officials, spies and organized crime is already extremely blurry...

~~~
stephen_g
You can remove their CA certificates from your browser/OS, and if you click
the lock icon in your browser you can check which CA signed the certificate
of the website you're on.

You're right to say that the PKI doesn't work if you just want to trust any
site that shows a padlock in the address bar, but it's useful if you do a
little work.

~~~
darkarmani
> You're right to say that the PKI doesn't work if you just want to trust any
> site that shows a padlock in the address bar, but it's useful if you do a
> little work.

No it isn't. It still suffers from not respecting name constraints. You can't
set up trust for only a list of domains. If I run my own CA for some list of
domains, there is no way I can prevent my CA from being able to sign for
google.com. Instead people use wildcard certs so they can be delegated
responsibility for a subdomain.
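
For what it's worth, X.509 does define a name-constraints extension
(RFC 5280) that expresses exactly this; the practical problem is that clients
have historically ignored it. A sketch of attaching the extension to a
self-signed CA with the Python cryptography package - the names and validity
period are made up:

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Constrained CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        # Only example.com and its subdomains should chain to this CA --
        # *if* the relying client actually checks the constraint.
        .add_extension(x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("example.com")],
            excluded_subtrees=None), critical=True)
        .sign(key, hashes.SHA256())
    )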

~~~
mike_hearn
You can get a certificate for a fixed list of domains.

If name constraints were implemented more widely, that'd be great. But someone
has to write the code, debug it, ship it, etc, and then you have to wait until
lots of people have upgraded, etc, and ultimately wildcard certs work well
enough.

~~~
darkarmani
> If name constraints were implemented more widely, that'd be great. But
> someone has to write the code, debug it, ship it

Without name constraints, I assert the system is inherently broken. You
cannot limit trust to anything other than yes/no.

> ultimately wildcard certs work well enough.

"Well enough" is arguable. The problem is that the wildcard cert's private
key lives on many machines, so your attack surface grows with each machine,
rather than each machine having its own private key.

------
skywhopper
Interesting that one of the primary developers of FreeBSD does not understand
one of the two major use cases for SSL, namely identity assurance. News sites
and local governments don't care about privacy when transmitting news or
emergency information, true, but citizens should be concerned about making
sure that information is coming from who they think it's coming from.

I'm no fan of HTTP/2, but this article does not effectively argue against it.
Too many bare assertions without any meat to them. And when you fail to
mention a major purpose of a protocol (SSL) you dismiss as useless, you lose a
lot of credibility.

~~~
phkamp
SSL does not provide identity assurance (= authentication); the CA cabal
does. SSL just does the necessary math for you.

CAs are trojaned - that's documented over and over by bogus certs in the
wild - so in practice you have no authentication when it comes down to it.

Authentication is probably the hardest thing for us, as citizens, to get,
because all the intelligence agencies of the world will attempt to trojan it.

Secrecy, on the other hand, we can have trivially with self-signed certs, but
for some reason browsers treat those as if they were carriers of Ebola.

~~~
arielby
That's an argument against showing scary warnings on self-signed certs, not
against SSL. It would be nice if there were an httpr:// scheme that would be
like HTTPS but without certificate checking.

~~~
phkamp
I'm not arguing against SSL.

I'm arguing against making SSL mandatory, because that will force NSA to break
it so they can do their work, and then we will have nothing to protect our
privacy.

More encryption is not a solution to a political problem:
[http://queue.acm.org/detail.cfm?id=2508864](http://queue.acm.org/detail.cfm?id=2508864)

~~~
cesarb
> I'm arguing against making SSL mandatory, because that will force NSA to
> break it so they can do their work, and then we will have nothing to protect
> our privacy.

That line of reasoning sounds bizarre to me. It sounds like "don't add a lock
to your door, because that will force the criminals to break the lock, and
then your door will be unlocked".

~~~
phkamp
A bad analogy is like a wet screwdriver.

Try this one, it's better, but not perfect:

Imagine what would happen if some cheap invention turned all buildings into
impenetrable fortresses unless you had a key for the lock.

Now police cannot execute a valid judge-sanctioned search warrant.

How long do you think lawmakers would take to react?

~~~
cesarb
You were talking about the NSA breaking SSL, not lawmakers forbidding it.

If the problem is with the analogy, without analogies this time:

> I'm arguing against making SSL mandatory, because that will force NSA to
> break it so they can do their work, and then we will have nothing to protect
> our privacy.

Without SSL, our privacy is unprotected, since eavesdroppers can read our
traffic. Now add SSL, and eavesdroppers cannot read the traffic. Then NSA
breaks it, and eavesdroppers can read our traffic again - we've just circled
back to the beginning. We will have nothing to protect our privacy, but we
already had nothing to protect our privacy before we added SSL; and in the
meantime before the NSA breaks it, we had privacy.

And it assumes that the NSA will be able to break it, and that the NSA is the
only attacker which matters.

~~~
phkamp
NSA does what they do because lawmakers told them to and gave them the money --
you cannot separate these two sides of the problem.

There are many ways to break SSL; the easiest, cheapest, and most in tune with
the present progression towards police states is to legislate key escrow.

Google "al gore clipper chip" if you don't think that is a real risk.

------
cflat
Nitpick - it's HTTP/2, not HTTP/2.0.

We've all learned from the failure of SNI and IPv6 to gain widespread
adoption (thank you, Windows XP and Android 2.2). HTTP/2 has been designed
with the absolute priority of graceful backward compatibility. This creates
limits and barriers on what you can do. Transparent and graceful backward
compatibility will be essential for adoption.

I agree, HTTP/2 is better - not perfect. But better is still better.

~~~
nly
Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly the
same failure case as IPv6 (the lack of adoption due to HTTP/1.x being 'good
enough').

~~~
dragonwriter
> Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly the
> same failure case as IPv6 (the lack of adoption due to HTTP/1.x being 'good
> enough').

HTTP/2 isn't really like IPv6, in that fewer people need to act to adopt it --
if the browser vendors do (which they already are) and the content providers
do (which some of the biggest already are), then it's used. It's specifically
designed to be compatible with existing intermediate layers (particularly
when used with TLS on https connections), so that as long as the endpoints
opt in, no one else needs to get involved -- and one of the biggest content
providers is also a browser vendor and one of the biggest HTTP/2 proponents...

IPv6 requires support at many more levels (client/server/ISP infrastructure
software & routers, ISPs actually deciding to use it when their
hardware/software supports it, application software on both the client and
server ends, etc.), which makes adoption more complex.

~~~
ethbro
_> > Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly
the same failure case as IPv6 (the lack of adoption due to HTTP/1.x being
'good enough')._

 _> HTTP/2 isn't really like IPv6 in that fewer people need to act to adopt it
-- if the browser vendors do (which they are already) and the content
providers do (which some of the biggest are already), then its used._

I, for one, welcome the HTTP/1.x+2 future of 5-10 years from now. (Obligatory
[http://xkcd.com/927/](http://xkcd.com/927/) )

------
dragonwriter
> The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL
> anywhere" agenda, despite the fact that many HTTP applications have no need
> for, no desire for, or may even be legally banned from using encryption.

What is the basis of this claim? ISTR that SPDY and the first drafts of HTTP/2
were TLS-only, and that some later drafts had provisions which either required
or recommended TLS on public connections but supported unencrypted TCP for
internal networks, but the current version seems to support TLS and
unencrypted TCP equally.

~~~
wmf
Unencrypted HTTP/2 is a fake concession that isn't usable in the real world.

~~~
dragonwriter
The only respect in which that appears to be true is that no major browser
vendor has yet committed to supporting HTTP/2 other than over TLS-encrypted
connections.

But given that both some HTTP/2-supporting browsers _and_ much of the
server-side software supporting HTTP/2 are open source, and given that all
the logic will be implemented and the only change will be allowing it on
unencrypted TCP connections, it'll probably be fairly straightforward for
anyone who cares enough to put together a proof of concept of the value of
unencrypted HTTP/2.

OTOH, the main _gain_ of HTTP/2 seems to be on secure connections, so I'm not
sure why one would _want_ unencrypted HTTP/2 over unencrypted HTTP/1.1, and
given that no browser seems to have short-term plans to stop supporting
HTTP/1.1, there's probably no real use case.

But the _protocol_ supports unencrypted use just fine.

~~~
aidenn0
IIRC, the main issue with non-TLS HTTP/2 is broken web proxies. This is why
Google deployed SPDY over https only, and also why they didn't use SCTP as
the basis, but instead reinvented about 50% of it on top of TCP.

Google didn't want something that would break even a tiny percentage of
existing installs.

------
teddyh
As I keep having to mention: The omission of the use of SRV records is
maddening, and the reasons given don’t make any sense.

[https://news.ycombinator.com/item?id=8550133](https://news.ycombinator.com/item?id=8550133)

[https://news.ycombinator.com/item?id=8404788](https://news.ycombinator.com/item?id=8404788)
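
For context: an SRV record lets the DNS itself hand out per-service targets,
ports, priorities and weights - the failover and load-spreading that
otherwise gets done with CDNs or HTTP-level redirects. HTTP never got a
registered SRV usage, so the zone entries below are purely hypothetical:

    ; hypothetical records: try web1 first, fall back to backup
    _http._tcp.example.com.  3600 IN SRV 10 60 8080 web1.example.com.
    _http._tcp.example.com.  3600 IN SRV 20  0 8080 backup.example.com.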

~~~
tobz
From the responses given to your linked comments, it seems like there was a
technically valid reason not to use them: performance. You just kept pushing
that it wasn't a good enough reason. Are you just going to keep posting and
asking until people decide you're right and they're wrong?

~~~
marcosdumay
Take a better look at the responses. There's no inherent reason for the
performance degradation (it's mainly because of noncompliance in BIND), and
without it we are stuck with the much slower HTTP redirects.

~~~
teddyh
Yes. Also, CDNs and having your servers in the Cloud™ for automatic failover
would be _no longer necessary_. So _of course_ every company providing these
services is against it. These companies are also, of course, the
"stakeholders" interested in the development of HTTP/2.

------
ak217
The argument about computing power and CO2 pollution is misguided. HTTP/2 no
longer requires encryption, so the TLS/non-TLS trade-offs remain the same as
before (and their compute impact is mitigated by hardware AES support, etc.).
The other relevant changes (SPDY, header compression, push) reduce the number
of context switches and network round-trips required and the total time
required for devices to spend in high power mode, and for the user to spend
waiting. That results in a reduction, not an increase, in total power
consumption.

Taking server CPU utilization numbers as an indicator of total power
consumption is pretty misguided in this context, and my understanding is that
even those are optimized (and will continue to be optimized) to the point
where TLS and SPDY have negligible overhead (or, in the case of SPDY, may even
result in lower CPU usage).

~~~
phkamp
Show me the mainstream browsers that will use HTTP/2 without SSL/TLS?

The difference between you and me may be that I have spent a lot of time
measuring computers' power usage doing all sorts of things. You seem to be
mostly guessing?

~~~
ak217
I don't have Kill-a-Watts or rack PDU data for fleets of webservers,
unfortunately. What I do have is CPU performance data from running with and
without SSL gateways and SPDY in production, and all I can say is that the
server's CPU utilization is not significantly impacted by them. I also have
client-side data that shows substantial load speed improvements when using
SPDY. That should result in a C-state profile improvement on the CPU, but I'll
need to collect more data to confirm.

------
ibotty
phk (Poul-Henning Kamp) is the lead developer of Varnish, in case people are
not familiar with him.

~~~
brohee
His technical prowess is only matched by his privacy-champion credentials;
see e.g. his Operation ORCHESTRA talk
[https://www.youtube.com/watch?v=fwcl17Q0bpk](https://www.youtube.com/watch?v=fwcl17Q0bpk)

~~~
tomwilde
> Local governments have no desire to spend resources negotiating SSL/TLS with
> every single smartphone in their area when things explode, rivers flood, or
> people are poisoned.

That's one horrible argument, though. The cost of a text-based protocol over
Ethernet and TCP greatly outweighs the cost of the encryption process.

Yes, of course encrypting things will increase computational requirements,
yet the cost is negligible in comparison to the problem being solved
(stopping the trade of personal data).

It's hard for me to associate a privacy champion with these statements.

------
bad_user
" _HTTP /2.0 could have done away with cookies, replacing them instead with a
client controlled session identifier._"

It doesn't make much sense to get rid of cookies alone, not when there are
multiple ways of storing stuff in a user's browser, let alone for
fingerprinting - [http://samy.pl/evercookie/](http://samy.pl/evercookie/)

Getting rid of cookies doesn't really help with privacy at this point - and
just wait until IPv6 becomes more widespread. Speaking of which, that EU
requirement is totally stupid.

The author also makes the mistake of thinking that we need privacy protections
only from the NSA or other global threats. That's not true, we also need
privacy protections against local threats, such as your friendly local
Internet provider, that can snoop in on your traffic and even inject their own
content into the web pages served. I've seen this practice several times,
especially on open wifi networks. TLS/SSL isn't relevant only for
authentication security, but also for ensuring that the content you receive is
the content that you asked for. It's also useful for preventing middle-men
from seeing your traffic, such as your friendly network admin at the company
you're working for.

For example if I open this web page with plain HTTP, a middle-man can see that
I'm reading a rant on HTTP/2.0, instead of seeing just a connection to
queue.acm.org. From this it can immediately build a useful profile of me,
because only somebody with software engineering skills would know about HTTP,
let alone read a rant on the IETF's handling of version 2.0. It could also
inject content, such as ads or a piece of JavaScript that tracks my movements
or whatever. So what's that line about "_many HTTP applications have no need
for [SSL]_" doing in a rant lamenting the state of privacy?

HTTP/2.0 probably has flaws, but this article is a rant about privacy and I
feel that it gets it wrong, as requiring encrypted connections is the thing
that I personally like about HTTP/2.0 or SPDY. Having TLS/SSL everywhere would
also make it more costly for the likes of the NSA to do mass surveillance of
users' traffic, so it would have benefits against global threats as well.

~~~
phkamp
Actually, getting rid of cookies would fit almost all HTTP requests into a
single packet, so there are tangible technical benefits, even without the
privacy benefits.

You seem confused about cryptography.

Against the NSA we only need secrecy; privacy is not required.

Likewise, integrity does not require secrecy, but it does require
authentication (which doesn't require secrecy either).

You don't think that anybody can figure out what you are doing when you open a
TCP connection to queue.acm.org right after they posted a new article, even if
that connection is encrypted? Really? How stupid do you think the NSA is?

Have you never heard of meta-data collection ?

And if you like your encrypted connections so much, you should review the
certs built into your browser: that's who you trust.

I'll argue that's not materially better than unencrypted HTTP.

(See also Operation ORCHESTRA; I don't think you perceive the scale of what
the NSA is doing.)

~~~
bad_user
The line on secrecy vs privacy doesn't make sense. Actually, help me out,
because I'm not a native English speaker - if you're implying that the NSA
should be able to snoop in on my traffic without a warrant, as long as it
keeps that secret, then I beg to differ.

Visiting an article doesn't happen only right after it was posted. And sure,
the NSA can certainly figure out ways to track you, but their cost will be
higher. Just like with fancy door locks and alarm systems, making it harder
for thieves to break in means the probability of it happening drops.
Imperfect solutions are still way better than no protection at all (common
fallacy no. 1).

All such rants also ignore that local threats are much more immediate and
relevant than the NSA (common fallacy no. 2).

On trusting the certificate authorities built into my browser, of course, but
then again this is a client-side issue, not one that can be fixed by HTTP 2.0
and we do have certificate pinning and even alternatives championed by means
of browser add-ons. Against the NSA, nothing is perfect of course, unless
you're doing client-side PGP encryption on a machine not connected to the
Internet. But then again, that's unrelated to the topic of HTTP/2.0.

~~~
phkamp
With unencrypted HTTP, NSA can just grab the packets on the fiber and search
for any keyword they want.

With a self-signed cert they would have to do a Man In The Middle attack on
you to see your traffic.

They don't have the capacity (or ability! many of their fiber taps are
passive) to do that to all the traffic all the time.

The problem with making a CA-blessed cert a requirement for all or even most
of the traffic is that it forces the NSA to break SSL/TLS or the CAs
definitively; otherwise they cannot do their job.

Fundamentally this is a political problem; just slapping encryption on traffic
will not solve it.

But it can shift the economy of the situation -- but you should think
carefully what way you shift it.

~~~
cesarb
> The problem with making a CA-blessed cert a requirement for all or even most
> of the traffic, is that it forces the NSA to break SSL/TLS or CAs
> definitively, otherwise they cannot do their job.

Isn't the whole point of pervasive authenticated encryption to prevent the NSA
from "doing their job" (at least the spying part of it)?

> But it can shift the economy of the situation -- but you should think
> carefully what way you shift it.

It shifts more than the economy of the situation. It also forces a shift from
passive attacks to active attacks, which are easier to detect and harder to
justify. Forcing the attacker to justify their acts has a political effect.

~~~
phkamp
Pervasive authenticated encryption will not prevent the NSA from doing their
job, as long as lawmakers think they should do their job.

Instead you will see key-escrow laws or even bans on encryption.

You cannot solve the political problem by applying encryption.

~~~
bad_user
Laws against encryption will never be international. In the US it happened
before [1], so there is indeed precedent. But such laws prevent a country from
being competitive in the international marketplace, so many countries will
not agree to them, just as they don't agree with IP laws. And yes, I also
believe that this trend of having national firewalls for censoring content
will not last long, for the same reason.

What I love about technology is that it cannot be stopped with lawmaking.

[1]
[http://en.wikipedia.org/wiki/Export_of_cryptography_from_the...](http://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States)

~~~
phkamp
The historical evidence that technology cannot be stopped by lawmakers is very
weak, if it even exists in the first place.

Very few lawmakers have really tried, and few technologies have been worth it
in the first place.

The relevant question is probably whether technology can be delayed by
lawmaking, and for how long.

There is no doubt, however, that policies can be changed; most places it just
takes elections, but a few places may need a revolution.

Thinking this is a problem you can solve by rolling out SSL or TLS is
incredibly naive.

------
peterwwillis
_" HTTP/2.0 will be SSL/TLS only"_

Yes! Finally 99% of users won't be hacked by a default initial plaintext
connection! We finally have safe(r) browsing.

 _" , in at least three out of four of the major browsers,"_

You had ONE JOB!

Jokes aside, privacy wasn't a consideration in this protocol. Mandatory
encryption is really useful _for security_ , but privacy is virtually
unaffected. And the cookie thing isn't even needed; every browser today could
implement a "click here to block cookies from all requests originating from
this website" button.

We need the option to remove encryption. But it should be the _opposite_ of
what we currently do, which is to default to plaintext unless you type an
extra magic letter into the address (which no user ever understands, and is
still potentially insecure). We _should_ be secure by default, but allow non-
secure connections if you type an extra letter. Proxies could be handled this
way by allowing content providers to explicitly mark content (or domains) as
plaintext-accessible.

The problem I fear is that as everyone adopts HTTP/2 and HTTP/1.1 becomes
obsolete (not syntactically but as a strict protocol), it may no longer be
possible to write a quick-and-dirty HTTP implementation. Before, I could use
a telnet client on a router to test a website; now the router _may_ need an
encryption library, a binary protocol parser, and decompression and
multiplexing routines just to get a line of text back.

------
higherpurpose
HTTPS can also be used to protect you from malware [1] [2] and stop censorship
[3]. If anything, news sites should be among the first to adopt strong HTTPS
connections since many people visit them and the news also needs to not be
censored.

[1] [https://citizenlab.org/2014/08/cat-video-and-the-death-of-
cl...](https://citizenlab.org/2014/08/cat-video-and-the-death-of-clear-text/)

[2] [http://www.ap.org/Content/AP-In-The-News/2014/AP-Seattle-
Tim...](http://www.ap.org/Content/AP-In-The-News/2014/AP-Seattle-Times-Upset-
About-FBI-Impersonation)

[3] [http://ben.balter.com/2015/01/06/https-all-the-
things/](http://ben.balter.com/2015/01/06/https-all-the-things/)

As for the performance side, SPDY is probably not perfect, but it seems to
generally improve over current HTTP, even over a secure connection. But even
if it didn't, using HTTPS seems to add negligible overhead, and compared to
the security it gives I think it's well worth it.

[https://www.httpvshttps.com/](https://www.httpvshttps.com/)

------
cnst
A very well written rant.

HTTP is supposed to have had opportunistic encryption, as per RFC 7258
(Pervasive Monitoring Is an Attack,
[https://news.ycombinator.com/item?id=7963228](https://news.ycombinator.com/item?id=7963228)),
but it looks like the corporate overlords don't really understand why it is at
all a problem for independent one-man projects to acquire and update
certificates every year, for every little site.

As per a recent conversation with Ilya Grigorik over at nginxconf, Google's
answer to the cost and/or maintenance issues of https --- just use CloudFlare!
Because letting one single party do MITM for the entire internet is so sane
and secure, right?

------
arielby
What exactly is the difference between cookies and session identifiers?
There's no law requiring you to send kilobytes of cookies
(news.ycombinator.com gets by with a 22-byte cookie). Of course the way HTTP
cookies handle ambient authority is _rather imperfect_, but that can be
solved within the system.

~~~
phkamp
The difference is who makes the decisions: the session id is controlled by
the client, cookies by the server.

Facebook, Twitter, etc. track you all over the internet with their cookies,
even if you don't have an account with them, whenever a site puts up one of
their icons for you to press "like".

With client-controlled session identifiers, users would get to choose if they
wanted that.

The reason YC gets by with 22 bytes is probably that they're not trying to
turn the details of your life into their product.

------
kalleboo
> The so-called "multimedia business," which amounts to about 30% of all
> traffic on the net, expresses no desire to be forced to spend resources on
> pointless encryption.

I thought that "pointless encryption" was basically the definition of DRM? And
the largest video site, traffic-wise (YouTube) is already encrypted.

~~~
stingraycharles
I thought Netflix was the largest video site, traffic-wise?

~~~
kalleboo
Seems like it depends on who you ask. I imagine something closer to the truth
is that Netflix is bigger in the US but YouTube is bigger globally.

------
ademarre
> _the IETF can now claim relevance and victory by conceding practically every
> principle ever held dear in return for the privilege of rubber-stamping
> Google 's initiative._

What principles does he claim the IETF is conceding here?

------
GotAnyMegadeth
> One remarkable property of this name is that the abbreviation "WWW" has
> twice as many syllables and takes longer to pronounce.

World Wide Web

Dou Ble U Dou Ble U Dou Ble U

I count three times as many, is this an accent thing?

~~~
tobz
I think it's an accent thing. When I say WWW at normal speed, it sounds more
like dubble-u dubble-u dubble-u. American, east coast USA.

~~~
barrkel
Unless you say "duh blew duh blew duh blew", you're still using 9 syllables
rather than six.

~~~
tobz
I'm not arguing that it's nine syllables, just saying that it's most likely an
accent / speed of speech thing as far as the author's six-vs-four thing goes.

------
jbb555
Couldn't agree with most of this more. HTTP/2.0 seems to me to be an entirely
pointless set of unwanted complications and agendas disguised as technical
improvements.

------
johngd
"Has everybody in IETF forgotten CNN's exponential traffic graph from 14 years
ago?"

Any ideas?

~~~
Matthias247
I guess it's a reference to September 11th, 2001.

------
cpach
Upvoted, not because I think it is a particularly good article, but because
we seem to have a pretty good discussion based on it.

------
phkamp
I have to go shopping and cook dinner, but I'll be back in a couple of hours.

Poul-Henning

------
biggot_man
I don't care about the IETF or HTTP/2.

I'll just keep using HTTP/1.1. It works on my computer.

------
tanglesome
When an article about technical problems with a protocol starts whining about
how the protocol will increase CO2 pollution, I know it's BS. WTH was the ACM
thinking, wasting our time with this crap?

------
acaloiar
This is the sort of self-important technorant that I've come to despise in
tech news. It is another example of a blogger pandering to readers' absurd
addiction to outrage. HTTP/2.0 is not an outrage. It is imperfect, just as
HTTP/1.1 is imperfect and ill-suited to today's rich web applications, which
were not envisioned at its inception.

[edit] It's a bit ironic that this story was delivered to many of us (via
Hacker News) over SPDY--HTTP/2.0's dominant source of inspiration.

~~~
MatthewWilkes
Considering phk works as a developer of a very popular HTTP-based application
(Varnish) and contributed to competing specs for HTTP/2, he's hardly a
pandering blogger. He's an expert annoyed that improvements to the standards
in his field are absurdly slow, and he's perfectly entitled to voice his ire.

~~~
acaloiar
He is no doubt perfectly entitled to rant. I'm also fairly confident that phk
has contributed to more important projects vital to the tech ecosystem than I
ever will. I simply argue that there are more constructive and productive ways
to go about pointing out a protocol's flaws.

Phk has had issues with the process for quite some time, and I feel that his
embitterment about the process has jaded his view of the protocol. Phk on
SPDY/HTTP/2.0 [http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2014AprJun/0815.html)

~~~
jerf
"I simply argue that there are more constructive and productive ways to go
about pointing out a protocol's flaws."

Which he has done: [https://www.varnish-
cache.org/docs/trunk/phk/http20.html](https://www.varnish-
cache.org/docs/trunk/phk/http20.html)

It's a pet peeve of mine when people just fling this sort of accusation about
as if every _word-count limited column_ isn't any good unless it's 20 times
longer and basically includes half of Wikipedia transitively. It's a column in
a trade magazine. There isn't a place for a detailed technical discussion
there, so complaining that there isn't one is complaining about something that
can't be fixed.

Besides, cards on the table, I think he's basically correct, cynicism and
all. Sometimes the right answer is to just say no, and failing to say no is
not a good thing when that is what is called for.

~~~
acaloiar
"Which he has done: [https://www.varnish-
cache.org/docs/trunk/phk/http20.html"](https://www.varnish-
cache.org/docs/trunk/phk/http20.html")

You are highlighting my point. The arguments made there are far more
pragmatic.

My qualms are with how his points are made in the original blog post, not
with the points he is making, many of which are quite valid.

With that said, the totality of his argument allows the perfect to be the
enemy of the good, which in my opinion is an invariably flawed position.

