
TLS in HTTP/2 - bagder
http://daniel.haxx.se/blog/2015/03/06/tls-in-http2/
======
higherpurpose
> _Internet Explorer people have expressed that they intend to also support
> the new protocol without TLS, but when they shipped their first test version
> as part of the Windows 10 tech preview, that browser also only supported
> HTTP/2 over TLS. As of this writing, there has been no browser released to
> the public that speaks clear text HTTP/2. Most existing servers only speak
> HTTP/2 over TLS._

I'm hoping it will stay this way. Defaults are important, so it's the
platforms' responsibility to support and enforce the "safer" options.

> _The fact that it didn’t get in the spec as mandatory was because quite
> simply there was never a consensus that it was a good idea for the protocol.
> A large enough part of the working group’s participants spoke up against the
> notion of mandatory TLS for HTTP/2. TLS was not mandatory before so the
> starting point was without mandatory TLS and we didn’t manage to get to
> another stand-point._

Which is interesting, because I remember quite clearly the "Snowden
discussion" at the IETF, and there was consensus for an "encrypt everything
Internet".

> _There is a claimed “need” to inspect or intercept HTTP traffic for various
> reasons. Prisons, schools, anti-virus, IPR-protection, local law
> requirements, whatever are mentioned._

Right. So the IETF made it non-mandatory so law enforcement can get their
"master keys", in a way. Also, this "anti-virus" kind of protection is
basically what Superfish was. I'd rather that kind of behavior was stopped.

The IETF would do better to actually become useful and come up with ways to
replace the CA system over the next few years, instead of taking protocols
from others and ruining them as it standardizes them. Otherwise we should
come up with a new model for standardization, if the IETF is as
useless/malicious as it is right now.

~~~
sanxiyn
> Which is interesting, because I remember quite clearly the "Snowden
> discussion" at the IETF, and there was consensus for an "encrypt everything
> Internet".

I think this is still the official IETF position. RFC 7258 "Pervasive
Monitoring Is an Attack" is published as Best Current Practice and has not
been retracted.

~~~
EtherealMind
Certain corporations formed a consortium to prevent encryption, to ensure
that monetisation of personal information would continue.

At the very last stage, the IETF appeared to be hijacked by very large telcos
(e.g. AT&T, Verizon, Ericsson, Comcast) to remove the mandatory requirement
for TLS.

As an outsider, this looks like a carefully co-ordinated attack on the IETF
standards process by a small number of "serial IETF professionals" who are
paid by the big carriers to sit inside the organisation and ensure that
standards do the bidding of their corporate masters. (Some hyperbole there.)

Waiting until the last phase restricted discussion and exploited the existing
momentum to complete the HTTP/2 standard while removing one of the
fundamental reasons for HTTP/2 to exist.

It is a very sad day when consumer rights are compromised by big money. And
as the Lenovo Superfish debacle showed, it will likely backfire in the long
run.

Here is the consortium:
[http://www.atis.org/openweballiance/about.asp](http://www.atis.org/openweballiance/about.asp)

Here is a summary I wrote about this topic:
[http://etherealmind.com/response-open-web-alliance-lobbies-intercept-traffic/](http://etherealmind.com/response-open-web-alliance-lobbies-intercept-traffic/)

~~~
grey-area
Thankfully the browser and server vendors can do an end-run around this by
simply not supporting HTTP/2 without encryption. Then, no matter what the
standard says, ordinary users will be protected and it'll be one more reason
for sites to move to HTTPS everywhere. The article discusses this in _TLS
mandatory in effect_.
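
For the curious, the mechanism that makes this end-run work is ALPN: the
protocol is negotiated inside the TLS handshake, so a server that only
advertises "h2" there never offers HTTP/2 to plaintext clients at all. A
minimal sketch in Go (the cert/key paths are placeholders, and this assumes
an HTTP/2-capable server library):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports what ALPN negotiated, e.g. "HTTP/2.0".
		fmt.Fprintf(w, "negotiated %s\n", r.Proto)
	})
	srv := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			// ALPN happens inside the TLS handshake, so "h2" is only
			// ever offered to clients that are already doing TLS.
			NextProtos: []string{"h2", "http/1.1"},
		},
	}
	// cert.pem and key.pem are placeholder paths.
	srv.ListenAndServeTLS("cert.pem", "key.pem")
}
```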

------
blfr
Even when only doing domain validation, CAs still usually ask for personal
information. You would have to lie at least somewhat convincingly to obtain a
certificate without providing personal details, which feels fraudulent and
could potentially put you at risk of having the certificate invalidated. I'm
assuming Let's Encrypt will address this, since it's going to be a fully
automated(?) system.
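
If it helps, the automation Let's Encrypt has proposed (the ACME draft)
validates domain control with a challenge rather than personal details: the
CA issues a token and the applicant serves a derived value at a well-known
URL. A rough sketch of the responder side in Go, with placeholder token
values:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Both values come from the exchange with the CA; placeholders here.
	token := "example-token"
	keyAuth := token + ".account-key-thumbprint"

	// Serving the key authorization at this well-known path proves
	// control of the host the domain's DNS points at -- no name,
	// address, or other personal details involved.
	http.HandleFunc("/.well-known/acme-challenge/"+token,
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, keyAuth)
		})
	http.ListenAndServe(":80", nil)
}
```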

------
m_eiman
How could certificates ever work in embedded applications connected to
consumer LANs that provide interfaces over HTTP? Aren't certificates tied to
IP addresses, which wouldn't work with e.g. DHCP? Not to mention certificate
expiration and updates…

~~~
stephen_g
Certificates are generally based on hostnames. You can put an IP as the common
name but it's problematic (and you can't get one signed by most authorities
for an IP address as far as I know).

Generally the devices just generate a self-signed certificate and you have to
click through the warning.

~~~
m_eiman
Ok, so it's just a horrible user experience then :]

~~~
bodyfour
Exactly. Worse, it trains users to ignore cert warnings.

I don't have any problems with the campaigns to make the public internet
HTTPS-only. However, for software inside an intranet, or software that just
wants to expose an interface on http://127.0.0.1:<someport>, non-SSL is the
better default.

If people want to protect their intranet that's great, but it means that they
have to go through the work of buying a cert, since only they know the
hostname it will be exposed as. That's a poor initial-install experience.
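
In Go, for instance, that loopback-only default is a one-line choice (the
port here is a placeholder for the <someport> above):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "local admin interface")
	})
	// Binding to 127.0.0.1 rather than 0.0.0.0 keeps the plain-HTTP
	// interface reachable only from this machine, so skipping TLS
	// doesn't expose anything to the network.
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```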

~~~
Karunamon
My view is starting to change on this... can you really trust a LAN beyond a
certain size? (That size being what one person can comfortably architect and
maintain.)

Nowadays, I'm a firm believer in "encrypt all the things", but that's because
I'm a geek and can deal with the PITA. There _needs_ to be either an
encryption mechanism that's _completely separate_ from authentication, or the
use case of LAN encryption for regular people needs to be addressed in some
other way.

~~~
jeremie
I'm a big believer in a (local/p2p) transport encryption mechanism _in
addition to_ one for auth, and in it being transparent to any UX... that's
very much the goal for telehash v3 :)

------
comex
The only one of the counterarguments that interests me is that it defeats
caching. I mean, if 100 users in a large network want to access the same video
or other large resource from the Internet, it seems pretty ridiculous that the
connection must use 100 times as much bandwidth as it would if they could just
install a simple caching proxy, especially if it's just some cat video or
online game, which is probably the common case. True, not all large resources
are as innocent, and there is no real way around encrypting and not caching
everything if you don't want devices on the network to tell the difference...
but the result is just so pathological. The price of freedom?

[For the record, YouTube seems to use HTTPS by default for video content, so
this is already the case for some large percentage of the types of large
resources typically accessed from shared networks.]

~~~
cbhl
Caching already happens through CDNs at the ISP level, such as the Google
Global Cache (YouTube) and Netflix Open Connect. Together those cover roughly
half of network traffic.

Plus, running a Squid proxy for 100 users isn't nearly as effective as it
once was; pages contain far more dynamically generated content than they used
to. Think of a Facebook News Feed or a Twitter stream.

------
r1ch
I still don't see many arguments about advertising when it comes to TLS. I
can't deploy TLS across my sites without losing a huge amount of ad inventory
due to mixed-content policies (no HTTP content on HTTPS pages).

AFAIK, Google is the only network actively working on serving ads over
HTTPS. The value of the ads drops significantly once the auction pressure
from all the HTTP ads is gone, meaning any site that relies on ad revenue
cannot afford to use TLS.

------
Nimi
I'm surprised to see Certificate Transparency presented as a band-aid. My
understanding is that, assuming it is deployed successfully, forged
certificates will require very significant resources to use in an attack,
typically limiting such attacks to nation states.

But I would love to know where I'm wrong about this.

------
calibwam
For how long will HTTP/1.1 and 1.0 live alongside HTTP/2? It's all very nice
if every web page has TLS, but if I can just not upgrade to 2.0, it won't
matter at all...

~~~
TazeTSchnitzel
HTTP/1.1 and HTTP/2.0 will coexist forever. HTTP/2 is mostly just a higher-
performance, binary version of HTTP/1.1.

HTTP/1.0 is basically dead and has been for years, because it lacks Host: and
so cannot be used for vhosts.
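
Concretely, the Host header is what lets one IP address serve many sites;
without it the server cannot tell which site a bare 1.0 request is for
(example.com stands in for any vhost):

```
GET /index.html HTTP/1.0          <- no Host: the server must guess the site

GET /index.html HTTP/1.1
Host: example.com                 <- names which vhost on this IP is meant
```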

~~~
bodyfour
That's not completely accurate about HTTP/1.0. It's true that HTTP/1.1
requires "Host:" (a compliant server MUST reject any request from a 1.1 client
that lacks that header). However, HTTP/1.0 clients had been sending "Host:"
headers for years before the 1.1 standard came out.

It's still possible to use a 1.0 client today if you don't want to handle the
other client-side requirements of 1.1, like chunked transfer encoding.
Likewise, embedded devices can speak only 1.0 without any problem.

~~~
TazeTSchnitzel
Host: isn't part of HTTP/1.0, though, and if you try to send it, some servers
will respond to you with HTTP/1.1!

~~~
bodyfour
It's perfectly normal (and allowed) for servers to send back a version string
of "HTTP/1.1" even if the client sent the request as "HTTP/1.0". As long as
they don't do anything in their response that assumes the client has 1.1
features, all is fine. This basically just means:

* Don't use chunked encoding in the response. (Technically a 1.0 client could
specifically indicate support for it by sending a "TE: chunked" header, but
since chunked encoding arrived at the same time as 1.1, I think most servers
just assume that HTTP/1.0 clients never support it.)

* Don't assume that the client supports keep-alive connections. However, even
prior to HTTP/1.1, clients often indicated that they could do keep-alive by
sending "Connection: keep-alive". The only real difference in 1.1 is that now
the client must support it unless it specifically indicates otherwise by
sending "Connection: close". In the absence of a "Connection:" header, a 1.1
client supports keep-alive and a 1.0 client does not.

