
Explicit Trusted Proxy in HTTP/2.0 - rdlowrey
http://tools.ietf.org/html/draft-loreto-httpbis-trusted-proxy20-01
======
saurik
Wasn't this already discussed on Hacker News, in quite some detail, yesterday?
And wasn't the big revelation that this only applied to traffic that was not
CA verified and thereby was inherently man-in-the-middle-attackable _anyway_
(as the actually-secure https connections are marked in a way where this
feature does not apply), making this a misunderstanding?

~~~
higherpurpose
I thought the whole point of HTTP/2.0 was to make traffic encrypted by default,
not to let big vulnerability holes like this into the protocol. Saying "it
just keeps things as they were before" doesn't make me feel better.

Why are we moving to HTTP/2.0 otherwise? For a 5 percent increase in speed? The
big selling point of HTTP/2.0, from my perspective, _was_ the "always-on
encryption".

~~~
1stop
I think the problem is: always-on encryption is always-off caching...

~~~
brokenparser
That's not true: your browser still handles caching just fine. More bandwidth
trumps caching anyway, and caching forward proxies will be a thing of the
past.

~~~
asabil
Heh, bandwidth can help, but caching helps primarily with latency.

~~~
brokenparser
The latency benefits of an extra caching layer are minimal, especially since
HTTP pipelining and CDNs will continue to exist.

------
Lukasa
As discussed yesterday, this is _not_ a new MITM vulnerability. To make this
work you need to establish a TLS connection to the proxy which is verified in
the usual certificate authority way. Note that the standard says that user
agents that discover they're talking to a trusted proxy should obtain user
consent to talk to that proxy.

Any situation in which someone can force your machine to trust one of these
proxies is a situation in which they already had administrator access to your
machine _anyway_, and in that case you're already screwed.

Would it kill HN to actually read one of these specs instead of just whining
about it?

~~~
rdlowrey
I don't really care to argue this point so I'll just explain why I find this
extremely problematic. What percentage of browser users have any concept of
how TLS works? It's an exceedingly low number. You're essentially creating a
dragnet to capture and decrypt the contents of transfers for a huge number of
people who likely have no idea that they're volunteering their (sensitive)
information. Browser users are not TLS experts. They will click right through
warnings without a second thought. No, this standard doesn't harm the very
small minority of people capable of protecting themselves. It only takes
advantage of everyone else. This is why, to me, dismissing this off-hand as no
big deal is seriously negligent. Yes, I've read the draft. Yes I have the
technical experience and qualifications to understand fully what it proposes.
And yes, I believe this is an egregious thing to propose.

~~~
tptacek
The TrustedProxy draft specifically says that it must not be invoked for
HTTPS URIs. TrustedProxy doesn't interact at all with TLS as it's understood
now.
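In code, that distinction amounts to a simple scheme check. A minimal
hypothetical sketch (the function name and logic are illustrative, not taken
from the draft text itself):

```python
from urllib.parse import urlparse

def may_use_trusted_proxy(uri: str) -> bool:
    """Illustrative rule only: per the draft's stated scope, a trusted
    proxy may be interposed for plain http:// URIs, while https:// URIs
    keep their end-to-end TLS semantics."""
    return urlparse(uri).scheme == "http"

print(may_use_trusted_proxy("http://example.com/page"))    # True
print(may_use_trusted_proxy("https://bank.example.com/"))  # False
```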

------
joliss
Before people start associating this with actual HTTP/2.0, it is worth
emphasizing that this is a separate document. None of this "trusted proxy"
MITM nonsense is in the HTTP/2.0 draft:
[http://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/?include_text=1](http://datatracker.ietf.org/doc/draft-ietf-httpbis-http2/?include_text=1)

Thankfully, it seems fairly unlikely that the trusted proxy thing is going to
get anywhere: It serves the interests of Ericsson and AT&T, but _not_ those of
the HTTP/2.0 spec authors (who are from Google and Mozilla) or server and
browser vendors that will have to implement HTTP/2.0.

------
tjaerv
Some context:
[http://lauren.vortex.com/archive/001076.html](http://lauren.vortex.com/archive/001076.html)

"What they propose for the new HTTP/2.0 protocol is nothing short of
officially sanctioned snooping."

~~~
tptacek
The post you've linked to is technically inaccurate and highly misleading.
Here's Brad Hill's rebuttal:
[http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and-privacy-wolves.html](http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and-privacy-wolves.html)

~~~
tjaerv
Thanks for the link, it does clarify matters.

------
barrkel
I particularly like how the Privacy section is completely blank.

------
rdlowrey
Section 6 (Security Considerations) is truly shocking. And Section 7 (Privacy
Considerations)? Whaddya know? It's _empty_!

------
dschiptsov
In some third-world countries you cannot get a telecom licence unless you
"implement" this, or your licence can easily be revoked or canceled.

In Russia, for example, there are explicit regulations which say that no
telecom company can operate unless it provides "monitoring and law-enforcement
facilities".

My guess is that _each_ country nowadays has regulations of this sort, so
telecom equipment manufacturers are forced to "add required functionality". Of
course, the US has such "secret" regulations too.

So, it is much better to face reality and standardize this shit to reduce the
pain of telecom "workers".

------
alephnil
It is an improvement compared to HTTP/1.1, in that it allows for opportunistic
encryption, and it is those connections that can be cached (or, if you prefer,
snooped). This will still make it harder for the NSA and similar agencies to
do mass surveillance without leaving traces. They would either have to insert
their own certificate or get the private key from the ISP. That is far more
difficult to do in a covert manner. This alone makes HTTP/2.0 an improvement.
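Opportunistic encryption in this sense means encrypting the channel without
authenticating the peer. A rough sketch of that trade-off using Python's
standard ssl module (an illustration only, not anything HTTP/2.0 specifies):

```python
import ssl

def opportunistic_context() -> ssl.SSLContext:
    """Build a TLS context that encrypts but does not verify the peer's
    certificate.  A passive wiretap then sees only ciphertext, while an
    active attacker can still present its own certificate, which is
    exactly the extra cost discussed above."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE  # accept any certificate
    return ctx
```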

~~~
lallysingh
NSA will have the ISP keys, that's a given.

~~~
alephnil
For American ISPs, yes. For ISPs in some allied countries, probably. For all
ISPs in every country in the world? Unlikely. Furthermore, that would require
a nationwide (or worldwide) scheme in which the NSA gathered or issued
keypairs for every certificate at every ISP. That is much more expensive than
just tapping the lines, which is part of the point here, and some data would
probably even be off limits. It would also be hard to keep an operation like
that hidden, as they managed for many years with the current methods.

I have no illusions that the NSA can be stopped if they target someone, but it
should be possible to make it impractical to just tap plaintext from the
internet backbone as they do today. If data is generally encrypted _unless_
they mount a MITM attack, it will be too expensive to just collect everything.

This is of course not enough in itself, but it is certainly a step in the
right direction.

------
crbaker
I understand that HTTP/2.0 needs to address both scalability and security, but
the proposed "trusted" proxies smell really bad. Knowing what we know today,
namely that the current level of security offered by HTTP/1.1 is barely
adequate to protect web citizens from real and present threats, shouldn't we
be radically rethinking HTTP security?

------
yeukhon
This would be an awesome term project for students studying computer security:
find the problems in the draft, if there are _any_.

