
One of the Most Alarming Internet Proposals I've Seen - seven
http://lauren.vortex.com/archive/001076.html
======
quotemstr
Er, actually reading the specification, it's about proxying _http_ resources,
not _https_ ones. This proposal is strictly better than the transparent
proxying that's common on the internet today.

    
    
        To distinguish between an HTTP2 connection meant to transport "https"
        URI resources and an HTTP2 connection meant to transport "http" URI
        resources, the draft proposes to
    
           register a new value in the Application Layer Protocol Negotiation
           (ALPN) Protocol IDs registry specifically to signal the usage of
           HTTP2 to transport "http" URI resources: h2clr.
    

...

    
    
        4.3. Secure Forward Proxy and https URIs
    
        The Proxy intercepts the TLS ClientHello, analyses the application
        layer protocol negotiation extension field, and if it contains the
        "h2" value it does nothing and lets the TLS handshake continue and
        the TLS session be established between the User-Agent and the Server
        (see Figure 8).
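
In other words, the interception decision hangs entirely on the ALPN values
the client advertises. A rough sketch of that logic in Python (the
alpn_protocols() parser and the return values are hypothetical, purely to
illustrate the h2 / h2clr split):

    # Sketch of the section 4.3 decision, assuming a hypothetical
    # alpn_protocols() that extracts the ALPN extension values from a
    # raw TLS ClientHello.
    def handle_client_hello(client_hello: bytes) -> str:
        protocols = alpn_protocols(client_hello)  # hypothetical parser
        if "h2" in protocols:
            # "https" URIs: leave the handshake alone, end-to-end TLS.
            return "pass-through"
        if "h2clr" in protocols:
            # "http" URIs over TLS: the draft allows the proxy to step in.
            return "intercept"
        return "pass-through"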

~~~
bbatsell
HTTP/2 changes the meaning of the http:// scheme. All connections will now
be TLS-encrypted. (Edit: Maybe not. See hobohacker below.) http:// means
that the endpoint has not been verified using the CA system and is using a
self-signed certificate (and is thus trivially vulnerable to a MITM should
certificate keys not be checked out-of-band).

The purpose is to provide confidentiality for the vast majority of traffic,
even if the authentication part of the CIA triad isn't achieved. This
proposal's purpose is to undo that and expose all traffic using the
http:// scheme to your ISP, exactly as it is today. (I would also note that
this draft is proposed by AT&T, which is now rolling out new plans in Austin
that charge an extra $30/month if you do NOT agree to them inspecting and
data-mining all of your internet activity and selling it to advertisers.)

~~~
derefr
> This proposal's purpose is to ... expose all traffic using the
> http:// scheme to your ISP, exactly as it is today.

Isn't that a semantic requirement of HTTP, though? Half of the "tech" in the
HTTP/1.X spec is to allow for caching of resources and responses by proxies,
allowing anyone between the client and server (e.g. your ISP) to act as a CDN.

HTTPS/1.X effectively throws that away by doing end-to-end encryption. It's a
trade-off: we gain the surety that all the responses are coming directly from
the peer, rather than anyone else... but the web becomes 90% less cacheable,
because the only places things can end up cached are between the client and
the HTTPS pipe (i.e. the browser cache), or between the HTTPS pipe and the
server (i.e. "reverse proxies" like Nginx.)

The current workaround for this, when you _need_ caching for your Big Traffic
on either ingress or egress, is to do what amounts to purposeful self-MITMing
of your HTTPS session: to terminate HTTPS _on a caching proxy_, which holds
the certificate of your client/server, and acts as if it were you, while
itself doing another HTTPS session for "the last mile" to connect to you. This
is what companies do when they deploy their own CA-cert to their networks, so
everyone's access can be proxied through their own system; and this is what
services like Cloudflare do when they sit "in front of" your server while not
being a part of your company's VPN at all.

Basically, HTTP2 codifies this workaround, and calls it HTTP.
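
That termination trick is simple enough to sketch. A toy byte-piping version
in Python (everything here is hypothetical: the origin host, the port, and
the proxy-cert.pem / proxy-key.pem pair the client has been made to trust;
a real deployment would parse HTTP and cache between the two pipes):

    import socket, ssl, threading

    ORIGIN = ("origin.example.com", 443)  # hypothetical upstream server

    def pipe(src, dst):
        # Shovel decrypted-then-re-encrypted bytes in one direction.
        try:
            while (chunk := src.recv(4096)):
                dst.sendall(chunk)
        except OSError:
            pass

    # TLS toward the client, using a cert the client already trusts.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

    # The proxy's own, separate TLS session to the real origin.
    client_ctx = ssl.create_default_context()

    listener = socket.create_server(("", 8443))
    while True:
        raw, _ = listener.accept()
        downstream = server_ctx.wrap_socket(raw, server_side=True)
        upstream = client_ctx.wrap_socket(
            socket.create_connection(ORIGIN), server_hostname=ORIGIN[0])
        # Plaintext exists only inside the proxy, between these two pipes:
        # this is the "purposeful self-MITM" where a cache or filter sits.
        threading.Thread(target=pipe, args=(downstream, upstream),
                         daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, downstream),
                         daemon=True).start()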

~~~
zmanian
I find it difficult to imagine a world where application-agnostic caching
for encrypted sessions is possible.

I fully expect a world where application developers have services at their
disposal for positioning assets closer to the end user with VM instance level
isolation and security guarantees.

Transport-level security is not likely to be enough for high-value or
sensitive data in the long run, but adding a bunch of new trusted parties to
the system is going to be a huge enabler for end-user surveillance.

~~~
derefr
Why application-agnostic? HTTP's own caching isn't application-agnostic; it
relies on the server to specify Cache-Control headers.

I would imagine an ideal HTTP2 caching protocol to basically specify that some
resources can come from anywhere, as long as the retrieved result conforms to
an attached content hash—while also specifying a primary source to get the
resource from, if you don't have a DHT handy. (Oddly enough, this is basically
a suggestion that web browsers try to resolve magnet: URNs.)
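
A minimal sketch of that idea, assuming the referring page hands the client
a SHA-256 digest alongside a list of candidate sources (the URLs and the
digest placeholder below are hypothetical):

    import hashlib
    import urllib.request

    def fetch_verified(sources, sha256_hex):
        """Accept the first response whose body matches the content hash."""
        for url in sources:
            try:
                body = urllib.request.urlopen(url, timeout=10).read()
            except OSError:
                continue  # dead mirror; try the next source
            if hashlib.sha256(body).hexdigest() == sha256_hex:
                return body  # content verified, so the origin is irrelevant
        raise ValueError("no source matched the expected hash")

    # Hypothetical usage: try a nearby (untrusted) cache first, then fall
    # back to the primary source named by the page.
    # body = fetch_verified(
    #     ["http://cache.isp.example/jquery.js",
    #      "https://code.jquery.com/jquery.js"],
    #     "<sha-256 digest supplied by the referring page>")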

------
wmf
This article ignores the context behind the proposal. Many companies, schools,
and prisons are MITMing all SSL traffic today for a variety of liability
reasons. Today those users get no notice that their Web browsing is being
observed and censored. Trusted proxies are intended to give those users some
notice that they're being MITMed.

I agree that MITM proxies shouldn't be used on the public Internet and thus we
shouldn't make it easier to do so, but what about the people who are already
being MITMed? Is there another way to solve this problem or must we throw
corporate Web users under the bus to save the public?

~~~
hobohacker
As Patrick McManus says in [http://lists.w3.org/Archives/Public/ietf-http-
wg/2013OctDec/...](http://lists.w3.org/Archives/Public/ietf-http-
wg/2013OctDec/0703.html):

If someone can install a root cert onto your computer then you are already
owned - there is no end to the other things they can do too. Call it a virus,
call it an enterprise, but call it a day - you're owned and there is no in-
charter policy this working group can enact to change the security level of
that user for good or for bad.

The good news is not everyone is already owned and SSL helps those people
today.

------
hobohacker
The specification indeed is about proxying http resources, not https ones. So
it's not initially as alarming as some other proposals discussing trusting
proxies to intercept SSL connections. For more details, you can refer to
[https://insouciant.org/tech/http-slash-2-considerations-
and-...](https://insouciant.org/tech/http-slash-2-considerations-and-
tradeoffs/#Proxies).

This proposal is interesting because it is specifically related to the
opportunistic encryption proposals, in particular the one that allows
sending http:// URIs over an unauthenticated TLS connection:
[http://tools.ietf.org/html/draft-nottingham-httpbis-alt-
svc-...](http://tools.ietf.org/html/draft-nottingham-httpbis-alt-
svc-03#section-3.6). The problem here for proxies is, if you mix http and
https (authenticated) traffic on the same TLS connection, the proxy cannot
tell if it can safely MITM the connection. The proxy vendor would like to know
if it can do so, probably for network management / caching / content
modification reasons. Of course, the point of the opportunistic encryption
proposal is to increase security (although its actual effective impact is
controversial: [https://insouciant.org/tech/http-slash-2-considerations-
and-...](https://insouciant.org/tech/http-slash-2-considerations-and-
tradeoffs/#OpportunisticEncryption)). But if you believe in opportunistic
encryption's security purposes, then it doesn't really make sense to make
the MITM'able traffic identifiable so that proxies on the network path can
MITM it without detection.

------
vezzy-fnord
It actually appears that the RFC openly admits the potential for abuse here:

"6\. Security Considerations

This document addresses proxies that act as intermediary for HTTP2 traffic and
therefore the security and privacy implications of having those proxies in the
path need to be considered. MITM [4], [I-D.nottingham-http-proxy-problem] and
[I-D.vidya-httpbis-explicit-proxy-ps] discuss various security and privacy
issues associated with the use of proxies. Users should be made aware that,
different than end-to-end HTTPS, the achievable security level is now also
dependent on the security features/capabilities of the proxy as to what cipher
suites it supports, which root CA certificates it trusts, how it checks
certificate revocation status, etc.

_Users should also be made aware that the proxy has visibility to the actual
content they exchange with Web servers, including personal and sensitive
information._"

~~~
MaulingMonkey
To play devil's advocate, this could potentially be less harmful than the
existing situation: where e.g. various corporate nets will require you to
install root certs to accomplish the same MITM attack, in a less visible
fashion (after installation), with some if not all of the same caveats -
especially if given the ability to opt out.

(Bugs, insufficiently scary UI, and "discovery" are all massive concerns of
course...)

~~~
atmosx
Hm, no. _Various corporate networks_ don't qualify as an ISP; the number of
potentially abused users is not the same. A company can do whatever it wants
to, but an ISP offers a service and should respect the privacy of its
customers, at least theoretically.

~~~
wmf
This proposal isn't intended for ISPs and should never be used on the public
Internet.

~~~
atmosx
Oh, my bad then. I misunderstood the proposal and its implications. But
since the protocol supports that, how can we be sure that ISPs won't use it?

~~~
MaulingMonkey
Cynically, "we can't". Or "they already have better options".

Alternatively, outcry and blacklisting ISP proxies - just as we do with root
cert abuse.

------
higherpurpose
I've become increasingly disgusted with the IETF since I found out they have
at least a few NSA agents working with them on protocols and, more
importantly, are _refusing to kick them out_ - even after all the Snowden
revelations about the NSA trying to subvert and undermine encryption
protocols:

[http://mirrors.dotsrc.org/fosdem/2014/Janson/Sunday/NSA_oper...](http://mirrors.dotsrc.org/fosdem/2014/Janson/Sunday/NSA_operation_ORCHESTRA_Annual_Status_Report.webm)

Then I find out that they've been working with Cisco on another similar thing
to this one for "legal intercepts", a.k.a "trusted backdoors", like we're
seeing above.

[https://www.blackhat.com/presentations/bh-
dc-10/Cross_Tom/Bl...](https://www.blackhat.com/presentations/bh-
dc-10/Cross_Tom/BlackHat-DC-2010-Cross-Attacking-LawfulI-Intercept-wp.pdf)

With NIST already corrupted by the NSA, and now the W3C becoming corrupted
by the MPAA too, I think we're seeing the decay and fall of the "standards
bodies", because I don't believe the Internet will tolerate these moves. The
Internet will ignore them, do its own thing, and make it popular. I think
future standards will be built from the bottom up, and if I'm not mistaken
most of the Internet so far has been built that way anyway.

~~~
wmf
If you consider a standards body corrupt because they have a single member you
disagree with, you might be failing at politics.

~~~
666c6f
It's not like they are just some "ordinary" bad guys, like AT&T, who just
want to make a buck. The NSA is one of the most dangerous enemies of free
speech and the freedom of the Internet. There is a high chance that they
will undermine all our efforts to make a free and secure Internet.

------
platypii
The best part: the "Privacy" section of the document is blank.

[http://tools.ietf.org/html/draft-loreto-httpbis-trusted-
prox...](http://tools.ietf.org/html/draft-loreto-httpbis-trusted-
proxy20-01#section-7)

------
rdl
There are some kinda legitimate uses for this in certain environments --
enterprise DLP, various kinds of filtering, etc. Potentially even caching and
stuff on the distant end of really weird network connections (when I go to
Mars in ~30y, I'd like to have as much cached as possible, and converted to
message-based vs. connection-oriented protocols).

We have good enough workarounds for this right now (putting wildcard CA certs
on devices and proxying that way), but they're not awesome. So, if there were
a way to keep this from being used for evil, it could make some existing non-
evil activities easier.

But, on balance, the risk of evil might be too high.

~~~
JoshTriplett
There are _potentially_ legitimate (though still sketchy) reasons to MITM
HTTPS traffic from a host configured to allow that (for instance, by trusting
an organizational CA). There are no legitimate reasons to MITM HTTPS traffic
without the host's knowledge.

------
gnoway
There was another article on here a week or two ago effectively blasting the
http/2.0 wg for doing stupid things. I think it was the "HTTP 308 incompetence
expected" article.

Now this. I'm beginning to wonder if I want anything to do with HTTP/2.0.

~~~
hobohacker
Perhaps you should look at the Hacker News comments on that thread:
[https://news.ycombinator.com/item?id=7249193](https://news.ycombinator.com/item?id=7249193).
Notably, my comments:
[https://news.ycombinator.com/item?id=7249560](https://news.ycombinator.com/item?id=7249560)
and
[https://news.ycombinator.com/item?id=7249869](https://news.ycombinator.com/item?id=7249869).
Basically, the author is wrong.

------
lifeisstillgood
Ok - here is a suggestion: The Right to root.

Just as a citizen's letters, papers, and home are inviolable, shouldn't our
new papers and our new homes be inviolable too? If I own a device, no-one
should legally be allowed control over it.

~~~
TophWells
> If I own a device, no-one should legally be allowed control over it.

Then tech companies might start leasing out their devices: you technically
don't own the device, so you're not allowed to do what you want with it.

Not that the Right to Root wouldn't be nice, but the change in attitude has to
come first. And we need to somehow convince the likes of Apple that their DRM
is bad for business.

~~~
friendzis
Wasn't that written from a mac/ipad?

~~~
lifeisstillgood
Errrr... yes. My iPhone, actually. (How could you tell? Or was that a lucky
piece of sarcasm?)

If the signatories to the US constitution owned slaves, I can use an iPhone
while still wanting the Right to Root.

------
sekasi
Another stab at using 'trusted proxies', huh? I thought we had learnt that
lesson a while ago. Can we move on please, internet?

~~~
samplonius
The fact is "trusted proxies" are a real thing right now. Plenty of private
networks require that you trust one or more private CA roots and all SSL are
intercepted and filtered. It is sort of a pain to do. You have to use
something like Microsoft System Center to push the root onto all managed
computers.

This IETF proposal just formalizes it.

------
userbinator
The amusing thing about this is that MITM can also be used to one's personal
benefit -- I run a local filtering proxy that strips off most of the crap on
the majority of sites, and I've had to do a bit of hex editing to be able to
do that without the browser complaining.

Look at it another way: with browsers becoming more and more unconfigurable
and nearing the point of being user-hostile, is it any wonder that content
providers would want their content, whether or not the user likes it, to be
delivered unchanged and forced upon the user? All the Snowden stuff has made
us feel that way, but what I'm saying is that the one doing the MITM isn't
always malicious.

~~~
SudoNick
Yes! If you don't have a reasonably easy way to inspect and modify what is
being sent over the encrypted connections your device makes, you are in very
serious trouble. Your device will be [ab]used against you.

------
news_to_me
The most alarming thing about this article is the author's tone.

------
the_watcher
I'm not an expert in internet security or crypto. Some of the comments below
raise some interesting points both defending the intent (and implementation)
of it and pointing out the flaws. However, as an unsophisticated person
interested in my data security, this sounds absolutely awful. Hopefully more
clarity on this emerges.

------
atmosx
This proposal is so stupid it's hard to believe someone actually made it. It
really defeats the purpose: why use SSL at all? Who am I protecting my data
from, if the ISP is snooping? The kid in the Internet cafe who just found
out about SSLSnoop?

At this point the right proposal would be to just remove SSL altogether; no
need to go in circles over it.

------
wfunction
Is someone from the NSA behind this?

Sorry, let me rephrase that. _Who_ from the NSA is behind this?

~~~
yk
Apparently the NSA is an intelligence agency. [1] So they are probably not
that blunt.

[1] [https://en.wikipedia.org/wiki/Nsa](https://en.wikipedia.org/wiki/Nsa)

------
jstsch
Crazy. If you want to use caching, just use HTTP for that content.

~~~
stephenbez
It's not that simple.

If you are going to use HTTPS, you need to use it for all content on that
domain. Otherwise, if you load for example a large JavaScript file over
HTTP, an attacker can just poison that file and control your whole page.

Even if you loaded an image from the same domain, your credentials would be
sent as a cookie in plain text.

You could use a separate domain for content as explained here:
[http://stackoverflow.com/a/5160657/804713](http://stackoverflow.com/a/5160657/804713)

~~~
gojomo
The web is long overdue for a method to specify an exact resource, by content-
hash, from one-of-whatever-sources.

Those sources can then be other less-secure protocols, even those
unanticipated by the referrer, because the client got the necessary verifier
via the secure-path.

~~~
zachrose
I believe there's already a standard HTTP header for this: Content-MD5.

Browsers definitely don't cache across origins by this though.

If they did, would it be possible to create a malicious JavaScript file with
the same MD5 as jQuery?

~~~
Danieru
Yes it would be possible because MD5 has been broken:
[https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities](https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities)

Any file can be modified to result in a hash collision with a specific MD5.
This makes it unsuitable for its stated purpose as a cryptographic hash.

The solution would be to use a newer and stronger hash like Keccak.

A solution similar to what you are thinking of is already used by Bittorrent's
Distributed Hash Table to identify files.
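
For what it's worth, Python's standard hashlib already ships SHA-3 (the
standardized form of Keccak), so content-addressing without MD5 costs one
line (the file name below is hypothetical):

    import hashlib

    # Address a resource by its SHA3-256 digest rather than MD5; producing
    # a second input with the same digest is infeasible with known attacks.
    with open("jquery.js", "rb") as f:  # hypothetical local copy
        digest = hashlib.sha3_256(f.read()).hexdigest()
    print(digest)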

~~~
gojomo
To be precise, while MD5 has been 'broken' in the sense of not meeting its
design goals for a long time, and there are now a number of scenarios where
attackers can create pairs of files with the same MD5, it is not yet
practically possible to create a collision for any arbitrary file (such as
jquery.js) on demand. That would be the total 'preimage vulnerability' as
mentioned at:

[https://en.wikipedia.org/wiki/MD5#Preimage_vulnerability](https://en.wikipedia.org/wiki/MD5#Preimage_vulnerability)

MD5 should absolutely not be used for this content-identification purpose or
in any other new code... and wise designers haven't been using it for 10+
years.
I'm just mentioning this to be precise about the current state of its proven
weaknesses.

------
droopybuns
Carriers are fighting against being turned into dumb pipes.

Google is fighting to turn carriers into dumb pipes.

I can't take this Google consultant seriously in that context.

~~~
crististm
Carriers _are_ dumb pipes. Carriers are fighting to get rid of that. The
Internet _is_ dumb pipes connected together. Carriers are fighting against
that.

------
glifchits
When I read the title I thought this was going to be from Upworthy.

------
kercker
Maybe whoever made this proposal just meant it to be funny.

------
bachback
SSL is such crap. time to make a better internet.

~~~
nathancahill
With blackjack and hookers?

~~~
dijit
He's not wrong about SSL being poor... rather, the CA system is what I
consider to be poor.

There was the idea of notaries that never took off, but that would be ideal
imho.

~~~
bachback
Yes, notaries + alternative DNS. The problem is really the root (literally).
It's the same thing: somebody running a server and making profit/power-driven
decisions about what goes into that server. I mean, you can choose whether
you go to GoDaddy or Comodo, but that's about it. With .com you don't even
have a choice but Verisign. DNSSEC adds insult to injury by making the domain
registrars the CAs. And this 'proposal' is really the height of absurdity:
AT&T shouldn't be writing the trust protocols.

