To distinguish between an HTTP2 connection meant to transport "https" URI resources and an HTTP2 connection meant to transport "http" URI resources, the draft proposes to register a new value in the Application-Layer Protocol Negotiation (ALPN) Protocol IDs registry, specifically to signal the use of HTTP2 to transport "http" URI resources: h2clr.
4.3. Secure Forward Proxy and https URIs
The Proxy intercepts the TLS ClientHello and analyses the application-layer protocol negotiation extension field; if it contains the "h2" value, it does nothing and lets the TLS handshake continue and the TLS session be established between the User-Agent and the Server (see Figure 8).
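(For the curious: "h2clr" would just be one more identifier carried in the ClientHello's ALPN extension, and the interception decision described in the quoted section amounts to reading that extension before touching anything. Below is a rough Python sketch of both sides; the host name is a placeholder, and the parser assumes a single unfragmented TLS record, so it is illustrative rather than production parsing code.)

```python
import socket, ssl

# Client side: offering the proposed "h2clr" token is just another ALPN entry.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2clr", "h2", "http/1.1"])
with socket.create_connection(("example.org", 443)) as raw:          # placeholder host
    with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
        print("negotiated:", tls.selected_alpn_protocol())           # None if ignored

# Proxy side: peek at the raw ClientHello and pull out the offered ALPN list,
# so "h2" connections can be passed through untouched.
def offered_alpn(record: bytes) -> list:
    assert record[0] == 0x16                                # TLS handshake record
    hs = record[5:]                                         # skip 5-byte record header
    assert hs[0] == 0x01                                    # ClientHello
    pos = 4 + 2 + 32                                        # msg header, version, random
    pos += 1 + hs[pos]                                      # session_id
    pos += 2 + int.from_bytes(hs[pos:pos + 2], "big")       # cipher_suites
    pos += 1 + hs[pos]                                      # compression_methods
    ext_end = pos + 2 + int.from_bytes(hs[pos:pos + 2], "big")
    pos += 2
    names = []
    while pos < ext_end:
        ext_type = int.from_bytes(hs[pos:pos + 2], "big")
        ext_len = int.from_bytes(hs[pos + 2:pos + 4], "big")
        if ext_type == 16:                                  # ALPN extension
            body = hs[pos + 4:pos + 4 + ext_len]
            i = 2                                           # skip protocol-name-list length
            while i < len(body):
                n = body[i]
                names.append(body[i + 1:i + 1 + n].decode())
                i += 1 + n
        pos += 4 + ext_len
    return names
# A proxy following the draft would leave the connection alone if "h2" is offered
# and only consider intercepting when it sees "h2clr".
```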
The purpose is to provide confidentiality to the vast majority of traffic, even if the authentication part of the CIA triad isn't achieved. This proposal's purpose is to undo that and expose all traffic using the http:// scheme to your ISP, exactly as it is today. (I would also note that this draft is proposed by AT&T, which is now rolling out new plans in Austin that charge an extra $30/month if you do NOT agree to them inspecting and data-mining all of your internet activity and selling it to advertisers.)
Isn't that a semantic requirement of HTTP, though? Half of the "tech" in the HTTP/1.X spec is to allow for caching of resources and responses by proxies, allowing anyone between the client and server (e.g. your ISP) to act as a CDN.
HTTPS/1.X effectively throws that away by doing end-to-end encryption. It's a trade-off: we gain the surety that all the responses are coming directly from the peer, rather than anyone else... but the web becomes 90% less cacheable, because the only places things can end up cached are between the client and the HTTPS pipe (i.e. the browser cache), or between the HTTPS pipe and the server (i.e. "reverse proxies" like Nginx.)
The current workaround for this, when you need caching for your Big Traffic on either ingress or egress, is to do what amounts to purposeful self-MITMing of your HTTPS session: to terminate HTTPS on a caching proxy that holds the certificate of your client/server and acts as if it were you, while itself doing another HTTPS session for "the last mile" to connect to you. This is what companies do when they deploy their own CA cert to their networks, so everyone's access can be proxied through their own system; and this is what services like Cloudflare do when they sit "in front of" your server while not being a part of your company's VPN at all.
Basically, HTTP2 codifies this workaround, and calls it HTTP.
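(To make the workaround concrete, here is a hedged sketch of such a terminating proxy in Python. It assumes proxy.crt/proxy.key form a certificate the client already trusts for the origin's name and that upstream.example.com is the real origin; those names, and the deliberately naive in-memory cache, are placeholders rather than anything from the thread.)

```python
import socket, ssl

UPSTREAM = "upstream.example.com"   # placeholder: the real origin
cache = {}                          # very naive: request line -> raw response bytes

def handle(client_tls):
    request = client_tls.recv(65536)               # client's decrypted HTTP request
    key = request.split(b"\r\n", 1)[0]             # cache on the request line only
    if key not in cache:
        upstream_ctx = ssl.create_default_context()
        with socket.create_connection((UPSTREAM, 443)) as raw:
            with upstream_ctx.wrap_socket(raw, server_hostname=UPSTREAM) as upstream:
                upstream.sendall(request)          # second, independent TLS session
                chunks = []
                while True:
                    data = upstream.recv(65536)
                    if not data:
                        break
                    chunks.append(data)
                cache[key] = b"".join(chunks)
    client_tls.sendall(cache[key])                 # replay the (possibly cached) bytes

# The proxy terminates TLS itself with a certificate the client trusts
# (e.g. a corporate CA-signed cert) -- exactly the "self-MITM" described above.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("proxy.crt", "proxy.key")

with socket.create_server(("", 8443)) as listener:
    while True:
        conn, _ = listener.accept()
        with server_ctx.wrap_socket(conn, server_side=True) as client_tls:
            handle(client_tls)                     # one request per connection, for brevity
```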
It's true, HTTPS is full of tradeoffs. You've identified some of them.
What do you see in HTTP/2 that "codifies this workaround"? That wasn't immediately obvious to me. Recall that HTTP/2 is basically just multiplexing with prioritized streams. There's no requirement on TLS in the spec, although all current browser deployments (of SPDY) require TLS.
I fully expect a world where application developers have services at their disposal for positioning assets closer to the end user with VM instance level isolation and security guarantees.
Transport-level security is not likely to be enough for high-value/sensitive data in the long run, but adding a bunch of new trusted parties to the system is going to be a huge enabler for end-user surveillance.
I would imagine an ideal HTTP2 caching protocol would basically specify that some resources can come from anywhere, as long as the retrieved result conforms to an attached content hash—while also specifying a primary source to get the resource from, if you don't have a DHT handy. (Oddly enough, this is basically a suggestion that web browsers try to resolve magnet: URNs.)
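(A minimal sketch of what that client-side check could look like, in Python; the URL and expected digest are placeholders — the digest shown is just the SHA-256 of the string "test".)

```python
import hashlib
import urllib.request

EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
PRIMARY_URL = "https://origin.example/app.js"   # the "primary source" fallback

def fetch_verified(url, expected_hexdigest):
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != expected_hexdigest:
        raise ValueError("content hash mismatch: refuse bytes from this source")
    return data

# Any mirror, proxy cache, or DHT node could serve the bytes; the digest check
# is what ties the result back to what the referrer asked for.
body = fetch_verified(PRIMARY_URL, EXPECTED_SHA256)
```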
Is it a fair reading of your blog post that it has a high likelihood of succeeding?
Right. I understand that. The lack of certificate verification for the http scheme means that ISPs can MITM HTTP traffic with or without this proposal, just like they can with HTTP/1.1.
So how does this proposal make things worse?
> new plans in Austin that charge an extra $30/month if you do NOT agree to them inspecting and data-mining all of your web browsing and selling it to advertisers
Really? That's a special kind of evil and should be illegal.
Nah, no need to make it illegal. He's referring to AT&T's "GigaPower" gigabit service in Austin... the same service that's available from 2 other providers for the same, or cheaper, as what AT&T's charging for their data-mined bullshit.
AT&T will have a fun time getting customers when Google and Grande point that out in their attack ads. :D
Can they really? At least some of those connections should have the certificates verified out of band, so I'd imagine they would get caught fairly quickly. And then loudly accused of conspiring with the CAs (to stifle competition) or the NSA (to steal your data), or the competitors of whatever site/product was being used, or someone I haven't thought of.
From just a random googling, here's the first news article I found detailing the plan. (See the last three paragraphs.)
[Figure from the draft: UserAgent <== TLS Session #1 ==> Proxy <== TLS Session #2 ==> Server]
So it's about enabling the user to trust -only- the proxy, whereas currently in order to get utility out of a proxy you need to send the traffic as plain HTTP which then trusts both the proxy and the networks over which the data will travel.
That seems to me to absolutely meet the requirements for "strictly better".
If you care about security of your data, end-to-end, then you should probably only use this feature, if at all, with the proxy running on a machine you control - but presumably in that case you currently aren't using plaintext HTTP for anything anyway, so I don't see how it relates.
Proxies can inject dog-leg routes, single points of failure, and computational and I/O bottlenecks, and they make lovely centralized DoS and data-theft targets. They also often downgrade you out of performance enhancements that the client and server would have negotiated if they had been speaking directly (e.g. some actually block compression negotiation so they can observe the content of more flows with less CPU).
Sometimes they do make things faster though - it's just not clear to me why we want to continue to centralize that approach rather than distributing it across the network, given all the baggage proxying carries.
When the user has given consent to the use of a proxy, the User-Agent SHOULD store this consent so that the user does not have to give consent for each new TLS connection involving the proxy. The consent SHOULD be limited to the specific access and MAY be limited to a single connection to that access or limited in time. How the consent information is stored is implementation specific, but as a network may have several proxies (for network resilience) it is RECOMMENDED that the consent is only tied to the Subject field of the proxy certificate so that the consent applies to all proxy certificates with the same name.
If the user has previously given consent to use the specific proxy and the user-agent has stored that, the user-agent may conclude that the user has given consent without asking the user again.
If the user provides consent, the User-Agent continues the TLS handshake with the proxy.
Right in the next section, it's again implied:
The proxy will then notice that the TLS connection is to be used for a https resource or for a http resource for which the user wants to opt out from the proxy. The proxy will then forward the ClientHello message to the Server and the TLS connection will be end-to-end between the user-agent and the Server.
Then in 3.2, again implied:
When the User-Agent arrives at the portal page, it becomes aware of the existence of a Proxy in the access network and receives a consent request for the proxy to stay in the path for HTTP URI resources. The user-agent then SHOULD secure user consent.
When the user has given consent to the use of a proxy, both the User-Agent and the Proxy SHOULD store this consent so that the user does not have to give consent for each new TLS connection involving the proxy.
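(The draft leaves the storage format "implementation specific". Purely as an illustration, here is one naive way a user-agent could follow the Subject-keyed recommendation quoted above; the field names and the 30-day expiry are my assumptions, not the draft's.)

```python
import json, time

CONSENT_FILE = "proxy_consent.json"

def load_consent():
    try:
        with open(CONSENT_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def has_consent(subject, access_network):
    # Consent is keyed on the certificate Subject, so several proxies in the
    # same access network presenting certificates with the same Subject
    # (deployed for resilience) share one consent decision.
    record = load_consent().get(subject)
    return (record is not None
            and record["access"] == access_network
            and record["expires"] > time.time())

def store_consent(subject, access_network, ttl_seconds=30 * 86400):
    consent = load_consent()
    consent[subject] = {"access": access_network,
                        "expires": time.time() + ttl_seconds}
    with open(CONSENT_FILE, "w") as f:
        json.dump(consent, f)
```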
I agree that MITM proxies shouldn't be used on the public Internet and thus we shouldn't make it easier to do so, but what about the people who are already being MITMed? Is there another way to solve this problem or must we throw corporate Web users under the bus to save the public?
If someone can install a root cert onto your computer then you are already owned - there is no end to the other things they can do too. Call it a virus, call it an enterprise, but call it a day - you're owned, and there is no in-charter policy this working group can enact to change the security level of that user, for good or for bad.
The good news is not everyone is already owned and SSL helps those people today.
This proposal is interesting because it is specifically related to opportunistic encryption proposals, in particular the one that allows sending http:// URIs over an unauthenticated TLS connection: http://tools.ietf.org/html/draft-nottingham-httpbis-alt-svc-.... The problem here for proxies is that if you mix http and https (authenticated) traffic on the same TLS connection, the proxy cannot tell if it can safely MITM the connection. The proxy vendor would like to know if it can do so, probably for network management / caching / content modification reasons. Of course, the point of the opportunistic encryption proposal is to increase security (although its actual effective impact is controversial: https://insouciant.org/tech/http-slash-2-considerations-and-...). But if you believe in opportunistic encryption's security purposes, then it doesn't seem to really make sense to make the MITM'able traffic identifiable so proxies on the network path can successfully MITM it without detection.
"6. Security Considerations
This document addresses proxies that act as intermediary for HTTP2 traffic and therefore the security and privacy implications of having those proxies in the path need to be considered. MITM, [I-D.nottingham-http-proxy-problem] and [I-D.vidya-httpbis-explicit-proxy-ps] discuss various security and privacy issues associated with the use of proxies. Users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc.
Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information."
(Bugs, insufficiently scary UI, and "discovery" are all massive concerns of course...)
They don't even bother making you install CA certificates. They just abuse subordinate CAs: see https://blog.mozilla.org/security/2013/02/15/announcing-vers...
I'm also a huge fan of Google's http://www.certificate-transparency.org/, which makes it very difficult to fool very many people for very long.
Alternatively, outcry and blacklisting ISP proxies - just as we do with root cert abuse.
Then I find out that they've been working with Cisco on another thing similar to this one for "legal intercepts", a.k.a. "trusted backdoors", like we're seeing above.
With NIST being already corrupted by the NSA, and now W3C becoming corrupted by MPAA, too, I think we're seeing the decay and fall of the "standard bodies", because I don't believe the Internet will tolerate these moves. The Internet will ignore them, do its own thing, and make it popular. I think future standards will be built from the bottom-up, and if I'm not mistaken most of the Internet so far has been built that way anyway.
We have good enough workarounds for this right now (putting wildcard CA certs on devices and proxying that way), but they're not awesome. So, if there were a way to keep this from being used for evil, it could make some existing non-evil activities easier.
But, on balance, the risk of evil might be too high.
Now this. I'm beginning to wonder if I want anything to do with HTTP/2.0.
For once, I invite you to fully read the comments on this post and the one you're referring to. Or, alternatively, take a read of the draft RFCs and the WG mailing list, which is totally open.
Just as a citizen's letters, papers, and home are inviolable, should our new papers and new homes also be inviolable - if I own a device, should no one legally be allowed control over it?
But yeah, if you paid for it but can't root it, you got ripped off.
Then tech companies might start leasing out their devices: you technically don't own it, so you're not allowed to do what you want to it.
Not that the Right to Root wouldn't be nice, but the change in attitude has to come first. And we need to somehow convince the likes of Apple that their DRM is bad for business.
If the signatories to the US constitution owned slaves, I can use an iPhone while still wanting the Right to Root.
This IETF proposal just formalizes it.
Look at it another way: With browsers becoming more and more unconfigurable and nearing the point of being user-hostile, is it any wonder that the content providers would want their content, whether or not the user likes it, to be delivered unchanged and forced upon the user? All the Snowden stuff has made us feel that way, but what I'm saying is that the one who is doing the MITM isn't always malicious.
At this point the right proposal should be to just remove SSL altogether; no need to dance around it.
Sorry, let me rephrase that. Who from the NSA is behind this?
Even if you loaded an image from the same domain, your credentials would be sent as a cookie in plain text.
You could use a separate domain for content as explained here:
Those sources can then be other less-secure protocols, even those unanticipated by the referrer, because the client got the necessary verifier via the secure-path.
Browsers definitely don't cache across origins this way, though.
You definitely wouldn't use MD5, as experts have been recommending against its use for content-security since about 1996. (A practical full chosen-preimage attack hasn't yet been developed, but still, you'd design for security for the next few decades, which would mean a SHA256 or better.) The choice of a good hash would mean no one could practically create an alternate file with the same hash.
Any file can be modified to result in a hash collision with a specific MD5. This makes it unsuitable for its stated purpose as a cryptographic hash.
The solution would be to use a newer and stronger hash like Keccak.
A solution similar to what you are thinking of is already used by Bittorrent's Distributed Hash Table to identify files.
MD5 should absolutely not be used for this content-identification purpose or in any other new code... and wise designers haven't been using it for 10+ years. I'm just mentioning this to be precise about the current state of its proven weaknesses.
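(Browsers already have a hash-pinning mechanism along these lines in Subresource Integrity, which admits only the SHA-2 family, not MD5. A small sketch of producing such a value for a local file; the file name is a placeholder.)

```python
import base64
import hashlib

def sri_value(path, algo="sha384"):
    # SRI allows sha256 / sha384 / sha512; the value is "<algo>-" + base64(raw digest).
    with open(path, "rb") as f:
        digest = hashlib.new(algo, f.read()).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# e.g. <script src="https://cdn.example/app.js"
#              integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value("app.js"))
```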
The security folks I talk to are... nervous... about this use of subresource integrity, however.
You want to make sure the data you're getting is from the source you expect, and that it hasn't been compromised. HTTPS does this with PKI by enabling you to verify the destination host is really who they say they are (Certificates) and to only trade data with them. Anybody who doesn't pass the signed-certificate-verifying test doesn't get to give us data.
We don't want anyone else knowing what our data is because it may contain sensitive information. Once we verify the identity of the sender, each session is independently encrypted to prevent later decoding.
So what would we need to cache our content and retain its integrity and secrecy? The simplest thing would be encrypted blobs of data signed by our destination host's certificate. A proxy could keep data for a set amount of time, perhaps each piece of data encapsulated in a different session. All our client would need to do is connect once and initiate a session, and the server could deliver a copy of the encrypted/signed payload to the proxy.
With some magic flags in the new protocol, our client could be instructed that the server allows it to make a 'proxy request' to the destination for content. This request could be made in such a way that a proxy can intercept it from the client (it could actually be plaintext), get the encrypted chunk from the destination (which could also be done in plaintext), and deliver the chunk to the client, similar to what it does now with HTTP. Since the chunk was signed and encrypted by the destination, the proxy can't do anything but deliver the exact copy the destination gave it. Our client receives the data it wants from the proxy, verifies it's from the destination, unpacks it, and loads it.
1. Client requests content from server (HTTPS)
2. Server replies back that server allows proxy requests (HTTPS)
3. Client sends request again with proxy-request flags and arbitrary content identifier & session identifier (HTTP)
4. Proxy receives request, gets content from server (HTTP)
5. Proxy replies to client delivering content from server (HTTP)
6. Client verifies content was signed by server
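(Step 6 is the only new cryptographic work for the client; with an RSA certificate it's a plain signature check. A hedged sketch using the pyca/cryptography package; the detached-signature framing and argument names are assumptions for illustration.)

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_blob(blob: bytes, signature: bytes, server_cert_pem: bytes) -> bool:
    """Accept proxied bytes only if the origin's RSA signature over them verifies."""
    public_key = x509.load_pem_x509_certificate(server_cert_pem).public_key()
    try:
        public_key.verify(signature, blob, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False   # the proxy (or anyone else on path) altered the payload
```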
Google is fighting to turn carriers into dumb pipes.
I can't take this Google consultant seriously in that context.
There was the idea of notaries that never took off, but that would be ideal IMHO.