One of the Most Alarming Internet Proposals I've Seen (vortex.com)
534 points by seven on Feb 23, 2014 | 91 comments



Er, actually reading the specification, it's about proxying http resources, not https ones. This proposal is strictly better than the transparent proxying that's common on the internet today.

    To distinguish between an HTTP2 connection meant to transport "https"
    URIs resources and an HTTP2 connection meant to transport "http" URIs
    resource, the draft proposes to

       register a new value in the Application Layer Protocol negotiation
       (ALPN) Protocol IDs registry specific to signal the usage of HTTP2
       to transport "http" URIs resources: h2clr.
...

    4.3. Secure Forward Proxy and https URIs


    The Proxy intercepts the TLS ClientHello analyses the application
    layer protocol negotiation extension field and if it contains "h2"
    value it does not do anything and let the TLS handshake continue and
    the TLS session be established between the User-Agent and the Server
    (see Figure 8).
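
Roughly, the decision the proxy is making looks like this. A sketch only: the ClientHello parsing is hand-waved away and the function name is mine; "h2" and "h2clr" are the ALPN values the draft talks about.

    from typing import List

    def proxy_action(alpn_protocols: List[str]) -> str:
        """Sketch of the ALPN-based decision the draft describes."""
        if "h2" in alpn_protocols:
            # "https" URIs: leave the handshake alone; TLS stays
            # end-to-end between the User-Agent and the Server.
            return "pass-through"
        if "h2clr" in alpn_protocols:
            # "http" URIs carried over TLS: the proxy may terminate the
            # TLS session and cache/inspect the cleartext-scheme traffic.
            return "intercept"
        return "pass-through"

    print(proxy_action(["h2", "http/1.1"]))     # pass-through
    print(proxy_action(["h2clr", "http/1.1"]))  # intercept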


HTTP/2 changes the meaning of the http:// scheme. All connections will now be TLS-encrypted. (Edit: Maybe not. See hobohacker below.) http:// means that the endpoint has not been verified using the CA system and is using a self-signed certificate (and is thus trivially vulnerable to a MITM should certificate keys not be checked out-of-band).

The purpose is to provide confidentiality to the vast majority of traffic, even if the authentication part of the CIA triad isn't achieved. This proposal's purpose is to undo that and expose all traffic using the http:// scheme to your ISP, exactly as it is today. (I would also note that this draft is proposed by AT&T, which is now rolling out new plans in Austin that charge an extra $30/month if you do NOT agree to them inspecting and data-mining all of your internet activity and selling it to advertisers.)


> This proposal's purpose is to ... expose all traffic using the http:// scheme to your ISP, exactly as it is today.

Isn't that a semantic requirement of HTTP, though? Half of the "tech" in the HTTP/1.X spec is to allow for caching of resources and responses by proxies, allowing anyone between the client and server (e.g. your ISP) to act as a CDN.

HTTPS/1.X effectively throws that away by doing end-to-end encryption. It's a trade-off: we gain the surety that all the responses are coming directly from the peer, rather than anyone else... but the web becomes 90% less cacheable, because the only places things can end up cached are between the client and the HTTPS pipe (i.e. the browser cache), or between the HTTPS pipe and the server (i.e. "reverse proxies" like Nginx.)

The current workaround for this, when you need caching for your Big Traffic on either ingress or egress, is to do what amounts to purposeful self-MITMing of your HTTPS session: to terminate HTTPS on a caching proxy, that holds the certificate of your client/server, and acts as if it were you, while itself doing another HTTPS session for "the last mile" to connect to you. This is what companies do when they deploy their own CA-cert to their networks, so everyone's access can be proxied through their own system; and this is what services like Cloudflare do when they sit "in front of" your server while not being a part of your company's VPN at all.

Basically, HTTP2 codifies this workaround, and calls it HTTP.


I don't see why you think this is a semantic requirement of HTTP. Perhaps there's some confusion over what HTTP semantics are. Let me refer you to http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-2.... It doesn't discuss exposing all HTTP traffic to network intermediaries. Perhaps you're thinking of the HTTP messaging layer http://tools.ietf.org/html/draft-ietf-httpbis-p1-messaging-2.... Also, I think your statement about allowing an in-path intermediary to act as a CDN is weird, since a CDN is defined as "a large distributed system of servers deployed in multiple data centers across the Internet. The goal of a CDN is to serve content to end-users with high availability and high performance." [1].

It's true, HTTPS is full of tradeoffs. You've identified some of them.

What do you see in HTTP/2 that "codifies this workaround"? That wasn't immediately obvious to me. Recall that HTTP/2 is basically just multiplexing with prioritized streams. There's no requirement on TLS in the spec, although all current browser deployments (of SPDY) require TLS.

[1]: http://en.wikipedia.org/wiki/Content_delivery_network


I find it difficult to imagine a world where application-agnostic caching for encrypted sessions is possible.

I fully expect a world where application developers have services at their disposal for positioning assets closer to the end user, with VM-instance-level isolation and security guarantees.

Transport-level security is not likely to be enough for high-value/sensitive data in the long run, but adding a bunch of new trusted parties to the system is going to be a huge enabler for end-user surveillance.


Why application-agnostic? HTTP's own caching isn't application-agnostic; it relies on the server to specify Cache-Control headers.

I would imagine an ideal HTTP2 caching protocol to basically specify that some resources can come from anywhere, as long as the retrieved result conforms to an attached content hash—while also specifying a primary source to get the resource from, if you don't have a DHT handy. (Oddly enough, this is basically a suggestion that web browsers try to resolve magnet: URNs.)
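
Something like this, in rough Python (the mirror URL and digest below are placeholders; the expected digest is assumed to have arrived over the authenticated page itself):

    import hashlib
    import urllib.request

    def fetch_verified(url: str, expected_sha256_hex: str) -> bytes:
        """Fetch a resource from any (possibly untrusted) source and accept
        it only if its SHA-256 digest matches the one obtained securely."""
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() != expected_sha256_hex:
            raise ValueError("content does not match the pinned digest")
        return data

    # Placeholder usage:
    # body = fetch_verified("http://nearby-cache.example/jquery.min.js",
    #                       "<digest taken from the securely delivered page>")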


HTTP/2 does not change the meaning of http://. That's the opportunistic encryption proposal: http://tools.ietf.org/html/draft-nottingham-httpbis-alt-svc-.... For more information, you can see https://insouciant.org/tech/http-slash-2-considerations-and-....


Thanks for this correction; I was under the impression that opportunistic encryption had already been chosen based on HTTP/2 descending from SPDY, but I clearly am not following the WG all that closely.

Is it a fair reading of your blog post that opportunistic encryption has a high likelihood of succeeding?


Only time will tell. It's all still in progress. Of all the major browser vendors (Firefox, Chromium, IE) present at the Zurich HTTP/2 interim meeting, only Patrick McManus (Firefox) has expressed interest. Notably, he's a co-editor of that Alternate-Services internet-draft.


> http:// means that the endpoint has not been verified using the CA system and is using a self-signed certificate (and is thus trivially vulnerable to a MITM should certificate keys not be independently checked).

Right. I understand that. The lack of certificate verification for the http scheme means that ISPs can MITM HTTP traffic with or without this proposal, just like they can with HTTP/1.1.

So how does this proposal make things worse?

> new plans in Austin that charge an extra $30/month if you do NOT agree to them inspecting and data-mining all of your web browsing and selling it to advertisers

Really? That's a special kind of evil and should be illegal.


> Really? That's a special kind of evil and should be illegal.

Nah, no need to make it illegal. He's referring to AT&T's "GigaPower" gigabit service in Austin... the same service that's available from two other providers for the same price as, or cheaper than, what AT&T's charging for their data-mined bullshit.

AT&T will have a fun time getting customers when Google and Grande point that out in their attack ads. :D


Unless Grande and Google eventually decide to do the same thing. Why not, from their perspective?


Frankly, I would be surprised if this was not in Google's ToS from the start, considering that Google's core business model is mining of big data. I suspect there wouldn't be an option to turn it off, not even by paying extra.


Google doesn't need to; your encrypted traffic already goes to their servers.


One other provider. Neither Google (1 Gbps, "by mid 2014") nor Time Warner (300 Mbps, "by the fall of 2014") is yet offering comparable service. Only AT&T and Grande have any actual ultra-high-speed customers at this time.


> The lack of certificate verification for the http scheme means that ISPs can MITM HTTP traffic with or without this proposal, just like they can with HTTP/1.1.

Can they really? At least some of those connections should have the certificates verified out of band, so I'd imagine they would get caught fairly quickly. And then loudly accused of conspiring with the CAs (to stifle competition) or the NSA (to steal your data), or the competitors of whatever site/product was being used, or someone I haven't thought of.


Can you elaborate on those AT&T plans?


It's part of GigaPower (their 300Mbps, soon-to-be 1Gbps FTTH service they rolled out quickly after the Google Fiber announcement). AT&T does a good job of obscuring it on their website[0], but if you click "See offer details", you'll find that the $70/month price is a "special" that requires you opting in to "AT&T Internet Preferences", which is their euphemism for DPI. It's not explained there, but if you opt out, you lose the "special" and your price goes to $99/month.

From just a random googling, here's the first news article I found detailing the plan.[1] (See the last three paragraphs.)

[0]: http://att.com/gigapower

[1]: http://news.cnet.com/8301-1035_3-57615246-94/at-t-delivers-g...


I pay AT&T about half that no-DPI price for 12 Mbit/s. I wouldn't sweat paying the extra to avoid DPI if I wanted the service.


The point isn't whether one can afford it, but whether it's right.


I agree. But that sad situation looks good from where I'm standing.


I think the thing that's crucial to understanding this is this diagram:

               UserAgent             Proxy                 Server
                      TLS Session #1        TLS Session #2
                      <------------>       <------------->
                                     HTTP
                      <----------------------------------->
which makes it clear that the point is to be able to say "I trust this proxy sufficiently that I'm ok with it acting as an intermediary, but I still don't want my stuff in the clear between me and the proxy or between the proxy and the far end server."

So it's about enabling the user to trust -only- the proxy, whereas currently in order to get utility out of a proxy you need to send the traffic as plain HTTP which then trusts both the proxy and the networks over which the data will travel.

That seems to me to absolutely meet the requirements for "strictly better".


If your definition of "better" is simply "faster", then yes, it is better. If you care about security of your data, end-to-end, this is worse (as others have pointed out, because your ISP becomes a MitM). In addition, we are now introducing another vector of attack for bad guys to exploit: how enticing does it sound that every ISP essentially becomes a root certificate authority?


Your ISP could be MitM-ing your plaintext HTTP already, and it's the use cases that we currently use plaintext HTTP for that this is addressing.

If you care about security of your data, end-to-end, then you should probably only use this feature, if at all, with the proxy running on a machine you control - but presumably in that case you currently aren't using plaintext HTTP for anything anyway, so I don't see how it relates.


Indeed, but I think it's even worse than that, because even "faster" is rather contextual.

Proxies can inject dog-leg routes, single points of failure, and computational and I/O bottlenecks, and they make lovely centralized DoS and data-theft targets. They also often downgrade you out of performance enhancements that the client and server would have negotiated if they had been speaking directly (e.g. some actually block compression negotiation so they can observe the content of more flows with less CPU).

Sometimes they do make things faster, though; it's just not clear to me why we want to continue to centralize that approach rather than distributing it across the network, given all the baggage proxying carries.


Read the rest. You'll see that these proxies also serve as handlers for all TLS sessions.


Where does it say that?


3.1.1 TLS Handshake with Proxy certificate

When the user has given consent to the use of a proxy, the User-Agent SHOULD store this consent so that the user does not have to give consent for each new TLS connection involving the proxy. The consent SHOULD be limited to the specific access and MAY be limited to a single connection to that access or limited in time. How the consent information is stored is implementation specific, but as a network may have several proxies (for network resilience) it is RECOMMENDED that the consent is only tied to the Subject field of the proxy certificate so that the consent applies to all proxy certificates with the same name.

If the user has previously given consent to use the specific proxy and the user-agent has stored that, the user-agent may conclude that the user has given consent without asking the user again.

If the user provides consent, the User-Agent continues the TLS handshake with the proxy.

-----------

Right in the next section, it's again implied:

The proxy will then notice that the TLS connection is to be used for a https resource or for a http resource for which the user wants to opt out from the proxy. The proxy will then forward the ClientHello message to the Server and the TLS connection will be end-to-end between the user-agent and the Server.

-----------

Then in 3.2, again implied:

When the User-Agent arrives to the portal page it becomes aware of the existence of a Proxy in the access network and receives a consent request for the proxy to stay in the path for HTTP URI resources. The user-agent then SHOULD secure user consent.

When the user has given consent to the use of a proxy, both the User-Agent and the Proxy SHOULD store this consent so that the user does not have to give consent for each new TLS connection involving the proxy.
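
For what it's worth, the consent store 3.1.1 describes amounts to something like this. A sketch only; the class and method names are mine, not the draft's:

    import time
    from typing import Dict, Optional

    class ProxyConsentStore:
        def __init__(self):
            # certificate Subject -> expiry timestamp (None = no time limit),
            # so consent covers all proxy certificates with the same name
            self._consents: Dict[str, Optional[float]] = {}

        def grant(self, cert_subject: str,
                  ttl_seconds: Optional[float] = None) -> None:
            self._consents[cert_subject] = (
                time.time() + ttl_seconds if ttl_seconds is not None else None
            )

        def has_consent(self, cert_subject: str) -> bool:
            if cert_subject not in self._consents:
                return False
            expiry = self._consents[cert_subject]
            return expiry is None or time.time() < expiry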


The way I'm reading the sections you've quoted, the spec merely allows proxying of https ciphertext. Every router in the internet does that already. Bear in mind that in HTTP 2.0, all connections are TLS connections. The spec sections you've quoted just say that users should only have to consent once to their "http"-resource connections being proxied; they're not talking about "https" resources.


This article ignores the context behind the proposal. Many companies, schools, and prisons are MITMing all SSL traffic today for a variety of liability reasons. Today those users get no notice that their Web browsing is being observed and censored. Trusted proxies are intended to give those users some notice that they're being MITMed.

I agree that MITM proxies shouldn't be used on the public Internet and thus we shouldn't make it easier to do so, but what about the people who are already being MITMed? Is there another way to solve this problem or must we throw corporate Web users under the bus to save the public?


As Patrick McManus says in http://lists.w3.org/Archives/Public/ietf-http-wg/2013OctDec/...:

If someone can install a root cert onto your computer then you are already owned - there is no end to the other things they can do too. Call it a virus, call it an enterprise, but call it a day - you're owned and there is no in-charter policy this working group can enact to change the security level of that user for good or for bad..

The good news is not everyone is already owned and SSL helps those people today.


The specification indeed is about proxying http resources, not https ones. So it's not initially as alarming as some other proposals discussing trusting proxies to intercept SSL connections. For more details, you can refer to https://insouciant.org/tech/http-slash-2-considerations-and-....

This proposal is interesting because it is specifically related to the opportunistic encryption proposals, in particular the one that allows sending http:// URIs over an unauthenticated TLS connection: http://tools.ietf.org/html/draft-nottingham-httpbis-alt-svc-.... The problem for proxies is that if you mix http and https (authenticated) traffic on the same TLS connection, the proxy cannot tell whether it can safely MITM the connection. The proxy vendor would like to know if it can do so, probably for network management / caching / content modification reasons. Of course, the point of the opportunistic encryption proposal is to increase security (although its actual effective impact is controversial: https://insouciant.org/tech/http-slash-2-considerations-and-...). But if you believe in opportunistic encryption's security purposes, then it doesn't really make sense to make the MITM'able traffic identifiable so that proxies on the network path can MITM it without detection.


It actually appears that the RFC openly admits the potential for abuse here:

"6. Security Considerations

This document addresses proxies that act as intermediary for HTTP2 traffic and therefore the security and privacy implications of having those proxies in the path need to be considered. MITM [4], [I-D.nottingham-http-proxy-problem] and [I-D.vidya-httpbis-explicit-proxy-ps] discuss various security and privacy issues associated with the use of proxies. Users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc.

Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information."


To play devil's advocate, this could potentially be less harmful than the existing situation: where e.g. various corporate nets will require you to install root certs to accomplish the same MITM attack, in a less visible fashion (after installation), with some if not all of the same caveats - especially if given the ability to opt out.

(Bugs, insufficiently scary UI, and "discovery" are all massive concerns of course...)


> various corporate nets will require you to install root certs to accomplish the same MITM attack

They don't even bother making you install CA certificates. They just abuse subordinate CAs: see https://blog.mozilla.org/security/2013/02/15/announcing-vers...

I'm also a huge fan of Google's http://www.certificate-transparency.org/, which makes it very difficult to fool very many people for very long.


Hm, no. Various corporate networks don't count as ISPs; the number of potentially abused users is not the same. A company can do whatever it wants to; an ISP offers a service and should respect the privacy of its customers, at least theoretically.


This proposal isn't intended for ISPs and should never be used on the public Internet.


Oh, my bad then. I misunderstood the proposal and its implications. But since the protocol supports that, how can we be sure that ISPs won't use it?


Cynically, "we can't". Or "they already have better options".

Alternatively, outcry and blacklisting ISP proxies - just as we do with root cert abuse.


Openly admitting the potential for abuse doesn't make this any less ridiculous of a proposal.


I'm not saying it does.


I've become increasingly disgusted with the IETF since I found out they have at least a few NSA agents working with them on protocols and, more importantly, are refusing to kick them out, even after all the Snowden revelations about the NSA trying to subvert and undermine encryption protocols:

http://mirrors.dotsrc.org/fosdem/2014/Janson/Sunday/NSA_oper...

Then I found out that they've been working with Cisco on another thing similar to this one, for "legal intercepts", a.k.a. "trusted backdoors", like we're seeing above.

https://www.blackhat.com/presentations/bh-dc-10/Cross_Tom/Bl...

With NIST already corrupted by the NSA, and now the W3C becoming corrupted by the MPAA too, I think we're seeing the decay and fall of the "standards bodies", because I don't believe the Internet will tolerate these moves. The Internet will ignore them, do its own thing, and make that popular. I think future standards will be built from the bottom up, and if I'm not mistaken, most of the Internet has been built that way so far anyway.


If you consider a standards body corrupt because they have a single member you disagree with, you might be failing at politics.


It's not like they are just some "ordinary" bad guys, like AT&T, who just want to make some bucks. The NSA is one of the most dangerous enemies of free speech and the freedom of the Internet. There is a high chance that they are going to undermine all our efforts to make a free and secure Internet.


The best part: the "Privacy" section of the document is blank.

http://tools.ietf.org/html/draft-loreto-httpbis-trusted-prox...


There are some kinda legitimate uses for this in certain environments -- enterprise DLP, various kinds of filtering, etc. Potentially even caching and stuff on the distant end of really weird network connections (when I go to Mars in ~30y, I'd like to have as much cached as possible, and converted to message-based vs. connection-oriented protocols).

We have good enough workarounds for this right now (putting wildcard CA certs on devices and proxying that way), but they're not awesome. So, if there were a way to keep this from being used for evil, it could make some existing non-evil activities easier.

But, on balance, the risk of evil might be too high.


There are potentially legitimate (though still sketchy) reasons to MITM HTTPS traffic from a host configured to allow that (for instance, by trusting an organizational CA). There are no legitimate reasons to MITM HTTPS traffic without the host's knowledge.


There was another article on here a week or two ago effectively blasting the http/2.0 wg for doing stupid things. I think it was the "HTTP 308 incompetence expected" article.

Now this. I'm beginning to wonder if I want anything to do with HTTP/2.0.


Perhaps you should look at the Hacker News comments on that thread: https://news.ycombinator.com/item?id=7249193. Notably, my comments: https://news.ycombinator.com/item?id=7249560 and https://news.ycombinator.com/item?id=7249869. Basically, the author is wrong.


Both this article and that one were impressively alarmist. Speaking as someone who has implemented an HTTP/2.0 client stack, it's really nothing like as bad as either of these articles makes out.

For once, I invite you to fully read the comments on this post and the one you're referring to. Or, alternatively, take a read of the draft RFCs and the WG mailing list, which is totally open.


Ok, here is a suggestion: the Right to Root.

Just as a citizen's letters, papers, and home are inviolable, shouldn't our new papers and our new homes also be inviolable? If I own a device, no one should legally be allowed control over it.


This proposal is mostly intended for environments where the users do not own the equipment, like offices and schools.

But yeah, if you paid for it but can't root it, you got ripped off.


You'd need to follow it up with a "Right to Own" whereby you can't lease or rent devices to people.


>if I own a device, No-one should legally be allowed control over it?

Then tech companies might start leasing out their devices: you technically don't own it, so you're not allowed to do what you want to it.

Not that the Right to Root wouldn't be nice, but the change in attitude has to come first. And we need to somehow convince the likes of Apple that their DRM is bad for business.


Wasn't that written from a mac/ipad?


Errrr... yes. My iPhone, actually. (How could you tell? Or was that a lucky piece of sarcasm?)

The signatories to the US Constitution owned slaves; I can use an iPhone while still wanting the Right to Root.


If someone leases me a notebook, do they get the right to read what I write in it? Seems unlikely.


Another stab at using "trusted proxies", huh? I thought we had learnt that lesson a while ago. Can we move on please, internet?


The fact is "trusted proxies" are a real thing right now. Plenty of private networks require that you trust one or more private CA roots and all SSL are intercepted and filtered. It is sort of a pain to do. You have to use something like Microsoft System Center to push the root onto all managed computers.

This IETF proposal just formalizes it.


The amusing thing about this is that MITM can also be used to one's personal benefit -- I run a local filtering proxy that strips off most of the crap on the majority of sites, and I've had to do a bit of hex editing to be able to do that without the browser complaining.

Look at it another way: with browsers becoming more and more unconfigurable, and nearing the point of being user-hostile, is it any wonder that content providers would want their content, whether or not the user likes it, to be delivered unchanged and forced upon the user? All the Snowden stuff has made us feel that way, but what I'm saying is that the one doing the MITM isn't always malicious.


Yes! If you don't have a reasonably easy way to inspect and modify what is being sent over the encrypted connections your device makes, you are in very serious trouble. Your device will be [ab]used against you.


The most alarming thing about this article is the author's tone.


I'm not an expert in internet security or crypto. Some of the comments below raise interesting points, both defending the intent (and implementation) of this and pointing out its flaws. However, as an unsophisticated person interested in my data security, this sounds absolutely awful. Hopefully more clarity on this emerges.


This proposal is so stupid it's hard to believe someone actually made it. It really defeats the purpose: why use SSL? Who am I protecting my data from if the ISP is snooping??? The kid at the Internet cafe who just found out about SSLSnoop?

At this point the right proposal would be to just remove SSL altogether; no need to go in circles about it.


Is someone from the NSA behind this?

Sorry, let me rephrase that. Who from the NSA is behind this?



Apparently the NSA is an intelligence agency. [1] So they are probably not that blunt.

[1] https://en.wikipedia.org/wiki/Nsa


All of them, of course. Believing any less is unpatriotic.


Crazy. If you want to use caching, just use HTTP for that content.


It's not that simple.

If you are going to use HTTPS, you need to use it for all content on that domain. Otherwise, if you load, for example, a large JavaScript file over HTTP, an attacker can just poison that file and control your whole page.

Even if you loaded an image from the same domain, your credentials would be sent as a cookie in plain text.

You could use a separate domain for content as explained here: http://stackoverflow.com/a/5160657/804713


The web is long overdue for a method to specify an exact resource, by content-hash, from one-of-whatever-sources.

Those sources can then be other less-secure protocols, even those unanticipated by the referrer, because the client got the necessary verifier via the secure-path.


I believe there's already a standard HTTP header for this: Content-MD5.

Browsers definitely don't cache across origins by this though.

If they did, would it be possible to create a malicious JavaScript file with the same MD5 as jQuery?


It would need to be done via the pointer-to-content (URL/URI), and an independently-calculated secure-hash, not just a header. (The 'ni' proposal might serve this role.[1])

You definitely wouldn't use MD5, as experts have been recommending against its use for content-security since about 1996.[2] (A practical full chosen-preimage attack hasn't yet been developed, but still, you'd design for security for the next few decades, which would mean a SHA256 or better.) The choice of a good hash would mean no one could practically create an alternate file with the same hash.

[1] http://tools.ietf.org/html/draft-farrell-decade-ni-10

[2] http://en.wikipedia.org/wiki/MD5#cite_note-30


Yes, it would be possible, because MD5 has been broken: https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities

Any file can be modified to result in a hash collision with a specific MD5. This makes it unsuitable for its stated purpose as a cryptographic hash.

The solution would be to use a newer and stronger hash like Keccak.

A solution similar to what you are thinking of is already used by BitTorrent's Distributed Hash Table to identify files.


To be precise, while MD5 has been 'broken' in the sense of not meeting its design goals for a long time, and there are now a number of scenarios where attackers can create pairs of files with the same MD5, it is not yet practically possible to create a collision for any arbitrary file (such as jquery.js) on demand. That would be the total 'preimage vulnerability' as mentioned at:

https://en.wikipedia.org/wiki/MD5#Preimage_vulnerability

MD5 should absolutely not be used for this content-identification purpose, or in any other new code... and wise designers haven't been using it for 10+ years. I'm just mentioning this to be precise about the current state of its proven weaknesses.


That's actually a very good idea... Browsers could load the jquery file in your cache by its hash, rather than its URL. No more having 100 copies of jquery.min.js in your cache just because they're from different URLs.
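
Something like this, conceptually (purely illustrative; this is not how any current browser cache works):

    import hashlib
    from typing import Dict, Optional

    class ContentAddressedCache:
        """Cache entries keyed by the resource's SHA-256 digest, so two pages
        referencing byte-identical copies of jquery.min.js share one entry."""

        def __init__(self):
            self._entries: Dict[str, bytes] = {}  # hex digest -> resource bytes

        def put(self, body: bytes) -> str:
            digest = hashlib.sha256(body).hexdigest()
            self._entries[digest] = body
            return digest

        def get(self, digest: str) -> Optional[bytes]:
            return self._entries.get(digest)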


Precisely this is actively being discussed in the W3C WebAppSec WG: http://w3c.github.io/webappsec/specs/subresourceintegrity/

The security folks I talk to are... nervous... about this use of subresource integrity, however.


Can you say any more about what makes them nervous? (What do they think will go wrong?)


Early to say... it's their job to be nervous about new things.


It might have privacy issues, though. Say you copy the HN logo to your server and then serve it with a hash. You can then tell whether a person has visited HN by seeing if their browser asks for the logo.


To solve this problem in the context of HTTP proxying, you'd need more than just a way to refer to arbitrary content (which, honestly, doesn't need to be a crypto hash; it just needs to be an arbitrary identifier unique to the user and session). First, consider why we use HTTPS (in an overly simplistic view):

1. Integrity

You want to make sure the data you're getting is from the source you expect, and that it hasn't been compromised. HTTPS does this with PKI by enabling you to verify that the destination host is really who they say they are (certificates) and to only trade data with them. Anybody who doesn't pass the signed-certificate-verifying test doesn't get to give us data.

2. Secrecy

We don't want anyone else knowing what our data is because it may contain sensitive information. Once we verify the identity of the sender, each session is independently encrypted to prevent later decoding.

So what would we need to cache our content and retain its integrity and secrecy? The simplest thing would be encrypted blobs of data signed by our destination host's certificate. A proxy could keep data for a set amount of time, perhaps with each piece of data encapsulated in a different session. All our client would need to do is connect once and initiate a session, and the server could deliver a copy of the encrypted/signed payload to the proxy.

With some magic flags in the new protocol our client could be instructed that the server allows the client to make a 'proxy request' to the destination for content. This request could be made in such a way that it allows a proxy to intercept this request from the client (which could be plaintext actually), get the encrypted chunk from the destination (which could also be done plaintext), and the proxy could deliver the chunk to the client, similar to what it does now with HTTP. Since the chunk was signed and encrypted by the destination, the proxy can't do anything but deliver the exact copy the destination gave it. Our client receives the data it wants from the proxy and verifies it's from the destination, unpacks it and loads it.

  1. Client requests content from server (HTTPS)
  2. Server replies back that server allows proxy requests (HTTPS)
  3. Client sends request again with proxy-request flags and arbitrary content identifier & session identifier (HTTP)
  4. Proxy receives request, gets content from server (HTTP)
  5. Proxy replies to client delivering content from server (HTTP)
  6. Client verifies content was signed by server
Of course this would be limited in its usefulness compared to plaintext caching; it would be user and session specific, so only lots of requests by the same client in a session would benefit from this. But it would theoretically save on bulk requests of encrypted content while preserving integrity and secrecy.
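
As a rough sketch of step 6 (verifying that whatever the proxy hands over really came from the origin), here's the idea using an Ed25519 signature from the third-party `cryptography` package as a stand-in for "signed by the destination's certificate". The payload encryption is omitted and everything here is illustrative, not a spec:

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Origin side: sign the (already encrypted) payload once; the proxy can
    # cache and replay the blob + signature but cannot alter either.
    server_key = ed25519.Ed25519PrivateKey.generate()
    blob = b"...encrypted response payload..."
    signature = server_key.sign(blob)

    # Client side: the public key was learned over the initial HTTPS exchange
    # (steps 1-2); verify whatever the proxy delivered in step 5.
    public_key = server_key.public_key()
    public_key.verify(signature, blob)  # raises InvalidSignature if tampered
    print("blob accepted")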


Credential-containing cookies should be set as Secure.
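
For example (a standard-library sketch; the cookie name and value are placeholders):

    from http import cookies

    c = cookies.SimpleCookie()
    c["session_id"] = "abc123"           # hypothetical credential cookie
    c["session_id"]["secure"] = True     # only ever sent over TLS
    c["session_id"]["httponly"] = True   # not readable from page scripts
    print(c.output())  # emits a Set-Cookie header with Secure and HttpOnly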


If HTTP 2.0 requires TLS then to get speed and caching you'll need some kind of trusted proxy.


Carriers are fighting against being turned into dumb pipes.

Google is fighting to turn carriers into dumb pipes.

I can't take this Google consultant seriously in that context.


Carriers _are_ dumb pipes. Carriers are fighting to get rid of that. The Internet _is_ dumb pipes connected together. Carriers are fighting against that.


When I read the title I thought this was going to be from Upworthy.


Maybe whoever made this proposal just meant it to be funny.


SSL is such crap. Time to make a better internet.


With blackjack and hookers?


He's not wrong about SSL being poor; rather, the CA system is what I consider to be poor.

There was the idea of notaries, which never took off, but that would be ideal, IMHO.


Yes: notaries + alternative DNS. The problem is really the root (literally). It's the same thing: somebody running a server and making profit/power-driven decisions about what goes into that server. I mean, you can choose whether you go to GoDaddy or Comodo, but that's about it; with .com you don't even have a choice but Verisign. DNSSEC adds insult to injury by making the domain registrars the CAs. And this 'proposal' is really the height of absurdity. AT&T shouldn't be writing the trust protocols.



