
Internet architects propose encrypting all the world's web traffic - prateekj
http://www.wired.co.uk/news/archive/2013-11/15/encrypting-all-web-traffic
======
bostik
I know this may be a bit off-topic, but since we're discussing HTTP/2 already,
I can't help but wonder. The recent attacks (BEAST, CRIME, etc.) have all
relied on being able to build an oracle from (sent) body content and uncover
header elements. The attacks only get better in time, so I expect to see even
more attacks like this.

If we're going to overhaul the HTTP spec in any case and go for framed
messaging - why not go for separate header/body compression AND separate
encryption keys? As far as I understand, that would block the whole family of
attacks. Separate compression contexts would prevent the attacker from
building an oracle from body to attack the header. Separate keys and
encryption contexts should prevent padding and mode oracles against the
header.

I know crypto is hard, and I _know_ there would be devilishly tricky details
to figure out. So I have to assume someone has already thought of this, has
had it vetted for flaws and then discarded the idea.

But if so, why? Apart from requiring at least twice the key size in
negotiation (compute oversized shared key, split for header/body and likely
HMAC keys), are there any other obvious or non-obvious technical reasons not
to do this?
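The key-splitting step in that last parenthetical could look roughly like this — a minimal sketch, assuming one oversized negotiated secret expanded into independent per-purpose keys with an HKDF-style step (the labels and key sizes are invented for illustration):

```python
import hmac
import hashlib

def derive(master: bytes, label: bytes, n: int = 32) -> bytes:
    # One HKDF-Expand-like step: a distinct key per purpose, all derived
    # from the single master secret produced by the handshake.
    return hmac.new(master, label, hashlib.sha256).digest()[:n]

master = b"\x01" * 48  # stand-in for the negotiated shared secret

keys = {label: derive(master, label)
        for label in (b"header enc", b"body enc", b"header mac", b"body mac")}

# All four keys are independent, so a break of one context doesn't
# directly expose the others.
assert len(set(keys.values())) == 4
```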

~~~
AnthonyMouse
I think the focus on the dichotomy between the header and the body is
misguided. If you separate them then an attacker can't attack the header using
the body, but that's just inviting the attacker to find a way to insert data
into the header to attack other header fields, or find secrets in the body
that shouldn't be revealed.

Also, it seems like you could get the same benefit as using a separate key and
encryption context by just padding the header to a block boundary.

The real problem with these attacks is that they aren't attacks on the cipher
at all. You could be using a one time pad and the attacks would still be
effective. What they really are is an attack on content-revealing efficiency.
The only defense is to prevent the length of the message from depending on the
existence of matches between attacker-supplied plaintext and secrets, and
doing that in a way that allows non-cryptographer web developers to not screw
it up basically comes down to this: you have to disable TLS protocol
compression.
It exists at the wrong layer of abstraction because the TLS layer doesn't know
what data is acceptable to compress against what other data.

But you still want to use compression, so the compression has to exist at a
higher level in the stack, i.e. in HTTP or even HTML, where the web developer
can specify exactly which data has come from an untrustworthy source so that
it won't be compressed against secrets.
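The length leak behind these attacks can be demonstrated with nothing but zlib — a toy sketch, with a made-up secret, modeling a shared compression context between a secret header value and attacker-supplied input:

```python
import zlib

SECRET = "sessionid=7f3a9c21d8"  # made-up secret the attacker wants

def response_length(attacker_input: str) -> int:
    # Model of the flaw: secret and attacker-controlled data share one
    # compression context before encryption.
    payload = (SECRET + "\n" + attacker_input).encode()
    return len(zlib.compress(payload))

# A guess matching the secret compresses to a back-reference, so the
# ciphertext length alone reveals whether the guess was right.
right = response_length("sessionid=7f3a9c21d8")
wrong = response_length("sessionid=zqwxvkbjpm")
assert right < wrong
```

Encryption doesn't hide this: the ciphertext length tracks the plaintext length, which is why the cipher itself is irrelevant to the attack.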

------
XorNot
One thing I think is consistently screwed up in security proposals is _what do
you do when it might be compromised_?

Users invariably click through the big red web page because they still need to
use their email at the end of the day, and that page provides no information
as to what's going on.

If the certificate doesn't verify, don't give up - try a different route to it
(sadly we've taken away source-routing for other security reasons). If that
fails then start a Tor session and try connecting through that.

Then show me a little diagram that shows where we think the problem is so I
can think about what might be the problem.

~~~
InclinedPlane
The problem is the false positive rate. It's like strict parents and car
alarms. Car alarms go off without good reason all the time; they're ubiquitous
and thus almost completely useless. When was the last time you heard a car
alarm and investigated whether a car theft was in progress?

Similarly, a parent who tells their kids that pot will kill them instantly,
prevents them from watching PG movies, and so on is more likely to have their
advice completely ignored when there is an opportunity to do so. And
then you end up with kids who get into oxycodone, binge drink, and have
unprotected sex.

The strictness of browser security is pretty ridiculous and not very helpful
to the end user. If I go to google news on my phone I get half a dozen
messages about certificate mismatches, which I have to dismiss individually.
It's a dumb system with even worse UI. Except the UI is the foundation on
which a sense of trust or alarm is built.

------
andrewreds
I would like to see the HTTP/2 spec allow plain http traffic, with the
requirement that http 2 traffic must be signed. This would allow publicly
accessible resources (including javascript) to be cacheable without
compromising security. Random thoughts:

* The signing could be done in headers (HTTP/1.0 compatible)

* Works with all existing internet caches that don't modify pages they cache

* (some proxies inject headers into pages, so the spec should be resistant to this, and throw out all unsigned headers)

* Signature must cover cache expiry and full page url (including FQDN, port number etc)

* don't send cookies, user-agent or other identifying details over http 2 (helps with caching, and privacy)

* Could have a requirement that all https 2 traffic can only link to resources on https or http 2

* If the signature in the http 2 message fails, browser could fall back to https 2

* A redirect could sign the content it is redirecting to (to securely load resources from 3rd party CDN)

Edit: formatting
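A toy sketch of the signing idea above, using stdlib HMAC as a stand-in (a real scheme would need a public-key signature so caches and clients don't share a secret; all names and headers here are invented for illustration, not from any spec):

```python
import hmac
import hashlib
import time

KEY = b"demo-key"  # stand-in: a real design would use an asymmetric signature

def sign_response(url: str, body: bytes, max_age: int) -> dict:
    # Per the bullets above, the signature covers the full URL, the cache
    # expiry, and the body, and travels in HTTP/1.0-compatible headers.
    expires = str(int(time.time()) + max_age)
    msg = url.encode() + b"|" + expires.encode() + b"|" + body
    sig = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return {"X-Signature": sig, "X-Expires": expires}

def verify_response(url: str, body: bytes, headers: dict) -> bool:
    if int(headers["X-Expires"]) < time.time():
        return False  # stale: the signature must cover cache expiry
    msg = url.encode() + b"|" + headers["X-Expires"].encode() + b"|" + body
    good = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, headers["X-Signature"])

url = "http://cdn.example/app.js"
h = sign_response(url, b"console.log('hi')", 3600)
assert verify_response(url, b"console.log('hi')", h)
assert not verify_response(url, b"evil()", h)  # tampered body fails
```

Because verification needs only the bytes plus the headers, any untrusted intermediate cache can serve the response and the client can still detect tampering.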

------
sp332
Lots of discussion from two days ago:
[https://news.ycombinator.com/item?id=6724292](https://news.ycombinator.com/item?id=6724292)

------
ENGNR
Is it folly to continue with https given that the certificate authority tree
of trust has been compromised?

Maybe baby steps are a good option, but I'd like to eventually see all traffic
signed with keys stored in a distributed ledger like namecoin. Good on the
IETF for moving forward though.

~~~
DavidHogue
There are degrees of brokenness with SSL.

Even if certificate authorities are compromised by major governments, SSL is
still enough for now to keep my credit card safe from the shady guy in the
back corner of the coffee shop. Or to keep my Google searches private from the
techies at my ISP.

There are some upcoming workarounds that will help too, like cert pinning or
DANE.

------
jdhendrickson
Which is all well and good until a three letter agency compromises the issuing
CA, which in turn leads to a false sense of security.

~~~
grey-area
That's really a separate issue though.

Yes, we also need to tackle the crazy mess of certificate authorities and
their monopoly on trust from browsers, but that doesn't mean encryption is not
worthwhile.

~~~
XorNot
Yes it does.

It has been _trivial_ to bolt encryption to things for a long time now.
Encryption is _worthless_ without trust. Ubiquitous encryption more so because
it's no longer a surprise when something is encrypted, it's just expected that
you have your snooping client MitM the traffic with dummy certificates.

It doesn't stop 3-letter agencies (though really, people should stop
pretending that's where the real threat lies for most people) and it doesn't
stop informed hackers or intrusive corporate firewalls.

~~~
pyre
Intrusive corporate firewalls MitM SSL sessions because the client is already
compromised (IT installs the firewall's cert as a trusted CA). _Any_ solution
can't route around a situation where you don't trust the client machine.

~~~
XorNot
Right but that's my point: it doesn't matter if it's encrypted. It matters if
it's encrypted AND you've established trust.

I'd go so far as to say that trust is actually more important - whether
someone can read my messages is less important than verifying they're what the
sender _intended_ to send.

~~~
grey-area
I agree it's really important, but we need both parts of the puzzle, not just
one. Out of interest, what would your proposal for a new trust model for
server communication be?

~~~
XorNot
Web-of-trust rather than CA-rooted, with more attention placed on who the
signers are.

i.e. if I'm using my bank, then what matters to me is whether the bank is
certified by my government, not VeriSign or whoever. If it's a foreign bank,
then it matters whether their government trusts them, and it matters to me
whether MY government trusts them.

If it's NOT a bank, then maybe my trust requirements are different. The UI and
the details are everything here, though - I think we've put way too much
import on that little green padlock icon without doing enough to educate
users, moment to moment, about what it means.
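As a toy sketch of that per-category trust idea (all signer names and categories are invented for illustration):

```python
# Trust depends on *who* signed relative to *what the site is*, rather
# than on a single global CA root. Policy entries are purely illustrative.
TRUST_POLICY = {
    "bank": {"gov-home", "gov-foreign"},            # banks: only governments
    "blog": {"friend-alice", "friend-bob", "gov-home"},
}

def is_trusted(category: str, cert_signers: set) -> bool:
    # Trusted iff at least one signer is acceptable for this category.
    return bool(TRUST_POLICY.get(category, set()) & cert_signers)

assert is_trusted("bank", {"gov-home", "verisign"})
assert not is_trusted("bank", {"verisign"})  # a commercial CA alone won't do
```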

------
joelhaus
What, if any, effect might this have on net neutrality? I imagine that it
would make discrimination by carriers more challenging.

~~~
salient
As soon as a service becomes relatively popular, the carriers will know about
it, and throttle it manually until they "pay up".

------
stereo
I feel that while it’s good that we’re encrypting HTTP more and more, there is
too much focus on it, and other protocols are being neglected. Most SMTP and
instant message traffic, for example, is not encrypted and authenticated.

------
LukeWalsh

    > HTTP/2 to only be used with https:// URIs on the "open" internet.
    > http:// URIs would continue to use HTTP/1 (and of course it would
    > still be possible for older HTTP/1 clients to still interoperate
    > with https:// URIs).

Seems like the most viable option, and leaves open the option of later
implementing TLS Relaxed.

------
typicalrunt
Given the proposal to encrypt everything, I wonder if the architects are going
with standard SSL or the extended validation SSL. Will we need to provide full
business details and such just to get HTTPS on our websites, or will a simple
credit card payment complete the process?

------
rch
What about IPsec?

~~~
tikums
I agree - HTTP may not be the right OSI layer to be securing in the first
place.

Unfortunately, the NSA sabotaged IPv6's goal of "encrypting everything by
default":

[https://en.wikipedia.org/wiki/IPsec#Alleged_NSA_interference](https://en.wikipedia.org/wiki/IPsec#Alleged_NSA_interference)

Specifically,

"IPsec was originally required in IPv6 before RFC 6434 made it only a
recommendation"

~~~
wmf
Given the current state of IPsec/IKE, making it "required" is pointless. AFAIK
there is no standard for opportunistic session setup, so random hosts don't
know how to speak IPsec to each other even if they both implement it.

------
nodata
With CAs, or without CAs?

~~~
wmf
The proposal to encrypt everything doesn't require CAs (but of course it
doesn't show the lock either).

