
Re: Proposed Statement on "HTTPS everywhere for the IETF" - glenscott1
http://www.ietf.org/mail-archive/web/ietf/current/msg93416.html
======
Aloha
Not all information needs to be secure, pure and simple, and individual
anonymity is more important for the health of internet culture than all the
security in the world.

This section covers my feelings on the topic:

"TLS does not provide privacy. What it does is disable anonymous access to
ensure authority. It changes access patterns away from decentralized caching
to more centralized authority control. That is the opposite of privacy. TLS is
desirable for access to account-based services wherein anonymity is not a
concern (and usually not even allowed). TLS is NOT desirable for access to
public information, except in that it provides an ephemeral form of message
integrity that is a weak replacement for content integrity."

~~~
espadrine
> _individual anonymity is more important for the health of internet culture
> than all the security in the world._

TLS provides exactly as much anonymity as HTTP (ie, none), so it is not
trading that away. It only wins security without losing anything.

Mr Fielding suggests a hypothetical system that would keep the same amount of
privacy as HTTP while ensuring the integrity of the content. But without proof
that it works (or even a full design: how does it prevent a MITM from
substituting the signature too?), we can't know that it has any potential. And
it certainly has no value now, since it doesn't exist yet.

Even if it did exist, the same amount of privacy as HTTP is still the same
amount of privacy as TLS.

And _integrity is not something we can overlook_. Imagine that, say, the
Chinese government altered a set of RFCs inside its borders by MITMing them,
in such a way as to suggest a different but compatible implementation of
cookies that allows it to read them. Chinese implementors would then create
insecure tools, and weaken privacy tremendously!

~~~
skrebbel
> _TLS provides exactly as much anonymity as HTTP (ie, none), so it is not
> trading that away. It only wins security without losing anything._

This is simply not true. If I request a resource from a server that is cached
by my ISP, my request never reaches the source server and they'll never be
able to measure that I requested that resource.

~~~
Flimm
In that case, you just unexpectedly told a third party about your request,
whereas with HTTPS, you only tell the intended party about your request. How
does it improve your anonymity to unexpectedly have third parties intercept
your request, read it and respond to it?

If you want to choose to use a third party mirror, then you can do so: just
explicitly request the mirror, over HTTPS to avoid a MITM attack.

~~~
Anderkent
That's message confidentiality, not privacy. The win to privacy is that
instead of telling the intended party _and every third party in the middle_
about you accessing some resource, you only tell the subset of third parties
between you and the cache.

Since it's more difficult to intercept access to all possible caches than to a
centralised server, that's a win for privacy. At a cost of message
confidentiality, of course, but if your message content doesn't need to be
confidential (i.e. you're just GETing a resource), it's not a big loss.

~~~
espadrine
> _you only tell the subset of third parties between you and the cache._

That's not a privacy win at all if you want privacy from the cache, if you
don't trust the cache, or if the cache is wholly owned or used by you alone.

Even if all of that doesn't apply, you don't even have a guarantee that it
will hit the cache. If it is a cache miss, no “privacy win”.

How many asterisks does this claim of a privacy win need before it should no
longer be considered valid?

> _At a cost of message confidentiality, of course, but if your message
> content doesn't need to be confidential (i.e. you're just GETing a
> resource), it's not a big loss._

You are forgetting the loss of message integrity, as well. That is a big loss.

------
userbinator
_TLS everywhere is great for large companies with a financial stake in
Internet centralization. It is even better for those providing identity
services and TLS-outsourcing via CDNs. It's a shame that the IETF has been
abused in this way to promote a campaign that will effectively end anonymous
access, under the guise of promoting privacy._

I think he makes a very good point here: if browsers did not support plaintext
HTTP at all, and only CA-verified TLS, it would be practically impossible for
those who want to run a server somewhere, to anonymously serve a site
containing public information. If everyone has to obtain a certificate from a
CA, that is another way they can be tracked by a central authority.

~~~
kijin
> _if browsers did not support plaintext HTTP at all, and only CA-verified
> TLS_

That's a very big "if", and it reeks of FUD.

Show me a browser that has any plan to drop support for plaintext HTTP any
time in the foreseeable future.

Firefox ain't one of them. Last time I checked, their plan was to reserve some
of the more dangerous features (such as access to the camera) for secure
websites. Hardly a plan to drop support for plaintext HTTP.

If you still aren't convinced that the current controversy is just a bunch of
FUD, I'll bet $100 that 10 years from now, I'll still be able to post public
information (say, the full text of RFC 2616) on a plain HTTP site and have you
access it with a mainstream browser.

~~~
iopq
Firefox is moving in that direction. Maybe by 2020 you'll have to click a lot
of prompts to see an "insecure" site.

~~~
kijin
Classic slippery slope argument.

When abortion was legalized, some people argued that we'd be murdering
children soon. Has that happened?

If something is moving in the right direction, but if you're worried that it
will go too far, the solution is to get involved and stop it at the right
time, not to spread FUD about the hypothetical doom of the world.

~~~
josteink
> Firefox ain't one of them. Last time I checked, their plan was to reserve
> some of the more dangerous features (such as access to the camera) for
> secure websites. Hardly a plan to drop support for plaintext HTTP.

So basically, by your own admission, with a near-future version of Firefox,
websites will only be able to offer a "full" web experience if they are
served via HTTPS.

HTTP-based websites will be reserved for an inferior web.

> Classic slippery slope argument.

But somehow saying that this is moving in a HTTPS-only direction is a slippery
slope argument? How long until JavaScript is only allowed via HTTPS? How long
until video and media-APIs will only work with a "secure" DRMed connection,
signed by the MPAA?

Taking HTTPS everywhere and removing support for HTTP is the slippery slope
and we're already walking it.

Every feature of every part of the HTML spec has to be supported for every
transport. End of discussion.

HTTPS everywhere is a misguided effort. Trying to artificially limit HTTP to
further your cause is just GOT-level political bullshit. Stop playing
dishonestly. If HTTPS everywhere can't win through on its own merits, you
should let it die.

~~~
scrollaway
> How long until video and media-APIs will only work with a "secure" DRMed
> connection, signed by the MPAA?

The slope is so slippery I think I might actually fall off my chair. I don't
think you know what the fallacy actually _is_.

There's no arguing against facts - moving to promote HTTPS and make some
features HTTPS-only does go in that direction. But that doesn't mean things
will _continue_ going in that direction.

If I keep driving north I'm sure I'll fall off a cliff eventually. The magic
happens because the road isn't straight.

------
meesterdude
> Roy Thomas Fielding (born 1965) is an American computer scientist,[1] one of
> the principal authors of the HTTP specification, an authority on computer
> network architecture[2] and co-founder of the Apache HTTP Server project.

[http://en.wikipedia.org/wiki/Roy_Fielding](http://en.wikipedia.org/wiki/Roy_Fielding)

A good read, and he's not just blowing smoke.

> If the IETF wants to improve privacy, it should work on protocols that
> provide anonymous access to signed artifacts (authentication of the content,
> not the connection) that is independent of the user's access mechanism.

But it seems to me that there is basically no way to request access to any
kind of data without it being traceable in some manner; at the very least the
ISP would still see the traffic. I guess you could argue for Tor, but that
still has attack vectors and issues of its own to worry about.

Funny, we need the same kind of access that radio and TV used to provide,
where you could just "tune in" to something and have a listen, and you were
more or less untraceable; even if you were to broadcast on that frequency you
were, while triangulable, still fairly anonymous. But on the internet, there
is no way to broadcast like that. Maybe that's a design flaw, maybe it's a
feature.

~~~
aidos
... And the guy who came up with REST:
[https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm](https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm)

~~~
nbevans
REST was more just a clarification and formal study to describe the intended
way of using HTTP.

------
empressplay
100% agree. If you want to ensure the integrity of content, then sign the
content. If you want to protect people's interactions with websites when
exchanging non-public data then use encryption when that's warranted. But
credential-based encryption should never be used across sites or with public
data because (as the poster notes) it just becomes another way in which
broader interaction with the Internet can be tracked.

It's trading yet more liberty for just a little bit more security, and haven't
we all done enough of that already?

~~~
Animats
_" 100% agree. If you want to ensure the integrity of content, then sign the
content."_

That's what "subresource integrity"[1] is about. For static data, a
cryptographic hash of the linked content is attached to links. This detects
any modifications of the document. Now you can use a CDN without trusting it.

A big problem with "HTTPS Everywhere" is that it encourages termination of the
secure connection at a CDN. The CDN can then tamper with the content. Some do,
adding ads and trackers. Subresource integrity will detect such tampering, but
HTTPS Everywhere will not.

[1] [http://www.w3.org/TR/SRI/](http://www.w3.org/TR/SRI/)
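
For what it's worth, the integrity value is just a base64-encoded digest
prefixed with the hash algorithm's name, per the SRI spec. A minimal sketch
of computing one (the script content and CDN URL below are invented):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value: '<algo>-<base64 digest>'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

script = b"alert('hello');"
print(sri_hash(script))
# The value goes into the tag that links the resource, e.g.:
#   <script src="https://cdn.example/app.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
```

The browser re-hashes whatever the CDN actually delivers and refuses to run
it if the digest doesn't match.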

~~~
realityking
As the name implies, this only works for subresources (scripts, stylesheets,
images, etc.) and is mainly useful for cross origin requests.

HTTPS, on the other hand, also guarantees that the article I'm reading hasn't
been modified, and that no one has injected ads into the site.

~~~
Animats
If you're using a CDN, you don't know if the CDN has injected ads or spyware.
If you use Cloudflare's RocketLoader, what the user gets is not what you sent.

Once a CDN is involved in "HTTPS Everywhere", it's security theater.

~~~
realityking
I (as the website's owner) have to trust one more party, which I know and
have a contract with. That's a whole lot better than having to trust any
random third party between me and the user of my website.

------
gwu78
"authentication of the content, not the connection"

Popular websites that hold themselves out as businesses, i.e. "exclusive"
sources of content (often generated by users, go figure), have no reason to
support this concept, because then it would not matter where the user gets
the content. But they might have reasons to support TLS.

One could argue users want authentic content (signed content), not authentic
"websites" (single sources for content trying to serve too many users, all at
the same time).

Or maybe many users do not know the difference?

Van Jacobson gave a good talk on this at Google:

[http://www.youtube.com/watch?v=8Z685OF-PS8](http://www.youtube.com/watch?v=8Z685OF-PS8)

There is another thread on the HN front page right now about a Blackhat
conference talk on x86 CPUs. There is another talk on that page about how TLS
relies on the trustworthiness of internet routing.

What is the point of securing connections when you have no control over
routing?

Instead of relying on securing "connections", I think schemes that send out
"encrypted blobs" with the hope they arrive at the proper destination make
more sense.

Encrypting blobs is not something for which an "authority" is needed. This is
something over which the user can retain full control without involving third
parties. As it should be.

TLS might have encryption that works well enough to "secure a connection" but
if I am not mistaken it still has no reliable way to verify an endpoint
(recipient) is the one you want it to be. Some people call that
"authentication".

I'm not even sure that TLS can reliably perform "authentication of the
connection" as Fielding states.

For that, I think SSH is a better protocol.

------
asperous
This doesn't feel right to me. No one can touch this man's credentials, but
let's suspend the argument from authority for a second and look critically at
what he is saying: Is TLS more private overall than plaintext HTTP?

If you want to remain private, how would TLS prevent that when plaintext
would not? HTTP is not Tor.

~~~
JulianMorrison
I think the argument is "HTTP is edge cached (by your ISP, etc) and so a
request need not imply a connection received at the remote end. HTTPS is not
subject to caching or other benign man-in-the-middle operations so knowledge
of who clicked what is centrally available." This feels like a weak objection
to me, since the government will just snoop at the ISP level.

------
shabble
Can anyone explain what he's referring to with the statement

> _with TLS in place, it becomes easy and commonplace to send stored
> authentication credentials in those requests, without visibility, and
> without the ability to easily reset those credentials (unlike in-the-clear
> cookies)._?

Cookies are orthogonal to the presence of TLS, I thought (unless they're
marked as secure, in which case they are only sent to https hosts?)

Is there some other way of identifying a particular user/browser/session[1]
other than the quirks-and-features enumeration along the lines of Panopticlick?

If there is (ISTR some 'session storage' for resuming TLS in nginx), is that
cross-trackable across different services (potentially all TLS-terminating in
the same place, such as Cloudflare or AWS)?

One good point I hadn't considered is that the lack of proxyability means
every request that can't be filled from the browser cache _must_ hit the
actual endpoint, making it easier for them to follow along action-by-action
when it might otherwise have been served up, before ever reaching them, by a
caching middle-proxy.

My (limited) understanding is that you're potentially providing more
information to the remote service, but are better secured against people
snooping on your traffic as it flows between you and them.

[1] also not including client certificates, because exactly 1 site on the
internet actually uses them :P

~~~
somethingnew
Besides session IDs and session tickets[1], which already exist in the TLS
protocol, he could be referring to the Token Binding Protocol draft[2] which,
quoting from its summary, "allows client/server applications to create long-
lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and
connections".

[1]
[https://en.wikipedia.org/wiki/Transport_Layer_Security#Resum...](https://en.wikipedia.org/wiki/Transport_Layer_Security#Resumed_TLS_handshake)

[2] [https://tools.ietf.org/html/draft-ietf-tokbind-protocol-01](https://tools.ietf.org/html/draft-ietf-tokbind-protocol-01)

------
thwd
I found the title a bit misleading.

He's arguing that viewing public information over HTTPS is no more private
than doing so without encryption.

Sure, insofar as an entire response consists only of public information, this
is a tautology, isn't it?

~~~
perokreco
No, because you would want to keep the fact that you read something private.

~~~
Confusion
Yes, and Roy's claim is that TLS doesn't provide that privacy, so is still
basically useless for public content.

~~~
quonn
His argument is mostly based on analysing the size of the data transferred.
Let's assume HTTP/2 for the moment. You have a single encrypted channel to a
particular website that contains multiple interleaved opaque streams. It's not
easily possible to extract the exact size of a single request from this.
Furthermore, for a typical news website, there will be a huge number of
pages; they are dynamic, constantly changing, and all of a very similar size.

You do get privacy. If anyone claims otherwise, he should go and prove that
it's possible and easy by providing a firesheep-like tool. It would make for a
nice research paper.

~~~
jrnvs
Here's an article describing how to find out what someone is looking at on
Google Maps by analyzing the encrypted traffic.
[http://blog.ioactive.com/2012/02/ssl-traffic-analysis-on-goo...](http://blog.ioactive.com/2012/02/ssl-traffic-analysis-on-google-maps.html)
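
The core of such an attack is simple enough to sketch: if an observer
pre-records the encrypted response sizes of a site's public pages, a single
observed transfer size can often identify the page. A toy illustration (the
paths and sizes here are invented, not measured):

```python
from typing import Optional

# Hypothetical fingerprint table: the observer pre-crawls the public site
# and records the size of each (encrypted) response.
fingerprints = {
    18_432: "/rfc/rfc2616",
    7_105: "/about",
    42_990: "/news/some-article",
}

def guess_page(observed_size: int, tolerance: int = 64) -> Optional[str]:
    """Guess which page an encrypted response of observed_size carried."""
    closest = min(fingerprints, key=lambda size: abs(size - observed_size))
    if abs(closest - observed_size) <= tolerance:
        return fingerprints[closest]
    return None  # no confident match

print(guess_page(18_400))  # matches /rfc/rfc2616 despite small size jitter
```

Real attacks (like the Google Maps one above) are more elaborate, correlating
sequences of sizes rather than single transfers, but the principle is the same.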

------
dlitz
> If the IETF wants to improve privacy, it should work on protocols that
> provide anonymous access to signed artifacts (authentication of the content,
> not the connection) that is independent of the user's access mechanism.

He basically wants a better version of Freenet. Fine. However, that's
orthogonal to the effort to make all channels secure channels. IETF can do
_both_, if people are interested. Plaintext TCP needs to die, and building
the infrastructure to move everyone to HTTPS is a step in that direction.

He shouldn't obstruct HTTPS just because it's not Freenet(bis). To use his
Freenet(bis) vaporware, people will need to become familiar with managing
private keys. HTTPS has a similar requirement, so HTTPS Everywhere is a step
in the direction he wants to go.

------
dfabulich
> TLS is NOT desirable for access to public information, except in that it
> provides an ephemeral form of message integrity that is a weak replacement
> for content integrity.

TLS both encrypts and authenticates the response. Is TLS authentication a
"weak replacement" for some other, better "content integrity" system that's
widely available in browsers?

Roy suggests content signatures... but is there a web mechanism to
authenticate _those_? Or is he just wishing there were something better than
TLS? (Don't we all?)

~~~
URSpider94
I think his point is not that TLS is bad at securing information from the
host to the viewer; his point is that in doing so, it leaks information about
the viewer to the host and potentially to third parties. For public
information, TLS effectively asks each viewer to sign the guest register in
return for seeing the page.

Contrast this with the case where you could download one giant file with
hashes for millions of public sites. Once you have a copy of that file, you
can fetch a copy of any of those pages from any source you like, and still
validate that your copy is authentic, without leaving any trace that you
accessed that page.
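
That scheme can be sketched in a few lines (the hash-list entry here is
invented for illustration):

```python
import hashlib

# Hypothetical entry from the pre-downloaded "giant file" of page hashes.
known_hashes = {
    "example.org/page": hashlib.sha256(b"public page body").hexdigest(),
}

def validate(url: str, body: bytes) -> bool:
    """Check a copy fetched from *any* mirror against the trusted hash list."""
    return hashlib.sha256(body).hexdigest() == known_hashes.get(url)

print(validate("example.org/page", b"public page body"))    # True: authentic
print(validate("example.org/page", b"tampered page body"))  # False: detected
```

The fetch itself can then go to any mirror or cache; only the one-time hash
download ties you to a central source.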

------
weinzierl

> _If the IETF wants to improve privacy, it should work on protocols that
> provide anonymous access to signed artifacts (authentication of the content,
> not the connection) that is independent of the user's access mechanism._

[...]

> _It would be better to provide content signatures and encourage mirroring,
> just to be a good example, but I don't expect eggs to show up before
> chickens._

If I wanted to have the eggs before the chickens, what would I do? Sign my
content with PGP? Sign the HTML file? Offer it as separate signed download?
Are there examples of pages doing this?
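
One way to have the eggs first is a detached signature served next to the
content, e.g. `gpg --armor --detach-sign page.html`, which produces a
`page.html.asc` that mirrors can serve alongside the page. The sidecar-file
pattern can be sketched with a plain hash standing in for the signature (this
gives integrity only, not authenticity; everything below is illustrative):

```python
import hashlib
import tempfile
from pathlib import Path

def write_sidecar(path: Path) -> Path:
    """Publish a fingerprint next to the file. A real deployment would publish
    a detached PGP signature (e.g. page.html.asc) instead of a bare hash."""
    sidecar = path.with_suffix(path.suffix + ".sha256")
    sidecar.write_text(hashlib.sha256(path.read_bytes()).hexdigest() + "\n")
    return sidecar

def verify(path: Path) -> bool:
    """Re-hash the (possibly mirrored) copy and compare with the sidecar."""
    sidecar = path.with_suffix(path.suffix + ".sha256")
    return hashlib.sha256(path.read_bytes()).hexdigest() == sidecar.read_text().strip()

# Demo: publish, verify, then detect tampering by a mirror.
page = Path(tempfile.mkdtemp()) / "page.html"
page.write_bytes(b"<html>public info</html>")
write_sidecar(page)
ok_before = verify(page)
page.write_bytes(b"<html>injected ad</html>")
ok_after = verify(page)
print(ok_before, ok_after)
```

With a real signature the verifier also needs the publisher's public key, which
is exactly the chicken-and-egg problem the quote alludes to.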

------
tveita
HTTPS link: [https://www.ietf.org/mail-archive/web/ietf/current/msg93416....](https://www.ietf.org/mail-archive/web/ietf/current/msg93416.html)

------
ysleepy
This sounds like complete PR reality distortion.

How exactly does TLS differ from plaintext in terms of anonymity? There is no
client cert.

Also, HTTPS everywhere does not necessarily mean "real" CAs. Self-signed
certs, even without pinning, would raise the bar for snooping from monitoring
(easy) to traffic manipulation (hard). In this case there would be no green
lock in the address bar, of course.

This whole thread feels like one propaganda attempt to sway the technical
community. And yes, here is where the people to manipulate are.

------
nickysielicki
I would wager that absolutely no one has ever become uniquely identifiable as
a result of using TLS. People have MAC and IP addresses tied to their real
identities. People have social media profiles and run scripts from dozens of
places on every page load.

Can someone please describe a situation in which someone reasonably wouldn't
have been trackable to Google or to the NSA, but becomes trackable as a result
of HTTPS?

I can't think of one. Screw this guy and his politics.

------
JulianMorrison
The thing being commented on is here:
[https://trac.tools.ietf.org/group/iesg/trac/wiki/HttpsEveryw...](https://trac.tools.ietf.org/group/iesg/trac/wiki/HttpsEverywhere)

------
tvvocold
> _It would be better to provide content signatures and encourage mirroring._

What does that mean? A mirror site?

------
nchelluri
Is there a way I can view this with line wrapping in Firefox or in Chrome?

EDIT: Using Firefox Dev Ed, there's a little "Reader View" icon on the right
hand side of the address bar that seems to do the trick.

------
sklogic
HTTPS for everything is worthy for at least one reason: to shut up the scum
operators who insert their ads. Privacy and confidentiality are of much less
importance.

------
mc_hammer
Agree also.

TLS is not HTTPS3.

TLS has already been broken 3+ times.

There is an open ticket, a year old now, in the TLS repo for an exploit.

And the author of TLS released a MITM exploit on TLS that he wrote (not that
other protocols are not vulnerable to MITM).

There's no reason to put this in SSH/SSL/HTTP and run all internet
communication over it.

------
donottrack2010
Roy is a wannabe industry shill who plays politics at a very amateur level.
Roy couldn't care less about your privacy, as long as ads and tracking work.
He fundamentally thinks that ad blocking is theft and that you have no right
to privacy.

- an ex-Tracking Protection Working Group member.

~~~
revelation
It certainly seems that he's more obsessed with some hypothetical
political-economic scenario than with the technical aspects of it.

Stick to the tech.

