
TLS 1.3 and Proxies - wglb
https://www.imperialviolet.org/2018/03/10/tls13.html
======
koliber
I just read the discussion and I see that people are conflating a few things
and having an emotional response. There is a difference between different
types of network operators, and whether they have any business snooping on
your traffic.

There is a difference between an unauthorized party intercepting TLS
communication, and a party you have authorized to do so.

Let's say I am a private person and am trying to access my bank account
through a TLS connection. I am connected via WiFi to my cafe. I have not
authorized the cafe, its uplink provider, nor any of the other network
operators between me and the bank to intercept my traffic. It should be
impossible to do so.

Let's say I am an employee of a financial institution. This company, in order
to comply with record-keeping laws, needs to log all network connections. One
of the conditions of my employment is that I authorize the company to
intercept my network communications. The proxies within my company should be
able to intercept my communications. However, no network operator outside of
the company I work for, and to which I have given consent, should be able to
intercept the communications.

The real world presents situations that are more nuanced than "encrypt
everything and don't let anyone between me and the destination see what is
going on." There are many places where the above is the desired and sensible
requirement. There are also many use cases where crypto is warranted, but
select parties should have the ability to break it.

How this should happen remains to be seen. In the examples I have given above,
"consent" will have to take the form both of an agreement and of some crypto
key that allows the privileged proxies to intercept my traffic.

~~~
fulafel
There is a strong argument that in the financial-institution case you should
be just blocking web-bound TLS traffic, instead of lobbying internet standards
to break TLS. The applications that want to route their traffic through
inspection proxies can then do so explicitly.

~~~
koliber
One of the issues is that, due to poorly constructed legislation or private
contracts, there may be a requirement to provide "industry-standard"
protection for various network connections. TLS 1.3 will, in due time, be the
industry standard.

This needs to be reconciled with the conflicting requirement to log, record,
and/or block connections based on content.

Perhaps TLS 1.3 is not the solution here, as it cannot reconcile this
conflict. Perhaps something else needs to be developed that provides a
solution meeting both requirements.

~~~
pas
Could you cite the exact statute that requires logging mere connections?

I thought most of these compliance frameworks don't really specify how; they
just say to compile, archive and produce the data periodically (or on
demand), make the system tamper-proof (or tamper-evident), and get it audited
from time to time.

~~~
wglb
I am not aware of specific legislation, but there are many industries with
compliance requirements that mandate monitoring, for example financial
institutions, and possibly FISMA.

------
nimbius
The reason TLS 1.3 has been delayed so long is vendors and researchers who
believe the doublespeak that TLS needs to be both secure and readily
interceptable (and therefore insecure) in order to be "ready" to use.

TLS and encryption aren't going anywhere, and they're not always going to wait
around for a consensus from industry. The sobering truth is that if not now,
then in a decade or so the companies shilling TLS/SSL interception appliances
and software will need to shift focus, as the protocol will likely have been
evolved by force over time to meet the needs of increasingly prevalent
surveillance states. TLS interception, or "proxying", started out as a
graduate student's parlour trick and eventually evolved into an entire shady
industry where players like Blue Coat are routinely caught selling their
products and services to repressive regimes.

Here's hoping LibreSSL delivers the goods, with or without the marketing
teams' say-so.

~~~
Spooky23
I should absolutely be able to intercept TLS traffic on my computers on my
network. That's the distinction. Third-party interception capability needs to
be illegal, and connections should be tamper-evident.

Frankly, I have a higher duty than user privacy. My users have access to data
that's critically sensitive in various ways, in some cases they face criminal
sanction. I need to both control and detect unauthorized software on the
network and ensure that users are following the rules.

More extreme privacy activists will make noises about using endpoint-based
solutions or something similar. It's a bullshit position that will ultimately
weaken security.

~~~
paralelogram
The largest ISP in Kazakhstan believes that it should be able to intercept all
TLS traffic on their network:
[https://bits.blogs.nytimes.com/2015/12/03/kazakhstan-moves-to-tighten-control-of-internet-traffic/](https://bits.blogs.nytimes.com/2015/12/03/kazakhstan-moves-to-tighten-control-of-internet-traffic/).
Because there are no technical differences between your TLS interception and
what Kazakhtelecom is doing, and no legal differences in most non-Western
countries, I believe that all software should be changed to make TLS
interception as hard as possible.

~~~
emmelaich
There is absolutely a significant legal and moral difference between national
interception like Kazakhstan's and the interception we do to protect children
we are guardians of, or to protect company secrets and integrity.

In the latter case, ideally (and possibly as a legal requirement), you'd make
acceptance of potential interception a condition of employment.

------
fulafel
Wow, the linked GCHQ piece is impressive: The title is that TLS 1.3 is "harder
for enterprises" because "Many enterprises have security appliances that seek
to look into TLS connections to make sure that the enterprise security is
appropriately protected." And, later: "It certainly looks like it’ll have a
negative effect on enterprise security."

Open lobbying against crypto from the spooks, in the name of collective
security, is of course not a new thing - but dressing it as necessary for
"enterprise security" in standards lobbying is a clever move.

~~~
robin_reala
To be fair to the original author, NCSC is no longer part of GCHQ.

~~~
fulafel
There is a prominent "a part of GCHQ" banner at the top.

~~~
robin_reala
Wow, that’s the biggest facepalm I’ve done recently. I was under the
impression that they were officially split, but apparently not. They’ve
definitely now got separate office blocks, and more of an operational split
than the previous CESG ever had, at least.

------
bb88
Having worked in locked down networks before, I get the idea of securing
resources and keeping secure data inside the network.

But what bothers me is that I keep asking myself the question: when and why
did we decide to give the proxy all the power in this relationship?

If we have to use proxies, at least the proxy should be transparent about
itself to all parties on the connection. Then my bank can drop the connection
or restrict functions/data to non-proxied data, secure government servers can
drop the connection, my browser can drop the connection and give me an error
(because I don't want anyone snooping on my banking history), etc.

E.g., let's say I open the patient portal for my hospital through a proxy. Is
the proxy software HIPAA compliant? What about the people who have access to
my health data through the proxy software? In this case, I would think we
should allow the portal software to drop the connection, because the
connection itself is not secure.

~~~
cesarb
> In this case, I would think we should allow the portal software to drop the
> connection because the connection itself is not secure.

It sounds like what you want is the server authenticating the client. That
already exists in TLS: it's called a client certificate (complementing the
usual server certificate, which authenticates the server to the client).

Unless the MITM proxy has access to the client certificate's private key, or
the server trusts the MITM proxy's CA, the proxy cannot impersonate the
client.
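
The requirement described above can be sketched with Python's standard `ssl`
module. This is a minimal illustration, not anyone's actual deployment: the
PEM file paths are hypothetical placeholders for your own certificates.

```python
import ssl

def make_server_context(cert: str, key: str, client_ca: str) -> ssl.SSLContext:
    """Server-side context that refuses clients without a valid certificate.

    All file paths are hypothetical placeholders for real PEM files.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)        # authenticates the server, as usual
    ctx.verify_mode = ssl.CERT_REQUIRED   # handshake fails if no client cert
    ctx.load_verify_locations(client_ca)  # only this CA may vouch for clients
    return ctx
```

A MITM proxy sitting in front of such a server cannot complete the handshake
on the client's behalf unless it holds the client's private key or a
certificate signed by the trusted client CA.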

~~~
sitepodmatt
Is any B2C mainstream bank using client certificates? I've not seen it in the
wild. I think an easier solution is just BYOD to work with a 3G/4G SIM; you
can pick up a reasonable 8" tablet for $100 that supplements your phone for
when you need a bigger screen.

~~~
cpach
_”Is any B2C mainstream bank using client certificates?”_

I doubt it. AFAIK the web browsers’ UI for handling client certificates is way
too cumbersome for mainstream usage.

~~~
tialaramex
One of the nice things in TLS 1.3, which we might never end up using in anger
but which is there if we want it, is that a server's request for a client
certificate now gets to express arbitrary constraints.

In TLS 1.2 you could only express a list of CAs whose signatures you trust.
(This is one of the most widely misconfigured settings in OpenSSL-based
software: telling OpenSSL you _trust_ some CA to identify clients when
actually you meant to say your server certificate is _signed_ by that CA.)

In TLS 1.3 you can write out arbitrary constraints, although somebody will
need to define any new ones in a separate ID or RFC. So this might simplify
the end user experience down the road because the browser can do enough
matching to just hand over the correct certificate automatically.

Or it might never get used on the public Internet, oh well.

------
newman314
I'll reiterate what I said before: I'm for TLS 1.3 breaking middleboxes.

It's more important to have good, viable security than "my middleboxes". To
me, it's like the FBI asking for NOBUS crypto.

~~~
letsgetphysITal
The side effect of this is that corporate networks will become draconian about
what services can be accessed from them. Expect everything outside of
10.0.0.0/8 to be black-holed; no internet access of any kind. The company's
legal requirement to prevent data exfiltration trumps your ability to browse
Reddit during your lunch break. Plus, if they can't monitor the connection,
they _will_ monitor the endpoint.

~~~
tialaramex
The reality is that corporations always exist under a tension between an
inclination to forbid everything, for fear it will cost the company money or
reputation, and a need to allow everything so that people can get their jobs
done or are willing to work there.

The Internet is nothing special in this respect: large companies struggle to
make policies that cover all the bases without strangling themselves, and they
will err on both sides of the line, sometimes learning from their mistakes and
sometimes not so much.

------
andrewaylett
A regular (non-transparent) HTTP proxy like Squid will process HTTP requests
internally, but clients use CONNECT to forward their raw connection to the
remote site for HTTPS. This is good: it means the client establishes a TLS
connection with the origin.
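
The CONNECT flow described above amounts to a few bytes of plaintext before
the tunnel goes opaque. A sketch, with a hypothetical hostname:

```python
# Sketch of the tunnel setup a browser performs with an explicit HTTP proxy
# (the CONNECT method, RFC 7231 section 4.3.6). Hostname is a placeholder.

def connect_request(host: str, port: int) -> bytes:
    """Build the CONNECT request that asks the proxy for a raw byte tunnel."""
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    ).encode("ascii")

req = connect_request("example.com", 443)
# After the proxy answers "HTTP/1.1 200 Connection Established", the client
# starts its TLS handshake through the tunnel, so it sees the origin's
# certificate, not the proxy's.
```

The proxy learns only the destination host and port; everything after the 200
response is end-to-end encrypted past it.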

It would be really helpful, though, if there were an operating mode wherein
one could instruct the proxy to talk to an HTTPS server without using CONNECT.
So the browser talks to the proxy over TLS, and displays the proxy's
certificate details, possibly with a big red warning, and the proxy terminates
that TLS connection, decides what to do based on the request, makes a new
onwards connection and gets the response plaintext too.

The user obviously loses some privacy guarantees here, but no more than with
an intercepting "transparent" proxy, and it's much clearer what's actually
happening and which devices the user needs to trust. I'm much happier with an
explicit proxy than any attempts at a transparent proxy that I've encountered,
not least because it makes it possible for the browser to be clear to the user
about what's happening.

------
darkhorn
Some religious nuts are preventing me from visiting Wikipedia. I use encrypted
DNS to protect myself from man-in-the-middle attacks, and it helps. But then
they sniff the certificate's name and drop the connection. So, does this mean
that TLS 1.3 will prevent sniffing the certificate's name? And there won't be
any certificate fingerprint on the wire to be sniffed, right?

~~~
tialaramex
Under TLS 1.3 the Server Name Indication extension becomes mandatory, so your
client will automatically, in plain text, transmit the full DNS name of
whatever server it wants to talk to.

[SNI is there to make "virtual hosting" possible for HTTPS, which is why you
can get working SSL on a cheap bulk host without paying them extra for a
dedicated IP address]

So, a middlebox might choose to drop connections based on the name your client
sends (and of course it could also choose to drop them based on the
destination IP address, the amount of traffic you've sent recently, or the
phase of the moon). But the certificate itself is now always encrypted, so the
middlebox can't snoop that without acting as a proxy.
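
You can see the plaintext name for yourself without touching the network.
This is a sketch using Python's standard `ssl` module and in-memory BIOs;
"example.com" is just a stand-in target:

```python
import ssl

# Build a ClientHello entirely in memory, with no network peer attached.
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    tls.do_handshake()          # cannot complete: nothing will ever reply
except ssl.SSLWantReadError:
    pass                        # but the ClientHello has been written out

client_hello = outgoing.read()
# The SNI extension carries the target name as plain ASCII in the first
# flight, readable by any middlebox on the path.
print(b"example.com" in client_hello)   # → True
```

The certificate, by contrast, arrives in the server's encrypted flight under
TLS 1.3, which is why a passive box can match on SNI but not on the cert.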

It sounds as though you've (perhaps against your will) accepted the proxy. In
that case all bets are off anyway; a proxy can do whatever it likes. If you
don't want that, don't trust proxies.

~~~
twic
> your client will automatically, in plain text, transmit the full DNS name of
> whatever server it wants to talk to

AIUI, it's _slightly_ better than this, because you only actually need to send
the name of some domain that the server can serve, not necessarily the one you
actually want to talk to. If the domain is on CloudFront, App Engine, Heroku,
etc., that means you can choose one of a billion innocuous sites to use for
SNI before connecting to the one you actually want.

This is called 'domain fronting':

[https://www.bamsoftware.com/papers/fronting/](https://www.bamsoftware.com/papers/fronting/)
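
The mechanics come down to two names that deliberately disagree. A sketch, in
which every hostname is a hypothetical placeholder:

```python
# Domain-fronting sketch: the name visible to a censor (plaintext SNI) and
# the name the CDN actually routes on (encrypted HTTP Host header) differ.
# All hostnames here are hypothetical placeholders.

FRONT = "innocuous-site.example"   # sent in the plaintext SNI field
TARGET = "blocked-site.example"    # sent only inside the encrypted request

def fronted_request(target: str) -> bytes:
    """HTTP request whose Host header names the real destination."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {target}\r\n"
        "Connection: close\r\n\r\n"
    ).encode("ascii")

# A client would open TLS with server_hostname=FRONT, then send this request;
# only the CDN, after decryption, learns TARGET and routes accordingly.
req = fronted_request(TARGET)
```

The censor sees only FRONT on the wire; blocking TARGET then requires
blocking the whole CDN.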

I can't quite work out the trust algebra of this, though. You don't have any
cryptographic guarantee that you're connecting to the right site. But you can
be sure that you're connecting to whichever server hosts the site whose name
you're taking in vain. And if that server was able to serve your site all
along, because it had its private key, did you ever really have any guarantee?

It probably won't help for Wikipedia, though, as they're not behind a CDN.

~~~
tonyztan
Wikipedia summary of domain fronting:
[https://en.wikipedia.org/wiki/Domain_fronting](https://en.wikipedia.org/wiki/Domain_fronting)

------
bogomipz
The author states:

"The heuristics are necessarily imprecise because TLS extensions can change
anything about a connection after the ClientHello and some additions to TLS
have memorably broken them, leading to confused proxies cutting enterprises
off from the internet."

Can someone elaborate on a specific instance where a TLS extension led to
breakage? I don't doubt the author; quite the opposite, I'm interested in
reading more about the specifics of it.

------
inlined
Am I the only one noticing parallels between this debate and the debate over
gun regulation in the US? Concerns that firearms or privacy are fundamental
rights. Concerns that regulating the inspectability of destinations or
restricting access to firearms is necessary for others' safety. Even the
distrust of the government comes up in both debates.

I have no actionable feedback from the parallels, just fascination.

------
badrabbit
Why not have the insecure or interceptable protocols as optional protocol
extensions and make everyone happy? Much like with FIPS, where you can build
OpenSSL and other TLS libraries with FIPS mode on or off.

TLS 1.3 client libraries could then optionally support interceptable key
exchange depending on who is using them. An individual can use a normal OS and
distro that excludes the insecure features, while banks and military
facilities might turn them on.

Alternatively, why the "one size fits all" approach? Why not have a
"TLS-commercial"? Obviously "one size fits all" requires a compromise by all
parties, resulting in a collective reduction of security.

~~~
cesarb
> Why not have the insecure or interceptable protocols as optional protocol
> extensions and make everyone happy?

That sounds like the "draft-rehired" proposal. Here's a list of arguments
against it (and similar proposals):
[https://github.com/sftcd/tinfoil](https://github.com/sftcd/tinfoil)

------
yuhong
I have disliked the arms race against middleboxes since the beginning.

------
peterwwillis
I remember when TLS was about security, not privacy. I also remember when
proxies were a tool to help everyone, not just corporations who wanted to
inspect all your traffic.

By ensuring that HTTPS is used everywhere, and that no other security regimes
are allowed, they've killed proxies for all uses except spying on users. The
end result is that some of the Internet is now less private, by design.

If you think proxies are a useful tool to save bandwidth, decrease latency and
reduce load, and you want to stay _secure_, but not necessarily _private_,
there are very obvious ways this can be done. But there are people literally
fighting against this because they want privacy or nothing.

This is Internet puritanism.

~~~
phicoh
Explicit proxies are no problem. There are a lot of servers behind proxies. No
problem there.

There is plenty of client software that can be configured to use proxies. Also
no problem there.

Where it goes wrong is transparent proxies (also called 'middleboxes') that
operate without consent of the endpoints. In general, those proxies have
caused so many network problems that most people involved in the IETF will
happily see them die.

And the easiest way to do that is to encrypt all traffic.

~~~
peterwwillis
There's a large number of web users, like myself, who benefited from
transparent proxies. The IETF's complaints are almost certainly due to
incompatibilities between products and erroneous modification of streams in
transit, which of course should not happen. But transparent proxying
certainly doesn't need to die; if we killed technology for that, the whole
WWW wouldn't exist.

