
ETS Isn't TLS and You Shouldn't Use It - j4cob
https://www.eff.org/deeplinks/2019/02/ets-isnt-tls-and-you-shouldnt-use-it
======
Animats
We need a big budget cut in the "homeland security" area. All this
interception is not paying off. The biggest "terrorist event" in the US since
2001 was the guy who shot up a gay nightclub in Orlando FL in 2016. That was a
solo nutcase; there was no planning chatter to intercept. The Boston Marathon
bombing was two brothers. The San Bernardino shooting was a husband and wife.

What's discouraging terrorism is the US's overreaction outside the US. It's
become very clear to terrorist organizations that if they attack the US, the
US is going to hit back, even if it's insanely expensive and causes collateral
damage. The people in charge, and many people around them, end up dead.

Remember ISIS, the Islamic State? ISIS is down to 1.5 square miles,
surrounded, and everybody but the most fanatical fighters is surrendering. The
holdouts have days to live.

We don't need more Big Brother.

~~~
influx
I completely agree with you, but the counter argument is the only incidents
that are getting through are the ones that are solo because the more
complicated plots are getting intercepted and disrupted.

~~~
AnthonyMouse
> I completely agree with you, but the counter argument is the only incidents
> that are getting through are the ones that are solo because the more
> complicated plots are getting intercepted and disrupted.

The problem with this argument is that it can't justify continued spending,
because that would make it unfalsifiable. We need to spend $450B/year on
bear-repelling rocks because we currently pay for the rocks and there are no
bears. And if any bears do appear, then we obviously didn't have enough bear-
repelling rocks and we need to start spending $900B/year.

If there is a real question as to whether the ~0 bears is a result of the
rocks, it's time to cut the bear-repelling rock budget in half and see how
many bears there are next year. If it's still ~0 then it didn't need to be as
high as it was and it may _still_ be too high.

------
uniformlyrandom
Funny how the word 'Enterprise' picks up more and more negative connotations
in the modern software world. These days, 'enterprise' means an outdated,
inflexible, and intentionally flawed monster of a technology.

~~~
pjmlp
Did it ever mean anything else?

Many who throw jabs at J2EE (written on purpose) never had the joys of trying
out xBaseEE, CEE, C++EE (CORBA, DCOM/MTS),...

~~~
DaiPlusPlus
I think it’s an unfair characterisation. Let’s go with early-2000s
“Enterprise” stuff: CORBA and SOAP specifically.

There are these large corporations with a significant investment in their
existing infrastructure and systems - and now they all need to make them
interop. The mindset is “how do we make our CORBA ERP communicate with their
Java CRM without needing to make any changes to either of them?”. Hence SOAP:
it packages existing method-call semantics into an HTTP message that will
cross a firewall; not even the IT dept needs to get involved to change
firewall rules. And they hammered out a working spec within a couple of years. That’s
impressive considering the slow-moving nature of large, risk-averse
enterprises. We now know that REST-is-Best, but it took the industry around 10
years to figure that out, and another 5 years for the tooling and ecosystem to
catch up. SOAP was a quick-fix that was needed immediately.

So I’d recharacterise “Enterprise software” as “fits into your existing system
and does what you need it to, right now” - and their MC Escher-inspired
architecture is a consequence of it needing to support and fit-in to whatever
systems were prevalent when their project was started.

It’s not Enterprise software’s rigidity and inflexibility that I have problems
with - it’s cutting-edge software. I was working with Neo4j in 2016 and had
security issues because it didn’t gain any built-in security support until
last year. I had to change what I was doing to accommodate them, instead of
vice versa.

~~~
pjmlp
Except for a two-year pause, all my career has been in the enterprise space.

More often than not, those MC Escher-inspired architectures, as you call them,
are the result of corporate politics: each department has a say in how its
tooling should look, and brings in externals to actually build it for them, at
the lowest bid, on fixed-cost projects.

------
nneonneo
It’s worth reading the mailing list posts by BITS (the main proponent of ETS)
here:
[https://mailarchive.ietf.org/arch/msg/tls/KQIyNhPk8K6jOoe2Sc...](https://mailarchive.ietf.org/arch/msg/tls/KQIyNhPk8K6jOoe2ScdPZ8E08RE).
The replies are pretty informative. You can see here the message in which BITS
starts to consider fixed DH keys, which were implemented in ETS:
[https://mailarchive.ietf.org/arch/msg/tls/3d7TM0g_EdtMzhgmcP...](https://mailarchive.ietf.org/arch/msg/tls/3d7TM0g_EdtMzhgmcPn_Xb-ZFjY)

> Tue, 27 September 2016 18:21 UTC

> The various suggestions for creating fixed/static Diffie Hellman keys raise
> interesting possibilities. We would like to understand these ideas better at
> a technical level and are initiating research into this potential solution.

The core argument made by BITS is that they need a way to log TLS traffic such
that it can be decrypted later, in order to provide data retention in line
with regulations. While this could be done by logging all ephemeral keys
generated by the servers, BITS argues that this isn’t practical due to their
use of dedicated packet logging hardware that is key-ignorant. Instead they
want to use non-forward-secret TLS so they can decrypt past messages easily.
Their beef with TLS 1.3 is that it removes all non-FS key exchange methods,
and further that explicitly obsoleting TLS 1.2 as a standard pushes them to
adopt 1.3 in an enterprise environment (or risk current/future regulatory
scrutiny over their use of an obsoleted standard). Hence their desire to
develop a competing, actively maintained standard with non-FS key exchange.
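To make the forward-secrecy point concrete, here's a toy finite-field
Diffie-Hellman sketch (tiny illustrative parameters, nothing like real TLS
group sizes or curves): once the server reuses one static private key, anyone
who has been handed that key can recompute every session's shared secret
purely from the client public values seen on the wire.

```python
# Toy finite-field Diffie-Hellman. Illustrative parameters only;
# real TLS 1.3 uses X25519 or 2048+ bit groups. This just shows the math.
import secrets

P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime (insecure at this size)
G = 5                   # base for the exponentiation

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# The server uses one STATIC key for every connection (the ETS approach).
server_priv, server_pub = keypair()

# A passive middlebox is handed the static private key once.
logged_static_priv = server_priv

for session in range(3):
    client_priv, client_pub = keypair()              # client side is still ephemeral
    client_secret = pow(server_pub, client_priv, P)  # what the client derives
    server_secret = pow(client_pub, server_priv, P)  # what the server derives
    # The logger only ever saw client_pub on the wire, yet derives the same secret:
    logger_secret = pow(client_pub, logged_static_priv, P)
    assert client_secret == server_secret == logger_secret
```

With genuinely ephemeral server keys, `logged_static_priv` would be useless
for any session other than the one it was generated for, which is exactly
what forward secrecy means.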

~~~
devy
I've read Andrew Kennedy's email. This line hits the point for me. His
argument is reasonable: sometimes it's impractical, hard, costly, or all of
the above to upgrade all the systems to meet regulatory compliance and the
newer, stricter, safer security standards.

    
    
         It is vital to financial institutions and to their customers and regulators 
         that these institutions be able to maintain both security and regulatory compliance 
         during and after the transition from TLS 1.2 to TLS 1.3.
    

One example of that is NIST's recommendations on password policies. Most of
the time the regulatory mandates are outdated and hard to bring up to speed;
in the meantime, as a financial institution you simply cannot have your IT
systems be non-compliant, even if that means a less secure practice.

~~~
avianlyric
> as a financial institution you simply cannot have your IT system incompliant

This just isn’t true, or rather “compliance” tends to be quite fuzzy.

Regulators generally expect you to follow recommendations from places like
NIST. But it’s not a hard requirement; you just need to explain why deviating
is better.

Unfortunately most financial institutions trip up at the “explain why it’s
better” bit. Either because they aren’t competent enough, or (more likely)
can’t be bothered.

~~~
zamadatix
If something were better for the entire industry, one would think it would be
the topic of the compliance recommendations themselves, not something each
institution has to explain individually.

------
propter_hoc
This is a remarkable story. Fortunately, this ETSI-backed "ETS" standard
appears to have just about zero uptake or internet presence, let alone vendor
acceptance. So although this is fairly outrageous based on the EFF article, it
doesn't look like something that's a big threat to TLS at this point.

PS. I can't even get ETSI's website to load!
[https://www.etsi.org/](https://www.etsi.org/)

~~~
rev_null
If you're having trouble getting etsi.org to load, try using their static key
for diffie-hellman: 0x00000000.

~~~
1001101
Works every time.

------
kevin_thibedeau
> This would only require changes to servers, not clients, and would look just
> like the secure version of TLS 1.3.

If a TLS 1.3 client will happily connect to an ETS server that isn't playing
by the rules, doesn't that indicate a flaw in 1.3?

~~~
tlb
Either client or server can break secrecy. Server compromise isn't a threat
model the client can defend against. For example, the server could simply
forward a copy of the whole communication in cleartext to someone, and the
client can't know this.

In this case, the server is using a predictable number instead of a random one
for part of the protocol. Possibly a client could detect this by doing
multiple transactions and seeing if a number gets reused, but that seems
outside the scope of TLS.
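The detection idea could be sketched roughly as follows (hypothetical helper,
my own illustration; it assumes you can extract the server's raw key_share
bytes from each handshake, which mainstream TLS APIs don't readily expose):

```python
# Sketch: flag servers that reuse their "ephemeral" DH share across handshakes.
# Hypothetical monitor; getting at the raw key_share from a real TLS stack is
# the hard part and is not shown here.
from collections import Counter

class KeyShareMonitor:
    def __init__(self) -> None:
        self.seen: Counter = Counter()

    def observe(self, host: str, server_key_share: bytes) -> bool:
        """Record one handshake; return True if this share was seen before.

        A repeat strongly suggests the server is NOT using ephemeral keys,
        i.e. forward secrecy is being silently sacrificed."""
        self.seen[(host, server_key_share)] += 1
        return self.seen[(host, server_key_share)] > 1

monitor = KeyShareMonitor()
assert monitor.observe("bank.example", b"\x01\x02") is False  # first sighting
assert monitor.observe("bank.example", b"\x01\x02") is True   # reuse flagged
assert monitor.observe("bank.example", b"\x03\x04") is False  # fresh share is fine
```

As noted above, a single reuse-free sample proves nothing, and a server could
rotate among a small pool of static keys to evade exactly this check.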

~~~
kevin_thibedeau
The expectation is that the encrypted link is not decryptable by a third
party. If that isn't always true in the face of an adversary then claims of
forward secrecy for TLS 1.3 are false.

~~~
cbsmith
The expectation is that the encrypted link is not decryptable by a third party
_if_ both parties are implementing the protocol in good faith.

------
Spivak
So what's the argument from the other side? Going through all this effort to
allow PFS to be disabled seems like a ton of work. What's their use case?

~~~
rocqua
If your stance is:

No opaque data leaves my network

Then this is the only way you can have outbound HTTPS connections. And for
e.g. a bank, certain legal firms, or any company that has a lot of sensitive
data they either don't want to be leaked, or at least want the option of
detecting when it is leaked, that is a somewhat reasonable stance. In the case
of banks, this is needed for regulatory compliance regarding insider trading.
For legal companies, I imagine this is about ensuring certain confidentiality.
I could see the same thing for companies dealing with trade-secrets.

The statement 'Just log it on the end-points' presumes complete access to
those end-points and all software running on them.

This method is considered better than terminating TLS early at a proxy and
setting up a separate tunnel to the clients because breaking PFS is passive,
rather than active. Thus it is a lot less resource intensive, a lot less
vulnerable (no internet facing box that, if broken, has all communication in
plaintext), and introduces no extra latency.

It is essentially a 'better' way to do an authorized MitM on everything on
your network, and some companies want this authorized MitM. Like any
authorized MitM, it introduces a third party who can compromise security,
which is not generally desirable, but some companies don't mind being that
third party to their own employees.

~~~
tinus_hn
It’s a pipe dream. The webpage or application can easily include its own
encryption that can’t be broken by these proxies.

If your stance is ‘no opaque data leaves my network’ your only option is an
air gap.

~~~
acdha
Engineering is all about making trade-offs (“perfect is the enemy of the good
and all”) and security engineering is no different. The same logic could be
used to say that you don't need an edge firewall because each client should
have one but it's much easier to simplify the baseline.

If nothing else, it would make the remaining traffic stand out more, since you
wouldn't be spending time auditing normal apps which decode cleanly, and it's
highly likely that anyone trying to circumvent such a system would be forced
to do things which stand out more than routine usage.

As a simple example, an organization which does that kind of monitoring is
unlikely to allow users to install arbitrary applications or visit any site on
the web. With a standard setup, someone trying to exfiltrate data could just
hit a popular site like Github, Gmail, Dropbox, etc., but if they need to use
some custom encryption or steganography code they're forced either to host it
somewhere far less common (i.e. more likely to stand out) or to install
something locally, where client monitoring can report an unusual browser
extension or application.

~~~
tinus_hn
A cute excuse, until one of your popular sites starts to do this. As it
happens, Google hates these proxies, as they are a popular target for
repressive governments, which technically can’t be distinguished from a
snooping ‘enterprise’.

The reality is that the world kept turning without these proxies and it will
keep turning once they are made obsolete.

------
forty
I'm not sure I understand: why can't they record the decrypted traffic
instead? (I assume they have it in plain text at some point.) Of course they
could encrypt it again before sending it to their audit server.

------
unethical_ban
How does ETS break MITM for corporate LANs that are trusted CAs on work
devices? Why can't a proxy still MITM a connection by terminating the client
side, establishing the server side, and that be that?

Also, banks seeing their own corporate traffic is ethical and moral. Whether
they need to simply find another way to read all data leaving their network is
another piece of the story.

------
rocky1138
Seeing articles like this reminds me why I donate to the EFF on the regular
and recommend that others do so, too. They're on our side.

------
glitcher
> Instead of thinking of this as “Enterprise Transport Security,” which the
> creators say the acronym stands for, you should think of it as “Extra
> Terrible Security.”

------
dreamcompiler
As long as browser makers don't support it, this is a non-issue. Correct?

------
flanfly
I wonder how they plan to get this into Chrome.

------
jeffrallen
Christ, what assholes.

------
peterwwillis
Yeah, let's just make it harder for banks to protect your money so that nobody
can figure out your Facebook password in 10 years.

EFF: "Everything sent over the network should be a secret! Nobody has a good
reason to inspect traffic, it puts users' privacy at risk!"

Bank: "We keep trillions of your dollars. Inspecting our own traffic is how we
make sure nobody is stealing it. We're a pretty big organization, so this
stuff costs a lot of money, and is complex and takes a long time to get right.
Can you give us a way to do that in this new TLS standard?"

EFF: "No!! Privacy!!!"

Bank: "Ok... I guess we'll have to make our own standard, then...?"

EFF: "Don't ANYONE use that standard, it will cause REAL HARM!!!!"

Bank: "..... Nobody else was going to... except us....."

~~~
wolrah
PFS doesn't prevent inspection of traffic, it just makes passive capture more
complicated, since you now need to log the ephemeral keys for each connection
rather than just using one private key to decode the whole capture.

Not necessarily trivial but not exactly impossible for someone who controls
one of the endpoints.

Active interception with a middlebox still works exactly the same as it always
has.
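Logging per-connection keys is exactly what the widely supported
NSS/SSLKEYLOGFILE format does: each line maps a connection's client random to
one of its per-session secrets, so a capture can be decrypted connection by
connection. A minimal parser sketch (the labels below are the real TLS 1.3
keylog labels; the indexing scheme is my own illustration):

```python
# Sketch: index NSS-format key log lines by client_random so a packet
# capture can later be matched to its per-connection secrets.
# Line format: "<LABEL> <client_random_hex> <secret_hex>"
def parse_keylog(text: str) -> dict:
    sessions: dict = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        label, client_random, secret = line.split()
        sessions.setdefault(client_random, {})[label] = secret
    return sessions

sample = """
# created by a TLS library with SSLKEYLOGFILE set
CLIENT_HANDSHAKE_TRAFFIC_SECRET aabbcc 111111
SERVER_HANDSHAKE_TRAFFIC_SECRET aabbcc 222222
CLIENT_TRAFFIC_SECRET_0 aabbcc 333333
"""
sessions = parse_keylog(sample)
assert sessions["aabbcc"]["CLIENT_TRAFFIC_SECRET_0"] == "333333"
```

The operational pain BITS describes is that dedicated capture hardware never
sees these per-connection secrets, so the log has to be collected from the
endpoints and joined with the capture afterwards.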

