These boxes can only work with a single static secret, which is shared between the DPI boxes and the actual servers. If the servers use a forward-secret mode, that is no longer enough: you have to share a secret for every session.
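To make that concrete, here's a rough sketch (my own illustration in Python with the `cryptography` package; the real TLS key schedule and transcript hashing are omitted):

```python
# Illustration only: why a static server key enables passive DPI while an
# ephemeral one does not. Assumes the 'cryptography' package is installed.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Non-forward-secret: one static server key, provisioned to the DPI box once.
server_static = X25519PrivateKey.generate()
dpi_box_copy = server_static              # shared out-of-band, one time

client_eph = X25519PrivateKey.generate()
client_pub = client_eph.public_key()      # visible on the wire

# The client derives the session secret against the server's static key...
session_secret = client_eph.exchange(server_static.public_key())
# ...and the DPI box recomputes it purely from captured traffic.
assert dpi_box_copy.exchange(client_pub) == session_secret

# Forward-secret: the server key exists only for this one session.
server_eph = X25519PrivateKey.generate()
session_secret = client_eph.exchange(server_eph.public_key())
# The DPI box holds no copy of server_eph, so the only way to hand it this
# secret is to export it per session.
```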
This necessitates some kind of software running on each endpoint to transmit these secrets. But wait, the moment you have to have software running on every endpoint, why do you need a special box? Why not do it all in software?
This represents a huge threat to the DPI market. No box means no lock-in, no mandatory upgrades, no support contracts. Sure, software can have these things too, but it's inherently a more open, competitive market where you are vulnerable to open-source invasion. Solutions like eTLS are just last-ditch gnashing of teeth from DPI box sellers trying to prevent a lucrative market from disappearing.
Once you move everything to software: a) competition in general gets better, b) open source starts to take over, and c) security improves.
Actually, the boxes can also MitM the entire SSL connection; the secret-sharing scheme just happens to be a much more efficient system. It can easily be turned off without affecting the connection, and it doesn't introduce extra latency.
Moreover, this system allows for post-hoc DPI rather than requiring that it happen on-line.
> But wait, the moment you have to have software running on every endpoint, why do you need a special box?
There are reasons beyond 'market dominance' for not wanting to do this on the end-points. End-points are numerous, heterogeneous, and occasionally difficult to access. This makes actually implementing this system on all endpoints very hard, let alone keeping them all up-to-date.
In general, which sounds like the nicer approach to take: a "drop-in solution" or a "solution that affects all endpoints and needs to support all endpoints"?
The discussion is a lot more about 'Is PFS an acceptable loss for getting DPI?' with a very large side discussion about whether DPI should even be possible.
It's not a big burden to install a MitM box either; most places call it a load balancer.
Great point! As you say, it scales much worse and introduces additional points of failure though.
> There are reasons beyond 'market dominance' for not wanting to do this on the end-points. End-points are numerous, heterogeneous, and occasionally difficult to access. This makes actually implementing this system on all endpoints very hard, let alone keeping them all up-to-date.
Absolutely true, but this does lead to a qualitative advantage for open-standard / open-source solutions, where you externalise the costs of additional implementations.
> In general, which sounds like the nicer approach to take: a "drop-in solution" or a "solution that affects all endpoints and needs to support all endpoints"?
I don't think this is quite the right distinction, looking at the deployment issues middleboxes have caused for TLS 1.3 and QUIC... I think it might be better phrased as:
"do you want to deploy some static hardware which has to support all endpoint network protocols correctly and upgrade when new protocols come along, or do you want to write/use the software for each endpoint you choose to use?"
My point is that software is much cheaper and more flexible (in the long run) than hardware.
> The discussion is a lot more about 'Is PFS an acceptable loss for getting DPI?' with a very large side discussion about whether DPI should even be possible.
I agree this is what most of the discussion is about, but I don't think it's the real issue. Here are the NIST comments that were posted a few days ago:
Check out the NSA's comments on page 21!
> With respect to TLS it seems better to deprecate all non-forward secure cipher suites, not just RSA key transport
This isn't just "we support PFS in TLS 1.3"; this is actually "please take non-PFS TLS 1.2 modes away from people"!
This subject doesn't seem like the most attractive one for open-source solutions, especially when it comes to supporting legacy enterprise systems. This feels more like a case of a consortium of companies creating a standards body.
> looking at the deployment issues middleboxes have caused for TLS 1.3 and QUIC (snip) My point is that software is much cheaper and more flexible (in the long run) than hardware.
I don't think middle-boxes as ETS intends them need hardware acceleration. As such, they could just as easily be implemented in software. This would give the same software-flexibility as modifying endpoints, with the advantage of only needing to support a few systems in your network rather than every single one.
I'd expect the same ossification and bad behavior in software middleboxes as we have had so far. But honestly, I see the same thing happening by supporting this on the end-points.
I'd summarize my position as follows:
If we want to support inspection of traffic by network owners, I see real advantages to selectively breaking forward secrecy for them. But that is a big if. We might be better off just telling those network owners to suck it up and MitM everything.
If the endpoint is compromised, then in the first scenario the most the attacker can do is not share the session secret. This is easily detectable.
In the second scenario, the attacker can pretend that the endpoint-local DPI software is still being run while completely going around it.
And it is a universal construction: for any cryptographic protocol, one party can replace its random number generator with a deterministic CSPRNG and store or leak the seeds. This is undetectable from the outside. There you go, a backdoor for later reversal of forward secrecy: forward secrecy is obtained the moment you erase the internal state of your CSPRNG from memory, and the server can simply not do that, without violating any protocol assumptions.
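A minimal sketch of that construction (my own illustration in Python with the `cryptography` package; a real implementation would use a proper DRBG rather than raw SHA-256):

```python
# Illustration only: derive "ephemeral" keys from a deterministic stream
# instead of fresh randomness, keeping the seed instead of erasing it.
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

SEED = b"retained-by-the-operator"   # the state that should have been erased

def ephemeral_key(counter: int) -> X25519PrivateKey:
    # SHA-256 output is 32 bytes, exactly the size of an X25519 private key.
    material = hashlib.sha256(SEED + counter.to_bytes(8, "big")).digest()
    return X25519PrivateKey.from_private_bytes(material)

# On the wire this looks exactly like honest ephemeral key exchange, but
# anyone who holds SEED can regenerate key #n and decrypt recorded traffic.
assert ephemeral_key(42).private_bytes_raw() == ephemeral_key(42).private_bytes_raw()
```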
Specifying how to implement this in practice is worthwhile; it is not a weakening or violation of TLS, but rather an interesting description of inherent properties of TLS.
The naming (eTLS) might be unfortunate. Better to just make it an RFC on "Cryptographic backdoors for TLS".
Can clients detect the use of this, and if detected refuse to connect with a scary warning? That should kill this abomination fairly effectively.
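Partially, I think. ETS-style visibility works by the server reusing a static key_share, and an honestly ephemeral share should never repeat, so a client that remembers key_shares per host could flag reuse. A rough sketch of that heuristic (my own illustration, not anything clients actually do today):

```python
# Illustration only: flag a server whose "ephemeral" key_share repeats,
# since reuse strongly suggests static DH, i.e. no forward secrecy.
seen_key_shares: dict[str, set[bytes]] = {}

def looks_like_static_dh(host: str, key_share: bytes) -> bool:
    """Return True if this host has presented the same key_share before."""
    shares = seen_key_shares.setdefault(host, set())
    reused = key_share in shares
    shares.add(key_share)
    return reused

# A client could refuse to connect, or show the scary warning, on True.
```

It only catches the reuse from the second connection onward, and frequent key rotation would blunt it, but it might still make deployment embarrassing enough.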
The risk isn't so much about internal networks; it's when this starts leaking onto the open internet.
Also the fact that they call it "eTLS" to trade on TLS's reputation, when it's actually a deliberately degraded version of TLS.
If the server doesn't faithfully implement the protocol, of course it will not provide the expected security guarantees. But then it isn't the committee who is lying; it's the server, by claiming to implement TLS and then not doing so.
But my point was more along the lines that PFS was never a guaranteed contract with the client, only a possibility offered by certain key-exchange protocols, and even then one easy enough to get wrong that most people did.
I work for such an organization, which actually took a fairly reasonable stance and told BOA to piss off when they asked us to join them in petitioning the IETF to make exemptions to PFS in TLS 1.3.
Our current stance is that we disallow it internally until the vendors that provide our DPI and web traffic inspection solutions have full, scalable support for TLS 1.3, or until the regulation changes in a way that no longer requires us to capture, store, and be able to decrypt all user traffic within the network.
Surely your IT department already updates the software on client computers. Time to put on their big boy tech pants and decrypt data where the secrets are, on the clients. Then your industry can stop harassing everyone else for bad crypto.
Decrypting traffic on clients is also much harder, due to the many types of clients you have and the fact that there is no easy way to MITM every connection the client makes.
The security threat model by definition treats clients as untrustworthy; hence, relying on them for decryption is a flawed approach.
If you are going to be cocky and disrespectful at least be right.
Yeah, it's a hard problem. If you don't know half the things your clients are doing, it's much easier to pretend all the security-conscious stuff will be going through TLS, and then we break just that. It's also obviously wrong, as we all learned when they started filling USB ports with glue.
The boxes already rely on the client, unless someone signed another CA=yes certificate.
It's simple: a client makes an external TCP connection; if that connection uses TLS, then it's MITMed on the network level and captured. This happens to all connections. If the client does not accept the handshake, because for example the CA for the MITM box isn't trusted or the client uses certificate pinning, the client can simply refuse to proceed with the connection.
If the connection cannot be captured and inspected for any reason it’s simply terminated and the attempt is logged for future investigation.
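In rough pseudocode (every name here is a placeholder of my own, not any vendor's API), the policy is just:

```python
# Illustration only: inspect-or-terminate at the network boundary.
import logging

class HandshakeRefused(Exception):
    """The client rejected the interception CA or pins the real cert."""

def mitm_handshake(dest: str) -> bytes:
    # Placeholder: a real middlebox would terminate TLS here, re-sign with
    # its internal CA, and return the decrypted stream.
    raise HandshakeRefused  # simulate a pinning client refusing us

def handle_tls_connection(dest: str) -> None:
    try:
        plaintext = mitm_handshake(dest)
    except HandshakeRefused:
        # No visibility is possible: terminate and log for investigation.
        logging.warning("uninspectable TLS to %s, terminated", dest)
        return
    logging.info("captured %d plaintext bytes from %s", len(plaintext), dest)

handle_tls_connection("example.com")
```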
There is no reason to break TLS on the client or compromise the browser; it's worse in every way and cannot be trusted.
Without knowing the internal structure of these particular organizations at all, that's quite a bold claim. If a company has a half million employees and their technology supports billions/trillions of dollars of transactions, it's quite likely that "laziness" has nothing to do with upgrading the entirety of the IPS & DLP products they support, to say nothing of solutions on the client or server side. They can't just edit some config and make all their technology magically support a new protocol that is explicitly designed to stymie their efforts.
You could try to assure me that it doesn't, but I do not trust uninformed assurances.