Hacker News
ETS Isn't TLS and You Shouldn't Use It (eff.org)
224 points by j4cob 56 days ago | 109 comments



We need a big budget cut in the "homeland security" area. All this interception is not paying off. The biggest "terrorist event" in the US since 2001 was the guy who shot up a gay nightclub in Orlando FL in 2016. That was a solo nutcase; there was no planning chatter to intercept. The Boston Marathon bombing was two brothers. The San Bernardino shooting was a husband and wife.

What's discouraging terrorism is the US's overreaction outside the US. It's become very clear to terrorist organizations that if they attack the US, the US is going to hit back, even if it's insanely expensive and causes collateral damage. The people in charge, and many people around them, end up dead.

Remember ISIS, the Islamic State? ISIS is down to 1.5 square miles, surrounded, and everybody but the most fanatical fighters is surrendering. The holdouts have days to live.

We don't need more Big Brother.


I completely agree with you, but the counter-argument is that the only incidents getting through are the solo ones, because the more complicated plots are being intercepted and disrupted.


> I completely agree with you, but the counter-argument is that the only incidents getting through are the solo ones, because the more complicated plots are being intercepted and disrupted.

The problem with this argument is that it can't justify continued spending, because it's unfalsifiable. We need to spend $450B/year on bear-repelling rocks because we currently pay for the rocks and there are no bears. And if any bears do appear, then we obviously didn't have enough bear-repelling rocks and we need to start spending $900B/year.

If there is a real question as to whether the ~0 bears is a result of the rocks, it's time to cut the bear-repelling rock budget in half and see how many bears there are next year. If it's still ~0 then it didn't need to be as high as it was and it may still be too high.


That's because people have hands and throats, weapons and weaknesses. You simply can't stop a killer operating at an animal level unless they're stopped while in progress. Stopping organized killing beforehand is, however, quite feasible, and society does have some responsibilities there.


The retention requirements needed by the enterprise consumers are not terrorism related.

Also, cutting the government's budget would not impact the cost of compliance to corporations.


The retention requirements for enterprises can be satisfied by transaction logging where the transactions are done. You don't need to break into TLS.


No it can't. We're literally talking about monitoring and capturing packets for data loss prevention, something that can't be done if the packets are encrypted.


So what would things look like if that homeland security spending was useful and needed? How do we know it's okay to cut? Cut the programs and see if people start getting blown up?


Funny how the word 'enterprise' picks up more and more negative connotations in the modern software world. These days, 'enterprise' means an outdated, inflexible and intentionally flawed monster of technology.


Did it ever mean anything else?

Many who throw jabs at J2EE (written on purpose) never had the joys of trying out xBaseEE, CEE, C++EE (CORBA, DCOM/MTS),...


Sure, ~20 years ago Sun Microsystems used to sell some "Ultra Enterprise" servers which offered nice reliability features like redundant power supplies and a backplane and slot setup where you could install several CPU/memory boards or I/O boards.

In comparison to some of their other hardware, these servers were more suited to organizations with more demanding needs like minimizing downtime or having lots of compute power or configuration flexibility.

But of course people quickly realized that a key characteristic of actual enterprise computing is large budgets, so it almost immediately turned into a game of labeling things with the word "enterprise" in hopes of vacuuming up as much of that money as possible.


I think it’s an unfair characterisation. Let’s go with early-2000s “Enterprise” stuff: CORBA and SOAP specifically.

There are these large corporations with a significant investment in their existing infrastructure and systems - and now they all need to make them interop. The mindset is “how do we make our CORBA ERP communicate with their Java CRM without needing to make any changes to either of them?”. Hence SOAP: it packages existing method-call semantics into an HTTP message that will cross a firewall: not even the IT dept needs to get involved to change firewall rules. And they hammered out a working spec within a couple of years. That’s impressive considering the slow-moving nature of large, risk-averse enterprises. We now know that REST-is-Best, but it took the industry around 10 years to figure that out, and another 5 years for the tooling and ecosystem to catch up. SOAP was a quick-fix that was needed immediately.

So I’d recharacterise “Enterprise software” as “fits into your existing system and does what you need it to, right now” - and their MC Escher-inspired architecture is a consequence of it needing to support and fit-in to whatever systems were prevalent when their project was started.

It’s not Enterprise software that’s rigid and inflexible - it’s cutting-edge software that I have more problems with. I was working with Neo4j in 2016 and having security issues because it didn’t have any built-in security support until last year. I had to change what I was doing to accommodate them, instead of vice versa.


Except for a two-year pause, all my career has been in the enterprise space.

More often than not, those MC Escher-inspired architectures, as you call them, are the result of corporate politics, with each department having a say in what their tooling should look like, and bringing in externals to actually build it for them at the lowest bid on fixed-cost projects.


IT ... the glue that halts technology's progress.


Also "industry standard" in this industry means "lowest-common-denominator garbage, for which you can find a lot of cheap programmers".


I find a useful definition of "enterprise" is this: products or services whose customers are several levels up the org chart from their users.


It's even simpler than that: Enterprise just means the people paying for the software are not its users. This is why enterprise software always sucks.


Don't forget the enterprise processes (in software, for example, that would be Agile/Scrum/Lean/Six Sigma/etc.) and the enterprise people deformed by them. Archaeologically speaking, it is a whole culture layer :)


Ugh. I’ve been shafted by all of those ideologies.

It turned me grey, bald and cynical, aka experienced in every possible way to fuck something up. That turned out to be quite valuable!


It’s worth reading the mailing list posts by BITS (the main proponent of ETS) here: https://mailarchive.ietf.org/arch/msg/tls/KQIyNhPk8K6jOoe2Sc.... The replies are pretty informative. You can see here the message in which BITS starts to consider fixed DH keys, which were implemented in ETS: https://mailarchive.ietf.org/arch/msg/tls/3d7TM0g_EdtMzhgmcP...

> Tue, 27 September 2016 18:21 UTC

> The various suggestions for creating fixed/static Diffie Hellman keys raise interesting possibilities. We would like to understand these ideas better at a technical level and are initiating research into this potential solution.

The core argument made by BITS is that they need a way to log TLS traffic such that it can be decrypted later, in order to provide data retention in line with regulations. While this could be done by logging all ephemeral keys generated by the servers, BITS argues that this isn’t practical due to their use of dedicated packet-logging hardware that is key-ignorant. Instead they want to use non-forward-secret TLS so they can decrypt past messages easily. Their beef with TLS 1.3 is that it removes all non-FS key exchange methods, and further that explicitly obsoleting TLS 1.2 as a standard pushes them to adopt 1.3 in an enterprise environment (or risk current/future regulatory scrutiny over their use of an obsoleted standard). Hence they want to develop a competing, active standard with non-FS key exchange.
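To make the trade-off concrete, here is a toy sketch (deliberately tiny parameters, nothing like real TLS group sizes or the actual TLS handshake) of why a static server DH key destroys forward secrecy for all recorded traffic, while per-session ephemeral keys do not:

```python
# Toy Diffie-Hellman illustration (NOT real TLS; tiny parameters for clarity).
# With a STATIC server key, anyone who passively recorded the handshakes and
# later obtains that single long-lived key can re-derive every past session
# key. With fresh EPHEMERAL keys that are erased after each session, there is
# no long-lived secret to leak - that is forward secrecy.
import hashlib
import secrets

P, G = 23, 5  # toy group; real deployments use 2048-bit+ groups or curves

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_key(own_priv, peer_pub):
    shared = pow(peer_pub, own_priv, P)
    return hashlib.sha256(str(shared).encode()).hexdigest()[:16]

# "ETS-style": the server reuses one static key pair across all sessions,
# and escrows server_priv so auditors can decrypt recordings later.
server_priv, server_pub = keypair()

recorded = []  # all a passive middlebox needs to capture per session
for _ in range(3):
    c_priv, c_pub = keypair()       # the client side is still ephemeral
    recorded.append(c_pub)          # (plus the encrypted bytes, omitted here)

# Offline, later: whoever holds server_priv recovers every session key.
for c_pub in recorded:
    print(session_key(server_priv, c_pub))

# In TLS 1.3 the server would also generate a fresh keypair per session and
# erase the private half afterwards; the recorded public values alone no
# longer let anyone - server operator included - re-derive the session keys.
```

This is exactly why the EFF article objects: the escrowed static key is a single point of failure for every recorded session.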


I've read Andrew Kennedy's email. This line hits the point for me. His argument is reasonable: sometimes it's impractical, hard, or costly (or all of the above) to upgrade all the systems to meet regulatory compliance and the newer, stricter, safer security standards.

     It is vital to financial institutions and to their customers and regulators 
     that these institutions be able to maintain both security and regulatory compliance 
     during and after the transition from TLS 1.2 to TLS 1.3.
One example of that is NIST's recommendation on password policies. Most of the time the regulatory mandates are outdated and it's hard to bring them up to speed; in the meantime, as a financial institution you simply cannot have your IT system non-compliant, even if that means a less secure practice.


> as a financial institution you simply cannot have your IT system non-compliant

This just isn’t true, or rather “compliance” tends to be quite fuzzy.

Regulators generally expect you to follow recommendations from places like NIST. But it’s not a hard requirement; you just need to explain why deviating is better.

Unfortunately most financial institutions trip up at the “explain why it’s better” bit. Either because they aren’t competent enough, or (more likely) can’t be bothered.


If something was better for the entire industry, one would think the compliance recommendations themselves would be the topic of discussion, rather than each institution individually explaining why it needed to be done differently.


I'm not sure what you are getting at with the NIST example - their recommendations for passwords are pretty reasonable. Maybe their older ones weren't, but their newer guidelines recommend against outdated ideas such as expiring passwords. (https://pages.nist.gov/800-63-FAQ/#q-b5)


If you think about it, given the retention requirements they have, it's not clear that forward secrecy is useful in that context.


Forward secrecy is always useful for two endpoints that want to have a secure exchange of messages. It's a core component of secure transport these days.

It's not "useful" if your goal is to intercept and decrypt messages that are supposed to be secure, which is what both regulated entities and baddies want to do.

If you don't require forward secrecy you introduce a weakness. The protocol won't distinguish between whether that weakness is being exploited by regulated entities or baddies.

You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints. The issue isn't that they can't do that, it's that they don't want to do that, probably for cost or convenience reasons. Those aren't reasons to weaken TLS for everyone who actually wants secure comms.


> If you don't require forward secrecy you introduce a weakness. The protocol won't distinguish between whether that weakness is being exploited by regulated entities or baddies.

Weakness in the protocol, but not necessarily weakness in the system as a whole. This is an environment where part of the trusted nature of the system comes from having a complete record of all the network communication to and from the system, and an ability to audit that data offline.

That context is in conflict with the objectives of PFS and TLS 1.3 in general. So, understandably, rather than reinvent the wheel, they came up with a way to tweak existing solutions to fit their design goals.

> You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints.

Yeah, you're not understanding the system. The whole point is to NOT trust the endpoints to be reliable narrators of what they are transmitting over the network (which makes sense, because if they are compromised, they wouldn't be). TLS is designed to allow two trusted endpoints to communicate, but the goal/context for eTLS is to have a full audit of communications into and out of an untrusted node. That's presumed from the start; TLS's goals therefore aren't helpful.


Getting rid of forward secrecy doesn't fix the problem of server trust. A compromised server could always use a different private key.

The only way to truly not trust the server is to verify that you can decrypt in real time.

But wait, if you're verifying that you can decrypt in real time, then you could apply that to forward-secret connections too! Have the server send session keys to the logging machine, and make it test them.

It requires a (small) modification to the server, but so does using your own protocol.
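For what it's worth, mainstream TLS stacks already ship a hook for something close to this. Here is a sketch using Python's real `ssl.SSLContext.keylog_filename` attribute (Python 3.8+ built against OpenSSL 1.1.1+); shipping the resulting file to a separate auditing box is the speculative part:

```python
# Sketch: export per-session TLS secrets in the NSS SSLKEYLOGFILE format,
# which offline tools such as Wireshark can combine with a packet capture to
# decrypt forward-secret TLS 1.3 sessions after the fact. Shipping this file
# to a separate auditing machine is the "small modification" discussed above.
import ssl
import tempfile

keylog_path = tempfile.NamedTemporaryFile(suffix=".keys", delete=False).name

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # one line appended per handshake secret

# A client (or, with a server-side context, a server) would now wrap its
# sockets with ctx; each TLS 1.3 handshake appends lines such as
#   CLIENT_TRAFFIC_SECRET_0 <client_random_hex> <secret_hex>
# to keylog_path. An auditor holding both this file and the raw capture can
# decrypt those specific sessions - and only those, not future ones.
```

The design choice this illustrates: key export is per-session and opt-in, so a logging failure is detectable and the blast radius of a leaked log is bounded, unlike a leaked long-lived static key.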


> Getting rid of forward secrecy doesn't fix the problem of server trust. A compromised server could always use a different private key.

Yes, but what it solves is that when it does so, you know it has been compromised.

> The only way to truly not trust the server is to verify that you can decrypt in real time.

No. That is the way to know that you can not trust the server in real time. That isn't the objective. The objective is to be able to, after the fact, prove that you could trust it at that point in time.


When a server doesn't use forward secrecy, it looks like this:

While it's working correctly (which involves you knowing the private key), you know everything it sends. If it gets compromised, it might keep using the same key and you know what it sent, or it might start using a different key and you'll never figure out what it sent while compromised.

When a server does use forward secrecy, it looks like this:

While it's working correctly (which includes logging session keys), you know everything it sends. If it gets compromised, it might keep logging session keys and you know what it sent, or it might start logging fake keys and you'll never figure out what it sent while compromised.

What meaningful difference is there?

You don't trust the server to tell you what it sent. You record what it sent, and at some point you can check if it decrypts or not. This acts the same whether or not you have FS enabled.

> The objective is to be able to, after the fact, prove that you could trust it at that point in time.

You can verify what it sent, but that doesn't prove it wasn't compromised.


> What meaningful difference is there?

There would not be a meaningful difference in terms of being able to tell if it was compromised. There would be a meaningful difference that you would now have an operational exposure of the session keys, that fundamentally undermines not just PFS but potentially the encryption protecting the session in general.

> You can verify what it sent, but that doesn't prove it wasn't compromised.

You're right. What you are trying to prove isn't so much that the system wasn't compromised, but rather that there wasn't some kind of leakage of data from it, which is a subtly different thing.


> There would be a meaningful difference that you would now have an operational exposure of the session keys, that fundamentally undermines not just PFS but potentially the encryption protecting the session in general.

That's no worse than before, where you could use the private key to undermine the encryption.

You could also end up with a much better system if you encrypted the session keys so that only the auditing device can decrypt them.


> That's no worse than before, where you could use the private key to undermine the encryption.

>

> You could also end up with a much better system if you encrypted the session keys so that only the auditing device can decrypt them.

We're speculating about the trust model and the constraints it must operate within. The private key isn't getting transmitted continuously over the network, so perhaps with some trust models transmitting session keys is equivalent, better, or worse.

Given that the people actually working in the space have clearly put a lot of thought into how to best fit their needs, I wouldn't presume that they got it wrong.


> The whole point is to NOT trust the endpoints to be reliable narrators of what they are transmitting over the network

That’s not the goal though. You’re not trying to monitor the systems themselves, you’re trying to monitor the people using the system.

A financial regulator doesn’t give a crap what your system does, or how it does it. They only care that they can blame (and potentially prosecute) an actual person if it goes wrong.

Using middleware boxes makes this easy. No need to actually modify the software you’re using to create proper audit logs; just log everything and figure it out later.


> That’s not the goal though. You’re not trying to monitor the systems themselves, you’re trying to monitor the people using the system.

From a systems perspective, they aren't logically different.

> A financial regulator doesn’t give a crap what your system does, or how it does it. They only care that they can blame (and potentially prosecute) an actual person if it goes wrong.

Yeah, you tell that to the regulator when it becomes clear that trades were being published to a competitor a millisecond before they were being listed.

> Using middleware boxes makes this easy. No need to actually modify the software you’re using to create proper audit logs; just log everything and figure it out later.

The log everything part, you are totally right about. The "middleware" box that actually is part of the operational path... that's a different story. You want something that watches the system without actually being part of the system.


Which means this whole argument might be insincere and the real goal is otherwise.


Lack of PFS isn't a weakness. It isn't a mathematical backdoor. It's a config option that is left disabled when entities have a moral and ethical rationale for inspecting the traffic traversing their network.


I disagree, it still protects the encryption of information they have in flight.

If they're actually recording the entirety of every TCP stream that comes into the datacenter, how many sets of credentials do you think are stored in that system? And right now, they're all encrypted with a single or small number of keys, that must be available to the system that is storing and parsing this data.

Also, given the breaches that have happened, I keep waiting for there to be a set of regulations from the other side requiring adequate protection and deletion of data. He seems entirely unconcerned with that aspect.


This is a remarkable story. Fortunately, this ETSI-backed "ETS" standard appears to have just about zero uptake or internet presence, let alone vendor acceptance. So although this is fairly outrageous based on the EFF article, it doesn't look like something that's a big threat to TLS at this point.

PS. I can't even get ETSI's website to load! https://www.etsi.org/


If you're having trouble getting etsi.org to load, try using their static key for diffie-hellman: 0x00000000.


Works every time.


It could be a threat to TLS if they manage to convince NIST to recommend their variant instead of the TLS standard.


> This would only require changes to servers, not clients, and would look just like the secure version of TLS 1.3.

If a TLS 1.3 client will happily connect to an ETS server that isn't playing by the rules, doesn't that indicate a flaw in 1.3?


Either client or server can break secrecy. Server compromise isn't a threat model the client can defend against. For example, the server could simply forward a copy of the whole communication in cleartext to someone, and the client can't know this.

In this case, the server is using a predictable number instead of a random one for part of the protocol. Possibly a client could detect this by doing multiple transactions and seeing if a number gets reused, but that seems outside the scope of TLS.


The expectation is that the encrypted link is not decryptable by a third party. If that isn't always true in the face of an adversary then claims of forward secrecy for TLS 1.3 are false.


The expectation is that the encrypted link is not decryptable by a third party if both parties are implementing the protocol in good faith.


This isn't a 3rd party, it's the server you're communicating with. Just like your own client could do the same.


> If a TLS 1.3 client will happily connect to an ETS server that isn't playing by the rules, doesn't that indicate a flaw in 1.3?

There is a way to detect this. Record the last ephemeral public key that server used with you. If it uses the same one again, refuse to connect.
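The "remember the last key share" idea can be sketched as a simple client-side policy. Note the caveats: real TLS libraries don't uniformly expose the server's key share to application code, and a server rotating among a pool of static keys would evade a last-value-only check, so this is illustrative rather than a drop-in defense:

```python
# Sketch: refuse to connect to a server that presents the same "ephemeral"
# DH public value twice in a row - the ETS/eTLS telltale described above.
last_key_share = {}  # hostname -> most recently seen server key share

def check_key_share(host, server_share):
    """Raise if `host` repeats its previous share (no forward secrecy)."""
    if last_key_share.get(host) == server_share:
        raise ConnectionError(f"{host} reused its ephemeral key share")
    last_key_share[host] = server_share

check_key_share("bank.example", b"\x01" * 32)      # first sighting: accepted
check_key_share("bank.example", b"\x02" * 32)      # fresh share: accepted
try:
    check_key_share("bank.example", b"\x02" * 32)  # immediate reuse: refused
except ConnectionError as err:
    print(err)
```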


So what's the argument from the other side? Going through all this effort to allow PFS to be disabled seems like a ton of work. What's their use-case?


If your stance is:

No opaque data leaves my network

Then this is the only way you can have outbound HTTPS connections. And for e.g. a bank, certain legal firms, or any company that has a lot of sensitive data they either don't want to be leaked, or at least want the option of detecting when it is leaked, that is a somewhat reasonable stance. In the case of banks, this is needed for regulatory compliance regarding insider trading. For legal companies, I imagine this is about ensuring certain confidentiality. I could see the same thing for companies dealing with trade-secrets.

The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.

This method is considered better than terminating TLS early at a proxy and setting up a separate tunnel to the clients because breaking PFS is passive, rather than active. Thus it is a lot less resource intensive, a lot less vulnerable (no internet facing box that, if broken, has all communication in plaintext), and introduces no extra latency.

It is essentially a 'better' way to do an authorized MitM on everything on your network, and some companies want this authorized MitM. Like any authorized MitM, it introduces a third party who can compromise security, which is not generally desirable, but some companies don't mind being that third party to their own employees.


Why not have the endpoints ship their session keys OoB to a centralized place / whatever needs to look at the traffic? Sure, there's more of them, but that shouldn't be a huge volume? (It is insignificant compared to the captured traffic.)

> The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.

There still has to be some control over the endpoints. Otherwise, what prevents them from negotiating an algorithm in TLS 1.2 that has PFS?

And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there? (So, it is still encrypted internally, but the node can inspect the data in an authorized manner.) (You seem to address it, but I'm not sure what you mean: yeah, having a centralized box decrypting your traffic means that an attacker that gets access to that can see a lot. But what were you doing in TLS 1.2 w/ a non-PFS ciphersuite that didn't involve a machine w/ the ability to decrypt everything?)


> Why not have the endpoints ship their session keys OoB to a centralized place / whatever needs to look at the traffic? Sure, there's more of them, but that shouldn't be a huge volume? (It is insignificant compared to the captured traffic.)

When you have communication between two endpoints, over your own network, transmitting session keys OoB doesn't improve the security of your systems, but does increase the complexity.

> And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there?

Aside from adding a ton of latency and extra performance overhead, you now have a new operational endpoint that you have to trust. That doesn't fit the trust model. The key point here is that the data is getting logged and then decrypted offline, by a totally separate system.


> doesn't improve the security of your systems

Having PFS increases security for your end users. (Minus your storing of the session keys, of course; if whatever you need the session keys for doesn't require you to store them forever, then it still seems like a benefit.) Being able to use standard, well-audited libraries instead of a proprietary piece of "enterprise" code is a benefit.

> Aside from adding a ton of latency and extra performance overhead

The performance of TLS on today's hardware is negligible; CPUs have instructions to accelerate it in hardware.

> you now have a new operational endpoint that you have to trust

No: In the prior TLS 1.2 design, the decryption key was on both the node MitM'ing the traffic, and the actual end nodes dealing with the traffic. The proposed TLS 1.3 alternative does not change that. (Nor does it improve it.)

If your TLS 1.2 was that you terminated at the node doing the MitM'ing, then do the same thing in TLS 1.3.


> Having PFS increases security for your end users.

The context of ETS/eTLS is that you yourself are the end user.

> The performance of TLS on today's hardware is negligible; CPUs have instructions to accelerate it in hardware.

Uh-huh. I like that you believe that, but with a lot of HFT systems, even the latency of going from the NIC to the CPU is too much. Adding a hop in between that decrypts and then re-encrypts, with the inherent buffering involved, is way, way too much latency.

> No: In the prior TLS 1.2 design, the decryption key was on both the node MitM'ing the traffic, and the actual end nodes dealing with the traffic. The proposed TLS 1.3 alternative does not change that. (Nor does it improve it.)

> If your TLS 1.2 was that you terminated at the node doing the MitM'ing, then do the same thing in TLS 1.3.

Yeah... see, that's the part you aren't getting. The old model was also not terminating through a proxy with TLS 1.2. That actually doesn't address the needs of the trusted system.


Not to mention that you have to trust the endpoint to ship their session keys. If one "forgets" or the key is "lost in the mail", there's no way to prove that message wasn't problematic.


Assume that non-PFS ciphersuites had remained in TLS: You still have to trust the endpoint to not negotiate a PFS ciphersuite.


Yes, but a failure to do so would be considered a compromise, as opposed to a simple network failure.


It’s a pipe dream. The webpage or application can easily include its own encryption that can’t be broken by these proxies.

If your stance is ‘no opaque data leaves my network’ your only option is an air gap.


Engineering is all about making trade-offs (“perfect is the enemy of the good and all”) and security engineering is no different. The same logic could be used to say that you don't need an edge firewall because each client should have one but it's much easier to simplify the baseline.

If nothing else, it would make the remaining traffic stand out more since you wouldn't be spending time auditing normal apps which decode cleanly and it's highly likely that someone trying to circumvent such a system would be required to do things which stand out more than routine usage.

As a simple example, an organization which does that kind of monitoring is unlikely to allow users to install arbitrary applications or visit any site on the web. With a standard setup, someone trying to exfiltrate data could just hit a popular site like Github, Gmail, Dropbox, etc. but if they need to use some custom encryption or steganography code they're either forced to install it somewhere far less common (i.e. more likely to stand out) or installing something locally where client monitoring can report an unusual browser extension or application.


A cute excuse until one of your popular sites starts to do this. Now it happens to be that Google hates these proxies as they are a popular target for repressive governments, who technically can’t be distinguished from a snooping ‘enterprise’.

The reality is that world kept turning without these proxies and it will keep turning once they are made obsolete.


It's not a pipe dream if your only goal is to be compliant with a formal rule or regulatory structure that requires it. The reason we need stuff like ETS is really because there are organizations that otherwise could not allow any outbound HTTPS sessions.

There are people who need to check off boxes in order to comply with certain rules. Their security reality is not actually all that important.


They control the devices on their side of the firewall, so they can log whatever they want before the data sent enters the encrypted tunnel, and after the data received exits it.

ETS just means they don't have to spend money on replacing their man-in-the-middle monitoring gear with client-local solutions on every workstation and server.

It all seems pretty silly though. Regulators aren't idiots; they know that HTTPS is everywhere now, and that TLS 1.3 means that third parties listening in on connections are going to be a thing of the past, so the regulations will change to reflect that.


> ETS just means they don't have to spend money on replacing their man-in-the-middle monitoring gear with client-local solutions on every workstation and server.

Doesn't it still require altering the TLS implementation to use the static DH keys instead of following the TLS 1.3 standard of using random keys?


You can ban the use of such applications a lot easier than you can ban HTTPS. This means that simply using those tools is grounds for legal action, which might suffice. Especially if the concern for confidentiality is not for internal reasons, but for legal reasons.


It's not just companies that want to MitM their traffic: consumers want this too. Otherwise all the IoT devices in a house will no longer be trustable: Alexa can start uploading everything it hears, even if you never said "hey alexa!". And because of forward secrecy, you can't verify what it did or did not send: it all looks the same.

This expands to any IoT device with proprietary software on it, which by 2023 will be quite a lot of things.


There's a whole IT market segment around TLS decryption for corporate LANs. Basically corporate MITM that will decrypt TLS at the gateway/firewall and, with currently used TLS standards, will then re-encrypt the traffic back to the client so the browser thinks it has a legit connection. It's used to scan packets for intrusion detection, for malware, and to track data loss like the article talks about.


But you don't NEED to kill forward secrecy to do that. TLS 1.3 doesn't seem to be a problem for the anti-malware, IPS, or even DLP use cases. You just need to decrypt, inspect, and re-encrypt traffic at the firewall, using a CA cert trusted by your clients. The problem is lazy organizations that just want to passively collect all of the encrypted traffic and then decrypt it later at their leisure, which smells much more like surveillance than security.


Yeah, you're missing the security model.

The point is to have the decryption done on a system that is isolated from the production environment (and is consequently isolated from security compromises).


All of those require your computer to trust a new Certificate Authority or you will get warnings all over the place. If there is a company that claims to be able to do it without trusting the CA or producing warnings I would love to see it. (seriously, I actually would love to see that).

And if you are in a corporate environment using a company computer you forfeit your privacy anyway. You can always go somewhere else or do your banking and Facebook on a different machine / not on company time.


Why? If you have the private key you can decrypt TLS traffic if forward secrecy is off. Which is why forward secrecy exists: to prevent captured encrypted sessions from being decrypted out-of-band with, presumably, compromised private keys.

The issue is that TLS 1.3 deprecates the static key exchange that makes this possible, essentially making (perfect) forward secrecy a requirement, since all of the included cipher suites provide it. The only way to monitor/inspect TLS traffic in this situation is to MITM the traffic rather than simply record encrypted sessions.
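The mechanism behind forward secrecy is an ephemeral Diffie-Hellman exchange: both sides derive a fresh shared secret per session, so there is no long-term key whose compromise unlocks recorded traffic. A toy finite-field sketch (the prime here is a deliberately tiny stand-in; real TLS 1.3 uses ECDHE over standardized groups):

```python
# Toy ephemeral Diffie-Hellman. Each session picks fresh random exponents,
# so every session yields a different shared secret and capturing the
# wire traffic alone is useless. Parameters are far too small for real use.
import secrets

P = 2**61 - 1   # a Mersenne prime; toy-sized, NOT a real DH group
G = 5

def dh_session():
    """Run one ephemeral exchange; return (client_secret, server_secret)."""
    a = secrets.randbelow(P - 2) + 1     # client's ephemeral exponent
    b = secrets.randbelow(P - 2) + 1     # server's ephemeral exponent
    A = pow(G, a, P)                     # public values sent over the wire
    B = pow(G, b, P)
    return pow(B, a, P), pow(A, b, P)    # both sides compute g^(ab) mod p

client, server = dh_session()
print(client == server)                      # True: both sides agree
print(dh_session()[0] != dh_session()[0])    # True: sessions differ
```

With static RSA key exchange, by contrast, one stolen server key decrypts every recorded session, which is exactly the property ETS tries to bring back.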


It's deceptive to claim out-of-band MITM isn't MITM; it's still MITM, just covert. TLSv1.3 forcing it to become glaringly obvious is exactly what should be happening.


For client-side it does require a new CA, but for server side it does not (since you have access to all of the private keys in use). Given that banks are pushing for this standard, that would make a lot of sense.


Correct re: the root cert in my experience at least. They usually get pushed out as part of a Group Policy in Active Directory.


The original purpose was governments spying on their citizens, which is why a lot of software uses certificate pinning to block this intrusion. These MITM solutions just let through the big players' traffic so you don't get too much of a fuss while still retaining the ability to 'check for malware'.


Breaking the security of HTTPS for surveillance and monitoring purposes. The BITS group is formally opposed to secure communications because they want to make it easier to MITM attack the secure communication. They want to make it easier to decrypt communication.


...and they want that for transparency, not for nefarious reasons.


MITMing TLS outside the endpoints is inherently nefarious, whether they think it is or not.

If you want to compromise the endpoint, compromise the endpoint. Install your own MITM certificates and terminate the connection in the middle, or install client-side malware. Either way, there should always be a giant warning sign on the client that end-to-end security is compromised.


...and in this context they aren't compromising it. They're just sharing their keys so that another component, all part of the same trusted system, can also decrypt the traffic.


Sharing the key allows more than just decryption; it also allows undetectable modification of the traffic. A protocol which allowed third-party decryption but not modification would have been designed differently.


In this case the key is used by a system that is reading a log of the archived traffic. It has no direct network connectivity to the production traffic. I think we can safely say it isn't modifying the traffic... in fact, that's a requirement for the security apparatus.


They don't want to compromise the endpoint. They want to ensure they have a record of all communications to and from the endpoint.


In order to implement ETS they already have to compromise the endpoint to make the TLS 1.3 implementation use static keys, right?


Yeah, but it's a means to an end, not the end.

The assumptions that TLS 1.3 is based around are in direct conflict with the requirements of the secure environment they operate in.

This isn't some evil thing... a cryptographic protocol is a part of a trusted system, and whether it is appropriate for a particular context has to do with the design of the trusted system.


Corporate environments where "endpoint solutions" snoop through all traffic to detect malware activity. While I understand the use case, I would not support it. What I find unacceptable is, assuming the article is correct, that ETSI is asking NIST to recommend their crippled TLS in their new guidelines rather than TLS1.3.

Disabling PFS and thus enabling the decryption of all TLS sessions should be a conscious decision rather than something that was there 'by default' (and could easily be abused).


The other side of the argument is frequently discounted and as an IT security person myself I understand that. However, there is a real challenge for companies who deal with large amounts of very sensitive data. To be able to effectively monitor for data loss it makes a lot of sense to be able to monitor the connection points between your protected network and outside networks. The move to all traffic being encrypted and uninspectable breaks this paradigm.

You can cover some of the same concern by implementing an agent on every connected computing device but this brings much greater complexity as you are monitoring potentially hundreds to thousands more places and still have to worry if you have complete coverage.

Consider an analogy of going through international customs. Do you employ customs officials at the border who are allowed to sample and inspect private belongings to verify laws are being followed? Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border? The second example is a bit stretched but hopefully illustrates the scale problem.


> Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border?

Without telling the person whose things were packed that they were packed by the official.


I assume that organizations deploying this make it clear to employees that the network and computer equipment is intended for professional use.


But since every device needs to trust your CA aren't you forced to pack their belongings in either case?


Hmm, I'm straining my own analogy already so I won't try to beat that horse any deader. :-) I am mostly trying to argue the positives of centralized inspection at network chokepoints in simplicity and guarantee of coverage.


Banks are required by regulation to monitor & audit pretty much everything. Previously they did this for internet usage by using MITM proxies. TLS 1.3 makes that approach hard/impossible.


Why can't they just install their own self-signed root CA on all their computers and continue to MITM it?


Certificate pinning is used by some very common applications and can break a MITM that relies on a self-signed certificate.
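A pin check is conceptually simple: the application ships a known fingerprint and refuses any presented certificate that doesn't match, including one minted by a corporate CA the OS happens to trust. A sketch, using placeholder byte strings where real DER-encoded certificates would go (real apps usually pin the SubjectPublicKeyInfo hash, per RFC 7469, rather than the whole certificate):

```python
# Sketch of a certificate-pin check. The app compares the SHA-256 of the
# presented certificate against a baked-in value; a proxy-minted cert has
# a different hash, so the connection is refused even though the OS trusts
# the corporate CA. Byte strings below are placeholders for real DER certs.
import hashlib

PINNED_SHA256 = hashlib.sha256(b"genuine-server-cert-der").hexdigest()

def pin_ok(presented_der: bytes) -> bool:
    """Return True only if the presented certificate matches the pin."""
    return hashlib.sha256(presented_der).hexdigest() == PINNED_SHA256

print(pin_ok(b"genuine-server-cert-der"))   # True: real cert accepted
print(pin_ok(b"corporate-mitm-cert-der"))   # False: proxy cert rejected
```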


Then those applications are correctly getting the behavior they desire: either they get a secure connection or they don't connect at all.


Do you have any example of such applications that would be used by a bank?


I don't work for a bank so I can't speak definitively to their applications. A few sample applications I see listed as certificate pinned in Netskope (a CASB) though include:

  Adobe Creative Cloud
  Amazon Work Spaces
  Docusign
  GitHub
  Google Drive
  GoToMeeting
  iCloud
  Microsoft Office 365
  Outlook.com
  Microsoft Skype for Business
  Salesforce.com
Note this typically refers to native applications and plugins which also connect over TLS and not web applications.


My suspicion was that any such applications have a web version.

I'm a bit surprised by Skype for Business (horrible product BTW, it's a gamble to even be able to sign in on a fresh install) though, the rest I would expect people to use web versions of.

I would think that (perhaps coincidentally), the organizations that require these kinds of insights are not the ones that are relying on services that do cert pinning. And if they do, they can put the marketing department/the server running THAT wonky old software from the 90s in a separate subnet.


Isn't that exactly the same between TLS 1.2 and 1.3? They won't have the private keys for google drive. How are those handled today?


Are those common applications necessary for their business operations though? They could just blacklist them entirely.


besides, many of those applications depin when you install custom CAs, don't they?


I'm not sure I understand: why can't they record the decrypted traffic instead? (I assume they have it in plaintext at some point.) Of course they could encrypt it again before sending it to their audit server.


How does ETS break MITM for corporate LANs that are trusted CAs on work devices? Why can't a proxy still MITM a connection by terminating the client side, establishing the server side, and that be that?

Also, banks seeing their own corporate traffic is ethical and moral. Whether they need to simply find another way to read all data leaving their network is another piece of the story.


Seeing articles like this reminds me why I donate to the EFF on the regular and recommend that others do so, too. They're on our side.


> Instead of thinking of this as “Enterprise Transport Security,” which the creators say the acronym stands for, you should think of it as “Extra Terrible Security.”


As long as browser makers don't support it, this is a non-issue. Correct?


I wonder how they plan to get this into Chrome.


Christ, what assholes.


Yeah, let's just make it harder for banks to protect your money so that nobody can figure out your Facebook password in 10 years.

EFF: "Everything sent over the network should be a secret! Nobody has a good reason to inspect traffic, it puts users' privacy at risk!"

Bank: "We keep trillions of your dollars. Inspecting our own traffic is how we make sure nobody is stealing it. We're a pretty big organization, so this stuff costs a lot of money, and is complex and takes a long time to get right. Can you give us a way to do that in this new TLS standard?"

EFF: "No!! Privacy!!!"

Bank: "Ok... I guess we'll have to make our own standard, then...?"

EFF: "Don't ANYONE use that standard, it will cause REAL HARM!!!!"

Bank: "..... Nobody else was going to... except us....."


PFS doesn't prevent inspection of traffic; it just makes passive capture more complicated, since you now need to log the ephemeral keys for each connection rather than just using the private key to decrypt the whole capture.

Not necessarily trivial but not exactly impossible for someone who controls one of the endpoints.

Active interception with a middlebox still works exactly the same as it always has.
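Logging the ephemeral keys is in fact a standardized practice for endpoints you control: the NSS key-log format (the `SSLKEYLOGFILE` convention honored by Firefox, Chrome, and curl) records per-session secrets that Wireshark can later use to decrypt a packet capture. Python's `ssl` module exposes the same mechanism since 3.8 (assuming an OpenSSL 1.1.1+ build):

```python
# Minimal sketch: enable TLS key logging on an endpoint you control.
# Each subsequent handshake on this context appends its ephemeral
# session secrets to the file, in the NSS key-log format that
# Wireshark accepts for decrypting captures.
import os
import ssl
import tempfile

keylog = os.path.join(tempfile.gettempdir(), "tls-keys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog   # requires Python 3.8+ / OpenSSL 1.1.1+

print(ctx.keylog_filename == keylog)   # True
```

This gives an endpoint owner exactly the retrospective-decryption capability the banks say they need, without weakening the protocol for anyone else.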


> Nobody else was going to

I doubt this pretty strongly.

And if they can put in enough effort to implement a new protocol, they can put in enough effort to log some keys.

They could do it in a much safer manner, too. They could have a TLS extension that appends the session key to the start of every connection, encrypted so that only the inspection device can use it. Then it would be transparent, connections not using it could be easily blocked, and you would still have forward secrecy in case the private key leaked.



