What's discouraging terrorism is the US's overreaction outside the US. It's become very clear to terrorist organizations that if they attack the US, the US is going to hit back, even if it's insanely expensive and causes collateral damage. The people in charge, and many people around them, end up dead.
Remember ISIS, the Islamic State? ISIS is down to 1.5 square miles, surrounded, and everybody but the most fanatical fighters is surrendering. The holdouts have days to live.
We don't need more Big Brother.
The problem with this argument is that it's unfalsifiable, so it can justify continued spending forever. We need to spend $450B/year on bear-repelling rocks because we currently pay for the rocks and there are no bears. And if any bears do appear, then we obviously didn't have enough bear-repelling rocks and we need to start spending $900B/year.
If there is a real question as to whether the ~0 bears is a result of the rocks, it's time to cut the bear-repelling rock budget in half and see how many bears there are next year. If it's still ~0 then it didn't need to be as high as it was and it may still be too high.
Also, cutting the government's budget would not impact the cost of compliance to corporations.
Many who throw jabs at J2EE (written on purpose) never had the joys of trying out xBaseEE, CEE, C++EE (CORBA, DCOM/MTS), ...
In comparison to some of their other hardware, these servers were better suited to organizations with demanding needs: minimizing downtime, lots of compute power, or configuration flexibility.
But of course people quickly realized that a key characteristic of actual enterprise computing is large budgets, so it almost immediately turned into a game of labeling things with the word "enterprise" in hopes of vacuuming up as much of that money as possible.
There are these large corporations with a significant investment in their existing infrastructure and systems - and now they all need to make them interop. The mindset is “how do we make our CORBA ERP communicate with their Java CRM without needing to make any changes to either of them?”. Hence SOAP: it packages existing method-call semantics into an HTTP message that will cross a firewall: not even the IT dept needs to get involved to change firewall rules. And they hammered out a working spec within a couple of years. That’s impressive considering the slow-moving nature of large, risk-averse enterprises. We now know that REST-is-Best, but it took the industry around 10 years to figure that out, and another 5 years for the tooling and ecosystem to catch up. SOAP was a quick-fix that was needed immediately.
So I’d recharacterise “Enterprise software” as “fits into your existing system and does what you need it to, right now” - and their MC Escher-inspired architecture is a consequence of it needing to support and fit-in to whatever systems were prevalent when their project was started.
It’s not Enterprise software that I find rigid and inflexible - it’s cutting-edge software that I have more problems with. I was working with Neo4j in 2016 and had security issues because it didn’t have any built-in security support until last year. I had to change what I was doing to accommodate it, instead of vice versa.
More often than not, those MC Escher-inspired architectures, as you call them, are the result of corporate politics, with each department having a say in what their tooling should look like, and bringing in externals to actually build it for them at the lowest bid on fixed-cost projects.
It turned me grey, bald and cynical aka experienced in every possible way to fuck something up. That turned out to be quite valuable!
> Tue, 27 September 2016 18:21 UTC
> The various suggestions for creating fixed/static Diffie Hellman keys raise interesting possibilities. We would like to understand these ideas better at a technical level and are initiating research into this potential solution.
The core argument made by BITS is that they need a way to log TLS traffic such that it can be decrypted later, in order to provide data retention in line with regulations. While this could be done by logging all ephemeral keys generated by the servers, BITS argues that this isn’t practical due to their use of dedicated packet logging hardware that is key-ignorant. Instead they want to use non-forward-secret TLS so they can decrypt past messages easily. Their beef with TLS 1.3 is that it removes all non-FS key exchange methods, and further that explicitly obsoleting TLS 1.2 as a standard pushes them to adopt 1.3 in an enterprise environment (or risk current/future regulatory scrutiny over their use of an obsoleted standard). Hence they want to develop a competing, active standard with non-FS key exchange.
It is vital to financial institutions and to their customers and regulators
that these institutions be able to maintain both security and regulatory compliance
during and after the transition from TLS 1.2 to TLS 1.3.
This just isn’t true, or rather “compliance” tends to be quite fuzzy.
Regulators generally expect you to follow recommendations from places like NIST. But it’s not a hard requirement; you just need to explain why deviating is better.
Unfortunately most financial institutions trip up at the “explain why it’s better” bit. Either because they aren’t competent enough, or (more likely) can’t be bothered.
It's not "useful" if your goal is to intercept and decrypt messages that are supposed to be secure, which is what both regulated entities and baddies want to do.
If you don't require forward secrecy you introduce a weakness. The protocol won't distinguish between whether that weakness is being exploited by regulated entities or baddies.
You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints. The issue isn't that they can't do that, it's that they don't want to do that, probably for cost or convenience reasons. Those aren't reasons to weaken TLS for everyone who actually wants secure comms.
Weakness in the protocol, but not necessarily weakness in the system as a whole. This is an environment where part of the trusted nature of the system comes from having a complete record of all the network communication to and from the system, and an ability to audit that data offline.
That context is in conflict with the objectives of PFS & TLS 1.3 in general. So, understandably, they came up with another solution that fit the design goals, and understandably, rather than reinvent the wheel, they came up with a way to tweak existing solutions to fit their design goals.
> You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints.
Yeah, you're not understanding the system. The whole point is to NOT trust the endpoints to be reliable narrators of what they are transmitting over the network (which makes sense, because if they are compromised, they wouldn't be). TLS is designed to allow two trusted endpoints to communicate, but the goal/context for eTLS is to have a full audit of communications into and out of an untrusted node. That's presumed from the start; TLS's goals therefore aren't helpful.
The only way to truly not trust the server is to verify that you can decrypt in real time.
But wait, if you're verifying that you can decrypt in real time, then you could apply that to forward-secret connections too! Have the server send session keys to the logging machine, and make it test them.
It requires a (small) modification to the server, but so does using your own protocol.
Yes, but what it solves is that when it does so, you know it has been compromised.
> The only way to truly not trust the server is to verify that you can decrypt in real time.
No. That is the way to know that you can not trust the server in real time. That isn't the objective. The objective is to be able to, after the fact, prove that you could trust it at that point in time.
While it's working correctly (which involves you knowing the private key), you know everything it sends. If it gets compromised, it might keep using the same key and you know what it sent, or it might start using a different key and you'll never figure out what it sent while compromised.
When a server does use forward secrecy, it looks like this:
While it's working correctly (which includes logging session keys), you know everything it sends. If it gets compromised, it might keep logging session keys and you know what it sent, or it might start logging fake keys and you'll never figure out what it sent while compromised.
What meaningful difference is there?
You don't trust the server to tell you what it sent. You record what it sent, and at some point you can check if it decrypts or not. This acts the same whether or not you have FS enabled.
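The record-then-check idea can be sketched in a few lines. This is a toy, not an implementation: the hash-based stream cipher and HMAC tag below stand in for TLS's real record protection, and all names are made up. The point is only that a key-logging scheme is self-verifying: a fake logged key simply fails to decrypt the recorded bytes.

```python
import hashlib, hmac, os

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT real crypto.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC, so decryption with the wrong key is detectable.
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def open_or_none(key: bytes, blob: bytes):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        return None  # logged key doesn't match what was actually sent
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

# Server side: encrypt a record and log the session key.
session_key = os.urandom(32)
wire_bytes = seal(session_key, b"ORDER BUY 100 ACME")
honest_log = session_key      # honest server logs the real key
fake_log = os.urandom(32)     # compromised server logs a fake key

# Auditor side: recorded wire_bytes passively; check the logged key offline.
assert open_or_none(honest_log, wire_bytes) == b"ORDER BUY 100 ACME"
assert open_or_none(fake_log, wire_bytes) is None  # tamper detected
```

Whether the "key" being checked is a long-lived private key or a per-session key, the audit step is the same: try to decrypt the recorded traffic and flag anything that doesn't open.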
> The objective is to be able to, after the fact, prove that you could trust it at that point in time.
You can verify what it sent, but that doesn't prove it wasn't compromised.
There would not be a meaningful difference in terms of being able to tell if it was compromised. There would be a meaningful difference that you would now have an operational exposure of the session keys, that fundamentally undermines not just PFS but potentially the encryption protecting the session in general.
> You can verify what it sent, but that doesn't prove it wasn't compromised.
You're right. The semantics of what you are trying to prove isn't so much that it wasn't compromised, but rather to verify that there wasn't some kind of leakage of data from the system, which is a subtly different thing.
That's no worse than before, where you could use the private key to undermine the encryption.
You could also end up with a much better system if you encrypted the session keys so that only the auditing device can decrypt them.
> You could also end up with a much better system if you encrypted the session keys so that only the auditing device can decrypt them.
We're speculating about the trust model and the constraints it must operate within. The private key isn't getting transmitted continuously over the network, so perhaps with some trust models transmitting session keys is equivalent, better, or worse.
Given that the people actually working in the space have clearly put a lot of thought into how to best fit their needs, I wouldn't presume that they got it wrong.
That’s not the goal though. You’re not trying to monitor the systems themselves, you’re trying to monitor the people using the system.
A financial regulator doesn’t give a crap what your system does, or how it does it. They only care that they can blame (and potentially prosecute) an actual person if it goes wrong.
Using middleware boxes makes this easy. No need to actually modify the software you're using to create proper audit logs; just log everything and figure it out later.
From a systems perspective, they aren't logically different.
> A financial regulator doesn’t give a crap what your system does, or how it does it. They only care that they can blame (and potentially prosecute) an actual person if it goes wrong.
Yeah, you tell that to the regulator when it becomes clear that trades were being published to a competitor a millisecond before they were being listed.
> Using middleware boxes makes this easy. No need to actually modify the software you're using to create proper audit logs; just log everything and figure it out later.
The log everything part, you are totally right about. The "middleware" box that actually is part of the operational path... that's a different story. You want something that watches the system without actually being part of the system.
If they're actually recording the entirety of every TCP stream that comes into the datacenter, how many sets of credentials do you think are stored in that system? And right now, they're all encrypted with a single or small number of keys, that must be available to the system that is storing and parsing this data.
Also, given the breaches that have happened, I keep waiting for there to be a set of regulations from the other side requiring adequate protection and deletion of data. He seems entirely unconcerned with that aspect.
PS. I can't even get ETSI's website to load! https://www.etsi.org/
If a TLS 1.3 client will happily connect to an ETS server that isn't playing by the rules, doesn't that indicate a flaw in 1.3?
In this case, the server is using a predictable number instead of a random one for part of the protocol. Possibly a client could detect this by doing multiple transactions and seeing if a number gets reused, but that seems outside the scope of TLS.
There is a way to detect this. Record the last ephemeral public key that server used with you. If it uses the same one again, refuse to connect.
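A minimal sketch of that client-side check, assuming the client can see the server's key share from the handshake (the function and variable names here are hypothetical, not part of any TLS library):

```python
# Refuse to reconnect if a server reuses its "ephemeral" key share.
# A real client would hook this into its TLS handshake; here the key
# share is just bytes handed to the check.
last_key_share: dict[str, bytes] = {}

def check_key_share(host: str, key_share: bytes) -> bool:
    """Return False (refuse the connection) if the server repeats a key share."""
    if last_key_share.get(host) == key_share:
        return False
    last_key_share[host] = key_share
    return True

assert check_key_share("bank.example", b"\x01" * 32)      # first sighting: ok
assert check_key_share("bank.example", b"\x02" * 32)      # fresh key: ok
assert not check_key_share("bank.example", b"\x02" * 32)  # reuse: refuse
```

Note the limits: this only catches reuse across connections the same client makes, and a server could rotate between a small pool of static keys to evade it.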
No opaque data leaves my network
Then this is the only way you can have outbound HTTPS connections. And for e.g. a bank, certain legal firms, or any company that has a lot of sensitive data they either don't want to be leaked, or at least want the option of detecting when it is leaked, that is a somewhat reasonable stance.
In the case of banks, this is needed for regulatory compliance regarding insider trading. For legal companies, I imagine this is about ensuring certain confidentiality. I could see the same thing for companies dealing with trade-secrets.
The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
This method is considered better than terminating TLS early at a proxy and setting up a separate tunnel to the clients because breaking PFS is passive, rather than active. Thus it is a lot less resource intensive, a lot less vulnerable (no internet facing box that, if broken, has all communication in plaintext), and introduces no extra latency.
It is essentially a 'better' way to do an authorized MitM on everything on your network, and some companies want this authorized MitM. Like any authorized MitM, it introduces a third party who can compromise security, which is not generally desirable, but some companies don't mind being that third party to their own employees.
> The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
There still has to be some control over the endpoints. Otherwise, what prevents them from negotiating an algorithm in TLS 1.2 that has PFS?
And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there? (So, it is still encrypted internally, but the node can inspect the data in an authorized manner.)
(You seem to address it, but I'm not sure what you mean: yeah, having a centralized box decrypting your traffic means that an attacker that gets access to that can see a lot. But what were you doing in TLS 1.2 w/ a non-PFS ciphersuite that didn't involve a machine w/ the ability to decrypt everything?)
When you have communication between two endpoints, over your own network, transmitting session keys OoB doesn't improve the security of your systems, but does increase the complexity.
> And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there?
Aside from adding a ton of latency and extra performance overhead, you now have a new operational endpoint that you have to trust. That doesn't fit the trust model. The key point here is that the data is getting logged and then decrypted offline, by a totally separate system.
Having PFS increases security for your end users. (Minus your storing of the session keys, of course; if whatever you need the session keys for doesn't require you to store them forever, then it still seems like a benefit.) Being able to use standard, well-audited libraries instead of a proprietary piece of "enterprise" code is a benefit.
> Aside from adding a ton of latency and extra performance overhead
The performance of TLS on today's hardware is negligible; CPUs have instructions to accelerate it in hardware.
> you now have a new operational endpoint that you have to trust
No: In the prior TLS 1.2 design, the decryption key was on both the node MitM'ing the traffic, and the actual end nodes dealing with the traffic. The proposed TLS 1.3 alternative does not change that. (Nor does it improve it.)
If your TLS 1.2 was that you terminated at the node doing the MitM'ing, then do the same thing in TLS 1.3.
The context of ETS/eTLS is that you yourself are the end user.
> The performance of TLS on today's hardware is negligible; CPUs have instructions to accelerate it in hardware.
Uh-huh. I like that you believe that, but with a lot of HFT systems, even the latency of going from the NIC to the CPU is too much. Adding a hop in between that decodes and then re-encodes, with the inherent buffering involved, is way, way too much latency.
> No: In the prior TLS 1.2 design, the decryption key was on both the node MitM'ing the traffic, and the actual end nodes dealing with the traffic. The proposed TLS 1.3 alternative does not change that. (Nor does it improve it.)
> If your TLS 1.2 was that you terminated at the node doing the MitM'ing, then do the same thing in TLS 1.3.
Yeah... see, that's the part you aren't getting. The old model was also not terminating through a proxy with TLS 1.2. That actually doesn't address the needs of the trusted system.
If your stance is ‘no opaque data leaves my network’ your only option is an air gap.
If nothing else, it would make the remaining traffic stand out more since you wouldn't be spending time auditing normal apps which decode cleanly and it's highly likely that someone trying to circumvent such a system would be required to do things which stand out more than routine usage.
As a simple example, an organization which does that kind of monitoring is unlikely to allow users to install arbitrary applications or visit any site on the web. With a standard setup, someone trying to exfiltrate data could just hit a popular site like Github, Gmail, Dropbox, etc. but if they need to use some custom encryption or steganography code they're either forced to install it somewhere far less common (i.e. more likely to stand out) or installing something locally where client monitoring can report an unusual browser extension or application.
The reality is that world kept turning without these proxies and it will keep turning once they are made obsolete.
There are people who need to check off boxes in order to comply with certain rules. Their security reality is not actually all that important.
ETS just means they don't have to spend money on replacing their man-in-the-middle monitoring gear with client-local solutions on every workstation and server.
It all seems pretty silly though. Regulators aren't idiots; they know that HTTPS is everywhere now, and that TLS 1.3 means third parties listening in on connections are going to be a thing of the past, so the regulations will change to reflect that.
Doesn't it still require altering the TLS implementation to use the static DH keys instead of following the TLS 1.3 standard of using random keys?
This expands to any IoT device with proprietary software on it, which by 2023 will be quite a lot of things.
The point is to have the decryption done on a system that is isolated from the production environment (and is consequently isolated from security compromises).
And if you are in a corporate environment using a company computer you forfeit your privacy anyway. You can always go somewhere else or do your banking and Facebook on a different machine / not on company time.
The issue is that TLS 1.3 deprecates the key exchange that makes this possible, essentially making (perfect) forward secrecy a requirement, since all the included cipher suites provide it. The only way to monitor/inspect TLS traffic in this situation is to MITM the traffic rather than simply record encrypted sessions.
If you want to compromise the endpoint, compromise the endpoint. Install your own MITM certificates and terminate the connection in the middle, or install client-side malware. Either way, there should always be a giant warning sign on the client that end-to-end security is compromised.
The assumptions that TLS 1.3 is based around are in direct conflict with the requirements of the secure environment they operate in.
This isn't some evil thing... a cryptographic protocol is a part of a trusted system, and whether it is appropriate for a particular context has to do with the design of the trusted system.
Disabling PFS and thus enabling the decryption of all TLS sessions should be a conscious decision rather than something that was there 'by default' (and could easily be abused).
You can cover some of the same concern by implementing an agent on every connected computing device but this brings much greater complexity as you are monitoring potentially hundreds to thousands more places and still have to worry if you have complete coverage.
Consider an analogy of going through international customs. Do you employ customs officials at the border who are allowed to sample and inspect private belongings to verify laws are being followed? Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border? The second example is a bit stretched but hopefully illustrates the scale problem.
Without telling the person whose things were packed that they were packed by the official.
Adobe Creative Cloud
Amazon Work Spaces
Microsoft Office 365 Outlook.com
Microsoft Skype for Business
I'm a bit surprised by Skype for Business (horrible product BTW, it's a gamble to even be able to sign in on a fresh install) though, the rest I would expect people to use web versions of.
I would think that (perhaps coincidentally), the organizations that require these kinds of insights are not the ones that are relying on services that do cert pinning. And if they do, they can put the marketing department/the server running THAT wonky old software from the 90s in a separate subnet.
Also, banks seeing their own corporate traffic is ethical and moral. Whether they need to simply find another way to read all data leaving their network is another piece of the story.
EFF: "Everything sent over the network should be a secret! Nobody has a good reason to inspect traffic, it puts users' privacy at risk!"
Bank: "We keep trillions of your dollars. Inspecting our own traffic is how we make sure nobody is stealing it. We're a pretty big organization, so this stuff costs a lot of money, and is complex and takes a long time to get right. Can you give us a way to do that in this new TLS standard?"
EFF: "No!! Privacy!!!"
Bank: "Ok... I guess we'll have to make our own standard, then...?"
EFF: "Don't ANYONE use that standard, it will cause REAL HARM!!!!"
Bank: "..... Nobody else was going to... except us....."
Not necessarily trivial but not exactly impossible for someone who controls one of the endpoints.
Active interception with a middlebox still works exactly the same as it always has.
I doubt this pretty strongly.
And if they can put in enough effort to implement a new protocol, they can put in enough effort to log some keys.
They could do it in a much safer manner, too. They could have a TLS extension that appends the session key to the start of every connection, encrypted so that only the inspection device can use it. Then it would be transparent, connections not using it could be easily blocked, and you would still have forward secrecy in case the private key leaked.
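A toy sketch of that wrapping idea, under stated assumptions: a pre-shared symmetric auditor key stands in for the inspection device's public key (a real design would use something like HPKE or an ECIES wrap, not this hash-based XOR), and every name here is made up for illustration.

```python
import hashlib, hmac, os

auditor_key = os.urandom(32)  # known only to the inspection device

def wrap_for_auditor(session_key: bytes) -> bytes:
    # Toy key wrap: XOR with a keystream derived from the auditor's key
    # and a fresh nonce, plus an HMAC tag so tampering is detectable.
    nonce = os.urandom(16)
    pad = hashlib.sha256(auditor_key + nonce).digest()
    ct = bytes(a ^ b for a, b in zip(session_key, pad))
    tag = hmac.new(auditor_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap(blob: bytes):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(auditor_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None
    pad = hashlib.sha256(auditor_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

session_key = os.urandom(32)
extension_blob = wrap_for_auditor(session_key)  # sent at connection start
assert unwrap(extension_blob) == session_key    # inspection device recovers it
```

A middlebox can then drop any connection that omits the extension, while everyone without the auditor's key still gets forward secrecy, since only that key unwraps the blob.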