* smaller latency (0-RTT: zero round-trips support).
* removes insecure encryption primitives (SHA1, MD5, RC4).
=> also removes all key exchange algos without forward secrecy.
* elliptic curve support.
=> and new X25519/Ed25519 elliptic curve support.
* downgrade protection.
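(For anyone wanting to check what their own stack negotiates: Python 3.7+ linked against OpenSSL 1.1.1 or newer already speaks TLS 1.3 out of the box. The hostname here is just an example.)

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("www.cloudflare.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.cloudflare.com") as tls:
            print(tls.version())  # "TLSv1.3" if both ends support it
            print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)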
I think most security people have been aware of these features for a while. What matters now is the final accepted standardization, after 28 drafts, and the epic success of overcoming the middlebox disaster...
This is one of those cases in which it would have been better to just break them. It isn't TLS's job to explicitly support evil.
These middleboxes are cancer. There's no way to morally defend their presence. Breaking them (and thus effectively forcing them out of the way) would have been the better approach.
> Breaking them (and thus effectively forcing them out of the way) would have been the better approach.
This is exactly what the security people are going to do after TLSv1.3 is introduced. Google Chrome is going to implement the so-called GREASE mechanism (https://tools.ietf.org/html/draft-ietf-tls-grease-01), which actively tries to negotiate connections with random, non-existent, but standards-compliant values in various fields of the protocol, to break the (future) broken middleboxes early on. Revenge time.
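For the curious, the GREASE values themselves are trivial to generate: the sixteen reserved 16-bit values whose two bytes are equal and end in 0xA, one of which the client sprinkles into its otherwise real lists. A minimal sketch (the helper function is mine, not from the draft):

    import random

    # GREASE values per the draft: 0x0A0A, 0x1A1A, ..., 0xFAFA. A compliant
    # peer MUST ignore them; a buggy middlebox tends to choke on them.
    GREASE_VALUES = [0x0A0A + 0x1010 * i for i in range(16)]

    def greased_cipher_suites(real_suites):
        """Prepend one random GREASE value to an otherwise normal suite list."""
        return [random.choice(GREASE_VALUES)] + list(real_suites)

    # Example: 0x1301 and 0x1302 are real TLS 1.3 cipher suite code points.
    print([hex(v) for v in greased_cipher_suites([0x1301, 0x1302])])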
In case people are curious what TLS 1.3 did to support broken middleboxes, a lot of information is in "D.4. Middlebox Compatibility Mode". Here's the text:
> Field measurements [Ben17a] [Ben17b] [Res17a] [Res17b] have found that a significant number of middleboxes misbehave when a TLS client/server pair negotiates TLS 1.3. Implementations can increase the chance of making connections through those middleboxes by making the TLS 1.3 handshake look more like a TLS 1.2 handshake...
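Concretely, those compatibility tricks boil down to a handful of wire-format knobs. A rough sketch of them as data (field names follow RFC 8446 Appendix D.4; the dict itself is just mine for illustration):

    import os

    LEGACY_VERSION = b"\x03\x03"   # reads as "TLS 1.2" on the wire, frozen forever
    SUPPORTED_VERSIONS_EXT = 43    # the extension that carries the real version
    TLS13 = b"\x03\x04"

    def compat_mode_client_hello_fields():
        """The Appendix D.4 knobs; a real stack sets these when building
        its ClientHello."""
        return {
            "legacy_version": LEGACY_VERSION,
            # Non-empty fake session ID, so a middlebox thinks it is
            # watching a TLS 1.2 session resumption.
            "legacy_session_id": os.urandom(32),
            # The real version hides in here; old middleboxes never parse it.
            "supported_versions": {SUPPORTED_VERSIONS_EXT: [TLS13]},
            # Both sides also emit a dummy ChangeCipherSpec record that a
            # genuine TLS 1.3 peer silently ignores.
            "dummy_change_cipher_spec": bytes.fromhex("140303000101"),
        }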
I think working around it in the short term (so that TLS 1.3 can start improving our protections NOW), and separately getting middleboxes to upgrade over a longer period of time, is a good plan.
(I guess I should clarify that my argument is not "SSL inspection is bad and therefore TLS 1.3 is bad". My employer monitors all my SSL traffic from work-owned machines, and could do so very easily with a browser extension or something instead of SSL inspection if the protocol somehow made middleboxes difficult. The proxies are currently designed to let a few websites through without interception, which is the specific thing there was debate about, but I don't think there's any real reason my employer wouldn't just monitor everything if they had to. So I don't think there's anything the TLS spec could have done to make my employer more or less likely to monitor traffic, and I don't think this affects the argument about whether the monitoring itself is immoral.)
A browser extension would not do nearly what a proper web proxy can do.
I am a critic of the surveillance state and a donating member of the EFF, but web proxies are not evil in themselves, any more than a camera is evil because it can be used to build a panopticon, or cookies are evil because they can be used for cross-site tracking.
I'm not a big fan of transparent proxies, but in the case of employers there's at least a reasonable argument that the computers are not the user's, they are the company's. There is NO good argument for middleboxes that don't implement the specification correctly and thus make it very hard to upgrade protocols.
They'd turn it off if the alternative was to be offline because the middlebox crashes a second after power on.
The nature of TLS is that nobody in the middle is supposed to be able to read what's encrypted, and that either end can detect if traffic they receive has been modified in transit.
Of course this is incompatible with corporate (especially financial) networks, where employers have legal/compliance/nosiness reasons to want to make sure their employees aren't doing things they shouldn't (sending corporate secrets to the NYT, having their corporate knowledge siphoned out by malware, spending all their time on Facebook, etc.), and with repressive regimes (where, for example, Erdogan wants to know if you're a member of the Gulen movement). Bluecoat and Cisco, for example, do a lot of business in the Middle East and China.
They'll typically act as a man in the middle for TLS traffic, re-encrypting traffic using either a compliant root CA or a corporate CA that all employee machines have been made to trust. The nature of Enterprise Software is that these devices will have poor, incomplete or outdated implementations of TLS with the result that, for example, a lot of them didn't know what to do with draft TLS v1.3 traffic and just dropped it on the floor as invalid.
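You can actually see that re-encryption from the client side with nothing but the standard library: connect out and look at who issued the leaf certificate. A sketch (the hostname is just an example):

    import socket
    import ssl

    def leaf_issuer(hostname, port=443):
        """Return the issuer of the certificate the server actually presents."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        return dict(rdn[0] for rdn in cert["issuer"])

    # On a clean network this prints a public CA; behind an intercepting
    # proxy it prints the corporate CA that re-signed the certificate.
    print(leaf_issuer("www.google.com"))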
In essence: if corporate wants to spy on me, they shall provide me with devices that are set up that way, and with that I know I can't have any expectation of privacy. If I'm using my own device, I want TLS to actually provide security and confidentiality. If that means denying my personal devices access to the corporate network, so be it.
> Our IT team is going to have to do some work and expose that traffic is sent unencrypted (or differently encrypted) to the final hop on the network.
I get why IT teams don't want to, especially for huge organizations, but "lol we can MITM TLS" is not acceptable. Terminate the exterior TLS at the network, inspect/log it, then re-encrypt ("re-black") it to the end device.
Also, I want to take a second to complain about locally installed certs. I know we all know how they work and everything, but most users don't. The fact that I can see a green lock for google.com when Eve has put a local cert on my machine is really stupid.
People check their personal email on their work computers, that's just a fact of life. Their personal email password shouldn't be in some network log somewhere without them understanding that this is possible. There should be a different colour for when trust is different. A blue lock, for example. I know IT has root anyway, but most companies don't install key loggers on their gear or do obviously shady shit, and I think it would benefit users to have a clear demarcation about who they are trusting.
And maybe it should be country-smart too. An HTTPS cert originating from China and being used for a Chinese service is green, but as the internet balkanizes we may want to be a little smarter. Not sure on that though.
Looking at the Levandowski stuff, that's what Google does. And for obvious reasons. Just logging internet traffic isn't very useful at all for compliance.
Well, that's their dumbass fault.
At our company, there is a giant warning on the login screen that "You are using a corporate asset and there is NO expectation of privacy."
Now, as to the safety of browser locks... sounds great until the company patches their Firefox distribution.
That's how middleboxes work already. How could they work any other way? If you don't install the root certificate on the client machine, TLS fails, as it should. You'd need some generally accepted CA to issue a wildcard certificate for the whole internet if that weren't the case. Has anyone actually done this?
As you say, there are other issues in a BYOD environment but is that common in an environment that uses middleboxes anyway?
The problem with middleboxes and TLS v1.3 wasn't that IT can see you're on Google, it's that you go to Google and get a grey page that says LOL_SSL_ERROR.
More like, everybody is offline as the middlebox crashed. Which is how it should be: Garbage in the network path should be highlighted and removed.
Do you know what traffic your IoT "smart" things, and even your PCs and mobile devices, are sending and receiving?
The anti-Gülenist cleanup has been completed. Now it is time to appeal and correct the mistakes.
TLS offers end-to-end encryption, but a true proxy splits things so that you've got two end-to-end channels with the proxy in the middle. This is fine: Alice talks to the Proxy, the Proxy talks to Bob. All fine.
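A true proxy in that sense is surprisingly little code. A hedged sketch (the upstream host and the cert/key file names are placeholders, and the client must already trust the proxy's CA for this to work):

    import socket
    import ssl
    import threading

    UPSTREAM = ("example.com", 443)  # placeholder: the real destination

    def pump(src, dst):
        """Copy bytes one way until EOF. Plaintext is visible right here;
        this is where a real proxy would inspect, log, or filter."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client_tls):
        # Second channel: Proxy <-> Bob, a normal TLS client connection
        # with full certificate validation.
        ctx = ssl.create_default_context()
        server_tls = ctx.wrap_socket(socket.create_connection(UPSTREAM),
                                     server_hostname=UPSTREAM[0])
        threading.Thread(target=pump, args=(client_tls, server_tls)).start()
        pump(server_tls, client_tls)

    # First channel: Alice <-> Proxy, using a certificate issued by the
    # CA that Alice's machine has been made to trust.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # placeholders

    listener = socket.socket()
    listener.bind(("127.0.0.1", 8443))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        tls_conn = server_ctx.wrap_socket(conn, server_side=True)
        threading.Thread(target=handle, args=(tls_conn,)).start()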
But of course proxying sixty employees watching funny cat videos on YouTube needs lots of CPU power, which costs money, which means less profit for the middlebox vendors.
So middleboxes pull all sorts of shenanigans rather than actually act as a proxy. For example, one trick is to pass things through at first, eavesdropping but not actually proxying; then, if the middlebox decides things are OK, just stop eavesdropping and use a hardware fast path; otherwise, drop the connection (wasting resources for both server and client) and, when it's retried, intercept the retry...
If you imagine this is a Black Hat talk you can probably fill in for yourself all sorts of hilarious things the bad guys might do here with this not-a-proxy.
So that's why this is a terrible outcome from a security point of view. But from a protocol agility point of view it's worse. The middleboxes make all sorts of arbitrary assumptions in their frantic attempts to avoid doing actual work and a working protocol needs to cope with every one of those assumptions or break those middleboxes.
Several big famous middlebox vendors shipped new firmware to make them "ready" for TLS 1.3. What that firmware does is make them TLS 1.2 full proxies. This has two consequences:
1. They now work. Both halves of the proxied connection are TLS 1.2, which works just fine as described above. As a full proxy, when a client says "I want TLS 1.3" they can honestly say "Alas, I only know TLS 1.2", and so there are no new considerations. So long as they proxy everything.
2. They have intolerably poor performance and will be quietly disabled outside of high-threat environments. These boxes don't have anywhere near the brute power needed to really do a TLS 1.2 proxy at line rate. Today's web is mostly encrypted, so everything slows to a crawl. The vendor upgrade docs explain how to "mitigate" this slowness by basically disabling the system.
1. What do you mean that they overcome it?
2. Are middleboxes still possible, and how would these be implemented? I assume that intra-network, by signing local certificates, one can for instance do this for company employees. But if I access a website over an encrypted connection, is there any way it could be routed through a middlebox without my being able to know?
Given that you'd be insane not to run application-level connection encryption for malware communication, and I don't see any fundamental reason you couldn't duplicate a TLS-esque handshake at a higher level (reimplementing cert chaining, e.g. via web of trust), what's there for the boxes to inspect that's not already encrypted? Setup negotiation?
Or do so few legitimate applications encrypt on top of TLS that the traffic stands out?
Some middleboxes act as a man-in-the-middle and inspect the plaintext; this is how those middleboxes were implemented in the past, and still how they are going to be implemented in the future.
The middlebox disaster I mentioned is not necessarily about those boxes, but about other boxes which do not decrypt traffic, but instead try to inspect visible fields and metadata in packets and protocols. One basic example is inspecting the SNI field to learn the domain name you are visiting.
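To get a feel for how shallow that kind of inspection is, here is a sketch of pulling the SNI hostname out of a raw ClientHello (offsets follow the TLS record layout; real code needs bounds checks everywhere):

    def extract_sni(record):
        """Return the SNI hostname from a raw ClientHello record, or None."""
        if record[0] != 0x16 or record[5] != 0x01:  # handshake / ClientHello
            return None
        pos = 5 + 4 + 2 + 32                        # headers, version, random
        pos += 1 + record[pos]                      # legacy_session_id
        pos += 2 + int.from_bytes(record[pos:pos + 2], "big")  # cipher_suites
        pos += 1 + record[pos]                      # compression_methods
        end = pos + 2 + int.from_bytes(record[pos:pos + 2], "big")
        pos += 2
        while pos + 4 <= end:                       # walk the extensions
            ext_type = int.from_bytes(record[pos:pos + 2], "big")
            ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
            if ext_type == 0:                       # server_name extension
                name_len = int.from_bytes(record[pos + 7:pos + 9], "big")
                return record[pos + 9:pos + 9 + name_len].decode("ascii")
            pos += 4 + ext_len
        return None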
Many of these middleboxes are poorly implemented, with hardcoded assumptions (compatible with existing implementations, but not fully in compliance with the standard). They also like to reject or drop the connection if they cannot successfully parse the protocol as expected.
After TLSv1.3 is released, guess what? Their assumptions don't work anymore, and they start rejecting every TLSv1.3 connection. The final solution is some tweaks to the wire format, to make it seem to be TLSv1.2 from the middleboxes' perspective.
In fact, this is not really a new problem - many firewalls are designed to reject things they don't understand, because it was believed to be defense-in-depth. This was the problem that almost killed TCP ECN 20 years ago, and it was officially condemned by the IETF (https://tools.ietf.org/html/rfc3360), but history repeats...
This blogpost from CloudFlare gives an extensive introduction to this issue: https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet
IIRC there was another middlebox kerfuffle related to some vendor complaining that removing non-PFS ciphers(?) would destroy their business. IIRC the initial response was to shove it, and for legitimate purposes like employee monitoring in enterprises there are ways around it (installing middlebox certs on managed clients?). Whatever happened to this (or was this even what the argument was about)? Did the IETF bow to these demands or not?
I'm asking about the effectiveness of forced TLS-stripping (e.g. via root certs) against malware encryption.
If you're layering app-level encryption on top of a connection... what are the boxes inspecting? Or are malware authors just really bad at network encryption and don't do any complex stuff?
These are not new.
Particularly the second is a complicated story. TLS always had downgrade protection, but at some point browsers had to decide to break it themselves: there were too many shitty devices out there that didn't implement the TLS handshake correctly, so on a failed handshake browsers would simply retry with a lower protocol version.
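In pseudo-Python, that fallback dance and the later TLS_FALLBACK_SCSV patch-up (RFC 7507) look roughly like this. Only the SCSV constant is real; try_handshake and HandshakeFailure are hypothetical stand-ins:

    TLS_FALLBACK_SCSV = 0x5600  # real code point, from RFC 7507

    class HandshakeFailure(Exception):
        pass

    def connect_with_fallback(try_handshake, host, versions=("1.2", "1.1", "1.0")):
        for i, version in enumerate(versions):
            # Any retry below our best version advertises the SCSV; a server
            # that supports a higher version then aborts, so an attacker can
            # no longer fake a failure to force a downgrade.
            extra = [TLS_FALLBACK_SCSV] if i > 0 else []
            try:
                return try_handshake(host, max_version=version, extra_suites=extra)
            except HandshakeFailure:
                # The original sin: browsers couldn't tell a broken middlebox
                # from an active attack, so they just retried one version lower.
                continue
        raise ConnectionError("all protocol versions failed")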
2. The intent is that correct servers will never do 0-RTT unless there's some explicit buy-in from the application. Allowing 0-RTT unconditionally is almost invariably going to end badly. A static web site (all idempotent GETs) ought to be OK, and making it safe for toy (single-server, transactional) apps isn't so hard, but doing 0-RTT safely for serious stuff is hard; most people just shouldn't.
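That buy-in can be as small as a server-side gate deciding which requests may be answered from early data. A hypothetical sketch (the allowlisted paths are made up; rejected requests would get a 425 Too Early per RFC 8470 and be retried after the full handshake):

    # Hypothetical policy hook a server consults before serving a request
    # that arrived as 0-RTT early data.
    IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS"}

    def may_serve_from_early_data(method, path):
        if method not in IDEMPOTENT_METHODS:
            return False
        # Even "idempotent" GETs can have side effects (logout links,
        # "?action=delete" endpoints), so an explicit allowlist beats a
        # denylist.
        return path.startswith(("/static/", "/assets/"))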
All of this is possible and should be done, but I doubt it will make protocol 3.
To ensure your keys are indeed set up by the trusted party, you need to get that signed by the server cert, but that is already part of the TLS protocol.
It seems I'm missing something.
Specifying which extensions are encrypted is part of the ClientHello. The EncryptedExtensions message is sent by the server directly after the ServerHello.
With that said, I don't think we'll see implementations of encrypted SNI in the wild any time soon. From the IETF draft on encrypted SNI:

> DISCLAIMER: This is very early a work-in-progress design and has not yet seen significant (or really any) security analysis. It should not be used as a basis for building production systems.
The SNI that most people care about encrypting is the one sent from the user to the server.
TLS 1.3 is the new secure communication protocol that should already be with us. One of its new features is 0-RTT (Zero Round Trip Time Resumption), which could potentially allow replay attacks. This is a known issue acknowledged by the TLS 1.3 specification: the protocol does not provide replay protection for 0-RTT data, but proposes countermeasures that need to be implemented at other layers, not at the protocol level. Therefore, applications deployed with TLS 1.3 support could end up exposed to replay attacks, depending on the implementation of those protections.
This talk will describe the technical details of the TLS 1.3 0-RTT feature and its associated risks. It will include Proofs of Concept (PoCs) showing real-world replay attacks against TLS 1.3 libraries and browsers. Finally, potential solutions and mitigation controls will be discussed that help prevent those attacks when deploying software using a library with TLS 1.3 support.
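The core of such a PoC is almost embarrassingly small: record the client's first flight (ClientHello plus early data) and resend it verbatim on a fresh connection. A sketch, assuming you already captured those bytes off the wire somehow:

    import socket

    def replay_first_flight(host, port, captured_flight):
        """Resend a recorded ClientHello + 0-RTT early data verbatim. The
        attacker never decrypts anything: if the server accepts the early
        data (no single-use tickets, no deduplication), whatever request
        it contains simply executes a second time."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall(captured_flight)
            sock.settimeout(2.0)
            chunks = []
            try:
                while True:
                    data = sock.recv(4096)
                    if not data:
                        break
                    chunks.append(data)
            except socket.timeout:
                pass
        return b"".join(chunks)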
So even though popular clients, services, and libraries have had the last draft for months, this will, as I understand it, require a patch to do actual bona fide TLS 1.3.