The Transport Layer Security (TLS) Protocol Version 1.3 (ietf.org)
255 points by dochtman 6 months ago | 67 comments



New features introduced in TLS 1.3 include:

  * lower latency (0-RTT: zero round-trip support).
  * removes insecure encryption primitives (SHA1, MD5, RC4).
  * elliptic curve support.
  * downgrade protection.
Some interesting reads: https://blog.cloudflare.com/introducing-0-rtt/ and https://www.cloudflare.com/learning-resources/tls-1-3/ .
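If you're curious what your own stack negotiates, here's a quick probe (a minimal sketch; assumes Python 3.7+ linked against OpenSSL 1.1.1, and the host is an arbitrary choice):

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("www.cloudflare.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.cloudflare.com") as tls:
            print(tls.version())  # 'TLSv1.3' if both ends support it, else e.g. 'TLSv1.2'
            print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)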


* removes insecure encryption primitives (SHA1, MD5, RC4).

=> also removes all key exchange algos without forward secrecy.

* elliptic curve support.

=> and new X25519/Ed25519 elliptic curve support.

I think most security people have been aware of these features for a while. The final accepted standardization is what matters now, after 28 drafts, and the epic success of overcoming the middlebox disaster...


>and the epic success of overcoming the middlebox disaster...

This is one of those cases in which it would have been better to just break them. It isn't TLS's job to explicitly support evil.

These middleboxes are cancer. There's no way to morally defend their presence. Breaking them (and thus effectively forcing them out of the way) would have been the better approach.


I personally believe that not breaking these middleboxes is a necessary evil/compromise to make TLSv1.3 universally accepted. The net gain is a huge positive.

> Breaking them (and thus effectively forcing them out of the way) would have been the better approach.

This is exactly what the security people are going to do after TLSv1.3 is introduced. Google Chrome is going to implement the so-called GREASE protocol (https://tools.ietf.org/html/draft-ietf-tls-grease-01), which actively tries to negotiate a connection with random, non-existent, but standard-compliant values in various fields of the protocol, to actively break the (future) broken middleboxes early on. Revenge time.
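(For reference, the reserved GREASE code points are easy to recognize; a minimal sketch, with the values taken from the draft:)

    # GREASE values for cipher suites / extension types per draft-ietf-tls-grease:
    # both bytes equal, low nibble always 0xA -> 0x0A0A, 0x1A1A, ..., 0xFAFA
    GREASE_VALUES = {(n << 12) | 0x0A00 | (n << 4) | 0x0A for n in range(16)}

    def is_grease(code_point: int) -> bool:
        """True if a 16-bit cipher-suite/extension code point is a GREASE value."""
        return code_point in GREASE_VALUES

    assert is_grease(0x0A0A)
    assert not is_grease(0x1301)  # TLS_AES_128_GCM_SHA256, a real TLS 1.3 suite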


I think GREASE has an excellent chance of long-term success, and if it works, other protocol implementations should start doing the same thing. Actively requiring implementations to support future compatibility looks like an excellent plan.

In case people are curious what TLS 1.3 did to support broken middleboxes, a lot of information is in "D.4. Middlebox Compatibility Mode". Here's the text:

> Field measurements [Ben17a] [Ben17b] [Res17a] [Res17b] have found that a significant number of middleboxes misbehave when a TLS client/server pair negotiates TLS 1.3. Implementations can increase the chance of making connections through those middleboxes by making the TLS 1.3 handshake look more like a TLS 1.2 handshake...

I think working around it in the short term (so that TLS 1.3 can start improving our protections NOW), and separately getting middleboxes to upgrade over a longer period of time, is a good plan.
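Concretely, D.4's "look more like TLS 1.2" boils down to a handful of tricks. A rough inventory (the field values are from RFC 8446; the dict itself is just illustrative):

    # Sketch of TLS 1.3's middlebox compatibility mode (RFC 8446, D.4):
    compat_mode = {
        "legacy_version": 0x0303,           # ClientHello claims "TLS 1.2" on the wire
        "legacy_session_id": b"\x2a" * 32,  # non-empty (random in practice), as if
                                            # resuming a cached TLS 1.2 session
        "supported_versions": [0x0304],     # the real version hides in an extension
        "dummy_change_cipher_spec": True,   # a no-op CCS message to placate inspectors
    }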


BRB, telling the compliance team that SSL inspection is morally indefensible. I'm sure they'll be happy to turn it off.


"We should continue doing immoral things because there's no way we'll ever convince corporate and governmental power to let us stop doing immoral things" is ... probably not an incorrect argument, but a very sad one.


SSL inspection isn't immoral. That's an uncharacteristically silly argument.


It's a tool, but it is (in a small way) a normalization of mass surveillance.

(I guess I should clarify that my argument is not "SSL inspection is bad and therefore TLS 1.3 is bad" - my employer monitors all my SSL traffic from work-owned machines, and could do so very easily with a browser extension or something instead of SSL inspection if the protocol somehow made middleboxes difficult. The proxies are currently designed to permit a few websites through without interception, which is the specific thing that there was debate about doing, but I don't think there's any real reason why my employer wouldn't just monitor everything if they had to. So I don't think there's anything the TLS spec could have done to make my employer more or less likely to monitor traffic, and I don't think that this affects the argument about whether the monitoring itself is immoral.)


A company monitoring its assets for proper use and the SECURITY and PRIVACY of its customers is not immoral.

A browser extension would not do nearly what a proper web proxy can do.

I am a critic of the surveillance state and a donating member of the EFF, but web proxies are not evil in themselves, any more than a camera is evil because it can be used to build a panopticon, or cookies are evil because they can be used for cross-site tracking.


This is approximately where I land on this. The privacy of employees takes a back seat, way in the back, behind the luggage, to the privacy of the information customers entrust to the company. So, I'm always a bit itchy about moralizing against employee surveillance (in technology I mean, not, like, whether Walmart shift workers take too many breaks!).


The "middlebox problem" being discussed isn't the ability to install CAs on clients - the problem is that middleboxes don't implement the spec properly and prevent upgrading the protocol in the obvious way.

I'm not a big fan of transparent proxies, but in the case of employers there's at least a reasonable argument that "the computers are not the user's, they are the company's". There is NO good argument for middleboxes that don't implement the specification correctly and thus make it very hard to upgrade protocols.


Whoah, sorry, don't get me wrong: TLS 1.3 and the eradication of RSA is an unalloyed good thing, and broken middleboxes are bad.


TLS 1.3 doesn't eradicate RSA; it just says you mustn't use it for key agreement. I'm guessing you knew that, but just to be clear for anyone else reading. If your focus is getting rid of RSA entirely, TLS 1.3 may actually even allow it to stick around a bit longer by reducing how nervous we are about it. If you care primarily about Forward Secrecy, then sure, problem fixed.


'tptacek meant "RSA ciphersuites", not "RSA trapdoor function": the cipher suites that begin with TLS_RSA_* used RSA encryption to pass key material from the client to the server, encrypted under the server's long-term key (the key from the certificate).
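To make the distinction concrete (the suite names below are real IANA registrations; the helper is just a sketch):

    # TLS 1.2 suite where RSA does the key exchange - the part TLS 1.3 killed:
    rsa_key_exchange = "TLS_RSA_WITH_AES_128_GCM_SHA256"       # no forward secrecy
    # TLS 1.2 suite where RSA only signs and ECDHE does the exchange - still allowed:
    rsa_signature    = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
    # TLS 1.3 suites don't name a key exchange at all; it's always (EC)DHE:
    tls13_suite      = "TLS_AES_128_GCM_SHA256"

    def is_rsa_key_exchange(suite: str) -> bool:
        # 1.2-style TLS_RSA_* suites ship the secret encrypted under the cert key.
        return suite.startswith("TLS_RSA_")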


>I'm sure they'll be happy to turn it off.

They'd turn it off if the alternative was to be offline because the middlebox crashes a second after power on.


Let them spend money on new middleboxes, if they care so much.


Can you expand on what middleboxes are?


https://en.wikipedia.org/wiki/Middlebox

The nature of TLS is that nobody in the middle is supposed to be able to read what's encrypted, and that either end can detect if traffic they receive has been modified in transit.

Of course this is incompatible with corporate (especially financial) networks, where employers have legal/compliance/nosiness reasons to want to make sure their employees aren't doing things they shouldn't (sending corporate secrets to the NYT, having their corporate knowledge siphoned out by malware, spending all their time on Facebook, etc.), and with repressive regimes (where, for example, Erdogan wants to know if you're a member of the Gulen movement). Bluecoat and Cisco, for example, do a lot of business in the Middle East and China.

They'll typically act as a man in the middle for TLS traffic, re-encrypting traffic using either a compliant root CA or a corporate CA that all employee machines have been made to trust. The nature of Enterprise Software is that these devices will have poor, incomplete or outdated implementations of TLS with the result that, for example, a lot of them didn't know what to do with draft TLS v1.3 traffic and just dropped it on the floor as invalid.


I'm not buying the argument that middleboxes are incompatible with corporate networks. The IT staff can install their own root certificate on all machines, and then they can decrypt all the traffic they want. What about HSTS? IT can disable that (they have full access to all machines anyways). What about personal devices? Those should connect to a separate network.

In essence: if corporate wants to spy on me, they shall provide me with devices that are set up that way, and with that I know I can't have any expectation of privacy. If I'm using my own device, I want TLS to actually provide security and confidentiality. If that means denying my personal devices access to the corporate network, so be it.


Yeah it is a super dumb excuse. The real reason is this:

> Our IT team is going to have to do some work and expose that traffic is sent unencrypted (or differently encrypted) to the final hop on the network.

I get why IT teams don't want to, especially for huge organizations, but "lol we can MITM TLS" is not acceptable. Terminate the exterior TLS at the network, inspect / log it, then re-black it to the end device.

Also, I want to take a second to complain about locally installed certs. I know we all know how they work and everything, but most users don't. The fact that I can see a green lock for google.com when Eve has put a local cert on my machine is really stupid.

People check their personal email on their work computers, that's just a fact of life. Their personal email password shouldn't be in some network log somewhere without them understanding that this is possible. There should be a different colour for when trust is different. A blue lock, for example. I know IT has root anyway, but most companies don't install key loggers on their gear or do obviously shady shit, and I think it would benefit users to have a clear demarcation about who they are trusting.


I like this idea of different signals for different levels of trust. In practice how would it be best implemented though? Some sort of entry in DNS that says "we have all certs signed by one of these two CAs" and then clients could see if a cert is valid but chains up to a different root and show "this is secure, but not in the way you think"?


Basically, yeah.

And maybe it should be country-smart too. An HTTPS cert originating from China and being used for a Chinese service is green, but as the internet balkanizes we may want to be a little smarter. Not sure on that though.


Well, they should install keyloggers. Okay, not keyloggers per se, since they are pretty useless, but if you control the software on the device, you can monitor internet activity in a million and one ways, none of which involve breaking the internet.

Looking at the Levandowski stuff, that's what Google does. And for obvious reasons. Just logging internet traffic isn't very useful at all for compliance.


It isn't clear to me that that is what Google does, and installing a keylogger without informing employees is shady, no matter how you dice it. The network makes sense, but people "think out loud" while they revise draft emails. They change their mind on resigning or sending a harsh email. Big difference.


>People check their personal email on their work computers, that's just a fact of life.

Well, that's their dumbass fault.

At our company, there is a giant warning on the login screen that "You are using a corporate asset and there is NO expectation of privacy."

Now, as to the safety of browser locks... sounds great until the company patches their Firefox distribution.


>If I'm using my own device, I want TLS to actually provide security and confidentiality. If that means denying my personal devices access to the corporate network, so be it.

That's how middleboxes work already. How could they work any other way? If you don't install the root certificate on the client machine TLS fails as it should. You'd need some generally accepted CA to issue a wildcard certificate for the whole internet if that wasn't the case. Has anyone actually done this?
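A minimal sketch of that failure mode (the hostname and the presence of an intercepting box are assumptions for illustration; Python 3.7+):

    import socket, ssl

    ctx = ssl.create_default_context()  # trusts only the regular root store
    try:
        # Suppose a middlebox re-signs traffic with a CA we never installed:
        with socket.create_connection(("intranet.example.com", 443)) as sock:
            ctx.wrap_socket(sock, server_hostname="intranet.example.com")
    except ssl.SSLCertVerificationError as err:
        print("TLS fails, as it should:", err.verify_message)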


As I recall, there was a CA that at one point issued a cert for *.com, either after an external compromise or via a rogue employee. But it is definitely not something any respected CA would do willfully.


And that's a quick way to get the big root stores to start looking very, very closely at you.


I didn't mean that middleboxes are incompatible with corporate networks, but that TLS's design of "your traffic is encrypted and nobody between Alice and Bob can read what they say" is. Middleboxes are a workaround for that.

As you say, there are other issues in a BYOD environment but is that common in an environment that uses middleboxes anyway?

The problem with middleboxes and TLS v1.3 wasn't that IT can see you're on Google, it's that you go to Google and get a grey page that says LOL_SSL_ERROR.


>The problem with middleboxes and TLS v1.3 wasn't that IT can see you're on Google, it's that you go to Google and get a grey page that says LOL_SSL_ERROR.

More like, everybody is offline as the middlebox crashed. Which is how it should be: Garbage in the network path should be highlighted and removed.


It’s not that they can’t do it correctly but that enterprise IT controls a lot of systems and operates on near-geologic timescales. If you want TLS 1.3 deployed you either need to convince a ton of vendors to reverse their custom of charging $$$ to fix their buggy appliances or wait a decade for everyone to finish buying newer less-slow, less-buggy appliances.


Third option: You deploy TLS 1.3, the middleboxes break, but that's neither your fault nor your problem. Hopefully their makers or users will learn from this and stop making (or buying) broken garbage. All major browser vendors appear to be involved with TLS 1.3.


Exactly: why should the whole web suffer just because of the shitty internal practices of some large companies? If their trashcan-ready hardware is breaking TLS, they can deal with the consequences.


Everyone seems to look at this from the "large corporations IT" and "mass surveillance" perspective, but let's not forget that everyone should also have the right to inspect and modify traffic on the networks and devices they own. TLS MITM is not necessarily a bad thing.

Do you know what traffic your IoT "smart" things, and even your PCs and mobile devices, are sending and receiving?

https://news.ycombinator.com/item?id=6759426


> where, for example, Erdogan wants to know if you're a member of the Gulen movement

Anti-Gülenist cleanup has been completed. It is now time to appeal and correct the mistakes.


Lots of random routers on the internet (e.g. at your ISP, at their transport partners, etc.) were doing all sorts of packet inspection, parsing the plain parts of TLS messages, etc., and then dropping things they thought were invalid (including things as simple as seeing a version number they didn't recognize and dropping the connection). They made TLS 1.3 look sort of like TLS 1.2 resumption, which made the protocol much less elegant, but made crappy boxes that had made bad assumptions about what the protocol should look like happy.
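The version-intolerance half of that was usually something about as blunt as this (illustrative logic, not any vendor's actual code):

    # The kind of brittle check that broke TLS 1.3's obvious upgrade path:
    KNOWN_VERSIONS = {0x0301, 0x0302, 0x0303}  # TLS 1.0, 1.1, 1.2

    def middlebox_allows(hello_version: int) -> bool:
        # An honest 0x0304 (TLS 1.3) ClientHello just gets dropped on the floor.
        return hello_version in KNOWN_VERSIONS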


So is that no longer true now, and TLS 1.3 looks like TLS 1.3, not like something wrapped inside TLS 1.2? That would be good; it was very inelegant.


The end result is a hack. The on-wire format of TLSv1.3 is tweaked to make the TLS 1.3 handshake resemble a TLS 1.2 session resumption request, which is just enough to let the connection pass through broken middleboxes.


"Middlebox" is the IETF term for "proxy".


If only. The biggest problem is that TLS middleboxes don't act like proxies. Proxies would suck in various ways but they're ultimately just a policy decision.

TLS offers end-to-end encryption but a true proxy splits things so that you've got two end-to-end channels with the proxy in the middle. This is fine, Alice talks to the Proxy, the Proxy talks to Bob. All fine.

But of course proxying sixty employees watching funny cat videos on YouTube needs lots of CPU power, which costs money, which means less profit for the middlebox vendors.

So middleboxes pull all sorts of shenanigans rather than actually act as a proxy. For example, one trick is to pass things through at first, eavesdropping but not actually proxying; then, if the middlebox decides things are OK, just stop eavesdropping and use a hardware fast path; otherwise, drop the connection (wasting resources for both server and client) and, when it's retried, intercept the retry...

If you imagine this is a Black Hat talk you can probably fill in for yourself all sorts of hilarious things the bad guys might do here with this not-a-proxy.

So that's why this is a terrible outcome from a security point of view. But from a protocol agility point of view it's worse. The middleboxes make all sorts of arbitrary assumptions in their frantic attempts to avoid doing actual work and a working protocol needs to cope with every one of those assumptions or break those middleboxes.

Several big famous middlebox vendors shipped new firmware to make them "ready" for TLS 1.3. What that firmware does is make them TLS 1.2 full proxies. This has two consequences:

1. They now work. Both halves of the proxied connection are TLS 1.2, which works just fine as described above. As a full proxy, when a client says "I want TLS 1.3" they can honestly say "Alas, I only know TLS 1.2", so there are no new considerations. So long as they proxy everything.

2. They have intolerably poor performance and will be quietly disabled outside of high-threat environments. These boxes don't have anywhere near the brute power needed to really do a TLS 1.2 proxy at line rate. Today's web is mostly encrypted, so everything slows to a crawl. The vendor upgrade docs explain how to "mitigate" this slowness by basically disabling the system.
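In, say, Python's ssl module, the client-facing half of that "honest" downgrade is literally a one-line version cap (a sketch, not a full proxy):

    import ssl

    # The client-facing side of a full TLS 1.2 proxy (a real one would also
    # load its re-signing certificate into this context):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # "Alas, I only know TLS 1.2"
    # A client offering TLS 1.3 simply never gets it selected: no spec violation,
    # just a (slow, fully proxied) TLS 1.2 connection on each half.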


I would like to ask two basic questions.

1. What do you mean when you say they overcame it?

2. Are middleboxes still possible, and how would they be implemented? I assume that intra-network, by signing local certificates, one can for instance do this for company employees. But if I access a website over an encrypted connection, is there any way that it is routed through a middlebox without my being able to know?


One more basic question -- what are current middleboxes actually doing with their inspection?

Given that you'd be insane not to run application-level connection encryption for malware communication, and I don't see any fundamental reason you couldn't duplicate a TLS-esque handshake at a higher level (reimplementing cert chaining, e.g. via web of trust), what's there for the boxes to inspect that's not already encrypted? Setup negotiation?

Or do so few legitimate applications encrypt on top of TLS that the traffic stands out?


Just read SamWhited's comment.

Some middleboxes act as a man-in-the-middle and inspect the plaintext; that's how those middleboxes were implemented in the past, and it's still how they are going to be implemented in the future.

The middlebox disaster I mentioned is not necessarily about those boxes, but about other boxes which do not decrypt traffic, and instead try to inspect visible fields and metadata in packets and protocols. One basic example is inspecting the SNI field to learn the domain name you are visiting.

Many of these middleboxes are poorly implemented, with hardcoded assumptions (compatible with existing implementations, but not fully in compliance with the standard). They also like to reject or drop the connection if they cannot successfully parse the protocol as expected.

After TLSv1.3 is released, guess what? Their assumptions don't work anymore, and they start rejecting every TLSv1.3 connection. The final solution is some tweaks to the wire format, to make it seem to be TLSv1.2 from the middleboxes' perspective.
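Roughly, the tweak looks like this on the wire (byte values from RFC 8446; illustrative, not a parser):

    record_header  = bytes([0x16, 0x03, 0x01])  # handshake record, legacy record version
    legacy_version = bytes([0x03, 0x03])        # ClientHello still says "TLS 1.2"
    real_version   = bytes([0x03, 0x04])        # TLS 1.3, tucked inside the
                                                # supported_versions extension that
                                                # legacy middleboxes never parse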

In fact, this is not really a new problem: many firewalls are designed to reject things they don't understand, because that was believed to be defense-in-depth. This was the problem that almost killed TCP ECN 20 years ago, and it was officially condemned by the IETF (https://tools.ietf.org/html/rfc3360), but history repeats...

This blogpost from CloudFlare gives an extensive introduction to this issue: https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet


Thanks for the cloudflare link, that explains it pretty well.

IIRC there was another middlebox kerfuffle related to some vendor complaining that removing non-PFS ciphers(?) would destroy their business. IIRC the initial response was to shove it, and for legitimate purposes like employee monitoring in enterprises there are ways around it (installing middlebox certs on managed clients?). Whatever happened to this (or was this even what the argument was about?)? Did the IETF bow to these demands or not?


I understand the non-compliant network appliance issue, and hence the wire hacks for 1.3.

I'm asking about forced TLS-stripping's (e.g. by root certs) effectiveness against malware encryption.

If you're laying app level encryption on top of a connection... what are the boxes inspecting? Or are malware authors just really bad at network encryption and don't do any complex stuff?


> * elliptic curve support.
> * downgrade protection.

These are not new.

Particularly the second is a complicated story. TLS always had downgrade protection, but at some point browsers had to decide to break downgrade protection, because there were too many shitty devices out there that didn't implement the TLS handshake correctly.


TLS 1.3 downgrade protection is cleverer than previous approaches because the downgrade marker is scribbled into a field used for connection setup whenever a TLS 1.3 server ends up not doing TLS 1.3. If an attacker erases the marker, the connection fails. If they don't erase it the downgrade is reported to the client. If the client doesn't know TLS 1.3 the marker means nothing to it.
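The marker itself is just eight fixed bytes at the end of ServerHello.random (values straight from RFC 8446):

    # Downgrade sentinels a TLS 1.3 server writes when it negotiates an older version:
    DOWNGRADE_TLS12       = bytes.fromhex("444f574e47524401")  # "DOWNGRD" + 0x01
    DOWNGRADE_TLS11_BELOW = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00

    def downgrade_flagged(server_random: bytes) -> bool:
        # A 1.3-capable client aborts if a <=1.2 handshake carries the marker.
        return server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11_BELOW)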


Without a round trip, we must be reusing keys, right? So no more ephemeral keys? I know it's optional, but I wonder which will be the default for most off-the-shelf HTTPS servers folks use?


1. Yes, 0-RTT uses a key agreed during a previous TLS session.

2. The intent is that correct servers will never do 0-RTT unless there's some explicit buy-in from the application. Allowing 0-RTT unconditionally is almost invariably going to end badly. A static web site (all idempotent GETs) ought to be OK, and making it safe for toy (single-server, transactional) apps isn't so hard, but doing 0-RTT safely for serious stuff is hard; most people just shouldn't.
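The usual shape of that application-level buy-in, as a hedged sketch (the names are illustrative, not from any particular server):

    # Only let early (0-RTT) data drive requests that are safe to replay:
    IDEMPOTENT = {"GET", "HEAD", "OPTIONS"}

    def accept_request(method: str, arrived_as_early_data: bool) -> bool:
        # A replayed 0-RTT POST could, say, double-charge a card; reject it and
        # make the client retry once the full (1-RTT) handshake has completed.
        return not arrived_as_early_data or method in IDEMPOTENT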



Encrypted SNI probably can't make this version, as it requires a fair bit of new protocol infrastructure to support. The handshake would have to start with a generalized untrusted asymmetric key exchange to transfer the SNI field, then "upgrade" the connection to be trusted, and then retroactively tell the client what the initial "untrusted" key was so that the client can confirm it didn't suffer a man-in-the-middle attack during the SNI transfer phase.

All of this is possible and should be done, but I doubt it will make it into this version of the protocol.


As far as I know, the SNI field is only used by the server to select what cert to use and site to serve. So if someone were to MitM the SNI exchange, presuming they don't have a valid cert, they could only cause the client to receive a different site (if served with a cert valid for both sites). We can presume the MitM doesn't have a valid cert, otherwise they could fully MitM the connection.

To ensure your keys are indeed set up by the trusted party, you need to get that signed by the server cert, but that is already part of the TLS protocol. It seems I'm missing something.


The danger is not that unencrypted SNI exposes clients to additional MitM attacks. It just exposes the client's intended domain to everyone on the connection route. It's an information leak, that's all.


I think the OP I replied to stated that encrypting SNI requires MitM mitigations.


Everyone on the connection route would be “in the middle”. The actors to watch out for are ISPs and CDNs.


It is sad to see that SNI encryption isn't in TLS 1.3. I'd say that unencrypted SNI is the single biggest issue with TLS 1.2; it is a very obvious leak of information. All of the improvements in TLS 1.3 are nice-to-haves, but they are mostly shoring up systems that are already pretty secure, whilst we leave the largest leak open.


In prior TLS versions the server certificate is sent unencrypted, so a passive eavesdropper gets not just the name but a certificate for that name from the server. This is fixed in TLS 1.3.


I only skimmed the RFC, but it seems like the SNI can be sent either in the extensions in the client hello, or later in the encrypted extensions.


Never mind, apparently EncryptedExtensions is only sent by the server to the client, not the other way around.


I think you got it right the first time; server_name refers to SNI in this context: https://tools.ietf.org/html/rfc8446#page-37

Specifying which extensions are encrypted is part of the ClientHello. Encrypted Extensions are supposed to be sent directly after the client receives ServerHello.

With that said, I don't think we'll see implementations of encrypted SNI in the wild any time soon. From the IETF draft on encrypted SNI:

> DISCLAIMER: This is very early a work-in-progress design and has not yet seen significant (or really any) security analysis. It should not be used as a basis for building production systems.


That table is what I based my first post on. However, as far as I can find out, Encrypted Extensions are only ever sent by the server to the client. [1]

The SNI that most people care about encrypting is the one sent from the user to the server.

[1] https://tools.ietf.org/html/rfc8446#page-120


Okay, I see what you mean. Apparently I need to read more, because that completely flies in the face of how I thought this whole thing worked.


Attended this talk yesterday: https://www.defcon.org/html/defcon-26/dc-26-speakers.html#Ga...

TLS 1.3 is the new secure communication protocol that should already be with us. One of its new features is 0-RTT (Zero Round Trip Time Resumption), which could potentially allow replay attacks. This is a known issue acknowledged by the TLS 1.3 specification, as the protocol does not provide replay protections for 0-RTT data, but proposes countermeasures that would need to be implemented on other layers, not at the protocol level. Therefore, applications deployed with TLS 1.3 support could end up exposed to replay attacks depending on the implementation of those protections.

This talk will describe the technical details regarding the TLS 1.3 0-RTT feature and its associated risks. It will include Proof of Concepts (PoC) showing real-world replay attacks against TLS 1.3 libraries and browsers. Finally, potential solutions or mitigation controls would be discussed that will help to prevent those attacks when deploying software using a library with TLS 1.3 support.


Notably, by making this a standard rather than merely the draft we all expected to become the standard (which had been solid for months), the RFC removes all the overrides in the drafts that switch off e.g. downgrade detection. These overrides were needed in drafts to prevent craziness where a draft changes the protocol in an incompatible way and the anti-downgrade logic forbids using TLS 1.2, so then you're screwed.

So even though popular clients, services and libraries have had the last draft for months, this will, as I understand it, require a patch to do actual bona fide TLS 1.3.


Yep. https://rustls.jbp.io/ now supports final TLS1.3 but you'll be hard-pressed to find a library which negotiates TLS1.3 with it. Give it a few weeks.


Is PKI still an optional feature of TLS? Can one still use self-signed x.509 certificates and have key-signing parties?




