A number of months ago I decided to finally begin researching a project I've had on the backburner for ages: a simple domain+IP bandwidth throughput counter for everything within my LAN. Basically I wanted to make a dashboard saying "you exchanged this much data with this many IPs within the last 24h; here's the per-IP and per-domain breakdown; here are the domains each IP served", that sort of thing.
Part of the motivation for this was to track down the headscratch-generating mysterious small amount of "unmetered data" my ISP reports each month (I don't watch TV over the internet and can't think of what else it could be), and the other part of the motivation was to satisfy idle curiosity and observe domain<->IP cohesion/fragmentation over time.
At the exact point I began my research TLS 1.3 had just been approved, and I rapidly realized my nice little idea was going to get majorly stomped on: the inspiration for the project, years ago, was the SNI field itself (hah).
5-10 years ago, I would have been able to route all my traffic through a gateway box, and passively watch/trace (slightly behind realtime) everything via tcpdump or similar, catching DNS lookups and sniffing SNI fields. That would have handled everything.
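That passive approach boils down to a join between sniffed DNS answers and per-IP byte counters. Here's a minimal sketch of the bookkeeping, with all domains and addresses invented; a real tool would sit on tcpdump/libpcap output and would have to track TTLs and multiple A records:

```python
from collections import defaultdict

# Toy model of the passive-gateway idea: sniffed DNS answers give us an
# IP -> domain map, and per-packet byte counts are then attributed to domains.
# All names and addresses below are invented for illustration.

def attribute_traffic(dns_answers, packets):
    """dns_answers: iterable of (domain, ip); packets: iterable of (ip, nbytes)."""
    ip_to_domain = {}
    for domain, ip in dns_answers:
        ip_to_domain[ip] = domain  # last answer wins; real code would track TTLs
    per_domain = defaultdict(int)
    for ip, nbytes in packets:
        per_domain[ip_to_domain.get(ip, "unknown")] += nbytes
    return dict(per_domain)

dns = [("example.com", "192.0.2.10"), ("cdn.example.net", "192.0.2.20")]
pkts = [("192.0.2.10", 1500), ("192.0.2.20", 900), ("192.0.2.10", 600),
        ("198.51.100.5", 400)]  # no DNS answer seen for this IP
print(attribute_traffic(dns, pkts))
```

The "unknown" bucket is exactly the traffic that DNS sniffing alone can't explain, which is where SNI sniffing used to fill the gap.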
Now, with encrypted SNI, and the upcoming DNS-over-HTTPS (seems I need to think ahead :/), it looks like my (now irritatingly stupid-seeming) project will require me to generate a local cert, install it on all my devices, and get a machine beefy enough to keep line-rate speed while doing MITM cert-resigning.
And then, for the fraction of traffic my system reports as "not signed by local cert", I'm just going to have to let that go, or I'll break everything that uses cert pinning.
...which I wouldn't be surprised if some programs eventually use with DoH lookups (using pinned certs fished out of DNSSEC for DoH makes sense).
I'm very very interested to know if there are any alternatives I can use. I realize, for now, that I can sniff for and collect DNS lookup results and associate these with IPs, but when will this break?(!)
Honestly HTTPS and security in general just seems like a gigantic mess. I say this not as a disrespect of the architecture, but more as a "wow, this is just so hard to get right".
TLS 1.3 approval doesn't stomp on SNI; on the contrary, for the first time it actually makes SNI mandatory - in previous versions it was optional, leading to some HTTP implementations still not working with SNI well into the 21st century. I had a corporate Python system I was responsible for that couldn't do SNI even earlier this year.
You probably shouldn't attempt your idea of proxying everything. Proxies are very fragile, and this will probably inadvertently worsen your security, while also introducing weird non-security misbehaviour that's hard to track down. For example Firefox will go "Oh, a MITM proxy :(" and it will switch off all the safety measures built up over years for the public Internet because this isn't the public Internet. In principle if your MITM proxy is enforcing the same or tougher rules, this wouldn't be worse, but let's face it, you're going to cobble together the minimum possible to make it work.
Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up. That's the most cynical reason why there's an impetus to encrypt everything. If the bits are all encrypted gibberish the temptation to meddle with them goes away.
> Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up.
How do you propose we do security otherwise? (Apart from delegating it to other parties so they think about the bits)
The clear and obvious solution is to make developers, managers, executives, and major investors personally responsible for the damage their products do. If Google is going to encrypt everything using pinned certificates, making it impossible for me to implement security features/boundaries of my choice, then they should be 100% liable for all damages that come from the use of their products. Additionally, if an analysis of the problem shows that it is related to aspects of the product that do not directly relate to what the end user believes the purpose of the product to be, then there should also be criminal liability for all parties as well.
I struggle to understand how this is a "clear and obvious" solution. How would you go about executing this? Legislation? The big famous piece of recent tech legislation, GDPR, is, by most accounts, far from "clear and obvious". And given the US's current political environment, I'd struggle to believe any sort of legislation could be passed that is "clear and obvious".
There has to be a better solution than "coordinate the interests, incentives and motivations of millions of users, developers, managers, and executives to pass some legislation that makes the latter group responsible."
"33.12% of traffic went to IPs associated with Google. 26.3% of traffic went to IPs associated with CloudFlare. 25% of traffic went to Cloudfront. Here is the breakdown of the remaining 15.58%..."
<Insert cartoon fist punching noises here>
DO NOT WANT. Sorry for caps, it's a bit loud, but the above ^ is what I explicitly want to _avoid_. And with DoH (DNS-over-HTTPS) avoiding that will be even harder.
> You probably shouldn't attempt your idea of proxying everything.
Gestures at post and underlines angry and irritated parts (not angry at you of course, angry at TLS 1.3)
I don't _want_ to build a proxy. Well, I do; I'd love to make it a firewalling proxy that only lets HTTPS data signed by my cert through (and resigns everything so only my cert is ever used), so it DOES break everything using cert pinning, and I can sniff ALL the data generated by the programs I use (if people argue about hardware ownership, shouldn't they argue even more vehemently about their personal information?!).
But practically speaking, no, I don't want to build a proxy. I want to avoid needing to build a resilient LAN with multiple APs and subnets and firewalling and ugh. (Because I will of course need that to break HTTPS and then use my bank's website/apps without my heart fluttering - or even my Google login, for that matter...)
> Proxies are very fragile, and this will probably inadvertently worsen your security, while also introducing weird non-security misbehaviour that's hard to track down.
What do you mean?
> For example Firefox will go "Oh, a MITM proxy :(" and it will switch off all the safety measures built up over years for the public Internet because this isn't the public Internet.
Really?! Where can I learn more about this? :s
> In principle if your MITM proxy is enforcing the same or tougher rules, this wouldn't be worse, but let's face it, you're going to cobble together the minimum possible to make it work.
In practice you're right. In my head I want to build the best+most secure system available, but if I'm pragmatic, the amount of support out there for doing this on a personal/temporary (aka non-dev/"break the system for a minute because it's being debugged"/SSLKEYLOGFILE) basis is about the same as for GPG-encrypted mail, so I know I'll wear down eventually, and make myself a giant sitting duck. :'(
> Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up. That's the most cynical reason why there's an impetus to encrypt everything. If the bits are all encrypted gibberish the temptation to meddle with them goes away.
Thanks for this. I never looked at it that way, but thinking about all the programs and apps using weird binary formats to scare people away... it makes perfect sense the Internet and HTTPS would work similarly. But I think that's a vulnerability in and of itself, in that it discourages general exploration and progressive investigative analysis by virtue of being so unbelievably multidimensional and hard. For example I look at https://sites.google.com/site/testsitehacking/-36k-google-ap... and wonder.
* The FQDN (minus a trailing dot) of the site must appear either exactly as a SAN dnsName or, with its first label replaced by a single asterisk, as a "wildcard", in the certificate, otherwise the cert is worthless
* The cert should have an Extended Key Usage. The EKU must contain at least 1.3.6.1.5.5.7.3.1, id-kp-serverAuth (meaning this key is to be used to identify a TLS server)
* The certificate mustn't use SHA-1 (or MD5 for that matter) as part of a signature algorithm
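The name-matching rule from the first bullet can be sketched roughly like this. This is a simplified illustration only; real validation per RFC 6125 has more edge cases (e.g. wildcards are forbidden for public suffixes, and a wildcard must not match more than one label):

```python
def hostname_matches(fqdn, san_dns_names):
    """Check whether fqdn (trailing dot stripped) matches any SAN dnsName,
    either exactly or with its first label replaced by a single asterisk
    (e.g. *.example.com). Simplified sketch of RFC 6125 matching."""
    fqdn = fqdn.rstrip(".").lower()
    labels = fqdn.split(".")
    wildcard = "*." + ".".join(labels[1:]) if len(labels) > 1 else None
    for san in san_dns_names:
        san = san.lower()
        if san == fqdn or (wildcard is not None and san == wildcard):
            return True
    return False

assert hostname_matches("www.example.com.", ["*.example.com"])
assert not hostname_matches("a.b.example.com", ["*.example.com"])  # wildcard covers one label only
```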
So how did IPv6 adoption go?
Imagine being in corp information security though. It seems like it won't be long before command-and-control or exfiltration happens via certificate-pinned apps you learn you can't inspect. Corporate networks will have to start banning such apps if they offer no way to inspect them.
Too many people are unaware that their connections to their banks etc from work are insecure because of corporate firewalls and what are (essentially) compromised certificate stores.
You make an outgoing TLS connection, the MITM Proxy Meg intervenes, but you actually wanted MyBank
You->Meg (thinking she's MyBank): Hi, I'm looking for MyBank?
Meg->MyBank: Hi, I'm looking for MyBank?
MyBank->Meg: Yes, we're MyBank <cert for MyBank>
[ Meg inspects the cert ]
Meg->You: Yes, we're MyBank <cert for MyBank>
Now Meg stitches the TCP connection back together so you're talking to MyBank. She no longer eavesdrops.
This works for the business because now they're not paying for a box that can proxy your connection _and_ they're less likely to find that one of their "security" employees emptied your bank account. They feel comfortable that MyBank aren't bad guys, so they aren't scared that you'll, say, exfiltrate stolen corporate secrets via MyBank or download Malware from it.
In TLS 1.3 this doesn't work, because the conversation goes like this:
You->Meg (thinking she's MyBank): Hi, I'm looking for MyBank and I picked 14, and the letter Q
Meg->MyBank: Hi, I'm looking for MyBank and I picked 14, and the letter Q
MyBank->Meg: Cool, I picked a Banana and 209.
<Unintelligible encrypted data> [ Meg can't do anything with this ]
Or Meg could try picking her own Elliptic Curve Diffie Hellman values:
Meg->MyBank: Hi, I'm looking for MyBank and I picked 81, and the letter Z
MyBank->Meg: Cool, I picked a Banana and 209. <We're MyBank - encrypted data>
[ Meg now has a connection to MyBank, but she can't give it you because it doesn't work if you picked different values ]
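The reason Meg's second trick fails is plain Diffie-Hellman: a secret derived from her own "pick" never matches the one you derived. A toy finite-field DH with tiny numbers (real TLS 1.3 uses X25519 or similar groups, not this; the private "picks" echo the dialogue above, the bank's is invented) shows the mismatch:

```python
# Toy Diffie-Hellman mirroring the dialogue: if Meg substitutes her own pick,
# the secret she shares with MyBank differs from anything she could hand you.
p, g = 23, 5  # toy public parameters (absurdly small, for illustration only)

def public(secret):
    return pow(g, secret, p)

def shared(their_public, my_secret):
    return pow(their_public, my_secret, p)

you, meg, bank = 14, 81, 9           # private "picks"; bank's is invented
honest = shared(public(bank), you)   # secret you'd derive talking to MyBank
assert honest == shared(public(you), bank)  # both honest ends agree
meddled = shared(public(bank), meg)  # what Meg gets using her own pick
assert meddled != honest  # Meg can't splice her connection into yours
```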
In the medium term we expect corporations which do this will just clamp everything to TLS 1.2 with full proxying. This will be very expensive, and worse, but well, we did tell you not to do this, so now that's broken you get to keep both halves. Ordinary folks who aren't trying to put middleboxes everywhere get TLS 1.3, with better performance and more security.
So... with TLS 1.3, it won't be possible to play connection-jockey anymore, will it?
(This is why I was saying things were such a mess earlier)
There are (and have been for some time) some known-unmetered sites. (There was a small stink raised where an FTP/HTTP file mirror got removed some years ago.)
Some digging around I did some months ago turned up a (semi-public, as in "obscurely published but uninteresting") list of supposedly-whitelisted IP addresses pointing to school/university backend systems and apparently used by rural schools sitting on (expensive) satellite. Suffice to say that I wasn't able to find anything that could be used as a proxy :P
So no, I don't think that's an easily solvable problem. And yes, I also think it's a giant mess.
Even from a security standpoint it doesn't seem to make sense: we're fortifying the connections themselves against sniffing, but at the same time we force everything to go through centralized cloud services with a growing reputation for being hacked, and also force everyone to blindly trust the programs running on their own devices.
To put on my tinfoil hat, this kind of threat model sometimes seems better designed to protect surveillance capitalism against attacks instead of protecting the users.
Obviously I'm not actually reading all this source code (who has the time?), but I strongly prefer it to be available, so that I have the ability if desired. Also for reasonably popular stuff the community tends to suss out suspicious activity pretty quickly.
I absolutely do not trust proprietary blobs; they don't get to run on any of my real workstations. I might run them on my Windows install (primarily for games) if I must, and the computer I use for work gets to run anything my company declares that I need (thankfully not much), but I certainly don't trust them.
Unless it's ImageMagick, OpenSSH, WordPress, or any number of other things.
You might think so, but not really. Any app you install on Android can watch what you do online in any other app, with no permissions at all.
We also need encrypted DNS for the recursive lookup itself so you can run your own resolver somewhere.
Why not yourself? Your ISP can still see the RR working, of course.
> We also need encrypted DNS for the recursive lookup itself so you can run your own resolver somewhere.
This would indeed be optimal but would require upgrading a significant portion of authoritative name servers, sooo... might take a while.
Well, then what attacker do you defend against if your laptop asks your router via DoT but then the router does an unencrypted recursive lookup anyway?
The encrypted SNI would primarily be useful to make censorship and MITM attacks harder.
Let's say I have an Nginx instance on my server which serves lots of websites, and those websites can only be accessed through HTTPS with SNI, not HTTP.
Now with Encrypted SNI deployed, requests from my clients can still be dispatched to their respective virtual hosts, but any sniffer in the middle of the connection should only be able to see that my clients are accessing my server, not which virtual host.
Am I missing anything? I haven't dug deep into this yet.
The theory is that if you put your server behind a popular CDN, then a state-level attacker is left with little choice but to block the entire server.
Another benefit is that attacks that can observe but not modify traffic will be less able to track what sites you're visiting.
There are risks though:
* It's unclear how resistant CDNs actually are to state-level attackers.
* It's unclear how resistant CDNs are to regular attackers
* Corporations and security-savvy users will find it difficult to control what their workstations can and cannot reach: allowing access to a single cloud-based service may inadvertently allow access to a malicious command-and-control server sharing the CDN.
So if your website serves a page which is 412KB in size, that would be quite easy to fingerprint, especially across a pool of websites; beyond that, it's also quite possible to fingerprint things even further by measuring the number, size and order of secondary requests a page load incurs.
So overall there is little privacy that would be provided by this (again when taking into account the threat model and actors) unless you hide behind 1000s and 1000s of websites on the same network and even then it’s mostly not about privacy but about resilience against MITM and state censorship.
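To make the fingerprinting point concrete, here's a toy sketch: given a precomputed catalogue of page and secondary-resource sizes (all site names and sizes invented), an observer matches an observed encrypted trace within some tolerance:

```python
# Toy size-fingerprinting: even with the hostname hidden, matching observed
# (page, secondary-resource) transfer sizes against a precomputed catalogue
# can identify a site. All entries below are invented for illustration.
catalogue = {
    "site-a.example": (412_000, [30_100, 88_400, 12_000]),
    "site-b.example": (298_500, [30_100, 51_200]),
    "site-c.example": (412_000, [29_000, 88_400]),
}

def identify(observed_page, observed_resources, tolerance=512):
    """Return catalogue sites whose size profile matches the observed trace."""
    close = lambda a, b: abs(a - b) <= tolerance
    hits = []
    for site, (page, resources) in catalogue.items():
        if close(observed_page, page) and len(resources) == len(observed_resources) \
           and all(close(a, b) for a, b in zip(sorted(resources), sorted(observed_resources))):
            hits.append(site)
    return hits

print(identify(412_100, [88_300, 30_200, 11_900]))
```

Real attacks are statistical rather than exact-match, and padding schemes push back against this, but the principle is the same: sizes leak.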
The entire encryption stack can easily be opened up for inspection and manipulation if all parties agree this is a good idea.
Encryption where the user does not control either of the endpoints stacks the odds in favor of developers and against users.
I agree, however, that the situation for the web (e.g. everything that lives inside a browser) is still pretty good, thanks to the ability to add custom root CAs and to generally use the browser's debugging and inspection tools.
However, if you try to find out what exactly a mobile app, a game console or an IoT device is doing, you soon find yourself out of luck even if you're the owner of those things.
If you don't trust your children to use the internet responsibly, don't let them use it. Or let them use it but only under your supervision. If you let them go wild but put up filters, they will find a way around, one way or another, and at that point the princess is in another castle.
I'm all for having discussions, but if you think the dichotomy should be "don't use the Internet at ALL" or "Talk to them about it and hope they're mature enough to avoid getting rick-rolled into rotten.com" then I heartily disagree.
And the trick with local CA is awesome!
Which should be in ntpsec eventually.
Who does this protect from?
ISPs that don't want to give up spying obviously are monopolies, have enough power, and can block DNS-over-HTTPS/TLS and all 3rd-party resolvers. So can censors. State actors can intercept DNS traffic even more easily, since DoH/DoT make it more centralized. The IP address itself leaks so much information that cleartext SNI might not even be needed after all (meaning you can still pinpoint a domain with high probability just by IP address).
Maybe this can help make collateral freedom a bit easier. But it's already possible to achieve that with freshly generated throwaway domains that aren't known ahead of time; they are cheap compared to CDN/cloud costs. And collateral freedom doesn't really work against censorship all that well on its own, as centralization makes it easy to pressure CDNs and clouds into denying service to dissidents. Not enough collateral damage.
Too many big players are behind it for ISPs to be able to get away with that strategy. If a battle started then Google, Cloudflare, and others could simply start hosting DNS-over-HTTPS directly alongside all of their major services. To most customers, 90% of "the internet" is the services from these providers.
"So can censors. State actors..."
There is never going to be a surefire way to subvert a large government that controls the infrastructure you are trying to sneak through. Even if there were, the government would still be in control of the data at any companies that operate inside the country. It's not "outsmart China or bust".
Rather than dismissing the solution as useless because it doesn't provide the ultimate privacy maybe it's more worthwhile to look at what it can do and how other changes to TLS and other protocols can continue to make things harder to snoop.
While we're at it: what prevents, say, a cartel of the world's transit providers from cutting China off from the Internet until it stops censoring? And why haven't they done so? After all, there are only two outcomes: either the Chinese government goes fully intranet, and their economy tanks as a result, or the Chinese government backs down, with their economy (and civil rights) profiting. Same goes for Russia or Iran.
It's about time that Western companies take a stance when it comes to civil rights. So much technical and social progress is kept back out of the fear the censorship creates, it must be profitable by now to simply force their hands.
Oh, and while we're at it, why are there still companies doing business with our very own guilty parties in espionage terms? Why do they have bank accounts? Why is anyone renting them office spaces, selling computers and whatever else?
The same companies from USA, Germany, Israel and I believe also Italy do not only supply our own governments (which is bad enough in itself) but also dictators who use these tools to repress their own citizens.
So we're making snooping by ISPs more difficult by using a technique that only works well precisely because so much of the modern internet is centralized behind a few hosting service providers.
Encrypted SNI will be at best a lateral move. Much time will be spent (especially by small-time providers and open source stack authors) trying to keep up with the privacy enhancements offered by the big hosting providers just so they can lay claim to the dubious benefit of enhanced privacy.
More likely, these enhancements will be leveraged to centralize hosting even further. The more complex the protocol stack, the more costly it is to avoid using centralized providers.
And much like with browser fingerprinting, whatever marginal benefits exist will be quickly erased once the existing tracking systems build and maintain databases that map IPs to likely sites. Nobody has even tried to demonstrate that this isn't feasible with aggregators--everybody just assumes it's not practical. And that ignores the reality that countries like China can force companies like Google, Apple, and others to host services domestically, making encrypted SNI of little value from day 1.
I absolutely oppose encrypted SNI because it adds significant complexity for little benefit over the long term. Worse, it will likely hasten centralization. A proper solution would be to fix TLS so SNI always happens over an encrypted channel. But as with TLS 1.3 0-RTT, in reality performance is allowed to trump security. That's the calculus of the very same organizations pushing encrypted SNI.
As somebody who has written DNS, HTTP, MySQL, RTP, RTSP, SMTP, and even HTML parsing stacks from the ground-up, we need to focus on making these things simpler. We need to shift our efforts to writing implementations using specialized languages and techniques for software verification so we can write provably safe and correct implementations rather than chasing feature after feature.
 In fairness, these organizations are diverse and if the encrypted SNI folks had their way they may also have come up with a better solution. But that doesn't justify encrypted SNI on its merits.
Rust doesn't even begin to solve this. As ugly as buffer overflows are, when it comes to tracking and data exfiltration they're hardly the biggest culprit. And in any event Rust only solves a subset of overflows. Techniques for building circular-reference data structures in Rust (i.e. using indirection through array indexes--basically pointers to a specialized heap) can and _will_ result in similar issues. At the end of the day, for core infrastructure software no general-purpose language suffices. For core infrastructure, there's no avoiding doing things the hard way. Core infrastructure software needs to be developed the way SQLite is developed, but that will never happen as long as these standards are moving targets that rely on many moving pieces.
How do you propose to do that? This is not a rhetorical question, if you have an idea on how to make it work, go ahead and propose it to the TLS working group. Keep in mind the issues raised at https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-03.
7.1. Why is cleartext DNS OK?

In comparison to [I-D.kazuho-protected-sni], wherein DNS Resource Records are signed via a server private key, ESNIKeys have no authenticity or provenance information. This means that any attacker which can inject DNS responses or poison DNS caches, which is a common scenario in client access networks, can supply clients with fake ESNIKeys (so that the client encrypts SNI to them) or strip the ESNIKeys from the response. However, in the face of an attacker that controls DNS, no SNI encryption scheme can work because the attacker can replace the IP address, thus blocking client connections, or substituting a unique IP address which is 1:1 with the DNS name that was looked up (modulo DNS wildcards). Thus, allowing the ESNIKeys in the clear does not make the situation significantly worse.
DANE never gained traction because wedding TLS to DNS was considered intolerable from the perspective of both client-side and server-side implementation complexity. But apparently not intolerable enough when it benefits centralized hosting providers. Indeed, it's not intolerable client-side complexity because unlike DANE they're not going to bother trying to implement DNSSEC browser-side; instead they're going to farm that out to a hosted service. (Remember Mozilla+Cloudflare mentioned several weeks ago? Nominally free, but as with all business strategies, if you're not the customer you're the product.) And server-side they'll just bill directly for the cost of the complexity, because now there's even more incentive for the big hosting providers to maintain your DNS for you. (Of course, in reality they'll just siphon business away from the traditional DNS registrars, so the customer sees "value-add" while the entire internet continues to centralize even further.)
What TLS ESNI is, is the realization that people killed DANE because they let perfect be the enemy of better. (Where better is to just assume DNSSEC is already solved outside the browser, or more generally depending on DNS solving the hard part, and solving it in a way that minimizes browser implementation complexity.) But apparently nobody is yet willing to cop to that reality.
TLDR: The proper solution is a simpler one that achieves the same degree of privacy--fix TLS to setup an ephemerally-keyed channel first, then perform domain authentication by exchanging data within the channel. If you don't have to worry about preventing an active MITM attacker from snooping domain names, then you don't need a committee to draft up a spec. The requirements RFC says we do have to worry about an active MITM attacker, but the actual TLS ESNI doesn't actually prevent that. It just redefines the scope to "solve" the problem away.
 You could argue that it's easier to MITM TLS than it is to MITM DNS, but nobody is going to buy that argument.
It’s protection against passive attack. An ISP that blocks various DNS services will get noticed and will face more backlash than an ISP that merely sniffs all traffic and sells it.
Also, how exactly do you block DNS-over-HTTPS reliably?
> State actors can intercept DNS traffic even easier, since DoH/DoT make it more centralized.
It can be hard for a state actor to tell which ISP customer asked for which domain.
But what we really should focus on is the added security everywhere else. Even when I trust my ISP, that is no reason not to have defense in depth.
Someday a WiFi or 4G network will have a horrible security flaw. And then this will be another bolt in the next layer of defense.
If security is important, we need multiple layers!
Focusing on what authoritarian governments might or might not do is pretty much beside the point.
But if this is the method we are following, we will also need to think about the costs-vs-benefits of different security measures.
If patching known exploits or threat-models, the cost-vs-benefit generally allows for very strong security measures because you know the risks if you don't implement it and you know when the measure is implemented and you are done.
However, with "defense-in-depth" you could always add another layer of encryption, add another sandbox layer or make a system less general-purpose. If security is your only priority, there is no point where you would stop.
In a secure system, adding defensive layers is constrained by the amount of code required to implement them. At a certain point, these layers can become an attack surface, and therefore a liability.
If you haven’t read the spec it’s pretty fascinating.
Working with Alt-Svc, and being able to set the SNI in a way that piggybacks on the encrypted SNI work, is pretty cool tech. It's all early, hard to get working, and not supported by most browsers, but it's going to allow for some interesting future security and trust options.
What I wonder is about downgrade attacks. Browsers will probably have to support clear-text SNI for a long time - could censors abuse this by making it seem that no server supports encrypted SNI? Or is there a digitally-signed way for servers to prove that they support it, before the browser sends the host?
This proposal supposes that when we've decided to talk to example.com we may as well ask a DNS query that requests not only 'A' and 'AAAA' records (IPv4 and IPv6 address) but also an encrypted SNI key. If such a key is available we can use it to prove to the server (which set that key) which name we wanted, while eavesdroppers are none the wiser.
If the adversary can eavesdrop DNS they can already speculate from our query which host we wanted to talk to, but securing DNS is a problem for which there are known solutions.
Yes, but then the censor can distinguish these apps from a regular client, and block them. This only works if browsers do it too, so that the app can't be distinguished.
Google is a cloud operator these days, and as such if something like this is to be of much use they'll have to eat the consequences at the other end, which means their product (Cloud servers) working less well for uncontroversial services than it might if they let censors do as they please. Basically for a cloud operator or CDN choosing to actually implement a working encrypted SNI solution (whatever it is) means now all your uncontroversial sites are obliged to tell censors "I'm Spartacus" and reap the consequences. If the censor is a single place of business or a micro-state maybe you can just suck it up. If it's Russia then you've just decided you want to annoy all your paying customers on a point of principle. I believe the word for that in Yes, Minister was "Brave".
Ultimately, for this to deliver much value (and if it doesn't, why bother at all?) it needs to be something every major cloud provider and CDN does for all the sites they host. Which means from somewhere they're all going to need to find a lot more backbone than is needed for, say, TLS 1.3 itself.
I hope it works, but I fear it will not.
I also get the sense that Google is pretty good at prioritizing long-term goals such as a fast and secure internet over any short-term considerations.
If somebody can MitM your encrypted connection to both server and DNS, encrypted SNI stops working to my understanding.
If someone is in your trusted CA list, why do you want protection from them? If you want protection from them, remove them from your list of trusted CAs.
TLS 1.3 has downgrade detection, so if a middlebox tries to downgrade you (e.g. to TLS 1.2) without proxying the entire connection the TLS 1.3 implementation in your client will spot that and reject the connection.
Proxying is possible (including with a downgrade) if you trust the proxy. So, don't.
If you have a recent-ish version of Chrome or Firefox you are already using all this.
I mean, if SNI is encrypted then we don't need SNI at all; just use the Host field inside the HTTP request, which is already encrypted and carries all the info you need. Am I missing something?
Yes, the SNI in the ClientHello could be encrypted before the rest of the TLS handshake, but why? Would it be useful to defend against domain fronting? I don't think so.
Yes, the intent is exactly to encrypt SNI during the initial ClientHello, in particular the public key used will be found in DNS alongside the server's IP address(es).
The key used will be the same for some set (perhaps all) of names hosted on this IP address, so yes it achieves the same goal as "domain fronting" but without tricks.
You could already do something approximating encrypted SNI using domain fronting. Connect to boring domain A via SSL, then send an HTTP request with a Host header set to your actual target domain B. Load balancing HTTP reverse proxies in the clouds would happily use the encrypted domain in the Host header, ignoring SNI in the TLS headers. This started being widely used to evade censorship and build "unblockable" services, because to block them you'd have to block all of Cloudflare or Akamai or Google.
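The fronting trick amounts to nothing more than a mismatch between the TLS-layer name and the HTTP-layer Host header. A sketch with placeholder domains (it only constructs the request and sends nothing; whether a CDN still honours the mismatch is another matter):

```python
# Sketch of domain fronting: the TLS layer names a boring front domain
# (that's what appears in cleartext SNI), while the HTTP request inside
# the encrypted tunnel names the real target via the Host header.
# Both domains are placeholders; this builds the request bytes only.
front_domain = "boring-cdn-customer.example"    # would appear in cleartext SNI
target_domain = "blocked-service.example"       # hidden inside the TLS tunnel

request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {target_domain}\r\n"   # the reverse proxy routes on this, not SNI
    f"Connection: close\r\n"
    f"\r\n"
).encode()

print(request.decode().splitlines()[1])  # the Host line names the target
```

A censor watching the wire sees only `front_domain` (in SNI and the server certificate); only the CDN sees which backend the Host header actually selects.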
These firms all, as one, banned domain fronting. Turns out the bosses wanted access to big markets more than they wanted censorship resistance.
So why do these developers think it'll work out any differently this time, with slightly different packet formats? First company to enable encrypted SNI will get blocked at the national level and will blink. The others will look at it and probably never try.
If they're serious about making sites unblockable by domain, the first step is to re-activate support for domain fronting and make a public commitment to keep it.
When a bunch of well-respected people decide to do something you don't understand, maybe your first instinct next time should be to ask what you don't know, rather than assuming they're participating in an IETF process that will be shut down?
In this case, you might start by questioning whether there's any explanation for not supporting domain fronting in the past other than support for censorship (hint: Google gave one at the time), and whether those sound engineering reasons might in fact have led to something like this as an officially supported alternative.
"Domain fronting has never been a supported feature at Google, but until recently it worked because of a quirk of our software stack," a Google spokesperson explained in a statement emailed to The Register. "We're constantly evolving our network, and as part of a planned software update, domain fronting no longer works. We don't have any plans to offer it as a feature."
This might superficially sound like domain fronting stopped working accidentally, as part of other changes, but notice how carefully it's phrased. They made an update. It stopped working. Nothing in this phrasing would be false if they made a business decision to kill it and did so.
I find this explanation extremely likely for several reasons:
1. It happened just when fronting was being adopted by Signal, a high profile app that would have caught attention.
2. All the other big players made the same change, again prompted by specific anti-censorship-related events.
GreatFire speculated that the attacks were precipitated by the publication of an article in the Wall Street Journal that described in detail domain fronting and other “collateral freedom” techniques. The interview associated with the article also caused CloudFlare to begin matching SNI and Host header, in an apparent attempt to thwart domain fronting.
This explanation turned out to be correct. CloudFlare said:
Among other network service providers, it's clear domain fronting could be awkward. "Cloudflare does not support domain fronting," a Cloudflare spokesperson said in an email to The Register. "Doing so would put our traditional customers at risk as it would mask banned traffic behind their domains."
Note that their explanation of why they killed it would also apply to encrypted SNI.
3. I worked at Google for a long time and am very familiar with their serving infrastructure. I remember noticing that resolved hostname and host header didn't have to match a decade or more ago. This ability lasted a very long time, right up until the moment people started using it in ways that upset governments. Then it went poof.
I can believe that these particular 4 players might have had a change of heart. Mozilla doesn't run any CDN or third party web services of note, and has nothing to lose from implementing this. Nor does Apple. Cloudflare and Fastly are both small firms who might have genuinely changed their minds, although if they have they should state explicitly that they are newly willing to let their "traditional customers mask banned traffic".
But I'm also not naive: it's possible some people are thinking, better ask for forgiveness than permission.
> Note that their explanation of why they killed it would also apply to encrypted SNI.
I'd say that's an interesting interpretation. What they likely meant is that if govs/orgs want to block X, and X starts using domain fronting via domain Y, there's a chance Y starts getting blocked (basic SNI/DNS filtering). The same idea does not apply to encrypted SNI: there's no damage to a random domain, unless the provider's whole IP range gets blocked.
In particular, note that encrypted SNI could just be blocked at the network level, thus forcing fallback to regular old TLS. But domain fronting can't be blocked like that.
That's true, but I don't think it's relevant to what the providers want to prevent. Even if the domain changes based on some pool, if Y is used at some point, Y could be blocked as a collateral damage.
Since Y pays the provider, the provider cares about them not being blocked because of a functionality they enable.
The engineers are looking for an alternative solution which pleases both camps: no more domain fronting, and no more possible blocking of domains by adversaries.
I think it's a bit idealist to say that if they're serious about making sites unblockable, they should put their commercial interests aside. This will not happen. But they are working on an alternative solution where both parties (engineers and shareholders) are happy, and I can only support that.
The reason they stopped domain fronting wasn't technical, it was business related. If these engineers can't win the internal political battles to keep domain fronting working, what makes them think they can win the exact same battle this time around?
It can be a differentiating factor when one of the three major cloud providers is losing steam and wants some privacy-related PR.
Every corporation that does SNI scanning for web filtering will require every user to install the company's CA cert in every client they use. That in turn forces a choice between requiring this for all apps on all networks, or whitelisting certain parts of the network. Not only does this reduce their security, it limits how apps can work on their networks.
Authoritarian governments will also start requiring this. I could even see federal agencies for non-authoritarian governments making a stink over it on law enforcement grounds. They already hate encryption. If they can't force you to expose what sites you are visiting, they may force new legislation to require all web service providers to provide logs to law enforcement, which is a much worse privacy violation than just seeing what domain you were hitting.
I think the better solution to this would have been to improve server-side support for VPNs and Tor. Tech does not exist in a bubble, and rights can be taken away when they conflict with the interests of authority.
The logical extension of this line of thinking is that the tech community should never provide any extra security to anyone, in case somewhere some malicious actor is angered into retaliating against a certain group of users.
If you think that there are companies that rely on surveilling DNS requests (but don't already intercept and decrypt all their employee's web traffic) then you can campaign for browsers and OSes to have an option to turn off new security features like DNS-over-HTTPS/TLS and Encrypted SNI, but please don't demand that the whole internet go without these basic protections because of the internal policies of a few corporations.
The logical extension would not be "there should be no security features", and this is obvious, because the whole problem was not the existence of security features. The problem was the lack of ability to enforce policies, which organizations (companies, governments) are often required to do. I think the logical extension should be that security features should be developed with policy enforcement capabilities in mind.
Also, it's not logical to conflate this privacy feature with security features. Secure connections do not require SNI hiding at all. DNS-over-HTTPS is closer to a security feature, because without DNSSEC enabled, exploits against DNS are possible that can impact non-TLS connections. And that has a much easier workaround: point DNS-over-HTTPS at a local provider.
There's no "quiet" in this approach. The cert vendors do not have the certificate's private keys; all the server operators send to the cert vendors is the certificate's public keys. Therefore, the only way to listen on the secured connections through a cert vendor is to use the cert vendor's private keys to sign a counterfeit certificate, which is presented to the browser. And at this point, the browser has everything it needs to prove that the cert vendor was compromised.
There are also several other gotchas: Certificate Transparency can force them to publish their counterfeit certificates to the whole world; Extended Validation is allowed only from some cert vendors (and requires Certificate Transparency); and last but not least, this (like all MITM) breaks client authentication (client certificates).
Finally, this only applies to cert vendors under US jurisdiction. An NSL has no power over other countries.
And as you say, browsers today might not check these things (and checking CAA doesn't make much sense for a browser, it's intended to be checked by certificate issuers). But future browsers, browsers customized with some third-party extensions, or non-browsers might check.
Browsers don't and can't check CAA records, because that's not how CAA is supposed to work. CAA is only enforced at the time of issuance. If you temporarily switch your CAA records to allow issuance for an hour, issue a cert, then switch them back to block all issuance from all CAs, the cert you issued remains valid.
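For reference, the record being discussed is simple: a CAA record's RDATA (RFC 6844) is just a flags byte, a tag, and a value, which is why it only makes sense as an issuance-time policy rather than something a browser could validate afterwards. A minimal parser, with a fabricated record and hypothetical CA name for illustration:

```python
# Parse CAA RDATA per RFC 6844: 1-byte flags, then the tag (here with an
# explicit length prefix for framing), then the value. A record like
#   example.com.  CAA  0 issue "ca.example.net"
# tells CAs other than ca.example.net (hypothetical) not to issue --
# but only at issuance time; already-issued certs are unaffected.
def parse_caa(rdata: bytes):
    flags = rdata[0]
    tag_len = rdata[1]
    tag = rdata[2:2 + tag_len].decode("ascii")
    value = rdata[2 + tag_len:].decode("ascii")
    return flags, tag, value

# Fabricated wire form of `0 issue "ca.example.net"`:
rdata = bytes([0, len(b"issue")]) + b"issue" + b"ca.example.net"
```

Nothing in the record binds existing certificates, which is exactly the point made above about the one-hour CAA window.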
You should correct that to _national_ agencies for non-authoritarian governments. Unitary states are the majority in the world.
> Every corporation which does SNI scanning for web filtering will require every user to insert the company's CA cert in every client they use.
It doesn't prevent domain fronting, of course, but neither did SNI scanning.
All that encryption will lead to people using VPNs en masse. That will lead to laws directly prohibiting VPN usage, except under explicitly granted permissions for selected businesses. That will lead to Internet segmentation: a Russian Internet, a China Internet, and so on. The world wide web will cease to exist.
Kazakhstan attempted this, but abandoned the idea, likely because it was unrealistic to expect widespread adoption of the certificate.
And eventually, it'll probably make interception infeasible in China/Russia too, especially as the population becomes more technologically literate.
...ah, and that would break anything using cert pinning.
HTTPS feels so utterly clunky.
Or when the domain uses HPKP to pin its certificate to a different CA. When the MITM CA has been added manually to the browser's root key store, most browsers allow that CA to bypass HPKP unless configured otherwise; but if the MITM is done by compromising one of the browser's default CAs, the browser won't allow a bypass.
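For context, an HPKP pin is nothing exotic: it's the base64 of the SHA-256 hash of the certificate's DER-encoded SubjectPublicKeyInfo, which is why a MITM cert signed by a different CA (with a different key) cannot match it. A sketch, with stand-in bytes instead of real DER:

```python
# Compute a pin-sha256 value as used in the (now-deprecated)
# Public-Key-Pins response header. Input here is a stand-in; a real
# pin hashes the DER-encoded SubjectPublicKeyInfo of the certificate.
import base64
import hashlib

def spki_pin_sha256(spki_der: bytes) -> str:
    """base64(SHA-256(SubjectPublicKeyInfo)) -- the HPKP pin format."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

pin = spki_pin_sha256(b"not-a-real-spki")  # stand-in input
header = 'Public-Key-Pins: pin-sha256="' + pin + '"; max-age=5184000'
```

Since the pin covers the public key rather than the whole certificate, a site can rotate certificates freely under the same key, but a counterfeit cert from a compromised CA necessarily fails the check.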
Or if the certificate was supposed to be an EV certificate, which can only be issued by a limited number of CAs, and the compromised CA is not one of these. And even if an EV CA is compromised, AFAIK some browsers require the certificate to be in a Certificate Transparency log, and they also require that the certificate carry proof of its submission to the CT logs.
Corporations and governments have to balance the costs and the benefits of breaking encryption. The costs have increased, which increases the chance that they decide that the costs outweigh any perceived benefits.
That's not the case at all, and a very dangerous idea. Some steps drive increased centralization. Increased centralization makes interception and filtering easier.
The better solution is to push domain authentication to after the secure channel is established. That doesn't protect against active MITM snoopers, but neither does TLS ESNI. Worse, TLS ESNI will drive people to use centralized DNS-over-TLS service providers.
Mozilla+Cloudflare's "free" DNS service barely made it out of the gate before it was hijacked by the Great Firewall (not even simply blocked). See https://borncity.com/win/2018/05/30/cloudflare-dns-service-1... and https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/
Of course, with DNS-over-TLS you would know (maybe) that there was a MITM. But that's no different than DNSSEC. And there's nothing you could do about it, anyhow.
Securing DNS, and specifically keeping DNS queries private, is NOT a solved problem. Indeed, it's not even a solvable problem because of the nature of DNS--the need to centrally coordinate registration of human readable identifiers. The only real "solution" is anonymization via techniques like onion routing (e.g. Tor), which anonymizes the requester, not the responder.
Advocates will tell you that TLS ESNI improves requester anonymity by aggregating users behind "trusted" intermediate resolvers, but that's largely based on conjecture and very little evidence--in particular without any showing that usage patterns won't naturally reconfigure to either return everybody to the status quo ante, or make everybody worse off on average.
TLS ESNI is a wild goose chase and, looking back 10 years from now, will have simply served to increase the complexity of the web stack and decrease service and software diversity.
Link to draft-00 of "Encrypted Server Name Indication for TLS 1.3", the Internet-Draft on which this work is based.
I submitted the direct Twitter link (with the exact same title) about 5h before this StackShare link was submitted: https://news.ycombinator.com/item?id=17537176.