Cloudflare, Mozilla, Fastly, and Apple Working on Encrypted SNI (twitter.com)
343 points by sahin-boydas 5 months ago | 145 comments



Related/relevant question.

A number of months ago I decided to finally begin researching a project I've had on the backburner for ages: a simple domain+IP bandwidth throughput counter for everything within my LAN. Basically I wanted to make a dashboard saying "you exchanged this much data with this many IPs within the last 24h; here's the per-IP and per-domain breakdown; here are the domains each IP served", that sort of thing.

Part of the motivation for this was to track down the headscratch-generating mysterious small amount of "unmetered data" my ISP reports each month (I don't watch TV over the internet and can't think of what else it would be), and the other part of the motivation was to satisfy idle curiosity and observe domain<->IP cohesion/fragmentation over time.

At the exact point I began my research TLS 1.3 had just been approved, and I rapidly realized my nice little idea was going to get majorly stomped on: the inspiration for the project, years ago, was the SNI field itself (hah).

5-10 years ago, I would have been able to route all my traffic through a gateway box, and passively watch/trace (slightly behind realtime) everything via tcpdump or similar, catching DNS lookups and sniffing SNI fields. That would have handled everything.
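For reference, the SNI-sniffing half of that plan boils down to reading the server_name extension out of the TLS ClientHello. Here's a toy parser for just that extension's wire format (RFC 6066), with a hand-built sample record so it runs standalone; it's a sketch, not a capture tool:

```python
import struct

def parse_sni_extension(body: bytes) -> str:
    """Parse the body of a TLS server_name extension (RFC 6066):
    a 2-byte list length, then entries of (1-byte type, 2-byte length, name)."""
    (list_len,) = struct.unpack_from("!H", body, 0)
    offset, end = 2, 2 + list_len
    while offset < end:
        name_type = body[offset]
        (name_len,) = struct.unpack_from("!H", body, offset + 1)
        name = body[offset + 3 : offset + 3 + name_len]
        if name_type == 0:  # 0 = host_name
            return name.decode("ascii")
        offset += 3 + name_len
    raise ValueError("no host_name entry found")

# Build a sample extension body for "example.com" and parse it back.
host = b"example.com"
ext = struct.pack("!HBH", len(host) + 3, 0, len(host)) + host
print(parse_sni_extension(ext))  # example.com
```

In a real sniffer you'd feed this the extension bytes pulled out of a captured ClientHello; with encrypted SNI, of course, this field is exactly what goes away.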

Now, with encrypted SNI, and the upcoming DNS-over-HTTPS (seems I need to think ahead :/), it looks like my (now irritatingly stupid-seeming) project will require me to generate a local cert, install it on all my devices, and get a machine beefy enough to keep line-rate speed while doing MITM cert-resigning.

And then, for the fraction of traffic my system reports as "not signed by local cert", I'm just going to have to let that go, or I'll break everything that uses cert pinning.

...which I wouldn't be surprised if some programs eventually use with DoH lookups (using pinned certs fished out of DNSSEC for DoH makes sense).

I'm very very interested to know if there are any alternatives I can use. I realize, for now, that I can sniff for and collect DNS lookup results and associate these with IPs, but when will this break?(!)

Honestly, HTTPS and security in general just seem like a gigantic mess. I say this not to disrespect the architecture, but more as a "wow, this is just so hard to get right".


You will still be able to measure how much data goes to and from particular IPs, since that can't be screened.
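A minimal sketch of that per-IP counter, assuming you can export (src, dst, bytes) flow records from your gateway (NetFlow, conntrack, whatever); the addresses below are just illustrative:

```python
from collections import Counter

def per_ip_bytes(flows):
    """flows: iterable of (src_ip, dst_ip, byte_count) records.
    Returns total bytes seen to/from each address."""
    totals = Counter()
    for src, dst, nbytes in flows:
        totals[src] += nbytes
        totals[dst] += nbytes
    return totals

sample = [("10.0.0.2", "172.217.16.14", 1200),
          ("10.0.0.2", "104.16.132.229", 800),
          ("10.0.0.3", "172.217.16.14", 500)]
print(per_ip_bytes(sample).most_common(1))  # [('10.0.0.2', 2000)]
```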

TLS 1.3 approval doesn't stomp on SNI; on the contrary, for the first time it actually makes SNI mandatory - in previous versions it was optional, leading to some HTTP implementations still not working with SNI late in the 21st century. I had a corporate Python system I was responsible for that couldn't do SNI even earlier this year.

You probably shouldn't attempt your idea of proxying everything. Proxies are very fragile, and this will probably inadvertently worsen your security, while also introducing weird non-security misbehaviour that's hard to track down. For example Firefox will go "Oh, a MITM proxy :(" and it will switch off all the safety measures built up over years for the public Internet because this isn't the public Internet. In principle if your MITM proxy is enforcing the same or tougher rules, this wouldn't be worse, but let's face it, you're going to cobble together the minimum possible to make it work.

Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up. That's the most cynical reason why there's an impetus to encrypt everything. If the bits are all encrypted gibberish the temptation to meddle with them goes away.


And yet a number of recent privacy breaches, where apps were collecting and transmitting data they had no business touching, were discovered by sniffing the traffic.

> Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up.

How do you propose we do security otherwise? (Apart from delegating it to other parties so they think about the bits)


>How do you propose we do security otherwise? (Apart from delegating it to other parties so they think about the bits)

The clear and obvious solution is to make developers, managers, executives, and major investors personally responsible for the damage their products do. If Google is going to encrypt everything using pinned certificates, making it impossible for me to implement security features/boundaries of my choice, then they should be 100% liable for all damages that come from the use of their products. Additionally, if an analysis of the problem shows that it is related to aspects of the product that do not directly relate to what the end user believes the purpose of the product to be, then there should be criminal liability for all parties as well.


>The clear and obvious solution is to make developers, managers, executives, and major investors personally responsible for the damage their products do

I struggle to understand how this is a "clear and obvious" solution. How would you go about executing this? Legislation? The big famous piece of recent tech legislation, GDPR, is, by most opinions, far from "clear and obvious". And given the US's current political environment, I'd struggle to believe any sort of legislation could be passed that is "clear and obvious".

There has to be a better solution than "coordinate the interests, incentives and motivations of millions of users, developers, managers, and executives to pass some legislation that makes the latter group responsible."


> You will still be able to measure how much data goes to and from particular IPs, since that can't be screened.

"33.12% of traffic went to IPs associated with Google. 26.3% of traffic went to IPs associated with CloudFlare. 25% of traffic went to Cloudfront. Here is the breakdown of the remaining 15.58%..."

<Insert cartoon fist punching noises here>

DO NOT WANT. Sorry for caps, it's a bit loud, but the above ^ is what I explicitly want to _avoid_. And with DoH (DNS-over-HTTPS) avoiding that will be even harder.

> You probably shouldn't attempt your idea of proxying everything.

Gestures at post and underlines angry and irritated parts (not angry at you of course, angry at TLS 1.3)

I don't _want_ to build a proxy. Well, I do; I'd love to make it a firewalling proxy that only lets HTTPS data signed by my cert through (and resigns everything so only my cert is ever used), so it DOES break everything using cert pinning, and I can sniff ALL the data generated by the programs I use (if people argue about hardware ownership, shouldn't they argue even more vehemently about their personal information?!).

But practically speaking, no, I don't want to build a proxy. I want to avoid needing to build a resilient LAN with multiple APs and subnets and firewalling and ugh. (Because I will of course need that to break HTTPS and then use my bank's website/apps without my heart fluttering - or even my Google login, for that matter...)

> Proxies are very fragile, and this will probably inadvertently worsen your security, while also introducing weird non-security misbehaviour that's hard to track down.

What do you mean?

> For example Firefox will go "Oh, a MITM proxy :(" and it will switch off all the safety measures built up over years for the public Internet because this isn't the public Internet.

Really?! Where can I learn more about this? :s

> In principle if your MITM proxy is enforcing the same or tougher rules, this wouldn't be worse, but let's face it, you're going to cobble together the minimum possible to make it work.

D:

In practice you're right. In my head I want to build the best+most secure system available, but if I'm pragmatic, the amount of support out there for doing this on a personal/temporary (aka non-dev/"break the system for a minute because it's being debugged"/SSLKEYLOGFILE) basis is about the same as for GPG-encrypted mail, so I know I'll wear down eventually, and make myself a giant sitting duck. :'(

> Every time somebody tries to get fancy and _think_ about the bits inside the packets their network is supposed to be moving they screw up. That's the most cynical reason why there's an impetus to encrypt everything. If the bits are all encrypted gibberish the temptation to meddle with them goes away.

Thanks for this. I never looked at it that way, but thinking about all the programs and apps using weird binary formats to scare people away... it makes perfect sense the Internet and HTTPS would work similarly. But I think that's a vulnerability in and of itself, in that it discourages general exploration and progressive investigative analysis by virtue of being so unbelievably multidimensional and hard. For example I look at https://sites.google.com/site/testsitehacking/-36k-google-ap... and wonder.


Examples of things we're sure are safe to insist on for certificates in the public Internet but we can't check in people's crappy corporate networks because the checks would just constantly fail and nobody cares to fix anything:

* The FQDN (minus a trailing dot) of the site must appear either exactly as a SAN dnsName or, with its first label replaced by a single asterisk, as a "wildcard", in the certificate, otherwise the cert is worthless

* The cert should have an Extended Key Usage. The EKU must contain at least 1.3.6.1.5.5.7.3.1 (meaning this key is to be used to identify a TLS server)

* The certificate mustn't use SHA-1 (or MD5 for that matter) as part of a signature algorithm
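The first rule can be sketched in a few lines. This is an illustrative matcher for the exact-or-single-label-wildcard rule only, not a substitute for a real verifier (which also checks EKU, signature algorithms, chains, and so on):

```python
def san_matches(fqdn: str, san: str) -> bool:
    """Check a hostname against one SAN dnsName using the public-web rule:
    exact match, or a wildcard that replaces exactly the first label."""
    fqdn = fqdn.rstrip(".").lower()  # strip the trailing dot of a rooted FQDN
    san = san.lower()
    if san == fqdn:
        return True
    if san.startswith("*."):
        labels = fqdn.split(".")
        # The wildcard covers one label only, never "deep" subdomains.
        return len(labels) > 1 and ".".join(labels[1:]) == san[2:]
    return False

assert san_matches("www.example.com", "*.example.com")
assert not san_matches("deep.www.example.com", "*.example.com")
assert san_matches("example.com.", "example.com")
```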


>leading to some HTTP implementations still not working with SNI late in the 21st century

So how did IPv6 adoption go?


This is pretty similar to my own thoughts on the matter, which came from considering doing something similar (mostly to make sure there's no random malware from family members).

Imagine being in corp information security though. It seems like it won't be long before command-and-control or exfiltration happens via certificate-pinned apps you learn you can't inspect. Corporate networks will have to start banning such apps if there's no way to inspect them.


Honestly, if corporate IT has to drop connections to a bank, because they can’t MiTM the connection, I see this as a good thing.

Too many people are unaware that their connections to their banks etc from work are insecure because of corporate firewalls and what are (essentially) compromised certificate stores.


It's not uncommon with TLS 1.2 to have a setup where what actually happens is this:

You make an outgoing TLS connection, the MITM Proxy Meg intervenes, but you actually wanted MyBank

You->Meg (thinking she's MyBank): Hi, I'm looking for MyBank?

Meg->MyBank: Hi, I'm looking for MyBank?

MyBank->Meg: Yes, we're MyBank <cert for MyBank>

[ Meg inspects the cert ]

Meg->You: Yes, we're MyBank <cert for MyBank>

Now Meg stitches the TCP connection back together so you're talking to MyBank. She no longer eavesdrops.

This works for the business because now they're not paying for a box that can proxy your connection _and_ they're less likely to find that one of their "security" employees emptied your bank account. They feel comfortable that MyBank aren't bad guys, so they aren't scared that you'll, say, exfiltrate stolen corporate secrets via MyBank or download Malware from it.

In TLS 1.3 this doesn't work, because the conversation goes like this:

You->Meg (thinking she's MyBank): Hi, I'm looking for MyBank and I picked 14, and the letter Q

Meg->MyBank: Hi, I'm looking for MyBank and I picked 14, and the letter Q

MyBank->Meg: Cool, I picked a Banana and 209.

<Unintelligible encrypted data> [ Meg can't do anything with this ]

Or Meg could try picking her own Elliptic Curve Diffie Hellman values:

You->Meg (thinking she's MyBank): Hi, I'm looking for MyBank and I picked 14, and the letter Q

Meg->MyBank: Hi, I'm looking for MyBank and I picked 81, and the letter Z

MyBank->Meg: Cool, I picked a Banana and 209. <We're MyBank - encrypted data>

[ Meg now has a connection to MyBank, but she can't give it you because it doesn't work if you picked different values ]

In the medium term we expect corporations which do this will just clamp everything to TLS 1.2 with full proxying. This will be very expensive, and worse, but well, we did tell you not to do this, so now that it's broken you get to keep both halves. Ordinary folks who aren't trying to put middleboxes everywhere get TLS 1.3, with better performance and more security.
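The key-share mismatch can be shown with toy finite-field Diffie-Hellman. The numbers are tiny and utterly insecure (chosen to echo the 14/81/209 in the story above); the point is only that Meg substituting her own share gives her a session with the bank she can't hand back to you:

```python
# Toy Diffie-Hellman (insecure parameters, purely illustrative).
p, g = 1019, 2            # tiny prime modulus and generator
you, meg, bank = 14, 81, 209   # secret exponents, echoing the story

you_pub = pow(g, you, p)
meg_pub = pow(g, meg, p)
bank_pub = pow(g, bank, p)

# Honest handshake: both sides derive the same shared secret.
assert pow(bank_pub, you, p) == pow(you_pub, bank, p)

# Meg forwards her own share (81) to the bank instead of yours (14).
key_you_derive = pow(bank_pub, you, p)     # g^(you*bank): what you compute
key_bank_derives = pow(meg_pub, bank, p)   # g^(meg*bank): what the bank computes
# Meg can read the bank's traffic, but your key doesn't match the bank's,
# so she can't splice the two halves into one working session.
assert key_you_derive != key_bank_derives
```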


I liked that explanation.

So... with TLS 1.3, it won't be possible to play connection-jockey anymore, will it?

(This is why I was saying things were such a mess earlier)


Maybe it's a deliberate lie from your ISP? If everyone has the nice words 'unmetered data' on their bill, it might help with overall customer retention.


This is Telstra, Australia's juggernaut/cartel^Hmajor player (owns all the copper and 4G; "most definitely >51% market and then some"). So I'd actually mostly blame bureaucracy.

There are (and have been for some time) some known-unmetered sites. (There was a small stink raised where an FTP/HTTP file mirror got removed some years ago.)

Some digging around I did some months ago turned up a (semi-public, as in "obscurely published but uninteresting") list of supposedly-whitelisted IP addresses pointing to school/university backend systems and apparently used by rural schools sitting on (expensive) satellite. Suffice to say that I wasn't able to find anything that could be used as a proxy :P


iinet has a few domains they explicitly do not meter and list in the stats, including a mirror for open source software. Was very useful while I was down under to keep my Gentoo system nicely updated without killing the quota of our shared flat. I so don't miss this :)


Ah, to clarify I meant lying about you accessing any unmetered sites/IPs, not that any existed.


Even on desktop, some non-browser programs already seem to use cert-pinning. E.g., there doesn't seem to be a way at all to inspect the traffic of the Dropbox client on Windows, short of patching the executable. The situation is likely worse on mobile.

So no, I don't think that's an easily solvable problem. And yes, I also think it's a giant mess.

Even from a security standpoint it doesn't seem to make sense: we're fortifying the connections themselves against sniffing, but at the same time force everything to go through centralized cloud services, which have a growing reputation for being hacked, and also force everyone to blindly trust the programs they have running on their own devices.

To put on my tinfoil hat, this kind of threat model sometimes seems better designed to protect surveillance capitalism against attacks instead of protecting the users.


I mean, no one's forcing me to blindly trust programs I run on my own devices. I make a point of only running open source software for this reason. I don't need to inspect the traffic if I can simply read the source code and look for outgoing calls. I won't run Dropbox for this exact reason. Instead I prefer SyncThing, which has no data caps, no central server, and is open source and easy to inspect.

Obviously I'm not actually reading all this source code (who has the time?), but I strongly prefer it to be available, so that I have the ability if desired. Also for reasonably popular stuff the community tends to suss out suspicious activity pretty quickly.

I absolutely do not trust proprietary blobs; they don't get to run on any of my real workstations. I might run them on my Windows install (primarily for games) if I must, and the computer I use for work gets to run anything my company declares that I need (thankfully not much), but I certainly don't trust them.


> Also for reasonably popular stuff the community tends to suss out suspicious activity pretty quickly.

Unless it's ImageMagick, OpenSSH, WordPress, or any number of other things.


>The situation is likely worse on mobile.

You might think so, but not really. Any app you install on Android can watch what you do online in any other app, with no permissions at all.

https://hackernoon.com/android-gives-apps-full-access-to-you...


Maybe just run your own DNS resolver and log that, then correlate with IP traffic? Additionally you can trace which applications open which connections on each machine.
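A rough sketch of that correlation, assuming you can pull (hostname, resolved IP) pairs out of the resolver log and per-IP byte counts from the gateway (all the names and numbers here are made up):

```python
from collections import defaultdict

def label_traffic(dns_log, ip_bytes):
    """dns_log: iterable of (hostname, resolved_ip) pairs from resolver logs.
    ip_bytes: dict of remote_ip -> byte count from the gateway.
    Returns byte counts keyed by hostname(s) where known, else by bare IP."""
    names = defaultdict(set)
    for host, ip in dns_log:
        names[ip].add(host)
    report = {}
    for ip, nbytes in ip_bytes.items():
        # Several names can share an IP; join them so nothing is hidden.
        key = "/".join(sorted(names[ip])) if ip in names else ip
        report[key] = report.get(key, 0) + nbytes
    return report

log = [("example.com", "93.184.216.34"), ("cdn.example.net", "93.184.216.34")]
traffic = {"93.184.216.34": 4096, "198.51.100.7": 512}
print(label_traffic(log, traffic))
```

This breaks once clients do DoH straight to a third-party resolver, which is exactly the concern upthread: the gateway never sees the lookups at all.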


This is amazing work; can't wait for this to be finished and deployed on the internet. Together with encrypted DNS (DoT and DoH), we finally get fully confidential connections to a server without leaking anything other than the remote IP.


The encrypted DNS proposals only cover securing the route to the recursive resolver. So the recursive resolver (your ISP, google, cloudflare) will still see all the sites you're visiting.

We also need encrypted DNS for the recursive lookup itself so you can run your own resolver somewhere.


The resolver is less of an issue because you have free choice there; your ISP is harder to change. Plus you increase the number of parties that need to collude (ISP + RR provider) to spy on your traffic.


True. However these days pretty much everyone is colluding so there is that. Data bonanza.


> So the recursive resolver (your ISP, google, cloudflare)

Why not yourself? Your ISP can still see the RR working, of course.

> We also need encrypted DNS for the recursive lookup itself so you can run your own resolver somewhere.

This would indeed be optimal but would require upgrading a significant portion of authoritative name servers, sooo... might take a while.


> Why not yourself?

Well, then what attacker do you defend against if your laptop asks your router via DoT but then the router does an unencrypted recursive lookup anyway?


Who said laptop? You can have a DNS resolver on a server somewhere that you own; that way traffic can remain encrypted on your ISP's network.


I'm actually not sure exactly what the purpose of ESNI is, but if you look at the implementation, ESNI is not private when the server you're connecting to is publicly known. I might be missing something, but you can just build up a database of ESNI record_digest to server name mappings. The limitation is that you can only build this up for servers you know about. Then again, that doesn't work for TLS servers terminating multiple domains, because they can use the same key for a bunch of different domains. I guess this is the purpose of ESNI :)
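Here's what that lookup table could look like. Big assumption flagged up front: this treats record_digest as a plain SHA-256 over the published ESNIKeys blob, which matches the spirit of the early drafts but is not guaranteed to be the exact construction; the domains and blobs are invented:

```python
import hashlib

def digest_db(published_records):
    """published_records: domain -> raw ESNIKeys blob scraped from its
    _esni TXT record. ASSUMPTION: record_digest is SHA-256 over that
    blob (early-draft style). Build a reverse lookup table from it."""
    return {hashlib.sha256(blob).digest(): domain
            for domain, blob in published_records.items()}

records = {"example.com": b"\x01fake-esnikeys-a",
           "example.org": b"\x01fake-esnikeys-b"}
db = digest_db(records)

# An on-path observer sees a digest on the wire and looks it up.
observed = hashlib.sha256(records["example.com"]).digest()
print(db[observed])  # example.com
```

Which is exactly the limitation stated above: this only de-anonymizes servers whose records you already crawled, and it collapses entirely when one key serves many domains.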


That only works if the server you are visiting is behind a CDN with no resources served directly from the dedicated host.

The encrypted SNI would primarily be useful to make censorship and MITM attacks harder.


I don't completely get it. Why is a CDN required for this?

Let's say I have an Nginx on my server which serves lots of websites, and those websites can only be accessed through HTTPS with SNI, not HTTP.

Now, with Encrypted SNI deployed, requests from my clients can still be dispatched to their respective virtual hosts, but any sniffer in the middle of the connection should only be able to see that my clients are accessing my server, not which virtual host.

Am I missing anything? I haven't dug deep into this yet.


That's about it.

The theory is that if you put your server behind a popular CDN, then a state-level attacker is left with little choice but to block the entire server.

Another benefit is that attackers who can observe but not modify traffic will be less able to track what sites you're visiting.

There are risks though:

* It's unclear how resistant CDNs actually are to state-level attackers.

* It's unclear how resistant CDNs are to regular attackers[1]

* Corporations/security-savvy users will find it difficult to control what [their] workstations can and cannot reach: allowing access to a single cloud-based service may inadvertently allow access to a malicious command/control server sharing the CDN.

[1]: https://9to5mac.com/2017/02/24/cloudflare-server-breach-clou...


That’s an accurate summary, but CDNs are important because they terminate huge numbers of sites. In your example censorship is relatively easy since network operators who want to block a site won’t have very much collateral damage by blocking your entire server just to deny access to that single site. CDNs, terminating millions of sites, make that far more challenging.


It misses the fact that even with encrypted SNI it would be very easy to fingerprint a website; see my comment above.


Shared hosting can provide some "privacy", or to be more exact plausible deniability; however, it's not going to be particularly good (when accounting for the actors at play at this level), especially when you consider that you can fingerprint the websites quite easily, as the encrypted data would still be of known size.

So if your website serves a page which is 412KB in size that would be quite easy to fingerprint especially across a pool of websites, beyond that it’s also quite possible to fingerprint things even further by measuring the number, size and order of secondary requests a page load incurs.

So overall there is little privacy that would be provided by this (again when taking into account the threat model and actors) unless you hide behind 1000s and 1000s of websites on the same network and even then it’s mostly not about privacy but about resilience against MITM and state censorship.
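That fingerprinting step is trivial to sketch: match an observed transfer size against a table of known page sizes. Sites, sizes, and the tolerance value below are all invented for illustration; real attacks also use the count, sizes, and ordering of secondary requests, as noted above:

```python
def fingerprint(observed_bytes, known_sizes, tolerance=0.02):
    """known_sizes: site -> typical page transfer size in bytes.
    Return candidate sites whose size is within `tolerance` (fractional)
    of the observed encrypted transfer size."""
    return sorted(
        site for site, size in known_sizes.items()
        if abs(size - observed_bytes) <= tolerance * size
    )

sizes = {"news.example": 412_000, "blog.example": 97_000, "shop.example": 415_000}
print(fingerprint(413_000, sizes))  # ['news.example', 'shop.example']
```

Note the ambiguity in the output: two similarly sized sites both match, which is the "plausible deniability" (and its limits) in a nutshell.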


I think for the web to hold true to its ideas, there should have been a discussion of whether we want to have fully confidential connections on the internet. Sadly, this doesn't seem to have happened.


I don't think encryption hinders the true ideas of the web, it merely blocks malicious actors, of which there are many in almost all areas and connections.

The entire encryption setup can easily be opened up for inspection and manipulation if all parties agree this is a good idea.


If all parties agree, yes. But that likely is not the case - e.g., an app developer would likely want to have maximum freedom in what data they send over the wire, while a user would want to minimize the amount of data transmitted (for simple cost reasons) and also control which personal and/or sensitive information is transmitted.

Encryption where the user does not control either of the endpoints stacks the odds in favor of developers and against users.

I agree, however, that the situation for the web (e.g. everything that lives inside a browser) is still pretty good thanks to the ability to add custom root CAs and to generally use the browser's debugging and inspection tools.

However, if you try to find out what exactly a mobile app, a game console or an IoT device is doing, you soon find yourself out of luck even if you're the owner of those things.


The Remote IP is almost as sensitive as SNI right?


This proposal is generally for sites which are behind an edge network like Cloudflare. In that case your remote IP is the IP of the provider, not your actual server.


Not always, see domain fronting: https://en.m.wikipedia.org/wiki/Domain_fronting


Or services which use subdomains for individual users, e.g. #.github.io, #.tumblr.com, #.blogspot.tld and so on.


This. Currently, the difference between accessing github.com/blattimwind and blattimwind.github.io is that, in the second case, which username you're accessing is visible to all. Encrypted SNI will also protect that part of the URL.


Not necessarily, think CDNs or Shared IPs for Webspaces.


So in reality it is just as sensitive.


It can be as sensitive. But in the best case, shared IPs provide more anonymity.


[flagged]


As a responsible parent, you should not be relying on filtering to educate your children. Filters have more holes than Swiss cheese, often block entirely legitimate educational resources, make your children know that you don't trust them to be responsible, and learning about proxies and VPNs is trivial for them (just try googling "proxy servers unblock websites").

If you don't trust your children to use the internet responsibly, don't let them use it. Or let them use it but only under your supervision. If you let them go wild but put up filters, they will find a way around, one way or another, and at that point the princess is in another castle.


allowed domains:

en.wikipedia.org

I'm all for having discussions, but if you think the dichotomy should be "don't use the Internet at ALL" or "Talk to them about it and hope they're mature enough to avoid getting rick-rolled into rotten.com" then I heartily disagree.


allowed netname: WIKIMEDIA-EU-NET

problem solved


I agree with most of that. When my kids grow older, I'll teach them.

And the trick with local CA is awesome!


As a parent you can install whatever SSL CA you'd like and do a full MitM of your own devices. For everyone else it's very important not to leak any private information to the ISP or any other party.


It's great seeing more and more work being done to increase privacy and anonymity online, especially things like this which are about plugging smaller, less obvious holes. Nothing is perfect, but the harder it is for the snoops and spooks, the better.


NTP over TLS next please.



Am I understanding this correctly: they propose an _esni TXT record for a domain, containing a public key which is used to encrypt SNI?

Who does this protect from?

ISPs that don't want to give up spying are, obviously, monopolies; they have enough power to block DNS-over-HTTPS/TLS and all 3rd-party resolvers. So can censors. State actors can intercept DNS traffic even more easily, since DoH/DoT make it more centralized. The IP address itself leaks so much information that cleartext SNI might not even be needed after all (meaning you can still pinpoint a domain with high probability just by IP address).

Maybe this can help make collateral freedom a bit easier. But it's already possible to achieve that with throwaway domains generated on the fly rather than known ahead of time; they're cheap compared to CDN/cloud costs. And collateral freedom doesn't really work against censorship all that well on its own, as centralization makes it easy to pressure CDNs and clouds into denying service to dissidents. Not enough collateral damage.


"ISPs that don't want to give up spying obviously are monopolies, have enough power and can block DNS-over-HTTPS/TLS and all 3rd party resolvers."

Too many big players are behind it for ISPs to be able to get away with that strategy. If a battle started then Google, Cloudflare, and others could simply start hosting DNSoHTTPS directly alongside all of their major services. To most customers 90% of "the internet" is the services from these providers.

"So can censors. State actors..." There is never going to be surefire way to subvert a large government that controls the infrastructure you are trying to sneak through. Even if there was the government would still be in control of data at any companies that operate inside the country. It's not "outsmart China or bust".

Rather than dismissing the solution as useless because it doesn't provide the ultimate privacy maybe it's more worthwhile to look at what it can do and how other changes to TLS and other protocols can continue to make things harder to snoop.


> It's not "outsmart China or bust".

While we're at it: what prevents, say, a cartel of the world's transit providers from cutting China off from the Internet until they stop censoring it? And why haven't they done so? After all, there are only two outcomes: the Chinese government goes fully intranet, but their economy tanks as a result, or the Chinese government backs down, with their economy (and civil rights) profiting. Same goes for Russia or Iran.

It's about time that Western companies take a stance when it comes to civil rights. So much technical and social progress is kept back out of the fear the censorship creates, it must be profitable by now to simply force their hands.

Oh, and while we're at it, why are there still companies doing business with our very own guilty parties in espionage terms? Why do they have bank accounts? Why is anyone renting them office spaces, selling computers and whatever else?

The same companies from USA, Germany, Israel and I believe also Italy do not only supply our own governments (which is bad enough in itself) but also dictators who use these tools to repress their own citizens.


Money. Sadly morals and convictions get quickly tossed out the window for money and power. It is China's problem, why should Western companies damage extremely profitable business relations? I agree with you that they should cut off such countries, but modern morality is a lot more fluid and money is king. Typical FYGM mentality.


Harder to snoop? You said it yourself, "[t]o most customers 90% of 'the internet' is the services from these [few] providers."

So we're making snooping by ISPs more difficult by using a technique that only works well precisely because so much of the modern internet is centralized behind a few hosting service providers.

Encrypted SNI will be at best a lateral move. Much time will be spent (especially by small-time providers and open source stack authors) trying to keep up with the privacy enhancements offered by the big hosting providers just so they can lay claim to the dubious benefit of enhanced privacy.

More likely, these enhancements will be leveraged to centralize hosting even further. The more complex the protocol stack, the more costly it is to avoid using centralized providers.

And much like with browser fingerprinting, whatever marginal benefits exist will be quickly erased once the existing tracking systems build and maintain databases that map IPs to likely sites. Nobody has even tried to demonstrate that this isn't feasible with aggregators--everybody just assumes it's not practical. And that ignores the reality that countries like China can force companies like Google, Apple, and others to host services domestically, making encrypted SNI of little value from day 1.

I absolutely oppose encrypted SNI because it adds significant complexity for little benefit over the long term. Worse, it will likely hasten centralization. A proper solution would be to fix TLS so SNI always happens over an encrypted channel. But as with TLS 1.3 0-RTT, in reality performance is allowed to trump security. That's the calculus of the very same organizations pushing encrypted SNI.[1]

As somebody who has written DNS, HTTP, MySQL, RTP, RTSP, SMTP, and even HTML parsing stacks from the ground-up, we need to focus on making these things simpler. We need to shift our efforts to writing implementations using specialized languages and techniques for software verification so we can write provably safe and correct implementations[2] rather than chasing feature after feature.

[1] In fairness, these organizations are diverse and if the encrypted SNI folks had their way they may also have come up with a better solution. But that doesn't justify encrypted SNI on its merits.

[2] Rust doesn't even begin to solve this. As ugly as buffer overflows are, when it comes to tracking and data exfiltration they're hardly the biggest culprit. And in any event Rust only solves a subset of overflows. Techniques for building circular-reference data structures in Rust (i.e. using indirection through array indexes--basically pointers to a specialized heap) can and _will_ result in similar issues. At the end of the day, for core infrastructure software no general purpose language suffices. For core infrastructure, there's no avoiding doing things the hard way. Core infrastructure software needs to be developed in the same way SQLite is developed, but that will never happen as long as these standards are moving targets that rely on many moving pieces.


> A proper solution would be to fix TLS so SNI always happens over an encrypted channel.

How do you propose to do that? This is not a rhetorical question, if you have an idea on how to make it work, go ahead and propose it to the TLS working group. Keep in mind the issues raised at https://tools.ietf.org/html/draft-ietf-tls-sni-encryption-03.


Requirements 3.6 and 3.7 include prevention of active MITM attacks. TLS ESNI supposedly "solves" the problem by shifting it to DNS. But the problem of an active MITM attacker isn't actually solved by DNS. Per the TLS ESNI draft:

  7.1.  Why is cleartext DNS OK?

  In comparison to [I-D.kazuho-protected-sni], wherein DNS
  Resource Records are signed via a server private key,
  ESNIKeys have no authenticity or provenance information. 
  This means that any attacker which can inject DNS responses
  or poison DNS caches, which is a common scenario in client
  access networks, can supply clients with fake ESNIKeys (so
  that the client encrypts SNI to them) or strip the ESNIKeys
  from the response.  However, in the face of an attacker that
  controls DNS, no SNI encryption scheme can work because the
  attacker can replace the IP address, thus blocking client
  connections, or substituting a unique IP address which is
  1:1 with the DNS name that was looked up (modulo DNS
  wildcards).  Thus, allowing the ESNIKeys in the clear does
  not make the situation significantly worse.

  (https://tools.ietf.org/html/draft-rescorla-tls-esni-00#section-7.1)
So there is no solution possible per the original requirements, which the TLS ESNI specification admits. (See above.) All the existing TLS ESNI specification does is obfuscate the issue at the expense of considerable implementation complexity.

DANE never gained traction because wedding TLS to DNS was considered intolerable from the perspective of both client-side and server-side implementation complexity. But apparently not intolerable enough when it benefits centralized hosting providers. Indeed, it's not intolerable client-side complexity because unlike DANE they're not going to bother trying to implement DNSSEC browser-side; instead they're going to farm that out to a hosted service. (Remember Mozilla+Cloudflare mentioned several weeks ago? Nominally free, but as with all business strategies, if you're not the customer you're the product.) And server-side they'll just bill directly for the cost of the complexity, because now there's even more incentive for the big hosting providers to maintain your DNS for you. (Of course, in reality they'll just siphon business away from the traditional DNS registrars, so the customer sees "value-add" while the entire internet continues to centralize even further.)

What TLS ESNI is, is the realization that people killed DANE because they let perfect be the enemy of better. (Where better is to just assume DNSSEC is already solved outside the browser, or more generally depending on DNS solving the hard part, and solving it in a way that minimizes browser implementation complexity.) But apparently nobody is yet willing to cop to that reality.

TLDR: The proper solution is a simpler one that achieves the same degree of privacy--fix TLS to set up an ephemerally-keyed channel first, then perform domain authentication by exchanging data within the channel. If you don't have to worry about preventing an active MITM attacker from snooping domain names, then you don't need a committee to draft up a spec. The requirements draft says we do have to worry about an active MITM attacker, but TLS ESNI doesn't actually prevent that. It just redefines the scope to "solve" the problem away.

[1] You could argue that it's easier to MITM TLS than it is to MITM DNS, but nobody is going to buy that argument.
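The "ephemeral channel first, authenticate inside" idea above can be sketched in a few lines. This is purely illustrative: the group parameters and names below are made up for the demo (a real design would use X25519 and an AEAD cipher), and as the comment itself concedes, an unauthenticated first round only defeats passive snooping, not an active MITM.

```python
import hashlib
import secrets

# Toy prime-field Diffie-Hellman group -- NOT a vetted group, demo only.
P = 2**255 - 19
G = 2

def dh_keypair():
    """Generate an ephemeral (private, public) key pair."""
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

def shared_key(own_priv, peer_pub):
    """Both endpoints derive the same 32-byte channel key; the name the
    client wants (the SNI) would only be sent after this key exists."""
    s = pow(peer_pub, own_priv, P)
    return hashlib.sha256(s.to_bytes(32, "big")).digest()

client_priv, client_pub = dh_keypair()
server_priv, server_pub = dh_keypair()
channel_key = shared_key(client_priv, server_pub)
```

Only after `channel_key` is established would the client reveal the hostname and demand the matching certificate inside the encrypted channel.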


> ISPs that don't want to give up spying obviously are monopolies, have enough power and can block DNS-over-HTTPS/TLS and all 3rd party resolvers.

It’s protection against passive attack. An ISP that blocks various DNS services will get noticed and will face more backlash than an ISP that merely sniffs all traffic and sells it.

Also, how exactly do you block DNS-over-HTTPS reliably?

> State actors can intercept DNS traffic even easier, since DoH/DoT make it more centralized.

It can be hard for a state actor to tell which ISP customer asked for which domain.


Lots of people here seem to focus on how this could prevent censorship and spying by governments. It might raise the bar, and make such measures infeasible for small countries, while also making it more visible.

But what we really should focus on is the added security everywhere else. Even when I trust my ISP, that is no reason not to have defense in depth.

Someday some WiFi or 4G network will have a horrible security flaw. And then this will be another bolt in the door, one more layer of security.

If security is important, we need multiple layers!

Focusing on what authoritarian governments might or might not do is pretty much beside the point.


> Even when I trust my ISP, that is no reason not to have defense in depth.

But if this is the method we are following, we will also need to think about the costs-vs-benefits of different security measures.

When patching known exploits or addressing concrete threat models, the cost-vs-benefit analysis generally favors very strong security measures: you know the risks of not implementing them, and you know when the measure is in place and you are done.

However, with "defense-in-depth" you could always add another layer of encryption, add another sandbox layer or make a system less general-purpose. If security is your only priority, there is no point where you would stop.


> If security is your only priority, there is no point where you would stop.

in a secure system, adding defensive layers is constrained by the amount of code required to implement them. at a certain point, these layers can become an attack surface, and therefore a liability.


One thing my team is working on and “waiting” for is the SNI Alt-Svc extension to be adopted.

If you haven’t read the spec it’s pretty fascinating.

https://tools.ietf.org/id/draft-bishop-httpbis-sni-altsvc-02...

Working with Alt-Svc and being able to set the SNI, which piggybacks on some of the OP's encrypted SNI work, is pretty cool tech. It's all early, hard to get working, and not supported by most browsers, but it's going to allow for some interesting future security and trust options.


Would it not be simpler to issue an SSL certificate for the IP address for the outer connection (kind of like how EAP works with inner and outer identities) instead of relying on a DNS RR?


Just curious: would it enable Signal and other tools to avoid censorship by doing domain fronting without the cloud provider (e.g. AWS CloudFront) noticing?

e.g. https://news.ycombinator.com/item?id=16970199

https://news.ycombinator.com/item?id=16868564


Arguably it's no longer domain fronting, it's just connecting to a server that serves both sites without saying which one you want. So it's better, because it doesn't break any rules. That requires AWS et al. to support SNI encryption, though.

What I wonder is about downgrade attacks. Browsers will probably have to support clear-text SNI for a long time - could censors abuse this by making it seem that no server supports encrypted SNI? Or is there a digitally-signed way for servers to prove that they support it, before the browser sends the host?


The intent is that you'd be using a secured DNS channel such as DoH or DoT to obtain your DNS results - so an adversary can't rewrite the DNS records.

This proposal supposes that when we've decided to talk to example.com we may as well ask a DNS query that requests not only 'A' and 'AAAA' records (IPv4 and IPv6 address) but also an encrypted SNI key. If such a key is available we can use it to prove to the server (which set that key) which name we wanted, while eavesdroppers are none the wiser.
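As a rough sketch of that lookup: Cloudflare's early ESNI deployment published the ESNIKeys blob as a base64 TXT record at `_esni.<name>` (the draft also envisions a dedicated record type). The resolver callable here is a stand-in for a real DoH/DoT client, and all names and values are illustrative.

```python
import base64

def fetch_esni_keys(hostname, resolve_txt):
    """Look up the draft ESNIKeys blob for a hostname.

    resolve_txt is any callable mapping a DNS name to a list of TXT
    strings (a real client would do this over DoH/DoT so the lookup
    itself can't be rewritten in transit). Returns the raw ESNIKeys
    bytes, or None if the record is absent -- in which case the client
    falls back to cleartext SNI.
    """
    try:
        records = resolve_txt("_esni." + hostname)
    except LookupError:
        return None
    for txt in records:
        try:
            return base64.b64decode(txt, validate=True)
        except ValueError:
            continue  # ignore malformed records
    return None

# Offline demo with a stub zone standing in for a DoH resolver:
stub_zone = {"_esni.example.com": [base64.b64encode(b"\xff\x01fakekeys").decode()]}
keys = fetch_esni_keys("example.com", lambda name: stub_zone[name])
```

Note that the graceful `None` fallback is exactly where the downgrade concern raised elsewhere in this thread lives.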

If the adversary can eavesdrop DNS they can already speculate from our query which host we wanted to talk to, but securing DNS is a problem for which there are known solutions.


So censors can stop this by preventing the use of (non-controlled) DoH and DoT servers?


No, the owner can sign the DNS records and also sign the absence of a record, and if the censor tries to tell the browser that there is no SNI key, without being able to produce the signature to prove that, the browser can just treat this as a MITM attack. This only works on signed DNS entries, though.


But they just want to block it, so if the browser and/or app treats it as a MITM attack, that's fine from the censor's POV.


There is a difference between inciting end point devices to drop down to less secure protocols, and alerting the user that you are running an active attack.


This is already solved through things like HSTS preloading.


Apps and other machine to machine can require encrypted SNI pretty easily. Pinning of various other kinds (not just in the binary, but via headers or a list in a browser) solves it too.


> Apps and other machine to machine can require encrypted SNI pretty easily.

Yes, but then the censor can distinguish these apps from a regular client, and block them. This only works if browsers do it too, so that the app can't be distinguished.


I remember that in TLS 1.3 a middlebox cannot modify a single byte of the negotiation without the connection failing: each side's Finished message carries a MAC computed over the entire handshake transcript, which both server and client verify. TLS 1.2 has something similar, but the coverage is narrower (it only covers parts of the handshake, such as the cipher suites).
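The idea can be sketched with a simplified transcript MAC. This is not the real TLS 1.3 key schedule (which derives the finished key via HKDF and uses the negotiated hash); the key and messages below are illustrative.

```python
import hashlib
import hmac

def finished_mac(finished_key, transcript_msgs):
    """Simplified TLS-1.3-style Finished check: an HMAC over a hash of
    every handshake message seen so far. Any byte a middlebox changes
    alters the transcript hash, so the peer's verification fails."""
    transcript_hash = hashlib.sha256(b"".join(transcript_msgs)).digest()
    return hmac.new(finished_key, transcript_hash, hashlib.sha256).digest()

key = b"demo-finished-key"
msgs = [b"ClientHello...", b"ServerHello...", b"EncryptedExtensions..."]
mac = finished_mac(key, msgs)

# A middlebox that rewrites even one byte of ClientHello is caught:
tampered = [b"XlientHello...", b"ServerHello...", b"EncryptedExtensions..."]
```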


The cloud provider will notice when they get blocked in a country.


Does anyone know Google's position on this? I tried finding a relevant Chromium issue but failed.


"Google's position" includes a lot more than just some Chromium developers, and even Chrome is big enough for there to be internal tension between teams focused on end-user security (e.g. encrypting SNI) and real world performance.

Google is a cloud operator these days, and as such if something like this is to be of much use they'll have to eat the consequences at the other end, which means their product (Cloud servers) working less well for uncontroversial services than it might if they let censors do as they please. Basically for a cloud operator or CDN choosing to actually implement a working encrypted SNI solution (whatever it is) means now all your uncontroversial sites are obliged to tell censors "I'm Spartacus" and reap the consequences. If the censor is a single place of business or a micro-state maybe you can just suck it up. If it's Russia then you've just decided you want to annoy all your paying customers on a point of principle. I believe the word for that in Yes, Minister was "Brave".

Ultimately, for this to deliver much value (and if it doesn't, why bother at all?) it needs to be something every major cloud provider and CDN does for all the sites they host. Which means from somewhere they're all going to need to find a lot more backbone than is needed for, say, TLS 1.3 itself.

I hope it works, but I fear it will not.


Google and AWS are each large enough to make blocking them a very expensive proposition.

I also get the sense that Google is pretty good at prioritizing long-term goals such as a fast and secure internet over any short-term considerations.


The linked Twitter thread mentions that BoringSSL (maintained by Google, used in Chromium) has an implementation already.


Right, I forgot BoringSSL was used in Chromium. Thanks!


Can we do something about the certs? There are DPI that can block websites by cert CN/SAN.


This is sorta it. SNI is the only unencrypted part that leaks the server hostname. CN/SAN blocking usually involves a middlebox that decrypts the connections, so there is nothing to be done here.

If somebody can MitM your encrypted connection to both server and DNS, encrypted SNI stops working to my understanding.


Encrypted SNI is about protecting you from people who aren't in your list of trusted CAs.

If someone is in your trusted CA list, why do you want protection from them? If you want protection from them, remove them from your list of trusted CAs.


IIRC after ServerHello the cert was given to client in clear text.


This changed in TLS 1.3. The server cert is now encrypted.


That's cool. Is it possible to force google.com to use TLS 1.3?


If you insist upon talking TLS 1.3 draft 23, which is the last substantive change before the draft went to the RFC queue, google.com is perfectly happy to talk TLS 1.3 draft 23.

TLS 1.3 has downgrade detection, so if a middlebox tries to downgrade you (e.g. to TLS 1.2) without proxying the entire connection the TLS 1.3 implementation in your client will spot that and reject the connection.
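That downgrade detection works via sentinel values from RFC 8446 section 4.1.3: a TLS-1.3-capable server that ends up negotiating an older version plants a marker in the last 8 bytes of its ServerHello random. A minimal sketch of the client-side check (variable names are mine):

```python
import os

# Sentinels from RFC 8446 section 4.1.3.
DOWNGRADE_TLS12 = b"DOWNGRD\x01"
DOWNGRADE_TLS11_OR_BELOW = b"DOWNGRD\x00"

def downgrade_detected(server_random: bytes) -> bool:
    """A TLS 1.3 client that offered 1.3 but was answered with an older
    version checks the tail of the 32-byte ServerHello random; finding
    a sentinel means a 1.3-capable server was talked down by something
    in the middle, so the client aborts the connection."""
    return server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_BELOW)

honest_random = bytes(32)  # fixed demo value, no sentinel
downgraded_random = os.urandom(24) + DOWNGRADE_TLS12
```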

Proxying is possible (including with a downgrade) if you trust the proxy. So, don't.

If you have a recent-ish version of Chrome or Firefox you are already using all this.


No more.


SNI is key to HTTPS virtual hosting. If it's encrypted, how will the server know where to go? More importantly, why not just use the "Host" header inside HTTPS if you need SNI?

I mean, if SNI is encrypted then we don't need SNI at all; just use the Host field inside the HTTPS request, which is encrypted already and carries all the info you need to know. Am I missing something?

Yes, SNI could be encrypted in the ClientHello, outside the TLS handshake proper, but why? Will it be useful to defend domain fronting? I don't think so.


The purpose of SNI is to allow the server to pick the right proof of identity (usually an X.509 certificate) for the site the client actually wants to access. This needs to happen before the HTTP session starts, so we can't wait for a Host: header.

Yes, the intent is exactly to encrypt SNI during the initial ClientHello, in particular the public key used will be found in DNS alongside the server's IP address(es).

The key used will be the same for some set (perhaps all) of names hosted on this IP address, so yes it achieves the same goal as "domain fronting" but without tricks.
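The "pick the right proof of identity" step above is just a lookup keyed on the (possibly decrypted) SNI value; a toy model, with made-up hostnames and certificate labels standing in for real key pairs:

```python
# Toy model of server-side certificate selection driven by SNI.
VHOSTS = {
    "a.example": "cert-for-a.example",
    "b.example": "cert-for-b.example",
}
DEFAULT_CERT = "cert-for-default-host"

def pick_certificate(sni_hostname):
    """Runs during the TLS handshake, before any HTTP Host: header
    exists -- which is why the name must travel in the ClientHello."""
    if sni_hostname is None:  # client sent no SNI at all
        return DEFAULT_CERT
    return VHOSTS.get(sni_hostname, DEFAULT_CERT)
```

With encrypted SNI the same lookup happens, just after the server decrypts the name using the key it published in DNS. (If I recall correctly, Python's `ssl` module exposes the real-world version of this hook as `SSLContext.sni_callback`.)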


This doesn't make much sense to me, I wonder if the senior management at these firms understand what the engineers are doing.

You could already do something approximating encrypted SNI using domain fronting. Connect to boring domain A via SSL, then send an HTTP request with a Host header set to your actual target domain B. Load balancing HTTP reverse proxies in the clouds would happily use the encrypted domain in the Host header, ignoring SNI in the TLS headers. This started being widely used to evade censorship and build "unblockable" services, because to block them you'd have to block all of Cloudflare or Akamai or Google.
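The trick is simply that the two layers name different hosts: the censor sees only the TLS-level SNI, while the CDN routes on the encrypted HTTP Host header. A sketch of just the request construction (hostnames illustrative; the actual TLS connection is omitted):

```python
def fronted_request(front_domain, real_host, path="/"):
    """Return (tls_sni, http_request_bytes) for a domain-fronted fetch:
    front_domain goes in the cleartext ClientHello, real_host travels
    encrypted inside the HTTP request. Providers killed fronting by
    refusing to route when SNI and Host disagree."""
    request = (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path, real_host).encode()
    return front_domain, request

sni, req = fronted_request("boring.example", "blocked.example")
```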

These firms all, as one, banned domain fronting. Turns out the bosses wanted access to big markets more than they wanted censorship resistance.

So why do these developers think it'll work out any differently this time, with slightly different packet formats? First company to enable encrypted SNI will get blocked at the national level and will blink. The others will look at it and probably never try.

If they're serious about making sites unblockable by domain, the first step is to re-activate support for domain fronting and make a public commitment to keep it.


> This doesn't make much sense to me, I wonder if the senior management at these firms understand what the engineers are doing.

When a bunch of well-respected people decide to do something which you don't understand, maybe your first instinct should be to ask what you don't know rather than assuming they're participating in an IETF process which will be shut down?

In this case, you might start by questioning whether there's any other explanation for not supporting domain fronting in the past other than support for censorship (hint: Google gave it at the time), and whether those sound engineering reasons might in fact have led to something like this as an official supported alternative.


Google did not give any engineering reasons at the time. That's misleading. Here's what they said:

"Domain fronting has never been a supported feature at Google, but until recently it worked because of a quirk of our software stack," a Google spokesperson explained in a statement emailed to The Register. "We're constantly evolving our network, and as part of a planned software update, domain fronting no longer works. We don't have any plans to offer it as a feature."

This might superficially sound like domain fronting stopped working accidentally, as part of other changes, but notice how carefully it's phrased. They made an update. It stopped working. Nothing in this phrasing would be false if they made a business decision to kill it and did so.

I find this explanation extremely likely for several reasons:

1. It happened just when fronting was being adopted by Signal, a high profile app that would have caught attention.

2. All the other big players made the same change, again, prompted by specific events related to anti-censorship.

https://www.bamsoftware.com/papers/fronting/

GreatFire speculated that the attacks were precipitated by the publication of an article in the Wall Street Journal that described in detail domain fronting and other “collateral freedom” techniques. The interview associated with the article also caused CloudFlare to begin matching SNI and Host header, in an apparent attempt to thwart domain fronting.

This explanation turned out to be correct. CloudFlare said:

Among other network service providers, it's clear domain fronting could be awkward. "Cloudflare does not support domain fronting," a Cloudflare spokesperson said in an email to The Register. "Doing so would put our traditional customers at risk as it would mask banned traffic behind their domains."

Note that their explanation of why they killed it would also apply to encrypted SNI.

3. I worked at Google for a long time and am very familiar with their serving infrastructure. I remember noticing that resolved hostname and host header didn't have to match a decade or more ago. This ability lasted a very long time, right up until the moment people started using it in ways that upset governments. Then it went poof.

I can believe that these particular 4 players might have had a change of heart. Mozilla doesn't run any CDN or third party web services of note, and has nothing to lose from implementing this. Nor does Apple. Cloudflare and Fastly are both small firms who might have genuinely changed their minds, although if they have they should state explicitly that they are newly willing to let their "traditional customers mask banned traffic".

But I'm also not naive: it's possible some people are thinking, better ask for forgiveness than permission.


> "Doing so would put our traditional customers at risk as it would mask banned traffic behind their domains."

> Note that their explanation of why they killed it would also apply to encrypted SNI.

I'd say that's an interesting interpretation. What they likely meant is that if govs/orgs want to block X and X starts using domain fronting via domain Y, there's a chance Y starts getting blocked. (basic SNI/DNS filtering) The same idea does not apply to the encrypted SNI - there's no damage to a random domain, unless the whole provider range gets blocked.


But the circumvention can use any domain and switch repeatedly, or dump lots of them using various techniques and randomly select. It doesn't need to be the case that domain Y is static.

In particular, note that encrypted SNI could just be blocked at the network level, thus forcing fallback to regular old TLS. But domain fronting can't be blocked like that.


> It doesn't need to be the case that domain Y is static.

That's true, but I don't think it's relevant to what the providers want to prevent. Even if the domain changes based on some pool, if Y is used at some point, Y could be blocked as a collateral damage.

Since Y pays the provider, the provider cares about them not being blocked because of a functionality they enable.


I don't think this is a correct assessment. Domain fronting / boring domains can still be blocked, and this is precisely what caused the whole Signal / AWS issue recently. This is a commercial risk to the domain holder, and to any organisation with shareholders it makes sense for them not to want their domain to be used for this purpose.

The engineers are looking for an alternative solution which pleases both camps: no more domain fronting, and no more possible blocking of domains by adversaries.

I think it's a bit idealist to say that if they're serious about making sites unblockable, they should put their commercial interests aside. This will not happen. But they are working on an alternative solution where both parties (engineers and shareholders) are happy, and I can only support that.


It's the same thing. How do you think encrypted SNI will play out? The IP address is still available, so this only helps if the IP address you're connecting to hosts many domains, some of which are desirable to the censors to keep unblocked. In practice this means big CDNs and hosting services, of which there are only a few.

The reason they stopped domain fronting wasn't technical, it was business related. If these engineers can't win the internal political battles to keep domain fronting working, what makes them think they can win the exact same battle this time around?


Blocking entire IP blocks is a much more crude weapon and definitely an improvement over the current situation. What you're presenting is a logical fallacy; just because the proposed solution doesn't tackle all the problems you want solved, doesn't mean it's not worth doing.


I was thinking the same at first, but let provide a more optimistic view. It has a few advantages over domain-fronting: it has a spec, does not single out a single domain (google.com or souq.com for instance), and has two CDN providers behind it, which means that there will be at least some kind of competition.

It can be a differentiating factor when one of the three major cloud providers is losing steam and want to have some privacy-related PR.


I think it's worth pointing out the repercussions of this and DNS-over-HTTPS.

Every corporation which does SNI scanning for web filtering will require every user to insert the company's CA cert in every client they use. This then also forces them to make a choice between requiring all apps in all networks to do this, or whitelisting certain parts of the network. Not only does this reduce their security, but it limits how apps can work on their networks.

Authoritarian governments will also start requiring this. I could even see federal agencies for non-authoritarian governments making a stink over it on law enforcement grounds. They already hate encryption. If they can't force you to expose what sites you are visiting, they may force new legislation to require all web service providers to provide logs to law enforcement, which is a much worse privacy violation than just seeing what domain you were hitting.

I think the better solution to this would have been to improve server-side support for VPNs and Tor. Tech does not exist in a bubble, and rights can be taken away when they conflict with the interests of authority.


This sounds a lot like the argument we have already seen from the anti-"TLS everywhere" crowd. The claim is something like "If people start using HTTPS even for sites which [supposedly] don't need encryption, then governments will get annoyed at not being able to spy on people any more, so they'll finally force everyone to install a root CA cert that will be used to MitM their connections".

The logical extension of this line of thinking is that the tech community should never provide any extra security to anyone, in case somewhere some malicious actor is angered into retaliating against a certain group of users.

If you think that there are companies that rely on surveilling DNS requests (but don't already intercept and decrypt all their employee's web traffic) then you can campaign for browsers and OSes to have an option to turn off new security features like DNS-over-HTTPS/TLS and Encrypted SNI, but please don't demand that the whole internet go without these basic protections because of the internal policies of a few corporations.


My argument is not that the government will require root CAs for all clients due to a lack of surveillance options. My argument is that without the ability to enforce the law, the government may force a change that allows them to enforce the law.

The logical extension would not be "there should be no security features", and this is obvious, because the whole problem was not the existence of security features. The problem was the lack of ability to enforce policies, which organizations (companies, governments) are often required to do. I think the logical extension should be that security features should be developed with policy enforcement capabilities in mind.

Also, it's not logical to conflate this privacy feature with security features. Secure connections do not require SNI hiding at all. DNS-over-HTTPS is closer to a security feature, because without enabling DNSSEC, exploits are possible to the DNS which can impact non-TLS connections. That has a much easier workaround, which is to point DNS-over-HTTPS to a local provider.


> They'll probably issue an NSL to some cert vendor, or just tell the NSA to extract their keys, and then listen quietly to all secured connections.

There's no "quiet" in this approach. The cert vendors do not have the certificate's private keys; all the server operators send to the cert vendors is the certificate's public keys. Therefore, the only way to listen on the secured connections through a cert vendor is to use the cert vendor's private keys to sign a counterfeit certificate, which is presented to the browser. And at this point, the browser has everything it needs to prove that the cert vendor was compromised.

There are also several other gotchas: Certificate Transparency can force them to publish their counterfeit certificates to the whole world; Extended Validation is allowed only from some cert vendors (and requires Certificate Transparency); and last but not least, this (like all MITM) breaks client authentication (client certificates).

Finally, this only applies to cert vendors under USA jurisdiction. An NSL has no power over other countries.


You're right, there are ways to detect it, though I don't think browsers today actually check the cert transparency log or CAA record when making requests. Without these two checks, valid certs created by a MITM tool using a CA's keys would pass without flagging any validation errors.


Even if most browsers don't check, it takes just one single person to expose the whole operation. This provides a strong incentive to use compromised cert vendors only for the most important operations, instead of indiscriminately against everyone: each use of a counterfeit certificate risks "burning" the whole cert vendor.

And as you say, browsers today might not check these things (and checking CAA doesn't make much sense for a browser, it's intended to be checked by certificate issuers). But future browsers, browsers customized with some third-party extensions, or non-browsers might check.


Chrome does check that all certs (at least [the ones issued after April 30][1]) are recorded in a Certificate Transparency log. Or rather, it checks that multiple logs have promised to include the certificate (via a signed certificate timestamp); the code that actually queries the CT monitors to verify that the certificate was included [isn't finished yet][2].

Browsers don't and can't check CAA records, because that's not how CAA is supposed to work. CAA is only enforced at the time of issuance. If you temporarily switch your CAA records to allow issuance for an hour, issue a cert, then switch them back to block all issuance from all CAs, the cert you issued remains valid.

[1]: https://groups.google.com/a/chromium.org/forum/#!msg/ct-poli...

[2]: https://bugs.chromium.org/p/chromium/issues/detail?id=506227


With Expect-CT, sites can opt-in for browsers to report anything suspicious. Not sure how many browsers actually support that though.


Currently only Chrome (and presumably other Chromium-based browsers?) does.


If this makes surveillance more visible it's already a huge societal win.


> federal agencies for non-authoritarian governments

You should correct it to _national_ agencies for non-authoritarian governments. Unitary states are the majority in the world.


  Every corporation which does SNI scanning for web
  filtering will require every user to insert the
  company's CA cert in every client they use.
Just require your employees to route port 443 traffic through a proxy, making a CONNECT request.

It doesn't prevent domain fronting, of course, but neither did SNI scanning.
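For reference, the CONNECT line a browser sends to an explicit forward proxy looks like this, and it's why the approach works for the employer: the target hostname appears in cleartext to the proxy even once SNI and DNS are encrypted on the far side of the tunnel. (Hostname below is illustrative.)

```python
def connect_request(hostname, port=443):
    """Build the HTTP CONNECT request a client sends to a forward
    proxy before tunneling TLS through it. The proxy -- i.e. the
    employer -- sees the hostname here regardless of ESNI/DoH."""
    return (
        "CONNECT {host}:{port} HTTP/1.1\r\n"
        "Host: {host}:{port}\r\n"
        "\r\n"
    ).format(host=hostname, port=port).encode()
```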


What would this accomplish? I assume the CONNECT string would simply be the IP:PORT of the destination, the same as with SNI hiding, so it doesn't seem like it would make a difference.


I suppose that's a good point; although today the CONNECT command uses a hostname, a browser could change to using IP:PORT if they were serious about keeping hostnames secret.


Authoritarian governments will just block by IP. Yes, they'll block all CDNs as a consequence and big parts of the internet won't work. There was a time when Kazakhstan blocked dozens of Google's IPs because some opposition figures hosted their website on Blogspot, and Gmail was broken. It lasted months if not years; nobody rebelled because of it.

All that encryption will lead to people using VPNs en masse. This will lead to laws directly prohibiting VPN usage, except for explicitly granted permissions for selected businesses. This will lead to internet segmentation: a Russian internet, a Chinese internet, and so on; the world wide web will cease to exist.


Forcing the installation of a root CA isn't a viable option for all but the largest authoritarian governments (China and Russia).

Kazakhstan attempted [1], but abandoned the idea, likely since it was unrealistic to expect widespread adoption of the certificate.

[1] https://news.ycombinator.com/item?id=10663843


Indeed, raising the bar for what it takes to make these kinds of installations is going to prevent a lot of small countries from doing it.

And eventually, it'll probably make it infeasible in China/Russia. Especially, as the population becomes more technology literate.


Well, you could make a proxy that only allows HTTPS traffic through if it's signed by your root cert, then MITM-resign all upstream certs.

...ah, and that would break anything using cert pinning.

HTTPS feels so utterly clunky.


A much easier solution is for the government to circumvent PKI entirely. Basically you issue a National Security Letter to some cert provider and require them to give you their keys. Now you can generate valid certs as that provider. You use this in combination with a MITM tool to auto-generate certs for any requested domain, and those certs are implicitly trusted by all clients because they trust that CA's keys. So the state can now MITM and listen to all secured connections, and the only giveaway is that the cert is not in the certificate transparency log, or if the CA used does not match the domain's CAA record (when it exists).


> So the state can now MITM and listen to all secured connections, and the only giveaway is that the cert is not in the certificate transparency log, or if the CA used does not match the domain's CAA record (when it exists).

Or when the domain uses HPKP to pin its certificate to a different CA. When the MITM CA has been added manually to the browser's root key store, most browsers allow HPKP bypass by that CA unless configured otherwise; but if the MITM is done by compromising one of the browser's default CAs, the browser won't allow a bypass.

Or if the certificate was supposed to be an EV certificate, which can only be issued by a limited number of CAs, and the compromised CA is not one of these. And even if an EV CA is compromised, AFAIK some browsers require the certificate to be in a Certificate Transparency log, and they also require that the certificate carry proof of its submission to the CT logs.


1) HPKP is dead, Chrome 67 removes support. 2) No client will flag an error if the new cert is non-EV. EV is completely useless as a security feature. 3) AFAIK no browser checks the cert transparency log.


And how does that "forcing" work? You just install it voluntarily because you get annoyed by certificate errors on every other site?


Every step towards "encrypt all the things" increases the cost of that interception. Back when the web was plaintext-only, a mirror port on a switch was more than enough to see everything. With SSL and TLS up to 1.2, they have to MITM to see anything more than the IP address, the SNI, and the server certificate. With TLS 1.3, they have to MITM if they want to see the server certificate. With TLS 1.3 plus encrypted DNS plus encrypted SNI, all they have without a MITM is the IP address. And on the MITM front, several measures have gradually made it harder, with a grandfathered exception for manually added root CAs.
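To make the "they can see the SNI" point concrete: with TLS up to 1.2 the server name travels in the clear in the ClientHello, so any passive on-path box can read it with a few bytes of parsing. A minimal stdlib-only sketch (a hand-built ClientHello stands in for captured traffic; real captures would need the same length-walking but with more error handling):

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal TLS 1.2 ClientHello carrying a plaintext SNI extension."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack(">H", len(name)) + name   # type host_name
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack(">HH", 0x0000, len(sni_list)) + sni_list
    extensions = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"             # client_version: TLS 1.2
        + b"\x00" * 32          # random (zeroed for the sketch)
        + b"\x00"               # session_id length 0
        + b"\x00\x02\x00\x2f"   # one cipher suite
        + b"\x01\x00"           # one compression method (null)
        + extensions
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(record: bytes):
    """Walk a plaintext ClientHello and return the server_name, or None."""
    if record[0] != 0x16:        # not a handshake record
        return None
    p = 5                        # skip the 5-byte record header
    if record[p] != 0x01:        # not a ClientHello
        return None
    p += 4                       # handshake type + 3-byte length
    p += 2 + 32                  # version + random
    p += 1 + record[p]           # session_id
    p += 2 + int.from_bytes(record[p:p+2], "big")   # cipher suites
    p += 1 + record[p]           # compression methods
    ext_total = int.from_bytes(record[p:p+2], "big")
    p += 2
    end = p + ext_total
    while p < end:
        ext_type = int.from_bytes(record[p:p+2], "big")
        ext_len = int.from_bytes(record[p+2:p+4], "big")
        p += 4
        if ext_type == 0x0000:   # server_name extension
            # skip list length (2 bytes) and entry type (1 byte)
            name_len = int.from_bytes(record[p+3:p+5], "big")
            return record[p+5:p+5+name_len].decode()
        p += ext_len
    return None
```

This is exactly why encrypting the SNI matters: the field sits at a fixed, walkable position in an unencrypted handshake message.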

Corporations and governments have to balance the costs and the benefits of breaking encryption. The costs have increased, which increases the chance that they decide that the costs outweigh any perceived benefits.


> Every step towards "encrypt all the things" increases the cost of that interception.

That's not the case at all, and a very dangerous idea. Some steps drive increased centralization. Increased centralization makes interception and filtering easier.

The better solution is to push domain authentication to after the secure channel is established. That doesn't protect against active MITM snoopers, but neither does TLS ESNI. Worse, TLS ESNI will drive people to use centralized DNS-over-TLS service providers.

Mozilla+Cloudflare's "free" DNS service barely made it out of the gate before it was hijacked by the Great Firewall (not even simply blocked). See https://borncity.com/win/2018/05/30/cloudflare-dns-service-1... and https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/

Of course, with DNS-over-TLS you would know (maybe) that there was a MITM. But that's no different than DNSSEC. And there's nothing you could do about it, anyhow.

Securing DNS, and specifically keeping DNS queries private, is NOT a solved problem. Indeed, it's not even a solvable problem because of the nature of DNS--the need to centrally coordinate registration of human readable identifiers. The only real "solution" is anonymization via techniques like onion routing (e.g. Tor), which anonymizes the requester, not the responder.

Advocates will tell you that TLS ESNI improves requester anonymity by aggregating users behind "trusted" intermediate resolvers, but that's largely based on conjecture and very little evidence--in particular without any showing that usage patterns won't naturally reconfigure to either return everybody to the status quo ante, or make everybody worse off on average.

TLS ESNI is a wild goose chase and, looking back 10 years from now, will have simply served to increase the complexity of the web stack and decrease service and software diversity.


They could always do IP blocking and tracking.


Doesn't really work if half the internet is covered by CloudFlare's free CDN.


Is this really done by these companies or just by developers from these companies in their spare time?


This has 100% support from the most senior quarters at Cloudflare. Can’t speak for the others.


What do you think the time frames are for something like this to make it into all the major browsers?


You'd need to get it into all servers too, which is probably a very long time frame.


This design only requires the hosting server (e.g. Cloudflare, CDN, ...) to support encrypted SNI. It can be entirely transparent to user deployed servers. So it might well be here much sooner than you think!


Direct link to the tweet thread

https://twitter.com/grittygrease/status/1018566026320019457

Link to the RFC draft-00 "Encrypted Server Name Indication for TLS 1.3" on which the work is based

https://tools.ietf.org/html/draft-rescorla-tls-esni-00
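For anyone skimming the draft: the core idea is that the server publishes a semi-static public key in DNS (an ESNIKeys structure in a _esni record in draft-00), the client does a key agreement against it, and the SNI travels encrypted under the derived key. A toy stdlib-only sketch of that shape, with deliberately insecure parameters and a hash-keystream standing in for the draft's real AEAD and key schedule:

```python
import hashlib
import secrets

# Toy finite-field DH parameters. The real draft uses the TLS 1.3 key-exchange
# groups; this tiny prime (largest prime below 2^64) keeps the sketch readable
# and is NOT secure.
P = 0xFFFFFFFFFFFFFFC5
G = 2

def keystream(shared: int, n: int) -> bytes:
    """Expand the shared secret into n bytes (toy stand-in for HKDF + AEAD)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(shared.to_bytes(16, "big") + bytes([counter])).digest()
        counter += 1
    return out[:n]

# Server side: semi-static keypair; the public half is what gets published in DNS.
server_priv = secrets.randbelow(P)
server_pub = pow(G, server_priv, P)

# Client side: look up server_pub via DNS, then derive a secret and encrypt the SNI.
client_priv = secrets.randbelow(P)
client_pub = pow(G, client_priv, P)          # sent in the ClientHello
shared_c = pow(server_pub, client_priv, P)
sni = b"secret.example.com"
encrypted_sni = bytes(a ^ b for a, b in zip(sni, keystream(shared_c, len(sni))))

# Server side: recompute the same secret from its private key and decrypt.
shared_s = pow(client_pub, server_priv, P)
recovered = bytes(a ^ b for a, b in zip(encrypted_sni, keystream(shared_s, len(sni))))
```

Note how this is consistent with the comment above about deployment: only the DNS record and the terminating server need to know the semi-static key, so origin servers behind a CDN don't have to change anything.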


Thanks, the UI of this Stackshare website is not that great.


I just get an almost blank page.


Me too. It has a lot of javascript that I'm blocking.


Yeah, strange.

I submitted the direct Twitter link (with the exact same title) about 5h before this StackShare link was submitted: https://news.ycombinator.com/item?id=17537176.


Is this a way for the ad-industry to evade pi-hole or similar technology?


They would spend more money developing this than they would gain. Nobody, in a statistically-relevant sense of the word, uses pi-hole.


What do you mean, nobody uses pi-hole?


I mean very few people use it, to the point that the ad companies aren't really losing anything from it.


pi-hole doesn't intercept/inspect SNI; it blocks at the DNS level.
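Right, the mechanism is a DNS sinkhole: answer queries for blocklisted names with an unroutable address and the browser never even opens a connection. A minimal stdlib sketch of that idea (the blocklist entries and the query builder are made up for illustration; pi-hole itself is a full forwarding resolver):

```python
import struct

BLOCKLIST = {"ads.example.com", "tracker.example.net"}   # hypothetical entries

def make_query(name: str) -> bytes:
    """Build a raw DNS A/IN query packet for testing the sinkhole."""
    q = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # header, 1 question
    for label in name.split("."):
        q += bytes([len(label)]) + label.encode()
    return q + b"\x00" + struct.pack(">HH", 1, 1)            # root, qtype A, class IN

def qname(data: bytes, offset: int = 12):
    """Decode the query name (queries don't use compression)."""
    labels = []
    while data[offset]:
        n = data[offset]
        labels.append(data[offset+1:offset+1+n].decode())
        offset += 1 + n
    return ".".join(labels), offset + 1

def sinkhole_response(query: bytes):
    """Answer blocklisted names with A 0.0.0.0; return None otherwise
    (a real resolver would forward unmatched queries upstream)."""
    name, end = qname(query)
    if name not in BLOCKLIST:
        return None
    question = query[12:end+4]                     # name + qtype + qclass
    header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    answer = (b"\xc0\x0c"                          # pointer to the name at offset 12
              + struct.pack(">HHIH", 1, 1, 60, 4)  # type A, class IN, TTL 60, rdlen 4
              + bytes(4))                          # 0.0.0.0
    return header + question + answer
```

Which is also why encrypted SNI is mostly orthogonal to ad blocking: this approach only stops working if clients switch to DNS-over-HTTPS resolvers that bypass the local sinkhole.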



