> If they’re willing to convert all their customers to ESNI at once
Why does it seem like this is over-engineering at its finest? Not only are CDNs now part of the problem/solution space, but they are now the ones dictating.
It is now that much harder to diagnose issues when they do crop up. Instead of just checking ping or nslookup, you've got to see whether the DNS-over-HTTPS endpoint, the DNS record itself, the host, the client, or any number of other steps is broken.
We've completely removed the ability for a power user to diagnose before calling their resident IT professional.
At the moment if you want ESNI it looks like you have to use Cloudflare, but the solution to that is to encourage other cloud providers to support ESNI rather than to decry the notion of ESNI.
But why do I have to? I already have a trusted DNS resolver, operated by myself, wired to my OS. Why require the whole DoH Rube Goldberg machinery to let me try ESNI?
There are three well-known trusted public DNS resolvers, run by Cloudflare, Verizon, and Google.
Which of those three would you encrypt your DNS traffic to, if those were the only three options available other than plaintext for all to see?
DNSCrypt (or DNS over TLS or DTLS) is a wonderful alternative that works in-band and works with DNSSEC.
People are also ignoring the consequences of the switch from UDP to TCP.
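For what it's worth, DNS over TLS really is just ordinary DNS carried over a TLS connection on TCP port 853. A minimal sketch, assuming the third-party dnspython package is installed; 1.1.1.1 is used purely as an example DoT server:

```python
# DNS over TLS: the same DNS message, wrapped in TLS on TCP port 853.
# Assumes dnspython (pip install dnspython); certificate-verification
# details are omitted in this sketch.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
reply = dns.query.tls(query, "1.1.1.1", timeout=5)  # TLS-wrapped, port 853
for rrset in reply.answer:
    print(rrset)
```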
DoH isn't about trust. It's about preventing network observers from figuring out what sites you visit by observing the DNS requests you make.
In the UK for example, the current government proposals for yet more Internet censorship assume that they can "just" order ISPs to censor DNS. This, their white paper says, is relatively cheap and so ISPs might even be willing to do it at no extra cost, which is convenient for a supposedly "small government / low tax" party that keeps thinking of expensive ways to enact their socially regressive agenda...
But DoH and indeed all the other D-PRIVE proposals kill that, to censor users with D-PRIVE you're going to have to operate a bunch of IP layer stuff, maybe even try to do deep packet inspection, which TLS 1.3 already made problematic and eSNI skewers thoroughly.
So there's a good chance this sort of thing for _ordinary_ users (the white paper already acknowledges that yes, people can install Tor and it can't do anything about that) makes government censorship so difficult and thus expensive as to be economically unpalatable. "Won't somebody think of the children" tastes much better when it doesn't come with a 5% tax increase to pay for it...
Don't know if there are any tools available right now that will do that for you, but there's no technical reason why it isn't possible.
Yes, apps could theoretically already do this today if the developers are willing to run their own endpoints. However, my guess is this will become vastly easier to do when there are already public DoH endpoints available to connect to.
Same privacy, minus cloud companies that try to insert themselves as middlemen.
Instead of pushing this functionality into OS resolvers and standalone resolvers used by networks, it is being pushed into commonly used applications, with cloud companies providing the other end by default.
ESNI doesn't require DoH, but there's no point in using it without one: if your network can see the DNS records you are asking for, it can check the encrypted SNI against them (it has the same key you are using to encrypt, so it can do the same and match).
Some resolvers (systemd-resolved, ducks and hides) do use a custom API to attach properties to responses. That was the only way to get DNSSEC status, so it can be reused to indicate the upstream protocol too. However, not many applications use that; they rely on the standard gethostbyname(), which doesn't provide anything similar.
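For illustration, this is roughly all the standard resolver API gives an application (a sketch using Python's socket module, but gethostbyname()/getaddrinfo() in C behave the same way):

```python
import socket

# The classic resolver API returns addresses and nothing else: there is
# no field for "was this DNSSEC-validated?" or "which transport did the
# stub resolver use?", which is the limitation described above.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr)
```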
I do agree that pushing DNS functionality into apps instead of the OS level is suboptimal, and I certainly hope that, if Firefox proves that DoH works well, it will be adopted by the major OS's (along with a way to query the OS resolver to check if it's using DoH or not) so that all apps can benefit from it instead of just web browsers that reimplement DNS.
Of course, IIRC Chrome at least (not sure about Firefox) has already been implementing DNS resolution itself for a long time rather than relying on the OS resolver, so the idea of a web browser doing DNS directly instead of relying on the OS is not a new one. I'm not sure why Chrome does this though.
A simple flag to configure this would have done the job. I don't like how browsers are pretending that they have security needs that are special compared to any other application and thus need to pull in the whole network stack and bypass the OS on everything.
It causes duplicate effort if you want to secure your whole network instead of only the browser.
It also limits technology choice. I'm forced to use DoH even though there are other options.
This is not mozilla's decision to make.
It's a different world, true, but technology can't be stopped.
If Mozilla succeeds in being an agent negotiating on behalf of users, all your base might be governed by reasonable contracts.
"We’ve chosen Cloudflare because they agreed to a very strong privacy agreement" . Like, legally agreed? With regular audits and full access for Mozilla people?
Where does that leave me, if it gets baked into my browser?
Of course, if you still don't want to use Cloudflare for DoH you can just configure your favourite resolver in Firefox itself. The blog post you refer to contains detailed instructions on how to do that.
So, where are you left? Right where you are today: you control the DNS resolver on your machine today. With Firefox Nightly you also control the DoH resolver (and can disable it entirely).
You'll probably always be able to run your own. If you so desire.
Privacy is a fragile thing. Is it better putting all my lookups in one basket (CF)? For hiding from the ISP, nothing beats a VPN, and in that case there is no need for ESNI. My point is it's not up to Mozilla to turn my data over to a third party.
And, frankly, if choosing between the ISP and CF (or Google), leaks to the ISP impact your privacy much less. The ISP has no global dataset to run ML over your history, no analytics cookies, no clear-text traffic access.
The world is moving towards more cloud computing, Mozilla can't stop the centralization of the internet. But if they can use collective bargaining to protect consumers that might do a lot of good.
This is a Firefox decision, not something required by the standard:
ESNI is best combined with DoH to prevent snooping (hence Firefox's apparent decision to tie the two features together), but obtaining the ESNI key does not strictly require DoH.
Doing it like this is a great way to end up in interoperability hell down the road when different parties have implemented different versions. I'm not saying they have to wait until it's an RFC, but at least wait for a couple more versions of the draft and let the IETF discuss it a bunch first. This is a big change.
I can see why it might look that way, but actually draft-ietf-tls-esni-01 is the third draft of this document, and has been co-written by at least four named authors including Chris Wood at Apple. Also that "Mozilla employee" was one of the Working Group chairs.
draft-ietf-tls-esni-01 was preceded by draft-ietf-tls-esni-00 (it is usual for early drafts to have zero zero versions)
draft-ietf-tls-esni-00 was preceded by draft-rescorla-tls-esni which was Eric Rescorla's first write-up of this idea
Finally, though this document didn't exist twelve months ago, the "issues and requirements" document did. This document imports the thinking behind that document, it just provides an implementation and now Firefox is testing it.
The reason for the name change is a thing called "adoption". The TLS Working Group agreed by consensus to adopt this piece of work, rather than it just being independent stuff by a handful of people who coincidentally were working group members. When that happens the draft's name changes, to reflect the adoption (removing a single person's name) and sometimes to use more diplomatic naming (e.g. the "diediedie" draft got a name that didn't tell TLS 1.0 to "die" any more when it was adopted).
There are benefits (censorship circumvention) to be reaped, but also great peril.
DNSSEC is another great example. Look around. Nobody in the industry is asking for it (try that "dnssec-name-and-shame.com" site to confirm this), except the IETF and a very short list of companies with a rooting interest, like Cloudflare. In the very short time it's been around, DNS over HTTPS has done more to improve DNS security than 25+ years of DNSSEC standardization ever did. The cart has been dragging the horse here for a long time.
The IETF was never meant to be an Internet legislature adjudicating what features can and can't be supported in protocols. But that's exactly what it has become.
Gradually Nalini's lot discovered a very important thing about the IETF: It is not a democracy. They tried sending more and more people, attempting the same thing that made Microsoft's Office into an ISO standard - pack the room with people who vote how you tell them. But there aren't any votes at the IETF, you've just sent lots of people half way around the world to at best get recruited for other work and at worst embarrass themselves and you.
After they realised that stamping their feet, even if in large numbers, wouldn't get RSA back in TLS 1.3, they came up with an alternate plan for what was invariably named "transparency" (when you have a bad idea, give it a name that sounds like a good idea, see also: most bills before US Congress) but is of course always some means to destroy forward secrecy or to enable some other snooping.
Now, IMNSHO the Working Group did the right thing here by rejecting these proposals on the basis that (per IETF best practice) "Pervasive Monitoring is an Attack". Was this, again, the "Internet legislature" since Nalini and co. wanted to do it and they'd expected as you've described that if they wanted to do it the IETF should just help them achieve that goal?
Well if you're sad for Nalini there's a happy ending. The IETF, unlike a legislature, has no power whatsoever to dictate how the Internet works.
ETSI (a much more conventional standards organisation) took all the exciting "Transparency" work done by Nalini's group and they're now running with it. They haven't finished their protocol yet, but in line with your vision it enables all the features they wanted, re-enables RC4 and CBC and so on. They've published one early draft, but obviously ETSI proceedings (again unlike the IETF) happen behind closed doors.
You are entirely welcome to ignore TLS 1.3 and "upgrade" to the ETSI proposal instead. Enjoy your "freedom" to do this, I guess?
I agree: the 1.3 process is better than what came before it. But it's the exception that proves the rule: the 1.3 process was a reaction to the sclerotic handling of security standards at IETF prior to it.
My point is simple and, I think, pretty obviously correct: you can't look back over the last 10-15 years of standards group work and assume that either IETF approval or multi-party cooperation within IETF is a marker of quality. And that's as it should be: it's IETF's job to ensure interop, not to referee all protocol design. More people should work outside of the IETF system.
There is no way this encrypted SNI could enable censorship circumvention.
It could only help if the large host refuses to cooperate, and in that case the respective ISPs will simply block its entire IP range (if they want to keep operating in the given jurisdiction).
If a browser starts (purposefully) subverting the hosts file or not adhering to resolv addresses, then we've got a bigger problem.
Think of a fat client resolving an address differently than the browser does; that opens all sorts of Pandora's boxes.
Related: it should be possible to have “correct” DNS in userland that behaves as you describe without falling back to the system resolver. In my understanding, the whole point of DNS over HTTPS is to avoid the DHCP-assigned DNS address (and of course to encrypt).
Finally, I’m pretty sure Firefox at least does its own DNS caching. I’ve had to force a reload to pick up DNS changes already visible to the system resolver.
I'm not really sure what benefit there is to doing this compared to DNS over TLS with a resolver like Unbound but I suppose that's a different discussion.
What Firefox seems to be doing, unless I'm mistaken, is running their own resolver that implements DoH/connects to Cloudflare and bypasses OS settings.
I haven't dug into the details yet to see how it interacts with the hosts file.
It does sound like it falls back to the OS if it fails to resolve with DoH, but this solution at first glance appears less than ideal.
Wouldn't it be best if Microsoft/Apple/*nix distros/ISPs/third party nameservers used resolvers and nameservers that support DNS over TLS?
Then end users/administrators could choose who they trust and everything would still be encrypted.
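To picture the fallback behaviour described above (try DoH, fall back to the OS resolver if it fails), here's a rough sketch; the Cloudflare JSON endpoint is just an example, and real browser logic is considerably more involved:

```python
# Sketch: attempt a DoH JSON lookup, fall back to the OS resolver on failure.
# Illustrative only; this is not how Firefox actually implements it.
import json
import socket
import urllib.request

DOH_URL = "https://cloudflare-dns.com/dns-query"  # example DoH JSON endpoint

def resolve(name):
    try:
        req = urllib.request.Request(
            f"{DOH_URL}?name={name}&type=A",
            headers={"Accept": "application/dns-json"},
        )
        with urllib.request.urlopen(req, timeout=3) as resp:
            answers = json.load(resp).get("Answer", [])
            addrs = [a["data"] for a in answers if a.get("type") == 1]  # 1 = A
            if addrs:
                return addrs
    except (OSError, ValueError):
        pass  # DoH failed or was blocked; fall through to the OS resolver
    return [ai[4][0] for ai in socket.getaddrinfo(name, None, socket.AF_INET)]

print(resolve("example.com"))
```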
As a system admin myself: if user applications started overriding the DHCP DNS that I give them, not only could intranet sites break, but I'd start having fights with users about it.
Edit: Rather, not overriding but querying the DoH instead of the provisioned DHCP DNS. I'm no expert in DoH, or how any of that works under the hood.
Further, when/if browsers turn on DoH by default, then I can't really fight users, because they did nothing wrong but use a browser. Suddenly, I can't support a browser or two because of it.
DNS caching by the application is fine, because they made the request to the OS and got the response. That being said, the record's TTL might be violated, since the record has its own TTL and the application cache uses whatever TTL it likes.
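A toy sketch of what TTL-respecting application caching looks like, for anyone picturing it (names here are illustrative; a real cache also needs negative caching, per-type entries, and so on):

```python
import time

class DnsCache:
    """Tiny illustrative cache that honours each record's own TTL."""

    def __init__(self):
        self._entries = {}  # name -> (addresses, expiry timestamp)

    def put(self, name, addresses, ttl):
        # Use the record's TTL rather than a fixed application lifetime.
        self._entries[name] = (addresses, time.monotonic() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        self._entries.pop(name, None)  # expired or never cached
        return None

cache = DnsCache()
cache.put("example.com", ["93.184.216.34"], ttl=300)
print(cache.get("example.com"))
```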
It's the default DNS in the network. Computers do not need to know that it gets encrypted past that point.
Do you think encrypted SNI and NAT will become preferred to using IPv6 for routing because of the privacy benefits of ESNI (either real or imagined, depending on who you trust, since this seems to be relying on centralized CDNs for adoption)?
Edit: I realize ESNI and IPv6 are orthogonal, however, I wanted to ask since I know that lack of IP space and using NAT/SNI are correlated.
There are numerous reasons to bounce direct (non-hostname) requests, including:
- Discourage users visiting via the IP Address, adding that to favorites, sharing it, etc. Making it difficult/impossible to migrate.
- Make it harder for databases to associate that IP Address with your site (for user privacy).
- Security. Ignoring that several modern techniques don't even work with IP Addresses, it also stops someone taking over the IP after you migrate and stealing cookies associated with that address, etc.
Point being, SNI is useful on IPv6. It is useful on single-website servers. It isn't going anywhere.
That's not a reason to reject. That's a reason to issue a redirect. No regular user that would make the mistake of bookmarking an IP instead of a domain is going to know how to use an IP to get there in the first place.
>- Make it harder for databases to associate that IP Address with your site (for user privacy).
You're misunderstanding how these crawlers work. They don't just walk all IP addresses because that's a good way to get an abuse letter and because if there is no redirect to a domain they don't get the domain. These crawlers just follow links like any other and log the IPs they resolve to. The only way this helps user privacy is if there are no links to your site anywhere on the internet.
If you run multiple websites on a single server, for example, you can move your server to another IP address / host / datacenter, update the IP address in your DNS settings, and you're done. If you use IPv6 for this purpose, every domain and subdomain needs to be configured with a new IP address and needs a special line in the DNS config. This leads to administrative overhead, which leads to mistakes.
If I use SNI -- which at this point is fully supported on anything that matters -- I just have to set up all the hosting definitions and make sure my server can find all the certificates.
If I want to use one IP per site without SNI, now I have to also manually manage the mappings of IP-to-certificate for each host, and also be sure they're all synchronized with DNS.
More work, more potential for trouble, and no real benefit.
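To make the comparison concrete, here's roughly what the SNI side of that trade-off looks like in code: one listening address, with the certificate picked per hostname during the handshake. A sketch using Python's ssl module; the hostnames and file paths are placeholders, not anyone's real configuration:

```python
import ssl

# One TLS context per hosted site, keyed by hostname (placeholder paths).
CONTEXTS = {}
for host in ("example.com", "example.org"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(f"/etc/tls/{host}/fullchain.pem",
                        f"/etc/tls/{host}/privkey.pem")
    CONTEXTS[host] = ctx

def pick_context(ssl_socket, server_name, default_context):
    # Called mid-handshake with the (cleartext!) SNI value sent by the client.
    if server_name in CONTEXTS:
        ssl_socket.context = CONTEXTS[server_name]
    # Returning None lets the handshake proceed with the selected context.

default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default.sni_callback = pick_context
# 'default' would then wrap the single listening socket, e.g.
# default.wrap_socket(server_sock, server_side=True)
```

The one-IP-per-site alternative means one such context per address, plus the DNS bookkeeping to keep the two in sync.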
"Tracking Users across the Web via TLS Session Resumption". A snippet from the abstract: "Our results indicate that with the standard setting of the session resumption lifetime in many current browsers, the average user can be tracked for up to eight days. With a session resumption lifetime of seven days, as recommended upper limit in the draft for TLS version 1.3, 65% of all users in our dataset can be tracked permanently."
Not exactly looking forward to TLS1.3, it appears to be a move forward in security but with no (or worse) privacy benefits that I've seen so far.
> seven days, as recommended upper limit
Do we fix this by changing that setting to a few hours?
Edit: the report discusses this: "The recommended upper limit of the session resumption lifetime in TLS 1.3  of seven days should be reduced to hinder tracking based on this mechanism. We propose an upper lifetime limit of ten minutes based on our empirical observations"
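For what it's worth, Python's ssl module doesn't expose a resumption-lifetime knob directly, but a TLS 1.3 server context can reduce or simply stop issuing session tickets, which is the blunt version of the mitigation the paper proposes. A sketch (server-side only; num_tickets needs Python 3.8+):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.num_tickets = 0  # issue no TLS 1.3 session tickets, so there is nothing to resume
# Load certificates and wrap the listening socket as usual from here.
```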
looks interesting, thanks!
Our great great grandchildren will surely figure something out.
Say sometimes I love to visit a very private website for my personal pleasure when I'm alone at night.
Without eSNI, when I type in pornhub.com and hit enter, my buddy Bob, who works for the ISP, immediately knows and is very sure that I'm trying to access none other than pornhub.com. And then, with great confidence, he greedily calls me for a live chat.
Bob is a ... special person. He might tell my mom about the pleasure thing, but not just that, he also secretly tracks my pleasure activities only to figure out the pattern using some sort of weird thing called machine learning, so he can show up in front of my door at the exact right time to share the pleasure with me.
I don't like that.
With eSNI, Bob only knows that I'm accessing 220.127.116.11. But when he tries to access 18.104.22.168:80, he is greeted by a 403 error which says "Invalid Host".
A website may have many IP addresses, and an IP address can serve many websites. Because of that, now Bob can only know MAYBE I'm watching my little pleasure, oh wait, or maybe it's imworkingverylateatnight.com? He just can't be sure now.
You're right that eSNI is a nice to have (though years late) for IPv4, but I and the GP would like to know what we can do to protect our anonymity with IPv6.
ESNI, as it has been developed, essentially requires two other components to work properly:
1) a large-scale CDN
2) a trusted DNS infrastructure (i.e. DNS-over-HTTPS or DNS-over-TLS).
So people are absolutely right that in the distant future, when IPv4-fronted sites go extinct, it may be possible for site hostnames to be correlated to a set of IPv6 addresses. ESNI doesn't and can't solve for that. I imagine that as the internet continues to become more and more centralized, a few large CDNs will host most (or very close to all) internet traffic through a few sets of stabilized anycast addresses (thus obfuscating any individual hostname among many hundreds or thousands of other sites, as they would all correlate to the same IP blocks).
That being said, I still don't understand why it's so important to have the SNI on the "outside" of the tunnel. Seems like we should have another layer before the symmetric key exchange where the SNI is exchanged on its own.
It's a lot of extra hassle to set up dozens of IPv6 addresses when (e)SNI can do the same job.
Moreover, (e)SNI has an advantage over using IP mapping: even if someone snooping on your connection can see that you are connecting to some IP address, they won't be able to determine what site that might be.
If you are simply mapping IPs, they can visit that IP to see what you are visiting.
I have multiple domains hosted on my personal site. Similarly, facebook.com and facebook.co.uk could very well point to the same IPs.
Nobody wants their home or their datacenter machines exposed to the whole Internet all the time.
NAT is a feature, not a bug.
Stateful firewalls still work, have always worked, and work the same in IPv6 as they do in IPv4. Having a public, globally-routable, unique address on your internal machine, whether that's an IPv4 address or an IPv6 one, doesn't mean that anyone can connect to it. It still has to go through your router. That router can be running a stateful firewall.
NAT is awful.
It looks like Cloudflare is including a public key in the DNS lookup, which is used to encrypt the SNI information.
Couldn't this key be stored in a TXT record for normal DNS lookups as well?
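For what it's worth, in the current draft it effectively is a TXT record: the keys are published at _esni.<hostname>, so a plain DNS lookup can fetch them; DoH only matters for hiding that lookup from the network. A sketch assuming the third-party dnspython package, with cloudflare.com purely as an example name that publishes the record while it supports draft ESNI:

```python
import dns.resolver

# The ESNI draft publishes the keys as a TXT record at _esni.<host>.
for rdata in dns.resolver.resolve("_esni.cloudflare.com", "TXT"):
    print(rdata.to_text())  # base64-encoded ESNIKeys structure
```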
It also centralizes DNS requests at Cloudflare's POPs (a company from a mass-surveillance, secret-orders-happy police country, by the way).
No, none of it addresses privacy and security; it probably only makes them worse.
It's time to admit there is no future for privacy and security without overlay networks.
If you are going to block DoH you'll need to block all HTTPS traffic altogether, won't you? I mean, unless you are just blocking traffic to some list of known DoH providers.
This works great in theory, but if it takes off then Google and Cloudflare can simply decide to serve DNS-over-HTTPS requests over their existing service IP space, and you're left with the choice of blocking the internet or allowing encrypted DNS lookups.
As far as the government comments go, you're never going to deploy public infrastructure inside a state and be able to avoid that state, so it's pointless to bring anything about that up.
Otherwise, this sounds suspiciously a lot like DANE, which cert authorities hate, since there would be no use for them.
This thread is about eSNI. Guess what, eSNI can't be done reliably without a way to do DNS that doesn't break when you do anything more interesting than A lookups.
Fortunately, Firefox has a solution for that, DoH.
Wait, which of those two identical problems does it solve? Oh right, both of them.
And, of course, you're misrepresenting Langley's blog post when you suggest that the only reason DANE isn't in Chrome is because of lookup reliability. Readers can just read the piece for themselves (it's good, and interesting!) and come to their own conclusions.
"Let's get rid of CAs!" Sounds great. "Let's replace the CAs with a less accountable set of companies and governments that are harder to punish for bad behavior" doesn't sound so great. But thats what DANE is.
The common counter-point is that, because nearly every CA does domain validation, the owners of the DNS keys are already capable of getting arbitrary certs. Therefore, DANE does not give DNS key owners more power; all it does is take out the CAs as a potential failure point.
And really, as long as a cert attests that "You are talking to the owner of this domain" the power over certs is always going to lie with those who control the DNS system.
A possible response is that just taking power from the CAs but leaving power with the DNS key owners is not good enough. This does make some sense. It is not entirely clear to me how we would take this power from the DNS system, though. The best bet is CT logs, which allow after-the-fact detection of any falsely issued certs. Notably, though, attribution between CAs and the DNS system isn't solved here. Perhaps if CAs stored the relevant signed DNS response, we could attribute the attack to the DNS system.
Also Thomas has rejected the suggestion that the parts of his post that are now hopelessly wrong should be mentioned in the FAQ he prominently links. So, that post is wrong and explicitly won't be fixed, you should not rely on the "facts" in it unless you want to get laughed at.
Your mention of Comodo suggests you're badly confused. The Symantec hierarchy is in the process of being distrusted by the Mozilla and Google root programmes, not Comodo.
As to .com, it already _is_ run very badly and we already do have to put up with that because there is no way to fix it. Don't put new things in .com unless you're comfortable with for-profit companies screwing you over whenever it suits them. DNSSEC can't make that worse, it's already terrible.
That's worth emphasising - DNSSEC cannot make you more dependent on your registry operators, because you are already entirely dependent on those registry operators anyway. If the operator could be leaned on by spooks (seems plausible) that is already true today.
Lol, I didn't read the username.
> So, that post is wrong and explicitly won't be fixed, you should not rely on the "facts" in it unless you want to get laughed at.
I haven't analyzed everything in that blog post. However, the case it makes against DANE I think is convincing. I linked to that blog post since it was the one that made me realize that DANE was a bad idea, back when I was briefly a DANE enthusiast a few years ago.
> Your mention of Comodo suggests you're badly confused.
I accidentally slipped and used the wrong CA - I think "badly confused" is a bit strong for a slip of the tongue.
> Don't put new things in .com unless you're comfortable with for-profit companies screwing you over whenever it suits them.
It sounds like you are also arguing that DANE is a bad idea.
> DNSSEC can't make that worse, it's already terrible.
I don't think I said anything to the contrary.
I'm real confused by the aggressive tone - it seems like you agree with everything of substance I wrote and the things you don't agree with are things that I didn't actually say.
I appreciate and will remember the concession that security for .COM is hopeless.
But today, actually, FQDNs like 14021-nonprod.bankofamerica.com or whkgm04ye.hktskcy.apac.bankofamerica.com are not just accessible if you brute-force DNS; they're automatically published, because Bank of America relies heavily on the Web PKI and its issuers log the certificates.
It seems very strange to focus on the security for .COM when my point is that that entire TLD is badly run, it's like you're focused on how good the lock is on the front door (somebody call Deviant Ollam) at Lehman Brothers when the actual problem was they've invested all this money in worthless mortgage securities.
You've picked an odd point to quibble with, since there's not only NSEC and NSEC3 but now, after the last RWC, a proposed NSEC5 to address this supposed non-problem.
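A quick way to see the effect described above, i.e. internal-looking FQDNs being published automatically via Certificate Transparency logging, is to query a CT search service. A sketch assuming crt.sh's JSON endpoint (any CT monitor works; the domain is just an example):

```python
import json
import urllib.request

domain = "example.com"  # substitute whatever domain you're curious about
url = f"https://crt.sh/?q=%25.{domain}&output=json"  # %25 is an encoded '%'
with urllib.request.urlopen(url, timeout=30) as resp:
    entries = json.load(resp)

# name_value can hold several newline-separated SAN entries per certificate.
names = sorted({n for e in entries for n in e["name_value"].splitlines()})
for name in names[:20]:
    print(name)  # internal-looking hostnames show up here too
```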
And since DNS queries are commonly cached locally (i.e. dns server == my computer a reasonable percentage of the time) that's not even a rare occurrence.
This makes things a lot more complex to run in a LAN, especially since you also need certificates and have to put them into the browser trust store. Just to avoid leaking the SNI.
As far as resolvers you can run yourself, everyone and their brother has made one - Facebook, Cloudflare, CoreDNS, dnscrypt, your cousin's brother's sister on GitHub. Literally all there is to it is rewriting well-formed datagrams into well-formed JSON and back.
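That claim is easy to illustrate: the translation between a classic port-53 datagram and the JSON shape DoH endpoints serve is nearly mechanical. A toy sketch assuming the third-party dnspython package, with field names following the common dns-json convention; 9.9.9.9 is just an example upstream:

```python
import dns.message
import dns.query

def udp_lookup_as_json(name, rdtype="A", server="9.9.9.9"):
    """Do a plain UDP lookup, then reshape the answer like application/dns-json."""
    query = dns.message.make_query(name, rdtype)
    reply = dns.query.udp(query, server, timeout=3)  # ordinary port-53 datagram
    return {
        "Status": int(reply.rcode()),
        "Answer": [
            {"name": rrset.name.to_text(), "type": int(rrset.rdtype),
             "TTL": rrset.ttl, "data": rr.to_text()}
            for rrset in reply.answer for rr in rrset
        ],
    }

print(udp_lookup_as_json("example.com"))
```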
I'm going to split this up into two answers:
1) The problem for "coffee shop" guest-style networks isn't so much that they care to purposefully block this type of DNS as that it's already blocked by accident, and it isn't going to get unblocked any time soon, since (realistically) nobody manages these networks post-deployment. At least until 8 years later when something breaks or it becomes too unreliable and is then replaced by the next in-a-box solution they bought for 80 dollars.
2) This assumption of blocking works somewhat decently until Google or Cloudflare just decide to enable DNSoHTTPS on their main service IPs. Then you have the choice of blocking most of the internet or not spying on your customer's data. Now as a business if you own the devices and the network you can manage your devices appropriately with an explicit proxy or SSL intercept but general ISP/guest wifi tracking becomes an order of magnitude more difficult to do cost effectively.
I see this as a good thing. I do sympathize with small businesses that, when gaping holes in internet security are closed, are forced to retire technologies that exploited those holes. And I'm not being sarcastic about being sympathetic - businesses in this position would probably rather be spending their money on something else. But, these are security issues that impact everyone and leaving them unfixed just because it will negatively impact a group isn't a reasonable option.
The dynamically discovered ones are the next step in the game, but you still have to start somewhere, either with well-known IP or at least legacy DNS.
Cloudflare apparently does support it already for all their websites though. (or at least the ones that use cloudflare dns also)
See https://blog.cloudflare.com/encrypted-sni/ .
Yet, those examples are useful because they prevent people from dismissing the arguments with "people shouldn't be doing that anyway".
That is, no one wants to keep people from going to cancer-related websites. This is very different from e.g. porn or STDs. I guess perhaps the exception is the nut-cases who believe people getting cancer "deserve it" because otherwise it wouldn't be part of God's plan. But luckily very few people consider those opinions relevant.
Apple's contrived example from when they introduced private mode in Safari was shopping for presents and not wanting the recipient to find out, but that would be even less convincing when the person you're hiding your traffic from is the person next to you in the coffee shop.
So, yeah, I'd agree it seemed both jarring and unnecessary.
Wow. That is technically a valid English phrase. What boggles the mind is that someone could be so out of touch with societal norms and basic human decency that they would actually use it.