Encrypted Client Hello: The Future of ESNI in Firefox (blog.mozilla.org)
153 points by todsacerdoti 9 days ago | hide | past | favorite | 82 comments

This is extremely ugly. They got rid of ESNI because it was an incomplete solution possibly exposing the connection target anyways. So they decided to encrypt the whole client hello message, calling it ECH (encrypted client hello).

However, to make it work, they need the server's public key out of DNS. To get that, they didn't rely on the preexisting TLSA records intended for this purpose (partly because it wouldn't work in some cases: TLSA records can contain either fingerprints or public keys). Instead, they defined a whole new DNS record type to publish those keys. But those records, called SVCB, don't just publish keys. They are an amalgamation of SRV records (this service called "something" is actually at that host and port), CNAME records, and a list of key/value properties with associated priorities. If you think you recognize that one, yes, it looks almost like all of DNS inside DNS again. Just another meta-DNS, because all those RRs aren't fancy enough already. Or maybe Cloudflare just wants a way to put all their internal routing info into DNS somehow.
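For what it's worth, here's roughly what such a record looks like in a zone file. Everything below is invented for illustration (names, port, key blob), and note the ECH parameter was still called "echconfig" in earlier drafts:

```
; hypothetical zone snippet -- names, port, and key are made up
; service mode (priority >= 1) carries the key/value parameters:
example.com.   3600 IN HTTPS 1 svc.example.net. alpn="h2,h3" port=8443 ech="AEX...base64..."
; alias mode (priority 0) behaves much like a CNAME, but is also
; allowed at the zone apex -- an alternative form, not a companion:
; example.com. 3600 IN HTTPS 0 cdn.example.net.
```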

Anyways, I'll wash my eyes with soap now...

So, one reason not to have this as, like, fifty separate DNS records is that consistency is essential. DNS doesn't provide any consistency guarantees: if you ask "A? foo.example" and "AAAA? foo.example" you are not promised that your answers represent the same moment in time; one of them might be brand new while the other has been cached for six hours and is now obsolete.

For A vs AAAA that's fine; you don't need both to be consistent, you just want either of them to work.

For ECH this could mean you get told key A (from yesterday), target name B (just changed now) and port number N (six hours old) and then when you try to connect to port N with key A and target name B that combination doesn't work.

So bundling everything you need into a single record means the only remaining problem is that a cached record might be out of date, an ordinary routine mistake by operators that happens with any DNS record.

The other reason is dirtier and practical: We want all this information to make (for example) a web browser work. So why have the web browser always make fifty queries when there could be one query that encapsulates everything needed?

DNS is never consistent, that is an explicit non-goal. You can also get a CNAME from last week pointing to a CNAME from 5 minutes ago pointing to a nonexistent hostname in a nonexistent zone. This feels very much like hammering on DNS to make it look like a bad copy of etcd or something.

Also, http(s) within SVCB is an ugly special case to be handled differently than other SVCB records. And one SVCB answer doesn't contain all you need. E.g. there is no intent to be a replacement for, or consistent with, TLSA entries, so you will still need to do multiple queries. It just contains the ECH key, so it rather looks like a half-baked kludge to make ECH work somehow.

If it depends on DNS, does that not expose the name being requested via DNS and defeat the point of ESNI? I guess I need to go read up on this. Reading Cloudflare's blog, it appears they assume people are using DoH or DoT.

Since you need DNS to get the IP address in the first place, it's already true that ESNI is nearly useless without encrypted DNS.

Unless the DNS is also encrypted.

In fact, the Firefox implementation of ESNI, and I assume now of ECH, required DNS over HTTPS to be enabled in Firefox for ESNI to work. That means that if ESNI is used, the domain is never transmitted in plaintext. However, it has the significant downside that if you can't or don't want to use Firefox's DoH, even if your system DNS is already encrypted (DoH, DoT, VPN, etc.), then you can't get ESNI.

See https://bugzilla.mozilla.org/show_bug.cgi?id=1500289

True. I guess I assumed incorrectly that the original design of ESNI would not require DNS be in the flow for the validation, meaning that if I had local DNS, or /etc/hosts entries, then nobody would see the name. But if special record types are required, then /etc/hosts or local DNS won't suffice. Just my opinion, but it feels like this was made unnecessarily complicated. Feels like some concepts from SSH (host keys, caching host keys) and web servers (default _server_ name) could have been used for this, since I don't really need to validate who I am talking to for passing the name.

Hopefully, browsers and operating systems will add the ability to store the records locally.

They suggest in the spec that this could be possible.

> This document defines the format of the ECH encryption public key and metadata, referred to as an ECH configuration, and delegates DNS publication details to [HTTPS-RR], though other delivery mechanisms are possible.

> since I don't really need to validate who I am talking to for passing the name.

If you don't validate who you're talking to, some ISP will run a MITM attack simply to log accessed hostnames (even if they can't MITM the final TLS connection)

True, but this would be pretty easy to spot. I am ok with the compromise of using the same method used with ssh host keys for passing the name. The browser can cache these keys and tell the person if things are changing. At least that would be much simpler than these additional DNS lookups and assuming people are using encrypted DNS to a trusted DNS provider, but that is a whole other discussion.

The concern is working around state level or ISP level censorship and monitoring of domain names. TLS already allows you to detect tampering without ESNI so that's not a goal here

> it appears they assume people are using DoH or DoT

Things changed since then, now plain DNS is also allowed.

Not if your DNS is encrypted (e.g. DNSCrypt, DoH, DoT, ...)

I suppose it does if you run your own resolver?

But your resolver still needs to fetch those from somewhere. So you need encryption somewhere in the chain.

DoH or DoT is encrypted. The address of the upstream resolver typically is hard coded or comes as a parameter during DHCP.

Plus both Firefox and Chrome have built-in DoH support.

If the DNS query is done with DoT or DoH the server name is not exposed.

That is what I took from their blog as well. For some reason I thought the original intent of ESNI was to do some type of opportunistic or self signed handshake for the (E)SNI portion to hide the requested name, then validate the actual site certificate. They must have run into a problem.

I've been loosely following the ESNI/ECH drafts, and I am not sure this approach (double handshake) was ever seriously considered, even though it sounds perfectly fine to me. It'd make the handshake heavier, but on the other hand it'd also make it much harder to distinguish from regular TLS traffic. On the contrary, ESNI/ECH seems to be easy to detect and block, and some countries (China, for instance) have already announced that they're going to do that.

ECH is intended to be GREASEd which means there isn't an easy way to say "Block ECH only when it is used" because the obvious markers are present in every single connection.

That is, an ECH implementing browser talking to some web site that doesn't do ECH will present it an ECH blob which is random gibberish. The web site doesn't do ECH, so it ignores the ECH, but a middlebox doesn't know that and if it wants to "block ECH" it must tear down this connection.

If the same browser connects to a site which does have ECH the ECH blob isn't gibberish it's encrypted, the site decrypts it and treats the result as the real ClientHello.
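In other words, something like this pseudologic (a sketch, not browser code: `encrypt_inner_hello` is a hypothetical helper and the 64-byte size is made up):

```python
import os

def ech_extension(server_config=None):
    """Every ClientHello carries an ECH payload. With no server config the
    bytes are random padding (GREASE); with one, it's a real encrypted
    inner hello. On the wire the two cases look the same."""
    if server_config is None:
        return os.urandom(64)                  # gibberish, safely ignored by servers
    return encrypt_inner_hello(server_config)  # hypothetical helper, not defined here

def middlebox_sees(ext):
    # A censor cannot tell GREASE from real ECH without the private key,
    # so "block ECH" degenerates into "block every modern client".
    return "opaque ECH payload (%d bytes)" % len(ext)

print(middlebox_sees(ech_extension()))
```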

Historically China's Great Firewall is willing to put work in to detect and sabotage things it doesn't like. But the point of GREASing extensions is to ensure that the extension itself isn't poisoned. As a result most likely ECH will work fine in China, they'll just block IPs they don't like, punish citizens (or indeed visitors) they don't like, and that's an internal matter that the Chinese can take up with their government in the usual (and costly) way.

This is not a magic anti-censorship tool, and isn't designed to be one. Even Tor isn't that, although its developers can point you at services that help if that's what you need.

So there would be ClientHello with greased ECH and SNI and ClientHello with real ECH and without SNI?

Regarding blocking, what prevents Chinese firewall from simply removing ECH extension from all ClientHello packets? Servers that don’t expect ECH would continue to work, those that expect wouldn’t accept connection, mission accomplished.

No. There's always SNI in the outer unencrypted ClientHello. ECH DNS entries explain which name you should use in this position, which will be constant across some number of inner names which will be encrypted.

Remember the IP address you connect to already tells a hypothetical eavesdropper roughly what you connected to. Whether it's a Wikimedia server or a Microsoft one is not concealed by omitting SNI. What ECH does is hide exactly which service you wanted from that IP address. Wikimedia runs not only their famous encyclopedia, but a dictionary and numerous other useful references, from the same IP address with different hostnames. So maybe a future ECH-enabled German Wiktionary will use the same outer name as the English Wikipedia. Today an adversary can see which one I am using; with ECH they cannot.

> Regarding blocking, what prevents Chinese firewall from simply removing ECH extension from all ClientHello packets? Servers that don’t expect ECH would continue to work, those that expect wouldn’t accept connection, mission accomplished.

The entire handshake is integrity protected. In TLS 1.3 (ECH is not proposed for any earlier versions) the transcript recorded by the server and client would deviate (one sent ECH the other did not receive it) and so the transcript signature check step fails and the connection doesn't work.
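To make that concrete, here's a toy model of why stripping an extension kills the connection. This uses a plain SHA-256 over concatenated messages, not the actual TLS 1.3 transcript-hash/Finished construction:

```python
import hashlib

def transcript_hash(messages):
    """Hash the concatenated handshake messages, as each side recorded them."""
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.hexdigest()

# What the client actually sent: a ClientHello carrying an ECH extension.
client_view = [b"ClientHello|ech=<blob>", b"ServerHello"]

# A middlebox strips the ECH extension in flight, so the server records:
server_view = [b"ClientHello", b"ServerHello"]

# The integrity check compares transcripts; they now disagree, so the
# handshake aborts even though each message looked well-formed on its own.
assert transcript_hash(client_view) != transcript_hash(server_view)
print("transcripts differ -> handshake fails")
```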

The Chinese don't bother messing about with this stuff, if you connect somewhere the Great Firewall thinks you shouldn't, it just closes the TCP connection and blocks traffic altogether. Simple, effective, and in a sense, standards compliant.

Thank you for the very detailed explanation!

The handshake messages are validated with an HMAC to prevent tampering

I don't see how this would work: China would effectively just be banning that browser, which is no skin off their back, as other companies (even better: ones local to China) can make browsers that don't do this.

Sure, if China wants to ban say, Safari, they can do that.

Seems like a very bad idea to me, but it's their country and they can choose policy. I think Winnie the Pooh is smarter than Trump and won't just throw a tantrum because he didn't get his way, but it's certainly possible he'd decide to turn every iPhone in China into a brick rather than accept that we don't want to keep telling eavesdroppers which server name you wrote in the URL.

The GREASE isn't there primarily because of China. As I said, they don't build crazy fragile technology; they don't have anybody to sell this to, the Great Firewall is an in-house project, so it's mostly simple and robust. Bad destination IP, connection blocked. Rarely a big problem in a technical sense.

GREASE is because of stupid corporate middleboxes. Middleboxes are a technical problem, because even when they have an apparently mundane and legitimate purpose they tend to go about it in stupid over-complicated ways that destroy forward compatibility. All of the weird spelling of TLS 1.3 is because of middleboxes. Why is HelloRetryRequest spelled ServerHello? Because if you don't say ServerHello at that point some famous middleboxes explode. Why is TLS 1.3 ClientHello written exactly like a TLS 1.2 ClientHello (including the version number) except with crazy extension values? Because otherwise middleboxes explode. Why is there a bunch of completely random data labelled "Session ID"? Is it a session ID? Nope. If that was missing, you guessed it, middleboxes explode.

RFC 7924[0] (cached certificate extension) is intended only to avoid lots of wasted bandwidth re-sending certificates, but with a modification of where the cached_info goes in the handshake, it would make any middle-box meddling (including censorship) orders of magnitude more expensive.

Honestly, the next version of TLS should have a mandatory variant of certificate caching [0], except instead of putting the cached_info in the ClientHello message, the final handshake message (which is encrypted) would include the hash of the cached certificate. If the hash doesn't match, then the server would send the certificate (the connection is now in an encrypted but unauthenticated state).

If nothing is cached, the client sends randomly generated bits in place of a cached certificate hash, which eliminates one case to be handled in the protocol. (Handle an empty cache and a stale cache in the same code branch.) This provides more privacy than sending an "I have nothing cached", and you're more likely to be struck by lightning and a meteorite simultaneously than to get a 256-bit collision. The consequences of a hash collision are only that the connection needs to be reset.

In the case of nothing being cached, again to reduce the number of special cases to handle, the client would need to make a guess as to the ECDH/RLWE/etc. parameters in its initial handshake message. The mechanism for handling stale cached ECDH/RLWE/etc. parameters would then apply to the non-cached case. (In this case, the server just sends back a message saying "Those are stale. Here are my parameters for a method in your provided list of supported methods, and here, also have my first side of the handshake."). Again, eliminating special cases by faking it/guessing if nothing is cached slightly increases privacy by requiring information outside of the handshake to detect if this is a repeat visit.
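Something like this, I think. A sketch of the cache-hint part only; the function names are mine and the handshake is squashed down to two functions:

```python
import hashlib
import os

def client_cert_hint(cache, server_name):
    """The 32-byte hash the client puts in its first flight. An empty cache
    is indistinguishable from a stale one: we send random bytes, which will
    (with overwhelming probability) simply mismatch."""
    cert = cache.get(server_name)
    if cert is None:
        return os.urandom(32)          # looks like any other hash on the wire
    return hashlib.sha256(cert).digest()

def server_response(hint, current_cert):
    """Server side: resend the certificate only when the hint is stale."""
    if hint == hashlib.sha256(current_cert).digest():
        return None                    # client already has it; save the bytes
    return current_cert                # empty cache and stale cache: one branch

cert_v1, cert_v2 = b"cert-v1", b"cert-v2"
cache = {"example.com": cert_v1}

# Fresh cache: nothing resent.
assert server_response(client_cert_hint(cache, "example.com"), cert_v1) is None
# Stale cache and empty cache take the same code path: cert is resent.
assert server_response(client_cert_hint(cache, "example.com"), cert_v2) == cert_v2
assert server_response(client_cert_hint(cache, "other.org"), cert_v2) == cert_v2
```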

This change would force any meddling middle box to go through MITM'ing most of a TLS handshake before getting much information at all, and then having to break the MITM'd connection and allowlist/denylist* the involved IP for some period of time. For scalability, most present-day censoring hardware keeps the censorship out of the main path, passively observing traffic from a router's cloned port, and submitting forged RST packets to both sides of a TCP connection when the plain text contains something it doesn't like. Forcing MITM'ing makes such meddling orders of magnitude more expensive to implement, and much less accurate in cases of shared IP addresses.

On a side note, I heavily use the trick of initializing a cache to expired values in order to avoid special-casing empty caches. There's a cute trick for caching conversion of ISO-formatted date strings to date objects if your hot path includes a lot of parsing of dates from JSON and only deals with consecutive days (in my case, yesterday and today): the least significant bit of the rightmost day-digit code point and the least significant bit of the rightmost month-digit code point form a very cheap 2-bit hash that never collides for consecutive dates, so you can use a 4-element array as your cache. These digits are at constant offsets in the ISO date string from the year 1000 to the year 9999. (There are no consecutive even days, and consecutive odd days only occur when the month rolls over. Months always alternate even and odd, even at the rollover from 12-31 to 01-01. Every character set I'm aware of puts the digits consecutively, so this trick works for Unicode, ASCII, and every other character set I'm aware of.)
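If I've understood the trick right, it can be sanity-checked in a few lines of Python, with `date.fromisoformat` standing in for the "expensive" parse:

```python
import datetime

def date_hash(iso):
    """2-bit hash: LSB of the last month digit and LSB of the last day digit.
    For 'YYYY-MM-DD' those characters sit at fixed offsets 6 and 9."""
    return ((ord(iso[6]) & 1) << 1) | (ord(iso[9]) & 1)

# 4-slot cache mapping hash -> (key, parsed value); initialized with
# never-matching sentinel entries so the empty cache needs no special branch.
cache = [("", None)] * 4

def parse_date(iso):
    slot = date_hash(iso)
    key, value = cache[slot]
    if key != iso:                      # sentinel or stale entry: recompute
        value = datetime.date.fromisoformat(iso)
        cache[slot] = (iso, value)
    return value

# Check the no-collision claim for every pair of consecutive days across a
# window covering month boundaries, a year boundary, and a leap day.
d = datetime.date(2019, 1, 1)
while d < datetime.date(2021, 1, 1):
    nxt = d + datetime.timedelta(days=1)
    assert date_hash(d.isoformat()) != date_hash(nxt.isoformat())
    d = nxt
print("no collisions for consecutive dates")
```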

[0] https://tools.ietf.org/html/rfc7924

* formerly known as whitelist/blacklist

Also, for improved security in load-balanced shared hosting environments, the signed handshake finalization message from the server should have an optional set of ECDH/RLWE/etc. values so that the load balancer and the server can cooperate to allow the load balancer to MITM the connection, but then negotiate a key unknown to the load balancer before the connection goes into an authenticated and encrypted state.

There was nothing stopping a MITM with a fake/self-signed pre-handshake.

I agree the whole design is horrible. I would much rather have had no DNS records whatsoever, and instead used onion-style layered TLS, rather like how 802.1X works with outer and inner authentication. The browser would first initiate a TLS connection to the IP address, using the IP address as the subject name. It then creates a second connection inside the first, using the domain name as the subject name.

How is the authenticity of the IP address established then? Wouldn’t the server need a certificate for its IP?

Either for its IP or for the hostname that results from the reverse lookup of the IP. The latter is easier to obtain (proof of ownership for IPs is cumbersome) and is often present anyway. It would need an additional DNS lookup, but the present suggestion involving SVCB would need that as well (along with all the other ugliness).

Add some dnssec and ipsec keys to the reverse-lookup zone and we have everything we need to authenticate the IP address.

Proving IP address ownership is done via HTTP challenge, nothing too problematic.

Aren't SRV records CNAME-ish as well? So it kind of makes sense to see SVCB as SRV 2.0, I guess.

Yep, just not widely implemented. Hence a new thing with new hopes. It looks really nice though, and it seems it has vendor support too.


In studies I have done, most websites on the internet do not require SNI. Thus with most websites there is no problem of domain names in plaintext being sent in ClientHello packets, and no need for ESNI.

There are not many ESNI-enabled websites on the web, but Cloudflare's CDN offers ESNI service. Thus, one can use ESNI for websites that use Cloudflare.

I have been testing ESNI for a while now. I use it for reading websites text-only with no ads or tracking. I use links; I am not a fan of "modern" browsers. IMHO, Cloudflare's ESNI is very fast. It seems even faster than normal TLSv1.2 or TLSv1.3.

https://defo.ie is one source for ESNI info, including links to source code on Github for ESNI-enabled openssl, curl, lighttpd, etc. If you compile the ESNI-enabled openssl from Github, below is a one-liner and a small script to try ESNI out.

First you must fetch an ESNI key in a TXT RR from Cloudflare DNS. Then you use the key to submit HTTP requests for about an hour, until it expires. Then you must get a new key.

   # one-liner to fetch a key
   # if you run your own DNS you could then put this into a TXT RR in your own zone file 

   curl "https://cloudflare-dns.com/dns-query?name=_esni.www.cloudflare.com&type=TXT&ct=application/dns-json"|grep -o '[/][^\\]*' > esnirr
It seems that connecting to cloudflare-dns.com to fetch the initial key via DoH requires SNI. Why not have a bootstrap server that is not on a shared IP and hence does not require SNI.

   # try out ESNI using the key
   # usage: echo https://some.cloudflare.website.com|$0
   # usage: $0 HEAD < file-of-urls-with-same-domainname -- (http/1.1 pipelining)
   read k < esnirr
   exec 2>/dev/null
   (while read uri;do case $uri in http://*)uri=${uri#*http://};;https://*)uri=${uri#*https://};esac;
   export host=${uri%%/*};path=/${uri#*/};test ! ${#path} -gt ${#uri}||path=/;
   printf ${1-GET}' '$path' HTTP/1.1\r\nHost: '$host'\r\nConnection: keep-alive\r\n\r\n';
   done|sed 'N;$!P;$!D;$d';printf 'Connection: close\r\n\r\n') > .http;
   host=$(sed -n '/^Host: /{s/.*Host: //;p;q;}' .http|tr -d '\r');
   # Any Cloudflare IP should work - is just one example
   # https://www.cloudflare.com/ips-v4
   openssl s_client -4 -tls1_3 -noservername -ign_eof -verify 9 -connect -esni $host -esnirr $k < .http
   rm .http
"Modern" browsers are stupid. They send domain names in the clear even when the website does not require SNI.


In that case, wouldn't it have been more logical to use EDNS OPT query fields instead?

Probably much harder than simply adding a new record resource type.

Encrypting the SNI is probably a good idea. According to Wikipedia, one Russian ISP and China have already blocked ESNI'ed connections, which is probably a good sign that it's doing something.

But still, it feels like such a hack throwing this into the DNS records. A mess for server operators to implement if you don't manage your DNS records via an API today (since they'll need to be changed when certs rotate). And I'm guessing that a vast majority of people use the ISP default DNS (or Google/Cloudflare), which brings plenty of privacy problems on its own...

Maybe ESNI/ECH is really worth the hassle in a future where the whole Oblivious DNS thing[0] is widespread.

[0] Someone like Apple would proxy your encrypted (DoH) DNS requests to Cloudflare, so neither Cloudflare nor Apple can see what a specific user is querying

There's no obvious place to put it. Conjuring up CertificateTransparency-like public notaries might work, but who would run them? It would basically have the exact same operational requirements as DNS resolvers. Plus DNS is already "inlined" to the process.

This suffers from the same problem as the v4-v6 migration. There's simply no good way to solve it other than biting the bullet and laying the new v6 pipes. (Though this is a bit easier. Getting DNSSEC off the ground, the root signed, root rollover worked out ... all of that, like v6 deployment, took more than a decade, and it's just getting mainstream.)

True. I'm just saying that it's a lot of work for a pretty small improvement (as the world looks today). But maybe I'm incorrect, maybe this would solve real world problems today?

I'm just thinking that if you don't have encrypted DNS queries, you're basically still revealing similar info (at least on first/uncached connections).

If you use DoT to your ISP, they will have similar info. If you use DoH to Google/Cloudflare, they will have similar info.

How big is the privacy gain over cleartext SNI on first connection + client side caching of certs and encrypted SNI on later connections? Obviously it depends on your threat model, this + DoH/DoT to someone you trust would protect from plenty of things in an environment where a passive attacker can see your network traffic. But the question is who to trust...

It's a long term investment. And it helps especially big sites. It's kind of the new RSA/AES/SSL hype (SSL 3.0 released in 1996, IE shipped with Win95 already supported SSL 2, then Win98SE shipped IE 4.01 if I remember correctly, and ... people were amazed. Ecommerce was all the rage, and boom, dot-com boom.)

But it's hopefully done properly this time. :)

It's the next logical step. There's no rush to implement it, even if it's nice and shiny.

Both Firefox and Chrome already have built-in support for DoH. So encrypted DNS is already getting taken care of.

There are an increasing number of providers to choose from: https://github.com/curl/curl/wiki/DNS-over-HTTPS

And rolling your own is always an option.

Info leak, trust a 3rd party, invest your time, pick your poison :)

I think the infoleak is not that serious, because in most cases a passive observer will have the IPs you connect to, and currently it's pretty easy to just enumerate the IPs of the "top 1 million websites" (I have no idea if anyone on Earth still uses the Alexa toolbar at all, or if it exists at all, but that list seems eternal somehow).

This is basically where the hiding in the noise strategy is made possible "thanks" to CDNs. If there are thousands of sites all hosted on a handful of IPs then correlating who views what is getting increasingly harder.

We shall see if this kind of slow but steady activism helps with government censorship on the long term or not.

HTTPSSVC might, as a side effect, finally solve the problem of not being able to point the root of a domain (eg 'google.com') to a CDN.

(although the rollout of HTTPSSVC to all DNS servers and domain hoster APIs will probably take even longer than the phase-out of IE..)

Isn't that because CNAMEs may not be combined with other records, and SOA and NS are required on whatever.com?


Yep. There was talk of ANAME records at IETF but that doesn't seem to be really going somewhere

Some DNS servers implement something they often call ALIAS, but it's basically the equivalent of having a crontab mirror the settings of the records you're referring to, and it breaks e.g. geo-dependent address resolution (so it's not that useful for referring to a CDN, which is why you'd most likely want this).

SRV records would already support that if browsers would look them up. But unfortunately they don't.

I wish SRV records had been adopted widely. It would be a much nicer world.

I can't wait to see browser support for HTTPSSVC.

- CNAME in the apex domain
- Direct HTTP/3

Can anyone explain why is it designed this way? Why was it necessary to involve DNS into this? Was it unavoidable or is that all in the name of keeping 0-rtt possible?

Tbh, with the current implementation, ECH setup seems rather complicated to me. It benefits CDNs like Cloudflare, since they control both DNS and the handshake process, but those who would like to set up ECH on their own are in trouble.

You need to know if you're talking to the server you think you're connecting to, otherwise a man-in-the-middle could impersonate the server and intercept the handshake to see the hostname you're trying to connect to.

So you need another channel to pull a public key (or something comparable), and they picked DNS for that

As someone suggested in a different comment: handshake with one certificate, then send a new ClientHello that contains the desired hostname and "re-handshake" with the real cert?

How do you validate the authenticity of the first certificate without revealing any hostnames?

If we're keeping this simple, then I'd say have the IP address in subjectAltName. Obtaining such certs shouldn't be a problem for CDNs, and it's possible for others as well (regarding free options: Let's Encrypt doesn't support that, but ZeroSSL does)

You wouldn't even need a cert for an IP address (which is hard to get). You could just resolve the CNAME and A records until you arrive at an address, do a reverse lookup on that address. Then use the resulting "primary" PTR hostname for the encryption certificate. No additional info for an attacker, no new weird RRs, in the best case just one additional DNS query (but SVCB would need that as well).
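A sketch of that lookup chain, with stub tables standing in for real DNS queries (all names and addresses below are invented):

```python
# Forward records: name -> (type, value); reverse records: IP -> PTR name.
FORWARD = {
    "www.example.com": ("CNAME", "edge.cdn-host.net"),
    "edge.cdn-host.net": ("A", "192.0.2.10"),
}
REVERSE = {"192.0.2.10": "node7.cdn-host.net"}

def resolve_chain(name, max_hops=8):
    """Follow CNAMEs until an address record is reached."""
    for _ in range(max_hops):
        rtype, value = FORWARD[name]
        if rtype == "A":
            return value
        name = value
    raise RuntimeError("CNAME chain too long")

def outer_cert_name(name):
    """The 'primary' PTR hostname names the certificate protecting the outer
    handshake; the real vhost stays in the encrypted inner part."""
    return REVERSE[resolve_chain(name)]

assert outer_cert_name("www.example.com") == "node7.cdn-host.net"
```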

That means one-site-per-IP, which doesn't work with the way we currently do L3->L5.

One-site-per-IP also ruins much of the point of ESNI, since then anyone who wants to block or track what domain you are visiting can just lookup the domains they’re interested in and match them to IP addresses.

For the most part they can do that anyways, ESNI or ECH are only relevant for big hosters or proxy services.

No, this is just a way to verify the Cert for the first handshake, so that one doesn't need to use one with the IP in the sub alt names.

no, the per-ip key is just the one you would use for ECH, in the encrypted part you can use separate per-vhost certificates as usual

That would require coordination with a certificate authority any time your IP changes, which could be very inconvenient in some cases.

Meanwhile, DNS records already need to be updated if your IP changes anyway, so it is a logical place to put the authenticity information

How do you know that the first certificate was not produced by the MitM?

In order to encrypt the connection handshake, you and the server need to agree on a key to encrypt it with.

If you do the key exchange on the connection, you have two choices:

a) anonymous key exchange, which could be MITMed, because it's unauthenticated. This would be more secure than just sending the hostname in plain text, because it would require active interception rather than passive surveillance, but active interception is not that much harder than passive surveillance.

b) signed key exchange, which is tricky, because the server may have certificates for many hostnames, and it doesn't know which one you used

In order to have another option, you need to get a key through some other method. DNS is more or less the most viable out-of-band method to get that information; after all, it's what you used to turn a hostname into an IP address to contact the server in the first place.

You could do (a) to force active attacks, and then at the end (after SNI) authenticate the server based on the then-matching cert, followed by checking that you both got the same symmetric key from the ECDH key exchange at the beginning. If they don't match, you know there was an active MITM and both sides know.
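A toy sketch of option (a) with small finite-field Diffie-Hellman. The prime below (2^64 - 59) is for illustration only and offers no real security:

```python
import hashlib
import secrets

P = 0xFFFFFFFFFFFFFFC5   # 2^64 - 59, a toy prime; real groups are far larger
G = 5

def dh_keypair():
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

def shared(priv, peer_pub):
    """Derive a session key from the raw DH shared secret."""
    return hashlib.sha256(str(pow(peer_pub, priv, P)).encode()).hexdigest()

# Honest run: client and server arrive at the same secret.
c_priv, c_pub = dh_keypair()
s_priv, s_pub = dh_keypair()
assert shared(c_priv, s_pub) == shared(s_priv, c_pub)

# MITM run: the attacker terminates both sides with its own keypair,
# so client and server end up with two *different* session keys.
m_priv, m_pub = dh_keypair()
client_key = shared(c_priv, m_pub)   # client unknowingly talked to the MITM
server_key = shared(s_priv, m_pub)   # so did the server
assert client_key != server_key

# Once the server authenticates later (cert + signature over its view of
# the exchange), comparing key confirmations exposes the active attacker.
print("MITM detectable: keys diverge")
```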

The client initiates the connection and must know a key to encrypt the client hello with before the first packet. Unsigned DH is out because that would allow MitM and involve another roundtrip, cached certificate is only known on the second connection and hardcoded keys would be stupid. That leaves DNS.

So... the attack on this would be to take over the local nameserver (or even just system DNS resolution) of the clients trying to get to a domain and then serve your own public key?

That would work right up to the point where the server returns a certificate that the client needs to validate. If you're an attacker, then you'd also need a certificate that matches the domain and is signed by a trusted certificate authority. Otherwise the client will see the certificate as untrusted.

Having a trusted certificate for a domain you don't own is already the "Game Over" scenario for current TLS connections since DNS hijacking is generally much easier.

Mozilla also pushes for DNS over HTTPS, so the browser uses a hardcoded server by default (eg. Cloudflare).

If you can do that, there's no need to change any keys. You can just look at the DNS requests that come in and the hostnames will be revealed that way. That is why it is important to also use encrypted DNS in addition to ESNI.

I suggest reading this presentation and blog post:

"What can you learn from an IP?" https://irtf.org/anrw/2019/slides-anrw19-final44.pdf


https://github.com/dlundquist/sniproxy does not yet support ESNI. But might supporting ECH be completely infeasible?

So for those unfamiliar, sniproxy is able to forward traffic to virtual hosts with their own encryption keys, by dispatching the connection based on the SNI of the handshake. After determining the correct vhost it initiates its own handshake with the destination and then hands the connection over to the original client, so the host running sniproxy does not need access to any keys or certificates.

I want to run something like this on the client, for firewalling purposes. The firewall should perform ECH encryption after validating the destination, not the client.

And you want the client to then trust a new root certificate from the firewall?

OpenSSL doesn't support this yet, so neither does nginx.

>A TLS server supporting ECH now advertises its public key via an HTTPSSVC DNS record

Why a new record type, when TLSA records were invented for literally this exact purpose?

Because it's like TLSA + SRV [which itself is basically CNAME + a port hint] + A + AAAA in one. (Because there's no "consistent multi querying" in DNS, so it had to be basically multiplexed into one RR type.)


Okay, I can see why this new record type would be beneficial given the lax consistency model of DNS. It seems a little lame to make the record type ‘HTTPSSVC’ given that other TLS-enabled services could benefit from the same guarantees.

As I understand it, the new record types are “HTTPS” and “SVCB” (“service binding”). The SVCB record is a generic SRV-like thing, and HTTPS is like SVCB but with extra things which HTTPS needs.

You want to bundle the entire configuration, because DNS doesn't provide any consistency guarantee between records.

Isn't DNS doing too much work in this? How do you authenticate DNS results?

ESNI is the only reason why I use Cloudflare on some of my websites.
