DNS-over-HTTPS (developers.google.com)
321 points by mikecarlton on Sept 30, 2016 | 148 comments

> Currently, web-based applications must use browser extensions to take advantage of advanced DNS features such as DANE, DNS-SD service discovery, or even to look up anything other than IP addresses. Extensions for features that depend on DNSSEC must validate it themselves, as the browser and the OS may not (be able to) validate DNSSEC.

If only there were some way for Google to let people take advantage of advanced DNS features without requiring browser extensions ... alas, they'd probably need to add that code to a web browser. But then, where would they find a web browser they could add such code to? Ah well. Google should ask those web browser vendors why they won't implement DNSSEC, at least. Maybe start with asking the browser with the highest market share, whichever that one is.

agl has explained in detail why they don't do that: https://www.imperialviolet.org/2015/01/17/notdane.html

tl;dr they can't, because DANE is undeployable in the current Internet.

The root zone ZSK is increasing to 2048 bits as we speak. http://www.circleid.com/posts/20160928_increasing_strength_o...

The article also mentions the number of TLDs that now use 2048 RSA keys is around 1,300.

It's 2016, and the DNSSEC people are congratulating themselves for removing a 1024 bit RSA key.

By the way, since DNSSEC is always good for laughs, check which cloud platforms support DNSSEC currently (spoiler: none).

They also rotate the keys every 3 months, so the standard cost analysis is a bit different.

DANE is not DNSSEC. This API doesn't solve the problem that DANE is trying to tackle (adding additional trust vectors to certificates).

It actually seems kind of lazy. Their argument is that it is too difficult for client software to embed a DNS resolver into the browser so it can check whether DNSSEC was attempted and whether it was successful. The actual validation would still happen at the client's local resolver.

Their solution is to make client software implement a REST API for doing its lookups instead, and the validation is still happening at your resolver (the REST API).

The only thing I see as a positive with this is that your DNS requests will be encrypted. The trade-off is the added complexity of getting your responses, and that you'll be exposed to potential third-party censorship and to logging of all your destinations, which will be difficult to opt out of.
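For reference, a lookup against the JSON API being discussed looks roughly like this. A sketch: the URL shape and the Status/AD/Answer fields follow Google's documentation, but the sample response values here are made up for illustration.

```python
import json
from urllib.parse import urlencode

# Build a query URL for the JSON API (endpoint as documented on the
# linked Google page).
def query_url(name, rr_type="A"):
    return "https://dns.google.com/resolve?" + urlencode(
        {"name": name, "type": rr_type})

# A sample response in the documented shape (answer data is made up):
sample = json.loads("""
{
  "Status": 0,
  "AD": false,
  "Answer": [
    {"name": "example.com.", "type": 1, "TTL": 3600,
     "data": "93.184.216.34"}
  ]
}
""")

def answers(resp):
    # Status 0 is NOERROR; AD=true would mean the *resolver* validated
    # DNSSEC -- the client still has to trust Google's answer.
    if resp.get("Status") != 0:
        return []
    return [a["data"] for a in resp.get("Answer", [])]

print(answers(sample))  # ['93.184.216.34']
```

Note how the AD bit illustrates the commenter's point: validation happens at the resolver, and the client only sees a boolean claiming it happened.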

That argument seems to boil down to “We can’t use any new record types in the DNS ever again because 4-5% of users had problems when we silently tested it”. First, this is defeatism writ large – if we can’t develop anything new, ever, we may all just as well go home and give up. Secondly, these things don’t exist in a vacuum; if a website failed because a user couldn’t access DANE or whatever, this would create pressure on all affected parties to fix the situation. This is how things progress instead of stagnating.

> First, this is defeatism writ large – if we can’t develop anything new, ever, we may all just as well go home and give up.

No, it means you need to consider the existing realities when designing a new protocol. A good example is HTTP/2: countless proxies would probably choke the moment they saw an HTTP/2 packet, so people concluded that they needed to wrap HTTP/2 in TLS in order to deploy it. (I know there's HTTP/2 without TLS, but nobody is using it, for precisely that reason.) HTTP/2 was built with deployability in mind. DNSSEC was not.

> Secondly, these things don’t exist in a vacuum; if a website failed because a user couldn’t access DANE or whatever, this would create pressure on all affected parties to fix the situation.

This is simply not what's happening. What happens in the real world is that people blame their browser for breaking things that previously worked. There are countless examples for this.

>What happens in the real world is that people blame their browser for breaking things that previously worked.

There are 3 browser vendors of any note right now. It wouldn't be that hard for them to collaborate to push this change at the same time.

Not necessarily. Consider the (admittedly dystopian) future where everything you do is hosted at Facebook.com. DNS becomes a vestigial remnant of the old Internet, and Facebook does de facto name resolution on its private servers. New things can then develop within the proprietary Facebook ecosystem, in part because FB controls (more of) the entire experience. (Come to think of it, why doesn't FB develop a browser and an OS? And hardware while they're at it. The fewer middlemen between you and your product, the lower the risk you'll get cut off from them/it.)

Thank you for the link. I was previously only working from these links:

> Instead, for this we have HPKP, which is a memory-based pinning solution using HTTP headers.

The problem with HPKP is that you still need a certificate from a CA. And if you want wildcards or a lifetime longer than 90 days (regardless of what you think of their security, many people do -- the biggest sites on the internet have both), that means paying money. And it also still leaves you vulnerable to MitM rogue-CA attacks.

HPKP requires the user to connect the first time to get the pinned key. If an MitM has a fake certificate for your domain, they can send that to you on your first connection and fool you. DANE does not have this problem.

Well, to be fair, DANE has it for the DNSSEC provider used. But that is one thing to trust, and said certificate can be bundled with the OS or browser. One thing to trust beats the current 2000+ intermediate CAs we have to trust.

Further, DANE allows a way to verify self-signed certificates, allowing security to be free for everyone. We could see a lot more innovation than just certbot if Let's Encrypt weren't the only way to generate free certificates. For just one example, web servers could generate certificates on fresh installs; zero configuration required. Like Caddy, but in nginx and Apache.

Lastly, HPKP is riskier. You have to include a backup pin. But what would happen if you had your own key pinned, with StartSSL as the secondary? And you set HPKP for a year? And two weeks from now, Mozilla revokes StartCom entirely? Well, good luck with that one. Deleting an HPKP entry is a royal pain for a regular user (about:permissions in Firefox, chrome://net-internals/#hsts in Chrome).
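For concreteness: an HPKP pin is just the base64 SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo, and RFC 7469 requires a backup pin in the header. A sketch; the SPKI byte strings below are placeholders, not real keys (in practice you'd extract the SPKI from your certificate, e.g. via openssl):

```python
import base64
import hashlib

# RFC 7469: pin-sha256 = base64( SHA-256( DER SubjectPublicKeyInfo ) ).
def hpkp_pin(spki_der: bytes) -> str:
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# The header must include at least one backup pin. A modest max-age
# limits the damage if you pin a CA that later gets distrusted
# (the StartCom scenario above).
def hpkp_header(primary_spki: bytes, backup_spki: bytes,
                max_age: int = 60 * 24 * 3600) -> str:
    return ('Public-Key-Pins: pin-sha256="{}"; pin-sha256="{}"; '
            'max-age={}'.format(hpkp_pin(primary_spki),
                                hpkp_pin(backup_spki), max_age))

# Placeholder SPKI bytes, purely for illustration:
print(hpkp_header(b"primary-spki-der", b"backup-spki-der"))
```

The shorter the max-age, the faster a bad pin ages out, which is exactly the trade-off the year-long pin in the example above gets wrong.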

> We also have pre-loaded pinning in Chrome for larger or more obvious targets.

Nice for Google and Facebook, not so nice for byuu.org =(

> It's also worth noting that CryptoCat has committed pinning-suicide in Chrome at the moment due to their CA having switched intermediates between renewals.

Another good negative of HPKP.

> But support for that was removed because it was a bunch of parsing code outside of the sandbox, wasn't really being used and it conflicted with two long-term plans for the health of the HTTPS ecosystem: eliminating 1024-bit RSA and Certificate Transparency. The conflicts are probably the most important reasons for not wanting to renew the experiment.

RSA-1024 is bad, yes. Perhaps we should start asking how we can get DNSSEC on RSA-2048+. Having a commitment to implement it in Chrome, Firefox, Edge, Safari if we do would go a long way toward making that happen, I'm sure. Further, we can reject any RSA-1024 signed DNSSEC after that point, just like we reject weak crypto in browsers today.

If they can't change DNSSEC to stronger crypto, how about they make an alternative? If Google introduces one that has stronger crypto, supports it in Chrome, and offers a free "DNSSEC+" hosting service, I'll definitely take immediate advantage of that.

But the latter is the whole point of DANE -- self-signing. Certificate Transparency is a crutch due to the CA system having 2000+ intermediate CAs that can sign any domain name they want. With DANE, you don't need that anymore. DANE is about providing an alternative to the CA signing business.

And there's nothing stopping users from serving up trusted CA-signed endpoint certificate signatures over DANE/TLSA records. That would obviously remain the only sensible way to do EV certificates. The way I see it, a DANE certificate would act like HTTP does now: you get the globe (or even the question mark circle), but your page actually loads over HTTPS, instead of sending you to Defcon-1 advisories that are all but impossible for the casual user to work around (and indeed, we don't want to train them to work around these warnings anyway.) There's no way this would be worse than plain-text HTTP.

> You literally can't avoid it because the root zone transits through a 1024-bit key.

Does this really matter for the root? There are root CAs with RSA-1024 still.

At any rate, Google apparently feels DNSSEC+DANE is good enough for this web service API. So, honest question, why offer that if it's really such an unworkable system?

First, I have bad news for you, and good news for the Internet:

In the very unlikely event that Chromium or Firefox ever honor DANE records, and the even less likely event that they honor trust anchor assertions in TLSA, you are still going to need a CA certificate. In the parallel universe in which DNSSEC is seriously deployed and honored by browsers, the entire X.509 PKI will be replaced with something else before TLSA trust anchor assertions are reliably deployed in browsers, and, until they do, huge fractions of your user base won't know what the hell the DNS is talking about when you give it your self-signed certificate.

If you don't care about that user base, then nothing at all is stopping you from using self-signed certificates today. Just tell your group of friends and followers to check your certificate once when it pops up with a warning, and add it to their trust stores. If you tried to use parallel-universe DANE to serve a self-signed certificate, that is the experience you would have anyways.

"The whole point of DANE" is not self-signing. If it were, Dan York wouldn't be telling people that DANE isn't government key escrow (hint: it is) because the CAs will still be involved. The point of DANE is to come up with some reason, any reason, to get DNSSEC deployed, because the people working on it have in some cases been working on it for over 20 years (it shows) and are frustrated that the Internet has moved on without them.

I don't know what you mean by "Google feels DNSSEC+DANE is good enough for this web service API". The web service is a simple wrapper around a DNS service. The Google you want to pay attention to, the one that matters for the DANE discussion, is the Chromium project. Go talk to the Chromium security people about DNSSEC. See what they say.

RSA-1024 isn't as much of a problem for DNSSEC, though the key size is being increased. The validity period and the amount of data being signed restrict the attacker to a much larger degree. That being said, I don't think RSA is the correct choice anymore for speed, security, or size of DNS responses.

You can find more information here: https://www.cloudflare.com/dns/dnssec/ecdsa-and-dnssec/

I was definitely thinking, "why not use Curve25519 for a 256-bit key if fitting the response into one packet is so important?"

It seems there's some worry about quantum computers being able to break ECC with less qubits than RSA on account of there being less actual bits.

But to me, it seems like if quantum computers come about that start breaking things, RSA is going to need to be replaced as well. So now's probably not the time to be too paranoid about something we don't yet know will ever even happen. At least, not until we have something we know will be resilient to the new quantum attacks.

Note: They are already working on switching to 2048 bits in DNSSEC; they will do it some time this year, if all goes according to plan.

I find it somewhat amusing that in order to use DNS-over-HTTPS you must first resolve a domain using "normal" DNS (dns.google.com). You'd think they'd go ahead and publicly advertise a static IP for that so you can use it without relying on normal DNS.

But that is a bit harder to spoof. Suppose someone hijacks your plain text DNS to take over "dns.google.com". Their fake site will not have the right SSL keys, so your DNS-over-HTTPS client will reject it.
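Concretely, the protection comes from certificate-chain and hostname verification in the TLS layer, which the HTTPS client gets by default. A minimal sketch using Python's ssl module (network calls omitted):

```python
import ssl

# The default context both verifies the certificate chain and checks
# that the certificate matches the hostname we asked for. A hijacker
# can point dns.google.com at their own IP via spoofed plain-text DNS,
# but cannot complete this handshake without a trusted certificate
# for that name.
def resolver_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    assert ctx.check_hostname
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

# Usage: wrap a TCP socket to the (possibly spoofed) resolver IP with
#   ctx.wrap_socket(sock, server_hostname="dns.google.com")
# and the handshake fails if the server isn't really dns.google.com.
ctx = resolver_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

This is why the bootstrap lookup over plain DNS is a detectable failure rather than a silent compromise.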

The certificate has to match, so I don't think there would be a problem.

Assuming none of your CAs have been compromised.

If you're not trusting CAs, there's not much point to DNS over HTTPS, is there? Might as well be DNS over HTTP. Or just DNS.

To expand on this, there would be three advantages, but none have to do with CA trust: (1) Client implementations may be slightly simpler, since they have to handle one less protocol. (2) The initial DNS lookup would be removed altogether as a possible source of error, rather than remaining a detectable one. (3) Some marginal additional privacy, though reverse DNS lookups on the resolved addresses mean not much in practice.

Why would it accept any old CA for a property of Google?

Browsers do this by default, unfortunately.

I don't use either of those as HTTP clients in my idiotic hobby projects that look up DNS over TLS/HTTP.

Independent of HPKP, Chrome still does 2011-style static pinning AFAIK, and judging by its HTTP headers I don't think google.com even uses HPKP. Unlike HPKP, which is trust-on-first-use, static pinning is enforced from the very start, so if you have the ability to statically pin in the client (as you would if you're a browser vendor or distributing your own mobile app), you probably should.

And then there is this recent discussion: https://news.ycombinator.com/item?id=12434585

Why do people keep linking to posts from 2011? That is wholly irrelevant... not only is key pinning not done that way now, we have a standard for this.

And it's implemented by most browser vendors... just not Microsoft, of course.

> Why do people keep linking to posts from 2011?

Because it was an accurate response to the question to point out that Chrome hasn't done it for five years?

DNS-over-HTTPS resolvers aren't browsers, in any case.

No, but the service seems to be (at least partly) targeted at "web-based applications", so the https clients would be browsers. Which would in turn mean the resolver would be using the browser's pinned certificates.

I've been playing with some code running as a resolver on my Mac. I had to seed it with some of the google IP addresses to resolve https://dns.google.com to get it going.

Agreed, an anycast IP should be provided just like that of their resolvers.

Yeah, it would make sense, I guess, if they used another IP address in their range and offered the service directly on that IP.

DNSCurve solved this years ago: http://dnscurve.org , with implementation notes here: https://dnscurve.io

No, it didn't. DNSCurve only secures the path from recursive resolvers to authoritative servers. The request from a typical PC to the recursive resolver is still unencrypted, and there is no mechanism for the PC to know whether the result it receives is valid, or whether DNSCurve was even used by the recursive resolver at all.

see also: Internet Mail 2000

> (including DNS-based Internet filtering)

What guarantees that they wouldn't start filtering URLs on their own (upon request by DMCA, or FBI)?

I do get that they say what they log ( https://developers.google.com/speed/public-dns/privacy ), yet if this ever does become a _commonplace_ thing, they'd easily be able to obtain IP addresses of users trying to access blacklisted websites, and hand them over to officials (upon request, maybe?).

We did the same thing w/ JSON responses: https://www.openresolve.com/

This is a bad idea outside of experimentation. Not to be used for production.

If you want to secure DNS look at QUIC, TLS, or my favorite, DNSCrypt (which I funded).

Why is this a bad idea? You can't just say "it's bad" with no justification.

As a user, I can sure think of some countries with broken Internet access where this would come in handy.

I'll rephrase, it's way better accomplished with UDP without sacrificing security in the ways I list.

Running DNS over HTTPS over TCP isn't needed. It doesn't solve a problem.

Doing JSON DNS for OOB DNS checks is useful since most applications speak HTTP. :-)

Note that although it is not documented, when you query the Google DNS-over-HTTPS service from Chrome, it will usually use QUIC. You can check this at chrome://net-internals/#quic, and will probably see something like this (look DNS/HTTPS/QUIC/UDP/IPv6!):

dns.google.com:443 true QUIC_VERSION_34 [2607:f8b0:400d:c03::8a]:443 10544469510527000173 0 None 2 9 0 9 true

An independent implementation of QUIC (are there any outside of browsers?) would probably work much the same, modulo any changes during the ongoing standardization of QUIC.

Yup I threw together a toy too:

For debugging and diagnostics it is useful, for querying via a local resolver not so good.

I've actually just written a blog post about it: http://www.dmitry-ishkov.com/2016/09/dns-over-https.html You can run a local DNS server which uses Google's DNS-over-HTTPS. But as eridius noticed, you still have to resolve dns.google.com first.

There are several different implementations of proxies for DNS-over-HTTPS:

https://github.com/aarond10/https_dns_proxy (C)
https://github.com/pforemski/dingo (Golang)
https://github.com/tssva/dnshttps-proxy (Golang)
https://github.com/wrouesnel/dns-over-https-proxy (Golang)
https://github.com/CodeFalling/dns-proxy-https (Javascript)

I'd heard that somebody was working on DNS-over-HTTPS support for https://github.com/getdnsapi/getdns at the hackathon in Buenos Aires in April just before DNS-OARC / IETF-95, but have seen no evidence of that.

How fast does it work? What perceived latency does it have during usual web surfing?

I would not use that implementation. It is broken in multiple ways. The most impactful for normal browsing is that it only supports a couple of RR types, which doesn't include CNAMEs.

I didn't mention it in my original comment because I thought the code didn't exist anymore but I found an old Time Machine backup disk with the code on it for an updated version of the referenced implementation. I have put it up on Github at https://github.com/tssva/dnshttps-proxy. I need to throw up a README and give attribution. Will get to that later today.

This version will support all RR types supported by the miekg/dns library which is the vast majority of them and any you are likely to come across in the wild. It also allows you to specify regular DNS resolvers which can be used in two ways. As fallback if connectivity to the DNS over HTTPS service fails or to always use to resolve specific domains. It also allows you to restrict access to the proxy to certain networks. The rest of the code should be IPv6 friendly but for some reason I implemented the access list in a manner that only supports specifying IPv4 networks. Guess I have something to work on.

If no DNS resolvers are specified it attempts to use the Google Public DNS servers to resolve dns.google.com. If DNS resolvers are specified they are used to resolve dns.google.com. A flag to always use the Google Public DNS servers would be useful, so now I have 2 things to work on.

As far as performance impact, I have generally seen 20-80 ms of additional delay. Using a caching resolver behind the proxy would help mitigate this. As is, the additional delay is pretty much unnoticeable when web browsing.
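The caching layer mentioned above is simple to sketch. This is a toy positive-answer cache only; real resolvers also handle negative caching, TTL ceilings, and prefetching:

```python
import time

class DnsCache:
    """Toy cache of DNS answers keyed by (name, rr_type)."""

    def __init__(self):
        self._store = {}

    def put(self, name, rr_type, answer, ttl):
        # Remember the answer until its TTL expires.
        self._store[(name, rr_type)] = (answer, time.monotonic() + ttl)

    def get(self, name, rr_type):
        hit = self._store.get((name, rr_type))
        if hit is None:
            return None
        answer, expires = hit
        if time.monotonic() >= expires:
            # Expired: drop the entry and treat it as a miss.
            del self._store[(name, rr_type)]
            return None
        return answer

cache = DnsCache()
cache.put("example.com", "A", ["93.184.216.34"], ttl=300)
print(cache.get("example.com", "A"))  # ['93.184.216.34']
```

Every cache hit saves the full HTTPS round trip to the proxy, which is where the 20-80 ms goes.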

Why can't you just send DNS messages over an SSL/TLS socket? What's the value add for http and REST?

I would assume being able to reuse the connection for multiple requests. Setting up the TLS connection is quite a bit more expensive than raw TCP, and especially UDP (most DNS happens over UDP). For longer connections, the additional overhead is minor, but for the extremely short DNS messages, I would imagine that a TLS connection per DNS request would be some pretty substantial overhead.

Edit: The page mentions that this allows web applications to make their own DNS requests, possibly looking up things other than A/AAAA records that the browser normally requests.

HTTP and REST are not the only ways to reuse a socket. You could just write the packet to the TLS stream, raw, and keep the socket open.

Definitely. But then you need to define an envelope to mark where an individual message begins and ends (with UDP DNS, it's a single datagram; with TCP DNS, it's the entirety of the transmission). There are infinite ways to do this, and countless many already specified in various standards (many of which are already implemented in browsers, which is surely the primary application Google had in mind). HTTP provides one such envelope.

> with TCP DNS, it's the entirety of the transmission

DNS messages have a two-byte length prefix when transmitted over TCP. Multiple envelopes can be, and often are, sent over a single connection.
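Right: that framing comes from RFC 1035 section 4.2.2, and it's easy to sketch (message bodies here are placeholder bytes, not real DNS packets):

```python
import struct

# DNS over TCP prefixes each message with a two-byte big-endian length
# (RFC 1035 section 4.2.2), so many messages can share one connection.
def frame(msg: bytes) -> bytes:
    return struct.pack("!H", len(msg)) + msg

def unframe(stream: bytes):
    # Split a stream of concatenated frames back into messages.
    msgs, off = [], 0
    while off + 2 <= len(stream):
        (length,) = struct.unpack_from("!H", stream, off)
        off += 2
        msgs.append(stream[off:off + length])
        off += length
    return msgs

wire = frame(b"first message") + frame(b"second")
print(unframe(wire))  # [b'first message', b'second']
```

So TCP DNS already has an envelope; the argument for HTTP is that browsers and middleboxes speak it natively, not that DNS lacks framing.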

That's what TLS session tickets are for.

Heck, you can even do that over UDP with DTLS, if you don't want to deal with setting up and tearing down TCP connections.

Web applications can't initiate raw TLS connections.

That's not true. Sure, many languages require you to drop down to a lower-level API to create a vanilla TCP connection, but if the language you're using doesn't provide a way to open a TCP socket and do the TLS negotiation dance, then you've probably chosen the wrong language.

The language is entirely uninteresting; what's interesting is what the runtime allows you to do. Browsers purposefully don't allow you to create raw TCP connections, yet JavaScript can in many other contexts (e.g node.js) create raw sockets perfectly fine.

> Browsers purposefully don't allow you to create raw TCP connections

Which is ridiculous, as it literally brings zero advantage to restrict that.

Well I think it's a good idea to prevent web pages from just opening up a socket to some server, speaking whatever protocol they like. It could be easily abused to spam, etc.

There is, however, https://www.w3.org/TR/tcp-udp-sockets/ - see section 10 for an example.

Considering the proposal elsewhere in this thread to move even SMTP to HTTP, how would blocking SMTP connections prevent spam when one can still POST to SMTP-over-HTTP?

"We need to restrict this to prevent abuse."

-- time passes --

"We need to change everything in order to accomplish a task under these restrictons."

Then all billion of us on the web have "chose[n] the wrong language".

This is painfully accurate.

Indeed, it's quite unfortunate

Because HTTP is the new narrow waist of the internet.

Because Google thinks by default in terms of HTTP and browsers, and can't think that not everything is on Web.

Given how successful they've been at that, I'm curious why you seem to view that as an aspersion on them rather than something to learn from? Shouldn't that tell us something about deploying internet-scale services?

(and, to be clear, the answer is yes, namely that a custom protocol has to deliver a lot of value to make up for having to deal with the deployment headaches inherent to anything less supported by firewalls, NAT boxes, etc. than HTTP/HTTPS)

It's an API that returns a document. Nothing fancy, no bells and whistles. If, for literally anything else, someone had suggested building such a service using raw sockets, they'd rightly be laughed out of the building.

Twilio isn't implemented using AT commands either, and for the same reason.

What, resolving using a socket is now considered ridiculous?

The comparison with Twilio is unfair.

DNS latency is an entire round trip added to every single fresh domain lookup you make. That happens a lot, and we want those lookups to be as fast as possible. At that point, the raw-sockets versus full-blown HTTP-over-TLS-over-TCP debate matters.

Twilio, on the other hand, is used in situations with comparably laughable latency requirements. At least two orders of magnitude.

Not saying it's DOA, but the question has merit.

Corporate proxies / firewalls.

nice toy, but there are some things to be considered:

1) "secure DNS" is a solved problem
2) DNS is simple
3) responses normally easily fit inside one packet
4) DNS is fast

HTTPS is a slow, wordy and inefficient protocol. Forcing everything into JSON just compounds the problem.

> 1) "secure DNS" is a solved problem

No, not in practice. You can easily MitM DNS and nobody is verifying DNSSEC by default. On the current internet, secure DNS just doesn't exist.

The mechanism is there; people are just not choosing to use it.

That is because security is hard.

This doesn't mitigate MitM attacks, as upstream DNS records can still be spoofed

We go from a fast, decentralised, resilient and low-overhead system to a centralised, chatty and fragile behemoth.

It still doesn't give end-to-end encryption; it's just a slow encrypted proxy.

First people need to know about it to use it. Then it needs to actually work and not fail. https://ianix.com/pub/dnssec-outages.html

Google Public DNS verifies DNSSEC by default. So does Verisign Public DNS.

Some measurements of DNSSEC validation show that as much as 15% of Internet domain lookups validate DNSSEC: http://stats.labs.apnic.net/dnssec/XA. Approximately half of that is due to Google Public DNS validation (many sites use both Google Public DNS and other resolvers that do not validate, so they do not actually validate DNSSEC overall).

It is very true that less than 1% of DNS zones are signed with DNSSEC, so it is true that "secure DNS" doesn't practically exist, but this is a server-side issue, not a lack of client validation.

"nice toy". Very succinct diss.

HTTP-over-DNS would also be neat :) I think I would be able to get internet access in some airports if I had HTTP-over-DNS

IP-over-DNS has already been done.

As has IP over ICMP echo requests, which I have actually found useful: http://code.gerade.org/hans/

Just curious, what use did you find for this?

It's useful for reading your email on those 'free' wifi networks that require you to give your name, address, mother's maiden name, and credit card before getting HTTP access.

Tunnelling into networks and bypassing proxies, ancient IDSes and ineffective monitoring systems.

DNS-over-HTTPS-over-Tor-over-DNS-over-ICMP is where it's at. None of the three-letter agencies can eavesdrop on your DNS queries now! It might even be faster than RFC 1149.

Not if you actually use RFC 1149. You're operating at layer 1 and 2 when you rely on birds.

I'm waiting for the day when RFC 1149 is expanded to incorporate a layer 2 tunnelling protocol. I suspect the issue is not the initial encapsulation but how to extract the original frame without it getting mangled.

> It might even be faster than RFC 1149

Depends on your definition of fast: the latency could indeed be better, but your bandwidth is limited only by the size of your USB stick, and would most often be much higher than with current solutions.

Not just HTTP; you can also do TCP-over-DNS:

    apt-get install dns2tcp
Pretty easy to use too (though development seems to have stopped)

Don't those networks hijack DNS to display captive portals? I always like poking around domains on "free limited-browsing" networks trying to get an open connection to the open web.

Not all of them. Some of them hijack HTTP, so you resolve your destination address as usual, but when you actually attempt to connect, you're redirected. This seems to be becoming less common, though, unfortunately.

Why doesn't Google include the entire DNSSEC signature chain in the response? Their current approach to DNSSEC validation seems quite weak. Sure, I can query them and get an answer with AD set, but then I need to trust that they didn't tamper with the response.

The "web" is becoming hack upon hack upon hack.

Becoming? When has it not been?

DNS doesn't seem very well secured against determined attackers. But at the same time I almost never hear about attacks done via DNS spoofing. So I guess it's harder to attack than I think.

Sometimes I think we should all take the Apple approach to this kind of thing and deprecate old stuff and/or make new stuff mandatory.

We could just force DNS extensions to be implemented in most/all client/server implementations.

DNS over HTTPS might be okay and work well, but imho is a (smart?) workaround, not a fix.

Why can't we all set a time window (7.5 years? 10 years? 15 years?) to plan massive RFC/protocols updates with possibly-breaking changes?

Edit:fix grammar (not native speaker of English)

What's the purpose of explicitly specifying the "random_padding" parameter? Couldn't the client send any arbitrary unused query argument as padding?

I can think of two non-technical reasons: as an explicit reminder to devs about side-channel attacks, and also to guarantee the key "random_padding" will never be used for anything else.
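There's also a technical reason: padding lets a client make every query the same length, so the size of the encrypted request doesn't leak the length of the queried name. A sketch; the target length here is an arbitrary choice for illustration, not anything the API mandates:

```python
import base64
import os
from urllib.parse import urlencode

TARGET_LEN = 128  # arbitrary fixed query-string length, for illustration

# Pad the query string to a constant length so that request size
# doesn't reveal how long the queried name is.
def padded_query(name: str, rr_type: str = "A") -> str:
    base = urlencode({"name": name, "type": rr_type,
                      "random_padding": ""})
    need = max(0, TARGET_LEN - len(base))
    pad = base64.urlsafe_b64encode(os.urandom(need))[:need].decode()
    return base + pad

q1 = padded_query("a.example")
q2 = padded_query("much-longer-name.example.com")
print(len(q1) == len(q2))  # True
```

Reserving a dedicated parameter name for this, as the docs do, means servers know to ignore it and nothing else will ever collide with it.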

Why not DNS over TLS? https://tools.ietf.org/html/rfc7858

Shameless plug (again): https://github.com/pforemski/dingo

Why have you disabled "Issues" on your repo?

For now I prefer to communicate through email

I find it very annoying when people do this, it makes it very hard to gauge what sort of issues the software has and how the author responds to them.

If you find an important issue and document it properly, I will be happy to re-enable issues and add your input there. GitHub just doesn't provide enough moderation tools there, IMHO.

You know, I've had thoughts along similar lines in the email space (SMTP). HTTP is such a fantastic protocol and an amazing amount of engineering effort has gone into it compared to SMTP. I've wondered whether there would be any interest in defining a translation from SMTP into HTTP, with an eye toward eventually deprecating SMTP in the fullness of time.

For example, to send an email, perhaps you just send an HTTP POST request to a canonical endpoint (email.example.com), instead of all the rigamarole that SMTP servers require with a unique text protocol requiring multiple round trips. Have you seen the number of SMTP commands involved in sending a single email? Here's an abbreviated transcript of what it's like to send an email using `telnet`:

  # Wait for banner from server (RT #1)
  220 email-inbound-relay-1234.example.com ESMTP Sendmail 1.0.0; Thu, 29 Sep 2016 19:22:12 GMT
  # Send EHLO and wait for reply (RT #2)
  EHLO example.com
  250-email-inbound-relay-1234.example.com Hello ws-1.example.com [], pleased to meet you
  250 HELP

  # At this phase you should really send STARTTLS and negotiate a TLS connection,
  # but we'll just ignore that for now and proceed plaintext.
  # Specify sender (RT #3)
  MAIL FROM: jcrites@example.com
  250 2.1.0 jcrites@example.com... Sender ok
  # Specify recipient (RT #4)
  RCPT TO: jcrites@example.net
  250 2.1.5 jcrites@example.net... Recipient ok
  # Send DATA and wait for the go-ahead (RT #5)
  DATA
  354 Enter mail, end with "." on a line by itself
  Subject: Hello, world!

  Fun stuff
  # Terminate with "." and wait for the reply (RT #6)
  .
  250 2.0.0 u8U1LC1l022963 Message accepted for delivery
Furthermore, if you skip these steps or front-run them, some servers will consider that suspicious or spammy behavior. (RFC 2920 properly allows this as an extension called pipelining, advertised in the EHLO reply above.)

With full use of SMTP extensions, things are a bit better than I imply but still frustratingly suboptimal. For example, I've run across ISPs who purely for their own load management reasons want to close an SMTP session at the TCP level after an arbitrary number of emails have been sent (N < 100)! Why would they desire that? If we're going to exchange more messages, then it's certainly less efficient for us both to negotiate a new TCP session and TLS session, rather than reuse the one we already have, but such is the practice of email. So message sending often can be as inefficient as this. When sending to some ISPs worldwide it's not uncommon for a single message to take seconds to deliver under normal network conditions.

How about we replace all of that with an HTTP POST to email.example.com, specifying the email headers and content with the POST body, and the sender and recipient as headers or querystring parameters? I think it'd be nice to get there eventually rather than drag SMTP on forever. All of the effort that goes into HTTP clients, servers, and security could benefit the email community as well.
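To make the proposal concrete, here's a minimal sketch in Python of what such a submission could look like. Everything beyond what's described above is invented for illustration: the `/messages` path, the JSON payload shape, and the `X-Mail-From`/`X-Rcpt-To` envelope headers are assumptions, not any existing standard.

```python
import json
import urllib.request

def build_email_post(sender, recipient, subject, body):
    """One HTTP POST replacing the whole multi-round-trip SMTP dialogue."""
    url = "https://email.example.com/messages"  # hypothetical endpoint + path
    headers = {
        "Content-Type": "application/json",
        "X-Mail-From": sender,    # hypothetical envelope-sender header
        "X-Rcpt-To": recipient,   # hypothetical envelope-recipient header
    }
    payload = json.dumps({"subject": subject, "body": body}).encode("utf-8")
    return urllib.request.Request(url, data=payload, headers=headers,
                                  method="POST")

req = build_email_post("jcrites@example.com", "jcrites@example.net",
                       "Hello, world!", "Fun stuff")
```

Sending would then be a single `urllib.request.urlopen(req)`, over a connection that HTTP/2 could keep open and multiplex for subsequent messages.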

Proper TLS security is still nascent in SMTP; only because of Google's actions with Gmail and their Safer Email [1] initiative has TLS really come into widespread adoption at all. Today, although a lot of email nominally travels over TLS, most clients don't perform any certificate path validation, so the connections are susceptible to MITM; and email clients don't present client TLS certificates, nor do servers examine them. If we were to employ it, TLS client certificate authentication could be an effective way to prevent email forgery, e.g., requiring email from example.com to be sent from a client with a TLS certificate for that domain. This kind of thing would be much easier to achieve in the HTTP world than in the SMTP world. We could also take advantage of HTTP/2 multiplexing to efficiently deliver a lot of traffic across just one TCP connection.
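As a sketch of that client-certificate idea, here's how a Python server could demand certificates at the TLS layer; mapping the presented certificate's domain to the envelope sender would be application logic layered on top. The function name and CA-bundle parameter are illustrative, not from any standard.

```python
import ssl

def make_sender_auth_context(ca_bundle=None):
    """Build a TLS server context that rejects clients lacking a valid cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
    if ca_bundle:
        # CAs trusted to vouch for sending domains (path is a placeholder)
        ctx.load_verify_locations(ca_bundle)
    # A real server would also call ctx.load_cert_chain(...) with its own
    # certificate and key before wrapping sockets.
    return ctx

ctx = make_sender_auth_context()
```

After the handshake, the server could read the peer's validated certificate with `sslsock.getpeercert()` and check its domain against the claimed sender.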

We'd still need most of the effort invested into email, such as all of the effort fighting abuse, and mail servers would still need to buffer outbound messages and authenticate inbound ones, etc. (and we'd still need SPF, DKIM, DMARC) but at least it would simplify the foundational and protocol-level work, like what's involved in bootstrapping a new email client or server from scratch. You could write basic code to send an email in a few minutes using an HTTP library in any language. SMTP is pretty well entrenched, however, and the incremental benefit is probably not large enough, so I don't have my hopes up.

[1] https://www.google.com/transparencyreport/saferemail/

FastMail has been working on something called JMAP [1] for quite some time. It's an HTTP-based replacement for IMAP. Perhaps it could be extended to replace SMTP as well. Then we would have a single, HTTP-based API for all of our email needs.

[1] http://jmap.io/

I'm not convinced that's a useful generalization. Beyond the fact that IMAP and SMTP both do something vaguely related to email, there's very little overlap between the two protocols.

Isn't this effectively what any mail API does? Sendgrid, mailgun, Amazon SES, etc. all have HTTP APIs for interacting with email.

Yes, that's an apt comparison. Proprietary APIs for email exist today between parties that trust each other (clients and their service providers). What I'm proposing is to take it further and devise a standard HTTP-based protocol for message transmission between equal parties like ISPs, and for scenarios where there isn't preexisting trust.

For example, today if you use an HTTP API to submit a message to SendGrid or Mailgun or Amazon SES, that's a trusted relationship based on an account you have with the service, typically a paid relationship. Each provider has its own unique API, which is incompatible with other providers.

In the next step of that process, your service provider's Mail Transfer Agent (MTA) communicates with the final destination mail server (`example.com MX`), and that part is a peer relationship between ISPs (quasi-trusted or untrusted). This communication is all SMTP today, and I'm proposing the idea of a standard way to transmit emails over HTTP in this layer too, in such a way that it would, in the fullness of time, obsolete SMTP.

I've recently been thinking a lot about email because of a project [0], and yes, I do agree that it would be great to tunnel SMTP inside HTTP(S). (But I'd go with an SRV record, not a constant name.)

But no, it's not a great idea to let HTTP replace SMTP. SMTP is a stateful protocol, and while that brings some problems, it also brings some gains. For example, backend servers can keep connections open between them, peers can negotiate resource usage in real time, and the entire extension model is only possible because of connection state.

You'd lose all of those (or replace them with ugly hacks) by tunneling over stateless HTTP. It's a worthwhile trade-off in some situations (like when you are behind a bad proxy), but not always.

[0] sealgram.com

You don't need a canonical endpoint per se, just an SRV record.

Active Directory already uses SMTP for synchronising between sites.

DNS-over-QUIC would be a much more compelling technical proposal from Google as a standard

Note that although it is not documented, when you query the Google DNS-over-HTTPS service from Chrome, it will usually use QUIC. You can check this at chrome://net-internals/#quic, where you will probably see something like this (note the stack: DNS over HTTPS over QUIC over UDP over IPv6!):

dns.google.com:443 true QUIC_VERSION_34 [2607:f8b0:400d:c03::8a]:443 10544469510527000173 0 None 2 9 0 9 true

An independent implementation of QUIC (are there any outside of browsers?) would probably work much the same, modulo any changes during the ongoing standardization of QUIC.

This won't be overly useful (to me) unless/until the system resolver supports this and I can implement this on my own DNS server(s). Seems like a good idea, though.

I will still prefer the system resolver to support DNS over HTTPS natively, but this option could work.

So to resolve a domain to an IP address without using regular DNS, they have opted to use HTTPS, which itself requires a certificate issued for a domain name that was validated via DNS.

I think this could be more useful if there were a local client that installs and proxies, e.g., a traditional query to localhost:53 gets translated to DNS-over-HTTPS.
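A toy version of that proxy is straightforward to sketch: pull the question name out of the incoming wire-format query, then resolve it through Google's documented JSON endpoint (https://dns.google.com/resolve). This sketch ignores DNS compression pointers (queries don't use them) and, crucially, omits re-encoding the JSON answer back into DNS wire format, which a usable localhost:53 proxy would need.

```python
import json
import socket
import urllib.parse
import urllib.request

def qname_from_query(packet):
    """Extract the question name from a raw DNS query packet."""
    labels, i = [], 12                  # the fixed DNS header is 12 bytes
    while packet[i]:                    # labels are length-prefixed; 0 ends them
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii"))
        i += 1 + n
    return ".".join(labels)

def resolve_over_https(name, rtype="A"):
    """Resolve a name via Google's JSON DNS-over-HTTPS API."""
    url = "https://dns.google.com/resolve?" + urllib.parse.urlencode(
        {"name": name, "type": rtype})
    with urllib.request.urlopen(url) as resp:
        answer = json.load(resp)
    return [rec["data"] for rec in answer.get("Answer", [])]

def serve(port=5353):                   # binding port 53 needs root; use 5353
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        packet, _addr = sock.recvfrom(512)
        name = qname_from_query(packet)
        # A real proxy would encode this answer into wire format and
        # sock.sendto() it back; here we just print it.
        print(name, "->", resolve_over_https(name))
```

A complete implementation would be easier on top of a DNS library that handles wire-format encoding, like the dnslib fork mentioned further down the thread.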

I wonder what the performance hit of something like this would be. It seems like the TLS connection setup would be a bottleneck for the page load.

I use DNSCurve, which adds <1ms to latency. That's with X25519 and XSalsa20-Poly1305. Assuming a persistent connection, DNS-over-HTTPS might be similar with AES-NI or ChaCha20-Poly1305. The real speed issue is the number of round trips. DNSCurve is Zero-RTT, and assuming a persistent connection, DNS-over-HTTPS should probably be too at least once it's up and running.

And then consider that lots of pages have megabytes of javascript fetched from multiple sources, big and often unoptimized images, expensive screen redraws, etc.

Modern crypto doesn't affect performance at all. Hell, even PQCrypto-encrypted-DNS with 64KB public keys would be fast compared to the modern web. There's no reason to worry anymore about modern crypto affecting performance. It's just not an issue.

Probably not much, especially if you can maintain a persistent HTTP/2 connection to the DNS server.

If you had the TLS connection open to Google, there's a bit of overhead because the request is longer, and the response is longer than native DNS, plus http headers (hpack and content-encoding would help), but I wouldn't expect it to spill to a second packet for either request or response. Encrypt/Decrypt is probably not a big deal compared to network latency. Assuming Google runs dns.google.com in the same locations it runs its port 53 services, then it's still one round trip either way.

If you don't have the connection open, you still have to do a port 53 DNS lookup to find out where to connect (1 round trip to configured dns server), plus open a tcp connection (1 round trip), setup tls (1 round trip, assuming TLS false start), DNS request (1 round trip); so 4 round trips vs 1.

This is great, but Google doesn't need to eavesdrop on us when they compel us to use their avenues for our every action.

It is optional to use, just like it's optional to use Google's DNS servers. The advantage of this is the extra security you get with HTTPS.

Google DNS tends to be one of the fastest DNS servers you can use (just benchmark them against other options). The IPs are anycast, so you will likely be served by the Google data center closest to you.

As for what they log, check it yourself: https://developers.google.com/speed/public-dns/privacy

So, what DNS server do you use? I trust Google's DNS (I use the normal DNS servers) a lot more than I trust Comcast's DNS servers. I'm sure there are others out there, of course, but Google's is good, reliable, and easily memorized.

I run my own, but then again, I also run my own web and email services. But in my gut, I have the feeling that at some point, Google will become the Internet (or even worse, we'll have the GoogleNet and FacebookNet, and never the twain shall share).

Until then, I'll run my own stuff.

> So, what DNS server do you use?

The Chaos Computer Club runs their own, which usually answers just as fast as Google's DNS. (And it isn't subject to censorship: Google's DNS, like Comcast and most US ISPs, censors several domains of piracy websites, even though the domains still exist in the ICANN database and are reachable through most other DNS servers.)

I use OpenDNS. My home network is set up so that all DNS requests are sent via dnscrypt to OpenDNS. This ensures that Comcast (or whoever) doesn't ever see my DNS traffic and can't muck with it.

Comcast is the one sending you the (unsigned, unencrypted) response packets from

You might as well use the Comcast ones.

Politically it's a lot easier to stop Comcast from altering through-traffic DNS than it is to stop them from lying in DNS responses and calling it pro-user.

And if you use DNS-over-HTTPS to get your answers from Google, Comcast can't modify them.

I use OpenDNS

Is there a matrix of X-over-Y implementations? I would assume that we are converging to a fully filled matrix.

If not filled, probably full-rank such that all X-over-Y is possible, via-S-via-T!

Why do I need DNS over HTTP?

That's fantastic. Wondering why this hasn't happened before?

- It has happened, see the comment below (openresolve).

- It doesn't solve the problem, unlike other existing solutions for encrypting DNS.

- It adds unnecessarily high overhead.


There may be more than one problem.

That's relative to the aforementioned problem.


I find it amusing that they chose apple.com for the example.

httpresolver.py implements DNS over HTTPS in a fork of Paul Chakravarti's dnslib. I have it running as a resolver for my Mac using the command:

sudo python3 httpresolver.py

Have a play with https://bitbucket.org/tony_allan/dnslib

Why is this news? Any protocol can (usually trivially) be tunneled over http(s).

I believe the significance is that it's being done, how, and by an Internet giant. There's an impact difference between random person on Github doing DNS over HTTPS and Google deploying it.

Great idea! Even more leverage for a DDoS attack!

This idea is quicksand: it seems fine until you rely on it, and then attacks can make your service unavailable with very few computers and little traffic. And then you're stuck, because we still hardly know how to prevent DDoS except by having huge bandwidth compared to the attackers. Unless you are a megacorp with huge datacenters everywhere, it is a bad idea.

But Google will never become a monopolistic company that behaves assholishly, right? They would never push standards that favor them over the few remaining hosting companies on the internet. Would they?

> Even more leverage for a DDoS attack!

HTTPS already exists and is slightly less vulnerable than normal DNS traffic.

This opens no new DDoS opportunities. The rest of your post is irrelevant.
