
DNS-over-HTTPS - mikecarlton
https://developers.google.com/speed/public-dns/docs/dns-over-https
======
byuu
> Currently, web-based applications must use browser extensions to take
> advantage of advanced DNS features such as DANE, DNS-SD service discovery,
> or even to look up anything other than IP addresses. Extensions for features
> that depend on DNSSEC must validate it themselves, as the browser and the OS
> may not (be able to) validate DNSSEC.

If only there were some way for Google to let people take advantage of
advanced DNS features without requiring browser extensions ... alas, they'd
probably need to add that code to a web browser. But then, where would they
find a web browser they could add such code to? Ah well. Google should ask
those web browser vendors why they won't implement DNSSEC, at least. Maybe
start with asking the browser with the highest market share, whichever that
one is.

~~~
hannob
agl has explained in detail why they don't do that:
[https://www.imperialviolet.org/2015/01/17/notdane.html](https://www.imperialviolet.org/2015/01/17/notdane.html)

tl;dr they can't, because DANE is undeployable in the current Internet.

~~~
teddyh
That argument seems to boil down to “ _We can’t use any new record types in
the DNS ever again because 4-5% of users had problems when we silently tested
it_ ”. First, this is defeatism writ large – if we can’t develop anything new,
ever, we may all just as well go home and give up. Secondly, these things
don’t exist in a vacuum; if a website _failed_ because a user couldn’t access
DANE or whatever, this would create pressure on all affected parties to _fix_
the situation. This is how things _progress_ instead of stagnating.

~~~
hannob
> First, this is defeatism writ large – if we can’t develop anything new,
> ever, we may all just as well go home and give up.

No, it means you need to consider the existing realities when designing a new
protocol. A good example is HTTP/2: countless proxies would probably produce
garbage the moment they saw an HTTP/2 packet, so people concluded that they
needed to wrap HTTP/2 in TLS in order to deploy it. (I know there's HTTP/2
without TLS, but nobody is using it, for precisely that reason.) HTTP/2
was built with deployability in mind. DNSSEC was not.

> Secondly, these things don’t exist in a vacuum; if a website failed because
> a user couldn’t access DANE or whatever, this would create pressure on all
> affected parties to fix the situation.

This is simply not what's happening. What happens in the real world is that
people blame their browser for breaking things that previously worked. There
are countless examples of this.

~~~
Senji
> What happens in the real world is that people blame their browser for
> breaking things that previously worked.

There are only three browser vendors of any note right now. It wouldn't be
that hard for them to collaborate and push this change at the same time.

------
eridius
I find it somewhat amusing that in order to use DNS-over-HTTPS you must first
resolve a domain using "normal" DNS (dns.google.com). You'd think they'd go
ahead and publicly advertise a static IP for that so you can use it without
relying on normal DNS.

~~~
microcolonel
The certificate has to match, so I don't think there would be a problem.

~~~
bluejekyll
Assuming none of your CAs have been compromised.

~~~
paulddraper
If you're not trusting CAs, there's not much point to DNS over HTTPS, is
there? Might as well be DNS over HTTP. Or just DNS.

~~~
paulddraper
To expand on this, there _would_ be three advantages, but none have to do with
CA trust: (1) Client implementations may be slightly simpler, since they have
one less protocol to handle. (2) The initial DNS lookup would be removed
altogether as a possible source of error, rather than being a detectable
possible source of error. (3) Some marginal additional privacy, though reverse
DNS means not really.

------
zdw
DNSCurve solved this years ago: [http://dnscurve.org](http://dnscurve.org) ,
with implementation notes here: [https://dnscurve.io](https://dnscurve.io)

~~~
AliceWonderMisc
No it didn't. DNSCurve only solves recursive to authoritative. The request
from a typical PC to the recursive resolver is still unencrypted, and it
provides no mechanism for the PC to even know whether the result it receives
is valid, or whether DNSCurve was even used by the recursive resolver.

------
thewisenerd
> (including DNS-based Internet filtering)

What guarantees that they wouldn't start filtering URLs on their own (upon a
DMCA request, or one from the FBI)?

I do get that they say what they log
([https://developers.google.com/speed/public-dns/privacy](https://developers.google.com/speed/public-dns/privacy)),
yet if this ever does become a _commonplace_ thing, they'd easily be able to
obtain the IP addresses of users trying to access blacklisted websites, and
hand them over to officials (upon request, maybe?).

------
davidu
We did the same thing w/ JSON responses:
[https://www.openresolve.com/](https://www.openresolve.com/)

This is a bad idea outside of experimentation. Not to be used for production.

If you want to secure DNS look at QUIC, TLS, or my favorite, DNSCrypt (which I
funded).

~~~
prewett
Why is this a bad idea? You can't just say "it's bad" with no justification.

As a user, I can sure think of some countries with broken Internet access
where this would come in handy.

~~~
davidu
I'll rephrase: it's way better accomplished over UDP, without sacrificing
security in the ways I listed.

Running DNS over HTTPS over TCP isn't needed. It doesn't solve a problem.

Doing JSON DNS for OOB DNS checks is useful since most applications speak
HTTP. :-)

------
therusskiy
I've actually just written a blog post about it:
[http://www.dmitry-ishkov.com/2016/09/dns-over-https.html](http://www.dmitry-ishkov.com/2016/09/dns-over-https.html)
You can run a local DNS server which will use Google's DNS-over-HTTPS. But as
eridius noted, you still have to resolve dns.google.com first.

~~~
tssva
I would not use that implementation. It is broken in multiple ways. The most
impactful for normal browsing is that it only supports a couple of RR types,
which don't include CNAMEs.

~~~
tssva
I didn't mention it in my original comment because I thought the code didn't
exist anymore but I found an old Time Machine backup disk with the code on it
for an updated version of the referenced implementation. I have put it up on
Github at
[https://github.com/tssva/dnshttps-proxy](https://github.com/tssva/dnshttps-proxy).
I need to throw up a README and give attribution. Will get to that later today.

This version will support all RR types supported by the miekg/dns library
which is the vast majority of them and any you are likely to come across in
the wild. It also allows you to specify regular DNS resolvers, which can be
used in two ways: as a fallback if connectivity to the DNS over HTTPS service
fails, or to always resolve specific domains. It also allows you to
restrict access to the proxy to certain networks. The rest of the code should
be IPv6 friendly but for some reason I implemented the access list in a manner
that only supports specifying IPv4 networks. Guess I have something to work
on.

If no DNS resolvers are specified it attempts to use the Google Public DNS
servers to resolve dns.google.com. If DNS resolvers are specified they are
used to resolve dns.google.com. A flag to always use the Google Public DNS
servers would be useful, so now I have 2 things to work on.

As far as performance impact goes, I have generally seen 20-80 ms of
additional delay. Using a caching resolver behind the proxy would help
mitigate this. As is, the additional delay is pretty much unnoticeable when
web browsing.

------
joshAg
Why can't you just send DNS messages over an SSL/TLS socket? What's the value
add for http and REST?

~~~
LukeShu
I would assume being able to reuse the connection for multiple requests.
Setting up the TLS connection is quite a bit more expensive than raw TCP, and
especially UDP (most DNS happens over UDP). For longer connections, the
additional overhead is minor, but for the extremely short DNS messages, I
would imagine that a TLS connection per DNS request would be some pretty
substantial overhead.

Edit: The page mentions that this allows web applications to make their own
DNS requests, possibly looking up things other than A/AAAA records that the
browser normally requests.

~~~
eggnet
http and rest are not the only ways to reuse a socket. You could just write
the packet to the TLS stream, raw, and keep the socket open.

~~~
LukeShu
Definitely. But then you need to define an envelope to mark where an
individual message begins and ends (with UDP DNS, it's a single datagram; with
TCP DNS, it's the entirety of the transmission). There are infinite ways to do
this, and countless already specified in various standards (many of which
are already implemented in browsers, which is surely the primary
application Google had in mind). HTTP provides one such envelope.

~~~
pprx
> with TCP DNS, it's the entirety of the transmission

DNS messages have a two-byte length prefix when transmitted over TCP. Multiple
envelopes can be, and often are, sent over a single circuit.
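That TCP framing is simple enough to sketch in a few lines of Python (a
standalone illustration of RFC 1035's length-prefix scheme, not taken from any
particular resolver):

```python
import struct

def frame_dns_message(msg: bytes) -> bytes:
    """Prefix a DNS message with its two-byte big-endian length (RFC 1035, section 4.2.2)."""
    return struct.pack("!H", len(msg)) + msg

def read_dns_messages(stream: bytes):
    """Split a TCP byte stream back into the individual DNS messages."""
    offset = 0
    while offset + 2 <= len(stream):
        (length,) = struct.unpack_from("!H", stream, offset)
        offset += 2
        yield stream[offset:offset + length]
        offset += length
```

Several framed messages concatenated on one connection split cleanly back
apart, which is what lets multiple envelopes share a single circuit.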

------
dorianm
Pretty cool, DNS as an API:

- JSON:
[https://dns.google.com/resolve?name=doma.io](https://dns.google.com/resolve?name=doma.io)

- Web interface:
[https://dns.google.com/query?name=doma.io&type=A&dnssec=true](https://dns.google.com/query?name=doma.io&type=A&dnssec=true)
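Hitting the JSON endpoint needs nothing beyond the standard library. A sketch,
assuming the response shape from Google's docs (an `Answer` array whose
entries carry a `data` field):

```python
import json
import urllib.parse
import urllib.request

API = "https://dns.google.com/resolve"

def build_query_url(name: str, rr_type: str = "A") -> str:
    # Extra parameters (dnssec, random_padding, ...) can be appended the same way.
    return API + "?" + urllib.parse.urlencode({"name": name, "type": rr_type})

def resolve(name: str, rr_type: str = "A"):
    """Return the answer data strings for a name, or [] on NXDOMAIN-style replies."""
    with urllib.request.urlopen(build_query_url(name, rr_type)) as resp:
        reply = json.load(resp)
    # "Answer" is absent when the query fails, hence the .get() default.
    return [answer["data"] for answer in reply.get("Answer", [])]

if __name__ == "__main__":
    print(resolve("doma.io"))
```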

------
KaiserPro
nice toy, but there are some things to be considered:

1) "secure DNS" is a solved problem
2) DNS is simple
3) responses normally easily fit inside one packet
4) DNS is fast

HTTPS is a slow, wordy and inefficient protocol. Forcing everything into JSON
just compounds the problem.

~~~
viraptor
> 1) "secure DNS" is a solved problem

No, not in practice. You can easily MitM DNS and nobody is verifying DNSSEC by
default. On the current internet, secure DNS just doesn't exist.

~~~
KaiserPro
the mechanism is there; people are just not choosing to use it.

That is because security is hard.

This doesn't mitigate MitM attacks, as upstream DNS records can still be
spoofed.

We go from a fast, decentralised, resilient and low-overhead system to a
centralised, chatty and fragile behemoth.

It still doesn't give end-to-end encryption; it's just a slow encrypted proxy.

~~~
viraptor
First people need to know about it in order to use it. Then it needs to
actually work and not fail:
[https://ianix.com/pub/dnssec-outages.html](https://ianix.com/pub/dnssec-outages.html)

------
babangida
HTTP-over-DNS would also be neat :) I think I would be able to get internet
access in some airports if I had HTTP-over-DNS

~~~
kijin
DNS-over-HTTPS-over-Tor-over-DNS-over-ICMP is where it's at. None of the
three-letter agencies can eavesdrop on your DNS queries now! It might even be
faster than RFC 1149.

[https://www.ietf.org/rfc/rfc1149.txt](https://www.ietf.org/rfc/rfc1149.txt)

~~~
chris_wot
Not if you actually use RFC 1149. You're operating at layer 1 and 2 when you
rely on birds.

I'm waiting for the day when RFC 1149 is expanded to incorporate a layer 2
tunnelling protocol. I suspect the issue is not the initial encapsulation but
how to extract the original frame without it getting mangled.

------
amluto
Why doesn't Google include the entire DNSSEC signature chain in the response?
Their current approach to DNSSEC validation seems quite weak. Sure, I can
query them and get an answer with AD set, but then I need to trust that they
didn't tamper with the response.

------
jbb555
The "web" is becoming hack upon hack upon hack.

~~~
Beltiras
Becoming? When has it not been?

------
aomix
DNS doesn't seem very well secured against determined attackers, but at the
same time I almost never hear about attacks done via DNS spoofing. So I guess
it's harder to attack than I think.

------
znpy
Sometimes I think we should all take the Apple approach to this kind of thing
and deprecate old stuff and/or make new stuff mandatory.

We could just force DNS extensions to be implemented in most/all client/server
implementations.

DNS over HTTPS might be okay and work well, but imho it's a (smart?)
workaround, not a fix.

Why can't we all set a time window (7.5 years? 10 years? 15 years?) to plan
massive RFC/protocol updates with possibly-breaking changes?

Edit: fix grammar (not a native speaker of English)

------
dtjohnnymonkey
What's the purpose of explicitly specifying the "random_padding" parameter?
Couldn't the client send any arbitrary unused query argument as padding?

~~~
pshc
I can think of two non-technical reasons: as an explicit reminder to devs
about side-channel attacks, and also to guarantee the key "random_padding"
will never be used for anything else.
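As a concrete illustration of the side-channel point: padding every query so
that name plus padding has a constant total length hides the name's length
from an observer of the encrypted traffic. A rough sketch; the target length
and alphabet here are arbitrary choices, not anything the API mandates:

```python
import secrets
import string

TARGET_LEN = 253  # the maximum length of a domain name; an arbitrary fixed target

def make_random_padding(name: str, target: int = TARGET_LEN) -> str:
    """Value for the random_padding parameter, chosen so that
    len(name) + len(padding) is the same for every query."""
    fill = max(0, target - len(name))
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(fill))
```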

------
garaetjjte
Why not DNS over TLS?
[https://tools.ietf.org/html/rfc7858](https://tools.ietf.org/html/rfc7858)

------
pjf
Shameless plug (again):
[https://github.com/pforemski/dingo](https://github.com/pforemski/dingo)

~~~
poorman
Why have you disabled "Issues" on your repo?

~~~
pjf
For now I prefer to communicate through email

~~~
lox
I find it very annoying when people do this; it makes it very hard to gauge
what sort of issues the software has and how the author responds to them.

~~~
pjf
If you find an important issue and document it properly, I will be happy to
re-enable issues and add your input there. Github just doesn't provide enough
moderation tools there, IMHO.

------
jcrites
You know, I've had thoughts along similar lines in the email space (SMTP).
HTTP is such a fantastic protocol and an amazing amount of engineering effort
has gone into it compared to SMTP. I've wondered whether there would be any
interest in defining a translation from SMTP into HTTP, with an eye toward
eventually deprecating SMTP in the fullness of time.

For example, to send an email, perhaps you just send an HTTP POST request to a
canonical endpoint (email.example.com), instead of all the rigamarole that
SMTP servers require with a unique text protocol requiring multiple round
trips. Have you seen the number of SMTP commands involved in sending a
_single_ email? Here's an abbreviated transcript of what it's like to send an
email using `telnet`:

    
    
      # Wait for banner from server (RT #1)
      220 email-inbound-relay-1234.example.com ESMTP Sendmail 1.0.0; Thu, 29 Sep 2016 19:22:12 GMT
      
      # Send EHLO and wait for reply (RT #2)
      EHLO example.com
      250-email-inbound-relay-1234.example.com Hello ws-1.example.com [1.2.3.4], pleased to meet you
      250-ENHANCEDSTATUSCODES
      250-PIPELINING
      250-EXPN
      ...
      250 HELP
    
      # At this phase you should really send STARTTLS and negotiate a TLS connection,
      # but we'll just ignore that for now and proceed plaintext.
      
      # Specify sender (RT #3)
      MAIL FROM: jcrites@example.com
      250 2.1.0 jcrites@example.com... Sender ok
      
      # Specify recipient (RT #4)
      RCPT TO: jcrites@example.net
      250 2.1.5 jcrites@example.net... Recipient ok
      
      # Specify message headers and content (RT #5)
      DATA
      354 Enter mail, end with "." on a line by itself
      Subject: Hello, world!
      
      Fun stuff
      .
    
      # Wait for reply (RT #6) 
      250 2.0.0 u8U1LC1l022963 Message accepted for delivery
    

Furthermore, if you skip these steps or front-run them, some servers will
consider that suspicious or spammy behavior. (RFC 2920 properly allows this as
an extension called pipelining, advertised in the EHLO reply above.)

With full use of SMTP extensions, things are a bit better than I imply but
still frustratingly suboptimal. For example, I've run across ISPs who purely
for their own load management reasons want to close an SMTP session at the TCP
level after an arbitrary number of emails have been sent (N < 100)! Why would
they desire that? If we're going to exchange more messages, then it's
certainly _less_ efficient for us both to negotiate a new TCP session and TLS
session, rather than reuse the one we already have, but such is the practice
of email. So message sending can often be this inefficient. When sending
to some ISPs worldwide it's not uncommon for a single message to take seconds
to deliver under normal network conditions.

How about we replace all of that with an HTTP POST to email.example.com,
specifying the email headers and content with the POST body, and the sender
and recipient as headers or querystring parameters? I think it'd be nice to
get there eventually rather than drag SMTP on forever. All of the effort that
goes into HTTP clients, servers, and security could benefit the email
community as well.
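As a sketch of what that hypothetical submission could look like (the
`/messages` path, the header names, and the JSON body shape are all invented
for illustration; nothing like this is standardized):

```python
import json

def build_submission(sender: str, recipient: str, subject: str, body: str):
    """One HTTP POST standing in for the six SMTP round trips above.
    Everything about this shape (path, headers, body) is hypothetical."""
    headers = {
        "Content-Type": "application/json",
        "X-Mail-From": sender,     # invented header names
        "X-Rcpt-To": recipient,
    }
    payload = json.dumps({"subject": subject, "body": body})
    return "POST", "/messages", headers, payload
```

An `http.client.HTTPSConnection` to the recipient domain's canonical endpoint
could then deliver this in a single round trip after TLS setup, with client
certificates checked during the same handshake.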

Proper TLS security is still nascent in SMTP -- only because of Google's
actions with Gmail and their Safer Email [1] initiative has TLS really come
into widespread adoption at all. Today, although a lot of email is _nominally_
taking place over TLS, most clients are not involving any sort of path
validation and the connections are susceptible to MITM; and email clients
don't specify client TLS certificates nor do servers examine them. If we were
to employ it, TLS client certificate authentication could be an effective way
to prevent email forgery, e.g., require email from example.com to be sent from
a client with a TLS certificate for that domain. This kind of thing would be
much easier to achieve in the HTTP world than in the SMTP world. We could also
take advantage of HTTP/2 multiplexing to efficiently deliver a lot of traffic
across just one TCP connection.

We'd still need _most_ of the effort invested into email, such as all of the
effort fighting abuse, and mail servers would still need to buffer outbound
messages and authenticate inbound ones, etc. (and we'd still need SPF, DKIM,
DMARC) but at least it would simplify the foundational and protocol-level
work, like what's involved in bootstrapping a new email client or server from
scratch. You could write basic code to send an email in a few minutes using an
HTTP library in any language. SMTP is pretty well entrenched, however, and the
incremental benefit is probably not large enough, so I don't have my hopes up.

[1]
[https://www.google.com/transparencyreport/saferemail/](https://www.google.com/transparencyreport/saferemail/)

~~~
kijin
FastMail has been working on something called JMAP [1] for quite some time.
It's an HTTP-based replacement for IMAP. Perhaps it could be extended to
replace SMTP as well. Then we would have a single, HTTP-based API for all of
our email needs.

[1] [http://jmap.io/](http://jmap.io/)

~~~
duskwuff
I'm not convinced that's a useful generalization. Beyond the fact that IMAP
and SMTP both do something vaguely related to email, there's very little
overlap between the two protocols.

------
seanmcelroy
DNS-over-QUIC would be a much more compelling technical proposal from Google
as a standard.

~~~
sashametro
Note that although it is not documented, when you query the Google DNS-over-
HTTPS service from Chrome, it will usually use QUIC. You can check this at
chrome://net-internals/#quic, where you will probably see something like this
(look: DNS/HTTPS/QUIC/UDP/IPv6!):

dns.google.com:443 true QUIC_VERSION_34 [2607:f8b0:400d:c03::8a]:443
10544469510527000173 0 None 2 9 0 9 true

An independent implementation of QUIC (are there any outside of browsers?)
would probably work much the same, modulo any changes during the ongoing
standardization of QUIC.

------
rascul
This won't be overly useful (to me) unless/until the system resolver supports
this _and_ I can implement this on my own DNS server(s). Seems like a good
idea, though.

~~~
hrez
[https://github.com/wrouesnel/dns-over-https-proxy](https://github.com/wrouesnel/dns-over-https-proxy)

~~~
rascul
I will still prefer the system resolver to support DNS over HTTPS natively,
but this option could work.

------
chris_wot
So to resolve a domain to an IP address without using regular DNS, they have
opted to use HTTPS, which itself requires a certificate to be signed against a
domain validated via DNS.

------
jetsnoc
I think this could be more useful if there were a local client that installs
and proxies, e.g., a traditional query to localhost:53 gets translated to
DNS-over-HTTPS.
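The fiddly part of such a shim is translating between the wire-format names
arriving on localhost:53 and the text names an HTTPS API expects. A sketch of
just that piece, assuming uncompressed names (query QNAMEs are never
compressed):

```python
def encode_qname(name: str) -> bytes:
    """Text name -> DNS wire format: length-prefixed labels plus a zero terminator."""
    out = b""
    for label in name.rstrip(".").split("."):
        if label:
            out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def decode_qname(wire: bytes, offset: int = 0):
    """Wire format -> (text name, offset just past the terminator)."""
    labels = []
    while wire[offset] != 0:
        length = wire[offset]
        labels.append(wire[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels), offset + 1
```

A real proxy would additionally parse the query header and synthesize a full
response message, but the name round trip is where most wire-format bugs live.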

------
tigarcia
I wonder what the performance hit of something like this would be. It seems
like the SSL connection would be a bottleneck for the page load.

~~~
Panino
I use DNSCurve, which adds <1ms to latency. That's with X25519 and
XSalsa20-Poly1305. Assuming a persistent connection, DNS-over-HTTPS might be
similar with AES-NI or ChaCha20-Poly1305. The real speed issue is the number
of round trips. DNSCurve is Zero-RTT, and assuming a persistent connection,
DNS-over-HTTPS should probably be too, at least once it's up and running.

And then consider that lots of pages have megabytes of javascript fetched from
multiple sources, big and often unoptimized images, expensive screen redraws,
etc.

Modern crypto doesn't affect performance _at all_. Hell, even
PQCrypto-encrypted DNS with 64KB public keys would be fast compared to the modern web.
There's no reason to worry anymore about modern crypto affecting performance.
It's just not an issue.

------
acidtrucks
This is great, but Google doesn't need to eavesdrop on us when they compel us
to use their avenues for our every action.

~~~
deathanatos
So, what DNS server do you use? I trust Google's DNS (I use the normal DNS
ones, 8.8.8.8 and 8.8.4.4) a lot more than I trust Comcast's DNS servers. I'm
sure there are others out there, of course, but 8.8.8.8 is good, reliable, and
easily memorized.

~~~
sneak
Comcast is the one sending you the (unsigned, unencrypted) response packets
from 8.8.8.8.

You might as well use the Comcast ones.

~~~
Dylan16807
Politically it's a lot easier to stop Comcast from altering through-traffic
DNS than it is to stop them from lying in DNS responses and calling it pro-
user.

~~~
sashametro
And if you use DNS-over-HTTPS to get your answers from Google, Comcast can't
modify them.

------
majewsky
Is there a matrix of X-over-Y implementations? I would assume that we are
converging to a fully filled matrix.

~~~
OJFord
If not filled, probably full-rank such that all X-over-Y is possible, via-S-
via-T!

------
drizzentic
Why do I need dns over http?

------
youdontknowtho
That's fantastic. Wondering why this hasn't happened before?

~~~
ingenter
- It has happened; see the comment below (openresolve).

- It doesn't solve the problem, unlike other existing solutions for
encrypting DNS.

- It adds unnecessarily high overhead.

~~~
youdontknowtho
Awesome.

There maybe more than one problem.

That's relative to the aforementioned problem.

Cheers.

------
mtyaka
I find it amusing that they chose apple.com for the example.

------
tony-allan
httpresolver.py implements DNS over HTTPS in a fork of Paul Chakravarti's
dnslib. I have it running as a resolver for my Mac using the command:

    sudo python3 httpresolver.py

Have a play with
[https://bitbucket.org/tony_allan/dnslib](https://bitbucket.org/tony_allan/dnslib)

------
api
Why is this news? Any protocol can (usually trivially) be tunneled over
http(s).

~~~
nickpsecurity
I believe the significance is that it's being done, how, and by an Internet
giant. There's an impact difference between random person on Github doing DNS
over HTTPS and Google deploying it.

------
SFJulie
Great idea! Even more leverage for a DDoS attack!

This idea is quicksand: it seems fine until you rely on it and are shaken by
attacks that make your service unavailable with very few computers and very
little traffic. And then you are screwed, because we still hardly know how to
prevent DDoS except by having huge bandwidth compared to the attackers. Unless
you are a megacorp with huge datacenters everywhere, it is a bad idea.

But then, Google would never become a monopolistic company that behaves
assholishly, right? They would never push standards that favor them over the
few remaining hosting companies on the internet. Would they?

~~~
Dylan16807
> Even more leverage for a DDoS attack!

HTTPS already exists and is slightly less vulnerable than normal DNS traffic.

This opens no new DDoS opportunities. The rest of your post is irrelevant.

