> Currently, web-based applications must use browser extensions to take advantage of advanced DNS features such as DANE, DNS-SD service discovery, or even to look up anything other than IP addresses. Extensions for features that depend on DNSSEC must validate it themselves, as the browser and the OS may not (be able to) validate DNSSEC.
If only there were some way for Google to let people take advantage of advanced DNS features without requiring browser extensions ... alas, they'd probably need to add that code to a web browser. But then, where would they find a web browser they could add such code to? Ah well. Google should ask those web browser vendors why they won't implement DNSSEC, at least. Maybe start with asking the browser with the highest market share, whichever that one is.
DANE is not DNSSEC. This API doesn't solve the problem DANE is trying to tackle (adding additional trust vectors for certificates).
It actually seems kind of lazy. Their argument is that it is too difficult for client software to embed a DNS resolver into the browser so it can check whether DNSSEC was attempted and whether it was successful. The actual validation would still happen at the client's local resolver.
Their solution is to make client software do its lookups through a REST API instead, and the validation still happens at your resolver (the REST API provider).
The only thing I see as a positive with this is that your DNS requests will be encrypted. The trade-off is the complexity of the process of getting your responses, and that you'll be subject to potential third-party censorship and logging of all destinations, which will be difficult to opt out of.
That argument seems to boil down to “We can’t use any new record types in the DNS ever again because 4-5% of users had problems when we silently tested it”. First, this is defeatism writ large – if we can’t develop anything new, ever, we may all just as well go home and give up. Secondly, these things don’t exist in a vacuum; if a website failed because a user couldn’t access DANE or whatever, this would create pressure on all affected parties to fix the situation. This is how things progress instead of stagnate.
> First, this is defeatism writ large – if we can’t develop anything new, ever, we may all just as well go home and give up.
No, it means you need to consider the existing realities when designing a new protocol. A good example is HTTP/2: probably countless proxies would produce crap once they see an HTTP/2 packet. So people came to the conclusion that they needed to wrap HTTP/2 in TLS in order to deploy it. (I know that there's HTTP/2 without TLS, but nobody is using it, for precisely that reason.)
HTTP/2 was built with deployability in mind. DNSSEC was not.
> Secondly, these things don’t exist in a vacuum; if a website failed because a user couldn’t access DANE or whatever, this would create pressure on all affected parties to fix the situation.
This is simply not what's happening. What happens in the real world is that people blame their browser for breaking things that previously worked. There are countless examples of this.
Not necessarily. Consider the (admittedly dystopian) future where everything you do is hosted at Facebook.com. DNS becomes a vestigial remnant of the old Internet, and Facebook does de facto name resolution on its private servers. New things can then develop within the proprietary Facebook ecosystem, in part because FB controls (more of) the entire experience. (Come to think of it, why doesn't FB develop a browser and an OS? And hardware while they're at it. The fewer middlemen between you and your product, the fewer risks you'll get cut off from them/it.)
> Instead, for this we have HPKP, which is a memory-based pinning solution using HTTP headers.
The problem with HPKP is that you still need a certificate from a CA. And if you want wildcards or validity longer than 90 days (regardless of what you think of their security, many people do -- the biggest sites on the internet have both), that means paying money. And it also still leaves you vulnerable to MitM rogue-CA attacks.
HPKP requires the user to connect the first time to get the pinned key. If an MitM has a fake certificate for your domain, they can send that to you on your first connection and fool you. DANE does not have this problem.
Well, to be fair, DANE has it for the DNSSEC provider used. But that is one thing to trust, and said certificate can be bundled with the OS or browser. One thing to trust beats the current 2000+ intermediate CAs we have to trust.
Further, DANE allows a way to verify self-signed certificates, allowing security to be free for everyone. We could see a lot more innovation than just certbot if Let's Encrypt weren't the only way to generate free certificates. For just one example, web servers could generate certificates from fresh installs; zero configuration required. Like Caddy, but in nginx and Apache.
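To make that concrete: publishing a DANE record for a self-signed certificate is just a hash of the key. Here is a minimal Go sketch (the certificate path and the example.com owner name are placeholders, not anything prescribed) that prints a "3 1 1" TLSA record, i.e. usage DANE-EE, selector SubjectPublicKeyInfo, matching type SHA-256:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "io/ioutil"
        "log"
    )

    func main() {
        // Load a PEM-encoded certificate (path is a placeholder).
        pemBytes, err := ioutil.ReadFile("server.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
        // matching type 1 (SHA-256 of the SPKI).
        digest := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("_443._tcp.example.com. IN TLSA 3 1 1 %s\n",
            hex.EncodeToString(digest[:]))
    }

Drop that record into a signed zone and a DANE-aware client can check the self-signed certificate without any CA in the loop.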
Lastly, HPKP is riskier. You have to include a backup pin. But what would happen if you had your own key pinned, and StartSSL as the secondary? And you set HPKP for a year? And two weeks from now, Mozilla revokes StartCom entirely? Well, good luck with that one. Deleting HPKP is a royal pain for a regular user (about:permissions in Firefox, chrome://net-internals/#hsts in Chrome).
> We also have pre-loaded pinning in Chrome for larger or more obvious targets.
Nice for Google and Facebook, not so nice for byuu.org =(
> It's also worth noting that CryptoCat has committed pinning-suicide in Chrome at the moment due to their CA having switched intermediates between renewals.
Another good negative of HPKP.
> But support for that was removed because it was a bunch of parsing code outside of the sandbox, wasn't really being used and it conflicted with two long-term plans for the health of the HTTPS ecosystem: eliminating 1024-bit RSA and Certificate Transparency. The conflicts are probably the most important reasons for not wanting to renew the experiment.
RSA-1024 is bad, yes. Perhaps we should start asking how we can get DNSSEC on RSA-2048+. Having a commitment to implement it in Chrome, Firefox, Edge, Safari if we do would go a long way toward making that happen, I'm sure. Further, we can reject any RSA-1024 signed DNSSEC after that point, just like we reject weak crypto in browsers today.
If they can't change DNSSEC to stronger crypto, how about they make an alternative? If Google introduces one that has stronger crypto, supports it in Chrome, and offers a free "DNSSEC+" hosting service, I'll definitely take immediate advantage of that.
But the latter is the whole point of DANE -- self-signing. Certificate Transparency is a crutch due to the CA system having 2000+ intermediate CAs that can sign any domain name they want. With DANE, you don't need that anymore. DANE is about providing an alternative to the CA signing business.
And there's nothing stopping users from serving up trusted CA-signed endpoint certificate signatures over DANE/TLSA records. That would obviously remain the only sensible way to do EV certificates. The way I see it, a DANE certificate would act like HTTP does now: you get the globe (or even the question mark circle), but your page actually loads over HTTPS, instead of sending you to Defcon-1 advisories that are all but impossible for the casual user to work around (and indeed, we don't want to train them to work around these warnings anyway.) There's no way this would be worse than plain-text HTTP.
> You literally can't avoid it because the root zone transits through a 1024-bit key.
Does this really matter for the root? There are root CAs with RSA-1024 still.
...
At any rate, Google apparently feels DNSSEC+DANE is good enough for this web service API. So, honest question, why offer that if it's really such an unworkable system?
First, I have bad news for you, and good news for the Internet:
In the very unlikely event that Chromium or Firefox ever honor DANE records, and the even less likely event that they honor trust anchor assertions in TLSA, you are still going to need a CA certificate. In the parallel universe in which DNSSEC is seriously deployed and honored by browsers, the entire X.509 PKI will be replaced with something else before TLSA trust anchor assertions are reliably deployed in browsers, and, until they do, huge fractions of your user base won't know what the hell the DNS is talking about when you give it your self-signed certificate.
If you don't care about that user base, then nothing at all is stopping you from using self-signed certificates today. Just tell your group of friends and followers to check your certificate once when it pops up with a warning, and add it to their trust stores. If you tried to use parallel-universe DANE to serve a self-signed certificate, that is the experience you would have anyways.
"The whole point of DANE" is not self-signing. If it were, Dan York wouldn't be telling people that DANE isn't government key escrow (hint: it is) because the CAs will still be involved. The point of DANE is to come up with some reason, any reason, to get DNSSEC deployed, because the people working on it have in some cases been working on it for over 20 years (it shows) and are frustrated that the Internet has moved on without them.
I don't know what you mean by "Google feels DNSSEC+DANE is good enough for this web service API". The web service is a simple wrapper around a DNS service. The Google you want to pay attention to, the one that matters for the DANE discussion, is the Chromium project. Go talk to the Chromium security people about DNSSEC. See what they say.
RSA-1024 isn't as much of a problem for DNSSEC, though key sizes are being increased. The validity period and the amount of data being signed restrict the attacker to a much larger degree. That being said, I don't think RSA is the correct choice anymore for speed, security, or size of DNS responses.
I was definitely thinking, "why not use Curve25519 for a 256-bit key if fitting the response into one packet is so important?"
It seems there's some worry about quantum computers being able to break ECC with fewer qubits than RSA, on account of there being fewer actual bits.
But to me, it seems like if quantum computers come about that start breaking things, RSA is going to need to be replaced as well. So now's probably not the time to be too paranoid about something we don't yet know will ever even happen. At least, not until we have something we know will be resilient to the new quantum attacks.
I find it somewhat amusing that in order to use DNS-over-HTTPS you must first resolve a domain using "normal" DNS (dns.google.com). You'd think they'd go ahead and publicly advertise a static IP for that so you can use it without relying on normal DNS.
But that is a bit harder to spoof. Suppose someone hijacks your plain text DNS to take over "dns.google.com". Their fake site will not have the right SSL keys, so your DNS-over-HTTPS client will reject it.
To expand on this, there would be three advantages, but none have to do with CA trust:
(1) Client implementations may be slightly simpler, since they have to handle one less protocol.
(2) The initial DNS lookup would be removed altogether as a possible source of error, rather than remaining a detectable possible source of error.
(3) Some marginal additional privacy. But given reverse DNS, not really.
Independent of HPKP, Chrome still does 2011-style static pinning AFAIK, and just by looking at HTTP headers I don't think google.com even uses HPKP. Unlike HPKP, which is trust-on-first-use, static pinning is enforced from the very start, so if you have the ability to statically pin in the client (as you would if you're a browser vendor or distributing your own mobile app), you probably should.
Why do people keep linking to posts from 2011? That is wholly irrelevant... not only is key pinning not done that way now, we have a standard for this.
No, but the service seems to be (at least partly) targeted at "web-based applications", so the https clients would be browsers. Which would in turn mean the resolver would be using the browser's pinned certificates.
I've been playing with some code running as a resolver on my Mac. I had to seed it with some of the Google IP addresses to resolve https://dns.google.com to get it going.
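The lookup side is simple enough once you can actually reach the endpoint; roughly, the code boils down to something like this sketch against the documented /resolve JSON API (note that http.Get here still resolves dns.google.com through ordinary DNS, hence the seeding above; error handling trimmed):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // Mirrors the fields documented for the /resolve JSON response.
    type dnsAnswer struct {
        Name string `json:"name"`
        Type int    `json:"type"`
        TTL  int    `json:"TTL"`
        Data string `json:"data"`
    }

    type dnsResponse struct {
        Status int         `json:"Status"`
        Answer []dnsAnswer `json:"Answer"`
    }

    func main() {
        // random_padding only pads the URL so query lengths don't leak
        // through TLS; the server ignores its value.
        resp, err := http.Get("https://dns.google.com/resolve?name=example.com&type=A&random_padding=XXXXXXXX")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var r dnsResponse
        if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
            log.Fatal(err)
        }
        for _, a := range r.Answer {
            fmt.Printf("%s %d %d %s\n", a.Name, a.TTL, a.Type, a.Data)
        }
    }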
No it didn't. DNSCurve only solves recursive to authoritative. The request from a typical PC to the recursive is still unencrypted. And it provides no mechanism for the PC to even know whether the result it receives is valid, or whether DNSCurve was even used by the recursive.
What guarantees that they wouldn't start filtering URLs on their own (upon request by DMCA, or FBI)?
I do get that they say what they log ( https://developers.google.com/speed/public-dns/privacy ), yet if this ever does become a _commonplace_ thing, they'd easily be able to obtain IP addresses of users trying to access blacklisted websites, and hand them over to officials (upon request, maybe?).
Note that although it is not documented, when you query the Google DNS-over-HTTPS service from Chrome, it will usually use QUIC. You can check this at chrome://net-internals/#quic, and will probably see something like this (look DNS/HTTPS/QUIC/UDP/IPv6!):
An independent implementation of QUIC (are there any outside of browsers?) would probably work much the same, modulo any changes during the ongoing standardization of QUIC.
I've actually just written a blogpost about it.
http://www.dmitry-ishkov.com/2016/09/dns-over-https.html
You can run a local DNS server which is gonna use Google's DNS-over-HTTPS. But as eridius noticed, you still have to resolve dns.google.com.
I'd heard that somebody was working on DNS-over-HTTPS support for https://github.com/getdnsapi/getdns at the hackathon in Buenos Aires in April just before DNS-OARC / IETF-95, but have seen no evidence of that.
I would not use that implementation. It is broken in multiple ways. The most impactful to normal browsing is that it only supports a couple of RR types, which don't include CNAMEs.
I didn't mention it in my original comment because I thought the code didn't exist anymore but I found an old Time Machine backup disk with the code on it for an updated version of the referenced implementation. I have put it up on Github at https://github.com/tssva/dnshttps-proxy. I need to throw up a README and give attribution. Will get to that later today.
This version will support all RR types supported by the miekg/dns library which is the vast majority of them and any you are likely to come across in the wild. It also allows you to specify regular DNS resolvers which can be used in two ways. As fallback if connectivity to the DNS over HTTPS service fails or to always use to resolve specific domains. It also allows you to restrict access to the proxy to certain networks. The rest of the code should be IPv6 friendly but for some reason I implemented the access list in a manner that only supports specifying IPv4 networks. Guess I have something to work on.
If no DNS resolvers are specified it attempts to use the Google Public DNS servers to resolve dns.google.com. If DNS resolvers are specified they are used to resolve dns.google.com. A flag to always use the Google Public DNS servers would be useful, so now I have 2 things to work on.
As far as performance impact goes, I have generally seen 20-80 ms of additional delay. Using a caching resolver behind the proxy would help mitigate this. As is, the additional delay is pretty much unnoticeable when web browsing.
I would assume being able to reuse the connection for multiple requests. Setting up the TLS connection is quite a bit more expensive than raw TCP, and especially UDP (most DNS happens over UDP). For longer connections, the additional overhead is minor, but for the extremely short DNS messages, I would imagine that a TLS connection per DNS request would be some pretty substantial overhead.
Edit: The page mentions that this allows web applications to make their own DNS requests, possibly looking up things other than A/AAAA records that the browser normally requests.
Definitely. But then you need to define an envelope to mark where an individual message begins and ends (with UDP DNS, it's a single datagram; with TCP DNS, it's a two-octet length prefix before each message). There are infinite ways to do this, and countless already specified in various standards (many of which are already implemented in browsers, which is surely the primary application Google had in mind). HTTP provides one such envelope.
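For comparison, the envelope native DNS already uses over TCP is about as small as envelopes get; a rough Go sketch of the sender side, purely for illustration:

    package dnswire

    import (
        "encoding/binary"
        "io"
    )

    // writeTCPMessage frames one DNS message the way RFC 1035 (section 4.2.2)
    // requires for TCP: a two-octet length field in network byte order,
    // followed by the message itself.
    func writeTCPMessage(w io.Writer, msg []byte) error {
        var prefix [2]byte
        binary.BigEndian.PutUint16(prefix[:], uint16(len(msg)))
        if _, err := w.Write(prefix[:]); err != nil {
            return err
        }
        _, err := w.Write(msg)
        return err
    }

HTTP's envelope is heavier, but it comes with caching, proxies, multiplexing (HTTP/2), and clients everywhere, which is presumably the trade-off Google made.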
That's not true. Sure, many languages require you to drop down to a lower-level API to create a vanilla TCP connection, but if the language you're using doesn't provide a way to open a TCP socket and do the TLS negotiation dance, then you've probably chosen the wrong language.
The language is entirely uninteresting; what's interesting is what the runtime allows you to do. Browsers purposefully don't allow you to create raw TCP connections, yet JavaScript in many other contexts (e.g. node.js) can create raw sockets perfectly fine.
Well, I think it's a good idea to prevent web pages from just opening up a socket to some server, speaking whatever protocol they like. It could easily be abused for spam, etc.
Considering the proposal in this thread to move even SMTP to HTTP, how would blocking connections to SMTP prevent spam when one can still POST to SMTP-over-HTTP?
Given how successful they've been at that, I'm curious why you seem to view that as an aspersion on them rather than something to learn from? Shouldn't that tell us something about deploying internet-scale services?
(and, to be clear, the answer is yes, namely that a custom protocol has to deliver a lot of value to make up for having to deal with the deployment headaches inherent to anything less supported by firewalls, NAT boxes, etc. than HTTP/HTTPS)
It's an API that returns a document. Nothing fancy, no bells and whistles. If, for literally anything else, someone had suggested building such a service using raw sockets, they'd rightly be laughed out of the building.
Twilio isn't implemented using AT commands either, and for the same reason.
DNS latency is an entire round trip added to every single fresh domain lookup you make. That happens a lot, and we want those lookups to be as fast as possible. At that point, the raw-sockets vs. full-blown HTTP-over-TLS-over-TCP debate matters.
Twilio, on the other hand, is used in situations with comparably laughable latency requirements. At least two orders of magnitude.
Google Public DNS (8.8.8.8) verifies DNSSEC by default. So does Verisign Public DNS (64.6.64.6).
Some measurements of DNSSEC validation show that as much as 15% of Internet domain lookups validate DNSSEC: http://stats.labs.apnic.net/dnssec/XA. Approximately half of that is due to Google Public DNS validation (many sites use both Google Public DNS and other resolvers that do not validate, so do not actually validate DNSSEC overall).
It is very true that less than 1% of DNS zones are signed with DNSSEC, so it is true that "secure DNS" doesn't practically exist, but this is a serving-side issue, not a lack of client validation.
It's useful for reading your email on those 'free' wifi networks that require you to give your name, address, mother's maiden name, and credit card before getting HTTP access.
DNS-over-HTTPS-over-Tor-over-DNS-over-ICMP is where it's at. None of the three-letter agencies can eavesdrop on your DNS queries now! It might even be faster than RFC 1149.
Not if you actually use RFC 1149. You're operating at layer 1 and 2 when you rely on birds.
I'm waiting for the day when RFC 1149 is expanded to incorporate a layer 2 tunnelling protocol. I suspect the issue is not the initial encapsulation but how to extract the original frame without it getting mangled.
Depends on the definition of fast; the latency indeed could be better, but your bandwidth is only limited by the size of your USB stick, and would most often be much faster than current solutions.
Don't those networks hijack DNS to display captive portals? I always like poking around domains on "free limited-browsing" networks trying to get an open connection to the open web.
Not all of them. Some of them hijack HTTP, so you resolve your destination address as usual, but when you actually attempt to connect, you're redirected. This seems to be becoming less common, though, unfortunately.
Why doesn't Google include the entire DNSSEC signature chain in the response? Their current approach to DNSSEC validation seems quite weak. Sure, I can query them and get an answer with AD set, but then I need to trust that they didn't tamper with the response.
DNS doesn't seem very well secured against determined attackers. But at the same time I almost never hear about attacks done via DNS spoofing. So I guess it's harder to attack than I think.
I can think of two non-technical reasons: as an explicit reminder to devs about side-channel attacks, and also to guarantee the key "random_padding" will never be used for anything else.
If you find an important issue and document it properly, I will be happy to re-enable issues and add your input there. Github just doesn't provide enough moderation tools there, IMHO.
You know, I've had thoughts along similar lines in the email space (SMTP). HTTP is such a fantastic protocol and an amazing amount of engineering effort has gone into it compared to SMTP. I've wondered whether there would be any interest in defining a translation from SMTP into HTTP, with an eye toward eventually deprecating SMTP in the fullness of time.
For example, to send an email, perhaps you just send an HTTP POST request to a canonical endpoint (email.example.com), instead of all the rigamarole that SMTP servers require with a unique text protocol requiring multiple round trips. Have you seen the number of SMTP commands involved in sending a single email? Here's an abbreviated transcript of what it's like to send an email using `telnet`:
# Wait for banner from server (RT #1)
220 email-inbound-relay-1234.example.com ESMTP Sendmail 1.0.0; Thu, 29 Sep 2016 19:22:12 GMT
# Send EHLO and wait for reply (RT #2)
EHLO example.com
250-email-inbound-relay-1234.example.com Hello ws-1.example.com [1.2.3.4], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
...
250 HELP
# At this phase you should really send STARTTLS and negotiate a TLS connection,
# but we'll just ignore that for now and proceed plaintext.
# Specify sender (RT #3)
MAIL FROM: jcrites@example.com
250 2.1.0 jcrites@example.com... Sender ok
# Specify recipient (RT #4)
RCPT TO: jcrites@example.net
250 2.1.5 jcrites@example.net... Recipient ok
# Specify message headers and content (RT #5)
DATA
354 Enter mail, end with "." on a line by itself
Subject: Hello, world!
Fun stuff
.
# Wait for reply (RT #6)
250 2.0.0 u8U1LC1l022963 Message accepted for delivery
Furthermore, if you skip these steps or front-run them, some servers will consider that suspicious or spammy behavior. (RFC 2920 properly allows this as an extension called pipelining, advertised in the EHLO reply above.)
With full use of SMTP extensions, things are a bit better than I imply but still frustratingly suboptimal. For example, I've run across ISPs who purely for their own load management reasons want to close an SMTP session at the TCP level after an arbitrary number of emails have been sent (N < 100)! Why would they desire that? If we're going to exchange more messages, then it's certainly less efficient for us both to negotiate a new TCP session and TLS session, rather than reuse the one we already have, but such is the practice of email. So message sending often can be as inefficient as this. When sending to some ISPs worldwide it's not uncommon for a single message to take seconds to deliver under normal network conditions.
How about we replace all of that with an HTTP POST to email.example.com, specifying the email headers and content with the POST body, and the sender and recipient as headers or querystring parameters? I think it'd be nice to get there eventually rather than drag SMTP on forever. All of the effort that goes into HTTP clients, servers, and security could benefit the email community as well.
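To make that concrete, here's a sketch of what the sending side could look like in Go. Everything here is hypothetical: the /messages path and the X-Mail-From / X-Rcpt-To headers carrying the envelope addresses are made up for illustration, not part of any existing standard:

    package main

    import (
        "log"
        "net/http"
        "strings"
    )

    func main() {
        // The message itself stays RFC 5322; only the transport changes.
        msg := "Subject: Hello, world!\r\n\r\nFun stuff\r\n"

        // Hypothetical convention: the recipient domain example.net accepts
        // mail at email.example.net, mirroring the email.example.com idea above.
        req, err := http.NewRequest("POST",
            "https://email.example.net/messages", strings.NewReader(msg))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Content-Type", "message/rfc822")
        req.Header.Set("X-Mail-From", "jcrites@example.com")
        req.Header.Set("X-Rcpt-To", "jcrites@example.net")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status)
    }

One TLS-protected request per message (after connection setup), with status codes, retries, and authentication handled by machinery every language already has.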
Proper TLS security is still nascent in SMTP -- only because of Google's actions with Gmail and their Safer Email [1] initiative has TLS really come into widespread adoption at all. Today, although a lot of email is nominally taking place over TLS, most clients do not perform any sort of path validation and the connections are susceptible to MITM; and email clients don't present client TLS certificates, nor do servers examine them. If we were to employ it, TLS client certificate authentication could be an effective way to prevent email forgery, e.g., require email from example.com to be sent from a client with a TLS certificate for that domain. This kind of thing would be much easier to achieve in the HTTP world than in the SMTP world. We could also take advantage of HTTP/2 multiplexing to efficiently deliver a lot of traffic across just one TCP connection.
We'd still need most of the effort invested into email, such as all of the effort fighting abuse, and mail servers would still need to buffer outbound messages and authenticate inbound ones, etc. (and we'd still need SPF, DKIM, DMARC) but at least it would simplify the foundational and protocol-level work, like what's involved in bootstrapping a new email client or server from scratch. You could write basic code to send an email in a few minutes using an HTTP library in any language. SMTP is pretty well entrenched, however, and the incremental benefit is probably not large enough, so I don't have my hopes up.
FastMail has been working on something called JMAP [1] for quite some time. It's an HTTP-based replacement for IMAP. Perhaps it could be extended to replace SMTP as well. Then we would have a single, HTTP-based API for all of our email needs.
I'm not convinced that's a useful generalization. Beyond that IMAP and SMTP do something vaguely related to email, there's very little overlap between the two protocols.
Yes, that's an apt comparison. Proprietary APIs for email exist today between parties that trust each other (clients and their service providers). What I'm proposing is to take it further and devise a standard HTTP-based protocol for message transmission between equal parties like ISPs, and for scenarios where there isn't preexisting trust.
For example, today if you use an HTTP API to submit a message to SendGrid or Mailgun or Amazon SES, that's a trusted relationship based on an account you have with the service, typically a paid relationship. Each provider has its own unique API, which is incompatible with other providers.
In the next step of that process, your service provider's Mail Transfer Agent (MTA) communicates with the final destination mail server (`example.com MX`), and that part is a peer relationship between ISPs (quasi-trusted or untrusted). This communication is all SMTP today, and I'm proposing the idea of a standard way to transmit emails over HTTP in this layer too, in such a way as that it would, in the fullness of time, obsolete SMTP.
I've recently been thinking a lot about email because of project[0], and yes, I do agree that it would be great to tunnel SMTP inside HTTP(S). (But yes, I'd go with an SRV entry, not a constant name.)
But no, it's not a great idea to let HTTP replace SMTP. SMTP is a stateful protocol, and while that brings some problems, it also brings some gains. For example, backend servers can keep connections open between them, peers can negotiate resource usage in real time, and the entire extension model is only possible because of connection state.
You'd lose all of those (or replace them with ugly hacks) by tunneling over stateless HTTP. It's a worthwhile trade-off in some situations (like when you are behind a bad proxy), but not always.
This won't be overly useful (to me) unless/until the system resolver supports this and I can implement this on my own DNS server(s). Seems like a good idea, though.
So, to resolve a domain to an IP address without using regular DNS, they have opted to use HTTPS, which really requires a certificate issued for a domain name that is itself validated via DNS.
I think this could be more useful if there were a local client that installs and proxies, e.g. a traditional query to localhost:53 gets translated to DNS-over-HTTPS.
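Such a shim is pretty small. A rough sketch using the miekg/dns library mentioned elsewhere in the thread; no caching, no fallback resolvers, and dns.google.com is still resolved by the normal system resolver, so treat it as a toy:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"

        "github.com/miekg/dns"
    )

    // Mirrors the documented /resolve JSON response fields.
    type answer struct {
        Name string `json:"name"`
        Type uint16 `json:"type"`
        TTL  uint32 `json:"TTL"`
        Data string `json:"data"`
    }

    type response struct {
        Status int      `json:"Status"`
        Answer []answer `json:"Answer"`
    }

    func handle(w dns.ResponseWriter, req *dns.Msg) {
        q := req.Question[0]
        url := fmt.Sprintf("https://dns.google.com/resolve?name=%s&type=%d", q.Name, q.Qtype)
        resp, err := http.Get(url)
        if err != nil {
            dns.HandleFailed(w, req)
            return
        }
        defer resp.Body.Close()

        var r response
        if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
            dns.HandleFailed(w, req)
            return
        }

        m := new(dns.Msg)
        m.SetReply(req)
        for _, a := range r.Answer {
            // Re-parse each answer as zone-file text; good enough for common
            // RR types whose JSON "data" field is in presentation format.
            rr, err := dns.NewRR(fmt.Sprintf("%s %d IN %s %s",
                a.Name, a.TTL, dns.TypeToString[a.Type], a.Data))
            if err == nil {
                m.Answer = append(m.Answer, rr)
            }
        }
        w.WriteMsg(m)
    }

    func main() {
        dns.HandleFunc(".", handle)
        // Port 53 normally needs root; use a high port for testing.
        log.Fatal(dns.ListenAndServe("127.0.0.1:53", "udp", nil))
    }

Point resolv.conf (or your network settings) at 127.0.0.1 and every traditional query gets translated into an HTTPS request.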
I use DNSCurve, which adds <1ms to latency. That's with X25519 and XSalsa20-Poly1305. Assuming a persistent connection, DNS-over-HTTPS might be similar with AES-NI or ChaCha20-Poly1305. The real speed issue is the number of round trips. DNSCurve is Zero-RTT, and assuming a persistent connection, DNS-over-HTTPS should probably be too at least once it's up and running.
And then consider that lots of pages have megabytes of javascript fetched from multiple sources, big and often unoptimized images, expensive screen redraws, etc.
Modern crypto doesn't affect performance at all. Hell, even PQCrypto-encrypted-DNS with 64KB public keys would be fast compared to the modern web. There's no reason to worry anymore about modern crypto affecting performance. It's just not an issue.
If you had the TLS connection open to Google, there's a bit of overhead because the request is longer, and the response is longer than native DNS, plus http headers (hpack and content-encoding would help), but I wouldn't expect it to spill to a second packet for either request or response. Encrypt/Decrypt is probably not a big deal compared to network latency. Assuming Google runs dns.google.com in the same locations it runs its port 53 services, then it's still one round trip either way.
If you don't have the connection open, you still have to do a port 53 DNS lookup to find out where to connect (1 round trip to configured dns server), plus open a tcp connection (1 round trip), setup tls (1 round trip, assuming TLS false start), DNS request (1 round trip); so 4 round trips vs 1.
It is optional to use, just like it's optional to use Google's DNS servers at 8.8.8.8 and 8.8.4.4. The advantage with this is the extra security you get with HTTPS.
Google DNS tends to be one of the fastest DNS servers you can use (just benchmark them against other options). The IPs are anycast, so you will likely be served by the Google data center closest to you.
So, what DNS server do you use? I trust Google's DNS (I use the normal DNS ones, 8.8.8.8 and 8.8.4.4) a lot more than I trust Comcast's DNS servers. I'm sure there are others out there, of course, but 8.8.8.8 is good, reliable, and easily memorized.
I run my own, but then again, I also run my own web and email services. But in my gut, I have the feeling that at some point, Google will become the Internet (or even worse---we have the GoogleNet and FacebookNet and never the twain shall share).
The Chaos Computer Club runs its own, which usually happens to answer just as fast as Google's DNS. (And it isn't subject to censorship; Google's DNS, like Comcast and most US ISPs, censors several piracy-website domains, even though the domains still exist in the ICANN database and are reachable through most other DNS servers.)
I use OpenDNS. My home network is set up so that all DNS requests are sent via dnscrypt to OpenDNS. This ensures that Comcast (or whoever) doesn't ever see my DNS traffic and can't muck with it.
Politically it's a lot easier to stop Comcast from altering through-traffic DNS than it is to stop them from lying in DNS responses and calling it pro-user.
I believe the significance is that it's being done, how, and by an Internet giant. There's an impact difference between random person on Github doing DNS over HTTPS and Google deploying it.
This idea is quicksand: it seems fine until you rely on it and are shaken by attacks that make your service unavailable with very few computers and very little traffic. And then you are screwed, because we still hardly know how to prevent DDoS except by having huge bandwidth compared to the attackers. Unless you are a megacorp with huge datacenters everywhere, it is a bad idea.
But Google will never become a monopolistic company that behaves assholishly, right? They would never push standards that favor them over the few remaining hosting companies on the internet. Would they?