If only there were some way for Google to let people take advantage of advanced DNS features without requiring browser extensions ... alas, they'd probably need to add that code to a web browser. But then, where would they find a web browser they could add such code to? Ah well. Google should ask those web browser vendors why they won't implement DNSSEC, at least. Maybe start with asking the browser with the highest market share, whichever that one is.
tl;dr they can't, because DANE is undeployable in the current Internet.
The article also mentions the number of TLDs that now use 2048 RSA keys is around 1,300.
It actually seems kind of lazy. Their argument is that it is too difficult for client software to embed a DNS resolver into the browser so it can check whether DNSSEC was attempted and whether it was successful. The actual validation would still happen at the client's local resolver.
Their solution is to make client software implement a REST API for doing its lookups instead, and the validation is still happening at your resolver (the REST API).
The only positive I see is that your DNS requests will be encrypted. The trade-off is the added complexity of getting your responses, and that you'll be exposed to potential third-party censorship and logging of all your destinations, which will be difficult to opt out of.
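To make the REST-lookup idea concrete, here's a minimal Python sketch of a client for Google's JSON API. The `AD` ("authenticated data") flag in the response is what tells you whether the resolver validated DNSSEC; the field names (`Status`, `AD`, `Answer`) come from Google's documented response format, but treat the endpoint and exact shape as illustrative:

```python
import json
import urllib.request

def resolve(name: str, rrtype: str = "A") -> dict:
    """Query Google's DNS-over-HTTPS JSON endpoint and decode the response."""
    url = f"https://dns.google.com/resolve?name={name}&type={rrtype}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def dnssec_validated(response: dict) -> bool:
    """Status 0 is NOERROR; the AD flag says the resolver validated DNSSEC."""
    return response.get("Status") == 0 and bool(response.get("AD", False))
```

Note that all the crypto still happens at the resolver; the client just reads a boolean out of the JSON, which is exactly the trust trade-off described above.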
No, it means you need to consider the existing realities when designing a new protocol. A good example is HTTP/2: countless proxies would probably produce crap once they see an HTTP/2 packet. So people came to the conclusion that they need to wrap HTTP/2 in TLS in order to deploy it. (I know that there's HTTP/2 without TLS, but nobody is using it for precisely that reason.)
HTTP/2 was built with deployability in mind. DNSSEC was not.
> Secondly, these things don’t exist in a vacuum; if a website failed because a user couldn’t access DANE or whatever, this would create pressure on all affected parties to fix the situation.
This is simply not what's happening. What happens in the real world is that people blame their browser for breaking things that previously worked. There are countless examples for this.
There are 3 browser vendors of any note right now. It wouldn't be that hard for them to collaborate to push this change at the same time.
> Instead, for this we have HPKP, which is a memory-based pinning solution using HTTP headers.
The problem with HPKP is that you still need a certificate from a CA. And if you want wildcards or a lifetime longer than 90 days (regardless of what you think of their security, many people do -- the biggest sites on the internet have both), that means paying money. And it also still leaves you vulnerable to MitM rogue-CA attacks.
HPKP requires the user to connect the first time to get the pinned key. If an MitM has a fake certificate for your domain, they can send that to you on your first connection and fool you. DANE does not have this problem.
Well, to be fair, DANE has it for the DNSSEC provider used. But that is one thing to trust, and said certificate can be bundled with the OS or browser. One thing to trust beats the current 2000+ intermediate CAs we have to trust.
Further, DANE allows a way to verify self-signed certificates, allowing security to be free for everyone. We could see a lot more innovation than just certbot if Let's Encrypt weren't the only way to generate free certificates. For just one example, web servers could generate certificates on fresh installs; zero configuration required. Like Caddy, but in nginx and Apache.
Lastly, HPKP is riskier. You have to include a backup pin. But what would happen if you had your own key pinned, with StartSSL as the secondary? And you set HPKP for a year? And two weeks from now, Mozilla revokes StartCom entirely? Well, good luck with that one. Deleting an HPKP pin is a royal pain for a regular user (about:permissions in Firefox, chrome://net-internals#hsts in Chrome).
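For concreteness, an HPKP header with the mandatory backup pin looks roughly like this (the base64 values are placeholder SPKI hashes, not real pins):

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
                 pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
                 max-age=31536000; includeSubDomains
```

The second pin-sha256 is the backup pin; if the key behind it becomes unusable (as in the StartCom scenario above) while max-age is still running, you're stuck.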
> We also have pre-loaded pinning in Chrome for larger or more obvious targets.
Nice for Google and Facebook, not so nice for byuu.org =(
> It's also worth noting that CryptoCat has committed pinning-suicide in Chrome at the moment due to their CA having switched intermediates between renewals.
Another good example of HPKP's downsides.
> But support for that was removed because it was a bunch of parsing code outside of the sandbox, wasn't really being used and it conflicted with two long-term plans for the health of the HTTPS ecosystem: eliminating 1024-bit RSA and Certificate Transparency. The conflicts are probably the most important reasons for not wanting to renew the experiment.
RSA-1024 is bad, yes. Perhaps we should start asking how we can get DNSSEC on RSA-2048+. Having a commitment to implement it in Chrome, Firefox, Edge, Safari if we do would go a long way toward making that happen, I'm sure. Further, we can reject any RSA-1024 signed DNSSEC after that point, just like we reject weak crypto in browsers today.
If they can't change DNSSEC to stronger crypto, how about they make an alternative? If Google introduces one that has stronger crypto, supports it in Chrome, and offers a free "DNSSEC+" hosting service, I'll definitely take immediate advantage of that.
But the latter is the whole point of DANE -- self-signing. Certificate Transparency is a crutch due to the CA system having 2000+ intermediate CAs that can sign any domain name they want. With DANE, you don't need that anymore. DANE is about providing an alternative to the CA signing business.
And there's nothing stopping users from serving up trusted CA-signed endpoint certificate signatures over DANE/TLSA records. That would obviously remain the only sensible way to do EV certificates. The way I see it, a DANE certificate would act like HTTP does now: you get the globe (or even the question mark circle), but your page actually loads over HTTPS, instead of sending you to Defcon-1 advisories that are all but impossible for the casual user to work around (and indeed, we don't want to train them to work around these warnings anyway.) There's no way this would be worse than plain-text HTTP.
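For concreteness, a DANE-EE TLSA record in a zone file looks roughly like this (the digest is a placeholder; usage 3 = DANE-EE, selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256):

```
_443._tcp.example.com. IN TLSA 3 1 1 0123456789abcdef...   ; placeholder SHA-256 digest
```

Publishing the hash of a CA-signed certificate instead just means changing the usage field; the record format itself doesn't care whether the certificate is self-signed.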
> You literally can't avoid it because the root zone transits through a 1024-bit key.
Does this really matter for the root? There are root CAs with RSA-1024 still.
At any rate, Google apparently feels DNSSEC+DANE is good enough for this web service API. So, honest question, why offer that if it's really such an unworkable system?
In the very unlikely event that Chromium or Firefox ever honor DANE records, and the even less likely event that they honor trust anchor assertions in TLSA, you are still going to need a CA certificate. In the parallel universe in which DNSSEC is seriously deployed and honored by browsers, the entire X.509 PKI will be replaced with something else before TLSA trust anchor assertions are reliably deployed in browsers, and, until they are, huge fractions of your user base won't know what the hell the DNS is talking about when you give it your self-signed certificate.
If you don't care about that user base, then nothing at all is stopping you from using self-signed certificates today. Just tell your group of friends and followers to check your certificate once when it pops up with a warning, and add it to their trust stores. If you tried to use parallel-universe DANE to serve a self-signed certificate, that is the experience you would have anyways.
"The whole point of DANE" is not self-signing. If it were, Dan York wouldn't be telling people that DANE isn't government key escrow (hint: it is) because the CAs will still be involved. The point of DANE is to come up with some reason, any reason, to get DNSSEC deployed, because the people working on it have in some cases been working on it for over 20 years (it shows) and are frustrated that the Internet has moved on without them.
I don't know what you mean by "Google feels DNSSEC+DANE is good enough for this web service API". The web service is a simple wrapper around a DNS service. The Google you want to pay attention to, the one that matters for the DANE discussion, is the Chromium project. Go talk to the Chromium security people about DNSSEC. See what they say.
You can find more information here: https://www.cloudflare.com/dns/dnssec/ecdsa-and-dnssec/
It seems there's some worry about quantum computers being able to break ECC with fewer qubits than RSA, on account of there being fewer actual bits.
But to me, it seems like if quantum computers come about that start breaking things, RSA is going to need to be replaced as well. So now's probably not the time to be too paranoid about something we don't yet know will ever even happen. At least, not until we have something we know will be resilient to the new quantum attacks.
And then there is this recent discussion: https://news.ycombinator.com/item?id=12434585
And it's implemented by most browser vendors... just not Microsoft, of course.
Because it was an accurate response to the question to point out that Chrome hasn't done it for five years?
What guarantees that they wouldn't start filtering URLs on their own (upon request by DMCA, or FBI)?
I do get that they say what they log ( https://developers.google.com/speed/public-dns/privacy ), yet if this ever does become a _commonplace_ thing, they'd easily be able to obtain IP addresses of users trying to access blacklisted websites and hand them over to officials (upon request, maybe?).
This is a bad idea outside of experimentation. Not to be used for production.
If you want to secure DNS look at QUIC, TLS, or my favorite, DNSCrypt (which I funded).
As a user, I can sure think of some countries with broken Internet access where this would come in handy.
Running DNS over HTTPS over TCP isn't needed. It doesn't solve a problem.
Doing JSON DNS for OOB DNS checks is useful since most applications speak HTTP. :-)
dns.google.com:443 true QUIC_VERSION_34 [2607:f8b0:400d:c03::8a]:443 10544469510527000173 0 None 2 9 0 9 true
An independent implementation of QUIC (are there any outside of browsers?) would probably work much the same, modulo any changes during the ongoing standardization of QUIC.
For debugging and diagnostics it is useful, for querying via a local resolver not so good.
I'd heard that somebody was working on DNS-over-HTTPS support for https://github.com/getdnsapi/getdns at the hackathon in Buenos Aires in April just before DNS-OARC / IETF-95, but have seen no evidence of that.
This version supports all RR types supported by the miekg/dns library, which is the vast majority of them and any you are likely to come across in the wild. It also allows you to specify regular DNS resolvers, which can be used in two ways: as a fallback if connectivity to the DNS-over-HTTPS service fails, or to always resolve specific domains. It also lets you restrict access to the proxy to certain networks. The rest of the code should be IPv6-friendly, but for some reason I implemented the access list in a manner that only supports specifying IPv4 networks. Guess I have something to work on.
If no DNS resolvers are specified it attempts to use the Google Public DNS servers to resolve dns.google.com. If DNS resolvers are specified they are used to resolve dns.google.com. A flag to always use the Google Public DNS servers would be useful, so now I have 2 things to work on.
As far as performance impact goes, I have generally seen 20-80 ms of additional delay. Using a caching resolver behind the proxy would help mitigate this. As is, the additional delay is pretty much unnoticeable when web browsing.
Edit: The page mentions that this allows web applications to make their own DNS requests, possibly looking up things other than A/AAAA records that the browser normally requests.
DNS messages have a two-byte length prefix when transmitted over TCP. Multiple envelopes can be, and often are, sent over a single connection.
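A minimal sketch of that framing in Python (the helper names are mine, not from any library):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a DNS message with its two-byte big-endian length (RFC 1035, section 4.2.2)."""
    return struct.pack("!H", len(msg)) + msg

def deframe(stream: bytes) -> list:
    """Split a TCP byte stream back into the individual DNS messages it carries."""
    msgs, off = [], 0
    while off + 2 <= len(stream):
        (length,) = struct.unpack_from("!H", stream, off)
        msgs.append(stream[off + 2 : off + 2 + length])
        off += 2 + length
    return msgs
```

Because each message carries its own length, a client can pipeline several queries on one connection and peel the responses apart as they arrive.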
Heck, you can even do that over UDP with DTLS, if you don't want to deal with setting up and tearing down TCP connections.
Which is ridiculous, as it literally brings zero advantage to restrict that.
There is, however, https://www.w3.org/TR/tcp-udp-sockets/ - see section 10 for an example.
-- time passes --
"We need to change everything in order to accomplish a task under these restrictions."
(and, to be clear, the answer is yes, namely that a custom protocol has to deliver a lot of value to make up for having to deal with the deployment headaches inherent to anything less supported by firewalls, NAT boxes, etc. than HTTP/HTTPS)
Twilio isn't implemented using AT commands either, and for the same reason.
DNS latency is an entire round trip added to every single fresh domain lookup you make. That's a lot of times, and we want those to be as fast as possible. At that point, the raw sockets vs full blown HTTP over TLS over TCP debate matters.
Twilio, on the other hand, is used in situations with comparably laughable latency requirements. At least two orders of magnitude.
Not saying it's DOA, but the question has merit.
- JSON: https://dns.google.com/resolve?name=doma.io
- Web interface: https://dns.google.com/query?name=doma.io&type=A&dnssec=true
1) "secure DNS" is a solved problem
2) DNS is simple
3) responses normally easily fit inside one packet
4) DNS is fast
HTTPS is a slow, wordy and inefficient protocol. Forcing everything into JSON just compounds the problem.
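To put the "one packet" point in numbers: a complete DNS query really is tiny. Here's a hand-rolled sketch in Python (hardcoded transaction ID, query-only, for illustration):

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query (RFC 1035): 12-byte header plus one question."""
    header = struct.pack("!HHHHHH",
                         0x1234,       # transaction ID (fixed here for simplicity)
                         0x0100,       # flags: standard query, recursion desired
                         1, 0, 0, 0)   # 1 question; 0 answer/authority/additional
    # Encode the name as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    return header + qname + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN

print(len(dns_query("example.com")))  # 29 bytes for the entire query
```

Twenty-nine bytes for the whole query; the request line and headers of an equivalent HTTPS GET run to hundreds of bytes before TLS overhead is even counted.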
No, not in practice. You can easily MitM DNS and nobody is verifying DNSSEC by default. On the current internet, secure DNS just doesn't exist.
That is because security is hard.
This doesn't mitigate MitM attacks, as upstream DNS records can still be spoofed.
We go from a fast, decentralised, resilient and low-overhead system to a centralised, chatty and fragile behemoth.
It still doesn't give end-to-end encryption; it's just a slow encrypted proxy.
Some measurements of DNSSEC validation show that as much as 15% of Internet domain lookups validate DNSSEC: http://stats.labs.apnic.net/dnssec/XA. Approximately half of that is due to Google Public DNS validation (many sites use both Google Public DNS and other resolvers that do not validate, so do not actually validate DNSSEC overall).
It is very true that less than 1% of DNS zones are signed with DNSSEC, so it is true that "secure DNS" doesn't practically exist, but this is a server-side signing issue, not a lack of client validation.
I'm waiting for the day when RFC 1149 is expanded to incorporate a layer 2 tunnelling protocol. I suspect the issue is not the initial encapsulation but how to extract the original frame without it getting mangled.
Depends on your definition of fast: the latency indeed could be better, but your bandwidth is only limited by the size of your USB stick, and would most often be much faster than current solutions.
apt-get install dns2tcp
We could just force DNS extensions to be implemented in most/all client/server implementations.
DNS over HTTPS might be okay and work well, but imho is a (smart?) workaround, not a fix.
Why can't we all set a time window (7.5 years? 10 years? 15 years?) to plan massive RFC/protocol updates with possibly-breaking changes?
Edit: fix grammar (not a native speaker of English)
For example, to send an email, perhaps you just send an HTTP POST request to a canonical endpoint (email.example.com), instead of all the rigamarole that SMTP servers require with a unique text protocol requiring multiple round trips. Have you seen the number of SMTP commands involved in sending a single email? Here's an abbreviated transcript of what it's like to send an email using `telnet`:
# Wait for banner from server (RT #1)
220 email-inbound-relay-1234.example.com ESMTP Sendmail 1.0.0; Thu, 29 Sep 2016 19:22:12 GMT
# Send EHLO and wait for reply (RT #2)
EHLO ws-1.example.com
250-email-inbound-relay-1234.example.com Hello ws-1.example.com [220.127.116.11], pleased to meet you
# At this phase you should really send STARTTLS and negotiate a TLS connection,
# but we'll just ignore that for now and proceed plaintext.
# Specify sender (RT #3)
MAIL FROM: firstname.lastname@example.org
250 2.1.0 firstname.lastname@example.org... Sender ok
# Specify recipient (RT #4)
RCPT TO: firstname.lastname@example.org
250 2.1.5 firstname.lastname@example.org... Recipient ok
# Send DATA and wait for the go-ahead (RT #5)
DATA
354 Enter mail, end with "." on a line by itself
# Send message headers and content, ending with "." on its own line, then wait for the reply (RT #6)
Subject: Hello, world!

Hello!
.
250 2.0.0 u8U1LC1l022963 Message accepted for delivery
With full use of SMTP extensions, things are a bit better than I imply but still frustratingly suboptimal. For example, I've run across ISPs who purely for their own load management reasons want to close an SMTP session at the TCP level after an arbitrary number of emails have been sent (N < 100)! Why would they desire that? If we're going to exchange more messages, then it's certainly less efficient for us both to negotiate a new TCP session and TLS session, rather than reuse the one we already have, but such is the practice of email. So message sending often can be as inefficient as this. When sending to some ISPs worldwide it's not uncommon for a single message to take seconds to deliver under normal network conditions.
How about we replace all of that with an HTTP POST to email.example.com, specifying the email headers and content with the POST body, and the sender and recipient as headers or querystring parameters? I think it'd be nice to get there eventually rather than drag SMTP on forever. All of the effort that goes into HTTP clients, servers, and security could benefit the email community as well.
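As a sketch of what that could look like: the endpoint, path, and header names below are entirely hypothetical (no such standard exists), but the point is that a whole email submission collapses into building one HTTP request:

```python
import json
import urllib.request

def build_send_request(sender: str, recipient: str,
                       subject: str, body: str) -> urllib.request.Request:
    """Build a single HTTP POST carrying one email.

    'email.example.com', the /messages path, and the X-Mail-From /
    X-Rcpt-To headers are invented for illustration.
    """
    payload = json.dumps({"subject": subject, "body": body}).encode()
    return urllib.request.Request(
        "https://email.example.com/messages",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-Mail-From": sender,     # hypothetical header
            "X-Rcpt-To": recipient,    # hypothetical header
        },
        method="POST",
    )
```

One request/response pair over an existing TLS connection, versus the six round trips in the telnet transcript above.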
Proper TLS security is still nascent in SMTP -- only because of Google's actions with Gmail and their Safer Email initiative has TLS really come into widespread adoption at all. Today, although a lot of email nominally travels over TLS, most clients do not perform any sort of certificate path validation, so the connections are susceptible to MITM; and email clients don't present client TLS certificates, nor do servers examine them. If we were to employ it, TLS client certificate authentication could be an effective way to prevent email forgery, e.g., requiring email from example.com to be sent from a client with a TLS certificate for that domain. This kind of thing would be much easier to achieve in the HTTP world than in the SMTP world. We could also take advantage of HTTP/2 multiplexing to efficiently deliver a lot of traffic across just one TCP connection.
We'd still need most of the effort invested into email, such as all of the effort fighting abuse, and mail servers would still need to buffer outbound messages and authenticate inbound ones, etc. (and we'd still need SPF, DKIM, DMARC) but at least it would simplify the foundational and protocol-level work, like what's involved in bootstrapping a new email client or server from scratch. You could write basic code to send an email in a few minutes using an HTTP library in any language. SMTP is pretty well entrenched, however, and the incremental benefit is probably not large enough, so I don't have my hopes up.
For example, today if you use an HTTP API to submit a message to SendGrid or Mailgun or Amazon SES, that's a trusted relationship based on an account you have with the service, typically a paid relationship. Each provider has its own unique API, which is incompatible with other providers.
In the next step of that process, your service provider's Mail Transfer Agent (MTA) communicates with the final destination mail server (`example.com MX`), and that part is a peer relationship between ISPs (quasi-trusted or untrusted). This communication is all SMTP today, and I'm proposing the idea of a standard way to transmit emails over HTTP in this layer too, in such a way that it would, in the fullness of time, obsolete SMTP.
But no, it's not a great idea to let HTTP replace SMTP. SMTP is a stateful protocol, and while that brings some problems, it also brings some gains. For example, backend servers can keep connections open between them, peers can negotiate resource usage in real time, and the entire extension model is only possible because of connection state.
You'd lose all of those (or replace them with ugly hacks) by tunneling over stateless HTTP. It's a worthwhile trade-off in some situations (like when you are behind a bad proxy), but not always.
dns.google.com:443 true QUIC_VERSION_34 [2607:f8b0:400d:c03::8a]:443 10544469510527000173 0 None 2 9 0 9 true
Modern crypto doesn't affect performance at all. Hell, even PQCrypto-encrypted-DNS with 64KB public keys would be fast compared to the modern web. There's no reason to worry anymore about modern crypto affecting performance. It's just not an issue.
If you don't have the connection open, you still have to do a port 53 DNS lookup to find out where to connect (1 round trip to configured dns server), plus open a tcp connection (1 round trip), setup tls (1 round trip, assuming TLS false start), DNS request (1 round trip); so 4 round trips vs 1.
Google DNS tends to be one of the fastest DNS servers you can use (just benchmark them against other options). The IPs are anycast, so you will likely be served by the Google data center closest to you.
As for what they log, check it yourself: https://developers.google.com/speed/public-dns/privacy
Until then, I'll run my own stuff.
The Chaos Computer Club runs their own, which usually answers just as fast as Google's DNS. (And it isn't subject to censorship: Google's DNS, like Comcast and most US ISPs, censors several domains of piracy websites, even though the domains still exist in the ICANN database and are reachable through most other DNS servers.)
You might as well use the Comcast ones.
- It doesn't solve the problem, unlike other existing solutions for encrypting DNS.
- It adds unnecessarily high overhead.
There may be more than one problem.
That's relative to the aforementioned problem.
sudo python3 httpresolver.py
Have a play with https://bitbucket.org/tony_allan/dnslib
This idea is quicksand: it seems fine until you rely on it and are shaken by attacks that make your service unavailable with very few computers and little traffic. And then you are screwed, because we still hardly know how to prevent DDoS except by having huge bandwidth compared to the attackers. Unless you are a megacorp with huge datacenters everywhere, it is a bad idea.
But Google will never become a monopolistic company that behaves assholishly, right? They would never push standards that favor them over the few remaining hosting companies on the internet. Would they?
HTTPS already exists and is slightly less vulnerable than normal DNS traffic.
This opens no new DDoS opportunities. The rest of your post is irrelevant.