HBO Now DNSSEC Misconfiguration Makes Site Unavailable from Comcast Networks (internetsociety.org)
81 points by danyork 779 days ago | 76 comments



This shouldn't be possible. In a sane design, a transient policy failure should produce a warning light or a browser popup; something that applications can use to sensibly convey what happened, in some form, to users.

Instead, because DNSSEC makes security policy decisions at a very low layer, in a protocol that virtually all applications expect to work transparently so long as you have connectivity, you get this: instead of a browser popup, a connectivity outage so complete that it is indistinguishable to users from "the site you are trying to get to doesn't exist and never has".

DNSSEC is a terrible, terrible idea. Comcast should stop validating it, immediately. If someone from Comcast's network engineering team is reading this: I was a network engineer, and then worked for 4 years with tier-1 engineering getting Netflow monitoring deployed and scalable. I've been writing DNS security software since I was 19 years old, in the 1990s. You have reasons for whatever configuration you've decided on, but I will take as much time out of my day as is productive for you to convince you to stop working on rolling this out.

What happened yesterday with HBO Now, which had half of Twitter screaming "NET NEUTRALITY OHNOZ!", is going to keep happening over and over again.

See also: https://news.ycombinator.com/item?id=8894902


While I agree DNSSEC has its limitations and problems, I haven't heard many alternatives (in general).

DNS has two inherent issues:

- Hijacking traffic via spoofing responses.

- Private information leakage.

With DNSSEC combined with HTTPS, it is very reasonable to say both that you're connecting to the party you expect to connect to and that no third party listening on the line knows what host name or page you're visiting (only the IP address of the box).

A lot of people dismiss these limitations or just point wildly at certificate pinning to solve all of our problems (while ignoring that HTTP isn't the only type of traffic the internet was designed for).

Plus with NSA mass surveillance, DNS makes seeing what domains you're visiting and building a "picture" of you as an individual absolutely trivial. The IP addresses still may help them do that to some extent, but it certainly becomes very easy if they can see your DNS packets.


With DNSSEC combined with HTTPS, it is very reasonable to say both that you're connecting to the party you expect to connect to and that no third party listening on the line knows what host name or page you're visiting (only the IP address of the box).

No, because clients don't validate DNSSEC responses. You'd need to be running a full resolver in your machine, which OSs usually don't.

Also, with SNI¹ they'll know the hostname anyway.

¹ http://en.wikipedia.org/wiki/Server_Name_Indication


> I haven't heard many alternatives (in general).

These problems don't call for a general solution. We should respect the fundamental truth behind the end-to-end principle and solve these problems as close to the application as possible. The likely threats, appropriate default choices and trust model are all best understood by each application developer and building these solutions into Layer-3/4 is bound to cause short-term pain and reduce our long-term flexibility to meet new risks.

For example, EFF's STARTTLS Everywhere project takes into account the fact that a huge percentage of the world's mail moves between a small enough number of providers that human verification of announcements is possible. It also recognizes that the MX configuration for these providers is reasonably static, meaning that changes that propagate in minutes do not need to be accommodated. Small specific solutions like this can be rolled out and provide a real, tangible benefit to users much faster than we can upgrade the entire DNS system to provide a more general solution.

I agree that we need a DNS privacy solution, one that hopefully doesn't eliminate the existence of caching infrastructure.


> I agree that we need a DNS privacy solution, one that hopefully doesn't eliminate the existence of caching infrastructure.

Please do take a look at what the folks are doing within the DPRIVE working group of the IETF:

https://datatracker.ietf.org/wg/dprive/charter/

They are working on mechanisms to bring privacy / confidentiality to the "last mile" of DNS connections. Any input you have would be useful. (There's a link there to a mailing list to which you can subscribe.)


> I haven't heard many alternatives (in general).

From tptacek's FAQ ( http://sockpuppet.org/stuff/dnssec-qa.html ):

> What’s the alternative to DNSSEC?

> Do nothing. The DNS does not urgently need to be secured.

> All effective security on the Internet assumes that DNS lookups are unsafe. If this bothers people from a design perspective, they should consider all the other protocol interactions in TCP/IP that aren’t secure: BGP4 advertisements, IP source addresses, ARP lookups. Clearly there is some point in the TCP/IP stack where we must draw a line and say “security and privacy are built above this layer”. The argument against DNSSEC simply says the line should be drawn somewhere higher than the DNS.


Right.... which is where I disagree with Thomas. I believe we should secure the integrity of answers from DNS.

I want a DNS we can trust.

I see DNSSEC as one of the many layers in any defense-in-depth security plan.

Yes, I acknowledge that there are some challenges with DNSSEC deployment... but those are what a good number of us are working on fixing.

I don't see "Do nothing" as a viable alternative.


Doing nothing is self-evidently viable. Virtually no business transactions in the world are protected by DNSSEC today. None ever have been. The onus is on you to demonstrate that DNSSEC is viable, because "no DNS security at all" clearly does work.

You keep walking face-first into this rhetorical brick wall. If you're going to say "doing nothing isn't viable", you need to have a ready response to the extremely obvious observation that the Internet seems to function pretty OK without DNS security. It's not 1994 anymore. You can't wave your hands and suggest that the Internet is going to get more serious in 10 years and need better security. It already needs the best security it can possibly get. It's just that DNSSEC isn't part of that mix.


Thomas, as you are well aware, the Internet of 1994 was an extremely different and much smaller world. I completely agree with this:

> It already needs the best security it can possibly get.

It's just that I believe that DNSSEC should be part of the mix.

For instance here's an example of some research out of CERT-CC back in September 2014 where hijacking of MX records is redirecting email to someone:

http://www.internetsociety.org/deploy360/blog/2014/09/email-...

As far as I can see, deploying DNSSEC validation on the networks of the affected mail servers - and receiving DNSSEC-signed MX records - would prevent them from delivering mail to servers in the middle.

It's things like this that I want to prevent.

I want a more secure Internet - and in my view DNSSEC helps.


I'm sorry, but I'm going to have to point out that your comment doesn't respond to mine in any way.

Once again: the 2015 Internet functions with no DNS security whatsoever. BGP announcements, themselves unencrypted, aren't protected with DNSSEC. Browsers don't use DNSSEC in any way and are in fact blind to it. Email will remain insecure with or without it. Credit card transactions are protected at a layer higher than DNS, one designed to assume that the DNS would always be insecure.

Why, specifically, is doing nothing to secure the DNS "not viable"?


> I'm sorry, but I'm going to have to point out that your comment doesn't respond to mine in any way.

Hmmm... you said "you need to have a ready response to the extremely obvious observation that the Internet seems to function pretty OK without DNS security."

I gave you one example. Here are some more:

- There are now over 800 XMPP servers with DNSSEC-signed SRV records that can be used to ensure they are talking to the correct servers. https://xmpp.net/reports.php#dnssecsrv

- On a related note, there are over 300 XMPP servers using DANE to provide a higher level of trust to TLS certs: https://xmpp.net/reports.php#dnssecdane

- There are now over 1,000 email servers using TLSA records (DANE) to provide a higher level of security to the TLS connections between email servers. (Viktor Dukhovni of exim)

These are very real cases where adding DNSSEC is, to me, increasing the security of DNS.

Because I'm around examples like these, I see value in securing the DNS. So to me, "doing nothing" is not an option.


Wait. The one place you suggest DNSSEC is adding security value is small servers for an unencrypted insecure messaging service: that is to say, the one place where NSA's almost total ability to manipulate the DNS would (a) most likely go completely undetected and (b) transparently cough up chat transcripts from private message sessions.

And one of the ways you suggest it could help is to allow clients to override TLS certificates and instead trust the government-controlled DNS.

That is an amazing concession.


Cert scanning projects and Certificate Transparency (edit: and pinning) may reverse this calculus or may already have reversed it, but of course the TLS certificates are commonly being issued on the strength of what unauthenticated DNS records said at the time of issuance. Adversaries that could tamper with a CA's view of a zone (or a route) at some point could cause cert misissuance, whether or not they could modify the underlying zone contents. The DNS is the underlying evidentiary basis for a substantial number of DV cert issuance events.


The second paragraph of "Against DNSSEC" addresses this.


Not conceding anything... just pointing out some of the use cases today that are interesting.

Another one is most of the email service providers in Germany, where DNSSEC / DANE are being seen as ways to have a more secure email environment.

The list can go on...

> And one of the ways you suggest it could help is to allow clients to override TLS certificates and instead trust the government-controlled DNS.

:-) Where did I say override, Thomas? You can, if you wish, use a different trust anchor than the current (broken) CA system, but the beauty of DANE is that it gives you a way to add another layer of trust to existing systems. So you can use a CA-issued TLS cert and put a fingerprint in a TLSA record as an added check during an SMTP transaction. You could also check certs with CT... and if it were HTTP you could do pinning as well.

As many layers as you want!


> BGP announcements, themselves unencrypted, aren't protected with DNSSEC.

This is true, but only because BGP announcements don't involve DNS, and so all the DNS security in the world won't help. Agreed that there is a lot of scope for doing better on BGP security, though - and indeed DNS security.


The fact that "all the DNS security in the world won't help" is part of my point. There aren't really any places where "all the DNS security in the world" will help.


> Agreed that there is a lot of scope for doing better on BGP security, though

Yes... and there we go down the path toward BGPSEC, RPKI and other tools that people are developing to help secure the routing infrastructure.


I found many of your criticisms of DNSSEC reasonable and well-informed, but the "do nothing" part puzzled me. The information security status quo isn't exactly excellent or ideal. DNS attacks (including monitoring of DNS queries) often form a part of other attacks. Many of those attacks are perhaps best mitigated at other layers, but not all have been, and at least something like query privacy can't be.

(Yes, I know DNSSEC doesn't try to address query privacy, so that problem isn't an argument for DNSSEC itself.)


> because "no DNS security at all" clearly does work.

No it does not work. I have been the victim of DNS poisoning with a flood of requests that almost took my server down.

If DNS poisoning is so easy then DNS is not working correctly. If you want to get rid of DNSSEC then you need to say how you would fix it instead.


DNSSEC makes it easier to take servers down, not harder: a tiny UDP DNSSEC request generates a massive DNSSEC response loaded up with RSA key material.
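As a rough back-of-envelope sketch of the amplification math (both byte counts below are illustrative assumptions chosen to show the shape of the problem, not measurements):

```python
# Illustrative DNSSEC reflection/amplification estimate.
QUERY_BYTES = 64        # small spoofed UDP query for a signed zone's DNSKEY
RESPONSE_BYTES = 3000   # response stuffed with RSA DNSKEY/RRSIG material

amplification = RESPONSE_BYTES / QUERY_BYTES
print(f"Each query byte sent becomes ~{amplification:.0f} response bytes "
      f"aimed at the spoofed victim address")
```

The attacker spends QUERY_BYTES per spoofed packet while the victim receives RESPONSE_BYTES, which is what makes large-response UDP protocols attractive reflectors.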


So what is your alternative for solving DNS poisoning?


Exactly the same thing we do today! Assume the DNS is poisoned, and delegate security to higher layers so that there's not much point in bothering to poison it. Literally, my answer is DO NOTHING NEW.


> delegate security to higher layers so that there's not much point in bothering to poison it. Literally, my answer is DO NOTHING NEW.

That would not help my server - the DNS poisoning is acting like a DDOS. Sure the random victims know they are on the wrong page, but it doesn't help my server for them to know that.


Isn't exactly the same form of DDOS --- not even reaching how many simpler ways there are to DDOS you --- available in DNSSEC, by injecting non-validating records?


Not exactly, as you don't cache non-validating records (for very long).


Wonder if DNSCurve would solve that.


I think it is unfortunate that you leave DNSCurve at the end. I think securing the DNS is a good thing regardless.


> Plus with NSA mass surveillance, DNS makes seeing what domains you're visiting and building a "picture" of you as an individual absolutely trivial. The IP addresses still may help them do that to some extent, but it certainly becomes very easy if they can see your DNS packets.

FYI, there is a working group within the IETF working on the issue of providing confidentiality to DNS transactions. It's called DPRIVE and more info can be found here:

https://datatracker.ietf.org/wg/dprive/charter/


DNSSEC does nothing to prevent information leakage, the opposite in fact.

People compare it to SSL but it's not like SSL. It does NO encryption of the communications. It's only validation (arguably the most broken part of SSL).

A better analog to SSL would be DNSCrypt, which we've been using now for years, is slowly gaining traction and adoption by others, and provides real privacy by protecting DNS messages from being intercepted or manipulated.


Thomas, I'll ask directly even though I believe I know the answer. But this topic is relatively dense so there's a non-trivial chance I've got some wires crossed.

In short: with total objectivity, what are all of the possible, theoretical advantages to the people using order.hbonow.com of HBO using DNSSEC + https + (maybe) HSTS rather than them ONLY using https + HSTS?

I'm posting this (I think) leading question so that it might be possible to cut through VOLUMES of related discussion, and see a relatively simple, straightforward answer.

Thank you!

PS: I have read through your Against DNSSEC post, and the large amounts of related discussion, but I'm still seeking a technical yet succinct response that I can easily 'carry around' with me.


There is zero reason for HBO to be using DNSSEC today. Browsers don't support it, and email is insecure with or without it, and that describes pretty much their whole attack surface. It was a totally unforced error, which is presumably why HBO's response to this debacle was to eliminate DNSSEC.


tptacek - I was waiting for your reply... almost thought about setting a timer! Glad to see you didn't disappoint.

Toward the end of the article you'll note I point out that this is a great example of a hole in the operational process that has been identified as more and more people have deployed DNSSEC:

http://www.internetsociety.org/deploy360/blog/2015/02/cloudf...

The process of updating (or removing) the DS record NEEDS to be automated. There is a group within the industry working on ideas around this now and I think we'll see that work happen inside the DNSOP Working Group within the IETF.

There is a public mailing list open to anyone interested to join.


> In a sane design, a transient policy failure should produce a warning light or a browser popup; something that applications can use to sensibly convey what happened, in some form, to users.

Thomas, I agree that having the DNS resolver only return a regular SERVFAIL without a hint of WHY there was the failure is a challenge. I wasn't around the DNS part of the IETF when this design decision was made. It seems to me that a separate error message would have been preferable... but I don't know the discussions that were had at that time.

There have been several suggestions about ways to include additional diagnostic information with the SERVFAIL so that browsers and applications could take better action. One such proposal was from Evan Hunt at ISC (makers of BIND):

http://tools.ietf.org/html/draft-hunt-dns-server-diagnostics...

The draft expired back in 2014 but he indicated recently on the DNSOP mailing list that he would be open to reviving that draft if people thought it would be useful.


I think DNS was designed to be a relatively simple protocol and the same for the APIs too.


I used to think that DNSSEC was a good thing. Authenticated DNS lookup, what's not to like? Incidents like this make me question that.

But dropping DNSSEC leaves the problem of securing DNS unhandled. What would you propose in its place to address securing DNS?


DNSCurve, for one.


DNSCurve and DNSCrypt are a strong combination.


I wonder what would be wrong with a new DNSSEC2 that would be designed for online signing of DNS records from the beginning for example (so no NSEC/NSEC3). It would not replace DNSCurve, but would be usable for DANE for example.


Do you hold that reasoning true for every protocol, or is DNS somehow special? Should we immediately halt all work on securing BGP, for example, as misconfigurations will make entire ASes unavailable?


No, it's not true of every protocol. That's my point.

Some protocols are well-situated to make security policy decisions, and some aren't. This is an obvious engineering point: however you secure BGP, you're going to have TCP delivering records. Should we then secure TCP, so that we can ensure availability and prevent attackers from exploiting it to introduce byzantine routing failures? And, if we create TCPSEC, some form of unencrypted IP traffic will be used to deliver those segments to end stations. Should we then secure IP? And then ARP? And then Ethernet headers?

Secure BGP makes a lot of sense. BGP is already more of a policy expression framework than a routing protocol (the routing algorithms used by BGP are basic, and virtually all of the complexity comes from two decades worth of hacks designed to express policy). Every single router running defaultless BGP is managed by a team of people engaged intimately with BGP security policy.

DNS, however, runs on every single Internet user's computer, often in different places, and at a layer that isn't fully exposed by APIs, because those APIs were designed with the assumption that DNS fails only due to connectivity failures. DNS is a terrible place to enforce policy.


BGP _is_ a policy framework, and what you said was "In a sane design, a transient policy failure should produce a warning light or a browser popup; something that applications can use to sensibly convey what happened, in some form, to users.".

That will, for obvious reasons, never be the case with BGP. Because it is far too low a layer. Hence my question.


BGP's clients are mostly routers though, and routers are typically managed by professional admins who monitor for such failures.


What alternatives to DNSSEC do you advocate?


Step 1: Scrap DNSSEC

Step 2: Return to drawing board

Step 3: Wait for better solution to emerge

Step 4: Repeat until value of new DNS security system exceeds cost of deployment

Step 5: Profit

I expect never to reach Step 5, but who knows?


> Step 2: Return to drawing board

> Step 3: Wait for better solution to emerge

I, and I suspect many others, eagerly await the better solution to the DNS security issues we see. If you have one (beyond "do nothing"), please explain it.

Lacking that, we focus on deploying DNSSEC because that is what we have AVAILABLE today.



DNSCurve already exists fortunately.


> DNSSEC is a terrible, terrible idea.

DNSSEC is a terrible, terrible implementation. All of your criticisms are valid but none of them are inherent to the idea of authenticating DNS resolution.

That basically means we need to start over from scratch, which is unfortunate, but now tell me if we did we couldn't address your complaints:

1) It isn't necessary. Sure, we can work around the lack of DNS authentication, but then we end up with the horrible CA system. Can we really not do any better than that?

2) It's centralized and government-controlled. So don't sign the root. Instead of hard-coding the root key in the resolvers, hard-code the keys for each TLD. Being able to pick which government can forge your signatures is the best you're going to be able to do; the authority doing the signing is going to exist in somebody's jurisdiction.

3) DNSSEC's cryptography is weak. So use different cryptography. Elliptic curve keys and signatures are an order of magnitude smaller than RSA anyway, which also reduces the DNS DoS amplification potential by that amount.
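For a rough sense of scale, assuming RSA-2048 versus ECDSA P-256 (common parameter choices; exact wire sizes vary with encoding):

```python
# Nominal sizes in bytes; real wire formats add some encoding overhead.
rsa_2048 = {"public_key": 256, "signature": 256}   # 2048-bit modulus
ecdsa_p256 = {"public_key": 64, "signature": 64}   # two 256-bit values

for field in ("public_key", "signature"):
    ratio = rsa_2048[field] / ecdsa_p256[field]
    print(f"{field}: ECDSA P-256 is {ratio:.0f}x smaller than RSA-2048")
```

Smaller signatures directly shrink DNSKEY/RRSIG responses, which is what reduces the DoS amplification potential mentioned above.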

4) Resolver APIs don't provide good information about why resolution failed. So provide a new API that provides better errors. Put it right in the new RFC. Then you only get validation if you use the new API and nothing changes for existing applications.

5) Deployment is expensive. Why isn't deployment automatic? The default configuration for a DNS server should be to generate a signing key for each domain on first run and then automatically sign all the records with it. If you're paranoid and you want to keep your signing keys offline then you can configure that manually, but nobody has to. And the higher level domain should be able to get the signing key from the lower level domain as soon as you add the NS record, and then confirm with the administrator that it's the right key the same way as with ssh host keys.

6) DNSSEC isn't validated by the endpoints. As far as I can tell there is no actual reason for this even with existing DNSSEC. The client can ask its DNS cache for each of the signing keys up to the root and then check the signatures. A new pseudo-RR that would return the entire chain would make this more convenient, and couldn't really be used for DoS because only recursive resolvers and not authoritative servers would answer that query. Validating clients without validating caches could still fall back to asking for each individual record.

7) Authoritative denials leak information. NSEC5 is supposed to fix this, but there is a much easier way: Sign a denial key which can itself only be used to sign denials and keep it on the authoritative server. The idea that someone who compromises your authoritative servers will be able to deny service to your DNS clients is already sort of implied.

The problem is, even though you could theoretically do all of those things to DNSSEC itself, those aren't even the only problems, and trying to patch all the warts in something nobody is even really using is only going to make something which is already unnecessarily complicated even worse. What is needed very much is a clean slate.

But that doesn't mean it isn't worth doing.


The most important TLDs are controlled by world governments. COM and NET are essentially USG properties. What's the most popular TLD outside the original TLDs? IO. Guess who controls IO? GCHQ, the world's most unhinged signals intelligence agency.

Moreover, how can it possibly be sane for us to deploy a security system that protects end-users from NSA only if Google is willing to move Gmail off of COM?

If NSA subverts a CA today and uses it to MITM Gmail, a substantial fraction of all browsers on the Internet will detect that and alert Google, because of key pinning. When that happens, Google will nuke that CA from orbit. If NSA is dumb enough to subvert a CA that's hard to nuke, Google will start a process of employing code-level restrictions on that CA that will for a substantial portion of all Internet users make that CA asymptotically approach "useless" for NSA's purposes.

If NSA does a QUANTUM INSERT-type attack to selectively poison .COM lookups in order to use TLSA to get a target to eat a fake certificate, what does Google do? Nuke COM from orbit?

DNSSEC is a terrible, terrible idea.


The solutions to those problems aren't possible within the DNS but they also aren't incompatible with it. There is no reason you can't use both DNSSEC and key pinning, and the .com registrar does not want to get caught forging signatures for the NSA.


The .COM registrar is controlled by NSA.


That doesn't mean they want to get caught forging signatures. It would destroy their credibility and cause people to take additional countermeasures. In theory the community could revoke their control over the TLD.

But I'm trying to understand your objection here. DNSSEC/DANE replaces domain validated certificates. I understand your objection to be that we don't want the registrar to be in the chain of trust; but they already are. If you can forge the target's DNS records from the registrar's servers then you can get a domain validated certificate from any CA. The ability to control the DNS records of the domain is the thing they're verifying. The difference with DANE isn't that the registrar is in the chain of trust, it's that the CA isn't. It causes you to have to trust strictly fewer third parties. There is no less vulnerability to or recourse against the registrar than there is now.

To do better than that you need to do something more than domain validation. But how does replacing domain validated certificates with DANE prevent any such additional checks from being done?


Just to be clear:

> DNSSEC/DANE replaces domain validated certificates.

DNSSEC/DANE can be used to replace CA-issued certs, but it can also be used to add an extra layer of validation to existing CA-issued certs. To me this is actually the strongest use-case for DANE, as it provides a means to use DNSSEC to ensure that you are using the correct TLS certificate.

More info is here:

http://www.internetsociety.org/deploy360/resources/dane/

The four modes are:

----

0 – CA specification – The TLSA record specifies the Certificate Authority (CA) who will provide TLS certificates for the domain. Essentially, you are able to say that your domain will ONLY use TLS certificates from a specific CA. If the browser or other application using DANE validation sees a TLS cert from another CA the app should reject that TLS cert as bogus.

1 – Specific TLS certificate – The TLSA record specifies the exact TLS certificate that should be used for the domain. Note that this TLS certificate must be one that is issued by a valid CA.

2 – Trust anchor assertion – The TLSA record specifies the “trust anchor” to be used for validating the TLS certificates for the domain. For example, if a company operated its own CA that was not in the list of CAs typically installed in client applications this usage of DANE could supply the certificate (or fingerprint) for their CA.

3 – Domain-issued certificate – The TLSA record specifies the exact TLS certificate that should be used for the domain, BUT, in contrast to usage #1, the TLS certificate does not need to be signed by a valid CA. This allows for the use of self-signed certificates.

----

Modes 0 and 1 work with current CA-issued certs and assume that normal PKIX X.509 validation is occurring.
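For illustration, here is roughly what TLSA records for the four usages might look like in a zone file. The hostname and hash values are placeholders; the three numbers after TLSA are usage, selector (0 = full cert, 1 = public key), and matching type (1 = SHA-256), per RFC 6698:

```
; _port._proto.name        TTL  class type  usage selector matching-type  cert-data
_443._tcp.www.example.com. 3600 IN    TLSA  0 0 1 ( d2abde240d7cd3ee... )  ; usage 0: only this CA may issue
_443._tcp.www.example.com. 3600 IN    TLSA  1 1 1 ( 8755cdaa8fe24ef1... )  ; usage 1: exactly this CA-issued cert
_443._tcp.www.example.com. 3600 IN    TLSA  2 0 1 ( 9a814d6bfccd9e3c... )  ; usage 2: private trust anchor
_443._tcp.www.example.com. 3600 IN    TLSA  3 1 1 ( 6bb1f4b0a2b9e5c1... )  ; usage 3: domain-issued / self-signed
```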


It can not be used this way. Adam Langley explained why. It has to do with the way browsers work. If there are 4,392 trusted CAs today, DNSSEC will make it 4,393. In practice, DNSSEC strictly makes the CA system worse.

People involved in DNS standardization clearly believe this isn't the case, and that there's a spectrum of different ways DNSSEC will interact with the CA system. They also believed in Interdomain IP Multicast and SNMPv3. The track record of DNS standards people on browser technology is not good. In this case: I suggest taking AGL's word for it.


Thomas, I definitely defer to AGL when it comes to browser technology - and I've certainly read his "Not DANE" piece, but you'll note he did not entirely rule it out. He just said it may be "a long way out". Two comments:

> It can not be used this way.

Actually, it can be. There's a modified version of Firefox maintained by the team at the DNSSEC-Tools project called "Bloodhound" that does DNSSEC validation of every link and does DANE checks on TLS certs:

http://www.dnssec-tools.org/bloodhound/bloodhound.html

> If there are 4,392 trusted CAs today, DNSSEC will make it 4,393.

Hmmm... I guess I see that only if you were using modes 2 and 3 of DANE. If you are using 0 and 1 you are just using DANE as an additional check on the CA-issued cert.

The value to me is that I am in control of the TLSA record in that I am publishing that in my own zone file on my own DNS servers. I can specify there precisely which TLS cert I want to use or which CA I want to be trusted for my domain.

My choice is then cryptographically signed via DNSSEC and bound into the global chain of trust via DS records going back up to the root of DNS.


It would still be better than nothing, and would be easier to deploy. DNSCurve would also be good though.


No, it is substantially worse than nothing, and catastrophically expensive to deploy.


I am talking about the new version the parent proposed, not the current DNSSEC.


"As a result, the many networks around the world that perform DNSSEC validation to ensure that customers are getting to the correct sites (versus being redirected to bogus sites for phishing or malware) were blocking customers from getting to the possibly bogus order.hbonow.com!"

The "versus being redirected to bogus sites for phishing or malware" part here is funny. Because that generally happens when some scammer registers hb0now.com. Not when someone is intercepting your DNS.

And actually, in case of the latter, all bets are off anyways. Because client resolvers DO NOT CHECK DNSSEC.

(Yes, I know you are on your custom configured Linux box or OpenBSD firewall and that DNSSEC works wonderful for you. But the majority of the internet using world with OS X or Windows behind a $15 DSL router or WiFi AP does not.)

Stop. The. DNSSEC. Nonsense.


And actually...

> And actually, in case of the latter, all bets are off anyways. Because client resolvers DO NOT CHECK DNSSEC.

... please visit APNIC's DNSSEC Statistics site where you will see that about 12% of all DNS queries globally ARE being validated by DNSSEC:

http://stats.labs.apnic.net/dnssec/XA?c=XA&x=1&g=1&r=1&w=7&g...

In Sweden this is about 71%:

http://stats.labs.apnic.net/dnssec/QM?o=cXAw7x1g1r1

Slovenia 67%, Estonia 55%, Denmark 48% ... on down to the USA at 23%:

http://stats.labs.apnic.net/dnssec/XQ?o=cXAw7x1g1r1

So DNSSEC validation very definitely * IS * happening out there!


This is "checking" in the sense that "everyone at Comcast was 'checking' when HBO Now broke".


Checks being made doesn't mean the clients are checking. What's the point of validating the communication between the servers if the last mile is unprotected?


> What's the point of validating the communication between the servers if the last mile is unprotected?

It's all incremental ways of reducing the attack surface. Get DNSSEC validation happening at large public DNS services... then at ISPs... then on network edge devices... then into operating system (in stub resolvers) ... then perhaps into applications themselves.

Each step reduces the attack surface a bit more. I wrote about this at:

http://www.internetsociety.org/deploy360/resources/plan-dnss...

This is why some people are using libraries like the GetDNS API to build DNSSEC validation directly into apps:

https://getdnsapi.net/

And solving the "last mile" problem is exactly why the DPRIVE working group was chartered within IETF:

https://datatracker.ietf.org/wg/dprive/charter/

(And anyone is welcome to contribute to the group.)


There have been many instances of unknown entities intercepting DNS which went unnoticed for quite some time. Hence DNSSEC.


Several people have confirmed to me that they were unable to see the site from Google's Public DNS Servers (8.8.8.8, 8.8.4.4 and the IPv6 addresses). This makes sense given that Google has been performing DNSSEC validation since early 2013.

I've updated the post to more clearly note the fact that this was not just an issue on Comcast networks. (I mentioned that it would be an issue on any network performing DNSSEC validation, but then only gave Comcast as an example.)


DNSSEC causes more problems than it solves. And even if it were reliable, it's still based on obsolete crypto and isn't even encrypted. But on top of that, it's an outage factory.


Why would it make sense to encrypt queries and replies from a public data source? Signing the data is all that is needed.


Some users might not want intermediate hosts to see a request for the IP addr of "ginger-midget-porn.com", for example.


Also unavailable from Android devices…


Which would make sense because they may be set to use Google's Public DNS Servers (8.8.8.8 and 8.8.4.4 and the IPv6 equivalents).

Google Public DNS has been performing full DNSSEC validation since May 2013: http://www.internetsociety.org/deploy360/blog/2013/05/confir...


Yea, it is fortunate that people on Twitter quickly figured this out.


To be fair though, it's not that hard to test for DNSSEC breakage if you have dig available. If "dig +cdflag example.com A IN" works and "dig +adflag example.com A IN" returns SERVFAIL, it's DNSSEC's fault. (If that latter command returns responses without the ad flag, the domain lacks DNSSEC signing.)


If it's currently unavailable on Android, you should be able to start seeing the site again whenever the TTL of the DNS records ages out... this just happened for me on my own home network where I have a DNSSEC-validating DNS resolver on the edge of my network.

However, when I do "dig +dnssec @8.8.8.8 ds hbonow.com" I don't get any DS records back, indicating that the issue should be fixed on Google's PDNS.

Can you tell what DNS servers you are using on your Android device? It may be that your ISP is doing DNSSEC validation and experiencing a similar TTL issue.


I was being snarky, since HBO Now reportedly supports only Apple clients.


Too funny... completely missed the snark in the midst of all the discussion here. :-)

And you're right - HBO Now is only on Apple devices right now... so getting to the website from Android wouldn't matter.



