Spoofing DNS with fragments (powerdns.com)
31 points by ahubert on Sept 10, 2018 | 44 comments



Don't rely on DNS for security.

DNS is a reverse phone book. It doesn't matter if the phone book is über-secure. Even if the number you get is really your friend Gary's number, the person who picks up on the other end might not be Gary. Maybe Fred knocked Gary out with a lead pipe and answered the phone. You can't be sure it's Gary talking to you just because the phone number was right.


Right! Same thing with trusting a site just because it has SSL. All it ensures is that you are having a private conversation. You could be having a private conversation with SATAN.


It's not a phone book. It's a distributed key/value store.


A phone book is a great analogy?

If you want someone in Berlin, you'd look in a Berlin phonebook; if you want someone in Austin, you'd look in an Austin phonebook. Maybe the underlying storage mechanism doesn't rely on consistent hashing or buckets, but phone books are inherently distributed key/value stores.


Phone books also have non-phone information on some entries, but their primary purpose is to get phone numbers. The analogy is fine.


I can go to nearly any house and pick up a phone book and get the equivalent of A records (name to address).


A distributed IP book then?


Well, yes, DNS is spoofable. The solution is, and has always been, DNSSEC. This just makes it slightly more urgent to start using it.


DNS is virtually never spoofed in practice because of a combination of (1) sensitive systems being built to assume the DNS was already insecure and (2) the settings where DNS spoofing is useful already being so insecure that DNS security doesn't do much to help you (for instance, coffee shop wireless).

In practice, there are two significant targets for DNS spoofing:

1. SMTP email, where TLS isn't universally required among MTAs.

2. CA domain validation.

For problem (1), we have SMTP STS (now standardized as MTA-STS), which simply establishes a long-term requirement for TLS between a pair of MTAs, sidestepping the spoofing problem.
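
(For reference, an MTA-STS deployment boils down to a TXT record plus a tiny policy file served over HTTPS; the names and values here are purely illustrative:

_mta-sts.example.com. IN TXT "v=STSv1; id=20180910T000000Z"

and, at https://mta-sts.example.com/.well-known/mta-sts.txt:

version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800

Sending MTAs cache that policy for max_age seconds, so a one-off DNS spoof can't silently downgrade the connection to cleartext.)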

For problem (2), there is a whole panoply of countermeasures to DNS spoofing, erratically deployed across a variety of CAs. CAs do not as a rule do strict DNSSEC validation (it's too broken), but many of them do multi-perspective lookups which make packet-level spoofing tricky to accomplish in practice.

The real solution to this problem isn't a government-run PKI that forklifts in a whole new DNS protocol, but rather a culling of CAs that can't deploy adequate countermeasures against these attacks.

Note that the Usenix paper illustrates scenarios in which DNSSEC makes this attack easier, not harder.


Just to clarify, it is a "government-run" PKI in the sense that some governments control some TLDs, as opposed to the CA ecosystem where a single rogue (government-run) CA can issue a certificate for any domain (happily ignoring any CAA record).
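
(A CAA record is just a DNS entry naming which CAs are allowed to issue for a domain, and it's easy to inspect; the values here are illustrative:

example.com. IN CAA 0 issue "letsencrypt.org"

$ dig example.com CAA +short)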

It would be nice to have something like Certificate Transparency for DNSSEC (at least for the TLDs), but for now CT has the problem that it is only a requirement for new certs, and a rogue CA could backdate a cert they maliciously created for your domain.


> CAs do not as a rule do strict DNSSEC validation (it's too broken), but many of them do multi-perspective lookups which make packet-level spoofing tricky to accomplish in practice.

Can you give specific examples of "many of them" doing multi-perspective DNS today?


Can't. But if I've given the impression that the majority of CAs are sophisticated, that wasn't intentional.


So to be clear:

You say DNSSEC validation is "too broken" for any public CA to be doing it, but actually the largest CA (by issuance volume) does exactly that.

Whereas you say multi-perspective is so prevalent that "many of them" do it, yet you aren't able to produce any examples at all.

Maybe time for a re-think?


I do not understand your argument at all, sorry.


tialaramex is mis-reading "do not as a rule" as "never" and then constructing a straw man based upon that, and is also asking you to specifically name any CA that does what you say many of them do. (-:


DNSSEC is probably the way forward. However, it is bloody awful to implement. OK, it isn't really awful, but it is hard (not the same thing). I suggest that DNSSEC evangelists look at how Let's Encrypt are doing things for TLS certs and learn a few lessons.

Just in case I've put a nose out of joint: on a brand new Ubuntu Bionic minimal install with a NAT port forward to 80/tcp from the WAN, I can get an LE cert with:

# apt install certbot

# certbot -d systemname.example.co.uk

Fill in a few questions as required and off we go. There is a systemd timer unit that automatically renews certs, or you can use cron; naively enabling both by accident does not matter. In general the LE system just works because the clients have been designed to work like that (my experience here is only with Linux; Windows and Apple users may have a different experience).
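
If you want to double-check that the renewal automation is actually wired up (the unit and command names below are what the Ubuntu certbot package gave me, so treat them as an assumption for your distro):

# systemctl list-timers certbot.timer

# certbot renew --dry-run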

Now, DNSSEC has not been handled like LE. Windows' DNS server seems to have had DNSSEC built into the GUI (mmc) since at least 2012R2. The options on Linux and others are not quite so easy.


Naive question: Is there any reason, given that my registrar allows me to turn on DNSSEC, that I shouldn't? Any negative consequences or things I need to account for? Or is this a no duh move?


Yes: if you DNSSEC-sign your domain but don't keep up maintenance of it, your domain will vanish off the Internet for users behind strict-validating DNS caches. That happened to HBO NOW for all Comcast users the week they rolled out.

(You'd also be participating in a government-run PKI but I'm not assuming your politics preclude you from doing so).
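
(If you do sign, a quick way to spot-check that the chain still validates is BIND's delv or any validating resolver; the domain here is a placeholder:

$ delv example.com A    # prints "; fully validated" on success

$ dig @8.8.8.8 example.com A +dnssec    # look for the "ad" flag in the reply header

If either of those starts failing after a key rollover or an NS move, strict-validating caches are already returning SERVFAIL for your zone.)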


> but don't keep up maintenance of it

Not sure about this phrasing. It's not on the domain owner to maintain anything; you just need to not forget you have DNSSEC enabled if you move your zone to a different NS.

It's up to your DNS host to handle things like key rollovers, and most people aren't writing and running their own authoritative DNS servers.

Regardless, the risk vs reward is still heavily in favor of not signing. It is so sparsely deployed that it can't be relied upon for anything (like DANE) and it seems like DoH (both for clients and between resolvers) may end up being the better way forward.


In addition to DoH, there's already a direct-to-domain-registrar protocol in the works for the CA problem.

DNSSEC is, I think, pretty much dead.


> there's already a direct-to-domain-registrar protocol in the works for the CA problem

Sounds interesting. Are there any drafts/discussions available?


You're looking for discussions about RDAP, the successor to WHOIS.
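
(RDAP is already live at a number of registries if you want to poke at it; the rdap.org redirector used here is an assumption, not a recommendation:

$ curl -sL https://rdap.org/domain/example.com)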


Danke!


Am I incorrect that the same thing happens when you let your domain registration or TLS certificate expire? Perhaps with a slightly worse outcome, but still.

If the current iteration of DNSSEC is intolerable, why not propose a new version with fewer warts? A sort of DNSSEC-lite? Almost everyone will adopt it as long as it's not as difficult to use as the original, but the USG can continue rolling out the original in the public sector the same as they do now.


No: when your TLS certificate expires, you don't vanish off the Internet. Your browser tells you specifically what went wrong.

When DNSSEC fails, you (in the general case) vanish without a trace.

Why bother with a new version of DNSSEC? We've worked around basically all the problems of insecure DNS at this point.


Making DNSSEC more attractive will help all the people that the workarounds won't.

Two reasons for a better DNSSEC:

1) People will continue to [erroneously] assume their DNS transactions are secure. They won't be, and then they'll get screwed.

For users, this means being smart enough to know if their application is resistant to DNS attack, and if not, smart enough to implement a workaround. For organizations, this means waiting until they are attacked, then adding the workarounds. Organizations and users who make this assumption - which will be most of them - will continue to be more vulnerable than is necessary.

2) There are many applications and protocols on the internet other than HTTP[S], and assuming the above point, these apps/protocols would eventually need to implement the workarounds that web clients/servers have.

Many of these are legacy, and many of them are also critical, including uses by the financial industry, medical industry, public utilities, government/military, web businesses, etc. The public sector is already implementing DNSSEC because they see that not doing so is unnecessarily risky (and they don't have to pay for it). But for all the private orgs that either don't see the need or don't want to bear the cost, a less onerous solution that doesn't require "workarounds" would net them significant benefits, even if it's at the fringes of today's real world security problems.

Basically, we will all at some point be involved (at the least, indirectly) in something that isn't an HTTPS transaction, and so we will all at some point have a need to trust DNS to some extent. The workarounds aren't going to work by default for everyone everywhere, and so there will be people left in the lurch.


There are fewer and fewer non-HTTPS protocols every year, a trend underscored by the fact that DNS itself is moving towards a preferred realization in HTTPS. Not only that, but simply adding DNSSEC to an insecure legacy protocol obviously doesn't make it secure; all it does is give you a better chance of looking up its genuine IP address.

These do not seem like compelling reasons to push a forklift upgrade of DNS infrastructure. And the market agrees: DNSSEC has more or less failed, even though you can now enable it pretty straightforwardly at a number of cloud providers. Nobody cares. And they're right: DNSSEC was a terrible solution to something that has always been kind of a non-problem. Even in the eras where DNS spoofing was straightforward (2007, the late 1990s), it never really mattered much operationally.


The market is perfectly content to sit back and add workarounds on top of workarounds until everything starts to collapse. It's culminated in DNS Flag Day: https://dnsflagday.net/. I suppose it's fine to forklift upgrade if it's to resolve the tech debt the market tacitly accepted, but not so if it means securing a small but meaningful part of internet infrastructure.

I don't want or need DNSSEC as currently designed to be accepted everywhere, but saying that a workaround is preferable because the real fix is costly ignores the long-term ramifications.


I understand your argument, but try for a second to see it from my perspective: the end-to-end argument pretty strongly implies, to me at least, that to get meaningful security, you want to provide it in higher-layer protocols, closer to the endpoints. What's wrong with taking a layered design like TCP/IP and simply drawing a line, below which we don't try to provide cryptographic security?

We've been doing that for ~30 years now; it is the only means of providing cryptographic security on the Internet that has ever actually worked.

You're calling this approach "workarounds", which, sure, they're workarounds, in the same sense as TCP is a workaround for IP's native lack of congestion avoidance.


Because making a TLS stack mandatory for doing DNS is both ridiculous and a non-starter. Ridiculous because if you think adding DNSSEC verification to a DNS library is hard, try adding a TLS stack. The hard part isn't recursive queries. All DNS libraries, even stub resolvers, already need to handle recursive resolution to follow CNAME chains, as caching servers don't normally return resolved chains unless they're (a) in-bailiwick or (b) the chain can be resolved from cache. The ugly part is attaching such a complex dependency to all the various third-party resolvers out there.

A non-starter because I've written DNS servers and clients that saw widespread deployment, and by far the biggest headache was networks that blocked TCP on port 53. DNS over TLS uses port 853. LOL, like that's gonna work better.
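
(Checking whether a given network even passes DoT is at least a one-liner, e.g. with knot's kdig or plain openssl; the resolver address is just an example:

$ kdig @1.1.1.1 +tls example.com A

$ openssl s_client -connect 1.1.1.1:853 </dev/null)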

Same analysis applies to DNS over HTTP, except now you're adding a whole HTTP stack to your DNS library. For heaven's sake....

Why not just get the slow, painful death of the open internet over with and move everything to AWS. We can have AWS terminals using opaque, bespoke protocols for everything. We can merge the AWS and Blink/Chrome teams and we can delegate the development of all new novel network services to them.


Nerds have been saying this about the HTTP-ization of protocols literally for decades, but HTTP's gradual assimilation of all other protocols has continued basically unimpeded.

Also: your primary objection to DoTLS is that networks block 53/tcp, and you say "same analysis applies to DNS over HTTP", except that, obviously, it does not: the whole reason DoH exists is to address that problem.

Finally, even though I believe over the long term that DoH will be successful and will eventually become the primary mechanism with which browsers look up hostnames, it doesn't have to be for DoH to solve, entirely, the real problems DNSSEC tried to solve.
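
To make that concrete: a DoH lookup is just an HTTPS request, which is exactly why it traverses networks that mangle port 53. For example, against Cloudflare's JSON endpoint (the URL here is an assumption that the service still exposes it):

$ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.com&type=A'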


M. Ahern was, I suspect, referring to the analysis of the first paragraph with "same".

In other words:

> if you think adding DNSSEC verification to a DNS library is hard, try adding an HTTP or an HTTP+TLS stack


That doesn't make much sense as an argument. Adding DNSSEC verification to a library is probably harder.


The layered approach gives benefits when the lower-layer protocols provide features to the upper-layer ones. We should have some security at a lower level so the upper levels can relax. But encryption was bolted onto everything after the fact, so it's a special case that doesn't translate to layering until you get into network tunnels.

DNS is also weird because it's used by all these other protocols not as a layer, but as an external resource. It's like if an HTTPS server needed records from an SQL server, and rather than secure the SQL connection, you put the SQL inside HTTPS. I mean... sure, I can run an HTTPS server on the SQL server, add some extensions to the protocol, and request records that way. But that's layer-7-in-layer-7. That's not layering, that's... weird smooshing and stacking and munging. It's not the kind of layering the internet was designed for.

I think we should've widely adopted layer 4 secure transport protocols a long time ago. You wouldn't need to do anything other than add or change some arguments to your existing IPC syscalls. Decentralized protocols are more complicated, but could be adapted without much trouble.


Your first argument begs the question. "We should have security at a lower level so upper protocols can relax". Ok, at what layer? The overwhelming majority of "protocols" are instantiated at a layer higher than TLS. Make the case that we need security at a lower layer than TLS.

DNS is "weird" in the sense that you say, but it's important to understand that it's not the only thing that's weird in the exact same way. We don't have secure ARP, or secure BGP, or secure OSPF, or really even secure SNMP.

TLS is a secure transport protocol. That's literally what the name means.


> Make the case that we need security at a lower layer than TLS

I am. The first line of my last paragraph says it: "I think we should've widely adopted layer 4 secure transport protocols" (I meant OSI L4, not TCP/IP L4). As in, make TLS-over-TCP connections as natively and ubiquitously available as tcp and udp, built into operating systems and routers, and made as easy to use as opening a socket.

That would help these other protocols that we don't secure, like SNMP. Obviously SNMP would still have those crap "community strings", but every protocol built on top of tcp could have been secure if tcp had security by default. SSH might never have been invented if Telnet and Rsh could have used an operating system's native TLS connections. Forcing applications to bring their own TLS implementation doesn't make sense to me.


"I think we should have widely adopted" and "it doesn't make sense to me" is not really "making the case" against the stated end-to-end principle argument, it's just describing a bare preference.

The end-to-end principle isn't some immutable axiom but it's been particularly effective and applicable in secure network communications. So the bar for "making the case" is a lot higher than what you have so far.


> Fragmented DNS responses happen occasionally with DNSSEC

It is not clear that DNSSEC helps in this case. Given just the text (and my unfamiliarity with DNS protocols on the wire), it seems to imply that the fragmentation occurs because of DNSSEC.

EDIT:

> In the meantime, DNSSEC does actually protect against this vulnerability, but it does require that your domain is signed and that your CA validates. This may not yet be the case.

Nevermind.


Do any DNS resolvers monitor for huge spikes of spoofed results? Is falling back to TCP for those queries likely to break much?


My question, as someone who doesn't know much about DNS beyond the most basic stuff: how would a DNS resolver know when a query is spoofed? You can maintain a query cache to filter out unsolicited (spoofed) responses, but what would make a query valid or invalid? I'm talking about DNS/UDP, btw.

Maybe some sort of challenges? Authentication? Like DNS cookies or something.
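
(DNS cookies are a real thing, RFC 7873, and dig will exercise them if you ask; whether the server answers with one depends on its support:

$ dig +cookie example.com A    # look for a COOKIE line in the OPT PSEUDOSECTION of the reply)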


At a level that needs OS cooperation to detect, there are packets with invalid ports or, for TCP, invalid sequence numbers. On top of that, the requests themselves have a 16-bit ID that acts as a random cookie. If we could extend DNS to make the ID bigger, that would solve the problem by itself. There have been attempts to use rAnDOm CAsE to make spoofing harder, but it only works on some DNS servers.
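
(The rAnDOm CAsE trick, aka dns0x20, works because servers are supposed to echo the question name byte-for-byte, so the case pattern becomes extra entropy an off-path spoofer has to guess. You can see the echo yourself:

$ dig ExAmPlE.CoM A    # the QUESTION section should come back in the same mixed case

Servers that lowercase the name instead are why it "only works on some DNS servers".)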

For attacks like this, there are thousands to billions of spoofed responses coming in. It's not subtle at all, nor is it very hard to keep track of the domains under fire.

Edit: Oh wait, the queries themselves? That's a very different problem and there's no good solution. Harass more ISPs into implementing egress filters that drop packets with spoofed source IPs from their users.


Daniel J. Bernstein discussed the collision likelihoods with message IDs and port numbers years ago, which he later repeated on his WWW site, distinguishing between various forms of attacker according to how much network access they have (for snooping the query traffic). From the design of his TAICLOCK protocol one could tell that such thinking had been a contributory factor. 236 bytes are available for (say) a client-generated random number.

* http://cr.yp.to/proto/taiclock.txt

* http://cr.yp.to/djbdns/forgery.html


What are you trying to fix?


Poisoning attacks.

Also reporting them.



