This of course does nothing to address the elephant in the room: any protocol that trusts the source address in a UDP packet can be made to shovel data at a spoofed victim without limit. DNSSEC is just one feature that can be abused; I'm sure there are many more available in other protocols, and more yet to be invented.
If I had to draw a pie chart explaining the rationale behind deploying DNSSEC, a 1/3 slice of that pie would be labeled "IETF's misguided effort to replace the broken TLS CA PKI with yet another PKI controlled largely by the same giant businesses", and the remaining 2/3 slice would be labeled "Self-perpetuating fallacy that DNS must be SEC'd the way IP was (unsuccessfully) SEC'd", or, less charitably, "Windmill tilting exercise on the part of standards bodies".
If you want to strip away part of the protocol to fix an attack vector, why not just drop EDNS0? Or do what networks around the world already do and block all port 53 UDP packets bigger than 512 bytes. We'd still have a size-limited and somewhat inflexible protocol, but at least DNS amplification would have an upper bound.
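For what it's worth, the size check itself is trivial. Here's a rough sketch of the classification logic in Python (IPv4/UDP only, names are mine; in practice you'd do this in the router or firewall, not in a script):

    import struct

    MAX_UDP_DNS_BYTES = 512  # the pre-EDNS0 ceiling discussed above

    def should_drop(ip_packet: bytes) -> bool:
        """Drop UDP datagrams to or from port 53 whose payload exceeds the cap."""
        if len(ip_packet) < 20 or ip_packet[0] >> 4 != 4:
            return False                          # only IPv4 handled in this sketch
        ihl = (ip_packet[0] & 0x0F) * 4           # IP header length in bytes
        if ip_packet[9] != 17 or len(ip_packet) < ihl + 8:
            return False                          # not UDP, or a truncated packet
        src_port, dst_port, udp_len, _ = struct.unpack("!HHHH", ip_packet[ihl:ihl + 8])
        if 53 not in (src_port, dst_port):
            return False
        return udp_len - 8 > MAX_UDP_DNS_BYTES    # UDP length includes its 8-byte header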
I get that. Hell, you're probably right that the cost of supporting DNSSEC isn't worth its benefits in the long run. It's still a crappy argument and a crappy way to deal with a long-standing security problem.
If you want to prevent all future UDP DNS amplification attacks, you must require all UDP DNS packets to be no more than a specific number of bytes (for example, 512, the pre-EDNS0 size). This would fix the root problem forever. Any feature extension can simply require the use of TCP.
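Server-side, "require TCP for anything bigger" is just the existing truncation mechanism. A minimal sketch, assuming the 512-byte cap above (flag handling simplified, function names are mine):

    import struct

    UDP_LIMIT = 512  # bytes

    def udp_reply(query: bytes, full_answer: bytes) -> bytes:
        """Send the real answer if it fits; otherwise send a minimal reply with the
        TC (truncated) bit set, which tells the client to retry the query over TCP."""
        if len(full_answer) <= UDP_LIMIT:
            return full_answer
        qid, qflags, qdcount = struct.unpack("!HHH", query[:6])
        flags = 0x8000 | (qflags & 0x7900) | 0x0200   # QR=1, copy opcode+RD, TC=1
        header = struct.pack("!HHHHHH", qid, flags, qdcount, 0, 0, 0)
        return header + query[12:]                    # echo the question section back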
I get that there's a large cost involved with every solution except forcing everyone to abandon DNSSEC. I don't think forcing everyone to abandon DNSSEC is a realistic goal at this point. Instead, I recommend fixing the root problem for all future cases. Everyone can continue to not use DNSSEC, and DNS will never again be usable in an amplification attack beyond what was already possible before DNSSEC.
Meanwhile, DNSSEC itself provides minimal value (all online commerce on the Internet happens without DNSSEC today, and to a useful first approximation, none of today's online fraud depends on spoofing DNS).
(a) DNSSEC doesn't actually do a good job of making the DNS secure (for instance, it doesn't secure the "last mile" between desktops and recursive resolvers).
(b) Securing the DNS doesn't automatically secure the other core protocols that also need to be secure to rely on DNS promises; an IP address is still just an insecure IP address even if you learn about it from an RSA-signed message.
(c) There's no compelling case that the DNS needs to be secure, or that we need systems that rely on a secure DNS. A secure DNS doesn't make it any easier to clear credit card transactions, to transfer funds, or to create secure & anonymous messaging systems.
It seems to me that if DNS security was such a serious problem, it would be easier to come up with a scenario that benefited from it; that is, you wouldn't need to handwave with a comment like "the DNS needs to be secure before systems that rely on DNS being secure are written". Well, sure, but that shouldn't make it hard to merely imagine such a system. What is it?
I am not going to name one. You may now repeat your claim that the reason is that I can't think of one, for continued comedic effect.
It is not a subjective statement that Internet commerce doesn't rely at all on DNSSEC today. It's a fact.
I don't know much about the issues surrounding DNSSEC, so I wouldn't be surprised if they make this not worth it in practice.
Ironically, the dominant provider in both PKIs happens to be Verisign.
One difference between the two PKIs is that the CA system admits many different CAs, and allows browser and even end-user control over which CAs are trustworthy. DNSSEC, on the other hand, bakes its authorities into the core of the Internet. Don't like Verisign? Tough shit. A pithier way to say the same thing: Ghaddafi's Libya would at one point have been BIT.LY's "CA" in a DNSSEC world.
The role of any PKI authority in the HTTPS/TLS ecology will diminish soon with something like TACK, which integrates key continuity with the TLS PKI. TACK doesn't require DNSSEC, but allows websites to "pin" their certificates into browsers so that even if Iran or China hacks your trust anchor, your site can overrule them. This is exactly the scheme Google uses today to protect its properties: no DNS record or CA signature will convince Chrome that you are the real GMail.
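To make "pin" concrete: the check is just "does the certificate the server presented match a digest I stored earlier", layered on top of normal CA validation. A rough stdlib sketch (hypothetical pin store; real schemes like TACK pin the public key rather than the whole leaf cert):

    import hashlib
    import socket
    import ssl

    PINS = {"example.com": "expected sha256 hex digest goes here"}  # hypothetical store

    def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()        # normal CA validation still applies
        tls = ctx.wrap_socket(socket.create_connection((host, port)), server_hostname=host)
        digest = hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()
        if digest != PINS.get(host):
            tls.close()
            raise ssl.SSLError(f"{host}: presented certificate does not match the stored pin")
        return tls                                # trusted only if CA *and* pin agree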
The short answer to "replacing the CAs" as a use case for DNSSEC is that DNSSEC doesn't change the model or the security characteristics of HTTPS/TLS; it's at best a lateral move. But there are specific ways in which it makes HTTPS/TLS trust problems worse, and even more ways that it introduces reliability problems.
And yes, certificate pinning is an exception, making the browser vendor the CA for a small number of domains, but Convergence and other systems depend on the DNS, and TACK only helps on subsequent visits to an already visited site.
Why are we required to assume that anyone who answers a plaintext email from a domain must be issued the certificate corresponding to that domain? Why can't we authenticate the issuance of new certificates out-of-band? We could address that problem without even requiring browser modifications.
Also, in a TACK+TLS/CA world, Libya could not have surreptitiously swapped out BIT.LY's certificate. In a TACK+TLS+DNSSEC world, they can; stub resolvers don't verify signatures in DNSSEC.
Ultimately, what we need over the long term is a system like Convergence, which allows trusted third parties to vouch for CAs and lets users choose which trusted third parties they want vouching for CAs. This is largely a UI/UX problem, and it's orthogonal to whether HTTPS trust anchors come from X.509 or DNS.
It feels like the very best argument that can be made for DNSSEC as a CA alternative is that it doesn't make things worse. (I personally think it makes things much worse, but I'll stipulate otherwise here.) But it's tremendously expensive and brittle, addresses mostly the solved part of the problem, and does nothing to help the major unsolved part of HTTPS trust.
Authenticate with who, Verisign (the ones that have the authority to determine who owns the domain)? I guess making it out of band alleviates some situations where Verisign gets hacked (but not where Verisign is untrustworthy, which you could argue is the case for recent US domain seizures), but the number of domains requiring certificates is high enough that they're just going to end up checking the same database.
> Also, in a TACK+TLS/CA world, Libya could not have surreptitiously swapped out BIT.LY's certificate. In a TACK+TLS+DNSSEC world, they can; stub resolvers don't verify signatures in DNSSEC.
Huh? In a TLS/CA world, Libya could get a legitimate new certificate (although I guess this could be noisier); in a TLS+DNSSEC world, Libya could produce a valid signature. TACK has the same effect on both, protecting some but not all users; invalid DNSSEC signatures are irrelevant. (But in general, the stub resolver issue is one of those practical problems that I'm not worrying about here.)
As I said, I believe that Convergence doesn't really help as long as the trusted third parties are validating using DNS, and the only way around that is a new DNS root designed from the start to be decentralized and cryptographically secure.
Regarding your first question:
To a first approximation, everyone that needs a TLS certificate already has one.
The concern about authenticating requests for certificates is about attempts to get new certificates issued for entities that already have them.
An entity that already has a cert can authenticate new requests for different certs; for instance, they can put a PGP or S/MIME key on file with the CA, and that key can be used to authenticate new requests.
Actually, every tool in the web authentication toolbox, from S/MIME through 2-factor auth keys, is fully available to CA authentication, which is a good thing.
So the question then is, why should ability to answer an email sent to a domain trump every other authentication mechanism we could use instead?
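For concreteness, "key on file" is mechanically simple. A sketch using the third-party `cryptography` package (the enrollment store and function names are mine, and a real CA would bind this to a proper CSR flow):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    keys_on_file = {}  # domain -> public key recorded when the first cert was issued

    def enroll(domain: str, public_key: ed25519.Ed25519PublicKey) -> None:
        keys_on_file[domain] = public_key

    def authenticate_reissue(domain: str, request: bytes, signature: bytes) -> bool:
        """Accept a request for a new cert only if it's signed by the key on file,
        so answering an email at the domain is no longer the deciding factor."""
        key = keys_on_file.get(domain)
        if key is None:
            return False
        try:
            key.verify(signature, request)
            return True
        except InvalidSignature:
            return False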
To your second question: my point is just that Libya can more quietly hijack BIT.LY under DNSSEC than they can under an HTTPS CA model.
I agree that we need better trust anchors for peer-to-peer verification than DNS.
He doesn't think it's a very strong argument against DNSSEC.
The notion that the problem is "actually open redirectors" (nameservers configured to answer queries from arbitrary points on the Internet) is indicative of the weird reasoning that the IETF DNS people have used all throughout the DNSSEC process. Open redirectors mean DNSSEC is a viable mechanism to get ISPs to flood random sites off the Internet? Just mandate that ISPs not run their resolvers that way! Secret DNS names mean that verified negative answers in the DNSSEC protocol will breach confidentiality? Just mandate that nobody have secret DNS names!
Daniel J. Bernstein did a much more convincing takedown of dakami's reasoning in a talk at 27C3; I'm not going to recap it. I'd just say Daniel J. Bernstein has earned the authority he speaks with regarding DNS security; the vulnerability dakami is famous for discovering is one that djbdns --- released many years before that vulnerability was disclosed --- was designed in part to address. Obviously (if you've ever installed djbdns), Bernstein did a good job of handling the open redirector problem as well.
Kaminsky got on the wrong side of this issue, which is ironic, because he's put a lot more time into practical DNS security than the people he's arguing on behalf of.
Second, you can compare their numbers directly (get Bernstein's from any of his DNS talks at cr.yp.to).
Third, read closely and you run into things in Kaminsky's post like this:
> That’s a 3.6KB response to a 64 byte request, no DNSSEC required. I’ve been saying this for a while: DNSSEC is just DNS with signatures. Whose bug is it anyway? Well, at least some of those servers are running DJB’s dnscache…
Well, probably not, because dnscache was, from the time of its release, the first cache server to ship default-deny for remote queries; if you want dnscache to serve as an open cache, you have to jump through hoops to configure it that way. Most open cache servers are BIND.
I agree with very little of what Kaminsky has to say about DNSSEC, but my arguments are reasoned well enough that I'm not afraid to actually make them.
Slides for this talk: http://cr.yp.to/talks/2010.12.28/slides.pdf
(Also http://cr.yp.to/talks.html Ctrl+F "DNSSEC")
By dropping the incoming packets you're punishing well-behaved resolvers. They're going to take a 2-500ms latency hit before retrying against another resolver or over TCP. A well-behaved resolver will respond to a truncated (TC=1) answer by sending the same query over TCP; that's only a latency hit of one RTT.
The TCP connection also effectively authenticates control of that source IP address. Now add that source IP to your whitelist of known-good resolvers.
Attacker mitigated, other customers not impacted.
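Sketched out, the whole dance is small (the resolve/send hooks and the TTL are placeholders here, not a real server):

    import struct
    import time

    whitelist = {}        # source IP -> when it last completed a query over TCP
    WHITELIST_TTL = 3600  # seconds; purely illustrative

    def truncated_stub(query: bytes) -> bytes:
        """Tiny reply with QR and TC set: 'ask me again over TCP'."""
        qid, qflags, qdcount = struct.unpack("!HHH", query[:6])
        flags = 0x8000 | (qflags & 0x0100) | 0x0200
        return struct.pack("!HHHHHH", qid, flags, qdcount, 0, 0, 0) + query[12:]

    def handle_udp(src_ip: str, query: bytes, resolve, send) -> None:
        """Unknown UDP sources only ever get the small stub, so spoofed queries have
        no amplification value; sources that have proven themselves get real answers."""
        if time.time() - whitelist.get(src_ip, 0.0) < WHITELIST_TTL:
            send(src_ip, resolve(query))
        else:
            send(src_ip, truncated_stub(query))

    def handle_tcp(src_ip: str, query: bytes, resolve, send) -> None:
        """Completing the TCP handshake demonstrates control of src_ip; remember it."""
        whitelist[src_ip] = time.time()
        send(src_ip, resolve(query))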
" What's great is that we can safely respond and ask them to block all DNS requests originating from our network since our IPs should never originate a DNS request to a resolver. "
The tl;dr is rate limiting, plus some other techniques like adaptively restricting the ratio of response size to request size.
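Something like this, where every constant is made up for illustration:

    import time
    from collections import defaultdict

    RATE_PER_SEC = 5          # sustained responses allowed per source per second
    BURST = 20                # short bursts tolerated
    MAX_AMPLIFICATION = 4.0   # largest response/request size ratio sent over UDP

    _buckets = defaultdict(lambda: [BURST, time.monotonic()])  # src -> [tokens, last refill]

    def allow_response(src_ip: str, request_len: int, response_len: int) -> bool:
        """Per-source token bucket, plus a cap on how much larger the response may be
        than the request (anything bigger gets truncated or dropped instead)."""
        if response_len > request_len * MAX_AMPLIFICATION:
            return False
        tokens, last = _buckets[src_ip]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE_PER_SEC)
        if tokens >= 1:
            _buckets[src_ip] = [tokens - 1, now]
            return True
        _buckets[src_ip] = [tokens, now]
        return False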
In particular, since most DDoS attacks originate from botnets, simple egress filtering at the ISP level should be sufficient.
Seriously, they're just too lazy to auto-generate firewall rules from their list of assigned addresses.
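It really is about that simple. A sketch with made-up prefixes and approximate rule syntax (adjust for the actual platform):

    import ipaddress

    ASSIGNED = ["192.0.2.0/24", "198.51.100.0/24", "2001:db8::/32"]  # ISP's own space

    def egress_rules(assigned, wan_if="wan0"):
        """Emit iptables/ip6tables-style rules: traffic leaving wan_if is allowed only
        if its source address falls inside the ISP's assigned prefixes (BCP38)."""
        rules = []
        for prefix in map(ipaddress.ip_network, assigned):
            cmd = "iptables" if prefix.version == 4 else "ip6tables"
            rules.append(f"{cmd} -A FORWARD -o {wan_if} -s {prefix} -j ACCEPT")
        rules.append(f"iptables -A FORWARD -o {wan_if} -j DROP")
        rules.append(f"ip6tables -A FORWARD -o {wan_if} -j DROP")
        return rules

    print("\n".join(egress_rules(ASSIGNED)))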
I think vendors also have some responsibility. The defaults are bad and the vendors make their devices hard to manage on purpose (for lock-in reasons). I'm looking at Cisco in particular.
Once you get away from the network edge, the only possible uRPF is loose mode. But that's a weak check, as plenty of stubs out there use default routes. And asymmetric routes are so common as to rule out feasible uRPF completely.
So in summary, it has to be the edge networks who enforce this. And the prime intermediate offenders actually have a monetary incentive to not prevent this traffic.
It turned out that a VPN was implemented using `dnsmasq` which was responding to the DNS queries.
I ended up using firewall rules to drop the queries because it seems like in certain situations it's not enough to just configure `dnsmasq` to not respond to the requests: http://people.canonical.com/~ubuntu-security/cve/2012/CVE-20...
Just in case the information is useful to anyone else. :)
(I Am Not A Network Engineer.)