DNSSEC ensures that received DNS records are signed by an entity authorized to publish changes to the domain. It does not ensure, in any way, that this entity is publishing the right changes. It protects you against man-in-the-middle attacks, but not "hijacking" as usually envisioned. The linked article https://krebsonsecurity.com/2019/02/a-deep-dive-on-the-recen... identifies multiple cases of registrars saying that someone broke into their web interface (either by determining valid account credentials, or finding a flaw in the web interface). DNSSEC is not designed to protect against these attacks. As that article points out, some of the victims did in fact have DNSSEC set up.
As 'tptacek pointed out a few days ago, many .gov and .mil domains have DNSSEC and were nonetheless victims of DNS hijacking attacks. https://news.ycombinator.com/item?id=19180817 The US government is correctly not insisting on additional rollout of DNSSEC.
I myself have been a victim of DNS hijacking: in January 2013, someone hijacked mit.edu and (among other things) redirected the MX records to non-MIT servers. https://thetech.com/2013/01/23/hack-v132-n63 I lost email to my mit.edu address as a result, and if the attacker were interested in targeting me (or targeting MIT account holders in general) they could have triggered password resets by email, etc. They got in because they apparently knew the password for MIT's account at EDUCAUSE, MIT's domain registrar, or guessed it on the first try. DNSSEC would not have prevented this. The attacker would have simply instructed EDUCAUSE to sign the records, or instructed EDUCAUSE to update the public key to one under the attacker's control.
And, for bonus points, had they done so, they likely would have been able to lock MIT out of the domain, keeping it from regaining control for longer than it actually took.
At the moment, DNSSEC is the only off-the-shelf authenticated DNS system. We need that authentication between the DNS servers and the Certificate Authorities.
DNSSEC between the DNS server and hosts doesn't matter much. There are other ways to MitM, and insecure fallback is inevitable.
In the end, the DNS system is what we use for identity management, so it should be authenticated. As the recent attacks have shown, the actual administration of the DNS servers (Registrars) also needs to be secured. That is not DNSSEC's role.
But in our current system, if a registrar is compromised, the system is compromised anyway. I'd love to hear an idea that obviates the need to trust Registrars. Without such an option, I don't think 'this doesn't defend against DNS account hijacking' counts, because nothing defends against DNS account hijacking.
Once someone has that account, you are fucked. HPKP is essentially the only thing that could save you, and that has (rightly) been deprecated.
If there were actual routine attacks that DNSSEC was seriously defending against, you don't think its advocates would be sounding their trumpets about them incessantly? Occam's Razor argues emphatically that the attacks DNSSEC deals with aren't occurring.
Meanwhile: if the reason we need DNS security is for certificate issuance, DNSSEC is a stupid way to get that. We can and will create a secure channel directly between CAs and registrars (one such system is RDAP, the upcoming replacement for WHOIS). We also have systems in place that have demonstrably defeated misissuance, most notably certificate transparency, which does not in any way depend on DNSSEC. CAs can and will do multi-perspective DNS lookups, which are valuable (though imperfect) even against the BGP attacks that completely bypass DNSSEC.
The history of Internet network security is largely the history of higher-layer systems routing around the insecurity of DNS; an insecure DNS is practically the premise of Internet security. And throughout that entire history, going all the way back to when I was in high school in the early-mid 1990s, DNSSEC advocates have been trying to foist this boondoggle of a protocol onto us. It was annoying when it was just a bad cryptosystem that was impossible to configure. But now it's a key escrow system as well. Fuck that. DNSSEC is still failing. Let it fail all the way away.
I don’t think the desire to authenticate data coming from the internet’s global public key-value store is ludicrous.
People used to say the same thing about the boot process. Now we have secure boot. Maybe that's just stupid too but I feel a lot better knowing that my hardware is booting verified images than whatever happens to be sitting at some address in unprotected memory (as long as I remain the owner of the key).
Even if this is true, it is also shallow, misleading, and IMO more of a master suppression technique: even if we accept it as true, it doesn't mean the opposite is true (i.e. that everything marketed as "defence in depth" is a useless security product).
Given that just about everyone here recognizes the bad faith shown by ICANN on this issue, it seems to be an act of desperation on their part. Why? They've invested a lot into DNSSEC and are emotionally wrapped up in it, despite better alternatives. And maybe more importantly, if you consider DNS an asset, then its value rises by stuffing more "value" into it such as key escrow. Therefore DNS's principal holders (ICANN, TLDs) would see their asset depreciate if DNSSEC fizzles and dies.
For it to really help though, DV using DNS needs to stop working. That's going to take a while, but I guess the same goes for convincing all Registrars to use DNSSEC. In that sense, it seems like IANA should be pushing for RDAP.
Certificate Transparency logs are kind of orthogonal to DNSSEC and DV though. The main value in Certificate logs is forcing CAs to act honestly.
CAs not getting accurate DNS records is not something the CAs can completely prevent, because they can't authenticate DNS records. Though certainly, they can do their best by using multi-perspective lookups and other measures.
I still think it's scary that registrars have ultimate power over issuance. They seem like the next weak link after CT logs are widespread enough to defend against rogue CAs. Luckily, it is harder for a registrar you don't use to claim your domain, so it is easier to defend by choosing a registrar you trust.
If an attacker removes DNSSEC at the same time as making name server changes there is no way to detect a difference from a legitimate change.
For now most of that discussion is the two dominating methods to get encryption and authentication between the user and recursive resolver, but there are plenty of good reasons why we want the same standard of security between recursive resolver and authoritative resolver.
Telnet was replaced by ssh, but they're both connection-oriented protocols that create a session between two endpoints. The same is true of the transition from X to X-over-TLS for the various values of X.
DNS isn't really a connection-oriented protocol to begin with and DNSSEC isn't really a protocol at all, it's a collection of DNS records for use in signature verification. Moreover, it's not encryption, only signatures. It doesn't provide privacy protection. The DNS equivalent to the transition to ssh or TLS would be something like DNSCurve, which is a transport protocol and does encrypt the data.
DNS over TLS/HTTPS typically only secures the link between the client and the resolver, and does so at the complexity cost of bringing in all of TLS and its problematic CA system, and HTTP on top of that for DoH. And the proposals to make it work to the authoritative nameservers have it using DANE (i.e. DNSSEC).
With DNSCurve, when you have a delegation from one authoritative server to another, the parent domain provides the name of the child domain's nameserver, which encodes its public key. Then you use the public key to communicate with the child nameserver. The key distribution is part of the protocol.
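A toy sketch of that key-in-the-name idea, loosely modeled on DNSCurve's "uz5..." labels. The vowel-free alphabet, the 52-character length, and the framing here are simplifying assumptions for illustration, not the exact DNSCurve wire encoding:

```python
# Toy illustration of embedding a server's 32-byte public key in its DNS
# nameserver label, loosely modeled on DNSCurve's "uz5" convention.
# Alphabet and framing are assumptions, not the real DNSCurve encoding.

ALPHABET = "0123456789bcdfghjklmnpqrstuvwxyz"  # vowel-free base32 (assumed)

def encode_key(key: bytes) -> str:
    """Pack a 32-byte public key into a DNS label, 5 bits per character."""
    bits = int.from_bytes(key, "big")
    chars = []
    for _ in range(52):              # 52 * 5 = 260 bits >= 256
        chars.append(ALPHABET[bits & 31])
        bits >>= 5
    return "uz5" + "".join(chars)

def decode_key(label: str) -> bytes:
    """Recover the 32-byte public key from the label."""
    assert label.startswith("uz5")
    bits = 0
    for ch in reversed(label[3:]):   # rebuild from most-significant chunk down
        bits = (bits << 5) | ALPHABET.index(ch)
    return bits.to_bytes(32, "big")

key = bytes(range(32))
label = encode_key(key)
assert decode_key(label) == key      # the key round-trips through the name
```

The point is that once the parent hands you the child's NS name, you already hold everything needed to talk to the child securely; no separate key-lookup record is required.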
To use TLS, you would either have to rely on the CA system with all its problems (any malicious CA can forge any name and many are operated by potential adversaries), or DANE, and then we're creating a dependency on DNSSEC rather than replacing it. Worse, you have to choose one or the other for everyone or else support both as options and have the worst of both worlds in combined complexity and lowest common denominator security.
The overhead of DoT/DoH is also worse in the recursive case. Between the client and the recursive DNS server, creating a TLS session is expensive but at least then you can keep it active and use it for all your queries. In the recursive case the queries go to/from many different authoritative servers, each with low probability of reusing the connection such that keeping it active long is counterproductive. So you're moving from a single UDP request and reply to a full TCP handshake + TLS handshake for as little as a single query. This adds both computation and latency.
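A back-of-envelope version of that latency argument; the RTT figure and the assumption of no connection reuse are illustrative, and TLS 1.3 session resumption would shave a round trip when it applies:

```python
# Rough round-trip comparison for a single uncached query to an
# authoritative server, assuming no connection reuse. Figures are
# illustrative assumptions, not measurements.

rtt_ms = 30  # assumed round-trip time to the authoritative server

udp_query = 1 * rtt_ms                # one request, one response
tcp_tls_query = (1 + 1 + 1) * rtt_ms  # TCP handshake + TLS 1.3 handshake + query

print(udp_query, tcp_tls_query)       # 3x the latency before any crypto cost
```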
Moreover, the one redeeming quality of DNS over TLS/HTTPS is avoiding interference by middleboxes, which exists almost entirely in the context of client devices on restrictive networks rather than the links between recursive and authoritative DNS servers.
An early argument against HTTPS was that you could not use web proxies any more. Today we don't normally talk about that, in part because the caching was moved away from the middle and into the server end.
The discussion around how useful DNSSEC is misses the point that what history has shown is that authentication without encryption is like encryption without authentication. We need both, and preferably we needed them yesterday.
DNS (over UDP) is a strictly connectionless protocol. There is no state, there are no transport guarantees; it's total fire-and-forget.
Your reasoning is of the form, "We need something. This is something. Therefore we need this."
No, actually we don't. We don't need this unless we can demonstrate that it makes things better in the real world, with real problems that people really encounter.
The problem isn't simply that it fails to protect against DNS account hijacking. The problem is that it makes the common problem of DNS account hijacking even worse than it is now.
Yes, it also solves another problem. But now we have to look at it as a tradeoff between two problems, decide which is worse, and decide accordingly. Every security professional and network admin that I trust finds the tradeoff obvious, and not in DNSSEC's favor.
Pretty much, though I would rephrase it to:
X seems useful, Y is the only realistic way to get X soon, so Y has a valid use case.
Notably, 'tptacek mentioned RDAP, which might make "Y is the only realistic way to get X soon" wrong.
You can't. DNSSEC has explicit protections against that.
This doesn’t fix the problem, of course, as access can still be gained to the master admin account if its credentials have been compromised, which is true of nearly every service on the internet. DNS is no different, except of course that it’s critical to serving traffic to a site.
I agree with the GP that ICANN is being dishonest in encouraging DNSSEC as a response to hijacked accounts. What they should be pushing for is better security practices by registrars, like 2FA, and more granular authorization options that limit an account's exposure, separating routine updates from less common, more critical ones.
dnscurve and dnscrypt are in the same space as DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT). What they do is authenticate the DNS server (resolver) and also encrypt the channel, giving you privacy.
DNSSEC == authenticated records.
dnscurve, dnscrypt, DoH, and DoT == authenticated and encrypted resolutions.
But notice that authenticated resolutions get you authenticated records provided you do them all the way to the root. The root servers on down could support DNSCurve or similar and you could operate recursively using it, or have your trusted recursive resolver do so as you in turn reach it using the same mechanism.
Meanwhile that wouldn't break the cases where you want your DNS server to modify records, e.g. for blocking tracking domains.
The recursive resolver could then provide a different response to the client, but the client could use DNSCurve to authenticate its DNS server as well, so the client only gets malicious responses if it configures a malicious DNS server.
And that level of compromise would break anything, DNSSEC included. What happens to DNSSEC if the attacker replaces the client's DNSSEC root keys with the attacker's own?
With DNSSEC the DNS server also has to advertise different keys, which is something you can notice in your resolver and alert on, for example if you already know the key is different via a side channel, or if you ask another resolver and it got a different key.
DNSSEC makes such manipulation obvious and also prevents in-transit tampering. Coupled with DoH/T you get all the benefits of both.
Sure there is. You do the recursion yourself and compare the result. If you do this 100% of the time, i.e. run your own recursive resolver, there is no need to even have a third party recursive DNS server that could potentially send malicious responses. If you do it a lower percentage of the time at random, or for specific queries you're expecting interference with, you detect the interference and know not to trust that DNS server anymore.
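A minimal sketch of that spot-checking idea. The two lookup functions are stand-ins (assumptions) for real resolution code; only the sampling logic is the point:

```python
import random

# Sketch: spot-check a third-party resolver against your own recursion
# for a random fraction of queries. resolve_via_server and
# resolve_recursively are stubs standing in for real lookup code.

def resolve_via_server(name: str) -> str:
    return "203.0.113.7"      # answer from the configured DNS server (stub)

def resolve_recursively(name: str) -> str:
    return "203.0.113.7"      # answer from recursing from the root yourself (stub)

def spot_check(name: str, check_rate: float = 0.1) -> str:
    answer = resolve_via_server(name)
    if random.random() < check_rate:          # verify a random sample of queries
        if answer != resolve_recursively(name):
            raise RuntimeError(f"resolver returned a forged answer for {name}")
    return answer

assert spot_check("example.com", check_rate=1.0) == "203.0.113.7"
```

With `check_rate=1.0` you are simply running your own recursive resolver; any lower rate trades coverage for load while still making sustained interference detectable.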
Comparing keys in DNSSEC is no better than this. If a MITM attacker has altered your root keys, the attacker can forge responses from any DNS server you use, unless you're using something that actually authenticates the recursive resolver (whether DoT/DoH or DNSCurve), at which point you can compare the query responses again and no longer need to compare the keys or use DNSSEC at all.
> DNSSEC makes such manipulation obvious and also prevents in-transit tampering.
Which is one of its flaws. Sometimes you want friendly in-transit tampering. With DNSSEC there is a fixed root of trust that everybody has to follow and nobody can choose for themselves or alter for a specific subset of the tree.
I was talking about during recursion. Plus this is what I mentioned in my previous comment. This is about someone injecting responses on the network level and blocking proper communication or even having taken over an IP via BGP hijack, in which case the NS record might point to the correct IP but a malicious DNS server would be responding.
DNSSEC would detect this failure mode.
>Sometimes you want friendly in-transit tampering
In which case you likely have your own PKI to manage HTTPS MitM as well, so it shouldn't be an issue to run DNSSEC.
So would DNSCurve.
If the root servers were using DNSCurve then you would distribute their public keys in the same way that you distribute their IP addresses, over some secure side channel such as your operating system's software update mechanism.
Once you have the public keys of the root servers, no one can impersonate the root servers to you without their private keys. The root servers can then securely give you the names of the gTLD nameservers. If they use DNSCurve, their public keys are encoded in the names of the nameservers and you can communicate securely with them as well, and so on down the chain.
Hijacking BGP doesn't allow you to impersonate any of the authoritative servers that have a chain up to the root using DNSCurve without having their private keys. You can also pin the public keys of any nameservers that don't have a chain up to the root for the same benefit for that subtree, e.g. if some gTLDs used DNSCurve but the root didn't, or to use between your clients and your company's authoritative DNS servers for your own domain.
> In which case you likely have your own PKI to manage HTTPS MitM as well, so it shouldn't be an issue to run DNSSEC.
This is regularly not the case. For example, sysadmins regularly map the most common tracking domains to a black hole in the resolver on the local network or local machine without doing any kind of HTTPS MitM.
This behaviour doesn't break under DNSSEC; the blackholed domain either blackholes or the resolver fails due to the bad signature and effectively blackholes as well.
>If the root servers were using DNSCurve then you would distribute their public keys in the same way that you distribute their IP addresses, over some secure side channel such as your operating system's software update mechanism. [...]
And what does that gain you over DNSSEC+DoT/H other than being non-standard?
If you give an address that always sends back ICMP unreachable messages, the client fails immediately and continues to load the rest of the page, but DNSSEC proscribes that.
You could refuse to send a response for those names, but then the client has to wait for the timeout. You could send a response with an invalid signature, but then the client should discard it and continue to wait for a valid response, and you still have to wait for the timeout.
There are also circumstances where you want to return a specific address, e.g. so that attempts to send traffic to that address can be monitored, and redirecting the address doesn't work because it could be a shared host using SNI and you don't want to affect every name pointing to that address.
> And what does that gain you over DNSSEC+DoT/H other than being non-standard?
It's simpler, more efficient, lower latency, has less attack surface and can't be used as a DDoS amplification vector.
And if you want something standard, why not standardize on DNSCurve? Using DNSSEC over that is only the sunk cost fallacy.
How would a TLS-using protocol be used as a DDoS amplification vector?
>If you give an address that always sends back ICMP unreachable messages, the client fails immediately and continues to load the rest of the page, but DNSSEC proscribes that.
You can return resolver errors or no data; there are plenty of options for DNS blackholing.
>There are also circumstances where you want to return a specific address, e.g. so that attempts to send traffic to that address can be monitored, and redirecting the address doesn't work because it could be a shared host using SNI and you don't want to affect every name pointing to that address.
You'll need a PKI for that anyway so you can deploy DNSSEC root keys.
Turning off UDP DNS would break everything that still uses it but adding the DNSSEC records makes the UDP responses a DDoS amplification vector even if some clients use TLS.
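A rough estimate of that amplification factor; all the sizes below are illustrative assumptions, not measurements:

```python
# Rough DDoS amplification-factor estimate for DNSSEC over UDP.
# A small spoofed query elicits a much larger signed response, which is
# what makes open resolvers attractive reflectors. Sizes are assumed.

query_bytes = 60        # small spoofed query with a forged source address
plain_reply = 512       # classic UDP DNS response size limit
dnssec_reply = 3000     # response carrying DNSKEY/RRSIG records (assumed)

print(plain_reply / query_bytes)   # amplification without DNSSEC
print(dnssec_reply / query_bytes)  # amplification with DNSSEC records
```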
> You can return resolver errors or no data, there is plenty of options for DNS blackholing.
DNSSEC validates NODATA responses and clients can respond to resolver errors in various ways that amount to more timeouts.
> You'll need a PKI for that anyway so you can deploy DNSSEC root keys.
That's assuming you want to MitM the connection rather than merely log the attempt before dropping it or record it for traffic analysis even if you can't MitM it.
In the DNSSEC case, yes, that's akin to the same thing.
In the other, PGP is both encrypting and authenticating the data, then sending that over untrusted email servers. DoH, DoT, et al, are encrypting the channel, and authenticating the endpoints, but not authenticating the data.
† (spoiler: Apple, Google, and Mozilla dabbled with it and then rescinded their support; Mozilla and Google have both stated, Google more formally than Mozilla, that DANE isn't happening)
Yes, there's CT to help catch dishonest CAs, but a similar regime could be applied to registrars directly to force transparency around changes to TLSA records.
Further: a mis-issuing CA can be put to death (as happened to the largest CA when Google caught them mis-issuing). You can't revoke a TLD.
Meanwhile, CT actually exists today and is meaningfully combating misissuance, and obviously does not rely on DNSSEC to function.
This is false. That article makes no such claims.
As magila stated, registrars are already a trust anchor for domain validated certificates. Trusting a certificate directly via DANE vs through a domain-validated certificate doesn't change that. It does, however, cut CAs out of the process, which reduces the number of trust anchors.
When your argument starts depending on redefinitions of well-established concepts, that's a sign that you should reconsider it.
Everyone who uses .com must trust the authority for .com, because that registry's name servers are what delegate control of all domain names in the TLD. Every time you resolve a domain name like "example.com", you are relying on the name servers for ".com" to serve correct information.
If the registry was malicious (or if an attacker compromised it), then the registry could easily seize control of any domain name -- it could rescind the delegation and publish whatever DNS records it chooses, including the DNS records necessary to obtain a domain-validated certificate.
It is correct to say that we "trust" TLD authorities because they can do these malicious things, but they commit to behave in a certain way. We trust TLD authorities in the same way that we trust certificate authorities only to issue certificates following a certain process. Both of these actors are capable of fraudulently obtaining certificates: the CA can mis-issue a certificate directly, whereas the TLD can seize a domain's DNS and trivially procure a domain validated certificate (from a legitimate CA).
This is what I believe Ajedi32 means when referring to "trust anchor" (using the term informally). DNSSEC is not adding new trusted entities, but is rather adding PKI around an existing trust relationship: a child DNS zone (e.g. example.com) always needs to trust its parent zone (e.g. the .com TLD). In the PKI sense, I agree that it would be adding new trust anchors, but those trust anchors would simply model preexisting trust relationships that don't use PKI.
On that note, we also need to trust the Internet root name servers that point to the name servers for the TLDs.
If you want to use an idiosyncratic definition of a term, that's fine --- I won't, but I can at least follow the argument. But what you can't do is say "that citation you provided does not say what you said it does" when it clearly does using the mainstream definition of the term. It's especially fallacious to pull out this semantic argument about a reliable source that generates a surprising conclusion about trust anchors!
DNSSEC adds trust anchors to the Web PKI.
Obviously, I disagree with the argument that we "need" DNSSEC to securely issue certificates, but the rest of this thread adequately captures my rebuttals to that argument.
Besides that, you were just plain wrong, twice (you doubled down when I showed the quote that refuted your claim). It's fine to be wrong; I'm wrong all the time! But stubbornly refusing to acknowledge when you're plainly incorrect to the extent where you redefine words is not a good look.
The central point of your comment is still completely wrong though. DANE does _not_ add new trusted actors, and has no significant negative security impact.
How would DNSSEC have made it take longer?
DNSSEC makes many things worse, but I don't think this is one.
> Woodcock said PCH’s reliance on DNSSEC almost completely blocked that attack, but that it managed to snare email credentials for two employees who were traveling at the time. Those employees’ mobile devices were downloading company email via hotel wireless networks that — as a prerequisite for using the wireless service — forced their devices to use the hotel’s DNS servers, not PCH’s DNSSEC-enabled systems.
> “The two people who did get popped, both were traveling and were on their iPhones, and they had to traverse through captive portals during the hijack period,” Woodcock said. “They had to switch off our name servers to use the captive portal, and during that time the mail clients on their phones checked for new email. Aside from that, DNSSEC saved us from being really, thoroughly owned.”
Obviously DNSSEC won't save you if your account with your domain registrar gets compromised, but there are other situations where it _is_ effective.
If you want to push back against this, find somebody near Kobe, Japan in the next few weeks who can go to this open session and explain the problems, and provide some alternative solutions.
I agree with rocqua that the Registrar needs to have a more active role in securing DNS. I think we need new standards of control between domain owners, registrars, DNS TLDs, and certificate authorities.
I, as a domain owner, should be able to provide signed changes to the Registrar, and to entities further down the chain, so that just because someone gets access to my account on a service provider, they can not change anything without my private key. Delegation and revocation of other keys can also be handled for large organizations (we do that today with DNS).
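A minimal sketch of what owner-signed change requests could look like. This is a hypothetical protocol, not anything registrars offer today; a real design would use public-key signatures (e.g. Ed25519) so the registrar never holds the owner's signing key, but HMAC with a shared secret keeps the sketch inside the standard library:

```python
import hashlib
import hmac
import json

# Hypothetical: the registrar applies a change only if it carries a valid
# signature under the domain owner's key. HMAC stands in for real
# public-key signatures purely to keep this sketch stdlib-only.

OWNER_KEY = b"owner-held secret"   # hypothetical key provisioned out of band

def sign_change(change: dict, key: bytes) -> str:
    """Owner side: sign a canonicalized change request."""
    payload = json.dumps(change, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def registrar_apply(change: dict, signature: str, key: bytes) -> bool:
    """Registrar side: apply the change only if the signature verifies."""
    payload = json.dumps(change, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

change = {"domain": "example.com", "ns": ["ns1.example.net"]}
sig = sign_change(change, OWNER_KEY)
assert registrar_apply(change, sig, OWNER_KEY)          # legitimate change
assert not registrar_apply(change, "forged", OWNER_KEY) # account hijacker, no key
```

The payoff is exactly the scenario above: an attacker who only has the web account password cannot produce a valid signature, so nameserver changes are rejected.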
There are registrars that do take a more active role in securing DNS, but they usually charge a bit more. A surprising number of companies and even government departments will compare two DNS registrars exclusively on price: if one costs $10 annually and the other costs $29, they will pick the $10 one. What they might not be aware of is that $9 of the price is the registry fee, so what they are really comparing is the service you get on a $1 margin versus a $20 one.
Let's kill it off and focus on efforts that solve real problems. It's worse than pointless: the added complexity is a huge liability.
Certbot, which I have setup to automatically renew my wildcard Let's Encrypt certificates, has access to my master API key on DO. I try to follow the best practices in keeping it safe, but I'd prefer if that specific API key would only have the required privileges to modify a set of specifically named TXT records and nothing more.
Use RFC2136 like civilized folk, I guess. ¯\_(ツ)_/¯
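For concreteness, a sketch of what that can look like: certbot's DNS hook sends a TSIG-signed dynamic update (RFC 2136) instead of holding a provider-wide API key. The server, zone, and key names below are placeholders:

```
# Hypothetical RFC 2136 update for the ACME dns-01 challenge.
# "acme-update.key" is a TSIG key file; names are placeholders.

$ nsupdate -k acme-update.key <<EOF
server ns1.example.com
zone example.com
update add _acme-challenge.example.com. 300 TXT "token-value"
send
EOF
```

The scoping then lives on the server side: BIND's `update-policy` can grant that TSIG key rights over only the `_acme-challenge` name and only TXT records, which is exactly the least-privilege setup the DigitalOcean API key can't express.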
It helps against BGP attacks. https://www.blackhat.com/docs/us-15/materials/us-15-Gavriche...
DNSSEC + a CAA record with the validationmethods parameter set to DNS would prevent that attack.
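Such a record might look like the following zone-file fragment; the CA name is illustrative, and the `validationmethods` parameter (standardized in RFC 8657) is only honored by CAs that have implemented it:

```
; Restrict issuance to one CA, and that CA to the dns-01 method only.
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
```

With DNSSEC signing the zone, an attacker who can only win a BGP-level race against HTTP validation has no permitted validation path left.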
Like what? It's pointless to dismiss constructive proposals without any supporting material on what the problems with the proposed solution are, claim there are other things that need our attention, and then not even pay lip service to them.
I've only seen this sentiment every now and then in very particular circles. It's not really that famous.
Here's a fun exercise: You could just say what you need to say to make a point and educate the people reading this thread. Otherwise your comment is kind of detracting from constructive conversation.
That is a damning argument, and there is nothing else to say until someone (e.g. you, but don't feel pressured) comes forward with a good counterexample.
That's not a very good argument when you're talking about securing systems. We don't mitigate attacks only once known instances become prevalent; you do so beforehand. Attack vectors are considered before they are ever actively used. The most relevant, recent, and well-known example of this is Heartbleed.
Here is the exact change if you're able to read the code: https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h... -- if reading that change is not within your skillset, the patch is essentially adding a bounds check on a region of memory. The language in use is notorious for allowing this type of coding error and is the source of many exploits and bugs. It has nothing to do with the specification.
For some reading material see this wikipedia article on the subject: https://en.wikipedia.org/wiki/Buffer_over-read
>Programming languages commonly associated with buffer over-reads include C and C++, which provide no built-in protection against using pointers to access data in any part of virtual memory, and which do not automatically check that reading data from a block of memory is safe; respective examples are attempting to read more elements than contained in an array, or failing to append a trailing terminator to a null-terminated string. Bounds checking can prevent buffer over-reads
We mitigate against attack vectors before there are known instances of software using those attack vectors. I'm not sure why you want to argue against that, or even how because we do it all the time. Take Spectre/Meltdown as another example. Preemptive measures were taken before any known usage of that attack vector had been discovered.
Sure, the actual bug is a coding 101 bug, but the social process that caused a coding 101 bug to potentially compromise the vast majority of server installations is why Heartbleed is important. One of the lessons is that you need more reason to implement a feature than "it's a feature that can be implemented."
I don't think I am. You're focusing on one particular thing and invoking platitudes that are completely irrelevant. You're actively attempting to change the conversation topic to fit your viewpoint. That's not okay. No one is talking about the philosophy of feature development and adoption. I gave another example of where we preemptively mitigate issues that has nothing to do with philosophical waxing on "did we really need to do this" or "nobody really wanted that feature": Spectre/Meltdown.
So two things, to get this conversation back on track:
1) The idea that DNSSEC is worthless/useless/bad is not "famous" by any reasonable use of that term. I explicitly stated it was only something you see in particular circles. The fact that the most popular piece of software used to manage DNS enables DNSSEC by default, and that one of the biggest third-party public DNS providers has it opt-out only, is a testament to this. It's myopic to think that other people are aware of these things when the entire ecosystem is signaling quite the opposite.
2) DNSSEC might be useless, but in abstract terms arguing against systems designed to mitigate attacks simply because those attacks haven't been seen in the wild is a bankrupt idea in the world of security. So if you want to say DNSSEC is a bad idea, then stick with that, but don't say it's a bad idea because we haven't seen an actual attack of the kind DNSSEC is trying to prevent. The claim, again, is bankrupt when you're talking about securing systems.
>but the social process that caused a coding 101 bug to potentially compromise the vast majority of server installations is why Heartbleed is important.
I actually want to speak to this, because it's kind of a popular thing to say in the world of software development, but it fundamentally misunderstands what's actually happening.
You're right that a social process allowed the coding 101 bug to happen and exist for so long, but you're wrong about which social process. People's motivations for a feature have no bearing on how exploitable code gets written, because features that people do want fall victim to the same kinds of coding errors. Once you falsify your premise about which social process is in play, you see it becomes irrelevant: you can completely take people's motivations out of the equation, and the same process and class of error that allows the exploits and bugs to happen still exists.
I am not saying that an increase in LOC does not correlate with an increase in bugs. That's a fairly standard fact. I am saying that people's motivations for features have no bearing on how open source projects fall victim to shoddy code.
The social process that IS to blame is the fact that people don't invest in open source software, yet depend on it for their entire ecosystem. A student wrote some code, submitted it, and it was reviewed by one person. That code was never really looked at again, not really tested, but allowed to exist in the software for years. If the social ecosystem around open source software were one where more eyes had been laid on the code, it might not have happened at all, and relying on a student to write the code is a travesty in its own right.
But people's motivations for writing code? Not really relevant in terms of LOC, exploits, and code quality.
You're getting really hung up on "famously" which doesn't even literally mean "is famous". If that's your #1 point, you might as well drop it and move on to something else.
Either way, calling commenters on HN "trolls" violates the HN civility rules, and you need to stop doing that. You can read more about that in the Guidelines, linked at the bottom of this page.
I said as much in the comment you're replying to: the software that basically everyone uses for DNS management. Google DNS supports it by default, requiring you to opt out. I gave you an example; it would be greatly appreciated if you would provide a substantive, informative comment about the problems with DNSSEC, or at the very least point to material authored by other people (blogs/articles/etc.) who are famous security experts and do not support DNSSEC; you seem to know who to go to.
Google does not generally support DNSSEC. It's easy to find Google security engineers criticizing it, but the most notable example would be AGL's "Why Not DANE In Browsers", which discusses why Chrome stopped supporting DNSSEC.
Actually, if you read exactly what I wrote, I merely countered that it's not a famous sentiment, which the comment I was replying to claimed it was. I said it was a sentiment seen, at least from my perspective, only in particular circles and really only in passing.
I would consider the fact that Google DNS and named support it by default to be a significant counter to the whole "everyone knows this isn't really a good tool/specification" sentiment. And it's an excellent negation of "it's easy to find Google engineers criticizing it" when their own service, and the company they work or did work for, uses it by default in its DNS service. Indeed, while I thank you for actually linking something I and others can read, the fact that some engineer in the ether (not to impugn his work, credibility, or how well he is known in certain circles) is writing on the subject doesn't carry the same publicity as services and de facto software enabling DNSSEC by default.
Here's the important point: because of the above, to the passing eye, or to mere consumers of these tools and services, it looks like a specification that would probably be a good idea, and a credible one at that. There is nothing "famous" about it falling short, or about it being a solution in search of a problem, when everyone who really matters in this space is using it.
Surely you must see that. Again thanks for the reading material, it is very much appreciated.
Things become slightly more complicated when you have to deal with broken home routers. But that code exists as well.
What you meant to say was that DNSSEC works "perfectly fine for securing requests from end-users" so long as they run their own DNSSEC cache servers. But of course, nobody is going to do that. They'll run DoH/DoTLS instead.
Maybe you want to take a look at, for example, https://getdnsapi.net/
Please stop spreading misinformation.
Another example: OpenSSH can locally do DNSSEC validation of SSHFP records (using the ldns library). Unfortunately, it doesn't really work, but that is a different story. All of the validation code is there.
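For reference, the client-side switch is OpenSSH's `VerifyHostKeyDNS` option; with a validating resolver in place, something like this in `~/.ssh/config` asks ssh to look up SSHFP records for host key verification (set it to `ask` to be prompted instead of trusting them outright):

```
# ~/.ssh/config
Host *
    VerifyHostKeyDNS yes
```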
A search term you might find helpful: [DNS header AD bit].
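For anyone following that search: the AD ("Authenticated Data") flag lives in the 16-bit flags field of the DNS header (bit mask 0x0020, per RFC 4035), and a validating resolver sets it on responses it has verified. A minimal stdlib-only sketch of reading it from a raw DNS message:

```python
import struct

# The AD (Authenticated Data) flag is bit 5 of the 16-bit flags field
# in the 12-byte DNS header (RFC 4035): mask 0x0020.
AD_MASK = 0x0020

def ad_bit_set(dns_message: bytes) -> bool:
    """Return True if the AD flag is set in a raw DNS message."""
    if len(dns_message) < 12:
        raise ValueError("truncated DNS header")
    _msg_id, flags = struct.unpack("!HH", dns_message[:4])
    return bool(flags & AD_MASK)

# Illustrative header: QR=1, RD=1, RA=1, AD=1 -> flags = 0x81A0
hdr = struct.pack("!HHHHHH", 0x1234, 0x81A0, 1, 1, 0, 0)
print(ad_bit_set(hdr))  # True
```

Note that the AD bit is only meaningful on the trusted path between you and the validating resolver; anything on that path can forge it, which is the whole "validate at the endpoint" argument.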
In addition, a validating endpoint resolver is no more a "DNS server" than a stub resolver is.
Not for nothing, but even the IETF doesn't want you to do this.
Whilst predominantly offered to registrars it is also available directly to registrants for the equivalent of $120 per year.
Assume a user has a domain name registered with their preferred registrar, e.g. Gandi, GoDaddy, etc. To change the DNS settings, the user has to log in to their Nominet portal (using mandatory 2SV) and unlock the DNS for up to 20 minutes (it can be manually re-locked earlier). The user then has to log in to their registrar (preferably with 2SV set up) and make the relevant changes.
The additional steps involved are sufficient to prevent almost all unauthorized DNS changes except DNS poisoning.
Nominet say that their separate "DNSSEC signing service was withdrawn from service in January 2016 due to low uptake".
Also, dnssec for their real website, icann.org, is properly configured.
Unbound 1.6.0, BIND 9.9.5, and Google Public DNS all set the DO bit in their queries even when there's no DS record in the parent zone, so the DNSSEC records will indeed be included in responses.
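For anyone unfamiliar with the mechanics: the DO ("DNSSEC OK") bit isn't in the DNS header at all; it's the top bit of the 32-bit TTL field of the EDNS0 OPT pseudo-record appended to the additional section (RFC 6891). A stdlib-only sketch of building such an OPT record (the payload size is an illustrative default):

```python
import struct

DO_BIT = 0x8000_0000  # top bit of the OPT record's 32-bit TTL field (RFC 6891)

def build_opt_record(udp_payload_size: int = 4096, do: bool = True) -> bytes:
    """Build an EDNS0 OPT pseudo-record for a query's additional section.

    Wire layout: NAME (root, 1 byte), TYPE=41, CLASS=requestor's UDP
    payload size, TTL=extended RCODE and flags (DO lives here), RDLEN=0.
    """
    name = b"\x00"                # root domain name
    rtype = 41                    # OPT
    ttl = DO_BIT if do else 0
    rdlen = 0
    return name + struct.pack("!HHIH", rtype, udp_payload_size, ttl, rdlen)

opt = build_opt_record()
# Parse the TTL field back out to confirm the DO bit is set
ttl = struct.unpack("!I", opt[5:9])[0]
print(bool(ttl & DO_BIT))  # True
```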
And that's not even addressing the amplification risk.
> Also, dnssec for their real website, icann.org, is properly configured.
Are you disputing that ICANN owns and operates icann.com?
I feel like most people who want DNSSEC just have very theoretical warm feelings about the idea of it. When they actually implement it it does nothing but consume time and energy while adding liability.
issue "letsencrypt.org; validationmethods=dns-01"
I would go so far as to say that not doing this for a high value site will be irresponsible once this CAA feature becomes stable.
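In zone-file syntax, the policy quoted above would look something like the following (the domain and the iodef contact are illustrative; the `validationmethods` parameter, standardized in RFC 8657, restricts issuance to a specific ACME challenge):

```
; Restrict issuance to Let's Encrypt via the dns-01 challenge only
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
; Optional: where CAs should report policy violations
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```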
In theory, the Baseline Requirements require CAs to honor DNSSEC when checking CAA records. However, in practice, there are too many terrible DNS servers, and looking up a CAA record can often fail, for example because the authoritative server just drops any packet containing a CAA query. So CAs are allowed to treat a CAA lookup failure as permission to issue if the domain is not DNSSEC-signed. However, since no off-the-shelf resolver provides an API to check if a domain is DNSSEC-signed, CAs had to roll their own solutions, most of which were wrong and insecure (probably in no small part due to DNSSEC's monstrous complexity). An attacker who can BGP hijack your authoritative DNS servers would be able to get a certificate from one of these CAs.
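The Baseline Requirements logic described above can be sketched as a small decision function (names and structure are illustrative, not any CA's actual code). This is exactly the fail-open path an attacker exploits against unsigned domains:

```python
def caa_permits_issuance(lookup_failed: bool,
                         dnssec_signed: bool,
                         caa_allows_this_ca: bool) -> bool:
    """Sketch of the CAA check described by the Baseline Requirements.

    - Lookup fails on a DNSSEC-signed domain: the CA must NOT issue,
      since the failure could be an attacker suppressing the records.
    - Lookup fails on an unsigned domain: the BRs permit the CA to
      treat the failure as permission to issue (fail-open).
    - Lookup succeeds: the CAA record set itself decides.
    """
    if lookup_failed:
        return not dnssec_signed
    return caa_allows_this_ca

print(caa_permits_issuance(lookup_failed=True, dnssec_signed=False,
                           caa_allows_this_ca=False))  # True: fails open
print(caa_permits_issuance(lookup_failed=True, dnssec_signed=True,
                           caa_allows_this_ca=False))  # False: fails closed
```

The hard part in practice is the third input: reliably determining "is this domain DNSSEC-signed?" is where the home-rolled CA implementations went wrong.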
I created https://caatestsuite.com/ and smoked out many of these bugs in 2017. Maybe the situation has improved since then, but given the difficulty in implementing this check correctly, I'm sure there's at least one CA out there that is doing it wrong, and it only takes one CA to misissue.
In the long run, I’d love to see something a little more like TLSA so that browsers can at least reject certificates from the wrong CA.
We can solve the DV misissuance problem directly without forklifting in a new DNS, and the forklift upgrade doesn't even decisively solve the primary problem it's meant to solve.
Why are we still considering DNSSEC?
DNSSEC is awful in many ways, but it can be deployed incrementally, it's already standardized, and a good chunk of the benefit can be had without changing normal clients at all. I think that "absolutely staggering enormous global expense" is an overstatement.
edit: A major benefit of DNSSEC is that the CA/B forum already requires that CAs use it to check CAA records.
I don't care that it's "standardized". The IETF has standardized a lot of bad stuff. I care that the benefits, such as they are, don't outweigh the costs --- those of a global key escrow system embedded in the DNS, those of the mass deployment of 1990s cryptography, and those of an enormous and expensive disruption to both our networks and, ultimately, our software, much of which will need to be updated to account for the untenable failure mode DNSSEC exhibits today.
Why do I care that CA/B forum requires DNSSEC (when it's enabled) for CAA records? Nobody uses DNSSEC.
Is it your opinion that none of the largest tech companies or financial institutions in the US have good security teams? Or, if, like me, you generally think money buys talent and companies like Google and Facebook and Apple (and, for that matter, Bank of America and Citigroup) have pretty amazing security teams, riddle me this: why have none of them enabled DNSSEC? Are they just too stupid?
I don't actually know why Google doesn't enable DNSSEC on its own properties for this exact reason. They may be relying on the fact that any serious CA will have "google.com" and similar domains in a list of high-risk domains.
(Heck, I can't easily personally monitor BGP for a website even if I wanted to.)
Each of these companies has a huge security team. Not one of them thinks DNSSEC is worth deploying. How do you explain that? Is virtually every infosec team in North America just wrong?
So fix that problem then. If CAs are not properly checking DNSSEC in violation of the BRs, that's the real problem here. It's not a reason to avoid DNSSEC.
Second, that website has to be a parody of DNSSEC. It's showing that none of the major tech companies use it, meanwhile the "good examples" of DNSSEC compliance are all vendors substantially involved in standardizing or selling DNS services.
1) Eclipse attack the client, meaning the client is on a partitioned network with an alternative chain tip. This grows more expensive the further in the past the records were written.
2) Steal the top-level domain owner's private key and update the database.
Is there a process for trademark dispute resolution? If not, is this generally considered a desirable thing by domain customers?
If the answer is yes, who holds the override keys?
In the case where keys are lost, there are 2 options
1) Let the domain expire after one year (there is a protocol rule requiring the domain to be periodically updated, and without the private key it is impossible to update), and then rebuy it.
2) Gain support from the community and fork the protocol such that the domain is reassigned to a different private key
There is a process for trademark dispute resolution; it's been going on for a while now. The Alexa Top 100k domains are reserved and can be claimed using a DNSSEC proof, so dot google on Handshake can be claimed by the owner of google dot com.
(Also, if you're going to let the military overrule decisions in your distributed permissionless blockchain, is there a point in having a distributed permissionless blockchain in the first place?)
This is the only reason I have had any interest in the HS project.
(Also kinda curious as to why pointing this out seems to be somewhat controversial? :)
What would be better is to require 2FA for all zone hosting companies.
Even with this, many registrars and DNS hosts are too fast and loose with disabling 2FA when you call them up and say you've lost your second factor, or with accepting changes over the phone without going through web authentication. My confidence in 2FA dipped quite a bit after going through the experience myself.
If you think you're seeing abuse, please email us with links (firstname.lastname@example.org). That's in the rules too.
> On 15 February 2019, in response to reports of attacks against key parts of the DNS infrastructure...
Is this related to said attack?
Here's the link for the DNS attack by the Venezuelan government. It's worth a read.
> This particular type of attack, which targets the DNS, only works when DNSSEC is not in use.
That doesn’t match any description I’ve seen of these attacks where the attacker had the same access to the management infrastructure as legitimate users. Is there some other incident being discussed or is this just fundamentally incorrect?
Also, most FAANG companies do not have DNSSEC implemented at all (or correctly).
There are various ways to check this, but the easiest is to simply pull a whois.
There was a somewhat high profile theft a few years back where the owner published how they recovered their domain:
This is a fight for cert revenue: CAs vs ICANN.
Also, there can be standardized portal lookup/discovery methods (this RFC draft has already expired, but it's likely that the effort will continue: https://tools.ietf.org/html/draft-pfister-capport-pvd-00). This seems to work with IPv6 only, as it builds on ICMPv6 Router Advertisements, but "provisioning domains" could simply be a DHCP option too.
What happened to namecoin?
(Also the current third party link doesn’t seem to load (for me) ‘Resource Limit Exceeded’)