
Maybe the IETF could set an example by using TLS 1.3

(Just a suggestion, but hey, at least they use DNSSEC)



According to [1], IPv6 is only at about 33% penetration, and that is for a backwards-compatible, 25-year-old standard. In contrast, TLS 1.3 was supported by approx. 14% of websites in 2019 [2]. So compared to IPv6, TLS 1.3 is being adopted quite rapidly.

1. https://www.google.com/intl/en/ipv6/statistics.html

2. https://hostingtribunal.com/blog/ssl-stats/


TLS 1.3 was designed to be easy to deploy, and the deployment was tested before the standard was finalized.

IPv6 not so much.

Also, TLS 1.3 is an application protocol, so you only need the server and client to support it. You don't need an OS change for either server or client; although if you rely on TLS libraries shipped with the OS, you would. You also don't need network support, although if your network is particularly hostile, it could cause issues.
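Because it is purely an application-layer change, opting into TLS 1.3 can be as small as a library setting. A minimal sketch using Python's ssl module (requires a Python built against OpenSSL 1.1.1 or newer; the host in the comment is illustrative):

```python
import ssl

# Opting a client into TLS 1.3 is an application/library change only --
# no OS networking changes, no cooperation from intermediaries needed.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older

# Usage (illustrative): wrap any TCP socket with this context, e.g.
#   with socket.create_connection(("example.com", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#           print(tls.version())  # "TLSv1.3" if the server supports it
print(ctx.minimum_version)
```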


Because it requires only the ends to adopt, whereas IPv6 requires cooperation from e.g., my ISP.


They're some of the only people that use DNSSEC; DNSSEC is a dead letter.

It's useful to compare the uptake of TLS 1.3 (quite widespread, on track to be required for conformance within a few years, and achieved in just a couple of years) to that of DNSSEC (it's been decades).


That depends on the TLD. For the American TLDs (.com, .net, etc.) you're right: there's barely any uptake. Several other TLDs (.cz, .no, .nl, .se, for example) have over half of their registered domains using DNSSEC. That's far more uptake than TLS had before Let's Encrypt came along.

The problem is also with cloud providers. Amazon Route 53 only announced support a few months ago [1], even though the standard has existed since before Route 53 came into existence.

Perhaps it's time for browsers to consider DNSSEC when determining the security of a website, similar to how TLS and CORS have been added to protect web resources. Uptake would improve quickly if people couldn't afford to be lazy or lackluster about their DNS security.

[1]: https://aws.amazon.com/about-aws/whats-new/2020/12/announcin...


Browsers implemented DNSSEC years ago, and then struck their support. A lot of the things DNSSEC proponents believe it's good at don't actually hold up in the real world; for instance, you can't practically deploy DANE without creating a scheme where the DNS simply becomes another CA that has to be trusted alongside all the others, and one you can't revoke when misissuance occurs.

It's funny to watch as the "serious" efforts to get some semblance of DANE working all involve some variant of stapling to bypass the actual DNS. DNSSEC is a weird, clunky, 1990s PKI that has been trying desperately for decades to find some reason to exist, even if that reason has nothing to do with the DNS.

A thing to pay attention to with European DNSSEC adoption is that it tends to happen at the registrar, automatically, without customer opt-in. The registrar controls the customer zone keys. That's security theater.


The actual DNS server still needs to implement DNSSEC, the registrar won't do that by themselves. Still, as an end user, I don't care who implements it, all I know is that the records I received are intact and have not been tampered with by any intermediary parties. This makes it a lot easier to trust my DNS provider.

Yes, DNS becomes a CA in schemes such as DANE, but a CA that the domain administrator controls. I don't see any problem with that. We've been spoiled by Let's Encrypt by now, but free, easy TLS certificates and management for even small businesses are a very recent thing.

I much prefer the decentralised nature of DANE over the current CAs, even parties like Let's Encrypt. LE is still an American company, and the USA has proven to be anything but transparent and friendly to its allies when it comes to using its power over digital infrastructure for its own gain. I am 100% sure that if LE receives a national security letter instructing them to generate a certificate for a certain domain, they will comply, just like any CA would, even though that CA would collapse as soon as anyone found out. My bank's website security depends on nobody on the other side of the Atlantic getting any funky ideas.

The decentralised nature of DANE makes it a nice system because worst case scenario, some TLDs do not get signatures during the next key rollover. This would be immediately obvious to any observer, so actions like these cannot be done in secret.


The problem isn't simply that the DNS becomes a CA. It's that it can't replace existing CAs --- the browser still has to support the WebPKI. So what you have in effect is another CA, a 1001st, if you will, and one that the browsers can't revoke.


> where the DNS simply becomes another CA that has to be trusted alongside all the others

Isn't DNS already implicitly trusted?


As a user, one can bypass root servers and TLD servers, and caches, and go straight to the designated authoritative server(s) to get RR data. That is the most secure way to obtain DNS data, IMO. No recursion. Ideally the authoritative servers should support encryption of DNS data in transit, e.g., per packet encryption via DNSCurve.
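The kind of non-recursive, direct-to-authoritative query described above can be sketched in Python (hand-rolled packet; the transaction ID is illustrative, and the packet is built but not sent):

```python
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (QTYPE 1 = A record)."""
    # Flags word 0x0000 means RD=0: recursion is NOT requested --
    # we expect an authoritative answer, not a cached one.
    header = struct.pack("!6H", 0x1234, 0x0000, 1, 0, 0, 0)
    qname = b"".join(bytes([len(part)]) + part.encode("ascii")
                     for part in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!2H", qtype, 1)  # QTYPE, QCLASS=IN

query = build_query("example.com")
# Usage (illustrative): send straight to an authoritative server's
# address over UDP port 53, skipping root/TLD servers and caches:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(query, (authoritative_server_ip, 53))
```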

Control over RR data belongs to the person publishing it and she is free to distribute it however she likes, e.g., using ICANN DNS to list the IP address of her authoritative DNS server(s). If she chooses ICANN DNS, the ICANN-approved TLD registry and the entity that controls the ICANN DNS root servers have no control over the content of the RR data (cf. the domain name itself), e.g., they cannot declare an RR "false", "invalid", "revoked", etc. DNSSEC gives them this control.

DNSSEC as used in practice requires that the RR data be signed ("approved") by a third party, e.g., a TLD registry, whose own RR data must in turn be signed ("approved") by another third party, e.g., ICANN.

DNSSEC was designed for people who get their DNS data second hand, e.g., from a remote cache run by an ISP or some other third party, including "open resolvers" such as Google. That method carries some additional risk, e.g., RR data in the cache may be manipulated, as compared with retrieving the RR data directly, with no third party ISP/Google middleman, from its source: the authoritative server(s) listed by the person publishing her RR data. There are existing solutions for encrypting individual DNS packets (i.e., not streams, not TLS) travelling from the authoritative server to the user, such as DNSCurve. Alas, few authoritative servers are encrypting DNS packets.

The DNS data travelling between authoritative servers and third party DNS providers running caches is, generally, not secured. I never see any discussion of this online.


> As a user, one can bypass root servers and TLD servers, and caches, and go straight to the designated authoritative server(s) to get RR data.

In practice everyone finds the designated authoritative servers through the root servers and TLD servers. They're trusted already.


The addresses for those rarely change; some are unlikely to change in a lifetime. You are free to keep looking them up every day, but, with very few exceptions, they will stay the same year after year. Don't take my word for it: store the addresses for the major TLD servers and, with very few exceptions, they will keep working for decades. I know the addresses for a root server and a .com server from memory. They do not change.


Not in the same way. Again: you can't revoke the DNS.


Any CA that issues domain validated certs can be duped by a fraudulent DNS record. All it takes is one CA failing to promptly revoke and blacklist the offending domain and the attacker is off to the races.


What threats does DNSSEC defend against?

How does it change the "security" of a site I might visit?


DNS cache poisoning, bit flips, and other such attacks. It does nothing for your site's confidentiality or availability, but it ensures the validity of the DNS records.

Scummy DNS providers like commercial ISPs tend to hijack certain DNS queries for their own gain. With DNSSEC they cannot do so.

Furthermore, with the slow but soon irreversible switch over to DNS over HTTPS, with all the DNS centralisation it brings, knowing for sure that nobody tampered with DNS records is a necessity.

Additionally, DNS records are also used in technologies like ESNI (Encrypted SNI). If you are able to supply bad or stale ESNI data to another site's cache, that might cause slowdowns or even breakage when ESNI or its successor eventually rolls out. Alternative PKI solutions also store TLS public keys in DNS records, so those are essential to get right as well.


DNSSEC does not stop your scummy commercial ISP DNS server from lying to you. Your stub resolver trusts that scummy ISP DNS server to validate DNSSEC on its behalf.

People have a lot of funny ideas about what problems DNSSEC solves. This is demonstrably not one of them.


> DNS cache poisoning, bit flips, and other such attacks. It does nothing for your site's confidentiality or availability, but it ensures the validity of the DNS records.

Sure, but at what cost? It's hilariously easy to misconfigure, effectively removing the entire domain for anyone who decides to enforce DNSSEC validation, and DNSSEC gives you no choice in whom to trust (Verisign owns .com, I think; state governments tend to own a lot of the national TLDs, etc.)

If your threat is a poisoned DNS cache then... don't use a DNS cache?

> Scummy DNS providers like commercial ISPs tend to hijack certain DNS queries for their own gain. With DNSSEC they cannot do so.

The problem here is that as an end user, I want to resolve gmail.com to the correct endpoint. If my upstream DNS cache is giving me the wrong results, DNSSEC doesn't help me - NXDOMAIN'ing a valid DNS request leaves me in the same position as without DNSSEC. I still can't get my email!

In the face of a malicious upstream handing out the wrong DNS records, a far more sensible solution is to bypass that upstream, optionally encrypting the transport so that they can't fiddle with the results in flight. This actually gets me what I need: the correct DNS record for my request.
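One hedged sketch of such a bypass, assuming systemd-resolved (the resolver address here is an example; any DoT-capable resolver you choose to trust would do):

```
# /etc/systemd/resolved.conf (sketch) -- send queries over TLS to a
# resolver of your choosing instead of the ISP's cache.
[Resolve]
DNS=1.1.1.1#cloudflare-dns.com
DNSOverTLS=yes
```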

> Furthermore, with the slow but soon irreversible switch over to DNS over HTTPS, with all the DNS centralisation it brings, knowing for sure that nobody tampered with DNS records is a necessity.

> Additionally, DNS records are also used in technologies like ESNI (Encrypted SNI). If you are able to supply bad or stale ESNI data to another site's cache, that might cause slowdowns or even breakage when ESNI or its successor eventually rolls out. Alternative PKI solutions also store TLS public keys in DNS records, so those are essential to get right as well.

If you care strongly about the integrity of your DNS records, there are better solutions. DNS-over-TLS/HTTPS should in theory let you trust only the owner of the authoritative DNS server, if you can make a direct connection to it to ask questions. DNSSEC forces me to trust a whole load of intermediaries forever. I'm not sure why the latter is better.

Maybe it's better to stop designing things for which the security rests on being able to fully trust DNS? Gmail (and everything else etc.) seems to work quite well right now without depending on DNS being 100% trustworthy, mostly because if I do somehow get an evil-controlled DNS record back, my browser's going to start sounding alarm bells when the TLS cert doesn't work.


If you use third-party DNS, e.g., ISP, Google, NextDNS, etc., the third party provides you with access to a DNS cache. The cache is shared with (many, many) other users. If other users manipulate the data in the cache, or the DNS data in transit from authoritative servers to the cache is manipulated, DNSSEC can help to detect that. It is like downloading a file over an insecure network and then checking it against known checksums/hashes/fingerprints. Except with DNSSEC the owner of the file does not create the hashes; she delegates control over the signing process to third parties.
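The analogy can be made concrete with a toy sketch (Python hashlib; the record text and digest are illustrative, not real DNS wire data):

```python
import hashlib

# Toy version of the analogy: compare received data against a published
# digest. Real DNSSEC uses signatures (RRSIGs), not bare hashes, and the
# signing keys chain up through third parties (TLD registry, root).
published = hashlib.sha256(b"example.com. 300 IN A 192.0.2.1").hexdigest()

def intact(record: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(record).hexdigest() == expected_hex

print(intact(b"example.com. 300 IN A 192.0.2.1", published))    # True
print(intact(b"example.com. 300 IN A 198.51.100.9", published)) # False
```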


DNSSEC can only help you detect manipulation in zones that are signed. Very few popular zones sign.

Further, DNSSEC does nothing to protect data along the network path between you and 8.8.8.8 (or NextDNS or whatever). It collapses down to a single "trust me I checked" bit in the DNS header.
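That single bit can be shown concretely. A sketch in Python (hand-built 12-byte headers, illustrative transaction ID): the AD flag is set by the recursive resolver, and a stub resolver has no way to verify it.

```python
import struct

def ad_bit_set(dns_response: bytes) -> bool:
    """Return True if the Authenticated Data (AD) flag is set in a raw
    DNS response. The AD bit is set by the recursive resolver, not proven
    cryptographically to the stub -- the stub just has to trust it."""
    (flags,) = struct.unpack("!H", dns_response[2:4])
    return bool(flags & 0x0020)  # AD is bit 5 of the 16-bit flags word

# Two hand-built headers, identical except for the AD flag. A resolver
# that wants to lie can simply set this bit itself.
validated   = struct.pack("!6H", 0x1234, 0x8180 | 0x0020, 1, 1, 0, 0)
unvalidated = struct.pack("!6H", 0x1234, 0x8180, 1, 1, 0, 0)
print(ad_bit_set(validated), ad_bit_set(unvalidated))  # True False
```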

If you're worried about someone tampering with 8.8.8.8, a more reasonable approach is to run your own recursive resolver off-net and use DoH to query it.


Personally, I avoid recursive resolvers altogether. Much faster. I can gather the DNS data I need for the zone files myself. Not reliant on Google, NextDNS or the next third party DNS provider.


I can't tell if you're serious. Are you saying that you scrape authorities directly to, like, build a fake zone file for GOOGLE.COM or whatever, and then consult that? Say more about how this system works?


I am serious. I have DNS data that I save. I serve zone files over the loopback. This is simpler and faster than any cache. It works for me.
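The shape of such a setup might look like this (a sketch in BIND-style zone-file syntax, using RFC 5737 documentation addresses, not the commenter's actual data):

```
; Served by an authoritative server listening on 127.0.0.1; lookups
; for these names never leave the machine.
$TTL 86400
example.com.      IN SOA  localhost. hostmaster.localhost. (
                          2024010101 3600 900 604800 86400 )
example.com.      IN NS   localhost.
example.com.      IN A    192.0.2.1
www.example.com.  IN A    192.0.2.1
```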

Not that anyone except me should care, but this beats any recursive resolver in terms of speed, produces less DNS traffic over the network, can eliminate ads/tracking, increases "privacy"^1 and allows for resilience against other people's DNS problems. I have seen people complain they could not reach some website because of some DNS problem; meanwhile I had no problems, because I am not making DNS queries over the network every time I re-visit the site.

Why do this? The story is that many years ago I was running a copy of the root zone served over the loopback. Gradually I started adding A records to it for sites I used often. IIRC, .mil used to have some A RRs in their zone file that were for websites, not nameservers. This technique reduces the number of queries needed to resolve those names. Faster lookups. That might be where I got the idea; otherwise I was just experimenting. I got obsessed with faster lookups and fewer queries. Over the years my local DNS setup became more complex and I started running multiple authoritative servers over the loopback, but the technique is essentially the same: gather DNS data in bulk, save it and serve it. It works for me.

1. One property of this is that DNS lookups are not done at the same time the user accesses the resource, e.g., a website or whatever. Thus, FWIW, a network observer cannot make easy inferences about what resources a user is accessing, e.g., via a shared IP at a CDN, simply by looking at DNS queries.

For recreational web use, like reading HN and sites posted here, I use a text-only browser and a forward proxy that strips unnecessary headers and does not send SNI except when required. Obsessive minimalist. For serious, non-recreational web use I still have to use a bloated graphical browser and ISP DNS just like everyone else.

It is probably inviting DVs and negative, snarky comments to share this in this thread, but there you go.


Why are you advocating for DNSSEC? You don't even use the DNS.


I never advocate for DNSSEC. Around 2008 when cache poisoning and consequently DNSSEC gained renewed attention, I stopped using shared caches. I think the costs of DNSSEC outweigh the benefits.


I misread, thanks/sorry!


I want to use DNSSEC for my domains but either my registrar or my TLD doesn't support it. I'll move them to another registrar.


Mind you, DNSSEC is also far more complex than TLS. A misconfiguration of DNSSEC can make your entire domain unresolvable, so the risks around implementation are far higher.

I don't really think this should be a reason to NOT implement DNSSEC, but it is still a reason many companies and organisations see it as risky.


There are a lot of reasons not to deploy DNSSEC. That it's much more dangerous than TLS is just one of them. It would be better for the Internet if we scrapped DNSSEC and started from scratch with a service model that acknowledges what the Internet of 2020 (or, if we can't be that ambitious, 2005) actually looks like.


DNSSEC provides zero benefit to confidentiality. DNS over TLS and DNS over HTTPS both do. Additionally, the TLS CA system with Certificate Transparency and the HTTPS Everywhere addon is better without DANE.



