DNS servers that offer privacy and filtering (danielmiessler.com)
125 points by petercooper on Feb 6, 2019 | 89 comments



The best tip is in the sidebar on that page:

> 1.0.0.1 abbreviates to 1.1, so you can literally test by typing "ping 1.1"


You can also ping 16843009 and 16777217 -- perhaps less memorable, but if your dot key is broken…


IP addresses are fundamentally 32-bit numbers. The a.b.c.d format is just for human convenience and, as the tip in the sidebar shows, has its own shortcuts. Most operating systems also accept several other formats, such as plain decimal, hex, and octal:

https://www.abuseipdb.com/tools/ip-address-converter?ip=1.1....
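
For example, all of the following should work with a typical Unix ping (the inet_aton parser accepts plain decimal, hex, and octal; exact behavior varies by OS):

    ping 1.1                  # shorthand for 1.0.0.1
    ping 16777217             # decimal for 1.0.0.1
    ping 16843009             # decimal for 1.1.1.1
    ping 0x01010101           # hex for 1.1.1.1
    ping 0001.0001.0001.0001  # leading zeros make the octets octal; still 1.1.1.1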


I know this is not for everyone, but I strongly prefer to run my own recursive resolver at home. Performance is great, plus I get regular DNS for the machines on my home network. Also, it was a fun little project. :)
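
A minimal sketch of that kind of setup, using Unbound as an example (the LAN range and hostnames here are illustrative):

    # /etc/unbound/unbound.conf -- recursive resolver for a home LAN
    server:
        interface: 192.168.1.1
        access-control: 192.168.1.0/24 allow
        # no forward-zone stanza: recurse from the root servers directly
        # plus regular DNS for local machines:
        local-zone: "home.lan." static
        local-data: "nas.home.lan. IN A 192.168.1.10"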


More people should do this.

I recently switched my home network DNS forwarder from BIND to dnscrypt-proxy (https://github.com/jedisct1/dnscrypt-proxy). You can get ad/content filtering lists, along with privacy enhancements like DNSCrypt and DNS-over-HTTPS support for encrypted DNS queries to supported services like Cloudflare.
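
The relevant bits of dnscrypt-proxy.toml look roughly like this (the resolver name comes from the public server list; the blocklist filename is illustrative):

    listen_addresses = ['127.0.0.1:53']
    server_names = ['cloudflare']    # DoH-capable resolver
    [blocked_names]
      blocked_names_file = 'blocked-names.txt'    # ad/content filtering list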


Even with DNS-over-HTTPS and such, wouldn't using a home DNS resolver with a VPN on both ends of the connection still make you a little more vulnerable to network analysis and timing attacks, since your DNS requests are guaranteed to go through a specific IP which only serves you?


Not sure what timing attacks you are talking about.

Having traffic analysis in your threat model is an extreme choice, typically it means your adversaries include law enforcement agencies or state-level actors. Sending DNS over VPN might just mean that you don’t trust your ISP and think they might intercept the request and forge a response—something which does happen, and I’ve personally observed it at two different ISPs.


It's not in my threat model per se, but it's something I give a lot of thought to because systems like this are expected to be used by political dissidents, whistleblowers and other persecuted individuals who are subject to such threat models.

My ISP probably forges responses because they have no problem injecting html/js and hijacking ad space on unencrypted connections.

As far as timing attacks, an example would be this: even though you are at a coffee shop in an undisclosed location miles away from home, if LE has a reason to profile the IP you're broadcasting from at home, they could look at request times and try to link a request from the coffee shop to your home DNS server.

If you use a public DNS server, you have the same benefit of using a public VPN vs a private VPN, in that your traffic gets bundled with everyone else's and obfuscated. It's harder to establish a link between you and the DNS server using network analysis because potentially thousands of connections are being made to that same DNS server from the same VPN node at the same time.


> It's not in my threat model per se, but it's something I give a lot of thought to because systems like this are expected to be used by political dissidents, whistleblowers and other persecuted individuals who are subject to such threat models.

This is wrong; you should not expect political dissidents, whistleblowers, and other people in the same category to use similar techniques to protect themselves.

If you are protecting yourself from different threats, then it is not unreasonable to use different methods to protect yourself. There is an inherent tradeoff between security and usability. If law enforcement or state-level actors are in your threat model, you're going to have to make some extreme usability sacrifices just to keep yourself safe. That means using different systems than other people use.

> Even though you are at a coffee shop in an undisclosed location miles away from home, if LE has a reason to profile the IP you're broadcasting from at home, they could look at request times and try to link a request from the coffee shop to your home DNS server.

You're describing a different system. The system described by kingo555 is just for home.

Also, that's just traffic analysis. The term "timing attack" refers to something else.

If you're a political dissident, whistleblower, or someone else with law enforcement / state-level actors in your threat model, everything changes. Presumably if you are worried about law enforcement, you put the VPN endpoint outside their jurisdiction. This can make it extremely difficult to do traffic analysis, depending on who your adversary is.

I think it makes sense that not everyone has law enforcement and state-level actors in their threat model.


> This is wrong; you should not expect political dissidents, whistleblowers, and other people in the same category to use similar techniques to protect themselves.

I don't know how you can say I'm wrong when I was making the general conjecture that people in these categories use privacy-enhancing systems. I don't think you understood me well. I was not specifically referring to any particular set of techniques or systems.

> You're describing a different system. The system described by kingo555 is just for home.

This system is not meant to be used when roaming? Or is this a use case?

> The term "timing attack" refers to something else.

Which is why I specifically listed both timing attacks and traffic analysis separately. They can be interrelated at times but that's not something I feel like discussing.

> If you're a political dissident, whistleblower, or someone else with law enforcement / state-level actors in your threat model, everything changes. Presumably if you are worried about law enforcement, you put the VPN endpoint outside their jurisdiction. This can make it extremely difficult to do traffic analysis, depending on who your adversary is.

I appreciate the lesson in OPSEC but I only asked a simple question and you've devolved into trying to tear apart my comment for errors and lecturing me about things I already know about instead of simply answering the question. In this case, the answer is apparently "Well, your question isn't really relevant because this system is just meant for home use." One helpful sentence, no assumptions and no negativity.

> I think it makes sense that not everyone has law enforcement and state-level actors in their threat model.

Cool. No one was saying anything to the contrary.


> This system is not meant to be used when roaming? Or is this a use case?

kingo555 described a home network, but I don't think it matters. kingo555's threat model doesn't appear to include traffic analysis, and that's a very reasonable choice. The system doesn't make any timing attacks easier.

> In this case, the answer is apparently "Well, your question isn't really relevant because this system is just meant for home use." One helpful sentence, no assumptions and no negativity.

I thought that was what I did here: https://news.ycombinator.com/item?id=19095304 But I was wrong, and sometimes it takes an entire conversation to discover the differences in assumptions and definitions.


I have to go around my VPN provider for DNS because they intercept and alter DNS requests.


Which provider is this? Why do you still use them if they do something like that?


I really like the flexibility of CoreDNS for that. I use Cloudflare with dnscrypt as my resolver, and I still get to use intranet services at work because I can forward just the zones I need to the internal servers.
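
A Corefile for that kind of split setup looks roughly like this (the zone name and the intranet resolver address are made up):

    # intranet zone goes to the office resolver, everything else to Cloudflare
    corp.example.com {
        forward . 10.0.0.53
    }
    . {
        forward . tls://1.1.1.1 tls://1.0.0.1 {
            tls_servername cloudflare-dns.com
        }
        cache
    }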


I like to share the cache of the DNS resolver with as many people as possible because that's what gives you the best performance. A local resolver just for me can't really do that.


Google DNS (and likely others) doesn't really work that well for this unless it's a very popular domain because of the sheer number of servers that they run and the short TTLs on most records. Even on subsequent lookups, you're likely to hit a different backend server that doesn't have a record cached that you just looked up. Google's servers don't seem to share their cache between one another. Even reddit.com often has to do the full lookup.

For a full local resolver, you can configure it to use and serve expired records with a 0 TTL. In the background it will then look up all records used, so they're refreshed for next time. Cloudflare does this. Likewise, you can configure it to prefetch frequently requested records before they expire.


Most home routers already do this for you. It's a bit abstracted, but it's there; you just need to specify the network and have it in the list of networks the router resolves for DHCP. Or you can DIY it on another box.

I find it's still best to defer to an upstream forward server vs. root resolution. Most of the providers mentioned have multiple points of presence and will likely be closer and resolve MUCH faster for most cases. DNS resolution time can be a huge factor in web responsiveness.


I use pdnsd as a local cache. It used to make browsing lightning fast, although I might have messed something up since then, because DNS resolution now seems laggy. But in theory it's of great benefit.


That's great and all, but you still need to pick an upstream DNS server. The conventional advice is to use one of these public services, or your ISP's resolvers, to avoid hitting the root servers constantly. A lot of services these days have very short TTLs, so running your own recursive resolver still causes a lot of requests to get forwarded.

Also, as counterintuitive as it might seem, when I use namebench ( https://code.google.com/archive/p/namebench/ ) it still says cloudflare and google are faster than my local resolver. (not by a lot though)


You actually don't have to. The TTLs on NS records are generally pretty long, especially for the root servers (6 days for root, 2 days for both .com and my domain's NS). You will hit the .com servers, for example, the first time you go to a domain, but so does Google.

In my experience, Google's DNS has so many servers that even on subsequent requests, you hit a different server and it has to do the full lookup again (likely querying a root unless it's a popular domain). It's not really decreasing the load on the root servers that much, if at all. It might actually increase the load.

One trick you can do to speed up your local recursive resolver is allowing it to serve expired records. Unbound in pfSense allows for this. If the record has been previously retrieved but is expired, it returns the record with a 0 TTL (to force the client to look it up again next time). This includes internally using expired NS records, for example to look up a different subdomain. Meanwhile, in the background, it looks up all the records used to refresh the TTL, and serves/uses those next time. Generally speaking, expired records still work fine. I noticed that Cloudflare DNS does this as well, and regularly serves 0 TTL records.
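
In Unbound terms, these are roughly the knobs involved (the same ones pfSense exposes in its resolver settings):

    server:
        prefetch: yes        # refresh frequently-used records before they expire
        serve-expired: yes   # answer from stale cache, refetch in the background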

I've found that this consistently makes my local resolver faster than any public DNS server, except sometimes the very first time it looks up a domain. The slowest queries are for records which use lots of nested CNAMEs on different domains with short TTLs, such as www.microsoft.com / most sites using Akamai, which take 500ms for the first lookup. There was a domain I saw the other day with 4 or 5 layers of CNAMEs which took 1-1.5 seconds to resolve initially.


I know all that. IMHO in that configuration the root servers are your upstream servers. That’s what I was trying to convey. And serving expired records does generally work, but is not suggested practice.


Technically, you don't have upstream DNS servers in that configuration. The root servers are not recursive resolvers like an upstream.

But this isn't a problem... DNS caching works very well. I've been running my own nameservers for over two decades.


potato potato


> That's great and all, but you still need to pick an upstream DNS server.

No, you don't; that's the whole point of a recursive resolver. I've been running against the root servers, and neither my query statistics nor observed performance show frequent issues due to short TTLs or an excessive number of external queries.


In that case you are using the root servers as your upstream. That’s the point I was trying to make, but I guess I didn’t explain it well. :/


I love my Pi Hole! Great for blocking ads on everything.

* Pi-hole®: A black hole for Internet advertisements – https://pi-hole.net/ (install: curl -sSL https://install.pi-hole.net | bash)


The docker image is pretty trivial to set up too, so it's even easier :)


That, and it's handy for network-level filtering, e.g. with Pi-hole.


I need to point out that Norton DNS has been retired and is not supported anymore (and never offered any privacy).

"On November 15, 2018, Norton ConnectSafe service is being retired or discontinued meaning the service will no longer be available or supported. You may continue to use ConnectSafe until November 15, 2018. However, we do recommend that you take a moment to review important details related to this announcement below."

Some alternatives: https://medium.com/@nykolas.z/norton-connectsafe-dns-is-shut...

I am actually surprised they didn't mention CleanBrowsing in the list, which I would recommend as a good alternative to Norton and OpenDNS.


I made the jump to CleanBrowsing a few months ago from OpenDNS because OpenDNS caches records really aggressively and was just generally stagnant in terms of feature set. I've been really happy with performance and privacy.

I configured DNScrypt on Tomato and I also use Tomato to redirect all DNS requests so it can't be bypassed by simply re-pointing DNS. VPN obviously bypasses it.

https://cleanbrowsing.org/how-it-works


> Logs are kept for 24 hours for debugging purposes, then they are purged

Legalese is a very hard language to parse. Do they keep aggregate data derived from those raw logs? It doesn't say. Trusting Cloudflare? It is a profit-driven company, to start with ...


I work at Cloudflare. We don't sell data of any form, and we don't keep anything which could map queries back to the individual who made them.

When we talk about aggregate data it's things like total number of queries made to 1.1.1.1, number made by AS, and geographic region. Its purpose is only to show us if people are using 1.1.1.1 and how that changes over time.


DNS servers should not be used as "internet connectivity tests" by pinging them. They are not maintained as ICMP test servers, and that is not their purpose. While many do not block ICMP packets, there are typically rate-limiting systems in place, and other reasons why they might not respond to ping requests.

Pinging DNS servers is a shitty inconclusive test for internet connectivity, or SLA measurements etc etc.


But it's a great test for "am I getting out to the net", where low specificity is OK.

While it's not smart to make a lot of decisions based on "ping 1.1" timing out, it is so quick that it's a good first step in trying to debug a network issue.


>DNS servers should not be used as "internet connectivity tests" by pinging them.

Not true.

>They are not maintained as ICMP test servers, and that is not their purpose.

Irrelevant.

>While many do not block ICMP packets, there are typically rate limiting systems in place, and other reasons why they would not respond to ping requests.

8.8.8.8 and 1.1.1.1 and all the major ones don't care.

Pinging DNS servers is highly productive and easy. There is nothing wrong with using them for internet connectivity tests or SLA measurements.


So, what should we ping to test Internet connectivity?


There’s ping.sunet.se as maintained by the Swedish University Network (i.e. the network connecting Swedish Universities; there is no single “Swedish University”).


The advantage of pinging IPs is that you don’t have to have a working DNS setup to test your connection. If DNS does work, then your connection probably works, too.


In this case one should run "ping -n"; otherwise ping may hang trying to resolve the IP into a hostname if DNS is misbehaving or not responding.


What default (OS-included) ping utility tries to reverse-lookup an address before pinging, and hangs if it can't?

I've never observed this in any version of Windows nor Linux.


You are better off using traceroute to one of these addresses; you should at least get some route outside of your own network.


Traceroute uses ICMP as well, and may send more packets than ping. It works by setting the TTL field, which is the number of route hops a packet traverses before being dropped. Routers are supposed to respond with a TTL Exceeded message (this can be disabled, which is why you sometimes get * * *). So it starts with a TTL of 1, then increments by 1 after each hop is discovered (or after a timeout).


On unixes, traceroute typically uses UDP by default (nowadays, ICMP is often an option, as are TCP SYN packets). Windows' tracert uses ICMP.

I believe this is because you need root privileges/special capabilities to send ICMP packets.
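
To make the protocol differences concrete (flags from the common Linux traceroute; exact options vary by implementation):

    traceroute 1.1.1.1            # UDP probes by default on most unixes
    traceroute -I 1.1.1.1         # ICMP echo probes (usually needs root)
    traceroute -T -p 443 1.1.1.1  # TCP SYN probes to port 443
    tracert 1.1.1.1               # Windows: ICMP by default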


Yes. I wasn’t perfectly clear there. You’re correct for the initial outgoing packet; the TTL Exceeded response is ICMP, I think always.


The important difference is that you can trust no host on the internet to actually respond to pings, especially not machines prone to receiving DoS attacks, like public DNS resolvers. Traceroute is for testing connectivity; ping is an archaic test to see if a machine is running.


I just ping google.com when I need a quick sanity check.


Which doesn't help if your DNS isn't working


I've used the tool [0] from grc.com; are there other ways that are reliable as well?

[0] https://www.grc.com/dns/benchmark.htm


I'm more often in need of a CLI that does that for me and have grown used to:

https://github.com/cleanbrowsing/dnsperftest


I use this as well. Found it very helpful.


Please, for the love of all that is holy, stop blocking ICMP! It's needed for things like Path MTU Discovery. With the increasing use of VPNs and tunnels (IPSec, Wireguard, GRE, etc) PMTUD is more important than ever.

At a minimum, any node on the internet routing or serving traffic needs to keep ICMP open. If you're running a DNS server, ICMP should be working.
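
If you must rate-limit ICMP, at least let the PMTUD-critical types through. A sketch with iptables (adapt to your firewall of choice):

    # ICMP "fragmentation needed" (type 3, code 4) is what PMTUD relies on
    iptables -A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT
    # on IPv6, Packet Too Big is mandatory for the protocol to work at all
    ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT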


https://groups.google.com/d/msg/public-dns-discuss/p1o62SJEl...

TL;DR: At the risk of repeating myself: Google Public DNS is a Domain Name System service, not an ICMP network testing service.


You can just set a lower TTL and get a response from a hop somewhere in your ISP's network.
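
For example, on Linux (-t sets the outgoing TTL; BSD/macOS ping uses -m instead):

    ping -t 3 1.1.1.1   # expect "Time to live exceeded" from a hop ~3 routers out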


Are there known ICMP test servers?


Dnsperf.com is worth a look as well, to compare performance of all of these.

Public DNS resolvers: https://www.dnsperf.com/#!dns-resolvers

DNS services for your own domain: https://www.dnsperf.com/


> If you care about privacy and speed and maximum memorability, I recommend CloudFlare

I disagree with the speed part, because Cloudflare doesn't support EDNS Client Subnet (ECS). This is great for privacy but not for speed, since CDNs can't pick a server near you based on your subnet.

Here is proof: https://pastebin.com/raw/QnbWXU1a

If he meant speed in the DNS resolution context, I somewhat agree with him.
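
You can also check ECS handling yourself, assuming a dig build with +subnet support (BIND 9.10+); the subnet here is just a documentation range:

    dig @8.8.8.8 example.com +subnet=198.51.100.0/24   # Google should echo an ECS scope
    dig @1.1.1.1 example.com +subnet=198.51.100.0/24   # Cloudflare should not, since it doesn't forward client subnets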


Surprised that 4.2.2.x (Level3) is not on the list --- it's also unfiltered DNS, and run by a company that focuses only on networking.


At some point a few years back they started to redirect some of the traffic to some dodgy ad sites, that's when I stopped using them.


4.2.2.2 is supposedly for Level3 customers only.


I used OpenDNS long ago, even though it wasn't as easy to remember as the ones that came later. Then I shifted to Google DNS and stayed with it, albeit with some discomfort (even if the policies state it doesn't track, it's still a leap of faith for me). Then last year I switched to Cloudflare DNS and also learned about Quad9 DNS.

I haven't done local benchmarking using a tool like namebench for a long time, and it looks like that tool has not been updated for several years. Any alternatives for it that are cross platform?


> I haven't done local benchmarking using a tool like namebench for a long time, and it looks like that tool has not been updated for several years.

In fairness, the DNS protocol that it's testing hasn't really changed in that time either. namebench is still sufficient for general testing.


The blog post makes no mention of users whose DNS queries are being redirected. Isn't that a privacy concern?

Hotels and ISPs sometimes set up captive portals that intercept and redirect port 53 to their own choice of DNS servers.

As such, users might want to memorise the addresses of some resolvers that listen on non-standard ports (not port 53).

A user behind one of these captive portals who pings any of the resolvers in this blog post will not be pinging those servers; she will be pinging the hotel/ISP's chosen DNS servers and she may be none the wiser.
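
With dig you can pick the port explicitly (5353 here is just an example; some public resolvers have historically answered on alternate ports, but check your provider's docs):

    dig -p 5353 @208.67.222.222 example.com   # query a resolver on a non-standard port
    dig -p 53 @208.67.222.222 example.com     # compare against port 53, which portals typically intercept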


In a hotel, the first thing I do is direct everything through a VPN server (work or home, depending on what I want to do). Some hotels block UDP; in that case I switch the VPN to go over TCP port 443. But some hotels (really!) block port 443.. fortunately not that many anymore.


When you use a home VPN server, is the VPN server running on a computer located at your home and reachable on the open internet? If yes, do you have a fixed address or do you use dynamic DNS?


I could have set it up at home (the address is stable), but I have a server at a hosting facility which I use as my "home central".


Another option here is DNS servers that block ads. I can't vouch for the company itself, but I have found AdGuard DNS reliable and effective, if not memorable:

176.103.130.130, 176.103.130.131

https://adguard.com/en/adguard-dns/overview.html


Run your own: https://pi-hole.net/



Genuine question: what benefits does this have over using a service provided for free by someone else? Privacy?


Yes, privacy, plus it's a DNS cache that's running on your local network, so your DNS resolution is faster. It's also customizable (you can add/blacklist/whitelist specific domains from the control panel).


Or set up your own very lightweight filtering and caching DNS at home using Dnsmasq and https://github.com/notracking/hosts-blocklists/
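
A sketch of the dnsmasq side, assuming the two files from that repo have been downloaded into /etc/dnsmasq.d/ (filenames and paths are illustrative; see the repo's README):

    # /etc/dnsmasq.conf (excerpt)
    conf-file=/etc/dnsmasq.d/domains.txt      # blocked domains, dnsmasq conf format
    addn-hosts=/etc/dnsmasq.d/hostnames.txt   # blocked hosts, hosts-file format
    server=1.1.1.1                            # upstream resolver for everything else
    cache-size=10000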


Quality, latency, and uptime are also factors:

https://www.dnsperf.com/#!dns-resolvers,World,quality


Anybody remember 128.146.1.7?


Do I have to?


(You are referring to the original title.)

Actually, no. Even if one does accept the premise that one should use these third-party non-contracted services, challenged elsewhere in this very discussion, there's no reason that one need have these things memorized. Written in a handy pocketbook, perhaps. But not necessarily memorized.


https://pi-hole.net/ is a project to consider for home and small business networks that you're looking to protect via DNS without sending all your requests to a third party.


Your requests are still forwarded to a third party with a Pi-hole. They are sometimes cached and sites you have blocked do not resolve, but choosing a DNS provider is still required.


Only non-cached requests go to a third party. And I don’t think there’s an easy way to prevent this unless you get a hold of all the zone files and copy in bulk.

What’s nice about pi-hole is that you get one request to sites like google.com until the record expires in the cache. If you use 8.8.8.8 as your DNS, you might end up requesting the same domain name a bunch of times, depending on how your client caches and how the caching at 8.8.8.8 behaves. So the DNS provider will see lots of requests to the same domain.


In a network of just a few computers, are there really that many cached requests? Local DNS caches will already cache short term, and the TTL of most domains is probably too short to get much caching beyond that.


Looking at my dnsmasq statistics, only 16.3% of 10,776 queries in the last 24 hours have been answered by the cache. Another 21.7% never left the device, since they were in the block lists, but that still leaves 62.0% of queries to be returned by 1.1.1.1 and 1.0.0.1, which is my external DNS provider.

Although this doesn’t count on-client caching, it still seems to back up your guess and my original comment.


I've got a pihole running at one of my family members' houses and am seeing ~20% caching; at my place I'm seeing about 35%.


Shout out to OpenDNS, which my company uses and which I made a free account with.


So the author is a security expert who recommends two companies that are notorious for their security flaws (Norton, Cisco), two companies that track your DNS queries for profiling (Google, Cloudflare) and IBM...

Yeah, this sounds totally legit...


> two companies that track your DNS queries for profiling (Google, Cloudflare)

Can you elaborate? Neither Google nor CloudFlare seems to collect information for profiling.

Google: https://developers.google.com/speed/public-dns/privacy

CloudFlare: https://developers.cloudflare.com/1.1.1.1/commitment-to-priv...


What? How do you come to that conclusion? Google actually tells you what it is collecting; if you cannot see how, say, city of origin can be used to target you with specific search results, then I really cannot explain it to you.

Cloudflare does not store any information, but they are pretty frank about passing it on to APNIC for "research" as part of the deal where APNIC lent the 1.0.0.1 and 1.1.1.1 addresses to them. This is pretty well established and openly communicated by both entities, so it should be easy to DDG. You are of course free to believe that this data is totally anonymized and not used for profiling/targeting at all, but imo that is pretty naive.


I work at Cloudflare. APNIC absolutely does not get individual DNS query logs. Their primary interest is in studying the other junk traffic which ends up hitting 1.1.1.1.

For the record, we don't build any sort of profile of DNS queriers, map them back to any existing profile we have, or even keep the data you would need to do that.


Well, for me to believe that there still is such a thing as a free lunch, you'll have to do better than that.

If the data sent to APNIC is so safe and non-personal, why not make it transparent? Instead, when contacting APNIC about it, you get a typical one-liner stating that

> ... the access to the primary data feed will be strictly limited to the researchers in APNIC Labs, and we will naturally abide by APNIC's non-disclosure policies.

Clearly, someone thinks there is something to hide. Maybe not Cloudflare, but then it's someone else.


> why not make it transparent?

I'm not with CF or APNIC, but it's likely due to issues with sensitive data.

Say a web service is hitting `http://internal.example.com:5220` with basic authentication, or there's a misconfigured Jira trying to access `internal.example.com:3306`, but the DNS admins have retired `internal.example.com` and decided to make it return `1.1.1.1` for some reason. Showing all traffic that hits the service would expose a little too much sensitive information.


I have worked at a lot of companies, and never have I seen anyone treat a DNS name as security-relevant information. If you rely on DNS names not being known as a measure of privacy and/or security, you are clearly doing it wrong. As a matter of fact, DNS names are supposed to be known; that's their one and only purpose.

In addition, if "internal.example.com" is not already resolvable publicly (which would mean that it's known), CF could not guarantee the privacy of that query anyway, because their DNS is not part of the root zone, which means they need to forward the query someplace beyond their control, meaning they leak it no matter what.



