I for one have been pretty happy as a home user with OpenDNS: dead-simple DNS filters to make sure the kids don't surf to youporn and youreligion.
Especially when 220.127.116.11 works so smoothly, with a backup on my ISP's service.
I used GRC's DNS Benchmark tool to discover this. Also, the particular OpenDNS servers that responded faster were different from the two they tell you to use, so the fast response is probably helped by low load (fewer people using these undocumented servers). Going from memory, since I'm at work: I think they ended in 202.222, 222.202 and 202.123, whereas the officially listed ones end in 222.222 and 220.220.
Since I've done that, DNS latency has become the actual limiting factor in TCP connection setup. I had been using Google DNS too (my previous provider started to lie about NXDOMAIN), but I've since switched to my new ISP's DNS, which happily responds in under 10 ms for most queries.
My point being: try your ISP's DNS too - it might actually not suck.
Essentially, if you do a DNS lookup and the result is in the cache, it returns the cached result straight away. It then checks whether that entry is due to expire from the cache soon, and if it is, refetches it at that point. So if you visit a site regularly, it should always be in your cache.
This can be very handy even on a home network, as you avoid the whole WAN trip to fetch records for sites you visit often, such as Google and others.
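The refresh-before-expiry behavior described above matches Unbound's `prefetch` option; a minimal sketch, assuming Unbound is the resolver in question (the comment doesn't name one):

```conf
# unbound.conf -- sketch; Unbound is an assumption, not named above.
server:
    # Serve cached answers immediately, and refetch records that are
    # close to expiring (roughly the last 10% of their TTL) so that
    # frequently visited names stay warm in the cache.
    prefetch: yes
    # Also prefetch the DNSKEY records needed to validate answers.
    prefetch-key: yes
```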
Re: the root comment of this thread, it's usually optimal to use your ISP's DNS resolvers: they sit on your ISP's own network, so your requests don't have to go out to the broader internet and are thus faster (99% of the time, anyway, if the requested hostname is cached on those resolvers). Using a more external resolver like Google or what-have-you means your requests have to leave your local network, leave the ISP network, and travel some distance across the internet to wherever that resolver resides.
Edit: IP anycast allows many resolvers to answer on one IP (such as Google's 8.8.8.8), so it's not as if the request has to travel the world to get an answer, but it's still generally faster to key off a resolver you know is within your own ISP.
1. You can use multiple name servers on multiple networks, and generally the first to respond wins.
2. The local caching resolver can be configured to be more aggressive, look up more efficiently in parallel, and keep more records for an extended period of time.
3. Not all DNS requests are cached by the OS, but by providing your own caching resolver you can cache all requests from any application on your network.
4. You can provide your own ranges and domains for internal hosts, so that resolving names for private IPs reduces resolution time in local apps.
5. You can control negative caching and TTLs.
6. You can prevent unscrupulous DNS providers from redirecting NX lookups to internal [ad-supported] services.
7. You can actually implement proper DNSSEC-validating resolution, if you care about that sort of thing.
8. DNSRBLs can speed up resolution of pages by ignoring advertisement and malware domains.
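Several of these points map directly onto resolver options; a sketch using Unbound (an assumption - the list doesn't name a resolver, and the hostnames and addresses below are hypothetical examples):

```conf
# unbound.conf -- illustrative sketch; names/addresses are made up.
server:
    # (2) more aggressive caching: bigger caches, longer retention
    rrset-cache-size: 64m
    msg-cache-size: 32m
    cache-max-ttl: 86400
    # (4) answer internal names locally, no upstream round trip
    local-zone: "lan." static
    local-data: "nas.lan. IN A 192.168.1.10"
    # (7) DNSSEC-validating resolution
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # (8) black-hole a known ad/malware domain
    local-zone: "ads.example.com" always_nxdomain
```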
But the first request from my OS for a given record is likely still served much faster by a DNS server used by more people than just me, because that server has probably already answered the same query for someone else within the record's TTL.
That's one reason not to use them.
I'm talking 1.5 Mb/s download vs. 45 Mb/s - a massive difference.
They hijack NXDOMAIN - but you can turn that off. https://my.virginmedia.com/advancederrorsearch/settings
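A quick way to check whether a resolver hijacks NXDOMAIN is to query a name that can't exist and look at the status it returns; a sketch, assuming `dig` is installed (the probe name and helper function are hypothetical):

```shell
#!/bin/sh
# NXDOMAIN-hijack probe sketch. An honest resolver answers NXDOMAIN for
# a nonexistent name; a hijacking one answers NOERROR with an A record
# pointing at an ad/search page.

# Pull the status field out of dig's ";; ->>HEADER<<-" comment line.
status_of() {
    printf '%s\n' "$1" | sed -n 's/.*status: \([A-Z]*\).*/\1/p'
}

# Usage (needs network; the probe label is made up on purpose):
#   reply=$(dig +noall +comments "hijack-probe-$$.example")
#   [ "$(status_of "$reply")" = NXDOMAIN ] && echo "resolver looks honest"
```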
The local cache speeds things up but also ensures that even if DNS is down, I still have most of my most used domains cached. Usually, I average 10,000 domains per day, so I have a cache of 15,000 records. The "all-servers" tends to help with redundancy but also speeds things up because no single server can consistently respond fast without having some queries take 100ms or more.
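The setup described reads like dnsmasq, where `all-servers` is the option that races every upstream; a sketch, assuming dnsmasq and with illustrative upstream addresses:

```conf
# dnsmasq.conf -- sketch; upstream addresses are illustrative.
no-resolv
server=8.8.8.8          # Google
server=208.67.222.222   # OpenDNS
# Query all upstreams at once and take the first answer,
# instead of trying them one at a time.
all-servers
# Keep up to 15,000 records in the local cache.
cache-size=15000
```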
PS: Technically you shouldn't be using 4.2.2.2, since Level3 expressly discourages its public use. But the idea here is that you shouldn't rely on one provider, even if they provide you with separate IPs.
PS2: Why was this post deleted?
Did you test the secondary (4.2.2.2)?
Edit: Twitter says the secondary was down as well. So I guess I'll have 208.67.222.222 (OpenDNS) as the second DNS on my systems from now on.
And if that reason is that you want to take advantage of a third party's cache, you can configure Unbound to forward requests to another resolver, but also to fall back to doing its own resolution if that third-party resolver times out.
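That forward-with-fallback setup corresponds to Unbound's `forward-first` option; a sketch, with the forwarder address as an illustrative placeholder:

```conf
# unbound.conf -- sketch of "forward, but fall back to own recursion".
forward-zone:
    name: "."                 # forward everything
    forward-addr: 8.8.8.8     # third-party cache (illustrative)
    # If the forwarder times out or fails, fall back to recursing
    # from the root servers ourselves.
    forward-first: yes
```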
Unbound is easy to set up, and runs on Linux, OS X and Windows.
the caching mechanism is not there for show...
If this were true, my life would be much easier because DNS propagation would be much more predictable than it is today. One bad server in the chain serving obnoxiously high and invalid TTLs can ruin your day. It's not very common percentage-wise, but it certainly happens every time we switch DNS over for Stack Overflow.
What figures are you basing that on? The TTL for the NS records for '.' is 6 days. Or are you just guessing? I think it would be perfectly feasible for everyone to run a local recursive resolver, as long as the change was gradual rather than overnight.
latency quite high yet even its back up..
Damn ... latency quite high still, though it is back up.
I used a translator for people who are likely not native speakers and just spent a few hours thinking their many, many servers were inaccessible.
It's a niche dialect.