Is there any reasonable theory for how the choice of DNS changes download speed this dramatically?
I can see it taking longer to resolve the IP address of the download server, but that should only be a one-time cost, and the total impact should be a couple of seconds at most. Unless the download is constantly flipping between servers, I don't see how DNS latency is going to make a noticeable difference in the time it takes to download or stream a movie.
Even without anycast the DNS server's geolocation is going to be way off for a lot of ISP defaults. Why look at a cached second-party side channel instead of the real routed packets from the client anyway? Anycast is the right mechanism for distributing this stuff — Apple's keying off of DNS is a bad idea, implemented poorly.
Google DNS is broken as a concept, and shouldn't have been released until it was either anycast from all Google CDN POPs (wherever www.google.com is proxied from) or Google had talked the major CDNs (AKAM, LLNW, L3, etc.) into supporting their proposed DNS extension (http://tools.ietf.org/html/draft-vandergaast-edns-client-ip-...).
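For the curious, modern versions of dig let you attach that client-subnet option by hand, so you can see what a supporting nameserver would hand back for a given network (the hostname and subnet here are placeholders):

    # Query Google Public DNS for a CDN-fronted name while presenting
    # a specific client subnet via EDNS (dig's +subnet flag):
    dig @8.8.8.8 cdn.example.com +subnet=203.0.113.0/24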
As it stands, using Google DNS is optimizing exactly the wrong thing -- it may be "faster" than your local ISP's resolver (which I've actually never seen), but you're trading a couple of milliseconds on an easily scaled distributed system (which every single ISP in the world provides) for a huge hit on network performance, because you get a non-optimal CDN POP.
I'd even go as far as saying that Google DNS is ruining the network experience for anyone outside the US (and from Gruber's article, apparently in the US too) -- the very problem people pay CDNs to solve in the first place.
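A rough way to see the POP problem for yourself, assuming a placeholder CDN-fronted hostname and a placeholder address for your ISP's resolver:

    # Resolve the same name through the ISP resolver and through
    # Google DNS, then ping whatever each one returns:
    dig +short cdnhost.example.com @192.0.2.53
    dig +short cdnhost.example.com @8.8.8.8
    # A much higher RTT to the second answer suggests Google DNS got
    # you mapped to a distant POP.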
I'm pretty sure we're talking about HTTP here, so a 302 redirect ought to work, and it seems like that would give the CDN far more control than trying to distribute traffic via a cached, not-reliably-localized DNS mechanism.
Edit: foobarbazetc has a good point, but it still feels like the CDN has reasonable ways to work around it and do a better job of selecting the correct POP than DNS does. Adding a layer of subdomains that force locality (us-ny.host.com) would keep URLs readable and virtual hosts intact.
Which would send the wrong 'Host' header to the server, so the CDN won't serve the right site and/or content. :)
With a long-lived connection that can absorb a little extra setup time, and a custom-built client (both of which describe the Apple TV), there's no need to rely on geo-DNS anyway.
But the article makes a good point, and since I have BIND running on my firewall anyway, I'll just take the forwarders out.
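For anyone else doing the same, it's just a matter of deleting (or emptying) the forwarders block in named.conf; a minimal sketch:

    // named.conf fragment: with the forwarders gone, BIND resolves
    // recursively from the roots instead of forwarding to a public
    // resolver.
    options {
        forwarders { 8.8.8.8; 8.8.4.4; };  // remove these lines
    };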
In theory anycast delivery is the right solution, but there is a wide gulf between theory and practice here. There is a reason that no one uses anycast for anything other than DNS at the moment...
If your ISP is giving you, for example, one local server and one non-local server, do some sleuthing, figure out where their second local DNS server is, and hard-code your DNS settings to those servers (or remove the server that isn't local).
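On a Unix-ish machine that's a two-line /etc/resolv.conf (the addresses below are placeholders for your ISP's local resolvers):

    # /etc/resolv.conf -- list only resolvers local to your ISP's network
    nameserver 192.0.2.53
    nameserver 192.0.2.54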
In the earlier days of the Internet, you were encouraged to provide geographically diverse DNS servers. But what many operators did not understand is that this rule of thumb applies to authoritative DNS hosting.
For DNS resolvers, you want servers in the metro area where your internet connection terminates, and within the same AS. Another ISP even in the same metro area is not good enough: that other ISP will not have the same peering/transit arrangements as yours.
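One way to check where your resolver actually sits, using Akamai's well-known whoami diagnostic name and Team Cymru's IP-to-ASN whois service:

    # Shows the resolver address that authoritative servers see:
    dig +short whoami.akamai.net
    # Then look up which AS that address belongs to (substitute the
    # address the first command returned):
    whois -h whois.cymru.com " -v 192.0.2.53"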
A much better way of locating a resource on the network is its IP address, since locating endpoints is what IP was designed to do; DNS was designed to resolve names to IP addresses, not to place clients on the network.
DNS is designed to be forwarded and cached. A much better way of optimizing network routes is to advertise a better route, since that is what routing was designed to do: pick the fastest way to get from your IP to a destination block (a.k.a. anycast).
It's simply amazing how well things work when you use things for what they were intended for.
Why reinvent the wheel at such a high level?
If you guessed incorrectly, the TCP handshake will still arrive at your server from the user, and at that point you know they are not at the right server. If the nature of the eventual exchange allows it (e.g., repeated queries or a long data transfer), you hit them with a redirect and let them re-open the connection to a server that is closer, or you seed the response data so that follow-up queries are all exchanged with the closest data center. If the exchange is quick and not going to be repeated a lot, you will only add perceived latency by redirecting once the connection is established.
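As a sketch of the redirect idea in nginx configuration (the networks and hostnames are placeholders, not anyone's real setup):

    # Map client networks that belong at another POP; by the time this
    # runs, the TCP handshake has shown us the client's real address.
    geo $closer_pop {
        default        "";
        203.0.113.0/24 us-ny.pop.example.com;
    }
    server {
        listen 80;
        location /download/ {
            # Bounce long exchanges to the closer POP; serving short
            # ones locally avoids the extra round trip of a redirect.
            if ($closer_pop != "") {
                return 302 http://$closer_pop$request_uri;
            }
        }
    }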
Putting this sort of smarts down at the application server is expensive. It is much easier to do this job at the DNS lookup phase, and you will get a good return on the investment this approach requires. You do need to be smart about it, though, and make sure that geo-DNS doesn't end up directing the large batches of users behind Google DNS or OpenDNS at one location and overloading it.
Anycast is not the solution; it is just a different approach that brings its own set of headaches into the equation (e.g., POP switches during extended exchanges).
Most CDNs couple DNS (for a large region) and Anycast (within the region) -- they're not idiots.
Providing IP addresses (locations) to host queries is exactly what DNS is built to do. :)
It sounds like the specific POP that Google's DNS answers are steering users to is overloaded with traffic. It should be fairly easy for Apple to resolve the problem on their end by simply not resolving to overloaded POPs (they shouldn't ever, anyway).
Other video-CDN-backed services (like Netflix) don't suffer POP overloading on public DNS servers like GTE's or OpenDNS.
Conceivably, caching behavior could also throw off geolocation if Google's cache domains (i.e., the geographic areas that get placed into the same cache bucket) don't match the upstream CDN's.
Even if you found a large resolver that was really screwing you by doing something like setting the TTL to 86400, you could just custom-serve it a large EDNS response including all of your POPs in a round-robin list.
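i.e., a zone fragment along these lines (all names and addresses invented for illustration):

    ; Hand that resolver every POP as one round-robin A RRset, so even
    ; a long-cached answer still spreads its clients across locations.
    cdn.example.com.  300  IN  A  192.0.2.1
    cdn.example.com.  300  IN  A  198.51.100.1
    cdn.example.com.  300  IN  A  203.0.113.1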
From a number of Australian providers, on links of up to 30 Mb/s, I have found this to be impossible.
I can easily get 2 MB/s from an Australian site, but roughly 200 KB/s is the best I can manage once I go international. Using multiple connections does get around this issue, though.
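For the multiple-connections workaround, a segmented downloader does it in one command; aria2c is one common choice (the URL is a placeholder):

    # Open up to 8 parallel connections to the same server:
    aria2c -x8 http://example.com/big.file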
"Here are the subnets from which Google Public DNS sends requests to authoritative nameservers, and their associated IATA airport codes"
So in theory, if you live in one of those cities, you shouldn't have this problem. Right?
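A commonly cited way to check which Google resolver location is actually answering you is Google's own TXT diagnostic name:

    # Returns the address Google's resolvers present to authoritative
    # servers (and, where supported, the client subnet):
    dig +short TXT o-o.myaddr.l.google.com @8.8.8.8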
Some of the reported numbers are too astounding to be covered by this explanation, however. I wonder if something else is at work.
You can check by opening a command line (Run -> cmd) and doing a "tracert randomwebsite.com" (or Terminal, then traceroute in OS X / mtr in Linux). If it doesn't come back with 8.8.8.8 or 8.8.4.4 as any of the servers (should be in the first few), you're not using Google's DNS servers.
Otherwise, remove the servers and revert to automatic or ISP settings (instructions are at the bottom of Google's doc).
EDIT: Sorry, confused DNS server with my ISP connection node. Check out the explanation by jonburs below.
A tool such as dig, on the other hand, will show you which server you're using, and lets you easily compare results from an alternate. For example, compare the results of 'dig www.apple.com' (your configured DNS) to 'dig www.apple.com @8.8.8.8' (Google DNS).
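Side by side:

    # Compare the A records each resolver hands back; different answers
    # usually mean you're being mapped to different CDN POPs:
    dig +short www.apple.com
    dig +short www.apple.com @8.8.8.8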
Quick question: dig (with no @server argument) turns up my router's IP. Is there a way to get around that?
If that's the case, there's a good chance your router's admin interface will let you view and configure the DNS server its resolver is using.