Why does DNS propagation take so long?
3 points by siliconmeme on Feb 9, 2013 | 6 comments
I was talking to a friend at Silicon Drinkabout last night, and we were wondering: why isn't there a "push" approach to DNS propagation, especially for CDNs and heavy cache users such as mobile networks? What are we missing in our thinking here?



I'm no expert, but in my understanding a 'push' approach wouldn't work because there are so many different DNS servers (every ISP and hosting provider runs them, plus third-party providers and internal corporate DNS servers) that it wouldn't really be feasible to push changes to all of them instantly.

Really, the TTL (Time To Live) should allow fairly quick DNS propagation. The problem with the TTL is that you need to know your change is coming in advance.

E.g. if your domain's TTL is currently 24 hours, you need to change it to something much shorter (say, 5 minutes) at least 24 hours before the DNS change happens. By the time you actually change the record, every DNS server will then know to refresh its copy of your domain every 5 minutes.
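
A minimal sketch of that timeline, assuming a hypothetical zone whose record is currently served with a 24-hour TTL (the values are placeholders, not anyone's real settings):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical values -- substitute your zone's real TTLs.
current_ttl = timedelta(hours=24)   # TTL that caches are holding right now
lowered_ttl = timedelta(minutes=5)  # TTL you switch to ahead of the migration

now = datetime.now(timezone.utc)

# Step 1: publish the record with the lowered TTL.
# Caches that fetched the old record may keep it for up to current_ttl,
# so the lowered TTL is only guaranteed everywhere after that window.
safe_cutover = now + current_ttl

# Step 2: change the record's data (new IP) any time after that point.
# Well-behaved caches will then pick up the change within lowered_ttl.
fully_propagated = safe_cutover + lowered_ttl

print(f"Lower the TTL now:            {now:%Y-%m-%d %H:%M} UTC")
print(f"Earliest safe IP change:      {safe_cutover:%Y-%m-%d %H:%M} UTC")
print(f"Change visible everywhere by: {fully_propagated:%Y-%m-%d %H:%M} UTC")
```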

The other problem is that, to 'improve' response times, some DNS servers (particularly at ISPs) don't honour the TTL at all.


Notwithstanding the problems mentioned, you can just leave your TTL at a low value. Amazon S3, for example, leaves it at 60 seconds.
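
You can watch this behaviour yourself: ask a recursive resolver for the same name twice and the remaining TTL counts down between answers. A quick sketch, assuming the third-party dnspython package; the name and resolver address are just placeholders:

```python
import time
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # any recursive resolver will do

for _ in range(2):
    answer = resolver.resolve("example.com", "A")
    # answer.rrset.ttl is the TTL remaining in the resolver's cache
    print(f"TTL remaining: {answer.rrset.ttl}s "
          f"-> {[r.address for r in answer]}")
    time.sleep(5)
```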


How would that work? Would every authoritative nameserver keep a database of the IP addresses of all the clients that have requested records in the last week? And then hope those clients also run a name server on the same IP, so it can send some kind of DNS NOTIFY when the records change? What about my recursive DNS server on my LAN, which only has an RFC 1918 address? :)

Of course it'd be nice to push changes quickly, but I doubt there is any feasible way to coordinate that on the internet. The next best thing you can do is to lower your TTL in advance.


It takes "so long" because of the disparate cache and TTL idea. That is, service providers believe the best thing to do is run large DNS caches for end users (and those providers can each set TTL's however they wish; they might not all be in agreement on what is an appropriate TTL). Of course, that's not the only way to run a DNS. But groupthink (or whatever you choose to call it) prevents any change in the status quo.

For example, if I said end users could run their own caches, use their own custom zones, or custom /etc/hosts files for sites they frequent, I'll bet the idea would quickly be shot down by the defenders of the status quo.

Yet it solves the "so long" problem. They can honour the suggested TTLs the authoritative DNS servers provide, or set their cache's min/max TTLs as they wish. Long or short. The end user decides. Not all sites, including CDNs, keep changing their IP every few minutes. Once the user knows the closest IP for getting what she wants, there's no need for DNS. Things become very fast when you don't have to constantly look up the same names, day after day.
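
A rough sketch of that kind of user-controlled cache, where the end user (not the upstream resolver) decides how long answers live. It assumes the third-party dnspython package; the clamp values and the domain name are placeholders, and this is a toy, not a production resolver:

```python
import time
import dns.resolver

MIN_TTL = 300      # never expire an answer faster than 5 minutes
MAX_TTL = 86400    # never keep an answer longer than a day

_cache = {}  # (name, rdtype) -> (expiry_timestamp, list_of_addresses)

def lookup(name, rdtype="A"):
    key = (name, rdtype)
    now = time.time()

    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]                  # still fresh by *our* rules

    answer = dns.resolver.resolve(name, rdtype)
    # Clamp the upstream TTL to the user's chosen min/max.
    ttl = min(max(answer.rrset.ttl, MIN_TTL), MAX_TTL)
    addresses = [r.address for r in answer]
    _cache[key] = (now + ttl, addresses)
    return addresses

print(lookup("example.com"))   # network lookup
print(lookup("example.com"))   # served from the local cache
```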

But the real question is: why are you asking why it takes "so long"? What is the real problem you are trying to solve? For example, do you need to keep switching IPs, adding CNAMEs, or creating more indirection? If so, why is that?


You mean, other than RFC 1996 and friends?
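
(RFC 1996 defines DNS NOTIFY, which is a push, but only from a zone's primary to its known secondaries; the secondaries then pull the updated zone themselves. It never reaches the world's recursive resolvers, which is why it doesn't answer the original question. A hedged sketch of sending one, assuming dnspython, with a placeholder zone name and secondary address:)

```python
import dns.flags
import dns.message
import dns.opcode
import dns.query
import dns.rcode
import dns.rdatatype

zone = "example.com."
secondary = "192.0.2.53"   # a secondary configured to accept NOTIFY from us

# A NOTIFY is structurally a SOA query with the NOTIFY opcode and AA flag set.
notify = dns.message.make_query(zone, dns.rdatatype.SOA)
notify.set_opcode(dns.opcode.NOTIFY)
notify.flags |= dns.flags.AA

response = dns.query.udp(notify, secondary, timeout=5)
print("NOTIFY acknowledged, rcode:", dns.rcode.to_text(response.rcode()))
```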


What do you mean by "push" approach to DNS propagation?





