Problems with low DNS TTLs (00f.net)
185 points by JimWestergren 15 days ago | 158 comments



As a sysadmin with 20+ years experience, I've had long TTLs cause issues on several occasions.

I've never regretted a short TTL.


The part that's missing in DNS is the ability to tell people that the cache expired. You can't really do this in DNS itself, since a cached response will never hit the network. You could add a signal in the application protocol, for example: <meta name="dns-refresh" value="[.. some date ..]">.

You'd also have to do that for SMTP, IMAP, etc. Probably not worth the complexity as low TTLs seem to work well enough with few enough downsides. DNS is already a tricky enough protocol.


(offline) Cache invalidation is a very hard thing to do. If you are going to ask the network if your DNS cache is still valid, you might as well get the latest value anyways, since DNS queries are tiny.


Not all the world is HTTP and/or web stuff.

For a lot of things changing domain is quite a hassle, and the old domain will still be around, cached, and considered valid (because its ttl hasn't expired yet).

I agree with GP in saying that a short TTL has never created much of an issue.

Now, to come to the original issue: short ttl.

What do you expect, in this day and age?

With all this cloud stuff going up and down, being created and destroyed again and again, ephemeral stuff that doesn't even last a whole single day...

Of course you need a short ttl.


Use a new (sub)domain. Your main domain can then simply redirect to whatever subdomain you wish.


A new subdomain normally will do the trick, but might be suboptimal due to the work needed according to the scale/tech/job involved. For example, doing this as a sysadmin in a big company requires jumping through bureaucracy hoops, not to mention setting up SSL for it (I know there is Let's Encrypt, but some corporate environments don't want it). A short TTL will just be "Well, just try it again in 10 minutes and it'll work."


Besides, we might still want other caches (such as the browser cache) to work. A new subdomain would invalidate those too.


The event horizon of the stupidity singularity just became a little better defined.


I would put a reasonable floor on short TTLs (5 minutes?), but yes, it's nice in an emergency to be able to send everyone someplace new. Sucks if you're down, hacked, etc, and can't do anything about an existing long TTL other than wait it out.

Edit: Worth noting there's lots of software that seems to only resolve hostnames at first connection, then hangs onto the result forever. Lots of Java internals, for example, unless you poke in specific configuration.
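
A minimal sketch of the difference in Python (the host name is hypothetical): resolving once at startup and reusing the address ignores any later DNS change, while re-resolving on each reconnect lets the stub resolver honour the TTL.

    import socket

    HOST = "api.example.com"   # hypothetical service name

    # The problem pattern: resolve once at startup and keep the address forever.
    # A record change (and its TTL) is invisible to this client until restart.
    PINNED_IP = socket.gethostbyname(HOST)

    def connect_pinned():
        return socket.create_connection((PINNED_IP, 443), timeout=5)

    # Friendlier pattern for long-lived processes: re-resolve on every reconnect,
    # so the resolver can honour whatever TTL the zone publishes.
    def connect_fresh():
        return socket.create_connection((HOST, 443), timeout=5)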


Nginx as reverse proxy does this and it's burned me.


Same, saw a DNS entry for an internal DB endpoint get updated with an 8 hour TTL for a planned failover. DB Admins went to sleep as everything was fine, everyone woke up 8 hours later with everything failing to connect. Had to flush the DNS on each internal server to fix that issue.


The issue here is whether one-time migration laziness justifies millions or billions of DNS requests to a web service that could've been saved for customers.

And I do not think it does. DNS without caching is useless traffic overhead, just like HTTP responses without gzip compression. DNS entries almost never change, therefore they should be cached accordingly.


I generally agree with this. I think most of us that have set a DNS entry use a low TTL because of experiences we had 5, 10, 15 years ago. Really, really bad experiences. I'm not necessarily arguing to increase the TTL, but maybe we should ask the question: with all the new routing tech out there, is a low TTL still necessary?

K8s Ingress and Cloudfront alone will probably make the customer-visible IP addresses nearly static forever. We don't live in the old world where we had to take a server down any more. It's all managed.


I just don't see the harm in a short TTL. Most apps are "bursty" so a 1 minute TTL is more than ample, basically giving them a 1 time penalty on the first request then nothing for the rest of the requests.

1 minute is a long time, you can do a ton of requests in 60,000 milliseconds.

On the flip side, setting the TTL long can be a disaster. You can't fix it after the fact. If you have a 1 hour TTL then that's potentially 1 hour before the changes needed to fix a service fully take effect. That's 1 hour of helplessness.


It would be nice if failure to connect / TLS handshake failure invalidated the entire related network stack cache for that link. If that were the case I wouldn't mind higher TTLs.


Surely we can live with 5-10 minutes for most things, though, right?


I don't see why not. I just don't see there being a huge difference between 1 and 10 minutes in the grand scheme of tech.


That's an order of magnitude which can make a pretty big difference when demand spikes.


During the CenturyLink outage (last year, I think?), I had to switch IPs to a different network, since those were exposed over BGP and you couldn't update BGP anymore. All one-offs, sure; the point of low TTLs is to be prepared. I pay for Route53 by the query, so I'm aware of the cost, and it's still nothing compared to the cost of an outage waiting for a TTL to expire.


What about 2 minutes during the first hour after a change, 30 minutes during 2 days, 3 hrs after that?


This would be fine for some uses. It's not great for client facing names where you want to be able to react to an incident and have traffic move over quickly.

If it's going to take more than 3 hours to set up the new traffic target anyway, sure. You have time to fiddle with TTLs. If you have servers ready to go elsewhere, 1-5 minute TTLs are nice so you can quickly move things when you notice a problem.


I think the point is that I don't want to change my TTLs. If I had my druthers, yeah, they'd go up periodically, and then if tech support receives an issue that sounds like it might require DNS changes, they press a button and the TTL drops. Once that issue is closed, it goes back to gradually increasing.


I'd kind of like if there was some ability to have more complex multi-part TTLs as an option along with a default TTL (the current one). So I could specify

  Default TTL: 12 hours;
  [<startdate> to <enddate>] TTL: 10 minutes;
or even

  Default TTL: 12 hours;
  [Thu 0000-1200, repeating]: 5 minutes;
  [<startdate> to <enddate>] TTL: 10 minutes;
So with no further effort all downstream caches/clients can basically have advance notice of regular maintenance windows as well as planned maintenance and just automatically adapt. Of course this wouldn't deal with true emergencies, but it might lower the overhead for a huge amount of regular stuff that otherwise tempts people to set it low and leave it that way.

I dunno, I'm sure there are other downsides I haven't thought of, and proper implementation would require thinking through side effects. But after a long time dealing with it, it feels like there's some room for something beyond one single TTL ever, which must be specifically changed (with a wait for propagation) well ahead of time whenever anything planned with a risk of issues needs to be done. Maybe?


I don't think you'd need to further complicate DNS, just have a service running that checks a calendar and syncs your DNS TTLs appropriately.

e.g. (a rough sketch in code follows the list):

- Check if there's a maintenance window in the next [max TTL]

- If not, set TTL = [max TTL]

- Otherwise, set TTL = [time until maintenance window]

- Repeat
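
A rough sketch of that loop in Python; set_ttl and the maintenance-window list are placeholders for whatever your DNS provider's API and calendar actually look like.

    import time

    MAX_TTL = 12 * 3600        # cache long when nothing is planned
    MIN_TTL = 300              # floor just before/during maintenance
    CHECK_EVERY = 300

    def desired_ttl(now, windows):
        """windows is a list of (start, end) epoch times for planned maintenance."""
        upcoming = [start for start, _end in windows if start > now]
        if not upcoming or min(upcoming) - now > MAX_TTL:
            return MAX_TTL
        # Shrink the TTL so no cache outlives the start of the window.
        return max(MIN_TTL, int(min(upcoming) - now))

    def run(windows, set_ttl):
        # set_ttl(seconds) is a placeholder for the provider-specific update call.
        while True:
            set_ttl(desired_ttl(time.time(), windows))
            time.sleep(CHECK_EVERY)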


Migration accidents happen. If your TTLs are long, those accidents cause long disruptions and big downtimes. That's a poor customer experience.

I definitely think it's worth the millions or billions of DNS requests.

Use short TTLs, your customers will thank you.


> I've never regretted a short TTL.

Very much this sentiment.

When migrating a website many years ago, I forgot to lower the TTL of companyname.co.uk. I had lowered the TTL of www. but not the root.

So when the big switchover came, half the traffic stayed where it was. Not only that, the new backend fell over.

Having that option to roll out/in would have been really useful.

Now with the cloud, stuff just goes away, so having a TTL of 3600 means long outages.


This mirrors my experience.

I've seen more problems caused by the JVM (which, by default in some configurations, caches DNS indefinitely regardless of TTL) than by a short TTL.


Definitely. About 12+ years ago, I had to prove to a vendor, with tcpdumps, etc., that they were connecting to the wrong server after we changed a DNS entry. 3 of their systems were working, the 4th hadn't been restarted and was connecting to the old address. Very frustrating.


But that isn't the issue at question. If the TTL is 300 seconds or 3600 seconds and the JVM holds onto it for three weeks, you can't blame a TTL of 3600 seconds for that, and setting the TTL down to 0 seconds at all isn't going to fix it either (unless, bizarrely, the JVM developers decided to respect TTLs of 0-60 and treat all other values as infinite).


Is that fixed yet?

I remember having to bounce Java apps every time DNS changed, which never made sense to me. It's literally the point of DNS to not have to do that.


It's fixed in newer JVMs ("newer" meaning anything in the past 10 years.)


"OH, thank god it's a 48 hour TTL".

No one ever said this.


48hrs is excessively long. Most DNS servers will probably evict your entry from the cache before two days elapse anyway.

Less than a minute is excessively short. Most long-lived applications will internally cache your IP address for longer than a minute no matter what you do.


I always go for 5 minutes. The workaround would be to lower the TTL when you know you are gonna make changes soon.


Agreed, I have been stung by long propagation times, never by shorter times to propagate.


User perspective:

As a user with 20+ years of experience, I have never regretted using authoritative DNS as the source for DNS data instead of DNS caches. Instead of using a shared cache, I bind local authoritative servers loaded with zone files containing the DNS data that applications need to the loopback. IME, lookups are faster than with a cache.

IMO as a user, recursive queries to shared caches are overrated. I want to know what lookups the applications I use are making and I want control over what DNS data is available to them. IME, giving applications license to lookup any resource at any time means this permission will be abused to further the interests of the online ad industry and all those who service it instead of 100% serving the interests of the user.

For example, a commercial router's management console does not need to be able to look up the addresses of ad servers. Letting it have access to a remote shared cache, or even a local one, that does recursive queries is unnecessary. The user has already paid for the router.


> I've never regretted a short TTL.

I've seen enough people complaining about overloaded DNS servers.


Yes, that happens.

I've seen way more people complain about outages that lasted hours because of long TTLs and a deployment mistake.


I haven't.


I've been using 5 minute TTL in production for years, never noticed any problems with it. It has the advantage that it makes it super simple to deploy to production at a moment's notice, in case something unexpected occurs.


Yep, that perfectly matches my (shorter) experience


15 mins is the shortest I would want to go. Otherwise, your app is going to be perceived as slow (by your users) because it has to do a bunch of needless DNS lookups.


I doubt the perception of slowness is really going to be that different between a 5 min TTL and a 15 min TTL.

1. DNS lookups add on the order of 100ms to load time.

2. In both cases (5 min and 15 min) the user is going to do a DNS lookup on the first page, then have cached DNS while they browse for a bit, then have a DNS lookup at some point in the future.

Most web sessions are relatively short, so I doubt most users would even notice the difference where they see an extra 100ms load time at, say, 2 points in their session instead of 1.

In my experience the appropriate use of the preconnect and dns-prefetch hints has a much bigger impact on perceived performance than worrying about DNS TTLs beyond 5 min.


I think it'd be more interesting to measure the impact on the end user. The article mentions a drop in queries, but aren't DNS queries a drop in the bucket compared to the size of most web pages anyway? Is the difference really noticeable?

Do you get faster web pages if you cache for a longer time? If you do, shouldn't web browsers "soft-invalidate" (use the entry, but update it right after) the cache entry when you're just past TTL and "hard-invalidate" (update it before using) after? Do they do that already?
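
For what it's worth, a sketch of what that "soft-invalidate" policy could look like in Python (serve the stale entry, refresh it right after); refresh and schedule_background are placeholders for a real DNS query and task queue, and the grace period is made up.

    import time

    class Entry:
        def __init__(self, value, ttl, stale_grace=3600):
            self.value = value
            self.fresh_until = time.time() + ttl
            self.usable_until = self.fresh_until + stale_grace

    def lookup(cache, name, refresh, schedule_background):
        # refresh(name) must return a fresh Entry
        entry = cache.get(name)
        now = time.time()
        if entry and now < entry.fresh_until:
            return entry.value                     # within TTL: use as-is
        if entry and now < entry.usable_until:
            schedule_background(refresh, name)     # "soft-invalidate": answer now,
            return entry.value                     # update the entry right after
        cache[name] = refresh(name)                # "hard-invalidate": block on refresh
        return cache[name].value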


> aren't DNS queries a drop in the bucket compared to the size of most web pages anyway?

The client needs to wait for the result of a DNS query before it can do anything else. The bandwidth is irrelevant, the problem is the delay.

Usually DNS queries are cached by a server near the user, so they are very fast. But if the authoritative name server has a very short TTL, then those cached results will often be stale, and the name server has to resolve the name recursively, which can be slow.


> The client needs to wait for the result of a DNS query before it can do anything else. The bandwidth is irrelevant, the problem is the delay.

You're right. But the article seems to claim the number of queries can be reduced with a higher TTL, which is why these should be enforced. Fair enough, but so what? In any case, the TTL is irrelevant for the first query. For the next ones, if you allow yourself to rely on the previous result (the "soft-invalidation" I mentioned, for lack of a better word), it shouldn't have any impact for the user.


> TTL is irrelevant for the first query

With a long TTL, the chances are higher that your router or your ISP has the name cached. Round trip to your ISP is very short.

If you have a short TTL, the ISPs name server may have to query the next server in the chain.

If the authoritative name server is 10000km away, that means at least 60ms extra round trip time (speed of light).


I have no data, but I think it should make a difference. The impact is not about bandwidth but about latency, so the size of a query plays no role - what does play a role is how many network roundtrips the browser has to do before the page becomes usable.

Seems to me that short TTLs in connection with the current trend to include scripts from dozens of different domains could easily double the number of average roundtrips per page load here.


I thought the same thing. This post included lots of data about the system-wide behavior of the DNS system, but no data about the end user experience. It's obvious there will be some reduced latency for users, but what is the distribution there in the real world? If the mean is like 10ms extra or something, it's not really worth optimizing for anybody except maybe the very biggest players on the web (and even they might decide the trade-offs are not worth it).

It's just very easy to shoot yourself in the foot with a long DNS TTL, in the worst case taking down your entire site until the TTL expires if you ever misconfigure it. Why risk it for a small gain?

Also, even theoretically it's not clear to me that it would help end user experience. Someone correct me if I'm wrong, but I believe if there's already an open connection from the browser to the server based on the TCP keep-alive settings, the browser will continue to use it rather than kill it and open a new connection starting with a DNS lookup even if the DNS TTL expires. So for an otherwise well-tuned site, each user session should expect to do a DNS lookup at the start of the session but then the rest of the requests in that session won't need to.

If that's true, then the only way DNS TTL will affect the end user experience is if the TTL is long enough such that there are a significant number of instances where the time between sessions for one user on the same device is longer than the DNS TTL. Most sites don't have their users returning every 30 minutes, they're lucky to get someone as a daily active or even weekly active user. So the TTL might have to be ~ a few days long to impact a significant number of users (and even then, probably only by a few tens of milliseconds at the beginning of the session).


I think the impact of a low TTL is negligible for web sites. It would at least be nice to have some actual measurements. The first visit wouldn't be DNS-cached anyway, and subsequent page loads will have other static resources already cached.


The article claims that web browsers will automatically pick a healthy backend when you return multiple A records, but the behavior doesn't seem acceptable to me. I was going to post "I've never seen it work", but I just tried it and it does indeed work -- the browser hangs for 30 seconds while it waits for the faulty IP address to time out, and then it eventually tries the other IP address, and it does work. (It then retains its selection for a while; I was too lazy to see what happens if I invert the healthiness of the two backends. I also didn't try more than 2.)

I think most people would call a website down if it was just a white screen for 30 seconds, so while it's a nice try on the part of the browsers, you can see why people use short TTLs to get bad backends out of the pool as quickly as possible.
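
A plain client can do the same failover with a much shorter per-address timeout than the browser's ~30 seconds; a rough Python sketch (nothing here is specific to any particular site):

    import socket

    def connect_any(host, port, timeout=2.0):
        """Walk the RR set with a short per-address timeout instead of
        hanging on the first dead backend."""
        last_err = None
        for family, socktype, proto, _cname, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                return socket.create_connection(sockaddr[:2], timeout=timeout)
            except OSError as err:
                last_err = err        # dead/filtered address: try the next one
        raise last_err or OSError("no usable addresses for %s" % host)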


Customer-facing DNS should have TTLs on the order of 15 to 30 minutes. Halving those values to estimate TTL value to the end user, you get 7 to 15 minutes of cached DNS. That's about right for most user interactions on the web.

Much longer and you run into all the trouble that operators have with keeping DNS accurate. DNS is hard. It is easy to break. And 15 to 30 minutes of waiting is about as much normal human attention span you can apply to a problem that sounds like, "Ok, we're all done, is DNS ok?"

5 to 10 minute TTLs only benefit operators. Certainly, any TTL less than 5 minutes is an indicator that your operators have no faith whatsoever in their ability to manage DNS.


Once upon a time, I worked in a saas company that would sometimes switch customers to a new instance of a service by switching DNS records -

1. Create instance of service running version n+1

2. Switch public DNS records to point to new servers

3. Wait for TTL to expire

4. Turn off old servers

(Obviously I'm simplifying; if nothing else there should be testing steps in there)

Unless I've missed something, wouldn't the author's suggestion to artificially raise the TTL by ignoring the upstream TTL result in the application breaking for customers if they used a DNS resolver that did this?


Yes, for 40 minutes to 1 hour.

But I bet you still ran forwarders on the old hosts for at least an hour after you cut over DNS.


> But I bet you still ran forwarders on the old hosts for at least an hour after you cut over DNS.

I promise you we did not.


That's a surprise - I've handled migrations like this in the past, and we always set up a simple proxy to forward traffic for a while.

I've definitely lost count of the number of clients that would cache the old IPs, despite valid and low TTLs being in-place well in advance of a migration.


The impression I got from the senior sysadmins was that we considered clients caching records beyond TTL to be a bug on their side and not our problem, and (importantly) the nature of our business/clients allowed us to make that determination and not take corrective measures to compensate for client-side misconfigurations. As such, proxying traffic would have been considered at best unnecessary work (and at worst compromising our testing process and encouraging bad behavior).


I can appreciate that, I know that I would see traffic hit the old IP for >3 days. I suspect old Java clients, etc, that would resolve IPs once on startup and never again.

In our case it was worth keeping things working for a few days, but after a week at the outside we'd kill the proxying/forwarding.


There were major providers that at one point in time overrode TTLs to a minimum of a week if they were under 24h.


Maybe for loosely coupled systems. Unavoidable in tightly coupled systems, because it's a convenient way to do things unless you already have elaborate HA infra and protocols in place.

For example, if you offer an "entrypoint" that you can guarantee and technically make to be stable, then use longish TTLs. Anycast IPs are an extreme, but in between there are many useful modes of exploiting longish but not too long TTLs.

On the other hand, if you implement system failover in a locally redundant system and want to exploit DNS so you don't have to manage additional technology to make an "entrypoint" HA (VRRP, other IP movements, ...), low TTLs are nice. AWS is I think using 5s TTLs on the ElastiCache node's primary DNS names.

Finally, 15m max is what I'm comfortable with. Any longer or much longer, and ANY MISTAKE, and you can easily be in a world of hurt. It's no fun sitting out a DNS mistake propagating around the world and the fix lagging behind.

And this is only a view on "respectable TTL" values. DNS services like Google's public dns probably ignore any or all TTLs for records they pull, and refresh them as fast as possible anyway, at least according to my observation. In that sense, I doubt that most of the internet is still using "respectable" TTLs --- I suspect most systems will RACE to get new data ASAP.


The problem is that the DNS TTL is a feature designed for a static internet of the 70's or 80's.

What this points to is a need for an authenticated DNS pushes for refresh/invalidation.

All supporting resolvers could keep a list of supporting clients that were told that "foo is at address 42". If the record changes, the authoritative DNS server sends a DNSSEC signed unsolicited response to all previous requesters to update their records. Obviously the TTL can be extended to keep the cache of requestor IPs reasonably sized.

Will this happen? Well, for UDP DNS it depends on DNSSEC, which is already not well supported, and it fixes something that is broken but not terribly so. One could imagine Google arranging this between its DNS resolvers and Chrome, for instance.

For DNS over HTTPS, this becomes much more feasible.


That is an insane amount of state for auth dns servers to maintain.

“Pushing” the message out that the record has changed would also prove tricky to implement I’d say.


Worst case, 2^32 bits is 500MB. If you think that you'll get less than 134 million distinct queries, a simple list or a sparse array may be better.

Obviously you need one of these bitmaps for every change domain (i.e. 1 per zone, or 1 per A/AAAA/CNAME record set, operator choice), and you need to clear it every (extended) TTL.

So a CDN with 100,000 dynamic IP records might split themselves into 1,000 change domains of 100 records each, have a 1 hour TTL (expiry's staggered), and use 500GB ram to do this.
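
A toy version of that bitmap in Python, just to make the arithmetic concrete (2^32 bits is roughly 512 MB per change domain):

    import ipaddress

    class RequesterBitmap:
        """One bit per possible IPv4 source address."""
        def __init__(self):
            self.bits = bytearray(2**32 // 8)    # ~512 MB

        def record(self, ip):
            n = int(ipaddress.IPv4Address(ip))
            self.bits[n // 8] |= 1 << (n % 8)

        def seen(self, ip):
            n = int(ipaddress.IPv4Address(ip))
            return bool(self.bits[n // 8] & (1 << (n % 8)))

        def clear(self):
            # Reset once per (extended) TTL, as described above.
            self.bits = bytearray(len(self.bits))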


so that also means that the client has to be kept connected to the dns server ?

if so, that's a nice spot for user tracking.

if not, how do you push data to clients behind a nat, without a stateful and persistent tcp connection? for most dns queries udp is sufficient (although sometimes tcp is necessary).

500GB ram for 100k records seems quite a lot btw.


What you're describing is somewhat like DNS Push Notifications (RFC 8765): https://tools.ietf.org/html/rfc8765


>The urban legend that DNS-based load balancing depends on TTLs (it doesn’t - since Netscape Navigator, clients pick a random IP from a RR set, and transparently try another one if they can’t connect)

That's just not how this works at all. While you could use RR records for this purpose, I believe the author is suggesting that load balancing will happen automatically when the client simply can't connect to one of the addresses. That's not load balancing. That's failover.

Additionally, most of the use cases for this that I'm aware of are CNAME -> A record. This is to say, this method is being used with precision rather than RR.

I agree that running 60 second TTL's regardless of need is inefficient, but at a fast glance, the full argument doesn't hold up for me.


I think load balancing in that argument happens via “clients picks a random IP” and failover happens via “transparently try another if they can’t connect”.

So that would be both load balancing and failover, why doesn’t the argument hold up?


load balancing is more like you have 5 records, I'll serve 3 of them back to you.

next client comes in, I'll serve three again, possibly different from the three i've served before.

the client doesn't even know that there are two other possible endpoints (at least until the next query).

edit: i just tried running this

    watch dig -t A www.amazon.com @8.8.8.8
and saw the record change from time to time.


This only applies to the first request until the cache expires.

If a client makes 50 requests before the cache expires, then those will all be based on the cached result.

This is still efficient enough that there's probably no more than a single DNS hit for every web page load, even with a short (say, 5 second) TTL, because most web assets will be loaded within that five second window. (If your web page takes longer than 5 seconds to load, you have far more significant issues than a few UDP DNS requests.)

Whether the list of invalid use cases are straw man arguments is left as an exercise to the reader, but this article seems to be arguing only one side of the perfectly valid trade-off between flexibility (low TTLs) versus latency (high TTLs).

In other words, if high TTL's are so great and there's no compelling reasons to not use them, why not make them one year? Ten years?

On the other hand, many (probably most) applications can probably absorb a five-minute outage without anyone screaming too loudly.

Clearly there is a balance between "long" and "short" (probably somewhere between one second and infinity). It's good to think about these things and optimize for lower latency, but if five-minute or longer TTL's simply don't fit your use case, then don't feel bad about it.


I was happy to have a low 10 minute TTL a few days ago when Netlify's apex domain IP address stopped working and I had to change it to the new IP that they announced on their status page...! :-) [0]

Netlify's "previous" IP was down for ~4 hours.

[0] https://news.ycombinator.com/item?id=26581027


Okay, I thought this would be a little more hyperbolic than it is. TTLs under a minute are a little ridiculous. 5m is plenty long for sessions and plenty short for migrations/recovery/what have you.


LOL, a lot of arguments for a feature that makes sysadmin/dev life easy once a year at the expense of a degraded user experience every day (a lot of sporadically broken ISP etc. DNS servers that civilians can't be expected to bypass). Digital littering.


quite the opposite, actually.

more dns queries with a lower-ttl (say 10 minutes) means one additional round-trip every 10 minutes. how long can a round-trip be, 200msec worst case scenario?

that looks good.

now on the other hand, assume a 48h ttl and something breaks. now you've got all your users unable to reach your services for up to 48h. or worse, some of your users will go to the old ip, some to the new.

what's worse for the user, a round-trip from time to time, or an extended outage ?


I wish 10 minutes was the minimum acceptable TTL, that'd be tremendous progress already.

It's typically much more than 1 round trip, given typical amount of crap frameworks and gadgets that each load themselves and dependencies from around the world on a typical webpage. On a page that takes 50 seconds to load all the crap from 50 servers each with a 5 second TTL through a flaky ISP DNS server, you get hit all the time basically.

200ms is not worst case, that's more like median, worst case is DNS being stuck for minutes with responses lost or very slow (say 20 seconds). Often DNS is the only thing that's broken, and if you're in one session (with no new DNS request required beyond refreshes from expired TTL) it makes the difference between the user being stuck or being able to proceed unhindered, until they go to a new site requiring a new request.


The only time I've seen DNS failing after seconds, about six seconds iirc, was when a host had three DNS servers configured, all of which were wrong (public DNS servers, used to query an internal zone).

It took seconds for the request to time out because the host tried all three records one after another, and gave up only when all three had failed.

But, uh, that's an uncommon situation.


I've seen issues with some DNS caches not honouring the TTLs if they're too short (less than 1 hour iirc, although memory is a bit hazy, it was some years ago) - in particular academic institutions tended to be the biggest culprits for this.


I've seen this happen with mobile providers and ISPs in APAC, especially Australia and New Zealand. In the worst case, a migration we expected to take place within an hour actually long-tailed to a full 24 hours - where within an hour, practically all of the US and Europe had migrated, and practically none of ANZ had.


Australia and New Zealand probably feel the pain from short TTLs much more simply because they are so far away from most servers. Sure, the large CDNs and DNS providers have edge nodes there, but to everyone else they have 200-300ms ping.


CloudFlare has an "Auto" TTL option, which is the default, and it is required when reverse proxying through CloudFlare. There is nothing magical about "Auto" TTL, though: it appears to literally always be 299 seconds. A lot of low TTLs you see are probably caused by CloudFlare.


Similarly, AWS Route 53 alias records use a 60 second TTL and there's no way to change that, so that's probably about a quarter of the Internet right there. Also when creating a manual record in Route 53, the default is 300 seconds and you'd have to go out of your way to pick another value.


Doubt long TTLs matter that much, given that plenty of software also has a max TTL value[1], including all popular browsers (Chrome(ium), WebKit aka Safari, Necko aka Firefox, Trident aka IE) and the most popular mobile OS (Android). You could maybe get lucky with some caching on your router, but in my experience cheap consumer routers just act as DNS forwarders and have little to no caching (I could not find any explicit data on this however).

1: https://www.ctrl.blog/entry/dns-client-ttl.html


Part of the problem is that so many devices are poorly behaved when it comes to DNS. At one point I worked for a company that had a large mobile app presence. We set up new authoritative name servers to conduct a test for a week or so. After the test was completed we removed the name server records. A lot of clients went away very quickly... but way more stuck around way longer than they should have.

At two months post test, those test servers were still getting some traffic.


Had to get to the very end to see that 'ridiculously low' was anything shorter than "between 40 minutes (2400 seconds) and 1 hour."

No thank you, if there's an outage that needs a DNS update to resolve it, 5 to 15 minutes is much more reasonable.


If we changed 5 minute TTLs to 1 hour and lost that ability to recover, what would we gain in saved traffic? My guess would be not very much.


Short TTL can be used for activity tracking.

You can use dnsmasq --min-cache-ttl= to set the minimum.

Unfortunately you have to recompile to have a minimum longer than 1h.


The problem with generalities is that they tend to pick the examples that don't generalize well.

In the case of the GitHub example, the author is fixated on DNS, where in reality the DNS entry is the entry point into Fastly's anycast CDN endpoints, and DNS is used to point in the general direction of the correct anycast entrypoint. Fastly's CTO did a great talk a few years ago about load balancing which addressed the DNS issues based on the actual data they have from the edges that service billions of requests.

TL;DR of the DNS portion of that talk is "use as low TTL as you can humanly get away with"


Unrelated to the author's post, but for LetsEncrypt TXT records (to have wildcard SSLs), I've always set the TTL very low (in the 1-2 minute range). This is because when I renew SSLs, I don't want to wait for cached copies of those TXT records to expire all over the Internet.

I think that doesn't really affect anything traffic-wise. Just a thought I had in mind reading the article.
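
One way to avoid guessing at propagation is to poll for the challenge record before asking the CA to validate; a sketch assuming the dnspython package (the record name and token are hypothetical):

    import time
    import dns.resolver   # assumes the dnspython package

    def wait_for_txt(name, expected, timeout=600, interval=15):
        """Poll until the ACME challenge token is visible, then let validation proceed."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                answers = dns.resolver.resolve(name, "TXT")
                if any(expected == b"".join(r.strings).decode() for r in answers):
                    return True
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                pass                      # record not visible yet
            time.sleep(interval)
        return False

    # wait_for_txt("_acme-challenge.example.com", "token-from-the-acme-client")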


What are the use cases for having the TTL shorter than 5 minutes?


The article posits why:

Why are DNS records set with such low TTLs?

- Legacy load balancers left with default settings

- The urban legend that DNS-based load balancing depends on TTLs (it doesn’t - since Netscape Navigator, clients pick a random IP from a RR set, and transparently try another one if they can’t connect)

- Administrators wanting their changes to be applied immediately, because it may require less planning work.

- As a DNS or load balancer administrator, your duty is to efficiently deploy the configuration people ask for, not to make websites and services fast.

- Low TTLs give peace of mind.

- People initially use low TTLs for testing, and forget to crank them up later.


The DNS based load balancing isn’t a myth if you want to do any kind of load balancing that isn’t round robin. If you want to, say, send 10% of traffic to data center A and 90% to data center B, and you don’t want to use up 10 IPs to do that.
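
A sketch of what the authoritative side of that looks like; the IPs and weights are made up, and the weighting only takes effect as fast as downstream caches expire, which is exactly why these setups push TTLs down.

    import random

    POOLS = {                      # hypothetical data centers
        "dc-a": ["192.0.2.10"],    # should get ~10% of traffic
        "dc-b": ["198.51.100.20"], # should get ~90%
    }
    WEIGHTS = {"dc-a": 0.10, "dc-b": 0.90}

    def answer_for_query():
        """Pick which RR set to hand back for one response."""
        dc = random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]
        return POOLS[dc]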


But those are not valid use cases, which was my question. So there are no valid use cases at all?


> Administrators wanting their changes to be applied immediately, because it may require less planning work.

Why is this not valid?


But 5 minutes should be fine? It surprised me that so many have 1 second or 20 second TTLs.


If I need to get a web service up and I can save 4 minutes by setting a low TTL when I configure my DNS record why wouldn't I?


Because you're pushing the cost on to someone else.

If your DNS hosting provider charged you per query (some do, especially when adding features like health checks & load balancing), then it might make a big difference.


A valid migration plan should be able to handle it without a very short TTL.


The major cases revolve around failure recovery, and traffic distribution. A 5 minute outage is not acceptable in many industries or at scale.

If a load balancer or DC fails we need to ensure traffic moves away fast. Similarly if you want to take a system out for maintenance or perform migrations.


> A 5 minute outage is not acceptable in many industries or at scale.

Well, if that's the case, you had better have your redundant systems on your normal DNS entries, because there is no chance you will distribute new entries over the internet in 5 minutes, whatever value you specify as the TTL.


There are tons of things that don't follow TTLs, but a large majority of normal people traffic does.

Easily 90% of new connections will move following the TTL. Of course, some traffic got a DNS result once in 2003 and is going to use that forever. If it's important traffic, you can trace it and follow up with them. If not, you do the best you can and let the rest go.


Found an earlier HN discussion about this question from November 2019: https://news.ycombinator.com/item?id=21437160


A low DNS TTL for testing purposes is a valid use case.


This is about production systems of large enterprises.

...

Please tell me you're not testing in production.


AWS application load balancers have records with a TTL of 60s. Presumably they are doing it because they want the flexibility to change the IP addresses or the number of IP addresses dynamically. Seems like a reasonable use case.


Maybe for those cases where one can get a random IP from their ISP and has e.g. a hopto.org hostname configured?

One would probably be OK if it was 5 or 10 mins, but it depends on what's behind that dns entry and how often ISP can change the IP.


Do any ISPs generally change dynamic IPs more often than modem/routers reboot?


Some ISPs do it on a daily schedule. I know that the German Telekom rotates customer IPs every night at 01:45 AM, because a friend of mine is with them and that's the time when he drops from the video conference for a minute (if we stick around that long).


DNS based network load balancing. If you have two data centers, and you want to be able to dynamically and deterministically shift load between them, you want a short TTL so you can control the percentage of traffic going to each data center.


If you want to deterministically shift load, you use routing, not DNS, to manage your load.

That’s what is missing from this discussion.


Anycast/BGP traffic engineering is not nearly as accessible as DNS based load balancing. There's several DNS providers you can use to add load balancing and health checks on top of your existing hosting, wherever that is (and you can do it yourself too). Using anycast for this means a specialty hoster, a third party in the data path between your clients and servers, or running your own ASN. I'm sure that gets more precise results than DNS, but it's also a lot more work, and it's harder to replace if the providers involved stop being a good fit.

Determinism isn't necessarily required either. Probabilistic shifting works fine mostly.


How would you use routing to balance load at that granularity?


Rather than get into a lot of details, here's some excellent starting points:

[1] Google Cloud networking in depth: Cloud Load Balancing deconstructed - https://cloud.google.com/blog/products/networking/google-clo...

[2] What is AWS Global Accelerator: https://docs.aws.amazon.com/global-accelerator/latest/dg/wha...

[3] Tumblr: Hashing Your Way To Handling 23,000 Blog Requests Per Second: http://highscalability.com/blog/2014/8/4/tumblr-hashing-your...

[4] Load Balancing without Load Balancers: https://blog.cloudflare.com/cloudflares-architecture-elimina...


Yes, I understand how anycast works (I work for an anycast based CDN).

The issue is that you can't do percentage based routing with anycast... in fact, you can ONLY do shortest hop routing with anycast (at least for WAN anycast). That means that, while different edge networks can go to a different datacenter, every individual edge network will hit only a single datacenter.

The key issue is that anycast is a very blunt tool. You are relying on your BGP announcements to route traffic, but you aren't actually in control of where a particular request goes.


Rather easily. There are routing protocols designed for such things. Far more reliable than trying to hijack DNS for load balancing.

Indeed the root DNS servers are not a single server but pools of geographically distributed servers via anycast.


Anycast doesn't support percentage based load balancing unless you control all the hops between client and server, which is almost never the case if you are serving the public.

Every request that comes from the same network is going to be routed the same way. Anycast works great for regional load balancing in general, but it doesn't work for subdividing individual networks.


I always was under the impression people lowered the TTL when updating records... which doesn't even really make sense since the change won't propagate until the previous TTL is overrun anyways.

Then you were supposed to update it to a longer TTL when your change had propagated.

So, I guess, after understanding things better... there is no use case really since if it's a new record your change will always propagate, and if it's an old record, lowering the TTL on update doesn't really matter since the old TTL will still be in effect.


It could help to lower your TTL way before you plan to update. If your current TTL is 24h, you just update it to 5m 24 hours before you plan the actual change of the record itself. The new record can then be set to 24h directly (unless you want a quick turnaround for rollback).

It's still no guarantee all changes will propagate within 5 minutes. But it gives some ease of mind to know the bulk of change won't take a day.

Also a lot of people forget the negative caching of NXDOMAIN records which is set by the TTL of the SOA record. Which means that it will take a while for your new record to be resolved if you started querying before you set the record.
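
To make the timing concrete, a tiny sketch of the arithmetic behind that plan (the times are hypothetical):

    OLD_TTL = 24 * 3600          # TTL currently on the record
    NEW_TTL = 300                # temporary low TTL for the change window
    cutover = 1_700_000_000      # hypothetical epoch time of the planned change

    # Lower the TTL at least one old TTL before the cutover, so every cache
    # that still holds the 24h answer has expired and picked up the 5m TTL.
    lower_ttl_no_later_than = cutover - OLD_TTL

    # After the cutover itself, the bulk of resolvers should follow within ~NEW_TTL.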


Yes - it's called planning. Lowering TTL well in advance is not a new concept, and it works very well if you have enough advance notice from the application owners ahead of time :p


Unfortunately, data centers rarely give 24 hour notice about catastrophic fires or provide advance notice about utility outages when the automatic transfer switch is going to fail.


> It's still no guarantee all changes will propagate within 5 minutes. But it gives some ease of mind to know the bulk of change won't take a day.

Especially it gives you peace of mind that should stuff go badly wrong you can easily revert the change.


You got the use case right, but I think you're still thinking about it wrong. In-use public DNS changes aren't something that just crop up and need to be done immediately.

Eg. Five days from now, I'm going to make an infrastructure change that will affect public DNS. Today, I lower my TTL on affected records. Five days later I make the public changes, still using the lower TTL. Once I am satisfied my change will stay in production, I modify TTLs to be more sane.


In practice though other things come up and the admin doesn't get round to increasing the TTL back to sanity later because nothing is broken if they don't.


Sure but in a department where people are sysadminning reactively, they’re going to have that problem crop up everywhere. This type of thing wouldn’t be neglected in a well thought-out change process.


That makes a lot of sense, I've updated a lot of DNS records, but not generally in a planned manner.


I lower the TTL a few days before I'm going to make a big change. This is also how we did it in the past at a large corp I worked at.

Does it actually make a difference? I dunno but it just feels right.


You could lower TTL if you know a change is coming.


Exactly. The TTL is lowered in advance of the maintenance window, sufficiently far out to allow any entries with the old TTL to expire from most resolvers. Once the maintenance has been completed and validated, and sufficient time has elapsed to decide there are no issues requiring roll back, the TTL is raised back up to its stable value.


Windows Update. To reboot a server, you need to take it out of production. With a TTL of 5 minutes, it can take an hour for (nearly) all users to stop using that server.


Sorry - why would a 5m TTL take an hour to stop using? Shouldn't it be 5 minutes?


If things behaved nicely, yes. There's all sorts of weird DNS caching behaviour out there. It's not unusual to find folks with DNS servers / clients that are caching records for 1 hour+, and then of course there's people running super old versions of Java that used to cache DNS forever by default (before JDK 6). There's a very clear set of users that seem to cache for 10-15 minutes, regardless of any DNS TTL.


You can't fix systems that ignore your TTL by specifying lower TTL values.


Sure. My general approach is to use lower TTL values (~ 5 minutes) and just accept that if people do dumb things, they just have to put up with things randomly breaking unexpectedly.


Good grief- you do not need to reboot the server; just flush the cache https://www.dnsstuff.com/clear-flush-dns-server-cache-window...


They mean they're rebooting the server having the IP that's entered in DNS, not rebooting the client consuming that service.


We have a service that uses AWS Route53 health checks, and we set the records to a 60s TTL, because if there is a problem and the primary service fails its health check, we want clients to get the updated DNS records, which point to another data center, fairly quickly.

In our case, primary is AWS with a protection service in front of it, and secondary is our own servers at a data center. So something like VRRP wouldn't work.
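
For reference, a sketch of what such a primary/secondary pair might look like through the Route 53 API via boto3; the zone ID, names, addresses and health check ID are all hypothetical.

    import boto3

    route53 = boto3.client("route53")

    def upsert_failover(zone_id, name, ip, role, set_id, health_check_id=None):
        rrset = {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,
            "Failover": role,            # "PRIMARY" or "SECONDARY"
            "TTL": 60,                   # short TTL so clients move quickly on failover
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": rrset}]},
        )

    # upsert_failover("Z123", "api.example.com.", "192.0.2.10", "PRIMARY", "primary", "hc-id")
    # upsert_failover("Z123", "api.example.com.", "198.51.100.20", "SECONDARY", "secondary")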


Reading the article and then reading the comments is interesting. I guess this is a good example of a feature which in theory would benefit both users and sites - but which falls flat because it's infeasible for ops.


It's the classic problem with externalities. Every individual person does the thing that's most convenient for them, society has to suffer the consequences, but since no person individually caused the problem, it doesn't get fixed.

Honestly, the only way I would see this resolved is if Google demoted sites with low TTLs in SERP ranking, but they're no saints either (I can see a 5 minute TTL for google.com over here).


Interestingly, in my experience there is always a long tail of laggards after IP changes, where some folks do not notice the change for a very long time or at all. Having a long TTL makes this worse/take longer.


I noticed Cloudfront sets a TTL of 60 seconds on its distributions and also on the elastic load balancers. You pay for every Route 53 lookup if you have an ALIAS record pointing there, as is typical. So AWS does not have an incentive to set it any higher.

But if I understand it correctly, you can point a CNAME with a long TTL to the appropriate cloudfront.net record, and then you only pay for the CNAME one. The cloudfront.net lookup will not cost you anything. But the latency for your users will be worse because it adds a lookup (because an ALIAS record gets resolved without a lookup).


Am I missing something? Aren't ALIAS lookups free?

https://aws.amazon.com/route53/pricing/

"DNS queries are free when both of the following are true:

The domain or subdomain name (example.com or acme.example.com) and the record type (A) in the query match an alias record.

The alias target is an AWS resource other than another Route 53 record."


Queries to ALIAS records that point at AWS resources (ELB, CloudFront, etc.) are indeed free. The reason is directly related, too: we want to be able to raise and lower the TTL value without customers being impacted.


Interesting. I stand corrected.


> The urban legend that DNS-based load balancing depends on TTLs (it doesn’t - since Netscape Navigator, clients pick a random IP from a RR set, and transparently try another one if they can’t connect)

Sure but if it can connect but then pukes out on something like a bad ssl or broken app, it’s not going back and trying another host.

So, when using dns for load balancing, it’s preferable to have a low ttl with a dns record tied to a host health check. If a host goes unhealthy it takes itself out of rotation, auto scaling brings a new one in, and it’s fully warmed up in a minute.


In 2016 Dyn DNS suffered a DDoS attack and sites including Twitter and Spotify became inaccessible. Higher TTLs would have extended availability from browsers with cached resource records.


One purpose for a low TTL in the solutions I have built is that you want to change the IP. So first you hit the DNS. You get an IP from some main location. Then after the first request you figure out where the user is located. Perhaps spin up some container close to the user. Then on consecutive requests you get an IP much closer to the user.

Another usage is to load balance out a lot of users to different web nodes for instance.

Edit: spelling


DNS could be operated better by many of those running resolvers, for instance by keeping caches primed for sites to reduce latency to end users - as opposed to extending TTLs.

This is probably the cheapest and best solution available for improving DNS related UX issues, and is likely to be something where a commercial DNS provider might do well.


From my short experience, the issue isn't that "the new service isn't available for the user" but that "the new service isn't available FOR THE CLIENT". Cue the "why isn't it up yet" emails/calls, answered with "it will take up to x hours to propagate".


Wouldn't imposing a lower bound on the TTL push more people to using anycast instead?


Seems likely, but good luck getting your average <$1B/y revenue business to do that.


So you have a high TTL thinking that DNS servers will cache your IP, yeh right, DNS servers like Google DNS will only cache it for a few minutes. Doesn't matter if you have high or low TTL.


I've often wished there was a way a web server could respond to requests with "Your DNS is out of date. Use this IP instead".


I'm running a local caching dnsmasq with a minimum TTL of 1h. The modern internet experience is really awful without it.


I wonder how low TTL compares to browser URL bar queries with respect to impact on DNS user experience.


Author probably never had to switch servers because of failure etc and then had to wait 24 hours until the traffic came back up while losing money and getting angry emails from clients who e.g. bought advertising.


I don't think the author meant 24h TTL should be applied to everything.


It doesn’t sound like the author has ever operated a large scale service. There are reasons why every big operator has short TTLs and it isn’t because they are stupid.


Frank Denis worked for years for OpenDNS, one of the largest recursive DNS services on the Internet. While there he developed DNSCrypt, which has many users and was instrumental in pushing for encrypted DNS. And looking through his github, he has other DNS tools as well. DNS is a contentious issue and I don't agree with anyone on everything they say about the subject, but I agree with Frank here on the waste of absurdly low TTLs and in any case it's wrong to think he is inexperienced.


I'm not sure that's right. By virtue of how caching works, it's significantly less of an issue for very large services which will have absurdly high cache hit ratios all the way out to the edge within the 2.5-5 minute windows just by sheer user volume per 2.5-5 minute window.

It's everyone below that, who don't operate very large scale services, who will see the benefits from longer TTLs.


Your comment would be a lot stronger if you could tell us what those reasons might be



