I've contracted an ex-Azure DNS team member to write up articles about DNS [1] and published them for free. I considered my DNS knowledge okay, but I learned something from every article he wrote.
If you want to be better at DNS than >99% of your colleagues for the rest of your career, then invest a single day in reading those.
I have found (and commented on previously, sorry for the repetition but it's so common as to warrant it IMHO) that one of the biggest hard-to-diagnose issues is in https://www.nslookup.io/learning/zone-delegation/ under "The duality of NS records":
>So there are actually TWO sets of NS records for every zone: the authoritative NS record set in the child zone and the delegation NS record set in the parent zone. It is recommended that these NS record sets be identical, but they do not have to be. Generally speaking, DNS resolvers can use either set of NS records. When a resolver has access to both, it will prefer the authoritative NS record set from the child zone. The way DNS data should be preferred, or ranked, by resolvers is specified in RFC 2181 section 5.4.1.
"Generally speaking" is all-too true.
NS records in the root are glue. If a resolver only has these records, it will (likely) cache them and not bother getting the NS records from the authoritative zone server (i.e., asking one of the "glue" servers for NS records), and consider itself done for that zone's NS settings.
But the two most common mistakes in DNS zone administration are lame delegation (non-existent SOA or NS records on the nameserver) and having different NS records in the root vs. the authoritative zone, possibly with different (and conflicting) DNS configurations.
Admins update the servers named in the SOA they are administering, but don't update the root (typically, at their registrar). They feel free to change the names in the NS records, or do so and forget to update the root -- because it quite often works fine. As long as the glue points to at least one working (non-lame) nameserver, all might seem okay.
The moral of the story is that it is very risky not to keep the root's glue in sync with the NS records in the zone (including TTL values!).
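A quick way to spot this drift is to compare the two NS sets directly. Below is a minimal sketch (assuming the dnspython package); the zone name and the parent-server IP (a.gtld-servers.net for .com) are illustrative, and it deliberately skips the TTL comparison, DNSSEC, and checking more than one parent server.

```python
import dns.message
import dns.query
import dns.resolver
import dns.flags
import dns.rdatatype

def ns_sets(zone, parent_server_ip):
    # delegation NS set: ask a parent-zone server with recursion off;
    # the referral comes back in the authority section
    q = dns.message.make_query(zone, "NS")
    q.flags &= ~dns.flags.RD
    referral = dns.query.udp(q, parent_server_ip, timeout=3)
    delegation = {rr.target.to_text().lower()
                  for rrset in referral.authority
                  if rrset.rdtype == dns.rdatatype.NS
                  for rr in rrset}

    # authoritative NS set: what a recursive lookup returns
    # (normally the child zone's own NS records)
    authoritative = {rr.target.to_text().lower()
                     for rr in dns.resolver.resolve(zone, "NS")}
    return delegation, authoritative

# e.g. check example.com against one of the .com servers (a.gtld-servers.net)
deleg, auth = ns_sets("example.com.", "192.5.6.30")
print("in parent only:", deleg - auth)
print("in child only :", auth - deleg)
```

Anything printed by the last two lines is exactly the parent/child mismatch described above.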
Some of the hardest and memorable troubleshooting DNS problems (for me) are:
1. A default-DENY firewall policy mandates that the admin be a DNS protocol expert, and often requires a refresher (of one's own volition, or ego-checking).
2. A gateway firewall blocking the incoming UDP responses of a secondary authoritative DNS server's record transfers. Firewall admins goof often, and delayed discovery of the DNS outage is the usual result due to poor network error logging.
3. A bastion DNS server (one kind of split horizon) is ALMOST always preferable to multi-view DNS in security theatre.
4. Forgetting to disable Firefox’s DoH after a full DNS block at the border gateway. I used a Firefox policy for the corporate and HomeLab networks (during cutover from public resolvers to the internal HomeLab/corporate resolver).
5. Setting up private DNSSEC root servers (in case of root DNS outages).
6. Negative cache resolution: balancing the TTL (see the sketch after this list).
7. Cache spillage: preventing internal corporate DNS records from being exposed.
8. Mastering resolv.conf (especially against its impending replacement by systemd-resolved).
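On item 6: a small sketch (assuming the dnspython package) of where the negative-cache TTL actually comes from. Per RFC 2308, negative answers are cached for the lesser of the SOA record's own TTL and its MINIMUM field, so "balancing" it means tuning those two values in the zone. The zone name is just an example.

```python
import dns.resolver

def negative_cache_ttl(zone):
    answer = dns.resolver.resolve(zone, "SOA")
    soa = answer[0]
    # RFC 2308: negative answers are cached for min(SOA TTL, SOA MINIMUM)
    return min(answer.rrset.ttl, soa.minimum)

print(negative_cache_ttl("example.com"))
```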
Most still hit me 10 years later despite re-reviewing my private DNS HOWTOs. Some of the above experiences I have posted on my website, organized by DNS topic [1].
Website caveat: still not sorry that Google Chrome still cannot negotiate HTTPS with an HTTP/1.3-only setup (ignore HTTP/2) using only the ChaCha algorithm (no AES/RSA). Use a different browser; that’s my firm security stance. (Most corporate firewalls should be blocking HTTP/2 until their transparent HTTP proxies have been upgraded to handle HTTP/2, because inline selective blocking within an HTTP/2 TLS stream is still a thorny hurdle.)
The fact that applications are now ignoring my DNS configuration in favor of their own makes me irrationally angry. Android, Chrome, Firefox, probably more I haven’t noticed yet.
I am the network admin here. Please follow my rules.
That's why these same DoH (and VPN) proponents are not network security experts.
They should be using client-side-TLS-signed (and verified) DoT only, at their own DNS-forward-blocking border gateway, with their WireGuarded remote DNS resolver; unless multiple DNS views are desired (such as on Qubes OS desktops), and then it is down the rabbit hole for DNS experts only.
(I frequently forget to say this, but always configure for one at the start.)
Corporate bonus if you can scrub all TLS traffic at the kernel level while running a transparent HTTPS/TLS proxy at the border gateway: that is, force all TLS through that proxy by a payload-detection mechanism and not just by port numbers.
>> May 06 16:34:03 host.contoso.com dockerd[1496]: time="2023-05-06T16:34:03.666703897Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
> I am the network admin here. Please follow my rules.
In an organisation, sure. But sometimes countries act like a giant man-in-the-middle, redirecting/blocking forbidden sites, e.g. Russia. The most popular DNS servers, like 1.1.1.1 and 8.8.8.8, are forced to comply.
Neither will work by itself as far as censorship goes, because it's not just DNS, but either can solve some minor geofencing annoyances, like Qt blocking access to https://download.qt.io or Spotify pointing their podcast CDN to 127.0.0.1 when using major DNS providers.
You are the network admin, just block access to port 443 and the problem is solved. The power of a network admin over a host admin is limited to what can be blocked.
And the more that is blocked, the more traffic will move to port 443, maybe using some relays at big cloud providers.
I'd say the job of a network admin is to provide positive services: reliable transport of bits, reliable DNS resolvers, etc. Beyond that, a network admin should not look at what users are doing with those bits.
> I'd say the job of a network admin is to provide positive services: reliable transport of bits, reliable DNS resolvers, etc. Beyond that, a network admin should not look at what users are doing with those bits.
I'll be sure to tell that to the regulator overseeing my industry and my Compliance department.
Maybe you don't want to place that solely on the shoulders of the network admin. These days most network data is encrypted, so the network admin cannot do much.
So either the host must be trusted to conform to policy, in which case it is up to the host admin to avoid bad setups, or the host should have no (direct) access to the internet.
The TL;DR is that in most cases DNS records with the same name but different record types expire independently of each other. In the DNS protocol there are two different ways to respond that an answer doesn't exist, with subtle differences. We had a typo in our DNS configuration that caused No Such Name responses on IPv6 queries... and we weren't using IPv6. No Such Name means the name doesn't exist for any record type, not just that the record you asked for doesn't exist. This caused our resolver to look at its cache and throw out every DNS record with that name... including ones we had statically configured. We no longer knew how to contact the root, so we could no longer do any external DNS queries that needed our root servers.
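For anyone who wants to see the two kinds of "doesn't exist" side by side, here is a minimal sketch (assuming the dnspython package). The query names and record types are illustrative; the NAPTR lookup is simply a type unlikely to exist for that name.

```python
import dns.resolver

def classify(name, rdtype):
    try:
        answer = dns.resolver.resolve(name, rdtype)
        return f"{name} {rdtype}: {len(answer)} record(s)"
    except dns.resolver.NXDOMAIN:
        # "No Such Name": the name does not exist for ANY record type;
        # a resolver may treat every cached record under it as gone,
        # which is the failure mode described above.
        return f"{name} {rdtype}: NXDOMAIN (name does not exist at all)"
    except dns.resolver.NoAnswer:
        # NODATA: the name exists, just not with this record type.
        return f"{name} {rdtype}: NODATA (name exists, no {rdtype} record)"

print(classify("example.com", "A"))                     # normal answer
print(classify("example.com", "NAPTR"))                 # likely NODATA
print(classify("no-such-label.example.com", "AAAA"))    # NXDOMAIN
```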
Re: #1
You don't have to be a DNS expert to know the ports/protocols that various DNS traffic uses. It's literally a question on every CCNA, CEH, and A+ certification exam.
This is a wonderful talk about learning. I have pretty much the same process: when being confused gets to be intolerable, I go and learn something. I would just add that once you learn it you should write it down. Personally I keep man pages in ~/man/manpj, but everyone has their own system. Ideally it should be as unpolished and secret as possible, so that writing it down takes just an instant and there is no pressure to polish it. But it should still be as accurate as you can make it (or if you have further confusion try to write down yet-unanswered questions). Anyway if you spent 2 hours to figure something out, what's so bad about spending 10 minutes to write it down? You'll remember it much better, and if you do forget then you can refresh your memory.
On DNS: I remember when Julia's DNS playground showed up on HN, and one thing I didn't see explained then was my own personal 10-year (no, 20-year!) DNS confusion: iterative vs recursive lookups. In an iterative lookup, you have to keep asking one DNS server after another for answers until you get what you need. In a recursive lookup, your DNS server does that. What doesn't happen is you ask server A, then A asks server B, then B asks server C, etc. That is the model that "recursive" suggests to me, but it's wrong. It's more like your server does the iterative request that you would have done yourself. At least, that's my understanding today. I hope I got it right! :-)
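That mental model lines up with what resolvers actually do. Here is a small sketch (assuming the dnspython package) of the iterative walk: start at a root server and follow referrals down, with recursion turned off at every hop; a recursive resolver performs this same walk on your behalf. The root IP is a.root-servers.net, and the error handling is bare-bones on purpose.

```python
import dns.message
import dns.query
import dns.flags
import dns.rdatatype
import dns.resolver

def iterative_lookup(name, server="198.41.0.4"):   # a.root-servers.net
    while True:
        q = dns.message.make_query(name, "A")
        q.flags &= ~dns.flags.RD                    # "do not recurse for me"
        resp = dns.query.udp(q, server, timeout=3)
        if resp.answer:
            return resp.answer[0]                   # reached the authority
        # otherwise this is a referral: take a nameserver from the authority
        # section and find an address for it (glue if present, else look it up)
        ns_name = next(rr.target.to_text()
                       for rrset in resp.authority
                       if rrset.rdtype == dns.rdatatype.NS
                       for rr in rrset)
        glue = [rr.address for rrset in resp.additional
                if rrset.rdtype == dns.rdatatype.A
                for rr in rrset]
        server = glue[0] if glue else dns.resolver.resolve(ns_name, "A")[0].address
        print("referred to", ns_name, "at", server)

print(iterative_lookup("www.example.com."))
```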
You nailed it! Recursion, propagation, registry, registrar, registrant. Naming in the DNS could — ironically — use some work. But it's too late now, I suppose.
I really like the format of this web page, where it uses the slides from the presentation and places the transcript of the text so that the slides and the text are next to each other for each slide and you can read the whole thing as an article with slides. That’s very nice!
Julia Evans makes nice things in general. I bought the collection of her printed zines a little while ago, and finished reading them all, including the zine she references in this post. I picked up a new trick or two along the way and learned some new pieces of information as well.
I like the format in that sense, but you (& Evans) and I must have very different.. I don't know, monitors/browsers/settings (but I'm not knowingly doing anything that weird, 100% zoom in Firefox on a landscape 2560x1440 display) - it's unreadably narrow to me, not even just stylistically (I don't like there being so much more dead/negative space than content) but the images are illegibly small. The wrapping's not as nice, but it's overall much better (to me) without the `max-width: 45em`. (The video at the top actually fits then too.)
It wasn't the focus of the presentation, but the title brings up one other part she didn't cover explicitly. My kneejerk response at seeing the title was "how could it possibly take 10 years just to learn DNS, if you're actually spending some nontrivial amount of that time trying?"
This might be partly because I have knees, and am a jerk. (But aren't those true of most of us?)
And I'm kind of right. Understanding enough DNS for most practical purposes should take far less than 10 years, even if you really are spending 6 months at a time ignoring it (as jvns says in the talk). And that might be enough, who knows.
But there are so, so many things where when you actually try it out, or need to scrape down a layer because you're using some additional feature, you suddenly find that the stuff you thought you understood is actually perched atop a shifting layer of incandescent hairballs.
For me in the specific case of DNS, one time my smug confidence was shattered was from a previous jvns presentation, where she demonstrated that my naive mental model of DNS cache TTLs was not only wrong, but didn't even make logical sense. (This isn't the negative caching from the talk, it's boring positive cache results where the TTL only starts counting once something gets put in the cache.)
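A tiny experiment (assuming the dnspython package) makes that behaviour visible: ask the same recursive resolver twice and the returned TTL shrinks by roughly the elapsed time, because the countdown started when the record entered that cache, not when you asked. Anycast resolvers such as 8.8.8.8 may route the two queries to different caches, so treat the output as a demonstration rather than a guarantee.

```python
import time
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]   # any recursive resolver will do

first = resolver.resolve("example.com", "A")
time.sleep(5)
second = resolver.resolve("example.com", "A")

# the second TTL is (roughly) the first minus the elapsed time, because the
# resolver serves the same cached entry instead of restarting the clock
print("TTL on first query :", first.rrset.ttl)
print("TTL on second query:", second.rrset.ttl)
```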
That's a general pattern for so many technical things. The performance of your non-IO code is based on the number of instructions and the instructions per cycle. Oh wait, it isn't, you can pretty much ignore the instructions and just count memory references. Er, no, memory references aren't equal, you need to count up how many hit each level of the memory hierarchy. No that's not it either, some of those slow things don't seem to matter, I guess it's all down to data dependencies. Wait a minute, this code isn't accessing much memory at all, but all these integer divides are bogging things down, maybe I do need to pay attention to the instructions... I could go on. Those hairballs are getting kind of bright, aren't they?
Ten years isn't seeming nearly long enough anymore.
> Ten years isn't seeming nearly long enough anymore.
I'm getting better at feeling energized by this view rather than sad and overwhelmed. Something about embracing my inner dummy and keeping the beginner's mind attitude at the forefront.
Every day I interact with coworkers who know way more about computer stuff than me, and coworkers who know way less. We are all on this exciting bottomless journey of knowledge and mastery together, and it's awesome.
Each level of abstraction unveiled is itself interesting, even if it's still hundreds of layers above the ground floor. Reminds me of the Feynman quote “Everything is interesting if you go into it deeply enough”.
I feel lucky to have found a career in which I enjoy the minutia as well as the bigger picture, and it also happens to be able to create incredible real world value when applied to the right problems in the right ways.
> I feel lucky to have found a career in which I enjoy the minutia as well as the bigger picture, and it also happens to be able to create incredible real world value when applied to the right problems in the right ways.
Sometimes it's tiring. I have stumbled upon a bizarre problem where a Windows NTFS partition isn't accessible anymore because of an "access denied". Only NT AUTHORITY\SYSTEM has access to it, and I did not manage to change the partition's ACL. True, digging deeper into Windows's ACL and filesystem should be interesting, but I find it a burden when things should work not just for yourself.
Yeah, some types of practical knowledge are interesting in their own right, and other types could only be described as "Not all scars are on the outside".
> The idea that "I should understand this already" is a bit silly. For me, I was doing other stuff for most of the 16 years!
I strongly believe this, and the “you should know this already” attitude is common and dangerous.
As an aside, indeed this is true about less abstract things. Kids learn languages well because they don’t have as many other things to do compared to adults (plus they only need to know a kid level vocabulary). I know three adults who learned English after the supposed “easy” age (14, 24, and 24). All have excellent vocabulary and two speak with the local accent.
This is true of music as well. Some people (like me) play music OK at best but well enough to have fun. Others just can’t stop thinking about it, playing (i.e. practicing) and as far as I can tell are all quite good.
I bet most or all of us feel this internally, but socially can’t believe it. It’s nice to see someone “point at the emperor”
The corollary issue is remembering, when you encounter it again, some issue you fixed two years ago.
I don’t write down every troubleshooting issue that comes down the pike or how to fix it on a particular platform- I’ve dealt with 6+ firewall platforms, 5+ load balancers, near infinite routing platforms- and the multitude of ways they interact makes it more important to develop heuristics rather than remember precise solutions.
I have been messing around with computers for a couple of decades at this point. I always learn something new every few days. Also many times at this point I get to relearn something I long ago forgot because I have not used it in 15 years... I also will deep dive on a subject sometimes. Just so I know it correctly. I can then help others when they are struggling with it.
Or you can help yourself in a few years when you forget you found an answer, and then you find a forum post that lays it all out and you read and think wow this sounds sort of familiar and then you look at the username and …oh, I wrote this! Such a fun little feeling, like time travel almost.
For the sort of things I do there are usually already hundreds of posts out there for that thing. I am not that original. For anything that I keep forgetting and keep having to dig out of some stackexchange post I put it in something like onenote. I have not done anything that would be considered ground breaking in 15 years. And even with that, the field has moved on considerably since then. My insights would be more interesting tidbits of my personal history than helping.
I haven’t gotten to that point yet, but I did find a StackExchange answer to a problem I was having and was wondering if that specific solution would work on my platform. And then I (from the past) pointed out in a comment how to make it work on my platform.
I think the hard part about DNS is not the public facing resolution of addresses, but rather how to understand it in enterprise environments.
A junior tasked with creating a DNS record on an enterprise's servers will have to go through a wild ride just to get that record created and set in a correct manner. Even worse if a service has to be reachable through both an internal DNS entry and an external DNS entry.
Not only is it stressful to find the IT deities to grant the junior their wish, it can be a monetary issue as well if the company needs to re-order public certs in case of mistakes.
The absurdity is striking when you see all this process around DNS in a large org, and then you remember that DNS was invented as a way of permitting delegation of authority, away from the centre.
I've seen it take weeks to get a DNS record configured in a large org. I'm talking something that should be simple, like adding a CNAME or A record for a new site.
Bonus points when it's spread across a bunch of different registrars and they refuse to delegate nameservers because of office politics, making things way more complicated to deploy and maintain.
In an "Enterprise" setting, the dev or ops team that needs certificates for internal services often has no direct access to the company's external web site or DNS, which may be owned by some other team like marketing or IT. So the options can boil down to jumping through bureaucratic hoops (if they even exist) for every request, or kludge together some other half-baked system that sorta gets the job done. Ask me how I know.
Makes sense. I mean, in the case of wildcard certificates, the number of requests/accesses could be reduced to one per subdomain, but still. I have the same problem at my university.
It took me 40 years to get this good at guitar. So naturally, I'm fairly good at playing the guitar. Or I suppose that's an understatement. I'm probably better than most people on this entire forum, yet I never practised a single day in my life.
Well, obviously that's why it took so long to get good lol! But I think there's also a lesson in it because it's also the reason why I kept on doing it, when everybody else quit. So, in that sense you could say the real reason I'm this good at playing the guitar today, is simply because I never quit.
But at the same time, I never really tried hard either. The entire journey has been about enjoyment and problem solving. It was never about the pressure to achieve anything. Instead, it was always about enjoyment and “zero pressure” problem solving, that is problem solving akin to solving some jigsaw puzzle. So, a pleasurable thing! And that's why I don't strictly consider it “practising” though it probably is. The rest of the time I'd just enjoy the tunes, and the singing and the good vibes.
I should probably add that I was never motivated to play the guitar because of chicks, or because I wanted to become a rock star. That's very hard to do when you grow up in a poor home, and the only thing you got was a crummy old acoustic guitar.
The cool guys all had electric guitars, and to them they weren't tools to become excellent at playing guitar. Instead, they were tools to get laid, or get popular, but not good at playing the guitar. So naturally, today none of the guys who had those fancy electric guitars play guitar anymore, while I do. And now I'm better than them. Way better. (But still no Polyphia, but then I'd have to practise!)
I was listening to a podcast recently where a guy was fairly well-known in his niche. To paraphrase, "I didn't get where I am because I'm better than everyone else because I'm really not and never will be. But I'm a stubborn motherfucker who doesn't know when to quit. My success is mostly down to the attrition of everyone else who expected results right away and that's just not how it works."
I feel like this applies to so many things included career-related learning, businesses, and yes, learning a musical instrument.
You’re completely right about attrition. It’s true on more levels than one! Staying power is often underestimated, but IMHO it also requires that you’re careful about not burning out. That’s where keeping things enjoyable comes in. If it’s not enjoyable in the long term (or at the very least tolerable), then IMHO it’s not worth doing.
I’ve definitely also been pretty stubborn at times, especially when I found a problem that really was in the way of achieving some musical goal (or any goal, for that matter). So, I’d sit down and angrily butt my head against the wall until that wall cracked. Though in pedagogical terms that probably isn’t the best way to attack a problem. Butting your head against an impregnable wall is really demotivating in the long run. In turn that can be damaging to your long-term motivation. But I also think it comes down to how you approach such problems.
The first trick is to only ever attack achievable problems. But if it’s too simple, then you also risk that it becomes boring. At least if you don’t vary the tasks. So, you need to up the ante a bit. The harder a problem is that you can still solve, the more mastery you gain. In turn that is highly motivating. So you need to find that zone; that zone of proximal development as it were, wherever that is.^1 (Sorry, pedagogy is an old war wound I have from working many years as a teacher.)
That doesn’t mean that I never try stuff that I know I won’t ever be able to master on the fly. I’ve also tried my hand at some things I know are seriously difficult, such as Polyphia licks, and… Well, let’s just say I still have some work to do lol. But I know I’ll keep at it because it’s a fun challenge for me. If it wasn’t, then I wouldn’t find any joy in trying, and then there would be neither motivation nor mastery in it.
If I already know that the problem is unsolvable, I approach it more as an experiment, to see how far I can get. And I always do it from the perspective of the joy of exploration. This is how I approach, say, Polyphia’s stuff. I already know that I probably won’t be able to figure it out on the fly, because that stuff is hard. But then it’s also not a problem to fail. It’s only an experiment, after all, since their music already brings me so much joy. So, I’d say that’s one way I keep motivated despite those things being really, really foreign to me in terms of guitar mastery.
This seems like an interesting presentation that I just skimmed through. But I really like the idea I get from the title, as a counter-statement against the general trend of presenting things as easy, and "learn x in five simple steps". Some things are hard, and maybe you can learn something useful in five minutes, but true mastery takes time.
My issue / problem is that it doesn't feel like there are many job opportunities for "true mastery"; a lot of places are fluid and will do big rearchs every couple of years.
Or maybe I'm biased from having worked in consultancy for ages and mainly being involved in said big rearchs. That said, I can't fathom working on the same thing for a decade. I tried in my last job, but after two and a half years of slow progress (I was my own product owner and the company didn't want to hire anyone else... or they did, it's just that they went onto Monsterboard and the like to find staff).
I mostly agree. My DNS mentor, whose job was basically a DNS industry expert, was fired without cause one day, and it always left a bad taste in my mouth.
That said, while I've always considered myself a generalist, being the 'DNS guy' has proven invaluable in my experience. People treat you like some kind of wizard.
I guess I've come to the conclusion that it's likely best to be a generalist with a few really deep specialist qualities.
> I can't fathom working on the same thing for a decade.
I don't think that is being suggested here? In fact, jvns specifically talks about working on other things for the vast majority of that time. I think the idea is that you're doing whatever you're doing, but there are these threads that you'll return to repeatedly, deepening your understanding each time.
Specialization in one area is an interesting topic (when does it make sense? When is it career-limiting vs career-enhancing? Can it be done in a way that avoids overfitting?) but it's something different than being discussed in the article.
I recently decided to start reading RFCs and picked up RFC 1035 [1] (DOMAIN NAMES - IMPLEMENTATION AND SPECIFICATION), as I'm self-hosting pihole+unbound and this could fill some knowledge gaps.
The ASCII struct formats in the RFC were intuitive to read, so I wanted to visualize the req/rsp packets in the same way to easily identify each field and show the whole packet structure.
If anyone else prefers to look at ascii formatted structs instead of wireshark hexdumps, or just want to see A/AAAA/NS records and packet fields, give kydns[2] a try. Feel free to provide any feedback.
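For reference, here is a small sketch of the RFC 1035 section 4.1.1 header those ASCII diagrams describe: twelve bytes, six 16-bit fields, which is why the diagrams map almost one-to-one onto a struct.unpack call. The field names and bit masks follow the RFC; the rest is illustrative, stdlib-only Python.

```python
import struct

def parse_header(packet: bytes):
    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack(
        "!HHHHHH", packet[:12])
    return {
        "id": ident,
        "qr": (flags >> 15) & 0x1,      # 0 = query, 1 = response
        "opcode": (flags >> 11) & 0xF,
        "aa": (flags >> 10) & 0x1,      # authoritative answer
        "tc": (flags >> 9) & 0x1,       # truncated
        "rd": (flags >> 8) & 0x1,       # recursion desired
        "ra": (flags >> 7) & 0x1,       # recursion available
        "rcode": flags & 0xF,           # 0 NOERROR, 3 NXDOMAIN, ...
        "qdcount": qdcount, "ancount": ancount,
        "nscount": nscount, "arcount": arcount,
    }
```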
There's one more cache that people often run into / get foiled by: various applications like browsers and JVMs also cache DNS responses. In Firefox, see about:networking#dns; in Chrome it's chrome://net-internals/#dns.
The JVM infamously doesn't (didn't? is this finally fixed?) respect TTL in DNS responses by default. Each name gets looked up once and then cached for the duration of the java process. This has caused innumerable support issues, particularly with zookeeper, kafka, etc. There's some setting you can use to force it to expire its cache after some number of seconds, but as far as I know you can't tell it to just honor the TTL from the response.
That's kinda a big one. Thankfully, so far, Firefox can still be set to some "corporate" setting where DoH can be forcibly turned off.
If you want to experiment and play with Wireshark to check DNS queries you pretty much have to use Firefox and you have to configure it to not use DoH.
It's what I do: Firefox with DoH prevented. Then I run dnsmasq on the local machine (the one also running Firefox) and I also run unbound on a RPi (because, really, why not!?). unbound is really sweet: you can match domains using wildcards and null route them, you can force a higher (or lower) TTL setting before the response expires, etc.
Then there's the more extravagant stuff I do: like using the firewall to automatically reject any query that tries to fetch a domain name containing Unicode characters (yup, I'm like that and, no, I don't care that it may break a few sites... Unicode characters in domain names can just die a painful death).
They likely mean Unicode outside the ASCII range. Last I checked Firefox refuses to implement the "fix" for this type of attack because it is "culturally insensitive" and "treats English as a privileged language over other languages".
I do respect the author's journey of learning DNS, and it was definitely a lot harder to understand and learn in the late 90s and early 00s, when virtualisation/containerisation was scarce or non-existent and the resources were a lot less gentle (BIND books, RFCs), and also involved what I consider gatekeeping.
However, I think in 202x you're going to have a much easier time, and in fact a lot of the information can be spoonfed to you.
I agree that the information is much easier to come across, but the signal-to-noise ratio of reliable information is much lower overall. Additionally, the amount of content and depth you must go into to truly understand a topic is far greater.
It was enlightening to see how Evans wrote the functions to interact with the DNS server without using any library. I really liked the part where she would copy paste the binary DNS query from Wireshark, convert it to a hex string and then just push it to the socket. I was like: “Can one do that? Copy and paste from Wireshark? Yeah, that’s apparently totally doable.” Brilliant!
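A minimal sketch of that trick, in plain Python with no DNS library. The hex string here is hand-built (a query for example.com, type A) rather than pasted from Wireshark, and 8.8.8.8 is just an example resolver, but the mechanics are the same: shove the bytes at port 53 over UDP and read whatever comes back.

```python
import socket

# header: id=0x1314, flags=0x0100 (RD), 1 question; then "example.com" A IN
query = bytes.fromhex(
    "1314 0100 0001 0000 0000 0000"        # id, flags, QD/AN/NS/AR counts
    "07 6578616d706c65 03 636f6d 00"       # QNAME: 7"example" 3"com" root
    "0001 0001"                            # QTYPE=A, QCLASS=IN
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(query, ("8.8.8.8", 53))
    response, _ = sock.recvfrom(4096)

# byte 3 of the header carries RCODE in its low nibble (0 = NOERROR)
print(len(response), "bytes back, rcode", response[3] & 0x0F)
```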
I learned DNS by configuring BIND in text zone files when I was 18, having no idea it was supposed to be something intimidating to learn. As with many things I assumed everyone in tech understood it already and if I didn’t I wouldn’t be legit. I do remember thinking at the time “geeze this sure is complicated”.
Feels kind of discouraging to think in 10-year timespans nowadays, when it seems fairly clear that looming on that timescale will be massive changes in how software is gonna be done (hence that effort won't bear fruit at the end of it). I.e. AI.
Dig down to fundamentals. Most of the foundational stuff for AI is older than or approaching ten years. If you'd started on them then, you would be an expert now. If you start now, you'll still become an expert and prepared to understand the next thing built on the same fundamentals.
The big thing right now is transformers: 2017 (6 years), based on much older work. It's the T in GPT. The foundations for all this stuff are over 100 years old. It's convenient for people with financial stake to make it seem scary and inaccessible, but all this stuff that's setting the stage for the next future is knowable.
You don't necessarily have to sit down and make your own AI. Just knowing the general shape of progress gives you a better shot at knowing what will break out the way ChatGPT did in time to think of how you can fit it into your own workflow. Or at least keep pace once it does. That's how I've kept ahead of the growing public anxiety.
Another way of seeing it: how has DNS changed in the last 10 years? There are a bunch of extensions, and usually users tend to use "top platforms" for hosting the name servers and DNS resolvers; otherwise it looks pretty much the same.
I have noticed it's mostly FAANG developers, which leads me to believe that it's not "public DNS" that is giving them headaches, but DNS-based service naming and discovery in the large-scale distributed systems at FAANG companies.
From what I've noticed, 99% of "things biting people in DNS" is the way a Windows domain does it; at least the sentiment that "it's always DNS" tends to be far stronger on the Windows side of things.
Yeah, there's no need for invalidation, just observe the TTL to the letter.
Oh hang on, let me take this call.. yes I wrote the DNS server, ah so it runs out of memory, uh huh, wait how many unique domai-, mm, a 2000 second TTL is pretty low actu-, wait years?!, no you're right stability is important, no I can't fix the whole world, ok, so like an LRU, yeah that would be a start at least, ok bye.
Wait a second.. THERE YOU ARE! You little rascal, you were trying to hide from me weren't you. You're a good little problem, yes you are, you're going to torture me later aren't you?, yes of course you will, it's a good thing you're so cute huh. Good now run along and go play with the other nightmares. Look at em go, wow they grow up in complexity so fast.
Anyway, what were we talking about? Oh yeah, "just" expiring the cache.
Oh there is definitely a need for invalidation sometimes, but the system doesn't support it, due to its openness: authoritative servers don't track their consumers.
But the protocol does support it. So, if you really need invalidation, go talk to the world's resolvers and ask them to process dns NOTIFYs from your authoritative DNS servers.
The spec is clear, you can expire the cache whenever you want/need before the TTL, and you must expire it immediately after the TTL. The only problem occurs when broken caches fail to expire.
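A toy sketch of that rule: dropping an entry early is always allowed (the eviction policy is the cache's business), but serving it past its expiry time is not. Nothing here is a real resolver cache; the class name and the eviction choice are made up for illustration.

```python
import time

class TTLCache:
    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self.entries = {}            # name -> (value, expires_at)

    def put(self, name, value, ttl):
        if len(self.entries) >= self.max_entries:
            # allowed: evict early, before the TTL (oldest-inserted here;
            # LRU, random, whatever the implementation prefers)
            self.entries.pop(next(iter(self.entries)))
        self.entries[name] = (value, time.monotonic() + ttl)

    def get(self, name):
        hit = self.entries.get(name)
        if hit is None:
            return None
        value, expires_at = hit
        if time.monotonic() >= expires_at:
            # required: never serve an answer past its TTL
            del self.entries[name]
            return None
        return value
```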
[1]: https://www.nslookup.io/learning/