Hacker News
Why is DNS still hard to learn? (jvns.ca)
507 points by TangerineDream on July 28, 2023 | 254 comments



I don't agree with this article. I think DNS is something few people take the time to learn, but it's not actually hard to learn. One of the great things about DNS is that the system itself will tell you about its internal state in response to queries. It's very easy to inspect a DNS server for a known zone and understand how it works, and there's very good tooling that's free and widely available to do this (like dig).

It's always been a big surprise to me that my DNS expertise is what seems to be most memorable for a lot of folks I've worked with through my career, when I don't believe I know anything mystical or special. DNS is extremely well standardized, the most common server and client implementations rigorously follow the standard, and it's very easy to inspect with free tooling. It just takes some effort and time to learn, but it's not really hard.


Hello! I wrote this post and I have a couple of things to say about this "DNS is not actually hard" take. It took me many years to feel totally comfortable debugging DNS problems, and I wrote this post to explain why I think it was hard for me.

I also used to think that "no, actually, it's easy!" was an encouraging response to "this is hard to learn". And I kind of get it! I love DNS! I think it is surprisingly simple in many ways, and I've written about that a lot, for example in https://implement-dns.wizardzines.com which shows you how to implement a toy DNS resolver from scratch in some pretty simple Python code.
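To give a flavor of the "surprisingly simple" part: here's a rough sketch (not the zine's exact code, and with a made-up query ID) of building a DNS query packet by hand with nothing but Python's standard library:

```python
import struct

def build_query(domain: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record)."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question,
    # 0 answers, 0 authority records, 0 additional records.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed,
    # and a zero byte terminates the name.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # qclass 1 = IN
    return header + question
```

If you send those bytes over UDP to a resolver on port 53 you get a real answer back, which is a very satisfying way to see that there's no magic underneath.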

But over the years I've learned that "no, it's easy to learn!", instead of coming off as an encouraging comment ("you can do it!"), often gets received as "no, it's not hard, actually the problem is that you're dumb". Like, I've been confused about this for years, and you're telling me that, no, actually it's easy? Not that helpful!

So I've stopped telling people that, and instead I put a huge amount of work into trying to understand _why_ people find certain things hard and work to help remove some of those barriers.


Just my 2 cents: I worked on the DNS in Windows Server back in the 200x's (validating the UI and back-end functionality, not dev work), so I did have to learn about it at one point and I recall that there was a vicious level of complexity to it compared to what we ask it to do. The 99% scenario is turning a domain into an IP via an A record. Next after that is doing the same with an MX record. Beyond that is this deep well of other things people thought someone might want to do with DNS, but that almost nobody takes advantage of in real life. So DNS is probably easy to understand if you're just concerned about the one or two first class scenarios, but if you had to implement an RFC-correct instance of a DNS server, it's a bit of a brain melter.


This is the comment I wanted to make. DNS is one of those technologies that has a simple explanation - "it's indirection" - with consequential ramifications that turn it into someone's job. Most people will not interact with it often enough to know how to perform the job, so they remain hesitant and try not to do anything at all with it, because it burned them once before.

Git is analogous - most uses of Git are formulaic, and the underlying concepts are simple enough - but actually accessing the right lever to pull when disaster strikes is unclear and hard to experiment with. I'm in this latter category with DNS: I get it at a high level, but it's like a student who has only done the simple example project.

I'm of the opinion that we're at a good moment to redo the things DNS does with better separation of concerns by going towards "put those records on a blockchain, streamline it for known applications, reframe the hard problem around bootstrapping access to on-chain data". It's already been explored in varying degrees (e.g. Ethereum Name Service, Symbol namespaces) but it's not really something that has to be the monopoly of any specific chain.


ANYTHING that has the word "blockchain" is automatically disqualified from pretty much any real use.

In particular, pretty much nobody wants to entrust DNS names that can be worth hundreds of millions to a blockchain. Because if you lose (or compromise) your private key, then that's it. The domain is gone.

With regular DNS this would be solved by a couple of irate phone calls to registrars. At worst, via a court case.


> Git is analogous - most uses of Git are formulaic, and the underlying concepts are simple enough - but actually accessing the right lever to pull when disaster strikes is unclear and hard to experiment with.

Would you mind sharing some examples of a hard-to-recover git-related disaster? I ask because I previously worked on applying hundreds of patches on top of Chromium (yes, Opera). This resulted in __multiple__ merge conflicts. However, nothing close to what I would call a git-related disaster. Anything could be undone nicely.

What is important is that git is very verbose by default. It guides you through not only the repo's current state but also the options you have.

I've met many developers who don't care to read any of git's output and then claim git is too complex, that git broke their repo, and that it's "too complex to use". I'm sure that's not the case for you, though.

So I'm just curious what kind of disaster was mentioned above.


> I've met many developers who don't care to read any of git's output and then claim git is too complex, that git broke their repo, and that it's "too complex to use".

This a hundred times over.


Not OP, but I've encountered a few situations where the solution was modifying .git/ manually. The one that comes to mind was having two branches with names separated only by case on a case sensitive system, and then checking out both on a case insensitive system.


Wouldn't it be easier to rename the branches on the case-sensitive system (a throwaway Linux VM would do), push, and pull?

Of course if the solution is editing .git/config, that's even easier, but it probably takes more than that.


It's been a while so my memory is fuzzy, but this was at a Windows-only shop back before WSL was a thing, and getting a VM involved an annoying months-long requisition process. I think the repo state was actually corrupted too, so pull may not have been working.


> I've encountered a few situations where the solution was modifying .git/ manually

Would be very interesting to know the purpose of modifying .git/ manually.


> and hard to experiment with.

Just on the topic of git: git makes it extremely easy to experiment! But far too many people, including those writing tutorials and blog posts, don't understand this. Whenever you're in doubt, just commit your work and make a temporary branch and switch to it. Then you can mess about with merging and rebasing and dealing with the conflicts to your heart's content.

This should be among the very first things we teach to new git users.


Maybe everyone sees this differently? Or maybe there is more than one viewpoint to it.

For example, if someone tells me that something is "actually really simple", and I have not gotten it yet, I tend to think that I likely (not certainly, but likely) have not found the right approach to it yet, and that once I find that way to look at it, things will resolve themselves.

But if people assure me that something is "hard to learn" or "hard to get", I'm rather suspecting that that thing is full of unwieldy complexity and/or unintuitive aspects, and unless I'm either up for the challenge (definitely the case sometimes), or learning it is crucial for achieving a certain goal (in which case it may turn out to be fun after all, even if still hard), I might think twice if I want to put the time and energy into it.

In the case of DNS, which I learned in the nineties, I found the principle simple and elegant, the tools and software unwieldy and sometimes arcane, and there was a large multitude of little details that are not strictly necessary for understanding and using DNS in basic ways, but that can become important when debugging or working at scale.

So, in the case of DNS, I'd say the answer is "depends on how deep you want to go", but don't shy away from the simple and elegant principle, even if the arcane software (it has gotten better, though) stands in the way.

I think that matches the content of your article overall?


> Maybe everyone sees this differently? Or maybe there is more than one viewpoint to it.

> For example, if someone tells me that something is "actually really simple", and I have not gotten it yet, I tend to think that I likely (not certainly, but likely) have not found the right approach to it yet, and that once I find that way to look at it, things will resolve themselves.

At least in my experience, people asserting that something lots of people have trouble understanding is "not hard" are doing so as a flex at least as often as they are trying to be helpful, so I'm surprised that a lot of people don't make the same hypothesis you do. When someone does so while also touting their status among their peers due to their knowledge and talking about how anyone could do it by just putting in the work, it's not surprising that people might interpret that as implying that people who don't have the same knowledge are either too lazy to put in the effort or not capable of it, regardless of whether that was the intended message. If the goal is to try to help people, it's more effective to communicate in a way that conveys understanding and not judgment; if someone doesn't care to improve their methodology of helping people, that's fine, but it does raise more doubts about whether they're being honest about their intentions.


> anyone could do it by just putting in the work, it's not surprising that people might interpret that as implying that people who don't have the same knowledge are either too lazy to put in the effort or not capable of it

Not spending time on something doesn't mean you're lazy or stupid, that seems like your own leap. If you choose to interpret in a charitable way it just says: "if you haven't looked here don't worry, it's not that hard, you'll just need to spend some time".

I haven't looked under many rocks I've walked by, talked to everyone, read every book I encountered, doesn't mean I'm stupid or lazy, I just haven't done it.


My point is that if your goal is to teach people, you should care about framing things in a way that's conducive to the people who you want to teach. Dismissing fairly common insecurities because you don't happen to have them will significantly reduce your effectiveness at teaching, and if you don't find it worth the time to take into account feedback about how to craft your messaging more effectively, I'd argue that your priority never really was teaching in the first place.


Your post reminds me of three things -

The nature of what's being learnt. Some things require a continuity - to understand B, prior A is needed (or helps, to understand faster).

The method of learning. Book/theory-based, or practical? For either, what's the nature of scaffolding (self, or via resources) to help leap the chasm? If testing one's self, what's the complexity and can that complexity be broken down into simpler (or more discrete) parts, (perhaps testing working better in smaller parts)? Perhaps A isn't fully (or at all) required to 'know' B, depending on how it's learnt. Which goes on to -

The nature of the learner (at that point for that task). Someone that's looking to solve a task, somewhat surface, or someone that's interested and will go deeper into edge cases or approach with greater curiosity?

[I'm skipping the nature of the learning/knowledge, since 'resolving DNS' is a pretty externally verifiable result. However it might be fruitful to consider the nature of the learning is not only 'resolving DNS', and even if 'resolving DNS' fails, learning always happens (intended/unintended, positive/negative, a can of worms there).]

You point out that 'easy' and 'hard' are motivators that might have unexpected, or the reverse, effects vs. intended, depending on the reader. When putting it into those 3 parts, perhaps this shows the usefulness of framing.


My own hassles with DNS, some of which are mentioned.

    1. Windows and Apache httpd and others cache things
       for a long time, seemingly forever.
    2. The 'A' record is used in case of missing MX records. Who knew that?
    3. (in the olden days) confusion with WINS/NetBIOS and other naming schemes
    4. The definition of 'authoritative'
    5. The misleading semantics of 'domain' in resolv.conf
    6. (minor) the casual use of CNAME to mean alias, whereas in fact
       it means the opposite and is used to refer to the record, not the name.
    7. That DNS can use tcp/53.  The number of misconfigured firewalls I've
       dealt with, argh.
    8. Split vs. forwarding DNS, e.g. the impact on the .dev domain.
    9. The urge of some to put underscores in their names.
       I believe that some (early?) implementations allowed them.
    10. Someone putting in a lower 'serial' in a zone file and
        the struggle to restore sanity.
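On point 2, the fallback rule itself is simple to sketch: per RFC 5321, a domain with no MX records is treated as if it had a single MX pointing at itself (illustrative helper, not from any real library):

```python
def mail_hosts(domain: str, mx_records: list[tuple[int, str]]) -> list[str]:
    """Return mail hosts to try, lowest MX preference value first.

    The surprise: an empty MX set doesn't mean "no mail". RFC 5321
    says to fall back to the domain's own A/AAAA record, as if an
    implicit "MX 0 <domain>" record existed.
    """
    if not mx_records:
        return [domain]
    return [host for _, host in sorted(mx_records)]
```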


DNS's caching behaviour in the wild is incredibly infuriating. We have one team member at my company who is stuck with a stale DNS record for one of our internal services in our application, but their browser works just fine. I've absolutely no idea how to actually fix it, so for that one person's benefit we bought a whole new domain and ported everything to that.

Split DNS has let us do some cool things, and simultaneously caused no end of headaches. See above for the sorts of headaches.


I thought about posting how I thought "DNS looks fairly straightforward" (which I guess it is on paper), then learned all the nuances over the years of maintaining a few services across a few domains and debugging network issues as a networking fw dev. I paused, and the difference between the parent and grandparent posts captures it perfectly.

Some things seem deceptively easy, and you can get by for years on basic features. Then it happens: you try something new you learned you can do with $THING, add to it, and things start to break, and you dig in and realize you never fundamentally understood how the basic features that got you here worked. And that's OK.


Perfect response!

My own response to "X is actually hard to learn" is ... compared to what?

   1. compared to your first impressions?
   2. compared to general opinions that you're aware of?
   3. are you talking about general principles or deep down stuff?
   4. are you talking about the learning curve?  Is it irregular? Steep? Shallow?
"X is not hard to learn" is ultimately a pretty empty statement.


I get it. Your take resonates with me. Playing piano is easy too, right? Right after you can do it. Until then it's mystically hard.

It just takes practice. Training helps but nothing can replace practice.

We can't jus b gud


jvns, your articles and conversational writing style are great. I hear you saying "'X is hard' is a way to not lose a reader by insulting their intelligence". My immediate reaction is "'X is easy' is a reminder that things that look hard upon first examination can be quite easy".

I think in general this level of "kid-gloves" is overall unhealthy for society. It encourages mental weakness.

At the same time, I understand that your aim is to reach as large an audience as possible so your educational material can have max impact. For this reason, I can justify you having this stance. That said, I don't think it applies globally.


Julia, I agree with your take here, thank you for bringing this thought process into the mainstream. There are many other concepts this happens with, and in other ways: people sing the praises of the pedagogic excellence of Skiena’s and Sedgewick’s algorithm textbooks (and the awful EPI book by Aziz et al), but I hate reading them; they’re actually written horribly as explanations, IMO. I think this kind of difficult writing on top of already difficult concepts is a weird type of gatekeeping (perhaps sometimes unintentional?) and it should be tackled all over computer science by better writers who have an explanation style like yours.


Simple is not always easy. Too simple can be difficult to grasp and sometimes adding a little complexity can actually make something more intuitive and easier.

DNS is simple, but it is not necessarily easy to understand.


As an aside: your content is top-notch in quality, and I always point to your site(s) when I want to share how this stuff works with an easy explainer. You make it seem easy :)


Howdy Julia,

I debated whether I should respond to you at all, however I feel that I must. You and I don't know each other, and based on the last sentence of your reply I suspect we agree with one another more than we disagree. Nonetheless, your reply seems to imply that my comment and my earlier disagreement were done out of malice or arrogance. A different commenter used the term "gatekeeping".

Fundamentally most of the building blocks of the Internet that people interact with regularly, including DNS, are well-defined and relatively easy to understand /relative to other technical things/. Protocols like DNS are effectively shoving text into a packet. All of the hard stuff isn't in the bare protocol, it's in understanding the complete stack of abstractions the protocol relies on, the abstractions that allow the protocol to be simple but can sometimes fail in unexpected ways. If there is anything I've learned in my life, it's that integrations and scaling are what make technical things hard; the basic protocols are generally very simple, and they are robust primarily because they are simple. They are simple primarily because they get to rely on all the underlying abstractions being there. Technology now exists in a world of abstractions on top of abstractions, and understanding the basic building blocks is now rare since many technologists focus on mastering a particular layer of abstraction. Yet if you understand the basic building blocks, even in an incomplete sense, it can greatly empower you to be more capable and comfortable with the abstractions.

"Hard" and "Easy" are relative terms, and I do believe that saying things are hard can lead people to avoid them altogether. I have spent a considerable portion of my life working diligently to mentor people in their technical careers, foster technical understanding, and bring people along with me on knowledge journeys. I have done this through extensive written documentation (internal to companies), teaching classes (internal to companies), public conference talks, guest lecturing at universities, and mentorship of people who are interested in learning technology but come from non-traditional backgrounds. I myself come from a non-traditional background. I've also worked hard to improve the UX and accessibility of every piece of software I touch, because I think that technology as a tool and the information about how it works should be accessible.

I really appreciated your article and I think you have a great writing style to explain things in an understandable way. Your Implement DNS project is really great, and I will be linking it to others in the future as they go through the process of learning DNS. In no way was my disagreement before or this reply meant to throw shade on you, your article, or otherwise. It's because I believe saying things are hard actually discourages people from learning them, leads to avoidance, and in particular I believe DNS is one of the more simple of the many possible things someone can learn about how the Internet works. I am /very much/ not trying to in any way disparage the intelligence of anyone who is struggling with learning any technology. I believe that learning technology is much like learning anything else, it requires interest and time. Saying that some of the simpler things are hard can discourage people from having the interest, and if people do not have the interest they will not invest the time.

Very specifically, I think for the audience here on HN, they've had to learn and use technologies that are far more difficult than DNS in their lives and careers with an almost certainty. In a relative sense, I do not believe DNS is difficult to learn. DNS has a /lot/ of edge cases though, which can be hard to troubleshoot, understand, and resolve, so from this perspective it is hard to master and I wouldn't claim to have fully mastered it myself. As an example of what I mean, another commenter mentioned Git. I have a pretty deep understanding of Git as well, yet I consider it significantly more difficult to learn and understand than DNS, largely because DNS is better documented, easier to inspect, and doesn't require the technologist to understand complex algorithms like merkle trees. Yet almost every tech worker in the world uses Git every day, often with many frustrations and weird edge cases, as in regular usage it fails far more often than simpler technologies like DNS. The entire strength of DNS from a robustness, adoption, and resilience perspective is largely its simplicity, which is why I think relative to the many other things that exist that it's not hard to learn.

Our opinions are always colored by our experiences, as is mine, which I stand by. One of those experiences I'll recount here, which is that I have known a woman for a little over 6 years now who is currently working in IT and finishing a bachelor's in IT through WGU. She's one of the top performers on her team and has a gift for understanding and explaining abstract concepts to users, which has made her well-liked by all at her company. Not long after I met her she shared with me that she'd always wanted to go into IT, but felt like she couldn't do it because everyone she'd met along the way had told her it was too hard for her because she suffers from a learning disorder called dyscalculia, which interferes with her ability to do math. For reasons I don't fully understand, math ability is used by many technical people as a determinant of your ability to learn and work with technology. I was one of the first people she'd met in her life who encouraged her to pursue her interest in technology and told her she could do it. I did so by pointing out that many of the things people were saying were hard were concepts she already understood in the abstract, and by offering to go on the journey with her. She had been so dejected by others telling her things were too hard for her that she never pursued a college education. I invested significantly in learning with her, and 3 years ago she finished an associate's degree and multiple certifications and started her first IT job, and she's been excelling in it the entire way.

I've continued to mentor her, along with many others, because I legitimately do believe that much of the technology around us is much easier to learn and understand than people commonly believe, and that understanding how it works is a key to understanding how our world works now and is incredibly empowering even for people who don't work with technology every day. I am adamantly opposed to any sort of gatekeeping of technology knowledge, and in no way am advocating that gatekeeping or trying to disparage those who struggle to learn. If anything, one of my existential fears is that as a society we fall into a situation where so few people understand the technology we rely on every day that it creates a new and more dire social gap beyond the wealth gap, a knowledge gap that fundamentally separates those who can effectively participate in the levers of power (including democratically) from those who cannot. I am heavily invested in ensuring that knowledge is free, accessible, and that there is an open pathway for people to learn and it's one of my core optimisms about the Internet.


This is one of the kindest comments I've read on any social media site. Thanks for taking the time.


Seems like maybe you are both saying: DNS the protocol is easy to understand well. DNS in the wild is much harder to understand well.


Actually, DNS the protocol is a horribly complex beast with various compression mechanisms, tricky encoding rules, workarounds for MTU limits, etc. Most people "learning DNS" never even attempt to learn the protocol.

What is reasonably easy is a high-level understanding of e.g. "given this question, this is what one expects an authoritative DNS server to respond" in the human-friendly form, covering the basic kinds of questions (A/AAAA/MX/TXT/CNAME).
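To make "tricky encoding rules" concrete, here's roughly what decoding a (possibly compressed) name from a response packet involves; this is a sketch of RFC 1035's pointer scheme, not hardened code (no protection against malicious pointer loops, for instance):

```python
def decode_name(packet: bytes, offset: int) -> tuple[str, int]:
    """Decode a DNS name starting at `offset`; return (name, end offset).

    A length byte with the top two bits set (0xC0) is not a label
    length but a pointer to an earlier offset in the same packet,
    so names can share suffixes (RFC 1035 section 4.1.4).
    """
    labels = []
    jumped = False
    end = offset
    while True:
        length = packet[offset]
        if length & 0xC0 == 0xC0:  # compression pointer (2 bytes)
            pointer = ((length & 0x3F) << 8) | packet[offset + 1]
            if not jumped:
                end = offset + 2  # reading resumes after the pointer
            offset = pointer
            jumped = True
        elif length == 0:  # zero byte terminates the name
            if not jumped:
                end = offset + 1
            break
        else:  # ordinary length-prefixed label
            offset += 1
            labels.append(packet[offset:offset + length].decode("ascii"))
            offset += length
    return ".".join(labels), end
```

The bookkeeping around "where do I resume reading after following a pointer" is exactly the kind of detail that trips up first-time implementers.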


As always, thanks for the article (and the debate).

DNS itself, as an internet protocol, is "easy" in principle.

I did implement and maintain DNS services at the beginning of my career, for more than 200 customers, without any title and without help, except "the internet".

Everybody understands "you ask for a name, you get an address to connect to as a response", or "you can have more than one MX entry and play with the weights", or "what is a TTL", etc.

But as with every internet protocol... as soon as you add to the recipe: historical changes and more than one RFC, upgrades to the protocol, cryptography, different client and server implementations, vendors, operating systems and libraries (and versions), levels of compliance, software applications, misconfiguration, bugs, layers and more layers (inside the client, reaching intermediate server(s), final server issues), cache!, enterprise interests (from client to server side) with a wild global (or local) positioning, government and agency interests, ageing, innovation, all the issues related to the long-running opensource projects around the protocol, all the issues related to the "new is better" opensource projects (client and server) around the protocol, users/developers/customers/managers asking for (or assuming) impossible things or that there are no limitations, a long history (and a future one) of known attacks and known defenses or mitigations, response ordering (/etc/gai.conf), clients that implement everything from the last century's /etc/hosts to mdns and avahi to nsswitch and nscd, clients that act as servers (developed by enterprise bug creators)... you know... it's not easy anymore... things gonna be wild.

If there are complementary protocols, let's multiply the complexity: for DNS issues, there is also domain registration issues, whois, DHCP, VPNs, intermediate firewalls, rate limits, anycast, replication, IP, ARP...

With any internet protocol, or anything related to computers:

1) While it works, it's fine (and everybody is happy)

2) While we get an issue previously seen, understood and fixed, it's still calm

3) While we get an issue that we can quickly search by symptoms and fix, maybe ok

The "not easy" comes after that.

When we get a new and unknown problem, a weird issue, unexpected things, and there are 20 or 30 possibly failing actors around the protocol, or XX involved parts.

Recently I did a written breakdown of all the things related to DNS in "strace -y -yy -fq -v nc -vz google.com 80" on a Linux laptop, and many people at work were surprised by all the things that happen there, inside the "give me the IP" step locally.

We could say exactly the same against any issue related to any internet protocol:

HTTP? Is it easy or hard?

Depends. It's easy to explain in a sentence what it does.

But you don't want to be the one fixing issues against a fleet of load balancers used by all the customers, with a legacy (non-compliant) device from a high-priority customer, at 4 a.m. on call.


I think it is hard to learn... using the tools people used to learn DNS with.

BIND is great at what it does, but its configuration files suck and its manual is long, terse, and sometimes unnecessarily complex. Dig is powerful, but abbreviates everything like we're on an 80-column terminal. At times Wireshark was a better tool for debugging DNS issues than dig was.

Give someone PowerDNS or another modern DNS server and I think they'll have a much better time configuring a working DNS server. I don't know a good modern DNS client, so I've learned to deal with Dig instead. As a user of the "--color" flag for the `ip` command, I'd love to see tools like dig produce more modern output (I'll alias the command line flags, just add it to the command!)

Seriously, "MSG SIZE rcvd: 71" did not need abbreviation. "flags: qr rd ra" could've been full words as well. I don't know what the semicolons before the lines are supposed to convey, but they're only making things confusing.
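For what it's worth, those flags are just bits in the second 16-bit word of the DNS header; spelling them out takes only a dozen lines (helper name made up for illustration):

```python
def describe_flags(flags: int) -> list[str]:
    """Expand the DNS header flag bits that dig prints as 'qr rd ra'."""
    names = [
        (0x8000, "qr (this is a response, not a query)"),
        (0x0400, "aa (answer is authoritative)"),
        (0x0200, "tc (message was truncated)"),
        (0x0100, "rd (recursion desired by the client)"),
        (0x0080, "ra (recursion available on the server)"),
    ]
    return [desc for bit, desc in names if flags & bit]
```

A typical recursive answer has flags 0x8180, which is exactly dig's "qr rd ra".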

I find it no wonder people get confused learning DNS with the materials provided to them.


I'd conservatively estimate that 90% of the people who make core FOSS software interface decisions haven't had to learn anything technical in an entirely unfamiliar domain, where their existing mental models didn't apply, in decades. Beyond that, many consider having learned these arbitrary, terse interfaces a badge of honor, and for some reason think that makes them better technologists. I'll bet they'd be even better had they been able to focus on the core concepts rather than trying to get into the head of some systems engineer who worked at Berkeley in 1984.


I disagree.

The CLI is inherently a super-user oriented interface. The CLI needs to let an experienced user be as productive as possible.

If you need a pretty UI with everything spelled out, you should go on GitHub and find a GUI that someone built on top of the core tool. But dumbing down the core tool is not the way to go.

The --help message should be good, and the man page needs to be good, but the tool itself should favor power and productivity over handholding someone who doesn't want to Google a how-to.


This is full of false dichotomies. Making things more understandable isn’t “dumbing them down”. There isn’t necessarily a trade-off. An application being CLI-based is not a get-out-of-jail-free card for UX critiques, nor is it an implication that it’s for power-users. You can’t just put the minimum viable effort into considering how someone will use your software, give no thought to intuitive mental models, and then cry “it’s for power users”.

Old-school FOSS nerds have a hard time admitting that they tend to be absolutely useless at considering user experience, because then that’s something On Computers that they’re not good at. The best they can do is shoo it away by claiming that the sort of UX concerns I’m talking about only apply to software used by Lesser Beings and not Smart Computer Boys such as themselves.

It’s such an outdated user-hostile boys club attitude fuelled by insecurity and misplaced and not even well-thought-out elitism, and I’ll never pull any punches when talking about it.


> nor is it an implication that it’s for power-users.

Yes, it is. As an administrator, a CLI tool means that I can automate configs and deployment with Ansible or NixOS. I would argue that doing these sorts of automation is exactly what “power users” would do.

GUIs often lack any sort of automation capability, or have some half-cocked API that lacks features or endpoints that would be trivial if the thing were just a CLI tool with a config file.


Good interface design absolutely does not require a GUI. When I first encountered a BSD distro that used ports, that was a huge interface improvement. Then Deb or whatever beyond that.

Using sane labels and coherent layouts isn't dumbing anything down. You can make things easier for humans while having sane interfaces for scripting. Redirecting the output of `ls` and similar commands automatically changes the layout to something easy for CLI I/O manipulation.

We realized long long ago that the API and human-facing interfaces are often better when separated, which is why I don't browse the web with curl... or maybe telnet.


> Old-school FOSS nerds have a hard time admitting that they tend to be absolutely useless at considering user experience, because then that’s something On Computers that they’re not good at.

New-school FOSS nerds don't seem to understand that all these poor-user-experience concerns exist because we are trying to be productive. The new style of reduced information density and easy-to-remember names isn't useful when you are paid to keep production systems up and running. What I need are tools that give me the exact information I need quickly and get the fuck out of my way for everything else.

> It’s such an outdated user-hostile boys club attitude fuelled by insecurity and misplaced and not even well-thought-out elitism, and I’ll never pull any punches when talking about it.

Most people that complain about this put the minimal viable effort into actually learning what they are doing. Somehow I'm the idiot after spending hours of my life to learn this professionally and for wanting expert interfaces. You can call me elite all you like but when the whirlpool of disaster is pulling us both under, I'll be in my elite raft surfing the waves while you struggle.


I've been using Linux for 25 years, spent 6 or so years doing administration type work on large Unix systems, and spent 10 years in a back-end developer role. I've spent somewhere in the high five-figure range of hours contributing to FOSS software. More recently, I've gotten formal interface design education and worked in design roles. I am not a 'new school' FOSS nerd by any measure.

> New-school FOSS nerds don't seem to understand that all poor user experience concerns are because we are trying to be productive. The new style of of reduced information density and easy-to-remember names isn't useful when you are paid to have production systems up and running.

The reason you find them more productive is because you already know how to use them. Needlessly reduced information density is usually a bandaid applied to a shitty, confusing interface by a non-designer because they don't know how to organize it to be easily parseable, but still dense. Maybe denser than before. You don't even realize when a good designer has done a good job because it just makes sense— it's so intuitive and useful to its users that you don't have to think about it. If it's a utilitarian tool that "looks designed," it was probably a non-designer trying to gussy it up, or "work on UX" without realizing that involves a pile of intellectual work before even considering things like white space, colors, and fonts. I can't tell you how many times I've seen developers do something like implement custom color themes in response to people saying their UX sucked; it doesn't even begin to address the problem. It would be like a designer changing the CSS on a web app because people said it was performing poorly. At least the designer, when confronted about it, probably wouldn't have the unbridled hubris to claim enough development expertise to tell the developer what's what. I can't tell you how many clueless developers have tried to explain design to me.

A solid understanding of visual hierarchy would solve at least half of FOSS interface problems without changing anything other than layout. The problem is project maintainers who think any attempt to remediate their non-designed cobbled-together interface is an attack by 'designers trying to dumb things down' or 'new school' types or others who don't understand what the 'real technologists' do. They either dismiss proposals out of hand, or reflexively bikeshed them into oblivion because they're defensive and lack the design sophistication to knowledgeably engage. As a senior developer and someone who enrolled in art school as an adult, I know how much more defensive people get about work they're not confident in. When those people are in charge of a project AND have the misguided notion that their "design" (read as cobbled-together assembly of user-facing functionality) is more functional than an expert's design, just uglier, good luck pushing change through.

> Most people that complain about this put the minimal viable effort into actually learning what they are doing. Somehow I'm the idiot after spending hours of my life to learn this professionally and for wanting expert interfaces. You can call me elite all you like but when the whirlpool of disaster is pulling us both under, I'll be in my elite raft surfing the waves while you struggle.

And fraternities keep paddling pledges and making them drink gallons of grain alcohol because they all had to do it too, so everybody else should. Complaining that nobody wants to spend time studying something that they wouldn't have to if it was properly designed is an asinine argument against progress. The only people here who should be chastised for refusing to learn new things are the people who refuse to consider better approaches and insist everybody else learn the same inscrutable incantations they did, because it's easier for them... Because they already know them. Just because you can't imagine productive ways to update these tools to a post-teletype world doesn't mean that nobody can.


> I've been using Linux for 25 years, spent 6 or so years doing administration type work on large Unix systems, and spent 10 years in a back-end developer role. I've spent somewhere in the high five-figure range of hours contributing to FOSS software. More recently, I've gotten formal interface design education and worked in design roles. I am not a 'new school' FOSS nerd by any measure.

If you have trouble with tools like dig, you have wasted a lot of fucking time.


Edgy. Asserting that my deep consideration and professional concern with usability means that I don't understand the software is the most deliberately obtuse thing I've read this week, and I browsed Slashdot comments.

Measuring usability by your own comfort level is the reason user-facing "open source alternatives" that aren't foundation-backed with dedicated designers will remain "alternative" in perpetuity. It's also the reason that centralized managed environments like AWS et al will keep replacing in-house self-hosted systems throughout the world, despite the cost, as the need for that functionality exceeds the number of people willing to learn to use the archaic unix admin tool kit circa 1985.


I disagree.

I'm a super-user and as such I have to use hundreds of tools to GTD. What you call “pretty UI” I call ergonomics. So I appreciate when the tooling is respectful of my time and don’t require me to visit the man page just because the developer was so situational myopic that thought sparing a few characters was a good idea.

It all smells like unconscious gatekeeping.


And don't get me started on undocumented parts of CLIs, which are unknown unknowns, especially when they hold core parts of the API.

RTFM is usually (not always) a way to gatekeep bad UX.


Right. There USED to be good reasons to have that be the way classic CLI utilities worked— your terminal had 80 character lines, you might be working over a 300 baud modem, you might be using a teletype terminal and you didn't want to waste paper, having both machine-focused output and human-focused output was too resource intensive... All of that is gone now.


You are seriously taking the position that writing “received” instead of “rcvd” is “dumbing down”?


Someone who knows what information to expect doesn't even need a label. Once you're past that hurdle, it's redundant.

For instance, you look at the dash of your car every time you drive, and what little is labeled is abbreviated. The expectation is that when you understand how the vehicle operates, the gauges and lights will make sense. The information is encoded behind your knowledge. dig isn't really different.

Some people want the tools to teach them everything, but that's just not appropriate or necessary, whether you're talking about a hammer, a firearm, or a computer.


KPH/MPH are ubiquitous standard abbreviations that exactly zero percent of people encounter for the first time when they drive for the first time. Occasional and brand new drivers will never not know what those things mean. Kilometers Per Hour is also 600% larger than KPH, and visual clutter while driving at highway speeds is a much bigger problem than when examining DNS records.

> Once you're past that hurdle, it's redundant.

Yeah, then multiply that hurdle times the thousand others someone has to memorize to work in *nix.

> Some people want the tools to teach them everything, but that's just not appropriate or necessary, whether you're talking about a hammer, a firearm, or a computer.

Textual output on a computer is a communication medium-- hammers and firearms are not.


A) dumbed-down interfaces are bad design; most you see were made by developers trying to be designy. No shade— it's a different profession with relevant degrees and career paths, and most designers would make terrible code if they tried. B) Good interface design absolutely does not require a GUI. I've been professionally using (and loving) these tips for decades, but many cross the border from negligent to user hostile. C) Download SideFX HOUDINI, which has a nice pretty interface and is fully scriptable with Python and its own language, VEX, and tell me that's "dumbed down."


*Loving these tools for decades


I think of this in the same way I think of documentation. I try to write documentation that is as clear as it is concise and well structured. But sometimes that’s just not realistically possible, and expressing information often _has_ to assume some level of familiarity with the underlying concepts.

I’ve never considered DNS to be a complicated technology, and I’ve never considered that tools like dig and the abbreviations they use need to change much. If there’s something that isn’t clear, it’ll be in the man page, a mailing list, Internet based documentation, StackOverflow, etc.

Personally, I value information density, and I don’t mind terseness at all.

That doesn’t necessarily mean I would design some of this stuff the same way, but after more than a decade using it, I feel like changing it now is just an appeal to futility. A tool like dig more or less looks and works the same today as it did 15 years ago, and there’s value in that kind of stability.


> but after more than a decade using it, I feel like changing it now is just an appeal to futility

And that's why your learning to use these tools a decade ago was as painful as it was for me 20+ years ago, and so on. Nobody improves it because the only people with the clout to make the changes are too set in their ways to change it, and have come to believe the ridiculous notion that it's beneficial for people to learn it, despite that having nothing to do with the actual problem.

I worked in technical roles for the better part of 25 years, including 10 as a back end developer. One of the reasons I got into design was to try and break chains like this, but talking to the FOSS community about design gets nearly zero traction and a whole lot of pushback from folks who think they know a lot more about design than they do. Nothing you do matches the information density of a train schedule in a large city, and those designs are so ubiquitous and intuitive that we take them for granted. That was not always the case— someone decided to lay things out like that at some point because they knew it would be easier and more intuitive for the majority, even if it might take some adjustment for existing everyday users. And designers still work to make them better, to this day.

I wrote a tutorial teaching non-technical folks to use a REST API. It required no knowledge beyond familiarity with using web browsers, and ended with people making useful queries with curl. It was short, visual, and effective; I learned that multiple non-profits at the time used it in their materials to teach folks technical literacy. Just because using a bunch of terse jargon is easier for developers to write and acceptable for other developers to read doesn't mean it's optimal... or even good. Especially when the realm would be much more accessible to non-technical people if it was designed better.


> As a user of the "--color" flag for the `ip` command, I'd love to see tools like dig produce more modern output

https://github.com/ogham/dog is pretty good in that regard


wow, didn't know about `ip --color`, that's awesome


>I don't know what the semicolons before the lines are supposed to convey but they're only making things confusing

all of the lines that aren't part of the query answer are prefixed with semicolons. so it's basically a comment character. presumably to ease processing of the data it spits out.

You know. So you can `dig google.com | grep -v '^;' | grep . | awk '{ print $5 }'` easily.

I can imagine people using it in a shell script 20 years and more ago to grab a list of IPs to do some domain's health check or whatever

Not that you would want to in any modern stack. obviously you'd just use dig's `+short` option :)
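For anyone curious, here's the same idea with a canned, abridged answer section instead of a live query (the domain and IP are just placeholders), showing why the `;` prefix makes old-school parsing easy:

```shell
# Simulated (abridged) dig output; lines starting with ';' are metadata.
output=';; ANSWER SECTION:
example.com.  86400  IN  A  93.184.216.34'

# Drop the comment lines, then grab the fifth whitespace-separated field.
echo "$output" | grep -v '^;' | awk '{ print $5 }'
# prints 93.184.216.34
```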


Now I just wonder what the double semicolons mean and why they're different from single semicolons :) It's fine, I can Google the answer, I just wished I didn't need to.

I already know to use +short when I just want the result(s). I use it in a script to detect and resolve my Pihole's random freezes (by timing the lookup and rebooting the VM every time a lookup fails or takes longer than 200ms, janky but it works).


Perhaps there is info that would normally be present and isn’t, hence stacked semicolons.


A precision engineering tool shouldn’t make its users guess like that.


Totally agree.


> dig google.com | grep -v '^;' | grep . | awk '{ print $5 }'`

It wasn't really your point, but hopefully you'll forgive me sharing an equivalent one-liner, without grep:

dig google.com | awk '/./ && !/^;/ {print $5}'


I believe both you and the parent meant to escape the dot, as to only return lines with a dot in them:

     dig example.com | awk '/\./ && !/^;/ {print $5}'
If it isn't escaped it'll just match on everything.

If matching everything was intended then you don't need it at all:

     dig example.com | awk '!/^;/ {print $5}'
Will strip out the lines beginning with a semi-colon.

However, parsing the output of `dig` this way is not needed. It can be run to only return the answer, for example:

     dig +noall +answer example.com
Or if you just want the IP for that record:

     dig +short example.com


I was actually just using `grep .` to discard empty lines. The `grep -v '^;'` would have only left lines with answers after it discarded lines starting with `;`. That and the empty lines.

I could have combined `grep -v '^;'` and `grep .` as a nice simple `grep '^[^;]'`


Ahh. I was working from memory, I forgot about the empty lines it returns. That's what I get for replying from my phone. :)

However, it's still better to just have the `dig` command return only the necessary information via the +short or +noall +answer flags, rather than parsing the full output.


Remind me again where this weird syntax came from for passing arguments?


- is allowed in domains


dash is only allowed in the middle. labels can't start or stop with it. so it wouldn't affect the command arguments.

    The labels must follow the rules for ARPANET host names.  They must
    start with a letter, end with a letter or digit, and have as interior
    characters only letters, digits, and hyphen.  There are also some
    restrictions on the length.  Labels must be 63 characters or less.
https://www.ietf.org/rfc/rfc1035.txt

it's more likely that in the late 90s they just made up their own argument parsing that they liked and the application has just used that ever since.


BIND exposes (almost) every detail of DNS. That's why learning its config will teach you 90% of the DNS system.

dig's output makes sense after reading the RFC. It exposes every bit flag at the protocol level.


I have read the RFC. I just don't mess with DNS often enough that I know all the RFC definitions from the top of my head, and I don't want to need to find the appropriate RFC(s) every time I want to debug an issue.

It's not as if tools like wget bother you with http/3 spec fields every time you download something from the internet unless you explicitly ask for them, and even if they do, they're more descriptive than just the shortcode coming from a diagram in the RFC.


>I just don't mess with DNS often enough

That's the root of all this, we only deal with DNS when something breaks, and it rarely breaks.

If we did DNS all day every day it'd all be super clear and concise.


I learned DNS a bit over 20 years ago running a BIND deployment that was authoritative for somewhere over 30k customer domains.

I still tend to use nslookup over dig (yes, I know, bad mst) because I got sufficiently used to the former that I barely notice it's even there when using it - my fingers and brain are so used to it that the interface part of nslookup is mentally invisible (though I always teach other people 'dig' and 'host' instead).

I'm not sure learning it that way was exactly easier, but being (somewhat, that was far from the only service I was responsible for) immersed in it meant I at least got through the initial mental scrabbling in a relatively compressed time frame, and once you've got a coherent model built in your head life is easier.

I think this is really a very long way to say "based on doing DNS most days for a few years, I strongly suspect you're right."


Yep, and the other thing is that something like 90% of DNS is "I need to update an A record or a CNAME or an AAAA (rare)" and then waiting for caches to expire.

But that's only like 10% of DNS's surface area, there's tons of other things it can do and rarely does, but if you have to make that stuff work you can get deep in the weeds fast.


To the extent I learned DNS, it was via nslookup, definitely not dig.


I think old CLI tools are hard to use because they were designed to be machine- and human-readable at the same time. They are effectively APIs in some scenarios (e.g., scripting and as linked libraries) and human handholds in others. As a result they are restricted to being barely viable as either.


  $ ip address --color
  Command "--color" is unknown, try "ip address help".


Some CLI apps with subcommands accept top level options anywhere in the arguments, some - like 'ip' - expect

    <command> <command options> <subcommand name> <subcommand options> <arguments>
The advantage of that approach is that you don't have to worry about naming collisions between the command and subcommand option sets, which can be really handy - especially in applications with pluggable and/or user supplied subcommands, so you can add top level options without worrying about breaking somebody else's extensions.

The disadvantage is, well, what just happened to you.

(note that I personally really like the pedantic approach because it fits my brain, but I'm -not- trying to endorse it as better in general since it's very definitely a trade-off and one that often isn't a good one overall)
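For illustration, the strict ordering is easy to reproduce with argparse subparsers; the tool and option names here are made up, and this is not how `ip` itself is implemented:

```python
import argparse

# Sketch of the strict "<command opts> <subcommand> <subcommand opts>" style.
parser = argparse.ArgumentParser(prog="tool")
parser.add_argument("--color", action="store_true")   # top-level option
sub = parser.add_subparsers(dest="cmd")
address = sub.add_parser("address")
address.add_argument("--brief", action="store_true")  # subcommand option

# Top-level options must come before the subcommand, as with `ip`:
args = parser.parse_args(["--color", "address", "--brief"])
print(args.color, args.cmd, args.brief)
```

Passing `--color` after `address` errors out, which is exactly the failure mode in the grandparent comment.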


try

> ip --color address

otherwise you might just have an old version


or: `ip -c -br a`, which is short for `ip --color --brief address`. Add `-4` if you only want IPv4.


That helped, thanks!


rcvd, qr, rd, ra

are defined in the RFC. They needed abbreviating to fit the ASCII art in the RFC text.

Back in those days, dig users were the ones who read RFCs.


I can see that they were truncated in the block diagram, but they were also explained with one or two words just lines below that. RFC1035 calls them "query"/"response", "authoritative answer", "Truncation", "Recursion Desired", "Recursion Available", and CD/AD become "Checking Disabled" and "Authentic Data".

dig already translates things like record types from integers to strings, why not use the full, descriptive names for these flags as well? Why shouldn't I be able to dig +pretty domain.example?

I suppose dig is mostly in maintenance mode these days, but it's quite annoying to see so many tools rely on it when it's so... 90s.
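As a sketch of what a friendlier mode could print, here's a hypothetical flag-expansion table; the long names come from the RFCs mentioned above, but the function and mapping are made up, not part of dig:

```python
# Long names per RFC 1035 (header flags) and RFC 4035 (AD/CD bits).
FLAG_NAMES = {
    "qr": "query/response",
    "aa": "authoritative answer",
    "tc": "truncated",
    "rd": "recursion desired",
    "ra": "recursion available",
    "ad": "authentic data",
    "cd": "checking disabled",
}

def expand_flags(flags):
    # Leave unknown abbreviations untouched rather than guessing.
    return [FLAG_NAMES.get(f, f) for f in flags.split()]

print(expand_flags("qr rd ra"))
# ['query/response', 'recursion desired', 'recursion available']
```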


You've hit the nail on the head: dig was designed with the needs of the 90s in mind, when terseness was a virtue (if your RAM can be usefully conveyed in kilobytes, you need terse tools).

Due to backwards compatibility, the output won't be changed. Due to backwards compatibility, dig is the tool that's installed by default.

Both can't be changed (it would break too many assumptions many scripts and programs have).

However, there are multiple dig alternatives. You just have to install them.


I liked jvns' suggestion to add a +human option - plenty of tools have a -h output option which is nicer for quick interactive use, and I value their existence (e.g. I use both 'du -h' and 'du -k' semi regularly depending on if I want to read the results or throw it through an analysis script).


> I don't agree with this article.

But you address none of the article's points. You're basically disagreeing with the title. And I'd say that your answer seems to conflate "DNS is hard" with "DNS is hard to learn".

> It just takes some effort and time to learn.

How much effort, how much time? Many people in these threads say that they learned by implementing a server. Some mention that it only took them some months to grasp, but then just echo in unison that "once you understand it, it's actually pretty easy". So can we agree that for something as ubiquitous and conceptually simple, it's actually hard to learn?


If you read the article the author points out why it's hard to learn. The concept is easy, but when teaching the concepts we don't include all the details of the modern internet.

As an example what are the rules that your browser uses to cache and expire DNS entries? Are those rules consistent between browsers?


And does your browser have settings which bypass or supplement the host's DNS configuration. Secure DNS (DoH etc) is great, but damn that's confusing when you first run across it. Not to mention how phones do it; you can't override a DoH DNS server when connecting to a VPN which offers internal DNS on Android, for instance.


Aren’t all of those concerns out of scope for DNS itself, though? DNS can only give you a TTL, for example, it cannot require you follow it.

Ideally that’s what RFCs are for, but even organizations that pay smart people to come up with clever standards don’t always follow them. Implementations frequently disregard or guess about the things standards cover.


From the point of view of the standard, maybe, but not from the point of view of somebody learning or using DNS.


I don’t disagree, and I think that’s why we need to make it clear that there’s a difference between implementation and standard.

My education is a mix of formal and autodidactic. One of the best things I got from formal education is the structured introduction to fundamentals like the OSI model.

If you don’t have that kind of foundation, it can be much, much harder to understand the “why” of the endless differences between documented standards and in-the-wild implementations. It’s good to know where you are in the stack to help inform what you’re seeing.


> you can't override a DoH DNS server when connecting to a VPN which offers internal DNS on Android, for instance.

True. I set my VPN server to force DNS thru the tunnel to an intercepting DNS server - and it replies as if it were the intended DNS server.

The DNS server is set up this way in response to LAN devices that have their own DNS configured, but it handles Android's private DNS too.


> I think DNS is something few people take the time to learn

I kinda agree and think DNS is one of those technologies where you can go an entire career without picking up more than bits and pieces here and there. Those things gain a sense of mystique in the industry, seeming more complicated than they would if more people had to tackle them head on.


> I kinda agree and think DNS is one of those technologies where you can go an entire career without picking up more than bits and pieces here and there.

Insofar as that's true, it's weird, because DNS basically does one straightforward thing. But then you get into all the places where that one thing has to be done in different ways.

Where I wouldn't mind some more magic is with reverse DNS. Too many tables don't know a name pointing to an IP until something tries to resolve that name.

Reliable historical rDNS would be even more awesome, but that's more of a service than a spec thing.


DNS itself is easy to learn. Trying to figure out why domain x doesn’t resolve to ip y is a hard problem.

Like the article points out, there’s so many layers of potential caches in between you starting a lookup and that lookup being resolved.


You have the curse of knowledge my friend. It’s hard to learn and way more complicated than it needs to be.


Likewise, a lot of people know (and use) me for my supposedly deep knowledge of Git, even though it took me years to fully understand it and feel comfortable enough that I'm no longer terrified when using the CLI — I can cherry-pick and rebase with the best of them. YET, I still feel it's a bit intimidating and that there are some mysteries behind the internals I don't fully grasp. I suppose like DNS, it does take time and effort to learn, and when I really think about it, it's actually not that hard. But for some reason so many devs struggle with it, mostly (from what I've observed) because it's intimidating due to its somewhat odd terminology and unintuitive workflow, but also because there are so many GUI/IDE tools that hide a lot of the complexity, which is simultaneously good and bad.


Absolutely agree. Back in the day, when I was very inexperienced, I was thrown into the task of administering DNS (with BIND) and Sendmail. I had 100+ servers. The first couple of months were a lot of reading and understanding things, but relatively quickly I got a good understanding of it. After 6 months I was teaching DNS to other teams in other countries for the same company. It was not at all hard. I'm a very average engineer, and from 0 to explaining to others in 6 months, it is by no means a difficult topic.


DNS isn't at all hard, it only takes two months to learn when you're being paid to, and after only 6 months you'll be knowledgeable enough to teach!

> from 0 to explaining to others in 6 months, is by no means a difficult topic.

You should try almost anything else. Many things can be learned within days.


> I think DNS is something few people take the time to learn, but it's not actually hard to learn.

Hard disagree. For me, DNS is like doing taxes: I touch it once a year or so, find it Byzantine, know enough to be dangerous, but am always frustrated that I don't use it often enough to remember exactly how to configure things without having to consult poorly written and/or overly technical tutorials.

I'd like to see a better version of web-based tools like mxtoolbox.com that will analyze DNS records, let you know what's wrong, and give you actual examples of what settings you need for things like DMARC/DKIM/SPF records. In my experience, online tutorials for setting them up come tantalizingly close to giving me what I need, but I often end up getting stuck with the last few details (usually the weird punctuation required) because, again, I touch this stuff so infrequently I just don't remember from one time to the next. Ideally I'd want a form-based tool that gives you drop-downs to select from and, when submitted, just gives you the actual record you need.
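To illustrate the kind of form-to-record helper I mean, here's a toy sketch for SPF only; the include host is a placeholder, and real policies need more care than a few lines like this:

```python
# Toy form-to-record generator: pick your mail senders from a dropdown,
# get back the TXT record value to paste into your zone.
def spf_record(includes, policy="~all"):
    # "v=spf1", one include: mechanism per sender, then the catch-all policy.
    return " ".join(["v=spf1"] + [f"include:{h}" for h in includes] + [policy])

print(spf_record(["_spf.example.net"]))
# v=spf1 include:_spf.example.net ~all
```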


DNS _protocol_ and _servers_ are conceptually easy. In practice, the implementations and what actually happens is not. When you type google.com in your browser, do you know exactly the workflow in your computer and the different caches it uses (browser, OS etc) and how they work?

Do you know which Linux tools use gethostbyname vs getaddrinfo and why they could give different results?
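One concrete demonstration of the difference, using Python's socket module (localhost is used here so it works without network access):

```python
import socket

# gethostbyname is IPv4-only and returns a single address...
ipv4 = socket.gethostbyname("localhost")
print("gethostbyname:", ipv4)

# ...while getaddrinfo can return several results across address
# families (IPv4 and IPv6), which is one reason the two can disagree.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "localhost", 80, proto=socket.IPPROTO_TCP):
    print("getaddrinfo:", family.name, sockaddr)
```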


I also don't agree with the article. Many aspects of sysadmin are far more difficult than DNS.

> the most common server and client implementations rigorously follow the standard

Not servers! In particular, many servers mess with the TTL (as the author notes). It's not that these servers are defective; it's that the hostmaster has interfered with the configuration, presumably to reduce the load.


I wish I had jobs where people cared that I knew DNS.


How did you learn DNS? And when?


> How did you learn DNS? And when?

I wrote one of the world's first dynamic DNS servers for our dialup modem customers in the early 90s, so when connected to our Livingston Portmasters with an assigned IP address you could be username.isp.com while online.

Later updated this so you could also be http://www.username.isp.com all the time just by dropping an index.html file in your ~username/site directory.

I learned both by spelunking BSD, reading man pages, and rapid prototyping in Perl. So I'm with tristor, it felt relatively straightforward.

// I don't say I wrote the first because I don't know. I am not aware of any other regional or national commercial dialup ISP in the U.S. that offered this before we did.


Same here.

I implemented some dynamic DNS system. I still remember how to handcraft an RRUPDATE request.

After reading the RFC, every bit in dig's output makes sense.


It would, however, be nice if there was a tool (or, as jvns proposes, a +human option to dig) that provided output oriented to somebody who's just trying to -use- the thing rather than reading the RFC first.

(I'm very much a "read the RFC first by default" sort of person because that approach fits my brain, but I don't think everybody should have to do that for relatively simple day to day stuff)


For me: Volunteer systems administration for a student organization. We had a subdomain under the domain of the university and wanted to add another one. We already had Bind configured with an A and MX record. No one really wanted to deal with it, so I asked some people I knew, who recommended that, given we already ran qmail, tinydns might be the best option.

So I installed tinydns, read the documentation, experimented with the subdomain we didn't use yet. Took a few tries, but not really difficult.

Later on we added more stuff, like zone transfers; all of it is pretty easy once everything is running.

One of the things that scares people away from running DNS servers is Bind and its configuration language; it's not great. You just don't have to use Bind, or you can switch to it once you understand the concepts of DNS better.


I was tasked with setting up & maintaining a dnsmasq server in college, which handled DNS, DHCP, and pxe booting.

I learned it like anything else. Googled it, read the docs, and trial-and-errored my way into a working setup. I was comfortable with all the technologies in a couple weeks (working part time).


Setup an in-house server to sinkhole malware requests.


I had a slow day at work in the beginning of my career (I was a help desk monkey then) and I wanted to learn more about how Active Directory actually worked. One of the things that I found out and became more curious about was why DNS was so critical for Active Directory, whereupon I discovered the magic of SRV records. After that I decided I wanted to learn DNS in detail, so I read the RFCs, poked around with nslookup and dig, and then set up a local copy of BIND on a Linux VM to experiment with. In the numerous intervening years I've picked up more over time and been involved in numerous situations where I had to assist someone by doing DNS spelunking, often learning something in the process. I jumped from being a help desk monkey to being an L2 sysadmin in my very next job largely on the basis of my DNS and Active Directory knowledge, all of which was learned by reading documentation and poking at things, nothing particularly difficult about either.

What always causes challenges is weird edge cases, but if you understand how things are supposed to work at a protocol level you have the basis of knowledge to troubleshoot the edge cases, which is how you learn about the more esoteric or implementation specific behaviors of things. I wouldn't expect anyone to particularly have deep knowledge of AD or DNS edge cases, but the basics of how they work aren't particularly hard knowledge to acquire and it's a constant surprise to me how few tech people understand DNS in particular, since it's a critical system for nearly every aspect of modern computing.

As far as resources, when learning these things way back when I only used the official Microsoft documentation, the RFCs, man pages, and sites like nixCraft to learn about it. These same resources still exist (although Google sucks now and makes anything decent hard to find), and these protocols have changed very little in the intervening nearly 20 years, so I don't think it's any harder for folks to learn today than it was for me back then. This stuff is very literally not rocket science; DNS is an extremely basic protocol. In fact, nearly all of its weird edge cases exist because it's such a basic protocol that some behavior isn't clearly defined; however, /most/ behavior is. It's very nearly all text on the wire, and all of the zone records, queries, and responses are human readable. As someone else mentioned, you can just use Wireshark and learn about DNS. It's inspectable on the wire, not just through tooling.

FWIW, Active Directory is far more difficult to learn in detail than DNS, largely because Kerberos is deeply complex, and Active Directory has many unique complexities (like how FSMO roles work). DNS, in comparison to most other things you are expected to know as even a semi-competent systems administrator / SRE, is mindbogglingly simple.


I think the article hits the nail on the head. DNS itself isn't hard, but learning real-world DNS is, due to the (as described) hidden nature of many parts between intending to do a lookup and getting the expected result.

Some of this also stems from the divergence of what used to be a 'default' way to be connected to the internet (one interface, one gateway, one DNS server provider) and what is now far more common like having to figure out how to actually do the lookup in the first place (connected to multiple WANs, sometimes concurrently like LTE and WiFi) and as a user, not actually being able to tell which resolving path was used (was it the browser? the standard interface in the system C lib? was it some local resolver or recursor that sits in between the lookup? do any of them have a local cache? do any of them add special options by default?).

Even if everything works, you'd still not be able to blindly trust if the query and response used the same path in one application as it did in another. Three browsers might all use something different, and the OS itself might use something different yet again, only to have mDNS add a fifth option.


Because the only three hard problems in computer science are cache invalidation and naming things.

And DNS is a caching system for names of things.

https://reddit.com/comments/15c2ul2/comment/jtty9dy


To be fair, DNS is one of the best examples of "naming things done right".

It's globally-curated (IANA), hierarchical, federated, easy to modify.


Also, off by one errors, which is implied


It's one of those things where there is a mismatch between how easy it seems to be, and how hard it turns out to be.

We all use DNS every day, and it seems really easy. The everyday language of DNS is: domain names, lookups, IP addresses. This language is exposed in browsers for all to see, and through this exposure we develop a mental model of how we think it works.

But under the covers there is a whole new language: zones, resolvers, delegated authority, that weird dot after the top-level domain...


"that weird dot after the top-level domain"

That weird dot is called root. Without it, a name is unqualified, with it the name is completely defined. That means that context is everything. Without the dot, a resolver might add the resolver's domain or parts of it, repeatedly.

Now, you and I know exactly what: host.example.co.uk is supposed to mean but without the trailing dot a resolver could try to look up host.example.co.uk.example.co.uk

Windows out of the box, if this happened, would also try host.example.co.uk.example.co, then host.example.co.uk.example, and then host.example.co.uk. and get a result. However, I never saw Windows actually try the first effort, and I think the behaviour was designed to deal with large corps and their federated DNS monstrosity Active Directories.
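That suffix-appending behavior can be sketched as a toy model of a stub resolver's search list. This is a deliberate simplification of what glibc does with the `search` and `ndots` options in resolv.conf, not the exact algorithm (and Windows devolution differs again); the domain names are just examples:

```python
def search_candidates(name, search_domains, ndots=1):
    """Return the fully qualified names a stub resolver might try, in order.

    A trailing dot means "already fully qualified", so the search list is
    skipped entirely. Otherwise the number of dots in the name, compared
    against the ndots option, decides whether the bare name is tried
    before or after the search domains.
    """
    if name.endswith("."):
        return [name]                      # explicit root: no search list
    as_absolute = name + "."
    via_search = [f"{name}.{d.rstrip('.')}." for d in search_domains]
    if name.count(".") >= ndots:
        return [as_absolute] + via_search  # "qualified enough": try as-is first
    return via_search + [as_absolute]      # short name: try search domains first

# Without the trailing dot, the search domain can get appended:
assert search_candidates("host.example.co.uk", ["example.co.uk"]) == [
    "host.example.co.uk.",
    "host.example.co.uk.example.co.uk.",
]
# With the trailing dot, there is exactly one candidate:
assert search_candidates("host.example.co.uk.", ["example.co.uk"]) == [
    "host.example.co.uk.",
]
```

This is also why a bare hostname like `host` inside a corporate network resolves to `host.corp.example` first: with fewer than `ndots` dots, the search domains win.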

Your browser is probably toddling off to a DNS over HTTPS (DoH) server these days without your say so and canoodling with all sorts of ne'er do wells. Your DNS lookups are basic data - your ISP used to love seeing where you go. Your OS vendor (if you buy your OS) obviously can pass back "telemetry". Mr Google doesn't own the desktop but does own the browser, so by ensuring you use "safe" DNS servers for your browser instead of whatever you have configured, it's all good. All these shenanigans do make IT troubleshooting far more exciting than it used to be.

I shouldn't worry too much about trailing dots. You will almost certainly not be using the DNS servers you think you are. I get why DoH was invented and there is a good reason for some "civilians" to use it - i.e. a non-IT specialist using a nasty wifi hotspot will be protected from some harm by their browser going home securely to do DNS lookups. However, is it up to the browser vendor to trample all over the user's choice of Endpoint Security?

DNS is way more complicated than simply looking up addresses. It's about money these days (actually it always has been since around 2000) and there are now a lot of very opinionated mega corps who want to decide who profits off you.


You might actually be surprised, or at least I was, that browsers -- and typically worse, IoT junk, still generally respect the DNS server(s) served to them via DHCP or they use plain ol' 53 to the hardcoded Google/Cloudflare standards. Some (like the Facebook Portal) are tricky enough to try DNS over TLS, but insofar as my big-list-of-common-DNS-IPs in the block rule is accurate, nothing reaches out over DoH unless explicitly configured to do so.

Like you, however, I see the writing on the wall. Being able to easily see (and block) how these devices/operating systems/apps relentlessly and obsessively phone home to upload as much telemetry as they can get their hands on is not something these corporations want. Eventually, they will all just use your connection for the bandwidth and pass everything over a VPN to hardcoded endpoints.


I've spent over 20 years running DNS servers, web proxies etc. I also happily run quite a lot of IoT crap. I have two VLANs at home - THINGS and SEWER. SEWER is for stuff that really scares me! SEWER has layer 2 (MAC) isolation.

For a laugh, on my laptop (Arch BTW) I block one of my browsers from all external access except via a local Squid proxy with bump n splice - man in the middle, but at least I am the man.

In general I am an optimist. I think we are going through growing pains, akin to dealing with a teenager ("I hate you"). Mr Google and the other school bullies will be gradually brought into line and we will all be gradually dragged into the future. It may well not be as envisioned by anyone. However the climate is a bit of a snag and that might be more important.

We live in a time when my grand daughters are able to cavort on a Chinese media platform and multiple American ones to the entire world as an audience. They look pretty cool whilst doing it - they have media and make up skills and more. I used be able to flick beer mats or something.

The world is twisting and turning in front of our eyes and I for one am out of breath (ex smoker!)


Yes, but in the end unless they change the underlying transport (i.e. TCP/IP) to something that is not the Internet, you can still see that the traffic is happening and block the endpoint.

If the device doesn’t work after that, well, maybe I didn’t need it.


I agree with that, but there's always going to be exceptions like 'smart' TVs. I have a Roku-based one, and I accept that whatever service (Disney/Hulu/etc.) I connect to can track what I do on their app, but I don't consent to Roku themselves tracking every click I make, app I'm using, or setting I change. Plenty of people have pointed out this traffic to Roku on their support forums, and AFAIK, they've never come clean as to what, exactly, they track.

Just because I let someone sleep on my couch doesn't mean I accept that they sneak into my bathroom and watch me shower.


It’s the distribution part that makes it hard right? There’s a shit ton of dark magic happening above the atomic level of an individual node that both introduces the majority of the complexity and also the majority of the obfuscation.

DNS is easy. An organization agnostic distribution of information is _really_ tricky.


I went through this a few years ago, deciding that I had only a piecemeal understanding of DNS based on the specific things I had run into. I knew about `dig(1)` and BIND and a CS101 idea of how recursive DNS resolution works, etc. But I was missing the working knowledge needed for designing and implementing anything non-trivial or for debugging non-working systems. So I read "DNS and BIND" (not quite cover-to-cover, but close, skipping over the details on some of the more exotic features). I set up a real BIND server for some unimportant personal web sites. None of it was hard, but it did involve a bunch of (time) investment. To be clear, BIND is not the right thing for many use cases, but a lot of DNS ideas and terminology still come from BIND and I found that stuff very valuable.

I think books are underappreciated for learning stuff like this. Most resources you find on the web are high-level theory ("here's a block diagram of how recursive lookup works"), task-oriented ("how do I get `dig` to do a recursive lookup"), or otherwise low-level (e.g., reading the source of your local DNS client to understand its retry policy). To understand the pieces and how they fit together, from the theory (what they're trying to achieve) down to implementation (what caches exist where), I find there's no substitute for a holistic approach that you usually find in books, but rarely (not never) find on the web.


It's probably a good idea for all IT people to have a working knowledge of how to debug DNS issues.

DNS has historically been a vector for significant security holes and it's likely that this will continue to be true for the indefinite future. These holes also lead to other vectors in nearly every other protocol like SMTP. Even the CA system used for HTTPS is highly dependent on a basically insecure protocol. (Would you notice if your bank bought a DV certificate instead of OV? likely not)

So, perhaps it's not such a bad thing that it seems hard to learn to those who don't have enough interest, since even now we see people building DNS things without taking the time to really understand the history of things like port randomization, cache poisoning, AXFR, etc.


It seems to me that everything which broadcasts/asserts routing decisions in a network (any layer) is deceptively simple and potentially dangerous.


> deceptively simple and potentially dangerous

Also, there's not a lot of people keeping the whole thing running. iirc there's only like 13 or 14 root DNS servers on earth.


There are 13 root name server addresses, but most (all?) of these use anycast, i.e. there are actually several servers with the same address, and your traffic will usually be routed to the one closest to you. There are over 1500 root name servers.

https://wikipedia.org/wiki/Root_name_server


Shameless side project plug: they mention a "debug" mode for dns resolving being nice to have. ComfyDNS has this in its web UI :3

https://comfydns.com/

It's the picture that says "TRACE google.com A IN" at the top.

ComfyDNS is partly scratching a personal itch - I was tired of hand modifying bind9 zone files. And also I was curious as to how DNS works - I knew surface level stuff but no details. So I implemented the RFC from "scratch" (I used netty but no DNS libs). It was a lot of fun.

(Also if/when the site goes down from hugging, forgive me, it is a rails app running on the oracle cloud free tier lol)


the joke i've always heard is DNS combines 2 of the hardest problems in CS: naming things and cache invalidation


It comes with a validity counter in seconds, and you can be very very loose about counting those seconds.

It's not the hard kind of cache invalidation. You don't really have to do "invalidation" at all.

And on the server side, it's perfectly acceptable to send a mix of old and new versions for a while.
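That expiry-based scheme can be sketched in a few lines. This is a toy illustration of how a resolver cache lets records lapse, not how any real resolver is implemented; the injectable fake clock just makes the expiry visible without sleeping:

```python
import time

class TtlCache:
    """Toy DNS-style cache: each record carries its own expiry, and
    "invalidation" is just letting the clock run out."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}          # (name, rtype) -> (rdata, absolute expiry)

    def put(self, name, rtype, rdata, ttl):
        self._store[(name, rtype)] = (rdata, self._clock() + ttl)

    def get(self, name, rtype):
        entry = self._store.get((name, rtype))
        if entry is None:
            return None           # miss: a real resolver would ask upstream
        rdata, expires = entry
        if self._clock() >= expires:
            del self._store[(name, rtype)]
            return None           # expired: treated exactly like a miss
        return rdata

# Demonstrate expiry with a fake clock (addresses are illustrative):
now = [0.0]
cache = TtlCache(clock=lambda: now[0])
cache.put("example.com", "A", "192.0.2.10", ttl=300)
assert cache.get("example.com", "A") == "192.0.2.10"
now[0] = 301.0                    # 301 seconds later...
assert cache.get("example.com", "A") is None
```

Note there is no delete-on-update path at all: the old record simply times out wherever it happens to be cached, which is the "loose" part.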


That definitely helps things to work, but it makes it very much more difficult to work out why things might not be working.

Not least because an unexpected cache can lead to things looking like they're working when they're actually broken at source, as well as things looking like they're still broken when you've actually fixed them at source already.


"I didn't know that cache existed" isn't because of the difficulty of invalidating the right items, though.

And the occasional cache that keeps things forever is so extra broken that it's not doing that because cache invalidation is hard, it's either a supreme misunderstanding or it's incompetence.


Sadly, rebooting the VM is still a valid step in debugging DNS, while you google "how to clear the cache for this type of client resolver".

Also, the dreaded caching of negative results/authoritative no such domain just before you get the new domain working properly.


> And the occasional cache that keeps things forever is so extra broken that it's not doing that because cache invalidation is hard, it's either a supreme misunderstanding or it's incompetence.

Working in phone technical support in the early 2000s, I encountered first in CF6 and then at least one J2EE implementation (Websphere, maybe?) where the $^&#ing default was to cache DNS results forever.

The behavior was borderline undocumented, and the setting to fix it was even less well documented. It's like they wanted DNS to not be a thing.


You sometimes can perform an invalidation, but it's a manual process and you need to know who to ask. Slack did this when they botched their DNSSEC rollout[1]:

> Our team quickly started contacting major ISPs and operators that run public DNS resolvers, with the request to flush all cached records for slack.com.

DNSSEC is another part of DNS that is still hard to learn.

[1] https://news.ycombinator.com/reply?id=36910054&goto=item%3Fi...


> It's not the hard kind of cache invalidation. You don't really have to do "invalidation" at all.

One of the points he brings up is negative caching: the resolver caches the "no such name" answer, so it won't return a now-resolvable address even when it's available, simply because the negative result is still cached.

Invalidation is definitely a part of it, mostly because you kind of can’t.
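A rough sketch of that negative-caching behavior, loosely modeled on RFC 2308 (which caps the negative TTL at the minimum of the SOA record's TTL and its MINIMUM field); the names and numbers here are illustrative:

```python
import time

class NegativeCache:
    """Sketch of RFC 2308-style negative caching: an NXDOMAIN answer is
    remembered for min(SOA TTL, SOA MINIMUM), so a name can keep "not
    existing" for a while even after it has been created upstream."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._nxdomain = {}       # name -> absolute expiry of the NXDOMAIN

    def note_nxdomain(self, name, soa_ttl, soa_minimum):
        neg_ttl = min(soa_ttl, soa_minimum)
        self._nxdomain[name] = self._clock() + neg_ttl

    def is_cached_nxdomain(self, name):
        expiry = self._nxdomain.get(name)
        return expiry is not None and self._clock() < expiry

# Ask for a record just before creating it, and you wait out the negative TTL:
now = [0.0]
cache = NegativeCache(clock=lambda: now[0])
cache.note_nxdomain("new.example.com", soa_ttl=3600, soa_minimum=900)
assert cache.is_cached_nxdomain("new.example.com")   # still "doesn't exist"
now[0] = 900.0                                       # after the negative TTL
assert not cache.is_cached_nxdomain("new.example.com")
```

This is exactly the footgun described above: query a domain a minute before you finish setting it up, and your resolver may insist it doesn't exist until the negative TTL runs out.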


you forgot the second, off-by-one errors.


Yes, but not a joke.


Another one of these articles. We learnt it very easily back in the 1990s when the Internet was smaller, and the computers were much much slower and less capable.

DNS, LDAP, SMTP, IMAP etc. were the bread and butter of ISPs back then and people actually referred to the official documentation (RFCs etc.). You had to learn them if you wanted to run servers on the Internet at all, and with a bit of an investment of time (i.e. your paid time on your job) you learned it.

This generation of developers and devops people don't have the patience or initiative and expect to be spoon fed and just cut and paste crap from StackOverflow and various low value blogs. Rather than learn the infrastructure that the Internet is built on, they grab the latest fashionable wrapper tool of the week, follow some shitty blog instructions, and then cry foul when it all falls apart and they cost their company lots of money. Just because they didn't take the time to learn the foundations of how things actually work on the Internet.

I've seen it time and time again. It's not actually that hard kids. You just need to do your homework.


Ah another one of those "dese damn kids and millennials I tell ya hwat, back in my grandpappys days we used to mine our own copper before laying them lines!"


The "get off my lawn" vibe was totally intended. Doesn't change the fact that there are a bunch of "AWS architects" out there running Terraform and building future disasters because they don't properly understand the infrastructure that the whole thing sits upon.

It's just like the morons who think that dynamic linking should be abolished for everyone, because they don't understand the use cases for distributors from a security standpoint, developers/users who require binary modules, or how to use their distributions packaging tools to avoid conflicts, and are too lazy to learn. But oh no, "ma DLL hell". Good grief.


> are a bunch of "AWS architects" out there running Terraform and building future disasters because they don't properly understand the infrastructure that the whole thing sits upon

Back in my day we had two servers handle more requests than your 1000 nodes.


Back in my day all requests went to Hammurabi who just impaled you if he didn't like your requests.

simpler days. Goddoggone millennials.


It’s not. It’s one of the few things that hasn’t changed much and its operation is fairly straightforward.

dig is a little confusing. It’s more capable but less straightforward than good old nslookup (which still works fine BTW).

I think partly DNS and the core protocols may seem confusing to younger people in the industry because so much stuff “just works” now.

For example, today wifi routers “just work” right out of the box. In the early 2000s it would have taken a network engineer with knowledge of DNS, IP, Ethernet, RFC1918, actual routing protocols and whole bunch of other stuff to set something like that up and they’d have well known how it worked and why it was configured the way it was.

If you think DNS from a client's perspective is confusing, try configuring BIND ;-)

/OldNeckBeardRant


When I was 14 I (poorly) administered an Active Directory environment with mail, web, and CIFS for a restaurant without understanding DNS or DHCP. Instead of setting the WRT54G's DHCP server to hand out the domain controller's static IP as the DNS server for proper name resolution I just used IP addresses and host file entries to make everything work. I also had the MX record for the domain set to the router's WAN IP and didn't have any PTR records set - the fact that e-mail delivery went as smoothly as it did is an absolute miracle in retrospect. A few years later I figured out how DNS actually worked and in my early 20s I inherited a corporate intranet where BIND was used as the nameserver for all external corporate domain zones. Moving this setup to VPSes for increased reliability taught me a _lot_ (mostly zone transfers, SOA, etc). I'm grateful for the experience but these days everything is pretty much done for you so this is a low-value activity... "IT" isn't valued the same way that "software engineering" is for better or worse.


This is why I continue to maintain that ops and SE excel at remaining distinct fields, though I certainly don’t mind overlap.

Most of my friends are software developers and/or software developers working on cloud-based stacks.

Two of those friends lead the platform engineering groups at their respective companies. One of them has a very basic understanding of networking and could figure out how to do subnetting that didn’t come out of the box, but they’d need to stop and go learn that.

That isn’t everyone by any stretch, but I see it more often than not these days.

I agree fully with the idea that younger professionals aren’t as used to the infrastructure underneath the infrastructure being as complex as the higher layers of the OSI model that they’re frequently more experienced with.

What I hope we don’t end up with is a future where all the data center people and all the network engineers (and so on) are almost exclusively employed by a small number of mega-corps. It’s important that knowledge and experience in the fundamentals of the networked world remain widely distributed and openly accessible.


Same boat at a similar age. (Ab)using AD taught me a lot about DNS, largely after the fact. Lots of “so that’s why that didn’t work…” moments later down the line.


>It’s one of the few things that hasn’t changed much and it’s operation is fairly straightforward.

It's relatively straightforward, ignoring all of the potential ways that things can go wonky, e.g. random servers not respecting TTL.

But I'll never forget when Firefox put out an update with DNS-over-HTTPS turned on by default. All of a sudden, I was inundated with "Email is gone! Everything is broken!" because we run an internal DNS server handed out to workstations by DHCP. We have internal webmail and intranet Web servers that were just gone.

It took a lot longer than it should have to figure out what was happening, partially because it's DNS! Why should things go blooey? But it's pretty clear that Mozilla did not anticipate this (easily foreseen, IMO) sort of issue.


> But I'll never forget when Firefox put out an update with DNS-over-HTTPS turned on by default.

Plenty of DNS old timers / neckbeards (e.g., Paul Vixie) were warning that DoH was not a good idea; there was lively discussion on HN at the time.

We used split-horizon DNS as well and I implemented the "disable DoH" canary where I was working at the time.

* https://support.mozilla.org/en-US/kb/configuring-networks-di...

* https://support.mozilla.org/en-US/kb/canary-domain-use-appli...


These things you're talking about are a small fraction of DNS though.

For instance, you look up "thing.behind.cdn.it" and get one answer; someone else looks up the same thing and gets a different answer. Pretty obvious, until someone asks the reasonable question "can you open a firewall hole for thing.behind.cdn.it" and there is no single stable address to open it for.

Some servers forward requests, some delegate, some will look stuff up for you others won't. And there's the magic with search domains on clients, and if clients or internal resolver libraries will honor TTLs or not.

There's also the myriad different types of records, and sometimes the server will tell you to reconnect in TCP instead of UDP, etc.

So -- DNS is pretty complex; it has the illusion of being simple because it works so well and most of the fiddly bits are abstracted away by stuff that mostly just works.
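The TCP-retry case mentioned above, for instance, is signaled by a single bit in the message header. Here is a small sketch that decodes the fixed 12-byte DNS header from RFC 1035 and checks the TC (truncation) flag; the packed bytes at the bottom are hand-crafted for illustration, not a real capture:

```python
import struct

def parse_dns_header(packet):
    """Decode the fixed 12-byte DNS message header (RFC 1035, section 4.1.1)."""
    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack(
        "!6H", packet[:12]
    )
    return {
        "id": ident,
        "is_response": bool(flags & 0x8000),  # QR bit
        "truncated": bool(flags & 0x0200),    # TC bit: "retry this over TCP"
        "rcode": flags & 0x000F,              # 0 = NOERROR, 3 = NXDOMAIN, ...
        "answers": ancount,
    }

# A hand-crafted response header: ID 0x1234, QR and TC set, one answer record.
hdr = struct.pack("!6H", 0x1234, 0x8000 | 0x0200, 1, 1, 0, 0)
info = parse_dns_header(hdr)
assert info["is_response"]
assert info["truncated"]   # the server is asking the client to reconnect via TCP
assert info["answers"] == 1
```

A resolver that sees TC set is expected to throw away the UDP answer and retry over TCP, which is one of those fiddly bits that normally stays invisible until a middlebox blocks port 53/TCP.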


Re/ knowing older protocols, I recently took a few weeks to read _Networking for System Administrators_ and take+review copious Anki card notes. It's incredible just how much more confident I feel around understanding networking at a high level, including both DNS and all the stuff underneath it, like `ethtool` and Ethernet frames and stuff.

I suppose this isn't surprising, since knowing things "from the ground up" is why I went for electrical engineering instead of CS in college.


20 years after doing a CS degree, I wish I had done EE instead.


Yeah, BIND is hard to configure. Unbound/nsd are so much easier to deal with (once you find the correct documentation, which is an exercise in frustration).

The principles behind DNS are not that hard, once you understand it's recursive. Now, to configure it with security in mind, the proper infrastructure and the final details... a lot of things to learn, but not that hard. Without BIND I mean.
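For comparison with BIND's config language, a minimal Unbound server block might look like this (the interface, network, and local names are made up for illustration; a real deployment would want more, e.g. logging and TLS settings):

```
# unbound.conf -- a small recursive resolver for one LAN
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow

    # Answer a couple of internal names locally instead of recursing
    local-zone: "lan." static
    local-data: "router.lan. A 192.168.1.1"
```

The flat `option: value` style is a big part of why people find it gentler than BIND's nested, C-like zone/options blocks.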


> For example, today wifi routers “just work” right out of the box. In the early 2000s it would have taken a network engineer with knowledge of DNS, IP, Ethernet, RFC1918, actual routing protocols and whole bunch of other stuff to set something like that up and they’d have well known how it worked and why it was configured the way it was.

I think you are stretching how bad wifi was in the early 2000s - sure it's easier today, but in the actual year 2000 you could walk into a store and take home an Apple branded wifi base station (original AirPort unit) - 802.11b stuff of the era was largely as easy to connect new stuff to as today, generally with a passkey. It all largely worked with DHCP out of the box just like most routers today too; if anything the experience is much the same, minus the faster speeds, slightly better range and encryption today. Oh and probably some kind of IPv6 support...

You certainly did not need network engineer level knowledge - lots of smart professional folks installed wifi in the 2000s, and solutions like the AirPort base station and many others were about as "turnkey" as they came.


That is what I assumed as well, until one day I got hit by a bug involving Extension Mechanisms for DNS (EDNS). Never knew it existed. All of a sudden DNS was failing and could not understand why. Took me a long time to fix the issue.


> I think partly DNS and the core protocols may seem confusing to younger people in the industry because so much stuff “just works” now.

Younger people aren't dumber than the old ones; they build even more complex stuff on top of these old abstractions.


Agreed. Not dumber, just building higher level stuff.

The value of deeply understanding all 7 layers has diminished because generally things are standardized and work well so it’s not required. This frees up some calories to work on higher level stuff like LLMs. That’s good.

It’s also good to encourage some people to understand the wisdom of the ancients in case SHTF.


Because all great developers create more complexity (-:


Pretty much what I came here to say. As a young system administrator, DNS was the second thing I learned after setting up my first Apache server and I didn't find it hard to learn at all.

I will admit that when you get to a certain point, you have to be careful not to shoot yourself in the foot when operating a production system, but that is a slightly different concern which is more implementation dependent. E.g. BIND.


Can you help me find the mistake in my zone file?

  $ORIGIN example.net.
  $TTL 900
  @    IN    SOA    ns1.example.com. hostmaster@example.com. (
        20230728001
        1800
        300
        3600
        172800
    )
  @    IN    NS      8.8.8.8.
  @    IN    NS      8.8.4.4.

  @    IN    CNAME example.com.
  @    IN    MX    10    172.253.124.27
  www  IN    CNAME example.com


Off the top of my head (haven't had to do zone files for ~2 years):

* hostmaster@example.com -> hostmaster.example.com

* NS records are usually hostnames (not sure if IPs are even valid)

* Ditto for MX records; also add a period to the end, otherwise example.net will get appended

* Also appending with the www record

See also:

* https://linux.die.net/man/8/named-checkzone
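For reference, one way the zone might look with those issues (and a few others that are easy to miss) fixed; all hostnames and addresses here are illustrative:

```
$ORIGIN example.net.
$TTL 900
@    IN    SOA    ns1.example.com. hostmaster.example.com. (
      2023072801  ; serial: must fit in 32 bits
      1800        ; refresh
      300         ; retry
      604800      ; expire: should be much larger than refresh
      3600        ; negative-caching TTL
  )
@    IN    NS     ns1.example.com.      ; NS targets are hostnames, not IPs
@    IN    NS     ns2.example.com.
@    IN    A      192.0.2.10            ; the apex cannot be a CNAME
@    IN    MX     10 mail.example.com.  ; MX targets are hostnames too
www  IN    CNAME  example.com.          ; trailing dot, or example.net. gets appended
```

Running `named-checkzone` over the original catches most, but not all, of these; the apex-CNAME-plus-other-records problem is the kind of thing some servers will happily load anyway.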


To add —

You can do delegated zones by specifying NS records for a subdomain within the parent zone. If you’re talking about NS for a second level domain (“example” in example.com) you would want glue records which are essentially a “lookaside” to prevent circular dependencies. Glue records are really just A records with clout, returned with the IP of your name server. This glue is maintained by the higher level authoritative zone. So you query for example.org and the nameserver for .org returns ns1.example.org as the name server for your zone, as well as the IP address for that server.

This is why you could run a DNS server at ns1.example.org as your authoritative DNS.
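A sketch of what that delegation plus glue looks like from the parent side (the names and address are illustrative):

```
; Records the .org servers hand out for the delegation of example.org:
example.org.      IN NS  ns1.example.org.
; Glue: the A record that breaks the circular dependency
; ("to resolve ns1.example.org you'd first have to resolve example.org")
ns1.example.org.  IN A   192.0.2.53
```

Without that A record in the parent's additional section, a resolver chasing the delegation would have nowhere to start.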


Pretty good! :)

Besides what was mentioned by others, there are two more mistakes in the SOA and one in each NS.


> mistakes in the SOA

I figured there was since (e.g.) the numbers looked 'off' (regular TTL vs. negative TTL), but couldn't be arsed to look up the exact SOA format.


Also the CNAME: if you have a CNAME, you don't want any other records with the same name. It ends up being a confusing and ambiguous situation to be in. You are supposed to use the CNAME to jump to the actual record, but now there is also an MX record here; are we supposed to do anything with it?


Indeed, a semi-common footgun that is allowed by BIND and I think many other servers.


I just wanted to add on to what you're saying:

> I think partly DNS and the core protocols may seem confusing to younger people in the industry because so much stuff “just works” now.

I've noticed it's become much worse since universities have been teaching Python to start with and with the whole aggressive commoditization of developers. To some extent the social justice policies enacted in our communities to exclude people*. ("unsavory" people)

We no longer have the culture where kids at early ages get a desktop, learn the ins and outs, play video games, try to pretend to be a hacker, etc. We're getting developers who can barely script in JavaScript, barely do HTML, ignore the edge cases, and generally don't have a lot of interest in the craft. It's pretty frustrating to see this.


I may agree with your point but don't understand the social justice aspect at all; it makes it feel like something just added in. I think both our perspective as a society and therefore our educational goals have changed over decades, laterally to any social justice aspect. If nothing else, there are orders of magnitude more developers of all sorts today than 25 years ago, both as absolute numbers and as relative percentage of population. Basically the enthusiastic nerds who geek out about anything and everything and want to be hands on are still there, but also a LOT of people who just want a paycheck or just have more limited / specific interests. And they do ok. And the education system will provide for them.

But I just don't see e.g. for-profit bootcamps with their simplified curriculum as any kind of social justice project.


And when there is social justice involved, it's about getting more people involved in engineering. I can see where "hey, quit being gross to women" can be read as excluding the unsavory, but that framing presumes a world where that's an intrinsic, immutable property of a person.

I really don't buy the "some people just can't help being an asshole, so having a rule against assholes is exclusionary." The most controversial CoC is by far the Contributor Covenant, and its rules are: be kind and empathetic, show others respect, don't insult or demean others, don't be creepy or sexually harass others, don't doxx people, and behave like adults at work. Like, the bar is so low.


This is kind of where I was going with that, but I don't agree with the way its being described. I 100% agree with cracking down on individuals who are hostile based on identity, that was unreasonable and should continue to be excluded from projects.

The history of the developer circles is that we've been a bit rough with the language. (mount, finger, touch, zip, etc are perfectly acceptable things for linux commmands.. theres even a joke about it). Unsavory jokes in mailing list/code bases, etc. However, we've come upon a bunch of people who take the aggressively negative view on the words and interactions used and have taken it upon themselves to rename things and force changes. (Master/main, master/slave, whitelist/blacklist, [theres a whole list of these "undesirable" language things] etc ).

On top of that, CoCs have introduced toxic positivity rules on projects. Implementers overlook the fact that they have been used as a weapon within the project itself. (There have been examples of racial and gender-based discrimination in the enforcement of the rules.)

All of this brings in people who do not contribute in a UX, DX, engineering, management sort of way. Even when these things are abused/enforced poorly it still has a significant silencing effect and distancing effect from those who would be a positive contributor but is reluctant to participate.


> But I just don't see e.g. for-profit bootcamps with their simplified curriculum as any kind of social justice project.

And you're right not seeing them that way!

While I'm sure there are now good-faith bootcamps that have the goal of improving the career options of people who may otherwise never get those, the first time I saw the programming/sysadmin bootcamp concept implemented was around 2002, where a company was recruiting kids still in high school to train them for their openings.

They did that because there were not enough skilled IT workers back then. Right, what that really meant is, they did that because there were not enough skilled IT workers they could afford back then. Someone did the math and estimated that for some openings, it would be more profitable to train first-time-job-seekers (in a lot of cases, not-yet-job-seekers) than to offer a better deal. They're still in business so I guess it worked!


I gave more context on the SJ part in a reply to Spivak.

But to address the level of statements: I see SJ as one part of it, not the only or major part.

> enthusiastic nerds

Maybe it's the spaces I'm in, but I'm not seeing that these days. I'm not hearing about personal projects people have made anymore. I'm not seeing a lot of enthusiastic young presenters at conferences, etc. I'm just seeing younger people trying to stand out in the rat race, and they aren't similar to when I was growing up. (i.e., LAN parties, identifying as nerdy people and grouping together, etc.)

The last young person I recall being like that was a guy who did a presentation on using Joy-Cons at Scala Days 2017(?). This was a kid who did have experience in the US FIRST Robotics group. But f me... a high schooler pretty good at Scala... that's awesome!


I don't know, I would be curious to meet an out-of-school dev from 20 years ago. Those I meet at my current job are... young. The lack of experience shows.

I guess at the time I was not better. Different, because using C I could still destroy hardware. Hard to program a CGA card from Python :D Maybe with MicroPython on an Arduino now?


> I don't know, I would be curious to meet an out-of-school dev from 20 years ago. Those I meet at my current job are... young. The lack of experience shows.

I was a still-at-school dev/sysadmin/multi-purpose nerd 23 years ago, if that works — nice to meet you!

I had more experience than average on some tasks because I was lucky enough that, against my own will, when I was a little kid my parents bought a computer and not a videogame console. I still spent most of my time in front of it playing games, of course, but I also learned to program a little bit, and by the time we had our first PC at home, I also learned troubleshooting because every time I broke it I had to get it working before one of my parents needed it.

Other than that, I think school prepared me well in terms of foundations (perhaps better than what some kids see today, at least I'm surprised how few people who went through formal CS/SWE university training recently know enough about the reasons the relational model was introduced, or how operating systems work).

On the other hand, practical skills taught at school were not immediately useful to me. In particular, the software stack most used at school (Pascal, then Delphi; we did have some courses using C, Prolog, ASM, COBOL, and SQL, but most of my programming hours in school were spent in Pascal/Delphi) was not used in any of the jobs I've had, with the exception of COBOL, so I did a lot of learning on the job.

I make a living working on databases now, and I sometimes like to tell the story of how I got into them: while working at a bank, the web banking application had an SQL Server backend database for its frontend (actual banking happened in another network, on a mainframe, I don't know which DB it used but it was some IBM thing and it was not DB/2). There was a settings table that was basically a key/value store controlling different config options of the app. One of them was a maintenance flag which, if set to whatever the value for true was, would show a "we're under maintenance" page when serving any user request. One day they asked me to go put the site on maintenance ("go put the site" because I had to go walk upstairs and enter the datacenter to do this back then), so I went to the console, opened the query editor, and entered something like:

update settings set value = 1;

Notice the lack of a where clause. When I saw the number of rows affected, I panicked, ran downstairs, explained what I had done, and a colleague (who was also young but I think at his second job after school, and way more experienced than me) calmed me down, walked up with me again, showed me where the backups were, and helped me restore one.

I've been hooked on databases since that day!


A network engineer or a teenager motivated to communicate with his girlfriend when not at his dad's office.


The downside to things that "just work" is that they become magical black boxes where learning how they work isn't a requirement until things really go wrong.


If that is the case, why is the recommended way to use DNSSEC to turn it off?

https://www.fastmail.com/blog/dnssec-dane/


DNS concepts are pretty straightforward, but I agree with the article that there are a lot of little holes to fall into. No mention in any thread on dig vs /etc/hosts. Or of ISPs with bad actor DNS behavior... etc..


> In the early 2000s it would have taken a network engineer with knowledge of DNS, IP, Ethernet, RFC1918, actual routing protocols and whole bunch of other stuff to set something like that up

You remember things differently than I


Yeah. It was a bit different from today, but not by much. Really if anything, stuff has regressed since then.

Our setup in the early 2000s was the same as today, except the AP, router, and modem were separate devices. This was somewhat beneficial: today, my monopoly ISP forces me to use their all-in-one junk box. There's not even a possibility of competition, and lo and behold, I don't think the forced-box does anything more than the three pieces of equipment did. (Except, finally and only as of the last year, IPv6. But perhaps if there had been real competition, that'd have happened a decade sooner, esp. since I first saw it in '07 on a non-ISP connection…)

Combining the AP/router I can get behind. It's when it became ISP domain that it went south.

The 90s were rougher, as the networking stack in Windows was a bit more terrible. I have vague nightmares about installing drivers for TCP … where the TCP driver was somehow specific to the network card. That sentence doesn't even make sense by today's standards. Stuff got better in the 2000s.


Me too. Plug-and-play consumer wi-fi routers were commonplace by the early 2000's.


in the very early 2000s, home routers weren't a thing. Cable modems hooked up to a single computer. If you were a business, you got a PIX, but home setups were frequently done with a computer that had 2 ethernet ports and either used Windows's "home internet sharing" or Linux's ipchains and NAT. This was typically fine, because very few houses had multiple computers. I knew many people who would get a separate cable modem for each computer in their house.

By the mid 2000s, Linksys started coming out with their little WRT routers, which were affordable by home users and mostly just plug and play.


What you describe is late 90's, not early 2000's. Broadband was rolling out across many areas of the US in the late 90's (@Home cable modems, DSL, etc.)


Is configuring BIND hard just because it's got an obtuse zone and configuration format? Or because there are a lot of DNS-server-level decisions that need to be made?


See above. It's not only an obtuse configuration issue; BIND also has very little validation for issues that can get you in trouble, like CNAMEs that also have A records.
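For illustration, the CNAME pitfall mentioned above is easy to check for mechanically. Here's a toy sketch (assuming zone records are already parsed into (name, type) tuples — this is not BIND's actual data model):

```python
# Toy zone sanity check: a name that owns a CNAME must not own any other
# record type (RFC 1034 section 3.6.2). Records are (name, rtype) tuples.
def find_cname_conflicts(records):
    cnames = {name for name, rtype in records if rtype == "CNAME"}
    conflicts = set()
    for name, rtype in records:
        if rtype != "CNAME" and name in cnames:
            conflicts.add(name)
    return sorted(conflicts)

zone = [
    ("www.example.com.", "CNAME"),
    ("www.example.com.", "A"),      # illegal: CNAME and A on the same name
    ("mail.example.com.", "A"),
]
print(find_cname_conflicts(zone))   # prints the offending name(s)
```

A real validator would also need to handle DNSSEC's carve-outs (RRSIG/NSEC may coexist with a CNAME), which is exactly the kind of nuance that makes "little validation" bite.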


Linksys was making home routers which were about as easy to deploy as any home router today starting in 1999. Their earliest Wireless G router came out in like 2002. The beloved WRT54GS came out in 2003.

https://arstechnica.com/gadgets/2000/08/befsr41/


I agree. I remember learning about DNS when I was a teenager. And I've been running my own authoritative DNS servers for almost 30 years now. Remember the O'Reilly book, "DNS and BIND"? It's still in print, though this would've been the first edition, around 1993.


> For example, today wifi routers “just work” right out of the box. In the early 2000s it would have taken a network engineer

Or a nerd buying a WRT54v1 to install hyperwrt.


Many modern APIs are more ergonomic and easier to use due to the benefits of hindsight. A redesign and upgrade of DNS is long overdue.


I found out about https://www.nslookup.io/learning/ recently, which greatly increased my knowledge of DNS. If you look at the list of DNS record types [0], you might be surprised at how many there are. Knowing how to use those can be a bit much.

[0] https://www.nslookup.io/learning/dns-record-types/


> the list of DNS record types

The actually authoritative list: <https://www.iana.org/assignments/dns-parameters/dns-paramete...> That list also has linked references for each entry, whereas the list you gave only has references for 9 of the 51 types it lists.

If we exclude entries explicitly marked as experimental, obsolete, deprecated, or reserved, the list you gave is still missing these:

• AMTRELAY

• ATMA

• AVC

• DOA

• EID

• GPOS

• ISDN

• L32

• L64

• LP

• MINFO

• NID

• NIMLOC

• NINFO

• PX

• RKEY

• RT

• SINK

• SPF

• TALINK

• WKS

• X25

(I know, many of these are de-facto deprecated: SPF is abandoned for TXT, GPOS was replaced by LOC, and the entire usage of WKS was obsoleted by the recommendation of RFC 1123. But they are not marked as such in the list from IANA, and I still often see SPF records in the wild.)

Also incomplete, but often has better references: <https://en.wikipedia.org/wiki/List_of_DNS_record_types>

(Not to mention TYPE, which I have also occasionally encountered.)


These all seem to be super-niche or obsolete though?

* ATMA, ISDN, NIMLOC, EID, X25, are all for relatively niche or obsolete physical layer protocols (I guess ATM isn't that niche, but most people never run into it).

* WKS, PX, NID, LP, L64, L32 seem to be defined but unused in practice (I had never even heard of ILNP, which is what NID, LP, L64 and L32 are for, until I googled it just now).

* RKEY, NINFO, MINFO and several others are expired without adoption or never made it to an RFC

* GPOS is an earlier version of LOC


> These all seem to be super-niche or obsolete though?

Yes, I wrote that in an edit soon after commenting; your comment was made hours after I made mine, so you should have seen it.

And, as I said, the SPF record, though obsolete, is still very commonly found, in my experience.


DNS is easy in the same way that chess is. The game mechanics are straightforward, and it gets more complex from there.

DNS bears the burden of delivering you to complex IT systems. It’s abused in various ways to enforce geographic restrictions, service levels, etc. It generally works, so long as everyone upstream knows how to configure things so that downstream things they don’t know exist work well.

When things don’t work… that’s not easy.


I feel like DNS is one of the more straightforward protocols, especially on a practical level, and especially given that most interfaces are a dropdown and two text boxes.

I have noticed a lot of developers shy away from it, probably because they don't use it much or it's not their job (rather than it being hard).


"Dropdown and two text boxes" undersells it.

Here is the list of several dozen record types: https://www.iana.org/assignments/dns-parameters/dns-paramete...


Sure, but I'm talking about a day-to-day practical level. Most people will only ever need to modify A/CNAME, occasional MX and TXT, and maybe an SOA/PTR.

Even the more arcane record types (as far as I've ever used them) are essentially key-value pairs with the record type analogous to a namespace.


AAAA

Also NS is reasonably common.


Yeah, that's the dropdown.


Of the similarly-aged protocols, I think it's the most difficult (which is not saying much).

SMTP and HTTP can be pretty easily done by hand, which makes them more accessible to a person learning the protocols themselves.

DNS the protocol is simple, but I do think there's something to be said for how complex it is if you want to, say, set up your own domain from which to reliably send e-mail.
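To make the parent's point concrete, here is a hedged sketch of the record set typically involved in outbound mail. All names and values below are hypothetical placeholders, not a working configuration:

```python
# Hypothetical record set for sending mail from example.com.
mail_records = {
    ("example.com.", "MX"): "10 mx.example.com.",
    # SPF: which hosts may send mail for the domain
    ("example.com.", "TXT"): "v=spf1 mx -all",
    # DKIM public key, published under a selector label
    ("sel1._domainkey.example.com.", "TXT"): "v=DKIM1; k=rsa; p=<base64-key>",
    # DMARC policy tying SPF/DKIM results together
    ("_dmarc.example.com.", "TXT"): "v=DMARC1; p=quarantine",
    # Reverse DNS for the sending host (192.0.2.1 here)
    ("1.2.0.192.in-addr.arpa.", "PTR"): "mx.example.com.",
}
# Five records across four types, living in both the forward and reverse
# trees -- and a mistake in any one of them can land your mail in spam.
print(len(mail_records))
```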


Exactly. Why waste time learning something I will only use once or twice a year or 10 times in my career? Or that someone else (who is an expert) can fix for me?


I guarantee the problem space that dns solves is something you will run into in your career. Best to have some knowledge of systems like that so you can design them.


One of the best, but also strangest explanation of DNS I've seen is from "A Cat Explains DNS". It's wonderful.

https://www.youtube.com/watch?v=4ZtFk2dtqv0


As an nslookup user since before dig, I find the default dig output to be less than ideal. The top->bottom flow, the "important" information always being the third option on a line, etc.

Basically, how do I put this... uh, it's obviously written by someone for themselves and people who use it every day.

So, for most things I continue to use nslookup; it gives me what I want 99% of the time, and for me it's easier to parse.

So, various other people have said something similar, but it reminds me of a couple of utilities in AIX that an office mate of mine wrote. The thing was a very sharp tool, but the 3 cases that comprise pretty much 100% of its usage required three+ mode-selection parameters, etc., just to print the equivalent of "it's working" or "it's not working". The result was 50 characters of typing that quickly turned into a shell script with two parameters. I sort of have the same thing to say about the "ip" command, which showed up at approximately the same time as dig. Yup, it can do everything, but it's a terrible UI too, for all the same reasons. The defaults are ugly/hard, and the hard stuff still takes 10 mins reading the man page for the once every two years you need to do it.

So worst of both worlds.

PS: I sort of feel the same way about BIND; it's written for people who run big public DNS servers. The rest of us are better served by "unbound" or the half dozen other utilities that are more oriented towards users looking to run an in-house split-DNS/caching resolver setup.


I remember being stymied by the BIND zone files (which are plain text) being called .db files (they may actually be called anything -- some people used the .db extension). I had just learned about sendmail, and I knew it kept alias tables in Berkeley DB files (which are binary), which in sendmail parlance were also called .db files.

As Alan Kay is fond of quoting Bob Barton... "Systems programmers are high priests of a low art."


I'm not a "developer" and I learned DNS without any problems. Therefore I agree with the other commenter that DNS is not actually difficult to learn. I like the output from DNS utilities such as BIND's, and the tinydns format.

DNS is worth learning for any internet user, IMHO. I've written primitive utilities that when used together can do stuff none of the popular DNS utilities can do. I use these every day.

Here's a DNS challenge for readers. Try to look up the domain "bootable.com". Use whatever is the preferred method.

People writing about DNS often compare it to a telephone book. IMO the way most people use DNS is more like "directory assistance".

IP addresses do change but by and large most stay the same for long periods. Old zone files, publicly available scans^1 and other bulk DNS data collected from web crawls and public DNS caches comprise the "phone book" for me. Absolutely essential.

1. Sadly, in recent years some of these sources have become non-public. No phone book for you! Call directory assistance.


> DNS is not actually difficult to learn.

> tinydns format.

You earned my disagreement right there!


For me, the issue isn't so much DNS itself, but more that there are more things that try to resolve names on your PC, like /etc/hosts and all the stuff you put into resolv.conf.

Then you read online that you should try dig or so and those tools don't really match up with what your tools do(=resolving via libc), because dig really only does the DNS part.
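One way to see the mismatch this comment describes: `getaddrinfo` goes through the system's full name-resolution stack (`/etc/hosts`, `nsswitch.conf`, then DNS), while dig speaks only DNS. A small sketch — `localhost` typically resolves from the hosts file, so no DNS query is involved at all, which is exactly why dig and your application can disagree:

```python
import socket

# Resolve via the libc path, the same way most applications do.
# For "localhost" this usually comes from /etc/hosts, not DNS.
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # typically 127.0.0.1 and/or ::1
```

Pointing `dig localhost` at a public resolver would return something entirely different (usually NXDOMAIN), even though every local program resolves the name fine.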


One of the gotchas I encountered is that DNS is asynchronous, with possibly a long delay before reply. C apis make it look synchronous - which I think makes it harder to work with. There's also the detail that order of replies can be any. (I found too many developers expected synchronous and instant replies)
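Python inherits the same synchronous `getaddrinfo` from libc; the usual workaround is to push the blocking call onto a worker thread (roughly what asyncio's default resolver does, too — glibc also ships an asynchronous variant, getaddrinfo_a, but it is rarely used). A hedged sketch:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# getaddrinfo blocks the calling thread, possibly for seconds on a slow
# resolver. Running it in a worker thread gives the caller an
# async-looking API without touching the C resolver internals.
def resolve_async(executor, host, port=80):
    return executor.submit(socket.getaddrinfo, host, port,
                           proto=socket.IPPROTO_TCP)

with ThreadPoolExecutor(max_workers=4) as pool:
    future = resolve_async(pool, "localhost")
    # ... the caller is free to do other work here ...
    infos = future.result(timeout=30)   # note: replies may complete in any order
    print(infos[0][4][0])
```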


I remember the mid-90s when we were writing MUD servers and clients. You'd start the client, go "/world ZebraMUCK" and then the TUI would hang while the DNS name resolved.

So then we figured out asynchronous DNS (this was in the days when you linked with "-lresolv" on SunOS) and it was like a breath of fresh air! You could go "/world ZebraMUCK", control was returned to the keyboard, and even if it took 120 seconds to resolve zebramuck.nozebra.example.com, you could go about your business, like in another world, or issue some other client commands.

And client developers learned a little about select(3).


with the caching, it's a nice bimodal distribution: 99.99% 0 ms response time, 0.01% 30 ms response time (with a small chance of having that query packet be dropped, with retries in the 1000s of ms). I've seen people write caches that use the old value and kick off a new query in the background to hopefully populate the cache again.
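The "serve the old value, refresh in the background" pattern described above (often called stale-while-revalidate) is easy to sketch. A toy version, with the resolver function and clock injected so it stays self-contained — both are hypothetical stand-ins for real lookups:

```python
import time

# Toy stale-while-revalidate cache: on a TTL-expired hit, return the old
# answer immediately and flag it for background refresh, so the caller
# never eats the slow-path latency.
class StaleCache:
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl, self.clock = ttl, clock
        self.entries = {}          # name -> (value, stored_at)
        self.needs_refresh = set() # names to re-query in the background

    def get(self, name, resolve):
        entry = self.entries.get(name)
        if entry is not None:
            value, stored_at = entry
            if self.clock() - stored_at > self.ttl:
                self.needs_refresh.add(name)  # serve stale now, refresh later
            return value
        value = resolve(name)                 # slow path: actually query
        self.entries[name] = (value, self.clock())
        return value

now = [0.0]
cache = StaleCache(ttl=30, clock=lambda: now[0])
cache.get("example.com", lambda n: "192.0.2.1")  # miss: takes the slow path
now[0] = 60.0                                    # TTL has expired
print(cache.get("example.com", lambda n: "x"))   # prints 192.0.2.1 (stale)
print(cache.needs_refresh)                       # {'example.com'}
```

Note this knowingly violates the spirit of TTLs — which is why it shows up in ad-hoc caches rather than in standards-conforming resolvers (RFC 8767 later blessed a constrained form of it).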


This was an outstanding writeup, and I have felt the same frustrations in many other contexts. I enjoy going the "make your own, friendlier tools" route.


sometimes it's hard to even know WHERE to change the settings. Last week a friend was trying to set up Heroku with api.foo.com and he needed to add a CNAME to the domain so Heroku would make the cert and turn it on.

I used dig, I used host, I used whois, I got invited to their AWS Route 53 and saw all sorts of stuff in there, but each change had no effect. Finally I noticed from whois that the name servers weren't even AWS, they were Google.

So they gave me access to the google account but no domains in there.

Finally I asked, have the CEO log in to his personal google account and sure enough, that's where the change could be made.


This. The protocol isn't hard, but the protocol isn't the service.

The service of DNS is a decentralized, distributed, held-together-by-spit-bailing-wire-and-an-unprecedented-post-WWII-era-of-international-peace-and-collaboration hash-job of individually configurable nodes kind of agreeing on a shared worldview of the information the service contains, unless your local government hates piracy or pictures of Winnie the Pooh, YMMV.

It's like saying "I don't know why people struggle with databases; SQL isn't hard" and then the database contains ten thousand tables, a thousand indices, a hundred functions and triggers, and all of it was documented by someone who built it and never had a neophyte review the docs.

Oh, and the database operates on eventual-consistency guarantees out to 24 hours of "eventually."


Here's what's cool about the article:

- Presents some nice theories about what makes things hard to learn (infrequent use, poor tools...)

- Describes how DNS tools could be improved.

- Gives you a few gotchas for how one may shoot themselves in the foot with DNS.

Here's what's a bit (not much) less cool:

- I really have no clue if those things ACTUALLY make things hard to learn (because it's not a research paper on learning).

- It's a plug for other content on the side which actually describes the DNS protocol. I'll admit the content being sold looks cool. I haven't purchased it and can't vouch for the actual quality.


As for the last point: Check out the author’s blog. She’s a real hacker and can convey technical things in friendly and simple terms.


Yup, I have no problem with the author or the work they do. I think it's cool. And there's still value in the article.

My only point is it slightly undermines this article per se. Claiming that something is hard to learn and immediately providing a thing that can teach it well is usually marketing. And that's all ok :).


At one point, I had to make and maintain my own DNS zone text file, knowing only a little. Today, cheap providers give me various Web forms interfaces, but that doesn't mean it will always "just work", so knowing a little can still help.

I still had to break out `dig` the other day, when a DNS provider started answering records days past TTL, breaking email and Web.

To know if/when that happened again, I looked around for a monitoring service, but didn't quickly find one that did what I wanted (and was overwhelmed by SEO, and very aggressive robo-sales emails from one of them), so I wrote a script that essentially runs `dig` and `diff`: https://www.neilvandyke.org/check-my-dns/
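The dig-plus-diff idea boils down to snapshotting answers and comparing record sets, order-insensitively, since DNS answers can rotate. A sketch of the comparison half, with the actual `dig` invocation left out so it stays self-contained (the snapshots below are hypothetical `dig +short` outputs):

```python
# Compare two snapshots of DNS answers (e.g. lines of `dig +short` output
# taken at different times). Sets, not lists: round-robin rotation of
# answer order is normal and should not trigger an alert.
def diff_snapshots(old, new):
    old, new = set(old), set(new)
    return {"added": sorted(new - old), "removed": sorted(old - new)}

yesterday = ["192.0.2.1", "192.0.2.2"]
today = ["192.0.2.2", "192.0.2.3"]   # one address replaced, order rotated
print(diff_snapshots(yesterday, today))
# {'added': ['192.0.2.3'], 'removed': ['192.0.2.1']}
```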

The monitoring script discovered one of a provider's DNS servers had ongoing problems, so I sent them a monitoring report, saying I wasn't a DNS expert, but perhaps, if there's a problem, the report would be helpful to their DNS experts. This seemed to immediately get past any "have you tried rebooting Windows" front-line flowchart response that I wouldn't have been surprised to hear, and they said they're working on it.


'Still'? There's a famous talk[1] in the network community about how DNS complexity is growing so fast that effectively nobody can keep up with it. There's so many competing, overlapping, and overriding RFCs that it's hard to make sense of it anymore.

[1] https://blog.apnic.net/2018/03/29/the-dns-camel/


In a kind of corollary of Dunning-Kruger, there is often a chasm between what people think DNS is and all of the things it can be. The article points some of that out (stub resolvers, even different implementations of libc); recursive versus authoritative responses; the recursion process; recursive versus authoritative servers.

A lot of deployment implementations are "bottom of the barrel" and aren't correct to begin with, although they work for the intended purpose. There is no checklist for server implementations that I am aware of (I've asked people who should know!). There's a lot of folklore which persists because if it works it's presumed correct.

There's "DNS" and then there's "The DNS", the "one true root" with arbitrary restrictions on the contents of labels. There's political interference with implementations in terms of the "one true root" doctrine, which interferes with marrying a resolver which e.g. queries a control plane and serves an application plane (where they have different roots): there are e.g. notions of forwarded zones, but there is no notion I am aware of of a "always recurse and lie that you're authoritative" zone (you can hack source code to accomplish this of course).

Even MITRE ATT&CK doesn't always get it right. They had it listed that DNSSEC "traffic" had to be examined with SSL tools, until I pointed it out (I didn't get credit). DNSSEC related records are ordinary DNS records, nothing is being encrypted.

I could go on: the experts were wrong on (UDP) frags, anycast...


DNS is complicated because it's old, and has been continuously revised in flight, to uplift to TCP, DoH, DoT, add DNSSEC, EDNS0, TSIG, notify, ECS, EDE. It continues in one ietf WG having outlasted another, it has cross area review. It's political, and technical, and has its black helicopter cabals.

I predate the DNS, and I continue to befuddle myself about details of the DNS on a daily basis despite a life embedded in DNS.

If you love DNS join DNS OARC.


djbdns is simple, easy to understand, easy to configure; it embodies a clear understanding of how DNS works.

Unlike BIND and dig, it was designed after DNS had been in use for a while.

Like sendmail, BIND suffers from being designed before anyone knew what it would need to do.


I still use tinydns, but I've moved on from dnscache to unbound.

djbdns was a great tool, clearly built with security in mind, and it forced you to understand how the whole system worked. It struggled with things added later, like TXT and SRV records, but they could still be added.

qmail was also well ahead of its time.


and logging in hex is awful and should be punished


"(like, what if you could run dig +debug google.com and it gave you a bunch of extra debugging information?)"

The flag they're looking for is +trace; it provides exactly that: the path resolution takes, from the root nameservers all the way down to the domain.


There's an old saying: if someone tells you they know how DNS works, they're lying.


tl;dr: suggestion: DNS is really hard to unlearn.

I do not agree with the implied sentiment of the article. And thus with some of the comments (from "both" sides) of the arguments presented here in the comments.

Hard is not defined, but is implied to be "harder than you believe it is."

That is a general characteristic of perhaps most topics worth pursuing in our field: computer science/engineering. (Defining it in this closed field space on purpose for brevity.) And especially in distributed systems, which DNS in some way falls under.

Further you have business interests, which have affected and will affect DNS -- we like it or not. And have added on top of it.

A number of individuals then use hard as "it is intractable" [to learn].

Is DNS harder to learn than BGP routing: I think not.

Does one need to spend a decade focusing on using and debugging DNS? I would argue not.

Does one need more time to dive into BGP than DNS to do state of the art work: I would argue yes.

DNS is simply not the hard and hot scientific problem people go around tackling from a research perspective.

It is a federated business (and political perhaps) problem at this point.

I do agree there is a lot of caching involved, which creates a ton of coherence and invalidation issues. That does not make the topic hard, it makes DNS problematic. (The other word I would use, and that I think Ms. Evans [the author] implies perhaps.) I agree totally with that.

I also think it is a matter of how the learning path is approached for the average scientist here. A lot of people are introduced to DNS (or email, to that extent) as: here is the sketch idea, we make sure it works, trust us, there is this single deified entity that makes sure it works, and it is not patched with other branches "constantly." All of the aforementioned sentence is of course a lie. But a lot of engineers learn that. (I was lucky not to have that introduction.) And then it comes time to debug a DNS issue -- when they actually realize it is a DNS issue.

So I would recommend a revised blog title to: DNS is really hard to unlearn.


I love the flamewar that this post has started

There's the "it IS hard!" gang

and the "it's not hard, you're just dumb" gang

and the "it's not hard, you just haven't spent the time required to properly learn it" gang


Because complexity is easy. It practically generates itself. It has a natural economic advantage.

Making sense of complexity, otoh, takes much diligent effort.

So complexity tends to win and everybody tends to befuddlement.


I used to specialize in these service-enabling services: DHCP/DNS/AAA/LDAP, etc. Low-level stuff tends to get taken for granted. It's not difficult, but there are many nuances, not typically visible to layer-7/front-end development per se. Embrace the "and": if you are a FE/BE dev AND understand systems/DNS/networking, you just set yourself apart from the next person.


Great article. Thank you!

I used to develop and maintain one of the world's largest DNS services and summarised my experience with the DNS protocol in this blog post: https://pavel.network/please-stop-using-dns-protocol-for-you...


The question I have is:

Why isn't there a combined MTA/DKIM/DNS server ?

Why am I installing and configuring and running and monitoring all of these different server software packages ?

I'd like to have a single config file that handles everything required to run a mailserver.

If I have more complex needs I can split things up make them modular ... but I don't.


> Why isn't there a combined MTA/DKIM/DNS server ?

If the nameserver was part of the MTA, where would you configure a CNAME for your webserver?

> I'd like to have a single config file that handles everything required to run a mailserver.

I think the kids call that Ansible. :)


DNS has been repurposed for everything from security validation to load balancing.

DNS over HTTPS and DNSSEC attempted to address some longstanding issues, but in the end everyone still has a host they know is going to get hammered harder.

Not too difficult to understand, but it is complicated given the number of sub-optimal use-cases that emerged. =)


Some years ago, I had a miserable couple of days when an internal DNS server quit working. Eventually I traced it to a corrupted cache file. Once that was deleted, life returned to normal.

Though unpleasant, it was an enlightening experience. I had never imagined how much of the internet ran on Amazon by then (2010?).


DNS-noob here: For what use case is it useful to debug DNS like described in this article? I'm a programmer since 2001 and I never ever had to investigate what is going on behind name resolution, I never encountered any kind of DNS-related issue.


Julia also created this comic about DNS, which is helpful for learning: https://wizardzines.com/zines/dns/


Wow, her "human" output for dig is wonderful. I learned nslookup first, and to this day I still use it because dig's output is such a mess. If dig looked as she suggests, I would have switched years ago.


I find the syntax and formatting confusing. The article mentions a desired "+human" flag. How about DNS tools that output everything in JSON? I want to see the structs as key+value.


Wireshark


i hate to be that guy, but it's not hard to learn. The tools are just from a different era where expectations were... different. However, even in my day BIND was avoided in favor of other servers, like that one by the qmail guy.. can't remember his name.

this comment reminds me of one of my favorite Dilberts

old guy watching Dilbert at his computer: you kids today and your fancy graphical user interfaces. Back in my day all we had were ones and zeros ...and sometimes we didn't even have ones.

Dilbert: you had zeros? we had to use the letter 'o'


A fun and colorful explanation of how DNS works. https://howdns.works/


i had hella trouble with it, but not just it - all the related topics - reverse dns/proxy setups, 10 ways to configure it/them, cname flattening and the 15 ways to configure that, the 15 ways you can do it with cloudflare or not, dis/advantages of all those setups/combinations/permutations, etc.

doubt a course could teach me half of what i would need to know in some reasonable amount of time - say, two or three months.

but open to the idea.


I actually tried to learn about sockets today. I gave it around 3 hours but I never reached the end of the rabbit hole which led me to give up.


+human flag is a sick idea, would love to see that PR.

inb4: you do it


DNS itself is pretty simple. Implementing a basic DNS server or client is not a big deal.

The management software around it is what is generally terrible.
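To the parent's point about clients, the wire format really is small: a query is a 12-byte header plus length-prefixed name labels and two 16-bit fields. A hedged sketch of building one by hand (send the bytes over UDP to port 53 of a resolver to try it; error handling and response parsing omitted):

```python
import struct

# Build a raw DNS query packet for an A record, by hand (RFC 1035).
# Header fields: id, flags (0x0100 = recursion desired), QD/AN/NS/AR counts.
def build_query(name, query_id=0x1234):
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"                           # zero-length root label ends the name
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_query("example.com")
print(len(packet))  # 12-byte header + 13-byte name + 4 bytes = 29
```

Parsing responses is where it gets less trivial (name compression pointers, in particular), which is roughly where "basic client" ends and real resolver work begins.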



People who use their DNS knowledge often: what is your job, and what problems do you solve with your DNS knowledge?


I use DNS to define topology and services (what you'd expect) and of late I'm using it for federating telemetry (the actual data; think of "tags" in the industrial control sense).

I've used it as an observable for asset discovery and classification, as well as for characterizing infrastructure.


Doing:

bastion named server

Multizone

Hidden Master

Split-Horizon

IXFR/AXFR firewalling

DNSSEC and local network

resolv.conf hijacking

Remote admin security

All of the above would and should be hard.



DNSSEC was already mentioned, but yet another dark corner of DNS are wildcard records.
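The classic gotcha (spelled out in RFC 4592) is that a wildcard only applies when no closer-matching name exists in the zone, so an unrelated record can "shadow" it. A hypothetical zone fragment:

```
$ORIGIN example.com.
*.example.com.     IN A    192.0.2.1
host.example.com.  IN TXT  "exists"

; A query for other.example.com   -> 192.0.2.1 (matched by the wildcard)
; An A query for host.example.com -> NODATA: the node exists (it has a TXT
; record), so the wildcard is never consulted, even though host has no A.
```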


DNS is hard because cloud/web companies insist on taxing you to do very basic things.


DNS is like Othello. Simple to understand, but takes a lifetime to master.


Non-issue


I thought you were talking about AWS lol


TIL that `dig` does not have a TLDR page https://github.com/tldr-pages/tldr


Use ChatGPT to understand dig output:

Explain this: dig +human +norecurse @8.8.8.8 google.com

HEADER: opcode: QUERY status: NOERROR id: 11653 flags: qr ra records: QUESTION: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

QUESTION SECTION: google.com. IN A

ANSWER SECTION: google.com. 21 IN A 172.217.4.206

ADDITIONAL SECTION: EDNS: version: 0, flags:; udp: 512

ChatGPT: This is a result from the "dig" command, a DNS (Domain Name System) tool used to query DNS servers and retrieve information about domain names. Let's break down the output:

dig: This is the command being executed.

+human: This flag is not a standard dig option. It is likely an extension that is meant to provide a more human-readable output, making it easier for people to understand the results.

+norecurse: This flag tells the DNS server (in this case, the server at IP address 8.8.8.8, which is a public Google DNS server) not to perform recursive lookups. Recursive lookups involve the DNS server querying other DNS servers if it doesn't have the requested information in its cache. By using +norecurse, the server will only provide information if it has it directly available.

@8.8.8.8: This indicates the DNS server being queried, which is the Google Public DNS server located at IP address 8.8.8.8.

google.com: This is the domain name being queried.

Now, let's look at the different sections of the response:

HEADER: This section provides general information about the response.

- opcode: QUERY: Indicates that the query type is a standard query.

- status: NOERROR: Indicates that the query was successful, and there were no errors.

- id: 11653: This is a unique identifier for the query, used to match queries with their respective responses.

- flags: qr ra: The flags indicate that this is a response (qr) and that recursion is available and allowed (ra).

QUESTION SECTION: This section shows the domain name and the type of record being queried.

- google.com. IN A: The domain name "google.com" is being queried for its IPv4 address (A record).

ANSWER SECTION: This section provides the answer to the query.

- google.com. 21 IN A 172.217.4.206: This line shows the answer: the IPv4 address of "google.com" is 172.217.4.206. The "21" is the time to live (TTL) value in seconds, which indicates how long the information can be cached by other DNS servers or clients before it should be refreshed.

ADDITIONAL SECTION: This section provides additional information related to the query.

- EDNS: version: 0, flags:; udp: 512: This line indicates the use of EDNS (Extension Mechanisms for DNS), a set of extensions to the DNS protocol. The version is 0, the flags are empty, and "udp: 512" is the maximum payload size that can be used for UDP communication.

In summary, the query asked for the IPv4 address (A record) of "google.com" from the Google Public DNS server at IP address 8.8.8.8. The response confirms that the IPv4 address is 172.217.4.206, and it is valid for 21 seconds.



