I liked the ANY type as a convenience and debugging tool, but I guess the disadvantages far outweigh that perceived convenience. And now that I read about the actual semantics of ANY with respect to caches, turns out I was using it wrong anyway!
I implemented `minimal-any` in BIND to reduce problems caused by large responses in DDoS attacks. It helps in situations when RRL is not quite enough.
It doesn't always work out, but this sounds like a good and reasonable outcome this time. (Though it's their blog, so of course it sounds that way!)
Every intuition I have about caching would lead me to imagine an alternative implementation of DNS where zones are documents, and intermediate resolvers retrieve and cache entire zone documents from their canonical hosts, in order to respond to queries for individual records from stub resolvers.
Is there some big use-case for the DNS records within a zone to have different TTLs? Would it really be so inefficient if the zone as a whole just had one TTL, which was set as the minimum of the TTLs required by any of its records?
Does DNS not have spatial locality to its access patterns? I.e., if an intermediate resolver retrieves the A records for a domain, does that not imply that it'll soon also want the MX records for that domain, and the TXT SPF records, and, essentially, all the records with some probability? Is there no justification for pre-fetching these?
And, if both of those statements were to be true—that DNS could get by just fine with one TTL per zone, and that DNS queries do have a spatially local access pattern—then why would you want to design DNS in a way where intermediate resolvers retrieve individual records, rather than entire zone documents? Would a modern ground-up DNS architecture do things this way?
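To make that concrete, here's a toy sketch of the whole-zone model I'm imagining (Python; `ZoneDocumentCache` and `fetch_zone` are names I made up, and real DNS resolvers do not work this way):

```python
import time

class ZoneDocumentCache:
    """Hypothetical resolver cache that fetches and caches entire zones,
    with one zone-wide TTL equal to the minimum TTL of any record."""

    def __init__(self, fetch_zone):
        self.fetch_zone = fetch_zone  # zone name -> list of (name, rtype, ttl, rdata)
        self.cache = {}               # zone name -> (expiry, records)

    def query(self, zone, name, rtype):
        now = time.monotonic()
        entry = self.cache.get(zone)
        if entry is None or entry[0] <= now:
            # One fetch answers every later record-level query for this zone.
            records = self.fetch_zone(zone)
            zone_ttl = min(ttl for _, _, ttl, _ in records)
            entry = (now + zone_ttl, records)
            self.cache[zone] = entry
        return [r for r in entry[1] if r[0] == name and r[1] == rtype]
```

Under this model, a second query for a different record type in the same zone is a pure cache hit, which is exactly the pre-fetching behavior question 2 asks about.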
(I have the suspicion that the answer lies in security-through-obscurity, viz. the reason AXFRs are prohibited. So, as an addendum: I ask the same questions as above, but this time presuming your DNS daemon has Row-Based Access Control (RBAC), such that an intermediate resolver retrieving a zone document would only see the records its upstream resolver thinks that particular client should be allowed to see (e.g. enterprise intranet clients get everything, while public-Internet clients get a trimmed view). This implies secure DNS, but just take that as a given.)
In preparation for this deployment, I might lower the TTL on service.example.com considerably. This way I can roll out my change quickly and observe the effect across all clients, and I can also roll it back quickly. Once the deployment is done, and I don't expect to make further changes to service.example.com, then I can raise the TTL again.
More generally, you might want the ability to propagate changes to the definition of some names much faster than others, and record-level TTLs let you do this. I might be planning to change service.example.com shortly while not planning to change www.example.com any time soon. The downside of low TTLs is a higher volume of queries and thus higher costs, with some implications for availability.
Record TTLs are something that I think about, and consider changing, whenever I'm about to make a significant change to a name.
(Yes, if TTLs only existed for zones, then you could still break out service.example.com into its own zone and define the TTL there. But if you frequently want different TTLs for different records, having to do this would be an inconvenience with no benefit: service.example.com would be part of its parent zone initially, then you'd have to separate it out into its own zone just for the sake of a different TTL, and then merge it back in later. Under this use-case, zone-only TTLs just add complexity.)
Probably not. If I want to visit your website, I might just need A and AAAA. That doesn't mean I will soon want to send you email (MX) or look at your Google site-verification token (TXT).
> Is there some big use-case for the DNS records within a zone to have different TTLs? Would it really be so inefficient if the zone as a whole just had one TTL, which was set as the minimum of the TTLs required by any of its records?
There definitely is. I tend to set low TTLs for A and AAAA records because in practice they do change very often; depending on your setup, they may change with every deployment. On the other hand, MX records are long-lived, and I set them to many days. This has important security implications: if, say, Gmail has cached my MX records for many days, it is very difficult for a DNS-takeover attack (or any other kind of DNS poisoning) to maliciously redirect my mail.
See RFC 5321
In HTTP, TTLs are per-resource.
Now imagine that www.example.com has A LOT of visitors, while myhome.example.com is used only by OP and his family. With a single zone-wide TTL, millions of clients would each need to make up to 96 times more DNS queries (one day divided by 15 minutes; 24 hours seems to be the actual maximum time that, for example, Windows caches positive DNS replies), because the too-low TTL they hit when visiting www.example.com exists only because a handful of people need a low TTL for myhome.example.com.
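The arithmetic behind that 96x figure, for anyone who wants to check it:

```python
# Upper bound on queries per day from a client that honors the TTL and
# re-resolves as soon as its cached answer expires.
DAY = 24 * 60 * 60

def max_queries_per_day(ttl_seconds):
    return DAY // ttl_seconds

print(max_queries_per_day(15 * 60))  # 15-minute TTL -> 96 queries/day
print(max_queries_per_day(DAY))      # 1-day TTL     -> 1 query/day
```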
Per-record TTL is great and I see no reason to do it the way you are suggesting instead.
Er, why? I was proposing precisely the opposite of that: that if you made myhome.example.com a separate zone from its parent, it'd have a separate TTL.
Honestly, there's no reason (other than the awful UX of current DNS servers and registrars) not to make any and all subdomains that have reason to change independently into independent zones that are simply "managed under" their parent zone.
In software, you decouple components that have different rates of change (e.g. policy and implementation layers), so that teams corresponding to those rates of change can deal with the components using engineering strategies suited to their lifecycles. (E.g. a policy DSL can be quickly modified and pushed by the ops team; while modifying the implementation requires code review and a successful CI build.) Because this is pretty much an unalloyed good, we do it more than we need: we modularize everything, even things with the same needs, because "keep everything modular" is a lot easier of a principle to follow, and it allows us to group things into different change-flow-rate layers after the fact.
So why, oh why, do we not obey this principle with our DNS subdomains? The same team who owns the "microsoft.com" zone is responsible for any changes required to the "technet.microsoft.com" zone, and the "zune.microsoft.com" zone? How crazy is that, as the best-supported, idiomatic-UX default to have in DNS systems?
In practice, creating a zone per record would be messy and unnecessary if you didn't need to delegate control, but you could do it if you wanted to.
This could all be handled internally by the registrar, such that you just see a tree of managed zones, with the ability to create new directories (zones) in the tree with a single click. That's just a different UX for a capability that already exists. If you can create a CNAME record with one click, why shouldn't you be able to create a new child zone with an apex A record with one click too?
The reason I was saying that you shouldn’t be able to delegate such child zones—not all child zones, just the ones created through this one-click process—is precisely that being able to do so would make the process of creating them in the first place not one-click, because then they’d have to have SOA records that are independently managed (and so, probably, created with a form you fill out on zone creation, like regular zones), rather than being automatic derivatives of their parent zone’s SOA information. Sure, you could have a control-panel option to turn one of these automatically-managed child zones into a fully-separate top-level zone with the ability to edit its SOA information, delegate it, etc.
But that’s just a two-click process to get what we already have, whereas what I care about is the one-click process that gets you something we don’t have: child zones that are required to be bound to the same authority, and therefore are managed with the simplicity of team-based ACLs [like e.g. the AWS resources in a project] rather than with separate top-level accounts in different registrars. It’d be a one-click operation for the domain administrator of “example.com” to break off “foo.example.com”, and then one more click (of an ACL assignment drop-down) for the domain admin to assign the foo team the use of the child zone “foo.example.com” as a namespace to use however they please (including creating further child zones off of.) But all of that would live under what would currently be considered one account with a DNS registrar or DNS management service.
IMHO, this paradigm would be “the obvious thing” for services like AWS Route53 to offer—it fits in much better with the rest of the “enterprise-wide policy-controlled access on pooled resources” philosophy that these IaaS clouds have, than the current approach (every domain and all its sub domains being managed as single resources with single IAM owners) does.
Sorry, I misunderstood your comment then.
A more imaginative alternative DNS would make these first-class entities in their own rights.
iptables -t raw -I PREROUTING -i eth0 -p udp -m udp --dport 53 -m string --hex-string "|0000ff0001|" --algo bm --from 40 --to 65535 -j DROP
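For anyone wondering why that hex string matches ANY queries: in DNS wire format, the question section ends with the 0x00 byte that terminates QNAME, followed by QTYPE (0x00ff for ANY, i.e. 255) and QCLASS (0x0001 for IN). A quick sketch that builds a query and checks for the pattern (stdlib only; the query ID and flags here are arbitrary):

```python
import struct

def build_query(name, qtype, qclass=1):
    # DNS header: id, flags (RD set), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

pkt = build_query("example.com", 255)  # 255 == ANY
# The tail of the question is exactly the bytes the iptables rule drops.
assert b"\x00\x00\xff\x00\x01" in pkt
```

An A query (QTYPE 1) ends in 00 0001 0001 instead, so it passes the filter untouched.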
Thank you to the folks at CloudFlare for submitting this.
Say we have A, CNAME and MX type records in our system. If we see ANY, let's convert it up-front into three queries: A, CNAME and MX. Combine the results and that's it.
However, also, pre-RFC8482 that's not what ANY is defined to mean. A non-authoritative caching resolver is permitted to respond with only the records it has cached if something is in cache. So you're already getting a random subset of records. So the behavior is unhelpful for clients and also unhelpful for authoritative servers, and not in a way where the unhelpfulness trades off for helpfulness elsewhere.
And to begin with, applications should never be interested in all records pertaining to a domain; they want specific records. If I'm sending mail, I want an MX record for a domain. I don't want the SPF record unless I'm receiving mail that implicates that domain. If I'm making an HTTP connection, I don't want either of those; I want an IP address. So no semantics of ANY is easily justifiable.
On the other hand, we can't claim that this semantics is hard to implement.
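A sketch of that "convert ANY up front" behavior, to show how little is involved (hypothetical resolver internals; `lookup` is an assumed per-type resolution function, not a real API):

```python
# Fan an ANY query out to a fixed list of known types and merge the answers.
# This is one possible semantics discussed in the thread, not what any
# standard mandates.
KNOWN_TYPES = ("A", "AAAA", "CNAME", "MX", "TXT")

def resolve_any(name, lookup):
    """lookup(name, rtype) -> list of records (assumed interface)."""
    answers = []
    for rtype in KNOWN_TYPES:
        answers.extend(lookup(name, rtype))
    return answers
```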
Should it forward the "ANY" query to authoritative?
Should it respond with any record that is already in cache?
Should it do some mixture of the above behaviors?
Should it cache the result of "ANY" query and re-use the data for other queries?
How is any of that "simple"? How do you transition the existing infrastructure to that behavior? Why?
Then we don't see it for a while until the type list is refreshed; that's the up-to-dateness case that is sacrificed.
> How do you transition the existing infrastructure to that behavior? Why?
If it's an alternative to ANY being rendered inoperative (and someone else is already taking responsibility for that extreme measure), I might have a lot of leeway that adds up to "try it and whatever the hell happens, happens".
More correctly you should probably forward the ANY as ANY, so you have to do it slightly later than "up front". And then cache the individual answer records as if they came from individual queries.
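That caching rule might look something like this (a sketch; the record tuples and cache shape are made up for illustration):

```python
import time

def cache_any_answers(cache, answers, now=None):
    """After forwarding ANY upstream, file each answer record under its own
    (name, type) key with its own expiry, exactly as if it had arrived in
    response to a single-type query."""
    now = time.monotonic() if now is None else now
    for name, rtype, ttl, rdata in answers:
        cache.setdefault((name, rtype), []).append((now + ttl, rdata))
```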
I haven't fully thought this through, though, because: a) As you can see, it's not necessarily "simple"; b) it's not impossible to suss out something that makes sense, but the article at least claims that the resolver side is undefined so far, and several implementations do several different things by now. Giving up on ANY is probably for the best, at this point.
Qmail, unlike other software, was sane enough to understand that the lack of an MX record in an ANY response didn't mean the record didn't exist; it meant it wasn't in the cache, so qmail would retry with a plain MX query.
Our initial proposal (to answer ANY with REFUSED) indeed had a chance of breaking some older qmail installations. This is why we engaged in a longer process and found an acceptable solution: HINFO. HINFO is both backwards-compatible (qmail will work fine) and solves our problems with ANY. Win-win.
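For the curious: the RFC 8482 answer is a single synthesized HINFO record, with the CPU field set to the string "RFC8482" and an empty OS field. HINFO RDATA is just two length-prefixed character-strings, so the whole payload is tiny:

```python
def hinfo_rdata(cpu="RFC8482", os=""):
    # Each <character-string> is one length byte followed by that many bytes.
    return bytes([len(cpu)]) + cpu.encode() + bytes([len(os)]) + os.encode()

assert hinfo_rdata() == b"\x07RFC8482\x00"  # 9 bytes of RDATA total
```

Nine bytes of RDATA versus a potentially huge pile of records is the whole point.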
> Once upon a time, back in 1996, there was a really unfortunate bug in the most popular DNS server software (BIND 4.9.3): it did not respond correctly to "CNAME" requests (that is to say, requests for any CNAME data about a particular domain name). This is critical information that an email server needs to know to do its job. Thankfully, there was a way to work around the problem: "ANY" requests. These requests ask the DNS server, essentially, for any and ALL information it has about the domain name in question, including CNAME information.
> These ANY queries have two big problems:
> As you might imagine, for big domains with lots of mirrors (e.g. gmail.com), that's a lot of information, and so the response can be quite big. Big responses pose two problems: first, it's a waste of bandwidth, and second, it can expose a bug in qmail's handling of large DNS responses (see the next patch).
> ANY queries are often not cached by relaying DNS proxies (for whatever reason), and so ANY queries cause more traffic EVEN behind a caching DNS proxy server.
Cloudflare once again pushes for broad changes to the way the net operates that benefit themselves and other centralized corporate players, without any benefit to individuals who actually use the net, in the sense of using the internet and not just a web browser.
The kind of exploration and direct learning that was possible when I was a kid growing up 90s/00s is slowly being phased out as the money seeps in.
Cloudflare's arguments are all based around ANY making it slightly harder for them to make a profit. The small chance of DNS amplification attacks is what Cloudflare thinks about because it's their business. But there's no reason to believe this is more important than individual users wishing to see what servers are behind a domain.
Combined with GDPR killing off whois the internet is a much more boring, less transparent place.
(B) You are free to run ANY as you like on your domain. For domains you own, this RFC doesn't change anything.
(C) Would you advocate for responding to AXFR / zone transfers? There generally is consensus that allowing enumeration is not desired.
Since most servers are web servers these days, you can get pretty close to this goal from certificate transparency logs.
This was the proposed solution. The problem is that in the case of an attack against a valid authoritative service launched via open resolvers, the open resolvers would just download gigabits of ANY traffic over TCP. Read about this here:
None of your business, really.
> Combined with GDPR killing off whois the internet is a much more boring, less transparent place.
Thank the lord for that; there's literally no benefit from it. With WHOIS, evil still did evil and normal users got their privacy violated; without WHOIS, evil can still do evil and normal users' privacy is protected.
In essence, all this change does is remove the fiction of ANY; with or without RFC8482, ANY wasn't reliable enough for real usage.
(I agree with other issues pointed out by the article, and there are other reasons why, as a RR type, I would still axe ANY. But the functionality of being able to query all RRs on a server is often useful for debugging, though I think there are other practical ways to work around that. (Issue a query for many common RR types.))
Or is it "any way" imaginable?