That's great, but a nasty surprise we found when using it on servers is that it's rate-limited at 10 req/s, which is easy to hit even on a development server.
Compare that to Google's 8.8.8.8 and 8.8.4.4, which have a much higher 1500 QPS limit [aednichols][0].
I find it weird, given how much Linux tooling they have for encrypted DNS.
I wonder how many of the 1.3T daily requests are made by misconfigured cloud servers with no local DNS cache, hitting popular endpoints like <service>.<region>.amazonaws.com or fcm.googleapis.com over and over again.
Consumer devices generally don't have this problem because 1) they respect the TTL, and 2) so do most ISP resolvers.
Ubuntu ships systemd-resolved by default these days, which is bare-bones compared to dnsmasq but still helps reduce unnecessary DNS traffic -- not to mention precious round-trip time. You might still hit the rate limit if your app connects to more than a dozen different FQDNs within a second of starting up, but that's not a very common situation.
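As a rough illustration of that startup burst (the hostnames below are made-up placeholders, and without a local cache each lookup becomes its own upstream query):

```python
import socket
import time

# Hypothetical FQDNs an app might resolve on startup; with no local cache,
# each one turns into an upstream query from the same IP within the same second.
startup_hosts = [
    "api.example.com", "cdn.example.com", "auth.example.com",
    "metrics.example.com", "logs.example.com", "queue.example.com",
    "db1.example.com", "db2.example.com", "search.example.com",
    "mail.example.com", "status.example.com", "billing.example.com",
]

start = time.perf_counter()
for host in startup_hosts:
    try:
        socket.getaddrinfo(host, 443)  # triggers a DNS lookup via the stub resolver
    except socket.gaierror:
        pass  # the placeholder names won't resolve; the queries still count
elapsed = time.perf_counter() - start
print(f"{len(startup_hosts)} lookups in {elapsed:.2f}s "
      f"(~{len(startup_hosts) / max(elapsed, 1e-9):.0f} queries/s)")
```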
I have a bunch of IoT stuff that hits my Pi-hole with the same DNS query a lot, every second in some cases. There are definitely plenty of badly designed consumer devices out there doing horrible things when phoning home.
I wonder if that's why I lost DNS yesterday? I had 1.1.1.1 set up as my home network upstream DNS. Lots of caching, but maybe I still hit the limit? Went back to 8.8.8.8 and 8.8.4.4 and all is good again.
On some servers, I can use a local caching resolver. However, on others, I'm forced to configure the caching resolver to forward queries to an upstream public DNS server like 1.1.1.1 (via DNS-over-TLS) because of network issues. These can arise from DNS (or perhaps UDP?) rate limiting by the server hosting provider itself, at the network level.
In another instance, I noticed that the hosting provider outright drops all fragmented IP packets. I'm concerned this could lead to DNS query failures, especially if the authoritative DNS server is not available over TCP. When contacted, they told me they do this for security reasons and would not be able to disable this filter for me.
Like the OP, on these systems I often encounter the 1.1.1.1 rate limit. This is particularly the case since I have DNSSEC verification enabled, so I'm considering switching over to 8.8.8.8.
What do you mean by "fix the problem on their end"? As I explained, I am not able to fix the problem on my end, if that's what you mean. In fact, this is the exact reason why public DNS servers are so useful: they are much more reliable than the available alternatives, in many cases.
Sure, if everything was perfect and all ISP/hosting provider DNS servers were modern and well-configured and had great availability and didn't censor or hijack results, then they wouldn't be necessary. But alas, unfortunately we have to live in the real world.
I am not against rate limits, because as you said, sending a billion requests per second (due to misconfiguration, or a bug, or for malicious reasons) is not reasonable. But 10 requests/second is way, way too strict, especially if you use DNSSEC and/or have multiple client machines sharing the same IP address without a common DNS caching resolver in-between (which may not be possible to have for several reasons).
> which may not be possible to have for several reasons
Try me. What's the reason? Money can't be one of them if you expect someone else to scale up and handle your load for free as that would be very rude of you.
Sure, here are three off the top of my head that have affected me personally (not to mention the million other possible different scenarios that might exist):
1. In one case, I have multiple machines on the same network, but none of them is turned on 24/7 (except for the router), so none of them can act as a common caching resolver. The router is a proprietary UniFi gateway device, which cannot be configured to use DNS-over-TLS, neither on the local side nor when forwarding to the public DNS server.
2. The other common case for me are my mobile devices (laptops and smartphones). I simply cannot configure them to use a common local caching DNS server because there isn't one, as these devices frequently connect over multiple networks (such as 5G networks, hotel Wi-Fi, etc) and you cannot just use the network-provided DNS server without being vulnerable to man-in-the-middle attacks, censored and hijacked results, etc.
3. Another issue is public Wi-Fi networks. In large hotels, stadiums, etc., there might be hundreds or even thousands of devices behind the same public IP address(es), and as a user yourself, you have no control over them or the network.
And obviously, you cannot just use their provided DNS server on these Wi-Fi networks, for multiple reasons: 1) why would you trust the DNS server of a random Wi-Fi network in the first place?, 2) even if you trusted the provider, it is not possible to authenticate these networks, so you could be talking to a man-in-the-middle attacker without realizing it, and 3) even if there isn't a man-in-the-middle attacker and the provider is trustworthy, these DNS servers are often extremely poor: they censor and hijack results, pollute the DNS cache, don't support DNSSEC queries, etc.
And I'm not even mentioning the privacy issues.
To be clear, I always use local caching resolvers. But on these systems, I am forced to configure them to forward queries directly to public DNS servers, which means that they don't have a common caching resolver except for the public DNS server itself, hence the rate limit issue when they are all behind the same IP address.
I don't know, the easiest solution would be "run your own DNS server that you point the weird servers at". A handful of euros per month for a VPS will get you more DNS queries than you'll ever need. No need to even hit Cloudflare when you can turn on the recursive setting on your own server; this protects you against the viable but unlikely scenario that Cloudflare messes with your DNS/gets its DNS cache poisoned/goes down.
Nothing wrong with using a public DNS resolver for your own devices, but I think using public services like these for benchmarking/load testing is abusing the generosity of the people running these servers.
It's not a money issue, as I have multiple servers already which could easily provide this service, and it's even less of a configuration issue, as I am perfectly capable of configuring this if I wanted to.
But these servers are in different countries, so they would be much slower than Cloudflare DNS due to network latency.
And even if I had a server in the same country for my client devices, I frequently travel to other countries anyway.
I don't think I am abusing these servers, as I only use them normally as a user, it's not like I am scraping the Internet or anything like that. Unlike the OP, I am not doing any benchmarking or load testing or anything similar.
I can't imagine what use case requires dozens of devices, hooked up to potentially untrusted random public wifi networks, making many DNS requests all at the same instant. For your very odd use case I'd suggest hosting your own recursive resolver someplace like AWS and pointing to that, but I suspect if you checked your priors you probably can change something about the first 3 conditions.
I just told you three reasons why this would happen, and I'm sure there are many more.
It's obvious that these machines can be making DNS queries simultaneously, why wouldn't they? They are completely independent of each other.
And don't forget that the number of DNS queries is also magnified when the machines are configured to do DNSSEC verification.
> For your very odd use case I'd suggest hosting your own recursive resolver someplace like AWS and pointing to that, but I suspect if you checked your priors you probably can change something about the first 3 conditions.
First of all, it's not an odd use case. I think everyone has a laptop and a smartphone, and if they are not following my security practices it's either out of ignorance or out of complacency / not caring, it's not because they are doing what they should be doing. And I'm not even talking about DNSSEC here, just the basic "don't trust random Wi-Fi networks / DNS servers".
And second, why can't you imagine a hotel Wi-Fi network with hundreds or thousands of devices, all of them hooked to the same public Wi-Fi network (the hotel-provided one), configured to use a public DNS server and making requests at the same instant?
What is so hard to imagine about that?
Hell, some public Wi-Fi networks don't even have a local DNS server, they directly send the public Google DNS server IP addresses to all clients as part of their DHCP replies!
Because if the Google DNS servers were configured with a low rate limit, the DNS requests of the other clients of the hotel would cause my DNS requests to fail, as they would all appear to be coming from the same IP address from Google's perspective.
Fortunately, Google has a high DNS rate limit so this is not usually a problem.
But the whole point of this thread is that Cloudflare DNS limits requests to only 10 queries per second, which is way too low for this scenario.
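To put rough numbers on it (the device count and per-device query rate below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: devices sharing one public IP on a guest Wi-Fi network.
devices = 500                    # assumed size of a mid-sized hotel's guest network
queries_per_device_per_min = 2   # assumed very light per-device DNS activity

aggregate_qps = devices * queries_per_device_per_min / 60
rate_limit_qps = 10              # the limit discussed in this thread

print(f"aggregate: ~{aggregate_qps:.0f} queries/s vs a limit of {rate_limit_qps} queries/s")
# aggregate: ~17 queries/s vs a limit of 10 queries/s
```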
> I expect you to run a dns cache/resolver at that scale and not freeload. You have plenty of customers/employees and must be making enough money.
Yeah, me too. But as a customer, I also expect DNS servers not to censor results, not to hijack results, to properly resolve DNSSEC queries, to be available over DNS-over-TLS, to resolve queries reliably and to not be down frequently.
Unfortunately these expectations are almost always broken and as a user, there's nothing I can do to change that, except to complain to the provider (which almost always does nothing useful, especially when I'm traveling) or to just use Cloudflare DNS or Google DNS myself.
I feel like I'm missing something. Aren't these requests for your servers? Why are they getting censored and hijacked? Why not use HSTS, or if that's impossible, sign the response with a public/private key pair (or encrypt requests with the same) on the device, if you are in such hostile territory?
> I feel like I'm missing something. Aren't these requests for your servers? Why are they getting censored and hijacked?
I didn't mention I was running any servers in this conversation. Perhaps you are confusing me with the great-great...-grandparent?
The scenarios I mentioned are all from the perspective of a user.
They are getting censored because of laws in the countries I frequently visit and they are hijacked for multiple reasons (Wi-Fi portals and NXDOMAIN hijacking, mostly).
Apart from that, I also do have servers (in different countries), but that's beside the point. Note that even these individual personal servers (a couple of which are forced to use a public DNS service due to network issues) hit the Cloudflare DNS rate limit during normal operation.
As I mentioned in another thread, I don't do any benchmarking / load testing, web scraping nor anything of the sort, this is just normal operation for servers that are idle most of the time.
It's especially noticeable for reverse-IP queries, for some reason (perhaps because these requests are bursty, therefore not cached, and cause several other queries to be performed? I'm not sure). As I mentioned before, even though I use caching resolvers, I have all of my machines configured to do DNSSEC verification, which contributes to the problem.
> hit the Cloudflare DNS rate limit during normal operation.
What I find implausible is that happening to a residential user, or even an apartment building full of them, frequently enough to matter, unless some bit of software somewhere is doing something profoundly silly.
I believe essentially all mainstream DNS lookup functions retry multiple times with exponential backoff, so it's not just ten in a second, it's ten in a second over a series of sliding windows that evaporate when they are satisfied for a time. That's a lot of requests.
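To illustrate (a toy sketch; the retry count and delays are made up, not the defaults of any particular stub resolver):

```python
import time

def lookup_with_backoff(query_fn, attempts=4, base_delay=0.1):
    """Retry a lookup with exponential backoff, counting every attempt.

    Each failed or rate-limited attempt is itself another query, which is
    why a per-second cap really applies across a series of sliding windows
    until the client gives up or succeeds.
    """
    sent = 0
    for i in range(attempts):
        sent += 1
        if query_fn():
            return sent
        time.sleep(base_delay * (2 ** i))  # e.g. 0.1s, 0.2s, 0.4s, ...
    return sent

# Simulate a resolver that keeps refusing (e.g. SERVFAIL while rate-limited):
always_fails = lambda: False
print("queries sent for one name:", lookup_with_backoff(always_fails))
# queries sent for one name: 4 -- multiply by however many names are being resolved
```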
> I believe essentially all mainstream DNS lookup functions retry multiple times with exponential backoff, so it's not just ten in a second, it's ten in a second over a series of sliding windows that evaporate when they are satisfied for a time. That's a lot of requests.
I only hit this issue when the machine has been idle for quite some time (hours, perhaps). Which indicates that part of the problem is that previous cached answers have expired.
When I hit this issue, I am pretty sure that I am doing on the order of ~10 top-level DNS queries, for about ~4 different domains, a few of them being reverse-IP queries. It's possible that these requests are being amplified due to DNSSEC, so that might be part of the reason.
It's also possible that my caching resolver, when answering a query for e.g. WWW.EXAMPLE.COM, is also doing queries for EXAMPLE.COM and .COM (and their DNSSEC keys), possibly even others, I don't know. I'm not exactly a DNS expert, unfortunately... All I know is that a dnsviz.net visualization seems to indicate that many queries are usually necessary to properly authenticate a domain that uses DNSSEC.
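As a rough illustration of how those queries add up, here's a count for one DNSSEC-signed name (a sketch assuming the third-party dnspython library; the zone list is just the chain for this particular name):

```python
import dns.resolver  # third-party: dnspython >= 2.0

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]

queries = [("www.example.com.", "A")]       # the answer itself
for zone in ("example.com.", "com.", "."):
    queries.append((zone, "DNSKEY"))        # keys for each zone in the chain
    if zone != ".":
        queries.append((zone, "DS"))        # delegation signer held by the parent

sent = 0
for name, rdtype in queries:
    try:
        resolver.resolve(name, rdtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass                                # still a query as far as a rate limit cares
    sent += 1
print(f"~{sent} upstream queries to validate one name")
```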
Perhaps another issue is that, since the queries are simultaneous, the retries for these queries are getting synchronized and therefore also happen at the same time?
I can tell you that often some of the DNS requests time out after exactly 5 seconds, even though I have `options timeout:10 edns0` in my `/etc/resolv.conf` (which is being ignored due to a bug in bind's `host` command). Although I'm pretty sure the real problem is that I was getting a SERVFAIL response from Cloudflare DNS. If I used Google DNS instead of Cloudflare DNS, the problem didn't happen.
I don't know, perhaps you are right and some software is doing something silly.
For a single home 10/sec is probably fine. But remember you are using other people's infrastructure. Be nice. You probably want a local resolver anyway in a case where 10/s is an issue. Even for home use a local caching resolver is not a bad idea. It usually makes day-to-day web surfing feel much snappier. Most home routers take care of it but are rather limited. I usually set up a local one then have my local DHCP just hand out that address so all the clients get the right thing instead of going to some external source. That way I can have a list of resolvers and one IP to hand out to my client computers.
Many home routers default to handing out the router's own address in the DNS field of their DHCP replies.
> For a single home 10/sec is probably fine. But remember you are using other people's infrastructure. Be nice. You probably want a local resolver anyway in a case where 10/s is an issue.
In one case, I am already using a local caching resolver, which is configured to forward to Cloudflare DNS (as I have no better alternative, except Google DNS perhaps).
It's a single machine which has a public IPv4 address, not used by any other machine.
If the machine has been idle for some hours, it hits the Cloudflare DNS rate limit just by doing 5 or 6 normal queries, plus about 3 reverse IP address queries at the same time (I'd have to check for the exact numbers).
It is configured to do DNSSEC verification, which I think contributes to the issue. Perhaps the fact that these requests are bursty, therefore probably not cached, is also part of the issue.
> Even for home use a local caching resolver is not a bad idea.
All of my machines have local caching resolvers, but this does not solve the issue for many different scenarios, especially those where there are multiple machines behind the same public IP address.
> I usually set up a local one then have my local DHCP just hand out that address so all the clients get the right thing instead of going to some external source.
Sure, but as I mentioned in another thread, it's not always possible to set up a local resolver on a network, especially when you are just a client of a network that is not yours (e.g. public/hotel Wi-Fi) and whose DNS server is unreliable or untrustworthy for many different reasons.
It's a good thing that it fails in development and not just in prod. The rate-limiting itself isn't the best scenario, but it is to be expected with a free product.
This is great from a legal standpoint for net neutrality.
ISPs (and their FCC champions) like to claim they are not telecom services that merely funnel data back and forth, which would make them Title II common carriers subject to strong FCC oversight.
Instead, they say they are information services that process and change data, which would make them Title I services over which the FCC has little authority.
One way they do this is by saying they have DNS, that translates domains into IP addresses.
Of course that's incidental, but the more people actually use alternative DNS, the clearer it is that DNS services aren't core to their offering, and the more evidence there is to show a court that ISPs are Title II services.
So if you want to support the upcoming return of federal net neutrality protections and FCC oversight of ISPs, one thing you can do is choose DNS NOT provided by your ISP.
It's like you're utterly unaware of the massive corruption.
US telecom companies have been getting a lot of US taxpayer money for (literally?) nothing in return, which was of course an absolute accident by whoever set this whole deal up. Absolutely an accident by whoever made this happen.
I adore that you believe that arguments matter at all.
If you want to involve another 3rd-party in your data collections, also choose DNS NOT provided by your ISP.
My ISP has an interest in keeping me online and naturally needs to provide its own DNS servers that work well enough not to be questioned. A third-party DNS provider's interest is in people using it because...?
The average consumer clearly understands that US ISPs are email, DNS, and security(?) providers who also provide free internet access with speeds dependent upon how much customers pay.
Deep technical adjudication is exactly what judges in higher courts are equipped to do. No matter the domain, they are presented with tortuous technical language and jargon about a specialized topic (the statute), a collection of nuanced facts, and two or more carefully organized logical arguments, prepared by the parties' capable lawyers, about why those facts activate or don't activate this or that clause of the statute.
Probably a lot? Judges are perfectly capable of learning technical topics and are pretty smart in general. The judge who presided over Google vs Oracle taught himself to program. Some may be old, but don't forget that the people who invented the internet are in their 70s or older right now.
That's a good question. Tech issues are tough, which is why you see lots of friend of the court briefs to explain it.
For FCC matters around net neutrality, there's generally two courts: the DC Circuit Court of Appeals and then the Supreme Court.
And, yes, they deal with it and have clerks to help.
For instance, in the biggest decision about net neutrality, the Brand X case, where the FCC was allowed to determine that cable companies were Title I information services, DNS is mentioned 13 times in the majority decision.
And if you find that interesting, I recommend reading the Scalia dissent, where he said that it's totally clear that broadband is a Title II service, and letting the FCC say otherwise was wickedly wrong.
It's impressive and good to have options for fast, free and frequently updated DNS servers, but…
DNS is probably the most successful decentralized protocol we have, one that I use as a mental model when I daydream about solving identity problems. I hope we don't end up with 2 or 3 DNS servers eventually.
How many of those are due to companies trying to do weird things with DNS, though?
DNS itself is quite well-designed for its age. It is decentralized, so there is no single point of failure. A request can fit in a single UDP packet, so load balancing is trivial. A response can contain multiple records, making it possible to use it for load balancing or failover. The format is designed to be extensible, so it can be used for more than just the original few applications. It is definitely showing its age when it comes to privacy and security, though.
Can a failure of an authoritative DNS server bring down a website? Absolutely! Set a short TTL and bring down the servers due to fatfingering a config, and your website will go offline. But you can hardly blame DNS itself for that.
What do you mean it's decentralised? It's a central service you have to query for name->IP lookup, with authoritative name servers for every domain. It's the very definition of centralised. Just because it's hierarchical doesn't mean it's not centralised.
A decentralised DNS system would be more like a p2p network where you can perform name lookups from any participant in the system.
Companies aren't trying to do anything weird with DNS. It's just how most things work in service architectures. Every network query will start with a DNS query. You might depend on 50 services, with circuit breakers and all kinds of great logic so a single service won't bring you down - but there is that DNS lookup which will break everything if it fails.
DNS load balancing is problematic and anycast is a much better solution for services that need that level of traffic management, but that's not DNS' fault. DNS is meant to be "computer A asks computer B what the IP address of google.com is", not "five million computers on five continents all need to be grouped into ten different groups based on nearest subnet and directed to a randomly organised list of servers", even if it does technically work.
DNS solves a difficult problem (decentralized identity) in a deceptively simple way. We have the “it was DNS” joke because it’s a hard problem with lots of unforeseeable edge cases, not because DNS is poorly designed.
In what sense is DNS decentralized? Public domain name administration is delegated by a handful of entities, and most high-traffic domains are on a registry operated by Verisign.
BGP is probably a better example of a successful decentralized protocol, and even that is contingent on ICANN and a handful of regional registries handing out IP addresses.
It's a nice accomplishment to build a user-facing feature to that scale, but I'm not all that impressed by the raw number. Let's break it down: 1.3T requests per day is roughly 15M req/sec, and assuming that each of the approximately 300 PoP locations [0] is serving this via anycast, that's 'only' 50k req/sec/PoP, easily doable by a couple, a handful at most, of hosts in each location. Small request and response sizes increase total request rate but keep down overall bit rate, so one can handle a larger number of requests with a small number of hosts. Obviously these numbers will vary with location, higher and lower based on overall PoP utilization, but they're rough enough to plan for. Even with encryption, these are well within the reach of a single-digit number of machines in each location.
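The arithmetic, spelled out (the PoP count and the assumed per-host capacity are the same rough assumptions as above):

```python
# Rough capacity estimate using the assumptions from the comment above.
requests_per_day = 1.3e12
pops = 300                        # approximate number of PoP locations
per_host_capacity_qps = 100_000   # assumed comfortable rate for one DNS host

global_qps = requests_per_day / 86_400
per_pop_qps = global_qps / pops
hosts_per_pop = per_pop_qps / per_host_capacity_qps

print(f"global:  ~{global_qps / 1e6:.0f}M queries/s")
print(f"per PoP: ~{per_pop_qps / 1e3:.0f}k queries/s (if traffic were spread evenly)")
print(f"hosts:   ~{hosts_per_pop:.1f} per PoP at {per_host_capacity_qps:,} queries/s each")
```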
I don't think the raw number is difficult, the difficult problem was building up the supporting infrastructure, both technical and product-wise.
My source for the above? I've built at-scale, geo-distributed internet-facing services such as anycast DNS, video streaming delivery and the like.
If you've built global geo-distributed networks, then you should know that anycast will not distribute traffic evenly across the pops. Most likely the USA pops do 10-100x the traffic of many of the others.
My guess is that some pops are handling 5M+ req/sec. That's a pretty impressive feat of internal load balancing.
You did, but you hand-waved right over it with "rough enough to plan for". Also, I think it's important to call out the high variance specifically for the people who are not network engineers.
I think you're being misleading with your analysis.
DNS requests fit within most standard MTUs (1500 bytes) over UDP. Almost all of it can be warm cached, and the datapoints are highly compacted and can be sorted offline.
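To make the size point concrete (a sketch assuming the third-party dnspython library; plain UDP, no padding):

```python
import dns.message
import dns.query  # third-party: dnspython

query = dns.message.make_query("example.com.", "A")
print("query size:", len(query.to_wire()), "bytes")

# Ask a public resolver and check the response size as well.
response = dns.query.udp(query, "1.1.1.1", timeout=5)
print("response size:", len(response.to_wire()), "bytes")
# Both are typically well under a 1500-byte MTU (and under the classic 512-byte
# UDP DNS limit, unless EDNS/DNSSEC push the response larger).
```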
That's (mostly) great! I personally don't use it, but do appreciate competition in public DNS resolvers and having a high-quality fallback for networks that have a mediocre native resolver.
Maybe this would be a good opportunity to stop truncating EDNS client subnets from requests [1]: It doesn't really help user privacy, but is quite anti-competitive since it can degrade latency needlessly for other (DNS-based) CDNs [2], especially in regions where Cloudflare does not have a POP closer to the user than the other CDN.
Generally just my ISPs resolver, since they can eventually see all of my traffic anyway [1] and I don't see the need to have either Google or Cloudflare added in between.
For browsing though, I frequently use iCloud Private Relay, which uses oblivious DNS [2] for name lookups. Cloudflare powers that much of the time too, but at least Apple anonymizes my side of the connection.
[1] At least until ESNI or one of its successors finally become widely supported – then I'll reconsider.
Generally, no; the ISP's DNS usually gives the best locality of results.
Sometimes it can happen that you get a shit resolution, but it's hard to identify if DNS is the issue in home use since there are so many variables.
However, some ISPs have truly terrible DNS service. For example, at my father's house, the first request to any not-terribly-mainstream website (say, outside the top 1000 most visited or whatever) always fails while the provider goes and looks it up on whatever upstream they use. In this case it was really a matter of switching to a DNS provider that doesn't prune its cache so aggressively or stall while querying upstream.
I've had a very poor experience with Comcast/Xfinity's DNS. Anytime I'm managing a network with Comcast as the internet provider, I change the DNS to Google or Cloudflare. Or with pfSense I just use the root DNS servers.
That depends on your ISP. Mine is notoriously bad at maintaining their DNS servers. In the past you'd stay online if you had defined your own DNS servers (e.g. Cloudflare's or Google's), but if you used the ones provided by the ISP, you weren't going online that weekend.
Personally I use an experimental DoH service that my employer is working on, and CloudFlare.
Install a configuration profile that configures DNS. This is a better way to configure DNS on Apple platforms because it's device-wide, not per-network. These are generally distributed as files named something.mobileconfig. I use one distributed by AdGuard for their public DNS. Just open it on an Apple platform and it will offer to install. It's signed, so the OS knows it's from who it says it's from. I have this profile installed on my iPhone, Mac and Apple TV.
I don't think there's any automatic fallback, but I'm not sure.
It is possible to quickly toggle on-and-off what the configuration profile sets in Settings. I've occasionally had trouble on captive wi-fi portals, like on planes, but I generally remember it's probably the DoH, so I go toggle it off, get connected, and toggle it back on.
You can also have a look at NextDNS [0][1]. I set it up on both my Mac and iOS. NextDNS provides a panel where you can see what got blocked, plus some other analytics. Even though I use Brave on iOS and Arc with uBlock Origin, that still wasn't enough, and NextDNS blocked an additional ~8% of trackers. It's free for the first 300k requests per month.
It (coincidentally or not) limits the performance of competing regional CDNs to that of Cloudflare, since 1.1.1.1 truncates the EDNS client subnet parameter, which means that the only location/region hint a third-party CDN's DNS has to go by is Cloudflare's location, not the user's.
As an example, consider a user in Paris, and assume for the sake of this example that Cloudflare's closest POP is in Berlin. Assume that user visiting a site hosted by a French CDN, with POPs in both Paris and Berlin.
If that user tries to resolve cdn.mysite.fr via 1.1.1.1, they'd get a Berlin-based location, since the French CDN has no way of knowing that the user is actually closer to them, and Paris would have been the better option.
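For anyone who wants to poke at this themselves, here's roughly how the EDNS Client Subnet hint looks on the wire (a sketch assuming the third-party dnspython library; 192.0.2.0/24 is a documentation prefix standing in for "a Paris user", and example.com is just a placeholder domain):

```python
import dns.edns
import dns.message
import dns.query  # third-party: dnspython

# "This query is on behalf of a client somewhere in 192.0.2.0/24."
ecs = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query("example.com.", "A", use_edns=0, options=[ecs])

for server in ("8.8.8.8", "1.1.1.1"):
    response = dns.query.udp(query, server, timeout=5)
    echoed = [opt for opt in response.options if isinstance(opt, dns.edns.ECSOption)]
    print(server, "echoed ECS:", echoed or "none")
# A resolver that honours ECS echoes the option back with a scope prefix; one that
# strips it leaves the authoritative server with no location hint beyond the
# resolver's own address.
```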
Cloudflare doesn’t use GeoDNS for their CDN, it’s all anycast. They have no use for EDNS for their own traffic. Honestly I don’t take any CDN that doesn’t operate on anycast seriously these days, the issues with GeoDNS are well established at this point and if your network operations can’t handle the complexity of an anycast deployment you’re probably dropping the ball on a lot of other fronts.
Have you ever built a CDN or anycast network? I have, many times.
Anycast isn't magic, and it can't be used in all situations. For example Akamai has 16 times as many POPs as Cloudflare does, and many of them reside within smaller ISPs where you might not be able to establish BGP sessions. Long running downloads and websockets are better served over unicast addresses. Not to mention you don't have a lot of knobs to fix things when anycast does go wrong and you have a network in the US sending your traffic to an EU POP because they have cheaper transit in Europe.
More control over ip selection for clients to better direct traffic for their customers is the obvious first order benefit. Passive competitor data collection is useful too.
DNS traffic has good traffic ratios, at least compared to typical http traffic. That's got to help with peering negotiations. Although DNS traffic isn't that much, so it probably doesn't help much.
Same for accepting and dropping inbound volumetric DDoS. Larger ISPs want to see roughly balanced inbound and outbound flows before they'll agree to settlement free peering, and options for attracting inbound flows from residential ISP customers are limited (run a backup service, run a 1:1 multimedia messaging service and/or video chat, DNS helps a bit)
If we think about it a bit harder, we can find a much better answer. Cloudflare's profitable services depend on DNS working properly for both the customer and the customer's user base. DNS servers outside of Cloudflare's control may not be correct, or may not route properly in various edge conditions. Therefore an easy way to solve this is to simply host their own DNS.
This is probably it; the very first thing support can ask is "try it with 1.1.1.1 and see if it works" - if it does, it's a DNS issue.
Also, by running their own DNS they can ensure that some percentage of the Internet does NOT cache results beyond what they want (to some extent) - much of what Cloudflare does is DNS trickery to route to the appropriate CDN.
Latency. Cloudflare is selling speed. When you use their DNS, then lookup of their CDN domains is going to be faster, since you're querying the source directly rather than through intermediate DNS servers.
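A quick way to compare for yourself (a sketch assuming the third-party dnspython library; the domain and resolvers are just examples, and a handful of samples is obviously noisy):

```python
import time
import dns.resolver  # third-party: dnspython >= 2.0

def lookup_ms(server, name):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    start = time.perf_counter()
    resolver.resolve(name, "A")
    return (time.perf_counter() - start) * 1000

name = "www.cloudflare.com"  # a zone served by Cloudflare itself
for server in ("1.1.1.1", "8.8.8.8"):
    samples = [lookup_ms(server, name) for _ in range(5)]
    print(f"{server}: best of {len(samples)} runs = {min(samples):.1f} ms")
```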
They have an accurate internal number of what websites people are going to. They could use this to direct their sales people. They might also use it with their secret score on every user to decide if they need a captcha. They could also just have it in their back pocket as a potential extra revenue stream selling user data.
They have a free anti-malware and anti-porn DNS offering (1.1.1.2 and 1.1.1.3). Right now it's targeted at "families", but at some point they could easily monetize this, especially for corporate networks.
And from a corporate perspective, historically no one ever got fired for using IBM. Who would you want running your Fortune 500 corporate CDN, a company that can handle 1.3T reqs/day or another CDN provider that's one-tenth the price but doesn't have the scale/reach of Cloudflare?
Users need working DNS to use Cloudflare’s paid services. That used to be a business in and of itself, where OpenDNS made money showing people ads. But now the market price has been driven to zero.
Almost like the modern corporate structure is what's really evil. Sixty or seventy years ago, in the golden age of US manufacturing, companies were mostly privately owned. No shareholders to please. That let them make long-range strategic plays, instead of short-term anti-plays just to make the next quarter's number go up.
Considering their core products, DNS might be an early-warning sign of DDoS. A sudden surge in requests for a domain will trip a circuit breaker before the HTTP sessions begin. There's probably a way to tell the difference between a bot attack and a hug-of-death by the pattern of DNS.
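Conceptually the circuit breaker could be as simple as a per-domain sliding-window counter; a toy sketch (the window and threshold are made up, and this says nothing about what Cloudflare actually does):

```python
import time
from collections import defaultdict, deque

class QuerySurgeDetector:
    """Flag a domain whose query rate jumps above a threshold within a window.

    Purely illustrative: a real system would compare against per-domain
    baselines and much richer signals, not one global threshold.
    """

    def __init__(self, window_s=1.0, threshold_qps=50):
        self.window_s = window_s
        self.threshold_qps = threshold_qps
        self.events = defaultdict(deque)  # domain -> timestamps of recent queries

    def record(self, domain, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[domain]
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop entries outside the window
            q.popleft()
        return len(q) / self.window_s > self.threshold_qps  # True => looks like a surge

detector = QuerySurgeDetector()
surge = any(detector.record("victim.example", now=t / 1000) for t in range(200))
print("surge detected:", surge)  # 200 queries in 0.2 simulated seconds trips 50 qps
```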
One of the more evil things they could do is show you ads, for example every 100 requests, you are shown a full page ad for a few seconds, then directs to where you were going. Pretty much like how youtube does it.
I'm not saying they will do this, but they could if they wanted to. I've seen this from DNS providers in other countries.
OpenDNS used to hijack DNS to show ads, which is one of the reasons Google launched the non-hijacking 8.8.8.8 in 2009. OpenDNS quickly followed suit. No one would attempt it today, seeing as it would instantly make you the worst DNS resolver on the Internet and switching away is trivial.
Given that almost every site now uses HTTPS, they could only do this for users navigating to non-existing domains (and furthermore have to register the domain names themselves, or they wouldn't be able to get any TLS certificate for them).
Would that work over HTTPS? My understanding is that, when the browser connects to the IP returned over DNS, it validates the TLS cert returned by the server at that IP contains the website name.
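If I understand it right, the check in question looks roughly like this (a sketch using Python's ssl module; example.com is just a stand-in):

```python
import socket
import ssl

hostname = "example.com"
ip = socket.getaddrinfo(hostname, 443)[0][4][0]  # whatever the resolver returned

ctx = ssl.create_default_context()  # hostname checking is on by default
with socket.create_connection((ip, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print("certificate validated for", hostname)

# If a hijacking resolver had returned the IP of, say, an ad server instead,
# wrap_socket() would raise ssl.SSLCertVerificationError, because that server
# cannot present a valid certificate for example.com.
```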
DNS also seems like one of the easiest cases for anycast and Cloudflare also seems like one of the best organizations to figure out that sort of internet hackery.
So assuming an efficient implementation and including high availability requirements: probably in the order of tens of machines, maybe hundreds but not thousands.
Edit: eastdakota's tweet says 13% of the requests are https- or tls-encrypted. That throws a spanner into these calculations.
OpenDNS when I left was running I think 12 locations with ~4 machines each for approximately a million queries a second. But we did a lot more work than Cloudflare (loading each users preferences into memory, making a decision of how to answer based on them, logging stats for the user, etc) and had plenty of overhead.
DNS isn't hard to serve at that scale, compute- or bandwidth-wise; your problems are more administrative, plus maintaining enough headroom that misbehaving clients don't accidentally DoS you.
I mostly use 1.1.1.2 or 1.1.1.3 depending on the device. That is CF with different restrictions.
On Windows, the big problem with DNS is that it can be changed at random. Among all the garbage on Windows, this is peak garbage: any con can change the DNS that I explicitly told it to use. DNS by itself is probably a top-10 security risk on desktop environments, and it's so completely broken by design. I hack it more or less daily to "debug" our sites.
Conservative people leave us with DNS, TLS/public CAs and other sht, and then they whine on HN day in and day out about why a "root" cert of MS leaked or whatever. Securing TLS or DNS with automation at a large-scale company is a complete nightmare.
When you think about it, this is a pretty depressing statistic for anyone interested in decentralized/libre services. It's trivial to run recursive resolvers yourself, and this demonstrates just how much people can't be bothered.
(and heading off the inevitable "but that increases the load on the authoritative servers" - this is only relevant to my comment if you're arguing that people are consciously choosing to outsource DNS because of this concern)
Once I was a member of a Linux user group, and we had regular meetings and also social gatherings, and during the latter, there were also mini-job-fairs; we could bring a résumé and chat with hiring managers.
So I was seated next to a gentleman who worked for a giant mobile telco, and I asked him about what he did, and he gave me a little spiel and quoted a figure of something like 5 exabits per second of data being processed for security and fraud prevention. And I reacted with amazement, and I said something like "that sounds exciting!"
And he says yeah, it's exciting, and then the conversation continued, and 15-20 minutes later, across the very large table, in another circle of conversation, somebody goes "you know, when I quote a large figure like 5 exabits per second, I'm always pleased to hear a reaction like 'that sounds exciting!' which is better than indifference or 'what's an exabit?'" and I chuckled at how the conversation had cross-pollinated, and my small comment was used as a good example.
I should've asked what kind of hardware equipment processes 5 exabits/second.
On any Linux or *BSD system it is very simple to run a DNS resolver on your own computer (e.g. unbound, https://nlnetlabs.nl/projects/unbound/about/) that you configure to listen on localhost, so you can put 127.0.0.1 as the DNS server in your "/etc/resolv.conf".
I have never used, on any of my computers, a DNS server controlled by other people.
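If you want a quick sanity check that the local resolver is the one actually answering, something like this works (a sketch assuming the third-party dnspython library and a resolver listening on 127.0.0.1:53, e.g. unbound configured as above):

```python
import dns.resolver  # third-party: dnspython >= 2.0

local = dns.resolver.Resolver(configure=False)
local.nameservers = ["127.0.0.1"]  # the unbound instance on localhost

answer = local.resolve("example.com", "A")
print("answered via 127.0.0.1:", [rr.address for rr in answer])

# If this times out, the local resolver isn't running or isn't listening on
# 127.0.0.1, and /etc/resolv.conf is probably pointing somewhere else.
```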
Heh, appreciate the link, but definitely not the foolproof mechanism I was hoping to see. Leaves a bit much for the non-expert to infer + not an easy way to guarantee the system is always being utilized.
I know you're probably joking, but harder to remember numbers was always going to happen when you expand the address space from 4.3 billion combinations to 3.4 * 10^38 combinations.
It's not a joke. Just because you expanded the address space doesn't mean you need to redistribute every number across the space evenly. 1.1.1.1 should still be 1.1.1.1 in ipv6 land, encoded as 0x11110000..., and some random new device can have 2221:db8:1234::f350:2256:f3dd.
They didn't need to change everyone's pre-existing addresses just to do a protocol change, same way adding unicode support to domain names wouldn't require google.com to become ∆Íßխեղճå∂ß.çøm. They also didn't need to change the default format to the ::::: stuff.
It was like that at the start of IPv6, but it was later disabled because companies were afraid of IPv6 compatibility with IPv4, which could be used to penetrate firewalls.
Huh, I had no idea, but I can see that happening. The bigger "nope" that comes to mind is wanting to replace NAT with every device having a public IPv6. Yes a firewall is theoretically superior to NAT, but it matters in practice how stupidly secure NAT is by default.