Why might you run your own DNS server? (jvns.ca)
312 points by 0xedb on Jan 5, 2022 | 203 comments



This is a good summary.

I run both authoritative (nsd) and resolving (unbound) nameservers. They require literally zero maintenance. Before nsd, I ran djbdns, which also required zero maintenance. I've run BIND, back in the dark ages. Rumor has it that BIND doesn't suck any more, but I've seen no reason to confirm.

If you are able to keep sshd up and running on your hosted or colo'ed server, you have the skills required to run a nameserver reliably. It's that easy. I recommend nsd and/or unbound.

If the article does not persuade you that you want to do so, then don't bother. But if you do want to, don't be dissuaded by assuming it will be difficult.
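
To illustrate, a complete unbound.conf for a LAN caching resolver can be as short as this (a sketch; adjust the interface and netblock to your network):

  # /etc/unbound/unbound.conf -- minimal caching resolver
  server:
      interface: 192.168.1.1
      access-control: 192.168.1.0/24 allow
      hide-identity: yes
      hide-version: yes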


I was hacked one single time in my entire 25-year career. Someone hacked a BIND server I was running and installed some sort of bot node. That was in the '90s.


The Pi-hole I was running on a Raspberry Pi got hacked. I only noticed the traffic when something unusual showed up on my Node app console.


Was the Pi running public facing services? How did this occur?


No, it was internal-facing, behind a cable router. There must have been some vulnerability in the Pi-hole or the OS. They breached the router.


That feels weird somehow. I highly doubt Pi-hole is the culprit. If you're only using it internally on your LAN for DNS, there is no way someone from outside can touch it. You most likely have other, bigger problems with your network (perhaps the WiFi password was discovered by someone, or you're exposing other vulnerable services to the web directly).


Actually, there are several ways; XSS, for example.

But I agree that something else is not OK. A compromised client (probably a computer) or a compromised router are my guesses.


Agreed; I managed to achieve this by forwarding port 53 in my router settings. This allows attackers to enlist you in their DNS amplification attacks, so please never do this.
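
A quick way to check for this mistake, from a host outside your network (203.0.113.5 stands in for your public IP): any answer at all means you're running an open resolver.

  dig @203.0.113.5 example.com +short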


Yikes. As much as I want to look into PiVPN, things like this give me pause.


Wireguard is the only service that I bother to expose.

It's stealthy and has mitigations for DoS attacks.


Do you have a good guide for this? I sort of grok that the Pi (server) setup is different from the devices (clients) that will use it, but it’s always good to check assumptions.

I already run PiHole, but I might run this on a different box just to keep things simple.

Also, last I checked, port 51820 is reasonably well known; is it safe to use this default when forwarding traffic?


To be fair, that was probably due to something other than the DNS stack. For example, I assume the web interface pulls in countless dependencies from a third-party repo (such as npmjs), any of which could have been the victim of a hostile takeover.

DNS is nowadays very robust and secure, and if you have unattended-upgrades configured there's literally zero reason to be frightened of DNS.


To be fair, that pretty much describes every daemon from the '90s, and the Linux kernel itself.


What version of unbound are you running? With our traffic load we restart unbound 1.13.x daily to "fix" a memory leak.


I've run relatively loaded Unbound instances in the past. I would suggest using the minimum num-threads at which a single thread stays under 50% CPU at peak, and setting the number of slabs to the same value (if it is a power of 2) or lower. A high number of slabs increases memory usage (maybe it can grow over a long time because of fragmentation, but I've not noticed this). The Unbound "Howto Optimize" guide [1] suggests setting num-threads equal to the number of cores, but IMHO that makes sense only if the server runs no other software except Unbound (and even then some cores will be utilized by the kernel, so better to give Unbound less than the total core count) and has more than enough RAM that possible memory fragmentation is not a concern.

[1] https://www.nlnetlabs.nl/documentation/unbound/howto-optimis...
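
As a concrete sketch (the numbers are illustrative, not prescriptive), that advice translates to something like:

  server:
      num-threads: 2         # lowest count where each thread stays under ~50% CPU at peak
      msg-cache-slabs: 2     # slabs at or below num-threads, powers of 2
      rrset-cache-slabs: 2
      infra-cache-slabs: 2
      key-cache-slabs: 2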


unbound-1.14.0 (newest).

But I should have qualified: I run a caching resolver for use by half a dozen users (so about 20 devices). Load is negligible, but it works perfectly! YMMV.

As an added benefit, my unbound instance is also faster than 1.1.1.1, 8.8.8.8, or my ISP's resolver farm.


It is a good summary. My first reaction was "for pain and suffering". If you are hosting a server with some cloud provider, you probably still want your primary DNS not on that server. Yes, there are caveats.


I've been running my own DNS servers for 25+ years (BIND, though I did mess around with nsd and unbound for a bit locally.) It is basically painless, set-and-forget, other than the usual OS/package updates.


How much difficulty is added by DNSSEC?


I don't know how difficult it is to set up DNSSEC, but I do know I had to disable it on my internal BIND DNS server because it wasn't resolving google.com ("query failed (broken trust chain) for www.google.com/IN/A at query.c"):

I had to modify my default BIND options to disable DNSSEC:

  options {
    dnssec-enable no;
    dnssec-validation no;
  };

If you want DNSSEC to work, be sure you set up NTP and that it's working properly.


Also, DNS64 (the DNS component of NAT64) breaks DNSSEC by design, so some exclusion rules are needed.


In late 2019 I scanned the Fortune 500 for DNSSEC on their top domains; exactly one entity was using it. At the time, Azure's stance on DNSSEC support was that HTTPS certs should suffice.

I think it’s a dead tech.


You haven't lived until you cannot update the time via NTP, because no DNS, because the DNSSEC records aren't valid, because the time is wrong....

This dnssec/ntp dependency loop is absurd, and it is incomprehensible that ntp/dns can break this way.


Could you fix that? I've had mixed results trying to resolve such NTP-related issues.


I think using IPs for NTP servers, or disabling DNSSEC, are your only options.

Both show how silly it all is, IMO.
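
For example, with chrony (the IPs are Google Public NTP's, current as of this writing; verify before copying):

  # /etc/chrony/chrony.conf -- NTP servers pinned by IP, no DNS needed at boot
  server 216.239.35.0 iburst
  server 216.239.35.4 iburst
  server 216.239.35.8 iburst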


Quick, easy project! I did it just now (finding a list of F500 domains was the hardest part), and 9% of the Fortune 500 is signed.
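
If anyone wants to reproduce it, a shell loop over `dig ds` does the job (a sketch; f500-domains.txt is a list you assemble yourself):

  while read -r d; do
    [ -n "$(dig +short ds "$d")" ] && echo "$d"
  done < f500-domains.txt | wc -l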


Not at all … nowadays.

In fact, I run my own private Internet complete with DNSSEC for my white lab.

I used BIND v9.16+ and wrote some bash tools to flexibly generate the named configuration files.

They can generate private root servers, private TLDs, split-horizon, bastion, and hidden-master setups.


Link to the tools that generate the named configuration files:

https://github.com/egberts/easy-admin/tree/main/500-dns


Adding DNSSEC validation isn't hard (a few options), but as others have commented, one has to be aware that DNSSEC validation may then fail from time to time. Validating other people's security setups means that occasionally someone messes up and validation fails. I have been running with validation on for 10-20 years, and maybe once a year I notice a service that is down because of failing validation. It is a bit like expired certificates on HTTPS back in the days before most sites started using automation for renewal.

Adding DNSSEC to a domain name is more work. You need a registrar that supports it, and then, depending on what DNS server program you are running, you might need to install a signer, a key store, monitoring, and scripts that talk between the different parts of the chain. I have heard that the latest version of BIND might now ship with all the required parts built in, but I have yet to test it. It is still a bit far from simply uncommenting an option, and the protocols for talking with registrars are being actively discussed and worked on at technical conferences.
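
To be concrete about the "few options" part: on the resolver side, validation in BIND really is about one line (Unbound's equivalent is auto-trust-anchor-file):

  options {
      dnssec-validation auto;   // validate using the built-in root trust anchor
  };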


Not much, but you should have a good grasp first, and understand the implications (easy to add, tougher to remove).

There are tools to automate re-signing, but personally I just do it manually once a year for fun.


Less than the amount of difficulty added to driving a car that can only be adjusted by poking at a TV mounted some place where you aren't looking at the road.

Sure, fixing that squeaky door is "easy," but have you ever heard the adage that every home project involves three trips to the hardware store? There may be technical aspects that few of us can implement from scratch and on the first try, but at the same time I also don't know how to build a good broom. These concerns are not insurmountable, especially with the network effects of people being in the same boat. How easy is it to find a good handyman without asking anybody? You don't have to.


The question was about DNSSEC.


It's work, but it's not difficult. Documentation is good and some daemons like BIND take care of key rollovers automatically.
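
For example, in BIND 9.16+ automated signing and rollover is a single zone option (a sketch; the zone name is a placeholder):

  zone "example.com" {
      type primary;
      file "example.com.zone";
      dnssec-policy default;   // keys generated, zone signed, rollovers handled automatically
      inline-signing yes;
  };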


reason: offer users of your website the option of per-packet encrypted DNS

CurveDNS is also not difficult to set up, so if you run your own authoritative DNS for your website, you can provide optional per-packet encrypted DNS. Very few websites offer this option.^1 If every website provided it, users would have a better alternative than DoT or DoH.^2 Neither DoT, DoH nor DNSSEC provides per-packet encryption, and they focus on DNS caches run by third parties, not authoritative servers.

1. One example is https://ianix.com

2. Users could query the authoritative servers directly using stub resolvers and/or recursive resolvers that support dnscurve protocol, such as dq and dqcache, respectively.


> Before nsd, I ran djbdns

Curious to know what made you switch (I still run djbdns).


No technical reason, djbdns is great and I wouldn't hesitate to run it again.

I did tire of building my own djbdns and daemontools packages. When I switched from qmail to Postfix, the others were collateral damage.


Ah, yeah, I was never a fan of daemontools either.

I actually run djbdns (both cache and authoritative) under systemd (not my fave thing, but the thing my OS comes equipped with) and it works fine.

The lack of native support for some record types (e.g. IPv6) is a little bit of a pain, but it's manageable.


I still run djbdns as well, as both an authoritative server and a caching resolver.

The biggest downside to djbdns, to me, is its lack of DNSSEC support. There are patches available for that, but my distro doesn't package them and I haven't gotten around to making my own package to include them.

The next biggest is related: djbdns lacks direct support for some newer Resource Records (like type 257 CAA) in its data file. However, the data file does allow you to encode arbitrary records directly, it's just a hassle to do it and to verify correctness.
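
For instance, a CAA record permitting Let's Encrypt would look roughly like this in the data file, using the generic colon syntax with octal escapes (a from-memory sketch; verify the encoding against dig output before relying on it):

  :example.com:257:\000\005issueletsencrypt.org:86400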


I also used to run BIND back in the day, but after reading this article I just spun up a container with PowerDNS. It seems quite easy to maintain.


Even just a caching resolver is useful for normal users.


I'm writing a safe Rust DNS server library:

https://gitlab.com/leonhard-llc/ops/-/tree/safe-dns/safe-dns

My goal is to have libraries for all the common services. Then I can run web, APIs, DNS, and email from a single static binary, with no config files.

Then the next stage is to run the server in a unikernel in a VM, eliminating the OS. The following stage is to run the server directly on bare metal, eliminating the hypervisor kernel and OS. The final stage is to run the server as firmware directly on the CPU, shipping built-from-source firmware for all peripherals, eliminating all unauditable binary blobs from the server.


I don't think I'd like to go this way, but that's pretty cool. Please keep us posted!


I don't agree that it's decentralized. It tries to be, but it's really distributed, with a few root servers and a few people who have keys to them.

Eventually there will be a decentralized name system for probably a decentralized P2P radio system, and I'm trying to build that: http://radiomesh.org

But it's proving trickier than I could have ever dreamed. Right now I have scrapped 433MHz LoRa on the Raspberry Pi Zero and I'm moving to 169MHz plain radio on the Raspberry Pi Pico.

As for running your own, it's very easy with these simplified lines of Java and dnsjava (excluding the port 53 UDP plumbing):

  import java.net.InetAddress;
  import org.xbill.DNS.*;

  Message query = new Message(data);
  Record question = query.getQuestion();
  Message response = new Message(query.getHeader().getID());
  response.getHeader().setFlag(Flags.QR);
  response.addRecord(question, Section.QUESTION);
  Name name = question.getName();
  int type = question.getType();
  int dclass = question.getDClass();
  String host = name.toString(true).toLowerCase();
  ...
  // ARecord takes an InetAddress, not a String (192.0.2.1 is a placeholder)
  response.addRecord(new ARecord(name, dclass, 300,
      InetAddress.getByName("192.0.2.1")), Section.ANSWER);
  ...
  response.getHeader().setFlag(Flags.AA);
  return response.toWire(512);
Everyone should run their own DNS in the same process as their HTTP and SMTP servers... because without DNS nothing exists.

There are few things more frustrating than having your DNS provider be down for hours without recourse!


Hi.

The root servers use anycast, so you can figure there are "several" nameservers with the same address scattered around the 'tubes, and distinguished by the routes announced in different places.

There are and have been alternate roots since the beginnings of internet time, notwithstanding Mockapetris' opinion that people who advertise a false root should be shot.

Writing a decent recursive nameserver is nontrivial, I've written several for specific purposes but generally I use BIND.

I concur that running a recursive server for your SMTP server is best practice, because network intelligence is oftentimes utilized for spam/malware mitigation. I'm unclear why you need it for e.g. HTTP.

> few root servers with a few people that have keys to them

Well, kind of. As I said, there are quite a few root servers, although control is in the hands of relatively few. Maybe you realize this, maybe you don't, but yes, there are keys for DNSSEC. I'm not sure exactly how it works, but several people have to cooperate to sign the root zone. They have key signing ceremonies which are streamed online. During COVID I watched them drill a lockbox because one of the keyholders couldn't make it to the ceremony; fun times.


I don't like anycast because I think it requires BGP and backbone access or similarly expensive stuff. DNS should have had regions in the main protocol so that people in the EU don't use a DNS server in Asia, for example. But it's too late for that now.

I might use geolocation on my DNS replies, and unfortunately here is the second flaw of DNS: the replies should follow the sent order. As the protocol works now, you either get round-robin redundancy or you direct your users to the (hopefully) correct continent; you can't have both!

As for my brute force workaround: I use IPs for connecting as often as I can, and the hostname is just for virtual hosting to work.

So all my applications have euro., asia. and iowa. prefixes and when outside of a browser I can "hardcode" the IPs so that extra second of lookup never hits my users.

Of course, that requires fixed IPs and an open port 53, which is something every home fiber owner should ask for, to distribute the internet again!


Most recursive resolvers try to figure out which authoritative server for a domain responds fastest and use that one. If you've got enough DNS requests and enough DNS servers, it kind of works out OK without anycast. Although I've been told that 4 authoritatives is the optimal number, which is limiting (you can do more, of course, but this random internet user recalls, and can't find, a writeup suggesting more wasn't great in some semi-failure cases; and you can cargo-cult the top X domains, which seem to do 4 for the most part).

Advanced protocols may be able to use SRV records to distribute traffic further, but web browsers can't, so they're kind of stuck.


> Advanced protocols may be able to use SRV records to distribute further traffic, but web browsers can't

Not for lack of trying from the DNS community; more like web browsers won't. However, if you haven't already, you should take note of the HTTPS and SVCB DNS record types: https://datatracker.ietf.org/doc/draft-ietf-dnsop-svcb-https...


Personally I don't see a use case other than advertising or surveillance for a web page doing 20 DNS lookups (and that includes CNAME chains). Having said that...

> extra second of lookup

Hrm. Sounds like you want it to work like today's internet, with today's internet services.

You're using radios; what does your traffic shaping look like, UDP vs TCP? I know from experience that media devices spam the crap out of WiFi (weird stuff too, like multicast MAC addresses). A common scenario is a TCP channel for control and an accompanying UDP spew for the actual content.

DNS resolution according to the standards promulgated in the 1980s uses UDP, unless a (UDP) reply indicates that the request should be retried over TCP: there is no other use of TCP, no "and if all else fails, retry with TCP".

For media, this provides a first mover advantage: whoever resolves their service and starts streaming first attenuates DNS resolution for the latecomers. I actually wrote about this on HN and wrote a demo TCP-only forwarder (which also conveniently does DoT) https://github.com/m3047/tcp_only_forwarder but nobody was very interested.

Point being, if DNS resolution is failing or slow and you can't / won't do traffic shaping you might want to force DNS over TCP / TLS.
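
For a quick one-off test of the difference (kdig ships with Knot's utilities):

  dig +tcp example.com               # force TCP for a single query
  kdig +tls @1.1.1.1 example.com     # the same query over DoT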


There is mDNS, for example. Also, in terms of alternatives to BGP, you're still going to need to figure out routes, and/or detect loops. Different "anycast" nodes could use something like this to detect each other's presence. You could use the (IP) TTL to limit how many hops a packet will traverse looking for a particular server (no similar concept at the MAC level, AFAIK). I hope you have fun!


https://www.youtube.com/watch?v=CVukpkWAp4Y

Here is a link to the key signing ceremony if, like me, you are interested.

Maybe it's the Englishman in me, but I was hoping for more grandeur; perhaps some kind of ceremonial mace, or at least a benediction.


God save the internet, and its fascist regime!


> but yes there are keys for DNSSEC

why would you want DNSSEC?


DNSSEC works fine for me, why not? When people break it, I get early warning. When people implement it and don't utilize it, I find out. But in all seriousness my sights are on DANE and ultimately eliminating Certificate Authorities.


It's not happening, for multiple reasons.

https://educatedguesswork.org/posts/dns-security-dane/

Reasons not to DNSSEC? The biggest one is that it exposes you to misconfigurations that happen routinely even at large sites (because DNSSEC is hard to manage), for no actual security benefit. The "no actual benefit" part is my big reason, though. DNSSEC is pure path-dependence; people do it because they think they should be doing it, because a bunch of people put a standard together back in the 1990s and have been lobbying for it ever since.

If you proposed DNSSEC today, rather than in 1994, it would go nowhere. But since it's been an IETF effort for 2 decades, it now has a life of its own.


I upvoted you because you're clearly a fanatic in this space. (Hello!) However, I disagree. Viktor punches above his weight (we've gone a couple rounds).

I'm against CAs because they are fabrications who do a beauty show primarily for the browser makers, and everybody else has to deal with the fallout (eschewing political comparisons... deep breaths...); they also provide cover for "infrastructure vendors" who mint their own CAs.

They say that DNS encapsulates the two most difficult problems in data science: naming and cache expiry (I'd add delegation). So if we already have one global tribe attempting to solve this problem, how much of our attention budget do we really want to spend on people who have discovered this "new problem" of CA chains? Really?

There are (Derrida-not) misconfigurations which routinely happen (PeeWee Herman: "I meant to do that") which clearly benefit from DNSSEC. Surely no one would configure their network to trust the name of, say, a file server which serves the executables for your short order diner. Dang. It's such a brilliant idea. Why doesn't it pan out? JASBUG hasn't been solved, other than M$ saying "don't do that". What if the DNS solved it?

We're writing here for the public so yes, "CA chains" is technically inaccurate. I don't care.


I co-develop a userspace FOSS DNS client for Android. If you only bother with UDP, you'll leak DNS over TCP. I've found that adware increasingly uses TCP to bypass such naive DNS-based content blocking implementations: https://nitter.net/rethinkdns/status/1434137438901846020


The fight for privacy and security in a centralized system is a battle that is lost before it has even begun for all participants.

You need to redesign everything from scratch and be aware how the old system was flawed. The tradeoffs are usually complex high energy vs. simple low energy and it's always a good idea to start with the simple low energy.

SMTP(1971), ETHERNET(1973), IP(1974), UDP(1980), C64(1982), TCP&DNS(1983), HTTP&BGP(1989), DHCP(1993) and RASPBERRY4(2019) are never going away because they are simple. If you want to use something new make sure you are not wasting everyones time!

Edit: I added two hardware devices because they mark the first and last open personal device humans made at scale big enough to matter so that you can see the timeline properly; it took us 50 years to complete the creation loop of a global computer network.

It's going to be REALLY interesting to see what the Raspberry 5 looks like. My guess is it's going to flop like the PS5 unless they REALLY increase the energy consumption of the GPU by a factor of at least 2x, but they need to make that dynamic or the 4 will retain its low energy value and hurt adoption (think C64 vs. 128)!


> But it's proving more tricky than I could have ever dreamed

Do you have more detailed explanations about this project? radiomesh.org homepage doesn't contain a lot of info.

You may be interested to check out projects like GNU Name System, or routing mechanisms such as CJDNS. Tor's onion services are also very interesting as a global secure naming scheme.


No, I have been iterating on different hardware for 5 years while developing the idea in my head.

The real challenge is the physical limits and the scalability problem (radio range, bandwidth, electricity use, longevity and disk space).

I have a working prototype for most parts so I know the project is doable, but to what extent it can successfully replace legacy systems without them faceplanting on their own is another question.

I'll check out your mentions, but I doubt they can be applied to radio.


In a decentralized system how are you handling the bad actor problem? Such as 2 entities claiming they own a namespace?


Well, it will probably be some sort of hashing on the old fiber internet, but at a fixed, very low energy rate per message and a slightly higher rate for names (first come, first served, with some sort of public/private-key-signed distributed database; just trying not to use the b-word here, and with spam protection I haven't chosen yet). So far I'm concentrating on the hardware and the radio hopping protocol, to make sure it can scale at all. That, combined with reputation: because the system is relaying your messages, you can be increasingly punished as you misbehave, which makes it hard to abuse productively. But as with all radio, you will be able to disturb locally. If you have a better suggestion, I'm all ears.


Ah, OK. I was trying to figure out how you would fix, say, abc.xyz being announced by someone name-squatting. Then the real abc.xyz comes along and says "hey, wait". First come, first served fixes someone else coming along and stealing, but not squatting. In all of the systems I come up with, I always end up with some sort of central trusted authority/machine saying "this is ok, that is not".


Yes, but good point about the squatting. I know it's going to be a problem, eventually solved by the "market", but I would like something a bit leaner... thanks for stirring my noodles.

Maybe a penalty for unused names over time, but that will just drive paid spam and energy "waste"... time solves everything; I'm sure a better solution will crop up eventually, it's not like I will be done next week!

Unfortunately for us, everything is a pyramid scheme; you just have to make as stable/fair a pyramid as you can!


Oh, very true. It is what I keep running into when trying to create something like this: I end up with a "the buck stops here" kind of deal. There is a secondary problem too: transfer of ownership. Companies split up and buy each other out all the time, so you have to have a way to transfer ownership too, which in purely technical terms looks like someone else squatting; but they are not, and you want them to have it. The existing DNS business system kind of grew into all of this. As in, DNS today does not preclude anyone from making their own set of domains; you probably could make it very decentralized just using the config options. The problem is someone "reputable" has to verify. That becomes a weak point and a place where someone can gatekeep. Adding any sort of monetary advantage will also unfortunately motivate people to gatekeep. However, money keeps the lights on...


I don't think you can do better than this: you have to pay to rent space in the namespace. This allows first-come first-served, and squatters, but they have to pay. Since the namespace is new, in theory there's no benefit to squatting, because no particular name has any value yet.

Of course, often there's a desire to mirror some existing namespace (e.g. DNS, trademarks) where there is value in the name already. In that case the best you can do is to build some oracle mechanism that consumes proofs of namespace ownership. Similar to how LE/ACME works, but used to drive an oracle.


Another interesting way subdomains leak is through TLS cert registration. I.e. you can plug a domain into this search [0] and find subdomains that have public TLS certs.

I just noticed a full blog post on this topic is also on the front of HN right now. [1]

[0] https://transparencyreport.google.com/https/certificates?hl=...

[1] https://shkspr.mobi/blog/2022/01/should-you-use-lets-encrypt...


Tangentially related: I’ve wondered what would happen if you purchased a domain name that had previously been owned by someone else and they had obtained a TLS certificate from a CA with an expiration date beyond when your ownership began. This seems like a good tool to find such a certificate, but if you found one what would you do? Would the holder of the certificate be able to MITM or otherwise impersonate you? Would there be a way to revoke the certificate (I’m guessing you could contact the CA that issued it?)? Do CAs automatically revoke certificates when domain ownership changes?


There's been some research on this! https://insecure.design/


Oh … wow.

How is LetsEncrypt going to handle revocation … in under 24 hours?


That’s awesome, thank you for the link!


How would you retrieve the private key for that certificate?


You wouldn’t. But the CA that issued the certificate could still revoke it, correct? E.g. https://letsencrypt.org/docs/revoking/#using-a-different-aut...


Yes, from that same link you can see that whoever controls the domain can revoke those certificates (by asking Let's Encrypt to revoke it). All you need is the certificate itself (which you can get from the transparency logs e.g. crt.sh), not the private key.


It's unclear. Do you know which CAs you currently trust on your machine? I bet you can't even identify one tenth of them.

There are so many CAs installed by default that it's truly a massive man-in-the-middle exposure: whenever you might think you are safe, you are not.

I would assume the CCP and the KGB control at least one of the CAs your OS currently trusts. (No doubt the NSA has one too.)

In Debian: dpkg -L ca-certificates


I'm enjoying running a pi-hole on my local network that also has unbound running on it for resolving dns queries. Works like a charm, and it's nice and quick.


I also use Pi-hole[1] and unbound[2]. You can even use tailscale[3] as a quick and easy way to use Pi-hole on all other networks through a WireGuard VPN tunnel.

[1]: https://pi-hole.net

[2]: https://docs.pi-hole.net/guides/dns/unbound/

[3]: https://tailscale.com/kb/1114/pi-hole/


Tailscale gives me some irrational sense of happiness because it *just works* and lets you do cool things like this without having to worry (too much) about security.


I've been running Pi-hole on the original Pi (1) Model B for over a year and I'm really happy with it. The original Pi running Raspbian has been very reliable and works tirelessly; the only problem is the I/O bottleneck: performance when querying the SQLite database is unbearable.

Last November I spent half a day installing Pi-hole on a spare Pi 2 and Pi 3 (Ubuntu LTS), serving as two internal DNS servers for my home network, with the router (AsusWrt-Merlin) as their upstream doing DoT (DNS over TLS). Really happy with the performance and cost (quiet, low power consumption, no heating issue, no dust collection issue, etc.)


Does Pi-hole also work with other SQL databases? If so, you could host PostgreSQL on another Pi (or something beefier). Or maybe there is an adapter library that makes it possible to access a SQLite database over the network (not talking about NFS, as the SQLite developers discourage that).


I'm surprised application devs haven't done this, but you can "backup" a SQLite db to an in-memory SQLite db, then just "backup" the in-memory db every so often in the background.


The problem with running Pi-hole on the Pi 1 (original) is that it does not have enough physical memory after installing the Pi-hole stack (I used Nginx + php-fpm for the web UI). Otherwise there is a way to use utilities like `vmtouch` to read the SQLite database file and keep it in the page cache, even "lock" it there.

On the Pi 2 and 3 it's no longer an issue (even if you don't do anything about it), thanks to more memory and better I/O (using a faster micro SD).

Pi-hole provides a mechanism to back up and rotate the database from time to time; one can do that in whatever way suits their use case.


Pi-hole is awesome. It did give me some issues with a few streaming services that I think used Google ads.


Yeah, it borked Paramount+ and I think HBO Max on my Apple TV. I just exempted that device and it's fine.

Why'd I exempt my Apple TV from the Pi-hole? I don't want to screw things up when my wife or visitors are watching TV :)


> Why'd I exempt my Apple TV from the Pi-hole? I don't want to screw things up when my wife or visitors are watching TV :)

Very wise. There are a lot of cool things I’d like to setup, but keeping things simple for guests and spouses matters.


I run BIND on the router at each site I administer. The router gets a real domain name and is the authoritative nameserver for that domain. isc-dhcp-server is configured to assign a publicly routable IPv6 to each client, and update the BIND zone records with the client hostnames, so each client automatically has a publicly routable domain name, hostname.domain.com, with AAAA records pointing to their IPv6. They are firewalled of course.
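
The moving part that makes this work is dhcpd sending dynamic updates to BIND, roughly like this (a sketch; the key and zone names are placeholders):

  # dhcpd.conf
  ddns-update-style standard;
  ddns-domainname "example.com";
  include "/etc/dhcp/ddns-key.conf";   # defines the TSIG key "ddns-key"
  zone example.com. {
    primary 127.0.0.1;
    key ddns-key;
  }
  # named.conf side: zone "example.com" { ... allow-update { key ddns-key; }; };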


I thought the more common case would just be a caching server (e.g. dnsmasq) for small networks with slow connections to the outside world (and this also helps with local hostnames).


Another good reason to run your own private DNS resolver is censorship. I originally set mine up after my ISP was sued into blocking access to thepiratebay.com. The court was satisfied that a DNS blackhole was good enough.


I imagine that it would be easier to use the Google or Cloudflare DNS (8.8.8.8 or 1.1.1.1 respectively).

Disclaimer: I work for Google.


But then you’re sending all your DNS queries to hostile companies, so that’s a pretty big loss.

We need to normalize people using Pi’s for at-home DNS just like we did HTTPS.


Describing the public DNS resolvers of Google and Cloudflare as "hostile" is textbook shit-HN-says hyperbole.


I'm using Cloudflare on and off.

But it depends what/who you trust. Say you're based in the US, work for Google, and want to blow the whistle on some nefarious defense/Pentagon contracts; then staying away from Google (or any US-based tech firm's) infra as much as possible would be sound advice.

Even if you're just a consumer, you should be interested in degoogling your life, including not just Google DNS but their stupid web fonts and Google auth APIs. Google is still going to get a lot of your traffic no matter what you do, but every data point you can remove is good; even if it adds zero to your bottom line, at least you will maintain awareness of how you get milked every day.


Ummmm

Can you explain why giving a complete list of the domains you visit to the world's largest advertising and data-gathering company isn't dodgy? If you can't see why, then you appear to have drunk the SV Kool-Aid.

tldr; If millions of people do a silly thing, it is still a silly thing.


Google is a for-profit spyware company. They are hostile to me and anyone who desires privacy.


Good advice. The Hong Kong government blocked access to some sites recently, but at least I can access them with Cloudflare DNS.


But the DNS traffic still goes through the ISP, right? They could blackhole all normal DNS traffic matching their blacklists.


You've got the power dynamics wrong here. You pay your ISP for unfiltered and uncensored internet access, and your ISP wants to provide this to you. YOU are the ISP's paying customer. The fact that there are third parties like recording industry associations that want to interfere with your and your ISP's private business is a problem for both you and your ISP.

The ISP will do the least legally possible to satisfy whatever these external players are coercing them to do. The recording industry does not pay your ISP. On the contrary: they sue them in court and try to force them to implement solutions to censor your internet without any compensation. All these solutions cost the ISP, so if the court is happy that removing the DNS records from the ISP's primary DNS is sufficient, then so be it. The ISP has no incentive to be hostile towards you or to implement any more blocks than the absolute minimum they are legally required to.


Cloudflare at least offers DNS over HTTPS. They'd have to block all traffic to 1.1.1.1 to prevent you using it; they couldn't selectively block The Pirate Bay.


I don't think many clients use DNS over HTTPS? Do you need a modern router for it?


All US Firefox users, some Linux distros, and Cloudflare (and maybe other) VPN users have it by default. You can also enable it on Windows 10/11 or in Chrome.

Your router doesn't need to support it; one of the complaints from business/school admins, or even just people trying to run Pi-hole network-wide, is that DoH bypasses network-level DNS setup.


I don't follow. If you access thepiratebay.com, at some level you are going to send an HTTP[S] request to thepiratebay.com, and that request will go through your ISP. How do you prevent that without an external DNS resolver or a VPN (which is not controlled by your ISP)?


The ISP happily routes all IP traffic. Their court-sanctioned solution was just blocking their default nameserver from giving out any records for thepiratebay.com. This can be easily circumvented by editing your hosts file, using some DNS server other than the ISP's, or running your own DNS server.


A warning: please do not put a resolver directly onto the internet. As nice as it might be to have a DNS ad-blocker or your own names reachable all over the internet, the server will become part of DDoS attacks through traffic amplification, and you don't want that.


If anyone wants to learn more, here's why open recursive resolvers are a bad idea: https://www.cloudflare.com/learning/dns/what-is-recursive-dn...


> Please do not put a resolver directly onto the internet.

Consider using DoT or DoH instead, or at the very least disable UDP queries (there's a slight penalty though).
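
In Unbound, for instance, TCP-only is two lines (the penalty is a TCP handshake per uncached query):

  server:
      do-udp: no    # no UDP service, so no amplification; address spoofing relies on UDP
      do-tcp: yes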


Run your own recursive server and instrument the crap out of it: https://github.com/m3047/rear_view_rpz You can't get local knowledge from anywhere else.

The latest BIND has DoT (DNS over TLS) out of the box, or you can put nginx in front of any decent DNS server to terminate TLS, just like you do with a web server (this is fundamentally TCP, not UDP, however).


Another reason you might want to run your own BIND server is to enable reverse-lookups for your internal machines. On my home network, a reverse-lookup for 10.0.9.30 resolves to tara.nono.io (i.e. `dig 30.9.0.10.in-addr.arpa ptr` → tara.nono.io.)
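
The zone side of that is a one-line PTR record (reusing the example above):

  ; zone "9.0.10.in-addr.arpa"
  30    IN    PTR    tara.nono.io.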


I managed a cluster of recursive DNS servers for a broadband ISP (around 30k subscribers) for years. We also had 2 authoritative DNS servers running BIND, but the operations were fully automated through a bunch of custom Python scripts (the NOC operators can request a new record by themselves).

For the recursive ones, I started with BIND, but after a few months I replaced it with Unbound and it works like a charm. The only problem that I experienced was DDoS, mainly generated by ultra-cheap Chinese home routers with buggy firmware. Anyway, after a few attacks we implemented an application monitoring solution and were able to mitigate the attacks in a short time.

On my laptop I run a Docker container with grimd (https://github.com/looterz/grimd) as recursive DNS and DoH proxy. I can filter out a lot of ad requests and tracking, and have visibility into what my DNS traffic is. It's not hard to configure.


Silly question coming from someone not very experienced with DNS servers/resolvers: is there a way to download/cache/resolve all country-specific domains (ccTLDs)? I know there are many sites that sell zone files, like https://zonefiles.io, but aren't the DNS records supposed to be freely available?


What precisely are you interested in?

If you just want the NS records and glue for all the ccTLDs they're in the root zone.

https://www.internic.net/domain/root.zone

If you want the complete zone files from every ccTLD that is a much bigger ask. I'm not sure but I imagine you would have to look into each ccTLD and find out if they're available.


You can download zone files for lots of gTLDs using ICANN’s Centralized Zone Data Service [0].

For ccTLDs there's no centralised system, and availability depends on the country registry. For example, Nominet makes the UK zone file and others available to UK registrars for a fee, I think.

Another approach is to buy WHOIS files from providers like Whoxy [1], the registrant data shouldn’t be used because of GDPR and other restrictions but as a domain list it can be useful.

I’ve done a fair bit in this area so if anyone wants any help feel free to send an email - details in profile.

0. https://czds.icann.org/home 1. https://www.whoxy.com


Thanks for the hints. I'm interested in collecting all sites/domains for a specific ccTLD (.gr), purely for statistics/curiosity. I know that some countries publish their zone files, but for the GR registry (like many other countries) this is not available. I'm not interested in the WHOIS data since, as you correctly mention, it contains sensitive data. I was thinking that somehow, if I had my own DNS server, I would be able to download all available domains, but after some reading I understood that this is not how DNS servers work. A local client (inside my own LAN) would need to perform a lookup or web access for each domain, so that the result could be cached.


If anyone has any feedback on using zonefiles.io I’d be interested to hear it


Have you ever wanted to build a toy nameserver that returns funny programmatic results? Then have a look at Python’s dnslib.

Example where the magic happens in ~40 lines of code:

https://github.com/paulc/dnslib/blob/master/dnslib/shellreso...

Hurricane Electric can sit in front of it if you like (and your records are dynamically generated but bounded to a known finite set), god bless them <salute>:

https://dns.he.net/

Example where you might want this: you wrote a nameserver that runs arbitrary shell commands!


dnslib author here - wasn’t expecting to see this so thanks for the reference.

The key thing I learnt writing dnslib (which was originally to provide a DNS API for an application) is that DNS is actually a very dynamic protocol, but the complexity of mainstream servers like BIND makes it hard to do a lot of the things you can actually do. There are a lot of problems (in particular in the service discovery space) which can be solved much more easily using DNS rather than inventing something separate.

As an aside if you want an authoritative DNS server I would look at KnotDNS [1] - you can avoid all the zone file cruft and interact with it using a sensible API. If you want to write a dynamic DNS app I would look at @miekg’s excellent Go library [2]

[1] https://www.knot-dns.cz/

[2] https://github.com/miekg/dns


Thank you for your work, _paulc!

I used your library to broadcast weather readings via DNS in the park where I live. It was really nice to have all the protocol work done for me, leaving me to focus on my meteoprocrastination :)

There’s a special place in my heart for tools where you learn by following examples, rather than by having to read abstract documentation.


Lost me at wanting to run BIND. BIND's security track record is poor and does not show promise of improvement. Run something else. There are plenty of systems that can consume or convert from BIND format if that's what you want.


I've been running BIND on the internet for a quarter century (ns-he.nono.io, ns-digitalocean.nono.io). Sure, I got burned 20 years ago with, IIRC, a zone-transfer exploit, but in those days BIND ran as root (remember: the internet was new then). Nowadays BIND runs as a non-privileged user in a chroot'ed environment, so even if it's compromised the blast radius is tightly constrained.

And I like BIND. I like editing zone files by hand. It never fails to make me happy.


BIND has a track record because it has existed for decades. At present, what's wrong with it?


Here are a couple of more recent RCEs:

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-2521... https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8625

There are numerous DoS CVEs over the same time period.


SPNEGO (and GSS-API) was used by less than 1% of BIND users (by observation), and was exploitable only if explicitly enabled; the maintainers removed that entire code segment permanently.

The rest is pretty solid and well tested.


One use case that I can think of is split DNS, when you'd like to return different data to different clients, about which I wrote on my blog: https://blog.kronis.dev/tutorials/how-to-run-a-split-dns-ser...
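
In BIND this is done with views; the shape of it is (a sketch; zone and file names are placeholders):

  view "internal" {
      match-clients { 192.168.0.0/16; };
      zone "example.com" { type primary; file "internal/example.com.zone"; };
  };
  view "external" {
      match-clients { any; };
      zone "example.com" { type primary; file "external/example.com.zone"; };
  };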

That said, outside of serious enterprise settings, you can do all sorts of things without hosting your own DNS servers - even odd ones, like making records on public DNS servers for your internal network. Sometimes using a dynamic DNS client (e.g. ddclient) is actually easier than caring about setting static IP addresses (if you just want to let DHCP handle everything), when you don't care about that sort of data being exposed. Of course, that's not to say that people should actually do stuff like that, just that they can.

On a more practical note, if you use the DNS servers of someone like NameCheap or GoDaddy, you might run into limits for how many records for a domain you can create. For example, NameCheap allows up to 150 records (https://www.namecheap.com/support/knowledgebase/article.aspx...).


One thing the fine article does not list as a reason to run your own DNS is if you need a secondary name server. There are plenty of options (Route 53, etc.) if you only need a primary, but all Internet domains should have at least two authoritative servers (for redundancy). There are far fewer secondary server options available from DNS providers, so it is often a good idea to run your own.



Far fewer … secondary DNS servers … options?

Seems to me that they are increasing in number, both public and private.


Think DNS over TLS/TCP.


> it’s almost 40 years old

I was shocked that DNS was only almost 40 years old - I would have guessed it was at least older than me, but she's right. According to https://datatracker.ietf.org/doc/html/rfc882 it's almost 10 years younger...


The Internet is not much older; it is reasonable to say that the Internet started in 1982 when TCP/IP was first standardized, though some would say that the Internet started a few years later with the creation of NSFNet and its interconnection with other networks.


.ARPA domain is older than 1982, so TCP/IP was in force before standardization.

Fun times.


Hypothetically, if some scary evil person were to register a domain with some accredited registrar - say GoDaddy or Tucows - could they arbitrarily seize it for "content policy" reasons? Like "your site has Dave Chappelle jokes which we don't like", so we took your domain.

Or do they stand to lose something in a major way if they do this?


Depends on the jurisdiction, your contract with the registrar, the TLD's registration policies... Some TLDs like .ml (Mali) which gives away domain names as a public service (why doesn't every TLD do that?!) have very explicit content policies: http://www.point.ml/en/ml_contentpolicy_combined_v0100.pdf


The blog post indicates running a resolver would have privacy-enhancing benefits. Am I misunderstanding something here? Isn't the resolver mostly unusable without an upstream source that could then log all queries? Or is there a bulk records download option?


In the end you need to ask an authoritative server anyway, but this lets you skip your ISP being the middleman for all your general queries. Unbound is nice because you can have it verify DNSSEC.


Think of it like downloading the dictionary and keeping fast local access nearby instead of looking up every word on Google.


Not expressly mentioned in the "user interface" bit: not all hosting services allow all possible RRTypes, and you may want to deploy uncommon or newly specified ones like TLSA or the possibly-soon-to-be-specified SVCB and HTTPS RRTypes.


There's a fundamental mistake in this article.

I quote: "But the “phone book” mental model might make you think that if you make a DNS query for google.com, you’ll always get the same result. And that’s not true at all!"

Phone books are not a static model either. If you were to look up "Walmart" in your local phone book, you're going to get a different set of phone numbers for the Walmart stores around you as compared to five states to the West or East of where you are. As such, the mental model really is apt.


Doesn't the sentence from the article mean "even from the same place, you can get different results" though? As in you're looking in (ostensibly) the same phone book but getting different numbers - not that you're looking in different phone books.

I can look up www.google.com from 3 different machines here on my local network (behind a single ipv4 nat'd address) and get 3 different IPs as a result.


The main reason I run my own name server is to support IP over DNS tunneling. Having direct and easy control over the normal records it serves is also nice.


reason: you want to route application traffic through a local daemon but the computing device will not allow you to access /etc/hosts or run a firewall like iptables/nftables/pf

One way to redirect application traffic to a local daemon, e.g., something like sslsplit or stunnel, is using firewall rules. Another way is to use DNS.

Running DNS for oneself with a custom root.zone allows one to redirect traffic, for example, to a loopback address where the daemon is listening. The DNS server can run locally on a loopback or private address (for use while at home/office), or remotely on a public address (for use when travelling).

For example, I use a local proxy server instead of remote DNS lookups. When I visit example.com, there is a local DNS lookup to a local DNS server listening on the loopback. No DNS packets leave the computer. The local DNS server returns the loopback address of the proxy. The proxy, which has the remote address of example.com stored in memory, then accesses example.com.
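
Unbound can express the same redirect idea without maintaining a custom root zone, for anyone who wants a lighter-weight variant (a sketch):

  local-zone: "example.com." redirect
  local-data: "example.com. A 127.0.0.1"   # every name under example.com goes to the local proxy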


I've been using pfBlockerNG [1] for years and haven't seen ads in a long time. They are even filtered out of YouTube videos, through the DNS sinkhole. I use it from my offsite server at my parents' house through IPsec. It also protects family members from accessing bad IPs, based on several global malicious IP filter lists.

[1]: /r/pfBlockerNG


We use the excellent gdnsd (https://github.com/gdnsd/gdnsd), mainly for its geo-DNS configurability. Have been using it for probably 5-6 years now, handling hundreds of millions of queries per day, and never had an issue with it.


Thanks for this great article! My two cents:

> reason: do something weird and custom

Oh yes please! Previous discussions on this topic:

https://news.ycombinator.com/item?id=28218406 (HTML over DNS)

https://news.ycombinator.com/item?id=25620411 (DNS Key Value Storage)

https://news.ycombinator.com/item?id=22808121 (Wikipedia over DNS)

> reason: geo DNS

Please don't do this! IP addresses aren't geolocated in the common sense. The good way to get content closer to your users is to announce your IP space from your different locations. Then client ISPs can choose the best route depending on peering policy and number of hops.

If your DNS lies depending on the IP who asks, you're going to have quite a bunch of people redirected to the "wrong" (far/slow from their perspective) server. The only exception i can think of is for split-horizon DNS where your local resolver advertises local IP addresses.


One challenge I found when running a public DNS resolver (and authoritative name server, if we're being targeted) is DNS backscatter. To this day I don't really know how people solve this problem.

Can anyone enlighten me on how to defend against this kind of attack? I really want to run my own DNS server.


reason: You want to use dynamic wildcard SSL certificates via Let's Encrypt, because they need to be validated via DNS.

reason: You want to use anycast (rent cheap VPS servers that support BGP).

The anycast managed services that exist can take days to sync all servers, so you can't use DNS validation with Let's Encrypt. Solution: run your own anycast...

That said, anycast is overkill, because DNS has caching built into its protocol: if the user has looked up your IP once, it will be cached on their machine the next time they look it up. And if you have a fairly popular domain, it will also be cached at the ISP or whatever DNS resolver the user has.
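
For the wildcard reason above, the certbot invocation is roughly this (--manual shown for clarity; a DNS plugin or an RFC 2136 hook against your own server is what makes it automatable):

  certbot certonly --manual --preferred-challenges dns \
    -d 'example.com' -d '*.example.com'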


On my LAN the only DNS servers I'm running are resolvers. I have a Pi-hole instance backed by stubby to filter out ads/trackers and send queries outside my network via DNS-over-TLS.


I have lately started to call DNS "a global distributed database" rather than decentralized. Decentralized implies some properties that the DNS system most definitely does not have.


This seems like redefining the term: DNS is fully decentralized in normal operation — the hierarchy has fewer parties involved, but no one entity owns the entire data set the way you’d have in most distributed databases.


In normal operation - true. However, from a technical perspective, any zone operator above yours has the capability to take over your domain(s). While I agree that the root zone operators abusing this power is highly unlikely, for TLD operators under some legal/government pressure this can be quite likely. In fact, domain name seizure is quite a common procedure around the world.


> However, from a technical perspective, any zone operator above yours has the capability to take over your domain(s).

That doesn't mean it's not decentralized. If the .kz operator has a heavy hand, it affects people in their legal jurisdiction but not anyone else, and that's true of everything else as well. A system which doesn't allow enforcement of legal requirements will be blocked, and this isn't a technical problem with a technical solution no matter what the blockchain salespeople say.


It's centralized because owner of `.` can take over `kz.`


That’s a pretty weak claim: ICANN is an international organization with substantial oversight. More importantly, they’re a soft power: if they tried to hijack .kz without a very good reason, they would very quickly not be running a globally-trusted root as ISPs around the world switched away.


Those are things you could say about any centralized system.

They don't change the fact of the centralized structure.


Out of curiosity, what’s your proposed functionally-equivalent replacement?


I propose we replace all descriptions of DNS as decentralized with descriptions that call DNS centralized. Starting with your comments in this thread.


So … you don’t have an alternative?


Whether I have an alternative or not, I'm not going to get side-tracked by stupid questions about alternatives.


It’s not stupid to try to get you to explain the definition of “decentralized” you’re using to exclude a system commonly described as decentralized or get you to logically reason through the implications of that definition.


Yeah, I wouldn't have called it stupid, if that had been what you asked.


Until then, you’ve been apparently getting side-tracked.


It's not decentralized. There's a center.

The center used to be Jon Postel, now it's ICANN.

It's a hierarchy of delegated authority (aka "bailiwick"... aka "domain") emanating from the center.

> no one entity owns the entire data set

Because of the delegated authority.

But the authority is all delegated from a center.

It means there's a central point where names can be removed.

The central point owns as much of the data set as they choose to own.


> But the authority is all delegated from a center.

Yes, but authority can be delegated recursively, thus disempowering the top of the tree (ICANN) over the leaf nodes (the rest of us). The root server could technically lie and say thepiratebay.org.'s A record is 127.0.0.1, but recursive resolution (from right to left) prevents that so that only a direct parent could lie.

So because you can get names from pretty much everyone and only a direct parent has authority over you, power is very balanced overall.


> only a direct parent has authority over you, power is very balanced overall

That's silly. There's a single organization in charge of everything, with total enforcement power.

> The root server could technically lie and say thepiratebay.org.'s A record is 127.0.0.1

They don't have to "lie."

The people who own `.` can simply issue public orders to the people who own `org.`. They can set policies, demand payment, demand the removal of hosts, and so on.

Since `.` has the power to remove the `org.` delegation, the owners of `org.` are forced to comply.


> The people who own `.` can simply issue public orders to the people who own `org.`. They can set policies, demand payment, demand the removal of hosts, and so on.

Maybe I've missed a few episodes in the DNS wars saga. Do you have a few links on this topic? I wasn't aware that ICANN felt powerful enough to threaten to take entire TLDs off the root zone.


I'm not talking about the feelings of anyone involved in ICANN. I'm talking about what the technological function of the DNS root is.

> I wasn't aware that ICANN felt powerful enough to threaten to take entire TLDs off the root zone.

What do you think happens if you don't pay ICANN the fee for a gTLD?

Of course, they're not going to remove `org.` -- and `org.` isn't going to defy their rules.


> I'm talking about what the technological function of the DNS root is.

I somewhat agree with that argument; I'd be much happier with a public-key delegation process like in the GNU Name System. But to be fair, given the technical constraints of the time, I would argue DNS is as close as you can get to an anarchist protocol: it appears centralized on the outside, but when you dig into the technical details it was explicitly designed to decentralize power away from the hosts-file maintainers.

> Of course, they're not going to remove `org.` -- and `org.` isn't going to defy their rules.

Do you have examples of .org being ordered by ICANN to take certain actions? Or .org domains being seized? wikileaks.org and thepiratebay.org, which arguably a lot of people have tried to take down over the years, are still around and well.


> i would argue DNS is as close as you can get to an anarchist protocol: it appears centralized on the outside, but when you dig into the technical details it was explicitly designed to decentralize powers from the hands of the hosts file maintainers

_Powers_ aren't decentralized at all. Only administration and hosting are decentralized. Every single power has to be delegated from above, and continuously renewed. That is what makes it possible to cut people off for non-payment.

> Do you have examples of .org being ordered by ICANN to take certain actions?

ICANN's "Registrar Compliance Program" has helpfully published this powerpoint-style summary of their compliance requirements:

https://www.icann.org/en/system/files/files/registrar-compli...

However, I would say that the most important example is simply that ICANN charges an annual fee for each and every domain!

The power to make the rules is the power to demand the rents.


> The central point owns as much of the data set as they choose to own.

Less inaccurately, ICANN has social consensus to be trusted with the root delegations. If they abuse that position, that consensus would very quickly disappear because they don’t own any of it.


That is the kind of thing that one says about a centralized system when they want to defend its center.


So … what’s your alternative?


I'm not here to debate DNS.

I'm only pointing out that DNS delegation is structured around a central `.` zone.

The central node is all-powerful. It can remove any name by removing the top level delegation for that name.


Try running your own root server, as I do. You'll reevaluate what you just said.


Sure, and you can run your own internet too, incompatibly reusing the same addresses IANA assigns. I guess that proves IP addresses aren't centrally allocated.


This is a good list for running DNS servers for publicly resolvable domains. Another common reason is to run a server for internal domains only accessible via VPN.
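For example, a minimal unbound snippet for an internal-only zone (the zone name and addresses here are made up for illustration):

    server:
        # Serve an internal zone locally; queries for it never leave the box
        local-zone: "corp.example." static
        local-data: "wiki.corp.example. 3600 IN A 10.0.10.5"
        local-data: "git.corp.example. 3600 IN A 10.0.10.6"
        # Only answer clients arriving over the VPN range
        access-control: 10.0.0.0/8 allow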


I started running DNS servers 25 years ago, so it doesn't take much mental effort for me. It is easy for me to do, and I have full control over it. I can do it in my sleep.

The biggest effort for me was about 24 years ago, when BIND 8 replaced BIND 4.

Probably the last things I had to learn were adding AAAA records (easy enough) and SPF records (yes, I run my own personal postfix as well).
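Both are one-liners in a zone file these days (example.com and the addresses are placeholders; SPF is published as a TXT record per RFC 7208):

    ; IPv6 address for the mail host
    mail.example.com.  3600  IN  AAAA  2001:db8::25
    ; SPF policy: mail may come from the MX hosts, everything else fails
    example.com.       3600  IN  TXT   "v=spf1 mx -all"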


Because you can self-host a lot of FLOSS software, and you don't need to call it Web3.


I run dnsmasq backed by dnscrypt-proxy, and haven't had issues.
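In case anyone wants to replicate it, a sketch of the glue, assuming dnscrypt-proxy is moved to an arbitrary local port (5300 here) so dnsmasq can own port 53:

    # /etc/dnscrypt-proxy/dnscrypt-proxy.toml
    listen_addresses = ['127.0.0.1:5300']

    # /etc/dnsmasq.conf
    no-resolv                # ignore upstreams from /etc/resolv.conf
    server=127.0.0.1#5300    # hand every query to dnscrypt-proxy
    cache-size=1000          # keep a local cache in front of the encrypted hop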


Report: I just installed a DNS resolver on my laptop after reading this thread. I'm surprised that every website I visit feels 2x faster. Try it.
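If you'd rather measure than feel it, dig prints the lookup latency; here 127.0.0.1 assumes your new resolver listens locally, and 192.0.2.1 is a placeholder for your old upstream:

    # Compare the local resolver against the old upstream:
    dig @127.0.0.1 example.com | grep "Query time"
    dig @192.0.2.1 example.com | grep "Query time"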


[flagged]


IPv4 vs IPv6 doesn't change anything in this regard.


> Well lets brush over the limited nature of IPv4 and focus on IPv6 for this to apply.

Can you explain what you think would be different if IPv4 were gone or IPv6 had never existed? I can't think of a situation where that would matter: you'd be resolving AAAA records instead of A, but the logical DNS hierarchy would be exactly the same.
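Concretely, both lookups walk the same delegation chain; only the record type in the question differs:

    # Same root servers, same TLD servers, same authoritative servers:
    dig +trace example.com A
    dig +trace example.com AAAA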


> I wrote a custom DNS server for mess with dns

Weird flex but ok


Another point I would add is DNSSEC. With your own authoritative server you actually own the keys and don't have to trust another company.

What's also not mentioned is the possibility of running your own hidden master and using a DNS provider (or multiple!) as slaves. This way you have full control over your zone, but you don't have to run your own network of nameservers.
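As a sketch, the hidden-master side in nsd.conf looks roughly like this (192.0.2.53 stands in for the provider's transfer address; in production you'd want a TSIG key rather than NOKEY):

    zone:
        name: "example.com"
        zonefile: "example.com.zone"
        # let the provider's servers pull the zone and hear about updates
        notify: 192.0.2.53 NOKEY
        provide-xfr: 192.0.2.53 NOKEY

Only the provider's nameservers appear in the zone's NS records, so the master itself never needs to be reachable by the public.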


Since almost nobody runs DNSSEC (try a list of popular domains, like the Moz 500, and `dig ds $domain +short`), this is unlikely to be a big issue for most people. There's also practically no upside to running DNSSEC, and a lot of downside (see: Slack disappearing from the Internet for a whole day).
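Easy to check for yourself with a loop along these lines (popular.txt being whatever domain list you have on hand):

    # Print only the domains that have a DS record in the parent zone,
    # i.e. the ones with DNSSEC actually enabled:
    while read -r domain; do
        [ -n "$(dig ds "$domain" +short)" ] && echo "$domain"
    done < popular.txt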


I take issue with your characterization of Slack going offline as a downside.


As I wrote a month ago (https://news.ycombinator.com/item?id=29378633#29385866):

The problem for Slack was not caused by DNSSEC directly. It was caused by:

1. A bug in Route 53 which caused wildcard records not to work with DNSSEC signing. Anyone not using Route 53 would not have had any problems with DNSSEC.

2. Slack decided to revert the DNSSEC rollout, but botched the process badly, effectively locking themselves in the trunk and throwing away the key. If they hadn’t tried to revert the DNSSEC rollout, or if they had been a bit more deliberate and careful while doing it, this would not have happened.

(Also, beyond DNSSEC solving the obvious problem of having no way to authenticate DNS responses, you can’t use newer e-mail security standards like DANE without DNSSEC. MTA-STS is an obvious ugly hack, requiring a web server in order to run an e-mail server.)
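(For the curious, DANE for mail is one extra record published next to the MX; the digest below is a placeholder:)

    ; DANE for SMTP: 3 1 1 = DANE-EE, SPKI, SHA-256
    _25._tcp.mail.example.com. IN TLSA 3 1 1 <sha256-of-server-pubkey>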


I don't run DNSSEC either, but as you know, the Slack issue was caused by a haphazard implementation, followed by an apparently panicked, incorrect rollback. They would have had basically the same issue if they had added incorrect AAAA records with long TTLs. There may well be reasons not to deploy IPv6, but I don't think this is a good one. Similarly, while I agree that DNSSEC offers few upsides, I don't think this particular example is a good one against DNSSEC itself.


To quote from the Slack engineering report

> This indicated there was likely a problem with the ‘*.slack.com’ wildcard record since we didn’t have a wildcard record in any of the other domains where we had rolled out DNSSEC on

I'm not going to stick my hand in either camp for the sake of this discussion, but dynamic/wildcard DNS records are exactly the type of thing I'd suspect DNSSEC to have trouble with.
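There's a real basis for that suspicion: an answer synthesized from a wildcard has to carry extra NSEC/NSEC3 proofs that no closer match exists, which is exactly the kind of corner case an implementation can get wrong. You can see the extra records in any signed zone that has a wildcard (example.com standing in for such a zone):

    # +dnssec requests signatures and proofs alongside the answer; a
    # wildcard-synthesized response includes an NSEC or NSEC3 record
    # proving no more-specific name existed:
    dig +dnssec some-random-label.example.com A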


I, on the other hand, can speak from experience, and I say that where I work we currently have over 100 domains with DNSSEC and a wildcard record, and they all work just fine.


I wasn't implying that wildcard records are something entirely incompatible with DNSSEC, more that certain nameserver implementations could potentially have trouble with them.


Your guess was proven correct, as it was indeed a bug in Route 53 which broke Slack. But you did not write “certain DNSSEC implementations”, you wrote “DNSSEC”, which I interpreted as implying that DNSSEC itself, inherently, had problems with wildcard records. But my experience told me otherwise, hence my comment.


Fair enough


There are some fairly good arguments against using DNSSEC. I think the author of this post is on HN.

https://sockpuppet.org/blog/2015/01/15/against-dnssec/


DNS is about as decentralized as TLDs are... which is to say, not at all. To make it decentralized you'd specifically have to run your own DNS and use upstream providers that allow the use of open TLDs.


What is your definition of decentralized? There's no one system which controls all of the DNS records or makes changes to them, and the root TLDs are run by an international consortium which has limited ability to force changes. Even if the U.S. government gets into a shooting war with Russia, there's no plausible outcome where .ru records come under the control of the U.S. government or vice versa.

That's about as decentralized as a real system gets while still being usable.


Decentralization usually requires some incentive for people to run the system, and no single entity in charge of it. Since most ISPs' incentive is to redirect failed lookups incorrectly, and the DNS root servers only point to centralized TLD providers that require payment for registration, no, it isn't decentralized by any standard compared to the decentralization of IPFS using Bitcoin.


> decentralization of DNS or Bitcoin.

Uh, this is DNS. Bitcoin has a similar but far more expensive mechanism for social consensus, but more importantly it also doesn’t have equivalent functionality for a global namespace. That’s the underlying problem here: the purpose of the DNS system is to map human-meaningful names to addresses, and that requires a mechanism for guaranteeing uniqueness and dealing with abuse. Bitcoin can’t do that, and things like ENS have the usual problems handling abuse.


A decentralized system won't ever have a central place for a privileged censor role that can "handle abuse."


What definition are you using for decentralized? Note that I didn’t say anything about a privileged central role; as you learn about how decentralized systems like DNS, email, the web, etc. work, note how often that means something like selecting the work of a third-party aligned with your views (e.g. email server operators use blocklists maintained under policies they agree with).

This is why, for example, ICANN doesn’t have the ability to arbitrarily transfer domains: their operation of the root servers is limited to the terms agreed to, and if they tried to abuse their technical access they’d be replaced by an alt root. The Russian government has already done this for political reasons, and it’s not especially hard to do given a reason.


I'm talking about the structure of the technology.

What you are replying with is a political science style defense of the idea of centralized power.

Your argument is like the political science idea of the benevolent dictator, who is vulnerable to coup unless they can keep the factions happy.

Political science also has the idea that an institutional democracy can be more resilient than that to such simple attacks as a coup by a would-be dictator.

When I'm talking about decentralized I'm saying that the technology doesn't even have a place for the benevolent dictator to exist. Not that the node is constrained politically. The node is not even there.

Centralization is easy to build and provides easy "solutions" to problems like abuse. Centralization simply reduces all problems to the one problem of choosing the center. Like you say: "selecting the work of a third-party aligned with your views."

Centralization works, insofar as it does, because the human beings controlling the center have to maintain some kind of political alliance in outside society to maintain their spot.

Fair enough, I guess, for some purposes. But it's like the CAP theorem: sometimes you want a different set of tradeoffs for a different purpose. Not all systems work by choosing a center. Some systems fragment; some unify by non-political (mathematical) means.


> When I'm talking about decentralized I'm saying that the technology doesn't even have a place for the benevolent dictator to exist. Not that the node is constrained politically. The node is not even there.

This is why I asked for your definition and alternatives: that isn’t possible, or desirable, for a system like DNS, or for almost anything else. At some point you need a query for your bank to go to the intended party, not a scammer, and that means you usually end up relying on third parties. A system that is decentralized in the standard sense, like DNS, handles the root issue through social consensus, which makes abuse obvious and limits it to the period before the untrustworthy party stops being consulted.


DNS is decentralized in only weak ways, NOT with respect to the naming authority.

I mean the DNS root, which holds all authority, is not decentralized.

You only call it decentralized by looking at aspects of the system other than how naming authority is structured!

I think nobody cares about your "standard" definition at all. The naming authority is the part of the system that matters (and is big business).


> The Russian government has already done this for political reasons

Perils of a centralized system!


You mean decentralized, right? It only affects people who trust Russian ISPs. There isn’t a magical technical fix for what a sovereign authority can do within its jurisdiction.


It affects users of their DNS servers, which isn't the same as people who trust them.

And it's because of the centralized structure of DNS technology that it's possible for Russian sovereign orders to propagate DNS information to those users.

Just like the political science benevolent dictator who is replaced in a coup that retains the previously built chain of command.



