Paul Vixie thinks more people should be running their own DNS servers (businessinsider.com)
204 points by indigodaddy on March 30, 2019 | 151 comments



More people should be running their own mail servers, their own web servers, their own IRC servers, etc.

But I don't think we are ever going back in that direction. The benefits of running these services locally are not enough to justify the trouble.

Performance? Due to DNS caching at the resolver level, it is probably faster to use Google's 8.8.8.8 or Cloudflare's 1.1.1.1 than anything local (where all DNS requests start as a miss).

Privacy? With DNS over TLS/DNS over HTTPS, your ISPs can't see what you are doing. If you run DNS locally, they can. Yes, they will see all the requests your resolver makes to the authoritative DNS servers.

Security? Some good resolvers, like Quad9 or CleanBrowsing, will block malicious domains. CleanBrowsing will also help block adult content if you have kids. I don't think maintaining such control is practical for most people (pi-hole helps, but it's still hard to keep it updated and to find good enough databases to use).

I would love a de-centralized web, but it is pretty hard to go back.


> Performance? Due to DNS caching at the resolver level,

Not sure about the validity of these arguments. Yes, Google is likely to have more cached data than you, but they can also be half a country or two away.

In my experience caches matter less than one might think, since the more popular destinations are usually the low-latency ones anyway.

> Privacy? With DNS over TLS/DNS over HTTPS, your ISPs can't see

Let's agree on one thing: That surf data is a lot more valuable to Google than most other actors, including your ISP, because they're the ones in a position to monetize it.

Your ISP has more data than they know what to do with anyway. Should they try to monetize it despite the murky legal waters (they really wouldn't want to knowingly help copyright infringement, for example), the realistic option would be for them to sell it to someone very much like Google. It should not come as a surprise that the latter is happy to shortcut the process.


>Yes, Google is likely to have more cached data than you, but they can also be half a country or two away.

Google has edge nodes in pretty much every ISP of every country except China. A query to Google's public DNS never leaves your ISP, much less your country.

>When clients send queries to Google Public DNS, they are routed to the nearest location advertising the anycast address used (8.8.8.8, 8.8.4.4, or one of the IPv6 addresses in 2001:4860:4860::). The specific locations advertising these anycast addresses change due to network conditions and traffic load, and include nearly all of the Core data centers and Edge Points of Presence (PoPs) in the Google Edge Network.

https://developers.google.com/speed/public-dns/faq


Single data point, but I've been running a home DNS server (bind) for many years; it's set to be authoritative for the .local domain and caching for everything else (except for major tracking and advertising sites, which it blackholes).

For hits that are in the cache (the usual case) it's obviously faster than going out to the 'net. The black-holing combined with ad-blockers means browsing is a lot faster and considerably more peaceful.

In terms of maintenance, it's no real effort other than updating every few years (being behind NAT, the security risks aren't huge) and it means the entire household sees the benefit (plus access to webservers/wikis etc on the .local domain).

Obviously the downside is it needs to be on something that runs 24x7 and a modicum of IT skills is required to set it up. One other catch is that your ISP might block you for not using their DNS; BT (UK ISP) did this, but it is possible to turn off this 'security feature' via a rather obfuscated web page (may have changed since I last did it).
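
In case anyone wants to replicate this, a minimal named.conf sketch of that kind of setup -- the zone names, file paths and LAN subnet here are illustrative, not the poster's actual config:

    // named.conf sketch -- a home caching resolver that is also
    // authoritative for a local zone and blackholes ad domains
    options {
        recursion yes;                      // cache + recurse for the LAN
        allow-query { 127.0.0.1; 192.168.0.0/24; };  // example subnet
    };
    zone "local" {                          // the home zone
        type master;
        file "/etc/bind/db.home";
    };
    zone "doubleclick.net" {                // blackhole a tracking domain
        type master;                        // by answering from an empty
        file "/etc/bind/db.empty";          // zone file
    };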


The .local TLD is reserved for mDNS. You may run into devices that completely refuse to resolve hosts in it using regular DNS.


Indeed, 'zero-configuration' only applies to users.


> One other catch is that your ISP might block you for not using their DNS; BT (UK ISP) did this

Erm ... what? How does that work? If they don't see DNS requests from you at their resolver for a week, they disable your connection?!


What happened is that going online resulted in everything being redirected to a BT page saying you're not using our DNS and telling you to change settings so you do. Some googling revealed a few people who'd had the same issue and found the (obscure) page that allowed you to undo the block.

I assume they detected it simply by seeing DNS queries going to non-BT servers. Note this was a few years ago when it was pretty common for PC malware to hijack DNS requests, so could be it's changed in the meantime.

N.B. I also recall BT redirecting requests for non-existent domains to some partner of theirs, I assume experimentally as I haven't seen or heard of that for a while.


"N.B. I also recall BT redirecting requests for non-existent domains to some partner of theirs, I assume experimentally as I haven't seen or heard of that for a while"

They are still doing this. I ran into it just yesterday. They do however make it very easy (click a couple of links) to turn it off.


Still, interception of communications is a serious crime (unless your monopoly-scale ISP does it, of course).

Thanks for the update (if somewhat depressing to know they still get away with it).


Did you change DNS using the BT Hub?

Mine forces me to use their DNS, would love to turn it off at a router level. I know I can buy a new router but I can't justify that right now.


The workaround for this (unfortunately) is to use another device to dish out DHCP. I have set up my Pi-hole to serve both DHCP and DNS.


Never used the BT hub, I've always bought my own router (I've also had a linux box as a firewall between the router and home network since the days of dial-up).


> Performance? Due to DNS caching at the resolver level, it is probably faster to use Google's 8.8.8.8 or CloudFlare's 1.1.1.1, than anything local (where all dns requests are a MISS).

It's perfectly possible to recursive resolve your misses to Google's DNS server if you want.


> It's perfectly possible to recursive resolve your misses to Google's DNS server if you want.

I don't think that's how recursive name servers work. It's been a while since I've had to reason about this but, for example, if 'www.google.com' is not in the cache, it contacts a root domain server, then a 'com' domain server, then a 'google.com' domain server, which finally answers the query; that answer then gets cached by the recursive name server.

Eventually the recursive name server on your family or organization's local LAN will have a decent cache hit ratio, and the round trip times to your local recursive server could be potentially an order of magnitude (2ms vs. 20ms) faster than talking to Google or CloudFlare.

It's possible that your ISP can still know which DNS lookups you're doing by snooping the traffic between your recursive DNS server and other DNS servers on the net, but I'm guessing that they're not doing this because it's not as easy as just ingesting their own DNS logs.


What you said is incorrect.

Almost every DNS server has the ability to set a DNS forwarder instead of using root hints.
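
In BIND, for example, that's a couple of lines in named.conf; a hedged fragment (the addresses are Google's public resolvers):

    // named.conf fragment -- forward cache misses to a public resolver
    options {
        recursion yes;
        forwarders { 8.8.8.8; 8.8.4.4; };  // upstream resolvers
        forward only;                      // never fall back to iterating
    };                                     // from the root hints ourselves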


The only chance we have for people to run their own services is if we make it dead easy to do so and the advantages are clearly communicated.

Unfortunately, that hasn’t been open source’s forte historically.


Until one of the big players decides they don't like your domain or IP range. I've been running my own email/web server for years, but even if you'd fully automate installation, there's a can of worms full of fun things like spam, DoS and shenanigans by the big players that really are not worth your time for just one person's email setup.

I suppose we could come up with some easy configurable templates that would automatically install servers for privacy conscious individuals, but if at some point something goes wrong, most people are going to be stuck without service and no easy fix.


> ”The only chance we have for people to run their own services is if we make it dead easy”

In the case of DNS, it’s dead easy if the server is built in to home routers.

Many of them do, in fact, ship with local DNS already - it’s just that many users override it with 8.8.8.8 or whatever.


The DNS servers shipped in home routers are usually set up as caching DNS proxies by default, and will forward queries for cache misses to the DNS resolvers or caching proxies that your ISP is running, rather than the DNS servers in home routers doing the full resolution themselves. So it’s not what I would consider “true” “local DNS”.

However, even if we did switch everyone to running their own DNS resolvers, what would happen then? Without the massive shared caching we have today, the load on the authoritative DNS servers for each domain would significantly increase. Even with client-local caching.

So the number of companies running their own authoritative DNS servers would probably decrease — more of them would be using hosted DNS provided by a third party. A lot of companies already host the authoritative DNS of their domains with a big third party. Including myself — I use Cloudflare for all my sites because of the HTTP caching and other things they offer on top of hosted DNS.

Increased load on authoritative servers will likely lead to further centralization of DNS hosting with a few big providers IMO. Even a lot of companies that specialize in hosting DNS might not be able to handle the load when everyone is running their own resolver. Only the big DNS hosting companies will be able to afford it. So we end up with everyone hosting DNS with a few DNS hosting providers — Cloudflare, Amazon Route 53, etc.

So by decentralizing the DNS resolvers that clients use, you push companies to centralize the authoritative DNS servers further. The net effect is that you will only have shifted where in the resolution chain the queries centralize.

And let’s say that this happens and Google sees the amount of queries received by 8.8.8.8 drop to near zero overnight. Odds are that if Google values the data they gain from clients using these resolvers, they will make a big push to ensure that they host the DNS for as many companies as possible, so that they still end up with their hands on the query data. (And Google does value this data — otherwise they wouldn’t still be offering public DNS query servers.)

And also, what about the root servers? Will they be able to handle the massive increase in load? And won’t the root server traffic be subject to surveillance by state actors wanting to know what sites someone is browsing?

DNS is kind of funny because in a way it is both centralized and decentralized at the same time. But if you want the web to be truly decentralized I believe for the reasons stated above that having people run their own DNS resolvers is not part of the solution.

You are going to have to replace DNS altogether. Realistically I don’t think DNS is going away anytime soon. The web and the internet in general is too reliant on it. But I really wish we could.


Why not sell a NAS with these services installed?


Performance for Google or Cloudflare isn't going to be better. Where do you get that idea? Do you think all DNS simply lives in their caches?

"your ISPs can't see what you are doing". If they're analyzing traffic, they can, and if they're doing that, they can see to whom I'm connecting, anyway. But you say nothing about why we should trust Google or Cloudflare. I trust my ISP to be big and dumb. I trust Goole and Cloudflare to want to make money.


Yes. Performance of Google / Cloudflare DNS will be better simply because so many other people are using them: any common DNS query result will probably already be cached...

FYI I run my own DNS server anyway.


But what is common for you will usually be cached already in your own server as well, so most requests will still be cache hits--and all those hits avoid the ~ 20 to 100 ms round trip to the internet.

Take news.ycombinator.com, for example: The A record has a TTL of 300 seconds. So, after the first visit, which probably will take a bit longer than asking Google/CF, for every request in the next five minutes, you will have a reduced lookup latency.

Then, after five minutes, the next lookup will go out to the authoritative server. But mind you that the NS records for ycombinator.com have a TTL of two days, so those are still cached, and the refresh is indeed a single request to the authoritative server--which more often than not takes about as long as a cache hit from the recursive Google/CF resolvers (it's also one round trip to the internet ...).

And then, there is stuff like BIND's prefetch mechanism which will start the refresh of an expiring record when it sees a query for that record shortly before its expiry: That query is answered immediately from the cache, and a refresh is started in the background, so that the refreshed record should arrive in the cache before the old version expires ... thus completely eliminating the lookup latency for often-used records. Though you might need to tune it to trigger earlier for your personal use than in the default configuration ...
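
For reference, turning that on looks something like this in BIND 9.10+ -- the values here are illustrative, tuned more aggressively than the defaults of 2 and 9 seconds:

    // named.conf fragment -- tune prefetch for a small household cache
    options {
        prefetch 10 60;  // start a background refresh when a query
                         // arrives with <=10s of TTL left, for records
                         // whose original TTL was at least 60s
    };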


True, true. I did not realize BIND had prefetch now. My internal nameserver was running an older BIND (from Debian jessie?). I just upgraded to 9.10...


Google has spiders that crawl the entire web. Cloudflare RUNS DNS for a bunch of the web.

Yes, a lot of DNS does live in their cache, quite literally.


> I would love a de-centralized web, but it is pretty hard to go back.

As the powers that be continue to centralize and exert control in a negative way, I have a feeling the pendulum will swing the other way once people get annoyed with it.

Kids are already using VPNs to circumvent controls.

Need to make a cyberpunk-esque decentralization kit for the next generation to adopt.


That would be really cool!

I feel like one of the surprisingly-big barriers is just the difficulty of getting a static IP address assigned to your home. If you could do that, then (I think) they could run everything from a Raspberry Pi: their own website, hosting their own email, etc etc.

(It's actually not much harder technically to set this all up on a VPS, but then the kid has to put a monthly fee on a card -- probably a big barrier for parents.)

Am I right, or is there an easy way to get around the static IP issue?


I don't have a static IP at home. It changes every few weeks. I have a hack where a cron job uploads my home IP to a cloud server so I can know what it is when the IP changes.

Maybe a Distributed Hash Table DNS interface is in order? I think it could work if you cache your peers, and reach out to let them know your current state. Even during the Venezuela blackouts, not all IPs went dark.


That would be really cool! The question I see is how to handle resolutions where two people both claim to be example.com. Maybe you could use some cryptography to make sure it's always whoever first claims a domain.


> I have a hack where a cron job uploads my home IP to a cloud server so I can know what it is when the IP changes.

I don't have a clue what a "Distributed Hash Table DNS interface" is, but I'll note that a solution [0] to this problem has been around for over two decades.

[0]: https://tools.ietf.org/html/rfc2136
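
For anyone curious, a minimal sketch of an RFC 2136 dynamic update using BIND's nsupdate tool; the server, zone, hostname, key path, and IP-discovery service are all hypothetical:

    #!/bin/sh
    # run from cron; pushes the current public IP into your own zone
    IP=$(curl -s https://ifconfig.me)    # discover current public address
    {
      echo "server ns1.example.net"
      echo "zone example.net"
      echo "update delete home.example.net A"
      echo "update add home.example.net 300 A $IP"
      echo "send"
    } | nsupdate -k /etc/dyndns.key      # TSIG key shared with the server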


If you don't have distributed backups handled, then you're just putting most users in a worse situation. Now they can lose important parts of their digital life in one disk crash.


Great point, though I'd point out backups don't have to be distributed or cloud-based.

I was thinking it would be interesting to have a protocol (probably built on bittorrent) that encrypts your data and backs it up onto others' servers in return for storing some of their encrypted backups on yours.


Pihole's default lists and using opendns upstream is pretty solid. In my experience their databases are thorough.


Regarding mailservers, it’s feasible to run one at home.

But many people rely on spamlists, i.e. lists of IPs known to relay spam. The problem with this is twofold:

1. Some people have simply assumed the authority to decide who sends spam and who does not. If you get on one of those lists, usually you have to get in touch and pay to get out.

2. Such people usually include residential ip subnets by default, for no technical reason whatsoever.

So in the end my mailserver at home has been in a spamlist for years, even though I never relayed spam and was very careful to configure outbound relay authentication/authorisation, SPF, DKIM etc.
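
For reference, the SPF part at least is only a one-line TXT record in your zone; a minimal sketch (domain and address are hypothetical):

    ; zone file fragment -- allow the MX host, the apex A record and one
    ; extra address to send mail; hard-fail ("-all") everything else
    example.net.  3600  IN  TXT  "v=spf1 mx a ip4:203.0.113.10 -all"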


> 2. Such people usually include residential ip subnets by default, for no technical reason whatsoever.

Are you talking about residential or about dynamic? Because there kinda is a reason for this for dynamic addresses (PC malware sending spam, and the impossibility of listing the particular affected PC because it's constantly changing addresses, so you can only block all the addresses those PCs could be using).

If you do have static addresses, whether residential or not, those should not be listed in dialup block lists.

If your home internet connection has dynamic addresses, you still can run your mail server at home by renting some tiny VPS and tunneling its addresses to your home server ...


Where do you rent the VPS? Example from personal experience: Yahoo and Hotmail block DigitalOcean IPs outright. At least they have the decency to reject your mail at delivery, not spamhole it.

Practical DIY SMTP is a lost battle. Unless you're in it for the experience of the hosting itself.


> Where do you rent the VPS? Example from personal experience: Yahoo and Hotmail block DigitalOcean IPs outright. At least they have the decency to reject your mail at delivery, not spamhole it.

Any one of the thousands of VPS hosters that are not one of the half dozen huge "cloud server" companies?

> Practical DIY SMTP is a lost battle. Unless you're in it for the experience of the hosting itself.

No, it's very much not, it works perfectly fine. Or at least well enough--arguably, you should be able to send directly from dialup hosts with a well-established domain and SPF, so things aren't as good as they could be, but far from what some people claim.


> If you do have static addresses, whether residential or not, those should not be listed in dialup block lists.

Indeed my residential internet connection has a static address but it still gets flagged for spam because it’s inside a residential subnet.


> because it’s inside a residential subnet.

As in? I mean, what makes it a "residential subnet"? Have you tried talking to your ISP about this?


The problem is not on my isp side. It’s on the spamlist side.


> Performance? Due to DNS caching at the resolver level, it is probably faster to use Google's 8.8.8.8 or CloudFlare's 1.1.1.1, than anything local (where all dns requests are a MISS).

Some resolvers (like unbound) can be configured to prefetch cached entries before they expire. And in any case, if you use something locally you are probably going to configure it to forward misses to one of the "big guys".
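
A hedged unbound.conf fragment showing both of those setups (the upstream address is just an example):

    # unbound.conf fragment
    server:
        prefetch: yes        # refresh cache entries that get queried in
                             # the last 10% of their TTL
        prefetch-key: yes    # fetch DNSKEY records early too (DNSSEC)

    forward-zone:            # optional: send misses to a big resolver
        name: "."
        forward-addr: 1.1.1.1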


>With DNS over TLS/DNS over HTTPS, your ISPs can't see what you are doing.

Is this true? They still can see what IPs you're connecting to can't they?


Yes, they can always see the metadata. To, from, ports, bytes sent, packets, time, duration of the flow, etc.


Given so much is (sadly) behind cloudflare or on AWS, I'm not sure that helps them a huge amount without seeing actual packet contents


> Privacy? With DNS over TLS/DNS over HTTPS, your ISPs can't see what you are doing. If you run DNS locally, they can.

Use dnscrypt-proxy. It acts as a DNS forwarder, cache and ad blocker for your entire network whilst also encrypting your lookups to 8.8.8.8 and 1.1. (Fun fact: did you know 1.1 is short for Cloudflare's 1.0.0.1, BTW?)
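
A minimal dnscrypt-proxy.toml sketch (v2 syntax; the listen address is an example, and the server names are entries from the project's public resolver list):

    # dnscrypt-proxy.toml fragment
    listen_addresses = ['192.168.1.1:53']    # serve the whole LAN
    server_names = ['cloudflare', 'google']  # encrypted upstreams
    cache = true                             # built-in DNS cache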


Google's cache only exists at local nodes in the cluster behind 8.8.8.8; the chances of you hitting the same node behind 8.8.8.8 for domains less popular than Facebook are slim.


Won't IPv6 make this easier? Isn't the primary reason people don't run their own servers now because of NAT?


The other reason is that maintaining a server and its services is sometimes a full-time job :/


This is anecdotal but I decided to host my own sites. I bought the cheapest droplet from digital ocean that I could and I set it up running Fedora. Then I installed Apache, MySQL and PHP so I could run some wordpress sites.

The server kept running out of memory and shutting down MySQL so my sites stopped working. I started to learn how to read logs and saw that there is a huge amount of malicious activity directed at my server all the time.

Yesterday I installed Fail2ban which meant installing postfix. I got it working (took most of the day) but I can't send emails to my gmail account because I need to set up dkim and dmarc and other stuff. I have a list. But first I have to learn how to do all this stuff.

I use Fedora every day at work but I'm obviously no sysadmin and you are so right. All this takes hours and hours to learn and then to stay on top of it. When I was on shared hosting I had a lot less control and options but it was a lot easier as well.


Move MySQL to a separate droplet (Vultr has one for $2.50); otherwise you have to change the MySQL defaults to get things to fit. Great tutorials are out there; in general, reduce your workers/processes.
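
If it helps, a hedged my.cnf sketch of the kind of defaults people shrink on small droplets; the values are illustrative, tune them to your workload:

    # /etc/my.cnf fragment -- fit MySQL into a 512MB droplet
    [mysqld]
    innodb_buffer_pool_size = 64M   # default 128M is a lot on a small box
    max_connections = 30            # default 151 reserves too much memory
    performance_schema = OFF        # saves tens of MB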

Why do you need dkim or dmarc to send to gmail? If you send a test php mail does gmail pick it up?


If I send a test email with postfix, gmail rejects it and gives me a link to their page explaining why I need to add a bunch of stuff so they know I'm not spamming or phishing. Specifically I get this:

gmail-smtp-in.l.google.com[173.194.207.27] said: 550-5.7.1 This message does not have authentication information or fails to pass 550-5.7.1 authentication checks. To best protect our users from spam, the 550-5.7.1 message has been blocked. Please visit 550-5.7.1 https://support.google.com/mail/answer/81126#authentication for more 550 5.7.1 information. d203si1756652qkb.228 - gsmtp (in reply to end of DATA command)

There are tons of guides on troubleshooting MySQL resource issues. Are they great? I don't know. Have I tried what is mentioned in many of them? Yes. I still have issues. I don't think it's just MySQL though. I think it is a lot of little things that I'm slowly eliminating one by one.


I spent a while with loader.io and Digital Ocean. Putting MySQL on a separate box was huge and allowed me to accept 10x more traffic. This guide helped me back in the day.

http://digitaloceanvps.blogspot.com/2014/04/best-configurati...

For gmail, interesting... wonder why Google trusts me.


My browsing habits are pretty regular, though: there’s half a dozen sites I visit regularly and the rest are random blogs/etc. I suspect about 20% of the domains I visit account for 80% of my browsing traffic and there would be enormous benefits from a dns speed perspective to a local caching resolver.


Maybe. Many sites use pretty short TTLs. So your sites may be dropping out of your cache more frequently than you think.

Easy to test though. Run dnsmasq and enable query logging and see how often it’s having to forward requests. Then realize a recursing resolver is potentially having to go all the way to the TLDs for those requests.
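
A minimal dnsmasq.conf fragment for that experiment (the log path is an example):

    # dnsmasq.conf fragment -- measure your real cache hit rate
    log-queries                         # log each query, marked as
    log-facility=/var/log/dnsmasq.log   # answered locally or forwarded
    cache-size=10000                    # the default cache is only 150
                                        # entries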


unbound, for instance, provides 'cache-min-ttl', which allows you to prevent excessively small TTL values.
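
It's a one-liner in unbound.conf; the floor value here is illustrative, and note that clamping TTLs can serve stale answers for fast-changing records:

    # unbound.conf fragment
    server:
        cache-min-ttl: 300   # treat any TTL under 5 minutes as 5 minutes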


Is there a resolver with a Tor proxy? I love being able to use onion websites from the browser.


I'd really like it if there was an easy way for "ordinary" consumers to setup a web/email/etc. server on, say, an AWS or GCP instance. Wouldn't exactly be decentralized, but it'd give users more control.


For a self-hosted pi-hole DNS VPN: https://ba.net/adblock/vpn/doc/howto.html


It must handle backups and restores too, otherwise you're just making it too easy for folks to lose all their data.


With 8.8.8.8 Google sees all requests, which is probably the reason for its existence. How can an ISP not see what you are doing anyway? traceroute $addr obviously always includes ISP servers.

Unless you use a VPN, but that is a different story.


I've been thinking there could be a kind of NUC with cable or DSL or fiber adapter cards, and a bunch of basic services, with a good firewall and whatnot. And an update service, ideally free, for as forever as possible.


I run my own dns server with a forward rule to a local cloudflared dns proxy. ISPs can't see my queries.


Ironically, the instructions linked to in this article for running your own DNS server[1] suggest configuring it to forward all non-local queries to your ISP or Google DNS.

(It’s not clear to me whether Vixie is more bothered by the performance cost of using Google/Cloudflare/OpenDNS/etc or the loss of privacy.)

If you’re going to do that, you might as well use dnsmasq or just use your ISP’s servers directly. If your concern is privacy, you need to configure BIND to operate in recursive mode instead of as a forwarder (dnsmasq is a forwarder only, but you could use unbound if you don’t like BIND). But note that your ISP could in theory still snoop your recursive DNS queries. It all depends who you trust the least.

You could also, per my sibling comment, run dnsmasq locally and then run a recursive DNS server on a cloud server, using either a VPN or DOH in-between. That would give you a local cache with your own recursive DNS that your ISP can’t snoop. But do you trust your cloud provider? (Also make sure if you do this that you configure edns0 client subnet or your video streaming may break.)

[1] https://www.ionos.com/digitalguide/server/configuration/how-...


Well, there is dnscrypt. I use a local cache for fast revisits. Yeah, I'm not sure everyone having full DNS servers would be a good thing or even practical.


DNSCrypt only provides authentication, not confidentiality, and it’s only between the client and the recursive server. So it doesn’t address either the performance or the privacy concern of routing all your DNS through someone else’s recursive servers.

Edit: apparently it encrypts traffic as well:

https://dnscrypt.info/faq/

So it’s comparable to DoH which prevents your ISP from snooping but per my other comments here doesn’t address the privacy concern of now having to trust the upstream resolver.


Are you sure you're not thinking of DNSCurve? It doesn't provide confidentiality, but AFAICT DNSCrypt does.


You’re right. I went off the Wikipedia page for it, which says:

DNSCrypt wraps unmodified DNS traffic between a client and a DNS resolver in a cryptographic construction in order to detect forgery. Though it doesn't provide end-to-end security, it protects the local network against man-in-the-middle attacks.

https://en.wikipedia.org/wiki/DNSCrypt

But according to dnscrypt.info it’s encrypted.


DNSCrypt and DNSCurve both provide confidentiality (and are very similar to each other). The thing that doesn't provide confidentiality is DNSSEC.


DNSCurve does provide confidentiality, using x25519-xsalsa20poly1305.


I agree with Paul Vixie. The internet, IMO, is not a playground for large corporations.

What originally made the Internet amazing was the participatory nature of it. As it started 'standardizing' or 'accruing', autonomy was lost in the pursuit of efficiency.

Today, 2-3 corporations are just trying to own the internet, and this needs to stop. I favour a participatory Internet over what it is today.

Okay, so how can we make it happen?

- Make a decent DNS Server in Go/Rust/<Any-Clean-Coded-Implementation>.

- Make this embeddable in routers/open stacks, maybe in OpenWRT.

- Make it easier to define top level zones/domains in a modern, easy data format (maybe JSON/YAML). Make this an overlay/augment format.

- Publish bootstrap corpus of data for such independent DNS servers; No, they do not need to have all the root server content updated as frequently. It could be easy to sync this periodically with a git pull. This should be an internet-wide mirrored effort, like Bitcoin.

- Isolate oneself from people, arguments and organizations who want to use AWS/Google/<Insert-Popular-Provider-Of-Choice> because they work at scale and are cheap. One should know that to have an independent internet, the change starts with the self.

- Run the DNS server at home, in production, in the cloud, and protect the Internet.


What's wrong with opkg install unbound? It's robust, doesn't require maintenance, and is already available as an optional install in OpenWRT.

The reason people don't use it is probably just that it isn't default. Some captive portals mess with DNS resolution and it's probably easier for OpenWRT to just let them.
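
For anyone wanting to try it, roughly this (service name per the OpenWRT packaging of the time; details may differ between releases):

    # on the router
    opkg update
    opkg install unbound
    /etc/init.d/unbound enable
    /etc/init.d/unbound start
    # then point the router's dnsmasq or your clients at it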


> What's wrong with opkg install unbound? It's robust, doesn't require maintenance,

Good joke. It's written in C, so in addition to typical protocol/logic flaws, it'll have its share of security and memory leak problems. No maintenance? Have a look at https://nlnetlabs.nl/svn/unbound/tags/release-1.9.1/doc/Chan... and its security advisories... Regular updates are necessary.


Here is a list (and some discussion) of the 4 CVEs that unbound has had starting in 2011:

* https://nlnetlabs.nl/projects/unbound/security-advisories/

Nothing there looks particularly scary to me.


Congratulations, you've just spent your time, and your mobility.


Then he should have made it a lot easier to do so. The whole problem with DNS is the finicky configuration, it is about as tricky to set up properly as a mail server in spite of the outward simplicity. Mail, DNS and also WWW servers are the ideal components of federated systems but the degree to which you have to be a networking guru, security guy and systems administrator to keep all three up and running without issues over a longer period of time is such that many people will simply not take the trouble.


Isn't that more a problem of the implementations than of the DNS standard?



Ironically my ISP is using their DNS servers to block archive.is


Sure sucks to live in New Zealand right now, doesn't it?


I'm in NZ and the link works fine for me on my Slingshot connection. Probably an ISP-specific issue.


The block is a minor inconvenience. I'm okay with it as long as it is a temporary thing.


I'm not. It tells me I can't trust my phone company to reliably provide the internet access I pay them for: who knows what they'll try to block next? I made a complaint, and got a response that quoted the terms of use which say they can mess with traffic if they want to. I know that, and my position is they shouldn't half-assedly try to police content. So, I installed a VPN on my phone.


Please name and shame.


This was on HN the other day: https://www.privateinternetaccess.com/blog/2019/03/isps-in-a...

Spark NZ, Vodafone NZ, and Vocus NZ, at least.



I have my own home server on a standard consumer Comcast internet (cheap plan at 60mbps) running c0d3.com, and the students that are learning how to code on it never had any issues with stability (except that one time where I had a power outage at home).

I also have my router configured to use our server as a DNS server and the speed is incredible. Since I'm hosting my sites at home, when people on my WiFi network use my sites it feels almost instantaneous (because the network request resolves locally).

3 of my students were inspired to set up their own servers and they love the experience so far. Finding people who run their own services is so rare, I hope more people do it.

I am concerned about security implications though. Could people hack into my home server, then hack into the router, and then launch a man in the middle attack?


If you're serving your website from your router as a webserver (dumb), this would be a concern. Serve your website(s) from a webserver running in a VM/container, and you're doing okay.


I ran my own DNS servers in the past (and email servers). It's not too difficult to setup (email is significantly harder), although you'll probably have to run at least two DNS servers in order to use it with a domain because most registrars won't let you change the nameservers unless you have at least two.

I think it's a worthwhile thing to do since it demystifies how DNS works (similarly with running your own email servers), but if you're running everything on cloud infrastructure I don't see much benefit aside from the educational aspect.


There's a nice and free secondary DNS service available at https://freedns.afraid.org - so if you trust that service, you can get away with running just one master DNS yourself.


It's not too hard to find a secondary server; Hurricane Electric offers a free secondary service.


I learned recently that my home router runs a forwarding DNS server. I suspect many people are already doing this and don't know it.


The article is partly about the performance issue of not having a local server and partly about the privacy loss of sending all your DNS queries to Google. Even without a local dns server there’s still a stub resolver on your OS that provides some degree of caching.


I think we are approaching another tipping point towards decentralization. As more and more people become aware of the privacy and other abuses from Google and Facebook there will be a growing migration towards anyone who offers alternatives and more choices.

I remember the great excitement of those early days when the internet first started to become a mass public phenomenon. It was going to change everything, become the great leveler. Those huge entrenched monopolistic corporations would have trouble competing against small quick startups. And for a while that happened, entire industries were changed by tiny startups in garages, like Google. But as it became bigger Google changed for the worse.

I think this is going to be a continuing cycle. But one great thing is that we will be creating new tools such as blockchain and will have a clearer roadmap of what to do when somebody amasses so much centralized power that they start to limit our choices, to enrich themselves.

Another thing: today people are walking around with enough combined computing power in the phones in their pockets to dwarf the resources of Google, Facebook, and even the CIA and NSA, and I know we have enough hackers that would consider it a challenge and even fun to organize all that power to counteract any serious abuses.

For instance many phones these days have at least 8 processor cores running at around 2GHz, and it is now possible to add a 1 Terabyte SD Card. That is more than enough to use one as a DNS server.

Soon we will be seeing more and more peer to peer mesh networks, decreasing the need to use an ISP. I think more and more local Co-ops will be formed with people networking together their computer resources. And these Co-ops could network together themselves. For example they could form an online buyers club with all purchases going through a specific IP Address and no transaction being able to be traced to individuals. And a small percentage of the purchase price can be earmarked to pay people running the DNS or other services or pay them for any useful specialized software they have developed for these uses.


For an end user, a resolver only DNS server is probably one of the easiest services to run as there's really nothing to configure. I run my own resolver only DNS server at home. The configuration is minimal and I haven't had to touch it in years (about every five years I update the root zone to pick up new root servers if any).

I also happen to run DNS for my domain (as well as email, web, gopher and qotd) and that is a bit more involved than just resolving only, but it's by far easier to manage than an email server.


I have been serving myself a custom root.zone for almost 20 years now.

I use tinydns for this which I think has always been the ideal choice for personal use. The author from the beginning recommended users not to use third party DNS and that advice has proven to be more and more prudent over the years. tinydns stores records on disk and has never been limited by RAM as would be something like nsd, for example. Today, I manage to fit all the data I need on tmpfs anyway.

I am certainly not the only person to serve their own root.

There used to be a project called ORSC that started around 1998 when there were people actively protesting ICANN management of domain names. ORSC ran their own root servers, as a service for others, as an alternative to ICANN. I remember seeing a page -- it may have been associated with ORSC -- showing how to run an alternative root. The software used was tinydns.

I also remember a former head of ICANN who said he ran his own local root.zone. Not sure what software he used. This was years before any "expert", e.g. Cricket Liu, even admitted running a local cache (nevermind a local root) could be a good idea.

Managing DNS for myself I noticed a few things over the years.

The amount of DNS data I will need for all internet use in the course of a lifetime -- subtracting all data for ad servers -- is relatively small. With today's computer equipment it can easily be stored locally.

Within that subset of DNS data the amount that is changing constantly is also relatively small. The Mockapetris DNS is premised on handling dynamic data but I manage to meet my own needs with almost all static data. Further, the sampling I have done kept showing that most data stored in the DNS as a whole was not very dynamic.

Serving the data I need via authoritative servers like tinydns or nsd reduces the need for a cache, let alone one shared with others (who could possibly poison it... thereby reducing need for more complexity to protect against such poisoning).

As mentioned in the article, Vixie's problem was with Google hardware. Something like not being able to edit /etc/resolv.conf. Several solutions exist.

What happens when the ISP is redirecting all queries to port 53 to their own DNS servers? Imagine where the ISP has made its resolvers authoritative for everything, where it modifies the answers and you cannot access any other remote DNS server on port 53.

What is the solution? Multiple possibilities. If the needs are only for a relatively small amount of mainly static DNS data, then one option is to prefetch the data in bulk via FTP/HTTP. If the user wants a DNS cache, then another option is to set up own remote cache listening on a port other than 53 then forward queries there. VPN seems like overkill when the only issue is DNS traffic.

As such, "running their own DNS servers" could involve more than just using a RPi on the local network.



Back when I was on 1.5mbps DSL, one of the biggest performance gains in all the things I tried wasn't content caching via a proxy. It wasn't ad blocking (although that helped!) It was running my own DNS. Web pages got snappy again. I'm on a fast internet connection now, but I still contemplate running my own DNS just to get that snappiness back.


How hard is it to make a personal use raspberry pi dns server that assembles a local database using the consensus of a bunch of major sources?


That's not really how customer DNS works, so you're talking about a custom monitoring project. It doesn't seem very hard to monitor the consensus - get a stream of domains and run "dig" against the known endpoints.

But if you want an actual server doing that, I don't think there's much point. You'll get differences for various valid reasons. Entries changing, different anycasts getting different geo responses, etc. It's a bit like "a man with a watch knows the right time, a man with two watches can never be sure".

So the answer is really - why do you want to do this? Different reasons here lead to different approaches.


Thanks for sharing. I don't understand DNS that well. I wasn't sure they could disagree for valid reasons, for example.


Like pi-hole.net?


Quick, name a non-niche dns server that is easy to configure and maintain that hasn't had a major vulnerability in the last six months.


dnsmasq's last CVE was in October 2017 https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=dnsmasq

despite that "safety record", I am probably going to switch to https://github.com/bluejekyll/trust-dns


Thanks for the shoutout to trust-dns.

I would take the fact that there have been no CVEs on the server with a grain of salt, as I don’t think it’s seeing a lot of use. The embedded resolver is getting a lot...

Feel free to open any issues for features you’d like to see.


Unbound.


Michael Lucas wrote up a nice piece about setting up an Unbound DNS Server on OpenBSD eight years ago.[0] It might need some updating, but probably you could do that by reading the man page.[1]

[0] https://mwl.io/archives/580

[1] https://man.openbsd.org/unbound.conf


I've been running BIND9 on a Raspberry Pi powered off of a USB port on my wifi router. Sure, 8.8.8.8 and 1.1.1.1 will know some of my household's queries, but not how often they are being queried. It's actually surprisingly busy with redundant requests. Requests that Google et al. know less about now.

It's also handy to review the DNS logs. I found out my off brand wifi cameras were phoning home to china every minute. Blocked that domain in a hot minute!
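
For anyone wanting to do the same, one way to block a domain in BIND9 is a response policy zone; a hedged sketch where the zone name, file path and blocked domain are all hypothetical:

    // named.conf fragment -- enable a response policy zone
    options {
        response-policy { zone "rpz.home"; };
    };
    zone "rpz.home" {
        type master;
        file "/etc/bind/db.rpz";
    };

    ; db.rpz zone data -- a CNAME to the root (".") rewrites answers for
    ; the listed name to NXDOMAIN (owner names are relative to the zone)
    camera-phonehome.example  IN  CNAME  .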


That seems like a bit much. I think getting people to run a raspberry and pi-hole is a much more realistic aim in terms of usefulness and creating awareness.

Plus it's been pretty eye-opening. I'm running uBlock Origin and Privacy Badger... and still the pi-hole filters 25% of my traffic. A full fkin quarter after adblockers...


Pi-Hole is great as a caching mechanism and is easy enough for non-techie friends and family to use (just make sure you have it auto-update for them, or do it yourself once in a while).

Bind9, a true dns server, doesn't provide the privacy enhancements that pi-hole does and it is much more opaque for normal users. I think you're right - it's much easier to look at targets that are a little easier to hit rather than suggesting a bind9 setup to everyone. I have the know-how to do both, and I prefer pi-hole anyway!


All the filtering you're doing on pihole you can do in uBlock. You're just using different lists. (or not refreshing your adblocker list very often?) DNS blocking is still useful for things like mobile apps, but if you want to remove a few ms from your page loads, then merging your entries into uBlock may be a good idea.


I run dnsmasq at home talking to Google DNS via DoH. I’ve been thinking about running my own recursive resolver but that theoretically lets my ISP see all my DNS lookups.

I think as a compromise I’ll run my own recursive DNS on a digital ocean droplet and point my local dnsmasq instance at that.
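
The local half of that is only a few lines of dnsmasq.conf; a sketch (the droplet address is a placeholder from the documentation range):

    # dnsmasq.conf fragment -- local cache, your own upstream resolver
    no-resolv              # ignore upstreams from /etc/resolv.conf
    server=203.0.113.10    # the droplet running the recursive resolver
    cache-size=10000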


So you're cool with Google, a company whose primary business is tracking people, seeing all your DNS, but not your ISP, whose primary business is delivering network access, notwithstanding their bumbling efforts to branch out.

Now, Google does claim they don't track DNS requests. But consider why that is? Once upon a time they didn't scan Gmail content either, but that was before GMail dominated the webmail space.

What do you think is going to happen once DNS becomes centralized? If it's taken too far we won't be able to go back. And it can easily go too far. Chrome and Firefox are ubiquitous enough that if they succeed in removing local resolvers from the loop it will mean that the entire ecosystem will have transformed to accommodate them. Software stacks, configuration policies, etc will have all evolved to disfavor niche use cases and favor Google, Cloudflare, etc.

ISPs can already see the IP address we're all connecting to, and the correlation between domains and IPv4 addresses is more than strong enough to provide the necessary information for commercial profiling. IPv6 will virtually make it 1:1. (So Encrypted SNI likewise provides little benefit.)

The shift to TLS accounts for 90% of the potential capacity for avoiding ISP snooping, short of VPNs or TOR. That last 10% comes with a huge price tag.


Disclaimer: I work for Google, but not on DNS or Gmail.

> Now, Google does claim they don't track DNS requests. But consider why that is? Once upon a time they didn't scan Gmail content either, but that was before GMail dominated the webmail space.

You seem to assume that it's a singular organization with a unified agenda, but this really isn't the case. It's the same thing about when folks assume Google looks at your Drive files to recommend ads to you -- it isn't true, there's different motives there.

Drive: we want to sell you storage, your data isn't scanned (except for viruses). Google DNS: speed up DNS, which improves load times, which improves the overall web experience. Photos: Ditto, we want to sell you storage.

Performance is a feature, and most ISP resolvers are junk. Worse, many of those resolvers like to inject their own NXDOMAIN pages. :\

You could argue that Google DNS does positively impact Ads, but only in the respect that faster DNS resolution helps ads load faster too. Overall, I see it as one of those "long term greedy" (my own words) strategies.

As a privacy-conscious Googler myself, I've taken a look at Google DNS to convince myself that it's what it says on the tin. As far as I can tell it is, but I don't expect you to take my word for it. What logging exists is extremely temporary (short-term debugging.)

Re: Gmail, this isn't true either. Sure, there's still processing of your emails (we receive your email, scan it for spam), but it isn't used for Gmail ads. The public perception of this was so bad and the incremental improvement in ad quality so low, that now ads just use your general ad profile. No email scanning involved.

> Software stacks, configuration policies, etc will have all evolved to disfavor niche use cases and favor Google, Cloudflare, etc.

This is a different matter entirely, but this isn't _always_ a bad thing. I'm thinking of TCP here, which has almost entirely been ossified by middleboxes. Same for TLS -- TLS development has been hamstrung by these same kinds of middleboxes and "protocol accelerators." This kind of incredible technology position has allowed for the acceleration of HTTP/2 and the development of QUIC (and therefore HTTP/3). Overall, Google has been incredibly open with the development of these and worked to include everyone. I'm sure it's not always that way. Can you bring up some examples where "niche use-cases" have been locked out by Google-driven software stacks and configuration policies?


I can't imagine using Google services if one is remotely privacy conscious. Just from your own defense:

Drive: > your data isn't scanned (except for...

Google DNS: > What logging exists is extremely temporary...

Gmail: > we receive your email, scan it for ....


With that logic, how could anyone remotely privacy conscious use any service on the Internet?

There’s a lot to worry about w.r.t. privacy online. Virus scanning, spam filtering, and debug logging aren’t high on my worry-list.


I think the point is not necessarily what they are scanning now, but what they might be scanning in the future for other purposes.


I am having a hard time with the do not read email part here. Let me tell you why.

1. I do not use Google for DNS.

2. I do not use Chrome; I use Firefox with ad blocking.

3. I only browse in private browsing mode 99% of the time.

4. I have a script that updates a block list of 10s of 1000s of IPs for ad and tracking blocking, etc. into my hosts file.

So I order a box of cigars. Confirmation is to a gmail account. Next day I get stop smoking ads in YouTube. Never seen them before then.

So...


There’s a difference between saying we don’t scan the emails and saying we don’t track the metadata either. So if you bought from an online tobacconist rather than amazon they wouldn’t need to scan the contents.


What is the metadata you are looking at? The email was from orders@randomonlinecigarshop.com? Are the email address and subject metadata? If so, you are being disingenuous about not reading emails. The idea of Google saying "we do not read your emails" will be understood by the masses to mean we do not read your emails, not "hey, we take careful note of the sender and any marks on the envelope, but we do not open it". It is free email, got it, but it seems a bit shady in the presentation of your do's and don'ts.


"Google DNS: speed up DNS, which improves load times, which improves the overall web experience."

Oh, just stop.

It's even more disappointing to consider that you believe this to be true.


As a counter-anecdote to your disbelief, I've enjoyed internet on an ISP whose DNS servers were very slow. Slow enough for me to spend the effort to find out what's the holdup between enter key and first paint. DNS responses were about 350ms, compared to 8.8.8.8 sub-20ms.

Edit: I should add that the slowness wasn't a peak hour thing, it was consistent, all day, for several months.

Switching made my subjective experience better.


What exactly is it that you believe is the truth then?


My ISP is AT&T, so I do indeed trust them less than Google. And no, I don't particularly trust Google either, which is why I wrote that I can avoid them both by running a recursive resolver on Digital Ocean (but now I have to trust DO).

I could run a VPN full time, but I'm not willing to accept the added latency and bandwidth cost.

What would you suggest?


Fair enough. I glossed over the Digital Ocean part, or at least failed to appreciate it--you are (or will be) independent and not contributing to centralization of DNS.

Thank you! We need more people to run their own network services in order to preserve our freedom and privacy.


FWIW, the first machine I ever had broken-in to was a personal box I ran outside the firewall of an employer and the vulnerability was in BIND 4. Circa 1996 probably.

I’d still run my own email if it weren’t such a pain in the ass. I’m not new to this stuff[1], but at some point you get tired of doing SA stuff at home when it’s also your day job.

1. https://duckduckgo.com/?q=qmail+jay+soffian


Yep. I've been using OpenBSD since circa 2000 and rarely deviate from the stock software and configuration. At least as important as security (and not unrelated), they ship HTTP, SMTP, and DNS services in base; services which the developers use themselves.[1]

Package management is more difficult, but if I have to install something as a package I probably don't want the headache. Upgrades are more manual as compared to Linux distros, but they're simple and consistent and well-documented so require no more than an hour every 6 months. Sysmerge (for upgrading /etc) and now syspatch (for kernel patches) have made it even simpler. The upgrades come precisely every 6 months. The system evolves incrementally so I don't need to invest much effort in keeping pace--just stay on schedule.

I only backup user data and a few key configuration files (e.g. domain specific rules for smtpd.conf and httpd.conf) as I can recreate a setup with minimal effort.

I stopped running POP and IMAP a long time ago. OpenBSD never provided native solutions. (They shipped a POP3 daemon, popa3d, for a few years but few people used it.) I use mutt, some others use alpine, and others just forward their email to somewhere else. I do greylisting with OpenBSD's native spamd and some simple RBL checks that run from the MDA, but that's it. I get more spam than I might otherwise, but it's tolerable, especially considering I don't have to maintain additional software. And most other users don't see the same spam volume--I've used my e-mail address on web pages and in public forums for nearly two decades so it's on pretty much every list traded among spammers and marketers.

[1] People criticize their "secure by default" mantra as disingenuous or misleading, but if you've been running these services for years or decades you know exactly what they mean by it.


> I’m not new to this stuff[1], but at some point you get tired of doing SA stuff at home when it’s also your day job.

Amen! Which is really sad sometimes. I really enjoyed that stuff when I was younger. Sometimes makes me wish I had picked another career so I might still enjoy fiddling with more or less trivial tech as a hobby.


>I don't particularly trust Google either, which is why wrote that I can avoid them both by running a recursive resolver on Digital Ocean (but now I have to trust DO).

Why not run DoH over Tor? Much better privacy than a server/IP address that is used only by you and can be traced back to you.


Um, because at the end of the day I don’t care that much and want to actually be able to watch Netflix and stream iTunes and those things tend not to work unless they can route you to a reasonably close CDN, which means knowing at least your /24 usually.



I'm not certain it is relevant anymore but ISPs used to MITM DNS requests and would send you to redirects for search/advertising pages for what should've been NXDOMAIN responses. Google at the moment has a better track record here.


They didn’t MITM the requests ... you had to explicitly be using the ISP’s DNS servers. And yes this is one of the reasons I long ago stopped using my ISPs DNS servers.


So I've encountered ISPs that do MITM the requests regardless of which provider you use.


Maybe you shouldn't talk so confidently, considering you got it all wrong about GMail? It's always scanned your email. How do you think spam filtering works? And it used to scan emails for advertising, but doesn't anymore:

https://www.nytimes.com/2017/06/23/technology/gmail-ads.html


More protocols should be designed such that random willing individuals can contribute to the core infrastructure. DNS was made in an era when just about anyone could run a DNS server for the public, hence the design choices.


I live in NZ, and run my own DNS server. It's just unbound on pfsense, super simple. The good thing is when my ISP started blocking websites after the Christchurch attacks, I didn't even notice. (They were simple DNS blocks.)


What's the difference between a personal DNS server vs. using Google or OpenDNS? Aren't you resolving to those anyway?


The DNS protocol is largely a single UDP packet. In China the packet can be sniffed and dropped according to its content.

So even if you switch to 8.8.8.8, your ISP can still tamper with it.


Unbound resolves starting from the root DNS servers and follows the delegations down to the servers that are authoritative for the domains they advertise.


I run my own dns server (and a bunch of other stuff), but as secondary dns with a hidden master. It lets me configure records how I want but with the benefit of using my provider's international dns server network.


Is this the Paul Vixie of Vixie Cron fame?

Edit: To answer my own question. Yes it seems

https://en.m.wikipedia.org/wiki/Paul_Vixie


Well, basically, running your own DNS server might be better for privacy; however, security might be worse if you are lazy. If you do not keep your stuff up to date you will be more in trouble and probably have less privacy (the same with mail servers, etc.). Of course, the more tech-savvy you are and the more time you have, the less of an issue it will be.


Did this for over a decade. Kinda easy but also pointless. I'm switching to Google Cloud DNS.


This site is paywalled no matter if I go incognito or use the web link. Am I missing some ninja magic? Is it ok to be irritated by this?

Haven't been able to read the article but is it referring to this Nov 2018 tweet?

https://twitter.com/paulvixie/status/1063843157668970496?s=1...


It's ok to be irritated by this. I ended up reading it in links[0] initially, which is usually my go-to. After reading your comment I decided to install Firefox 66 (which I've been meaning to do anyway), where NoScript is apparently still a thing. No Javascript, no annoying popovers.

[0]: https://en.wikipedia.org/wiki/Links_(web_browser)


Isn't there a distinction between a DNS server and a DNS resolver?


Paul probably has really nice internet access.

Most people in the USA have really poor access for running services. I have constant problems with Comcast myself.

One day when we have fiber and static ipv6, then we can run mail, IRC, Http, dns, etc.


Whatever DNS you have, a government like the South Korean one will tap, monitor and censor it as it wants to.


The article is behind a paywall for me. Will somebody post the full text?


Isn't there such a thing as a decentralised DNS client?



