How and why I run my own DNS servers (zwischenzugs.com)
268 points by zwischenzug on Jan 26, 2018 | hide | past | favorite | 140 comments



Responding to several comments in this thread RE: what is the point of doing this ...

The point of running your own email and dns server is so that you are a peer on the network.[1]

This is important and is becoming lost in the current era of Internet adoption.

By many measures the Internet is the largest cultural and commercial force in the world today and by an accident of history, the researchers at (D)ARPA gave us a network that allowed normal citizens to be peers on the network.

Don't lose this.

[1] As opposed to, for instance, the telephone network. You can own your own domain and perform the first level of network interaction on your Internet systems, but the analogy on the phone network (owning your own phone number and controlling the first touch from other networks) by creating a CLEC is administratively and financially ($100k +) impossible.


You hit the nail on the head. It's staggering how many people are relegating things like mail and DNS to a handful of companies that are becoming too large to fight against, or to force to be wholly transparent, and everything else that goes along with that. It's truly scary and it doesn't have to be this way.


I think calling it peer is misleading/imprecise, since in the context of the internet peer/peering refers to the interconnection of two networks, having your own AS, running BGP, etc.


"I think calling it peer is misleading/imprecise, since in the context of the internet peer/peering refers to the interconnection of two networks"

That is what peering agreements are but I believe in the broader context of the Internet, any routable IP address can, and should be, considered a peer:

"Each layer has one or more protocols for communicating with its peer at the same layer."[1]

[1] TCP/IP Illustrated, Volume 1, p.3


I get where you're coming from but wouldn't being a proper peer on the Internet imply that you would need some kind of backbone access and be an ISP yourself?


Comcast has a huge network but none of their customers are only talking to each other, they're talking to Verizon's customers and Google and Facebook and Amazon. Everyone has to interconnect with everyone else regardless of how big they are.

All it takes to be a proper peer on the internet is a public IP address.


My ISP rotates through ipv6 addresses now (and frequently, at least several times a day). Makes that tough


That feels like an assholish thing to do. The great thing about IPv6 was that there are enough addresses that everyone and everything could get one. Constantly changing them defeats that purpose.


Browsing using a static IPv6 (or IPv4) that is unique to you is terrible for your privacy.

Randomly reusing the same limited set of IPv6 addresses would help privacy a little.


I'm sure for a mere $25 more a month they'll provide a static address.



That isn't much help for running your own DNS server though, since it's a bit of a chicken-and-egg problem. It's also hard to run a mail server on a residential connection because most of them block SMTP.

Fortunately there is another alternative, which is to get your public address from somewhere other than your local ISP. Some VPN providers offer a static IP or you can set up your own VPN to a VM on any cloud host and forward incoming traffic back over the tunnel.
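A minimal sketch of the forwarding half of that setup, assuming a WireGuard tunnel `wg0` is already up between the cloud VM and a home machine at 10.0.0.2 (the interface names and addresses are hypothetical):

```shell
# On the cloud VM: steer inbound DNS traffic down the tunnel to the home server
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING  -i eth0 -p udp --dport 53 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0  -j MASQUERADE
```

The same DNAT pattern works for TCP 25 or 443 if you want mail or web traffic terminated at home behind the VM's static address.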


Nope, all you need is a public address, some technical know-how, and the will.


Maybe once net neutrality goes down the drain


No.


You're not running your own mail server without rDNS, and that requires your ISP. This guy is talking about running on a dynamic IP; he's not getting rDNS for that. Without rDNS, your mail is assumed to be spam and does not reach its destination.

Also not available without a business-class plan, at least on my ISP, is port 80. You won't be hosting many web properties if Joe User types yourdomain.com into a browser without the leading https://, which nobody types anywhere. If browser makers defaulted to https, I'm sure the ISP would fix this glaring hole in their non-business-class plans promptly.


I used to do this as well, with tinydns. I even wrote an article with a similar name[1]. Then I wrote another article with a similar name[2] when I decided that I was being silly.

I use Route53 now with a little cron that periodically updates the record that points at my home IP[3]. Route53 is bulletproof in a way that I'm unable to accomplish on my own.

edit: Route53 is not actually cheaper than this person's setup. That said, $0.50 per hosted zone is a bargain for what you get and there's a volume break to $0.10 after 25 zones. We're talking about global 100% DNS uptime with an SLA[4] for $0.50/mo.

[1]: https://www.petekeen.net/how-i-run-my-own-dns

[2]: https://www.petekeen.net/how-and-why-im-not-running-my-own-d...

[3]: https://github.com/peterkeen/route53_ddns

[4]: https://aws.amazon.com/route53/sla/
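For anyone curious what such a cron job looks like, here is a minimal sketch using boto3; the hosted zone ID and record name below are placeholders, and the linked repo is the real implementation:

```python
import urllib.request

def build_change_batch(name, ip, ttl=300):
    """Build the UPSERT payload Route53 expects for a dynamic A record."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }

def update_home_record():
    # Requires AWS credentials in the environment; zone ID is a placeholder.
    import boto3
    ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId="ZXXXXXXXXXXXXX",
        ChangeBatch=build_change_batch("home.example.com.", ip),
    )

# Call update_home_record() from a periodic cron entry.
```

UPSERT means the call creates the record if it is missing and overwrites it otherwise, so the same script works on first run and on every IP change.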


I do it for free by using Cloudflare as the DNS provider, and used to do it for free with the Linode DNS service that comes included with having a VM there.


+1 for linode. That's what I use for DNS. Previously I used dns.he.net which is free. I highly recommend it if you don't want to pay anyone for anything.


+1 for using Cloudflare. That is what I use as my DNS provider.


I use Cloudflare as well, and they are one of the most performant. Feels too good to be true. I wonder if the offer and its performance will last...


I do the same thing (pointing a Route53 entry to my IP) with a script that runs on a Raspberry Pi in my living room. Very happy with the setup, though Cox changes my address so infrequently that it's almost unnecessary. In the name of sharing a python implementation, or maybe just shameless self-promotion, here's mine: https://github.com/benrad/pydydns


Are you sure it's cheaper than $2/month? I also run route53 and it comes out to about $20/month for 30ish domains.


You're right. Updated.


Furthermore, lots of FOSS router software, like pfSense, has built-in support for Route53 as a Dynamic DNS target.


This is exactly what I do with pfSense. I run multiple services from home, so I get pfSense to keep a Route 53 record of home.<domain>.com updated to point at my home IP, then create CNAMEs for each service that point to that home record.

It's then just a case of using HAProxy (which is also on the same box) to route to different internal services. I don't host anything important, just time saving things running in docker containers on a separate box. Things like email and personal site always go on cloud hosting or a service, since these need to be up for me.

Make sure your firewall rules are setup well, and look into some logging and monitoring.
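The record layout for that pattern is just one dynamic A record plus CNAMEs; an illustrative zone fragment (names and address are made up):

```
home.example.com.   300  IN  A      203.0.113.10      ; kept fresh by the DDNS update
git.example.com.    300  IN  CNAME  home.example.com.
media.example.com.  300  IN  CNAME  home.example.com.
```

Only the single A record ever needs updating; every service name follows it automatically.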


Same here, but I do it for free by using the DNS of my registrar, which has an API (this is gandi.net, and their new API is extremely simple and useful).


It is cheaper for you and the OP. It isn't cheaper "with a whole bunch of domains". Route53 would cost me ~$100/month vs. ~$21/month for my own setup.


That may be true, but few home users have 1000 domains to manage, and for the ones that do, they can do the math and decide what is most cost effective.

My company hosts several hundred domains, and using Route53 was a no-brainer -- even if the hardware is free, monitoring, patching and maintaining those bind servers is much more expensive than route53 even if we had 1000 domains.


Are you genuinely unaware that they charge for DNS requests, health checks, etc.?

If you are using it to "just" host a dumb domain record, you can get that for free at your registrar or with Cloudflare.

This whole "Route53 is cheap" mantra makes no sense unless you are doing something so simple you get it free with 2934902342390 other services.


it is cheap. because human time is always worth more than paying for routes, requests and healthchecks (ps. you won't get healthchecks that easily with your own setup, especially not health checks that will remove/add servers from your dns.)


> it is cheap. because human time is always worth more than paying for routes, requests and healthchecks (ps. you won't get healthchecks that easily with your own setup, especially not health checks that will remove/add servers from your dns.)

Do you genuinely not understand I'm talking about a production setup that already exists?

Like, if you think health checks are non-trivial you probably shouldn't be running your own DNS.


You clearly have needs that don’t match with the majority. That’s fine! I see nothing wrong with that. Arguing that Route53’s value proposition doesn’t fit your needs and therefore doesn’t fit anyone’s is disingenuous.


I'm specifically stating the idea it is cheaper was a misconception.

I didn't say everyone should quit Route53.


Are you genuinely unaware of how little they charge for requests?

Our last bill showed around 100 million queries last month, so that cost around $40.

We've got a dozen healthchecks, so that's another $6. $0.50 each is essentially free compared to the time it'd cost us to set up the equivalent healthcheck service with bind.

150 domains costs another $25

So our entire bill for 150 domains and around 3M DNS queries/day is around $75/month.


So you use AWS for their hosting and want to stay in their "world", fine. I don't use AWS for hosting either, and my equivalent of 12 health checks ends up being $33, which gets me over the $100 figure I mentioned earlier.

If it genuinely takes you an hour a month to maintain your own setup, I guess your logic makes sense but for me it doesn't.

You've also basically admitted it's a lot less than 1k domains.

If I hosted with AWS, it would massively inflate my hosting costs as well. Lol.


Do you actually have that many domain names or are you counting subdomains? Because Route53’s 50 cent charge is per zone, so you can go nuts with subdomains for no extra charge.


It's a question of traffic, DNS health checks, etc. as much as the number of domains.

If you are using it to "just" host a dumb domain record, you can get that for free at your registrar or with Cloudflare.


Good article on the How, pretty bad at describing the Why.

# It’s Cheap

There are plenty of cheap & free DNS hosts out there.

# More Control

Every DNS host I've ever used has offered full control of DNS records. If all you've ever experienced is poor shared hosting, maybe this looks like something new.

A "why not" section would be good:

* High latency for people who do not live near one of your servers.

* Time to set up

* Cost (lots of cheaper alternatives)

* Some overhead. Running any server that is public facing has some overhead even if it's just installing patches.

Interestingly, zwischenzugs.com isn't hosted on the author's own DNS (maybe a restriction of wordpress.com?)


> Every DNS host I've ever used has offered full control of DNS records.

That's not true. Most DNS hosting solutions offer so little control, and are so primitive, that they treat records only as static values. You can't have something like a view{} in BIND that lets you serve different clients from different servers, reducing latency and improving availability (say you have one server in America and one in Europe).

> High latency for people who do not live near one of your servers.

See my point above. Dynamically chosen records are fundamental to getting good latency cheaply. And no, anycast is not a silver bullet; you can do pretty well with just dynamic records, and you can use them for nameservers too, you know.
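For reference, the BIND view{} mechanism mentioned above looks roughly like this; the ACL, networks, and file names are illustrative:

```
acl eu_clients { 192.0.2.0/24; 2001:db8:1::/48; };

view "europe" {
    match-clients { eu_clients; };
    zone "example.com" { type master; file "db.example.com.eu"; };
};

view "default" {
    match-clients { any; };
    zone "example.com" { type master; file "db.example.com.us"; };
};
```

Each client sees whichever copy of the zone its source address matches first, which is how you point Europeans at the Europe-hosted records.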


I guess he did say full, but views are a pretty specialized function (I say, as I am using them internally where I work.)


We are missing the most important reason:

>I’ve learned a lot by doing this, probably far more than any course would have taught me.

It is quite obvious that the world needs more people with a deep knowledge of how DNS works. Doing something like this is a quick and effective way to make the world a better place.


>It is quite obvious that the world needs more people with a deep knowledge of how DNS works. Doing something like this is a quick and effective way to make the world a better place.

I think you're overstating your case here. I found it interesting too, and us geeks like to dabble in many things, but you can never achieve specialized knowledge and expertise through mere tinkering. Using C++ didn't make me a language designer or even an expert on C++.


The point is there's now one more person in the world who knows C++, which is better than there being one less person in the world who knows it. The fewer people who learn these systems, the fewer people there will be available to innovate/maintain these things going forward.

Entire trades are facing slow extinction for lack of people who bothered to learn. Almost 3/4 of electrical or electronics repairmen are over 45 years old, and 30% of them are over 55 years old [1]. It would be a shame to wake up one day in 2030 and realize there are only a handful of (presumably very well paid) people on earth who know how DNS works.

1: https://www.lincolntech.edu/news/skilled-trades/baby-boomers...


How many repairable electronics are being made nowadays, though? Seems like a function of the market, to me.


> Every DNS host I've ever used has offered full control of DNS records.

The other day I asked my client to change the negative-result TTL of their domain (down from 86400, which is IMO very high). The answer: can't do, our DNS host won't allow that.

That is easily the most popular host in my country, btw.
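For context, the negative-result TTL in question is the last field of the zone's SOA record (the "minimum" field, per RFC 2308); an illustrative record:

```
example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
        2018012601  ; serial
        7200        ; refresh
        3600        ; retry
        1209600     ; expire
        3600 )      ; minimum: negative-caching (NXDOMAIN) TTL, down from 86400
```

With your own zone file this is a one-line edit; with a host that only exposes A/CNAME/MX forms, you may never get to touch it.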


Yeah, WordPress. Also my home server is far from 5 9s (but it's good enough for my services).


> Every DNS host I've ever used has offered full control of DNS records

Ever tried to publish SSHFP records? Infoblox doesn't support them, and it seems like many of the wrapped DNS services with a cute website for customers don't allow publishing SSHFP records either.
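If your host does let you paste raw records, OpenSSH will generate the SSHFP data for you; a sketch using a throwaway key (the hostname is hypothetical, and on a real server you'd point -f at the host keys in /etc/ssh):

```shell
# Generate a demo key, then print its SSHFP resource records for the zone file
ssh-keygen -t ed25519 -N '' -q -f ./demo_key
ssh-keygen -r myhost.example.com -f ./demo_key.pub
```

The output lines are ready to paste into a zone as-is, which is exactly the step the form-based hosts block.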


Although the article does cover this, perhaps it doesn't emphasize the point strongly enough:

The IP addresses of your authoritative servers are going to be stored in the glue records for your zone, which are physically held by the parent zone's servers (i.e. not your servers).

Those glue records can't be changed quickly.

Therefore you need to be very sure that your servers' IP addresses are really static.

We run our own DNS (mostly for historical reasons and paranoia about reliability). One of our servers is on a subnet that we own, so that's totally under our control. The other is at a provider where I have had a detailed back-and-forth with the support staff about the circumstances under which its IP might change, and how to ensure it won't, specifically mentioning that we are going to run an authoritative DNS server on their infrastructure (currently IBM/SoftLayer, moving to Packet.net soon). I am skeptical that a low-cost provider (DO, etc.) can give a strong enough guarantee that the machine's IP address won't change.

Makes Route53 look very attractive for common/garden purposes.


DNS doesn't require you to use glue records, though; you can just provide a name and the resolver will make another query to figure out the IP.

It's of course less efficient and you probably shouldn't do it, but DNS itself doesn't stop you.

As for authoritative servers what I did in the past is essentially working together with friends, I was backup name server for their domains and they were backup for mine.

There are also some free public DNS servers as well.


True. I didn't mention that because as you say you shouldn't do it.

A big problem with free/cheap DNS services is that (as far as I have seen) they do not support either secondaries off their network, nor being a secondary to some other primary. So you end up in an all-or-nothing situation where you either rely entirely on one provider, or you have to host yourself.


>The YOUREMAIL.YOUREMAILDOMAIN. part must be replaced by your own email. For example, my email address: ian.miell@gmail.com becomes ianmiell.gmail.com.. Note also that the dot between first and last name is dropped. email ignores those anyway!

Isn't that only the case for gmail (and maybe some others)?

As an aside I'm surprised someone setting up their own dns-server would still be using gmail. I've found running my own email-server to be very useful and satisfying. (0-configuration throwaway addresses, automatic sorting with sieve, personal and professional mail on the same account, etc. etc.)


> 0-configuration throwaway addresses

> personal and professional mail on the same account

This is a self plug, but this is exactly what I made https://ForwardMX.io for: doing all this within Gmail, for the lazy :)


Looks useful. I'm not sure how the catch-all address works for your service, maybe it's the same maybe it's different from what I have.

My problem with a catch-all was that a lot of spam gets sent to various common addresses such as "admin". Do you maintain a blacklist for your users?

I have a regular expression set as a username in my database of email addresses. (spoilers: it's just somesalt.*."2 or 3 characters" so for example secretsalt.ycombinator.com@mydomain.tld). So I can sign up to any random website by just entering salt.thatwebsite.tld@me as my email. That's the zero-configuration part. Honestly, this is worth paying $9/year for imho :P

The personal and professional mail together is simply a matter of internally forwarding my professional mail to the same IMAP instance but a different folder :P. It's mostly future-proofing on my part. If I were to get/manage a different domain name (say for a gaming guild or business venture) I could merge that too and not have to set up 27 different accounts in my email client.
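The matching logic behind the salted scheme described above is tiny; a sketch in Python (the salt and domain are of course placeholders):

```python
import re

# Hypothetical salt and domain; accepts e.g. secretsalt.ycombinator.com@mydomain.tld
ALIAS = re.compile(r"^secretsalt\.[\w-]+\.[a-z]{2,3}@mydomain\.tld$")

def is_valid_alias(addr):
    """True only for addresses minted from the per-site throwaway scheme."""
    return ALIAS.match(addr) is not None
```

Anything that doesn't carry the salt, including guessable names like admin@, falls through to rejection or the blacklist.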


I've currently extended the catch-all/matching part to support many more options (Gmail style, timestamp invalidation, subdomains, blacklists ...).

Right now my approach is to have a [catch-all]@domain.tld enabled, and then build rules for individual domains i want to blacklist.

However sounds like you figured out a nice setup that works for you. So you are not the target audience anyway :)


Sounds like a great product, tbh. If I ever break my server I'll consider it.


How do you handle SPF records? That was the main reason I switched to Fastmail for my domains even though they all still filter into Gmail - they have a rewrite source address function


We do SPF. Gmail does fine without SPF if you teach the spam filter; every other major provider seems to have real issues without it.

Can't say anything against Fastmail though, except that surely we are cheaper, as we don't have to provide those kinds of interfaces and storage.


> Isn't that only the case for gmail?

Yes. Email providers are free to create that kind of rules, and this one looks very specific to gmail.

> As an aside I'm surprised someone setting up their own dns-server would still be using gmail.

Well, be wary of getting contacts about your DNS in an email that depends on your DNS. This is the one place to use a gmail address, not one you control.


I haven't ever got comfortable with running mail. Interested in any good guides I haven't already read.


I've spent years tweaking my mail server setup (Postfix, Dovecot, RSPAMD, LDAP...) and did a full writeup a few months ago. I've used other guides online but found most of the rest lacking on details.

https://www.c0ffee.net/blog/mail-server-guide


Thanks a lot for that! I've been running a similar stack (ldap took me a while to grasp) but without rspamd which I wanted to add. Your writeup is the perfect excuse to finally start with it.


Thanks from me too, this looks great. I also knew immediately why the SVG looks wrong on Firefox because I had the same problem before - The text is 'live' text and not stroked to paths. It's a pain to do if you have a lot of text content, but if you can click on text in an SVG and discover it is still editable, it won't render correctly for many viewers. Once you are sure there are no typos, stroke all text to paths and it will look perfect on any browser.


I've submitted this for discussion and asked the mods to give you the credit you deserve for putting in so much effort!

https://news.ycombinator.com/item?id=16238937


Thanks. I was just complaining to my friends last week that all the howtos about Postfix and spam protection seem to be a decade old. Now you've fixed that, and I will add rspamd (which I did not know about). My previous plan was to add some kind of Right Hand Spam Filter; do you have any opinion on that?


Hope you find it helpful! Not sure what you mean by Right Hand Spam Filter, but Rspamd has been great. It integrates with postfix as a milter so there's very little configuration to get it working with your existing setup.

The daunting part is just how many options/features the project has - which is what I tried to clear up in my guide.


Are there any good open-source webmail clients?


That looks great, thanks.


Curious why you settled for Postfix?

I know it's basically the standard, but it's a pain to configure and modify. I recently started working with Haraka and it's so much more of a pleasure (even though I am no JS fan, I prefer JS to cryptic/ancient config files).

Just curious whether you went through an evaluation process.


I don't feel like I "settled" for Postfix. The configuration is quite simple, the documentation is great, and it's been battle-tested for decades.

I have basically no experience with Javascript or web stuff, and the last thing I want to do is figure out some leftpad-style NPM package dependency while my mail server is down. Maybe I'm just an old-school Unix guy at heart though - running a JavaScript interpreter on a privileged port just doesn't sit right with me.


If you want "pain to configure and modify", take a look at sendmail, which was the standard for decades.

Postfix is a breeze to work with in comparison.


I don't agree. Sendmail definitely has the weirder config file syntax, but (having set both up multiple times) both have the exact same setup technique—reading through the manual looking at the config options and copying/pasting the lines into the config.


That same technique can be applied to basically anything.

I've setup both multiple times, and have worked with Sendmail since 1994. Postfix config files are much simpler.

To configure sendmail, you have to do extra layers of weirdness, like deal with "m4". That's mental overhead you just don't have with Postfix.


There's not really any extra layers of weirdness unless you're digging down into the nasty .cf files (which you probably never ever need to do). The m4 is just a detail (so you end up commenting with "dnl").

The relative complexity of the files is about the same—my postfix server config is roughly the same number of lines as my sendmail server config. And each line is just a single conf thing. Sendmail isn't really more complicated at all. It's just ugly.


Did not know Haraka, looks interesting.

Do you use it together with an imap server (like dovecot)?


Not yet no. But curious about such a setup as well.


If you want something a little more hands-off, I use Mail-in-a-Box (https://mailinabox.email/) which does all the setup on the server for you automatically. I switched over to running my own email server several months ago and it's been working without any issues.


Hmm, I can't seem to find the tutorial I followed. I thought it was on DigitalOcean or Linode but I can't seem to find it. It was one of the big vps providers though...

This one[1] is similar, although a bit less detailed. Basically: I use Postfix with MySQL for a user database as my MTA (the postman, so to speak) and Dovecot as the IMAP server (a smart-mailbox equivalent).

edit: it's slightly different from what perlgod wrote (rspamd+ldap vs spamassassin+mysql) but the idea is the same.

Now, the tutorial will give you a basic setup, with SpamAssassin as a spam filter, which already "just worked(tm)" for me. In addition to what's listed, I added the following steps over time:

- First check your IP address on mxtoolbox.com for any blacklists. If you're on any, you may be able to get removed if you ask, or you could ask your hosting provider for a different IP.

- Get a certificate from Let's Encrypt and encrypt all outgoing mail. Rejecting unencrypted mail is not a good idea, even if it would be in an ideal world.

- Add a blacklist MySQL table and a regex address to the users table. Postfix has an option for parsing regexes IIRC, so you can just set the email address to be a regex in the table as you would any other address. Then set the MySQL query in Postfix to something like "user in table users AND NOT in table blacklist". This way you can use a unique email for each website you sign up to (say: somesalt.domain.tld@yourdomain.tld), and if you ever get any spam, you will know which website got hacked or sold your info ;P. I have only one website on my blacklist so far, and that was because their unsubscribe link didn't work.

- Install Sieve. This lets you add a sorting script to your IMAP server, automatically sorting incoming mail into different folders using all kinds of regexps. I have, for example: personal; work; work/personal (directly to me and not a list); anonymous (throwaway addresses for each website I sign up to); admin (postmaster, cron, etc.); purchases (regex match on anything containing order, shipment, etc., which goes into a folder that is backed up for longer); Uni; git notifications; and Twitch (because they send a ton of short-lived notifications; messages in this folder get purged after 2h).
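A Sieve script for that kind of sorting is short; an illustrative fragment (folder names, the salt, and the addresses are hypothetical):

```
require ["fileinto"];

# Salted throwaway addresses go to their own folder
if address :matches "to" "secretsalt.*@mydomain.tld" {
    fileinto "anonymous";
    stop;
}

# Noisy short-lived notifications
if address :is "from" "no-reply@twitch.tv" {
    fileinto "Twitch";
}
```

Dovecot runs this server-side via its Pigeonhole plugin, so the sorting happens before any client sees the mail.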

- Set up an rDNS (reverse DNS) pointer (you said you wanted to try more obscure DNS features :D). This is an IP-to-domain mapping. For me this meant sending a message to my VPS provider asking them to do so. P.S. vpsdime has insanely good/fast support; it took them literally less than a minute.

and finally:

- Set up DMARC (DKIM+SPF). SPF is pretty simple: it's just a DNS record saying which IP addresses are allowed to send mail on your behalf. DKIM is a bit more complicated: it uses public-private key cryptography (with the public key in the DNS records) to digitally sign various fields (to, from, content, cc, etc. can all be signed separately) of your email to make sure they haven't been tampered with. The daemon setup is quite easy, but it's easy to mess up the settings. If you're sending sensitive business email I would set it up fully (my bank has it, for example), but for personal email I would only sign the bare minimum such as the 'from' field, or nothing at all. Even if you don't sign any fields, having it set up will almost surely prevent you from being put into spam folders by the big providers.
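The DNS side of that step boils down to three TXT records; illustrative values only (the selector, policy, and key below are placeholders):

```
example.com.                  IN TXT "v=spf1 mx a:mail.example.com -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The DMARC policy (p=) is what tells receivers how hard to act on SPF/DKIM failures, so start with p=none and tighten it once the reports look clean.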

I haven't had any issues so far, except for an overly strict DKIM setup. It once marked email sent to my work's mailing lists as spam when forwarded to Gmail (i.e. me -> work list -> someone@work.tld -> someone@gmail.com), which in an ideal world wouldn't cause issues, but my work's mail server was misconfigured and modified the email's envelope without respecting the DKIM signatures.

The other time was when my university email forwarded messages from @intel.com, which has strict security settings too. This was actually an issue when I forwarded from my uni address to my Gmail before as well, but I never noticed because Gmail was (as per Intel's configuration) silently discarding the emails. I only noticed the problem when I looked at my mail server logs for rejections. I now have intel.com whitelisted. (My uni said they'd fix it... 1.5 years ago...)

Having written all this out I noticed two things:

1. Okay, maybe setting up an email server is a bit of work after all... Mine grew organically over a few weekends so I never noticed.

2. When I finally start that blog I've been meaning to do, I should do a clean email-server install and write it up.

[1] https://www.digitalocean.com/community/tutorials/how-to-conf...


What email server do you use? Any particular anti-spam tool? This is what messes me up compared to gmail.


SpamAssassin, that's all really. I'm not a huge fan because I still have trouble figuring out how to configure it, but the defaults seem to work haha. I haven't had much spam yet that didn't get put into the spam folder or auto-deleted. The little bit that did get through also showed up in my Gmail before (think parents/grandparents + virus spam).

I do have some publicly harvestable emails (on github and such), but I've never been spammed on those yet. Only on my personal address which I only give out IRL or occasionally reply with.

The catch-all emails are probably the best anti-spam you can have. The moment I get spam addressed to ycombinator.com@mydomain.tld (which matches a regex), I just blacklist it and move on with my life.


>personal and professional on the same account

To me that is a bug not a feature


You can set imap to only sync specific folders, so you can have just your personal email on one machine and work on another if you want.

My work email is (was; it was an internship) reasonably quiet, so it didn't bother me. If it became annoying I'd just set it to manual sync on my phone.


I've been down this route but ultimately found much more stability running BIND as a hidden master and pushing NOTIFYs to secondary nameservers (I use DNSMadeEasy) whenever the zone is modified. Supports DNSSEC as well.

I wrote up my setup here: https://www.c0ffee.net/blog/dns-hidden-master

I host mostly static IPs, but I also use this setup with shared keys and pfSense's RFC 2136 feature to push dynamic DNS updates for my home network.
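For the curious, an RFC 2136 update like the one pfSense sends can also be driven by hand with nsupdate; a sketch assuming a TSIG key file and made-up names:

```shell
# Push a dynamic A-record update to the master, authenticated with a shared TSIG key
nsupdate -k /etc/bind/ddns.key <<'EOF'
server ns1.example.com
zone example.com
update delete home.example.com. A
update add home.example.com. 300 A 203.0.113.10
send
EOF
```

On the BIND side the zone just needs an allow-update (or update-policy) clause referencing the same key.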


I’m stuck on the DNSSEC part with OVH as a registrar, as they won’t let you add DNSSEC records if you don’t use their DNS. Not cool.

But your post gave me an idea to try, so anyway, thank you, either it will work or not.


Enable the DNS zone; you don't even need to use it. Then you'll be able to set custom DS records on the domain.

I asked them. It's not a bug, it's a feature!


Somebody pointed out to me that you can get a free DNS service here:

https://dns.he.net/


Good thing with Hurricane Electric's service is that they also provide slave DNS capability.

So you can run your own DNS master server and rely on HE for availability when your server is down, or even give preference to the slave DNS so your users get results quickly.
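On the BIND side, enabling that is a couple of lines in the zone definition; the address below is a placeholder (use whatever HE's slave server, slave.dns.he.net, currently resolves to):

```
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
    // placeholder address: substitute HE's slave server
    allow-transfer { 192.0.2.1; };
    also-notify   { 192.0.2.1; };
};
```

allow-transfer lets HE pull the zone via AXFR, and also-notify makes your master ping them immediately on every edit instead of waiting for the refresh timer.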


Still good to go through the process of setting this up and making yourself less reliant on third parties.


Yes, totally agree. All the pain put the knowledge in my bones.


You're "less reliant", but chances are, you'll end up having worse reliability.


Why do you think that? There is nothing to prevent one from running two, three or four DNS servers, each with a different provider. The internet was designed to be decentralised and run in such a manner, no?


After dyndns.org decided to remove the free service, I was so pissed that I had to update all my devices including git checkouts etc.

So I decided not to go with any free providers anymore and get my own domain instead. So far no regret.


For anyone willing to take the risk, another 3rd party service currently free was promoted on HN 6 months ago:

https://news.ycombinator.com/item?id=14856277#14858784

>opie34: A friend and I put together a free dynamic DNS service [1] offering cool custom domains aimed at the Raspberry Pi community (and similar hardware hackers.) It's not strictly a hardware project, but it's a crucial building block for any network-enabled Raspberry Pi project, and we'd love your feedback.

[1]: https://www.legitdns.com


Why didn't you just pay dyndns? That's what I did.


Not TS, but I did pay dyn for several years.

The problem with them however was that they did rack up the price over time and not just a little bit.

In 2006, I had to pay $9.95 for their yearly "Pro" offering. In 2008 the deal became "only $23.00 for 2 years", humm OK. Two years later in 2010 it was "$30 for 2 years", in 2012 the bill became $40.00 and when 2016 came the price was upgraded again... (yet again without any other benefits from a customer point of view)

That's when I figured that it was too much and moved over to he.net, which is still free and still works great for my dynamic DNS needs.

For static DNS services I also happen to run my own DNS servers. If needed, the he.net services could be moved over there, but I'm not seeing the need for now.


Once the annual cost exceeds what I'd value 3-4 hours of my time at, I may consider hosting my own DNS.


Sorry that I wasn't clear. The DNS services for which I was using dyn were moved over to he.net, which is also free, easy to use, and has specific support for dynamic DNS.

The DNS servers that I happen to run have nothing to do with dyn's pricing; I was already running those, and they are for more serious needs than just dynamic DNS.


First of all, I didn't like that move and did not want to reward it.

Next, I wanted to become independent. Now I can choose my DNS hoster or do it myself, but as long as I keep my domain the migration is easy. (And the custom domain looks much better ;-)

So I have no hard feelings against dyn.com as it was completely okay to stop their free service. Nevertheless, I did not want to be in the same situation again.


It seems dyndns is €24 per year. My baremetal hosting (4 Dedicated ARM Cores, 2GB memory, 200Mbit/s Unmetered, 50GB HD) is €43.2 per year. I am not ready to pay more than €10 per year for a domain name. For the moment, I enjoy 3 free domain names from no-ip.org.


I use duckdns.org. They are free and simple.


+1 for Hurricane Electric's free DNS. I ordered a .ninja domain, which they didn't support at the time. I contacted customer service, and they added it within a few hours. I love service when I have a problem!


Came here because of this advice:

> setup a strong root password

You should ideally disable root login over SSH and only allow key-based login. Check out /etc/ssh/sshd_config for more info on that. I don't think this has been suggested yet.
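For reference, these are the usual sshd_config directives for that (a minimal sketch; harden further as appropriate for your setup):

```
# /etc/ssh/sshd_config
PermitRootLogin no           # or "prohibit-password" to allow key-only root
PasswordAuthentication no    # keys only, no password guessing
```

Restart sshd after editing, and keep an existing session open while you verify you can still log in.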


Modern alternatives to BIND that I have had good (though limited) experience with:

- unbound (recursive resolver) https://www.unbound.net

- nsd (authoritative server) https://www.nlnetlabs.nl/projects/nsd


You can also run NSD as an authoritative frontend to your BIND servers, and unbound as a caching resolver with forward-zone entries to your BIND server for your domains.

This is what I do, which allows me the full gamut of BIND features without exposing those servers directly to any networks (there is a non-routed vlan that the nsd/unbound/bind servers use). This is using split-horizon, DDNS from ISC DHCP, and DNSSEC, so not a trivial setup, but it is also my home network setup, so not so heavy-duty as to be particularly hard to set up and automate.

I also have a round-robin DNSCRYPT setup hooked into the whole thing for semi-anonymity of queries.
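To illustrate the unbound side of the forward-zone setup described above, a fragment like this points queries for one domain at an internal BIND server (the zone name and address here are made up, not the parent's actual network):

```
# unbound.conf (fragment)
server:
    interface: 127.0.0.1

forward-zone:
    name: "example.com"
    forward-addr: 10.10.0.53   # internal BIND server on the non-routed vlan
```

Everything else resolves recursively as normal; only the named zone is handed to BIND.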


On the other side: I run my own DNS recursive resolver on my laptop/desktop, and it's one of the things I really miss on the ChromeBook. I've done this for a long time, originally starting with BIND, then switching to powerdns, but lately I've used dnsmasq and it works great. It has a really nice way to set up multiple resolution zones, so I can have my work IPs resolve using the private DNS servers over the VPN.

The downside is that sometimes wireless hotspots will block all traffic until you hit their portal, including DNS resolution, and some captive portals don't work when you can't resolve the name. I've worked around this by letting NetworkManager poke the DNS settings in, and then my VPN updates resolv.conf once the VPN is up.

Means I don't end up getting weird DNS responses from clever hotspots or ISPs.


Unbound is pretty slick as well. Not sure if dnsmasq supports DNSSEC, but unbound does.


Thinking about all the servers I've run over the years, I think DNS is one that was most satisfying in a weird way. Incredibly handy also for making amendments to a bunch of records.


I've got a little script that runs on my home router that makes zone updates to CloudFlare over its API. Cost per month: $0, infrastructure to manage: $0.
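For anyone curious what such a script might look like, here's a rough sketch against Cloudflare's v4 API. The zone ID, record ID, token, and record name are all placeholders, not the parent's actual setup:

```shell
#!/bin/sh
# Hypothetical router-side updater: push the current public IP into an
# existing A record via Cloudflare's v4 REST API.
# Assumes $CF_TOKEN, $ZONE_ID, and $RECORD_ID are set in the environment.

build_payload() {
  # Build the JSON body for the record update; $1 is the new IP.
  printf '{"type":"A","name":"home.example.com","content":"%s","ttl":300}' "$1"
}

update_record() {
  # $1 is the IP to publish (e.g. discovered from the WAN interface).
  curl -fsS -X PUT \
    -H "Authorization: Bearer $CF_TOKEN" \
    -H "Content-Type: application/json" \
    --data "$(build_payload "$1")" \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID"
}

# Typical cron usage: update_record "$(curl -fsS https://icanhazip.com)"
```

Run it from cron every few minutes and the record tracks your dynamic IP.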


I've done this for my domain parking company too. For my need, it's (probably) a must, since you want to make sure you have a reliable DNS server which you can fully control.

I've used PowerDNS, which was a breeze for me. It's super efficient too. So I set up my DNS on a very cheap VPS on Vultr ($5/month) and everything has been running well.

I do wish PowerDNS had a better web interface, but hey it does the job.


I know it is not the point of the article, but it is possible to do this with one VPS if the provider offers an API to update DNS records. I have this working with Digital Ocean: https://developers.digitalocean.com/documentation/v2/#update...


I've run my own DNS servers since the mid 90's. Anyone doing this should check out the "DNS and BIND" O'Reilly book.


Lately, I've been feeling the urge to rent colo space for my own servers. I used to have my own colo space & servers, but like everyone else was "sold" on the benefits of moving to the cloud.

Now, I have a different perspective and believe more people should be owning their own data and servers.


I have a similar problem, but there's just no way I'm running a DNS server in the open (amplification attacks, etc.). I was thinking of using https://icanhazip.com + OVH's API to regularly update my A records.

However, I still didn't get around to finding (or writing) a CLI for their DNS offering (it is possible, because acme.sh does it [0] -- maybe I'll just use this as a base?)

[0] https://github.com/Neilpang/acme.sh/tree/master/dnsapi


Just a random input. I use Cloudflare for this. Mostly because changes are more or less instant. I've used Namecheap and OVH before and both could end up with longer delays (~1h)


> there's just no way I'm running a DNS server in the open (amplification attacks, etc.)

Why not? BIND has rate limiting to make it useless for amplification attacks. You can also use smaller-than-4k UDP response sizes, forcing clients to switch to TCP. Nothing to be afraid of; your DNS hosting provider probably does the same thing.
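For reference, response-rate limiting is only a few lines of named.conf (the numbers here are illustrative; tune them for your traffic):

```
// named.conf (fragment)
options {
    rate-limit {
        responses-per-second 5;   // identical responses per client netblock
        window 5;                 // seconds over which the rate is averaged
    };
    max-udp-size 1232;            // keep UDP answers small; larger ones go to TCP
};
```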


Contact me if you want.

nsupdate (mentioned below) may help also.


While this is an entertaining read with all the technical details, it can be made much less work. If you register a domain or transfer it to a registrar that supports dynamic DNS updates, you just run a daemon inside your network and forget about it. I have several domains on Namecheap with a dynamic IP at home and do this [1].

[1] https://www.namecheap.com/support/knowledgebase/article.aspx...
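As a sketch of what that daemon boils down to: Namecheap's dynamic DNS takes a simple authenticated HTTPS GET. The host label, domain, and DDNS password below are placeholders, and the endpoint details should be checked against the linked KB article:

```shell
#!/bin/sh
# Hypothetical minimal dynamic-DNS updater for Namecheap-style services.

build_update_url() {
  # $1 = host label, $2 = domain, $3 = DDNS password, $4 = new IP
  printf 'https://dynamicdns.park-your-domain.com/update?host=%s&domain=%s&password=%s&ip=%s' \
    "$1" "$2" "$3" "$4"
}

# Run from cron every few minutes, e.g.:
# curl -fsS "$(build_update_url @ example.com "$DDNS_PASSWORD" "$(curl -fsS https://icanhazip.com)")"
```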


Anybody knows why he uses ssh to update the records and not nsupdate?


Not sure in his case, but I keep my DNS configuration in git and deploy with a git pull over ssh followed by a DNS server restart. I prefer ssh as I know better how to secure it, and it seems like less attack surface.


I use both: ssh <host> "nsupdate -l". That way I don't have to trust nsupdate's security model, yet I can still automate updates from any machine I choose.
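A sketch of that pattern: generate the nsupdate commands locally and pipe them over ssh into a local-only `nsupdate -l` session on the server. The hostnames here are placeholders:

```shell
#!/bin/sh
# Emit nsupdate commands to repoint an A record, for feeding to "nsupdate -l".

nsupdate_script() {
  # $1 = fully-qualified record name (trailing dot), $2 = new IP
  printf 'update delete %s A\nupdate add %s 300 A %s\nsend\n' "$1" "$1" "$2"
}

# Usage (assumes key-based ssh to the DNS host):
# nsupdate_script home.example.com. 203.0.113.7 | ssh dns-host 'nsupdate -l'
```

Since `nsupdate -l` only works on the server itself, the update channel is just ssh, with nothing DNS-specific exposed to the network.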


I use Route53 for two reasons:

1. $$$

2. certbot certonly --dns-route53 [...]


Does anyone have experience with using dot.tk domains as described in the article?


I've used them since early 2000. They were much better before FreeNOM started managing the TLD.

As long as you don't need to change or don't have any issue they are ok as any other domain.

Their interface is horrible, it took me a while until I figured out the right step order to properly set up glue records.

If you have an issue, their support can be hit or miss; I have a feeling that they just ignore a ticket when it's opened and only respond when you follow up. They also don't send email notifications when they respond, so it often takes days to resolve a simple issue. This is especially bad if they block the service and the domain no longer resolves.

TK also doesn't support DNSSEC.

The nice thing is that it is free, but you need to make sure you have a working web server that returns some content, otherwise they will block the service.

This restriction doesn't apply if you pay for the domain.

To summarize: it's OK for a free service, but if you pay you might as well use a better-managed registrar; their price is no better than registrars with better support and a better interface.


tcl.tk has been running without a hitch for 15+ years. I know the admins and they are satisfied with the service.


tl;dr:

1. host them on the cheapest dodgy VPS provider you can find

2. host primary and secondary on the same provider

3. use a free throwaway domain registrar

4. use the DNS server software with the worst security track record


This is not really a great idea. It's just adding more brittleness to your system. Leave DNS to people with distributed DNS networks and redundancy.

I mean obviously you can do it if you want to, I'm not stopping you, but to me it's silly.


Along these lines, one of the obvious things missed in this post: monitoring.

Setting up DNS servers on low-cost VPS providers has some inherent risks, as they tend to attract all kinds of abuse, which can lead to things like mass-scale UDP filtering to keep operations online.

When I scaled down my colo footprint I started to move DNS operations to various VPS providers to maintain redundancy, but kept my monitoring in place to perform health checks at 60 second intervals. Finally got annoyed enough with all of the filtering events tripping monitoring that I migrated everything to a hosted DNS provider.


DNS is designed for adversity. Assuming you don't care much about how long it takes to resolve, most recursive resolvers will try pretty hard to resolve your names -- all the authoritative servers will be tried, so you just need to make sure one is working. I use free secondaries for my personal domains (now using he.net), which helps a lot.


All you need to do is have at least 2 physically separate servers, and DNS by design does the distributed/redundancy part - as long as the records are set up correctly, any resolver will find you, and public ones like Google DNS will cache the result for most people.


DNS is only distributed if you run it yourself, not when you rely on a centralized service.


That's... that's not what "distributed" means. DNS is distributed because it's arranged as a tree, with the root nodes delegating to the TLDs delegating to individual name servers for each zone. Just because someone chooses to use a service instead of running a nameserver themselves doesn't make DNS centralized.


If a bunch of people choose a single DNS provider, they all create a centralized point in this tree through which all of the clients wanting to access their services have to go. This is exactly what centralization is, and it's exactly what caused downtime for a lot of websites when Dyn was DDoSed.


I'm pretty sure that's not what eebv meant. They're talking about redundancy for high availability.


Redundancy and high-availability is something DNS has by design. DNS providers are incentivized to highlight those things as if they were unique to them, but actually the only thing they can offer is anycast for lower latency. Incidentally anycast also makes them less reliable, not more.


How does this compare to pi-hole?


That's really not the point of the article. He updates the A records for his home machine that doesn't have a static IP.


the title is pretty bad for that article... I would never have guessed


Pi-hole is a DNS resolver: it takes questions from clients, such as "what's the IP address for google.com?", and takes care of contacting an authoritative server (or passing the question to a downstream resolver, which does the work) to get the answer.

An authoritative server is responsible for answering that same question, but it has been designated as authoritative for a domain via the domain registrar system. To see which servers are authoritative for a domain, you can use commands like dig or host; here's an example host command (I'm running OSX, but this should work on any *nix that has it installed):

  $ host -t ns ycombinator.com
  ycombinator.com name server ns-1411.awsdns-48.org.
  ycombinator.com name server ns-1914.awsdns-47.co.uk.
  ycombinator.com name server ns-225.awsdns-28.com.
  ycombinator.com name server ns-556.awsdns-05.net.


This is pretty simple stuff, and the two ads for your book make this look like an ad rather than something not otherwise posted on tens of other blogs.


Everything is simple once you know how.



