That's not how that should be done. Just use a mix of two providers. Using your own servers and TinyDNS is silly for million/billion-dollar companies.

See MaxCDN, for example, which uses a mix of DNS providers (AWS Route 53 and NS1):

    ns-5.awsdns-00.com.   ['205.251.192.5']   [TTL=172800] 
    ns-926.awsdns-51.net.   ['205.251.195.158']   [TTL=172800] 
    ns-1762.awsdns-28.co.uk.   ['205.251.198.226'] (NO GLUE)   [TTL=172800] 
    ns-1295.awsdns-33.org.   ['205.251.197.15'] (NO GLUE)   [TTL=172800] 
    dns1.p03.nsone.net.   ['198.51.44.3']   [TTL=172800] 
    dns2.p03.nsone.net.   ['198.51.45.3']   [TTL=172800] 
    dns3.p03.nsone.net.   ['198.51.44.67']   [TTL=172800] 
    dns4.p03.nsone.net.   ['198.51.45.67']   [TTL=172800]
Curious: are you the kind of person who runs their own SMTP server and complains that GitHub's pricing is too expensive?
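
For what it's worth, checking whether a zone's delegation already spans two providers takes only a few lines. Here's a minimal sketch, assuming the third-party dnspython package, with example.com as a placeholder and the "provider" grouping being only a rough heuristic:

    # Flag a delegation whose nameservers all appear to belong to one provider.
    # Assumes dnspython; example.com is a placeholder domain.
    import dns.resolver

    answers = dns.resolver.resolve("example.com", "NS")
    nameservers = sorted(str(rr.target).rstrip(".") for rr in answers)

    # Group by the parent domain of each NS host as a rough proxy for "provider".
    providers = {ns.split(".", 1)[1] for ns in nameservers}

    print("NS records:", nameservers)
    print("Distinct provider domains:", providers)
    if len(providers) < 2:
        print("WARNING: all nameservers appear to belong to a single provider")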



No tool is silly as long as it does the job adequately. Are paperclips silly for a billion-dollar company?

If both Dyn and R53 go down, that's exactly when you want a service like PagerDuty to work without a hitch.


You're asserting that your (or their) homegrown DNS service will have better reliability than Dyn and Route53 combined. That assertion gets even worse when it's a backup because people never, ever test backups. And "ready to go" means an extremely low TTL on NS records if you need to change them (which, for a hidden backup, you will), and many resolvers ignore that when it suits them, so have fun getting back to 100% of traffic.

Spoiler: I'd bet my complete net worth against your assertion and give you incredible odds.

Golden rule: fixing a DNS outage with actions that require DNS propagation = game over. You might as well hop in the car and start driving your content to people's homes.
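
To make the TTL point concrete: the cached delegation is the floor on how fast any "repoint the NS records" fix can take effect. A rough sketch of checking it, assuming the dnspython package, with the domain and the one-hour budget as illustrative placeholders:

    # Check how long resolvers may keep serving the old NS records after a change.
    # Assumes dnspython; example.com and the one-hour budget are placeholders.
    import dns.resolver

    MAX_ACCEPTABLE_TTL = 3600  # pick your own failover budget

    answers = dns.resolver.resolve("example.com", "NS")
    ttl = answers.rrset.ttl
    print(f"NS record TTL as seen by the local resolver: {ttl}s")
    if ttl > MAX_ACCEPTABLE_TTL:
        print(f"NOTE: resolvers may hold the old delegation for up to {ttl}s, "
              "and some will hold it even longer.")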


Idea: Chaos Monkey for DNS outages
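
One lightweight approximation of that, sketched below assuming the dnspython package (the domain and the "failed" provider pattern are placeholders): pretend one provider's nameservers are unreachable and confirm the survivors can still answer on their own.

    # DNS chaos drill: treat one provider as down, query only the survivors.
    # Assumes dnspython; DOMAIN and FAILED_PROVIDER are illustrative placeholders.
    import dns.resolver

    DOMAIN = "example.com"
    FAILED_PROVIDER = "awsdns"  # pretend every NS whose hostname contains this is down

    ns_hosts = [str(rr.target).rstrip(".") for rr in dns.resolver.resolve(DOMAIN, "NS")]
    survivors = [ns for ns in ns_hosts if FAILED_PROVIDER not in ns]

    if not survivors:
        print("No surviving nameservers: the zone would be dark in this scenario.")

    for ns in survivors:
        ns_ip = dns.resolver.resolve(ns, "A")[0].address
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [ns_ip]
        answer = r.resolve(DOMAIN, "A")
        print(f"{ns} still answers: {[rr.address for rr in answer]}")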


I don't know how big PagerDuty is; IIRC over 200 employees, so a decent size.

I was giving a bare-minimum example of how this (or some other backup solution) should already have been set up and ready to be switched over.

DNS is bog-simple to serve and secure (provided you don't try to do the fancier stuff and just serve DNS records): it is basically like serving static HTML in terms of difficulty.
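
As a rough illustration of how little is involved in just answering queries, here's a toy authoritative server sketch, assuming the third-party dnslib package; the records and port are placeholders, and a real deployment would run a hardened server (tinydns, NSD, etc.) on port 53:

    # Toy authoritative server: parse a few records from zone-file text, answer queries.
    # Assumes dnslib; the zone data and port 5353 are for illustration only.
    import time
    from dnslib import RR
    from dnslib.server import BaseResolver, DNSServer

    ZONE = "\n".join([
        "example.com.     300 A 203.0.113.10",
        "www.example.com. 300 A 203.0.113.10",
    ])

    class StaticZoneResolver(BaseResolver):
        def __init__(self, zone_text):
            self.records = RR.fromZone(zone_text)

        def resolve(self, request, handler):
            reply = request.reply()
            # Return every record whose name matches the query name.
            for rr in self.records:
                if request.q.qname == rr.rname:
                    reply.add_answer(rr)
            return reply

    server = DNSServer(StaticZoneResolver(ZONE), port=5353, address="0.0.0.0")
    server.start_thread()
    time.sleep(60)  # stay up long enough to test with: dig @127.0.0.1 -p 5353 example.com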

Having a backup of all important hostnames/IP addresses locally available and ready to deploy on some other service, or even built by hand on some quickly rented servers, is, I think, quite a reasonable thing for a company to have. I guess it would also be simple to run on GCE or Azure, if you don't like the idea of dedicated servers.


Not necessarily. Granted, this is how I would configure a system (two providers), but it is just as sensible to use one major provider that falls back to company-run servers in the event of an attack like this. It comes down to sysadmin preference: while it is smart to delegate low-level tasks to managed providers, it is also smart to have a backup solution under your full control, in case that control needs to be taken at some point.


That would be a quick fix, similar to adding another NS provider. Of course, if Dyn is out completely, they might not have their master zone data anywhere else; then it's like any other service trying to rebuild without a backup.
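
One cheap way to avoid the "master zone only lives at the provider" failure mode is to pull a copy of the zone on a schedule, if the provider permits zone transfers. A rough sketch, assuming the dnspython package, with the nameserver address and zone name as placeholders:

    # Snapshot the zone via AXFR so the master data exists outside the provider.
    # Assumes dnspython and that the provider allows zone transfers to your IP;
    # the nameserver address and zone name are placeholders.
    import dns.query
    import dns.zone

    zone = dns.zone.from_xfr(dns.query.xfr("198.51.100.53", "example.com"))
    with open("example.com.zone", "w") as f:
        zone.to_file(f)
    print(f"Saved {len(zone.nodes)} names from example.com")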


+1 for using a mix of two providers. That's what we do at my startup, and we haven't had a problem since switching (knock on wood).



