
Before the DNS: how yours truly upstaged the NIC's official HOSTS.TXT (2004) - fanf2
https://iconia.com/before_the_dns.txt
======
bradknowles
And in 1995, when the Defense Information Systems Agency was building the
classified SIPRnet, the principal network manager wanted to use HOSTS.TXT
tables instead of the DNS, because they thought it would be easier. They also
wanted to use random network numbers pulled out of their ass, for the same
reason.

Fortunately, I got wind of this, and as the DISA.MIL Technical POC, I had a
meeting with them. Ultimately, I was able to convince them of the folly of
their ways, and to use real DNS servers registered with the NIC, and real
network numbers registered with the NIC.

The kicker was that I knew they ultimately wanted to be able to connect the
classified SIPRnet to the “unwashed masses”, through a mythical “multi-level
secure gateway” that the NSA was supposedly building, which would
theoretically keep the truly classified stuff from touching the unclassified
stuff.

But how would you route packets from one side to the other, if you had
colliding network numbers because one side just pulled random numbers out of
their ass?

How would you connect through to the unclassified side, if you weren’t using
the “real” DNS?

Yes, this was 1995, and they still thought HOSTS.TXT files were a good idea.

To those of you who have used SIPRnet, you’re welcome.

~~~
paranoidrobot
There are, unfortunately, people who still think this way.

I worked for someone a few years ago who wanted me to build tooling to manage
the hosts file across several hundred devices.

When I said no, that this is exactly what DNS servers are for, they got shitty
and went on about how DNS was unreliable and the cause of so many outages.

~~~
geofft
I wish I had a term for this sort of confusion. On the one hand, yes, DNS is
the cause of many outages, and yes, in certain uncommon circumstances
("small networks", "having an unusually good system for syncing text files
that you're already keeping at five nines", etc.), syncing a hosts file might
be reasonable. But most DNS outages aren't about DNS itself as a protocol, and
won't be addressed by syncing the hosts file - they're about stale information
in DNS, or incorrect information being stored in DNS, or something else like
that. Whatever system you build to replace DNS is going to be prone to the
same problems.

I have seen the same line of thinking in many other contexts - "Why don't we
just replace problematic component X with other component Y" where X is,
genuinely, not great but Y would need to be shaped just like X and our
problems with X are actually its shape, and if you were able to reshape Y, you
might as well just reshape X and not bother deploying Y. But I'm not sure this
fallacy has a name.

~~~
geocar
For federated names, DNS has won. There’s no point arguing how good or bad DNS
is, because it’s a necessary evil at this point. I wouldn’t use it in a new
protocol or on a fully internal network, though, and that’s because nearly
everything is better than DNS.

For internal networks this is obvious. I had hundreds of machines synced up on
my hosts file in the 1990s so _global_ DNS outages didn’t affect my internal
connectivity. Win.

If you make a mistake in DNS you either wait for caches to discover your
correction or you ring up a bunch of other sysadmins and get them to restart
named. If you rsync hosts files you just run rsync again. Easy.
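
A minimal sketch of that loop, assuming key-based SSH and a hypothetical
push_targets.txt listing the machines (neither is from the original setup):

    #!/usr/bin/env python3
    """Push the master hosts file everywhere; after a mistake, run it again.

    A sketch, not a production tool: assumes key-based SSH and a hypothetical
    push_targets.txt with one hostname per line.
    """
    import subprocess

    MASTER = "/srv/hosts.master"  # hypothetical location of the master copy

    with open("push_targets.txt") as f:
        targets = [line.strip() for line in f if line.strip()]

    for host in targets:
        # rsync over ssh: the next run simply overwrites any earlier mistake
        result = subprocess.run(
            ["rsync", "-e", "ssh", MASTER, f"{host}:/etc/hosts"],
            capture_output=True, text=True,
        )
        print(f"{host}: {'ok' if result.returncode == 0 else result.stderr.strip()}")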

If you are trying to diagnose a name problem, you check the hosts file. If you
use DNS even for your internal name discovery, you have to check every
resolv.conf, every listed nameserver, and compare with going to the roots.
What’s the point? That runbook is long, while running rsync is easy!
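
For comparison, here is the DNS side of that runbook sketched as code: query
every nameserver in resolv.conf for one name and diff the answers. This uses
the dnspython library, and the name under test is a placeholder:

    #!/usr/bin/env python3
    """Ask every nameserver in /etc/resolv.conf for one name; diff the answers.

    A sketch of the 'check every listed nameserver' step; comparing against
    the roots would take a further iterative lookup. Requires dnspython
    (pip install dnspython). The name under test is a placeholder.
    """
    import dns.resolver

    NAME = "internal-db.example.com"  # placeholder name to diagnose

    # Pull the nameserver list out of resolv.conf ourselves
    nameservers = []
    with open("/etc/resolv.conf") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                nameservers.append(parts[1])

    answers = {}
    for ns in nameservers:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ns]
        try:
            rrset = resolver.resolve(NAME, "A")
            answers[ns] = sorted(r.address for r in rrset)
        except Exception as exc:
            answers[ns] = [f"error: {exc}"]

    for ns, result in answers.items():
        print(f"{ns}: {result}")

    if len({tuple(v) for v in answers.values()}) > 1:
        print("WARNING: the nameservers disagree")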

New protocols want IP mobility and cloud discovery - what does DNS do here?
Causes problems, that’s what. What do you do if you want to address a
_service_ instead of a machine? Know any browsers using SRV records?
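
For what it’s worth, consuming an SRV record is the easy part; a dnspython
sketch, with a made-up service name:

    #!/usr/bin/env python3
    """Resolve a service rather than a machine via an SRV record.

    A sketch; _imap._tcp.example.com is a made-up name, and the point stands
    that mainstream browsers never ask for SRV at all.
    """
    import dns.resolver

    for rr in dns.resolver.resolve("_imap._tcp.example.com", "SRV"):
        # SRV carries priority/weight for failover plus a target host and port
        print(f"priority={rr.priority} weight={rr.weight} "
              f"port={rr.port} target={rr.target}")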

And what about the home user? My printer is dell12345.local, but how does DNS
know that? The DHCP server could’ve told it, but what stops clients from
lying? Who signs the SSL certificate for my printer? How does layering on one
more piece of bullshit like DNS help anything?

So not only is DNS not an obvious choice for anything new now, it wasn’t an
obvious choice for this new thing then (naming federation). Communication was
slower then and tech/science has always been a certain amount of cargo
cult/echo chamber. These guys don’t know what they’re doing but they’re
exhausting and they’re going to do it anyway. So we end up with the worst
thing that could possibly succeed: DNS.

So yeah. DNS problems are absolutely because of DNS, and almost anything will
work better for any specific use-case. That means part of the reason this
“fallacy” doesn’t have a name is that it isn’t a fallacy. Some things just
suck, and seeking out a workaround _sometimes any workaround_ needs to be
treated as a cry for help, instead of browbeating people with how great DNS is
and hoping they end up with Stockholm syndrome.

~~~
jamespo
"If you are trying to diagnose a name problem, you check the hosts file. If
you use DNS even for your internal name discovery you have to check every
resolv.conf"

If you are using a distributed hosts file you have to check every hosts file
on every host and that it's in sync.

~~~
geocar
Nonsense. You already have something that copies the hosts file to every host.
It does that check as a product of copying the hosts file. rsync is old, and
even before that we had rcp!
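
In fact the check can be run without copying anything; a sketch using rsync's
checksum and dry-run flags (the master path and host names are hypothetical):

    #!/usr/bin/env python3
    """Report which hosts' /etc/hosts has drifted from the master; copy nothing.

    A sketch: --checksum compares file content, and --dry-run with
    --itemize-changes makes rsync only report differences. Paths and host
    names are hypothetical.
    """
    import subprocess

    MASTER = "/srv/hosts.master"

    for host in ["web1", "web2", "db1"]:  # placeholder host names
        out = subprocess.run(
            ["rsync", "--checksum", "--dry-run", "--itemize-changes",
             "-e", "ssh", MASTER, f"{host}:/etc/hosts"],
            capture_output=True, text=True,
        ).stdout
        # no itemized output means the remote copy already matches the master
        print(f"{host}: {'in sync' if not out.strip() else 'drifted'}")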

What you don't have, and have never had, is the ability to check all of your
recursive resolvers from every machine and easily compare the results. The rsh
might've failed for other reasons that you have to check first. The
nameservers might be OK, but the Ethernet hub might be broken. It might be
temporary. It might be an active cache-poisoning attack. It might be out of
ports. You just don't know. DNS is _always_ a moving target. A huge amount of
energy is put into building monitoring, recording the results of monitoring,
and looking back on previous observations, and it's still not perfect: Nagios
can say everything is okay while the service was still down. Sometimes you
never find out why.

A better way to think about it is: push, don't poll. You know when the master
hosts file changes and you know how long it takes to push it to all the
machines. Polling the DNS servers just in case something changes is silly.

~~~
icedchai
The entire Internet uses DNS for billions of hosts. To say it won't work for a
tiny internal network seems a bit strange.

Also, if you can push files with rsync, you can write a script to SSH to every
host and check its DNS settings. Pretty simple stuff.
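
Something like this sketch (key-based SSH assumed; the host names and the
expected resolver set are placeholders):

    #!/usr/bin/env python3
    """SSH to each host and flag any resolv.conf with unexpected resolvers.

    A sketch of the check described above; host names and the expected
    nameserver set are placeholders.
    """
    import subprocess

    EXPECTED = {"10.0.0.53", "10.0.1.53"}  # placeholder resolver IPs

    for host in ["web1", "web2", "db1"]:  # placeholder host names
        out = subprocess.run(
            ["ssh", host, "cat", "/etc/resolv.conf"],
            capture_output=True, text=True,
        ).stdout
        found = set()
        for line in out.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                found.add(parts[1])
        print(f"{host}: {'ok' if found == EXPECTED else f'unexpected: {sorted(found)}'}")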

DNS isn't "new" at this point. I remember configuring old SunOS boxes in the
'90s, switching them from NIS to DNS. Exciting times.

~~~
geocar
> The entire Internet uses DNS for billions of hosts. To say it won't work for
> a tiny internal network seems a bit strange.

Ok. Then look at it this way. A DNS server effectively has to look at its own
hosts file and publish the results in response to online queries.

Assuming the failure rate of getting answers from a hosts file is constant,
why exactly do you think those online requests have a negative failure rate?
That’s what would be required for DNS to beat hosts files!
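
To spell out the arithmetic: if reading the local file fails with probability
p, and the extra network round trip fails with probability q >= 0, then the
combined failure rate is 1 - (1-p)(1-q) = p + q - pq >= p, with equality only
when q = 0. The online lookup can at best tie the local file.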

If we’re talking about different networks then the hosts files do not have the
same failure rate, and that’s the first time DNS (or something like it) could
be better.

> DNS isn't "new" at this point. I remember configuring old SunOS boxes in the
> '90s, switching them from NIS to DNS. Exciting times.

Confessions from the cargo cult. Oh well. Hopefully the above makes it clear
why this was a mistake.

If not, I’m not sure what else to say. In 1996 I was converting SunOS boxes
back to hosts files and YP because the failure rate of DNS was higher and the
previous guy had left. What else could I do? If I’d told my boss we needed to
keep using DNS because “it works for the Internet” I would’ve gotten laughed
at. We had real work to do!

~~~
icedchai
Your time would've been better spent fixing your DNS servers and adding a
second one for redundancy. If you told me you couldn't make DNS work in 1996,
I would've laughed and figured you were just inexperienced. If you told me you
couldn't make it work _today_, I'd ask HR to get the paperwork ready.

~~~
geocar
Wow.

You would choose a failure rate of nonzero over a failure rate of zero and
threaten a coworker with an HR trip for disagreeing with you?

I'm so glad I don't work with you.

~~~
icedchai
Ok. Let's go back to 1985 and use host files. Who wants to join me?

------
JohnFen
I am part of a group that runs a small "private internet" -- that is, a
tcp/ip network that adheres to most internet RFCs and provides most of the
same services as are available on the internet, but operates independently of
the internet itself.

Right now, we don't run a DNS -- we go old-school with a master hosts file as
was done before DNS existed on the internet. For our situation, it's the
easiest solution and is entirely manageable, since there aren't a ton of
domain names and the hosts file rarely needs updating.

But it's clear that eventually this will no longer be sustainable. Kudos and
thanks to the author for blazing this trail for us. It will make the eventual
shift to a private DNS much easier.

------
DonHopkins
>This was easy because in those early days the Network Control Program known
as NCP (this was before TCP/IP) would broadcast messages called RSTs to every
possible host address on the network when they booted.

Note that NCP used 8 bit host addresses, so it wasn't as if you couldn't write
a program to connect to all 256 possible addresses and see if a computer
answered. But that would be rude.

https://en.wikipedia.org/wiki/Network_Control_Program

The author's top level web page says:

https://iconia.com/

>from the keyboard of geoff goodfellow

I'm glad to learn that Geoffrey has finally upgraded his tty to a real
keyboard! He used to always use this From address:

https://iconia.com/TELECOMDigestV2.33.txt

From: the tty of Geoffrey S. Goodfellow

------
oldandcold
As a one-time DNS dev, this story is at once heartwarming and terrifying.
Still the best thing I've read in a while! LOL!

------
gmiller123456
>I would then telnet or ftp to these nameless hosts and see what host name the
operating system login prompt gave me or what host name the ftp server
announced in its greeting. I would then plug this information into my systems
host table.

So, you removed what little security there was at the time to make sure that
the machine pointed to by a host name actually had some connection to what the
hostname implied. This is just one step below making the HOSTS.TXT file
world-editable. The NIC had a good reason for not adopting your method.
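
To make the attack concrete: whatever a server announces in its greeting is
purely self-reported. A sketch of the banner grab over TCP (the original
predates TCP/IP; 192.0.2.10 is a documentation-range placeholder address):

    #!/usr/bin/env python3
    """Read the FTP greeting banner from a host to learn what it calls itself.

    A TCP-era sketch of the technique the article describes; 192.0.2.10 is a
    documentation-range placeholder, and the banner is whatever the remote
    machine chooses to claim.
    """
    import socket

    with socket.create_connection(("192.0.2.10", 21), timeout=5) as s:
        # FTP servers announce themselves before the client sends anything,
        # e.g. "220 somehost FTP server ready" - the hostname is in the banner
        banner = s.recv(1024).decode(errors="replace")
        print(banner.strip())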

~~~
flingo
What're you implying? That someone would change their hostname to take over
someone else's site?

~~~
gmiller123456
Not really "taking over", more like password harvesting. You just report a
hostname at login that you don't own, and the maintainer will update the hosts
file to point to your system. People will gladly connect to your system and
type in their usernames and passwords.

