
Stop building stuff on sand – The Internet's DNS and Linux - CodinM
http://codin.ro/stop-building-stuff-on-sand-the-internet-linux/
======
convolvatron
first of all DNS has stood up pretty fantastically to the massive increase in
scope since its initial conception. So, congratulations and thanks to PVM and
Paul Vixie and all the other countless people who contributed.

So, you're right to point fingers at all the companies with multi-billion-dollar
market caps built on top of BGP, and DNS, and all the clearly inadequate
infrastructure.

I think you're falling short by suggesting that companies should have
technical specialists.

Ok so we admit that DNS is inadequate for the task. People are dumping a whole
lot of policy in there that doesn't make sense. DNS security seems to continue
to have limited uptake. In any case, the DNS infrastructure is both vulnerable
to attack and, more importantly, able to be used as an amplifier for other
attacks.
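to make the amplifier point concrete, here's a rough back-of-the-envelope sketch. the byte counts are illustrative assumptions, not measurements; real figures depend on the query type and the zone:

```python
# Rough sketch of the DNS amplification math: an attacker sends a small
# UDP query with a spoofed source address, and the resolver's much larger
# response lands on the victim. Sizes below are assumed for illustration.

query_bytes = 64        # small spoofed query with EDNS0 (assumed)
response_bytes = 3200   # large response for a record-heavy zone (assumed)

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.1f}x")  # prints 50.0x
```

the asymmetry is the whole attack: the attacker pays for the small packet, the victim receives the big one.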

So sure, insufficient investment by either neutral research funders or
commercial entities. But can you imagine what would happen if we tried to get
together interested parties and develop a replacement? Even if the end product
were technically flawless can you imagine the difficulties in spurring
adoption? PVM wrote some RFCs and said 'what do you guys think?', and the 50
people around the table said 'seems ok', and some people wrote some code, and
everyone else who trusted them installed it because it was better than FTPing
HOSTS.TXT files and merging them by hand every month.

can you imagine that happening today? I don't disagree with what you've said,
and I'm fundamentally frustrated by this ossification myself. I can't see any
answers.

~~~
dllthomas
A "technically perfect" solution in this context would have to include a
practical migration path.

~~~
convolvatron
i guess i don't know how to engineer for that without some sort of
overwhelming economic incentive. just curious, how would you rate the effort
spent trying to make the v6 transition painless? clearly it wasn't perfect,
but they tried pretty hard.

~~~
dllthomas
Various things can make migration easier. Backwards compatibility being a big
one, particularly if it can be phased out gradually by either party.

I don't know enough about IPv6 migration to comment there.

------
jlgaddis
There's a lot of bitching and complaining in this post. Unfortunately, I seem
to have missed the part where the author offered up his solution and explained
how he was getting started on it.

~~~
CodinM
I'm stating the issue first; solutions come afterwards.

------
guitarbill
> How about stopping for a bit and challenging the IESG (Internet Engineering
> Steering Group) and IETF(Internet Engineering Task Force) to actually assess
> the situation at hand, and decide upon actual improvements, creating proper
> documentation and generally creating a proper professional environment
> regarding the technology so that you don’t have to open 40 tabs and read
> documentation that you may or may not need. Making the information more
> easily accessible and readable means more people will actually go through it
> and that means more security.

I agree with the author: RFCs are awful to work with. Take DHCP, for example
(because I'm familiar with it). Its RFC [0] "Obsoletes: 1541" and is
"Updated by: 3396, 4361, 5494, 6842". It turns out there are about 60 (!)
RFCs relevant to DHCP [1], and probably more references in those. What the
hell.

An example of reference overload and horribleness through backwards
compatibility: the second field in a DHCP header, `htype` (hardware type),
requires you to look at RFC 1700. That RFC has 230 pages, and some of the
hardware types it lists are: Ethernet, Experimental Ethernet, Amateur Radio
AX.25, Proteon ProNET Token Ring, etc. All but Ethernet are almost completely
useless today.

Except, oh shit, RFC 1700 is obsoleted by RFC 3232, which says everything has
been moved into a database but _HAS NO LINK_ to where you might find that
database. And in that database there are values which require two bytes, yet
`htype` is only a byte long! Brilliant.
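To see how little the wire format itself tells you, here's a rough sketch of parsing the fixed prefix of a DHCP message per RFC 2131 (the sample packet bytes below are made up for illustration):

```python
import struct

# Minimal sketch: unpack the fixed-format prefix of a DHCP message as
# laid out in RFC 2131 (op, htype, hlen, hops, xid). Field names follow
# the RFC; the sample packet is fabricated here for illustration.

def parse_dhcp_prefix(data: bytes) -> dict:
    """Unpack the first 8 bytes of a DHCP message."""
    op, htype, hlen, hops, xid = struct.unpack("!BBBBI", data[:8])
    return {"op": op, "htype": htype, "hlen": hlen, "hops": hops, "xid": xid}

# A DHCPDISCOVER-style prefix: op=1 (BOOTREQUEST), htype=1 (Ethernet,
# per the IANA hardware-types registry that replaced RFC 1700),
# hlen=6 (MAC address length), hops=0, then an arbitrary transaction id.
packet = bytes([1, 1, 6, 0]) + (0x3903F326).to_bytes(4, "big")

fields = parse_dhcp_prefix(packet)
print(fields["htype"])  # prints 1 -> Ethernet; the field is one byte wide
```

Note that nothing in the format itself tells you what `htype=1` means; you have to chase the registry to find out, which is exactly the complaint.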

One solution would be to mandate that ratified RFCs come with behavioural
tests. In effect, you'd be encoding the standards in a way that computers can
understand, eliminating the potential for error in the `human -> computer
(txt/html) -> human -> computer (programming)` conversion chain.
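As a hypothetical sketch of what that could look like (the field facts are from RFC 2131; the test-harness shape is entirely invented here):

```python
# Hypothetical sketch of "behavioural tests shipped with an RFC":
# normative statements encoded as assertions that a conformance suite
# can run against the spec's own field definitions. The field widths
# are from RFC 2131; everything else is invented for illustration.

DHCP_FIXED_FIELDS = {
    # field name -> width in bytes, for the start of the DHCP message
    "op": 1, "htype": 1, "hlen": 1, "hops": 1, "xid": 4,
}

# A couple of hardware-type values from the IANA registry
# (Ethernet = 1, IEEE 802 Networks = 6).
KNOWN_HTYPE_VALUES = [1, 6]

def test_htype_values_fit_field_width():
    # htype is one byte wide, so any registry value carried in it must
    # fit in 0..255; two-byte values simply cannot appear on the wire.
    max_value = 256 ** DHCP_FIXED_FIELDS["htype"] - 1
    assert all(v <= max_value for v in KNOWN_HTYPE_VALUES)

test_htype_values_fit_field_width()
print("htype behavioural test passed")
```

A test like this would have flagged the two-byte-registry-value-in-a-one-byte-field mismatch mechanically, instead of leaving it for an implementer to trip over.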

---

[0] [https://tools.ietf.org/html/rfc2131](https://tools.ietf.org/html/rfc2131)
[1] [http://www.zytrax.com/books/dhcp/apc/](http://www.zytrax.com/books/dhcp/apc/)

