DNS: The Good Parts (petekeen.net)
123 points by zrail on July 20, 2013 | 51 comments



> Almost always you'll want to redirect a bare domain like iskettlemanstillopen.com to www.iskettlemanstillopen.com. Registrars like Namecheap and DNSimple call this a URL Redirect. In Namecheap you would set up a URL Redirect like this:

Coincidentally, yesterday I snapped and decided I was no longer happy using a URL forwarding service that requires configuration, or running a web server just to redirect old domains, so I built http://cnamer.com/.

    subdomain.source.com. CNAME google.com.cnamer.com.
That will redirect subdomain.source.com to google.com.

    cnamer.samryan.co.uk. CNAME minotar.net-opts-query.true-querystring.avatar-querystring.citricsquid.cnamer.com.
That redirects http://cnamer.samryan.co.uk to minotar.net/avatar/citricsquid

The code powering it sucks at the moment, and I intend to add the ability to use TXT records to set the redirect, but it /works/ for now.


That looks like an extremely useful service (good work!), but your examples (source.com.) don't work, since the zone apex can't be a CNAME (this is addressed in the article under "Why CNAME is Messed Up"). So this can be used with www.source.com. but not source.com.


Fixed! Not sure how I missed that, thanks! :-)


Great idea :-) I'm curious how many queries per day the Linode server could handle, even if they are non-mission-critical...


The version live right now is just a proof of concept more than anything. Once I've got a few hours spare (hopefully sometime this week) I'm going to rebuild it to operate in a sane manner and benchmark the load it can support. The Linode itself shouldn't have problems supporting millions of requests a day; it'll be cnamer itself that causes a bottleneck, if there is one. Worst case, if it turns out there's too much usage for the Linode to support (unlikely...), I'll upgrade the server / move it elsewhere. I was hoping to rebuild + benchmark it before mentioning it anywhere, but this blog post seemed super relevant, heh.


Minor error in this post: 'IN' doesn't refer to the answer's inclusion in the given DNS type -- it's short for Internet, and denotes the address family being returned (the field is called 'CLASS' in DNS-speak). Other options include Hesiod and Chaos, as well as a number of reserved/private values.

https://www.iana.org/assignments/dns-parameters/dns-paramete...
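
For example, in a typical dig answer line the CLASS field sits between the TTL and the record type (the address here is illustrative):

    empoknor.bugsplat.info.  300  IN  A  192.0.2.1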


Another very forgivable error is a misunderstanding about how recursive queries work. The article says

"dig randomly picked one of the root server responses, in this case f.root-servers.net., and asked it for the next part of the domain name in question, info. The info section of the hierarchy is run by a company that operates their own set of servers. dig asks one of these servers for the NS records for bugsplat.info and then finally asks one of those servers for the A record for empoknor.bugsplat.info.."

but in actuality every server in the hierarchy is queried for the full name: "empoknor.bugsplat.info". Technically even the root servers may be authoritative for that name; there's no way to know in advance where the zone cuts are.
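
To convince yourself, here's a rough sketch of iterative resolution using the dnspython library (assumed available; it also assumes glue records are present in the referral and skips all error handling) -- note that the full name goes to every server, starting at a root:

    import dns.message, dns.query, dns.rdatatype

    def iterate(name, server="198.41.0.4"):        # a.root-servers.net
        while True:
            query = dns.message.make_query(name, dns.rdatatype.A)
            reply = dns.query.udp(query, server, timeout=5)
            if reply.answer:                       # a server answered with data
                return reply.answer
            for rrset in reply.additional:         # otherwise follow the referral
                if rrset.rdtype == dns.rdatatype.A:
                    server = rrset[0].address      # glue A record for a nameserver
                    break
            else:
                return None                        # no glue; a real resolver recurses

    print(iterate("empoknor.bugsplat.info."))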


Correct, and a very common misconception. While I applaud the author's attempts to simplify DNS, it's somewhat evident they've not looked at the very clearly written RFCs which cover both mistakes.


Thanks for the corrections! I'll fix the article when I get back to a computer.


Fixed both errors. Thanks guys!


Sorry if I came off as condescending; it looked that way when I read it again. The RFCs are 1034 and 1035 and are very interesting reads. 5321 is good too. For example, did you know that you don't need an MX record for email? If none is present, it will default to the A record. DNSSEC is in 4034, among others. Also, I think the article could use more explanation of the main benefit of CNAMEs: a virtual-host-based setup where many domains point to the same IP.


No worries, you were right on the money. I've linked those RFCs as further reading in the wrap-up section.


    Almost always you'll want to redirect a bare domain like iskettlemanstillopen.com
    to www.iskettlemanstillopen.com. Registrars like Namecheap and DNSimple
    call this a URL Redirect.
I won't redirect a root domain using a URL redirect because DNSimple's URL redirects can't redirect over SSL.[1] I use an ALIAS record[2] to point the root domain to the subdomain and I use Rack Canonical Host[3] for redirects.

[1] http://support.dnsimple.com/articles/url-redirect-ssl

[2] http://support.dnsimple.com/articles/alias-record

[3] https://github.com/tylerhunt/rack-canonical-host


The architecture of DNS is pretty solid, but the encodings are unnecessarily complex. That complexity has a huge, hidden opportunity cost.

If anyone wants to help, I'm pretty confident a Space based encoding would greatly improve DNS:

https://github.com/nudgepad/space/issues/54


I'm not sure you actually understand what's going on here. The zone file that you're looking at is only used internally by the DNS server - it is never sent across the wire, so the verbosity of the format is largely irrelevant.


You're right, I don't have a complete working knowledge of DNS, BIND, Zone, et cetera. Hence, the details of my DSL are going to be off.

But I'm very confident in my hypothesis that the status quo is unnecessarily complex, all encodings could be reduced to Space DSLs, and doing so would lead to drastic unforeseen benefits.

All it takes is someone with knowledge of the details to implement it. I'd do it myself, but am currently doing the same in other domains (right now on HTML, HTTP, CSS).


Zone files are a side show. Many other DNS servers use entirely different formats, including formats that are more space-efficient than what you suggest on your Github page, and some of which are not serialized to text files at all (all data stored in databases). These formats are not standardized at all.

Note that your Bind zone file example is also quite verbose compared to what it needs to be. E.g. here's a more compact version:

    $ORIGIN breckyunits.com.
    $TTL 3600
    @ IN SOA NS53.DOMAINCONTROL.com. dns.jomax.net. (2013041502 28800 7200 604800 3600 )
    @ A 107.21.225.189
    @ MX 0 smtp.secureserver.net.
    @ MX 10 mailstore1.secureserver.net.
    @ NS NS53.DOMAINCONTROL.com.
    @ NS ns54.DOMAINCONTROL.com.
In comparison to that, yours doesn't seem to provide any real benefits that I can see.

When it comes to the over-the-wire format, it is designed to be much more compact, because if the request and response fit in a single UDP datagram, the request will be handled much faster. If the reply is too long, the server might have to tell the client to retry over TCP, which means two wasted packets to establish that, then about half a dozen packets or more for the request/reply over TCP. Your format would not help there.
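
You can watch the truncation dance with dig (the domain and record type here are illustrative):

    # advertise a small UDP buffer; an oversized answer comes back with the TC (truncated) bit set
    dig +bufsize=512 +ignore TXT example.com

    # the fallback: the same query over TCP
    dig +tcp TXT example.com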

If you want to improve DNS, read the RFCs first.


Thanks for the comments. I actually have read a lot of the RFCs.

A Space DSL to replace all this stuff won't materialize out of thin air; someone will have to take the time to think it through and iterate a couple of times. But I've been in the lucky position for a while now to see Space in action, and at this point I'm pretty confident that it will disrupt many areas, including DNS.

I haven't done a good job of making my case. Talk is cheap, more Space code is coming.

But my point in general is that all of these formats can be described as premature optimization. Or perhaps, no longer necessary optimization.

The example you provided clearly illustrates that. Dollar signs, at signs, periods, parentheses and abbreviations galore. In my mind, the current state of things is ugly code. I can make sense of that. But someone off the street can't make sense of that. In my mind, that is a problem. That is a huge problem. The common reaction is to "teach people DNS". I'm of the opinion, "No, these formats suck! No need for all that complexity. The world has changed. Let's start looking ahead."

To use a word mentioned earlier, I believe code should be trivial, as trivial as reading a road sign. I believe the Internet is due for a big refactor. I believe the unreadability of these formats is causing us a ton in lost innovation. Why should only l33t hackers be able to understand the full stack? There's no reason in my estimation, other than that the stack is permeated by ugly, unnecessary complexity. So many things people do nowadays over the Internet are done on top of 10 levels of abstraction when they could be done at 2 or 3.

I think there is a real opportunity here for someone to improve upon DNS.


It isn't a problem that someone off the street can't make sense of that, since it's a format that is an internal detail of a complicated tool. Someone off the street isn't expected to understand it: there's a plethora of tools that let them enter stuff in other formats, and a plethora of web frontends.

What they do need to understand, though, to do anything beyond the extremely trivial, is the DNS data model, and Bind-style zone files reflect that data model extremely directly, so almost whatever they end up doing, they will sooner or later come across something that may remind them of the zone files. And I'd guess that is a lot of what causes people to be concerned about Bind zone files.

As I pointed out, there are lots of DNS servers where you won't see a Bind style zone file ever, and you don't need to require configuration to be done via a file at all. Djbdns is one of the former. ldapdns is one of the latter - that exposes DNS entries pulled out of an LDAP server. But both will be similar in structure because they map to the same data model.

The only thing you will achieve by devising a format like what you describe on top of DNS is to drive people crazy, because they now need to learn a format that does not map directly to the DNS data model (if it does, it will look very similar to zone files and will be pointless).

I've written a DNS server that was used for a registrar platform (I co-founded the company that ran ".name"), and I didn't use zone files, but the data was still mostly the same.

If you want to change that, beyond the extremely superficial (e.g. renaming "MX" to "mailserver" or similar) that will only confuse people communicating with those using the original names, you'd need to start lower:

With the DNS data model and wire format. Good luck with that - there are trillions of dollars hinging on DNS working these days, and a lot of design decisions in DNS follow directly from real-life constraints that are not going to change.

They are simple: The data must be small - if records are too large for a single UDP datagram, it slows things down instantly. A DNS record consists of a label, an address family (can usually be omitted, as almost everyone just uses "IN"), a resource type (A, CNAME, MX etc. - you could rename those in your format), a TTL (which can usually be left out to use the default), and a resource-type-specific data field which is usually a string, label, or IPv4 or v6 address (obvious exceptions: MX, which includes the priority, and SOA).
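
Laid out against concrete records, those fields look like this (values illustrative; note the extra priority field in the MX rdata):

    @  3600  IN  A   192.0.2.10            ; label, TTL, class, type, rdata
    @  3600  IN  MX  10 mail.example.com.  ; MX rdata also carries a priority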

You could possibly drop the address family, but as I pointed out, "everyone" just uses IN anyway, and most software will default to that if you leave it out. You can rename the record types to something "friendly", but you don't need to change internals of the software for that - just add a mapping to the UI of various tools - these query types map to byte values internally in DNS servers anyway.

So what do you think you'd achieve? What can make DNS more understandable without breaking functionality people depend on?

> To use a word mentioned earlier, I believe code should be trivial, as trivial as reading a road sign.

Not going to happen, because code implements real world requirements, and many real world requirements are far more complicated than reading a road sign.

You can write a DNS server that is simple to understand for a programmer, though - DNS is not all that complicated. But then you don't need to mess with the DNS data model - it takes little effort to learn enough of it that zone files are not the problem. It took me less than a week to write my first fully functional DNS server without ever having looked at the RFCs before.

If you want to tackle unnecessary complexity, look somewhere else, like IMAP (or get people to stop using SOAP...) - now there is ridiculous complexity for no good reason (there are reasons, they just aren't good)

> Why should only l33t hackers be able to understand the full stack?

Sometimes because it often takes a huge amount of basic knowledge about the domain to even understand what people are talking about.

But you don't need that to understand DNS. Seriously.

Read RFC 1034 and 1035. Equip yourself with "tcpdump", "dig" and/or "nslookup". Write some DNS encoding and decoding routines based on the message format in RFC 1035. Run tcpdump and capture some queries that you carry out via dig or nslookup. Test your encoding and decoding routines to see that they match based on the dig input/output and captured packets until you have them working.

Then write a simple authoritative resolver based on the rules in the RFC's (authoritative resolvers are simplest because you are only answering based on "I know this, it's in my local data set", as opposed to recursive resolvers that have to know when it is right to cache, when it is right to forward requests, who it should allow recursion for, and [these days] how to avoid being part of a DNS amplification attack unwittingly). That isn't much more complex than: Wait for incoming query; parse query; extract name it is asking for; if it is a name your server should serve, return the data you have for that name, otherwise return an error (handling corner cases will make it a bit more complex, such as handling fallback to TCP).
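
For the curious, that loop comes out to roughly this in Python, with dnspython doing the message parsing (the zone data, port, and address are made up, and corner cases like TCP fallback are skipped):

    import socket
    import dns.message, dns.rrset, dns.rcode

    ZONE = {"empoknor.bugsplat.info.": "192.0.2.1"}    # hypothetical local data set

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5353))                       # unprivileged port for testing

    while True:
        wire, addr = sock.recvfrom(512)                # wait for incoming query
        query = dns.message.from_wire(wire)            # parse query
        name = str(query.question[0].name)             # extract the name it is asking for
        reply = dns.message.make_response(query)
        if name in ZONE:                               # a name this server should serve
            reply.answer.append(
                dns.rrset.from_text(name, 3600, "IN", "A", ZONE[name]))
        else:
            reply.set_rcode(dns.rcode.NXDOMAIN)        # otherwise return an error
        sock.sendto(reply.to_wire(), addr)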

If you have any prior network programming experience, this should be no more than a week of coding if you keep things simple. If you're experienced, you might even have something basic working (the "first 80%") within the day. At least if you pick something higher level than C to write it in.

There are many areas to improve upon in DNS, but data model complexity is pretty much bottom of the list. I'd have agreed with your second to last paragraph if it was better targeted than against DNS.


After reading https://github.com/nudgepad/space/issues/54#issuecomment-212... , I don't see how your method is superior to the existing BIND zonefile syntax. It requires one to keep significantly more state in one's head while editing the file and (because the meaning of the data in the file depends on where it is in the file) makes drive-by one-off editing much harder. (Being able to "echo '@ 60 IN A 1.1.1.1' >> my.zonefile.zone" is pretty convenient.)


Thanks for the comment. I'm sure there's a much better DSL than the one I proposed, perhaps that takes into account the ability to append to the file.

My theorem, though, is that there exists a Space DSL that is 10-100x better than the current BIND syntax. In other words, you can accomplish everything that BIND does using a syntax that is general/universal. The latter part is what gives you the 10-100x improvement over BIND. Learning BIND provides you almost no benefit in other domains, and worse, mastering other languages is irrelevant when you encounter BIND. Why am I so confident in this? We've been applying Space in all sorts of domains, and time and again it comes up that it works in any domain, better than the existing specific languages.

The benefit, if you know how to use Space--and a programmer can master Space in less than 20 minutes, since there are 3 rules, 2 syntax characters (space and newline), and it is impossible to have a syntax error--is that you can easily read and write code that works across all sorts of domains: HTML, CSS, Makefiles, BIND, HTTP, et cetera.

I'm a seasoned programmer (15,000 hours experience), and looking at BIND makes me groan.

Now, the DSL I whipped up I'm sure is not the DSL that could replace BIND--I took less than 7 minutes to come up with that one and my knowledge of BIND/DNS is only partial--but I'm highly confident that it is relatively close, and that with a few tweaks you could create a DSL that is better than the status quo. Which is pretty amazing, because this is so low level that your work would improve the Internet and DNS in a significant way.


Sorry, to me this is absolute nonsense. Your keywords are also syntax. That there are only two "syntax characters" is not an advantage - it forces you to make the grammar for your DSLs vastly more verbose by using keywords for all kinds of situations that could otherwise easily be expressed with syntax. As a (programming) language geek who cares deeply about syntax and writes compilers for fun, the very idea is abhorrent to me. It sounds like COBOL for the new millennium.

And that you "groan" from looking at BIND zone files tells me that you don't understand the underlying data model, rather than that the zone files are complicated - they map very well to the data model, and there are very few caveats with the format. (The one big one that tends to trip people up is the full stop vs. no full stop at end of host names).

Further, I think you still are confused about the importance of zone files here: If you don't like them, don't use them. Use software that gives you a nice fancy interface instead. Zone files are only still important because people still pick software that uses zone files.

A new format here would change pretty much nothing: Those who run bind will not change software because of a file format change; people run Bind because it has a track record, and is well understood. People switch from Bind because they have different needs. Many of those who pick something else pick DNS servers that don't even store records in files, or in other formats, so a replacement for the zone file format is irrelevant to them.


> Further, I think you still are confused about the importance of zone files here:

You are right. To be honest I don't know off the top of my head how many protocols/encodings are currently used to make DNS work.

I know I have an /etc/hosts file on my machine. I know there are zone files on DNS servers. In the past I've downloaded those massive files from places like Verisign et cetera that contain all the records. I know dig does some stuff.

My point is all that is unnecessary.

The whole thing could be made much simpler, if done in an object oriented way using Space as the encoding.

Say I wanted to go to google.com.

My browser could check the local domains.space file to see if there's a google.com match. If not, it could then send a space message to my ISP's (or other) DNS server:

    question google.com
And get a response:

    question google.com
    answer 123.123.123.123
My point is you could apply this clean, simple, object oriented, punctuationless system across the entire DNS stack.


First, what makes you think your 'Space' format has "no syntax"? What do you think keywords and whitespace are? You've just replaced 'A' with 'ip', and presumably 'CNAME' with 'address' (which is actually much less descriptive). You've also added a keyword to specify the TTL of individual records, rather than an optional positional parameter.

More importantly, semantics are always more involved than syntax. By the time you understand how DNS works (which will take basically an afternoon of playing), the few idiosyncrasies of zone file syntax are trivial. Each line is pretty much a textual description of the records that are sent over the wire, with a little punctuation and directives to set defaults for the file.

PS if you're looking to do anything programmatic, install PowerDNS instead - it can use a database for your zone info.


Right, there is a syntax. No punctuation is what I should have said.

Again, the DSL I gave is not optimal. It would take a few iterations.

My argument is that the few idiosyncrasies of zone file syntax are not trivial. Road signs are trivial. Hundreds of millions of people can see a sign and get it right away. My argument is that DNS could be trivial, and my prediction is a Space based DSL will emerge that makes it so.


...But why do I care?

Like, why would I use this? Yes, it takes an hour or so to understand simple DNS and it can certainly take weeks or even years to really understand complex DNS...but it's not the zone file format that makes this hard, it's that DNS is hard.

"More efficient" doesn't make me care because it's already efficient enough that the bottleneck in DNS is elsewhere. Being more useful, doing things I can't already do, makes me care. So why should I care?


DNS creates an alias from a name to the implementation.

This is really powerful for a set of reasons.

One, humans think with a combinatorially larger language than numbers, so having to work with addresses like 723.123.322.123 would be painfully slow. "Apple.com" is much easier.

Two, you can refactor the implementation without changing the interface. Ie, you can build a whole new Apple server, but just change the alias, and not have to teach the whole world to now go to Apple2.com.

It's pretty neat. The whole thing should be pretty simple. However, it's not. If it were simple, you would see a lot of programmers interacting with DNS on a more meaningful level (than just buying domains and changing A records and CNAME records according to steps they Googled and not according to an understanding of how DNS works).

Each DNS record could just be an object encoded in Space. Doing so would allow you to start innovating at a core part of the Internet, instead of on a tower of cryptic stacks.

IE:

    record
     domain apple.com
     ip4 723.123.322.123
     ip6 21:21:213:213:::213
     expires 2/23/2013
     createdBy Tim Cook
     mirrors
      1 723.123.322.322
      2 123.123.322.322


Setting your condescension aside, I am a programmer, and I do interact with DNS on a more meaningful level. It's not the interface: it's the concepts that are hard.


Frankly, I don't understand what's so complicated about HOST TTL [CLASS] TYPE RDATA

Even the SOA record's RDATA is pretty simple: AUTHORITATIVE_NAMESERVER ADMIN_EMAIL_ADDR (SERIAL_NUMBER REFRESH_INTERVAL UPDATE_RETRY_INTERVAL EXPIRE_INTERVAL NXDOMAIN_CACHE_INTERVAL)


Great comment, simoncion.

Let's define a simple empirical test to compare "X" to "Y" in terms of complexity:

How long would it take for a person to write code that reads and writes code in format "X"?

How long would it take for a person to write code that reads and writes code in format "Y"?

Let's call the current status quo of Bind/Zone "X".

My prediction is that there exists a Space based DSL that is functionally equivalent to X, let's call it "Y", that measures an order of magnitude better than X in terms of the complexity measure I defined above.

And why is this important? If it were a lot easier to write code that reads and writes code that operates at one of the lowest levels of the Internet, it would unleash a wave of innovation that is hard to predict the magnitude of.


I'm certainly not one to dismiss someone's hobby project; if it's a thing that is fun to work on, more power to you. You might even get a really useful result out of tinkering with it!

I'm having a hard time imagining anything that's simpler than this p-code parser for a simple BIND-style zone file:

  if syntax error is encountered, abort with error
  read host
  read ttl
  optionally read class
  read type
  load expected rdata format for type
  read rdata
  if record read was duplicate data, abort with error
Things get slightly more complicated when you add in $KEYWORDS, but not all that much.
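
For what it's worth, that p-code translates into real code almost line for line; here's a rough Python rendering (the rdata field-count table is illustrative and incomplete, and the duplicate check is left out):

    RDATA_FIELDS = {"A": 1, "NS": 1, "CNAME": 1, "MX": 2}   # expected rdata shape per type
    CLASSES = {"IN", "CH", "HS"}

    def parse_record(line):
        tokens = line.split()
        host, ttl = tokens[0], int(tokens[1])          # read host, read ttl
        rest = tokens[2:]
        rclass = rest.pop(0) if rest[0] in CLASSES else "IN"   # optionally read class
        rtype = rest.pop(0)                            # read type
        if len(rest) != RDATA_FIELDS[rtype]:           # wrong rdata shape: abort with error
            raise ValueError("syntax error")
        return (host, ttl, rclass, rtype, tuple(rest)) # read rdata

    print(parse_record("@ 60 IN A 1.1.1.1"))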

Anyway, if you can find a format that's an order-of-magnitude easier to work with than this, my hat's off to you. (If I were the sort of person that regularly found these sorts of things, I'd own an apartment building, rather than renting space in one. ;) )


You don't need to read or write zone files to write DNS servers. Zone files are irrelevant unless you want to be able to import Bind zone files for interoperability.

So why exactly do you think this "operates at one of the lowest levels of the internet"? It does not.

DNS servers do not talk to each other using this format.


> DNS servers do not talk to each other using this format.

Right. I'm talking about all the formats.


Great code example, simoncion! Thanks for taking the time to write up that comment.

Your example does indeed show a simple reader of BIND-style zone files.

But, to write that, you brought to the table an understanding of BIND that I'm sure took you a long time to develop.

A good Space DSL does not require a lot of domain knowledge to pick up.

In other words, while the current BIND zone format uses punctuation to indicate meaning, which then requires domain knowledge to read/write (or a textbook by your side), a good Space DSL is self-documenting. So, my example using things like A and CNAME and TTL would be bad. Better would be something like this:

    breckyunits.com
     ip4 123.123.123.123
     ip6 ::ffff:192.0.2.128
     updateEvery 3600seconds


What is updated every 3600 seconds here? The entire breckyunits.com zone? The "ip4" record? The "ip6" record? Both?

Zone files can express all of those easily, and need to. To be able to replicate their functionality, your DSL would need to be able to handle TTLs for each and every record.

To me, your example is extremely ambiguous. It is certainly not self documenting. Because I happen to know DNS, I'd assume you're assigning a TTL for all the records of 3600.

But how can I set the TTL only for one of them?

And how do I add records for subdomains? Do I have to repeat the full name every time? Because that's a non-starter.

And do you realize you've actually made this massively harder to work on with simple tools? Bind zone files keep all except the SOA record on one line. This means that with some simple conventions, a site can easily format everything so that a single record can always be manipulated simply by operating on a single line of text. That makes it trivial to e.g. modify zone files in shell scripts with tools like "awk" etc.

Almost anyone who needs to read and write zone files has advanced requirements - or we'd use a more user-friendly, high-level tool. And so a replacement would need to meet those requirements, including being able to write records on a single line, and being able to refer to the origin and use relative names.


Great questions. Now I'm having to think about implementation more.

Here's the basics of a system:

    /domains/
    /domains/google.com.space
Let's say the key requirements are:

1.) Domain pattern + IP address
2.) Some expiration timestamp
3.) Some property that tells you who told you that domain matched that IP

The contents of file "google.com.space" then could be something like this:

    ip4 123.123.123.123
    expiresEvery 3600seconds
    lastUpdated 2/2/2013 4:00pm
    receivedFrom dns.verizon.com
For subdomains, you could do a "/domains/foobar.google.com.space" file, or maybe a "/domains/*.google.com.space" file, or maybe you just put them in the file above like so:

    ip4 123.123.123.123
    expiresEvery 3600seconds
    lastUpdated 2/2/2013 4:00pm
    receivedFrom dns.verizon.com
    subdomains
     mail.google.com
      ip4 123.123.123.123
      ttl 3600
Et cetera. The implementation details would be very important. There are plenty of sub decisions to make to come up with a great DSL.

But I'm fundamentally confident that at the root level, the decision to create a Space based, object oriented domain system is not only correct, it is inevitable. It just feels right to me. Again, I've had the benefit of working with this structure for a while now, and am amazed at how so many things simplify down to it. Space is object oriented programming in an extremely simple, concise, robust, human readable way. Just as most (if not all) major computer programs nowadays are written in an object oriented way, I am predicting there will come a time when most major message passing systems are written in a Space encoded way.

> That makes it trivial to e.g. modify zones files in shell scripts with tools like "awk" etc.

I look at awk and I think that's something that will at some point be buried, and should be. awk is kind of like a neanderthal or some other species in between ape and human. Served its purpose, helped the evolutionary chain, is a very honorable tool, but at some point will go extinct.


It took me an afternoon of reading the docs on Wikipedia and Zytrax to grasp the basics of both zone files and BIND config files, and another hour or three with the tunnelbroker.net "certification" tests to grasp stuff like glue records. That's... not a long time to invest for such a useful thing.

Also, the only punctuation I see in a zone file is for comments, $KEYWORDS, the SOA record, and "@" to indicate that a record is meant to apply to the $ORIGIN. Did I miss something?

If you don't like "A", "CNAME", and "TTL", let me propose a preprocessor that lets you do the following:

  host1.example.com update-3600-secs ip4 1.1.1.1
  host1.example.com update-12-hours ip6 2001::1
  host2.example.com update-24-hours alias host1.example.com
  host1.example.com update-1-week physical-location 52 22 23.000 N 4 53 32.000 E -2.00m 0.00m 10000m 10m
This format retains the greppability of the existing zone file format as well as its drive-by appendability, while allowing one to ease one's way into the actual zone file syntax that's being used by your DNS server.


I was hoping you were referring to the wire format for DNS, in which the keys (the domain/host names) are stored using a unique compression scheme (which in my experience is rather hard to encode, and you have to be careful when decoding to detect loops). But no, you are talking about BIND zone files, which I don't find all that complex.
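
For anyone curious, that compression scheme boils down to something like the following sketch of a name decoder (per RFC 1035 section 4.1.4; the visited-offset set is the loop detection mentioned above):

    def decode_name(msg: bytes, offset: int) -> str:
        labels, seen = [], set()
        while True:
            if offset in seen:                  # pointer loop: malformed packet
                raise ValueError("compression loop")
            seen.add(offset)
            length = msg[offset]
            if length & 0xC0 == 0xC0:           # two-byte compression pointer
                offset = ((length & 0x3F) << 8) | msg[offset + 1]
            elif length == 0:                   # root label terminates the name
                return ".".join(labels) + "."
            else:
                labels.append(msg[offset + 1:offset + 1 + length].decode("ascii"))
                offset += 1 + length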


I'm talking about the whole kit and caboodle. All of the formats.

I'm pretty much of the opinion that almost all of these formats are no longer needed optimizations.


So we should ditch decades of work and switch to less efficient formats and accept the enormous aggregate performance hit and lose hard won security and interop benefits?

Brilliant!


I'm saying let's throw them all away starting today. We'll turn off the internet for a while but when we switch it back on it will be way better. :)

Kidding of course. Despite my optimism for this, I'm a pragmatist. It would take many years to switch, and would be a gradual process. I have just seen time and again how if you make things simpler, more natural, you reap drastic rewards. Look at the introduction of touch interfaces. Hundreds of millions of people are now using more powerful computers than most of us owned just a few years ago.

I believe there's a huge pent up demand for people who want to get their hands on the DNS layer of the Internet, and simplifying the protocols would be one big step toward allowing that to happen.


Perhaps you'd be interested in RubyDNS... also, what is so good about Space, compared to YAML, JSON, XML, and many other existing encodings?


Great question. Space is just doubling down on the good parts of XML, JSON, and YAML, while stripping out anything that isn't absolutely necessary. It turns out almost every feature in those encodings is unnecessary.

Space was a totally accidental discovery. We were just using a few ways to store objects (JSON, HAML, some others). We kept having to simplify the rules because we needed formats that worked with a variety of languages, and eventually we had a very simple YAML-like language. But even that was too complicated to support across languages, and so we kept cutting features and rules.

Pretty soon we were down to no punctuation. Just the space character and newline character to give structure to your objects. I was very surprised to discover that this was all you needed, no matter what domain you were working in.


> This is why you can't have a CNAME on a root domain like petekeen.net, because you generally have to have other records for that domain like MX.

The issue isn't with an MX record, which is optional; it's with the SOA record, which is required. BIND also requires at least one NS record.
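
To make the conflict concrete, here's a sketch of the records every zone apex has to carry (values illustrative); a CNAME would have to coexist with them, which the spec forbids:

    example.com.  3600  IN  SOA  ns1.example.com. admin.example.com. (2013072001 7200 3600 1209600 3600)
    example.com.  3600  IN  NS   ns1.example.com.

    ; illegal: a CNAME excludes all other data at the same name
    example.com.  3600  IN  CNAME  some-app.example.net.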


Does the "+trace" example work for other people? For me, after it lists the 13 root servers, it pauses for a while and says ";; connection timed out; no servers could be reached".

So is someone between me and these servers blocking tracing, or is DNS more unpredictable and weird than the author claims, or both?


> Almost always you'll want to redirect a bare domain like iskettlemanstillopen.com to www.iskettlemanstillopen.com.

I've always read the opposite: that using the www subdomain was bad form, that it was an anachronism, and that it meant people would need to remember extra chars that had nothing to do with your branding.


It almost certainly doesn't matter for branding. Normal people look past the www because it's so common. Putting your site on a subdomain gives you the distinct advantage of being able to CNAME to whatever you want, since CNAMEs don't work well on a bare domain. Putting a redirect in place means that your branding and marketing can just have "example.com" and it'll just automatically work.
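
Concretely, the setup looks something like this (names illustrative):

    ; the apex keeps a plain A record (plus the redirect to www)...
    example.com.      IN  A      192.0.2.10
    ; ...while the subdomain is free to CNAME to whatever host you like:
    www.example.com.  IN  CNAME  myapp.some-paas.example.net.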


Newbie question: I understand that by delegating DNS resolution for domains (and subdomains) to their zones, DNS scales well. However, isn't there a huge load on the global top-level domains? I imagine most sites use .com, and for all DNS requests for www.xxx.com the queries for . and .com may be heavily cached, but if xxx isn't very popular, wouldn't that put a huge number of requests against the .com nameservers to resolve the xxx.com domain? If we had a huge number of low-popularity xxx domains in the .com domain, won't all DNS queries have to hit the top-level .com nameservers?


Yes. There are currently 13 virtual nodes that handle .com and, just like for the root servers, each one of those virtual nodes is actually a group of machines all bound to the same IP address via anycast. There are hundreds of physical machines spread around the world handling the TLDs. The NS records that these servers return are readily cached with long lifetimes, though, so if you use any relatively popular DNS server, either your ISP's or Google's public server, the resolution process will be really short even for uncommon names.
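
You can see that delegation for yourself; something like the following should list the thirteen gtld-servers names (output abbreviated):

    $ dig +short NS com.
    a.gtld-servers.net.
    b.gtld-servers.net.
    ...
    m.gtld-servers.net.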


Can someone tell me who operates the root servers? Organizations? Companies? Governments?

I was just thinking: aren't these prime targets for hackers? I mean, if one root server were compromised, that would be a royal breach.


http://www.root-servers.org

The organizations across the top of that page run the root servers. As you can see on the map, there are quite a few physical servers behind those 13 root server letters, 359 to be exact.

The root servers don't actually replicate amongst themselves and they don't actually hold all that much data. They all serve the same set of zones, all from one text file that is updated infrequently, specifically the DNS Root Zone file on this page: http://www.iana.org/about/popular-links



