A distributed naming system intended to work with mesh networks like GNUnet.
GNS does not support names which are simultaneously global, secure and human-readable. Instead, names are either global and not human-readable or not globally unique and human-readable. In GNS, each user manages their own zones and can delegate subdomains to zones managed by other users.
For example, ICANN could just create a 'DNS zone' that would embed DNS as a zone within GNS.
Indeed that would work. In theory. Especially since we thought of that use case (delegation into DNS) with the GNS2DNS record type.
There is a BUT: you need an initial label for the ICANN zone in order to resolve the names.
Unless you have a resolver implementation that "hides" the zkey of ICANN in the UI. But technically, under the hood, a name for this ICANN zone would look like:
www.example.com.THEICANNZKEY...
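To make the "hide the zkey" idea concrete: a resolver UI could treat the final label as a zTLD whenever it looks like an encoded zone key. A toy heuristic in Python; the length threshold and the Crockford Base32 alphabet check are simplifying assumptions for illustration, not the RFC's exact encoding rules (a real resolver would decode and validate the zone key):

```python
# Toy split of a GNS name into (relative labels, zTLD).
# Assumption: a zTLD label is long (>= 52 chars here) and drawn from
# the Crockford Base32 alphabet. Placeholder heuristic only.
CROCKFORD_B32 = set("0123456789ABCDEFGHJKMNPQRSTVWXYZ")

def split_gns_name(name: str):
    labels = name.split(".")
    last = labels[-1].upper()
    if len(last) >= 52 and set(last) <= CROCKFORD_B32:
        return labels[:-1], labels[-1]   # resolve labels starting in that zone
    return labels, None                  # no zTLD; needs a start-zone mapping

# "www.example.com.<zkey>" -> (["www", "example", "com"], "<zkey>")
print(split_gns_name("www.example.com." + "0" * 58))
```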
ICANN could also publish the TLDs individually as zones, however, and you could have an "ICANN Start Zone" (see Start Zone in the RFC) consisting of the TLD/zone key mappings.
Or, I guess, "someone" could apply for a custom gTLD and link it. Of course, that "someone" would need the $200K needed to review the application and all that stuff :P
Eh, you realize that this very work on the GNU Name System prompted IETF to create the ".alt" zone for this purpose already, minus the $200k fee? Registration is open at https://gana.gnunet.org/dot-alt/dot_alt.html
If I understand correctly, they want a (globally unique, secure) GNS name and a (globally unique, human-friendly) traditional DNS name that acts as an alias for the GNS name via CNAME.
This can work, and sounds like a good compromise in that it lets machines and people who care deeply about security use your secure name (which is more portable than an IP address), while providing a human friendly name for people who don't care and just want things to work.
These are all valid deployment questions, which we tried to address in Appendix A.
In a nutshell, we expect that resolvers would ship with a (large) set of default "suffix-to-zone" mappings, that can be overridden by the user to provide a usable and convenient out-of-the box experience.
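A minimal sketch of what such a suffix-to-zone table could look like, assuming longest-suffix matching; the table contents and zone-key strings below are placeholders, not real GNS keys:

```python
# Hypothetical default suffix-to-zone mappings shipped with a resolver,
# overridable by the user. Zone keys are made-up placeholders.
START_ZONES = {
    "com": "ZKEY-FOR-COM-PLACEHOLDER",
    "example.com": "ZKEY-FOR-EXAMPLE-PLACEHOLDER",
}

def find_start_zone(name: str):
    """Return (remaining labels, zone key) for the longest matching suffix."""
    labels = name.split(".")
    for i in range(len(labels)):          # longest suffix is tried first
        suffix = ".".join(labels[i:])
        if suffix in START_ZONES:
            return labels[:i], START_ZONES[suffix]
    return labels, None                   # no mapping; fall back elsewhere

# Resolution of "www.example.com" starts in the "example.com" zone,
# since that is the longest configured suffix.
print(find_start_zone("www.example.com"))
```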
Note that "we expect" means that this would be the ideal scenario, not something to expect when installing our reference implementation right now.
Because if globally unique, human-readable DNS still works, I see no point in migrating off it. If the point is smoother migration, then we should start forgetting about human-readability, because it's going to disappear anyway.
TLS certs without having to buy a domain? Create a GNS domain, set a LEHO record with the necessary host name, and make your cert based on that? Obviously you'll need a CA that's willing to issue certs for GNS LEHO names, but that way you can use the current TLS CA system without having to actually spend money on a domain. Alternatively, have the CA issue wildcard certs for zTLDs and then you can manage your own zTLD without issue.
Would <UUID>.example.com not work? With first to register getting priority. Or <pubkey>.example.com with the corresponding private key needed to do updates.
The "nicer" domain I am referring to would be a normal domain from a registrar.
Note that this is in the Independent Submission stream, Informational category:
> This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not candidates for any level of Internet Standard; see Section 2 of RFC 7841.
People usually think of RFCs as being the output of the IETF, and the IETF is the biggest contributor by far. Roughly half of the IETF's RFCs are standards or on what's called the "standards track" (most Internet standards are formally at the "Proposed Standard" rather than "Standard" level). The remainder have some other status such as "Informational".
However, there are also other entities that publish into the RFC Series, including the Internet Research Task Force (IRTF), and the Internet Architecture Board (IAB). In addition, there is what's called the Independent Stream, in which an appointed editor just determines what documents can be published. Importantly, this last category hasn't gone through the IETF consensus process: they're just something someone wanted to publish as an RFC and the Independent Series Editor agreed. GNU Name System falls into this category.
This is correct.
I would like to add that the ISE also had us engage with a variety of stakeholders within the IETF (dnsop, for example, as part of discussions on RFC 9476) and also expected some third party reviews.
I have mixed feelings. A unitary root of naming and the dns is a huge value proposition to walk away from. Fitting gnu names under .alt is only partly ameliorative.
Momentum, random names and (possibly) higher latency. IIRC IPFS is particularly bad latency-wise but I'm not sure how much of that is name lookup vs file transfer, and that could be implementation specific. Name lookups are also very cacheable.
I think what's missing is that squatters and other bad actors are going to attack the distributed name system, too. It should be somehow resilient, and ideally resistant, against deliberate misuse by powerful parties. For instance, DoS attacks that pollute the namespace and could make the distributed naming service too slow or resource-intensive will necessarily be mounted.
This is one place where a significant proof of work, along the lines of Namecoin or handshake.org, would make sense. (Another place is password hashing, for example.)
> IPFS is particularly bad latency-wise [...] I'm not sure how much of that is name lookup
All of that is due to their mistake of trying to use a sessionful protocol for their DHT.
Bittorrent got this right -- sessionless DHT -- which is why IPFS remains a rounding error compared to bittorrent, and will remain so until they adopt a sessionless DHT.
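The sessionless approach amounts to one datagram out, one datagram back, with no handshake or per-peer connection state; a bare-bones sketch in the spirit of BitTorrent's KRPC "ping" over UDP (the message format here is a made-up simplification, not KRPC's actual bencoded messages):

```python
# Sessionless query/response over UDP: no connect(), no handshake,
# no per-peer state on the responding node. Simplified stand-in for
# a KRPC-style DHT ping.
import socket
import threading

def node(sock):
    # Stateless responder: handle whatever datagram arrives, from any peer.
    data, addr = sock.recvfrom(1024)
    if data == b"ping":
        sock.sendto(b"pong", addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=node, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"ping", srv.getsockname())    # no session setup at all
reply, _ = cli.recvfrom(1024)
print(reply)  # b'pong'
```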
GNS, in theory, could replace DNS (the technology) and reuse its current root governance model as default.
From the point of view of GNS, namespace governance is separate from name resolution protocols.
Of course, from the point of view of a lot of DNS folks, DNS is both: The governance (ICANN) and the technology (RFC 1035 et al) and indivisible.
Has anyone been following along with what the plan is for GNS? An RFC is quite cool but are there any zones being distributed over GNS to play around with?
It's a matter of personal opinion and perspective. I find their attitude to be generally toxic, even if it's rooted in good ideals; how you communicate with others does matter.