Great to see people learning from SDSI and SPKI, which have largely been forgotten despite their achievements.
Perhaps the spec would be better if it added a table of definitions at the top, just like it has a table of names for the cryptographic primitives.
I think there is some domain knowledge that can be reasonably assumed when writing documents.
"This document does not specifiy the properties of the underlying distributed hash table (DHT) which is required by any GNS implementation."
Not really. RFCs in the Informational category don't need consensus:
> An "Informational" specification is published for the general information of the Internet community, and does not represent an Internet community consensus or recommendation. The Informational designation is intended to provide for the timely publication of a very broad range of responsible informational documents from many sources, subject only to editorial considerations and to verification that there has been adequate coordination with the standards process (see section 4.2.3).
Example: RFC 2448, "AT&T's Error Resilient Video Transmission Technique".
This particular ID is informational rather than on the standards track.
In practice, that's not been true for many years. The process was recently clarified: "IETF Stream Documents Require IETF Rough Consensus" https://www.rfc-editor.org/rfc/rfc8789.html
They don't, but they also need to not conflict with any IETF activity:
"4.2.3 Procedures for Experimental and Informational RFCs
To ensure that the non-standards track Experimental and Informational
designations are not misused to circumvent the Internet Standards
Process, the IESG and the RFC Editor have agreed that the RFC Editor
will refer to the IESG any document submitted for Experimental or
Informational publication which, in the opinion of the RFC Editor,
may be related to work being done, or expected to be done, within the
Apart from that, it's unclear to me what the document is actually trying to do [cf. my comment in https://news.ycombinator.com/item?id=23769617]
GP claims getting consensus is hard. You argue that it’s not needed. Correct, but tangential.
This pointless bickering also adds zero new info, so I’ll stop here.
Nothing in the spec suggests it must run on GNUnet. This is almost like asking why should IETF standardize JSON-schema when gRPC is not standardized. One thing does not require the other or depend on the other.
> "This document does not specifiy the properties of the underlying distributed hash table (DHT) which is required by any GNS implementation."
Well, they do specify the interfaces; why should they care about the implementation?
> GNS resource records are published in a distributed hash table (DHT). We assume that a DHT provides two functions: GET(key) and PUT(key,value).
> Okay... so...
So, it's currently a disconnected lego brick hanging in the air. It serves no purpose until other documents accompany it that specify how to use it either for GNUnet, for IP, or something else. If there is more than one application document coming, at least one should (IMHO) be submitted at the same time¹ in order to aid review and understanding. If only one such application is expected, they should probably be merged into the same document.
Particularly, if it's to be applied for non-GNUnet (e.g. IP), it does need to say what underlying DHT is to be used. Otherwise it's not actually a naming system but 10 naming systems – which is, er, rather useless.
¹: it doesn't need to progress or be finalized at the same time, but the document should at least be published.
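For reference, the GET/PUT contract the draft assumes really is tiny. A hypothetical sketch (class and method names are mine, not the draft's; the in-memory dict stands in for a real overlay network):

```python
from typing import Dict, List

class DHT:
    """Minimal GET(key)/PUT(key, value) interface the GNS draft assumes.
    Any concrete DHT (Kademlia, R5N, ...) would have to provide this."""

    def __init__(self) -> None:
        # Toy in-memory store standing in for a distributed overlay.
        self._store: Dict[bytes, List[bytes]] = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._store.setdefault(key, []).append(value)

    def get(self, key: bytes) -> List[bytes]:
        return self._store.get(key, [])

# Two GNS peers can only resolve each other's records if they
# talk to the *same* DHT instance/network:
dht = DHT()
dht.put(b"record-key", b"encrypted-record-block")
assert dht.get(b"record-key") == [b"encrypted-record-block"]
```

The interface is trivial to state; the point of contention above is that which network implements it is left unspecified.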
RFC 6537 references OpenDHT as a "downwards" (wire direction) interface and points to XML-RPC calls and expected behavior on that. The GNS draft simply presumes the existence of a "DHT".
Maybe you could tell me which part of my comment is not true? I don't see what your example is intended to illustrate...
[Edited to add:]
P.S.: please read the guidelines at https://www.ietf.org/standards/process/informational-vs-expe... (section 3.)
GNS is very much something that can be "practiced" — or rather, it should be possible to practice. I don't think it can with your draft. I suppose that might be why you went for Informational rather than Experimental, but that kinda misses the point: you should be going for Experimental.
To walk through the guideline bulletpoints:
1. GNS can be practiced. Or rather: should be. It's a protocol.
2. If you want to push it through the IETF, you should probably open it to changes. What's the point otherwise? The IETF is not a rubber-stamping organization.
3. I'm pretty sure you're not publishing this as "dropped, just for the record"?
4. "If the IETF may publish something based on this on the standards track once we know how well this one works, it's Experimental." I feel that's exactly your intent?
5. Doesn't seem to apply. Maybe it should too?
All I'm saying is that your draft should be practice-able, but isn't, and I think it should be.
> Well, they do specify the interfaces; why should they care about the implementation?
The choice of DHT typically has protocol-level implications, such as the use of the XOR metric in Kademlia, or the bootstrap algorithm.
All nodes must use the same DHT.
As a result, this cannot be implemented without knowing the particular DHT in use.
Furthermore, different DHTs have distinct properties, including provisions for fault tolerance, resource recovery, latency bounds, and Byzantine tolerance, which seems like a concern here.
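To make the Kademlia example concrete: its routing is defined in terms of the XOR metric, so the choice of DHT is visible on the wire. A minimal illustration:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia distance between two IDs: bytewise XOR interpreted as
    an integer. Lookups iteratively contact peers minimizing this."""
    assert len(a) == len(b)
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

# Which peer is "closest" to a record key differs under XOR vs.
# numeric distance, so two nodes using different metrics disagree
# about where a record lives.
key = bytes.fromhex("80")
p1  = bytes.fromhex("7f")   # numerically adjacent, but XOR-distant
p2  = bytes.fromhex("c0")   # numerically far, but XOR-close
assert xor_distance(key, p1) == 0xff
assert xor_distance(key, p2) == 0x40
```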
> Furthermore, different DHTs have distinct properties, including provisions for fault tolerance, resource recovery, latency bounds, and Byzantine tolerance, which seems like a concern here.
What in the RFC is dependent or contingent on those? From what I read, none of those properties affect what is specified in the draft.
> All nodes must use the same DHT. As a result, this cannot be implemented without knowing the particular DHT in use.
Not according to the draft. Sure, if a zone uses a specific DHT, then nodes not using the same DHT won't be able to resolve addresses from that zone, but everything in this draft would still be valid.
The non-AEAD symmetric cipher is, IMHO, completely unjustifiable in 2020.
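To make the objection concrete: without an AEAD mode, every implementation has to hand-roll ciphertext integrity itself, typically via encrypt-then-MAC. A standard-library-only sketch of that extra machinery (a real design would just use AES-GCM or XChaCha20-Poly1305 and get this for free):

```python
import hmac, hashlib

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext
    # so any tampering is detectable before decryption.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_(mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext tampered with")
    return ct

key = b"k" * 32
blob = seal(key, b"opaque ciphertext bytes")
assert open_(key, blob) == b"opaque ciphertext bytes"

# Flipping a single ciphertext bit is caught -- exactly the property a
# bare (non-AEAD) block/stream cipher mode does not give you.
tampered = bytes([blob[0] ^ 1]) + blob[1:]
try:
    open_(key, tampered)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```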
So what happens if some sort of weakness is discovered in one or both of them? The internet falls apart?
You need to define what is supposed to happen if your cryptography breaks.
If ECDSA or Curve25519 are broken (e.g. by a quantum computer), we can simply introduce a PKEY replacement (such as PKEY2) and migrate.
EDIT: However, point well taken. This needs more space in the draft and is definitely a point to address.
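A hedged sketch of what such a migration path could look like: dispatch on the zone-key record type, so a hypothetical PKEY2 can coexist with PKEY during the transition. The "PKEY2" type number is invented here, and HMAC stands in for the real ECDSA/EdDSA checks purely so the sketch is self-contained and runnable:

```python
import hmac, hashlib
from typing import Callable, Dict

# Registry: zone-key record type number -> signature verification routine.
# A break in one scheme is handled by registering a new type; resolvers
# that know both types interoperate while zones migrate.
VERIFIERS: Dict[int, Callable[[bytes, bytes, bytes], bool]] = {}

def register(rtype: int):
    def wrap(fn):
        VERIFIERS[rtype] = fn
        return fn
    return wrap

@register(65536)  # PKEY-style type (number illustrative)
def verify_pkey(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    # Stand-in for the current signature scheme (ECDSA/EdDSA in the draft).
    return hmac.compare_digest(
        hmac.new(pubkey, msg, hashlib.sha256).digest(), sig)

@register(65556)  # hypothetical post-quantum "PKEY2" (number invented)
def verify_pkey2(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    # Stand-in for the replacement scheme.
    return hmac.compare_digest(
        hmac.new(pubkey, msg, hashlib.sha512).digest(), sig)

def verify(rtype: int, pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    if rtype not in VERIFIERS:
        raise ValueError(f"unknown zone-key record type {rtype}")
    return VERIFIERS[rtype](pubkey, msg, sig)

pub, msg = b"zone-key", b"record block"
sig = hmac.new(pub, msg, hashlib.sha256).digest()
assert verify(65536, pub, msg, sig)      # old scheme still verifies
assert not verify(65556, pub, msg, sig)  # new scheme rejects old sigs
```

As the sibling comment notes, the RFC side of this is the easy part; the field upgrade is the painful one.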
At some point it will break and they will have to transition. It will not be hard to create a new RFC for that task; it will be hard to replace all of the implementations in the field. We are getting better at this with our experience in TLS, but it is still somewhat painful.
Crypto agility can be really nice but it can also be messy. Sometimes it's best to be opinionated and sometimes it's easier to just be flexible.