IETF Draft: Centralization, Decentralization, and Internet Standards (ietf.org)
134 points by rapnie on July 11, 2022 | 31 comments



A significant, welcome and overdue document.

As well as providing some carefully worded definitions that are really useful in discussions, it sets out a stance of the IETF against massive commercial centralisation.

> Despite being designed and operated as a decentralized network-of-networks, the Internet is continuously subjected to forces that encourage centralization.

Practically the first thing I do in any networking course I teach these days is explain, to students who think "The Internet" is Facebook and Google, what a catastrophe "Mainframe 2.0" really is.

> call into question what role architectural regulation -- in particular, that performed by open standards bodies such as the IETF -- should play in preventing, mitigating, and controlling Internet centralization.

IETF rightfully challenges (a decade too late) what this deviation from design principles really means, including the threat to its own relevance.

> The primary audience for this document is the engineers who design and standardize Internet protocols.

A problem here is that, without a civics/ethics foundation, contemporary engineers may not understand the deeper nature of a protocol versus a black-box platform. The IETF may be moving into a new educational role, explaining why a protocol is more desirable.

I am still reading, but so far I have not seen the key issue of resilience addressed (availability is mentioned), including the impact on national infrastructure resilience.


> A problem here is that, without a civics/ethics foundation, contemporary engineers may not understand the deeper nature of a protocol versus a black-box platform. The IETF may be moving into a new educational role, explaining why a protocol is more desirable.

There is the Evolvability, Deployability, & Maintainability program [0], which has some good documents that relate to that.

[0] https://datatracker.ietf.org/program/edm/about/


Simplicity is something that got completely lost in most (not all) modern Internet protocols. As an example, look at QUIC & HTTP/3 and their many extensions. The IETF specification of QUIC is already many times longer than the one for TCP, and then there are the many extensions like MASQUE, HTTP/3 datagrams, etc. that are themselves quite complex. This complexity in turn makes it quite hard to implement these protocols completely and correctly. For example, Cloudflare's official QUIC implementation (quiche) still lacks support for TLS key updates, which leads to QUIC connections simply timing out after the server requests a key update. So if you use a quic-go server and a quiche-based client, you're currently out of luck unless you patch the server to not perform any key updates (which is really bad for long-lived connections, due to the lack of forward secrecy).
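
For context, here's roughly what a minimal quic-go server looks like (just a sketch: the import path is the current one, the cert files are placeholders, and "example-proto" is a made-up ALPN string). The point is that quic.Config exposes plenty of knobs, but, as far as I can tell, none for TLS key updates; those live deep in the TLS/packet layer, which is why avoiding them means patching the library:

    package main

    import (
        "context"
        "crypto/tls"
        "log"

        "github.com/quic-go/quic-go"
    )

    func main() {
        // QUIC requires TLS 1.3; cert.pem/key.pem are placeholder files.
        cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
        if err != nil {
            log.Fatal(err)
        }
        tlsConf := &tls.Config{
            Certificates: []tls.Certificate{cert},
            NextProtos:   []string{"example-proto"}, // ALPN is mandatory in QUIC
        }

        // quic.Config has fields for idle timeouts, datagram support, etc.,
        // but (as far as I can tell) no switch for TLS key updates.
        ln, err := quic.ListenAddr("localhost:4242", tlsConf, &quic.Config{})
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept(context.Background())
            if err != nil {
                log.Fatal(err)
            }
            go conn.CloseWithError(0, "demo only") // a real server would open streams here
        }
    }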


"The goals of completeness and diversity are sometimes in tension. If a standard is extremely complex, it may discourage implementation diversity because the cost of a complete implementation is too high (consider: Web browsers). On the other hand, if the specification is too simple, it may not offer enough functionality to be complete, and the resulting proprietary extensions may make switching difficult (see Section 4.6)."

Noncommercial internet user here. I will gladly trade "functionality" for simplicity and implementation diversity, i.e., choice. Often "functionality" is not something internet users ask for, but something standards authors want, where either the authors are commercial entities purporting^1 to act on behalf of internet users or they are under pressure from commercial interests wanting to exploit internet users, i.e., internet traffic.

1. In truth they act on their own behalf and to satisfy their paying customers, e.g., advertisers or other "tech" companies, not internet users.


QUIC does a lot more than "the one for TCP" does. And modern TCP consists of more than just one RFC anyway (which you already hinted at).

I guess the art in protocol design is to have as few mandatory-to-implement parts as possible, each minimized in complexity, so that a minimal implementation is doable with a reasonable amount of effort while already achieving a good result (and UX). Then the optional parts can be added piece by piece, after the implementation has already been published/released.
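
To make that concrete, here's a toy sketch (an entirely hypothetical TLV wire format, not any real protocol) of the classic trick that enables this: a tiny mandatory core that understands only the framing and silently skips unknown, optional extensions, so new ones can ship later without breaking old implementations:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    type Extension struct {
        Type  uint16
        Value []byte
    }

    // parseExtensions is the whole mandatory core: it understands only the
    // framing, hands known types back to the caller, and silently skips
    // unknown (optional) ones, so new extensions can ship later without
    // breaking old implementations.
    func parseExtensions(buf []byte, known map[uint16]bool) ([]Extension, error) {
        var exts []Extension
        for len(buf) > 0 {
            if len(buf) < 4 {
                return nil, fmt.Errorf("truncated extension header")
            }
            typ := binary.BigEndian.Uint16(buf[0:2])
            n := int(binary.BigEndian.Uint16(buf[2:4]))
            if len(buf) < 4+n {
                return nil, fmt.Errorf("truncated extension value")
            }
            if known[typ] {
                exts = append(exts, Extension{typ, buf[4 : 4+n]})
            }
            buf = buf[4+n:]
        }
        return exts, nil
    }

    func main() {
        // One known extension (type 1), then one unknown (type 99).
        wire := []byte{0, 1, 0, 2, 0xCA, 0xFE, 0, 99, 0, 1, 0xFF}
        exts, _ := parseExtensions(wire, map[uint16]bool{1: true})
        fmt.Println(exts) // only type 1 survives; type 99 is skipped, not an error
    }

TLS extensions and QUIC transport parameters both follow roughly this shape.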


Simplicity is something that got completely lost in all modern computing.


Simplicity is something lost in society. Most software complexity stems from that IMO.


That may be a factor, but I think a lot of it just comes from the tendency of sophomore engineers to want to use all the things in as many ways as possible.

https://www.smart-jokes.org/programmer-evolution.html


I saw a version of that joke on a BBS in the 90s. I definitely remember the manager progression at the end. But the 90s version I saw also had a very over-engineered Windows version. I wish I remembered more about it than that.


> Yet another example of multi-stakeholderism is the standardization of Internet protocols themselves. Because a specification controls implementation behavior, the standardization process can be seen as a single point of control. As a result, Internet standards bodies like the IETF allow open participation....

This cuts to the heart of it. In my book "The Big Bucks" the characters live through this, and in the 80s it was very often assumed that "multi-stakeholderism" meant a government-chartered body like the CCITT. In fact, in 1990, the federal government actually mandated OSI via GOSIP (there was no commercial Internet then). The consensus on the internet-history mailing list is that by then the war was already over, and TCP had won.

After all, governments MUST be the proper place to balance all the competing interests, mustn't they? So why didn't it work?

Well, TCP was "simple" enough that anyone could implement it, whereas OSI had a whole bunch of "profiles" to accommodate what everyone wanted. As a result, interoperability was impossible as a practical matter.

Network operators didn't even try OSI, since TCP had the Interop conferences, where vendors came and proved themselves good TCP citizens. OSI had nothing remotely comparable. With the IETF, things didn't become standards until you had multiple interoperating implementations.

It's a different world now, though. The stakeholders are not all grad students and defense company network administrators. However, a government-sponsored "standards" organization won't work any better now than it did then.


The importance of those Interop conferences cannot be overstated. Even the simplest, most precisely worded standard will always be implemented differently. Those conferences were a very smart way for the customers to apply pressure to vendors in order to get them to work out their incompatibilities.


Indeed. And they were WAY fun, too. I went to the 1988 one in Santa Clara, and some in the early 90s.


Centralization is an inherently human condition where we tend to entrust relatively successful entities with more power, such as a corporation choosing the "best" cloud vendor, or people joining the same social network as their friends.

Is it even possible for the IETF (or a similar standardization body) to mitigate it?


Yes. You have made a false/weak assumption. Humans do not always favor centralization; if we did, you would expect our society to be eusocial, like bees or ants. Humans aren't machines; we adjust our level of cooperation and competition to suit our environment, and the technology in our environment can favor either centralization or decentralization of social structures.


> Centralization is an inherently human condition

I disagree. Humans are successful because we exhibit rather few immutable "inherent conditions", and there's volumes of literature from the anthropologists on that flexibility.

Maybe what you mean is that systems (which complex societies and networks are) exhibit, amongst other things, an inherent possibility of agglomeration. Sometimes we call that a "network effect" - power to the powerful, riches to the rich, and so on.

Yes, that's true of inanimate systems, like star formation. Human beings, and indeed other life forms, offer the precise counter to that force by being agents that choose an optimal level of proximity and connectivity. I like to use the analogy of matter, balanced between strong and weak forces of repulsion and attraction. People have a Goldilocks point for connection complexity, as seen in Robin Dunbar's number or George Miller's magic "seven plus or minus two", and this replicates at various levels of organisational scale, as people like Thomas Schelling and Fred Brooks have indicated.

Unfortunately the way "network effects" are talked about today in tech creates a distortion around such dynamic equilibriums, painting them as necessarily "winner takes all".

> Is it even possible for the IETF (or a similar standardization body) to mitigate it?

That's a really good question. And several questions inside it to unpack. Their remit? Their capability? Their moral role?

I think the answer is broadly "Yes" (to all), but for small values of "it". What requires and deserves protection (morally), and can be protected (practically, via regulation), is the substrate on which other things are built. In the IETF document, protocols like IP, TCP and DNS are mentioned frequently, but obviously there's a much larger suite. That, in concert with anti-trust, competition and interoperability laws, will allow choice around as much or as little centralisation as people want. The thrust (insofar as I read it) is that new protocols should continue to adhere to the same standards and values as the foundation.

The IETF say there cannot be an "Internet Police", but in many ways that's exactly what is needed, and maybe what they wish they were - in a benevolent sense. Because ultimately the corporations are their own worst enemy: they will destroy the foundation on which they're built. Like children that need guidance, they need organisations like the IETF to act as a parent and stop the cannibalism. Or, as verbal-judo proponent George Thompson said... "to help you make the best decisions for yourself now, that you would make if you were thinking clearly in some future frame of regret."


> 4. By design, distributed consensus protocols diffuse responsibility for a function among several difficult-to-identify parties. While this may be an effective way to prevent some kinds of centralization, it also means that making someone accountable for how the function is performed is difficult, and often impossible. While the protocol might use cryptographic techniques to assure correct operation, they may not capture all requirements, and may not be correctly used by the protocol designers.

Ummm, what? If something is decentralized, then that's kind of the point: you can't hold "someone accountable for how the function is performed". If you could hold someone accountable, then you would be able to censor the function at that point of control. This just seems like a criticism of decentralization as a whole.

Also, at the beginning of the Decentralization Techniques section they said...

> "Choosing the appropriate techniques for decentralization requires balancing the specific goals of the function against centralization risk, because completely precluding all forms of centralization through technical means is rarely achievable."

So is it that decentralization is impossible to achieve or that it's achievable, but undesirable? I feel like I'm being gaslighted by the IETF.


Not all parties are necessarily equal: if you have a distributed system, but 90% of the nodes are controlled by Google (or Meta, or Amazon, etc.) do you actually have a distributed system? It can be important to know who controls the nodes in a distributed system, and this can often be opaque or designed to be undiscoverable. This is the concern in those places.

Merely knowing who controls the nodes doesn't mean you can actually censor them trivially: if you have thousands of nodes controlled by different organisations globally, you'll have a much harder time censoring them than in a distributed system where a single organisation has a significant share.
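
To make "who controls the nodes" measurable, here's a toy sketch (my own illustration, nothing from the draft): count how many distinct operators you'd have to co-opt before they jointly control a majority of nodes, similar in spirit to the "Nakamoto coefficient" used for blockchains:

    package main

    import (
        "fmt"
        "sort"
    )

    // minControllingSet: how many distinct operators must cooperate (or be
    // coerced) before they jointly control more than half of all nodes.
    func minControllingSet(nodesPerOperator map[string]int) int {
        counts := make([]int, 0, len(nodesPerOperator))
        total := 0
        for _, n := range nodesPerOperator {
            counts = append(counts, n)
            total += n
        }
        sort.Sort(sort.Reverse(sort.IntSlice(counts)))
        sum, k := 0, 0
        for _, n := range counts {
            sum += n
            k++
            if 2*sum > total {
                break
            }
        }
        return k
    }

    func main() {
        // 90% of nodes under one operator: prints 1.
        fmt.Println(minControllingSet(map[string]int{"bigco": 900, "rest": 100}))

        // The same 1000 nodes spread evenly over 100 operators: prints 51.
        even := map[string]int{}
        for i := 0; i < 100; i++ {
            even[fmt.Sprintf("op%d", i)] = 10
        }
        fmt.Println(minControllingSet(even))
    }

A value of 1 means a single organisation can unilaterally capture the system; higher is better.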


If 90% of the nodes are controlled by Google, then I think the answer is clear that you do NOT in fact have a distributed system. If this occurs, then the system was not designed well and should be abandoned for a new one.

Agreed, you want the nodes to be controlled by many different entities globally, but that should be a function of the technical design of the distributed system, rather than requiring an identified responsible party who is accountable for this objective. Privacy is crucial to ensure a distributed system continues to function; otherwise, orgs or individuals running nodes could be censored en masse by authorities, or participants might stop using the system due to security concerns.


> So is it that decentralization is impossible to achieve or that it's achievable, but undesirable? I feel like I'm being gaslighted by the IETF.

While reading this, I tried to apply what they mention at the beginning of the Centralization and Decentralization sections, namely that both are continuums.

They try to quantify this using "centralization risk". For some functions, greater decentralization might not be worth the effort, might actually be undesirable, or might even be impossible for technical reasons. For others, and these are the greater number, everything should be done to prevent centralization.


As a note, this is a draft informational RFC from one member, albeit a cool one :). It is not an overall stance, consensus, or best-practices statement of the IETF. From RFC 2026:

> An "Informational" specification is published for the general information of the Internet community, and does not represent an Internet community consensus or recommendation.

A lot of people associate "RFC" with standards track RFCs so I felt this could cause some confusion.


Not to be pedantic but this is not an RFC (yet). It is only a draft and should hopefully become an RFC in the future. The basic difference is that a draft is versioned and can change in content but an RFC is stable.

Also, RFCs are certainly not only standards track. There is a mix of types including Informational, Experimental and Standards track RFCs.


I agree 100% it's not yet a true "RFC". "Internet-Draft" would be the proper terminology, but I couldn't figure out how to work that explanation in cleanly without distracting from the main point that it's just informational, so I fudged "draft informational RFC" instead.

There was no claim that RFCs must be standards track; that is actually a false association the last line explicitly calls out people for making. Maybe it should have been made even clearer that this is a false assumption, without relying on context.

I wonder if RFC 2026 is worthy of its own HN post. Just submitted :).


Even a "draft" RFC like this one goes through an extensive review process. My own RFC (1697) never advanced beyond Draft stage, but it was still the product of an open process attended by all the major RDBMS vendors.

The stages of "standardness" have been there since the 1980s. Many important RFCs have never gone beyond Draft.


The main note here is more that this is an informational RFC, which happens to still be a draft, unlike e.g. RFC 1697, which is standards track. For non-standards-track documents the review process can be extremely light in comparison, since you're not claiming there is consensus or an official recommendation. For Informational documents you pretty much submit, and as long as the editor agrees it's in the correct category, somewhat reasonable, and formatted appropriately, it will be published. See section 4.2.3 of the aforementioned RFC 2026 (which is itself a Best Current Practice, an entirely different group of documents!) for the specific details.

As an example of an equivalent informational draft I was involved in (https://www.ietf.org/archive/id/draft-lapuh-spb-deployment-0...), I can say it reached this stage with basically no review.

If you wanted to have an equivalently published document on centralisation stating the exact opposite of everything this one does you could have it up by the end of the month without worrying about review.


Thanks. I realized after saying that that "Informational" RFCs are different from standards-track ones.

That said, they say right at the top that this is Informational, don't they?

Also, if you did try "an equivalently published document on centralisation stating the exact opposite" of the IETF position, I have a suspicion you'd run into some problems. Wouldn't you? At least some public statements that "that's just his opinion, man."


The main trouble would be that you'd need to sell it as sounding reasonable, so you'd actually have to have reasoning, not just state the opposite for the heck of it.

That said, there is nothing wrong with conflicting informational RFCs; they are meant to share information with the community, not to convey consensus, so it's fine to have conflicting views as long as each sources from differing information. Still, it's generally considered better for you and the other person to try to come to a common understanding and share that information as one informational document.


RFCs for IPv6 (scale the internet horizontally) and NAT (scale the internet vertically/by adding hierarchy) were both published in the late 90s.

But NAT seems to have gotten a lot more practical use, with IPv6 very gradually getting adopted over the decades.

Centralization of various forms is natural and useful and shouldn't be viewed as some ideological enemy.


> Centralization of various forms is natural and useful and shouldn't be viewed as some ideological enemy

The emergence of monopolies, nepotism and corruption is also very natural and useful, but certainly not to me as a consumer.


Yes we should talk specifically about those issues rather than the ambiguous "centralization" bogeyman.


Section 2.1 specifically talks about the concerns including creating a monoculture, inhibiting competition and limiting innovation.


huh? centralisation and monopoly are basically synonymous



