
Decentralization Dilemma - molus
https://www.molus.org/blog/decentralization-dilemma/
======
gojomo
An important extra dimension is whether, within a certain system, the natural
tendency towards centralization-for-convenience also generates centralization
lock-in.

If addressing people/content depends on canonical global registries or fixed
network locations, that becomes the case. Centralization occurs via a one-way
ratchet: sought for ease, but then practically irreversible due to high
switching costs. "Federated" systems usually still have this centralization
problem.

Systems based on cryptographically-derived names (for content or identities)
offer a potential escape route. While you might choose certain
centralized/dominant providers for a quick-and-easy start, retaining one's own
stable crypto identity ensures absolute portability and location/provider
indifference.
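One way to picture a cryptographically-derived name is content addressing, sketched below in Python. The `sha256-` prefix and example bytes are illustrative only, not any particular protocol's format:

```python
import hashlib

def content_address(data: bytes) -> str:
    """A cryptographically-derived name for some bytes.

    The name depends only on the content, not on which host or
    provider serves it, so existing links survive a provider switch.
    """
    return "sha256-" + hashlib.sha256(data).hexdigest()

post = b"hello, decentralized world"
addr = content_address(post)

# The same bytes always yield the same address, regardless of host.
assert content_address(post) == addr
print(addr)
```

Because any provider serving the same bytes produces the same name, switching providers never breaks the name, which is exactly the "right of exit" property described above.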

Such an architecturally-enforced right-of-exit can also make higher levels of
centralization tolerable before centralized power becomes a problem. The
_potential_ for choosing other arrangements, rapidly and with minimal
disruption, deters abusive acts by larger proprietors.

------
ocdtrekkie
Given what I just posted about Mastodon a couple hours ago, I definitely
recognize the challenge of recentralization tendencies:
[https://news.ycombinator.com/item?id=16765156](https://news.ycombinator.com/item?id=16765156)

I think the biggest issue is our home connections. I'd rather have a box in my
house, but the need for the connection to not suck means I can't, which means
there's added hosting costs, I have to trust someone to manage it, etc. If we
had good symmetrical-speed Internet, I'd say a box with some reasonable
defaults you plug in at home would be the way to go, and it wouldn't make
sense to recentralize, because you aren't just switching hosts, you're leaving
your house.

~~~
CaptSpify
> I'd rather have a box in my house, but the need for the connection to not
> suck means I can't

I agree with you to some extent, but I also think it's far more feasible than
most people realize. I run a bunch of services (mail, website, repos, etc.)
out of my house, and it's super rare that I have any problems with it. I know
it wouldn't work for everyone (I'm just lucky enough to live in a city that
has decent internet), but I think it is entirely doable for a lot more people
than we realize.

~~~
Steltek
Doesn't email have a lot of problems with residential IPs? Either they're
blacklisted out the wazoo or the provider blocks SMTP? It's not just about
having "decent Internet".

~~~
CaptSpify
Not that I've ever experienced.

The provider may block it, but IME that's as easy as sending them an email to
open the port (which I would consider under the umbrella of "decent
internet").

I can't say I've ever had my IP blacklisted that I know of, and I've never had
delivery problems.

------
hathawsh
IMHO there is a simple solution that will likely lead to mass adoption of
decentralized services: create a wi-fi router that lets home users install
server apps on the router. Make it easy for users to install server apps and
create an open marketplace for server apps.

Historically, decentralized services don't get mass adoption because computers
or mobile devices are off or unavailable most of the time. They are also
behind NAT.

The wireless router is in a unique position. It's always on, it's a first-
class citizen on the Internet, it's not very expensive, its ownership is clear
and simple, and it often has significant unused computing power. It looks like
a major, unexploited opportunity to me.

I'm waiting for some company to take advantage of this opportunity. Think of
what this could do for decentralized social networking, file storage, currency
exchange, etc. It's quite easy to think of possible server apps; just think of
any centralized service and consider how valuable it would be to decentralize
it.

I would do it myself if I weren't already involved in trying to build a
possibly huge business or two. :-)

~~~
carussell
I used to think so, too. Over the winter, though, someone here on HN pointed
me towards Beaker Browser and Dat. (For a completely unrelated problem. Turns
out, it works pretty well as a solution to this problem, too.)

It seems likelier that a P2P web based on Dat (or something like it) will gain
adoption in the large before embedded-device manufacturers (a) willingly take
on the role required here and (b) get it right, which are each big asks on
their own. There's also the hidden third problem (c): getting
the masses to actually go out and upgrade a piece of hardware that would
actually support this kind of thing. It's a lot easier to just download a new
browser (or an update for their current browser) and sign up for a service
that will act as an always-on peer on your behalf—for exactly the same reason
that the centralization of the Web happened in the first place.

~~~
hathawsh
Thanks for the pointer! Dat looks very powerful. I will study more.

------
tw1010
This is a great point. The higher the derivative of the learning curve (the
faster people can pick the product up), the higher the adoption rate. It's as
if the product sweeps into the population and the magnitude of that derivative
acts as a friction or an accelerant. Beyond this piece of the puzzle there is
also the incentive structure that surrounds the entire network, like an
invisible fluid that you only notice when you take a step back. If demand is
high and the market is ready for the idea, it'll flow as easily as through
water. But if there is no demand to pull it along, like a carrot in front of a
horse, then the learning rate won't matter one iota, because the incentive
structure is as dense as jello. Decentralized social networks are an example
where this is the case: they're easy enough to learn, but without demand, they
won't propagate.

------
kienankb
Maybe the key to improving incentive is to bind the acts of participating in
and supporting a decentralized service so that simply using it helps construct
the network itself, i.e. every client is itself a federated server. The
tradeoff is that the service has to be robust enough to support a certain
percentage of clients being offline at any moment, and it certainly takes more
effort to construct a client that includes what would normally be delegated to
a server.
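The offline-tolerance tradeoff can be sanity-checked with back-of-envelope arithmetic (the 70% offline figure below is made up, purely for illustration): if each client holding a replica is offline independently with probability p, data replicated to k clients is unavailable with probability p**k.

```python
def unavailability(p_offline: float, replicas: int) -> float:
    # Probability that *every* client holding a replica is offline
    # at the same moment, assuming independent uptimes.
    return p_offline ** replicas

# Even flaky clients (offline 70% of the time here, an assumed
# figure) give workable odds once replication is aggressive enough.
for k in (1, 5, 10):
    print(k, unavailability(0.7, k))
```

With ten replicas the unavailability drops below 3%, which is why client-as-server designs lean so heavily on replication factor.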

(I'm sure someone's already thought of/worked with this idea before, though.)

~~~
Ralz
I remember hearing in a podcast that openbazaar.org tried this approach, but
because of unreliable clients, parts of the network would be completely
inaccessible. They then went with a partially decentralized solution, with a
few main nodes always supporting the system. I'm not sure what has changed in
the design since that podcast episode, though.

~~~
tylersmith
I'm not sure which podcast that was. Currently, content is served by others
who have viewed it, but new content gets pushed to a few configurable nodes
for storage before anybody has viewed it. This allows vendors to make changes
and shut down the software immediately instead of needing to wait until
somebody has viewed their change.

------
marknadal
Yes, this is why federated systems (Mastodon, Matrix, etc.) will just repeat
history, like email centralizing into Gmail.

However, IPFS, Scuttlebutt, GUN (full disclosure: mine ->
[https://github.com/amark/gun](https://github.com/amark/gun) ), Beaker, etc.
don't have this problem. Most of these, though, are tools for building
decentralized apps, not necessarily the apps themselves - so there is plenty
of love that we all need to have for enticing designers to build easy and
beautiful apps.

ZeroNet seemed pretty easy though and well designed, so I'm not sure about the
author's grievances against that one. Although it is more of an app (or app
store) than a library/tool.

~~~
pspeter3
Have you seen people setting up GUN instances?

~~~
marknadal
Yup! It is super easy (like, 1-click deploy buttons) and lots of people in
the community run one. Most people use them for their own project though, not
necessarily as a collective network. Several people have said we should switch
it to automatically connect to other peers, so that we can create a
decentralized network, which I think is a good idea to try. I just originally
didn't want to put random outbound network calls in the code because that
seemed malware/"call home"/suspicious. :P

------
tomcooks
I agree 101% with the author but I fail to understand how this case is
different from anything else in the world. From pizza to operating systems,
from orgasms to entertainment, people tend to be lazy and opt for the simpler,
low stress alternative - no?

~~~
sewercake
When the usability of the service (device, whatever) depends on, or is
proportional to, the number of other people using it, then adoption rate
becomes very important not only for the success of the service, but for its
usefulness too.

e.g.: There can be a sewing kit that has a high learning curve but would
result in faster production / better products. While this may result in the
kit not being (economically) successful, it would still be just as useful
to each individual who bought it and learned to use it properly.

------
kodablah
Depends on how centralization is framed. Common CPU company (e.g. Intel)?
Common cloud company (e.g. AWS)? Common website company (e.g. Facebook)? If we
can separate these terms, I think people are asking to not have a common DB
store maintained by a common website company. Framed in that way, we should
strive to allow people to store things on their desktop computers and treat
them as home servers. For nomadic or not-always-on desktop users, a
monetization approach for these decentralized networks would be to
resell/affiliate with hosting providers (where the user, not you, owns the
AWS account) and do one-click provisioning and whatnot.

People might re-centralize for discovery, but shouldn't need to for storage or
other conveniences. And centralized discovery is not that bad of a problem.

------
VikingCoder
There's a Law, what's it called?

The virality of a signal is related to the simplest pipe it will pass through.

Dang it, I hate when I can't remember names like this.

------
tCfD
Not the most in depth HN comment, but my user ID - tCfD - stands for The
Center For Decentralization, an intentional oxymoron which I sometimes use in
casual conversation to call attention to the fundamentally paradoxical (and
the closer you get, the more they confound) premises of this subject.

------
remir
Most people outside the "tech bubble" don't care about decentralization or
open source software. These ideals are not enough to convince them to use a
new platform.

At this stage, I don't see how a new decentralized social network could
realistically compete against the big guys.

------
danbruc
What problems are people trying to solve with decentralization? The list of
problems I can think of for which the best solution is decentralization is
rather short. And in the realm of mass-market consumer products it gets even
shorter and could very well be empty.

~~~
molus
> What problems are people trying to solve with decentralization?

If you are following the rise of "dapps" or blockchain applications (circa
2017) then I would answer your question with "almost anything Internet-
related".

Some examples:

* Decentralized Storage (IPFS)

* Decentralized Computation (Golem)

* Decentralized Professional Networking (Indorse)

There are many other examples, but a lot of them are vaporware and not worth
mentioning.

~~~
danbruc
Those are solutions; what are the problems they try to solve? What is wrong
with storing your files on Dropbox, Google Drive or Amazon S3? What is the
advantage of IPFS?

~~~
fenwick67
[https://ipfs.io/#uses](https://ipfs.io/#uses)

~~~
danbruc
Those points are neither specific to IPFS nor to a decentralized solution, and
they do not really highlight any advantage of IPFS over something like Dropbox
to begin with. The best point is probably saving bandwidth due to its peer-to-
peer nature. There are certainly more traditional solutions where you can save
bandwidth if you can convince other people to provide it for you, for example
classical FTP mirrors, but this is certainly a valid use case for a
distributed solution. However, it is more to the benefit of the service
provider than the user, and it really hinges on the popularity of the data you
want to distribute: if it is not very popular, just using a file server will
most likely provide a better user experience than depending on a few
potentially low-bandwidth peers.

------
coldacid
Good article, but I think the adoption rate/learning curve graph is a bit too
generous. By the time you've hit 10 on the learning curve, adoption's probably
already down to 10%. People are exceptionally lazy.

------
jasode
You brought up good points for discussion.

 _> 1\. The relative difficulty of running your own as an absolute beginner_

Ease-of-use is brought up several times as a barrier to decentralization, but
I don't think it is really the fundamental issue. Yes, it _appears_ to be the
problem, but it really isn't. If hackers invent a super-simple UI, or a
hypothetical set-and-forget "IPFS/Sandstorm appliance", or an auto-configuring
node... it still won't help widespread adoption of decentralization.

I was an expert at installing SMTP email servers, but I don't bother running a
"decentralized" home email server anymore. Many sysadmin experts have
abandoned the idea of running home email servers. If decentralization is the
ideal, why do hackers like us not follow it? We're certainly not waiting for
beginner-friendly email server software. (My previous comment about
this.[1])

Also, any attempt to bake the "ease-of-use" into an idiot-proof software
package or hardware appliance becomes its own vector for an attack on the
unwitting homeowner. (Previous comment.[2])

 _> 2\. The eventual centralization on top of the most well-run versions (like
Matrix)_

Yes, this is the most unsolved problem by far: _costs_. One can write or
decree that a _protocol specification_ be distributed, but it doesn't change
the fact that the _real-world implementation_ of that protocol always costs
real money, and the _money spent is not distributed_. That leads to
centralization. (Previous comments.[3][4])

I've been studying the decentralization space for years and have read every
whitepaper about Diaspora, IPFS, Filecoin, Sandstorm, Mastodon, bitcoin, etc
and nobody has figured this out. (Suggestion to HN readers... every time you
see a proposal for a new decentralized protocol, do a Ctrl+F for "costs" and
"money". It's a very under discussed topic.)

Real _costs of hardware_ such as cpu+disk+bandwidth and _costs of human labor_
such as trust+maintenance are inescapable and it is the #1 puzzle to
decentralization.

E.g. Mastodon was recently suggested in various HN threads about "alternative
to Facebook". Hmmmm.... if cpu+disk are not free... who's paying for the
mastodon servers?!? Well, I see that several volunteers run Patreon
crowdfunding to keep the lights on.[5] That's very noble but that funding
model is also not scalable. That recreates how many BBS (bulletin board
systems) were run in the 1980s with dial up modems. Only a small group of
enthusiasts were users on each server.

[1]
[https://news.ycombinator.com/item?id=15526089](https://news.ycombinator.com/item?id=15526089)

[2]
[https://news.ycombinator.com/item?id=11861683](https://news.ycombinator.com/item?id=11861683)

[3]
[https://news.ycombinator.com/item?id=14125730](https://news.ycombinator.com/item?id=14125730)

[4]
[https://news.ycombinator.com/item?id=15878668](https://news.ycombinator.com/item?id=15878668)

[5]
[https://www.google.com/search?q=mastadon+patreon](https://www.google.com/search?q=mastadon+patreon)

~~~
zzzcpan
Yeah, ease of use really isn't a problem at all. Bitcoin and bittorrent are
the prime examples of that.

But it's not exactly about the costs either; it's about incentives. The right
incentives can drive funding into adoption of the technology: people being
able to profit from other people using the technology is what brings funding,
promotion, and adoption.

------
mitjak
I don't see centralization of a federated solution as a problem. If the
popular service becomes for whatever reason unappealing, the solution allows
for another provider to step in.

~~~
_greim_
You can maybe ratchet your way out of this POV, like so:

Would there ever be a centralized provider without there being an incentive
for it? Would such an incentive ever exist without an accompanying incentive
to leverage your dominance to keep competing providers out?

~~~
mitjak
As in the case of Google Talk/Meet?

------
jchanimal
The answer is for the server to be your phone.

~~~
ocdtrekkie
What happens when you lose your phone? Something you take out and about and
that someone can easily steal or can easily be damaged or lost is not a good
server.

~~~
jacquesm
Yes, but you should have a backup anyway. The bigger problem will be
connectivity and bandwidth costs in case one of your posts gets hit by a site
like HN.

------
munificent
I think one of the fundamental problems with even federated systems is that
your identity is tied to your server. If I decide to leave mstdn.io in favor
of some other Mastodon server, as far as I know, I lose all of my followers.

Even email has this problem. It's virtually impossible to change email
providers unless you happen to own your own domain that you use for email
instead of "@gmail.com" or "@yahoo.com" like most people do. Having your own
domain works, but then you have the burden of dealing with the server that
responds to that domain.

Strangely, one system _has_ solved this, and they solved it decades after
initial deployment: phones. Used to be, if you moved or switched phone
provider, you had to get a new phone number. These days, you can "keep your
phone number" when you switch providers.

The technology behind that is called Local Number Portability and is managed
by the Number Portability Administration Center [1]. That's a private
organization, but I think it has some federal obligations for fairness [2].
That sounds a little similar to how domain name registration and ISBNs are
managed [3].

There's probably a lesson here. Maybe part of the solution for federation and
privacy is a single authoritative registrar for root identifiers (i.e. the
"@blah" part of your identity). And then all of the federated systems sit on
top of that. The individual user owns that identifier instead of the server.

I think you need _some_ kind of indirection like this, so that a user's
identity isn't directly bound to the federated server they happen to be using
today: basically a mutable map that users control, mapping a logical identity
to the current server they are using. So, if your, say, Mastodon ID was
"munificent@muni_root", Mastodon would ask muni_root, "What's the current
Mastodon server ID for user munificent?" If I move from one Mastodon server to
another, I just update that one record and everything keeps working.
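The mutable map could be as small as this sketch. All names here (`muni_root`, the `.example` hosts) are hypothetical, and a real registry would need to authenticate updates, e.g. by requiring them to be signed by the identity's owner:

```python
# Hypothetical root-identifier registry: root ID -> current server.
registry: dict[str, str] = {}

def register(root_id: str, server: str) -> None:
    # In a real system, this update would be signed by the ID's owner.
    registry[root_id] = server

def resolve(handle: str) -> str:
    """Map a stable handle like 'munificent@muni_root' to the user's
    address on whatever server currently hosts them."""
    user, root_id = handle.split("@")
    return user + "@" + registry[root_id]

register("muni_root", "mstdn.example")
assert resolve("munificent@muni_root") == "munificent@mstdn.example"

# Moving servers is one record update; the stable handle that
# followers know never changes.
register("muni_root", "other.example")
assert resolve("munificent@muni_root") == "munificent@other.example"
```

The key design point is that followers store only the stable handle, so the server binding can change without anyone else doing anything.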

We could, I suppose, use domain names for this. But in practice (1) that space
is already getting used up and (2) the usability is low and the overhead and
costs high because that's not what it was designed for.

[1]: [https://www.npac.com/number-portability/how-lnp-works](https://www.npac.com/number-portability/how-lnp-works)

[2]: [https://www.npac.com/the-npac/about/neutrality](https://www.npac.com/the-npac/about/neutrality)

[3]: [https://en.wikipedia.org/wiki/International_Standard_Book_Number#How_ISBNs_are_issued](https://en.wikipedia.org/wiki/International_Standard_Book_Number#How_ISBNs_are_issued)

