
Decentralization: I Want to Believe - api
http://adamierymenko.com/decentralization-i-want-to-believe/
======
PeterisP
I think that the concept of "Zero Knowledge Centralization" is the key novelty
of this article.

If the (sufficiently small) centralized parts can be made so that they're
oblivious to the content and can't discriminate it; if their only practical
choice is to work or not work; then most of the disadvantages of centralized
systems go away.

Sure, you might disable such a system by destroying the centralized nodes; but
it does prevent using them for the much more likely risks such as censorship
or MITM attacks for eavesdropping or impersonation.

~~~
nly
If you're willing to accept some zero-knowledge centralization, then the
feature set of things like Bitcoin begins to look very weak. Bitcoin spends so
much energy solving the double-spend problem that other desirable aspects of a
monetary system (transacting offline, fast transactions, provisioning for
lines of credit, anonymity) are largely not addressed.

Stefan Brands described a digital cash system using zero-knowledge signatures
more than 20 years ago that provides far greater payer and payee anonymity
than the blockchain, and is fully distributed with no requirement for full-
time network connectivity to engage in trade. The only real downside is it
requires a traditional bank at the backend to act as an honest reserve.

[http://groups.csail.mit.edu/mac/classes/6.805/articles/money...](http://groups.csail.mit.edu/mac/classes/6.805/articles/money/nsamint/nsamint.htm)
\- halfway down, under the heading "PROTOCOL 5", for a high-level description
of its operation.
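
For intuition, here is a toy sketch of Chaum-style RSA blind signatures, which is the general mechanism behind that kind of payer anonymity: the bank signs a coin without ever seeing it. This is only an illustration with an insecure toy key; Brands' actual scheme uses restrictive blind signatures over discrete logs, not plain RSA.

    # Toy Chaumian blind signature -- illustration only, NOT Brands' protocol.
    import hashlib
    import math
    import secrets

    # Hypothetical "bank" RSA key (tiny primes purely for illustration).
    p, q = 1009, 1013
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))        # bank's private signing exponent

    def H(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # 1. Payer blinds the coin's serial number before sending it to the bank.
    coin_serial = secrets.token_bytes(16)
    m = H(coin_serial)
    r = secrets.randbelow(n - 2) + 2          # blinding factor
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    blinded = (m * pow(r, e, n)) % n

    # 2. Bank signs the blinded value; it never sees m or the serial number.
    blind_sig = pow(blinded, d, n)

    # 3. Payer unblinds: s is a valid bank signature on m.
    s = (blind_sig * pow(r, -1, n)) % n

    # 4. Any payee can verify the bank's signature without the bank being able
    #    to link the coin back to the withdrawal.
    assert pow(s, e, n) == m
    print("coin verifies; bank cannot link it to the withdrawal")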

Many people don't realise that the only substantial benefit unique to Bitcoin's
blockchain architecture is the lack of central issuance by a reserve bank.

~~~
Cyther606
It's only when you _badly_ want to forego the need for centralized trust that
decentralized solutions begin to make sense. There's a large cost to
decentralization, which is why Bitcoin isn't nearly as interesting as a mere
payments innovation, vs. a distributed economic system that operates outside
the purview of the State. Calling Bitcoin a "payments innovation", while
technically true, completely glosses over its fundamental value. [1]

[1] [http://cointelegraph.com/news/112183/bitcoins-political-neut...](http://cointelegraph.com/news/112183/bitcoins-political-neutrality-is-a-myth-amir-taaki-interview)

~~~
api
The other day I wondered if Bitcoin might not be considered a kind of AI. The
fundamental value of Bitcoin is in the fact that the block chain is itself a
trustable third party. Does that make it a citizen? A nation state?

Still... I wonder how Bitcoin would stand up to a serious attack. Imagine the
Chinese government wanted to kill it. They could build 51% worth of ASICs in
secret and spring them on the network all at once, taking over the network and
maybe just wrecking it. It might cost them so much that it would not be
economically beneficial, but if it were done for political reasons they might
not care about actually profiting from it. It would be a military attack.

Of course flying a plane into the Federal Reserve headquarters is also
possible. At some point you have to accept that any system can be attacked.
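
For a rough sense of scale, the standard back-of-envelope from the Bitcoin whitepaper (section 11) gives the probability that an attacker holding a fraction q of the hash power ever catches up from z blocks behind. Once q reaches 50% the answer is 1, which is what makes the scenario above a kill-it attack rather than a profit attack.

    import math

    def attacker_success(q: float, z: int) -> float:
        """Probability an attacker with hash-power share q ever erases a
        transaction buried under z confirmations (Nakamoto, section 11)."""
        p = 1.0 - q
        if q >= p:
            return 1.0      # a majority attacker always catches up eventually
        lam = z * (q / p)
        return 1.0 - sum(
            (math.exp(-lam) * lam ** k / math.factorial(k)) * (1 - (q / p) ** (z - k))
            for k in range(z + 1)
        )

    for q in (0.10, 0.30, 0.45, 0.51):
        print(f"q={q:.2f}  P(catch up from 6 blocks behind)={attacker_success(q, 6):.4f}")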

~~~
erikpukinskis
The other thing to keep in mind is that 51% doesn't let you destroy the
network, it just lets you spend money twice _maybe_. And if the Chinese
government double spent, or helped other people double spend, it would all be
on the public ledger. There would be a record of the forks, and there would be
people to testify "I sold X to Y and the tree forked".

So it's not so much that Bitcoin would be destroyed, as a new avenue for theft
would open up. Probably temporarily, as other groups would respond by fighting
back with more computational power and probably with ad-hoc blocking.

------
jaekwon
For a while I wondered whether proof-of-stake is possible, or whether all
attempts would suffer from the "nothing at stake" problem. About a year after
my initial pondering, I found a solution after rummaging through old academic
papers.

The problem as you stated feels like it should be solvable. The CAP theorem is
not relevant here -- you're not bound by strong consistency because you don't
need to provide connectivity 100% of the time. There is strong geographical
stickiness in connections, so that looks like something to exploit. Combined
with smart localized buffering of packets near the destination, it looks like
a solution should exist.

The solution may not be compatible with your existing semi-centralized
solution, which may be blinding.

~~~
comex
Yeah. This kind of network also cannot truly be partition tolerant, not if the
partition is between the two nodes that are trying to establish a connection!

The article is fairly vague - what is the risk of Sybil attacks exactly? (I
looked at the FAQ, but it's sparse on details; only glanced at source.)

Is it some kind of spoofing? Since addresses are supposed to be collision
resistant and tied to public keys, it should be possible to discard all
messages regarding a peer that don't come from it. Although 40 bits is
actually way too small...
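
To put numbers on the 40-bit worry: by the birthday bound, an attacker grinding random identities collides with _some_ existing address after roughly a million tries, even though hitting one chosen address still takes around 2^39 attempts on average. (This ignores any extra work factor the address-derivation scheme might impose; it's just the raw math of a 40-bit space.)

    import math

    SPACE = 2 ** 40     # size of a 40-bit address space

    def collision_probability(n: int) -> float:
        """Chance that n uniformly random 40-bit addresses contain a collision."""
        return 1.0 - math.exp(-n * (n - 1) / (2 * SPACE))

    for n in (10_000, 100_000, 1_000_000, 2_000_000):
        print(f"{n:>9} identities -> {collision_probability(n):.1%} chance of some collision")

    print(f"expected tries to hit one chosen address: ~{SPACE // 2:,}")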

Denial of service by pretending to be able to route to a user? Okay, but there
should be some way to punish IP addresses that consistently fail to route
things they claim to be able to... complicated, but not impossible.

Of course, I've only thought about this for 10 minutes, while the author has
studied it for a long time. I'm sure they have more insight about the
problems, but without more details I can only speculate.

~~~
PeterisP
If you allow the nodes themselves to determine which nodes are trustworthy
(instead of a centralized solution), then an example Sybil attack would be a
botnet with a huge amount of identities managing to either vote that
Botnode12345 is totally trustworthy, and that all data should be sent to it,
or that NiceGuyGreg, comex, peterisp and a bunch of other nodes "consistently
fail to route things they claim to be able to". Either way, the attacker can
make the system unusable.
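
A toy illustration of why identity-counting reputation falls over: if a verdict is decided by one-identity-one-vote, whoever can mint identities for free decides every verdict. (The names below are just borrowed from this thread.)

    # 3 honest nodes vouch for a peer; a botnet mints 10,000 identities that don't.
    honest_votes = {"NiceGuyGreg": True, "comex": True, "peterisp": True}
    sybil_votes = {f"botnode{i}": False for i in range(10_000)}

    votes = {**honest_votes, **sybil_votes}
    considered_reliable = sum(votes.values()) > len(votes) / 2
    print("network considers the peer reliable:", considered_reliable)   # False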

------
Sami_Lehtinen
It has been seen over and over again that people don't want and don't care
about decentralized systems. A major problem is that decentralized systems are
basically mobile-hostile. Some companies have used these solutions to limit
the burden on their servers, pushing the burden onto clients, which are then
unhappy about it. Clients can consume a lot of CPU time, memory, disk space
and disk access, cause a lot of network traffic, and potentially be used to
generate DDoS attacks or malicious traffic, etc. All are great reasons not to
use decentralized solutions.

Zero configuration is also basically impossible because you have to bootstrap
the network somehow. Fully decentralised solutions still require bootstrap
information, which is unfortunately hard enough for many people that it works
as an effective show stopper.
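
Even the most decentralized clients ship something like the sketch below: a baked-in seed list plus a cache of previously seen peers. The hostnames and paths here are made up for illustration.

    import json
    import pathlib
    import random

    SEED_PEERS = [                        # hardcoded into the release
        ("seed1.example.net", 9993),
        ("seed2.example.net", 9993),
    ]
    PEER_CACHE = pathlib.Path("~/.example-p2p/peers.json").expanduser()

    def bootstrap_candidates():
        """Prefer peers we've already met; fall back to the hardcoded seeds."""
        known = []
        if PEER_CACHE.exists():
            known = [tuple(p) for p in json.loads(PEER_CACHE.read_text())]
        random.shuffle(known)
        return known + SEED_PEERS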

The last nail in the coffin is that most people really do not care about
security at all. The user base is, after all, just a small bunch of delusional
geeks.

Otherwise something like RetroShare would be widely used.

I've been considering and wondering about these same questions for several
years. Btw, for Bitcoin-style messaging, check out Bitmessage.

~~~
pknight
You make a lot of cynical, flawed assumptions. People do care about security,
and that care increases with the degree of their awareness. But whether people
care about security or decentralized implementations is irrelevant. People may
not necessarily care how something is implemented, but that doesn't mean the
implementation is any less important. That's the developers' job to fix.

The challenge remains to create something that is simple and convenient to use
and is at the very least on par with popular alternatives. Nothing more than
that. Counting all the ways a decentralized implementation is supposedly going
to struggle in principle, without mentioning the longer list of problems with
centralized systems, is excessively negative and cynical, and for what
purpose?

~~~
Sami_Lehtinen
I was personally quite sure that trackers and other download sites would be
replaced with distributed systems. Instead sites like TPB and others continue
to use centralized systems. I'm of course pro distributed & anonymous systems;
that's why I've been studying them for years. I'm kind of disappointed at how
little they are used, if at all.

When the TPB guys said they would make something cool, I naturally assumed
they would release a fully decentralized, pseudonymous, anonymising solution
to replace BitTorrent, which is inherently dangerous because it leaks so much
metadata and reveals to the whole world what you're downloading. Instead they
released something (technically) lame, the PirateBrowser, which just allows
accessing their site via Tor (basically working as a proxy) even if it's
blocked locally.

I really do like the Freenet & GNUnet designs, because relays store and cache
data, quickly creating a lot of sources for popular data. Many systems have
serious flaws; Bitmessage is not scalable with its current design. Most other
fully decentralized systems suffer from flooding, Sybil attacks and metadata
leaks. I personally played a little with Bitmessage, creating over 100k fake
peers just for fun, and it worked beautifully, flooding the network with noise
and generating junk control traffic. Because it's a distributed solution, up
to 10k junk messages were stored on each peer, wasting disk space as well. The
decentralized design also makes nodes running the system vulnerable unless
additional steps are taken to limit resource consumption. And once resources
have to be limited, it becomes a real question what should be limited and how
that affects the network.

Like the original article says, it's really hard to come up with a solution
that isn't bad in some way.

As mobile devices become more and more common, the fact is that it's really
hard to beat a centralized solution. Of course the system can be partially
distributed, with mobile clients connecting to a single spot and server hubs
doing the routing and data storage. But basically that's no different from
email, DNS and the web. So no news here either. After all, email isn't a bad
solution at all: it's secure (if configured correctly) and it's distributed. -
Yes, I'm one of those running their own email servers, as are my friends.

The final question is how to protect metadata and hide communication patterns.
As I mentioned in my post about Bleep, it didn't impress me at all.
[http://www.sami-lehtinen.net/blog/bittorrent-bleep-my-analys...](http://www.sami-lehtinen.net/blog/bittorrent-bleep-my-analysis)

I'm not attacking the original article's author. He has clearly done a lot of
work, considered the options, and written clearly and well about it. I'm just
thinking through these issues in purely technical terms.

I'm happy to hear comments. I'm just cynical because distributed systems
didn't turn out to be as popular as I would have expected and wanted.

What would I like to see? A perfect, fully decentralized P2P network running
as an HTML5 app that wouldn't require installation. That's something I would
rate as pretty cool right now.

~~~
pessimizer
Didn't TPB move to all magnet links years ago? Finding peers through DHT is
certainly not anonymous, but it's decentralized.
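
For reference, a magnet link carries little more than the torrent's infohash; peers for that hash are then looked up in the Kademlia-based DHT instead of being requested from a central tracker. A tiny illustration (the hash below is a made-up placeholder):

    from urllib.parse import urlparse, parse_qs

    magnet = "magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567&dn=example"
    params = parse_qs(urlparse(magnet).query)
    infohash = params["xt"][0].removeprefix("urn:btih:")
    print(infohash)   # the key a client asks the DHT for in its peer lookups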

~~~
api
BitTorrent's Kademlia-based DHT is one of the only huge-scale, totally
decentralized systems in use today.

I wonder if it could stand up to a big Sybil attack, though? Most of the black
hats who could do it don't, because they all watch illegal movies and have no
motive to attack that network. I would bet you a donut the RIAA and MPAA have
talked about it, but actually doing it could expose them to civil and even
criminal penalties, since it would technically be computer fraud and abuse.

~~~
Sami_Lehtinen
Actually, I do think fake peer injection is used. Sometimes a fresh new
torrent will show something like 80k seeds and 100k downloaders, yet you can't
connect to almost any peers. It's exactly the attack I used against
Bitmessage, and it was fun to perform with eDonkey and eMule servers; it
worked nicely even before DHT-based systems. Basically, you could take down
smaller sites by injecting a site's IP and port into the peer-to-peer network,
because then all nodes looking for peers would try to connect to that site.
Afaik, this has been used to create DDoS situations in the past, especially if
the client tries to connect to hosts multiple times and doesn't immediately
flag potential peers it can't connect to. But this is all very basic stuff.

------
walterbell
Can you compare ZeroTier with Skype's P2P design?

Edit: The 1400-page "Handbook of Peer to Peer Networking" (2010) has a good
list of relevant papers. Even though the book itself is expensive, many of the
papers are available elsewhere.

[http://link.springer.com/book/10.1007/978-0-387-09751-0](http://link.springer.com/book/10.1007/978-0-387-09751-0)

~~~
api
It's similar to the original pre-Microsoft design in many ways, but ZT is
actually less asymmetrical in that client and "server" run the same code. A
supernode is just a regular node blessed as such. (And at a fixed address with
a lot of bandwidth.)

Supernodes are important because they permit instant-on operation, rapid
lookups, and almost instant NAT traversal. They also bridge sparse multicast
groups as described here.

[https://github.com/zerotier/ZeroTierOne/wiki/Multicast-Algor...](https://github.com/zerotier/ZeroTierOne/wiki/Multicast-Algorithm-Notes)

Both Skype and Spotify have backed away from p2p. In Skype's case I suspect
PRISM. I don't know why in the case of Spotify. That seems like an ideal
application. Since a file is not a moving target, a lot of the problems in
distributed databases disappear. That's why BitTorrent magnet links can work
pretty well, though I bet a determined attacker could Sybil the Kad DHT to
death.

~~~
walterbell
Would there be any benefit in allowing users to optionally choose supernodes,
e.g. providing ways for supernodes to disclose identity information or even to
run a business the way VPN/proxy providers do today?

In other words, if some nodes have to be centralized for technical reasons,
apply a decentralized, web-of-trust, out-of-band policy/reputation approach to
governing and auditing the identity of the supernode operators.

E.g. Supernode N is in City A, owned by Person B, regulated by Country C,
packet-inspected by ISP D, and is vouched for by the following list of DevOps
infrastructure auditors.
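
Something like the following purely hypothetical descriptor is the shape of what that out-of-band disclosure could look like; none of these fields exist in ZeroTier today, and all the values are invented.

    supernode_descriptor = {
        "node_id": "89e92ceee5",                  # made-up 40-bit address
        "operator": "Example Hosting Oy",         # Person/Company B
        "jurisdiction": "FI",                     # Country C
        "city": "Helsinki",                       # City A
        "isp": "Example ISP",                     # ISP D
        "transparency_report": "https://example.org/supernodes/89e92ceee5",
        "vouchers": [
            # each entry: (auditor identity, detached signature over the fields above)
            ("auditor-key-1", "base64-signature-placeholder"),
            ("auditor-key-2", "base64-signature-placeholder"),
        ],
    }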

~~~
api
It would fragment the network, almost like NAT. Instead of just being able to
link up by address, I would have to know what trust choices you'd made.

A federated model or some kind of expanded hierarchy is possible but it would
still have a center or at least a certificate authority. Otherwise you get
Sybil.

~~~
hamburglar
This is a new topic for me, so apologies if this is an obvious notion, but
what if one of the properties of a super-peer is a certain level of
transparency and auditability so that any peer can verify its operation? That
is, the super-peer contributes bandwidth and reliability because those are the
resources unique to it, but the decision-making aspects and operational state
of being a super-peer could be out in the open.

~~~
PeterisP
You can't verify the 'decision-making aspects and operational state' of a
human or a remote server; they can fake anything that you can check. Crypto
tech allows us to verify possession of some information or token, or a proof
of work, but that's it.

Either the supernode _can't_ do anything bad even if it's malicious (as with
bitcoin miners) and you don't need the transparency, or it _can_ do something
bad, and then you can't audit it remotely; you need trust.

If "something bad" is altering data, then if you can detect if the data is
'proper' (such as bitcoin transactions), then a node simply can't fake data;
but if you can't detect it then you need to trust the node. The same for
reading data, if it's cryptographically impossible for the node to read your
messages, then you don't need to audit that node; but if it's possible, then
you can't be really sure unless (and probably even if) the node is under your
full control.

[edit] Ahh, and if by 'verify' we mean not 'automatic verification that the
system can do' but 'human verification where I check that Bob seems to be a
nice guy, and thus his node _can't_ be evil', then we're back to the original
post: either a centralized certificate authority chain, or Sybil problems.

~~~
walterbell
> You can't verify the 'decision-making aspects and operational state' of a
> human or remote server

If the node is running an operating system that supports a dynamic root of
trust, it's theoretically possible to perform a remote attestation for binary
hashes of all software & config running on the remote node.

Not suggesting that is easy, but it can be done with the right hardware (TXT,
TPM) and Linux software stack.
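
Roughly, the flow looks like the sketch below: the verifier sends a fresh nonce, the node returns its measured PCR digests plus a signature over (nonce, digests) from its attestation key, and the verifier checks the signature and compares the digests against a known-good list. This is only the conceptual shape, not the real TPM2 quote format; verify_signature() and node.quote() are stand-ins.

    import hashlib
    import os

    EXPECTED_PCRS = {0: "a1b2-placeholder", 7: "c3d4-placeholder"}  # known-good measurements

    def verify_signature(attestation_pubkey, message, signature) -> bool:
        raise NotImplementedError("stand-in for real TPM quote signature verification")

    def attest(node, attestation_pubkey) -> bool:
        nonce = os.urandom(20)                   # prevents replay of old quotes
        pcrs, signature = node.quote(nonce)      # hypothetical node API
        digest = hashlib.sha256(repr(sorted(pcrs.items())).encode()).digest()
        if not verify_signature(attestation_pubkey, nonce + digest, signature):
            return False
        return all(pcrs.get(i) == v for i, v in EXPECTED_PCRS.items())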

~~~
PeterisP
Nope, that's just delegating the problem to the trust chain - it assumes that
the TPM does what it should, and that no one in the chain ever signs anything
wrong. That assumption is obviously wrong: there have been many public
incidents showing that the core signing keys of pretty much any 'trusted
organization' can be used by three-letter agencies to sign malicious code as,
say, valid Windows updates, and that both hardware and OS should be assumed to
contain backdoors - especially in the TPM chain.

In addition, having a true and valid hash of my OS, software and config
doesn't give you any real assurance - a security hole in that software or OS
can easily allow arbitrary code execution, so the "proper configuration" can
behave in whatever way it likes despite being verified. That's what I meant by
saying that you can't be sure it's not doing anything bad even if you
completely control the node: if the underlying math ensures that reading or
altering X isn't possible, then it's okay; otherwise you have to hope/trust
that it doesn't. That's what is meant by zero-knowledge systems, as opposed to
'secure' systems that simply ignore knowledge they shouldn't look at.

~~~
walterbell
> a security hole in that software or OS can easily give an ability to execute
> arbitrary code

> if the underlying math ensures that reading or altering X isn't possible,
> then it's okay

If an attacker has access to a zero-day exploit that can compromise the OS,
could they compromise random generation of numbers and weaken the zero-
knowledge system?

~~~
PeterisP
They can weaken _your_ part of the system by compromising your machine or your
keys, but compromising the centralized node (either by an exploit or a
subpoena) shouldn't change anything if that system is zero-knowledge; a
compromised node can do no more than an actively malicious node.
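
As a concrete example of how weak randomness wrecks your part of the system: if an ECDSA signer's RNG ever repeats the nonce k across two signatures, anyone holding both signatures can recover the private key with a few modular operations. This is the standard textbook attack (it is famously what broke the PS3's signing key), shown here as plain modular arithmetic over the curve order n.

    def recover_key_from_nonce_reuse(n, r, s1, z1, s2, z2):
        """Given two ECDSA signatures (r, s1) on hash z1 and (r, s2) on hash z2
        that share the same r (i.e. the same nonce k), recover k and the key d."""
        k = (z1 - z2) * pow(s1 - s2, -1, n) % n    # k = (z1 - z2) / (s1 - s2) mod n
        d = (s1 * k - z1) * pow(r, -1, n) % n      # d = (s1*k - z1) / r mod n
        return k, d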

------
DanielBMarkham
Are we thinking of the problem incorrectly? Math may actually be our enemy
here. (Egads!)

So yes, for a fixed problem domain, the information traversing it requires
topological constraints. In other words, the fact that the data is structured
requires the network to be traversable.

So how about this? We remove the structured requirement from the data. My
device "knows" several other devices. Each device has some biometric
identifying data to show who is using it currently. Those devices send my
device unstructured data and app code from time to time. Each device decides
who and what to share things with and whether and how much to trust incoming
material. The data and the app are linked through a cataloging system provided
by the sender.

The trick is this: each device really only makes one decision: how much do I
trust the other devices? If trust is established, the app and code can
continue to propagate. So we're sort of making viruses a good thing!

Now your data has no structure, so there's nothing to coordinate. If some of
the initial apps helped folks communicate, then you could communicate to those
you trusted. I could imagine some "master" data types that all devices could
use: URL, IPv6, Text, HTML, JSON, XML, email, IM, etc. You could even have
some master device types.

The beauty of this is that you start building security at the same time you're
doing anything else -- because if you don't, somebody's going to own your
machine, and rather quickly. Each device has no knowledge of the whole, but it
has to quickly evolve a very complex and nuanced understanding of what it's
willing to store and do for other devices.

I don't think the math would work on this at all, but I think it's a workable
system - much the same as a cocktail party is a terrible place to interview
people, yet lots of folks end up with jobs because of conversations held
around cocktail parties. (Do people still have cocktail parties?)

------
retsevrah
Interesting article, and something I have given thought to several times as
well. I agree that a fully decentralized approach is a more fundamental and
harder problem than is often assumed, and one that is still actively
researched in academia.

A few interesting efforts that I rarely see mentioned in this context may add
to your opinion on the matter:

\- the creator of Kademlia, Petar Maymounkov, has been working on a successor
on and off for years. His site offers some context, and in one of his own
comments there are a couple of pointers to his current efforts in this area;
you'll find it from here if you're interested. Updates are a bit scattered
across different media and time :)

\- another field that is not fundamentally different in what it wants to
achieve is 'content-centric networking', which is advocated by PARC and backed
by some of the old-school networking researchers. A Wikipedia page is here,
but I don't really like it:
[http://en.wikipedia.org/wiki/Named_data_networking](http://en.wikipedia.org/wiki/Named_data_networking)

A presentation by Van Jacobson of PARC on the subject can be seen here:
[https://www.youtube.com/watch?v=3zOLrQJ5kbU](https://www.youtube.com/watch?v=3zOLrQJ5kbU)

The talk seems intended for a low-tech audience at first, and I can imagine
staying concentrated is hard here, but imo the talk does deliver.

The problems these two encounter and are trying to solve are of a different
nature than what I see in most projects - which I think often get entangled in
non-fundamental problems and eventually end up in an unrecoverable mess
because the fundamental issues were never addressed, because it's hard and may
currently be impossible... In my opinion, cooperation/adaptation in the lower
OSI layers will be needed to achieve the efficiency, security and
decentralization you mention; single-purpose applications may be easier to get
away with. Whether this will ever happen is maybe more a geopolitical
discussion than a technical one. Complicated stuff, we'll see :)

~~~
retsevrah
Forgot to add the URL for Petar's site; I tried the edit function but it seems
to be broken?

Here it is:
[http://www.maymounkov.org/kademlia](http://www.maymounkov.org/kademlia)

~~~
jaekwon
I second the old school research. If you have the funds, get an ACM membership
or similar.

------
matt_kantor
> There aren't any decentralized alternatives to Facebook, Google, or Twitter
> because nobody's been able to ship one.

This makes it sound like a technical problem, but it's also at least partially
a market problem. It's much more straightforward to squeeze revenue streams
out of centralized walled gardens like these than decentralized equivalents.

~~~
api
True, but this is at least partly balanced by the fact that decentralized
systems cost considerably less to operate. With my own example, I've already
calculated that I could handle over a million concurrent users on a few
hundred a month in hosting costs. If I had to relay all that bandwidth it
might be hundreds of thousands in costs.

A decentralized Facebook would have less revenue, but also negligible hosting
costs.

------
tristanls
The paper quoted as reaching the same intuitions about the CAP-like
limitations ("On the Power of (even a little) Centralization in Distributed
Processing") makes a simplifying assumption that is not valid for
decentralized systems that are... "interesting" (for lack of a better word).

The assumption is (quoting from the Conclusion): "For example, the
transmission delays between the local and central stations are assumed to be
negligible compared to processing times; this may not be true for data centers
that are separated by significant geographic distances." This sounds to me
awfully like an "oracle machine"
([http://en.wikipedia.org/wiki/Oracle_machine](http://en.wikipedia.org/wiki/Oracle_machine)).
It has "more computation" and it can decide instantaneously on where to apply
it. I'm skeptical of generalizing the result from the paper to all distributed
systems, in particular, the ones where latency-to-effort ratio doesn't fit
into the stated assumption.

------
CMCDragonkai
How does this compare with cjdns?

Also have you looked into eventual consistency?

~~~
api
They're different things architecturally. cjdns is cool but as far as I know
could not scale really really huge or survive, say, a Sybil type attack by
someone with NSA or top-tier black hat hacker group expertise. I'm not sure my
system could survive that either, but the problem is I'm not convinced a
centerless mesh _can_ be made that robust.

I know some community mesh nets have used it with some success. Just because
something has limits doesn't mean it can't be very useful for lots of
applications.

~~~
CMCDragonkai
What if the entire network were a VPN where you, the operator, know all the
nodes that are part of it? Would it still be vulnerable to a Sybil attack?

~~~
walterbell
Related: Tinc, an open-source VPN with some P2P properties:
[http://blogs.operationaldynamics.com/andrew/software/researc...](http://blogs.operationaldynamics.com/andrew/software/research/using-tinc-vpn)

------
transfire
Perhaps we need to think about the problem a bit differently. We tend to think
pie-in-the-sky when we think about decentralized computing. But what is it
that we really need? I am happy with all my files being on _my_ computer. But
it would be nice for them to be _backed up_ in a decentralized cloud store,
and accessible from anywhere if need be. And I can accept some latency in the
latter if I don't have access to my own systems. That's all I really need, and
I suspect that's all most of us need. We don't really require the internet to
be our personal hard drive.

------
Joeboy
> There aren't any decentralized alternatives to Facebook,

> Google, or Twitter because nobody's been able to ship one.

Diaspora and identi.ca (/pump.io) are decentralized alternatives to Facebook
and Twitter. Admittedly not in the fully decentralized sense the author is
talking about, but that isn't their problem. People don't use them because
people don't use them.

~~~
dublinben
Some people do use them though, and they actually work.

~~~
lisper
Diaspora doesn't "work" for the definition of "work" that most people use.

Suppose I wanted to join Diaspora. I search for "diaspora" on Google. The
first result is the Wikipedia article, and the second is a link to
joindiaspora.com. Cool, looks promising, but when I click on the link I find
this:

"JoinDiaspora.com Registrations are closed"

But OK, maybe there's hope:

"Diaspora is Decentralized! But don't worry! There are lots of other pods you
can register at. You can also choose to set up your own pod if you'd like."

So I click on "other pods" and I get sent to a naked list of sites. No
explanation, no basis for choosing among them. Many of them in hinky-looking
international domains.

OK, so I go back and click on "Set up your own pod", which takes me back to
joindiaspora.com. At that point I give up.

Diaspora may work for hard-core geeks, but it doesn't work for normal people.

~~~
Joeboy
The "official" diaspora site has always been aggressively self-sabotaging. I
don't know why. If you want a reliable, well maintained pod
[http://diasp.org](http://diasp.org) should do you.

~~~
lisper
"Registrations here are closed"

~~~
mnutt
I'm guessing that language was hard-coded into the landing page and got pushed
to diasp.org in an update, and nobody noticed. It looks like you can still
sign up there by clicking the signup link.

~~~
lisper
OK, but my point remains: a regular person who doesn't hang out on HN is not
going to have a good UX signing up for Diaspora.

------
brianchu
Really interesting. About a year ago I spent some time thinking about the
concept of building semi-decentralized versions of typical web apps with
WebRTC and a series of trusted centralized hubs.

------
emsimot
Could Ethereum perhaps function as a blind idiot God of the internet?

~~~
kordless
You mean an oracle?

------
yarrel
> There aren't any decentralized alternatives to Facebook, Google, or Twitter
> because nobody's been able to ship one.

Untrue.

[http://gnu.io/](http://gnu.io/)

~~~
zaphar
You've just made his point for him. When he says no one has been able to ship
one, he doesn't mean that no one has tried to create one or even launched one.
He means that no one has created one and launched it to more than a small
group of niche users.

[http://gnu.io/](http://gnu.io/) will only ever be used by niche users and
will never get to the size or even the utility of Facebook, Google, and
Twitter. Pretending otherwise is just willful blindness.

~~~
Lennie
I think there are 2 reasons for that:

1\. Building distributed systems is a lot harder.

2\. So deploying/building centralized systems gets to market faster and is
quicker to develop for.

------
EGreg
The OP wrote: _There aren 't any decentralized alternatives to Facebook,
Google, or Twitter because nobody's been able to ship one._

Well, that's what my company's been working to change. We started a little
earlier than the Diaspora project was announced. And we've already designed
the whole platform and built over 95% of it. As for the list, we have slightly
different priorities.

#1 - we are putting this way lower on the list, and consider it unnecessary if
the network is really distributed. Just like Google Plus's "Real Name
Requirement" is unnecessary because if Bill Gates doesn't want me to see his
real name, he doesn't want me to find his account via that name. It's more to
enrich Google than to help people. The current IP system and the DNS /
Certificate Authority system can be a good compromise for this, and notably
they rely on centralized, albeit delegated, mechanisms. One could argue that
acquiring a
domain can be completely decentralized in order to prevent censorship. But
none of that is required for real time point-to-point connectivity, because
with total privacy, you won't find the node you're looking for unless you
follow a path through the network by which it wants to be found. All the nodes
in that path would have to be online as you navigate them, but that's about
it. I'll call this requirement #1a.

#2 - Absolutely. This would probably be shifted to #1 for us. For example, if
I am on NYU's network and I friended a few people, and then I go join
Columbia's network, it should seamlessly reconstruct my social graph there
from the people who are also on Columbia AND want me to know that. Requests
for information published by multiple parties would have to be routed
correctly and responses would have to be combined. And so forth.

#3 - For you this is part of #2, but also assumes #1. Of course, to accomplish
this, two nodes sharing a session would simply need to agree at all times to
use one (or more) other nodes which are currently online, to pick up the
session. If your device goes out of range of your WiFi and switches to some
other network, it still contains the routing information for that node. This
is a bit different in that it doesn't require #1, but only #1a.

#4 - I guess I do not need to worry so much about lower protocol levels, but
basically if you can assume https, then that's enough (although you are at the
mercy of the certificate authorities until TLS starts to use Web of Trust
instead of x.509). Usually, people run clients instead of servers, and they
choose to trust some server to run an application. Presumably this server
would not be blocked by its internet service provider -- in fact it may be a
wifi hotspot in an African village that doesn't even have internet. It would
be a single point of failure, but only for that subnetwork. In a distributed
social platform, one could easily just go use another server hosting the same
app.

#5 - Private, encrypted, secure communication gets tough for some
applications. For example, multi user chat or mental poker where you may not
trust other participants. So until we have zero-trust technology for
everything, people trusting a server of their choice seems to be a good
general solution. And all this communication once again relies first on a
reliable identity for users, apps, servers, etc. which can of course be
achieved by self-signed certificates, but needs to be more seamless across
networks as per #2.

#6 - Everything I've described above can be decentralized in the same sense
the internet is -- smaller networks joining into larger ones. But you don't
really need a central DNS, and you don't really need a centralized identity. I
also assumed you did, until February of this year when I met with Albert
Wenger from Union Square Ventures. Our system was already able to handle
everything as decentralized, but we wanted to have unique IDs for every user
in SOME database. Then we realized that this isn't necessary.

That said, there is one problem with the web's security model that does not
allow apps to be truly decentralized. The problem is that if I sign in using
one app somewhere and then visit another app, the other app doesn't know where
I've signed in or which account I prefer to use. People have attempted to
solve this with proposals like xAuth, which are merely stopgap measures ...
but to date, no one has really been able to make identity totally seamless
because of it.

In short ... I now disagree that #1 should be the biggest priority, because
every node doesn't REALLY need to be able to connect to every other node on
demand, only to nodes it has encountered while using the network. That makes
everything distributable.

As for the CAP theorem, I guess the reason you can't maximize Availability and
Consistency while also having Partition Tolerance is that subnetworks can
disconnect at any time, and if you need consistency you'll have to wait for
them to reconnect. But many problems do not require consistency across the
WHOLE system! That's why we've chosen to extend the web model, where clients
connect to a server they trust, and this can be used to power chatrooms and
other group activities. Partition tolerance within such a subnetwork is only
affected by splits within that subnetwork, not outside of it. And since the
subnetwork consists of very few nodes compared to the whole world, splits
happen rarely, and the rest of the time we can have high Availability while
preserving total Consistency.

------
zackmorris
The first link to IP sums up the misgivings I have with statically typed
languages, that I could never quite put my finger on:

"If you’re using early-binding languages as most people do, rather than late-
binding languages, then you really start getting locked in to stuff that
you’ve already done. You can’t reformulate things that easily." -Alan Kay

The problem for me is not so much that a variable has a static type once it's
been set. It's more that the variable shouldn't even exist in the first place.
The way we concentrate so much on variables in C-based languages is really
kind of absurd, once we stop to consider that they are only temporary holding
sites for the flow of information.

I had never quite made the connection before that this is really the problem
with the internet at large. We're focussing on websites, IP addresses,
servers, etc, when really it's all about the information and the relationships
and transformations that connect it.

The distributed internet that eventually replaces the one we pay for now will
probably work more like a solver: we'll create a request or query of some
kind and the network will derive the solution, which most of the time will
amount to simply fetching the data that matches a hash. It will be interesting
to see what happens when we throw out archaic mechanisms like NAT that reduce
everyone to second-class netizens, and even abandon the idea of data bound to
a device or location, thinking of it instead as a medium, like time or space,
where we can freely exchange ideas without having to depend on central points
of failure or limiting protocols.
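
The "fetch the data that matches a hash" part is just content addressing. A minimal sketch of the idea, with a plain dict standing in for what would really be a DHT spread across many nodes:

    import hashlib

    store: dict[str, bytes] = {}

    def put(data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        store[key] = data
        return key                    # the address *is* the hash of the content

    def get(key: str) -> bytes:
        data = store[key]
        assert hashlib.sha256(data).hexdigest() == key   # self-verifying fetch
        return data

    addr = put(b"hello, decentralized world")
    print(addr, get(addr))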

What kind of haunts me though is the leverage that this will give software.
Imagine how much more powerful the internet is compared to, say, a library.
Now imagine if instead of having one computer, with all the limitations we
can’t even perceive anymore because we’ve been stuck in these limiting
paradigms, we each have thousands or millions at our disposal. I picture it a
bit like wolframalpha.com except with natural language processing, so you
could ask it anything you wanted, and even if it couldn’t find the answer, it
would do its best to reconstruct the missing pieces and get you close to your
goal anyway. I just feel in my gut somehow that this will be part of the basis
of artificial intelligence, and this general way of storing, retrieving and
transforming information by hash, diff, or schema-less relationship and
probability works from the lowest level logic gates and languages like Go up
to big data APIs like Hadoop. Like maybe what we think of as intelligence is a
subset of a universal problem solver. I guess that’s why I’m so interested in
a free, distributed internet. It’s like right now what we think of as the
internet is really the inside of a cage, and we can see out through the bars
and imagine an ecosystem where countless ideas are evolving untamed.

I’m not sure if I’ve said anything at all here, but hey, it’s Saturday so
might as well post it.

------
floatboth
No FreeBSD support in ZeroTier One? :(

~~~
api
Todo. I'm doing some refactoring to make it easier.

