
Cjdns v20 Release - cjd
https://cryptpad.fr/pad/#/1/view/XnDofWIIasrwcpgQUcFWKg/Vh1pZR0tVZgUT2I9Lec4coqTdn0mwRuA+lWH5klSSfw/
======
woah
Most of the mesh networking / routing protocol community has always viewed
CJDNS with great puzzlement. What is it? You're not going to find out from the
community around it, who invariably respond to technical questions with lots
of handwaving about how nothing else is "decentralized enough". You're not
really going to get a very clear picture from the whitepaper on Github, which
has a huge amount of detail on a cute encoding scheme and some handrolled
crypto, but no real overview of any of the theory of the system.

It's basically a network of VPN tunnels. They call the network "hyperboria"
and, as an overlay network, it runs on top of the internet. It doesn't provide
anything that https doesn't already provide, but the community insists that it
is a "mesh network" that will replace the internet. The basis for this claim
is the fact that one could conceivably run one of the tunnels over an ethernet
link. But today, as far as I can tell, it's a bunch of dudes with regular ISP
connections looking at websites over a VPN. A very different thing than the
community mesh networks actually providing internet access in places like
Athens, Catalonia, and Berlin.

The one interesting thing about CJDNS was the DHT routing technique that they
just got rid of. It could drastically cut down on the amount of memory used
for routing information by each node. In theory, you could route to any
destination in the world on a completely flat network. This is in contrast to
the system today, where routers know about every IP reachable on their subnet
and rely on other routers to deal with traffic bound for other subnets. Some
previous work [1][2] (unfortunately not cited by CJDNS in their paper) can
give you a much clearer and more theoretically solid idea of how this works.
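The flat-routing idea above can be illustrated with a toy sketch. To be clear, this is not CJDNS's actual protocol; it is a generic, hypothetical illustration of DHT-style greedy routing under a Kademlia-like XOR metric, where every node keeps only a small neighbor table yet a packet can usually find its way to any destination on a flat address space:

```python
# Hypothetical sketch of DHT-style greedy routing (NOT CJDNS's actual
# algorithm). Each node knows only its own neighbors; a packet is
# forwarded to whichever neighbor is "closest" to the destination
# under the Kademlia-style XOR metric.

def xor_distance(a: int, b: int) -> int:
    """XOR metric between two node IDs, as used in Kademlia."""
    return a ^ b

def route(topology: dict[int, list[int]], src: int, dst: int) -> list[int]:
    """Greedily forward from src to dst. Each hop consults only its own
    neighbor table, so no node needs a global routing table."""
    path = [src]
    current = src
    while current != dst:
        # Forward to the neighbor closest to the destination.
        best = min(topology[current], key=lambda n: xor_distance(n, dst))
        if xor_distance(best, dst) >= xor_distance(current, dst):
            # Greedy routing can dead-end in a local minimum.
            raise RuntimeError("greedy routing stuck at a local minimum")
        path.append(best)
        current = best
    return path

# A tiny 8-node ring where each node also links to node_id ^ 4,
# giving every node just 3 neighbors out of 8 total nodes.
topology = {i: [(i - 1) % 8, (i + 1) % 8, i ^ 4] for i in range(8)}
print(route(topology, src=0, dst=7))  # → [0, 7]
```

Note that greedy forwarding like this is not guaranteed to pick the shortest path, and can even get stuck, which is exactly the class of problem the optimality complaints below are about.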

I believe that if somebody is ever able to make DHT routing performant, it
will be a huge breakthrough. I guess CJDNS will not be that project. AFAIK the
DHT routing protocol that CJDNS just replaced was not able to find optimal
routes. Your packets would get to their destination somehow, but it's very
unlikely that they would get there on the shortest path. Sounds like CJDNS has
given up on trying to improve it, and is now purely a hobbyist VPN network
with centralized control.

I highly recommend you read the papers below, DHT routing is actually very
interesting. Hopefully a newer, more theoretically rigorous project to achieve
this will emerge now that CJDNS has dropped the concept.

[1] http://os.itec.kit.edu/downloads/publ_2006_fuhrmann-ua_pushing-chord.pdf

[2] https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/virtualring.pdf

~~~
neilalexander
> AFAIK the DHT routing protocol that CJDNS just replaced was not able to find
> optimal routes. Your packets would get to their destination somehow, but
> it's very unlikely that they would get there on the shortest path. Sounds
> like CJDNS has given up on trying to improve it, and is now purely a
> hobbyist VPN network with centralized control.

It's not really just about optimal routes. A pretty major problem with the DHT
is that it is very chatty, and even edge nodes that might like to be "left
alone" are often included in the conversation. This isn't ideal for any kind
of low-power or mobile device. Plus DHT convergence is incredibly slow - not
great for mobile devices or jumping between networks either. It actually turns
out that sometimes a bit of centralisation drastically improves efficiency.

------
DonbunEf7
This still feels like wandering in the wilderness compared to stuff like
batman-adv or tinc, both of which are very precise and succinct about network
and protocol design.

Has the cryptography been audited yet? I seem to remember that that was
something big coming up Real Soon Now.

------
mike-cardwell
Is there a Debian apt repo for cjdns yet? I used cjdns for a while, but it was
annoying having a separate procedure for updating it when everything else on
my system just required an apt-get dist-upgrade, so I stopped using it.

------
evgen
It is always amusing when various "decentralise the X" efforts learn the same
hard lessons over and over, namely that when compared to centralised
equivalents, decentralisation is more complicated, more difficult to debug and
verify for correctness, and always more expensive.

~~~
jstanley
So what's your answer? Hand over all control to the Microsofts, Apples, and
Googles of the world? Become a cog in their machine, just because freeing
ourselves from their machine is hard? No thanks.

People probably laughed at the efforts of those trying to overthrow monarchs
and dictators, and instate democracy. And that was hard too, but it doesn't
mean it wasn't worthwhile.

~~~
ChristianBundy
Decentralization is futile*

* Except in all the places where it's succeeded and vastly improved the world.

~~~
evgen
Can you name an example where complete decentralisation succeeded? In most
cases, fully decentralised systems were developed for a small, closed network
of agents, and when they failed to scale well the architects were forced
to introduce mid-tier centralisation points to assist coordination and make
the system more efficient.

~~~
jstanley
Decentralisation isn't a binary succeed/fail, it's a direction to head
towards. Increasing decentralisation incrementally is nearly always a win.

The web, git, bitcoin, email. They are all incremental improvements on more-
centralised predecessors.

~~~
evgen
It is a continuum, but it is also the case that the most efficient sweet spot
for reliability and cost-effectiveness moves along this continuum as you
scale up.

The web was very decentralized when it started, but it was impossible to find
anything, so you would pass along lists of interesting URLs via email lists,
usenet postings, etc. Eventually we got catalogues like Yahoo and later search
engines like Altavista and Google. The latter were a huge step towards
centralization of the web, and we have not really gotten around this problem.

Email was initially very decentralized (I am talking things like UUCP, Bitnet
mail, Fido, etc.) but message routing was a major pain, as anyone who remembers
UUCP bangpaths can tell you. SMTP was a bit of a sidestep on this path, making
addressing easier by moving somewhat in a decentralized direction with direct
TCP delivery (and then a half-step back with MX records and the centralization
of campus-wide email systems). While email was decentralized it was also open
to parasites, and we fought the good fight for a while, but we are now at a
point where more than half of all email in the US runs through three
organizations, because they have the resources and expertise to deliver a good
product and control spam -- not exactly decentralized anymore, is it?

Git? Yeah, for your small repo. Now try to make that repo open to the rest of
the internet and participate in the global source web. Oh yeah, up to github
or bitbucket you go. Once again we centralize to scale up and to solve the
discoverability problem. Outside of a local context git is almost completely
centralized.

Bitcoin? I would point out how only a few mining consortiums control the bulk
of the hashing power, but we are now reaching the fish-in-a-barrel stage.

If you examine all of these 'improvements', I think you will see that they
became more centralized as they scaled up, in order to solve problems with
efficiency, discoverability, and complexity, and to control parasites and
increase the cost for attackers. As I stated initially, decentralization for
its own sake is almost always a sure sign of an amateur who does not
understand the math of network relationships, or who has no experience in the
field and is itching to make a dash through the minefield without noticing all
the dead bodies ahead or bothering to learn from their lessons.

